
Basis Function Adaptation in Temporal Difference Reinforcement Learning

Author

Listed:
  • Ishai Menache
  • Shie Mannor
  • Nahum Shimkin

Abstract

Reinforcement Learning (RL) is an approach for solving complex multi-stage decision problems that fall under the general framework of Markov Decision Problems (MDPs), with possibly unknown parameters. Function approximation is essential for problems with a large state space, as it facilitates compact representation and enables generalization. Linear approximation architectures (where the adjustable parameters are the weights of pre-fixed basis functions) have recently gained prominence due to efficient algorithms and convergence guarantees. Nonetheless, an appropriate choice of basis functions is important for the success of the algorithm. In the present paper we examine methods for adapting the basis functions during the learning process in the context of evaluating the value function under a fixed control policy. Using the Bellman approximation error as an optimization criterion, we optimize the weights of the basis functions while simultaneously adapting the (non-linear) basis function parameters. We present two algorithms for this problem. The first uses a gradient-based approach, and the second applies the Cross Entropy method. The performance of the proposed algorithms is evaluated and compared in simulations. Copyright Springer Science + Business Media, Inc. 2005
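
To illustrate the basis-adaptation idea described in the abstract, the sketch below evaluates a fixed policy on a hypothetical toy chain MDP using Gaussian radial basis functions: the linear weights are fit by least squares on the empirical Bellman residual, and the basis parameters (centers and width) are adapted by a simple Cross-Entropy outer loop. The toy environment, parameterization, and all names are assumptions made for illustration only; this is a minimal sketch of the general technique, not the authors' implementation.

    import numpy as np

    # Hypothetical toy chain MDP (for illustration only): states 0..N-1, a fixed
    # policy moves right with probability p, reward 1 when the next state is the
    # terminal state N-1.  We evaluate the value function of this fixed policy.
    N, gamma, p = 20, 0.95, 0.9
    rng = np.random.default_rng(0)

    def sample_transitions(num):
        s = rng.integers(0, N - 1, size=num)              # non-terminal start states
        s_next = np.where(rng.random(num) < p, s + 1, s)  # move right with prob. p
        r = (s_next == N - 1).astype(float)               # reward on reaching the goal
        return s, r, s_next

    def rbf_features(s, centers, width):
        # Gaussian radial basis functions phi_i(s) = exp(-(s - c_i)^2 / (2 width^2))
        return np.exp(-((s[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

    def bellman_error(theta, data):
        # Inner problem: given basis parameters theta = (centers, width), fit the
        # linear weights by least squares on the empirical Bellman residual and
        # return the resulting mean squared Bellman error.
        centers, width = theta[:-1], abs(theta[-1]) + 1e-3
        s, r, s_next = data
        Phi = rbf_features(s, centers, width)
        Phi_next = rbf_features(s_next, centers, width)
        A = Phi - gamma * Phi_next
        w, *_ = np.linalg.lstsq(A, r, rcond=None)
        return float(np.mean((A @ w - r) ** 2))

    # Outer problem: adapt the basis parameters with a Cross-Entropy loop, sampling
    # candidates from a Gaussian and refitting it to the elite (lowest-error) samples.
    data = sample_transitions(5000)
    k = 5                                           # number of basis functions
    mu = np.append(np.linspace(0, N - 1, k), 3.0)   # initial centers and width
    sigma = np.full(k + 1, 3.0)                     # std devs of sampling distribution
    n_samples, n_elite = 100, 10

    for it in range(30):
        candidates = rng.normal(mu, sigma, size=(n_samples, k + 1))
        scores = np.array([bellman_error(c, data) for c in candidates])
        elite = candidates[np.argsort(scores)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # small variance floor
        print(f"iteration {it:2d}: best Bellman error {scores.min():.6f}")

The paper's gradient-based alternative would instead take descent steps on this same criterion with respect to the centers and width, interleaved with updates of the linear weights; the Cross-Entropy variant sketched here simply samples and reselects candidate basis parameters.
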

Suggested Citation

  • Ishai Menache & Shie Mannor & Nahum Shimkin, 2005. "Basis Function Adaptation in Temporal Difference Reinforcement Learning," Annals of Operations Research, Springer, vol. 134(1), pages 215-238, February.
  • Handle: RePEc:spr:annopr:v:134:y:2005:i:1:p:215-238:10.1007/s10479-005-5732-z
    DOI: 10.1007/s10479-005-5732-z

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s10479-005-5732-z
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s10479-005-5732-z?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. G. Alon & D. Kroese & T. Raviv & R. Rubinstein, 2005. "Application of the Cross-Entropy Method to the Buffer Allocation Problem in a Simulation-Based Environment," Annals of Operations Research, Springer, vol. 134(1), pages 137-151, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Arruda, E.F. & Fragoso, M.D. & do Val, J.B.R., 2011. "Approximate dynamic programming via direct search in the space of value function approximations," European Journal of Operational Research, Elsevier, vol. 211(2), pages 343-351, June.
    2. Manuel Castejón-Limas & Joaquín Ordieres-Meré & Ana González-Marcos & Víctor González-Castro, 2011. "Effort estimates through project complexity," Annals of Operations Research, Springer, vol. 186(1), pages 395-406, June.
    3. Dimitri P. Bertsekas & Huizhen Yu, 2012. "Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 37(1), pages 66-94, February.
    4. Rokhforoz, Pegah & Montazeri, Mina & Fink, Olga, 2023. "Safe multi-agent deep reinforcement learning for joint bidding and maintenance scheduling of generation units," Reliability Engineering and System Safety, Elsevier, vol. 232(C).
    5. Prasenjit Karmakar & Shalabh Bhatnagar, 2018. "Two Time-Scale Stochastic Approximation with Controlled Markov Noise and Off-Policy Temporal-Difference Learning," Mathematics of Operations Research, INFORMS, vol. 43(1), pages 130-151, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. K.-P. Hui & N. Bean & M. Kraetzl & Dirk Kroese, 2005. "The Cross-Entropy Method for Network Reliability Estimation," Annals of Operations Research, Springer, vol. 134(1), pages 101-118, February.
    2. Fahimnia, Behnam & Sarkis, Joseph & Eshragh, Ali, 2015. "A tradeoff model for green supply chain planning: A leanness-versus-greenness analysis," Omega, Elsevier, vol. 54(C), pages 173-190.
    3. Sagron, Ruth & Pugatch, Rami, 2021. "Universal distribution of batch completion times and time-cost tradeoff in a production line with arbitrary buffer size," European Journal of Operational Research, Elsevier, vol. 293(3), pages 980-989.
    4. Illana Bendavid & Boaz Golany, 2009. "Setting gates for activities in the stochastic project scheduling problem through the cross entropy methodology," Annals of Operations Research, Springer, vol. 172(1), pages 259-276, November.
    5. Fahimnia, Behnam & Sarkis, Joseph & Choudhary, Alok & Eshragh, Ali, 2015. "Tactical supply chain planning under a carbon tax policy scheme: A case study," International Journal of Production Economics, Elsevier, vol. 164(C), pages 206-215.
    6. Krishna Chepuri & Tito Homem-de-Mello, 2005. "Solving the Vehicle Routing Problem with Stochastic Demands using the Cross-Entropy Method," Annals of Operations Research, Springer, vol. 134(1), pages 153-181, February.
    7. Pieter-Tjerk de Boer & Dirk Kroese & Shie Mannor & Reuven Rubinstein, 2005. "A Tutorial on the Cross-Entropy Method," Annals of Operations Research, Springer, vol. 134(1), pages 19-67, February.
    8. Illana Bendavid & Boaz Golany, 2011. "Setting gates for activities in the stochastic project scheduling problem through the cross entropy methodology," Annals of Operations Research, Springer, vol. 189(1), pages 25-42, September.
    9. Altiparmak, Fulya & Dengiz, Berna, 2009. "A cross entropy approach to design of reliable networks," European Journal of Operational Research, Elsevier, vol. 199(2), pages 542-552, December.
    10. Benham, Tim & Duan, Qibin & Kroese, Dirk P. & Liquet, Benoît, 2017. "CEoptim: Cross-Entropy R Package for Optimization," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 76(i08).
    11. Illana Bendavid & Boaz Golany, 2011. "Predetermined intervals for start times of activities in the stochastic project scheduling problem," Annals of Operations Research, Springer, vol. 186(1), pages 429-442, June.
    12. Douek-Pinkovich, Yifat & Ben-Gal, Irad & Raviv, Tal, 2022. "The stochastic test collection problem: Models, exact and heuristic solution approaches," European Journal of Operational Research, Elsevier, vol. 299(3), pages 945-959.
    13. Ad Ridder, 2005. "Importance Sampling Simulations of Markovian Reliability Systems Using Cross-Entropy," Annals of Operations Research, Springer, vol. 134(1), pages 119-136, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:annopr:v:134:y:2005:i:1:p:215-238:10.1007/s10479-005-5732-z. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.