IDEAS home Printed from https://ideas.repec.org/p/chf/rpseri/rp2279.html

Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning

Authors

Listed:
  • Paul Rolland

    (Ecole Polytechnique Fédérale de Lausanne)

  • Luca Viano

    (Ecole Polytechnique Fédérale de Lausanne)

  • Norman Schürhoff

    (Swiss Finance Institute - HEC Lausanne)

  • Boris Nikolov

    (University of Lausanne; Swiss Finance Institute; European Corporate Governance Institute (ECGI))

  • Volkan Cevher

    (Ecole Polytechnique Fédérale de Lausanne)

Abstract

While Reinforcement Learning (RL) trains an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observations of an expert’s behavior. It is well known that, in general, many different reward functions lead to the same optimal policy, and hence IRL is ill-defined. However, [1] showed that, if we observe two or more experts with different discount factors, or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by establishing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is also shown to be necessary. We then extend our result to several other settings: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we only have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions under which data on multiple experts in a given environment allows us to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
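The non-identifiability the abstract refers to can be illustrated with a small numerical sketch (a hypothetical toy MDP, not the paper's actual experiments or its rank condition): potential-based reward shaping tuned to one discount factor preserves the optimal policy under that discount factor, so a single expert cannot distinguish the original reward from the shaped one, while a second expert with a different discount factor generally can. The potential `phi` and all problem sizes below are illustrative choices.

```python
import numpy as np

def optimal_policy(P, r, gamma, iters=2000):
    """Greedy policy from value iteration in a tabular MDP.
    P: transition tensor of shape (A, S, S); r: reward of shape (S, A)."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = r.T + gamma * (P @ V)  # Q[a, s] = r(s, a) + gamma * E[V(s') | s, a]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)        # one action per state

def shape(P, r, phi, gamma):
    """Potential-based shaping: r'(s, a) = r(s, a) + gamma * E[phi(s') | s, a] - phi(s)."""
    return r + gamma * (P @ phi).T - phi[:, None]

rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(A, S))  # random row-stochastic transitions
r = rng.standard_normal((S, A))
phi = rng.standard_normal(S)                # arbitrary shaping potential (illustrative)
g1, g2 = 0.9, 0.5                           # two experts' discount factors

r2 = shape(P, r, phi, g1)                   # shaped with the FIRST discount factor

# An expert discounting at g1 behaves identically under r and r2,
# so observing that expert alone cannot identify the reward ...
assert (optimal_policy(P, r, g1) == optimal_policy(P, r2, g1)).all()

# ... but a second expert discounting at g2 can, in general, tell them apart.
print(optimal_policy(P, r, g2), optimal_policy(P, r2, g2))
```

The equality under `g1` is exact (Ng et al.'s shaping theorem gives Q'* = Q* - phi, leaving the argmax unchanged); the disagreement under `g2` is what observing a second expert exploits.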

Suggested Citation

  • Paul Rolland & Luca Viano & Norman Schürhoff & Boris Nikolov & Volkan Cevher, 2022. "Identifiability and Generalizability from Multiple Experts in Inverse Reinforcement Learning," Swiss Finance Institute Research Paper Series 22-79, Swiss Finance Institute.
  • Handle: RePEc:chf:rpseri:rp2279

    Download full text from publisher

    File URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4251437
    Download Restriction: no



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.