IDEAS — https://ideas.repec.org/h/spr/seschp/978-3-030-59959-1_8.html

Reinforcement Learning Approach for Dynamic Pricing

In: The Economics of Digital Transformation

Author

Listed:
  • Maksim Balashov

    (PJSC Gazpromneft
    ITMO University)

  • Anton Kiselev

    (PJSC Gazpromneft)

  • Alena Kuryleva

    (PJSC Gazpromneft)

Abstract

With the introduction of digital technologies, it has become easier for customers to compare prices and choose the product that is most profitable for them. This leads to instability of demand, which means that market players need to revise their pricing policies in favor of one that accounts for the producer's resources and the current state of demand. Dynamic pricing appears to be an adequate solution to the problem, as it adapts to customer expectations; moreover, the digitalization of the economy creates unique opportunities for applying this apparatus. The purpose of this study is to evaluate the possibility of applying the concept of dynamic pricing to traditional retail. Within this study, the dynamic pricing problem is to maximize profit from the sale of a specific associated product at an automatic gas station. To solve it, the authors propose using machine learning approaches that adapt to the external environment, one of which is reinforcement learning (RL). In addition, an approach is proposed for restoring the demand surface for subsequent training of the agent.
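The abstract's setup — an agent that repeatedly posts a price, observes sales drawn from an (unknown) demand surface, and learns the profit-maximizing price — can be illustrated with a minimal RL sketch. This is not the chapter's actual method; it is an epsilon-greedy bandit over a discrete price grid, with a hypothetical linear-demand simulator standing in for the restored demand surface (the function names and parameters below are illustrative assumptions, not from the chapter).

```python
import random

def simulate_demand(price):
    """Hypothetical stand-in for the restored demand surface:
    expected units sold decline linearly with price, plus noise."""
    expected = max(0.0, 10.0 - 1.5 * price)
    return max(0, round(random.gauss(expected, 1.0)))

def epsilon_greedy_pricing(prices, rounds=5000, epsilon=0.1, seed=0):
    """Learn the profit-maximizing price by trial and error:
    explore a random price with probability epsilon, otherwise
    exploit the price with the best running average profit."""
    random.seed(seed)
    counts = {p: 0 for p in prices}
    avg_profit = {p: 0.0 for p in prices}
    for _ in range(rounds):
        if random.random() < epsilon:
            p = random.choice(prices)             # explore
        else:
            p = max(prices, key=avg_profit.get)   # exploit
        profit = p * simulate_demand(p)
        # incremental update of the running average profit for price p
        counts[p] += 1
        avg_profit[p] += (profit - avg_profit[p]) / counts[p]
    return max(prices, key=avg_profit.get)

best = epsilon_greedy_pricing([1.0, 2.0, 3.0, 4.0, 5.0])
```

Under this assumed demand model, expected profit p(10 - 1.5p) peaks near p = 10/3, so the agent should settle on a mid-range price; the chapter's contribution is precisely in learning such behavior when the demand surface must first be estimated from real sales data rather than simulated.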

Suggested Citation

  • Maksim Balashov & Anton Kiselev & Alena Kuryleva, 2021. "Reinforcement Learning Approach for Dynamic Pricing," Studies on Entrepreneurship, Structural Change and Industrial Dynamics, in: Tessaleno Devezas & João Leitão & Askar Sarygulov (ed.), The Economics of Digital Transformation, edition 1, pages 123-141, Springer.
  • Handle: RePEc:spr:seschp:978-3-030-59959-1_8
    DOI: 10.1007/978-3-030-59959-1_8

    Download full text from publisher

    To our knowledge, this item is not available for download. To check whether it is available, there are three options:
    1. Check below whether another version of this item is available online.
    2. Check on the provider's web page whether it is in fact available.
    3. Search for a similarly titled item that may be available.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Marco Silva & João Pedro Pedroso, 2022. "Deep Reinforcement Learning for Crowdshipping Last-Mile Delivery with Endogenous Uncertainty," Mathematics, MDPI, vol. 10(20), pages 1-23, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:seschp:978-3-030-59959-1_8. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.