Author
Listed:
- Chen Ding
- Fei Qiao
- Dongyuan Wang
- Juan Liu
Abstract
Reinforcement learning (RL) is an efficient method for addressing scheduling problems with good real-time performance. However, scheduling in aerospace component manufacturing (ACM) often involves multiple objectives, with decision-makers' preferences changing dynamically in real production. Additionally, specifying a numerical reward function for different objectives typically requires meticulous manual tuning by experts. To overcome these challenges, we present a novel hybrid intelligent scheduling method that integrates human feedback into RL (HIS-HFRL) for adaptive preference objectives. We focus on three objectives: total tardiness, maximum tardiness, and total inventory and delay costs. In HIS-HFRL, the reward model is developed by incorporating human feedback. Composite rules are simulated to generate trajectories and obtain objective values, which are then scored by human experts based on current preferences. States in different trajectories are labelled with rewards according to these scores. In this way, samples labelled with states and rewards are collected to construct the reward model. Finally, a double deep Q-network-based training algorithm is developed to train agents using this reward model, enabling effective scheduling decisions for machine assignment and operation sequencing. Extensive experiments in an ACM workshop demonstrate the superiority of HIS-HFRL over existing methods across various scenarios.
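The double deep Q-network training mentioned in the abstract can be illustrated with a minimal sketch of the double-DQN target computation. All names below (`double_dqn_target`, `GAMMA`, the example Q-values) are illustrative assumptions, not details taken from the paper; the only connection to HIS-HFRL is that the reward would come from the learned human-feedback reward model rather than a hand-tuned numerical reward function.

```python
GAMMA = 0.95  # discount factor (assumed value, not from the paper)

def double_dqn_target(reward, next_q_online, next_q_target, done):
    """Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    next_q_online / next_q_target: per-action Q-value lists for the next state s'.
    In HIS-HFRL, `reward` would be produced by the learned reward model.
    """
    if done:
        return reward
    # The online network selects the action; the target network evaluates it.
    # Decoupling selection from evaluation reduces vanilla DQN's overestimation bias.
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + GAMMA * next_q_target[best_action]
```

In a full training loop, this target would be regressed against the online network's Q-value for the taken action over minibatches sampled from a replay buffer, with the target network's weights periodically copied from the online network.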
Suggested Citation
Chen Ding & Fei Qiao & Dongyuan Wang & Juan Liu, 2025.
"A novel hybrid intelligent scheduling: integrating human feedback into reinforcement learning for adaptive preference objectives,"
International Journal of Production Research, Taylor & Francis Journals, vol. 63(16), pages 6037-6055, August.
Handle:
RePEc:taf:tprsxx:v:63:y:2025:i:16:p:6037-6055
DOI: 10.1080/00207543.2025.2467448
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:tprsxx:v:63:y:2025:i:16:p:6037-6055. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/TPRS20 .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.