
Articles

Vol. 11 No. 1 (2024)

Deep Reinforcement Learning Solves Job-shop Scheduling Problems

  • Anjiang Cai
  • Yangfan Yu
  • Manman Zhao
DOI
https://doi.org/10.15878/j.instr.202300165
Submitted
April 24, 2024
Published
March 31, 2024

Abstract

To address the sparse reward problem that arises when solving job-shop scheduling with deep reinforcement learning (DRL), a DRL framework designed around sparse rewards is proposed. The job-shop scheduling problem (JSSP) is formulated as a Markov decision process, and six state features are designed using a two-way scheduling method to improve the state representation: four features that help distinguish the optimal action and two features related to the learning goal. GIN++, an extended variant of the graph isomorphism network, encodes the disjunctive graph to improve the model's performance and generalization ability. An iterated greedy algorithm generates a random policy as the initial policy, and the action with the maximum information gain is selected to extend it, improving the exploration ability of the Actor-Critic algorithm. The trained policy model is validated on multiple public test data sets and compared with other advanced DRL methods and scheduling rules: the proposed method reduces the minimum average gap by 3.49%, 5.31%, and 4.16% compared with priority-rule-based methods, and by 5.34%, 11.97%, and 5.02% compared with learning-based methods, effectively improving the accuracy with which DRL approximates the minimum completion time of the JSSP.
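
The disjunctive-graph encoding described above can be made concrete with a small sketch. The following is a minimal, illustrative GIN-style message-passing update on a toy JSSP instance, not the authors' GIN++ implementation: the instance size, feature dimensions, machine assignments, and MLP shapes are assumptions chosen for demonstration only.

# A minimal sketch (not the paper's GIN++) of one GIN update on a
# disjunctive-graph encoding of a toy 2-job x 3-machine JSSP instance.
# All sizes and the same-machine pairing below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_ops = 6        # operations (nodes) in the toy instance
feat_dim = 4     # per-node state features (e.g. processing time, status)

# Adjacency: conjunctive arcs (job precedence) plus disjunctive arcs
# (operations sharing a machine), treated here as an undirected 0/1 matrix.
A = np.zeros((n_ops, n_ops))
for u, v in [(0, 1), (1, 2), (3, 4), (4, 5),   # job precedence chains
             (0, 3), (1, 4), (2, 5)]:          # same-machine pairs (assumed)
    A[u, v] = A[v, u] = 1.0

H = rng.normal(size=(n_ops, feat_dim))         # initial node features
eps = 0.1                                      # learnable in practice
W1 = rng.normal(size=(feat_dim, 8))
W2 = rng.normal(size=(8, feat_dim))

def gin_layer(H, A, eps, W1, W2):
    """GIN update: h_v <- MLP((1 + eps) * h_v + sum of neighbor features)."""
    agg = (1.0 + eps) * H + A @ H              # sum aggregation over neighbors
    return np.maximum(agg @ W1, 0.0) @ W2      # two-layer MLP with ReLU

H = gin_layer(H, A, eps, W1, W2)
graph_embedding = H.mean(axis=0)               # pooled state for the policy
print(graph_embedding.shape)                   # (4,)

In a full pipeline this pooled embedding, together with the six hand-designed state features, would feed the Actor-Critic policy that selects the next operation to schedule.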

