Author
Listed:
- Di Zhang
- Senlin Mu
- Joseph Mango
- Xiang Li
Abstract
Spatial resource allocation is a multi-objective spatial optimization problem with multiple constraints, and the division of school districts is a classic instance of it. This paper proposes a new dynamic districting optimization method based on deep reinforcement learning to optimize the global effect of school districting. The method regards the continual adjustment of school-district allocations as a multi-step Markov decision process. It combines the real-time responsiveness and flexibility of deep convolutional neural networks with reinforcement learning, directly learning behavioural policies from the changing state of the school districts. Subject to various constraints, the algorithm optimizes students' distances to school and the utilization rates of schools, producing an improved allocation plan. To demonstrate its validity, the proposed method was evaluated on real datasets of two school districts in the United States. Experimental results across six scenarios show that, compared with traditional algorithms, the proposed method requires less prior knowledge, optimizes globally, and provides a better districting plan, reducing the distance between students and schools while balancing school utilization rates.
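The MDP framing described in the abstract can be illustrated with a minimal toy sketch. This is not the authors' implementation: the units, schools, reward weights, and the greedy policy (standing in for the learned deep-RL policy) are all illustrative assumptions. States are unit-to-school assignments, actions reassign one residential unit, and the reward penalizes total student travel distance plus utilization imbalance between schools.

```python
# Toy sketch of school districting as a Markov decision process.
# All data and the greedy policy below are hypothetical illustrations,
# not the method or datasets from the paper.

# Residential units: id -> ((x, y) coordinates, number of students).
UNITS = {0: ((0, 0), 30), 1: ((1, 0), 25), 2: ((2, 0), 40),
         3: ((0, 2), 35), 4: ((1, 2), 20), 5: ((2, 2), 50)}
# Schools: id -> ((x, y) coordinates, capacity).
SCHOOLS = {"A": ((0, 1), 100), "B": ((2, 1), 100)}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def reward(assign):
    """Negative cost: student-weighted travel distance + utilization imbalance."""
    travel = sum(UNITS[u][1] * dist(UNITS[u][0], SCHOOLS[s][0])
                 for u, s in assign.items())
    load = {s: 0 for s in SCHOOLS}
    for u, s in assign.items():
        load[s] += UNITS[u][1]
    util = [load[s] / SCHOOLS[s][1] for s in SCHOOLS]
    return -(travel + 100 * (max(util) - min(util)))

def step(assign, unit, school):
    """One MDP transition: reassign `unit` to `school`, yielding a new state."""
    new = dict(assign)
    new[unit] = school
    return new

# Greedy policy standing in for the learned behavioural policy:
# repeatedly apply the single reassignment with the best immediate reward.
state = {u: "A" for u in UNITS}  # deliberately poor initial assignment
for _ in range(10):
    best = max(((u, s) for u in UNITS for s in SCHOOLS),
               key=lambda a: reward(step(state, *a)))
    nxt = step(state, *best)
    if reward(nxt) <= reward(state):
        break  # local optimum reached
    state = nxt

print(state)
```

A deep-RL agent replaces the greedy argmax with a policy network that maps the district state to a reassignment action and is trained on long-horizon returns, which is what lets it escape the local optima a one-step greedy search gets stuck in.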
Suggested Citation
Di Zhang & Senlin Mu & Joseph Mango & Xiang Li, 2026.
"Deep reinforcement learning for spatial resource allocation: A case study of school districting,"
Environment and Planning B, vol. 53(2), pages 418-434, February.
Handle:
RePEc:sae:envirb:v:53:y:2026:i:2:p:418-434
DOI: 10.1177/23998083241302187