Search Results for author: Jiawen Lu

Found 4 papers, 0 papers with code

A Hierarchical Reinforcement Learning Based Optimization Framework for Large-scale Dynamic Pickup and Delivery Problems

no code implementations • NeurIPS 2021 • Yi Ma, Xiaotian Hao, Jianye Hao, Jiawen Lu, Xing Liu, Tong Xialiang, Mingxuan Yuan, Zhigang Li, Jie Tang, Zhaopeng Meng

To address this problem, existing methods partition the overall DPDP into fixed-size sub-problems by caching online generated orders and solving each sub-problem separately, or, on this basis, further utilize predicted future orders to optimize each sub-problem.
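The partitioning step described above can be illustrated with a minimal sketch, assuming a fixed batch size and a placeholder solver; `Order`, `solve_subproblem`, and `SUBPROBLEM_SIZE` are hypothetical names, not the paper's code.

```python
# Minimal sketch (assumption, not the paper's implementation): cache online
# generated orders and cut the dynamic problem into fixed-size sub-problems.
from dataclasses import dataclass
from typing import List

SUBPROBLEM_SIZE = 50  # assumed fixed number of orders per sub-problem


@dataclass
class Order:
    order_id: int
    pickup: tuple     # (x, y) pickup location
    delivery: tuple   # (x, y) delivery location


def solve_subproblem(orders: List[Order]) -> None:
    """Placeholder for any static pickup-and-delivery solver on one batch."""
    print(f"solving sub-problem with {len(orders)} orders")


cache: List[Order] = []


def on_order_arrival(order: Order) -> None:
    """Cache incoming orders; dispatch a fixed-size sub-problem when full."""
    cache.append(order)
    if len(cache) >= SUBPROBLEM_SIZE:
        batch, cache[:] = cache[:SUBPROBLEM_SIZE], cache[SUBPROBLEM_SIZE:]
        solve_subproblem(batch)
```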

Hierarchical Reinforcement Learning

Learning to Optimize Industry-Scale Dynamic Pickup and Delivery Problems

no code implementations • 27 May 2021 • Xijun Li, Weilin Luo, Mingxuan Yuan, Jun Wang, Jiawen Lu, Jie Wang, Jinhu Lu, Jia Zeng

Our method is entirely data-driven and thus adaptive, i.e., the relational representation of adjacent vehicles can be learned and corrected by ST-DDGN from data periodically.
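The periodic relational update described above might look roughly like the sketch below. This is an assumption-laden illustration, not ST-DDGN itself: the adjacency radius, the mean-aggregation rule, and all names are invented for clarity.

```python
# Minimal sketch (assumption, not ST-DDGN): periodically rebuild a vehicle
# adjacency graph from positions and refresh each vehicle's relational
# embedding by aggregating its neighbours.
import numpy as np

ADJACENCY_RADIUS = 5.0  # assumed distance threshold for "adjacent" vehicles


def build_adjacency(positions: np.ndarray) -> np.ndarray:
    """Boolean adjacency matrix: vehicles closer than the radius are linked."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist < ADJACENCY_RADIUS) & ~np.eye(len(positions), dtype=bool)


def refresh_embeddings(embeddings: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """Mean-aggregate neighbour embeddings (one graph-convolution-style step)."""
    degree = adj.sum(axis=1, keepdims=True).clip(min=1)
    return 0.5 * embeddings + 0.5 * (adj.astype(float) @ embeddings) / degree


# Periodic correction: recompute adjacency from fresh positions, then refresh.
positions = np.random.rand(8, 2) * 10   # 8 vehicles on a 10x10 region
embeddings = np.random.rand(8, 16)      # 16-dim relational embeddings
embeddings = refresh_embeddings(embeddings, build_adjacency(positions))
```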

Graph Embedding • Management +1

Bilevel Learning Model Towards Industrial Scheduling

no code implementations • 10 Aug 2020 • Longkang Li, Hui-Ling Zhen, Mingxuan Yuan, Jiawen Lu, Xialiang Tong, Jia Zeng, Jun Wang, Dirk Schnieders

In this paper, we propose a Bilevel Deep reinforcement learning Scheduler (BDS), in which the higher level is responsible for exploring an initial global sequence, whereas the lower level aims at exploitation through partial sequence refinements; the two levels are connected by a sliding-window sampling mechanism.
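The division of labour described above can be sketched as follows, under stated assumptions: the window size, stride, random shuffle, and 2-swap refinement are illustrative placeholders, not the BDS policies.

```python
# Minimal sketch (assumption, not BDS): a higher level proposes a global job
# sequence, a lower level refines overlapping windows of it, and the two are
# connected by a sliding-window sampling step.
import random
from typing import List

WINDOW, STRIDE = 4, 2  # assumed sliding-window size and stride


def higher_level_sequence(num_jobs: int) -> List[int]:
    """Explore: propose an initial global sequence (here, a random order)."""
    seq = list(range(num_jobs))
    random.shuffle(seq)
    return seq


def lower_level_refine(window: List[int]) -> List[int]:
    """Exploit: locally refine a partial sequence (here, a random 2-swap)."""
    window = window[:]
    if len(window) >= 2:
        i, j = random.sample(range(len(window)), 2)
        window[i], window[j] = window[j], window[i]
    return window


def bilevel_schedule(num_jobs: int) -> List[int]:
    seq = higher_level_sequence(num_jobs)
    # Sliding-window sampling: pass overlapping partial sequences downward.
    for start in range(0, max(1, num_jobs - WINDOW + 1), STRIDE):
        seq[start:start + WINDOW] = lower_level_refine(seq[start:start + WINDOW])
    return seq


print(bilevel_schedule(10))
```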

Scheduling
