no code implementations • 17 Apr 2024 • Zhuoyi Lin, Yaoxin Wu, Bangjian Zhou, Zhiguang Cao, Wen Song, Yingqian Zhang, Senthilnath Jayavelu
Accordingly, we propose to pre-train the backbone Transformer on TSP, and then reuse it when fine-tuning the Transformer models for each target VRP variant.
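The pre-train-then-fine-tune workflow described above can be sketched as follows. This is a minimal illustration of the parameter-transfer pattern only, not the paper's architecture: the "backbone" is a stand-in linear layer, and the variant names and helper functions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "backbone": one linear map standing in for the full
# Transformer encoder, purely to illustrate the transfer workflow.
def init_params(d_in, d_out, rng):
    return {"W": rng.normal(0, 0.1, (d_in, d_out)), "b": np.zeros(d_out)}

def forward(params, x):
    return x @ params["W"] + params["b"]

# Step 1: pre-train the backbone on TSP (the weight perturbation here
# merely stands in for real pre-training updates).
backbone = init_params(4, 4, rng)
backbone["W"] += 0.01

# Step 2: for each target VRP variant, start from the pre-trained backbone
# and attach a fresh variant-specific head before fine-tuning.
variants = ["CVRP", "VRPTW"]  # illustrative variant names
models = {}
for name in variants:
    models[name] = {
        "backbone": {k: v.copy() for k, v in backbone.items()},  # shared init
        "head": init_params(4, 1, rng),                          # new per variant
    }

# Both variant models begin from identical pre-trained backbone weights.
assert np.allclose(models["CVRP"]["backbone"]["W"],
                   models["VRPTW"]["backbone"]["W"])
```

Fine-tuning would then update each variant's copy independently, so the variants can diverge from the shared TSP initialization.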
no code implementations • 14 Jan 2022 • Zhuoyi Lin, Sheng Zang, Rundong Wang, Zhu Sun, J. Senthilnath, Chi Xu, Chee-Keong Kwoh
We then introduce a dynamic transformer encoder (DTE) to capture user-specific inter-item relationships among item candidates by seamlessly accommodating the learned latent user intentions via IDM.
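One simple way to condition attention among item candidates on a latent user intention is to bias each candidate's query with the intention vector before scoring; the sketch below illustrates that idea with plain numpy. It is an assumption-laden toy, not the paper's DTE/IDM mechanism: the dimensions, the additive conditioning, and the random inputs are all illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8
candidates = rng.normal(size=(5, d))   # 5 candidate item embeddings
intention = rng.normal(size=d)         # latent user intention (e.g. from an IDM-like module)

# Bias each candidate's query with the user's intention, so the attention
# pattern among candidates becomes user-specific (one simple choice; the
# paper's exact accommodation mechanism may differ).
queries = candidates + intention
scores = queries @ candidates.T / np.sqrt(d)
weights = np.apply_along_axis(softmax, 1, scores)  # row-wise attention weights
contextual = weights @ candidates                  # intention-aware inter-item representations

assert contextual.shape == (5, d)
```

Each output row mixes all candidates, weighted by intention-conditioned attention, which is the "user-specific inter-item relationship" flavour the abstract describes.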
no code implementations • 29 Sep 2021 • Zhuoyi Lin, Biao Ye, Xu He, Shuo Sun, Rundong Wang, Rui Yin, Xu Chi, Chee Keong Kwoh
A machine learning system is typically composed of a model and data.
no code implementations • 28 Jul 2020 • Zhuoyi Lin, Lei Feng, Xingzhi Guo, Yu Zhang, Rui Yin, Chee Keong Kwoh, Chi Xu
In this paper, we propose a novel representation learning-based model called COMET (COnvolutional diMEnsion inTeraction), which simultaneously models the high-order interaction patterns among historical interactions and embedding dimensions.
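The core idea of modelling interactions across both historical items and embedding dimensions can be sketched by treating the stacked embedding matrix as a one-channel "image" and sliding a small 2-D filter over it. This is a hand-rolled toy convolution under assumed shapes, not COMET's actual network.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d = 6, 8                     # m historical items, embedding dimension d (illustrative)
E = rng.normal(size=(m, d))     # stacked historical-interaction embeddings

# Valid-mode 2-D cross-correlation: each output cell aggregates a k x k patch,
# so it mixes neighbouring historical items (rows) AND neighbouring embedding
# dimensions (columns) at once -- the "dimension interaction" intuition.
def conv2d_valid(X, K):
    kh, kw = K.shape
    H, W = X.shape[0] - kh + 1, X.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(X[i:i+kh, j:j+kw] * K)
    return out

kernel = rng.normal(size=(3, 3))
feature_map = conv2d_valid(E, kernel)
assert feature_map.shape == (m - 2, d - 2)
```

A real implementation would use a deep-learning framework's conv layers with many filters; the point here is only where the filter's receptive field sits relative to items and dimensions.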
no code implementations • 28 Jul 2020 • Zhuoyi Lin, Lei Feng, Rui Yin, Chi Xu, Chee-Keong Kwoh
We argue that recommendation on global and local graphs outperforms that on a single global graph or multiple local graphs.