1 code implementation • 13 Dec 2023 • Yanling Tian, Di Chen, Yunan Liu, Jian Yang, Shanshan Zhang
To the best of our knowledge, this is the first work that investigates how to support full-task pre-training using sub-task data.
no code implementations • 6 Mar 2023 • Xinyun Chen, Yunan Liu, Guiyu Hong
A major drawback of PTO is that its solution accuracy is often highly sensitive to parameter estimation errors, because PTO fails to properly link these errors (step 1) to the quality of the optimized solutions (step 2).
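The sensitivity can be seen in a minimal sketch of the predict-then-optimize pipeline. The toy objective, estimator, and all numeric values below are illustrative assumptions, not details from the paper: with objective f(x) = (x − θ)², the plug-in decision equals the estimate, so the estimation error passes straight through to the excess cost.

```python
import random

def estimate_theta(samples):
    # Step 1 (predict): estimate the unknown parameter from noisy data.
    return sum(samples) / len(samples)

def optimize(theta_hat):
    # Step 2 (optimize): minimize f(x) = (x - theta)^2 using the plug-in
    # estimate; this step ignores how uncertain theta_hat is.
    return theta_hat

def true_cost(x, theta=5.0):
    # Cost under the *true* parameter, which PTO never sees.
    return (x - theta) ** 2

random.seed(0)
theta = 5.0  # hypothetical ground truth for the illustration
samples = [theta + random.gauss(0, 2.0) for _ in range(10)]
theta_hat = estimate_theta(samples)
x_pto = optimize(theta_hat)

# The decision error equals the estimation error, so the excess cost is
# exactly (theta_hat - theta)^2: step-1 error feeds directly into step-2
# solution quality, which is the drawback described above.
print(true_cost(x_pto))
```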
no code implementations • 23 Sep 2022 • Yanling Tian, Di Chen, Yunan Liu, Shanshan Zhang, Jian Yang
A straightforward solution is to manually assign a different weight to each task, compensating for their different convergence rates.
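Manual task weighting can be sketched as a weighted sum of per-task losses. The task names, loss values, and weights below are hypothetical choices for illustration, not values from the paper:

```python
# Combine per-task losses with hand-picked weights so that a
# slow-converging task contributes more to the total gradient signal.
def weighted_multitask_loss(task_losses, weights):
    assert set(task_losses) == set(weights)
    return sum(weights[t] * task_losses[t] for t in task_losses)

# Illustrative numbers: the slow-converging task gets the larger weight.
losses = {"fast_task": 0.4, "slow_task": 1.8}
weights = {"fast_task": 0.5, "slow_task": 1.0}
total = weighted_multitask_loss(losses, weights)
print(total)  # 0.5*0.4 + 1.0*1.8 = 2.0
```

The obvious limitation, which motivates automatic alternatives, is that the weights must be tuned by hand and a single static choice may suit no stage of training.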
no code implementations • NeurIPS 2021 • Yunan Liu, Shanshan Zhang, Yang Li, Jian Yang
In this setting, we embed an additional “latent–latent” pair to reduce the domain gap between the source domain and the different latent domains, allowing the model to adapt well to multiple target domains simultaneously.
no code implementations • 23 Nov 2021 • Si Zhang, Mingzhi Zhang, Rongxing Hu, David Lubkeman, Yunan Liu, Ning Lu
In Stage 1 (individual training), while holding all other agents inactive, we train each agent separately to obtain its own optimal VVC actions in the action space {consume, generate, do-nothing}.
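The Stage-1 loop can be sketched as follows. The reward table, bandit-style Q-updates, and all hyperparameters are stand-in assumptions for illustration; the actual VVC reward would come from a power-flow simulation, not a lookup table:

```python
import random

ACTIONS = ["consume", "generate", "do-nothing"]

def toy_reward(agent_id, action):
    # Hypothetical stand-in for the grid's voltage-control reward.
    table = {"consume": -1.0, "generate": 0.5, "do-nothing": 0.0}
    return table[action] + 0.1 * agent_id

def train_agent_individually(agent_id, episodes=200, eps=0.1, lr=0.5):
    # Stage 1: all other agents are held inactive, so this agent faces a
    # stationary environment and can learn its own best action alone.
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Epsilon-greedy choice over the discrete action space.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # Incremental Q-update toward the observed reward.
        q[a] += lr * (toy_reward(agent_id, a) - q[a])
    return max(q, key=q.get)  # the agent's learned VVC action

random.seed(0)
policies = {i: train_agent_individually(i) for i in range(3)}
print(policies)
```

Training each agent separately in this way avoids the non-stationarity that arises when all agents learn at once, at the cost of ignoring inter-agent interactions until a later joint stage.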
no code implementations • 7 Sep 2020 • Xinyun Chen, Yunan Liu, Guiyu Hong
In this work, we propose an online learning framework for solving this problem that does not require the system's scale to increase.