no code implementations • 17 Feb 2024 • Tim Tsz-Kit Lau, Han Liu, Mladen Kolar
The choice of batch sizes in stochastic gradient optimizers is critical for model training.
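As a rough illustration of why this choice matters (the schedule below is hypothetical and not the method proposed in the paper), the following sketch runs mini-batch SGD on a least-squares problem and doubles the batch size on a fixed schedule to reduce gradient noise as training progresses:

```python
import numpy as np

# Hypothetical batch-size schedule: start small, double the batch size
# every few epochs. Illustrates the general idea only, not the paper's method.
rng = np.random.default_rng(0)
n, d = 1024, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr, batch_size = 0.01, 8
for epoch in range(30):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # mini-batch gradient
        w -= lr * grad
    if epoch % 10 == 9:            # grow the batch size on a fixed schedule
        batch_size = min(2 * batch_size, n)
print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```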
1 code implementation • 25 May 2023 • Tim Tsz-Kit Lau, Han Liu, Thomas Pock
We study the problem of approximate sampling from non-log-concave distributions, e.g., Gaussian mixtures, a task that is often challenging even in low dimensions due to multimodality.
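For a concrete baseline (the unadjusted Langevin algorithm, not the sampler proposed in the paper), here is a minimal sketch targeting a two-component Gaussian mixture, a standard non-log-concave example:

```python
import numpy as np

# Unadjusted Langevin algorithm (ULA) targeting a 50/50 mixture of
# N(-2, 1) and N(2, 1). Illustrative baseline only.
rng = np.random.default_rng(0)
means, var = np.array([-2.0, 2.0]), 1.0

def grad_potential(x):
    # U(x) = -log p(x); grad U(x) = sum_k r_k(x) * (x - mu_k) / var,
    # where r_k are the mixture responsibilities (weights/normalizers cancel).
    logw = -0.5 * (x - means) ** 2 / var
    r = np.exp(logw - logw.max())
    r /= r.sum()
    return np.sum(r * (x - means)) / var

step, x = 0.05, 0.0
samples = []
for k in range(50_000):
    x = x - step * grad_potential(x) + np.sqrt(2 * step) * rng.normal()
    samples.append(x)
print("sample mean/std:", np.mean(samples), np.std(samples))
```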
1 code implementation • 10 Jul 2022 • Tim Tsz-Kit Lau, Han Liu
The proposed algorithms extend existing Langevin Monte Carlo algorithms in two aspects -- the ability to sample nonsmooth distributions with mirror descent-like algorithms, and the use of the more general Bregman--Moreau envelope in place of the Moreau envelope as a smooth approximation of the nonsmooth part of the potential.
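A sketch of the Euclidean special case may help: the Moreau envelope smooths the nonsmooth part of the potential, and its gradient enters a Langevin step. Here the target is U(x) = x^2/2 + |x|; the paper's Bregman--Moreau generalization replaces the quadratic proximity term with a Bregman divergence, which is not shown here.

```python
import numpy as np

# Moreau-envelope smoothing inside a Langevin sampler (Euclidean special case).
# Target potential: U(x) = x^2/2 + |x|, with |x| smoothed by its Moreau envelope.
rng = np.random.default_rng(0)

def prox_abs(x, lam):
    # Proximal operator of lam * |x|: soft-thresholding.
    return np.sign(x) * max(abs(x) - lam, 0.0)

lam, step, x = 0.1, 0.01, 0.0
samples = []
for k in range(50_000):
    # Gradient of the Moreau envelope of |x|: (x - prox(x)) / lam (Huber-like).
    grad_env = (x - prox_abs(x, lam)) / lam
    grad = x + grad_env            # gradient of smooth part x^2/2 plus envelope
    x = x - step * grad + np.sqrt(2 * step) * rng.normal()
    samples.append(x)
print("sample std:", np.std(samples))
```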
no code implementations • 23 Mar 2022 • Tim Tsz-Kit Lau, Han Liu
On the other hand, in distributionally robust optimization, we seek data-driven decisions that perform well under the most adverse distribution within a certain discrepancy of a nominal distribution constructed from data samples.
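As a minimal illustration of this worst-case formulation (using a KL-divergence ball, which may differ from the discrepancy studied in the paper), the dual of the KL-DRO objective reduces to a one-dimensional search over the dual variable:

```python
import numpy as np

# Dual form of a KL-divergence DRO objective:
#   sup_{KL(Q || P_hat) <= rho} E_Q[loss]
#     = inf_{lam > 0} lam * rho + lam * log E_{P_hat}[exp(loss / lam)].
# Illustrative example only; the paper considers more general discrepancies.
rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=200)   # per-sample losses
rho = 0.1                                           # radius of the KL ball

def dual_objective(lam):
    m = losses.max()                                # log-sum-exp for stability
    return lam * rho + lam * (m / lam + np.log(np.mean(np.exp((losses - m) / lam))))

lams = np.geomspace(1e-2, 1e2, 500)                 # crude grid search over lam
vals = [dual_objective(l) for l in lams]
print("empirical mean loss :", losses.mean())
print("worst-case (KL-DRO) :", min(vals))
```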
no code implementations • 14 Mar 2022 • Tim Tsz-Kit Lau, Biswa Sengupta
We study two state-of-the-art solutions to the multi-agent pickup and delivery (MAPD) problem based on different principles -- multi-agent path-finding (MAPF) and multi-agent reinforcement learning (MARL).
Tasks: Multi-Agent Path Finding, Multi-Agent Reinforcement Learning, +2
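As a toy illustration of the MAPF side (a generic building block, not either of the systems compared in the paper), single-agent shortest-path search on a grid is the primitive that collision-aware multi-agent planners build on:

```python
from collections import deque

# Single-agent shortest path on a 4-connected grid via BFS. Real MAPD/MAPF
# systems add inter-agent collision constraints on top of such searches.
def bfs_path(grid, start, goal):
    """grid: list of strings, '#' = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    parents, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                # goal unreachable

grid = ["....",
        ".##.",
        "...."]
print(bfs_path(grid, (0, 0), (2, 3)))
```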
no code implementations • 24 Mar 2018 • Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao
Training deep neural networks (DNNs) efficiently is challenging due to the highly nonconvex nature of the associated optimization problem.
2 code implementations • 1 Mar 2018 • Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan Yao
Deep learning has attracted extensive attention due to its great empirical success.