Search Results for author: Tim Tsz-Kit Lau

Found 7 papers, 3 papers with code

AdAdaGrad: Adaptive Batch Size Schemes for Adaptive Gradient Methods

no code implementations • 17 Feb 2024 • Tim Tsz-Kit Lau, Han Liu, Mladen Kolar

The choice of batch sizes in stochastic gradient optimizers is critical for model training.

Image Classification

Non-Log-Concave and Nonsmooth Sampling via Langevin Monte Carlo Algorithms

1 code implementation • 25 May 2023 • Tim Tsz-Kit Lau, Han Liu, Thomas Pock

We study the problem of approximate sampling from non-log-concave distributions, e.g., Gaussian mixtures, which is often challenging even in low dimensions due to their multimodality.
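For orientation, a minimal sketch of the plain (unadjusted) Langevin Monte Carlo iteration applied to a one-dimensional two-component Gaussian mixture; this is only the generic update x_{k+1} = x_k - gamma * grad U(x_k) + sqrt(2*gamma) * xi_k with xi_k ~ N(0, 1), not the nonsmooth proximal variants proposed in the paper, and the step size, mixture parameters, and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
means, sigma, weights = np.array([-3.0, 3.0]), 1.0, np.array([0.5, 0.5])

def grad_potential(x):
    # Gradient of U(x) = -log pi(x) for the Gaussian mixture, computed via
    # component responsibilities (shared sigma, so normalizing constants cancel).
    comp = weights * np.exp(-0.5 * ((x - means) / sigma) ** 2)
    resp = comp / comp.sum()
    return np.sum(resp * (x - means)) / sigma ** 2

gamma, n_iter = 0.05, 20_000
x, samples = 0.0, []
for _ in range(n_iter):
    x = x - gamma * grad_potential(x) + np.sqrt(2.0 * gamma) * rng.standard_normal()
    samples.append(x)
# With a multimodal target, a single chain started near one mode can take a very
# long time to reach the other -- the difficulty the paper addresses.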

Bayesian Inference, Image Deconvolution

Bregman Proximal Langevin Monte Carlo via Bregman--Moreau Envelopes

1 code implementation • 10 Jul 2022 • Tim Tsz-Kit Lau, Han Liu

The proposed algorithms extend existing Langevin Monte Carlo algorithms in two aspects -- the ability to sample nonsmooth distributions with mirror descent-like algorithms, and the use of the more general Bregman--Moreau envelope in place of the Moreau envelope as a smooth approximation of the nonsmooth part of the potential.
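For reference, a sketch of the standard definitions behind these envelopes (notation and conventions, e.g. left versus right Bregman envelopes, may differ from the paper's): for a parameter \(\lambda > 0\) and a Legendre function \(\phi\) with Bregman divergence \(D_{\phi}\),

\[
  f^{\lambda}(x) = \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^{2} \Big\},
  \qquad
  f^{\lambda}_{\phi}(x) = \min_{y}\Big\{ f(y) + \tfrac{1}{\lambda}\, D_{\phi}(y,x) \Big\},
  \qquad
  D_{\phi}(y,x) = \phi(y) - \phi(x) - \langle \nabla\phi(x),\, y-x \rangle .
\]

Taking \(\phi = \tfrac{1}{2}\|\cdot\|^{2}\) recovers the classical Moreau envelope; both constructions serve as smooth surrogates for the nonsmooth part of the potential.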

Wasserstein Distributionally Robust Optimization with Wasserstein Barycenters

no code implementations • 23 Mar 2022 • Tim Tsz-Kit Lau, Han Liu

On the other hand, in distributionally robust optimization, we seek data-driven decisions that perform well under the most adverse distribution lying within a prescribed discrepancy (in some measure of distance between probability distributions) of a nominal distribution constructed from data samples.
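As a sketch of the generic formulation (the paper presumably takes the nominal distribution to be a Wasserstein barycenter rather than the empirical measure), Wasserstein distributionally robust optimization solves

\[
  \min_{\theta}\ \sup_{Q \,:\, W_{p}(Q,\hat{P}) \le \varepsilon}\ \mathbb{E}_{\xi \sim Q}\big[\ell(\theta;\xi)\big],
\]

where \(\hat{P}\) is the nominal distribution built from the data, \(W_{p}\) is the order-\(p\) Wasserstein distance, and \(\varepsilon\) is the radius of the ambiguity set.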

The Multi-Agent Pickup and Delivery Problem: MAPF, MARL and Its Warehouse Applications

no code implementations • 14 Mar 2022 • Tim Tsz-Kit Lau, Biswa Sengupta

We study two state-of-the-art solutions to the multi-agent pickup and delivery (MAPD) problem based on different principles -- multi-agent path-finding (MAPF) and multi-agent reinforcement learning (MARL).

Multi-Agent Path Finding, Multi-agent Reinforcement Learning +2

A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training

no code implementations • 24 Mar 2018 • Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao

Training deep neural networks (DNNs) efficiently is challenging because the associated optimization problem is highly nonconvex.

Global Convergence of Block Coordinate Descent in Deep Learning

2 code implementations • 1 Mar 2018 • Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan Yao

Deep learning has attracted extensive attention owing to its great empirical success.
