no code implementations • 19 Mar 2024 • Kaile Du, Yifan Zhou, Fan Lyu, Yuyang Li, Chen Lu, Guangcan Liu
The partial-label challenge in Multi-Label Class-Incremental Learning (MLCIL) arises when only the new classes are labeled during training, while labels for past and future classes remain unavailable.
no code implementations • 5 Apr 2023 • Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one.
no code implementations • 5 Oct 2022 • Sinho Chewi, Patrik Gerber, Holden Lee, Chen Lu
We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion of approximate first-order stationarity in sampling.
no code implementations • 29 May 2021 • Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet
We consider the task of generating exact samples from a target distribution, known up to normalization, over a finite alphabet.
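For context on this task, the generic rejection-sampling baseline (propose uniformly over the alphabet, accept in proportion to the unnormalized weight) is sketched below; it is the standard baseline, not the paper's algorithm, and the weight array `w`, upper bound `M >= max(w)`, and RNG are assumed inputs.

```python
import numpy as np

def rejection_sample(w, M, rng):
    """Exact sample from p_i = w_i / sum(w) over {0, ..., n-1},
    given an upper bound M >= max_i w_i (standard rejection baseline)."""
    n = len(w)
    while True:
        i = rng.integers(n)           # propose uniformly over the alphabet
        if rng.uniform() * M < w[i]:  # accept with probability w[i] / M
            return i
```

Each accepted index is an exact draw from the normalized distribution; the expected number of proposals is n * M / sum(w).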
no code implementations • 29 May 2021 • Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet
We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the query complexity of sampling from the class of strongly log-concave and log-smooth distributions with condition number $\kappa$ in one dimension.
no code implementations • 23 Dec 2020 • Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, Philippe Rigollet
Conventional wisdom in the sampling literature, backed by a popular diffusion scaling limit, suggests that the mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales as $O(d^{1/3})$, where $d$ is the dimension.
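For readers unfamiliar with MALA, a minimal sketch of one step follows (the textbook algorithm, written purely as an illustration of what is being analyzed; `log_pi`, `grad_log_pi`, the step size `h`, and the RNG are assumed inputs).

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One Metropolis-Adjusted Langevin step targeting pi (known up to normalization)."""
    # Langevin proposal: gradient step plus Gaussian noise.
    mean_x = x + h * grad_log_pi(x)
    y = mean_x + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    # Log-densities of the forward and reverse Gaussian proposals.
    mean_y = y + h * grad_log_pi(y)
    log_q_fwd = -np.sum((y - mean_x) ** 2) / (4 * h)
    log_q_rev = -np.sum((x - mean_y) ** 2) / (4 * h)
    # Metropolis-Hastings accept/reject correction.
    log_alpha = log_pi(y) - log_pi(x) + log_q_rev - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x
```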
no code implementations • 15 Nov 2020 • Chen Lu, Subhabrata Sen
We study community detection in the contextual stochastic block model (arXiv:1807.09596 [cs.SI], arXiv:1607.02675 [stat.ME]).
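For background, the classical spectral baseline for two-community SBM recovery partitions nodes by the sign of the second adjacency eigenvector; the sketch below is purely illustrative and ignores the contextual covariates that are the focus of the paper.

```python
import numpy as np

def spectral_two_communities(A):
    """Baseline spectral partition for a two-community SBM:
    split nodes by the sign of the second-largest adjacency eigenvector."""
    vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    v2 = vecs[:, -2]                 # eigenvector of the second-largest eigenvalue
    return (v2 > 0).astype(int)      # community labels in {0, 1}
```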
1 code implementation • NeurIPS 2020 • Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet
Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport.
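For reference, one SVGD update with an RBF kernel is sketched below (the standard update rule, not the paper's contribution; the step size `eps` and kernel `bandwidth` are illustrative parameters).

```python
import numpy as np

def svgd_step(particles, grad_log_pi, eps=0.1, bandwidth=1.0):
    """One SVGD update: a kernel-smoothed score term plus a repulsion term."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]   # (n, n, d): x_j - x_i
    sq_dists = np.sum(diffs ** 2, axis=-1)                  # (n, n)
    k = np.exp(-sq_dists / (2 * bandwidth ** 2))            # symmetric RBF kernel matrix
    grads = np.stack([grad_log_pi(x) for x in particles])   # (n, d) score at each particle
    drive = k @ grads                                       # kernel-weighted scores
    repulse = -np.einsum('ji,jid->id', k, diffs) / bandwidth ** 2  # kernel gradient
    return particles + eps * (drive + repulse) / n
```

The drive term pushes particles toward high-density regions; the repulsion term keeps them spread out, which is what makes SVGD a particle approximation rather than a mode-finder.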
no code implementations • NeurIPS 2020 • Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, Austin J. Stromme
Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020).
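A minimal sketch of one discretized mirror-Langevin step is given below, using the entropic mirror map on the positive orthant as an illustrative choice; this is a simple Euler discretization for intuition, not the exact scheme analyzed in the paper. `grad_V` is the gradient of the potential V = -log(target density).

```python
import numpy as np

def mirror_langevin_step(x, grad_V, h, rng):
    """One Euler step of the mirror-Langevin diffusion on the positive orthant,
    with the entropic mirror map phi(x) = sum(x * log(x) - x)."""
    y = np.log(x)                    # mirror coordinates: y = grad phi(x)
    hess_sqrt = 1.0 / np.sqrt(x)     # sqrt of the (diagonal) Hessian of phi
    noise = np.sqrt(2 * h) * hess_sqrt * rng.standard_normal(x.shape)
    y = y - h * grad_V(x) + noise    # drift plus Hessian-scaled noise
    return np.exp(y)                 # map back: x = grad phi*(y)
```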
no code implementations • 2 Dec 2019 • Chen Lu, Jing Wang, Shan Luo
Tactile sensors can provide detailed contact information that helps robots perform dexterous, in-hand manipulation tasks.
Robotics
2 code implementations • 28 May 2019 • Chen Lu, Xinkun Nie, Stefan Wager
Identifying heterogeneity in a population's response to a health or policy intervention is crucial for evaluating and informing policy decisions.
Methodology
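As a generic illustration of estimating heterogeneous responses, the sketch below fits a basic T-learner (separate outcome models on treated and control units); this is a common baseline, not necessarily the estimator developed in the paper, and the random-forest models are an illustrative choice.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def t_learner_cate(X, y, treated, X_new):
    """Estimate conditional average treatment effects with a T-learner:
    fit outcome models separately on treated and control units,
    then difference their predictions. `treated` is a boolean mask."""
    model_t = RandomForestRegressor(n_estimators=200).fit(X[treated], y[treated])
    model_c = RandomForestRegressor(n_estimators=200).fit(X[~treated], y[~treated])
    return model_t.predict(X_new) - model_c.predict(X_new)
```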
no code implementations • 21 Jun 2018 • Chen Lu, Balaji Jayaraman
In this effort, we explore the interplay of data sparsity, sparsity of the underlying flow system, and sensor placement on energy-sparse reconstruction performance enabled by a data-driven SVD basis.
Computational Physics
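A minimal sketch of the generic building block here: reconstructing a full flow field from a few point sensors by least squares in a data-driven SVD (POD) basis. The snapshot matrix, sensor indices, and rank are assumed inputs, and this gappy-POD-style solve is an illustration rather than the paper's exact procedure.

```python
import numpy as np

def sparse_svd_reconstruct(snapshots, sensor_idx, measurements, rank):
    """Reconstruct a full state from sparse sensor measurements using the
    leading SVD modes of a snapshot matrix (gappy-POD-style least squares)."""
    # Data-driven basis: leading left singular vectors of the snapshots.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :rank]                       # (n_state, rank) SVD basis
    # Solve for modal coefficients using only the sensed rows of the basis.
    coeffs, *_ = np.linalg.lstsq(Phi[sensor_idx, :], measurements, rcond=None)
    return Phi @ coeffs                     # full-field reconstruction
```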