Search Results for author: Chen Lu

Found 12 papers, 2 papers with code

Confidence Self-Calibration for Multi-Label Class-Incremental Learning

no code implementations · 19 Mar 2024 · Kaile Du, Yifan Zhou, Fan Lyu, Yuyang Li, Chen Lu, Guangcan Liu

The partial label challenge in Multi-Label Class-Incremental Learning (MLCIL) arises when only the new classes are labeled during training, while past and future labels remain unavailable.

Class Incremental Learning · Incremental Learning

Query lower bounds for log-concave sampling

no code implementations · 5 Apr 2023 · Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan

Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one.

Fisher information lower bounds for sampling

no code implementations · 5 Oct 2022 · Sinho Chewi, Patrik Gerber, Holden Lee, Chen Lu

We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion of approximate first-order stationarity in sampling.

Rejection sampling from shape-constrained distributions in sublinear time

no code implementations · 29 May 2021 · Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet

We consider the task of generating exact samples from a target distribution, known up to normalization, over a finite alphabet.
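
The classical baseline for this task is rejection sampling against a uniform envelope; the paper's contribution is a sublinear-time algorithm exploiting shape constraints, but the baseline can be sketched in a few lines (a minimal sketch — names and parameters here are illustrative, not from the paper):

```python
import random

def rejection_sample(weights, rng):
    """Draw one exact sample from p(i) proportional to weights[i] over a
    finite alphabet, via classical rejection against a uniform proposal."""
    n = len(weights)
    envelope = max(weights)  # uniform proposal scaled to dominate the target
    while True:
        i = rng.randrange(n)                       # propose a symbol uniformly
        if rng.random() * envelope <= weights[i]:  # accept w.p. weights[i]/envelope
            return i

# unnormalized target over a 3-symbol alphabet
w = [1.0, 2.0, 3.0]
rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(6000):
    counts[rejection_sample(w, rng)] += 1
# empirical frequencies approach the 1:2:3 target ratio
```

Each accepted draw is an exact sample (no normalization constant needed), but the expected number of proposals grows with max(weights)/mean(weights) — the inefficiency the paper's shape-constrained approach avoids.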

The query complexity of sampling from strongly log-concave distributions in one dimension

no code implementations · 29 May 2021 · Sinho Chewi, Patrik Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet

We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the query complexity of sampling from the class of strongly log-concave and log-smooth distributions with condition number $\kappa$ in one dimension.

Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm

no code implementations · 23 Dec 2020 · Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, Philippe Rigollet

Conventional wisdom in the sampling literature, backed by a popular diffusion scaling limit, suggests that the mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales as $O(d^{1/3})$, where $d$ is the dimension.
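
MALA itself is simple to state: a Langevin drift proposal followed by a Metropolis accept/reject correction. A minimal 1-D sketch (illustrative only; the paper's contribution is the dimension-dependence analysis, and all parameter values below are arbitrary):

```python
import math, random

def mala_step(x, grad_logpi, logpi, h, rng):
    """One Metropolis-Adjusted Langevin step on a 1-D target pi."""
    # Langevin proposal: gradient drift plus Gaussian noise
    y = x + h * grad_logpi(x) + math.sqrt(2 * h) * rng.gauss(0.0, 1.0)

    def log_q(b, a):  # log Gaussian proposal density q(b | a), up to a constant
        return -((b - a - h * grad_logpi(a)) ** 2) / (4 * h)

    # Metropolis correction enforcing detailed balance w.r.t. pi
    log_alpha = logpi(y) - logpi(x) + log_q(x, y) - log_q(y, x)
    return y if rng.random() < math.exp(min(0.0, log_alpha)) else x

# target: standard Gaussian, logpi(x) = -x^2/2
rng = random.Random(1)
x = 3.0
for _ in range(2000):
    x = mala_step(x, lambda t: -t, lambda t: -0.5 * t * t, 0.5, rng)
```

The accept/reject step is what distinguishes MALA from the unadjusted Langevin algorithm; the paper's question is how the step size h, and hence the mixing time, must scale with dimension d.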

Contextual Stochastic Block Model: Sharp Thresholds and Contiguity

no code implementations · 15 Nov 2020 · Chen Lu, Subhabrata Sen

We study community detection in the contextual stochastic block model (arXiv:1807.09596 [cs.SI], arXiv:1607.02675 [stat.ME]).

Community Detection · Stochastic Block Model

SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence

1 code implementation · NeurIPS 2020 · Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet

Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport.
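
The SVGD update combines a kernel-weighted drift toward high density with a repulsive term from the kernel gradient that keeps particles spread out. A minimal 1-D sketch with an RBF kernel (a sketch of the standard SVGD update, not the paper's code; parameter values are arbitrary):

```python
import math

def svgd_step(xs, grad_logpi, eps=0.1, ell=1.0):
    """One Stein Variational Gradient Descent update for 1-D particles
    with an RBF kernel of bandwidth ell."""
    n = len(xs)
    new = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            k = math.exp(-((xs[j] - xs[i]) ** 2) / (2 * ell ** 2))
            grad_k = -(xs[j] - xs[i]) / ell ** 2 * k  # d/dx_j k(x_j, x_i): repulsion
            phi += k * grad_logpi(xs[j]) + grad_k     # drift + repulsion
        new.append(xs[i] + eps * phi / n)
    return new

# target: standard Gaussian; five particles evolve toward it
xs = [float(i) for i in range(-2, 3)]
for _ in range(200):
    xs = svgd_step(xs, lambda t: -t)
```

The drift term alone would collapse all particles onto the mode; the grad_k repulsion is what makes the particle cloud approximate the whole distribution — the behavior the paper reinterprets as a kernelized gradient flow of the chi-squared divergence.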

Exponential ergodicity of mirror-Langevin diffusions

no code implementations · NeurIPS 2020 · Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, Austin J. Stromme

Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020).

Surface Following using Deep Reinforcement Learning and a GelSight Tactile Sensor

no code implementations · 2 Dec 2019 · Chen Lu, Jing Wang, Shan Luo

Tactile sensors can provide detailed contact information that can help robots perform dexterous, in-hand manipulation tasks.

Robotics

Nonparametric Heterogeneous Treatment Effect Estimation in Repeated Cross Sectional Designs

2 code implementations · 28 May 2019 · Chen Lu, Xinkun Nie, Stefan Wager

Identifying heterogeneity in a population's response to a health or policy intervention is crucial for evaluating and informing policy decisions.

Methodology

Interplay of Sensor Quantity, Placement and System Dimensionality on Energy Sparse Reconstruction of Fluid Flows

no code implementations · 21 Jun 2018 · Chen Lu, Balaji Jayaraman

In this effort, we explore the interplay of data sparsity, sparsity of the underlying flow system, and sensor placement on energy-sparse reconstruction performance enabled by a data-driven SVD basis.
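
At its simplest, this style of reconstruction fits modal coefficients to sparse sensor readings by least squares in a low-rank energy basis, then rebuilds the full field from the fitted modes. A minimal sketch, with an analytic two-mode basis standing in for a computed SVD basis (setup and names are illustrative, not from the paper):

```python
import math

# Full-field grid, a 2-mode basis {sin, cos}, and a field that lies in their span.
grid = [2 * math.pi * i / 64 for i in range(64)]
basis = [[math.sin(x) for x in grid], [math.cos(x) for x in grid]]
true_field = [math.sin(x + 1.3) for x in grid]

sensors = [5, 20, 41, 60]  # sparse sensor indices on the grid
# Normal equations A^T A c = A^T b for the sensor-restricted 4x2 basis matrix.
A = [[basis[k][i] for k in range(2)] for i in sensors]
b = [true_field[i] for i in sensors]
ata = [[sum(A[i][p] * A[i][q] for i in range(4)) for q in range(2)] for p in range(2)]
atb = [sum(A[i][p] * b[i] for i in range(4)) for p in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
c = [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
     (ata[0][0] * atb[1] - ata[0][1] * atb[0]) / det]

# Rebuild the full field from the fitted modal coefficients.
recon = [c[0] * basis[0][i] + c[1] * basis[1][i] for i in range(64)]
err = max(abs(recon[i] - true_field[i]) for i in range(64))
```

Because the field here lies exactly in the 2-mode span, four sensors suffice for near-exact recovery; the paper's question is how reconstruction degrades as the number of sensors, their placement, and the effective rank of the flow interact.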

Computational Physics
