Search Results for author: Cong Han Lim

Found 6 papers, 2 papers with code

Hierarchical Verification for Adversarial Robustness

no code implementations • ICML 2020 • Cong Han Lim, Raquel Urtasun, Ersin Yumer

We show that, under certain conditions on the algorithm parameters, LayerCert provably reduces the number and size of the convex programs that one needs to solve compared to GeoCert.

Adversarial Robustness

A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization

1 code implementation • 12 Dec 2019 • Ching-pei Lee, Cong Han Lim, Stephen J. Wright

When applied to the distributed dual ERM problem, our approach utilizes global curvature information, unlike state-of-the-art methods that use only the block-diagonal part of the Hessian, and is thus orders of magnitude faster.

Distributed Optimization
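To illustrate how quasi-Newton methods exploit curvature information in general (this is the standard L-BFGS two-loop recursion, not the paper's distributed algorithm), here is a minimal sketch that builds a curvature-aware search direction from stored parameter differences `s` and gradient differences `y`:

```python
# Illustrative sketch only: the classical L-BFGS two-loop recursion.
# The paper proposes a *distributed* quasi-Newton method for regularized
# ERM; this block just shows how stored (s, y) curvature pairs turn a
# gradient into a Newton-like search direction.
def lbfgs_direction(grad, s_list, y_list):
    q = list(grad)
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(alpha)
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    # Initial Hessian scaling gamma = s.y / y.y from the newest pair.
    s, y = s_list[-1], y_list[-1]
    gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    r = [gamma * qi for qi in q]
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos),
                                  reversed(alphas)):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    # The descent direction approximates -H^{-1} grad.
    return [-ri for ri in r]
```

For a quadratic objective with identity Hessian (where `y == s` for every pair), the recursion recovers the exact Newton direction `-grad`.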

An Efficient Pruning Algorithm for Robust Isotonic Regression

no code implementations • NeurIPS 2018 • Cong Han Lim

We study a generalization of the classic isotonic regression problem where we allow separable nonconvex objective functions, focusing on the case of estimators used in robust regression.

Regression
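For context, the classic (convex, squared-error) isotonic regression problem that the paper generalizes can be solved by the pool-adjacent-violators algorithm (PAVA). The sketch below shows only this convex baseline; the paper's setting, with separable nonconvex objectives such as robust losses, requires a different pruning-based approach:

```python
# Minimal pool-adjacent-violators (PAVA) sketch for classic isotonic
# regression with a squared-error objective. Adjacent blocks are merged
# (replaced by their weighted mean) whenever monotonicity is violated.
def isotonic_regression(y):
    blocks = []  # list of [mean, weight]
    for v in y:
        blocks.append([v, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w])
    fit = []
    for mean, weight in blocks:
        fit.extend([mean] * int(weight))
    return fit

print(isotonic_regression([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```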

A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization

1 code implementation • 4 Mar 2018 • Ching-pei Lee, Cong Han Lim, Stephen J. Wright

Initial computational results on convex problems demonstrate that our method significantly improves on communication cost and running time over the current state-of-the-art methods.

Distributed Optimization

k-Support and Ordered Weighted Sparsity for Overlapping Groups: Hardness and Algorithms

no code implementations • NeurIPS 2017 • Cong Han Lim, Stephen Wright

We study the norms obtained from extending the k-support norm and OWL norms to the setting in which there are overlapping groups.
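As background, here is a sketch of the standard (non-overlapping) ordered weighted L1 (OWL) norm: OWL_w(x) = Σ_i w_i |x|_[i], where |x|_[i] is the i-th largest absolute entry and the weights w are nonnegative and nonincreasing. The paper studies the harder extension of OWL and k-support norms to overlapping groups; this block illustrates only the base norm:

```python
# Hedged sketch of the ordered weighted L1 (OWL) norm in the standard,
# non-overlapping setting: pair the sorted absolute values of x with a
# nonincreasing weight vector w and sum the products.
def owl_norm(x, w):
    mags = sorted((abs(v) for v in x), reverse=True)
    return sum(wi * mi for wi, mi in zip(w, mags))

# With w = (1, 1, ..., 1) OWL reduces to the L1 norm; with
# w = (1, 0, ..., 0) it reduces to the L-infinity norm.
print(owl_norm([3.0, -1.0, 2.0], [1.0, 1.0, 1.0]))  # 6.0
print(owl_norm([3.0, -1.0, 2.0], [1.0, 0.0, 0.0]))  # 3.0
```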

Beyond the Birkhoff Polytope: Convex Relaxations for Vector Permutation Problems

no code implementations • NeurIPS 2014 • Cong Han Lim, Stephen Wright

Using a recent construction of Goemans (2010), we show that when optimizing over the convex hull of the permutation vectors (the permutahedron), we can reduce the number of variables and constraints to $\Theta(n \log n)$ in theory and $\Theta(n \log^2 n)$ in practice.
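For intuition, the permutahedron P_n = conv{permutations of (1, ..., n)} has a classical majorization characterization: x ∈ P_n iff Σx = n(n+1)/2 and, for every k, the sum of the k largest entries of x is at most n + (n−1) + ... + (n−k+1). The sketch below tests membership via this exponential-facet description; the paper instead uses Goemans' sorting-network construction to get an extended formulation with only Θ(n log n) variables and constraints suitable for optimization:

```python
# Hedged sketch: membership test for the permutahedron P_n via the
# classical majorization conditions (total sum fixed, partial sums of
# the sorted entries bounded by n + (n-1) + ...). This is illustrative
# only and is not the compact formulation used in the paper.
def in_permutahedron(x, tol=1e-9):
    n = len(x)
    if abs(sum(x) - n * (n + 1) / 2) > tol:
        return False
    s = sorted(x, reverse=True)
    partial, bound = 0.0, 0
    for k in range(1, n):
        partial += s[k - 1]
        bound += n - k + 1
        if partial > bound + tol:
            return False
    return True

print(in_permutahedron([2.0, 2.0, 2.0]))  # True: barycenter of P_3
print(in_permutahedron([3.0, 3.0, 0.0]))  # False: top-1 sum exceeds 3
```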
