Search Results for author: Chicheng Zhang

Found 34 papers, 10 papers with code

Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits

1 code implementation • 17 Feb 2024 • Kyoungseok Jang, Chicheng Zhang, Kwang-Sung Jun

Assuming access to the distribution of the covariates, we propose a novel low-rank matrix estimation method called LowPopArt and provide its recovery guarantee that depends on a novel quantity denoted by B(Q) that characterizes the hardness of the problem, where Q is the covariance matrix of the measurement distribution.

Computational Efficiency • Efficient Exploration +2

Agnostic Interactive Imitation Learning: New Theory and Practical Algorithms

1 code implementation • 28 Dec 2023 • Yichen Li, Chicheng Zhang

We study interactive imitation learning, where a learner interactively queries a demonstrating expert for action annotations, aiming to learn a policy that has performance competitive with the expert, using as few annotations as possible.

continuous-control • Continuous Control +1
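The interactive querying loop described above resembles the well-known DAgger recipe: roll out the current policy, ask the expert to annotate the states it visits, and retrain on everything gathered so far. A minimal sketch on a toy 5-state chain (the environment, the always-move-right `expert`, and memorization-as-fitting are illustrative assumptions, not the paper's algorithm):

```python
def rollout(policy, start=0, horizon=4):
    """Visit states of a tiny chain MDP on {0,...,4} under the policy."""
    s, visited = start, []
    for _ in range(horizon):
        visited.append(s)
        a = policy.get(s, 0)       # default action in unseen states
        s = max(0, min(4, s + a))
    return visited

def expert(s):
    return 1                       # expert annotation: always move right

def interactive_il(rounds=3):
    """DAgger-style loop: annotate states the current policy reaches,
    then refit on the aggregated annotations."""
    data, policy = {}, {}
    for _ in range(rounds):
        for s in rollout(policy):  # states visited by the learner
            data[s] = expert(s)    # query the expert only on those
        policy = dict(data)        # "fit" = memorize annotations
    return policy
```

Each round the learner's own mistakes determine which states get annotated, which is the point of interactive (rather than offline) imitation.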

Efficient Active Learning Halfspaces with Tsybakov Noise: A Non-convex Optimization Approach

no code implementations • 23 Oct 2023 • Yinan Li, Chicheng Zhang

We study the problem of computationally and label efficient PAC active learning of $d$-dimensional halfspaces with Tsybakov noise (Tsybakov, 2004) under structured unlabeled data distributions.

Active Learning

Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards

1 code implementation • NeurIPS 2023 • Hao Qin, Kwang-Sung Jun, Chicheng Zhang

Maillard sampling (Maillard, 2013), an attractive alternative to Thompson sampling, has recently been shown to achieve competitive regret guarantees in the sub-Gaussian reward setting (Bian and Jun, 2022) while maintaining closed-form action probabilities, which is useful for offline policy evaluation.

Thompson Sampling
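The "closed-form action probabilities" mentioned above are the defining feature of Maillard-style sampling: each arm's play probability is an explicit function of its empirical gap and pull count, with no posterior sampling. A minimal sketch of the sub-Gaussian form (the `sigma2` value and Gaussian-style weighting are illustrative; the paper's variant uses a KL-based exponent for bounded rewards):

```python
import math

def maillard_probs(means, counts, sigma2=0.25):
    """Play arm a with probability proportional to
    exp(-N_a * gap_a^2 / (2 * sigma^2)), where gap_a is the arm's
    empirical gap to the best mean. Unlike Thompson sampling, these
    probabilities are available in closed form."""
    best = max(means)
    w = [math.exp(-n * (best - m) ** 2 / (2 * sigma2))
         for m, n in zip(means, counts)]
    z = sum(w)
    return [x / z for x in w]
```

Because the probabilities are explicit, logged actions can be reweighted exactly for offline policy evaluation.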

PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits

1 code implementation • 25 Oct 2022 • Kyoungseok Jang, Chicheng Zhang, Kwang-Sung Jun

In this paper, we propose a simple and computationally efficient sparse linear estimation method called PopArt that enjoys a tighter $\ell_1$ recovery guarantee compared to Lasso (Tibshirani, 1996) in many problems.

Decision Making • Experimental Design +2

On Efficient Online Imitation Learning via Classification

no code implementations • 26 Sep 2022 • Yichen Li, Chicheng Zhang

We make the following contributions: (1) we show that in the $\textbf{COIL}$ problem, any proper online learning algorithm cannot guarantee a sublinear regret in general; (2) we propose $\textbf{Logger}$, an improper online learning algorithmic framework, that reduces $\textbf{COIL}$ to online linear optimization, by utilizing a new definition of mixed policy class; (3) we design two oracle-efficient algorithms within the $\textbf{Logger}$ framework that enjoy different sample and interaction round complexity tradeoffs, and conduct finite-sample analyses to show their improvements over naive behavior cloning; (4) we show that under the standard complexity-theoretic assumptions, efficient dynamic regret minimization is infeasible in the $\textbf{Logger}$ framework.

Classification • Imitation Learning +1
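The "naive behavior cloning" that the finite-sample analyses above improve upon can be sketched as plain supervised learning on expert state-action pairs, with no further interaction. Here "fitting" is a per-state majority vote, an illustrative stand-in for any supervised learner:

```python
from collections import Counter, defaultdict

def behavior_cloning(demos):
    """Fit a policy from (state, expert_action) pairs alone.
    Each state gets the action the expert chose most often there."""
    votes = defaultdict(Counter)
    for s, a in demos:
        votes[s][a] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}
```

States the expert never visited are simply absent from the learned policy, which is exactly the distribution-shift weakness that online/interactive formulations address.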

Thompson Sampling for Robust Transfer in Multi-Task Bandits

1 code implementation • 17 Jun 2022 • Zhi Wang, Chicheng Zhang, Kamalika Chaudhuri

We study the problem of online multi-task learning where the tasks are performed within similar but not necessarily identical multi-armed bandit environments.

Multi-Task Learning • Thompson Sampling

Active Fairness Auditing

no code implementations • 16 Jun 2022 • Tom Yan, Chicheng Zhang

The fast-spreading adoption of machine learning (ML) by companies across industries poses significant regulatory challenges.

Fairness

Margin-distancing for safe model explanation

no code implementations • 23 Feb 2022 • Tom Yan, Chicheng Zhang

The growing use of machine learning models in consequential settings has highlighted an important and seemingly irreconcilable tension between transparency and vulnerability to gaming.

SIM-ECG: A Signal Importance Mask-driven ECG Classification System

no code implementations • 28 Oct 2021 • Dharma KC, Chicheng Zhang, Chris Gniady, Parth Sandeep Agarwal, Sushil Sharma

Heart disease is the number one killer, and ECGs can assist in the early diagnosis and prevention of deadly outcomes.

Improving the trustworthiness of image classification models by utilizing bounding-box annotations

1 code implementation • 15 Aug 2021 • Dharma KC, Chicheng Zhang

We study utilizing auxiliary information in training data to improve the trustworthiness of machine learning models.

BIG-bench Machine Learning • Classification +1

Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise

no code implementations • 10 Feb 2021 • Chicheng Zhang, Yinan Li

We give a computationally-efficient PAC active learning algorithm for $d$-dimensional homogeneous halfspaces that can tolerate Massart noise (Massart and Nédélec, 2006) and Tsybakov noise (Tsybakov, 2004).

Active Learning

Multitask Bandit Learning Through Heterogeneous Feedback Aggregation

1 code implementation • 29 Oct 2020 • Zhi Wang, Chicheng Zhang, Manish Kumar Singh, Laurel D. Riek, Kamalika Chaudhuri

In many real-world applications, multiple agents seek to learn how to perform highly related yet slightly different tasks in an online bandit learning protocol.

Active Online Learning with Hidden Shifting Domains

no code implementations • 25 Jun 2020 • Yining Chen, Haipeng Luo, Tengyu Ma, Chicheng Zhang

We propose a surprisingly simple algorithm that adaptively balances its regret and its number of label queries in settings where the data streams are from a mixture of hidden domains.

Domain Adaptation • regression

Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality

no code implementations • NeurIPS 2020 • Kwang-Sung Jun, Chicheng Zhang

In this paper, we focus on the finite hypothesis case and ask if one can achieve the asymptotic optimality while enjoying bounded regret whenever possible.

Active Online Domain Adaptation

no code implementations • ICML Workshop LifelongML 2020 • Yining Chen, Haipeng Luo, Tengyu Ma, Chicheng Zhang

We propose a surprisingly simple algorithm that adaptively balances its regret and its number of label queries in settings where the data streams are from a mixture of hidden domains.

Online Domain Adaptation • regression

Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance

no code implementations • 6 Jun 2020 • Jie Shen, Chicheng Zhang

We answer this question in the affirmative by designing a computationally efficient active learning algorithm with near-optimal label complexity of $\tilde{O}\big(s \log^4 \frac{d}{\epsilon}\big)$ and noise tolerance $\eta = \Omega(\epsilon)$, where $\epsilon \in (0, 1)$ is the target error rate, under the assumption that the distribution over (uncorrupted) unlabeled examples is isotropic log-concave.

Active Learning • Attribute +1

Efficient active learning of sparse halfspaces with arbitrary bounded noise

no code implementations • NeurIPS 2020 • Chicheng Zhang, Jie Shen, Pranjal Awasthi

Even in the presence of mild label noise, i.e., $\eta$ is a small constant, this is a challenging problem, and only recently have label complexity bounds of the form $\tilde{O}\big(s \cdot \mathrm{polylog}(d, \frac{1}{\epsilon})\big)$ been established in [Zhang, 2018] for computationally efficient algorithms.

Active Learning

Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case

no code implementations • 6 Feb 2019 • Alina Beygelzimer, Dávid Pál, Balázs Szörényi, Devanathan Thiruvenkatachari, Chen-Yu Wei, Chicheng Zhang

Under the more challenging weak linear separability condition, we design an efficient algorithm with a mistake bound of $\min (2^{\widetilde{O}(K \log^2 (1/\gamma))}, 2^{\widetilde{O}(\sqrt{1/\gamma} \log K)})$.

Classification • General Classification

Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback

1 code implementation • 2 Jan 2019 • Chicheng Zhang, Alekh Agarwal, Hal Daumé III, John Langford, Sahand N. Negahban

We investigate the feasibility of learning from a mix of both fully-labeled supervised data and contextual bandit data.

Multi-Armed Bandits
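A standard way to make the two feedback sources above commensurable is to fold both into one weighted cost-sensitive dataset, reweighting the logged bandit records by inverse propensity scoring (IPS). This is a generic construction for illustration, not necessarily the paper's robust combination method (the record layouts are assumptions):

```python
def combine_feedback(supervised, bandit):
    """Merge fully-labeled examples with logged bandit feedback.

    supervised: list of (context, true_action) pairs
    bandit:     list of (context, action_taken, reward, logging_prob)
    Returns (context, action, weight) triples on a common scale."""
    data = []
    for x, action in supervised:
        data.append((x, action, 1.0))            # full information: weight 1
    for x, action, reward, prop in bandit:
        data.append((x, action, reward / prop))  # IPS-weighted reward estimate
    return data
```

The IPS weights make the bandit portion an unbiased estimate of full-information reward, at the price of higher variance when logging probabilities are small.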

Efficient active learning of sparse halfspaces

no code implementations • 7 May 2018 • Chicheng Zhang

We study the problem of efficient PAC active learning of homogeneous linear classifiers (halfspaces) in $\mathbb{R}^d$, where the goal is to learn a halfspace with low error using as few label queries as possible.

Active Learning • Attribute

Spectral Learning of Binomial HMMs for DNA Methylation Data

no code implementations • 7 Feb 2018 • Chicheng Zhang, Eran A. Mukamel, Kamalika Chaudhuri

We consider learning parameters of Binomial Hidden Markov Models, which may be used to model DNA methylation data.

Computational Efficiency • Tensor Decomposition

Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret

no code implementations • 25 Feb 2017 • Alina Beygelzimer, Francesco Orabona, Chicheng Zhang

The regret bound holds simultaneously with respect to a family of loss functions parameterized by $\eta$, for a range of $\eta$ restricted by the norm of the competitor.

Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces

no code implementations • NeurIPS 2017 • Songbai Yan, Chicheng Zhang

It has been a long-standing problem to efficiently learn a halfspace using as few labels as possible in the presence of noise.

Active Learning
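A common device for label-optimal halfspace learning of the kind described above is a Perceptron variant that only queries labels inside a margin band around the current hypothesis, where the label is genuinely uncertain. A sketch under that generic idea (the `band` width and update rule are illustrative, not the paper's exact algorithm):

```python
import math

def active_perceptron(xs, oracle, band=0.3, passes=5):
    """Query labels only where |<w, x>| / ||w|| is small, i.e. near the
    current decision boundary, and do standard Perceptron updates on
    the mistakes among queried points."""
    w = [0.0] * len(xs[0])
    queries = 0
    for _ in range(passes):
        for x in xs:
            score = sum(wi * xi for wi, xi in zip(w, x))
            norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
            if abs(score) / norm <= band:   # uncertain region: ask for a label
                y = oracle(x)
                queries += 1
                if y * score <= 0:          # mistake: Perceptron update
                    w = [wi + y * xi for wi, xi in zip(w, x)]
    return w, queries
```

Points far from the boundary are never queried, which is where the label savings over passive learning come from.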

Search Improves Label for Active Learning

no code implementations • NeurIPS 2016 • Alina Beygelzimer, Daniel Hsu, John Langford, Chicheng Zhang

We investigate active learning with access to two distinct oracles: Label (which is standard) and Search (which is not).

Active Learning

Active Learning from Weak and Strong Labelers

no code implementations • NeurIPS 2015 • Chicheng Zhang, Kamalika Chaudhuri

This work addresses active learning with labels obtained from strong and weak labelers, where in addition to the standard active learning setting, we have an extra weak labeler which may occasionally provide incorrect labels.

Active Learning

Spectral Learning of Large Structured HMMs for Comparative Epigenomics

no code implementations • NeurIPS 2015 • Chicheng Zhang, Jimin Song, Kevin C Chen, Kamalika Chaudhuri

We develop a latent variable model and an efficient spectral algorithm motivated by the recent emergence of very large data sets of chromatin marks from multiple human cell types.

Beyond Disagreement-based Agnostic Active Learning

no code implementations • NeurIPS 2014 • Chicheng Zhang, Kamalika Chaudhuri

We study agnostic active learning, where the goal is to learn a classifier in a pre-specified hypothesis class interactively with as few label queries as possible, while making no assumptions on the true function generating the labels.

Active Learning • General Classification
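The disagreement-based approach that this work goes beyond (the CAL-style baseline) queries a point's label only when hypotheses still consistent with past labels disagree on it. A minimal sketch with a toy 1-D threshold class (the hypothesis class and thresholds are illustrative assumptions):

```python
def disagreement_query(version_space, x):
    """CAL-style rule: query the label of x only if two surviving
    hypotheses disagree on its prediction."""
    preds = {h(x) for h in version_space}
    return len(preds) > 1

# Toy hypothesis class: 1-D thresholds h_t(x) = [x >= t]
thresholds = [0.2, 0.4, 0.6, 0.8]
version_space = [lambda x, t=t: int(x >= t) for t in thresholds]
```

Points outside the disagreement region are labeled identically by every surviving hypothesis, so querying them reveals nothing; the cost of this conservatism in the agnostic setting is what motivates going beyond it.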
