Search Results for author: Jean Honorio

Found 69 papers, 3 papers with code

PyXAB -- A Python Library for $\mathcal{X}$-Armed Bandit and Online Blackbox Optimization Algorithms

1 code implementation 7 Mar 2023 Wenjie Li, Haoze Li, Jean Honorio, Qifan Song

We introduce a Python open-source library for $\mathcal{X}$-armed bandit and online blackbox optimization named PyXAB.
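As a self-contained illustration of the problem PyXAB targets (and deliberately not the library's own API), here is a minimal sketch of online blackbox optimization over a continuous domain: discretize the domain into cells and run UCB1 over the cells, a toy stand-in for the hierarchical partitioning that $\mathcal{X}$-armed bandit algorithms use. All names and constants are illustrative.

```python
import math
import random

def ucb_blackbox_maximize(f, lo, hi, n_cells=20, budget=500, seed=0):
    """Maximize a noisy blackbox function on [lo, hi] by discretizing
    the domain into cells and running UCB1 over the cells. A toy
    stand-in for X-armed bandit partitioning, not PyXAB's API."""
    rng = random.Random(seed)
    counts = [0] * n_cells
    sums = [0.0] * n_cells
    width = (hi - lo) / n_cells

    def ucb(i, t):
        if counts[i] == 0:
            return float("inf")  # force each cell to be tried once
        return sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])

    for t in range(1, budget + 1):
        i = max(range(n_cells), key=lambda c: ucb(c, t))
        x = lo + (i + rng.random()) * width  # sample uniformly inside cell i
        counts[i] += 1
        sums[i] += f(x)                      # one noisy evaluation

    best = max(range(n_cells),
               key=lambda c: sums[c] / counts[c] if counts[c] else float("-inf"))
    return lo + (best + 0.5) * width         # center of the best cell

# Noisy objective with its maximum at x = 0.6
random.seed(1)
noisy = lambda x: -(x - 0.6) ** 2 + random.gauss(0.0, 0.01)
x_star = ucb_blackbox_maximize(noisy, 0.0, 1.0)
```

With a budget of 500 evaluations the returned cell center lands close to the true maximizer despite the evaluation noise.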

Exact Inference in High-order Structured Prediction

no code implementations 7 Feb 2023 Chuyang Ke, Jean Honorio

In this paper, we study the problem of inference in high-order structured prediction tasks.

Structured Prediction · Vocal Bursts Intensity Prediction

Support Recovery in Sparse PCA with Non-Random Missing Data

no code implementations 3 Feb 2023 Hanbyul Lee, Qifan Song, Jean Honorio

We analyze a practical algorithm for sparse PCA on incomplete and noisy data under a general non-random sampling scheme.

Learning Against Distributional Uncertainty: On the Trade-off Between Robustness and Specificity

no code implementations 31 Jan 2023 Shixiong Wang, Haowei Wang, Jean Honorio

Trustworthy machine learning aims at combating distributional uncertainties in training data distributions compared to population distributions.


MEDIC: Remove Model Backdoors via Importance Driven Cloning

no code implementations CVPR 2023 QiuLing Xu, Guanhong Tao, Jean Honorio, Yingqi Liu, Shengwei An, Guangyu Shen, Siyuan Cheng, Xiangyu Zhang

It trains the clone model from scratch on a very small subset of samples and aims to minimize a cloning loss that measures the differences between the activations of important neurons across the two models.

Knowledge Distillation

A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression

no code implementations 21 Dec 2022 Deepak Maurya, Jean Honorio

This paper analyzes $\ell_1$ regularized linear regression under the challenging scenario of having only adversarially corrupted data for training.
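A minimal sketch of the setting (not the paper's analysis): plain $\ell_1$-regularized least squares fit by ISTA on data whose responses are partly corrupted. The corruption pattern, penalty, and thresholds below are illustrative.

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """l1-regularized least squares via ISTA (proximal gradient):
    minimize (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    w = np.zeros(p)
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - y) / n)                  # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
n, p = 200, 50
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)
y[:10] += 5.0                                  # adversarially shift 10 responses
w_hat = ista_lasso(X, y, lam=0.2)
support = set(np.flatnonzero(np.abs(w_hat) > 0.1))
```

Even with a few grossly corrupted responses, the large coefficients remain identifiable at this corruption level; the paper studies how far such guarantees extend when *all* training data is adversarially corrupted.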


Distributional Robustness Bounds Generalization Errors

no code implementations 20 Dec 2022 Shixiong Wang, Haowei Wang, Jean Honorio

Third, we show that the generalization error of a machine learning model can be characterized by the distributional uncertainty of the nominal distribution together with the robustness measures of the model. This offers a new perspective for bounding generalization errors and explains why distributionally robust models, Bayesian models, and regularized models tend to have smaller generalization errors.

A Novel Plug-and-Play Approach for Adversarially Robust Generalization

no code implementations 19 Aug 2022 Deepak Maurya, Adarsh Barik, Jean Honorio

In this work, we propose a robust framework that employs adversarially robust training to safeguard the machine learning models against perturbed testing data.

Matrix Completion · regression

Meta Learning for High-dimensional Ising Model Selection Using $\ell_1$-regularized Logistic Regression

no code implementations 19 Aug 2022 Huiming Xie, Jean Honorio

In this paper, we consider the meta learning problem for estimating the graphs associated with high-dimensional Ising models, using the method of $\ell_1$-regularized logistic regression for neighborhood selection of each node.

Meta-Learning · Model Selection · +1

Meta Sparse Principal Component Analysis

no code implementations 18 Aug 2022 Imon Banerjee, Jean Honorio

We assume each task to be a different random Principal Component (PC) matrix with a possibly different support and that the support union of the PC matrices is small.


Provable Guarantees for Sparsity Recovery with Deterministic Missing Data Patterns

no code implementations 10 Jun 2022 Chuyang Ke, Jean Honorio

We study the problem of consistently recovering the sparsity pattern of a regression parameter vector from correlated observations governed by deterministic missing data patterns using Lasso.


Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation

no code implementations 2 Jun 2022 Adarsh Barik, Jean Honorio

Since the data is unlabeled, our task is not only to figure out a good approximation of the regression parameter vectors but also to label the dataset correctly.


Support Recovery in Sparse PCA with Incomplete Data

no code implementations 30 May 2022 Hanbyul Lee, Qifan Song, Jean Honorio

We study a practical algorithm for sparse principal component analysis (PCA) of incomplete and noisy data.
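The following is a hedged caricature of the setting, not the paper's algorithm: a spiked model with a sparse leading principal component, a deterministic fraction of missing entries, a covariance estimate from pairwise-complete observations, and support recovery by diagonal screening followed by truncated power iteration. All constants and the recovery procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 400, 30, 4
v = np.zeros(d)
v[:s] = 1.0 / np.sqrt(s)                 # sparse leading principal component
X = 3.0 * rng.standard_normal((n, 1)) @ v[None, :] + rng.standard_normal((n, d))
mask = rng.random((n, d)) < 0.7          # ~30% of entries are missing
Xobs = np.where(mask, X, 0.0)

# Covariance estimated from pairwise-complete observations
counts = mask.T.astype(float) @ mask.astype(float)
S = (Xobs.T @ Xobs) / np.maximum(counts, 1.0)

# Initialize on the s largest diagonal entries, then run truncated
# power iteration, keeping only the s largest coordinates each step.
u = np.zeros(d)
u[np.argsort(np.diag(S))[-s:]] = 1.0
u /= np.linalg.norm(u)
for _ in range(50):
    u = S @ u
    keep = np.argsort(np.abs(u))[-s:]
    trunc = np.zeros(d)
    trunc[keep] = u[keep]
    u = trunc / np.linalg.norm(trunc)
support = sorted(np.flatnonzero(u).tolist())
```

At this signal strength the sparse support survives the missing entries; the paper characterizes when such recovery is provably possible.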

Federated X-Armed Bandit

1 code implementation 30 May 2022 Wenjie Li, Qifan Song, Jean Honorio, Guang Lin

This work establishes the first framework of federated $\mathcal{X}$-armed bandit, where different clients face heterogeneous local objective functions defined on the same domain and are required to collaboratively figure out the global optimum.

Dual Convexified Convolutional Neural Networks

no code implementations 27 May 2022 Site Bai, Chuyang Ke, Jean Honorio

To overcome this, we propose a novel weight recovery algorithm, which takes the dual solution and the kernel information as input and recovers the linear weight and the output of the convolutional layer, instead of the weight parameters.

Federated Myopic Community Detection with One-shot Communication

no code implementations 14 Jun 2021 Chuyang Ke, Jean Honorio

We provide an efficient algorithm, which computes a consensus signed weighted graph from the clients' evidence and recovers the underlying network structure in the central server.

Community Detection

A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning

no code implementations 7 Mar 2021 Abi Komanduru, Jean Honorio

Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP).

reinforcement-learning · Reinforcement Learning (RL)

Information-Theoretic Bounds for Integral Estimation

no code implementations 19 Feb 2021 Donald Q. Adams, Adarsh Barik, Jean Honorio

For functions with nonzero fourth derivatives, the Gaussian Quadrature method achieves an upper bound that does not match the information-theoretic lower bound.
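For concreteness, a short sketch of the Gaussian Quadrature method in the regime discussed above (a function with nonzero fourth derivative); the node count and interval are illustrative.

```python
import numpy as np

def gauss_legendre(f, a, b, n_nodes):
    """n-point Gauss-Legendre quadrature on [a, b]; exact for
    polynomials of degree <= 2*n_nodes - 1."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)  # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# exp(x) has a nonzero fourth derivative; a 5-point rule is already
# accurate to roughly 1e-10 on [-1, 1].
exact = np.e - 1.0 / np.e                  # integral of exp over [-1, 1]
err = abs(gauss_legendre(np.exp, -1.0, 1.0, 5) - exact)
```

The rapid error decay in the number of nodes is what the paper compares against its information-theoretic lower bound.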

A Simple Unified Framework for High Dimensional Bandit Problems

no code implementations 18 Feb 2021 Wenjie Li, Adarsh Barik, Jean Honorio

Stochastic high dimensional bandit problems with low dimensional structures are useful in different applications such as online advertising and drug discovery.

Drug Discovery · Vocal Bursts Intensity Prediction

On the Fundamental Limits of Exact Inference in Structured Prediction

no code implementations 17 Feb 2021 Hanbyul Lee, Kevin Bello, Jean Honorio

Inference is a main task in structured prediction and it is naturally modeled with a graph.

Structured Prediction

A Thorough View of Exact Inference in Graphs from the Degree-4 Sum-of-Squares Hierarchy

no code implementations 16 Feb 2021 Kevin Bello, Chuyang Ke, Jean Honorio

Performing inference in graphs is a common task within several machine learning problems, e.g., image segmentation and community detection, among others.

Combinatorial Optimization · Community Detection · +2

Information Theoretic Limits of Exact Recovery in Sub-hypergraph Models for Community Detection

no code implementations 29 Jan 2021 Jiajun Liang, Chuyang Ke, Jean Honorio

Our bounds are tight and pertain to the community detection problems in various models such as the planted hypergraph stochastic block model, the planted densest sub-hypergraph model, and the planted multipartite hypergraph model.

Community Detection · Stochastic Block Model

Randomized Deep Structured Prediction for Discourse-Level Processing

no code implementations EACL 2021 Manuel Widmoser, Maria Leonor Pacheco, Jean Honorio, Dan Goldwasser

In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.

Structured Prediction

Information Theoretic Lower Bounds for Feed-Forward Fully-Connected Deep Networks

no code implementations 1 Jul 2020 Xiaochen Yang, Jean Honorio

In this paper, we study the sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools.

Binary Classification

A Le Cam Type Bound for Adversarial Learning and Applications

no code implementations 1 Jul 2020 QiuLing Xu, Kevin Bello, Jean Honorio

Robustness of machine learning methods is essential for modern practical applications.

Vocal Bursts Type Prediction

Fairness constraints can help exact inference in structured prediction

no code implementations NeurIPS 2020 Kevin Bello, Jean Honorio

Given a generative model with an undirected connected graph $G$ and true vector of binary labels, it has been previously shown that when $G$ has good expansion properties, such as complete graphs or $d$-regular expanders, one can exactly recover the true labels (with high probability and in polynomial time) from a single noisy observation of each edge and node.

Fairness · Structured Prediction

Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation

no code implementations 22 Jun 2020 Qian Zhang, Yilin Zheng, Jean Honorio

Then for the novel task, we prove that the minimization of the $\ell_1$-regularized log-determinant Bregman divergence with the additional constraint that the support is a subset of the estimated support union could reduce the sufficient sample complexity of successful support recovery to $O(\log(|S_{\text{off}}|))$ where $|S_{\text{off}}|$ is the number of off-diagonal elements in the support union and is much less than $N$ for sparse matrices.

Meta-Learning · Vocal Bursts Intensity Prediction

Exact Support Recovery in Federated Regression with One-shot Communication

no code implementations 22 Jun 2020 Adarsh Barik, Jean Honorio

Federated learning provides a framework to address the challenges of distributed computing, data ownership and privacy over a large number of distributed clients with low computational and communication capabilities.

Distributed Computing · Federated Learning · +2

Exact Partitioning of High-order Planted Models with a Tensor Nuclear Norm Constraint

no code implementations 20 Jun 2020 Chuyang Ke, Jean Honorio

We study the problem of efficient exact partitioning of the hypergraphs generated by high-order planted models.

Provable Sample Complexity Guarantees for Learning of Continuous-Action Graphical Games with Nonparametric Utilities

no code implementations 1 Apr 2020 Adarsh Barik, Jean Honorio

In this paper, we study the problem of learning the exact structure of continuous-action games with non-parametric utility functions.

Information-Theoretic Lower Bounds for Zero-Order Stochastic Gradient Estimation

no code implementations 31 Mar 2020 Abdulrahman Alabdulkareem, Jean Honorio

In this paper we analyze the necessary number of samples to estimate the gradient of any multidimensional smooth (possibly non-convex) function in a zero-order stochastic oracle model.

First Order Methods take Exponential Time to Converge to Global Minimizers of Non-Convex Functions

no code implementations 28 Feb 2020 Krishna Reddy Kesari, Jean Honorio

We show that the parameter estimation problem is equivalent to the problem of function identification in the given family.

BIG-bench Machine Learning

Novel Change of Measure Inequalities with Applications to PAC-Bayesian Bounds and Monte Carlo Estimation

no code implementations 25 Feb 2020 Yuki Ohnishi, Jean Honorio

We introduce several novel change of measure inequalities for two families of divergences: $f$-divergences and $\alpha$-divergences.
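A Monte Carlo sanity check of the basic change-of-measure inequality behind such bounds, $\mathbb{E}_P[f] \le \mathrm{KL}(P\|Q) + \log \mathbb{E}_Q[e^f]$ (Donsker-Varadhan), on a pair of Gaussians where the Kullback-Leibler divergence is known in closed form. The distributions and the choice of $f$ (which makes the bound tight) are illustrative, not taken from the paper.

```python
import numpy as np

# Check  E_P[f] <= KL(P||Q) + log E_Q[exp(f)]
# for P = N(mu, 1), Q = N(0, 1), where KL(P||Q) = mu^2 / 2.
# With f(x) = mu*x - mu^2/2 the inequality holds with equality.
rng = np.random.default_rng(0)
mu = 1.5
kl = mu ** 2 / 2

xp = rng.normal(mu, 1.0, size=200_000)    # samples from P
xq = rng.normal(0.0, 1.0, size=200_000)   # samples from Q

f = lambda x: mu * x - mu ** 2 / 2
lhs = f(xp).mean()                          # estimates mu^2 / 2
rhs = kl + np.log(np.mean(np.exp(f(xq))))   # estimates mu^2 / 2 as well
gap = rhs - lhs                             # ~0 for this optimal f
```

For suboptimal choices of $f$ the gap is strictly positive; the paper's inequalities replace the KL term with other $f$- and $\alpha$-divergences.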

The Sample Complexity of Meta Sparse Regression

no code implementations 22 Feb 2020 Zhanyu Wang, Jean Honorio

A key difference between meta-learning and classical multi-task learning is that meta-learning focuses only on the recovery of the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires $l$ to grow with $T$.

Few-Shot Learning · Multi-Task Learning · +1

Provable Computational and Statistical Guarantees for Efficient Learning of Continuous-Action Graphical Games

no code implementations 8 Nov 2019 Adarsh Barik, Jean Honorio

We propose an $\ell_{12}$-block regularized method which recovers a graphical game whose Nash equilibria are the $\epsilon$-Nash equilibria of the game from which the data was generated (the true game).

Exact Partitioning of High-order Models with a Novel Convex Tensor Cone Relaxation

no code implementations 6 Nov 2019 Chuyang Ke, Jean Honorio

In this paper we propose an algorithm for exact partitioning of high-order models.

Direct Learning with Guarantees of the Difference DAG Between Structural Equation Models

no code implementations 28 Jun 2019 Asish Ghoshal, Kevin Bello, Jean Honorio

Discovering cause-effect relationships between variables from observational data is a fundamental challenge in many scientific disciplines.

Minimax bounds for structured prediction

no code implementations 2 Jun 2019 Kevin Bello, Asish Ghoshal, Jean Honorio

Structured prediction can be considered as a generalization of many standard supervised learning tasks, and is usually thought as a simultaneous prediction of multiple labels.

Structured Prediction

Exact inference in structured prediction

no code implementations NeurIPS 2019 Kevin Bello, Jean Honorio

Our results show that exact recovery is possible and achievable in polynomial time for a large class of graphs.

Structured Prediction

On the Correctness and Sample Complexity of Inverse Reinforcement Learning

1 code implementation NeurIPS 2019 Abi Komanduru, Jean Honorio

The paper further analyzes the proposed formulation of inverse reinforcement learning with $n$ states and $k$ actions, and shows a sample complexity of $O(n^2 \log (nk))$ for recovering a reward function that generates a policy that satisfies Bellman's optimality condition with respect to the true transition probabilities.

reinforcement-learning · Reinforcement Learning (RL)
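The Bellman optimality condition referenced in the abstract above can be checked numerically on a toy MDP with $n$ states and $k$ actions. The transition probabilities and reward below are made up for illustration; this is value iteration, not the paper's LP formulation of IRL.

```python
import numpy as np

# Toy MDP with n = 3 states and k = 2 actions (made-up data).
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],  # P[0][s] = next-state dist, action 0
    [[0.2, 0.7, 0.1], [0.1, 0.2, 0.7], [0.6, 0.2, 0.2]],  # action 1
])
r = np.array([0.0, 0.0, 1.0])   # per-state reward; state 2 is the goal
gamma = 0.9

# Value iteration: V <- max_a [ r + gamma * P_a V ]
V = np.zeros(3)
for _ in range(1000):
    V = (r + gamma * P @ V).max(axis=0)

Q = r + gamma * P @ V                          # Q[a, s]
policy = Q.argmax(axis=0)                      # greedy: drift toward state 2, then stay
bellman_gap = np.abs(V - Q.max(axis=0)).max()  # ~0 at optimality
```

A reward recovered by IRL is acceptable precisely when the demonstrated policy is greedy with respect to the resulting `Q`, i.e. when this gap vanishes under the true transition probabilities.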

Learning Bayesian Networks with Low Rank Conditional Probability Tables

no code implementations NeurIPS 2019 Adarsh Barik, Jean Honorio

In this paper, we provide a method to learn the directed structure of a Bayesian network using data.

Exact Inference with Latent Variables in an Arbitrary Domain

no code implementations 28 Jan 2019 Chuyang Ke, Jean Honorio

We analyze the necessary and sufficient conditions for exact inference of a latent model.

Optimality Implies Kernel Sum Classifiers are Statistically Efficient

no code implementations 25 Jan 2019 Raphael Arkady Meyer, Jean Honorio

We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal kernel sum classifiers.

Learning Theory

Statistically and Computationally Efficient Variance Estimator for Kernel Ridge Regression

no code implementations 17 Sep 2018 Meimei Liu, Jean Honorio, Guang Cheng

In this paper, we propose a random projection approach to estimate variance in kernel ridge regression.
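For context, a sketch of the naive baseline the paper improves on: Gaussian-kernel ridge regression solved directly at $O(n^3)$ cost, with the residual variance estimated using the effective degrees of freedom of the smoother. The data, bandwidth, and penalty are illustrative assumptions; the paper's random-projection estimator avoids this cubic computation.

```python
import numpy as np

def gaussian_kernel(A, B, gamma):
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n = 100
X = rng.uniform(-1, 1, size=(n, 1))
sigma = 0.1                                   # true noise level
y = np.sin(3 * X[:, 0]) + sigma * rng.standard_normal(n)

lam, gamma = 1e-3, 5.0
K = gaussian_kernel(X, X, gamma)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # KRR dual coefficients

# Naive O(n^3) variance estimate: residual sum of squares corrected by
# the effective degrees of freedom of the smoother S = K (K + n*lam*I)^{-1}.
S = K @ np.linalg.inv(K + n * lam * np.eye(n))
df = np.trace(S)
rss = np.sum((y - K @ alpha) ** 2)
sigma2_hat = rss / (n - df)
```

The estimate lands near the true `sigma ** 2`; the point of the random-projection approach is to obtain a comparable estimate without forming and inverting the full kernel matrix.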


Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time

no code implementations ICML 2018 Asish Ghoshal, Jean Honorio

In this paper, we propose a provably polynomial time randomized algorithm for learning the parameters of perturbed MAP predictors.

Structured Prediction

Regularized Loss Minimizers with Local Data Perturbation: Consistency and Data Irrecoverability

no code implementations 19 May 2018 Zitao Li, Jean Honorio

We introduce a new concept, data irrecoverability, and show that the well-studied concept of data privacy is sufficient but not necessary for data irrecoverability.

Learning discrete Bayesian networks in polynomial time and sample complexity

no code implementations 12 Mar 2018 Adarsh Barik, Jean Honorio

The problem is NP-hard in general but we show that under certain conditions we can recover the true structure of a Bayesian network with sufficient number of samples.

Information-theoretic Limits for Community Detection in Network Models

no code implementations NeurIPS 2018 Chuyang Ke, Jean Honorio

For the Latent Space Model, the non-recoverability condition depends on the dimension of the latent space, and how far and spread are the communities in the latent space.

Community Detection · Stochastic Block Model

Cost-Aware Learning for Improved Identifiability with Multiple Experiments

no code implementations 12 Feb 2018 Longyun Guo, Jean Honorio, John Morgan

We analyze the sample complexity of learning from multiple experiments where the experimenter has a total budget for obtaining samples.

The Error Probability of Random Fourier Features is Dimensionality Independent

no code implementations 27 Oct 2017 Jean Honorio, Yu-Jun Li

We show that the error probability of reconstructing kernel matrices from Random Fourier Features for the Gaussian kernel function is at most $\mathcal{O}(R^{2/3} \exp(-D))$, where $D$ is the number of random features and $R$ is the diameter of the data domain.
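The reconstruction being bounded can be demonstrated directly with the standard Rahimi-Recht construction: sample spectral frequencies for the Gaussian kernel and compare the feature inner products with the exact kernel matrix. The data sizes and `D` below are illustrative.

```python
import numpy as np

def rff_features(X, D, gamma, seed=0):
    """Random Fourier Features (Rahimi & Recht) for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2): returns Z with Z @ Z.T ~= K."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)               # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 5))
gamma = 0.5
K = np.exp(-gamma * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
Z = rff_features(X, D=2000, gamma=gamma)
max_err = np.abs(Z @ Z.T - K).max()   # reconstruction error shrinks with D
```

The worst-case entrywise error decays as the number of features `D` grows, and (per the result above) the probability of a large error does not degrade with the data dimension.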


Learning linear structural equation models in polynomial time and sample complexity

no code implementations 15 Jul 2017 Asish Ghoshal, Jean Honorio

We develop a new algorithm, which is computationally and statistically efficient and works in the high-dimensional regime, for learning linear SEMs from purely observational data with arbitrary noise distribution.

Causal Inference

Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity

no code implementations 18 Jun 2017 Asish Ghoshal, Jean Honorio

We also show that $\Omega(d \log (pm))$ samples are necessary for any method to consistently recover a game, with the same Nash-equilibria as the true game, from observations of strategic interactions.

Computationally and statistically efficient learning of causal Bayes nets using path queries

no code implementations NeurIPS 2018 Kevin Bello, Jean Honorio

In this paper we first propose a polynomial time algorithm for learning the exact correctly-oriented structure of the transitive reduction of any causal Bayesian network with high probability, by using interventional path queries.

Causal Discovery

On the Statistical Efficiency of Compositional Nonparametric Prediction

no code implementations 6 Apr 2017 Yixi Xu, Jean Honorio, Xiao Wang

In this paper, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $p$ covariates.
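A minimal evaluator for the compositional model described above: a labeled binary tree whose internal nodes are summations or multiplications and whose leaves apply one of $q$ basis functions to one of $p$ covariates. The particular basis list and tree are made-up examples.

```python
import math

BASIS = [math.sin, math.cos, lambda t: t ** 2]  # q = 3 basis functions

def evaluate(node, x):
    """node is ('+', left, right), ('*', left, right),
    or ('leaf', basis_index, covariate_index)."""
    tag = node[0]
    if tag == 'leaf':
        _, j, i = node
        return BASIS[j](x[i])          # apply basis j to covariate i
    _, left, right = node
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if tag == '+' else a * b

# k = 2 internal nodes, hence 2k + 1 = 5 nodes in total:
# f(x) = sin(x0) * cos(x1) + x2^2
tree = ('+', ('*', ('leaf', 0, 0), ('leaf', 1, 1)), ('leaf', 2, 2))
value = evaluate(tree, [0.0, 0.0, 2.0])   # sin(0)*cos(0) + 2^2
```

Model search in this family then amounts to searching over tree shapes and leaf labels, which is what the paper's statistical analysis covers.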


Learning Identifiable Gaussian Bayesian Networks in Polynomial Time and Sample Complexity

no code implementations NeurIPS 2017 Asish Ghoshal, Jean Honorio

In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance (a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data) under high-dimensional settings.

Learning Graphical Games from Behavioral Data: Sufficient and Necessary Conditions

no code implementations 3 Mar 2017 Asish Ghoshal, Jean Honorio

In this paper we obtain sufficient and necessary conditions on the number of samples required for exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions.

From Behavior to Sparse Graphical Games: Efficient Recovery of Equilibria

no code implementations 11 Jul 2016 Asish Ghoshal, Jean Honorio

In this paper we study the problem of exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions of the players alone.

Information-Theoretic Lower Bounds for Recovery of Diffusion Network Structures

no code implementations 28 Jan 2016 Keehwan Park, Jean Honorio

We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures.

Information-theoretic limits of Bayesian network structure learning

no code implementations 27 Jan 2016 Asish Ghoshal, Jean Honorio

In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), on discrete as well as continuous random variables, from a finite number of samples.

regression · Variable Selection

On the Sample Complexity of Learning Graphical Games

no code implementations 27 Jan 2016 Jean Honorio

By using information-theoretic arguments, we show that if the number of samples is less than ${\Omega(k n \log^2{n})}$ for sparse graphs or ${\Omega(n^2 \log{n})}$ for dense graphs, then any conceivable method fails to recover the PSNE with arbitrary probability.

Structured Prediction: From Gaussian Perturbations to Linear-Time Principled Algorithms

no code implementations 5 Aug 2015 Jean Honorio, Tommi Jaakkola

Thus, using the maximum loss over random structured outputs is a principled way of learning the parameter of structured prediction models.

Structured Prediction

Inverse Covariance Estimation for High-Dimensional Data in Linear Time and Space: Spectral Methods for Riccati and Sparse Models

no code implementations 26 Sep 2013 Jean Honorio, Tommi S. Jaakkola

Furthermore, instead of obtaining a single solution for a specific regularization parameter, our algorithm finds the whole solution path.

Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data

no code implementations 16 Jun 2012 Jean Honorio, Luis Ortiz

We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014).

Sparse and Locally Constant Gaussian Graphical Models

no code implementations NeurIPS 2009 Jean Honorio, Dimitris Samaras, Nikos Paragios, Rita Goldstein, Luis E. Ortiz

Locality information is crucial in datasets where each variable corresponds to a measurement in a manifold (silhouettes, motion trajectories, 2D and 3D images).
