1 code implementation • 7 Mar 2023 • Wenjie Li, Haoze Li, Jean Honorio, Qifan Song
We introduce a Python open-source library for $\mathcal{X}$-armed bandit and online blackbox optimization named PyXAB.
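The library exposes a simple pull/receive-reward loop. Below is a usage sketch modeled on the pattern in the project's documentation; the algorithm and partition names (T_HOO, BinaryPartition) are taken from the docs, but exact module paths and signatures may differ across versions:

```python
# Sketch of a PyXAB optimization loop, assuming the documented
# T_HOO algorithm and BinaryPartition; names/signatures may vary.
from PyXAB.algos.HOO import T_HOO
from PyXAB.partition.BinaryPartition import BinaryPartition

T = 1000
domain = [[0.0, 1.0]]                  # one-dimensional domain [0, 1]
algo = T_HOO(rounds=T, domain=domain, partition=BinaryPartition)

def f(x):                              # any black-box objective
    return -(x[0] - 0.3) ** 2

for t in range(1, T + 1):
    point = algo.pull(t)               # query point for round t
    algo.receive_reward(t, f(point))   # feed back the observed reward
```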
no code implementations • 7 Feb 2023 • Chuyang Ke, Jean Honorio
In this paper, we study the problem of inference in high-order structured prediction tasks.
no code implementations • 3 Feb 2023 • Hanbyul Lee, Qifan Song, Jean Honorio
We analyze a practical algorithm for sparse PCA on incomplete and noisy data under a general non-random sampling scheme.
no code implementations • 31 Jan 2023 • Shixiong Wang, Haowei Wang, Jean Honorio
Trustworthy machine learning aims to combat distributional uncertainty, that is, deviations of the training data distribution from the population distribution.
no code implementations • CVPR 2023 • QiuLing Xu, Guanhong Tao, Jean Honorio, Yingqi Liu, Shengwei An, Guangyu Shen, Siyuan Cheng, Xiangyu Zhang
It trains the clone model from scratch on a very small subset of samples and aims to minimize a cloning loss that measures the differences between the activations of important neurons across the two models.
no code implementations • 21 Dec 2022 • Deepak Maurya, Jean Honorio
This paper analyzes $\ell_1$ regularized linear regression under the challenging scenario of having only adversarially corrupted data for training.
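For reference, the estimator under analysis is the standard Lasso program (notation assumed): given possibly corrupted observations $(X, y)$ with $n$ samples,

$$\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1 .$$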
no code implementations • 20 Dec 2022 • Shixiong Wang, Haowei Wang, Jean Honorio
Third, we show that the generalization error of a machine learning model can be characterized using the distributional uncertainty of the nominal distribution together with the robustness measures of the model. This offers a new perspective for bounding generalization errors, and explains why distributionally robust models, Bayesian models, and regularized models tend to have smaller generalization errors.
no code implementations • 19 Aug 2022 • Deepak Maurya, Adarsh Barik, Jean Honorio
In this work, we propose a robust framework that employs adversarially robust training to safeguard the machine learning models against perturbed testing data.
no code implementations • 19 Aug 2022 • Huiming Xie, Jean Honorio
In this paper, we consider the meta learning problem for estimating the graphs associated with high-dimensional Ising models, using the method of $\ell_1$-regularized logistic regression for neighborhood selection of each node.
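For orientation, here is a minimal sketch of the single-task base primitive, one $\ell_1$-regularized logistic regression per node followed by symmetrization; the paper's meta-learning analysis builds on this primitive, and this is not the paper's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def neighborhood_selection(X, lam):
    """Estimate each node's neighborhood in an Ising model via
    l1-regularized logistic regression (one regression per node).
    X: (n, p) array with entries in {-1, +1}."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        y = (X[:, i] + 1) // 2             # map {-1,+1} to {0,1} labels
        Z = np.delete(X, i, axis=1)        # all other nodes as features
        # sklearn's C is an inverse regularization strength; C = 1/(n*lam)
        clf = LogisticRegression(penalty="l1", C=1.0 / (n * lam),
                                 solver="liblinear")
        clf.fit(Z, y)
        support = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
        others = np.delete(np.arange(p), i)
        adj[i, others[support]] = True
    return adj | adj.T                     # OR-symmetrize the estimate
```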
no code implementations • 18 Aug 2022 • Imon Banerjee, Jean Honorio
We assume that each task is a different random Principal Component (PC) matrix with a possibly different support, and that the support union of the PC matrices is small.
no code implementations • 10 Jun 2022 • Chuyang Ke, Jean Honorio
We study the problem of using the Lasso to consistently recover the sparsity pattern of a regression parameter vector from correlated observations governed by deterministic missing-data patterns.
no code implementations • 2 Jun 2022 • Adarsh Barik, Jean Honorio
Since the data is unlabeled, our task is not only to figure out a good approximation of the regression parameter vectors but also to label the dataset correctly.
no code implementations • 30 May 2022 • Hanbyul Lee, Qifan Song, Jean Honorio
We study a practical algorithm for sparse principal component analysis (PCA) of incomplete and noisy data.
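For orientation, a generic heuristic of this flavor (not the paper's algorithm) is the truncated power method: power iteration on a covariance estimate formed from the observed entries, with hard thresholding to $k$ coordinates at each step.

```python
import numpy as np

def truncated_power_iteration(S, k, iters=100):
    """Generic sparse-PCA heuristic: power iteration on a (possibly
    noisy / rescaled) covariance estimate S, keeping only the k
    largest-magnitude coordinates at each step."""
    p = S.shape[0]
    v = np.random.default_rng(0).standard_normal(p)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = S @ v
        idx = np.argsort(np.abs(w))[:-k]   # indices outside the top k
        w[idx] = 0.0                       # hard-threshold to sparsity k
        v = w / np.linalg.norm(w)
    return v
```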
1 code implementation • 30 May 2022 • Wenjie Li, Qifan Song, Jean Honorio, Guang Lin
This work establishes the first framework of federated $\mathcal{X}$-armed bandit, where different clients face heterogeneous local objective functions defined on the same domain and are required to collaboratively figure out the global optimum.
no code implementations • 27 May 2022 • Site Bai, Chuyang Ke, Jean Honorio
To overcome this, we propose a novel weight recovery algorithm, which takes the dual solution and the kernel information as input, and recovers the linear weights and the outputs of the convolutional layers instead of the weight parameters.
no code implementations • 14 Jun 2021 • Chuyang Ke, Jean Honorio
We provide an efficient algorithm, which computes a consensus signed weighted graph from clients' evidence, and recovers the underlying network structure in the central server.
no code implementations • 7 Mar 2021 • Abi Komanduru, Jean Honorio
Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP).
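Concretely, by Ng and Russell's classical characterization, a reward vector $R$ makes the policy that always takes action $a_1$ optimal if and only if

$$(\mathbf{P}_{a_1} - \mathbf{P}_{a})\,(\mathbf{I} - \gamma \mathbf{P}_{a_1})^{-1} R \;\succeq\; 0 \qquad \text{for all } a \neq a_1,$$

where $\mathbf{P}_a$ is the transition matrix of action $a$ and $\gamma$ is the discount factor; IRL then amounts to selecting a well-behaved $R$ from this feasible cone.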
no code implementations • 19 Feb 2021 • Donald Q. Adams, Adarsh Barik, Jean Honorio
For functions with nonzero fourth derivatives, the Gaussian Quadrature method achieves an upper bound which is not tight with the information-theoretic lower bound.
no code implementations • NeurIPS 2021 • Adarsh Barik, Jean Honorio
To the best of our knowledge, this is the first invex relaxation for a combinatorial problem.
no code implementations • 18 Feb 2021 • Wenjie Li, Adarsh Barik, Jean Honorio
Stochastic high dimensional bandit problems with low dimensional structures are useful in different applications such as online advertising and drug discovery.
no code implementations • 17 Feb 2021 • Hanbyul Lee, Kevin Bello, Jean Honorio
Inference is a central task in structured prediction, and it is naturally modeled with a graph.
no code implementations • 16 Feb 2021 • Kevin Bello, Chuyang Ke, Jean Honorio
Performing inference in graphs is a common task within several machine learning problems, e.g., image segmentation and community detection.
no code implementations • NeurIPS 2021 • Gregory Dexter, Kevin Bello, Jean Honorio
Inverse Reinforcement Learning (IRL) is the problem of finding a reward function which describes observed/known expert behavior.
no code implementations • 29 Jan 2021 • Jiajun Liang, Chuyang Ke, Jean Honorio
Our bounds are tight and pertain to the community detection problems in various models such as the planted hypergraph stochastic block model, the planted densest sub-hypergraph model, and the planted multipartite hypergraph model.
no code implementations • EACL 2021 • Manuel Widmoser, Maria Leonor Pacheco, Jean Honorio, Dan Goldwasser
In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.
no code implementations • 1 Jul 2020 • Xiaochen Yang, Jean Honorio
In this paper, we study the sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools.
no code implementations • 1 Jul 2020 • QiuLing Xu, Kevin Bello, Jean Honorio
Robustness of machine learning methods is essential for modern practical applications.
no code implementations • NeurIPS 2020 • Kevin Bello, Jean Honorio
Given a generative model with an undirected connected graph $G$ and true vector of binary labels, it has been previously shown that when $G$ has good expansion properties, such as complete graphs or $d$-regular expanders, one can exactly recover the true labels (with high probability and in polynomial time) from a single noisy observation of each edge and node.
no code implementations • 22 Jun 2020 • Qian Zhang, Yilin Zheng, Jean Honorio
Then for the novel task, we prove that minimizing the $\ell_1$-regularized log-determinant Bregman divergence, with the additional constraint that the support is a subset of the estimated support union, reduces the sufficient sample complexity for successful support recovery to $O(\log(|S_{\text{off}}|))$, where $|S_{\text{off}}|$ is the number of off-diagonal elements in the support union and is much smaller than $N$ for sparse matrices.
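In standard log-determinant notation, a sketch of the constrained program this bound refers to (the paper's exact formulation may differ), with sample covariance $\hat{\Sigma}$ and estimated support union $\hat{S}$, is

$$\hat{\Theta} = \arg\min_{\Theta \succ 0,\; \operatorname{supp}(\Theta) \subseteq \hat{S}} \; \operatorname{tr}(\hat{\Sigma}\Theta) - \log\det\Theta + \lambda \|\Theta\|_{1,\text{off}} .$$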
no code implementations • 22 Jun 2020 • Adarsh Barik, Jean Honorio
Federated learning provides a framework to address the challenges of distributed computing, data ownership and privacy over a large number of distributed clients with low computational and communication capabilities.
no code implementations • 20 Jun 2020 • Chuyang Ke, Jean Honorio
We study the problem of efficient exact partitioning of the hypergraphs generated by high-order planted models.
no code implementations • 1 Apr 2020 • Adarsh Barik, Jean Honorio
In this paper, we study the problem of learning the exact structure of continuous-action games with non-parametric utility functions.
no code implementations • 31 Mar 2020 • Abdulrahman Alabdulkareem, Jean Honorio
In this paper we analyze the necessary number of samples to estimate the gradient of any multidimensional smooth (possibly non-convex) function in a zero-order stochastic oracle model.
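For orientation, the standard Gaussian-smoothing zeroth-order gradient estimator (Nesterov-Spokoiny style) is sketched below; the paper analyzes how many such oracle calls are needed, and this sketch is only illustrative:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_samples=100, seed=0):
    """Estimate the gradient of f at x from zeroth-order (function
    value) queries, using Gaussian-smoothed finite differences."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros(d)
    fx = f(x)
    for _ in range(num_samples):
        u = rng.standard_normal(d)           # random direction
        g += (f(x + mu * u) - fx) / mu * u   # directional difference
    return g / num_samples
```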
no code implementations • 28 Feb 2020 • Krishna Reddy Kesari, Jean Honorio
We show that the parameter estimation problem is equivalent to the problem of function identification in the given family.
no code implementations • 25 Feb 2020 • Yuki Ohnishi, Jean Honorio
We introduce several novel change of measure inequalities for two families of divergences: $f$-divergences and $\alpha$-divergences.
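For orientation, recall the definition $D_f(P\|Q) = \mathbb{E}_Q\!\left[f\!\left(\frac{dP}{dQ}\right)\right]$ for convex $f$ with $f(1)=0$; the prototypical change-of-measure inequality that such results generalize is the Donsker-Varadhan bound for the KL divergence:

$$\mathbb{E}_{P}[h(X)] \;\le\; \mathrm{KL}(P \,\|\, Q) + \log \mathbb{E}_{Q}\!\left[e^{h(X)}\right].$$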
no code implementations • 22 Feb 2020 • Zhanyu Wang, Jean Honorio
A key difference between meta-learning and classical multi-task learning is that meta-learning focuses only on the recovery of the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires $l$ to grow with $T$.
no code implementations • 8 Nov 2019 • Adarsh Barik, Jean Honorio
We propose an $\ell_{12}$-block regularized method which recovers a graphical game whose Nash equilibria are the $\epsilon$-Nash equilibria of the game from which the data was generated (the true game).
no code implementations • 6 Nov 2019 • Chuyang Ke, Jean Honorio
In this paper we propose an algorithm for exact partitioning of high-order models.
no code implementations • 28 Jun 2019 • Asish Ghoshal, Kevin Bello, Jean Honorio
Discovering cause-effect relationships between variables from observational data is a fundamental challenge in many scientific disciplines.
no code implementations • 2 Jun 2019 • Kevin Bello, Asish Ghoshal, Jean Honorio
Structured prediction can be considered as a generalization of many standard supervised learning tasks, and is usually thought of as a simultaneous prediction of multiple labels.
no code implementations • NeurIPS 2019 • Kevin Bello, Jean Honorio
Our results show that exact recovery is possible and achievable in polynomial time for a large class of graphs.
1 code implementation • NeurIPS 2019 • Abi Komanduru, Jean Honorio
The paper further analyzes the proposed formulation of inverse reinforcement learning with $n$ states and $k$ actions, and shows a sample complexity of $O(n^2 \log (nk))$ for recovering a reward function that generates a policy that satisfies Bellman's optimality condition with respect to the true transition probabilities.
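For reference, the Bellman optimality condition referred to here is the standard one:

$$Q^*(s,a) = r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, \max_{a'} Q^*(s',a'), \qquad \pi^*(s) \in \arg\max_{a} Q^*(s,a).$$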
no code implementations • NeurIPS 2019 • Adarsh Barik, Jean Honorio
In this paper, we provide a method to learn the directed structure of a Bayesian network using data.
no code implementations • 28 Jan 2019 • Chuyang Ke, Jean Honorio
We analyze the necessary and sufficient conditions for exact inference of a latent model.
no code implementations • 25 Jan 2019 • Raphael Arkady Meyer, Jean Honorio
We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal kernel sum classifiers.
no code implementations • 17 Sep 2018 • Meimei Liu, Jean Honorio, Guang Cheng
In this paper, we propose a random projection approach to estimate variance in kernel ridge regression.
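As a hedged sketch (not necessarily the paper's estimator), one standard randomized route to variance estimation in kernel ridge regression replaces the exact effective degrees of freedom $\mathrm{df} = \operatorname{tr}(H)$, $H = K(K + n\lambda I)^{-1}$, with Hutchinson's randomized trace estimate:

```python
import numpy as np

def krr_variance(K, y, lam, num_probes=20, seed=0):
    """Estimate residual variance in kernel ridge regression, using
    Hutchinson's randomized trace estimator for df = tr(K (K+n*lam*I)^-1).
    Illustrative only; not the paper's random projection method."""
    n = K.shape[0]
    A = K + n * lam * np.eye(n)
    alpha = np.linalg.solve(A, y)
    resid = y - K @ alpha
    rng = np.random.default_rng(seed)
    df = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe
        df += z @ (K @ np.linalg.solve(A, z))      # z^T H z
    df /= num_probes
    return (resid @ resid) / (n - df)
```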
no code implementations • NeurIPS 2018 • Kevin Bello, Jean Honorio
The standard margin-based structured prediction commonly uses a maximum loss over all possible structured outputs.
no code implementations • ICML 2018 • Asish Ghoshal, Jean Honorio
In this paper, we propose a provably polynomial time randomized algorithm for learning the parameters of perturbed MAP predictors.
no code implementations • 19 May 2018 • Zitao Li, Jean Honorio
We introduce a new concept, data irrecoverability, and show that the well-studied concept of data privacy is sufficient but not necessary for data irrecoverability.
no code implementations • 12 Mar 2018 • Adarsh Barik, Jean Honorio
The problem is NP-hard in general, but we show that under certain conditions we can recover the true structure of a Bayesian network with a sufficient number of samples.
no code implementations • NeurIPS 2018 • Chuyang Ke, Jean Honorio
For the Latent Space Model, the non-recoverability condition depends on the dimension of the latent space, and on how far apart and how spread out the communities are in the latent space.
no code implementations • 12 Feb 2018 • Longyun Guo, Jean Honorio, John Morgan
We analyze the sample complexity of learning from multiple experiments where the experimenter has a total budget for obtaining samples.
no code implementations • 27 Oct 2017 • Jean Honorio, Yu-Jun Li
We show that the error probability of reconstructing kernel matrices from Random Fourier Features for the Gaussian kernel function is at most $\mathcal{O}(R^{2/3} \exp(-D))$, where $D$ is the number of random features and $R$ is the diameter of the data domain.
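For orientation, the Rahimi-Recht construction being analyzed maps each point to $D$ random cosine features so that inner products of features approximate the Gaussian kernel; a minimal sketch:

```python
import numpy as np

def rff_features(X, D, gamma, seed=0):
    """Random Fourier Features for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2): z(x) @ z(y) approximates
    k(x, y), with error decaying in the number of features D."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Usage: Z = rff_features(X, D=500, gamma=0.5); K_approx = Z @ Z.T
```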
no code implementations • 15 Jul 2017 • Asish Ghoshal, Jean Honorio
We develop a new algorithm, which is computationally and statistically efficient and works in the high-dimensional regime, for learning linear SEMs from purely observational data with arbitrary noise distribution.
no code implementations • 18 Jun 2017 • Asish Ghoshal, Jean Honorio
We also show that $\Omega(d \log (pm))$ samples are necessary for any method to consistently recover a game, with the same Nash-equilibria as the true game, from observations of strategic interactions.
no code implementations • NeurIPS 2018 • Kevin Bello, Jean Honorio
In this paper we first propose a polynomial time algorithm for learning the exact correctly-oriented structure of the transitive reduction of any causal Bayesian network with high probability, by using interventional path queries.
no code implementations • 6 Apr 2017 • Yixi Xu, Jean Honorio, Xiao Wang
In this paper, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $p$ covariates.
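A minimal sketch of this model class, with a hypothetical dictionary-based tree representation (the paper's own encoding is not specified here): internal nodes are '+' or '*', and leaves apply a basis function to one covariate.

```python
import math

def evaluate(node, x):
    """Evaluate a labeled binary expression tree at input vector x."""
    op = node["op"]
    if op == "+":
        return evaluate(node["left"], x) + evaluate(node["right"], x)
    if op == "*":
        return evaluate(node["left"], x) * evaluate(node["right"], x)
    # leaf: apply basis function node["f"] to covariate node["j"]
    return node["f"](x[node["j"]])

# Example with k = 1 internal node (2k + 1 = 3 nodes total):
tree = {"op": "+",
        "left":  {"op": "leaf", "f": math.sin, "j": 0},
        "right": {"op": "leaf", "f": math.exp, "j": 1}}
print(evaluate(tree, [0.5, 1.0]))  # sin(0.5) + exp(1.0)
```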
no code implementations • NeurIPS 2017 • Asish Ghoshal, Jean Honorio
In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance (a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data) under high-dimensional settings.
no code implementations • 3 Mar 2017 • Asish Ghoshal, Jean Honorio
In this paper we obtain sufficient and necessary conditions on the number of samples required for exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions.
no code implementations • 26 Jan 2017 • Adarsh Barik, Jean Honorio, Mohit Tawarmalani
We analyze the necessary number of samples for sparse vector recovery in a noisy linear prediction setup.
no code implementations • 11 Jul 2016 • Asish Ghoshal, Jean Honorio
In this paper we study the problem of exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions of the players alone.
no code implementations • 28 Jan 2016 • Keehwan Park, Jean Honorio
We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures.
no code implementations • 27 Jan 2016 • Asish Ghoshal, Jean Honorio
In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), on discrete as well as continuous random variables, from a finite number of samples.
no code implementations • 27 Jan 2016 • Jean Honorio
By using information-theoretic arguments, we show that if the number of samples is less than $\Omega(k n \log^2 n)$ for sparse graphs or $\Omega(n^2 \log n)$ for dense graphs, then any conceivable method fails to recover the PSNE with arbitrary probability.
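The standard tool behind lower bounds of this type is Fano's inequality: for a parameter $\theta$ drawn uniformly from a finite class $\Theta$ and data $X$, any estimator $\hat{\theta}$ satisfies

$$\mathbb{P}(\hat{\theta} \neq \theta) \;\ge\; 1 - \frac{I(\theta; X) + \log 2}{\log |\Theta|} .$$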
no code implementations • 5 Aug 2015 • Jean Honorio, Tommi Jaakkola
Thus, using the maximum loss over random structured outputs is a principled way of learning the parameters of structured prediction models.
no code implementations • 26 Sep 2013 • Jean Honorio, Tommi S. Jaakkola
Furthermore, instead of obtaining a single solution for a specific regularization parameter, our algorithm finds the whole solution path.
no code implementations • 18 Jul 2012 • Jean Honorio, Tommi Jaakkola, Dimitris Samaras
In this paper, we present $\ell_{1, p}$ multi-task structure learning for Gaussian graphical models.
no code implementations • 16 Jun 2012 • Jean Honorio, Luis Ortiz
We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014).
no code implementations • NeurIPS 2009 • Jean Honorio, Dimitris Samaras, Nikos Paragios, Rita Goldstein, Luis E. Ortiz
Locality information is crucial in datasets where each variable corresponds to a measurement in a manifold (silhouettes, motion trajectories, 2D and 3D images).