no code implementations • 17 Feb 2024 • Tim Tsz-Kit Lau, Han Liu, Mladen Kolar

The choice of batch sizes in stochastic gradient optimizers is critical for model training.
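Why batch size matters can be made concrete with the classic "norm test" heuristic, which grows the batch when gradient noise dominates the mean gradient. This is a generic textbook illustration (the test form, the threshold `theta`, and the toy gradients are assumptions for the sketch, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_test_batch_size(grads, theta=0.5):
    """Classic 'norm test' heuristic: grow the batch when the sample
    variance of per-example gradients is large relative to the squared
    norm of the mean gradient."""
    g_bar = grads.mean(axis=0)                       # mini-batch gradient
    var = ((grads - g_bar) ** 2).sum(axis=1).mean()  # per-example gradient variance
    b = grads.shape[0]
    if var / b > theta ** 2 * (g_bar @ g_bar):       # test fails: batch too small
        return int(np.ceil(var / (theta ** 2 * (g_bar @ g_bar))))
    return b

# toy example: 8 noisy per-example gradients around a true direction
true_g = np.array([1.0, -2.0])
grads = true_g + 0.1 * rng.standard_normal((8, 2))
print(norm_test_batch_size(grads))  # here the current batch size, 8, passes the test
```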

no code implementations • 28 Dec 2023 • Zhao Lyu, Wai Ming Tai, Mladen Kolar, Bryon Aragam

In this paper, we highlight the inherent limitations of cross-validation when employed to discern the structure of a Gaussian graphical model.

1 code implementation • 5 Jun 2023 • Boxin Zhao, Boxiang Lyu, Raul Castro Fernandez, Mladen Kolar

Data markets help with identifying valuable training data: model consumers pay to train a model, the market uses that budget to identify data and train the model (the budget allocation problem), and finally the market compensates data providers according to their data contribution (revenue allocation problem).
(For parallelism with the budget allocation problem, the second task is the revenue allocation problem.)

1 code implementation • 28 May 2023 • Ilgee Hong, Sen Na, Michael W. Mahoney, Mladen Kolar

Our method adaptively controls the accuracy of the randomized solver and the penalty parameters of the exact augmented Lagrangian, to ensure that the inexact Newton direction is a descent direction of the exact augmented Lagrangian.

1 code implementation • 29 Nov 2022 • Yuchen Fang, Sen Na, Michael W. Mahoney, Mladen Kolar

We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.

no code implementations • 31 Oct 2022 • Katherine Tsai, Boxin Zhao, Sanmi Koyejo, Mladen Kolar

Joint multimodal functional data acquisition, where functional data from multiple modes are measured simultaneously from the same subject, has emerged as an exciting modern approach enabled by recent engineering breakthroughs in the neurological and biological sciences.

no code implementations • 31 May 2022 • Pedro Cisneros-Velarde, Boxiang Lyu, Sanmi Koyejo, Mladen Kolar

Although parallelism has been extensively used in reinforcement learning (RL), the quantitative effects of parallel exploration are not well understood theoretically.

no code implementations • 5 May 2022 • Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang

In the setting where the function approximation is employed to handle large state spaces, with only mild assumptions on the expressiveness of the function class, we are able to design a dynamic mechanism using offline reinforcement learning algorithms.

1 code implementation • 28 Apr 2022 • Boxiang Lyu, Filip Hanzely, Mladen Kolar

We consider the problem of personalized federated learning when there are known cluster structures within users.

no code implementations • 31 Jan 2022 • Boxin Zhao, Boxiang Lyu, Mladen Kolar

Stochastic gradient-based optimization methods, such as L-SVRG and its accelerated variant L-Katyusha (Kovalev et al., 2020), are widely used to train machine learning models. The theoretical and empirical performance of L-SVRG and L-Katyusha can be improved by sampling observations from a non-uniform distribution (Qian et al., 2021).
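The non-uniform sampling idea can be sketched in a few lines: sample component $i$ with probability proportional to its smoothness constant $L_i$ and reweight by $1/(n p_i)$ so the gradient estimator stays unbiased. The least-squares problem and constants below are illustrative assumptions, not the construction from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# f(x) = (1/n) sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 5, 3
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L = (A ** 2).sum(axis=1)   # smoothness constant of each component
p = L / L.sum()            # importance-sampling distribution, p_i ∝ L_i

def grad_i(x, i):
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
i = rng.choice(n, p=p)
g = grad_i(x, i) / (n * p[i])   # reweighted stochastic gradient

# unbiasedness check: averaging over the sampling distribution
# recovers the full gradient exactly
est = sum(p[j] * grad_i(x, j) / (n * p[j]) for j in range(n))
print(np.allclose(est, full_grad(x)))  # True
```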

1 code implementation • 28 Dec 2021 • Boxin Zhao, Lingxiao Wang, Mladen Kolar, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Chaochao Chen

As a result, client sampling plays an important role in FL systems as it affects the convergence rate of optimization algorithms used to train machine learning models.

no code implementations • 6 Nov 2021 • Yuwei Luo, Varun Gupta, Mladen Kolar

Under the assumption that a sequence of stabilizing, but potentially sub-optimal controllers is available for all $t$, we present an algorithm that achieves the optimal dynamic regret of $\tilde{\mathcal{O}}\left(V_T^{2/5}T^{3/5}\right)$.

1 code implementation • 19 Oct 2021 • Katherine Tsai, Oluwasanmi Koyejo, Mladen Kolar

Graphs from complex systems often share a partial underlying structure across domains while retaining individual features.

1 code implementation • 23 Sep 2021 • Sen Na, Mihai Anitescu, Mladen Kolar

We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks.

no code implementations • 18 Jun 2021 • Luofeng Liao, Li Shen, Jia Duan, Mladen Kolar, DaCheng Tao

Large scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and training of generative adversarial networks.

1 code implementation • 14 Jun 2021 • Y. Samuel Wang, Si Kai Lee, Panos Toulis, Mladen Kolar

We propose a residual randomization procedure designed for robust Lasso-based inference in the high-dimensional setting.

1 code implementation • 6 May 2021 • Boxin Zhao, Percy S. Zhai, Y. Samuel Wang, Mladen Kolar

We propose a neighborhood selection approach to estimate the structure of Gaussian functional graphical models, where we first estimate the neighborhood of each node via a function-on-function regression and subsequently recover the entire graph structure by combining the estimated neighborhoods.
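The neighborhood-selection recipe can be sketched on a scalar Gaussian graphical model (a stand-in for the functional case; the lasso solver, penalty level, and chain graph below are illustrative assumptions, not the paper's function-on-function estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

def ista_lasso(X, y, lam, n_iter=500):
    """Plain ISTA for 0.5*||y - Xb||^2 + lam*n*||b||_1; enough for a sketch."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        z = beta - step * X.T @ (X @ beta - y)
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam * n, 0.0)
    return beta

# chain graph 0-1-2: the precision matrix is tridiagonal
Theta = np.array([[1.0, 0.4, 0.0],
                  [0.4, 1.0, 0.4],
                  [0.0, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Theta), size=2000)

# neighborhood selection: regress each node on all others; nonzero
# lasso coefficients give that node's estimated neighbors
edges = set()
for j in range(3):
    others = [k for k in range(3) if k != j]
    beta = ista_lasso(X[:, others], X[:, j], lam=0.1)
    for k, coef in zip(others, beta):
        if abs(coef) > 1e-3:
            edges.add(tuple(sorted((j, k))))
print(sorted(edges))  # expected: the chain edges [(0, 1), (1, 2)]
```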

1 code implementation • 19 Feb 2021 • Filip Hanzely, Boxin Zhao, Mladen Kolar

We investigate the optimization aspects of personalized Federated Learning (FL).

no code implementations • 19 Feb 2021 • Luofeng Liao, Zuyue Fu, Zhuoran Yang, Yixin Wang, Mladen Kolar, Zhaoran Wang

Instrumental variables (IVs), in the context of RL, are the variables whose influence on the state variables is mediated entirely through the action.

1 code implementation • 10 Feb 2021 • Sen Na, Mihai Anitescu, Mladen Kolar

Based on the simplified deterministic algorithm, we then propose a non-adaptive SQP for dealing with stochastic objective, where the gradient and Hessian are replaced by stochastic estimates but the stepsizes are deterministic and prespecified.
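A minimal sketch of one such iteration on a toy equality-constrained quadratic: solve the KKT system with a noisy gradient, then take a prespecified decaying stepsize. The model problem, noise level, and $1/\sqrt{t}$ stepsizes are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)

# min 0.5*||x||^2  subject to  x0 + x1 = 1; KKT solution is x* = (0.5, 0.5)
H = np.eye(2)                # Hessian of f(x) = 0.5 * ||x||^2 (exact here)
A = np.array([[1.0, 1.0]])   # Jacobian of c(x) = x0 + x1 - 1

x = np.zeros(2)
lam = np.zeros(1)
for t in range(1, 2001):
    g = x + 0.1 * rng.standard_normal(2)          # stochastic gradient of f
    c = np.array([x[0] + x[1] - 1.0])
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    step = np.linalg.solve(K, -np.concatenate([g + A.T @ lam, c]))
    alpha = 1.0 / np.sqrt(t)                      # deterministic, prespecified
    x += alpha * step[:2]
    lam += alpha * step[2:]

print(x)  # approaches the solution (0.5, 0.5)
```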

no code implementations • 30 Dec 2020 • You-Lin Chen, Zhaoran Wang, Mladen Kolar

Training a classifier under non-convex constraints has gotten increasing attention in the machine learning community thanks to its wide range of applications such as algorithmic fairness and class-imbalanced classification.

no code implementations • NeurIPS 2020 • Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Mladen Kolar, Zhaoran Wang

We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation.

no code implementations • 11 Nov 2020 • Katherine Tsai, Mladen Kolar, Oluwasanmi Koyejo

We prove a linear convergence rate up to a nontrivial statistical error for the proposed descent scheme and establish sample complexity guarantees for the estimator.

1 code implementation • 15 Jul 2020 • Xu Wang, Mladen Kolar, Ali Shojaie

The key ingredient for this inference procedure is a new concentration inequality on the first- and second-order statistics for integrated stochastic processes, which summarize the entire history of the process.

no code implementations • 2 Jul 2020 • Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Zhaoran Wang, Mladen Kolar

We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation.

no code implementations • 22 Jun 2020 • Shuang Qiu, Xiaohan Wei, Mladen Kolar

We study online convex optimization with constraints consisting of multiple functional constraints and a relatively simple constraint set, such as a Euclidean ball.
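One common template for this problem class is primal-dual online gradient descent: project onto the simple set directly, and handle the functional constraint with a dual variable. The loss, constraint, and stepsizes below are illustrative assumptions, not the algorithm from the paper:

```python
import numpy as np

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

T, d = 500, 2
eta, gamma = 0.1, 0.1    # primal and dual stepsizes
x = np.zeros(d)
lam = 0.0                # dual variable for g(x) <= 0
g = lambda x: x[0] + x[1] - 0.5                        # functional constraint
f_grad = lambda x, t: 2 * (x - np.array([1.0, 1.0]))   # loss gradient (static here)

for t in range(T):
    grad = f_grad(x, t) + lam * np.array([1.0, 1.0])   # gradient of the Lagrangian
    x = project_ball(x - eta * grad)                   # project onto the simple set
    lam = max(0.0, lam + gamma * g(x))                 # dual ascent, clipped at 0

print(x, g(x))  # converges to the constrained optimum (0.25, 0.25), g(x) ≈ 0
```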

no code implementations • 11 Mar 2020 • Boxin Zhao, Y. Samuel Wang, Mladen Kolar

We first define a functional differential graph that captures the differences between two functional graphical models and formally characterize when the functional differential graph is well defined.

no code implementations • ICML 2020 • Sen Na, Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar

We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.

no code implementations • 15 Feb 2020 • Song Liu, Yulong Zhang, Mingxuan Yi, Mladen Kolar

Density Ratio Estimation has attracted attention from the machine learning community due to its ability to compare the underlying distributions of two datasets.
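One standard reduction (not necessarily the estimator studied in the paper) trains a probabilistic classifier to separate the two datasets; the classifier's log-odds then estimate the log density ratio. The Gaussian toy data and plain gradient-descent logistic fit below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

xp = rng.normal(1.0, 1.0, size=2000)   # samples from p = N(1, 1)
xq = rng.normal(0.0, 1.0, size=2000)   # samples from q = N(0, 1)
X = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])

# logistic regression with features (x, 1), fit by full-batch gradient descent
F = np.stack([X, np.ones_like(X)], axis=1)
w = np.zeros(2)
for _ in range(2000):
    p1 = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p1 - y) / len(y)

def log_ratio(x):
    # with equal sample sizes, log-odds of the classifier = log p(x)/q(x)
    return w[0] * x + w[1]

# ground truth for these Gaussians: log N(x;1,1)/N(x;0,1) = x - 0.5,
# so log_ratio(0.5) should be close to 0
print(log_ratio(0.5))
```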

no code implementations • 14 Dec 2019 • Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar

Multi-agent reinforcement learning has been successfully applied to a number of challenging problems.


1 code implementation • NeurIPS 2019 • Ming Yu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang

We study the safe reinforcement learning problem with nonlinear function approximation, where policy optimization is formulated as a constrained optimization problem with both the objective and the constraint being nonconvex functions.


1 code implementation • NeurIPS 2019 • Boxin Zhao, Y. Samuel Wang, Mladen Kolar

We consider the problem of estimating the difference between two functional undirected graphical models with shared structures.

1 code implementation • 12 Sep 2019 • Sen Na, Mladen Kolar, Oluwasanmi Koyejo

Differential graphical models are designed to represent the difference between the conditional dependence structures of two groups, thus are of particular interest for scientific investigation.

1 code implementation • 12 Jun 2019 • You-Lin Chen, Mladen Kolar, Ruey S. Tsay

In many applications, such as classification of images or videos, it is of interest to develop a framework for tensor data rather than relying on ad hoc transformations of data to vectors, which raise computational and under-sampling issues.

no code implementations • 8 Jun 2019 • Sinong Geng, Minhao Yan, Mladen Kolar, Oluwasanmi Koyejo

We propose a partially linear additive Gaussian graphical model (PLA-GGM) for the estimation of associations between random variables distorted by observed confounders.

no code implementations • 27 Nov 2018 • Sen Na, Mladen Kolar

We study the estimation of the parametric components of single and multiple index volatility models.

no code implementations • NeurIPS 2018 • Ming Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, Zhaoran Wang

In this paper, we study the Gaussian embedding model and develop the first theoretical results for exponential family embedding models.

1 code implementation • 16 Oct 2018 • Sen Na, Zhuoran Yang, Zhaoran Wang, Mladen Kolar

We study the parameter estimation problem for a varying index coefficient model in high dimensions.

no code implementations • 16 Oct 2018 • Sinong Geng, Mladen Kolar, Oluwasanmi Koyejo

Empirical results are presented using simulated and real brain imaging data, which suggest that our approach improves precision matrix estimation, as compared to baselines, when confounding is present.

no code implementations • 14 Jun 2018 • Ming Yu, Varun Gupta, Mladen Kolar

Specifically, we endow each node with two node-topic vectors: an influence vector that measures how influential/authoritative they are on each topic; and a receptivity vector that measures how receptive/susceptible they are to each topic.

no code implementations • 20 Feb 2018 • Ming Yu, Varun Gupta, Mladen Kolar

We show linear convergence of the iterates obtained by GDT to a region within statistical error of an optimal solution.

no code implementations • 11 Feb 2018 • Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro

We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines.
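The weighted-averaging idea can be sketched as follows: each machine fits its own task locally, then mixes its estimate with neighbors' estimates using graph weights. The ridge solver, graph, mixing weights, and single mixing round are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(4)

def ridge(X, y, lam=1.0):
    """Local per-task estimator: ridge regression in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# three related tasks: a shared parameter plus small task-specific shifts
w_true = np.array([1.0, -1.0])
tasks = []
for _ in range(3):
    wt = w_true + 0.1 * rng.standard_normal(2)
    X = rng.standard_normal((50, 2))
    y = X @ wt + 0.5 * rng.standard_normal(50)
    tasks.append((X, y))

local = [ridge(X, y) for X, y in tasks]

# mixing matrix for a path graph 0-1-2 (rows sum to 1): each machine
# averages its estimate with messages from its neighbors
W = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
mixed = [sum(W[i, j] * local[j] for j in range(3)) for i in range(3)]
err = np.linalg.norm(mixed[1] - w_true)
print(err)  # small: averaging over related tasks reduces estimation error
```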

no code implementations • NeurIPS 2017 • Arun Suggala, Mladen Kolar, Pradeep K. Ravikumar

Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric assumptions on the form of the density functions.

no code implementations • 14 Nov 2017 • Sen Na, Mingyuan Ma, Mladen Kolar

Since the development of the Peaceman-Rachford Splitting Method (PRSM), many batch algorithms based on it have been studied in depth.

1 code implementation • 6 Sep 2017 • Ming Yu, Varun Gupta, Mladen Kolar

We consider the problem of estimating the latent structure of a social network based on the observed information diffusion events, or cascades, where the observations for a given cascade consist of only the timestamps of infection for infected nodes but not the source of the infection.

no code implementations • 20 Feb 2017 • Jelena Bradic, Mladen Kolar

The main technical results are a Bahadur representation of the debiased estimator that is uniform over a range of quantiles and uniform convergence of the quantile process to the Brownian bridge process; both are of independent interest.

no code implementations • NeurIPS 2016 • Ming Yu, Mladen Kolar, Varun Gupta

As a result, there is a large body of literature focused on consistent model selection.

no code implementations • 10 Oct 2016 • Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nathan Srebro

Sketching techniques have become popular for scaling up machine learning algorithms by reducing the sample size or dimensionality of massive data sets, while still maintaining the statistical power of big data.

no code implementations • ICML 2017 • Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang

We propose a novel, efficient approach for distributed sparse learning in high-dimensions, where observations are randomly partitioned across machines.

no code implementations • 7 Mar 2016 • Jialei Wang, Mladen Kolar, Nathan Srebro

We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate, but related, task in an unknown shared low-dimensional subspace, i.e., when the predictor matrix has low rank.

no code implementations • 28 Dec 2015 • Junwei Lu, Mladen Kolar, Han Liu

The testing procedures are based on a high dimensional, debiasing-free moment estimator, which uses a novel kernel smoothed Kendall's tau correlation matrix as an input statistic.

no code implementations • NeurIPS 2015 • Siqi Sun, Mladen Kolar, Jinbo Xu

Learning the structure of a probabilistic graphical model is a well-studied problem in the machine learning community due to its importance in many applications.

no code implementations • 2 Oct 2015 • Jialei Wang, Mladen Kolar, Nathan Srebro

We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.

no code implementations • 10 Mar 2015 • Junwei Lu, Mladen Kolar, Han Liu

We develop a novel procedure for constructing confidence bands for components of a sparse additive model.

no code implementations • 26 Feb 2015 • Rina Foygel Barber, Mladen Kolar

Undirected graphical models are used extensively in the biological and social sciences to encode a pattern of conditional independences between variables, where the absence of an edge between two nodes $a$ and $b$ indicates that the corresponding two variables $X_a$ and $X_b$ are believed to be conditionally independent, after controlling for all other measured variables.
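For the Gaussian case, this edge/conditional-independence correspondence is exactly the zero pattern of the precision matrix, which a few lines of numpy can demonstrate (the chain graph and sample size below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# For X ~ N(0, Theta^{-1}), X_a and X_b are conditionally independent
# given all other variables exactly when Theta[a, b] == 0.
Theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])   # chain graph 0-1-2: no edge (0, 2)
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=5000)

Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))
# marginally, nodes 0 and 2 ARE correlated (Sigma[0, 2] != 0) ...
print(abs(Sigma[0, 2]) > 0.04)        # True
# ... but the estimated precision recovers their conditional independence
print(abs(Theta_hat[0, 2]) < 0.15)    # True
```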

no code implementations • 30 Dec 2014 • Tianqi Zhao, Mladen Kolar, Han Liu

Our de-biasing procedure does not require solving the $L_1$-penalized composite quantile regression.

no code implementations • 24 Dec 2014 • Jialei Wang, Mladen Kolar

Given observations of a random vector $(X, Z)$, where $X$ is a high-dimensional vector and $Z$ is a low-dimensional index variable, we study the problem of estimating the conditional inverse covariance matrix $\Omega(z) = (E[(X-E[X \mid Z])(X-E[X \mid Z])^T \mid Z=z])^{-1}$ under the assumption that the set of non-zero elements is small and does not depend on the index variable.

no code implementations • 23 Nov 2014 • Irina Gaynanova, Mladen Kolar

This article considers the problem of multi-group classification in the setting where the number of variables $p$ is larger than the number of observations $n$.

no code implementations • 26 Sep 2013 • Larry Wasserman, Mladen Kolar, Alessandro Rinaldo

In particular, we consider: cluster graphs, restricted partial correlation graphs and correlation graphs.

no code implementations • 27 Jun 2013 • Mladen Kolar, Han Liu

Through careful analysis, we establish rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting.

no code implementations • 29 Oct 2012 • Mladen Kolar, Han Liu, Eric P. Xing

Many real world network problems often concern multivariate nodal attributes such as image, textual, and multi-view feature vectors on nodes, rather than simple univariate nodal attributes.

no code implementations • 15 Sep 2012 • Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh

We consider the problems of detection and localization of a contiguous block of weak activation in a large matrix, from a small number of noisy, possibly adaptive, compressive (linear) measurements.

no code implementations • NeurIPS 2011 • Mladen Kolar, Sivaraman Balakrishnan, Alessandro Rinaldo, Aarti Singh

We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries.

no code implementations • NeurIPS 2009 • Mladen Kolar, Le Song, Eric P. Xing

In this paper, we investigate sparsistent learning of a sub-family of this model: piecewise-constant VCVS models.

no code implementations • NeurIPS 2009 • Le Song, Mladen Kolar, Eric P. Xing

In this paper, we propose a time-varying dynamic Bayesian network (TV-DBN) for modeling the structurally varying directed dependency structures underlying non-stationary biological/neural time series.

no code implementations • 14 Jul 2009 • Mladen Kolar, Eric P. Xing

Network models have been popular for modeling and representing complex relationships and dependencies between observed variables.
