1 code implementation • 21 Oct 2024 • Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu
Recently, there have been numerous studies on feature learning with neural networks, specifically on learning single- and multi-index models where the target is a function of a low-dimensional projection of the input.
no code implementations • 18 Oct 2024 • Adel Javanmard, Jingwei Ji, Renyuan Xu
We show that our policy achieves lower regret than both the policy that treats each security individually and the policy that treats all securities identically.
no code implementations • 17 Jun 2024 • Rudrajit Das, Inderjit S. Dhillon, Alessandro Epasto, Adel Javanmard, Jieming Mao, Vahab Mirrokni, Sujay Sanghavi, Peilin Zhong
In this paper, we theoretically analyze retraining in a linearly separable setting with randomly corrupted labels, and prove that retraining can improve upon the population accuracy obtained by initially training with the given (noisy) labels.
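For intuition, here is a minimal sketch of one common form of retraining, refitting the model on its own predicted labels; the data-generating setup (Gaussian features, 20% random label flips) and the use of scikit-learn's `LogisticRegression` are illustrative assumptions of this sketch, not the paper's exact procedure.

```python
# Sketch: train on noisy labels, then retrain on the model's own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y_clean = (X @ w_star > 0).astype(int)       # linearly separable ground truth

# Randomly corrupt a fraction of the labels.
flip = rng.random(n) < 0.2
y_noisy = np.where(flip, 1 - y_clean, y_clean)

# Round 1: train on the given (noisy) labels.
clf = LogisticRegression(max_iter=1000).fit(X, y_noisy)

# Round 2 (retraining): relabel the data with the model's predictions, refit.
clf_retrained = LogisticRegression(max_iter=1000).fit(X, clf.predict(X))

# Compare population-style accuracy on fresh data from the same distribution.
X_test = rng.standard_normal((10000, d))
y_test = (X_test @ w_star > 0).astype(int)
print(clf.score(X_test, y_test), clf_retrained.score(X_test, y_test))
```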
1 code implementation • 1 Jun 2024 • Gene Li, Lin Chen, Adel Javanmard, Vahab Mirrokni
We consider a weakly supervised learning problem called Learning from Label Proportions (LLP), where examples are grouped into "bags" and only the average label within each bag is revealed to the learner.
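A minimal sketch of the LLP observation model described above; the bag size and the random labels are illustrative assumptions.

```python
# LLP setup: the learner sees the feature vectors and, per bag, only the
# average label (the label proportion), never the individual labels.
import numpy as np

rng = np.random.default_rng(1)
n, d, bag_size = 1000, 10, 25
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, size=n)                   # individual labels, hidden

bags = np.split(np.arange(n), n // bag_size)     # 40 bags of 25 examples
label_proportions = [y[b].mean() for b in bags]  # the only supervision revealed
```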
no code implementations • 7 Feb 2024 • Adel Javanmard, Matthew Fahrbach, Vahab Mirrokni
This work studies algorithms for learning from aggregate responses.
no code implementations • 20 Jan 2024 • Adel Javanmard, Lin Chen, Vahab Mirrokni, Ashwinkumar Badanidiyuru, Gang Fu
In this paper, we study two natural loss functions for learning from aggregate responses: the bag-level loss and the instance-level loss.
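For concreteness, a hedged sketch of the two losses for a linear predictor with squared loss; the function names and the pseudo-labeling reading of the instance-level loss are illustrative, not taken verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
bags = np.split(np.arange(100), 10)        # 10 bags of 10 examples each
proportions = rng.random(10)               # revealed average response per bag

def bag_level_loss(theta):
    # Penalize the gap between each bag's *average* prediction and its
    # revealed label proportion.
    return np.mean([((X[b] @ theta).mean() - p) ** 2
                    for b, p in zip(bags, proportions)])

def instance_level_loss(theta):
    # Give every instance its bag's proportion as a pseudo-label and average
    # the loss per instance.
    return np.mean([np.mean((X[b] @ theta - p) ** 2)
                    for b, p in zip(bags, proportions)])

theta = rng.standard_normal(5)
print(bag_level_loss(theta), instance_level_loss(theta))
```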
no code implementations • 2 Aug 2023 • Adel Javanmard, Vahab Mirrokni, Jean Pouget-Abadie
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their potentially sensitive responses.
3 code implementations • 12 Apr 2023 • CJ Carey, Travis Dick, Alessandro Epasto, Adel Javanmard, Josh Karlin, Shankar Kumar, Andres Munoz Medina, Vahab Mirrokni, Gabriel Henrique Nunes, Sergei Vassilvitskii, Peilin Zhong
In this work, we present a new theoretical framework to measure re-identification risk in such user representations.
no code implementations • 28 Mar 2023 • Rashmi Ranjan Bhuyan, Adel Javanmard, Sungchul Kim, Gourab Mukherjee, Ryan A. Rossi, Tong Yu, Handong Zhao
We consider dynamic pricing strategies in a streaming longitudinal data setup where the objective is to maximize, over time, the cumulative profit across a large number of customer segments.
1 code implementation • 27 Mar 2023 • Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah
We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution.
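As a point of reference, the sketch below runs SGD with a standard $1/\sqrt{t}$ step-size schedule through an abrupt distribution shift; the paper designs schedules tailored to such shifts, which this baseline does not attempt to replicate.

```python
# SGD on streaming least-squares data whose ground truth changes midway.
import numpy as np

rng = np.random.default_rng(2)
d, T = 5, 2000
theta = np.zeros(d)

def lr(t):
    # Standard 1/sqrt(t) baseline schedule (not the paper's regret-minimizing
    # schedule, which adapts to the changing distribution).
    return 0.5 / np.sqrt(t)

w_true = np.ones(d)
for t in range(1, T + 1):
    if t == T // 2:
        w_true = -w_true                  # abrupt distribution shift
    x = rng.standard_normal(d)
    y = x @ w_true + 0.1 * rng.standard_normal()
    grad = 2 * (theta @ x - y) * x        # gradient of the squared loss
    theta -= lr(t) * grad
```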
no code implementations • 30 Oct 2022 • Adel Javanmard, Simeng Shao, Jacob Bien
Large datasets make it possible to build predictive models that can capture heterogeneous relationships between the response variable and features.
no code implementations • 5 Sep 2022 • Adel Javanmard, Mohammad Mehrabi
Performance of classifiers is often measured in terms of average accuracy on test data.
no code implementations • 13 Jan 2022 • Hamed Hassani, Adel Javanmard
Our developed theory reveals the nontrivial effect of overparametrization on robustness and indicates that for adversarially trained random features models, high overparametrization can hurt robust generalization.
1 code implementation • 22 Oct 2021 • Adel Javanmard, Mohammad Mehrabi
We develop a theory to show that the low-dimensional manifold structure allows one to obtain models that are nearly optimal with respect to both the standard accuracy and the robust accuracy measures.
no code implementations • 11 Aug 2021 • Simeng Shao, Jacob Bien, Adel Javanmard
In many domains, data measurements can naturally be associated with the leaves of a tree, expressing the relationships among these measurements.
no code implementations • 15 Jan 2021 • Mohammad Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup Rao, Tung Mai
We study the tradeoff between standard risk and adversarial risk, and derive the Pareto-optimal tradeoff, achievable over specific classes of models, in the infinite-data limit with the feature dimension kept fixed.
1 code implementation • 4 Dec 2020 • Dmitrii M. Ostrovskii, Mohamed Ndaoud, Adel Javanmard, Meisam Razaviyayn
Here we provide matching upper and lower bounds on the sample complexity as given by $\min\{1/\Delta^2,\sqrt{r}/\Delta\}$ up to a constant factor; here $\Delta$ is a measure of separation between $\mathbb{P}_0$ and $\mathbb{P}_1$ and $r$ is the rank of the design covariance matrix.
no code implementations • 21 Oct 2020 • Adel Javanmard, Mahdi Soltanolkotabi
Despite the wide empirical success of modern machine learning algorithms and models in a multitude of applications, they are known to be highly susceptible to seemingly small indiscernible perturbations to the input data known as "adversarial attacks".
no code implementations • NeurIPS 2019 • Negin Golrezaei, Adel Javanmard, Vahab Mirrokni
Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions.
no code implementations • 24 Feb 2020 • Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani
Furthermore, we precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary minimax adversarial training approach in a high-dimensional regime where the number of data points and the parameters of the model grow in proportion to each other.
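In the linear-regression instance of this problem, the inner maximization of the minimax objective admits a closed form, which makes the adversarial training loss easy to state; the sketch below assumes squared loss and $\ell_2$-bounded perturbations of size $\varepsilon$.

```python
# For squared loss and a perturbation delta with ||delta||_2 <= eps,
#   max_{||delta|| <= eps} (y - <theta, x + delta>)^2
#     = (|y - <theta, x>| + eps * ||theta||_2)^2,
# so minimax adversarial training reduces to minimizing the loss below.
import numpy as np

def adversarial_sq_loss(theta, X, y, eps):
    residual = np.abs(y - X @ theta)        # per-example |y - <theta, x>|
    return np.mean((residual + eps * np.linalg.norm(theta)) ** 2)

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 3)), rng.standard_normal(50)
print(adversarial_sq_loss(np.ones(3), X, y, eps=0.1))
```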
no code implementations • 4 Nov 2019 • Yash Deshpande, Adel Javanmard, Mohammad Mehrabi
Adaptive collection of data is commonplace in applications throughout science and engineering.
no code implementations • 10 Apr 2019 • Amin Jalali, Adel Javanmard, Maryam Fazel
Prior knowledge about properties of a target model often comes as discrete or combinatorial descriptions.
no code implementations • 5 Jan 2019 • Adel Javanmard, Marco Mondelli, Andrea Montanari
We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega$.
no code implementations • 4 Jan 2019 • Adel Javanmard, Hamid Nazerzadeh, Simeng Shao
We measure the performance of a pricing policy in terms of regret, which is the expected revenue loss with respect to a clairvoyant policy that knows the parameters of the choice model in advance and always sets the revenue-maximizing prices.
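A toy illustration of this regret notion, with deterministic valuations that are linear in the features (an assumption made only to keep the sketch short); the clairvoyant posts the valuation itself and always sells.

```python
# Regret of a posted-price policy against a clairvoyant that knows the model.
import numpy as np

rng = np.random.default_rng(3)
T, d = 1000, 4
theta_true = np.abs(rng.standard_normal(d))
theta_hat = theta_true + 0.3 * rng.standard_normal(d)  # imperfect estimate

regret = 0.0
for _ in range(T):
    x = np.abs(rng.standard_normal(d))
    v = theta_true @ x                   # buyer's valuation
    price = theta_hat @ x                # policy's posted price
    revenue = price if price <= v else 0.0   # sale occurs iff price <= valuation
    regret += v - revenue                # clairvoyant earns v every round
print(regret)
```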
no code implementations • 22 Oct 2018 • Ery Arias-Castro, Adel Javanmard, Bruno Pelletier
One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data.
no code implementations • 12 Mar 2018 • Adel Javanmard, Hamid Javadi
We consider the problem of variable selection in high-dimensional statistical models where the goal is to report a set of variables, out of many predictors $X_1, \dotsc, X_p$, that are relevant to a response of interest.
no code implementations • 16 Jul 2017 • Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee
In this paper we study the problem of learning a shallow artificial neural network that best fits a training data set.
no code implementations • 26 Apr 2017 • Adel Javanmard, Jason D. Lee
By duality between hypotheses testing and confidence intervals, the proposed framework can be used to obtain valid confidence intervals for various functionals of the model parameters.
no code implementations • 13 Jan 2017 • Adel Javanmard
In the first setting, feature vectors are chosen antagonistically by nature, and we prove that the regret of the PSGD pricing policy is of order $O(\sqrt{T} + \sum_{t=1}^T \sqrt{t}\,\delta_t)$.
no code implementations • 24 Sep 2016 • Adel Javanmard, Hamid Nazerzadeh
We study the pricing problem faced by a firm that sells a large number of products, described via a wide range of features, to customers that arrive over time.
no code implementations • 30 Mar 2016 • Adel Javanmard, Andrea Montanari, Federico Ricci-Tersenghi
In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition.
no code implementations • 29 Mar 2016 • Adel Javanmard, Andrea Montanari
In this paper we consider the problem of controlling FDR in an "online manner".
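To fix what "online" means here: each p-value must be accepted or rejected on arrival, before later hypotheses are seen. The sketch below shows a naive Bonferroni-style baseline that spends a summable budget $\alpha_t$ with $\sum_t \alpha_t = \alpha$; the paper's rules are more powerful, and this baseline only illustrates the protocol.

```python
import numpy as np

alpha = 0.05

def alpha_t(t):
    # sum_t alpha * 6 / (pi^2 t^2) = alpha, so even the family-wise error rate
    # (hence the FDR) is controlled at level alpha.
    return alpha * 6.0 / (np.pi ** 2 * t ** 2)

def online_test(pvalue_stream):
    # p-values arrive one at a time; each decision is final.
    return [p <= alpha_t(t) for t, p in enumerate(pvalue_stream, start=1)]

print(online_test([0.0001, 0.03, 0.004, 0.5]))
```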
no code implementations • 11 Aug 2015 • Adel Javanmard, Andrea Montanari
When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition $s_0 = o(n/ (\log p)^2)$.
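The debiasing step itself is a one-line correction of the Lasso estimate, $\hat\theta^d = \hat\theta + \frac{1}{n}\Sigma^{-1} X^\top (y - X\hat\theta)$; the sketch below takes the known covariance to be the identity, so $\Sigma^{-1}$ drops out, and the regularization strength is illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p, s0 = 200, 400, 5
X = rng.standard_normal((n, p))          # known covariance Sigma = I
theta_true = np.zeros(p)
theta_true[:s0] = 1.0
y = X @ theta_true + 0.5 * rng.standard_normal(n)

theta_lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

# Debiasing with known covariance (Sigma^{-1} = I here); each coordinate of
# the corrected estimate is approximately Gaussian around the truth.
theta_debiased = theta_lasso + X.T @ (y - X @ theta_lasso) / n
```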
no code implementations • 24 Feb 2015 • Sonia Bhaskar, Adel Javanmard
We consider the problem of noisy 1-bit matrix completion under an exact rank constraint on the true underlying matrix $M^*$.
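A sketch of the 1-bit observation model: each entry is observed as $\pm 1$ with probability governed by a link function of the corresponding entry of $M^*$; the logistic link used below is a common choice and an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
d1, d2, r = 50, 40, 3
U, V = rng.standard_normal((d1, r)), rng.standard_normal((d2, r))
M_star = U @ V.T                          # exact rank-r ground truth

f = lambda m: 1.0 / (1.0 + np.exp(-m))    # logistic link (an assumption)
Y = np.where(rng.random((d1, d2)) < f(M_star), 1, -1)   # 1-bit observations
```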
1 code implementation • 22 Feb 2015 • Adel Javanmard, Andrea Montanari
Given a sequence of null hypotheses $\mathcal{H}(n) = (H_1, \dots, H_n)$, Benjamini and Hochberg (1995) introduced the false discovery rate (FDR) criterion, which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level.
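The Benjamini-Hochberg procedure itself is short enough to state in code: sort the p-values, find the largest $k$ with $p_{(k)} \le kq/n$, and reject the hypotheses with the $k$ smallest p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Reject hypotheses so that the FDR is controlled at level q."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                     # indices of p-values, ascending
    thresholds = np.arange(1, n + 1) * q / n  # k*q/n for k = 1..n
    below = p[order] <= thresholds
    rejected = np.zeros(n, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()        # largest k with p_(k) <= k*q/n
        rejected[order[:k + 1]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74], q=0.05))
```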
no code implementations • NeurIPS 2013 • Adel Javanmard, Andrea Montanari
This in turn implies that it is extremely challenging to quantify the "uncertainty" associated with a certain parameter estimate.
no code implementations • 1 Nov 2013 • Adel Javanmard, Andrea Montanari
In the regime where the number of parameters $p$ is comparable to or exceeds the sample size $n$, a successful approach uses an $\ell_1$-penalized least squares estimator, known as the Lasso.
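Concretely, the Lasso solves $\min_\theta \frac{1}{2n}\|y - X\theta\|_2^2 + \lambda\|\theta\|_1$; a minimal sketch in the $p > n$ regime using scikit-learn, where the sparsity level and regularization strength are illustrative choices:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 100, 500                           # more parameters than samples
X = rng.standard_normal((n, p))
theta_true = np.zeros(p)
theta_true[:5] = 2.0                      # sparse ground truth
y = X @ theta_true + 0.5 * rng.standard_normal(n)

theta_hat = Lasso(alpha=0.2, fit_intercept=False).fit(X, y).coef_
print(np.nonzero(theta_hat)[0])           # recovered (sparse) support
```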
no code implementations • NeurIPS 2013 • Adel Javanmard, Andrea Montanari
In the high-dimensional regression model, a response variable is linearly related to $p$ covariates, but the sample size $n$ is smaller than $p$.
no code implementations • NeurIPS 2012 • Morteza Ibrahimi, Adel Javanmard, Benjamin Van Roy
In particular, our algorithm has an average cost of $(1+\epsilon)$ times the optimum cost after $T = \mathrm{polylog}(p)\, O(1/\epsilon^2)$.
no code implementations • 17 Jan 2013 • Adel Javanmard, Andrea Montanari
In this case we prove that a similar distributional characterization (termed "standard distributional limit") holds for $n$ much larger than $s_0 (\log p)^2$.
no code implementations • 24 Sep 2012 • Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham M. Kakade
The sufficient conditions for identifiability of these models are primarily based on weak expansion constraints on the topic-word matrix, for topic models, and on the directed acyclic graph, for Bayesian networks.