no code implementations • ICML 2020 • Shubhanshu Shekhar, Tara Javidi, Mohammad Ghavamzadeh
We consider the problem of allocating a fixed budget of samples to a finite set of discrete distributions to learn them uniformly well (minimizing the maximum error) in terms of four common distance measures: $\ell_2^2$, $\ell_1$, $f$-divergence, and separation distance.
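As a toy illustration of what "uniformly well" means here, the sketch below (plain NumPy, with made-up distributions and a naive equal split of the budget, none of which come from the paper) estimates each distribution empirically and reports the worst-case $\ell_2^2$ and $\ell_1$ errors over the collection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative distributions and budget; not taken from the paper.
dists = [np.array([0.5, 0.3, 0.2]),
         np.array([0.9, 0.05, 0.05]),
         np.array([0.25, 0.25, 0.25, 0.25])]
budget = 3000
alloc = [budget // len(dists)] * len(dists)   # naive equal split of the budget

def empirical(p, n):
    """Empirical estimate of p from n i.i.d. samples."""
    return rng.multinomial(n, p) / n

l22, l1 = [], []
for p, n in zip(dists, alloc):
    q = empirical(p, n)
    l22.append(np.sum((p - q) ** 2))
    l1.append(np.sum(np.abs(p - q)))

# "Uniformly well" = controlling the worst error over the whole collection.
print("max l2^2 error:", max(l22))
print("max l1 error:  ", max(l1))
```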
no code implementations • 29 Sep 2023 • Juan Elenter, Navid Naderializadeh, Tara Javidi, Alejandro Ribeiro
Continual learning is inherently a constrained learning problem.
no code implementations • 4 Aug 2023 • Nasimeh Heydaribeni, Ruisi Zhang, Tara Javidi, Cristina Nita-Rotaru, Farinaz Koushanfar
We theoretically prove the robustness of our algorithm against data and model poisoning attacks in a decentralized linear regression setting.
no code implementations • 31 May 2022 • Avishek Ghosh, Abishek Sankararaman, Kannan Ramchandran, Tara Javidi, Arya Mazumdar
We propose and analyze a decentralized and asynchronous learning algorithm, namely Decentralized Non-stationary Competing Bandits (\texttt{DNCB}), where the agents play (restrictive) successive-elimination-type learning algorithms to learn their preference over the arms.
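For readers unfamiliar with the elimination idea, here is a minimal single-agent successive-elimination loop on Bernoulli arms; it is a standard textbook routine, and the decentralized, non-stationary, and matching aspects of DNCB are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

def successive_elimination(means, rounds=500, delta=0.05):
    """Plain single-agent successive elimination on Bernoulli arms.

    Illustrates only the elimination mechanism; DNCB's decentralized and
    non-stationary machinery is out of scope for this sketch.
    """
    k = len(means)
    active = list(range(k))
    pulls = np.zeros(k)
    sums = np.zeros(k)
    for _ in range(rounds):
        if len(active) == 1:
            break
        for a in active:                      # sample every surviving arm once
            sums[a] += rng.random() < means[a]
            pulls[a] += 1
        emp = sums[active] / pulls[active]
        rad = np.sqrt(np.log(2 * k * pulls[active] ** 2 / delta) / (2 * pulls[active]))
        best_lcb = np.max(emp - rad)
        # drop arms whose upper confidence bound falls below the best lower bound
        active = [a for a, m, r in zip(active, emp, rad) if m + r >= best_lcb]
    return active

print(successive_elimination([0.2, 0.5, 0.55, 0.9]))   # typically narrows to the best arm
```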
no code implementations • 12 Mar 2022 • Shubhanshu Shekhar, Tara Javidi
We study the kernelized bandit problem, which involves designing an adaptive strategy for querying a noisy zeroth-order oracle to efficiently learn about the optimizer of an unknown function $f$ with a norm bounded by $M<\infty$ in a Reproducing Kernel Hilbert Space (RKHS) associated with a positive definite kernel $K$.
no code implementations • 28 Oct 2021 • Sattar Vakili, Jonathan Scarlett, Tara Javidi
Confidence intervals are a crucial building block in the analysis of various online learning problems.
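As context, the simplest such building block is a Hoeffding interval for a bounded mean; the sketch below shows it only to fix ideas and is not the kernel-based interval studied in the paper.

```python
import numpy as np

def hoeffding_interval(samples, delta=0.05):
    """Hoeffding confidence interval for the mean of [0, 1]-valued samples."""
    n = len(samples)
    mean = float(np.mean(samples))
    radius = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return mean - radius, mean + radius

rng = np.random.default_rng(0)
print(hoeffding_interval(rng.random(200)))   # interval containing the true mean w.h.p.
```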
no code implementations • 7 Sep 2021 • Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, Tara Javidi
Deep neural networks have been shown to be vulnerable to backdoor, or trojan, attacks, in which an adversary embeds a trigger in the network at training time so that the model correctly classifies all standard inputs but produces a targeted, incorrect classification on any input that contains the trigger.
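To make the threat model concrete, the snippet below stamps a small square trigger into an image array; a backdoored model behaves normally on clean inputs but flips to the attacker's target label when such a pattern is present. The patch size and value here are arbitrary choices for illustration.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square trigger into the corner of an (H, W, C) image array."""
    poisoned = image.copy()
    poisoned[:size, :size, :] = patch_value
    return poisoned

clean = np.zeros((32, 32, 3), dtype=np.float32)
print(add_trigger(clean)[:4, :4, 0])   # the trigger occupies the top-left corner
```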
no code implementations • 15 Jul 2021 • Vyacheslav Kungurtsev, Adam Cobb, Tara Javidi, Brian Jalaian
Federated learning performed by a decentralized network of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
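The canonical aggregation step in this setting is FedAvg-style weighted averaging of client updates; a minimal sketch (not tied to any specific system discussed in the paper) is below.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters, FedAvg style.

    client_weights: list of 1-D parameter vectors, one per client
    client_sizes:   number of local examples each client trained on
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                 # (num_clients, num_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Toy round with three clients holding different amounts of data.
print(fedavg([np.array([1.0, 2.0]),
              np.array([3.0, 4.0]),
              np.array([5.0, 6.0])],
             [10, 30, 60]))
```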
no code implementations • 21 Jun 2021 • Nancy Ronquillo, Tara Javidi
We consider the problem of active and sequential beam tracking at mmWave frequencies and above.
no code implementations • NeurIPS 2021 • Shubhanshu Shekhar, Greg Fields, Mohammad Ghavamzadeh, Tara Javidi
Machine learning models trained on uncurated datasets can often end up adversely affecting inputs belonging to underrepresented groups.
no code implementations • 8 Dec 2020 • Dhruva Kartik, Neeraj Sood, Urbashi Mitra, Tara Javidi
A Bayesian variant of the existing upper confidence bound (UCB) based approaches is proposed.
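For reference, the textbook Bayes-UCB rule on Bernoulli arms with Beta(1, 1) priors looks like the sketch below; the paper's variant is tailored to its own setting and is not reproduced here.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

def bayes_ucb(means, horizon=2000):
    """Textbook Bayes-UCB on Bernoulli arms with Beta(1, 1) priors."""
    k = len(means)
    wins = np.ones(k)
    losses = np.ones(k)
    for t in range(1, horizon + 1):
        # play the arm with the largest upper posterior quantile
        quantiles = beta.ppf(1.0 - 1.0 / t, wins, losses)
        arm = int(np.argmax(quantiles))
        reward = rng.random() < means[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
    return wins / (wins + losses)              # posterior mean of each arm

print(bayes_ucb([0.3, 0.5, 0.7]))
```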
no code implementations • 4 Sep 2020 • Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar
We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.
no code implementations • 15 May 2020 • Sung-En Chiu, Tara Javidi
Motivated by practical applications such as initial beam alignment in array processing, heavy hitter detection in networking, and visual search in robotics, we consider practically important complexity constraints/metrics: \textit{time complexity}, \textit{computational and memory complexity}, and the complexity of possible query sets in terms of geometry and cardinality.
no code implementations • 11 May 2020 • Shubhanshu Shekhar, Tara Javidi
We aim to optimize a black-box function $f:\mathcal{X} \mapsto \mathbb{R}$ under the assumption that $f$ is H\"older smooth and has bounded norm in the RKHS associated with a given kernel $K$.
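For readers unfamiliar with the terminology, Hölder smoothness here refers to the standard condition $|f(x) - f(y)| \le L \, \|x - y\|^{\alpha}$ for all $x, y \in \mathcal{X}$, with exponent $\alpha \in (0, 1]$ and constant $L$; the specific constants assumed by the paper are not restated here.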
no code implementations • 8 Apr 2020 • Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.
9 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • 15 Nov 2019 • Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
This paper introduces ASCAI, a novel adaptive sampling methodology that can learn how to effectively compress Deep Neural Networks (DNNs) for accelerated inference on resource-constrained platforms.
no code implementations • 28 Oct 2019 • Shubhanshu Shekhar, Tara Javidi, Mohammad Ghavamzadeh
We consider the problem of allocating samples to a finite set of discrete distributions in order to learn them uniformly well in terms of four common distance measures: $\ell_2^2$, $\ell_1$, $f$-divergence, and separation distance.
no code implementations • 1 Jun 2019 • Shubhanshu Shekhar, Mohammad Ghavamzadeh, Tara Javidi
We construct and analyze active learning algorithms for the problem of binary classification with abstention.
1 code implementation • NeurIPS 2019 • Songbai Yan, Kamalika Chaudhuri, Tara Javidi
We prove that the resulting algorithm is statistically consistent and more label-efficient than prior work.
no code implementations • 24 May 2019 • Yongxi Lu, Ziyao Tang, Tara Javidi
Partially annotated clips contain rich temporal contexts that can complement the sparse key frame annotations in providing supervision for model training.
no code implementations • 24 May 2019 • Anusha Lalitha, Xinghan Wang, Osman Kilinc, Yongxi Lu, Tara Javidi, Farinaz Koushanfar
The proposed algorithm can be viewed as a Bayesian and peer-to-peer variant of federated learning in which each agent keeps a "posterior probability distribution" over the global model parameters.
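One way to picture a Bayesian, peer-to-peer update is the Gaussian sketch below: each agent keeps a Gaussian posterior over a scalar parameter and fuses it with its neighbors' by precision-weighted (geometric) pooling. The weights, graph, and Gaussian form are assumptions of this illustration, not details of the paper's algorithm.

```python
import numpy as np

def fuse_gaussian_posteriors(means, variances, weights):
    """Geometric (log-linear) pooling of Gaussian posteriors.

    Raising each density to weight w_i and renormalizing yields another
    Gaussian whose precision is the weighted sum of the inputs' precisions.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused_precision = np.sum(weights * precisions)
    fused_mean = np.sum(weights * precisions * means) / fused_precision
    return fused_mean, 1.0 / fused_precision

# An agent mixing its own posterior with two neighbors', equal weights.
print(fuse_gaussian_posteriors(means=[0.9, 1.1, 1.4],
                               variances=[0.2, 0.3, 0.5],
                               weights=[1/3, 1/3, 1/3]))
```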
no code implementations • 23 May 2019 • Shubhanshu Shekhar, Mohammad Ghavamzadeh, Tara Javidi
We then propose a plug-in classifier that employs unlabeled samples to decide the region of abstention and derive an upper-bound on the excess risk of our classifier under standard \emph{H\"older smoothness} and \emph{margin} assumptions.
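The flavor of such a plug-in rule is a Chow-style abstention: estimate the regression function and abstain wherever the estimate is too close to 1/2. The estimator and threshold below are placeholders for illustration, not the construction analyzed in the paper.

```python
import numpy as np

def plug_in_with_abstention(eta_hat, threshold=0.1):
    """Chow-style plug-in rule: predict the likelier class, abstain near 1/2.

    eta_hat: estimated P(Y = 1 | X = x) at the query points
    Returns +1 / -1 predictions, or 0 where the classifier abstains.
    """
    eta_hat = np.asarray(eta_hat, dtype=float)
    preds = np.where(eta_hat >= 0.5, 1, -1)
    abstain = np.abs(eta_hat - 0.5) < threshold
    return np.where(abstain, 0, preds)

print(plug_in_with_abstention([0.05, 0.45, 0.55, 0.95]))   # -> [-1  0  0  1]
```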
no code implementations • 26 Feb 2019 • Shubhanshu Shekhar, Tara Javidi
In this paper, we consider the problem of estimating the level set of a black-box function from noisy and expensive evaluation queries.
no code implementations • 31 Jan 2019 • Anusha Lalitha, Osman Cihan Kilinc, Tara Javidi, Farinaz Koushanfar
We consider the problem of training a machine learning model over a network of nodes in a fully decentralized framework.
no code implementations • 19 Dec 2018 • Sung-En Chiu, Nancy Ronquillo, Tara Javidi
In addition, given knowledge of an optimal directional beamforming vector, large antenna arrays have been shown to overcome both the severe signal attenuation at mmWave frequencies and the interference problem.
2 code implementations • CVPR 2019 • Yue Meng, Yongxi Lu, Aman Raj, Samuel Sunarjo, Rui Guo, Tara Javidi, Gaurav Bansal, Dinesh Bharadia
SIGNet is shown to improve upon the state-of-the-art unsupervised learning for depth prediction by 30% (in squared relative error).
Ranked #64 on Monocular Depth Estimation on KITTI Eigen split
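For reference, the squared relative error reported above is the standard depth metric sketched below; this is a generic definition, not code from the SIGNet release.

```python
import numpy as np

def squared_relative_error(pred_depth, gt_depth):
    """Standard 'Sq Rel' depth metric: mean of (pred - gt)^2 / gt."""
    pred = np.asarray(pred_depth, dtype=float)
    gt = np.asarray(gt_depth, dtype=float)
    return float(np.mean((pred - gt) ** 2 / gt))

print(squared_relative_error([9.5, 20.0, 31.0], [10.0, 20.0, 30.0]))
```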
no code implementations • 24 Nov 2018 • Ziyao Tang, Yongxi Lu, Tara Javidi
One of the greatest challenges in the design of a real-time perception system for autonomous driving vehicles and drones is the conflicting requirements of safety (high prediction accuracy) and efficiency.
no code implementations • 17 Sep 2018 • Mohammad Javad Khojasteh, Anatoly Khina, Massimo Franceschetti, Tara Javidi
In the case of scalar plants, we derive an upper bound on the attacker's deception probability for any measurable control policy when the attacker uses an arbitrary learning algorithm to estimate the system dynamics.
no code implementations • ICML 2018 • Songbai Yan, Kamalika Chaudhuri, Tara Javidi
We consider active learning with logged data, where labeled examples are drawn conditioned on a predetermined logging policy, and the goal is to learn a classifier on the entire population, not just conditioned on the logging policy.
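A standard ingredient in this setting is inverse-propensity weighting to correct the bias introduced by the logging policy; the sketch below shows only that ingredient, not the paper's full algorithm, and the variable names are illustrative.

```python
import numpy as np

def ips_error_estimate(mistakes, labeled, propensities):
    """Inverse-propensity estimate of population error from logged data.

    mistakes:     1/0 error indicator per drawn example (meaningful where labeled)
    labeled:      1 if the logging policy revealed this example's label, else 0
    propensities: probability the logging policy would label each example
    """
    mistakes = np.asarray(mistakes, dtype=float)
    labeled = np.asarray(labeled, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    return float(np.mean(labeled * mistakes / propensities))

# Toy logged dataset of four draws, two of which were labeled.
print(ips_error_estimate(mistakes=[1, 0, 0, 1],
                         labeled=[1, 0, 1, 0],
                         propensities=[0.5, 0.9, 0.5, 0.1]))
```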
no code implementations • ICLR 2018 • Bita Darvish Rouhani, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model.
no code implementations • 5 Dec 2017 • Shubhanshu Shekhar, Tara Javidi
In this paper, we study the problem of maximizing a black-box function $f:\mathcal{X} \to \mathbb{R}$ in the Bayesian framework with a Gaussian Process (GP) prior.
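A generic GP-UCB loop of the sort this line of work builds on can be sketched with scikit-learn as below; the kernel, grid, and exploration weight are illustrative choices, and this is not the acquisition rule analyzed in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def f(x):                                   # unknown objective, queried with noise
    return np.sin(3 * x) + 0.1 * rng.normal()

grid = np.linspace(0, 2, 200).reshape(-1, 1)
X, y = [np.array([0.5]), np.array([1.5])], [f(0.5), f(1.5)]   # two seed queries

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.01)
    gp.fit(np.array(X), np.array(y))
    mean, std = gp.predict(grid, return_std=True)
    ucb = mean + 2.0 * std                  # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    X.append(x_next)
    y.append(f(x_next[0]))

print("best query so far:", X[int(np.argmax(y))])
```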
no code implementations • 8 Sep 2017 • Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar
Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.
1 code implementation • CVPR 2017 • Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, Rogerio Feris
Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them.
no code implementations • NeurIPS 2016 • Songbai Yan, Kamalika Chaudhuri, Tara Javidi
We study active learning where the labeler can not only return incorrect labels but also abstain from labeling.
1 code implementation • CVPR 2016 • Yongxi Lu, Tara Javidi, Svetlana Lazebnik
Compared to methods based on fixed anchor locations, our approach naturally adapts to cases where object instances are sparse and small.
no code implementations • 5 Oct 2015 • Yongxi Lu, Tara Javidi
Efficient generation of high-quality object proposals is an essential step in state-of-the-art object detection systems based on deep convolutional neural networks (DCNN) features.