Search Results for author: Ameet Talwalkar

Found 46 papers, 20 papers with code

Model-Agnostic Characterization of Fairness Trade-offs

no code implementations ICML 2020 Joon Kim, Jiahao Chen, Ameet Talwalkar

There exist several inherent trade-offs when designing a fair model, such as those between the model's predictive accuracy and fairness, or even among different notions of fairness.


Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative

no code implementations15 Sep 2021 Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig

First, on three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020).


Learning-to-learn non-convex piecewise-Lipschitz functions

no code implementations19 Aug 2021 Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar

We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms.
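As a toy illustration of this setting, the following Reptile-style sketch meta-learns a shared initialization over simple quadratic tasks. The task family, learning rates, and function name are illustrative, not the paper's algorithm:

```python
import numpy as np

def meta_learn_init(task_centers, meta_lr=0.5, inner_lr=0.1,
                    inner_steps=20, meta_rounds=100):
    """Meta-learn a shared initialization phi: within each task, run a few
    gradient steps on the task loss ||w - c||^2, then move phi toward the
    average of the adapted solutions (a Reptile-style outer update)."""
    phi = np.zeros_like(task_centers[0], dtype=float)
    for _ in range(meta_rounds):
        deltas = []
        for c in task_centers:
            w = phi.copy()
            for _ in range(inner_steps):
                w -= inner_lr * 2.0 * (w - c)  # gradient of ||w - c||^2
            deltas.append(w - phi)
        phi = phi + meta_lr * np.mean(deltas, axis=0)
    return phi
```

For tasks clustered around a common center, the learned initialization lands near that center, so each new task needs only a few inner gradient steps.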


Finding and Fixing Spurious Patterns with Explanations

no code implementations3 Jun 2021 Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar

Machine learning models often use spurious patterns such as "relying on the presence of a person to detect a tennis racket," which do not generalize.

Data Augmentation

Sanity Simulations for Saliency Methods

no code implementations13 May 2021 Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Saliency methods are a popular class of feature attribution tools that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image.

Rethinking Neural Operations for Diverse Tasks

1 code implementation29 Mar 2021 Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar

An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains.

Image Classification, Neural Architecture Search

Interpretable Machine Learning: Moving From Mythos to Diagnostics

no code implementations10 Mar 2021 Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases.

Interpretable Machine Learning

Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability

1 code implementation ICLR 2021 Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, Ameet Talwalkar

We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability.
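The 2/η stability threshold behind this regime can already be seen on a quadratic: gradient descent with step size η shrinks the iterate only while the curvature (sharpness) stays below 2/η. A minimal illustration, not the paper's experiments:

```python
def gd_converges(sharpness, step_size, steps=100):
    """Run gradient descent on f(w) = 0.5 * sharpness * w**2 from w = 1.
    Each step multiplies w by (1 - step_size * sharpness), so the iterate
    shrinks iff sharpness < 2 / step_size."""
    w = 1.0
    for _ in range(steps):
        w -= step_size * sharpness * w
    return abs(w) < 1.0
```

At step size 1.0, a sharpness of 1.9 contracts while 2.1 diverges, which is the threshold the paper observes full-batch training hovering at.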

On Data Efficiency of Meta-learning

no code implementations30 Jan 2021 Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.

Meta-Learning, Personalized Federated Learning

Searching for Convolutions and a More Ambitious NAS

no code implementations1 Jan 2021 Nicholas Carl Roberts, Mikhail Khodak, Tri Dao, Liam Li, Nina Balcan, Christopher Re, Ameet Talwalkar

An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning.

Neural Architecture Search

A Learning Theoretic Perspective on Local Explainability

no code implementations ICLR 2021 Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar

In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations.

Interpretable Machine Learning, Learning Theory

Geometry-Aware Gradient Algorithms for Neural Architecture Search

1 code implementation ICLR 2021 Liam Li, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood.

Neural Architecture Search

FACT: A Diagnostic for Group Fairness Trade-offs

1 code implementation7 Apr 2020 Joon Sik Kim, Jiahao Chen, Ameet Talwalkar

Group fairness, a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes, has been shown to harbor conflicts among its notions, often at a necessary cost to the model's predictive performance.


Explaining Groups of Points in Low-Dimensional Representations

3 code implementations ICML 2020 Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar

A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.

Counterfactual Explanation, Interpretable Machine Learning
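The workflow described above can be sketched end to end with plain linear algebra. The projection, two-way grouping rule, and function name below are illustrative simplifications, not the paper's method:

```python
import numpy as np

def explain_groups(X, n_components=2):
    """Learn a low-dimensional representation (PCA via SVD), split points
    into two groups along the first component, and report the per-feature
    mean difference between the groups in the original space."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T              # low-dimensional representation
    in_group = Z[:, 0] > np.median(Z[:, 0])   # crude two-way grouping
    diff = X[in_group].mean(axis=0) - X[~in_group].mean(axis=0)
    return in_group, diff
```

On data whose clusters are separated along one feature, the reported difference vector points at that feature, which is the kind of "what do these groups represent" answer the workflow seeks.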

FedDANE: A Federated Newton-Type Method

1 code implementation7 Jan 2020 Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Federated learning aims to jointly learn statistical models over massively distributed remote devices.

Distributed Optimization, Federated Learning

Differentially Private Meta-Learning

no code implementations ICLR 2020 Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar

Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning, and reinforcement learning.

Federated Learning, Few-Shot Learning, +4

Federated Learning: Challenges, Methods, and Future Directions

1 code implementation21 Aug 2019 Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith

Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.

Distributed Optimization, Federated Learning

Learning Fair Representations for Kernel Models

2 code implementations27 Jun 2019 Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar

In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models.

Dimensionality Reduction, Fairness

Adaptive Gradient-Based Meta-Learning Methods

1 code implementation NeurIPS 2019 Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms.

Federated Learning, Few-Shot Learning

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)

no code implementations31 May 2019 Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.

Interpretable Machine Learning

Exploiting Reuse in Pipeline-Aware Hyperparameter Tuning

no code implementations12 Mar 2019 Liam Li, Evan Sparks, Kevin Jamieson, Ameet Talwalkar

Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden.

One-Shot Federated Learning

no code implementations28 Feb 2019 Neel Guha, Ameet Talwalkar, Virginia Smith

We present one-shot federated learning, where a central server learns a global model over a network of federated devices in a single round of communication.

Ensemble Learning, Federated Learning
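A minimal sketch of the single-round idea: devices fit models locally and upload them once, and the server predicts with the ensemble. The least-squares local model and function names are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def local_fit(X, y):
    """Each device fits a model on its own data; here, least squares."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def one_shot_predict(device_data, X_test):
    """One communication round: devices upload their fitted models once,
    and the server predicts with the ensemble average."""
    models = [local_fit(X, y) for X, y in device_data]
    preds = np.stack([X_test @ w for w in models])
    return preds.mean(axis=0)
```

Because only the fitted models cross the network, total communication is a single round regardless of how much data each device holds.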

Provable Guarantees for Gradient-Based Meta-Learning

1 code implementation27 Feb 2019 Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods.

Generalization Bounds, Meta-Learning

Random Search and Reproducibility for Neural Architecture Search

3 code implementations20 Feb 2019 Liam Li, Ameet Talwalkar

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.

Hyperparameter Optimization, Neural Architecture Search

Regularizing Black-box Models for Improved Interpretability

1 code implementation NeurIPS 2020 Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

Interpretable Machine Learning

Expanding the Reach of Federated Learning by Reducing Client Resource Requirements

no code implementations ICLR 2019 Sebastian Caldas, Jakub Konečny, H. Brendan McMahan, Ameet Talwalkar

Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation.

Federated Learning

Federated Optimization in Heterogeneous Networks

6 code implementations14 Dec 2018 Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).

Distributed Optimization, Federated Learning
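A minimal sketch of the local objective this describes: each device takes gradient steps on its own loss plus a proximal term pulling toward the global model, and different devices may run different numbers of steps (the variable amount of work). Function names and hyperparameter values are illustrative:

```python
import numpy as np

def local_update(w_global, grad_fn, mu=0.1, lr=0.05, num_steps=10):
    """One device's inexact solve of f(w) + (mu/2) * ||w - w_global||^2.
    num_steps varies per device, modeling systems heterogeneity."""
    w = w_global.copy()
    for _ in range(num_steps):
        w -= lr * (grad_fn(w) + mu * (w - w_global))
    return w

def server_round(w_global, device_grads, device_steps, mu=0.1):
    """Average the local solutions from devices doing unequal work."""
    updates = [local_update(w_global, g, mu=mu, num_steps=s)
               for g, s in zip(device_grads, device_steps)]
    return np.mean(updates, axis=0)
```

The proximal coefficient mu controls how far a device can drift from the global model, which is what makes partial (variable) local work safe to aggregate.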

LEAF: A Benchmark for Federated Settings

4 code implementations3 Dec 2018 Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar

Modern federated networks, such as those composed of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.

Autonomous Vehicles, Federated Learning, +2

Model Agnostic Supervised Local Explanations

2 code implementations NeurIPS 2018 Gregory Plumb, Denali Molitor, Ameet Talwalkar

Some of the most common forms of interpretability systems are example-based, local, and global explanations.

Feature Selection

Massively Parallel Hyperparameter Tuning

no code implementations ICLR 2018 Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, Katya Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs.

Parle: parallelizing stochastic gradient descent

no code implementations3 Jul 2017 Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.

Federated Multi-Task Learning

1 code implementation NeurIPS 2017 Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices.

Federated Learning, Multi-Task Learning

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization

12 code implementations21 Mar 2016 Lisha Li, Kevin Jamieson, Giulia Desalvo, Afshin Rostamizadeh, Ameet Talwalkar

Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters.

Hyperparameter Optimization

MLlib: Machine Learning in Apache Spark

no code implementations26 May 2015 Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, Ameet Talwalkar

Apache Spark is a popular open-source platform for large-scale data processing that is well-suited for iterative machine learning tasks.

Non-stochastic Best Arm Identification and Hyperparameter Optimization

no code implementations27 Feb 2015 Kevin Jamieson, Ameet Talwalkar

Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem.

Hyperparameter Optimization
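A successive-halving-style allocation, central to this line of work, can be sketched as follows. The budget accounting here is simplified relative to the paper's analysis:

```python
import math

def successive_halving(arms, budget):
    """Non-stochastic best-arm identification: arms[i](t) returns arm i's
    loss after t units of resource (lower is better). Each round advances
    every surviving arm, then discards the worse half."""
    survivors = list(range(len(arms)))
    num_rounds = max(1, math.ceil(math.log2(len(arms))))
    t = 0
    for _ in range(num_rounds):
        t += max(1, budget // (len(survivors) * num_rounds))
        scores = {i: arms[i](t) for i in survivors}
        survivors = sorted(survivors, key=scores.get)[:max(1, len(survivors) // 2)]
    return survivors[0]
```

Halving the pool each round lets the surviving configurations receive exponentially more resource, which is the core resource-allocation idea later generalized by Hyperband.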

TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries

no code implementations31 Jan 2015 Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I. Jordan, Tim Kraska

The proliferation of massive datasets combined with the development of sophisticated analytical techniques have enabled a wide variety of novel applications such as improved product recommendations, automatic image tagging, and improved speech-driven interfaces.

Matrix Coherence and the Nystrom Method

no code implementations9 Aug 2014 Ameet Talwalkar, Afshin Rostamizadeh

Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns.

Matrix Completion

MLI: An API for Distributed Machine Learning

no code implementations21 Oct 2013 Evan R. Sparks, Ameet Talwalkar, Virginia Smith, Jey Kottalam, Xinghao Pan, Joseph Gonzalez, Michael J. Franklin, Michael I. Jordan, Tim Kraska

MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing.

Distributed Low-rank Subspace Segmentation

no code implementations20 Apr 2013 Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data.

Event Detection, Face Recognition, +2

Divide-and-Conquer Matrix Factorization

no code implementations NeurIPS 2011 Lester W. Mackey, Michael I. Jordan, Ameet Talwalkar

This work introduces Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for noisy matrix factorization.
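The divide-and-conquer idea can be sketched for low-rank approximation: factor column blocks independently (in parallel), then combine their column bases. The combination step below is a simple projection; the paper studies several variants:

```python
import numpy as np

def dfc_lowrank(M, rank, num_blocks):
    """Divide: split columns into blocks. Factor: take each block's top
    left singular vectors independently. Combine: merge the bases and
    project M onto their joint column space."""
    blocks = np.array_split(M, num_blocks, axis=1)
    bases = [np.linalg.svd(B, full_matrices=False)[0][:, :rank] for B in blocks]
    Q, _ = np.linalg.qr(np.hstack(bases))
    return Q @ (Q.T @ M)
```

Each block's factorization is independent of the others, so the expensive "factor" step parallelizes across workers, with only the cheap combine step done centrally.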

Distributed Matrix Completion and Robust Factorization

no code implementations5 Jul 2011 Lester Mackey, Ameet Talwalkar, Michael. I. Jordan

If learning methods are to scale to the massive sizes of modern datasets, it is essential for the field of machine learning to embrace parallel and distributed computing.

Distributed Computing, Matrix Completion

Ensemble Nystrom Method

no code implementations NeurIPS 2009 Sanjiv Kumar, Mehryar Mohri, Ameet Talwalkar

A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices.
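A minimal sketch of the Nystrom approximation underlying the ensemble variant: sample a subset of columns C with intersection block W and approximate K as C @ pinv(W) @ C.T. The uniform sampling and function name are illustrative:

```python
import numpy as np

def nystrom_approx(K, num_cols, seed=None):
    """Low-rank Nystrom approximation of a PSD kernel matrix K:
    sample columns C, take the intersection block W at the sampled
    indices, and return C @ pinv(W) @ C.T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(K.shape[0], size=num_cols, replace=False)
    C = K[:, idx]             # n x m sampled columns
    W = K[np.ix_(idx, idx)]   # m x m block at the sampled indices
    return C @ np.linalg.pinv(W) @ C.T
```

When K's rank is at most the number of sampled columns and the sample spans its range, the approximation is exact; the ensemble method combines several such approximations, each built from a different column sample.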
