Search Results for author: Ameet Talwalkar

Found 71 papers, 37 papers with code

Ensemble Nystrom Method

no code implementations NeurIPS 2009 Sanjiv Kumar, Mehryar Mohri, Ameet Talwalkar

A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices.

regression
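
For context on the underlying technique, here is a minimal NumPy sketch of a single (non-ensemble) Nystrom approximation built from uniformly sampled columns; the RBF kernel, sample size, and data are illustrative assumptions, and the paper's contribution is to combine several such approximations into a weighted ensemble.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel values between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def nystrom_approximation(X, m, gamma=0.5, seed=0):
    """Low-rank approximation of the n x n kernel matrix K(X, X) built from
    m uniformly sampled columns (landmarks): K_hat = C W^+ C^T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)        # n x m block of sampled columns
    W = C[idx, :]                           # m x m block indexed by the landmarks
    return C @ np.linalg.pinv(W) @ C.T

# Toy usage: approximate a 500 x 500 kernel matrix using 50 sampled columns.
X = np.random.default_rng(1).standard_normal((500, 10))
K_hat = nystrom_approximation(X, m=50)
```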

Distributed Matrix Completion and Robust Factorization

no code implementations 5 Jul 2011 Lester Mackey, Ameet Talwalkar, Michael I. Jordan

If learning methods are to scale to the massive sizes of modern datasets, it is essential for the field of machine learning to embrace parallel and distributed computing.

Collaborative Filtering Distributed Computing +1

Divide-and-Conquer Matrix Factorization

no code implementations NeurIPS 2011 Lester W. Mackey, Michael I. Jordan, Ameet Talwalkar

This work introduces Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for noisy matrix factorization.

Collaborative Filtering
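
As a rough illustration of the divide-and-conquer pattern (not the paper's exact DFC algorithms, whose base factorization and combination steps differ), the sketch below partitions the columns, factors each block independently, and recombines the blocks through a shared column space; the rank, block count, and truncated-SVD base method are assumptions.

```python
import numpy as np

def low_rank(M, k):
    # Rank-k truncated SVD, standing in for the per-block base factorization.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def divide_factor_combine(M, k, n_blocks=4):
    """Divide the columns of M into blocks, factor each block independently
    (the step that can run in parallel), then combine by projecting every
    block onto the column space of the first block's estimate."""
    blocks = np.array_split(np.arange(M.shape[1]), n_blocks)
    estimates = [low_rank(M[:, b], k) for b in blocks]      # "factor" step
    U0, _, _ = np.linalg.svd(estimates[0], full_matrices=False)
    Q = U0[:, :k]                                           # shared column space
    return np.hstack([Q @ (Q.T @ E) for E in estimates])    # "combine" step

# Toy usage: recover a noisy rank-5 matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 120))
M_hat = divide_factor_combine(M + 0.01 * rng.standard_normal(M.shape), k=5)
```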

Distributed Low-rank Subspace Segmentation

no code implementations 20 Apr 2013 Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan

Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data.

Clustering Event Detection +4

MLI: An API for Distributed Machine Learning

no code implementations 21 Oct 2013 Evan R. Sparks, Ameet Talwalkar, Virginia Smith, Jey Kottalam, Xinghao Pan, Joseph Gonzalez, Michael J. Franklin, Michael I. Jordan, Tim Kraska

MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing.

BIG-bench Machine Learning

Matrix Coherence and the Nystrom Method

no code implementations 9 Aug 2014 Ameet Talwalkar, Afshin Rostamizadeh

Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns.

Matrix Completion

TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries

no code implementations 31 Jan 2015 Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I. Jordan, Tim Kraska

The proliferation of massive datasets, combined with the development of sophisticated analytical techniques, has enabled a wide variety of novel applications such as improved product recommendations, automatic image tagging, and improved speech-driven interfaces.

Non-stochastic Best Arm Identification and Hyperparameter Optimization

1 code implementation 27 Feb 2015 Kevin Jamieson, Ameet Talwalkar

Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem.

Hyperparameter Optimization
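
The algorithm analyzed in this setting is successive halving: allocate a fixed budget over elimination rounds, giving surviving configurations progressively more resources. A minimal sketch follows; the evaluation function and budget accounting are simplified assumptions.

```python
import math
import numpy as np

def successive_halving(configs, evaluate, budget):
    """Non-stochastic best-arm identification by successive halving.
    `evaluate(config, n_iters)` returns a loss after n_iters of training
    (lower is better); `budget` is the total number of training iterations."""
    arms = list(configs)
    n_rounds = max(1, math.ceil(math.log2(len(arms))))
    per_round = budget // n_rounds
    while len(arms) > 1:
        n_iters = max(1, per_round // len(arms))      # equal share of this round's budget
        losses = [evaluate(a, n_iters) for a in arms]
        keep = np.argsort(losses)[: max(1, len(arms) // 2)]
        arms = [arms[i] for i in keep]                # keep the better half
    return arms[0]

# Toy usage: pick a learning rate; the observed loss improves with more iterations.
best_lr = successive_halving(
    configs=[10 ** -e for e in range(1, 9)],
    evaluate=lambda lr, n: abs(math.log10(lr) + 3) + 1.0 / (n + 1),
    budget=1000,
)
```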

Federated Multi-Task Learning

2 code implementations NeurIPS 2017 Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices.

BIG-bench Machine Learning Federated Learning +1

Parle: parallelizing stochastic gradient descent

no code implementations 3 Jul 2017 Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.

Massively Parallel Hyperparameter Tuning

no code implementations ICLR 2018 Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, Katya Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs.

Model Agnostic Supervised Local Explanations

2 code implementations NeurIPS 2018 Gregory Plumb, Denali Molitor, Ameet Talwalkar

Some of the most common forms of interpretability systems are example-based, local, and global explanations.

feature selection

LEAF: A Benchmark for Federated Settings

7 code implementations 3 Dec 2018 Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar

Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.

Autonomous Vehicles Benchmarking +3

Federated Optimization in Heterogeneous Networks

19 code implementations 14 Dec 2018 Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).

Distributed Optimization Federated Learning
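
The framework proposed here is FedProx; a minimal sketch of one communication round follows. The least-squares local loss, client count, and learning rate are illustrative assumptions; the key ingredients from the abstract are the proximal term that keeps local models near the global model and the variable amount of local work per device.

```python
import numpy as np

def fedprox_round(w_global, client_data, mu=0.1, lr=0.01, local_step_choices=(1, 5, 10), rng=None):
    """One FedProx-style communication round (a sketch, not the authors' code).
    Each client minimizes its local loss plus (mu/2) * ||w - w_global||^2 and
    may perform a different number of local steps (systems heterogeneity)."""
    rng = rng or np.random.default_rng(0)
    updates = []
    for X, y in client_data:
        w = w_global.copy()
        for _ in range(rng.choice(local_step_choices)):   # variable local work
            grad = X.T @ (X @ w - y) / len(y)             # local least-squares gradient
            grad += mu * (w - w_global)                    # proximal term
            w -= lr * grad
        updates.append(w)
    return np.mean(updates, axis=0)                        # server averages the client models

# Toy usage: two clients with non-identical data (statistical heterogeneity).
rng = np.random.default_rng(1)
clients = [(rng.standard_normal((50, 5)), rng.standard_normal(50)) for _ in range(2)]
w = np.zeros(5)
for _ in range(20):
    w = fedprox_round(w, clients, rng=rng)
```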

Expanding the Reach of Federated Learning by Reducing Client Resource Requirements

1 code implementation ICLR 2019 Sebastian Caldas, Jakub Konečný, H. Brendan McMahan, Ameet Talwalkar

Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation.

Federated Learning

Regularizing Black-box Models for Improved Interpretability

1 code implementation NeurIPS 2020 Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

BIG-bench Machine Learning Interpretable Machine Learning

Random Search and Reproducibility for Neural Architecture Search

4 code implementations 20 Feb 2019 Liam Li, Ameet Talwalkar

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.

Hyperparameter Optimization Neural Architecture Search

Provable Guarantees for Gradient-Based Meta-Learning

1 code implementation 27 Feb 2019 Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods.

Generalization Bounds Meta-Learning
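
As a concrete (if simplified) instance of the gradient-based meta-learning family studied here, the sketch below learns a shared initialization online, Reptile-style: run a few within-task gradient steps, then nudge the initialization toward the task's final iterate. The quadratic tasks, step counts, and learning rates are assumptions, and this is not the paper's exact meta-algorithm.

```python
import numpy as np

def meta_learn_initialization(tasks, inner_steps=10, inner_lr=0.1, meta_lr=0.2):
    """Online meta-learning of an initialization phi for least-squares tasks:
    adapt to each task with a few gradient steps, then move phi toward the
    adapted parameters."""
    phi = np.zeros(tasks[0][0].shape[1])                 # shared initialization
    for X, y in tasks:                                   # tasks arrive sequentially
        w = phi.copy()
        for _ in range(inner_steps):
            w -= inner_lr * X.T @ (X @ w - y) / len(y)   # within-task gradient step
        phi += meta_lr * (w - phi)                       # meta-update toward the task optimum
    return phi

# Toy usage: related regression tasks whose optima cluster around a common point,
# so a learned initialization makes later tasks faster to adapt to.
rng = np.random.default_rng(0)
center, tasks = np.ones(5), []
for _ in range(50):
    X = rng.standard_normal((40, 5))
    w_star = center + 0.1 * rng.standard_normal(5)
    tasks.append((X, X @ w_star + 0.01 * rng.standard_normal(40)))
phi = meta_learn_initialization(tasks)
```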

One-Shot Federated Learning

no code implementations 28 Feb 2019 Neel Guha, Ameet Talwalkar, Virginia Smith

We present one-shot federated learning, where a central server learns a global model over a network of federated devices in a single round of communication.

Ensemble Learning Federated Learning
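
A minimal sketch of the one-shot idea, assuming scikit-learn: every device trains a local model, uploads it once, and the server serves an ensemble of the uploaded models. The model class and the simple probability-averaging combination are illustrative assumptions rather than the paper's selection scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_shot_federated_ensemble(client_datasets):
    """Single round of communication: each client fits a local model on its own
    data and sends it to the server, which averages predicted probabilities."""
    local_models = [LogisticRegression(max_iter=500).fit(X, y) for X, y in client_datasets]

    def predict(X):
        avg_probs = np.mean([m.predict_proba(X) for m in local_models], axis=0)
        return avg_probs.argmax(axis=1)

    return predict

# Toy usage: three clients, each holding its own labeled data.
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((100, 4)), rng.integers(0, 2, 100)) for _ in range(3)]
predict = one_shot_federated_ensemble(clients)
labels = predict(rng.standard_normal((10, 4)))
```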

Exploiting Reuse in Pipeline-Aware Hyperparameter Tuning

no code implementations 12 Mar 2019 Liam Li, Evan Sparks, Kevin Jamieson, Ameet Talwalkar

Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden.

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)

no code implementations 31 May 2019 Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.

BIG-bench Machine Learning Interpretable Machine Learning

Adaptive Gradient-Based Meta-Learning Methods

1 code implementation NeurIPS 2019 Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms.

Federated Learning Few-Shot Learning

Learning Fair Representations for Kernel Models

2 code implementations 27 Jun 2019 Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar

In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models.

Dimensionality Reduction Fairness

Federated Learning: Challenges, Methods, and Future Directions

1 code implementation 21 Aug 2019 Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith

Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.

BIG-bench Machine Learning Distributed Optimization +2

Differentially Private Meta-Learning

no code implementations ICLR 2020 Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar

Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning, and reinforcement learning.

Federated Learning Few-Shot Learning +4

On Weight-Sharing and Bilevel Optimization in Architecture Search

no code implementations 25 Sep 2019 Mikhail Khodak, Liam Li, Maria-Florina Balcan, Ameet Talwalkar

Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.

Bilevel Optimization feature selection +1

Explaining Groups of Points in Low-Dimensional Representations

3 code implementations ICML 2020 Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar

A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.

counterfactual Counterfactual Explanation +1

FACT: A Diagnostic for Group Fairness Trade-offs

1 code implementation 7 Apr 2020 Joon Sik Kim, Jiahao Chen, Ameet Talwalkar

Group fairness notions, which measure how differently groups of individuals are treated according to their protected attributes, have been shown to conflict with one another, often at a necessary cost to the model's predictive performance.

Attribute Fairness

Geometry-Aware Gradient Algorithms for Neural Architecture Search

1 code implementation ICLR 2021 Liam Li, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood.

Neural Architecture Search

A Learning Theoretic Perspective on Local Explainability

no code implementations ICLR 2021 Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar

In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations.

BIG-bench Machine Learning Interpretable Machine Learning +1

Searching for Convolutions and a More Ambitious NAS

no code implementations 1 Jan 2021 Nicholas Carl Roberts, Mikhail Khodak, Tri Dao, Liam Li, Nina Balcan, Christopher Re, Ameet Talwalkar

An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning.

Neural Architecture Search

On Data Efficiency of Meta-learning

no code implementations 30 Jan 2021 Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.

Meta-Learning Personalized Federated Learning

Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability

1 code implementation ICLR 2021 Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, Ameet Talwalkar

We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability.
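
A sketch, assuming PyTorch, of how one might monitor the quantity the paper tracks: the largest eigenvalue of the training-loss Hessian ("sharpness"), estimated by power iteration on Hessian-vector products. Per the paper's findings, under full-batch gradient descent with step size eta this value typically rises to about 2/eta and then hovers there; the iteration count below is an arbitrary choice.

```python
import torch

def sharpness(loss, params, n_iters=20):
    """Estimate the top Hessian eigenvalue of `loss` w.r.t. `params` using
    power iteration with Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(n_iters):
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # Hessian-vector product
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]                     # re-normalize the iterate
    gv = sum((g * vi).sum() for g, vi in zip(grads, v))
    hv = torch.autograd.grad(gv, params, retain_graph=True)
    return sum((h * vi).sum() for h, vi in zip(hv, v)).item()    # Rayleigh quotient (||v|| = 1)

# Usage idea: after computing the full-batch loss at each step,
#   lam = sharpness(loss, list(model.parameters()))
# and compare lam against 2 / eta over the course of training.
```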

Interpretable Machine Learning: Moving From Mythos to Diagnostics

no code implementations 10 Mar 2021 Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases.

BIG-bench Machine Learning Interpretable Machine Learning

Sanity Simulations for Saliency Methods

1 code implementation 13 May 2021 Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image.

Benchmarking

Finding and Fixing Spurious Patterns with Explanations

no code implementations 3 Jun 2021 Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar

Image classifiers often use spurious patterns, such as "relying on the presence of a person to detect a tennis racket," which do not generalize.

Data Augmentation

Learning-to-learn non-convex piecewise-Lipschitz functions

no code implementations NeurIPS 2021 Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar

We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms.

Meta-Learning

Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative

2 code implementations ICLR 2022 Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig

In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks.

Meta-Learning

Bayesian Persuasion for Algorithmic Recourse

no code implementations 12 Dec 2021 Keegan Harris, Valerie Chen, Joon Sik Kim, Ameet Talwalkar, Hoda Heidari, Zhiwei Steven Wu

While the decision maker's problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy takes the form of optimization over infinitely-many variables, we show that this optimization can be cast as a linear program over finitely-many regions of the space of possible assessment rules.

Decision Making

Learning Predictions for Algorithms with Predictions

no code implementations 18 Feb 2022 Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii

A burgeoning paradigm in algorithm design is the field of algorithms with predictions, in which algorithms can take advantage of a possibly-imperfect prediction of some aspect of the problem.

Scheduling

Efficient Architecture Search for Diverse Tasks

1 code implementation 15 Apr 2022 Junhong Shen, Mikhail Khodak, Ameet Talwalkar

While neural architecture search (NAS) has enabled automated machine learning (AutoML) for well-researched areas, its application to tasks beyond computer vision is still under-explored.

Neural Architecture Search Protein Folding

Perspectives on Incorporating Expert Feedback into Model Updates

no code implementations 13 May 2022 Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar

A practitioner may receive feedback from an expert at the observation- or domain-level, and convert this feedback into updates to the dataset, loss function, or parameter space.

AANG: Automating Auxiliary Learning

2 code implementations 27 May 2022 Lucio M. Dery, Paul Michel, Mikhail Khodak, Graham Neubig, Ameet Talwalkar

Auxiliary objectives, supplementary learning signals that are introduced to help aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning.

Auxiliary Learning

Use-Case-Grounded Simulations for Explanation Evaluation

no code implementations 5 Jun 2022 Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar

SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to each participant in a human subject study, to predict answers to the use case of interest.

counterfactual Counterfactual Reasoning
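
A rough sketch of the SimEval recipe under simplifying assumptions (a scikit-learn agent, synthetic features standing in for featurized explanations): train an algorithmic agent on exactly the information a study participant would see and measure how well it answers the use-case question.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def run_simeval(information_content, use_case_answers):
    """Train an agent on the information shown to participants and report its
    held-out accuracy on the use-case question; low accuracy suggests the
    information content may not support the use case even in principle."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        information_content, use_case_answers, test_size=0.3, random_state=0)
    agent = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return agent.score(X_te, y_te)

# Toy usage: 500 simulated study instances, 8-dimensional "explanation" features,
# and a binary use-case answer (e.g., "will the model err on this input?").
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
accuracy = run_simeval(X, y)
```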

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods

no code implementations 24 Jun 2022 Kasun Amarasinghe, Kit T. Rodolfa, Sérgio Jesus, Valerie Chen, Vladimir Balayan, Pedro Saleiro, Pedro Bizarro, Ameet Talwalkar, Rayid Ghani

Most existing evaluations of explainable machine learning (ML) methods rely on simplifying assumptions or proxies that do not reflect real-world use cases; the handful of more robust evaluations on real-world settings have shortcomings in their design, resulting in limited conclusions of methods' real-world utility.

Experimental Design Fraud Detection

Towards a More Rigorous Science of Blindspot Discovery in Image Classification Models

2 code implementations 8 Jul 2022 Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar

A growing body of work studies Blindspot Discovery Methods (BDMs): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data where an image classifier performs significantly worse.

Dimensionality Reduction Image Classification

Provably tuning the ElasticNet across instances

no code implementations 20 Jul 2022 Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar

We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances, a setting that encompasses both cross-validation and multi-task hyperparameter optimization.

Hyperparameter Optimization regression
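
For a sense of the setting, here is a simple grid-search baseline (not the paper's provable tuning algorithm), assuming scikit-learn: choose one (alpha, l1_ratio) pair that minimizes average validation error across a collection of related problem instances.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import ElasticNet

def tune_across_instances(instances, alphas, l1_ratios):
    """Pick the ElasticNet regularization parameters with the lowest mean
    validation MSE over all instances (each instance is a train/val split)."""
    best, best_err = None, np.inf
    for alpha, l1 in product(alphas, l1_ratios):
        errs = []
        for X_tr, y_tr, X_val, y_val in instances:
            model = ElasticNet(alpha=alpha, l1_ratio=l1, max_iter=5000).fit(X_tr, y_tr)
            errs.append(np.mean((model.predict(X_val) - y_val) ** 2))
        if np.mean(errs) < best_err:
            best, best_err = (alpha, l1), np.mean(errs)
    return best

# Toy usage: several related sparse-regression instances.
rng = np.random.default_rng(0)
def make_instance():
    X = rng.standard_normal((80, 10))
    w = np.concatenate([rng.standard_normal(3), np.zeros(7)])   # sparse ground truth
    y = X @ w + 0.1 * rng.standard_normal(80)
    return X[:60], y[:60], X[60:], y[60:]

best_params = tune_across_instances([make_instance() for _ in range(5)],
                                    alphas=[0.01, 0.1, 1.0], l1_ratios=[0.2, 0.5, 0.9])
```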

SONAR: Joint Architecture and System Optimization Search

no code implementations 25 Aug 2022 Elias Jääsaari, Michelle Ma, Ameet Talwalkar, Tianqi Chen

There is a growing need to deploy machine learning for different tasks on a wide array of new hardware platforms.

AutoML for Climate Change: A Call to Action

1 code implementation 7 Oct 2022 Renbo Tu, Nicholas Roberts, Vishak Prasad, Sibasis Nayak, Paarth Jain, Frederic Sala, Ganesh Ramakrishnan, Ameet Talwalkar, Willie Neiswanger, Colin White

The challenge that climate change poses to humanity has spurred a rapidly developing field of artificial intelligence research focused on climate change applications.

AutoML

On Noisy Evaluation in Federated Hyperparameter Tuning

1 code implementation 17 Dec 2022 Kevin Kuo, Pratiksha Thaker, Mikhail Khodak, John Nguyen, Daniel Jiang, Ameet Talwalkar, Virginia Smith

In this work, we perform the first systematic study on the effect of noisy evaluation in federated hyperparameter tuning.

Federated Learning

Cross-Modal Fine-Tuning: Align then Refine

1 code implementation 11 Feb 2023 Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar

Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP.

AutoML

Assisting Human Decisions in Document Matching

1 code implementation 16 Feb 2023 Joon Sik Kim, Valerie Chen, Danish Pruthi, Nihar B. Shah, Ameet Talwalkar

Many practical applications, ranging from paper-reviewer assignment in peer review to job-applicant matching for hiring, require human decision makers to identify relevant matches by combining their expertise with predictions from machine learning models.

Learning Personalized Decision Support Policies

no code implementations 13 Apr 2023 Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar

In this work, we propose learning a decision support policy that, for a given input, chooses which form of support, if any, to provide.

Multi-Armed Bandits

Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms

2 code implementations 13 Jun 2023 Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar

Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.

object-detection Object Detection

Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances

no code implementations 3 Oct 2023 Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar

For this method, we prove that a bandit online learning algorithm -- using only the number of iterations as feedback -- can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
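
To make the setting concrete, the sketch below pairs a small successive-over-relaxation (SOR) solver with an epsilon-greedy bandit that picks the relaxation parameter omega from a grid using only observed iteration counts as feedback. The solver, the omega grid, and the epsilon-greedy rule are illustrative assumptions, not the paper's algorithm or guarantees.

```python
import numpy as np

def sor_iterations(A, b, omega, tol=1e-8, max_iter=5000):
    """Number of SOR iterations needed to solve Ax = b to relative tolerance."""
    x = np.zeros_like(b)
    D, L = np.diag(np.diag(A)), np.tril(A, -1)
    U = A - D - L
    M = D / omega + L                                     # SOR splitting matrix
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(M, b - (U + (1 - 1 / omega) * D) @ x)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return k
    return max_iter

def choose_omegas(instances, omegas, eps=0.2, seed=0):
    """Epsilon-greedy bandit over a grid of omega values: for each linear system
    in the sequence, pick an omega, observe only the iteration count, and update
    that arm's running average cost."""
    rng = np.random.default_rng(seed)
    counts, totals = np.zeros(len(omegas)), np.zeros(len(omegas))
    history = []
    for A, b in instances:
        if counts.min() == 0 or rng.random() < eps:
            j = int(rng.integers(len(omegas)))            # explore
        else:
            j = int(np.argmin(totals / counts))           # exploit the cheapest arm so far
        iters = sor_iterations(A, b, omegas[j])
        counts[j] += 1
        totals[j] += iters
        history.append((omegas[j], iters))
    return history

# Toy usage: a sequence of similar symmetric positive-definite systems.
rng = np.random.default_rng(1)
def make_system(n=30):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n), rng.standard_normal(n)

history = choose_omegas([make_system() for _ in range(40)], omegas=[1.0, 1.2, 1.4, 1.6, 1.8])
```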

Do LLMs exhibit human-like response biases? A case study in survey design

1 code implementation 7 Nov 2023 Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig

As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling.

Multitask Learning Can Improve Worst-Group Outcomes

1 code implementation 5 Dec 2023 Atharva Kulkarni, Lucio Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig

We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pre-training objective constructed from the end task data itself.

Fairness

Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes

1 code implementation 8 Feb 2024 Lucio Dery, Steven Kolawole, Jean-François Kagy, Virginia Smith, Graham Neubig, Ameet Talwalkar

Given the generational gap in available hardware between lay practitioners and the most endowed institutions, LLMs are becoming increasingly inaccessible as they grow in size.

UPS: Towards Foundation Models for PDE Solving via Cross-Modal Adaptation

1 code implementation 11 Mar 2024 Junhong Shen, Tanya Marwah, Ameet Talwalkar

We introduce UPS (Unified PDE Solver), an effective and data-efficient approach to solve diverse spatiotemporal PDEs defined over various domains, dimensions, and resolutions.

Multi-Task Learning

The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers

1 code implementation 3 Apr 2024 Hussein Mozannar, Valerie Chen, Mohammed Alsobay, Subhro Das, Sebastian Zhao, Dennis Wei, Manish Nagireddy, Prasanna Sattigeri, Ameet Talwalkar, David Sontag

Evaluation of large language models (LLMs) for code has primarily relied on static benchmarks, including HumanEval (Chen et al., 2021), which measure the ability of LLMs to generate complete code that passes unit tests.

Model-Agnostic Characterization of Fairness Trade-offs

no code implementations ICML 2020 Joon Kim, Jiahao Chen, Ameet Talwalkar

There exist several inherent trade-offs when designing a fair model, such as those between the model's predictive accuracy and fairness, or even among different notions of fairness.

Fairness
