no code implementations • ICML 2020 • Joon Kim, Jiahao Chen, Ameet Talwalkar
Several inherent trade-offs arise when designing a fair model, such as those between the model’s predictive accuracy and fairness, or even among different notions of fairness.
no code implementations • 2 Jul 2024 • Steven Kolawole, Don Dennis, Ameet Talwalkar, Virginia Smith
Finally, for edge inference scenarios where portions of the cascade reside at the edge vs. in the cloud, CoE can provide a 14x reduction in communication cost and inference latency without sacrificing accuracy.
no code implementations • 24 Jun 2024 • Katherine M. Collins, Valerie Chen, Ilia Sucholutsky, Hannah Rose Kirk, Malak Sadek, Holli Sargeant, Ameet Talwalkar, Adrian Weller, Umang Bhatt
Through a user study with real humans, we observe shifts in user behavior when friction is imposed on LLM use in a multi-topic question-answering task, a representative example of the tasks people may use LLMs for, e.g., in education and information retrieval.
1 code implementation • 3 Apr 2024 • Hussein Mozannar, Valerie Chen, Mohammed Alsobay, Subhro Das, Sebastian Zhao, Dennis Wei, Manish Nagireddy, Prasanna Sattigeri, Ameet Talwalkar, David Sontag
Evaluation of large language models (LLMs) for code has primarily relied on static benchmarks, including HumanEval (Chen et al., 2021), which measure the ability of LLMs to generate complete code that passes unit tests.
1 code implementation • 11 Mar 2024 • Junhong Shen, Tanya Marwah, Ameet Talwalkar
We present Unified PDE Solvers (UPS), a data- and compute-efficient approach to developing unified neural operators for diverse families of spatiotemporal PDEs from various domains, dimensions, and resolutions.
1 code implementation • 8 Feb 2024 • Lucio Dery, Steven Kolawole, Jean-François Kagy, Virginia Smith, Graham Neubig, Ameet Talwalkar
Given the generational gap in available hardware between lay practitioners and the most endowed institutions, LLMs are becoming increasingly inaccessible as they grow in size.
1 code implementation • 5 Dec 2023 • Atharva Kulkarni, Lucio Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig
We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pre-training objective constructed from the end task data itself.
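As a rough illustration of this setup, here is a minimal PyTorch-style sketch of one training step that multitasks an end task with a self-supervised objective built from the same end-task inputs; the `task_loss` and `mlm_loss` methods and the `aux_weight` knob are illustrative assumptions, not the paper's exact interface.

```python
import torch

def multitask_step(model, batch, optimizer, aux_weight=1.0):
    """One update multitasking the end task with a pre-training objective
    constructed from the end-task data itself (illustrative sketch)."""
    optimizer.zero_grad()
    # Supervised end-task loss (e.g., classification).
    task_loss = model.task_loss(batch["inputs"], batch["labels"])
    # Auxiliary self-supervised loss on the *same* inputs, e.g. masked
    # language modeling with targets derived from the inputs themselves.
    aux_loss = model.mlm_loss(batch["inputs"])
    loss = task_loss + aux_weight * aux_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```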
1 code implementation • 7 Nov 2023 • Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig
As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling.
no code implementations • 3 Oct 2023 • Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar
For this method, we prove that a bandit online learning algorithm, using only the number of iterations as feedback, can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
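A minimal sketch of this idea, assuming a `solve(A, b, omega)` routine that returns the iteration count of an SOR-type solver: an epsilon-greedy bandit over a grid of relaxation parameters, which is deliberately simpler than the bandit algorithm analyzed in the paper.

```python
import numpy as np

def choose_omega(solve, instances, grid=None, eps=0.2, seed=0):
    """Epsilon-greedy bandit over a grid of relaxation parameters; the only
    feedback is the iteration count returned by `solve` (assumed oracle)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.1, 1.9, 10) if grid is None else grid
    counts = np.zeros(len(grid))
    mean_cost = np.zeros(len(grid))
    for A, b in instances:
        arm = (rng.integers(len(grid)) if rng.random() < eps
               else int(np.argmin(mean_cost)))
        iters = solve(A, b, omega=grid[arm])  # number of solver iterations
        counts[arm] += 1
        mean_cost[arm] += (iters - mean_cost[arm]) / counts[arm]
    return grid[int(np.argmin(mean_cost))]
```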
no code implementations • 28 Jul 2023 • Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt
We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders.
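For illustration only, one hypothetical shape such an addendum could take as a record type; the paper defines its own components for collecting, considering, and responding to feedback, so treat every field name below as an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    """One entry in a FeedbackLog (hypothetical schema for illustration)."""
    stakeholder: str            # who gave the feedback
    pipeline_stage: str         # where in the ML pipeline it applies
    feedback: str               # the raw input collected
    practitioner_response: str  # how the team incorporated (or declined) it
    updates: list = field(default_factory=list)  # resulting pipeline changes
```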
2 code implementations • 13 Jun 2023 • Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar
Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data.
no code implementations • 13 Apr 2023 • Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar
$\texttt{Modiste}$ leverages stochastic contextual bandit techniques to personalize a decision support policy for each decision-maker and supports extensions to the multi-objective setting to account for auxiliary objectives like the cost of support.
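A simplified stand-in for this machinery: an epsilon-greedy contextual bandit with per-action ridge-regression value estimates, choosing among forms of decision support. The class and its fields are illustrative, not Modiste's actual implementation.

```python
import numpy as np

class SupportPolicyBandit:
    """Epsilon-greedy contextual bandit that personalizes which form of
    decision support to show (a simplified sketch)."""
    def __init__(self, n_actions, dim, eps=0.1, lam=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.eps = eps
        # Per-action ridge-regression statistics.
        self.A = [lam * np.eye(dim) for _ in range(n_actions)]
        self.b = [np.zeros(dim) for _ in range(n_actions)]

    def select(self, x):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.A)))
        scores = [x @ np.linalg.solve(A, b) for A, b in zip(self.A, self.b)]
        return int(np.argmax(scores))

    def update(self, x, action, reward):
        # The reward can fold in auxiliary objectives, e.g. minus the
        # cost of showing support.
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x
```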
1 code implementation • 16 Feb 2023 • Joon Sik Kim, Valerie Chen, Danish Pruthi, Nihar B. Shah, Ameet Talwalkar
Many practical applications, ranging from paper-reviewer assignment in peer review to job-applicant matching for hiring, require human decision makers to identify relevant matches by combining their expertise with predictions from machine learning models.
1 code implementation • 11 Feb 2023 • Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar
Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP.
1 code implementation • 17 Dec 2022 • Kevin Kuo, Pratiksha Thaker, Mikhail Khodak, John Nguyen, Daniel Jiang, Ameet Talwalkar, Virginia Smith
In this work, we perform the first systematic study on the effect of noisy evaluation in federated hyperparameter tuning.
1 code implementation • 7 Oct 2022 • Renbo Tu, Nicholas Roberts, Vishak Prasad, Sibasis Nayak, Paarth Jain, Frederic Sala, Ganesh Ramakrishnan, Ameet Talwalkar, Willie Neiswanger, Colin White
The challenge that climate change poses to humanity has spurred a rapidly developing field of artificial intelligence research focused on climate change applications.
no code implementations • 25 Aug 2022 • Elias Jääsaari, Michelle Ma, Ameet Talwalkar, Tianqi Chen
There is a growing need to deploy machine learning for different tasks on a wide array of new hardware platforms.
no code implementations • 20 Jul 2022 • Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar
We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances, a setting that encompasses both cross-validation and multi-task hyperparameter optimization.
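Cross-validation, one special case of this multi-instance setting, can be reproduced with standard scikit-learn estimators; each fold plays the role of a problem instance sharing a single regularization parameter.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV

# Each CV fold acts as one "problem instance" sharing one parameter.
X, y = make_regression(n_samples=200, n_features=20, noise=0.5, random_state=0)
alphas = np.logspace(-3, 2, 30)
print(RidgeCV(alphas=alphas, cv=5).fit(X, y).alpha_)
print(LassoCV(alphas=alphas, cv=5).fit(X, y).alpha_)
print(ElasticNetCV(alphas=alphas, l1_ratio=0.5, cv=5).fit(X, y).alpha_)
```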
2 code implementations • 8 Jul 2022 • Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Ameet Talwalkar
A growing body of work studies Blindspot Discovery Methods ("BDMs"): methods that use an image embedding to find semantically meaningful (i.e., united by a human-understandable concept) subsets of the data where an image classifier performs significantly worse.
no code implementations • 24 Jun 2022 • Kasun Amarasinghe, Kit T. Rodolfa, Sérgio Jesus, Valerie Chen, Vladimir Balayan, Pedro Saleiro, Pedro Bizarro, Ameet Talwalkar, Rayid Ghani
Most existing evaluations of explainable machine learning (ML) methods rely on simplifying assumptions or proxies that do not reflect real-world use cases; the handful of more robust evaluations in real-world settings have shortcomings in their design, resulting in limited conclusions about methods' real-world utility.
no code implementations • 5 Jun 2022 • Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to each participant in a human subject study, and predict answers for the use case of interest.
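A minimal sketch of this recipe using scikit-learn: train an agent on exactly the information content a participant would see and report its test accuracy on the use-case labels. How explanations are featurized is problem-specific and assumed here.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def simeval(information_content, use_case_labels):
    """Train an agent on what a study participant would see and measure
    how well it predicts the use-case answer (a minimal sketch)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        information_content, use_case_labels, random_state=0)
    agent = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Higher test accuracy suggests the presented information is,
    # in principle, sufficient for the downstream use case.
    return agent.score(X_te, y_te)
```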
2 code implementations • 27 May 2022 • Lucio M. Dery, Paul Michel, Mikhail Khodak, Graham Neubig, Ameet Talwalkar
Auxiliary objectives, supplementary learning signals introduced to aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning.
no code implementations • 13 May 2022 • Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar
A practitioner may receive feedback from an expert at the observation- or domain-level, and convert this feedback into updates to the dataset, loss function, or parameter space.
1 code implementation • 15 Apr 2022 • Junhong Shen, Mikhail Khodak, Ameet Talwalkar
While neural architecture search (NAS) has enabled automated machine learning (AutoML) for well-researched areas, its application to tasks beyond computer vision is still under-explored.
no code implementations • 18 Feb 2022 • Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii
A burgeoning paradigm in algorithm design is the field of algorithms with predictions, in which algorithms can take advantage of a possibly-imperfect prediction of some aspect of the problem.
no code implementations • 12 Dec 2021 • Keegan Harris, Valerie Chen, Joon Sik Kim, Ameet Talwalkar, Hoda Heidari, Zhiwei Steven Wu
While the decision maker’s problem of finding the optimal Bayesian incentive-compatible (BIC) signaling policy takes the form of optimization over infinitely many variables, we show that this optimization can be cast as a linear program over finitely many regions of the space of possible assessment rules.
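To make the finite LP reduction concrete, here is a toy Bayesian-persuasion-style LP with finitely many states and actions, solved with scipy; the obedience constraints play the role of BIC constraints. This illustrates the general reduction pattern, not the paper's exact region construction.

```python
import numpy as np
from scipy.optimize import linprog

# Variables p[theta, a] = Pr(recommend action a | state theta).
mu = np.array([0.5, 0.5])                  # prior over states
u_R = np.array([[1.0, 0.0], [0.0, 1.0]])  # receiver utility [state, action]
u_S = np.array([[0.0, 1.0], [1.0, 1.0]])  # sender utility   [state, action]
S, A = u_R.shape
idx = lambda t, a: t * A + a

# Maximize expected sender utility => minimize its negative.
c = np.zeros(S * A)
for t in range(S):
    for a in range(A):
        c[idx(t, a)] = -mu[t] * u_S[t, a]

# BIC/obedience: following recommendation a must beat any deviation a2.
A_ub, b_ub = [], []
for a in range(A):
    for a2 in range(A):
        if a2 == a:
            continue
        row = np.zeros(S * A)
        for t in range(S):
            row[idx(t, a)] = -mu[t] * (u_R[t, a] - u_R[t, a2])
        A_ub.append(row); b_ub.append(0.0)

# Each state's recommendations form a probability distribution.
A_eq = np.zeros((S, S * A)); b_eq = np.ones(S)
for t in range(S):
    A_eq[t, t * A:(t + 1) * A] = 1.0

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (S * A))
print(res.x.reshape(S, A))
```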
1 code implementation • 12 Oct 2021 • Renbo Tu, Nicholas Roberts, Mikhail Khodak, Junhong Shen, Frederic Sala, Ameet Talwalkar
This makes the performance of NAS approaches in more diverse areas poorly understood.
2 code implementations • ICLR 2022 • Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks.
no code implementations • NeurIPS 2021 • Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar
We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
no code implementations • NeurIPS 2021 • Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline.
no code implementations • 3 Jun 2021 • Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar
Image classifiers often use spurious patterns, such as "relying on the presence of a person to detect a tennis racket," which do not generalize.
1 code implementation • 13 May 2021 • Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image.
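A minimal example of one common member of this family, vanilla gradient saliency, in PyTorch: pixel importance is the magnitude of the class score's gradient with respect to the input.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla-gradient saliency: gradient magnitude of the class score
    with respect to each input pixel."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Max over color channels gives one importance value per pixel.
    return image.grad.abs().max(dim=0).values
```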
3 code implementations • NeurIPS 2021 • Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar
An important goal of AutoML is to automate away the design of neural networks on new tasks in under-explored domains.
no code implementations • 10 Mar 2021 • Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases.
1 code implementation • ICLR 2021 • Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, Ameet Talwalkar
We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability.
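Sharpness here is the top eigenvalue of the training-loss Hessian, which the paper tracks against the threshold 2/(step size). A sketch of estimating it with power iteration on Hessian-vector products (assumes the loss was built with a graph supporting double backprop):

```python
import torch

def sharpness(loss, params, iters=20):
    """Estimate the top Hessian eigenvalue of the training loss via power
    iteration on Hessian-vector products; at the edge of stability it
    hovers near 2/step_size for full-batch gradient descent."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_g)
    v /= v.norm()
    for _ in range(iters):
        hv = torch.autograd.grad(flat_g @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = v @ hv                     # Rayleigh quotient estimate
        v = hv / hv.norm()
    return eig.item()
```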
no code implementations • 30 Jan 2021 • Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar
Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.
no code implementations • 1 Jan 2021 • Nicholas Carl Roberts, Mikhail Khodak, Tri Dao, Liam Li, Nina Balcan, Christopher Re, Ameet Talwalkar
An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning.
no code implementations • ICLR 2021 • Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar
In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations.
1 code implementation • ICLR 2021 • Liam Li, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar
Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood.
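The relaxation in question mixes candidate operations with a softmax over architecture parameters while the operations share weights during search; a minimal DARTS-style sketch:

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Continuous relaxation used by gradient-based NAS: a softmax over
    architecture parameters mixes candidate operations, all trained with
    shared weights during search."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture params

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# e.g., candidate operations on a 16-channel feature map:
mixed = MixedOp([nn.Conv2d(16, 16, 3, padding=1),
                 nn.Conv2d(16, 16, 5, padding=2),
                 nn.Identity()])
```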
1 code implementation • 7 Apr 2020 • Joon Sik Kim, Jiahao Chen, Ameet Talwalkar
Group fairness refers to a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes; these notions have been shown to conflict with one another, often at a necessary cost to the model’s predictive performance.
3 code implementations • ICML 2020 • Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar
A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.
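That workflow, in its simplest form, might look like the following sketch (PCA for the representation, k-means for the groups, per-feature mean differences for the comparison; the paper develops more faithful explanations of the group differences):

```python
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def explore(X, n_groups=3):
    """Embed, group, then compare groups via per-feature mean shifts."""
    Z = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(Z)
    deltas = {g: X[labels == g].mean(0) - X.mean(0) for g in range(n_groups)}
    return Z, labels, deltas
```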
2 code implementations • 7 Jan 2020 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Federated learning aims to jointly learn statistical models over massively distributed remote devices.
no code implementations • 25 Sep 2019 • Mikhail Khodak, Liam Li, Maria-Florina Balcan, Ameet Talwalkar
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.
no code implementations • ICLR 2020 • Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar
Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning, and reinforcement learning.
1 code implementation • 21 Aug 2019 • Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith
Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.
2 code implementations • 27 Jun 2019 • Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar
In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models.
1 code implementation • NeurIPS 2019 • Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar
We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms.
no code implementations • 31 May 2019 • Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.
no code implementations • 29 Mar 2019 • Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Jennifer Chayes, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Aparna Lakshmiratan, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Kunle Olukotun, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar
Machine learning (ML) techniques are enjoying rapidly increasing adoption.
no code implementations • 12 Mar 2019 • Liam Li, Evan Sparks, Kevin Jamieson, Ameet Talwalkar
Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden.
no code implementations • 28 Feb 2019 • Neel Guha, Ameet Talwalkar, Virginia Smith
We present one-shot federated learning, where a central server learns a global model over a network of federated devices in a single round of communication.
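A sketch of one possible combine rule in this single-round setting: each device uploads its locally trained model once, and the server ensembles their predictions. The sklearn-style `predict_proba` API is an assumption; the paper studies several ensemble and selection schemes.

```python
import numpy as np

def one_shot_ensemble(client_models, X):
    """One-shot federated learning via ensembling: after a single round of
    communication, average the clients' predicted class probabilities."""
    probs = np.mean([m.predict_proba(X) for m in client_models], axis=0)
    return probs.argmax(axis=1)
```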
1 code implementation • 27 Feb 2019 • Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar
We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods.
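A Reptile-style sketch of the gradient-based side of this bridge, meta-learning an initialization as tasks arrive online; `grad(task, w)` is an assumed gradient oracle for the task's loss, and this is not the paper's exact meta-algorithm:

```python
import numpy as np

def meta_learn_init(tasks, grad, dim, inner_steps=5,
                    inner_lr=0.1, meta_lr=0.5):
    """Online meta-learning of a shared initialization: fine-tune on each
    task, then pull the initialization toward the task's solution."""
    phi = np.zeros(dim)                     # shared initialization
    for task in tasks:                      # tasks arrive one at a time
        w = phi.copy()
        for _ in range(inner_steps):        # within-task gradient steps
            w = w - inner_lr * grad(task, w)
        phi = phi + meta_lr * (w - phi)     # meta-update toward task optimum
    return phi
```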
4 code implementations • 20 Feb 2019 • Liam Li, Ameet Talwalkar
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.
1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.
1 code implementation • ICLR 2019 • Sebastian Caldas, Jakub Konečny, H. Brendan McMahan, Ameet Talwalkar
Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation.
20 code implementations • 14 Dec 2018 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
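The proximal idea behind this framework can be summarized in a few lines: each device minimizes its local loss plus a term that penalizes drift from the current global model, so devices can safely perform variable amounts of local work. A minimal PyTorch sketch:

```python
def fedprox_local_loss(model, global_params, batch_loss, mu=0.01):
    """FedProx-style local objective: the device's loss plus a proximal
    term keeping local updates close to the current global model."""
    prox = 0.0
    for p, g in zip(model.parameters(), global_params):
        prox = prox + ((p - g) ** 2).sum()
    return batch_loss + 0.5 * mu * prox
```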
7 code implementations • 3 Dec 2018 • Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar
Modern federated networks, such as those comprised of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.
5 code implementations • ICLR 2018 • Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
Modern learning models are characterized by large hyperparameter spaces and long training times.
2 code implementations • NeurIPS 2018 • Gregory Plumb, Denali Molitor, Ameet Talwalkar
Some of the most common forms of interpretability systems are example-based, local, and global explanations.
no code implementations • ICLR 2018 • Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, Katya Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs.
no code implementations • 3 Jul 2017 • Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman
We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.
2 code implementations • NeurIPS 2017 • Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices.
17 code implementations • 21 Mar 2016 • Lisha Li, Kevin Jamieson, Giulia Desalvo, Afshin Rostamizadeh, Ameet Talwalkar
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters.
no code implementations • 26 May 2015 • Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, Ameet Talwalkar
Apache Spark is a popular open-source platform for large-scale data processing that is well-suited for iterative machine learning tasks.
1 code implementation • 27 Feb 2015 • Kevin Jamieson, Ameet Talwalkar
Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem.
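Successive halving is the canonical strategy for this problem: allocate iterations to all arms, discard the worse fraction, and repeat with a larger per-arm budget. A sketch, with `step(config, n_iters)` as an assumed oracle returning the current validation loss after `n_iters` more training iterations:

```python
def successive_halving(configs, step, budget=1, eta=2):
    """Non-stochastic best-arm identification by successive halving:
    give every surviving arm more pulls, keep the best 1/eta fraction."""
    arms = list(configs)
    while len(arms) > 1:
        scored = sorted(arms, key=lambda a: step(a, budget))
        arms = scored[: max(1, len(arms) // eta)]
        budget *= eta                       # survivors get a larger budget
    return arms[0]
```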
no code implementations • 31 Jan 2015 • Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I. Jordan, Tim Kraska
The proliferation of massive datasets combined with the development of sophisticated analytical techniques have enabled a wide variety of novel applications such as improved product recommendations, automatic image tagging, and improved speech-driven interfaces.
no code implementations • 9 Aug 2014 • Ameet Talwalkar, Afshin Rostamizadeh
Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns.
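That assumption is exactly what Nystrom-type methods exploit: given a sampled index set, a PSD kernel matrix is reconstructed from the corresponding columns as K ~ C W^+ C^T. A minimal NumPy sketch:

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation of a PSD kernel matrix from a column subset:
    K ~= C @ pinv(W) @ C.T with C = K[:, idx], W = K[idx][:, idx]."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```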
no code implementations • 21 Oct 2013 • Evan R. Sparks, Ameet Talwalkar, Virginia Smith, Jey Kottalam, Xinghao Pan, Joseph Gonzalez, Michael J. Franklin, Michael I. Jordan, Tim Kraska
MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing.
no code implementations • 20 Apr 2013 • Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I. Jordan
Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data.
no code implementations • NeurIPS 2011 • Lester W. Mackey, Michael I. Jordan, Ameet Talwalkar
This work introduces Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for noisy matrix factorization.
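A rough sketch of the divide-factor-combine pattern, under assumptions: plain truncated SVDs stand in for the noisy factorization subroutine, and the combine step projects every block onto one block's column space, loosely following the paper's projection variant.

```python
import numpy as np

def dfc_proj(M, rank, n_blocks):
    """Divide-Factor-Combine sketch: factor column blocks independently
    (parallelizable), then combine by projection."""
    # Divide: partition columns into blocks.
    blocks = np.array_split(M, n_blocks, axis=1)
    # Factor: low-rank approximation of each block.
    factored = []
    for B in blocks:
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        factored.append(U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank])
    # Combine: project every block onto the column space of the first
    # block's low-rank factor, then concatenate.
    U0, _, _ = np.linalg.svd(factored[0], full_matrices=False)
    U0 = U0[:, :rank]
    return np.hstack([U0 @ (U0.T @ F) for F in factored])
```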
no code implementations • 5 Jul 2011 • Lester Mackey, Ameet Talwalkar, Michael I. Jordan
If learning methods are to scale to the massive sizes of modern datasets, it is essential for the field of machine learning to embrace parallel and distributed computing.
no code implementations • NeurIPS 2009 • Sanjiv Kumar, Mehryar Mohri, Ameet Talwalkar
A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices.