Search Results for author: Liam Collins

Found 11 papers, 3 papers with code

Exploiting Shared Representations for Personalized Federated Learning

3 code implementations · 14 Feb 2021 · Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai

Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.

Meta-Learning · Multi-Task Learning · +2
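The alternating scheme described in the snippet above (a shared representation plus per-client heads) can be sketched roughly as below. This is a minimal single-machine simulation under assumed names (`local_update`, `server_round`, `body`, `head`), not the paper's released code.

```python
import copy
import torch
import torch.nn as nn

# One round, simulated on a single machine: each client fits its
# personal head with the shared body frozen, then updates the shared
# body with the head frozen; the server averages the body only.

def local_update(body, head, loader, head_steps=5, body_steps=1, lr=0.01):
    loss_fn = nn.CrossEntropyLoss()
    opt_head = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(head_steps):                  # fit the personalized head
        for x, y in loader:
            opt_head.zero_grad()
            loss_fn(head(body(x)), y).backward()
            opt_head.step()
    opt_body = torch.optim.SGD(body.parameters(), lr=lr)
    for _ in range(body_steps):                  # update the shared representation
        for x, y in loader:
            opt_body.zero_grad()
            loss_fn(head(body(x)), y).backward()
            opt_body.step()
    return body.state_dict()

def server_round(global_body, heads, loaders):
    states = []
    for head, loader in zip(heads, loaders):
        body = copy.deepcopy(global_body)        # broadcast the shared body
        states.append(local_update(body, head, loader))
    avg = {k: torch.stack([s[k] for s in states]).mean(0)
           for k in states[0]}                   # average bodies; heads stay local
    global_body.load_state_dict(avg)
```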

Imaging Mechanism for Hyperspectral Scanning Probe Microscopy via Gaussian Process Modelling

2 code implementations · 26 Nov 2019 · Maxim Ziatdinov, Dohyung Kim, Sabine Neumayer, Rama K. Vasudevan, Liam Collins, Stephen Jesse, Mahshid Ahmadi, Sergei V. Kalinin

We investigate the ability to reconstruct and derive spatial structure from sparsely sampled 3D piezoresponse force microscopy data, captured using the band-excitation (BE) technique, via Gaussian Process (GP) methods.

Computational Physics · Materials Science
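A rough illustration of the reconstruction step described above, using scikit-learn's `GaussianProcessRegressor` on a synthetic 2D field as a stand-in for one slice of the hyperspectral data; the kernel choice and sampling rate here are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sparse random samples of a smooth 2D field stand in for one slice
# of the sparsely sampled hyperspectral dataset.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xx.ravel(), yy.ravel()])
field = np.sin(6 * xx) * np.cos(4 * yy)           # synthetic ground truth
idx = rng.choice(grid.shape[0], size=250, replace=False)  # ~10% sampled

# Fit a GP (RBF kernel plus a noise term) to the sparse samples and
# predict the full grid; pred_std gives per-pixel uncertainty.
gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(grid[idx], field.ravel()[idx])
pred, pred_std = gp.predict(grid, return_std=True)
recon = pred.reshape(xx.shape)                    # reconstructed field
```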

How Does the Task Landscape Affect MAML Performance?

no code implementations · 27 Oct 2020 · Liam Collins, Aryan Mokhtari, Sanjay Shakkottai

Model-Agnostic Meta-Learning (MAML) has become increasingly popular for training models that can quickly adapt to new tasks via one or a few stochastic gradient descent steps.

Few-Shot Image Classification · Meta-Learning
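For reference, the quick-adaptation mechanism described in the snippet can be sketched as a first-order MAML loop; the function names and the first-order simplification (dropping full MAML's second-order term) are assumptions of this sketch, not the paper's.

```python
import copy
import torch
import torch.nn as nn

# First-order MAML sketch: adapt a copy of the model with a few SGD
# steps on a task's support set, then apply the adapted model's
# query-set gradients to the meta-parameters.

def adapt(model, support, inner_lr=0.01, inner_steps=1):
    adapted = copy.deepcopy(model)
    x, y = support
    for _ in range(inner_steps):
        loss = nn.functional.mse_loss(adapted(x), y)
        grads = torch.autograd.grad(loss, adapted.parameters())
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), grads):
                p -= inner_lr * g                # one SGD adaptation step
    return adapted

def meta_step(model, tasks, meta_opt):
    meta_opt.zero_grad()
    for support, (xq, yq) in tasks:
        adapted = adapt(model, support)
        loss = nn.functional.mse_loss(adapted(xq), yq)
        grads = torch.autograd.grad(loss, adapted.parameters())
        for p, g in zip(model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()                              # update meta-parameters
```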

MAML and ANIL Provably Learn Representations

no code implementations · 7 Feb 2022 · Liam Collins, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai

Recent empirical evidence has led to the conventional wisdom that gradient-based meta-learning (GBML) methods perform well at few-shot learning because they learn an expressive data representation that is shared across tasks.

Few-Shot Learning · Representation Learning
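The distinction the paper analyzes is easiest to see in code: ANIL keeps the body fixed in the inner loop and adapts only the head, so the body acts as a shared representation. A minimal sketch under assumed `body`/`head` modules (the outer loop would mirror the MAML sketch above, training both body and head):

```python
import copy
import torch
import torch.nn as nn

# ANIL inner loop: freeze the network body and adapt just the final
# head layer per task.

def anil_adapt(body, head, support, inner_lr=0.01, inner_steps=1):
    adapted_head = copy.deepcopy(head)
    x, y = support
    for _ in range(inner_steps):
        loss = nn.functional.mse_loss(adapted_head(body(x)), y)
        grads = torch.autograd.grad(loss, adapted_head.parameters())
        with torch.no_grad():
            for p, g in zip(adapted_head.parameters(), grads):
                p -= inner_lr * g                # adapt the head only
    return adapted_head
```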

FedAvg with Fine Tuning: Local Updates Lead to Representation Learning

no code implementations · 27 May 2022 · Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai

We show that the generalizability of FedAvg's output stems from its ability to learn the common data representation among the clients' tasks by leveraging the diversity among client data distributions via local updates.

Federated Learning · Image Classification · +1
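The two phases discussed in the snippet, FedAvg with several local updates per round followed by per-client fine-tuning, can be sketched as below; all names are illustrative and this is not the paper's code.

```python
import copy
import torch
import torch.nn as nn

# Phase 1: a FedAvg round with multiple local SGD updates per client.
def fedavg_round(global_model, loaders, local_steps=5, lr=0.05):
    loss_fn = nn.CrossEntropyLoss()
    states = []
    for loader in loaders:
        model = copy.deepcopy(global_model)      # broadcast
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_steps):             # local updates
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        states.append(model.state_dict())
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)            # server average

# Phase 2: each client fine-tunes a local copy of the averaged model.
def fine_tune(global_model, loader, steps=10, lr=0.05):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```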

InfoNCE Loss Provably Learns Cluster-Preserving Representations

no code implementations · 15 Feb 2023 · Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai

The goal of contrastive learning is to learn a representation that preserves underlying clusters by keeping samples with similar content, e.g. the "dogness" of a dog, close to each other in the space generated by the representation.
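The standard form of the InfoNCE loss over a batch of positive pairs looks like the following; this is the common implementation, which may differ in details from the exact variant analyzed in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE over a batch of positive pairs (z1[i], z2[i]);
    every other sample in the batch serves as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # pairwise similarities
    labels = torch.arange(z1.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, labels)
```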

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks

no code implementations · 13 Jul 2023 · Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai

An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline, then adapt it to downstream tasks, often by re-training only the last linear layer of the network.

Binary Classification · Multi-Task Learning · +1
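The pretrain-then-retrain-the-last-layer paradigm described above, on a two-layer ReLU network matching the title's setting, can be sketched as follows; the losses, sizes, and function names are illustrative.

```python
import torch
import torch.nn as nn

# A two-layer ReLU network: hidden representation plus a linear head.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# Pretrain the whole network on a collection of source tasks.
def pretrain(net, tasks, lr=0.01, steps=100):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in tasks:                       # iterate over source tasks
            opt.zero_grad()
            nn.functional.mse_loss(net(x), y).backward()
            opt.step()

# Adapt to a downstream task by refitting only the final linear layer.
def linear_probe(net, x, y, lr=0.01, steps=100):
    for p in net[:-1].parameters():              # freeze the representation
        p.requires_grad_(False)
    opt = torch.optim.SGD(net[-1].parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
```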

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning

no code implementations · 6 Oct 2023 · Liam Collins, Shanshan Wu, Sewoong Oh, Khe Chai Sim

In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge.

Benchmarking · Federated Learning · +2

In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness

no code implementations · 18 Feb 2024 · Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai

We show that an attention unit learns a window that it uses to implement a nearest-neighbors predictor adapted to the landscape of the pretraining tasks.

In-Context Learning
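A toy illustration of the claim in the snippet: a single softmax attention unit acting as a soft nearest-neighbors predictor over in-context examples, where the inverse temperature plays the role of the learned window width. This is a schematic of the mechanism, not the paper's construction.

```python
import numpy as np

def softmax_attention_predict(x_ctx, y_ctx, x_query, scale=10.0):
    """One softmax attention unit used for in-context regression: the
    attention weights form a soft window around the query, so the output
    is a weighted nearest-neighbors average of the in-context labels.
    `scale` plays the role of the window width (larger = narrower)."""
    scores = scale * x_ctx @ x_query              # query-key similarities
    w = np.exp(scores - scores.max())
    w /= w.sum()                                  # softmax over the context
    return w @ y_ctx                              # value aggregation

# Example: context drawn from a smooth (Lipschitz) target; the sharper
# the softmax, the closer this gets to 1-nearest-neighbor prediction.
rng = np.random.default_rng(0)
x_ctx = rng.normal(size=(32, 4))
x_ctx /= np.linalg.norm(x_ctx, axis=1, keepdims=True)
y_ctx = np.sin(x_ctx @ np.ones(4))
x_query = x_ctx[0] + 0.01 * rng.normal(size=4)
x_query /= np.linalg.norm(x_query)
print(softmax_attention_predict(x_ctx, y_ctx, x_query))
```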
