Search Results for author: Nicholas Donald Lane

Found 9 papers, 3 papers with code

Enhancing Data Quality in Federated Fine-Tuning of Foundation Models

no code implementations 7 Mar 2024 Wanru Zhao, Yaxin Du, Nicholas Donald Lane, Siheng Chen, Yanfeng Wang

Foundation model training currently relies heavily on public-domain data, which recent research suggests is nearing exhaustion.

BLOX: Macro Neural Architecture Search Benchmark and Algorithms

1 code implementation 13 Oct 2022 Thomas Chun Pong Chau, Łukasz Dudziak, Hongkai Wen, Nicholas Donald Lane, Mohamed S Abdelfattah

To provide a systematic study of the performance of NAS algorithms on a macro search space, we release Blox - a benchmark that consists of 91k unique models trained on the CIFAR-100 dataset.

Neural Architecture Search

ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity

no code implementations ICLR 2022 Xinchi Qiu, Javier Fernandez-Marques, Pedro PB Gusmao, Yan Gao, Titouan Parcollet, Nicholas Donald Lane

When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed.

Federated Learning
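The abstract above motivates training with local sparsity on constrained devices. As an illustration only, the snippet below sketches a generic top-k magnitude sparsification of a local model update, a common way to cut memory and communication in federated settings; it is an assumption for illustration, not ZeroFL's actual scheme (the function name `topk_sparsify` and the keep ratio are invented here).

```python
import numpy as np

def topk_sparsify(update: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Zero out all but the largest-magnitude entries of a local update.

    NOTE: illustrative top-k masking only, not ZeroFL's exact method.
    """
    flat = update.flatten()
    k = max(1, int(len(flat) * keep_ratio))
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).reshape(update.shape)
```

A client would apply this to its update before sending it to the server, so only the k surviving values (and their indices) need to be transmitted.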

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients

1 code implementation ICLR 2022 Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.
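For context on what "pruning at initialization" means in the abstract above, here is a minimal sketch of one well-known criterion in this family, SNIP-style saliency |w · ∂L/∂w| computed on a single batch before training. This is background illustration only, not the paper's meta-gradient (ProsPr) method; the function name and sparsity level are invented for the example.

```python
import torch
import torch.nn as nn

def prune_at_init(model, loss_fn, batch, sparsity=0.9):
    """Prune at initialization: keep weights with the highest |w * grad|.

    NOTE: SNIP-style saliency for illustration, not ProsPr's meta-gradients.
    """
    x, y = batch
    # Only prune weight matrices/tensors, not biases.
    weights = [p for p in model.parameters() if p.dim() > 1]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, weights)
    scores = torch.cat([(w * g).abs().flatten() for w, g in zip(weights, grads)])
    k = int(len(scores) * sparsity)
    # k-th smallest score is the pruning threshold.
    threshold = torch.kthvalue(scores, k).values
    masks = [((w * g).abs() > threshold).float() for w, g in zip(weights, grads)]
    with torch.no_grad():
        for w, m in zip(weights, masks):
            w.mul_(m)  # zero out pruned weights in place
    return masks
```

The returned masks would then be applied after every optimizer step to keep the pruned weights at zero throughout training.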

Conditioning Sequence-to-sequence Networks with Learned Activations

no code implementations ICLR 2022 Alberto Gil Couto Pimentel Ramos, Abhinav Mehrotra, Nicholas Donald Lane, Sourav Bhattacharya

Conditional neural networks play an important role in a number of sequence-to-sequence modeling tasks, including personalized sound enhancement (PSE), speaker dependent automatic speech recognition (ASR), and generative modeling such as text-to-speech synthesis.

Automatic Speech Recognition (ASR) +3

Adaptive Filters for Low-Latency and Memory-Efficient Graph Neural Networks

no code implementations ICLR 2022 Shyam A. Tailor, Felix Opolka, Pietro Lio, Nicholas Donald Lane

Scaling and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency.

Scaling Unsupervised Domain Adaptation through Optimal Collaborator Selection and Lazy Discriminator Synchronization

no code implementations1 Jan 2021 Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas Donald Lane

Breakthroughs in unsupervised domain adaptation (uDA) have opened up the possibility of adapting models from a label-rich source domain to unlabeled target domains.

Privacy Preserving Unsupervised Domain Adaptation

BinaryFlex: On-the-Fly Kernel Generation in Binary Convolutional Networks

no code implementations ICLR 2018 Vincent W.-S. Tseng, Sourav Bhattacharya, Javier Fernández Marqués, Milad Alizadeh, Catherine Tong, Nicholas Donald Lane

In this work we present BinaryFlex, a neural network architecture that learns weighting coefficients for a predefined orthogonal binary basis, instead of learning the convolutional filters directly as in the conventional approach.
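To make the idea of a predefined orthogonal binary basis concrete, the sketch below builds filters as learned weighted sums of fixed ±1 basis vectors, using a Sylvester-construction Hadamard matrix as one possible orthogonal binary basis. This is an assumed illustration of the general idea, not the specific basis or architecture from BinaryFlex; the helper names `hadamard` and `make_filters` are invented here.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Build an n x n Hadamard matrix (Sylvester construction).

    Rows are mutually orthogonal +/-1 vectors; n must be a power of two.
    NOTE: one possible orthogonal binary basis, not necessarily the one
    used by BinaryFlex.
    """
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def make_filters(coeffs: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Generate filters on the fly as weighted sums of fixed binary basis rows.

    Only `coeffs` (shape: num_filters x num_basis) is learned and stored;
    the +/-1 basis can be regenerated, saving memory over storing filters.
    """
    return coeffs @ basis
```

The memory advantage comes from the basis being fixed: only the small coefficient matrix needs to be trained and kept, while the binary basis can be regenerated at inference time.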
