no code implementations • 7 Mar 2024 • Wanru Zhao, Yaxin Du, Nicholas Donald Lane, Siheng Chen, Yanfeng Wang
In the current landscape of foundation model training, there is a significant reliance on public domain data, which is nearing exhaustion according to recent research.
1 code implementation • 13 Oct 2022 • Thomas Chun Pong Chau, Łukasz Dudziak, Hongkai Wen, Nicholas Donald Lane, Mohamed S Abdelfattah
To provide a systematic study of the performance of NAS algorithms on a macro search space, we release Blox - a benchmark that consists of 91k unique models trained on the CIFAR-100 dataset.
no code implementations • ICLR 2022 • Xinchi Qiu, Javier Fernandez-Marques, Pedro PB Gusmao, Yan Gao, Titouan Parcollet, Nicholas Donald Lane
When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed.
1 code implementation • ICLR 2022 • Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal
Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.
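The idea of pruning at initialization can be sketched as follows: score every weight of a freshly initialized network, keep only the highest-scoring fraction, and train the resulting sparse subnetwork. This is a minimal illustration, not the paper's specific method; the saliency criterion here is plain weight magnitude, and the layer shape and sparsity level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense layer weights, drawn at initialization (before training).
W = rng.normal(size=(64, 32))

def prune_at_init(weights, sparsity):
    """Zero out all but the top-(1 - sparsity) fraction of weights by a
    saliency score, before any training takes place.

    The score used here is simple weight magnitude; gradient-based
    criteria (e.g. SNIP-style |g * w|) slot into the same template.
    """
    scores = np.abs(weights)
    k = int(weights.size * (1.0 - sparsity))          # number of weights to keep
    threshold = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    mask = (scores >= threshold).astype(weights.dtype)
    return weights * mask, mask

sparse_W, mask = prune_at_init(W, sparsity=0.9)
print(f"kept {mask.mean():.2%} of weights")
```

The pruned network then trains with the mask held fixed, so both training and inference only ever touch the surviving ~10% of parameters.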
no code implementations • ICLR 2022 • Alberto Gil Couto Pimentel Ramos, Abhinav Mehrotra, Nicholas Donald Lane, Sourav Bhattacharya
Conditional neural networks play an important role in a number of sequence-to-sequence modeling tasks, including personalized sound enhancement (PSE), speaker dependent automatic speech recognition (ASR), and generative modeling such as text-to-speech synthesis.
Automatic Speech Recognition (ASR) +3
no code implementations • ICLR 2022 • Shyam A. Tailor, Felix Opolka, Pietro Lio, Nicholas Donald Lane
Scaling and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency.
1 code implementation • ICLR 2021 • Abhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, Nicholas Donald Lane
These datasets, however, focus predominantly on computer vision and NLP tasks and thus offer only limited coverage of application domains.
Automatic Speech Recognition (ASR) +2
no code implementations • 1 Jan 2021 • Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas Donald Lane
Breakthroughs in unsupervised domain adaptation (uDA) have opened up the possibility of adapting models from a label-rich source domain to unlabeled target domains.
no code implementations • ICLR 2018 • Vincent W.-S. Tseng, Sourav Bhattacharya, Javier Fernández Marqués, Milad Alizadeh, Catherine Tong, Nicholas Donald Lane
In this work we present BinaryFlex, a neural network architecture that learns weighting coefficients over a predefined orthogonal binary basis, rather than learning the convolutional filters directly as in the conventional approach.
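The core idea can be sketched in a few lines: fix a set of mutually orthogonal binary (+/-1) basis vectors, and make only the real-valued combination coefficients trainable. This is an illustrative sketch, not the paper's implementation; the Hadamard construction, basis size, and 4x4 filter shape are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Build an n x n Hadamard matrix (n a power of two): its rows are
    mutually orthogonal +/-1 vectors, a natural predefined binary basis."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 16 orthogonal binary basis vectors of length 16, viewed as flattened 4x4 filters.
basis = hadamard(16)

# Instead of learning the 16 filter taps directly, learn only the
# weighting coefficients of the fixed basis (the trainable parameters).
coeffs = rng.normal(size=16)
filter_ = coeffs @ basis  # reconstructed convolutional filter

print(filter_.reshape(4, 4))
```

Because the binary basis is fixed, it can be regenerated on the fly rather than stored, so only the coefficient vector occupies model memory.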