Search Results for author: Javier Fernandez-Marques

Found 16 papers, 4 papers with code

How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor

no code implementations • 30 Nov 2023 • Hrushikesh Loya, Łukasz Dudziak, Abhinav Mehrotra, Royson Lee, Javier Fernandez-Marques, Nicholas D. Lane, Hongkai Wen

Neural architecture search has proven to be a powerful approach to designing and refining neural networks, often boosting their performance and efficiency over manually-designed variations, but comes with computational overhead.

Image Classification · Meta-Learning +1

Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation

no code implementations • 25 Jul 2023 • Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane

In this work, we investigate the implications in terms of CNN engine design for a class of models that introduce a pre-convolution stage to decompress the weights at run time.
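
The pre-convolution stage can be pictured as a small decoder that expands a compact weight representation just before the convolution executes, trading extra compute for reduced memory traffic. A minimal sketch, assuming PyTorch; the module name and decoder structure are invented for illustration and are not the paper's implementation:

```python
import torch
import torch.nn.functional as F

class OnTheFlyConv2d(torch.nn.Module):
    """Hypothetical sketch: store a compact per-filter code and expand it
    to full convolution weights at run time (pre-convolution stage)."""

    def __init__(self, in_ch, out_ch, k, compressed_dim=8):
        super().__init__()
        # Compact representation: a low-dimensional code per filter ...
        self.codes = torch.nn.Parameter(torch.randn(out_ch, compressed_dim))
        # ... plus a small decoder that generates the full filter weights.
        self.decoder = torch.nn.Linear(compressed_dim, in_ch * k * k)
        self.in_ch, self.k = in_ch, k

    def forward(self, x):
        # Pre-convolution stage: decompress the weights on the fly.
        w = self.decoder(self.codes).view(-1, self.in_ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

x = torch.randn(1, 16, 32, 32)
y = OnTheFlyConv2d(16, 32, 3)(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])
```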

PQA: Exploring the Potential of Product Quantization in DNN Hardware Acceleration

1 code implementation • 25 May 2023 • Ahmed F. AbouElhamayed, Angela Cui, Javier Fernandez-Marques, Nicholas D. Lane, Mohamed S. Abdelfattah

We identify PQ configurations that improve performance-per-area for ResNet20 by up to 3.1$\times$, even when compared to a highly optimized conventional DNN accelerator, with similar improvements on two additional compact DNNs.

Quantization
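
Product quantization replaces a layer's dot-products with lookups into small per-subspace tables. A NumPy sketch of the inference-time idea; random codebooks stand in for learned ones, and all names are mine rather than PQA's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_sub, n_cent = 64, 8, 16            # feature dim, subspaces, centroids
sub = d // n_sub

W = rng.standard_normal((256, d))       # weights of a dense layer
x = rng.standard_normal(d)              # one input vector

# Codebooks per subspace (real PQ learns these with k-means).
codebooks = rng.standard_normal((n_sub, n_cent, sub))

# Encode each weight subvector as the index of its nearest centroid.
codes = np.empty((256, n_sub), dtype=np.int64)
for s in range(n_sub):
    Ws = W[:, s * sub:(s + 1) * sub]
    dists = ((Ws[:, None, :] - codebooks[s][None]) ** 2).sum(-1)
    codes[:, s] = dists.argmin(1)

# Inference: one small LUT of input-centroid dot products per subspace;
# the matmul then collapses to table lookups and additions.
lut = np.einsum('skc,sc->sk', codebooks, x.reshape(n_sub, sub))
y_pq = lut[np.arange(n_sub), codes].sum(1)

y_exact = W @ x
print(np.corrcoef(y_pq, y_exact)[0, 1])  # rough approximation quality
```

In hardware, the LUTs and index adds map to cheap memory accesses instead of multipliers, which is the source of the performance-per-area gains the paper explores.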

Federated Learning for Inference at Anytime and Anywhere

no code implementations • 8 Dec 2022 • Zicheng Liu, Da Li, Javier Fernandez-Marques, Stefanos Laskaridis, Yan Gao, Łukasz Dudziak, Stan Z. Li, Shell Xu Hu, Timothy Hospedales

Federated learning has been predominantly concerned with collaborative training of deep networks from scratch, and especially the many challenges that arise, such as communication cost, robustness to heterogeneous data, and support for diverse device capabilities.

Federated Learning

ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity

no code implementations • ICLR 2022 • Xinchi Qiu, Javier Fernandez-Marques, Pedro P. B. de Gusmão, Yan Gao, Titouan Parcollet, Nicholas D. Lane

When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed.

Federated Learning
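
ZeroFL builds on sparse local training; the core primitive is a top-k magnitude mask over the weights. A toy PyTorch sketch of such masking, as a simplification rather than ZeroFL's exact procedure:

```python
import torch

def topk_mask(w: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep only the largest-magnitude (1 - sparsity) fraction of entries."""
    k = max(1, int(w.numel() * (1.0 - sparsity)))
    # Threshold = the k-th largest absolute value.
    threshold = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    return (w.abs() >= threshold).float()

w = torch.randn(4, 4, requires_grad=True)
mask = topk_mask(w.detach(), sparsity=0.75)

# Forward and backward touch only the surviving weights; on device this
# reduces the memory and compute cost of each local training step.
loss = ((w * mask) ** 2).sum()
loss.backward()
print(mask.mean().item())  # ~0.25 of weights kept
```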

Protea: Client Profiling within Federated Systems using Flower

no code implementations • 3 Jul 2022 • Wanru Zhao, Xinchi Qiu, Javier Fernandez-Marques, Pedro P. B. de Gusmão, Nicholas D. Lane

Federated Learning (FL) has emerged as a prospective solution that facilitates the training of a high-performing centralised model without compromising the privacy of users.

Federated Learning

FedorAS: Federated Architecture Search under system heterogeneity

no code implementations • 22 Jun 2022 • Łukasz Dudziak, Stefanos Laskaridis, Javier Fernandez-Marques

In this paper we explore the question of whether we can design architectures of different footprints in a cross-device federated setting, where the device landscape, availability and scale are very different.

Federated Learning · Neural Architecture Search

End-to-End Speech Recognition from Federated Acoustic Models

1 code implementation • 29 Apr 2021 • Yan Gao, Titouan Parcollet, Salah Zaiem, Javier Fernandez-Marques, Pedro P. B. de Gusmão, Daniel J. Beutel, Nicholas D. Lane

Training Automatic Speech Recognition (ASR) models under federated learning (FL) settings has attracted a lot of attention recently.

On-device Federated Learning with Flower

no code implementations • 7 Apr 2021 • Akhil Mathur, Daniel J. Beutel, Pedro Porto Buarque de Gusmão, Javier Fernandez-Marques, Taner Topal, Xinchi Qiu, Titouan Parcollet, Yan Gao, Nicholas D. Lane

Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store data in the cloud.

BIG-bench Machine Learning · Federated Learning

unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights Generation

no code implementations • 9 Mar 2021 • Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane

Single computation engines have become a popular design choice for FPGA-based convolutional neural networks (CNNs) enabling the deployment of diverse models without fabric reconfiguration.

A first look into the carbon footprint of federated learning

no code implementations • 15 Feb 2021 • Xinchi Qiu, Titouan Parcollet, Javier Fernandez-Marques, Pedro Porto Buarque de Gusmão, Yan Gao, Daniel J. Beutel, Taner Topal, Akhil Mathur, Nicholas D. Lane

Despite impressive results, deep learning-based technologies also raise severe privacy and environmental concerns induced by the training procedure often conducted in data centers.

Federated Learning

Degree-Quant: Quantization-Aware Training for Graph Neural Networks

no code implementations • ICLR 2021 • Shyam A. Tailor, Javier Fernandez-Marques, Nicholas D. Lane

Graph neural networks (GNNs) have demonstrated strong performance on a wide variety of tasks due to their ability to model non-uniform structured data.

Graph Classification · Graph Regression +2
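
Quantization-aware training, which Degree-Quant builds on, simulates low-precision arithmetic in the forward pass while gradients bypass the non-differentiable rounding via the straight-through estimator. A generic fake-quantization sketch in PyTorch; Degree-Quant's degree-based protection of high in-degree nodes is not shown:

```python
import torch

def fake_quant(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulate uniform quantization; gradients pass straight through."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax - 1, qmax)
    # Straight-through estimator: forward sees q*scale, backward sees x.
    return x + (q * scale - x).detach()

x = torch.randn(8, requires_grad=True)
y = fake_quant(x).sum()
y.backward()
print(x.grad)  # all ones: the gradient of the identity, as STE intends
```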

Flower: A Friendly Federated Learning Research Framework

1 code implementation • 28 Jul 2020 • Daniel J. Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Javier Fernandez-Marques, Yan Gao, Lorenzo Sani, Kwing Hei Li, Titouan Parcollet, Pedro Porto Buarque de Gusmão, Nicholas D. Lane

Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud.

Federated Learning
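
In Flower, a device joins training by implementing a small client interface. A minimal NumPyClient sketch follows; exact signatures vary across flwr releases, so treat this as illustrative rather than canonical:

```python
import flwr as fl
import numpy as np

class ToyClient(fl.client.NumPyClient):
    """Holds one weight vector; a real client would wrap a full model."""

    def __init__(self):
        self.weights = np.zeros(10, dtype=np.float32)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Receive global parameters, run local "training", return update.
        self.weights = parameters[0] + 0.1  # stand-in for local SGD
        return [self.weights], len(self.weights), {}

    def evaluate(self, parameters, config):
        loss = float(np.square(parameters[0]).mean())
        return loss, len(self.weights), {}

# Connecting requires a running Flower server, e.g.:
# fl.client.start_numpy_client(server_address="127.0.0.1:8080",
#                              client=ToyClient())
```

The server aggregates the returned updates (FedAvg by default), so the raw training data never leaves the device.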

Searching for Winograd-aware Quantized Networks

1 code implementation • 25 Feb 2020 • Javier Fernandez-Marques, Paul N. Whatmough, Andrew Mundy, Matthew Mattina

Lightweight architectural designs of Convolutional Neural Networks (CNNs) together with quantization have paved the way for the deployment of demanding computer vision applications on mobile devices.

Neural Architecture Search · Quantization
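
A Winograd-aware network keeps the convolution in the Winograd domain during training so that quantization can account for the transforms. The standard F(2,3) algorithm produces two outputs of a 3-tap convolution with four multiplications instead of six; a NumPy check using the standard transform matrices (Lavin & Gray, 2016):

```python
import numpy as np

# Standard F(2,3) Winograd transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float32)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float32)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)

d = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)  # input tile
g = np.array([0.5, 1.0, -1.0], dtype=np.float32)      # 3-tap kernel

# Element-wise product in the Winograd domain: 4 multiplications
# yield 2 outputs, versus 6 for direct convolution.
y = AT @ ((G @ g) * (BT @ d))
print(y)                                  # Winograd result
print(np.convolve(d, g[::-1], 'valid'))   # direct correlation check
```

Winograd-aware training backpropagates through these transforms, letting a quantized network adapt to the numerical error they introduce.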
