Search Results for author: Shyam A. Tailor

Found 6 papers, 2 papers with code

Are Accelerometers for Activity Recognition a Dead-end?

no code implementations 22 Jan 2020 Catherine Tong, Shyam A. Tailor, Nicholas D. Lane

Overall, our work highlights the need to move away from accelerometers and calls for further exploration of using imagers for activity recognition.

Feature Engineering, Human Activity Recognition

Degree-Quant: Quantization-Aware Training for Graph Neural Networks

no code implementations ICLR 2021 Shyam A. Tailor, Javier Fernandez-Marques, Nicholas D. Lane

Graph neural networks (GNNs) have demonstrated strong performance on a wide variety of tasks due to their ability to model non-uniform structured data.

Graph Classification, Graph Regression, +2
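For context on the Degree-Quant entry above, the sketch below is a minimal, illustrative example of quantization-aware training applied to a GCN-style layer in plain PyTorch; the names `fake_quantize` and `QuantGCNLayer` are invented for illustration. It is not the Degree-Quant algorithm itself (which additionally accounts for node degree when deciding what to protect during training); it only shows the generic fake-quantization and straight-through-estimator machinery that quantization-aware training for GNNs builds on.

```python
import torch
import torch.nn as nn

def fake_quantize(x, num_bits=8):
    # Symmetric uniform fake quantization with a straight-through estimator (STE):
    # values are rounded onto an integer grid in the forward pass, while gradients
    # pass through unchanged so the network remains trainable.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    return x + (q - x).detach()

class QuantGCNLayer(nn.Module):
    # GCN-style layer whose weights and messages are fake-quantized during training.
    def __init__(self, in_dim, out_dim, num_bits=8):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.num_bits = num_bits

    def forward(self, x, adj):
        # x: node features [N, in_dim]; adj: dense normalized adjacency [N, N]
        w_q = fake_quantize(self.lin.weight, self.num_bits)  # quantized weights
        h = fake_quantize(x @ w_q.t(), self.num_bits)        # quantized messages
        return adj @ h                                        # neighbourhood aggregation

# Toy usage: 5 nodes, identity adjacency as a stand-in for a real normalized graph
x = torch.randn(5, 16)
out = QuantGCNLayer(16, 32)(x, torch.eye(5))
```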

Do We Need Anisotropic Graph Neural Networks?

2 code implementations 3 Apr 2021 Shyam A. Tailor, Felix L. Opolka, Pietro Liò, Nicholas D. Lane

We demonstrate that EGC outperforms existing approaches across 6 large and diverse benchmark datasets, and conclude by discussing questions that our work raises for the community going forward.
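To illustrate the isotropic-versus-anisotropic distinction that the title above refers to, here is a small hypothetical sketch (the function names `isotropic_aggregate` and `anisotropic_aggregate` are my own) contrasting structure-only neighbour weighting with feature-dependent, attention-style weighting. It does not reproduce the EGC layer proposed in the paper.

```python
import torch

def isotropic_aggregate(x, adj):
    # Isotropic: every neighbour contributes with a weight determined only by the
    # graph structure (here, mean pooling over neighbours).
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return (adj @ x) / deg

def anisotropic_aggregate(x, adj, scorer):
    # Anisotropic: each edge receives a feature-dependent weight (attention-style),
    # so different neighbours of the same node can be weighted differently.
    s = scorer(x) @ scorer(x).t()               # [N, N] pairwise scores
    s = s.masked_fill(adj == 0, float('-inf'))  # restrict to existing edges
    return torch.softmax(s, dim=1) @ x

# Toy usage on a random 5-node graph with self-loops
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)
scorer = torch.nn.Linear(16, 16, bias=False)
print(isotropic_aggregate(x, adj).shape, anisotropic_aggregate(x, adj, scorer).shape)
```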

Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

no code implementations 13 Aug 2021 Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji

In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks.

Mixed Reality

Adaptive Filters for Low-Latency and Memory-Efficient Graph Neural Networks

no code implementations ICLR 2022 Shyam A. Tailor, Felix Opolka, Pietro Liò, Nicholas Donald Lane

Scaling and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency.

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients

1 code implementation ICLR 2022 Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.
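As background for the entry above: Prospect Pruning (ProsPr) scores weights using meta-gradients taken through the first few steps of training. The sketch below does not implement that meta-gradient criterion; it shows a simpler first-order pruning-at-initialization saliency (|gradient × weight|, in the style of SNIP) to make the general setting concrete. The helper `prune_at_init` and the toy model are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def prune_at_init(model, inputs, targets, sparsity=0.9):
    # Score every weight matrix by |gradient * weight| on one batch, then zero out
    # the globally lowest-scoring fraction of weights before training begins.
    params = [p for p in model.parameters() if p.dim() > 1]   # skip biases
    loss = F.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)
    scores = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    threshold = torch.quantile(scores, sparsity)
    masks = []
    with torch.no_grad():
        for g, p in zip(grads, params):
            mask = ((g * p).abs() >= threshold).float()
            p.mul_(mask)             # apply the pruning mask to the initial weights
            masks.append(mask)       # masks would be re-applied after every update
    return masks

# Toy usage with a small MLP and random data
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
masks = prune_at_init(model, torch.randn(32, 20), torch.randint(0, 10, (32,)))
```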
