Computational Efficiency
881 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Computational Efficiency models and implementations.

Most implemented papers
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
Furthermore, Cluster-GCN allows us to train much deeper GCN without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16].
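The core idea of Cluster-GCN is to partition the graph and run GCN propagation within each cluster's subgraph, so memory scales with the largest cluster rather than the full graph. A minimal NumPy sketch of that idea (hypothetical shapes and a hand-picked cluster assignment; not the authors' implementation, which uses METIS partitioning and stochastic multi-cluster batches):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: row-normalized adjacency @ features @ weight, then ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg_inv = 1.0 / adj_hat.sum(axis=1)
    norm_adj = adj_hat * deg_inv[:, None]            # row-normalize
    return np.maximum(norm_adj @ feats @ weight, 0)

def cluster_gcn_step(adj, feats, weight, clusters):
    """Apply the GCN layer independently per cluster subgraph (Cluster-GCN's trick)."""
    out = np.zeros((feats.shape[0], weight.shape[1]))
    for nodes in clusters:
        sub_adj = adj[np.ix_(nodes, nodes)]          # keep intra-cluster edges only
        out[nodes] = gcn_layer(sub_adj, feats[nodes], weight)
    return out

rng = np.random.default_rng(0)
adj = (rng.random((8, 8)) > 0.6).astype(float)
adj = np.maximum(adj, adj.T)                         # symmetric toy graph
feats = rng.standard_normal((8, 4))
weight = rng.standard_normal((4, 3))
clusters = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]
out = cluster_gcn_step(adj, feats, weight, clusters)
print(out.shape)  # (8, 3)
```

Dropping inter-cluster edges is what makes each mini-batch cheap; the paper mitigates the resulting approximation by sampling several clusters per batch.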
A Transformer-based Framework for Multivariate Time Series Representation Learning
In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series.
Towards Good Practices for Very Deep Two-Stream ConvNets
However, for action recognition in videos, the improvement brought by deep convolutional networks is less evident.
Distribution-Free Predictive Inference For Regression
In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
Continual Learning Through Synaptic Intelligence
While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.
GraphGAN: Graph Representation Learning with Generative Adversarial Nets
The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space.
Multi-level Wavelet-CNN for Image Restoration
In the modified U-Net architecture, a wavelet transform is introduced to reduce the size of the feature maps in the contracting subnetwork.
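The size reduction in question comes from a discrete wavelet transform: one level of a 2-D Haar DWT splits an (H, W) map into four (H/2, W/2) subbands, halving spatial resolution while remaining exactly invertible. A small NumPy sketch of that transform (illustrative only; the paper applies it to multi-channel feature maps inside the network):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar wavelet transform.

    Returns four half-size subbands (LL, LH, HL, HH); no information
    is lost, since the transform is orthonormal and invertible.
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

x = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(x)

# Inverse transform: perfect reconstruction from the four subbands.
rec = np.empty_like(x)
rec[0::2, 0::2] = (ll + lh + hl + hh) / 2
rec[0::2, 1::2] = (ll + lh - hl - hh) / 2
rec[1::2, 0::2] = (ll - lh + hl - hh) / 2
rec[1::2, 1::2] = (ll - lh - hl + hh) / 2
print(np.allclose(rec, x))  # True
```

Because the inverse is exact, the expanding subnetwork can recover full resolution without the information loss of pooling.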
RWKV: Reinventing RNNs for the Transformer Era
This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
Discovering and Deciphering Relationships Across Disparate Data Modalities
Understanding the relationships between different properties of data, such as whether a connectome or genome has information about disease status, is becoming increasingly important in modern biological datasets.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters.
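The conditional computation described here rests on a sparse gating network: each input is routed to only its top-k experts, so compute per input stays roughly constant as the number of experts (and hence capacity) grows. A minimal NumPy sketch of top-k gating (hypothetical linear "experts" standing in for feed-forward networks; the paper adds noisy gating and load-balancing losses not shown here):

```python
import numpy as np

def sparse_moe(x, w_gate, experts, k=2):
    """Route each row of x to its top-k experts; mix outputs with softmax weights.

    Only k experts are evaluated per input, which is the conditional
    computation that keeps cost low as total capacity grows.
    """
    logits = x @ w_gate                        # (batch, n_experts) gating scores
    top_k = np.argsort(logits, axis=1)[:, -k:]
    out = np.zeros_like(x)
    for i, row in enumerate(x):
        scores = logits[i, top_k[i]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()               # softmax over the selected experts only
        for w, e in zip(weights, top_k[i]):
            out[i] += w * experts[e](row)      # the other experts are never run
    return out

rng = np.random.default_rng(0)
n_experts, d = 4, 8
# Hypothetical experts: simple linear maps in place of feed-forward nets.
mats = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda v, m=m: v @ m for m in mats]
w_gate = rng.standard_normal((d, n_experts))
x = rng.standard_normal((3, d))
y = sparse_moe(x, w_gate, experts)
print(y.shape)  # (3, 8)
```

With thousands of experts, k stays small (the paper uses k up to 4), so total parameters can grow by orders of magnitude while per-token FLOPs barely change.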