Dynamic neural networks
15 papers with code • 0 benchmarks • 0 datasets
Dynamic neural networks are models that adapt their structure or parameters at training or inference time, conditioned on input complexity or computational constraints. Compared to static architectures, they offer improved computational efficiency, adaptability, and scalability.
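To make the idea concrete, below is a minimal, hypothetical sketch of one common dynamic-network pattern, an early-exit classifier in PyTorch. Layer sizes, the confidence threshold, and the batch-level exit rule are illustrative assumptions; practical systems typically make the exit decision per sample.

```python
# Minimal sketch of an input-adaptive (early-exit) dynamic network.
# All layer sizes and the confidence threshold are illustrative assumptions.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, num_classes)  # cheap early classifier
        self.exit2 = nn.Linear(hidden, num_classes)  # full-depth classifier
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        conf = logits1.softmax(dim=-1).max(dim=-1).values
        if bool((conf >= self.threshold).all()):  # easy batch: stop early
            return logits1
        return self.exit2(self.block2(h))         # hard batch: spend more compute

model = EarlyExitNet()
out = model(torch.randn(4, 32))
```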
Most implemented papers
Nonlinear Systems Identification Using Deep Dynamic Neural Networks
Neural networks are known to be effective function approximators.
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times.
Dynamic Dual Gating Neural Networks
In particular, dynamic dual gating reduces the computation of ResNet50 by 59.7% while achieving 76.41% top-1 accuracy on ImageNet, advancing the state of the art.
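As an illustration of the general gating idea (not the paper's exact dual-gating design), the sketch below uses a lightweight gate head with a straight-through estimator to switch convolution channels on or off per input; all sizes are assumptions.

```python
# Illustrative sketch of dynamic channel gating: a small gate head decides,
# per input, which channels of a conv block are active. Inactive channels
# contribute nothing, so their computation could in principle be skipped.
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Sequential(                 # lightweight gate head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels),
        )

    def forward(self, x):
        logits = self.gate(x)
        soft = torch.sigmoid(logits)
        hard = (soft > 0.5).float()
        # Straight-through estimator: hard 0/1 gates in the forward pass,
        # gradients flow through the soft sigmoid in the backward pass.
        g = hard + soft - soft.detach()
        return self.conv(x) * g[:, :, None, None]

block = GatedConvBlock()
y = block(torch.randn(2, 16, 8, 8))
```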
Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach
Extensive experiments show that the proposed task-oriented communication system achieves a better rate-distortion tradeoff than baseline methods and significantly reduces feature transmission latency under dynamic channel conditions.
DynaShare: Dynamic Neural Networks for Multi-Task Learning
Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths.
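A generic sketch of that intuition appears below (not DynaShare's actual mechanism): each task owns a learned policy over which shared layers to execute. Dimensions and the thresholding rule are illustrative assumptions.

```python
# Sketch of task-conditioned execution paths for multi-task learning:
# each task selects a subset of shared layers to run.
import torch
import torch.nn as nn

class MultiPathNet(nn.Module):
    def __init__(self, dim=32, depth=4, num_tasks=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)
        )
        # One selection logit per (task, layer) pair, initialized so all
        # layers start active. Real methods train such policies with
        # Gumbel-softmax or straight-through estimators; hard thresholding
        # alone is not differentiable.
        self.policy = nn.Parameter(torch.ones(num_tasks, depth))
        self.heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_tasks))

    def forward(self, x, task_id):
        keep = torch.sigmoid(self.policy[task_id]) > 0.5  # this task's path
        for layer, use in zip(self.layers, keep):
            if use:               # layers off this task's path are skipped
                x = layer(x)
        return self.heads[task_id](x)

net = MultiPathNet()
out = net(torch.randn(8, 32), task_id=0)
```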
Temporal Domain Generalization with Drift-Aware Dynamic Neural Networks
Temporal domain generalization is a promising yet extremely challenging area in which the goal is to learn models under temporally changing data distributions and to generalize to unseen distributions that follow the trend of the change.
SATBench: Benchmarking the speed-accuracy tradeoff in object recognition by humans and dynamic neural networks
Using FLOPs as an analog for reaction time, we compare networks with humans on curve-fit error, category-wise correlation, and curve steepness, and conclude that cascaded dynamic neural networks are a promising model of human reaction time in object recognition tasks.
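A tiny sketch of the FLOPs-as-reaction-time analogy (layer shapes assumed purely for illustration): cumulative multiply-add counts at each exit of a cascaded model stand in for time-to-respond.

```python
# Cumulative FLOPs at each exit of a cascaded model; these values would be
# paired with each exit's accuracy to trace a speed-accuracy curve.
layer_dims = [(32, 64), (64, 64), (64, 64)]           # (in, out) per stage
flops_per_stage = [2 * i * o for i, o in layer_dims]  # multiply-adds per sample

cumulative, total = [], 0
for f in flops_per_stage:
    total += f
    cumulative.append(total)
print(cumulative)  # "reaction time" proxy for exits 1, 2, 3
```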
Boosted Dynamic Neural Networks
To optimize the model, the prediction heads and the network backbone are trained jointly on every batch of training data.
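The sketch below shows that joint per-batch training pattern in PyTorch, with every exit head and the shared backbone receiving a loss term on each batch; it omits the paper's boosting-style weighting, and all dimensions are illustrative.

```python
# Joint training of a multi-exit network: every prediction head and the
# shared backbone get a loss on each batch (boosting-style weighting omitted).
import torch
import torch.nn as nn

backbone = nn.ModuleList(
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)
)
heads = nn.ModuleList(nn.Linear(32, 10) for _ in range(3))
params = list(backbone.parameters()) + list(heads.parameters())
opt = torch.optim.SGD(params, lr=0.1)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
h, loss = x, 0.0
for stage, head in zip(backbone, heads):
    h = stage(h)
    loss = loss + criterion(head(h), y)  # every exit contributes a loss term
opt.zero_grad()
loss.backward()
opt.step()
```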
HADAS: Hardware-Aware Dynamic Neural Architecture Search for Edge Performance Scaling
Dynamic neural networks (DyNNs) have become viable techniques to enable intelligence on resource-constrained edge devices while maintaining computational efficiency.
Fixing Overconfidence in Dynamic Neural Networks
Dynamic neural networks are a recent technique that promises to remedy the growing size of modern deep learning models by adapting computational cost to the difficulty of each input.
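The snippet below illustrates why overconfidence matters for exit decisions: the same logits clear or miss a fixed confidence threshold depending on a temperature parameter, so a miscalibrated head exits too early. The numbers are illustrative, and this is not the paper's proposed fix.

```python
# Temperature scaling changes the "confidence" of identical logits, which in
# turn flips a fixed-threshold early-exit decision.
import torch

logits = torch.tensor([2.0, 0.5, 0.1])
for T in (1.0, 2.0):  # T > 1 softens overconfident outputs
    conf = (logits / T).softmax(dim=-1).max().item()
    print(f"T={T}: max confidence = {conf:.3f}")
# With an exit threshold of 0.7, the sample exits early at T=1.0
# (conf ~0.73) but is deferred to a deeper exit at T=2.0 (conf ~0.54).
```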