H-Meta-NAS is Pareto-dominant over a variety of NAS and manually designed baselines on popular few-shot learning benchmarks, across various hardware platforms and constraints.
The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.
LPGNAS automatically learns the optimal architecture, together with the best quantisation strategy for each GNN component, using back-propagation in a single search round.
The high energy cost of neural network training and inference has led to the use of acceleration hardware such as GPUs and TPUs.
Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.
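One standard way such adversarial samples are crafted is the Fast Gradient Sign Method (FGSM), which perturbs the input one step along the sign of the loss gradient. A minimal sketch on a toy two-feature logistic classifier (the weights and inputs are invented for illustration, and FGSM is named here only as a representative attack, not necessarily the one the source uses):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy logistic "classifier"; weights are made up for illustration.
w = np.array([2.0, -3.0])

def fgsm(x, y, eps):
    """FGSM: one step along the sign of the input gradient of the loss."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w        # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.6, 0.1])        # clean input, true label y = 1
p_clean = sigmoid(w @ x)        # ~0.71 -> correctly classified as 1
x_adv = fgsm(x, y=1.0, eps=0.4)
p_adv = sigmoid(w @ x_adv)      # ~0.25 -> prediction flipped to class 0
print(p_clean, p_adv)
```

The perturbation budget `eps` is exaggerated here so the flip is visible on two features; on images the same one-step attack succeeds with perturbations small enough to be imperceptible.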
Furthermore, we show how Tomato produces implementations of networks of various sizes running on a single FPGA or across multiple FPGAs.
In this work, we show how such samples can be generalised from white-box and grey-box attacks to a strong black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.
On ResNet-50, we achieved an 18.08x compression ratio with only a 0.24% loss in top-5 accuracy, outperforming existing compression methods.
The Winograd or Cook-Toom class of algorithms helps reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
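The saving comes from computing small convolution tiles with fewer multiplications. For instance, the classic F(2,3) Winograd transform produces two outputs of a 3-tap 1D convolution with 4 multiplications instead of the 6 a direct computation needs; a minimal sketch using the standard F(2,3) transform matrices:

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap 1D filter using 4 elementwise
# multiplications instead of 6. Standard transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Correlate a length-4 input tile d with a 3-tap filter g."""
    U = G @ g            # filter transform (precomputable per filter)
    V = BT @ d           # input transform
    M = U * V            # the 4 multiplications
    return AT @ M        # output transform -> 2 outputs

d = np.array([1., 2., 3., 4.])
g = np.array([1., 1., 1.])
print(winograd_f23(d, g))            # [6. 9.]
print(np.correlate(d, g, 'valid'))   # [6. 9.] -- matches direct computation
```

In CNN inference the filter transform is amortised across the whole feature map, and the 2D case F(2x2, 3x3) applies the same matrices along both axes, cutting 36 multiplications per tile to 16.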
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision.
Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples or by requiring the DNN to be restructured.
Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.
We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs.