We show that our proposed regularization method results in improved latent representations for both supervised learning and clustering downstream tasks when compared to autoencoders using other bottleneck structures.
We analyze the trade-off between quantization noise and clipping distortion in low precision networks.
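A minimal sketch of the trade-off being analyzed (illustrative only, not the paper's analysis): for a uniform symmetric quantizer applied to Gaussian-distributed values, a smaller clipping threshold reduces quantization noise but increases clipping distortion, and vice versa. All names and parameter values below are assumptions for the example.

```python
import numpy as np

# Gaussian data as a stand-in for a tensor of weights or activations.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)
num_bits = 4

for clip in (1.0, 2.0, 3.0, 4.0):
    step = 2 * clip / (2 ** num_bits - 1)            # uniform quantization step size
    x_clipped = np.clip(x, -clip, clip)
    x_quant = np.round(x_clipped / step) * step      # uniform symmetric quantization
    clip_err = np.mean((x - x_clipped) ** 2)         # distortion from clipping alone
    quant_err = np.mean((x_clipped - x_quant) ** 2)  # noise from rounding alone
    total_err = np.mean((x - x_quant) ** 2)
    print(f"clip={clip:.1f}  clipping MSE={clip_err:.5f}  "
          f"quantization MSE={quant_err:.5f}  total MSE={total_err:.5f}")
```

Running this shows the total error is minimized at an intermediate threshold, which is the trade-off the paper studies.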
While mainstream deep learning methods train the neural network weights while keeping the network architecture fixed, the emerging neural architecture search (NAS) techniques make the latter also amenable to training.
Efficient point cloud compression is fundamental to enable the deployment of virtual and mixed reality applications, since the number of points to code can be on the order of millions.
We propose a method of training quantization clipping thresholds for uniform symmetric quantizers using standard backpropagation and gradient descent.
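A minimal sketch of the general idea, not the paper's implementation: a uniform symmetric quantizer whose clipping threshold is a trainable parameter, with a straight-through estimator so gradients reach the threshold via standard backpropagation. The class name, initial value, and bit-width below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedClipQuant(nn.Module):
    """Uniform symmetric quantizer with a trainable clipping threshold (sketch)."""
    def __init__(self, num_bits=4, init_clip=3.0):
        super().__init__()
        self.num_bits = num_bits
        self.clip = nn.Parameter(torch.tensor(init_clip))  # learned via gradient descent

    def forward(self, x):
        clip = torch.abs(self.clip)
        step = 2 * clip / (2 ** self.num_bits - 1)
        # Clipping is differentiable with respect to the threshold.
        x_c = torch.minimum(torch.maximum(x, -clip), clip)
        x_q = torch.round(x_c / step) * step
        # Straight-through estimator: rounding is applied in the forward pass only;
        # gradients flow through the clipped (unrounded) value.
        return x_c + (x_q - x_c).detach()

# Usage: quantize a tensor inside a model and train end-to-end.
quant = LearnedClipQuant(num_bits=4)
x = torch.randn(8, 16, requires_grad=True)
loss = quant(x).pow(2).mean()
loss.backward()
print(quant.clip.grad)  # the clipping threshold receives a gradient
```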
In this work, a deep learning-based method for log-likelihood ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting.
Deep convolutional neural networks (CNNs) are powerful tools for a wide range of vision tasks, but the enormous amount of memory and compute resources required by CNNs poses a challenge in deploying them on constrained devices.
This limitation is expected to become more stringent as existing knowledge graphs, which are already huge, keep steadily growing in scale.