no code implementations • 4 Mar 2019 • Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins
The Winograd (or Cook-Toom) class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
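For intuition, a minimal NumPy sketch of the 1-D Winograd F(2,3) transform: it computes two outputs of a 3-tap convolution with 4 multiplications instead of 6. The transform matrices follow the standard textbook formulation and are not taken from this paper's implementation.

```python
import numpy as np

# Standard 1-D Winograd F(2,3) transform matrices (textbook values,
# not taken from the paper itself).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G   = np.array([[1.0,  0.0, 0.0],
                [0.5,  0.5, 0.5],
                [0.5, -0.5, 0.5],
                [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap 'valid' correlation over a 4-element input
    tile, using 4 element-wise multiplications instead of 6."""
    U = G @ g        # filter transform (can be pre-computed once per filter)
    V = B_T @ d      # input-tile transform
    return A_T @ (U * V)

d = np.array([1.0, 2.0, 3.0, 4.0])      # input tile
g = np.array([0.5, 1.0, -1.0])          # filter
print(winograd_f23(d, g))
print(np.correlate(d, g, mode='valid'))  # reference: same two values
```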
no code implementations • 13 May 2021 • Lorena Qendro, Sangwon Ha, René de Jong, Partha Maji
Quantized neural networks (NNs) are the de facto standard for efficiently deploying deep learning models on tiny hardware platforms.
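As a rough illustration of the kind of scheme such deployments rely on, a minimal sketch of uniform affine 8-bit quantization; the function names and the single-tensor min/max calibration are illustrative assumptions, not details from the paper.

```python
import numpy as np

def quantize_uint8(x):
    """Uniform affine (asymmetric) quantization of a float tensor to uint8.
    Illustrative only: real deployments calibrate scale/zero-point per layer."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(64).astype(np.float32)
q, s, z = quantize_uint8(w)
print(np.max(np.abs(w - dequantize(q, s, z))))  # error bounded by ~scale/2
```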
no code implementations • 13 Aug 2021 • Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji
In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks.
no code implementations • 11 Nov 2021 • Johanna Rock, Tiago Azevedo, René de Jong, Daniel Ruiz-Muñoz, Partha Maji
Deep neural networks have shown great success in prediction quality, but reliable and robust uncertainty estimation remains a challenge.
1 code implementation • NeurIPS Workshop ICBINB 2021 • Guoxuan Xia, Sangwon Ha, Tiago Azevedo, Partha Maji
We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved with overconfidence.
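To make the notion of "overconfidence" concrete, a tiny sketch: dividing logits by a temperature below 1 sharpens the softmax and raises the top-class confidence without changing the predicted class. The numbers are illustrative and not drawn from the paper's experiments.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
for T in (1.0, 0.5):   # T < 1 sharpens the distribution -> more confident
    print(f"T={T}: top-class confidence {softmax(logits / T).max():.3f}")
```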
1 code implementation • 7 Sep 2020 • Tiago Azevedo, René de Jong, Matthew Mattina, Partha Maji
In this paper, we adapt the well-established YOLOv3 architecture to generate uncertainty estimates by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop), and evaluate it across different levels of dataset shift.
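A minimal PyTorch-style sketch of Monte Carlo Dropout at inference time: dropout layers are kept active and the network is sampled repeatedly, with the spread of the samples serving as an uncertainty proxy. The toy head and all names here are placeholders, not the paper's modified YOLOv3.

```python
import torch
import torch.nn as nn

# Toy prediction head with dropout; a stand-in, not the paper's YOLOv3 variant.
head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                     nn.Dropout(p=0.5), nn.Linear(128, 4))

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and sample the model repeatedly.
    The sample mean is the prediction; the variance is an uncertainty proxy."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                      # re-enable dropout only
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

features = torch.randn(1, 256)             # stand-in for backbone features
mean, var = mc_dropout_predict(head, features)
```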
1 code implementation • 22 Feb 2021 • Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues
Bayesian neural networks (BNNs) are making significant progress in many research areas where decision-making needs to be accompanied by uncertainty estimation.