2 code implementations • ICLR 2020 • Pedro Tabacof, Luca Costabello
We show that popular knowledge graph embedding models are indeed uncalibrated.
Calibration for Link Prediction • Knowledge Graph Embedding +2
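A minimal sketch of what checking and repairing calibration can look like for the entry above, assuming raw link-prediction scores and binary triple labels; the synthetic `scores`/`labels` arrays, the Platt-scaling step, and the ECE helper are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch: checking and recalibrating link-prediction scores.
# `scores` stand in for raw model outputs over candidate triples and
# `labels` mark which triples are true; both are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=2000)                                   # raw (uncalibrated) triple scores
labels = (rng.random(2000) < 1 / (1 + np.exp(-3 * scores))).astype(int)

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |accuracy - confidence| over equal-width probability bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

# Naive probabilities: squash raw scores through a sigmoid (often miscalibrated).
naive_probs = 1 / (1 + np.exp(-scores))

# Platt scaling: fit a logistic regression from scores to labels.
# In practice this is fit on a held-out split; same data here for brevity.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
calibrated_probs = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

print("ECE before:", expected_calibration_error(naive_probs, labels))
print("ECE after: ", expected_calibration_error(calibrated_probs, labels))
```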
1 code implementation • 12 Jun 2018 • George Gondim-Ribeiro, Pedro Tabacof, Eduardo Valle
Adversarial attacks are malicious inputs that derail machine-learning models.
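As a generic illustration of such a malicious input (not the paper's attack, which targets variational autoencoders), a fast-gradient-sign perturbation on a toy classifier might look like the sketch below; the model, input, and `epsilon` budget are placeholders.

```python
# Minimal sketch of an adversarial input: a small perturbation in the direction
# of the loss gradient that pushes a classifier away from the true label (FGSM).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(784, 10)               # stand-in classifier
x = torch.rand(1, 784, requires_grad=True)     # stand-in input
true_label = torch.tensor([3])

# Gradient of the loss with respect to the *input*, not the weights.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.1                                  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```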
1 code implementation • 5 Dec 2016 • Ramon Oliveira, Pedro Tabacof, Eduardo Valle
We compare the following candidate neural network models: Maximum Likelihood, Bayesian Dropout, OSBA, and, for MNIST, the standard variational approximation.
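Of those candidates, the Bayesian Dropout baseline is easy to sketch: keep dropout stochastic at prediction time and read uncertainty off the spread of repeated forward passes. The toy network and sample count below are assumptions, not the paper's setup.

```python
# Minimal sketch of the "Bayesian Dropout" baseline: leave dropout active at
# test time and treat the spread of stochastic forward passes as uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 10),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Average softmax over stochastic passes; return per-class mean and std."""
    model.train()                      # keeps Dropout stochastic at "test" time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.rand(1, 784)                 # stand-in input (e.g. a flattened MNIST digit)
mean, std = mc_dropout_predict(model, x)
print("predictive mean:", mean)
print("per-class uncertainty:", std)
```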
1 code implementation • 1 Dec 2016 • Pedro Tabacof, Julia Tavares, Eduardo Valle
We find that autoencoders are much more robust to the attack than classifiers: while some adversarial examples achieve tolerably small input distortion and reasonable similarity to the target image, there is a quasi-linear trade-off between those two aims.
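A rough sketch of how such an attack and its trade-off can be set up: optimize a perturbation so the encoded input moves toward a target image's latent code, while a weight `c` penalises the input distortion; sweeping `c` traces the distortion-versus-similarity trade-off. The toy encoder and all sizes here are placeholders, not the paper's architecture.

```python
# Minimal sketch of a latent-space attack on an autoencoder: push the latent
# code of a perturbed input toward a target image's code while penalising
# the size of the perturbation with weight c.
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))

x_source = torch.rand(1, 784)           # image we start from
x_target = torch.rand(1, 784)           # image whose latent code we aim for
z_target = encoder(x_target).detach()

c = 0.1                                  # larger c -> smaller distortion, worse similarity
delta = torch.zeros_like(x_source, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    x_adv = (x_source + delta).clamp(0, 1)
    latent_gap = ((encoder(x_adv) - z_target) ** 2).sum()   # similarity to target
    distortion = (delta ** 2).sum()                          # input distortion
    loss = latent_gap + c * distortion
    loss.backward()
    optimizer.step()

print("latent gap:", latent_gap.item(), "distortion:", distortion.item())
```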
2 code implementations • 19 Oct 2015 • Pedro Tabacof, Eduardo Valle
Adversarial examples have raised questions regarding the robustness and security of deep neural networks.