Search Results for author: Azadeh Sadat Mozafari

Found 7 papers, 0 papers with code

A Novel Unsupervised Post-Processing Calibration Method for DNNs with Robustness to Domain Shift

no code implementations 25 Nov 2019 Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Christian Gagné

Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent.

Decision Making

Unsupervised Temperature Scaling: Robust Post-processing Calibration for Domain Shift

no code implementations 25 Sep 2019 Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Christian Gagné

Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent.

Decision Making

Controlling Over-generalization and its Effect on Adversarial Examples Detection and Generation

no code implementations ICLR 2019 Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
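As a rough illustration of the second resource mentioned above, interpolated in-distribution samples can be formed by convexly mixing random pairs of training inputs and assigning them to the extra class. The helper below is a hypothetical NumPy sketch, not the authors' implementation; the mixing range is an assumption:

```python
import numpy as np

def interpolated_samples(x, rng, n, lam_low=0.3, lam_high=0.7):
    """Mix random pairs of in-distribution inputs to build training
    examples for an extra (reject) class. Hypothetical sketch: the
    interpolation-weight range [lam_low, lam_high] is an assumption."""
    i = rng.integers(0, len(x), size=n)
    j = rng.integers(0, len(x), size=n)
    # One mixing weight per sample, broadcast over feature dimensions.
    lam = rng.uniform(lam_low, lam_high, size=(n,) + (1,) * (x.ndim - 1))
    return lam * x[i] + (1 - lam) * x[j]

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))          # toy in-distribution set
extra = interpolated_samples(x, rng, 5)  # labeled as class K+1 in training
```

Such samples are cheap to generate on the fly, which is the computational-efficiency point the abstract makes.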

Attended Temperature Scaling: A Practical Approach for Calibrating Deep Neural Networks

no code implementations 27 Oct 2018 Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Wilson Leão, Steeven Janny, Christian Gagné

Temperature Scaling (TS) is a state-of-the-art measure-based calibration method, combining effectiveness with low time and memory complexity.
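For context, standard Temperature Scaling divides the network's logits by a single scalar T (normally tuned on held-out data by minimizing negative log-likelihood) before the softmax. A minimal NumPy sketch of that rescaling, with illustrative logits of my own choosing:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Rescale logits by a scalar temperature T before the softmax.
    T > 1 softens (de-confidences) the predicted distribution; T < 1
    sharpens it; T = 1 leaves it unchanged."""
    return softmax(logits / T)

# Example: an over-confident 3-class prediction, before and after scaling.
logits = np.array([[4.0, 1.0, 0.5]])
p_raw = temperature_scale(logits, T=1.0)
p_cal = temperature_scale(logits, T=2.0)  # softened probabilities
```

Because T only rescales logits, the arg-max class is unchanged; only the reported confidence moves, which is why TS is a pure post-processing step.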

Autonomous Driving Decision Making +1

Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection

no code implementations 21 Aug 2018 Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.

Diversity regularization in deep ensembles

no code implementations 22 Feb 2018 Changjian Shui, Azadeh Sadat Mozafari, Jonathan Marek, Ihsen Hedhli, Christian Gagné

Calibrating the confidence of supervised learning models is important in a variety of contexts where the certainty of predictions must be reliable.
