Search Results for author: Hossein Azizpour

Found 32 papers, 18 papers with code

Logistic-Normal Likelihoods for Heteroscedastic Label Noise

1 code implementation • 6 Apr 2023 • Erik Englesson, Amir Mehrpanah, Hossein Azizpour

A natural way of estimating heteroscedastic label noise in regression is to model the observed (potentially noisy) target as a sample from a normal distribution, whose parameters can be learned by minimizing the negative log-likelihood.
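The modeling idea above can be sketched in a few lines: a network predicts both a mean and a variance per target, and is trained by minimizing the Gaussian negative log-likelihood. Below is a minimal, framework-free illustration of that loss (with the constant term dropped); it is a sketch of the standard heteroscedastic NLL, not the paper's implementation.

```python
import math

def gaussian_nll(mean, var, target):
    # Negative log-likelihood of target under N(mean, var),
    # dropping the constant 0.5 * log(2 * pi).
    # Larger predicted variance down-weights the squared error,
    # which is how heteroscedastic noise is absorbed.
    return 0.5 * (math.log(var) + (target - mean) ** 2 / var)
```

In practice the network would predict a log-variance to keep the variance positive; here `var` is passed directly for clarity.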


Predicting the wall-shear stress and wall pressure through convolutional neural networks

no code implementations • 1 Mar 2023 • Arivazhagan G. Balasubramanian, Luca Guastoni, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa

At $Re_{\tau}=550$, both FCN and R-Net can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^{+} = 50$ using the velocity-fluctuation fields at $y^{+} = 100$ as input, with about 10% error in the prediction of the streamwise-fluctuation intensity.

On the Lipschitz Constant of Deep Networks and Double Descent

1 code implementation • 28 Jan 2023 • Matteo Gamba, Hossein Azizpour, Mårten Björkman

Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable, falling short of investigating the mechanisms controlling such factors in practice.

Dense FixMatch: a simple semi-supervised learning method for pixel-wise prediction tasks

1 code implementation • 18 Oct 2022 • Miquel Martí i Rabadán, Alessandro Pieropan, Hossein Azizpour, Atsuto Maki

We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks combining pseudo-labeling and consistency regularization via strong data augmentation.

Data Augmentation • Semi-Supervised Semantic Segmentation
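The pseudo-labeling-plus-consistency recipe described above can be sketched generically, applied per sample or per pixel: predictions on a weakly augmented view supply pseudo-labels, and cross-entropy is applied to the strongly augmented view only where the weak prediction is confident. The threshold and names below are illustrative, not taken from the paper.

```python
import numpy as np

def fixmatch_loss(weak_probs, strong_probs, threshold=0.95):
    # Pseudo-label each sample from its weakly augmented view, then
    # apply cross-entropy on the strongly augmented view, masked to
    # samples where the weak-view confidence exceeds the threshold.
    pseudo = weak_probs.argmax(axis=-1)
    mask = weak_probs.max(axis=-1) >= threshold
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float((ce * mask).sum() / max(mask.sum(), 1))
```

For dense prediction tasks, the same computation would run over every pixel of the prediction map rather than over whole images.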

Deep Double Descent via Smooth Interpolation

1 code implementation • 21 Sep 2022 • Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour

The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance, has been recently characterized in terms of the double descent curve for the test error.

PatchDropout: Economizing Vision Transformers Using Patch Dropout

1 code implementation • 10 Aug 2022 • Yue Liu, Christos Matsoukas, Fredrik Strand, Hossein Azizpour, Kevin Smith

This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% in standard natural image datasets such as ImageNet, and those savings only increase with image size.

Image Classification • Medical Image Classification
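A minimal sketch of the core idea, assuming ViT-style patch tokens: randomly keep a subset of tokens during training, which shrinks downstream attention cost roughly quadratically with the number of kept tokens. The function name and keep ratio here are illustrative.

```python
import numpy as np

def patch_dropout(tokens, keep_ratio=0.5, rng=None):
    # tokens: (num_patches, dim). Keep a random subset of patch
    # tokens during training; the rest are simply discarded for
    # this forward pass, saving FLOPs and memory.
    rng = rng or np.random.default_rng()
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    idx = np.sort(rng.choice(n, size=n_keep, replace=False))
    return tokens[idx]
```

In a real ViT, special tokens such as the class token would be kept unconditionally; this sketch omits that detail.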

Towards Self-Supervised Learning of Global and Object-Centric Representations

1 code implementation • 11 Mar 2022 • Federico Baldassarre, Hossein Azizpour

Self-supervision allows learning meaningful representations of natural images, which usually contain one central object.

Data Augmentation • Object Discovery +1

CSAW-M: An Ordinal Classification Dataset for Benchmarking Mammographic Masking of Cancer

2 code implementations • 2 Dec 2021 • Moein Sorkhei, Yue Liu, Hossein Azizpour, Edward Azavedo, Karin Dembrower, Dimitra Ntoula, Athanasios Zouzos, Fredrik Strand, Kevin Smith

Interval and large invasive breast cancers, which are associated with worse prognosis than other cancers, are usually detected at a late stage due to false negative assessments of screening mammograms.


Consistency Regularization Can Improve Robustness to Label Noise

no code implementations • 4 Oct 2021 • Erik Englesson, Hossein Azizpour

Consistency regularization is a commonly-used technique for semi-supervised and self-supervised learning.

Self-Supervised Learning
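As a rough illustration of consistency regularization in general (not the paper's specific loss), one common instance penalizes the divergence between the model's predictions for an original input and an augmented view of it:

```python
import numpy as np

def consistency_loss(p_orig, p_aug, eps=1e-12):
    # KL(p_orig || p_aug): penalizes the model for changing its
    # predicted class distribution under data augmentation.
    p = np.clip(p_orig, eps, 1.0)
    q = np.clip(p_aug, eps, 1.0)
    return float((p * np.log(p / q)).sum(axis=-1).mean())
```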

Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels

1 code implementation • NeurIPS 2021 • Erik Englesson, Hossein Azizpour

Prior works have found it beneficial to combine provably noise-robust loss functions, e.g. mean absolute error (MAE), with standard categorical loss functions, e.g. cross-entropy (CE), to improve their learnability.

Learning with noisy labels
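The Jensen-Shannon divergence underlying the proposed loss can be computed as below; this is a plain-Python sketch of the two-distribution divergence itself, not the paper's generalized multi-distribution version.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence between two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric and bounded, properties
    # that make JS-based losses more tolerant of noisy labels than
    # plain cross-entropy.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Used as a loss, `p` would be the (possibly noisy) one-hot label and `q` the model's predicted distribution.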


no code implementations • ICLR Workshop Learning_to_Learn 2021 • Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic

Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks to facilitate learning new tasks with small amounts of data.

Meta-Learning • Variational Inference

From coarse wall measurements to turbulent velocity fields through deep learning

no code implementations • 12 Mar 2021 • Alejandro Güemes, Hampus Tober, Stefano Discetti, Andrea Ianiro, Beril Sirmacek, Hossein Azizpour, Ricardo Vinuesa

The method is applied both for the resolution enhancement of wall fields and the estimation of wall-parallel velocity fields from coarse wall measurements of shear stress and pressure.

Fluid Dynamics

Decoupling Inherent Risk and Early Cancer Signs in Image-based Breast Cancer Risk Models

1 code implementation • 11 Jul 2020 • Yue Liu, Hossein Azizpour, Fredrik Strand, Kevin Smith

With this in mind, we trained networks using three different criteria to select the positive training data (i.e., images from patients who will develop cancer): an inherent risk model trained on images with no visible signs of cancer, a cancer-signs model trained on images containing cancer or early signs of cancer, and a conflated model trained on all images from patients with a cancer diagnosis.

Decision Making

Recurrent neural networks and Koopman-based frameworks for temporal predictions in a low-order model of turbulence

no code implementations • 1 May 2020 • Hamidreza Eivazi, Luca Guastoni, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa

We also observe that using a loss function based only on the instantaneous predictions of the chaotic system can lead to suboptimal reproductions in terms of long-term statistics.

Model Selection

Hyperplane Arrangements of Trained ConvNets Are Biased

1 code implementation • 17 Mar 2020 • Matteo Gamba, Stefan Carlsson, Hossein Azizpour, Mårten Björkman

We investigate the geometric properties of the functions learned by trained ConvNets in the preactivation space of their convolutional layers, by performing an empirical study of hyperplane arrangements induced by a convolutional layer.

On the use of recurrent neural networks for predictions of turbulent flows

no code implementations • 4 Feb 2020 • Luca Guastoni, Prem A. Srinivasan, Hossein Azizpour, Philipp Schlatter, Ricardo Vinuesa

We also observe that using a loss function based only on the instantaneous predictions of the flow may not lead to the best predictions in terms of turbulence statistics, and it is necessary to define a stopping criterion based on the computed statistics.

Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

no code implementations • 12 Jun 2019 • Erik Englesson, Hossein Azizpour

In this work we aim to obtain computationally-efficient uncertainty estimates with deep networks.

Knowledge Distillation

Explainability Techniques for Graph Convolutional Networks

2 code implementations • 31 May 2019 • Federico Baldassarre, Hossein Azizpour

Graph Networks are used to make decisions in potentially complex scenarios but it is usually not obvious how or why they made them.

Bayesian Uncertainty Estimation for Batch Normalized Deep Networks

3 code implementations • 18 Feb 2018 • Mattias Teye, Hossein Azizpour, Kevin Smith

We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models.
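The practical consequence (Monte Carlo uncertainty estimates from repeated stochastic forward passes, e.g. a network whose batch-norm statistics remain stochastic at test time) can be sketched generically. Here `stochastic_forward` is a hypothetical callable standing in for such a network; this is not the paper's implementation.

```python
import numpy as np

def mc_predict(stochastic_forward, x, n_samples=50):
    # Monte Carlo estimate of the predictive mean and variance:
    # run the stochastic model repeatedly on the same input and
    # summarize the spread of its predictions.
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)
```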

Spotlight the Negatives: A Generalized Discriminative Latent Model

no code implementations • 8 Jul 2015 • Hossein Azizpour, Mostafa Arefiyan, Sobhan Naderi Parizi, Stefan Carlsson

Discriminative latent variable models (LVM) are frequently applied to various visual recognition tasks.

Persistent Evidence of Local Image Properties in Generic ConvNets

no code implementations • 24 Nov 2014 • Ali Sharif Razavian, Hossein Azizpour, Atsuto Maki, Josephine Sullivan, Carl Henrik Ek, Stefan Carlsson

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class.

General Classification

Factors of Transferability for a Generic ConvNet Representation

no code implementations • 22 Jun 2014 • Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson

In the common scenario, a ConvNet is trained on a large labeled dataset (source), and the feed-forward unit activations of the trained network, at a certain layer, are used as a generic representation of an input image for a task with a relatively smaller training set (target).

Dimensionality Reduction • Representation Learning

Self-tuned Visual Subclass Learning with Shared Samples: An Incremental Approach

no code implementations • 22 May 2014 • Hossein Azizpour, Stefan Carlsson

Finally, we show that state-of-the-art object detection methods (e.g. DPM) are unable to use the tails of this distribution, comprising 50% of the training samples.

Clustering • General Classification +2

CNN Features off-the-shelf: an Astounding Baseline for Recognition

4 code implementations • 23 Mar 2014 • Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, Stefan Carlsson

We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13.

General Classification • Image Classification +3
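The off-the-shelf recipe this paper popularized (frozen pretrained ConvNet features fed to a simple classifier) can be sketched with a stand-in classifier. A nearest-centroid rule replaces the linear SVMs typically used; names and data shapes below are illustrative.

```python
import numpy as np

def fit_centroids(features, labels):
    # Off-the-shelf pipeline: features come from a frozen pretrained
    # network; a simple classifier (here, class centroids) is fit
    # on top of them for the target task.
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, x):
    # Assign x to the class whose centroid is nearest in feature space.
    return min(centroids, key=lambda c: float(np.linalg.norm(x - centroids[c])))
```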
