1 code implementation • ICCV 2023 • Marc Botet Colomer, Pier Luigi Dovesi, Theodoros Panagiotakopoulos, Joao Frederico Carvalho, Linus Härenstam-Nielsen, Hossein Azizpour, Hedvig Kjellström, Daniel Cremers, Matteo Poggi
The goal of Online Domain Adaptation for semantic segmentation is to handle unforeseeable domain changes that occur during deployment, like sudden weather events.
1 code implementation • 6 Apr 2023 • Erik Englesson, Amir Mehrpanah, Hossein Azizpour
A natural way of estimating heteroscedastic label noise in regression is to model the observed (potentially noisy) target as a sample from a normal distribution, whose parameters can be learned by minimizing the negative log-likelihood.
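A minimal sketch of this formulation in PyTorch, assuming a two-headed network that predicts the mean and log-variance of the target (the architecture and names here are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts a per-input Gaussian: separate mean and log-variance heads."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # log-variance for stability

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Negative log-likelihood of y under N(mean, exp(logvar)), up to a constant.
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

model = HeteroscedasticRegressor(in_dim=10)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
```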
no code implementations • 1 Mar 2023 • Arivazhagan G. Balasubramanian, Luca Guastoni, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa
At $Re_{\tau}=550$, both FCN and R-Net can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^{+} = 50$ using the velocity-fluctuation fields at $y^{+} = 100$ as input, with about 10% error in the prediction of the streamwise-fluctuation intensity.
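As a hedged illustration of such an inner-to-outer plane mapping (layer sizes, field resolution, and channel layout are invented here, not taken from the paper):

```python
import torch
import torch.nn as nn

# Toy fully convolutional network mapping the three velocity-fluctuation
# components on one wall-parallel plane to those on another plane.
fcn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

plane_y100 = torch.randn(8, 3, 64, 64)   # fake input fields at y+ = 100
pred_y50 = fcn(plane_y100)               # predicted fields at y+ = 50
```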
1 code implementation • 28 Jan 2023 • Matteo Gamba, Hossein Azizpour, Mårten Björkman
Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable, falling short of investigating the mechanisms controlling such factors in practice.
no code implementations • 7 Dec 2022 • Ritu Yadav, Andrea Nascetti, Hossein Azizpour, Yifang Ban
Our proposed change detection (CD) model is evaluated on flood detection data.
1 code implementation • 18 Oct 2022 • Miquel Martí i Rabadán, Alessandro Pieropan, Hossein Azizpour, Atsuto Maki
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks, combining pseudo-labeling and consistency regularization via strong data augmentation.
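A simplified sketch of the pseudo-labeling-plus-consistency idea for dense prediction, assuming a segmentation model that outputs per-pixel logits (the actual method also aligns the geometric parts of the strong augmentation, which is omitted here):

```python
import torch
import torch.nn.functional as F

def dense_fixmatch_loss(model, x_weak, x_strong, threshold=0.95):
    """Dense pseudo-labeling: the weakly augmented view provides per-pixel
    targets for the strongly augmented view, masked by confidence."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)   # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)           # per-pixel pseudo-labels
        mask = conf >= threshold                  # keep only confident pixels
    logits = model(x_strong)
    loss = F.cross_entropy(logits, pseudo, reduction="none")  # (B, H, W)
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```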
1 code implementation • 21 Sep 2022 • Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour
The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance, has been recently characterized in terms of the double descent curve for the test error.
1 code implementation • 10 Aug 2022 • Yue Liu, Christos Matsoukas, Fredrik Strand, Hossein Azizpour, Kevin Smith
This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% in standard natural image datasets such as ImageNet, and those savings only increase with image size.
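The core idea can be sketched in a few lines: randomly keep a subset of patch tokens before the transformer encoder (CLS-token handling and other details of the actual method are omitted):

```python
import torch

def patch_dropout(tokens, keep_ratio=0.5):
    """Randomly keep a subset of patch tokens, independently per sample.
    tokens: (B, N, D) patch embeddings, positional encoding already added."""
    B, N, D = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    # A random permutation of patch indices for each sample.
    idx = torch.rand(B, N).argsort(dim=1)[:, :n_keep]        # (B, n_keep)
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))

x = torch.randn(4, 196, 768)      # e.g., 14x14 patches from a 224px image
x_kept = patch_dropout(x, 0.5)    # (4, 98, 768): ~50% fewer tokens to process
```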
1 code implementation • 11 Mar 2022 • Federico Baldassarre, Hossein Azizpour
Self-supervision allows learning meaningful representations of natural images, which usually contain one central object.
1 code implementation • 23 Feb 2022 • Matteo Gamba, Adrian Chmielewski-Anders, Josephine Sullivan, Hossein Azizpour, Mårten Björkman
The number of linear regions has been studied as a proxy of complexity for ReLU networks.
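As an illustration of what such a proxy measures, one can count the distinct ReLU activation patterns, each corresponding to one linear region, hit by random inputs to a small network (a toy sketch, not the paper's estimator):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 16), nn.ReLU())

# Each input turns a subset of ReLU units on; every distinct on/off
# pattern corresponds to one linear region of the piecewise-linear function.
samples = torch.rand(10000, 2) * 2 - 1       # uniform on [-1, 1]^2
patterns = []
h = samples
for layer in net:
    h = layer(h)
    if isinstance(layer, nn.ReLU):
        patterns.append(h > 0)
pattern_matrix = torch.cat(patterns, dim=1).int()   # (10000, 32)
n_regions = pattern_matrix.unique(dim=0).shape[0]
print(f"{n_regions} distinct linear regions hit by 10k random inputs")
```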
1 code implementation • 3 Jan 2022 • Miquel Martí i Rabadán, Sebastian Bujwid, Alessandro Pieropan, Hossein Azizpour, Atsuto Maki
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches.
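To make the over-sampling choice concrete, here is a toy sketch contrasting a fixed labeled share per batch with sampling in proportion to the dataset composition (sizes and names are invented):

```python
import random

labeled = list(range(1000))            # indices of labeled examples (toy sizes)
unlabeled = list(range(1000, 60000))   # indices of unlabeled examples
BATCH = 64

def oversampled_batch(labeled_share=0.5):
    """Common practice: a fixed share of every batch is labeled,
    so a small labeled set is revisited far more often per epoch."""
    n_lab = int(BATCH * labeled_share)
    return (random.choices(labeled, k=n_lab),
            random.choices(unlabeled, k=BATCH - n_lab))

def proportional_batch():
    """Alternative: draw from the pooled data, so labeled examples
    appear only in proportion to their share of the dataset."""
    pool = random.choices(labeled + unlabeled, k=BATCH)
    lab = [i for i in pool if i < len(labeled)]
    return lab, [i for i in pool if i >= len(labeled)]
```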
2 code implementations • 2 Dec 2021 • Moein Sorkhei, Yue Liu, Hossein Azizpour, Edward Azavedo, Karin Dembrower, Dimitra Ntoula, Athanasios Zouzos, Fredrik Strand, Kevin Smith
Interval and large invasive breast cancers, which are associated with worse prognosis than other cancers, are usually detected at a late stage due to false negative assessments of screening mammograms.
no code implementations • 4 Oct 2021 • Erik Englesson, Hossein Azizpour
Consistency regularization is a commonly-used technique for semi-supervised and self-supervised learning.
1 code implementation • NeurIPS 2021 • Erik Englesson, Hossein Azizpour
Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross-entropy (CE), to improve their learnability.
Ranked #17 on Image Classification on mini WebVision 1.0
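A minimal sketch of the CE-plus-MAE combination that the abstract above refers to (a simple weighted sum, as in the cited prior works; this is not the loss this paper itself proposes):

```python
import torch
import torch.nn.functional as F

def ce_plus_mae(logits, targets, alpha=0.5):
    """Weighted sum of cross-entropy (fast learning) and mean absolute
    error between predicted and one-hot distributions (noise robustness).
    alpha trades off the two terms."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.shape[1]).float()
    ce = F.cross_entropy(logits, targets)
    mae = (probs - one_hot).abs().sum(dim=1).mean()
    return alpha * ce + (1 - alpha) * mae
```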
no code implementations • ICLR Workshop Learning_to_Learn 2021 • Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic
Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks to facilitate learning new tasks with small amounts of data.
no code implementations • 12 Mar 2021 • Alejandro Güemes, Hampus Tober, Stefano Discetti, Andrea Ianiro, Beril Sirmacek, Hossein Azizpour, Ricardo Vinuesa
The method is applied both for the resolution enhancement of wall fields and the estimation of wall-parallel velocity fields from coarse wall measurements of shear stress and pressure.
1 code implementation • 11 Jul 2020 • Yue Liu, Hossein Azizpour, Fredrik Strand, Kevin Smith
With this in mind, we trained networks using three different criteria to select the positive training data (i.e., images from patients who will develop cancer): an inherent risk model trained on images with no visible signs of cancer, a cancer signs model trained on images containing cancer or early signs of cancer, and a conflated model trained on all images from patients with a cancer diagnosis.
1 code implementation • ECCV 2020 • Federico Baldassarre, Kevin Smith, Josephine Sullivan, Hossein Azizpour
Visual relationship detection is fundamental for holistic image understanding.
no code implementations • 1 May 2020 • Hamidreza Eivazi, Luca Guastoni, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa
We also observe that using a loss function based only on the instantaneous predictions of the chaotic system can lead to suboptimal reproductions in terms of long-term statistics.
1 code implementation • 17 Mar 2020 • Matteo Gamba, Stefan Carlsson, Hossein Azizpour, Mårten Björkman
We investigate the geometric properties of the functions learned by trained ConvNets in the preactivation space of their convolutional layers, by performing an empirical study of hyperplane arrangements induced by a convolutional layer.
no code implementations • 4 Feb 2020 • Luca Guastoni, Prem A. Srinivasan, Hossein Azizpour, Philipp Schlatter, Ricardo Vinuesa
We also observe that using a loss function based only on the instantaneous predictions of the flow may not lead to the best predictions in terms of turbulence statistics, and it is necessary to define a stopping criterion based on the computed statistics.
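One hedged way to read this in code: train on an instantaneous per-snapshot loss, but monitor a statistics-based quantity for the stopping criterion (the exact statistics used in the paper may differ):

```python
import torch

def instantaneous_mse(pred, target):
    # Standard training loss: matches individual snapshots only.
    return ((pred - target) ** 2).mean()

def statistics_gap(pred, target):
    """Early-stopping monitor: relative error of the RMS fluctuation
    intensity over a batch of snapshots (dim 0), a simple stand-in for
    the turbulence statistics the abstract refers to."""
    rms_pred, rms_true = pred.std(dim=0), target.std(dim=0)
    return ((rms_pred - rms_true).abs() / rms_true.clamp(min=1e-8)).mean()
```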
1 code implementation • 25 Sep 2019 • Federico Baldassarre, David Menéndez Hurtado, Arne Elofsson, Hossein Azizpour
Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.
no code implementations • 12 Jun 2019 • Erik Englesson, Hossein Azizpour
In this work we aim to obtain computationally-efficient uncertainty estimates with deep networks.
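One generic route to evaluation-time efficiency is to distill an ensemble's averaged predictive distribution into a single student network; the sketch below shows plain distillation and is not this paper's improved variant:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, ensemble_probs, T=2.0):
    """KL divergence from the ensemble's averaged predictive distribution
    to the temperature-softened student, so one network approximates the
    ensemble's uncertainty at a single forward pass's cost."""
    log_q = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_q, ensemble_probs, reduction="batchmean") * T * T
```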
2 code implementations • 31 May 2019 • Federico Baldassarre, Hossein Azizpour
Graph Networks are used to make decisions in potentially complex scenarios, but it is usually not obvious how or why those decisions were made.
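As one example of the technique families such work examines, gradient-based saliency for a toy graph network can be computed in a few lines (the model and data here are invented for illustration):

```python
import torch
import torch.nn as nn

class TinyGraphNet(nn.Module):
    """Minimal graph network: one round of neighbor averaging + linear head."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, n_classes)

    def forward(self, x, adj):
        h = adj @ x                       # aggregate neighbor features
        return self.lin(h).mean(dim=0)    # graph-level logits

torch.manual_seed(0)
x = torch.randn(5, 8, requires_grad=True)      # 5 nodes, 8 features each
adj = torch.eye(5) + torch.rand(5, 5).round()  # random edges + self-loops
adj = adj / adj.sum(dim=1, keepdim=True)       # row-normalize

model = TinyGraphNet(8, 3)
logits = model(x, adj)
logits[logits.argmax()].backward()             # gradient of the top class
node_saliency = x.grad.abs().sum(dim=1)        # per-node importance scores
```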
no code implementations • 30 Apr 2019 • Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Langhans, Max Tegmark, Francesco Fuso Nerini
We find that AI can support the achievement of 128 targets across all SDGs, but it may also inhibit 58 targets.
no code implementations • 26 Nov 2018 • Sebastian Bujwid, Miquel Martí, Hossein Azizpour, Alessandro Pieropan
In this work, we propose a novel method for constraining the output space of unpaired image-to-image translation.
3 code implementations • 18 Feb 2018 • Mattias Teye, Hossein Azizpour, Kevin Smith
We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models.
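This equivalence suggests a simple recipe, often called Monte Carlo batch normalization: keep BN in training mode at test time so each forward pass normalizes with a different random training batch. A hedged sketch (batch handling is simplified):

```python
import torch

def mcbn_predict(model, x, train_loader, n_samples=20):
    """Monte Carlo prediction via batch normalization: in training mode,
    each pass normalizes with the statistics of a different random
    training batch, yielding a predictive distribution. Sketch only:
    this also updates BN running stats, which a careful implementation
    would snapshot and restore."""
    model.train()                                   # BN uses batch statistics
    preds = []
    with torch.no_grad():
        for (xb, _), _ in zip(train_loader, range(n_samples)):
            batch = torch.cat([xb, x])              # normalize x alongside xb
            preds.append(model(batch)[len(xb):])
    model.eval()
    stacked = torch.stack(preds)                    # (n_samples, B, ...)
    return stacked.mean(dim=0), stacked.var(dim=0)  # predictive mean/variance
```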
no code implementations • 8 Jul 2015 • Hossein Azizpour, Mostafa Arefiyan, Sobhan Naderi Parizi, Stefan Carlsson
Discriminative latent variable models (LVM) are frequently applied to various visual recognition tasks.
no code implementations • 24 Nov 2014 • Ali Sharif Razavian, Hossein Azizpour, Atsuto Maki, Josephine Sullivan, Carl Henrik Ek, Stefan Carlsson
Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class.
no code implementations • 22 Jun 2014 • Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson
In the common scenario, a ConvNet is trained on a large labeled dataset (source), and the feed-forward activations of the trained network at a certain layer are used as a generic representation of an input image for a task with a relatively smaller training set (target).
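A generic sketch of this off-the-shelf feature extraction, using a modern torchvision backbone purely for convenience (the original work used earlier architectures):

```python
import torch
import torchvision.models as models

# Use activations from a network pre-trained on a large source dataset
# as an off-the-shelf representation for a smaller target task.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()         # drop the source classifier head
backbone.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)  # stand-in for target-task images
    features = backbone(images)           # (4, 2048) generic representations
# `features` can now be fed to a simple classifier (e.g., linear or SVM)
# trained on the smaller target dataset.
```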
no code implementations • 22 May 2014 • Hossein Azizpour, Stefan Carlsson
Finally, we show that state-of-the-art object detection methods (e.g., DPM) are unable to use the tails of this distribution, which comprise 50% of the training samples.
4 code implementations • 23 Mar 2014 • Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, Stefan Carlsson
We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13.