Search Results for author: Amir Najafi

Found 9 papers, 2 papers with code

Out-Of-Domain Unlabeled Data Improves Generalization

no code implementations 29 Sep 2023 Amir Hossein Saberi, Amir Najafi, Alireza Heidari, Mohammad Hosein Movasaghinia, Abolfazl Motahari, Babak H. Khalaj

From a theoretical standpoint, we apply our framework to the classification problem of a mixture of two Gaussians in $\mathbb{R}^d$, where, in addition to the $m$ independent and labeled samples from the true distribution, a set of $n$ (usually with $n\gg m$) out-of-domain, unlabeled samples is given as well.
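A minimal NumPy sketch of this data model; the class mean, domain shift, and sample sizes below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 10, 50, 5000            # dimension, labeled in-domain, unlabeled out-of-domain (n >> m)
mu = np.ones(d)                   # illustrative class mean (assumption)

# m labeled samples from the true two-Gaussian mixture: y in {-1, +1}, x ~ N(y * mu, I)
y = rng.choice([-1, 1], size=m)
X_labeled = y[:, None] * mu + rng.standard_normal((m, d))

# n unlabeled samples from a shifted, out-of-domain mixture; labels exist but are not observed
shift = 0.5 * rng.standard_normal(d)        # illustrative domain shift (assumption)
y_hidden = rng.choice([-1, 1], size=n)
X_unlabeled = y_hidden[:, None] * mu + shift + rng.standard_normal((n, d))
```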

Sample Complexity Bounds for Learning High-dimensional Simplices in Noisy Regimes

no code implementations 9 Sep 2022 Amir Hossein Saberi, Amir Najafi, Seyed Abolfazl Motahari, Babak H. Khalaj

Also, we theoretically show that in order to achieve this bound, it is sufficient to have $n\ge\left(K^2/\varepsilon^2\right)e^{\Omega\left(K/\mathrm{SNR}^2\right)}$ samples, where $\mathrm{SNR}$ stands for the signal-to-noise ratio.

Density Estimation, Vocal Bursts Intensity Prediction
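A hedged, worked evaluation of the stated bound, treating the constant hidden inside the $\Omega(\cdot)$ as 1 (an arbitrary illustrative choice) and plugging in example values of $K$, $\varepsilon$, and $\mathrm{SNR}$:

```python
import numpy as np

def sufficient_samples(K, eps, snr, c=1.0):
    """Evaluate (K^2 / eps^2) * exp(c * K / snr^2).

    c stands in for the unspecified constant inside the Omega(.) notation,
    so the returned number only illustrates how the bound scales.
    """
    return (K ** 2 / eps ** 2) * np.exp(c * K / snr ** 2)

print(f"{sufficient_samples(K=10, eps=0.1, snr=2.0):.2e}")   # ~1.22e+05 samples
```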

Distributed Sparse Feature Selection in Communication-Restricted Networks

no code implementations 2 Nov 2021 Hanie Barghi, Amir Najafi, Seyed Abolfazl Motahari

This paper proposes and theoretically analyzes a new distributed scheme for sparse linear regression and feature selection.

Feature Selection
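For intuition only, a generic local-Lasso-plus-majority-vote sketch of distributed feature selection; the actual scheme, communication model, and guarantees studied in the paper may differ:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k, nodes, n_local = 100, 5, 4, 60
beta = np.zeros(d)
beta[:k] = 1.0                                   # true sparse coefficient vector

votes = np.zeros(d)
for _ in range(nodes):
    X = rng.standard_normal((n_local, d))        # each node only sees its local data
    y = X @ beta + 0.1 * rng.standard_normal(n_local)
    support = Lasso(alpha=0.1).fit(X, y).coef_ != 0
    votes += support                             # node transmits only its selected support

selected = np.flatnonzero(votes >= nodes / 2)    # majority vote at the fusion center
print(selected)                                  # ideally the first k indices
```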

Regularizing Recurrent Neural Networks via Sequence Mixup

no code implementations 27 Nov 2020 Armin Karamzade, Amir Najafi, Seyed Abolfazl Motahari

In this paper, we extend a class of celebrated regularization techniques originally proposed for feed-forward neural networks, namely Input Mixup (Zhang et al., 2017) and Manifold Mixup (Verma et al., 2018), to the realm of Recurrent Neural Networks (RNNs).

Named Entity Recognition +1
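As a point of reference, a minimal NumPy sketch of Input Mixup (Zhang et al., 2017) on a batch; how the paper carries this over to recurrent sequences is not shown here, and alpha=0.2 is just a common illustrative value:

```python
import numpy as np

def input_mixup(x, y_onehot, alpha=0.2, rng=None):
    """Convex-combine a batch with a shuffled copy of itself (Input Mixup)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing coefficient ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```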

Robustness to Adversarial Perturbations in Learning from Incomplete Data

no code implementations NeurIPS 2019 Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato

What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?

Manifold Mixup: Learning Better Representations by Interpolating Hidden States

1 code implementation ICLR 2019 Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, Yoshua Bengio

Because the hidden states are learned, this has the important effect of encouraging the hidden states for a class to be concentrated in such a way that interpolations, whether within the same class or between two different classes, do not intersect the real data points from other classes.
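A minimal NumPy sketch of the idea on a toy ReLU network: the hidden states (and labels) of a batch are interpolated at a randomly chosen layer. The network, layer choice mechanism, and alpha=2.0 are illustrative assumptions rather than the paper's exact setup:

```python
import numpy as np

def manifold_mixup_forward(x, y_onehot, weights, alpha=2.0, rng=None):
    """Forward pass that mixes hidden states and labels at one random layer."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mix_at = rng.integers(0, len(weights))       # choosing layer 0 reduces to Input Mixup
    perm = rng.permutation(len(x))

    h, y_mix = x, y_onehot
    for k, W in enumerate(weights):
        if k == mix_at:                          # interpolate the current representations
            h = lam * h + (1 - lam) * h[perm]
            y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        h = np.maximum(h @ W, 0.0)               # ReLU layer
    return h, y_mix
```

The mixed labels `y_mix` would then be used in place of the original targets when computing the training loss.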

On Statistical Learning of Simplices: Unmixing Problem Revisited

no code implementations 18 Oct 2018 Amir Najafi, Saeed Ilchi, Amir H. Saberi, Seyed Abolfazl Motahari, Babak H. Khalaj, Hamid R. Rabiee

We study the sample complexity of learning a high-dimensional simplex from a set of points uniformly sampled from its interior.
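A short NumPy sketch of this sampling model: points uniform in a simplex can be generated with Dirichlet(1, ..., 1) barycentric weights over its vertices. The vertices and sizes below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 5, 3, 1000
V = rng.standard_normal((K + 1, d))              # unknown vertices the learner must recover
W = rng.dirichlet(np.ones(K + 1), size=n)        # Dirichlet(1,...,1) -> uniform barycentric weights
X = W @ V                                        # n points uniform on the simplex's interior
```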

Manifold Mixup: Better Representations by Interpolating Hidden States

12 code implementations ICLR 2019 Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio

Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples.

Image Classification

Reliable Clustering of Bernoulli Mixture Models

no code implementations 5 Oct 2017 Amir Najafi, Abolfazl Motahari, Hamid R. Rabiee

A Bernoulli Mixture Model (BMM) is a finite mixture of random binary vectors with independent dimensions.

Clustering
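A minimal sketch of sampling from a BMM under this definition: draw a latent component, then draw each binary coordinate independently from that component's Bernoulli parameters. The mixing weights and parameter matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])                        # mixing weights
P = np.array([[0.9, 0.1, 0.8, 0.2],              # success probabilities, component 0
              [0.2, 0.8, 0.1, 0.9]])             # success probabilities, component 1

z = rng.choice(len(pi), size=500, p=pi)          # latent (unobserved) component labels
X = (rng.random((500, P.shape[1])) < P[z]).astype(int)   # observed binary vectors
```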
