no code implementations • 23 Jan 2024 • Shih-Han Chou, Matthew Kowal, Yasmin Niknam, Diana Moyano, Shayaan Mehdi, Richard Pito, Cheng Zhang, Ian Knopke, Sedef Akinli Kocak, Leonid Sigal, Yalda Mohsenzadeh
Towards designing this ability in algorithms, we present a large-scale analysis of an in-house dataset collected by the Reuters News Agency, the Reuters Video-Language News (ReutersViLNews) dataset, which focuses on high-level video-language understanding with an emphasis on long-form news.
1 code implementation • 2 Nov 2023 • Rouzbeh Meshkinnejad, Jie Mei, Daniel Lizotte, Yalda Mohsenzadeh
Contrastive representation learning has emerged as a promising technique for continual learning as it can learn representations that are robust to catastrophic forgetting and generalize well to unseen future tasks.
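The snippet only names the technique, so as a rough illustration, here is a minimal NumPy sketch of a standard contrastive objective (NT-Xent, as used in SimCLR-style methods) — an assumption about the family of loss involved, not the paper's exact formulation:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent contrastive loss (sketch). z_a, z_b: (N, D) embeddings of
    two augmented views; row i of z_a and row i of z_b are a positive pair."""
    z = np.concatenate([z_a, z_b], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z_a.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # index of the positive partner for each of the 2N rows
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Pulling positive pairs together while pushing all other pairs apart is what yields representations that transfer across tasks, which is why such losses are attractive for continual learning.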
no code implementations • 3 Jul 2022 • Vahid Reza Khazaie, Anthony Wong, John Taylor Jewell, Yalda Mohsenzadeh
The Adversarial Distorter is a convolutional encoder that learns to produce effective perturbations, while the autoencoder is a deep convolutional neural network that aims to reconstruct the images from the perturbed latent feature space.
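A toy sketch of that setup, with hypothetical linear/tanh maps standing in for the convolutional networks (the two-player objective — the distorter maximizes reconstruction error, the autoencoder minimizes it — is the point, not the architecture):

```python
import numpy as np

def encode(x, W_enc):
    return np.tanh(x @ W_enc)            # latent features (toy encoder)

def decode(z, W_dec):
    return z @ W_dec                     # reconstruction (toy decoder)

def distort(z, W_adv, eps=0.1):
    # adversarial distorter: produces a bounded perturbation of the latents
    return z + eps * np.tanh(z @ W_adv)

def recon_error(x, W_enc, W_dec, W_adv=None):
    """Reconstruction MSE; the distorter is trained to raise this value,
    the autoencoder to lower it (adversarial game, sketched here only as
    the objective, without the training loop)."""
    z = encode(x, W_enc)
    if W_adv is not None:
        z = distort(z, W_adv)            # perturb the latent feature space
    return np.mean((decode(z, W_dec) - x) ** 2)
```

At test time, inputs the autoencoder cannot reconstruct well despite this robust training are flagged as anomalous.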
no code implementations • 3 Jul 2022 • Vahid Reza Khazaie, Anthony Wong, Yalda Mohsenzadeh
Therefore, training a regressor on these augmented samples will result in more separable distributions of labels for normal and real anomalous data points.
no code implementations • 24 Feb 2022 • Mohammad Younesi, Yalda Mohsenzadeh
In our proposed method, we first find a hyperplane in the latent space of StyleGAN to separate high- and low-memorability images.
no code implementations • 13 Oct 2021 • Haider Al-Tahan, Yalda Mohsenzadeh
Hence, we extensively investigate compositions of temporal augmentations suitable for learning audiovisual representations; we find that lossy spatio-temporal transformations which do not corrupt the temporal coherency of videos are the most effective.
no code implementations • 29 Sep 2021 • Mohammad Younesi, Yalda Mohsenzadeh
In our proposed method, we first find a hyperplane in the latent space of StyleGAN to separate high- and low-memorability images.
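One common way to find such a separating hyperplane is to fit a linear classifier on latent codes labeled by memorability; the hyperplane normal then gives an editing direction. A sketch under that assumption (plain logistic regression in a stand-in latent space; the paper's actual fitting procedure may differ):

```python
import numpy as np

def fit_hyperplane(latents, labels, lr=0.1, steps=500):
    """Fit a hyperplane w.x + b = 0 separating latent codes via logistic
    regression; labels: 1 = high-memorability, 0 = low-memorability."""
    n, d = latents.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(latents @ w + b)))  # sigmoid
        g = p - labels                                 # gradient of BCE
        w -= lr * latents.T @ g / n
        b -= lr * g.mean()
    return w, b

def edit_memorability(latent, w, alpha):
    """Move a latent code along the hyperplane normal: alpha > 0 pushes
    toward the high-memorability side, alpha < 0 away from it."""
    return latent + alpha * w / np.linalg.norm(w)
```

Feeding the edited latent back through the generator then produces a version of the image with modified predicted memorability.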
1 code implementation • 27 Mar 2021 • John Taylor Jewell, Vahid Reza Khazaie, Yalda Mohsenzadeh
In particular, context autoencoders have been successful in the novelty detection task because of the more effective representations they learn by reconstructing original images from randomly masked images.
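The masking step behind context autoencoders is simple to sketch: zero out random patches, train the network to reconstruct the original, and use reconstruction error as the novelty score. A minimal NumPy version of the masking and scoring (the reconstruction network itself is omitted):

```python
import numpy as np

def random_mask(image, patch=8, drop_frac=0.5, rng=None):
    """Randomly zero out square patches; a context autoencoder is trained
    to reconstruct the original image from this masked input."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    masked = image.copy()
    mask = np.ones((h, w), dtype=bool)   # True where pixels are kept
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < drop_frac:
                masked[i:i + patch, j:j + patch] = 0
                mask[i:i + patch, j:j + patch] = False
    return masked, mask

def novelty_score(image, reconstruction):
    # high reconstruction error signals a novel (out-of-distribution) input
    return np.mean((image - reconstruction) ** 2)
```

Because the model only ever learns to inpaint in-distribution content, novel inputs yield poor reconstructions and hence high scores.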
Ranked #28 on Anomaly Detection on One-class CIFAR-10
1 code implementation • 23 Oct 2020 • Vahid Reza Khazaie, Nicky Bayat, Yalda Mohsenzadeh
While they can reconstruct visually appealing images, the identity-related information is not preserved.
no code implementations • 19 Oct 2020 • Haider Al-Tahan, Yalda Mohsenzadeh
We illustrate that by combining all these methods and using substantially less labeled data, our framework (CLAR) achieves significant improvements in prediction performance compared to the supervised approach.
no code implementations • 16 Oct 2020 • Andrew Keyes, Nicky Bayat, Vahid Reza Khazaie, Yalda Mohsenzadeh
Through our deep-neural-network-based method of training on real and synthesized audio, we are able to predict a latent vector that corresponds to a reasonable reconstruction of the real audio.
1 code implementation • 11 Sep 2020 • Nicky Bayat, Vahid Reza Khazaie, Yalda Mohsenzadeh
The vast majority of studies on latent vector recovery perform well only on generated images; we argue that our method can be used to determine a mapping between real human faces and latent-space vectors that contain most of the important face-style details.
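Latent vector recovery is commonly posed as minimizing the distance between the generator's output and a target image. A toy sketch with a linear generator and plain gradient descent (real methods backpropagate through StyleGAN or train a dedicated encoder; the names here are illustrative):

```python
import numpy as np

def recover_latent(G, target, z0, lr=0.1, steps=500):
    """Recover a latent vector z minimizing ||G @ z - target||^2 for a toy
    linear generator G (stand-in for backprop through a real GAN)."""
    z = z0.copy()
    for _ in range(steps):
        z -= lr * 2 * G.T @ (G @ z - target)  # gradient of the squared error
    return z
```

The difficulty the snippet alludes to is that for real (non-generated) faces no exact preimage exists, so the recovered latent must still preserve identity-relevant detail rather than merely minimizing pixel error.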
no code implementations • 14 May 2019 • Radoslaw Martin Cichy, Gemma Roig, Alex Andonian, Kshitij Dwivedi, Benjamin Lahner, Alex Lascelles, Yalda Mohsenzadeh, Kandan Ramakrishnan, Aude Oliva
Recently, researchers of natural intelligence have begun using those AI models to explore how the brain performs such tasks.
no code implementations • 20 Jun 2017 • Erfan Zangeneh, Mohammad Rahmati, Yalda Mohsenzadeh
We propose a novel coupled-mappings method for low-resolution face recognition using deep convolutional neural networks (DCNNs).
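The coupled-mappings idea can be sketched as two branches projecting low- and high-resolution faces into a shared embedding space, trained so matched pairs lie close together; recognition is then nearest-neighbour search in that space. A minimal NumPy illustration with hypothetical linear projections in place of the DCNN branches:

```python
import numpy as np

def embed(x, W):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit-norm embedding

def coupled_mapping_loss(lr_faces, hr_faces, W_lr, W_hr):
    """Coupled-mappings objective (sketch): pull each matched LR/HR pair
    together in the common embedding space."""
    z_lr = embed(lr_faces, W_lr)
    z_hr = embed(hr_faces, W_hr)
    return np.mean(np.sum((z_lr - z_hr) ** 2, axis=1))

def match(probe_lr, gallery_hr, W_lr, W_hr):
    """Identify a low-resolution probe against a high-resolution gallery
    by nearest neighbour (cosine similarity) in the shared space."""
    z_p = embed(probe_lr[None, :], W_lr)
    z_g = embed(gallery_hr, W_hr)
    return int(np.argmax(z_g @ z_p.T))
```

Training the two branches jointly is what lets a blurry probe be compared directly against a high-resolution gallery without explicit super-resolution.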