no code implementations • 5 Mar 2024 • Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour
The development of multimodal interactive systems is hindered by the lack of rich multimodal (text and image) conversational data, which LLMs need in large quantities.
1 code implementation • 2 Jun 2023 • Hossein Aboutalebi, Dayou Mao, Carol Xu, Alexander Wong
Motivated to address these key concerns and encourage responsible generative AI, we introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in building machine learning algorithms for detecting generative AI art forgery and data poisoning.
no code implementations • 8 Jun 2022 • Maya Pavlova, Tia Tuinstra, Hossein Aboutalebi, Andy Zhao, Hayden Gunraj, Alexander Wong
More than two years after the beginning of the COVID-19 pandemic, the crisis continues to exert devastating pressure globally.
1 code implementation • 24 Apr 2022 • Hossein Aboutalebi, Maya Pavlova, Mohammad Javad Shafiee, Adrian Florea, Andrew Hryniowski, Alexander Wong
Since the World Health Organization declared COVID-19 a pandemic in 2020, the global community has faced ongoing challenges in controlling and mitigating the transmission of the SARS-CoV-2 virus, as well as its evolving subvariants and recombinants.
no code implementations • 12 Oct 2021 • Hossein Aboutalebi, Maya Pavlova, Hayden Gunraj, Mohammad Javad Shafiee, Ali Sabri, Amer Alaref, Alexander Wong
In this work, we explore the concept of self-attention for tackling such subtleties in and between diseases.
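The mechanism referenced here is standard scaled dot-product self-attention. As a point of reference only (the paper's specific architecture is not shown in this excerpt), a minimal NumPy sketch with illustrative dimensions and random projection weights:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of feature vectors.
    X: (seq_len, d_model) inputs; Wq/Wk/Wv: (d_model, d_k) learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # attention-weighted values

# Toy usage (dimensions are hypothetical)
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 32))                       # 8 tokens, 32-dim features
Wq, Wk, Wv = (rng.standard_normal((32, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                    # shape (8, 16)
```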
no code implementations • 14 Sep 2021 • Audrey G. Chung, Maya Pavlova, Hayden Gunraj, Naomi Terhljan, Alexander MacLean, Hossein Aboutalebi, Siddharth Surana, Andy Zhao, Saad Abbasi, Alexander Wong
As the COVID-19 pandemic continues to devastate globally, one promising field of research is machine-learning-driven computer vision for streamlining various parts of the COVID-19 clinical workflow.
no code implementations • 8 Sep 2021 • Maziar Gomrokchi, Susan Amin, Hossein Aboutalebi, Alexander Wong, Doina Precup
To address this gap, we propose an adversarial attack framework designed for testing the vulnerability of a state-of-the-art deep reinforcement learning algorithm to a membership inference attack.
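The paper's framework targets deep RL agents; as a simpler illustration of the underlying idea, here is a sketch of the classic loss-threshold membership inference attack (a generic baseline, not the paper's method): examples on which the target model has unusually low loss are predicted to be training members.

```python
import numpy as np

def threshold_membership_attack(losses_member, losses_nonmember, threshold):
    """Classic loss-threshold membership inference: inputs whose loss under the
    target model falls below a threshold are predicted to be training members.
    Returns attack accuracy on this labeled evaluation set."""
    preds_member = losses_member < threshold          # low loss -> "member"
    preds_nonmember = losses_nonmember >= threshold   # high loss -> "non-member"
    correct = preds_member.sum() + preds_nonmember.sum()
    return correct / (len(losses_member) + len(losses_nonmember))

# Toy evaluation with hypothetical losses: members tend to have lower loss
rng = np.random.default_rng(0)
member = rng.gamma(shape=2.0, scale=0.2, size=1000)
nonmember = rng.gamma(shape=2.0, scale=0.4, size=1000)
print(threshold_membership_attack(member, nonmember, threshold=0.5))
```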
no code implementations • 18 Jun 2021 • Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong
Motivated by this, our study presents the concept of residual error, a new performance measure that not only assesses the adversarial robustness of a deep neural network at the individual sample level, but can also be used to differentiate between adversarial and non-adversarial examples, facilitating adversarial example detection.
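This excerpt does not define residual error, so the following is only a placeholder per-sample robustness score under an assumed definition (the loss increase after a one-step signed-gradient perturbation); it should not be read as the paper's measure.

```python
import torch
import torch.nn.functional as F

def per_sample_robustness_score(model, x, y, eps=0.01):
    """Illustrative per-sample score (NOT the paper's residual error):
    loss increase under a one-step gradient-sign perturbation of size eps,
    computed for a single-example batch x with label y. Larger scores suggest
    the sample sits closer to the decision boundary."""
    x = x.clone().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(clean_loss, x)
    x_adv = x + eps * grad.sign()                 # small adversarial step
    adv_loss = F.cross_entropy(model(x_adv), y)
    return (adv_loss - clean_loss).item()         # robustness proxy
```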
no code implementations • 14 May 2021 • Maya Pavlova, Naomi Terhljan, Audrey G. Chung, Andy Zhao, Siddharth Surana, Hossein Aboutalebi, Hayden Gunraj, Ali Sabri, Amer Alaref, Alexander Wong
As the COVID-19 pandemic continues to devastate globally, the use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow, given its routine clinical use for respiratory complaints.
no code implementations • 4 May 2021 • Hossein Aboutalebi, Saad Abbasi, Mohammad Javad Shafiee, Alexander Wong
The health and socioeconomic difficulties caused by the COVID-19 pandemic continue to cause enormous tension around the world.
no code implementations • 1 May 2021 • Hossein Aboutalebi, Maya Pavlova, Mohammad Javad Shafiee, Ali Sabri, Amer Alaref, Alexander Wong
More specifically, we leveraged transfer learning to transfer representational knowledge gained from over 16,000 CXR images from a multinational cohort of over 15,000 patient cases into a custom network architecture for severity assessment.
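A minimal PyTorch sketch of the transfer-learning pattern described here, with an ImageNet-pretrained ResNet-18 standing in for the paper's custom CXR-pretrained architecture (the backbone, head size, and hyperparameters are assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained backbone (a stand-in for the paper's custom
# architecture and its CXR-pretrained weights, which are not shown here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred representation; only the new head will be trained.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classifier with a regression head for a severity score.
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```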
1 code implementation • 26 Dec 2020 • Susan Amin, Maziar Gomrokchi, Hossein Aboutalebi, Harsh Satija, Doina Precup
A major challenge in reinforcement learning is the design of exploration strategies, especially for environments with sparse reward structures and continuous state and action spaces.
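For context, a common baseline for continuous-action exploration is temporally correlated Ornstein-Uhlenbeck noise; the sketch below shows that baseline, not the exploration strategy proposed in the paper.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise,
    a standard baseline for continuous control (not the paper's method)."""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Mean-reverting drift plus Gaussian increment keeps the noise
        # correlated in time, producing persistent exploratory motion.
        dx = (-self.theta * self.x * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x = self.x + dx
        return self.x

noise = OUNoise(dim=2)
action = np.tanh(np.zeros(2) + noise.sample())  # policy output + exploration noise
```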
no code implementations • 18 Nov 2020 • Hossein Aboutalebi, Mohammad Javad Shafiee, Alexander Wong
In this study, we hypothesize that part of the reason for the incredible effectiveness of adversarial attacks is their ability to implicitly tap into and exploit the gradient flow of a deep neural network.
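The fast gradient sign method (FGSM) is the canonical example of an attack that exploits a network's gradient in exactly this way; a minimal PyTorch sketch (`model` and `eps` are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: perturb each input dimension one signed
    step in the direction that maximally increases the loss, i.e., directly
    exploiting the network's gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # one signed-gradient step
    return x_adv.clamp(0, 1).detach()        # keep pixels in valid range
```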
no code implementations • 1 Aug 2020 • Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong
In this study, we investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network and analyze how adversarial perturbations can affect the generalization of a network.
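The bias-variance decomposition being analyzed can be estimated by Monte Carlo: fit many models on resampled training sets and split the expected squared error at each test point into bias² and variance. The toy regression below illustrates the decomposition itself; it does not reproduce the paper's adversarial-perturbation analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(2 * x)
x_test = np.linspace(0, 3, 50)

# Fit many models on independently resampled training sets, then decompose
# the expected squared error at each test point into bias^2 and variance.
preds = []
for _ in range(200):
    x_tr = rng.uniform(0, 3, 30)
    y_tr = true_fn(x_tr) + 0.3 * rng.standard_normal(30)
    coef = np.polyfit(x_tr, y_tr, deg=3)          # a cubic as the "model"
    preds.append(np.polyval(coef, x_test))
preds = np.array(preds)

bias_sq = (preds.mean(axis=0) - true_fn(x_test)) ** 2
variance = preds.var(axis=0)
print(f"mean bias^2: {bias_sq.mean():.4f}, mean variance: {variance.mean():.4f}")
```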
no code implementations • 4 Mar 2019 • Hossein Aboutalebi, Doina Precup, Tibor Schuster
We present a regret bound for our approach and evaluate it empirically both on synthetic problems as well as on a dataset from the clinical trial literature.
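For readers unfamiliar with regret, the sketch below runs UCB1 on a toy Bernoulli bandit and tracks cumulative expected regret against the best arm; the paper's clinical-trial setting and algorithm are different, so this is illustration only.

```python
import numpy as np

def ucb1(true_means, horizon, seed=0):
    """UCB1 on a Bernoulli bandit, returning cumulative expected regret
    against the best arm over the given horizon."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts, sums = np.zeros(k), np.zeros(k)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                  # play each arm once
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))                    # optimism bonus
        reward = rng.random() < true_means[arm]
        counts[arm] += 1
        sums[arm] += reward
        regret += max(true_means) - true_means[arm]      # expected regret
    return regret

print(ucb1([0.3, 0.5, 0.6], horizon=10_000))
```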