no code implementations • 23 Jan 2014 • Hossein Hosseini, Ali Goli, Neda Barzegar Marvasti, Masoume Azghani, Farokh Marvasti
In this paper, we propose a method for image block loss restoration based on the notion of sparse representation.
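The abstract only names the general idea, so the following is a minimal sketch of sparse-representation-based block restoration in general, not the authors' algorithm: fit a sparse code for a patch over a DCT dictionary using only the observed pixels, then re-synthesize the full patch, filling in the lost block. All function names and the choice of OMP as the sparse solver are illustrative assumptions.

```python
import numpy as np

def dct_dictionary(n):
    """n^2 x n^2 matrix whose columns are orthonormal 2D DCT basis atoms."""
    k = np.arange(n)
    D1 = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    D1 /= np.linalg.norm(D1, axis=0)
    return np.kron(D1, D1)

def omp(A, y, n_atoms):
    """Orthogonal Matching Pursuit: greedy sparse fit of y over columns of A."""
    residual, support = y.astype(float).copy(), []
    coef = np.array([])
    for _ in range(n_atoms):
        corr = np.abs(A.T @ residual)
        corr[support] = -1.0                 # never re-select a chosen atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def restore_block(patch, known, n_atoms=4):
    """Fill missing pixels of an n x n patch. `known` is a boolean mask of
    observed pixels; the sparse code is fit on those pixels only, and the
    whole patch is re-synthesized from the recovered code."""
    n = patch.shape[0]
    D = dct_dictionary(n)
    obs = known.ravel()
    code = omp(D[obs], patch.ravel()[obs], n_atoms)
    return (D @ code).reshape(n, n)
```

The key point the sketch illustrates: because natural patches are (approximately) sparse in a transform domain, a code fit on the surviving pixels also determines plausible values for the lost block.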
no code implementations • 10 Jul 2014 • Hossein Hosseini, Farzad Hessar, Farokh Marvasti
In this paper, we propose a method for real-time high density impulse noise suppression from images.
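As a rough illustration of the problem setting (a simplified switching median filter, not the paper's method): salt-and-pepper impulses saturate pixels to the extremes, so a detector can flag extreme-valued pixels and replace only those with the median of their non-flagged neighbors.

```python
import numpy as np

def remove_impulse_noise(img, lo=0, hi=255, win=1):
    """Replace suspected impulse pixels (extreme values lo/hi) with the
    median of non-impulse neighbors in a (2*win+1)^2 window.
    A toy sketch of switching median filtering."""
    noisy = (img == lo) | (img == hi)
    out = img.astype(float).copy()
    H, W = img.shape
    for i, j in zip(*np.nonzero(noisy)):
        i0, i1 = max(i - win, 0), min(i + win + 1, H)
        j0, j1 = max(j - win, 0), min(j + win + 1, W)
        block = img[i0:i1, j0:j1]
        good = block[~noisy[i0:i1, j0:j1]]    # uncorrupted neighbors only
        if good.size:
            out[i, j] = np.median(good)
    return out
```

Filtering only the detected pixels (rather than the whole image) is what keeps this class of methods fast and detail-preserving at high noise densities.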
no code implementations • 27 Aug 2016 • Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran
We consider the setting where a collection of time series, modeled as random processes, evolve in a causal manner, and one is interested in learning the graph governing the relationships of these processes.
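A toy illustration of the underlying idea of learning directed influence between processes — here plain lag-1 linear Granger causality, which is a stand-in and not the paper's estimator: declare an edge x → y when past values of x reduce the error of predicting y beyond what y's own past achieves.

```python
import numpy as np

def granger_score(x, y):
    """Variance-reduction score for 'x helps predict y' at lag 1.
    Returns var(residual without x) / var(residual with x); values
    well above 1 suggest a directed influence x -> y."""
    Y = y[1:]
    A_own = np.column_stack([np.ones(len(Y)), y[:-1]])
    A_full = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])
    r_own = Y - A_own @ np.linalg.lstsq(A_own, Y, rcond=None)[0]
    r_full = Y - A_full @ np.linalg.lstsq(A_full, Y, rcond=None)[0]
    return np.var(r_own) / np.var(r_full)
```

Scoring every ordered pair of series this way and thresholding yields a candidate directed graph; the interesting statistical questions (sample complexity, nonlinearity, confounding) are what the paper actually addresses.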
no code implementations • 27 Feb 2017 • Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran
In this paper, we propose an attack on the Perspective toxicity detection system based on adversarial examples.
no code implementations • 13 Mar 2017 • Hossein Hosseini, Yize Chen, Sreeram Kannan, Baosen Zhang, Radha Poovendran
Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars.
no code implementations • 20 Mar 2017 • Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran
To this end, we evaluate CNNs on negative images, since they share the same structure and semantics as regular images and humans can classify them correctly.
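The negative-image transform itself is simple; a minimal sketch (assuming 8-bit intensities):

```python
import numpy as np

def to_negative(img, max_val=255):
    """Negative image: invert intensities. Shapes and semantics are
    preserved for a human observer, while the low-level pixel statistics
    the network was trained on are changed."""
    return max_val - img
```

A model whose decisions rest on shape and semantics should classify `img` and `to_negative(img)` alike; a large accuracy drop on negatives indicates reliance on surface statistics instead.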
no code implementations • 26 Mar 2017 • Hossein Hosseini, Baicen Xiao, Radha Poovendran
To do so, we select an image that differs from the video content and insert it into the video periodically and at a very low rate.
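The insertion step can be sketched as follows (a toy setup, not the paper's pipeline; frames are whatever array or object type the video decoder yields):

```python
def insert_periodic_frames(video, inserted_frame, period):
    """Insert `inserted_frame` after every `period` original frames.
    A low insertion rate corresponds to a large `period`, so a human
    viewer barely notices the extra frames, while a frame-sampling
    video classifier may repeatedly see them."""
    out = []
    for i, frame in enumerate(video, start=1):
        out.append(frame)
        if i % period == 0:
            out.append(inserted_frame)
    return out
```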
no code implementations • 16 Apr 2017 • Hossein Hosseini, Baicen Xiao, Radha Poovendran
For example, an adversary can bypass an image filtering system by adding noise to inappropriate images.
no code implementations • 14 Aug 2017 • Hossein Hosseini, Baicen Xiao, Andrew Clark, Radha Poovendran
Finally, we propose introducing randomness into video analysis algorithms as a countermeasure to our attacks.
1 code implementation • 16 Mar 2018 • Hossein Hosseini, Radha Poovendran
This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations.
no code implementations • 21 Mar 2018 • Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran
In order to conduct large scale experiments, we propose using the model accuracy on images with reversed brightness as a metric to evaluate the shape bias property.
no code implementations • 1 May 2019 • Hossein Hosseini, Sreeram Kannan, Radha Poovendran
Deep neural networks are vulnerable to adversarial examples.
no code implementations • 28 Jul 2019 • Hossein Hosseini, Sreeram Kannan, Radha Poovendran
In this paper, we first develop a classifier-based adaptation of the statistical test method and show that it improves the detection performance.
no code implementations • 9 Jul 2020 • Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga, Max Welling
In this paper, we propose Federated User Authentication (FedUA), a framework for privacy-preserving training of UA models.
no code implementations • 1 Jan 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
no code implementations • 1 Jan 2021 • Mohammad Samragh, Hossein Hosseini, Kambiz Azarian, Joseph Soriaga
Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks.
no code implementations • 18 Apr 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setting, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
no code implementations • 23 Apr 2021 • Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, Farinaz Koushanfar
In our method, the edge device runs the model up to a split layer determined based on its computational capacity.
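The split-computation setup can be sketched with a hypothetical two-layer network (random weights standing in for a trained model; the paper's architecture and privacy mechanism are not reproduced here): the edge runs layers up to the split point chosen for its compute budget and ships only the intermediate activations to the cloud.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-layer model; weights are random stand-ins for trained parameters.
W1, b1 = rng.standard_normal((4, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 3)), rng.standard_normal(3)

def edge_part(x):
    """Runs on the device: layers up to the split point."""
    return np.maximum(x @ W1 + b1, 0.0)      # ReLU feature map

def cloud_part(z):
    """Runs on the server: remaining layers; sees only split activations,
    never the raw input."""
    return z @ W2 + b2

x = rng.standard_normal(4)
logits = cloud_part(edge_part(x))            # end-to-end split inference
```

The design question the line above alludes to is where to place the split: earlier splits cost the edge less compute but expose activations closer to the raw input, later splits do the reverse.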