Search Results for author: Hossein Hosseini

Found 18 papers, 1 paper with code

Semantic Adversarial Examples

1 code implementation • 16 Mar 2018 • Hossein Hosseini, Radha Poovendran

This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations.

Denoising
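The semantic adversarial examples in this paper shift an image's colors in HSV space, producing images far from the original in pixel distance yet semantically unchanged. A minimal sketch of the hue-shift transformation (the real attack searches over shifts that flip the classifier's prediction; the function name and per-pixel loop are illustrative, not the paper's implementation):

```python
import colorsys

import numpy as np

def hue_shift(img, delta):
    """Shift the hue of an RGB image (H, W, 3, values in [0, 1]) by delta in [0, 1).

    Hue is rotated while saturation and value are preserved, so shapes
    and textures stay intact even though pixel values change drastically.
    """
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*img[i, j])
            out[i, j] = colorsys.hsv_to_rgb((h + delta) % 1.0, s, v)
    return out

# A pure-red pixel rotated by one third of the hue circle becomes pure green.
img = np.array([[[1.0, 0.0, 0.0]]])
print(hue_shift(img, 1.0 / 3.0))
```

Because the shift is large and structured, it is exactly the kind of perturbation that the small-noise denoising defenses mentioned above do not remove.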

Assessing Shape Bias Property of Convolutional Neural Networks

no code implementations • 21 Mar 2018 • Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran

In order to conduct large scale experiments, we propose using the model accuracy on images with reversed brightness as a metric to evaluate the shape bias property.

One-Shot Learning

On the Limitation of Convolutional Neural Networks in Recognizing Negative Images

no code implementations • 20 Mar 2017 • Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, Radha Poovendran

To this end, we evaluate CNNs on negative images, since they share the same structure and semantics as regular images and humans can classify them correctly.
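A negative image is simply the complement of each pixel value, which preserves edges and shapes while reversing brightness. A one-line sketch of the transformation the paper evaluates CNNs on:

```python
import numpy as np

def negative(img):
    """Return the negative of an 8-bit image: each pixel value v becomes 255 - v.

    Negative images keep the same structure and semantics as the
    original, so a human (or a shape-biased model) can still classify
    them; the paper shows standard CNNs largely fail on them.
    """
    return (255 - img.astype(np.uint8)).astype(np.uint8)

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(negative(img))  # [[255 127   0]]
```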

Google's Cloud Vision API Is Not Robust To Noise

no code implementations • 16 Apr 2017 • Hossein Hosseini, Baicen Xiao, Radha Poovendran

For example, an adversary can bypass an image filtering system by adding noise to inappropriate images.
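The attack corrupts a fraction of pixels with impulse (salt-and-pepper) noise until the API's labels change, while the image stays recognizable to a human. A simplified sketch (the density threshold and random seeding are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def add_impulse_noise(img, density, seed=None):
    """Corrupt a fraction `density` of pixels with salt-and-pepper noise.

    Each corrupted pixel is set to 255 ("salt") or 0 ("pepper") with
    equal probability; the remaining pixels are left untouched.
    """
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape[:2]) < density
    salt = rng.random(img.shape[:2]) < 0.5
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

img = np.full((4, 4), 128, dtype=np.uint8)
noisy = add_impulse_noise(img, 0.5, seed=0)
```

An adversary would raise `density` gradually and query the API after each step, stopping once the returned labels no longer match the image content.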

Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos

no code implementations • 26 Mar 2017 • Hossein Hosseini, Baicen Xiao, Radha Poovendran

For this, we select an image, which is different from the video content, and insert it, periodically and at a very low rate, into the video.

Image Classification
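The frame-insertion step described above can be sketched in a few lines: an image unrelated to the video is spliced into the frame sequence once every `period` frames, at a rate low enough to be imperceptible to a viewer but frequent enough to dominate the API's summary. The function below is a hypothetical illustration, not the paper's tooling:

```python
def insert_adversarial_frames(frames, adv_frame, period):
    """Insert `adv_frame` into a frame sequence once every `period` frames.

    With a large `period` relative to the frame rate, the inserted image
    appears only for a tiny fraction of the playback time, yet the
    summarization API may still pick it up as the dominant content.
    """
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if (i + 1) % period == 0:
            out.append(adv_frame)
    return out

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
print(insert_adversarial_frames(frames, "ADV", 2))
```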

Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

no code implementations • 13 Mar 2017 • Hossein Hosseini, Yize Chen, Sreeram Kannan, Baosen Zhang, Radha Poovendran

Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars.

Blocking · Medical Diagnosis

Deceiving Google's Perspective API Built for Detecting Toxic Comments

no code implementations • 27 Feb 2017 • Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran

In this paper, we propose an attack on the Perspective toxic detection system based on the adversarial examples.
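The adversarial examples here are text perturbations: subtle misspellings or inserted punctuation that leave the meaning obvious to a human reader but sharply lower the toxicity score. One illustrative perturbation (inserting dots between characters) can be sketched as follows; the word list and dot-joining are assumptions for illustration, not the paper's exact scheme:

```python
def perturb(text, target_words):
    """Obfuscate the listed words by inserting a dot between their characters.

    "idiot" becomes "i.d.i.o.t": still readable as an insult, but no
    longer an exact match for the tokens a toxicity model was trained on.
    """
    out = []
    for word in text.split():
        if word.lower() in target_words:
            out.append(".".join(word))
        else:
            out.append(word)
    return " ".join(out)

print(perturb("you are an idiot", {"idiot"}))  # you are an i.d.i.o.t
```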

Image Block Loss Restoration Using Sparsity Pattern as Side Information

no code implementations • 23 Jan 2014 • Hossein Hosseini, Ali Goli, Neda Barzegar Marvasti, Masoume Azghani, Farokh Marvasti

In this paper, we propose a method for image block loss restoration based on the notion of sparse representation.

Learning Temporal Dependence from Time-Series Data with Latent Variables

no code implementations • 27 Aug 2016 • Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran

We consider the setting where a collection of time series, modeled as random processes, evolve in a causal manner, and one is interested in learning the graph governing the relationships of these processes.

Time Series · Time Series Analysis

Real-Time Impulse Noise Suppression from Images Using an Efficient Weighted-Average Filtering

no code implementations • 10 Jul 2014 • Hossein Hosseini, Farzad Hessar, Farokh Marvasti

In this paper, we propose a method for real-time high density impulse noise suppression from images.
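The core idea is to detect suspected impulse pixels (extreme values 0 or 255) and replace each with a weighted average of the uncorrupted pixels in its neighborhood. The sketch below uses uniform weights over valid neighbors for simplicity; the paper's method uses an efficient weighting scheme designed for real-time, high-density noise:

```python
import numpy as np

def suppress_impulse_noise(img, window=1):
    """Replace suspected impulse pixels (0 or 255) in a grayscale image
    by the average of the non-corrupted pixels in their neighborhood.
    """
    h, w = img.shape
    out = img.astype(float)
    corrupted = (img == 0) | (img == 255)
    for i in range(h):
        for j in range(w):
            if not corrupted[i, j]:
                continue
            i0, i1 = max(i - window, 0), min(i + window + 1, h)
            j0, j1 = max(j - window, 0), min(j + window + 1, w)
            patch = img[i0:i1, j0:j1].astype(float)
            valid = ~corrupted[i0:i1, j0:j1]
            if valid.any():  # average only over uncorrupted neighbors
                out[i, j] = patch[valid].mean()
    return out.astype(np.uint8)
```

Restricting the average to uncorrupted neighbors is what lets this kind of filter cope with high noise densities, where a plain median or mean filter would average corrupted values back in.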

Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples

no code implementations • 28 Jul 2019 • Hossein Hosseini, Sreeram Kannan, Radha Poovendran

In this paper, we first develop a classifier-based adaptation of the statistical test method and show that it improves the detection performance.

Federated Learning of User Authentication Models

no code implementations • 9 Jul 2020 • Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga, Max Welling

In this paper, we propose Federated User Authentication (FedUA), a framework for privacy-preserving training of UA models.

Federated Learning · Privacy Preserving · +1

Secure Federated Learning of User Verification Models

no code implementations • 1 Jan 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling

We consider the problem of training User Verification (UV) models in federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.

Federated Learning

Private Split Inference of Deep Networks

no code implementations • 1 Jan 2021 • Mohammad Samragh, Hossein Hosseini, Kambiz Azarian, Joseph Soriaga

Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks.
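In split inference the edge device evaluates the network up to a split layer and sends only the intermediate activation to the server, which finishes the computation. A toy sketch with a hypothetical two-layer fully connected network (the layer shapes and split point are made up for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def edge_part(x, w1):
    """Layers run on the edge device, up to the split layer."""
    return relu(x @ w1)

def cloud_part(z, w2):
    """Remaining layers run on the cloud server."""
    return z @ w2

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))    # raw input stays on the device
w1 = rng.standard_normal((8, 4))   # edge-side weights
w2 = rng.standard_normal((4, 2))   # server-side weights

z = edge_part(x, w1)   # only this activation crosses the network
y = cloud_part(z, w2)  # server completes the inference
```

Only `z` leaves the device, so the choice of split layer trades edge compute against how much the shared activation reveals about the input, which is the privacy question this line of work studies.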

Federated Learning of User Verification Models Without Sharing Embeddings

no code implementations • 18 Apr 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling

We consider the problem of training User Verification (UV) models in federated setting, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.

Federated Learning

Unsupervised Information Obfuscation for Split Inference of Neural Networks

no code implementations • 23 Apr 2021 • Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, Farinaz Koushanfar

In our method, the edge device runs the model up to a split layer determined based on its computational capacity.
