Search Results for author: Soteris Demetriou

Found 8 papers, 2 papers with code

Data Augmentation for Dementia Detection in Spoken Language

1 code implementation • 26 Jun 2022 • Anna Hlédiková, Dominika Woszczyk, Alican Akman, Soteris Demetriou, Björn Schuller

In this work, we investigate data augmentation techniques for the task of Alzheimer's Disease (AD) detection and perform an empirical evaluation of the different approaches on two kinds of models for both the text and audio domains.

Data Augmentation
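As a concrete illustration of audio-domain augmentation, the sketch below injects Gaussian noise into a waveform. This is one common technique among many; the specific method, function names, and parameter values are assumptions for illustration, not necessarily the ones the paper evaluates.

```python
import random

def augment_audio(signal, noise_std=0.01, seed=0):
    """Return a noisy copy of the waveform samples.

    Additive Gaussian noise is a standard audio augmentation; the
    parameters here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    return [s + rng.gauss(0.0, noise_std) for s in signal]

clean = [0.0, 0.5, -0.5, 0.25]   # toy waveform samples
noisy = augment_audio(clean)
```

Each augmented copy can be added to the training set alongside the original, effectively enlarging a small clinical speech corpus.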

Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception

no code implementations • 29 Apr 2022 • Zhongyuan Hau, Soteris Demetriou, Emil C. Lupu

Hidden objects are detected by searching the LiDAR point cloud for void regions (3D shadows) and locating the obstacles that cast them.

Autonomous Vehicles
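A rough 2D sketch of the void-region idea: returns are binned by bearing, and any bin whose nearest return falls well short of the sensor's maximum range leaves a void (shadow) behind it. The binning scheme, parameter values, and names are assumptions for illustration, not the paper's actual algorithm.

```python
import math

def shadow_regions(points, max_range=50.0, n_bins=360):
    """Map each bearing bin with a return to the void ('shadow')
    stretching from that return out to max_range."""
    nearest = [max_range] * n_bins
    for x, y in points:
        bearing = math.degrees(math.atan2(y, x)) % 360.0
        b = int(bearing / 360.0 * n_bins) % n_bins
        nearest[b] = min(nearest[b], math.hypot(x, y))
    return {b: (nearest[b], max_range)
            for b in range(n_bins) if nearest[b] < max_range}

pts = [(5.0, 0.0), (5.1, 0.1)]   # returns from a small obstacle ahead
shadows = shadow_regions(pts)    # void regions behind the obstacle
```

An adversary who deletes an obstacle's points cannot easily delete the shadow it casts, so a shadow with no corresponding obstacle is evidence of a hiding attack.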

Temporal Consistency Checks to Detect LiDAR Spoofing Attacks on Autonomous Vehicle Perception

no code implementations • 15 Jun 2021 • Chengzeng You, Zhongyuan Hau, Soteris Demetriou

In particular, model-level LiDAR spoofing attacks aim to inject fake depth measurements to elicit ghost objects that are erroneously detected by 3D Object Detectors, resulting in hazardous driving decisions.

Autonomous Vehicles • Motion Prediction
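The temporal-consistency intuition can be sketched as follows: a genuine object's position at time t should be close to where its t−1 state predicts it, while a spoofed ghost object typically appears with no consistent history. The constant-velocity model, threshold, and names are hypothetical, not taken from the paper.

```python
def consistent(prev_pos, prev_vel, curr_pos, dt=0.1, tol=0.5):
    """Check a detection against its constant-velocity motion prediction."""
    pred = tuple(p + v * dt for p, v in zip(prev_pos, prev_vel))
    dist = sum((a - b) ** 2 for a, b in zip(pred, curr_pos)) ** 0.5
    return dist <= tol

genuine = consistent((0.0, 0.0), (10.0, 0.0), (1.0, 0.0))  # matches prediction
ghost = consistent((0.0, 0.0), (0.0, 0.0), (5.0, 5.0))     # appears from nowhere
```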

Quantifying and Localizing Usable Information Leakage from Neural Network Gradients

no code implementations • 28 May 2021 • Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Soteris Demetriou, Deniz Gündüz, Hamed Haddadi

Our proposed framework enables clients to localize and quantify private information leakage layer by layer, giving a better understanding of the sources of leakage in collaborative learning that future studies can use to benchmark new attacks and defense mechanisms.

Layer-wise Characterization of Latent Information Leakage in Federated Learning

no code implementations • 17 Oct 2020 • Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Hamed Haddadi, Soteris Demetriou

Training deep neural networks via federated learning allows clients to share, instead of the original data, only the model trained on their data.

Federated Learning
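The sharing-models-not-data idea can be sketched with federated averaging (FedAvg) on a toy one-parameter linear model. All names, hyperparameters, and data below are illustrative, not the paper's training setup.

```python
# FedAvg sketch: clients fit y = w * x on local data and share only the
# updated weight; the server averages the client weights.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on the client's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Each client trains locally; only weights cross the network."""
    client_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(client_ws) / len(client_ws)

clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # both follow y = 2x
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # → 2.0
```

The raw (x, y) pairs never leave the clients, which is precisely why the question this paper studies — how much of that private data the shared model still reveals — matters.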

DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

2 code implementations • 12 Apr 2020 • Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi

We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).

Image Classification
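A hypothetical sketch of the model-partitioning idea: early layers run in the normal world while the final, most privacy-sensitive layers run inside the TEE. DarkneTZ targets a real TEE; here a plain Python function stands in for the enclave, and none of the names below come from its code.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(weights, v):
    """Fully connected layer followed by ReLU (toy, list-based)."""
    return relu([sum(w * x for w, x in zip(row, v)) for row in weights])

def partitioned_forward(layers, x, split):
    """Run layers[:split] in the normal world and layers[split:] inside
    the 'enclave', so activations of the protected layers never leave
    the trusted side."""
    for w in layers[:split]:     # normal world: observable by an attacker
        x = dense(w, x)

    def enclave(v):              # trusted world: only the output escapes
        for w in layers[split:]:
            v = dense(w, v)
        return v

    return enclave(x)

net = [[[1.0, 0.0], [0.0, 1.0]],  # layer 1 (normal world)
       [[2.0, 0.0]]]              # layer 2 ('in the TEE')
out = partitioned_forward(net, [1.0, -1.0], split=1)
```

Keeping only the last layers inside the enclave respects the TEE's tight memory budget while protecting the layers that leak the most about the training data.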
