Search Results for author: Kleomenis Katevas

Found 13 papers, 6 papers with code

P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups

no code implementations · 26 Feb 2023 · Ioannis Arapakis, Panagiotis Papadopoulos, Kleomenis Katevas, Diego Perino

Distributed (or federated) learning enables users to train machine learning models on their own devices, while sharing only their models' gradients, usually in a differentially private way (at the cost of some utility).
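The gradient-sharing step described above can be sketched with the standard Gaussian mechanism; the clipping norm and noise scale below are illustrative defaults, not P4L's actual parameters:

```python
import numpy as np

def privatize_gradients(grads, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Clip a client's gradients to bounded L2 norm and add Gaussian
    noise before sharing (toy Gaussian mechanism; parameters illustrative)."""
    rng = rng or np.random.default_rng()
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to bounded sensitivity
    noisy = flat * scale + rng.normal(0.0, noise_scale * clip_norm, flat.shape)
    # Reassemble into the original per-tensor shapes before sharing with peers
    out, i = [], 0
    for g in grads:
        out.append(noisy[i:i + g.size].reshape(g.shape))
        i += g.size
    return out
```

Only the noisy, clipped gradients leave the device; the raw training data never does.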

Federated Learning · Privacy Preserving

Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning

no code implementations · 28 Oct 2021 · Dusan Stamenkovic, Alexandros Karatzoglou, Ioannis Arapakis, Xin Xin, Kleomenis Katevas

The proposed SMORL agent augments standard recommendation models with additional RL layers that require it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
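One common way to combine such objectives is a weighted scalarization of per-objective rewards; a toy sketch (the weights and the three reward definitions are illustrative stand-ins, not SMORL's exact formulation):

```python
def multi_objective_reward(rec_item, clicked, history, catalog_pop,
                           weights=(1.0, 0.5, 0.5)):
    """Combine accuracy, diversity, and novelty signals into one scalar
    reward for the RL layer (all three definitions are toy stand-ins)."""
    accuracy = 1.0 if clicked else 0.0               # did the user engage?
    diversity = 0.0 if rec_item in history else 1.0  # unseen w.r.t. recent history
    novelty = 1.0 - catalog_pop.get(rec_item, 0.0)   # less popular => more novel
    wa, wd, wn = weights
    return wa * accuracy + wd * diversity + wn * novelty
```

Tuning the weights trades accuracy against how aggressively diverse and novel items are promoted.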

Multi-Objective Reinforcement Learning · Reinforcement Learning (+2)

PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments

1 code implementation · 29 Apr 2021 · Fan Mo, Hamed Haddadi, Kleomenis Katevas, Eduard Marin, Diego Perino, Nicolas Kourtellis

We propose and implement a Privacy-preserving Federated Learning ($PPFL$) framework for mobile systems to limit privacy leakages in federated learning.

Federated Learning · Privacy Preserving

FLaaS: Federated Learning as a Service

no code implementations · 18 Nov 2020 · Nicolas Kourtellis, Kleomenis Katevas, Diego Perino

Indeed, FL enables local training on user devices, avoiding the transfer of user data to centralized servers, and can be enhanced with differential privacy mechanisms.

Federated Learning · Management (+3)

DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

2 code implementations · 12 Apr 2020 · Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi

We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
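The partitioning idea can be sketched as follows: the early layers run in the untrusted "normal world", and only an intermediate activation crosses into the enclave, which holds the privacy-sensitive later layers (the layer and enclave interfaces below are placeholders, not DarkneTZ's API):

```python
def partition(layers, cut):
    """Split a sequential DNN at `cut`: layers[:cut] stay in the normal
    world, layers[cut:] are loaded into the TEE."""
    return layers[:cut], layers[cut:]

def infer(x, normal_layers, tee_run):
    """Run the exposed early layers on the untrusted side, then hand the
    intermediate activation to the enclave, which returns only the final
    output without exposing the protected layers."""
    for layer in normal_layers:
        x = layer(x)
    return tee_run(x)
```

Choosing the cut point trades the attack surface (fewer exposed layers) against the limited memory available inside the TEE.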

Image Classification

Policy-Based Federated Learning

2 code implementations · 14 Mar 2020 · Kleomenis Katevas, Eugene Bagdasaryan, Jason Waterman, Mohamad Mounir Safadieh, Eleanor Birrell, Hamed Haddadi, Deborah Estrin

In this paper we present PoliFL, a decentralized, edge-based framework that supports heterogeneous privacy policies for federated learning.

Federated Learning · Image Classification

Towards Characterizing and Limiting Information Exposure in DNN Layers

no code implementations · 13 Jul 2019 · Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Andrea Cavallaro, Hamed Haddadi

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models.

Deep Private-Feature Extraction

1 code implementation · 9 Feb 2018 · Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, Hamid R. Rabiee

We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints.

Continual Prediction of Notification Attendance with Classical and Deep Network Approaches

no code implementations · 19 Dec 2017 · Kleomenis Katevas, Ilias Leontiadis, Martin Pielot, Joan Serrà

Besides using classical gradient-boosted trees, we demonstrate how to make continual predictions using a recurrent neural network (RNN).
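A continual (per-event) prediction loop with a recurrent cell can be sketched as below, using a toy Elman cell in NumPy; the weight shapes and feature vectors are illustrative, and the paper's RNN is more elaborate:

```python
import numpy as np

def continual_attendance_probs(events, Wx, Wh, w_out):
    """Emit an attendance probability after every notification event,
    carrying the hidden state forward across the stream."""
    h = np.zeros(Wh.shape[0])
    probs = []
    for x in events:                     # one feature vector per notification
        h = np.tanh(Wx @ x + Wh @ h)     # recurrent state update
        p = 1.0 / (1.0 + np.exp(-(w_out @ h)))  # sigmoid readout
        probs.append(float(p))
    return probs
```

Because the hidden state persists between events, each prediction can reflect the user's recent interaction history, unlike a stateless per-event classifier.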

Human-Computer Interaction

Privacy-Preserving Deep Inference for Rich User Data on The Cloud

1 code implementation · 4 Oct 2017 · Seyed Ali Osia, Ali Shahin Shamsabadi, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi

Our evaluations show that, by using certain fine-tuning and embedding techniques at a small processing cost, we can greatly reduce the amount of information available to unintended tasks applied to the data features on the cloud, thus achieving the desired trade-off between privacy and performance.

Privacy Preserving

A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics

1 code implementation · 8 Mar 2017 · Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri, Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi

To this end, instead of performing the whole operation on the cloud, we let an IoT device run the initial layers of the neural network, and then send the output to the cloud to feed the remaining layers and produce the final result.
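The device/cloud split described above can be sketched as two halves of one forward pass; the layer lists below are placeholders for the paper's embedding and fine-tuned layers:

```python
def device_side(raw_input, early_layers):
    """Run the initial layers on the IoT device; only this intermediate
    feature, not the raw (possibly sensitive) input, leaves the device."""
    x = raw_input
    for layer in early_layers:
        x = layer(x)
    return x

def cloud_side(feature, remaining_layers):
    """The cloud completes inference from the intermediate feature alone."""
    for layer in remaining_layers:
        feature = layer(feature)
    return feature
```

The privacy benefit comes from the cloud never seeing the raw input, only a feature representation tailored to the intended task.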

Privacy Preserving
