Search Results for author: Iyiola E. Olatunji

Found 8 papers, 4 papers with code

Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?

no code implementations • 1 Jun 2023 • Iyiola E. Olatunji, Anmar Hizber, Oliver Sihlovec, Megha Khosla

Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education.

Attribute Inference Attack +2

Private Graph Extraction via Feature Explanations

1 code implementation • 29 Jun 2022 • Iyiola E. Olatunji, Mandeep Rathee, Thorben Funke, Megha Khosla

Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks.

BIG-bench Machine Learning • Graph Reconstruction

Releasing Graph Neural Networks with Differential Privacy Guarantees

1 code implementation • 18 Sep 2021 • Iyiola E. Olatunji, Thorben Funke, Megha Khosla

With the increasing popularity of graph neural networks (GNNs) in several sensitive applications like healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs.

Knowledge Distillation • Privacy Preserving

Achieving differential privacy for $k$-nearest neighbors based outlier detection by data partitioning

no code implementations • 16 Apr 2021 • Jens Rauch, Iyiola E. Olatunji, Megha Khosla

When applying outlier detection in settings where data is sensitive, mechanisms which guarantee the privacy of the underlying data are needed.

Outlier Detection

A Review of Anonymization for Healthcare Data

1 code implementation • 13 Apr 2021 • Iyiola E. Olatunji, Jens Rauch, Matthias Katzensteiner, Megha Khosla

Mining health data can lead to faster medical decisions, better treatment quality, disease prevention, and reduced costs, and it drives innovative solutions within the healthcare sector.

Reconstruction Attack

Membership Inference Attack on Graph Neural Networks

1 code implementation • 17 Jan 2021 • Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla

Choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets that dictate the differences in their robustness towards MI attacks.

Graph Classification • Inference Attack +3
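As a rough illustration of the kind of posterior-based, black-box attack model the snippet above describes, the sketch below implements confidence-thresholding membership inference: predict "member" when the model's top-class posterior is high, since models tend to be more confident on their training data. This is a generic minimal sketch, not the paper's actual attack; the threshold and the toy posteriors are illustrative assumptions.

```python
# Minimal confidence-based membership inference (MI) sketch.
# Assumption: the adversary only sees the target model's softmax
# posteriors (black-box access), as in the snippet above. The
# threshold 0.9 is an arbitrary illustrative choice.

def mi_attack(posterior, threshold=0.9):
    """Predict membership from a posterior (softmax) vector:
    high top-class confidence -> guess 'training member'."""
    return max(posterior) >= threshold

# Toy posteriors for illustration only:
train_like = [0.97, 0.02, 0.01]   # very confident -> guess member
test_like  = [0.55, 0.30, 0.15]   # less confident -> guess non-member

print(mi_attack(train_like))  # True
print(mi_attack(test_like))   # False
```

In practice the threshold would be calibrated on shadow models, but the core signal exploited is the same confidence gap between training and unseen data.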

Context-aware Helpfulness Prediction for Online Product Reviews

no code implementations • 27 Apr 2020 • Iyiola E. Olatunji, Xin Li, Wai Lam

In this paper, we propose a neural deep learning model that predicts the helpfulness score of a review.

Human Activity Recognition for Mobile Robot

no code implementations • 23 Jan 2018 • Iyiola E. Olatunji

We trained and validated the model using the Vicon physical action dataset and also tested the model on our generated dataset (VMCUHK).

Human Activity Recognition
