Search Results for author: Tribhuvanesh Orekondy

Found 10 papers, 2 papers with code

Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction from Variable-Sized Maps

no code implementations • 6 Oct 2023 • Thomas M. Hehn, Tribhuvanesh Orekondy, Ori Shental, Arash Behboodi, Juan Bucheli, Akash Doshi, June Namgoong, Taesang Yoo, Ashwin Sampath, Joseph B. Soriaga

The transformer model attends to the regions that are relevant for path loss prediction and therefore scales efficiently to maps of different sizes.

MIMO-GAN: Generative MIMO Channel Modeling

no code implementations • 16 Mar 2022 • Tribhuvanesh Orekondy, Arash Behboodi, Joseph B. Soriaga

We propose generative channel modeling to learn statistical channel models from channel input-output measurements.

Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries

no code implementations • 1 Sep 2020 • Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz

Our work consists of two parts: we introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of no access to the scores of the victim model.
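The label-only setting described above can be sketched with a robustness-style membership score built from repeated perturbed queries. Everything here is an illustrative stand-in, not the paper's exact method: the linear victim, the Gaussian perturbation scheme, and the agreement-based scoring rule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical label-only victim: returns only the predicted class, no scores.
# (Stand-in for a deployed classifier whose confidence values are hidden.)
w = rng.normal(size=5)

def victim_label(x, w):
    return (x @ w > 0).astype(int)

def sampling_attack_score(x, n_queries=50, sigma=0.3):
    """Query the victim repeatedly with perturbed copies of a candidate point.

    The fraction of perturbed queries whose predicted label agrees with the
    unperturbed prediction serves as a robustness-based membership signal
    (the intuition being that training members tend to sit farther from the
    decision boundary and are therefore more robust to perturbation).
    """
    base = victim_label(x[None, :], w)[0]
    noisy = x[None, :] + sigma * rng.normal(size=(n_queries, x.size))
    return np.mean(victim_label(noisy, w) == base)

# A point far from the boundary scores higher than one sitting on it.
print(sampling_attack_score(2 * w), sampling_attack_score(np.zeros(5)))
```

The attacker only ever observes hard labels, which is the restriction the snippet above emphasizes; turning the score into a member/non-member decision would additionally require a threshold.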


GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators

1 code implementation • NeurIPS 2020 • Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz

The widespread availability of rich data has fueled the growth of machine learning applications in numerous domains.

InfoScrub: Towards Attribute Privacy by Targeted Obfuscation

no code implementations • 20 May 2020 • Hui-Po Wang, Tribhuvanesh Orekondy, Mario Fritz

Personal photos of individuals, when shared online, apart from exhibiting a myriad of memorable details, also reveal a wide range of private information and potentially entail privacy risks (e.g., online harassment, tracking).


Knockoff Nets: Stealing Functionality of Black-Box Models

2 code implementations • CVPR 2019 • Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz

We formulate model functionality stealing as a two-step approach: (i) querying the black-box model with a set of input images to obtain predictions; and (ii) training a "knockoff" on the queried image-prediction pairs.
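The two-step recipe above can be sketched end to end. Everything in this snippet is a hypothetical stand-in for illustration, not the paper's actual models: the victim is a linear softmax classifier, the knockoff is a softmax regression fit by gradient descent, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box victim: only its soft predictions are observable.
W_victim = rng.normal(size=(10, 3))

def victim_predict(x):
    logits = x @ W_victim
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Step (i): query the black box with a transfer set of inputs.
queries = rng.normal(size=(500, 10))
pseudo_labels = victim_predict(queries)  # queried predictions act as labels

# Step (ii): train a "knockoff" on the image-prediction pairs
# (softmax regression via cross-entropy gradient descent, as an illustration).
W_knock = np.zeros((10, 3))
for _ in range(300):
    logits = queries @ W_knock
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    grad = queries.T @ (probs - pseudo_labels) / len(queries)
    W_knock -= 0.5 * grad

# The knockoff should now mimic the victim on fresh inputs.
test_x = rng.normal(size=(200, 10))
agree = np.mean(victim_predict(test_x).argmax(1) == (test_x @ W_knock).argmax(1))
print(f"top-1 agreement with victim: {agree:.2f}")
```

The key property the sketch preserves is that the attacker never sees the victim's parameters or training data, only input-output pairs collected through queries.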

Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning

no code implementations • 15 May 2018 • Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz

At the core of FL is a network of anonymous user devices sharing training information (model parameter updates) computed locally on personal data.


Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images

no code implementations • ICCV 2017 • Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz

Third, we propose models that predict a user-specific privacy score from images in order to enforce the users' privacy preferences.
