no code implementations • 22 Nov 2023 • Mingtian Tan, Tianhao Wang, Somesh Jha
In response, we develop a novel technique, RIW (Robust Invisible Watermarking), to embed invisible watermarks leveraging adversarial example techniques.
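The core idea of embedding a watermark as a small, visually imperceptible perturbation can be sketched with a toy linear decoder. Everything below (the decoder matrix `W`, the bit layout, the minimum-norm embedding) is an illustrative assumption, not RIW's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(x, W, bits, margin=1.0):
    """Embed +/-1 `bits` so that sign(W @ x') == bits, using the
    minimum-norm (hence low-visibility) perturbation that solves
    W @ (x + delta) = margin * bits."""
    target = margin * bits
    delta = W.T @ np.linalg.solve(W @ W.T, target - W @ x)
    return x + delta, delta

def extract_watermark(x, W):
    return np.sign(W @ x)

n_pixels, n_bits = 256, 16
W = rng.standard_normal((n_bits, n_pixels))   # hypothetical decoder
image = rng.uniform(0.0, 1.0, n_pixels)       # flattened grayscale image
bits = rng.choice([-1.0, 1.0], n_bits)

marked, delta = embed_watermark(image, W, bits)
assert np.array_equal(extract_watermark(marked, W), bits)
print("max pixel change:", np.abs(delta).max())
```

In the adversarial-example view, the perturbation budget plays the role of the invisibility constraint; a robust scheme would additionally optimize the perturbation to survive transformations such as compression or cropping, which this sketch does not attempt.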
no code implementations • 12 Nov 2023 • Zihang Xiang, Tianhao Wang, Di Wang
In this study, we propose a solution that specifically addresses the issue of node-level privacy.
no code implementations • 19 Oct 2023 • Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, Tianhao Wang
Then, this function assists in querying the semantic distribution of the sensitive dataset, facilitating the selection of data from the public dataset with analogous semantics for pre-training.
no code implementations • 17 Oct 2023 • Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Large Language Models (LLMs) are powerful tools for natural language processing, enabling novel applications and user experiences.
no code implementations • 27 Sep 2023 • Xuanlong Yu, Yi Zuo, Zitao Wang, Xiaowen Zhang, Jiaxuan Zhao, Yuting Yang, Licheng Jiao, Rui Peng, Xinyi Wang, Junpei Zhang, Kexin Zhang, Fang Liu, Roberto Alcover-Couso, Juan C. SanMiguel, Marcos Escudero-Viñolo, Hanlin Tian, Kenta Matsui, Tianhao Wang, Fahmy Adan, Zhitong Gao, Xuming He, Quentin Bouniot, Hossein Moghaddam, Shyam Nandan Rai, Fabio Cermelli, Carlo Masone, Andrea Pilzer, Elisa Ricci, Andrei Bursuc, Arno Solin, Martin Trapp, Rui Li, Angela Yao, Wenlong Chen, Ivor Simpson, Neill D. F. Campbell, Gianni Franchi
This paper outlines the winning solutions employed in addressing the MUAD uncertainty quantification challenge held at ICCV 2023.
no code implementations • 27 Jul 2023 • Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, Zhiyuan Li
Momentum is known to accelerate the convergence of gradient descent in strongly convex settings without stochastic gradient noise.
no code implementations • 3 Jul 2023 • Debopam Sanyal, Jui-Tse Hung, Manav Agrawal, Prahlad Jasti, Shahab Nikkhoo, Somesh Jha, Tianhao Wang, Sibin Mohan, Alexey Tumanov
Second, we counter the proposed attack with a noise-based defense mechanism that thwarts fingerprinting by adding noise to the specified performance metrics.
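A noise-based defense of this flavor can be sketched in a few lines: perturb each reported performance metric so repeated probes no longer identify the serving configuration. The Laplace noise and the latency example below are generic illustrations, not the paper's exact mechanism or calibration:

```python
import numpy as np

rng = np.random.default_rng(42)

def report_metric(true_latency_ms, scale=2.0):
    """Perturb a reported performance metric with Laplace noise so an
    attacker probing the system cannot reliably fingerprint the model
    behind it. The scale parameter trades off defense strength against
    the accuracy of the reported metric."""
    return true_latency_ms + rng.laplace(0.0, scale)

samples = [report_metric(10.0) for _ in range(1000)]
print("mean reported latency:", np.mean(samples))  # close to 10 ms
```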
no code implementations • 14 Jun 2023 • Xizixiang Wei, Tianhao Wang, Ruiquan Huang, Cong Shen, Jing Yang, H. Vincent Poor
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
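The convergence-versus-privacy tradeoff comes from how much noise is injected into client updates. A generic clip-and-noise sketch (the standard DP recipe; the paper's wireless-FL mechanism differs in its details, and the constants below are arbitrary) looks like:

```python
import numpy as np

rng = np.random.default_rng(7)

def privatize_update(update, clip_norm=1.0, noise_mult=1.0):
    """Clip a client's model update to bound its sensitivity, then add
    Gaussian noise. A larger noise multiplier gives a stronger
    differential privacy level but a slower convergence rate."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, update.shape)

client_updates = [rng.standard_normal(10) for _ in range(100)]
aggregate = np.mean([privatize_update(u) for u in client_updates], axis=0)
print("aggregate norm:", np.linalg.norm(aggregate))
```

Averaging over many clients shrinks the effective noise in the aggregate, which is the lever a convergence bound like the one above can exploit.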
no code implementations • 1 Jun 2023 • Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng Chen, Murat Kantarcioglu, Kangkook Jee
PROVEXPLAINER allowed simple DT models to achieve 95% fidelity to the GNN on program classification tasks with general graph structural features, and 99% fidelity on malware detection tasks with a task-specific feature package tailored for direct interpretation.
no code implementations • 10 May 2023 • Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes, where multiple agents cooperate via communication through a central server.
no code implementations • 18 Apr 2023 • James Koch, Woongjo Choi, Ethan King, David Garcia, Hrishikesh Das, Tianhao Wang, Ken Ross, Keerti Kappagantula
Lumped parameter methods aim to simplify the evolution of spatially-extended or continuous physical systems to that of a "lumped" element representative of the physical scales of the modeled system.
1 code implementation • 15 Apr 2023 • Zihang Xiang, Tianhao Wang, WanYu Lin, Di Wang
In contrast, we leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
2 code implementations • 5 Apr 2023 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Yang Zhang
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images during the model deployment phase.
no code implementations • 24 Feb 2023 • Ruitu Xu, Yifei Min, Tianhao Wang, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang
We study a heterogeneous agent macroeconomic model with an infinite number of households and firms competing in a labor market.
1 code implementation • 23 Feb 2023 • Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang
Given the simplicity and effectiveness of the attack method, our study indicates scientific plots indeed constitute a valid side channel for model information stealing attacks.
1 code implementation • 7 Oct 2022 • Chen Gong, Zhou Yang, Yunpeng Bai, Junda He, Jieke Shi, Kecen Li, Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Tianhao Wang
In this paper, we propose Baffle (Backdoor Attack for Offline Reinforcement Learning), an approach that automatically implants backdoors into RL agents by poisoning the offline RL dataset, and evaluate how different offline RL algorithms react to this attack.

1 code implementation • 6 Oct 2022 • Samuel Maddock, Graham Cormode, Tianhao Wang, Carsten Maple, Somesh Jha
There is great demand for scalable, secure, and efficient privacy-preserving machine learning models that can be trained over distributed data.
2 code implementations • 2 Aug 2022 • Zitao Li, Tianhao Wang, Ninghui Li
To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information for training the model, instead of the private data.
no code implementations • 22 Jul 2022 • Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal
Our attack can be easily deployed in the real world since it only requires rotating the object, as we show in both image classification and object detection applications.
no code implementations • 8 Jul 2022 • Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora
Conversely, continuous mirror descent with any Legendre function can be viewed as gradient flow with a related commuting parametrization.
no code implementations • 7 Jul 2022 • Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu
To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated contextual linear bandits, while achieving the same regret guarantee as in the single-agent setting.
1 code implementation • 25 May 2022 • Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick
Large language models have been shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase.
no code implementations • 7 Mar 2022 • Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang
We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market.
no code implementations • 25 Oct 2021 • Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu
To the best of our knowledge, this is the first algorithm with a sublinear regret guarantee for learning linear mixture SSP.
no code implementations • ICLR 2022 • Zhiyuan Li, Tianhao Wang, Sanjeev Arora
Understanding the implicit bias of Stochastic Gradient Descent (SGD) is one of the key challenges in deep learning, especially for overparametrized models, where the local minimizers of the loss function $L$ can form a manifold.
no code implementations • 29 Sep 2021 • Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia
In this paper, we focus on the problem of identifying bad training data when the underlying cause is unknown in advance.
no code implementations • 14 Jul 2021 • Si Chen, Tianhao Wang, Ruoxi Jia
Our algorithm does not rely on any feedback from annotators in the target domain and hence, can be used to perform zero-round active learning or warm-start existing multi-round active learning strategies.
1 code implementation • 13 Jul 2021 • Tianhao Wang, Yu Yang, Ruoxi Jia
The Shapley value (SV) and Least core (LC) are classic methods in cooperative game theory for cost/profit sharing problems.
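The Shapley value's definition, each player's marginal contribution averaged over all orderings, can be computed exactly for small games. The 3-player game below is a toy illustration, not data from the paper:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley value: each player's marginal contribution to the
    coalition of players preceding it, averaged over all orderings."""
    sv = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            sv[p] += value(frozenset(coalition)) - before
    return {p: total / len(perms) for p, total in sv.items()}

# Toy symmetric game: any pair is worth 0.8, the grand coalition 1.0.
def v(S):
    return {0: 0.0, 1: 0.0, 2: 0.8, 3: 1.0}[len(S)]

print(shapley_values([0, 1, 2], v))  # symmetry gives each player 1/3
```

The factorial number of orderings is exactly why, at data-valuation scale, the SV and LC must be approximated rather than enumerated.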
no code implementations • NeurIPS 2021 • Yifei Min, Tianhao Wang, Dongruo Zhou, Quanquan Gu
We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy.
no code implementations • 10 Jun 2021 • Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia
High-quality data is critical to train performant Machine Learning (ML) models, highlighting the importance of Data Quality Management (DQM).
1 code implementation • Findings (ACL) 2021 • Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, Sherman S. M. Chow
The sanitized texts also contribute to our sanitization-aware pretraining and fine-tuning, enabling privacy-preserving natural language processing over the BERT language model with promising utility.
no code implementations • 23 Apr 2021 • Tianhao Wang, Si Chen, Ruoxi Jia
In this work, we initiate the study of one-round active learning, which aims to select a subset of unlabeled data points that achieve the highest model performance after being labeled with only the information from initially labeled data points.
1 code implementation • 27 Mar 2021 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
In this paper, we propose GraphEraser, a novel machine unlearning framework tailored to graph data.
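The shard-and-retrain principle behind such unlearning frameworks can be sketched in a few lines. The round-robin partitioner and the label-memorizing "model" below are hypothetical stand-ins; GraphEraser's actual contribution, graph-aware balanced partitioning and learned aggregation, is omitted here:

```python
def partition(nodes, n_shards):
    """Round-robin split of (node_id, label) pairs into shards."""
    return [nodes[i::n_shards] for i in range(n_shards)]

def train(shard):
    # Stand-in "model": just memorizes its training nodes' labels.
    return dict(shard)

def unlearn(shards, models, node):
    """Delete `node` and retrain only the shard that contained it,
    instead of retraining on the full dataset."""
    for i, shard in enumerate(shards):
        if any(n == node for n, _ in shard):
            shards[i] = [(n, y) for n, y in shard if n != node]
            models[i] = train(shards[i])
    return shards, models

nodes = [(n, n % 2) for n in range(10)]
shards = partition(nodes, n_shards=2)
models = [train(s) for s in shards]
shards, models = unlearn(shards, models, node=4)
assert all(4 not in m for m in models)
```

For graphs, a naive random partition destroys edge structure, which is precisely the utility problem a graph-tailored framework has to solve.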
2 code implementations • 2 Mar 2021 • Wenxiao Wang, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, Dawn Song, Ruoxi Jia
Deep learning techniques have achieved remarkable performance in wide-ranging tasks.
no code implementations • NeurIPS 2021 • Tianhao Wang, Dongruo Zhou, Quanquan Gu
Specifically, for the batch learning model, our proposed LSVI-UCB-Batch algorithm achieves an $\tilde O(\sqrt{d^3H^3T} + dHT/B)$ regret, where $d$ is the dimension of the feature mapping, $H$ is the episode length, $T$ is the number of interactions and $B$ is the number of batches.
no code implementations • 17 Dec 2020 • Fabrizio Cicala, Weicheng Wang, Tianhao Wang, Ninghui Li, Elisa Bertino, Faming Liang, Yang Yang
Many proximity-based contact tracing (PCT) protocols have been proposed and deployed to combat the spread of COVID-19.
no code implementations • 14 Sep 2020 • Tianhao Wang, Johannes Rausch, Ce Zhang, Ruoxi Jia, Dawn Song
The federated SV preserves the desirable properties of the canonical SV, can be calculated without incurring extra communication cost, and also captures the effect of participation order on data value.
2 code implementations • 11 Sep 2020 • Tianhao Wang, Yuheng Zhang, Ruoxi Jia
This paper studies defense mechanisms against model inversion (MI) attacks -- a type of privacy attacks aimed at inferring information about the training data distribution given the access to a target machine learning model.
no code implementations • 24 May 2020 • Tianhao Wang, Joann Qiongna Chen, Zhikun Zhang, Dong Su, Yueqiang Cheng, Zhou Li, Ninghui Li, Somesh Jha
To our knowledge, this is the first LDP algorithm for publishing streaming data.
1 code implementation • 5 May 2020 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
More importantly, we show that our attack in multiple cases outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy.
2 code implementations • 2 Dec 2019 • Zitao Li, Tianhao Wang, Milan Lopuhaä-Zwakenberg, Boris Skoric, Ninghui Li
When collecting information, local differential privacy (LDP) alleviates users' concerns about privacy leakage, as each user's private information is randomized before being sent to the aggregator.
1 code implementation • 31 Oct 2019 • Tianhao Wang, Florian Kerschbaum
White-box watermarking algorithms have the advantage that they do not impact the accuracy of the watermarked model.
1 code implementation • 30 Aug 2019 • Tianhao Wang, Bolin Ding, Min Xu, Zhicong Huang, Cheng Hong, Jingren Zhou, Ninghui Li, Somesh Jha
When collecting information, local differential privacy (LDP) alleviates privacy concerns of users because their private information is randomized before being sent to the central aggregator.
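The classic LDP randomizer behind many frequency oracles is generalized randomized response: report the true value with a probability calibrated to the privacy budget, and a uniformly random other value otherwise. A minimal sketch (the toy domain and budget are illustrative):

```python
import math
import random

random.seed(0)

def grr_perturb(value, domain, epsilon):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + d - 1), otherwise a uniformly
    random *other* value, so the aggregator never sees raw inputs."""
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

domain = ["a", "b", "c", "d"]
reports = [grr_perturb("a", domain, epsilon=2.0) for _ in range(10000)]
print(reports.count("a") / len(reports))  # near p, about 0.71 here
```

The aggregator then inverts this known perturbation probability to obtain unbiased frequency estimates over all users.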
1 code implementation • 20 May 2019 • Tianhao Wang, Milan Lopuhaä-Zwakenberg, Zitao Li, Boris Skoric, Ninghui Li
In this paper, we show that adding post-processing steps to FO protocols, exploiting the knowledge that all individual frequencies are non-negative and sum up to one, can lead to significantly better accuracy for a wide range of tasks, including estimating the frequencies of individual values, of the most frequent values, and of subsets of values.
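One simple post-processing step in this spirit clips negative estimates to zero and subtracts a constant from the remaining positive entries until the total is exactly one. This is a sketch of one such variant, assuming at least one estimate stays positive, not the paper's full comparison of methods:

```python
def norm_sub(est):
    """Project raw frequency-oracle estimates onto valid frequencies:
    non-negative entries that sum to one. Negative entries are zeroed
    and the excess mass is removed uniformly from positive entries."""
    est = list(est)
    while True:
        positive = [i for i, v in enumerate(est) if v > 0]
        delta = (sum(est[i] for i in positive) - 1.0) / len(positive)
        est = [max(0.0, v - delta) if i in positive else 0.0
               for i, v in enumerate(est)]
        if all(v >= 0 for v in est) and abs(sum(est) - 1.0) < 1e-9:
            return est

print(norm_sub([0.5, 0.4, 0.3, -0.1]))
```

Because the projection uses only public knowledge (frequencies form a probability distribution), it costs no additional privacy budget.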
no code implementations • ICML 2018 • Pan Xu, Tianhao Wang, Quanquan Gu
We provide a second-order stochastic differential equation (SDE), which characterizes the continuous-time dynamics of accelerated stochastic mirror descent (ASMD) for strongly convex functions.