no code implementations • 7 Feb 2022 • Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan
Privacy attacks on machine learning models aim to identify the data that is used to train such models.
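For illustration, one common instantiation of such an attack is membership inference via a per-example loss threshold. The sketch below is a generic example, not this paper's method; the model interface and the threshold value are hypothetical.

```python
import numpy as np

def membership_inference(model, x, y, threshold=0.5):
    """Guess whether the labeled example (x, y) was in the training set.

    Heuristic: models usually fit their training points more closely,
    so an unusually low loss on (x, y) suggests membership.
    `threshold` is a hypothetical, dataset-dependent value and
    `model` is any classifier exposing predict_proba (e.g. scikit-learn).
    """
    probs = model.predict_proba(x.reshape(1, -1))[0]
    loss = -np.log(probs[y] + 1e-12)   # cross-entropy on this single example
    return loss < threshold            # True -> likely a training point
```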
1 code implementation • 1 Jul 2021 • Haibin Wu, Po-chun Hsu, Ji Gao, Shanshan Zhang, Shen Huang, Jian Kang, Zhiyong Wu, Helen Meng, Hung-Yi Lee
We also show that the neural vocoder adopted in the detection framework is dataset-independent.
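A rough sketch of the general detection idea, with placeholder callables rather than the paper's actual API: re-synthesize the input with a neural vocoder and flag inputs whose speaker-verification score shifts sharply after re-synthesis.

```python
def detect_adversarial(waveform, asv_score, vocoder_resynthesize, tau=0.2):
    """Flag a possibly adversarial sample for speaker verification.

    Idea: adversarial perturbations tend not to survive vocoder
    re-synthesis, so the verification score of the re-synthesized audio
    drifts away from the original. `asv_score` and `vocoder_resynthesize`
    are assumed callables; `tau` is a hypothetical decision threshold.
    """
    resynth = vocoder_resynthesize(waveform)
    gap = abs(asv_score(waveform) - asv_score(resynth))
    return gap > tau
```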
no code implementations • 18 May 2021 • Ji Gao, Amin Karbasi, Mohammad Mahmoody
In this paper, we study PAC learnability and certification of predictions under instance-targeted poisoning attacks, in which an adversary who knows the test instance may change a fraction of the training set with the goal of fooling the learner at that instance.
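As a toy simulation of this threat model (not the paper's certification procedure; the poisoning fraction and the downstream learner are illustrative), the adversary replaces a fraction of the training set with copies of the known test point carrying the wrong label:

```python
import numpy as np

def instance_targeted_poison(X, y, x_test, wrong_label, frac=0.1, seed=0):
    """Replace a `frac` fraction of the training set with the known test
    instance, labeled incorrectly, to bias the learner at x_test."""
    rng = np.random.default_rng(seed)
    n_poison = int(frac * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_p, y_p = X.copy(), y.copy()
    X_p[idx] = x_test
    y_p[idx] = wrong_label
    return X_p, y_p

# Usage idea: fit any learner (e.g. a scikit-learn classifier) on the
# poisoned data and compare its prediction at x_test with the clean model.
```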
no code implementations • NeurIPS 2020 • Meiyi Ma, Ji Gao, Lu Feng, John Stankovic
In this paper, we develop a new temporal logic-based learning framework, STLnet, which guides the RNN learning process with auxiliary knowledge of model properties, and produces a more robust model for improved future predictions.
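A minimal PyTorch-style sketch of the general idea of adding a property-violation penalty to the RNN's training loss; the specific property (predictions stay within given bounds) and the weight `lam` are illustrative stand-ins, not STLnet's actual teacher-network construction.

```python
import torch

def property_penalty(pred, lo=0.0, hi=1.0):
    """Penalty for violating a simple 'always within bounds' property,
    standing in for a temporal-logic constraint on the predicted trace."""
    return torch.relu(lo - pred).mean() + torch.relu(pred - hi).mean()

def training_loss(pred, target, lam=0.5):
    """Task loss plus a weighted logic-violation term (lam is illustrative)."""
    return torch.nn.functional.mse_loss(pred, target) + lam * property_penalty(pred)
```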
no code implementations • 21 Mar 2018 • Jack Lanchantin, Ji Gao
Statistical language models are powerful tools which have been used for many tasks within natural language processing.
2 code implementations • 13 Jan 2018 • Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi
Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to black-box attacks, which are more realistic scenarios.
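An illustrative black-box word-level attack in the spirit described here: rank words by how much removing each one changes the model's output score (query access only), then apply a small character edit to the top-ranked word. The function names and the particular edit are placeholders, not the paper's exact transformations.

```python
def rank_words(sentence, score_fn):
    """Score each word by the drop in the model's output when it is removed;
    only black-box query access (score_fn) is assumed."""
    words = sentence.split()
    base = score_fn(" ".join(words))
    importances = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        importances.append(base - score_fn(reduced))
    return sorted(range(len(words)), key=lambda i: importances[i], reverse=True)

def perturb(sentence, score_fn):
    """Apply one simple character swap to the most important word."""
    words = sentence.split()
    i = rank_words(sentence, score_fn)[0]
    w = words[i]
    if len(w) > 3:
        words[i] = w[0] + w[2] + w[1] + w[3:]   # swap the 2nd and 3rd characters
    return " ".join(words)
```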
no code implementations • 22 Feb 2017 • Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi
By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases robustness against such inputs.
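A rough PyTorch sketch of this masking idea: build a fixed binary mask over the feature layer that zeroes the dimensions whose activations change most between clean inputs and their adversarial counterparts. The fraction removed and the surrounding layer names are illustrative assumptions.

```python
import torch

def build_mask(feats_clean, feats_adv, remove_frac=0.1):
    """Zero out the `remove_frac` of feature dimensions that shift the most
    under adversarial perturbation (treated as unnecessary for the task)."""
    shift = (feats_clean - feats_adv).abs().mean(dim=0)   # per-feature sensitivity
    k = int(remove_frac * shift.numel())
    mask = torch.ones_like(shift)
    mask[shift.topk(k).indices] = 0.0
    return mask

# At inference (hypothetical pipeline):
# logits = classifier_head(mask * feature_extractor(x))
```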
2 code implementations • 9 Feb 2017 • Beilun Wang, Ji Gao, Yanjun Qi
Estimating multiple sparse Gaussian Graphical Models (sGGMs) jointly for many related tasks (large $K$) in a high-dimensional (large $p$) setting is an important problem.
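For context, a common formulation of the joint estimation problem (a generic joint penalized-likelihood objective, not necessarily the estimator proposed in this paper) is

$$
\min_{\{\Omega^{(k)} \succ 0\}_{k=1}^{K}} \; \sum_{k=1}^{K} \Big( \operatorname{tr}\big(\hat{\Sigma}^{(k)} \Omega^{(k)}\big) - \log\det \Omega^{(k)} \Big) \;+\; \lambda_1 \sum_{k=1}^{K} \big\|\Omega^{(k)}\big\|_{1} \;+\; \lambda_2 \, \mathcal{P}\big(\Omega^{(1)},\dots,\Omega^{(K)}\big),
$$

where $\hat{\Sigma}^{(k)}$ is the sample covariance of task $k$, $\Omega^{(k)}$ the corresponding precision matrix, and $\mathcal{P}$ is a coupling penalty that encourages the $K$ tasks to share structure.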
no code implementations • 1 Dec 2016 • Beilun Wang, Ji Gao, Yanjun Qi
Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples.