no code implementations • EMNLP (ClinicalNLP) 2020 • Wenjie Wang, Youngja Park, Taesung Lee, Ian Molloy, Pengfei Tang, Li Xiong
Among the modalities of medical data, clinical summaries are at higher risk of attack because they are generated by third-party companies.
1 code implementation • 3 Aug 2023 • Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy, Masha Zorin
In this work, we propose a new framework to enable the generation of adversarial inputs irrespective of the input type and task domain.
no code implementations • 14 Dec 2020 • Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy
On CIFAR10, a non-robust LeNet model has a 21.63% error rate, while a model created using verifiable training with an L-infinity robustness criterion of 8/255 has an error rate of 57.10%.
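For intuition, an L-infinity robustness criterion of 8/255 is commonly checked empirically with an attack such as projected gradient descent (PGD) rather than with the paper's verifiable-training procedure; the sketch below illustrates that evaluation only, and `model` and `loader` are assumed placeholders.

```python
# Minimal PGD-based robustness evaluation sketch (empirical proxy, not
# verifiable training); `model` and `loader` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent confined to the L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def error_rate(model, loader, attack=None):
    """Fraction of examples the model misclassifies, optionally under attack."""
    wrong, total = 0, 0
    for x, y in loader:
        if attack is not None:
            x = attack(model, x, y)
        with torch.no_grad():
            wrong += (model(x).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return wrong / total
```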
no code implementations • 14 Jul 2020 • Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy
In this work we study the limitations of robust classification if the target metric is uncertain.
no code implementations • 11 Jun 2020 • Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy
Backdoor attacks mislead machine-learning models into outputting an attacker-specified class when presented with a specific trigger at test time.
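As a minimal illustration of that trigger mechanism, a BadNets-style poisoning step stamps a small patch on a fraction of the training images and relabels them with the attacker's target class; the (N, H, W, C) array layout with values in [0, 1] and the parameter values below are assumptions, not details from the paper.

```python
# BadNets-style poisoning sketch: stamp a trigger patch on a small fraction of
# training images and relabel them to the attacker-specified class.
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, patch=3, seed=0):
    """Return a poisoned copy of (images, labels); shapes assumed (N, H, W, C) in [0, 1]."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # white square trigger, bottom-right corner
    labels[idx] = target_class               # attacker-specified class
    return images, labels
```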
no code implementations • 7 Dec 2018 • Zhongshu Gu, Hani Jamjoom, Dong Su, Heqing Huang, Jialong Zhang, Tengfei Ma, Dimitrios Pendarakis, Ian Molloy
We also demonstrate that when malicious training participants implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned and mislabeled training data that lead to runtime mispredictions.
1 code implementation • 9 Nov 2018 • Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava
As machine learning (ML) models are increasingly trusted to make decisions across diverse areas, the safety of systems that rely on them has become a growing concern.
no code implementations • 31 May 2018 • Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su
Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs.
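A minimal sketch of that threat model, assuming a PyTorch victim and surrogate: the adversary only observes the victim's output labels on inputs it chooses and fits the surrogate to them (the defense the paper proposes is not shown here); `victim`, `surrogate`, and `queries` are hypothetical placeholders.

```python
# Label-only model-stealing sketch: query the victim for hard labels on chosen
# inputs, then train a surrogate on those (input, label) pairs.
import torch
import torch.nn.functional as F

def steal(victim, surrogate, queries, epochs=5, lr=1e-3):
    with torch.no_grad():
        labels = victim(queries).argmax(dim=1)      # adversary sees only output labels
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(surrogate(queries), labels).backward()
        opt.step()
    return surrogate
```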
no code implementations • 12 Nov 2013 • Shandian Zhe, Yuan Qi, Youngja Park, Ian Molloy, Suresh Chari
To overcome this limitation, we present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor decomposition algorithm on MapReduce.
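For context, a plain Tucker decomposition can be sketched via the higher-order SVD; DINTUCKER is a nonlinear, distributed generalization of this classical model, which the few lines below do not attempt to reproduce, and the tensor and ranks are illustrative.

```python
# Classical Tucker decomposition via higher-order SVD (HOSVD) for intuition only;
# not the distributed nonlinear DINTUCKER algorithm.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Return a core tensor and per-mode factor matrices of the given ranks."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

T = np.random.rand(10, 12, 8)          # illustrative 3-way tensor
core, factors = hosvd(T, ranks=(3, 4, 2))
```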