Search Results for author: Jiacen Xu

Found 7 papers, 3 papers with code

Adversarial Attack Generation Empowered by Min-Max Optimization

1 code implementation • NeurIPS 2021 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.

Adversarial Attack • Adversarial Robustness
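The general pattern the abstract describes — min-max optimization over multiple domains — can be illustrated with a toy saddle-point problem: minimize over x the worst-case weighted loss max over simplex weights w of Σᵢ wᵢ fᵢ(x), solved by alternating a gradient-descent step on x with a projected gradient-ascent step on w. Everything below (the quadratic per-domain losses, the step sizes, and the simplex-projection helper) is an illustrative sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

# Three synthetic "domains", each with a quadratic loss f_i(x) = ||x - c_i||^2.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
f = lambda x: np.sum((x - centers) ** 2, axis=1)  # vector of per-domain losses

x = np.zeros(2)        # variable of the outer minimization
w = np.ones(3) / 3     # domain weights on the simplex (inner maximization)
eta_x, eta_w = 0.05, 0.1
for _ in range(500):
    losses = f(x)
    grad_x = np.sum(w[:, None] * 2.0 * (x - centers), axis=0)
    x = x - eta_x * grad_x                   # descent step on x
    w = project_simplex(w + eta_w * losses)  # projected ascent step on w
# x converges to the minimax center (2, 2), where the worst-case loss is 8.
```

The weights w automatically concentrate on the hardest domains, which is the mechanism the framework exploits when generating attacks that must succeed across several models or data transformations at once.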

Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness

no code implementations • 25 Sep 2019 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball-bounded input perturbations.

Adversarial Attack • Adversarial Robustness
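The AT principle in the abstract — minimize over model parameters the maximal adversarial loss — can be sketched in a few lines: an inner one-step sign-ascent approximation of the worst-case ℓ∞ perturbation, wrapped in an outer gradient-descent loop on logistic-regression weights. The synthetic data, the one-step inner solver, and all hyperparameters below are illustrative assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)   # linearly separable toy labels

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
eps, lr = 0.1, 0.5                   # l_inf budget and learning rate
w = np.zeros(5)
for _ in range(300):
    # Inner max (one sign-gradient step): worst-case perturbation per example.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)
    # Outer min: gradient step on the loss at the perturbed inputs.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
```

Replacing the one-step inner solver with several projected gradient steps gives the PGD-style AT commonly used in practice; the outer loop is unchanged.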

An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms

no code implementations • 25 Sep 2019 • Ruoxi Jia, Xuehui Sun, Jiacen Xu, Ce Zhang, Bo Li, Dawn Song

Existing approximation algorithms, although achieving great improvements over the exact algorithm, rely on retraining models multiple times and thus remain limited when applied to larger-scale learning tasks and real-world datasets.

Data Summarization • Data Valuation +1

Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?

1 code implementation • CVPR 2021 • Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song

Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning, and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation.

Data Summarization • Domain Adaptation
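For context on why retraining-based valuation is expensive, here is a hedged sketch of the permutation-sampling (data Shapley) baseline that scalable methods aim to improve on: each training point is credited with its average marginal gain in a utility, and every marginal evaluation requires "retraining" — here a deliberately cheap nearest-centroid classifier on a toy dataset. The dataset, utility, and exact enumeration over permutations are all illustrative assumptions, not the papers' algorithms.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
X_tr = np.array([[-2.0, 0.0], [-2.2, 0.3], [2.0, 0.0], [2.1, -0.2]])
y_tr = np.array([0, 0, 1, 0])        # the last point is deliberately mislabeled
X_val = np.vstack([rng.normal([-2.0, 0.0], 0.3, size=(10, 2)),
                   rng.normal([2.0, 0.0], 0.3, size=(10, 2))])
y_val = np.array([0] * 10 + [1] * 10)

def utility(idx):
    # Validation accuracy of a nearest-centroid classifier "retrained" on idx.
    if not idx:
        return 0.5                   # empty set: chance-level utility
    labels = np.unique(y_tr[idx])
    cents = np.array([X_tr[idx][y_tr[idx] == c].mean(axis=0) for c in labels])
    pred = labels[np.argmin(((X_val[:, None] - cents) ** 2).sum(-1), axis=1)]
    return (pred == y_val).mean()

n = len(y_tr)
values = np.zeros(n)
perms = list(permutations(range(n)))  # n is tiny here, so enumerate exactly
for perm in perms:
    prev, S = utility([]), []
    for i in perm:
        S.append(i)
        u = utility(S)
        values[i] += u - prev        # marginal contribution of point i
        prev = u
values /= len(perms)
# The mislabeled point ends up with the lowest (near-zero or negative) value.
```

With n training points, each permutation costs n retrainings, which is exactly the scalability bottleneck these papers target; in practice the permutations are Monte-Carlo sampled rather than enumerated.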

Maestro: A Gamified Platform for Teaching AI Robustness

no code implementations • 14 Jun 2023 • Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li, Sergio Gago-Masague

We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI.

Active Learning

AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks

no code implementations • 2 Mar 2024 • Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li

Large language models (LLMs) have demonstrated impressive results on natural language tasks, and security researchers are beginning to employ them in both offensive and defensive systems.

Computer Security • Language Modelling +1
