no code implementations • 1 Jul 2024 • Dan Peng, Zhihui Fu, Jun Wang
To tackle this, we propose employing derivative-free optimization techniques to enable on-device fine-tuning of LLMs, even on memory-limited mobile devices.
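The abstract does not specify which derivative-free method is used; a minimal sketch of one common zeroth-order approach (a two-point SPSA-style gradient estimate) illustrates the memory appeal: only forward loss evaluations are needed, so no backward pass or activation storage is required. The function names and the toy quadratic objective below are illustrative, not from the paper.

```python
import numpy as np

def spsa_gradient(loss_fn, params, eps=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate (SPSA-style).

    Uses only forward evaluations of loss_fn, avoiding the memory
    cost of backpropagation -- the property that makes derivative-free
    methods attractive on memory-limited devices."""
    rng = np.random.default_rng(seed)
    z = rng.choice([-1.0, 1.0], size=params.shape)  # Rademacher direction
    loss_plus = loss_fn(params + eps * z)
    loss_minus = loss_fn(params - eps * z)
    return (loss_plus - loss_minus) / (2 * eps) * z

# Toy usage: minimize f(w) = ||w - 3||^2 with plain zeroth-order SGD.
w = np.zeros(4)
f = lambda p: float(np.sum((p - 3.0) ** 2))
for step in range(500):
    w -= 0.05 * spsa_gradient(f, w, seed=step)
```

The same estimator scales to model weights in principle, though practical on-device schemes add tricks (shared random seeds, per-layer perturbation) not shown here.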
no code implementations • 15 Jun 2024 • Tian-Hua Li, Tian-Fang Ma, Dan Peng, Wei-Long Zheng, Bao-liang Lu
Deep learning models prove effective at learning EEG and eye-movement features for classifying brain activities.
no code implementations • 13 Apr 2021 • Shiyi Chen, Ziao Wang, Xinni Zhang, Xiaofeng Zhang, Dan Peng
Graph representation learning has long been an important yet challenging task for various real-world applications.
no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input; however, this inevitably causes a severe accuracy reduction, since it discards non-robust yet useful features.
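For readers unfamiliar with AT, the standard recipe (due to the adversarial-training literature generally, not specific to this paper) is: craft a worst-case perturbation of each input, then take the gradient step on the loss at the perturbed input. A minimal sketch on a toy logistic-regression model, using the Fast Gradient Sign Method as the inner attack; all function names are illustrative:

```python
import numpy as np

def logistic_grad_x(x, y, w):
    """Gradient of the logistic loss log(1 + exp(-y * w.x)) w.r.t. the input x."""
    return -y * w / (1.0 + np.exp(y * (x @ w)))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method: move x by eps in the loss-increasing direction."""
    return x + eps * np.sign(logistic_grad_x(x, y, w))

def adv_train_step(x, y, w, eps=0.1, lr=0.1):
    """One AT step: attack the input, then descend on the loss at the
    perturbed input (the min-max formulation in miniature)."""
    x_adv = fgsm(x, y, w, eps)
    grad_w = -y * x_adv / (1.0 + np.exp(y * (x_adv @ w)))
    return w - lr * grad_w

# Toy run: two linearly separable points with labels +1 / -1.
data = [(np.array([2.0, 0.0]), 1.0), (np.array([-2.0, 0.0]), -1.0)]
w = np.zeros(2)
for _ in range(200):
    for x, y in data:
        w = adv_train_step(x, y, w)
```

Training only on perturbed inputs is exactly what forces the model toward robust features, and why useful-but-fragile features get discarded.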
no code implementations • 22 Oct 2019 • Dan Peng, Zizhan Zheng, Linhao Luo, Xiaofeng Zhang
In this paper, we propose the novel concepts of structure patterns and structure-aware perturbations that relax the small perturbation constraint while still keeping images natural.
no code implementations • 25 Sep 2019 • Tianshuo Cong, Dan Peng, Furui Liu, Zhitang Chen
Our experiments demonstrate that our method correctly identifies the bivariate causal relationship between concepts in images, and that the learned representation enables do-calculus manipulation of images, generating artificial images that may break physical laws depending on where we intervene in the causal system.
1 code implementation • 8 Sep 2018 • Dan Peng, Zizhan Zheng, Xiaofeng Zhang
A common requirement in all these works is that the malicious perturbations should be small enough (measured by an L_p norm for some p) so that they are imperceptible to humans.
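The L_p constraint mentioned above is easy to make concrete: the perturbation delta = x_adv - x is required to have small L_p norm, with p = 2 (Euclidean) and p = infinity (max per-pixel change) being the usual choices. A small sketch, with illustrative names:

```python
import numpy as np

def perturbation_norm(x, x_adv, p):
    """Size of an adversarial perturbation under the L_p norm."""
    delta = (x_adv - x).ravel()
    if np.isinf(p):
        return np.abs(delta).max()          # L_inf: largest per-pixel change
    return np.sum(np.abs(delta) ** p) ** (1.0 / p)

x = np.zeros((2, 2))
x_adv = x + np.array([[0.01, -0.01], [0.02, 0.0]])
print(perturbation_norm(x, x_adv, np.inf))      # 0.02
print(round(perturbation_norm(x, x_adv, 2), 4)) # 0.0245
```

An attack is then considered imperceptible when this norm stays below a fixed budget (commonly written epsilon), which is precisely the "small perturbation constraint" that the structure-aware approach above relaxes.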