1 code implementation • 2 Feb 2024 • Kun-Peng Ning, Shuo Yang, Yu-Yang Liu, Jia-Yu Yao, Zhen-Hui Liu, Yu Wang, Ming Pang, Li Yuan
Existing evaluation methods for large language models (LLMs) typically focus on testing performance on closed-environment, domain-specific benchmarks with human annotations.
no code implementations • 2 Aug 2023 • Kun-Peng Ning, Ming Pang, Zheng Fang, Xue Jiang, Xi-Wei Zhao, Chang-Ping Peng, Zhan-Gang Lin, Jing-He Hu, Jing-Ping Shao
To overcome this challenge, in this paper we propose knowledge condensation (KC), a simple yet effective knowledge distillation framework that boosts the classification performance of the online FastText model under strict low-latency constraints.
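The paper's KC framework is not detailed in this snippet, but distillation frameworks of this kind typically train the small online model against the soft predictions of a larger teacher. The sketch below shows only that generic distillation objective (temperature-softened cross-entropy); all names, the temperature value, and the example logits are illustrative, not from the paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields softer targets.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's soft targets and the student's
    # softened predictions -- the generic distillation objective.
    soft_targets = softmax(teacher_logits, temperature)
    log_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return float(-(soft_targets * log_student).sum(axis=-1).mean())

# Illustrative logits: a student that tracks the teacher incurs a lower
# loss than one that disagrees with it.
teacher = np.array([[4.0, 1.0, 0.5]])
good_student = np.array([[3.5, 1.2, 0.4]])
bad_student = np.array([[0.2, 3.0, 2.5]])
print(distillation_loss(good_student, teacher),
      distillation_loss(bad_student, teacher))
```

In an online serving setting, the heavy teacher runs offline while only the distilled FastText-sized student is deployed, which is what keeps the latency budget intact.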
no code implementations • 4 Sep 2022 • Xin Mu, Ming Pang, Feida Zhu
In this paper, we introduce Data Provenance via Differential Auditing (DPDA), a practical framework for auditing data provenance based on statistically significant differentials: after a carefully designed transformation, perturbed inputs drawn from the target model's training set induce much more drastic changes in the model's output than inputs from outside the training set.
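The core differential idea can be sketched as follows: measure how much a model's output shifts under small input perturbations, and flag inputs whose sensitivity exceeds a calibrated threshold as likely training members. This is a minimal sketch of the general principle only; the perturbation scheme, transformation design, and threshold calibration in DPDA are not specified in this snippet, and all names and parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_shift(model_fn, x, noise_scale=0.05, trials=20):
    # Mean absolute change in the model's output when the input is
    # perturbed with small Gaussian noise (a stand-in for the paper's
    # carefully designed transformation).
    base = model_fn(x)
    shifts = []
    for _ in range(trials):
        noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
        shifts.append(np.abs(model_fn(noisy) - base).mean())
    return float(np.mean(shifts))

def audit(model_fn, candidate, threshold):
    # Flag the candidate as likely training data if its perturbation
    # sensitivity exceeds the calibrated threshold.
    return output_shift(model_fn, candidate) > threshold
```

The threshold would in practice be calibrated on known non-member data so that the differential between member and non-member sensitivity is statistically significant.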
no code implementations • the 18th IEEE International Conference on Data Mining 2019 • Ming Pang, Kai-Ming Ting, Peng Zhao, Zhi-Hua Zhou
Most studies of deep learning are based on neural network models, in which many layers of parameterized, nonlinear, differentiable modules are trained by backpropagation.
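As a point of contrast for the neural-network setting the abstract describes, the stacked-differentiable-modules-plus-backpropagation recipe can be sketched in a few lines: the chain rule carries gradients from the loss back through each layer's parameters. This is a generic illustrative example, not the method of the paper; all shapes and the learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: each layer is a parameterized, nonlinear,
# differentiable module trained end-to-end by backpropagation.
W1 = 0.5 * rng.normal(size=(3, 4))
W2 = 0.5 * rng.normal(size=(4, 1))
x = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

losses = []
for _ in range(200):
    h = np.tanh(x @ W1)                    # forward through layer 1
    pred = h @ W2                          # forward through layer 2
    losses.append(float(((pred - y) ** 2).mean()))
    g_pred = 2 * (pred - y) / len(x)       # dLoss/dpred
    g_W2 = h.T @ g_pred                    # gradient for layer-2 weights
    g_h = g_pred @ W2.T                    # backpropagate into hidden layer
    g_W1 = x.T @ (g_h * (1 - h ** 2))      # tanh'(z) = 1 - tanh(z)^2
    W1 -= 0.05 * g_W1                      # gradient-descent updates
    W2 -= 0.05 * g_W2
```

Approaches outside this paradigm (such as non-differentiable, tree-based deep models) must replace the gradient machinery above with a different layer-wise training signal.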
no code implementations • NeurIPS 2018 • Ming Pang, Wei Gao, Min Tao, Zhi-Hua Zhou
This work considers a different attack style: unorganized malicious attacks, in which attackers individually use a small number of user profiles to attack different items without any organizer.