Search Results for author: Kun-Peng Ning

Found 8 papers, 3 papers with code

Bidirectional Uncertainty-Based Active Learning for Open Set Annotation

no code implementations • 23 Feb 2024 • Chen-Chen Zong, Ye-Wen Wang, Kun-Peng Ning, Haibo Ye, Sheng-Jun Huang

In this paper, we attempt to query examples that are both likely from known classes and highly informative, and propose a Bidirectional Uncertainty-based Active Learning (BUAL) framework.

Active Learning
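The BUAL idea of querying examples that are both informative and likely from known classes can be pictured as a fused ranking score. The fusion rule below (predictive entropy weighted by known-class probability) and all names are illustrative assumptions, not the paper's exact bidirectional uncertainty estimate.

```python
import numpy as np

def bual_style_scores(probs, p_known):
    """Rank unlabeled examples for annotation (simplified sketch).

    probs: (n, c) class probabilities from the current model.
    p_known: (n,) estimated probability that each example belongs to a
             known class, e.g. from an open-set detector (hypothetical input).
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # informativeness
    return p_known * entropy  # favor examples that are informative AND likely known

# usage: send the top-k scored examples to the annotator
probs = np.random.dirichlet(np.ones(5), size=100)
p_known = np.random.rand(100)
query_idx = np.argsort(-bual_style_scores(probs, p_known))[:10]
```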

Peer-review-in-LLMs: Automatic Evaluation Method for LLMs in Open-environment

1 code implementation • 2 Feb 2024 • Kun-Peng Ning, Shuo Yang, Yu-Yang Liu, Jia-Yu Yao, Zhen-Hui Liu, Yu Wang, Ming Pang, Li Yuan

Existing large language model (LLM) evaluation methods typically focus on testing performance on closed-environment, domain-specific benchmarks with human annotations.
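The title's peer-review protocol suggests that each model judges the other models' answers and the received scores are aggregated into a ranking. The toy score matrix and the uniform averaging below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# review_scores[i][j]: score model i assigns to model j's answer (diagonal unused).
# In practice these would come from prompting each LLM to judge its peers.
review_scores = np.array([
    [0.0, 0.7, 0.4],
    [0.8, 0.0, 0.5],
    [0.9, 0.6, 0.0],
])
n = review_scores.shape[0]
ranking = review_scores.sum(axis=0) / (n - 1)  # mean score each model receives
print(ranking)  # higher = judged better by its peers
```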

LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples

1 code implementation • 2 Oct 2023 • Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan

This phenomenon forces us to reconsider hallucination as another view of adversarial examples: it shares similar features with conventional adversarial examples as a basic property of LLMs.

Hallucination

Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search

no code implementations • 2 Aug 2023 • Kun-Peng Ning, Ming Pang, Zheng Fang, Xue Jiang, Xi-Wei Zhao, Chang-Ping Peng, Zhan-Gang Lin, Jing-He Hu, Jing-Ping Shao

To overcome this challenge, in this paper, we propose knowledge condensation (KC), a simple yet effective knowledge distillation framework to boost the classification performance of the online FastText model under strict low latency constraints.

Knowledge Distillation
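KC is described as a knowledge distillation framework, so a standard soft-label distillation loss conveys the mechanism: an offline teacher's softened predictions supervise the low-latency student. This is the generic distillation recipe, not JD's exact condensation pipeline, and the temperature and mixing weight are assumed values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft teacher targets with hard labels (generic KD sketch)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```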

Active Learning for Open-set Annotation

1 code implementation • CVPR 2022 • Kun-Peng Ning, Xun Zhao, Yu Li, Sheng-Jun Huang

To tackle this open-set annotation (OSA) problem, we propose a new active learning framework called LfOSA, which boosts the classification performance with an effective sampling strategy to precisely detect examples from known classes for annotation.

Active Learning
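LfOSA's sampling strategy aims to surface examples that are likely from known classes. A much-simplified proxy is to rank unlabeled examples by the detector's maximum activation; the paper's full selection strategy is more involved, so treat this as a sketch of the intuition only.

```python
import numpy as np

def lfosa_style_selection(logits, budget):
    """Pick unlabeled examples most likely to belong to known classes.

    logits: (n, c) detector outputs over the known classes. Ranking by the
    raw maximum activation is a deliberate simplification.
    """
    max_activation = logits.max(axis=1)
    return np.argsort(-max_activation)[:budget]

# usage: choose 50 examples for the annotator from 1000 unlabeled candidates
logits = np.random.randn(1000, 10)
to_label = lfosa_style_selection(logits, budget=50)
```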

Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries

no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang

Recently, much research has been devoted to improving model robustness by training with noise perturbations.

Active Learning
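One way to read "adaptively correcting perturbation levels with active queries" is to ask an annotator whether a perturbed example still carries its label, then search for the strongest admissible noise. The binary search below is an illustrative interpretation, not the paper's query strategy; oracle_keeps_label is a hypothetical stand-in for the human annotator.

```python
import numpy as np

def query_perturbation_level(x, oracle_keeps_label, eps_max=1.0, steps=6):
    """Binary-search the largest noise level the oracle says keeps the label."""
    rng = np.random.default_rng(0)
    direction = rng.normal(size=x.shape)
    direction /= np.linalg.norm(direction)  # fixed unit noise direction
    lo, hi = 0.0, eps_max
    for _ in range(steps):
        mid = (lo + hi) / 2
        if oracle_keeps_label(x + mid * direction):
            lo = mid  # label survives: try a stronger perturbation
        else:
            hi = mid  # label flipped: back off
    return lo
```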

Co-Imitation Learning without Expert Demonstration

no code implementations • 27 Mar 2021 • Kun-Peng Ning, Hu Xu, Kun Zhu, Sheng-Jun Huang

Imitation learning is a primary approach to improving the efficiency of reinforcement learning by exploiting expert demonstrations.

Imitation Learning
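Co-imitation without an expert can be pictured as two learners that roll out in the environment and imitate whichever of them did better, so demonstrations emerge from the pair itself. The selection-by-return rule below is a schematic assumption; env_rollout and imitate are hypothetical callables.

```python
def co_imitation_round(agent_a, agent_b, env_rollout, imitate):
    """One schematic round of co-imitation between two learners.

    env_rollout(agent) -> (trajectory, episode_return)
    imitate(agent, trajectory) performs a behavior-cloning update.
    """
    traj_a, ret_a = env_rollout(agent_a)
    traj_b, ret_b = env_rollout(agent_b)
    if ret_a > ret_b:
        imitate(agent_b, traj_a)  # weaker learner imitates the stronger rollout
    elif ret_b > ret_a:
        imitate(agent_a, traj_b)
```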

Reinforcement Learning with Supervision from Noisy Demonstrations

no code implementations • 14 Jun 2020 • Kun-Peng Ning, Sheng-Jun Huang

In this paper, we propose a novel framework to adaptively learn the policy by jointly interacting with the environment and exploiting the expert demonstrations.

Reinforcement Learning (RL)
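Jointly interacting with the environment while exploiting noisy demonstrations suggests down-weighting demonstrations that the improving policy finds implausible, so unreliable expert data fades out of the imitation term over training. The softmax trust weighting below is an illustrative assumption, not the paper's rule.

```python
import numpy as np

def demo_trust_weights(policy_probs_on_demo_actions, temperature=0.5):
    """Per-demonstration weights for an imitation term under noisy experts.

    policy_probs_on_demo_actions: (n,) probability the current policy assigns
    to each demonstrated action; implausible demos get smaller weights.
    """
    w = np.exp(policy_probs_on_demo_actions / temperature)
    return w / w.sum()

# usage: demos the policy already finds plausible dominate the imitation loss
weights = demo_trust_weights(np.array([0.9, 0.1, 0.6]))
```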
