no code implementations • 5 Feb 2025 • Hao Zeng, Kangdao Liu, BingYi Jing, Hongxin Wei
In this work, we empirically find that the tuning bias (the coverage gap introduced by using the same dataset for both tuning and calibration) is negligible for simple parameter tuning in many conformal prediction methods.
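To make the tuning/calibration distinction concrete, here is a minimal sketch of split conformal prediction, which this line of work builds on. The nonconformity scores and the `alpha` level are illustrative stand-ins, not details from the paper; the "tuning bias" arises when the same calibration scores are also used to tune hyperparameters of the score function.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores.

    A test point is included in the prediction set iff its nonconformity
    score is <= this threshold, giving >= (1 - alpha) marginal coverage
    when calibration and test data are exchangeable.
    """
    n = len(cal_scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

# Stand-in calibration scores (e.g. 1 - softmax prob. of the true class).
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)
qhat = conformal_quantile(cal_scores, alpha=0.1)
```

Reusing `cal_scores` both to pick hyperparameters of the score function and to compute `qhat` is exactly the double use whose coverage effect the paper measures.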
no code implementations • 24 Oct 2024 • Hengxiang Zhang, Hongfu Gao, Qiang Hu, Guanhua Chen, Lili Yang, BingYi Jing, Hongxin Wei, Bing Wang, Haifeng Bai, Lei Yang
While previous works have introduced several benchmarks to evaluate the safety risk of LLMs, the community still has a limited understanding of current LLMs' capability to recognize illegal and unsafe content in Chinese contexts.
no code implementations • 9 Oct 2024 • Hengxiang Zhang, Songxin Zhang, BingYi Jing, Hongxin Wei
In light of this, we introduce a novel and effective method termed Fine-tuned Score Deviation (FSD), which improves the performance of current scoring functions for pretraining data detection.
no code implementations • 3 Apr 2024 • Simiao Li, Yun Zhang, Wei Li, Hanting Chen, Wenjia Wang, BingYi Jing, Shaohui Lin, Jie Hu
Knowledge distillation (KD) is a promising yet challenging model compression technique that transfers rich learning representations from a well-performing but cumbersome teacher model to a compact student model.
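For readers unfamiliar with KD, the classic formulation (Hinton et al.) trains the student to match the teacher's temperature-softened output distribution alongside the usual hard-label loss. This NumPy sketch shows that generic loss; the names `T` and `alpha` are illustrative and not taken from the paper.

```python
import numpy as np

def softmax(x, T=1.0):
    z = x / T
    z = z - z.max(axis=1, keepdims=True)  # stabilize exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    # Cross-entropy to the teacher (equals KL up to the teacher's entropy,
    # which is constant w.r.t. the student); T^2 rescales gradients to
    # balance against the hard-label term.
    soft = -(p_t * log_p_s).sum(axis=1).mean() * (T * T)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

rng = np.random.default_rng(0)
student = rng.normal(size=(8, 10))   # compact student's logits
teacher = rng.normal(size=(8, 10))   # cumbersome teacher's logits
labels = rng.integers(0, 10, size=8)
loss = kd_loss(student, teacher, labels)
```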
no code implementations • 28 Mar 2024 • Kexin Shi, Jing Zhang, Linjiajie Fang, Wenjia Wang, BingYi Jing
In implicit collaborative filtering, hard negative mining techniques are developed to accelerate and enhance recommendation model learning.
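As a rough illustration of score-based hard negative mining in implicit CF (generic technique, not the paper's specific method): from a sampled candidate pool of unobserved items, pick the one the current model scores highest as the "hard" negative for the next update. All names below are hypothetical.

```python
import numpy as np

def sample_hard_negative(user_vec, item_embs, interacted, rng, pool_size=50):
    """Return the unobserved candidate item the model currently ranks highest."""
    candidates = rng.choice(len(item_embs), size=pool_size, replace=False)
    candidates = [i for i in candidates if i not in interacted]
    scores = item_embs[candidates] @ user_vec   # predicted preference
    return candidates[int(np.argmax(scores))]   # hardest = highest-scored

rng = np.random.default_rng(0)
item_embs = rng.normal(size=(1000, 16))         # toy item embeddings
user_vec = rng.normal(size=16)                  # toy user embedding
neg = sample_hard_negative(user_vec, item_embs, interacted={3, 7}, rng=rng)
```

Such high-scoring negatives yield larger gradients than random ones, which is why mining them accelerates training, at the risk of mislabeling items the user would actually like.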
1 code implementation • 20 Feb 2024 • Jianguo Huang, Jianqing Song, Xuanning Zhou, BingYi Jing, Hongxin Wei
Conformal Prediction (CP) has attracted great attention from the research community due to its strict theoretical guarantees.
no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei
In this paper, we propose Distorting-based Learning Complexity (DLC), a straightforward, novel, and training-free hardness score for efficiently identifying informative images and instructions in downstream datasets.
no code implementations • 8 Dec 2023 • Junyu Lu, Dixiang Zhang, Songxin Zhang, Zejian Xie, Zhuoyang Song, Cong Lin, Jiaxing Zhang, BingYi Jing, Pingjian Zhang
During the instruction fine-tuning stage, we introduce semantic-aware visual feature extraction, a crucial method that enables the model to extract informative features from concrete visual objects.
Ranked #1 on Image Captioning on nocaps entire
1 code implementation • 25 Sep 2023 • Yun Zhang, Wei Li, Simiao Li, Hanting Chen, Zhijun Tu, Wenjia Wang, BingYi Jing, Shaohui Lin, Jie Hu
Knowledge distillation (KD) compresses deep neural networks by transferring task-related knowledge from cumbersome pre-trained teacher models to compact student models.
Ranked #29 on Image Super-Resolution on Urban100 - 4x upscaling
no code implementations • 25 Nov 2022 • Kexin Shi, Yun Zhang, BingYi Jing, Wenjia Wang
In the implicit collaborative filtering (CF) task of recommender systems, recent works mainly focus on model structure design with promising techniques like graph neural networks (GNNs).