Search Results for author: Minmin Lin

Found 4 papers, 3 papers with code

A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment

1 code implementation • 10 Mar 2024 • Fei Wang, Haoyu Liu, Haoyang Bi, Xiangzhuang Shen, Renyu Zhu, Runze Wu, Minmin Lin, Tangjie Lv, Changjie Fan, Qi Liu, Zhenya Huang, Enhong Chen

In this paper, we introduce a substantial crowdsourcing annotation dataset collected from a real-world crowdsourcing platform.
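As an illustration of the task this dataset is meant to validate, the simplest truth inference baseline is a per-task majority vote over crowd labels. The sketch below assumes a hypothetical dict layout for the annotations; the dataset's actual schema is described in the paper and its released code.

```python
from collections import Counter

def majority_vote(annotations):
    """Simplest truth inference baseline: per-task majority vote.

    `annotations` maps each task id to the labels submitted by crowd
    workers (a hypothetical layout; the real schema is defined by the
    dataset itself).
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in annotations.items()}

# Example: worker disagreement on task "t1" resolves to "cat".
print(majority_vote({"t1": ["cat", "cat", "dog"], "t2": ["dog"]}))
```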

FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models

1 code implementation • 27 Nov 2023 • Ruixuan Xiao, Yiwen Dong, Junbo Zhao, Runze Wu, Minmin Lin, Gang Chen, Haobo Wang

While copious solutions, such as active learning for small language models (SLMs) and in-context learning with large language models (LLMs), have been proposed to alleviate the labeling burden, their performance still depends on human intervention.

Active Learning · In-Context Learning
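For context on what a "human-free" loop could look like, here is a schematic sketch of an LLM–SLM annotation cycle in the spirit of the title: the LLM pseudo-labels data via in-context learning, and an SLM trained on those labels filters cleaner demonstrations for the next round. All function names are placeholders, and the loop is an assumption about the general idea, not the authors' algorithm.

```python
def human_free_loop(pool, llm_annotate, train_slm, select_clean, rounds=3):
    """Schematic human-free annotation loop (all names are placeholders).

    An LLM pseudo-labels the unlabeled pool via in-context learning;
    an SLM trained on those pseudo-labels filters a cleaner subset,
    which seeds the LLM's demonstrations for the next round. A sketch
    of the idea only, not the paper's implementation.
    """
    demos = []  # in-context demonstrations, initially empty
    slm = None
    for _ in range(rounds):
        pseudo = [(x, llm_annotate(x, demos)) for x in pool]
        slm = train_slm(pseudo)            # fit small model on pseudo-labels
        demos = select_clean(slm, pseudo)  # keep high-confidence examples
    return slm, demos
```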

Towards Long-term Annotators: A Supervised Label Aggregation Baseline

no code implementations • 15 Nov 2023 • Haoyu Liu, Fei Wang, Minmin Lin, Runze Wu, Renyu Zhu, Shiwei Zhao, Kai Wang, Tangjie Lv, Changjie Fan

Long-term annotators leave substantial historical annotation records on crowdsourcing platforms; these records can benefit label aggregation but are ignored by previous works.
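To make the intuition concrete, a toy aggregation rule can weight each annotator's vote by an accuracy estimated from their historical records. The sketch below illustrates that idea only; it is not the paper's supervised baseline, and all names are hypothetical.

```python
def history_weighted_vote(task_labels, history_acc):
    """Aggregate one task's labels, weighting annotators by history.

    `task_labels` maps annotator id -> submitted label; `history_acc`
    maps annotator id -> accuracy estimated from past records. A toy
    illustration of exploiting long-term history, not the paper's model.
    """
    scores = {}
    for annotator, label in task_labels.items():
        # Unknown annotators fall back to a neutral 0.5 weight.
        scores[label] = scores.get(label, 0.0) + history_acc.get(annotator, 0.5)
    return max(scores, key=scores.get)

# Annotator "b"'s strong track record outweighs two weaker votes.
print(history_weighted_vote({"a": "pos", "b": "neg", "c": "pos"},
                            {"a": 0.40, "b": 0.95, "c": 0.40}))
```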

Rethinking Noisy Label Learning in Real-world Annotation Scenarios from the Noise-type Perspective

1 code implementation • 28 Jul 2023 • Renyu Zhu, Haoyu Liu, Runze Wu, Minmin Lin, Tangjie Lv, Changjie Fan, Haobo Wang

In this paper, we investigate the problem of learning with noisy labels in real-world annotation scenarios, where noise can be categorized into two types: factual noise and ambiguity noise.

Learning with noisy labels
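The factual-versus-ambiguity distinction can be illustrated with a toy noise injector: one mode corrupts labels uniformly at random, the other only on instances flagged as inherently ambiguous. This is a hedged sketch of the distinction, not the paper's formal noise model.

```python
import random

def inject_noise(labels, classes, noise_type, rate=0.3, ambiguous=()):
    """Toy simulation of the two noise types named in the abstract.

    Factual noise flips any label uniformly at random; ambiguity noise
    flips only instances flagged as inherently ambiguous. An
    illustration of the distinction, not the paper's formal definition.
    """
    noisy = list(labels)
    for i, label in enumerate(noisy):
        hit = (noise_type == "factual"
               or (noise_type == "ambiguity" and i in ambiguous))
        if hit and random.random() < rate:
            noisy[i] = random.choice([c for c in classes if c != label])
    return noisy

# Corrupt only the ambiguous instances (indices 1 and 3).
print(inject_noise(["cat", "cat", "dog", "dog"], ["cat", "dog"],
                   "ambiguity", ambiguous={1, 3}))
```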
