no code implementations • 29 Mar 2024 • Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, Jingbo Shang
We conduct a series of experiments demonstrating that LLMs can benefit from mistakes in both directions.
no code implementations • 26 Oct 2023 • Zi Lin, Zihan Wang, Yongqi Tong, Yangkun Wang, Yuxin Guo, Yujia Wang, Jingbo Shang
This benchmark contains rich, nuanced phenomena that are tricky for current toxicity detection models to identify, revealing a significant domain difference from social media content.
no code implementations • 18 Oct 2023 • Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, Jingbo Shang
Chain-of-Thought (CoT) prompting and its variants explore equipping large language models (LLMs) with high-level reasoning abilities by emulating human-like linear cognition and logic.
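As a rough illustration of the CoT prompting idea mentioned above (not this paper's specific method), a zero-shot variant simply appends a cue that elicits step-by-step reasoning before the model answers; the function name and prompt wording here are illustrative assumptions:

```python
# Minimal sketch of zero-shot Chain-of-Thought prompting.
# The cue phrase nudges the model to emit intermediate reasoning steps.
def cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

prompt = cot_prompt("If a train travels 60 miles in 1.5 hours, what is its speed?")
print(prompt)
```

Few-shot CoT instead prepends worked examples whose answers include the reasoning chain, which the model then imitates.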
no code implementations • 20 Aug 2022 • Yanjie Gou, Yinjie Lei, Lingqiao Liu, Yong Dai, Chunxu Shen, Yongqi Tong
Existing works usually formulate span detection as a 1D token-tagging problem and model sentiment recognition with a 2D tagging matrix over token pairs.
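The two formulations described above can be sketched concretely; the example sentence, BIO labels, and relation labels below are illustrative assumptions, not taken from the paper:

```python
# Sketch of the two tagging formulations for aspect sentiment analysis.
tokens = ["The", "pasta", "was", "great"]
n = len(tokens)

# 1D token tagging for span detection (BIO scheme):
# each token gets one label marking aspect-span boundaries.
bio_tags = ["O", "B-ASP", "O", "O"]  # "pasta" is the aspect span

# 2D tagging matrix for sentiment recognition:
# cell (i, j) labels the relation between the token pair (i, j),
# e.g. linking an aspect token to an opinion token with a polarity.
pair_matrix = [["NONE"] * n for _ in range(n)]
pair_matrix[1][3] = "POS"  # pair ("pasta", "great") tagged positive

assert len(bio_tags) == n and len(pair_matrix) == n
```

The 1D sequence has O(n) labels while the pair matrix has O(n^2) cells, which is the representational gap these formulations trade off.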