1 code implementation • RANLP 2021 • Minghuan Tan, Jing Jiang
We find that our method substantially outperforms existing methods on the evaluation dataset we have constructed.
no code implementations • RANLP 2021 • Minghuan Tan, Jing Jiang
Understanding idioms is important in NLP.
no code implementations • 16 Jan 2025 • Ancheng Xu, Di Yang, Renhao Li, Jingwei Zhu, Minghuan Tan, Min Yang, Wanxin Qiu, Mingchen Ma, Haihong Wu, Bingyu Li, Feng Sha, Chengming Li, Xiping Hu, Qiang Qu, Derek F. Wong, Ruifeng Xu
Traditional in-person psychological counseling remains a niche service, typically sought only by individuals with pronounced psychological issues, while online automated counseling offers a potential solution for those hesitant to seek help due to feelings of shame.
no code implementations • 26 Sep 2024 • Fuqiang Niu, Minghuan Tan, BoWen Zhang, Min Yang, Ruifeng Xu
To demonstrate the effectiveness of this approach, we integrate multiple existing resources and construct an emotional idiom lexicon expansion dataset (called EmoIdiomE), which encompasses a comprehensive repository of Chinese and English idioms.
1 code implementation • 29 Jul 2024 • Jingwei Zhu, Minghuan Tan, Min Yang, Ruixue Li, Hamid Alinejad-Rokny
The rapid progress in Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showcasing how dataset diversity and distribution in supervised fine-tuning (SFT) may enhance LLM performance. Remarkably, we successfully trained a smaller base model to achieve scores comparable to larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This study suggests that even smaller models may reach high performance levels with carefully curated and varied datasets.
1 code implementation • 23 Jul 2024 • Yuxuan Hu, Minghuan Tan, Chenwei Zhang, Zixuan Li, Xiaodan Liang, Min Yang, Chengming Li, Xiping Hu
By incorporating emotional support strategies, we aim to enrich the model's capabilities in both cognitive and affective empathy, leading to a more nuanced and comprehensive empathetic response.
1 code implementation • 11 Jun 2024 • Renhao Li, Minghuan Tan, Derek F. Wong, Min Yang
The responses within IFT data could be further enhanced by leveraging the capabilities of LLMs themselves.
1 code implementation • 5 Jun 2024 • Ancheng Xu, Minghuan Tan, Lei Wang, Min Yang, Ruifeng Xu
We first anatomize the reasoning of math word problems into distinct sub-procedures, such as numeral conversion from language to numbers and measurement conversion based on units.
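The two sub-procedures named in the abstract can be sketched as standalone routines. This is an illustrative sketch only, not the paper's code; the word-value and unit tables are hypothetical placeholders covering a handful of cases.

```python
# Sub-procedure 1 (hypothetical sketch): numeral conversion from language to numbers.
WORD_VALUES = {"one": 1, "two": 2, "three": 3, "hundred": 100, "thousand": 1000}

def words_to_number(phrase: str) -> int:
    """Convert a simple English numeral phrase, e.g. 'three hundred', to an int."""
    current = 0
    for word in phrase.lower().split():
        value = WORD_VALUES[word]
        if value >= 100:              # multiplier word: scale the running value
            current = max(current, 1) * value
        else:                         # additive word
            current += value
    return current

# Sub-procedure 2 (hypothetical sketch): measurement conversion based on units,
# using meters as the pivot unit.
TO_METERS = {"m": 1.0, "km": 1000.0, "cm": 0.01}

def convert_length(value: float, src: str, dst: str) -> float:
    """Convert a length between units via the meter."""
    return value * TO_METERS[src] / TO_METERS[dst]
```

A full MWP solver would chain such sub-procedures with the arithmetic reasoning itself; here each is isolated so it can be probed on its own.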
2 code implementations • 26 May 2024 • Chenhao Zhang, Renhao Li, Minghuan Tan, Min Yang, Jingwei Zhu, Di Yang, Jiahao Zhao, Guancheng Ye, Chengming Li, Xiping Hu
To bridge the gap, we propose CPsyCoun, a report-based multi-turn dialogue reconstruction and evaluation framework for Chinese psychological counseling.
1 code implementation • 16 May 2024 • Jiahao Zhao, Jingwei Zhu, Minghuan Tan, Min Yang, Renhao Li, Di Yang, Chenhao Zhang, Guancheng Ye, Chengming Li, Xiping Hu, Derek F. Wong
In this paper, we introduce a novel psychological benchmark, CPsyExam, constructed from questions sourced from Chinese language examinations.
1 code implementation • 26 Feb 2024 • Shiwen Ni, Minghuan Tan, Yuelin Bai, Fuqiang Niu, Min Yang, BoWen Zhang, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan
In this paper, we contribute a new benchmark, the first Multilingual-oriented quiZ on Intellectual Property (MoZIP), for the evaluation of LLMs in the IP domain.
1 code implementation • SemEval (NAACL) 2022 • Minghuan Tan
This paper describes an approach to detect idiomaticity only from the contextualized representation of an MWE over multilingual pretrained language models.
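The general recipe the abstract points to can be sketched as follows: pool the contextualized vectors of the MWE's tokens and score idiomaticity with a linear probe. This is a hedged illustration with random placeholder vectors and weights, not the paper's actual model or pooling choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder contextualized vectors for a 12-token sentence from a
# (hypothetical) multilingual encoder with hidden size 16.
seq_hidden = rng.standard_normal((12, 16))
mwe_span = slice(4, 7)                        # tokens belonging to the MWE

# Mean-pool the MWE span into a single representation.
mwe_repr = seq_hidden[mwe_span].mean(axis=0)

# Linear probe with placeholder weights: sigmoid over a single logit.
w = rng.standard_normal(16)
logit = mwe_repr @ w
p_idiomatic = 1.0 / (1.0 + np.exp(-logit))    # probability the usage is idiomatic
```

The key property, per the abstract, is that only the MWE's own contextualized representation is used; no features of the surrounding tokens outside the span enter the probe.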
no code implementations • 12 May 2022 • Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi
Moreover, our model supports self-supervised pretraining with the same sparsely activated way, resulting in better initialized parameters for different modalities.
1 code implementation • ACL 2022 • Minghuan Tan, Yong Dai, Duyu Tang, Zhangyin Feng, Guoping Huang, Jing Jiang, Jiwei Li, Shuming Shi
We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin.
1 code implementation • 19 May 2021 • Minghuan Tan, Lei Wang, Lingxiao Jiang, Jing Jiang
In this paper, we revisit math word problems (MWPs) from the cross-lingual and multilingual perspective.
1 code implementation • COLING 2020 • Minghuan Tan, Jing Jiang
Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.
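The matching step described above can be sketched in a few lines: score each candidate idiom by taking the dot product of its embedding with the hidden state at the blank position, then normalize with a softmax. This is an illustrative sketch with random placeholder vectors, not the authors' implementation; dimensions and the dot-product scorer are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8
num_candidates = 5

# Hidden representation corresponding to the blank in the context
# (placeholder; in the paper this comes from the encoder).
blank_hidden = rng.standard_normal(hidden_dim)

# Embeddings of the candidate idioms (placeholder values).
idiom_embeddings = rng.standard_normal((num_candidates, hidden_dim))

# Match each candidate against the blank's hidden state.
scores = idiom_embeddings @ blank_hidden

# Softmax over candidates, computed stably; the argmax is the prediction.
probs = np.exp(scores - scores.max())
probs /= probs.sum()
best = int(np.argmax(probs))
```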