Search Results for author: Minghuan Tan

Found 16 papers, 12 papers with code

Learning and Evaluating Chinese Idiom Embeddings

1 code implementation RANLP 2021 Minghuan Tan, Jing Jiang

We find that our method substantially outperforms existing methods on the evaluation dataset we have constructed.

AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral Therapy in Psychological Counseling

no code implementations 16 Jan 2025 Ancheng Xu, Di Yang, Renhao Li, Jingwei Zhu, Minghuan Tan, Min Yang, Wanxin Qiu, Mingchen Ma, Haihong Wu, Bingyu Li, Feng Sha, Chengming Li, Xiping Hu, Qiang Qu, Derek F. Wong, Ruifeng Xu

Traditional in-person psychological counseling remains a niche option, typically sought only by individuals already aware of their psychological issues, while online automated counseling offers a potential solution for those hesitant to seek help due to feelings of shame.

DualCoTs: Dual Chain-of-Thoughts Prompting for Sentiment Lexicon Expansion of Idioms

no code implementations 26 Sep 2024 Fuqiang Niu, Minghuan Tan, BoWen Zhang, Min Yang, Ruifeng Xu

To demonstrate the effectiveness of this approach, we integrate multiple existing resources and construct an emotional idiom lexicon expansion dataset (called EmoIdiomE), which encompasses a comprehensive repository of Chinese and English idioms.

Sentiment Analysis

CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare

1 code implementation 29 Jul 2024 Jingwei Zhu, Minghuan Tan, Min Yang, Ruixue Li, Hamid Alinejad-Rokny

The rapid progress in Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showcasing how dataset diversity and distribution in supervised fine-tuning (SFT) may enhance LLM performance. Remarkably, we successfully trained a smaller base model to achieve scores comparable to larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This study suggests that even smaller models may reach high performance levels with carefully curated and varied datasets.

Diversity

APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation

1 code implementation 23 Jul 2024 Yuxuan Hu, Minghuan Tan, Chenwei Zhang, Zixuan Li, Xiaodan Liang, Min Yang, Chengming Li, Xiping Hu

By incorporating emotional support strategies, we aim to enrich the model's capabilities in both cognitive and affective empathy, leading to a more nuanced and comprehensive empathetic response.

Empathetic Response Generation, Response Generation +2

CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation

1 code implementation 11 Jun 2024 Renhao Li, Minghuan Tan, Derek F. Wong, Min Yang

The responses within IFT data could be further enhanced by leveraging the capabilities of LLMs themselves.

Instruction Following

NUMCoT: Numerals and Units of Measurement in Chain-of-Thought Reasoning using Large Language Models

1 code implementation 5 Jun 2024 Ancheng Xu, Minghuan Tan, Lei Wang, Min Yang, Ruifeng Xu

We first anatomize the reasoning of math word problems to different sub-procedures like numeral conversions from language to numbers and measurement conversions based on units.

Math, Mathematical Reasoning
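The sub-procedures described above can be illustrated with a minimal sketch: converting a spelled-out numeral to a number, then converting a measurement between units. The helper names and coverage here are hypothetical illustrations of such sub-procedures, not the paper's code.

```python
# Minimal sketch of two reasoning sub-procedures: numeral conversion
# (words -> numbers) and measurement conversion (between units).
# Hypothetical helpers for illustration; not the NUMCoT codebase.

WORD_VALUES = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
               "six": 6, "seven": 7, "eight": 8, "nine": 9}
MULTIPLIERS = {"hundred": 100, "thousand": 1_000, "million": 1_000_000}

def words_to_number(phrase: str) -> int:
    """Convert a simple English numeral phrase like 'three thousand' to an int."""
    total = 0
    current = 0
    for word in phrase.lower().split():
        if word in WORD_VALUES:
            current += WORD_VALUES[word]
        elif word in MULTIPLIERS:
            current = max(current, 1) * MULTIPLIERS[word]
            total += current
            current = 0
    return total + current

METERS_PER_UNIT = {"m": 1.0, "km": 1000.0, "cm": 0.01}

def convert_length(value: float, src: str, dst: str) -> float:
    """Convert a length between units via a common base (meters)."""
    return value * METERS_PER_UNIT[src] / METERS_PER_UNIT[dst]

print(words_to_number("three thousand five hundred"))  # 3500
print(convert_length(2, "km", "m"))                    # 2000.0
```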

CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling

2 code implementations 26 May 2024 Chenhao Zhang, Renhao Li, Minghuan Tan, Min Yang, Jingwei Zhu, Di Yang, Jiahao Zhao, Guancheng Ye, Chengming Li, Xiping Hu

To bridge the gap, we propose CPsyCoun, a report-based multi-turn dialogue reconstruction and evaluation framework for Chinese psychological counseling.

CPsyExam: A Chinese Benchmark for Evaluating Psychology using Examinations

1 code implementation 16 May 2024 Jiahao Zhao, Jingwei Zhu, Minghuan Tan, Min Yang, Renhao Li, Di Yang, Chenhao Zhang, Guancheng Ye, Chengming Li, Xiping Hu, Derek F. Wong

In this paper, we introduce a novel psychological benchmark, CPsyExam, constructed from questions sourced from Chinese language examinations.


MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property

1 code implementation 26 Feb 2024 Shiwen Ni, Minghuan Tan, Yuelin Bai, Fuqiang Niu, Min Yang, BoWen Zhang, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan

In this paper, we contribute a new benchmark, the first Multilingual-oriented quiZ on Intellectual Property (MoZIP), for the evaluation of LLMs in the IP domain.

Language Modeling +3

HiJoNLP at SemEval-2022 Task 2: Detecting Idiomaticity of Multiword Expressions using Multilingual Pretrained Language Models

1 code implementation SemEval (NAACL) 2022 Minghuan Tan

This paper describes an approach to detect idiomaticity only from the contextualized representation of a MWE over multilingual pretrained language models.

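One common way to realize detection "only from the contextualized representation of a MWE" is to pool the encoder's hidden states over the MWE's token span and score the pooled vector with a classifier. The sketch below is a generic illustration of that pattern under assumed shapes, with random stand-ins for encoder outputs and learned weights; it is not necessarily the paper's exact architecture.

```python
import numpy as np

# Generic sketch: pool contextualized token vectors over the MWE span,
# then score idiomaticity with a linear classifier. The encoder outputs
# and classifier weights here are random stand-ins for illustration.

rng = np.random.default_rng(0)
hidden = rng.normal(size=(12, 768))   # token representations for a 12-token sentence
mwe_span = slice(4, 6)                # tokens 4-5 form the multiword expression

pooled = hidden[mwe_span].mean(axis=0)          # (768,) span representation
w, b = rng.normal(size=768), 0.0                # stand-in classifier parameters
logit = pooled @ w + b
prob_idiomatic = 1.0 / (1.0 + np.exp(-logit))   # sigmoid over the score
print(round(float(prob_idiomatic), 3))
```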

One Model, Multiple Modalities: A Sparsely Activated Approach for Text, Sound, Image, Video and Code

no code implementations 12 May 2022 Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi

Moreover, our model supports self-supervised pretraining with the same sparsely activated way, resulting in better initialized parameters for different modalities.

Image Retrieval, Retrieval

Investigating Math Word Problems using Pretrained Multilingual Language Models

1 code implementation 19 May 2021 Minghuan Tan, Lei Wang, Lingxiao Jiang, Jing Jiang

In this paper, we revisit math word problems~(MWPs) from the cross-lingual and multilingual perspective.

Machine Translation, Math +2

A BERT-based Dual Embedding Model for Chinese Idiom Prediction

1 code implementation COLING 2020 Minghuan Tan, Jing Jiang

Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.

Cloze Test
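The matching step described above amounts to scoring each candidate idiom's embedding against the hidden representation at the blank position, then normalizing over candidates. The sketch below illustrates that computation with assumed shapes and random values standing in for BERT outputs and learned idiom embeddings; it is not the paper's model.

```python
import numpy as np

# Sketch of the matching step: score each candidate idiom by the dot
# product between its embedding and the hidden state at the blank.
# Random values stand in for BERT outputs and learned idiom embeddings.

rng = np.random.default_rng(1)
hidden_at_blank = rng.normal(size=256)            # representation of the blank token
candidate_embeddings = rng.normal(size=(7, 256))  # 7 candidate idioms

scores = candidate_embeddings @ hidden_at_blank   # one matching score per candidate
probs = np.exp(scores - scores.max())
probs /= probs.sum()                              # softmax over candidates
best = int(np.argmax(probs))
print(best, round(float(probs[best]), 3))
```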
