1 code implementation • NAACL 2022 • Mingqi Gao, Xiaojun Wan
Dialogue summarization is receiving increasing attention from researchers due to its extraordinary difficulty and unique application value.
no code implementations • 31 Dec 2024 • Mingqi Gao, Yixin Liu, Xinyu Hu, Xiaojun Wan, Jonathan Bragg, Arman Cohan
Due to the high cost and time-consuming nature of human evaluations, an automatic LLM bencher (i.e., an automatic evaluation framework that aims to rank LLMs based on their alignment with human preferences) is indispensable.
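The alignment mentioned above is typically quantified as ranking agreement. A minimal sketch with hypothetical rankings (the data and the helper `kendall_tau` are illustrative, not from the paper), computing Kendall's tau between a bencher's ranking and a human ranking:

```python
from itertools import combinations

def kendall_tau(a, b):
    """Pairwise ordering agreement between two rankings (no ties), in [-1, 1]."""
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical rankings of five LLMs (rank 1 = best).
human_rank = [1, 2, 3, 4, 5]     # ordering from human preference judgments
bencher_rank = [1, 3, 2, 4, 5]   # automatic bencher swaps models 2 and 3

print(kendall_tau(human_rank, bencher_rank))  # → 0.8
```

A tau of 1.0 would mean the bencher reproduces the human ranking exactly; one swapped pair out of ten costs 0.2.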
no code implementations • 22 Oct 2024 • Mingqi Gao, Xinyu Hu, Li Lin, Xiaojun Wan
The correlation between NLG automatic evaluation metrics and human evaluation is often regarded as a critical criterion for assessing the capability of an evaluation metric.
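Such metric–human correlation is commonly measured with a rank correlation such as Spearman's rho. A minimal sketch on hypothetical scores (the data and helper functions are illustrative, not from the paper):

```python
def rank(xs):
    """Assign 1-based ranks, assuming no tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman's rho via the no-ties formula: 1 - 6*sum(d^2)/(n(n^2-1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical scores for five system outputs.
human = [4.5, 3.0, 2.0, 4.0, 1.0]        # human ratings
metric = [0.62, 0.41, 0.35, 0.55, 0.30]  # automatic metric scores

print(spearman(human, metric))  # → 1.0 (metric orders outputs exactly like humans)
```

In practice such correlations are computed at the system or segment level against human annotations; a high rho is taken as evidence that the metric is a usable proxy.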
1 code implementation • 26 Jun 2024 • Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, Xiaojun Wan
The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area.
2 code implementations • 24 Jun 2024 • Henghui Ding, Chang Liu, Yunchao Wei, Nikhila Ravi, Shuting He, Song Bai, Philip Torr, Deshui Miao, Xin Li, Zhenyu He, YaoWei Wang, Ming-Hsuan Yang, Zhensong Xu, Jiangtao Yao, Chengjing Wu, Ting Liu, Luoqi Liu, Xinyu Liu, Jing Zhang, Kexin Zhang, Yuting Yang, Licheng Jiao, Shuyuan Yang, Mingqi Gao, Jingnan Luo, Jinyu Yang, Jungong Han, Feng Zheng, Bin Cao, Yisi Zhang, Xuanxu Lin, Xingjian He, Bo Zhao, Jing Liu, Feiyu Pan, Hao Fang, Xiankai Lu
Moreover, we provide a new motion expression guided video segmentation dataset, MeViS, to study natural-language-guided video understanding in complex environments.
1 code implementation • 12 Jun 2024 • Jie Ruan, Xiao Pu, Mingqi Gao, Xiaojun Wan, Yuesheng Zhu
Human evaluation is viewed as a reliable evaluation method for NLG, but it is expensive and time-consuming.
1 code implementation • 11 Jun 2024 • Mingqi Gao, Jingnan Luo, Jinyu Yang, Jungong Han, Feng Zheng
Motion Expression guided Video Segmentation (MeViS), as an emerging task, poses many new challenges to the field of referring video object segmentation (RVOS).
no code implementations • 22 Feb 2024 • Ziling Liu, Jinyu Yang, Mingqi Gao, Feng Zheng
This paper introduces a novel and efficient system named Place-Anything, which facilitates the insertion of any object into any video solely based on a picture or text description of the target object or element.
2 code implementations • 19 Feb 2024 • Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, Xiaojun Wan
Some prior work has shown that LLMs perform well in NLG evaluation for different tasks.
no code implementations • 2 Feb 2024 • Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan
Evaluating natural language generation (NLG) is a vital but challenging problem in artificial intelligence.
no code implementations • 18 Sep 2023 • Xiao Pu, Mingqi Gao, Xiaojun Wan
How well can large language models (LLMs) generate summaries?
1 code implementation • ICCV 2023 • Guanghui Li, Mingqi Gao, Heng Liu, XianTong Zhen, Feng Zheng
Referring video object segmentation (RVOS), as a supervised learning task, relies on sufficient annotated data for a given scene.
1 code implementation • 8 Jun 2023 • Mingqi Gao, Xiaojun Wan, Jia Su, Zhefeng Wang, Baoxing Huai
To address this problem, we are the first to manually annotate an FEC dataset for dialogue summarization containing 4000 items, and we propose FERRANTI, a fine-grained evaluation framework based on reference correction that automatically evaluates the performance of FEC models on different error categories.
no code implementations • 24 May 2023 • Xiao Pu, Mingqi Gao, Xiaojun Wan
The results show that summaries generated by fine-tuned models are more consistently useful: according to the proposed extrinsic metrics, rankings of fine-tuned summarization systems remain close across all three downstream tasks.
1 code implementation • 4 May 2023 • Teng Wang, Jinrui Zhang, Junjie Fei, Hao Zheng, Yunlong Tang, Zhe Li, Mingqi Gao, Shanshan Zhao
Controllable image captioning is an emerging multimodal topic that aims to describe the image with natural language following human purpose, $\textit{e.g.}$, looking at the specified regions or telling in a particular text style.
no code implementations • 2 May 2023 • Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso-Moral, Mohammad Arvan, Anouck Braggaar, Mark Cieliebak, Elizabeth Clark, Kees Van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Mingqi Gao, Albert Gatt, Dimitra Gkatzia, Javier González-Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubicka, Emiel Krahmer, Huiyuan Lai, Chris van der Lee, Yiru Li, Saad Mahamood, Margot Mieskes, Emiel van Miltenburg, Pablo Mosteiro, Malvina Nissim, Natalie Parde, Ondřej Plátek, Verena Rieser, Jie Ruan, Joel Tetreault, Antonio Toral, Xiaojun Wan, Leo Wanner, Lewis Watson, Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible.
1 code implementation • 24 Apr 2023 • Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, Feng Zheng
Therefore, in this report, we propose Track Anything Model (TAM), which achieves high-performance interactive tracking and segmentation in videos.
1 code implementation • 5 Apr 2023 • Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan
Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory.
no code implementations • 17 Oct 2022 • Mingqi Gao, Xiaojun Wan
Many studies have revealed that word embeddings, language models, and models for specific downstream tasks in NLP are prone to social biases, especially gender bias.
2 code implementations • CVPR 2018 • Hao Wang, Qilong Wang, Mingqi Gao, Peihua Li, WangMeng Zuo
Our MLKP can be efficiently computed on a modified multi-scale feature map using a low-dimensional polynomial kernel approximation. Moreover, different from existing orderless global representations based on high-order statistics, our proposed MLKP is location retentive and sensitive, so it can be flexibly applied to object detection.