Search Results for author: Mingqi Gao

Found 14 papers, 7 papers with code

DialSummEval: Revisiting Summarization Evaluation for Dialogues

1 code implementation · NAACL 2022 · Mingqi Gao, Xiaojun Wan

Dialogue summarization is receiving increasing attention from researchers due to its extraordinary difficulty and unique application value.

Place Anything into Any Video

no code implementations · 22 Feb 2024 · Ziling Liu, Jinyu Yang, Mingqi Gao, Feng Zheng

This paper introduces a novel and efficient system named Place-Anything, which facilitates the insertion of any object into any video solely based on a picture or text description of the target object or element.

Tasks: 3D Generation, Object, +2

Are LLM-based Evaluators Confusing NLG Quality Criteria?

no code implementations · 19 Feb 2024 · Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, Xiaojun Wan

Some prior work has shown that LLMs perform well in NLG evaluation for different tasks.

Tasks: NLG Evaluation

LLM-based NLG Evaluation: Current Status and Challenges

no code implementations · 2 Feb 2024 · Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan

Evaluating natural language generation (NLG) is a vital but challenging problem in artificial intelligence.

Tasks: NLG Evaluation, Text Generation

Summarization is (Almost) Dead

no code implementations · 18 Sep 2023 · Xiao Pu, Mingqi Gao, Xiaojun Wan

How well can large language models (LLMs) generate summaries?

Tasks: Text Summarization

Reference Matters: Benchmarking Factual Error Correction for Dialogue Summarization with Fine-grained Evaluation Framework

1 code implementation · 8 Jun 2023 · Mingqi Gao, Xiaojun Wan, Jia Su, Zhefeng Wang, Baoxing Huai

To address this problem, we are the first to manually annotate an FEC dataset for dialogue summarization containing 4,000 items, and we propose FERRANTI, a fine-grained evaluation framework based on reference correction that automatically evaluates the performance of FEC models on different error categories.

Tasks: Benchmarking

Is Summary Useful or Not? An Extrinsic Human Evaluation of Text Summaries on Downstream Tasks

no code implementations · 24 May 2023 · Xiao Pu, Mingqi Gao, Xiaojun Wan

The results show that summaries generated by fine-tuned models are more consistently useful across all three tasks: rankings of fine-tuned summarization systems remain close across downstream tasks under the proposed extrinsic metrics.

Tasks: Informativeness, Question Answering, +4

Caption Anything: Interactive Image Description with Diverse Multimodal Controls

1 code implementation · 4 May 2023 · Teng Wang, Jinrui Zhang, Junjie Fei, Hao Zheng, Yunlong Tang, Zhe Li, Mingqi Gao, Shanshan Zhao

Controllable image captioning is an emerging multimodal topic that aims to describe the image with natural language following human purpose, e.g., looking at the specified regions or telling in a particular text style.

Tasks: Controllable Image Captioning, Instruction Following

Track Anything: Segment Anything Meets Videos

1 code implementation · 24 Apr 2023 · Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, Feng Zheng

Therefore, in this report, we propose Track Anything Model (TAM), which achieves high-performance interactive tracking and segmentation in videos.

Tasks: Image Segmentation, Segmentation, +2

Human-like Summarization Evaluation with ChatGPT

1 code implementation · 5 Apr 2023 · Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan

Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory.

Tasks: Text Summarization

Social Biases in Automatic Evaluation Metrics for NLG

no code implementations · 17 Oct 2022 · Mingqi Gao, Xiaojun Wan

Many studies have revealed that word embeddings, language models, and models for specific downstream tasks in NLP are prone to social biases, especially gender bias.

Tasks: Sentence, Sentence Embeddings, +3

Multi-scale Location-aware Kernel Representation for Object Detection

2 code implementations · CVPR 2018 · Hao Wang, Qilong Wang, Mingqi Gao, Peihua Li, Wangmeng Zuo

Our MLKP can be efficiently computed on a modified multi-scale feature map using a low-dimensional polynomial kernel approximation. Moreover, different from existing orderless global representations based on high-order statistics, our proposed MLKP is location retentive and sensitive so that it can be flexibly adopted to object detection.

Tasks: General Classification, Object, +2
