Search Results for author: Zhihong Shao

Found 7 papers, 4 papers with code

Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models

no code implementations 1 Feb 2023 Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen

However, the quality of the prompts depends on the demonstrations given to the models, and handcrafting many demonstrations is costly.
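As a rough illustration of what such a handcrafted demonstration looks like (the question, reasoning chain, and answer below are made up for this sketch, not taken from the paper):

```python
# A minimal sketch of a handcrafted chain-of-thought demonstration prompt.
# Synthetic Prompting targets the cost of writing many demonstrations like
# this by hand, by having the model synthesize its own examples instead.
demonstration = (
    "Q: A shelf holds 3 boxes with 4 books each. How many books are there?\n"
    "A: Each box has 4 books and there are 3 boxes, so 3 * 4 = 12. The answer is 12.\n\n"
)
new_question = "Q: A team of 5 people each solve 6 puzzles. How many puzzles are solved?\nA:"

# The prompt sent to a large language model is simply the concatenation of
# demonstrations followed by the new question.
prompt = demonstration + new_question
print(prompt)
```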

Chaining Simultaneous Thoughts for Numerical Reasoning

no code implementations 29 Nov 2022 Zhihong Shao, Fei Huang, Minlie Huang

Given that rich information is hidden behind ubiquitous numbers in text, numerical reasoning over text should be an essential skill of AI systems.

Answering Open-Domain Multi-Answer Questions via a Recall-then-Verify Framework

1 code implementation ACL 2022 Zhihong Shao, Minlie Huang

Open-domain questions are likely to be open-ended and ambiguous, leading to multiple valid answers.
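A minimal sketch of the general recall-then-verify idea, with hypothetical placeholder functions (recall_candidates, verify) rather than the paper's actual components:

```python
# Sketch: recall many candidate answers first, then verify each one
# independently, so an ambiguous question can keep several valid answers.
def recall_candidates(question):
    # Hypothetical recall stage: return (candidate answer, supporting evidence) pairs.
    return [("answer A", "passage 1"), ("answer B", "passage 2")]

def verify(question, candidate, evidence):
    # Hypothetical verification stage: score how well the evidence supports
    # the candidate as one valid answer to the question.
    return 0.7

def answer(question, threshold=0.5):
    return [cand for cand, ev in recall_candidates(question)
            if verify(question, cand, ev) >= threshold]

print(answer("Who has voiced this character?"))  # may legitimately return multiple answers
```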

A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering

1 code implementation ACL 2021 Zhihong Shao, Lifeng Shang, Qun Liu, Minlie Huang

This setting gives rise to the spurious solution problem: there may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance (e.g., producing wrong solutions or answers).

Question Answering
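A toy illustration of the spurious solution problem (the example is made up, not from the paper): when only the final answer supervises training, several derivations can coincidentally match it.

```python
# Weakly supervised QA: only the gold answer "4" is observed, so every
# derivation that happens to evaluate to 4 looks like a valid solution.
question = "Tom had 7 apples and gave away 3. How many are left?"
gold_answer = 4
candidate_solutions = ["7 - 3", "1 + 3", "2 * 2"]  # hypothetical derivations

consistent = [s for s in candidate_solutions if eval(s) == gold_answer]
print(consistent)
# All three derive 4, but only "7 - 3" reflects the question; training on the
# other (spurious) solutions is what can hurt model performance.
```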

AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text

no code implementations 18 Dec 2020 Zhihong Shao, Zitao Liu, Jiyong Zhang, Zhongqin Wu, Minlie Huang

In this paper, we present AdvExpander, a method that crafts new adversarial examples by expanding text, which is complementary to previous substitution-based methods.

Text Matching
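A hand-written toy contrast between substitution-based and expansion-based edits (illustrative only, not AdvExpander's actual search procedure):

```python
# Text matching setting: the adversarial edit should preserve the original
# label while changing the surface form. Substitution replaces existing
# words; expansion inserts new text instead.
original    = "The movie was good."
substituted = "The film was decent."                           # substitution-based edit
expanded    = "The movie, which I saw last weekend, was good." # expansion-based edit

for name, text in [("original", original), ("substitution", substituted), ("expansion", expanded)]:
    print(f"{name:12s} {text}")
```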

CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of Text Generation

1 code implementation 3 Feb 2020 Fei Huang, Dazhen Wan, Zhihong Shao, Pei Ke, Jian Guan, Yilin Niu, Xiaoyan Zhu, Minlie Huang

In text generation evaluation, many practical issues, such as inconsistent experimental settings and metric implementations, are often ignored but lead to unfair evaluation and untenable conclusions.

Text Generation
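The kind of inconsistency the toolkit targets can be shown with a small example (using NLTK here, not CoTK's own API): the same hypothesis/reference pair yields different BLEU numbers under different tokenization and smoothing settings.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference  = "the cat sat on the mat ."
hypothesis = "the cat is on the mat"

# Setting 1: plain whitespace tokenization, no smoothing.
score1 = sentence_bleu([reference.split()], hypothesis.split())

# Setting 2: punctuation stripped, smoothed BLEU.
ref2 = reference.replace(" .", "").split()
hyp2 = hypothesis.split()
score2 = sentence_bleu([ref2], hyp2, smoothing_function=SmoothingFunction().method1)

# Two different "BLEU" scores for the same output, which is exactly the kind
# of setup mismatch that makes cross-paper comparisons unfair.
print(score1, score2)
```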

Long and Diverse Text Generation with Planning-based Hierarchical Variational Model

1 code implementation IJCNLP 2019 Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, Xiaoyan Zhu

Existing neural methods for data-to-text generation still struggle to produce long and diverse texts: they fail to model input data dynamically during generation, to capture inter-sentence coherence, or to generate diversified expressions.

Data-to-Text Generation
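A generic plan-then-generate sketch for data-to-text generation (the helpers are hypothetical placeholders; the paper's model is a planning-based hierarchical variational architecture, not reproduced here):

```python
def plan(records):
    # Hypothetical planning stage: group input records into one group per
    # sentence, deciding what to say and in what order.
    return [records[:2], records[2:]]

def realize(group):
    # Hypothetical sentence-level realization of one planned group.
    return ", ".join(f"{k} is {v}" for k, v in group).capitalize() + "."

records = [("material", "cotton"), ("color", "blue"), ("style", "casual"), ("season", "summer")]
text = " ".join(realize(group) for group in plan(records))
print(text)  # e.g. "Material is cotton, color is blue. Style is casual, season is summer."
```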
