no code implementations • CCL 2022 • Xuanfan Ni, Piji Li
Open-ended automatic story generation takes a story's beginning, outline, or storyline as input and produces a consistent, coherent, and logical story. To improve the quality of generated stories, existing methods typically require large amounts of training data and models with more parameters. To address these problems, this paper exploits the advantages of prompt learning in zero-shot and few-shot settings, together with external commonsense reasoning knowledge, and proposes a story generation method. The method divides story generation into three stages: given the beginning of a story, a commonsense inference model generates possible events; according to their types, the events are filled into question templates to construct questions that guide the model toward reasonable answers; a question-answering model produces answers to the corresponding questions, and the answer with the lowest perplexity is selected as the story's continuation. Repeating this process yields a complete story. Automatic and human evaluation metrics show that, compared with baseline models, the proposed method generates more coherent, specific, and logical stories.
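The perplexity-based selection in the third stage can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and in the actual method the candidates would be full QA-model outputs scored by a language model.

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a candidate sentence from its per-token log-probabilities:
    exp of the negative mean log-probability. Lower = more fluent to the model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def pick_continuation(candidates):
    """candidates: list of (text, token_logprobs) pairs, e.g. one per QA answer.
    Returns the text whose perplexity is lowest, as the next story sentence."""
    return min(candidates, key=lambda c: perplexity(c[1]))[0]

# Hypothetical scored answers from the QA model:
answers = [
    ("The knight fled the castle.", [-0.2, -0.3, -0.1, -0.4, -0.2]),
    ("The castle ate a cloud.",     [-2.1, -3.0, -2.5, -2.8, -2.6]),
]
print(pick_continuation(answers))  # the more plausible (lower-perplexity) answer
```

In the full pipeline this selection step would run once per iteration, with the chosen sentence appended to the story context before the next round of event generation.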
no code implementations • 16 May 2024 • Xuanfan Ni, Piji Li
Recent efforts have evaluated large language models (LLMs) in areas such as commonsense reasoning, mathematical reasoning, and code generation.
no code implementations • 8 Apr 2024 • Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Piji Li
However, prior benchmarks create datasets that ostensibly cater to long-text comprehension by expanding the inputs of traditional tasks, which falls short of exhibiting the unique characteristics of long-text understanding, such as long-dependency tasks and text lengths compatible with modern LLMs' context window sizes.
no code implementations • 27 Mar 2023 • Xuanfan Ni, Piji Li, Huayang Li
Text structuralization is an important field of natural language processing (NLP) that consists of information extraction (IE) and structure formalization.