no code implementations • 14 Mar 2025 • Hai Zhao, Hongqiu Wu, Dongjie Yang, Anni Zou, Jiale Hong
In the language model scenario, the token is defined as a node in the graph.
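The graph view above can be sketched minimally: each token becomes a node, with edges linking it to tokens inside a small context window. The windowed construction here is purely illustrative (an assumption for this sketch), not necessarily the paper's exact graph-building rule.

```python
# Illustrative sketch: treat each token in a sequence as a graph node,
# with edges connecting tokens within a small context window.
# The window-based edge rule is an assumption for illustration only.

def build_token_graph(tokens, window=1):
    """Return an adjacency map: token index -> set of neighbor indices."""
    graph = {i: set() for i in range(len(tokens))}
    for i in range(len(tokens)):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                graph[i].add(j)
    return graph

tokens = ["the", "model", "reads", "the", "graph"]
g = build_token_graph(tokens, window=1)
# with window=1, each interior node links to its immediate neighbors
```

Larger windows densify the graph; a fully connected window recovers something closer to unrestricted self-attention.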
1 code implementation • 25 Feb 2025 • Hongqiu Wu, Weiqi Wu, Tianyang Xu, Jiameng Zhang, Hai Zhao
LLM-based Interactive Drama is a novel AI-based dialogue scenario, where the user (i.e. the player) plays the role of a character in the story, has conversations with characters played by LLM agents, and experiences an unfolding story.
no code implementations • 17 Oct 2024 • Hongqiu Wu, XingYuan Liu, Yan Wang, Hai Zhao
The IDGE allows users to create games simply by natural language instructions, which significantly lowers the barrier for game development.
1 code implementation • 6 Sep 2024 • Xiangke Zeng, Zuchao Li, Lefei Zhang, Ping Wang, Hongqiu Wu, Hai Zhao
Our detector is designed to yield two error detection results, each characterized by high precision and recall.
no code implementations • 19 Aug 2024 • Weiqi Wu, Hongqiu Wu, Hai Zhao
This fails to reflect a natural conversational style and hinders the evaluation of Large Language Models (LLMs) in complex and prolonged dialogues.
no code implementations • 18 Aug 2024 • Jiale Hong, Hongqiu Wu, Hai Zhao
Game development is a highly specialized task that relies on complex game engines built on specialized programming languages, which keeps many gaming enthusiasts from attempting it.
1 code implementation • 11 Aug 2024 • Hongqiu Wu, Zekai Xu, Tianyang Xu, Shize Wei, Yan Wang, Jiale Hong, Weiqi Wu, Hai Zhao
In this paper, we propose a new style of game-play to bridge self-expression and role-playing: \emph{open role-playing games (ORPGs)}, where players are allowed to craft and embody their unique characters in the game world.
no code implementations • 23 May 2024 • Weiqi Wu, Hongqiu Wu, Lai Jiang, XingYuan Liu, Jiale Hong, Hai Zhao, Min Zhang
Drama is a form of storytelling inspired by human creativity, proceeding along a predefined storyline and carrying emotions and thoughts.
1 code implementation • 30 Mar 2024 • Hongqiu Wu, Yan Wang, XingYuan Liu, Hai Zhao, Min Zhang
The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game rules and autonomously generate game-play processes.
1 code implementation • 26 Feb 2024 • Khai Jiet Liong, Hongqiu Wu, Hai Zhao
(2) We introduce \textit{S-Attend}, a novel smoothing technique that effectively makes SA robust via structural perturbations.
1 code implementation • 17 Dec 2023 • Haoxin Lin, Hongqiu Wu, Jiaji Zhang, Yihao Sun, Junyin Ye, Yang Yu
Real-world decision-making problems are usually accompanied by delayed rewards, which affects the sample efficiency of Reinforcement Learning, especially in the extremely delayed case where the only feedback is the episodic reward obtained at the end of an episode.
1 code implementation • 9 Oct 2023 • Hongqiu Wu, Linfeng Liu, Hai Zhao, Min Zhang
Beyond the great cognitive powers showcased by language models, it is crucial to scrutinize whether their reasoning capabilities stem from strong generalization or merely exposure to relevant data.
2 code implementations • 17 Aug 2023 • Linfeng Liu, Hongqiu Wu, Hai Zhao
However, we note a critical flaw in the process of tagging one character to another: the correction is excessively conditioned on the error.
1 code implementation • 28 May 2023 • Hongqiu Wu, Shaohua Zhang, Yuchen Zhang, Hai Zhao
In this paper, we study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model.
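The joint decision described above can be sketched in noisy-channel form: pick the candidate y maximizing log P_LM(y) + log P_err(x | y), where P_LM scores fluency and P_err scores how plausibly the observed sentence x arises from intended sentence y. The probabilities below are hand-picked toy numbers for illustration, not values from the paper.

```python
import math

# Toy noisy-channel sketch of Chinese Spelling Correction as a joint
# decision: a language model P(y) scores candidate corrections, and an
# error model P(x | y) scores how likely the observed input x was typed
# when y was intended. All probabilities are illustrative assumptions.

lm_logprob = {
    "他很高兴": math.log(0.6),  # fluent candidate
    "她很高兴": math.log(0.4),  # also fluent, less probable here
}

# error model: log P(observed x | intended y)
err_logprob = {
    ("他很高兴", "他很高兴"): math.log(0.9),  # no typo
    ("他很高兴", "她很高兴"): math.log(0.2),  # 她 -> 他 substitution
}

def correct(x, candidates):
    """Return argmax_y [ log P(y) + log P(x | y) ]."""
    return max(candidates,
               key=lambda y: lm_logprob[y] + err_logprob[(x, y)])

best = correct("他很高兴", ["他很高兴", "她很高兴"])
# here the joint score favors keeping the input unchanged
```

Separating the two models lets each be trained or swapped independently, which is the practical appeal of the joint formulation.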
no code implementations • 9 May 2023 • Yifei Yang, Hongqiu Wu, Hai Zhao
This is due to the fine-grained nature of NER: even minor word changes in a sentence can cause entities to emerge or mutate, which yields invalid adversarial examples.
1 code implementation • 8 May 2023 • Hongqiu Wu, Yongxiang Liu, Hanwen Shi, Hai Zhao, Min Zhang
Based on the observation, we propose simple yet effective \textit{Contextualized representation-Adversarial Training} (CreAT), in which the attack is explicitly optimized to deviate the contextualized representation of the encoder.
2 code implementations • 19 Oct 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang
Multiple pre-training objectives compensate for the limited understanding capability of single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios.
1 code implementation • COLING 2022 • Yiyang Li, Hongqiu Wu, Hai Zhao
Based on the tremendous success of pre-trained language models (PrLMs) for source code comprehension tasks, current literature studies either ways to further improve the performance (generalization) of PrLMs, or their robustness against adversarial attacks.
1 code implementation • 25 Jun 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang
Deep neural models (e.g. Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness.
Ranked #1 on Machine Reading Comprehension on DREAM; also evaluated on Named Entity Recognition (NER) and 4 other tasks.
no code implementations • NeurIPS 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Beyond the success story of pre-trained language models (PrLMs) in recent natural language processing, they are susceptible to over-fitting due to their unusually large model size.
1 code implementation • Findings (ACL) 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Code summarization (CS) is a promising area of recent language understanding research, which aims to automatically generate sensible natural-language descriptions of programs from their source code, for the convenience of software developers.