Search Results for author: Hongqiu Wu

Found 21 papers, 14 papers with code

Towards Enhanced Immersion and Agency for LLM-based Interactive Drama

1 code implementation • 25 Feb 2025 • Hongqiu Wu, Weiqi Wu, Tianyang Xu, Jiameng Zhang, Hai Zhao

LLM-based Interactive Drama is a novel AI-based dialogue scenario, where the user (i.e., the player) plays the role of a character in the story, converses with characters played by LLM agents, and experiences an unfolding story.

Instruction-Driven Game Engine: A Poker Case Study

no code implementations • 17 Oct 2024 • Hongqiu Wu, XingYuan Liu, Yan Wang, Hai Zhao

The IDGE allows users to create games simply through natural language instructions, which significantly lowers the barrier to game development.

Diversity • Language Modeling +2

A Coin Has Two Sides: A Novel Detector-Corrector Framework for Chinese Spelling Correction

1 code implementation • 6 Sep 2024 • Xiangke Zeng, Zuchao Li, Lefei Zhang, Ping Wang, Hongqiu Wu, Hai Zhao

Our detector is designed to yield two error detection results, each characterized by high precision and recall.

Spelling Correction
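
The "two sides" idea can be illustrated with a minimal sketch (an assumed reconstruction, not the paper's actual detector): threshold a per-character error probability at two operating points, a strict one favoring precision and a loose one favoring recall.

```python
# Illustrative sketch (not the paper's model): a detector emits two
# error-detection results by thresholding per-character error
# probabilities at two operating points.

def dual_detect(error_probs, hi=0.9, lo=0.3):
    """Return (high_precision, high_recall) index sets of suspected errors."""
    high_precision = [i for i, p in enumerate(error_probs) if p >= hi]
    high_recall = [i for i, p in enumerate(error_probs) if p >= lo]
    return high_precision, high_recall

probs = [0.05, 0.95, 0.40, 0.10, 0.92]
strict, loose = dual_detect(probs)
print(strict)  # [1, 4]
print(loose)   # [1, 2, 4]
```

Downstream, a corrector could treat the high-precision set as must-fix positions and the high-recall set as candidates to re-examine.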

Self-Directed Turing Test for Large Language Models

no code implementations • 19 Aug 2024 • Weiqi Wu, Hongqiu Wu, Hai Zhao

This fails to reflect a natural conversational style and hinders the evaluation of Large Language Models (LLMs) in complex and prolonged dialogues.

Game Development as Human-LLM Interaction

no code implementations • 18 Aug 2024 • Jiale Hong, Hongqiu Wu, Hai Zhao

Game development is a highly specialized task that relies on complex game engines and programming languages, which keeps many gaming enthusiasts from taking part in it.

Open Role-Playing with Delta-Engines

1 code implementation • 11 Aug 2024 • Hongqiu Wu, Zekai Xu, Tianyang Xu, Shize Wei, Yan Wang, Jiale Hong, Weiqi Wu, Hai Zhao

In this paper, we propose a new style of game-play to bridge self-expression and role-playing: open role-playing games (ORPGs), where players are allowed to craft and embody their unique characters in the game world.

From Role-Play to Drama-Interaction: An LLM Solution

no code implementations • 23 May 2024 • Weiqi Wu, Hongqiu Wu, Lai Jiang, XingYuan Liu, Jiale Hong, Hai Zhao, Min Zhang

Drama is a form of storytelling born of human creativity that proceeds along a predefined storyline while carrying emotions and thoughts.

Instruction Following

Instruction-Driven Game Engines on Large Language Models

1 code implementation • 30 Mar 2024 • Hongqiu Wu, Yan Wang, XingYuan Liu, Hai Zhao, Min Zhang

The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game rules and autonomously generate game-play processes.

Language Modelling • Large Language Model

Unveiling Vulnerability of Self-Attention

1 code implementation • 26 Feb 2024 • Khai Jiet Liong, Hongqiu Wu, Hai Zhao

We introduce S-Attend, a novel smoothing technique that effectively makes self-attention (SA) robust via structural perturbations.

Episodic Return Decomposition by Difference of Implicitly Assigned Sub-Trajectory Reward

1 code implementation • 17 Dec 2023 • Haoxin Lin, Hongqiu Wu, Jiaji Zhang, Yihao Sun, Junyin Ye, Yang Yu

Real-world decision-making problems are usually accompanied by delayed rewards, which affects the sample efficiency of Reinforcement Learning, especially in the extremely delayed case where the only feedback is the episodic reward obtained at the end of an episode.

Decision Making
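
The extreme-delay setting described above can be made concrete with a toy redistribution sketch (in the spirit of return decomposition, not the paper's exact algorithm): a predictor g scores trajectory prefixes, and the proxy per-step reward is the difference of consecutive prefix scores, so the dense rewards telescope back to the episodic return.

```python
# Toy sketch of reward redistribution by difference (assumed setup, not
# the paper's algorithm). A predictor g estimates the return of a
# trajectory prefix; the proxy per-step reward is the difference between
# consecutive prefix estimates, so the proxies sum to g(full trajectory).

def redistribute(states, g):
    """Turn a sparse episodic reward into dense proxy rewards via g."""
    rewards = []
    prev = 0.0  # g of the empty prefix, assumed zero
    for t in range(1, len(states) + 1):
        cur = g(states[:t])
        rewards.append(cur - prev)
        prev = cur
    return rewards

# Hypothetical prefix-return predictor, for illustration only.
g = lambda prefix: float(sum(prefix))

traj = [1.0, -2.0, 4.0]
dense = redistribute(traj, g)
print(dense)                      # [1.0, -2.0, 4.0]
print(sum(dense) == g(traj))      # True: the proxies telescope
```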

Empower Nested Boolean Logic via Self-Supervised Curriculum Learning

1 code implementation • 9 Oct 2023 • Hongqiu Wu, Linfeng Liu, Hai Zhao, Min Zhang

Beyond the great cognitive powers showcased by language models, it is crucial to scrutinize whether their reasoning capabilities stem from strong generalization or merely exposure to relevant data.

Logical Reasoning • Self-Supervised Learning
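
A minimal data-generation sketch for this kind of probe (an assumed setup, not the paper's exact recipe): sample nested boolean expressions of bounded depth and label them by evaluation, so a curriculum can move from flat to deeply nested logic.

```python
import random

# Illustrative generator of nested boolean logic data (assumed setup):
# sample expressions of bounded depth and label them by evaluating
# them. A curriculum would feed depth 1, then 2, ... so the model
# masters flat logic before nested logic.

def sample_expr(depth, rng):
    """Return a nested boolean expression string of the given depth."""
    if depth == 0:
        return rng.choice(["True", "False"])
    op = rng.choice(["and", "or", "not"])
    if op == "not":
        return f"(not {sample_expr(depth - 1, rng)})"
    left = sample_expr(depth - 1, rng)
    right = sample_expr(depth - 1, rng)
    return f"({left} {op} {right})"

rng = random.Random(0)
for d in range(1, 4):
    expr = sample_expr(d, rng)
    # eval is safe here: the string contains only True/False/and/or/not.
    print(d, expr, "->", eval(expr))
```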

Chinese Spelling Correction as Rephrasing Language Model

2 code implementations • 17 Aug 2023 • Linfeng Liu, Hongqiu Wu, Hai Zhao

However, we note a critical flaw in the process of tagging one character to another: the correction is excessively conditioned on the error.

Language Modeling • Language Modelling +3

Rethinking Masked Language Modeling for Chinese Spelling Correction

1 code implementation • 28 May 2023 • Hongqiu Wu, Shaohua Zhang, Yuchen Zhang, Hai Zhao

In this paper, we study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model.

Diversity • Domain Generalization +4
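
The joint decision can be sketched as noisy-channel ranking with toy numbers (assumed for illustration; the example is English, while the paper targets Chinese): a language model scores the fluency of each candidate correction, and an error model scores how plausibly the observed text arose from it.

```python
import math

# Noisy-channel sketch of the two-model decision (toy numbers, assumed):
# rank each candidate by log P_lm(candidate) + log P_err(observed | candidate).

def best_correction(observed, candidates, lm_logprob, err_logprob):
    return max(candidates,
               key=lambda c: lm_logprob(c) + err_logprob(observed, c))

# Hypothetical language-model scores: "their car" is far more fluent.
lm = {"their car": math.log(0.6), "thier car": math.log(0.001)}

# Hypothetical error model: leaving text unchanged is likely; a swap less so.
def err(obs, cand):
    return math.log(0.9) if obs == cand else math.log(0.1)

print(best_correction("thier car",
                      ["thier car", "their car"],
                      lm.get, err))  # their car
```

The point of the joint formulation is that neither model decides alone: a fluent candidate still loses if the error model deems the observed text implausible given it.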

Attack Named Entity Recognition by Entity Boundary Interference

no code implementations • 9 May 2023 • Yifei Yang, Hongqiu Wu, Hai Zhao

This is due to the fine-grained nature of NER: even minor word changes in a sentence can cause entities to emerge or mutate, yielding invalid adversarial examples.

named-entity-recognition • Named Entity Recognition +3

Toward Adversarial Training on Contextualized Language Representation

1 code implementation • 8 May 2023 • Hongqiu Wu, Yongxiang Liu, Hanwen Shi, Hai Zhao, Min Zhang

Based on this observation, we propose a simple yet effective Contextualized representation-Adversarial Training (CreAT), in which the attack is explicitly optimized to deviate the contextualized representation of the encoder.

Decoder • global-optimization +3
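
A crude sketch of the representation-deviating idea (an assumed toy setup with a two-dimensional "encoder" and greedy sign search in place of gradient-based optimization): choose a bounded perturbation that moves the encoder's representation as far as possible, rather than attacking the task loss directly.

```python
import math

# Toy sketch in the spirit of CreAT (assumed setup, pure Python, greedy
# sign search instead of backprop): craft a bounded perturbation delta
# that maximizes how far the "encoder" representation moves.

W = [[0.5, -0.3], [0.2, 0.8]]  # toy encoder weights, for illustration

def rep(x):
    """A stand-in contextualized representation: tanh of a linear map."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def deviation(x, delta):
    """Squared distance between clean and perturbed representations."""
    r0 = rep(x)
    r1 = rep([a + b for a, b in zip(x, delta)])
    return sum((a - b) ** 2 for a, b in zip(r0, r1))

def creat_style_attack(x, eps=0.1):
    """Per coordinate, greedily pick the perturbation sign that moves
    the representation farther (a crude stand-in for a gradient attack)."""
    delta = [0.0] * len(x)
    for i in range(len(x)):
        plus, minus = delta[:], delta[:]
        plus[i], minus[i] = eps, -eps
        delta = plus if deviation(x, plus) >= deviation(x, minus) else minus
    return delta

x = [0.3, -0.7]
delta = creat_style_attack(x)
print(delta, deviation(x, delta) > 0)
```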

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

2 code implementations • 19 Oct 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang

Multiple pre-training objectives compensate for the limited understanding capability of single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios.

Language Modeling • Language Modelling +1

Semantic-Preserving Adversarial Code Comprehension

1 code implementation • COLING 2022 • Yiyang Li, Hongqiu Wu, Hai Zhao

Building on the tremendous success of pre-trained language models (PrLMs) for source code comprehension tasks, the current literature studies either ways to further improve the performance (generalization) of PrLMs or their robustness against adversarial attacks.

Adversarial Self-Attention for Language Understanding

1 code implementation • 25 Jun 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang

Deep neural models (e.g., Transformers) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness.

Machine Reading Comprehension • Named Entity Recognition (NER) +4

Not All Attention Is All You Need

no code implementations • NeurIPS 2021 • Hongqiu Wu, Hai Zhao, Min Zhang

Beyond the success story of pre-trained language models (PrLMs) in recent natural language processing, they are susceptible to overfitting due to their unusually large model sizes.

All • Document Classification +2

Code Summarization with Structure-induced Transformer

1 code implementation • Findings (ACL) 2021 • Hongqiu Wu, Hai Zhao, Min Zhang

Code summarization (CS) is becoming a promising area of language understanding; it aims to automatically generate sensible natural-language descriptions of source code, for the convenience of programmers during development.

Code Summarization • Graph Neural Network +2
