Search Results for author: Wenyue Hua

Found 25 papers, 17 papers with code

A Predicate-Function-Argument Annotation of Natural Language for Open-Domain Information eXpression

no code implementations EMNLP 2020 Mingming Sun, Wenyue Hua, Zoey Liu, Xin Wang, Kangjie Zheng, Ping Li

Because the OIE strategies are built on the same OIX platform, they are reusable: users can select a set of strategies and assemble them into an algorithm for a specific task, which can significantly increase adaptability.

Open Information Extraction, Sentence

BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis

no code implementations 23 Apr 2024 Shuhang Lin, Wenyue Hua, Lingyao Li, Che-Jui Chang, Lizhou Fan, Jianchao Ji, Hang Hua, Mingyu Jin, Jiebo Luo, Yongfeng Zhang

This novel system aims to simulate complex dynamic interactions among multiple agents, as well as between agents and their environments, over a period of time.

Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

1 code implementation 10 Apr 2024 Mingyu Jin, Qinkai Yu, Jingyuan Huang, Qingcheng Zeng, Zhenting Wang, Wenyue Hua, Haiyan Zhao, Kai Mei, Yanda Meng, Kaize Ding, Fan Yang, Mengnan Du, Yongfeng Zhang

We employ a probing technique to extract representations from different layers of the model and apply these to classification tasks.
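As a concrete illustration of the layer-wise probing idea above, the minimal sketch below extracts hidden states from every layer of a small causal LM and fits a linear probe per layer. This is not the paper's code: the model name ("gpt2"), the two toy sentences, and the binary labels are placeholders, and a real study would use many more examples with held-out evaluation.

```python
# Minimal layer-wise probing sketch (not the paper's code): extract hidden
# states from every layer of a causal LM and fit a linear probe per layer.
# Model name, texts, and labels below are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder; the paper studies larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

texts = ["Paris is the capital of France.", "Paris is the capital of Italy."]
labels = [1, 0]  # toy factual/counterfactual labels

feats_per_layer = None
with torch.no_grad():
    for text in texts:
        inputs = tok(text, return_tensors="pt")
        hidden = model(**inputs).hidden_states  # (n_layers + 1) tensors of [1, seq, dim]
        vecs = [h[0, -1].numpy() for h in hidden]  # last-token representation per layer
        if feats_per_layer is None:
            feats_per_layer = [[] for _ in vecs]
        for i, v in enumerate(vecs):
            feats_per_layer[i].append(v)

# Probe accuracy per layer indicates at which depth the concept is encoded.
for layer, feats in enumerate(feats_per_layer):
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(layer, probe.score(feats, labels))
```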

Towards LLM-RecSys Alignment with Textual ID Learning

1 code implementation 27 Mar 2024 Juntao Tan, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Zelong Li, Yongfeng Zhang

The results show that the zero-shot performance of the pre-trained foundation model is comparable to, or even better than, that of some traditional recommendation models trained with supervision, demonstrating the potential of the IDGen paradigm to serve as the foundation model for generative recommendation.

Sequential Recommendation, Text Generation

Large Language Models in Biomedical and Health Informatics: A Bibliometric Review

no code implementations 24 Mar 2024 Huizi Yu, Lizhou Fan, Lingyao Li, Jiayan Zhou, Zihui Ma, Lu Xian, Wenyue Hua, Sijia He, Mingyu Jin, Yongfeng Zhang, Ashvin Gandhi, Xin Ma

Large Language Models (LLMs) have rapidly become important tools in Biomedical and Health Informatics (BHI), enabling new ways to analyze data, treat patients, and conduct research.

Management, Medical Diagnosis

What if LLMs Have Different World Views: Simulating Alien Civilizations with LLM-based Agents

no code implementations 20 Feb 2024 Mingyu Jin, Beichen Wang, Zhaoqian Xue, Suiyuan Zhu, Wenyue Hua, Hua Tang, Kai Mei, Mengnan Du, Yongfeng Zhang

In this study, we introduce "CosmoAgent," an innovative artificial intelligence framework utilizing Large Language Models (LLMs) to simulate complex interactions between human and extraterrestrial civilizations, with a special emphasis on Stephen Hawking's cautionary advice about not sending radio signals haphazardly into the universe.

Decision Making

EmojiCrypt: Prompt Encryption for Secure Communication with Large Language Models

2 code implementations 8 Feb 2024 Guo Lin, Wenyue Hua, Yongfeng Zhang

While these models offer substantial benefits in terms of accessibility and functionality, they also introduce significant privacy concerns: the transmission and storage of user data in cloud infrastructures pose substantial risks of data breaches and unauthorized access to sensitive information. Even if the data is encrypted in transit and at rest, the LLM service provider itself still sees its real contents, preventing individuals or entities from confidently using such LLM services.

Sentiment Analysis
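The sketch below is a toy illustration of the general idea of transforming a prompt on the client side before it reaches a cloud LLM; it is not the EmojiCrypt method itself, and the sensitive values and placeholder tokens are made up for illustration.

```python
# Toy illustration of client-side prompt transformation (not the EmojiCrypt
# method): sensitive values are replaced with opaque placeholders before the
# prompt leaves the user's device, and mapped back locally afterwards.
SENSITIVE = {
    "Alice Smith": "<NAME_1>",
    "4111 1111 1111 1111": "<CARD_1>",
}

def protect(prompt: str) -> str:
    for value, placeholder in SENSITIVE.items():
        prompt = prompt.replace(value, placeholder)
    return prompt

def restore(text: str) -> str:
    for value, placeholder in SENSITIVE.items():
        text = text.replace(placeholder, value)
    return text

prompt = "Summarize the billing dispute for Alice Smith, card 4111 1111 1111 1111."
masked = protect(prompt)
print(masked)            # only this masked prompt would be sent to the cloud LLM
print(restore(masked))   # the mapping stays local, so the user can recover it
```

Only the masked prompt leaves the device, so the provider never sees the raw names or numbers; the mapping needed to restore them stays local.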

TrustAgent: Towards Safe and Trustworthy LLM-based Agents through Agent Constitution

1 code implementation 2 Feb 2024 Wenyue Hua, Xianjun Yang, Zelong Li, Wei Cheng, Yongfeng Zhang

This paper presents TrustAgent, an Agent-Constitution-based agent framework, as an initial investigation into improving the safety dimension of trustworthiness in LLM-based agents.

Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents

1 code implementation 1 Feb 2024 Zelong Li, Wenyue Hua, Hao Wang, He Zhu, Yongfeng Zhang

A stack-based LLM plan generation process is then conducted under the supervision of the automaton to ensure that the generated plan satisfies the constraints, making the planning process controllable.
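To illustrate how an automaton can supervise stack-based plan generation, here is a toy sketch under assumed names: the grammar, the action names, and the choose() stub (standing in for the LLM's selection among allowed expansions) are all placeholders, not Formal-LLM's implementation.

```python
# Toy sketch of stack-based, automaton-supervised plan generation: the agent
# may only choose among expansions the grammar currently allows, so every
# finished plan is valid by construction.

GRAMMAR = {
    # nonterminal -> allowed expansions; terminals are concrete actions
    "PLAN": [["retrieve_document", "PLAN"], ["answer_question"]],
}

def choose(options, step):
    # Stand-in for "ask the LLM to pick one of the allowed expansions";
    # here we expand twice and then close the plan, purely for illustration.
    return options[0] if step < 2 else options[-1]

def generate_plan():
    stack, plan, step = ["PLAN"], [], 0
    while stack:
        symbol = stack.pop()
        if symbol in GRAMMAR:                  # nonterminal: expand it
            expansion = choose(GRAMMAR[symbol], step)
            stack.extend(reversed(expansion))  # push right-to-left
            step += 1
        else:                                  # terminal: record the action
            plan.append(symbol)
    return plan

print(generate_plan())
# ['retrieve_document', 'retrieve_document', 'answer_question']
```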

PAP-REC: Personalized Automatic Prompt for Recommendation Language Model

1 code implementation 1 Feb 2024 Zelong Li, Jianchao Ji, Yingqiang Ge, Wenyue Hua, Yongfeng Zhang

In this paper, we propose PAP-REC, a framework that generates the Personalized Automatic Prompt for RECommendation language models, mitigating the inefficiency and ineffectiveness of manually designed prompts.

Language Modelling

Health-LLM: Personalized Retrieval-Augmented Disease Prediction System

1 code implementation 1 Feb 2024 Mingyu Jin, Qinkai Yu, Dong Shu, Chong Zhang, Lizhou Fan, Wenyue Hua, Suiyuan Zhu, Yanda Meng, Zhenting Wang, Mengnan Du, Yongfeng Zhang

Compared to traditional health management applications, our system has three main advantages: (1) it integrates health reports and medical knowledge into a large model so that relevant questions can be posed to the large language model for disease prediction; (2) it leverages a retrieval-augmented generation (RAG) mechanism to enhance feature extraction; (3) it incorporates a semi-automated feature-updating framework that can merge and delete features to improve the accuracy of disease prediction.

Disease Prediction, Language Modelling +3
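The snippet below sketches the retrieval-augmented prompting idea in point (2) in heavily simplified form: TF-IDF retrieval over a three-sentence toy knowledge base stands in for the real retriever, and the health report and knowledge snippets are invented placeholders.

```python
# Minimal retrieval-augmented prompting sketch (a simplification, not the
# Health-LLM system): retrieve the medical notes most similar to a health
# report, then assemble a disease-prediction prompt for an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge = [
    "Elevated fasting glucose above 126 mg/dL suggests diabetes.",
    "Persistent high blood pressure increases risk of cardiovascular disease.",
    "Low hemoglobin levels are a common indicator of anemia.",
]
report = "Fasting glucose 150 mg/dL, blood pressure normal, hemoglobin normal."

vec = TfidfVectorizer().fit(knowledge + [report])
sims = cosine_similarity(vec.transform([report]), vec.transform(knowledge))[0]
top = [knowledge[i] for i in sims.argsort()[::-1][:2]]  # top-2 retrieved facts

prompt = (
    "Health report: " + report + "\n"
    "Relevant medical knowledge:\n- " + "\n- ".join(top) + "\n"
    "Question: which diseases is this patient most at risk of?"
)
print(prompt)  # this prompt would be sent to the LLM for disease prediction
```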

Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks

no code implementations 31 Jan 2024 Wenyue Hua, Jiang Guo, Mingwen Dong, Henghui Zhu, Patrick Ng, Zhiguo Wang

Our analysis of the chain-of-thought generations of edited models further uncovers key reasons behind the inadequacy of existing knowledge editing methods from a reasoning standpoint, involving fact-wise editing, fact recall ability, and coherence in generation.

counterfactual knowledge editing

The Impact of Reasoning Step Length on Large Language Models

1 code implementation 10 Jan 2024 Mingyu Jin, Qinkai Yu, Dong Shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du

Alternatively, shortening the reasoning steps, even while preserving the key information, significantly diminishes the reasoning abilities of models.

NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes

1 code implementation 22 Dec 2023 Lizhou Fan, Wenyue Hua, Lingyao Li, Haoyang Ling, Yongfeng Zhang

Complex reasoning ability is one of the most important features of current LLMs, which has also been leveraged to play an integral role in complex decision-making tasks.

LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem

1 code implementation 6 Dec 2023 Yingqiang Ge, Yujie Ren, Wenyue Hua, Shuyuan Xu, Juntao Tan, Yongfeng Zhang

We envision that LLMs' impact will not be limited to the AI application level; instead, they will in turn revolutionize the design and implementation of computer systems, architecture, software, and programming languages, characterized by several main concepts: LLM as OS (system-level), Agents as Applications (application-level), Natural Language as Programming Interface (user-level), and Tools as Devices/Libraries (hardware/middleware-level).

Language Modelling, Large Language Model

GenRec: Large Language Model for Generative Recommendation

1 code implementation 2 Jul 2023 Jianchao Ji, Zelong Li, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Juntao Tan, Yongfeng Zhang

Subsequently, we use these prompts to fine-tune the LLaMA backbone LLM on a dataset of user-item interactions, represented by textual data, to capture user preferences and item characteristics.

Language Modelling, Large Language Model +1
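As a rough sketch of how user-item interactions can be rendered as textual fine-tuning examples, the code below builds prompt/completion pairs from a toy interaction log; the prompt template and movie titles are placeholders, and GenRec's actual prompts and training recipe may differ.

```python
# Rough sketch of turning user-item interactions into textual training
# examples (illustrative only; GenRec's actual prompt format may differ).
interactions = {
    "user_1": ["The Matrix", "Inception", "Interstellar"],
    "user_2": ["Titanic", "The Notebook", "La La Land"],
}

def build_example(history):
    # The last item is the generation target; earlier items form the prompt.
    prompt = (
        "The user has watched: " + ", ".join(history[:-1])
        + ". Recommend the next movie:"
    )
    return {"prompt": prompt, "completion": " " + history[-1]}

examples = [build_example(h) for h in interactions.values()]
print(examples[0])
# These prompt/completion pairs would then be fed to a standard supervised
# fine-tuning recipe for a causal LM such as a LLaMA backbone.
```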

OpenP5: An Open-Source Platform for Developing, Training, and Evaluating LLM-based Recommender Systems

4 code implementations 19 Jun 2023 Shuyuan Xu, Wenyue Hua, Yongfeng Zhang

In recent years, the integration of Large Language Models (LLMs) into recommender systems has garnered interest among both practitioners and researchers.

Benchmarking, Sequential Recommendation +1

UP5: Unbiased Foundation Model for Fairness-aware Recommendation

no code implementations 20 May 2023 Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang

However, at present, there is a lack of understanding regarding the level of fairness exhibited by recommendation foundation models and the appropriate methods for equitably treating different groups of users in foundation models.

Decision Making, Fairness +1

How to Index Item IDs for Recommendation Foundation Models

4 code implementations 11 May 2023 Wenyue Hua, Shuyuan Xu, Yingqiang Ge, Yongfeng Zhang

To emphasize the importance of item indexing, we first discuss the issues of several trivial item indexing methods, such as random indexing, title indexing, and independent indexing.

Language Modelling
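A toy contrast of the item-indexing strategies named above is sketched below; the catalog and identifier formats are made up, and the point is only to show how the choice of ID affects what the language model sees.

```python
# Toy contrast of three item-indexing strategies (identifiers are made up).
import random
random.seed(0)

items = ["The Matrix", "Inception", "Interstellar"]

# Random indexing: arbitrary numeric IDs; tokens carry no meaning and
# unrelated items can end up sharing sub-tokens.
random_index = {t: str(random.randint(1000, 9999)) for t in items}

# Title indexing: the title itself is the ID; meaningful, but long titles
# cost many tokens and different items may have similar or identical titles.
title_index = {t: t for t in items}

# Independent indexing: each item gets its own new vocabulary token, so no
# two items share sub-tokens, at the cost of a larger vocabulary.
independent_index = {t: f"<item_{i}>" for i, t in enumerate(items)}

print(random_index)
print(title_index)
print(independent_index)
```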

OpenAGI: When LLM Meets Domain Experts

1 code implementation NeurIPS 2023 Yingqiang Ge, Wenyue Hua, Kai Mei, Jianchao Ji, Juntao Tan, Shuyuan Xu, Zelong Li, Yongfeng Zhang

This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI).

Benchmarking, Natural Language Queries

LegalRelectra: Mixed-domain Language Modeling for Long-range Legal Text Comprehension

no code implementations 16 Dec 2022 Wenyue Hua, Yuchen Zhang, Zhe Chen, Josie Li, Melanie Weber

We show that our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text.

Language Modelling, Reading Comprehension

Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing

no code implementations 8 Nov 2022 Wenyue Hua, Lifeng Jin, Linfeng Song, Haitao Mi, Yongfeng Zhang, Dong Yu

Pretrained natural language processing (NLP) models have achieved high overall performance, but they still make systematic errors.

EntQA: Entity Linking as Question Answering

1 code implementation ICLR 2022 Wenzheng Zhang, Wenyue Hua, Karl Stratos

A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base.

Benchmarking, Entity Linking +4
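The sketch below illustrates the conventional two-stage pipeline described in the snippet, with a made-up mention list and knowledge base standing in for a real NER model and KB; EntQA instead inverts this order, first retrieving candidate entities and then finding their mentions via question answering.

```python
# Toy sketch of the conventional two-stage entity-linking pipeline: first
# detect mentions, then link each mention to a knowledge-base entity.
# The mention list and knowledge base here are made-up placeholders.
KB = {
    "Paris": ["Paris_(France)", "Paris_(Texas)"],
    "France": ["France"],
}

def find_mentions(text):
    # Stage 1: mention detection (a real system would use an NER model).
    return [span for span in KB if span in text]

def link(mention, context):
    # Stage 2: entity disambiguation (a real system scores candidates
    # against the context; here we simply take the first candidate).
    return KB[mention][0]

text = "Paris is the capital of France."
for mention in find_mentions(text):
    print(mention, "->", link(mention, text))
```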
