Search Results for author: Xingyao Wang

Found 20 papers, 12 papers with code

Text-Based Reasoning About Vector Graphics

no code implementations • 9 Apr 2024 • Zhenhailong Wang, Joy Hsu, Xingyao Wang, Kuan-Hao Huang, Manling Li, Jiajun Wu, Heng Ji

By casting an image to a text-based representation, we can leverage the power of language models to learn alignment from SVG to visual primitives and generalize to unseen question-answering tasks.

Tasks: Descriptive, Language Modelling, +2 more
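A minimal sketch of the idea in the abstract above: serialize an SVG as plain text and place it in a prompt so a text-only language model can reason about the visual primitives. The inline SVG and the prompt template are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: cast a vector graphic to text and build a question-answering prompt.
# The SVG content and prompt wording are illustrative, not the paper's code.
svg_text = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="30" cy="30" r="10" fill="red"/>
  <rect x="55" y="55" width="20" height="20" fill="blue"/>
</svg>"""

def build_prompt(svg: str, question: str) -> str:
    return (
        "Below is an image described in SVG (XML) format.\n\n"
        f"{svg}\n\n"
        f"Question: {question}\n"
        "Answer by reasoning over the SVG primitives (circles, rects, paths)."
    )

prompt = build_prompt(svg_text, "How many circles are in the image?")
print(prompt)  # this string would be sent to any text-only LLM API
```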

Executable Code Actions Elicit Better LLM Agents

1 code implementation • 1 Feb 2024 • Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji

LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., inability to compose multiple tools).

Tasks: Language Modelling, Large Language Model
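A toy illustration of the executable-code-action idea described above: rather than emitting a JSON tool call, the agent emits Python code, which is executed and whose output becomes the observation for the next turn. The `fake_llm` policy and the in-process `exec` are placeholders; this is not the paper's CodeAct implementation, which runs actions in a sandboxed interpreter.

```python
# Toy agent loop: code as action, captured stdout as observation.
# The fixed "policy" and unsandboxed exec are illustrative assumptions only.
import io
import contextlib

def fake_llm(history: str) -> str:
    # Placeholder policy: always emit the same code action.
    return "result = sum(range(10))\nprint(result)"

def execute_action(code: str) -> str:
    buffer = io.StringIO()
    namespace: dict = {}
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)  # in practice this should run in a sandbox
    return buffer.getvalue().strip()

history = "User: What is the sum of 0..9?"
action = fake_llm(history)
observation = execute_action(action)
history += f"\nAction:\n{action}\nObservation: {observation}"
print(history)
```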

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

no code implementations • 1 Jan 2024 • Ke Yang, Jiateng Liu, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, ChengXiang Zhai

The prominent large language models (LLMs) of today differ from past language models not only in size, but also in the fact that they are trained on a combination of natural language and formal language (code).

Tasks: Code Generation

ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation

1 code implementation • 22 Nov 2023 • Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, Heng Ji

We adopt a weakly-supervised approach to directly generate visual event structures from captions for ViStruct training, capitalizing on abundant image-caption pairs from the web.

Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge

1 code implementation • 16 Nov 2023 • Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng

Can large language models (LLMs) express their uncertainty in situations where they lack sufficient parametric knowledge to generate reasonable responses?

Tasks: Question Answering, valid
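One crude way to operationalize the question raised in the abstract above is to check whether a model's response contains abstention or hedging phrases. The phrase list and checker below are illustrative assumptions, not the paper's evaluation protocol.

```python
# Illustrative only: string-matching check for whether a response abstains.
# The phrase list is an assumption, not the paper's actual metric.
ABSTENTION_PHRASES = (
    "i don't know",
    "i am not sure",
    "i'm not sure",
    "cannot answer",
    "no information",
)

def expresses_uncertainty(response: str) -> bool:
    lowered = response.lower()
    return any(phrase in lowered for phrase in ABSTENTION_PHRASES)

print(expresses_uncertainty("I'm not sure; that fact is outside my knowledge."))  # True
print(expresses_uncertainty("The capital of France is Paris."))                   # False
```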

R-Tuning: Teaching Large Language Models to Refuse Unknown Questions

1 code implementation • 16 Nov 2023 • Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang

This approach is formalized by first identifying the knowledge gap between parametric knowledge and the instruction tuning data.

Tasks: Hallucination, Sentence
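A rough sketch of the knowledge-gap idea in the abstract above: compare the model's own answer on each instruction-tuning example with the ground truth, and relabel the target as a refusal when they disagree. The `model_answer` stub and the refusal template are assumptions, not R-Tuning's exact recipe.

```python
# Sketch of refusal-aware data construction under stated assumptions.
def model_answer(question: str) -> str:
    # Placeholder for sampling the base model's answer.
    return {"Who wrote Hamlet?": "Shakespeare"}.get(question, "unknown")

def build_tuning_example(question: str, gold: str) -> dict:
    predicted = model_answer(question)
    if predicted.strip().lower() == gold.strip().lower():
        target = gold  # model appears to know the answer: keep it
    else:
        target = "I am not sure I know the answer to this question."
    return {"instruction": question, "output": target}

data = [("Who wrote Hamlet?", "Shakespeare"),
        ("Which city will host the 2051 Olympics?", "undetermined")]
print([build_tuning_example(q, a) for q, a in data])
```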

Defining a New NLP Playground

no code implementations • 31 Oct 2023 • Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji

The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history.

CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

1 code implementation • 29 Sep 2023 • Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji

It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.

Tasks: Language Modelling, Mathematical Reasoning
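A minimal stand-in for the tool-retrieval component described in the CRAFT abstract above: score each tool's docstring against the task description and return the best matches. The toy toolset and token-overlap scorer are assumptions, not the paper's retriever.

```python
# Illustrative tool retrieval by docstring/task token overlap (assumed design).
def area_of_circle(r: float) -> float:
    """Compute the area of a circle given its radius."""
    return 3.141592653589793 * r * r

def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

TOOLSET = [area_of_circle, word_count]

def retrieve_tools(task: str, toolset, k: int = 1):
    task_tokens = set(task.lower().split())
    def score(tool):
        doc_tokens = set((tool.__doc__ or "").lower().split())
        return len(task_tokens & doc_tokens)
    return sorted(toolset, key=score, reverse=True)[:k]

print([t.__name__ for t in retrieve_tools("find the area of a circle with radius 2", TOOLSET)])
# ['area_of_circle']
```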

MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback

1 code implementation • 19 Sep 2023 • Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji

However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users.

Tasks: Decision Making
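A sketch of the kind of multi-turn evaluation loop the MINT abstract above contrasts with single-turn benchmarks: the model may call a tool and receives natural-language feedback when its answer is wrong. Every component here (the toy model policy, the calculator tool, the feedback message) is a placeholder, not the MINT harness.

```python
# Multi-turn loop with tool use and language feedback (illustrative stubs only).
def model(prompt: str) -> str:
    # Toy policy: guess wrong, then call the tool, then answer with its output.
    if "Tool output:" in prompt:
        return "ANSWER: " + prompt.rsplit("Tool output:", 1)[1].strip().splitlines()[0]
    if "Feedback:" in prompt:
        return "TOOL: 6*7"
    return "ANSWER: 40"

def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))  # toy tool; unsafe in general

def evaluate(task: str, gold: str, max_turns: int = 3) -> bool:
    prompt = task
    for _ in range(max_turns):
        output = model(prompt)
        if output.startswith("TOOL:"):
            observation = calculator(output[len("TOOL:"):].strip())
            prompt += f"\nTool output: {observation}"
        else:  # "ANSWER: ..."
            answer = output[len("ANSWER:"):].strip()
            if answer == gold:
                return True
            prompt += "\nFeedback: that answer is incorrect; try using the calculator tool."
    return False

print(evaluate("What is 6 * 7?", gold="42"))  # True after three turns
```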

Making Pre-trained Language Models both Task-solvers and Self-calibrators

1 code implementation • 21 Jul 2023 • Yangyi Chen, Xingyao Wang, Heng Ji

In this work, we consider the practical scenario that we need to effectively utilize training samples to make PLMs both task-solvers and self-calibrators.

Tasks: Adversarial Defense

LeTI: Learning to Generate from Textual Interactions

1 code implementation • 17 May 2023 • Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji

We explore LMs' potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback.

Tasks: Code Generation, Event Argument Extraction
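A sketch of the feedback-construction step suggested by the LeTI abstract above: run a generated program against a test and keep the full error traceback as textual feedback rather than a bare pass/fail label. The candidate program and test case are toy examples; LeTI's training pipeline is not reproduced here.

```python
# Turn execution errors into textual feedback (toy example under assumptions).
import traceback

candidate_program = """
def add(a, b):
    return a - b   # deliberate bug
"""

def run_with_feedback(program: str) -> tuple[bool, str]:
    namespace: dict = {}
    try:
        exec(program, namespace)
        assert namespace["add"](2, 3) == 5, "add(2, 3) should be 5"
        return True, "All tests passed."
    except Exception:
        # Textual feedback: a full traceback the model can learn from,
        # rather than a binary failure label.
        return False, traceback.format_exc()

ok, feedback = run_with_feedback(candidate_program)
print(ok)
print(feedback)
```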

ECG-CL: A Comprehensive Electrocardiogram Interpretation Method Based on Continual Learning

no code implementations • 10 Apr 2023 • Hongxiang Gao, Xingyao Wang, Zhenghua Chen, Min Wu, Jianqing Li, Chengyu Liu

From the perspective of intelligent wearable applications, the results also confirm the feasibility of a comprehensive ECG interpretation algorithm based on single-lead ECGs.

Tasks: Continual Learning, Incremental Learning, +1 more

POTATO: The Portable Text Annotation Tool

1 code implementation • 16 Dec 2022 • Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Jackson Sargent, Apostolos Dedeloudis, David Jurgens

We present POTATO, the Portable text annotation tool, a free, fully open-sourced annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests).

Tasks: Active Learning, text annotation

Code4Struct: Code Generation for Few-Shot Event Structure Prediction

1 code implementation • 23 Oct 2022 • Xingyao Wang, Sha Li, Heng Ji

As a case study, we formulate Event Argument Extraction (EAE) as converting text into event-argument structures that can be represented as a class object using code.

Tasks: Code Generation, Event Argument Extraction, +3 more
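An illustration of the idea in the Code4Struct abstract above: represent an event type and its argument roles as a Python class, so that an LLM can be prompted to instantiate it from text. The Transport event and its roles below are illustrative, not the exact ontology or prompts used in the paper.

```python
# Event-argument structure as code (illustrative event type and roles).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transport:
    """Event: an AGENT transports an ARTIFACT to a DESTINATION."""
    agent: List[str] = field(default_factory=list)
    artifact: List[str] = field(default_factory=list)
    destination: List[str] = field(default_factory=list)

# Given "The soldiers moved the detainees to a prison camp.",
# an LLM shown the class definition would be asked to complete:
event = Transport(
    agent=["soldiers"],
    artifact=["detainees"],
    destination=["prison camp"],
)
print(event)
```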

A Causal Intervention Scheme for Semantic Segmentation of Quasi-periodic Cardiovascular Signals

no code implementations • 19 Sep 2022 • Xingyao Wang, Yuwen Li, Hongxiang Gao, Xianghong Cheng, Jianqing Li, Chengyu Liu

To address this issue, we establish a structural causal model as the foundation to customize the intervention approaches on Am and Ar, respectively.

Tasks: Attribute, Segmentation, +1 more

Sequence-to-Sequence Load Disaggregation Using Multi-Scale Residual Neural Network

no code implementations • 25 Sep 2020 • Gan Zhou, Zhi Li, Meng Fu, Yanjun Feng, Xingyao Wang, Chengwei Huang

Secondly, we propose dilated convolution to curtail the excessive quantity of model parameters and obtain a larger receptive field, and a multi-scale structure to learn mixed data features in a more targeted way.

Tasks: Non-Intrusive Load Monitoring
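A minimal PyTorch sketch of the architectural ingredients named in the abstract above: a residual block that fuses parallel dilated 1-D convolutions at different dilation rates. The channel count, kernel size, and dilation rates are illustrative choices, not the architecture reported in the paper.

```python
# Multi-scale residual block from parallel dilated 1-D convolutions
# (illustrative hyperparameters, not the paper's configuration).
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    def __init__(self, channels: int = 32, kernel_size: int = 3,
                 dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size,
                      padding=d * (kernel_size - 1) // 2, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv1d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(self.fuse(multi_scale) + x)  # residual connection

x = torch.randn(8, 32, 512)                # (batch, channels, sequence length)
print(MultiScaleResidualBlock()(x).shape)  # torch.Size([8, 32, 512])
```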

Temporal-Framing Adaptive Network for Heart Sound Segmentation without Prior Knowledge of State Duration

no code implementations • 9 May 2020 • Xingyao Wang, Chengyu Liu, Yuwen Li, Xianghong Cheng, Jianqing Li, Gari D. Clifford

Moreover, the TFAN-based method achieved overall F1 scores of 99.2%, 94.4% and 91.4% on LEVEL-I, -II and -III data respectively, compared to 98.4%, 88.54% and 79.80% for the current state-of-the-art method.

Tasks: Segmentation, Time Series Analysis
