Search Results for author: Xingyao Wang

Found 16 papers, 7 papers with code

ViStruct: Visual Structural Knowledge Extraction via Curriculum Guided Code-Vision Representation

1 code implementation · 22 Nov 2023 · Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, Heng Ji

We adopt a weakly-supervised approach to directly generate visual event structures from captions for ViStruct training, capitalizing on abundant image-caption pairs from the web.

Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown

no code implementations · 16 Nov 2023 · Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng

When presented with such unanswerable questions, an LLM should appropriately convey uncertainty, and be able to challenge the premise and refuse to generate a response.

Question Answering

R-Tuning: Teaching Large Language Models to Refuse Unknown Questions

no code implementations · 16 Nov 2023 · Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang

This approach is formalized by first identifying the knowledge gap between parametric knowledge and the instruction tuning data.

Defining a New NLP Playground

no code implementations · 31 Oct 2023 · Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji

The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history.

CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

1 code implementation · 29 Sep 2023 · Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji

It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.

Language Modelling Mathematical Reasoning
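
The retrieve-then-use idea behind CRAFT can be illustrated with a minimal sketch. The toolset contents, the tool names, and the word-overlap similarity below are all assumptions for illustration; CRAFT itself builds its toolsets from curated task data and uses a more sophisticated retriever.

```python
# Hypothetical sketch: rank tools in a specialized toolset by keyword
# overlap between the task description and each tool's docstring.
def retrieve_tools(task: str, toolset: dict, k: int = 2) -> list:
    """Return the names of the k tools whose descriptions best match the task."""
    task_words = set(task.lower().split())
    scored = sorted(
        toolset.items(),
        key=lambda item: len(task_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# Illustrative toolset (names and descriptions are made up).
toolset = {
    "solve_equation": "solve a linear or quadratic equation symbolically",
    "plot_function": "plot a mathematical function over an interval",
    "summarize_text": "summarize a long text passage",
}
tools = retrieve_tools("solve this quadratic equation", toolset, k=1)
```

The retrieved tool definitions would then be placed in the LLM's context so it can call them while solving the task.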

MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback

no code implementations · 19 Sep 2023 · Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji

However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users.

Decision Making

Making Pre-trained Language Models both Task-solvers and Self-calibrators

1 code implementation · 21 Jul 2023 · Yangyi Chen, Xingyao Wang, Heng Ji

In this work, we consider the practical scenario in which we need to effectively utilize training samples to make PLMs both task-solvers and self-calibrators.

Adversarial Defense

LeTI: Learning to Generate from Textual Interactions

1 code implementation · 17 May 2023 · Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji

LeTI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task.

Code Generation Event Argument Extraction
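
The training-sequence construction described above can be sketched as a simple concatenation. The section delimiters below are hypothetical, not LeTI's actual format; the sketch only shows that feedback is appended to the instruction and program, and only when the program fails.

```python
from typing import Optional

def build_training_example(instruction: str, program: str,
                           feedback: Optional[str]) -> str:
    """Concatenate instruction, LM-generated program, and textual feedback
    into one fine-tuning sequence. Feedback is included only when the
    program failed the task, mirroring LeTI's setup; delimiters are
    illustrative."""
    parts = [f"### Instruction\n{instruction}", f"### Program\n{program}"]
    if feedback is not None:  # only present for failed programs
        parts.append(f"### Feedback\n{feedback}")
    return "\n\n".join(parts)

# A successful program: no feedback section is appended.
example = build_training_example(
    "Return the square of x.",
    "def f(x):\n    return x * x",
    None,
)
```

The model is then fine-tuned on such sequences with the ordinary language-modeling objective, as the abstract describes.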

ECG-CL: A Comprehensive Electrocardiogram Interpretation Method Based on Continual Learning

no code implementations · 10 Apr 2023 · Hongxiang Gao, Xingyao Wang, Zhenghua Chen, Min Wu, Jianqing Li, Chengyu Liu

From the perspective of intelligent wearable applications, we also confirm the feasibility of a comprehensive ECG interpretation algorithm based on single-lead ECGs.

Continual Learning Incremental Learning +1

POTATO: The Portable Text Annotation Tool

1 code implementation · 16 Dec 2022 · Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Jackson Sargent, Apostolos Dedeloudis, David Jurgens

We present POTATO, the Portable text annotation tool, a free, fully open-sourced annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests).

Active Learning text annotation

Code4Struct: Code Generation for Few-Shot Event Structure Prediction

1 code implementation · 23 Oct 2022 · Xingyao Wang, Sha Li, Heng Ji

As a case study, we formulate Event Argument Extraction (EAE) as converting text into event-argument structures that can be represented as a class object using code.

Code Generation Event Argument Extraction +3
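
The class-object formulation above can be illustrated with a minimal sketch. The event type, argument names, and extracted fillers below are hypothetical examples, not taken from the paper's ontology or data.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical event-argument structure in the style of Code4Struct:
# an event type becomes a class, and its argument roles become typed fields.
@dataclass
class Transport:
    """A Transport event: an agent moves an artifact to a destination."""
    agent: List[str] = field(default_factory=list)
    artifact: List[str] = field(default_factory=list)
    destination: List[str] = field(default_factory=list)

# Given the class definition and an input sentence, a code LLM can emit
# the instantiation, which directly encodes the extracted arguments.
event = Transport(
    agent=["the company"],
    artifact=["goods"],
    destination=["Shanghai"],
)
```

Representing the output as code lets ordinary type checking and parsing validate the predicted structure.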

A Causal Intervention Scheme for Semantic Segmentation of Quasi-periodic Cardiovascular Signals

no code implementations · 19 Sep 2022 · Xingyao Wang, Yuwen Li, Hongxiang Gao, Xianghong Cheng, Jianqing Li, Chengyu Liu

To address this issue, we establish a structural causal model as the foundation to customize the intervention approaches on Am and Ar, respectively.

Segmentation Semantic Segmentation

Sequence-to-Sequence Load Disaggregation Using Multi-Scale Residual Neural Network

no code implementations · 25 Sep 2020 · Gan Zhou, Zhi Li, Meng Fu, Yanjun Feng, Xingyao Wang, Chengwei Huang

Second, we propose dilated convolutions to curtail the excessive number of model parameters and obtain a larger receptive field, and a multi-scale structure to learn mixed data features in a more targeted way.

Non-Intrusive Load Monitoring

Temporal-Framing Adaptive Network for Heart Sound Segmentation without Prior Knowledge of State Duration

no code implementations · 9 May 2020 · Xingyao Wang, Chengyu Liu, Yuwen Li, Xianghong Cheng, Jianqing Li, Gari D. Clifford

Moreover, the TFAN-based method achieved overall F1 scores of 99.2%, 94.4% and 91.4% on LEVEL-I, -II and -III data respectively, compared to 98.4%, 88.54% and 79.80% for the current state-of-the-art method.

Segmentation Time Series Analysis
