1 code implementation • 22 Nov 2023 • Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, Heng Ji
We adopt a weakly-supervised approach to directly generate visual event structures from captions for ViStruct training, capitalizing on abundant image-caption pairs from the web.
no code implementations • 16 Nov 2023 • Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng
When presented with such unanswerable questions, an LLM should appropriately convey uncertainty and be able to challenge the premise or refuse to generate a response.
no code implementations • 16 Nov 2023 • Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang
This approach is formalized by first identifying the knowledge gap between parametric knowledge and the instruction tuning data.
no code implementations • 31 Oct 2023 • Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji
The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history.
1 code implementation • 29 Sep 2023 • Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
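The tool-retrieval component described above can be illustrated with a minimal sketch. This is not the paper's implementation (which the abstract does not specify); the bag-of-words cosine retriever, the `ToolRetriever` class, and the example toolset are all illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToolRetriever:
    """Retrieve the tools most relevant to a task description
    from a curated toolset (name -> natural-language description)."""

    def __init__(self, toolset: dict):
        self.index = {name: Counter(desc.lower().split())
                      for name, desc in toolset.items()}

    def retrieve(self, task: str, k: int = 2) -> list:
        query = Counter(task.lower().split())
        scored = sorted(self.index.items(),
                        key=lambda kv: cosine(query, kv[1]),
                        reverse=True)
        return [name for name, _ in scored[:k]]

# Illustrative toolset; a real system would attach executable code to each name.
tools = {
    "solve_equation": "solve a linear or quadratic equation symbolically",
    "plot_function": "plot a mathematical function over a range",
    "web_search": "search the web for up-to-date information",
}
retriever = ToolRetriever(tools)
print(retriever.retrieve("solve the quadratic equation x^2 - 4 = 0", k=1))
# -> ['solve_equation']
```

In practice the retriever would use dense embeddings rather than word overlap, but the interface is the same: given a task, return the few tools worth placing in the LLM's context.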
no code implementations • 19 Sep 2023 • Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji
However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users.
1 code implementation • 21 Jul 2023 • Yangyi Chen, Xingyao Wang, Heng Ji
In this work, we consider the practical scenario in which we need to effectively utilize training samples to make PLMs both task-solvers and self-calibrators.
1 code implementation • 17 May 2023 • Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
LeTI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task.
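The training-data construction described above can be sketched as follows. This is a simplified illustration, not LeTI's actual pipeline: the helper names (`run_program`, `make_training_example`) and the plain-string concatenation format are assumptions, and the feedback here is just the raised exception rather than richer textual feedback.

```python
def run_program(program: str, test: str):
    """Execute an LM-generated program against a test case.
    Returns an error string on failure, or None if the test passes."""
    try:
        env = {}
        exec(program, env)   # define the generated function(s)
        exec(test, env)      # run the test assertion
        return None
    except Exception as e:
        return f"{type(e).__name__}: {e}"

def make_training_example(instruction: str, program: str, test: str) -> str:
    """Concatenate instruction, LM-generated program, and textual
    feedback; feedback is appended only when the program fails."""
    feedback = run_program(program, test)
    parts = [instruction, program]
    if feedback is not None:
        parts.append(f"Feedback: {feedback}")
    return "\n\n".join(parts)

buggy = "def add(a, b):\n    return a - b"  # fails the test below
example = make_training_example("Write add(a, b).", buggy,
                                "assert add(1, 2) == 3")
print("Feedback:" in example)  # True: failing programs carry feedback
```

Fine-tuning on such concatenations with the ordinary LM objective is what lets the model learn from its own failures without a reward model.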
no code implementations • 10 Apr 2023 • Hongxiang Gao, Xingyao Wang, Zhenghua Chen, Min Wu, Jianqing Li, Chengyu Liu
From the perspective of intelligent wearable applications, the possibility of a comprehensive ECG interpretation algorithm based on single-lead ECGs is also confirmed.
1 code implementation • 16 Dec 2022 • Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Jackson Sargent, Apostolos Dedeloudis, David Jurgens
We present POTATO, the Portable text annotation tool, a free, fully open-sourced annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests).
1 code implementation • 23 Oct 2022 • Xingyao Wang, Sha Li, Heng Ji
As a case study, we formulate Event Argument Extraction (EAE) as converting text into event-argument structures that can be represented as a class object using code.
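The formulation above can be made concrete with a toy sketch. The event type, role names, and example sentence here are illustrative, not taken from the paper's ontology: the point is that extraction reduces to generating code that instantiates an event class.

```python
from dataclasses import dataclass, field

@dataclass
class Transport:
    """Illustrative event class: an agent transports an
    artifact to a destination. Each role holds text spans."""
    agent: list = field(default_factory=list)
    artifact: list = field(default_factory=list)
    destination: list = field(default_factory=list)

# Input text: "Kelly flew the documents to Beijing."
# Under this formulation, the model is prompted with the class
# definition and asked to emit the instantiation code below:
event = Transport(
    agent=["Kelly"],
    artifact=["the documents"],
    destination=["Beijing"],
)
print(event.destination)  # ['Beijing']
```

Casting the task as code generation lets the model reuse what it learned about class definitions, type constraints, and inheritance during code pretraining.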
no code implementations • 19 Sep 2022 • Xingyao Wang, Yuwen Li, Hongxiang Gao, Xianghong Cheng, Jianqing Li, Chengyu Liu
To address this issue, we establish a structural causal model as the foundation to customize the intervention approaches on Am and Ar, respectively.
1 code implementation • Findings (EMNLP) 2021 • Xingyao Wang, David Jurgens
Online conversations include more than just text.
Ranked #1 on Multimodal GIF Dialog on the GIF Reply Dataset
no code implementations • 20 Oct 2020 • Shaohuai Shi, Xianhao Zhou, Shutao Song, Xingyao Wang, Zilin Zhu, Xue Huang, Xinan Jiang, Feihu Zhou, Zhenyu Guo, Liqiang Xie, Rui Lan, Xianbin Ouyang, Yan Zhang, Jieqian Wei, Jing Gong, Weiliang Lin, Ping Gao, Peng Meng, Xiaomin Xu, Chenyang Guo, Bo Yang, Zhibo Chen, Yongjian Wu, Xiaowen Chu
Distributed training techniques have been widely deployed in large-scale deep neural networks (DNNs) training on dense-GPU clusters.
no code implementations • 25 Sep 2020 • Gan Zhou, Zhi Li, Meng Fu, Yanjun Feng, Xingyao Wang, Chengwei Huang
Secondly, we propose dilated convolution to reduce the excessive number of model parameters and obtain a larger receptive field, and a multi-scale structure to learn mixed data features in a more targeted way.
no code implementations • 9 May 2020 • Xingyao Wang, Chengyu Liu, Yuwen Li, Xianghong Cheng, Jianqing Li, Gari D. Clifford
Moreover, the TFAN-based method achieved overall F1 scores of 99.2%, 94.4%, and 91.4% on LEVEL-I, -II, and -III data respectively, compared to 98.4%, 88.54%, and 79.80% for the current state-of-the-art method.