no code implementations • EMNLP 2021 • Yiming Ju, Yuanzhe Zhang, Zhixing Tian, Kang Liu, Xiaohuan Cao, Wenting Zhao, Jinlong Li, Jun Zhao
Multiple-choice MRC is one of the most studied tasks in MRC due to the convenience of evaluation and the flexibility of answer format.
no code implementations • ICML 2020 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John Gregoire, Carla Gomes
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with constraint reasoning for solving pattern de-mixing problems, typically in an unsupervised or very-weakly-supervised setting.
1 code implementation • 2 Dec 2024 • Wenting Zhao, Nan Jiang, Celine Lee, Justin T Chiu, Claire Cardie, Matthias Gallé, Alexander M Rush
As a benchmark, Commit0 is designed to move beyond static one-shot code generation towards agents that must process long-form natural language specifications, adapt to multi-stage feedback, and generate code with complex dependencies.
no code implementations • 13 Nov 2024 • Shaden Shaar, Wayne Chen, Maitreyi Chatterjee, Barry Wang, Wenting Zhao, Claire Cardie
Our research shows that trigger effectiveness varies based on the extraction task's characteristics and data quality, with basic, automatically-generated triggers serving as a viable alternative to human-annotated ones.
1 code implementation • 18 Sep 2024 • Yi Lu, Jing Nathan Yan, Songlin Yang, Justin T. Chiu, Siyu Ren, Fei Yuan, Wenting Zhao, Zhiyong Wu, Alexander M. Rush
Broad textual understanding and in-context learning require language models that utilize full document contexts.
no code implementations • 5 Sep 2024 • Yuntian Deng, Wenting Zhao, Jack Hessel, Xiang Ren, Claire Cardie, Yejin Choi
The increasing availability of real-world conversation data offers exciting opportunities for researchers to study user-chatbot interactions.
1 code implementation • 21 Aug 2024 • Shangyi Geng, Wenting Zhao, Alexander M Rush
$k$-nearest neighbor language models ($k$NN-LMs), which integrate retrieval with next-word prediction, have demonstrated strong performance in language modeling as well as downstream NLP benchmarks.
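The core of a $k$NN-LM can be sketched in a few lines: the base LM's next-word distribution is interpolated with a distribution formed by softmaxing the negative distances of retrieved datastore neighbors. The function below is an illustrative sketch of that interpolation, not the paper's implementation; the parameter names (`lam`, `temperature`) and toy data are assumptions.

```python
import math

def knn_lm_prob(token, p_lm, neighbors, temperature=1.0, lam=0.25):
    """Interpolate a base LM distribution with a kNN distribution.

    p_lm: dict mapping token -> base LM probability.
    neighbors: list of (token, distance) pairs retrieved from a datastore.
    """
    # Softmax over negative distances gives the kNN distribution.
    weights = [math.exp(-d / temperature) for _, d in neighbors]
    total = sum(weights)
    p_knn = sum(w for (tok, _), w in zip(neighbors, weights) if tok == token) / total
    # Fixed-lambda mixture of the two distributions.
    return lam * p_knn + (1 - lam) * p_lm.get(token, 0.0)

# Toy usage: two of three retrieved neighbors are "cat", so the kNN
# term pulls probability mass toward it.
p = knn_lm_prob("cat", {"cat": 0.6, "dog": 0.4},
                [("cat", 1.0), ("dog", 1.0), ("cat", 1.0)])
```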
no code implementations • 15 Aug 2024 • Shachar Don-Yehiya, Ben Burtenshaw, Ramon Fernandez Astudillo, Cailean Osborne, Mimansa Jaiswal, Tzu-Sheng Kuo, Wenting Zhao, Idan Shenfeld, Andi Peng, Mikhail Yurochkin, Atoosa Kasirzadeh, Yangsibo Huang, Tatsunori Hashimoto, Yacine Jernite, Daniel Vila-Suero, Omri Abend, Jennifer Ding, Sara Hooker, Hannah Rose Kirk, Leshem Choshen
In this work, we bring together interdisciplinary experts to assess the opportunities and challenges to realizing an open ecosystem of human feedback for AI.
no code implementations • 24 Jul 2024 • Wenting Zhao, Tanya Goyal, Yu Ying Chiu, Liwei Jiang, Benjamin Newman, Abhilasha Ravichander, Khyathi Chandu, Ronan Le Bras, Claire Cardie, Yuntian Deng, Yejin Choi
While hallucinations of large language models (LLMs) prevail as a major challenge, existing evaluation benchmarks on factuality do not cover the diverse domains of knowledge that the real-world users of LLMs seek information about.
1 code implementation • 24 Jul 2024 • Wenting Zhao, Ge Gao, Claire Cardie, Alexander M. Rush
We curate CouldAsk, an evaluation benchmark composed of existing and new datasets for document-grounded question answering, specifically designed to study reformulating unanswerable questions.
1 code implementation • 24 Jun 2024 • Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou, Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, Haoran Ranran Zhang, Vipul Gupta, Yinghui Li, Tao Li, Fei Wang, Qin Liu, Tianlin Liu, Pengzhi Gao, Congying Xia, Chen Xing, Jiayang Cheng, Zhaowei Wang, Ying Su, Raj Sanjay Shah, Ruohao Guo, Jing Gu, Haoran Li, Kangda Wei, ZiHao Wang, Lu Cheng, Surangika Ranathunga, Meng Fang, Jie Fu, Fei Liu, Ruihong Huang, Eduardo Blanco, Yixin Cao, Rui Zhang, Philip S. Yu, Wenpeng Yin
This study focuses on LLMs assisting NLP researchers, particularly examining the effectiveness of LLMs in assisting with paper (meta-)reviewing and the recognizability of LLM-generated reviews.
no code implementations • 2 May 2024 • Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng
In addition to timestamped chat transcripts, we enrich the dataset with demographic data, including state, country, and hashed IP addresses, alongside request headers.
no code implementations • 17 Dec 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Qingyang Wu, Zhongfen Deng, Jiangshu Du, Shuaiqi Liu, Yunlong Xu, Philip S. Yu
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language, transforming them into structured outputs that combine elements of both natural language and intent/slot tags.
2 code implementations • 22 Nov 2023 • John X. Morris, Wenting Zhao, Justin T. Chiu, Vitaly Shmatikov, Alexander M. Rush
We consider the problem of language model inversion and show that next-token probabilities contain a surprising amount of information about the preceding text.
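A minimal way to see why next-token probabilities leak information about the prefix: in a toy bigram model the predicted distribution is a deterministic function of the previous token, so observing the distribution identifies that token exactly. This toy demo is an assumption-laden illustration of the intuition, not the paper's (learned) inversion method.

```python
# Toy bigram "LM": each context induces a distinct next-token distribution.
bigram = {
    "the": {"cat": 0.7, "dog": 0.3},
    "a":   {"cat": 0.2, "dog": 0.8},
}

def invert(dist):
    """Recover the context whose predicted distribution matches the
    observed one. Real LM inversion trains a model to do this
    approximately from high-dimensional probability vectors."""
    return next(ctx for ctx, d in bigram.items() if d == dist)

recovered = invert({"cat": 0.7, "dog": 0.3})
```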
no code implementations • 14 Nov 2023 • Wenting Zhao, Justin T Chiu, Jena D. Hwang, Faeze Brahman, Jack Hessel, Sanjiban Choudhury, Yejin Choi, Xiang Lorraine Li, Alane Suhr
To instead investigate the ability to model unusual, unexpected, and unlikely situations, we explore the task of uncommonsense abductive reasoning.
1 code implementation • 13 Nov 2023 • Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren
To effectively use large language models (LLMs) for real-world queries, it is imperative that they generalize to the long-tail distribution, i.e., rare examples where models exhibit low confidence.
1 code implementation • 7 Nov 2023 • Zhongfen Deng, Hao Peng, Tao Zhang, Shuaiqi Liu, Wenting Zhao, Yibo Wang, Philip S. Yu
Furthermore, the copy mechanism in the value generator and the value attention module in the value classifier help our model address the data discrepancy issue by focusing only on the relevant part of the input text and ignoring information that causes the discrepancy, such as sentence structure.
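A copy mechanism of the kind described is commonly realized as a pointer-generator-style mixture: with probability `p_gen` the model emits from its vocabulary distribution, otherwise it copies a source token weighted by the attention distribution. The sketch below illustrates that general pattern under assumed names; it is not the paper's exact architecture.

```python
def copy_mechanism_prob(token, p_gen, p_vocab, copy_attn, source_tokens):
    """Pointer-generator-style output probability for one token.

    p_vocab: dict mapping token -> generation probability.
    copy_attn: attention weight over each source position.
    """
    # Probability of copying `token` = total attention on its occurrences.
    p_copy = sum(a for tok, a in zip(source_tokens, copy_attn) if tok == token)
    return p_gen * p_vocab.get(token, 0.0) + (1 - p_gen) * p_copy

# Toy usage: "price" can be generated (p=0.2) or copied (attention 0.6).
p = copy_mechanism_prob("price", 0.5, {"price": 0.2},
                        [0.6, 0.4], ["price", "usd"])
```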
no code implementations • 7 Nov 2023 • Zhongfen Deng, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Quan Hung Tran, Shuaiqi Liu, Wenting Zhao, Tao Zhang, Yibo Wang, Philip S. Yu
Then we merge the sentences selected for a specific aspect as the input for the summarizer to produce the aspect-based summary.
no code implementations • 31 Oct 2023 • Wenting Zhao, Ye Liu, Tong Niu, Yao Wan, Philip S. Yu, Shafiq Joty, Yingbo Zhou, Semih Yavuz
Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge base and text).
1 code implementation • 26 Oct 2023 • Justin T. Chiu, Wenting Zhao, Derek Chen, Saujas Vaduguru, Alexander M. Rush, Daniel Fried
Large language models (LLMs) excel at processing and generating both text and code.
1 code implementation • 20 Sep 2023 • Yibo Wang, Wenting Zhao, Yao Wan, Zhongfen Deng, Philip S. Yu
In this paper, we propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
no code implementations • 20 Sep 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Zhongfen Deng, Philip S. Yu
Furthermore, TAG-QA outperforms the end-to-end model T5 by 16% and 12% on BLEU-4 and PARENT F-score, respectively.
1 code implementation • ICCV 2023 • Qianxiong Xu, Wenting Zhao, Guosheng Lin, Cheng Long
Moreover, when calculating SCCA, we design a scaled-cosine mechanism to better utilize the support features for similarity calculation.
Ranked #9 on Few-Shot Semantic Segmentation on COCO-20i (5-shot)
no code implementations • 29 Jul 2023 • Yibo Wang, Yanbing Xue, Bo Liu, Musen Wen, Wenting Zhao, Stephen Guo, Philip S. Yu
Position bias, the phenomenon whereby users tend to focus on higher-ranked items of the search result list regardless of the actual relevance to queries, is prevailing in many ranking systems.
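A standard correction for position bias is inverse propensity weighting: observed click-through rates are divided by the estimated probability that the position was examined at all. The function below is a hedged sketch of that idea with made-up numbers; the specific propensity values and function name are assumptions, not the paper's method.

```python
def debiased_relevance(clicks, impressions, propensities):
    """Position-debiased relevance estimates.

    Divides each item's click-through rate by the examination
    propensity of its position, so clicks at rarely-examined (low)
    positions count for more.
    """
    return [(c / max(i, 1)) / p
            for c, i, p in zip(clicks, impressions, propensities)]

# Toy usage: the rank-2 item gets half the clicks, but its position is
# examined only half as often, so both items are equally relevant
# after correction.
rel = debiased_relevance(clicks=[10, 5],
                         impressions=[100, 100],
                         propensities=[1.0, 0.5])
```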
no code implementations • 18 Jun 2023 • Guangbu Liu, Tong Zhang, Xudong Wang, Wenting Zhao, Chuanwei Zhou, Zhen Cui
Instead of a plain use of a base graph dictionary, we propose the variational graph dictionary adaptation (VGDA) to generate a personalized dictionary (named adapted graph dictionary) for catering to each input graph.
no code implementations • 24 May 2023 • Wenting Zhao, Justin T. Chiu, Claire Cardie, Alexander M. Rush
Instead of using direct supervision, this work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
no code implementations • 23 May 2023 • Wenting Zhao, Justin T. Chiu, Claire Cardie, Alexander M. Rush
Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i.e., subsets of input sentences used to derive the answers.
no code implementations • 5 Jan 2023 • Wenting Zhao, Ibrahim Abdelaziz, Julian Dolby, Kavitha Srinivas, Mossad Helali, Essam Mansour
We demonstrate the efficiency and usefulness of Serenity's analysis in two applications: code completion and automated machine learning.
1 code implementation • NAACL 2022 • Wenting Zhao, Konstantine Arkoudas, Weiqi Sun, Claire Cardie
Task-oriented parsing (TOP) aims to convert natural language into machine-readable representations of specific tasks, such as setting an alarm.
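TOP-style representations are typically bracketed trees of intent (`IN:`) and slot (`SL:`) labels over the utterance. The snippet below shows one illustrative parse for the alarm example and a tiny helper to pull out its labels; the specific parse string is hypothetical, though the `IN:`/`SL:` bracket convention follows the TOP datasets.

```python
import re

# Hypothetical TOP-style parse for "set an alarm for 8 am".
parse = "[IN:CREATE_ALARM set an alarm [SL:DATE_TIME for 8 am ] ]"

def top_labels(parse):
    """Extract (kind, label) pairs from a TOP-style bracketed parse."""
    return re.findall(r"\[(IN|SL):(\w+)", parse)

labels = top_labels(parse)
```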
1 code implementation • Findings (EMNLP) 2021 • Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models.
Automatic Speech Recognition (ASR) +4
no code implementations • 21 Aug 2021 • Di Chen, Yiwei Bai, Sebastian Ament, Wenting Zhao, Dan Guevarra, Lan Zhou, Bart Selman, R. Bruce van Dover, John M. Gregoire, Carla P. Gomes
DRNets compensate for the limited data by exploiting and magnifying the rich prior knowledge about the thermodynamic rules governing the mixtures of crystals with constraint reasoning seamlessly integrated into neural network optimization.
no code implementations • EACL 2021 • Ye Liu, Yao Wan, JianGuo Zhang, Wenting Zhao, Philip Yu
In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance.
no code implementations • 9 Mar 2021 • Wenting Zhao, Shufeng Kong, Junwen Bai, Daniel Fink, Carla Gomes
This in turn leads to a challenging and long-standing problem in the field of computer science: how to perform accurate multi-label classification with hundreds of labels?
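The common baseline for multi-label classification with many labels is to score each label independently and predict every label whose sigmoid score clears a threshold, so an example can receive any subset of the hundreds of labels. This sketch shows only that baseline setup, under assumed names; the paper's contribution goes beyond it.

```python
import math

def predict_labels(scores, threshold=0.5):
    """Independent-sigmoid multi-label prediction.

    scores: raw per-label logits; returns indices of predicted labels.
    """
    probs = [1 / (1 + math.exp(-s)) for s in scores]
    return [i for i, p in enumerate(probs) if p >= threshold]

# Toy usage: labels 0 and 2 clear the threshold, label 1 does not.
predicted = predict_labels([2.0, -1.0, 0.1])
```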
no code implementations • 16 Feb 2021 • Wenting Zhao, Carla Gomes
In the real world, it is more common to deal with noisy datasets than clean ones, given that modern datasets are labeled by large groups of annotators on crowdsourcing platforms; yet little attention has been given to evaluating multi-label classifiers with noisy labels.
no code implementations • 5 Feb 2021 • Yiwei Bai, Wenting Zhao, Carla P. Gomes
There has been an increasing interest in harnessing deep learning to tackle combinatorial optimization (CO) problems in recent years.
no code implementations • 22 Jan 2021 • Ye Liu, Yao Wan, Jian-Guo Zhang, Wenting Zhao, Philip S. Yu
In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance.
no code implementations • 1 Jan 2021 • Wenting Zhao, Yuan Fang, Zhen Cui, Tong Zhang, Jian Yang, Wei Liu
In this paper, we propose a simple yet effective graph deformer network (GDN) to fulfill anisotropic convolution filtering on graphs, analogous to the standard convolution operation on images.
no code implementations • 20 Aug 2020 • Guangshuai Gao, Wenting Zhao, Qingjie Liu, Yunhong Wang
Co-saliency detection aims to detect common salient objects from a group of relevant images.
no code implementations • 28 Nov 2019 • Xueya Zhang, Tong Zhang, Wenting Zhao, Zhen Cui, Jian Yang
Graph convolutional networks (GCNs) have shown the powerful ability in text structure representation and effectively facilitate the task of text classification.
no code implementations • 25 Sep 2019 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John M. Gregoire, Carla P. Gomes
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting.
no code implementations • 3 Jun 2019 • Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John M. Gregoire, Carla P. Gomes
At a high level, DRNets encode a structured latent space of the input data, which is constrained to adhere to prior knowledge by a reasoning module.
no code implementations • 7 Jul 2018 • Wenting Zhao, Chunyan Xu, Zhen Cui, Tong Zhang, Jiatao Jiang, Zhen-Yu Zhang, Jian Yang
In this paper, we aim to give a comprehensive analysis of when structure matters by transforming different classical network structures into graph CNNs, particularly in the basic graph recognition problem.
Ranked #3 on Graph Classification on IMDb-B