2 code implementations • 29 Sep 2023 • Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, Zhe Gan
Extensive experimental results demonstrate that expressive instructions are crucial to instruction-based image editing, and our MGIE can lead to a notable improvement in automatic metrics and human evaluation while maintaining competitive inference efficiency.
2 code implementations • NeurIPS 2023 • Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, Yejin Choi
We release Multimodal C4, an augmentation of the popular text-only C4 corpus with images interleaved.
2 code implementations • ACL 2019 • Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, William Yang Wang
Semantically controlled neural response generation in limited domains has achieved strong performance.
Ranked #5 on Data-to-Text Generation on MULTIWOZ 2.1
2 code implementations • EMNLP 2017 • Wenhan Xiong, Thien Hoang, William Yang Wang
We study the problem of learning to reason in large scale knowledge graphs (KGs).
Ranked #1 on Link Prediction on NELL-995 (Mean AP metric)
1 code implementation • ECCV 2018 • Xin Wang, Wenhan Xiong, Hongmin Wang, William Yang Wang
In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices: we propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task.
1 code implementation • ICLR 2020 • Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, William Yang Wang
To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.
Ranked #10 on Table-based Fact Verification on TabFact
9 code implementations • 18 Oct 2018 • Mahnaz Koupaee, William Yang Wang
Sequence-to-sequence models have recently achieved state-of-the-art performance in summarization.
Ranked #3 on Text Summarization on WikiHow
1 code implementation • 6 Aug 2023 • Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
1 code implementation • 9 Dec 2022 • Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, William Yang Wang
In this work, we improve the compositional skills of T2I models, specifically more accurate attribute binding and better image compositions.
1 code implementation • EMNLP 2018 • Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
Knowledge graphs (KGs) are the key components of various natural language processing applications.
1 code implementation • NeurIPS 2023 • Weixi Feng, Wanrong Zhu, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, William Yang Wang
When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40% and achieves performance comparable to human users in designing visual layouts for numerical and spatial correctness.
3 code implementations • NAACL 2018 • Liwei Cai, William Yang Wang
This framework is independent of the concrete form of the generator and discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks.
Ranked #24 on Link Prediction on WN18
1 code implementation • ICLR 2021 • Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER.
Ranked #14 on Question Answering on HotpotQA
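The iterative retrieve-and-reformulate loop behind multi-hop dense retrieval can be sketched with toy vectors. The cosine scoring and the simple averaging reformulation step below are illustrative assumptions of this sketch, not the paper's actual encoders:

```python
import math

def cosine(u, v):
    # cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multihop_retrieve(query_vec, passages, hops=2):
    """Greedy multi-hop retrieval: after each hop, fold the retrieved
    passage's vector into the query and retrieve again."""
    selected = []
    q = list(query_vec)
    for _ in range(hops):
        best = max((p for p in passages if p["id"] not in selected),
                   key=lambda p: cosine(q, p["vec"]))
        selected.append(best["id"])
        # toy query reformulation: average query with the retrieved vector
        q = [(a + b) / 2 for a, b in zip(q, best["vec"])]
    return selected
```

The second hop retrieves a passage that the original query alone would have ranked low, which is the point of reformulating between hops.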
1 code implementation • EMNLP 2021 • Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, William Yang Wang
In contrast to existing tasks in the general domain, the finance domain requires complex numerical reasoning and an understanding of heterogeneous representations.
Ranked #4 on Question Answering on FinQA
2 code implementations • ACL 2020 • Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, William Yang Wang
Neural-based end-to-end approaches to natural language generation (NLG) from structured data or knowledge are data-hungry, making their adoption for real-world applications difficult with limited data.
1 code implementation • 20 May 2023 • Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang
We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations.
1 code implementation • ACL 2020 • Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, William Yang Wang
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences, as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference.
1 code implementation • EMNLP 2020 • Wenhu Chen, Yu Su, Xifeng Yan, William Yang Wang
We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text.
Ranked #9 on KG-to-Text Generation on WebNLG 2.0 (Unconstrained)
3 code implementations • 10 Nov 2019 • Kai Nakamura, Sharon Levy, William Yang Wang
We construct hybrid text+image models and perform extensive experiments for multiple variations of classification, demonstrating the importance of the novel aspect of multimodality and fine-grained classification unique to Fakeddit.
2 code implementations • ACL 2019 • Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
We propose a new end-to-end question answering model, which learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets.
2 code implementations • ACL 2018 • Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang
Though impressive results have been achieved in visual captioning, generating abstract stories from photo streams remains a largely untapped problem.
Ranked #13 on Visual Storytelling on VIST
1 code implementation • 24 Nov 2021 • Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, Zicheng Liu
Further, unlike previous studies that found pre-training tasks on video inputs (e.g., masked frame modeling) not very effective, we design a new pre-training task, Masked Visual-token Modeling (MVM), for better video modeling.
Ranked #20 on Zero-Shot Video Retrieval on DiDeMo
11 code implementations • ACL 2017 • William Yang Wang
In this paper, we present LIAR: a new, publicly available dataset for fake news detection.
Ranked #1 on Fake News Detection on LIAR
2 code implementations • ACL 2018 • Xianda Zhou, William Yang Wang
In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis.
1 code implementation • 24 Oct 2023 • Xianjun Yang, Liangming Pan, Xuandong Zhao, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT have led to an increase in synthetic content generation with implications across a variety of sectors, including media, cybersecurity, public discourse, and education.
1 code implementation • NeurIPS 2023 • Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, William Yang Wang
Existing automatic evaluation on text-to-image synthesis can only provide an image-text matching score, without considering the object-level compositionality, which results in poor correlation with human judgments.
1 code implementation • CVPR 2020 • Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, Anton Van Den Hengel
One of the long-term challenges of robotics is to enable robots to interact with humans in the visual world via natural language, as humans are visual animals that communicate through language.
1 code implementation • NAACL 2019 • Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance.
Ranked #3 on Entity Typing on Ontonotes v5 (English)
1 code implementation • 2 May 2023 • Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, William Yang Wang
The key challenges of MPP are to ensure the informativeness, temporal coherence, and accuracy of plans across modalities.
1 code implementation • NAACL 2021 • Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
Obtaining training data for multi-hop question answering (QA) is time-consuming and resource-intensive.
1 code implementation • 8 Jun 2021 • Linjie Li, Jie Lei, Zhe Gan, Licheng Yu, Yen-Chun Chen, Rohit Pillai, Yu Cheng, Luowei Zhou, Xin Eric Wang, William Yang Wang, Tamara Lee Berg, Mohit Bansal, Jingjing Liu, Lijuan Wang, Zicheng Liu
Most existing video-and-language (VidL) research focuses on a single dataset, or multiple datasets of a single task.
1 code implementation • ECCV 2020 • Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi
Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.
1 code implementation • 7 Oct 2022 • Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, William Yang Wang
With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching.
Ranked #2 on Question Answering on ConvFinQA
2 code implementations • ACL 2018 • Pengda Qin, Weiran Xu, William Yang Wang
The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems.
1 code implementation • IJCNLP 2019 • Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, William Yang Wang
In this paper, we also analyze the datasets to understand the common intervention strategies and explore the performance of common automatic response generation methods on these new datasets to provide a benchmark for future research.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhiyu Chen, Wenhu Chen, Hanwen Zha, Xiyou Zhou, Yunkai Zhang, Sairam Sundaresan, William Yang Wang
If only provided with the table, it is hard for existing models to produce controllable and high-fidelity logical generations.
1 code implementation • ACL 2021 • Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
However, for each new domain that requires fact verification, creating a dataset by manually writing claims and linking them to their supporting evidence is expensive.
1 code implementation • 26 Feb 2024 • Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, William Yang Wang
A major factor in the recent success of large language models is the use of enormous and ever-growing text datasets for unsupervised pre-training.
1 code implementation • NAACL 2019 • Prince Zizhuang Wang, William Yang Wang
The RNF transforms a latent variable into a space that respects the geometric characteristics of the input space, which makes it impossible for the posterior to collapse to the non-informative prior.
2 code implementations • NAACL 2019 • Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks.
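One common memory-efficient recipe for this lifelong setting is an episodic replay buffer: keep a few examples per finished task and mix them into new training batches. The per-task sampling and mixing policy below are illustrative assumptions, not the paper's exact method:

```python
import random

class ReplayMemory:
    """Fixed-size episodic memory for mitigating catastrophic forgetting:
    retain a small sample per past task and replay it alongside new data."""

    def __init__(self, per_task=2, seed=0):
        self.per_task = per_task
        self.store = {}  # task name -> retained examples
        self.rng = random.Random(seed)

    def add_task(self, task, examples):
        # keep a small random sample of each finished task
        k = min(self.per_task, len(examples))
        self.store[task] = self.rng.sample(examples, k)

    def replay_batch(self, current_batch):
        # augment the current batch with one stored example per past task
        replayed = [self.rng.choice(v) for v in self.store.values()]
        return current_batch + replayed
```

Because the buffer size is bounded per task, memory cost grows only with the number of tasks, not with the amount of past data.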
1 code implementation • NeurIPS 2023 • Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, William Yang Wang
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
1 code implementation • 13 Aug 2021 • Wenhu Chen, Xinyi Wang, William Yang Wang
Many facts evolve with respect to time.
1 code implementation • 23 May 2023 • Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei LI
By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human-readable diagnostic report.
2 code implementations • ICCV 2019 • Xin Wang, Jiawei Wu, Junkun Chen, Lei LI, Yuan-Fang Wang, William Yang Wang
We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.
2 code implementations • 8 Jan 2020 • Pengda Qin, Xin Wang, Wenhu Chen, Chunyun Zhang, Weiran Xu, William Yang Wang
Large-scale knowledge graphs (KGs) are becoming increasingly important in modern information systems.
1 code implementation • 30 Jan 2024 • Xuandong Zhao, Xianjun Yang, Tianyu Pang, Chao Du, Lei LI, Yu-Xiang Wang, William Yang Wang
In this paper, we propose the weak-to-strong jailbreaking attack, an efficient method to attack aligned LLMs to produce harmful text.
1 code implementation • EACL 2021 • Wenhan Xiong, Hong Wang, William Yang Wang
In this work, we propose a simple and resource-efficient method to pretrain the paragraph encoder.
1 code implementation • 27 May 2023 • Xianjun Yang, Wei Cheng, Yue Wu, Linda Petzold, William Yang Wang, Haifeng Chen
However, this progress also presents a significant challenge in detecting the origin of a given text, and current research on detection methods lags behind the rapid evolution of LLMs.
1 code implementation • 22 May 2023 • Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov
Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning.
2 code implementations • ACL 2019 • Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level.
1 code implementation • EMNLP 2018 • Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, William Yang Wang
Then, we pre-train a state tracker for the source language as a teacher, which is able to exploit easy-to-access parallel data.
1 code implementation • CVPR 2023 • Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, Zicheng Liu
Masked visual modeling (MVM) has been recently proven effective for visual pre-training.
Ranked #1 on Video Question Answering on LSMDC-MC
3 code implementations • 16 Jun 2018 • Wenhan Xiong, Xiaoxiao Guo, Mo Yu, Shiyu Chang, Bo-Wen Zhou, William Yang Wang
We investigate the task of learning to follow natural language instructions by jointly reasoning with visual observations and language inputs.
1 code implementation • ACL 2018 • Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu, William Yang Wang
In this study, we show how to integrate local and global decision-making by exploiting deep reinforcement learning models.
1 code implementation • 23 May 2023 • SiQi Liu, Weixi Feng, Tsu-Jui Fu, Wenhu Chen, William Yang Wang
Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion.
1 code implementation • NeurIPS 2023 • Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, Lei LI
To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness.
1 code implementation • NAACL 2018 • Xin Wang, Yuan-Fang Wang, William Yang Wang
Furthermore, for the first time, we validate the superior performance of deep audio features on the video captioning task.
2 code implementations • 21 Jan 2023 • Shuaichen Chang, Jun Wang, Mingwen Dong, Lin Pan, Henghui Zhu, Alexander Hanbo Li, Wuwei Lan, Sheng Zhang, Jiarong Jiang, Joseph Lilien, Steve Ash, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Bing Xiang
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
1 code implementation • NeurIPS 2023 • Alon Albalak, Colin Raffel, William Yang Wang
In this work, we focus on Few-shot Learning with Auxiliary Data (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization.
1 code implementation • EACL 2021 • Wanrong Zhu, Xin Eric Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang
Outdoor vision-and-language navigation (VLN) is such a task where an agent follows natural language instructions and navigates a real-life urban environment.
Ranked #4 on Vision and Language Navigation on Touchdown Dataset (using extra training data)
1 code implementation • EMNLP 2020 • Tsu-Jui Fu, Xin Eric Wang, Scott Grafton, Miguel Eckstein, William Yang Wang
In this paper, we introduce a Self-Supervised Counterfactual Reasoning (SSCR) framework that incorporates counterfactual thinking to overcome data scarcity.
1 code implementation • 1 Jun 2021 • Tsu-Jui Fu, Xin Eric Wang, William Yang Wang
We propose contrastive language visual artist (CLVA) that learns to extract visual semantics from style instructions and accomplish LDAST by the patch-wise style discriminator.
1 code implementation • 13 Sep 2019 • Mengdi Zhu, Zheye Deng, Wenhan Xiong, Mo Yu, Ming Zhang, William Yang Wang
In this work, to address the low precision and recall problems, we first use DBpedia as the source of distant supervision to annotate abstracts from Wikipedia, and we design a neural correction model, trained with the human-annotated NER dataset DocRED, to correct false entity labels.
1 code implementation • CVPR 2023 • Tsu-Jui Fu, Licheng Yu, Ning Zhang, Cheng-Yang Fu, Jong-Chyi Su, William Yang Wang, Sean Bell
Inspired by this, we introduce a novel task, text-guided video completion (TVC), which requests the model to generate a video from partial frames guided by an instruction.
Ranked #3 on Video Prediction on BAIR Robot Pushing
2 code implementations • 11 Apr 2018 • Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, Elizabeth Belding
While social media empowers freedom of expression and individual voices, it also enables anti-social behavior, online harassment, cyberbullying, and hate speech.
1 code implementation • COLING 2018 • Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu, William Yang Wang
Recent neural network methods for zero pronoun resolution explore multiple models for generating representation vectors for zero pronouns and their candidate antecedents.
1 code implementation • NeurIPS 2021 • Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang
To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG) that regards the explanations as the mutual interaction of segments in input and output sentences.
1 code implementation • NAACL 2022 • Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
Human brains integrate linguistic and perceptual information simultaneously to understand natural language, and hold the critical ability to render imaginations.
1 code implementation • 7 Oct 2022 • Wanrong Zhu, An Yan, Yujie Lu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
Recent advances in text-to-image synthesis make it possible to visualize machine imaginations for a given context.
1 code implementation • Findings (ACL) 2022 • Yi-Lin Tuan, Sajjad Beygi, Maryam Fazel-Zarandi, Qiaozi Gao, Alessandra Cervone, William Yang Wang
Our proposed method allows a single transformer model to directly walk on a large-scale knowledge graph to generate responses.
1 code implementation • 10 Oct 2022 • Wenda Xu, Yi-Lin Tuan, Yujie Lu, Michael Saxon, Lei LI, William Yang Wang
Is it possible to build a general and automatic natural language generation (NLG) evaluation metric?
1 code implementation • 12 May 2022 • Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang
Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.
1 code implementation • 19 Dec 2022 • Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang
In this paper, we propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.
1 code implementation • 5 Feb 2023 • Kexun Zhang, Xianjun Yang, William Yang Wang, Lei LI
Diffusion models show promising generation capability for a variety of data.
2 code implementations • 24 Oct 2019 • An Yan, Xin Eric Wang, Jiangtao Feng, Lei LI, William Yang Wang
Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics.
1 code implementation • 2 Jun 2023 • Michael Saxon, William Yang Wang
We propose "Conceptual Coverage Across Languages" (CoCo-CroLa), a technique for benchmarking the degree to which any generative text-to-image system provides multilingual parity to its training language in terms of tangible nouns.
1 code implementation • NeurIPS 2021 • Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang
Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues.
1 code implementation • 25 Jan 2023 • Kung-Hsiang Huang, Siffi Singh, Xiaofei Ma, Wei Xiao, Feng Nan, Nicholas Dingwall, William Yang Wang, Kathleen McKeown
Missing information is a common issue of dialogue summarization where some information in the reference summaries is not covered in the generated summaries.
1 code implementation • 18 May 2023 • Xuehai He, Weixi Feng, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang
Diffusion models, such as Stable Diffusion, have shown incredible performance on text-to-image generation.
1 code implementation • 7 Apr 2018 • Vivek Kulkarni, William Yang Wang
We propose generative models for three types of extra-grammatical word formation phenomena abounding in English slang: Blends, Clippings, and Reduplicatives.
1 code implementation • NAACL 2018 • Vivek Kulkarni, William Yang Wang
We propose the first generative models for three types of extra-grammatical word formation phenomena abounding in slang: Blends, Clippings, and Reduplicatives.
1 code implementation • 12 Jul 2023 • Raphael Schumann, Wanrong Zhu, Weixi Feng, Tsu-Jui Fu, Stefan Riezler, William Yang Wang
In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as contextual prompt for the next action.
1 code implementation • 10 Jun 2022 • Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang
While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.
1 code implementation • 23 May 2023 • Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, William Yang Wang
In this paper, we comprehensively investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation and its subsequent impact on information-intensive applications, particularly Open-Domain Question Answering (ODQA) systems.
1 code implementation • 19 Oct 2023 • Deepak Nathani, David Wang, Liangming Pan, William Yang Wang
Language Models (LMs) have shown impressive performance in various natural language tasks.
1 code implementation • NAACL 2022 • Wanrong Zhu, Yuankai Qi, Pradyumna Narayana, Kazoo Sone, Sugato Basu, Xin Eric Wang, Qi Wu, Miguel Eckstein, William Yang Wang
Results show that indoor navigation agents refer to both object and direction tokens when making decisions.
1 code implementation • NLP4ConvAI (ACL) 2022 • Alon Albalak, Varun Embar, Yi-Lin Tuan, Lise Getoor, William Yang Wang
Existing research studies on cross-sentence relation extraction in long-form multi-party conversations aim to improve relation extraction without considering the explainability of such methods.
Ranked #7 on Dialog Relation Extraction on DialogRE
1 code implementation • EMNLP (ACL) 2021 • Sharon Levy, Kevin Mo, Wenhan Xiong, William Yang Wang
In this work, we present such a system for the emergent domain of COVID-19.
1 code implementation • EMNLP 2020 • Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, William Yang Wang
The growth of social media has encouraged the written use of African American Vernacular English (AAVE), which has traditionally been used only in oral contexts.
1 code implementation • 18 Oct 2022 • Weixi Feng, Tsu-Jui Fu, Yujie Lu, William Yang Wang
Vision-and-Language Navigation (VLN) is a task to guide an embodied agent moving to a target position using language instructions.
1 code implementation • NeurIPS 2023 • Zih-Yun Chiu, Yi-Lin Tuan, William Yang Wang, Michael C. Yip
In this work, we present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.
1 code implementation • ACL 2020 • Chen Wu, Prince Zizhuang Wang, William Yang Wang
To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching.
1 code implementation • NAACL 2021 • Tsu-Jui Fu, William Yang Wang
Using natural language as a hint can supply an additional reward for playing sparse-reward games.
1 code implementation • 12 Oct 2022 • Xiyang Hu, Xinchi Chen, Peng Qi, Deguang Kong, Kunlun Liu, William Yang Wang, Zhiheng Huang
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages.
1 code implementation • 5 Feb 2024 • Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan, Wenhu Chen, William Yang Wang
To understand how pre-training with a next-token prediction objective contributes to the emergence of such reasoning capability, we propose that we can view an LM as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
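The path-aggregation view can be illustrated on a toy relation graph: each distinct bounded-length path from a premise entity to a conclusion entity contributes support. This exhaustive path count is a simplification of the paper's random-walk formulation; the graph and counting scheme are illustrative:

```python
from collections import defaultdict

def aggregate_paths(edges, source, target, max_len=3):
    """Count distinct directed paths from source to target of length
    up to max_len; more independent paths = stronger aggregated support
    (a toy analogue of aggregating indirect reasoning paths)."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    count = 0
    stack = [(source, 0)]
    while stack:
        node, depth = stack.pop()
        if node == target and depth > 0:
            count += 1  # a complete path; do not extend past the target
            continue
        if depth < max_len:
            stack.extend((nxt, depth + 1) for nxt in adj[node])
    return count
```

On the graph A→B, B→C, A→C there are two reasoning paths from A to C (the direct edge and the two-hop chain), but none from C back to A.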
1 code implementation • LREC 2022 • Samhita Honnavalli, Aesha Parekh, Lily Ou, Sophie Groenwold, Sharon Levy, Vicente Ordonez, William Yang Wang
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
1 code implementation • 20 Dec 2022 • Yi-Lin Tuan, Alon Albalak, Wenda Xu, Michael Saxon, Connor Pryor, Lise Getoor, William Yang Wang
Despite their widespread adoption, neural conversation models have yet to exhibit natural chat capabilities with humans.
1 code implementation • 8 Oct 2023 • Xianjun Yang, Kexun Zhang, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng
We then modify the previous zero-shot text detection method, DetectGPT (Mitchell et al., 2023) by utilizing a surrogate white-box model to estimate the probability of the rightmost tokens, allowing us to identify code snippets generated by language models.
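The perturbation-based scoring idea behind DetectGPT-style detection can be sketched abstractly: machine-generated text tends to sit near a local maximum of model log-probability, so rewrites lower its score more than they would for human text. Here `logprob` stands in for the surrogate white-box scorer and `perturb` for a rewriting model; both are assumptions of this sketch:

```python
def detection_score(text, logprob, perturb, n=5):
    """Curvature-style score: how far the text's log-probability sits
    above the average log-probability of n perturbed variants.
    Larger scores suggest machine-generated text."""
    base = logprob(text)
    perturbed = [logprob(perturb(text, i)) for i in range(n)]
    return base - sum(perturbed) / len(perturbed)
```

With a real detector the two callables would wrap a language model's token log-probabilities and a paraphrasing model; any monotone scorer can be dropped in for experimentation.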
1 code implementation • 5 Apr 2024 • Michael Saxon, Fatima Jahara, Mahsa Khoshnoodi, Yujie Lu, Aditya Sharma, William Yang Wang
With advances in the quality of text-to-image (T2I) models has come interest in benchmarking their prompt faithfulness: the semantic coherence of generated images to the prompts they were conditioned on.
1 code implementation • LREC 2020 • Ray Oshikawa, Jing Qian, William Yang Wang
We also highlight the difference between fake news detection and other related tasks, and the importance of NLP solutions for fake news detection.
1 code implementation • EMNLP 2021 • Alex Jones, William Yang Wang, Kyle Mahowald
We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus.
1 code implementation • 19 Dec 2022 • Alex Mei, Sharon Levy, William Yang Wang
Users' physical safety is an increasing concern as the market for intelligent systems continues to grow, where unconstrained systems may recommend users dangerous actions that can lead to serious injury.
1 code implementation • 23 May 2023 • Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, William Yang Wang
Despite exciting recent results showing vision-language systems' capacity to reason about images using natural language, their capacity for video reasoning remains under-explored.
1 code implementation • ACL 2019 • Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang
In this paper, we review contemporary studies on recognizing and mitigating gender bias in NLP.
1 code implementation • ACL 2020 • Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang
We use WikiGenderBias to evaluate systems for bias and find that NRE systems exhibit gender biased predictions and lay groundwork for future evaluation of bias in NRE.
1 code implementation • 6 Oct 2021 • Wenda Xu, Michael Saxon, Misha Sra, William Yang Wang
This is a particularly notable issue in the medical domain, where layman are often confused by medical text online.
1 code implementation • 11 Oct 2023 • Zhiyu Chen, Yujie Lu, William Yang Wang
Mental illness remains one of the most critical public health issues of our time, due to the severe scarcity and accessibility limit of professionals.
1 code implementation • 14 Oct 2023 • Alex Mei, Sharon Levy, William Yang Wang
As large language models are integrated into society, robustness toward a suite of prompts is increasingly important to maintain reliability in a high-variance environment. Robustness evaluations must comprehensively encapsulate the various settings in which a user may invoke an intelligent system.
1 code implementation • 3 Nov 2018 • Sharon Levy, Wenhan Xiong, Elizabeth Belding, William Yang Wang
We propose SafeRoute, a novel solution to the problem of navigating cities and avoiding street harassment and crime.
1 code implementation • Findings (ACL) 2021 • Sharon Levy, Michael Saxon, William Yang Wang
In this work, we investigate the capability of language models to generate conspiracy theory text.
1 code implementation • 15 Oct 2021 • Liangming Pan, Wenhu Chen, Min-Yen Kan, William Yang Wang
We curate both human-written and model-generated false documents that we inject into the evidence corpus of QA models and assess the impact on the performance of these systems.
1 code implementation • 26 Jan 2022 • Alon Albalak, Sharon Levy, William Yang Wang
Open-retrieval question answering systems are generally trained and tested on large datasets in well-established domains.
1 code implementation • 19 Dec 2022 • Kaiser Sun, Peng Qi, Yuhao Zhang, Lan Liu, William Yang Wang, Zhiheng Huang
We show that, with consistent tokenization, the model performs better on both in-domain and out-of-domain datasets, with a notable average gain of +1.7 F2 when a BART model is trained on SQuAD and evaluated on 8 QA datasets.
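The F2 score cited here is the recall-weighted member of the F-beta family. A minimal token-overlap sketch (illustrative only; the paper's actual tokenizers and evaluation scripts may differ):

```python
from collections import Counter

def token_fbeta(pred_tokens, gold_tokens, beta=2.0):
    """Token-overlap F-beta; beta=2 weights recall twice as heavily as precision."""
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A prediction that fully covers the gold span scores high under F2.
score = token_fbeta("in the park".split(), "the park".split())
print(round(score, 3))  # 0.909
```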
no code implementations • ACL 2018 • Pengda Qin, Weiran Xu, William Yang Wang
Distant supervision can effectively label data for relation extraction, but suffers from the noisy labeling problem.
no code implementations • NAACL 2018 • Jiawei Wu, Lei Li, William Yang Wang
However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.
no code implementations • NAACL 2018 • Jing Qian, Mai ElSherief, Elizabeth M. Belding, William Yang Wang
Hate speech detection is a critical, yet challenging problem in Natural Language Processing (NLP).
no code implementations • CVPR 2018 • Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, William Yang Wang
Video captioning is the task of automatically generating a textual description of the actions in a video.
Hierarchical Reinforcement Learning reinforcement-learning +2
no code implementations • 8 Mar 2018 • Shuqing Bian, Zhenpeng Deng, Fei Li, Will Monroe, Peng Shi, Zijun Sun, Wei Wu, Sikuang Wang, William Yang Wang, Arianna Yuan, Tianwei Zhang, Jiwei Li
For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision.
no code implementations • 22 Dec 2017 • Vivek Kulkarni, William Yang Wang
In this work, we use UrbanDictionary to conduct the first large-scale linguistic analysis of slang and its social aspects on the Internet to yield insights into this variety of language that is increasingly used all over the world online.
1 code implementation • IJCNLP 2017 • Ke Ni, William Yang Wang
We describe a data-driven approach for automatically explaining new, non-standard English expressions in a given sentence, building on a large dataset that includes 15 years of crowdsourced examples from UrbanDictionary.com.
1 code implementation • EMNLP 2017 • Yi Yao Huang, William Yang Wang
Deep residual learning (ResNet) is a new method for training very deep neural networks using identity mapping for shortcut connections.
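The identity-mapping shortcut can be sketched in a few lines; the sketch below assumes a single linear layer as the residual function F, whereas real ResNets use stacked convolutions:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """y = x + F(x): the identity shortcut passes x through unchanged,
    so gradients flow directly via the '+ x' term in very deep stacks."""
    return x + activation(x @ weight)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w = np.zeros((4, 4))  # with F(x) == 0 the block reduces exactly to the identity
print(np.allclose(residual_block(x, w), x))  # True
```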
no code implementations • 12 Apr 2014 • William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, William W. Cohen
We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank (PPR) on a linearized version of the proof space, and using this connection, we develop a provably correct approximate grounding scheme based on the PageRank-Nibble algorithm.
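For context, personalized PageRank can be computed by a simple power iteration; the sketch below is a global iteration on a toy graph, whereas PageRank-Nibble (which the entry builds on) computes a local approximation with provable error bounds:

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power iteration: r = alpha * seed + (1 - alpha) * P^T r.
    Assumes every node has at least one outgoing edge."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    r = seed.copy()
    for _ in range(iters):
        r = alpha * seed + (1 - alpha) * (P.T @ r)  # restart + one walk step
    return r

adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # path graph 0-1-2
seed = np.array([1., 0., 0.])  # personalize (restart) on node 0
ppr = personalized_pagerank(adj, seed)
print(round(float(ppr.sum()), 6))  # 1.0: a distribution biased toward node 0
```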
no code implementations • 10 May 2013 • William Yang Wang, Kathryn Mazaitis, William W. Cohen
In many probabilistic first-order representation systems, inference is performed by "grounding"---i.e., mapping it to a propositional representation, and then performing propositional inference.
no code implementations • EMNLP 2018 • Jing Qian, Mai ElSherief, Elizabeth Belding, William Yang Wang
Existing work on automated hate speech detection typically focuses on binary classification or on differentiating among a small set of categories.
no code implementations • EMNLP 2018 • Vivek Kulkarni, Junting Ye, Steven Skiena, William Yang Wang
A news article's title, content and link structure often reveal its political ideology.
no code implementations • 18 Oct 2018 • Mahnaz Koupaee, William Yang Wang
Convolutional neural networks have been successfully applied to various NLP tasks.
no code implementations • ACL 2019 • Hui Liu, Qingyu Yin, William Yang Wang
Building explainable systems is a critical problem in the field of Natural Language Processing (NLP), since most machine learning models provide no explanations for the predictions.
no code implementations • AKBC 2020 • Haoyu Wang, Vivek Kulkarni, William Yang Wang
We introduce DOLORES, a new method for learning knowledge graph embeddings that effectively captures contextual cues and dependencies among entities and relations.
no code implementations • 31 Oct 2018 • Yijun Xiao, Tiancheng Zhao, William Yang Wang
We introduce an improved variational autoencoder (VAE) for text modeling with topic information explicitly modeled as a Dirichlet latent variable.
no code implementations • 1 Nov 2018 • Deren Lei, Zichen Sun, Yijun Xiao, William Yang Wang
To bridge this gap, we study the role of SGD implicit regularization in deep learning systems.
no code implementations • 7 Nov 2018 • Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang
Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus, and do not generalize to open vocabulary scenarios.
no code implementations • 18 Nov 2018 • Yijun Xiao, William Yang Wang
Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificial intelligent systems.
no code implementations • CVPR 2019 • Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang
Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments.
Ranked #2 on Vision-Language Navigation on Room2Room
no code implementations • ACL 2018 • William Yang Wang, Jiwei Li, Xiaodong He
Many Natural Language Processing (NLP) tasks (including generation, language grounding, reasoning, information extraction, coreference resolution, and dialog) can be formulated as deep reinforcement learning (DRL) problems.
no code implementations • NAACL 2018 • Xiang Ren, Nanyun Peng, William Yang Wang
In today's information-based society, there is abundant knowledge carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications), which span various domains (e.g., corporate documents, advertisements, legal acts, medical reports) and grow at an astonishing rate.
no code implementations • NAACL 2019 • Jing Qian, Mai ElSherief, Elizabeth Belding, William Yang Wang
Furthermore, we propose a novel Variational Decipher and show how it can generalize better to unseen hate symbols in a more challenging testing setting.
no code implementations • NAACL 2019 • Jiawei Wu, Xin Wang, William Yang Wang
The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs.
no code implementations • NAACL 2019 • William Yang Wang, Sameer Singh, Jiwei Li
Adversarial learning is a game-theoretic learning paradigm, which has achieved huge successes in the field of Computer Vision recently.
no code implementations • ACL 2019 • Jiawei Wu, Xin Wang, William Yang Wang
The sequential order of utterances is often meaningful in coherent dialogues, and the order changes of utterances could lead to low-quality and incoherent conversations.
no code implementations • ACL 2019 • Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
With social media becoming increasingly popular and lots of news and real-time events being reported on it, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge.
no code implementations • ACL 2019 • Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, William Yang Wang
In this work, we focus on the task of goal-oriented visual dialogue, aiming to automatically generate a series of questions about an image with a single objective.
no code implementations • 28 Jul 2019 • Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, William Yang Wang
In this work, we focus on the task of goal-oriented visual dialogue, aiming to automatically generate a series of questions about an image with a single objective.
no code implementations • 13 Aug 2019 • Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
The ability to reason over learned knowledge is innate for humans, who can easily master new reasoning rules with only a few demonstrations.
no code implementations • 15 Aug 2019 • Shaolei Wang, Wanxiang Che, Qi Liu, Pengda Qin, Ting Liu, William Yang Wang
The pre-trained network is then fine-tuned using human-annotated disfluency detection training data.
no code implementations • 27 Aug 2019 • Yijun Xiao, William Yang Wang
We propose syntax-aware variational autoencoders (SAVAEs) that dedicate a subspace in the latent dimensions dubbed syntactic latent to represent syntactic structures of sentences.
no code implementations • IJCNLP 2019 • Siyao Li, Deren Lei, Pengda Qin, William Yang Wang
Deep reinforcement learning (RL) has been a commonly-used strategy for the abstractive summarization task to address both the exposure bias and non-differentiable task issues.
no code implementations • IJCNLP 2019 • Prince Zizhuang Wang, William Yang Wang
We argue that this would cause a typical training problem called posterior collapse observed in all other variational language models.
no code implementations • IJCNLP 2019 • Jiawei Wu, Wenhan Xiong, William Yang Wang
Many tasks in natural language processing can be viewed as multi-label classification problems.
no code implementations • WS 2019 • Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Hong Wang, Shiyu Chang, Murray Campbell, William Yang Wang
To resolve this issue, we introduce a new sub-problem of open-domain multi-hop QA, which aims to recognize the bridge (i.e., the anchor that links to the answer passage) from the context of a set of start passages with a reading comprehension model.
no code implementations • 9 Nov 2019 • Tianyu Liu, Wei Wei, William Yang Wang
In this paper, we propose the new task of table-to-text NLG with unseen schemas, which specifically aims to test the generalization of NLG for input tables with attribute types that never appear during training.
no code implementations • CVPR 2020 • Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang
Visual navigation is a task of training an embodied agent by intelligently navigating to a target object (e.g., television) using only visual observations.
no code implementations • 17 Nov 2019 • Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang
In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve based on the navigation performance.
no code implementations • ICLR 2020 • Wenhan Xiong, Jingfei Du, William Yang Wang, Veselin Stoyanov
Models trained with our new objective yield significant improvements on the fact completion task.
no code implementations • 30 Dec 2019 • Yijun Xiao, William Yang Wang
However, Kullback-Leibler (KL) divergence-based total correlation is metric-agnostic and sensitive to data samples.
no code implementations • EACL 2021 • Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, William Yang Wang
We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing.
no code implementations • 29 Apr 2020 • Sophie Groenwold, Samhita Honnavalli, Lily Ou, Aesha Parekh, Sharon Levy, Diba Mirza, William Yang Wang
As NLP tools become ubiquitous in today's technological landscape, they are increasingly applied to languages with a variety of typological structures.
no code implementations • 30 Apr 2020 • Yi-Lin Tuan, Wei Wei, William Yang Wang
First, we train a large-scale language model and query it as textual knowledge.
no code implementations • 29 Apr 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.
no code implementations • LREC 2020 • Kai Nakamura, Sharon Levy, William Yang Wang
We construct hybrid text+image models and perform extensive experiments for multiple variations of classification, demonstrating the importance of the novel aspect of multimodality and fine-grained classification unique to Fakeddit.
Classification Cultural Vocal Bursts Intensity Prediction +2
no code implementations • ACL 2020 • Sharon Levy, William Yang Wang
The spread of COVID-19 has become a significant and troubling aspect of society in 2020.
no code implementations • ECCV 2020 • Tsu-Jui Fu, Xin Eric Wang, Matthew F. Peterson, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang
In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve based on the navigation performance.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiannan Xiang, Xin Eric Wang, William Yang Wang
Vision-and-Language Navigation (VLN) is a natural language grounding task where an agent learns to follow language instructions and navigate to specified destinations in real-world environments.
Ranked #3 on Vision and Language Navigation on Touchdown Dataset
no code implementations • EMNLP 2020 • Wanrong Zhu, Xin Eric Wang, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang
A major challenge in visually grounded language generation is to build robust benchmark datasets and models that can generalize well in real-world settings.
no code implementations • EMNLP 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.
1 code implementation • EMNLP 2021 • Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang
Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable.
no code implementations • 28 Jan 2021 • Tsu-Jui Fu, William Yang Wang, Daniel McDuff, Yale Song
Creating presentation materials requires complex multimodal reasoning skills to summarize key concepts and arrange them in a logical and visually pleasing manner.
no code implementations • EACL 2021 • An Yan, Xin Eric Wang, Tsu-Jui Fu, William Yang Wang
Recent advances in language and vision push forward the research of captioning a single image to describing visual differences between image pairs.
no code implementations • 12 Feb 2021 • Tony Sun, Kellie Webster, Apu Shah, William Yang Wang, Melvin Johnson
Responsible development of technology involves applications being inclusive of the diverse set of users they hope to support.
no code implementations • EACL 2021 • Yijun Xiao, William Yang Wang
Despite improvements in performances on different natural language generation tasks, deep neural models are prone to hallucinating facts that are incorrect or nonexistent.
no code implementations • CVPR 2022 • Tsu-Jui Fu, Xin Eric Wang, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang
LBVE has two features: 1) the scenario of the source video is preserved instead of generating a completely different video; 2) the semantics are presented differently in the target video, and all changes are controlled by the given instruction.
no code implementations • 17 Apr 2021 • Nicole X. Han, William Yang Wang, Miguel P. Eckstein
Making accurate inferences about other individuals' locus of attention is essential for human social interactions and will be important for AI to effectively interact with humans.
no code implementations • 29 Apr 2021 • Shravan Murlidaran, William Yang Wang, Miguel P. Eckstein
Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes.
no code implementations • 29 May 2021 • Aditya Jonnalagadda, William Yang Wang, B. S. Manjunath, Miguel P. Eckstein
We propose Foveated Transformer (FoveaTer) model, which uses pooling regions and eye movements to perform object classification tasks using a Vision Transformer architecture.
no code implementations • 10 Jun 2021 • Wanrong Zhu, Xin Eric Wang, An Yan, Miguel Eckstein, William Yang Wang
Automatic evaluations for natural language generation (NLG) conventionally rely on token-level or embedding-level comparisons with text references.
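The two conventional comparison styles named here can be sketched side by side; the toy 3-d vectors stand in for learned sentence embeddings (hypothetical, for illustration only):

```python
import numpy as np

def token_overlap(hyp, ref):
    """Token-level comparison: fraction of reference tokens present in the hypothesis."""
    hyp_set, ref_set = set(hyp.split()), set(ref.split())
    return len(hyp_set & ref_set) / len(ref_set)

def embedding_similarity(hyp_vec, ref_vec):
    """Embedding-level comparison: cosine similarity of sentence vectors."""
    denom = np.linalg.norm(hyp_vec) * np.linalg.norm(ref_vec)
    return float(hyp_vec @ ref_vec / denom)

print(round(token_overlap("a cat sat", "the cat sat"), 3))  # 0.667
print(embedding_similarity(np.array([1., 0., 1.]), np.array([2., 0., 2.])))  # 1.0
```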
no code implementations • ACL 2021 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style.
no code implementations • 22 Oct 2021 • Yujie Lu, Ping Nie, Shengyu Zhang, Ming Zhao, Ruobing Xie, William Yang Wang, Yi Ren
However, existing work is primarily built upon pre-defined retrieval channels, including User-CF (U2U), Item-CF (I2I), and Embedding-based Retrieval (U2I), and thus has access only to the limited correlations between users and items that can be derived from partial information about latent interactions.
no code implementations • 22 Oct 2021 • Jiachen Li, Shuo Cheng, Zhenyu Liao, Huayan Wang, William Yang Wang, Qinxun Bai
Improving the sample efficiency of reinforcement learning algorithms requires effective exploration.
no code implementations • 2 Dec 2021 • Wenqiao Zhang, Xin Eric Wang, Siliang Tang, Haizhou Shi, Haocheng Shi, Jun Xiao, Yueting Zhuang, William Yang Wang
Such a setting can help explain the decisions of captioning models and prevents the model from hallucinating object words in its description.
no code implementations • 16 Dec 2021 • Michael Saxon, Xinyi Wang, Wenda Xu, William Yang Wang
Building natural language inference (NLI) benchmarks that are both challenging for modern techniques, and free from shortcut biases is difficult.
no code implementations • COLING 2022 • Wanrong Zhu, Bo Pang, Ashish V. Thapliyal, William Yang Wang, Radu Soricut
Dense video captioning aims to identify the events of interest in an input video, and generate descriptive captions for each event.
Ranked #3 on Dense Video Captioning on ViTT (CIDEr metric, using extra training data)
no code implementations • Findings (ACL) 2022 • Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, William Yang Wang
A pressing challenge in current dialogue systems is to successfully converse with users on topics with information distributed across different modalities.
no code implementations • Findings (NAACL) 2022 • Zhiyu Chen, Bing Liu, Seungwhan Moon, Chinnadhurai Sankar, Paul Crook, William Yang Wang
We also propose two new models, SimpleToDPlus and Combiner, for the proposed task.
no code implementations • 6 Jun 2022 • Yujie Lu, Weixi Feng, Wanrong Zhu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
Procedural planning aims to implement complex high-level goals by decomposition into sequential simpler low-level steps.
no code implementations • 10 Sep 2022 • Yujie Lu, Huiliang Zhang, Ping Nie, Weixi Feng, Wenda Xu, Xin Eric Wang, William Yang Wang
In this paper, we propose Unseen Discrepancy Anticipating Vision and Language Navigation (DAVIS), which learns to generalize to unseen environments by encouraging test-time visual consistency.
no code implementations • LREC 2022 • Alex Mei, Anisha Kabir, Rukmini Bapat, John Judge, Tony Sun, William Yang Wang
Neural text summarization has shown great potential in recent years.
no code implementations • 7 Oct 2022 • Yi-Lin Tuan, Zih-Yun Chiu, William Yang Wang
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data that involves multiple sub-components in a flexible and interpretable fashion.