1 code implementation • 2 Feb 2024 • Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Heidi C. Zhang, Benjamin Van Durme, Philipp Koehn
We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams.
Automatic Speech Recognition • Automatic Speech Recognition (ASR) +1
1 code implementation • 20 Oct 2023 • Yunmo Chen, William Gantt, Tongfei Chen, Aaron Steven White, Benjamin Van Durme
We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g., event and relation extraction, syntactic and semantic parsing).
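The common recipe such metrics share — match predicted structures against gold structures, then compute precision, recall, and F1 — can be sketched minimally. This is an illustration of the general idea, not the paper's actual framework; the relation triples below are made up.

```python
# Minimal sketch of matching-based evaluation for structured prediction.
# Exact-match over hashable structures; real frameworks generalize the
# matching step (partial credit, alignment, etc.).

def prf1(gold, pred):
    """Precision, recall, F1 over sets of structures (e.g. relation triples)."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                       # exact-match true positives
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Illustrative (head, relation, tail) triples:
gold = [("Alice", "works_at", "Acme"), ("Bob", "born_in", "Paris")]
pred = [("Alice", "works_at", "Acme"), ("Bob", "born_in", "Lyon")]
```

Swapping the set-intersection matching for a task-specific alignment is where individual metrics differ.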
no code implementations • 13 Oct 2022 • Weiwei Gu, Boyuan Zheng, Yunmo Chen, Tongfei Chen, Benjamin Van Durme
We present an empirical study of methods for span finding, i.e., the selection of spans of consecutive tokens in text for downstream tasks.
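A common baseline formulation of span finding is to enumerate all candidate spans up to a maximum width and score each one. The sketch below shows only the enumeration step; the scoring model is omitted, and all names here are illustrative, not taken from the paper.

```python
# Enumerate all candidate spans of consecutive tokens up to max_width.
# A span scorer (not shown) would then classify or rank these candidates.

def enumerate_spans(tokens, max_width=4):
    """Yield (start, end) pairs, end exclusive, for spans up to max_width."""
    n = len(tokens)
    for start in range(n):
        for end in range(start + 1, min(start + max_width, n) + 1):
            yield start, end

tokens = "the quick brown fox".split()
spans = list(enumerate_spans(tokens, max_width=2))
```

The quadratic number of candidates is why the width cap matters in practice.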
2 code implementations • 12 Oct 2022 • Yunmo Chen, William Gantt, Weiwei Gu, Tongfei Chen, Aaron Steven White, Benjamin Van Durme
We present a novel iterative extraction model, IterX, for extracting complex relations, or templates (i.e., N-tuples representing a mapping from named slots to spans of text) within a document.
1 code implementation • NeurIPS 2023 • Subhro Roy, Sam Thomson, Tongfei Chen, Richard Shin, Adam Pauls, Jason Eisner, Benjamin Van Durme
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which includes context-free grammars for seven semantic parsing datasets and two syntactic parsing datasets with varied output representations, as well as a constrained decoding interface that generates only the valid outputs covered by these grammars.
no code implementations • EACL 2021 • Patrick Xia, Guanghui Qin, Siddharth Vashishtha, Yunmo Chen, Tongfei Chen, Chandler May, Craig Harman, Kyle Rawlins, Aaron Steven White, Benjamin Van Durme
We present LOME, a system for performing multilingual information extraction.
1 code implementation • 20 Nov 2020 • Yunmo Chen, Tongfei Chen, Benjamin Van Durme
We recognize the task of event argument linking in documents as similar to intent slot resolution in dialogue, and provide a Transformer-based model that extends a recently proposed solution for resolving references to slots.
no code implementations • 29 Apr 2020 • Luyu Gao, Zhuyun Dai, Tongfei Chen, Zhen Fan, Benjamin Van Durme, Jamie Callan
This paper presents CLEAR, a retrieval model that seeks to complement classical lexical exact-match models such as BM25 with semantic matching signals from a neural embedding matching model.
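The general idea behind such hybrid retrieval is to interpolate a lexical exact-match score with an embedding similarity. A minimal sketch, assuming a simple linear interpolation — the weight `lam` and the toy vectors are illustrative, not CLEAR's actual formulation:

```python
# Sketch: combine a lexical score (e.g. BM25) with a semantic similarity
# from dense embeddings. CLEAR's trained residual setup is more involved;
# this shows only the "complement lexical with semantic" intuition.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hybrid_score(bm25, q_emb, d_emb, lam=0.5):
    """Final ranking score = lexical score + lam * semantic similarity."""
    return bm25 + lam * cosine(q_emb, d_emb)
```

Documents sharing few exact terms with the query can still rank well when their embeddings are close.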
1 code implementation • ACL 2020 • Tongfei Chen, Yunmo Chen, Benjamin Van Durme
We propose a novel method for hierarchical entity classification that embraces ontological structure during both training and prediction.
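One concrete way a classifier can "embrace ontological structure" at prediction time is to enforce ancestor consistency: any predicted type implies all of its ancestors in the ontology. The toy ontology and function below are illustrative only, not the paper's method.

```python
# Enforce ancestor consistency over a toy type ontology: predicting a
# child type (e.g. "athlete") entails all of its ancestors.

PARENT = {"athlete": "person", "person": "entity",
          "company": "organization", "organization": "entity"}

def with_ancestors(types):
    """Close a set of predicted types under the parent relation."""
    closed = set(types)
    for t in types:
        while t in PARENT:
            t = PARENT[t]
            closed.add(t)
    return closed
```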
no code implementations • EMNLP (spnlp) 2020 • Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme
We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.
1 code implementation • 18 Sep 2019 • Yiming Wang, Tongfei Chen, Hainan Xu, Shuoyang Ding, Hang Lv, Yiwen Shao, Nanyun Peng, Lei Xie, Shinji Watanabe, Sanjeev Khudanpur
We present Espresso, an open-source, modular, extensible end-to-end neural automatic speech recognition (ASR) toolkit based on the deep learning library PyTorch and the popular neural machine translation toolkit fairseq.
Ranked #1 on Speech Recognition on Hub5'00 CallHome
Automatic Speech Recognition • Automatic Speech Recognition (ASR) +5
no code implementations • ACL 2020 • Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, Benjamin Van Durme
We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments.
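Predicting a probability in [0, 1] rather than a categorical label means regression metrics such as mean squared error replace accuracy. A minimal sketch — the annotation values below are made up for illustration, not drawn from the UNLI data:

```python
# With scalar probability targets, evaluation becomes regression:
# compare predicted probabilities against subjective probability annotations.

def mse(preds, golds):
    """Mean squared error between predicted and annotated probabilities."""
    return sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds)

gold_probs = [0.95, 0.10, 0.55]   # illustrative subjective annotations
model_out  = [0.90, 0.20, 0.50]   # illustrative model predictions
```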
no code implementations • ACL 2019 • Zhongyang Li, Tongfei Chen, Benjamin Van Durme
Researchers illustrate improvements in contextual encoding strategies via resultant performance on a battery of shared Natural Language Understanding (NLU) tasks.
no code implementations • WS 2019 • Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, Lambert Mathias
One such approach for tracking the dialogue state is slot carryover, where a model makes a binary decision as to whether a slot from the context is relevant to the current turn.
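The shape of that binary decision can be sketched with a placeholder scoring function combining recency and lexical overlap — a stand-in heuristic, not the paper's learned model:

```python
# Slot carryover sketch: for a slot from the dialogue context, decide
# whether it should carry into the current turn. The scoring here (recency
# decay + surface-mention check) is a toy heuristic for illustration.

def carry_over(slot_value, turns_ago, current_utterance, threshold=0.5):
    """Return True if the context slot is judged relevant to this turn."""
    recency = 1.0 / (1 + turns_ago)                  # older slots decay
    mention = 1.0 if slot_value.lower() in current_utterance.lower() else 0.0
    score = 0.5 * recency + 0.5 * mention
    return score >= threshold
```

A trained model would replace the hand-set weights with learned ones over richer features.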
1 code implementation • NAACL 2019 • J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme
Lexically-constrained sequence decoding allows for explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual text rewriting.
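The simplest reading of positive and negative phrase constraints: an output is valid only if it contains every required phrase and none of the forbidden ones. Real constrained beam search enforces this during decoding; the post-hoc check below is just a sketch with made-up hypotheses.

```python
# Post-hoc check of lexical constraints on candidate outputs.
# positive: phrases that must appear; negative: phrases that must not.

def satisfies(hypothesis, positive=(), negative=()):
    return (all(p in hypothesis for p in positive)
            and not any(n in hypothesis for n in negative))

hyps = ["the cat sat on the mat", "a cat sat on a rug", "the dog sat down"]
valid = [h for h in hyps if satisfies(h, positive=["cat"], negative=["rug"])]
```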
no code implementations • NAACL 2019 • Pushpendre Rastogi, Arpit Gupta, Tongfei Chen, Lambert Mathias
We present a novel approach to dialogue state tracking and referring expression resolution tasks.
Dialogue State Tracking • Multi-domain Dialogue State Tracking +3
no code implementations • 6 Feb 2019 • Yiming Wang, Xing Fan, I-Fan Chen, Yuzong Liu, Tongfei Chen, Björn Hoffmeister
The anchored segment refers to the wake-up word part of an audio stream, which contains valuable speaker information that can be used to suppress interfering speech and background noise.
no code implementations • 14 May 2018 • Tongfei Chen, Jiří Navrátil, Vijay Iyengar, Karthikeyan Shanmugam
We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model.
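The two-model idea can be sketched as a meta-model that consumes features derived from the base model's output distribution and emits a confidence score. The features (max probability, entropy) and the hand-set weights below are illustrative assumptions, not the paper's learned meta-model.

```python
# Two-model confidence sketch: derive features from the base model's
# softmax output, then combine them with a logistic meta-scorer.
import math

def base_features(probs):
    """Features of the base model's predictive distribution."""
    max_p = max(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return max_p, entropy

def meta_confidence(probs, w_max=1.0, w_ent=-0.5, bias=0.0):
    """Logistic combination of base-model features -> confidence in (0, 1)."""
    max_p, entropy = base_features(probs)
    z = w_max * max_p + w_ent * entropy + bias
    return 1 / (1 + math.exp(-z))
```

A peaked distribution yields higher confidence than a flat one, which is the behavior a trained meta-model would refine.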
no code implementations • IJCNLP 2017 • Benjamin Van Durme, Tom Lippincott, Kevin Duh, Deana Burchfield, Adam Poliak, Cash Costello, Tim Finin, Scott Miller, James Mayfield, Philipp Koehn, Craig Harman, Dawn Lawrie, Chandler May, Max Thomas, Annabelle Carrell, Julianne Chaloux, Tongfei Chen, Alex Comerford, Mark Dredze, Benjamin Glass, Shudong Hao, Patrick Martin, Pushpendre Rastogi, Rashmi Sankepally, Travis Wolfe, Ying-Ying Tran, Ted Zhang
It combines a multitude of analytics with a flexible environment for customizing the workflow for different users.
1 code implementation • EACL 2017 • Tongfei Chen, Benjamin Van Durme
We propose a framework for discriminative IR atop linguistic features, trained to improve the recall of answer candidate passage retrieval, the initial step in text-based question answering.