no code implementations • 27 Feb 2024 • Corby Rosset, Ho-Lam Chung, Guanghui Qin, Ethan C. Chau, Zhuo Feng, Ahmed Awadallah, Jennifer Neville, Nikhil Rao
We show that users spend substantial "effort" on these questions, as measured by signals such as clicks and session length, and that the questions are also challenging for GPT-4.
1 code implementation • 2 Feb 2024 • Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Heidi C. Zhang, Benjamin Van Durme, Philipp Koehn
We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams.
Automatic Speech Recognition (ASR) +1
no code implementations • 3 Oct 2023 • Guanghui Qin, Benjamin Van Durme
This is problematic, as the amount of information contained in text often varies with the length of the input.
no code implementations • 3 Oct 2023 • Guanghui Qin, Corby Rosset, Ethan C. Chau, Nikhil Rao, Benjamin Van Durme
Standard Transformer-based language models (LMs) scale poorly to long contexts.
no code implementations • 16 Feb 2022 • Guanghui Qin, Yukun Feng, Benjamin Van Durme
Transformer models cannot easily scale to long sequences due to their O(N^2) time and space complexity.
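The quadratic cost is easy to see in a minimal self-attention sketch (an illustration of the bottleneck, not any particular paper's method): the score matrix alone has shape (N, N), so both time and memory grow as O(N^2) in sequence length.

```python
import numpy as np

def self_attention(x):
    """Naive single-head self-attention over N vectors of dimension d."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)   # (N, N) score matrix: the O(N^2) bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x              # (N, d) weighted sum of values

x = np.random.default_rng(0).normal(size=(8, 4))
out = self_attention(x)
```

Doubling N quadruples the size of `scores`, which is why long-context work replaces or compresses this matrix.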
2 code implementations • EMNLP 2021 • Mahsa Yarmohammadi, Shijie Wu, Marc Marone, Haoran Xu, Seth Ebner, Guanghui Qin, Yunmo Chen, Jialiang Guo, Craig Harman, Kenton Murray, Aaron Steven White, Mark Dredze, Benjamin Van Durme
Zero-shot cross-lingual information extraction (IE) describes the construction of an IE model for some target language, given existing annotations exclusively in some other language, typically English.
2 code implementations • NAACL 2021 • Guanghui Qin, Jason Eisner
We explore the idea of learning prompts by gradient descent -- either fine-tuning prompts taken from previous work, or starting from random initialization.
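The core idea can be sketched in a few lines: continuous "prompt" vectors are prepended to the input and tuned by gradient descent while the model itself stays frozen. The toy mean-pooled linear "model" below is an assumption for illustration, standing in for a real language model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_prompt = 4, 3
W = np.ones(d)                                  # frozen "model" weights
prompt = rng.normal(size=(n_prompt, d)) * 0.01  # learnable soft-prompt vectors
x = rng.normal(size=(5, d))                     # fixed input token vectors
target = 1.0

def forward(prompt, x):
    seq = np.vstack([prompt, x])    # prepend the prompt to the input sequence
    return seq.mean(axis=0) @ W     # toy frozen model: mean-pool then project

lr, seq_len = 0.1, n_prompt + len(x)
for _ in range(500):
    pred = forward(prompt, x)
    grad_pred = 2 * (pred - target)                       # d(squared loss)/d(pred)
    grad_prompt = np.tile(grad_pred * W / seq_len, (n_prompt, 1))
    prompt -= lr * grad_prompt      # only the prompt is updated; W never changes
```

After training, `forward(prompt, x)` sits close to the target even though no model weight moved, which is the appeal of prompt tuning over full fine-tuning.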
no code implementations • EACL 2021 • Patrick Xia, Guanghui Qin, Siddharth Vashishtha, Yunmo Chen, Tongfei Chen, Chandler May, Craig Harman, Kyle Rawlins, Aaron Steven White, Benjamin Van Durme
We present LOME, a system for performing multilingual information extraction.
no code implementations • EMNLP (spnlp) 2020 • Abhinav Singh, Patrick Xia, Guanghui Qin, Mahsa Yarmohammadi, Benjamin Van Durme
Copy mechanisms are employed in sequence-to-sequence (seq2seq) models to reproduce words from the input in the output.
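A common way to realize this (shown here as a generic pointer-generator-style sketch with toy numbers, not the paper's exact model) is to mix the decoder's vocabulary distribution with an attention distribution over source positions, weighted by a generation probability p_gen:

```python
import numpy as np

def copy_mix(vocab_probs, attn, src_ids, p_gen):
    """Combine generating from the vocab with copying from the source."""
    out = p_gen * vocab_probs.copy()
    for pos, tok in enumerate(src_ids):
        out[tok] += (1 - p_gen) * attn[pos]  # route attention mass to source tokens
    return out

vocab_probs = np.array([0.7, 0.2, 0.1, 0.0])  # decoder softmax over a 4-word vocab
attn = np.array([0.5, 0.5])                   # attention over 2 source positions
src_ids = [3, 3]                              # both source tokens are word 3 (e.g. a rare name)
mixed = copy_mix(vocab_probs, attn, src_ids, p_gen=0.6)
# word 3 receives 0.6*0.0 + 0.4*(0.5+0.5) = 0.4 despite zero generation probability
```

The mixture lets the model emit rare or out-of-vocabulary input words that the generator alone would assign near-zero probability.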
no code implementations • 1 Jul 2020 • Ryan Culkin, J. Edward Hu, Elias Stengel-Eskin, Guanghui Qin, Benjamin Van Durme
We introduce a novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment.
1 code implementation • ICML 2020 • Hongyuan Mei, Guanghui Qin, Minjie Xu, Jason Eisner
Learning how to predict future events from patterns of past events is difficult when the set of possible event types is large.
no code implementations • 25 Sep 2019 • Hongyuan Mei, Guanghui Qin, Minjie Xu, Jason Eisner
Consider a world in which events occur that involve various entities.
2 code implementations • 14 May 2019 • Hongyuan Mei, Guanghui Qin, Jason Eisner
On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events, with particle smoothing consistently improving upon particle filtering.
no code implementations • EMNLP 2018 • Longxu Dou, Guanghui Qin, Jinpeng Wang, Jin-Ge Yao, Chin-Yew Lin
Data2Text Studio is a platform for automated text generation from structured data.
1 code implementation • EMNLP 2018 • Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, Chin-Yew Lin
Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.
no code implementations • 27 Sep 2018 • Hongyuan Mei, Guanghui Qin, Jason Eisner
Particle smoothing is an extension of particle filtering in which proposed events are conditioned on the future as well as the past.
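The filtering side of that distinction can be sketched on a toy 1-D state-space model (an assumption for illustration; the papers above work with event sequences, not this Gaussian random walk). A bootstrap filter weights particles by past and present observations only; a smoother would additionally reweight them by the likelihood of future observations.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 20, 500                                   # time steps, particles
true_x = np.cumsum(rng.normal(size=T))           # latent random walk
obs = true_x + rng.normal(scale=0.5, size=T)     # noisy observations

particles = np.zeros(P)
filtered = []
for t in range(T):
    particles = particles + rng.normal(size=P)        # propose from the prior dynamics
    logw = -0.5 * ((obs[t] - particles) / 0.5) ** 2   # Gaussian likelihood of obs[t]
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered.append(np.sum(w * particles))            # filtering posterior mean
    particles = particles[rng.choice(P, size=P, p=w)] # resample
# A smoother would further reweight these particles by the likelihood of obs[t+1:],
# conditioning each estimate on the future as well as the past.
```

Conditioning on the future is what lets smoothing recover unobserved events more accurately than filtering alone.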