Search Results for author: Guanghui Qin

Found 17 papers, 7 papers with code

CLERC: A Dataset for Legal Case Retrieval and Retrieval-Augmented Analysis Generation

1 code implementation · 24 Jun 2024 · Abe Bohan Hou, Orion Weller, Guanghui Qin, Eugene Yang, Dawn Lawrie, Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme

This dataset, CLERC (Case Law Evaluation Retrieval Corpus), is constructed for training and evaluating models on their ability to (1) find corresponding citations for a given piece of legal analysis and (2) compile the text of these citations (as well as previous context) into a cogent analysis that supports a reasoning goal.

Information Retrieval · RAG +1

Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents

no code implementations · 27 Feb 2024 · Corby Rosset, Ho-Lam Chung, Guanghui Qin, Ethan C. Chau, Zhuo Feng, Ahmed Awadallah, Jennifer Neville, Nikhil Rao

We show that users spend considerable "effort" on these questions, in terms of signals like clicks and session length, and that the questions are also challenging for GPT-4.

Known Unknowns · Question Answering +1

Streaming Sequence Transduction through Dynamic Compression

1 code implementation · 2 Feb 2024 · Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Heidi C. Zhang, Benjamin Van Durme, Philipp Koehn

We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams.

Automatic Speech Recognition (ASR) +1

Nugget: Neural Agglomerative Embeddings of Text

no code implementations · 3 Oct 2023 · Guanghui Qin, Benjamin Van Durme

Fixed-size text embeddings are problematic, as the amount of information contained in text often varies with the length of the input.

Language Modelling · Machine Translation +1

Dodo: Dynamic Contextual Compression for Decoder-only LMs

no code implementations · 3 Oct 2023 · Guanghui Qin, Corby Rosset, Ethan C. Chau, Nikhil Rao, Benjamin Van Durme

For example, in the autoencoding task, Dodo shrinks context at a 20x compression ratio with a BLEU score of 98% for reconstruction, achieving nearly lossless encoding.
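As a quick illustration (not from the paper), a 20x compression ratio means a decoder attends over roughly one compressed state per twenty context tokens; the helper name below is hypothetical:

```python
# Illustrative arithmetic only: how many compressed states a decoder
# attends over at a given contextual compression ratio.

def compressed_states(context_tokens: int, ratio: int = 20) -> int:
    """Number of compressed states representing a token context."""
    return context_tokens // ratio

for n in (1000, 4000, 20000):
    print(n, "tokens ->", compressed_states(n), "states")
```

So a 20,000-token context would be represented by about 1,000 states, which is what makes the near-lossless 98% BLEU reconstruction notable.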

Decoder · Language Modelling +1

The NLP Task Effectiveness of Long-Range Transformers

no code implementations · 16 Feb 2022 · Guanghui Qin, Yukun Feng, Benjamin Van Durme

Transformer models cannot easily scale to long sequences due to the O(N^2) time and space complexity of self-attention in the sequence length N.
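A back-of-the-envelope sketch (not from the paper) of why that quadratic term bites: the memory for a single N x N attention score matrix grows 64x when the sequence length grows 8x.

```python
# Illustrative only: size of one N x N float32 attention score matrix,
# showing quadratic growth in sequence length N.

def attention_matrix_mb(n_tokens: int, bytes_per_entry: int = 4) -> float:
    """Megabytes needed for a single N x N attention score matrix."""
    return n_tokens * n_tokens * bytes_per_entry / 1e6

for n in (512, 4096, 32768):
    print(n, "tokens:", round(attention_matrix_mb(n), 1), "MB per head per layer")
```

Multiplied across heads and layers, this is why long-range Transformer variants replace full attention with sparse or compressed alternatives.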

Learning How to Ask: Querying LMs with Mixtures of Soft Prompts

2 code implementations · NAACL 2021 · Guanghui Qin, Jason Eisner

We explore the idea of learning prompts by gradient descent, either by fine-tuning prompts taken from previous work or by starting from random initialization.
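A minimal toy sketch of the idea, not the paper's code: the "LM" here is just a frozen linear scorer, and only the soft prompt vector is updated by gradient descent (all names and dimensions below are invented for illustration).

```python
import numpy as np

# Toy prompt tuning: model weights W stay frozen; only the continuous
# prompt vector is optimized to make a target token likely.
rng = np.random.default_rng(0)
d, vocab = 8, 5
W = rng.normal(size=(vocab, d))   # frozen "LM" weights
x = rng.normal(size=d)            # fixed input representation
prompt = np.zeros(d)              # learnable soft prompt (zero init)
target = 3                        # token the prompt should elicit

def loss_and_grad(prompt):
    logits = W @ (x + prompt)     # prompt perturbs the input representation
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[target])     # cross-entropy on the target token
    grad = W.T @ (p - np.eye(vocab)[target])  # gradient w.r.t. prompt only
    return loss, grad

for _ in range(500):
    loss, grad = loss_and_grad(prompt)
    prompt -= 0.1 * grad          # W never changes; only the prompt moves

final_loss, _ = loss_and_grad(prompt)
print(round(final_loss, 4))
```

The key design point mirrored here is that the prompt lives in embedding space rather than vocabulary space, so it can be optimized with ordinary gradient descent while the model itself stays fixed.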

Language Modelling

Iterative Paraphrastic Augmentation with Discriminative Span Alignment

no code implementations · 1 Jul 2020 · Ryan Culkin, J. Edward Hu, Elias Stengel-Eskin, Guanghui Qin, Benjamin Van Durme

We introduce a novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment.


Neural Datalog Through Time: Informed Temporal Modeling via Logical Specification

1 code implementation · ICML 2020 · Hongyuan Mei, Guanghui Qin, Minjie Xu, Jason Eisner

Learning how to predict future events from patterns of past events is difficult when the set of possible event types is large.

Imputing Missing Events in Continuous-Time Event Streams

2 code implementations · 14 May 2019 · Hongyuan Mei, Guanghui Qin, Jason Eisner

On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events, with particle smoothing consistently improving upon particle filtering.

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data

1 code implementation · EMNLP 2018 · Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, Chin-Yew Lin

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.

Grounded language learning · Text Generation

Inference of unobserved event streams with neural Hawkes particle smoothing

no code implementations · 27 Sep 2018 · Hongyuan Mei, Guanghui Qin, Jason Eisner

Particle smoothing is an extension of particle filtering in which proposed events are conditioned on the future as well as the past.

