no code implementations • 15 Dec 2022 • Caleb Ziems, William Held, Jingfeng Yang, Diyi Yang
First, we use this system to build stress tests for question answering, machine translation, and semantic parsing tasks.
no code implementations • 15 Dec 2022 • Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang
Generating a chain of thought (CoT) can increase large language model (LLM) performance on a wide range of tasks.
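Chain-of-thought prompting can be illustrated with a minimal sketch. The trigger phrase and helper below are illustrative assumptions, not the paper's method; the LLM call itself is omitted, only prompt construction is shown.

```python
# Zero-shot CoT sketch: append a reasoning trigger to a question before
# sending it to an LLM (model call omitted; prompt construction only).

COT_TRIGGER = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Wrap a question with a zero-shot CoT trigger phrase."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = build_cot_prompt("If I have 3 apples and buy 2 more, how many do I have?")
```

The elicited intermediate reasoning tokens, rather than the final answer alone, are what drives the performance gains described above.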
1 code implementation • 15 Dec 2022 • William Held, Christopher Hidey, Fei Liu, Eric Zhu, Rahul Goel, Diyi Yang, Rushin Shah
Modern virtual assistants use internal semantic parsing engines to convert user utterances to actionable commands.
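As a toy illustration of what a semantic parsing engine does (not the assistants' actual parsers), the sketch below maps a user utterance to a structured intent-and-slots frame; the intent names and regex are hypothetical.

```python
import re

# Toy semantic parser: map a user utterance to an actionable command frame.
# Real virtual-assistant parsers are learned models, not regexes.

def parse_utterance(utterance: str) -> dict:
    """Parse a simple alarm-setting utterance into an intent/slot frame."""
    m = re.search(r"set an alarm for (\S+)", utterance.lower())
    if m:
        return {"intent": "set_alarm", "slots": {"time": m.group(1)}}
    return {"intent": "unknown", "slots": {}}

command = parse_utterance("Set an alarm for 7am")
# command == {"intent": "set_alarm", "slots": {"time": "7am"}}
```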
no code implementations • 11 Oct 2022 • William Held, Diyi Yang
However, as a fixed-size model acquires more languages, its performance across all languages degrades, a phenomenon termed interference.
1 code implementation • EMNLP 2021 • William Held, Dan Iter, Dan Jurafsky
We model the entities and events in a reader's focus as a neighborhood within a learned latent embedding space that minimizes the distance between mentions and the centroids of their gold coreference clusters.
Ranked #1 on Event Coreference Resolution on Gun Violence Corpus
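The centroid objective above can be sketched minimally. All function and variable names here are illustrative assumptions, not the paper's implementation; it only shows the quantity being minimized: each mention's distance to its gold cluster's centroid.

```python
import math

# Sketch of a centroid-based coreference objective (illustrative, not the
# paper's code): sum of distances from mention embeddings to the centroid
# of their gold coreference cluster.

def centroid(vectors):
    """Mean vector of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_loss(mentions, clusters):
    """Total distance from each mention embedding to its cluster centroid.

    mentions: dict mention_id -> embedding (list of floats)
    clusters: list of lists of mention_ids (gold coreference clusters)
    """
    loss = 0.0
    for cluster in clusters:
        c = centroid([mentions[m] for m in cluster])
        loss += sum(distance(mentions[m], c) for m in cluster)
    return loss

mentions = {"m1": [0.0, 0.0], "m2": [2.0, 0.0], "m3": [5.0, 5.0]}
loss = cluster_loss(mentions, [["m1", "m2"], ["m3"]])
# centroid of m1 and m2 is [1, 0]; each is distance 1 from it, so loss == 2.0
```

Training the embedding space to shrink this quantity pulls coreferent mentions into tight neighborhoods, which is the intuition stated in the abstract.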
no code implementations • ACL 2019 • William Held, Nizar Habash
Hypernymy modeling has largely been divided between two paradigms: pattern-based methods and distributional methods.