no code implementations • 26 Feb 2024 • Domenic Rosati, Jan Wehner, Kai Williams, Łukasz Bartoszcze, Jan Batzner, Hassan Sajjad, Frank Rudzicz
Approaches to aligning large language models (LLMs) with human values have focused on correcting misalignment that emerges from pretraining.
1 code implementation • 14 Feb 2024 • Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad
Evaluations of model editing currently only use the 'next few token' completions after a prompt.
1 code implementation • 20 Oct 2023 • Henning Bartsch, Ole Jorgensen, Domenic Rosati, Jason Hoelscher-Obermaier, Jacob Pfau
Using this test, we find that despite increases in self-consistency, models usually place significant weight on alternative, inconsistent answers.
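To make the flavor of this test concrete, here is a minimal sketch (not the paper's exact procedure) that compares the log-probability a causal language model places on a preferred answer versus an alternative, inconsistent one; the model, question, and answers are illustrative assumptions.

```python
# Minimal sketch: score how much probability a causal LM places on competing answers.
# Assumes the tokenization of the prompt is a prefix of the tokenization of prompt+answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def answer_log_prob(model, tokenizer, prompt: str, answer: str) -> float:
    """Sum of token log-probabilities the model assigns to `answer` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Each answer token at position `pos` is predicted by the logits at `pos - 1`.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = "Q: What is the boiling point of water at sea level?\nA:"
for answer in [" 100 degrees Celsius.", " 90 degrees Celsius."]:
    print(repr(answer), round(answer_log_prob(model, tokenizer, prompt, answer), 2))
```

A self-consistent model would concentrate probability on one answer rather than spreading significant weight across contradictory alternatives.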
no code implementations • 17 Aug 2023 • Harsh Raj, Vipul Gupta, Domenic Rosati, Subhabrata Majumdar
Large Language Models (LLMs) exhibit remarkable fluency and competence across various natural language tasks.
no code implementations • 3 May 2023 • Domenic Rosati
We found that grounding source documents improves the relevance and readability of lay summaries but does not improve their factuality.
1 code implementation • 2 Mar 2023 • Derek Chen, Celine Lee, Yunan Lu, Domenic Rosati, Zhou Yu
Large language models (LLMs) effectively generate fluent text when the target output follows natural language patterns.
1 code implementation • 10 Nov 2022 • Harsh Raj, Domenic Rosati, Subhabrata Majumdar
While large pretrained language models (PLMs) demonstrate incredible fluency and performance on many natural language tasks, recent work has shown that well-performing PLMs are very sensitive to what prompts are fed into them.
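As a rough illustration of measuring this sensitivity, the sketch below (not the paper's exact metric) scores consistency as the mean pairwise semantic similarity of outputs produced under paraphrased prompts; the encoder model and example outputs are assumed for illustration.

```python
# Minimal sketch: consistency = mean pairwise semantic similarity of outputs
# collected from the same model under paraphrased versions of one prompt.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

def semantic_consistency(outputs: list[str], encoder: SentenceTransformer) -> float:
    """Mean pairwise cosine similarity between outputs; 1.0 = perfectly consistent."""
    embeddings = encoder.encode(outputs, convert_to_tensor=True)
    sims = [
        util.cos_sim(embeddings[i], embeddings[j]).item()
        for i, j in combinations(range(len(outputs)), 2)
    ]
    return sum(sims) / len(sims)

if __name__ == "__main__":
    # Outputs gathered from paraphrased prompts (illustrative placeholders).
    outputs = [
        "The capital of Canada is Ottawa.",
        "Ottawa is Canada's capital city.",
        "Canada's capital is Toronto.",  # an inconsistent answer
    ]
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice
    print(f"semantic consistency: {semantic_consistency(outputs, encoder):.3f}")
```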
no code implementations • 28 Oct 2022 • Domenic Rosati
Our case study shows that, beyond producing more human-like topics, evaluating with clustering and summarization measures instead of topic model measures offers additional advantages.
no code implementations • 28 Sep 2022 • Étienne Fortier-Dubois, Domenic Rosati
This work examines the use of contradiction in natural language inference (NLI) for question answering (QA).
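To make the setup concrete, here is a minimal sketch, assuming an off-the-shelf MNLI model rather than the paper's exact system, that scores how strongly a candidate answer contradicts a given context.

```python
# Minimal sketch: use a pretrained NLI model to estimate the probability that a
# candidate answer contradicts a context passage.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")  # illustrative NLI model
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def contradiction_score(premise: str, hypothesis: str) -> float:
    """Probability that `hypothesis` contradicts `premise` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return probs[model.config.label2id["CONTRADICTION"]].item()

context = "The Eiffel Tower was completed in 1889."
print(contradiction_score(context, "The Eiffel Tower was finished in 1950."))
print(contradiction_score(context, "The Eiffel Tower opened in 1889."))
```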
1 code implementation • sdp (COLING) 2022 • Domenic Rosati
By training on SynSciPass the same model that performed well on DAGPap22, we show that the model is not only more robust to domain shifts but is also able to uncover the type of technology used to produce machine-generated text.
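As a loose illustration of this kind of detector (not the paper's actual model or features), the sketch below trains a toy multi-class classifier that predicts which generation technology produced a passage; the texts and labels are invented placeholders.

```python
# Minimal sketch: a multi-class text classifier over generation-technology labels,
# in the spirit of training on a corpus like SynSciPass (toy data shown here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "We analyse the convergence of the proposed estimator under mild assumptions.",
    "The the results demonstrates significant improvement over over the baselines.",
    "In this study we we propose a novel framework for protein folding prediction.",
    "Our experiments were conducted on three benchmark datasets with five seeds.",
]
labels = ["human", "paraphraser", "generator", "human"]  # placeholder labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["We we propose a novel novel approach to summarisation."]))
```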
no code implementations • 29 Jan 2022 • Domenic Rosati
Machine learning models allow us to compare languages by showing how hard a task in each language might be to learn and perform well on.
1 code implementation • 16 Apr 2021 • Domenic Rosati
A key issue in citation content analysis is identifying linguistic structures that characterize distinct classes of citations, with the goal of understanding the intent and function of a citation.
no code implementations • 22 Feb 2021 • Domenic Rosati
Using this citation index, which labels citations by function (supporting, disputing, or mentioning), we present initial results on the statistical characterization of citations to journals by citation function.