Search Results for author: Justin Vasselli

Found 6 papers, 4 papers with code

Measuring the Robustness of Reference-Free Dialogue Evaluation Systems

1 code implementation • 12 Jan 2025 • Justin Vasselli, Adam Nohejl, Taro Watanabe

Advancements in dialogue systems powered by large language models (LLMs) have outpaced the development of reliable evaluation metrics, particularly for diverse and creative responses.

Dialogue Evaluation, TAG
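The kind of robustness probe the title refers to can be illustrated with a small sketch: perturb a response and check how far a reference-free scorer's output moves. The `score` function below is a hypothetical stand-in for any (context, response) metric, not the paper's evaluator, and the perturbations are only illustrative attacks.

```python
# Minimal sketch (not the paper's method): probe a reference-free dialogue
# evaluator by perturbing responses and measuring the score shift.

def score(context: str, response: str) -> float:
    # Hypothetical stand-in scorer: rewards token overlap with the context.
    ctx, resp = set(context.lower().split()), set(response.lower().split())
    return len(ctx & resp) / max(len(resp), 1)

def perturbations(response: str) -> dict[str, str]:
    words = response.split()
    return {
        "truncate_half": " ".join(words[: max(len(words) // 2, 1)]),
        "reverse": " ".join(reversed(words)),           # crude word-order attack
        "generic": "That sounds great, tell me more!",  # generic-response attack
    }

context = "What did you think of the new sci-fi movie?"
response = "I loved the visuals, but the plot felt rushed."

base = score(context, response)
for name, perturbed in perturbations(response).items():
    delta = score(context, perturbed) - base
    print(f"{name:>14}: score shift {delta:+.3f}")
```

A robust evaluator should penalize the degraded responses; an evaluator whose scores barely move under such perturbations is easy to game.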

Improving Explainability of Sentence-level Metrics via Edit-level Attribution for Grammatical Error Correction

1 code implementation • 17 Dec 2024 • Takumi Goto, Justin Vasselli, Taro Watanabe

Various evaluation metrics have been proposed for Grammatical Error Correction (GEC), but many, particularly reference-free metrics, lack explainability.

Attribute, Grammatical Error Correction, +1
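One simple way to attribute a sentence-level score to individual edits, sketched below, is leave-one-edit-out: re-score the hypothesis with each edit removed and credit the edit with the resulting score drop. This is a hedged illustration of the general idea; the paper's actual attribution method may differ, and `toy_metric` is a placeholder, not a real GEC metric.

```python
# Leave-one-edit-out attribution sketch for a sentence-level GEC metric.
from typing import Callable

Edit = tuple[int, int, str]  # (start, end, replacement) span on the source

def apply_edits(source: str, edits: list[Edit]) -> str:
    # Apply non-overlapping edits right-to-left so earlier offsets stay valid.
    out = source
    for start, end, repl in sorted(edits, reverse=True):
        out = out[:start] + repl + out[end:]
    return out

def attribute(source: str, edits: list[Edit],
              metric: Callable[[str], float]) -> list[float]:
    full = metric(apply_edits(source, edits))
    # Each edit's contribution = score drop when only that edit is removed.
    return [full - metric(apply_edits(source, edits[:i] + edits[i + 1:]))
            for i in range(len(edits))]

reference = "He went to school yesterday."

def toy_metric(hypothesis: str) -> float:
    # Toy stand-in metric: token-overlap F-score against one reference
    # correction (real sentence-level metrics are more involved).
    hyp, ref = hypothesis.split(), reference.split()
    common = len(set(hyp) & set(ref))
    return 2 * common / (len(hyp) + len(ref))

source = "He go to school yesterday ."
edits = [(3, 5, "went"), (25, 26, "")]  # "go" -> "went"; drop space before "."
print(attribute(source, edits, toy_metric))  # per-edit score contributions
```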

How to Make the Most of LLMs' Grammatical Knowledge for Acceptability Judgments

no code implementations • 19 Aug 2024 • Yusuke Ide, Yuto Nishida, Miyu Oba, Yusuke Sakai, Justin Vasselli, Hidetaka Kamigaito, Taro Watanabe

The grammatical knowledge of language models (LMs) is often measured using a benchmark of linguistic minimal pairs, where LMs are presented with a pair of acceptable and unacceptable sentences and required to judge which is acceptable.
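The standard minimal-pair protocol the snippet describes amounts to comparing the log-probability the LM assigns to each sentence and taking the higher one as its judgment. The sketch below uses GPT-2 via Hugging Face transformers as an illustrative model choice; the benchmark and models in the paper may differ.

```python
# Minimal-pair acceptability judgment: the LM "prefers" whichever
# sentence it assigns the higher total log-probability.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the returned loss is the mean token NLL over the
        # shifted sequence; negate and rescale to a total log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

acceptable = "The cats on the mat are sleeping."
unacceptable = "The cats on the mat is sleeping."
choice = max((acceptable, unacceptable), key=sentence_logprob)
print("LM prefers:", choice)
```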

knn-seq: Efficient, Extensible kNN-MT Framework

1 code implementation • 18 Oct 2023 • Hiroyuki Deguchi, Hayate Hirano, Tomoki Hoshino, Yuto Nishida, Justin Vasselli, Taro Watanabe

We publish knn-seq as an MIT-licensed open-source project; the code is available at https://github.com/naist-nlp/knn-seq.

Machine Translation, NMT, +1
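The kNN-MT recipe that knn-seq implements (Khandelwal et al., 2021) interpolates the NMT model's next-token distribution with one induced by nearest neighbors in a datastore of (decoder hidden state, next target token) pairs. The numpy sketch below shows that interpolation with random data; the shapes, temperature, and lambda are illustrative values, not knn-seq defaults or its API.

```python
# Minimal kNN-MT sketch: p_final = lam * p_kNN + (1 - lam) * p_MT.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, n_entries, k = 100, 16, 1000, 8

# Datastore built offline from a parallel corpus: key = decoder hidden
# state at a target position, value = the gold next token there.
keys = rng.normal(size=(n_entries, dim)).astype(np.float32)
values = rng.integers(0, vocab, size=n_entries)

def knn_distribution(query: np.ndarray, temperature: float = 10.0) -> np.ndarray:
    d2 = ((keys - query) ** 2).sum(axis=1)  # squared L2 distances
    idx = np.argpartition(d2, k)[:k]        # indices of the k nearest entries
    weights = np.exp(-d2[idx] / temperature)
    p = np.zeros(vocab)
    np.add.at(p, values[idx], weights)      # aggregate weights by target token
    return p / p.sum()

query = rng.normal(size=dim).astype(np.float32)  # current decoder state
p_mt = rng.dirichlet(np.ones(vocab))             # stand-in NMT softmax output
lam = 0.5
p_final = lam * knn_distribution(query) + (1 - lam) * p_mt
print("next token:", int(p_final.argmax()))
```

In practice the datastore holds millions of entries and the search runs on an approximate index (e.g., FAISS), which is where an efficient framework like knn-seq earns its keep.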
