no code implementations • 2 Feb 2024 • Paweł Mąka, Yusuf Can Semerci, Jan Scholtes, Gerasimos Spanakis
In this study, we show that a special case of the multi-encoder architecture, in which the latent representation of the source sentence is cached and reused as the context at the next step, achieves higher accuracy on contrastive datasets (where the models must rank the correct translation among the provided candidate sentences), with BLEU and COMET scores comparable to both the single- and multi-encoder approaches.
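The cached-context idea can be sketched in plain Python (a minimal illustration of the mechanism only; the encoder, names, and representations below are hypothetical stand-ins, not the authors' implementation):

```python
def encode(sentence):
    # Stand-in for a real neural encoder: produces a trivial "latent"
    # representation (token lengths) just to make the sketch runnable.
    return [len(tok) for tok in sentence.split()]

def translate_document(sentences):
    """Process sentences one by one, reusing the cached latent
    representation of the previous source sentence as context
    instead of re-encoding it with a second encoder."""
    cache = None   # latent representation of the previous sentence
    outputs = []
    for sent in sentences:
        latent = encode(sent)   # encode the current source sentence once
        context = cache         # reuse the previous latent as context
        outputs.append((latent, context))
        cache = latent          # store for the next step
    return outputs

doc = ["hello world", "how are you"]
result = translate_document(doc)
```

The first sentence has no context (`None`); every later sentence sees the cached encoding of its predecessor at no extra encoding cost.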
no code implementations • 25 Nov 2023 • Timo Kats, Peter van der Putten, Jan Scholtes
Our results show that this method can reduce review effort by between 17.85% and 59.04% compared to a baseline approach with no feedback, given a fixed recall target.