no code implementations • RANLP 2021 • Tiziano Labruna, Bernardo Magnini
Recent task-oriented dialogue systems learn a model from annotated dialogues, which are in turn collected and annotated so that they are consistent with certain domain knowledge.
1 code implementation • 30 Apr 2024 • Tiziano Labruna, Jon Ander Campos, Gorka Azkune
Through our analysis, we demonstrate that Adapt-LLM generates the <RET> token when it determines that it does not know how to answer a question, signalling the need for IR; when it instead chooses to rely only on its parametric memory, it achieves notably high accuracy.
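The abstract describes a per-question retrieval decision: the model either answers directly from parametric memory or emits a special <RET> token to request IR. A minimal sketch of that control flow, assuming the paper's setup but with hypothetical stand-in functions (`toy_generate`, `toy_retrieve` are illustrative, not the actual Adapt-LLM implementation):

```python
RET_TOKEN = "<RET>"

def adaptive_answer(question, generate, retrieve):
    """Adaptive retrieval: ask the model first; if it emits <RET>,
    run information retrieval and ask again with the context."""
    # First pass: no retrieved context; the model may either answer
    # from parametric memory or emit the special <RET> token.
    first = generate(question, context=None)
    if first.strip() == RET_TOKEN:
        # The model signalled it does not know: fetch passages and re-ask.
        passages = retrieve(question)
        return generate(question, context=passages)
    return first

# Toy stand-ins to exercise the control flow (not a real LLM or IR system).
def toy_generate(question, context=None):
    if "capital of France" in question:
        return "Paris"  # answerable from parametric memory
    return RET_TOKEN if context is None else f"Answer using: {context}"

def toy_retrieve(question):
    return ["passage about " + question]
```

With these stand-ins, a known question is answered directly, while an unknown one triggers the retrieval branch before a second generation pass.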
no code implementations • 23 May 2023 • Tiziano Labruna, Sofia Brenna, Andrea Zaninello, Bernardo Magnini
Large pre-trained language models have exhibited unprecedented capabilities in producing high-quality text via prompting techniques.