no code implementations • 9 Mar 2024 • Shamik Roy, Sailik Sengupta, Daniele Bonadiman, Saab Mansour, Arshit Gupta
To study this, we propose the problem of faithful planning in task-oriented dialogues (TODs), which requires resolving user intents by following predefined flows while preserving API dependencies.
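As a rough illustration of the dependency constraint mentioned above, the sketch below checks that a candidate plan calls APIs only after their prerequisites; the flow, API names, and dependency map are invented for illustration and are not taken from the paper.

```python
# Hypothetical API dependency map for a task-oriented dialogue flow:
# each API may only be called after all of its prerequisites have run.
API_DEPENDENCIES = {
    "search_flights": [],
    "select_flight": ["search_flights"],
    "collect_payment": [],
    "book_flight": ["select_flight", "collect_payment"],
}

def plan_is_faithful(plan):
    """Return True if every API in `plan` appears after all of its prerequisites."""
    seen = set()
    for api in plan:
        if any(dep not in seen for dep in API_DEPENDENCIES.get(api, [])):
            return False
        seen.add(api)
    return True

print(plan_is_faithful(["search_flights", "select_flight", "collect_payment", "book_flight"]))  # True
print(plan_is_faithful(["book_flight", "search_flights"]))  # False: prerequisites not yet called
```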
no code implementations • 5 Mar 2024 • Bryan Li, Tamer Alkhouli, Daniele Bonadiman, Nikolaos Pappas, Saab Mansour
xSTREET exposes a gap in base LLM performance between English and non-English reasoning tasks.
no code implementations • 5 Feb 2024 • James Y. Huang, Sailik Sengupta, Daniele Bonadiman, Yi-An Lai, Arshit Gupta, Nikolaos Pappas, Saab Mansour, Katrin Kirchhoff, Dan Roth
Current work focuses on alignment at model training time, through techniques such as Reinforcement Learning with Human Feedback (RLHF).
1 code implementation • 15 Dec 2022 • Denis Emelin, Daniele Bonadiman, Sawsan Alqahtani, Yi Zhang, Saab Mansour
Pre-trained language models (PLMs) have advanced the state of the art across NLP applications, but they lack domain-specific knowledge that does not naturally occur in pre-training data.
1 code implementation • 4 Dec 2022 • Han He, Song Feng, Daniele Bonadiman, Yi Zhang, Saab Mansour
DataFlow has emerged as a new paradigm for building task-oriented chatbots due to its expressive semantic representations of dialogue tasks.
1 code implementation • NAACL 2021 • Piyawat Lertvittayakumjorn, Daniele Bonadiman, Saab Mansour
In practice, some combinations of slot values can be invalid according to external knowledge.
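A toy sketch of that kind of constraint check, using an invented knowledge table of valid (city, airport) slot combinations; the slot names and values are hypothetical and not from the paper.

```python
# Hypothetical external knowledge: airports actually located in each city.
VALID_CITY_AIRPORTS = {
    "Seattle": {"SEA"},
    "New York": {"JFK", "LGA", "EWR"},
}

def slots_are_valid(slots):
    """Reject slot-value combinations that contradict the external knowledge."""
    city, airport = slots.get("city"), slots.get("airport")
    if city is None or airport is None:
        return True  # nothing to cross-check yet
    return airport in VALID_CITY_AIRPORTS.get(city, set())

print(slots_are_valid({"city": "Seattle", "airport": "SEA"}))  # True
print(slots_are_valid({"city": "Seattle", "airport": "JFK"}))  # False: invalid combination
```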
no code implementations • COLING 2020 • Daniele Bonadiman, Alessandro Moschitti
An essential task of most Question Answering (QA) systems is to re-rank the set of answer candidates, i.e., Answer Sentence Selection (A2S).
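For concreteness, here is a minimal candidate re-ranker that scores candidates by plain token overlap; the question, candidates, and scoring function are illustrative only and not the models studied in the paper.

```python
def overlap_score(question, candidate):
    """Score an answer candidate by simple token overlap with the question."""
    q_tokens = set(question.lower().split())
    c_tokens = set(candidate.lower().split())
    return len(q_tokens & c_tokens) / max(len(q_tokens | c_tokens), 1)

def rerank(question, candidates):
    """Return answer candidates sorted from most to least relevant."""
    return sorted(candidates, key=lambda c: overlap_score(question, c), reverse=True)

question = "when was the Eiffel Tower built"
candidates = [
    "The Eiffel Tower was built between 1887 and 1889.",
    "Paris is the capital of France.",
]
print(rerank(question, candidates)[0])  # the candidate that actually answers the question
```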
no code implementations • WS 2019 • Daniele Bonadiman, Anjishnu Kumar, Arpit Mittal
The goal of a Question Paraphrase Retrieval (QPR) system is to retrieve equivalent questions that result in the same answer as the original question.
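A small sketch of such a retrieval step, using TF-IDF cosine similarity over an invented pool of previously answered questions; this is a generic baseline for illustration, not the system described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pool of previously answered questions.
faq_questions = [
    "How do I reset my password?",
    "What is the return policy for electronics?",
    "How can I change my shipping address?",
]

def retrieve_paraphrase(query, pool):
    """Return the pooled question most similar to the query."""
    vectors = TfidfVectorizer().fit_transform(pool + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    return pool[scores.argmax()]

print(retrieve_paraphrase("I forgot my password, how do I set a new one?", faq_questions))
```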
1 code implementation • ACL 2018 • Antonio Uva, Daniele Bonadiman, Alessandro Moschitti
Effectively using full syntactic parsing information in Neural Networks (NNs) to solve relational tasks, e.g., question similarity, is still an open problem.
no code implementations • EMNLP 2017 • Kateryna Tymoshenko, Daniele Bonadiman, Alessandro Moschitti
Recent work has shown that Tree Kernels (TKs) and Convolutional Neural Networks (CNNs) achieve state-of-the-art results in answer sentence reranking.
no code implementations • EACL 2017 • Daniele Bonadiman, Antonio Uva, Alessandro Moschitti
An important asset of using Deep Neural Networks (DNNs) for text applications is their ability to automatically engineer features.
no code implementations • 13 Feb 2017 • Daniele Bonadiman, Antonio Uva, Alessandro Moschitti
In this paper, we developed a deep neural network (DNN) that learns to simultaneously solve the three tasks of the cQA challenge proposed in SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity, and new question-comment similarity.
no code implementations • SEMEVAL 2016 • Alberto Barrón-Cedeño, Daniele Bonadiman, Giovanni Da San Martino, Shafiq Joty, Alessandro Moschitti, Fahad Al Obaidli, Salvatore Romeo, Kateryna Tymoshenko, Antonio Uva
Ranked #2 on Question Answering on SemEvalCQA