To address these challenges, this paper introduces a set of KPIs tailored for evaluating the performance of in-car ConvQA systems, along with datasets specifically designed for these KPIs.
Large language models (LLMs) have demonstrated remarkable performance by following natural language instructions, without being fine-tuned on domain-specific tasks or data.
In our approach, we build on existing strong single-modality representations and use hypercomplex algebra to represent both (i) the embedding of each single modality and (ii) the interactions between different modalities and their complementary means of knowledge representation.
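As a minimal sketch of how hypercomplex algebra can capture cross-modality interactions, the example below fuses two (hypothetical) 4-dimensional modality embeddings with the quaternion Hamilton product; the embedding values and the choice of the Hamilton product as the fusion operator are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hamilton_product(p, q):
    # Hamilton product of quaternions p = a + bi + cj + dk and
    # q = e + fi + gj + hk, each given as a length-4 array.
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a * e - b * f - c * g - d * h,  # real part
        a * f + b * e + c * h - d * g,  # i component
        a * g - b * h + c * e + d * f,  # j component
        a * h + b * g - c * f + d * e,  # k component
    ])

# Hypothetical toy embeddings for two modalities (e.g. text and image).
text_emb = np.array([0.5, 0.1, -0.2, 0.3])
image_emb = np.array([0.4, -0.1, 0.2, 0.6])

# The product mixes every component of one modality with every component
# of the other, so the fused vector encodes their interaction.
fused = hamilton_product(text_emb, image_emb)
```

Because the Hamilton product is non-commutative, swapping the two modalities yields a different fused vector, which lets the model distinguish the roles of the modalities.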
Task-oriented dialogue generation is challenging: the underlying knowledge is often dynamic, and effectively incorporating it into the learning process is hard.
Secondly, it should consider the grammatical quality of the generated sentence.
Such models can be trained by contrasting positive and negative triples.
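A common instance of this contrastive scheme is a TransE-style margin ranking loss: the score of a true triple is pushed below that of a corrupted one by at least a fixed margin. The sketch below uses randomly initialized toy embeddings and hypothetical entity names purely for illustration.

```python
import numpy as np

# Hypothetical toy knowledge-graph embeddings (random initialization).
rng = np.random.default_rng(0)
dim = 8
emb = {name: rng.normal(size=dim)
       for name in ["paris", "france", "berlin", "capital_of"]}

def score(h, r, t):
    # TransE-style score: distance ||h + r - t||; lower means the
    # triple (h, r, t) is considered more plausible.
    return np.linalg.norm(emb[h] + emb[r] - emb[t])

def margin_loss(pos, neg, margin=1.0):
    # Contrastive objective: the positive triple should score at least
    # `margin` lower than the negative (corrupted) triple.
    return max(0.0, margin + score(*pos) - score(*neg))

loss = margin_loss(("paris", "capital_of", "france"),
                   ("berlin", "capital_of", "france"))
```

In practice the negative triple is produced by corrupting the head or tail of a positive one, and the embeddings are updated by gradient descent on this loss.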
We also present results from training multiple baseline models, adapted from current state-of-the-art Natural Language Generation (NLG) architectures, on our dataset.
Generating knowledge-grounded responses in both goal-oriented and non-goal-oriented dialogue systems is an important research challenge.
Non-goal-oriented generative dialogue systems lack the ability to generate factually grounded answers.