Generative Question Answering
11 papers with code • 2 benchmarks • 6 datasets
Latest papers with no code
Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models
Temporal knowledge graph question answering (TKGQA) poses a significant challenge, due to the temporal constraints hidden in questions and the answers that must be sought from dynamic structured knowledge.
Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Context Transfer
The present study introduces the knowledge-augmented generator, which is specifically designed to produce information that remains grounded in contextual knowledge, regardless of alterations in the context.
A Search for Prompts: Generating Structured Answers from Contracts
In many legal processes, being able to act on the concrete implication of a legal question can be valuable for automating human review or signalling certain conditions (e.g., alerts around automatic renewal).
Training Generative Question-Answering on Synthetic Data Obtained from an Instruct-tuned Model
This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems.
Retrieving Supporting Evidence for Generative Question Answering
After presenting a question to an LLM and receiving a generated answer, we query the corpus with the combination of the question + generated answer.
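The retrieval step described above can be sketched as follows. This is a minimal illustration, assuming a toy in-memory corpus and simple term-overlap scoring; a real system would use BM25 or dense retrieval, and `generated_answer` would come from an LLM rather than being hard-coded.

```python
def overlap_score(query_terms, doc):
    # Count how many query terms appear in the document (toy relevance score).
    return len(query_terms & set(doc.lower().split()))

def retrieve_evidence(question, generated_answer, corpus, k=1):
    # Query the corpus with the question and generated answer combined,
    # so terms from the answer also guide retrieval.
    query_terms = set((question + " " + generated_answer).lower().split())
    ranked = sorted(corpus, key=lambda d: overlap_score(query_terms, d),
                    reverse=True)
    return ranked[:k]

corpus = [
    "The Amazon rainforest spans nine countries in South America.",
    "Mount Everest is the highest mountain above sea level.",
    "The Pacific Ocean is the largest ocean on Earth.",
]
question = "Which ocean is the largest?"
generated_answer = "The Pacific Ocean is the largest ocean."
print(retrieve_evidence(question, generated_answer, corpus))
```

Combining the question with the generated answer enriches the query with answer-specific terms, which is what lets this approach find supporting evidence the question alone might miss.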
Benchmarks for Pirá 2.0, a Reading Comprehension Dataset about the Ocean, the Brazilian Coast, and Climate Change
By creating these baselines, researchers can more easily utilize Pirá as a resource for testing machine learning models across a wide range of question answering tasks.
Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering
The framework adapts a retrieval-augmented generation (RAG) model to the target domain using supervised fine-tuning and reinforcement learning with synthetic feedback in a few-shot setting.
Evaluation of medium-large Language Models at zero-shot closed book generative question answering
Large language models (LLMs) have garnered significant attention, but the definition of "large" lacks clarity.
Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering
In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.
Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems
Our personalized feedback can pinpoint correct, incorrect, or missing phrases in student answers, and can guide students towards the correct answer by asking a question in natural language.