Hallucination
340 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Hallucination models and implementations.

Most implemented papers
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning.
RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction.
Brain MRI Image Super Resolution using Phase Stretch Transform and Transfer Learning
A hallucination-free and computationally efficient algorithm for enhancing the resolution of brain MRI images is demonstrated.
Context-Patch Face Hallucination Based on Thresholding Locality-constrained Representation and Reproducing Learning
This study incorporates the contextual information of image patches and proposes a powerful and efficient context-patch-based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL).
HalluciNet-ing Spatiotemporal Representations Using a 2D-CNN
The hallucination task is treated as an auxiliary task, which can be combined with any other action-related task in a multitask learning setting.
3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior
We first propose a novel 3D sketch-aware feature embedding that explicitly encodes geometric information both effectively and efficiently.
BIGPrior: Towards Decoupling Learned Prior Hallucination and Data Fidelity in Image Restoration
Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods.
Detecting Hallucinated Content in Conditional Neural Sequence Generation
Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinating additional content not supported by the input.
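To make the notion of hallucinated content concrete, here is a minimal, self-contained sketch of token-level flagging in conditional generation. This is a toy heuristic for illustration only, not the method of any paper listed here: it marks output content words that never appear in the source, whereas real detectors train token-level classifiers over model representations. The stopword list and normalization are assumptions of this sketch.

```python
# Toy illustration: flag output tokens unsupported by the source text,
# a crude lexical proxy for hallucinated content in conditional generation.

def flag_unsupported_tokens(source: str, output: str, stopwords=None):
    """Return (token, is_flagged) pairs for each whitespace token in `output`.

    A token is flagged when it is a content word (not a stopword) that
    does not occur anywhere in the source.
    """
    # Small ad-hoc stopword list; a real system would use a proper one.
    stopwords = stopwords or {"the", "a", "an", "is", "was", "in",
                              "on", "of", "and", "to"}
    source_vocab = {t.lower().strip(".,") for t in source.split()}
    flags = []
    for tok in output.split():
        norm = tok.lower().strip(".,")
        flagged = norm not in source_vocab and norm not in stopwords
        flags.append((tok, flagged))
    return flags

source = "The cat sat on the mat."
output = "The cat sat on the purple mat in Paris."
hallucinated = [tok for tok, f in flag_unsupported_tokens(source, output) if f]
print(hallucinated)  # → ['purple', 'Paris.']
```

In practice such lexical overlap misses paraphrases and abstractive rewording, which is exactly why the papers above learn hallucination detectors rather than rely on surface matching.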
Projected Distribution Loss for Image Enhancement
More explicitly, we show that in imaging applications such as denoising, super-resolution, demosaicing, deblurring, and JPEG artifact removal, the proposed learning loss outperforms the current state-of-the-art reference-based perceptual losses.
A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation
Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.