Hallucination

340 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph

idea-finai/tog 15 Jul 2023

Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning.

RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

openbmb/omnilmm 1 Dec 2023

Multimodal Large Language Models (MLLMs) have recently demonstrated impressive capabilities in multimodal understanding, reasoning, and interaction.

Brain MRI Image Super Resolution using Phase Stretch Transform and Transfer Learning

JalaliLabUCLA/Jalali-Lab-Implementation-of-RAISR 31 Jul 2018

A hallucination-free and computationally efficient algorithm for enhancing the resolution of brain MRI images is demonstrated.

Context-Patch Face Hallucination Based on Thresholding Locality-constrained Representation and Reproducing Learning

junjun-jiang/Face-Hallucination-Benchmark 3 Sep 2018

To this end, this study incorporates the contextual information of image patches and proposes a powerful and efficient context-patch-based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing Learning (TLcR-RL).
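
The summary above compresses the core recipe: each low-resolution patch, together with its surrounding context, is expressed as a thresholded, locality-constrained combination of training patches at the same position, and the same combination weights are transferred to the high-resolution dictionary. Below is a minimal NumPy sketch of that reconstruction step; the threshold, ridge term, and all names are illustrative assumptions rather than the repository's API.

```python
# Illustrative sketch of position-patch face hallucination with a thresholded,
# locality-constrained linear representation. Names and parameters are assumptions.
import numpy as np

def hallucinate_patch(lr_patch, lr_dict, hr_dict, tau=0.5, lam=1e-3):
    """Reconstruct an HR patch from an LR patch (with context).

    lr_patch : (d,)    flattened LR patch at one face position
    lr_dict  : (N, d)  LR training patches at the same position
    hr_dict  : (N, D)  corresponding HR training patches
    tau      : quantile threshold selecting the local neighbourhood
    lam      : ridge regulariser for the combination weights
    """
    # Thresholding locality constraint: keep only sufficiently similar patches.
    dists = np.linalg.norm(lr_dict - lr_patch, axis=1)
    keep = dists <= np.quantile(dists, tau)
    A, B = lr_dict[keep], hr_dict[keep]

    # Solve min_w ||A^T w - lr_patch||^2 + lam ||w||^2 for combination weights.
    G = A @ A.T + lam * np.eye(A.shape[0])
    w = np.linalg.solve(G, A @ lr_patch)
    w /= w.sum()  # normalise so the weights sum to one

    # Apply the same weights to the HR dictionary to hallucinate the HR patch.
    return w @ B
```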

HalluciNet-ing Spatiotemporal Representations Using a 2D-CNN

ParitoshParmar/HalluciNet 10 Dec 2019

The hallucination task is treated as an auxiliary task, which can be used with any other action-related task in a multitask learning setting.
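
Concretely, "treated as an auxiliary task" means the 2D-CNN carries two heads: the usual action head and a second head whose output is regressed toward a spatiotemporal (3D-CNN) feature. The PyTorch-style sketch below illustrates such a multitask setup; layer sizes, the auxiliary weight, and all names are assumptions, not the repository's code.

```python
# Hedged sketch: a 2D-CNN with an action head plus an auxiliary head that
# "hallucinates" a 3D-CNN feature, trained with a weighted multitask loss.
import torch
import torch.nn as nn

class HallucinatingNet(nn.Module):
    def __init__(self, feat_dim=512, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in 2D-CNN backbone
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.action_head = nn.Linear(64, num_classes)   # main task
        self.halluc_head = nn.Linear(64, feat_dim)      # auxiliary task

    def forward(self, frame):
        h = self.backbone(frame)
        return self.action_head(h), self.halluc_head(h)

def multitask_loss(logits, halluc_feat, labels, target_3d_feat, aux_weight=0.5):
    # Main action loss + auxiliary loss matching the (frozen) 3D-CNN feature.
    main = nn.functional.cross_entropy(logits, labels)
    aux = nn.functional.mse_loss(halluc_feat, target_3d_feat)
    return main + aux_weight * aux
```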

3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior

charlesCXK/3D-SketchAware-SSC CVPR 2020

To this end, we first propose a novel 3D sketch-aware feature embedding to explicitly encode geometric information effectively and efficiently.

BIGPrior: Towards Decoupling Learned Prior Hallucination and Data Fidelity in Image Restoration

majedelhelou/BIGPrior 3 Nov 2020

Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods.
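
One way to read the "decoupling" in the title is a per-pixel fusion between an image hallucinated by generative-network inversion (the learned prior) and a branch tied to the degraded observation (data fidelity). The sketch below illustrates that reading under stated assumptions; the fusion map and module names are hypothetical, not the paper's code.

```python
# Hedged sketch: restored image as an explicit, per-pixel convex combination of
# a generative-prior image and a data-fidelity branch. All names are assumptions.
import torch
import torch.nn as nn

class DecoupledRestoration(nn.Module):
    def __init__(self):
        super().__init__()
        self.fidelity_branch = nn.Conv2d(3, 3, 3, padding=1)   # stand-in restorer
        self.fusion = nn.Sequential(                            # predicts phi in [0, 1]
            nn.Conv2d(6, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, degraded, prior_image):
        # prior_image: image obtained offline by inverting a generative network;
        # degraded: the observed low-quality input.
        fidelity = self.fidelity_branch(degraded)
        phi = self.fusion(torch.cat([degraded, prior_image], dim=1))
        # The convex combination makes the prior's contribution explicit per pixel.
        return phi * prior_image + (1 - phi) * fidelity
```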

Detecting Hallucinated Content in Conditional Neural Sequence Generation

violet-zct/fairseq-detect-hallucination Findings (ACL) 2021

Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinating additional content not supported by the input.
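
A common way to frame this detection problem is token-level classification: encode the source and the generated target together, then label each target token as supported or hallucinated. The sketch below illustrates that framing; the encoder, sizes, and label scheme are illustrative assumptions, not the fairseq-detect-hallucination implementation.

```python
# Hedged sketch of token-level hallucination detection as binary token
# classification over the generated target, conditioned on the source.
import torch
import torch.nn as nn

class TokenHallucinationDetector(nn.Module):
    def __init__(self, vocab_size=32000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(dim, 2)   # 0 = supported, 1 = hallucinated

    def forward(self, source_ids, target_ids):
        # Concatenate source and generated target so each target token can be
        # judged against the input it should be grounded in.
        x = torch.cat([source_ids, target_ids], dim=1)
        h = self.encoder(self.embed(x))
        target_states = h[:, source_ids.size(1):]   # keep only target positions
        return self.classifier(target_states)        # per-token logits
```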

Projected Distribution Loss for Image Enhancement

saurabh-kataria/projected-distribution-loss 16 Dec 2020

More explicitly, we show that in imaging applications such as denoising, super-resolution, demosaicing, deblurring and JPEG artifact removal, the proposed learning loss outperforms the current state-of-the-art on reference-based perceptual losses.
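
The loss here compares feature distributions rather than aligned pixels; one standard way to realise such a projected distribution distance is to project deep features onto 1D directions, sort each projection, and compare the sorted values (a sliced-Wasserstein-style loss). The sketch below illustrates that idea under stated assumptions; the projections and feature extractor are hypothetical, not the repository's exact loss.

```python
# Hedged sketch of a projected (sliced) distribution loss over deep features of
# the enhanced output and the reference image. All parameters are assumptions.
import torch

def projected_distribution_loss(feat_x, feat_y, num_projections=32):
    """feat_x, feat_y: (B, C, H, W) deep features of output and reference."""
    b, c, h, w = feat_x.shape
    x = feat_x.reshape(b, c, h * w)
    y = feat_y.reshape(b, c, h * w)

    # Random 1D projections of the channel dimension (sliced-Wasserstein style).
    proj = torch.randn(num_projections, c, device=feat_x.device)
    proj = proj / proj.norm(dim=1, keepdim=True)
    px = torch.einsum('pc,bcn->bpn', proj, x)
    py = torch.einsum('pc,bcn->bpn', proj, y)

    # Sorting each projection and taking L1 compares the marginal distributions.
    return (px.sort(dim=-1).values - py.sort(dim=-1).values).abs().mean()
```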

A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

microsoft/HaDes ACL 2022

Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.