Defocus Blur Detection
9 papers with code • 5 benchmarks • 3 datasets
Most implemented papers
Explicit Visual Prompting for Universal Foreground Segmentations
We take inspiration from the pre-training and prompt-tuning protocols widely used in NLP and propose a new visual prompting model, named Explicit Visual Prompting (EVP).
Defocus Blur Detection via Depth Distillation
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network at the same time.
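A joint objective of this kind can be sketched as a supervised blur-map loss plus a distillation term pulling student features toward a pre-trained depth network's output. This is a minimal numpy illustration, not the paper's exact formulation; the function name and the weight `lam` are assumptions.

```python
import numpy as np

def joint_distillation_loss(pred_blur, gt_blur, student_depth, teacher_depth, lam=0.5):
    """Illustrative joint objective for depth-distilled defocus blur detection.

    pred_blur / gt_blur: predicted and ground-truth blur maps in [0, 1].
    student_depth / teacher_depth: student features and the depth map
    distilled from a pre-trained monocular depth estimator.
    lam: hypothetical weight balancing the two terms.
    """
    eps = 1e-7
    # Supervised term: binary cross-entropy against the ground-truth blur map.
    bce = -np.mean(gt_blur * np.log(pred_blur + eps)
                   + (1 - gt_blur) * np.log(1 - pred_blur + eps))
    # Distillation term: mean-squared error against the teacher's depth output.
    mse = np.mean((student_depth - teacher_depth) ** 2)
    return bce + lam * mse
```

When the student matches the teacher exactly, the distillation term vanishes and only the supervised blur loss remains.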
Self-Generated Defocus Blur Detection via Dual Adversarial Discriminators
The core insight is that a defocus-blurred region (or a focused clear area) can be pasted arbitrarily into a realistic fully blurred image (or fully clear image) without changing whether that image is judged fully blurred (or fully clear).
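This copy-paste idea yields training pairs for free: pasting a blurred patch into a clear image produces an input whose pseudo ground-truth blur mask is exactly the pasted region. A minimal sketch, with an illustrative function name and numpy arrays standing in for images:

```python
import numpy as np

def paste_patch(full_image, patch, top, left):
    """Paste a patch (e.g. a defocus-blurred crop) into a copy of a fully
    clear (or fully blurred) image at (top, left), and return the composite
    together with a pseudo ground-truth mask of the pasted region."""
    out = full_image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    # The pseudo blur mask marks exactly where the patch was pasted.
    mask = np.zeros(full_image.shape[:2], dtype=np.uint8)
    mask[top:top + h, left:left + w] = 1
    return out, mask
```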
Distill-DBDGAN: Knowledge Distillation and Adversarial Learning Framework for Defocus Blur Detection
Defocus blur detection (DBD) aims to segment the blurred regions from a given image affected by defocus blur.
Explicit Visual Prompting for Low-Level Structure Segmentations
Different from previous visual prompting, which is typically a dataset-level implicit embedding, our key insight is to make the tunable parameters focus on the explicit visual content of each individual image, i.e., the features from frozen patch embeddings and the input's high-frequency components.
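One common way to isolate an image's high-frequency components is to suppress the low-frequency band in the Fourier domain and invert the transform. This is a crude sketch of that idea, assuming a grayscale numpy image; the `ratio` cutoff is an illustrative choice, not the paper's exact recipe.

```python
import numpy as np

def high_frequency_components(image, ratio=0.25):
    """Zero out a central low-frequency square of side ratio * min(H, W)
    in the shifted 2-D spectrum, then invert to recover the
    high-frequency content of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ch, cw = h // 2, w // 2
    r = int(min(h, w) * ratio) // 2
    f[ch - r:ch + r, cw - r:cw + r] = 0  # suppress low frequencies (incl. DC)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

A constant image has only a DC component, so its high-frequency part is all zeros, while a checkerboard pattern (pure Nyquist frequency) passes through with only its mean removed.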
Depth and DOF Cues Make A Better Defocus Blur Detector
We propose a depth feature distillation strategy to obtain depth knowledge from a pre-trained monocular depth estimation model, and a DOF-edge loss that captures the relationship between depth of field (DOF) and depth.
Are you sure it’s an artifact? Artifact detection and uncertainty quantification in histological images
We achieved 0.996 and 0.938 F1 scores for blur and folded tissue detection on unseen data, respectively.
Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs
We developed DL pipelines using two MoEs and two multiclass models of state-of-the-art deep convolutional neural networks (DCNNs) and vision transformers (ViTs).
FOCUS: Towards Universal Foreground Segmentation
Foreground segmentation is a fundamental task in computer vision, encompassing various subdivision tasks.