Patch Matching
32 papers with code • 2 benchmarks • 4 datasets
Most implemented papers
A Large Dataset for Improving Patch Matching
Similarly, on the Strecha dataset we see a 3–5% improvement on the matching task in non-planar scenes.
TS-Net: Combining modality specific and common features for multimodal patch matching
Multimodal patch matching addresses the problem of finding correspondences between image patches from two different modalities, e.g. RGB vs. sketch or RGB vs. near-infrared.
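At its core, finding such correspondences reduces to nearest-neighbour search between patch descriptors from the two modalities. A minimal sketch, assuming descriptors have already been extracted by some embedding network (the function name and toy data are illustrative, not from any of the papers above):

```python
import numpy as np

def match_patches(desc_a, desc_b):
    """Match each patch descriptor in desc_a to its nearest
    neighbour in desc_b by squared L2 distance."""
    # pairwise squared distances via broadcasting: (Na, Nb)
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# toy example: 4 patches per modality, 8-dim descriptors;
# modality B is a permuted, slightly noisy copy of modality A
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
b = a[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 8))
print(match_patches(a, b))  # recovers the permutation: [1 3 0 2]
```

In practice the descriptors come from modality-specific encoder branches (as in TS-Net's Siamese design), but the matching step itself stays this simple.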
CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping
Reference-based super-resolution (RefSR) super-resolves a low-resolution (LR) image given an external high-resolution (HR) reference image, where the reference and LR images share a similar viewpoint but differ by a significant resolution gap (8×).
Semi-Supervised Learning for Face Sketch Synthesis in the Wild
Instead of supervising the network with ground truth sketches, we first perform patch matching in feature space between the input photo and photos in a small reference set of photo-sketch pairs.
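The pseudo-supervision step described above can be sketched as a feature-space lookup: for each input-photo patch, retrieve the nearest reference-photo patch and use its paired sketch patch as the training target. The function name and toy data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pseudo_targets(photo_feats, ref_photo_feats, ref_sketch_patches):
    """For each input-photo patch feature, find the nearest
    reference-photo patch in feature space and return the paired
    sketch patch as a pseudo ground-truth target."""
    d2 = ((photo_feats[:, None, :] - ref_photo_feats[None, :, :]) ** 2).sum(-1)
    return ref_sketch_patches[d2.argmin(axis=1)]

# toy data: 3 reference photo-sketch pairs with 4-dim patch features
ref_photos = np.eye(3, 4)
ref_sketches = np.array([10, 20, 30])   # stand-ins for sketch patches
inputs = ref_photos[[2, 0]] + 0.01      # inputs near ref patches 2 and 0
print(pseudo_targets(inputs, ref_photos, ref_sketches))  # [30 10]
```

The retrieved sketch patches then stand in for ground-truth sketches when supervising the synthesis network.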
SOLAR: Second-Order Loss and Attention for Image Retrieval
One component focuses on second-order spatial information to improve the performance of both local and global image descriptors.
On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
In this paper we challenge the common assumption that convolutional layers in modern CNNs are translation invariant.
Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation
We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences and extra images to surpass state-of-the-art performance on core computer vision tasks.
HyNet: Learning Local Descriptor with Hybrid Similarity Measure and Triplet Loss
Recent works show that local descriptor learning benefits from L2 normalisation; however, the literature lacks an in-depth analysis of this effect.
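L2 normalisation projects descriptors onto the unit hypersphere, where squared L2 distance and cosine similarity become interchangeable: for unit vectors, ||x - y||² = 2 - 2⟨x, y⟩. A minimal sketch of this identity (illustrative only):

```python
import numpy as np

def l2_normalise(desc, eps=1e-8):
    """Project each descriptor onto the unit hypersphere."""
    return desc / (np.linalg.norm(desc, axis=-1, keepdims=True) + eps)

rng = np.random.default_rng(1)
x, y = l2_normalise(rng.normal(size=(2, 128)))

# for unit vectors: ||x - y||^2 == 2 - 2 * <x, y>
lhs = ((x - y) ** 2).sum()
rhs = 2 - 2 * (x @ y)
print(np.isclose(lhs, rhs))  # True
```

This equivalence is one reason normalised descriptors pair naturally with distance-based losses such as the triplet loss.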
Attention-Based Multimodal Image Matching
We propose an attention-based approach for multimodal image patch matching using a Transformer encoder attending to the feature maps of a multiscale Siamese CNN.
Patch Craft: Video Denoising by Deep Modeling and Patch Matching
Our algorithm augments video sequences with patch-craft frames and feeds them to a CNN.