Search Results for author: Hao-Yu Wu

Found 7 papers, 3 papers with code

Classification is a Strong Baseline for Deep Metric Learning

2 code implementations • 30 Nov 2018 • Andrew Zhai, Hao-Yu Wu

Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images.

Tasks: Binarization, Classification, +6
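The abstract above frames metric learning as learning an embedding function, and the paper's thesis is that a plain classification objective is a strong way to train such embeddings. Below is a minimal PyTorch sketch of that idea, assuming a ResNet-50 backbone; the embedding size and class count are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class ClassificationEmbedder(nn.Module):
    """Train an embedding with a plain softmax classification loss.

    At retrieval time, the L2-normalized embedding is compared with
    nearest-neighbor search; the classification head is discarded.
    """

    def __init__(self, num_classes: int, embed_dim: int = 128):
        super().__init__()
        backbone = resnet50(weights=None)  # use pretrained weights in practice
        backbone.fc = nn.Identity()        # expose the 2048-d pooled feature
        self.backbone = backbone
        self.embed = nn.Linear(2048, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes, bias=False)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        emb = F.normalize(self.embed(feats), dim=1)  # unit-norm embedding
        logits = self.classifier(emb)
        return emb, logits

model = ClassificationEmbedder(num_classes=1000)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))
emb, logits = model(images)
loss = F.cross_entropy(logits, labels)  # ordinary classification loss
```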

HighEr-Resolution Network for Image Demosaicing and Enhancing

1 code implementation • 19 Nov 2019 • Kangfu Mei, Juncheng Li, Jiajie Zhang, Hao-Yu Wu, Jie Li, Rui Huang

However, many studies have shown that global information is crucial for image restoration tasks such as image demosaicing and enhancement.

Tasks: Demosaicking
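The snippet above argues that global information matters for restoration. One common way to inject global context into a convolutional restoration network is a squeeze-and-excitation style channel-attention block; the sketch below is purely illustrative of that pattern, not the paper's HighEr-Resolution Network architecture.

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """Channel attention driven by globally pooled statistics (SE-style)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight local features using global information

x = torch.randn(2, 64, 128, 128)
print(GlobalContextBlock(64)(x).shape)  # torch.Size([2, 64, 128, 128])
```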

Large Scale Open-Set Deep Logo Detection

1 code implementation • 18 Nov 2019 • Muhammet Bastan, Hao-Yu Wu, Tian Cao, Bhargava Kota, Mehmet Tek

We present an open-set logo detection (OSLD) system, which can detect (localize and recognize) any number of unseen logo classes without re-training; it only requires a small set of canonical logo images for each logo class.

Tasks: Metric Learning
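Since OSLD recognizes unseen classes by matching detections against canonical logo images, the core inference step is embedding-space nearest-neighbor search. A minimal sketch follows, where `embedder` and the similarity threshold are hypothetical placeholders rather than the paper's released components:

```python
import torch
import torch.nn.functional as F

def recognize_logos(image_crops, canonical_embs, canonical_labels,
                    embedder, threshold=0.7):
    """Match detected logo crops to canonical classes; no re-training needed.

    image_crops:      tensor of detected logo regions, (N, 3, H, W)
    canonical_embs:   unit-norm embeddings of canonical logos, (M, D)
    canonical_labels: list of M class names
    embedder:         any network mapping crops to D-dim embeddings
    """
    embs = F.normalize(embedder(image_crops), dim=1)  # (N, D)
    sims = embs @ canonical_embs.T                    # cosine similarity
    scores, idx = sims.max(dim=1)
    results = []
    for s, i in zip(scores.tolist(), idx.tolist()):
        # Open-set behavior: reject matches below the similarity threshold
        results.append(canonical_labels[i] if s >= threshold else "unknown")
    return results
```

Adding a new logo class then amounts to appending its canonical embeddings to `canonical_embs`, with no re-training of the detector or embedder.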

Learning a Unified Embedding for Visual Search at Pinterest

no code implementations • 5 Aug 2019 • Andrew Zhai, Hao-Yu Wu, Eric Tzeng, Dong Huk Park, Charles Rosenberg

The solution we present not only allows us to train for multiple application objectives in a single deep neural network architecture, but also exploits correlated information across the combined training data of all applications, producing a unified embedding that outperforms every specialized embedding previously deployed for each product.

Tasks: Metric Learning, Navigate, +2
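The unified embedding described above is trained against several application objectives at once through one shared network. A minimal multi-task sketch, with hypothetical head names and a toy backbone (the paper's actual objectives and architecture differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedEmbedding(nn.Module):
    """One shared embedding, several task-specific heads trained jointly."""

    def __init__(self, embed_dim=256, n_shop=100, n_flash=100, n_lens=100):
        super().__init__()
        self.backbone = nn.Sequential(  # toy stand-in for a CNN backbone
            nn.Flatten(), nn.Linear(3 * 64 * 64, embed_dim), nn.ReLU(),
        )
        # One classification head per application objective (hypothetical)
        self.heads = nn.ModuleDict({
            "shop": nn.Linear(embed_dim, n_shop),
            "flashlight": nn.Linear(embed_dim, n_flash),
            "lens": nn.Linear(embed_dim, n_lens),
        })

    def forward(self, x):
        emb = F.normalize(self.backbone(x), dim=1)
        return emb, {k: h(emb) for k, h in self.heads.items()}

model = UnifiedEmbedding()
x = torch.randn(8, 3, 64, 64)
targets = {k: torch.randint(0, 100, (8,)) for k in ["shop", "flashlight", "lens"]}
emb, logits = model(x)
# Sum per-task losses; the shared embedding learns from all training data
loss = sum(F.cross_entropy(logits[k], targets[k]) for k in logits)
```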

Disentangle Perceptual Learning through Online Contrastive Learning

no code implementations • 24 Jun 2020 • Kangfu Mei, Yao Lu, Qiaosi Yi, Hao-Yu Wu, Juncheng Li, Rui Huang

Perceptual learning approaches such as perceptual loss are empirically powerful for such tasks, but they usually rely on a pre-trained classification network to provide features, which are not necessarily optimal for the visual perception of image transformations.

Tasks: Contrastive Learning, Feature Selection
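For reference, the standard perceptual loss the abstract contrasts with compares images in the feature space of a fixed pre-trained classifier. A minimal sketch using torchvision's VGG-16 features; the cutoff layer is an illustrative choice, not a value from the paper:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    """L2 distance between fixed VGG-16 features of output and target."""

    def __init__(self, layer_index: int = 16):  # up to relu3_3, illustrative
        super().__init__()
        features = vgg16(weights=VGG16_Weights.DEFAULT).features
        self.extractor = nn.Sequential(*list(features[:layer_index]))
        for p in self.extractor.parameters():
            p.requires_grad = False  # the classification network stays frozen

    def forward(self, output: torch.Tensor, target: torch.Tensor):
        return nn.functional.mse_loss(self.extractor(output),
                                      self.extractor(target))
```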

Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations

no code implementations • 12 Aug 2021 • Josh Beal, Hao-Yu Wu, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk

Large-scale pretraining of visual representations has led to state-of-the-art performance on a range of benchmark computer vision tasks, yet the benefits of these techniques at extreme scale in complex production systems have been relatively unexplored.

Ranked #26 on Image Classification on ObjectNet (using extra training data)

Tasks: Image Classification, Multi-Task Learning, +2
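This last entry concerns reusing one large pretrained Vision Transformer across multiple tasks. A minimal sketch of that transfer pattern with torchvision's ViT-B/16 as a small stand-in (the paper pretrains far larger models on billions of images, and the task heads here are hypothetical):

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load a pretrained ViT and strip its classification head to expose features
backbone = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
backbone.heads = nn.Identity()  # forward now returns the 768-d class token

# Attach lightweight heads for several downstream tasks (hypothetical sizes)
task_heads = nn.ModuleDict({
    "image_classification": nn.Linear(768, 1000),
    "interest_prediction": nn.Linear(768, 32),
})

images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    rep = backbone(images)  # shared visual representation for all tasks
outputs = {name: head(rep) for name, head in task_heads.items()}
```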
