Search Results for author: Erkun Yang

Found 9 papers, 2 papers with code

Learning with Diversity: Self-Expanded Equalization for Better Generalized Deep Metric Learning

no code implementations ICCV 2023 Jiexi Yan, Zhihui Yin, Erkun Yang, Yanhua Yang, Heng Huang

Most existing DML methods focus on improving model robustness against category shift so as to maintain performance on unseen categories.

Metric Learning

Mutual Quantization for Cross-Modal Search With Noisy Labels

no code implementations CVPR 2022 Erkun Yang, Dongren Yao, Tongliang Liu, Cheng Deng

More specifically, we propose a proxy-based contrastive (PC) loss to mitigate the gap between different modalities, and we jointly train the networks for the different modalities on small-loss samples that are selected using the PC loss and a mutual quantization loss.

Quantization
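The abstract only names the ingredients; as a rough PyTorch sketch of the general idea (the function names, temperature, and keep ratio below are illustrative assumptions, not details taken from the paper), a proxy-based contrastive loss with small-loss selection could look like this:

```python
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, labels, proxies, temperature=0.1):
    """Score each embedding against learnable class proxies, so that
    image and text embeddings are both pulled toward shared proxies
    rather than contrasted against possibly mislabeled samples.

    features: (B, D) L2-normalized embeddings from one modality.
    labels:   (B,)   class indices (possibly noisy).
    proxies:  (C, D) learnable class proxies shared across modalities.
    Returns per-sample losses (B,), as needed for small-loss selection.
    """
    logits = features @ F.normalize(proxies, dim=1).t() / temperature
    return F.cross_entropy(logits, labels, reduction="none")

def select_small_loss(per_sample_loss, keep_ratio=0.7):
    """Keep the keep_ratio fraction of samples with the smallest loss;
    by the memorization effect these are most likely to be clean."""
    k = int(keep_ratio * per_sample_loss.numel())
    return torch.argsort(per_sample_loss)[:k]
```

In a joint training loop, the per-sample losses from each modality would pick the small-loss subset on which both modality networks, together with a mutual quantization objective, are then updated.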

Understanding and Improving Early Stopping for Learning with Noisy Labels

1 code implementation NeurIPS 2021 Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu

Instead of early stopping, which trains the whole DNN all at once, we initially train the former DNN layers by optimizing the DNN for a relatively large number of epochs.

Learning with noisy labels Memorization
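A minimal sketch of that progressive idea (the stage split, the epoch counts, and the run_epochs helper are hypothetical, not the paper's exact schedule):

```python
from torch import nn

def progressive_early_stopping(stages: list[nn.Module], run_epochs,
                               epoch_schedule=(25, 7, 5)):
    """Train a DNN split into sequential stages: the former layers are
    fitted first with many epochs, then frozen while each later stage
    is trained for progressively fewer epochs, since deeper layers fit
    noisy labels faster.

    run_epochs(params, n) is a hypothetical helper that runs n epochs
    of ordinary training, updating only the given parameters.
    """
    assert len(stages) == len(epoch_schedule)
    for i, n_epochs in enumerate(epoch_schedule):
        for j, block in enumerate(stages):
            for p in block.parameters():
                p.requires_grad = (j == i)  # only the current stage updates
        run_epochs(list(stages[i].parameters()), n_epochs)
```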

Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network

no code implementations 27 May 2021 Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu

Motivated by the observation that classifiers mostly output Bayes optimal labels for prediction, in this paper we study how to directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix, BLTM) and learn a classifier to predict Bayes optimal labels.
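A minimal PyTorch sketch of an instance-dependent transition network (the architecture and hidden size are chosen for illustration; the paper's parametrization and training procedure may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionNet(nn.Module):
    """Map an instance's features to a row-stochastic C x C matrix T(x),
    where T(x)[i, j] approximates the probability that Bayes optimal
    label i is observed as noisy label j for this particular instance."""

    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes * num_classes),
        )

    def forward(self, x):
        logits = self.net(x).view(-1, self.num_classes, self.num_classes)
        return F.softmax(logits, dim=2)  # each row sums to 1

def noisy_posterior(clean_posterior, T):
    """P(noisy = j | x) = sum_i P(Bayes = i | x) * T(x)[i, j]."""
    return torch.bmm(clean_posterior.unsqueeze(1), T).squeeze(1)
```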

Learning With Privileged Tasks

no code implementations ICCV 2021 Yuru Song, Zan Lou, Shan You, Erkun Yang, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang

Concretely, we introduce a privileged parameter so that the optimization direction does not necessarily follow the gradient from the privileged tasks, but concentrates more on the target tasks.

Multi-Task Learning
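A minimal sketch of that gating idea in PyTorch; the fixed scalar alpha stands in for the privileged parameter and is purely illustrative (the paper's parameter and update rule may differ):

```python
import torch

def gated_shared_update(shared_params, target_loss, privileged_loss, alpha=0.3):
    """Write gradients for the shared parameters as the target-task
    gradient plus an alpha-scaled privileged-task gradient, so the
    privileged tasks guide training without dominating the direction."""
    g_target = torch.autograd.grad(target_loss, shared_params, retain_graph=True)
    g_priv = torch.autograd.grad(privileged_loss, shared_params)
    for p, gt, gp in zip(shared_params, g_target, g_priv):
        p.grad = gt + alpha * gp  # optimizer.step() then applies the update
```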

DistillHash: Unsupervised Deep Hashing by Distilling Data Pairs

no code implementations CVPR 2019 Erkun Yang, Tongliang Liu, Cheng Deng, Wei Liu, DaCheng Tao

To address this issue, we propose a novel deep unsupervised hashing model, dubbed DistillHash, which learns a distilled data set consisting of data pairs with confident similarity signals.

Deep Hashing Semantic Similarity +1
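As a rough illustration of distilling confident pairs (the cosine-similarity thresholds are assumptions for this sketch; DistillHash derives its confidence signals through its own analysis, which this simple threshold rule only approximates):

```python
import torch
import torch.nn.functional as F

def distill_confident_pairs(features, sim_thresh=0.8, dissim_thresh=0.2):
    """Keep only pairs whose feature cosine similarity is high enough to
    call 'similar' or low enough to call 'dissimilar'; ambiguous pairs
    in between are dropped from the distilled training set."""
    normed = F.normalize(features, dim=1)
    sims = normed @ normed.t()
    i, j = torch.triu_indices(len(features), len(features), offset=1)
    pair_sims = sims[i, j]
    similar = (i[pair_sims >= sim_thresh], j[pair_sims >= sim_thresh])
    dissimilar = (i[pair_sims <= dissim_thresh], j[pair_sims <= dissim_thresh])
    return similar, dissimilar
```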

Shared Predictive Cross-Modal Deep Quantization

no code implementations16 Apr 2019 Erkun Yang, Cheng Deng, Chao Li, Wei Liu, Jie Li, DaCheng Tao

In this paper, we propose a deep quantization approach, which is among the early attempts to leverage deep neural networks for quantization-based cross-modal similarity search.

Quantization

Semantic Structure-based Unsupervised Deep Hashing

1 code implementation IJCAI 2018 Erkun Yang, Cheng Deng, Tongliang Liu, Wei Liu, DaCheng Tao

Hashing is becoming increasingly popular for approximate nearest neighbor search in massive databases due to its storage and search efficiency.

Deep Hashing Semantic Similarity +1
