Search Results for author: Mengyang Yu

Found 10 papers, 4 papers with code

Auto-Encoding Twin-Bottleneck Hashing

2 code implementations CVPR 2020 Yuming Shen, Jie Qin, Jiaxin Chen, Mengyang Yu, Li Liu, Fan Zhu, Fumin Shen, Ling Shao

One bottleneck (i.e., binary codes) conveys the high-level intrinsic data structure captured by the code-driven graph to the other (i.e., continuous variables for low-level detail information), which in turn propagates the updated network feedback for the encoder to learn more discriminative binary codes.

Graph Construction · Retrieval
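As a rough illustration of the abstract's "code-driven graph" (a sketch, not the paper's actual construction — the function name and the 1-minus-normalized-Hamming weighting are assumptions):

```python
import numpy as np

def code_driven_graph(codes: np.ndarray) -> np.ndarray:
    """Build a similarity graph over binary codes (entries in {0, 1}).

    Edge weight is 1 - normalized Hamming distance, so identical codes
    get weight 1.0 and maximally different codes get weight 0.0.
    """
    n, bits = codes.shape
    # Pairwise Hamming distance: count of differing bits for every pair.
    dist = (codes[:, None, :] != codes[None, :, :]).sum(axis=2)
    return 1.0 - dist / bits

codes = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 1],
                  [1, 0, 0, 1]])
adj = code_driven_graph(codes)  # symmetric (3, 3) adjacency matrix
```

In the paper this graph couples the binary bottleneck to the continuous one; here it only shows how binary codes induce a pairwise similarity structure.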

Two Generator Game: Learning to Sample via Linear Goodness-of-Fit Test

no code implementations NeurIPS 2019 Lizhong Ding, Mengyang Yu, Li Liu, Fan Zhu, Yong Liu, Yu Li, Ling Shao

DEAN can be interpreted as a GOF game between two generative networks, where one explicit generative network learns an energy-based distribution that fits the real data, and the other implicit generative network is trained by minimizing a GOF test statistic between the energy-based distribution and the generated data, such that the underlying distribution of the generated data is close to the energy-based distribution.
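The paper's GOF statistic is a linear-time goodness-of-fit test; as a stand-in, here is a generic linear-time kernel two-sample statistic (a streaming MMD estimator) that captures the same idea of a cheap discrepancy between real and generated samples. All names and the RBF kernel choice are assumptions, not DEAN's actual objective:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel evaluated row-wise on paired samples."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def linear_time_mmd(x, y, gamma=1.0):
    """Linear-time kernel two-sample statistic (streaming MMD).

    Consecutive rows are paired up, so the estimate needs only O(m)
    kernel evaluations instead of O(m^2); it is near zero when x and
    y come from the same distribution and grows as they diverge.
    """
    m = (min(len(x), len(y)) // 2) * 2
    x1, x2 = x[0:m:2], x[1:m:2]
    y1, y2 = y[0:m:2], y[1:m:2]
    h = rbf(x1, x2, gamma) + rbf(y1, y2, gamma) \
        - rbf(x1, y2, gamma) - rbf(x2, y1, gamma)
    return h.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(400, 2))   # stand-in for real data
close = rng.normal(0.0, 1.0, size=(400, 2))  # generator near the target
far = rng.normal(5.0, 1.0, size=(400, 2))    # generator far from it
mmd_close = linear_time_mmd(real, close)
mmd_far = linear_time_mmd(real, far)
```

A generator trained by minimizing such a statistic is pushed toward the target distribution, which is the role the GOF test plays in the two-generator game.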

Fast Large-Scale Discrete Optimization Based on Principal Coordinate Descent

no code implementations 16 Sep 2019 Huan Xiong, Mengyang Yu, Li Liu, Fan Zhu, Fumin Shen, Ling Shao

Binary optimization, a representative subclass of discrete optimization, plays an important role in mathematical optimization and has various applications in computer vision and machine learning.

Quantization
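For a concrete picture of binary optimization, here is a plain one-variable-at-a-time coordinate descent over {-1, +1}^n for a quadratic objective — a baseline sketch only; the paper's method descends along principal coordinates, which this does not implement:

```python
import numpy as np

def binary_coordinate_descent(A, b, x, sweeps=20):
    """Greedy coordinate descent for f(x) = x^T A x + b^T x, x in {-1,+1}^n.

    Flips one coordinate at a time whenever the flip lowers the
    objective, stopping after a full sweep with no improvement.
    """
    f = lambda v: v @ A @ v + b @ v
    n = len(x)
    for _ in range(sweeps):
        improved = False
        for i in range(n):
            x_new = x.copy()
            x_new[i] = -x_new[i]       # try flipping bit i
            if f(x_new) < f(x):
                x, improved = x_new, True
        if not improved:
            break
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
A = (M + M.T) / 2                      # symmetric quadratic term
b = rng.normal(size=8)
x0 = np.ones(8)
x_opt = binary_coordinate_descent(A, b, x0)
```

Each flip is an O(n) update here; exploiting structure in A (as the paper does at scale) is what makes such schemes fast on large problems.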

STAR: A Structure and Texture Aware Retinex Model

1 code implementation 16 Jun 2019 Jun Xu, Yingkun Hou, Dongwei Ren, Li Liu, Fan Zhu, Mengyang Yu, Haoqian Wang, Ling Shao

A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image.

Low-Light Image Enhancement
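A toy Retinex-style decomposition, assuming the multiplicative model S = L * R with a mean filter as the illumination estimator — a crude stand-in for STAR's structure/texture-aware weighting, with all names illustrative:

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter with edge padding; a crude illumination estimator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_decompose(img, k=5, eps=1e-6):
    """Split an image S into illumination L and reflectance R, S = L * R.

    L is a smoothed version of S (structure); R = S / L carries the
    remaining detail (texture).
    """
    L = box_blur(img.astype(float), k)
    R = img / (L + eps)
    return L, R

rng = np.random.default_rng(0)
img = rng.uniform(0.1, 1.0, size=(16, 16))  # synthetic single-channel image
L, R = retinex_decompose(img)
```

Once L and R are separated, low-light enhancement typically brightens L and recombines, which is why the decomposition quality matters.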

Generative Domain-Migration Hashing for Sketch-to-Image Retrieval

1 code implementation ECCV 2018 Jingyi Zhang, Fumin Shen, Li Liu, Fan Zhu, Mengyang Yu, Ling Shao, Heng Tao Shen, Luc van Gool

The generative model learns a mapping under which the distribution of sketches becomes indistinguishable from the distribution of natural images via an adversarial loss, and simultaneously learns an inverse mapping based on a cycle consistency loss in order to enhance the indistinguishability.

Multi-Task Learning · Retrieval +1
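The cycle consistency loss mentioned above can be sketched in a few lines — here with toy scalar maps standing in for the sketch-to-image and image-to-sketch networks (the function names and maps are illustrative):

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    """L1 cycle loss: how far x drifts after mapping to the other
    domain (g) and back (f). Zero iff f inverts g on x."""
    return np.abs(f(g(x)) - x).mean()

g = lambda x: 2.0 * x + 1.0        # toy "sketch -> image" map
f_inv = lambda y: (y - 1.0) / 2.0  # its exact inverse
f_bad = lambda y: y                # a map that ignores the cycle
x = np.linspace(-1.0, 1.0, 5)
```

Minimizing this loss alongside the adversarial loss ties the two mappings together so that domain migration preserves content.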

Scaled Simplex Representation for Subspace Clustering

3 code implementations 26 Jul 2018 Jun Xu, Mengyang Yu, Ling Shao, Wangmeng Zuo, Deyu Meng, Lei Zhang, David Zhang

However, the negative entries in the coefficient matrix are forced to be positive when constructing the affinity matrix via exponentiation, absolute symmetrization, or squaring operations.

Clustering
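The issue the abstract raises — negative coefficients being forced positive when building the affinity matrix — is easy to demonstrate with absolute symmetrization, one of the three operations it names (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def affinity_abs_sym(C):
    """Absolute symmetrization: A = (|C| + |C|^T) / 2.

    A negative coefficient C[i, j] contributes the same affinity as a
    positive one of equal magnitude, so the sign information is lost --
    the problem a nonnegative (simplex) representation avoids.
    """
    return (np.abs(C) + np.abs(C).T) / 2

C = np.array([[0.0, 0.6, -0.6],
              [0.5, 0.0, 0.2],
              [-0.6, 0.2, 0.0]])
A = affinity_abs_sym(C)
```

Note that `affinity_abs_sym(C)` and `affinity_abs_sym(-C)` are identical, which is exactly the sign ambiguity the scaled simplex representation removes by constraining coefficients to be nonnegative.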

Discretely Coding Semantic Rank Orders for Supervised Image Hashing

no code implementations CVPR 2017 Li Liu, Ling Shao, Fumin Shen, Mengyang Yu

Learning to hash has been recognized to accomplish highly efficient storage and retrieval for large-scale visual data.

Retrieval · Word Embeddings

Projection Bank: From High-dimensional Data to Medium-length Binary Codes

no code implementations ICCV 2015 Li Liu, Mengyang Yu, Ling Shao

Recently, very high-dimensional feature representations, e.g., Fisher Vector, have achieved excellent performance for visual recognition and retrieval.

Computational Efficiency · Retrieval +1
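A minimal sketch of the high-dimensional-to-binary step, assuming a bank of linear projections followed by sign thresholding (the simplest such scheme; the paper learns its projections rather than drawing them at random as here):

```python
import numpy as np

def project_to_binary(x, W):
    """Map high-dimensional features to short binary codes.

    Each column of W is one projection direction; thresholding the
    projected value at zero yields one bit per direction.
    """
    return (x @ W > 0).astype(np.uint8)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 128))   # four high-dimensional features
W = rng.normal(size=(128, 16))  # bank of 16 projection directions
codes = project_to_binary(x, W)  # (4, 16) medium-length binary codes
```

The 128-dimensional inputs here stand in for features like the Fisher Vector, whose real dimensionality runs into the tens of thousands.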

Kernelized Multiview Projection

no code implementations3 Aug 2015 Mengyang Yu, Li Liu, Ling Shao

Conventional vision algorithms adopt either a single type of feature or a simple concatenation of multiple features, which is invariably represented in a high-dimensional space.

Supervised Descriptor Learning for Multi-Output Regression

no code implementations CVPR 2015 Xiantong Zhen, Zhijie Wang, Mengyang Yu, Shuo Li

In this paper, we propose a novel supervised descriptor learning (SDL) algorithm to establish a discriminative and compact feature representation for multi-output regression.

Head Pose Estimation · Regression
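The multi-output regression setting the learned descriptors feed into can be sketched with closed-form ridge regression mapping one feature matrix to all outputs jointly (the basic setting only — SDL's descriptor learning itself is not shown, and all names are illustrative):

```python
import numpy as np

def multi_output_ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression with a vector-valued target.

    One weight matrix maps descriptors X (n, d) to all k outputs
    Y (n, k) jointly: W = (X^T X + lam I)^{-1} X^T Y.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
W_true = rng.normal(size=(6, 3))  # three outputs, e.g. pose angles
Y = X @ W_true                    # noiseless synthetic targets
W = multi_output_ridge(X, Y, lam=1e-8)
```

A discriminative, compact descriptor is what makes such a simple regressor accurate, which is the motivation for learning the representation jointly with the regression task.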
