no code implementations • 10 Jan 2022 • Ansong Li, Zhiyong Cheng, Fan Liu, Zan Gao, Weili Guan, Yuxin Peng
The session embedding is then generated by aggregating the item embeddings with attention weights of each item's factors.
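The attention-based aggregation described above can be sketched as follows; the array shapes, the softmax attention, and the query vector are illustrative assumptions, not details from the paper:

```python
import numpy as np

def session_embedding(item_factors, query):
    """Aggregate per-item factor embeddings into a session embedding via
    softmax attention (illustrative sketch, not the paper's code).

    item_factors: (n_items, n_factors, dim) factor embeddings of each item
    query: (dim,) attention query vector (learned in a real model)
    """
    scores = item_factors @ query                 # (n_items, n_factors) attention scores
    flat_scores = scores.reshape(-1)              # flatten over items and factors
    weights = np.exp(flat_scores - flat_scores.max())
    weights /= weights.sum()                      # softmax attention weights
    flat = item_factors.reshape(-1, item_factors.shape[-1])
    return weights @ flat                         # (dim,) weighted sum = session embedding
```

In a trained model the query would itself depend on the session context; here it is fixed only to keep the sketch self-contained.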
1 code implementation • CVPR 2022 • Minghang Zheng, Yanjie Huang, Qingchao Chen, Yuxin Peng, Yang Liu
Moreover, they train their model to distinguish positive visual-language pairs from negative ones randomly collected from other videos, ignoring the highly confusing video segments within the same video.
no code implementations • 11 Nov 2021 • Xiu-Shen Wei, Yi-Zhe Song, Oisin Mac Aodha, Jianxin Wu, Yuxin Peng, Jinhui Tang, Jian Yang, Serge Belongie
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications.
1 code implementation • 10 Jul 2019 • Xiangteng He, Yuxin Peng, Liu Xie
To the best of our knowledge, it is the first benchmark with 4 media types for fine-grained cross-media retrieval.
no code implementations • CVPR 2019 • Junchao Zhang, Yuxin Peng
The main novelties and advantages are: (1) Bidirectional temporal graph: A bidirectional temporal graph is constructed both along and against the temporal order, which provides complementary ways to capture the temporal trajectory of each salient object.
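The core of such a bidirectional graph is a pair of adjacency matrices, one following the temporal order and one reversing it. A minimal sketch (the paper links salient-object regions across frames; here plain frame indices and a window size stand in for that):

```python
import numpy as np

def bidirectional_temporal_graph(n_frames, window=1):
    """Build forward and backward temporal adjacency matrices over frames
    (illustrative sketch; real nodes would be detected object regions)."""
    fwd = np.zeros((n_frames, n_frames))
    for t in range(n_frames):
        for dt in range(1, window + 1):
            if t + dt < n_frames:
                fwd[t, t + dt] = 1.0   # edge along the temporal order
    bwd = fwd.T.copy()                 # edges against the temporal order
    return fwd, bwd
```

Aggregating node features over `fwd` and `bwd` separately yields the two complementary temporal views.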
no code implementations • 21 Aug 2018 • Mingkuan Yuan, Yuxin Peng
To address these problems, we exploit the excellent capability of generic discriminative models (e.g., VGG-19), which can guide the training process of a new generative model on multiple levels to bridge the two gaps.
no code implementations • 26 Apr 2018 • Chenrui Zhang, Yuxin Peng
Video representation learning is a vital problem for classification tasks.
no code implementations • 26 Apr 2018 • Chenrui Zhang, Yuxin Peng
First, we propose multi-level semantic inference to boost video feature synthesis, which captures the discriminative information implied in joint visual-semantic distribution via feature-level and label-level semantic inference.
1 code implementation • 25 Apr 2018 • Jinwei Qi, Yuxin Peng, Yuxin Yuan
First, we propose a visual-language relation attention model to explore both fine-grained patches and their relations across different media types.
no code implementations • CVPR 2018 • Xin Huang, Yuxin Peng
To achieve this goal, this paper proposes a deep cross-media knowledge transfer (DCKT) approach, which transfers knowledge from a large-scale cross-media dataset to promote model training on another, small-scale cross-media dataset.
no code implementations • 7 Feb 2018 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
(2) They ignore the rich information contained in the large amount of unlabeled data across different modalities, especially the margin examples that are easily retrieved incorrectly, which can help to model the correlations.
no code implementations • 7 Feb 2018 • Yuxin Peng, Jian Zhang, Zhaoda Ye
Inspired by the sequential decision ability of deep reinforcement learning, we propose a new Deep Reinforcement Learning approach for Image Hashing (DRLIH).
no code implementations • 1 Dec 2017 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
To address the above problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data.
no code implementations • 9 Nov 2017 • Yuxin Peng, Yunzhen Zhao, Junchao Zhang
Recently, researchers have generally adopted deep networks to capture static and motion information separately, which mainly has two limitations: (1) Ignoring the coexistence relationship between spatial and temporal attention, which should be jointly modelled as the spatial and temporal evolutions of video so that discriminative video features can be extracted.
no code implementations • 14 Oct 2017 • Yuxin Peng, Jinwei Qi, Yuxin Yuan
They can not only exploit cross-modal correlation for learning common representation, but also preserve reconstruction information for capturing semantic consistency within each modality.
no code implementations • 30 Sep 2017 • Xiangteng He, Yuxin Peng, Junjie Zhao
Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: (1) An n-pathway end-to-end discriminative localization network is designed to improve classification speed: it simultaneously localizes multiple different discriminative regions in one image to boost classification accuracy, and shares the full-image convolutional features generated by the region proposal network to accelerate region-proposal generation and reduce the cost of the convolutional operations.
no code implementations • 25 Sep 2017 • Xiangteng He, Yuxin Peng, Junjie Zhao
Existing methods generally adopt a two-stage learning framework: The first stage is to localize the discriminative regions of objects, and the second is to encode the discriminative features for training classifiers.
1 code implementation • 31 Aug 2017 • Xiangteng He, Yuxin Peng
As is well known, when we describe the object of an image via textual descriptions, we mainly focus on its pivotal characteristics and rarely pay attention to common characteristics or the background areas.
1 code implementation • 16 Aug 2017 • Yuxin Peng, Jinwei Qi, Yuxin Yuan
Effectively measuring the similarity between different modalities of data is the key to cross-modal retrieval.
no code implementations • 8 Aug 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan
Transfer learning aims to relieve the problem of insufficient training data, but it mainly focuses on transferring knowledge from a large-scale single-modal source domain to a single-modal target domain.
no code implementations • CVPR 2017 • Xiangteng He, Yuxin Peng
Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy.
no code implementations • 1 Jun 2017 • Xin Huang, Yuxin Peng, Mingkuan Yuan
Knowledge in the source domain cannot be directly transferred to both of the two different modalities in the target domain, and the inherent cross-modal correlation contained in the target domain provides key hints for cross-modal retrieval, which should be preserved during the transfer process.
no code implementations • 14 Apr 2017 • Jinwei Qi, Xin Huang, Yuxin Peng
Motivated by the strong ability of deep neural networks to learn feature representations and comparison functions, we propose the Unified Network for Cross-media Similarity Metric (UNCSM) to associate cross-media shared representation learning with distance metric learning in a unified framework.
no code implementations • 10 Apr 2017 • Xiangteng He, Yuxin Peng
Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy.
1 code implementation • 6 Apr 2017 • Yuxin Peng, Xiangteng He, Junjie Zhao
Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories.
no code implementations • 23 Mar 2017 • Yunzhen Zhao, Yuxin Peng
Then two streams of 3D CNN are trained individually for raw frames and optical flow on salient areas, and another 2D CNN is trained for raw frames on non-salient areas.
no code implementations • 21 Mar 2017 • Xin Huang, Yuxin Peng
The quadruplet ranking loss can model the semantically similar and dissimilar constraints to preserve cross-modal relative similarity ranking information.
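A quadruplet ranking loss of this kind can be sketched with a common margin-based formulation; the squared-Euclidean distance and the two margin values below are illustrative assumptions, not necessarily the paper's exact loss:

```python
import numpy as np

def quadruplet_ranking_loss(a, p, n1, n2, margin1=1.0, margin2=0.5):
    """Margin-based quadruplet ranking loss (illustrative sketch):
    push the similar pair (a, p) closer than the dissimilar pair (a, n1)
    by margin1, and also closer than the negative pair (n1, n2) by
    margin2, preserving relative similarity ranking."""
    d = lambda x, y: np.sum((x - y) ** 2)           # squared Euclidean distance
    loss = max(0.0, d(a, p) - d(a, n1) + margin1)   # anchor-relative ranking term
    loss += max(0.0, d(a, p) - d(n1, n2) + margin2) # cross-pair ranking term
    return loss
```

The second term is what distinguishes a quadruplet loss from a triplet loss: it also constrains pairs that do not share the anchor.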
no code implementations • 8 Dec 2016 • Jian Zhang, Yuxin Peng
On the other hand, different hash bits actually contribute to image retrieval differently, and treating them equally greatly affects retrieval accuracy.
no code implementations • 28 Jul 2016 • Jian Zhang, Yuxin Peng
(2) A semi-supervised deep hashing network is designed to extensively exploit both labeled and unlabeled data, in which we propose an online graph construction method to benefit from the evolving deep features during training to better capture semantic neighbors.
no code implementations • CVPR 2015 • Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, Zheng Zhang
Our pipeline integrates three types of attention: the bottom-up attention that proposes candidate patches, the object-level top-down attention that selects the patches relevant to a certain object, and the part-level top-down attention that localizes discriminative parts.
1 code implementation • 22 Sep 2011 • Zhiwu Lu, Horace H. S. Ip, Yuxin Peng
This paper presents a novel pairwise constraint propagation approach by decomposing the challenging constraint propagation problem into a set of independent semi-supervised learning subproblems which can be solved in quadratic time using label propagation based on k-nearest neighbor graphs.
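Each such subproblem reduces to standard label propagation on a k-NN graph. A minimal sketch of that building block, using Zhou-style normalized propagation (the damping factor `alpha` and iteration count are illustrative choices, not values from the paper):

```python
import numpy as np

def label_propagation(W, y, alpha=0.9, iters=50):
    """Label propagation on a k-NN affinity graph (illustrative sketch).

    W: (n, n) symmetric affinity matrix (k-NN graph weights)
    y: (n, c) initial label matrix, with all-zero rows for unlabeled points
    """
    d = W.sum(axis=1)
    d[d == 0] = 1.0                       # guard against isolated nodes
    S = W / np.sqrt(np.outer(d, d))       # symmetric normalization D^-1/2 W D^-1/2
    F = y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * y   # propagate, anchored to initial labels
    return F
```

Each iteration is a sparse matrix-vector product, which is where the quadratic (in n) per-subproblem cost mentioned above comes from.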