no code implementations • 3 Dec 2023 • Eashan Adhikarla, Kai Zhang, Jun Yu, Lichao Sun, John Nicholson, Brian D. Davison
As a result, this raises concerns about the overall robustness of machine learning techniques used in computer vision applications deployed publicly for consumers.
no code implementations • 30 Nov 2023 • Jianjian Qin, Chunzhi Gu, Jun Yu, Chao Zhang
To fully exploit saliency guidance, on each map, we select a pixel pair from the cluster with the highest centroid saliency to form a patch pair.
no code implementations • 7 Nov 2023 • Jianjian Qin, Chunzhi Gu, Jun Yu, Chao Zhang
We present PD-REAL, a novel large-scale dataset for unsupervised anomaly detection (AD) in the 3D domain.
1 code implementation • NeurIPS 2023 • Zhuo Huang, Li Shen, Jun Yu, Bo Han, Tongliang Liu
Therefore, the label guidance on labeled data is hard to propagate to unlabeled data.
no code implementations • 25 Oct 2023 • Zhuo Huang, Muyang Li, Li Shen, Jun Yu, Chen Gong, Bo Han, Tongliang Liu
By fully exploring both variant and invariant parameters, our EVIL can effectively identify a robust subnetwork to improve OOD generalization.
no code implementations • 1 Oct 2023 • Chaojian Yu, Xiaolong Shi, Jun Yu, Bo Han, Tongliang Liu
Adversarial Training (AT) is a widely-used algorithm for building robust neural networks, but it suffers from the issue of robust overfitting, the fundamental mechanism of which remains unclear.
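For context, the sketch below shows the PGD-based inner maximization that standard AT typically builds on; it is a generic illustration, not the analysis in this paper, and `model`, `loss_fn`, and the budget parameters are assumed placeholders.

```python
# Minimal sketch of one PGD inner-maximization step used by standard adversarial
# training (generic illustration; not this paper's method). `model` and `loss_fn`
# are assumed PyTorch callables; eps/alpha/num_steps are assumed hyperparameters.
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, num_steps=10):
    """Find a perturbation of x within an L-infinity ball of radius eps that raises the loss."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient-ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid pixel range
    return x_adv.detach()

# Outer minimization of AT: train on the adversarial examples, e.g.
# loss_fn(model(pgd_attack(model, loss_fn, x, y)), y).backward()
```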
no code implementations • 14 Sep 2023 • Yu Ding, Jun Yu, Chunzhi Gu, Shangce Gao, Chao Zhang
Recently, a novel mathematical ANN model, known as the dendritic neuron model (DNM), has been proposed to address nonlinear problems by more accurately reflecting the structure of real neurons.
1 code implementation • 2 Sep 2023 • Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu
However, such a procedure is arguably questionable in two respects: (a) it does not consider the bad influence of noisy labels in selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization.
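For reference, the small-loss selection procedure being questioned can be written in a few lines; the sketch below is a generic illustration with hypothetical inputs (`per_sample_loss`, `keep_ratio`), not the remedy proposed in the paper.

```python
# Generic sketch of small-loss sample selection for learning with noisy labels:
# keep the fraction of examples with the smallest per-sample loss as "probably clean".
# `per_sample_loss` and `keep_ratio` are hypothetical inputs.
import numpy as np

def select_small_loss(per_sample_loss, keep_ratio=0.7):
    n_keep = int(len(per_sample_loss) * keep_ratio)
    order = np.argsort(per_sample_loss)   # ascending: smallest losses first
    selected = order[:n_keep]             # treated as clean and used for training
    discarded = order[n_keep:]            # large-loss examples, usually thrown away
    return selected, discarded

losses = np.array([0.1, 2.3, 0.4, 0.2, 1.8, 0.05])
clean_idx, large_loss_idx = select_small_loss(losses, keep_ratio=0.5)
print(clean_idx)  # [5 0 3]
```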
1 code implementation • 27 Jul 2023 • Lingdong Kong, Yaru Niu, Shaoyuan Xie, Hanjiang Hu, Lai Xing Ng, Benoit R. Cottereau, Ding Zhao, Liangjun Zhang, Hesheng Wang, Wei Tsang Ooi, Ruijie Zhu, Ziyang Song, Li Liu, Tianzhu Zhang, Jun Yu, Mohan Jing, Pengwei Li, Xiaohua Qi, Cheng Jin, Yingfeng Chen, Jie Hou, Jie Zhang, Zhen Kan, Qiang Ling, Liang Peng, Minglei Li, Di Xu, Changpeng Yang, Yuanqi Yao, Gang Wu, Jian Kuai, Xianming Liu, Junjun Jiang, Jiamian Huang, Baojun Li, Jiale Chen, Shuang Zhang, Sun Ao, Zhenyu Li, Runze Chen, Haiyong Luo, Fang Zhao, Jingze Yu
In this paper, we summarize the winning solutions from the RoboDepth Challenge -- an academic competition designed to facilitate and advance robust OoD depth estimation.
1 code implementation • 21 Jul 2023 • Fang Gao, Xuetao Li, Jun Yu, Feng Shaung
The advent of ChatGPT has led to a surge of interest in Embodied AI.
no code implementations • 11 Jul 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu
In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.
no code implementations • 12 Jun 2023 • Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu
Training a classifier that exploits a huge amount of supervised data is expensive or even prohibitive in situations where the labeling cost is high.
1 code implementation • 11 Jun 2023 • Mengyu Li, Jun Yu, Tao Li, Cheng Meng
The Sinkhorn algorithm has been used pervasively to approximate solutions to optimal transport (OT) and unbalanced optimal transport (UOT) problems.
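For reference, the basic entropy-regularized Sinkhorn iteration referred to here can be sketched in a few lines of numpy; this is the textbook balanced-OT version with assumed inputs `a`, `b`, `C`, and `reg`, not the method developed in the paper.

```python
# Textbook sketch of the entropy-regularized Sinkhorn iteration for balanced OT.
# a, b: marginal histograms; C: cost matrix; reg: entropic regularization strength.
import numpy as np

def sinkhorn(a, b, C, reg=0.05, n_iter=200):
    K = np.exp(-C / reg)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                  # match the column marginal b
        u = a / (K @ v)                    # match the row marginal a
    return u[:, None] * K * v[None, :]     # transport plan P = diag(u) K diag(v)

a = np.full(4, 1 / 4)
b = np.full(5, 1 / 5)
C = np.abs(np.linspace(0, 1, 4)[:, None] - np.linspace(0, 1, 5)[None, :])
P = sinkhorn(a, b, C)
print(P.sum(axis=1))  # ~ a, i.e. the row marginals are (approximately) matched
```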
no code implementations • 9 Jun 2023 • Zepeng Liu, Zhicheng Yang, Mingye Zhu, Andy Wong, Yibing Wei, Mei Han, Jun Yu, Jui-Hsin Lai
Image dehazing is a meaningful low-level computer vision task and can be applied to a variety of contexts.
1 code implementation • 26 May 2023 • Kai Zhang, Jun Yu, Zhiling Yan, Yixin Liu, Eashan Adhikarla, Sunyang Fu, Xun Chen, Chen Chen, Yuyin Zhou, Xiang Li, Lifang He, Brian D. Davison, Quanzheng Li, Yong Chen, Hongfang Liu, Lichao Sun
In this paper, we introduce a unified and generalist Biomedical Generative Pre-trained Transformer (BiomedGPT) model, which leverages self-supervision on large and diverse datasets to accept multi-modal inputs and perform a range of downstream tasks.
Ranked #1 on Text Summarization on MeQSum
no code implementations • 13 May 2023 • Ke Zhang, Yan Yang, Jun Yu, Hanliang Jiang, Jianping Fan, Qingming Huang, Weidong Han
To address this limitation, we propose a unified Med-VLP framework based on Multi-task Paired Masking with Alignment (MPMA), which integrates the cross-modal alignment task into the joint image-text reconstruction framework to achieve more comprehensive cross-modal interaction, while a Global and Local Alignment (GLA) module is designed to assist the self-supervised paradigm in obtaining semantic representations with rich domain knowledge.
1 code implementation • CVPR 2023 • Zhou Yu, Lixiang Zheng, Zhou Zhao, Fei Wu, Jianping Fan, Kui Ren, Jun Yu
A recent benchmark, AGQA, offers a promising paradigm for generating QA pairs automatically from pre-annotated scene graphs, enabling it to measure diverse reasoning abilities with granular control.
1 code implementation • 18 Apr 2023 • Zhaoming Kong, Fangxi Deng, Haomin Zhuang, Jun Yu, Lifang He, Xiaowei Yang
In this paper, to investigate the applicability of existing denoising techniques, we compare a variety of denoising methods on both synthetic and real-world datasets for different applications.
no code implementations • 14 Apr 2023 • Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, YuFei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao
This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC).
no code implementations • 8 Apr 2023 • Jun Yu, Shenshen Du, Guochen Xie, Renjie Lu, Pengwei Li, Zhongpeng Cai, Keda Lu
Synthetic Aperture Radar (SAR) to electro-optical (EO) image translation is a fundamental task in remote sensing that can enrich the dataset by fusing information from different sources.
2 code implementations • CVPR 2023 • Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu
Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.
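Photon-limited corruption is commonly simulated as Poisson shot noise; the snippet below is a rough, generic simulation under that assumption and may differ from the exact severity protocol used in the paper.

```python
# Rough sketch of photon-limited corruption simulated as Poisson shot noise on a
# float image in [0, 1]; fewer photons per pixel means a more severe corruption.
import numpy as np

def photon_limited(image, photons_per_pixel=30.0, rng=None):
    rng = rng or np.random.default_rng()
    counts = rng.poisson(image * photons_per_pixel)        # discrete photon counts
    return np.clip(counts / photons_per_pixel, 0.0, 1.0)   # rescale back to [0, 1]

clean = np.random.default_rng(0).random((32, 32, 3))       # stand-in for a CIFAR-sized image
noisy = photon_limited(clean, photons_per_pixel=10.0)
```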
no code implementations • 16 Mar 2023 • Jun Yu, Jichao Zhu, Wangyuan Zhu, Zhongpeng Cai, Guochen Xie, Renda Li, Gongpeng Zhao
Emotional Reaction Intensity (ERI) estimation is an important task in multimodal scenarios, with fundamental applications in medicine, safe driving, and other fields.
no code implementations • 15 Mar 2023 • Jun Yu, Renda Li, Zhongpeng Cai, Gongpeng Zhao, Guochen Xie, Jichao Zhu, Wangyuan Zhu
Human affective behavior analysis plays a vital role in human-computer interaction (HCI) systems.
no code implementations • 15 Mar 2023 • Jun Yu, Zhongpeng Cai, Renda Li, Gongpeng Zhao, Guochen Xie, Jichao Zhu, Wangyuan Zhu
Facial Expression Recognition (FER) is an important task in computer vision and has wide applications in human-computer interaction, intelligent security, emotion analysis, and other fields.
1 code implementation • CVPR 2023 • Zhenwei Shao, Zhou Yu, Meng Wang, Jun Yu
Knowledge-based visual question answering (VQA) requires external knowledge beyond the image to answer the question.
Ranked #2 on Visual Question Answering (VQA) on A-OKVQA
no code implementations • 27 Feb 2023 • Buyu Liu, BaoJun, Jianping Fan, Xi Peng, Kui Ren, Jun Yu
To this end, more desirable attacks should be able to fool defenses that perform such consistency checks.
no code implementations • 18 Feb 2023 • Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, JianXin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, Lichao Sun
This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities.
no code implementations • 5 Feb 2023 • Zijian Zhang, Zhou Zhao, Jun Yu, Qi Tian
In this paper, we propose a novel and flexible conditional diffusion model by introducing conditions into the forward process.
no code implementations • ICCV 2023 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
As selected data have high discrepancies in probabilities, the divergence of two networks can be maintained by training on such data.
3 code implementations • ICCV 2023 • Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, Xi Peng
In this paper, we study a novel and widely existing problem in graph matching (GM), namely, Bi-level Noisy Correspondence (BNC), which refers to node-level noisy correspondence (NNC) and edge-level noisy correspondence (ENC).
Ranked #1 on Graph Matching on Willow Object Class
no code implementations • 31 Oct 2022 • Jianjian Qin, Chunzhi Gu, Jun Yu, Chao Zhang
Moreover, our method only requires very few normal samples to train the student network due to the teacher-student distillation mechanism.
1 code implementation • MM '22: Proceedings of the 30th ACM International Conference on Multimedia 2022 • Jun Yu, Zhongpeng Cai, Zepeng Liu, Guochen Xie, Peng He
The purpose of the micro-expression (ME) and macro-expression (MaE) spotting task is to locate the onset and offset frames of MaE and ME clips.
3 code implementations • MM '22: Proceedings of the 30th ACM International Conference on Multimedia 2022 • Jun Yu, Guochen Xie, Zhongpeng Cai, Peng He, Fang Gao, Qiang Ling
We (Team: USTC-IAT-United) also compare our method with other competitors' methods in MEGC2022, and the expert evaluation results show that our method performs best, which verifies its effectiveness.
no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu
Firstly, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the robust network's desideratum.
1 code implementation • Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022 • Jun Yu, Liwen Zhang, Shenshen Du, Hao Chang, Keda Lu, Zhong Zhang, Ye Yu, Lei Wang, Qiang Ling
To overcome these difficulties, this paper first selects fewer but more suitable data augmentation methods, suited to the characteristics of hyperspectral images, to improve the accuracy of the supervised model trained on the labeled training set.
1 code implementation • 23 Sep 2022 • Jun Yu, Zhaoming Kong, Liang Zhan, Li Shen, Lifang He
The assessment of Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) associated with brain changes remains a challenging task.
1 code implementation • Conference and Labs of the Evaluation Forum 2022 • Jun Yu, Hao Chang, Keda Lu, Guochen Xie, Liwen Zhang, Zhongpeng Cai, Shenshen Du, Zhihong Wei, Zepeng Liu, Fang Gao, Feng Shuang
This motivates us to explore the impact of different methods and components in fine-grained classification on FungiCLEF 2022.
no code implementations • 21 Aug 2022 • Jun Yu, Shunqing Zhang, Jiayun Sun, Shugong Xu, Shan Cao
Multi-stream carrier aggregation is a key technology to expand bandwidth and improve the throughput of the fifth-generation wireless communication systems.
2 code implementations • Machine Learning 2022 • Hao Chang, Guochen Xie, Jun Yu, Qiang Ling, Fang Gao, Ye Yu
Semi-supervised Fine-Grained Recognition is a challenging task due to data imbalance, high inter-class similarity, and domain mismatch.
no code implementations • 4 Jul 2022 • Chunzhi Gu, Jun Yu, Chao Zhang
Specifically, the inductive bias imposed by the extra CVAE path encourages two latent variables in two paths to respectively govern separate representations for each partial-body motion.
1 code implementation • 17 Jun 2022 • Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu
Here, we explore the causes of robust overfitting by comparing the data distributions of non-overfit (weak adversary) and overfitted (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by the weak adversary mainly contains small-loss data.
no code implementations • 31 May 2022 • Jingyi Zhang, Cheng Meng, Jun Yu, Mengrui Zhang, Wenxuan Zhong, Ping Ma
Theoretically, we show the selected subsample can be used for efficient density estimation by deriving the convergence rate for the proposed subsample kernel density estimator.
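To make the setting concrete, the sketch below fits a kernel density estimator on a subsample of a large dataset; uniform subsampling is used purely for illustration, whereas the paper proposes a more careful selection scheme.

```python
# Illustrative sketch: estimate a density from a subsample of a large dataset.
# Uniform subsampling is shown only to illustrate the general workflow.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
full_data = rng.normal(size=100_000)                       # large sample
idx = rng.choice(full_data.size, size=2_000, replace=False)
kde = gaussian_kde(full_data[idx])                         # KDE fitted on the subsample only

grid = np.linspace(-4.0, 4.0, 9)
print(kde(grid))                                           # estimated density values on a grid
```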
1 code implementation • 30 May 2022 • Tao Li, Cheng Meng, Hongteng Xu, Jun Yu
Distribution comparison plays a central role in many machine learning tasks like data classification and generative modeling.
1 code implementation • 26 May 2022 • Mengyu Li, Jun Yu, Hongteng Xu, Cheng Meng
As a valid metric of metric-measure spaces, Gromov-Wasserstein (GW) distance has shown the potential for matching problems of structured data like point clouds and graphs.
2 code implementations • 4 May 2022 • Jun Yu, Hao Chang, Keda Lu, Liwen Zhang, Shenshen Du, Zhong Zhang
Multi-modal aerial view object classification (MAVOC) in automatic target recognition (ATR), although an important and challenging problem, has been understudied.
no code implementations • 28 Mar 2022 • Jun Yu, Zhongpeng Cai, Peng He, Guocheng Xie, Qiang Ling
Moreover, we introduce the multi-fold ensemble method to train and ensemble several models with the same architecture but different data distributions to enhance the performance of our solution.
1 code implementation • 24 Mar 2022 • Zhou Yu, Zitian Jin, Jun Yu, Mingliang Xu, Hongbo Wang, Jianping Fan
Recent advances in Transformer architectures [1] have brought remarkable improvements to visual question answering (VQA).
no code implementations • 15 Feb 2022 • Yibing Zhan, Zhi Chen, Jun Yu, Baosheng Yu, DaCheng Tao, Yong Luo
As a result, HLN significantly improves the performance of scene graph generation by integrating and reasoning from object interactions, relationship interactions, and transitive inference of hyper-relationships.
no code implementations • 30 Jan 2022 • Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong, Tongliang Liu
Algorithms that minimize the average loss have been widely designed for dealing with noisy labels.
1 code implementation • CVPR 2022 • Wenwen Pan, Haonan Shi, Zhou Zhao, Jieming Zhu, Xiuqiang He, Zhigeng Pan, Lianli Gao, Jun Yu, Fei Wu, Qi Tian
Audio-Guided video semantic segmentation is a challenging problem in visual analysis and editing, which automatically separates foreground objects from background in a video sequence according to the referring audio expressions.
no code implementations • CVPR 2022 • Jun Bao, Buyu Liu, Jun Yu
This paper aims to address the single image gaze target detection problem.
1 code implementation • Association for the Advancement of Artificial Intelligence 2021 • Jun Yu, Hao Chang, Keda Lu
It is more efficient to look for ways of improving the data given a fixed neural network architecture.
no code implementations • 21 Nov 2021 • Jun Yu, Zhaoming Kong, Aditya Kendre, Hao Peng, Carl Yang, Lichao Sun, Alex Leow, Lifang He
This paper presents a novel graph-based kernel learning approach for connectome analysis.
no code implementations • 7 Oct 2021 • Xiaopeng Li, Jiang Wu, Zhanbo Xu, Kun Liu, Jun Yu, Xiaohong Guan
This paper focuses on the uncertainty set prediction of the aggregated generation of geographically distributed wind farms.
no code implementations • 29 Sep 2021 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.
1 code implementation • 16 Aug 2021 • Yuhao Cui, Zhou Yu, Chunqi Wang, Zhongzhou Zhao, Ji Zhang, Meng Wang, Jun Yu
Nevertheless, most existing VLP approaches have not fully utilized the intrinsic knowledge within the image-text pairs, which limits the effectiveness of the learned alignments and further restricts the performance of their models.
no code implementations • 14 Jul 2021 • Hao Chang, Guochen Xie, Jun Yu, Qiang Ling
Semi-supervised Fine-Grained Recognition is a challenging task due to data imbalance, high inter-class similarity, and domain mismatch.
no code implementations • 10 Jul 2021 • Fang Gao, Jiabao Wang, Jun Yu, Yaoxiong Wang, Feng Shuang
It consists of a dense residual network structure, an adaptive weight channel attention (AWCA) module, a patch second non-local (PSNL) module and a soft label generation method.
1 code implementation • 27 Jun 2021 • Jun Bao, Buyu Liu, Jun Yu
We propose a novel method for refining the cross-person gaze prediction task with eye/face images only, by explicitly modelling person-specific differences.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
no code implementations • NeurIPS 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.
Ranked #26 on Image Classification on mini WebVision 1.0
no code implementations • 1 Jun 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.
no code implementations • 18 May 2021 • Bofeng Wu, Guocheng Niu, Jun Yu, Xinyan Xiao, Jian Zhang, Hua Wu
This paper proposes an approach to Dense Video Captioning (DVC) without pairwise event-sentence annotation.
no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu
Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
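A hedged sketch of a feature-space matching objective of this general kind is given below; `denoiser` and `feature_extractor` are assumed PyTorch modules, and this is not claimed to be the paper's exact loss.

```python
# Sketch of a feature-space matching loss: push features of denoised adversarial
# inputs toward features of the corresponding natural inputs. `denoiser` and
# `feature_extractor` are assumed modules; this is an illustration, not the exact loss.
import torch

def feature_space_loss(denoiser, feature_extractor, x_adv, x_nat):
    f_denoised = feature_extractor(denoiser(x_adv))   # features of the denoised adversarial input
    with torch.no_grad():
        f_clean = feature_extractor(x_nat)            # target features of the natural input
    return torch.mean((f_denoised - f_clean) ** 2)    # squared distance in feature space
```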
1 code implementation • 25 Dec 2020 • Jun Yu, Hao Zhou, Yibing Zhan, DaCheng Tao
Essentially, DGCPN addresses the inaccurate similarity problem by exploring and exploiting the data's intrinsic relationships in a graph.
1 code implementation • NeurIPS 2020 • Cheng Meng, Jun Yu, Jingyi Zhang, Ping Ma, Wenxuan Zhong
The proposed method, named principal optimal transport direction (POTD), estimates the basis of the SDR subspace using the principal directions of the optimal transport coupling between the data from different response categories.
no code implementations • 21 Aug 2020 • Jinfeng Li, Weifeng Liu, Yicong Zhou, Jun Yu, Dapeng Tao
Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain.
no code implementations • 30 May 2020 • Jun Yu, Guochen Xie, Mengyan Li, Xinlong Hao
In the inference procedure, we try another similarity computation: we drop the subsequent fully connected layers and directly compute the cosine similarity of the two feature vectors.
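The inference-time scoring described above reduces to a cosine similarity between backbone feature vectors; a minimal sketch (with stand-in features and an illustrative threshold) follows.

```python
# Minimal sketch of scoring a pair by the cosine similarity of their feature vectors
# after dropping the trailing fully connected layers. Feature extraction is assumed;
# the 0.3 threshold is purely illustrative.
import numpy as np

def cosine_similarity(f1, f2, eps=1e-12):
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps))

feat_a = np.random.default_rng(0).normal(size=512)   # stand-in backbone features
feat_b = np.random.default_rng(1).normal(size=512)
same_family = cosine_similarity(feat_a, feat_b) > 0.3
```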
no code implementations • 30 May 2020 • Jun Yu, Mengyan Li, Xinlong Hao, Guochen Xie
Recognizing Families In the Wild (RFIW) is a challenging kinship recognition task with multiple tracks, which is based on Families in the Wild (FIW), a large-scale and comprehensive image database for automatic kinship recognition.
no code implementations • 21 May 2020 • Jun Yu, HaiYing Wang, Mingyao Ai, Huiming Zhang
We first derive optimal Poisson subsampling probabilities in the context of quasi-likelihood estimation under the A- and L-optimality criteria.
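As a rough illustration of the flavor of such probabilities, the sketch below uses logistic regression (a simple quasi-likelihood case) with L-optimality-style scores proportional to |y_i - p_i| times ||x_i||, normalized and capped for Poisson subsampling; the paper's exact A- and L-optimal formulas may differ, and `beta_pilot` is an assumed pilot estimate.

```python
# Rough illustration of L-optimality-style Poisson subsampling probabilities for
# logistic regression (a simple quasi-likelihood case): scores ~ |y_i - p_i| * ||x_i||,
# normalized to a target expected subsample size and capped at 1. The exact A-/L-optimal
# formulas in the paper may differ; `beta_pilot` is an assumed pilot estimate.
import numpy as np

def l_opt_poisson_probs(X, y, beta_pilot, expected_size):
    p = 1.0 / (1.0 + np.exp(-X @ beta_pilot))            # pilot fitted probabilities
    scores = np.abs(y - p) * np.linalg.norm(X, axis=1)   # L-optimality-style scores
    probs = expected_size * scores / scores.sum()
    return np.minimum(probs, 1.0)                        # include point i with probability probs[i]

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
beta_true = np.array([1.0, -0.5, 0.3, 0.0, 0.8])
y = (rng.random(10_000) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
probs = l_opt_poisson_probs(X, y, beta_pilot=np.zeros(5), expected_size=500)
subsample = X[rng.random(10_000) < probs]                # Poisson subsample
```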
1 code implementation • 25 Apr 2020 • Zhou Yu, Yuhao Cui, Jun Yu, Meng Wang, DaCheng Tao, Qi Tian
Most existing works focus on a single task and design neural architectures manually, which are highly task-specific and hard to generalize to different tasks.
Ranked #19 on Visual Question Answering (VQA) on VQA v2 test-std
no code implementations • 7 Apr 2020 • Maoying Qiao, Tongliang Liu, Jun Yu, Wei Bian, DaCheng Tao
To alleviate this problem, in this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
no code implementations • 6 Apr 2020 • Maoying Qiao, Jun Yu, Wei Bian, DaCheng Tao
Specifically, an HMRNet is reorganized into a hierarchical structure with homogeneous networks as its layers and heterogeneous links connecting them.
no code implementations • 16 Mar 2020 • Yijun Song, Jingwen Wang, Lin Ma, Zhou Yu, Jun Yu
The task of temporally grounding textual queries in videos is to localize one video segment that semantically corresponds to the given query.
no code implementations • 12 Aug 2019 • Zhou Yu, Yuhao Cui, Jun Yu, DaCheng Tao, Qi Tian
Learning an effective attention mechanism for multimodal data is important in many vision-and-language tasks that require a synergic understanding of both the visual and textual contents.
7 code implementations • CVPR 2019 • Zhou Yu, Jun Yu, Yuhao Cui, DaCheng Tao, Qi Tian
In this paper, we propose a deep Modular Co-Attention Network (MCAN) that consists of Modular Co-Attention (MCA) layers cascaded in depth.
Ranked #26 on Visual Question Answering (VQA) on VQA v2 test-std
no code implementations • 20 Jun 2019 • Natalya Pya Arnqvist, Blaise Ngendangenzwa, Eric Lindahl, Leif Nilsson, Jun Yu
One of the primary concerns of product quality control in the automotive industry is the automated detection of small defects on specular car body surfaces.
1 code implementation • 6 Jun 2019 • Zhou Yu, Dejing Xu, Jun Yu, Ting Yu, Zhou Zhao, Yueting Zhuang, DaCheng Tao
It is both crucial and natural to extend this research direction to the video domain for video question answering (VideoQA).
Ranked #15 on Video Question Answering on ActivityNet-QA
Visual Question Answering (VQA) • Zero-Shot Video Question Answer
no code implementations • 20 May 2019 • Jun Yu, Jing Li, Zhou Yu, Qingming Huang
Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions.
no code implementations • 9 May 2019 • Yinglu Liu, Hao Shen, Yue Si, Xiaobo Wang, Xiangyu Zhu, Hailin Shi, Zhibin Hong, Hanqi Guo, Ziyuan Guo, Yanqin Chen, Bi Li, Teng Xi, Jun Yu, Haonian Xie, Guochen Xie, Mengyan Li, Qing Lu, Zengfu Wang, Shenqi Lai, Zhenhua Chai, Xiaoming Wei
However, previous competitions on facial landmark localization (i.e., the 300-W, 300-VW and Menpo challenges) aim to predict 68-point landmarks, which are insufficient to depict the structure of facial components.
no code implementations • CVPR 2019 • Yibing Zhan, Jun Yu, Ting Yu, DaCheng Tao
In this paper, we explore the beneficial effect of undetermined relationships on visual relationship detection.
no code implementations • 22 Apr 2019 • Jian Zhang, Jun Yu, DaCheng Tao
Next, we exploit an affine transformation to align the local deep features of each neighbourhood with the global features.
no code implementations • 16 Apr 2019 • Jun Yu, Jinghan Yao, Jian Zhang, Zhou Yu, DaCheng Tao
In this paper, we propose a one-stage framework, SPRNet, which performs efficient instance segmentation by introducing a single pixel reconstruction (SPR) branch to off-the-shelf one-stage detectors.
no code implementations • 5 Apr 2019 • Maoying Qiao, Jun Yu, Wei Bian, Qiang Li, DaCheng Tao
Stochastic block models (SBMs) have been playing an important role in modeling clusters or community structures of network data.
no code implementations • 26 Mar 2019 • Jun Yu, Xiao-Jun Wu
Our model not only considers the inter-modality correlation by maximizing the kernel correlation but also preserves the semantically structural information within each modality.
no code implementations • 26 Mar 2019 • Jun Yu, Xiao-Jun Wu
With the advantage of low storage cost and high efficiency, hashing learning has received much attention in the domain of Big Data.
no code implementations • 6 Dec 2018 • Jun Yu, Xiao-Jun Wu, Josef Kittler
With the advantage of low storage cost and high retrieval efficiency, hashing techniques have recently been an emerging topic in cross-modal similarity search.
no code implementations • 24 Oct 2018 • Zhou Zhao, Hanbing Zhan, Lingtao Meng, Jun Xiao, Jun Yu, Min Yang, Fei Wu, Deng Cai
In this paper, we study the problem of image retweet prediction in social media, which predicts whether a user will repost the image tweets of their followees.
no code implementations • 13 Aug 2018 • Jun Yu, Xiao-Jun Wu, Josef Kittler
Many hashing methods based on a single view have been extensively studied for information retrieval.
no code implementations • 19 Jun 2018 • Jun Yu, Xiao-Jun Wu, Josef Kittler
Recently, hashing techniques have gained importance in large-scale retrieval tasks because of their retrieval speed.
1 code implementation • 9 May 2018 • Zhou Yu, Jun Yu, Chenchao Xiang, Zhou Zhao, Qi Tian, DaCheng Tao
Visual grounding aims to localize an object in an image referred to by a textual query phrase.
Ranked #9 on Phrase Grounding on Flickr30k Entities Test
no code implementations • ECCV 2018 • Xiaoqing Yin, Xinchao Wang, Jun Yu, Maojun Zhang, Pascal Fua, DaCheng Tao
Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions.
no code implementations • ICLR 2018 • Wei Zhang, Qiuyu Chen, Jun Yu, Jianping Fan
In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities.
no code implementations • 18 Dec 2017 • Chaoqun Hong, Jun Yu
In the proposed deep learning based framework, Manifold Regularized Convolutional Layers (MRCL) improve traditional convolutional layers by learning the relationship among outputs of neurons.
2 code implementations • 4 Dec 2017 • Jun Yu, Xingxin Xu, Fei Gao, Shengjie Shi, Meng Wang, DaCheng Tao, Qingming Huang
Experimental results show that our method is capable of generating both visually comfortable and identity-preserving face sketches/photos over a wide range of challenging data.
Ranked #1 on Face Sketch Synthesis on CUFS (FID metric)
2 code implementations • 10 Aug 2017 • Zhou Yu, Jun Yu, Chenchao Xiang, Jianping Fan, DaCheng Tao
For fine-grained image and question representations, a 'co-attention' mechanism is developed by using a deep neural network architecture to jointly learn the attentions for both the image and the question, which allows us to reduce irrelevant features effectively and obtain more discriminative features for image and question representations.
6 code implementations • ICCV 2017 • Zhou Yu, Jun Yu, Jianping Fan, DaCheng Tao
For multi-modal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches.
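A minimal numpy sketch of the MFB pooling operation as described (project both modalities into a shared factorized space, take the element-wise product, sum-pool with factor k, then power- and L2-normalize) is given below; dimensions and weight scales are illustrative.

```python
# Minimal sketch of Multi-modal Factorized Bilinear (MFB) pooling: project image and
# question features, multiply element-wise, sum-pool with factor k, then apply power
# (signed square-root) and L2 normalization. Dimensions and weights are illustrative.
import numpy as np

def mfb_pool(x_img, x_txt, U, V, k=5, eps=1e-12):
    """U: (d_img, o*k), V: (d_txt, o*k); returns an o-dimensional fused feature."""
    joint = (U.T @ x_img) * (V.T @ x_txt)       # element-wise product in the factorized space
    z = joint.reshape(-1, k).sum(axis=1)        # sum pooling over consecutive windows of size k
    z = np.sign(z) * np.sqrt(np.abs(z))         # power normalization
    return z / (np.linalg.norm(z) + eps)        # L2 normalization

rng = np.random.default_rng(0)
d_img, d_txt, o, k = 2048, 1024, 1000, 5
U = 0.01 * rng.normal(size=(d_img, o * k))
V = 0.01 * rng.normal(size=(d_txt, o * k))
fused = mfb_pool(rng.normal(size=d_img), rng.normal(size=d_txt), U, V, k=k)
print(fused.shape)  # (1000,)
```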
no code implementations • 8 Jul 2017 • Tianyi Zhao, Baopeng Zhang, Wei zhang, Ning Zhou, Jun Yu, Jianping Fan
Our LMM model can provide an end-to-end approach for jointly learning: (a) the deep networks to extract more discriminative deep features for image and object class representation; (b) the tree classifier for recognizing large numbers of object classes hierarchically; and (c) the visual hierarchy adaptation for achieving more accurate indexing of large numbers of object classes hierarchically.
no code implementations • 24 Jun 2017 • Tianyi Zhao, Jun Yu, Zhenzhong Kuang, Wei zhang, Jianping Fan
In this paper, a deep mixture of diverse experts algorithm is developed for seamlessly combining a set of base deep CNNs (convolutional neural networks) with diverse outputs (task spaces), e.g., such base deep CNNs are trained to recognize different subsets of tens of thousands of atomic object classes.
no code implementations • 15 Nov 2016 • Ping Li, Jun Yu, Meng Wang, Luming Zhang, Deng Cai, Xuelong Li
To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization.
no code implementations • 7 Jul 2016 • Anders Hildeman, David Bolin, Jonas Wallin, Adam Johansson, Tufve Nyholm, Thomas Asklund, Jun Yu
The amount of data needed to train a model for s-CT generation is of the order of 100 million voxels.
no code implementations • 8 Feb 2015 • Matt Taddy, Chun-Sheng Chen, Jun Yu, Mitch Wyle
We derive ensembles of decision trees through a nonparametric Bayesian model, allowing us to view random forests as samples from a posterior distribution.