no code implementations • ECCV 2020 • Henghui Ding, Scott Cohen, Brian Price, Xudong Jiang
We propose to employ phrase expressions as an additional interaction input to infer the attributes of the target object.
no code implementations • 8 May 2022 • Yunqing Zhao, Henghui Ding, Houjing Huang, Ngai-Man Cheung
Informed by our analysis, and to slow the diversity degradation of the target generator during adaptation, our second contribution applies mutual information (MI) maximization to retain the source domain's rich multi-level diversity information in the target-domain generator.
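The snippet states the MI objective only at a high level; a common way to realize such a bound is an InfoNCE-style contrastive loss between features produced by the frozen source generator and the adapting target generator from the same latent codes. The sketch below takes that route; the feature shapes, layer choice, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mi_infonce_loss(src_feats, tgt_feats, temperature=0.1):
    """InfoNCE-style lower bound on mutual information between
    source-generator and target-generator features produced from the
    same batch of latent codes (one positive pair per row)."""
    src = F.normalize(src_feats.flatten(1), dim=1)   # (B, D)
    tgt = F.normalize(tgt_feats.flatten(1), dim=1)   # (B, D)
    logits = src @ tgt.t() / temperature             # (B, B) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)
    # Matching rows (same latent code) are positives; all others are negatives.
    return F.cross_entropy(logits, labels)

# Hypothetical usage: feats_src/feats_tgt would come from intermediate layers
# of the frozen source generator and the adapting target generator.
feats_src = torch.randn(8, 256, 16, 16)
feats_tgt = torch.randn(8, 256, 16, 16)
loss = mi_infonce_loss(feats_src, feats_tgt)
```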
no code implementations • 26 Apr 2022 • Chang Liu, Xudong Jiang, Henghui Ding
In this work, we propose a novel framework that simultaneously detects the target-of-interest via feature propagation and generates a fine-grained segmentation mask.
1 code implementation • 7 Apr 2022 • Guolei Sun, Yun Liu, Henghui Ding, Thomas Probst, Luc van Gool
To address this problem, we propose a Coarse-to-Fine Feature Mining (CFFM) technique to learn a unified representation of static and motional contexts.
no code implementations • 6 Jan 2022 • Jing Lin, Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Youliang Yan, Xueyi Zou, Henghui Ding, Yulun Zhang, Radu Timofte, Luc van Gool
Exploiting similar and sharper scene patches in spatio-temporal neighborhoods is critical for video deblurring.
no code implementations • 18 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Bryan Hooi
In this work, we propose the TNS (Time-aware Neighbor Sampling) method: TNS learns from temporal information to provide an adaptive receptive neighborhood for every node at any time.
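As a rough illustration of time-aware sampling (not the paper's learned scheme), the sketch below favors neighbors whose interactions are close in time to the query, using a fixed exponential decay as a stand-in for the adaptive component:

```python
import torch

def time_aware_sample(neighbor_ids, neighbor_times, query_time, k, tau=1.0):
    """Sample k neighbors of a node, favoring interactions close in time to
    query_time. Exponential decay is an illustrative choice; TNS learns the
    sampling adaptively."""
    dt = (query_time - neighbor_times).clamp(min=0.0)   # time gaps to the query
    probs = torch.softmax(-dt / tau, dim=0)              # closer in time -> higher prob
    idx = torch.multinomial(probs, num_samples=min(k, len(neighbor_ids)),
                            replacement=False)
    return neighbor_ids[idx]

# Hypothetical usage on one node's interaction history.
ids = torch.tensor([3, 7, 11, 42])
times = torch.tensor([1.0, 5.0, 9.5, 9.9])
sampled = time_aware_sample(ids, times, query_time=10.0, k=2)
```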
no code implementations • 1 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Wei Wang, Henghui Ding, Muhao Chen, Jing Tang, Bryan Hooi
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
1 code implementation • NeurIPS 2021 • Zekun Tong, Yuxuan Liang, Henghui Ding, Yongxing Dai, Xinke Li, Changhu Wang
However, it is still in its infancy, with two concerns: 1) changing the graph structure through data augmentation to generate contrastive views may mislead the message-passing scheme, as such changes discard intrinsic graph structural information, especially the directional structure in directed graphs; 2) since GCL usually uses predefined contrastive views with hand-picked parameters, it does not take full advantage of the contrastive information provided by data augmentation, resulting in incomplete structural information for model learning.
no code implementations • NeurIPS 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, Bryan Hooi
To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features and preserve the essential semantic information.
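A minimal sketch of the general idea, assuming timestamp jittering as the augmentation and a degree-based rule as a stand-in for the adaptive magnitude; the paper's actual augmentations and adaptation rule may differ:

```python
import torch

def adaptive_edge_perturb(edge_times, node_degrees, src_nodes, base_sigma=0.1):
    """Add Gaussian noise to interaction timestamps, with a per-edge magnitude
    that shrinks for high-degree (information-rich) source nodes. The adaptive
    rule here is an illustrative assumption, not the paper's exact scheme."""
    scale = base_sigma / node_degrees[src_nodes].float().clamp(min=1.0).sqrt()
    return edge_times + torch.randn_like(edge_times) * scale

edge_times = torch.tensor([1.0, 2.5, 7.0, 9.0])
src_nodes = torch.tensor([0, 0, 1, 2])
node_degrees = torch.tensor([2, 1, 1])
aug_times = adaptive_edge_perturb(edge_times, node_degrees, src_nodes)
```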
no code implementations • 29 Sep 2021 • Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin
In this work, we argue that there are common latent features between the head and tail classes that can be used to learn better feature representations.
no code implementations • ICCV 2021 • Chi Zhang, Henghui Ding, Guosheng Lin, Ruibo Li, Changhu Wang, Chunhua Shen
Inspired by the recent success of the Automated Machine Learning (AutoML) literature, in this paper we present Meta Navigator, a framework that attempts to address the aforementioned limitation in few-shot learning by seeking a higher-level strategy that automates the selection among various few-shot learning designs.
no code implementations • 29 Aug 2021 • Chi Zhang, Guosheng Lin, Lvlong Lai, Henghui Ding, Qingyao Wu
First, we present a Class Activation Map Calibration (CAMC) module that improves the learning and prediction of network classifiers by enforcing that predictions are based on important image regions.
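For context, a class activation map in the standard sense (Zhou et al., 2016) weights the final convolutional feature maps by the classifier weights of a class to localize important image regions; the CAMC module calibrates such maps, which is not shown here:

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """Standard CAM: weight the last conv feature maps by the classifier
    weights of one class to highlight the regions driving that prediction."""
    # features: (B, C, H, W); fc_weight: (num_classes, C)
    w = fc_weight[class_idx].view(1, -1, 1, 1)                 # (1, C, 1, 1)
    cam = torch.relu((features * w).sum(dim=1))                # (B, H, W)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)    # normalize to [0, 1]
    return cam

features = torch.randn(2, 512, 14, 14)
fc_weight = torch.randn(21, 512)
cam = class_activation_map(features, fc_weight, class_idx=5)
```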
no code implementations • 19 Aug 2021 • Weide Liu, Chi Zhang, Henghui Ding, Tzu-Yi Hung, Guosheng Lin
We address the challenging task of few-shot segmentation in this work.
1 code implementation • ICCV 2021 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We introduce a transformer with multi-head attention to build a network with an encoder-decoder attention architecture that "queries" the given image with the language expression.
Ranked #1 on Referring Expression Segmentation on RefCOCOg-val
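At its core, "querying the image with the language expression" can be realized as cross-attention in which language-token embeddings serve as queries over flattened image features; the toy block below illustrates that mechanism with assumed dimensions, not the full model architecture:

```python
import torch
import torch.nn as nn

class LanguageQueriesImage(nn.Module):
    """Minimal cross-attention block: language-token embeddings act as queries
    over flattened image features (keys/values). Dimensions are illustrative."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, lang_tokens, image_feats):
        # lang_tokens: (B, L, dim); image_feats: (B, dim, H, W)
        img_seq = image_feats.flatten(2).transpose(1, 2)   # (B, H*W, dim)
        out, _ = self.attn(query=lang_tokens, key=img_seq, value=img_seq)
        return out                                          # (B, L, dim)

block = LanguageQueriesImage()
out = block(torch.randn(2, 12, 256), torch.randn(2, 256, 24, 24))
```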
1 code implementation • 11 Aug 2021 • Weide Liu, Zhonghua Wu, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin
To this end, we first propose a prior extractor to learn the query information from the unlabeled images with our proposed global-local contrastive learning.
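The snippet does not spell out the contrastive pairing; one generic form of global-local contrast treats an image's pooled (global) embedding and its own patch (local) embeddings as positives against other images' patches. The sketch below assumes that pairing, which may differ from the paper's formulation:

```python
import torch
import torch.nn.functional as F

def global_local_contrast(global_emb, local_embs, temperature=0.07):
    """Contrast each image's global embedding against its own local (patch)
    embeddings as positives and other images' patches as negatives."""
    # global_emb: (B, D); local_embs: (B, P, D)
    g = F.normalize(global_emb, dim=-1)
    l = F.normalize(local_embs, dim=-1)
    b, p, d = l.shape
    logits = torch.einsum('bd,npd->bnp', g, l).reshape(b, -1) / temperature  # (B, B*P)
    # Positives for row i are the P patches belonging to image i.
    pos_mask = torch.eye(b, dtype=torch.bool).repeat_interleave(p, dim=1)
    loss = -(logits.log_softmax(dim=1)[pos_mask].view(b, p).mean(dim=1)).mean()
    return loss

loss = global_local_contrast(torch.randn(4, 128), torch.randn(4, 6, 128))
```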
no code implementations • 5 Aug 2021 • Xin Sun, Henghui Ding, Chi Zhang, Guosheng Lin, Keck-Voon Ling
In this work, we aim to address the challenging task of open set recognition (OSR).
1 code implementation • 28 Jul 2021 • Xiangtai Li, Hao He, Henghui Ding, Kuiyuan Yang, Guangliang Cheng, Jianping Shi, Yunhai Tong
Moreover, our approach is a plug-and-play module and can be easily applied to existing instance segmentation methods.
1 code implementation • 5 Jul 2021 • Meng-Jiun Chiou, Henghui Ding, Hanshu Yan, Changhu Wang, Roger Zimmermann, Jiashi Feng
Given input images, scene graph generation (SGG) aims to produce comprehensive, graphical representations describing visual relationships among salient objects.
Ranked #1 on Unbiased Scene Graph Generation on Visual Genome
no code implementations • 7 Jun 2021 • Xiaohong Wang, Xudong Jiang, Henghui Ding, Yuqian Zhao, Jun Liu
In this paper, we propose a novel knowledge-aware deep framework that incorporates clinical knowledge into the collaborative learning of two important melanoma diagnosis tasks, i.e., skin lesion segmentation and melanoma recognition.
1 code implementation • ICCV 2021 • Jiaxin Li, Zijian Feng, Qi She, Henghui Ding, Changhu Wang, Gim Hee Lee
In this paper, we propose MINE to perform novel view synthesis and depth estimation via dense 3D reconstruction from a single image.
no code implementations • 22 Jan 2021 • Chang Liu, Henghui Ding, Xudong Jiang
In this paper, we argue that recovering these microscopic details relies on low-level but high-definition texture features.
no code implementations • ICCV 2021 • Suchen Wang, Kim-Hui Yap, Henghui Ding, Jiyan Wu, Junsong Yuan, Yap-Peng Tan
In this work, we study the problem of human-object interaction (HOI) detection with large vocabulary object categories.
no code implementations • ICCV 2021 • Henghui Ding, Hui Zhang, Jun Liu, Jiaxin Li, Zijian Feng, Xudong Jiang
In this work, we treat each respective region in an image as a whole, and capture the structure topology as well as the affinity among different regions.
no code implementations • ICCV 2021 • Hui Zhang, Henghui Ding
In this work, we address zero-shot semantic segmentation, which aims to segment not only the seen classes present in training but also novel classes that have never been seen.
no code implementations • ICCV 2021 • Yujun Cai, Yiwei Wang, Yiheng Zhu, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Chuanxia Zheng, Sijie Yan, Henghui Ding, Xiaohui Shen, Ding Liu, Nadia Magnenat Thalmann
Notably, by considering this problem as a conditional generation process, we estimate a parametric distribution of the missing regions based on the input conditions, from which to sample and synthesize the full motion series.
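Estimating a parametric distribution over the missing regions and sampling from it is in the spirit of a conditional VAE; a minimal sketch with a Gaussian latent, the reparameterization trick, and illustrative layer sizes might look like:

```python
import torch
import torch.nn as nn

class MissingMotionSampler(nn.Module):
    """CVAE-style head: predict a Gaussian over the latent of the missing
    motion given features of the observed frames, then sample via the
    reparameterization trick. Architecture sizes are illustrative."""
    def __init__(self, cond_dim=256, latent_dim=64):
        super().__init__()
        self.to_stats = nn.Linear(cond_dim, 2 * latent_dim)

    def forward(self, cond_feat):
        mu, logvar = self.to_stats(cond_feat).chunk(2, dim=-1)
        std = (0.5 * logvar).exp()
        z = mu + std * torch.randn_like(std)   # one sample of the missing part
        return z, mu, logvar

sampler = MissingMotionSampler()
z, mu, logvar = sampler(torch.randn(2, 256))
```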
no code implementations • ICCV 2021 • Tianjiao Li, Qiuhong Ke, Hossein Rahmani, Rui En Ho, Henghui Ding, Jun Liu
This makes online continual action recognition a challenging task.
no code implementations • 20 Feb 2020 • Jianhan Mei, Henghui Ding, Xudong Jiang
In this paper, we address the challenging task of estimating 6D object pose from a single RGB image.
no code implementations • 20 Feb 2020 • Xiaohong Wang, Xudong Jiang, Henghui Ding, Jun Liu
Accurate segmentation of skin lesion from dermoscopic images is a crucial part of computer-aided diagnosis of melanoma.
no code implementations • CVPR 2019 • Henghui Ding, Xudong Jiang, Bing Shuai, Ai Qun Liu, Gang Wang
In this way, the proposed network aggregates the context information of a pixel from its semantic-correlated region instead of a predefined fixed region.
Ranked #9 on Semantic Segmentation on COCO-Stuff test
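Aggregating context from a semantic-correlated region rather than a fixed window can be illustrated with a generic affinity-weighted (non-local style) aggregation; the sketch below conveys the idea but is not the paper's shape-variant formulation:

```python
import torch
import torch.nn.functional as F

def semantic_correlated_context(feats, temperature=1.0):
    """Aggregate context for each pixel from semantically correlated pixels
    (affinity-weighted over the whole feature map) instead of a fixed window."""
    b, c, h, w = feats.shape
    x = feats.flatten(2)                                # (B, C, HW)
    affinity = torch.einsum('bci,bcj->bij', x, x) / (c ** 0.5 * temperature)
    weights = F.softmax(affinity, dim=-1)               # (B, HW, HW)
    context = torch.einsum('bij,bcj->bci', weights, x)  # (B, C, HW)
    return context.view(b, c, h, w)

ctx = semantic_correlated_context(torch.randn(1, 64, 32, 32))
```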
1 code implementation • ICCV 2019 • Henghui Ding, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, Gang Wang
Furthermore, we propose a boundary-aware feature propagation (BFP) module to harvest and propagate the local features within their regions isolated by the learned boundaries in the UAG-structured image.
Ranked #28 on Semantic Segmentation on PASCAL Context
1 code implementation • journal 2019 • Bing Shuai, Henghui Ding, Ting Liu, Gang Wang, Xudong Jiang
Furthermore, we introduce a “dense skip” architecture to retain a rich set of low-level information from the pre-trained CNN, which is essential to improve the low-level parsing performance.
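A dense-skip design in the broad sense fuses features from several backbone stages so that low-level detail from early layers survives into the parsing head; a minimal sketch with assumed channel sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSkipHead(nn.Module):
    """Fuse features from several backbone stages by upsampling them to a
    common resolution and concatenating before classification, preserving
    low-level detail. Channel sizes are illustrative."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), num_classes=19):
        super().__init__()
        self.classifier = nn.Conv2d(sum(in_channels), num_classes, kernel_size=1)

    def forward(self, stage_feats):
        target_size = stage_feats[0].shape[-2:]
        fused = torch.cat([F.interpolate(f, size=target_size, mode='bilinear',
                                         align_corners=False)
                           for f in stage_feats], dim=1)
        return self.classifier(fused)

feats = [torch.randn(1, c, 64 // s, 64 // s)
         for c, s in zip((256, 512, 1024, 2048), (1, 2, 4, 8))]
logits = DenseSkipHead()(feats)
```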
no code implementations • 15 Jan 2019 • Jun Liu, Henghui Ding, Amir Shahroudy, Ling-Yu Duan, Xudong Jiang, Gang Wang, Alex C. Kot
Learning a set of features that are reliable and discriminatively representative of the pose of a hand (or body) part is difficult due to ambiguities, texture and illumination variation, and self-occlusion in real-world 3D pose estimation applications.
1 code implementation • CVPR 2018 • Henghui Ding, Xudong Jiang, Bing Shuai, Ai Qun Liu, Gang Wang
In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context.
Ranked #12 on Semantic Segmentation on COCO-Stuff test
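One straightforward reading of a context contrasted local feature is the difference between a local response and a large-receptive-field (dilated) context response, so that local evidence stands out against its surroundings; the sketch below follows that reading with illustrative hyperparameters:

```python
import torch
import torch.nn as nn

class ContextContrastedLocal(nn.Module):
    """Compute a local feature and a large-receptive-field context feature,
    then take their difference so local detail is contrasted against the
    surrounding context. Dilation rate and channel width are illustrative."""
    def __init__(self, channels=256, context_dilation=6):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.context = nn.Conv2d(channels, channels, 3,
                                 padding=context_dilation,
                                 dilation=context_dilation)

    def forward(self, x):
        return torch.relu(self.local(x) - self.context(x))

ccl = ContextContrastedLocal()
out = ccl(torch.randn(1, 256, 32, 32))
```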