Search Results for author: Keren Fu

Found 11 papers, 8 papers with code

Full-Duplex Strategy for Video Object Segmentation

1 code implementation • ICCV 2021 • Ge-Peng Ji, Deng-Ping Fan, Keren Fu, Zhe Wu, Jianbing Shen, Ling Shao

Previous video object segmentation approaches mainly focus on using simplex solutions between appearance and motion, limiting feature collaboration efficiency among and across these two cues.

Salient Object Detection • Semantic Segmentation • +2

Depth Quality-Inspired Feature Manipulation for Efficient RGB-D Salient Object Detection

1 code implementation • 5 Jul 2021 • Wenbo Zhang, Ge-Peng Ji, Zhuo Wang, Keren Fu, Qijun Zhao

To tackle this dilemma and also inspired by the fact that depth quality is a key factor influencing the accuracy, we propose a novel depth quality-inspired feature manipulation (DQFM) process, which is efficient itself and can serve as a gating mechanism for filtering depth features to greatly boost the accuracy.
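The gating idea can be illustrated with a minimal PyTorch sketch; this is not the authors' DQFM code, and the scalar-gate design, channel sizes, and additive fusion are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class DepthQualityGate(nn.Module):
    """Toy depth-quality gate (illustrative, not the DQFM module): a small
    branch predicts a quality score in [0, 1] from the depth features and
    uses it to re-weight them before fusion with the RGB features."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # global context of the depth feature
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
            nn.Sigmoid(),                          # estimated depth quality in [0, 1]
        )

    def forward(self, rgb_feat, depth_feat):
        b, c, _, _ = depth_feat.shape
        q = self.score(self.pool(depth_feat).view(b, c))   # (b, 1) quality score
        gated_depth = depth_feat * q.view(b, 1, 1, 1)       # low-quality depth is suppressed
        return rgb_feat + gated_depth                        # simple additive fusion

# Usage on hypothetical 64-channel feature maps
gate = DepthQualityGate(64)
fused = gate(torch.randn(2, 64, 44, 44), torch.randn(2, 64, 44, 44))
print(fused.shape)  # torch.Size([2, 64, 44, 44])
```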

RGB-D Salient Object Detection • Salient Object Detection

RGB-D Salient Object Detection via 3D Convolutional Neural Networks

1 code implementation • 25 Jan 2021 • Qian Chen, Ze Liu, Yi Zhang, Keren Fu, Qijun Zhao, Hongwei Du

The proposed model, named RD3D, aims at pre-fusion in the encoder stage and in-depth fusion in the decoder stage to effectively promote the full integration of RGB and depth streams.
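One way to picture fusing RGB and depth with 3D convolutions is to stack the two modalities along an extra axis and let a 3D kernel mix them; the sketch below is only an illustration of that idea under assumed layer sizes, not the RD3D implementation.

```python
import torch
import torch.nn as nn

class Fusion3D(nn.Module):
    """Illustrative 3D-convolution fusion: RGB and depth feature maps are
    stacked along a new 'modality' axis and mixed by a 3D kernel."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(2, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat, depth_feat):
        x = torch.stack([rgb_feat, depth_feat], dim=2)   # (B, C, 2, H, W)
        return self.mix(x).squeeze(2)                    # modality axis collapses: (B, C, H, W)

fuse = Fusion3D(64)
out = fuse(torch.randn(1, 64, 44, 44), torch.randn(1, 64, 44, 44))
print(out.shape)  # torch.Size([1, 64, 44, 44])
```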

RGB-D Salient Object Detection • Salient Object Detection

EF-Net: A novel enhancement and fusion network for RGB-D saliency detection

1 code implementation • 4 Nov 2020 • Qian Chen, Keren Fu, Ze Liu, Geng Chen, Hongwei Du, Bensheng Qiu, Ling Shao

Finally, we propose an effective layer-wise aggregation module to fuse the features extracted from the enhanced depth maps and RGB images for the accurate detection of salient objects.
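A minimal sketch of layer-wise aggregation, under the assumption that per-level RGB and enhanced-depth features are fused by 1x1 convolutions and accumulated from coarse to fine; this is not EF-Net's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseAggregation(nn.Module):
    """Toy layer-wise aggregation: fuse RGB and (enhanced) depth features at
    each encoder level, then accumulate the result from coarse to fine."""
    def __init__(self, channels_per_level, out_channels=64):
        super().__init__()
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * c, out_channels, kernel_size=1) for c in channels_per_level
        )

    def forward(self, rgb_feats, depth_feats):
        # rgb_feats / depth_feats: lists of (B, C_i, H_i, W_i), ordered fine -> coarse
        fused = [conv(torch.cat([r, d], dim=1))
                 for conv, r, d in zip(self.fuse, rgb_feats, depth_feats)]
        out = fused[-1]                                   # start at the coarsest level
        for feat in reversed(fused[:-1]):                 # walk back toward full resolution
            out = feat + F.interpolate(out, size=feat.shape[-2:],
                                       mode="bilinear", align_corners=False)
        return out

agg = LayerwiseAggregation([64, 128])
out = agg([torch.randn(1, 64, 88, 88), torch.randn(1, 128, 44, 44)],
          [torch.randn(1, 64, 88, 88), torch.randn(1, 128, 44, 44)])
print(out.shape)  # torch.Size([1, 64, 88, 88])
```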

Object Detection • Saliency Detection • +1

Light Field Salient Object Detection: A Review and Benchmark

1 code implementation • 10 Oct 2020 • Keren Fu, Yao Jiang, Ge-Peng Ji, Tao Zhou, Qijun Zhao, Deng-Ping Fan

Secondly, we benchmark nine representative light field SOD models together with several cutting-edge RGB-D SOD models on four widely used light field datasets, from which insightful discussions and analyses, including a comparison between light field SOD and RGB-D SOD models, are achieved.

Object Detection • Saliency Detection • +1

Siamese Network for RGB-D Salient Object Detection and Beyond

2 code implementations • 26 Aug 2020 • Keren Fu, Deng-Ping Fan, Ge-Peng Ji, Qijun Zhao, Jianbing Shen, Ce Zhu

Inspired by the observation that RGB and depth modalities actually present certain commonality in distinguishing salient objects, a novel joint learning and densely cooperative fusion (JL-DCF) architecture is designed to learn from both RGB and depth inputs through a shared network backbone, known as the Siamese architecture.
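The Siamese idea itself is simple to sketch: a single backbone with shared weights embeds both the RGB image and the depth map (tiled to three channels) before fusion. The sketch below assumes a recent torchvision ResNet-50 backbone and a toy 1x1 fusion head; it is not the JL-DCF code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SiameseRGBD(nn.Module):
    """Illustrative Siamese setup: one shared backbone processes both the RGB
    image and the depth map, so both modalities are embedded by identical
    weights before a simple fusion head."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.fuse = nn.Conv2d(2 * 2048, 1, kernel_size=1)              # toy saliency head

    def forward(self, rgb, depth):
        depth3 = depth.repeat(1, 3, 1, 1) if depth.shape[1] == 1 else depth
        f_rgb = self.encoder(rgb)      # shared weights ...
        f_dep = self.encoder(depth3)   # ... applied to both modalities
        return self.fuse(torch.cat([f_rgb, f_dep], dim=1))

model = SiameseRGBD()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
print(logits.shape)  # torch.Size([1, 1, 7, 7])
```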

Ranked #2 on RGB-D Salient Object Detection on SIP (using extra training data)

RGB-D Salient Object Detection • Salient Object Detection • +1

Unsupervised Many-to-Many Image-to-Image Translation Across Multiple Domains

no code implementations • 28 Nov 2019 • Ye Lin, Keren Fu, Shenggui Ling, Cheng Peng

To improve the image quality, we propose an effective many-to-many mapping framework for unsupervised multi-domain image-to-image translation.

Translation • Unsupervised Image-To-Image Translation

Robust Visual Tracking via Inverse Nonnegative Matrix Factorization

no code implementations • 20 Sep 2015 • Fanghui Liu, Tao Zhou, Keren Fu, Irene Y. H. Gu, Jie Yang

It utilizes both the foreground and background information, and imposes a local coordinate constraint, where the basis matrix is a sparse matrix formed from the linear combination of candidates with corresponding nonnegative coefficient vectors.
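As a rough illustration of this kind of objective (not the paper's exact formulation; the symbols, the squared-norm locality penalty, and the weighting are assumptions), a locality-constrained nonnegative coding of an observation x over a candidate matrix D can be written as:

```latex
\min_{\mathbf{c} \ge 0} \; \left\| \mathbf{x} - \mathbf{D}\mathbf{c} \right\|_2^2
  + \lambda \left\| \mathbf{d} \odot \mathbf{c} \right\|_2^2
```

where the columns of D are the candidates, c is the nonnegative coefficient vector, d holds the distances from x to each candidate (the local coordinate constraint, which pushes coefficients of far-away candidates toward zero and thus keeps the combination sparse), and ⊙ denotes element-wise multiplication.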

Visual Tracking

Saliency Propagation From Simple to Difficult

no code implementations • CVPR 2015 • Chen Gong, Dacheng Tao, Wei Liu, Stephen J. Maybank, Meng Fang, Keren Fu, Jie Yang

In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and then assign the simplest regions to the learner.
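The simple-to-difficult ordering can be illustrated with a small, self-contained sketch; it is not the paper's teaching-to-learn algorithm, and the difficulty score, nearest-neighbour weighting, and region features are all assumptions made for illustration.

```python
import numpy as np

def propagate_simple_to_difficult(region_feats, seed_saliency, difficulty, k=2):
    """Toy simple-to-difficult propagation: unlabelled regions are visited in
    order of a difficulty score and inherit saliency from their k nearest
    regions (in feature space) that already carry labels."""
    saliency = np.array(seed_saliency, dtype=float)   # NaN marks unlabelled regions
    labelled = ~np.isnan(saliency)
    for i in np.argsort(difficulty):                  # simplest regions first
        if labelled[i]:
            continue
        idx = np.flatnonzero(labelled)                # regions already labelled
        dist = np.linalg.norm(region_feats[idx] - region_feats[i], axis=1)
        nearest = np.argsort(dist)[:k]
        weights = np.exp(-dist[nearest])              # closer regions teach more
        saliency[i] = np.average(saliency[idx[nearest]], weights=weights)
        labelled[i] = True
    return saliency

# Usage with six toy regions, two of which are seeded
feats = np.random.rand(6, 8)
seeds = np.array([1.0, np.nan, np.nan, 0.0, np.nan, np.nan])
print(propagate_simple_to_difficult(feats, seeds, difficulty=np.random.rand(6)))
```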

Saliency Detection
