no code implementations • ECCV 2020 • Lijun Wang, Jianming Zhang, Yifan Wang, Huchuan Lu, Xiang Ruan
This paper proposes a hierarchical loss for monocular depth estimation, which measures the differences between the prediction and ground truth in hierarchical embedding spaces of depth maps.
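A minimal sketch of the multi-scale intuition, assuming average-pooled depth pyramids stand in for the paper's learned hierarchical embedding spaces (the function and pooling scheme are illustrative, not the authors' method):

```python
import torch
import torch.nn.functional as F

def hierarchical_depth_loss(pred, gt, num_levels=4):
    """Compare prediction and ground truth at several scales.

    Illustrative stand-in: the paper measures differences in learned
    hierarchical embedding spaces; here a simple average-pooled depth
    pyramid approximates that hierarchy.
    """
    loss = 0.0
    for level in range(num_levels):
        if level > 0:  # halve resolution at each level
            pred = F.avg_pool2d(pred, kernel_size=2)
            gt = F.avg_pool2d(gt, kernel_size=2)
        loss = loss + F.l1_loss(pred, gt)
    return loss / num_levels

# Usage: pred and gt are (N, 1, H, W) depth maps.
# loss = hierarchical_depth_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```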
no code implementations • 26 Dec 2022 • Lijun Wang, Suradej Duangpummet, Masashi Unoki
The root-mean-square errors between the estimated and ground-truth results were used to compare the proposed method with the previous method.
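The metric itself is standard; a minimal NumPy sketch for reference:

```python
import numpy as np

def rmse(estimated, ground_truth):
    """Root-mean-square error between estimated and ground-truth values."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.sqrt(np.mean((estimated - ground_truth) ** 2))

# e.g. rmse([0.52, 0.48], [0.50, 0.50]) -> 0.02
```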
no code implementations • CVPR 2022 • Yifan Wang, Wenbo Zhang, Lijun Wang, Ting Liu, Huchuan Lu
We design an Uncertainty Mining Network (UMNet), which consists of multiple Merge-and-Split (MS) modules that recursively analyze the commonality and difference among multiple noisy labels and infer a pixel-wise uncertainty map for each label.
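A heavily simplified sketch of one possible Merge-and-Split structure, where mean-fused features model commonality and per-branch convolutions model difference (the module internals here are assumptions, not the actual UMNet design):

```python
import torch
import torch.nn as nn

class MergeSplit(nn.Module):
    """Hypothetical Merge-and-Split (MS) module sketch."""
    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Conv2d(channels, channels, 3, padding=1)
        self.split = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats):  # feats: list of (N, C, H, W), one per noisy label
        # Merge: fuse all branches to capture their commonality.
        common = self.merge(torch.stack(feats, dim=0).mean(dim=0))
        # Split: give each branch its own feature plus the shared one,
        # so branch-specific convolutions can model the differences.
        return [self.split(torch.cat([f, common], dim=1)) for f in feats]
```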
no code implementations • 15 Nov 2021 • Zhongwang Pang, Guan Wang, Bo Wang, Lijun Wang
This stands in clear contrast to the result of the cross-correlation method, whose localization error is 70 m with a standard deviation of 208.4 m. Compared with the cross-correlation method, TSDEV has the same resistance to white noise, but imposes fewer boundary conditions and better suppresses linear drift and common noise, which leads to more precise TDE results.
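For context, the cross-correlation baseline that TSDEV is compared against can be sketched generically (a standard textbook implementation, not the authors' code):

```python
import numpy as np

def crosscorr_delay(x, y, fs):
    """Estimate the delay (in seconds) of signal y relative to x
    via the peak of their cross-correlation, at sampling rate fs."""
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)  # peak position -> lag in samples
    return lag / fs

# e.g. if y is x delayed by 10 samples at fs = 1000 Hz, returns 0.01 s
```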
1 code implementation • 19 Oct 2021 • Jiao Peng, Feifan Wang, Zhongqiang Fu, Yiying Hu, Zichen Chen, Xinghan Zhou, Lijun Wang
Recent years have witnessed the advancement of deep learning vision technologies and applications in the medical industry.
1 code implementation • ICCV 2021 • Kenan Dai, Jie Zhao, Lijun Wang, Dong Wang, Jianhua Li, Huchuan Lu, Xuesheng Qian, Xiaoyun Yang
Deep learning based visual trackers entail offline pre-training on large volumes of video datasets with accurate bounding box annotations that are labor-intensive to obtain.
no code implementations • ICCV 2021 • Lijun Wang, Yifan Wang, Linzhao Wang, Yunlong Zhan, Ying Wang, Huchuan Lu
The integration of SAG loss and two-stream network enables more consistent scale inference and more accurate relative depth estimation.
2 code implementations • 24 Aug 2020 • Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang
Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.
1 code implementation • 9 Aug 2020 • Lijun Wang, Yanting Zhu, Jue Shi, Xiaodan Fan
We focus on the general MOT problem regardless of appearance and propose an appearance-free tripartite matching to avoid the irregular-velocity problem of bipartite matching.
no code implementations • CVPR 2020 • Lijun Wang, Jianming Zhang, Oliver Wang, Zhe Lin, Huchuan Lu
Monocular depth estimation is an ill-posed problem, and as such critically relies on scene priors and semantics.
Ranked #1 on Depth Estimation on Cityscapes test
1 code implementation • 24 Feb 2020 • Runmin Wu, Kunyao Zhang, Lijun Wang, Yue Wang, Pingping Zhang, Huchuan Lu, Yizhou Yu
Though recent research has achieved remarkable progress in generating realistic images with generative adversarial networks (GANs), the lack of training stability is still a lingering concern of most GANs, especially on high-resolution inputs and complex datasets.
no code implementations • 13 Jun 2019 • Lijun Wang, Jianbing Gong, Yingxia Zhang, Tianmou Liu, Junhui Gao
We designed a fast similarity search engine for large molecular libraries: FPScreen.
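A minimal sketch of the general technique behind fingerprint-based screening engines of this kind, assuming boolean bit-vector fingerprints compared by Tanimoto similarity (illustrative, not FPScreen's actual implementation):

```python
import numpy as np

def tanimoto_screen(fp_query, fp_library, top_k=100):
    """Rank a fingerprint library by Tanimoto similarity to a query.

    fp_query: (n_bits,) boolean array; fp_library: (n_mols, n_bits).
    """
    inter = np.logical_and(fp_library, fp_query).sum(axis=1)
    union = np.logical_or(fp_library, fp_query).sum(axis=1)
    scores = inter / np.maximum(union, 1)     # avoid division by zero
    return np.argsort(scores)[::-1][:top_k]   # indices of the best hits
```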
no code implementations • 18 Oct 2018 • Lijun Wang, Xiaohui Shen, Jianming Zhang, Oliver Wang, Zhe Lin, Chih-Yao Hsieh, Sarah Kong, Huchuan Lu
To achieve this, we propose a novel neural network model comprised of a depth prediction module, a lens blur module, and a guided upsampling module.
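A sketch of how the three stated modules might compose, with each module left as a placeholder (the internals and the guided-upsampling signature are assumptions, not the paper's architecture):

```python
import torch.nn as nn

class LensBlurPipeline(nn.Module):
    """Illustrative composition: depth prediction -> lens blur ->
    guided upsampling, as the abstract describes."""
    def __init__(self, depth_net, blur_net, upsample_net):
        super().__init__()
        self.depth_net = depth_net        # predicts a depth map
        self.blur_net = blur_net          # renders depth-dependent lens blur
        self.upsample_net = upsample_net  # guided upsampling to full resolution

    def forward(self, image):
        depth = self.depth_net(image)
        blurred = self.blur_net(image, depth)
        return self.upsample_net(blurred, image)  # the input image acts as guide
```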
3 code implementations • 12 Sep 2018 • Yunhua Zhang, Dong Wang, Lijun Wang, Jinqing Qi, Huchuan Lu
Compared with short-term tracking, the long-term tracking task requires determining whether the tracked object is present or absent, and then estimating an accurate bounding box if it is present or conducting image-wide re-detection if it is absent.
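The present/absent decision reads naturally as a confidence-gated fallback; a hypothetical skeleton (the tracker/detector interfaces and threshold are illustrative, not the authors' API):

```python
def track_frame(tracker, detector, frame, last_box, conf_threshold=0.5):
    """One step of long-term tracking: local estimate if the object is
    judged present, image-wide re-detection if judged absent."""
    box, confidence = tracker.track_locally(frame, last_box)
    if confidence >= conf_threshold:   # object judged present
        return box                     # keep the accurate local estimate
    return detector.redetect(frame)    # judged absent: search the whole image
```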
no code implementations • ECCV 2018 • Yunhua Zhang, Lijun Wang, Jinqing Qi, Dong Wang, Mengyang Feng, Huchuan Lu
In this paper, we circumvent this issue by proposing a local structure learning method, which simultaneously considers the local patterns of the target and their structural relationships for more accurate target tracking.
no code implementations • CVPR 2017 • Lijun Wang, Huchuan Lu, Yifan Wang, Mengyang Feng, Dong Wang, Bao-Cai Yin, Xiang Ruan
In the second stage, FIN is fine-tuned with its predicted saliency maps as ground truth.
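A minimal sketch of this self-training idea, assuming stage-one predictions are binarized and frozen as pseudo ground truth (the 0.5 threshold and training details are assumptions, not the paper's exact procedure):

```python
import torch

def second_stage_finetune(fin, optimizer, images, loss_fn):
    """Fine-tune a saliency network on its own predictions."""
    fin.eval()
    with torch.no_grad():  # freeze predictions as pseudo labels
        pseudo_gt = (fin(images) > 0.5).float()
    fin.train()
    optimizer.zero_grad()
    loss = loss_fn(fin(images), pseudo_gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```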
no code implementations • 27 Jul 2016 • Bohan Zhuang, Lijun Wang, Huchuan Lu
In the discriminative model, we exploit the advances of deep learning architectures to learn generic features which are robust to both background clutters and foreground appearance variations.
no code implementations • 26 Jul 2016 • Yifan Wang, Lijun Wang, Hongyu Wang, Peihua Li
In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN.
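A minimal end-to-end sketch of the three jointly learned stages, using sub-pixel convolution as a stand-in for the learned upsampling (the layer sizes are illustrative, not the paper's network):

```python
import torch.nn as nn

class EndToEndSR(nn.Module):
    """Feature extraction, learned upsampling, and HR reconstruction
    trained jointly as one CNN."""
    def __init__(self, scale=2, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(  # learned sub-pixel upsampling
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.reconstruct = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr_image):
        return self.reconstruct(self.upsample(self.features(lr_image)))
```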
no code implementations • CVPR 2016 • Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu
To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serve as a regularization forcing each base learner to focus on different input features.
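A sketch of the masking idea, assuming element-wise binary masks gate the input features of a convolutional layer (the mask granularity and keep probability are assumptions):

```python
import torch

def conv_with_random_mask(conv, features, keep_prob=0.7):
    """Gate input features with a random binary mask before the
    convolution, so each base learner is trained to rely on a
    different subset of input features."""
    mask = (torch.rand_like(features) < keep_prob).float()
    return conv(features * mask)
```

The effect is dropout-like, but the stated goal is diversity: each base learner of the ensemble is regularized toward a different view of the features.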
no code implementations • ICCV 2015 • Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu
Instead of treating the convolutional neural network (CNN) as a black-box feature extractor, we conduct an in-depth study of the properties of CNN features pre-trained offline on massive image data and the ImageNet classification task.
no code implementations • CVPR 2015 • Lijun Wang, Huchuan Lu, Xiang Ruan, Ming-Hsuan Yang
In the global search stage, the local saliency map, together with global contrast and geometric information, is used as global features to describe a set of object candidate regions.