Representing visual signals with implicit representations (e.g., coordinate-based deep networks) has prevailed across many vision tasks.
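A coordinate-based implicit representation can be sketched as an MLP that maps a pixel coordinate to a color value. The layer sizes, sinusoidal positional encoding, and random weights below are illustrative assumptions, not taken from any specific paper:

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    """Lift raw coordinates with sin/cos features at increasing frequencies."""
    feats = [coords]
    for i in range(num_freqs):
        feats.append(np.sin((2 ** i) * np.pi * coords))
        feats.append(np.cos((2 ** i) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class CoordinateMLP:
    """Toy 2-layer MLP mapping an encoded (x, y) coordinate to RGB."""
    def __init__(self, in_dim, hidden=64, out_dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, out_dim))

    def __call__(self, coords):
        h = np.tanh(coords @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # RGB in (0, 1)

# Query the network on an 8x8 grid of normalized pixel coordinates.
coords = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                              np.linspace(0, 1, 8)), axis=-1).reshape(-1, 2)
enc = positional_encoding(coords)               # (64, 18)
rgb = CoordinateMLP(in_dim=enc.shape[-1])(enc)  # (64, 3)
```

In practice the weights would be fit by gradient descent against the target signal; the point here is only that the image is represented by network parameters queried at continuous coordinates, rather than by a pixel array.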
Despite the rapid development of Neural Radiance Fields (NeRF), the need for dense view coverage largely prohibits their wider application.
Deep vision models are now widely integrated into visual reinforcement learning (RL) to parameterize policy networks.
The proposed CNN-GCN method achieves state-of-the-art (SOTA) performance on the semantic symbol spotting task and serves as a baseline network for the panoptic symbol spotting task.
Deep-learning-based 3D shape generation methods generally use latent features extracted from color images to encode object semantics and guide the shape generation process.
Deep multi-view stereo (MVS) and stereo matching approaches generally construct 3D cost volumes to regularize and regress the output depth or disparity.
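The cost-volume idea can be illustrated with a minimal correlation-based sketch for stereo matching: for each candidate disparity, the right feature map is shifted against the left one and a matching score is recorded, giving a (disparity, height, width) volume from which disparity is regressed by a soft-argmax. The feature shapes and the use of raw correlation are illustrative assumptions; real networks use learned features and 3D convolutions to regularize the volume.

```python
import numpy as np

def build_cost_volume(feat_left, feat_right, max_disp):
    """Build a (D, H, W) volume of left/right feature correlations
    at each candidate disparity d (right map shifted by d pixels)."""
    C, H, W = feat_left.shape
    volume = np.zeros((max_disp, H, W), dtype=np.float32)
    for d in range(max_disp):
        if d == 0:
            volume[d] = (feat_left * feat_right).mean(axis=0)
        else:
            volume[d, :, d:] = (feat_left[:, :, d:] *
                                feat_right[:, :, :-d]).mean(axis=0)
    return volume

def soft_argmax_disparity(volume):
    """Regress disparity as a softmax-weighted expectation over
    candidates (higher correlation = higher weight), so the
    estimate is differentiable, unlike a hard argmax."""
    probs = np.exp(volume) / np.exp(volume).sum(axis=0, keepdims=True)
    disps = np.arange(volume.shape[0]).reshape(-1, 1, 1)
    return (probs * disps).sum(axis=0)

# Toy features: the "right" view is the left view shifted by 2 pixels.
left = np.random.default_rng(0).normal(size=(8, 4, 6)).astype(np.float32)
right = np.roll(left, -2, axis=2)
vol = build_cost_volume(left, right, max_disp=4)   # (4, 4, 6)
disp = soft_argmax_disparity(vol)                  # (4, 6)
```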
The need for fast acquisition and automatic analysis of MRI data is growing in the age of big data.
Removing rain streaks from a single image is important because rainy images degrade the performance of many computer vision systems.
In multi-contrast magnetic resonance imaging (MRI), compressed sensing theory can accelerate imaging by sampling fewer measurements within each contrast.
In this paper, we propose a segmentation-aware deep fusion network, called SADFN, for compressed sensing MRI.
Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires.