no code implementations • 9 Jul 2024 • Huilin Chen, Qiyu Sun, Fangfei Li, Yang Tang
Computer vision tasks are crucial for aerospace missions, as they help spacecraft understand and interpret the space environment, for example by estimating position and orientation, reconstructing 3D models, and recognizing objects; such tasks have been extensively studied to carry out these missions successfully.
no code implementations • 6 Nov 2023 • Seok-Young Chung, Qiyu Sun
In this paper, we introduce a Barron space of functions on a compact domain of graph signals.
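For orientation, the block below recalls one common form of the classical Barron space in the Euclidean setting; the Barron space on a compact domain of graph signals introduced in the paper presumably adapts this construction, so the formulas are background rather than the paper's definitions.

```latex
% Classical Barron space on \Omega \subset \mathbb{R}^d with ReLU activation
% \sigma(t) = \max(t, 0): functions admitting the integral representation
f(x) \;=\; \int a \,\sigma\bigl(\langle w, x\rangle + b\bigr)\,\mathrm{d}\mu(a, w, b),
\qquad x \in \Omega,
% equipped with (one common form of) the Barron norm
\|f\|_{\mathcal{B}} \;=\; \inf_{\mu} \int |a|\,\bigl(\|w\|_{1} + |b|\bigr)\,\mathrm{d}\mu(a, w, b),
% where the infimum is taken over all representing measures \mu.
```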
1 code implementation • ICCV 2023 • Chaoqiang Zhao, Matteo Poggi, Fabio Tosi, Lei Zhou, Qiyu Sun, Yang Tang, Stefano Mattoccia
This paper tackles the challenges of self-supervised monocular depth estimation in indoor scenes caused by large rotation between frames and low texture.
no code implementations • 12 Sep 2023 • Qiyu Sun, Huilin Chen, Meng Zheng, Ziyan Wu, Michael Felsberg, Yang Tang
Domain generalized semantic segmentation (DGSS) is a critical yet challenging task, where the model is trained only on source data without access to any target data.
1 code implementation • ICCV 2023 • Ruihao Xia, Chaoqiang Zhao, Meng Zheng, Ziyan Wu, Qiyu Sun, Yang Tang
However, limited by the low dynamic range of conventional cameras, images fail to capture structural details and boundary information in low-light conditions.
no code implementations • 26 Jul 2023 • Kexuan Zhang, Qiyu Sun, Chaoqiang Zhao, Yang Tang
Deep learning has revolutionized the field of artificial intelligence.
no code implementations • 4 Jul 2023 • Qiyu Sun, Pavlo Melnyk, Michael Felsberg, Yang Tang
Domain generalized semantic segmentation (DGSS) is an essential but highly challenging task, in which the model is trained only on source data and no target data is available.
1 code implementation • CVPR 2024 • Johan Edstedt, Qiyu Sun, Georg Bökman, Mårten Wadenbäck, Michael Felsberg
The aim is to learn a robust model, i.e., a model able to match under challenging real-world changes.
no code implementations • 25 Nov 2022 • Tianyu Wu, Yang Tang, Qiyu Sun, Luolin Xiong
To further fuse such multi-modal information, the correspondence between chemical features learned from different representations should be considered.
no code implementations • 20 Nov 2022 • Arash Amini, Qiyu Sun, Nader Motee
We consider a class of stochastic dynamical networks whose governing dynamics can be modeled using a coupling function.
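A rough, self-contained illustration of such a setup is an Euler-Maruyama simulation of coupled stochastic node dynamics, with the interaction written through a coupling function g; the node dynamics, coupling function, graph, and noise level below are generic placeholders, not the class of systems analyzed in the paper.

```python
# Generic coupled stochastic network, simulated with Euler-Maruyama.
# f is the isolated node dynamics, g the coupling function (both placeholders).
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt, sigma = 10, 2000, 0.01, 0.05
A = (rng.random((N, N)) < 0.3).astype(float)   # random directed adjacency
np.fill_diagonal(A, 0.0)

def f(x):                      # isolated node dynamics
    return -x

def g(xj, xi):                 # coupling function between node states
    return np.tanh(xj - xi)

x = rng.standard_normal(N)
for _ in range(steps):
    coupling = np.array([sum(A[i, j] * g(x[j], x[i]) for j in range(N))
                         for i in range(N)])
    x = x + (f(x) + coupling) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

print("final node states:", np.round(x, 3))
```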
no code implementations • 14 Nov 2022 • Wenqi Ren, Qiyu Sun, Chaoqiang Zhao, Yang Tang
In contrast, we present a domain generalization framework based on meta-learning to extract representative and discriminative internal properties of real hazy domains without test-time training.
no code implementations • 13 Nov 2022 • Wenqi Ren, Yang Tang, Qiyu Sun, Chaoqiang Zhao, Qing-Long Han
Specifically, the preliminaries on few/zero-shot visual semantic segmentation, including the problem definitions, typical datasets, and technical remedies, are briefly reviewed and discussed.
no code implementations • 12 May 2022 • Yang Chen, Cheng Cheng, Qiyu Sun
The proposed GFT is consistent with the conventional GFT in the undirected graph setting, and on directed circulant graphs, the proposed GFT is the classical discrete Fourier transform, up to some rotation, permutation and phase adjustment.
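The statement about directed circulant graphs can be checked numerically: on a directed cycle (the simplest circulant graph), the eigenvectors of the shift matrix are DFT vectors, so a GFT built from them agrees with the classical DFT up to permutation and per-vector phase. The sketch below is only this sanity check, not the paper's construction of the GFT.

```python
# Sanity check: eigenvectors of a directed cyclic shift match DFT columns
# up to permutation and a unit-modulus phase factor.
import numpy as np

N = 8
S = np.roll(np.eye(N), 1, axis=0)        # cyclic shift matrix (circulant)
_, V = np.linalg.eig(S)                  # columns: eigenvectors of S

# Classical DFT matrix; its columns are the DFT basis vectors.
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

for k in range(N):
    v = V[:, k] / np.linalg.norm(V[:, k])
    corr = np.abs(F.conj().T @ v)        # |inner product| with each DFT column
    assert np.isclose(corr.max(), 1.0)   # matches one DFT vector up to phase
print("all eigenvectors match DFT columns up to permutation and phase")
```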
no code implementations • 9 May 2022 • Cong Zheng, Cheng Cheng, Qiyu Sun
In this paper, we consider Wiener filters to reconstruct deterministic and (wide-band) stationary graph signals from observations corrupted by random noise. We propose distributed algorithms to implement Wiener filters and inverse filters on networks in which agents are equipped with a data processing subsystem for limited data storage and computation power, and with a one-hop communication subsystem for direct data exchange only with their adjacent agents.
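A toy version of the local-computation idea is sketched below: for a polynomial graph filter, every product with the filter matrix combines only one-hop neighbor values, so an iterative (gradient-descent style) inverse filter can be run with local exchanges. The graph, filter, and iteration are illustrative and are not the paper's distributed algorithm.

```python
# Toy sketch: inverse filtering of H = I + 0.5*L by gradient descent.
# Each product with H only mixes one-hop neighbor values, which is the kind
# of local operation a one-hop communication subsystem can implement.
import numpy as np

rng = np.random.default_rng(0)
N = 30
A = (rng.random((N, N)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T                  # random undirected graph
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian

H = np.eye(N) + 0.5 * L                         # a simple polynomial graph filter
x_true = rng.standard_normal(N)
b = H @ x_true + 0.01 * rng.standard_normal(N)  # noisy observation

mu = 1.0 / np.linalg.norm(H, 2) ** 2            # step size below 2 / ||H||^2
x = np.zeros(N)
for _ in range(2000):
    x = x - mu * H.T @ (H @ x - b)              # two one-hop exchanges per step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```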
no code implementations • 13 Apr 2022 • Elie Atallah, Nazanin Rahnavard, Qiyu Sun
In this paper, we consider a large network containing many regions such that each region is equipped with a worker with some data processing and communication capability.
no code implementations • 26 Mar 2022 • Qiyu Sun, Gary G. Yen, Yang Tang, Chaoqiang Zhao
To boost the transferability of depth estimation models, we propose an adversarial depth estimation task and train the model in the pipeline of meta-learning.
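The training pipeline can be pictured with a bare-bones, MAML-style update in which the task losses are abstract callables; the adversarial depth-estimation objective itself is not reproduced here, and the toy weights and losses below are placeholders.

```python
# Bare-bones MAML-style loop: inner adaptation on a support task, outer update
# of the initialization with the query loss of the adapted weights.
import torch

w = torch.randn(3, requires_grad=True)           # stand-in for model weights

def support_loss(params):                         # placeholder for the (adversarial) training task
    return ((params @ torch.ones(3)) - 1.0) ** 2

def query_loss(params):                           # placeholder for evaluation on held-out data
    return ((params @ torch.ones(3)) - 1.0) ** 2

inner_lr, outer_lr = 1e-2, 1e-2
for _ in range(100):
    g = torch.autograd.grad(support_loss(w), w, create_graph=True)[0]
    w_adapted = w - inner_lr * g                  # inner (task-specific) step
    outer_g = torch.autograd.grad(query_loss(w_adapted), w)[0]
    with torch.no_grad():
        w -= outer_lr * outer_g                   # outer (meta) step
```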
1 code implementation • 28 Jul 2021 • Chaoqiang Zhao, Yang Tang, Qiyu Sun
Meanwhile, we further address the effect of unstable image transfer quality on domain adaptation: an image adaptation approach is proposed to evaluate the quality of transferred images and to re-weight the corresponding losses, thereby improving the performance of the adapted depth model.
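One simple way to realize such re-weighting is to scale each image's loss by an estimated transfer-quality score, as in the hypothetical sketch below; the score source, weighting scheme, and names are illustrative, not the paper's design.

```python
# Hypothetical re-weighting of per-image losses by an estimated transfer
# quality score (higher score = better transferred image).
import torch

def quality_weighted_loss(per_image_loss: torch.Tensor,
                          quality_score: torch.Tensor) -> torch.Tensor:
    """per_image_loss, quality_score: shape (B,), scores in [0, 1]."""
    weights = quality_score / (quality_score.sum() + 1e-8)
    return (weights * per_image_loss).sum()

# Poorly transferred images (low score) contribute less to the adaptation loss.
loss = quality_weighted_loss(torch.tensor([0.8, 0.7, 2.5]),
                             torch.tensor([0.9, 0.8, 0.1]))
print(loss)
```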
no code implementations • 29 Nov 2020 • Chongzhen Zhang, Yang Tang, Chaoqiang Zhao, Qiyu Sun, Zhencheng Ye, Jürgen Kurths
Semantic segmentation and depth completion are two challenging tasks in scene understanding, and they are widely used in robotics and autonomous driving.
no code implementations • 9 Apr 2020 • Chaoqiang Zhao, Gary G. Yen, Qiyu Sun, Chongzhen Zhang, Yang Tang
This paper proposes a masked generative adversarial network (GAN) for unsupervised monocular depth and ego-motion estimation. The MaskNet and the Boolean mask scheme are designed in this framework to eliminate the effects of occlusions and of visual field changes on the reconstruction loss and the adversarial loss, respectively.
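The effect of such masking on the reconstruction loss can be illustrated with a generic masked photometric loss, where invalid (e.g., occluded) pixels are simply excluded from the average; the function below is a hedged sketch with hypothetical names, not the paper's MaskNet or mask scheme.

```python
# Generic masked photometric reconstruction loss: only pixels with
# valid_mask == 1 contribute to the average.
import torch

def masked_reconstruction_loss(target: torch.Tensor,
                               warped: torch.Tensor,
                               valid_mask: torch.Tensor) -> torch.Tensor:
    """target, warped: (B, 3, H, W); valid_mask: (B, 1, H, W) with values in {0, 1}."""
    photometric = (target - warped).abs().mean(dim=1, keepdim=True)
    return (photometric * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)

t, w = torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8)
m = (torch.rand(2, 1, 8, 8) > 0.2).float()      # e.g., 1 = visible, 0 = occluded
print(masked_reconstruction_loss(t, w, m))
```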
no code implementations • 29 Mar 2020 • Chongzhen Zhang, Jianrui Wang, Gary G. Yen, Chaoqiang Zhao, Qiyu Sun, Yang Tang, Feng Qian, Jürgen Kurths
Then, we further review the performance of RL and meta-learning in autonomous systems in terms of accuracy, transferability, or both, involving pedestrian tracking, robot navigation, and robotic manipulation.
no code implementations • 14 Mar 2020 • Chaoqiang Zhao, Qiyu Sun, Chongzhen Zhang, Yang Tang, Feng Qian
With the rapid development of deep neural networks, monocular depth estimation based on deep learning has been widely studied in recent years and has achieved promising accuracy.
no code implementations • 8 Jan 2020 • Yang Tang, Chaoqiang Zhao, Jianrui Wang, Chongzhen Zhang, Qiyu Sun, Weixing Zheng, Wenli Du, Feng Qian, Juergen Kurths
Second, we review the visual-based environmental perception and understanding methods based on deep learning, including deep learning-based monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional vSLAM frameworks.
no code implementations • 11 Dec 2019 • Chaoqiang Zhao, Yang Tang, Qiyu Sun, Athanasios V. Vasilakos
Extensive experiments on the KITTI dataset show that the proposed constraints can effectively improve the scale-consistency of TrajNet when compared with previous unsupervised monocular methods, and integration with TrajNet makes the initialization and tracking of DSO more robust and accurate.
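As a generic illustration of what a scale-consistency constraint can look like, the sketch below penalizes the relative disagreement between depth predictions of adjacent frames (in the spirit of geometry-consistency losses); it is not necessarily the constraint used for TrajNet.

```python
# Generic scale-consistency term: relative disagreement between the depth of
# frame a and the depth of frame b warped/projected into frame a.
import torch

def scale_consistency_loss(depth_a: torch.Tensor,
                           depth_b_warped: torch.Tensor) -> torch.Tensor:
    diff = (depth_a - depth_b_warped).abs() / (depth_a + depth_b_warped + 1e-7)
    return diff.mean()

print(scale_consistency_loss(torch.rand(1, 1, 8, 8) + 0.5,
                             torch.rand(1, 1, 8, 8) + 0.5))
```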