1 code implementation • ECCV 2020 • Yiming Qian, Yasutaka Furukawa
This paper proposes a novel single-image piecewise planar reconstruction technique that infers and enforces inter-plane relationships.
no code implementations • 1 Apr 2024 • Fenggen Yu, Yiming Qian, Xu Zhang, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, Hao Zhang
We present a differentiable rendering framework to learn structured 3D abstractions in the form of primitive assemblies from sparse RGB images capturing a 3D object.
1 code implementation • 31 Mar 2024 • Zhenyu Qian, Yiming Qian, Yuting Song, Fei Gao, Hai Jin, Chen Yu, Xia Xie
To equip the graph processing with both high accuracy and explainability, we introduce a novel approach that harnesses the power of a large language model (LLM), enhanced by an uncertainty-aware module to provide a confidence score on the generated answer.
no code implementations • 24 Jan 2024 • Guoxin Chen, Kexin Tang, Chao Yang, Fuying Ye, Yu Qiao, Yiming Qian
Moreover, existing reinforcement learning (RL) based methods overlook the structured relationships, underutilizing the potential of RL in structured reasoning.
1 code implementation • 27 Oct 2023 • Guoxin Chen, Yiming Qian, Bowen Wang, Liangzhi Li
Large language models have achieved superior performance on various natural language tasks.
no code implementations • 24 Oct 2023 • Junyi Liu, Liangzhi Li, Tong Xiang, Bowen Wang, Yiming Qian
Our summarization compression reduces the retrieval token size by 65% with a further 0.3% improvement in accuracy; semantic compression provides a more flexible way to trade off token size against performance, reducing the token size by 20% with only a 1.6% drop in accuracy.
1 code implementation • 19 Sep 2023 • Chuanyu Jiang, Yiming Qian, Lijun Chen, Yang Gu, Xia Xie
Our method outperforms the state of the art in both the unsupervised and semi-supervised categories.
no code implementations • ICCV 2023 • Ahmed Hatem, Yiming Qian, Yang Wang
This could be sub-optimal since it is difficult for the same model to handle all the variations during testing.
no code implementations • 31 Aug 2023 • Ahmed Hatem, Yiming Qian, Yang Wang
During meta-testing, the trained model is fine-tuned with a few gradient updates to produce a unique set of network parameters for each test instance.
1 code implementation • 23 Aug 2023 • Feiyu Zhang, Liangzhi Li, JunHao Chen, Zhouqiang Jiang, Bowen Wang, Yiming Qian
This approach is different from the pruning method as it is not limited by the initial number of training parameters, and each parameter matrix has a higher rank upper bound for the same training overhead.
no code implementations • 13 Apr 2023 • Akshay Gadi Patil, Yiming Qian, Shan Yang, Brian Jackson, Eric Bennett, Hao Zhang
The vast majority of 3D models that appear in gaming and VR/AR, and those we use to train geometric deep learning algorithms, are incomplete: they are modeled as surface meshes and are missing their interior structures.
no code implementations • 8 Apr 2023 • Meng Wang, Tian Lin, Lianyu Wang, Aidi Lin, Ke Zou, Xinxing Xu, Yi Zhou, Yuanyuan Peng, Qingquan Meng, Yiming Qian, Guoyao Deng, Zhiqun Wu, Junhong Chen, Jianhong Lin, Mingzhi Zhang, Weifang Zhu, Changqing Zhang, Daoqiang Zhang, Rick Siow Mong Goh, Yong Liu, Chi Pui Pang, Xinjian Chen, Haoyu Chen, Huazhu Fu
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world systems for the recognition and classification of retinal anomalies.
1 code implementation • 4 Apr 2023 • Dong Huo, Jian Wang, Yiming Qian, Yee-Hong Yang
Instead of relying on naive end-to-end training, we also propose a novel architecture that integrates the physical relationship between the spectral reflectance and the corresponding RGB images into the network based on our mathematical analysis.
no code implementations • 23 Mar 2023 • Meng Wang, Lianyu Wang, Xinxing Xu, Ke Zou, Yiming Qian, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
Our TWEU employs an evidential deep layer to produce the uncertainty score with the DR staging results for client reliability evaluation.
no code implementations • 31 Jan 2023 • Hugo Lemarchant, Liangzi Li, Yiming Qian, Yuta Nakashima, Hajime Nagahara
Vision Transformers (ViTs) are becoming a very popular paradigm for vision tasks as they achieve state-of-the-art performance on image classification.
no code implementations • 30 Jan 2023 • Meng Wang, Kai Yu, Chun-Mei Feng, Yiming Qian, Ke Zou, Lianyu Wang, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling, which enhances the performance and reliability of FL in non-IID domain features.
no code implementations • ICCV 2023 • Fenggen Yu, Yiming Qian, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, Hao Zhang
We present the first active learning tool for fine-grained 3D part labeling, a problem which challenges even the most advanced deep learning (DL) methods due to the significant structural variations among the small and intricate parts.
1 code implementation • 26 Jul 2022 • Yiming Qian, James H. Elder
Linear perspective cues derived from regularities of the built environment can be used to recalibrate both intrinsic and extrinsic camera parameters online, but these estimates can be unreliable due to irregularities in the scene, uncertainty in line segment estimation, and background clutter.
no code implementations • 18 Jun 2022 • Zhanghao Sun, Yu Zhang, Yicheng Wu, Dong Huo, Yiming Qian, Jian Wang
We propose three applications using our redundancy codes: (1) Self error-correction for SL imaging under strong ambient light, (2) Error detection for adaptive reconstruction under global illumination, and (3) Interference filtering with device-specific projection sequence encoding, especially for event camera-based SL and light curtain devices.
1 code implementation • 12 Apr 2022 • Dong Huo, Jian Wang, Yiming Qian, Yee-Hong Yang
Because most glass is transparent to visible light but opaque to thermal energy, the glass regions of a scene are more distinguishable in a pair of RGB and thermal images than in an RGB image alone.
no code implementations • 19 Mar 2022 • Qing Cai, Yiming Qian, Jinxing Li, Jun Lv, Yee-Hong Yang, Feng Wu, David Zhang
Transformer-based architectures have started to emerge in single-image super-resolution (SISR) and have achieved promising performance.
1 code implementation • CVPR 2022 • Jiacheng Chen, Yiming Qian, Yasutaka Furukawa
This paper presents a novel attention-based neural network for structured reconstruction, which takes a 2D raster image as an input and reconstructs a planar graph depicting an underlying geometric structure.
Tasks: Edge Classification, Extracting Buildings in Remote Sensing Images, +1
no code implementations • 2 Sep 2021 • Yiming Qian, Cheikh Brahim El Vaigh, Yuta Nakashima, Benjamin Renoust, Hajime Nagahara, Yutaka Fujioka
Buddha statues are a part of human culture, especially in Asia, and have been alongside human civilisation for more than 2,000 years.
1 code implementation • 15 Aug 2021 • Shihao Zou, Xinxin Zuo, Sen Wang, Yiming Qian, Chuan Guo, Li Cheng
This paper focuses on a new problem of estimating human pose and shape from single polarization images.
1 code implementation • 18 May 2021 • Sachini Herath, Saghar Irandoust, Bowen Chen, Yiming Qian, Pyojin Kim, Yasutaka Furukawa
The paper proposes a multi-modal sensor fusion algorithm that fuses WiFi, IMU, and floorplan information to infer an accurate and dense location history in indoor environments.
1 code implementation • CVPR 2021 • Yiming Qian, Hao Zhang, Yasutaka Furukawa
This paper presents Roof-GAN, a novel generative adversarial network that generates structured geometry of residential roof structures as a set of roof primitives and their relationships.
no code implementations • ECCV 2020 • Shihao Zou, Xinxin Zuo, Yiming Qian, Sen Wang, Chi Xu, Minglun Gong, Li Cheng
Inspired by recent advances in human shape estimation from single color images, in this paper we attempt to estimate human body shapes by leveraging the geometric cues in single polarization images.
no code implementations • 30 Apr 2020 • Shihao Zou, Xinxin Zuo, Yiming Qian, Sen Wang, Chuan Guo, Chi Xu, Minglun Gong, Li Cheng
Polarization images are known to capture polarized reflected light that preserves rich geometric cues of an object, which has motivated their recent application in reconstructing detailed surface normals of objects of interest.
no code implementations • 6 Jan 2020 • James H. Elder, Emilio J. Almazàn, Yiming Qian, Ron Tal
Traditional approaches to line segment detection typically involve perceptual grouping in the image domain and/or global accumulation in the Hough domain.
no code implementations • ECCV 2018 • Yiming Qian, Yinqiang Zheng, Minglun Gong, Yee-Hong Yang
This paper presents the first approach for simultaneously recovering the 3D shape of both the wavy water surface and the moving underwater scene.
no code implementations • 9 May 2018 • Bojian Wu, Yang Zhou, Yiming Qian, Minglun Gong, Hui Huang
Numerous techniques have been proposed for reconstructing 3D models for opaque objects in past decades.
no code implementations • CVPR 2017 • Yiming Qian, Minglun Gong, Yee-Hong Yang
3D reconstruction of dynamic fluid surfaces is an open and challenging problem in computer vision.
no code implementations • CVPR 2017 • Emilio J. Almazan, Ron Tal, Yiming Qian, James H. Elder
Prior approaches to line segment detection typically involve perceptual grouping in the image domain or global accumulation in the Hough domain.
Ranked #8 on Line Segment Detection on York Urban Dataset
no code implementations • CVPR 2016 • Yiming Qian, Minglun Gong, Yee-Hong Yang
Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction.
no code implementations • ICCV 2015 • Yiming Qian, Minglun Gong, Yee-Hong Yang
Extracting environment mattes using existing approaches often requires either thousands of captured images or a long processing time, or both.