Search Results for author: Qirui Wu

Found 6 papers, 2 papers with code

Plan2Scene: Converting Floorplans to 3D Scenes

1 code implementation • CVPR 2021 • Madhawa Vidanapathirana, Qirui Wu, Yasutaka Furukawa, Angel X. Chang, Manolis Savva

We address the task of converting a floorplan and a set of associated photos of a residence into a textured 3D mesh model, a task which we call Plan2Scene.

Plan2Scene

Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model

1 code implementation • 17 Aug 2022 • Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guoqiang Liang, Peng Wang, Yanning Zhang

To make the final image feature concentrate more on the target visual concept, a Class-Aware Visual Prompt Tuning (CAVPT) scheme is further proposed in our DPT: the class-aware visual prompt is generated dynamically by performing cross attention between text prompt features and image patch token embeddings, so that it encodes both downstream task-related information and visual instance information (see the sketch after this entry).

General Knowledge • Language Modelling • +1
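The snippet above describes CAVPT only at a high level. Below is a minimal, hypothetical sketch (not the authors' released implementation) of the cross-attention step it mentions, in which text prompt features act as queries over image patch token embeddings; all class, argument, and dimension names are illustrative assumptions.

```python
# Hypothetical sketch of the cross-attention step described in the snippet:
# class-aware visual prompts are produced by attending from text prompt
# features (queries) to image patch token embeddings (keys/values).
import torch
import torch.nn as nn

class ClassAwareVisualPrompt(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_prompt_feats: torch.Tensor, patch_tokens: torch.Tensor) -> torch.Tensor:
        # text_prompt_feats: (batch, num_classes, dim) -- one prompt feature per class
        # patch_tokens:      (batch, num_patches, dim) -- image patch token embeddings
        prompts, _ = self.cross_attn(
            query=text_prompt_feats, key=patch_tokens, value=patch_tokens
        )
        # The resulting class-aware visual prompts could then be appended to the
        # patch token sequence before the remaining vision transformer layers.
        return prompts

# Example shapes (illustrative only)
cavpt = ClassAwareVisualPrompt(dim=512)
text_feats = torch.randn(2, 10, 512)        # 10 class prompt features
patches = torch.randn(2, 196, 512)          # 14x14 ViT patch tokens
visual_prompts = cavpt(text_feats, patches) # (2, 10, 512)
```

Using cross attention this way lets each class-conditioned query pool information from the patches most relevant to that class, which is consistent with the snippet's stated goal of encoding both task-related and instance-specific information.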

Physics-enhanced Gaussian Process Variational Autoencoder

no code implementations • 15 May 2023 • Thomas Beckers, Qirui Wu, George J. Pappas

Variational autoencoders make it possible to learn a lower-dimensional latent space from high-dimensional input/output data.
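For readers unfamiliar with the baseline the snippet refers to, here is a minimal, generic VAE sketch showing how an encoder maps high-dimensional inputs into a lower-dimensional latent space. It is an illustrative example with arbitrary dimensions and does not include the physics-enhanced Gaussian process prior that the paper itself contributes.

```python
# Generic VAE sketch: encode x into a low-dimensional latent z, decode back,
# and train with reconstruction + KL terms. Not the paper's method.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.to_logvar = nn.Linear(128, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim)
        )

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z from q(z|x) differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    recon_term = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl
```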

Generalizing Single-View 3D Shape Retrieval to Occlusions and Unseen Objects

no code implementations • 31 Dec 2023 • Qirui Wu, Daniel Ritchie, Manolis Savva, Angel X. Chang

Single-view 3D shape retrieval is a challenging task that is increasingly important with the growth of available 3D data.

3D Shape Retrieval • Retrieval

R3DS: Reality-linked 3D Scenes for Panoramic Scene Understanding

no code implementations • 18 Mar 2024 • Qirui Wu, Sonia Raychaudhuri, Daniel Ritchie, Manolis Savva, Angel X. Chang

We introduce the Reality-linked 3D Scenes (R3DS) dataset of synthetic 3D scenes mirroring the real-world scene arrangements from Matterport3D panoramas.

Object • Scene Understanding
