Search Results for author: Haodong Chen

Found 14 papers, 5 papers with code

Towards End-to-End Neuromorphic Voxel-based 3D Object Reconstruction Without Physical Priors

no code implementations · 1 Jan 2025 · Chuanzhi Xu, Langyi Chen, Vincent Qu, Haodong Chen, Vera Chung

Neuromorphic cameras, also known as event cameras, are asynchronous brightness-change sensors that can capture extremely fast motion without suffering from motion blur, making them particularly promising for 3D reconstruction in extreme environments.

Tasks: 3D Object Reconstruction, 3D Reconstruction (+2 more)

OmniCreator: Self-Supervised Unified Generation with Universal Editing

no code implementations · 3 Dec 2024 · Haodong Chen, Lan Wang, Harry Yang, Ser-Nam Lim

On the other hand, when presented with a text prompt only, OmniCreator becomes generative, producing high-quality video as a result of the semantic correspondence learned.

Tasks: Denoising, Semantic Correspondence (+2 more)

Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels

no code implementations · 19 Nov 2024 · Haodong Chen, Runnan Chen, Qiang Qu, Zhaoqing Wang, Tongliang Liu, Xiaoming Chen, Yuk Ying Chung

Recent advancements in 3D Gaussian Splatting (3DGS) have substantially improved novel view synthesis, enabling high-quality reconstruction and real-time rendering.

Tasks: Novel View Synthesis

Adaptively Augmented Consistency Learning: A Semi-supervised Segmentation Framework for Remote Sensing

no code implementations · 14 Nov 2024 · Hui Ye, Haodong Chen, Xiaoming Chen, Vera Chung

Remote sensing (RS) involves the acquisition of data about objects or areas from a distance, primarily to monitor environmental changes, manage resources, and support planning and disaster response.

Tasks: Disaster Response, Diversity (+1 more)

Efficient Multi-disparity Transformer for Light Field Image Super-resolution

no code implementations · 22 Jul 2024 · Zeke Zexi Hu, Haodong Chen, Yuk Ying Chung, Xiaoming Chen

This paper presents the Multi-scale Disparity Transformer (MDT), a novel Transformer tailored for light field image super-resolution (LFSR) that addresses the issues of computational redundancy and disparity entanglement caused by the indiscriminate processing of sub-aperture images inherent in conventional methods.

Tasks: Image Super-Resolution

CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning

1 code implementation · 15 Apr 2024 · Haojian Huang, Xiaozhen Qiao, Zhuo Chen, Haodong Chen, Bingyu Li, Zhe Sun, Mulin Chen, Xuelong Li

Zero-shot learning (ZSL) enables the recognition of novel classes by leveraging semantic knowledge transfer from known to unknown categories.

Tasks: Attribute, Transfer Learning (+2 more)

UrbanCLIP: Learning Text-enhanced Urban Region Profiling with Contrastive Language-Image Pretraining from the Web

1 code implementation · 22 Oct 2023 · Yibo Yan, Haomin Wen, Siru Zhong, Wei Chen, Haodong Chen, Qingsong Wen, Roger Zimmermann, Yuxuan Liang

To answer the questions, we leverage the power of Large Language Models (LLMs) and introduce the first-ever LLM-enhanced framework that integrates the knowledge of textual modality into urban imagery profiling, named LLM-enhanced Urban Region Profiling with Contrastive Language-Image Pretraining (UrbanCLIP).

Tasks: Image to Text, Language Modeling (+2 more)

Dense Voxel 3D Reconstruction Using a Monocular Event Camera

no code implementations · 1 Sep 2023 · Haodong Chen, Vera Chung, Li Tan, Xiaoming Chen

Our preliminary results demonstrate that the proposed method can produce visually distinguishable dense 3D reconstructions directly without requiring pipelines like those used by existing methods.

Tasks: 3D Reconstruction, Semantic Segmentation (+1 more)

Attention-Based Sensor Fusion for Human Activity Recognition Using IMU Signals

no code implementations · 20 Dec 2021 · Wenjin Tao, Haodong Chen, Md Moniruzzaman, Ming C. Leu, Zhaozheng Yin, Ruwen Qin

Secondly, an attention-based fusion mechanism is developed to learn the importance of sensors at different body locations and to generate an attentive feature representation.

Tasks: Human Activity Recognition, Sensor Fusion
