Search Results for author: Venice Erin Liong

Found 6 papers, 2 papers with code

Deep Hashing for Compact Binary Codes Learning

no code implementations CVPR 2015 Venice Erin Liong, Jiwen Lu, Gang Wang, Pierre Moulin, Jie Zhou

In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large-scale visual search.
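The core idea of deep hashing — a learned nonlinear projection whose outputs are thresholded into compact binary codes — can be sketched as follows. This is a hypothetical illustration, not the paper's actual network; the layer sizes and the single-layer projection are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_layer(x, W, b):
    # tanh pushes activations toward {-1, +1}, so the final
    # sign-based binarization loses little information
    return np.tanh(x @ W + b)

features = rng.standard_normal((4, 128))   # image descriptors (hypothetical)
W = rng.standard_normal((128, 32)) * 0.1   # learned projection to 32 bits
b = np.zeros(32)

# binarize: positive activations -> 1, non-positive -> 0
codes = (hash_layer(features, W, b) > 0).astype(np.uint8)
```

Search then reduces to comparing these 32-bit codes by Hamming distance, which is far cheaper than comparing the original 128-dimensional descriptors.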

Deep Hashing

Simultaneous Local Binary Feature Learning and Encoding for Face Recognition

no code implementations ICCV 2015 Jiwen Lu, Venice Erin Liong, Jie Zhou

In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) method for face recognition.

Face Recognition

Cross-Modal Deep Variational Hashing

no code implementations ICCV 2017 Venice Erin Liong, Jiwen Lu, Yap-Peng Tan, Jie Zhou

In this paper, we propose a cross-modal deep variational hashing (CMDVH) method to learn compact binary codes for cross-modality multimedia retrieval.

Retrieval

AMVNet: Assertion-based Multi-View Fusion Network for LiDAR Semantic Segmentation

no code implementations 9 Dec 2020 Venice Erin Liong, Thi Ngoc Tho Nguyen, Sergi Widjaja, Dhananjai Sharma, Zhuang Jie Chong

In this paper, we present an Assertion-based Multi-View Fusion network (AMVNet) for LiDAR semantic segmentation which aggregates the semantic features of individual projection-based networks using late fusion.
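Late fusion, as described above, means each projection-based network (e.g. a range-view and a bird's-eye-view model) runs independently and their per-point class scores are combined afterward. A minimal sketch of that aggregation step, using simple averaging and random stand-in logits (the actual AMVNet fusion rule is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_classes = 1000, 20  # hypothetical scan size and label set

# stand-ins for the per-point class scores of two projection networks
range_view_logits = rng.standard_normal((n_points, n_classes))
bev_logits = rng.standard_normal((n_points, n_classes))

# late fusion: combine scores after both networks have run independently
fused = (range_view_logits + bev_logits) / 2
labels = fused.argmax(axis=1)  # final per-point semantic label
```

The design appeal of late fusion is modularity: each view-specific network can be trained and improved on its own, and only the lightweight score-combination step ties them together.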

Autonomous Vehicles LiDAR Semantic Segmentation +1

ConDA: Unsupervised Domain Adaptation for LiDAR Segmentation via Regularized Domain Concatenation

1 code implementation 30 Nov 2021 Lingdong Kong, Niamul Quader, Venice Erin Liong

We present ConDA, a concatenation-based domain adaptation framework for LiDAR segmentation that: 1) constructs an intermediate domain consisting of fine-grained interchange signals from both source and target domains without destabilizing the semantic coherency of objects and background around the ego-vehicle; and 2) utilizes the intermediate domain for self-training.
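One way to picture the intermediate domain described above is interleaving regions of a source scan and a target scan while keeping each region's points intact. The sketch below mixes two point clouds by alternating azimuth sectors; the sector-based scheme and all sizes here are illustrative assumptions, not ConDA's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_by_sector(source, target, n_sectors=8):
    """Build an intermediate scan: even azimuth sectors from the source
    point cloud, odd sectors from the target (hypothetical scheme)."""
    def sector_id(pts):
        az = np.arctan2(pts[:, 1], pts[:, 0])  # azimuth in [-pi, pi]
        return ((az + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    src_keep = sector_id(source) % 2 == 0
    tgt_keep = sector_id(target) % 2 == 1
    return np.vstack([source[src_keep], target[tgt_keep]])

# stand-in LiDAR scans as (x, y, z) points around the ego-vehicle
source = rng.uniform(-50, 50, (5000, 3))
target = rng.uniform(-50, 50, (5000, 3))
mixed = mix_by_sector(source, target)
```

Because whole sectors are transplanted rather than individual points, objects and background within each region stay coherent, which is the property the abstract emphasizes; the mixed scans can then serve as self-training input.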

Autonomous Driving LiDAR Semantic Segmentation +2
