Search Results for author: Allen Yang

Found 5 papers, 1 paper with code

Soft Expectation and Deep Maximization for Image Feature Detection

1 code implementation • 21 Apr 2021 • Alexander Mai, Allen Yang, Dominique E. Meyer

Central to the application of many multi-view geometry algorithms is the extraction of matching points between multiple viewpoints, enabling classical tasks such as camera pose estimation and 3D reconstruction.

3D Reconstruction • Pose Estimation • +1
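For context on the task this paper addresses, below is a minimal sketch of classical two-view matching and relative pose recovery using OpenCV (ORB features, brute-force matching, essential-matrix RANSAC). This is not the SEDM method from the paper; the intrinsics matrix `K` and the image paths are placeholder assumptions.

```python
# Classical two-view matching + relative pose sketch (baseline, not SEDM).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # placeholder camera intrinsics

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder image paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check to get candidate correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The matched points feed the classical downstream tasks mentioned above:
# essential-matrix estimation and relative camera pose recovery.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```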

Training Deep Neural Networks to Detect Repeatable 2D Features Using Large Amounts of 3D World Capture Data

no code implementations • 9 Dec 2019 • Alexander Mai, Joseph Menke, Allen Yang

We claim that in order to train detectors to work well in indoor environments, they must be robust to this type of geometry, and repeatable under true viewpoint change instead of homographies.

Loop Closure Detection with RGB-D Feature Pyramid Siamese Networks

no code implementations • 25 Nov 2018 • Zhang Qianhao, Alexander Mai, Joseph Menke, Allen Yang

We show for the first time that deep neural networks are capable of detecting loop closures, and we provide a method for generating large-scale datasets for use in evaluating and training loop closure detectors.

Loop Closure Detection • Simultaneous Localization and Mapping
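As a rough illustration of Siamese-network loop-closure scoring, here is a minimal PyTorch sketch: a shared-weight encoder maps two RGB-D frames to embeddings whose similarity serves as a loop-closure score. The architecture, embedding size, and input shapes are assumptions for illustration, not the RGB-D Feature Pyramid Siamese Network from the paper.

```python
# Minimal Siamese similarity sketch for loop-closure scoring (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared-weight CNN mapping a 4-channel RGB-D frame to a unit-norm embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)

def loop_closure_score(encoder, frame_a, frame_b):
    """Cosine similarity between two frames; a high score suggests a loop closure."""
    with torch.no_grad():
        za, zb = encoder(frame_a), encoder(frame_b)
    return (za * zb).sum(dim=1)

# Usage with random stand-in tensors (batch of 1, 4 x H x W RGB-D frames).
enc = SiameseEncoder()
a, b = torch.rand(1, 4, 240, 320), torch.rand(1, 4, 240, 320)
print(loop_closure_score(enc, a, b))
```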

CPRL -- An Extension of Compressive Sensing to the Phase Retrieval Problem

no code implementations • NeurIPS 2012 • Henrik Ohlsson, Allen Yang, Roy Dong, Shankar Sastry

This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal.

Compressive Sensing • Retrieval
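A hedged sketch of the kind of convex program the abstract alludes to: quadratic intensity measurements are lifted to a matrix variable so that a trace-plus-ℓ1 relaxation can recover a low-rank, sparse lifted matrix. The notation below is an assumption for illustration; the exact formulation should be checked against the paper.

```latex
% Intensity measurements of a complex sparse signal x \in \mathbb{C}^n:
%   y_i = |\langle a_i, x \rangle|^2, \qquad i = 1, \dots, m.
% Lifting X = x x^* turns these into linear constraints on X:
\begin{align*}
\min_{X \succeq 0} \quad & \operatorname{tr}(X) + \lambda \, \|X\|_1 \\
\text{s.t.} \quad & a_i^{*} X a_i = y_i, \qquad i = 1, \dots, m,
\end{align*}
% with x recovered (up to a global phase) from the leading eigenvector of X.
```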

Sparsity and Robustness in Face Recognition

no code implementations • 3 Nov 2011 • John Wright, Arvind Ganesh, Allen Yang, Zihan Zhou, Yi Ma

This report concerns the use of techniques for sparse signal representation and sparse error correction for automatic face recognition.

Face Recognition • Robust Face Recognition
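For context, a minimal sketch of the sparse representation with sparse error correction formulation commonly associated with this line of work: a test image is expressed as a sparse combination of training images plus a sparse corruption term, both recovered by ℓ1 minimization. The notation is an assumption for illustration, not copied from the report.

```latex
% Test image y modeled over the training dictionary A with sparse error e
% (occlusion / corruption); both are recovered jointly:
\begin{equation*}
\min_{x,\, e} \; \|x\|_1 + \|e\|_1
\quad \text{subject to} \quad y = A x + e,
\end{equation*}
% and y is assigned to the class whose coefficients in x best reconstruct y - e.
```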
