Search Results for author: Shoou-I Yu

Found 17 papers, 5 papers with code

Harry Potter's Marauder's Map: Localizing and Tracking Multiple Persons-of-Interest by Nonnegative Discretization

no code implementations CVPR 2013 Shoou-I Yu, Yi Yang, Alexander Hauptmann

A device just like Harry Potter's Marauder's Map, which pinpoints the location of each person-of-interest at all times, provides invaluable information for analysis of surveillance videos.

Face Recognition Human Detection

Self-Paced Learning with Diversity

no code implementations NeurIPS 2014 Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, Alexander Hauptmann

Self-paced learning (SPL) is a recently proposed learning regime, inspired by the learning process of humans and animals, that gradually incorporates samples into training from easy to more complex.
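
A minimal sketch of the plain SPL alternation the abstract refers to, assuming binary sample weights chosen by a loss threshold; the diversity term that this paper adds on top of SPL is deliberately omitted, and the names below are illustrative rather than the authors' code:

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Vanilla SPL step: a sample is "easy enough" to train on if its
    # current loss is below the pace parameter lambda (binary weights v_i).
    return (losses < lam).astype(float)

# Alternating scheme: fix the model and pick samples, retrain on the picked
# samples, then grow lambda so harder samples enter in later rounds.
losses = np.array([0.2, 1.5, 0.7, 3.0])   # toy per-sample losses
lam = 1.0
for _ in range(3):
    v = self_paced_weights(losses, lam)
    # ... retrain the model on samples with v == 1 and recompute `losses` ...
    lam *= 1.3  # anneal the pace
```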

The Best of Both Worlds: Combining Data-independent and Data-driven Approaches for Action Recognition

no code implementations17 May 2015 Zhenzhong Lan, Dezhong Yao, Ming Lin, Shoou-I Yu, Alexander Hauptmann

First, we propose a two-stream Stacked Convolutional Independent Subspace Analysis (ConvISA) architecture to show that unsupervised learning methods can significantly boost the performance of traditional local features extracted from data-independent models.

Action Recognition Multi-class Classification +3

Handcrafted Local Features are Convolutional Neural Networks

no code implementations16 Nov 2015 Zhenzhong Lan, Shoou-I Yu, Ming Lin, Bhiksha Raj, Alexander G. Hauptmann

We approach this problem by first showing that local handcrafted features and Convolutional Neural Networks (CNNs) share the same convolution-pooling network structure.

Action Recognition Optical Flow Estimation +2
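
To make the shared convolution-pooling structure concrete, here is a hedged sketch (not the paper's pipeline): a simple gradient-orientation descriptor written as fixed, handcrafted convolution kernels followed by rectification and cell-wise average pooling, i.e. the same structural blocks a CNN uses but with hand-designed filters:

```python
import numpy as np
from scipy.signal import convolve2d

def handcrafted_as_convpool(img, cell=8):
    # "Convolution layer" with fixed, handcrafted kernels (horizontal/vertical gradients).
    kx = np.array([[-1.0, 0.0, 1.0]])
    ky = kx.T
    gx = convolve2d(img, kx, mode="same")
    gy = convolve2d(img, ky, mode="same")
    # Nonlinearity: rectified responses give 4 orientation "channels".
    maps = [np.maximum(m, 0) for m in (gx, -gx, gy, -gy)]
    # "Pooling layer": average-pool each map over non-overlapping cells.
    H, W = img.shape
    pooled = [m[:H - H % cell, :W - W % cell]
                .reshape(H // cell, cell, W // cell, cell).mean(axis=(1, 3))
              for m in maps]
    return np.stack(pooled, axis=-1)  # (H/cell, W/cell, 4) descriptor grid
```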

Improving Human Activity Recognition Through Ranking and Re-ranking

no code implementations11 Dec 2015 Zhenzhong Lan, Shoou-I Yu, Alexander G. Hauptmann

We propose two well-motivated ranking-based methods to enhance the performance of current state-of-the-art human activity recognition systems.

Human Activity Recognition Re-Ranking

Long-Term Identity-Aware Multi-Person Tracking for Surveillance Video Summarization

no code implementations25 Apr 2016 Shoou-I Yu, Yi Yang, Xuanchong Li, Alexander G. Hauptmann

Our tracker propagates identity information to frames without recognized faces by uncovering the appearance and spatial manifold formed by person detections.

Face Recognition Video Summarization

The Solution Path Algorithm for Identity-Aware Multi-Object Tracking

no code implementations CVPR 2016 Shoou-I Yu, Deyu Meng, WangMeng Zuo, Alexander Hauptmann

The tracker is formulated as a quadratic optimization problem with L0 norm constraints, which we propose to solve with the solution path algorithm.

Active Learning Decision Making +2
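
For reference, an L0-norm-constrained quadratic program has the schematic form below; the abstract gives no further detail, so this is only the generic shape of such a problem, not the paper's exact objective or constraints:

```latex
\min_{\mathbf{x}} \; \mathbf{x}^\top Q \mathbf{x} + \mathbf{c}^\top \mathbf{x}
\quad \text{s.t.} \quad \|\mathbf{x}\|_0 \le k
```

A solution path algorithm then traces how the optimizer changes as the sparsity budget k (or the corresponding regularization weight) is varied.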

Strategies for Searching Video Content with Text Queries or Video Examples

no code implementations17 Jun 2016 Shoou-I Yu, Yi Yang, Zhongwen Xu, Shicheng Xu, Deyu Meng, Zexi Mao, Zhigang Ma, Ming Lin, Xuanchong Li, Huan Li, Zhenzhong Lan, Lu Jiang, Alexander G. Hauptmann, Chuang Gan, Xingzhong Du, Xiaojun Chang

The large number of user-generated videos uploaded to the Internet every day has led to many commercial video search engines, which mainly rely on text metadata for search.

Event Detection Retrieval +1

Learning Patch Reconstructability for Accelerating Multi-View Stereo

no code implementations CVPR 2018 Alex Poms, Chenglei Wu, Shoou-I Yu, Yaser Sheikh

By prioritizing stereo matching on a subset of patches that are highly reconstructable and also cover the 3D surface, we are able to accelerate MVS with minimal reduction in accuracy and completeness.

Stereo Matching Hand +1
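
A hedged sketch of the prioritization idea described above, assuming per-patch reconstructability scores from a learned predictor and a voxel-grid proxy for surface coverage (names, thresholds, and the greedy rule are illustrative, not the paper's procedure):

```python
import numpy as np

def select_patches(scores, centers, budget, voxel=0.05):
    # Keep the most reconstructable patches first, but take at most one
    # patch per voxel cell so the selection still covers the 3D surface.
    order = np.argsort(-scores)                       # highest score first
    covered, selected = set(), []
    for i in order:
        cell = tuple(np.floor(centers[i] / voxel).astype(int))
        if cell in covered:
            continue                                  # this surface region is already covered
        covered.add(cell)
        selected.append(i)
        if len(selected) >= budget:
            break
    return selected  # run full stereo matching only on these patches
```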

Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

no code implementations CVPR 2019 Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park

In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera.

Domain Adaptation Face Model

Epipolar Transformers

1 code implementation CVPR 2020 Yihui He, Rui Yan, Katerina Fragkiadaki, Shoou-I Yu

The intuition is: given a 2D location p in the current view, we would like to first find its corresponding point p' in a neighboring view, and then combine the features at p' with the features at p, thus leading to a 3D-aware feature at p. Inspired by stereo matching, the epipolar transformer leverages epipolar constraints and feature matching to approximate the features at p'.

2D Pose Estimation 3D Hand Pose Estimation +3
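
A rough sketch of the mechanism the abstract describes, assuming a known fundamental matrix F from the current (reference) view to the neighboring (source) view and dense per-pixel feature maps; the sampling density, similarity measure, and fusion rule below are illustrative choices, not the released implementation:

```python
import numpy as np

def epipolar_fuse(feat_ref, feat_src, p, F, num_samples=64):
    # Fuse the feature at 2D location p in the reference view with features
    # sampled along its epipolar line in the neighboring (source) view.
    H, W, _ = feat_src.shape
    # Epipolar line l' = F @ p_hom in the source image (a*x + b*y + c = 0).
    a, b, c = F @ np.array([p[0], p[1], 1.0])
    xs = np.linspace(0, W - 1, num_samples)
    ys = -(a * xs + c) / (b + 1e-8)
    valid = (ys >= 0) & (ys <= H - 1)
    xs, ys = xs[valid], ys[valid]
    cand = feat_src[ys.astype(int), xs.astype(int)]   # (K, C) features along the line
    query = feat_ref[int(p[1]), int(p[0])]            # (C,) feature at p
    # Feature matching: softmax over dot-product similarities weights the candidates.
    sim = cand @ query
    w = np.exp(sim - sim.max()); w /= w.sum()
    approx_p_prime = (w[:, None] * cand).sum(axis=0)  # approximate feature at p'
    # Combine into a 3D-aware feature at p (simple additive fusion here).
    return query + approx_p_prime
```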

Supervision by Registration and Triangulation for Landmark Detection

1 code implementation25 Jan 2021 Xuanyi Dong, Yi Yang, Shih-En Wei, Xinshuo Weng, Yaser Sheikh, Shoou-I Yu

End-to-end training is made possible by differentiable registration and 3D triangulation modules.

Optical Flow Estimation
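
The 3D triangulation module can be understood through the classic direct linear transform (DLT); the sketch below is a plain least-squares version assuming two calibrated views with 3x4 projection matrices, whereas the paper's module is a differentiable variant used inside end-to-end training:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Each 2D observation (x, y) with projection matrix P contributes two rows:
    #   x * P[2] - P[0] = 0  and  y * P[2] - P[1] = 0
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # Euclidean 3D point
```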

CodedStereo: Learned Phase Masks for Large Depth-of-field Stereo

no code implementations CVPR 2021 Shiyu Tan, Yicheng Wu, Shoou-I Yu, Ashok Veeraraghavan

Conventional stereo suffers from a fundamental trade-off between imaging volume and signal-to-noise ratio (SNR) -- due to the conflicting impact of aperture size on both these variables.

Disparity Estimation Image Reconstruction +1

URHand: Universal Relightable Hands

no code implementations10 Jan 2024 Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires, He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail, Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhöfer, Yaser Sheikh, Ziwei Liu, Shunsuke Saito

To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities.
