Search Results for author: Junhwa Hur

Found 20 papers, 7 papers with code

Motion Prompting: Controlling Video Generation with Motion Trajectories

no code implementations · 3 Dec 2024 · Daniel Geng, Charles Herrmann, Junhwa Hur, Forrester Cole, Serena Zhang, Tobias Pfaff, Tatiana Lopez-Guevara, Carl Doersch, Yusuf Aytar, Michael Rubinstein, Chen Sun, Oliver Wang, Andrew Owens, Deqing Sun

Motion control is crucial for generating expressive and compelling video content; however, most existing video generation models rely mainly on text prompts for control, which struggle to capture the nuances of dynamic actions and temporal compositions.

Video Generation

High-Resolution Frame Interpolation with Patch-based Cascaded Diffusion

no code implementations · 15 Oct 2024 · Junhwa Hur, Charles Herrmann, Saurabh Saxena, Janne Kontkanen, Wei-Sheng Lai, YiChang Shih, Michael Rubinstein, David J. Fleet, Deqing Sun

However, in contrast to prior work on cascaded diffusion models, which perform diffusion at increasingly large resolutions, we use a single model that always performs diffusion at the same resolution and upsamples by processing patches of the inputs and the prior solution.

8k · Video Frame Interpolation
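The abstract above describes running a fixed-resolution model over patches of a high-resolution input and recombining the results. A minimal sketch of this general patch-wise processing pattern (with overlap blending; `fn` is a hypothetical stand-in for the diffusion model, not the paper's implementation) might look like:

```python
import numpy as np

def process_in_patches(image, patch=64, overlap=16, fn=lambda p: p):
    """Apply `fn` to fixed-size overlapping patches and blend the results.

    Toy illustration of processing a large image with a model that only ever
    sees `patch`-sized inputs. Assumes image dims are at least `patch`.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)  # how often each pixel was covered
    stride = patch - overlap
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp so the last patch stays inside the image.
            y0, x0 = min(y, h - patch), min(x, w - patch)
            tile = image[y0:y0 + patch, x0:x0 + patch]
            out[y0:y0 + patch, x0:x0 + patch] += fn(tile)
            weight[y0:y0 + patch, x0:x0 + patch] += 1.0
    # Average overlapping contributions.
    return out / weight[..., None] if image.ndim == 3 else out / weight
```

With the identity `fn`, the blended output reconstructs the input exactly, which is a convenient sanity check before plugging in a real model.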

Boundary Attention: Learning curves, corners, junctions and grouping

no code implementations · 1 Jan 2024 · Mia Gaia Polansky, Charles Herrmann, Junhwa Hur, Deqing Sun, Dor Verbin, Todd Zickler

We present a lightweight network that infers grouping and boundaries, including curves, corners and junctions.

Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model

no code implementations · 20 Dec 2023 · Saurabh Saxena, Junhwa Hur, Charles Herrmann, Deqing Sun, David J. Fleet

In contrast, we advocate a generic, task-agnostic diffusion model with several advancements: log-scale depth parameterization to enable joint modeling of indoor and outdoor scenes; conditioning on the field of view (FOV) to handle scale ambiguity; and synthetically augmenting the FOV during training to generalize beyond the limited camera intrinsics of the training datasets.

Ranked #19 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Denoising · Monocular Depth Estimation
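The log-scale depth parameterization mentioned in the abstract maps metric depth into a bounded range so that equal depth ratios occupy equal intervals, letting indoor (metres) and outdoor (tens of metres) scenes share one target range. A hedged sketch of one plausible form (the exact parameterization and the bounds `d_min`/`d_max` are assumptions, not taken from the paper):

```python
import numpy as np

def depth_to_log_param(depth, d_min=0.5, d_max=80.0):
    """Map metric depth to [0, 1] on a log scale: t = log(d/d_min) / log(d_max/d_min)."""
    d = np.clip(depth, d_min, d_max)
    return np.log(d / d_min) / np.log(d_max / d_min)

def log_param_to_depth(t, d_min=0.5, d_max=80.0):
    """Invert the log-scale parameterization: d = d_min * (d_max/d_min)**t."""
    return d_min * (d_max / d_min) ** np.asarray(t)
```

The round trip is exact within the clipping bounds, and the log scale devotes as much of the target range to 0.5–6 m (typical indoor depths) as to 6–80 m.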

Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence

1 code implementation CVPR 2024 Junyi Zhang, Charles Herrmann, Junhwa Hur, Eric Chen, Varun Jampani, Deqing Sun, Ming-Hsuan Yang

This paper identifies the importance of being geometry-aware for semantic correspondence and reveals a limitation of the features of current foundation models under simple post-processing.

Animal Pose Estimation · Semantic Correspondence

Self-supervised AutoFlow

no code implementations CVPR 2023 Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, Deqing Sun

Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric.

Optical Flow Estimation

RAFT-MSF: Self-Supervised Monocular Scene Flow using Recurrent Optimizer

no code implementations · 3 May 2022 · Bayram Bayramli, Junhwa Hur, Hongtao Lu

Self-supervised methods have demonstrated that scene flow estimation can be learned from unlabeled data, yet their accuracy lags behind that of (semi-)supervised methods.

Decoder · Optical Flow Estimation +1

Self-Supervised Multi-Frame Monocular Scene Flow

1 code implementation CVPR 2021 Junhwa Hur, Stefan Roth

Estimating 3D scene flow from a sequence of monocular images has been gaining increased attention due to the simple, economical capture setup.

Decoder · Scene Flow Estimation +1

Self-Supervised Monocular Scene Flow Estimation

1 code implementation CVPR 2020 Junhwa Hur, Stefan Roth

Our model achieves state-of-the-art accuracy among unsupervised/self-supervised learning approaches to monocular scene flow, and yields competitive results for the optical flow and monocular depth estimation sub-tasks.

Monocular Depth Estimation · Optical Flow Estimation +2

Optical Flow Estimation in the Deep Learning Age

no code implementations · 6 Apr 2020 · Junhwa Hur, Stefan Roth

Akin to many subareas of computer vision, the recent advances in deep learning have also significantly influenced the literature on optical flow.

Deep Learning · Motion Estimation +1

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss

2 code implementations · 21 Nov 2017 · Simon Meister, Junhwa Hur, Stefan Roth

By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, which additionally enables generic pre-training of supervised networks for datasets with limited amounts of ground truth.

Optical Flow Estimation
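The census loss in the title above compares census-transformed image patches rather than raw intensities, which makes the photometric term robust to brightness changes. A simplified sketch of the idea (a hard binary census with a Hamming-distance loss; UnFlow's actual loss uses a soft ternary variant, so this is an illustrative assumption):

```python
import numpy as np

def census_transform(img, win=3):
    """Binary census signature per pixel: compare each pixel to its
    neighbours in a win x win window (centre pixel excluded)."""
    h, w = img.shape
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # True where the shifted neighbour is brighter than the centre.
            bits.append(pad[r + dy:r + dy + h, r + dx:r + dx + w] > img)
    return np.stack(bits, axis=-1)

def census_loss(img1, img2_warped, win=3):
    """Mean Hamming distance between census signatures of the first image
    and the warped second image."""
    c1 = census_transform(img1, win)
    c2 = census_transform(img2_warped, win)
    return np.mean(c1 != c2)
```

Because the census transform only records local intensity orderings, adding a constant brightness offset to an image leaves the loss unchanged, unlike a plain L1/L2 photometric loss.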

MirrorFlow: Exploiting Symmetries in Joint Optical Flow and Occlusion Estimation

no code implementations · ICCV 2017 · Junhwa Hur, Stefan Roth

The key feature of our model is to fully exploit the symmetry properties that characterize optical flow and occlusions in the two consecutive images.

Occlusion Estimation · Optical Flow Estimation
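A common way to exploit the flow/occlusion symmetry the abstract refers to is a forward-backward consistency check: where the backward flow, sampled at the forward-warped location, fails to cancel the forward flow, the pixel is likely occluded. The sketch below shows this standard check (with assumed thresholds `alpha` and `beta`), not MirrorFlow's exact joint formulation:

```python
import numpy as np

def occlusion_from_fb_consistency(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Mark a pixel occluded when forward and backward flow disagree.

    flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) displacements.
    Returns a boolean (H, W) occlusion mask.
    """
    h, w, _ = flow_fw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Forward-warped coordinates (nearest-neighbour sampling for simplicity).
    xw = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    flow_bw_warped = flow_bw[yw, xw]
    # For non-occluded pixels the two flows should cancel out.
    diff = np.sum((flow_fw + flow_bw_warped) ** 2, axis=-1)
    mag = np.sum(flow_fw ** 2, axis=-1) + np.sum(flow_bw_warped ** 2, axis=-1)
    return diff > alpha * mag + beta
```

With perfectly symmetric flows the mask is empty; when the backward flow does not undo the forward motion, the squared residual exceeds the magnitude-dependent threshold and the pixel is flagged.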

Joint Optical Flow and Temporally Consistent Semantic Segmentation

no code implementations · 26 Jul 2016 · Junhwa Hur, Stefan Roth

The importance of and demand for visual scene understanding have been steadily increasing alongside the active development of autonomous systems.

Motion Estimation · Optical Flow Estimation +3

Generalized Deformable Spatial Pyramid: Geometry-Preserving Dense Correspondence Estimation

no code implementations · CVPR 2015 · Junhwa Hur, Hwasup Lim, Changsoo Park, Sang Chul Ahn

We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating the dense correspondence between a pair of images with large appearance variations.
