no code implementations • 15 Jul 2023 • Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee
We present Interactive Neural Video Editing (INVE), a real-time video editing solution, which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip.
no code implementations • 24 May 2023 • Eric Hedlin, Gopal Sharma, Shweta Mahajan, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi
Text-to-image diffusion models are now capable of generating images that are often indistinguishable from real images.
1 code implementation • 18 May 2023 • Shunyuan Mao, Ruobing Dong, Lu Lu, Kwang Moo Yi, Sifan Wang, Paris Perdikaris
We develop a tool, which we name Protoplanetary Disk Operator Network (PPDONet), that can predict the solution of disk-planet interactions in protoplanetary disks in real-time.
no code implementations • CVPR 2023 • Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance.
no code implementations • CVPR 2023 • Jen-Hao Rick Chang, Wei-Yu Chen, Anurag Ranjan, Kwang Moo Yi, Oncel Tuzel
Specifically, we train a set transformer that, given a small number of local neighbor points along a light ray, provides the intersection point, the surface normal, and the material blending weights, which are used to render the outcome of this light ray.
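The abstract describes the module concretely enough to sketch. Below is a minimal PyTorch illustration (not the authors' code) of a set transformer that maps a handful of neighbor points along a ray to an intersection depth, a surface normal, and material blending weights; all module names, dimensions, and the output parameterization are assumptions.

```python
import torch
import torch.nn as nn

class RayPointTransformer(nn.Module):
    """Toy sketch: attend over K neighbor points of a ray and regress a hit
    depth, a surface normal, and material blending weights.
    Illustrative only; not the published architecture."""
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_materials=8):
        super().__init__()
        self.embed = nn.Linear(3, d_model)                      # xyz of each neighbor point
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1 + 3 + n_materials)    # depth, normal, weights

    def forward(self, neighbor_xyz):                            # (B, K, 3)
        tokens = self.encoder(self.embed(neighbor_xyz))
        pooled = tokens.mean(dim=1)                              # permutation-invariant pooling
        out = self.head(pooled)
        depth = out[:, :1]
        normal = torch.nn.functional.normalize(out[:, 1:4], dim=-1)
        weights = out[:, 4:].softmax(dim=-1)
        return depth, normal, weights

# depth, normal, w = RayPointTransformer()(torch.randn(2, 16, 3))
```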
no code implementations • 29 Mar 2023 • Eric Hedlin, Jinfan Yang, Nicholas Vining, Kwang Moo Yi, Alla Sheffer
We introduce CN-DHF (Compact Neural Double-Height-Field), a novel hybrid neural implicit 3D shape representation that is dramatically more compact than the current state of the art.
no code implementations • CVPR 2023 • Anurag Ranjan, Kwang Moo Yi, Jen-Hao Rick Chang, Oncel Tuzel
We propose a generative framework, FaceLit, capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views, learned purely from 2D images in-the-wild without any manual annotation.
1 code implementation • CVPR 2023 • Zhijie Wu, Yuhe Jin, Kwang Moo Yi
We present a novel method to provide efficient and highly detailed reconstructions.
1 code implementation • 27 Oct 2022 • Aritro Roy Arko, James J. Little, Kwang Moo Yi
We propose a bootstrapping framework to enhance human optical flow and pose.
no code implementations • 21 Sep 2022 • Daniel Rebain, Mark J. Matthews, Kwang Moo Yi, Gopal Sharma, Dmitry Lagun, Andrea Tagliasacchi
Neural fields model signals by mapping coordinate inputs to sampled values.
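As a concrete illustration of that definition, here is a minimal coordinate MLP in PyTorch that maps 3D coordinates to RGB values; the layer sizes and activation are arbitrary choices, not details from the paper.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Minimal neural field: a coordinate input (x, y, z) is mapped to a sampled value (here, RGB)."""
    def __init__(self, in_dim=3, hidden=128, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):          # (N, 3) coordinates
        return self.net(coords)         # (N, 3) sampled values

# rgb = NeuralField()(torch.rand(1024, 3))
```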
1 code implementation • 3 Aug 2022 • Fabrizio Pedersoli, Dryden Wiebe, Amin Banitalebi, Yong Zhang, George Tzanetakis, Kwang Moo Yi
Therefore, audio-based methods can be useful even for applications in which only visual information is of interest. Our framework is based on manifold learning and consists of two steps.
no code implementations • 20 Jul 2022 • Weiwei Sun, Daniel Rebain, Renjie Liao, Vladimir Tankovich, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
We introduce a method for instance proposal generation for 3D point clouds.
no code implementations • 16 Jun 2022 • Yuhe Jin, Weiwei Sun, Jan Hosang, Eduard Trulls, Kwang Moo Yi
Existing unsupervised methods for keypoint learning rely heavily on the assumption that a specific keypoint type (e.g., elbow, digit, abstract geometric shape) appears only once in an image.
1 code implementation • 27 May 2022 • Teaghan O'Briain, Carlos Uribe, Kwang Moo Yi, Jonas Teuwen, Ioannis Sechopoulos, Magdalena Bazalova-Carter
To correct for respiratory motion in PET imaging, an interpretable and unsupervised deep learning technique, FlowNet-PET, was constructed.
2 code implementations • 29 Apr 2022 • Eric Hedlin, Helge Rhodin, Kwang Moo Yi
While the quality of this pseudo-ground truth is challenging to assess due to the lack of actual ground-truth SMPL annotations, on the Human3.6M dataset we qualitatively show that our joint locations are more accurate and that our regressor leads to improved pose estimation results on the test set without any need for retraining.
1 code implementation • 23 Mar 2022 • Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, Anurag Ranjan
Photorealistic rendering and reposing of humans is important for enabling augmented reality experiences.
1 code implementation • CVPR 2022 • Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J. Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, Thomas Kipf, Abhijit Kundu, Dmitry Lagun, Issam Laradji, Hsueh-Ti Liu, Henning Meyer, Yishu Miao, Derek Nowrouzezahrai, Cengiz Oztireli, Etienne Pot, Noha Radwan, Daniel Rebain, Sara Sabour, Mehdi S. M. Sajjadi, Matan Sela, Vincent Sitzmann, Austin Stone, Deqing Sun, Suhani Vora, Ziyu Wang, Tianhao Wu, Kwang Moo Yi, Fangcheng Zhong, Andrea Tagliasacchi
Data is the driving force of machine learning, with the amount and quality of training data often being more important for the performance of a system than architecture and training details.
no code implementations • 7 Jan 2022 • Nora Horanyi, Kedi Xia, Kwang Moo Yi, Abhishake Kumar Bojja, Ales Leonardis, Hyung Jin Chang
We propose a novel optimization framework that crops a given image based on user description and aesthetics.
1 code implementation • CVPR 2022 • Kacper Kania, Kwang Moo Yi, Marek Kowalski, Tomasz Trzciński, Andrea Tagliasacchi
We extend neural 3D representations to allow for intuitive and interpretable user control beyond novel view rendering (i.e., camera control).
no code implementations • 24 Nov 2021 • Jiahui Huang, Yuhe Jin, Kwang Moo Yi, Leonid Sigal
In the first stage, with a rich set of losses and a dynamic foreground size prior, we learn how to separate the frame into foreground and background layers and, conditioned on these layers, how to generate the next frame using a VQ-VAE generator.
no code implementations • CVPR 2022 • Daniel Rebain, Mark Matthews, Kwang Moo Yi, Dmitry Lagun, Andrea Tagliasacchi
We present a method for learning a generative 3D model based on neural radiance fields, trained solely from data with only single views of each object.
1 code implementation • CVPR 2021 • Baptiste Angles, Yuhe Jin, Simon Kornblith, Andrea Tagliasacchi, Kwang Moo Yi
We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts.
no code implementations • 7 Jun 2021 • Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently regained popularity for encoding 3D solid shapes in a functional form.
1 code implementation • ICCV 2021 • Wei Jiang, Eduard Trulls, Jan Hosang, Andrea Tagliasacchi, Kwang Moo Yi
We propose a novel framework for finding correspondences in images based on a deep neural network that, given two images and a query point in one of them, finds its correspondence in the other.
Ranked #1 on Dense Pixel Correspondence Estimation on ETH3D
1 code implementation • NeurIPS 2021 • Weiwei Sun, Andrea Tagliasacchi, Boyang Deng, Sara Sabour, Soroosh Yazdani, Geoffrey Hinton, Kwang Moo Yi
We propose a self-supervised capsule architecture for 3D point clouds.
no code implementations • CVPR 2021 • Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
Moreover, we show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm for efficient and GPU-friendly rendering.
no code implementations • 15 Apr 2020 • Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, Mathieu Salzmann
In this paper, we introduce an eigendecomposition-free approach to training a deep network whose loss depends on the eigenvector corresponding to a zero eigenvalue of a matrix predicted by the network.
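The basic idea can be sketched as follows: rather than differentiating through an eigendecomposition, one can penalize how far the known target eigenvector lies from the null space of the predicted matrix M, e.g. via ||M e*||^2, with an extra term that keeps M from collapsing to the zero matrix. The snippet below is a hedged illustration of that idea, not the exact loss used in the paper.

```python
import torch

def eig_free_loss(M, e_gt, alpha=1e-3):
    """Encourage the known eigenvector e_gt to lie in the null space of the
    predicted matrix M (i.e. M @ e_gt ~= 0) without any eigendecomposition.
    The regularizer discourages the trivial solution M = 0.
    Illustrative sketch only; the published loss may differ in form."""
    e = e_gt / e_gt.norm()                       # unit-norm target eigenvector
    null_term = (M @ e).pow(2).sum()             # ||M e||^2, zero iff e has a zero eigenvalue
    reg = alpha * torch.exp(-M.pow(2).sum())     # penalize M collapsing to the zero matrix
    return null_term + reg

# loss = eig_free_loss(predicted_matrix, ground_truth_eigenvector)
```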
no code implementations • CVPR 2021 • Jongwon Choi, Kwang Moo Yi, Ji-Hoon Kim, Jinho Choo, Byoungjip Kim, Jin-Yeop Chang, Youngjune Gwon, Hyung Jin Chang
We show that our method can be applied to classification tasks on multiple different datasets -- including one that is a real-world dataset with heavy data imbalance -- significantly outperforming the state of the art.
5 code implementations • 3 Mar 2020 • Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas, Pascal Fua, Kwang Moo Yi, Eduard Trulls
We introduce a comprehensive benchmark for local features and robust estimation algorithms, focusing on the downstream task -- the accuracy of the reconstructed camera pose -- as our primary metric.
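Since the benchmark's primary metric is the accuracy of the reconstructed camera pose, a standard way to score a pose estimate is the angular error between the estimated and ground-truth rotations. The helper below shows that computation as commonly defined; the benchmark's exact thresholds and aggregation are not reproduced here.

```python
import numpy as np

def rotation_angular_error_deg(R_est, R_gt):
    """Angular error (degrees) between two rotation matrices:
    theta = arccos((trace(R_gt^T R_est) - 1) / 2)."""
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)   # guard against numerical drift
    return np.degrees(np.arccos(cos_theta))

# err = rotation_angular_error_deg(R_estimated, R_ground_truth)
```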
no code implementations • 8 Dec 2019 • Francis Williams, Daniele Panozzo, Kwang Moo Yi, Andrea Tagliasacchi
Voronoi diagrams are highly compact representations that are used in various graphics applications.
no code implementations • 25 Nov 2019 • Teaghan O'Briain, Kyong Hwan Jin, Hongyoon Choi, Erika Chin, Magdalena Bazalova-Carter, Kwang Moo Yi
We aim to reduce the tedious nature of developing and evaluating methods for aligning PET-CT scans from multiple patient visits.
no code implementations • 25 Sep 2019 • Baptiste Angles, Simon Kornblith, Shahram Izadi, Andrea Tagliasacchi, Kwang Moo Yi
We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts.
1 code implementation • 17 Sep 2019 • Wei Jiang, Juan Camilo Gamboa Higuera, Baptiste Angles, Weiwei Sun, Mehrsan Javan, Kwang Moo Yi
We propose an optimization-based framework to register sports field templates onto broadcast videos.
1 code implementation • ICCV 2019 • Patrick Ebel, Anastasiia Mishchuk, Kwang Moo Yi, Pascal Fua, Eduard Trulls
We demonstrate that this representation is particularly amenable to learning descriptors with deep networks.
1 code implementation • CVPR 2020 • Weiwei Sun, Wei Jiang, Eduard Trulls, Andrea Tagliasacchi, Kwang Moo Yi
Many problems in computer vision require dealing with sparse, unordered data in the form of point clouds.
1 code implementation • ICCV 2019 • Wei Jiang, Weiwei Sun, Andrea Tagliasacchi, Eduard Trulls, Kwang Moo Yi
We propose a novel image sampling method for differentiable image transformation in deep neural networks.
no code implementations • 14 Jan 2019 • Kyong Hwan Jin, Michael Unser, Kwang Moo Yi
The reconstruction network is trained to give the highest reconstruction quality, given the MCTS sampling pattern.
1 code implementation • 26 Nov 2018 • Baptiste Angles, Yuhe Jin, Simon Kornblith, Andrea Tagliasacchi, Kwang Moo Yi
We propose a deep network that can be trained to tackle image reconstruction and classification problems that involve detection of multiple object instances, without any supervision regarding their whereabouts.
4 code implementations • NeurIPS 2018 • Yuki Ono, Eduard Trulls, Pascal Fua, Kwang Moo Yi
We present a novel deep architecture and a training strategy to learn a local feature pipeline from scratch, using collections of images without the need for human supervision.
no code implementations • ECCV 2018 • Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, Mathieu Salzmann
Many classical Computer Vision problems, such as essential matrix computation and pose estimation from 3D to 2D correspondences, can be solved by finding the eigenvector corresponding to the smallest, or zero, eigenvalue of a matrix representing a linear system.
3 code implementations • CVPR 2018 • Kwang Moo Yi, Eduard Trulls, Yuki Ono, Vincent Lepetit, Mathieu Salzmann, Pascal Fua
We develop a deep architecture to learn to find good correspondences for wide-baseline stereo.
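One ingredient commonly associated with this line of work is normalizing per-correspondence features across the whole set of putative matches, so each match is classified with global context in view. The sketch below illustrates such a per-set normalization over 4D correspondence coordinates; it is an assumption-laden simplification, not the published architecture.

```python
import torch
import torch.nn as nn

def context_norm(feats, eps=1e-6):
    """Normalize features across the set of N putative correspondences
    (zero mean, unit variance per channel), injecting global context."""
    mean = feats.mean(dim=1, keepdim=True)
    std = feats.std(dim=1, keepdim=True)
    return (feats - mean) / (std + eps)

class CorrespondenceClassifier(nn.Module):
    """Toy per-correspondence inlier classifier over (x1, y1, x2, y2) coordinates."""
    def __init__(self, hidden=32):
        super().__init__()
        self.fc1 = nn.Linear(4, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, corrs):                           # (B, N, 4) putative matches
        h = context_norm(torch.relu(self.fc1(corrs)))
        return torch.sigmoid(self.fc2(h)).squeeze(-1)   # inlier probability per match

# probs = CorrespondenceClassifier()(torch.rand(2, 100, 4))
```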
no code implementations • 16 Nov 2017 • Abhishake Kumar Bojja, Franziska Mueller, Sri Raghu Malireddi, Markus Oberweger, Vincent Lepetit, Christian Theobalt, Kwang Moo Yi, Andrea Tagliasacchi
We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset.
1 code implementation • 30 Mar 2016 • Kwang Moo Yi, Eduard Trulls, Vincent Lepetit, Pascal Fua
We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description.
no code implementations • ICCV 2015 • Alberto Crivellaro, Mahdi Rad, Yannick Verdie, Kwang Moo Yi, Pascal Fua, Vincent Lepetit
We present a method that estimates the 3D pose of a known object in real time and under challenging conditions.
no code implementations • CVPR 2016 • Kwang Moo Yi, Yannick Verdie, Pascal Fua, Vincent Lepetit
We show how to train a Convolutional Neural Network to assign a canonical orientation to feature points given an image patch centered on the feature point.
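A hedged sketch of what such an orientation regressor might look like: a small CNN over a fixed-size patch that outputs (cos theta, sin theta), from which the canonical angle is recovered. Patch size, layer widths, and the output parameterization are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Toy CNN that assigns a canonical orientation to a feature point,
    given a grayscale patch centered on it (illustrative sketch only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)                  # predicts (cos, sin)

    def forward(self, patch):                         # (B, 1, H, W) patch around the keypoint
        v = self.head(self.features(patch).flatten(1))
        v = torch.nn.functional.normalize(v, dim=-1)  # project onto the unit circle
        return torch.atan2(v[:, 1], v[:, 0])          # canonical orientation in radians

# theta = OrientationNet()(torch.rand(4, 1, 32, 32))
```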
no code implementations • CVPR 2015 • Yannick Verdie, Kwang Moo Yi, Pascal Fua, Vincent Lepetit
We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive.