1 code implementation • 18 Jun 2020 • Ronald Yu
The Variational Auto-Encoder (VAE) is a simple, efficient, and popular deep maximum likelihood model.
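To make the idea concrete, here is a minimal VAE sketch (not the paper's code); it assumes PyTorch, and the MLP architecture and layer sizes are placeholder choices.

```python
# Minimal VAE sketch (illustrative only; architecture and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```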
2 code implementations • 16 Aug 2019 • Daniel Liu, Ronald Yu, Hao Su
The importance of training robust neural networks grows as 3D data is increasingly utilized in deep learning for vision tasks in robotics, drone control, and autonomous driving.
1 code implementation • 10 Jan 2019 • Daniel Liu, Ronald Yu, Hao Su
We present a preliminary evaluation of adversarial attacks on deep 3D point cloud classifiers, namely PointNet and PointNet++. We evaluate white-box and black-box adversarial attacks originally proposed for 2D images and extend them to reduce the perceptibility of the perturbations in 3D space.
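As a rough illustration of the general idea (a sketch, not the paper's exact attack), an iterative gradient attack can perturb point coordinates while clipping each point's displacement to keep the perturbation hard to perceive; `model` is assumed to be any point cloud classifier, such as a PointNet-style network taking a (batch, points, 3) tensor and returning class logits.

```python
# Illustrative iterative gradient attack on point cloud coordinates.
import torch
import torch.nn.functional as F

def perturb_point_cloud(model, points, label, steps=10, step_size=0.01, eps=0.05):
    """points: (N, 3) tensor; label: ground-truth class index (scalar tensor)."""
    delta = torch.zeros_like(points, requires_grad=True)
    for _ in range(steps):
        logits = model((points + delta).unsqueeze(0))   # (1, num_classes)
        loss = F.cross_entropy(logits, label.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()      # ascend the classification loss
            # Clip each point's displacement to an L2 ball of radius eps
            # so the perturbation stays small in 3D space.
            norms = delta.norm(dim=1, keepdim=True).clamp(min=1e-12)
            delta *= torch.clamp(norms, max=eps) / norms
        delta.grad.zero_()
    return (points + delta).detach()
```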
1 code implementation • CVPR 2019 • Bo Sun, Nian-hsuan Tsai, Fangchen Liu, Ronald Yu, Hao Su
We propose an adversarial defense method that achieves state-of-the-art performance among attack-agnostic adversarial defense methods while also maintaining robustness to input resolution, scale of adversarial perturbation, and scale of dataset size.
1 code implementation • NeurIPS 2018 • Minhyuk Sung, Hao Su, Ronald Yu, Leonidas Guibas
Even though our shapes have independent discretizations and no functional correspondences are provided, the network is able to generate latent bases, in a consistent order, that reflect the shared semantic structure among the shapes.
1 code implementation • CVPR 2018 • Weiyue Wang, Ronald Yu, Qiangui Huang, Ulrich Neumann
Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results.
Ranked #1 on 3D Semantic Instance Segmentation on ScanNetV1
no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li
By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.
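A minimal sketch of what PCA expression retargeting could look like under a linear face model; the assumed form `vertices = neutral + expr_basis @ expr_coeffs`, as well as the variable names, are illustrative rather than the paper's actual representation.

```python
# Rough sketch of PCA expression retargeting under a linear face model.
import numpy as np

def retarget_expression(target_neutral, expr_basis, source_expr_coeffs):
    """
    target_neutral:     (3V,) neutral face vertices of the target identity
    expr_basis:         (3V, K) PCA expression basis shared across identities
    source_expr_coeffs: (K,) expression coefficients inferred from the source frame
    Returns the target face deformed by the source actor's expression.
    """
    return target_neutral + expr_basis @ source_expr_coeffs
```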
no code implementations • ICCV 2017 • Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li
To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.
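A sketch of how one sample of such a synthetic dataset might be drawn; `render`, `sample_identity`, and `sample_expression` are hypothetical helpers standing in for a morphable-model fit-and-render pipeline, and only the per-sample variation in pose, expression, lighting, and occlusion is meant to be illustrative.

```python
# Sketch of sampling one synthetic training example from a morphable face model.
import numpy as np

rng = np.random.default_rng(0)

def sample_training_example(render, sample_identity, sample_expression):
    identity = sample_identity(rng)                    # morphable-model identity coefficients
    expression = sample_expression(rng)                # morphable-model expression coefficients
    pose = rng.uniform([-45, -30, -20], [45, 30, 20])  # yaw / pitch / roll in degrees
    lighting = rng.normal(size=9)                      # e.g. spherical-harmonic coefficients
    occlusion = rng.random() < 0.3                     # randomly add an occluder
    image, dense_labels = render(identity, expression, pose, lighting, occlusion)
    return image, dense_labels
```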
1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.