Search Results for author: Tzu-Mao Li

Found 5 papers, 2 papers with code

Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers

no code implementations 30 Jun 2021 Spandan Madan, Tomotake Sasaki, Tzu-Mao Li, Xavier Boix, Hanspeter Pfister

Despite training with a large-scale (0.5 million images), unbiased dataset of camera and light variations, in over 71% of cases CMA-Search can find camera parameters in the vicinity of a correctly classified image that lead to in-distribution misclassifications with a < 3.6% change in parameters.
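
For intuition, here is a minimal sketch of what a CMA-ES search over camera parameters could look like, using the `cma` package. The renderer and classifier functions are hypothetical stand-ins, not the authors' pipeline; the only assumption taken from the abstract is that the search starts near a correctly classified configuration and perturbs camera parameters until the prediction flips.

```python
# A minimal sketch of CMA-ES search over camera parameters, in the spirit of
# the paper's CMA-Search. All functions marked "hypothetical" are stand-ins.
import numpy as np
import cma  # pip install cma

def render(camera_params):
    """Hypothetical: render the scene under the given camera parameters."""
    raise NotImplementedError

def true_class_prob(image, true_label):
    """Hypothetical: classifier's softmax probability of the true label."""
    raise NotImplementedError

def predicted_label(image):
    """Hypothetical: classifier's argmax prediction."""
    raise NotImplementedError

def cma_search(start_params, true_label, sigma=0.05, max_iters=100):
    # Start near a correctly classified camera configuration and minimize the
    # classifier's confidence in the true label.
    es = cma.CMAEvolutionStrategy(list(start_params), sigma,
                                  {'maxiter': max_iters, 'verbose': -9})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates,
                [true_class_prob(render(np.asarray(c)), true_label)
                 for c in candidates])
        best = np.asarray(es.result.xbest)
        if predicted_label(render(best)) != true_label:
            return best  # nearby in-distribution parameters that flip the label
    return None
```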

DiffTaichi: Differentiable Programming for Physical Simulation

2 code implementations ICLR 2020 Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, Frédo Durand

We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators.

Physical Simulations
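
DiffTaichi's ideas now ship with the Taichi language itself. Below is a toy sketch, assuming a recent Taichi release with `ti.ad.Tape`, of differentiating a loss through simulation kernels; it is illustrative only, not code from the paper.

```python
# Toy differentiable simulation in Taichi: one explicit-Euler step of particles
# pulled toward the origin, followed by reverse-mode gradients of a loss.
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.field(dtype=ti.f32, shape=n, needs_grad=True)   # positions
v = ti.field(dtype=ti.f32, shape=n, needs_grad=True)   # velocities
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def advance_v(dt: ti.f32):
    for i in range(n):
        v[i] += -x[i] * dt   # toy spring force toward the origin

@ti.kernel
def advance_x(dt: ti.f32):
    for i in range(n):
        x[i] += v[i] * dt

@ti.kernel
def compute_loss():
    for i in range(n):
        loss[None] += x[i] ** 2

x.fill(1.0)
with ti.ad.Tape(loss=loss):      # records kernel launches, then replays adjoints
    advance_v(0.1)
    advance_x(0.1)
    compute_loss()
print(x.grad.to_numpy())         # gradient of the loss w.r.t. the particle state
```

Splitting the update into two kernels follows Taichi's global data access rules for autodiff, which forbid reading a field element in the same kernel that wrote it.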

Differentiable Visual Computing

no code implementations 27 Apr 2019 Tzu-Mao Li

Simulating light transport in the presence of multi-bounce glossy effects and motion in 3D rendering is challenging due to the hard-to-sample high-contribution areas.
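
As a generic numerical illustration (not from the thesis) of why narrow high-contribution regions are hard: a near-delta spike in the integrand, like a glossy highlight, is missed by almost all uniform Monte Carlo samples, while a proposal concentrated near the spike estimates the same integral with far lower variance.

```python
# Uniform vs. importance sampling of an integrand with a narrow spike.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1e-4  # spike width; the true integral is ~sigma * sqrt(2*pi) ≈ 2.5e-4

def f(x):
    # Integrand with a narrow "glossy highlight"-like spike around x = 0.5.
    return np.exp(-((x - 0.5) ** 2) / (2 * sigma ** 2))

n = 100_000
# Uniform sampling over [0, 1]: almost every sample contributes ~0.
xs = rng.uniform(0.0, 1.0, n)
uniform_est = f(xs).mean()

# Importance sampling from a Gaussian matched to the spike: here f/pdf is
# constant, so the estimator has (in this toy) zero variance.
ys = rng.normal(0.5, sigma, n)
pdf = np.exp(-((ys - 0.5) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
inside = (ys >= 0.0) & (ys <= 1.0)
importance_est = np.where(inside, f(ys) / pdf, 0.0).mean()

print(uniform_est, importance_est)
```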

Inverse Path Tracing for Joint Material and Lighting Estimation

no code implementations 17 Mar 2019 Dejan Azinović, Tzu-Mao Li, Anton Kaplanyan, Matthias Nießner

We introduce Inverse Path Tracing, a novel approach to jointly estimate the material properties of objects and light sources in indoor scenes by using an invertible light transport simulation.
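
A hypothetical sketch of such an inverse-rendering loop is below: material and emission parameters are optimized by gradient descent so that simulated images match captured photographs. The differentiable path tracer is a stand-in, not the authors' renderer, and the parameterization (per-object albedo and roughness, per-light emission) is an assumption for illustration.

```python
# Hypothetical inverse-rendering loop in the spirit of Inverse Path Tracing.
import torch

def differentiable_path_trace(albedo, roughness, emission, camera_pose):
    """Hypothetical stand-in for a differentiable light transport simulation."""
    raise NotImplementedError

def optimize(captured_images, camera_poses, n_objects, n_lights, steps=500):
    albedo = torch.full((n_objects, 3), 0.5, requires_grad=True)
    roughness = torch.full((n_objects,), 0.5, requires_grad=True)
    emission = torch.ones(n_lights, 3, requires_grad=True)
    opt = torch.optim.Adam([albedo, roughness, emission], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for img, pose in zip(captured_images, camera_poses):
            rendered = differentiable_path_trace(albedo, roughness, emission, pose)
            loss = loss + torch.mean((rendered - img) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep parameters in physically valid ranges
            albedo.clamp_(0.0, 1.0)
            roughness.clamp_(1e-3, 1.0)
            emission.clamp_(min=0.0)
    return albedo, roughness, emission
```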

Differentiable Monte Carlo Ray Tracing through Edge Sampling

1 code implementation SIGGRAPH 2018 Tzu-Mao Li, Miika Aittala, Frédo Durand, Jaakko Lehtinen

We introduce a general-purpose differentiable ray tracer, which, to our knowledge, is the first comprehensive solution able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting.
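
The core difficulty the paper addresses is that visibility makes the rendering integrand discontinuous, so differentiating point samples misses the contribution of moving silhouette edges. The 1D toy below (not the paper's renderer) shows the effect: the derivative of the pixel integral is carried entirely by the discontinuity, and an explicit edge term recovers it where naive sample-wise differentiation returns zero.

```python
# 1D toy: I(theta) = ∫₀¹ step(theta - x) * c dx = c * theta, so dI/dtheta = c.
import numpy as np

c = 0.8          # radiance on the "foreground" side of the edge
theta = 0.3      # edge position (e.g., a silhouette controlled by geometry)

def integrand(x, theta):
    return np.where(x < theta, c, 0.0)

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 100_000)

# Pixel value: area-sampled Monte Carlo estimate of I(theta) = c * theta.
pixel = integrand(xs, theta).mean()

# Naive "autodiff" of the samples: the step function is flat almost
# everywhere, so differentiating the sampled integrand yields zero.
naive_grad = 0.0

# Edge sampling: the derivative equals the jump in the integrand across the
# edge times the edge's velocity with respect to theta (here 1).
edge_grad = (c - 0.0) * 1.0

# Finite-difference reference on the closed-form integral I(theta) = c * theta.
eps = 1e-4
fd_grad = (c * (theta + eps) - c * (theta - eps)) / (2 * eps)

print(pixel, naive_grad, edge_grad, fd_grad)  # ≈ 0.24, 0.0, 0.8, 0.8
```

In the actual method the edge term is itself estimated by Monte Carlo sampling of points on silhouette edges in 3D, rather than evaluated in closed form as in this toy.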
