1 code implementation • 29 Jun 2023 • Zeqi Gu, Abe Davis
In this work, we show that much of this low-level control can be achieved without additional training or any access to the internal features of the diffusion model.
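To make the black-box setting concrete, below is a minimal sketch of one generic training-free guidance scheme: ILVR-style low-frequency replacement, which steers a sampler using only its inputs and outputs. This is an illustration of the setting, not this paper's method; `denoise_step`, `add_noise`, and the bicubic `low_pass` filter are assumed interfaces.

```python
import torch
import torch.nn.functional as F

def low_pass(x, factor=8):
    """Cheap low-pass filter: downsample then upsample (assumption:
    bicubic resampling as the frequency cutoff)."""
    h, w = x.shape[-2:]
    x = F.interpolate(x, scale_factor=1 / factor, mode="bicubic", align_corners=False)
    return F.interpolate(x, size=(h, w), mode="bicubic", align_corners=False)

@torch.no_grad()
def guided_sample(denoise_step, add_noise, x_T, guide, num_steps):
    """ILVR-style guidance sketch: after every black-box denoising step,
    swap the sample's low frequencies for those of a re-noised guide image.
    `denoise_step(x, t)` and `add_noise(img, t)` are assumed sampler
    interfaces; no gradients or internal model features are used.
    """
    x = x_T
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)          # ordinary reverse-diffusion step
        guide_t = add_noise(guide, t)   # guide brought to noise level t
        # keep the model's high frequencies, impose the guide's low ones
        x = x - low_pass(x) + low_pass(guide_t)
    return x
```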
no code implementations • 3 Nov 2022 • Zeqi Gu, Wenqi Xian, Noah Snavely, Abe Davis
Based on this observation, we present a method for solving the factor matting problem that produces useful decompositions even for videos with complex cross-layer interactions such as splashes, shadows, and reflections.
1 code implementation • 31 Jul 2022 • Jan Ondras, Di Ni, Xi Deng, Zeqi Gu, Henry Zheng, Tapomayukh Bhattacharjee
Deforming soft objects remains a major challenge for robots, owing to the difficulty of defining the problem mathematically.
1 code implementation • 21 Dec 2019 • Yin Cui, Zeqi Gu, Dhruv Mahajan, Laurens van der Maaten, Serge Belongie, Ser-Nam Lim
We also investigate the interplay between dataset granularity and a variety of other factors, and find that fine-grained datasets are more difficult to learn from, more difficult to transfer to, more difficult to perform few-shot learning with, and more vulnerable to adversarial attacks.
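The excerpt does not state how granularity is measured, so the following is a hypothetical proxy for illustration only: the ratio of mean intra-class spread to mean inter-class centroid distance on fixed embeddings, where larger values suggest a more fine-grained dataset.

```python
import numpy as np

def granularity_proxy(features, labels):
    """Hypothetical fine-grainedness proxy (not the paper's measure):
    mean intra-class spread divided by mean distance between class
    centroids, computed on fixed embeddings of shape (N, D)."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # average distance of each sample to its own class centroid
    intra = np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # average pairwise distance between distinct class centroids
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    inter = dists[~np.eye(len(classes), dtype=bool)].mean()
    return intra / inter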
2 code implementations • ICCV 2019 • Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, Ser-Nam Lim
We show that a layer of the source model can be selected and perturbed without any knowledge of the target models, while still achieving high transferability.
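In that spirit, here is a sketch of one way to realize an intermediate-level attack: refine a baseline adversarial example so that the chosen source-model layer's features move further along the direction the baseline perturbation already pushed them. The interface below (`model`, `layer`, the step size, and the projection objective) is illustrative, not the paper's exact formulation.

```python
import torch

def intermediate_level_attack(model, layer, x, x_adv_init, eps, steps=10, lr=0.01):
    """Sketch: amplify a baseline adversarial example's effect at an
    intermediate layer of the source model. `layer` is any nn.Module
    inside `model`; no target-model knowledge is used."""
    feats = {}
    hook = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))

    with torch.no_grad():
        model(x)
        f_clean = feats["out"].detach()
        model(x_adv_init)
        # direction the baseline perturbation moved the features
        direction = (feats["out"] - f_clean).detach().flatten(1)

    x_adv = x_adv_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        model(x_adv)
        disp = (feats["out"] - f_clean).flatten(1)
        # maximize projection of the current feature displacement
        # onto the baseline direction
        loss = -(disp * direction).sum()
        loss.backward()
        with torch.no_grad():
            x_adv -= lr * x_adv.grad.sign()
            # stay inside the eps-ball and a valid image range
            x_adv.clamp_(min=x - eps, max=x + eps).clamp_(0, 1)
            x_adv.grad.zero_()
    hook.remove()
    return x_adv.detach()
```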
no code implementations • 20 Nov 2018 • Qian Huang, Zeqi Gu, Isay Katsman, Horace He, Pian Pawakapan, Zhiqiu Lin, Serge Belongie, Ser-Nam Lim
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models.
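As a concrete instance of such an input, here is the classic Fast Gradient Sign Method (Goodfellow et al., 2015), a standard baseline attack and not necessarily the method studied in this paper: a single gradient step that maximally increases the loss within an L-infinity ball.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one signed-gradient step that increases
    the classification loss, often enough to flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # ascend the loss
    return x_adv.clamp(0, 1).detach()    # keep a valid image
```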