no code implementations • ICML 2020 • Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka
We also propose a method for estimating how well a model based on domain-invariant representations will perform on the target domain, without having seen any target labels.
2 code implementations • 17 Oct 2024 • Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, David Yan, Dhruv Choudhary, Dingkang Wang, Geet Sethi, Guan Pang, Haoyu Ma, Ishan Misra, Ji Hou, Jialiang Wang, Kiran Jagadeesh, Kunpeng Li, Luxin Zhang, Mannat Singh, Mary Williamson, Matt Le, Matthew Yu, Mitesh Kumar Singh, Peizhao Zhang, Peter Vajda, Quentin Duval, Rohit Girdhar, Roshan Sumbaly, Sai Saketh Rambhatla, Sam Tsai, Samaneh Azadi, Samyak Datta, Sanyuan Chen, Sean Bell, Sharadh Ramaswamy, Shelly Sheynin, Siddharth Bhattacharya, Simran Motwani, Tao Xu, Tianhe Li, Tingbo Hou, Wei-Ning Hsu, Xi Yin, Xiaoliang Dai, Yaniv Taigman, Yaqiao Luo, Yen-Cheng Liu, Yi-Chiao Wu, Yue Zhao, Yuval Kirstain, Zecheng He, Zijian He, Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu, Arun Mallya, Baishan Guo, Boris Araya, Breena Kerr, Carleigh Wood, Ce Liu, Cen Peng, Dimitry Vengertsev, Edgar Schonfeld, Elliot Blanchard, Felix Juefei-Xu, Fraylie Nord, Jeff Liang, John Hoffman, Jonas Kohler, Kaolin Fire, Karthik Sivakumar, Lawrence Chen, Licheng Yu, Luya Gao, Markos Georgopoulos, Rashel Moritz, Sara K. Sampson, Shikai Li, Simone Parmeggiani, Steve Fine, Tara Fowler, Vladan Petrovic, Yuming Du
Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation.
no code implementations • CVPR 2024 • Bichen Wu, Ching-Yao Chuang, Xiaoyan Wang, Yichen Jia, Kapil Krishnakumar, Tong Xiao, Feng Liang, Licheng Yu, Peter Vajda
In this paper, we introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications.
no code implementations • 22 Jun 2023 • Khashayar Gatmiry, Zhiyuan Li, Ching-Yao Chuang, Sashank Reddi, Tengyu Ma, Stefanie Jegelka
Recent works on over-parameterized neural networks have shown that the stochasticity in optimizers has the implicit regularization effect of minimizing the sharpness of the loss function (in particular, the trace of its Hessian) over the family of zero-loss solutions.
1 code implementation • 31 Jan 2023 • Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka
Machine learning models have been shown to inherit biases from their training datasets.
1 code implementation • 6 Oct 2022 • Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis
Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., the geometric distances.
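As a minimal illustration of the alignment described above (not the paper's method): when two discrete distributions have the same number of samples and uniform weights, optimal transport reduces to a minimum-cost assignment problem, solvable with `scipy.optimize.linear_sum_assignment`. The variable names and the squared-Euclidean cost are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two small point clouds with equal sample counts and uniform weights:
# discrete optimal transport then reduces to minimum-cost assignment.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))           # source samples
y = rng.normal(loc=2.0, size=(5, 2))  # target samples

# Cost matrix: squared Euclidean (geometric) distance between every pair.
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

row, col = linear_sum_assignment(cost)  # optimal one-to-one pairing
transport_cost = cost[row, col].mean()  # average cost of the alignment
print(round(float(transport_cost), 4))
```

For general (unbalanced or non-uniform) distributions one would instead solve the full transportation linear program, e.g. via the POT library's `ot.emd`.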
1 code implementation • 4 Oct 2022 • Ching-Yao Chuang, Stefanie Jegelka
Understanding generalization and robustness of machine learning models fundamentally relies on assuming an appropriate metric on the data space.
1 code implementation • CVPR 2022 • Ching-Yao Chuang, R Devon Hjelm, Xin Wang, Vibhav Vineet, Neel Joshi, Antonio Torralba, Stefanie Jegelka, Yale Song
Contrastive learning relies on an assumption that positive pairs contain related views, e.g., patches of an image or co-occurring multimodal signals of a video, that share certain underlying information about an instance.
1 code implementation • NeurIPS 2021 • Ching-Yao Chuang, Youssef Mroueh, Kristjan Greenewald, Antonio Torralba, Stefanie Jegelka
Understanding the generalization of deep neural networks is one of the most important tasks in deep learning.
1 code implementation • ICLR 2021 • Ching-Yao Chuang, Youssef Mroueh
Training classifiers under fairness constraints, such as group fairness, regularizes the disparities of predictions between the groups.
1 code implementation • ICLR 2021 • Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, Stefanie Jegelka
How can you sample good negative examples for contrastive learning?
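One simple way to emphasize good (hard) negatives is to importance-weight each negative by its similarity to the anchor, so that more similar, harder negatives contribute more to the contrastive loss. The sketch below is a toy illustration of that idea, not the paper's exact estimator; the function name `hard_negative_weights` and the temperature `beta` are assumptions for this example.

```python
import numpy as np

def hard_negative_weights(sim_neg, beta=1.0):
    """Reweight negatives by their similarity to the anchor, so harder
    (more similar) negatives get larger weight. `sim_neg` holds the
    anchor-negative similarities; `beta` controls the concentration."""
    w = np.exp(beta * sim_neg)
    return w / w.sum()

rng = np.random.default_rng(1)
sim_neg = rng.uniform(-1, 1, size=8)  # toy anchor-negative similarities
w = hard_negative_weights(sim_neg, beta=2.0)

# Because exp is monotonic, the most similar (hardest) negative
# always receives the largest weight.
print(int(np.argmax(w)) == int(np.argmax(sim_neg)))  # True
```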
1 code implementation • 6 Jul 2020 • Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka
When machine learning models are deployed on a test distribution that differs from the training distribution, they can perform poorly while still overestimating their own performance.
1 code implementation • NeurIPS 2020 • Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, Stefanie Jegelka
A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples.
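Contrasting similar and dissimilar pairs is commonly implemented with an InfoNCE-style loss: pull an anchor toward its positive view and push it away from negatives under a temperature-scaled similarity. The sketch below is a generic version of that loss, not the debiased objective of this paper; the names and the temperature `tau` are illustrative.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """Generic InfoNCE-style contrastive loss for a single anchor:
    maximize similarity to the positive view relative to the negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
anchor = rng.normal(size=4)
positive = anchor + 0.01 * rng.normal(size=4)   # semantically similar view
negatives = [rng.normal(size=4) for _ in range(5)]

loss = info_nce(anchor, positive, negatives)
print(loss > 0)  # True: the loss is strictly positive whenever negatives exist
```

In practice some "negatives" drawn at random share the anchor's latent class; correcting for such false negatives is the debiasing problem the paper addresses.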
1 code implementation • 13 Oct 2019 • Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka
In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain.
no code implementations • CVPR 2018 • Ching-Yao Chuang, Jiaman Li, Antonio Torralba, Sanja Fidler
We address the problem of affordance reasoning in diverse scenes that appear in the real world.
1 code implementation • ICCV 2017 • Tseng-Hung Chen, Yuan-Hong Liao, Ching-Yao Chuang, Wan-Ting Hsu, Jianlong Fu, Min Sun
The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain.
no code implementations • 12 Nov 2016 • Kuo-Hao Zeng, Tseng-Hung Chen, Ching-Yao Chuang, Yuan-Hong Liao, Juan Carlos Niebles, Min Sun
Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated.