Search Results for author: Ching-Yao Chuang

Found 16 papers, 11 papers with code

Understanding and Estimating the Adaptability of Domain-Invariant Representations

no code implementations · ICML 2020 · Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

We also propose a method for estimating how well a model based on domain-invariant representations will perform on the target domain, without having seen any target labels.

Model Selection · Unsupervised Domain Adaptation

Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis

no code implementations · 20 Dec 2023 · Bichen Wu, Ching-Yao Chuang, Xiaoyan Wang, Yichen Jia, Kapil Krishnakumar, Tong Xiao, Feng Liang, Licheng Yu, Peter Vajda

In this paper, we introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications.

Data Augmentation · Video Editing · +1

The Inductive Bias of Flatness Regularization for Deep Matrix Factorization

no code implementations · 22 Jun 2023 · Khashayar Gatmiry, Zhiyuan Li, Ching-Yao Chuang, Sashank Reddi, Tengyu Ma, Stefanie Jegelka

Recent works on over-parameterized neural networks have shown that the stochasticity in optimizers has the implicit regularization effect of minimizing the sharpness of the loss function (in particular, the trace of its Hessian) over the family of zero-loss solutions.

Inductive Bias
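
To make the sharpness notion mentioned in the abstract above concrete, the Hessian trace can be estimated with Hutchinson's trick using Hessian-vector products. The sketch below is a generic Python illustration with a placeholder model and data, not the paper's analysis:

import torch

def hessian_trace(loss, params, num_samples=10):
    # Hutchinson estimator: tr(H) ~= E[v^T H v] with Rademacher vectors v.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(num_samples):
        vs = [torch.randint_like(p, 2) * 2.0 - 1.0 for p in params]  # entries in {-1, +1}
        # Hessian-vector product via a second differentiation of the gradients.
        hvps = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        estimate = estimate + sum((v * h).sum() for v, h in zip(vs, hvps))
    return estimate / num_samples

# Toy usage with a placeholder model and data (illustrative only):
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
print(hessian_trace(loss, list(model.parameters())).item())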

Debiasing Vision-Language Models via Biased Prompts

1 code implementation · 31 Jan 2023 · Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka

Machine learning models have been shown to inherit biases from their training datasets.

InfoOT: Information Maximizing Optimal Transport

1 code implementation · 6 Oct 2022 · Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis

Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., the geometric distances.

Domain Adaptation · Retrieval
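
To illustrate the transport-cost minimization described in the abstract above (not the information-maximizing variant itself), here is a minimal NumPy/SciPy sketch: with uniform weights and equally many points on each side, optimal transport reduces to a minimum-cost assignment over pairwise geometric distances. The sample sizes and distributions are placeholder assumptions:

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
xs = rng.normal(size=(50, 2))          # source samples
xt = rng.normal(size=(50, 2)) + 2.0    # target samples, shifted

# Pairwise geometric distances form the transportation cost matrix.
cost = np.linalg.norm(xs[:, None, :] - xt[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
print("average transport cost:", cost[rows, cols].mean())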

Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks

1 code implementation · 4 Oct 2022 · Ching-Yao Chuang, Stefanie Jegelka

Understanding generalization and robustness of machine learning models fundamentally relies on assuming an appropriate metric on the data space.

Graph Classification

Robust Contrastive Learning against Noisy Views

1 code implementation · CVPR 2022 · Ching-Yao Chuang, R Devon Hjelm, Xin Wang, Vibhav Vineet, Neel Joshi, Antonio Torralba, Stefanie Jegelka, Yale Song

Contrastive learning relies on an assumption that positive pairs contain related views, e.g., patches of an image or co-occurring multimodal signals of a video, that share certain underlying information about an instance.

Binary Classification · Contrastive Learning
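
As a generic reference for the contrastive setup described above, the snippet below sketches a standard InfoNCE loss over two views of a batch; the embedding dimensions and temperature are illustrative assumptions, and this is not the robust loss proposed in the paper:

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: embeddings of two views of the same batch of instances.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # similarity of every cross-view pair
    labels = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))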

Fair Mixup: Fairness via Interpolation

1 code implementation · ICLR 2021 · Ching-Yao Chuang, Youssef Mroueh

Training classifiers under fairness constraints such as group fairness regularizes the disparities of predictions between the groups.

Data Augmentation · Fairness
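
A hedged sketch of the interpolation idea in the title: mix inputs from two demographic groups along a path and penalize how much the mean prediction varies along it, which upper-bounds the demographic parity gap. The model, group batches, and discretization below are illustrative assumptions, not the paper's exact regularizer:

import torch

def mixup_parity_penalty(model, x_a, x_b, num_lambdas=11):
    # x_a, x_b: equally sized batches from the two demographic groups.
    lams = torch.linspace(0.0, 1.0, num_lambdas)
    means = [model(lam * x_a + (1.0 - lam) * x_b).mean() for lam in lams]
    # Total variation of the mean prediction along the mixup path; it upper-bounds
    # |E[f | group A] - E[f | group B]|, the demographic parity gap.
    return sum((means[i + 1] - means[i]).abs() for i in range(num_lambdas - 1))

Adding such a penalty to the classification loss encourages predictions to change smoothly between the groups rather than jump at the group boundary.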

Estimating Generalization under Distribution Shifts via Domain-Invariant Representations

1 code implementation · 6 Jul 2020 · Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

When machine learning models are deployed on a test distribution different from the training distribution, they can perform poorly, but overestimate their performance.

Domain Adaptation · Model Selection

Debiased Contrastive Learning

1 code implementation · NeurIPS 2020 · Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, Stefanie Jegelka

A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples.

Contrastive Learning · Generalization Bounds · +2
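
As a hedged illustration of the sampling-bias issue this paper targets: when "dissimilar" samples are drawn uniformly, a fraction equal to the class prior are actually positives. The sketch below corrects the negative term accordingly; the class prior, temperature, similarity shapes, and clamping constant are illustrative assumptions and may differ from the paper's estimator:

import math
import torch

def debiased_contrastive_loss(pos_sim, neg_sim, tau_plus=0.1, temperature=0.5):
    # pos_sim: (batch,) similarity to the positive view.
    # neg_sim: (batch, n_neg) similarities to uniformly sampled "negatives".
    n = neg_sim.size(1)
    pos = torch.exp(pos_sim / temperature)
    neg = torch.exp(neg_sim / temperature).sum(dim=1)
    # Remove the expected contribution of false negatives, then clamp to keep
    # the corrected negative term positive.
    corrected = (neg - n * tau_plus * pos) / (1.0 - tau_plus)
    corrected = torch.clamp(corrected, min=n * math.exp(-1.0 / temperature))
    return -torch.log(pos / (pos + corrected)).mean()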

The Role of Embedding Complexity in Domain-invariant Representations

1 code implementation · 13 Oct 2019 · Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain.

Unsupervised Domain Adaptation
