1 code implementation • 8 Feb 2023 • Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi Jaakkola
The new models reduce to PFGM when $D{=}1$ and to diffusion models when $D{\to}\infty$ (a toy check of the $D{\to}\infty$ limit follows this entry).
Ranked #1 on Image Generation on FFHQ 64x64 - 4x upscaling
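The $D{\to}\infty$ diffusion limit noted above rests on a standard concentration fact: the norm of a $D$-dimensional standard Gaussian concentrates around $\sqrt{D}$, with relative spread shrinking as $D$ grows, so an isotropic perturbation in the augmented dimensions behaves like a Gaussian perturbation of fixed scale. Below is a minimal numerical check of that fact (a toy sketch, not the paper's code; the dimensions and sample count are arbitrary).

```python
# Toy check (not the paper's code): the norm of a D-dimensional standard
# Gaussian concentrates around sqrt(D), with relative spread shrinking
# roughly like 1/sqrt(D).  This concentration underlies the D -> infinity
# (diffusion) limit of the augmented perturbation.
import numpy as np

rng = np.random.default_rng(0)
for D in [1, 10, 100, 1000]:
    samples = rng.standard_normal(size=(10_000, D))
    radii = np.linalg.norm(samples, axis=1)
    print(f"D={D:>5}: mean radius {radii.mean():7.3f} (~sqrt(D)={np.sqrt(D):7.3f}), "
          f"relative std {radii.std() / radii.mean():.4f}")
```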
1 code implementation • 1 Feb 2023 • Yilun Xu, Shangyuan Tong, Tommi Jaakkola
We show that the procedure indeed helps in the challenging intermediate regime by reducing (the trace of) the covariance of training targets (an illustrative sketch follows this entry).
Ranked #5 on Image Generation on CIFAR-10
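As a rough illustration of the covariance-reduction claim above, the sketch below compares the trace of the empirical covariance of score-matching training targets when each target is built from a single source sample versus a posterior-weighted average over a larger reference batch. The 2-D toy data, the Gaussian perturbation kernel, and the exact weighting are assumptions for illustration; the paper's actual stable-target estimator may differ in its weighting and normalization details.

```python
# Toy illustration (not the paper's code) of how averaging over a reference
# batch shrinks the trace of the covariance of score-matching targets.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((2_000, 2)) * 3.0   # toy 2-D "dataset"
sigma = 1.5                                    # an intermediate noise level

def targets(n_ref: int, n_draws: int = 5_000) -> np.ndarray:
    """Return n_draws training targets, each built from n_ref reference samples."""
    out = np.empty((n_draws, 2))
    for i in range(n_draws):
        x0 = data[rng.integers(len(data))]
        xt = x0 + sigma * rng.standard_normal(2)          # perturbed point
        ref = data[rng.integers(len(data), size=n_ref)]   # reference batch
        ref[0] = x0                                       # keep the true source
        logw = -np.sum((xt - ref) ** 2, axis=1) / (2 * sigma ** 2)
        w = np.exp(logw - logw.max()); w /= w.sum()       # posterior weights
        out[i] = (w[:, None] * (ref - xt)).sum(axis=0) / sigma ** 2
    return out

for n_ref in [1, 16, 256]:
    cov = np.cov(targets(n_ref), rowvar=False)
    print(f"reference batch size {n_ref:>3}: trace of target covariance {np.trace(cov):.3f}")
```

With a reference batch of size 1 this reduces to the standard single-sample denoising target; larger reference batches drive the trace down toward the covariance of the true score itself.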
1 code implementation • 22 Sep 2022 • Yilun Xu, Ziming Liu, Max Tegmark, Tommi Jaakkola
We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to the Poisson equation); a toy field computation follows this entry.
Ranked #20 on Image Generation on CIFAR-10
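A minimal sketch of the field described above (the toy 2-D data and plain unit normalization are assumptions for illustration, not the paper's implementation): the data act as charges on the $z=0$ hyperplane of an $(N+1)$-dimensional space, and the empirical field at an augmented point is a sum of inverse-power contributions pointing away from the charges.

```python
# Toy sketch (not the paper's implementation) of the empirical Poisson field
# associated with a dataset: charges sit on the z = 0 hyperplane of R^{N+1},
# and the field at an augmented point x~ is the average of
# (x~ - y~_i) / ||x~ - y~_i||^{N+1} over the data charges y~_i.
import numpy as np

rng = np.random.default_rng(0)
N = 2                                   # data dimension
data = rng.standard_normal((500, N))    # toy "charges" in R^N

def poisson_field_direction(x: np.ndarray, z: float) -> np.ndarray:
    """Unit direction of the empirical field at the augmented point (x, z), z > 0."""
    aug_x = np.append(x, z)                                  # point in R^{N+1}
    aug_data = np.hstack([data, np.zeros((len(data), 1))])   # charges at z = 0
    diff = aug_x - aug_data                                  # (num_points, N+1)
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    field = (diff / dist ** (N + 1)).mean(axis=0)
    return field / np.linalg.norm(field)

# Far above the data the field is nearly parallel to +z; close to the plane,
# off-center points are also pushed away from the dense region near the origin.
print(poisson_field_direction(np.zeros(N), z=50.0))
print(poisson_field_direction(np.array([2.0, 0.0]), z=0.5))
```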
1 code implementation • ICLR 2022 • Yilun Xu, Hao He, Tianxiao Shen, Tommi Jaakkola
We propose to identify directions invariant to a given classifier so that these directions can be controlled in tasks such as style transfer.
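For intuition only: in the special case of a linear (logistic-regression) classifier, the directions invariant to the classifier are exactly those orthogonal to its weight vector, so an edit can be made classifier-preserving by projecting out the component along the weights. The linear classifier and the known weight vector are assumed simplifications; the paper addresses general classifiers.

```python
# Linear-case sketch (an assumed simplification, not the paper's general
# method): for f(x) = sigmoid(w . x + b), directions orthogonal to w leave
# the prediction unchanged, so projecting an edit onto that orthogonal
# complement yields a classifier-invariant edit.
import numpy as np

rng = np.random.default_rng(0)
d = 16
w = rng.standard_normal(d)        # classifier weights (assumed known)
x = rng.standard_normal(d)        # input to edit
edit = rng.standard_normal(d)     # desired edit direction

# Remove the component of the edit along w.
invariant_edit = edit - (edit @ w) / (w @ w) * w

logit = lambda v: w @ v           # bias omitted; it cancels in the differences
print("logit change, raw edit:      ", logit(x + edit) - logit(x))
print("logit change, invariant edit:", logit(x + invariant_edit) - logit(x))
```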
no code implementations • 14 Nov 2021 • Yilun Xu, Ziyang Liu, Xingming Wu, Weihai Chen, Changyun Wen, Zhengguo Li
For the former challenge, a spatially varying convolution (SVC) is designed to process Bayer images captured with varying exposures.
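The sketch below shows only the generic idea of a spatially varying convolution, where the kernel applied at each pixel is selected per position, here by a 2x2-periodic index map meant to mimic a Bayer (RGGB) layout. The kernel bank, index map, and padding are assumptions for illustration; the SVC designed in the paper is tailored to varying-exposure Bayer inputs and will differ.

```python
# Generic spatially varying convolution: the 3x3 kernel used at each pixel is
# chosen by an integer index map instead of being shared across the image.
import numpy as np

def spatially_varying_conv(img, kernel_bank, index_map):
    """img: (H, W); kernel_bank: (K, 3, 3); index_map: (H, W) ints in [0, K)."""
    H, W = img.shape
    padded = np.pad(img, 1, mode="reflect")
    out = np.zeros((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3]
            out[i, j] = np.sum(patch * kernel_bank[index_map[i, j]])
    return out

# Hypothetical example: one kernel per position in a 2x2 (Bayer-like) pattern.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernels = rng.random((4, 3, 3))
kernels /= kernels.sum(axis=(1, 2), keepdims=True)
index_map = (np.arange(8)[:, None] % 2) * 2 + (np.arange(8)[None, :] % 2)
print(spatially_varying_conv(img, kernels, index_map).shape)  # (8, 8)
```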
no code implementations • 14 Nov 2021 • Yilun Xu, Zhengguo Li, Weihai Chen, Changyun Wen
It is challenging to align the brightness distributions of images with different exposures because of possible color distortion and loss of detail in the brightest and darkest regions of the input images.
2 code implementations • 19 Oct 2021 • Yilun Xu, Tommi Jaakkola
We further demonstrate the impact of optimizing such transfer risk on two controlled settings, each representing a different pattern of environment shift, as well as on two real-world datasets.
no code implementations • 5 Jun 2021 • Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville
Can models with particular structure avoid being biased towards spurious correlation in out-of-distribution (OOD) generalization?
1 code implementation • ICLR 2021 • Yilun Xu, Yang Song, Sahaj Garg, Linyuan Gong, Rui Shu, Aditya Grover, Stefano Ermon
Experimentally, we demonstrate in several image and audio generation tasks that sample quality degrades gracefully as we reduce the computational budget for sampling.
no code implementations • ECCV 2020 • Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang Zhang, Yizhou Wang
In this paper, we propose a novel information-theoretic approach, Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning, which has two promising properties: (i) it can effectively exploit the information across different modalities of unlabeled data points to facilitate training a classifier for each modality, and (ii) it has a theoretical guarantee of identifying Bayesian classifiers, i.e., the ground-truth posteriors of all modalities.
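For background on the quantity in the method's name, the sketch below estimates the total correlation of discrete variables, $TC(X_1,\dots,X_n) = \sum_i H(X_i) - H(X_1,\dots,X_n)$, on toy two-view data sharing a latent label. The toy data and the plug-in estimator are assumptions for illustration; the paper's semi-supervised objective, built around per-modality classifiers, is not reproduced here.

```python
# Plug-in estimate of total correlation, TC = sum_i H(X_i) - H(X_1, ..., X_n),
# on toy data where two noisy "views" share a latent label.
import numpy as np
from collections import Counter

def entropy(samples) -> float:
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def total_correlation(columns) -> float:
    """columns: list of equal-length sequences, one per variable."""
    joint = list(zip(*columns))
    return sum(entropy(list(c)) for c in columns) - entropy(joint)

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=10_000)                   # shared latent "label"
x1 = np.where(rng.random(10_000) < 0.9, z, 1 - z)     # noisy view 1
x2 = np.where(rng.random(10_000) < 0.9, z, 1 - z)     # noisy view 2
print(total_correlation([x1.tolist(), x2.tolist()]))                          # > 0
print(total_correlation([x1.tolist(), rng.integers(0, 2, 10_000).tolist()]))  # ~ 0
```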
1 code implementation • ICLR 2020 • Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, Stefano Ermon
We propose a new framework for reasoning about information in complex systems.
no code implementations • NeurIPS 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang
To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied straightforwardly to any existing classification neural network without any auxiliary information.
Ranked #33 on Image Classification on Clothing1M (using extra training data)
2 code implementations • 8 Sep 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang
To the best of our knowledge, $\mathcal{L}_{DMI}$ is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied straightforwardly to any existing classification neural network without any auxiliary information (a hedged sketch of a DMI-style loss follows this entry).
Ranked #33 on Image Classification on Clothing1M
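A hedged sketch of a DMI-style loss, matching the description in the two entries above: the loss is the negative log of the absolute determinant of the empirical joint distribution matrix between the classifier's predicted class distribution and the observed (possibly noisy) labels in a batch. The batch normalization, the epsilon stabilizer, and the exact scaling here are assumptions and may differ from the authors' released code.

```python
# DMI-style loss sketch: -log |det(Q)|, where Q is the empirical joint
# distribution matrix between predicted class probabilities and noisy labels.
import torch
import torch.nn.functional as F

def dmi_loss(logits: torch.Tensor, noisy_labels: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """logits: (B, C); noisy_labels: (B,) integer class ids."""
    probs = F.softmax(logits, dim=1)                            # (B, C)
    onehot = F.one_hot(noisy_labels, probs.shape[1]).float()    # (B, C)
    joint = probs.t() @ onehot / logits.shape[0]                # empirical C x C joint
    return -torch.log(torch.abs(torch.det(joint)) + eps)

# Usage: drop-in replacement for cross-entropy on a standard classifier.
logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = dmi_loss(logits, labels)
loss.backward()
print(float(loss))
```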
1 code implementation • ICLR 2019 • Peng Cao, Yilun Xu, Yuqing Kong, Yizhou Wang
Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth.