Search Results for author: Yilun Xu

Found 14 papers, 9 papers with code

Stable Target Field for Reduced Variance Score Estimation in Diffusion Models

1 code implementation • 1 Feb 2023 • Yilun Xu, Shangyuan Tong, Tommi Jaakkola

We show that the procedure indeed helps in the challenging intermediate regime by reducing (the trace of) the covariance of training targets.

Denoising Image Generation
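As a rough illustration of the variance-reduction idea in this paper, here is a NumPy sketch in which the training target at a noised point is a posterior-weighted average over a reference batch rather than the single clean sample that produced it. Function and variable names are mine, not the paper's API:

```python
import numpy as np

def stf_target(x_t, ref_batch, sigma):
    """Posterior-weighted training target at a noised point x_t.

    Weights each reference sample by p(x_0 | x_t) under an isotropic
    Gaussian noise model, then averages -- lowering the variance of the
    target compared to regressing on a single clean sample.
    """
    logw = -np.sum((x_t - ref_batch) ** 2, axis=1) / (2 * sigma ** 2)
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()
    return w @ ref_batch            # convex combination of the batch
```

With small `sigma` the weights collapse onto the nearest reference point; with large `sigma` the target tends toward the batch mean. This matches the intuition that averaging matters most in the intermediate noise regime the abstract mentions.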

Poisson Flow Generative Models

1 code implementation • 22 Sep 2022 • Yilun Xu, Ziming Liu, Max Tegmark, Tommi Jaakkola

We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to the Poisson equation).

Image Generation
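The sentence above can be made concrete with a small sketch that evaluates this "electric field" directly from a data batch. This is a naive O(N) sum meant only to illustrate the geometry; the names are mine, not the paper's code:

```python
import numpy as np

def poisson_field(x, data, eps=1e-8):
    """Empirical Poisson field at an augmented point x.

    Data points are treated as charges on the z = 0 hyperplane of a space
    with one extra dimension; the field is the average inverse-power-law
    displacement, i.e. the gradient of the Green's function of the
    Poisson equation in n = len(x) dimensions.
    """
    aug = np.hstack([data, np.zeros((len(data), 1))])  # embed at z = 0
    diff = x - aug                                     # (N, n) displacements
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + eps
    n = x.shape[0]
    return (diff / dist ** n).mean(axis=0)
```

By symmetry, the field of two charges placed at ±1 on the data axis cancels along the data direction and points away from the hyperplane in $z$, which is the direction the generative flow follows.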

Controlling Directions Orthogonal to a Classifier

1 code implementation • ICLR 2022 • Yilun Xu, Hao He, Tianxiao Shen, Tommi Jaakkola

We propose to identify directions invariant to a given classifier so that these directions can be controlled in tasks such as style transfer.

Domain Adaptation • Fairness +1
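For a linear classifier the idea has a one-line form: directions invariant to the classifier are exactly those orthogonal to its weight vector, so any edit direction can be projected onto that subspace. This is a sketch of the linear special case only, not the paper's general method:

```python
import numpy as np

def project_orthogonal(v, w):
    """Remove the component of v along the classifier weight w.

    The returned direction leaves a linear classifier's logit w @ x
    unchanged, so moving along it cannot change the prediction.
    """
    w = w / np.linalg.norm(w)
    return v - (v @ w) * w
```

Moving an input along the projected direction changes its style attributes without crossing the classifier's decision boundary, which is the property the abstract exploits for tasks such as style transfer.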

Deep Joint Demosaicing and High Dynamic Range Imaging within a Single Shot

no code implementations • 14 Nov 2021 • Yilun Xu, Ziyang Liu, Xingming Wu, Weihai Chen, Changyun Wen, Zhengguo Li

For the former challenge, a spatially varying convolution (SVC) is designed to process Bayer images captured with varying exposures.

Demosaicking

Novel Intensity Mapping Functions: Weighted Histogram Averaging

no code implementations • 14 Nov 2021 • Yilun Xu, Zhengguo Li, Weihai Chen, Changyun Wen

It is challenging to align the brightness distribution of the images with different exposures due to possible color distortion and loss of details in the brightest and darkest regions of input images.

Learning Representations that Support Robust Transfer of Predictors

2 code implementations • 19 Oct 2021 • Yilun Xu, Tommi Jaakkola

We further demonstrate the impact of optimizing such transfer risk on two controlled settings, each representing a different pattern of environment shift, as well as on two real-world datasets.

Domain Generalization • Out-of-Distribution Generalization

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?

no code implementations • 5 Jun 2021 • Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville

Can models with particular structure avoid being biased towards spurious correlation in out-of-distribution (OOD) generalization?

Out-of-Distribution Generalization

Anytime Sampling for Autoregressive Models via Ordered Autoencoding

1 code implementation • ICLR 2021 • Yilun Xu, Yang Song, Sahaj Garg, Linyuan Gong, Rui Shu, Aditya Grover, Stefano Ermon

Experimentally, we demonstrate in several image and audio generation tasks that sample quality degrades gracefully as we reduce the computational budget for sampling.

Audio Generation

TCGM: An Information-Theoretic Framework for Semi-Supervised Multi-Modality Learning

no code implementations • ECCV 2020 • Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang Zhang, Yizhou Wang

In this paper, we propose a novel information-theoretic approach, Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning, which is endowed with promising properties: (i) it can effectively utilize the information across different modalities of unlabeled data points to facilitate training classifiers of each modality; (ii) it has a theoretical guarantee to identify Bayesian classifiers, i.e., the ground-truth posteriors of all modalities.

Disease Prediction • Emotion Recognition +1

L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise

no code implementations • NeurIPS 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang

To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural network straightforwardly without any auxiliary information.

Ranked #33 on Image Classification on Clothing1M (using extra training data)

Learning with noisy labels

L_DMI: An Information-theoretic Noise-robust Loss Function

2 code implementations • 8 Sep 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang

To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural network straightforwardly without any auxiliary information.

Learning with noisy labels
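Going by the abstract, L_DMI is the negative log-determinant of the empirical joint distribution matrix between the classifier's outputs and the (possibly noisy) labels. A minimal NumPy sketch under that reading, not the authors' released code:

```python
import numpy as np

def dmi_loss(probs, labels, n_classes, eps=1e-12):
    """Determinant-based mutual information loss.

    probs:  (N, C) softmax outputs.
    labels: (N,) integer labels, possibly noisy.
    Under instance-independent noise the noisy joint matrix is the clean
    one multiplied by a fixed transition matrix, so the loss only shifts
    by a constant -- the source of the robustness claim above.
    """
    onehot = np.eye(n_classes)[labels]
    U = probs.T @ onehot / len(labels)  # empirical joint distribution matrix
    return -np.log(np.abs(np.linalg.det(U)) + eps)
```

A confident, accurate predictor yields a near-diagonal U with large |det|, hence a small loss; an uninformative predictor makes the rows of U linearly dependent, so |det| collapses toward zero and the loss blows up.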

Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds

1 code implementation • ICLR 2019 • Peng Cao, Yilun Xu, Yuqing Kong, Yizhou Wang

Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth.
