no code implementations • 15 Mar 2024 • Haoyue Tang, Tian Xie, Aosong Feng, Hanyu Wang, Chenyang Zhang, Yang Bai
Solving image inverse problems (e.g., super-resolution and inpainting) requires generating a high-fidelity image that matches the given input (the low-resolution image or the masked image).
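As a compact formal sketch (standard inverse-problem notation, not taken from this paper: the operator $\mathcal{A}$, noise $n$, and regularizer $R$ are illustrative assumptions), both tasks amount to recovering an image $x$ from a degraded observation $y$, where $\mathcal{A}$ is downsampling for super-resolution and elementwise masking for inpainting:

$$y = \mathcal{A}(x) + n, \qquad \hat{x} = \arg\min_{x} \tfrac{1}{2}\,\lVert y - \mathcal{A}(x) \rVert_2^2 + \lambda\, R(x).$$

The fidelity term enforces consistency with the given input, while the prior $R$ (here a placeholder for a learned generative prior) selects a plausible high-fidelity reconstruction among the many images consistent with $y$.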
1 code implementation • NeurIPS 2021 • Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu
This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ to extract representations, yet they rely on a usually uninformative prior distribution $p(z)$ to define the joint distribution, which can cause problems such as posterior collapse and manifold mismatch.
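For context (standard VAE notation matching the abstract's symbols; the ELBO itself is added here as background, not quoted from the paper), the joint distribution in question is $p(x,z) = p(z)\,p(x|z)$, and such models are trained by maximizing the evidence lower bound

$$\mathcal{L}(x) = \mathbb{E}_{q(z|x)}\big[\log p(x|z)\big] - \mathrm{KL}\big(q(z|x)\,\Vert\,p(z)\big).$$

Posterior collapse is the regime where the KL term is driven to zero, so $q(z|x) \approx p(z)$ for every $x$ and the latent code carries no information about the input; an uninformative $p(z)$ makes this failure mode easier to fall into.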
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor owing to their domain-specific correlation, even though only the semantic factor causes the output.
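A minimal sketch of the generative picture the abstract describes (the symbols $s$, $v$, and $p_d$ are assumed here for illustration): the input $x$ is generated from a semantic factor $s$ and a variation factor $v$ that are correlated within each domain $d$, while the label depends on $s$ alone,

$$(s, v) \sim p_d(s, v), \qquad x \sim p(x \mid s, v), \qquad y \sim p(y \mid s).$$

A representation that entangles $v$ with $s$ inherits the domain-specific correlation $p_d(s,v)$ and therefore degrades when that correlation shifts out of distribution.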
no code implementations • 9 Oct 2020 • Haoyue Tang, Philippe Ciblat, Jintao Wang, Michele Wigger, Roy D. Yates
Inspired by the solution to this relaxed problem, we propose a practical cache updating strategy that satisfies all the constraints of the original problem.
Information Theory