1 code implementation • 18 Jun 2024 • Haque Ishfaq, Yixin Tan, Yu Yang, Qingfeng Lan, Jianfeng Lu, A. Rupam Mahmood, Doina Precup, Pan Xu
Empirically, we show that in tasks where deep exploration is necessary, our proposed algorithms that combine FGTS and approximate sampling significantly outperform other strong baselines.
no code implementations • 8 May 2024 • Wei Deng, Weijian Luo, Yixin Tan, Marin Biloš, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen
To improve the scalability while preserving efficient transportation plans, we leverage variational inference to linearize the forward score functions (variational scores) of SB and restore simulation-free properties in training backward scores.
1 code implementation • 26 Oct 2023 • Xiuyuan Cheng, Jianfeng Lu, Yixin Tan, Yao Xie
Leveraging the exponential convergence of the proximal gradient descent (GD) in Wasserstein space, we prove the Kullback-Leibler (KL) guarantee of data generation by a JKO flow model to be $O(\varepsilon^2)$ when using $N \lesssim \log (1/\varepsilon)$ many JKO steps ($N$ Residual Blocks in the flow) where $\varepsilon $ is the error in the per-step first-order condition.
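The logarithmic step count follows directly from the exponential convergence. A hedged sketch of the argument, using a generic contraction factor $\rho \in (0,1)$ (an assumption for illustration, not a constant from the paper):

```latex
% Exponential convergence of proximal GD in Wasserstein space means each
% JKO step contracts the KL divergence geometrically, up to the per-step
% first-order error \varepsilon:
\mathrm{KL}(p_N \,\|\, p_*) \;\le\; \rho^{N}\, \mathrm{KL}(p_0 \,\|\, p_*) \;+\; O(\varepsilon^2).
% Driving the contracted term below the error floor requires
\rho^{N}\, \mathrm{KL}(p_0 \,\|\, p_*) \;\le\; \varepsilon^2
\quad\Longleftrightarrow\quad
N \;\ge\; \frac{\log\big(\mathrm{KL}(p_0 \,\|\, p_*)/\varepsilon^2\big)}{\log(1/\rho)}
\;=\; O\big(\log(1/\varepsilon)\big),
% so N \lesssim \log(1/\varepsilon) JKO steps (Residual Blocks) suffice
% for an O(\varepsilon^2) KL guarantee of data generation.
```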
no code implementations • 26 Sep 2022 • Holden Lee, Jianfeng Lu, Yixin Tan
Score-based generative modeling (SGM) has grown to be a hugely successful method for learning to generate samples from complex data distributions such as those of images and audio.
no code implementations • 13 Jun 2022 • Holden Lee, Jianfeng Lu, Yixin Tan
Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales.
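The sampling procedure described here can be sketched as annealed Langevin dynamics: starting from white noise, the chain follows score estimates at progressively smaller noise scales. A minimal, hedged illustration in which the learned score network is replaced by the exact score of a standard Gaussian target (so the result is checkable); the noise schedule and step sizes are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, sigma):
    # Exact score of N(0, 1) convolved with noise of level sigma:
    # grad_x log p_sigma(x) for p_sigma = N(0, 1 + sigma^2).
    # In a real SGM this would be a learned network s_theta(x, sigma).
    return -x / (1.0 + sigma**2)

def annealed_langevin(n_samples=2000, sigmas=(3.0, 1.0, 0.3, 0.1),
                      steps=100, eps=0.05):
    # Start from white noise, as in SGM.
    x = rng.standard_normal(n_samples) * sigmas[0]
    for sigma in sigmas:  # anneal through decreasing noise scales
        step = eps * (sigma / sigmas[-1]) ** 2  # scale step with noise level
        for _ in range(steps):
            noise = rng.standard_normal(n_samples)
            # Langevin update: drift along the score plus injected noise.
            x = x + 0.5 * step * score(x, sigma) + np.sqrt(step) * noise
    return x

samples = annealed_langevin()
print(samples.mean(), samples.var())  # should be near 0 and near 1
```

With the exact score, the chain's final samples approximate the N(0, 1) target, which is the one-dimensional analogue of transforming white noise into samples from a learned data distribution.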
no code implementations • 7 Dec 2020 • Yixin Tan, Xiaomeng Wang, Tao Jia
The hyponym-hypernym relation is an essential element of semantic networks.