no code implementations • 17 May 2022 • Fedor Moiseev, Zhe Dong, Enrique Alfonseca, Martin Jaggi
Models pre-trained on factual triples perform competitively with models pre-trained on natural-language sentences containing the same knowledge.
no code implementations • 14 Apr 2022 • Zhe Dong, Jianmo Ni, Dan Bikel, Enrique Alfonseca, Yuan Wang, Chen Qu, Imed Zitouni
Dual encoders have been used for question-answering (QA) and information retrieval (IR) tasks with good results.
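The dual-encoder setup can be sketched as follows. This is a generic illustration only — it uses toy hashed bag-of-words encoders in place of the learned neural encoders such systems actually train, and is not the authors' implementation:

```python
import hashlib
import numpy as np

def embed(text, dim=256):
    # Toy encoder: hash each token into a fixed-size bag-of-words vector.
    # A real dual encoder would use a learned neural network here.
    v = np.zeros(dim)
    for tok in text.lower().split():
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        v[idx] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def score(query, passage):
    # The defining property of a dual encoder: query and passage are
    # encoded independently, then compared with a dot product
    # (cosine similarity here, since the vectors are unit-norm).
    return float(embed(query) @ embed(passage))

query = "capital of france"
passages = ["paris is the capital of france",
            "dual encoders retrieve passages efficiently"]
ranked = sorted(passages, key=lambda p: score(query, p), reverse=True)
```

Because passages are encoded independently of the query, their embeddings can be precomputed and indexed, which is what makes dual encoders attractive for large-scale retrieval.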
no code implementations • NeurIPS 2021 • Zhe Dong, Andriy Mnih, George Tucker
Training models with discrete latent variables is challenging due to the high variance of unbiased gradient estimators.
no code implementations • NeurIPS 2020 • Zhe Dong, Andriy Mnih, George Tucker
Applying antithetic sampling over the augmenting variables yields a relatively low-variance and unbiased estimator applicable to any model with binary latent variables.
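Antithetic sampling itself can be illustrated with a minimal Monte Carlo sketch. This shows only the generic variance-reduction mechanism on a toy integral — averaging each sample with its negatively correlated mirror — not the paper's actual gradient estimator for binary latent variables:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: u ** 2                      # E[f(U)] = 1/3 for U ~ Uniform(0, 1)

n = 10_000
u = rng.random(n)

plain = f(u)                              # i.i.d. Monte Carlo samples
antithetic = 0.5 * (f(u) + f(1.0 - u))    # average each sample with its mirror

# Both estimators are unbiased for E[f(U)], but the antithetic one has
# lower variance because f(u) and f(1 - u) are negatively correlated
# whenever f is monotone.
```

For this toy integrand the antithetic estimator's variance is several times smaller at the same sample budget; the paper's contribution is constructing an analogous coupling over the variables that augment discrete latents, so the same cancellation applies to gradient estimation.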
no code implementations • 21 Oct 2019 • Zhe Dong, Deniz Oktay, Ben Poole, Alexander A. Alemi
Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future.
no code implementations • ICML 2020 • Zhe Dong, Bryan A. Seybold, Kevin P. Murphy, Hung H. Bui
We propose an efficient inference method for switching nonlinear dynamical systems.
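For context, a switching nonlinear dynamical system pairs a discrete Markov mode with a per-mode nonlinear state transition. A minimal generative sketch (toy dynamics and switching matrix chosen purely for illustration; the paper concerns inference in such models, not this particular system):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two discrete modes, each with its own (bounded) nonlinear transition.
transitions = [
    lambda x: np.tanh(1.5 * x) + 0.1,   # mode 0: saturating drift
    lambda x: 0.9 * np.sin(3.0 * x),    # mode 1: oscillatory map
]
P = np.array([[0.95, 0.05],             # Markov mode-switching probabilities
              [0.10, 0.90]])

x, z = 0.0, 0
xs, zs = [], []
for _ in range(100):
    z = rng.choice(2, p=P[z])                    # sample next discrete mode
    x = transitions[z](x) + 0.05 * rng.normal()  # nonlinear dynamics + noise
    xs.append(x)
    zs.append(z)
```

Inference in this model class is hard because the posterior couples the discrete mode sequence with the continuous nonlinear states, which is the problem the proposed method targets.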