1 code implementation • 21 Nov 2023 • Micha Livne, Zulfat Miftahutdinov, Elena Tutubalina, Maksim Kuznetsov, Daniil Polykovskiy, Annika Brundyn, Aastha Jhunjhunwala, Anthony Costa, Alex Aliper, Alán Aspuru-Guzik, Alex Zhavoronkov
Large Language Models (LLMs) have substantially driven scientific progress in various domains, and many papers have demonstrated their ability to tackle complex problems with creative solutions.
no code implementations • 18 Aug 2022 • Danny Reidenbach, Micha Livne, Rajesh K. Ilango, Michelle Gill, Johnny Israeli
We achieve state-of-the-art results on several constrained single-property optimization tasks as well as on the challenging task of multi-objective optimization, improving over the previous state-of-the-art success rate by more than 5%.
1 code implementation • 18 Feb 2020 • Micha Livne, Kevin Swersky, David J. Fleet
MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse.
Ranked #1 on Question Answering on YahooCQA (using extra training data)
1 code implementation • 8 Oct 2019 • Micha Livne, Kevin Swersky, David J. Fleet
Experiments show that MIM learns representations with high mutual information, consistent encoding and decoding distributions, effective latent clustering, and a data log-likelihood comparable to that of a VAE, while avoiding posterior collapse.
no code implementations • 4 Oct 2019 • Micha Livne, Kevin Swersky, David J. Fleet
We introduce the Mutual Information Machine (MIM), a novel formulation of representation learning, using a joint distribution over the observations and latent state in an encoder/decoder framework.
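The symmetric objective behind MIM can be summarized in a short sketch. The following is a minimal illustration, not the authors' released implementation: it assumes a Gaussian encoder q(z|x), a Gaussian decoder p(x|z), and a standard normal prior p(z), draws samples through the encoder path only, and drops the parameter-free data term log q(x); the class and function names are hypothetical.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, Independent

class GaussianNet(nn.Module):
    """Maps an input vector to a diagonal Gaussian (illustrative only)."""
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * out_dim))

    def forward(self, v):
        mu, log_std = self.net(v).chunk(2, dim=-1)
        return Independent(Normal(mu, log_std.exp()), 1)

def mim_loss(encoder, decoder, x):
    # Sketch of a MIM-style objective: average the encoding and decoding
    # log-joints under samples from the encoder path (assumption: the data
    # term log q(x) is a constant w.r.t. parameters and is omitted).
    q_z_given_x = encoder(x)              # q(z|x)
    z = q_z_given_x.rsample()             # reparameterized latent sample
    p_x_given_z = decoder(z)              # p(x|z)
    p_z = Independent(Normal(torch.zeros_like(z), torch.ones_like(z)), 1)
    return -0.5 * (p_x_given_z.log_prob(x) + p_z.log_prob(z)
                   + q_z_given_x.log_prob(z)).mean()

# Hypothetical usage on flattened 28x28 images with a 16-d latent:
# encoder, decoder = GaussianNet(784, 16), GaussianNet(16, 784)
# loss = mim_loss(encoder, decoder, x)
```

Note the sign of the encoder term: in the ELBO, log q(z|x) is penalized through the KL term, whereas here it is rewarded, which is what encourages a concentrated posterior, high mutual information between x and z, and robustness to posterior collapse.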
no code implementations • 5 Feb 2019 • Micha Livne, David Fleet
We formulate a new class of conditional generative models based on probability flows.
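Concretely, a probability flow yields an exact conditional likelihood through the change-of-variables formula, log p(x|c) = log p_base(f(x; c)) + log |det ∂f(x; c)/∂x|. The sketch below is an illustrative single-layer conditional affine flow under that formula, not the architecture from the paper; all names are hypothetical.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class ConditionalAffineFlow(nn.Module):
    """One conditional affine transform z = x * exp(s(c)) + t(c),
    trained by exact maximum likelihood (illustrative sketch)."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * dim))

    def log_prob(self, x, c):
        s, t = self.net(c).chunk(2, dim=-1)  # condition-dependent scale/shift
        z = x * s.exp() + t                  # forward flow x -> z
        base = Normal(torch.zeros_like(z), torch.ones_like(z))
        # log p(x|c) = log p_base(z) + log|det dz/dx|; for a diagonal
        # affine map the log-determinant is simply sum(s).
        return base.log_prob(z).sum(-1) + s.sum(-1)
```

Because the transform is invertible with a tractable Jacobian, the conditional density is exact rather than a variational bound; deeper flows stack such layers for expressiveness.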
no code implementations • 4 Dec 2018 • Micha Livne, Leonid Sigal, Marcus A. Brubaker, David J. Fleet
To our knowledge, this is the first approach to take physics into account without explicit a priori knowledge of the environment or body dimensions.
no code implementations • 5 Nov 2018 • Micha Livne, David J. Fleet
Unlike autoencoders, the bottleneck does not limit model expressiveness, similar to flow-based maximum likelihood (ML) models.