no code implementations • 15 Jul 2024 • Nataša Tagasovska, Ji Won Park, Matthieu Kirchmeyer, Nathan C. Frey, Andrew Martin Watkins, Aya Abdelsalam Ismail, Arian Rokkum Jamasb, Edith Lee, Tyler Bryson, Stephen Ra, Kyunghyun Cho
The model predictions are used to determine which designs to evaluate in the lab, and the model is updated on the new measurements to inform the next cycle of decisions.
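A minimal sketch of such a design-measure-update loop follows (generic active-learning pseudocode with a hypothetical measure_in_lab oracle and a random-forest surrogate standing in for the model; illustrative only, not the authors' pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def run_design_cycles(candidates, measure_in_lab, n_cycles=5, batch_size=8):
        # candidates: (N, d) array of design features.
        # measure_in_lab: callable returning measurements for a batch of designs
        # (a hypothetical stand-in for the wet-lab evaluation step).
        rng = np.random.default_rng(0)
        start = rng.choice(len(candidates), size=batch_size, replace=False)
        X, y = list(candidates[start]), list(measure_in_lab(candidates[start]))
        model = RandomForestRegressor(random_state=0)
        for _ in range(n_cycles):
            model.fit(np.asarray(X), np.asarray(y))
            # Model predictions determine which designs to evaluate next.
            preds = model.predict(candidates)
            chosen = np.argsort(preds)[-batch_size:]  # greedy: highest predicted property
            # Update the training set with the new measurements for the next cycle.
            X.extend(candidates[chosen])
            y.extend(measure_in_lab(candidates[chosen]))
        return model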
no code implementations • 17 Jun 2024 • Anna Susmelj, Mael Macuglia, Nataša Tagasovska, Reto Sutter, Sebastiano Caprara, Jean-Philippe Thiran, Ender Konukoglu
In this paper, we introduce Dropsembles, a novel method for uncertainty estimation in tuned implicit functions.
no code implementations • 28 May 2024 • Nataša Tagasovska, Vladimir Gligorijević, Kyunghyun Cho, Andreas Loukas
Matching, combined with an encoder-decoder architecture, forms a domain-agnostic generative framework for property enhancement.
1 code implementation • 1 Jun 2023 • Ji Won Park, Nataša Tagasovska, Michael Maser, Stephen Ra, Kyunghyun Cho
Motivated by this link, we propose the Pareto-compliant CDF indicator and the associated acquisition function, BOtied.
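For intuition, the sketch below scores candidate objective vectors with an empirical multivariate CDF, the quantity the CDF indicator builds on. It assumes a minimization convention and plain counting over observed points; BOtied itself estimates the CDF with copulas from posterior samples, which is not reproduced here.

    import numpy as np

    def empirical_cdf_scores(observed, candidates):
        # observed: (N, m) array of objective vectors seen so far (minimization).
        # candidates: (K, m) array of candidate objective vectors.
        # Returns, for each candidate y, the fraction of observed points that are
        # componentwise <= y, i.e. an empirical estimate of F(y) = P(Y <= y).
        obs = observed[None, :, :]     # (1, N, m)
        cand = candidates[:, None, :]  # (K, 1, m)
        leq = np.all(obs <= cand, axis=-1)  # (K, N)
        return leq.mean(axis=1)

    # Under minimization, candidates with low empirical CDF values lie toward the
    # Pareto front: few observed points are componentwise better-or-equal.
    obs = np.random.rand(100, 2)
    cand = np.random.rand(10, 2)
    print(empirical_cdf_scores(obs, cand))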
1 code implementation • 24 Feb 2023 • Nataša Tagasovska, Firat Ozdemir, Axel Brando
Despite the major progress of deep models as learning machines, uncertainty estimation remains a significant challenge.
1 code implementation • 7 Nov 2022 • Romain Lopez, Nataša Tagasovska, Stephen Ra, Kyunghyun Cho, Jonathan K. Pritchard, Aviv Regev
Instead, recent methods propose to leverage non-stationary data, together with the sparse mechanism shift assumption, to learn disentangled representations with causal semantics.
no code implementations • 19 Oct 2022 • Nataša Tagasovska, Nathan C. Frey, Andreas Loukas, Isidro Hötzel, Julien Lafrance-Vanasse, Ryan Lewis Kelly, Yan Wu, Arvind Rajpal, Richard Bonneau, Kyunghyun Cho, Stephen Ra, Vladimir Gligorijević
Deep generative models have emerged as a popular machine learning-based approach for inverse design problems in the life sciences.