1 code implementation • 8 May 2023 • Abdelwahed Khamis, Russell Tsuchida, Mohamed Tarek, Vivien Rolland, Lars Petersson
This paper surveys where and how optimal transport is used in machine learning, with a focus on the question of scalable optimal transport.
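Not from the paper itself, but the standard workhorse for scalable optimal transport is entropic regularization solved with Sinkhorn iterations; a minimal NumPy sketch (function name, toy marginals, and cost matrix are all illustrative) looks like this:

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, iters=500):
    """Entropy-regularized OT via Sinkhorn iterations (illustrative sketch).

    a, b: source/target marginals (each summing to 1); C: cost matrix.
    Returns a transport plan P whose row sums match a and whose column
    sums approach b as the iterations converge.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)             # rescale to match column marginals
        u = a / (K @ v)               # rescale to match row marginals
    return u[:, None] * K * v[None, :]

# Toy example: transport between two 3-point histograms on a line.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
x = np.arange(3.0)
C = (x[:, None] - x[None, :]) ** 2    # squared-distance cost
P = sinkhorn(a, b, C)
```

The entropic term makes each iteration a pair of cheap matrix-vector products, which is what makes this family of solvers scale to large problems.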
no code implementations • 31 Mar 2023 • Mohamed Tarek, Jose Storopoli, Casey Davis, Chris Elrod, Julius Krumbiegel, Chris Rackauckas, Vijay Ivaturi
Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large, but we focus our discussion on pharmacometrics to keep a narrower scope, given that Pumas is a software primarily for pharmacometricians.
no code implementations • 28 Jan 2022 • Mohamed Tarek, Yijiang Huang
Finally, a number of applications of the proposed methodology in the fields of approximate Bayesian inference and topology optimization are presented.
1 code implementation • 25 Sep 2021 • Frank Schäfer, Mohamed Tarek, Lyndon White, Chris Rackauckas
No single Automatic Differentiation (AD) system is the optimal choice for all problems.
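One reason no single AD system dominates is that the basic modes have different trade-offs (forward mode scales with the number of inputs, reverse mode with the number of outputs). As a hedged illustration of one such mode, here is a minimal forward-mode AD sketch using dual numbers in Python (this is not the paper's implementation; all names are illustrative):

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0: the forward-mode AD primitive."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def sin(x):
    """Dual-aware sine: propagates the derivative cos(x) through the dot part."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """d/dx f at x, computed by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot

# f(x) = x^2 + sin(x), so f'(0) = 2*0 + cos(0) = 1
df = derivative(lambda x: x * x + sin(x), 0.0)
```

Reverse-mode systems instead record a computation graph and sweep it backwards, which is why a system that can switch backends per problem is attractive.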
no code implementations • 14 Dec 2020 • Raj Dandekar, Karen Chung, Vaibhav Dixit, Mohamed Tarek, Aslan Garcia-Valadez, Krishna Vishal Vemula, Chris Rackauckas
We demonstrate the successful integration of Neural ODEs with the above Bayesian inference frameworks on classical physical systems, as well as on standard machine learning datasets like MNIST, using GPU acceleration.
2 code implementations • 7 Feb 2020 • Mohamed Tarek, Kai Xu, Martin Trapp, Hong Ge, Zoubin Ghahramani
Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use DynamicPPL to specify models and trace their model parameters.
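To illustrate the idea of specifying a model and tracing its parameters (this is a toy Python sketch of the general trace-based PPL pattern, not DynamicPPL's actual API):

```python
import random

def sample(trace, name, dist):
    """Draw from `dist` and record the value in the trace under `name`."""
    value = dist()
    trace[name] = value
    return value

def model(trace):
    """A toy generative model: m ~ Normal(0, 1), y ~ Normal(m, 0.5)."""
    m = sample(trace, "m", lambda: random.gauss(0.0, 1.0))
    y = sample(trace, "y", lambda: random.gauss(m, 0.5))
    return y

random.seed(0)
trace = {}
model(trace)
# trace now maps each named random variable to its sampled value
```

An inference backend can then replay the model against a proposed trace instead of sampling fresh values, which is what decouples model specification from the choice of sampler.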
1 code implementation • AABI Symposium 2019 • Tor Erlend Fjelde, Kai Xu, Mohamed Tarek, Sharan Yalburgi, Hong Ge
Transforming one probability distribution to another is a powerful tool in Bayesian inference and machine learning.
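Such transforms rest on the change-of-variables formula: if Y = g(X), then log p_Y(y) = log p_X(g⁻¹(y)) + log |d g⁻¹/dy|. A minimal Python sketch (function names and the Exponential/log example are illustrative, not the paper's API):

```python
import math

def log_density_exponential(x):
    """log p(x) for Exponential(rate=1), valid for x > 0."""
    return -x

def pushforward_log_density(log_p_x, inv, log_abs_det_jac_inv):
    """Log density of Y = g(X) via the change-of-variables formula:
    log p_Y(y) = log p_X(g^{-1}(y)) + log |d g^{-1}/dy|."""
    def log_p_y(y):
        return log_p_x(inv(y)) + log_abs_det_jac_inv(y)
    return log_p_y

# Y = log(X) with X ~ Exponential(1): g^{-1}(y) = exp(y), d g^{-1}/dy = exp(y)
log_p_y = pushforward_log_density(
    log_density_exponential,
    inv=math.exp,
    log_abs_det_jac_inv=lambda y: y,   # log|exp(y)| = y
)
```

The Jacobian correction is exactly what lets samplers work in an unconstrained space (here, all of ℝ) while the model is stated on a constrained one (here, x > 0).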
1 code implementation • AABI Symposium 2019 • Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, Zoubin Ghahramani
Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems, through carefully crafted adaptation schemes for the celebrated No-U-Turn Sampler (NUTS) algorithm.
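At the core of HMC (beneath NUTS and its adaptation) is the leapfrog integrator plus a Metropolis correction. A minimal sketch for a standard-normal target (not Stan's implementation; step size and trajectory length are arbitrary illustrative values):

```python
import math
import random

def grad_neg_log_density(q):
    """Target: standard normal, -log p(q) = q^2/2 + const, so the gradient is q."""
    return q

def leapfrog(q, p, step, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator."""
    p -= 0.5 * step * grad_neg_log_density(q)      # initial half momentum step
    for _ in range(n_steps - 1):
        q += step * p                               # full position step
        p -= step * grad_neg_log_density(q)         # full momentum step
    q += step * p
    p -= 0.5 * step * grad_neg_log_density(q)      # final half momentum step
    return q, p

def hmc_step(q, step=0.1, n_steps=20):
    """One HMC transition: fresh momentum, leapfrog, Metropolis accept/reject."""
    p0 = random.gauss(0.0, 1.0)
    h0 = 0.5 * q * q + 0.5 * p0 * p0               # initial Hamiltonian
    q_new, p_new = leapfrog(q, p0, step, n_steps)
    h1 = 0.5 * q_new * q_new + 0.5 * p_new * p_new
    if random.random() < math.exp(h0 - h1):        # correct integration error
        return q_new
    return q

random.seed(0)
q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
```

NUTS removes the hand-tuned `n_steps` by growing the trajectory until it starts to double back, and Stan's warm-up additionally adapts `step` and the momentum scale, which is the adaptation machinery the abstract refers to.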