Search Results for author: Mohamed Tarek

Found 8 papers, 5 papers with code

Scalable Optimal Transport Methods in Machine Learning: A Contemporary Survey

1 code implementation · 8 May 2023 · Abdelwahed Khamis, Russell Tsuchida, Mohamed Tarek, Vivien Rolland, Lars Petersson

This paper surveys where and how optimal transport is used in machine learning, with a focus on the question of scalable optimal transport.
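A canonical example of scalable optimal transport is entropic regularization solved with Sinkhorn iterations. The sketch below (Python with NumPy; the survey itself ships no code, so this is purely illustrative) alternately rescales a Gibbs kernel until its marginals match the two input distributions:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b : source/target marginals (1-D arrays summing to 1)
    C    : cost matrix, eps : regularization strength
    Returns the approximate transport plan.
    """
    K = np.exp(-C / eps)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)          # alternate marginal scalings
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Uniform marginals over 3 points on a line, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2
a = b = np.ones(3) / 3
P = sinkhorn(a, b, C)
```

Each iteration is just two matrix-vector products, which is what makes the entropic approach scale to large problems.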

A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas

no code implementations · 31 Mar 2023 · Mohamed Tarek, Jose Storopoli, Casey Davis, Chris Elrod, Julius Krumbiegel, Chris Rackauckas, Vijay Ivaturi

Many of the algorithms, code, and ideas presented in this paper apply broadly to clinical research and statistical learning, but we focus our discussion on pharmacometrics to keep a narrower scope and because Pumas is software primarily for pharmacometricians.

Bayesian Inference

Simplifying deflation for non-convex optimization with applications in Bayesian inference and topology optimization

no code implementations · 28 Jan 2022 · Mohamed Tarek, Yijiang Huang

Finally, a number of applications of the proposed methodology in the fields of approximate Bayesian inference and topology optimization are presented.
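Deflation steers an iterative solver away from solutions it has already found, so repeated runs discover new ones. The following is a minimal one-dimensional sketch (not the paper's formulation; the deflation operator and shift are illustrative) that recovers both roots of x² − 1 with Newton's method:

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Plain Newton iteration for f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def deflate(f, df, roots, p=2, shift=1.0):
    """Wrap f with a deflation operator so Newton avoids known roots.

    Multiplies f by 1/|x - r|^p + shift for each known root r, which
    blows up near old roots while preserving the remaining zeros.
    """
    def g(x):
        m = 1.0
        for r in roots:
            m *= 1.0 / abs(x - r) ** p + shift
        return m * f(x)
    def dg(x):
        # central finite difference keeps the sketch short
        h = 1e-7
        return (g(x + h) - g(x - h)) / (2 * h)
    return g, dg

f = lambda x: x**2 - 1.0
df = lambda x: 2.0 * x
r1 = newton(f, df, 2.0)            # converges to +1
g, dg = deflate(f, df, [r1])
r2 = newton(g, dg, 2.0)            # same start point, deflated problem finds -1
```

The same starting guess yields a different solution once the first one has been deflated away, which is the mechanism the paper applies to multimodal posteriors and topology-optimization landscapes.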

Bayesian Inference

AbstractDifferentiation.jl: Backend-Agnostic Differentiable Programming in Julia

1 code implementation · 25 Sep 2021 · Frank Schäfer, Mohamed Tarek, Lyndon White, Chris Rackauckas

No single Automatic Differentiation (AD) system is the optimal choice for all problems.
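Because each AD backend has different trade-offs, a backend-agnostic interface lets user code request derivatives once without committing to a single system. Below is a rough Python analogue of the idea (the actual library is in Julia; every name here is hypothetical and not the AbstractDifferentiation.jl API), with a finite-difference backend and a minimal forward-mode dual-number backend behind one `derivative` entry point:

```python
class FiniteDiff:
    """Numerical backend: central finite differences."""
    def __init__(self, h=1e-6):
        self.h = h
    def derivative(self, f, x):
        return (f(x + self.h) - f(x - self.h)) / (2 * self.h)

class Dual:
    """Forward-mode AD value: a + b*eps with eps^2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

class ForwardMode:
    """Exact backend: propagate a dual number through f."""
    def derivative(self, f, x):
        return f(Dual(x, 1.0)).der

def derivative(backend, f, x):
    # user code targets this one function; the backend supplies
    # the actual differentiation strategy
    return backend.derivative(f, x)

f = lambda x: 3 * x * x + 2 * x        # d/dx = 6x + 2
d1 = derivative(FiniteDiff(), f, 1.0)  # ≈ 8.0
d2 = derivative(ForwardMode(), f, 1.0) # exactly 8.0
```

Swapping backends changes only the first argument, which is the point: the best AD system can be chosen per problem without rewriting the model.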

Bayesian Neural Ordinary Differential Equations

no code implementations · 14 Dec 2020 · Raj Dandekar, Karen Chung, Vaibhav Dixit, Mohamed Tarek, Aslan Garcia-Valadez, Krishna Vishal Vemula, Chris Rackauckas

We demonstrate the successful integration of Neural ODEs with the above Bayesian inference frameworks on classical physical systems, as well as on standard machine learning datasets like MNIST, using GPU acceleration.

Bayesian Inference · BIG-bench Machine Learning +2

DynamicPPL: Stan-like Speed for Dynamic Probabilistic Models

2 code implementations · 7 Feb 2020 · Mohamed Tarek, Kai Xu, Martin Trapp, Hong Ge, Zoubin Ghahramani

Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use DynamicPPL to specify models and trace their model parameters.
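The idea of specifying a model once and tracing its named parameters can be sketched in a few lines of Python (illustrative only; DynamicPPL's actual Julia API differs, and all names below are hypothetical):

```python
import random

class Trace:
    """Records each named random choice made while a model runs."""
    def __init__(self):
        self.values = {}
    def sample(self, name, dist):
        value = dist()
        self.values[name] = value
        return value

def model(trace):
    # parameters are named so the trace can address them individually,
    # which is what lets a sampler manipulate them later
    mu = trace.sample("mu", lambda: random.gauss(0.0, 1.0))
    sigma = trace.sample("sigma", lambda: abs(random.gauss(0.0, 1.0)))
    return mu, sigma

t = Trace()
model(t)
# t.values now maps "mu" and "sigma" to the drawn values
```

Separating the trace from the model is what makes the library stand-alone: any inference engine can supply its own trace-like object.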

Probabilistic Programming

AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms

1 code implementation · AABI Symposium 2019 · Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, Zoubin Ghahramani

Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems, through carefully crafted adaptation schemes for the celebrated No-U-Turn Sampler (NUTS) algorithm.
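At its core, HMC proposes moves by simulating Hamiltonian dynamics with a leapfrog integrator and corrects the discretization error with a Metropolis step. A minimal single-parameter sketch (illustrative Python, not the AdvancedHMC.jl implementation) targeting a standard normal:

```python
import numpy as np

def leapfrog(q, p, grad_logp, step, n_steps):
    """Leapfrog integrator for the Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_logp(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_logp(q)
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)
    return q, p

def hmc(logp, grad_logp, q0, n_samples=2000, step=0.2, n_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.standard_normal()           # resample momentum
        q_new, p_new = leapfrog(q, p, grad_logp, step, n_steps)
        # Metropolis correction for the integrator's discretization error
        log_accept = (logp(q_new) - 0.5 * p_new**2) - (logp(q) - 0.5 * p**2)
        if np.log(rng.random()) < log_accept:
            q = q_new
        samples.append(q)
    return np.array(samples)

# standard normal target: logp(q) = -q^2 / 2
samples = hmc(lambda q: -0.5 * q**2, lambda q: -q, q0=0.0)
```

NUTS builds on exactly this loop, replacing the fixed trajectory length `n_steps` with an adaptive no-u-turn criterion and adapting `step` during warm-up.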

Bayesian Inference · Benchmarking
