Search Results for author: Theodore Papamarkou

Found 22 papers, 7 papers with code

Combinatorial Complexes: Bridging the Gap Between Cell Complexes and Hypergraphs

no code implementations15 Dec 2023 Mustafa Hajij, Ghada Zamzmi, Theodore Papamarkou, Aldo Guzmán-Sáenz, Tolga Birdal, Michael T. Schaub

In this context, cell complexes are often seen as a subclass of hypergraphs with additional algebraic structure that can be exploited, e.g., to develop a spectral theory.
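
The bridging structure studied here, the combinatorial complex, can be sketched as a family of cells equipped with an inclusion-respecting rank function. The minimal Python class below is an illustration of that axiom under the (set, cells, rank) formulation, not the paper's implementation:

```python
class CombinatorialComplex:
    """Minimal sketch: a family of nonempty cells with a rank function
    that respects inclusion (x subset of y implies rk(x) <= rk(y))."""
    def __init__(self, cells_with_rank):
        # cells_with_rank: dict mapping frozenset of vertices -> non-negative rank
        self.cells = dict(cells_with_rank)
        for x, rx in self.cells.items():
            for y, ry in self.cells.items():
                if x < y and rx > ry:
                    raise ValueError(f"rank not order-preserving: {set(x)} vs {set(y)}")

# A hypergraph corresponds to an essentially flat ranking of hyperedges, whereas
# a cell complex further constrains how cells of consecutive rank attach.
cc = CombinatorialComplex({
    frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 1, frozenset({1, 2, 3}): 2,
})
```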

Model-agnostic variable importance for predictive uncertainty: an entropy-based approach

no code implementations19 Oct 2023 Danny Wood, Theodore Papamarkou, Matt Benatan, Richard Allmendinger

In particular, by adapting permutation feature importance, partial dependence plots, and individual conditional expectation plots, we demonstrate that these methods yield novel insights into model behaviour and can measure the impact of features on both the entropy of the predictive distribution and the log-likelihood of the ground-truth labels under that distribution.
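
As an illustration, permutation feature importance can be adapted to uncertainty by scoring permutations with predictive entropy rather than accuracy. The function below is a hedged sketch of that idea; `predict_proba` and all names are assumptions, not the paper's code:

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of each predictive distribution, averaged over the data.
    return float(np.mean(-np.sum(probs * np.log(probs + 1e-12), axis=1)))

def permutation_uncertainty_importance(predict_proba, X, n_repeats=10, seed=0):
    """Score each feature by how permuting it changes predictive entropy."""
    rng = np.random.default_rng(seed)
    baseline = predictive_entropy(predict_proba(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature's link to the output
            scores.append(predictive_entropy(predict_proba(Xp)))
        importances[j] = np.mean(scores) - baseline  # change in predictive entropy
    return importances
```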

Feature Importance

Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry

no code implementations6 Apr 2023 Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer

Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape.
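
Part of this multi-modality comes from weight-space symmetries: permuting a layer's hidden units (and its weights accordingly) leaves the network function unchanged, so the posterior repeats the same mode many times over. A minimal numpy demonstration, with the two-layer tanh network as an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # layer 1: 3 inputs -> 4 hidden
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # layer 2: 4 hidden -> 2 outputs

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

P = np.eye(4)[rng.permutation(4)]  # permutation of the hidden units
x = rng.normal(size=3)
# Permuted weights define a distinct point in parameter space with identical output:
assert np.allclose(mlp(x, W1, b1, W2, b2),
                   mlp(x, P @ W1, P @ b1, W2 @ P.T, b2))
```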

Bayesian Inference Uncertainty Quantification

Approximate blocked Gibbs sampling for Bayesian neural networks

no code implementations24 Aug 2022 Theodore Papamarkou

It is also possible to alleviate vanishing acceptance rates for increasing depth by reducing the proposal variance in deeper layers.
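
A hedged sketch of the overall scheme, Metropolis-within-Gibbs over per-layer blocks with a proposal scale that shrinks with depth, is given below; `log_post` and the block layout are assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def blocked_mh_step(params, log_post, base_scale=0.05, rng=None):
    """One sweep over per-layer blocks; deeper layers get smaller proposals,
    echoing the remedy for vanishing acceptance rates at depth."""
    rng = rng or np.random.default_rng()
    for depth, theta in enumerate(params):          # one block per layer
        scale = base_scale / (depth + 1)            # shrink proposal variance with depth
        proposal = theta + scale * rng.normal(size=theta.shape)
        candidate = [p if i != depth else proposal for i, p in enumerate(params)]
        if np.log(rng.uniform()) < log_post(candidate) - log_post(params):
            params[depth] = proposal                # accept the block update
    return params
```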

Topological Deep Learning: Going Beyond Graph Data

3 code implementations1 Jun 2022 Mustafa Hajij, Ghada Zamzmi, Theodore Papamarkou, Nina Miolane, Aldo Guzmán-Sáenz, Karthikeyan Natesan Ramamurthy, Tolga Birdal, Tamal K. Dey, Soham Mukherjee, Shreyas N. Samaga, Neal Livesay, Robin Walters, Paul Rosen, Michael T. Schaub

Topological deep learning is a rapidly growing field that pertains to the development of deep learning models for data supported on topological domains such as simplicial complexes, cell complexes, and hypergraphs, which generalize many domains encountered in scientific computations.

Graph Learning

Probability-Generating Function Kernels for Spherical Data

no code implementations1 Dec 2021 Theodore Papamarkou, Alexey Lindo

Probability-generating function (PGF) kernels, a class of kernels supported on the unit hypersphere, are introduced for the purposes of spherical data analysis.
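
One plausible reading of the construction: apply a probability-generating function G(s) = Σ_n p_n s^n to inner products of unit vectors; since the coefficients p_n are non-negative, Schoenberg's theorem guarantees positive definiteness on spheres of every dimension. The geometric-distribution example below is illustrative only:

```python
import numpy as np

def pgf_kernel(X, Y, pgf):
    """Kernel on the unit hypersphere: a PGF applied to inner products."""
    S = np.clip(X @ Y.T, -1.0, 1.0)  # inner products of unit vectors lie in [-1, 1]
    return pgf(S)

# PGF of a Geometric(p) distribution: G(s) = p s / (1 - (1 - p) s).
geometric_pgf = lambda s, p=0.5: p * s / (1.0 - (1.0 - p) * s)

X = np.random.default_rng(0).normal(size=(5, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # project data onto the unit sphere
K = pgf_kernel(X, X, geometric_pgf)
```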

Gaussian Processes

Simplicial Complex Representation Learning

no code implementations6 Mar 2021 Mustafa Hajij, Ghada Zamzmi, Theodore Papamarkou, Vasileios Maroulas, Xuanting Cai

In this work, we propose a method for simplicial complex-level representation learning that embeds a simplicial complex into a universal embedding space in a way that complex-to-complex proximity is preserved.

Representation Learning

Hidden Markov models are recurrent neural networks: A disease progression modeling application

no code implementations28 Sep 2020 Matthew Baucum, Anahita Khojandi, Theodore Papamarkou

To allow for this, we formulate a special case of recurrent neural networks (RNNs), which we name hidden Markov recurrent neural networks (HMRNNs), and prove that each HMRNN has the same likelihood function as a corresponding discrete-observation HMM.
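
The correspondence rests on the HMM forward recursion, which is itself a linear recurrent update over time. A small numpy sketch with illustrative toy parameters:

```python
import numpy as np

def hmm_forward_as_rnn(pi, A, B, obs):
    """The forward recursion alpha_t = (alpha_{t-1} @ A) * B[:, obs_t] is a linear
    RNN whose hidden state holds the joint probability of the observations so far
    and the current hidden state."""
    alpha = pi * B[:, obs[0]]                 # initial step
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # one "RNN" time step
    return alpha.sum()                        # likelihood of the observation sequence

pi = np.array([0.6, 0.4])                     # initial state distribution
A  = np.array([[0.7, 0.3], [0.2, 0.8]])       # state transition probabilities
B  = np.array([[0.9, 0.1], [0.3, 0.7]])       # emission probabilities
print(hmm_forward_as_rnn(pi, A, B, obs=[0, 1, 1, 0]))
```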

Bayesian neural networks and dimensionality reduction

no code implementations18 Aug 2020 Deborshee Sen, Theodore Papamarkou, David Dunson

We attempt to solve these problems by deploying Markov chain Monte Carlo (MCMC) sampling algorithms for Bayesian inference in ANN models with latent variables.

Bayesian Inference Dimensionality Reduction +1

Hidden Markov models as recurrent neural networks: an application to Alzheimer's disease

no code implementations4 Jun 2020 Matt Baucum, Anahita Khojandi, Theodore Papamarkou

Hidden Markov models (HMMs) are commonly used for disease progression modeling when the true patient health state is not fully known.

Depth-2 Neural Networks Under a Data-Poisoning Attack

1 code implementation4 May 2020 Sayar Karmakar, Anirbit Mukherjee, Theodore Papamarkou

In this class of networks, we attempt to learn the network weights in the presence of a malicious oracle doing stochastic, bounded and additive adversarial distortions on the true output during training.
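
A minimal sketch of this threat model, with the uniform distortion and `true_fn` as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisoned_oracle(true_fn, x, beta=0.5):
    # Stochastic, additive distortion bounded in magnitude by beta, applied to the
    # true output during training (the uniform choice is illustrative).
    return true_fn(x) + rng.uniform(-beta, beta)
```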

Adversarial Attack Data Poisoning

Automated detection of corrosion in used nuclear fuel dry storage canisters using residual neural networks

no code implementations6 Mar 2020 Theodore Papamarkou, Hayley Guy, Bryce Kroencke, Jordan Miller, Preston Robinette, Daniel Schultz, Jacob Hinkle, Laura Pullum, Catherine Schuman, Jeremy Renshaw, Stylianos Chatzidakis

The results demonstrate that such a deep learning approach makes it possible to detect the locus of corrosion via smaller tiles and, at the same time, to infer with high accuracy whether an image comes from a corroded canister.
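
A hedged sketch of the tiling idea, where `classify_tile` is an assumed per-tile corrosion classifier and the 0.5 threshold is illustrative:

```python
import numpy as np

def tile_predictions(image, classify_tile, tile=64):
    """Score each tile so corrosion can be localized, then aggregate the
    per-tile scores into an image-level decision."""
    H, W = image.shape[:2]
    scores = np.array([[classify_tile(image[i:i+tile, j:j+tile])
                        for j in range(0, W - tile + 1, tile)]
                       for i in range(0, H - tile + 1, tile)])
    return scores, scores.max() > 0.5   # per-tile corrosion map, canister-level verdict
```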

Wide Neural Networks with Bottlenecks are Deep Gaussian Processes

no code implementations3 Jan 2020 Devanshu Agrawal, Theodore Papamarkou, Jacob Hinkle

There has recently been much work on the "wide limit" of neural networks, where Bayesian neural networks (BNNs) are shown to converge to a Gaussian process (GP) as all hidden layers are sent to infinite width.
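
Keeping a bottleneck layer at finite width prevents the composition from collapsing into a single GP, so the limit is a genuinely deep GP. The numpy sketch below samples from such a composition; the kernel choice and jitter are my assumptions:

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel between row vectors of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50)[:, None]

# Layer 1: a GP mapped through a *finite* bottleneck of width 2; the finite
# width is what keeps the composition deep in the infinite-width limit.
K1 = rbf(X, X) + 1e-8 * np.eye(len(X))
H = np.linalg.cholesky(K1) @ rng.normal(size=(len(X), 2))

# Layer 2: another GP taking the bottleneck activations as inputs.
K2 = rbf(H, H) + 1e-8 * np.eye(len(X))
f = np.linalg.cholesky(K2) @ rng.normal(size=len(X))
```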

Gaussian Processes

Challenges in Markov chain Monte Carlo for Bayesian neural networks

1 code implementation15 Oct 2019 Theodore Papamarkou, Jacob Hinkle, M. Todd Young, David Womble

Nevertheless, this paper shows that a non-converged Markov chain, generated via MCMC sampling from the parameter space of a neural network, can yield, via Bayesian marginalization, a valuable posterior predictive distribution of the network's output.
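
The marginalization itself is simple to sketch: average the network's predictive distribution over the sampled parameters, p(y | x) ≈ (1/S) Σ_s p(y | x, θ_s). The helper below assumes a `forward` function returning class probabilities:

```python
import numpy as np

def posterior_predictive(mcmc_weight_samples, forward, x):
    """Monte Carlo estimate of the posterior predictive distribution by
    averaging over MCMC parameter samples (`forward` is an assumption)."""
    probs = np.stack([forward(theta, x) for theta in mcmc_weight_samples])
    return probs.mean(axis=0)   # usable even from a non-converged chain
```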

Bayesian Inference

Geometric adaptive Monte Carlo in random environment

3 code implementations29 Aug 2016 Theodore Papamarkou, Alexey Lindo, Eric B. Ford

This paper analyzes the computational complexity of manifold Langevin Monte Carlo and proposes a geometric adaptive Monte Carlo sampler that balances the benefits of exploiting local geometry against its computational cost, aiming for a high effective sample size within a given computational budget.
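
For orientation, a plain Metropolis-adjusted Langevin step is sketched below; the paper's samplers additionally exploit local geometry via manifold preconditioning, which this illustrative snippet omits:

```python
import numpy as np

def mala_step(theta, log_post, grad_log_post, eps=0.1, rng=None):
    """One Metropolis-adjusted Langevin step: gradient-informed Gaussian
    proposal plus an accept/reject correction (names are illustrative)."""
    rng = rng or np.random.default_rng()
    prop = theta + 0.5 * eps**2 * grad_log_post(theta) + eps * rng.normal(size=theta.shape)
    def logq(a, b):
        # Log density of the asymmetric Langevin proposal a | b (up to a constant).
        return -np.sum((a - b - 0.5 * eps**2 * grad_log_post(b))**2) / (2 * eps**2)
    log_alpha = (log_post(prop) + logq(theta, prop)) - (log_post(theta) + logq(prop, theta))
    return prop if np.log(rng.uniform()) < log_alpha else theta
```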

Forward-Mode Automatic Differentiation in Julia

8 code implementations26 Jul 2016 Jarrett Revels, Miles Lubin, Theodore Papamarkou

We present ForwardDiff, a Julia package for forward-mode automatic differentiation (AD) featuring performance competitive with low-level languages like C++.
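
ForwardDiff itself is a Julia package; the underlying dual-number idea can be sketched in a few lines of Python (a toy illustration, not the package's implementation):

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: carrying the derivative alongside
    the value is the essence of forward-mode automatic differentiation."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).eps   # seed the perturbation, read off f'(x)

print(derivative(lambda x: 3 * x * x + 2 * x + 1, 2.0))  # prints 14.0
```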

Mathematical Software
