Search Results for author: Natasa Tagasovska

Found 9 papers, 4 papers with code

MoleCLUEs: Molecular Conformers Maximally In-Distribution for Predictive Models

no code implementations • 20 Jun 2023 • Michael Maser, Natasa Tagasovska, Jae Hyeon Lee, Andrew Watkins

Because we train our predictive models jointly with a conformer decoder, the new latent embeddings can be mapped back to their corresponding inputs, which we call MoleCLUEs, or (molecular) counterfactual latent uncertainty explanations (Antorán et al., 2020).

counterfactual

Vision Paper: Causal Inference for Interpretable and Robust Machine Learning in Mobility Analysis

no code implementations • 18 Oct 2022 • Yanan Xin, Natasa Tagasovska, Fernando Perez-Cruz, Martin Raubal

In particular, the transportation sector stands to benefit from progress in AI, advancing the development of intelligent transportation systems.

Causal Inference

Uncertainty Surrogates for Deep Learning

no code implementations • 16 Apr 2021 • Radhakrishna Achanta, Natasa Tagasovska

We show how our approach can be used for estimating uncertainty in prediction and out-of-distribution detection.

Computational Efficiency • Out-of-Distribution Detection

Deep Smoothing of the Implied Volatility Surface

no code implementations • NeurIPS 2020 • Damien Ackerer, Natasa Tagasovska, Thibault Vatter

Unlike in standard NN applications, financial industry practitioners use such models both to replicate market prices and to value other financial instruments.

Copulas as High-Dimensional Generative Models: Vine Copula Autoencoders

1 code implementation • NeurIPS 2019 • Natasa Tagasovska, Damien Ackerer, Thibault Vatter

We introduce the vine copula autoencoder (VCAE), a flexible generative model for high-dimensional distributions built in a straightforward three-step procedure.

Vocal Bursts Intensity Prediction
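The three-step procedure can be sketched end to end. The following is a minimal stand-in, not the paper's implementation: an identity map replaces the neural autoencoder, a Gaussian copula replaces the vine copula, and marginals are inverted with empirical quantiles.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Toy data: a dependent, non-Gaussian 2-D distribution.
x = rng.exponential(size=(2000, 2))
x[:, 1] += 0.8 * x[:, 0]

# Step 1: encode the data (stand-in encoder: identity;
# the paper uses a neural autoencoder here).
z = x

# Step 2: fit a copula to the latent codes.
# Probability-integral transform via empirical ranks, then normal scores.
n, d = z.shape
u = rankdata(z, axis=0) / (n + 1)        # pseudo-observations in (0, 1)
g = norm.ppf(u)                          # normal scores
corr = np.corrcoef(g, rowvar=False)      # Gaussian-copula parameter

# Step 3: sample from the copula, invert the marginals, and decode
# (decoder is again the identity in this sketch).
g_new = rng.multivariate_normal(np.zeros(d), corr, size=1000)
u_new = norm.cdf(g_new)
samples = np.column_stack(
    [np.quantile(z[:, j], u_new[:, j]) for j in range(d)]
)
print(samples.shape)
```

The Gaussian copula captures only one global dependence pattern; the vine copula in the paper composes pair-copulas to model richer dependence between latent dimensions.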

Generative Models for Simulating Mobility Trajectories

no code implementations • 30 Nov 2018 • Vaibhav Kulkarni, Natasa Tagasovska, Thibault Vatter, Benoit Garbinato

We also include two-sample tests to assess statistical similarity between the observed and simulated distributions, and we analyze the privacy trade-offs with respect to membership-inference and location-sequence attacks.

Semantic Similarity • Semantic Textual Similarity
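A two-sample test of this kind can be illustrated with the Kolmogorov–Smirnov test from SciPy; the trip-length summary statistic and the synthetic data below are assumptions for the sketch, not the paper's actual test suite.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Toy stand-ins for observed and generator-simulated trip lengths (km).
observed = rng.lognormal(mean=1.0, sigma=0.5, size=500)
simulated = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Two-sample Kolmogorov-Smirnov test: can we distinguish the simulated
# distribution from the observed one?
stat, p_value = ks_2samp(observed, simulated)
print(f"KS statistic={stat:.3f}, p={p_value:.3f}")
# A large p-value means we cannot reject that both samples
# come from the same distribution.
```

In practice one would run such tests on several mobility summaries (trip length, radius of gyration, visit frequency) rather than a single statistic.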

Single-Model Uncertainties for Deep Learning

1 code implementation • NeurIPS 2019 • Natasa Tagasovska, David Lopez-Paz

To estimate epistemic uncertainty, we propose Orthonormal Certificates (OCs), a collection of diverse non-constant functions that map all training samples to zero.

Prediction Intervals • Regression
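Going by this description, Orthonormal Certificates can be sketched as linear maps trained to send all training features to zero under an orthonormality penalty; points far from the training manifold then produce non-zero certificate outputs. The feature extractor, dimensions, and penalty weight below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution features phi(x) live on a 2-D subspace of R^5.
basis = rng.normal(size=(2, 5))
feats = rng.normal(size=(500, 2)) @ basis      # shape (500, 5)

d, k, lam, lr = 5, 3, 1.0, 0.05
C = rng.normal(size=(d, k)) * 0.1              # k certificate directions

# Gradient descent: drive certificate outputs on training features to
# zero while keeping the certificate directions orthonormal.
for _ in range(500):
    out = feats @ C                            # certificate outputs (500, k)
    grad_fit = feats.T @ out / len(feats)      # grad of mean squared output
    grad_orth = 2 * C @ (C.T @ C - np.eye(k))  # grad of ||C^T C - I||_F^2
    C -= lr * (grad_fit + lam * grad_orth)

def epistemic_score(x_feats):
    """Larger certificate-output norm => farther from training data."""
    return np.linalg.norm(x_feats @ C, axis=1)

in_dist = rng.normal(size=(100, 2)) @ basis    # on the training subspace
out_dist = rng.normal(size=(100, 5)) * 3.0     # off the training subspace
print(epistemic_score(in_dist).mean(), epistemic_score(out_dist).mean())
```

The certificates converge to directions orthogonal to the span of the training features, so in-distribution points score near zero while out-of-distribution points score high.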
