Search Results for author: David Carlson

Found 22 papers, 11 papers with code

Augmenting Ground-Level PM2.5 Prediction via Kriging-Based Pseudo-Label Generation

no code implementations16 Jan 2024 Lei Duan, Ziyang Jiang, David Carlson

We show that the proposed data augmentation strategy enhances the performance of the state-of-the-art convolutional neural network-random forest (CNN-RF) model, yielding a notable improvement in spatial correlation and a reduction in prediction error.

Data Augmentation, Pseudo Label, +1
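
The kriging step can be made concrete with a short sketch: simple-kriging (equivalently, Gaussian-process) interpolation of station measurements to unmonitored locations, whose outputs can serve as pseudo-labels. This is an illustrative stand-in under assumed settings, not the paper's pipeline; `kriging_pseudo_labels` and all parameters are hypothetical names.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance between two sets of 2-D coordinates."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def kriging_pseudo_labels(coords, values, query_coords,
                          length_scale=1.0, noise=1e-3):
    """Simple-kriging interpolation of station measurements to
    unmonitored query locations, usable as pseudo-labels."""
    K = rbf_kernel(coords, coords, length_scale) + noise * np.eye(len(coords))
    k_star = rbf_kernel(query_coords, coords, length_scale)
    mean = values.mean()
    # Kriging weights: solve the covariance system against the residuals.
    weights = np.linalg.solve(K, values - mean)
    return mean + k_star @ weights
```

The pseudo-labeled locations can then be mixed into the training set of a downstream regressor.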

Causal Mediation Analysis with Multi-dimensional and Indirectly Observed Mediators

no code implementations13 Jun 2023 Ziyang Jiang, Yiling Liu, Michael H. Klein, Ahmed Aloui, Yiman Ren, Keyu Li, Vahid Tarokh, David Carlson

This is important in many scientific applications to identify the underlying mechanisms of a treatment effect.

Domain Adaptation via Rebalanced Sub-domain Alignment

no code implementations3 Feb 2023 Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson

To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.

Unsupervised Domain Adaptation

Estimating Causal Effects using a Multi-task Deep Ensemble

1 code implementation26 Jan 2023 Ziyang Jiang, Zhuoran Hou, Yiling Liu, Yiman Ren, Keyu Li, David Carlson

A number of methods have been proposed for causal effect estimation, yet few have demonstrated efficacy in handling data with complex structures, such as images.

Multiple Domain Causal Networks

no code implementations13 May 2022 Tianhui Zhou, William E. Carson IV, Michael Hunter Klein, David Carlson

Finally, we justify our approach by providing theoretical analyses that demonstrate that MDCN improves on the generalization bound of the new, unobserved target center.

Selection Bias

AugmentedPCA: A Python Package of Supervised and Adversarial Linear Factor Models

1 code implementation7 Jan 2022 William E. Carson IV, Austin Talbot, David Carlson

Deep autoencoders are often extended with a supervised or adversarial loss to learn latent representations with desirable properties, such as greater predictivity of labels and outcomes or fairness with respect to a sensitive variable.

Fairness

Directed Spectrum Measures Improve Latent Network Models Of Neural Populations

no code implementations NeurIPS 2021 Neil Gallagher, Kafui Dzirasa, David Carlson

We prove that it is compatible with the implicit assumptions of linear factor models, and we provide a method to estimate the DS.

Estimating Potential Outcome Distributions with Collaborating Causal Networks

no code implementations4 Oct 2021 Tianhui Zhou, William E Carson IV, David Carlson

However, existing methods for estimating treatment effect potential outcome distributions often impose restrictive or simplistic assumptions about these distributions.

Causal Inference, Decision Making

Adversarial Factor Models for the Generation of Improved Autism Diagnostic Biomarkers

no code implementations24 Sep 2021 William E. Carson IV, Dmitry Isaev, Samatha Major, Guillermo Sapiro, Geraldine Dawson, David Carlson

Second, we show this same model can be used to learn a disentangled representation of multimodal biomarkers that results in an increase in predictive performance.

Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility

1 code implementation9 Sep 2021 Liyun Tu, Austin Talbot, Neil Gallagher, David Carlson

We demonstrate the effectiveness of these developments using synthetic data and electrophysiological recordings with an emphasis on how our learned representations can be used to design scientific experiments.

Estimating Uncertainty Intervals from Collaborating Networks

1 code implementation12 Feb 2020 Tianhui Zhou, Yitong Li, Yuan Wu, David Carlson

We address these challenges by proposing a novel method to capture predictive distributions in regression by defining two neural networks with two distinct loss functions.

Decision Making, Regression
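
As a simpler illustration of learning predictive intervals from distinct losses, here is a minimal stand-in using the pinball (quantile) loss. Note this is plain quantile regression with a linear model, not the paper's collaborating-networks objective; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_quantile_linear(x, y, tau, lr=0.05, epochs=2000):
    """Fit y ~ w*x + b at quantile level tau by subgradient
    descent on the pinball (quantile) loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        r = y - (w * x + b)                      # residuals
        grad = np.where(r > 0, -tau, 1.0 - tau)  # d(pinball)/d(prediction)
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)
    return w, b

# Synthetic data: y = 2x + Gaussian noise.
x = rng.uniform(-1.0, 1.0, 500)
y = 2.0 * x + rng.normal(0.0, 0.5, 500)

lo = fit_quantile_linear(x, y, 0.1)  # lower decile line
hi = fit_quantile_linear(x, y, 0.9)  # upper decile line
```

The two fitted lines bracket roughly 80% of the observations, giving a per-input predictive interval rather than a point estimate.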

Stochastic Bouncy Particle Sampler

1 code implementation ICML 2017 Ari Pakman, Dar Gilboa, David Carlson, Liam Paninski

We introduce a novel stochastic version of the non-reversible, rejection-free Bouncy Particle Sampler (BPS), a Markov process whose sample trajectories are piecewise linear.
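For intuition, here is a minimal sketch of the underlying (non-stochastic) BPS targeting a standard Gaussian, where bounce times can be sampled exactly by inverting the event rate; the paper's stochastic variant replaces the exact gradient with a mini-batch estimate. `bps_gaussian` is an illustrative name, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bps_gaussian(n_samples, dt=0.3, refresh_rate=1.0, dim=2):
    """Basic Bouncy Particle Sampler for a standard Gaussian target
    (U(x) = ||x||^2 / 2, so grad U = x). With a unit-speed velocity the
    bounce rate along the ray is max(0, v.x + t), which can be
    integrated and inverted in closed form."""
    x = np.zeros(dim)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    samples = []
    t_next = dt                        # time until the next recorded sample
    while len(samples) < n_samples:
        a = v @ x
        E = rng.exponential()          # unit-rate exponential
        t_bounce = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * E)
        t_refresh = rng.exponential(1.0 / refresh_rate)
        t_event = min(t_bounce, t_refresh)
        # Record states along the piecewise-linear segment.
        while t_next <= t_event and len(samples) < n_samples:
            samples.append(x + t_next * v)
            t_next += dt
        t_next -= t_event
        x = x + t_event * v
        if t_refresh < t_bounce:       # refresh the velocity direction
            v = rng.standard_normal(dim)
            v /= np.linalg.norm(v)
        else:                          # bounce: reflect v off grad U = x
            v = v - 2.0 * (v @ x) / (x @ x) * x
    return np.array(samples)
```

Velocity refreshment is what makes the chain ergodic; without it the particle can get trapped on a lower-dimensional orbit.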

Partition Functions from Rao-Blackwellized Tempered Sampling

no code implementations7 Mar 2016 David Carlson, Patrick Stinson, Ari Pakman, Liam Paninski

Partition functions of probability distributions are important quantities for model evaluation and comparisons.
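A simpler baseline makes the quantity concrete: estimating log Z by importance sampling with a Gaussian proposal. This is not the paper's Rao-Blackwellized tempered scheme, just a minimal sketch; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_partition_is(log_unnorm, sample_q, log_q, n=100_000):
    """Importance-sampling estimate of log Z, where
    Z = integral of exp(log_unnorm(x)) dx."""
    x = sample_q(n)
    log_w = log_unnorm(x) - log_q(x)
    m = log_w.max()                    # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

# Toy check: unnormalized Gaussian with sigma = 2,
# whose true log Z is 0.5 * log(2 * pi * sigma^2).
sigma = 2.0
est = log_partition_is(
    log_unnorm=lambda x: -x ** 2 / (2 * sigma ** 2),
    sample_q=lambda n: rng.normal(0.0, 3.0, n),
    log_q=lambda x: -0.5 * np.log(2 * np.pi * 9.0) - x ** 2 / 18.0,
)
```

Naive importance sampling degrades rapidly in high dimensions, which is exactly why tempered schemes like the one above are needed in practice.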

Bridging the Gap between Stochastic Gradient MCMC and Stochastic Optimization

1 code implementation25 Dec 2015 Changyou Chen, David Carlson, Zhe Gan, Chunyuan Li, Lawrence Carin

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs to popular stochastic optimization methods; however, this connection is not well studied.

Stochastic Optimization
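
The connection is visible in the SGLD update, which is a gradient step on the log-posterior plus injected Gaussian noise whose variance matches the step size. A minimal sketch on a toy 1-D Gaussian, using the full gradient for clarity (a stochastic-gradient version would substitute a mini-batch estimate); names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld(grad_log_post, theta0, step, n_iters, burn_in=1000):
    """SGLD update: a gradient step on the log-posterior plus Gaussian
    noise of variance equal to the step size."""
    theta = theta0
    samples = np.empty(n_iters - burn_in)
    for t in range(n_iters):
        theta = theta + 0.5 * step * grad_log_post(theta) \
                + np.sqrt(step) * rng.standard_normal()
        if t >= burn_in:
            samples[t - burn_in] = theta
    return samples

# Toy posterior: standard Gaussian, so grad log p(theta) = -theta.
samples = sgld(lambda th: -th, theta0=0.0, step=0.05, n_iters=50_000)
```

Dropping the noise term recovers plain gradient descent, which is the SG-MCMC/optimization connection the abstract refers to.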

Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks

no code implementations23 Dec 2015 Chunyuan Li, Changyou Chen, David Carlson, Lawrence Carin

PyTorch implementations of Bayes by Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, and more
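
A sketch of the core idea, assuming an RMSprop-style diagonal preconditioner as in the paper and omitting the small curvature-correction term; `psgld` and its defaults are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def psgld(grad_log_post, theta0, step=0.05, alpha=0.99, lam=1e-5,
          n_iters=50_000, burn_in=1000):
    """SGLD with an RMSprop-style diagonal preconditioner built from a
    running second moment of the gradient."""
    theta = np.asarray(theta0, dtype=float).copy()
    V = np.ones_like(theta)        # warm start avoids huge early steps
    out = []
    for t in range(n_iters):
        g = grad_log_post(theta)
        V = alpha * V + (1 - alpha) * g * g   # running second moment
        G = 1.0 / (lam + np.sqrt(V))          # diagonal preconditioner
        theta = theta + 0.5 * step * G * g \
                + np.sqrt(step * G) * rng.standard_normal(theta.shape)
        if t >= burn_in:
            out.append(theta.copy())
    return np.array(out)

# Toy posterior: standard Gaussian, grad log p(theta) = -theta.
draws = psgld(lambda th: -th, np.zeros(1))
```

The preconditioner rescales each coordinate by its recent gradient magnitude, which is what makes the method practical for the badly conditioned posteriors of deep networks.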

Neuroprosthetic decoder training as imitation learning

no code implementations13 Nov 2015 Josh Merel, David Carlson, Liam Paninski, John P. Cunningham

We describe how training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available.

Brain Computer Interface, Imitation Learning
