To the best of our knowledge, our Interventional Normalizing Flows are the first fully parametric, deep learning method for density estimation of potential outcomes.
(1) We provide a theoretical analysis showing that our framework yields multiply robust convergence rates: our ITE estimator achieves a fast convergence rate even if several nuisance estimators converge slowly.
As a result, ALD helps practitioners and researchers in algorithmic fairness to detect disparities in machine learning algorithms, so that disparate -- or even unfair -- outcomes can be mitigated.
The extensive adoption of business analytics (BA) has brought financial gains and increased efficiencies.
To the best of our knowledge, ours is the first framework to learn domain-invariant, contextual representations for UDA of time series data.
Common methods for learning optimal DTRs, however, have shortcomings: they are typically based on outcome prediction and not treatment effect estimation, or they use linear models that are restrictive for patient data from modern electronic health records.
To address our research question, we propose the use of web mining: we characterize the influence of different POIs from OpenStreetMap on the utilization of charging stations.
Using a simulation study, we demonstrate that our algorithm outperforms state-of-the-art methods from interpretable off-policy learning in terms of regret.
Although this is a widespread problem in practice, CATE estimation with missing treatments has received little attention.
Instead, medical practice is increasingly interested in estimating causal effects among patient subgroups from electronic health records, that is, observational data.
Observational data is confounded; randomized data is unconfounded, but its sample size is usually too small to learn heterogeneous treatment effects.
To this end, we develop the Deconfounding Temporal Autoencoder, a novel method that leverages observed noisy proxies to learn a hidden embedding that reflects the true hidden confounders.
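To make this concrete, below is a minimal, hypothetical sketch (not the published architecture; all module names and dimensions are illustrative): an LSTM autoencoder that compresses noisy time-varying proxies into a per-step hidden embedding, trained with a reconstruction objective so that the embedding retains the information carried by the proxies.

```python
# Hedged, simplified sketch of a temporal autoencoder over noisy proxies.
# Everything here is illustrative; it is not the published method's code.
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    def __init__(self, proxy_dim, embed_dim):
        super().__init__()
        self.encoder = nn.LSTM(proxy_dim, embed_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, proxy_dim)

    def forward(self, proxies):                  # proxies: (batch, time, proxy_dim)
        z, _ = self.encoder(proxies)             # per-step hidden embedding
        recon, _ = self.decoder(z)
        return z, self.out(recon)                # embedding + reconstructed proxies

model = TemporalAutoencoder(proxy_dim=10, embed_dim=4)
x = torch.randn(32, 20, 10)                      # synthetic noisy proxies
z, x_hat = model(x)
loss = torch.mean((x_hat - x) ** 2)              # reconstruction objective
```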
Since training data is often not representative of the target population, standard policy learning methods may yield policies that do not generalize to the target population.
We contribute as follows: First, we problematize fundamental assumptions in the current discourse on algorithmic fairness based on a systematic analysis of 310 articles.
Here, we train a QA system on both source data and generated data from the target domain with a contrastive adaptation loss that is incorporated in the training objective.
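As a rough illustration of what a contrastive adaptation term can look like (an assumption for this sketch, not the paper's exact loss), one can pull same-class representations from source and target domains together while pushing the centroids of different classes apart:

```python
# Hedged sketch of a contrastive adaptation loss; the class/domain structure
# and the margin formulation are assumptions made for illustration.
import torch

def contrastive_adaptation(feats, labels, domains, margin=1.0):
    """feats: (n, d); labels: (n,) class ids; domains: (n,) 0=source, 1=target."""
    loss = feats.new_zeros(())
    classes = labels.unique()
    centroids = [feats[labels == c].mean(0) for c in classes]
    for i, c in enumerate(classes):
        src = feats[(labels == c) & (domains == 0)]
        tgt = feats[(labels == c) & (domains == 1)]
        if len(src) and len(tgt):
            # intra-class: align source and target features of the same class
            loss = loss + torch.norm(src.mean(0) - tgt.mean(0)) ** 2
        for j in range(i + 1, len(classes)):
            # inter-class: keep different class centroids at least `margin` apart
            loss = loss + torch.relu(margin - torch.norm(centroids[i] - centroids[j])) ** 2
    return loss
```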
In this paper, we develop the Sequential Deconfounder, a method that enables estimating individualized treatment effects over time in the presence of unobserved confounders.
Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space toward the best-known global and local solutions with randomized step lengths.
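To illustrate these mechanics, here is a minimal PSO sketch (the objective, bounds, and hyperparameters are placeholders, not taken from any of the papers above):

```python
# Minimal particle swarm optimization: particles move toward their personal
# best and the global best with randomized step lengths.
import numpy as np

def pso(objective, dim, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # candidate positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()             # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # randomized steps toward the personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# usage: minimize the sphere function
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```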
We derive human mobility from anonymized, aggregated telecommunication data in a nationwide setting (Switzerland; February 10 - April 26, 2020), consisting of ~1.5 billion trips.
As this approach can incorporate any active learning agent into its ensemble, it makes it possible to increase the performance of any active learning agent by learning how to combine it with others.
Based on our regularization framework, we develop deep orthogonal networks for unconfounded treatments (DONUT), which learn outcomes that are orthogonal to the treatment assignment.
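One way such an orthogonality constraint can be imposed is sketched below (an illustration, not the published DONUT code): penalize the squared empirical moment between the outcome residual and the treatment residual, where mu_pred and e_pred are assumed to denote the network's outcome and propensity predictions.

```python
# Hedged sketch of an orthogonality-style regularizer; mu_pred and e_pred are
# assumed to be the outcome and propensity heads of the network.
import torch
import torch.nn.functional as F

def orthogonality_penalty(y, t, mu_pred, e_pred):
    """Squared empirical moment E[(Y - mu)(T - e)]^2; zero when the outcome
    residual is orthogonal to the treatment-assignment residual."""
    return ((y - mu_pred) * (t - e_pred)).mean() ** 2

def donut_style_loss(y, t, mu_pred, e_pred, lam=1.0):
    # factual outcome loss + propensity loss + orthogonality regularizer
    mse = torch.mean((y - mu_pred) ** 2)
    bce = F.binary_cross_entropy(e_pred, t)      # t: float tensor in {0., 1.}
    return mse + bce + lam * orthogonality_penalty(y, t, mu_pred, e_pred)
```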
(ii) Our system is designed to learn continuously during the KB completion task and, therefore, significantly improves its performance on initially zero- and few-shot relations over time.
These findings demonstrate that a large portion of the variance in health outcomes can be attributed to non-linear relationships between patient risk variables and imply that the current approach to measuring hospital performance should be expanded.
State-of-the-art question answering (QA) relies upon large amounts of training data for which labeling is time consuming and thus expensive.
Here, remarkably, annotating a stratified subset with only 1.2% of the original training set achieves 97.7% of the performance obtained by annotating the complete dataset.
Translating renderings (e.g., PDFs, scans) into hierarchical document structures is in high demand in the daily routines of many real-world applications.
The reactions of the human body to physical exercise, psychophysiological stress and heart diseases are reflected in heart rate variability (HRV).
This demonstrates the performance and superior interpretability of our method; we conclude by discussing implications for decision support.
The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer.
For this, we present a novel strategy for learning fully interpretable negation rules via weak supervision: we apply reinforcement learning to find a policy that reconstructs negation rules from sentiment predictions at document level.
In contrast, our research overcomes the limitations of pre-set rules by contributing a novel predictor of market baskets from sequential purchase histories: our predictions are based on similarity matching in order to identify similar purchase habits among the complete shopping histories of all customers.
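A hypothetical sketch of this kind of similarity matching (the Jaccard measure and all names are assumptions for illustration, not the paper's specification): score items by how often they appear in the next baskets of the customers whose full histories are most similar to the target's.

```python
# Illustrative nearest-neighbor basket prediction via similarity matching.
import numpy as np

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_basket(target_history, all_histories, all_next_baskets, k=5, top_n=10):
    """Histories are lists of baskets (sets of items); returns top_n item ids."""
    flat = set().union(*target_history)            # items ever bought by the target
    sims = [jaccard(flat, set().union(*h)) for h in all_histories]
    neighbors = np.argsort(sims)[::-1][:k]         # k most similar customers
    scores = {}
    for i in neighbors:
        for item in all_next_baskets[i]:
            scores[item] = scores.get(item, 0.0) + sims[i]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```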
As our primary contribution, this is the first work to derive an upper bound on the sample complexity of learning real-valued RNNs.
State-of-the-art systems in deep question answering proceed as follows: (1) an initial document retrieval selects relevant documents, which (2) are then processed by a neural network in order to extract the final answer.
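A minimal sketch of this two-stage pipeline, assuming TF-IDF retrieval and an off-the-shelf Hugging Face reader as stand-ins for the components used in the actual systems:

```python
# Two-stage deep QA: (1) retrieve relevant documents, (2) a neural reader
# extracts the likeliest answer. Stand-in components, not the papers' models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def answer(question, documents, k=3):
    # stage 1: TF-IDF retrieval of the k most relevant documents
    vec = TfidfVectorizer().fit(documents + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(documents))[0]
    top_docs = [documents[i] for i in sims.argsort()[::-1][:k]]
    # stage 2: a neural reader extracts an answer span from each document
    reader = pipeline("question-answering")
    candidates = [reader(question=question, context=d) for d in top_docs]
    return max(candidates, key=lambda c: c["score"])["answer"]
```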
The marvel of markets lies in the fact that dispersed information is instantaneously processed and used to adjust the price of goods, services and assets.
(4) We provide guidelines and implications for researchers, managers and practitioners in operations research who want to advance their capabilities for business analytics with regard to deep learning.
This paper provides a holistic study of how stock prices vary in their response to financial disclosures across different topics.
Traditional information retrieval (such as that offered by web search engines) burdens users with information overload: extensive result pages require them to manually locate the desired information.
Emotions widely affect human decision-making.
The macroeconomic climate influences operations with regard to, e.g., raw material prices, financing, supply chain utilization and demand quotas.
Hence, this paper studies the use of deep neural networks for financial decision support.
Information forms the basis for all human behavior, including the ubiquitous decision-making that people constantly perform in their everyday lives.
To learn from the resulting rhetorical structure, we propose a tensor-based, tree-structured deep neural network (named Discourse-LSTM) in order to process the complete discourse tree.
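As a simplified stand-in (this sketch does not reproduce the tensor-based Discourse-LSTM itself), a child-sum Tree-LSTM cell illustrates how a node's state can be composed bottom-up from the states of its children, so that an entire discourse tree can be processed recursively:

```python
# Child-sum Tree-LSTM cell: a node aggregates its children's hidden states,
# with one forget gate per child. Simplified illustration only.
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim + mem_dim, 3 * mem_dim)  # input/output/update
        self.fx = nn.Linear(in_dim, mem_dim)                 # forget gate, input part
        self.fh = nn.Linear(mem_dim, mem_dim)                # forget gate, per child

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, mem_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))     # one gate per child
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c

# usage at a leaf node: pass zero child states
cell = ChildSumTreeLSTMCell(in_dim=8, mem_dim=16)
h, c = cell(torch.randn(8), torch.zeros(1, 16), torch.zeros(1, 16))
```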
Information systems experience an ever-growing volume of unstructured data, particularly in the form of textual materials.
Decision analytics commonly focuses on the text mining of financial news sources in order to provide managerial decision support and to predict stock market movements.