Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability, both of which introduce uncertainty into the estimated underlying structural causal model (SCM).
Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science.
However, acting intelligently on causal structure inferred from finite data crucially requires reasoning about the uncertainty of that inference.
We study the problem of self-supervised structured representation learning using autoencoders for generative modeling.
We investigate and analyse the performance of popular CNN architectures (GoogLeNet, AlexNet), originally developed for other image classification tasks, when applied to the task of detecting selfies on multimedia platforms.
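Repurposing an architecture such as GoogLeNet or AlexNet for selfie detection typically amounts to swapping the final classification layer for a two-way (selfie vs. non-selfie) head. The sketch below illustrates this pattern with a tiny hypothetical CNN backbone (so it runs without downloading pretrained weights); the model, layer sizes, and input resolution are illustrative assumptions, not the setup used in the original work.

```python
import torch
from torch import nn

class TinySelfieNet(nn.Module):
    """Minimal stand-in for a GoogLeNet/AlexNet-style backbone with its
    final layer replaced by a binary selfie-detection head (hypothetical)."""

    def __init__(self):
        super().__init__()
        # Small convolutional feature extractor playing the role of the
        # pretrained backbone (illustrative sizes only).
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two-way head: selfie vs. non-selfie.
        self.classifier = nn.Linear(16, 2)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinySelfieNet().eval()
with torch.no_grad():
    # Batch of 4 RGB images at an assumed 64x64 resolution.
    logits = model(torch.randn(4, 3, 64, 64))
probs = logits.softmax(dim=1)  # per-image selfie / non-selfie scores
```

In practice one would load a pretrained backbone, freeze or fine-tune its feature layers, and train only the replaced head on labelled selfie data.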
This is used to compute the sparse coefficients of the input action sequence, which is divided into overlapping windows; each window yields a probability score for every action class.
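The windowed scoring scheme can be sketched as follows. This is an illustrative assumption of the pipeline, not the original implementation: the dictionary is random, least-squares coefficients stand in for a proper sparse coder, and the softmax over per-class coefficient energies is a hypothetical way to turn windows into class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned dictionary: one block of atoms per action class.
n_features, atoms_per_class, n_classes = 16, 8, 3
dictionary = rng.standard_normal((n_features, atoms_per_class * n_classes))

def window_scores(window, dictionary, n_classes, atoms_per_class):
    """Least-squares coefficients as a stand-in for sparse coding, then a
    softmax over per-class coefficient energies as probability scores."""
    coeffs, *_ = np.linalg.lstsq(dictionary, window, rcond=None)
    energy = np.array([
        np.linalg.norm(coeffs[c * atoms_per_class:(c + 1) * atoms_per_class])
        for c in range(n_classes)
    ])
    exp = np.exp(energy - energy.max())
    return exp / exp.sum()

# Divide the action sequence into overlapping windows (stride < window size).
sequence = rng.standard_normal((n_features, 100))
window_size, stride = 10, 5
scores = [
    window_scores(sequence[:, t:t + window_size].mean(axis=1),
                  dictionary, n_classes, atoms_per_class)
    for t in range(0, sequence.shape[1] - window_size + 1, stride)
]
# Aggregate the per-window scores into a sequence-level prediction.
prediction = int(np.argmax(np.mean(scores, axis=0)))
```

Averaging the per-window probability vectors before the argmax is one simple aggregation choice; voting over per-window argmaxes would be an equally plausible alternative.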