We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes and takes the form of a regularizer, making it applicable to many kinds of neural networks.
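A minimal sketch of what a covariance-style regularizer can look like in practice; the batch-level Gram-matrix matching, the function name, and the weighting constant `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def covariance_loss(features: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Illustrative covariance-matching regularizer (a sketch, not the
    paper's exact Covariance Loss): align the sample-similarity structure
    of the learned features with that of the targets within a mini-batch.
    features: (N, d_f); targets: (N, d_t), e.g. one-hot labels."""
    f = features - features.mean(dim=0, keepdim=True)  # center features
    t = targets - targets.mean(dim=0, keepdim=True)    # center targets
    gram_f = f @ f.T / f.shape[1]  # (N, N) covariance structure of features
    gram_t = t @ t.T / t.shape[1]  # (N, N) covariance structure of targets
    return ((gram_f - gram_t) ** 2).mean()

# Used as a regularizer alongside the task loss, e.g.:
#   loss = task_loss + lam * covariance_loss(hidden, labels_onehot)
```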
Generative Adversarial Networks (GANs) have shown satisfactory performance in synthetic image generation by devising complex network structures and adversarial training schemes.
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models, since each kernel structure differs in model complexity and fit to the data.
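As an illustration of this trade-off (the toy data, candidate kernels, and noise level are assumptions for the example, not the paper's setup), candidate kernels can be scored by their fitted log marginal likelihood, which balances data fit against model complexity:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

# Toy 1-D regression data.
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=50)

# Score each candidate kernel by its fitted log marginal likelihood.
for kernel in [RBF(), Matern(nu=1.5), RationalQuadratic()]:
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
    print(kernel, gp.log_marginal_likelihood_value_)
```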
As a result, it is possible to assign bipolar relevance scores to the target (positive) and hostile (negative) attributions while keeping each attribution aligned with its importance.
Recent advances in Deep Gaussian Processes (DGPs) show the potential for more expressive representations than those of traditional Gaussian Processes (GPs).
We reveal that the predictive performance of deep temporal neural networks improves when the training data is preprocessed with trend filtering.
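A minimal sketch of this kind of preprocessing, using the Hodrick-Prescott filter from statsmodels as a stand-in for the paper's trend filtering (the synthetic series and the smoothing parameter `lamb` are assumptions for the example):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Noisy series with an underlying trend.
rng = np.random.default_rng(0)
t = np.arange(500)
series = 0.02 * t + np.sin(t / 25.0) + 0.3 * rng.normal(size=t.size)

# Extract a smooth trend; the forecaster is then trained on `trend`
# rather than the raw series.
cycle, trend = hpfilter(series, lamb=1600)
```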
Recently, deep neural networks have demonstrated competitive performance in classification and regression tasks on many kinds of temporal or sequential data.
Despite recent advances in generative networks, identifying the image generation mechanism still remains challenging.
From the test, we observed that MNLMs partially understand various types of commonsense knowledge but do not accurately understand the semantic meaning of relations.
Recently, robotic grasp detection (GD) and object detection (OD) with reasoning have been investigated using deep neural networks (DNNs).
We also provide conditions under which CBOCPD achieves lower prediction error than BOCPD.
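For context (this is the standard Adams-MacKay recursion, not the CBOCPD variant itself), BOCPD maintains a posterior over the run length $r_t$, the time elapsed since the last changepoint:

$$P(r_t \mid x_{1:t}) \;\propto\; \sum_{r_{t-1}} P(r_t \mid r_{t-1})\,\pi(x_t \mid r_{t-1})\,P(r_{t-1} \mid x_{1:t-1}),$$

where $\pi$ is the posterior predictive distribution given the observations in the current run.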
As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, there is an increasing interest in understanding the complex internal mechanisms of DNNs.
Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces.
Here, we propose the Parametric Information Bottleneck (PIB) for a neural network, explicitly utilizing (only) its model parameters to approximate the compression and relevance terms.
Information Bottleneck (IB) is a generalization of rate-distortion theory that naturally incorporates compression and relevance trade-offs for learning.
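For reference, the classical IB objective for a stochastic representation $T$ of an input $X$ with target $Y$ makes this trade-off explicit:

$$\min_{p(t \mid x)} \; I(X; T) \;-\; \beta\, I(T; Y),$$

where $\beta > 0$ controls the balance between compressing $X$ and retaining information relevant to $Y$.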
In experiments with two real-world datasets, we demonstrate that our group CNNs outperform existing CNN-based regression methods.
In this paper, we present a new GP model which naturally handles multiple time series by placing an Indian Buffet Process (IBP) prior on the presence of shared kernels.
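As a rough illustration of the IBP prior itself (a generic sketch of the standard "Indian buffet" generative process, not the paper's inference scheme; the function name and matrix encoding are assumptions):

```python
import numpy as np

def sample_ibp(num_series, alpha, seed=0):
    """Sample a binary matrix Z from an Indian Buffet Process:
    Z[n, k] = 1 means time series n uses shared kernel k."""
    rng = np.random.default_rng(seed)
    counts = []  # how many series use each existing kernel so far
    rows = []
    for n in range(1, num_series + 1):
        # Reuse existing kernel k with probability counts[k] / n ...
        row = [int(rng.random() < c / n) for c in counts]
        counts = [c + used for c, used in zip(counts, row)]
        # ... and introduce Poisson(alpha / n) brand-new kernels.
        new = rng.poisson(alpha / n)
        counts += [1] * new
        rows.append(row + [1] * new)
    Z = np.zeros((num_series, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

print(sample_ibp(num_series=5, alpha=2.0))
```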
We demonstrate that the new automatic kernel decomposition procedure outperforms existing methods in predicting discrete events in real-world data.
In this paper, we provide a new perspective on building expressive probabilistic programs from continuous time-series data when the model structure is not given.
To compute the symmetry in a grid structure, we introduce three legal grid moves, (i) Commutation, (ii) Cyclic Permutation, and (iii) Stabilization, which act on sets of local grid squares called grid blocks.
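A hedged sketch of one of these moves on a toy grid representation (the permutation-list encoding and the function name are assumptions for illustration; grid-diagram conventions vary):

```python
def cyclic_permute_rows(xs, os):
    """Cyclic permutation: move the top row of a grid diagram to the
    bottom. The grid is encoded (hypothetically) as two permutations,
    where xs[i] / os[i] give the column of the X / O marker in row i."""
    return xs[1:] + xs[:1], os[1:] + os[:1]

# Example: a 3x3 grid; the markers wrap around cyclically.
print(cyclic_permute_rows([0, 1, 2], [1, 2, 0]))  # ([1, 2, 0], [2, 0, 1])
```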
Semantic image segmentation is a principal problem in computer vision, where the aim is to correctly classify each individual pixel of an image into a semantic label.
Gaussian Processes (GPs) provide a general and analytically tractable way of modeling complex time-varying, nonparametric functions.
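Concretely, this tractability means that for training inputs $X$, noisy observations $y$ with noise variance $\sigma^2$, and test inputs $X_*$, the GP posterior is available in closed form:

$$\mu_* = K_{*X}\left(K_{XX} + \sigma^2 I\right)^{-1} y, \qquad \Sigma_* = K_{**} - K_{*X}\left(K_{XX} + \sigma^2 I\right)^{-1} K_{X*},$$

where $K_{XX}$, $K_{*X}$, and $K_{**}$ are the kernel matrices over the training and test inputs.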