Only our framework allowed us to design a method that performs well across the spectrum while remaining modular, so that additional information about the quality of the data can be incorporated if it ever becomes available.
We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes.
A key technical challenge is to strike a balance between the incomplete physics and trainable components such as neural networks, ensuring that the physics part is used in a meaningful way.
The best-performing published positioning method on this dataset is improved by 40% in terms of median error and 6% in terms of mean error when the augmented dataset is used.
More specifically, using a public LoRaWAN dataset, the current work analyses: the partitioning of the available training set between the tasks of determining the location estimates and training the DAE, the selection of a subset of the most reliable estimates, and the impact that the spatial distribution of the data has on the accuracy of the DAE.
Unfortunately, maximum likelihood training of such models often fails, with samples from the generative model failing to adequately respect the input properties.
Despite the recent success of reinforcement learning in various domains, these approaches remain, for the most part, dauntingly sensitive to hyper-parameters and often depend on substantial engineering effort for their success.
We propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), which learns to differentiate between the individual mixture components and therefore allows generation from the individual data clusters.
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity in recent years.
Learning embeddings of entities and relations existing in knowledge bases allows the discovery of hidden patterns in data.
To facilitate the reproducibility of tests and comparability of results, the code and train/validation/test split used in this study are available.
In this paper, we propose to map any complex structure onto a generic form, called serialization, over which we can apply any sequence-based density estimator.
Image classification with deep neural networks is typically restricted to images of small dimensionality, such as 224 x 224 in ResNet models.
Continual learning is the ability to learn sequentially over time, accommodating new knowledge while retaining previously learned experiences.
GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs.
We investigate structured sparsity methods for variable selection in regression problems where the target depends nonlinearly on the inputs.
We propose a new method for input variable selection in nonlinear regression.
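To make the variable-selection setting concrete, here is a minimal sketch using plain L1-regularized (Lasso) regression on a toy linear problem, assuming scikit-learn; this is a generic sparsity baseline, not the structured-sparsity nonlinear method proposed in the paper:

```python
# Sketch of sparsity-based variable selection (plain Lasso on a toy problem,
# not the paper's structured-sparsity method for nonlinear regression).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# The target depends only on the first two of the ten input variables.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1).fit(X, y)
# Variables with nonzero coefficients are the ones the model selects.
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print(selected.tolist())
```

The L1 penalty drives the coefficients of irrelevant inputs exactly to zero; structured-sparsity penalties extend this idea by zeroing out whole groups of parameters associated with an input, which is what makes selection possible when the dependence is nonlinear.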
We present a new method for forecasting systems of multiple interrelated time series.
Traditional linear methods for forecasting multivariate time series are not able to satisfactorily model the non-linear dependencies that may exist in non-Gaussian series.
Lifelong learning is the problem of learning multiple tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner.
In this paper, we propose a framework that allows for the incorporation of the feature side-information during the learning of very general model families to improve the prediction performance.
Motivated by the fact that very often the users' and items' descriptions as well as the preference behavior can be well summarized by a small number of hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization (LambdaMART-MF), that learns a low rank latent representation of users and items using gradient boosted trees.
We consider the problem of learning models for forecasting multiple time-series systems together with discovering the leading indicators that serve as good predictors for the system.
Recently, SVMs have been analyzed from a metric learning perspective, and new algorithms have been developed that build on the strengths of each.
We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold.