no code implementations • 16 Oct 2024 • Van Khoa Nguyen, Maciej Falkiewicz, Giangiacomo Mercatali, Alexandros Kalousis
Specifically, we propose Molecular Implicit Neural Generation (MING), a diffusion-based model that learns molecular distributions in function space.
1 code implementation • 28 Jun 2024 • Maciej Falkiewicz, Naoya Takeishi, Alexandros Kalousis
Additionally, we review the literature on the Generalized KS test and discuss the connections between KSGAN and existing adversarial generative models.
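KSGAN builds on the Kolmogorov-Smirnov test. As background, the classical two-sample KS statistic (the quantity the paper generalizes, not the authors' generalized variant itself) can be sketched as:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    # Empirical CDF of each sample evaluated on the pooled grid.
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

rng = np.random.default_rng(0)
same = ks_statistic(rng.normal(size=1000), rng.normal(size=1000))
diff = ks_statistic(rng.normal(size=1000), rng.normal(3.0, 1.0, size=1000))
print(same, diff)  # the shifted sample yields a much larger statistic
```

In an adversarial setup, a statistic like this plays the role of the critic's objective: the generator is trained to make it small.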
1 code implementation • 25 Mar 2024 • Van Khoa Nguyen, Yoann Boget, Frantzeska Lavda, Alexandros Kalousis
Exploring graph latent structures has garnered little attention in graph generative research.
1 code implementation • NeurIPS 2023 • Maciej Falkiewicz, Naoya Takeishi, Imahn Shekhzadeh, Antoine Wehenkel, Arnaud Delaunoy, Gilles Louppe, Alexandros Kalousis
Bayesian inference allows expressing the uncertainty of posterior belief under a probabilistic model given prior information and the likelihood of the evidence.
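The Bayesian update the sentence describes, posterior proportional to prior times likelihood, can be illustrated with a toy discrete example of our own (a coin-bias grid, not from the paper):

```python
import numpy as np

# Discrete Bayes update: posterior ∝ prior × likelihood.
thetas = np.linspace(0.01, 0.99, 99)        # candidate coin biases
prior = np.ones_like(thetas) / len(thetas)  # uniform prior belief
heads, tails = 7, 3                         # observed evidence
likelihood = thetas**heads * (1 - thetas)**tails
posterior = prior * likelihood
posterior /= posterior.sum()                # normalize to a distribution

print(thetas[np.argmax(posterior)])         # posterior mode at 0.7
```

Simulation-based inference targets exactly this posterior in settings where the likelihood cannot be written down, which is why calibrating the resulting approximation matters.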
1 code implementation • 16 Jun 2023 • João A. Cândido Ramos, Lionel Blondé, Naoya Takeishi, Alexandros Kalousis
In this paper, we introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
no code implementations • 13 Jun 2023 • Yoann Boget, Magda Gregorova, Alexandros Kalousis
Despite advances in generative methods, accurately modeling the distribution of graphs remains a challenging task, primarily because of the absence of a predefined or inherent unique graph representation.
1 code implementation • 1 Dec 2022 • Yoann Boget, Magda Gregorova, Alexandros Kalousis
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
1 code implementation • 24 Oct 2022 • Naoya Takeishi, Alexandros Kalousis
The combination of deep neural nets and theory-driven models, which we call deep grey-box modeling, can be inherently interpretable to some extent thanks to the theory backbone.
no code implementations • 7 Dec 2021 • Yoann Boget, Magda Gregorova, Alexandros Kalousis
One solution consists of using equivariant generative functions, which ensure the ordering invariance.
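Equivariance here means that permuting the nodes before applying the function gives the same result as applying the function and then permuting. A minimal sketch (our own toy, using a node-wise map that is equivariant by construction):

```python
import numpy as np

def node_mlp(x, w=2.0, b=0.5):
    """A nonlinearity applied identically to every node's features.
    Acting node-wise makes it permutation-equivariant by construction."""
    return np.tanh(w * x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))        # 5 nodes, 3 features each
perm = rng.permutation(5)

# Equivariance: permuting nodes then mapping == mapping then permuting.
lhs = node_mlp(x[perm])
rhs = node_mlp(x)[perm]
print(np.allclose(lhs, rhs))       # True
```

A generative function with this property assigns the same probability to every ordering of the same graph, which is the ordering invariance the sentence refers to.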
1 code implementation • 3 Jul 2021 • Lionel Blondé, Alexandros Kalousis, Stéphane Marchand-Maillet
Only our framework allowed us to design a method that performs well across the spectrum while remaining modular, should more information about the quality of the data ever become available.
1 code implementation • 21 Jun 2021 • Joao A. Candido Ramos, Lionel Blondé, Stéphane Armand, Alexandros Kalousis
In this work, we want to learn to model the dynamics of similar yet distinct groups of interacting objects.
1 code implementation • ICLR Workshop Neural_Compression 2021 • Magda Gregorová, Marc Desaules, Alexandros Kalousis
We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes.
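The two learned ingredients, a transform with quantization and a distribution over codes, can be sketched in a toy form (our own construction, not the paper's model): quantize a latent vector against a codebook and estimate its bit cost under a categorical code distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = np.linspace(-2, 2, 8)               # 8 scalar codes
logits = rng.normal(size=8)
probs = np.exp(logits) / np.exp(logits).sum()  # learned code probabilities

z = rng.normal(size=16)                        # latent after the transform
# Quantize each latent entry to its nearest code.
idx = np.argmin(np.abs(z[:, None] - codebook[None, :]), axis=1)
# Ideal entropy-coding cost of the code sequence under the learned model.
bits = -np.log2(probs[idx]).sum()

print(idx.shape, bits)
```

Training jointly pushes the transform to produce codes that are cheap under the learned distribution, which is the rate term of a rate-distortion objective.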
no code implementations • NeurIPS 2021 • Naoya Takeishi, Alexandros Kalousis
A key technical challenge is to strike a balance between the incomplete physics and trainable components such as neural networks for ensuring that the physics part is used in a meaningful manner.
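The division of labor between a fixed physics term and a trainable correction can be sketched with a toy grey-box model of our own (least squares standing in for a neural network):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
true = 9.81 * x + 0.3 * x**2   # "reality": known physics + unknown effect
physics = 9.81 * x             # theory-driven part, kept fixed

# Trainable part: fit only the residual the physics cannot explain.
residual = true - physics
features = np.vander(x, 3)     # columns x^2, x, 1
coef = np.linalg.lstsq(features, residual, rcond=None)[0]
pred = physics + features @ coef

print(np.max(np.abs(pred - true)))  # essentially zero on this toy problem
```

The balance the sentence mentions is about keeping the physics term load-bearing: if the trainable part is too flexible, it can explain everything on its own and the physics becomes vestigial.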
no code implementations • ICLR 2021 • Jason Ramapuram, Yan Wu, Alexandros Kalousis
Episodic and semantic memory are critical components of the human memory model.
no code implementations • 4 Feb 2021 • Grigorios G. Anagnostopoulos, Alexandros Kalousis
With the use of the augmented dataset, the best-performing published positioning method on this dataset is improved by 40% in terms of median error and by 6% in terms of mean error.
no code implementations • 20 Nov 2020 • Grigorios G. Anagnostopoulos, Alexandros Kalousis
More specifically, using a public LoRaWAN dataset, the current work analyses: the repartition of the available training set between the tasks of determining the location estimates and the DAE, the concept of selecting a subset of the most reliable estimates, and the impact that the spatial distribution of the data has on the accuracy of the DAE.
no code implementations • NeurIPS 2020 • Amina Mollaysa, Brooks Paige, Alexandros Kalousis
Unfortunately, maximum likelihood training of such models often fails with the samples from the generative model inadequately respecting the input properties.
1 code implementation • 28 Jun 2020 • Lionel Blondé, Pablo Strasser, Alexandros Kalousis
Despite the recent success of reinforcement learning in various domains, these approaches remain, for the most part, deterringly sensitive to hyper-parameters and often hinge on essential engineering feats for their success.
no code implementations • 25 Nov 2019 • Frantzeska Lavda, Magda Gregorová, Alexandros Kalousis
We propose a novel formulation of variational autoencoders, conditional prior VAE (CP-VAE), which learns to differentiate between the individual mixture components and therefore allows for generations from the distributional data clusters.
no code implementations • 14 Aug 2019 • Grigorios G. Anagnostopoulos, Alexandros Kalousis
To facilitate the reproducibility of tests and comparability of results, the code and train/validation/test split used in this study are available.
no code implementations • 14 Aug 2019 • Prodromos Kolyvakis, Alexandros Kalousis, Dimitris Kiritsis
Learning embeddings of entities and relations existing in knowledge bases allows the discovery of hidden patterns in data.
no code implementations • 14 Aug 2019 • Grigorios G. Anagnostopoulos, Alexandros Kalousis
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity in recent years.
no code implementations • 27 May 2019 • Pablo Strasser, Stephane Armand, Stephane Marchand-Maillet, Alexandros Kalousis
In this paper, we propose to map any complex structure onto a generic form, called serialization, over which we can apply any sequence-based density estimator.
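The serialization idea, flattening an arbitrary structure into a token sequence that a sequence-based density estimator can consume, can be sketched as follows (a generic illustration of our own, not the paper's exact scheme):

```python
# Serialize a nested structure into a flat token sequence, over which
# any sequence model could then be applied.
def serialize(obj):
    if isinstance(obj, dict):
        tokens = ["<dict>"]
        for k in sorted(obj):           # fixed key order for determinism
            tokens += [str(k)] + serialize(obj[k])
        return tokens + ["</dict>"]
    if isinstance(obj, list):
        return ["<list>"] + [t for v in obj for t in serialize(v)] + ["</list>"]
    return [str(obj)]                   # leaf value becomes a single token

tokens = serialize({"a": [1, 2], "b": 3})
print(tokens)
# ['<dict>', 'a', '<list>', '1', '2', '</list>', 'b', '3', '</dict>']
```

The bracketing tokens preserve enough structure that the original object class is recoverable from the sequence, which is what lets a sequence density estimator stand in for a structured one.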
1 code implementation • 8 Dec 2018 • Jason Ramapuram, Maurits Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis
Image classification with deep neural networks is typically restricted to images of small dimensionality such as 224 x 224 in ResNet models [24].
no code implementations • 24 Oct 2018 • Frantzeska Lavda, Jason Ramapuram, Magda Gregorova, Alexandros Kalousis
Continual learning is the ability to sequentially learn over time by accommodating new knowledge while retaining previously learned experiences.
3 code implementations • 6 Sep 2018 • Lionel Blondé, Alexandros Kalousis
GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs.
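In GAIL-style training, the discriminator's output is turned into a surrogate reward for the policy. A minimal sketch of one commonly used form (a generic illustration, not necessarily the exact variant used in this paper):

```python
import numpy as np

def gail_reward(d_logit):
    """Surrogate imitation reward from a discriminator logit:
    r = -log(1 - D(s, a)), where D = sigmoid(logit) is the
    discriminator's probability that (s, a) is expert-like."""
    d = 1.0 / (1.0 + np.exp(-d_logit))
    return -np.log(1.0 - d + 1e-8)

# The more expert-like the discriminator finds (s, a), the larger the reward.
print(gail_reward(-2.0) < gail_reward(2.0))  # True
```

The policy is then optimized against this reward with a standard RL algorithm, while the discriminator is trained adversarially to separate expert from policy trajectories.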
1 code implementation • 16 May 2018 • Magda Gregorová, Alexandros Kalousis, Stéphane Marchand-Maillet
We investigate structured sparsity methods for variable selection in regression problems where the target depends nonlinearly on the inputs.
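Structured sparsity for variable selection is typically enforced with group-lasso-type penalties that zero out whole blocks of weights at once. A minimal sketch of the corresponding proximal (group soft-thresholding) step, as a generic illustration rather than the paper's kernel-based method:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal step of the group lasso: shrink each group's weight
    block toward zero, and zero out entire groups whose norm falls
    below lam (this is what performs variable selection)."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(w, groups, lam=1.0)
print(shrunk)  # first group shrunk, second group zeroed out entirely
```

Grouping the weights by input variable means a zeroed group removes that variable from the model, even when the regression function itself is nonlinear.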
no code implementations • 19 Apr 2018 • Magda Gregorová, Jason Ramapuram, Alexandros Kalousis, Stéphane Marchand-Maillet
We propose a new method for input variable selection in nonlinear regression.
1 code implementation • 2 Oct 2017 • Magda Gregorova, Alexandros Kalousis, Stephane Marchand-Maillet
We present a new method for forecasting systems of multiple interrelated time series.
1 code implementation • 27 Jun 2017 • Magda Gregorová, Alexandros Kalousis, Stéphane Marchand-Maillet
Traditional linear methods for forecasting multivariate time series are not able to satisfactorily model the non-linear dependencies that may exist in non-Gaussian series.
1 code implementation • ICLR 2018 • Jason Ramapuram, Magda Gregorova, Alexandros Kalousis
Lifelong learning is the problem of learning multiple consecutive tasks sequentially, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner.
no code implementations • ICML 2017 • Amina Mollaysa, Pablo Strasser, Alexandros Kalousis
In this paper, we propose a framework that allows for the incorporation of the feature side-information during the learning of very general model families to improve the prediction performance.
no code implementations • NeurIPS 2015 • Ke Sun, Jun Wang, Alexandros Kalousis, Stephane Marchand-Maillet
We give theoretical propositions to show that space-time is a more powerful representation than Euclidean space.
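The distinguishing feature of a space-time (pseudo-Euclidean) representation is its indefinite inner product: space dimensions contribute positively to squared distances and time dimensions negatively. A minimal sketch of our own:

```python
import numpy as np

def spacetime_sq_dist(u, v, n_time=1):
    """Squared 'distance' in a pseudo-Euclidean (space-time) embedding:
    space dimensions add to the distance, time dimensions subtract."""
    d = u - v
    space, time = d[:-n_time], d[-n_time:]
    return np.dot(space, space) - np.dot(time, time)

u = np.array([1.0, 0.0, 2.0])   # last coordinate is the "time" axis
v = np.array([0.0, 0.0, 0.0])
print(spacetime_sq_dist(u, v))  # 1 - 4 = -3: negative, unlike Euclidean
```

Because squared distances can be negative, such spaces can represent similarity structures (e.g. certain non-metric dissimilarities) that no Euclidean embedding can capture, which is the extra representational power the sentence claims.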
no code implementations • 4 Nov 2015 • Phong Nguyen, Jun Wang, Alexandros Kalousis
Motivated by the fact that very often the users' and items' descriptions as well as the preference behavior can be well summarized by a small number of hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization (LambdaMART-MF), that learns a low rank latent representation of users and items using gradient boosted trees.
no code implementations • 7 Jul 2015 • Magda Gregorova, Alexandros Kalousis, Stéphane Marchand-Maillet
We consider the problem of learning models for forecasting multiple time-series systems together with discovering the leading indicators that serve as good predictors for the system.
no code implementations • 12 May 2014 • Jun Wang, Ke Sun, Fei Sha, Stephane Marchand-Maillet, Alexandros Kalousis
This induces in the input data space a new family of distance metrics with unique properties.
no code implementations • 16 Sep 2013 • Huyen Do, Alexandros Kalousis
Recently, SVMs have been analyzed from a metric learning perspective, making it possible to develop new algorithms that build on the strengths of each.
no code implementations • NeurIPS 2012 • Jun Wang, Alexandros Kalousis, Adam Woznica
We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold.
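A local metric that varies smoothly over the data can be written as a point-dependent weighted combination of a few basis metrics. A toy sketch of our own construction (smooth softmax weights over anchor points, not the authors' parametrization):

```python
import numpy as np

rng = np.random.default_rng(0)
basis = [np.eye(2), np.diag([4.0, 0.25])]     # two PSD basis metrics
anchors = np.array([[0.0, 0.0], [5.0, 5.0]])  # where each basis "lives"

def local_metric(x):
    """Smoothly weighted combination of basis metrics at point x."""
    logits = -np.linalg.norm(x - anchors, axis=1)
    w = np.exp(logits) / np.exp(logits).sum()  # softmax over anchors
    return sum(wi * Mi for wi, Mi in zip(w, basis))

def local_sq_dist(x, y):
    """Mahalanobis-style squared distance under the metric at the midpoint."""
    M = local_metric((x + y) / 2)
    d = x - y
    return d @ M @ d

print(local_sq_dist(np.array([0.0, 0.0]), np.array([1.0, 0.0])))
```

Since the weights vary smoothly with x, nearby points see nearly the same metric, while distant regions of the manifold can use very different ones.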
no code implementations • NeurIPS 2011 • Jun Wang, Huyen T. Do, Adam Woznica, Alexandros Kalousis
However, the problem then becomes finding the appropriate kernel function.