1 code implementation • 5 Feb 2024 • Bernard Spiegl, Andrea Perin, Stéphane Deny, Alexander Ilin
Deep learning is providing a wealth of new approaches to the old problem of novel view synthesis, from Neural Radiance Field (NeRF) based approaches to end-to-end style architectures.
no code implementations • 22 May 2023 • Sam Spilsbury, Alexander Ilin
Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough.
1 code implementation • 30 Jan 2023 • Kalle Kujanpää, Joni Pajarinen, Alexander Ilin
The ability to plan actions on multiple levels of abstraction enables intelligent agents to solve complex tasks effectively.
2 code implementations • 25 Oct 2022 • Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen
Offline reinforcement learning, by learning from a fixed dataset, makes it possible to learn agent behaviors without interacting with the environment.
no code implementations • 25 Oct 2022 • Oscar Vikström, Alexander Ilin
With the recent successful adaptation of transformers to the vision domain, particularly when trained in a self-supervised fashion, vision transformers have been shown to learn impressive object-reasoning-like behaviour and features that are expressive for the task of object segmentation in images.
1 code implementation • 4 Oct 2022 • Kalle Kujanpää, Amin Babadi, Yi Zhao, Juho Kannala, Alexander Ilin, Joni Pajarinen
To address this problem, we propose Continuous Monte Carlo Graph Search (CMCGS), an extension of MCTS to online planning in environments with continuous state and action spaces.
1 code implementation • NAACL (ACL) 2022 • Sam Spilsbury, Alexander Ilin
We provide a study of how induced model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems.
no code implementations • 11 Apr 2022 • Katsiaryna Haitsiukevich, Alexander Ilin
Learning the solution of partial differential equations (PDEs) with a neural network is an attractive alternative to traditional solvers due to its elegance, greater flexibility and the ease of incorporating observed data.
no code implementations • 11 Apr 2022 • Katsiaryna Haitsiukevich, Alexander Ilin
A popular approach is to use Hamiltonian neural networks (HNNs), which rely on the assumption that a conservative system is described by Hamilton's equations of motion.
no code implementations • 13 Dec 2021 • Katsiaryna Haitsiukevich, Samuli Bergman, Cesar de Araujo Filho, Francesco Corona, Alexander Ilin
We propose a grid-like computational model of tubular reactors.
no code implementations • 8 Nov 2021 • Arturs Polis, Alexander Ilin
We show that a deep learning model with built-in relational inductive bias can bring benefits to sample-efficient learning, without relying on extensive data augmentation.
no code implementations • 4 Oct 2021 • Antti Keurulainen, Isak Westerlund, Samuel Kaski, Alexander Ilin
On the other hand, offline data about the behavior of the assisted agent might be available, but it is non-trivial to take advantage of it with methods such as offline reinforcement learning.
no code implementations • 4 Oct 2021 • Antti Keurulainen, Isak Westerlund, Ariel Kwiatkowski, Samuel Kaski, Alexander Ilin
We suggest a method in which we synthetically produce populations of agents with different behavioural patterns, together with ground-truth data of their behaviour, and use this data to train a meta-learner.
no code implementations • 4 Oct 2021 • Kalle Kujanpää, Willie Victor, Alexander Ilin
AI-based defensive solutions are necessary to defend networks and information assets against intelligent automated attacks.
no code implementations • 31 Aug 2021 • Yogesh Kumar, Alexander Ilin, Henri Salo, Sangita Kulathinal, Maarit K. Leinonen, Pekka Marttinen
Despite the proven effectiveness of Transformer neural networks across multiple domains, their performance with Electronic Health Records (EHR) can be nuanced.
2 code implementations • 15 Jun 2021 • Rinu Boney, Alexander Ilin, Juho Kannala
In many control problems that include vision, optimal controls can be inferred from the location of the objects in the scene.
1 code implementation • 22 Dec 2020 • Rinu Boney, Alexander Ilin, Juho Kannala, Jarno Seppänen
We experimentally show that planning with naive Monte Carlo tree search does not perform very well in large combinatorial action spaces.
no code implementations • 12 Oct 2019 • Rinu Boney, Juho Kannala, Alexander Ilin
Model-based reinforcement learning could enable sample-efficient learning by quickly acquiring rich knowledge about the world and using it to improve behaviour without additional data.
no code implementations • NeurIPS 2019 • Rinu Boney, Norman Di Palo, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, Harri Valpola
Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning.
no code implementations • 29 Nov 2017 • Rinu Boney, Alexander Ilin
We consider the problem of semi-supervised few-shot classification where a classifier needs to adapt to new tasks using a few labeled examples and (potentially many) unlabeled examples.
no code implementations • NeurIPS 2017 • Isabeau Prémont-Schwarz, Alexander Ilin, Tele Hotloo Hao, Antti Rasmus, Rinu Boney, Harri Valpola
We propose a recurrent extension of the Ladder networks whose structure is motivated by the inference required in hierarchical latent variable models.
no code implementations • 2 Oct 2014 • Jaakko Luttinen, Tapani Raiko, Alexander Ilin
The time dependency is obtained by forming the state dynamics matrix as a time-varying linear combination of a set of matrices.
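The construction described above can be sketched in a few lines: the dynamics matrix A(t) is built as a weighted sum of a fixed set of basis matrices, with weights that change over time. This is a minimal illustration of the idea only, not the authors' implementation; the basis matrices `B`, the sinusoidal weight schedule, and all dimensions are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 3, 2, 5                          # state dim, number of basis matrices, time steps
B = 0.1 * rng.standard_normal((K, d, d))   # fixed set of basis matrices B_k
x = rng.standard_normal(d)                 # initial state

for t in range(T):
    # Time-varying mixing weights w_k(t) (here an arbitrary sinusoidal schedule)
    w = np.array([np.sin(t / T), np.cos(t / T)])
    # State dynamics matrix as a time-varying linear combination:
    # A(t) = sum_k w_k(t) * B_k
    A_t = np.tensordot(w, B, axes=1)
    x = A_t @ x                            # linear state transition x_{t+1} = A(t) x_t
```

Because only the mixing weights vary with time, the model stays linear in the state at every step while still capturing non-stationary dynamics.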
no code implementations • NeurIPS 2009 • Jaakko Luttinen, Alexander Ilin
We present a probabilistic latent factor model which can be used for studying spatio-temporal datasets.