Search Results for author: Paal Engelstad

Found 9 papers, 3 papers with code

A Manifold Representation of the Key in Vision Transformers

no code implementations · 1 Feb 2024 · Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

The query, key, and value are often intertwined and generated within attention blocks via a single, shared linear transformation (a minimal sketch of this shared projection follows this entry).

Instance Segmentation · object-detection · +2
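
The shared projection the excerpt above refers to can be illustrated with a minimal PyTorch sketch (illustrative names, not the paper's code): a single linear layer emits the query, key, and value together, which is the intertwining the paper contrasts with giving the key its own manifold representation.

```python
import torch
import torch.nn as nn

class FusedQKV(nn.Module):
    """Minimal sketch of a standard ViT attention block's input projection:
    one shared linear layer generates Q, K and V together (illustrative,
    not the paper's code)."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)  # single, shared transformation

    def forward(self, x: torch.Tensor):
        # x: (batch, tokens, dim) -> three (batch, tokens, dim) tensors
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        return q, k, v
```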

State Representation Learning Using an Unbalanced Atlas

no code implementations · 17 May 2023 · Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

The manifold hypothesis posits that high-dimensional data often lies on a lower-dimensional manifold and that utilizing this manifold as the target space yields more efficient representations (a toy illustration follows this entry).

Dimensionality Reduction · Representation Learning · +1
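
As a toy illustration of the hypothesis (generic dimensionality reduction, not the paper's unbalanced-atlas method), an encoder can map high-dimensional inputs onto a much lower-dimensional target space:

```python
import torch
import torch.nn as nn

# Generic illustration only; the paper's unbalanced-atlas method is not
# reproduced here. An encoder maps 784-dim inputs to an 8-dim target space.
encoder = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 8),
)

x = torch.randn(32, 784)  # batch of high-dimensional points
z = encoder(x)            # low-dimensional representations, shape (32, 8)
```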

It is all Connected: A New Graph Formulation for Spatio-Temporal Forecasting

no code implementations · 23 Mar 2023 · Lars Ødegaard Bentsen, Narada Dilp Warakagoda, Roy Stenbro, Paal Engelstad

With an ever-increasing number of sensors in modern society, spatio-temporal time series forecasting has become a de facto tool to make informed decisions about the future.

Imputation · Irregular Time Series · +3

Unsupervised Representation Learning in Partially Observable Atari Games

1 code implementation · 13 Mar 2023 · Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

Contrastive methods have performed better than generative models in previous state representation learning research (a minimal contrastive-loss sketch follows this entry).

Atari Games · Representation Learning
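
For context on what "contrastive methods" means here, a standard InfoNCE-style loss can be sketched as follows (a common formulation, not necessarily the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """Standard InfoNCE-style contrastive loss (illustrative; not
    necessarily the exact objective used in the paper).

    z1, z2: (batch, dim) embeddings of two views of the same states;
    matching rows are positives, all other rows are negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```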

Spatio-Temporal Wind Speed Forecasting using Graph Networks and Novel Transformer Architectures

1 code implementation · 29 Aug 2022 · Lars Ødegaard Bentsen, Narada Dilp Warakagoda, Roy Stenbro, Paal Engelstad

Various alterations have been proposed to better facilitate time series forecasting, of which this study focused on the Informer, LogSparse Transformer and Autoformer.

Multivariate Time Series Forecasting · Spatio-Temporal Forecasting · +2

Deep Reinforcement Learning with Swin Transformers

1 code implementation · 30 Jun 2022 · Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

Transformers are neural network models that utilize multiple layers of self-attention heads and have exhibited enormous potential in natural language processing tasks (a generic encoder sketch follows this entry).

Atari Games · reinforcement-learning · +1
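
The "multiple layers of self-attention heads" described above can be sketched with PyTorch's built-in modules; note this is a generic transformer encoder, not the Swin Transformer blocks with windowed attention that the paper actually uses.

```python
import torch
import torch.nn as nn

# Generic sketch of stacked multi-head self-attention layers; the paper's
# Swin-based windowed attention is not reproduced here.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=3)

tokens = torch.randn(8, 49, 64)   # e.g. a 7x7 grid of state patches
out = encoder(tokens)             # (8, 49, 64) contextualized features
```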

Improving the Diversity of Bootstrapped DQN by Replacing Priors With Noise

no code implementations · 2 Mar 2022 · Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

In this article, we further explore replacing priors with noise sampled from a Gaussian distribution to introduce more diversity into this algorithm (a schematic sketch follows this entry).

Atari Games · Q-Learning
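
A schematic reading of that replacement (hypothetical names and noise placement, not the paper's code): each bootstrapped Q-head adds Gaussian noise where a randomized-prior variant would add the output of a fixed prior network.

```python
import torch
import torch.nn as nn

class NoisyBootstrappedHead(nn.Module):
    """Schematic sketch (hypothetical, not the paper's code): a bootstrapped
    Q-head whose output is perturbed by Gaussian noise, standing in for the
    fixed random prior network of randomized-prior Bootstrapped DQN."""

    def __init__(self, feat_dim: int, n_actions: int, sigma: float = 0.1):
        super().__init__()
        self.q = nn.Linear(feat_dim, n_actions)
        self.sigma = sigma

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        q_values = self.q(features)
        # Noise sampled from a Gaussian adds diversity across heads,
        # replacing the additive prior-network term.
        return q_values + self.sigma * torch.randn_like(q_values)
```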

Wind Park Power Prediction: Attention-Based Graph Networks and Deep Learning to Capture Wake Losses

no code implementations · 10 Jan 2022 · Lars Ødegaard Bentsen, Narada Dilp Warakagoda, Roy Stenbro, Paal Engelstad

With the increased penetration of wind energy into the power grid, it has become increasingly important to be able to predict the expected power production for larger wind farms.

Graph Attention · Physical Intuition

Expert Q-learning: Deep Reinforcement Learning with Coarse State Values from Offline Expert Examples

no code implementations · 28 Jun 2021 · Li Meng, Anis Yazidi, Morten Goodwin, Paal Engelstad

Using the board game Othello, we compare our algorithm with the baseline Q-learning algorithm, which is a combination of Double Q-learning and Dueling Q-learning (both sketched below).

Imitation Learning · Q-Learning · +2
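
Both components of that baseline are standard DQN extensions; here is a minimal sketch of each (textbook formulations, not the paper's implementation). The dueling head decomposes Q into a state value and mean-centered advantages, while the double-Q target uses the online network to select the next action and the target network to evaluate it.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Standard dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, feat_dim: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)
        self.advantage = nn.Linear(feat_dim, n_actions)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v, a = self.value(features), self.advantage(features)
        return v + a - a.mean(dim=-1, keepdim=True)

def double_q_target(online, target, next_feats, rewards, dones, gamma=0.99):
    # Double Q-learning: the online net selects the greedy action,
    # the target net evaluates it; dones is a 0/1 float mask.
    best_actions = online(next_feats).argmax(dim=-1, keepdim=True)
    next_q = target(next_feats).gather(-1, best_actions).squeeze(-1)
    return rewards + gamma * (1.0 - dones) * next_q
```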
