Search Results for author: Nat Dilokthanakul

Found 9 papers, 5 with code

Deep Reinforcement Learning Models Predict Visual Responses in the Brain: A Preliminary Result

no code implementations · 18 Jun 2021 · Maytus Piriyajitakonkij, Sirawaj Itthipuripat, Theerawit Wilaiprasitporn, Nat Dilokthanakul

Supervised deep convolutional neural networks (DCNNs) are currently one of the best computational models that can explain how the primate ventral visual stream solves object recognition.

Tasks: Object Recognition · reinforcement-learning · +1

MetaSleepLearner: A Pilot Study on Fast Adaptation of Bio-signals-Based Sleep Stage Classifier to New Individual Subject Using Meta-Learning

1 code implementation · 8 Apr 2020 · Nannapas Banluesombatkul, Pichayoot Ouppaphan, Pitshaporn Leelaarporn, Payongkit Lakhan, Busarakum Chaitusaney, Nattapong Jaimchariyatam, Ekapol Chuangsuwanich, Wei Chen, Huy Phan, Nat Dilokthanakul, Theerawit Wilaiprasitporn

This is the first work to investigate a non-conventional pre-training method, MAML, for this problem; it opens the possibility of human-machine collaboration in sleep stage classification and eases the clinicians' labelling burden, since only a few epochs of a new recording need to be labelled rather than the entire recording.

Tasks: Automatic Sleep Stage Classification · Meta-Learning · +2
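The fast-adaptation idea in this entry follows MAML: pre-train across many subjects so that a few gradient steps on a handful of labelled epochs from a new subject yield a personalised classifier. Below is a minimal first-order MAML sketch; the feature dimensions, task tuples and hyperparameters are illustrative assumptions, not the authors' code.

```python
import copy
import torch
import torch.nn as nn

# Assumed per-epoch classifier: 128 bio-signal features -> 5 sleep stages.
def make_model():
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 5))

def maml_pretrain(subject_tasks, meta_lr=1e-3, inner_lr=1e-2, inner_steps=3):
    """First-order MAML sketch. Each task is (support_x, support_y, query_x, query_y)
    drawn from one training subject (all names here are assumptions)."""
    meta_model = make_model()
    meta_opt = torch.optim.Adam(meta_model.parameters(), lr=meta_lr)
    loss_fn = nn.CrossEntropyLoss()

    for support_x, support_y, query_x, query_y in subject_tasks:
        # Inner loop: adapt a copy of the meta-parameters to this subject.
        fast_model = copy.deepcopy(meta_model)
        fast_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            fast_opt.zero_grad()
            loss_fn(fast_model(support_x), support_y).backward()
            fast_opt.step()

        # Outer loop (first-order): evaluate the adapted weights on the query set
        # and apply that gradient to the meta-parameters.
        meta_opt.zero_grad()
        query_loss = loss_fn(fast_model(query_x), query_y)
        grads = torch.autograd.grad(query_loss, fast_model.parameters())
        for p, g in zip(meta_model.parameters(), grads):
            p.grad = g
        meta_opt.step()
    return meta_model
```

At deployment, the same inner loop (a few gradient steps on the few labelled epochs from the new subject) produces the adapted classifier.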

An Explicit Local and Global Representation Disentanglement Framework with Applications in Deep Clustering and Unsupervised Object Detection

1 code implementation · 24 Jan 2020 · Rujikorn Charakorn, Yuttapong Thawornwattana, Sirawaj Itthipuripat, Nick Pawlowski, Poramate Manoonpong, Nat Dilokthanakul

In this work, we propose a framework, called SPLIT, which allows us to disentangle local and global information into two separate sets of latent variables within the variational autoencoder (VAE) framework.

Tasks: Clustering · Deep Clustering · +5
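A minimal sketch of the two-latent-group idea described in the SPLIT entry: the encoder produces a "global" code and a "local" code, and the decoder combines both to reconstruct the input, with separate KL terms on each group. Layer sizes, names and the flattened-image input are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitVAE(nn.Module):
    """Illustrative VAE with two latent groups (z_global, z_local); sizes are assumed."""
    def __init__(self, x_dim=784, g_dim=8, l_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.to_global = nn.Linear(256, 2 * g_dim)   # mean and log-variance
        self.to_local = nn.Linear(256, 2 * l_dim)
        self.dec = nn.Sequential(nn.Linear(g_dim + l_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    @staticmethod
    def sample(stats):
        # Reparameterised sample plus the analytic KL to a standard normal prior.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

    def forward(self, x):
        # x: flattened images in [0, 1], shape (batch, 784).
        h = self.enc(x)
        z_g, kl_g = self.sample(self.to_global(h))
        z_l, kl_l = self.sample(self.to_local(h))
        x_hat = self.dec(torch.cat([z_g, z_l], dim=-1))
        recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction='none').sum(-1)
        return (recon + kl_g + kl_l).mean()   # negative ELBO with both KL terms
```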

Explicit Information Placement on Latent Variables using Auxiliary Generative Modelling Task

no code implementations · 27 Sep 2018 · Nat Dilokthanakul, Nick Pawlowski, Murray Shanahan

We demonstrate the use of the method in a task of disentangling global structure from local features in images.

Tasks: Clustering
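The abstract does not spell out the auxiliary generative task, so the snippet below is only one illustrative reading of "explicit information placement": add an auxiliary decoder that must reconstruct a coarse, downsampled view of the image from the global latent alone, encouraging global structure to be placed in that latent group. The target, the decoder and the image shape are assumptions, not the paper's actual objective.

```python
import torch.nn.functional as F

def auxiliary_global_loss(z_global, x, aux_decoder, size=7):
    """Assumed auxiliary task: reconstruct a downsampled version of the image
    from the global latent only (hypothetical aux_decoder maps z_global -> size*size logits)."""
    # x: images of shape (batch, 1, 28, 28) with values in [0, 1].
    coarse_target = F.adaptive_avg_pool2d(x, size).flatten(1)
    logits = aux_decoder(z_global)
    return F.binary_cross_entropy_with_logits(
        logits, coarse_target, reduction='none').sum(-1).mean()
```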

Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning

1 code implementation · 18 May 2017 · Nat Dilokthanakul, Christos Kaplanis, Nick Pawlowski, Murray Shanahan

We highlight the advantage of our approach in one of the hardest games -- Montezuma's Revenge -- for which the ability to handle sparse rewards is key.

Tasks: Hierarchical Reinforcement Learning · Montezuma's Revenge · +2
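The feature-control idea rewards a low-level policy for changing particular learned features of the state, giving a dense intrinsic signal even when the extrinsic game reward is sparse. A minimal sketch of such an intrinsic bonus is below; the feature extractor `phi`, the indexing scheme and the scaling are assumptions for illustration.

```python
import torch

def intrinsic_reward(phi, obs, next_obs, feature_idx, scale=1.0):
    """Assumed feature-control bonus: reward the agent for changing one
    component of a learned feature vector phi(obs)."""
    with torch.no_grad():
        delta = phi(next_obs)[..., feature_idx] - phi(obs)[..., feature_idx]
    return scale * delta.abs()

# In a hierarchical setup, a meta-controller would pick feature_idx (the subgoal),
# and the worker policy would be trained on this bonus plus any extrinsic reward.
```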

Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders

3 code implementations · 8 Nov 2016 · Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, Murray Shanahan

We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models.

Tasks: Clustering · Human Pose Forecasting
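The GMVAE replaces the usual standard-normal prior with a mixture of Gaussians, so each mixture component can act as a cluster. A minimal sketch of a learnable mixture prior, ancestral sampling from it, and cluster assignment via component responsibilities is below; the dimensions and parameterisation are illustrative assumptions rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class GaussianMixturePrior(nn.Module):
    """Illustrative learnable prior p(z) = sum_k pi_k N(mu_k, sigma_k^2)."""
    def __init__(self, n_components=10, z_dim=16):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_components))          # mixture weights
        self.mu = nn.Parameter(torch.randn(n_components, z_dim) * 0.1)
        self.logvar = nn.Parameter(torch.zeros(n_components, z_dim))

    def sample(self, n):
        # Ancestral sampling: pick a component, then draw z from that Gaussian.
        k = torch.distributions.Categorical(logits=self.logits).sample((n,))
        eps = torch.randn(n, self.mu.size(1))
        return self.mu[k] + eps * torch.exp(0.5 * self.logvar[k])

    def cluster_assign(self, z):
        # Responsibility of each component for a latent code -> unsupervised cluster label.
        log_pi = torch.log_softmax(self.logits, dim=0)
        dist = torch.distributions.Normal(self.mu, torch.exp(0.5 * self.logvar))
        log_p = dist.log_prob(z.unsqueeze(1)).sum(-1) + log_pi        # (batch, K)
        return log_p.argmax(dim=1)
```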

Classifying Options for Deep Reinforcement Learning

no code implementations · 27 Apr 2016 · Kai Arulkumaran, Nat Dilokthanakul, Murray Shanahan, Anil Anthony Bharath

In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options.

Tasks: Hierarchical Reinforcement Learning · reinforcement-learning · +1
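A minimal sketch of the "option heads" idea from this entry: a shared torso feeds several Q-value heads (one per option) plus a supervisory head that scores the options and decides which head to act with. Layer sizes and the greedy selection rule are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class OptionHeadDQN(nn.Module):
    """Illustrative DQN with one Q-head per option and a supervisory head."""
    def __init__(self, obs_dim=64, n_actions=6, n_options=4):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.option_heads = nn.ModuleList(
            [nn.Linear(128, n_actions) for _ in range(n_options)])
        self.supervisor = nn.Linear(128, n_options)    # scores each option

    def act(self, obs):
        # obs: (batch, obs_dim). Returns a greedy action and the chosen option id.
        h = self.torso(obs)
        option = self.supervisor(h).argmax(dim=-1)
        q_values = torch.stack([head(h) for head in self.option_heads], dim=1)
        chosen_q = q_values[torch.arange(obs.size(0)), option]       # Q-values of chosen head
        return chosen_q.argmax(dim=-1), option
```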
