Search Results for author: Nicholas I-Hsien Kuo

Found 9 papers, 4 papers with code

Synthetic Health-related Longitudinal Data with Mixed-type Variables Generated using Diffusion Models

no code implementations • 22 Mar 2023 • Nicholas I-Hsien Kuo, Louisa Jorm, Sebastiano Barbieri

This paper presents a novel approach to simulating electronic health records (EHRs) using diffusion probabilistic models (DPMs).

Reinforcement Learning (RL)
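As background for the entry above: DPMs generate data by learning to reverse a fixed noising process. Below is a minimal sketch of only that closed-form forward (noising) step; the linear schedule, step count, and the toy "record" are illustrative assumptions, not the paper's implementation. A trained network would then learn the reverse, denoising direction.

```python
import numpy as np

# Minimal sketch of the forward (noising) process behind diffusion
# probabilistic models (DPMs). Illustrative only: the schedule, step
# count, and toy "record" are assumptions, not the paper's model.

T = 1000                                # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alpha_bars = np.cumprod(1.0 - betas)    # cumulative signal retention

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# A toy stand-in for one standardised numeric patient record.
x0 = np.array([0.5, -1.2, 0.3, 2.0])
print(q_sample(x0, t=10))     # mildly noised
print(q_sample(x0, t=T - 1))  # near-pure Gaussian noise
```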

Synthetic Acute Hypotension and Sepsis Datasets Based on MIMIC-III and Published as Part of the Health Gym Project

no code implementations • 7 Dec 2021 • Nicholas I-Hsien Kuo, Mark Polizzotto, Simon Finfer, Louisa Jorm, Sebastiano Barbieri

These two synthetic datasets comprise vital signs, laboratory test results, administered fluid boluses and vasopressors for 3,910 patients with acute hypotension and for 2,164 patients with sepsis in the Intensive Care Unit (ICU).

Reinforcement Learning (RL)

Learning to Continually Learn Rapidly from Few and Noisy Data

1 code implementation • 6 Mar 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Neural networks suffer from catastrophic forgetting and are unable to sequentially learn new tasks without guaranteed stationarity in the data distribution.

Continual Learning • Meta-Learning
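As a toy illustration of the forgetting problem this entry addresses: fit a model on one task, then a second, and the first task's error climbs. Everything below (tasks, model size, hyperparameters) is an arbitrary assumption for demonstration, not the paper's method.

```python
import torch
from torch import nn

torch.manual_seed(0)

model = nn.Linear(1, 1)                       # deliberately tiny learner
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 64).unsqueeze(1)
task_a = (x, 2.0 * x)                         # task A: y = 2x
task_b = (x, -3.0 * x)                        # task B: y = -3x

def fit(inputs, targets, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        opt.step()

fit(*task_a)
print("task A loss after A:", loss_fn(model(x), task_a[1]).item())  # near zero
fit(*task_b)                                  # no replay, no regularisation
print("task A loss after B:", loss_fn(model(x), task_a[1]).item())  # large
```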

Highway-Connection Classifier Networks for Plastic yet Stable Continual Learning

no code implementations • 1 Jan 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve previously learned tasks.

Continual Learning

MTL2L: A Context Aware Neural Optimiser

1 code implementation • 18 Jul 2020 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.

Multi-Task Learning
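To make the L2L setup above concrete, here is a minimal, hypothetical sketch of a coordinate-wise learned optimiser: a meta-network maps each base-learner gradient to an update. It shows only the update pathway; MTL2L itself is a context-aware neural optimiser, and in practice the meta-learner is itself trained (e.g., by backpropagating through unrolled base-learner steps), whereas the one below is left untrained.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical coordinate-wise meta-learner: gradient in, update out.
meta = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
base = nn.Linear(2, 1)                        # base learner, toy regression

x = torch.randn(32, 2)
y = x.sum(dim=1, keepdim=True)                # toy target: y = x1 + x2
loss_fn = nn.MSELoss()

for step in range(100):
    loss = loss_fn(base(x), y)
    grads = torch.autograd.grad(loss, tuple(base.parameters()))
    with torch.no_grad():
        for p, g in zip(base.parameters(), grads):
            # The meta-learner replaces the hand-crafted SGD update rule.
            update = meta(g.reshape(-1, 1)).reshape(p.shape)
            p -= 0.01 * update

# With an untrained meta-learner the loss need not decrease; training
# `meta` across tasks is what gives L2L its power.
print("final base loss:", loss_fn(base(x), y).item())
```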

EINS: Long Short-Term Memory with Extrapolated Input Network Simplification

no code implementations • 25 Sep 2019 • Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen

This paper contrasts the two canonical recurrent neural networks (RNNs), long short-term memory (LSTM) and the gated recurrent unit (GRU), to propose our novel lightweight RNN, Extrapolated Input for Network Simplification (EINS).

Image Generation • Imputation • +2
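For context on the LSTM/GRU contrast above, a quick hedged comparison of the two canonical gated cells in PyTorch; the layer sizes are arbitrary assumptions. The GRU's three gate blocks (versus the LSTM's four) give it roughly 25% fewer parameters at the same width, the kind of overhead a lighter-weight RNN aims to reduce further.

```python
import torch
from torch import nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

lstm = nn.LSTM(input_size=32, hidden_size=64)   # sizes are assumptions
gru = nn.GRU(input_size=32, hidden_size=64)

print("LSTM params:", n_params(lstm))  # 4 gate blocks
print("GRU params:", n_params(gru))    # 3 gate blocks, ~25% fewer

x = torch.randn(10, 1, 32)             # (seq_len, batch, features)
out, (h, c) = lstm(x)                  # LSTM tracks hidden AND cell state
out, h = gru(x)                        # GRU keeps a single hidden state
```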
