no code implementations • 22 Mar 2023 • Nicholas I-Hsien Kuo, Louisa Jorm, Sebastiano Barbieri
This paper presents a novel approach to simulating electronic health records (EHRs) using diffusion probabilistic models (DPMs).
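The forward (noising) process at the heart of a DPM can be sketched in a few lines. This is a generic illustration of the closed-form sample from q(x_t | x_0), not the paper's implementation; the 8-dimensional feature vector standing in for an EHR record is purely hypothetical.

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a diffusion probabilistic model.

    x0    : clean record (array), e.g. normalised vital-sign features
    t     : diffusion timestep (0-indexed)
    betas : noise schedule, shape (T,)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)           # cumulative product \bar{alpha}_t
    noise = rng.standard_normal(x0.shape)    # epsilon ~ N(0, I)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # a common linear schedule
x0 = rng.standard_normal(8)                  # toy stand-in for an EHR feature vector
xt, eps = forward_diffusion(x0, t=500, betas=betas, rng=rng)
```

A trained DPM would learn to predict `eps` from `xt` and `t`, then generate synthetic records by reversing this process from pure noise.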
1 code implementation • 18 Aug 2022 • Nicholas I-Hsien Kuo, Federico Garcia, Anders Sönnerborg, Maurizio Zazzi, Michael Böhm, Rolf Kaiser, Mark Polizzotto, Louisa Jorm, Sebastiano Barbieri
Clinical data usually cannot be freely distributed due to their highly confidential nature, and this hampers the development of machine learning in the healthcare domain.
1 code implementation • 12 Mar 2022 • Nicholas I-Hsien Kuo, Mark N. Polizzotto, Simon Finfer, Federico Garcia, Anders Sönnerborg, Maurizio Zazzi, Michael Böhm, Louisa Jorm, Sebastiano Barbieri
This has hampered the development of reproducible and generalisable machine learning applications in health care.
no code implementations • 7 Dec 2021 • Nicholas I-Hsien Kuo, Mark Polizzotto, Simon Finfer, Louisa Jorm, Sebastiano Barbieri
These two synthetic datasets comprise vital signs, laboratory test results, administered fluid boluses and vasopressors for 3,910 patients with acute hypotension and for 2,164 patients with sepsis in the Intensive Care Unit (ICU).
no code implementations • 29 Sep 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
Neural networks usually excel in learning a single task.
1 code implementation • 6 Mar 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Neural networks suffer from catastrophic forgetting and are unable to sequentially learn new tasks without guaranteed stationarity in data distribution.
no code implementations • 1 Jan 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve previously learned tasks.
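The phenomenon can be reproduced in a toy experiment (an illustrative sketch, not from the paper): a logistic-regression "network" is trained on task A, then training continues on a conflicting task B, after which accuracy on task A collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(teacher, n=2000):
    X = rng.standard_normal((n, 2))
    y = (X @ teacher > 0).astype(float)   # labels from a linear "teacher"
    return X, y

def train(w, X, y, lr=0.5, steps=300):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))    # logistic regression, full-batch GD
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0.5)))

# Two tasks whose optimal weight vectors point 120 degrees apart.
teacher_a = np.array([1.0, 0.0])
theta = 2 * np.pi / 3
teacher_b = np.array([np.cos(theta), np.sin(theta)])

Xa, ya = make_task(teacher_a)
Xb, yb = make_task(teacher_b)

w = np.zeros(2)
w = train(w, Xa, ya)
acc_a_before = accuracy(w, Xa, ya)        # near-perfect after learning task A

w = train(w, Xb, yb)                      # continue training on task B only
acc_a_after = accuracy(w, Xa, ya)         # drops sharply: task A is "forgotten"
```

Because nothing anchors the weights learned for task A, gradient descent on task B simply overwrites them, which is exactly the failure mode continual-learning methods try to prevent.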
1 code implementation • 18 Jul 2020 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.
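A minimal structural sketch of this setup (hypothetical, not the paper's method): the meta-learner below is a tiny shared network mapping each base-learner gradient coordinate to a parameter update. Here it is merely initialised by hand to mimic a small gradient step; in actual L2L the meta-learner's own weights (often an RNN's) would be trained across many tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meta-learner: a tiny elementwise network g_phi(grad) -> update.
# Initialised so that near zero it behaves like "update = -0.1 * grad".
V1 = 0.5 * np.ones((4, 1))
V2 = -0.05 * np.ones((1, 4))

def meta_update(grad):
    hidden = np.tanh(V1 @ grad.reshape(1, -1))   # shared across all coordinates
    return (V2 @ hidden).ravel()

# Base learner: least-squares regression on a toy task.
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)
    w = w + meta_update(grad)                    # meta-learner proposes each step
```

Training the meta-learner's parameters `V1`, `V2` by backpropagating through this inner loop is what turns the sketch into genuine learning to learn.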
no code implementations • 25 Sep 2019 • Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
This paper contrasts the two canonical recurrent neural networks (RNNs) of long short-term memory (LSTM) and gated recurrent unit (GRU) to propose our novel lightweight RNN of Extrapolated Input for Network Simplification (EINS).
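For reference, one of the two canonical cells being contrasted, the GRU in its standard formulation (Cho et al., 2014, not the EINS architecture itself), can be written in a few lines of NumPy; the dimensions here are toy values.

```python
import numpy as np

def gru_cell(x, h, params):
    """One step of a gated recurrent unit (GRU).

    x : input vector, shape (d_in,)
    h : previous hidden state, shape (d_h,)
    params : weight matrices W_* of shape (d_h, d_in), U_* of shape (d_h, d_h)
    """
    sig = lambda a: 1 / (1 + np.exp(-a))
    z = sig(params["Wz"] @ x + params["Uz"] @ h)            # update gate
    r = sig(params["Wr"] @ x + params["Ur"] @ h)            # reset gate
    h_tilde = np.tanh(params["Wh"] @ x + params["Uh"] @ (r * h))
    return (1 - z) * h + z * h_tilde                        # blend old and new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = {k: 0.1 * rng.standard_normal((d_h, d_in if k.startswith("W") else d_h))
          for k in ["Wz", "Wr", "Wh", "Uz", "Ur", "Uh"]}
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):                    # run over a toy sequence
    h = gru_cell(x, h, params)
```

The GRU's two gates (versus the LSTM's three, plus a separate cell state) are what make it the lighter of the two canonical cells, which is the design axis a simplified RNN like EINS pushes further.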