Search Results for author: Ian Craddock

Found 11 papers, 4 papers with code

Multimodal Indoor Localisation in Parkinson's Disease for Detecting Medication Use: Observational Pilot Study in a Free-Living Setting

1 code implementation • 3 Aug 2023 • Ferdian Jovan, Catherine Morgan, Ryan McConville, Emma L. Tonkin, Ian Craddock, Alan Whone

A sub-objective aims to evaluate whether indoor localisation, including its in-home gait-speed features (i.e. the time taken to walk between rooms), could be used to evaluate motor fluctuations by detecting whether the person with PD is taking levodopa medications or withholding them.

Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing Things

no code implementations • 14 Jul 2022 • Alessandro Masullo, Toby Perrett, Tilo Burghardt, Ian Craddock, Dima Damen, Majid Mirmehdi

We propose a novel approach to multimodal sensor fusion for Ambient Assisted Living (AAL) which takes advantage of learning using privileged information (LUPI).

Hallucination • Sensor Fusion
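The LUPI idea in the entry above can be sketched with a modality-hallucination setup: a privileged modality (e.g. video features) is available only at training time, so a model learns to "hallucinate" it from the always-available inertial features. This is a minimal illustration, not the paper's actual architecture; the synthetic data and all names are assumptions.

```python
# Hedged LUPI sketch: hallucinate a privileged modality from inertial features.
# Synthetic data only; the paper's real models are deep networks, not these stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
n = 800
y = rng.integers(0, 2, n)
X_inertial = y[:, None] + rng.standard_normal((n, 6))      # always available
X_video = y[:, None] + 0.3 * rng.standard_normal((n, 4))   # privileged: train-time only

# Train a hallucination model mapping inertial features -> privileged features.
halluc = Ridge().fit(X_inertial, X_video)

# Train the classifier on fused (inertial + hallucinated) features.
X_fused = np.hstack([X_inertial, halluc.predict(X_inertial)])
clf = LogisticRegression().fit(X_fused, y)

# At test time only inertial data exists; hallucinate the missing modality.
X_test = y[:, None] + rng.standard_normal((n, 6))
preds = clf.predict(np.hstack([X_test, halluc.predict(X_test)]))
```

The point of the design is that the fused classifier never needs the privileged sensor at deployment time, which matches the Ambient Assisted Living setting where cameras may be absent or undesirable.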

Multimodal Indoor Localisation for Measuring Mobility in Parkinson's Disease using Transformers

no code implementations • 12 May 2022 • Ferdian Jovan, Ryan McConville, Catherine Morgan, Emma Tonkin, Alan Whone, Ian Craddock

We use data collected from 10 people with Parkinson's, and 10 controls, each of whom lived for five days in a smart home with various sensors.

Wi-Fi Based Passive Human Motion Sensing for In-Home Healthcare Applications

no code implementations • 13 Apr 2022 • Bo Tan, Alison Burrows, Robert Piechocki, Ian Craddock, Karl Woodbridge, Kevin Chetty

The experiment results offer potential for promising healthcare applications using Wi-Fi passive sensing in the home to monitor daily activities, to gather health data and detect emergency situations.

OPERAnet: A Multimodal Activity Recognition Dataset Acquired from Radio Frequency and Vision-based Sensors

1 code implementation • 8 Oct 2021 • Mohammud J. Bocus, Wenda Li, Shelly Vishwakarma, Roget Kou, Chong Tang, Karl Woodbridge, Ian Craddock, Ryan McConville, Raul Santos-Rodriguez, Kevin Chetty, Robert Piechocki

This dataset can be exploited to advance WiFi and vision-based HAR, for example, using pattern recognition, skeletal representation, deep learning algorithms or other novel approaches to accurately recognize human activities.

Human Activity Recognition • Multimodal Activity Recognition

N2D: (Not Too) Deep Clustering via Clustering the Local Manifold of an Autoencoded Embedding

5 code implementations • 16 Aug 2019 • Ryan McConville, Raul Santos-Rodriguez, Robert J. Piechocki, Ian Craddock

We study a number of local and global manifold learning methods on both the raw data and the autoencoded embedding, concluding that UMAP in our framework is best able to find the most clusterable manifold in the embedding. This suggests that local manifold learning on an autoencoded embedding is effective for discovering higher quality clusters.

Clustering • Deep Clustering • +4
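The N2D pipeline described above (autoencode, then learn the local manifold of the embedding, then cluster shallowly) can be sketched with scikit-learn stand-ins. The paper uses UMAP for the manifold step; Isomap is substituted here only to keep the sketch dependency-free, and the bottleneck size and cluster count are illustrative choices.

```python
# Hedged N2D-style sketch: autoencoder embedding -> manifold learning -> GMM.
# Stand-ins: MLPRegressor as a one-hidden-layer autoencoder, Isomap in place
# of UMAP (the paper's choice, available via the separate umap-learn package).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

# 1. Autoencoder: train X -> X through a small bottleneck hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X, X)

# 2. Take the bottleneck activations as the embedding (ReLU is the default).
Z = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# 3. Manifold learning on the embedding (UMAP in the paper; Isomap here).
M = Isomap(n_components=2).fit_transform(Z)

# 4. Shallow clustering on the learned manifold.
labels = GaussianMixture(n_components=10, random_state=0).fit_predict(M)
```

The design choice the paper argues for is that the clustering itself can stay "not too deep" (a plain Gaussian mixture) because the manifold step has already exposed the cluster structure of the embedding.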

Probabilistic Sensor Fusion for Ambient Assisted Living

no code implementations • 4 Feb 2017 • Tom Diethe, Niall Twomey, Meelis Kull, Peter Flach, Ian Craddock

There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home.

Activity Recognition • Sensor Fusion

Calorie Counter: RGB-Depth Visual Estimation of Energy Expenditure at Home

no code implementations • 27 Jul 2016 • Lili Tao, Tilo Burghardt, Majid Mirmehdi, Dima Damen, Ashley Cooper, Sion Hannuna, Massimo Camplani, Adeline Paiement, Ian Craddock

We present a new framework for vision-based estimation of calorific expenditure from RGB-D data, the first that is validated on physical gas-exchange measurements and applied to daily-living scenarios.
