Search Results for author: Harshala Gammulle

Found 13 papers, 0 papers with code

Towards On-Board Panoptic Segmentation of Multispectral Satellite Images

no code implementations · 5 Apr 2022 · Tharindu Fernando, Clinton Fookes, Harshala Gammulle, Simon Denman, Sridha Sridharan

To address this challenge, we propose a multimodal teacher network based on a cross-modality attention-based fusion strategy, which improves segmentation accuracy by exploiting data from multiple modalities.

Knowledge Distillation · Panoptic Segmentation
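The abstract mentions a cross-modality attention-based fusion strategy but gives no architectural details, so the following is only a minimal numpy sketch of one common form of such fusion: features from one modality attend over features from another, and the attended summary is merged back in. All names and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_attention(query_feats, context_feats):
    """Fuse two modalities by scaled dot-product attention.

    query_feats:   (N, d) features from modality A (e.g. visible bands)
    context_feats: (M, d) features from modality B (e.g. infrared bands)
    Returns (N, d): each modality-A vector augmented with an
    attention-weighted summary of modality B (residual-style fusion).
    """
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (N, M)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    attended = weights @ context_feats                   # (N, d)
    return query_feats + attended

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 64))   # stand-in modality-A features
ir = rng.standard_normal((16, 64))    # stand-in modality-B features
fused = cross_modality_attention(rgb, ir)
assert fused.shape == (16, 64)
```

In a teacher-student setup like the one described, the fused features would feed the teacher's segmentation head, with knowledge distillation transferring the result to a lighter on-board student.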

Continuous Human Action Recognition for Human-Machine Interaction: A Review

no code implementations · 26 Feb 2022 · Harshala Gammulle, David Ahmedt-Aristizabal, Simon Denman, Lachlan Tychsen-Smith, Lars Petersson, Clinton Fookes

With advances in data-driven machine learning research, a wide variety of prediction models have been proposed to capture spatio-temporal features for the analysis of video streams.

Action Recognition · Action Segmentation +1

Deep Learning for Medical Anomaly Detection -- A Survey

no code implementations · 4 Dec 2020 · Tharindu Fernando, Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

Machine learning-based medical anomaly detection is an important problem that has been extensively studied.

Anomaly Detection

Multi-modal Fusion for Single-Stage Continuous Gesture Recognition

no code implementations · 10 Nov 2020 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

Gesture recognition is a much-studied research area with myriad real-world applications, including robotics and human-machine interaction.

Gesture Recognition

Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis

no code implementations · 12 Jul 2020 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

Automating the analysis of imagery of the Gastrointestinal (GI) tract captured during endoscopy procedures has substantial potential benefits for patients, as it can provide diagnostic support to medical practitioners and reduce human error.

Hierarchical Attention Network for Action Segmentation

no code implementations · 7 May 2020 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

The temporal segmentation of events is an essential task and a precursor for the automatic recognition of human actions in video.

Action Segmentation · Frame

Predicting the Future: A Jointly Learnt Model for Action Anticipation

no code implementations ICCV 2019 Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

Inspired by human neurological structures for action anticipation, we present an action anticipation model that enables the prediction of plausible future actions by forecasting both the visual and temporal future.

Action Anticipation

Fine-grained Action Segmentation using the Semi-Supervised Action GAN

no code implementations · 20 Sep 2019 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

In this paper we address the problem of continuous fine-grained action segmentation, in which multiple actions are present in an unsegmented video stream.

Action Classification · Action Segmentation

Forecasting Future Action Sequences with Neural Memory Networks

no code implementations · 20 Sep 2019 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

We propose a novel neural memory network based framework for future action sequence forecasting.
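The abstract names a neural memory network as the core of the forecasting framework but gives no internals, so here is a minimal numpy sketch of one standard ingredient of such models: an external memory addressed by content-based attention, with soft reads and writes. The class name, slot count, and update rule are illustrative assumptions, not the paper's design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class NeuralMemory:
    """Toy content-addressable memory: soft attention over slots for
    reads, and a blended write toward the new value (illustrative)."""

    def __init__(self, slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.M = rng.standard_normal((slots, dim)) * 0.1  # memory matrix

    def read(self, query):
        w = softmax(self.M @ query)   # attention weights over slots
        return w @ self.M, w          # weighted summary of memory

    def write(self, key, value, lr=0.5):
        w = softmax(self.M @ key)
        # Move the addressed slots toward the stored value.
        self.M += lr * np.outer(w, value - w @ self.M)

mem = NeuralMemory(slots=8, dim=16)
obs = np.ones(16)                     # stand-in encoded observation
mem.write(obs, obs)                   # store the observation
summary, weights = mem.read(obs)      # recall for the next prediction
assert summary.shape == (16,)
```

In a sequence-forecasting setting, each observed action embedding would be written into the memory, and reads conditioned on the current state would inform the predicted future action sequence.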

Multi-Level Sequence GAN for Group Activity Recognition

no code implementations · 18 Dec 2018 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

The generator is fed with person-level and scene-level features that are mapped temporally through LSTM networks.

Action Classification · Activity Prediction +2

Two Stream LSTM: A Deep Fusion Framework for Human Action Recognition

no code implementations · 4 Apr 2017 · Harshala Gammulle, Simon Denman, Sridha Sridharan, Clinton Fookes

Our contribution in this paper is a deep fusion framework that more effectively combines spatial features from CNNs with temporal features from LSTM models.

Action Recognition
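The abstract describes fusing CNN spatial features with LSTM temporal features but not the exact mechanism, so below is a minimal numpy sketch of the general two-stream late-fusion idea: an LSTM summarises per-frame features into a temporal descriptor, which is concatenated with a spatial descriptor. The weight shapes and the concatenation step are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_hidden(frames, Wx, Wh, b):
    """Run a single-layer LSTM over per-frame feature vectors and
    return the final hidden state (the temporal stream's summary).
    Wx: (4H, d), Wh: (4H, H), b: (4H,), gate order i|f|o|g."""
    H = Wh.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in frames:
        z = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

def two_stream_fuse(spatial_feat, frame_feats, Wx, Wh, b):
    """Late fusion: concatenate the CNN-style spatial descriptor with
    the LSTM temporal summary before a downstream classifier."""
    temporal_feat = lstm_last_hidden(frame_feats, Wx, Wh, b)
    return np.concatenate([spatial_feat, temporal_feat])

rng = np.random.default_rng(0)
d, H, T = 32, 16, 10
Wx = rng.standard_normal((4 * H, d)) * 0.1
Wh = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
spatial = rng.standard_normal(64)      # stand-in CNN spatial feature
frames = rng.standard_normal((T, d))   # stand-in per-frame features
fused = two_stream_fuse(spatial, frames, Wx, Wh, b)
assert fused.shape == (64 + H,)
```

A softmax classifier over `fused` would then produce per-video action scores; the fusion point and classifier are the parts a full implementation of the paper would specify.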
