Search Results for author: Amirhossein Dadashzadeh

Found 6 papers, 5 papers with code

QAFE-Net: Quality Assessment of Facial Expressions with Landmark Heatmaps

1 code implementation • 1 Dec 2023 • Shuchao Duan, Amirhossein Dadashzadeh, Alan Whone, Majid Mirmehdi

Beyond FER, pain estimation methods assess levels of intensity in pain expressions; however, assessing the quality of all facial expressions is of critical value in health-related applications.

Action Quality Assessment Facial Expression Recognition +1

PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment

1 code implementation • 11 Nov 2023 • Amirhossein Dadashzadeh, Shuchao Duan, Alan Whone, Majid Mirmehdi

The limited availability of labelled data in Action Quality Assessment (AQA) has forced previous works to fine-tune models pretrained on large-scale domain-general datasets.

Action Quality Assessment Continual Pretraining +1

Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation

1 code implementation • 7 Dec 2021 • Amirhossein Dadashzadeh, Alan Whone, Majid Mirmehdi

Our experimental results surpass the state of the art on both the UCF101 and HMDB51 datasets when pretraining on K100, in apples-to-apples comparisons.

Auxiliary Learning Knowledge Distillation +1

Exploring Motion Boundaries in an End-to-End Network for Vision-based Parkinson's Severity Assessment

no code implementations • 17 Dec 2020 • Amirhossein Dadashzadeh, Alan Whone, Michal Rolinski, Majid Mirmehdi

We evaluate our proposed method on a dataset of 25 PD patients, obtaining 72.3% and 77.1% top-1 accuracy on hand movement and gait tasks, respectively.

HGR-Net: A Fusion Network for Hand Gesture Segmentation and Recognition

2 code implementations • 14 Jun 2018 • Amirhossein Dadashzadeh, Alireza Tavakoli Targhi, Maryam Tahmasbi, Majid Mirmehdi

We propose a two-stage convolutional neural network (CNN) architecture for robust recognition of hand gestures, called HGR-Net, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.

Hand Gesture Recognition Hand-Gesture Recognition +4
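The two-stage design described above (segment first, then recognise on the segmented region) can be sketched as a simple pipeline. This is a minimal, hedged illustration, not the actual HGR-Net implementation: the `segment_hand` and `classify_gesture` functions below are hypothetical stand-ins (a threshold and a deterministic score) for the two CNN stages, so the example runs without trained weights.

```python
import numpy as np

def segment_hand(image):
    """Stage 1 stand-in: produce a binary hand mask.

    In HGR-Net this stage is a semantic-segmentation CNN; here we
    simply threshold pixel intensity so the pipeline is runnable.
    """
    return (image > 0.5).astype(np.float32)

def classify_gesture(image, mask, num_classes=10):
    """Stage 2 stand-in: classify the gesture from the masked hand region.

    A real model would run a recognition CNN on the masked input; here
    we derive a deterministic per-class score purely for illustration.
    """
    masked = image * mask                      # keep only hand pixels
    scores = np.array([(masked.sum() * (c + 1)) % 7.0
                       for c in range(num_classes)])
    return int(scores.argmax())

def hgr_pipeline(image):
    mask = segment_hand(image)                 # stage 1: where the hand is
    return classify_gesture(image, mask)       # stage 2: which gesture it is

rng = np.random.default_rng(0)
image = rng.random((64, 64)).astype(np.float32)
label = hgr_pipeline(image)
```

The key design point is that stage 2 only sees pixels stage 1 kept, which makes recognition robust to background clutter.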
