no code implementations • 18 Aug 2024 • Shanaka Ramesh Gunasekara, Wanqing Li, Jack Yang, Philip Ogunbona
In skeleton-based human action recognition, temporal pooling is a critical step for capturing the spatiotemporal relationships of joint dynamics.
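The temporal pooling step can be sketched as collapsing the time axis of per-frame joint features. A minimal illustration, not the paper's method; the `(T, J, C)` layout (frames, joints, channels) is an assumed convention:

```python
import numpy as np

def temporal_pool(features, mode="max"):
    """Collapse the time axis of per-frame skeleton features.

    features: array of shape (T, J, C) -- T frames, J joints,
    C channels per joint (e.g. 3-D coordinates). Layout is assumed
    for illustration.
    """
    if mode == "max":
        return features.max(axis=0)   # (J, C): strongest response per joint
    if mode == "avg":
        return features.mean(axis=0)  # (J, C): average response per joint
    raise ValueError(f"unknown mode: {mode}")

# Toy sequence: 8 frames, 25 joints, 3 coordinates each
seq = np.random.rand(8, 25, 3)
pooled = temporal_pool(seq, mode="avg")
print(pooled.shape)  # (25, 3)
```

Max and average pooling are the two classic choices; both discard temporal ordering, which is the limitation that more structured pooling schemes aim to address.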
no code implementations • 26 May 2020 • Jing Zhang, Wanqing Li, Lu Sheng, Chang Tang, Philip Ogunbona
Given an existing system learned from previous source domains, in some applications it is desirable to adapt the system to new domains without accessing data from the previous domains and without forgetting them.
1 code implementation • CVPR 2018 • Jing Zhang, Zewei Ding, Wanqing Li, Philip Ogunbona
This paper proposes an importance weighted adversarial nets-based method for unsupervised domain adaptation, specifically for partial domain adaptation, where the target domain has fewer classes than the source domain.
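The importance-weighting idea can be sketched as down-weighting source samples that a domain discriminator confidently separates from the target, since in partial domain adaptation those are likely to belong to source-only outlier classes. The weighting form `w = 1 - D(x)` normalized to mean one is an illustrative choice, not necessarily the paper's exact formula:

```python
import numpy as np

def importance_weights(d_src):
    """Down-weight source samples the domain discriminator finds easy
    to separate from the target (likely source-only outlier classes).

    d_src: discriminator's probability that each source sample is
    'source', values in (0, 1). The form w = 1 - D(x), normalized to
    mean 1, is one common heuristic, assumed here for illustration.
    """
    w = 1.0 - d_src
    return w / w.mean()

# Samples confidently flagged as source-only get near-zero weight,
# while ambiguous samples (shared classes) keep high weight.
d = np.array([0.95, 0.50, 0.55, 0.98])
print(importance_weights(d))
```

These weights would then multiply each source sample's contribution to the adversarial and classification losses, so the shared-class samples dominate adaptation.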
no code implementations • 25 Mar 2018 • Jing Zhang, Wanqing Li, Philip Ogunbona
This paper presents a novel multi-task learning-based method for unsupervised domain adaptation.
no code implementations • 17 Mar 2018 • Pichao Wang, Wanqing Li, Zhimin Gao, Chang Tang, Philip Ogunbona
This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI), for both isolated and continuous action recognition.
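Representations like DDI are built by collapsing a depth sequence into a single image that encodes its temporal evolution. A common way to sketch this is approximate rank pooling with fixed linear weights (the "dynamic image" construction of Bilen et al.); this is a simplified stand-in for the paper's pipeline, not its exact method:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a depth sequence into one image via approximate rank
    pooling: frame t in 1..T gets weight 2t - T - 1, so later frames
    dominate and the result encodes temporal evolution. A sketch of
    the idea behind Dynamic Depth Images, not the exact pipeline.

    frames: array of shape (T, H, W).
    """
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2.0 * t - T - 1                      # weights: -(T-1), ..., (T-1)
    return np.tensordot(alpha, frames, axes=1)   # weighted sum over time -> (H, W)

seq = np.random.rand(10, 64, 64)
di = dynamic_image(seq)
print(di.shape)  # (64, 64)
```

Note the weights sum to zero, so a perfectly static sequence maps to a blank image: only motion survives the pooling, which is exactly what makes such maps useful inputs to a 2-D ConvNet.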
no code implementations • 5 Dec 2017 • Pichao Wang, Wanqing Li, Jun Wan, Philip Ogunbona, Xinwang Liu
Unlike a conventional ConvNet, which learns deep separable features for homogeneous-modality classification with a single softmax loss function, the c-ConvNet enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a ranking loss and a softmax loss over both homogeneous and heterogeneous modalities.
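The joint objective can be sketched as a weighted sum of a softmax cross-entropy term and a margin-based ranking term on feature distances. All function names and the trade-off weight `lam` are hypothetical; this illustrates the general loss structure, not the c-ConvNet's exact formulation:

```python
import numpy as np

def softmax_ce(logits, label):
    """Cross-entropy of one sample's logits against its class label."""
    z = logits - logits.max()                # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def margin_ranking(anchor, pos, neg, margin=1.0):
    """Hinge loss pushing the anchor closer to a same-class sample
    (possibly from another modality) than to a different-class one."""
    d_pos = np.linalg.norm(anchor - pos)
    d_neg = np.linalg.norm(anchor - neg)
    return max(0.0, margin + d_pos - d_neg)

def joint_loss(logits, label, anchor, pos, neg, lam=0.5):
    """Weighted sum of the two objectives; lam is a hypothetical
    trade-off hyper-parameter."""
    return softmax_ce(logits, label) + lam * margin_ranking(anchor, pos, neg)
```

The ranking term is what couples the modalities: pairs drawn across depth and RGB streams share one embedding space, so minimizing it shrinks the modality gap while the softmax term keeps the classes separable.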
no code implementations • 31 Oct 2017 • Pichao Wang, Wanqing Li, Philip Ogunbona, Jun Wan, Sergio Escalera
Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data.
no code implementations • CVPR 2017 • Jing Zhang, Wanqing Li, Philip Ogunbona
This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition.
Ranked #5 on Domain Adaptation on Office-Caltech
no code implementations • 11 May 2017 • Jing Zhang, Wanqing Li, Philip Ogunbona, Dong Xu
This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition.
no code implementations • CVPR 2017 • Pichao Wang, Wanqing Li, Zhimin Gao, Yuyao Zhang, Chang Tang, Philip Ogunbona
Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition.
Ranked #3 on Hand Gesture Recognition on ChaLearn val
no code implementations • 7 Jan 2017 • Pichao Wang, Wanqing Li, Song Liu, Zhimin Gao, Chang Tang, Philip Ogunbona
This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI).
Ranked #2 on Hand Gesture Recognition on ChaLearn val
no code implementations • 22 Aug 2016 • Pichao Wang, Wanqing Li, Song Liu, Yuyao Zhang, Zhimin Gao, Philip Ogunbona
This paper addresses the problem of continuous gesture recognition from sequences of depth maps using convolutional neural networks (ConvNets).
no code implementations • 1 Apr 2016 • Lijuan Zhou, Wanqing Li, Philip Ogunbona
This paper presents a novel method for learning a pose lexicon comprising semantic poses defined by textual instructions and their associated visual poses defined by visual features.
no code implementations • 22 Feb 2016 • Song Liu, Wanqing Li, Philip Ogunbona, Yang-Wai Chow
This paper presents an extension to the KinectFusion algorithm that allows creating simplified 3D models with high-quality RGB textures.
no code implementations • IEEE Transactions on Human-Machine Systems 2016 • Pichao Wang, Wanqing Li, Zhimin Gao, Jing Zhang, Chang Tang, Philip Ogunbona
In addition, the method was evaluated on the large dataset constructed from the above datasets.
Ranked #9 on Multimodal Activity Recognition on EV-Action
no code implementations • 23 Jun 2015 • Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, Dinggang Shen
This yields two general discriminative learning frameworks for Gaussian Bayesian networks (GBNs).
no code implementations • 20 Jan 2015 • Pichao Wang, Wanqing Li, Zhimin Gao, Jing Zhang, Chang Tang, Philip Ogunbona
The results show that our approach achieves state-of-the-art results on the individual datasets without dramatic performance degradation on the Combined Dataset.
no code implementations • 14 Sep 2014 • Pichao Wang, Wanqing Li, Philip Ogunbona, Zhimin Gao, Hanling Zhang
These parts are referred to as Frequent Local Parts or FLPs.
no code implementations • CVPR 2014 • Luping Zhou, Lei Wang, Philip Ogunbona
In this paper, we propose a learning framework to effectively improve the discriminative power of SICEs by taking advantage of the samples in the opposite class.
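The intuition can be sketched as comparing per-class precision (inverse covariance) matrices and looking for edges where the classes disagree. This is a crude proxy for the paper's joint learning framework: the `ridge` stabilizer stands in for a true L1-penalized SICE solver, and `discriminative_edges` is a hypothetical helper, not the paper's criterion:

```python
import numpy as np

def precision(X, ridge=0.1):
    """Regularized inverse covariance (precision) estimate.

    ridge is an illustrative stabilizer, not the paper's sparsity
    penalty; a true SICE would use an L1-penalized solver.
    X: array of shape (n_samples, n_features).
    """
    cov = np.cov(X, rowvar=False)
    return np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))

def discriminative_edges(X_pos, X_neg, top_k=3):
    """Rank edges by how much the two classes' precision matrices
    disagree -- a rough stand-in for the discriminative structure
    the paper learns by exploiting the opposite class."""
    diff = np.abs(precision(X_pos) - precision(X_neg))
    iu = np.triu_indices_from(diff, k=1)          # upper-triangle edges only
    order = np.argsort(diff[iu])[::-1][:top_k]    # largest disagreements first
    return list(zip(iu[0][order], iu[1][order]))
```

Estimating each class's SICE in isolation ignores the other class entirely; the paper's point is that letting the opposite class's samples shape the estimate makes the resulting connectivity patterns more discriminative.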
no code implementations • CVPR 2013 • Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, Dinggang Shen
Analyzing brain networks from neuroimages is becoming a promising approach to identifying novel connectivity-based biomarkers for Alzheimer's disease (AD).