Search Results for author: Niamul Quader

Found 5 papers, 2 papers with code

Weight Excitation: Built-in Attention Mechanisms in Convolutional Neural Networks

1 code implementation • ECCV 2020 • Niamul Quader, Md Mafijul Islam Bhuiyan, Juwei Lu, Peng Dai, Wei Li

We propose novel approaches for simultaneously identifying important weights of a convolutional neural network (ConvNet) and providing more attention to the important weights during training.

3D Action Recognition • 3D Object Classification • +7
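
The exact excitation functions are defined in the paper itself; purely as a rough illustration of the general idea described above (re-weighting a ConvNet's weights with a learned, magnitude-based attention before the convolution is applied), a minimal PyTorch-style sketch might look like the following. The module name and the sigmoid-over-|w| attention are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightAttentionConv2d(nn.Module):
    """Illustrative only: a conv layer whose weights are rescaled by a
    simple magnitude-based attention before convolution. The actual
    weight-excitation mechanisms in the paper differ."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding)
        # Learnable temperature for the (assumed) attention over |w|.
        self.temperature = nn.Parameter(torch.ones(1))

    def forward(self, x):
        w = self.conv.weight
        # Give larger-magnitude ("important") weights more attention.
        attn = torch.sigmoid(self.temperature * w.abs())
        return F.conv2d(x, w * attn, self.conv.bias,
                        stride=self.conv.stride,
                        padding=self.conv.padding)
```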

Towards Efficient Coarse-to-Fine Networks for Action and Gesture Recognition

no code implementations • ECCV 2020 • Niamul Quader, Juwei Lu, Peng Dai, Wei Li

State-of-the-art approaches to video-based action and gesture recognition often rely on two key concepts: first, multistream processing; and second, an ensemble of convolutional networks.

3D Action Recognition • Action Classification • +3
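
The sentence above describes two common ingredients of prior work rather than this paper's coarse-to-fine design, which is not spelled out here. As a reminder of what "multistream processing with an ensemble of ConvNets" means in practice, here is a hypothetical late-fusion two-stream sketch; the backbones and the averaging fusion are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class TwoStreamEnsemble(nn.Module):
    """Hypothetical late-fusion ensemble of two ConvNet streams
    (e.g. RGB and a second modality); not the paper's model."""

    def __init__(self, stream_a: nn.Module, stream_b: nn.Module):
        super().__init__()
        self.stream_a = stream_a
        self.stream_b = stream_b

    def forward(self, clip_a, clip_b):
        # Each stream produces class logits; the ensemble averages them.
        return 0.5 * (self.stream_a(clip_a) + self.stream_b(clip_b))
```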

Self-supervised Spatiotemporal Representation Learning by Exploiting Video Continuity

no code implementations • 11 Dec 2021 • Hanwen Liang, Niamul Quader, Zhixiang Chi, Lizhe Chen, Peng Dai, Juwei Lu, Yang Wang

Recent self-supervised video representation learning methods have found significant success by exploring essential properties of videos, e.g., speed, temporal order, etc.

Action Localization Action Recognition +3
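
The paper's pretext tasks are built around video continuity cues; the exact formulation is in the paper. As a loose, hypothetical illustration of how continuity-based supervision can be generated from unlabeled video (not the authors' method), one might create "continuous" vs. "discontinuous" training examples like this:

```python
import torch

def make_continuity_example(frames: torch.Tensor, clip_len: int = 8,
                            drop: int = 4):
    """Hypothetical pretext-label generator: returns a clip and a binary
    label saying whether it is temporally continuous (1) or has `drop`
    frames removed from its middle (0). `frames` is (T, C, H, W) with
    T > clip_len + drop."""
    t = frames.shape[0]
    start = torch.randint(0, t - clip_len - drop, (1,)).item()
    if torch.rand(1).item() < 0.5:
        clip = frames[start:start + clip_len]            # continuous clip
        label = 1
    else:
        first = frames[start:start + clip_len // 2]      # skip `drop` frames
        second = frames[start + clip_len // 2 + drop:
                        start + clip_len + drop]
        clip = torch.cat([first, second], dim=0)         # discontinuous clip
        label = 0
    return clip, label
```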

ConDA: Unsupervised Domain Adaptation for LiDAR Segmentation via Regularized Domain Concatenation

1 code implementation • 30 Nov 2021 • Lingdong Kong, Niamul Quader, Venice Erin Liong

We present ConDA, a concatenation-based domain adaptation framework for LiDAR segmentation that: 1) constructs an intermediate domain consisting of fine-grained interchange signals from both source and target domains without destabilizing the semantic coherency of objects and background around the ego-vehicle; and 2) utilizes the intermediate domain for self-training.

Autonomous Driving • LIDAR Semantic Segmentation • +2
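
ConDA's concatenation operates on LiDAR representations with care taken to preserve semantic coherency around the ego-vehicle; the details are in the paper. As a very rough, hypothetical simplification of the basic idea, an intermediate-domain sample can be formed by combining spatially coherent regions from a source and a target range image (the half-and-half split below is an illustrative assumption):

```python
import torch

def concat_domains(source_scan: torch.Tensor, target_scan: torch.Tensor,
                   source_labels: torch.Tensor):
    """Hypothetical simplification of domain concatenation: build an
    intermediate-domain range image whose left half comes from the source
    scan (with ground-truth labels) and whose right half comes from the
    target scan (to be filled with pseudo-labels during self-training).
    Scans are (C, H, W); labels are (H, W)."""
    _, _, w = source_scan.shape
    mixed = target_scan.clone()
    mixed[:, :, : w // 2] = source_scan[:, :, : w // 2]
    mixed_labels = torch.full_like(source_labels, fill_value=-1)  # -1 = unknown
    mixed_labels[:, : w // 2] = source_labels[:, : w // 2]
    return mixed, mixed_labels
```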

Class Semantics-based Attention for Action Detection

no code implementations • ICCV 2021 • Deepak Sridhar, Niamul Quader, Srikanth Muralidharan, Yaoxin Li, Peng Dai, Juwei Lu

Our attention mechanism outperforms prior self-attention modules such as squeeze-and-excitation on the action detection task.

Action Detection • Action Localization
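
Squeeze-and-excitation, the baseline module mentioned above, is well documented; for reference, a standard SE channel-attention block looks roughly like this. This is the baseline being compared against, not the class-semantics-based attention proposed in the paper.

```python
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Standard squeeze-and-excitation channel attention (Hu et al.),
    shown only as the baseline the paper compares against."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # excitation: per-channel gates
        )

    def forward(self, x):
        return x * self.fc(x)
```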
