Search Results for author: Xiang-Dong Zhou

Found 6 papers, 5 papers with code

Efficient End-to-End Video Question Answering with Pyramidal Multimodal Transformer

1 code implementation • 4 Feb 2023 • Min Peng, Chongyang Wang, Yu Shi, Xiang-Dong Zhou

This paper presents a new method for end-to-end Video Question Answering (VideoQA), departing from the current trend of large-scale pre-training with huge feature extractors.

Computational Efficiency • Question Answering • +4

Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering

1 code implementation • 9 May 2022 • Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

With multiscale sampling, RMI iterates the interaction between the appearance-motion information at each scale and the question embeddings to build multilevel question-guided visual representations.

Question Answering • Video Question Answering • +1
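The multiscale sampling described above can be pictured as subsampling frame features at several temporal scales and letting the question embeddings attend over the visual features at each scale. Below is a minimal PyTorch sketch of that idea; the names `QuestionGuidedBlock` and `multiscale_sample` are hypothetical, and plain cross-attention stands in for the paper's actual RMI module.

```python
import torch
import torch.nn as nn

class QuestionGuidedBlock(nn.Module):
    """Hypothetical sketch: question embeddings attend over visual features
    at one temporal scale (plain cross-attention, not the paper's exact RMI)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, question: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # question: (B, Lq, D), visual: (B, T, D) -> (B, Lq, D)
        out, _ = self.attn(query=question, key=visual, value=visual)
        return out

def multiscale_sample(frames: torch.Tensor, num_scales: int = 3):
    """Subsample frame features at progressively coarser temporal scales."""
    # frames: (B, T, D); scale s keeps every 2**s-th frame
    return [frames[:, ::2 ** s, :] for s in range(num_scales)]

# Toy usage: build one question-guided representation per scale.
B, T, Lq, D = 2, 16, 8, 64
frames = torch.randn(B, T, D)
question = torch.randn(B, Lq, D)
block = QuestionGuidedBlock(D)
multilevel = [block(question, v) for v in multiscale_sample(frames)]
print([m.shape for m in multilevel])  # one (B, Lq, D) tensor per scale
```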

Temporal Pyramid Transformer with Multimodal Interaction for Video Question Answering

1 code implementation • 10 Sep 2021 • Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

To address these issues, this paper proposes a novel Temporal Pyramid Transformer (TPT) model with multimodal interaction for VideoQA.

Natural Language Understanding • Question Answering • +1

STA-VPR: Spatio-temporal Alignment for Visual Place Recognition

1 code implementation • 25 Mar 2021 • Feng Lu, Baifan Chen, Xiang-Dong Zhou, Dezhen Song

Here we split the holistic mid-layer features into local features, and propose an adaptive dynamic time warping (DTW) algorithm to align local features from the spatial domain while measuring the distance between two images.

Dynamic Time Warping • Visual Place Recognition
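The core alignment step above is dynamic time warping over two sequences of local features, one sequence per image. Below is a minimal NumPy sketch of standard DTW with cosine distance as the local cost; the adaptive constraints that STA-VPR adds to DTW are the paper's contribution and are not reproduced here.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Standard DTW distance between two sequences of local features.
    a: (N, D), b: (M, D). Plain DTW only; STA-VPR's adaptive variant
    is not reproduced here."""
    # Normalize rows so the dot product gives cosine similarity.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T  # (N, M) pairwise cosine distances

    N, M = cost.shape
    acc = np.full((N + 1, M + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
                acc[i - 1, j - 1],  # match
            )
    return float(acc[N, M])

# Toy usage: compare two images via local features, e.g. mid-layer CNN
# features split column-wise into a sequence of (num_columns, D) vectors.
img1_local = np.random.rand(10, 128)
img2_local = np.random.rand(12, 128)
print(dtw_distance(img1_local, img2_local))
```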

Recognizing Micro-Expression in Video Clip with Adaptive Key-Frame Mining

1 code implementation • 19 Sep 2020 • Min Peng, Chongyang Wang, Yuan Gao, Tao Bi, Tong Chen, Yu Shi, Xiang-Dong Zhou

As a spontaneous facial expression of emotion, a micro-expression reveals underlying emotions that cannot be consciously controlled.
