Search Results for author: Qianqian Li

Found 5 papers, 3 papers with code

CoTeRe-Net: Discovering Collaborative Ternary Relations in Videos

1 code implementation ECCV 2020 Zhensheng Shi, Cheng Guan, Liangjie Cao, Qianqian Li, Ju Liang, Zhaorui Gu, Haiyong Zheng, Bing Zheng

Current relation models mainly reason about relations of invisibly implicit cues, while important relations of visually explicit cues are rarely considered, and the collaboration between them is usually ignored.

Action Recognition, Relation

Multi-Modal Multi-Action Video Recognition

1 code implementation ICCV 2021 Zhensheng Shi, Ju Liang, Qianqian Li, Haiyong Zheng, Zhaorui Gu, Junyu Dong, Bing Zheng

In this paper, we propose a novel multi-action relation model for videos, by leveraging both relational graph convolutional networks (GCNs) and video multi-modality.
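For illustration only (not the authors' released code), a minimal sketch of one relational graph-convolution step over per-action node features, assuming PyTorch and a hypothetical adjacency built from action co-occurrence:

```python
import torch
import torch.nn as nn

class SimpleRelationGCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbor features via a
    row-normalized adjacency, then apply a learned linear transform."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_action_nodes, in_dim) per-action feature vectors
        # adj: (num_action_nodes, num_action_nodes) relation/co-occurrence graph
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # avoid divide-by-zero
        agg = (adj / deg) @ x                                # message passing
        return torch.relu(self.linear(agg))

# Toy usage: 5 action nodes with 16-dim features and a fully connected graph.
x = torch.randn(5, 16)
adj = torch.ones(5, 5)
layer = SimpleRelationGCNLayer(16, 32)
out = layer(x, adj)  # (5, 32) relation-aware action features
```

The paper combines such relational GCN reasoning with multiple video modalities; the sketch above only shows the graph-convolution idea on a single feature set.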

Relation, Video Recognition

DeepErase: Weakly Supervised Ink Artifact Removal in Document Text Images

2 code implementations NeurIPS Workshop on Document Intelligence 2019 W. Ronny Huang, Yike Qi, Qianqian Li, Jonathan Degange

In addition to high segmentation accuracy, we show that our cleansed images achieve a significant boost in recognition accuracy by popular OCR software such as Tesseract 4.0.
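For context, the recognition-accuracy comparison described above can be reproduced with off-the-shelf OCR tooling. A minimal sketch using the pytesseract wrapper around Tesseract, with hypothetical file names (not paths from the paper):

```python
# Requires a system Tesseract install plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

# Hypothetical paths: a raw document crop and its artifact-removed counterpart.
raw = Image.open("word_crop_raw.png")
cleansed = Image.open("word_crop_cleansed.png")

# Run Tesseract on both images and compare the recognized text.
print("raw:     ", pytesseract.image_to_string(raw).strip())
print("cleansed:", pytesseract.image_to_string(cleansed).strip())
```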

Optical Character Recognition (OCR)
