Search Results for author: Xiaojia Chen

Found 4 papers, 1 paper with code

CIParsing: Unifying Causality Properties into Multiple Human Parsing

no code implementations • 23 Aug 2023 • Xiaojia Chen, Xuanhan Wang, Lianli Gao, Beitao Chen, Jingkuan Song, Heng Tao Shen

Existing methods of multiple human parsing (MHP) apply statistical models to acquire underlying associations between images and labeled body parts.

Human Parsing

RepParser: End-to-End Multiple Human Parsing with Representative Parts

no code implementations • 27 Aug 2022 • Xiaojia Chen, Xuanhan Wang, Lianli Gao, Jingkuan Song

Different from mainstream methods, RepParser solves multiple human parsing in a new single-stage manner, without resorting to person detection or post-grouping. To this end, RepParser decouples the parsing pipeline into instance-aware kernel generation and part-aware human parsing, which are responsible for instance separation and instance-specific part segmentation, respectively.
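
The following is a minimal sketch (not the authors' code) of the two-step decoupling the abstract describes: an instance-aware head generates per-person dynamic kernels, and those kernels are convolved over a shared feature map to produce instance-specific part masks. All names (DecoupledParser, kernel_head), layer sizes, and the number of parts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledParser(nn.Module):
    def __init__(self, feat_dim=64, num_parts=20):
        super().__init__()
        self.num_parts = num_parts
        self.feat_dim = feat_dim
        # Instance-aware kernel generation: map each instance embedding to
        # the weights of a 1x1 conv that segments that instance's parts.
        self.kernel_head = nn.Linear(feat_dim, num_parts * feat_dim)

    def forward(self, feat_map, inst_embeds):
        # feat_map:    (C, H, W) shared part-aware features
        # inst_embeds: (N, C) one embedding per person instance
        kernels = self.kernel_head(inst_embeds)            # (N, P*C)
        kernels = kernels.view(-1, self.num_parts, self.feat_dim, 1, 1)
        # Part-aware parsing: each instance's kernels act as a dynamic
        # 1x1 conv over the shared feature map.
        masks = [F.conv2d(feat_map.unsqueeze(0), k) for k in kernels]
        return torch.cat(masks, dim=0)                     # (N, P, H, W)

# Usage: two instances on a toy 32x32 feature map.
parser = DecoupledParser()
out = parser(torch.randn(64, 32, 32), torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 20, 32, 32])
```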

Human Detection · Human Parsing

KE-RCNN: Unifying Knowledge based Reasoning into Part-level Attribute Parsing

1 code implementation • 21 Jun 2022 • Xuanhan Wang, Jingkuan Song, Xiaojia Chen, Lechao Cheng, Lianli Gao, Heng Tao Shen

In this article, we propose a Knowledge Embedded RCNN (KE-RCNN) to identify attributes by leveraging rich knowledge, including implicit knowledge (e.g., the attribute "above-the-hip" for a shirt requires visual/geometry relations of shirt-hip) and explicit knowledge (e.g., the part "shorts" cannot have the attribute "hoodie" or "lining").
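
One way to picture the explicit-knowledge side is as a part-attribute compatibility table that masks out forbidden predictions. The sketch below is an assumption for illustration, not the paper's implementation; the part names, attribute names, and rule table are taken from the abstract's examples.

```python
import torch

PARTS = ["shirt", "shorts"]
ATTRS = ["above-the-hip", "hoodie", "lining"]

# Rule table: 1 if the attribute is valid for the part, 0 otherwise.
compat = torch.tensor([
    [1, 1, 1],   # shirt: no explicit restrictions in this toy table
    [0, 0, 1],   # shorts: "above-the-hip" and "hoodie" are ruled out
], dtype=torch.float)

def apply_explicit_knowledge(logits, part_idx):
    """Suppress attribute logits that the rule table forbids for this part."""
    mask = compat[part_idx]                       # (num_attrs,)
    return logits.masked_fill(mask == 0, float("-inf"))

logits = torch.randn(len(ATTRS))                  # raw attribute scores
scores = apply_explicit_knowledge(logits, PARTS.index("shorts")).softmax(-1)
print(scores)  # probability mass only on attributes valid for "shorts"
```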

Attribute

Technical Report: Disentangled Action Parsing Networks for Accurate Part-level Action Parsing

no code implementations • 5 Nov 2021 • Xuanhan Wang, Xiaojia Chen, Lianli Gao, Lechao Cheng, Jingkuan Song

Despite dramatic progress in video classification research, a severe problem faced by the community is that detailed understanding of human actions is ignored.

Action Parsing · Action Recognition in Videos · +2
