Search Results for author: Xiang Fang

Found 11 papers, 4 papers with code

ChinaTelecom System Description to VoxCeleb Speaker Recognition Challenge 2023

no code implementations • 16 Aug 2023 • Mengjie Du, Xiang Fang, Jie Li

This technical report describes ChinaTelecom system for Track 1 (closed) of the VoxCeleb2023 Speaker Recognition Challenge (VoxSRC 2023).

Speaker Recognition

You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos

no code implementations • CVPR 2023 • Xiang Fang, Daizong Liu, Pan Zhou, Guoshun Nan

To handle the raw video bit-stream input, we propose a novel Three-branch Compressed-domain Spatial-temporal Fusion (TCSF) framework, which extracts and aggregates three kinds of low-level visual features (I-frame, motion vector and residual features) for effective and efficient grounding.

Sentence • Temporal Sentence Grounding

Hypotheses Tree Building for One-Shot Temporal Sentence Localization

no code implementations • 5 Jan 2023 • Daizong Liu, Xiang Fang, Pan Zhou, Xing Di, Weining Lu, Yu Cheng

Given an untrimmed video, temporal sentence localization (TSL) aims to localize a specific segment according to a given sentence query.

Sentence

Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval

no code implementations • 23 Sep 2022 • Xiang Fang, Daizong Liu, Pan Zhou, Yuchong Hu

In addition, due to the domain gap between different datasets, directly applying these pre-trained models to an unseen domain leads to a significant performance drop.

Information Retrieval • Moment Retrieval +1

Hierarchical Local-Global Transformer for Temporal Sentence Grounding

no code implementations • 31 Aug 2022 • Xiang Fang, Daizong Liu, Pan Zhou, Zichuan Xu, Ruixuan Li

To address this issue, in this paper, we propose a novel Hierarchical Local-Global Transformer (HLGT) to leverage this hierarchical information and model the interactions between different levels of granularity and different modalities for learning more fine-grained multi-modal representations.

Sentence • Temporal Sentence Grounding

Exploring Optical-Flow-Guided Motion and Detection-Based Appearance for Temporal Sentence Grounding

no code implementations • 6 Mar 2022 • Daizong Liu, Xiang Fang, Wei Hu, Pan Zhou

Temporal sentence grounding aims to semantically localize a target segment in an untrimmed video according to a given sentence query.

Object • object-detection +4

V3H: View Variation and View Heredity for Incomplete Multi-view Clustering

1 code implementation • 23 Nov 2020 • Xiang Fang, Yuchong Hu, Pan Zhou, Dapeng Oliver Wu

Inspired by the variation and the heredity in genetics, V3H first decomposes each subspace into a variation matrix for the corresponding view and a heredity matrix for all the views, representing the unique information and the consistent information, respectively.

Clustering • Incomplete multi-view clustering
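The variation/heredity decomposition above can be illustrated with a minimal sketch. This is not V3H's actual optimization (the paper learns both matrices jointly); here the heredity part is simply taken as the cross-view mean and the variation part as each view's residual, just to show the additive split the abstract describes. All names are hypothetical.

```python
import numpy as np

def decompose_views(Z_list):
    """Toy variation/heredity split: each view's subspace representation
    Z_v is written as a shared heredity matrix H (consistent information)
    plus a view-specific variation matrix V_v (unique information)."""
    # Heredity: the component common to all views (here, the simple mean).
    H = np.mean(Z_list, axis=0)
    # Variation: what remains unique to each view.
    V_list = [Z - H for Z in Z_list]
    return H, V_list

rng = np.random.default_rng(0)
views = [rng.standard_normal((5, 5)) for _ in range(3)]
H, V_list = decompose_views(views)
# Each view is exactly reconstructed as heredity + variation.
assert all(np.allclose(H + V, Z) for V, Z in zip(V_list, views))
```

The point of the split is that downstream clustering can rely on `H`, which is observed consistently across views even when individual views are incomplete.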

ANIMC: A Soft Framework for Auto-weighted Noisy and Incomplete Multi-view Clustering

1 code implementation • 20 Nov 2020 • Xiang Fang, Yuchong Hu, Pan Zhou, Dapeng Oliver Wu

In these scenarios, the original image data often contain missing instances and noise, which most multi-view clustering methods ignore.

Clustering • Incomplete multi-view clustering +1

Double Self-weighted Multi-view Clustering via Adaptive View Fusion

no code implementations • 20 Nov 2020 • Xiang Fang, Yuchong Hu

For the first self-weighted operation, it assigns different weights to different features by introducing an adaptive weight matrix, which can reinforce the role of the important features in the joint representation and make each graph robust.

Clustering
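The adaptive weight matrix described above can be sketched with a common self-weighting heuristic: up-weight features whose residual against the joint representation is small. This is an illustrative stand-in, not the paper's learned weighting; the function name and the inverse-residual rule are assumptions.

```python
import numpy as np

def adaptive_feature_weights(X, target):
    """Build a diagonal adaptive weight matrix over features of one view.
    X and target have shape (n_samples, n_features); reliable features
    (small residual against the joint representation) get larger weights."""
    # One residual value per feature, across all samples.
    residual = np.linalg.norm(X - target, axis=0)
    # Inverse-residual self-weighting, with a small epsilon for stability.
    w = 1.0 / (residual + 1e-8)
    w /= w.sum()          # normalize so the weights form a distribution
    return np.diag(w)     # the adaptive weight matrix
```

Reinforcing important features this way makes the joint representation less sensitive to a single noisy feature in any one view.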

Unbalanced Incomplete Multi-view Clustering via the Scheme of View Evolution: Weak Views are Meat; Strong Views do Eat

1 code implementation • 20 Nov 2020 • Xiang Fang, Yuchong Hu, Pan Zhou, Dapeng Oliver Wu

However, different views often have distinct degrees of incompleteness, i.e., unbalanced incompleteness, which results in strong views (low-incompleteness views) and weak views (high-incompleteness views).

Clustering • Incomplete multi-view clustering +1
