Search Results for author: Wenrui Li

Found 6 papers, 1 paper with code

SpikeMba: Multi-Modal Spiking Saliency Mamba for Temporal Video Grounding

no code implementations • 1 Apr 2024 • Wenrui Li, Xiaopeng Hong, Xiaopeng Fan

To address these limitations, we introduce SpikeMba, a multi-modal spiking saliency Mamba for temporal video grounding.

Video Grounding

MIntRec2.0: A Large-scale Benchmark Dataset for Multimodal Intent Recognition and Out-of-scope Detection in Conversations

1 code implementation • 16 Mar 2024 • Hanlei Zhang, Xin Wang, Hua Xu, Qianrui Zhou, Kai Gao, Jianhua Su, Jinyue Zhao, Wenrui Li, Yanting Chen

We believe that MIntRec2.0 will serve as a valuable resource, providing a pioneering foundation for research in human-machine conversational interactions and significantly facilitating related applications.

Multimodal Intent Recognition

The Style Transformer with Common Knowledge Optimization for Image-Text Retrieval

no code implementations • 1 Mar 2023 • Wenrui Li, Zhengyu Ma, Jinqiao Shi, Xiaopeng Fan

The main module is the common knowledge adaptor (CKA), which comprises the style embedding extractor (SEE) and the common knowledge optimization (CKO) modules.

Retrieval • Text Retrieval

X-ray Spectral Estimation using Dictionary Learning

no code implementations • 27 Feb 2023 • Wenrui Li, Venkatesh Sridhar, K. Aditya Mohan, Saransh Singh, Jean-Baptiste Forien, Xin Liu, Gregery T. Buzzard, Charles A. Bouman

As computational tools for X-ray computed tomography (CT) become more quantitatively accurate, knowledge of the source-detector spectral response is critical for quantitative system-independent reconstruction and material characterization capabilities.

Computed Tomography (CT) • Dictionary Learning
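The paper's details are not given here, but the general idea behind dictionary-based spectral estimation can be sketched as follows: model an unknown spectral response as a nonnegative combination of a small dictionary of reference spectra, and recover the mixing weights from measurements. The dictionary, weights, and noise level below are all hypothetical illustration, not the authors' actual method or data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_energies, n_atoms = 64, 8

# Hypothetical dictionary of reference spectra (columns are atoms).
D = np.abs(rng.standard_normal((n_energies, n_atoms)))

# Ground-truth response built from two atoms, plus measurement noise.
w_true = np.zeros(n_atoms)
w_true[[1, 4]] = [0.7, 0.3]
y = D @ w_true + 0.01 * rng.standard_normal(n_energies)

# Nonnegative least squares recovers the sparse mixing weights,
# and the estimated spectrum is their combination.
w_est, _ = nnls(D, y)
spectrum_est = D @ w_est
```

The nonnegativity constraint reflects that physical spectra cannot have negative intensity; with a well-chosen dictionary, only a few weights come out significantly nonzero.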

Sparse-View CT Reconstruction using Recurrent Stacked Back Projection

no code implementations • 9 Dec 2021 • Wenrui Li, Gregery T. Buzzard, Charles A. Bouman

Sparse-view CT reconstruction is important in a wide range of applications due to limitations on cost, acquisition time, or dosage.
