Search Results for author: Mao Zheng

Found 6 papers, 2 papers with code

Counting-Stars: A Simple, Efficient, and Reasonable Strategy for Evaluating Long-Context Large Language Models

1 code implementation · 18 Mar 2024 · Mingyang Song, Mao Zheng, Xuan Luo

While recent research has concentrated on developing Large Language Models (LLMs) with robust long-context capabilities, the lack of appropriate evaluation strategies means relatively little is known about the long-context capabilities and performance of leading LLMs (e.g., GPT-4 Turbo and Kimi Chat).

STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training

no code implementations · 20 Feb 2023 · Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin

Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.


Alignment-Uniformity aware Representation Learning for Zero-shot Video Classification

1 code implementation · CVPR 2022 · Shi Pu, Kaili Zhao, Mao Zheng

Further, we synthesize features of unseen classes by proposing a class generator that interpolates and extrapolates the features of seen classes.


Multimodal Topic Learning for Video Recommendation

no code implementations · 26 Oct 2020 · Shi Pu, Yijiang He, Zheng Li, Mao Zheng

Existing video recommendation systems directly feed features from different modalities (e.g., user personal data, user behavior data, video titles, video tags, and visual content) into deep neural networks, expecting the networks to implicitly mine user-preferred topics online from these features.

