Search Results for author: Baoqun Yin

Found 9 papers, 0 papers with code

ACEBench: Who Wins the Match Point in Tool Learning?

no code implementations • 22 Jan 2025 • Chen Chen, Xinlong Hao, Weiwen Liu, Xu Huang, Xingshan Zeng, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Yuefeng Huang, Wulong Liu, Xinzhi Wang, Defu Lian, Baoqun Yin, Yasheng Wang, Wu Liu

The benchmark comprises three evaluation categories: Normal evaluates function calls in basic scenarios; Special evaluates function calls under vague or incomplete instructions; and Agent introduces multi-agent interactions to simulate real-world multi-turn function calling.

Decision Making
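As a side note to the ACEBench entry above, the sketch below shows one way the three evaluation categories could be represented in code; the class names, fields, and exact-match scoring rule are hypothetical stand-ins, not ACEBench's actual schema or metrics.

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    # The three evaluation categories described in the abstract.
    NORMAL = "normal"    # basic, well-specified function-calling scenarios
    SPECIAL = "special"  # vague or incomplete instructions
    AGENT = "agent"      # multi-agent, multi-turn interactions


@dataclass
class ToolCallCase:
    """Hypothetical test-case record; not ACEBench's schema."""
    category: Category
    instruction: str
    expected_call: dict  # e.g. {"name": "get_weather", "arguments": {"city": "Paris"}}


def exact_match(predicted_call: dict, case: ToolCallCase) -> bool:
    # Naive scoring rule: the predicted call must match name and arguments exactly.
    return predicted_call == case.expected_call
```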

Towards Precise Scaling Laws for Video Diffusion Transformers

no code implementations • 25 Nov 2024 • Yuanyang Yin, Yaqi Zhao, Mingwu Zheng, Ke Lin, Jiarong Ou, Rui Chen, Victor Shea-Jay Huang, Jiahao Wang, Xin Tao, Pengfei Wan, Di Zhang, Baoqun Yin, Wentao Zhang, Kun Gai

Achieving optimal performance of video diffusion transformers within a given data and compute budget is crucial due to their high training costs.
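The excerpt does not quote the paper's fitted law; purely for orientation, scaling-law studies typically fit a power-law ansatz of the following Chinchilla-style form, shown here as a generic assumption rather than the formulation proposed in the paper.

```latex
% Illustrative scaling-law ansatz (not the paper's fitted formula):
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% N: model parameters, D: training data size; E, A, B, \alpha, \beta are fitted constants.
```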

Beyond Sight: Towards Cognitive Alignment in LVLM via Enriched Visual Knowledge

no code implementations • 25 Nov 2024 • Yaqi Zhao, Yuanyang Yin, Lin Li, MingAn Lin, Victor Shea-Jay Huang, Siwei Chen, WeiPeng Chen, Baoqun Yin, Zenan Zhou, Wentao Zhang

Specifically, the vision encoder's (VE's) representation of visual information may not fully align with the LLM's cognitive framework, leading to a mismatch in which visual features exceed the language model's interpretive range.

Landmark Recognition • Large Language Model
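One informal way to make the "alignment" notion above concrete is to check how close adapter-projected visual tokens sit to the LLM's own embedding rows. The probe below is only an illustration of that idea; it is not the method proposed in the paper.

```python
import numpy as np


def cosine_alignment(visual_tokens: np.ndarray, text_embeddings: np.ndarray) -> float:
    """Illustrative alignment probe (not the paper's method).

    visual_tokens:   (n_tokens, d) adapter outputs already in the LLM embedding space
    text_embeddings: (vocab, d) rows of the LLM's input embedding matrix
    Returns the average best-match cosine similarity, i.e. how "word-like"
    the visual tokens look from the LLM's point of view.
    """
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    t = text_embeddings / np.linalg.norm(text_embeddings, axis=1, keepdims=True)
    sims = v @ t.T                          # (n_tokens, vocab) cosine similarities
    return float(sims.max(axis=1).mean())   # low values suggest a representation mismatch
```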

SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs

no code implementations • 21 Aug 2024 • Yuanyang Yin, Yaqi Zhao, YaJie Zhang, Ke Lin, Jiahao Wang, Xin Tao, Pengfei Wan, Di Zhang, Baoqun Yin, Wentao Zhang

Multimodal Large Language Models (MLLMs) have recently demonstrated remarkable perceptual and reasoning abilities, typically comprising a Vision Encoder, an Adapter, and a Large Language Model (LLM).

Contrastive Learning • Language Modeling • +3
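For readers unfamiliar with the three-component layout named in the abstract, the sketch below wires a generic vision encoder, a linear adapter, and a decoder-only LLM together; it illustrates the standard MLLM pipeline only and does not reproduce SEA's token-level supervised alignment.

```python
import torch
import torch.nn as nn


class MiniMLLM(nn.Module):
    """Minimal Vision Encoder -> Adapter -> LLM sketch; components are generic stand-ins."""

    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int, llm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder           # e.g. a ViT returning patch features
        self.adapter = nn.Linear(vision_dim, llm_dim)  # projects visual tokens into LLM space
        self.llm = llm                                 # decoder-only LM consuming input embeddings

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        visual = self.vision_encoder(images)           # (B, n_patches, vision_dim)
        visual = self.adapter(visual)                  # (B, n_patches, llm_dim)
        inputs = torch.cat([visual, text_embeds], 1)   # prepend visual tokens to the prompt
        return self.llm(inputs)                        # logits or hidden states from the LLM
```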

CBQ: Cross-Block Quantization for Large Language Models

no code implementations • 13 Dec 2023 • Xin Ding, Xiaoyu Liu, Zhijun Tu, Yun Zhang, Wei Li, Jie Hu, Hanting Chen, Yehui Tang, Zhiwei Xiong, Baoqun Yin, Yunhe Wang

Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) at ultra-low cost.

Quantization
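For context on what PTQ does, the snippet below applies plain round-to-nearest per-channel weight quantization; this is the textbook baseline only, not CBQ's cross-block reconstruction scheme.

```python
import numpy as np


def quantize_rtn(weight: np.ndarray, n_bits: int = 4):
    """Round-to-nearest per-output-channel PTQ baseline (not CBQ itself)."""
    qmax = 2 ** (n_bits - 1) - 1                       # e.g. 7 for int4
    scale = np.abs(weight).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)           # guard against all-zero rows
    q = np.clip(np.round(weight / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale                    # dequantize with q * scale


W = np.random.randn(8, 16).astype(np.float32)
Wq, s = quantize_rtn(W, n_bits=4)
print("max reconstruction error:", np.abs(W - Wq * s).max())
```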

Advanced Efficient Strategy for Detection of Dark Objects Based on Spiking Network with Multi-Box Detection

no code implementations • 10 Oct 2023 • Munawar Ali, Baoqun Yin, Hazrat Bilal, Aakash Kumar, Ali Muhammad, Avinash Rohra

The study proposes combining spiking and standard convolution layers to build an energy-efficient and reliable object detector.

Object • Object Detection • +1
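To illustrate the "spiking" half of a hybrid spiking/standard convolution detector, the snippet below steps a leaky integrate-and-fire (LIF) layer through time; it is a generic illustration, not the detector architecture proposed in the paper.

```python
import numpy as np


def lif_spikes(currents: np.ndarray, threshold: float = 1.0, decay: float = 0.5) -> np.ndarray:
    """Minimal leaky integrate-and-fire dynamics (generic illustration).

    currents: (T, n_neurons) input current per timestep, e.g. conv-layer outputs
    returns:  (T, n_neurons) binary spike trains
    """
    v = np.zeros(currents.shape[1])                # membrane potentials
    spikes = np.zeros_like(currents)
    for t, i_t in enumerate(currents):
        v = decay * v + i_t                        # leaky integration of the input
        spikes[t] = (v >= threshold)               # fire when the potential crosses threshold
        v = np.where(spikes[t] > 0, 0.0, v)        # reset neurons that fired
    return spikes
```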

Boosting Mobile CNN Inference through Semantic Memory

no code implementations • 5 Dec 2021 • Yun Li, Chen Zhang, Shihao Han, Li Lyna Zhang, Baoqun Yin, Yunxin Liu, Mengwei Xu

The human brain is known to speed up visual recognition of repeatedly presented objects through faster memory encoding and access on activated neurons.
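The general idea of reusing work for repeatedly seen inputs can be sketched with a simple lookup cache, as below; the exact-hash key and the skip-the-whole-model shortcut are simplifying assumptions, not the paper's semantic-memory mechanism.

```python
import hashlib
import numpy as np


class FeatureCache:
    """Toy inference cache for repeated inputs (illustrative only)."""

    def __init__(self):
        self._store = {}

    def _key(self, x: np.ndarray) -> str:
        return hashlib.sha1(x.tobytes()).hexdigest()   # exact-match key for the demo

    def infer(self, x: np.ndarray, model):
        k = self._key(x)
        if k not in self._store:                       # cache miss: run the full model
            self._store[k] = model(x)
        return self._store[k]                          # cache hit: skip recomputation
```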

Weight-dependent Gates for Network Pruning

no code implementations • 4 Jul 2020 • Yun Li, Zechun Liu, Weiqun Wu, Haotian Yao, Xiangyu Zhang, Chi Zhang, Baoqun Yin

This paper proposes a simple yet effective network pruning framework that simultaneously addresses the choice of pruning indicator, the pruning ratio, and the efficiency constraint.

Network Pruning
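As a rough illustration of gating channels based on the weights themselves, the snippet below derives binary per-channel gates from filter L1 norms; the paper learns its gates rather than using a fixed norm rule, so treat this purely as a sketch of the idea.

```python
import numpy as np


def weight_dependent_gates(conv_weight: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Binary per-output-channel gates from filter L1 norms (illustrative sketch).

    conv_weight: (out_channels, in_channels, kH, kW)
    returns:     (out_channels,) with 1 = keep channel, 0 = prune channel
    """
    saliency = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * len(saliency))))
    threshold = np.sort(saliency)[-n_keep]              # value of the k-th most salient filter
    return (saliency >= threshold).astype(np.float32)
```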
