1 code implementation • 25 May 2025 • Qinsi Wang, Hancheng Ye, Ming-Yu Chung, Yudong Liu, Yueqian Lin, Martin Kuo, Mingyuan Ma, Jianyi Zhang, Yiran Chen
Building on this insight, we propose CoreMatching, a co-adaptive sparse inference framework, which leverages the synergy between token and neuron sparsity to enhance inference efficiency.
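The co-adaptive selection described above can be pictured as two coupled top-k choices. The sketch below is a hypothetical illustration of coupling token and neuron sparsity, not the paper's actual scoring rule; `corematching_sketch` and both ratio parameters are invented for this example.

```python
import numpy as np

def corematching_sketch(acts, neuron_ratio=0.3, token_ratio=0.5):
    """Toy co-adaptive sparsification over one FFN activation matrix.

    acts: (num_tokens, num_neurons) activation magnitudes.
    """
    # Keep the neurons with the largest aggregate activation ("core" neurons).
    neuron_scores = np.abs(acts).sum(axis=0)
    kn = max(1, int(neuron_ratio * acts.shape[1]))
    core_neurons = np.sort(np.argsort(-neuron_scores)[:kn])
    # Score tokens only by their activity on the retained neurons, so the
    # token and neuron sparsity decisions reinforce each other.
    token_scores = np.abs(acts[:, core_neurons]).sum(axis=1)
    kt = max(1, int(token_ratio * acts.shape[0]))
    core_tokens = np.sort(np.argsort(-token_scores)[:kt])
    return core_tokens, core_neurons
```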
no code implementations • 6 May 2025 • Shan Yu, Jiarong Xing, Yifan Qiao, Mingyuan Ma, Yangmin Li, Yang Wang, Shuo Yang, Zhiqiang Xie, Shiyi Cao, Ke Bao, Ion Stoica, Harry Xu, Ying Sheng
At its core, Prism tackles a key limitation of existing systems: the lack of cross-model memory coordination, which is essential for flexibly sharing GPU memory across models under dynamic workloads.
1 code implementation • 2 Mar 2025 • Jinjiang You, Hewei Wang, Yijie Li, Mingxiao Huo, Long Van Tran Ha, Mingyuan Ma, Jinfeng Xu, Puzhen Wu, Shubham Garg, Wei Pu
Our approach enhances traditional Structure-from-Motion (SfM) pipelines by introducing an extrinsics regularization term to progressively align estimated extrinsics with ground-truth values, a dense feature reprojection term to reduce keypoint errors by minimizing reprojection loss in the feature space, and an intrinsics variance term for joint optimization across multiple frames.
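The three objective terms read like a weighted sum. A minimal sketch of such a combined loss, assuming mean-squared forms; the function name, the `w_*` weights, and the tensor layouts are all hypothetical:

```python
import numpy as np

def sfm_refinement_loss(extrinsics_est, extrinsics_gt,
                        feats_proj, feats_obs,
                        intrinsics_per_frame,
                        w_ext=1.0, w_reproj=1.0, w_intr=1.0):
    """Sketch of a three-term SfM refinement objective.

    extrinsics_est/gt:    (F, 3, 4) camera poses, estimated vs. ground truth.
    feats_proj/obs:       (K, D) features at projected vs. observed keypoints.
    intrinsics_per_frame: (F, P) intrinsic parameters estimated per frame.
    """
    # Extrinsics regularization: pull estimated poses toward ground truth.
    l_ext = np.mean((extrinsics_est - extrinsics_gt) ** 2)
    # Dense feature reprojection: reprojection error measured in feature space.
    l_reproj = np.mean((feats_proj - feats_obs) ** 2)
    # Intrinsics variance: penalize disagreement of intrinsics across frames.
    l_intr = np.mean(np.var(intrinsics_per_frame, axis=0))
    return w_ext * l_ext + w_reproj * l_reproj + w_intr * l_intr
```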
no code implementations • 8 Oct 2024 • Cong Guo, Feng Cheng, Zhixu Du, James Kiessling, Jonathan Ku, Shiyu Li, Ziru Li, Mingyuan Ma, Tergel Molom-Ochir, Benjamin Morris, Haoxuan Shan, Jingwei Sun, Yitu Wang, Chiyue Wei, Xueying Wu, Yuhao Wu, Hao Frank Yang, Jingyang Zhang, Junyao Zhang, Qilin Zheng, Guanglei Zhou, Hai Li, Yiran Chen
The rapid development of large language models (LLMs) has significantly transformed the field of artificial intelligence, demonstrating remarkable capabilities in natural language processing and moving towards multi-modal functionality.
3 code implementations • 12 Aug 2024 • Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, Mao Yang
This paper introduces rStar, a self-play mutual reasoning approach that significantly improves reasoning capabilities of small language models (SLMs) without fine-tuning or superior models.
no code implementations • 2 Apr 2024 • Wei Chen, Zhiyuan Li, Mingyuan Ma
In the rapidly evolving domain of artificial intelligence, Large Language Models (LLMs) play a crucial role due to their advanced text processing and generation abilities.
no code implementations • 6 Apr 2023 • Ali Taghibakhshi, Mingyuan Ma, Ashwath Aithal, Onur Yilmaz, Haggai Maron, Matthew West
Cross-device user matching is a critical problem in numerous domains, including advertising, recommender systems, and cybersecurity.
2 code implementations • ICCV 2023 • Zangwei Zheng, Mingyuan Ma, Kai Wang, Ziheng Qin, Xiangyu Yue, Yang You
To address this challenge, we propose a novel method ZSCL to prevent zero-shot transfer degradation in the continual learning of vision-language models in both feature and parameter space.
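In spirit, a feature-space constraint keeps the current model's features near the frozen pretrained model's, while a parameter-space constraint keeps the weights near the zero-shot initialization. A minimal sketch under those assumptions; both function names and the interpolation scheme are illustrative, not the paper's exact formulation:

```python
import numpy as np

def feature_distill_loss(feat_current, feat_pretrained):
    # Feature space: penalize drift away from the frozen pretrained features.
    return np.mean((feat_current - feat_pretrained) ** 2)

def weight_space_ensemble(w_current, w_init, alpha=0.5):
    # Parameter space: interpolate the fine-tuned weights toward the
    # initial (zero-shot) weights to retain transfer ability.
    return alpha * w_current + (1 - alpha) * w_init
```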
1 code implementation • 28 Nov 2022 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
In this work, we establish PIDS, a novel paradigm to jointly explore point interactions and point dimensions to serve semantic segmentation on point cloud data.
Ranked #6 on Robust 3D Semantic Segmentation on SemanticKITTI-C
no code implementations • 8 Sep 2022 • Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen
However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.
no code implementations • 29 Sep 2021 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.
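The re-calibration step can be pictured as a normalization over kernel points for each neighbor. A hypothetical sketch, assuming a softmax over precomputed neighbor-to-kernel correlations; the correlation input and function name are assumptions:

```python
import numpy as np

def neighbor_kernel_attention(corr):
    """corr: (N, K) correlation of each of N neighbor points with K kernel points.

    A softmax over the kernel-point axis re-weights each kernel point's
    contribution to every neighbor.
    """
    e = np.exp(corr - corr.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)
```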
1 code implementation • 10 Jan 2021 • Guyue Huang, Jingbo Hu, Yifan He, Jialong Liu, Mingyuan Ma, Zhaoyang Shen, Juejian Wu, Yuanfan Xu, Hengrui Zhang, Kai Zhong, Xuefei Ning, Yuzhe ma, HaoYu Yang, Bei Yu, Huazhong Yang, Yu Wang
With the down-scaling of CMOS technology, the design complexity of very large-scale integration (VLSI) circuits is increasing.
1 code implementation • 3 Jul 2020 • Mingyuan Ma, Sen Na, Hongyu Wang
In extensive experiments on citation networks and other heterogeneous graphs, we demonstrate that adding autoencoder constraints significantly improves the performance of graph convolutional networks.
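A minimal sketch of pairing a graph-convolution layer with an autoencoder-style constraint, assuming an inner-product decoder and a mean-squared reconstruction term; all names and the weighting `lam` are hypothetical:

```python
import numpy as np

def gcn_layer(A_hat, X, W):
    # One graph-convolution layer: propagate over the graph, transform, ReLU.
    return np.maximum(A_hat @ X @ W, 0.0)

def combined_loss(Z, A, labels_onehot, logits, lam=0.1):
    # Supervised term: softmax cross-entropy on node classifications.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.mean(np.sum(labels_onehot * np.log(p + 1e-12), axis=1))
    # Autoencoder constraint: reconstruct the adjacency from embeddings Z
    # with an inner-product decoder.
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    rec = np.mean((A - A_rec) ** 2)
    return ce + lam * rec
```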
1 code implementation • 30 Nov 2018 • Mingyuan Ma, Sen Na, Hongyu Wang, Congzhou Chen, Jin Xu
First, we build an interaction behavior graph for multi-level and multi-category data.
no code implementations • 14 Nov 2017 • Sen Na, Mingyuan Ma, Mladen Kolar
Along with the development of the Peaceman-Rachford Splitting Method (PRSM), many batch algorithms based on it have been studied in depth.