Search Results for author: Naiqiang Tan

Found 5 papers, 4 papers with code

Bag of Tricks for Inference-time Computation of LLM Reasoning

1 code implementation • 11 Feb 2025 • Fan Liu, Wenshuo Chao, Naiqiang Tan, Hao Liu

In this paper, we investigate and benchmark diverse inference-time computation strategies across reasoning tasks of varying complexity.

O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning

1 code implementation • 22 Jan 2025 • Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, DaCheng Tao

Experiments on various mathematical reasoning benchmarks show that O1-Pruner not only significantly reduces inference overhead but also achieves higher accuracy, providing a novel and promising solution to this challenge.

Mathematical Reasoning

Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction

no code implementations • 14 Jun 2024 • Wenzhao Jiang, Jindong Han, Hao Liu, Tao Tao, Naiqiang Tan, Hui Xiong

Rapid urbanization has significantly escalated traffic congestion, underscoring the need for advanced congestion prediction services to bolster intelligent transportation systems.

Mixture-of-Experts · Prediction
