Search Results for author: Fu-Ming Guo

Found 6 papers, 1 paper with code

SparseOptimizer: Sparsify Language Models through Moreau-Yosida Regularization and Accelerate via Compiler Co-design

no code implementations • 27 Jun 2023 • Fu-Ming Guo

This paper introduces SparseOptimizer, a novel deep learning optimizer that exploits Moreau-Yosida regularization to naturally induce sparsity in large language models such as BERT, ALBERT and GPT.
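To make the mechanism concrete, here is a minimal sketch of the proximal-gradient update that Moreau-Yosida regularization of an L1 penalty yields in closed form (soft-thresholding). The function names are hypothetical, and this is not the paper's actual SparseOptimizer implementation:

```python
import torch

def soft_threshold(w: torch.Tensor, lam: float) -> torch.Tensor:
    # Proximal operator of lam * ||w||_1; the Moreau-Yosida (proximal)
    # view of the L1 penalty gives this shrinkage rule in closed form.
    return torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0)

def proximal_sgd_step(params, lr=1e-3, lam=1e-4):
    # Hypothetical illustration: a plain SGD step followed by the
    # proximal step, which zeroes small weights and induces sparsity.
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p -= lr * p.grad                      # gradient step
                p.copy_(soft_threshold(p, lr * lam))  # shrinkage step
```

Each update drives weights whose magnitude falls below lr * lam exactly to zero, which is how this family of optimizers produces sparse models without a separate pruning pass.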

Sim2Real Docs: Domain Randomization for Documents in Natural Scenes using Ray-traced Rendering

1 code implementation • 16 Dec 2021 • Nikhil Maddikunta, Huijun Zhao, Sumit Keswani, Alfy Samuel, Fu-Ming Guo, Nishan Srishankar, Vishwa Pardeshi, Austin Huang

In the past, computer vision systems for digitized documents could rely on systematically captured, high-quality scans.

Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity

no code implementations • 16 Jun 2021 • Fu-Ming Guo, Austin Huang

Integrating block sparse row (BSR) operations enables the TVM runtime to leverage the structured sparsity patterns induced by model regularization.

Language Modelling
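As a concrete illustration of the storage format this entry refers to, the sketch below builds a block sparse row (BSR) matrix with SciPy; it stands in for, and is not, TVM's own BSR kernels:

```python
import numpy as np
from scipy.sparse import bsr_matrix

# A dense weight matrix whose zeros fall in aligned 2x2 blocks,
# the structured pattern that block-sparse kernels can exploit.
dense = np.array([
    [1., 2., 0., 0.],
    [3., 4., 0., 0.],
    [0., 0., 5., 6.],
    [0., 0., 7., 8.],
])

# BSR stores only the nonzero 2x2 blocks plus block-level indices,
# so a runtime can skip whole zero blocks during a matvec.
w_bsr = bsr_matrix(dense, blocksize=(2, 2))
print(w_bsr.data.shape)    # (2, 2, 2): two stored 2x2 blocks
print(w_bsr @ np.ones(4))  # [ 3.  7. 11. 15.]
```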

PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices

no code implementations • 6 Sep 2019 • Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang

Model compression techniques for Deep Neural Networks (DNNs) are widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.

Model Compression
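For contrast with the pattern-based sparsity PCONV proposes, here is a minimal sketch of generic magnitude-based weight pruning; the helper name is hypothetical, and PCONV's contribution is precisely to constrain where zeros may appear so mobile kernels can exploit them, which this unstructured baseline does not do:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Zero out the smallest-magnitude fraction `sparsity` of entries.
    # Hypothetical baseline, not PCONV's pattern/connectivity pruning.
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)
```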
