Search Results for author: Zhenyu Gu

Found 5 papers, 1 paper with code

Boosting Deep Neural Network Efficiency with Dual-Module Inference

no code implementations · ICML 2020 · Liu Liu, Lei Deng, Zhaodong Chen, Yuke Wang, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie

Using Deep Neural Networks (DNNs) in machine learning tasks is promising for delivering high-quality results, but meeting stringent latency requirements and energy constraints is challenging because of the memory-bound and compute-bound execution patterns of DNNs.
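To make the dual-module idea concrete, here is a minimal NumPy sketch of a generic big/little scheme. This is an illustration of the general approach only; the module split, the int8 approximation, and the importance threshold are all assumptions, not the paper's method.

```python
# Generic big/little dual-module sketch (illustrative, not the paper's code):
# a cheap quantized "little" module approximates the layer output, and the
# full-precision "big" module recomputes only the entries flagged important.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512).astype(np.float32)
W = rng.standard_normal((512, 512)).astype(np.float32)

# "Little" module: int8-quantized weights give a cheap approximate output.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)
approx = (W_q.astype(np.float32) * scale) @ x

# "Big" module: recompute only the outputs flagged as important
# (here, the top 10% by magnitude -- a made-up criterion for illustration).
important = np.abs(approx) >= np.quantile(np.abs(approx), 0.9)
out = approx.copy()
out[important] = W[important] @ x
```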

Yi: Open Foundation Models by 01.AI

1 code implementation · 7 Mar 2024 · 01.AI: Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, Zonghong Dai

The Yi model family is based on 6B and 34B pretrained language models, which we then extend to chat models, 200K long-context models, depth-upscaled models, and vision-language models.

Tasks: Attribute, Chatbot, +2
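For readers who want to try the released checkpoints, a minimal usage sketch with Hugging Face transformers follows. The model id below is an assumption about where the weights are hosted; substitute the checkpoint actually released by 01.AI.

```python
# Hypothetical usage sketch: loading a Yi checkpoint via transformers.
# The hub id "01-ai/Yi-6B-Chat" is an assumption, not confirmed by this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B-Chat"  # assumed hub id; replace as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain the Yi model family in one sentence.",
                   return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```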

Energon: Towards Efficient Acceleration of Transformers Using Dynamic Sparse Attention

no code implementations · 18 Oct 2021 · Zhe Zhou, Junlin Liu, Zhenyu Gu, Guangyu Sun

To run such an algorithm with lower latency and better energy efficiency, we also propose an Energon co-processor architecture.

Tasks: Edge-computing
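As a rough illustration of dynamic sparse attention, the sketch below keeps only each query's k highest-scoring keys and masks the rest before the softmax. This is a generic top-k illustration, not necessarily the paper's filtering algorithm.

```python
# Toy top-k dynamic sparse attention (a generic sketch, not Energon's method).
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Each query attends only to its k highest-scoring keys."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (n_q, n_k)
    kth = np.partition(scores, -k, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)   # prune low scores
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=4)               # shape (8, 16)
```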

Distribution Adaptive INT8 Quantization for Training CNNs

no code implementations · 9 Feb 2021 · Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu

Research has demonstrated that low-bit-width (e.g., INT8) quantization can be employed to accelerate inference.

Tasks: Image Classification, Object Detection, +3
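For context, a symmetric per-tensor INT8 quantize/dequantize round trip looks like the sketch below. This is a minimal generic illustration of INT8 quantization, not the paper's distribution-adaptive scheme.

```python
# Generic symmetric per-tensor INT8 quantization sketch (illustrative only;
# the paper's contribution is a distribution-adaptive variant for training).
import numpy as np

def quantize_int8(x):
    scale = max(float(np.abs(x).max()) / 127.0, 1e-12)  # avoid zero scale
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

x = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, s = quantize_int8(x)
x_hat = dequantize_int8(q, s)
max_err = np.abs(x_hat - x).max()  # bounded by roughly scale / 2
```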

Dual-module Inference for Efficient Recurrent Neural Networks

no code implementations · 25 Sep 2019 · Liu Liu, Lei Deng, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie

Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results, but meeting stringent latency requirements is challenging because of the memory-bound execution pattern of RNNs.
