Search Results for author: Li Lyna Zhang

Found 12 papers, 6 papers with code

Fast Hardware-Aware Neural Architecture Search

1 code implementation • 25 Oct 2019 • Li Lyna Zhang, Yuqing Yang, Yuhang Jiang, Wenwu Zhu, Yunxin Liu

Unlike previous approaches that apply search algorithms on a small, human-designed search space without considering hardware diversity, we propose HURRICANE that explores the automatic hardware-aware search over a much larger search space and a two-stage search algorithm, to efficiently generate tailored models for different types of hardware.

Hardware Aware Neural Architecture Search · Neural Architecture Search
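For intuition only, here is a minimal sketch of a two-stage, hardware-aware search loop in the spirit of the abstract: stage one profiles candidate operators on the target device and keeps the hardware-friendly ones, stage two searches architectures built from those operators under a latency budget. The operator pool, profiler, and accuracy proxy are hypothetical placeholders, not HURRICANE's implementation.

```python
import random

# Hypothetical operator pool and placeholder profilers; in practice these come
# from on-device latency measurements and a trained accuracy predictor.
CANDIDATE_OPS = ["mbconv3_k3", "mbconv6_k5", "mbconv6_k7", "shuffle_k3", "sep_conv_k5"]

def op_latency_ms(op, device):           # placeholder per-operator latency profiler
    return random.uniform(0.5, 3.0)

def estimate_accuracy(arch):             # placeholder accuracy proxy
    return random.uniform(0.6, 0.8)

def two_stage_search(device, latency_budget_ms, num_layers=12, trials=200):
    # Stage 1: profile each operator on the target device and keep the fastest few.
    friendly_ops = sorted(CANDIDATE_OPS, key=lambda op: op_latency_ms(op, device))[:3]

    # Stage 2: search architectures built only from the hardware-friendly operators,
    # keeping the most accurate candidate that fits the latency budget.
    best_arch, best_acc = None, 0.0
    for _ in range(trials):
        arch = [random.choice(friendly_ops) for _ in range(num_layers)]
        latency = sum(op_latency_ms(op, device) for op in arch)
        if latency <= latency_budget_ms:
            acc = estimate_accuracy(arch)
            if acc > best_acc:
                best_arch, best_acc = arch, acc
    return best_arch, best_acc

print(two_stage_search(device="dsp", latency_budget_ms=20.0))
```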

Boosting Mobile CNN Inference through Semantic Memory

no code implementations • 5 Dec 2021 • Yun Li, Chen Zhang, Shihao Han, Li Lyna Zhang, Baoqun Yin, Yunxin Liu, Mengwei Xu

Human brains are known to be capable of speeding up visual recognition of repeatedly presented objects through faster memory encoding and accessing procedures on activated neurons.

SwiftPruner: Reinforced Evolutionary Pruning for Efficient Ad Relevance

no code implementations • 30 Aug 2022 • Li Lyna Zhang, Youkow Homma, Yujing Wang, Min Wu, Mao Yang, Ruofei Zhang, Ting Cao, Wei Shen

Remarkably, under our latency requirement of 1900us on CPU, SwiftPruner achieves a 0.86% higher AUC than the state-of-the-art uniform sparse baseline for BERT-Mini on a large-scale real-world dataset.

SpaceEvo: Hardware-Friendly Search Space Design for Efficient INT8 Inference

1 code implementation • ICCV 2023 • Li Lyna Zhang, Xudong Wang, Jiahang Xu, Quanlu Zhang, Yujing Wang, Yuqing Yang, Ningxin Zheng, Ting Cao, Mao Yang

The combination of Neural Architecture Search (NAS) and quantization has proven successful in automatically designing low-FLOPs INT8 quantized neural networks (QNNs).

Neural Architecture Search · Quantization
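As background for the INT8 target, a minimal PyTorch sketch of dynamic INT8 quantization with a rough CPU latency comparison; it illustrates the quantized-inference setting only, not SpaceEvo's search-space design, and the toy MLP stands in for a searched architecture.

```python
import time
import torch
import torch.nn as nn

# A small float32 MLP standing in for a searched architecture (illustrative only).
model_fp32 = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 256))
model_fp32.eval()

# Dynamic INT8 quantization of the Linear layers: weights stored as int8,
# activations quantized on the fly at inference time.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

def cpu_latency_ms(model, runs=50):
    x = torch.randn(1, 512)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1e3

print(f"fp32: {cpu_latency_ms(model_fp32):.3f} ms, int8: {cpu_latency_ms(model_int8):.3f} ms")
```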

ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices

1 code implementation • ICCV 2023 • Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang

However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance.

Neural Architecture Search
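To illustrate the gradient-conflict point, a minimal sketch of conflict-aware subnet sampling that keeps the subnets drawn for one training step within a narrow FLOPs band instead of sampling uniformly across the whole space. The elastic dimensions and FLOPs estimator are hypothetical, not ElasticViT's actual space or sampler.

```python
import random

# Hypothetical elastic dimensions of a ViT-style supernet.
SPACE = {
    "depth": [8, 10, 12, 14],
    "width": [192, 256, 320, 384],
    "mlp_ratio": [2, 3, 4],
}

def estimate_mflops(cfg, tokens=196):
    # Very rough cost proxy: attention + MLP cost per block, in MFLOPs (illustrative only).
    per_block = 4 * tokens * cfg["width"] ** 2 + 2 * cfg["mlp_ratio"] * tokens * cfg["width"] ** 2
    return cfg["depth"] * per_block / 1e6

def sample_subnet():
    return {k: random.choice(v) for k, v in SPACE.items()}

def conflict_aware_batch(num_subnets=4, band_ratio=0.25, max_tries=1000):
    """Sample subnets whose FLOPs stay within +/- band_ratio of the first draw,
    so gradients accumulated in one step come from similarly sized models."""
    anchor = sample_subnet()
    target = estimate_mflops(anchor)
    batch = [anchor]
    for _ in range(max_tries):
        if len(batch) == num_subnets:
            break
        cand = sample_subnet()
        if abs(estimate_mflops(cand) - target) <= band_ratio * target:
            batch.append(cand)
    return batch

for cfg in conflict_aware_batch():
    print(cfg, f"{estimate_mflops(cfg):.0f} MFLOPs")
```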

Accurate and Structured Pruning for Efficient Automatic Speech Recognition

no code implementations • 31 May 2023 • Huiqiang Jiang, Li Lyna Zhang, Yuang Li, Yu Wu, Shijie Cao, Ting Cao, Yuqing Yang, Jinyu Li, Mao Yang, Lili Qiu

In this paper, we propose a novel compression strategy that leverages structured pruning and knowledge distillation to reduce the model size and inference cost of the Conformer model while preserving high recognition performance.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +2
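A minimal sketch of the general recipe the abstract describes: a task loss, a distillation loss against the unpruned teacher, and an L1-style penalty on per-structure gate parameters that drives structured sparsity. The loss weightings and the single gate vector below are hypothetical, not the paper's Conformer configuration.

```python
import torch
import torch.nn.functional as F

def pruning_distillation_loss(student_logits, teacher_logits, targets, gates,
                              alpha=0.5, temperature=2.0, sparsity_weight=1e-3):
    """Combine task loss, soft-label distillation, and a sparsity penalty on
    channel/head gates (gate values near zero mark structures to prune)."""
    task = F.cross_entropy(student_logits, targets)
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    sparsity = sum(g.abs().sum() for g in gates)
    return alpha * task + (1 - alpha) * distill + sparsity_weight * sparsity

# Toy usage with random tensors and a single gate vector over 16 "channels".
student_logits = torch.randn(8, 30, requires_grad=True)
teacher_logits = torch.randn(8, 30)
targets = torch.randint(0, 30, (8,))
gates = [torch.ones(16, requires_grad=True)]
loss = pruning_distillation_loss(student_logits, teacher_logits, targets, gates)
loss.backward()
print(float(loss))
```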

Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference

1 code implementation • 26 Jun 2023 • Junyan Li, Li Lyna Zhang, Jiahang Xu, Yujing Wang, Shaoguang Yan, Yunqing Xia, Yuqing Yang, Ting Cao, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang

Deploying pre-trained transformer models like BERT on downstream tasks in resource-constrained scenarios is challenging due to their high inference cost, which grows rapidly with input sequence length.

Model Compression
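For intuition, a minimal sketch of token pruning between transformer layers: keep only the top-k tokens ranked by an importance score (here, the attention each token receives from the [CLS] position), with k set by a length budget. This is a generic illustration, not the paper's ranking-distilled, constraint-aware criterion.

```python
import torch

def prune_tokens(hidden_states, attention_probs, keep_ratio=0.5):
    """hidden_states: (batch, seq_len, dim); attention_probs: (batch, heads, seq_len, seq_len).
    Score each token by the attention it receives from position 0 ([CLS]),
    averaged over heads, and keep the highest-scoring tokens ([CLS] always kept)."""
    batch, seq_len, dim = hidden_states.shape
    scores = attention_probs[:, :, 0, :].mean(dim=1)          # (batch, seq_len)
    scores[:, 0] = float("inf")                               # never prune [CLS]
    k = max(1, int(seq_len * keep_ratio))
    keep_idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
    return hidden_states.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))

# Toy usage: 128 tokens pruned to 64 before the next layer.
h = torch.randn(2, 128, 256)
attn = torch.softmax(torch.randn(2, 4, 128, 128), dim=-1)
print(prune_tokens(h, attn, keep_ratio=0.5).shape)   # torch.Size([2, 64, 256])
```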

Compresso: Structured Pruning with Collaborative Prompting Learns Compact Large Language Models

1 code implementation • 8 Oct 2023 • Song Guo, Jiahang Xu, Li Lyna Zhang, Mao Yang

To this end, Compresso prunes LLaMA-7B to 5.4B, maintaining original performance and even surpassing LLaMA-7B in reading comprehension by 2.62%.

Natural Language Understanding · Reading Comprehension

Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning

no code implementations • 14 Dec 2023 • Xijie Huang, Li Lyna Zhang, Kwang-Ting Cheng, Fan Yang, Mao Yang

In this work, we propose CoT-Influx, a novel approach that pushes the boundary of few-shot Chain-of-Thoughts (CoT) learning to improve LLM mathematical reasoning.

Arithmetic Reasoning · Few-Shot Learning · +3
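As a rough illustration of context pruning for few-shot CoT, a greedy sketch that packs CoT exemplars by a (hypothetical) usefulness-per-token score until a prompt token budget is filled. The scorer and tokenizer are placeholders; CoT-Influx instead learns its pruner with reinforcement learning and also prunes redundant tokens within examples.

```python
def approx_tokens(text):
    # Crude token-count placeholder; a real system would use the model's tokenizer.
    return len(text.split())

def select_cot_examples(examples, scores, token_budget):
    """Greedily pick CoT exemplars with the best score-per-token until the budget
    of prompt tokens is exhausted. `scores` is a hypothetical usefulness estimate."""
    ranked = sorted(zip(examples, scores),
                    key=lambda pair: pair[1] / approx_tokens(pair[0]), reverse=True)
    chosen, used = [], 0
    for text, _ in ranked:
        cost = approx_tokens(text)
        if used + cost <= token_budget:
            chosen.append(text)
            used += cost
    return chosen

pool = [
    "Q: 2+3? Let's think step by step. 2+3=5. A: 5",
    "Q: A train travels 60 km in 1.5 h; speed? 60/1.5=40 km/h. A: 40",
    "Q: 12*7? 12*7=84. A: 84",
]
print(select_cot_examples(pool, scores=[0.9, 0.7, 0.8], token_budget=30))
```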

LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens

no code implementations • 21 Feb 2024 • Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang

This is achieved by three key innovations: (i) we identify and exploit two forms of non-uniformities in positional interpolation through an efficient search, providing a better initialization for fine-tuning and enabling an 8x extension in non-fine-tuning scenarios; (ii) we introduce a progressive extension strategy that first fine-tunes a 256k length LLM and then conducts a second positional interpolation on the fine-tuned extended LLM to achieve a 2048k context window; (iii) we readjust LongRoPE on 8k length to recover the short context window performance.

8k
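To make the positional-interpolation idea concrete, a minimal sketch of RoPE in which each frequency dimension gets its own rescale factor, one of the non-uniformities the abstract refers to. The uniform 4x factor used here is a placeholder for the non-uniform, per-dimension factors that LongRoPE finds by search.

```python
import torch

def rope_angles(seq_len, head_dim, rescale=None, base=10000.0):
    """Rotary-embedding angles with an optional per-dimension rescale factor.
    rescale[i] > 1 stretches dimension i's positions (positional interpolation)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    if rescale is not None:
        inv_freq = inv_freq / rescale          # per-dimension interpolation factors
    positions = torch.arange(seq_len).float()
    return torch.outer(positions, inv_freq)    # (seq_len, head_dim // 2)

def apply_rope(x, angles):
    # x: (..., seq_len, head_dim); rotate pairs of dimensions by the given angles.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

head_dim, seq_len = 64, 8192
# Placeholder: stretch every dimension 4x; LongRoPE instead searches non-uniform
# per-dimension factors (and handles the initial token positions specially).
rescale = torch.full((head_dim // 2,), 4.0)
q = torch.randn(1, seq_len, head_dim)
print(apply_rope(q, rope_angles(seq_len, head_dim, rescale)).shape)
```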
