3 code implementations • 6 Dec 2023 • Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang
Large Language Models (LLMs) have demonstrated remarkable capabilities in important tasks such as natural language understanding, language generation, and complex reasoning, and have the potential to make a substantial impact on society.
no code implementations • 18 Apr 2023 • Yang Liu, Shen Yan, Yuge Zhang, Kan Ren, Quanlu Zhang, Zebin Ren, Deng Cai, Mi Zhang
Vision Transformers have shown strong performance on individual tasks such as classification and segmentation.
1 code implementation • ICCV 2023 • Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang
However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance.
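As a rough illustration (not the paper's actual training loop), uniform sampling draws each architectural choice independently, so consecutive steps can train subnets of very different sizes against the same shared weights; the search space below is hypothetical:

```python
import random

# Hypothetical search space: each sampled subnet picks a depth and a width.
SEARCH_SPACE = {"depth": [2, 3, 4], "width": [32, 64, 128, 256]}

def sample_subnet_uniform():
    """Uniformly sample one subnet configuration from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

# Two consecutive training steps may draw subnets of vastly different
# capacity, so their gradients pull the shared supernet weights in
# conflicting directions -- the "gradient conflict" issue.
step_1 = sample_subnet_uniform()  # e.g. {'depth': 2, 'width': 32}
step_2 = sample_subnet_uniform()  # e.g. {'depth': 4, 'width': 256}
print(step_1, step_2)
```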
1 code implementation • ICCV 2023 • Li Lyna Zhang, Xudong Wang, Jiahang Xu, Quanlu Zhang, Yujing Wang, Yuqing Yang, Ningxin Zheng, Ting Cao, Mao Yang
The combination of Neural Architecture Search (NAS) and quantization has proven successful in automatically designing low-FLOPs INT8 quantized neural networks (QNNs).
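For context, here is a minimal NumPy sketch of symmetric per-tensor INT8 quantization, the kind of scheme such QNNs build on. This is illustrative only; the paper's actual quantization method may differ:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())  # per-element quantization error
```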
no code implementations • 26 Jan 2023 • Ningxin Zheng, Huiqiang Jiang, Quanlu Zhang, Zhenhua Han, Yuqing Yang, Lingxiao Ma, Fan Yang, Chengruidong Zhang, Lili Qiu, Mao Yang, Lidong Zhou
Dynamic sparsity, where the sparsity patterns are unknown until runtime, poses a significant challenge to deep learning.
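A small NumPy sketch of what "unknown until runtime" means in practice: here the mask is derived from each input's activation magnitudes, so no sparsity pattern can be fixed at compile time. This is a generic top-k example, not the paper's system:

```python
import numpy as np

def dynamic_topk_sparse_matmul(x, W, k):
    """Keep only the k largest-magnitude activations per row; the mask
    depends on the input, so the sparsity pattern emerges at runtime."""
    idx = np.argsort(-np.abs(x), axis=1)[:, :k]   # per-input indices
    mask = np.zeros(x.shape, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)
    # Dense fallback for clarity; a real sparse kernel would skip the zeros.
    return (x * mask) @ W

x = np.random.randn(8, 256)
W = np.random.randn(256, 128)
y = dynamic_topk_sparse_matmul(x, W, k=32)
print(y.shape)  # (8, 128)
```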
no code implementations • 21 Jan 2023 • Zhiqi Lin, Youshan Miao, Guodong Liu, Xiaoxiang Shi, Quanlu Zhang, Fan Yang, Saeed Maleki, Yi Zhu, Xu Cao, Cheng Li, Mao Yang, Lintao Zhang, Lidong Zhou
SuperScaler is a system that facilitates the design and generation of highly flexible parallelization plans.
no code implementations • 22 Sep 2022 • Cong Guo, Yuxian Qiu, Jingwen Leng, Chen Zhang, Ying Cao, Quanlu Zhang, Yunxin Liu, Fan Yang, Minyi Guo
An activation function is an element-wise mathematical function that plays a crucial role in deep neural networks (DNNs).
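As a concrete example of "element-wise", here is the widely used tanh approximation of GELU in NumPy; it maps every entry of a tensor independently of all the others:

```python
import numpy as np

def gelu(x: np.ndarray) -> np.ndarray:
    """GELU activation (tanh approximation), applied element-wise."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-3, 3, 7)
print(gelu(x))  # output has the same shape as the input
```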
no code implementations • CVPR 2022 • Chenqian Yan, Yuge Zhang, Quanlu Zhang, Yaming Yang, Xinyang Jiang, Yuqing Yang, Baoyuan Wang
Thanks to HyperFD, each local task (client) is able to effectively leverage the learning "experience" of previous tasks without uploading raw images to the platform; meanwhile, the meta-feature extractor is continuously learned to better trade off bias and variance.
1 code implementation • 6 Aug 2021 • Yuge Zhang, Quanlu Zhang, Li Lyna Zhang, Yaming Yang, Chenqian Yan, Xiaotian Gao, Yuqing Yang
One of the key challenges in Neural Architecture Search (NAS) is to efficiently rank the performances of architectures.
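One common way to frame ranking (shown here only as a generic sketch, not necessarily this paper's method) is a pairwise loss that rewards a predictor for ordering two architectures correctly, rather than regressing their exact accuracies:

```python
def pairwise_ranking_loss(score_a, score_b, a_better, margin=0.1):
    """Hinge loss that only asks the predictor to rank two architectures
    correctly; the absolute accuracy values never need to be predicted."""
    diff = score_a - score_b if a_better else score_b - score_a
    return max(0.0, margin - diff)

print(pairwise_ranking_loss(0.95, 0.80, a_better=True))  # 0.0: ordered correctly
print(pairwise_ranking_loss(0.70, 0.80, a_better=True))  # positive: penalized
```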
no code implementations • 16 Oct 2020 • Yuge Zhang, Quanlu Zhang, Yaming Yang
Weight sharing, as an approach to speeding up architecture performance estimation, has received wide attention.
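In a weight-sharing supernet, every candidate subnet reuses slices of one jointly trained tensor, which is why evaluating a subnet needs no training from scratch. A minimal NumPy sketch with a hypothetical width choice:

```python
import numpy as np

# Shared "supernet" weight sized for the widest choice (256 output units).
W_super = np.random.randn(256, 128)

def subnet_forward(x, width):
    """Every candidate width reuses a slice of the same shared tensor."""
    return x @ W_super[:width].T  # (batch, 128) -> (batch, width)

x = np.random.randn(4, 128)
y_small = subnet_forward(x, width=64)   # shares its rows with y_large
y_large = subnet_forward(x, width=256)
print(y_small.shape, y_large.shape)     # (4, 64) (4, 256)
```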
no code implementations • COLING 2020 • Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai
BERT is a cutting-edge language representation model pre-trained on a large corpus, which achieves superior performance on various natural language understanding tasks.
1 code implementation • 6 Jan 2020 • Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen Zhang, Yaming Yang
With the success of deep neural networks, Neural Architecture Search (NAS), as an approach to automatic model design, has attracted wide attention.