no code implementations • 1 Apr 2024 • Kaiyan Chang, Songcheng Xu, Chenglong Wang, Yingfeng Luo, Tong Xiao, Jingbo Zhu
In this paper, we present a comprehensive overview of these methods.
no code implementations • 17 Mar 2024 • Kaiyan Chang, Kun Wang, Nan Yang, Ying Wang, Dantong Jin, Wenlong Zhu, Zhirong Chen, Cangyuan Li, Hao Yan, Yunhao Zhou, Zhuoliang Zhao, Yuan Cheng, Yudong Pan, Yiqi Liu, Mengdi Wang, Shengwen Liang, Yinhe Han, Huawei Li, Xiaowei Li
Our 13B model (ChipGPT-FT) improves the pass rate over GPT-3.5 in Verilog generation and outperforms it in EDA script (i.e., SiliconCompiler) generation with only 200 EDA script data samples.
no code implementations • 8 Aug 2023 • Chenglong Wang, Hang Zhou, Kaiyan Chang, Tongran Liu, Chunliang Zhang, Quan Du, Tong Xiao, Jingbo Zhu
Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters.
no code implementations • 23 May 2023 • Kaiyan Chang, Ying Wang, Haimeng Ren, Mengdi Wang, Shengwen Liang, Yinhe Han, Huawei Li, Xiaowei Li
As large language models (LLMs) like ChatGPT exhibit unprecedented machine intelligence, they also show great performance in assisting hardware engineers to realize higher-efficiency logic design via natural language interaction.
no code implementations • 23 Apr 2020 • Kaiyan Chang, Wei Jiang, Jinyu Zhan, Zicheng Gong, Weijia Pan
Specifically, our design improves the accuracy on MNIST to 97.26%, compared with RC4. The accuracies on the datasets encrypted by ArchNet are 97.26%, 84.15%, and 79.80%, versus 97.31%, 82.31%, and 80.22% on the original datasets, showing that ArchNet's encrypted accuracy matches that of the base model.
no code implementations • 23 Apr 2020 • Ruilin Chen, Kaiyan Chang, Kaiyuan Tian
Teamwork is increasingly important in today's society.
Social and Information Networks • Computers and Society