1 code implementation • 8 Aug 2023 • Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the abilities of LLMs remains an open question.
no code implementations • 26 May 2023 • Zhiyi Xue, Si Liu, Zhaodi Zhang, Yiting Wu, Min Zhang
In this paper, we study existing approaches and identify a dominant factor in defining tight approximation, namely the approximation domain of the activation function.
no code implementations • 21 Nov 2022 • Yiting Wu, Zhaodi Zhang, Zhiyi Xue, Si Liu, Min Zhang
We observe that existing approaches rely only on overestimated domains, so an approximation that is tight on such a domain is not necessarily tight on the activation function's actual domain.
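This observation can be sketched with a small, self-contained example (not the papers' actual method): a linear upper bound on the sigmoid computed as a chord over an overestimated input interval is still sound on the neuron's smaller actual interval, but leaves a much larger gap there than a chord computed directly on the actual interval. The interval endpoints below are hypothetical, chosen only for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def chord(l, u):
    """Linear upper bound of sigmoid on [l, u]; valid when u <= 0,
    where sigmoid is convex, so the chord lies above the curve."""
    slope = (sigmoid(u) - sigmoid(l)) / (u - l)
    intercept = sigmoid(l) - slope * l
    return slope, intercept

def max_gap(slope, intercept, l, u, n=1000):
    """Largest distance between the linear bound and sigmoid on [l, u]."""
    return max(
        slope * (l + i * (u - l) / n) + intercept
        - sigmoid(l + i * (u - l) / n)
        for i in range(n + 1)
    )

# Hypothetical overestimated domain [-5, 0] vs. actual domain [-2, -1].
loose = chord(-5.0, 0.0)   # bound derived on the overestimated domain
tight = chord(-2.0, -1.0)  # bound derived on the actual domain

gap_loose = max_gap(*loose, -2.0, -1.0)
gap_tight = max_gap(*tight, -2.0, -1.0)
print(f"gap on actual domain: loose={gap_loose:.4f}, tight={gap_tight:.4f}")
assert gap_loose > gap_tight  # "tight" only on the domain it was built for
```

The bound built on the overestimated interval is several times looser on the actual domain, which is exactly the precision loss the observation above points to.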
no code implementations • 21 Aug 2022 • Zhaodi Zhang, Yiting Wu, Si Liu, Jing Liu, Min Zhang
Considerable efforts have been devoted to finding the so-called tighter approximations to obtain more precise verification results.