no code implementations • 4 Feb 2024 • Tong Niu, Weihao Zhang, Rong Zhao
In SAGE, we introduce a semi-structured conceptual representation that makes explicit the intricate structures of ABMs, and an objective representation that guides LLMs in modeling scenarios and proposing hypothetical solutions through in-context learning.
no code implementations • 25 Jan 2024 • Tong Niu, Haoyu Huang, Yu Du, Weihao Zhang, Luping Shi, Rong Zhao
As contemporary social systems grow increasingly intricate and multifaceted, manually devising solutions to pertinent social issues has become a formidable task.
1 code implementation • 11 Nov 2022 • Hao Zheng, Hui Lin, Rong Zhao, Luping Shi
In this paper, we propose a brain-inspired hybrid neural network (HNN) that introduces temporal binding theory, which originates from neuroscience, into ANNs by integrating spike timing dynamics (via spiking neural networks, SNNs) with reconstructive attention (via ANNs).
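The temporal-binding idea can be illustrated with a minimal sketch (this is not the paper's HNN; the LIF parameters and the synchrony criterion are assumptions for illustration): leaky integrate-and-fire neurons emit spike trains, and features whose spikes co-occur in time are bound into one group.

```python
import numpy as np

def lif_spikes(currents, threshold=1.0, decay=0.9, steps=20):
    """Simulate leaky integrate-and-fire neurons driven by constant currents.
    Returns an (n, steps) binary spike raster."""
    n = len(currents)
    v = np.zeros(n)
    raster = np.zeros((n, steps), dtype=int)
    for t in range(steps):
        v = decay * v + currents   # leaky integration of input current
        fired = v >= threshold
        raster[fired, t] = 1
        v[fired] = 0.0             # reset membrane potential after a spike
    return raster

def bind_by_synchrony(raster):
    """Bind neurons whose spike trains coincide exactly (perfect synchrony)."""
    groups = {}
    for i, row in enumerate(raster):
        groups.setdefault(tuple(row), []).append(i)
    return [g for g in groups.values() if len(g) > 1]

# Neurons 0 and 2 receive identical drive, so they spike in lockstep
raster = lif_spikes(np.array([0.5, 0.2, 0.5, 0.9]))
print(bind_by_synchrony(raster))  # → [[0, 2]]
```

Here synchrony stands in for the binding mechanism; the actual model couples such spike-timing dynamics with an ANN-based reconstructive attention module.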
no code implementations • 14 Jul 2022 • Rong Zhao, Jun-e Feng, Biao Wang
Depending on the initial state set from which both systems start, two kinds of approximate synchronization problems, local approximate synchronization and global approximate synchronization, are proposed for the first time.
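The local/global distinction can be sketched with a toy check on two discrete-time maps (the maps, tolerance, and sampled initial sets here are illustrative assumptions, not the paper's systems): the trajectories approximately synchronize if they end up within a tolerance eps, either from a specific initial pair (local) or from every sampled initial pair (global).

```python
# Illustrative sketch: approximate synchronization of two scalar maps.
def approx_sync(f, g, x0, y0, eps=0.05, steps=200):
    """Iterate both maps and test whether the trajectories end within eps."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = f(x), g(y)
    return abs(x - y) <= eps

f = lambda x: 0.5 * x          # contracts toward 0
g = lambda y: 0.5 * y + 0.01   # contracts toward 0.02, an offset copy of f

# Local: from one particular pair of initial states
print(approx_sync(f, g, 1.0, -1.0))                                   # → True
# "Global" (sampled): from a grid of initial pairs
print(all(approx_sync(f, g, a, b) for a in (-5, 0, 5) for b in (-5, 0, 5)))  # → True
```

Because both maps are contractions, the residual gap settles at 0.02, within eps, so synchronization is approximate rather than exact.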
1 code implementation • 24 Mar 2021 • Faqiang Liu, Rong Zhao
The AFS model achieves a benign accuracy improvement of ~6% on CIFAR-10 and ~10% on CIFAR-100, with robustness comparable to or even stronger than state-of-the-art adversarial training methods.
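For context on the adversarial-training baselines being compared against (this is not the AFS model itself), a standard attack used in such training is FGSM, x_adv = x + eps * sign(grad_x loss), shown here for logistic regression where the input gradient has a closed form; the weights and example point are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """FGSM perturbation for logistic regression with BCE loss."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w           # closed-form d(BCE)/dx for one example
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.3, 0.2]), 1.0
x_adv = fgsm(x, y, w, b)

loss = lambda x: -np.log(sigmoid(x @ w + b))
# The perturbed point incurs strictly higher loss than the clean point
print(loss(x_adv) > loss(x))       # → True
```

Adversarial training minimizes the loss on such perturbed inputs, which typically costs benign accuracy; the abstract's point is that AFS narrows that gap.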
no code implementations • 5 Jun 2020 • Yujie Wu, Rong Zhao, Jun Zhu, Feng Chen, Mingkun Xu, Guoqi Li, Sen Song, Lei Deng, Guanrui Wang, Hao Zheng, Jing Pei, Youhui Zhang, Mingguo Zhao, Luping Shi
We demonstrate the advantages of this model on diverse tasks, including few-shot learning, continual learning, and fault-tolerant learning in neuromorphic vision sensors.
1 code implementation • 20 Dec 2019 • Faqiang Liu, Mingkun Xu, Guoqi Li, Jing Pei, Luping Shi, Rong Zhao
Generative adversarial networks have achieved remarkable performance on various tasks but suffer from training instability.
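One widely used stabilizer for GAN training (named plainly as a common technique, not necessarily this paper's method) is spectral normalization: dividing a discriminator weight matrix by its largest singular value makes the layer 1-Lipschitz, which tames the gradients fed to the generator. A minimal sketch using power iteration:

```python
import numpy as np

def spectral_normalize(W, iters=50):
    """Rescale W by its top singular value, estimated via power iteration."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v              # estimate of the largest singular value
    return W / sigma

W = np.array([[3.0, 0.0],
              [0.5, 0.5]])
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))     # spectral norm is ≈ 1 after rescaling
```

In practice this normalization is applied to every discriminator layer at each training step (e.g. `torch.nn.utils.spectral_norm` in PyTorch), keeping the discriminator's Lipschitz constant bounded throughout training.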