no code implementations • 31 Dec 2023 • Yequan Zhao, Xian Xiao, Xinling Yu, Ziyue Liu, Zhixiong Chen, Geza Kurczveil, Raymond G. Beausoleil, Zheng Zhang
Despite the ultra-high speed of optical neural networks, training a PINN on an optical chip is hard due to (1) the large size of photonic devices, and (2) the lack of scalable optical memory devices to store the intermediate results of back-propagation (BP).
no code implementations • 18 Aug 2023 • Yequan Zhao, Xinling Yu, Zhixiong Chen, Ziyue Liu, Sijia Liu, Zheng Zhang
Backward propagation (BP) is widely used to compute the gradients in neural network training.
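To make the role of BP concrete, here is a minimal sketch (not from the paper) of backward propagation on a tiny two-parameter scalar network, with the analytic gradients checked against a central finite-difference estimate; all names and values are illustrative:

```python
import math

def forward(w1, w2, x):
    h = math.tanh(w1 * x)       # hidden activation
    return w2 * h               # network output

def backprop_grads(w1, w2, x):
    # Forward pass, caching the intermediate h (the storage BP requires).
    h = math.tanh(w1 * x)
    # Backward pass: apply the chain rule from output back to parameters.
    dy = 1.0                    # d(output)/d(output)
    dw2 = dy * h                # gradient w.r.t. w2
    dh = dy * w2                # gradient flowing into h
    dw1 = dh * (1 - h * h) * x  # tanh'(z) = 1 - tanh(z)^2
    return dw1, dw2

w1, w2, x, eps = 0.5, -1.2, 0.8, 1e-6
g1, g2 = backprop_grads(w1, w2, x)
# Finite-difference check of each gradient.
fd1 = (forward(w1 + eps, w2, x) - forward(w1 - eps, w2, x)) / (2 * eps)
fd2 = (forward(w1, w2 + eps, x) - forward(w1, w2 - eps, x)) / (2 * eps)
print(abs(g1 - fd1) < 1e-6 and abs(g2 - fd2) < 1e-6)  # True
```

Note that the backward pass reuses the cached intermediate `h`; the need to store such intermediates is exactly the memory cost that BP-free training methods aim to avoid.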
no code implementations • 25 Sep 2022 • Zhixiong Chen, Wenqiang Yi, Yuanwei Liu, Arumugam Nallanathan
Inspired by this, we define a new objective function, i.e., the weighted scheduled data sample volume, to transform the inexplicit global loss minimization problem into a tractable one for device scheduling, bandwidth allocation, and power control.
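As a rough illustration of such an objective (a sketch only, not the paper's algorithm), the snippet below greedily schedules devices to maximize a weighted scheduled data sample volume under a shared bandwidth budget; the device parameters, weights, and function names are all hypothetical:

```python
# Hypothetical greedy heuristic: pick devices by weighted samples per unit
# bandwidth until the bandwidth budget is exhausted.

def schedule_devices(devices, bandwidth_budget):
    """devices: list of (name, weight, num_samples, bandwidth_cost) tuples."""
    # Rank devices by weighted data volume contributed per unit of bandwidth.
    ranked = sorted(devices, key=lambda d: d[1] * d[2] / d[3], reverse=True)
    chosen, used, volume = [], 0.0, 0.0
    for name, weight, samples, bw in ranked:
        if used + bw <= bandwidth_budget:
            chosen.append(name)
            used += bw
            volume += weight * samples  # weighted scheduled data sample volume
    return chosen, volume

devices = [("d1", 1.0, 400, 2.0), ("d2", 0.5, 900, 3.0),
           ("d3", 2.0, 100, 1.0), ("d4", 1.5, 300, 4.0)]
chosen, vol = schedule_devices(devices, bandwidth_budget=6.0)
print(chosen, vol)  # ['d1', 'd3', 'd2'] 1050.0
```

The point of the surrogate objective is visible here: the weighted volume is directly computable from the scheduling decision, whereas the global training loss is not.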
no code implementations • 15 Aug 2022 • Zhixiong Chen, Wenqiang Yi, Atm S. Alam, Arumugam Nallanathan
To this end, this work considers dynamic task software caching at the MEC server to assist users' task execution.
no code implementations • 20 Apr 2022 • Zhixiong Chen, Wenqiang Yi, Arumugam Nallanathan, Geoffrey Ye Li
On this basis, we maximize the scheduled data size to minimize the global loss function by jointly optimizing the device scheduling, bandwidth allocation, and computation and communication time division policies with the assistance of Lyapunov optimization.