1 code implementation • 28 Sep 2022 • Zhiqiang Que, Hongxiang Fan, Marcus Loo, Michaela Blott, Maurizio Pierini, Alexander D Tapper, Wayne Luk
In addition, we introduce an outer-product-based matrix multiplication approach, enhanced by strength reduction, for low-latency designs.
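The outer-product formulation computes C = A·B as a sum of rank-1 updates: each step combines one column of A with one row of B and touches every element of C at once, which exposes wide parallelism for a hardware pipeline. A minimal NumPy sketch of the idea (illustrative only, not the paper's FPGA implementation):

```python
import numpy as np

def outer_product_matmul(A, B):
    """Compute C = A @ B as a sum of rank-1 outer products.

    Iteration k pairs column k of A with row k of B and updates
    the whole output matrix in one step -- the structure that an
    outer-product hardware design parallelizes.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(k):
        C += np.outer(A[:, i], B[i, :])  # rank-1 update of all of C
    return C

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(outer_product_matmul(A, B), A @ B)
```

By contrast with the usual inner-product (dot-product-per-element) schedule, each rank-1 update is independent of the output element ordering, so all multiply-accumulate units can work on the same step concurrently.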
no code implementations • 20 Sep 2022 • Hongxiang Fan, Thomas Chau, Stylianos I. Venieris, Royson Lee, Alexandros Kouris, Wayne Luk, Nicholas D. Lane, Mohamed S. Abdelfattah
By jointly optimizing the algorithm and hardware, our FPGA-based butterfly accelerator achieves a 14.2x to 23.2x speedup over state-of-the-art accelerators when normalized to the same computational budget.
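A butterfly structure replaces a dense n×n linear layer with log2(n) sparse stages, each mixing elements in pairs, cutting the cost from O(n²) to O(n log n) multiply-accumulates. A hedged sketch of one such transform (the 2×2 mixing coefficients here are placeholders, not the accelerator's learned values):

```python
import numpy as np

def butterfly_transform(x, twiddles):
    """Apply log2(n) butterfly stages to a length-n vector x (n a power of 2).

    twiddles[s] is a (a, b, c, d) tuple of 2x2 mixing coefficients shared
    by every pair at stage s. With suitable coefficients this family
    includes FFT-like transforms; the values here are illustrative.
    """
    n = len(x)
    stages = int(np.log2(n))
    y = np.asarray(x, dtype=float).copy()
    for s in range(stages):
        stride = 1 << s
        out = y.copy()
        a, b, c, d = twiddles[s]
        for i in range(n):
            if i & stride == 0:          # i is the "upper" element of its pair
                j = i + stride           # partner element at this stage
                out[i] = a * y[i] + b * y[j]
                out[j] = c * y[i] + d * y[j]
        y = out
    return y
```

With identity coefficients (1, 0, 0, 1) every stage is a no-op, which makes the pairing logic easy to check; with (1, 1, 1, -1) at each stage the transform computes an unnormalized Hadamard transform.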
no code implementations • 29 Aug 2022 • Kang Gao, Perukrishnen Vytelingum, Stephen Weston, Wayne Luk, Ce Guo
It is shown that the machine learning surrogate learned in the proposed method is an accurate proxy of the true agent-based market simulation.
no code implementations • 29 Aug 2022 • Kang Gao, Perukrishnen Vytelingum, Stephen Weston, Wayne Luk, Ce Guo
We scrutinise the market dynamics during the simulated flash crash and show that the simulated dynamics are consistent with what happened in historical flash crash scenarios.
no code implementations • 24 Nov 2021 • Hongxiang Fan, Martin Ferianc, Zhiqiang Que, He Li, Shuanglong Liu, Xinyu Niu, Wayne Luk
Recent advances in algorithm-hardware co-design for deep neural networks (DNNs) have demonstrated their potential in automatically designing neural architectures and hardware designs.
1 code implementation • 26 Jun 2021 • Zhiqiang Que, Erwei Wang, Umar Marikar, Eric Moreno, Jennifer Ngadiuba, Hamza Javed, Bartłomiej Borzyszkowski, Thea Aarrestad, Vladimir Loncar, Sioni Summers, Maurizio Pierini, Peter Y Cheung, Wayne Luk
The proposed approach has been evaluated based on two LSTM models, targeting a ZYNQ 7045 FPGA and a U250 FPGA.
no code implementations • 4 Jun 2021 • Martin Ferianc, Zhiqiang Que, Hongxiang Fan, Wayne Luk, Miguel Rodrigues
To further improve the overall algorithmic-hardware performance, a co-design framework is proposed to explore the most fitting algorithmic-hardware configurations for Bayesian RNNs.
no code implementations • 12 May 2021 • Hongxiang Fan, Martin Ferianc, Miguel Rodrigues, HongYu Zhou, Xinyu Niu, Wayne Luk
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision making, and recommendation systems.
no code implementations • 6 Sep 2020 • Seyedeh Niusha Alavi Foumani, Ce Guo, Wayne Luk
In this project, we have successfully designed, implemented, deployed and tested a novel FPGA-accelerated algorithm for neural network training.
no code implementations • 6 Sep 2020 • Seyedeh Niusha Alavi Foumani, Ce Guo, Wayne Luk
Meanwhile, this method avoids matrix inversion, which is challenging to implement in hardware.
no code implementations • 28 Jan 2020 • Yang Chu, Wayne Luk, Dan Goodman
By combining the unreliable innate response with sparse reinforcement rewards, an accurate auditory space map, which is hard to achieve with either kind of supervision alone, can eventually be learned.
1 code implementation • 27 Jun 2019 • Shuanglong Liu, Ringo S. W. Chu, Xiwei Wang, Wayne Luk
Hyperspectral image (HSI) classification has been widely adopted in applications involving remote sensing imagery analysis which require high classification accuracy and real-time processing speed.
1 code implementation • 27 Jun 2019 • Ringo S. W. Chu, Ho-Cheung Ng, Xiwei Wang, Wayne Luk
Hyperspectral images (HSIs) can distinguish materials through their large number of spectral bands, which makes them widely adopted in remote sensing applications and beneficial for high-accuracy land cover classification.
no code implementations • 21 Jan 2019 • Erwei Wang, James J. Davis, Ruizhe Zhao, Ho-Cheung Ng, Xinyu Niu, Wayne Luk, Peter Y. K. Cheung, George A. Constantinides
Deep neural networks have proven to be particularly effective in visual and audio recognition tasks.
1 code implementation • 23 Nov 2018 • Ruizhe Zhao, Wayne Luk
Efficient inference of Convolutional Neural Networks has recently become a thriving research topic.