Search Results for author: Lingchuan Meng

Found 4 papers, 2 papers with code

Restructurable Activation Networks

1 code implementation • 17 Aug 2022 • Kartikeya Bhardwaj, James Ward, Caleb Tung, Dibakar Gope, Lingchuan Meng, Igor Fedorov, Alex Chalfin, Paul Whatmough, Danny Loh

We propose a new paradigm called Restructurable Activation Networks (RANs), which manipulates the amount of non-linearity in models to improve their hardware-awareness and efficiency.

Object Detection
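The paper itself defines how activations are restructured; as a rough illustration of what "manipulating the amount of non-linearity" can mean in practice, the hypothetical PyTorch block below (my own sketch, not the authors' code) swaps its ReLU for an identity, turning two stacked convolutions into a purely linear chain that could later be fused into a single convolution for efficient inference.

```python
# Minimal sketch (not the RAN implementation): a block whose non-linearity can be
# reduced. With the identity activation, the two convolutions form a linear chain
# that is amenable to fusion into one convolution at deployment time.
import torch
import torch.nn as nn

class ToggleActBlock(nn.Module):  # hypothetical name
    def __init__(self, channels, nonlinear=True):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # Toggling this is one crude way to vary the amount of non-linearity.
        self.act = nn.ReLU(inplace=True) if nonlinear else nn.Identity()

    def forward(self, x):
        return self.conv2(self.act(self.conv1(x)))

x = torch.randn(1, 16, 32, 32)
print(ToggleActBlock(16, nonlinear=False)(x).shape)  # torch.Size([1, 16, 32, 32])
```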

Armour: Generalizable Compact Self-Attention for Vision Transformers

no code implementations • 3 Aug 2021 • Lingchuan Meng

Attention-based transformer networks have demonstrated promising potential as their applications extend from natural language processing to vision.

Collapsible Linear Blocks for Super-Efficient Super Resolution

3 code implementations • 17 Mar 2021 • Kartikeya Bhardwaj, Milos Milosavljevic, Liam O'Neil, Dibakar Gope, Ramon Matas, Alex Chalfin, Naveen Suda, Lingchuan Meng, Danny Loh

Our results highlight the challenges faced by super resolution on AI accelerators and demonstrate that SESR is significantly faster (e.g., 6x-8x higher FPS) than existing models on a mobile NPU.

4k 8k
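As a rough sketch of the "collapsible linear blocks" idea (my own illustration under assumed shapes, not the released SESR code): two convolutions with no activation between them can be collapsed analytically into one small convolution at inference, so an over-parameterised training-time structure adds nothing to the run-time cost.

```python
# Minimal sketch: a wide 3x3 convolution followed by a 1x1 projection, with no
# non-linearity in between, collapses into a single narrow 3x3 convolution.
import torch
import torch.nn.functional as F

cin, cmid, cout = 3, 64, 3
w_expand = torch.randn(cmid, cin, 3, 3)    # over-parameterised 3x3 conv (training form)
w_project = torch.randn(cout, cmid, 1, 1)  # 1x1 projection back down

# Collapse: the 1x1 weights act as a (cout, cmid) channel-mixing matrix over the 3x3 kernels.
w_collapsed = torch.einsum('om,mikl->oikl',
                           w_project.squeeze(-1).squeeze(-1), w_expand)

x = torch.randn(1, cin, 16, 16)
y_two_layer = F.conv2d(F.conv2d(x, w_expand, padding=1), w_project)
y_collapsed = F.conv2d(x, w_collapsed, padding=1)
print(torch.allclose(y_two_layer, y_collapsed, atol=1e-3))  # True, up to float32 error
```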

Efficient Winograd Convolution via Integer Arithmetic

no code implementations • 7 Jan 2019 • Lingchuan Meng, John Brothers

Quantized neural networks can effectively reduce model sizes and improve inference speed, which leads to a wide variety of kernels and hardware accelerators that work with integer data.
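As a self-contained sketch of Winograd convolution in integer arithmetic (my own example, not the paper's kernels, which target quantized networks on hardware accelerators): the 1-D F(2,3) algorithm below scales the filter transform by 2 so every intermediate value stays an integer, then rescales the output at the end.

```python
# Minimal sketch: 1-D Winograd F(2,3) computed entirely with integers by using
# 2*G as the filter transform (removing the 1/2 factors) and dividing the result by 2.
import numpy as np

BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=np.int64)   # input transform
G2 = np.array([[2, 0, 0],
               [1, 1, 1],
               [1, -1, 1],
               [0, 0, 2]], dtype=np.int64)        # 2 * G, all entries integer
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=np.int64)   # output transform

d = np.array([10, -3, 7, 4], dtype=np.int64)      # 4 input samples (e.g. quantized activations)
g = np.array([2, 5, -1], dtype=np.int64)          # 3-tap filter (e.g. quantized weights)

y = (AT @ ((G2 @ g) * (BT @ d))) // 2             # Winograd output, rescaled (exact division)
ref = np.array([np.dot(d[i:i + 3], g) for i in range(2)])  # direct correlation for comparison
print(y, ref)  # [-2 25] [-2 25]
```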
