1 code implementation • 28 Nov 2022 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
In this work, we establish PIDS, a novel paradigm that jointly explores point interactions and point dimensions for semantic segmentation on point cloud data.
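A minimal sketch of what a joint search over point interactions and point dimensions can look like; the operator names and channel widths below are illustrative assumptions, not the actual PIDS search space.

```python
# Sketch (assumed options, not the official PIDS code): each layer jointly
# chooses a point-interaction operator and a channel dimension.
import random

INTERACTIONS = ["fixed_kernel", "attentive_kernel", "mlp_mixing"]  # assumed operators
DIMENSIONS = [32, 64, 128]                                          # assumed widths
NUM_LAYERS = 4

def sample_candidate():
    """Sample one architecture: an (interaction, dimension) pair per layer."""
    return [(random.choice(INTERACTIONS), random.choice(DIMENSIONS))
            for _ in range(NUM_LAYERS)]

def search_space_size():
    """The joint space grows multiplicatively in interactions and dimensions."""
    return (len(INTERACTIONS) * len(DIMENSIONS)) ** NUM_LAYERS

if __name__ == "__main__":
    print("candidate:", sample_candidate())
    print("search space size:", search_space_size())
```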
2 code implementations • 14 Jul 2022 • Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen
To overcome the challenges of data multi-modality and architecture heterogeneity in the recommendation domain, NASRec establishes a large supernet (i.e., the search space) to search for full architectures.
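A hedged sketch of the supernet idea: every layer holds several heterogeneous candidate blocks, and a full architecture is obtained by selecting one block per layer. The block names are placeholders, not NASRec's actual building blocks.

```python
# Sketch (not the NASRec implementation): a supernet with heterogeneous
# candidate blocks per layer, from which full architectures are sampled.
import random

CANDIDATE_BLOCKS = ["mlp", "dot_product_interaction", "attention", "skip"]  # assumed
NUM_LAYERS = 3

class Supernet:
    def __init__(self, num_layers, candidates):
        # every layer of the supernet contains every candidate block
        self.layers = [list(candidates) for _ in range(num_layers)]

    def sample_subnet(self):
        """Pick one block per layer to form a full architecture."""
        return [random.choice(layer) for layer in self.layers]

if __name__ == "__main__":
    supernet = Supernet(NUM_LAYERS, CANDIDATE_BLOCKS)
    print(supernet.sample_subnet())
```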
no code implementations • 30 Mar 2022 • Jingyu Pan, Chen-Chia Chang, Zhiyao Xie, Ang Li, Minxue Tang, Tunhou Zhang, Jiang Hu, Yiran Chen
To further strengthen the results, we co-design a customized ML model, FLNet, and its personalization scheme under the decentralized training scenario.
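For context, a generic sketch of one common personalization pattern in decentralized training (a shared backbone averaged across clients, a per-client head kept local); this is an illustration only and not the FLNet personalization described in the paper.

```python
# Generic personalization sketch: average only the shared backbone weights,
# keep each client's head local. Not the paper's specific mechanism.
import numpy as np

def federated_round(backbones, heads, grads_backbone, grads_head, lr=0.1):
    """One round: local SGD on all weights, then average only the backbone."""
    for i in range(len(backbones)):
        backbones[i] = backbones[i] - lr * grads_backbone[i]
        heads[i] = heads[i] - lr * grads_head[i]      # stays personal
    shared = np.mean(backbones, axis=0)               # aggregate shared part
    backbones = [shared.copy() for _ in backbones]
    return backbones, heads

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    backbones = [rng.normal(size=4) for _ in range(3)]
    heads = [rng.normal(size=2) for _ in range(3)]
    gb = [rng.normal(size=4) for _ in range(3)]
    gh = [rng.normal(size=2) for _ in range(3)]
    backbones, heads = federated_round(backbones, heads, gb, gh)
    print(backbones[0], heads[0])
```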
no code implementations • 29 Sep 2021 • Tunhou Zhang, Mingyuan Ma, Feng Yan, Hai Li, Yiran Chen
MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power.
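A minimal numpy sketch of the Neighbor-Kernel attention idea: each neighbor point produces a softmax weighting over kernel points that rescales their contributions. The exact attention form and shapes here are assumptions for illustration.

```python
# Sketch of neighbor-to-kernel-point attention (assumed form, not MAKPConv's code).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighbor_kernel_attention(neighbor_feats, kernel_weights, proj):
    """
    neighbor_feats: (N, C_in)        features of the N neighbors of a center point
    kernel_weights: (K, C_in, C_out) one weight tensor per kernel point
    proj:           (C_in, K)        projection scoring kernel points per neighbor
    """
    scores = neighbor_feats @ proj        # (N, K) neighbor-to-kernel scores
    attn = softmax(scores, axis=-1)       # (N, K) re-calibration weights
    # contribution of kernel point k to neighbor n, scaled by attention
    return np.einsum("nk,nc,kco->no", attn, neighbor_feats, kernel_weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    out = neighbor_kernel_attention(rng.normal(size=(8, 16)),
                                    rng.normal(size=(5, 16, 32)),
                                    rng.normal(size=(16, 5)))
    print(out.shape)  # (8, 32)
```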
no code implementations • 3 Dec 2020 • Chen-Chia Chang, Jingyu Pan, Tunhou Zhang, Zhiyao Xie, Jiang Hu, Weiyi Qi, Chun-Wei Lin, Rongjian Liang, Joydeep Mitra, Elias Fallon, Yiran Chen
The rise of machine learning has inspired a boom of applications in electronic design automation (EDA), helping improve the degree of automation in chip design.
no code implementations • 8 Jul 2020 • Hsin-Pai Cheng, Tunhou Zhang, Yixing Zhang, Shi-Yu Li, Feng Liang, Feng Yan, Meng Li, Vikas Chandra, Hai Li, Yiran Chen
To preserve graph correlation information in the encoding, we propose NASGEM, which stands for Neural Architecture Search via Graph Embedding Method.
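A hedged illustration of representing architecture graphs by embeddings that reflect graph structure (here, a Laplacian spectrum plus cosine similarity); this is a generic stand-in, not the embedding used by NASGEM.

```python
# Generic graph-embedding sketch: embed an architecture DAG's skeleton via its
# Laplacian spectrum and compare architectures by cosine similarity.
import numpy as np

def graph_embedding(adj, dim=4):
    """Embed a DAG's undirected skeleton by its smallest Laplacian eigenvalues."""
    a = np.maximum(adj, adj.T)                  # symmetrize
    lap = np.diag(a.sum(axis=1)) - a            # graph Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(lap))  # spectrum encodes connectivity
    emb = np.zeros(dim)
    emb[:min(dim, len(eigvals))] = eigvals[:dim]
    return emb

def similarity(emb_a, emb_b):
    """Cosine similarity between two architecture embeddings."""
    return float(emb_a @ emb_b /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-12))

if __name__ == "__main__":
    chain = np.eye(4, k=1)                      # 0->1->2->3
    skip = chain.copy(); skip[0, 3] = 1         # add a skip connection
    print(similarity(graph_embedding(chain), graph_embedding(skip)))
```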
1 code implementation • 21 Nov 2019 • Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen
Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.
1 code implementation • 19 Jun 2019 • Hsin-Pai Cheng, Tunhou Zhang, Yukun Yang, Feng Yan, Shi-Yu Li, Harris Teague, Hai Li, Yiran Chen
Designing neural architectures for edge devices is subject to constraints of accuracy, inference latency, and computational cost.
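One simple way to fold such constraints into a single search score is to reward accuracy while penalizing candidates that exceed latency or compute budgets; the penalty form and budget values below are illustrative assumptions, not the paper's exact objective.

```python
# Sketch of a constraint-aware scoring function for edge-oriented architecture
# search (assumed weighting and budgets, for illustration only).
def edge_score(accuracy, latency_ms, macs_m,
               latency_budget_ms=20.0, macs_budget_m=300.0):
    """Reward accuracy; penalize exceeding latency or compute (MACs) budgets."""
    latency_penalty = max(0.0, latency_ms / latency_budget_ms - 1.0)
    compute_penalty = max(0.0, macs_m / macs_budget_m - 1.0)
    return accuracy - 0.5 * latency_penalty - 0.5 * compute_penalty

if __name__ == "__main__":
    print(edge_score(accuracy=0.74, latency_ms=18.0, macs_m=280.0))  # within budget
    print(edge_score(accuracy=0.76, latency_ms=30.0, macs_m=400.0))  # over budget
```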