no code implementations • 7 Dec 2024 • Juechu Dong, Boyuan Feng, Driss Guessous, Yanbo Liang, Horace He
We introduce FlexAttention, a novel compiler-driven programming model that allows implementing the majority of attention variants in a few lines of idiomatic PyTorch code.
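As a rough illustration of that programming model, here is a minimal sketch following the public flex_attention documentation (assuming PyTorch ≥ 2.5, which exposes torch.nn.attention.flex_attention); the relative-bias example and the torch.compile note reflect typical documented usage, not the paper's reference implementation.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# An attention variant is expressed as a score_mod callback: it receives the
# raw query-key score plus batch, head, query-index, and key/value-index
# tensors, and returns the modified score.
def relative_bias(score, b, h, q_idx, kv_idx):
    # ALiBi-style relative positional bias; causal masking, softcapping, and
    # other variants follow the same pattern.
    return score + (q_idx - kv_idx)

q = torch.randn(2, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# For real workloads, compiling fuses score_mod into a single attention kernel:
# flex_attention = torch.compile(flex_attention)
out = flex_attention(q, k, v, score_mod=relative_bias)
```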
1 code implementation • 23 Sep 2022 • Boyuan Feng, Tianqi Tang, Yuke Wang, Zhaodong Chen, Zheng Wang, Shu Yang, Yuan Xie, Yufei Ding
In this paper, we propose Faith, an efficient framework for transformer verification on GPUs.
1 code implementation • 14 Sep 2022 • Yuke Wang, Boyuan Feng, Zheng Wang, Tong Geng, Kevin Barker, Ang Li, Yufei Ding
For irregularly sparse and fine-grained GNN workloads, such solutions miss the opportunity to jointly schedule and optimize the computation and communication operations for high performance.
1 code implementation • 11 Dec 2021 • Jiacen Xu, Zhe Zhou, Boyuan Feng, Yufei Ding, Zhou Li
We present a comparative study of the robustness of point cloud semantic segmentation (PCSS).
2 code implementations • 3 Dec 2021 • Yuke Wang, Boyuan Feng, Zheng Wang, Guyue Huang, Yufei Ding
Recently, graph neural networks (GNNs), as the backbone of graph-based machine learning, have demonstrated great success in various domains (e.g., e-commerce).
no code implementations • 26 Nov 2021 • Anbang Wu, Gushu Li, Yuke Wang, Boyuan Feng, Yufei Ding, Yuan Xie
In this paper, we propose a novel training scheme to mitigate such noise-induced gradient vanishing.
1 code implementation • 23 Jun 2021 • Boyuan Feng, Yuke Wang, Tong Geng, Ang Li, Yufei Ding
Over the years, accelerating neural networks with quantization has been widely studied.
1 code implementation • 4 Jan 2021 • Yuke Wang, Boyuan Feng, Yufei Ding
It also has a profound impact on improving the applicability of compute- and memory-intensive CNNs to a broad range of platforms, such as mobile devices, which generally lack computation power and memory.
no code implementations • 22 Sep 2020 • Boyuan Feng, Yuke Wang, Xu Li, Yufei Ding
Graph neural networks (GNNs) have achieved high performance in analyzing graph-structured data and have been widely deployed in safety-critical areas, such as finance and autonomous driving.
no code implementations • 22 Sep 2020 • Boyuan Feng, Yuke Wang, Zheng Wang, Yufei Ding
With the increasing popularity of graph-based learning, graph neural networks (GNNs) have emerged as an essential tool for gaining insights from graphs.
no code implementations • 11 Sep 2020 • Yuke Wang, Boyuan Feng, Xueqiao Peng, Yufei Ding
To clear these hurdles, we propose 3D-Receptive Field (3DRF), an explainable and easy-to-compute metric, to estimate the quality of a CNN architecture and guide the architecture search process.
no code implementations • 9 Jul 2020 • Boyuan Feng, Yuke Wang, Xu Li, Shu Yang, Xueqiao Peng, Yufei Ding
With the increasing popularity of graph-based learning, Graph Neural Networks (GNNs) have attracted considerable attention from both research and industry because of their high accuracy.
1 code implementation • 11 Jun 2020 • Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, Yufei Ding
As an emerging trend in graph-based deep learning, Graph Neural Networks (GNNs) excel at generating high-quality node feature vectors (embeddings).
Distributed, Parallel, and Cluster Computing
no code implementations • 26 Aug 2019 • Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding
As a promising solution to boost the performance of distance-related algorithms (e.g., K-means and KNN), FPGA-based acceleration has attracted considerable attention, but it also comes with numerous challenges.
Distributed, Parallel, and Cluster Computing • Programming Languages
no code implementations • ICLR 2019 • Boyuan Feng, Kun Wan, Shu Yang, Yufei Ding
Convolutional Neural Networks (CNNs) have achieved tremendous success in many computer vision tasks, which suggests promising prospects for deploying CNNs on mobile platforms.
no code implementations • ICLR 2019 • Kun Wan, Boyuan Feng, Shu Yang, Yufei Ding
In this paper, we are the first in the field to consider how to craft an effective sparse kernel design by eliminating the large design space.
1 code implementation • 28 Sep 2018 • Lingwei Xie, Song He, Shu Yang, Boyuan Feng, Kun Wan, Zhongnan Zhang, Xiaochen Bo, Yufei Ding
In this paper, we propose a novel domain-adversarial multi-task framework for integrating shared knowledge from multiple domains.
no code implementations • ICLR 2019 • Kun Wan, Boyuan Feng, Lingwei Xie, Yufei Ding
The insights attained here could potentially be applied as a general approach for boosting the accuracy of other CNN models with similar nonlinear connections.