1 code implementation • 11 Jul 2024 • Wanling Gao, Yunyou Huang, Dandan Cui, Zhuoming Yu, Wenjing Liu, Xiaoshuang Liang, Jiahui Zhao, Jiyue Xie, Hao Li, Li Ma, Ning Ye, Yumiao Kang, Dingfeng Luo, Peng Pan, Wei Huang, Zhongmou Liu, Jizhong Hu, Gangyuan Zhao, Chongrong Jiang, Fan Huang, Tianyi Wei, Suqin Tang, Bingjie Xia, Zhifei Zhang, Jianfeng Zhan
We envision DC-AI RCTs and VC-MedAI as pivotal advancements: they present an innovative and transformative evaluation methodology for AI models in clinical practice, offer a preclinical-like setting that mirrors conventional medicine, and reshape development paradigms in a cost-effective, fast-iterative manner.
1 code implementation • 20 Jun 2024 • Zhengxin Yang, Wanling Gao, Luzhou Peng, Yunyou Huang, Fei Tang, Jianfeng Zhan
Designing and optimizing neural network architectures typically requires extensive expertise, starting with handcrafted designs and then manual or automated refinement.
2 code implementations • 3 Jan 2024 • Fanda Fan, Chunjie Luo, Wanling Gao, Jianfeng Zhan
To establish a unified evaluation framework for video generation tasks, our benchmark includes 11 metrics spanning four dimensions to assess algorithm performance.
no code implementations • 5 Sep 2023 • Fei Tang, Wanling Gao, Luzhou Peng, Jianfeng Zhan
Instead of a collection of blended questions, AGIBench focuses on three typical ability branches and adopts a four-tuple <ability branch, knowledge, difficulty, modal> to label the attributes of each question.
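The four-tuple labeling can be pictured with a small sketch; the field names and example values below are illustrative assumptions, not AGIBench's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the four-tuple <ability branch, knowledge,
# difficulty, modal> label described above; field names and values are
# illustrative only.
@dataclass
class QuestionLabel:
    ability_branch: str   # e.g. one of the three typical ability branches
    knowledge: str        # knowledge area the question draws on
    difficulty: str       # e.g. "easy", "medium", "hard"
    modal: str            # e.g. "text-only" or "image+text"

example = QuestionLabel(
    ability_branch="reasoning",
    knowledge="physics",
    difficulty="medium",
    modal="text-only",
)
print(example)
```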
no code implementations • 11 Aug 2023 • Yatao Li, Wanling Gao, Lei Wang, Lixin Sun, Zun Wang, Jianfeng Zhan
This suite of metrics has demonstrated a better ability to assess a model's performance in real-world scientific applications than traditional AI benchmarking methodologies.
no code implementations • 17 Jul 2023 • Hongxiao Li, Wanling Gao, Lei Wang, Jianfeng Zhan
The study of Lara's expressive ability reports that it can represent relational algebra and most linear algebra operations.
no code implementations • 31 Jan 2023 • Xu Wen, Wanling Gao, Anzheng Li, Lei Wang, Zihan Jiang, Jianfeng Zhan
Without a unified framework, hybrid deployments of deep learning (DL) and classical machine learning (CML) also suffer from severe performance and portability issues.
no code implementations • 25 Dec 2022 • Zhengxin Yang, Wanling Gao, Chunjie Luo, Lei Wang, Fei Tang, Xu Wen, Jianfeng Zhan
The study reveals a counterintuitive finding: deep learning inference quality fluctuates with inference time.
no code implementations • 21 Dec 2022 • Hongxiao Li, Wanling Gao, Lei Wang, Jianfeng Zhan
This article presents a unified computation model with generalized expression ability and a concise set of primitive operators for programming high-level algorithms.
no code implementations • 29 Sep 2021 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
Specifically, it calculates the mean in the same way as Batch Normalization, thereby retaining Batch Normalization's advantage.
1 code implementation • 24 Mar 2021 • Chunjie Luo, Jianfeng Zhan, Tianshu Hao, Lei Wang, Wanling Gao
The attention branch is gated using the Sigmoid function and multiplied by the trunk branch's feature map.
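A minimal sketch of this gating pattern in PyTorch; the layer shapes and how the two branches are built are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedAttentionBlock(nn.Module):
    """Illustrative two-branch block: sigmoid-gated attention rescales the
    trunk branch's feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attention = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        trunk_out = self.trunk(x)
        # Gate the attention branch with a sigmoid, then multiply it with
        # the trunk branch's feature map.
        gate = torch.sigmoid(self.attention(x))
        return trunk_out * gate

# Usage: a random feature map of shape (batch, channels, height, width).
y = GatedAttentionBlock(16)(torch.randn(2, 16, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```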
no code implementations • 25 Feb 2021 • Zihan Jiang, Wanling Gao, Fei Tang, Xingwang Xiong, Lei Wang, Chuanxin Lan, Chunjie Luo, Hongxiao Li, Jianfeng Zhan
Recent years have witnessed a trend of applying large-scale distributed deep learning algorithms (HPC AI) in both business and scientific computing, with the goal of speeding up training while achieving state-of-the-art quality.
Image Classification • Performance
no code implementations • 14 May 2020 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
To build light-weight networks, we propose a new normalization, Fine-grained Batch Normalization (FBN).
no code implementations • 7 May 2020 • Chunjie Luo, Xiwen He, Jianfeng Zhan, Lei Wang, Wanling Gao, Jiahui Dai
Thanks to increasing amounts of data and compute resources, deep learning has achieved many successes in various domains.
no code implementations • 6 May 2020 • Wanling Gao, Fei Tang, Jianfeng Zhan, Xu Wen, Lei Wang, Zheng Cao, Chuanxin Lan, Chunjie Luo, Xiaoli Liu, Zihan Jiang
We formalize a real-world application scenario as a Directed Acyclic Graph-based model and propose the rules to distill it into a permutation of essential AI and non-AI tasks, which we call a scenario benchmark.
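A hypothetical sketch of linearizing such a DAG-based scenario model into a permutation of tasks via topological ordering; the task names and the ordering rule are illustrative, not the distillation rules defined in the paper.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Illustrative scenario DAG: each task maps to the set of tasks that must
# complete before it (its predecessors).
scenario_dag = {
    "recommend (AI)": {"query parsing (non-AI)", "user profiling (AI)"},
    "ranking (AI)": {"recommend (AI)"},
    "render results (non-AI)": {"ranking (AI)"},
}

# A topological order gives one valid permutation of essential AI and
# non-AI tasks that respects the dependencies encoded in the DAG.
permutation = list(TopologicalSorter(scenario_dag).static_order())
print(permutation)
```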
no code implementations • 30 Apr 2020 • Fei Tang, Wanling Gao, Jianfeng Zhan, Chuanxin Lan, Xu Wen, Lei Wang, Chunjie Luo, Jiahui Dai, Zheng Cao, Xingwang Xiong, Zihan Jiang, Tianshu Hao, Fanda Fan, Fan Zhang, Yunyou Huang, Jianan Chen, Mengjia Du, Rui Ren, Chen Zheng, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
We use real-world benchmarks to cover the factor space that most significantly impacts the learning dynamics.
no code implementations • 12 Mar 2020 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
To alleviate the problem caused by small batch sizes, extended batch normalization computes the standard deviation along the (N, C, H, W) dimensions, thus enlarging the number of samples from which the standard deviation is computed.
Ranked #61 on Image Classification on STL-10
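A rough reconstruction of that computation in PyTorch, without the learnable affine parameters; this is an illustrative sketch based on the sentence above, not the authors' reference implementation.

```python
import torch

def extended_batch_norm(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Sketch of extended batch normalization for an (N, C, H, W) tensor:
    the mean is per channel, as in standard batch normalization, while the
    standard deviation is computed over all (N, C, H, W) elements to
    enlarge the number of samples it is estimated from."""
    mean = x.mean(dim=(0, 2, 3), keepdim=True)  # per-channel mean, as in BN
    std = x.std() + eps                         # std over all N, C, H, W elements
    return (x - mean) / std

y = extended_batch_norm(torch.randn(4, 8, 16, 16))
print(y.shape)  # torch.Size([4, 8, 16, 16])
```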
no code implementations • 17 Feb 2020 • Wanling Gao, Fei Tang, Jianfeng Zhan, Chuanxin Lan, Chunjie Luo, Lei Wang, Jiahui Dai, Zheng Cao, Xingwang Xiong, Zihan Jiang, Tianshu Hao, Fanda Fan, Xu Wen, Fan Zhang, Yunyou Huang, Jianan Chen, Mengjia Du, Rui Ren, Chen Zheng, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Gang Lu, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
An end-to-end benchmark is a distillation of the essential attributes of an industry-scale application.
no code implementations • 2 Dec 2019 • Jianfeng Zhan, Lei Wang, Wanling Gao, Rui Ren
This paper outlines BenchCouncil's view on the challenges, rules, and vision of benchmarking modern workloads like Big Data, AI or machine learning, and Internet Services.
no code implementations • 13 Aug 2019 • Wanling Gao, Fei Tang, Lei Wang, Jianfeng Zhan, Chuanxin Lan, Chunjie Luo, Yunyou Huang, Chen Zheng, Jiahui Dai, Zheng Cao, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Tong Wu, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
On the basis of the AIBench framework, and abstracting real-world data sets and workloads from one of the top e-commerce providers, we design and implement the first end-to-end Internet service AI benchmark. It contains the primary modules on the critical paths of an industry-scale application and is scalable to deploy on different cluster scales.
no code implementations • 6 Aug 2019 • Tianshu Hao, Yunyou Huang, Xu Wen, Wanling Gao, Fan Zhang, Chen Zheng, Lei Wang, Hainan Ye, Kai Hwang, Zujie Ren, Jianfeng Zhan
In edge computing scenarios, the distribution of data and the collaboration of workloads across different layers raise serious performance, privacy, and security concerns.
Performance • Distributed, Parallel, and Cluster Computing
no code implementations • 1 Aug 2019 • Yunyou Huang, Nana Wang, Wanling Gao, Xiaoxu Guo, Cheng Huang, Tianshu Hao, Jianfeng Zhan
As a consequence, deep learning methods are difficult to popularize and apply in real smart grid environments.
no code implementations • 27 Jul 2019 • Zihan Jiang, Wanling Gao, Lei Wang, Xingwang Xiong, Yuchen Zhang, Xu Wen, Chunjie Luo, Hainan Ye, Yunquan Zhang, Shengzhong Feng, Kenli Li, Weijia Xu, Jianfeng Zhan
In this paper, we propose HPC AI500, a benchmark suite for evaluating HPC systems that run scientific DL workloads.
no code implementations • 23 Feb 2018 • Wanling Gao, Jianfeng Zhan, Lei Wang, Chunjie Luo, Daoyi Zheng, Xu Wen, Rui Ren, Chen Zheng, Xiwen He, Hainan Ye, Haoning Tang, Zheng Cao, Shujie Zhang, Jiahui Dai
On the basis of our previous work, which identifies eight data motifs that take up most of the run time of a wide variety of big data and AI workloads, we propose a scalable benchmarking methodology that uses the combination of one or more data motifs to represent the diversity of big data and AI workloads.
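A hypothetical sketch of representing a workload as a combination of data motifs; the motif functions and the composition rule below are illustrative stand-ins, not the actual motif implementations or methodology from the benchmark.

```python
import numpy as np

# Toy stand-ins for data motifs; the real benchmark's eight motifs are far
# richer than these examples.
def matrix_multiply_motif(data):
    return data @ data.T

def sort_motif(data):
    return np.sort(data, axis=None)

def sampling_motif(data):
    rng = np.random.default_rng(0)
    return rng.choice(data.ravel(), size=min(100, data.size), replace=False)

def run_workload(motifs, data):
    """Run a workload represented as a combination of one or more motifs."""
    return [motif(data) for motif in motifs]

results = run_workload([matrix_multiply_motif, sort_motif, sampling_motif],
                       np.random.default_rng(1).random((64, 64)))
print([r.shape for r in results])
```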