2 code implementations • 3 Jan 2024 • Fanda Fan, Chunjie Luo, Wanling Gao, Jianfeng Zhan
To establish a unified evaluation framework for video generation tasks, our benchmark includes 11 metrics spanning four dimensions to assess algorithm performance.
no code implementations • 19 Dec 2023 • Chunjie Luo, Fei Luo, Yusen Wang, Enxu Zhao, Chunxia Xiao
To model the deformation more accurately, we propose to initialize an estimated 3D clothed human in the canonical space, as it is easier for deformation fields to learn from the clothed human than from SMPL.
no code implementations • 5 Sep 2023 • Fanda Fan, Chaoxu Guo, Litong Gong, Biao Wang, Tiezheng Ge, Yuning Jiang, Chunjie Luo, Jianfeng Zhan
Our pipeline benefits from bidirectional learning of the mask modeling and thus can employ a hybrid strategy of infilling and interpolation when generating sparse frames.
no code implementations • 25 Dec 2022 • Zhengxin Yang, Wanling Gao, Chunjie Luo, Lei Wang, Fei Tang, Xu Wen, Jianfeng Zhan
The study reveals a counterintuitive finding: deep learning inference quality fluctuates with inference time.
no code implementations • 29 Sep 2021 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
Specifically, it calculates the mean in the same way as Batch Normalization, thereby retaining Batch Normalization's advantage.
1 code implementation • 24 Mar 2021 • Chunjie Luo, Jianfeng Zhan, Tianshu Hao, Lei Wang, Wanling Gao
The attention branch is gated by the Sigmoid function and multiplied element-wise with the trunk branch's feature maps.
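The gating described above can be sketched in a few lines. This is a minimal illustration, not the paper's architecture: shapes are simplified to a flat feature vector, and the attention logits are assumed given rather than produced by a real attention branch.

```python
import math

def sigmoid(v):
    # Squash a logit into (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def gate_trunk(trunk, attention):
    """Sketch of Sigmoid gating (simplified to flat vectors): the
    attention branch's output is squashed into (0, 1) and multiplied
    element-wise with the trunk branch's feature map."""
    return [t * sigmoid(a) for t, a in zip(trunk, attention)]

# Large positive attention logits pass features through almost
# unchanged; large negative logits suppress them toward zero.
gated = gate_trunk([2.0, 2.0], [10.0, -10.0])
```

In the first position the gate is close to 1, so the trunk feature passes through nearly intact; in the second it is close to 0, so the feature is almost entirely suppressed.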
no code implementations • 25 Feb 2021 • Zihan Jiang, Wanling Gao, Fei Tang, Xingwang Xiong, Lei Wang, Chuanxin Lan, Chunjie Luo, Hongxiao Li, Jianfeng Zhan
Recent years have witnessed a trend of applying large-scale distributed deep learning algorithms (HPC AI) in both business and scientific computing, with the goal of speeding up training while achieving state-of-the-art quality.
no code implementations • 17 Aug 2020 • Yuan Liang, Yange Guo, Yanxia Gong, Chunjie Luo, Jianfeng Zhan, Yunyou Huang
The goal is to build a machine learning model from data sets distributed across multiple devices, so-called isolated data islands, while keeping the data secure and private.
no code implementations • 14 May 2020 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
To build light-weight networks, we propose a new normalization, Fine-grained Batch Normalization (FBN).
no code implementations • 7 May 2020 • Chunjie Luo, Xiwen He, Jianfeng Zhan, Lei Wang, Wanling Gao, Jiahui Dai
Thanks to increasing amounts of data and compute resources, deep learning has achieved success in many domains.
no code implementations • 6 May 2020 • Wanling Gao, Fei Tang, Jianfeng Zhan, Xu Wen, Lei Wang, Zheng Cao, Chuanxin Lan, Chunjie Luo, Xiaoli Liu, Zihan Jiang
We formalize a real-world application scenario as a Directed Acyclic Graph-based model and propose the rules to distill it into a permutation of essential AI and non-AI tasks, which we call a scenario benchmark.
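Distilling a DAG-based scenario model into a permutation of its tasks amounts to a topological ordering of the graph. The sketch below illustrates the idea with the standard-library `graphlib`; the task names and dependencies are hypothetical, invented only to show the mechanics, not taken from the paper.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical scenario DAG: each task (AI or non-AI) maps to the set
# of tasks that must complete before it. Names are illustrative only.
scenario = {
    "recommend": {"classify", "query_db"},  # depends on AI + non-AI tasks
    "classify": {"preprocess"},             # AI task
    "query_db": {"preprocess"},             # non-AI task
    "preprocess": set(),                    # no prerequisites
}

# A permutation of essential tasks that respects the DAG's edges is a
# topological order of the graph.
order = list(TopologicalSorter(scenario).static_order())
# "preprocess" comes first, "recommend" last.
```

Any valid benchmark permutation must place a task after all of its prerequisites, which is exactly the guarantee `static_order` provides.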
no code implementations • 30 Apr 2020 • Fei Tang, Wanling Gao, Jianfeng Zhan, Chuanxin Lan, Xu Wen, Lei Wang, Chunjie Luo, Jiahui Dai, Zheng Cao, Xingwang Xiong, Zihan Jiang, Tianshu Hao, Fanda Fan, Fan Zhang, Yunyou Huang, Jianan Chen, Mengjia Du, Rui Ren, Chen Zheng, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
We use real-world benchmarks to cover, as fully as possible, the factor space that impacts the learning dynamics.
no code implementations • 12 Mar 2020 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Wanling Gao
To alleviate the problem caused by small batch sizes, extended batch normalization computes the standard deviation along the (N, C, H, W) dimensions, thus enlarging the number of samples from which the standard deviation is computed.
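The summary above can be sketched as follows. This is a minimal pure-Python illustration of one reading of the description, not the paper's implementation: means are taken per channel over (N, H, W) as in standard Batch Normalization, while a single standard deviation is taken over all (N, C, H, W) values; centering the variance on the grand mean is an assumption here.

```python
import math

def extended_batch_norm(x, eps=1e-5):
    """Sketch of extended batch normalization on a nested list of shape
    [N][C][H][W]: per-channel means as in BN, but one standard deviation
    computed over all (N, C, H, W) values, so even a small batch yields
    many samples for the std estimate."""
    N, C = len(x), len(x[0])
    H, W = len(x[0][0]), len(x[0][0][0])
    # Per-channel means over (N, H, W), as in Batch Normalization.
    means = []
    for c in range(C):
        vals = [x[n][c][h][w]
                for n in range(N) for h in range(H) for w in range(W)]
        means.append(sum(vals) / len(vals))
    # One shared std over all (N, C, H, W) values (centering on the
    # grand mean is an assumption made for this sketch).
    all_vals = [x[n][c][h][w] for n in range(N) for c in range(C)
                for h in range(H) for w in range(W)]
    grand_mean = sum(all_vals) / len(all_vals)
    var = sum((v - grand_mean) ** 2 for v in all_vals) / len(all_vals)
    std = math.sqrt(var + eps)
    # Subtract the channel mean, divide by the shared std.
    return [[[[(x[n][c][h][w] - means[c]) / std for w in range(W)]
              for h in range(H)] for c in range(C)] for n in range(N)]
```

Because each channel is centered by its own mean, every channel of the output still averages to zero, while the divisor is estimated from N·C·H·W samples instead of N·H·W.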
Ranked #61 on Image Classification on STL-10
no code implementations • 17 Feb 2020 • Wanling Gao, Fei Tang, Jianfeng Zhan, Chuanxin Lan, Chunjie Luo, Lei Wang, Jiahui Dai, Zheng Cao, Xingwang Xiong, Zihan Jiang, Tianshu Hao, Fanda Fan, Xu Wen, Fan Zhang, Yunyou Huang, Jianan Chen, Mengjia Du, Rui Ren, Chen Zheng, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Gang Lu, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
An end-to-end benchmark is a distillation of the essential attributes of an industry-scale application.
no code implementations • 13 Aug 2019 • Wanling Gao, Fei Tang, Lei Wang, Jianfeng Zhan, Chuanxin Lan, Chunjie Luo, Yunyou Huang, Chen Zheng, Jiahui Dai, Zheng Cao, Daoyi Zheng, Haoning Tang, Kunlin Zhan, Biao Wang, Defei Kong, Tong Wu, Minghe Yu, Chongkang Tan, Huan Li, Xinhui Tian, Yatao Li, Junchao Shao, Zhenyu Wang, Xiaoyu Wang, Hainan Ye
On the basis of the AIBench framework, and abstracting real-world data sets and workloads from one of the top e-commerce providers, we design and implement the first end-to-end Internet service AI benchmark. It contains the primary modules on the critical paths of an industry-scale application and is scalable to different cluster sizes.
no code implementations • 27 Jul 2019 • Zihan Jiang, Wanling Gao, Lei Wang, Xingwang Xiong, Yuchen Zhang, Xu Wen, Chunjie Luo, Hainan Ye, Yunquan Zhang, Shengzhong Feng, Kenli Li, Weijia Xu, Jianfeng Zhan
In this paper, we propose HPC AI500, a benchmark suite for evaluating HPC systems that run scientific DL workloads.
no code implementations • 23 Feb 2018 • Wanling Gao, Jianfeng Zhan, Lei Wang, Chunjie Luo, Daoyi Zheng, Xu Wen, Rui Ren, Chen Zheng, Xiwen He, Hainan Ye, Haoning Tang, Zheng Cao, Shujie Zhang, Jiahui Dai
On the basis of our previous work, which identifies eight data motifs that take up most of the run time of a wide variety of big data and AI workloads, we propose a scalable benchmarking methodology that uses combinations of one or more data motifs to represent the diversity of big data and AI workloads.
1 code implementation • 20 Feb 2017 • Chunjie Luo, Jianfeng Zhan, Lei Wang, Qiang Yang
We compare cosine normalization with batch, weight, and layer normalization in fully-connected as well as convolutional networks on the MNIST, 20 Newsgroups, CIFAR-10/100, and SVHN data sets.
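The core idea being compared here, as the paper's title states, is to replace a neuron's dot product with the cosine of the angle between its weight vector and its input, which bounds the pre-activation to [-1, 1]. A minimal sketch:

```python
import math

def cosine_normalization(w, x, eps=1e-12):
    """Cosine normalization for a single neuron: divide the dot product
    w . x by the product of the two vectors' norms, yielding the cosine
    of the angle between them (bounded to [-1, 1])."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    # eps guards against division by zero for all-zero vectors.
    return dot / (norm_w * norm_x + eps)

# A weight and input pointing in the same direction give cosine ~1,
# regardless of their magnitudes.
same_dir = cosine_normalization([1.0, 0.0], [3.0, 0.0])
```

Unlike the unbounded dot product, this value never grows with the magnitude of the input, which is what makes it an alternative to the other normalization schemes listed above.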