Search Results for author: Jidong Zhai

Found 9 papers, 4 papers with code

AIPerf: Automated machine learning as an AI-HPC benchmark

1 code implementation • 17 Aug 2020 • Zhixiang Ren, Yongheng Liu, Tianhui Shi, Lei Xie, Yue Zhou, Jidong Zhai, Youhui Zhang, Yunquan Zhang, WenGuang Chen

The de facto HPC benchmark LINPACK cannot reflect AI computing power and I/O performance without a representative workload.

AutoML Benchmarking +1

FastMoE: A Fast Mixture-of-Expert Training System

3 code implementations • 24 Mar 2021 • Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, Jie Tang

However, training a trillion-scale MoE model requires algorithm and system co-design for a well-tuned, high-performance distributed training system.

Language Modelling
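FastMoE itself is a CUDA/PyTorch training system, but the core routing idea behind any MoE layer can be illustrated compactly. Below is a minimal NumPy sketch of top-k expert gating (all names are illustrative and are not FastMoE's API): each token is scored against every expert, the top-k experts are selected, and their outputs are mixed by softmax-renormalized gate weights.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy top-k MoE routing: score tokens against experts, pick the
    top-k experts per token, and mix their outputs by the
    softmax-renormalized gate scores."""
    scores = x @ gate_w                        # (tokens, n_experts)
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = topk[t]
        w = np.exp(scores[t, sel])
        w /= w.sum()                           # renormalize over selected experts
        for weight, e in zip(w, sel):
            out[t] += weight * experts[e](x[t])
    return out
```

In a real system such as FastMoE, the per-token loop is replaced by batched all-to-all communication that ships each token to the GPUs hosting its selected experts; the numerical result of the gating is the same.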

GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation

1 code implementation • 24 May 2022 • Lunyiu Nie, Shulin Cao, Jiaxin Shi, Jiuding Sun, Qi Tian, Lei Hou, Juanzi Li, Jidong Zhai

Owing to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by the complexity of handling both input semantics and output syntax.

Few-Shot Learning Semantic Parsing

OLLIE: Derivation-based Tensor Program Optimizer

no code implementations • 2 Aug 2022 • Liyan Zheng, Haojie Wang, Jidong Zhai, Muyan Hu, Zixuan Ma, Tuowei Wang, Shizhi Tang, Lei Xie, Kezhao Huang, Zhihao Jia

Boosting the runtime performance of deep neural networks (DNNs) is critical due to their wide adoption in real-world tasks.

Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing

no code implementations • 4 Oct 2022 • Lunyiu Nie, Jiuding Sun, Yanlin Wang, Lun Du, Lei Hou, Juanzi Li, Shi Han, Dongmei Zhang, Jidong Zhai

The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task.

Hallucination Semantic Parsing +1

FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training

no code implementations • 18 Jan 2023 • Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang

A key performance bottleneck when training graph neural network (GNN) models on large, real-world graphs is loading node features onto a GPU.
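As the snippet above notes, FreshGNN reduces GPU feature loading by serving many node embeddings from a cache of stable historical values. A toy sketch of such a cache, assuming nothing about FreshGNN's actual data structures (the FIFO eviction and the `compute` callback standing in for the expensive feature-load-plus-aggregation path are illustrative assumptions):

```python
class HistoricalCache:
    """Toy historical-embedding cache: return a stored embedding on a hit,
    otherwise compute it (the expensive path) and cache it, evicting the
    oldest entry when capacity is reached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # node id -> cached embedding (insertion-ordered)

    def lookup(self, node, compute):
        if node in self.store:
            return self.store[node], True          # cache hit: no recompute
        emb = compute(node)                        # miss: pay the full cost
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store))) # evict oldest (FIFO)
        self.store[node] = emb
        return emb, False
```

The real system additionally decides *which* nodes are safe to serve from history (those whose embeddings have changed little across training iterations); this sketch only shows the caching side of that trade-off.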

PowerFusion: A Tensor Compiler with Explicit Data Movement Description and Instruction-level Graph IR

no code implementations • 11 Jul 2023 • Zixuan Ma, Haojie Wang, Jingze Xing, Liyan Zheng, Chen Zhang, Huanqi Cao, Kezhao Huang, Shizhi Tang, Penghan Wang, Jidong Zhai

To accelerate DNN computation, tensor compilers have been proposed to generate efficient code for different domain-specific accelerators.
