Search Results for author: Qifan Xu

Found 5 papers, 1 paper with code

To what extent can Plug-and-Play methods outperform neural networks alone in low-dose CT reconstruction

no code implementations · 15 Feb 2022 · Qifan Xu, Qihui Lyu, Dan Ruan, Ke Sheng

The Plug-and-Play (PnP) framework was recently introduced for low-dose CT reconstruction to combine the interpretability and flexibility of model-based methods with the ability to incorporate various plugins, such as trained deep learning (DL) neural networks.

Segmentation
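The snippet above only names the PnP framework without showing its structure. As a rough illustration (not the paper's method), a generic PnP iteration alternates a data-consistency gradient step with a denoiser that stands in for the proximal operator of an implicit prior; `pnp_reconstruct` and `toy_denoiser` below are hypothetical names, and the toy denoiser is a placeholder where a trained DL network would be plugged in:

```python
import numpy as np

def pnp_reconstruct(A, y, denoise, step=0.1, iters=50):
    """Generic Plug-and-Play iteration (sketch): alternate a gradient
    step on the data-fidelity term ||Ax - y||^2 with a denoiser that
    replaces the proximal operator of an implicit image prior."""
    x = A.T @ y  # back-projection as a crude initial estimate
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))  # data-consistency gradient step
        x = denoise(x)                      # plug-in prior (e.g. a trained DL network)
    return x

def toy_denoiser(x):
    # Placeholder denoiser: mild shrinkage toward the mean; a trained
    # network would take this role in an actual PnP pipeline.
    return 0.9 * x + 0.1 * x.mean()
```

In a real low-dose CT setting, `A` would be the (sparse) projection operator and `y` the measured sinogram; the sketch uses dense NumPy arrays purely to show the alternation.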

Tesseract: Parallelize the Tensor Parallelism Efficiently

no code implementations · 30 May 2021 · Boxiang Wang, Qifan Xu, Zhengda Bian, Yang You

Tesseract increases efficiency by reducing communication overhead and lowers the memory required on each GPU.

Language Modelling

Maximizing Parallelism in Distributed Training for Huge Neural Networks

no code implementations · 30 May 2021 · Zhengda Bian, Qifan Xu, Boxiang Wang, Yang You

Our work is the first to introduce 3-dimensional model parallelism for expediting huge language models.
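The core idea behind 3D matrix-multiply parallelism (not this paper's exact scheme, just a single-process sketch) is that devices form a q × q × q cube: device (i, j, l) holds one block of A and one of B, computes a local partial product, and the result C[i][j] is a reduce over the depth axis l. Simulated with NumPy blocks:

```python
import numpy as np

def matmul_3d(A, B, q=2):
    """Simulate matmul on a q x q x q device grid: device (i, j, l)
    computes the partial product A[i][l] @ B[l][j]; C[i][j] is the
    reduction of these partials over the third grid axis l."""
    m, n, kdim = A.shape[0] // q, B.shape[1] // q, A.shape[1] // q
    C = [[None] * q for _ in range(q)]
    for i in range(q):
        for j in range(q):
            # the sum over l plays the role of an all-reduce along the depth axis
            C[i][j] = sum(
                A[i*m:(i+1)*m, l*kdim:(l+1)*kdim] @ B[l*kdim:(l+1)*kdim, j*n:(j+1)*n]
                for l in range(q)
            )
    return np.block(C)
```

Distributing the reduction dimension as well (rather than looping over it on one device, as in 2D schemes) is what cuts per-device memory and communication as the device count grows.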

An Efficient 2D Method for Training Super-Large Deep Learning Models

1 code implementation · 12 Apr 2021 · Qifan Xu, Shenggui Li, Chaoyu Gong, Yang You

Due to memory constraints, model parallelism must be used to host large models that would otherwise not fit into the memory of a single device.
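2D tensor parallelism of the kind this line alludes to is commonly built on a SUMMA-style block decomposition; the following is a single-process sketch (an assumption about the general technique, not this paper's implementation), simulating a q × q device grid where step k row-broadcasts A's k-th block column and column-broadcasts B's k-th block row:

```python
import numpy as np

def summa_2d_matmul(A, B, q=2):
    """Simulate SUMMA-style 2D tensor parallelism on a q x q grid:
    'device' (i, j) holds blocks A[i][k], B[k][j] over steps k and
    accumulates C[i][j] = sum_k A[i][k] @ B[k][j]."""
    m, n = A.shape[0] // q, B.shape[1] // q
    kdim = A.shape[1] // q
    Ab = [[A[i*m:(i+1)*m, k*kdim:(k+1)*kdim] for k in range(q)] for i in range(q)]
    Bb = [[B[k*kdim:(k+1)*kdim, j*n:(j+1)*n] for j in range(q)] for k in range(q)]
    Cb = [[np.zeros((m, n)) for _ in range(q)] for _ in range(q)]
    for k in range(q):  # step k: row-broadcast A[:, k], column-broadcast B[k, :]
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][k] @ Bb[k][j]  # local partial product on device (i, j)
    return np.block(Cb)
```

Because each device stores only one m × kdim block of A and one kdim × n block of B at a time, per-device activation and weight memory shrink roughly with the grid size, which is what lets models too large for one device be hosted at all.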
