Search Results for author: Le Jiang

Found 7 papers, 3 papers with code

UFPS: A unified framework for partially-annotated federated segmentation in heterogeneous data distribution

1 code implementation • 16 Nov 2023 • Le Jiang, Liyan Ma, Tieyong Zeng, Shihui Ying

We propose a Unified Federated Partially-labeled Segmentation (UFPS) framework to segment pixels within all classes for partially-annotated datasets by training a totipotential global model without class collision.

Federated Learning • Segmentation
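
To illustrate the core constraint in partially-annotated federated segmentation, here is a minimal sketch (my own illustration, not code from the UFPS repository): each client restricts its cross-entropy loss to the classes it actually annotates, so pixels of unlabeled classes do not contribute conflicting supervision. The function name and tensor shapes are assumptions.

```python
# Minimal sketch of partial-label supervision (illustrative, not UFPS code):
# each client computes cross-entropy only over its annotated classes.
import torch
import torch.nn.functional as F

def partial_label_loss(logits, target, labeled_classes):
    """logits: (N, C, H, W); target: (N, H, W); labeled_classes: the class
    ids annotated at this client. All other pixels are ignored."""
    labeled = torch.zeros(logits.shape[1], dtype=torch.bool,
                          device=logits.device)
    labeled[labeled_classes] = True
    target = target.clone()
    target[~labeled[target]] = -100      # mask pixels of unlabeled classes
    return F.cross_entropy(logits, target, ignore_index=-100)
```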

SPAC-Net: Synthetic Pose-aware Animal ControlNet for Enhanced Pose Estimation

1 code implementation • 29 May 2023 • Le Jiang, Sarah Ostadabbas

Our work demonstrates the potential for synthetic data to overcome the challenge of limited annotated data in animal pose estimation.

Animal Pose Estimation • Edge Detection • +1
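
A generic pose-conditioned synthesis loop of this flavor can be sketched with the Hugging Face diffusers library. This is a sketch of the general technique using public OpenPose ControlNet checkpoints, not the SPAC-Net pipeline itself; the prompt and file names are placeholders.

```python
# Generic pose-conditioned image synthesis with a ControlNet (illustrative
# of the technique, not the SPAC-Net pipeline; checkpoints are public ones).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

pose_map = Image.open("pose_skeleton.png")   # rendered keypoint skeleton
image = pipe("a photo of a zebra standing in grassland",
             image=pose_map, num_inference_steps=30).images[0]
image.save("synthetic_zebra.png")            # synthetic training sample
```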

TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic Parallelisation

no code implementations • 1 Feb 2023 • Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin

However, finding a suitable model-parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space.
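
To see why the schedule space explodes, consider a toy exhaustive search (purely illustrative, with a made-up cost model; this is not TAP's algorithm): if every layer can shard its weight by row, by column, or replicate it, an L-layer network already has 3^L candidate schedules.

```python
# Toy illustration of the exploding schedule space (not TAP's algorithm;
# the cost model below is hypothetical). Each layer may shard its weight
# by row, by column, or replicate it, so L layers give 3**L schedules.
from itertools import product

CHOICES = ("row", "col", "replicate")

def comm_cost(prev, cur):
    # Hypothetical cost: one resharding collective when layouts disagree.
    return 0 if prev == cur else 1

def best_schedule(num_layers):
    best_cost, best = None, None
    for schedule in product(CHOICES, repeat=num_layers):  # 3**num_layers
        cost = sum(comm_cost(a, b) for a, b in zip(schedule, schedule[1:]))
        if best_cost is None or cost < best_cost:
            best_cost, best = cost, schedule
    return best_cost, best

print(best_schedule(4))  # exhaustive search hits 81 candidates at 4 layers
```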

Prior-Aware Synthetic Data to the Rescue: Animal Pose Estimation with Very Limited Real Data

1 code implementation • 30 Aug 2022 • Le Jiang, Shuangjun Liu, Xiangyu Bai, Sarah Ostadabbas

Here, we present a very data-efficient strategy for pose estimation in quadrupeds that requires only a small number of real images of the target animal.

Animal Pose Estimation • Keypoint Estimation • +3
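
One common way to combine plentiful synthetic images with a handful of real ones is to oversample the scarce real data during fine-tuning. Below is a minimal sketch under that assumption (not the paper's exact training recipe; the function name and the real_boost factor are illustrative).

```python
# Sketch: mix abundant synthetic data with few real images by oversampling
# the real ones (illustrative recipe, not the paper's exact method).
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def mixed_loader(synthetic_ds, real_ds, real_boost=10.0, batch_size=32):
    """Sample real images real_boost times more often than synthetic ones."""
    dataset = ConcatDataset([synthetic_ds, real_ds])
    weights = torch.cat([
        torch.ones(len(synthetic_ds)),
        torch.full((len(real_ds),), real_boost),
    ])
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset))
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```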

M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining

no code implementations • 8 Oct 2021 • Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang

Recent rapid developments in deep learning algorithms, distributed training, and hardware design for large models have enabled the training of extreme-scale models such as GPT-3 and Switch Transformer, which possess hundreds of billions or even trillions of parameters.

M6-T: Exploring Sparse Expert Models and Beyond

no code implementations • 31 May 2021 • An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, Di Zhang, Wei Lin, Lin Qu, Jingren Zhou, Hongxia Yang

Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters but constant computation cost, and thus MoE has become a trend in model scaling.

Playing the Game of 2048
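
The constant-compute property mentioned in the abstract comes from routing each token to a small, fixed number of experts. A minimal top-1 MoE layer in PyTorch (a generic sketch of the idea, not the M6-T architecture) looks like this: parameters grow linearly with num_experts, yet each token passes through exactly one expert-sized feed-forward network.

```python
# Minimal top-1 Mixture-of-Experts layer (generic sketch, not M6-T itself):
# parameter count grows with num_experts while per-token compute stays
# constant, since each token is dispatched to a single expert.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, d_model, d_hidden, num_experts):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts))

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.gate(x).softmax(dim=-1)   # routing probabilities
        top = scores.argmax(dim=-1)             # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            sel = top == i                      # tokens routed to expert i
            if sel.any():
                out[sel] = expert(x[sel]) * scores[sel, i].unsqueeze(-1)
        return out
```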

M6: A Chinese Multimodal Pretrainer

no code implementations • 1 Mar 2021 • Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang

In this work, we construct the largest dataset for multimodal pretraining in Chinese, consisting of over 1.9TB of images and 292GB of text covering a wide range of domains.

Image Generation
