Search Results for author: Jiaxiang Ren

Found 5 papers, 1 paper with code

Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization

1 code implementation, 23 Oct 2023: Tianshi Che, Ji Liu, Yang Zhou, Jiaxiang Ren, Jiwen Zhou, Victor S. Sheng, Huaiyu Dai, Dejing Dou

This paper proposes a Parameter-efficient prompt Tuning approach with Adaptive Optimization, i.e., FedPepTAO, to enable efficient and effective FL of LLMs.

Federated Learning
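The core idea named in the abstract, communicating only a small set of prompt parameters while the backbone model stays frozen, can be sketched as follows. This is an illustrative toy, not FedPepTAO itself; the prompt shape, local SGD, and quadratic client objectives are all assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): in parameter-efficient
# prompt tuning for federated learning, the LLM backbone is frozen and only
# a small prompt-embedding matrix is trained locally and averaged globally.

PROMPT_LEN, EMB_DIM = 4, 8          # hypothetical prompt shape

def local_prompt_update(prompt, grad_fn, lr=0.1, steps=3):
    """Client side: tune only the prompt with plain SGD; the model is frozen."""
    p = prompt.copy()
    for _ in range(steps):
        p -= lr * grad_fn(p)
    return p

def federated_round(server_prompt, client_grad_fns):
    """Server side: average the (small) prompts instead of full model weights."""
    client_prompts = [local_prompt_update(server_prompt, g) for g in client_grad_fns]
    return np.mean(client_prompts, axis=0)

rng = np.random.default_rng(0)
server_prompt = rng.normal(size=(PROMPT_LEN, EMB_DIM))
# Toy quadratic objectives standing in for each client's loss gradient.
targets = [rng.normal(size=(PROMPT_LEN, EMB_DIM)) for _ in range(3)]
grad_fns = [lambda p, t=t: p - t for t in targets]

for _ in range(30):
    server_prompt = federated_round(server_prompt, grad_fns)
# The averaged prompt converges toward the mean of the client optima.
```

The communication saving comes from the payload size: each round moves a 4x8 prompt matrix rather than billions of backbone weights.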

Accelerated Federated Learning with Decoupled Adaptive Optimization

no code implementations, 14 Jul 2022: Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou

The federated learning (FL) framework enables edge clients to collaboratively learn a shared inference model while keeping privacy of training data on clients.

Federated Learning
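One common way to decouple adaptive optimization from client training, in the spirit suggested by the title, is to run plain SGD on clients and apply an Adam-style update only on the server, treating the averaged client delta as a pseudo-gradient. The sketch below shows that generic FedAdam-style pattern; it is an assumption for illustration, not the paper's exact method.

```python
import numpy as np

# Hedged sketch of decoupled adaptive FL (FedAdam-style pattern, not the
# paper's algorithm): clients run plain SGD; the server applies Adam to the
# averaged client delta, treated as a pseudo-gradient.

def client_delta(w, grad_fn, lr=0.1, steps=5):
    """Plain local SGD; returns the client's model change."""
    w_local = w.copy()
    for _ in range(steps):
        w_local -= lr * grad_fn(w_local)
    return w_local - w

def server_adam_step(w, pseudo_grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Adaptive update applied only on the server, decoupled from clients."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * pseudo_grad
    state["v"] = b2 * state["v"] + (1 - b2) * pseudo_grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w + lr * m_hat / (np.sqrt(v_hat) + eps)  # '+': the delta points downhill

rng = np.random.default_rng(1)
dim = 6
w = rng.normal(size=dim)
# Toy quadratic client objectives with different optima.
targets = [rng.normal(size=dim) for _ in range(4)]
grad_fns = [lambda x, t=t: x - t for t in targets]
state = {"m": np.zeros(dim), "v": np.zeros(dim), "t": 0}

for _ in range(300):
    pseudo_grad = np.mean([client_delta(w, g) for g in grad_fns], axis=0)
    w = server_adam_step(w, pseudo_grad, state)
```

Keeping the Adam moment buffers on the server means clients carry no optimizer state, which matches the resource constraints of edge devices described in the abstract.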

Self-Supervised Bulk Motion Artifact Removal in Optical Coherence Tomography Angiography

no code implementations, CVPR 2022: Jiaxiang Ren, Kicheon Park, Yingtian Pan, Haibin Ling

Using the structural information and appearance features of the noisy image as references, our model can remove larger BMAs and produce better visualization results.

Image Inpainting
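Since the task tag is image inpainting, the underlying operation is filling artifact pixels from the surrounding image. The sketch below shows a generic diffusion-based inpainting baseline, not the paper's self-supervised model: artifact pixels are masked out and repeatedly replaced by the average of their neighbours, loosely analogous to using surrounding structure as a reference.

```python
import numpy as np

# Generic mask-based inpainting sketch (not the paper's model): pixels under
# the artifact mask are iteratively filled by 4-neighbour diffusion from the
# valid surrounding pixels.

def diffuse_inpaint(image, mask, iters=200):
    """Fill masked pixels (mask == True) by repeated neighbour averaging."""
    out = image.copy()
    out[mask] = image[~mask].mean()      # crude initialisation from valid pixels
    for _ in range(iters):
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]            # only artifact pixels are updated
    return out

# Toy example: a smooth horizontal gradient with a vertical artifact stripe,
# a stand-in for a bulk-motion stripe in an angiography B-scan.
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(img, dtype=bool)
mask[:, 14:18] = True                    # simulated artifact columns
corrupted = img.copy()
corrupted[mask] = 0.0
restored = diffuse_inpaint(corrupted, mask)
```

On this smooth toy image the diffusion fill recovers the stripe almost exactly; real BMA removal needs the learned appearance model the abstract describes because vessel texture is not smooth.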

Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory

no code implementations, NeurIPS 2021: Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou

Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually crafted heuristics to generate pruned sparse networks.

Network Pruning

Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism

no code implementations, 19 Oct 2021: Heng Fan, Jiaxiang Ren, Jie Yang, Yi-Xian Qin, Haibin Ling

The aim of this study was to investigate whether a deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
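A CNN "attention module" of the kind the abstract mentions is often a channel-attention block in the squeeze-and-excitation style: pool each feature channel to a scalar, pass the result through a small bottleneck MLP, and reweight the channels by the resulting gates. The sketch below shows that common pattern as an assumption for illustration; the paper's exact attention design is not specified here.

```python
import numpy as np

# Minimal channel-attention sketch (squeeze-and-excitation style), one common
# way to add an attention module to a CNN. This is an illustrative assumption,
# not the paper's exact architecture.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W) feature map; returns the channel-reweighted map."""
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # bottleneck MLP + gate
    return feat * excite[:, None, None]                   # per-channel reweighting

rng = np.random.default_rng(3)
C, H, W, R = 8, 5, 5, 2                  # R: channel reduction ratio
feat = rng.normal(size=(C, H, W))        # toy CNN feature map
w1 = rng.normal(size=(C // R, C)) * 0.1  # squeeze projection
w2 = rng.normal(size=(C, C // R)) * 0.1  # excitation projection
out = channel_attention(feat, w1, w2)
```

Because the gates lie in (0, 1), the module can only attenuate channels, letting the network emphasize the radiograph regions and feature channels most indicative of low bone density.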
