Search Results for author: Ryumei Nakada

Found 10 papers, 3 papers with code

S$^{2}$FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity

no code implementations · 9 Dec 2024 · Xinyu Yang, Jixuan Leng, Geyang Guo, Jiawei Zhao, Ryumei Nakada, Linjun Zhang, Huaxiu Yao, Beidi Chen

Utilizing this key insight, we propose a family of Structured Sparse Fine-Tuning (S$^{2}$FT) methods for LLMs, which concurrently achieve state-of-the-art fine-tuning performance, training efficiency, and inference scalability.
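The listing gives only this abstract sentence, but the general idea of structured sparse fine-tuning can be illustrated with a small sketch: keep a pre-trained weight matrix frozen and train an update for only a structured subset of its rows. The class name, layer sizes, and choice of row indices below are illustrative assumptions, not the authors' S$^{2}$FT implementation.

```python
import torch
import torch.nn as nn

class StructuredSparseLinear(nn.Module):
    """Sketch of structured sparse fine-tuning: only a chosen subset of rows of a
    frozen pre-trained weight matrix receives a trainable update (an illustration
    of the general idea, not the authors' S^2FT code)."""

    def __init__(self, pretrained: nn.Linear, trainable_rows):
        super().__init__()
        self.register_buffer("w0", pretrained.weight.detach().clone())
        self.register_buffer("b0", pretrained.bias.detach().clone()
                             if pretrained.bias is not None else None)
        self.rows = trainable_rows
        # Trainable delta only for the selected rows; all other entries stay frozen.
        self.delta = nn.Parameter(torch.zeros(len(trainable_rows), pretrained.in_features))

    def forward(self, x):
        weight = self.w0.clone()
        weight[self.rows] = weight[self.rows] + self.delta
        return nn.functional.linear(x, weight, self.b0)

# Usage: swap in for a linear layer and optimize only the structured delta.
layer = nn.Linear(768, 768)
sft_layer = StructuredSparseLinear(layer, trainable_rows=[0, 7, 42])
optimizer = torch.optim.AdamW([sft_layer.delta], lr=1e-4)
```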

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models

no code implementations · 2 Oct 2024 · Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu, Linjun Zhang, Huaxiu Yao, Haoyu Wang

The nonlinear approximation directly models the cumulative updates, effectively capturing complex and non-linear structures in the weight updates.

parameter-efficient fine-tuning
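The abstract describes modeling the cumulative weight updates with a nonlinear function rather than a low-rank additive term. A minimal sketch of that idea follows; feeding each row of the frozen weight matrix through a small bottleneck MLP is an assumption made here for illustration, not the NEAT implementation.

```python
import torch
import torch.nn as nn

class NonlinearAdapterLinear(nn.Module):
    """Sketch of a nonlinear parameter-efficient update: the cumulative weight
    update is modeled as a small nonlinear network applied to the frozen
    pre-trained weights (an illustration of the idea, not the NEAT code)."""

    def __init__(self, pretrained: nn.Linear, bottleneck: int = 8):
        super().__init__()
        self.register_buffer("w0", pretrained.weight.detach().clone())
        self.register_buffer("b0", pretrained.bias.detach().clone()
                             if pretrained.bias is not None else None)
        d_in = pretrained.in_features
        # Tiny MLP mapping each row of W0 through a nonlinear bottleneck to a row
        # of the update; its parameter count is far below that of W0 itself.
        self.update_net = nn.Sequential(
            nn.Linear(d_in, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, d_in),
        )

    def forward(self, x):
        delta = self.update_net(self.w0)   # nonlinear function of the frozen weights
        return nn.functional.linear(x, self.w0 + delta, self.b0)

# Usage: only the adapter network's parameters are trained.
adapted = NonlinearAdapterLinear(nn.Linear(768, 768), bottleneck=8)
optimizer = torch.optim.AdamW(adapted.update_net.parameters(), lr=1e-4)
```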

Contrastive Learning on Multimodal Analysis of Electronic Health Records

no code implementations · 22 Mar 2024 · Tianxi Cai, Feiqing Huang, Ryumei Nakada, Linjun Zhang, Doudou Zhou

To accommodate the statistical analysis of multimodal EHR data, in this paper, we propose a novel multimodal feature embedding generative model and design a multimodal contrastive loss to obtain the multimodal EHR feature representation.

Contrastive Learning · Privacy Preserving · +1
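The abstract mentions a multimodal contrastive loss over paired EHR modalities. As a point of reference, the standard symmetric InfoNCE objective between two modality embeddings can be sketched as below; the modality names, embedding dimension, and temperature are illustrative, and this is the generic loss rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_codes, z_notes, temperature=0.07):
    """Symmetric InfoNCE loss between two modality embeddings of the same patients.
    z_codes, z_notes: (batch, dim) embeddings of, e.g., structured codes and notes."""
    z_codes = F.normalize(z_codes, dim=-1)
    z_notes = F.normalize(z_notes, dim=-1)
    logits = z_codes @ z_notes.t() / temperature   # pairwise similarities
    targets = torch.arange(z_codes.size(0), device=z_codes.device)
    # Matched rows/columns (the same patient in both modalities) are the positives.
    loss_c2n = F.cross_entropy(logits, targets)
    loss_n2c = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_c2n + loss_n2c)

# Usage with random embeddings standing in for encoder outputs.
loss = multimodal_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```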

Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data

1 code implementation · 13 Feb 2023 · Ryumei Nakada, Halil Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang

We show that the algorithm can detect the ground-truth pairs and improve performance by fully exploiting unpaired datasets.

Contrastive Learning
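The abstract says the proposed algorithm can detect ground-truth pairs among unpaired samples. One simple way to illustrate the flavor of that idea is nearest-neighbor matching in a shared embedding space; this matching rule is only an illustration, not the paper's estimator or its guarantees.

```python
import torch
import torch.nn.functional as F

def match_unpaired(z_a, z_b):
    """Illustrative pairing rule for unpaired samples from two modalities:
    assign each embedding in modality A to its most similar embedding in
    modality B (a nearest-neighbor heuristic, not the paper's algorithm)."""
    sims = F.normalize(z_a, dim=-1) @ F.normalize(z_b, dim=-1).t()
    return sims.argmax(dim=1)   # index into z_b for each row of z_a

# The recovered pairs could then serve as extra positives in a contrastive loss.
pairs = match_unpaired(torch.randn(16, 64), torch.randn(100, 64))
```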

The Power of Contrast for Feature Learning: A Theoretical Analysis

no code implementations · 6 Oct 2021 · Wenlong Ji, Zhun Deng, Ryumei Nakada, James Zou, Linjun Zhang

Contrastive learning has achieved state-of-the-art performance in various self-supervised learning tasks and even outperforms its supervised counterpart.

Contrastive Learning · Self-Supervised Learning · +1

Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks

no code implementations · 28 Feb 2021 · Ryumei Nakada, Masaaki Imaizumi

We investigate the asymptotic risk of a general class of overparameterized likelihood models, including deep models.

Ensemble Learning · regression · +1
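For readers unfamiliar with the objects involved, the quantities in such an analysis can be written in generic notation (this is standard notation for a likelihood model and its risk, not the paper's exact setup or assumptions):

```latex
% Generic notation (not the paper's exact setup): maximum-likelihood-type
% estimator in a model with parameter \theta and observations Z_1,\dots,Z_n.
\[
  \widehat{\theta}_n \in \arg\max_{\theta \in \Theta_p}
    \frac{1}{n}\sum_{i=1}^{n} \log p_{\theta}(Z_i),
  \qquad
  R(\widehat{\theta}_n) = \mathbb{E}_{Z}\bigl[-\log p_{\widehat{\theta}_n}(Z)\bigr].
\]
% "Double descent" refers to R(\widehat{\theta}_n) being non-monotone in the
% parameter count p: the risk can decrease again beyond the interpolation threshold.
```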

Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality

no code implementations · 4 Jul 2019 · Ryumei Nakada, Masaaki Imaizumi

In this study, we prove that an intrinsic low dimensionality of covariates is the main factor that determines the performance of deep neural networks (DNNs).
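To make the abstract's claim concrete, results of this kind typically take the form of a convergence rate whose exponent involves the intrinsic dimension of the covariates instead of the ambient one; the display below is written as an illustration of that shape, not a quotation of the paper's theorem or its exact conditions.

```latex
% Illustrative form of an intrinsic-dimension rate (not quoted from the paper):
% for \beta-smooth regression with covariates supported on a set of intrinsic
% (e.g., Minkowski) dimension d inside an ambient space of dimension D \gg d,
\[
  \mathbb{E}\,\bigl\|\widehat{f}_n - f^{*}\bigr\|_{L^2}^{2}
    \;\lesssim\; n^{-\frac{2\beta}{2\beta + d}}
  \quad \text{(up to logarithmic factors)},
\]
% so the rate is governed by the intrinsic dimension d rather than by D.
```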
