Search Results for author: Martin Kuo

Found 4 papers, 1 paper with code

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models

no code implementations • 3 Apr 2024 • Jingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Yang, Hai Li

The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination.
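For context, the sketch below shows what a pre-training data detection score can look like: the earlier Min-K% Prob baseline scores a text by the average log-probability of its k% least likely tokens under the model. This is a generic illustration, not the Min-K%++ variant proposed in this paper; it assumes a Hugging Face causal LM, and the model name, k, and any decision threshold are placeholders.

```python
# Sketch of a Min-K% Prob style membership score (not the Min-K%++ variant
# from the paper). Model name, k, and thresholds are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_percent_score(text: str, model, tokenizer, k: float = 0.2) -> float:
    """Average log-probability of the k% least likely tokens in `text`."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)
    # Log-probability assigned to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, input_ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    # Keep only the lowest k% of token log-probs and average them.
    n_keep = max(1, int(k * token_log_probs.numel()))
    lowest = torch.topk(token_log_probs, n_keep, largest=False).values
    return lowest.mean().item()

# Higher scores mean the text is less "surprising" to the model, which is
# treated as evidence that it may have appeared in the pre-training data.
model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()
print(min_k_percent_score("An example sentence to score.", lm, tok))
```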

DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining

no code implementations • 8 Nov 2023 • Martin Kuo, Jianyi Zhang, Yiran Chen

Building on the cost-efficient pretraining advances of Crammed BERT, we further improve its performance and interpretability by introducing a novel pretrained model, Dependency Agreement Crammed BERT (DACBERT), together with its two-stage pretraining framework, Dependency Agreement Pretraining.

MRPC • Natural Language Understanding • +1

Towards Building the Federated GPT: Federated Instruction Tuning

1 code implementation • 9 May 2023 • Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen

This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.

Federated Learning
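
To illustrate the general pattern behind federated fine-tuning, the sketch below shows a FedAvg-style round: each client fine-tunes a copy of the global model on its own instruction data, and the server averages the resulting weights. This is a generic illustration under simplified assumptions, not the exact training recipe of the paper's repository; the model, objective, and client data are placeholders.

```python
# Sketch of one round of federated fine-tuning via FedAvg-style weight
# averaging. Model, loss, and client data are toy placeholders.
import copy
import torch
import torch.nn as nn

def local_finetune(model: nn.Module, batches, epochs: int = 1, lr: float = 1e-4):
    """Fine-tune a copy of the global model on one client's local data."""
    local = copy.deepcopy(model)
    opt = torch.optim.AdamW(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # placeholder objective; real setups use an LM loss
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict()

def fedavg(client_states):
    """Element-wise average of client weights to form the new global model."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

# Toy usage: a tiny model and two clients with heterogeneous synthetic data.
global_model = nn.Linear(4, 1)
clients = [
    [(torch.randn(8, 4), torch.randn(8, 1))],  # client 1 batches
    [(torch.randn(8, 4), torch.randn(8, 1))],  # client 2 batches
]
states = [local_finetune(global_model, data) for data in clients]
global_model.load_state_dict(fedavg(states))
```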
