Search Results for author: Jianyi Zhang

Found 18 papers, 3 papers with code

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models

no code implementations • 3 Apr 2024 • Jingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Yang, Hai Li

The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination.
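
The excerpt above only states the problem; the listing does not reproduce the method itself. As a rough, hedged illustration of the Min-K% family of membership scores that Min-K%++ builds on (the ++ variant reportedly adds a per-token normalization step not shown here), the sketch below assumes per-token log-probabilities of the candidate text under the target LLM are already available.

```python
import numpy as np

def min_k_percent_score(token_logprobs, k=0.2):
    """Minimal sketch of a Min-K%-style pre-training-data detection score.

    token_logprobs: per-token log-probabilities of a candidate text under the
    target LLM (how these are obtained is assumed, e.g. one forward pass with
    teacher forcing). k is the fraction of lowest-probability tokens to average
    over; higher scores suggest the text is more likely to have been seen
    during pre-training.
    """
    logprobs = np.asarray(token_logprobs, dtype=float)
    n = max(1, int(len(logprobs) * k))   # number of lowest-probability tokens kept
    lowest = np.sort(logprobs)[:n]       # the k% least likely tokens
    return float(lowest.mean())          # average their log-probability

# Usage: texts whose score exceeds a tuned threshold are flagged as likely members.
score = min_k_percent_score([-0.1, -2.3, -0.5, -4.2, -0.8], k=0.4)
```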

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

1 code implementation • 3 Dec 2023 • Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.

Federated Learning

DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining

no code implementations • 8 Nov 2023 • Martin Kuo, Jianyi Zhang, Yiran Chen

Building on the cost-efficient pretraining advancements brought about by Crammed BERT, we further enhance its performance and interpretability by introducing a novel pretrained model, Dependency Agreement Crammed BERT (DACBERT), and its two-stage pretraining framework, Dependency Agreement Pretraining.

MRPC • Natural Language Understanding +1

Towards Building the Federated GPT: Federated Instruction Tuning

1 code implementation • 9 May 2023 • Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen

This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.

Federated Learning
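
The excerpt describes federated fine-tuning of LLMs on heterogeneous instruction data but not the aggregation mechanics. Below is a minimal, hedged sketch of one FedAvg-style aggregation round over parameter-efficient adapter weights; it is not the repository's actual API, and the "lora_" key naming, the aggregate_adapters helper, and the example-count weighting are illustrative assumptions.

```python
import torch

def aggregate_adapters(client_updates):
    """One FedAvg-style aggregation round over adapter-only updates.

    client_updates: list of (adapter_state_dict, num_local_examples) pairs,
    where each state dict maps adapter parameter names (e.g. containing
    "lora_") to tensors. Returns the example-weighted average, which the
    server would load back into the global adapter before the next round.
    """
    total = sum(n for _, n in client_updates)
    keys = client_updates[0][0].keys()          # assume all clients share keys
    return {
        key: sum((n / total) * state[key].float() for state, n in client_updates)
        for key in keys
    }

# Usage with two toy "clients" holding a single adapter matrix each:
a = ({"lora_A": torch.ones(2, 2)}, 100)
b = ({"lora_A": torch.zeros(2, 2)}, 300)
avg = aggregate_adapters([a, b])   # -> 0.25 * ones, since client a holds 1/4 of the data
```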

Rethinking Normalization Methods in Federated Learning

no code implementations • 7 Oct 2022 • Zhixu Du, Jingwei Sun, Ang Li, Pin-Yu Chen, Jianyi Zhang, Hai "Helen" Li, Yiran Chen

We also show that layer normalization is a better choice in FL, as it can mitigate the external covariate shift and improve the performance of the global model.

Federated Learning
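
A minimal sketch of the design choice the excerpt argues for, swapping batch normalization for layer normalization in a client model; the toy architecture and sizes are illustrative, not the paper's experimental setup.

```python
import torch.nn as nn

def make_client_model(in_dim=32, hidden=64, num_classes=10, use_layernorm=True):
    """Toy classifier illustrating the normalization choice discussed above.

    BatchNorm keeps running batch statistics, which differ across non-IID
    clients and get mixed at aggregation time; LayerNorm normalizes each
    sample independently, so it carries no cross-client batch statistics.
    """
    norm = nn.LayerNorm(hidden) if use_layernorm else nn.BatchNorm1d(hidden)
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        norm,
        nn.ReLU(),
        nn.Linear(hidden, num_classes),
    )
```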

Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer

no code implementations • 6 Oct 2022 • Jianyi Zhang, Yiran Chen, Jianshu Chen

Developing neural architectures that are capable of logical reasoning has become increasingly important for a wide range of applications (e.g., natural language processing).

Logical Reasoning • Natural Language Understanding

Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction

no code implementations • 30 Sep 2022 • Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li

Based on this measure, we also design a computation-efficient client sampling strategy, such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees.

Federated Learning • Privacy Preserving
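
The excerpt mentions a computation-efficient, class-imbalance-aware client sampling strategy. The following is a loose, hedged sketch of the general idea only, assuming the server can access or estimate per-client label histograms; the paper's actual imbalance measure, its privacy-preserving estimation, and the stated guarantees are not reproduced here.

```python
import numpy as np

def class_imbalance(histogram):
    """Squared distance of a label distribution from the uniform distribution."""
    p = histogram / histogram.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))

def select_clients(client_histograms, num_selected):
    """Greedy sketch: pick clients whose pooled labels stay closest to uniform.

    client_histograms: array of shape (num_clients, num_classes) with label
    counts per client (assumed known or estimated by the server).
    """
    selected, pooled = [], np.zeros(client_histograms.shape[1])
    candidates = set(range(len(client_histograms)))
    for _ in range(num_selected):
        best = min(candidates,
                   key=lambda c: class_imbalance(pooled + client_histograms[c]))
        selected.append(best)
        pooled += client_histograms[best]
        candidates.remove(best)
    return selected
```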

FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices

no code implementations • 8 Sep 2022 • Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen

However, the high demand for memory capacity and computing power makes large-scale federated adversarial training infeasible on resource-constrained edge devices.

Adversarial Robustness • Federated Learning +1

Towards Fair Federated Learning with Zero-Shot Data Augmentation

no code implementations • 27 Apr 2021 • Weituo Hao, Mostafa El-Khamy, Jungwon Lee, Jianyi Zhang, Kevin J Liang, Changyou Chen, Lawrence Carin

Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.

Data Augmentation • Fairness +1
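
The excerpt summarizes the federated learning setting: the server aggregates client-trained models without ever seeing raw client data. A generic sketch of one such round in the FedAvg style is shown below; local_train is a placeholder assumed to run on each client's private data, not an API from this paper.

```python
import copy

def federated_round(global_weights, clients, local_train):
    """One round of the generic FL loop described above.

    clients: iterable of opaque client handles; local_train(client, weights)
    is assumed to run training on the client's private data and return
    (updated_weights, num_local_examples). Raw data never reaches the server;
    only the trained weights and example counts do.
    """
    updates = [local_train(c, copy.deepcopy(global_weights)) for c in clients]
    total = sum(n for _, n in updates)
    return {
        name: sum((n / total) * w[name] for w, n in updates)   # example-weighted average
        for name in global_weights
    }
```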

Safe Distributional Reinforcement Learning

no code implementations • 26 Feb 2021 • Jianyi Zhang, Paul Weng

Safety in reinforcement learning (RL) is a key property in both training and execution in many domains such as autonomous driving or finance.

Autonomous Driving • Distributional Reinforcement Learning +2

Differentiable Logic Machines

no code implementations • 23 Feb 2021 • Matthieu Zimmer, Xuening Feng, Claire Glanois, Zhaohui Jiang, Jianyi Zhang, Paul Weng, Dong Li, Jianye Hao, Wulong Liu

As a step in this direction, we propose a novel neural-logic architecture, called differentiable logic machine (DLM), that can solve both inductive logic programming (ILP) and reinforcement learning (RL) problems, where the solution can be interpreted as a first-order logic program.

Decision Making • Inductive logic programming +1

FLOP: Federated Learning on Medical Datasets using Partial Networks

no code implementations • 10 Feb 2021 • Qian Yang, Jianyi Zhang, Weituo Hao, Gregory Spell, Lawrence Carin

While different data-driven deep learning models have been developed to aid the diagnosis of COVID-19, the data itself remains scarce due to patient privacy concerns.

Federated Learning

Self-Adversarially Learned Bayesian Sampling

no code implementations • 21 Nov 2018 • Yang Zhao, Jianyi Zhang, Changyou Chen

Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models.

Self-Learning

Variance Reduction in Stochastic Particle-Optimization Sampling

no code implementations • ICML 2020 • Jianyi Zhang, Yang Zhao, Changyou Chen

Stochastic particle-optimization sampling (SPOS) is a recently-developed scalable Bayesian sampling framework that unifies stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) algorithms based on Wasserstein gradient flows.

POS
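
The excerpt characterizes SPOS as unifying SG-MCMC and SVGD. As a rough, hedged sketch of those ingredients rather than the paper's exact update, the snippet below applies an SVGD-style interacting-particle step with an optional injected Gaussian noise term standing in for the SG-MCMC component; the RBF kernel, bandwidth, and step size are illustrative.

```python
import numpy as np

def particle_step(X, grad_log_p, step=1e-2, bandwidth=1.0, noise=0.0):
    """One SVGD-style interacting-particle update.

    X: (n, d) array of particles; grad_log_p(X) -> (n, d) gradients of the
    log target density. With noise > 0, a Gaussian perturbation is injected,
    loosely mimicking the stochastic term SPOS adds on top of SVGD.
    """
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                             # X_i - X_j, shape (n, n, d)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))    # RBF kernel matrix
    drive = K @ grad_log_p(X)                                         # kernel-weighted gradients
    repulse = (diff * K[:, :, None]).sum(axis=1) / bandwidth ** 2     # keeps particles spread out
    phi = (drive + repulse) / n
    return X + step * phi + np.sqrt(2 * step * noise) * np.random.randn(*X.shape)

# Usage: sample from a standard Gaussian target, whose grad log p(x) = -x.
X = np.random.randn(50, 2) * 3
for _ in range(200):
    X = particle_step(X, lambda x: -x, step=0.05)
```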

Towards More Theoretically-Grounded Particle Optimization Sampling for Deep Learning

no code implementations • 27 Sep 2018 • Jianyi Zhang, Ruiyi Zhang, Changyou Chen

With such theoretical guarantees, SPOS can be safely and effectively applied to both Bayesian DL and deep RL tasks.

POS • Reinforcement Learning (RL)

Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory

no code implementations • 5 Sep 2018 • Jianyi Zhang, Ruiyi Zhang, Lawrence Carin, Changyou Chen

Particle-optimization-based sampling (POS) is a recently developed effective sampling technique that interactively updates a set of particles.

POS
