Search Results for author: Huiping Zhuang

Found 14 papers, 6 papers with code

Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge

no code implementations • 8 Apr 2024 • Weikai Lu, Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Zelin Chen, Huiping Zhuang, Cen Chen

Jailbreaking attacks can enable Large Language Models (LLMs) to bypass their safeguards and generate harmful content.

General Knowledge

AOCIL: Exemplar-free Analytic Online Class Incremental Learning with Low Time and Resource Consumption

no code implementations • 23 Mar 2024 • Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, Lap-Pui Chau

Online Class Incremental Learning (OCIL) aims to train a model task by task, with data arriving one mini-batch at a time while previous data remain inaccessible.

Class Incremental Learning • Incremental Learning
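The OCIL protocol described above can be mocked with a toy stream. The `ClassMeanLearner` below is a hypothetical stand-in (a nearest-class-mean classifier), not the AOCIL method itself; it only illustrates the constraint that each mini-batch is seen once and old data are never revisited.

```python
class ClassMeanLearner:
    """Toy online learner: keeps only per-class running means, no raw exemplars."""
    def __init__(self):
        self.sums = {}    # class label -> running feature sum
        self.counts = {}  # class label -> sample count

    def partial_fit(self, batch):
        """Consume one mini-batch; the batch is not stored afterwards."""
        for features, label in batch:
            s = self.sums.setdefault(label, [0.0] * len(features))
            for i, v in enumerate(features):
                s[i] += v
            self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, features):
        """Assign the class whose running mean is nearest (squared L2)."""
        def dist(label):
            mean = [v / self.counts[label] for v in self.sums[label]]
            return sum((a - b) ** 2 for a, b in zip(features, mean))
        return min(self.sums, key=dist)

# Task 1 introduces classes 0 and 1; task 2 introduces class 2.
# Each mini-batch is discarded after one pass (exemplar-free constraint).
learner = ClassMeanLearner()
task1 = [[([0.0, 0.0], 0), ([1.0, 1.0], 1)], [([0.1, 0.0], 0)]]
task2 = [[([5.0, 5.0], 2)]]
for task in (task1, task2):
    for batch in task:
        learner.partial_fit(batch)  # previous batches are never revisited

print(learner.predict([4.8, 5.1]))  # → 2
```

The point of the sketch: because only per-class statistics are kept, memory does not grow with the stream, which is the resource constraint the OCIL setting targets.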

G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning

1 code implementation • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen

Generalized CIL (GCIL) addresses the CIL problem in a more realistic scenario, where incoming data mix categories and have unknown sample-size distributions, leading to intensified forgetting.

Class Incremental Learning • Incremental Learning

REAL: Representation Enhanced Analytic Learning for Exemplar-free Class-incremental Learning

no code implementations • 20 Mar 2024 • Run He, Huiping Zhuang, Di Fang, Yizhu Chen, Kai Tong, Cen Chen

DS-BPT pretrains the model on streams of both supervised learning and self-supervised contrastive learning (SSCL) for base-knowledge extraction.

Class Incremental Learning • Contrastive Learning • +1

Chimera: A Lossless Decoding Method for Accelerating Large Language Models Inference by Fusing all Tokens

no code implementations • 24 Feb 2024 • Ziqian Zeng, Jiahong Yu, Qianshi Pang, ZiHao Wang, Huiping Zhuang, Cen Chen

Within this framework, we introduce a lightweight draft model that effectively utilizes previously generated tokens to predict subsequent words.
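The draft-then-verify idea described above can be sketched in miniature. This is a hedged illustration of the general speculative-decoding pattern, not Chimera itself: the bigram lookup tables standing in for the target and draft models, and the greedy acceptance rule, are illustrative assumptions.

```python
def greedy_next(model, context):
    """Pick the most likely next token under a toy bigram lookup table."""
    return model.get(context[-1], "<eos>")

def draft_then_verify(target, draft, context, k=3):
    """Draft k tokens with the cheap model, then keep the prefix the
    expensive target model agrees with; outputs stay lossless because
    every kept token is what the target would have produced."""
    proposal = list(context)
    for _ in range(k):
        proposal.append(greedy_next(draft, proposal))
    accepted = list(context)
    for tok in proposal[len(context):]:
        if greedy_next(target, accepted) == tok:
            accepted.append(tok)  # target agrees: accept the drafted token
        else:
            # target disagrees: emit its own token and stop this round
            accepted.append(greedy_next(target, accepted))
            break
    return accepted

target = {"the": "cat", "cat": "sat", "sat": "down"}
draft  = {"the": "cat", "cat": "ran", "ran": "fast"}  # diverges after "cat"
print(draft_then_verify(target, draft, ["the"], k=3))  # → ['the', 'cat', 'sat']
```

When the draft agrees with the target, several tokens are committed per verification step, which is where the speed-up comes from.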

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

no code implementations • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

In this work, we propose a novel method, Convex-Concave Loss, which induces a high variance in the training-loss distribution through gradient descent.

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization
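One concrete reading of "clipping the model prediction" is to bound the norm of the logit vector before the cross-entropy loss; the sketch below assumes that form (the paper's exact formulation may differ). Bounding the logits bounds the loss any single mislabeled sample can contribute, which limits memorization of noisy labels.

```python
import math

def clipped_cross_entropy(logits, label, tau=2.0):
    """Softmax cross-entropy after clamping the logits' L2 norm to tau.
    With ||z|| <= tau, the per-sample loss is at most ~2*tau + log(K)."""
    norm = math.sqrt(sum(z * z for z in logits))
    if norm > tau:
        logits = [z * tau / norm for z in logits]
    m = max(logits)  # stable log-sum-exp
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

# A confidently wrong prediction on a mislabeled sample: the plain loss grows
# with the logit magnitude, while the clipped loss stays bounded.
plain   = clipped_cross_entropy([10.0, -10.0], label=1, tau=1e9)  # effectively unclipped
clipped = clipped_cross_entropy([10.0, -10.0], label=1, tau=2.0)
print(plain > 19.0, clipped < 3.0)  # → True True
```

Because the bound caps the gradient contribution of any one example, noisy labels cannot dominate training the way they can under an unbounded cross-entropy.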

Analytic Learning of Convolutional Neural Network For Pattern Recognition

no code implementations • 14 Feb 2022 • Huiping Zhuang, Zhiping Lin, Yimin Yang, Kar-Ann Toh

Training convolutional neural networks (CNNs) with back-propagation (BP) is time-consuming and resource-intensive, particularly given the need to visit the dataset multiple times.
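Analytic learning, in contrast to iterative BP, typically solves layer weights in closed form. A minimal pure-Python sketch of that flavor, assuming the usual regularized least-squares solution W = (XᵀX + γI)⁻¹XᵀY for a linear output layer; this illustrates the one-shot idea, not the paper's specific CNN procedure.

```python
def matmul(A, B):
    """Multiply rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + B[i][:] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [[M[i][j] / M[i][i] for j in range(n, len(M[0]))] for i in range(n)]

def analytic_fit(X, Y, gamma=1e-3):
    """One linear solve instead of many gradient epochs over the dataset."""
    Xt = transpose(X)
    G = matmul(Xt, X)
    for i in range(len(G)):
        G[i][i] += gamma  # ridge regularizer keeps G invertible
    return solve(G, matmul(Xt, Y))

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy features
Y = [[1.0], [0.0], [1.0]]                 # targets for one output unit
W = analytic_fit(X, Y)                    # W[0][0] ≈ 1, W[1][0] ≈ 0
```

The appeal, as the abstract notes, is that the dataset is visited once: the weights come from a single linear-algebra solve rather than repeated gradient passes.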

Accumulated Decoupled Learning: Mitigating Gradient Staleness in Inter-Layer Model Parallelization

no code implementations • 3 Dec 2020 • Huiping Zhuang, Zhiping Lin, Kar-Ann Toh

Decoupled learning is a branch of model parallelism which parallelizes the training of a network by splitting it depth-wise into multiple modules.

General Classification

Fully Decoupled Neural Network Learning Using Delayed Gradients

1 code implementation • 21 Jun 2019 • Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin

Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.
