Search Results for author: Wei-Chen Wang

Found 5 papers, 4 papers with code

Tiny Machine Learning: Progress and Futures

1 code implementation • 28 Mar 2024 • Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Song Han

By squeezing deep learning models into billions of IoT devices and microcontrollers (MCUs), we expand the scope of AI applications and enable ubiquitous intelligence.

PockEngine: Sparse and Efficient Fine-tuning in a Pocket

no code implementations • 26 Oct 2023 • Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Chen Wang, Wei-Ming Chen, Chuang Gan, Song Han

On-device learning and efficient fine-tuning enable continuous and privacy-preserving customization (e.g., locally fine-tuning large language models on personalized data).

Privacy Preserving
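
A minimal sketch of the general sparse fine-tuning idea behind this line of work: update only a small subset of parameters so that backward passes and optimizer state stay cheap enough for on-device learning. The model, layer names, and the bias-plus-head update rule below are illustrative assumptions, not PockEngine's actual compile-time API.

```python
import torch
import torch.nn as nn

# Toy model; layer indices/names here are hypothetical stand-ins.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),          # task head
)

# Freeze everything, then re-enable gradients only for the task head
# and the bias terms of earlier layers (a common cheap-update scheme).
for p in model.parameters():
    p.requires_grad = False
for name, p in model.named_parameters():
    if name.startswith("4.") or name.endswith("bias"):
        p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                  # gradients exist only for the sparse subset
optimizer.step()
```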

Detecting Label Errors in Token Classification Data

2 code implementations • 8 Oct 2022 • Wei-Chen Wang, Jonas Mueller

Mislabeled examples are a common issue in real-world data, particularly for tasks like token classification where many labels must be chosen on a fine-grained basis.

General Classification • Token Classification
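
A hedged sketch of the core idea: flag tokens whose given label receives low predicted probability under a trained model (a "self-confidence" score). The arrays and the 0.5 threshold below are made-up stand-ins; the paper's actual scoring methods are more refined than this minimal version.

```python
import numpy as np

# pred_probs[i]: (num_tokens_i, num_classes) model probabilities per sentence
# labels[i]:     (num_tokens_i,) given class index per token
pred_probs = [np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])]
labels = [np.array([0, 0, 1])]

candidates = []
for sent_idx, (probs, labs) in enumerate(zip(pred_probs, labels)):
    self_conf = probs[np.arange(len(labs)), labs]   # P(given label | token)
    for tok_idx in np.argsort(self_conf):
        if self_conf[tok_idx] < 0.5:                # crude threshold
            candidates.append((sent_idx, int(tok_idx), float(self_conf[tok_idx])))

print(candidates)   # [(0, 1, 0.2), (0, 2, 0.4)] -> likely mislabeled tokens
```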

On-Device Training Under 256KB Memory

1 code implementation • 30 Jun 2022 • Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, Song Han

To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors.

Quantization • Transfer Learning
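
A rough sketch of the Sparse Update idea in plain PyTorch: skip or mask gradients for layers and sub-tensors judged less important, so the backward pass and optimizer state fit a tiny memory budget. The layer choice and the channel slice here are arbitrary illustrations, not the output of the paper's contribution analysis.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                      nn.Conv2d(16, 32, 3), nn.ReLU(),
                      nn.Flatten(), nn.LazyLinear(10))

# Layer-level sparsity: no gradients at all for the first conv layer.
for p in model[0].parameters():
    p.requires_grad = False

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Sub-tensor sparsity: keep updates only for the first 8 output channels
# of the second conv layer; zero the rest before the optimizer step.
with torch.no_grad():
    model[2].weight.grad[8:] = 0
    model[2].bias.grad[8:] = 0

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-2)
optimizer.step()
```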
