Search Results for author: Peipei Zhou

Found 5 papers, 1 paper with code

Enabling On-Device Large Language Model Personalization with Self-Supervised Data Selection and Synthesis

no code implementations • 21 Nov 2023 • Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, Yiyu Shi

While it is possible to obtain annotations locally by directly asking users to provide preferred responses, such annotations must be sparse so as not to degrade the user experience.

Language Modelling · Large Language Model

Enabling Weakly-Supervised Temporal Action Localization from On-Device Learning of the Video Stream

no code implementations • 25 Aug 2022 • Yue Tang, Yawen Wu, Peipei Zhou, Jingtong Hu

To enable W-TAL models to learn from a long, untrimmed streaming video, we propose an efficient video learning approach that can directly adapt to new environments.

Action Detection · Weakly-supervised Temporal Action Localization · +1

Sustainable AI Processing at the Edge

no code implementations • 4 Jul 2022 • Sébastien Ollivier, Sheng Li, Yue Tang, Chayanika Chaudhuri, Peipei Zhou, Xulong Tang, Jingtong Hu, Alex K. Jones

In particular, we explore the use of processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with novel Racetrack memory PIM.

BIG-bench Machine Learning · Edge-computing

H2H: Heterogeneous Model to Heterogeneous System Mapping with Computation and Communication Awareness

1 code implementation • 29 Apr 2022 • Xinyi Zhang, Cong Hao, Peipei Zhou, Alex Jones, Jingtong Hu

The heterogeneity in ML models comes from multi-sensor perceiving and multi-task learning, i.e., multi-modality multi-task (MMMT), resulting in diverse deep neural network (DNN) layers and computation patterns.

Multi-Task Learning

EF-Train: Enable Efficient On-device CNN Training on FPGA Through Data Reshaping for Online Adaptation or Personalization

no code implementations • 18 Feb 2022 • Yue Tang, Xinyi Zhang, Peipei Zhou, Jingtong Hu

In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level parallelism-based convolution kernel that can achieve end-to-end training on resource-limited low-power edge-level FPGAs.
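
For readers unfamiliar with the term, the sketch below illustrates what channel-level parallelism in a convolution kernel means: the accumulation over a tile of input channels is done together, which is the loop an FPGA kernel would unroll in hardware. This is only a minimal NumPy sketch under assumed names; the tile size CPAR and the function conv2d_channel_parallel are illustrative and are not the authors' EF-Train implementation.

# Minimal sketch of channel-level-parallel convolution (illustrative only).
import numpy as np

CPAR = 4  # hypothetical channel-parallelism factor (tile of input channels)

def conv2d_channel_parallel(x, w):
    """x: (C_in, H, W), w: (C_out, C_in, K, K) -> (C_out, H-K+1, W-K+1)."""
    c_in, h, wd = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for co in range(c_out):
        for ci in range(0, c_in, CPAR):          # channel tile handled "in parallel"
            for i in range(h - k + 1):
                for j in range(wd - k + 1):
                    patch = x[ci:ci + CPAR, i:i + k, j:j + k]
                    out[co, i, j] += np.sum(patch * w[co, ci:ci + CPAR])
    return out

x = np.random.randn(8, 10, 10)
w = np.random.randn(3, 8, 3, 3)
print(conv2d_channel_parallel(x, w).shape)  # (3, 8, 8)

Because the same channel-tiled loop structure serves forward, backward, and weight-gradient passes, a single unified kernel of this shape can cover end-to-end training, which is the design point the abstract refers to.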

Domain Adaptation
