no code implementations • 21 Nov 2023 • Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, Yiyu Shi
While it is possible to obtain annotations locally by directly asking users to provide preferred responses, such annotations must remain sparse so as not to degrade the user experience.
no code implementations • 25 Aug 2022 • Yue Tang, Yawen Wu, Peipei Zhou, Jingtong Hu
To enable W-TAL models to learn from a long, untrimmed streaming video, we propose an efficient video learning approach that can directly adapt to new environments.
Tasks: Action Detection, Weakly-Supervised Temporal Action Localization (+1)
no code implementations • 4 Jul 2022 • Sébastien Ollivier, Sheng Li, Yue Tang, Chayanika Chaudhuri, Peipei Zhou, Xulong Tang, Jingtong Hu, Alex K. Jones
In particular, we explore the use of processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with novel Racetrack memory PIM.
1 code implementation • 29 Apr 2022 • Xinyi Zhang, Cong Hao, Peipei Zhou, Alex Jones, Jingtong Hu
The heterogeneity in ML models comes from multi-sensor perceiving and multi-task learning, i.e., multi-modality multi-task (MMMT), resulting in diverse deep neural network (DNN) layers and computation patterns.
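The MMMT structure described above — multiple sensor modalities fused into a shared representation that feeds several task-specific heads — can be illustrated with a minimal numpy sketch. This is a generic toy model, not the architecture or accelerator from the paper; all layer sizes and the camera/lidar naming are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared backbone: one dense layer over fused multi-modal features
# (sizes are arbitrary illustrative choices).
W_shared = rng.standard_normal((64, 32)) * 0.1

# Task-specific heads with different output sizes -- this shape
# diversity is what produces heterogeneous computation patterns.
W_detect = rng.standard_normal((32, 10)) * 0.1    # e.g., detection logits
W_segment = rng.standard_normal((32, 128)) * 0.1  # e.g., dense prediction

def forward(camera_feat, lidar_feat):
    # Multi-modality: concatenate features from two hypothetical sensors.
    fused = np.concatenate([camera_feat, lidar_feat], axis=-1)  # (batch, 64)
    h = relu(fused @ W_shared)                                  # shared trunk
    # Multi-task: every head consumes the same shared representation.
    return h @ W_detect, h @ W_segment

cam = rng.standard_normal((4, 32))
lid = rng.standard_normal((4, 32))
det, seg = forward(cam, lid)
print(det.shape, seg.shape)  # per-task outputs: (4, 10) and (4, 128)
```

The shared trunk amortizes one computation across tasks, while each head's distinct shape is the kind of layer-level heterogeneity an accelerator must accommodate.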
no code implementations • 18 Feb 2022 • Yue Tang, Xinyi Zhang, Peipei Zhou, Jingtong Hu
In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level parallelism-based convolution kernel that can achieve end-to-end training on resource-limited low-power edge-level FPGAs.
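Channel-level parallelism, as named above, can be sketched in plain Python: output channels of a convolution are independent of one another, so they can be processed in tiles, with each tile mapping to a parallel compute unit in hardware. This is a minimal illustrative sketch of the general idea, not the EF-Train kernel; the tile size and loop structure are assumptions for exposition.

```python
import numpy as np

def conv2d_channel_tiled(x, w, tile=4):
    """'Valid' 2-D convolution with output channels processed in tiles.

    x: input of shape (C_in, H, W)
    w: weights of shape (C_out, C_in, K, K)
    Output channels within a tile are independent, so on an FPGA each
    tile could be computed by parallel processing elements.
    """
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for co in range(0, c_out, tile):            # iterate over channel tiles
        for oc in range(co, min(co + tile, c_out)):  # channels in one tile
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    # Multiply-accumulate over all input channels at once.
                    out[oc, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[oc])
    return out
```

Because the same channel-tiled loop nest serves forward and backward passes alike (both reduce to convolutions), a single unified kernel of this shape is a natural fit for end-to-end training on a resource-limited device.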