1 code implementation • 3 Mar 2024 • Anudeex Shetty, Yue Teng, Ke He, Qiongkai Xu
Embedding as a Service (EaaS) has become a widely adopted solution that offers feature extraction capabilities for various downstream tasks in Natural Language Processing (NLP).
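The EaaS pattern the abstract refers to can be sketched minimally: a client sends text to a remote embedding service and uses the returned vectors for a downstream task such as retrieval. The `embed` function below is a local stand-in stub, not any provider's real API; a deployed client would issue a network call instead, and the vector dimension is an arbitrary assumption.

```python
import zlib
import numpy as np

def embed(texts, dim=16):
    """Stub for an EaaS call: deterministic pseudo-embeddings per text.

    A real EaaS client would POST `texts` to the provider's endpoint and
    parse the returned vectors; here we derive a reproducible unit vector
    from a CRC32 of each string so the sketch is self-contained.
    """
    vecs = []
    for t in texts:
        rng = np.random.default_rng(zlib.crc32(t.encode("utf-8")))
        v = rng.normal(size=dim)
        vecs.append(v / np.linalg.norm(v))
    return np.stack(vecs)

# Downstream use: nearest-neighbour retrieval by cosine similarity
# (all vectors are unit-norm, so a dot product is the cosine score).
corpus = ["deep learning", "wireless channels", "model distillation"]
E = embed(corpus)                      # shape (3, 16)
q = embed(["deep learning"])[0]        # identical text -> identical vector
scores = E @ q
best = corpus[int(np.argmax(scores))]  # the exact-match document wins
```

The stub's determinism mirrors a property real services aim for: the same input text yields the same embedding, so downstream indices stay consistent across calls.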
no code implementations • 21 Jan 2021 • Ke He, Le He, Lisheng Fan, Yansha Deng, George K. Karagiannidis, Arumugam Nallanathan
Existing detection methods have mainly focused on specific noise models and therefore lack robustness when the noise statistics are unknown.
no code implementations • 7 Jan 2021 • Le He, Ke He, Lisheng Fan, Xianfu Lei, Arumugam Nallanathan, George K. Karagiannidis
This indicates that the proposed algorithm achieves near-optimal efficiency in practical scenarios, making it applicable to large-scale systems.
no code implementations • 1 Jan 2021 • Lujun Li, Yikai Wang, Anbang Yao, Yi Qian, Xiao Zhou, Ke He
In this paper, we present Explicit Connection Distillation (ECD), a new knowledge distillation (KD) framework that addresses the problem from a novel perspective: bridging dense intermediate feature connections between a student network and a corresponding teacher generated automatically during training. Knowledge transfer is achieved via direct cross-network, layer-to-layer gradient propagation, without the need to define complex distillation losses or to assume that a pre-trained teacher model is available.
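The core mechanism described above — task gradients flowing from a teacher's layers directly into a student's weights through a feature connection, with no separate distillation loss — can be illustrated with a deliberately tiny NumPy sketch. This is an assumption-laden toy, not the paper's ECD algorithm: one student linear layer feeds its feature into one frozen teacher layer, and the task loss's gradient is propagated back through the teacher into the student.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # batch of inputs
y = rng.normal(size=(8, 2))          # regression targets

Ws = rng.normal(size=(4, 3)) * 0.1   # student layer (trainable)
Wt = rng.normal(size=(3, 2)) * 0.1   # teacher upper layer (frozen)

lr = 0.1
losses = []
for _ in range(50):
    h = x @ Ws                        # student feature
    out = h @ Wt                      # teacher layer consumes it directly
    err = out - y
    losses.append(float((err ** 2).mean()))
    # No distillation loss term: the task gradient alone is propagated
    # back through the frozen teacher layer into the student's weights.
    grad_h = (2.0 / err.size) * err @ Wt.T
    grad_Ws = x.T @ grad_h
    Ws -= lr * grad_Ws
```

The point of the sketch is the gradient path: `grad_Ws` is obtained by backpropagating the task error through `Wt`, so the teacher shapes the student's update even though no loss compares their features explicitly.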
no code implementations • 18 Nov 2019 • Ke He, Bo Liu, Yu Zhang, Andrew Ling, Dian Gu
In this paper, we first propose FeCaffe, i.e., FPGA-enabled Caffe, a hierarchical software and hardware design methodology based on Caffe that enables FPGAs to support mainline deep learning development features, e.g., training and inference with Caffe.