no code implementations • 19 Feb 2025 • Sichu Liang, Linhai Zhang, Hongyu Zhu, Wenwen Wang, Yulan He, Deyu Zhou
The current paradigm, Retrieval-Augmented Generation (RAG), acquires expert medical knowledge through large-scale corpus retrieval and uses this knowledge to guide a general-purpose large language model (LLM) in generating answers.
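A minimal sketch of this retrieve-then-generate pattern (the toy corpus, TF-IDF retriever, and prompt template are illustrative assumptions, not details from the paper):

    # Minimal RAG sketch: retrieve the top-k passages by TF-IDF similarity,
    # then prepend them to the prompt handed to a general-purpose LLM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [  # toy stand-in for a large-scale medical corpus
        "Metformin is a first-line treatment for type 2 diabetes.",
        "ACE inhibitors are commonly used to manage hypertension.",
        "Statins lower LDL cholesterol to reduce cardiovascular risk.",
    ]

    def retrieve(query, k=2):
        vec = TfidfVectorizer().fit(corpus + [query])
        scores = cosine_similarity(vec.transform([query]), vec.transform(corpus)).ravel()
        return [corpus[i] for i in scores.argsort()[::-1][:k]]

    def build_prompt(query):
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    print(build_prompt("How is type 2 diabetes treated?"))  # then sent to the LLM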
no code implementations • 22 Sep 2024 • Huafeng Qin, Hongyu Zhu, Xin Jin, Xin Yu, Mounim A. El-Yacoubi, Shuqiang Yang
First, we define a supernet and propose a Neural Architecture Search method that alternates between global and local search, finding the optimal architecture via differentiable neural architecture search.
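As a rough illustration of the differentiable-search ingredient, here is a DARTS-style mixed edge; the candidate operations and sizes are assumptions for the sketch, not the paper's search space:

    # DARTS-style mixed operation: a softmax over architecture weights
    # blends candidate ops, so the architecture itself can be learned by
    # gradient descent alongside the network weights.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 5, padding=2),
                nn.Identity(),
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch weights

        def forward(self, x):
            w = F.softmax(self.alpha, dim=0)
            return sum(wi * op(x) for wi, op in zip(w, self.ops))

    x = torch.randn(1, 8, 16, 16)
    print(MixedOp(8)(x).shape)  # torch.Size([1, 8, 16, 16])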
1 code implementation • 21 Sep 2024 • Hongyu Zhu, Wentao Hu, Sichu Liang, Fangqi Li, Wenwen Wang, Shilin Wang
Model extraction aims to create a functionally similar copy from a machine learning as a service (MLaaS) API with minimal overhead, typically for illicit profit or as a precursor to further attacks, posing a significant threat to the MLaaS ecosystem.
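The attack surface is easy to see in a toy sketch (the victim model, query budget, and surrogate below are hypothetical stand-ins, not the paper's setup):

    # Model-extraction sketch: query a black-box "victim" API with
    # attacker-chosen inputs, then train a surrogate on the returned labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_priv = rng.normal(size=(500, 10))
    y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)
    victim = RandomForestClassifier().fit(X_priv, y_priv)  # stands in for MLaaS

    X_query = rng.normal(size=(2000, 10))   # the attacker's query budget
    y_query = victim.predict(X_query)       # labels bought from the API
    surrogate = LogisticRegression().fit(X_query, y_query)

    agreement = (surrogate.predict(X_query) == y_query).mean()
    print(f"surrogate/victim agreement on queries: {agreement:.2f}")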
no code implementations • 18 Sep 2024 • Hongyu Zhu, Xin Jin, Hongchao Liao, Yan Xiang, Mounim A. El-Yacoubi, Huafeng Qin
Eye movement biometrics is a secure and innovative identification method.
1 code implementation • 8 Sep 2024 • Xin Jin, Hongyu Zhu, Siyuan Li, Zedong Wang, Zicheng Liu, Chang Yu, Huafeng Qin, Stan Z. Li
As deep neural networks have achieved remarkable breakthroughs over the past decade, data augmentation has garnered increasing attention as a regularization technique when massive labeled data are unavailable.
2 code implementations • 10 Jul 2024 • Huafeng Qin, Xin Jin, Hongyu Zhu, Hongchao Liao, Mounîm A. El-Yacoubi, Xinbo Gao
Mixup data augmentation approaches have been applied to various deep learning tasks to improve the generalization ability of deep neural networks.
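For reference, the classic mixup recipe (Zhang et al., 2018) that this line of work builds on is only a few lines:

    # mixup: train on convex combinations of input pairs and their one-hot
    # labels, with mixing ratio lambda drawn from Beta(alpha, alpha).
    import numpy as np

    def mixup_batch(x, y_onehot, alpha=0.2, rng=np.random.default_rng()):
        lam = rng.beta(alpha, alpha)
        perm = rng.permutation(len(x))
        x_mix = lam * x + (1 - lam) * x[perm]
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        return x_mix, y_mix

    x = np.random.rand(4, 32, 32, 3)             # toy image batch
    y = np.eye(10)[np.random.randint(0, 10, 4)]  # one-hot labels
    x_mix, y_mix = mixup_batch(x, y)
    print(x_mix.shape, y_mix.shape)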
no code implementations • 21 May 2024 • Xin Jin, Hongyu Zhu, Mounîm A. El Yacoubi, Haiyang Li, Hongchao Liao, Huafeng Qin, Yun Jiang
To enable CNNs to capture comprehensive feature representations from palm-vein images, we explored the effect of convolutional kernel size on the performance of palm-vein identification networks and designed LaKNet, a network leveraging large-kernel convolution and a gating mechanism.
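A minimal sketch of the two named ingredients, a large-kernel convolution modulated by a gate (the block layout and sizes are assumptions, not LaKNet's actual architecture):

    # Large-kernel convolution with a sigmoid gate; depthwise grouping keeps
    # the large kernel's parameter count manageable.
    import torch
    import torch.nn as nn

    class LargeKernelGatedBlock(nn.Module):
        def __init__(self, channels, kernel_size=13):
            super().__init__()
            self.spatial = nn.Conv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2, groups=channels)
            self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

        def forward(self, x):
            return self.spatial(x) * self.gate(x)  # gated feature path

    x = torch.randn(1, 16, 64, 64)  # e.g., palm-vein feature maps
    print(LargeKernelGatedBlock(16)(x).shape)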
no code implementations • 21 Apr 2024 • Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang
With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount.
no code implementations • 10 Jan 2024 • Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, Mounim A. El-Yacoubi, Xinbo Gao
To this end, we propose a mixed block consisting of three modules: a transformer, an attention long short-term memory (attention LSTM), and a Fourier transformer.
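One way to picture the three modules side by side (how the paper actually wires them together is not specified in this excerpt; the sequential composition below is purely illustrative):

    # The three named ingredients: self-attention, an LSTM, and an FFT-based
    # "Fourier" mixer, composed sequentially here only for illustration.
    import torch
    import torch.nn as nn

    class MixedBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
            self.lstm = nn.LSTM(dim, dim, batch_first=True)

        def forward(self, x):                    # x: (batch, seq, dim)
            h, _ = self.attn(x, x, x)            # transformer-style attention
            h, _ = self.lstm(h)                  # recurrent temporal modeling
            return torch.fft.fft(h, dim=1).real  # Fourier mixing along time

    x = torch.randn(2, 50, 32)  # e.g., eye-movement sequences
    print(MixedBlock(32)(x).shape)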
1 code implementation • 16 Sep 2023 • Hongyu Zhu, Sichu Liang, Wentao Hu, Fang-Qi Li, Yali Yuan, Shi-Lin Wang, Guang Cheng
As a modern ensemble technique, Deep Forest (DF) employs a cascading structure to construct deep models, providing stronger representational power compared to traditional decision forests.
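The cascading idea is straightforward to sketch; note that real implementations such as gcForest generate the augmented features with k-fold cross-validation, so the direct refit below is a simplification:

    # Deep Forest cascade sketch: each level's forests emit class
    # probabilities, which are concatenated to the original features and
    # fed to the next level.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    features = X
    for level in range(3):
        forests = [RandomForestClassifier(random_state=level).fit(features, y),
                   ExtraTreesClassifier(random_state=level).fit(features, y)]
        probas = np.hstack([f.predict_proba(features) for f in forests])
        features = np.hstack([X, probas])  # 20 original + 2 forests x 2 classes
    print(features.shape)  # (300, 24)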
no code implementations • 26 Feb 2022 • Hongyu Zhu, Stefan Klus, Tuhin Sahai
Our proposed method builds on the existing wave equation clustering algorithm, which is based on propagating waves through the graph.
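A small sketch of the underlying mechanic: iterate the discretized wave equation u_{t+1} = 2u_t - u_{t-1} - c^2 L u_t over the graph Laplacian L, then read cluster structure from each node's frequency content (the graph, constants, and readout here are illustrative):

    # Propagate a wave over a graph via its Laplacian, then inspect each
    # node's spectrum; nodes in the same cluster share dominant components.
    import numpy as np
    import networkx as nx

    G = nx.barbell_graph(5, 0)   # two dense cliques joined by one edge
    L = nx.laplacian_matrix(G).toarray().astype(float)
    n, c2, T = len(G), 0.1, 200  # c2 small enough for a stable iteration

    rng = np.random.default_rng(0)
    u_prev, u = np.zeros(n), rng.random(n)   # random initial displacement
    history = [u]
    for _ in range(T):
        u_prev, u = u, 2 * u - u_prev - c2 * (L @ u)
        history.append(u)

    spectra = np.abs(np.fft.rfft(np.array(history), axis=0))
    print(spectra.argmax(axis=0))  # dominant frequency bin per node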
1 code implementation • 16 Dec 2021 • Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, Haifeng Wang
For this purpose, we create a Chinese dataset, DuQM, which contains natural questions with linguistic perturbations to evaluate the robustness of question matching models.
no code implementations • 10 Oct 2021 • Dominic Flocco, Bryce Palmer-Toy, Ruixiao Wang, Hongyu Zhu, Rishi Sonthalia, Junyuan Lin, Andrea L. Bertozzi, P. Jeffrey Brantingham
The construction and application of knowledge graphs have seen a rapid increase across many disciplines in recent years.
no code implementations • 25 Feb 2021 • Jingjing Li, Zhuo Sun, Lei Zhang, Hongyu Zhu
The security constraints of this method are constructed using only the input and output signal samples of the legitimate and eavesdropper channels, with the benefit that training the encoder is completely independent of the decoder.
no code implementations • 5 Jun 2020 • Hongyu Zhu, Amar Phanishayee, Gennady Pekhimenko
Modern deep neural network (DNN) training jobs use complex and heterogeneous software/hardware stacks.
4 code implementations • CVPR 2019 • Yunbo Wang, Jianjin Zhang, Hongyu Zhu, Mingsheng Long, Jian-Min Wang, Philip S. Yu
Natural spatiotemporal processes can be highly non-stationary in many ways, e.g., low-level non-stationarity such as spatial correlations or temporal dependencies of local pixel values, and high-level variations such as the accumulation, deformation, or dissipation of radar echoes in precipitation forecasting.
Ranked #7 on Video Prediction on Human3.6M
no code implementations • 16 Mar 2018 • Hongyu Zhu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Amar Phanishayee, Bianca Schroeder, Gennady Pekhimenko
Our primary goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD (short for Training Benchmark for DNNs), which uses a representative set of DNN models covering a wide range of machine learning applications: image classification, machine translation, speech recognition, object detection, adversarial networks, and reinforcement learning; and (ii) performing an extensive performance analysis of training these different applications on three major deep learning frameworks (TensorFlow, MXNet, CNTK) across different hardware configurations (single-GPU, multi-GPU, and multi-machine).