no code implementations • 29 May 2025 • Jinwen Chen, Hainan Zhang, Fei Sun, Qinnan Zhang, Sijia Wen, Ziwei Wang, Zhiming Zheng
We then perform TF-IDF clustering on these suspicious samples to identify the truly poisoned samples based on intra-class distance.
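The clustering step described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: a hand-rolled TF-IDF and an intra-class distance measure, using the intuition that near-duplicate poisoned samples (e.g. sharing a trigger phrase) cluster more tightly than clean text.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # document frequency over the corpus, term frequency per document
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vocab = sorted(df)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[w] / len(toks) * math.log(n / df[w]) for w in vocab])
    return vecs

def mean_intra_class_distance(vecs):
    # average Euclidean distance of each vector to the cluster centroid;
    # a tight (small-distance) cluster of suspicious samples is a poisoning signal
    dim = len(vecs[0])
    centroid = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return sum(math.dist(v, centroid) for v in vecs) / len(vecs)
```

Identical trigger-bearing samples yield identical TF-IDF vectors, so their intra-class distance collapses to zero, while varied clean text does not.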
1 code implementation • 21 May 2025 • Zishuai Zhang, Hainan Zhang, JiaYing Zheng, Ziwei Wang, Yongxin Tong, Jin Dong, Zhiming Zheng
However, it still faces significant challenges in security, efficiency, and adaptability: 1) embedding gradients are vulnerable to attacks, leading to reverse engineering of private data; 2) the autoregressive nature of LLMs means that federated split learning can only train and infer sequentially, causing high communication overhead; 3) fixed partition points lack adaptability to downstream tasks.
1 code implementation • 28 Apr 2025 • LingXiang Wang, Hainan Zhang, Qinnan Zhang, Ziwei Wang, Hongwei Zheng, Jin Dong, Zhiming Zheng
To address this challenge, we introduce CodeBC, a code generation model specifically designed for generating secure smart contracts in blockchain.
no code implementations • 17 Feb 2025 • Qianchi Zhang, Hainan Zhang, Liang Pang, Ziwei Wang, Hongwei Zheng, Yongxin Tong, Zhiming Zheng
Unlike these document-level operations, we treat noise filtering as a sentence-level MinMax optimization problem: first identifying potential clues from multiple documents, then ranking them by relevance, and finally retaining the minimum number of clues through truncation.
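The identify-rank-truncate pipeline above can be sketched in miniature. Word overlap stands in for the learned relevance scorer, and `filter_clues` and `max_clues` are hypothetical names, not the paper's API:

```python
def filter_clues(query, documents, max_clues=3):
    # 1) identify: split documents into candidate clue sentences
    sentences = [s.strip() for doc in documents
                 for s in doc.split('.') if s.strip()]
    q_terms = set(query.lower().split())
    # 2) rank: sort by query-term overlap (stand-in for a learned scorer)
    scored = sorted(sentences,
                    key=lambda s: len(q_terms & set(s.lower().split())),
                    reverse=True)
    # 3) truncate: retain the minimum number of top-ranked sentences
    #    that together cover the query terms
    kept, covered = [], set()
    for s in scored:
        if covered >= q_terms or len(kept) >= max_clues:
            break
        kept.append(s)
        covered |= set(s.lower().split()) & q_terms
    return kept
```

A single sentence that covers all query terms survives truncation alone, while irrelevant sentences from the same documents are dropped entirely.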
no code implementations • 3 Sep 2024 • Qianchi Zhang, Hainan Zhang, Liang Pang, Hongwei Zheng, Zhiming Zheng
Specifically, we first annotate the minimum top-k documents necessary for the RAG system to answer the current query as the compression rate, and then construct triplets of the query, the retrieved documents, and this compression rate.
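The triplet construction described above can be sketched as follows; answer-string containment is used here as a crude stand-in for "sufficient for the RAG system to answer", and `build_triplet` is a hypothetical helper, not the paper's code:

```python
def build_triplet(query, ranked_docs, answer):
    # find the minimum top-k whose concatenation contains the gold answer;
    # that k is annotated as the compression rate for this query
    for k in range(1, len(ranked_docs) + 1):
        if answer.lower() in " ".join(ranked_docs[:k]).lower():
            return {"query": query,
                    "documents": ranked_docs,
                    "compression_rate": k}
    return None  # answer not recoverable from the retrieved set
```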
no code implementations • 31 Aug 2024 • Cheng Qian, Hainan Zhang, Lei Sha, Zhiming Zheng
With the growing deployment of LLMs in daily applications like chatbots and content generation, efforts to ensure outputs align with human values and avoid harmful content have intensified.
1 code implementation • 30 Aug 2024 • Yujing Wang, Hainan Zhang, Liang Pang, Binghui Guo, Hongwei Zheng, Zhiming Zheng
Inspired by RLAIF, we train three kinds of reward models for the above metrics to achieve more efficient training.
1 code implementation • 21 Jun 2024 • JiaYing Zheng, Hainan Zhang, LingXiang Wang, Wangjie Qiu, Hongwei Zheng, Zhiming Zheng
An alternative, split learning, offloads most training parameters to the server while training the embedding and output layers locally, making it more suitable for LLMs.
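The split layout described above can be illustrated with a toy forward pass (NumPy in place of a real LLM stack; all names and shapes here are assumptions for illustration): the client keeps the embedding and output layers, the server holds the bulk of the model, and only hidden states cross the boundary, never raw tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 100, 16

# client-side parameters: embedding and output layers stay local
emb = rng.normal(size=(vocab, d))
out = rng.normal(size=(d, vocab))
# server-side parameters: the bulk of the model
w_server = rng.normal(size=(d, d))

def client_embed(token_ids):
    return emb[token_ids]           # hidden states are sent, not raw tokens

def server_forward(h):
    return np.tanh(h @ w_server)    # heavy computation offloaded to the server

def client_head(h):
    return h @ out                  # logits computed locally on the client

logits = client_head(server_forward(client_embed([1, 2, 3])))
```

The entry's security concern is visible even in this sketch: the embedding gradients flowing back across the same boundary are what an attacker would target.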
no code implementations • 30 Dec 2023 • GuoJian Wang, Faguo Wu, Xiao Zhang, Tianyuan Chen, Zhiming Zheng
The sparsity of reward feedback remains a challenging problem in online deep reinforcement learning (DRL).
1 code implementation • 27 Dec 2023 • GuoJian Wang, Faguo Wu, Xiao Zhang, Ning Guo, Zhiming Zheng
Deep reinforcement learning (DRL) faces significant challenges in addressing the hard-exploration problems in tasks with sparse or deceptive rewards and large state spaces.
no code implementations • 26 Jun 2023 • Xue Liu, Dan Sun, Wei Wei, Zhiming Zheng
This approach incorporates the physics-based heat kernel and DropNode technique to transform each static graph into a sequence of temporal ones.
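The transformation described above can be sketched with NumPy: the heat kernel exp(-tL) diffuses the graph at a given time scale, and a random DropNode mask is applied before each snapshot. The function names and the eigendecomposition route are illustrative assumptions, not the paper's code.

```python
import numpy as np

def heat_kernel(adj, t):
    # H_t = exp(-t L) via eigendecomposition of the symmetric graph Laplacian
    lap = np.diag(adj.sum(axis=1)) - adj
    w, v = np.linalg.eigh(lap)
    return v @ np.diag(np.exp(-t * w)) @ v.T

def temporal_sequence(adj, times, drop_prob=0.1, seed=0):
    # one diffused snapshot per time scale, each with its own DropNode mask
    rng = np.random.default_rng(seed)
    snapshots = []
    for t in times:
        keep = rng.random(adj.shape[0]) >= drop_prob   # DropNode mask
        masked = adj * np.outer(keep, keep)            # drop a node's edges
        snapshots.append(heat_kernel(masked, t))
    return snapshots
```

Because the Laplacian annihilates the constant vector, each heat-kernel snapshot's rows sum to one, i.e. diffusion conserves mass at every time scale.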
no code implementations • 7 Jul 2022 • Yaqian Yang, Zhiming Zheng, Longzhao Liu, Hongwei Zheng, Yi Zhen, Yi Zheng, Xin Wang, Shaoting Tang
Specifically, low-frequency eigenmodes, which are considered sufficient to capture the essence of the functional network, contribute little to functional connectivity reconstruction in transmodal regions, resulting in structure-function decoupling along the unimodal-transmodal gradient.
no code implementations • 11 Jun 2022 • Jingcheng Zhou, Wei Wei, Xing Li, Bowen Pang, Zhiming Zheng
Deep learning with deep neural networks (DNNs) has recently achieved great success in many important areas, such as computer vision, natural language processing, and recommendation systems.
no code implementations • 12 May 2021 • Hexiong Li, Xin Jiang, Guanying Huo, Cheng Su, Bolun Wang, Yifei Hu, Zhiming Zheng
Considering kinematic limitations and machining efficiency, a time-optimal feed rate adjustment algorithm is proposed to further adjust the feed rate at break points.
no code implementations • 31 Mar 2021 • Jingcheng Zhou, Wei Wei, Zhiming Zheng
First-order methods like stochastic gradient descent (SGD) are currently the popular choice for training deep neural networks (DNNs), but second-order methods are scarcely used because of the prohibitive computational cost of obtaining higher-order information.
no code implementations • 26 Mar 2021 • Yifei Hu, Xin Jiang, Guanying Huo, Cheng Su, Bolun Wang, Hexiong Li, Zhiming Zheng
The algorithm consists of three modules: a bidirectional scanning module, a velocity scheduling module, and a round-off error elimination module.
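The bidirectional scanning idea can be sketched as follows; this is the textbook forward/backward velocity-limit scan under a constant acceleration bound (v² = v₀² + 2·a·Δs), not the paper's full scheduler, and the parameter names are assumptions:

```python
def bidirectional_scan(v_limits, a_max, ds):
    # forward pass: cap each sample by the velocity reachable while
    # accelerating from the previous one over arc length ds
    v = list(v_limits)
    v[0] = 0.0
    for i in range(1, len(v)):
        v[i] = min(v[i], (v[i-1]**2 + 2.0 * a_max * ds) ** 0.5)
    # backward pass: cap by what deceleration toward the final stop allows
    v[-1] = 0.0
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], (v[i+1]**2 + 2.0 * a_max * ds) ** 0.5)
    return v
```

The result starts and ends at rest and never exceeds either the commanded feed rate limits or the acceleration bound, which is why the two passes together produce a feasible profile in a single sweep each way.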
no code implementations • 2 Jan 2021 • Xing Li, Wei Wei, Xiangnan Feng, Zhiming Zheng
Graphs are often used to organize data because of their simple topological structure, and therefore play a key role in machine learning.
no code implementations • 31 Jul 2020 • Xing Li, Wei Wei, Xiangnan Feng, Xue Liu, Zhiming Zheng
The graph is a commonly used data structure, and it turns out that low-dimensional embedded representations of its nodes are extremely useful in various typical tasks, such as node classification and link prediction.
no code implementations • 18 Jul 2019 • Ying Shi, Wei Wei, Zhiming Zheng
Zero-shot learning (ZSL) aims to recognize the novel object categories using the semantic representation of categories, and the key idea is to explore the knowledge of how the novel class is semantically related to the familiar classes.