no code implementations • 26 May 2025 • Li Zeng, Zeming Liu, Chong Feng, Heyan Huang, Yuhang Guo
Model editing aims to correct errors and outdated knowledge in large language models (LLMs) at minimal cost.
1 code implementation • 14 May 2025 • Hongxin Xiang, Ke Li, Mingquan Liu, Zhixiang Cheng, Bin Yao, Wenjie Du, Jun Xia, Li Zeng, Xin Jin, Xiangxiang Zeng
Existing molecular machine learning force fields (MLFFs) generally focus on learning atoms, molecules, and simple quantum chemical properties (such as energy and force), but overlook the importance of electron density (ED) $\rho(r)$ for accurately understanding molecular force fields (MFFs).
no code implementations • 18 Apr 2025 • Yi Xiong, Jinqi Huang, Wenjie Huang, Xuebing Yu, Entong Li, Zhixiong Ning, Jinhua Zhou, Li Zeng, Xin Chen
Second, LLM inference instances within a heterogeneous cluster have varying processing capacities, leading to different processing speeds when handling inference requests.
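A minimal sketch of capacity-weighted dispatch that illustrates the heterogeneity issue (this is a generic baseline, not the scheduler proposed in the paper; instance names, throughput figures, and the function name are made up):

```python
import random

# Hypothetical instances with measured throughputs (requests/sec); values are illustrative.
instances = {"a100-node": 12.0, "v100-node": 6.0, "t4-node": 2.5}

def pick_instance():
    """Capacity-weighted dispatch: faster instances receive proportionally more requests,
    unlike uniform round-robin, which overloads the slowest instance."""
    names, weights = zip(*instances.items())
    return random.choices(names, weights=weights, k=1)[0]
```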
no code implementations • 7 Oct 2024 • Li Zeng, Yingyu Shan, Zeming Liu, Jiashu Yao, Yuhang Guo
To facilitate the application of model editing in real-world scenarios, we introduce the challenge of practicality.
no code implementations • 2 Sep 2024 • Zhixiang Cheng, Hongxin Xiang, Pengsen Ma, Li Zeng, Xin Jin, Xixi Yang, Jianxin Lin, Yang Deng, Bosheng Song, Xinxin Feng, Changhui Deng, Xiangxiang Zeng
Activity cliffs, pairs of molecules that are structurally similar but differ significantly in potency, can lead to representation collapse and make it challenging for models to distinguish them.
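A minimal sketch of how activity-cliff pairs can be flagged, assuming Tanimoto similarity over binary fingerprints and a 100-fold potency gap as criteria (thresholds and function names are illustrative, not taken from the paper):

```python
from itertools import combinations

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two binary fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def activity_cliff_pairs(mols, sim_thresh=0.9, fold_change=100.0):
    """Flag pairs that are structurally similar yet differ sharply in potency.

    mols: list of (id, fingerprint_bits, potency) tuples; potency on a linear scale.
    """
    cliffs = []
    for (ia, fa, pa), (ib, fb, pb) in combinations(mols, 2):
        similar = tanimoto(fa, fb) >= sim_thresh
        large_gap = max(pa, pb) / max(min(pa, pb), 1e-12) >= fold_change
        if similar and large_gap:
            cliffs.append((ia, ib))
    return cliffs
```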
no code implementations • 24 May 2024 • Jing Li, Zhijie Sun, Dachao Lin, Xuan He, Yi Lin, Binfan Zheng, Li Zeng, Rongqian Zhao, Xin Chen
(3) Theoretical derivation and experimental evidence of reduced expert capacity bounds under dynamic token distribution evolution.
1 code implementation • 8 Apr 2024 • Tianyu Chen, Yiming Zhang, Guoxin Yu, Dapeng Zhang, Li Zeng, Qing He, Xiang Ao
In this paper, we extend financial sentiment analysis (FSA) to the event level, since events usually serve as the subject of sentiment in financial text.
no code implementations • 6 Apr 2024 • Tianle Pu, Changjun Fan, Mutian Shen, Yizhou Lu, Li Zeng, Zohar Nussinov, Chao Chen, Zhong Liu
The technique originated in physics, but it is very effective in enabling RL agents to explore and continuously improve solutions at test time.
no code implementations • 2 Apr 2024 • Yuanming Shi, Li Zeng, Jingyang Zhu, Yong Zhou, Chunxiao Jiang, Khaled B. Letaief
Although promising, the dynamics of LEO networks, characterized by the high mobility of satellites and short ground-to-satellite link (GSL) duration, pose unique challenges for FEEL.
no code implementations • 25 Jan 2024 • Jing Li, Zhijie Sun, Xuan He, Li Zeng, Yi Lin, Entong Li, Binfan Zheng, Rongqian Zhao, Xin Chen
However, the performance of MoE is limited by load imbalance and the high latency of All-to-All communication, along with redundant computation caused by large expert capacity.
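As a rough illustration of the capacity issue, the sketch below uses the common top-k MoE convention capacity = capacity_factor * tokens / num_experts (an assumption, not necessarily this work's exact formulation; the function name is hypothetical) and measures how skewed routing produces both dropped tokens and idle, padded slots:

```python
import numpy as np

def expert_load_stats(expert_ids: np.ndarray, num_experts: int, capacity_factor: float = 1.25):
    """Measure load imbalance and overflow for one routing step.

    expert_ids: array of shape (num_tokens,) giving the expert chosen for each token.
    """
    num_tokens = expert_ids.shape[0]
    capacity = int(np.ceil(capacity_factor * num_tokens / num_experts))
    counts = np.bincount(expert_ids, minlength=num_experts)
    overflow = np.clip(counts - capacity, 0, None).sum()     # tokens dropped or rerouted
    idle_slots = np.clip(capacity - counts, 0, None).sum()   # padded (wasted) compute
    imbalance = counts.max() / max(counts.mean(), 1e-9)      # >1 indicates skewed routing
    return {"capacity": capacity, "overflow": int(overflow),
            "idle_slots": int(idle_slots), "imbalance": float(imbalance)}
```

Raising capacity_factor reduces overflow but inflates idle slots, which is the redundant-computation trade-off the sentence refers to.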
no code implementations • 7 Aug 2023 • Kerui Huang, Jianhong Tian, Lei Sun, Li Zeng, Peng Xie, Aihua Deng, Ping Mo, Zhibo Zhou, Ming Jiang, Yun Wang, Xiaocheng Jiang
Gene mining is an important topic in the life sciences, but traditional machine learning methods cannot account for the regulatory relationships between genes.
1 code implementation • 8 Jun 2023 • Xuan Lin, Lichang Dai, Yafang Zhou, Zu-Guo Yu, Wen Zhang, Jian-Yu Shi, Dong-Sheng Cao, Li Zeng, Haowen Chen, Bosheng Song, Philip S. Yu, Xiangxiang Zeng
Recent advances in artificial intelligence (AI), together with deep and graph learning models, have established their usefulness in biomedical applications, especially for drug-drug interactions (DDIs).
no code implementations • 15 May 2023 • Li Zeng, Xiaoliang Wan, Tao Zhou
In this paper, we develop an invertible mapping, called B-KRnet, on a bounded domain and apply it to density estimation/approximation for data or the solutions of PDEs such as the Fokker-Planck equation and the Keller-Segel equation.
no code implementations • 8 Nov 2022 • Xiaodong Feng, Li Zeng
In this work, we propose a gradient-enhanced deep neural network (DNN) approach for function approximation and uncertainty quantification.
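A minimal sketch of the gradient-enhanced idea under assumed conventions (the architecture, the weighting lam, and the exact loss form are illustrative): the network is fit to both observed function values and observed gradients, with the model's own gradient obtained by automatic differentiation.

```python
import torch
import torch.nn as nn

# Toy surrogate for a scalar function of d=2 inputs.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def gradient_enhanced_loss(x, y, dy, lam=1.0):
    """Fit both function values y and gradients dy at sample points x.

    x: (N, d) inputs, y: (N, 1) target values, dy: (N, d) target gradients.
    """
    x = x.requires_grad_(True)
    pred = net(x)
    # d(pred)/dx per sample via autograd (samples are independent, so summing is safe).
    grad_pred = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    value_loss = ((pred - y) ** 2).mean()
    grad_loss = ((grad_pred - dy) ** 2).mean()
    return value_loss + lam * grad_loss
```

The combined loss is then minimized as usual, e.g. `gradient_enhanced_loss(x, y, dy).backward()`, so gradient observations act as extra supervision at each sample point.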
no code implementations • 26 Oct 2022 • Li Zeng, Xiaoliang Wan, Tao Zhou
To this end, we represent the solution with an explicit PDF model induced by a flow-based deep generative model, simplified KRnet, which constructs a transport map from a simple distribution to the target distribution.
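The explicit PDF follows from the change-of-variables formula $\log p_X(x) = \log p_Z(f(x)) + \log|\det \partial f/\partial x|$, where $f$ maps the target back to the simple base distribution (the inverse of the transport map). The toy sketch below substitutes an affine map for KRnet, which is a much deeper triangular flow, so it only illustrates the mechanics:

```python
import torch
from torch.distributions import Normal

class AffineFlow(torch.nn.Module):
    """Toy invertible map z = exp(log_a) * x + b standing in for KRnet's transport map."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = torch.nn.Parameter(torch.zeros(dim))
        self.b = torch.nn.Parameter(torch.zeros(dim))

    def log_prob(self, x):
        # Change of variables: log p_X(x) = log p_Z(f(x)) + log|det df/dx|.
        z = torch.exp(self.log_a) * x + self.b
        base = Normal(torch.zeros_like(z), torch.ones_like(z))
        return base.log_prob(z).sum(-1) + self.log_a.sum()

flow = AffineFlow(dim=2)
x = torch.randn(128, 2)
pdf = flow.log_prob(x).exp()  # explicit, normalized PDF usable, e.g., inside a PDE residual loss
```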
1 code implementation • 28 Dec 2021 • Xiaodong Feng, Li Zeng, Tao Zhou
In this work, we propose an adaptive learning approach based on temporal normalizing flows for solving time-dependent Fokker-Planck (TFP) equations.
no code implementations • 29 Sep 2021 • Li Zeng, Baifan Zhou, Mohammad Al-Rifai, Evgeny Kharlamov
We propose a neural network approach, SegTime, that finds precise breakpoints, obviates sliding windows, handles long-term dependencies, and is insensitive to the label-changing frequency.
1 code implementation • 26 Aug 2020 • Li Zeng, Zhaolong Yu, Yiliang Zhang, Hongyu Zhao
Predictive modeling based on genomic data has gained popularity in biomedical research and clinical practice by allowing researchers and clinicians to identify biomarkers and tailor treatment decisions more efficiently.
1 code implementation • 24 May 2019 • Changjun Fan, Li Zeng, Yuhui Ding, Muhao Chen, Yizhou Sun, Zhong Liu
Trained on small-scale networks, the learned model can assign relative BC scores to nodes in any unseen network and thus identify the highly ranked nodes.
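A hedged sketch of the training-data setup this implies (the random-graph generator, graph sizes, and function name are illustrative assumptions): exact betweenness centrality is computed on small synthetic graphs, which is feasible at that scale, to provide relative ranking targets for the learned model.

```python
import networkx as nx

def make_training_graphs(num_graphs=100, n_nodes=100, p=0.05):
    """Small random graphs with exact betweenness centrality as ranking targets."""
    data = []
    for _ in range(num_graphs):
        g = nx.gnp_random_graph(n_nodes, p)
        bc = nx.betweenness_centrality(g)           # exact BC, cheap on small graphs
        ranked = sorted(bc, key=bc.get, reverse=True)
        data.append((g, bc, ranked))                 # the model learns to reproduce this ranking
    return data
```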
1 code implementation • 11 Mar 2018 • Li Zeng, Zhaolong Yu, Hongyu Zhao
Most of these methods focus on testing the marginal significance of associations between pathways and clinical phenotypes.