no code implementations • 29 Sep 2024 • Mohamed A. Radwan, Himaghna Bhattacharjee, Quinn Lanners, Jiasheng Zhang, Serkan Karakulak, Houssam Nassif, Murat Ali Bayir
We propose a domain-adapted reward model that works alongside an offline A/B testing system for evaluating ranking models.
1 code implementation • 1 Aug 2024 • Jiasheng Zhang, Rex Ying, Jie Shao
When new knowledge emerges, AnoT maps it onto a node in the rule graph and traverses the rule graph recursively to derive the anomaly score of the knowledge.
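The recursive traversal described above can be sketched as follows. This is a minimal illustration of the idea of deriving an anomaly score by walking a rule graph, not the actual AnoT implementation; the node names, violation scores, and the depth-damped averaging rule are all assumptions made for the example.

```python
# Hypothetical sketch: recursively score a new fact against a rule graph.
# The graph layout, "violation" scores, and combination rule are
# illustrative assumptions, not the actual AnoT algorithm.

def anomaly_score(node, rules, depth=0, max_depth=3):
    """Recursively combine a node's local violation with its children's scores."""
    local = rules[node]["violation"]          # how strongly this rule is violated
    children = rules[node]["children"]
    if depth >= max_depth or not children:
        return local
    # Average the child scores so that rules near the matched node dominate.
    child_avg = sum(anomaly_score(c, rules, depth + 1, max_depth)
                    for c in children) / len(children)
    return 0.5 * local + 0.5 * child_avg

# Toy rule graph: a new fact maps onto rule "r0", which depends on "r1" and "r2".
rules = {
    "r0": {"violation": 0.2, "children": ["r1", "r2"]},
    "r1": {"violation": 0.9, "children": []},
    "r2": {"violation": 0.1, "children": []},
}
score = anomaly_score("r0", rules)
```

A high violation anywhere along the traversal (here `r1`) pulls the fact's overall anomaly score up, which is the intuition behind scoring new knowledge against the rules it recursively depends on.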
1 code implementation • 17 Jun 2024 • Jiasheng Zhang, Jialin Chen, Menglin Yang, Aosong Feng, Shuang Liang, Jie Shao, Rex Ying
Moreover, we conduct extensive benchmark experiments on DTGB, evaluating 7 popular dynamic graph learning algorithms and their variants adapted to text attributes via LLM embeddings, along with 6 powerful large language models (LLMs).
2 code implementations • journal 2023 • Jiasheng Zhang, Jie Shao, Bin Cui
To reduce the parameter size, entity representations in StreamE are decoupled from the model training and serve as a memory module that stores the historical information of entities.
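The decoupling idea can be sketched roughly like this: entity vectors live in an external memory rather than in the trained parameter matrix, and the model only learns how to read and update them. The class name, the zero-initialization of unseen entities, and the exponential-moving-average update rule below are illustrative assumptions, not StreamE's actual update functions.

```python
# Sketch: entity embeddings kept in an external memory, outside the
# trained parameters. All names and the EMA-style update rule are
# illustrative assumptions, not the StreamE implementation.

class EntityMemory:
    def __init__(self, dim):
        self.dim = dim
        self.store = {}                       # entity id -> embedding (list of floats)

    def read(self, entity):
        # Unseen entities start from a zero vector instead of a trained row.
        return self.store.get(entity, [0.0] * self.dim)

    def update(self, entity, message, decay=0.9):
        # Blend historical state with the new event message, so history
        # is preserved without retraining any model parameters.
        old = self.read(entity)
        self.store[entity] = [decay * o + (1 - decay) * m
                              for o, m in zip(old, message)]

mem = EntityMemory(dim=2)
mem.update("alice", [1.0, 2.0])               # first event for this entity
mem.update("alice", [3.0, 4.0])               # history retained, not retrained
```

Because the memory grows with the entity set rather than with the trained model, the learned parameter size stays constant as new entities stream in.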
1 code implementation • Knowledge-Based Systems 2022 • Jiasheng Zhang, Shuang Liang, Yongpan Sheng, Jie Shao
Temporal knowledge graph (TKG) representation learning aims to project the entities and relations of a TKG into a low-dimensional vector space while preserving the TKG's evolutionary nature.
1 code implementation • 6 Jun 2022 • Yu Fang, Jiancheng Liu, Mingrui Zhang, Jiasheng Zhang, Yidong Ma, Minchen Li, Yuanming Hu, Chenfanfu Jiang, Tiantian Liu
Differentiable physics enables efficient gradient-based optimizations of neural network (NN) controllers.
no code implementations • 12 Nov 2020 • Haoxiang Wang, Jiasheng Zhang, Chenbei Lu, Chenye Wu
In this paper, we cast one-shot non-intrusive load monitoring (NILM) in the compressive sensing framework, and bridge the gap between the theoretical accuracy of NILM inference and the parameters of differential privacy.
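The compressive-sensing view of NILM treats the aggregate meter reading as a sparse combination of per-appliance signatures, so disaggregation becomes sparse recovery. The toy below illustrates only that framing with a brute-force search for the sparsest on/off pattern; a real system would use a sparse-recovery solver (e.g. OMP or LASSO), and the appliance signatures are made-up numbers.

```python
# Toy stand-in for the compressive-sensing framing of one-shot NILM:
# the aggregate trace is a sparse sum of appliance signatures, and we
# look for the sparsest on/off pattern explaining it. The brute-force
# search and the signatures are illustrative assumptions.

from itertools import combinations

SIGNATURES = {                                # power trace over 4 intervals (W)
    "fridge": [120, 120, 120, 120],
    "kettle": [0, 2000, 2000, 0],
    "tv":     [80, 80, 80, 80],
}

def disaggregate(aggregate, tol=1.0):
    """Return the smallest appliance set whose summed trace matches."""
    names = list(SIGNATURES)
    for k in range(len(names) + 1):           # prefer sparser explanations
        for subset in combinations(names, k):
            total = [sum(SIGNATURES[n][t] for n in subset)
                     for t in range(4)]
            if all(abs(a - b) <= tol for a, b in zip(aggregate, total)):
                return set(subset)
    return None

reading = [200, 2200, 2200, 200]              # an aggregate meter trace
```

Preferring the sparsest consistent explanation is the compressive-sensing assumption: only a few appliances change state in any short window.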
1 code implementation • WS 2019 • Yang Xu, Jiasheng Zhang, David Reitter
We use a variant of a word embedding model that incorporates subword information to characterize the degree of compositionality in lexical semantics.
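The subword idea can be sketched in the fastText style: a word's vector is the sum of its character n-gram vectors, so morphologically related forms share structure. The hash-derived n-gram vectors below stand in for learned embeddings, and measuring compositionality via cosine similarity of composed vectors is an illustrative assumption, not the paper's exact model.

```python
# Sketch of subword-aware embeddings (fastText-style): word vector =
# sum of character n-gram vectors. Hash-based vectors stand in for
# learned ones; this is an illustration, not the paper's model.

import hashlib
import math

def char_ngrams(word, n=3):
    padded = f"<{word}>"                      # boundary markers, as in fastText
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def ngram_vec(ngram, dim=16):
    # Deterministic pseudo-random vector per n-gram (stand-in for learned ones).
    h = hashlib.md5(ngram.encode()).digest()
    return [(b - 127.5) / 127.5 for b in h[:dim]]

def word_vec(word, dim=16):
    vecs = [ngram_vec(g, dim) for g in char_ngrams(word)]
    return [sum(c) for c in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Shared n-grams (e.g. "com", "pos") give related forms overlapping vectors,
# which is what lets such a model probe compositionality in word meaning.
sim = cosine(word_vec("compose"), word_vec("composition"))
```

Because related surface forms share n-grams, their vectors overlap by construction, and the cosine between a phrase's vector and the composition of its parts can serve as a graded compositionality signal.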