no code implementations • 29 Jul 2024 • Shiyu Wang, Zhixuan Chu, Yinbo Sun, Yu Liu, Yuliang Guo, Yang Chen, HuiYang Jian, Lintao Ma, Xingyu Lu, Jun Zhou
Despite recent advances with transformer-based forecasting models, challenges remain due to the non-stationary, nonlinear characteristics of workload time series and their long-term dependencies.
1 code implementation • 22 Jun 2024 • Xingyu Lu, Xiaonan Li, Qinyuan Cheng, Kai Ding, Xuanjing Huang, Xipeng Qiu
We find that LLMs' fact-knowledge capacity scales linearly with model size and follows a negative exponential law with the number of training epochs.
1 code implementation • 13 Mar 2024 • Xingyu Lu, He Cao, Zijing Liu, Shengyuan Bai, Leqing Chen, Yuan YAO, Hai-Tao Zheng, Yu Li
Large language models are playing an increasingly significant role in molecular research, yet existing models often generate erroneous information, posing challenges to accurate molecular comprehension.
no code implementations • 12 Mar 2024 • Xingyu Lu, Lei Sun, Diyang Gu, Zhijie Xu, Kaiwei Wang
The event camera, a device that responds quickly to intensity changes, provides a new solution for structured light (SL) systems.
no code implementations • 15 Dec 2023 • Xingyu Lu, Zhining Liu, Yanchu Guan, Hongxuan Zhang, Chenyi Zhuang, Wenqi Ma, Yize Tan, Jinjie Gu, Guannan Zhang
In a cascade RS, when a user triggers a request, we define two actions that determine the computation: (1) which trained instance of the models, each with a different computational complexity, to run; and (2) the number of items to be inferred in each stage.
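These two actions can be sketched as a per-request allocation policy. The stage names, item counts, and load threshold below are all hypothetical, standing in for whatever the paper's controller actually learns:

```python
from dataclasses import dataclass

@dataclass
class StagePlan:
    model_variant: str  # which trained instance (complexity level) to run
    num_items: int      # how many candidate items this stage scores

def plan_request(load: float) -> list:
    """Hypothetical allocation policy for a two-stage cascade: under
    high system load, fall back to lighter model variants and infer
    fewer items per stage."""
    if load < 0.5:
        return [StagePlan("recall-large", 5000), StagePlan("rank-large", 500)]
    return [StagePlan("recall-small", 2000), StagePlan("rank-small", 200)]

for stage in plan_request(load=0.8):
    print(stage.model_variant, stage.num_items)
```

A learned controller would replace the fixed threshold, but the action space (model instance, item count) is the same.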
1 code implementation • 27 Nov 2023 • He Cao, Zijing Liu, Xingyu Lu, Yuan YAO, Yu Li
The rapid evolution of artificial intelligence in drug discovery encounters challenges with generalization and extensive training, yet Large Language Models (LLMs) offer promise in reshaping interactions with complex molecular data.
Ranked #20 on Molecule Captioning on ChEBI-20
no code implementations • 26 Oct 2023 • Ding Zou, Wei Lu, Zhibo Zhu, Xingyu Lu, Jun Zhou, Xiaojin Wang, KangYu Liu, Haiqing Wang, Kefan Wang, Renen Sun
The reactive module provides a self-tuning estimator of CPU utilization to the optimization model.
no code implementations • 14 Sep 2023 • Rajarshi Bhowmik, Marco Ponza, Atharva Tendle, Anant Gupta, Rebecca Jiang, Xingyu Lu, Qian Zhao, Daniel Preotiuc-Pietro
In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document.
1 code implementation • 22 Aug 2023 • Jinpeng Wang, Ziyun Zeng, Yunxiao Wang, Yuting Wang, Xingyu Lu, Tianxiang Li, Jun Yuan, Rui Zhang, Hai-Tao Zheng, Shu-Tao Xia
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation (SR). On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture sequence-level multi-modal user interests, while a novel interest-aware decoder is developed to grasp item-modality-interest relations for better sequence representation.
1 code implementation • 21 Jun 2023 • Yinghui Li, Yong Jiang, Yangning Li, Xingyu Lu, Pengjun Xie, Ying Shen, Hai-Tao Zheng
Entity Linking (EL) is a fundamental task for Information Extraction and Knowledge Graphs.
no code implementations • 10 Apr 2023 • Wenyun Li, Guo Zhong, Xingyu Lu, Chi-Man Pun
This article proposes a multiview hashing method with learnable parameters to retrieve queried images from a large-scale remote sensing dataset.
1 code implementation • 21 Oct 2022 • Bowen Zhao, Jiuding Sun, Bin Xu, Xingyu Lu, Yuchen Li, Jifan Yu, Minghui Liu, Tingjian Zhang, Qiuyang Chen, Hanming Li, Lei Hou, Juanzi Li
To tackle these issues, we propose EDUKG, a heterogeneous sustainable K-12 Educational Knowledge Graph.
no code implementations • 27 Aug 2021 • Yitao Shen, Yue Wang, Xingyu Lu, Feng Qi, Jia Yan, Yixiang Mu, Yao Yang, Yifan Peng, Jinjie Gu
In order to do effective optimization in the second stage, counterfactual prediction and noise-reduction are essential for the first stage.
1 code implementation • 26 Feb 2021 • Jing Zhu, Xingyu Lu, Mark Heimann, Danai Koutra
While most network embedding techniques model the relative positions of nodes in a network, recently there has been significant interest in structural embeddings that model node role equivalences, irrespective of their distances to any specific nodes.
no code implementations • 23 Dec 2020 • Youcef Nafa, Qun Chen, Zhaoqiang Chen, Xingyu Lu, Haiyang He, Tianyi Duan, Zhanhuai Li
Building upon recent advances in risk analysis for ER, which can provide a more refined estimate of label misprediction risk than simpler classifier outputs, we propose a novel AL approach of risk sampling for ER.
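The selection step of such an approach can be sketched as follows; `risk_of` stands in for the paper's risk-analysis estimator, and the pairs and scores are made-up placeholders:

```python
def select_for_labeling(pairs, risk_of, budget=2):
    """Sketch of risk sampling for active learning in entity
    resolution: pick the candidate pairs whose estimated label
    misprediction risk is highest and send them for human labeling."""
    return sorted(pairs, key=risk_of, reverse=True)[:budget]

# Hypothetical risk estimates for three candidate entity pairs:
risks = {"a~b": 0.9, "c~d": 0.1, "e~f": 0.6}
print(select_for_labeling(list(risks), risks.get))  # ['a~b', 'e~f']
```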
no code implementations • 3 Aug 2020 • Xingyu Lu, Kimin Lee, Pieter Abbeel, Stas Tiomkin
Despite the significant progress of deep reinforcement learning (RL) in solving sequential decision making problems, RL agents often overfit to training environments and struggle to adapt to new, unseen environments.
no code implementations • 21 Dec 2019 • Xingyu Lu, Stas Tiomkin, Pieter Abbeel
While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge.
no code implementations • 23 Sep 2019 • Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine
Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks.