1 code implementation • 13 Sep 2024 • Mouxiang Chen, Zhongxin Liu, He Tao, Yusu Hong, David Lo, Xin Xia, Jianling Sun
Our proposed approximated optimal strategy B4 significantly surpasses existing heuristics in selecting code solutions generated by large language models (LLMs) with LLM-generated tests, achieving a relative performance improvement of up to 50% over the strongest heuristic and 246% over random selection in the most challenging scenarios.
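For context, here is a minimal sketch of the plain pass-rate heuristic that selection strategies of this kind are compared against: pick the candidate solution that passes the most LLM-generated tests. This is not the B4 algorithm itself, and all names below are illustrative; it only shows the selection setting, including the fact that generated tests can themselves be wrong.

```python
# Minimal sketch of a pass-rate selection heuristic (illustrative, not B4).
from typing import Callable, List, Tuple

def select_by_pass_rate(
    solutions: List[Callable[[int], int]],
    tests: List[Tuple[int, int]],  # (input, expected output), themselves LLM-generated
) -> Callable[[int], int]:
    def pass_count(solution):
        passed = 0
        for x, expected in tests:
            try:
                if solution(x) == expected:
                    passed += 1
            except Exception:
                pass  # a crashing candidate simply fails that test
        return passed
    # Return the candidate that passes the most tests; ties broken arbitrarily.
    return max(solutions, key=pass_count)

# Example: two candidate implementations of "square a number",
# scored against noisy generated tests (the last test is wrong).
candidates = [lambda x: x * x, lambda x: x + x]
generated_tests = [(2, 4), (3, 9), (5, 26)]
best = select_by_pass_rate(candidates, generated_tests)
print(best(4))  # 16
```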
1 code implementation • 30 Aug 2024 • Mouxiang Chen, Lefei Shen, Zhuo Li, Xiaoyun Joy Wang, Jianling Sun, Chenghao Liu
Surprisingly, without any further adaptation in the time-series domain, the proposed VisionTS achieves superior zero-shot forecasting performance compared to existing TSF foundation models.
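As a loose illustration of the general idea (an assumption for this sketch, not necessarily the paper's exact preprocessing), a 1D series can be arranged into a 2D array so that a pretrained vision backbone can consume it without time-series fine-tuning; the function name and period-based reshaping below are hypothetical.

```python
# Hedged sketch: arranging a 1D series into a 2D "image" for a vision backbone.
import numpy as np

def series_to_image(series: np.ndarray, period: int) -> np.ndarray:
    """Stack consecutive cycles of length `period` into rows of a 2D array."""
    n_cycles = len(series) // period
    image = series[: n_cycles * period].reshape(n_cycles, period)
    # Normalize to [0, 1] so the values resemble pixel intensities.
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

t = np.arange(240)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(240)  # daily-like pattern
img = series_to_image(series, period=24)
print(img.shape)  # (10, 24): one row per cycle
```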
no code implementations • 23 Feb 2024 • Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun
However, current approaches typically either train adapters on individual tasks or distill shared knowledge from source tasks, failing to fully exploit task-specific knowledge and the correlation between source and target tasks.
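For readers unfamiliar with adapter tuning, the sketch below shows a generic bottleneck adapter in PyTorch; it only illustrates what "training adapters on individual tasks" refers to and is not the transfer method proposed in this paper.

```python
# Generic bottleneck adapter (illustrative, not the paper's method).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen backbone's output is only perturbed
        # by the small trainable bottleneck.
        return x + self.up(self.act(self.down(x)))

adapter = Adapter(hidden_dim=768)
hidden_states = torch.randn(2, 16, 768)   # (batch, seq_len, hidden)
print(adapter(hidden_states).shape)       # torch.Size([2, 16, 768])
```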
no code implementations • 4 Feb 2024 • Qiheng Mao, Zemin Liu, Chenghao Liu, Zhuo Li, Jianling Sun
This collaboration harnesses the sophisticated linguistic capabilities of LLMs to improve the contextual understanding and adaptability of graph models, thereby broadening the scope and potential of GRL.
1 code implementation • 15 Jan 2024 • Mouxiang Chen, Hao Tian, Zhongxin Liu, Xiaoxue Ren, Jianling Sun
While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility.
2 code implementations • 23 Oct 2023 • Mouxiang Chen, Lefei Shen, Han Fu, Zhuo Li, Jianling Sun, Chenghao Liu
In this paper, we introduce a universal calibration methodology for detecting and adapting to CDS with a trained model.
1 code implementation • 23 Oct 2023 • Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun
Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap.
1 code implementation • 27 Sep 2023 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun
Unbiased Learning to Rank (ULTR) aims to train unbiased ranking models from biased click logs by explicitly modeling a generation process for user behavior and fitting click data based on the examination hypothesis.
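A hedged sketch of the kind of click generation process this setting assumes: a result is clicked only if it is examined (a position-dependent bias factor) and judged relevant (a function of ranking features). The probabilities below are illustrative, not from the paper.

```python
# Illustrative simulation of a position-biased click generation process.
import random

def simulate_clicks(relevance_probs, examination_probs, seed=0):
    """relevance_probs[k], examination_probs[k]: per-position probabilities."""
    rng = random.Random(seed)
    clicks = []
    for rel_p, exam_p in zip(relevance_probs, examination_probs):
        examined = rng.random() < exam_p      # position-dependent bias factor
        relevant = rng.random() < rel_p       # depends on ranking features
        clicks.append(int(examined and relevant))
    return clicks

# Higher positions are examined more often, so clicks over-represent them.
print(simulate_clicks([0.9, 0.9, 0.9], [1.0, 0.6, 0.3]))
```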
1 code implementation • 22 Feb 2023 • Qiheng Mao, Zemin Liu, Chenghao Liu, Jianling Sun
To bridge this gap, in this paper we investigate the representation learning on HINs with Graph Transformer, and propose a novel model named HINormer, which capitalizes on a larger-range aggregation mechanism for node representation learning.
1 code implementation • 3 Jun 2022 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Jianling Sun
Most of the current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions, one related to ranking features and the other related to bias factors.
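In symbols (notation assumed here purely for illustration), the EH factorization reads:

```latex
% Examination hypothesis (EH): the click probability factorizes into a
% relevance term r(x) over ranking features x and a bias term o(t) over
% bias factors t (e.g., position). Symbol names are illustrative.
P(c = 1 \mid x, t) = r(x) \cdot o(t)
```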
no code implementations • 22 Oct 2020 • Jianwen Yin, Chenghao Liu, Weiqing Wang, Jianling Sun, Steven C. H. Hoi
Sequential user behavior modeling plays a crucial role in online user-oriented services, such as product purchasing, news feed consumption, and online advertising.
no code implementations • CVPR 2020 • Han Fu, Rui Wu, Chenghao Liu, Jianling Sun
Nowadays, driven by increasing concern about diet and health, food computing has attracted enormous attention from both industry and the research community.
no code implementations • 11 Feb 2020 • Han Fu, Yunyu Bai, Zhuo Li, Jun Shen, Jianling Sun
Paper documents are widely used as an irreplaceable channel of information in many fields, especially in the financial industry, creating strong demand for systems that can convert document images into structured data representations.
no code implementations • ACL 2019 • Han Fu, Chenghao Liu, Jianling Sun
Neural Machine Translation (NMT) has achieved notable success in recent years.
1 code implementation • 9 May 2019 • Chenghao Liu, Tao Lu, Xin Wang, Zhiyong Cheng, Jianling Sun, Steven C. H. Hoi
However, CF with binary codes naturally suffers from low accuracy due to the limited representation capability of each bit, which impedes it from modeling the complex structure of the data.
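A small illustration of this limited per-bit capacity: with ±1 codes, each bit contributes only ±1 to the user-item score, so the binary score is a coarsely quantized version of the real-valued inner product. Variable names below are illustrative.

```python
# Why binary codes limit representation capacity in collaborative filtering.
import numpy as np

rng = np.random.default_rng(0)
d = 16
user_emb = rng.standard_normal(d)
item_emb = rng.standard_normal(d)

real_score = float(user_emb @ item_emb)          # continuous preference score
user_code = np.sign(user_emb)                    # binarized codes (+1 / -1)
item_code = np.sign(item_emb)
binary_score = float(user_code @ item_code)      # integer in {-d, ..., d}

print(round(real_score, 3), int(binary_score))
```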
no code implementations • 28 May 2016 • Chenghao Liu, Tao Jin, Steven C. H. Hoi, Peilin Zhao, Jianling Sun
In this paper, we propose a novel Online Bayesian Collaborative Topic Regression (OBCTR) scheme that is efficient and scalable for learning from data streams.