1 code implementation • 15 Jan 2024 • Mouxiang Chen, Hao Tian, Zhongxin Liu, Xiaoxue Ren, Jianling Sun
While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility.
1 code implementation • 23 Oct 2023 • Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun
Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap.
no code implementations • 23 Oct 2023 • Mouxiang Chen, Lefei Shen, Han Fu, Zhuo Li, Jianling Sun, Chenghao Liu
In this paper, we introduce a universal calibration methodology for the detection and adaptation of CDS with a trained Transformer model.
no code implementations • 27 Sep 2023 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun
Unbiased Learning to Rank (ULTR) aims to train unbiased ranking models from biased click logs by explicitly modeling a generation process for user behavior and fitting click data based on the examination hypothesis.
1 code implementation • 3 Jun 2022 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Jianling Sun
Most of the current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions, one related to ranking features and the other related to bias factors.
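The examination hypothesis described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the toy `relevance` and `examination` functions are assumptions chosen only to show the two-factor scalar factorization.

```python
import math

def relevance(features):
    # Toy relevance model: a logistic score over ranking features (assumed form).
    return 1 / (1 + math.exp(-sum(features)))

def examination(position):
    # Toy position-bias model: examination probability decays with rank (assumed form).
    return 1.0 / (1 + position)

def click_probability(features, position):
    # Examination hypothesis (EH): the click probability factorizes into
    # two scalar functions, one over ranking features, one over bias factors.
    return relevance(features) * examination(position)

p = click_probability([0.5, 1.0], position=2)
```

Under the EH, each factor can then be estimated separately, which is what allows debiasing the click signal.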