Search Results for author: Mouxiang Chen

Found 5 papers, 3 papers with code

JumpCoder: Go Beyond Autoregressive Coder via Online Modification

1 code implementation • 15 Jan 2024 • Mouxiang Chen, Hao Tian, Zhongxin Liu, Xiaoxue Ren, Jianling Sun

While existing code large language models (code LLMs) exhibit impressive capabilities in code generation, their autoregressive sequential generation inherently lacks reversibility.

Code Generation

ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt

1 code implementation • 23 Oct 2023 • Mouxiang Chen, Zemin Liu, Chenghao Liu, Jundong Li, Qiheng Mao, Jianling Sun

Based on this framework, we propose a prompt-based transferability test to find the most relevant pretext task in order to reduce the semantic gap.

Multi-Task Learning • Position

Calibration of Time-Series Forecasting Transformers: Detecting and Adapting Context-Driven Distribution Shift

no code implementations • 23 Oct 2023 • Mouxiang Chen, Lefei Shen, Han Fu, Zhuo Li, Jianling Sun, Chenghao Liu

In this paper, we introduce a universal calibration methodology for the detection and adaptation of CDS with a trained Transformer model.

Time Series • Time Series Forecasting

Identifiability Matters: Revealing the Hidden Recoverable Condition in Unbiased Learning to Rank

no code implementations • 27 Sep 2023 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun

Unbiased Learning to Rank (ULTR) aims to train unbiased ranking models from biased click logs by explicitly modeling a generation process for user behavior and fitting click data based on the examination hypothesis.

Learning-To-Rank

Scalar is Not Enough: Vectorization-based Unbiased Learning to Rank

1 code implementation • 3 Jun 2022 • Mouxiang Chen, Chenghao Liu, Zemin Liu, Jianling Sun

Most of the current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions, one related to ranking features and the other related to bias factors.
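The examination hypothesis described in the abstract can be sketched in a few lines: click probability is modeled as the product of a relevance term over ranking features and an examination term over bias factors such as position. The function names and the inverse-power propensity below are illustrative assumptions for the sketch, not the paper's actual models.

```python
import numpy as np

def relevance(features: np.ndarray) -> float:
    # Toy scalar relevance score in (0, 1); a stand-in for a learned ranker.
    return float(1.0 / (1.0 + np.exp(-features.sum())))

def examination(position: int, eta: float = 1.0) -> float:
    # Toy position-based propensity: lower-ranked results are examined less.
    return (1.0 / position) ** eta

def click_probability(features: np.ndarray, position: int) -> float:
    # Examination hypothesis (EH): P(click) = P(relevant) * P(examined),
    # i.e. a product of two scalar functions, one over ranking features
    # and one over bias factors.
    return relevance(features) * examination(position)

# The same document is clicked less often when shown at a lower position.
feats = np.array([0.5, -0.2])
p_top = click_probability(feats, position=1)
p_low = click_probability(feats, position=3)
```

The paper's title argues that such a scalar factorization is insufficient, motivating a vector-based formulation instead.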

Learning-To-Rank
