Search Results for author: Yanlin Wang

Found 27 papers, 12 papers with code

YODA: Teacher-Student Progressive Learning for Language Models

no code implementations • 28 Jan 2024 • Jianqiao Lu, Wanjun Zhong, YuFei Wang, Zhijiang Guo, Qi Zhu, Wenyong Huang, Yanlin Wang, Fei Mi, Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu

With the teacher's guidance, the student learns to iteratively refine its answer with feedback, and forms a robust and comprehensive understanding of the posed questions.

GSM8K • Math

KADEL: Knowledge-Aware Denoising Learning for Commit Message Generation

1 code implementation • 16 Jan 2024 • Wei Tao, Yucheng Zhou, Yanlin Wang, Hongyu Zhang, Haofen Wang, Wenqiang Zhang

However, previous methods are trained on the entire dataset without considering that only a portion of commit messages adhere to good practice (i.e., good-practice commits), while the rest do not.


Code Search Debiasing: Improve Search Results beyond Overall Ranking Performance

no code implementations • 25 Nov 2023 • Sheng Zhang, Hui Li, Yanlin Wang, Zhao Wei, Yong Xiu, Juhong Wang, Rongrong Ji

To mitigate biases, we develop a general debiasing framework that employs reranking to calibrate search results.
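
The excerpt does not detail the debiasing framework, but the core idea of reranking to calibrate search results can be illustrated with a minimal sketch; the function names and the linear bias-penalty scheme below are hypothetical, not the paper's actual method:

```python
# Hedged sketch: generic score-calibration reranking, not the paper's framework.
def rerank(results, bias_penalty=0.1):
    """Rerank (doc_id, score, bias_feature) tuples by a bias-adjusted score.

    `bias_penalty` is an illustrative weight that discounts documents whose
    bias feature (e.g., code length) is large.
    """
    adjusted = [(doc, score - bias_penalty * bias, bias)
                for doc, score, bias in results]
    return sorted(adjusted, key=lambda t: t[1], reverse=True)

results = [("a", 0.9, 3.0), ("b", 0.85, 0.0), ("c", 0.5, 1.0)]
ranked = rerank(results)
# "b" overtakes "a" once the bias penalty is applied (0.85 > 0.9 - 0.3)
```

In a real debiasing pipeline the penalty would be learned or calibrated per bias dimension rather than a single hand-set constant.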

Code Search

Adaptive-Solver Framework for Dynamic Strategy Selection in Large Language Model Reasoning

no code implementations • 1 Oct 2023 • Jianpeng Zhou, Wanjun Zhong, Yanlin Wang, Jiahai Wang

Experimental results from complex reasoning tasks reveal that the prompting method adaptation and decomposition granularity adaptation enhance performance across all tasks.
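
The excerpt does not show how strategies are selected; a rough sketch of the general try-cheap-then-escalate pattern behind adaptive solver selection follows (the interface and checking scheme are hypothetical, not the paper's framework):

```python
# Hypothetical sketch of dynamic strategy selection: try solvers from
# cheapest to most elaborate, keep the first answer that passes a check.
def adaptive_solve(problem, solvers, check):
    """`solvers` is a non-empty list of callables; `check` validates an answer."""
    ans = None
    for solve in solvers:
        ans = solve(problem)
        if check(problem, ans):
            return ans
    return ans  # fall back to the last attempt if no answer passes
```

A decomposition-granularity adaptation would slot in here as a more elaborate solver later in the list.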

Computational Efficiency • Language Modelling +2

SoTaNa: The Open-Source Software Development Assistant

1 code implementation • 25 Aug 2023 • Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, Hongbin Sun

To meet the demands of this dynamic field, there is a growing need for an effective software development assistant.

Code Summarization

Modeling Orders of User Behaviors via Differentiable Sorting: A Multi-task Framework to Predicting User Post-click Conversion

no code implementations • 18 Jul 2023 • Menghan Wang, Jinming Yang, Yuchen Guo, Yuming Shen, Mengying Zhu, Yanlin Wang

Inspired by recent advances in differentiable sorting, in this paper we propose a novel multi-task framework that leverages orders of user behaviors to predict user post-click conversion in an end-to-end manner.
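
Differentiable sorting itself can be sketched with a standard pairwise-sigmoid relaxation of ranks; this is a generic illustration of the technique, not the paper's specific operator:

```python
import math

def soft_rank(scores, tau=0.01):
    """Differentiable relaxation of descending ranks (1 = largest score).

    Each hard comparison is replaced by a sigmoid; as tau -> 0 the soft
    ranks approach the exact integer ranks, while staying differentiable
    for tau > 0 so gradients can flow through the ordering.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return [1.0 + sum(sigmoid((sj - si) / tau)
                      for j, sj in enumerate(scores) if j != i)
            for i, si in enumerate(scores)]

ranks = soft_rank([0.2, 0.9, 0.5])  # approximately [3, 1, 2]
```

The temperature `tau` trades off gradient smoothness against rank accuracy, which is the knob such frameworks typically tune.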

Multi-Task Learning • Selection bias

MemoryBank: Enhancing Large Language Models with Long-Term Memory

1 code implementation • 17 May 2023 • Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang

To mimic anthropomorphic behaviors and selectively preserve memory, MemoryBank incorporates a memory-updating mechanism inspired by the Ebbinghaus Forgetting Curve, which lets the AI forget or reinforce a memory based on the time elapsed and the memory's relative significance, thereby offering a human-like memory mechanism.
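
An Ebbinghaus-style update can be sketched with an exponential-decay retention curve; the parameter names, threshold, and reinforcement factor below are hypothetical, not MemoryBank's actual values:

```python
import math

def retention(elapsed_hours, strength):
    """Ebbinghaus-style retention: R = exp(-t / S), where S is memory strength."""
    return math.exp(-elapsed_hours / strength)

def update(memory, elapsed_hours, threshold=0.5, boost=2.0):
    """Forget a memory whose retention fell below the threshold; otherwise
    reinforce it so it decays more slowly next time (illustrative policy)."""
    r = retention(elapsed_hours, memory["strength"])
    if r < threshold:
        return None  # forgotten
    memory["strength"] *= boost  # recall strengthens the memory
    return memory
```

The significance of a memory would, in a fuller version, scale either the initial strength or the reinforcement factor.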


AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models

2 code implementations • 13 Apr 2023 • Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, Nan Duan

Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and a 92.5% accuracy on the English test of the Chinese national college entrance exam.

Decision Making • Math

Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond

1 code implementation • 11 Apr 2023 • Ensheng Shi, Yanlin Wang, Hongyu Zhang, Lun Du, Shi Han, Dongmei Zhang, Hongbin Sun

Our experimental study shows that (1) lexical, syntactic and structural properties of source code are encoded in the lower, intermediate, and higher layers, respectively, while the semantic property spans across the entire model.

Exploring Representation-Level Augmentation for Code Search

1 code implementation • 21 Oct 2022 • Haochen Li, Chunyan Miao, Cyril Leung, Yanxian Huang, Yuan Huang, Hongyu Zhang, Yanlin Wang

In this paper, we explore augmentation methods that augment data (both code and query) at the representation level, which requires no additional data processing or training; based on this, we propose a general format of representation-level augmentation that unifies existing methods.
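
Typical representation-level augmentations, such as linear interpolation and stochastic masking of embedding dimensions, can be sketched generically; these are illustrative operations, not necessarily the exact set the paper unifies:

```python
import random

def interpolate(a, b, alpha=0.5):
    """Linear interpolation of two representation vectors (mixup-style)."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

def mask(vec, p=0.3, seed=None):
    """Randomly zero out dimensions, like dropout applied to a representation."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else x for x in vec]
```

Because both operate on already-computed embeddings, no extra encoder passes or retraining of the data pipeline are needed, which is the appeal the abstract highlights.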

Code Search • Contrastive Learning +1

Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing

no code implementations • 4 Oct 2022 • Lunyiu Nie, Jiuding Sun, Yanlin Wang, Lun Du, Lei Hou, Juanzi Li, Shi Han, Dongmei Zhang, Jidong Zhai

The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task.

Decoder • Hallucination +2

MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution

1 code implementation • 15 Aug 2022 • Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, Dongmei Zhang

Graph Neural Networks (GNNs) have shown expressive performance on graph representation learning by aggregating information from neighbors.
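
Aggregating moments of the neighborhood feature distribution, as the title suggests, can be sketched by computing per-dimension mean and variance over a node's neighbors; this is a generic illustration, not MM-GNN's learned aggregator:

```python
# Hypothetical sketch: first two moments of a node's neighbor features,
# per dimension, as would feed a moment-mixing aggregation step.
def mix_moments(neighbor_feats):
    """neighbor_feats: non-empty list of equal-length feature vectors."""
    n = len(neighbor_feats)
    dims = len(neighbor_feats[0])
    mean = [sum(f[d] for f in neighbor_feats) / n for d in range(dims)]
    var = [sum((f[d] - mean[d]) ** 2 for f in neighbor_feats) / n
           for d in range(dims)]
    return mean, var
```

A mean-only aggregator discards the spread of neighbor features; keeping higher moments preserves more of the neighborhood distribution.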

Graph Representation Learning

Meta-data Study in Autism Spectrum Disorder Classification Based on Structural MRI

no code implementations • 9 Jun 2022 • Ruimin Ma, Yanlin Wang, Yanjie Wei, Yi Pan

Accurate diagnosis of autism spectrum disorder (ASD) from neuroimaging data has significant implications, but extracting useful information from neuroimaging data for ASD detection is challenging.

PrivateRec: Differentially Private Training and Serving for Federated News Recommendation

no code implementations • 18 Apr 2022 • Ruixuan Liu, Yanlin Wang, Yang Cao, Lingjuan Lyu, Weike Pan, Yun Chen, Hong Chen

Collecting and training over sensitive personal data raise severe privacy concerns in personalized recommendation systems, and federated learning can potentially alleviate the problem by training models over decentralized user data. However, a theoretically private solution in both the training and serving stages of federated recommendation is essential but still lacking. Furthermore, naively applying differential privacy (DP) to the two stages in federated recommendation would fail to achieve a satisfactory trade-off between privacy and utility due to the high-dimensional characteristics of model gradients and hidden representations. In this work, we propose a federated news recommendation method for achieving a better utility in model training and online serving under a DP guarantee. We first clarify the DP definition over behavior data for each round in the life-cycle of federated recommendation systems. Next, we propose a privacy-preserving online serving mechanism under this definition based on the idea of decomposing user embeddings with public basic vectors and perturbing the lower-dimensional combination coefficients.
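
The serving mechanism described, projecting a user embedding onto public basis vectors and perturbing only the low-dimensional coefficients, can be sketched as follows; the noise scale and names are illustrative, and this omits the paper's actual DP calibration:

```python
import random

def dp_serve(user_emb, basis, noise_scale=0.1, seed=None):
    """Perturb a user embedding via its coefficients over public basis vectors.

    Assumes `basis` is a small set of orthonormal public vectors, so noise is
    added in the low-dimensional coefficient space rather than to the full
    embedding (hypothetical sketch, not the paper's calibrated mechanism).
    """
    rng = random.Random(seed)
    # Project onto each public basis vector to get combination coefficients
    coeffs = [sum(u * b for u, b in zip(user_emb, bvec)) for bvec in basis]
    noisy = [c + rng.gauss(0.0, noise_scale) for c in coeffs]
    # Reconstruct a perturbed embedding from the noisy coefficients
    dim = len(user_emb)
    return [sum(nc * bvec[i] for nc, bvec in zip(noisy, basis))
            for i in range(dim)]
```

Perturbing a handful of coefficients instead of every embedding dimension is exactly how such schemes sidestep the high-dimensionality problem the abstract mentions.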

Federated Learning • News Recommendation +2

UniXcoder: Unified Cross-Modal Pre-training for Code Representation

2 code implementations • ACL 2022 • Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, Jian Yin

Furthermore, we propose to utilize multi-modal contents to learn representation of code fragment with contrastive learning, and then align representations among programming languages using a cross-modal generation task.
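
The contrastive objective mentioned here is commonly instantiated as an InfoNCE-style loss over in-batch similarities; a minimal single-sample sketch (the temperature value and interface are hypothetical, not UniXcoder's exact setup):

```python
import math

def info_nce(sim_row, pos_index, temperature=0.07):
    """InfoNCE-style loss for one code fragment: pull its paired
    representation (at pos_index) close, push other in-batch samples away.

    `sim_row` holds similarities between the fragment and every candidate.
    """
    logits = [s / temperature for s in sim_row]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_index] / sum(exps))
```

The loss shrinks as the positive pair's similarity dominates the row, which is what aligns representations across modalities and languages.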

Code Completion • Code Search +2

No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices

no code implementations • 16 Feb 2022 • Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie

In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.

Federated Learning • Knowledge Distillation +1

Game of Privacy: Towards Better Federated Platform Collaboration under Privacy Restriction

no code implementations • 10 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yanlin Wang, Yuqing Yang, Yongfeng Huang, Xing Xie

To solve the game, we propose a platform negotiation method that simulates the bargaining among platforms and locally optimizes their policies via gradient descent.

Vertical Federated Learning

On the Evaluation of Neural Code Summarization

1 code implementation • 15 Jul 2021 • Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, Hongbin Sun

To achieve a profound understanding of how far we are from solving this problem and provide suggestions to future research, in this paper, we conduct a systematic and in-depth analysis of 5 state-of-the-art neural code summarization models on 6 widely used BLEU variants, 4 pre-processing operations and their combinations, and 3 widely used datasets.

Code Summarization • Source Code Summarization

On the Evaluation of Commit Message Generation Models: An Experimental Study

1 code implementation • 12 Jul 2021 • Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, Wenqiang Zhang

We find that: (1) Different variants of the BLEU metric are used in previous works, which affects the evaluation and understanding of existing methods.


Is a Single Model Enough? MuCoS: A Multi-Model Ensemble Learning for Semantic Code Search

1 code implementation • 10 Jul 2021 • Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, Dongmei Zhang

On the other hand, as a specific query may focus on one or several perspectives, it is difficult for a single query representation module to represent different user intents.

Code Search • Data Augmentation +1

Code Completion by Modeling Flattened Abstract Syntax Trees as Graphs

no code implementations • 17 Mar 2021 • Yanlin Wang, Hui Li

Code completion has become an essential component of integrated development environments.

Code Completion • Graph Attention +2
