Search Results for author: Lun Du

Found 37 papers, 14 papers with code

Tag2Vec: Learning Tag Representations in Tag Networks

no code implementations • 19 Apr 2019 • Junshan Wang, Zhicong Lu, Guojie Song, Yue Fan, Lun Du, Wei Lin

Network embedding is a method to learn low-dimensional representation vectors for nodes in complex networks.

Network Embedding • TAG
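
For readers unfamiliar with the term, here is a minimal sketch of the general idea behind network embedding (not Tag2Vec itself): each node of a graph gets a low-dimensional vector, obtained here from a truncated SVD of the adjacency matrix. The toy graph, dimension, and values are illustrative assumptions.

import numpy as np

# Toy undirected 4-node graph; the adjacency matrix is an illustrative assumption.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

U, S, _ = np.linalg.svd(A)
dim = 2                             # embedding dimension (assumed)
embeddings = U[:, :dim] * S[:dim]   # one low-dimensional vector per node

for node, vec in enumerate(embeddings):
    print(f"node {node}: {vec}")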

DANE: Domain Adaptive Network Embedding

2 code implementations • 3 Jun 2019 • Yizhou Zhang, Guojie Song, Lun Du, Shu-wen Yang, Yilun Jin

Recent works reveal that network embedding techniques enable many machine learning models to handle diverse downstream tasks on graph structured data.

Domain Adaptation • Network Embedding

TSSRGCN: Temporal Spectral Spatial Retrieval Graph Convolutional Network for Traffic Flow Forecasting

no code implementations • 30 Nov 2020 • Xu Chen, Yuanxing Zhang, Lun Du, Zheng Fang, Yi Ren, Kaigui Bian, Kunqing Xie

Further analysis indicates that the locality and globality of the traffic networks are critical to traffic flow prediction and the proposed TSSRGCN model can adapt to the various temporal traffic patterns.

Retrieval

Understanding and Improvement of Adversarial Training for Network Embedding from an Optimization Perspective

no code implementations • 17 May 2021 • Lun Du, Xu Chen, Fei Gao, Kunqing Xie, Shi Han, Dongmei Zhang

Network Embedding aims to learn a function that maps nodes into a Euclidean space and thereby supports multiple learning and analysis tasks on networks.

Link Prediction • Network Embedding • +1

TabularNet: A Neural Network Architecture for Understanding Semantic Structures of Tabular Data

no code implementations • 6 Jun 2021 • Lun Du, Fei Gao, Xu Chen, Ran Jia, Junshan Wang, Jiang Zhang, Shi Han, Dongmei Zhang

To simultaneously extract spatial and relational information from tables, we propose a novel neural network architecture, TabularNet.

graph construction

Is a Single Model Enough? MuCoS: A Multi-Model Ensemble Learning for Semantic Code Search

1 code implementation • 10 Jul 2021 • Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, Dongmei Zhang

On the other hand, as a specific query may focus on one or several perspectives, it is difficult for a single query representation module to represent different user intents.

Code Search • Data Augmentation • +1

On the Evaluation of Commit Message Generation Models: An Experimental Study

1 code implementation • 12 Jul 2021 • Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, Wenqiang Zhang

We find that: (1) Different variants of the BLEU metric are used in previous works, which affects the evaluation and understanding of existing methods.

Retrieval
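
As a hedged illustration of the first finding above, the following sketch (with a made-up commit-message pair, not the paper's data) shows how two common BLEU variants, unsmoothed and smoothed sentence-level BLEU from NLTK, can yield quite different scores for the same hypothesis.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Made-up commit-message tokens for illustration only.
reference = ["fix", "null", "pointer", "check", "in", "parser"]
candidate = ["fix", "null", "pointer", "in", "parser"]

plain = sentence_bleu([reference], candidate)  # unsmoothed: 0 when any n-gram precision is 0
smoothed = sentence_bleu([reference], candidate,
                         smoothing_function=SmoothingFunction().method1)

print(f"BLEU (unsmoothed):       {plain:.4f}")
print(f"BLEU (smoothed method1): {smoothed:.4f}")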

On the Evaluation of Neural Code Summarization

1 code implementation • 15 Jul 2021 • Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, Hongbin Sun

To achieve a profound understanding of how far we are from solving this problem and to provide suggestions for future research, in this paper we conduct a systematic and in-depth analysis of 5 state-of-the-art neural code summarization models on 6 widely used BLEU variants, 4 pre-processing operations and their combinations, and 3 widely used datasets.

Code Summarization • Source Code Summarization

Neuron Campaign for Initialization Guided by Information Bottleneck Theory

1 code implementation • 14 Aug 2021 • Haitao Mao, Xu Chen, Qiang Fu, Lun Du, Shi Han, Dongmei Zhang

Initialization plays a critical role in the training of deep neural networks (DNN).

GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily

1 code implementation • 29 Oct 2021 • Lun Du, Xiaozhou Shi, Qiang Fu, Xiaojun Ma, Hengyu Liu, Shi Han, Dongmei Zhang

For node-level tasks, GNNs have strong power to model the homophily property of graphs (i.e., connected nodes are more similar), while their ability to capture the heterophily property is often doubtful.

Graph Attention
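
To make the homophily/heterophily distinction concrete, here is a small sketch (not GBK-GNN itself) of the edge homophily ratio commonly used to quantify it: the fraction of edges whose endpoints share a label. The toy edges and labels are assumptions.

# Toy undirected graph and node labels (assumptions).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
labels = {0: "a", 1: "a", 2: "b", 3: "b", 4: "a"}

same_label_edges = sum(1 for u, v in edges if labels[u] == labels[v])
homophily_ratio = same_label_edges / len(edges)
print(f"edge homophily ratio: {homophily_ratio:.2f}")  # near 1.0: homophilous; near 0.0: heterophilous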

Neuron with Steady Response Leads to Better Generalization

no code implementations • 30 Nov 2021 • Qiang Fu, Lun Du, Haitao Mao, Xu Chen, Wei Fang, Shi Han, Dongmei Zhang

Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: the neuron with similar responses to instances of the same class leads to better generalization.

Inductive Bias
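
A toy illustration of the quantity behind this hypothesis (not the paper's exact regularizer): the variance of a single neuron's responses within each class, where a lower value corresponds to a steadier neuron. All values are assumptions.

import numpy as np

# One neuron's responses on six instances and their class labels (assumed values).
responses = np.array([0.9, 1.1, 1.0, -0.2, 0.1, 0.0])
classes = np.array([0, 0, 0, 1, 1, 1])

# Mean intra-class variance of the responses; lower means a "steadier" neuron.
intra_class_var = np.mean([responses[classes == c].var() for c in np.unique(classes)])
print(f"mean intra-class response variance: {intra_class_var:.4f}")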

Source Free Unsupervised Graph Domain Adaptation

1 code implementation • 2 Dec 2021 • Haitao Mao, Lun Du, Yujia Zheng, Qiang Fu, Zelin Li, Xu Chen, Shi Han, Dongmei Zhang

To address the non-trivial adaptation challenges in this practical scenario, we propose a model-agnostic algorithm called SOGA for domain adaptation to fully exploit the discriminative ability of the source model while preserving the consistency of structural proximity on the target graph.

Domain Adaptation • Node Classification

HTGN-BTW: Heterogeneous Temporal Graph Network with Bi-Time-Window Training Strategy for Temporal Link Prediction

no code implementations • 25 Feb 2022 • Chongjian Yue, Lun Du, Qiang Fu, Wendong Bi, Hengyu Liu, Yu Gu, Di Yao

The Temporal Link Prediction task of WSDM Cup 2022 calls for a single model that works well on two kinds of temporal graphs simultaneously, even though they have quite different characteristics and data properties, and that predicts whether a link of a given type will occur between two given nodes within a given time span.

Link Prediction

MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution

1 code implementation • 15 Aug 2022 • Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, Dongmei Zhang

Graph Neural Networks (GNNs) have shown expressive performance on graph representation learning by aggregating information from neighbors.

Graph Representation Learning
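
As a minimal sketch of the neighbor aggregation mentioned in this entry (plain mean aggregation rather than MM-GNN's moment-based scheme), the snippet below averages each node's neighbor features on a toy graph; the graph and features are assumptions.

import numpy as np

# Toy graph as adjacency lists and a 2-d feature per node (assumptions).
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0],
                     [0.5, 0.5]])

# One round of aggregation: every node averages the features of its neighbors.
aggregated = np.stack([features[neighbors[v]].mean(axis=0) for v in sorted(neighbors)])
print(aggregated)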

Learning Rate Perturbation: A Generic Plugin of Learning Rate Schedule towards Flatter Local Minima

no code implementations • 25 Aug 2022 • Hengyu Liu, Qiang Fu, Lun Du, Tiancheng Zhang, Ge Yu, Shi Han, Dongmei Zhang

The learning rate is one of the most important hyper-parameters and has a significant influence on neural network training.

Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing

no code implementations • 4 Oct 2022 • Lunyiu Nie, Jiuding Sun, Yanlin Wang, Lun Du, Lei Hou, Juanzi Li, Shi Han, Dongmei Zhang, Jidong Zhai

The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task.

Hallucination • Semantic Parsing • +1

DIGMN: Dynamic Intent Guided Meta Network for Differentiated User Engagement Forecasting in Online Professional Social Platforms

no code implementations • 22 Oct 2022 • Feifan Li, Lun Du, Qiang Fu, Shi Han, Yushu Du, Guangming Lu, Zi Li

Furthermore, based on the dynamic user intent representations, we propose a meta predictor to perform differentiated user engagement forecasting.

Homophily-oriented Heterogeneous Graph Rewiring

no code implementations • 13 Feb 2023 • Jiayan Guo, Lun Du, Wendong Bi, Qiang Fu, Xiaojun Ma, Xu Chen, Shi Han, Dongmei Zhang, Yan Zhang

To this end, we propose HDHGR, a homophily-oriented deep heterogeneous graph rewiring approach that modifies the heterogeneous graph (HG) structure to increase the performance of HGNNs.

Robust Mid-Pass Filtering Graph Convolutional Networks

1 code implementation • 16 Feb 2023 • Jincheng Huang, Lun Du, Xu Chen, Qiang Fu, Shi Han, Dongmei Zhang

Theoretical analyses guarantee the robustness of signals through the mid-pass filter, and we also shed light on the properties of different frequency signals under adversarial attacks.

Adversarial Attack • Node Classification

Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond

1 code implementation • 11 Apr 2023 • Ensheng Shi, Yanlin Wang, Hongyu Zhang, Lun Du, Shi Han, Dongmei Zhang, Hongbin Sun

Our experimental study shows that (1) lexical, syntactic and structural properties of source code are encoded in the lower, intermediate, and higher layers, respectively, while the semantic property spans across the entire model.

Enabling and Analyzing How to Efficiently Extract Information from Hybrid Long Documents with LLMs

no code implementations • 24 May 2023 • Chongjian Yue, Xinrun Xu, Xiaojun Ma, Lun Du, Hengyu Liu, Zhiming Ding, Yanbing Jiang, Shi Han, Dongmei Zhang

We propose an Automated Financial Information Extraction (AFIE) framework that enhances LLMs' ability to comprehend and extract information from financial reports.

Retrieval

GPT4Graph: Can Large Language Models Understand Graph Structured Data? An Empirical Evaluation and Benchmarking

no code implementations • 24 May 2023 • Jiayan Guo, Lun Du, Hengyu Liu, Mengyu Zhou, Xinyi He, Shi Han

In this study, we conduct an extensive investigation to assess the proficiency of LLMs in comprehending graph data, employing a diverse range of structural and semantic-related tasks.

Benchmarking • Graph Mining • +1

On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering

1 code implementation • 6 Jun 2023 • Jiayan Guo, Lun Du, Xu Chen, Xiaojun Ma, Qiang Fu, Shi Han, Dongmei Zhang, Yan Zhang

Graph CF has attracted more and more attention in recent years due to its effectiveness in leveraging high-order information in the user-item bipartite graph for better recommendations.

Collaborative Filtering • Recommendation Systems

SoTaNa: The Open-Source Software Development Assistant

1 code implementation • 25 Aug 2023 • Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, Hongbin Sun

To meet the demands of this dynamic field, there is a growing need for an effective software development assistant.

Code Summarization

Text-to-Image Generation for Abstract Concepts

no code implementations • 26 Sep 2023 • Jiayi Liao, Xu Chen, Qiang Fu, Lun Du, Xiangnan He, Xiang Wang, Shi Han, Dongmei Zhang

Recent years have witnessed the substantial progress of large-scale models across various domains, such as natural language processing and computer vision, facilitating the expression of concrete concepts.

Text-to-Image Generation

TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning

no code implementations • 14 Dec 2023 • Yuan Sui, Jiaru Zou, Mengyu Zhou, Xinyi He, Lun Du, Shi Han, Dongmei Zhang

Table-based reasoning has shown remarkable progress in combining deep models with discrete reasoning, which requires reasoning over both free-form natural language (NL) questions and semi-structured tabular data.

Language Modelling • Large Language Model • +2

Professional Network Matters: Connections Empower Person-Job Fit

no code implementations • 19 Dec 2023 • Hao Chen, Lun Du, Yuxuan Lu, Qiang Fu, Xu Chen, Shi Han, Yanbin Kang, Guangming Lu, Zi Li

Online recruitment platforms typically employ Person-Job Fit models in the core service that automatically match suitable job seekers with appropriate job positions.

Text2Analysis: A Benchmark of Table Question Answering with Advanced Data Analysis and Unclear Queries

no code implementations • 21 Dec 2023 • Xinyi He, Mengyu Zhou, Xinrun Xu, Xiaojun Ma, Rui Ding, Lun Du, Yan Gao, Ran Jia, Xu Chen, Shi Han, Zejian Yuan, Dongmei Zhang

We evaluate five state-of-the-art models using three different metrics, and the results show that our benchmark introduces considerable challenges in the field of tabular data analysis, paving the way for more advanced research opportunities.

Question Answering

TAROT: A Hierarchical Framework with Multitask Co-Pretraining on Semi-Structured Data towards Effective Person-Job Fit

no code implementations • 15 Jan 2024 • Yihan Cao, Xu Chen, Lun Du, Hao Chen, Qiang Fu, Shi Han, Yushu Du, Yanbin Kang, Guangming Lu, Zi Li

Person-job fit is an essential part of online recruitment platforms in serving various downstream applications like Job Search and Candidate Recommendation.

Prompting with Divide-and-Conquer Program Makes Large Language Models Discerning to Hallucination and Deception

no code implementations • 8 Feb 2024 • Yizhou Zhang, Lun Du, Defu Cao, Qiang Fu, Yan Liu

Foundation models, such as Large Language Models (LLMs), have attracted a significant amount of interest due to their large number of applications.

Fake News Detection • Hallucination • +1
