no code implementations • 22 Sep 2024 • Zhou Zhang, Dongzeng Tan, Jiaan Wang, Yilong Chen, Jiarong Xu
Emojis have gained immense popularity on social platforms, serving as a common means to supplement or replace text.
1 code implementation • 27 Jul 2024 • Renhong Huang, Jiarong Xu, Xin Jiang, Ruichuan An, Yang Yang
UGDA aims to facilitate knowledge transfer from a labeled source graph to an unlabeled target graph.
no code implementations • 26 Jul 2024 • Hanyang Yuan, Jiarong Xu, Cong Wang, Ziqi Yang, Chunping Wang, Keting Yin, Yang Yang
The public sharing of user information opens the door for adversaries to infer private data, leading to privacy breaches and facilitating malicious activities.
1 code implementation • 22 May 2024 • Yiran Qiao, Xiang Ao, Yang Liu, Jiarong Xu, Xiaoqian Sun, Qing He
In this paper, we aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks.
no code implementations • 20 May 2024 • Haoxiang Shi, Jiaan Wang, Jiarong Xu, Cen Wang, Tetsuya Sakai
Our preliminary analysis of English text-to-table datasets highlights two key factors for dataset construction: data diversity and data hallucination.
1 code implementation • 20 Dec 2023 • Chenglu Pan, Jiarong Xu, Yue Yu, Ziqi Yang, Qingbiao Wu, Chunping Wang, Lei Chen, Yang Yang
Extensive experiments show that our model achieves the best trade-off between accuracy and the fairness of model gradient, as well as superior payoff fairness.
1 code implementation • NeurIPS 2023 • Jiarong Xu, Renhong Huang, Xin Jiang, Yuxuan Cao, Carl Yang, Chunping Wang, Yang Yang
The proposed pre-training pipeline is called the data-active graph pre-training (APT) framework, and is composed of a graph selector and a pre-training model.
1 code implementation • 23 Oct 2023 • Wei Chen, Qiushi Wang, Zefei Long, Xianyin Zhang, Zhongtian Lu, Bingxuan Li, Siyuan Wang, Jiarong Xu, Xiang Bai, Xuanjing Huang, Zhongyu Wei
We propose a Multiple Experts Fine-tuning Framework to build DISC-FinLLM, a financial large language model (LLM).
2 code implementations • 16 Sep 2023 • Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu, Fandong Meng
With the recent advancements in large language models (LLMs), knowledge editing has been shown as a promising technique to adapt LLMs to new knowledge without retraining from scratch.
1 code implementation • 29 Mar 2023 • Yuxuan Cao, Jiarong Xu, Carl Yang, Jiaan Wang, Yunchao Zhang, Chunping Wang, Lei Chen, Yang Yang
All convex combinations of graphon bases give rise to a generator space; the graphs generated from this space form the solution space for downstream data that can benefit from pre-training.
no code implementations • 21 Jan 2023 • Siyuan Wang, Zhongyu Wei, Jiarong Xu, Taishan Li, Zhihao Fan
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills have shown remarkable performance on downstream complex tasks.
no code implementations • 14 Dec 2022 • Jiaan Wang, Fandong Meng, Yunlong Liang, Tingyi Zhang, Jiarong Xu, Zhixu Li, Jie Zhou
Specifically, we find that (1) the translationese in documents or summaries of test sets can cause a discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets harms model performance in real-world applications; (3) although machine-translated documents contain translationese, they are very useful for building CLS systems for low-resource languages under specific training strategies.
no code implementations • 30 Jun 2022 • Xuanwen Huang, Yang Yang, Yang Wang, Chunping Wang, Zhisheng Zhang, Jiarong Xu, Lei Chen, Michalis Vazirgiannis
Since GAD emphasizes real-world applications and anomalous samples are rare, enriching the variety of its datasets is fundamental work.
no code implementations • 11 Jun 2022 • Zhihao Fan, Zhongyu Wei, Jingjing Chen, Siyuan Wang, Zejun Li, Jiarong Xu, Xuanjing Huang
These two steps are iteratively performed in our framework for continuous learning.
2 code implementations • 21 Apr 2022 • Taoran Fang, Zhiqing Xiao, Chunping Wang, Jiarong Xu, Xuan Yang, Yang Yang
First, it is challenging to find a universal method that is suitable for all cases, given the divergence across different datasets and models.
1 code implementation • 8 Nov 2021 • Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, Jie Tang
To bridge this gap, we present the Graph Robustness Benchmark (GRB) with the goal of providing a scalable, unified, modular, and reproducible evaluation for the adversarial robustness of GML models.
no code implementations • 12 Dec 2020 • Jiarong Xu, Yizhou Sun, Xin Jiang, Yanhao Wang, Yang Yang, Chunping Wang, Jiangang Lu
To bridge the gap between theoretical graph attacks and real-world scenarios, in this work, we propose a novel and more realistic setting: strict black-box graph attack, in which the attacker has no knowledge about the victim model at all and is not allowed to send any queries.
no code implementations • 4 Dec 2020 • Jiarong Xu, Yang Yang, Junru Chen, Chunping Wang, Xin Jiang, Jiangang Lu, Yizhou Sun
Additionally, we explore a provable connection between the robustness of the unsupervised graph encoder and that of models on downstream tasks.