Search Results for author: Yaliang Li

Found 117 papers, 67 papers with code

Wasserstein Selective Transfer Learning for Cross-domain Text Mining

no code implementations EMNLP 2021 Lingyun Feng, Minghui Qiu, Yaliang Li, Haitao Zheng, Ying Shen

However, the source and target domains usually have different data distributions, which may lead to negative transfer.

Transfer Learning

Profanity-Avoiding Training Framework for Seq2seq Models with Certified Robustness

no code implementations EMNLP 2021 Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li

To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.

Dialogue Generation Style Transfer

MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions

1 code implementation 12 Mar 2025 Zhe Xu, Daoyuan Chen, Zhenqing Ling, Yaliang Li, Ying Shen

Large vision-language models (VLMs) face challenges in achieving robust, transferable reasoning abilities due to reliance on labor-intensive manual instruction datasets or computationally expensive self-supervised methods.

Computational Efficiency Multimodal Reasoning

Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models

no code implementations 17 Feb 2025 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

Despite this success, since visual instructions require images as input, they leave a gap in inheriting the task-solving capabilities of the backbone LLMs and make it costly to collect a large-scale dataset.

visual instruction following Visual Reasoning

KIMAs: A Configurable Knowledge Integrated Multi-Agent System

no code implementations 13 Feb 2025 Zitao Li, Fei Wei, Yuexiang Xie, Dawei Gao, Weirui Kuang, Zhijian Ma, Bingchen Qian, Yaliang Li, Bolin Ding

Knowledge-intensive conversations supported by large language models (LLMs) have become one of the most popular and helpful applications that can assist people in different aspects.

Management RAG +1

Knowledge Graph-Guided Retrieval Augmented Generation

1 code implementation 8 Feb 2025 Xiangrong Zhu, Yuexiang Xie, Yi Liu, Yaliang Li, Wei Hu

Retrieval-augmented generation (RAG) has emerged as a promising technology for addressing hallucination issues in the responses generated by large language models (LLMs).

Diversity Hallucination +3

Diversity as a Reward: Fine-Tuning LLMs on a Mixture of Domain-Undetermined Data

1 code implementation 5 Feb 2025 Zhenqing Ling, Daoyuan Chen, Liuyi Yao, Yaliang Li, Ying Shen

Fine-tuning large language models (LLMs) using diverse datasets is crucial for enhancing their overall performance across various domains.

Diversity

Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering

no code implementations 14 Jan 2025 Feijie Wu, Zitao Li, Fei Wei, Yaliang Li, Bolin Ding, Jing Gao

Experimental results demonstrate that RopMura effectively handles both single-hop and multi-hop queries, with the routing mechanism enabling precise answers for single-hop queries and the combined routing and planning mechanisms achieving accurate, multi-step resolutions for complex queries.

Question Answering RAG +1
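The routing mechanism summarized above can be pictured with a toy dispatcher. The sketch below is purely illustrative and hypothetical, not RopMura's actual LLM-based router: it picks the specialist agent whose description shares the most keywords with the query, and both the `route` function and the specialist descriptions are invented for the example.

```python
def route(query, specialists):
    """Toy router: choose the specialist whose description has the
    largest keyword overlap with the query. A real system would use
    an LLM or learned retriever here; this is only a stand-in."""
    query_words = set(query.lower().split())

    def overlap(name):
        return len(query_words & set(specialists[name].lower().split()))

    return max(specialists, key=overlap)

specialists = {
    "finance": "stocks markets finance earnings",
    "medicine": "disease symptoms medicine drugs",
}
print(route("which drugs treat this disease", specialists))  # → medicine
```

Multi-hop queries would then be handled by planning a sequence of such routed calls rather than a single dispatch.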

Data-Juicer 2.0: Cloud-Scale Adaptive Data Processing for Foundation Models

2 code implementations 23 Dec 2024 Daoyuan Chen, Yilun Huang, Xuchen Pan, Nana Jiang, Haibin Wang, Ce Ge, Yushuo Chen, WenHao Zhang, Zhijian Ma, Yilei Zhang, Jun Huang, Wei Lin, Yaliang Li, Bolin Ding, Jingren Zhou

The burgeoning field of foundation models necessitates advanced data processing mechanisms capable of harnessing the vast, valuable, and varied data utilized by these models.

HumanVBench: Exploring Human-Centric Video Understanding Capabilities of MLLMs with Synthetic Benchmark Data

1 code implementation 23 Dec 2024 Ting Zhou, Daoyuan Chen, Qirui Jiao, Bolin Ding, Yaliang Li, Ying Shen

In the domain of Multimodal Large Language Models (MLLMs), achieving human-centric video understanding remains a formidable challenge.

Action Recognition Video Understanding

ArtAug: Enhancing Text-to-Image Generation through Synthesis-Understanding Interaction

1 code implementation 17 Dec 2024 Zhongjie Duan, Qianyi Zhao, Cen Chen, Daoyuan Chen, Wenmeng Zhou, Yaliang Li, Yingda Chen

This enables the synthesis model to directly produce aesthetically pleasing images without any extra computational cost.

Text-to-Image Generation

A Simple and Provable Scaling Law for the Test-Time Compute of Large Language Models

no code implementations 29 Nov 2024 Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou

We propose a general two-stage algorithm that enjoys a provable scaling law for the test-time compute of large language models (LLMs).

MMLU
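To give a feel for the general shape of such test-time-compute schemes, here is a minimal two-stage sketch: generate N candidate answers, then run a knockout tournament of pairwise comparisons. The `generate` and `compare` callables are hypothetical stand-ins for LLM calls, and this is a generic illustration, not the paper's exact algorithm or its provable guarantees.

```python
import random

def two_stage_best_candidate(generate, compare, n_candidates):
    """Stage 1: sample N candidates. Stage 2: knockout tournament,
    where compare(a, b) returns the preferred candidate of the pair."""
    candidates = [generate() for _ in range(n_candidates)]
    while len(candidates) > 1:
        random.shuffle(candidates)
        winners = [
            compare(candidates[i], candidates[i + 1])
            for i in range(0, len(candidates) - 1, 2)
        ]
        if len(candidates) % 2:  # odd one out advances directly
            winners.append(candidates[-1])
        candidates = winners
    return candidates[0]
```

Increasing `n_candidates` spends more test-time compute; if `compare` prefers the better answer more often than not, accuracy improves with N.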

LLM-Based Multi-Agent Systems are Scalable Graph Generative Models

1 code implementation 13 Oct 2024 Jiarui Ji, Runlin Lei, Jialing Bi, Zhewei Wei, Xu Chen, Yankai Lin, Xuchen Pan, Yaliang Li, Bolin Ding

The structural properties of naturally arising social graphs are extensively studied to understand their evolution.

Benchmarking Graph Generation +1

GenSim: A General Social Simulation Platform with Large Language Model based Agents

1 code implementation 6 Oct 2024 Jiakai Tang, Heyang Gao, Xuchen Pan, Lei Wang, Haoran Tan, Dawei Gao, Yushuo Chen, Xu Chen, Yankai Lin, Yaliang Li, Bolin Ding, Jingren Zhou, Jun Wang, Ji-Rong Wen

With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior.

Language Modeling Language Modelling +1

Agent-Oriented Planning in Multi-Agent Systems

2 code implementations 3 Oct 2024 Ao Li, Yuexiang Xie, Songze Li, Fugee Tsung, Bolin Ding, Yaliang Li

Through the collaboration of multiple LLM-empowered agents possessing diverse expertise and tools, multi-agent systems achieve impressive progress in solving real-world problems.

Scheduling

Safety Layers in Aligned Large Language Models: The Key to LLM Security

no code implementations 30 Aug 2024 Shen Li, Liuyi Yao, Lan Zhang, Yaliang Li

Aligned LLMs are secure, capable of recognizing and refusing to answer malicious questions.

Exploring Selective Layer Fine-Tuning in Federated Learning

1 code implementation 28 Aug 2024 Yuchang Sun, Yuexiang Xie, Bolin Ding, Yaliang Li, Jun Zhang

Federated learning (FL) has emerged as a promising paradigm for fine-tuning foundation models using distributed data in a privacy-preserving manner.

Federated Learning Privacy Preserving

Understanding Byzantine Robustness in Federated Learning with A Black-box Server

1 code implementation 12 Aug 2024 Fangyuan Zhao, Yuexiang Xie, Xuebin Ren, Bolin Ding, Shusen Yang, Yaliang Li

Federated learning (FL) is vulnerable to Byzantine attacks, in which some participants attempt to damage the utility or hinder the convergence of the learned model by sending malicious model updates.

Federated Learning
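For intuition about why malicious updates matter, a classic paper-agnostic defense is to replace the coordinate-wise mean of FedAvg with a coordinate-wise median, which a single Byzantine client cannot drag arbitrarily far. The sketch below is illustrative only and is not the black-box-server analysis studied in the paper.

```python
def fedavg(updates):
    """Plain coordinate-wise mean: one malicious client can pull the
    aggregate arbitrarily far from the honest updates."""
    n = len(updates)
    return [sum(coords) / n for coords in zip(*updates)]

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of outliers."""
    def median(vals):
        s = sorted(vals)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    return [median(coords) for coords in zip(*updates)]
```

With three honest updates near (1, 2) and one malicious update of (100, -100), the mean is pulled to roughly (26, -24) while the median stays near (1.1, 1.9).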

Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models

no code implementations 8 Aug 2024 Qirui Jiao, Daoyuan Chen, Yilun Huang, Bolin Ding, Yaliang Li, Ying Shen

We release our codes and dataset to encourage further research on multimodal data synthesis and MLLMs' fundamental capabilities for image understanding.

Contrastive Learning Fine-Grained Image Recognition +3

EIUP: A Training-Free Approach to Erase Non-Compliant Concepts Conditioned on Implicit Unsafe Prompts

no code implementations 2 Aug 2024 Die Chen, Zhiwen Li, Mingyuan Fan, Cen Chen, Wenmeng Zhou, Yaliang Li

Since image generation is conditioned on text, prompt purification serves as a straightforward solution for content safety.

Image Generation

Very Large-Scale Multi-Agent Simulation in AgentScope

1 code implementation 25 Jul 2024 Xuchen Pan, Dawei Gao, Yuexiang Xie, Yushuo Chen, Zhewei Wei, Yaliang Li, Bolin Ding, Ji-Rong Wen, Jingren Zhou

Recent advances in large language models (LLMs) have opened new avenues for applying multi-agent systems in very large-scale simulations.

On the Design and Analysis of LLM-Based Algorithms

1 code implementation 20 Jul 2024 Yanxi Chen, Yaliang Li, Bolin Ding, Jingren Zhou

We initiate a formal investigation into the design and analysis of LLM-based algorithms, i.e., algorithms that contain one or more calls to large language models (LLMs) as subroutines and critically rely on the capabilities of LLMs.

Prompt Engineering

Data-Juicer Sandbox: A Comprehensive Suite for Multimodal Data-Model Co-development

1 code implementation 16 Jul 2024 Daoyuan Chen, Haibin Wang, Yilun Huang, Ce Ge, Yaliang Li, Bolin Ding, Jingren Zhou

The emergence of large-scale multi-modal generative models has drastically advanced artificial intelligence, introducing unprecedented levels of performance and functionality.

Diversity

The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective

1 code implementation 11 Jul 2024 Zhen Qin, Daoyuan Chen, WenHao Zhang, Liuyi Yao, Yilun Huang, Bolin Ding, Yaliang Li, Shuiguang Deng

As LLMs and MLLMs rely on vast amounts of model parameters and data to achieve emergent capabilities, the importance of data is receiving increasingly widespread attention and recognition.

FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model

1 code implementation 25 Jun 2024 Feijie Wu, Zitao Li, Yaliang Li, Bolin Ding, Jing Gao

Specifically, our method involves the server generating a compressed LLM and aligning its performance with the full model.

Federated Learning

ExVideo: Extending Video Diffusion Models via Parameter-Efficient Post-Tuning

1 code implementation 20 Jun 2024 Zhongjie Duan, Wenmeng Zhou, Cen Chen, Yaliang Li, Weining Qian

To evaluate the efficacy of our proposed post-tuning approach, we conduct extension training on the Stable Video Diffusion model.

Video Generation

BiMix: A Bivariate Data Mixing Law for Language Model Pretraining

no code implementations 23 May 2024 Ce Ge, Zhijian Ma, Daoyuan Chen, Yaliang Li, Bolin Ding

Optimization of domain proportions yields superior model performance compared to existing methods.

Language Modeling Language Modelling

Review of Data-centric Time Series Analysis from Sample, Feature, and Period

no code implementations 24 Apr 2024 Chenxi Sun, Hongyan Li, Yaliang Li, Shenda Hong

Data is essential to performing time series analysis utilizing machine learning approaches, whether for classic models or today's large language models.

Time Series Time Series Analysis

Dynamic Demonstration Retrieval and Cognitive Understanding for Emotional Support Conversation

1 code implementation 3 Apr 2024 Zhe Xu, Daoyuan Chen, Jiayi Kuang, Zihao Yi, Yaliang Li, Ying Shen

Emotional Support Conversation (ESC) systems are pivotal in providing empathetic interactions, aiding users through negative emotional states by understanding and addressing their unique experiences.

Decoder Empathetic Response Generation +3

Improving LoRA in Privacy-preserving Federated Learning

no code implementations 18 Mar 2024 Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding

Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods on pre-trained language models for its good performance and computational efficiency.

Computational Efficiency Federated Learning +2

Less is More: High-value Data Selection for Visual Instruction Tuning

no code implementations 14 Mar 2024 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

To investigate this issue, we conduct a series of empirical studies, which reveal significant redundancy within visual instruction datasets and show that greatly reducing the number of instructions from several tasks does not even affect the performance.

Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers

1 code implementation 27 Feb 2024 Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Siyuan Lu, Yaliang Li, Ji-Rong Wen

By systematically analyzing a rich set of improvement strategies on the two aspects, we further develop a capable Gradient-inspired LLM-based Prompt Optimizer called GPO.

MMLU

A Bargaining-based Approach for Feature Trading in Vertical Federated Learning

no code implementations 23 Feb 2024 Yue Cui, Liuyi Yao, Zitao Li, Yaliang Li, Bolin Ding, Xiaofang Zhou

We analyze the proposed bargaining model under perfect and imperfect performance information settings, proving the existence of an equilibrium that optimizes the parties' objectives.

Vertical Federated Learning

Double-I Watermark: Protecting Model Copyright for LLM Fine-tuning

no code implementations 22 Feb 2024 Shen Li, Liuyi Yao, Jinyang Gao, Lan Zhang, Yaliang Li

To support various applications, a prevalent and efficient approach for business owners is leveraging their valuable datasets to fine-tune a pre-trained LLM through the API provided by LLM owners or cloud servers.

On the Convergence of Zeroth-Order Federated Tuning for Large Language Models

1 code implementation 8 Feb 2024 Zhenqing Ling, Daoyuan Chen, Liuyi Yao, Yaliang Li, Ying Shen

The confluence of Federated Learning (FL) and Large Language Models (LLMs) is ushering in a new era in privacy-preserving natural language processing.

Federated Learning Privacy Preserving

An Auction-based Marketplace for Model Trading in Federated Learning

no code implementations 2 Feb 2024 Yue Cui, Liuyi Yao, Yaliang Li, Ziqian Chen, Bolin Ding, Xiaofang Zhou

This FL market allows clients to gain monetary reward by selling their own models and improve local model performance through the purchase of others' models.

Federated Learning Marketing +1

EE-Tuning: An Economical yet Scalable Solution for Tuning Early-Exit Large Language Models

1 code implementation 1 Feb 2024 Xuchen Pan, Yanxi Chen, Yaliang Li, Bolin Ding, Jingren Zhou

This work introduces EE-Tuning, a lightweight and economical solution to training/tuning early-exit large language models (LLMs).

From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information

no code implementations 31 Jan 2024 Qirui Jiao, Daoyuan Chen, Yilun Huang, Yaliang Li, Ying Shen

Despite the impressive capabilities of Multimodal Large Language Models (MLLMs) in integrating text and image modalities, challenges remain in accurately interpreting detailed visual elements.

Hallucination object-detection +4

Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning

no code implementations 7 Jan 2024 Yingqian Min, Kun Zhou, Dawei Gao, Wayne Xin Zhao, He Hu, Yaliang Li

Recently, multi-task instruction tuning has been applied to sentence representation learning, endowing models with the capability of generating specific representations under the guidance of task instructions and exhibiting strong generalization ability on new tasks.

Representation Learning Sentence +1

ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph

no code implementations 30 Dec 2023 Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen

To better perform reasoning on KG, recent work typically adopts a pre-trained language model (PLM) to model the question, and a graph neural network (GNN) based module to perform multi-hop reasoning on the KG.

Graph Neural Network Language Modelling +1

EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism

1 code implementation 8 Dec 2023 Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou

We present EE-LLM, a framework for large-scale training and inference of early-exit large language models (LLMs).

Tunable Soft Prompts are Messengers in Federated Learning

1 code implementation 12 Nov 2023 Chenhe Dong, Yuexiang Xie, Bolin Ding, Ying Shen, Yaliang Li

As the global model itself is not required to be shared and the local training is conducted based on an auxiliary model with fewer parameters than the global model, the proposed approach provides protection for the global model while reducing communication and computation costs in FL.

Federated Learning Language Modelling +1

Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness

1 code implementation 10 Nov 2023 Mingyuan Fan, Xiaodan Li, Cen Chen, Wenmeng Zhou, Yaliang Li

A prevailing belief in the attack and defense community is that higher flatness of adversarial examples enables better cross-model transferability, leading to growing interest in employing sharpness-aware minimization and its variants.

Adversarial Attack Diversity

FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning

1 code implementation 1 Sep 2023 Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou

When several entities have similar tasks of interest but their data cannot be shared because of privacy concerns and regulations, federated learning (FL) is a mainstream solution for leveraging the data of different entities.

Benchmarking Federated Learning +2

Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation

1 code implementation 29 Aug 2023 Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, Jingren Zhou

Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of the supervised fine-tuning.

Prompt Engineering Text-To-SQL

TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series

1 code implementation 16 Aug 2023 Chenxi Sun, Hongyan Li, Yaliang Li, Shenda Hong

Given the lack of data, limited resources, semantic context requirements, and so on, this work focuses on TS-for-LLM, where we aim to activate LLM's ability for TS data by designing a TS embedding method suitable for LLM.

Language Modelling Large Language Model +1

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

1 code implementation 16 Jul 2023 Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on "emergent abilities", which are important characteristics that distinguish LLMs from small language models.

In-Context Learning Instruction Following +1

Counterfactual Debiasing for Generating Factually Consistent Text Summaries

no code implementations 18 May 2023 Chenhe Dong, Yuexiang Xie, Yaliang Li, Ying Shen

Despite substantial progress in abstractive text summarization to generate fluent and informative texts, the factual inconsistency in the generated summaries remains an important yet challenging problem to be solved.

Abstractive Text Summarization counterfactual

Efficient Personalized Federated Learning via Sparse Model-Adaptation

2 code implementations 4 May 2023 Daoyuan Chen, Liuyi Yao, Dawei Gao, Bolin Ding, Yaliang Li

To overcome these challenges, we propose a novel approach named pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.

model Personalized Federated Learning

Multi-grained Hypergraph Interest Modeling for Conversational Recommendation

1 code implementation 4 May 2023 Chenzhan Shang, Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Jing Zhang

In our approach, we first employ the hypergraph structure to model users' historical dialogue sessions and form a session-based hypergraph, which captures coarse-grained, session-level relations.

Conversational Recommendation Recommendation Systems

HPN: Personalized Federated Hyperparameter Optimization

no code implementations 11 Apr 2023 Anda Cheng, Zhen Wang, Yaliang Li, Jian Cheng

The client encoding is calculated with a random projection-based procedure to protect each client's privacy.

Federated Learning Hyperparameter Optimization

LON-GNN: Spectral GNNs with Learnable Orthonormal Basis

1 code implementation 24 Mar 2023 Qian Tao, Zhen Wang, Wenyuan Yu, Yaliang Li, Zhewei Wei

In recent years, a plethora of spectral graph neural networks (GNN) methods have utilized polynomial basis with learnable coefficients to achieve top-tier performances on many node-level tasks.

FS-Real: Towards Real-World Cross-Device Federated Learning

no code implementations 23 Mar 2023 Daoyuan Chen, Dawei Gao, Yuexiang Xie, Xuchen Pan, Zitao Li, Yaliang Li, Bolin Ding, Jingren Zhou

Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data, which attracts increasing attention in both academia and industry.

Federated Learning

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks

1 code implementation 3 Feb 2023 Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng

We conduct the first study of backdoor attacks in the pFL framework, testing 4 widely used backdoor attacks against 6 pFL methods on benchmark datasets FEMNIST and CIFAR-10, a total of 600 experiments.

Backdoor Attack Personalized Federated Learning

Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning

1 code implementation 12 Dec 2022 Chenhe Dong, Yuexiang Xie, Bolin Ding, Ying Shen, Yaliang Li

In this study, we further broaden the application scope of FL in NLP by proposing an Assign-Then-Contrast (denoted as ATC) framework, which enables clients with heterogeneous NLP tasks to construct an FL course and learn useful knowledge from each other.

Federated Learning Natural Language Understanding +1

Privacy-Preserved Neural Graph Similarity Learning

1 code implementation 21 Oct 2022 Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen

To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years.

Graph Matching Graph Similarity +1

Towards Universal Sequence Representation Learning for Recommender Systems

2 code implementations 13 Jun 2022 Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

In order to develop effective sequential recommenders, a series of sequence representation learning (SRL) methods are proposed to model historical user behaviors.

Recommendation Systems Representation Learning

FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization

1 code implementation 8 Jun 2022 Zhen Wang, Weirui Kuang, Ce Zhang, Bolin Ding, Yaliang Li

Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting.

Benchmarking Federated Learning +1

pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning

1 code implementation 8 Jun 2022 Daoyuan Chen, Dawei Gao, Weirui Kuang, Yaliang Li, Bolin Ding

Personalized Federated Learning (pFL), which utilizes and deploys distinct local models, has gained increasing attention in recent years due to its success in handling the statistical heterogeneity of FL clients.

Fairness Personalized Federated Learning

A Benchmark for Federated Hetero-Task Learning

1 code implementation 7 Jun 2022 Liuyi Yao, Dawei Gao, Zhen Wang, Yuexiang Xie, Weirui Kuang, Daoyuan Chen, Haohui Wang, Chenhe Dong, Bolin Ding, Yaliang Li

To investigate the heterogeneity in federated learning in real-world scenarios, we generalize the classic federated learning to federated hetero-task learning, which emphasizes the inconsistency across the participants in federated learning in terms of both data distribution and learning tasks.

Federated Learning Meta-Learning +2

ID-Agnostic User Behavior Pre-training for Sequential Recommendation

no code implementations 6 Jun 2022 Shanlei Mu, Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Bolin Ding

Instead of explicitly learning representations for item IDs, IDA-SR directly learns item representations from rich text information.

Attribute Language Modeling +2

EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks

1 code implementation 27 May 2022 Runlin Lei, Zhen Wang, Yaliang Li, Bolin Ding, Zhewei Wei

Despite their extraordinary predictive accuracy, existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs, rendering these models vulnerable to graph structural attacks and with limited capacity in generalizing to graphs of varied homophily levels.

Node Classification

FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Package for Federated Graph Learning

1 code implementation 12 Apr 2022 Zhen Wang, Weirui Kuang, Yuexiang Xie, Liuyi Yao, Yaliang Li, Bolin Ding, Jingren Zhou

The incredible development of federated learning (FL) has benefited various tasks in the domains of computer vision and natural language processing, and existing frameworks such as TFF and FATE have made deployment easy in real-world applications.

Federated Learning Graph Learning

FederatedScope: A Flexible Federated Learning Platform for Heterogeneity

1 code implementation 11 Apr 2022 Yuexiang Xie, Zhen Wang, Dawei Gao, Daoyuan Chen, Liuyi Yao, Weirui Kuang, Yaliang Li, Bolin Ding, Jingren Zhou

Although remarkable progress has been made by existing federated learning (FL) platforms to provide infrastructures for development, these platforms may not well tackle the challenges brought by various types of heterogeneity, including the heterogeneity in participants' local data, resources, behaviors and learning goals.

Federated Learning Hyperparameter Optimization

Towards Personalized Answer Generation in E-Commerce via Multi-Perspective Preference Modeling

1 code implementation 27 Dec 2021 Yang Deng, Yaliang Li, Wenxuan Zhang, Bolin Ding, Wai Lam

Recently, Product Question Answering (PQA) on E-Commerce platforms has attracted increasing attention as it can act as an intelligent online shopping assistant and improve the customer shopping experience.

Answer Generation Question Answering

HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression

1 code implementation EMNLP 2021 Chenhe Dong, Yaliang Li, Ying Shen, Minghui Qiu

In this paper, we target to compress PLMs with knowledge distillation, and propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.

Few-Shot Learning Knowledge Distillation +3

iFlood: A Stable and Effective Regularizer

no code implementations ICLR 2022 Yuexiang Xie, Zhen Wang, Yaliang Li, Ce Zhang, Jingren Zhou, Bolin Ding

However, our further studies uncover that the design of Flooding's loss function can lead to a discrepancy between its objective and implementation, causing an instability issue.

Image Classification

Coarformer: Transformer for large graph via graph coarsening

no code implementations 29 Sep 2021 Weirui Kuang, Zhen Wang, Yaliang Li, Zhewei Wei, Bolin Ding

We get rid of these obstacles by exploiting the complementary natures of GNN and Transformer, and trade the fine-grained long-range information for the efficiency of Transformer.

Learned Index with Dynamic ε

no code implementations 29 Sep 2021 Daoyuan Chen, Wuchao Li, Yaliang Li, Bolin Ding, Kai Zeng, Defu Lian, Jingren Zhou

We theoretically analyze prediction error bounds that link ε with data characteristics for an illustrative learned index method.

Retrieval
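The role of the error bound ε in a learned index can be illustrated with a minimal sketch: fit a line mapping sorted keys to positions, record the maximum prediction error ε, and answer lookups by scanning only the ε-bounded window around the predicted position. This is a generic illustration of the learned-index idea, not the dynamic-ε method of the paper.

```python
def fit_linear_index(keys):
    """Least-squares line from sorted keys to positions, plus the
    worst-case prediction error eps over the data."""
    n = len(keys)
    mean_x = sum(keys) / n
    mean_y = (n - 1) / 2
    var = sum((x - mean_x) ** 2 for x in keys)
    slope = (
        sum((x - mean_x) * (y - mean_y) for y, x in enumerate(keys)) / var
        if var else 0.0
    )
    intercept = mean_y - slope * mean_x
    predict = lambda k: slope * k + intercept
    eps = max(abs(predict(k) - i) for i, k in enumerate(keys))
    return predict, eps

def lookup(keys, predict, eps, key):
    """Scan only the eps-bounded window around the prediction."""
    center = int(round(predict(key)))
    lo = max(0, center - int(eps) - 1)
    hi = min(len(keys), center + int(eps) + 2)
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return -1
```

A smaller ε means a smaller search window and faster lookups, at the cost of a harder fitting problem, which is the trade-off the error-bound analysis formalizes.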

Path-specific Causal Fair Prediction via Auxiliary Graph Structure Learning

no code implementations 29 Sep 2021 Liuyi Yao, Yaliang Li, Bolin Ding, Jingren Zhou, Jinduo Liu, Mengdi Huai, Jing Gao

To tackle these challenges, we propose a novel causal-graph-based fair prediction framework, which integrates graph structure learning into fair prediction to ensure that unfair pathways are excluded from the causal graph.

Fairness Graph structure learning +1

Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation

1 code implementation Findings (EMNLP) 2021 Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, Bolin Ding

However, existing metrics either neglect the intrinsic cause of the factual inconsistency or rely on auxiliary tasks, leading to unsatisfactory correlation with human judgments or increased inconvenience in practical usage.

Abstractive Text Summarization counterfactual

VolcanoML: Speeding up End-to-End AutoML via Scalable Search Space Decomposition

3 code implementations 19 Jul 2021 Yang Li, Yu Shen, Wentao Zhang, Jiawei Jiang, Bolin Ding, Yaliang Li, Jingren Zhou, Zhi Yang, Wentao Wu, Ce Zhang, Bin Cui

End-to-end AutoML has attracted intensive interests from both academia and industry, which automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning.

AutoML Feature Engineering +1

Automated Graph Learning via Population Based Self-Tuning GCN

no code implementations 9 Jul 2021 Ronghang Zhu, Zhiqiang Tao, Yaliang Li, Sheng Li

Owing to the remarkable capability of extracting effective graph embeddings, graph convolutional network (GCN) and its variants have been successfully applied to a broad range of tasks, such as node classification, link prediction, and graph classification.

Graph Classification Graph Learning +3

Differential Privacy for Text Analytics via Natural Text Sanitization

1 code implementation Findings (ACL) 2021 Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, Sherman S. M. Chow

The sanitized texts also contribute to our sanitization-aware pretraining and fine-tuning, enabling privacy-preserving natural language processing over the BERT language model with promising utility.

Language Modeling Language Modelling +1

Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning

no code implementations 20 May 2021 Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, Wai Lam

However, existing methods mainly target solving one or two of these three decision-making problems in CRS with separated conversation and recommendation components, which restricts the scalability and generality of CRS and falls short of preserving a stable training procedure.

Attribute Conversational Recommendation +5

A Unified Transferable Model for ML-Enhanced DBMS

1 code implementation 6 May 2021 Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou

We propose to explore the transferability of ML methods both across tasks and across DBs to tackle these fundamental drawbacks.

Management model

Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge

no code implementations 12 Apr 2021 Yang Deng, Yuexiang Xie, Yaliang Li, Min Yang, Wai Lam, Ying Shen

Answer selection, which is involved in many natural language processing applications such as dialog systems and question answering (QA), is an important yet challenging task in practice, since conventional methods typically suffer from ignoring diverse real-world background knowledge.

Answer Selection Representation Learning +1

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation

no code implementations 20 Jan 2021 Lingyun Feng, Minghui Qiu, Yaliang Li, Hai-Tao Zheng, Ying Shen

Although pre-trained language models such as BERT have achieved appealing performance in a wide range of natural language processing tasks, they are computationally expensive to deploy in real-time applications.

Knowledge Distillation

A Pluggable Learned Index Method via Sampling and Gap Insertion

no code implementations 4 Jan 2021 Yaliang Li, Daoyuan Chen, Bolin Ding, Kai Zeng, Jingren Zhou

In this paper, we propose a formal machine learning based framework to quantify the index learning objective, and study two general and pluggable techniques to enhance the learning efficiency and learning effectiveness for learned indexes.

BIG-bench Machine Learning Retrieval
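The learned-index idea referenced above can be illustrated with a toy sketch: a simple model predicts a key's position in a sorted array, and a bounded local search corrects the prediction error. This is a generic illustration of learned indexes, not the sampling or gap-insertion techniques proposed in the paper; the class and parameter names are made up for this example.

```python
import bisect


class ToyLearnedIndex:
    """Toy learned index: a linear model predicts a key's position, then a
    bounded local search within the model's max error corrects it.
    (Illustrative only; not the paper's sampling/gap-insertion method.)"""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit slope/intercept by least squares on (key, rank) pairs.
        mean_k = sum(self.keys) / n
        mean_r = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_r) for i, k in enumerate(self.keys))
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.a = cov / var if var else 0.0
        self.b = mean_r - self.a * mean_k
        # Max prediction error over the data bounds the search window.
        self.eps = max(abs(self._predict(k) - i) for i, k in enumerate(self.keys))

    def _predict(self, key):
        return min(max(int(self.a * key + self.b), 0), len(self.keys) - 1)

    def lookup(self, key):
        p = self._predict(key)
        lo, hi = max(0, p - self.eps), min(len(self.keys), p + self.eps + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None


idx = ToyLearnedIndex(range(0, 1000, 3))
```

The search window of width `2 * eps + 1` is what makes lookups correct despite an imperfect model; shrinking the model's error shrinks the correction cost.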

Learning to Mutate with Hypergradient Guided Population

no code implementations NeurIPS 2020 Zhiqiang Tao, Yaliang Li, Bolin Ding, Ce Zhang, Jingren Zhou, Yun Fu

Computing the gradient of model hyperparameters, i.e., the hypergradient, enables a promising and natural way to solve the hyperparameter optimization task.

Hyperparameter Optimization

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications

2 code implementations18 Nov 2020 Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin

The literature has witnessed the success of leveraging Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.

Compiler Optimization Conversational Question Answering +1

RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms

1 code implementation3 Nov 2020 Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen

In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.

Collaborative Filtering Sequential Recommendation

Scalable Graph Neural Networks via Bidirectional Propagation

1 code implementation NeurIPS 2020 Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen

Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.

Graph Sampling

FIVES: Feature Interaction Via Edge Search for Large-Scale Tabular Data

no code implementations29 Jul 2020 Yuexiang Xie, Zhen Wang, Yaliang Li, Bolin Ding, Nezihe Merve Gürel, Ce Zhang, Minlie Huang, Wei Lin, Jingren Zhou

Then we instantiate this search strategy by optimizing both a dedicated graph neural network (GNN) and the adjacency tensor associated with the defined feature graph.

Graph Neural Network

Simple and Deep Graph Convolutional Networks

4 code implementations ICML 2020 Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, Yaliang Li

We propose the GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: initial residual and identity mapping.

Graph Classification Graph Regression +3
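The two techniques named above can be sketched in a few lines. The following NumPy layer follows the published GCNII update, where the initial residual mixes in the first-layer representation and the identity mapping interpolates the weight matrix with the identity; the function and parameter names are chosen for this illustration.

```python
import numpy as np


def gcnii_layer(H, H0, A_hat, W, alpha=0.1, beta=0.5):
    """One GCNII layer (sketch).

    H:     current node representations, shape (n, d)
    H0:    initial representations (initial residual target), shape (n, d)
    A_hat: normalized adjacency matrix with self-loops, shape (n, n)
    W:     layer weight matrix, shape (d, d)
    """
    # Initial residual: blend propagated features with H0.
    support = (1 - alpha) * A_hat @ H + alpha * H0
    # Identity mapping: interpolate the transform between I and W.
    out = (1 - beta) * support + beta * support @ W
    return np.maximum(out, 0)  # ReLU
```

In the paper, beta decays with depth, which is what lets GCNII stack many layers without oversmoothing; a fixed beta is used here only to keep the sketch minimal.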

Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents

no code implementations ACL 2020 Daoyuan Chen, Yaliang Li, Kai Lei, Ying Shen

Distant supervision based methods for entity and relation extraction have gained increasing popularity because they require little human annotation effort.

Relation Relation Extraction

Practical Data Poisoning Attack against Next-Item Recommendation

no code implementations7 Apr 2020 Hengtong Zhang, Yaliang Li, Bolin Ding, Jing Gao

In real-world recommendation systems, the cost of retraining recommendation models is high, and the interaction frequency between users and a recommendation system is restricted. Given these real-world restrictions, we propose to let the agent interact with a recommender simulator instead of the target recommendation system and leverage the transferability of the generated adversarial samples to poison the target system.

Data Poisoning Recommendation Systems +1

A Survey on Causal Inference

1 code implementation5 Feb 2020 Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, Aidong Zhang

Embracing the rapid development of machine learning, various causal effect estimation methods for observational data have sprung up.

BIG-bench Machine Learning Causal Inference +1

AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search

1 code implementation13 Jan 2020 Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, Jingren Zhou

Motivated by the necessity and benefits of task-oriented BERT compression, we propose a novel compression method, AdaBERT, that leverages differentiable Neural Architecture Search to automatically compress BERT into task-adaptive small models for specific tasks.

Knowledge Distillation Neural Architecture Search

Automated Relational Meta-learning

1 code implementation ICLR 2020 Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, Zhenhui Li

In order to learn efficiently with a small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.

Few-Shot Image Classification Meta-Learning

A Minimax Game for Instance based Selective Transfer Learning

no code implementations1 Jul 2019 Bo Wang, Minghui Qiu, Xisen Wang, Yaliang Li, Yu Gong, Xiaoyi Zeng, Jung Huang, Bo Zheng, Deng Cai, Jingren Zhou

To the best of our knowledge, this is the first work to build a minimax game based model for selective transfer learning.

Text Retrieval Transfer Learning

Multi-Grained Named Entity Recognition

1 code implementation ACL 2019 Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip Yu

This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested.

Multi-Grained Named Entity Recognition named-entity-recognition +5

Data Poisoning Attack against Knowledge Graph Embedding

no code implementations26 Apr 2019 Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren

Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in the knowledge graph. Due to its benefit to a variety of downstream tasks such as knowledge graph completion, question answering and recommendation, KGE has gained significant attention recently.

Data Poisoning Knowledge Graph Completion +2

Entity Synonym Discovery via Multipiece Bilateral Context Matching

1 code implementation31 Dec 2018 Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip S. Yu

Being able to automatically discover synonymous entities in an open-world setting benefits various tasks such as entity disambiguation or knowledge graph canonicalization.

Entity Disambiguation

Joint Slot Filling and Intent Detection via Capsule Neural Networks

3 code implementations ACL 2019 Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip S. Yu

Being able to recognize words as slots and detect the intent of an utterance has been a keen issue in natural language understanding.

Intent Detection Natural Language Understanding +1

Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering

2 code implementations6 Dec 2018 Yang Deng, Yuexiang Xie, Yaliang Li, Min Yang, Nan Du, Wei Fan, Kai Lei, Ying Shen

Second, these two tasks can benefit each other: answer selection can incorporate the external knowledge from knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection.

Answer Selection Knowledge Base Question Answering +2

Representation Learning for Treatment Effect Estimation from Observational Data

1 code implementation NeurIPS 2018 Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang

Estimating individual treatment effect (ITE) is a challenging problem in causal inference, due to the missing counterfactuals and the selection bias.

Causal Inference Representation Learning +1

Finding Similar Medical Questions from Question Answering Websites

no code implementations14 Oct 2018 Yaliang Li, Liuyi Yao, Nan Du, Jing Gao, Qi Li, Chuishi Meng, Chenwei Zhang, Wei Fan

Patients who have medical information demands tend to post questions about their health conditions on these crowdsourced Q&A websites and get answers from other users.

Diversity Question Answering +1

Towards Differentially Private Truth Discovery for Crowd Sensing Systems

no code implementations10 Oct 2018 Yaliang Li, Houping Xiao, Zhan Qin, Chenglin Miao, Lu Su, Jing Gao, Kui Ren, Bolin Ding

To better utilize sensory data, the problem of truth discovery, whose goal is to estimate user quality and infer reliable aggregated results through quality-aware data aggregation, has emerged as a hot topic.

Privacy Preserving
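The quality-aware aggregation at the heart of truth discovery can be sketched as a simple iterative scheme: infer truths by weighted voting, then re-estimate each source's weight from how often it agrees with the current truths. This is a generic CRH-style illustration with made-up names; the paper's differentially private mechanism is not shown.

```python
from collections import Counter


def truth_discovery(claims, iters=10):
    """Iterative truth discovery sketch (illustrative only).

    claims: {source: {item: claimed_value}}
    Returns inferred truths per item and estimated source weights.
    """
    weights = {s: 1.0 for s in claims}
    items = {i for obs in claims.values() for i in obs}
    truths = {}
    for _ in range(iters):
        # Step 1: weighted voting per item using current source weights.
        for item in items:
            votes = Counter()
            for s, obs in claims.items():
                if item in obs:
                    votes[obs[item]] += weights[s]
            truths[item] = votes.most_common(1)[0][0]
        # Step 2: weight = fraction of a source's claims matching the truths.
        for s, obs in claims.items():
            agree = sum(truths[i] == v for i, v in obs.items())
            weights[s] = agree / len(obs)
    return truths, weights


claims = {
    "s1": {"capital_fr": "Paris", "capital_de": "Berlin"},
    "s2": {"capital_fr": "Paris", "capital_de": "Bonn"},
    "s3": {"capital_fr": "Lyon", "capital_de": "Berlin"},
}
truths, weights = truth_discovery(claims)
```

The coupling between the two steps is the key idea: reliable sources get larger voting weight, which in turn sharpens the truth estimates they agree with.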

MedTruth: A Semi-supervised Approach to Discovering Knowledge Condition Information from Multi-Source Medical Data

no code implementations27 Sep 2018 Yang Deng, Yaliang Li, Ying Shen, Nan Du, Wei Fan, Min Yang, Kai Lei

In light of these challenges, we propose a new truth discovery method, MedTruth, for medical knowledge condition discovery, which incorporates prior source quality information into the source reliability estimation procedure and utilizes the knowledge triple information for trustworthy information computation.

Databases

SynonymNet: Multi-context Bilateral Matching for Entity Synonyms

no code implementations27 Sep 2018 Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip S. Yu

Being able to automatically discover synonymous entities from a large free-text corpus has transformative effects on structured knowledge discovery.

Knowledge as A Bridge: Improving Cross-domain Answer Selection with External Knowledge

no code implementations COLING 2018 Yang Deng, Ying Shen, Min Yang, Yaliang Li, Nan Du, Wei Fan, Kai Lei

In this paper, we propose Knowledge-aware Attentive Network (KAN), a transfer learning framework for cross-domain answer selection, which uses the knowledge base as a bridge to enable knowledge transfer from the source domain to the target domains.

Answer Selection Information Retrieval +2

Cooperative Denoising for Distantly Supervised Relation Extraction

no code implementations COLING 2018 Kai Lei, Daoyuan Chen, Yaliang Li, Nan Du, Min Yang, Wei Fan, Ying Shen

Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts.

Denoising Information Retrieval +4

Generative Discovery of Relational Medical Entity Pairs

no code implementations ICLR 2018 Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip S. Yu

Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies.

Bringing Semantic Structures to User Intent Detection in Online Medical Queries

no code implementations22 Oct 2017 Chenwei Zhang, Nan Du, Wei Fan, Yaliang Li, Chun-Ta Lu, Philip S. Yu

The healthcare status and complex medical information needs of patients are expressed diversely and implicitly in their medical text queries.

Intent Detection Multi-Task Learning +1

Multi-source Hierarchical Prediction Consolidation

no code implementations11 Aug 2016 Chenwei Zhang, Sihong Xie, Yaliang Li, Jing Gao, Wei Fan, Philip S. Yu

We propose a novel multi-source hierarchical prediction consolidation method to effectively exploit the complicated hierarchical label structures to resolve the noisy and conflicting information that inherently originates from multiple imperfect sources.

Prediction
