Search Results for author: Dawei Gao

Found 18 papers, 9 papers with code

Less is More: Data Value Estimation for Visual Instruction Tuning

no code implementations14 Mar 2024 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

To investigate this issue, we conduct a series of empirical studies, which reveal significant redundancy within visual instruction datasets, and show that greatly reducing the size of several instruction datasets does not even affect performance.
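The finding above suggests a simple selection scheme: estimate a value for each instruction example and keep only a small high-value subset. The sketch below is hypothetical; the scoring function is a placeholder, not the paper's estimator.

```python
# Hypothetical value-based data selection for instruction tuning:
# score each example with value_fn and keep the top keep_ratio fraction.
def select_subset(examples, value_fn, keep_ratio=0.1):
    scored = sorted(examples, key=value_fn, reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]

# Toy examples; using length as a stand-in value score for illustration.
examples = [{"id": i, "len": l} for i, l in enumerate([5, 40, 12, 33, 7])]
subset = select_subset(examples, value_fn=lambda e: e["len"], keep_ratio=0.4)
print([e["id"] for e in subset])  # ids of the two highest-scoring examples
```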

AgentScope: A Flexible yet Robust Multi-Agent Platform

1 code implementation21 Feb 2024 Dawei Gao, Zitao Li, Weirui Kuang, Xuchen Pan, Daoyuan Chen, Zhijian Ma, Bingchen Qian, Liuyi Yao, Lin Zhu, Chen Cheng, Hongzhu Shi, Yaliang Li, Bolin Ding, Jingren Zhou

With the rapid advancement of Large Language Models (LLMs), significant progress has been made in multi-agent applications.
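The core pattern such multi-agent platforms support is agents exchanging structured messages. The plain-Python sketch below illustrates that message-passing pattern only; it is not the AgentScope API, and the class names are invented for illustration.

```python
# Hypothetical sketch of agent-to-agent message passing, the pattern a
# multi-agent platform like AgentScope builds on; NOT the AgentScope API.
from dataclasses import dataclass

@dataclass
class Msg:
    sender: str
    content: str

class EchoAgent:
    """Stand-in agent; a real agent would wrap an LLM call in reply()."""
    def __init__(self, name: str):
        self.name = name

    def reply(self, msg: Msg) -> Msg:
        # Echo instead of calling a model, to keep the sketch runnable.
        return Msg(self.name, f"{self.name} received: {msg.content}")

# Two agents chained into a simple pipeline.
a, b = EchoAgent("alice"), EchoAgent("bob")
out = b.reply(a.reply(Msg("user", "hello")))
print(out.content)
```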

Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning

no code implementations7 Jan 2024 Yingqian Min, Kun Zhou, Dawei Gao, Wayne Xin Zhao, He Hu, Yaliang Li

Recently, multi-task instruction tuning has been applied to sentence representation learning, endowing models with the capability to generate task-specific representations under the guidance of task instructions and exhibiting strong generalization on new tasks.

Representation Learning Sentence +1

FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning

1 code implementation1 Sep 2023 Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou

When several entities are interested in similar tasks but their data cannot be shared because of privacy concerns and regulations, federated learning (FL) is a mainstream solution to leverage the data of different entities.

Benchmarking Federated Learning +1

Message Passing Based Block Sparse Signal Recovery for DOA Estimation Using Large Arrays

no code implementations1 Sep 2023 Yiwen Mao, Dawei Gao, Qinghua Guo, Ming Jin

This work deals with direction of arrival (DOA) estimation with a large antenna array.

Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation

1 code implementation29 Aug 2023 Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, Jingren Zhou

Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning.

Prompt Engineering Text-To-SQL
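LLM-based text-to-SQL hinges on how the schema and question are serialized into a prompt. The sketch below shows a generic zero-shot template; the template and schema are illustrative, not the paper's exact prompts.

```python
# Minimal zero-shot text-to-SQL prompt construction; the section
# headers and example schema are illustrative placeholders.
def build_prompt(schema: str, question: str) -> str:
    return (
        "### SQLite tables:\n"
        f"{schema}\n"
        "### Question:\n"
        f"{question}\n"
        "### SQL:\n"
    )

schema = "CREATE TABLE papers(id INT, title TEXT, year INT);"
prompt = build_prompt(schema, "How many papers were published in 2023?")
print(prompt)
# The prompt would then be sent to an LLM (open-source or API-based)
# and the returned SQL executed against the database.
```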

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

1 code implementation16 Jul 2023 Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models.

In-Context Learning Instruction Following +1
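The kind of post-training quantization whose effect the study measures can be sketched as round-to-nearest with a per-tensor scale. The bit-width and symmetric scheme below are examples only, not the paper's exact configurations.

```python
# Illustrative symmetric round-to-nearest quantize/dequantize of a
# weight tensor; bit-width and per-tensor scaling are example choices.
import numpy as np

def quantize_dequantize(w: np.ndarray, bits: int = 4) -> np.ndarray:
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax        # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                      # dequantized weights

w = np.array([0.9, -0.5, 0.02, 0.31])
w_hat = quantize_dequantize(w, bits=4)
print(np.abs(w - w_hat).max())            # worst-case quantization error
```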

Efficient Personalized Federated Learning via Sparse Model-Adaptation

2 code implementations4 May 2023 Daoyuan Chen, Liuyi Yao, Dawei Gao, Bolin Ding, Yaliang Li

To overcome these challenges, we propose a novel approach named pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.

Personalized Federated Learning
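The sparse model-adaptation idea can be pictured as each client masking the shared model with its own learned gates. The sketch below is a simplified illustration of that mechanism, not pFedGate's actual gating network or training procedure.

```python
# Simplified sketch of client-side sparse adaptation: keep only the
# weights with the largest client-learned gate scores, zero the rest.
import numpy as np

def sparse_adapt(shared_w: np.ndarray, gate_scores: np.ndarray,
                 sparsity: float) -> np.ndarray:
    """Keep the top-(1 - sparsity) fraction of weights by gate score."""
    k = int(round((1 - sparsity) * shared_w.size))
    keep = np.argsort(gate_scores)[-k:]   # indices of largest scores
    mask = np.zeros_like(shared_w)
    mask[keep] = 1.0
    return shared_w * mask                # sparse personalized model

shared_w = np.array([0.4, -1.2, 0.7, 0.1, -0.3])
gate_scores = np.array([0.9, 0.1, 0.8, 0.2, 0.5])  # illustrative scores
local_w = sparse_adapt(shared_w, gate_scores, sparsity=0.6)
print(local_w)  # only the two highest-scoring weights survive
```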

FS-Real: Towards Real-World Cross-Device Federated Learning

no code implementations23 Mar 2023 Daoyuan Chen, Dawei Gao, Yuexiang Xie, Xuchen Pan, Zitao Li, Yaliang Li, Bolin Ding, Jingren Zhou

Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data, which attracts increasing attention in both academia and industry.

Federated Learning
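The training loop underlying such cross-device FL systems is typically a FedAvg-style round: clients update locally, the server aggregates with data-size weights. The toy below illustrates that generic pattern with a scalar model; the loss and client data are invented, not from the paper.

```python
# Toy FedAvg round: each client takes one local SGD step on a squared
# loss to its own data mean; the server takes a size-weighted average.
import numpy as np

def client_update(w, data, lr=0.1):
    grad = 2 * (w - data.mean())          # gradient of (w - mean)^2
    return w - lr * grad

def fedavg(w, clients):
    sizes = np.array([len(d) for d in clients], dtype=float)
    updates = np.array([client_update(w, d) for d in clients])
    return float(np.average(updates, weights=sizes))

# Two clients with different amounts of data (the cross-device setting).
clients = [np.array([1.0, 1.0]), np.array([3.0, 3.0, 3.0, 3.0])]
w = 0.0
for _ in range(50):
    w = fedavg(w, clients)
print(round(w, 3))  # approaches the size-weighted mean of client data
```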

Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning

no code implementations9 Nov 2022 Dawei Gao, Qinghua Guo, Ming Jin, Guisheng Liao, Yonina C. Eldar

Choosing the values of hyper-parameters in sparse Bayesian learning (SBL) can significantly impact performance.

Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks

no code implementations8 Oct 2022 Dawei Gao, Qinghua Guo, Guisheng Liao, Yonina C. Eldar, Yonghui Li, Yanguang Yu, Branka Vucetic

Modelling the MIMO system with NN enables the design of NN architectures based on the signal flow of the MIMO system, minimizing the number of NN layers and parameters, which is crucial to achieving efficient training with limited pilot signals.

Bayesian Inference

pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning

1 code implementation8 Jun 2022 Daoyuan Chen, Dawei Gao, Weirui Kuang, Yaliang Li, Bolin Ding

Personalized Federated Learning (pFL), which utilizes and deploys distinct local models, has gained increasing attention in recent years due to its success in handling the statistical heterogeneity of FL clients.

Fairness Personalized Federated Learning

A Benchmark for Federated Hetero-Task Learning

1 code implementation7 Jun 2022 Liuyi Yao, Dawei Gao, Zhen Wang, Yuexiang Xie, Weirui Kuang, Daoyuan Chen, Haohui Wang, Chenhe Dong, Bolin Ding, Yaliang Li

To investigate the heterogeneity in federated learning in real-world scenarios, we generalize the classic federated learning to federated hetero-task learning, which emphasizes the inconsistency across the participants in federated learning in terms of both data distribution and learning tasks.

Federated Learning Meta-Learning +2

FederatedScope: A Flexible Federated Learning Platform for Heterogeneity

1 code implementation11 Apr 2022 Yuexiang Xie, Zhen Wang, Dawei Gao, Daoyuan Chen, Liuyi Yao, Weirui Kuang, Yaliang Li, Bolin Ding, Jingren Zhou

Although remarkable progress has been made by existing federated learning (FL) platforms to provide infrastructures for development, these platforms may not well tackle the challenges brought by various types of heterogeneity, including the heterogeneity in participants' local data, resources, behaviors and learning goals.

Federated Learning Hyperparameter Optimization

Massive MIMO As an Extreme Learning Machine

no code implementations1 Jul 2020 Dawei Gao, Qinghua Guo, Yonina C. Eldar

This work shows that a massive multiple-input multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs) forms a natural extreme learning machine (ELM).
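An ELM has exactly the structure the paper exploits: a fixed random hidden layer followed by output weights solved in closed form. The sketch below fits a synthetic function, not a MIMO channel, to show that structure under stated assumptions.

```python
# Minimal extreme learning machine: random fixed input weights, tanh
# hidden layer, output weights solved by least squares (the only
# trained parameters). Synthetic regression data for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # target function

W = rng.normal(size=(2, 50))                   # random input weights (fixed)
b = rng.normal(size=50)                        # random biases (fixed)
H = np.tanh(X @ W + b)                         # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
err = np.abs(H @ beta - y).mean()
print(err)  # small training error with no iterative training
```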

Pruning-Aware Merging for Efficient Multitask Inference

no code implementations23 May 2019 Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele

Given a set of deep neural networks, each pre-trained for a single task, it is desired that executing arbitrary combinations of tasks yields minimal computation cost.

Network Pruning

Extreme Learning Machine-Based Receiver for MIMO LED Communications

no code implementations27 Feb 2019 Dawei Gao, Qinghua Guo

This work concerns receiver design for light-emitting diode (LED) multiple input multiple output (MIMO) communications where the LED nonlinearity can severely degrade the performance of communications.
