Search Results for author: Hang Liu

Found 36 papers, 9 papers with code

FCNCP: A Coupled Nonnegative CANDECOMP/PARAFAC Decomposition Based on Federated Learning

no code implementations • 18 Apr 2024 • Yukai Cai, Hang Liu, XiuLin Wang, Hongjin Li, Ziyi Wang, Chuanshuai Yang, FengYu Cong

In view of this, this study develops a series of efficient nonnegative coupled tensor decomposition algorithm frameworks based on federated learning, called FCNCP, for EEG data distributed across different servers.
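
As a hedged illustration of the coupling idea: each server factorizes its local data against a shared factor, and the shared factor is averaged across servers each round. The sketch below uses a simplified nonnegative matrix factorization with multiplicative updates; the actual FCNCP couples CANDECOMP/PARAFAC factors of higher-order EEG tensors, and all names here are illustrative, not the authors' implementation.

```python
# Toy illustration of federated coupling of a shared nonnegative factor
# (simplified assumption: 2-way NMF instead of the paper's coupled CP model).
import numpy as np

def local_nmf_step(X, W, H, eps=1e-9):
    """One Lee-Seung multiplicative update on a client: X ~ W @ H, with W, H >= 0."""
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    return W, H

rng = np.random.default_rng(0)
clients = [rng.random((50, 40)) for _ in range(3)]  # per-server EEG-like data
rank = 5
Ws = [rng.random((50, rank)) for _ in clients]      # private, per-server factors
H = rng.random((rank, 40))                          # shared (coupled) factor

for _ in range(100):                                # federated rounds
    local_Hs = []
    for k, X in enumerate(clients):
        Ws[k], Hk = local_nmf_step(X, Ws[k], H.copy())
        local_Hs.append(Hk)
    H = np.mean(local_Hs, axis=0)                   # server aggregates the shared factor
```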

Tao: Re-Thinking DL-based Microarchitecture Simulation

no code implementations • 16 Apr 2024 • Santosh Pandey, Amir Yazdanbakhsh, Hang Liu

Microarchitecture simulators are indispensable tools for microarchitecture designers to validate, estimate, and optimize new hardware that meets specific design requirements.

Transfer Learning

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

no code implementations • 22 Jan 2024 • Bingbing Li, Geng Yuan, Zigeng Wang, Shaoyi Huang, Hongwu Peng, Payman Behnam, Wujie Wen, Hang Liu, Caiwen Ding

Resistive Random Access Memory (ReRAM) has emerged as a promising platform for deep neural networks (DNNs) due to its support for parallel in-situ matrix-vector multiplication.

MALCOM-PSGD: Inexact Proximal Stochastic Gradient Descent for Communication-Efficient Decentralized Machine Learning

no code implementations • 9 Nov 2023 • Andrew Campbell, Hang Liu, Leah Woldemariam, Anna Scaglione

Recent research indicates that frequent model communication stands as a major bottleneck to the efficiency of decentralized machine learning (ML), particularly for large-scale and over-parameterized neural networks (NNs).
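
A generic sketch of the flavor of method this points at: an inexact proximal SGD step whose $\ell_1$ prox (soft-thresholding) keeps the model sparse, followed by a coarse quantizer on the communicated model so each node transmits fewer bits. This is a stand-in for the general idea, not the paper's exact algorithm, rates, or analysis.

```python
# Proximal SGD step with sparsity-promoting prox plus communication compression
# (illustrative; MALCOM-PSGD's precise scheme and tuning are in the paper).
import numpy as np

def soft_threshold(x, t):
    """Prox of t * ||x||_1: shrinks small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def quantize(x, num_levels=16):
    """Uniform quantizer applied before communication (illustrative choice)."""
    scale = np.max(np.abs(x)) + 1e-12
    half = num_levels // 2
    return np.round(x / scale * half) * scale / half

def prox_sgd_step(w, grad, lr=0.1, lam=0.01):
    w = soft_threshold(w - lr * grad, lr * lam)  # (inexact) proximal descent step
    return quantize(w)                           # compress before sending to neighbors
```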

Quantization

EmojiLM: Modeling the New Emoji Language

1 code implementation • 3 Nov 2023 • Letian Peng, Zilong Wang, Hang Liu, Zihan Wang, Jingbo Shang

With the rapid development of the internet, online social media welcomes people from different backgrounds through its diverse content.

Language Modelling Large Language Model

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies

no code implementations • 6 Oct 2023 • Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens

In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences.

Tango: rethinking quantization for graph neural network training on GPUs

no code implementations • 2 Aug 2023 • Shiyang Chen, Da Zheng, Caiwen Ding, Chengying Huan, Yuede Ji, Hang Liu

Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance in critical graph-related tasks.

Quantization

Blind Graph Matching Using Graph Signals

no code implementations • 27 Jun 2023 • Hang Liu, Anna Scaglione, Hoi-To Wai

Our analysis shows that the blind matching outcome converges to the result obtained with known graph topologies when the signal sampling size is large and the signal noise is small.

Graph Matching

Differentially Private Over-the-Air Federated Learning Over MIMO Fading Channels

no code implementations • 19 Jun 2023 • Hang Liu, Jia Yan, Ying-Jun Angela Zhang

Consequently, relying solely on communication noise, as done in the multiple-input single-output system, cannot meet high privacy requirements, and a device-side privacy-preserving mechanism is necessary for optimal DP design.
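
For intuition, a device-side privacy-preserving mechanism typically clips the local update to bound its sensitivity and then adds calibrated artificial noise before transmission. Below is a generic Gaussian-mechanism sketch, an assumption for illustration rather than the paper's MIMO-specific noise calibration.

```python
# Generic device-side Gaussian mechanism (illustrative, not the paper's design).
import numpy as np

def privatize_update(g, clip=1.0, sigma=1.0, seed=None):
    rng = np.random.default_rng(seed)
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))    # bound the l2 norm
    return g + rng.normal(0.0, sigma * clip, size=g.shape)  # add calibrated noise
```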

Federated Learning Privacy Preserving

Inference via robust optimal transportation: theory and methods

no code implementations • 16 Jan 2023 • Yiming Ma, Hang Liu, Davide La Vecchia, Matthieu Lerasle

Fourth, we use $W^{(\lambda)}$ to define minimum distance estimators; we provide their statistical guarantees and illustrate how to apply the derived concentration inequalities for a data-driven selection of $\lambda$.
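
Concretely, reading the snippet at face value (with $P_n$ the empirical measure and $\{P_\theta : \theta \in \Theta\}$ the model family; this notation is assumed, not quoted from the paper), such a minimum distance estimator takes the form

$$\hat{\theta}_n^{(\lambda)} \in \operatorname*{arg\,min}_{\theta \in \Theta} \, W^{(\lambda)}\left(P_n, P_\theta\right).$$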

Domain Adaptation

Motif-based Graph Representation Learning with Application to Chemical Molecules

1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong

MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via a multilayer perceptron and/or message passing in graph neural networks.
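
To make the motif-feature idea concrete, here is a hedged toy version (not the paper's MCM operator): score each node by how often each vocabulary motif embeds in its ego-network, then feed the score vector to an MLP or a GNN layer.

```python
# Toy motif-based node features (illustrative stand-in for motif convolution).
import networkx as nx
from networkx.algorithms import isomorphism

def motif_features(G, motifs, radius=1):
    """Per-node counts of subgraph embeddings of each motif in the node's ego-net."""
    feats = {}
    for v in G.nodes:
        ego = nx.ego_graph(G, v, radius=radius)
        feats[v] = [
            sum(1 for _ in isomorphism.GraphMatcher(ego, m).subgraph_isomorphisms_iter())
            for m in motifs
        ]
    return feats

G = nx.karate_club_graph()
motifs = [nx.cycle_graph(3), nx.path_graph(3)]  # tiny illustrative vocabulary
features = motif_features(G, motifs)            # dict: node -> motif count vector
```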

Graph Learning Graph Representation Learning

CFLIT: Coexisting Federated Learning and Information Transfer

no code implementations • 26 Jul 2022 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang

We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.

Federated Learning

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding

Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model expressiveness and thus is more likely to underfit rather than overfit.

Knowledge Distillation

Dr. Top-k: Delegate-Centric Top-k on GPUs

1 code implementation • 16 Sep 2021 • Anil Gaihre, Da Zheng, Scott Weitze, Lingda Li, Shuaiwen Leon Song, Caiwen Ding, Xiaoye S. Li, Hang Liu

Recent top-$k$ computation efforts explore the possibility of revising various sorting algorithms to answer top-$k$ queries on GPUs.
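
The delegate idea can be sketched in a few lines: treat each chunk's maximum as its "delegate", keep only the $k$ chunks with the largest delegates (any global top-$k$ element must live in one of them, since $k$ larger delegates would imply $k$ larger elements), and finish on the survivors. This is a CPU toy of the pruning argument, not the paper's GPU framework.

```python
# Delegate-centric top-k pruning, simplified to plain numpy on the CPU.
import numpy as np

def delegate_topk(x, k, chunk=1024):
    n_chunks = (len(x) + chunk - 1) // chunk
    parts = [x[i * chunk:(i + 1) * chunk] for i in range(n_chunks)]
    delegates = np.array([p.max() for p in parts])        # one delegate per chunk
    keep = np.argsort(delegates)[-k:] if n_chunks > k else np.arange(n_chunks)
    survivors = np.concatenate([parts[i] for i in keep])  # pruned candidate pool
    return np.sort(survivors)[-k:]                        # exact top-k (ascending)
```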

Reconfigurable Intelligent Surface Empowered Over-the-Air Federated Edge Learning

no code implementations • 6 Sep 2021 • Hang Liu, Zehong Lin, Xiaojun Yuan, Ying-Jun Angela Zhang

Federated edge learning (FEEL) has emerged as a revolutionary paradigm to develop AI services at the edge of 6G wireless networks as it supports collaborative model training at a massive number of mobile devices.

Binary Complex Neural Network Acceleration on FPGA

no code implementations • 10 Aug 2021 • Hongwu Peng, Shanglin Zhou, Scott Weitze, Jiaxin Li, Sahidul Islam, Tong Geng, Ang Li, Wei Zhang, Minghu Song, Mimi Xie, Hang Liu, Caiwen Ding

Deep complex networks (DCN), in contrast, can learn from complex data, but have high computational costs; therefore, they cannot satisfy the instant decision-making requirements of many deployable systems dealing with short observations or short signal bursts.

Decision Making

Relay-Assisted Cooperative Federated Learning

1 code implementation • 20 Jul 2021 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang

Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large.

Federated Learning

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

no code implementations • 16 Jun 2021 • Geng Yuan, Payman Behnam, Zhengang Li, Ali Shafiee, Sheng Lin, Xiaolong Ma, Hang Liu, Xuehai Qian, Mahdi Nazm Bojnordi, Yanzhi Wang, Caiwen Ding

With weights stored in the ReRAM crossbar cells as conductance, when the input vector is applied to word lines, the matrix-vector multiplication results can be generated as the current in bit lines.
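
A worked numeric example of that analog computation, with conductances and voltages as plain numpy arrays standing in for the physics:

```python
# In-situ MVM on a crossbar: conductances G (siemens) store the weights,
# word-line voltages v carry the input, and Kirchhoff's current law yields
# bit-line currents i = G^T v as the dot-product results.
import numpy as np

G = np.array([[1e-6, 2e-6],   # one ReRAM cell per crossing point
              [3e-6, 4e-6]])
v = np.array([0.2, 0.1])      # word-line voltages (V)
i = G.T @ v                   # bit-line currents (A): [0.5e-6, 0.8e-6]
```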

SimNet: Accurate and High-Performance Computer Architecture Simulation using Deep Learning

1 code implementation • 12 May 2021 • Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, Adolfy Hoisie

While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation.

BIG-bench Machine Learning Vocal Bursts Intensity Prediction

TAG: Gradient Attack on Transformer-based Language Models

1 code implementation • Findings (EMNLP) 2021 • Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding

In this paper, we make the first attempt to formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data.
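
The underlying gradient-matching recipe can be sketched on a toy linear model: optimize a dummy input so that its gradients match the gradients the client shared. The label is assumed known here for brevity, and TAG's actual objective and Transformer-specific setting are more involved than this sketch.

```python
# Gradient-matching reconstruction sketch (toy model, not TAG's exact method).
import torch

model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x_true, y_true = torch.randn(1, 8), torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())            # "leaked" gradients

x_dummy = torch.randn(1, 8, requires_grad=True)  # attacker's guess at the data
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    mismatch = sum(((d - t) ** 2).sum() for d, t in zip(dummy_grads, true_grads))
    mismatch.backward()                          # pull x_dummy toward x_true
    opt.step()
```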

Federated Learning Cryptography and Security

CSIT-Free Model Aggregation for Federated Edge Learning via Reconfigurable Intelligent Surface

no code implementations • 22 Feb 2021 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang

We study over-the-air model aggregation in federated edge learning (FEEL) systems, where channel state information at the transmitters (CSIT) is assumed to be unavailable.

Image Classification

Conversational Query Rewriting with Self-supervised Learning

no code implementations • 9 Feb 2021 • Hang Liu, Meng Chen, Youzheng Wu, Xiaodong He, BoWen Zhou

Conversational Query Rewriting (CQR) aims to simplify the multi-turn dialogue modeling into a single-turn problem by explicitly rewriting the conversational query into a self-contained utterance.
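
A made-up example of the task's input and output, for orientation only:

```python
# Illustrative CQR example (fabricated dialogue, not from the paper's data).
dialogue = [
    "User: When was the Eiffel Tower built?",
    "System: It was completed in 1889.",
    "User: How tall is it?",                       # context-dependent follow-up
]
rewritten_query = "How tall is the Eiffel Tower?"  # self-contained rewrite
```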

Self-Supervised Learning

Semi-Blind Cascaded Channel Estimation for Reconfigurable Intelligent Surface Aided Massive MIMO

no code implementations • 18 Jan 2021 • Zhen-Qing He, Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang, Ying-Chang Liang

In a RIS-aided MIMO system, the acquisition of channel state information (CSI) is important for achieving passive beamforming gains of the RIS, but is also challenging due to the cascaded property of the transmitter-RIS-receiver channel and the lack of signal processing capability of the passive RIS elements.

Bayesian Inference Information Theory

Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach

1 code implementation • 20 Nov 2020 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang

However, due to the heterogeneity of communication capacities among edge devices, over-the-air FL suffers from the straggler issue in which the device with the weakest channel acts as a bottleneck of the model aggregation performance.

Federated Learning

C-SAW: A Framework for Graph Sampling and Random Walk on GPUs

1 code implementation • 18 Sep 2020 • Santosh Pandey, Lingda Li, Adolfy Hoisie, Xiaoye S. Li, Hang Liu

In this paper, we propose, to the best of our knowledge, the first GPU-based framework for graph sampling/random walk.

Graph Sampling Distributed, Parallel, and Cluster Computing

SAPAG: A Self-Adaptive Privacy Attack From Gradients

no code implementations • 14 Sep 2020 • Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran

Distributed learning such as federated learning or collaborative learning enables model training on decentralized data from users and only collects local gradients, where data is processed close to its sources for data privacy.

Federated Learning Reconstruction Attack

Against Membership Inference Attack: Pruning is All You Need

no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran

The large model size, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the popularity of deep learning and deep neural networks (DNNs), especially on mobile devices.

Fraud Detection Inference Attack +2

EZLDA: Efficient and Scalable LDA on GPUs

no code implementations • 17 Jul 2020 • Shilong Wang, Hang Liu, Anil Gaihre, Hengyong Yu

LDA is a statistical approach for topic modeling with a wide range of applications.
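
For orientation, a minimal CPU topic-modeling example with scikit-learn; EZLDA itself is a GPU implementation with its own sampling machinery, so this is only a generic baseline of the model being accelerated.

```python
# Minimal LDA topic modeling with scikit-learn (generic baseline, not EZLDA).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["gpu kernels accelerate sampling",
        "topic models summarize document collections",
        "sparse data structures save gpu memory"]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))  # per-document topic mixtures
```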

FTRANS: Energy-Efficient Acceleration of Transformers using FPGA

no code implementations • 16 Jul 2020 • Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu, Caiwen Ding

In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms, without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements on sequence-to-sequence tasks.
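
To make the self-attention point concrete, here is a minimal single-head scaled dot-product attention in numpy; real Transformer layers add multi-head projections, residual connections, and layer normalization.

```python
# Single-head scaled dot-product self-attention (numpy sketch).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)        # row-wise softmax
    return weights @ V                               # attention-weighted mix of values
```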

Model Compression

Reconfigurable-Intelligent-Surface Empowered Wireless Communications: Challenges and Opportunities

1 code implementation • 2 Jan 2020 • Xiaojun Yuan, Ying-Jun Angela Zhang, Yuanming Shi, Wenjing Yan, Hang Liu

Reconfigurable intelligent surfaces (RISs) are regarded as a promising emerging hardware technology to improve the spectrum and energy efficiency of wireless networks by artificially reconfiguring the propagation environment of electromagnetic waves.

Information Theory Signal Processing

Construction of all-in-focus images assisted by depth sensing

no code implementations • 5 Jun 2018 • Hang Liu, Hengyu Li, Jun Luo, Shaorong Xie, Yu Sun

A graph-based segmentation algorithm is used to segment the depth map from the depth sensor, and the segmented regions are used to guide a focus algorithm to locate in-focus image blocks from among multi-focus source images to construct the reference all-in-focus image.
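
A hedged sketch of the region-guided selection step, assuming the segmentation labels are already given: within each depth-derived region, keep the pixels from whichever source image has the strongest Laplacian (sharpness) response, i.e. the most in-focus shot for that region.

```python
# Region-guided focus selection (illustrative; the paper's focus algorithm
# and depth-based segmentation pipeline are more elaborate).
import cv2
import numpy as np

def fuse_all_in_focus(images, labels):
    """images: grayscale multi-focus shots; labels: per-pixel region ids."""
    sharpness = [np.abs(cv2.Laplacian(im, cv2.CV_64F)) for im in images]
    fused = np.zeros_like(images[0])
    for region in np.unique(labels):
        mask = labels == region
        best = int(np.argmax([s[mask].mean() for s in sharpness]))
        fused[mask] = images[best][mask]   # copy the sharpest source's pixels
    return fused
```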
