no code implementations • 2 Aug 2023 • Shiyang Chen, Da Zheng, Caiwen Ding, Chengying Huan, Yuede Ji, Hang Liu
Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance in critical graph-related tasks.
no code implementations • 27 Jun 2023 • Hang Liu, Anna Scaglione, Hoi-To Wai
Our analysis shows that the blind matching outcome converges to the result obtained with known graph topologies when the signal sample size is large and the signal noise is small.
no code implementations • 19 Jun 2023 • Hang Liu, Jia Yan, Ying-Jun Angela Zhang
Consequently, relying solely on communication noise, as is done in multiple-input single-output systems, cannot meet high privacy requirements, and a device-side privacy-preserving mechanism is necessary for optimal DP design.
no code implementations • 16 Jan 2023 • Yiming Ma, Hang Liu, Davide La Vecchia
Recent studies have noted that inference based on OT and on $W_p$ is sensitive to outliers.
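A quick numerical illustration of that sensitivity (a minimal sketch using SciPy's empirical 1-Wasserstein distance, not the paper's robust estimator): moving a single point of an otherwise identical sample to an extreme value shifts $W_1$ by roughly the outlier magnitude divided by the sample size.

```python
# Toy demonstration: one outlier dominates the empirical W_1 distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = x.copy()
print(wasserstein_distance(x, y))   # 0.0: identical samples

y[0] = 1e4                          # a single gross outlier
print(wasserstein_distance(x, y))   # jumps to roughly 1e4 / 1000 = 10
```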
1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong
MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptrons and/or message passing in graph neural networks.
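As a rough analogue of what a motif-based structural context can look like (a hand-rolled sketch with a fixed motif set of degrees, triangles, and open wedges; MCM instead learns its motif vocabulary unsupervised):

```python
# Hypothetical motif-count features per node; not the paper's MCM code.
import networkx as nx
import numpy as np

def motif_features(G):
    """Describe each node by simple local motif counts."""
    tri = nx.triangles(G)                    # triangles through each node
    feats = []
    for v in G.nodes():
        d = G.degree(v)
        wedges = d * (d - 1) // 2 - tri[v]   # open two-paths centered at v
        feats.append([d, tri[v], wedges])
    return np.array(feats, dtype=float)

G = nx.karate_club_graph()
X = motif_features(G)   # structural context for an MLP / message passing
print(X[:3])
```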
no code implementations • 7 Aug 2022 • Hongwu Peng, Shaoyi Huang, Shiyang Chen, Bingbing Li, Tong Geng, Ang Li, Weiwen Jiang, Wujie Wen, Jinbo Bi, Hang Liu, Caiwen Ding
Particularly, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm.
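To illustrate the kind of computation a sparse attention operator avoids (a generic banded-mask sketch; the paper's FPGA operator and its actual sparsity pattern are not reproduced here):

```python
# Banded (local) attention: mask out long-range token pairs before softmax.
import numpy as np

n = 6
rng = np.random.default_rng(0)
scores = rng.standard_normal((n, n))
band = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 1
scores = np.where(band, scores, -np.inf)          # prune distant pairs
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
print(weights.round(2))                           # zeros off the band
```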
no code implementations • 26 Jul 2022 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding
Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model expressiveness and thus is more likely to underfit rather than overfit.
no code implementations • 15 Oct 2021 • Bingbing Li, Hongwu Peng, Rajat Sainju, Junhuan Yang, Lei Yang, Yueying Liang, Weiwen Jiang, Binghui Wang, Hang Liu, Caiwen Ding
In this paper, we propose a novel gender bias detection method that utilizes the attention maps of transformer-based models.
1 code implementation • 16 Sep 2021 • Anil Gaihre, Da Zheng, Scott Weitze, Lingda Li, Shuaiwen Leon Song, Caiwen Ding, Xiaoye S Li, Hang Liu
Recent top-$k$ computation efforts explore the possibility of revising various sorting algorithms to answer top-$k$ queries on GPUs.
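The core trade-off these efforts target can be seen even on a CPU (a toy analogue, not the paper's GPU kernels): selection answers a top-$k$ query without paying for a full sort.

```python
# Full sort vs. partial selection for a top-k query.
import numpy as np

x = np.random.rand(1_000_000)
k = 10

topk_sort = np.sort(x)[-k:]           # O(n log n): sort everything, keep k
topk_sel  = np.partition(x, -k)[-k:]  # O(n) expected: select only the top k

assert np.allclose(np.sort(topk_sel), topk_sort)
```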
no code implementations • 6 Sep 2021 • Hang Liu, Zehong Lin, Xiaojun Yuan, Ying-Jun Angela Zhang
Federated edge learning (FEEL) has emerged as a revolutionary paradigm to develop AI services at the edge of 6G wireless networks, as it supports collaborative model training across a massive number of mobile devices.
no code implementations • 10 Aug 2021 • Hongwu Peng, Shanglin Zhou, Scott Weitze, Jiaxin Li, Sahidul Islam, Tong Geng, Ang Li, Wei zhang, Minghu Song, Mimi Xie, Hang Liu, Caiwen Ding
Deep complex networks (DCN), in contrast, can learn from complex-valued data, but have high computational costs; therefore, they cannot satisfy the instant decision-making requirements of many deployable systems dealing with short observations or short signal bursts.
no code implementations • 27 Jul 2021 • Hang Liu, Menghan Hu, Yuzhen Chen, Qingli Li, Guangtao Zhai, Simon X. Yang, Xiao-Ping Zhang, Xiaokang Yang
This work demonstrates that it is practicable for blind people to feel the world through the brush in their hands.
1 code implementation • 20 Jul 2021 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large.
no code implementations • 16 Jun 2021 • Geng Yuan, Payman Behnam, Zhengang Li, Ali Shafiee, Sheng Lin, Xiaolong Ma, Hang Liu, Xuehai Qian, Mahdi Nazm Bojnordi, Yanzhi Wang, Caiwen Ding
With weights stored as conductances in the ReRAM crossbar cells, applying the input vector to the word lines produces the matrix-vector multiplication results as currents on the bit lines.
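In idealized form, this analog computation is just Ohm's and Kirchhoff's laws (a toy numerical model, ignoring device non-idealities such as noise and quantization):

```python
# Crossbar MVM: bit-line currents I = G^T v for conductances G, voltages v.
import numpy as np

rng = np.random.default_rng(0)
G = np.abs(rng.standard_normal((4, 3))) * 1e-6  # 4 word x 3 bit lines, siemens
v = np.array([0.2, 0.0, 0.1, 0.3])              # word-line voltages, volts

I = G.T @ v   # each bit line sums current contributions: an analog dot product
print(I)
```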
1 code implementation • 12 May 2021 • Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, Adolfy Hoisie
While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation.
1 code implementation • Findings (EMNLP) 2021 • Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding
In this paper, as the first attempt, we formulate the gradient attack problem on the Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data.
Federated Learning
Cryptography and Security
no code implementations • 22 Feb 2021 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang
We study over-the-air model aggregation in federated edge learning (FEEL) systems, where channel state information at the transmitters (CSIT) is assumed to be unavailable.
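A minimal sketch of why missing CSIT matters in over-the-air aggregation (assumed toy fading model, not the paper's transceiver design): the channel sums the analog updates for free, but unknown per-device gains distort the average.

```python
# Over-the-air aggregation: superposition of analog updates over fading.
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 5                              # devices, model dimension
updates = rng.standard_normal((K, d))
h = rng.rayleigh(1.0, size=K)             # gains unknown to the transmitters
noise = 0.01 * rng.standard_normal(d)

y = (h[:, None] * updates).sum(axis=0) + noise   # received superposition
estimate = y / h.sum()                           # naive server-side average
print(estimate - updates.mean(axis=0))           # residual aggregation error
```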
no code implementations • 9 Feb 2021 • Hang Liu, Meng Chen, Youzheng Wu, Xiaodong He, BoWen Zhou
Conversational Query Rewriting (CQR) aims to simplify multi-turn dialogue modeling into a single-turn problem by explicitly rewriting the conversational query into a self-contained utterance.
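An invented toy instance of the task (illustrative only, not an example from the paper):

```python
# CQR: resolve the anaphora so the last turn stands alone.
dialogue = [
    "Who wrote The Old Man and the Sea?",
    "Ernest Hemingway.",
    "When was he born?",                       # context-dependent query
]
rewritten = "When was Ernest Hemingway born?"  # self-contained rewrite
print(rewritten)
```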
no code implementations • 18 Jan 2021 • Zhen-Qing He, Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang, Ying-Chang Liang
In a RIS-aided MIMO system, the acquisition of channel state information (CSI) is important for achieving passive beamforming gains of the RIS, but is also challenging due to the cascaded property of the transmitter-RIS-receiver channel and the lack of signal processing capability of the passive RIS elements.
Bayesian Inference
Information Theory
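The cascaded property mentioned in the entry above follows from the standard RIS signal model, sketched here with hypothetical dimensions (this is the textbook model, not the paper's estimator):

```python
# Cascaded transmitter-RIS-receiver channel: H_eff = H_rx @ diag(theta) @ H_tx.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 8, 4, 2   # RIS elements, transmit antennas, receive antennas
H_tx = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
H_rx = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # unit-modulus phase shifts

H_eff = H_rx @ np.diag(theta) @ H_tx  # only the product is observed end to end
print(H_eff.shape)                     # (2, 4)
```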
1 code implementation • 20 Nov 2020 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang
However, due to the heterogeneity of communication capacities among edge devices, over-the-air FL suffers from the straggler issue in which the device with the weakest channel acts as a bottleneck of the model aggregation performance.
1 code implementation • 18 Sep 2020 • Santosh Pandey, Lingda Li, Adolfy Hoisie, Xiaoye S. Li, Hang Liu
In this paper, we propose, to the best of our knowledge, the first GPU-based framework for graph sampling/random walk.
Graph Sampling
Distributed, Parallel, and Cluster Computing
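For reference, the random-walk primitive such frameworks accelerate looks like this on a CSR-format graph (a serial CPU sketch; the framework's GPU API is not shown):

```python
# Unbiased random walk over a tiny CSR graph: 0->1,2  1->2  2->0.
import numpy as np

indptr  = np.array([0, 2, 3, 4])
indices = np.array([1, 2, 2, 0])

def random_walk(start, length, rng):
    path = [start]
    for _ in range(length):
        nbrs = indices[indptr[path[-1]]:indptr[path[-1] + 1]]
        if nbrs.size == 0:
            break                      # dead end: stop the walk
        path.append(int(rng.choice(nbrs)))
    return path

print(random_walk(0, 5, np.random.default_rng(42)))
```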
no code implementations • Findings of the Association for Computational Linguistics 2020 • Bingbing Li, Zhenglun Kong, Tianyun Zhang, Ji Li, Zhengang Li, Hang Liu, Caiwen Ding
Pre-trained large-scale language models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks.
no code implementations • 14 Sep 2020 • Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran
Distributed learning, such as federated learning or collaborative learning, enables model training on decentralized data from users and collects only local gradients, so that data is processed close to its source for privacy.
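The gradient-only exchange described above is, in its simplest form, a FedAvg-style aggregation (a generic sketch; the paper studies what these shared gradients can still leak, not this training loop):

```python
# One aggregation round: the server never sees raw data, only gradients.
import numpy as np

def fedavg(local_grads, sample_counts):
    w = np.asarray(sample_counts, dtype=float)
    w /= w.sum()                                  # weight by local data size
    return sum(wi * g for wi, g in zip(w, local_grads))

grads = [np.random.randn(3) for _ in range(4)]    # from 4 devices
print(fedavg(grads, sample_counts=[10, 20, 30, 40]))
```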
no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
Large model sizes, heavy computational workloads, and vulnerability to membership inference attacks (MIA) have impeded the popularity of deep learning and deep neural networks (DNNs), especially on mobile devices.
no code implementations • 17 Jul 2020 • Shilong Wang, Hang Liu, Anil Gaihre, Hengyong Yu
Latent Dirichlet Allocation (LDA) is a statistical approach for topic modeling with a wide range of applications.
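For orientation, a minimal LDA fit in scikit-learn (generic library usage, unrelated to the paper's GPU implementation):

```python
# Fit a 2-topic LDA model on a toy corpus and inspect topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["apples and oranges", "oranges and bananas",
        "gpus run kernels", "kernels on gpus"]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))   # per-document topic proportions
```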
no code implementations • 16 Jul 2020 • Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu, Caiwen Ding
In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms, without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements on sequence-to-sequence tasks.
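The self-attention mechanism the sentence refers to reduces to a few matrix products (the standard textbook formulation, not this paper's hardware mapping):

```python
# Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))              # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```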
no code implementations • 15 Jul 2020 • Xianfu Chen, Celimuge Wu, Tao Chen, Zhi Liu, Honggang Zhang, Mehdi Bennis, Hang Liu, Yusheng Ji
Using the proposed deep RL scheme, each MU in the system is able to make decisions without a priori statistical knowledge of the dynamics.
1 code implementation • 2 Jan 2020 • Xiaojun Yuan, Ying-Jun Angela Zhang, Yuanming Shi, Wenjing Yan, Hang Liu
Reconfigurable intelligent surfaces (RISs) are regarded as a promising emerging hardware technology to improve the spectrum and energy efficiency of wireless networks by artificially reconfiguring the propagation environment of electromagnetic waves.
Information Theory
Signal Processing
no code implementations • 5 Jun 2018 • Hang Liu, Hengyu Li, Jun Luo, Shaorong Xie, Yu Sun
A graph-based segmentation algorithm segments the depth map from the depth sensor, and the segmented regions then guide a focus algorithm to locate in-focus image blocks among the multi-focus source images and construct the reference all-in-focus image.
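A common block-wise focus measure that could drive such a selection is the variance of the Laplacian (a generic sketch of the idea, not the paper's depth-guided pipeline):

```python
# Pick the sharper of two co-registered blocks by local contrast.
import numpy as np
from scipy.ndimage import laplace

def sharper_block(block_a, block_b):
    fa, fb = laplace(block_a).var(), laplace(block_b).var()
    return block_a if fa >= fb else block_b

a = np.random.rand(16, 16)           # stand-in for an in-focus block
b = np.random.rand(16, 16) * 0.1     # lower-contrast (blurrier) block
print(sharper_block(a, b) is a)      # True
```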