no code implementations • 30 Sep 2024 • Hang Liu, Anna Scaglione
Shuffled linear regression (SLR) seeks to estimate latent features through a linear transformation, complicated by unknown permutations in the measurement dimensions.
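The SLR setup can be illustrated with a toy alternating scheme: solve least squares ignoring the permutation, then re-match observations to predictions by sorting (a simple 1-D assignment). This is an illustrative heuristic only, not the authors' estimator; all names and sizes below are made up.

```python
import numpy as np

# Toy shuffled linear regression: y = (A @ x_true) with its entries permuted.
rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.5])
y = (A @ x_true)[rng.permutation(n)]          # measurements in unknown order

# Alternate: (1) least-squares fit, (2) re-align y to the sorted predictions.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
for _ in range(20):
    order_pred = np.argsort(A @ x_hat)        # optimal 1-D matching is by sorting
    y_aligned = np.empty_like(y)
    y_aligned[order_pred] = np.sort(y)
    x_hat = np.linalg.lstsq(A, y_aligned, rcond=None)[0]
```

The 1-D re-matching step exploits the fact that, in one dimension, the best assignment pairs sorted values with sorted values.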
no code implementations • 28 Aug 2024 • Yi Cheng, Chenxi Han, Yuheng Min, Linqi Ye, Houde Liu, Hang Liu
Designing a bipedal robot is a complex and challenging task, especially when dealing with a multitude of structural parameters.
no code implementations • 11 Aug 2024 • Hannuo Zhang, Huihui Li, Jiarui Lin, Yujie Zhang, Jianghua Fan, Hang Liu
Our method uses the downstream task of ship target semantic segmentation to guide the training of the image translation network, improving the quality of the output optical-styled images.
no code implementations • 31 Jul 2024 • Wenyuan Chen, Haocong Song, Changsheng Dai, Aojun Jiang, Guanqiao Shan, Hang Liu, Yanlong Zhou, Khaled Abdalla, Shivani N Dhanani, Katy Fatemeh Moosavi, Shruti Pathak, Clifford Librach, Zhuoran Zhang, Yu Sun
Automated morphology analysis of a high number of sperm requires accurate segmentation of each sperm part and quantitative morphology evaluation.
no code implementations • 18 Apr 2024 • Yukai Cai, Hang Liu, XiuLin Wang, Hongjin Li, Ziyi Wang, Chuanshuai Yang, FengYu Cong
To this end, this study develops a family of efficient federated non-negative coupled tensor decomposition frameworks, called FCNCP, for EEG data distributed across different servers.
no code implementations • 16 Apr 2024 • Santosh Pandey, Amir Yazdanbakhsh, Hang Liu
Microarchitecture simulators are indispensable tools for microarchitecture designers to validate, estimate, and optimize new hardware that meets specific design requirements.
no code implementations • 22 Jan 2024 • Bingbing Li, Geng Yuan, Zigeng Wang, Shaoyi Huang, Hongwu Peng, Payman Behnam, Wujie Wen, Hang Liu, Caiwen Ding
Resistive Random Access Memory (ReRAM) has emerged as a promising platform for deep neural networks (DNNs) due to its support for parallel in-situ matrix-vector multiplication.
no code implementations • 9 Nov 2023 • Andrew Campbell, Hang Liu, Leah Woldemariam, Anna Scaglione
We promote model sparsity by adding $\ell_1$ regularization to the objective and present a decentralized proximal SGD method for training.
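The proximal step for an $\ell_1$ penalty is soft-thresholding; a minimal single-machine sketch of proximal SGD on a least-squares loss (the decentralized aspect of the paper is omitted, and all names and hyperparameters are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_step(w, a_batch, y_batch, lam, lr):
    # Gradient step on the smooth squared loss, then the l1 proximal map.
    grad = a_batch.T @ (a_batch @ w - y_batch) / len(y_batch)
    return soft_threshold(w - lr * grad, lr * lam)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.0, 0.5]   # sparse ground truth
y = A @ w_true

w = np.zeros(10)
for _ in range(500):
    idx = rng.integers(0, 200, size=32)                # minibatch sampling
    w = prox_sgd_step(w, A[idx], y[idx], lam=0.05, lr=0.1)
```

The shrinkage drives small coordinates exactly to zero, which is how the $\ell_1$ term promotes sparsity.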
1 code implementation • 3 Nov 2023 • Letian Peng, Zilong Wang, Hang Liu, Zihan Wang, Jingbo Shang
With the rapid development of the internet, online social media welcomes people with different backgrounds through its diverse content.
no code implementations • 6 Oct 2023 • Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens
In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences.
no code implementations • 2 Aug 2023 • Shiyang Chen, Da Zheng, Caiwen Ding, Chengying Huan, Yuede Ji, Hang Liu
Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance in critical graph-related tasks.
no code implementations • 27 Jun 2023 • Hang Liu, Anna Scaglione, Hoi-To Wai
Our analysis shows that the blind matching outcome converges to the result obtained with known graph topologies when the signal sampling size is large and the signal noise is small.
no code implementations • 19 Jun 2023 • Hang Liu, Jia Yan, Ying-Jun Angela Zhang
Consequently, relying solely on communication noise, as done in the multiple-input single-output system, cannot meet high privacy requirements, and a device-side privacy-preserving mechanism is necessary for optimal DP design.
no code implementations • 16 Jan 2023 • Yiming Ma, Hang Liu, Davide La Vecchia, Matthieu Lerasle
Fourth, we use $W^{(\lambda)}$ to define minimum distance estimators, provide their statistical guarantees, and illustrate how to apply the derived concentration inequalities to a data-driven selection of $\lambda$.
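A minimum distance estimator of the kind mentioned above can be sketched as follows; the notation $\hat{P}_n$ for the empirical measure and $\{P_\theta\}_{\theta \in \Theta}$ for the model family is assumed, not taken from the excerpt:

```latex
\hat{\theta}_n \in \operatorname*{arg\,min}_{\theta \in \Theta} W^{(\lambda)}\bigl(\hat{P}_n, P_\theta\bigr)
```

A concentration inequality bounding $W^{(\lambda)}(\hat{P}_n, P_{\theta})$ then translates directly into a guarantee for $\hat{\theta}_n$, which is what makes a data-driven choice of $\lambda$ possible.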
1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong
MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptron and/or message passing in graph neural networks.
no code implementations • 7 Aug 2022 • Hongwu Peng, Shaoyi Huang, Shiyang Chen, Bingbing Li, Tong Geng, Ang Li, Weiwen Jiang, Wujie Wen, Jinbo Bi, Hang Liu, Caiwen Ding
Particularly, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm.
no code implementations • 26 Jul 2022 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
no code implementations • 15 Oct 2021 • Bingbing Li, Hongwu Peng, Rajat Sainju, Junhuan Yang, Lei Yang, Yueying Liang, Weiwen Jiang, Binghui Wang, Hang Liu, Caiwen Ding
In this paper, we propose a novel gender bias detection method by utilizing attention map for transformer-based models.
no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding
Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model expressiveness and thus is more likely to underfit rather than overfit.
1 code implementation • 16 Sep 2021 • Anil Gaihre, Da Zheng, Scott Weitze, Lingda Li, Shuaiwen Leon Song, Caiwen Ding, Xiaoye S Li, Hang Liu
Recent top-$k$ computation efforts explore the possibility of revising various sorting algorithms to answer top-$k$ queries on GPUs.
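The reference semantics of the top-$k$ query those GPU kernels implement can be stated in a few lines with NumPy's partial-partition primitive (a CPU sketch of the problem, not the paper's GPU algorithm):

```python
import numpy as np

def topk(scores, k):
    # Partial partition: the last k slots hold the k largest, in no order.
    idx = np.argpartition(scores, -k)[-k:]
    # Sort only those k winners, descending.
    idx = idx[np.argsort(scores[idx])[::-1]]
    return scores[idx], idx

vals, idx = topk(np.array([3.0, 9.0, 1.0, 7.0, 5.0]), k=3)
```

The point of selection-based approaches is exactly this split: a cheap partition over $n$ elements, then a full sort over only $k$ of them.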
no code implementations • 6 Sep 2021 • Hang Liu, Zehong Lin, Xiaojun Yuan, Ying-Jun Angela Zhang
Federated edge learning (FEEL) has emerged as a revolutionary paradigm to develop AI services at the edge of 6G wireless networks as it supports collaborative model training at a massive number of mobile devices.
no code implementations • 10 Aug 2021 • Hongwu Peng, Shanglin Zhou, Scott Weitze, Jiaxin Li, Sahidul Islam, Tong Geng, Ang Li, Wei Zhang, Minghu Song, Mimi Xie, Hang Liu, Caiwen Ding
Deep complex networks (DCN), in contrast, can learn from complex data, but have high computational costs; therefore, they cannot satisfy the instant decision-making requirements of many deployable systems dealing with short observations or short signal bursts.
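The basic building block behind that cost is the complex-valued linear layer, realized with two real matrices via $(W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r)$, i.e. roughly 4x the real multiplies. A minimal sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(4)
W_r, W_i = rng.standard_normal((3, 5)), rng.standard_normal((3, 5))
x_r, x_i = rng.standard_normal(5), rng.standard_normal(5)

# Complex matrix-vector product expanded into four real products.
out_r = W_r @ x_r - W_i @ x_i
out_i = W_r @ x_i + W_i @ x_r
```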
no code implementations • 27 Jul 2021 • Hang Liu, Menghan Hu, Yuzhen Chen, Qingli Li, Guangtao Zhai, Simon X. Yang, Xiao-Ping Zhang, Xiaokang Yang
This work demonstrates that it is practicable for blind people to feel the world through the brush in their hands.
1 code implementation • 20 Jul 2021 • Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large.
no code implementations • 16 Jun 2021 • Geng Yuan, Payman Behnam, Zhengang Li, Ali Shafiee, Sheng Lin, Xiaolong Ma, Hang Liu, Xuehai Qian, Mahdi Nazm Bojnordi, Yanzhi Wang, Caiwen Ding
With weights stored in the ReRAM crossbar cells as conductance, when the input vector is applied to word lines, the matrix-vector multiplication results can be generated as the current in bit lines.
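The crossbar computation described above reduces to Ohm's and Kirchhoff's laws: with cell conductances $G$ (siemens) and word-line voltages $v$ (volts), each bit-line current is a dot product, $i = G^\top v$. The values below are illustrative, not device-accurate:

```python
import numpy as np

G = np.array([[1.0e-6, 2.0e-6],     # conductance of cell (word line, bit line)
              [3.0e-6, 4.0e-6]])
v = np.array([0.5, 1.0])            # word-line input voltages

i_bitlines = G.T @ v                # analog matrix-vector product, in amperes
```

This is why a ReRAM crossbar performs the whole matrix-vector multiplication in one analog read step.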
1 code implementation • 12 May 2021 • Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, Adolfy Hoisie
While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation.
BIG-bench Machine Learning • Vocal Bursts Intensity Prediction
1 code implementation • Findings (EMNLP) 2021 • Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding
In this paper, as the first attempt, we formulate the gradient attack problem on the Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data.
Federated Learning • Cryptography and Security
no code implementations • 22 Feb 2021 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang
We study over-the-air model aggregation in federated edge learning (FEEL) systems, where channel state information at the transmitters (CSIT) is assumed to be unavailable.
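The aggregation primitive at the heart of over-the-air FL is the channel itself summing simultaneous transmissions. A noiseless, unit-gain toy model (the paper's focus, the absence of CSIT, is deliberately omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
K, d = 4, 6
updates = rng.standard_normal((K, d))   # one local model update per device

# All K devices transmit at once; the wireless channel superimposes the signals.
received = updates.sum(axis=0)
aggregate = received / K                # server rescales to the average update
```

With fading and noise, the server instead sees a weighted, perturbed sum, and the design problem is choosing transmit/receive scalings so this still approximates the average.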
no code implementations • 9 Feb 2021 • Hang Liu, Meng Chen, Youzheng Wu, Xiaodong He, BoWen Zhou
Conversational Query Rewriting (CQR) aims to simplify the multi-turn dialogue modeling into a single-turn problem by explicitly rewriting the conversational query into a self-contained utterance.
no code implementations • 18 Jan 2021 • Zhen-Qing He, Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang, Ying-Chang Liang
In a RIS-aided MIMO system, the acquisition of channel state information (CSI) is important for achieving passive beamforming gains of the RIS, but is also challenging due to the cascaded property of the transmitter-RIS-receiver channel and the lack of signal processing capability of the passive RIS elements.
Bayesian Inference • Information Theory
1 code implementation • 20 Nov 2020 • Hang Liu, Xiaojun Yuan, Ying-Jun Angela Zhang
However, due to the heterogeneity of communication capacities among edge devices, over-the-air FL suffers from the straggler issue in which the device with the weakest channel acts as a bottleneck of the model aggregation performance.
1 code implementation • 18 Sep 2020 • Santosh Pandey, Lingda Li, Adolfy Hoisie, Xiaoye S. Li, Hang Liu
In this paper, we propose, to the best of our knowledge, the first GPU-based framework for graph sampling/random walk.
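The per-walk logic such a framework parallelizes across thousands of GPU threads can be shown serially: a uniform random walk over a graph in CSR form (`indptr`/`indices`). The toy graph and names below are illustrative:

```python
import numpy as np

indptr = np.array([0, 2, 4, 5, 6])      # 4-node toy graph in CSR layout
indices = np.array([1, 2, 0, 3, 3, 0])  # 0->{1,2}, 1->{0,3}, 2->{3}, 3->{0}

def random_walk(start, length, rng):
    walk = [start]
    for _ in range(length):
        u = walk[-1]
        nbrs = indices[indptr[u]:indptr[u + 1]]   # neighbor slice of u
        if len(nbrs) == 0:                        # dead end: stop early
            break
        walk.append(int(rng.choice(nbrs)))        # uniform next-hop choice
    return walk

walk = random_walk(0, 5, np.random.default_rng(3))
```

Biased walks (e.g. node2vec-style) replace the uniform choice with a weighted one, which is where most of the GPU sampling cost lies.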
Graph Sampling • Distributed, Parallel, and Cluster Computing
no code implementations • Findings of the Association for Computational Linguistics 2020 • Bingbing Li, Zhenglun Kong, Tianyun Zhang, Ji Li, Zhengang Li, Hang Liu, Caiwen Ding
Pre-trained large-scale language models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks.
no code implementations • 14 Sep 2020 • Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran
Distributed learning such as federated learning or collaborative learning enables model training on decentralized data from users and only collects local gradients, where data is processed close to its sources for data privacy.
no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
The large model size, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the popularity of deep learning and deep neural networks (DNNs), especially on mobile devices.
no code implementations • 17 Jul 2020 • Shilong Wang, Hang Liu, Anil Gaihre, Hengyong Yu
LDA is a statistical approach for topic modeling with a wide range of applications.
no code implementations • 16 Jul 2020 • Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu, Caiwen Ding
In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms, without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements on sequence-to-sequence tasks.
no code implementations • 15 Jul 2020 • Xianfu Chen, Celimuge Wu, Tao Chen, Zhi Liu, Honggang Zhang, Mehdi Bennis, Hang Liu, Yusheng Ji
Using the proposed deep RL scheme, each MU in the system is able to make decisions without a priori statistical knowledge of dynamics.
1 code implementation • 2 Jan 2020 • Xiaojun Yuan, Ying-Jun Angela Zhang, Yuanming Shi, Wenjing Yan, Hang Liu
Reconfigurable intelligent surfaces (RISs) are regarded as a promising emerging hardware technology to improve the spectrum and energy efficiency of wireless networks by artificially reconfiguring the propagation environment of electromagnetic waves.
Information Theory • Signal Processing
no code implementations • 5 Jun 2018 • Hang Liu, Hengyu Li, Jun Luo, Shaorong Xie, Yu Sun
A graph-based segmentation algorithm is used to segment the depth map from the depth sensor, and the segmented regions are used to guide a focus algorithm to locate in-focus image blocks from among multi-focus source images to construct the reference all-in-focus image.
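The block-selection idea can be sketched without the segmentation guidance: for each block position, keep the source image whose block scores highest on a focus measure (here a simple Laplacian-energy proxy). Function names and the block-wise grid are illustrative simplifications of the region-guided scheme described above:

```python
import numpy as np

def laplacian_energy(block):
    # Discrete Laplacian (with wrap-around borders) as a crude sharpness score.
    lap = (-4 * block
           + np.roll(block, 1, 0) + np.roll(block, -1, 0)
           + np.roll(block, 1, 1) + np.roll(block, -1, 1))
    return float((lap ** 2).sum())

def fuse_blocks(sources, bs):
    # For each bs x bs tile, copy the tile from the sharpest source image.
    h, w = sources[0].shape
    out = np.zeros((h, w))
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blocks = [img[r:r + bs, c:c + bs] for img in sources]
            best = max(range(len(sources)),
                       key=lambda k: laplacian_energy(blocks[k]))
            out[r:r + bs, c:c + bs] = blocks[best]
    return out
```

The paper's contribution is using depth-map segments, rather than a fixed grid, as the regions over which this in-focus comparison is made.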