Search Results for author: Xinping Yi

Found 26 papers, 9 papers with code

Robust Symbol-Level Precoding for Massive MIMO Communication Under Channel Aging

no code implementations7 Feb 2024 Yafei Wang, Xinping Yi, Hongwei Hou, Wenjin Wang, Shi Jin

With the signal model in the presence of channel aging, we formulate the signal-to-interference-plus-noise ratio (SINR) balancing and minimum mean square error (MMSE) problems for robust SLP design.

Robust Design

Rethinking Spectral Graph Neural Networks with Spatially Adaptive Filtering

no code implementations17 Jan 2024 Jingwei Guo, Kaizhu Huang, Xinping Yi, Zixian Su, Rui Zhang

Whilst spectral Graph Neural Networks (GNNs) are theoretically well-founded in the spectral domain, their practical reliance on polynomial approximation implies a profound linkage to the spatial domain.

Node Classification

Graph Neural Networks with Diverse Spectral Filtering

1 code implementation14 Dec 2023 Jingwei Guo, Kaizhu Huang, Xinping Yi, Rui Zhang

Spectral Graph Neural Networks (GNNs) have achieved tremendous success in graph machine learning, with polynomial filters applied for graph convolutions, where all nodes share the identical filter weights to mine their local contexts.
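As an illustrative aside (not code from the paper), the shared polynomial filtering convention the abstract describes can be sketched in a few lines of NumPy; the toy graph, features, and filter weights below are invented for the example.

```python
import numpy as np

# Tiny 4-node path graph: adjacency and symmetric-normalised Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# One scalar feature per node.
x = np.array([1.0, -1.0, 2.0, 0.5])

# Order-2 polynomial filter g(L) = w0*I + w1*L + w2*L^2, with the same
# weights w shared by every node -- the convention the paper revisits.
w = np.array([0.5, -0.3, 0.1])
y = sum(wk * np.linalg.matrix_power(L, k) @ x for k, wk in enumerate(w))
print(y)
```

Diverse spectral filtering, as proposed here, would instead let the effective weights vary across nodes rather than sharing a single `w`.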

GPR, Node Classification

Beam-Delay Domain Channel Estimation for mmWave XL-MIMO Systems

no code implementations10 Dec 2023 Hongwei Hou, Xuan He, Tianhao Fang, Xinping Yi, Wenjin Wang, Shi Jin

This paper investigates the uplink channel estimation of the millimeter-wave (mmWave) extremely large-scale multiple-input-multiple-output (XL-MIMO) communication system in the beam-delay domain, taking into account the near-field and beam-squint effects due to the transmission bandwidth and array aperture growth.

Soft Demodulator for Symbol-Level Precoding in Coded Multiuser MISO Systems

no code implementations16 Oct 2023 Yafei Wang, Hongwei Hou, Wenjin Wang, Xinping Yi, Shi Jin

It is observed that the received SLP signals do not always follow Gaussian distribution, rendering the conventional soft demodulation with the Gaussian assumption unsuitable for the coded SLP systems.

Symbol-Level Precoding for Average SER Minimization in Multiuser MISO Systems

no code implementations11 Oct 2023 Yafei Wang, Hongwei Hou, Wenjin Wang, Xinping Yi

This paper investigates symbol-level precoding (SLP) for high-order quadrature amplitude modulation (QAM) aimed at minimizing the average symbol error rate (SER), leveraging both constructive interference (CI) and noise power to gain superiority in full signal-to-noise ratio (SNR) ranges.

DeepHGCN: Toward Deeper Hyperbolic Graph Convolutional Networks

no code implementations3 Oct 2023 Jiaxu Liu, Xinping Yi, Xiaowei Huang

Hyperbolic graph convolutional networks (HGCN) have demonstrated significant potential in extracting information from hierarchical graphs.

Computational Efficiency, Link Prediction +1

Semantic Communications using Foundation Models: Design Approaches and Open Issues

no code implementations23 Sep 2023 Peiwen Jiang, Chao-Kai Wen, Xinping Yi, Xiao Li, Shi Jin, Jun Zhang

Foundation models (FMs), including large language models, have become increasingly popular due to their wide-ranging applicability and ability to understand human-like semantics.

Symplectic Structure-Aware Hamiltonian (Graph) Embeddings

no code implementations9 Sep 2023 Jiaxu Liu, Xinping Yi, Tianle Zhang, Xiaowei Huang

In traditional Graph Neural Networks (GNNs), the assumption of a fixed embedding manifold often limits their adaptability to diverse graph geometries.

Node Classification, Riemannian optimization

Randomized Adversarial Training via Taylor Expansion

1 code implementation CVPR 2023 Gaojie Jin, Xinping Yi, Dengyu Wu, Ronghui Mu, Xiaowei Huang

The randomized weights enable our design of a novel adversarial training method via Taylor expansion of a small Gaussian noise, and we show that the new adversarial training method can flatten loss landscape and find flat minima.

Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff

1 code implementation23 Jan 2023 Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang

The Top-K cutoff technique optimises SNN inference, and the regularisation is proposed to shape training so as to construct an SNN with optimised performance under cutoff.
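To give a flavour of the idea (a simplified confidence-gap stand-in, not the paper's exact Top-K criterion), the sketch below stops accumulating output spikes as soon as the leading class outpaces the runner-up by a fixed margin; the spike trains are synthetic and deterministic.

```python
import numpy as np

# Deterministic toy spike trains: 10 output neurons over T = 20 timesteps.
# The target neuron (class 3) fires every step; the rest fire every 5th step.
T, n_classes, target = 20, 10, 3
spikes = np.zeros((T, n_classes), dtype=int)
spikes[:, target] = 1
spikes[4::5, :] = 1  # background firing at steps 4, 9, 14, 19

# Cutoff rule: stop as soon as the top class leads the runner-up by `gap`.
gap = 4
counts = np.zeros(n_classes, dtype=int)
used_steps = T
for t in range(T):
    counts += spikes[t]
    top2 = np.sort(counts)[-2:]          # [runner-up, leader]
    if top2[1] - top2[0] >= gap:
        used_steps = t + 1
        break

print(f"predicted class {counts.argmax()} after {used_steps}/{T} timesteps")
```

Here the prediction is reached after 4 of 20 timesteps, which is the energy saving an event-driven cutoff is after.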

Computational Efficiency

ES-GNN: Generalizing Graph Neural Networks Beyond Homophily with Edge Splitting

1 code implementation27 May 2022 Jingwei Guo, Kaizhu Huang, Rui Zhang, Xinping Yi

While Graph Neural Networks (GNNs) have achieved enormous success in multiple graph analytical tasks, modern variants mostly rely on the strong inductive bias of homophily.

Denoising, Inductive Bias

Enhancing Adversarial Training with Second-Order Statistics of Weights

1 code implementation CVPR 2022 Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang

In this paper, we show that treating model weights as random variables allows for enhancing adversarial training through Second-Order Statistics Optimization (S²O) with respect to the weights.

Weight Expansion: A New Perspective on Dropout and Generalization

no code implementations23 Jan 2022 Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang

While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.

Neuronal Correlation: a Central Concept in Neural Network

no code implementations22 Jan 2022 Gaojie Jin, Xinping Yi, Xiaowei Huang

This paper proposes to study neural networks through neuronal correlation, a statistical measure of correlated neuronal activity on the penultimate layer.
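As a hedged illustration of the measure being described (the activations below are synthetic, not from any trained network), neuronal correlation can be read as the Pearson correlation of penultimate-layer activations across inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical penultimate-layer activations: 100 inputs x 8 neurons,
# where neuron 1 is a noisy copy of neuron 0 (a highly correlated pair).
acts = rng.standard_normal((100, 8))
acts[:, 1] = acts[:, 0] + 0.1 * rng.standard_normal(100)

# Pearson correlation matrix of neuron activations across inputs.
corr = np.corrcoef(acts, rowvar=False)
print(f"corr(neuron0, neuron1) = {corr[0, 1]:.3f}")
```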

Perturbation Diversity Certificates Robust Generalisation

no code implementations29 Sep 2021 Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Bin Gu, Huan Xiong, Xinping Yi

This is possibly because conventional adversarial training methods generate adversarial perturbations in a supervised way, so that the adversarial samples are highly biased towards the decision boundary, resulting in an inhomogeneous data distribution.

Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications

no code implementations24 Aug 2021 Wenjie Ruan, Xinping Yi, Xiaowei Huang

This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.

Adversarial Robustness, Learning Theory

Improving Model Robustness with Latent Distribution Locally and Globally

1 code implementation8 Jul 2021 Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi

The proposed adversarial training with latent distribution (ATLD) method defends against adversarial attacks by crafting LMAEs with the latent manifold in an unsupervised manner.

Adversarial Robustness

LGD-GCN: Local and Global Disentangled Graph Convolutional Networks

1 code implementation24 Apr 2021 Jingwei Guo, Kaizhu Huang, Xinping Yi, Rui Zhang

We introduce a novel Local and Global Disentangled Graph Convolutional Network (LGD-GCN) to capture both local and global information for graph disentanglement.

Disentanglement, Node Classification

A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network from Convolutional Neural Network

1 code implementation1 Mar 2021 Dengyu Wu, Xinping Yi, Xiaowei Huang

In this paper, we argue that this trend of "energy for accuracy" is not necessary -- a little energy can go a long way to achieve the near-zero accuracy loss.

Topological Interference Management with Adversarial Topology Perturbation: An Algorithmic Perspective

no code implementations29 Jan 2021 Ya-Chun Liang, Chung-Shou Liao, Xinping Yi

This is a sharp reduction compared with general graph re-coloring, whose optimal number of updates scales with the size of the network, thanks to the delicate exploitation of the structural properties of chordal graph classes.

Information Theory

How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?

no code implementations NeurIPS 2020 Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.

How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks

1 code implementation12 Oct 2020 Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang

This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
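One simple reading of weight correlation (an illustrative sketch with random weights, not the paper's exact formulation) is the average absolute cosine similarity between the weight vectors of distinct neurons in a layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fully-connected layer: 6 neurons, each with a 16-dim weight vector.
W = rng.standard_normal((6, 16))

# Average absolute cosine similarity between distinct neurons' weight vectors.
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
cos = Wn @ Wn.T
off_diag = ~np.eye(len(W), dtype=bool)
weight_corr = np.abs(cos[off_diag]).mean()
print(f"layer weight correlation: {weight_corr:.3f}")
```

A quantity of this kind can then be related to generalisation bounds, which is the direction the paper pursues.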

Asymptotic Singular Value Distribution of Linear Convolutional Layers

no code implementations12 Jun 2020 Xinping Yi

Although a "wrapping around" operation can transform linear convolution to a circular one, by which the singular values can be approximated with reduced computational complexity by those of a block matrix with doubly circulant blocks, the accuracy of such an approximation is not guaranteed.
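The reduced-complexity route the abstract alludes to is easiest to see in 1D, where the circular-convolution operator is a plain circulant matrix and its singular values are exactly the magnitudes of the kernel's DFT. The kernel below is arbitrary, chosen only to make the check concrete.

```python
import numpy as np

# Build the circulant matrix of a 1D circular convolution with kernel k.
n = 8
k = np.zeros(n)
k[:3] = [1.0, -0.5, 0.25]  # a length-3 kernel zero-padded to n

C = np.empty((n, n))
for i in range(n):
    C[:, i] = np.roll(k, i)  # each column is a cyclic shift of k

# Singular values of a circulant matrix are the magnitudes of the DFT of its
# first column -- an O(n log n) computation instead of a full SVD.
sv_fft = np.sort(np.abs(np.fft.fft(k)))[::-1]
sv_svd = np.linalg.svd(C, compute_uv=False)
print(np.allclose(sv_fft, sv_svd))  # prints True
```

For *linear* (non-circular) convolution the operator is Toeplitz rather than circulant, and the paper's point is precisely that the accuracy of the circulant approximation is not guaranteed.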

Learning to Localize: A 3D CNN Approach to User Positioning in Massive MIMO-OFDM Systems

no code implementations27 Oct 2019 Chi Wu, Xinping Yi, Wenjin Wang, Li You, Qing Huang, Xiqi Gao

In this paper, we consider the user positioning problem in the massive multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) system with a uniform planar antenna (UPA) array.
