Search Results for author: Tianjin Huang

Found 9 papers, 7 papers with code

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

no code implementations • 28 Nov 2022 • Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that, at initialization, can match the performance of fully trained dense networks, without any optimization of the network's weights (i.e., untrained networks).
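The core idea can be sketched in a few lines: keep the random weights fixed and search only for a binary mask over them. The magnitude-based scoring below is a simplified stand-in for illustration, not the paper's actual GNN mask-finding procedure.

```python
import numpy as np

# Toy sketch of the "untrained ticket" idea: the weights are randomly
# initialized and never trained; only a binary mask is chosen.
rng = np.random.default_rng(42)
W = rng.normal(size=(8, 8))          # random init, kept frozen

def magnitude_mask(weights, sparsity):
    """Binary mask keeping the top (1 - sparsity) fraction by |weight|.

    NOTE: magnitude scoring is a simplification used here for
    illustration; real mask-search methods optimize learned scores.
    """
    k = int(round(weights.size * (1 - sparsity)))
    threshold = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

mask = magnitude_mask(W, sparsity=0.75)
subnetwork = W * mask                # the untrained sparse subnetwork
```

At 75% sparsity, only a quarter of the random weights survive; the claim of this line of work is that, with a well-chosen mask, such a subnetwork can already perform competitively without any weight training.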

Out-of-Distribution Detection

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training

no code implementations • 30 May 2022 • Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu

Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch.

Calibrated Adversarial Training

1 code implementation • 1 Oct 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy

In this paper, we present Calibrated Adversarial Training, a method that reduces the adverse effects of semantic perturbations in adversarial training.

Direction-Aggregated Attack for Transferable Adversarial Examples

1 code implementation • 19 Apr 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Yuhao Wang, Mykola Pechenizkiy

Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptible changes to the inputs.

Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks

1 code implementation • 16 Apr 2021 • Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy

Although various approaches have been proposed to solve this problem, two major limitations exist: (1) unsupervised approaches usually perform much worse due to the lack of a supervisory signal, and (2) existing anomaly detection methods use only local contextual information to detect anomalous nodes (e.g., one- or two-hop information) while ignoring global contextual information.

Self-Supervised Anomaly Detection

Bridging the Performance Gap between FGSM and PGD Adversarial Training

1 code implementation • 7 Nov 2020 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy

In addition, it achieves comparable adversarial robustness on the MNIST dataset under white-box attacks, and on the CIFAR-10 dataset it outperforms adv.PGD under white-box attacks while effectively defending against transferable adversarial attacks.
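The gap in the title comes from how the two attacks used during training are constructed: FGSM takes a single signed-gradient step, while PGD repeats small steps with projection back into the ε-ball. The following toy sketch on a logistic-regression "network" (not the paper's training pipeline, models, or datasets) shows both attack constructions side by side.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad_x(w, x, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, (p - y) * w          # dL/dx for a logistic model

def fgsm(w, x, y, eps):
    """Single-step FGSM: one move in the sign of the input gradient."""
    _, g = loss_and_grad_x(w, x, y)
    return x + eps * np.sign(g)

def pgd(w, x, y, eps, alpha=0.02, steps=10):
    """Multi-step PGD: repeated signed steps, projected to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad_x(w, x_adv, y)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# Toy example with made-up data (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
y = 1.0
eps = 0.1
x_fgsm = fgsm(w, x, y, eps)
x_pgd = pgd(w, x, y, eps)
```

For this linear toy model the two attacks coincide; on deep networks, PGD's iterative refinement generally finds stronger perturbations, which is why adv.PGD training is the stronger (and costlier) baseline that FGSM-based training tries to match.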

Adversarial Attack, Adversarial Robustness

ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks

1 code implementation • 30 Sep 2020 • Yulong Pei, Tianjin Huang, Werner van Ipenburg, Mykola Pechenizkiy

Effectively detecting anomalous nodes in attributed networks is crucial for the success of many real-world applications such as fraud and intrusion detection.

Anomaly Detection, Intrusion Detection
