Search Results for author: Tianjin Huang

Found 23 papers, 17 papers with code

Benchmarking Audio Deepfake Detection Robustness in Real-world Communication Scenarios

no code implementations • 16 Apr 2025 • Haohan Shi, Xiyu Shi, Safak Dogan, Saif Alzubi, Tianjin Huang, Yunxiao Zhang

We introduce ADD-C, a new test dataset for evaluating the robustness of audio deepfake detection (ADD) systems under diverse communication conditions, including different combinations of audio codecs for compression and Packet Loss Rates (PLR).

Audio Deepfake Detection · Benchmarking +2

Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers

no code implementations • 21 Mar 2025 • Gaojie Jin, Tianjin Huang, Ronghui Mu, Xiaowei Huang

While prior work has attempted to address this issue in adversarial robustness, the study of worst-class certified robustness for smoothed classifiers remains unexplored.

Adversarial Robustness · Fairness

Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam

1 code implementation • 24 Feb 2025 • Tianjin Huang, Haotian Hu, Zhenyu Zhang, Gaojie Jin, Xiang Li, Li Shen, Tianlong Chen, Lu Liu, Qingsong Wen, Zhangyang Wang, Shiwei Liu

This paper comprehensively evaluates several recently proposed optimizers for 4-bit training, revealing that low-bit precision amplifies sensitivity to learning rates and often causes unstable gradient norms, leading to divergence at higher learning rates.

Enhancing Robust Fairness via Confusional Spectral Regularization

1 code implementation • 22 Jan 2025 • Gaojie Jin, Sihao Wu, Jiaxu Liu, Tianjin Huang, Ronghui Mu

Our analysis shows that the worst-class robust error is influenced by two main factors: the spectral norm of the empirical robust confusion matrix and the information embedded in the model and training set.

Fairness
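The spectral norm of the empirical robust confusion matrix is the quantity this regularizer targets. As a rough PyTorch sketch of how such a term can enter a loss, one can build a differentiable "soft" confusion matrix from batch predictions and penalize its largest singular value; the soft relaxation, the helper names, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_confusion_matrix(logits, labels, num_classes):
    """Row i holds the mean predicted class distribution over samples
    whose true class is i (a differentiable 'soft' confusion matrix)."""
    probs = F.softmax(logits, dim=1)                      # (N, C)
    onehot = F.one_hot(labels, num_classes).float()       # (N, C)
    counts = onehot.sum(dim=0).clamp(min=1).unsqueeze(1)  # (C, 1)
    return (onehot.t() @ probs) / counts                  # (C, C)

def spectral_penalty(logits, labels, num_classes):
    """Largest singular value (spectral norm) of the soft confusion matrix."""
    cm = soft_confusion_matrix(logits, labels, num_classes)
    return torch.linalg.matrix_norm(cm, ord=2)

# Hypothetical usage inside a training step:
# loss = F.cross_entropy(logits, y) + lam * spectral_penalty(logits, y, num_classes)
```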

SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training

1 code implementation • 12 Jan 2025 • Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu

Large Language Models (LLMs) have demonstrated exceptional performance across diverse tasks, yet their training remains highly resource-intensive and susceptible to critical challenges such as training instability.

Time Series Forecasting
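The title names the two mechanisms: detect gradient spikes and reset Adam's momentum when they occur. Below is a deliberately simplified, single-tensor sketch of that idea; the spike test, `spike_factor`, and the reset policy are assumptions made for illustration and follow the paper only in spirit.

```python
import torch

@torch.no_grad()
def spam_like_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                   eps=1e-8, spike_factor=50.0):
    """One Adam-style update with a naive per-coordinate spike test:
    if a squared gradient vastly exceeds the running second moment,
    clip it and reset the momentum buffers. Bias correction is omitted
    for brevity; the actual threshold and reset rules follow the paper."""
    m = state.setdefault("m", torch.zeros_like(param))
    v = state.setdefault("v", torch.zeros_like(param))
    state["t"] = state.get("t", 0) + 1
    if state["t"] > 1:  # need some history before testing for spikes
        spike = grad.pow(2) > spike_factor * (v + eps)
        if spike.any():
            # Clip spiked coordinates so one outlier batch cannot
            # contaminate the running statistics, then reset momentum.
            grad = torch.where(spike,
                               grad.sign() * (spike_factor * (v + eps)).sqrt(),
                               grad)
            m.zero_()
            v.zero_()
    m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])
    v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])
    param.sub_(lr * m / (v.sqrt() + eps))
```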

(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork

no code implementations • 24 Jul 2024 • Tianjin Huang, Fang Meng, Li Shen, Fan Liu, Yulong Pei, Mykola Pechenizkiy, Shiwei Liu, Tianlong Chen

In this paper, we investigate a charming possibility: leveraging visual prompts to capture the channel importance and derive high-quality structural sparsity.

Composable Interventions for Language Models

1 code implementation • 9 Jul 2024 • Arinbjorn Kolbeinsson, Kyle O'Brien, Tianjin Huang, ShangHua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen

Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining.

knowledge editing · Machine Unlearning +1

The Counterattack of CNNs in Self-Supervised Learning: Larger Kernel Size might be All You Need

no code implementations • 9 Dec 2023 • Tianjin Huang, Tianlong Chen, Zhangyang Wang, Shiwei Liu

Therefore, it remains unclear whether the self-attention operation is crucial for the recent advances in SSL, or whether CNNs with more advanced designs can deliver the same excellence.

All · Self-Supervised Learning

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective

1 code implementation • 3 Dec 2023 • Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen

The rapid development of large-scale deep learning models calls into question the affordability of hardware platforms, which necessitates pruning to reduce their computational and memory footprints.

Image Classification · Visual Prompting
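The common thread with PASS above is visual prompting: a small set of learnable input-space parameters steers a frozen or sparsified backbone. A minimal additive-prompt module might look as follows; the additive pixel-space design and the shapes are generic assumptions rather than the specific prompt used in the paper.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    """A learnable pixel-space prompt added to every input image."""
    def __init__(self, channels=3, size=224):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, channels, size, size))

    def forward(self, x):
        return x + self.prompt  # broadcasts over the batch dimension

# Hypothetical usage with a frozen, pruned backbone:
# logits = backbone(VisualPrompt()(images))
```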

Enhancing Adversarial Training via Reweighting Optimization Trajectory

1 code implementation • 25 Jun 2023 • Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy

Despite the fact that adversarial training has become the de facto method for improving the robustness of deep neural networks, it is well-known that vanilla adversarial training suffers from daunting robust overfitting, resulting in unsatisfactory robust generalization.

Adversarial Robustness
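The paper's key move is reweighting states along the optimization trajectory. As a generic stand-in, here is uniform checkpoint averaging in the spirit of stochastic weight averaging; the uniform weights are a simplification, since the paper learns how to reweight the trajectory.

```python
import copy
import torch

@torch.no_grad()
def average_checkpoints(models, weights=None):
    """Convex combination of checkpoints collected along the training
    trajectory. Uniform weights give plain weight averaging (SWA-style)."""
    weights = weights or [1.0 / len(models)] * len(models)
    avg = copy.deepcopy(models[0])
    for p_avg, *ps in zip(avg.parameters(), *(m.parameters() for m in models)):
        p_avg.copy_(sum(w * p for w, p in zip(weights, ps)))
    return avg
```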

Are Large Kernels Better Teachers than Transformers for ConvNets?

1 code implementation • 30 May 2023 • Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu

We hereby carry out a first-of-its-kind study unveiling that modern large-kernel ConvNets, a compelling competitor to Vision Transformers, are remarkably more effective teachers for small-kernel ConvNets, due to more similar architectures.

Knowledge Distillation
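For reference, the distillation setup implied here is the standard one: a large-kernel ConvNet teacher supervises a small-kernel student through softened logits. A textbook Hinton-style loss follows; the temperature and mixing weight are illustrative, not the paper's values.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: KL between temperature-softened teacher
    and student distributions, mixed with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```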

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!

1 code implementation • 3 Mar 2023 • Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang

In pursuit of a more general evaluation and to unveil the true potential of sparse algorithms, we introduce the "Sparsity May Cry" Benchmark (SMC-Bench), a collection of 4 carefully curated, diverse tasks with 10 datasets, capturing a wide range of domain-specific and sophisticated knowledge.

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

1 code implementation • 28 Nov 2022 • Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of the fully trained dense networks at initialization, without any optimization of the weights of the network (i.e., untrained networks).

All · Out-of-Distribution Detection
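This finding echoes "edge-popup"-style methods: freeze random weights and train only a score per weight, keeping the top-scoring fraction as the subnetwork. A generic dense-layer sketch follows; the paper works on GNNs, and the straight-through top-k selection here is the standard trick rather than their exact formulation.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Frozen random weights; only a score per weight is trained, and the
    top-scoring fraction is kept via a straight-through hard mask."""
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)  # never trained
        self.scores = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.sparsity = sparsity

    def forward(self, x):
        k = int(self.scores.numel() * self.sparsity)
        threshold = self.scores.flatten().kthvalue(k).values
        mask = (self.scores > threshold).float()
        # Straight-through estimator: forward uses the hard mask,
        # gradients flow to the scores.
        mask = mask + self.scores - self.scores.detach()
        return x @ (self.weight * mask).t()
```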

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training

no code implementations • 30 May 2022 • Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu

Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch.

Calibrated Adversarial Training

1 code implementation • 1 Oct 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy

In this paper, we present Calibrated Adversarial Training, a method that reduces the adverse effects of semantic perturbations in adversarial training.

Direction-Aggregated Attack for Transferable Adversarial Examples

1 code implementation • 19 Apr 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Yuhao Wang, Mykola Pechenizkiy

Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptible changes to the inputs.

Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks

1 code implementation • 16 Apr 2021 • Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy

Although various approaches have been proposed to solve this problem, two major limitations exist: (1) unsupervised approaches usually work much less efficiently due to the lack of supervisory signal, and (2) existing anomaly detection methods only use local contextual information to detect anomalous nodes, e.g., one- or two-hop information, but ignore the global contextual information.

Self-Supervised Anomaly Detection · Supervised Anomaly Detection
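The hop-count signal in the title can be made concrete: train a model to predict hop counts between node pairs, and flag nodes whose neighborhoods make those predictions fail. A small sketch of generating such self-supervised targets with networkx; the pair sampling, the cap `max_hops`, and the unreachable bucket are illustrative assumptions, not the paper's model.

```python
import networkx as nx

def hop_count_labels(graph, pairs, max_hops=5):
    """Self-supervised targets: shortest-path hop counts between node
    pairs, capped at max_hops, with an extra bucket for unreachable pairs."""
    labels = {}
    for u, v in pairs:
        try:
            d = nx.shortest_path_length(graph, u, v)
        except nx.NetworkXNoPath:
            d = max_hops + 1  # disconnected pairs go in the 'unreachable' bucket
        labels[(u, v)] = min(d, max_hops + 1)
    return labels
```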

Bridging the Performance Gap between FGSM and PGD Adversarial Training

1 code implementation • 7 Nov 2020 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy

In addition, it achieves comparable adversarial robustness on the MNIST dataset under white-box attacks, and on the CIFAR-10 dataset it outperforms adv.PGD under white-box attacks and effectively defends against transferable adversarial attacks.

Adversarial Attack · Adversarial Robustness
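For context on the gap being bridged: FGSM adversarial training takes a single gradient-sign step per example, whereas PGD iterates several smaller projected steps and is typically more robust but costlier. The standard single-step attack looks like this (ε = 8/255 is the usual CIFAR-10 convention, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step FGSM: perturb the input along the sign of the loss
    gradient, then clamp back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```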

ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks

1 code implementation • 30 Sep 2020 • Yulong Pei, Tianjin Huang, Werner van Ipenburg, Mykola Pechenizkiy

Effectively detecting anomalous nodes in attributed networks is crucial for the success of many real-world applications such as fraud and intrusion detection.

Anomaly Detection · Intrusion Detection
