Search Results for author: Ziming Zhao

Found 15 papers, 5 papers with code

From Images to Signals: Are Large Vision Models Useful for Time Series Analysis?

no code implementations • 29 May 2025 • Ziming Zhao, ChengAo Shen, Hanghang Tong, Dongjin Song, Zhigang Deng, Qingsong Wen, Jingchao Ni

Transformer-based models have gained increasing attention in time series research, driving interest in Large Language Models (LLMs) and foundation models for time series analysis.

Time Series • Time Series Analysis • +1

Multi-Modal View Enhanced Large Vision Models for Long-Term Time Series Forecasting

no code implementations • 29 May 2025 • ChengAo Shen, Wenchao Yu, Ziming Zhao, Dongjin Song, Wei Cheng, Haifeng Chen, Jingchao Ni

Time series, typically represented as numerical sequences, can also be transformed into images and texts, offering multi-modal views (MMVs) of the same underlying signal.

Inductive Bias • Time Series • +1
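
The paper above builds on the idea that a numerical series can also be rendered as an image and handed to a vision model. As a rough illustration of that general idea (not the paper's actual transformation pipeline), the sketch below encodes a 1-D series as a Gramian Angular Summation Field image; the function name and the toy sine signal are illustrative assumptions.

```python
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D series as a 2-D image via a Gramian Angular Summation Field."""
    # Rescale the series to [-1, 1] so each value can be read as cos(phi).
    x_min, x_max = series.min(), series.max()
    scaled = 2.0 * (series - x_min) / (x_max - x_min) - 1.0
    phi = np.arccos(np.clip(scaled, -1.0, 1.0))  # polar-coordinate angles
    # GASF entry (i, j) = cos(phi_i + phi_j), giving an (n, n) "image".
    return np.cos(phi[:, None] + phi[None, :])

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 128)
    noisy_sine = np.sin(t) + 0.1 * np.random.randn(t.size)  # toy signal
    image = gramian_angular_field(noisy_sine)
    print(image.shape)  # (128, 128) array a vision model could consume
```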

Harnessing Vision Models for Time Series Analysis: A Survey

1 code implementation • 13 Feb 2025 • Jingchao Ni, Ziming Zhao, ChengAo Shen, Hanghang Tong, Dongjin Song, Wei Cheng, Dongsheng Luo, Haifeng Chen

Time series analysis has witnessed an inspiring progression from traditional autoregressive models and deep learning models to recent Transformers and Large Language Models (LLMs).

Survey • Time Series • +1

Rethinking Membership Inference Attacks Against Transfer Learning

no code implementations • 20 Jan 2025 • Cong Wu, Jing Chen, Qianru Fang, Kun He, Ziming Zhao, Hao Ren, Guowen Xu, Yang Liu, Yang Xiang

The interaction between teacher and student models in transfer learning has not been thoroughly explored in membership inference attacks (MIAs), leaving an under-examined aspect of privacy vulnerabilities within transfer learning.

Transfer Learning

CHASE: A Causal Heterogeneous Graph based Framework for Root Cause Analysis in Multimodal Microservice Systems

no code implementations • 28 Jun 2024 • Ziming Zhao, Tiehua Zhang, Zhishu Shen, Hai Dong, Xingjun Ma, Xianhui Liu, Yun Yang

In recent years, the widespread adoption of distributed microservice architectures within the industry has significantly increased the demand for enhanced system availability and robustness.

Anomaly Detection

Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models

2 code implementations • 27 Mar 2024 • Keyan Guo, Ayush Utkarsh, Wenbo Ding, Isabelle Ondracek, Ziming Zhao, Guo Freeman, Nishant Vishwamitra, Hongxin Hu

Online user-generated content games (UGCGs) are increasingly popular among children and adolescents for social interaction and more creative online entertainment.

Domain Adaptation

An Investigation of Large Language Models for Real-World Hate Speech Detection

no code implementations • 7 Jan 2024 • Keyan Guo, Alexander Hu, Jaden Mu, Ziheng Shi, Ziming Zhao, Nishant Vishwamitra, Hongxin Hu

Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech by fully utilizing the knowledge base in LLMs, significantly outperforming existing techniques.

Hate Speech Detection
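
The snippet above attributes the gains to a carefully crafted reasoning prompt. The sketch below shows what a step-by-step reasoning prompt for hate speech classification might look like; the template wording, the helper functions, and the verdict format are illustrative assumptions, not the prompt used in this paper.

```python
# Illustrative reasoning-style prompt for LLM-based hate speech detection.
REASONING_PROMPT = """You are a content moderator.
Post: "{post}"

Think step by step before answering:
1. Who, if anyone, is being targeted (an individual, a protected group, no one)?
2. Does the post attack, demean, or threaten that target, or is it merely
   reporting, quoting, or counter-speech?
3. Considering context, slang, and implicit meaning, is this hate speech?

Give a short justification, then end with a final line:
VERDICT: HATE or VERDICT: NOT_HATE
"""

def build_prompt(post: str) -> str:
    """Fill the reasoning template with the post to be classified."""
    return REASONING_PROMPT.format(post=post.replace('"', "'"))

def parse_verdict(llm_output: str) -> bool:
    """Return True if the model's final verdict line flags the post as hate."""
    return "VERDICT: HATE" in llm_output.upper()
```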

Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models

1 code implementation • 22 Dec 2023 • Nishant Vishwamitra, Keyan Guo, Farhan Tajwar Romit, Isabelle Ondracek, Long Cheng, Ziming Zhao, Hongxin Hu

HATEGUARD further achieves prompt-based zero-shot detection by automatically generating and updating detection prompts with the new derogatory terms and targets found in new-wave samples, allowing it to effectively address new waves of online hate.

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores

no code implementations • 1 Dec 2022 • Ziqi Yang, Lijin Wang, Da Yang, Jie Wan, Ziming Zhao, Ee-Chien Chang, Fan Zhang, Kui Ren

In addition, our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks.

Attribute Inference Attack • +1

Wavelet Regularization Benefits Adversarial Training

1 code implementation • 8 Jun 2022 • Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll

Since adversarial vulnerability can be regarded as a high-frequency phenomenon, it is essential to regularize adversarially trained neural network models in the frequency domain.

Adversarial Robustness
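
The snippet above treats adversarial vulnerability as a high-frequency phenomenon and argues for frequency-domain regularization. As a hedged illustration (not the wavelet regularizer defined in the paper), the sketch below measures the energy in the high-frequency sub-bands of a single-level 2-D Haar transform, which could be added to a training loss as a penalty; the function name and the weight in the usage comment are assumptions.

```python
import torch

def haar_highfreq_energy(x: torch.Tensor) -> torch.Tensor:
    """Mean energy of the detail sub-bands of a single-level 2-D Haar transform.

    x: a (N, C, H, W) batch with even H and W. The 2x2 block combinations below
    form the three detail (high-frequency) sub-bands of the Haar wavelet.
    """
    a = x[:, :, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    lh = (a - b + c - d) / 2.0  # column-wise differences
    hl = (a + b - c - d) / 2.0  # row-wise differences
    hh = (a - b - c + d) / 2.0  # diagonal differences
    return (lh.pow(2) + hl.pow(2) + hh.pow(2)).mean()

# Illustrative use inside a training step (the 1e-3 weight is an assumption):
# loss = task_loss + 1e-3 * haar_highfreq_energy(images)
```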

Understanding and Measuring Robustness of Multimodal Learning

no code implementations • 22 Dec 2021 • Nishant Vishwamitra, Hongxin Hu, Ziming Zhao, Long Cheng, Feng Luo

We then introduce a new type of multimodal adversarial attack in MUROAN, called the decoupling attack, which aims to compromise multimodal models by decoupling their fused modalities.

Adversarial Robustness

Moving Target Defense for Web Applications using Bayesian Stackelberg Games

1 code implementation • 23 Feb 2016 • Sailik Sengupta, Satya Gautam Vadlamudi, Subbarao Kambhampati, Marthony Taguinod, Adam Doupé, Ziming Zhao, Gail-Joon Ahn

We also address the issue of prioritizing vulnerabilities that, when fixed, improve the security of the MTD system.
