Search Results for author: Zhaoyuan Yang

Found 14 papers, 2 papers with code

ArGue: Attribute-Guided Prompt Tuning for Vision-Language Models

no code implementations27 Nov 2023 Xinyu Tian, Shu Zou, Zhaoyuan Yang, Jing Zhang

Although soft prompt tuning is effective in efficiently adapting Vision-Language (V&L) models for downstream tasks, it shows limitations in dealing with distribution shifts.

Attribute · Out-of-Distribution Generalization
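The snippet above only names soft prompt tuning: continuous "context" vectors are prepended to the frozen text encoder's input and are the only parameters updated. The toy NumPy sketch below illustrates one such update step under strong simplifying assumptions (a mean-pooling stand-in for the text encoder, a dot-product objective); it is not ArGue's attribute-guided method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx = 8, 4                                    # embedding dim, soft-prompt length
prompt = 0.01 * rng.standard_normal((n_ctx, d))    # learnable context vectors
class_emb = rng.standard_normal(d)                 # frozen class-name embedding

def text_feature(prompt, class_emb):
    # Stand-in for the frozen text encoder: mean-pool prompt + class token.
    return np.vstack([prompt, class_emb]).mean(axis=0)

image_feat = rng.standard_normal(d)                # frozen image feature, one sample

# One gradient ascent step on the alignment score (dot product with the image).
# For the mean-pool stand-in, d(score)/d(prompt_row) = image_feat / (n_ctx + 1).
lr = 0.1
before = text_feature(prompt, class_emb) @ image_feat
prompt += lr * image_feat / (n_ctx + 1)
after = text_feature(prompt, class_emb) @ image_feat
```

Only `prompt` changes; `class_emb` and `image_feat` stay frozen, which is the defining property of soft prompt tuning.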

IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models

1 code implementation12 Nov 2023 Zhaoyuan Yang, Zhengyang Yu, Zhiwei Xu, Jaskirat Singh, Jing Zhang, Dylan Campbell, Peter Tu, Richard Hartley

We present a diffusion-based image morphing approach with perceptually-uniform sampling (IMPUS) that produces smooth, direct and realistic interpolations given an image pair.

Image Generation · Image Morphing
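The abstract does not spell out the perceptually-uniform sampling scheme; a common building block for interpolating between two images' diffusion latents is spherical linear interpolation (slerp). The sketch below shows generic slerp in NumPy as background, not the IMPUS algorithm itself.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # nearly parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
zA, zB = rng.standard_normal(512), rng.standard_normal(512)  # two latent codes
frames = [slerp(zA, zB, t) for t in np.linspace(0.0, 1.0, 8)]
```

Slerp keeps intermediate latents near the typical norm of Gaussian latents, which is why it is often preferred over straight linear interpolation for diffusion models.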

Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications

no code implementations6 Jul 2023 Peter Tu, Zhaoyuan Yang, Richard Hartley, Zhiwei Xu, Jing Zhang, Yiwei Fu, Dylan Campbell, Jaskirat Singh, Tianyu Wang

This paper begins with a description of methods for estimating image probability density functions, reflecting the observation that such data is usually constrained to lie in restricted regions of the high-dimensional image space: not every pattern of pixels is an image.

Test-time Detection and Repair of Adversarial Samples via Masked Autoencoder

no code implementations22 Mar 2023 Yun-Yun Tsai, Ju-Chin Chao, Albert Wen, Zhaoyuan Yang, Chengzhi Mao, Tapan Shah, Junfeng Yang

Test-time defenses solve these issues, but most existing ones require adapting the model weights; they therefore do not work on frozen models and complicate model memory management.

Contrastive Learning · Management
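The common thread in reconstruction-based test-time detection is that off-manifold (e.g. adversarially perturbed) inputs reconstruct poorly under a model fit to clean data. As a hedged illustration only, the sketch below uses a linear subspace (PCA) as a stand-in for the paper's masked autoencoder and flags inputs by reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic clean data lying on a 16-dim subspace of a 64-dim input space.
clean = rng.standard_normal((500, 16)) @ rng.standard_normal((16, 64))
clean -= clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean, full_matrices=False)
V = Vt[:16]                       # principal subspace; stand-in for the autoencoder

def recon_error(x):
    """Distance from the learned subspace; large values flag off-manifold inputs."""
    return np.linalg.norm(x - V.T @ (V @ x))

on_manifold = clean[0]
perturbed = on_manifold + 3.0 * rng.standard_normal(64)  # stand-in for a perturbation
```

A detection threshold on `recon_error` would then be calibrated on held-out clean samples; the repair step in the paper additionally reconstructs the input, which this linear toy does via projection.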

Adversarial Purification with the Manifold Hypothesis

no code implementations26 Oct 2022 Zhaoyuan Yang, Zhiwei Xu, Jing Zhang, Richard Hartley, Peter Tu

In this work, we formulate a novel framework for adversarial robustness using the manifold hypothesis.

Adversarial Robustness · Variational Inference

Uncertainty-aware Perception Models for Off-road Autonomous Unmanned Ground Vehicles

no code implementations22 Sep 2022 Zhaoyuan Yang, Yewteck Tan, Shiraj Sen, Johan Reimann, John Karigiannis, Mohammed Yousefhussien, Nurali Virani

We test the hypothesis that a model trained on a single dataset may not generalize to other off-road navigation datasets and new locations due to input distribution drift.

Autonomous Navigation · Semantic Segmentation +1

Dropout Inference with Non-Uniform Weight Scaling

no code implementations27 Apr 2022 Zhaoyuan Yang, Arpit Jain

Dropout as regularization has been used extensively to prevent overfitting for training neural networks.
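The standard dropout inference approximation scales all weights uniformly by the keep probability so that the deterministic output matches the expectation over dropout masks; the paper's non-uniform scaling is not detailed in this snippet. A minimal sketch of the uniform baseline for a single linear layer, with a Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))   # weights of a toy linear layer
x = rng.standard_normal(3)
p_keep = 0.8                      # keep probability used during training

# Training-time dropout: each input unit is kept independently with prob p_keep.
mask = rng.random(3) < p_keep
y_train = W @ (mask * x)

# Standard (uniform) inference approximation: scale all weights by p_keep,
# so the deterministic output equals the expected stochastic output.
y_infer = (p_keep * W) @ x

# Averaging many stochastic forward passes should approach y_infer.
y_mc = np.mean([W @ ((rng.random(3) < p_keep) * x) for _ in range(30000)],
               axis=0)
```

The uniform rescaling applies one scalar to every weight; a non-uniform scheme, as the title suggests, would allow the scaling to differ across weights or units.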

On Adversarial Vulnerability of PHM algorithms: An Initial Study

no code implementations14 Oct 2021 Weizhong Yan, Zhaoyuan Yang, Jianwei Qiu

With the proliferation of deep learning (DL) applications in diverse domains, the vulnerability of DL models to adversarial attacks has become an increasingly interesting research topic in Computer Vision (CV) and Natural Language Processing (NLP).

Time Series · Time Series Analysis

Variational Encoder-based Reliable Classification

no code implementations19 Feb 2020 Chitresh Bhushan, Zhaoyuan Yang, Nurali Virani, Naresh Iyer

Machine learning models provide statistically impressive results that might nonetheless be unreliable for individual predictions.

BIG-bench Machine Learning · Classification +1

Justification-Based Reliability in Machine Learning

no code implementations18 Nov 2019 Nurali Virani, Naresh Iyer, Zhaoyuan Yang

To address this need, we link the question of reliability of a model's individual prediction to the epistemic uncertainty of the model's prediction.

BIG-bench Machine Learning
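One generic way to expose the epistemic uncertainty mentioned above is disagreement across an ensemble: when independently trained models disagree on an input, that individual prediction is less reliable. The toy sketch below (not the paper's justification framework; the ensemble of random linear scorers is purely illustrative) computes a disagreement score alongside the prediction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble of 5 linear scorers, e.g. trained on different subsets.
ensemble = [rng.standard_normal((2, 3)) for _ in range(5)]  # 2 classes, 3 features

def predict_with_uncertainty(x):
    """Return the majority-vote class and ensemble disagreement as an
    epistemic-uncertainty proxy (0 = full agreement)."""
    votes = [int(np.argmax(Wk @ x)) for Wk in ensemble]
    counts = np.bincount(votes, minlength=2)
    pred = int(np.argmax(counts))
    disagreement = 1.0 - counts[pred] / len(ensemble)
    return pred, disagreement

pred, unc = predict_with_uncertainty(rng.standard_normal(3))
```

A reliability-aware pipeline would abstain or defer to a human when `unc` exceeds a calibrated threshold, which is the operational link between individual-prediction reliability and epistemic uncertainty.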

Design of intentional backdoors in sequential models

no code implementations26 Feb 2019 Zhaoyuan Yang, Naresh Iyer, Johan Reimann, Nurali Virani

Recent work has demonstrated robust mechanisms by which attacks can be orchestrated on machine learning models.

Decision Making

Adversarial Reinforcement Learning for Observer Design in Autonomous Systems under Cyber Attacks

no code implementations15 Sep 2018 Abhishek Gupta, Zhaoyuan Yang

Complex autonomous control systems are subject to sensor failures, cyber-attacks, sensor noise, communication channel failures, etc.

reinforcement-learning · Reinforcement Learning (RL)
