Search Results for author: Tsung-Yi Ho

Found 30 papers, 13 papers with code

NaNa and MiGu: Semantic Data Augmentation Techniques to Enhance Protein Classification in Graph Neural Networks

1 code implementation • 21 Mar 2024 • Yi-Shan Lan, Pin-Yu Chen, Tsung-Yi Ho

In this paper, we propose two novel semantic data augmentation methods, Novel Augmentation of New Node Attributes (NaNa) and Molecular Interactions and Geometric Upgrading (MiGu), to incorporate backbone chemical and side-chain biophysical information into protein classification tasks, along with a co-embedding residual learning framework.

Data Augmentation • Drug Discovery

Evaluating Text-to-Image Generative Models: An Empirical Study on Human Image Synthesis

no code implementations • 8 Mar 2024 • Muxi Chen, Yi Liu, Jian Yi, Changran Xu, Qiuxia Lai, Hongliang Wang, Tsung-Yi Ho, Qiang Xu

In this paper, we present an empirical study introducing a nuanced evaluation framework for text-to-image (T2I) generative models, applied to human image synthesis.

Defect Detection • Fairness • +1

Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes

no code implementations • 1 Mar 2024 • Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho

Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer.

Toward Fairness via Maximum Mean Discrepancy Regularization on Logits Space

no code implementations • 20 Feb 2024 • Hao-Wei Chung, Ching-Hao Chiu, Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho

Fairness has become increasingly pivotal in machine learning for high-risk applications such as healthcare and facial recognition.

Fairness

Achieve Fairness without Demographics for Dermatological Disease Diagnosis

no code implementations • 16 Jan 2024 • Ching-Hao Chiu, Yu-Jen Chen, Yawen Wu, Yiyu Shi, Tsung-Yi Ho

To overcome this, we propose a method enabling fair predictions for sensitive attributes during the testing phase without using such information during training.

Attribute • Fairness

MMA-Diffusion: MultiModal Attack on Diffusion Models

1 code implementation • 29 Nov 2023 • Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, Qiang Xu

In recent years, Text-to-Image (T2I) models have seen remarkable advancements, gaining widespread adoption.

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

1 code implementation • 27 Nov 2023 • Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, QiuLing Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

Diffusion models (DM) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training.

AutoVP: An Automated Visual Prompting Framework and Benchmark

1 code implementation • 12 Oct 2023 • Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen, Sijia Liu, Tsung-Yi Ho

To bridge this gap, we propose AutoVP, an end-to-end expandable framework for automating VP design choices, along with 12 downstream image-classification tasks that can serve as a holistic VP-performance benchmark.

Image Classification • Visual Prompting

NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes

1 code implementation • 29 Jun 2023 • Hao-Lun Sun, Lei Hsiung, Nandhini Chandramoorthy, Pin-Yu Chen, Tsung-Yi Ho

To address this challenge, we introduce NeuralFuse, a novel add-on module that addresses the accuracy-energy tradeoff in low-voltage regimes by learning input transformations to generate error-resistant data representations.

AME-CAM: Attentive Multiple-Exit CAM for Weakly Supervised Segmentation on MRI Brain Tumor

1 code implementation • 26 Jun 2023 • Yu-Jen Chen, Xinrong Hu, Yiyu Shi, Tsung-Yi Ho

Magnetic resonance imaging (MRI) is commonly used for brain tumor segmentation, which is critical for patient evaluation and treatment planning.

Brain Tumor Segmentation • Segmentation • +4

VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models

1 code implementation • NeurIPS 2023 • Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho

This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs.

Backdoor Attack • Denoising

A Novel Confidence Induced Class Activation Mapping for MRI Brain Tumor Segmentation

1 code implementation • 8 Jun 2023 • Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho

Magnetic resonance imaging (MRI) is a commonly used technique for brain tumor segmentation, which is critical for evaluating patients and planning treatment.

Brain Tumor Segmentation • Object Localization • +4

Conditional Diffusion Models for Weakly Supervised Medical Image Segmentation

1 code implementation • 6 Jun 2023 • Xinrong Hu, Yu-Jen Chen, Tsung-Yi Ho, Yiyu Shi

Recent advances in denoising diffusion probabilistic models have shown great success in image synthesis tasks.

Denoising • Image Generation • +5

GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models

no code implementations • 19 Apr 2023 • Zaitang Li, Pin-Yu Chen, Tsung-Yi Ho

Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model.

Adversarial Robustness

How to Backdoor Diffusion Models?

1 code implementation • CVPR 2023 • Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho

To gain a better understanding of the limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks.

Backdoor Attack • Denoising • +1

NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration

1 code implementation • 29 Nov 2022 • Lei Hsiung, Yung-Chen Tang, Pin-Yu Chen, Tsung-Yi Ho

With the advancement of deep learning technology, neural networks have demonstrated their excellent ability to provide accurate predictions in many tasks.

Security Closure of IC Layouts Against Hardware Trojans

no code implementations • 15 Nov 2022 • Fangzhou Wang, Qijing Wang, Bangqi Fu, Shui Jiang, Xiaopeng Zhang, Lilas Alrahis, Ozgur Sinanoglu, Johann Knechtel, Tsung-Yi Ho, Evangeline F. Y. Young

In this work, we proactively and systematically harden the physical layouts of ICs against post-design insertion of Trojans.

Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration

no code implementations • 23 Sep 2022 • Yung-Chen Tang, Pin-Yu Chen, Tsung-Yi Ho

Neural network calibration is an essential task in deep learning to ensure consistency between the confidence of model prediction and the true correctness likelihood.

Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning

no code implementations • 31 Aug 2022 • Zhiyuan He, Yijun Yang, Pin-Yu Chen, Qiang Xu, Tsung-Yi Ho

Empowered by the robust relation net built on SSL, we found that BEYOND outperforms baselines in terms of both detection ability and speed.

Relation • Self-Supervised Learning

CARBEN: Composite Adversarial Robustness Benchmark

1 code implementation • 16 Jul 2022 • Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho

Prior literature on adversarial attack methods has mainly focused on attacking with and defending against a single threat model, e.g., perturbations bounded in an $\ell_{p}$-ball.

Adversarial Attack • Adversarial Robustness

Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations

1 code implementation • CVPR 2023 • Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho

We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$-ball to composite semantic perturbations, such as the combination of Hue, Saturation, Brightness, Contrast, and Rotation.

Adversarial Robustness • Scheduling

"One-Shot" Reduction of Additive Artifacts in Medical Images

no code implementations • 23 Oct 2021 • Yu-Jen Chen, Yen-Jung Chang, Shao-Cheng Wen, Yiyu Shi, Xiaowei Xu, Tsung-Yi Ho, Meiping Huang, Haiyun Yuan, Jian Zhuang

Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients' characteristics, surrounding environment, etc.

Computed Tomography (CT)

Generalizing Adversarial Training to Composite Semantic Perturbations

no code implementations • ICML Workshop AML 2021 • Yun-Yun Tsai, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho

We then propose generalized adversarial training (GAT) to extend model robustness from $\ell_{p}$ norm to composite semantic perturbations, such as Hue, Saturation, Brightness, Contrast, and Rotation.

Scheduling
