Search Results for author: Zhuoran Liu

Found 23 papers, 16 papers with code

Towards Backdoor Stealthiness in Model Parameter Space

1 code implementation · 10 Jan 2025 · Xiaoyun Xu, Zhuoran Liu, Stefanos Koffas, Stjepan Picek

To answer this question, we examine 12 common backdoor attacks that focus on input-space or feature-space stealthiness and 17 diverse representative defenses.

RescueADI: Adaptive Disaster Interpretation in Remote Sensing Images with Autonomous Agents

no code implementations · 17 Oct 2024 · Zhuoran Liu, Danpei Zhao, Bo Yuan

To facilitate research and application in this area, we present a new dataset named RescueADI, which contains high-resolution RSIs with annotations for three connected aspects: planning, perception, and recognition.

Question Answering · Task Planning +1

TopoNAS: Boosting Search Efficiency of Gradient-based NAS via Topological Simplification

no code implementations · 2 Aug 2024 · Danpei Zhao, Zhuoran Liu, Bo Yuan

Improving search efficiency serves as one of the crucial objectives of Neural Architecture Search (NAS).

Neural Architecture Search

Continual Panoptic Perception: Towards Multi-modal Incremental Interpretation of Remote Sensing Images

1 code implementation · 19 Jul 2024 · Bo Yuan, Danpei Zhao, Zhuoran Liu, Wentao Li, Tian Li

In this paper, we propose Continual Panoptic Perception (CPP), a unified continual learning model that leverages multi-task joint learning covering pixel-level classification, instance-level segmentation, and image-level perception for universal interpretation in remote sensing images (see the sketch below).

Caption Generation · Continual Learning +1
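
A minimal sketch of the multi-task joint setup described above. The shared encoder, head shapes, and loss weights are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class CPPSketch(nn.Module):
    """Shared encoder feeding three heads: pixel-level classification,
    instance-level segmentation, and image-level perception."""
    def __init__(self, num_classes=16, embed=64):
        super().__init__()
        self.backbone = nn.Conv2d(3, embed, 3, padding=1)   # stand-in encoder
        self.pixel_head = nn.Conv2d(embed, num_classes, 1)  # per-pixel class logits
        self.inst_head = nn.Conv2d(embed, num_classes, 1)   # instance logits (placeholder)
        self.percep_head = nn.Linear(embed, 512)            # image-level embedding

    def forward(self, x):                                   # x: (B, 3, H, W)
        feat = self.backbone(x)
        pooled = feat.mean(dim=(2, 3))                      # global average pooling
        return self.pixel_head(feat), self.inst_head(feat), self.percep_head(pooled)

def joint_loss(pix_loss, inst_loss, percep_loss, w=(1.0, 1.0, 0.5)):
    # Joint optimization: a weighted sum lets one backward pass train all heads.
    return w[0] * pix_loss + w[1] * inst_loss + w[2] * percep_loss
```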

The infrastructure powering IBM's Gen AI model development

no code implementations · 7 Jul 2024 · Talia Gershon, Seetharami Seelam, Brian Belgodere, Milton Bonilla, Lan Hoang, Danny Barnett, I-Hsin Chung, Apoorve Mohan, Ming-Hung Chen, Lixiang Luo, Robert Walkup, Constantinos Evangelinos, Shweta Salaria, Marc Dombrowa, Yoonho Park, Apo Kayi, Liran Schour, Alim Alim, Ali Sydney, Pavlos Maniotis, Laurent Schares, Bernard Metzler, Bengi Karacali-Akyamac, Sophia Wen, Tatsuhiro Chiba, Sunyanan Choochotkaew, Takeshi Yoshimura, Claudia Misale, Tonia Elengikal, Kevin O Connor, Zhuoran Liu, Richard Molina, Lars Schneidenbach, James Caden, Christopher Laibinis, Carlos Fonseca, Vasily Tarasov, Swaminathan Sundararaman, Frank Schmuck, Scott Guthridge, Jeremy Cohn, Marc Eshel, Paul Muench, Runyu Liu, William Pointer, Drew Wyskida, Bob Krull, Ray Rose, Brent Wolfe, William Cornejo, John Walter, Colm Malone, Clifford Perucci, Frank Franco, Nigel Hinds, Bob Calio, Pavel Druyan, Robert Kilduff, John Kienle, Connor McStay, Andrew Figueroa, Matthew Connolly, Edie Fost, Gina Roma, Jake Fonseca, Ido Levy, Michele Payne, Ryan Schenkel, Amir Malki, Lion Schneider, Aniruddha Narkhede, Shekeba Moshref, Alexandra Kisin, Olga Dodin, Bill Rippon, Henry Wrieth, John Ganci, Johnny Colino, Donna Habeger-Rose, Rakesh Pandey, Aditya Gidh, Dennis Patterson, Samsuddin Salmani, Rambilas Varma, Rumana Rumana, Shubham Sharma, Aditya Gaur, Mayank Mishra, Rameswar Panda, Aditya Prasad, Matt Stallone, Gaoyuan Zhang, Yikang Shen, David Cox, Ruchir Puri, Dakshi Agrawal, Drew Thorstensen, Joel Belog, Brent Tang, Saurabh Kumar Gupta, Amitabha Biswas, Anup Maheshwari, Eran Gampel, Jason Van Patten, Matthew Runion, Sai Kaki, Yigal Bogin, Brian Reitz, Steve Pritko, Shahan Najam, Surya Nambala, Radhika Chirra, Rick Welp, Frank DiMitri, Felipe Telles, Amilcar Arvelo, King Chu, Ed Seminaro, Andrew Schram, Felix Eickhoff, William Hanson, Eric Mckeever, Dinakaran Joseph, Piyush Chaudhary, Piyush Shivam, Puneet Chaudhary, Wesley Jones, Robert Guthrie, Chris Bostic, Rezaul Islam, Steve Duersch, Wayne Sawdon, John Lewars, Matthew Klos, Michael Spriggs, Bill McMillan, George Gao, Ashish Kamra, Gaurav Singh, Marc Curry, Tushar Katarki, Joe Talerico, Zenghui Shi, Sai Sindhur Malleni, Erwan Gallen

This infrastructure includes (1) Vela: an AI-optimized supercomputing capability directly integrated into the IBM Cloud, delivering scalable, dynamic, multi-tenant and geographically distributed infrastructure for large-scale model training and other AI workflow steps and (2) Blue Vela: a large-scale, purpose-built, on-premises hosting environment that is optimized to support our largest and most ambitious AI model training tasks.

BAN: Detecting Backdoors Activated by Adversarial Neuron Noise

1 code implementation · 30 May 2024 · Xiaoyun Xu, Zhuoran Liu, Stefanos Koffas, Shujian Yu, Stjepan Picek

Backdoor attacks on deep learning represent a recent threat that has gained significant attention in the research community.

Panoptic Perception: A Novel Task and Fine-grained Dataset for Universal Remote Sensing Image Interpretation

no code implementations · 6 Apr 2024 · Danpei Zhao, Bo Yuan, Ziqiang Chen, Tian Li, Zhuoran Liu, Wentao Li, Yue Gao

Experimental results on FineGrip demonstrate the feasibility of the panoptic perception task and the beneficial effect of multi-task joint optimization on individual tasks.

Image Captioning · Instance Segmentation +4

MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness

1 code implementation · 8 Dec 2023 · Xiaoyun Xu, Shujian Yu, Zhuoran Liu, Stjepan Picek

Vision Transformers (ViTs) achieve excellent performance in various tasks, but they are also vulnerable to adversarial attacks.

Adversarial Robustness

Beyond Neural-on-Neural Approaches to Speaker Gender Protection

1 code implementation · 30 Jun 2023 · Loes Van Bemmel, Zhuoran Liu, Nik Vaessen, Martha Larson

Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network.

Attribute

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

1 code implementation · 31 Jan 2023 · Zhuoran Liu, Zhengyu Zhao, Martha Larson

Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
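
The countermeasure named in the title is simple enough to sketch: recompress (and optionally grayscale) each training image so the small poisoning perturbations do not survive. A minimal Pillow-based sketch; the JPEG quality and grayscale settings are illustrative choices, not the paper's exact configuration:

```python
from io import BytesIO
from PIL import Image

def squeeze_image(img: Image.Image, jpeg_quality: int = 10,
                  to_grayscale: bool = True) -> Image.Image:
    """Re-encode at low JPEG quality (optionally via grayscale) so that
    small, structured poisoning perturbations are destroyed."""
    if to_grayscale:
        img = img.convert("L").convert("RGB")   # drop color, keep 3 channels
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Usage: apply as a preprocessing step before standard training augmentation.
# clean = squeeze_image(Image.open("poisoned_sample.png").convert("RGB"))
```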

Generative Poisoning Using Random Discriminators

1 code implementation · 2 Nov 2022 · Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson

We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator (see the sketch below).

Data Poisoning
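
A hedged sketch of the objective this summary describes: a generator is trained so that its perturbations minimize the loss of a frozen, randomly initialized discriminator, producing error-minimizing "shortcut" noise. The architectures, input shapes, and budget below are illustrative assumptions:

```python
import torch
import torch.nn as nn

eps = 8 / 255                                   # L-inf perturbation budget (assumed)
disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # random discriminator
for p in disc.parameters():
    p.requires_grad_(False)                     # stays random and frozen

gen = nn.Sequential(                            # stand-in perturbation generator
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def poison_step(x, y):
    delta = eps * gen(x)                        # sample-dependent, bounded in [-eps, eps]
    loss = ce(disc(x + delta), y)               # minimize the error -> error-minimizing noise
    opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta).clamp(0, 1).detach()
```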

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples

1 code implementation · 25 Nov 2021 · Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson

Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
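
The perturbations referred to here are typically crafted by *minimizing* the training loss, i.e. the opposite of an adversarial attack. A hedged PGD-style sketch; the step size, budget, and iteration count are illustrative assumptions:

```python
import torch

def error_minimizing_delta(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner loop of the min-min ULE recipe: find a small delta that makes
    (x + delta, y) trivially easy for the current model."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: drive the loss toward zero
            delta.clamp_(-eps, eps)             # keep the change imperceptible
        delta.grad.zero_()
    model.zero_grad()                           # discard grads accumulated on the surrogate
    return delta.detach()
```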

On Success and Simplicity: A Second Look at Transferable Targeted Attacks

4 code implementations · NeurIPS 2021 · Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
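
The loss in question replaces targeted cross-entropy with the raw target logit. A minimal sketch, assuming `model` returns raw (pre-softmax) logits; the attack loop shown in comments is the standard iterative baseline, not the paper's exact setup:

```python
import torch

def targeted_logit_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Negative target logit: minimizing this pushes the target logit up directly,
    # avoiding the softmax saturation that weakens cross-entropy gradients.
    return -logits.gather(1, target.unsqueeze(1)).squeeze(1).mean()

# Inside a standard iterative targeted attack one would use, e.g.:
#   loss = targeted_logit_loss(model(x_adv), target_class)
#   loss.backward()
#   x_adv = (x_adv - alpha * x_adv.grad.sign()).clamp(0, 1)  # descend on the loss
```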

Adversarial Image Color Transformations in Explicit Color Filter Space

1 code implementation · 12 Nov 2020 · Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, our color filter space is explicitly specified, so that we can provide a systematic analysis of model robustness against adversarial color transformations from both the attack and defense perspectives (see the sketch below).

Adversarial Robustness
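
A hedged sketch of what an explicit color filter space can look like: a handful of per-channel curve parameters define a smooth, global color transformation that is optimized against a classifier. The quadratic curve form and parameter count are illustrative assumptions, not the paper's exact filter:

```python
import torch

class ColorFilter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # one (gain, curvature) pair per RGB channel
        self.theta = torch.nn.Parameter(torch.zeros(3, 2))

    def forward(self, x):                      # x: (B, 3, H, W) in [0, 1]
        a = self.theta[:, 0].view(1, 3, 1, 1)  # linear gain offset
        b = self.theta[:, 1].view(1, 3, 1, 1)  # quadratic curvature
        return ((1 + a) * x + b * x * (1 - x)).clamp(0, 1)

# Optimizing theta (e.g., with Adam) against the classifier's loss yields a
# human-plausible "filtered" image rather than pixel-wise noise.
```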

Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start

1 code implementation · 2 Jun 2020 · Zhuoran Liu, Martha Larson

Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start.

Recommendation Systems

Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter

1 code implementation · 3 Feb 2020 · Zhengyu Zhao, Zhuoran Liu, Martha Larson

We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.

Image Enhancement

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

2 code implementations · CVPR 2020 · Zhengyu Zhao, Zhuoran Liu, Martha Larson

The success of image perturbations designed to fool image classifiers is assessed in terms of both adversarial effect and visual imperceptibility (see the sketch below).

Image Classification
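
Perceptual color distance here means CIEDE2000 rather than an Lp norm. A minimal scoring sketch using scikit-image, assuming float RGB arrays in [0, 1]; aggregating by the mean is an illustrative choice:

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def perceptual_color_distance(img: np.ndarray, img_adv: np.ndarray) -> float:
    """Mean per-pixel CIEDE2000 distance between an image and its adversarial
    counterpart; low values mean the change is hard for humans to see."""
    return float(deltaE_ciede2000(rgb2lab(img), rgb2lab(img_adv)).mean())
```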

Non-Determinism in Neural Networks for Adversarial Robustness

no code implementations · 26 May 2019 · Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jimenez, Bhiksha Raj, Rita Singh

Recent breakthroughs in the field of deep learning have led to advancements in a broad spectrum of tasks in computer vision, audio processing, natural language processing and other areas.

Adversarial Robustness

Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval

1 code implementation · 29 Jan 2019 · Zhuoran Liu, Zhengyu Zhao, Martha Larson

An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.

Blocking · Content-Based Image Retrieval +1

Exploiting Unlabeled Data for Neural Grammatical Error Detection

no code implementations · 28 Nov 2016 · Zhuoran Liu, Yang Liu

Identifying and correcting grammatical errors in text written by non-native writers has received increasing attention in recent years.

Binary Classification · General Classification +1
