1 code implementation • 10 Jan 2025 • Xiaoyun Xu, Zhuoran Liu, Stefanos Koffas, Stjepan Picek
To answer this question, we examine 12 common backdoor attacks that focus on input-space or feature-space stealthiness and 17 diverse representative defenses.
no code implementations • 17 Oct 2024 • Zhuoran Liu, Danpei Zhao, Bo Yuan
To facilitate research and application in this area, we present a new dataset named RescueADI, which contains high-resolution RSIs with annotations for three connected aspects: planning, perception, and recognition.
no code implementations • 2 Aug 2024 • Danpei Zhao, Zhuoran Liu, Bo Yuan
Improving search efficiency is one of the crucial objectives of Neural Architecture Search (NAS).
1 code implementation • 19 Jul 2024 • Bo Yuan, Danpei Zhao, Zhuoran Liu, Wentao Li, Tian Li
In this paper, we propose Continual Panoptic Perception (CPP), a unified continual learning model that leverages multi-task joint learning covering pixel-level classification, instance-level segmentation and image-level perception for universal interpretation in remote sensing images.
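To illustrate the joint-optimization idea, a minimal sketch of a weighted multi-task objective over pixel-level classification, instance-level segmentation, and image-level perception is given below. The head names, loss choices, and weights are illustrative assumptions, not the CPP architecture itself.

```python
import torch
import torch.nn as nn

# Minimal sketch of a multi-task joint objective (loss choices and
# weights are illustrative assumptions, not the CPP model).
class JointPerceptionLoss(nn.Module):
    def __init__(self, w_sem=1.0, w_inst=1.0, w_img=0.5):
        super().__init__()
        self.w_sem, self.w_inst, self.w_img = w_sem, w_inst, w_img
        self.sem_loss = nn.CrossEntropyLoss()    # pixel-level classification
        self.inst_loss = nn.BCEWithLogitsLoss()  # instance mask prediction
        self.img_loss = nn.CrossEntropyLoss()    # image-level perception

    def forward(self, sem_logits, sem_gt, inst_logits, inst_gt, img_logits, img_gt):
        return (self.w_sem * self.sem_loss(sem_logits, sem_gt)
                + self.w_inst * self.inst_loss(inst_logits, inst_gt)
                + self.w_img * self.img_loss(img_logits, img_gt))
```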
no code implementations • 7 Jul 2024 • Talia Gershon, Seetharami Seelam, Brian Belgodere, Milton Bonilla, Lan Hoang, Danny Barnett, I-Hsin Chung, Apoorve Mohan, Ming-Hung Chen, Lixiang Luo, Robert Walkup, Constantinos Evangelinos, Shweta Salaria, Marc Dombrowa, Yoonho Park, Apo Kayi, Liran Schour, Alim Alim, Ali Sydney, Pavlos Maniotis, Laurent Schares, Bernard Metzler, Bengi Karacali-Akyamac, Sophia Wen, Tatsuhiro Chiba, Sunyanan Choochotkaew, Takeshi Yoshimura, Claudia Misale, Tonia Elengikal, Kevin O Connor, Zhuoran Liu, Richard Molina, Lars Schneidenbach, James Caden, Christopher Laibinis, Carlos Fonseca, Vasily Tarasov, Swaminathan Sundararaman, Frank Schmuck, Scott Guthridge, Jeremy Cohn, Marc Eshel, Paul Muench, Runyu Liu, William Pointer, Drew Wyskida, Bob Krull, Ray Rose, Brent Wolfe, William Cornejo, John Walter, Colm Malone, Clifford Perucci, Frank Franco, Nigel Hinds, Bob Calio, Pavel Druyan, Robert Kilduff, John Kienle, Connor McStay, Andrew Figueroa, Matthew Connolly, Edie Fost, Gina Roma, Jake Fonseca, Ido Levy, Michele Payne, Ryan Schenkel, Amir Malki, Lion Schneider, Aniruddha Narkhede, Shekeba Moshref, Alexandra Kisin, Olga Dodin, Bill Rippon, Henry Wrieth, John Ganci, Johnny Colino, Donna Habeger-Rose, Rakesh Pandey, Aditya Gidh, Dennis Patterson, Samsuddin Salmani, Rambilas Varma, Rumana Rumana, Shubham Sharma, Aditya Gaur, Mayank Mishra, Rameswar Panda, Aditya Prasad, Matt Stallone, Gaoyuan Zhang, Yikang Shen, David Cox, Ruchir Puri, Dakshi Agrawal, Drew Thorstensen, Joel Belog, Brent Tang, Saurabh Kumar Gupta, Amitabha Biswas, Anup Maheshwari, Eran Gampel, Jason Van Patten, Matthew Runion, Sai Kaki, Yigal Bogin, Brian Reitz, Steve Pritko, Shahan Najam, Surya Nambala, Radhika Chirra, Rick Welp, Frank DiMitri, Felipe Telles, Amilcar Arvelo, King Chu, Ed Seminaro, Andrew Schram, Felix Eickhoff, William Hanson, Eric Mckeever, Dinakaran Joseph, Piyush Chaudhary, Piyush Shivam, Puneet Chaudhary, Wesley Jones, Robert Guthrie, Chris Bostic, Rezaul Islam, Steve Duersch, Wayne Sawdon, John Lewars, Matthew Klos, Michael Spriggs, Bill McMillan, George Gao, Ashish Kamra, Gaurav Singh, Marc Curry, Tushar Katarki, Joe Talerico, Zenghui Shi, Sai Sindhur Malleni, Erwan Gallen
This infrastructure includes (1) Vela: an AI-optimized supercomputing capability directly integrated into the IBM Cloud, delivering scalable, dynamic, multi-tenant and geographically distributed infrastructure for large-scale model training and other AI workflow steps and (2) Blue Vela: a large-scale, purpose-built, on-premises hosting environment that is optimized to support our largest and most ambitious AI model training tasks.
1 code implementation • 30 May 2024 • Xiaoyun Xu, Zhuoran Liu, Stefanos Koffas, Shujian Yu, Stjepan Picek
Backdoor attacks on deep learning represent a recent threat that has gained significant attention in the research community.
no code implementations • 6 Apr 2024 • Danpei Zhao, Bo Yuan, Ziqiang Chen, Tian Li, Zhuoran Liu, Wentao Li, Yue Gao
Experimental results on FineGrip demonstrate the feasibility of the panoptic perception task and the beneficial effect of multi-task joint optimization on individual tasks.
1 code implementation • 8 Dec 2023 • Xiaoyun Xu, Shujian Yu, Zhuoran Liu, Stjepan Picek
Vision Transformers (ViTs) achieve excellent performance in various tasks, but they are also vulnerable to adversarial attacks.
1 code implementation • 30 Jun 2023 • Loes Van Bemmel, Zhuoran Liu, Nik Vaessen, Martha Larson
Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network.
1 code implementation • 31 Jan 2023 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
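A minimal sketch of how such a poison is applied is shown below: a precomputed perturbation is clamped to an L-infinity budget and added to each training image. The function name and the budget of 8/255 are assumptions for illustration, not the paper's specific poison-generation method.

```python
import torch

def apply_availability_poison(images, deltas, epsilon=8 / 255):
    """Add small, bounded perturbations to training images.

    `deltas` stands in for perturbations produced by some PAP method;
    here they are simply clamped to an L-infinity budget and added.
    """
    deltas = torch.clamp(deltas, -epsilon, epsilon)
    return torch.clamp(images + deltas, 0.0, 1.0)
```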
1 code implementation • 2 Nov 2022 • Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson
We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.
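The sketch below shows one way a generator can be trained to emit sample-dependent, error-minimizing perturbations against a surrogate classifier; the generator architecture, bound, and optimizer settings are assumptions, not ShortcutGen's exact training recipe.

```python
import torch
import torch.nn.functional as F

def train_poison_generator(generator, surrogate, loader, epochs=10, eps=8 / 255, lr=1e-3):
    """Sketch: learn a generator whose output, added to each image,
    minimizes the surrogate's loss, i.e. an error-minimizing shortcut."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    surrogate.eval()
    for _ in range(epochs):
        for x, y in loader:
            delta = eps * torch.tanh(generator(x))        # sample-dependent, bounded
            logits = surrogate(torch.clamp(x + delta, 0, 1))
            loss = F.cross_entropy(logits, y)             # minimize -> easy shortcut
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator
```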
1 code implementation • 16 Sep 2022 • Zhuoran Liu, Leqi Zou, Xuan Zou, Caihua Wang, Biao Zhang, Da Tang, Bolin Zhu, Yijie Zhu, Peng Wu, Ke Wang, Youlong Cheng
In this paper, we present Monolith, a system tailored for online training.
1 code implementation • 30 May 2022 • Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy
The primary approach to identifying vulnerable regions involves investigating realizable adversarial examples (AEs), but generating such feasible apps is challenging.
1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
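For context, a common way to craft such error-minimizing perturbations is a PGD-style inner loop that minimizes (rather than maximizes) the classifier's loss; the sketch below follows that generic formulation, with the step size, budget, and iteration count as assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def error_minimizing_perturbation(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """PGD-style sketch of crafting an unlearnable example: perturb x so the
    model's loss on (x + delta, y) is *minimized*, creating a learning shortcut."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta.detach(), 0, 1)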
4 code implementations • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
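A minimal sketch of such a logit loss for targeted attacks is given below: instead of cross-entropy, the objective simply maximizes the logit of the target class (the helper name and batching convention are assumptions).

```python
import torch

def logit_loss(logits, target_class):
    """Targeted-attack objective: maximize the target-class logit,
    i.e. minimize its negative, instead of using cross-entropy."""
    return -logits.gather(1, target_class.unsqueeze(1)).mean()
```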
1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.
1 code implementation • 2 Jun 2020 • Zhuoran Liu, Martha Larson
Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start.
1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.
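The sketch below illustrates the general idea with a simple per-channel gain/bias "filter" optimized so the filtered image is misclassified; the actual filter parameterization in the paper is richer, so this is only an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def adversarial_color_filter(model, x, y, steps=30, lr=0.05):
    """Sketch: optimize a per-channel gain/bias color adjustment so the
    enhanced image is pushed away from its true label."""
    gain = torch.ones(1, 3, 1, 1, requires_grad=True)
    bias = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([gain, bias], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(gain * x + bias, 0, 1)
        loss = -F.cross_entropy(model(x_adv), y)  # maximize classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(gain.detach() * x + bias.detach(), 0, 1)
```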
2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
The success of image perturbations that are designed to fool image classifiers is assessed in terms of both adversarial effect and visual imperceptibility.
no code implementations • SEMEVAL 2019 • Zhuoran Liu, Shivali Goel, Mukund Yelahanka Raghuprasad, Smaranda Muresan
The paper presents Columbia team's participation in the SemEval 2019 Shared Task 7: RumourEval 2019.
no code implementations • 26 May 2019 • Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jimenez, Bhiksha Raj, Rita Singh
Recent breakthroughs in the field of deep learning have led to advancements in a broad spectrum of tasks in computer vision, audio processing, natural language processing and other areas.
1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.
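The sketch below shows one generic way such a query could be crafted: push the image's feature embedding away from its original position under a small L-infinity budget so that nearest-neighbor retrieval degrades. The similarity measure, budget, and step schedule are assumptions, not the paper's specific attack.

```python
import torch
import torch.nn.functional as F

def adversarial_query(feature_extractor, x, eps=8 / 255, alpha=2 / 255, steps=20):
    """Sketch: minimize similarity between the perturbed query's embedding
    and the original embedding, under a small L-infinity budget."""
    with torch.no_grad():
        feat_orig = feature_extractor(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feat = feature_extractor(torch.clamp(x + delta, 0, 1))
        loss = F.cosine_similarity(feat, feat_orig, dim=-1).mean()  # minimize similarity
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta.detach(), 0, 1)
```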
no code implementations • 28 Nov 2016 • Zhuoran Liu, Yang Liu
Identifying and correcting grammatical errors in text written by non-native writers has received increasing attention in recent years.