Search Results for author: James Bailey

Found 44 papers, 19 papers with code

Synthnet: Learning synthesizers end-to-end

no code implementations ICLR 2019 Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey

Learning synthesizers and generating music in the raw audio domain is a challenging task.

A Survey of Automated Data Augmentation Algorithms for Deep Learning-based Image Classification Tasks

no code implementations 14 Jun 2022 Zihan Yang, Richard O. Sinnott, James Bailey, Qiuhong Ke

To mitigate this problem, a novel direction is to automatically learn the image augmentation policies from the given dataset using Automated Data Augmentation (AutoDA) techniques.

Computer Vision Image Augmentation +1
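The policy-search loop at the heart of AutoDA methods can be sketched as a simple random search. All names below are hypothetical illustrations, not an API from the survey; real AutoDA methods replace the random sampler with reinforcement learning, Bayesian optimization, or gradient-based search:

```python
import random

# Minimal random-search sketch of automated augmentation policy search:
# sample candidate policies (operation, magnitude) and keep the one that
# scores best on a validation objective supplied by the caller.
def search_policy(ops, evaluate, n_trials=20, policy_len=2, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = [(rng.choice(ops), rng.uniform(0.0, 1.0))
                     for _ in range(policy_len)]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```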

On the Convergence and Robustness of Adversarial Training

no code implementations 15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
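For an L-infinity ball of radius eps around the clean input x0, the paper's FOSC value has the closed form eps * ||grad||_1 - <x - x0, grad>; a value near zero indicates a well-converged adversarial example. A minimal plain-Python sketch, assuming the loss gradient at x is supplied:

```python
# Sketch of the FOSC criterion for an L-infinity ball of radius eps:
# c(x) = eps * ||g||_1 - <x - x0, g>, where g is the loss gradient at x.
# c(x) = 0 means x is a first-order stationary point of the inner
# maximization, i.e. a better-converged adversarial example.
def fosc(x, x0, grad, eps):
    """First-Order Stationary Condition value; smaller means better converged."""
    l1 = sum(abs(g) for g in grad)
    inner = sum((xi - x0i) * g for xi, x0i, g in zip(x, x0, grad))
    return eps * l1 - inner
```

At the corner of the ball aligned with the gradient sign, the inner product equals eps * ||g||_1 and the criterion reaches zero.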

De Novo Molecular Generation with Stacked Adversarial Model

no code implementations 24 Oct 2021 Yuansan Liu, James Bailey

A second-stage model then uses these features to learn molecular properties and refine the generation towards more valid molecules.

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness


no code implementations 29 Sep 2021 Xueqi Ma, Pan Li, Qiong Cao, James Bailey, Yue Gao

In FAHGNN, we explore the influence of node features on the expressive power of GNNs, and augment the features by introducing common features and personal features to model additional information.

Node Classification Representation Learning

Semantic-Preserving Adversarial Text Attacks

1 code implementation 23 Aug 2021 Xinghao Yang, Weifeng Liu, James Bailey, Tianqing Zhu, DaCheng Tao, Wei Liu

In this paper, we propose a Bigram and Unigram based adaptive Semantic Preservation Optimization (BU-SPO) method to examine the vulnerability of deep models.

Adversarial Text Semantic Similarity +2

Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions

no code implementations ICML Workshop AML 2021 Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Dual Head Adversarial Training

1 code implementation 21 Apr 2021 Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

no code implementations 18 Jan 2021 Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang

In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.

Image Classification

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

no code implementations 17 Jan 2021 Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Neural Architecture Search via Combinatorial Multi-Armed Bandit

no code implementations 1 Jan 2021 Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey

NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.

Neural Architecture Search

Divide and Learn: A Divide and Conquer Approach for Predict+Optimize

no code implementations 4 Dec 2020 Ali Ugur Guler, Emir Demirovic, Jeffrey Chan, James Bailey, Christopher Leckie, Peter J. Stuckey

We compare our approach with other approaches to the predict+optimize problem and show we can successfully tackle some hard combinatorial problems better than other predict+optimize methods.

Combinatorial Optimization

Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness

no code implementations 28 Sep 2020 Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

3 code implementations ECCV 2020 Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu

A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.

Backdoor Attack Computer Vision
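The injection step common to backdoor attacks of this kind can be sketched generically; the trigger function here is a hypothetical stand-in for the paper's reflection-based pattern:

```python
import random

# Generic data-poisoning sketch: install a backdoor by applying a trigger
# pattern to a small fraction of training inputs and relabeling them with
# the attacker's target class. The trigger function is caller-supplied.
def poison_dataset(dataset, trigger_fn, target_label, rate=0.01, seed=0):
    """Return a copy of (input, label) pairs with `rate` of them poisoned."""
    rng = random.Random(seed)
    poisoned = []
    for x, y in dataset:
        if rng.random() < rate:
            poisoned.append((trigger_fn(x), target_label))
        else:
            poisoned.append((x, y))
    return poisoned
```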

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations ICML 2020 Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.

Ranked #24 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
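The normalization the paper applies to make any loss robust divides the loss at the given label by its sum over all classes; for cross entropy this yields Normalized Cross Entropy (NCE). A minimal sketch (the paper additionally pairs such "active" normalized losses with "passive" losses like MAE or RCE, which is the point of the snippet above):

```python
import math

# Normalized Cross Entropy: CE(q, y) / sum over all classes j of CE(q, j).
# `probs` is a probability vector over K classes, `y` an integer label;
# the result is bounded in [0, 1], which is what confers noise robustness.
def normalized_ce(probs, y, eps=1e-12):
    ces = [-math.log(max(p, eps)) for p in probs]
    return ces[y] / sum(ces)
```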

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

no code implementations ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Clean-Label Backdoor Attacks on Video Recognition Models

1 code implementation CVPR 2020 Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang

We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions.

Backdoor Attack Image Classification +1

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

2 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability.
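The decay-factor idea can be illustrated with a scalar toy model: for a residual block z = x + f(x) the true gradient is 1 + f'(x), and (as this sketch interprets the method) the residual-module term is scaled by a decay factor gamma < 1, so relatively more gradient flows through the skip connections:

```python
# Toy scalar sketch of the skip-gradient idea: backprop a gradient of 1.0
# through a stack of residual blocks z = x + f(x), decaying each
# residual-module gradient f'(x) by gamma. gamma = 1 recovers the true
# gradient; gamma < 1 favours the skip-connection paths.
def sgm_backward(residual_grads, gamma=0.5):
    grad = 1.0
    for fprime in residual_grads:  # f'(x) of each block, deepest first
        grad *= (1.0 + gamma * fprime)
    return grad
```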

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations ICCV 2019 Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes).

Learning with noisy labels
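The paper's Symmetric Cross Entropy adds a noise-robust Reverse Cross Entropy (RCE) term to the usual CE; with a one-hot label and log 0 clamped to a constant A, RCE reduces to -A * (1 - q_y). A sketch with illustrative coefficients (the defaults for alpha, beta, and A follow typical settings but are assumptions here):

```python
import math

# SCE = alpha * CE + beta * RCE for a one-hot label y.
# CE  = -log q_y;  RCE = -sum_k p_k-reversed term, which with log 0 := A
# collapses to -A * (1 - q_y). `probs` is a probability vector.
def symmetric_ce(probs, y, alpha=0.1, beta=1.0, A=-4.0):
    ce = -math.log(max(probs[y], 1e-12))
    rce = -A * (1.0 - probs[y])
    return alpha * ce + beta * rce
```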

Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality

no code implementations 2 May 2019 Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey

In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.

Black-box Adversarial Attacks on Video Recognition Models

no code implementations 10 Apr 2019 Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang

Using three benchmark video datasets, we demonstrate that V-BAD can craft both untargeted and targeted attacks to fool two state-of-the-art deep video recognition models.

Video Recognition

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations 22 Jul 2018 Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Iterative Learning with Open-set Noisy Labels

1 code implementation CVPR 2018 Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the open-set noisy label problem and show that making accurate predictions in this setting is nontrivial.

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation ICLR 2018 Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.

Adversarial Defense
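The LID characterization rests on a maximum-likelihood estimator computed from each example's distances to its k nearest neighbours, LID = -( (1/k) * sum_i ln(r_i / r_k) )^(-1). A plain-Python sketch (assumes all distances are positive):

```python
import math

# MLE estimate of Local Intrinsic Dimensionality from a point's distances
# to its k nearest neighbours: LID = -k / sum_i ln(r_i / r_max).
# The i = k term contributes ln(1) = 0, so it can be included harmlessly.
def lid_mle(dists):
    r = sorted(dists)
    rmax = r[-1]
    return -len(r) / sum(math.log(ri / rmax) for ri in r)
```

For points drawn from a d-dimensional manifold, the estimate concentrates near d; e.g. distances scaling as (i/k)^(1/2) yield an estimate close to 2.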

Online Cluster Validity Indices for Streaming Data

no code implementations 8 Jan 2018 Masud Moshtaghi, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, James Bailey

An important part of cluster analysis is validating the quality of computationally obtained clusters.

Providing Effective Real-time Feedback in Simulation-based Surgical Training

no code implementations 30 Jun 2017 Xingjun Ma, Sudanthi Wijewickrema, Yun Zhou, Shuo Zhou, Stephen O'Leary, James Bailey

Experimental results in a temporal bone surgery simulation show that the proposed method is able to extract highly effective feedback at a high level of efficiency.

Adversarial Generation of Real-time Feedback with Neural Networks for Simulation-based Training

no code implementations 4 Mar 2017 Xingjun Ma, Sudanthi Wijewickrema, Shuo Zhou, Yun Zhou, Zakaria Mhammedi, Stephen O'Leary, James Bailey

It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT.

Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections

1 code implementation ICML 2017 Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, James Bailey

Our contributions are as follows: we first show that constraining the transition matrix to be unitary is a special case of an orthogonal constraint.
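The parametrisation keeps the transition matrix orthogonal by composing Householder reflections H(u) = I - 2 u u^T / ||u||^2, which can be applied to a hidden state without ever forming the matrix. A plain-Python sketch (helper names are illustrative, not the paper's code):

```python
# Apply H(u) x = x - 2 (u . x / u . u) u without forming the matrix.
def householder_apply(u, x):
    scale = 2.0 * sum(ui * xi for ui, xi in zip(u, x)) / sum(ui * ui for ui in u)
    return [xi - scale * ui for ui, xi in zip(u, x)]

# A product of Householder reflections is orthogonal, so applying it
# preserves the norm of the hidden state exactly.
def orthogonal_apply(us, x):
    for u in reversed(us):
        x = householder_apply(u, x)
    return x
```

Each reflection is its own inverse, so gradients can be propagated back through the same operation.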

TopicResponse: A Marriage of Topic Modelling and Rasch Modelling for Automatic Measurement in MOOCs

no code implementations 29 Jul 2016 Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra Milligan

This paper explores the suitability of using automatically discovered topics from MOOC discussion forums for modelling students' academic abilities.

Ground Truth Bias in External Cluster Validity Indices

no code implementations 17 Jun 2016 Yang Lei, James C. Bezdek, Simone Romano, Nguyen Xuan Vinh, Jeffrey Chan, James Bailey

For example, NCinc bias in the RI can be changed to NCdec bias by skewing the distribution of clusters in the ground truth partition.

Adjusting for Chance Clustering Comparison Measures

no code implementations 3 Dec 2015 Simone Romano, Nguyen Xuan Vinh, James Bailey, Karin Verspoor

In particular, the Adjusted Rand Index (ARI) based on pair-counting, and the Adjusted Mutual Information (AMI) based on Shannon information theory are very popular in the clustering community.
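The chance adjustment both indices share takes the form (Index - E[Index]) / (max Index - E[Index]); for the ARI it can be computed directly from the contingency table. A compact sketch:

```python
from math import comb
from collections import Counter

# Adjusted Rand Index between two labelings a and b:
# ARI = (sum_ij - expected) / (max_index - expected), where sum_ij counts
# agreeing pairs via the contingency table and `expected` is the
# chance-level value under a hypergeometric model.
def adjusted_rand_index(a, b):
    n = len(a)
    nij = Counter(zip(a, b))
    sum_ij = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score 1 regardless of label names, and independent partitions score around 0, which is the point of the adjustment.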

MOOCs Meet Measurement Theory: A Topic-Modelling Approach

no code implementations 25 Nov 2015 Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra Milligan, Jeffrey Chan

Such models infer latent skill levels by relating them to individuals' observed responses on a series of items such as quiz questions.

Topic Models

A Framework to Adjust Dependency Measure Estimates for Chance

no code implementations 27 Oct 2015 Simone Romano, Nguyen Xuan Vinh, James Bailey, Karin Verspoor

For example: non-linear dependencies between two continuous variables can be explored with the Maximal Information Coefficient (MIC); and categorical variables that are dependent to the target class are selected using Gini gain in random forests.
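The Gini gain mentioned for the random-forest case is the drop in Gini impurity, 1 - sum_k p_k^2, achieved by splitting on a variable. A minimal sketch:

```python
from collections import Counter

# Gini impurity of a label list: 1 - sum over classes of p_k^2.
def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# Gini gain of splitting `labels` into `groups` (a partition of the
# labels induced by the values of a categorical variable).
def gini_gain(labels, groups):
    n = len(labels)
    return gini(labels) - sum(len(g) / n * gini(g) for g in groups)
```

A perfectly separating split recovers the full parent impurity as gain; an uninformative split yields zero, which is what the chance-adjustment framework above corrects for across variables.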
