Search Results for author: James Bailey

Found 57 papers, 27 papers with code

SynthNet: Learning synthesizers end-to-end

no code implementations • ICLR 2019 • Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey

Learning synthesizers and generating music in the raw audio domain is a challenging task.

Mitigating Challenges of the Space Environment for Onboard Artificial Intelligence: Design Overview of the Imaging Payload on SpIRIT

no code implementations • 12 Apr 2024 • Miguel Ortiz del Castillo, Jonathan Morgan, Jack McRobbie, Clint Therakam, Zaher Joukhadar, Robert Mearns, Simon Barraclough, Richard Sinnott, Andrew Woods, Chris Bayliss, Kris Ehinger, Ben Rubinstein, James Bailey, Airlie Chapman, Michele Trenti

Artificial intelligence (AI) and autonomous edge computing in space are emerging areas of interest to augment capabilities of nanosatellites, where modern sensors generate orders of magnitude more data than can typically be transmitted to mission control.

Edge-computing • Image Compression

Time Series Representation Learning with Supervised Contrastive Temporal Transformer

no code implementations • 16 Mar 2024 • Yuansan Liu, Sudanthi Wijewickrema, Christofer Bester, Stephen O'Leary, James Bailey

We show that the model performs with high reliability and efficiency on the online change point detection (CPD) problem (~98% and ~97% area under the precision-recall curve, respectively).

Change Point Detection • Representation Learning +2

Whose Side Are You On? Investigating the Political Stance of Large Language Models

1 code implementation • 15 Mar 2024 • Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang

Large Language Models (LLMs) have gained significant popularity for their application in various everyday tasks such as text generation, summarization, and information retrieval.

Fairness • Information Retrieval +1

Unlearnable Examples For Time Series

no code implementations • 3 Feb 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

In this work, we introduce the first UE generation method to protect time series data from unauthorized training by deep learning models.

Time Series

LDReg: Local Dimensionality Regularized Self-Supervised Learning

1 code implementation • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey

Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.

Self-Supervised Learning

Dimensionality-Aware Outlier Detection: Theoretical and Experimental Analysis

1 code implementation • 10 Jan 2024 • Alastair Anderberg, James Bailey, Ricardo J. G. B. Campello, Michael E. Houle, Henrique O. Marques, Miloš Radovanović, Arthur Zimek

We present a nonparametric method for outlier detection that takes full account of local variations in intrinsic dimensionality within the dataset.

Outlier Detection

End-to-End Anti-Backdoor Learning on Images and Time Series

no code implementations • 6 Jan 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey

Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security.

Image Classification • Time Series

Time-Transformer: Integrating Local and Global Features for Better Time Series Generation

1 code implementation • 18 Dec 2023 • Yuansan Liu, Sudanthi Wijewickrema, Ang Li, Christofer Bester, Stephen O'Leary, James Bailey

Experimental results demonstrate that our model can outperform existing state-of-the-art models in 5 out of 6 datasets, specifically on those with data containing both global and local properties.

Data Augmentation • Time Series +1

PELP: Pioneer Event Log Prediction Using Sequence-to-Sequence Neural Networks

no code implementations • 15 Dec 2023 • Wenjun Zhou, Artem Polyvyanyy, James Bailey

Process mining, a data-driven approach for analyzing, visualizing, and improving business processes using event logs, has emerged as a powerful technique in the field of business process management.

Management

Distilling Cognitive Backdoor Patterns within an Image

1 code implementation • 26 Jan 2023 • Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey

We conduct extensive experiments to show that Cognitive Distillation (CD) can robustly detect a wide range of advanced backdoor attacks.

Backdoor Attacks on Time Series: A Generative Approach

1 code implementation • 15 Nov 2022 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

We find that, compared to images, it can be more challenging to achieve the two goals on time series.

Time Series • Time Series Analysis

Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition

no code implementations • 21 Sep 2022 • Anqi Zhu, Qiuhong Ke, Mingming Gong, James Bailey

Skeleton-based action recognition has received increasing attention because skeleton representations reduce the amount of training data by eliminating visual information irrelevant to actions.

Action Recognition • Meta-Learning +2

A Survey of Automated Data Augmentation Algorithms for Deep Learning-based Image Classification Tasks

no code implementations • 14 Jun 2022 • Zihan Yang, Richard O. Sinnott, James Bailey, Qiuhong Ke

To mitigate this problem, a novel direction is to automatically learn image augmentation policies from the given dataset using Automated Data Augmentation (AutoDA) techniques.

Image Augmentation • Image Classification

On the Convergence and Robustness of Adversarial Training

no code implementations • 15 Dec 2021 • Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
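Although the snippet does not give the definition, the general first-order gap underlying such a criterion can be sketched as follows (the symbols here are generic and may differ from the paper's exact formulation): for an adversarial example $x$ in the feasible set $\mathcal{X}$ (e.g., an $\epsilon$-ball around the clean input) with inner objective $f$,

$$c(x) = \max_{x' \in \mathcal{X}} \, \langle x' - x, \nabla_x f(x) \rangle \geq 0,$$

and $c(x) = 0$ exactly when $x$ is a first-order stationary point of the inner maximization, so smaller values indicate better-converged adversarial examples.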

De Novo Molecular Generation with Stacked Adversarial Model

no code implementations • 24 Oct 2021 • Yuansan Liu, James Bailey

A second-stage model then takes these features to learn properties of the molecules and refine them into more valid molecules.


Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness

Feature-Augmented Hypergraph Neural Networks

no code implementations • 29 Sep 2021 • Xueqi Ma, Pan Li, Qiong Cao, James Bailey, Yue Gao

In FAHGNN, we explore the influence of node features on the expressive power of GNNs and augment the features by introducing common features and personal features to model information.

Node Classification • Representation Learning

Semantic-Preserving Adversarial Text Attacks

2 code implementations • 23 Aug 2021 • Xinghao Yang, Weifeng Liu, James Bailey, Dacheng Tao, Wei Liu

In this paper, we propose a Bigram and Unigram based adaptive Semantic Preservation Optimization (BU-SPO) method to examine the vulnerability of deep models.

Adversarial Text • Semantic Similarity +4

Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions

no code implementations • ICML Workshop AML 2021 • Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Dual Head Adversarial Training

1 code implementation • 21 Apr 2021 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

no code implementations • 18 Jan 2021 • Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang

In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.

Image Classification

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

no code implementations • 17 Jan 2021 • Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Neural Architecture Search via Combinatorial Multi-Armed Bandit

no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey

NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.

Evolutionary Algorithms • Neural Architecture Search

Divide and Learn: A Divide and Conquer Approach for Predict+Optimize

no code implementations • 4 Dec 2020 • Ali Ugur Guler, Emir Demirovic, Jeffrey Chan, James Bailey, Christopher Leckie, Peter J. Stuckey

We compare our approach with other approaches to the predict+optimize problem and show we can successfully tackle some hard combinatorial problems better than other predict+optimize methods.

Combinatorial Optimization

Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness

no code implementations • 28 Sep 2020 • Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

3 code implementations • ECCV 2020 • Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu

A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.

Backdoor Attack

Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness

1 code implementation • 24 Jun 2020 • Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations • ICML 2020 • Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.

Ranked #30 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
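The normalization in this paper's title can be sketched as follows: a per-sample loss is divided by its sum over all candidate labels, which bounds it and (as the paper argues) confers noise robustness, even though robustness alone is not sufficient for accuracy. A minimal NumPy sketch of normalized cross entropy under that reading (function names are illustrative, not from the paper's released code):

```python
import numpy as np

def cross_entropy(probs, label):
    """Cross entropy of one prediction against a (possibly noisy) label."""
    return -np.log(np.clip(probs[label], 1e-12, 1.0))

def normalized_cross_entropy(probs, label):
    """Divide the CE at the given label by the sum of CE over all labels,
    bounding the loss in [0, 1] regardless of the prediction."""
    losses = np.array([cross_entropy(probs, k) for k in range(len(probs))])
    return losses[label] / losses.sum()

probs = np.array([0.7, 0.2, 0.1])
# The normalized losses over the three labels sum to 1, and the confident
# correct label receives the smallest share.
```

Because the normalized values always sum to one, no single mislabeled example can dominate the loss, which is the intuition behind the robustness claim.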

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

1 code implementation • ICLR 2020 • Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Clean-Label Backdoor Attacks on Video Recognition Models

1 code implementation • CVPR 2020 • Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang

We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a setting where backdoor attacks are likely to be challenged by four strict conditions.

Backdoor Attack • Backdoor Defense +2

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

3 code implementations • ICLR 2020 • Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability.

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations • ICCV 2019 • Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).

Learning with noisy labels
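The symmetric loss named in this title combines standard cross entropy with a reverse term in which the roles of prediction and label are swapped. A minimal NumPy sketch of that idea (the clamp value and the alpha/beta weights here are illustrative hyperparameters, not the paper's tuned settings):

```python
import numpy as np

A = -4.0  # value used in place of log(0) in the reverse term (a hyperparameter)

def sce_loss(probs, label, alpha=0.1, beta=1.0):
    """Symmetric cross entropy: standard CE plus a reverse CE term.

    alpha and beta balance the two terms; the defaults are illustrative only.
    """
    onehot = np.eye(len(probs))[label]
    # Standard cross entropy H(q, p), with q the one-hot label.
    ce = -np.sum(onehot * np.log(np.clip(probs, 1e-12, 1.0)))
    # Reverse cross entropy H(p, q): log of the one-hot label is 0 on the
    # labeled class and clamped to A elsewhere.
    log_onehot = np.where(onehot > 0, 0.0, A)
    rce = -np.sum(probs * log_onehot)
    return alpha * ce + beta * rce
```

For a perfectly confident correct prediction both terms vanish, while on a mislabeled example the bounded reverse term limits how hard the model is pushed toward the wrong class.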

Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality

no code implementations • 2 May 2019 • Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey

In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.

Black-box Adversarial Attacks on Video Recognition Models

no code implementations • 10 Apr 2019 • Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang

Using three benchmark video datasets, we demonstrate that V-BAD can craft both untargeted and targeted attacks to fool two state-of-the-art deep video recognition models.

Video Recognition

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations • 22 Jul 2018 • Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Iterative Learning with Open-set Noisy Labels

1 code implementation • CVPR 2018 • Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial to make accurate predictions in this setting.

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation • ICLR 2018 • Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.

Adversarial Defense

Online Cluster Validity Indices for Streaming Data

no code implementations • 8 Jan 2018 • Masud Moshtaghi, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, James Bailey

An important part of cluster analysis is validating the quality of computationally obtained clusters.

Clustering

Providing Effective Real-time Feedback in Simulation-based Surgical Training

no code implementations • 30 Jun 2017 • Xingjun Ma, Sudanthi Wijewickrema, Yun Zhou, Shuo Zhou, Stephen O'Leary, James Bailey

Experimental results in a temporal bone surgery simulation show that the proposed method is able to extract highly effective feedback at a high level of efficiency.

Adversarial Generation of Real-time Feedback with Neural Networks for Simulation-based Training

no code implementations • 4 Mar 2017 • Xingjun Ma, Sudanthi Wijewickrema, Shuo Zhou, Yun Zhou, Zakaria Mhammedi, Stephen O'Leary, James Bailey

It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT.

Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections

1 code implementation • ICML 2017 • Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, James Bailey

Our contributions are as follows: we first show that constraining the transition matrix to be unitary is a special case of an orthogonal constraint.
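The parametrisation named in the title can be sketched as follows: an orthogonal matrix is built as a product of Householder reflections, each defined by a single vector, so orthogonality holds by construction. A minimal NumPy sketch of that construction (the paper's actual RNN update and training procedure are more involved):

```python
import numpy as np

def householder(v):
    """Householder reflection H = I - 2 v v^T / (v^T v); every such H is orthogonal."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def orthogonal_from_reflections(vectors):
    """Compose k reflections into one orthogonal matrix, usable as an RNN
    transition matrix: its spectral norm is exactly 1, so the recurrence
    neither explodes nor vanishes gradients on its own."""
    W = np.eye(len(vectors[0]))
    for v in vectors:
        W = W @ householder(v)
    return W

rng = np.random.default_rng(0)
W = orthogonal_from_reflections([rng.standard_normal(4) for _ in range(4)])
# W @ W.T equals the identity up to floating-point error, and W preserves
# the norm of any hidden state it is applied to.
```

Parametrising by the reflection vectors rather than the matrix entries means gradient updates stay on the orthogonal group without any projection step.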

TopicResponse: A Marriage of Topic Modelling and Rasch Modelling for Automatic Measurement in MOOCs

no code implementations • 29 Jul 2016 • Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra Milligan

This paper explores the suitability of using automatically discovered topics from MOOC discussion forums for modelling students' academic abilities.

Ground Truth Bias in External Cluster Validity Indices

no code implementations • 17 Jun 2016 • Yang Lei, James C. Bezdek, Simone Romano, Nguyen Xuan Vinh, Jeffrey Chan, James Bailey

For example, NCinc bias in the RI can be changed to NCdec bias by skewing the distribution of clusters in the ground truth partition.

Vocal Bursts Type Prediction

Adjusting for Chance Clustering Comparison Measures

no code implementations • 3 Dec 2015 • Simone Romano, Nguyen Xuan Vinh, James Bailey, Karin Verspoor

In particular, the Adjusted Rand Index (ARI) based on pair-counting, and the Adjusted Mutual Information (AMI) based on Shannon information theory are very popular in the clustering community.

Clustering
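The chance adjustment behind the ARI mentioned in this snippet can be sketched with pair counting: the raw index is shifted and scaled by its expected value under random labelings, ARI = (Index − Expected) / (Max − Expected). A minimal stdlib sketch (a textbook formulation, not code from this paper):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Pair-counting Rand Index adjusted for chance:
    ARI = (Index - ExpectedIndex) / (MaxIndex - ExpectedIndex)."""
    n = len(labels_a)
    # Pairs of items placed together in both partitions.
    contingency = Counter(zip(labels_a, labels_b))
    index = sum(comb(c, 2) for c in contingency.values())
    # Pairs placed together in each partition separately.
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

Identical partitions score 1.0 (even under relabeling), while independent random partitions score about 0 in expectation, which is exactly the adjustment for chance that this line of work generalizes to other measures such as the AMI.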

MOOCs Meet Measurement Theory: A Topic-Modelling Approach

no code implementations • 25 Nov 2015 • Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra Milligan, Jeffrey Chan

Such models infer latent skill levels by relating them to individuals' observed responses on a series of items such as quiz questions.

Topic Models

A Framework to Adjust Dependency Measure Estimates for Chance

no code implementations • 27 Oct 2015 • Simone Romano, Nguyen Xuan Vinh, James Bailey, Karin Verspoor

For example: non-linear dependencies between two continuous variables can be explored with the Maximal Information Coefficient (MIC); and categorical variables that are dependent to the target class are selected using Gini gain in random forests.
