Search Results for author: Xinyang Zhang

Found 18 papers, 9 papers with code

META: Metadata-Empowered Weak Supervision for Text Classification

1 code implementation • EMNLP 2020 • Dheeraj Mekala, Xinyang Zhang, Jingbo Shang

Based on seed words, we rank and filter motif instances to distill highly label-indicative ones as "seed motifs", which provide additional weak supervision.
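
For intuition only, here is a minimal Python sketch of the ranking-and-filtering idea that sentence describes; the co-occurrence score, function name, and toy motifs are illustrative assumptions, not META's actual criterion.

    from collections import Counter

    def rank_seed_motifs(motif_instances, seed_words, top_k=10):
        # Score each candidate motif by how often it co-occurs with
        # label-indicative seed words; keep the top-k as "seed motifs".
        seeds = set(seed_words)
        scores = Counter()
        for motif, context_words in motif_instances:
            scores[motif] += len(seeds & set(context_words))
        return [m for m, s in scores.most_common(top_k) if s > 0]

    # Toy usage: motif instances paired with the words they co-occur with.
    instances = [("venue:EMNLP", ["neural", "text"]), ("venue:EMNLP", ["text"]),
                 ("genre:rock", ["guitar"])]
    print(rank_seed_motifs(instances, seed_words=["neural", "text"], top_k=2))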

Classification • General Classification • +1

OA-Mine: Open-World Attribute Mining for E-Commerce Products with Weak Supervision

1 code implementation • 29 Apr 2022 • Xinyang Zhang, Chenwei Zhang, Xian Li, Xin Luna Dong, Jingbo Shang, Christos Faloutsos, Jiawei Han

Most prior works on this matter mine new values for a set of known attributes but cannot handle new attributes that arise from constantly changing data.

Language Modelling

Detecting Multi-Sensor Fusion Errors in Advanced Driver-Assistance Systems

2 code implementations • 14 Sep 2021 • Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Baishakhi Ray

We define the failures (e.g., car crashes) caused by the faulty MSF as fusion errors and develop a novel evolutionary-based domain-specific search framework, FusED, for the efficient detection of fusion errors.
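
As a rough illustration of what an evolutionary, domain-specific search loop can look like (this is not FusED itself; the scenario encoding, Gaussian mutation, and toy fitness below are placeholder assumptions), consider:

    import random

    def evolve(fitness, population, generations=50, sigma=0.5):
        # Generic evolutionary search: keep the fitter half, refill with
        # mutated copies, return the best scenario found.  In the FusED
        # setting, fitness would score how close a driving scenario comes
        # to triggering a fusion error.
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: len(population) // 2]
            children = [[g + random.gauss(0, sigma) for g in random.choice(survivors)]
                        for _ in range(len(population) - len(survivors))]
            population = survivors + children
        return max(population, key=fitness)

    # Toy stand-in: scenarios are 2-D parameter vectors; pretend the error
    # surfaces near the point (1, 2).
    toy_fitness = lambda s: -((s[0] - 1) ** 2 + (s[1] - 2) ** 2)
    start = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
    print(evolve(toy_fitness, start))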

Autonomous Driving

Minimally-Supervised Structure-Rich Text Categorization via Learning on Text-Rich Networks

no code implementations • 23 Feb 2021 • Xinyang Zhang, Chenwei Zhang, Xin Luna Dong, Jingbo Shang, Jiawei Han

Specifically, we jointly train two modules with different inductive biases -- a text analysis module for text understanding and a network learning module for class-discriminative, scalable network learning.
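
A minimal sketch of that joint training setup, assuming two placeholder linear modules in place of the paper's text-analysis and network-learning modules, and a single summed loss so gradients reach both each step:

    import torch
    import torch.nn as nn

    text_module = nn.Linear(64, 4)    # stand-in for text understanding
    graph_module = nn.Linear(32, 4)   # stand-in for network learning
    params = list(text_module.parameters()) + list(graph_module.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    text_x, graph_x = torch.randn(8, 64), torch.randn(8, 32)
    labels = torch.randint(0, 4, (8,))
    for step in range(100):
        # One joint objective; both modules are updated together.
        loss = (loss_fn(text_module(text_x), labels)
                + loss_fn(graph_module(graph_x), labels))
        opt.zero_grad(); loss.backward(); opt.step()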

Product Categorization • Text Categorization

i-Algebra: Towards Interactive Interpretability of Deep Neural Networks

no code implementations • 22 Jan 2021 • Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang

Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite.

Composite Adversarial Training for Multiple Adversarial Perturbations and Beyond

no code implementations • 1 Jan 2021 • Xinyang Zhang, Zheng Zhang, Ting Wang

One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations.

Trojaning Language Models for Fun and Profit

1 code implementation • 1 Aug 2020 • Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang

Recent years have witnessed the emergence of a new paradigm of building natural language processing (NLP) systems: general-purpose, pre-trained language models (LMs) are composed with simple downstream models and fine-tuned for a variety of NLP tasks.
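
That composition paradigm is easy to picture in code. The sketch below pairs a frozen stand-in "encoder" (in place of a real pre-trained LM such as BERT) with a simple downstream head and trains only the head; real fine-tuning would usually update the encoder too, so treat this as an illustrative simplification:

    import torch
    import torch.nn as nn

    # Frozen stand-in for a pre-trained LM: embed tokens, flatten, project.
    encoder = nn.Sequential(nn.Embedding(30000, 128), nn.Flatten(),
                            nn.Linear(16 * 128, 128))
    for p in encoder.parameters():
        p.requires_grad = False       # reuse the "pre-trained" weights as-is
    head = nn.Linear(128, 2)          # simple task-specific downstream model
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)

    tokens = torch.randint(0, 30000, (4, 16))   # toy batch of token ids
    labels = torch.randint(0, 2, (4,))
    loss = nn.functional.cross_entropy(head(encoder(tokens)), labels)
    opt.zero_grad(); loss.backward(); opt.step()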

Question Answering • Toxic Comment Classification

AdvMind: Inferring Adversary Intent of Black-Box Attacks

1 code implementation • 16 Jun 2020 • Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

Deep neural networks (DNNs) are inherently susceptible to adversarial attacks even under black-box settings, in which the adversary only has query access to the target models.

Inf-VAE: A Variational Autoencoder Framework to Integrate Homophily and Influence in Diffusion Prediction

2 code implementations • 1 Jan 2020 • Aravind Sankar, Xinyang Zhang, Adit Krishnan, Jiawei Han

Recent years have witnessed tremendous interest in understanding and predicting information spread on social media platforms such as Twitter, Facebook, etc.

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models

1 code implementation • 5 Nov 2019 • Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang

Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
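
A hypothetical alternating-optimization sketch of point (i) only, with a toy model, toy data, and arbitrary step sizes (the paper's actual formulation is not reproduced here):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)                     # toy stand-in victim model
    x = torch.randn(1, 10, requires_grad=True)   # adversarial input
    target = torch.tensor([1])                   # label the adversary wants
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    for _ in range(50):
        loss = nn.functional.cross_entropy(model(x), target)
        # Input step: gradient of the loss w.r.t. the adversarial input ...
        grad_x, = torch.autograd.grad(loss, x, retain_graph=True)
        # ... and poisoning step: the same loss also updates the weights.
        opt.zero_grad(); loss.backward(); opt.step()
        x = (x - 0.05 * grad_x.sign()).detach().requires_grad_(True)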

Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results

no code implementations • ICLR 2019 • Xinyang Zhang, Yifan Huang, Chanh Nguyen, Shouling Ji, Ting Wang

On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets.
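
For concreteness, here is a generic adversarial-training step in the single-step FGSM style. Note the substitution: the paper studies spatially transformed inputs, whereas this sketch perturbs inputs additively, so it shows only the training pattern, not the paper's method:

    import torch
    import torch.nn as nn

    def adversarial_training_step(model, opt, x, y, eps=0.1):
        # Craft a loss-increasing perturbation of the batch, then update
        # the model on the perturbed examples instead of the clean ones.
        x_adv = x.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + eps * grad.sign()).detach()
        opt.zero_grad()
        nn.functional.cross_entropy(model(x_adv), y).backward()
        opt.step()

    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    adversarial_training_step(model, opt, torch.randn(8, 10),
                              torch.randint(0, 2, (8,)))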

Interpretable Deep Learning under Fire

no code implementations • 3 Dec 2018 • Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.

Decision Making

Model-Reuse Attacks on Deep Learning Systems

no code implementations • 2 Dec 2018 • Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference.

Cryptography and Security

EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report)

no code implementations • 1 Aug 2018 • Yujie Ji, Xinyang Zhang, Ting Wang

Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs: such maliciously crafted samples trigger DNNs to misbehave, leading to detrimental consequences for DNN-powered systems.

Differentially Private Releasing via Deep Generative Model (Technical Report)

2 code implementations • 5 Jan 2018 • Xinyang Zhang, Shouling Ji, Ting Wang

Privacy-preserving releasing of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community.

Motif-based Convolutional Neural Network on Graphs

1 code implementation • 15 Nov 2017 • Aravind Sankar, Xinyang Zhang, Kevin Chen-Chuan Chang

This paper introduces a generalization of Convolutional Neural Networks (CNNs) to graphs with irregular linkage structures, especially heterogeneous graphs with typed nodes and schemas.
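
As a baseline for the idea being generalized, here is a minimal numpy sketch of one graph-convolution layer over a plain adjacency matrix; the paper's operator instead builds neighborhoods from motifs in heterogeneous graphs with typed nodes, which this sketch does not attempt:

    import numpy as np

    def graph_conv(adj, feats, weight):
        # Average each node's neighborhood (self-loops included), then
        # apply a shared linear map -- the graph analogue of a CNN filter.
        hood = adj + np.eye(adj.shape[0])
        hood = hood / hood.sum(axis=1, keepdims=True)
        return np.maximum(hood @ feats @ weight, 0)   # ReLU

    adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    out = graph_conv(adj, np.random.randn(3, 4), np.random.randn(4, 2))
    print(out.shape)   # (3, 2)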

General Classification • Node Classification • +1

Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge

no code implementations • 25 Aug 2017 • Xinyang Zhang, Yujie Ji, Ting Wang

Many of today's machine learning (ML) systems are not built from scratch, but are compositions of an array of modular learning components (MLCs).
