Search Results for author: Ting Wang

Found 47 papers, 14 papers with code

Divide and Denoise: Learning from Noisy Labels in Fine-Grained Entity Typing with Cluster-Wise Loss Correction

no code implementations ACL 2022 Kunyuan Pang, Haoyu Zhang, Jie Zhou, Ting Wang

In this work, we propose a clustering-based loss correction framework named Feature Cluster Loss Correction (FCLC), to address these two problems.

Entity Typing

Don’t Miss the Potential Customers! Retrieving Similar Ads to Improve User Targeting

no code implementations Findings (EMNLP) 2021 Yi Feng, Ting Wang, Chuanyi Li, Vincent Ng, Jidong Ge, Bin Luo, Yucheng Hu, Xiaopeng Zhang

User targeting is an essential task in the modern advertising industry: given a package of ads for a particular category of products (e.g., green tea), identify the online users to whom the ad package should be targeted.

FedEntropy: Efficient Device Grouping for Federated Learning Using Maximum Entropy Judgment

no code implementations24 May 2022 Zhiwei Ling, Zhihao Yue, Jun Xia, Ming Hu, Ting Wang, Mingsong Chen

Along with the popularity of Artificial Intelligence (AI) and the Internet of Things (IoT), Federated Learning (FL) has attracted steadily increasing attention as a promising distributed machine learning paradigm that enables the training of a central model across numerous decentralized devices without exposing their private data.
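
The maximum-entropy grouping idea behind FedEntropy can be illustrated with a minimal sketch, assuming each device reports a soft-label distribution (the function names, class count, and values below are hypothetical, not taken from the paper): a device group whose combined soft labels cover the label space evenly yields a near-uniform average distribution, and thus a higher entropy.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a probability distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def group_entropy(soft_labels):
    """Entropy of the averaged soft-label distribution of a device group."""
    avg = np.mean(soft_labels, axis=0)
    return entropy(avg)

# Two hypothetical 3-class device groups: one label-skewed, one balanced.
skewed = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
balanced = np.array([[0.6, 0.2, 0.2], [0.1, 0.5, 0.4]])
assert group_entropy(balanced) > group_entropy(skewed)
```

Under this criterion a grouping scheme would prefer the balanced group, since its aggregate covers more of the label space.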

Federated Learning

Model-Contrastive Learning for Backdoor Defense

1 code implementation9 May 2022 Zhihao Yue, Jun Xia, Zhiwei Ling, Ming Hu, Ting Wang, Xian Wei, Mingsong Chen

Due to the popularity of Artificial Intelligence (AI) techniques, we are witnessing an increasing number of backdoor injection attacks designed to maliciously compromise Deep Neural Networks (DNNs), causing misclassification.

Backdoor Attack Contrastive Learning

Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation

1 code implementation21 Apr 2022 Jun Xia, Ting Wang, Jiepin Ding, Xian Wei, Mingsong Chen

Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order.
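
The attention features mentioned above can be sketched as follows; this follows the common attention-transfer construction (collapsing channels by a power of absolute activations) and is only illustrative, since the exact maps NAD or this paper's graph distillation uses may be defined differently.

```python
import numpy as np

def attention_map(features, p=2):
    """Collapse a CxHxW feature tensor into an HxW attention map by
    summing the p-th power of absolute activations over channels."""
    return np.sum(np.abs(features) ** p, axis=0)

# Toy activations: 8 channels over a 4x4 spatial grid.
feats = np.random.default_rng(0).normal(size=(8, 4, 4))
amap = attention_map(feats)
assert amap.shape == (4, 4) and np.all(amap >= 0)
```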

Knowledge Distillation

Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings

no code implementations7 Apr 2022 Yuhao Mao, Chong Fu, Saizhuo Wang, Shouling Ji, Xuhong Zhang, Zhenguang Liu, Jun Zhou, Alex X. Liu, Raheem Beyah, Ting Wang

To bridge this critical gap, we conduct the first large-scale systematic empirical study of transfer attacks against major cloud-based MLaaS platforms, taking the components of a real transfer attack into account.

Over-the-Air Federated Learning via Second-Order Optimization

1 code implementation29 Mar 2022 Peng Yang, Yuning Jiang, Ting Wang, Yong Zhou, Yuanming Shi, Colin N. Jones

To address this issue, in this paper, we instead propose a novel over-the-air second-order federated optimization algorithm to simultaneously reduce the communication rounds and enable low-latency global model aggregation.

Federated Learning

The Variable Volatility Elasticity Model from Commodity Markets

no code implementations17 Mar 2022 Fuzhou Gong, Ting Wang

In this paper, we propose and study a novel continuous-time model, based on the well-known constant elasticity of variance (CEV) model, to describe the asset price process.
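The classical CEV dynamics the paper builds on, dS = mu*S dt + sigma*S^gamma dW, can be sketched with a simple Euler-Maruyama simulation. This is a toy illustration with illustrative parameter values; the paper's variable volatility elasticity (gamma varying rather than constant) is not reproduced here.

```python
import numpy as np

def simulate_cev(s0, mu, sigma, gamma, t, n, seed=0):
    """Euler-Maruyama path of the CEV SDE dS = mu*S dt + sigma*S**gamma dW.

    gamma is the (constant) volatility elasticity; a small floor keeps the
    discretized path positive.
    """
    rng = np.random.default_rng(seed)
    dt = t / n
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        s[i + 1] = max(s[i] + mu * s[i] * dt + sigma * s[i] ** gamma * dw, 1e-8)
    return s

path = simulate_cev(s0=100.0, mu=0.05, sigma=0.2, gamma=0.5, t=1.0, n=252)
assert len(path) == 253 and path[0] == 100.0
```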

Machine Learning Empowered Intelligent Data Center Networking: A Survey

no code implementations28 Feb 2022 Bo Li, Ting Wang, Peng Yang, Mingsong Chen, Shui Yu, Mounir Hamdi

To support the needs of ever-growing cloud-based services, the number of servers and network devices in data centers is increasing exponentially, which in turn makes network optimization highly complex and difficult.

Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

no code implementations22 Feb 2022 Changjiang Li, Li Wang, Shouling Ji, Xuhong Zhang, Zhaohan Xi, Shanqing Guo, Ting Wang

Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and offered as Platform-as-a-Service (PaaS) by leading cloud vendors.

DeepFake Detection Face Swapping

Towards Fast and Accurate Federated Learning with non-IID Data for Cloud-Based IoT Applications

no code implementations29 Jan 2022 Tian Liu, Jiahao Ding, Ting Wang, Miao Pan, Mingsong Chen

However, since our grouping method is based on the similarity of extracted feature maps from IoT devices, it may incur additional risks of privacy exposure.

Federated Learning

CatchBackdoor: Backdoor Testing by Critical Trojan Neural Path Identification via Differential Fuzzing

no code implementations24 Dec 2021 Haibo Jin, Ruoxi Chen, Jinyin Chen, Yao Cheng, Chong Fu, Ting Wang, Yue Yu, Zhaoyan Ming

Existing DNN testing methods are mainly designed to find incorrect corner case behaviors in adversarial settings but fail to discover the backdoors crafted by strong trojan attacks.

DNN Testing

MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare

no code implementations11 Dec 2021 Muchao Ye, Junyu Luo, Guanjie Zheng, Cao Xiao, Ting Wang, Fenglong Ma

Deep neural networks (DNNs) have been broadly adopted in health risk prediction to provide healthcare diagnoses and treatments.

Adversarial Attack

Auto robust relative radiometric normalization via latent change noise modelling

no code implementations24 Nov 2021 Shiqi Liu, Lu Wang, Jie Lian, Ting Chen, Cong Liu, Xuchen Zhan, Jintao Lu, Jie Liu, Ting Wang, Dong Geng, Hongwei Duan, Yuze Tian

Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and map-making tasks.

Change Detection

Backdoor Attack through Frequency Domain

1 code implementation22 Nov 2021 Tong Wang, Yuan YAO, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang

We also evaluate FTROJAN against state-of-the-art defenses as well as several adaptive defenses that are designed on the frequency domain.
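The frequency-domain idea can be illustrated with a toy sketch: perturb a single 2-D DFT coefficient of an image and transform back, so the spatial-domain change is spread over many pixels and small per pixel. The coefficient index and strength below are arbitrary, and FTROJAN's actual trigger design differs.

```python
import numpy as np

def add_frequency_trigger(img, coeff=(6, 6), strength=30.0):
    """Embed a toy backdoor trigger by nudging one 2-D DFT coefficient."""
    f = np.fft.fft2(img.astype(float))
    f[coeff] += strength                     # perturb a mid-frequency term
    poisoned = np.real(np.fft.ifft2(f))      # back to the spatial domain
    return np.clip(poisoned, 0, 255)

clean = np.zeros((32, 32))
poisoned = add_frequency_trigger(clean)
# The per-pixel change is tiny even though a coefficient moved by 30.
assert poisoned.shape == (32, 32)
assert np.abs(poisoned - clean).max() < 1.0
```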

Autonomous Driving Backdoor Attack

On the Security Risks of AutoML

1 code implementation12 Oct 2021 Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang

Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization.

Model Poisoning Neural Architecture Search

UAV-Assisted Over-the-Air Computation

no code implementations25 Jan 2021 Min Fu, Yong Zhou, Yuanming Shi, Ting Wang, Wei Chen

Over-the-air computation (AirComp) provides a promising way to support ultrafast aggregation of distributed data.
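The core AirComp idea can be shown in a few lines: when devices transmit analog signals simultaneously, the wireless channel superimposes them, so the receiver observes their sum plus noise without decoding each device separately. This sketch omits fading, power control, and the UAV trajectory design that the paper actually studies; the names and values are illustrative.

```python
import numpy as np

def aircomp_sum(values, noise_std=0.01, seed=0):
    """Toy over-the-air aggregation: the channel adds the devices'
    simultaneous analog transmissions, plus receiver noise."""
    rng = np.random.default_rng(seed)
    return float(np.sum(values) + rng.normal(0.0, noise_std))

readings = [0.2, 0.5, 0.3, 0.8]
estimate = aircomp_sum(readings)
assert abs(estimate - sum(readings)) < 0.1
```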

Optimizing the trajectory of a UAV that serves as a base station (BS) in a communication system

i-Algebra: Towards Interactive Interpretability of Deep Neural Networks

no code implementations22 Jan 2021 Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang

Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite.

Composite Adversarial Training for Multiple Adversarial Perturbations and Beyond

no code implementations1 Jan 2021 Xinyang Zhang, Zheng Zhang, Ting Wang

One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations.

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors

1 code implementation16 Dec 2020 Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Ting Wang

To bridge this gap, we design and implement TROJANZOO, the first open-source platform for evaluating neural backdoor attacks/defenses in a unified, holistic, and practical manner.


Visual Perception Generalization for Vision-and-Language Navigation via Meta-Learning

no code implementations10 Dec 2020 Ting Wang, Zongkai Wu, Donglin Wang

In the training phase, we first locate the generalization problem to the visual perception module, and then compare two meta-learning algorithms for better generalization in seen and unseen environments.

Meta-Learning Vision and Language Navigation

UNIFUZZ: A Holistic and Pragmatic Metrics-Driven Platform for Evaluating Fuzzers

1 code implementation5 Oct 2020 Yuwei Li, Shouling Ji, Yuan Chen, Sizhuang Liang, Wei-Han Lee, Yueyao Chen, Chenyang Lyu, Chunming Wu, Raheem Beyah, Peng Cheng, Kangjie Lu, Ting Wang

We hope that our findings can shed light on reliable fuzzing evaluation, so that we can discover promising fuzzing primitives to effectively facilitate fuzzer designs in the future.

Cryptography and Security

Trojaning Language Models for Fun and Profit

1 code implementation1 Aug 2020 Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang

Recent years have witnessed the emergence of a new paradigm of building natural language processing (NLP) systems: general-purpose, pre-trained language models (LMs) are composed with simple downstream models and fine-tuned for a variety of NLP tasks.

Question Answering Toxic Comment Classification

Graph Backdoor

1 code implementation21 Jun 2020 Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang

One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning normally otherwise.

Backdoor Attack General Classification +2

AdvMind: Inferring Adversary Intent of Black-Box Attacks

1 code implementation16 Jun 2020 Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

Deep neural networks (DNNs) are inherently susceptible to adversarial attacks even under black-box settings, in which the adversary only has query access to the target models.

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks

no code implementations24 Mar 2020 Junfeng Guo, Ting Wang, Cong Liu

Being able to detect and mitigate poisoning attacks, typically categorized into backdoor and adversarial poisoning (AP), is critical in enabling safe adoption of DNNs in many application domains.

Data Poisoning

Portably parallel construction of a CI wave function from a matrix-product state using the Charm++ framework

1 code implementation24 Mar 2020 Ting Wang, Yingjin Ma, Lian Zhao, Jinrong Jiang

In this work, we present an efficient procedure for constructing CI expansions from MPS using the Charm++ parallel programming framework, upon which automatic load balancing and object migration facilities can be employed.

Computational Physics Strongly Correlated Electrons

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models

1 code implementation5 Nov 2019 Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang

Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.

Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results

no code implementations ICLR 2019 Xinyang Zhang, Yifan Huang, Chanh Nguyen, Shouling Ji, Ting Wang

On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets.

SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems

no code implementations23 Jan 2019 Tianyu Du, Shouling Ji, Jinfeng Li, Qinchen Gu, Ting Wang, Raheem Beyah

Despite their immense popularity, deep learning-based acoustic systems are inherently vulnerable to adversarial attacks, wherein maliciously crafted audios trigger target systems to misbehave.

Cryptography and Security

TextBugger: Generating Adversarial Text Against Real-world Applications

1 code implementation13 Dec 2018 Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang

Deep Learning-based Text Understanding (DLTU) is the backbone technique behind various applications, including question answering, machine translation, and text classification.

Adversarial Text Machine Translation +5

Interpretable Deep Learning under Fire

no code implementations3 Dec 2018 Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.

Decision Making

Model-Reuse Attacks on Deep Learning Systems

no code implementations2 Dec 2018 Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference.

Cryptography and Security

EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report)

no code implementations1 Aug 2018 Yujie Ji, Xinyang Zhang, Ting Wang

Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs: such maliciously crafted samples trigger DNNs to misbehave, leading to detrimental consequences for DNN-powered systems.

Differentially Private Releasing via Deep Generative Model (Technical Report)

2 code implementations5 Jan 2018 Xinyang Zhang, Shouling Ji, Ting Wang

Privacy-preserving releasing of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community.

Where Classification Fails, Interpretation Rises

no code implementations2 Dec 2017 Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang

We believe that this work opens a new direction for designing adversarial input detection methods.

Classification General Classification

Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge

no code implementations25 Aug 2017 Xinyang Zhang, Yujie Ji, Ting Wang

Many of today's machine learning (ML) systems are not built from scratch, but are compositions of an array of modular learning components (MLCs).

Dense 3D Facial Reconstruction from a Single Depth Image in Unconstrained Environment

no code implementations24 Apr 2017 Shu Zhang, Hui Yu, Ting Wang, Junyu Dong, Honghai Liu

With the increasing demands of virtual reality applications such as 3D films, virtual human-machine interactions and virtual agents, 3D human face analysis is considered increasingly important as a fundamental step for those virtual reality tasks.


DIMM-SC: A Dirichlet mixture model for clustering droplet-based single cell transcriptomic data

no code implementations6 Apr 2017 Zhe Sun, Ting Wang, Ke Deng, Xiao-Feng Wang, Robert Lafyatis, Ying Ding, Ming Hu, Wei Chen

More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods.

Context-Aware Online Learning for Course Recommendation of MOOC Big Data

no code implementations11 Oct 2016 Yifan Hou, Pan Zhou, Ting Wang, Li Yu, Yuchong Hu, Dapeng Wu

In this respect, the key challenge is how to realize personalized course recommendation while reducing the computing and storage costs for the massive course data.

Online Learning Recommendation Systems

Neural Mechanism of Language

no code implementations22 Aug 2014 Peilei Liu, Ting Wang

We first briefly introduce this model in this paper, and then explain the neural mechanism of language and reasoning with it.

Motor Learning Mechanism on the Neuron Scale

no code implementations18 Jul 2014 Peilei Liu, Ting Wang

Finally, we compare the motor system with the sensory system.

A Quantitative Neural Coding Model of Sensory Memory

no code implementations25 Jun 2014 Peilei Liu, Ting Wang

The coding mechanism of sensory memory on the neuron scale is one of the most important questions in neuroscience.

A Unified Quantitative Model of Vision and Audition

no code implementations23 Jun 2014 Peilei Liu, Ting Wang

This is complementary to existing theories and has provided better explanations for sound localization.

Automatic Extraction of Protein Interaction in Literature

no code implementations8 Jun 2014 Peilei Liu, Ting Wang

Protein-protein interaction extraction is a key precondition for constructing protein knowledge networks, and it is very important for research in biomedicine.
