1 code implementation • EMNLP 2020 • Dheeraj Mekala, Xinyang Zhang, Jingbo Shang
Based on seed words, we rank and filter motif instances to distill highly label-indicative ones as "seed motifs", which provide additional weak supervision.
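A minimal sketch of the general idea (not the paper's actual scoring function): candidate motif instances are scored by how consistently they co-occur with a label's seed words, and the top-ranked ones are kept as "seed motifs" that supply extra weak supervision. The toy documents, the treatment of motifs as single tokens, and the scoring rule are all illustrative assumptions.

```python
from collections import Counter

def rank_seed_motifs(docs, motif_instances, seed_words, top_k=20):
    """docs: list of token lists; motif_instances: candidate motif strings;
    seed_words: label-indicative words. Names and scoring are illustrative."""
    cooccur, freq = Counter(), Counter()
    for tokens in docs:
        token_set = set(tokens)
        has_seed = bool(token_set & seed_words)
        for motif in motif_instances:
            if motif in token_set:
                freq[motif] += 1
                if has_seed:
                    cooccur[motif] += 1
    # Score: fraction of a motif's occurrences that appear alongside seed words.
    scores = {m: cooccur[m] / freq[m] for m in freq}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

docs = [["deadline", "acl", "nlp"], ["game", "score", "team"], ["acl", "bert", "nlp"]]
print(rank_seed_motifs(docs, {"acl", "team"}, {"nlp"}))  # 'acl' ranks above 'team'
```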
no code implementations • 13 Dec 2024 • Yu-Jhe Li, Xinyang Zhang, Kun Wan, Lantao Yu, Ajinkya Kale, Xin Lu
To overcome this challenge, existing methods often use multi-modal models like CLIP, which combine image and text features in a shared embedding space to bridge the gap between limited and extensive vocabulary recognition, resulting in a two-stage approach: in the first stage, a mask generator takes an input image and produces mask proposals; in the second stage, the target mask is picked based on the query.
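The two-stage selection can be sketched as follows. Here `mask_generator`, `image_encoder`, and `text_encoder` are placeholders for a class-agnostic proposal network and CLIP-like encoders; they are assumptions for illustration, not specific APIs.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def pick_mask(image, text_query, mask_generator, image_encoder, text_encoder):
    """Two-stage open-vocabulary selection: propose masks, then pick the one
    whose masked region best matches the query in a shared embedding space."""
    proposals = mask_generator(image)                 # stage 1: class-agnostic mask proposals
    query_emb = text_encoder(text_query)              # query embedded in the shared space
    scores = []
    for mask in proposals:
        region = image * mask[..., None]              # keep only the masked pixels
        scores.append(cosine(image_encoder(region), query_emb))
    return proposals[int(np.argmax(scores))]          # stage 2: best-matching mask
```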
no code implementations • 19 Jul 2023 • Xinyang Zhang, Wentian Zhao, Xin Lu, Jeff Chien
To achieve layered image generation, we train an autoencoder that can reconstruct layered images, and then train diffusion models on its latent representation.
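A toy sketch of this pipeline, under the assumption that image layers are stacked along the channel dimension (e.g., two RGBA layers); the actual architecture, layer handling, and latent diffusion training are far more involved.

```python
import torch
import torch.nn as nn

class LayeredAutoencoder(nn.Module):
    """Toy autoencoder over a stack of image layers (channels = n_layers x 4 for RGBA).
    Depths and sizes are illustrative, not the paper's architecture."""
    def __init__(self, n_layers=2, latent_ch=8):
        super().__init__()
        in_ch = n_layers * 4
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, latent_ch, 3, stride=2, padding=1))
        self.dec = nn.Sequential(nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = LayeredAutoencoder()
layers = torch.rand(1, 8, 64, 64)                     # two RGBA layers stacked as channels
recon, z = ae(layers)
recon_loss = nn.functional.mse_loss(recon, layers)    # autoencoder reconstruction objective
# A latent diffusion model would then be trained to denoise z; one noising step:
noisy_z = z + 0.1 * torch.randn_like(z)               # placeholder for a proper noise schedule
```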
no code implementations • 20 May 2023 • Bowen Jin, Wentao Zhang, Yu Zhang, Yu Meng, Xinyang Zhang, Qi Zhu, Jiawei Han
A real-world text corpus sometimes comprises not only text documents but also semantic links between them (e.g., academic papers in a bibliographic network are linked by citations and co-authorships).
1 code implementation • 1 Jan 2023 • Jiayun Zhang, Xiyuan Zhang, Xinyang Zhang, Dezhi Hong, Rajesh K. Gupta, Jingbo Shang
Traditional federated classification methods, even those designed for non-IID clients, assume that each client annotates its local data with respect to the same universal class set.
1 code implementation • 15 Sep 2022 • Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, Ahmed El-Kishky
Most existing PLMs are not tailored to the noisy user-generated text on social media, and the pre-training does not factor in the valuable social engagement logs available in a social network.
1 code implementation • 29 Apr 2022 • Xinyang Zhang, Chenwei Zhang, Xian Li, Xin Luna Dong, Jingbo Shang, Christos Faloutsos, Jiawei Han
Most prior works on this matter mine new values for a set of known attributes but cannot handle new attributes that arose from constantly changing data.
no code implementations • 10 Mar 2022 • Junjie Shen, Ningfei Wang, Ziwen Wan, Yunpeng Luo, Takami Sato, Zhisheng Hu, Xinyang Zhang, Shengjian Guo, Zhenyu Zhong, Kang Li, Ziming Zhao, Chunming Qiao, Qi Alfred Chen
In this paper, we perform the first systematization of knowledge of this growing semantic AD AI security research space.
3 code implementations • 14 Sep 2021 • Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, Baishakhi Ray
We define the failures (e.g., car crashes) caused by the faulty MSF as fusion errors and develop a novel evolutionary-based domain-specific search framework, FusED, for the efficient detection of fusion errors.
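The evolutionary search idea can be illustrated with a generic genetic loop over scenario parameters, where fitness rewards larger fusion errors; the encoding, operators, and fitness below are placeholders rather than FusED's actual design.

```python
import random

def evolve_scenarios(fitness, n_params=4, pop_size=20, generations=30, mut_std=0.1):
    """Generic evolutionary search: `fitness` scores a scenario parameter vector
    (e.g., higher when the fused estimate deviates more from ground truth)."""
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]  # keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)                            # one-point crossover
            child = [min(1.0, max(0.0, g + random.gauss(0, mut_std)))     # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: pretend fusion error grows with the sum of scenario parameters.
print(evolve_scenarios(lambda p: sum(p)))
```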
no code implementations • 23 Feb 2021 • Xinyang Zhang, Chenwei Zhang, Xin Luna Dong, Jingbo Shang, Jiawei Han
Specifically, we jointly train two modules with different inductive biases -- a text analysis module for text understanding and a network learning module for class-discriminative, scalable network learning.
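A minimal sketch of joint training under a shared objective; the two linear layers below merely stand in for the text analysis and network learning modules, which in the paper have very different architectures and inductive biases.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the real text module would be a language model, and the
# network module a scalable, class-discriminative graph learner.
text_module = nn.Linear(128, 16)   # "text understanding" features -> class logits
net_module = nn.Linear(64, 16)     # "network" features -> class logits
opt = torch.optim.Adam(list(text_module.parameters()) + list(net_module.parameters()), lr=1e-3)

text_feats, net_feats = torch.randn(32, 128), torch.randn(32, 64)
labels = torch.randint(0, 16, (32,))

opt.zero_grad()
loss = nn.functional.cross_entropy(text_module(text_feats), labels) \
     + nn.functional.cross_entropy(net_module(net_feats), labels)    # joint objective
loss.backward()
opt.step()
```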
no code implementations • 22 Jan 2021 • Xinyang Zhang, Ren Pang, Shouling Ji, Fenglong Ma, Ting Wang
Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite.
no code implementations • 1 Jan 2021 • Xinyang Zhang, Zheng Zhang, Ting Wang
One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations.
1 code implementation • 1 Aug 2020 • Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang
Recent years have witnessed the emergence of a new paradigm of building natural language processing (NLP) systems: general-purpose, pre-trained language models (LMs) are composed with simple downstream models and fine-tuned for a variety of NLP tasks.
1 code implementation • 16 Jun 2020 • Ren Pang, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang
Deep neural networks (DNNs) are inherently susceptible to adversarial attacks even under black-box settings, in which the adversary only has query access to the target models.
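In the query-only setting, a standard building block is gradient estimation from loss queries (NES-style finite differences); the sketch below illustrates that generic technique, not necessarily this paper's attack.

```python
import numpy as np

def estimate_gradient(query_loss, x, sigma=0.01, n_samples=50):
    """Estimate the gradient of a black-box loss using only scalar loss queries.
    `query_loss(x)` stands in for the target model plus attack objective."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        grad += (query_loss(x + sigma * u) - query_loss(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Toy demonstration: climb a hidden quadratic "loss" using only query access.
hidden_peak = np.array([0.3, -0.7])
query_loss = lambda x: -np.sum((x - hidden_peak) ** 2)
x = np.zeros(2)
for _ in range(100):
    x += 0.05 * estimate_gradient(query_loss, x)   # gradient-ascent step on estimates
print(x)                                           # approaches hidden_peak
```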
2 code implementations • 1 Jan 2020 • Aravind Sankar, Xinyang Zhang, Adit Krishnan, Jiawei Han
Recent years have witnessed tremendous interest in understanding and predicting information spread on social media platforms such as Twitter and Facebook.
1 code implementation • 5 Nov 2019 • Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang
Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
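A toy illustration of coupling the two attack vectors: model parameters ("poisoning") and an input perturbation ("trigger") are optimized together against a shared objective, while a utility term keeps accuracy on clean data. The scale, losses, and single-sample trigger here are illustrative assumptions, not the paper's attack.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 3)                                  # stand-in for a poisonable model
trigger = torch.zeros(1, 10, requires_grad=True)          # jointly optimized input perturbation
opt = torch.optim.Adam(list(model.parameters()) + [trigger], lr=1e-2)

clean_x, clean_y = torch.randn(64, 10), torch.randint(0, 3, (64,))
target_class = torch.tensor([2])

for _ in range(200):
    opt.zero_grad()
    utility = nn.functional.cross_entropy(model(clean_x), clean_y)                    # stay accurate on clean data
    attack = nn.functional.cross_entropy(model(clean_x[:1] + trigger), target_class)  # trigger forces target class
    (utility + attack).backward()                                                     # joint objective over both vectors
    opt.step()
```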
no code implementations • ICLR 2019 • Xinyang Zhang, Yifan Huang, Chanh Nguyen, Shouling Ji, Ting Wang
On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets.
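The standard recipe such methods build on is min-max adversarial training: generate worst-case perturbations within a norm ball, then train on them. Below is a compact PGD-based sketch of that generic recipe; the paper's construction and guarantees go beyond it.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Standard PGD: iteratively perturb x within an L-infinity ball to maximize loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back into the eps-ball
    return x_adv

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(128, 20), torch.randint(0, 5, (128,))

for _ in range(10):                                        # adversarial training loop
    x_adv = pgd_attack(model, x, y)                        # inner maximization
    opt.zero_grad()
    nn.functional.cross_entropy(model(x_adv), y).backward()  # outer minimization on adversarial examples
    opt.step()
```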
no code implementations • 3 Dec 2018 • Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang
The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.
no code implementations • 2 Dec 2018 • Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang
By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference.
no code implementations • 1 Aug 2018 • Yujie Ji, Xinyang Zhang, Ting Wang
Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs: such maliciously crafted samples trigger DNNs to misbehave, leading to detrimental consequences for DNN-powered systems.
2 code implementations • 5 Jan 2018 • Xinyang Zhang, Shouling Ji, Ting Wang
Privacy-preserving releasing of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community.
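As a generic building block (not this paper's approach, which targets complex data such as images, text, and audio and likely relies on learned representations or generative releasing), the classic Laplace mechanism adds noise calibrated to sensitivity and the privacy budget before release.

```python
import numpy as np

def laplace_release(stat, sensitivity, epsilon, seed=0):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to a statistic
    before releasing it. Here the 'statistic' is a toy latent code."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return stat + rng.laplace(0.0, scale, size=np.shape(stat))

latent = np.random.rand(16)                                   # e.g., a learned latent code
private_latent = laplace_release(latent, sensitivity=1.0, epsilon=0.5)
print(private_latent[:4])
```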
1 code implementation • 15 Nov 2017 • Aravind Sankar, Xinyang Zhang, Kevin Chen-Chuan Chang
This paper introduces a generalization of Convolutional Neural Networks (CNNs) to graphs with irregular linkage structures, especially heterogeneous graphs with typed nodes and schemas.
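A generic illustration of graph convolution with typed edges, using a separate weight matrix per edge type and mean aggregation (R-GCN-style); the paper's operator over heterogeneous schemas is more general, so treat this only as a sketch of the idea.

```python
import numpy as np

def typed_graph_conv(features, edges_by_type, weights):
    """One message-passing layer with a weight matrix per edge type.
    features: (n, d); edges_by_type: {type: [(src, dst), ...]}; weights: {type: (d, d)}."""
    n, d = features.shape
    out = features @ weights["self"]                      # self-connection
    for etype, edges in edges_by_type.items():
        agg, deg = np.zeros((n, d)), np.zeros(n)
        for src, dst in edges:
            agg[dst] += features[src]
            deg[dst] += 1
        agg[deg > 0] /= deg[deg > 0, None]                # mean over same-typed neighbors
        out += agg @ weights[etype]
    return np.maximum(out, 0)                             # ReLU

feats = np.random.rand(4, 8)
edges = {"cites": [(0, 1), (2, 1)], "written_by": [(1, 3)]}
W = {t: np.random.rand(8, 8) * 0.1 for t in ["self", "cites", "written_by"]}
print(typed_graph_conv(feats, edges, W).shape)            # (4, 8)
```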
no code implementations • 25 Aug 2017 • Xinyang Zhang, Yujie Ji, Ting Wang
Many of today's machine learning (ML) systems are not built from scratch, but are compositions of an array of modular learning components (MLCs).