Search Results for author: Qinglong Wang

Found 13 papers, 5 papers with code

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution

1 code implementation • 25 Sep 2023 • Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren

Evaluation results reveal an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios.

Masked Diffusion Models Are Fast Distribution Learners

1 code implementation • 20 Jun 2023 • Jiachen Lei, Qinglong Wang, Peng Cheng, Zhongjie Ba, Zhan Qin, Zhibo Wang, Zhenguang Liu, Kui Ren

In the pre-training stage, we propose to mask a high proportion (e.g., up to 90%) of input images to approximately represent the primer distribution, and introduce a masked denoising score matching objective to train a model to denoise the visible areas.

Denoising • Image Generation
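The masked pre-training idea above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the patch size, mask ratio, and noise model are assumptions made for the sketch.

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.9, rng=None):
    """Randomly mask a high proportion (e.g., 90%) of non-overlapping
    patches of a 2D image. Returns the masked image and a boolean map
    of visible pixels."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n_patches = gh * gw
    n_masked = int(round(mask_ratio * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    visible = np.ones((h, w), dtype=bool)
    for idx in masked_idx:
        r, c = divmod(idx, gw)
        visible[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = False
    return image * visible, visible

def masked_dsm_loss(score_pred, noise, sigma, visible):
    """Masked denoising score matching: for Gaussian noise of scale sigma,
    the target score is -noise/sigma; the error is averaged only over
    the visible (unmasked) pixels."""
    target = -noise / sigma
    err = (score_pred - target) ** 2
    return float(err[visible].mean())
```

In this simplified reading, the model only pays a score-matching penalty on the areas it can see, which is what lets it train on a heavily masked "primer" version of the data distribution first.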

Connecting First and Second Order Recurrent Networks with Deterministic Finite Automata

1 code implementation • 12 Nov 2019 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles

We propose an approach that connects recurrent networks with different orders of hidden interaction with regular grammars of different levels of complexity.

Shapley Homology: Topological Analysis of Sample Influence for Neural Networks

no code implementations • 15 Oct 2019 • Kaixuan Zhang, Qinglong Wang, Xue Liu, C. Lee Giles

This has motivated different research areas such as data poisoning, model improvement, and explanation of machine learning models.

BIG-bench Machine Learning • Data Poisoning

Gated Attentive-Autoencoder for Content-Aware Recommendation

1 code implementation • 7 Dec 2018 • Chen Ma, Peng Kang, Bin Wu, Qinglong Wang, Xue Liu

In particular, a word-level and a neighbor-level attention module are integrated with the autoencoder.

Product Recommendation • Recommendation Systems

Verification of Recurrent Neural Networks Through Rule Extraction

no code implementations • 14 Nov 2018 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles

The verification problem for neural networks is to determine whether a network is susceptible to adversarial samples, or to approximate the maximal scale of adversarial perturbation it can endure.
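The second formulation of the verification problem, approximating the maximal tolerated perturbation, can be sketched as a binary search over the perturbation radius. The oracle below is a toy linear classifier chosen purely for illustration (for a linear model the exact answer is known in closed form); the paper's rule-extraction approach is not shown here.

```python
import numpy as np

def robust_at(w, x, label, eps):
    """Toy oracle: a linear classifier sign(w.x) is robust at L2 radius
    eps around x iff its signed margin exceeds eps * ||w||."""
    margin = label * float(w @ x)
    return margin > eps * np.linalg.norm(w)

def max_tolerated_eps(w, x, label, hi=10.0, iters=40):
    """Binary search for the largest perturbation radius the model
    endures at input x."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if robust_at(w, x, label, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

The same search scheme applies to any robustness oracle; for deep networks the hard part is, of course, building a sound oracle, which is where verification methods differ.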

Point-of-Interest Recommendation: Exploiting Self-Attentive Autoencoders with Neighbor-Aware Influence

1 code implementation • 27 Sep 2018 • Chen Ma, Yingxue Zhang, Qinglong Wang, Xue Liu

To incorporate geographical context information, we propose a neighbor-aware decoder that makes users' reachability higher on the similar and nearby neighbors of checked-in POIs, achieved by the inner product of POI embeddings together with a radial basis function (RBF) kernel.

Recommendation Systems
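The inner-product-plus-RBF idea from the snippet above can be sketched as follows. This is a simplified reading, not the paper's decoder: here each POI's preference score is simply down-weighted by an RBF kernel of its geographic distance to the user's checked-in POIs, and the bandwidth `gamma` is an assumed parameter.

```python
import numpy as np

def rbf_kernel(dist, gamma=0.5):
    """Radial basis function kernel over geographic distance:
    exp(-gamma * dist^2), close to 1 for nearby points and
    decaying for far-away ones."""
    return np.exp(-gamma * dist ** 2)

def neighbor_aware_score(user_vec, poi_embs, dists, gamma=0.5):
    """Score each candidate POI by the inner product of its embedding
    with the user vector, weighted by the RBF kernel of its distance
    to the user's checked-in POIs (geographic reachability)."""
    base = poi_embs @ user_vec
    return base * rbf_kernel(dists, gamma)
```

With this weighting, two POIs with equal embedding affinity are ranked by proximity: the one nearer a checked-in location keeps more of its score.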

Energy Spatio-Temporal Pattern Prediction for Electric Vehicle Networks

no code implementations • 14 Feb 2018 • Qinglong Wang

In this paper, we propose a framework for predicting the amount of electric energy stored by a large number of EVs aggregated within different city-scale regions, based on the spatio-temporal pattern of the electric energy.

Scheduling

A Comparative Study of Rule Extraction for Recurrent Neural Networks

no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Then we empirically evaluate different recurrent networks on their DFA extraction performance across all Tomita grammars.

An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks

no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Rule extraction from black-box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis.

Medical Diagnosis

Learning Adversary-Resistant Deep Neural Networks

no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.

Autonomous Vehicles

Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks

no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong

Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.

Autonomous Vehicles • Dimensionality Reduction +2

Adversary Resistant Deep Neural Networks with an Application to Malware Detection

no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu

However, after a thorough analysis of the fundamental flaw in DNNs, we discover that the effectiveness of current defenses is limited and, more importantly, cannot provide theoretical guarantees as to their robustness against adversarial sample-based attacks.

Information Retrieval • Malware Detection +3
