no code implementations • 10 Feb 2025 • Zhongjie Ba, YiTao Zhang, Peng Cheng, Bin Gong, Xinyu Zhang, Qinglong Wang, Kui Ren
Watermarking plays a key role in the provenance and detection of AI-generated content.
no code implementations • 31 Oct 2024 • Shunmei Dong, Qinglong Wang, Haiqing Wang, Qianqian Wang
To address the challenges of star map identification, a reverse-attitude-statistics-based method is proposed to handle position noise, false stars, and missing stars.
no code implementations • 25 Sep 2023 • Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren
Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios.
1 code implementation • 20 Jun 2023 • Jiachen Lei, Qinglong Wang, Peng Cheng, Zhongjie Ba, Zhan Qin, Zhibo Wang, Zhenguang Liu, Kui Ren
In the pre-training stage, we propose to mask a high proportion (e.g., up to 90%) of input images to approximately represent the primer distribution, and introduce a masked denoising score matching objective that trains a model to denoise the visible areas.
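The idea of masking most of each image and scoring the denoiser only on the visible pixels can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the patch-based masking helper, the `mask_ratio` and `patch` parameters, and the loss function name are all hypothetical simplifications.

```python
import numpy as np

def mask_patches(image, mask_ratio=0.9, patch=4, rng=None):
    """Hide a high proportion of non-overlapping patches (hypothetical helper).

    Returns the masked image and a boolean visibility mask.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    keep = max(1, int(round(n * (1 - mask_ratio))))  # patches left visible
    vis = np.zeros(n, dtype=bool)
    vis[rng.choice(n, size=keep, replace=False)] = True
    # Expand the patch-level visibility grid to pixel level.
    mask = np.kron(vis.reshape(gh, gw), np.ones((patch, patch), dtype=bool))
    return np.where(mask, image, 0.0), mask

def masked_dsm_loss(pred_noise, true_noise, mask):
    """Denoising score matching restricted to visible pixels: the model is
    penalized for its noise prediction only where the image was not masked."""
    vis = mask.astype(float)
    return float(((pred_noise - true_noise) ** 2 * vis).sum() / vis.sum())

masked, mask = mask_patches(np.ones((16, 16)), mask_ratio=0.9)
```

With a 90% mask ratio, only a small fraction of patches contributes to the objective, which is what lets pre-training focus on completing images from a sparse "primer".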
1 code implementation • 12 Nov 2019 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles
We propose an approach that connects recurrent networks with different orders of hidden interaction with regular grammars of different levels of complexity.
no code implementations • 15 Oct 2019 • Kaixuan Zhang, Qinglong Wang, Xue Liu, C. Lee Giles
This has motivated different research areas such as data poisoning, model improvement, and explanation of machine learning models.
1 code implementation • 7 Dec 2018 • Chen Ma, Peng Kang, Bin Wu, Qinglong Wang, Xue Liu
In particular, a word-level and a neighbor-level attention module are integrated with the autoencoder.
no code implementations • 14 Nov 2018 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles
The verification problem for neural networks asks whether a network is vulnerable to adversarial samples, or approximates the maximal scale of adversarial perturbation that the network can endure.
1 code implementation • 27 Sep 2018 • Chen Ma, Yingxue Zhang, Qinglong Wang, Xue Liu
To incorporate geographical context, we propose a neighbor-aware decoder that raises users' reachability to the similar, nearby neighbors of checked-in POIs; reachability is computed as the inner product of POI embeddings combined with a radial basis function (RBF) kernel.
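The scoring idea can be sketched in a few lines: a candidate POI's base score is its inner product with the user representation, and it is boosted by similarity to a checked-in POI, down-weighted by an RBF kernel of their geographic distance. This is a hedged simplification of the neighbor-aware decoder; the function names, the `gamma` bandwidth, and the additive combination are assumptions for illustration.

```python
import numpy as np

def rbf(dist, gamma=0.1):
    """RBF kernel on geographic distance: nearby POIs get weight near 1."""
    return float(np.exp(-gamma * dist ** 2))

def neighbor_score(user_vec, cand_emb, checked_in_emb, dist):
    """Hypothetical neighbor-aware score for a candidate POI: its own
    inner product with the user, plus similarity to a checked-in POI
    weighted by the RBF kernel of their geographic distance."""
    base = float(user_vec @ cand_emb)
    neighbor = float(checked_in_emb @ cand_emb) * rbf(dist)
    return base + neighbor
```

The effect is that a POI that is both similar to and physically near a previously visited POI becomes more "reachable" than an equally similar but distant one.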
no code implementations • 14 Feb 2018 • Qinglong Wang
In this paper, we propose a framework for predicting the amount of electrical energy stored by a large number of EVs aggregated within different city-scale regions, based on the spatio-temporal patterns of that energy.
no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Then we empirically evaluate different recurrent networks on their performance in DFA extraction across all Tomita grammars.
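The Tomita grammars are a standard benchmark of seven regular languages over {0, 1}, each recognizable by a small DFA; extraction aims to recover such automata from a trained RNN. As an illustration of what a target automaton looks like, here is Tomita grammar 4 (strings containing no "000") written out as an explicit DFA. The state numbering and table layout are my own; the grammar itself is standard.

```python
# Tomita grammar 4: binary strings with no occurrence of "000".
# States count trailing zeros seen so far; state 3 is a rejecting sink.
TOMITA4 = {
    0: {"0": 1, "1": 0},
    1: {"0": 2, "1": 0},
    2: {"0": 3, "1": 0},
    3: {"0": 3, "1": 3},
}
ACCEPTING = {0, 1, 2}

def accepts(string):
    """Run the DFA over the input and report membership."""
    state = 0
    for ch in string:
        state = TOMITA4[state][ch]
    return state in ACCEPTING
```

An extraction procedure succeeds when the automaton it recovers from the network's hidden states is behaviorally equivalent to a reference DFA like this one.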
no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Rule extraction from black-box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis.
no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.
no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong
Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.
no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu
However, after a thorough analysis of the fundamental flaw in DNNs, we find that the effectiveness of current defenses is limited and, more importantly, that they cannot provide theoretical guarantees of robustness against adversarial-sample-based attacks.