no code implementations • 13 May 2024 • Ebuka Okpala, Nishant Vishwamitra, Keyan Guo, Song Liao, Long Cheng, Hongxin Hu, Yongkai Wu, Xiaohong Yuan, Jeannette Wade, Sajad Khorsandroo
While capstone projects are an excellent example of experiential learning, the interdisciplinary nature of this emerging social cybersecurity problem makes it challenging to use them to engage non-computing students who have no prior knowledge of AI.
2 code implementations • 27 Mar 2024 • Keyan Guo, Ayush Utkarsh, Wenbo Ding, Isabelle Ondracek, Ziming Zhao, Guo Freeman, Nishant Vishwamitra, Hongxin Hu
Online user-generated content games (UGCGs) are increasingly popular among children and adolescents for social interaction and creative online entertainment.
no code implementations • 7 Jan 2024 • Keyan Guo, Alexander Hu, Jaden Mu, Ziheng Shi, Ziming Zhao, Nishant Vishwamitra, Hongxin Hu
Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech by fully utilizing the knowledge embedded in LLMs, significantly outperforming existing techniques.
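A minimal sketch of what such a reasoning (chain-of-thought style) prompt could look like; the prompt wording and the `query_llm` callable are illustrative assumptions, not the paper's exact design:

```python
def build_reasoning_prompt(post: str) -> str:
    """Ask the model to reason about context before giving a verdict."""
    return (
        "You are a content moderator. Analyze the post step by step:\n"
        "1. Identify the target group, if any.\n"
        "2. Explain the contextual meaning of any slang or coded terms.\n"
        "3. Decide whether the post attacks or demeans the target.\n"
        "End with a final verdict on its own line: HATEFUL or NOT HATEFUL.\n\n"
        f'Post: "{post}"'
    )

def classify(post: str, query_llm) -> bool:
    """Return True when the model's final verdict line is HATEFUL.

    `query_llm` is assumed to be any function mapping a prompt string to the
    model's reply, e.g. a thin wrapper around a chat-completion API.
    """
    verdict = query_llm(build_reasoning_prompt(post)).strip().splitlines()[-1]
    return "NOT HATEFUL" not in verdict.upper() and "HATEFUL" in verdict.upper()
```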
1 code implementation • 22 Dec 2023 • Nishant Vishwamitra, Keyan Guo, Farhan Tajwar Romit, Isabelle Ondracek, Long Cheng, Ziming Zhao, Hongxin Hu
HATEGUARD further achieves prompt-based zero-shot detection by automatically generating and updating detection prompts with new derogatory terms and targets drawn from new-wave samples, effectively addressing new waves of online hate.
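The sketch below illustrates the idea of keeping a detection prompt current with newly mined terms and targets; the class, template, and update logic are assumptions for illustration, not HATEGUARD's implementation:

```python
class WaveAwareDetector:
    """Zero-shot detection prompt that is regenerated as new waves appear."""

    def __init__(self):
        self.terms: set[str] = set()
        self.targets: set[str] = set()

    def update(self, new_terms, new_targets):
        # Fold derogatory terms/targets mined from new-wave samples into the prompt.
        self.terms.update(new_terms)
        self.targets.update(new_targets)

    def prompt(self, post: str) -> str:
        return (
            f"Known derogatory terms: {sorted(self.terms)}\n"
            f"Known targets: {sorted(self.targets)}\n"
            "Does the following post use these or similar terms to attack any "
            f"of the targets? Answer YES or NO.\nPost: {post}"
        )

detector = WaveAwareDetector()
detector.update(["<new coded term>"], ["<new target group>"])
print(detector.prompt("example post"))
```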
no code implementations • 2 Nov 2022 • Mingqi Li, Fei Ding, Dan Zhang, Long Cheng, Hongxin Hu, Feng Luo
In this paper, we propose Multi-level Multilingual Knowledge Distillation (MMKD), a novel method for improving multilingual language models.
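As a rough sketch of what "multi-level" distillation can mean in practice, the toy losses below align teacher and student at the embedding, hidden, and output levels; the choice of levels and all dimensions are assumptions for illustration, not MMKD's exact objectives:

```python
import torch
import torch.nn.functional as F

def mmkd_loss(t_emb, s_emb, t_hid, s_hid, t_logits, s_logits, T=2.0):
    # Embedding-level alignment (e.g., sentence vectors of parallel text).
    l_emb = F.mse_loss(s_emb, t_emb)
    # Hidden-level alignment on matched intermediate layers.
    l_hid = F.mse_loss(s_hid, t_hid)
    # Output-level distillation via temperature-softened soft targets.
    l_out = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                     F.softmax(t_logits / T, dim=-1),
                     reduction="batchmean") * T * T
    return l_emb + l_hid + l_out

B, H, C = 8, 768, 3  # batch, hidden size, classes (illustrative)
loss = mmkd_loss(torch.randn(B, H), torch.randn(B, H),
                 torch.randn(B, H), torch.randn(B, H),
                 torch.randn(B, C), torch.randn(B, C))
print(float(loss))
```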
no code implementations • 22 Dec 2021 • Nishant Vishwamitra, Hongxin Hu, Ziming Zhao, Long Cheng, Feng Luo
We then introduce a new type of multimodal adversarial attack in MUROAN, called the decoupling attack, which aims to compromise multimodal models by decoupling their fused modalities.
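For intuition only, here is a hedged PGD-style sketch that perturbs one modality to push the fused embedding away from its clean value, i.e., to "decouple" the modalities; the toy fusion model and the cosine objective are my assumptions, not MUROAN's attack:

```python
import torch
import torch.nn.functional as F

class ToyFusion(torch.nn.Module):
    """Stand-in for a multimodal model that fuses two modality embeddings."""
    def __init__(self, d=32):
        super().__init__()
        self.proj = torch.nn.Linear(2 * d, d)

    def forward(self, img, txt):
        return self.proj(torch.cat([img, txt], dim=-1))

model = ToyFusion()
img, txt = torch.randn(1, 32), torch.randn(1, 32)
with torch.no_grad():
    clean_fused = model(img, txt)  # fused representation on clean inputs

eps, bound, steps = 0.03, 0.25, 10
delta = torch.zeros_like(img, requires_grad=True)
for _ in range(steps):
    # Maximize the distance of the fused embedding from its clean value.
    loss = -F.cosine_similarity(model(img + delta, txt), clean_fused).mean()
    loss.backward()
    with torch.no_grad():
        delta += eps * delta.grad.sign()
        delta.clamp_(-bound, bound)  # keep the perturbation small
    delta.grad.zero_()

adv_img = img + delta.detach()  # adversarial image-modality input
```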
1 code implementation • 12 Jun 2021 • Dian Chen, Hongxin Hu, Qian Wang, Yinli Li, Cong Wang, Chao Shen, Qi Li
In deep learning, a typical strategy for transfer learning is to freeze the early layers of a pre-trained model and fine-tune the rest of its layers on the target domain.
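This strategy is simple to state in code; a standard PyTorch example (generic, not specific to this paper) that freezes the early layers of a pre-trained ResNet and fine-tunes the last block plus a new head:

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every layer, then unfreeze the last residual block.
for p in model.parameters():
    p.requires_grad = False
for p in model.layer4.parameters():
    p.requires_grad = True

# Replace the head for the target domain; new parameters train by default.
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Optimize only the unfrozen parameters.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
```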
1 code implementation • 1 Dec 2020 • Fei Ding, Yin Yang, Hongxin Hu, Venkat Krovi, Feng Luo
Because it is important to transfer the full knowledge from teacher to student, we introduce Multi-level Knowledge Distillation (MLKD), which effectively considers both knowledge alignment and knowledge correlation.
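A toy sketch of the two loss families named above, under my own formulation: "alignment" matches individual teacher/student features, while "correlation" matches the pairwise structure across a batch (here via Gram matrices); neither is necessarily MLKD's exact loss:

```python
import torch
import torch.nn.functional as F

def alignment_loss(f_s, f_t):
    # Match each student feature to its teacher counterpart.
    return F.mse_loss(F.normalize(f_s, dim=-1), F.normalize(f_t, dim=-1))

def correlation_loss(f_s, f_t):
    # Match pairwise similarity structure across the batch (Gram matrices).
    g_s = F.normalize(f_s, dim=-1) @ F.normalize(f_s, dim=-1).T
    g_t = F.normalize(f_t, dim=-1) @ F.normalize(f_t, dim=-1).T
    return F.mse_loss(g_s, g_t)

f_s, f_t = torch.randn(16, 128), torch.randn(16, 128)  # student/teacher features
loss = alignment_loss(f_s, f_t) + correlation_loss(f_s, f_t)
print(float(loss))
```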
2 code implementations • 9 Oct 2019 • Zili Meng, Minhu Wang, Jiasong Bai, Mingwei Xu, Hongzi Mao, Hongxin Hu
While many deep learning (DL)-based networking systems have demonstrated superior performance, the underlying Deep Neural Networks (DNNs) remain black boxes that network operators cannot interpret.
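One common way to open such a black box, used here purely as an illustration (it is not necessarily this paper's method), is to imitate the DNN's decisions with a small decision tree that an operator can read:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 4))  # e.g., per-flow features observed by the system
# Stand-in for the DNN's decisions on those inputs.
dnn_decisions = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

# Fit a shallow, human-readable imitation of the black-box policy.
tree = DecisionTreeClassifier(max_depth=3).fit(X, dnn_decisions)
print(export_text(tree, feature_names=["rtt", "loss", "rate", "queue"]))
```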
1 code implementation • 21 May 2019 • Juan Wang, Chengyang Fan, Jie Wang, Yueqiang Cheng, Yinqian Zhang, Wenhui Zhang, Peng Liu, Hongxin Hu
In this paper, we present SvTPM, a secure and efficient software-based vTPM implementation built on a hardware-rooted Trusted Execution Environment (TEE), providing whole-life-cycle protection of vTPMs in the cloud.
no code implementations • 27 Mar 2019 • Joseph Clements, Yuzhe Yang, Ankur Sharma, Hongxin Hu, Yingjie Lao
Recent advances in artificial intelligence and the increasing need for powerful defensive measures in network security have led to the adoption of deep learning approaches in network intrusion detection systems.
no code implementations • 27 Sep 2018 • Mhafuzul Islam, Mashrur Chowdhury, Hongda Li, Hongxin Hu
Vision-based navigation of autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems, in which the controller takes input from sensors/detectors such as cameras and produces a control output, such as a steering wheel angle, to navigate the vehicle safely through roadway traffic.
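A minimal sketch of the described pipeline (one camera frame in, one steering angle out); the layer sizes below are illustrative assumptions, not the paper's network:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Tiny camera-to-steering-angle regressor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(48, 1)  # one continuous output: steering angle

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))

angle = SteeringNet()(torch.randn(1, 3, 66, 200))  # one RGB camera frame
print(float(angle))
```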
no code implementations • ICLR 2018 • Xiang Zhang, Nishant Vishwamitra, Hongxin Hu, Feng Luo
The number of convolution layers and parameters grows only linearly in Crescendo blocks.
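To make the linear growth concrete, here is a hedged sketch of a block whose parallel branches gain one convolution layer at a time, so depth and parameter count grow linearly with the number of branches; fusing the branches by averaging is my assumption for illustration:

```python
import torch
import torch.nn as nn

class CrescendoBlock(nn.Module):
    """Parallel branches of depth 1, 2, ..., branches conv layers."""
    def __init__(self, channels=32, branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(*[
                nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                              nn.BatchNorm2d(channels), nn.ReLU())
                for _ in range(depth)
            ])
            for depth in range(1, branches + 1)
        )

    def forward(self, x):
        # Average the branch outputs; spatial size and channels are preserved.
        return torch.stack([b(x) for b in self.branches]).mean(0)

y = CrescendoBlock()(torch.randn(1, 32, 8, 8))
print(y.shape)
```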