1 code implementation • 23 Jun 2024 • Erin J. Talvitie, Zilei Shao, Huiying Li, Jinghan Hu, Jacob Boerma, Rory Zhao, Xintong Wang
In model-based reinforcement learning, simulated experiences from the learned model are often treated as equivalent to experience from the real environment.
Model-based Reinforcement Learning
reinforcement-learning
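The premise summarized above — simulated transitions from a learned model being treated as equivalent to real experience — is the classic Dyna pattern. A minimal tabular Dyna-Q sketch (all names, parameters, and the environment interface are illustrative assumptions, not taken from the paper):

```python
import random
from collections import defaultdict

# Hedged Dyna-Q sketch: real transitions are stored in a learned model, and
# simulated transitions replayed from that model are treated as if they were
# real experience during Q-learning updates.
def dyna_q(env_step, start_state, actions, episodes=50, planning_steps=5,
           alpha=0.5, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    model = {}  # (state, action) -> (reward, next_state, done)

    def update(s, a, r, s2, done):
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            r, s2, done = env_step(s, a)        # real experience
            update(s, a, r, s2, done)
            model[(s, a)] = (r, s2, done)
            for _ in range(planning_steps):     # simulated experience from the model
                ps, pa = random.choice(list(model))
                pr, ps2, pdone = model[(ps, pa)]
                update(ps, pa, pr, ps2, pdone)  # same update rule as real data
            s = s2
    return Q
```

Note that the planning loop applies the identical update rule to model-generated transitions as to real ones — exactly the equivalence assumption the paper examines.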
no code implementations • 3 Apr 2024 • Hailong Jin, Huiying Li
This problem is challenging because the keypoints of small objects lie close together, which causes their respective features to be fused.
no code implementations • 18 Feb 2024 • Xinbang Dai, Huiying Li, Guilin Qi
While the research community has focused on Knowledge Graph Question Answering (KGQA), answering questions that incorporate spatio-temporal information based on STKGs remains largely unexplored.
no code implementations • 8 Jun 2022 • Huiying Li, Arjun Nitin Bhagoji, Yuxin Chen, Haitao Zheng, Ben Y. Zhao
Existing research on training-time attacks against deep neural networks (DNNs), such as backdoors, largely assumes that models are static once trained and that hidden backdoors trained into models remain active indefinitely.
2 code implementations • 1 Nov 2021 • Yongrui Chen, Huiying Li, Guilin Qi, Tianxing Wu, Tenggou Wang
The high-level decoding generates an AQG as a constraint to prune the search space and reduce the locally ambiguous query graph.
Ranked #1 on Knowledge Base Question Answering on LC-QuAD 1.0
1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li
Our approach remains competitive even when compared with larger pre-trained models and a tabular-specific pre-trained model.
1 code implementation • 8 Sep 2021 • Yongrui Chen, Huiying Li, Yuncheng Hua, Guilin Qi
However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries.
Ranked #2 on Knowledge Base Question Answering on LC-QuAD 1.0
1 code implementation • 24 Jun 2020 • Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Hai-Tao Zheng, Ben Y. Zhao
In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and inspecting returns.
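The query-based attack described above can be illustrated with a minimal score-based random-search loop: the attacker never sees model weights, only the scores returned for submitted inputs. The `query_fn` interface, `eps` budget, and search strategy here are illustrative assumptions, not the paper's method:

```python
import random

# Hedged sketch of a query-based black-box attack: repeatedly query the model,
# keep random perturbations that lower the true class's score, and stop once
# the predicted label flips.
def random_search_attack(query_fn, x, true_label, eps=0.3, max_queries=500, seed=0):
    rng = random.Random(seed)
    best = list(x)
    for _ in range(max_queries):
        scores = query_fn(best)                      # one black-box query
        if scores.index(max(scores)) != true_label:
            return best                              # misclassified: attack succeeded
        candidate = [min(1.0, max(0.0, xi + rng.uniform(-eps, eps))) for xi in x]
        # keep the candidate only if it lowers the true-class score
        if query_fn(candidate)[true_label] < scores[true_label]:
            best = candidate
    return best
```

The only information consumed is the returned score vector, which is what makes such attacks feasible against deployed models behind an API.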
1 code implementation • 19 Feb 2020 • Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Hai-Tao Zheng, Ben Y. Zhao
In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models.
1 code implementation • 2 Oct 2019 • Huiying Li, Emily Wenger, Shawn Shan, Ben Y. Zhao, Haitao Zheng
We empirically show that our proposed watermarks achieve piracy resistance and other watermark properties, over a wide range of tasks and models.
no code implementations • 24 May 2019 • Yuanshun Yao, Huiying Li, Hai-Tao Zheng, Ben Y. Zhao
Recent work has proposed the concept of backdoor attacks on deep neural networks (DNNs), where misbehaviors are hidden inside "normal" models, only to be triggered by very specific inputs.
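The backdoor setting described above is typically realized by data poisoning: a small trigger pattern is stamped onto a fraction of training images, which are relabeled to the attacker's target class, so the trained model behaves normally on clean inputs but misbehaves whenever the trigger appears. A minimal sketch (trigger shape, fraction, and all names are illustrative assumptions):

```python
import random

# Hypothetical backdoor-poisoning sketch: stamp a 2x2 white trigger into the
# bottom-right corner of a random subset of images and relabel them to the
# attacker's target class. Returns poisoned copies plus the poisoned indices.
def poison_dataset(images, labels, target_label, poison_frac=0.1, seed=0):
    rng = random.Random(seed)
    images = [[row[:] for row in img] for img in images]   # deep-copy originals
    labels = labels[:]
    poisoned = rng.sample(range(len(images)), int(len(images) * poison_frac))
    for i in poisoned:
        for r in (-2, -1):
            for c in (-2, -1):
                images[i][r][c] = 1.0      # the trigger pattern
        labels[i] = target_label
    return images, labels, poisoned
```

Because only a small fraction of examples carry the trigger, clean-data accuracy is barely affected — which is what makes such "normal-looking" models hard to audit.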
1 code implementation • IEEE Symposium on Security and Privacy (SP) 2019 • Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, Ben Y. Zhao
We identify multiple mitigation techniques via input filters, neuron pruning and unlearning.
no code implementations • journal 2017 • Xionggao Zou, Yueping Feng, Huiying Li, Shuyu Jiang
Research on imbalanced datasets, one of the most popular topics in machine learning, has received increasing attention in recent years.