Search Results for author: Danding Wang

Found 6 papers, 4 papers with code

Progressive Open Space Expansion for Open-Set Model Attribution

1 code implementation • 13 Mar 2023 • Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang

In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.

Open Set Learning

Online Misinformation Video Detection: A Survey

2 code implementations • 7 Feb 2023 • Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, Jintao Li

As information consumption via online video streaming becomes increasingly popular, misinformation videos pose a new threat to the health of the online information ecosystem.

Misinformation • Recommendation Systems +1

Improving Fake News Detection of Influential Domain via Domain- and Instance-Level Transfer

no code implementations • COLING 2022 • Qiong Nan, Danding Wang, Yongchun Zhu, Qiang Sheng, Yuhui Shi, Juan Cao, Jintao Li

To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which improves performance on specific target domains.

Fake News Detection • Language Modelling +2

Generalizing to the Future: Mitigating Entity Bias in Fake News Detection

1 code implementation • 20 Apr 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, Fuzhen Zhuang

In this paper, we propose an entity debiasing framework (ENDEF) which generalizes fake news detection models to future data by mitigating entity bias from a cause-effect perspective.
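This cause-effect framing is commonly realized as a two-branch counterfactual setup: an auxiliary entity-only branch absorbs entity shortcuts during training and is dropped at inference, so predictions rely on content rather than memorized entities. The sketch below illustrates that general idea only; the module names, architecture, and mixing weight are illustrative assumptions and are not taken from the released ENDEF code.

```python
import torch
import torch.nn as nn

class DebiasedFakeNewsModel(nn.Module):
    """Two-branch classifier: a content branch over the full post and an
    auxiliary entity-only branch whose shortcut signal is dropped at inference."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.content_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.entity_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.content_head = nn.Linear(hidden_dim, 1)
        self.entity_head = nn.Linear(hidden_dim, 1)

    def forward(self, post_ids, entity_ids):
        _, h_post = self.content_encoder(self.embedding(post_ids))
        _, h_ent = self.entity_encoder(self.embedding(entity_ids))
        return self.content_head(h_post[-1]), self.entity_head(h_ent[-1])

def training_loss(model, post_ids, entity_ids, labels, alpha=0.2):
    # Train on a fused prediction so the entity branch soaks up entity shortcuts;
    # the auxiliary term keeps the entity branch predictive on its own.
    content_logit, entity_logit = model(post_ids, entity_ids)
    fused = (1 - alpha) * content_logit + alpha * entity_logit
    bce = nn.BCEWithLogitsLoss()
    return bce(fused.squeeze(-1), labels) + alpha * bce(entity_logit.squeeze(-1), labels)

@torch.no_grad()
def predict(model, post_ids, entity_ids):
    # At inference only the content branch is used, removing the entity-induced bias.
    content_logit, _ = model(post_ids, entity_ids)
    return torch.sigmoid(content_logit).squeeze(-1)
```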

Fake News Detection

Zoom Out and Observe: News Environment Perception for Fake News Detection

1 code implementation ACL 2022 Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, Yongchun Zhu

To differentiate fake news posts from real ones, existing methods observe the language patterns of the news post and "zoom in" to verify its content against knowledge sources or check its readers' replies.

Fake News Detection • Misinformation

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations

no code implementations • 23 Jan 2021 • Danding Wang, Wencan Zhang, Brian Y. Lim

Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.
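As a quick illustration of feature attribution in general (not the paper's uncertainty-aware explanation method), the sketch below computes gradient-times-input scores for a toy PyTorch model; the model and input values are made up for the example.

```python
import torch
import torch.nn as nn

# Toy model standing in for the trained model being explained.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

def gradient_x_input(model, x):
    """Attribute the output to each input feature via gradient * input,
    one of the simplest feature-attribution methods."""
    x = x.clone().detach().requires_grad_(True)
    output = model(x).sum()        # scalar output so backward() gives per-feature grads
    output.backward()
    return (x.grad * x).detach()   # larger magnitude = more influential feature

features = torch.tensor([[0.5, -1.2, 3.0, 0.1]])
print(gradient_x_input(model, features))
```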

BIG-bench Machine Learning • Interpretable Machine Learning
