no code implementations • 16 Jan 2024 • Haoxin Liu, Wenli Zhang, Jiaheng Xie, Buomsoo Kim, Zhu Zhang, Yidong Chai
On the depression detection task, our method (F1 = 0.975~0.978) significantly outperforms traditional supervised learning paradigms, including feature engineering (F1 = 0.760) and architecture engineering (F1 = 0.756).
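For readers unfamiliar with the metric quoted above, the F1 score is the harmonic mean of precision and recall. A minimal sketch of its computation (this is an illustration, not the authors' code; the toy labels are invented):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 for a binary classification task: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels for a binary depression-detection task (1 = depressed)
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
print(f1_score(y_true, y_pred))  # → 0.75
```

An F1 near 0.975, as reported above, means both precision and recall are close to 0.98 on that task.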
no code implementations • 11 Jan 2024 • Jiaheng Xie, Ruicheng Liang, Yidong Chai, Yang Liu, Daniel Zeng
To prevent widespread consequences, platforms are eager to predict these videos' impact on viewers' mental health.
1 code implementation • 6 Jun 2023 • Shuang Geng, Wenli Zhang, Jiaheng Xie, Gemin Liang, Ben Niu
In virtual health, the information asymmetries inherent in its delivery format, between different stakeholders, and across different healthcare delivery systems hinder the performance of existing predictive methods.
no code implementations • 18 May 2023 • Junwei Kuang, Jiaheng Xie, Zhijun Yan
This study contributes to IS literature with a novel interpretable deep learning model for depression detection in social media.
no code implementations • 6 Mar 2023 • Wenli Zhang, Jiaheng Xie, Zhu Zhang, Xiang Liu
Depression is a common disease worldwide.
no code implementations • 8 Nov 2022 • Jiaheng Xie, Xiaohang Zhao, Xiang Liu, Xiao Fang
To incorporate human expertise into decision-making, safeguard trust in this high-stakes prediction, and ensure algorithmic transparency, we develop an interpretable deep learning model: the Temporal Prototype Network (TempPNet).
no code implementations • 21 Dec 2020 • Jiaheng Xie, Xiao Liu
Although deep learning champions viewership prediction, it lacks interpretability, which is fundamental to increasing the adoption of predictive models and prescribing measures to improve viewership.