no code implementations • ACL 2021 • Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood.
no code implementations • 4 Dec 2020 • Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas Allen, Louis-Philippe Morency
Mental health conditions remain under-diagnosed even in countries with common access to advanced medical care.
4 code implementations • 6 Jan 2020 • Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B. Allen, Randy P. Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
To this end, we propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
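The split described above, per-device local representations plus a single averaged global model, can be sketched as follows. This is a minimal toy illustration of the general idea, not the paper's actual algorithm: the encoder matrices, squared loss, and single-step updates are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each device keeps a private encoder mapping raw features to a
# compact representation; only the global head's weights are averaged
# across devices (FedAvg-style), while encoders never leave the device.
n_devices, n_raw, n_emb = 3, 8, 4
local_encoders = [rng.normal(size=(n_raw, n_emb)) for _ in range(n_devices)]
global_w = np.zeros(n_emb)

def local_update(encoder, w, x, y, lr=0.1):
    """One gradient step on a squared loss; returns (new_encoder, new_w)."""
    z = x @ encoder                                 # compact local representation
    err = z @ w - y                                 # prediction error
    w_new = w - lr * err * z                        # update global head
    enc_new = encoder - lr * err * np.outer(x, w)   # update local encoder
    return enc_new, w_new

for _ in range(5):                                  # communication rounds
    w_updates = []
    for d in range(n_devices):
        x, y = rng.normal(size=n_raw), 1.0          # one local sample per round
        local_encoders[d], w_d = local_update(local_encoders[d], global_w, x, y)
        w_updates.append(w_d)
    global_w = np.mean(w_updates, axis=0)           # server averages global part only
```

Only `global_w` is communicated each round; the encoders stay device-local, which is what keeps the shared model compact.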
no code implementations • WS 2019 • Victor Ruiz, Lingyun Shi, Wei Quan, Neal Ryan, Candice Biernesser, David Brent, Rich Tsui
The NB model had the best performance in two additional binary-classification tasks, i.e., no risk vs. flagged risk (any risk level other than no risk) with F1 score 0.836, and no or low risk vs. urgent risk (moderate or severe risk) with F1 score 0.736.
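For reference, an F1 score such as the reported 0.836 combines precision and recall on binary predictions. A minimal sketch with toy labels (not the paper's data), where any non-zero risk level is collapsed to the positive class as in the "no risk vs. flagged risk" task:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy example: 1 = flagged risk (any level above "no risk"), 0 = no risk.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(f1_score(y_true, y_pred))  # 0.8 (precision 1.0, recall 2/3)
```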