Search Results for author: Yining Hua

Found 15 papers, 12 papers with code

Large Language Models in Mental Health Care: a Scoping Review

no code implementations • 1 Jan 2024 • Yining Hua, Fenglin Liu, Kailai Yang, Zehan Li, Yi-han Sheu, Peilin Zhou, Lauren V. Moran, Sophia Ananiadou, Andrew Beam

Objective: The growing use of large language models (LLMs) stimulates a need for a comprehensive review of their applications and outcomes in mental health care contexts.

A Survey of Large Language Models in Medicine: Principles, Applications, and Challenges

1 code implementation • 9 Nov 2023 • Hongjian Zhou, Fenglin Liu, Boyang Gu, Xinyu Zou, Jinfa Huang, Jinge Wu, Yiru Li, Sam S. Chen, Peilin Zhou, Junling Liu, Yining Hua, Chengfeng Mao, Chenyu You, Xian Wu, Yefeng Zheng, Lei Clifton, Zheng Li, Jiebo Luo, David A. Clifton

LLMs that assist physicians with patient care are emerging as a promising research direction in both artificial intelligence and clinical medicine.

Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study

no code implementations • 7 Nov 2023 • Peilin Zhou, Meng Cao, You-Liang Huang, Qichen Ye, Peiyan Zhang, Junling Liu, Yueqi Xie, Yining Hua, Jaeboum Kim

Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored.

General Knowledge Reading Comprehension

Continuous Training and Fine-tuning for Domain-Specific Language Models in Medical Question Answering

no code implementations • 1 Nov 2023 • Zhen Guo, Yining Hua

This work demonstrates a method using continuous training and instruction fine-tuning to rapidly adapt Llama 2 base models to the Chinese medical domain.
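The instruction fine-tuning stage described above relies on turning question–answer pairs into supervised training prompts. A minimal sketch of that step, using a simple Alpaca-style template (the template and example content are assumptions for illustration, not taken from the paper):

```python
def format_instruction_example(instruction: str, response: str) -> str:
    """Format one supervised pair for instruction fine-tuning.

    Uses a generic "### Instruction / ### Response" template; the actual
    template used to adapt Llama 2 in the paper may differ.
    """
    return (
        "### Instruction:\n" + instruction.strip() + "\n\n"
        "### Response:\n" + response.strip()
    )

# Hypothetical medical QA pair, formatted for the fine-tuning corpus:
sample = format_instruction_example(
    "What are common symptoms of iron deficiency?",
    "Common symptoms include fatigue, pallor, and shortness of breath.",
)
```

Continuous (continued) pretraining on raw domain text would precede this step; the formatted pairs are then used for the supervised fine-tuning pass.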

Question Answering

Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare

1 code implementation • 27 Oct 2023 • Junling Liu, ZiMing Wang, Qichen Ye, Dading Chong, Peilin Zhou, Yining Hua

This method enhances the model's ability to generate medical captions and answer complex medical queries.

Language Modelling

Streamlining Social Media Information Extraction for Public Health Research with Deep Learning

2 code implementations • 28 Jun 2023 • Yining Hua, Shixu Lin, Minghui Li, Yujie Zhang, Dinah Foer, Siwen Wang, Peilin Zhou, Li Zhou, Jie Yang

Conclusion: This study advances public health research by implementing a novel, systematic pipeline for curating symptom lexicons from social media data.
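The core of such a lexicon-curation pipeline is extracting symptom mentions from posts and keeping terms that recur above a frequency floor. A minimal sketch, using hardcoded regex patterns as a stand-in for the paper's deep learning extractor (the patterns, threshold, and posts below are illustrative assumptions):

```python
import re
from collections import Counter

# Hypothetical seed patterns standing in for a trained extraction model;
# the paper's actual pipeline uses deep learning, not regexes.
SYMPTOM_PATTERNS = [r"loss of (?:taste|smell)", r"brain fog", r"fatigue", r"fever"]

def extract_symptoms(post: str) -> list:
    """Return lowercase symptom mentions found in one social media post."""
    found = []
    for pat in SYMPTOM_PATTERNS:
        found += re.findall(pat, post.lower())
    return found

def curate_lexicon(posts: list, min_count: int = 2) -> dict:
    """Aggregate mentions across posts; keep terms above a frequency floor."""
    counts = Counter(term for p in posts for term in extract_symptoms(p))
    return {t: c for t, c in counts.items() if c >= min_count}

posts = [
    "week 3 and still dealing with brain fog and fatigue",
    "the fatigue is unreal, plus loss of smell",
    "mild fever yesterday, brain fog today",
]
lexicon = curate_lexicon(posts)
# only 'brain fog' and 'fatigue' recur often enough to enter the lexicon
```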

Information Retrieval • named-entity-recognition • +3

Rethinking Multi-Interest Learning for Candidate Matching in Recommender Systems

1 code implementation • 28 Feb 2023 • Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jaeboum Kim, Fangzhao Wu, Sunghun Kim

To address these issues, we propose the REMI framework, consisting of an Interest-aware Hard Negative mining strategy (IHN) and a Routing Regularization (RR) method.
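The idea behind interest-aware hard negative mining is to sample negatives in proportion to how strongly they score against the user's interest representation, so harder negatives are seen more often. A toy sketch of that sampling distribution (an illustrative re-implementation; REMI's exact formulation and temperature may differ):

```python
import math

def ihn_weights(neg_scores, beta=1.0):
    """Softmax-style sampling weights over candidate negatives.

    Negatives that score higher against the user's interest representation
    receive larger sampling probability; beta controls the sharpness.
    """
    exps = [math.exp(beta * s) for s in neg_scores]
    z = sum(exps)
    return [e / z for e in exps]

# Three candidate negatives with increasing similarity to the user interest:
weights = ihn_weights([0.1, 0.5, 0.9])
# the hardest negative (score 0.9) gets the largest sampling weight
```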

Recommendation Systems

Exploring Social Media for Early Detection of Depression in COVID-19 Patients

1 code implementation • 23 Feb 2023 • Jiageng Wu, Xian Wu, Yining Hua, Shixu Lin, Yefeng Zheng, Jie Yang

Secondly, we conducted an extensive analysis of this dataset to investigate the characteristics of COVID-19 patients with a higher risk of depression.

Knowledge Distillation

GreenPLM: Cross-Lingual Transfer of Monolingual Pre-Trained Language Models at Almost No Cost

1 code implementation • 13 Nov 2022 • Qingcheng Zeng, Lucas Garay, Peilin Zhou, Dading Chong, Yining Hua, Jiageng Wu, Yikang Pan, Han Zhou, Rob Voigt, Jie Yang

Large pre-trained models have revolutionized natural language processing (NLP) research and applications, but high training costs and limited data resources have prevented their benefits from being shared equally amongst speakers of all the world's languages.

Cross-Lingual Transfer

Equivariant Contrastive Learning for Sequential Recommendation

1 code implementation • 10 Nov 2022 • Peilin Zhou, Jingqi Gao, Yueqi Xie, Qichen Ye, Yining Hua, Jae Boum Kim, Shoujin Wang, Sunghun Kim

Therefore, we propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with great discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking).
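The sensitive/insensitive split above can be made concrete with a toy objective: pull representations together under mild augmentation, push them apart under invasive augmentation. A minimal sketch in that spirit (this is not the paper's actual loss, which uses a discriminator; the margin and vectors are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def ecl_style_loss(anchor, mild_view, invasive_view, margin=0.5):
    """Toy objective: stay close under mild augmentation (invariance),
    move apart under invasive augmentation (sensitivity)."""
    invariance = 1.0 - cosine(anchor, mild_view)                     # pull together
    sensitivity = max(0.0, cosine(anchor, invasive_view) - margin)   # push apart
    return invariance + sensitivity

# A well-behaved encoder: mild view identical, invasive view orthogonal.
loss_aligned = ecl_style_loss([1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
# A degenerate encoder that ignores invasive augmentation is penalized.
loss_degenerate = ecl_style_loss([1.0, 0.0], [1.0, 0.0], [1.0, 0.0])
```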

Contrastive Learning • Data Augmentation • +1

METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets

1 code implementation • 28 Sep 2022 • Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang

To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets).

Epidemiology • named-entity-recognition • +3

Using Twitter Data to Understand Public Perceptions of Approved versus Off-label Use for COVID-19-related Medications

1 code implementation • 29 Jun 2022 • Yining Hua, Hang Jiang, Shixu Lin, Jie Yang, Joseph M. Plasek, David W. Bates, Li Zhou

Time-trend analysis indicated that Hydroxychloroquine and Ivermectin were discussed more than Molnupiravir and Remdesivir, particularly during COVID-19 surges.
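A time-trend analysis like the one described reduces to bucketing drug mentions by time period. A minimal sketch, with a hypothetical handful of dated tweets standing in for the real Twitter corpus:

```python
from collections import Counter
from datetime import date

# The four medications compared in the paper's time-trend analysis.
DRUGS = ["hydroxychloroquine", "ivermectin", "molnupiravir", "remdesivir"]

def monthly_mentions(tweets):
    """Count drug mentions per (year-month, drug) bucket.

    `tweets` is a list of (date, text) pairs; matching is simple substring
    search, a stand-in for the paper's actual entity extraction.
    """
    counts = Counter()
    for d, text in tweets:
        low = text.lower()
        for drug in DRUGS:
            if drug in low:
                counts[(d.strftime("%Y-%m"), drug)] += 1
    return counts

# Hypothetical data for illustration only:
tweets = [
    (date(2021, 8, 2), "ivermectin is trending again"),
    (date(2021, 8, 15), "doctor mentioned remdesivir"),
    (date(2021, 8, 20), "more ivermectin chatter during this surge"),
]
trend = monthly_mentions(tweets)
```

Plotting these per-month counts for each drug over the study window would reproduce the kind of trend comparison the paper reports.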

