Search Results for author: Hongxia Xu

Found 10 papers, 4 papers with code

MambaCapsule: Towards Transparent Cardiac Disease Diagnosis with Electrocardiography Using Mamba Capsule Network

no code implementations • 30 Jul 2024 • Yinlong Xu, Xiaoqiang Liu, Zitai Kong, Yixuan Wu, Yue Wang, Yingzhou Lu, Honghao Gao, Jian Wu, Hongxia Xu

Cardiac arrhythmia, a condition characterized by irregular heartbeats, often serves as an early indication of various heart ailments.

Multi-Modal CLIP-Informed Protein Editing

no code implementations • 27 Jul 2024 • Mingze Yin, Hanjing Zhou, Yiheng Zhu, Miao Lin, Yixuan Wu, Jialu Wu, Hongxia Xu, Chang-Yu Hsieh, Tingjun Hou, Jintai Chen, Jian Wu

Proteins govern most biological functions essential for life, but achieving controllable protein discovery and optimization remains challenging.

Attribute Contrastive Learning

TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets

1 code implementation • 30 Jun 2024 • Jintai Chen, Yaojun Hu, Yue Wang, Yingzhou Lu, Xu Cao, Miao Lin, Hongxia Xu, Jian Wu, Cao Xiao, Jimeng Sun, Lucas Glass, Kexin Huang, Marinka Zitnik, Tianfan Fu

Clinical trials are pivotal for developing new medical treatments, yet they carry risks such as patient mortality, adverse events, and enrollment failure that can waste immense effort spanning more than a decade.

Multi-rater Prompting for Ambiguous Medical Image Segmentation

no code implementations • 11 Apr 2024 • Jinhong Wang, Yi Cheng, Jintai Chen, Hongxia Xu, Danny Chen, Jian Wu

In this paper, we tackle two challenges that arise in multi-rater annotations for medical image segmentation (called ambiguous medical image segmentation): (1) how to train a deep learning model when a group of raters produces a set of diverse but plausible annotations, and (2) how to fine-tune the model efficiently when computational resources are not available for re-training the entire model on a different dataset domain (a generic training sketch follows below).

Image Segmentation • Medical Image Segmentation • +2
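To make the first challenge concrete, here is a minimal PyTorch sketch of one common way to train against diverse multi-rater masks: sampling a random rater's annotation at each step so the model sees the full distribution of plausible labels. This is a generic illustration, not the multi-rater prompting method proposed in the paper; `model`, `optimizer`, and the mask format are assumptions.

```python
# Generic multi-rater training step: fit a randomly sampled rater's mask.
# NOT the paper's method -- just a common baseline for ambiguous labels.
import random
import torch
import torch.nn.functional as F

def multi_rater_step(model, optimizer, image, rater_masks):
    """image: (B, C, H, W); rater_masks: list of (B, 1, H, W) binary
    tensors, one per rater, all annotating the same batch."""
    optimizer.zero_grad()
    logits = model(image)                 # (B, 1, H, W) segmentation logits
    target = random.choice(rater_masks)   # sample one plausible annotation
    loss = F.binary_cross_entropy_with_logits(logits, target.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```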

Making Pre-trained Language Models Great on Tabular Prediction

1 code implementation • 4 Mar 2024 • Jiahuan Yan, Bo Zheng, Hongxia Xu, Yiheng Zhu, Danny Z. Chen, Jimeng Sun, Jian Wu, Jintai Chen

By condensing knowledge from diverse domains, language models (LMs) can comprehend feature names from various tables and thus potentially serve as versatile learners that transfer knowledge across distinct tables and diverse prediction tasks. However, their discrete text representation space is inherently incompatible with the numerical feature values in tables.
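To make that incompatibility concrete, here is a minimal sketch of the naive baseline the abstract alludes to: serializing a table row into text for a pre-trained LM. The `bert-base-uncased` tokenizer and the column names are illustrative assumptions, not details from the paper; the point is how a numeric value gets fragmented into sub-tokens.

```python
# Naive row-to-text serialization for an LM. Tokenizers split numbers
# like 137.5 into arbitrary sub-tokens, illustrating the mismatch between
# a discrete text space and continuous feature values. Generic sketch,
# not the paper's proposed solution.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

row = {"age": 63, "systolic_bp": 137.5, "smoker": "yes"}
text = " , ".join(f"{name} is {value}" for name, value in row.items())

print(text)                         # "age is 63 , systolic_bp is 137.5 , smoker is yes"
print(tokenizer.tokenize("137.5"))  # e.g. ['137', '.', '5'] -- the value is fragmented
```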

Unraveling Babel: Exploring Multilingual Activation Patterns of LLMs and Their Applications

no code implementations • 26 Feb 2024 • Weize Liu, Yinlong Xu, Hongxia Xu, Jintai Chen, Xuming Hu, Jian Wu

Recently, large language models (LLMs) have achieved tremendous breakthroughs in the field of NLP, yet their internal activity when processing different languages remains poorly understood.
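One standard way to inspect such internal activity is to attach forward hooks and record per-layer activations for prompts in different languages. The sketch below does this for a GPT-2-style model; the model name, the layer path (`model.h[i].mlp`), and the pooling are assumptions for illustration, not the probing protocol of the paper.

```python
# Record mean MLP activations per layer for prompts in different languages,
# then compare their magnitudes. Generic probing sketch for a GPT-2-style
# model; layer naming is architecture-specific.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM with .h blocks works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

activations = {}

def make_hook(layer_idx):
    def hook(module, inputs, output):
        # average over batch and token dimensions -> one vector per layer
        activations[layer_idx] = output.detach().mean(dim=(0, 1))
    return hook

for i, block in enumerate(model.h):
    block.mlp.register_forward_hook(make_hook(i))

for lang, prompt in {"en": "The weather is nice.", "de": "Das Wetter ist schön."}.items():
    with torch.no_grad():
        model(**tokenizer(prompt, return_tensors="pt"))
    print(lang, {i: round(a.norm().item(), 2) for i, a in activations.items()})
```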

Generative AI for Controllable Protein Sequence Design: A Survey

no code implementations • 16 Feb 2024 • Yiheng Zhu, Zitai Kong, Jialu Wu, Weize Liu, Yuqiang Han, Mingze Yin, Hongxia Xu, Chang-Yu Hsieh, Tingjun Hou

To set the stage, we first outline the foundational tasks in protein sequence design in terms of the constraints involved and present key generative models and optimization algorithms.

Drug Discovery • Protein Design

Multimodal Clinical Trial Outcome Prediction with Large Language Models

1 code implementation • 9 Feb 2024 • Wenhao Zheng, Dongsheng Peng, Hongxia Xu, Yun Li, Hongtu Zhu, Tianfan Fu, Huaxiu Yao

To address these issues, we propose a multimodal mixture-of-experts (LIFTED) approach for clinical trial outcome prediction.
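As a rough picture of the mixture-of-experts idea, the sketch below fuses per-modality embeddings with one expert network per modality and a learned softmax gate. It is a generic illustration only; the actual LIFTED architecture (its modality unification, transformers, and routing) is not reproduced here.

```python
# Generic multimodal mixture-of-experts fusion: one expert per modality,
# a gate that weights expert outputs per sample. Illustrative sketch,
# not the LIFTED model.
import torch
import torch.nn as nn

class ModalityMoE(nn.Module):
    def __init__(self, dim, n_modalities):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_modalities)
        )
        self.gate = nn.Linear(dim * n_modalities, n_modalities)

    def forward(self, modality_embs):  # list of (B, dim) tensors
        expert_out = torch.stack(
            [exp(x) for exp, x in zip(self.experts, modality_embs)], dim=1
        )                              # (B, n_modalities, dim)
        weights = torch.softmax(self.gate(torch.cat(modality_embs, dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)  # fused (B, dim)

# Example: fuse three 64-dim modality embeddings for a batch of 8 trials.
fused = ModalityMoE(dim=64, n_modalities=3)([torch.randn(8, 64) for _ in range(3)])
```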

Mind's Mirror: Distilling Self-Evaluation Capability and Comprehensive Thinking from Large Language Models

1 code implementation • 15 Nov 2023 • Weize Liu, Guocong Li, Kai Zhang, Bang Du, Qiyuan Chen, Xuming Hu, Hongxia Xu, Jintai Chen, Jian Wu

While techniques such as chain-of-thought (CoT) distillation have shown promise in distilling LLMs into small language models (SLMs), there is a risk that distilled SLMs may still inherit flawed reasoning and hallucinations from LLMs (a sketch of standard CoT distillation follows below).

Transfer Learning
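For context, here is a minimal sketch of how standard CoT distillation data is typically constructed: the teacher LLM's rationale and answer become the supervised fine-tuning target for the SLM. All names and the prompt format are hypothetical; the paper's contribution of additionally distilling self-evaluation capability is not shown.

```python
# Standard CoT distillation data prep: pair a question prompt with the
# teacher's rationale + answer as the SLM's fine-tuning target.
# Hypothetical format; not the paper's full Mind's Mirror pipeline.
def build_distillation_example(question, teacher_rationale, teacher_answer):
    """Format one (input, target) pair for supervised fine-tuning of an SLM."""
    prompt = f"Question: {question}\nLet's think step by step."
    target = f"{teacher_rationale}\nTherefore, the answer is {teacher_answer}."
    return {"input": prompt, "target": target}

example = build_distillation_example(
    question="If a train travels 60 km in 1.5 hours, what is its speed?",
    teacher_rationale="Speed is distance divided by time: 60 / 1.5 = 40 km/h.",
    teacher_answer="40 km/h",
)
```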
