Search Results for author: Mu Wei

Found 10 papers, 3 papers with code

X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains

no code implementations • 6 May 2025 • Qianchu Liu, Sheng Zhang, Guanghui Qin, Timothy Ossowski, Yu Gu, Ying Jin, Sid Kiblawi, Sam Preston, Mu Wei, Paul Vozila, Tristan Naumann, Hoifung Poon

Experiments show that X-Reasoner successfully transfers reasoning capabilities to both multimodal and out-of-domain settings, outperforming existing state-of-the-art models trained with in-domain and multimodal data across various general and medical benchmarks.

Multimodal Reasoning

Boltzmann Attention Sampling for Image Analysis with Small Objects

no code implementations • 4 Mar 2025 • Theodore Zhao, Sid Kiblawi, Naoto Usuyama, Ho Hin Lee, Sam Preston, Hoifung Poon, Mu Wei

Detecting and segmenting small objects, such as lung nodules and tumor lesions, remains a critical challenge in image analysis.
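The snippet above gives only the motivation, so the mechanics here go by the method's name alone: "Boltzmann attention sampling" presumably means drawing attention locations from a temperature-scaled softmax over patch scores, so that faint, small structures are not starved of attention early on. A minimal sketch of that idea, with the scores, sample count, and annealing schedule all assumed for illustration:

```python
import torch

def boltzmann_sample(scores: torch.Tensor, k: int, temperature: float) -> torch.Tensor:
    """Draw k distinct attention locations with probability proportional
    to exp(score / T). High T spreads attention across the whole image,
    which keeps tiny, low-scoring objects in play; low T concentrates
    on the strongest peaks.
    """
    probs = torch.softmax(scores / temperature, dim=-1)
    # Sample without replacement so the k locations are distinct.
    return torch.multinomial(probs, num_samples=k, replacement=False)

# Toy usage: scores for a 14x14 patch grid, with an assumed annealing schedule.
scores = torch.randn(196)
for layer, t in enumerate([2.0, 1.0, 0.5]):
    idx = boltzmann_sample(scores, k=16, temperature=t)
    print(f"layer {layer} (T={t}): first patches {sorted(idx.tolist())[:5]} ...")
```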

BiomedParse: a biomedical foundation model for image parsing of everything everywhere all at once

no code implementations • 21 May 2024 • Theodore Zhao, Yu Gu, Jianwei Yang, Naoto Usuyama, Ho Hin Lee, Tristan Naumann, Jianfeng Gao, Angela Crabtree, Jacob Abel, Christine Moung-Wen, Brian Piening, Carlo Bifulco, Mu Wei, Hoifung Poon, Sheng Wang

On object recognition, which aims to identify all objects in a given image along with their semantic types, we showed that BiomedParse can simultaneously segment and label all biomedical objects in an image (all at once).

Image Segmentation +6
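To make the "segment and label everything at once" claim concrete: given a single per-pixel semantic map of the kind such a model would predict, every object's mask and type can be read off in one pass. The sketch below is illustrative post-processing only; the class map, class names, and helper are invented here, not the BiomedParse API:

```python
import numpy as np
from scipy import ndimage

def objects_from_semantic_map(seg, class_names):
    """Turn one per-pixel class map (H, W) into per-object masks plus labels.

    Illustrative post-processing: every connected component of each
    non-background class becomes one labeled object, so segmentation and
    semantic labeling for all objects fall out of a single prediction.
    """
    objects = []
    for cls in range(1, len(class_names)):            # 0 is background
        components, n = ndimage.label(seg == cls)     # connected components
        for i in range(1, n + 1):
            objects.append({"semantic_type": class_names[cls],
                            "mask": components == i})
    return objects

# Toy 6x6 map standing in for a model's output; the classes are invented.
seg = np.zeros((6, 6), dtype=int)
seg[0:2, 0:2] = 1                                     # a "nodule"
seg[4:6, 3:6] = 2                                     # a "lesion"
for obj in objects_from_semantic_map(seg, ["background", "nodule", "lesion"]):
    print(obj["semantic_type"], int(obj["mask"].sum()), "pixels")
```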

Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events

no code implementations • 12 Jul 2023 • Yu Gu, Sheng Zhang, Naoto Usuyama, Yonas Woldesenbet, Cliff Wong, Praneeth Sanapathi, Mu Wei, Naveen Valluri, Erika Strandberg, Tristan Naumann, Hoifung Poon

We find that while LLMs already possess decent competency in structuring biomedical text, distilling them into a task-specific student model through self-supervised learning yields substantial gains over out-of-the-box LLMs, with additional advantages such as lower cost, higher efficiency, and white-box model access.

Self-Supervised Learning
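The distillation recipe the abstract describes can be sketched end to end: a teacher LLM pseudo-labels raw text, and a small white-box student is trained only on those labels. Everything below (the mock teacher, the hashed features, the linear student) is an invented stand-in showing the shape of the loop, not the paper's actual models:

```python
import numpy as np

ADE_WORDS = {"nausea", "rash", "headache"}  # mock teacher "knowledge"

def teacher_extract(sentence):
    """Stand-in for prompting an LLM: tag each token as ADE (1) or not (0)."""
    return [1 if tok in ADE_WORDS else 0 for tok in sentence.split()]

def featurize(token):
    """Hashed bag-of-characters features for a tiny linear student."""
    v = np.zeros(64)
    for ch in token:
        v[hash(ch) % 64] += 1.0
    return v

corpus = ["patient reported nausea after dosing",
          "severe rash and headache were observed"]

# 1) The teacher pseudo-labels the raw corpus; no human annotation is used.
X = np.array([featurize(t) for s in corpus for t in s.split()])
y = np.array([tag for s in corpus for tag in teacher_extract(s)])

# 2) The student, a logistic regression, is fit on pseudo-labels alone.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

print("p(ADE | 'nausea') =",
      round(float(1 / (1 + np.exp(-featurize("nausea") @ w))), 3))
```

The point of the loop is that the student never sees human labels: supervision comes entirely from the teacher, which is why the distilled model can be cheaper, faster, and fully inspectable.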

Pareto Optimal Learning for Estimating Large Language Model Errors

no code implementations • 28 Jun 2023 • Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon

We present a method based on Pareto optimization that generates a risk score to estimate the probability of error in an LLM response by integrating multiple sources of information.

Information Retrieval • Language Modeling +4
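The abstract names the technique (Pareto optimization over multiple information sources) without spelling it out. One standard construction with those ingredients is to fit a single risk scorer against several noisy error signals by minimizing a positively weighted sum of per-source losses, since any minimizer of such a scalarization is Pareto optimal with respect to the individual losses. The sketch below assumes that formulation, and the three synthetic signals are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3                  # responses, features, supervision sources
X = rng.normal(size=(n, d))          # features describing each LLM response
# Three noisy binary error signals (e.g., self-consistency disagreement, a
# verifier flag, heuristic checks) -- hypothetical stand-ins, not the paper's.
signals = (rng.normal(size=(n, k)) + X[:, :1] > 0).astype(float)
alpha = np.array([0.5, 0.3, 0.2])    # positive weights over the sources

# Fit r(x) = sigmoid(w.x) by gradient descent on the weighted sum of
# per-source logistic losses; its minimizer is Pareto optimal for them.
w = np.zeros(d)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # current risk score per response
    grad = sum(a * X.T @ (p - signals[:, j]) / n
               for j, a in enumerate(alpha))
    w -= 0.5 * grad

print("estimated error risk of first response:",
      round(float(1 / (1 + np.exp(-X[0] @ w))), 3))
```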
