Search Results for author: Hoo-chang Shin

Found 17 papers, 2 papers with code

Text Mining Drug/Chemical-Protein Interactions using an Ensemble of BERT and T5 Based Models

no code implementations30 Nov 2021 Virginia Adams, Hoo-chang Shin, Carol Anderson, Bo Liu, Anas Abidin

In Track-1 of the BioCreative VII Challenge participants are asked to identify interactions between drugs/chemicals and proteins.
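The snippet above describes an ensemble of BERT- and T5-based models for drug/chemical-protein interaction extraction; a minimal sketch of one common combination strategy, majority voting, is shown below. The model names and relation labels are illustrative, not taken from the paper.

```python
from collections import Counter

def ensemble_vote(predictions_per_model):
    """Majority-vote over per-example label predictions from several models.

    predictions_per_model: list of lists, one inner list of labels per model.
    Returns one label per example; ties break in first-seen order.
    """
    n_examples = len(predictions_per_model[0])
    voted = []
    for i in range(n_examples):
        votes = Counter(model_preds[i] for model_preds in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])
    return voted

# Three hypothetical models labelling four chemical-protein pairs
bert_preds = ["inhibitor", "substrate", "none", "agonist"]
t5_preds   = ["inhibitor", "none",      "none", "agonist"]
bio_preds  = ["activator", "substrate", "none", "antagonist"]
print(ensemble_vote([bert_preds, t5_preds, bio_preds]))
# -> ['inhibitor', 'substrate', 'none', 'agonist']
```

Voting is only one way to ensemble; the paper may instead average logits or rank model outputs, which this sketch does not attempt to reproduce.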

Relation Extraction · Sentence Classification

BioMegatron: Larger Biomedical Domain Language Model

1 code implementation EMNLP 2020 Hoo-chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, Raghav Mani

There has been an influx of biomedical domain-specific language models, showing language models pre-trained on biomedical text perform better on biomedical domain benchmarks than those trained on general domain text corpora such as Wikipedia and Books.

Language Modelling · Named Entity Recognition · +3

GANBERT: Generative Adversarial Networks with Bidirectional Encoder Representations from Transformers for MRI to PET synthesis

no code implementations10 Aug 2020 Hoo-chang Shin, Alvin Ihsani, Swetha Mandava, Sharath Turuvekere Sreenivas, Christopher Forster, Jiook Cha, Alzheimer's Disease Neuroimaging Initiative

Synthesizing medical images such as PET is a challenging task because their intensity range is much wider and denser than that of photographs and digital renderings, and is often heavily biased toward zero.

Natural Language Processing

GANDALF: Generative Adversarial Networks with Discriminator-Adaptive Loss Fine-tuning for Alzheimer's Disease Diagnosis from MRI

no code implementations10 Aug 2020 Hoo-chang Shin, Alvin Ihsani, Ziyue Xu, Swetha Mandava, Sharath Turuvekere Sreenivas, Christopher Forster, Jiook Cha, Alzheimer's Disease Neuroimaging Initiative

This paper proposes an alternative approach in which AD diagnosis is incorporated into the GAN training objective to achieve the best AD classification performance.
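A hedged sketch of what folding a diagnosis term into the GAN objective could look like — the additive form and the weight `lam` are assumptions for illustration, not the paper's exact loss:

```python
def combined_gan_loss(adv_loss, cls_loss, lam=0.5):
    """Hypothetical GANDALF-style objective: the usual adversarial loss
    plus a weighted Alzheimer's-classification loss, so training is
    steered toward features that help diagnosis. The simple additive
    weighting is an illustrative assumption."""
    return adv_loss + lam * cls_loss

print(combined_gan_loss(1.0, 0.5))  # -> 1.25
```

With `lam=0` this degenerates to a plain GAN; larger `lam` trades image fidelity for classification signal, which is the tension the abstract alludes to.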

Correlation via Synthesis: End-to-end Image Generation and Radiogenomic Learning Based on Generative Adversarial Network

no code implementations MIDL 2019 Ziyue Xu, Xiaosong Wang, Hoo-chang Shin, Dong Yang, Holger Roth, Fausto Milletari, Ling Zhang, Daguang Xu

In this work, we investigate the potential of an end-to-end method that fuses gene code with image features to generate synthetic pathology images and learn a radiogenomic map simultaneously.
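The fusion step described above might, in its simplest form, condition the generator on gene expression by concatenating it with the noise input. The dimensions and the plain-concatenation scheme below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator_input(gene_code, noise_dim=8):
    """Sketch of gene-code conditioning: concatenate a gene-expression
    vector with a sampled noise vector to form the GAN generator input.
    All shapes here are illustrative."""
    noise = rng.standard_normal(noise_dim)
    return np.concatenate([np.asarray(gene_code, dtype=float), noise])

z = generator_input([0.2, 1.5, 0.0, 3.1])  # 4 gene features + 8 noise dims
print(z.shape)  # -> (12,)
```

Real conditional GANs often project the conditioning vector through a learned embedding first; plain concatenation is just the most readable stand-in.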

Image Generation

Correlation via synthesis: end-to-end nodule image generation and radiogenomic map learning based on generative adversarial network

no code implementations8 Jul 2019 Ziyue Xu, Xiaosong Wang, Hoo-chang Shin, Dong Yang, Holger Roth, Fausto Milletari, Ling Zhang, Daguang Xu

A radiogenomic map linking image features to gene expression profiles is useful for noninvasively identifying molecular properties of a particular type of disease.

Image Generation

Unsupervised Joint Mining of Deep Features and Image Labels for Large-scale Radiology Image Categorization and Scene Recognition

no code implementations23 Jan 2017 Xiaosong Wang, Le Lu, Hoo-chang Shin, Lauren Kim, Mohammadhadi Bagheri, Isabella Nogues, Jianhua Yao, Ronald M. Summers

The recent rapid and tremendous success of deep convolutional neural networks (CNN) on many challenging computer vision tasks largely derives from the accessibility of the well-annotated ImageNet and PASCAL VOC datasets.

Computer Vision · General Classification · +3

Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation

no code implementations CVPR 2016 Hoo-chang Shin, Kirk Roberts, Le Lu, Dina Demner-Fushman, Jianhua Yao, Ronald M. Summers

Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features.
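The cascade described above feeds deep CNN features into an RNN that generates annotation context. A toy sketch of that hand-off is below — the tanh update, the use of the feature vector as the initial hidden state, and all shapes are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def rnn_describe(cnn_features, token_embeddings, W_h, W_x):
    """Toy recurrent cascade: the CNN feature vector seeds the RNN hidden
    state, which is then updated over annotation-token embeddings with a
    plain Elman-style step. Purely illustrative."""
    h = np.tanh(cnn_features)           # seed state from image features
    states = []
    for x in token_embeddings:
        h = np.tanh(W_h @ h + W_x @ x)  # simple recurrent update
        states.append(h)
    return states

d = 4
feats = np.ones(d)                       # stand-in for pooled CNN features
tokens = [np.zeros(d), np.ones(d)]       # stand-ins for word embeddings
states = rnn_describe(feats, tokens, np.eye(d), np.eye(d))
print(len(states), states[0].shape)      # -> 2 (4,)
```

In a real captioning cascade the per-step state would feed a softmax over the vocabulary; that head is omitted to keep the hand-off itself visible.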

Unsupervised Category Discovery via Looped Deep Pseudo-Task Optimization Using a Large Scale Radiology Image Database

no code implementations25 Mar 2016 Xiaosong Wang, Le Lu, Hoo-chang Shin, Lauren Kim, Isabella Nogues, Jianhua Yao, Ronald Summers

Obtaining semantic labels on a large-scale radiology image database (215,786 key images from 61,845 unique patients) is a prerequisite yet a bottleneck to train highly effective deep convolutional neural network (CNN) models for image recognition.
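The "looped pseudo-task optimization" in the title suggests an alternating loop: cluster deep features into pseudo-labels, retrain on them, re-extract features, repeat. A sketch of that control flow, with hypothetical `cluster` and `train_cnn` callables standing in for the real components:

```python
def looped_pseudo_task(features, n_iters, cluster, train_cnn):
    """Sketch of looped deep pseudo-task optimization: derive pseudo-labels
    by clustering current deep features, retrain to get refreshed features,
    and iterate. The callables are hypothetical placeholders."""
    labels = None
    for _ in range(n_iters):
        labels = cluster(features)              # pseudo-labels from features
        features = train_cnn(features, labels)  # features after retraining
    return labels

# Toy run: threshold "features" into binary pseudo-labels; identity "training"
demo = looped_pseudo_task([1.0, -2.0, 3.0], n_iters=2,
                          cluster=lambda f: [int(x > 0) for x in f],
                          train_cnn=lambda f, l: f)
print(demo)  # -> [1, 0, 1]
```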

DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation

no code implementations22 Jun 2015 Holger R. Roth, Le Lu, Amal Farag, Hoo-chang Shin, Jiamin Liu, Evrim Turkbey, Ronald M. Summers

We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e., superpixels.
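The coarse-to-fine idea above — a cheap classifier filters candidates before an expensive one scores the survivors — can be sketched as a two-stage cascade. The classifier callables and threshold below are illustrative, not the paper's networks:

```python
def coarse_to_fine(patches, coarse_score, fine_score, threshold=0.5):
    """Sketch of hierarchical coarse-to-fine classification: a cheap
    coarse scorer filters candidate patches/superpixels, and only the
    survivors are passed to the fine scorer. All components here are
    illustrative stand-ins."""
    survivors = [p for p in patches if coarse_score(p) >= threshold]
    return {p: fine_score(p) for p in survivors}

patches = [0.1, 0.4, 0.6, 0.9]  # stand-ins for image patches/superpixels
result = coarse_to_fine(patches,
                        coarse_score=lambda p: p,
                        fine_score=lambda p: round(p * p, 2))
print(result)
```

The payoff of the cascade is that the fine model runs only on the fraction of patches the coarse model keeps, which is what makes dense superpixel classification tractable.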

Automated Pancreas Segmentation · Computed Tomography (CT) · +2

Interleaved Text/Image Deep Mining on a Very Large-Scale Radiology Database

no code implementations CVPR 2015 Hoo-chang Shin, Le Lu, Lauren Kim, Ari Seff, Jianhua Yao, Ronald M. Summers

We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system.

Computer Vision

Interleaved Text/Image Deep Mining on a Large-Scale Radiology Database for Automated Image Interpretation

no code implementations4 May 2015 Hoo-chang Shin, Le Lu, Lauren Kim, Ari Seff, Jianhua Yao, Ronald M. Summers

We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's Picture Archiving and Communication System.

Computer Vision · Natural Language Processing
