Drug–drug Interaction Extraction
12 papers with code • 2 benchmarks • 2 datasets
Automatic extraction of drug–drug interaction (DDI) information from the biomedical literature.
(Image credit: Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature)
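The task is commonly framed as relation classification: given a sentence containing a candidate drug pair, predict the interaction type (the DDIExtraction 2013 corpus uses mechanism, effect, advise, and int, plus a negative class). Below is a minimal sketch of that framing; the entity markers are a common convention rather than a fixed standard, and the generic `bert-base-uncased` checkpoint with an untrained classification head is a stand-in for a biomedical encoder fine-tuned on DDI data.

```python
# Minimal sketch: DDI extraction as sentence-level relation classification,
# in the style of the DDIExtraction 2013 shared task. The classification
# head below is untrained; predictions are meaningless until fine-tuned.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["none", "mechanism", "effect", "advise", "int"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

# The candidate drug pair is wrapped in (illustrative) entity markers so the
# encoder knows which of possibly many drug mentions the pair refers to.
sentence = (
    "[DRUG1] ketoconazole [/DRUG1] markedly increases plasma "
    "concentrations of [DRUG2] midazolam [/DRUG2]."
)
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```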
Latest papers
MocFormer: A Two-Stage Pre-training-Driven Transformer for Drug-Target Interactions Prediction
Predicting drug-target interactions (DTIs) is essential for advancing pharmaceuticals.
End-to-End $n$-ary Relation Extraction for Combination Drug Therapies
Extracting combination therapies from scientific literature inherently constitutes an $n$-ary relation extraction problem.
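To make the $n$-ary framing concrete: a pairwise extractor scores only drug pairs, whereas combination-therapy extraction must consider arbitrary subsets of the drug mentions in a passage. A toy illustration of the resulting candidate space (not the paper's method) follows.

```python
# Rough illustration of the n-ary formulation: instead of scoring drug
# pairs, a model must score every candidate subset of size >= 2.
from itertools import combinations

drugs = ["vincristine", "dexamethasone", "rituximab"]  # example mentions

for k in range(2, len(drugs) + 1):
    for combo in combinations(drugs, k):
        print(combo)  # candidate combination-therapy relation
```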
A Dataset for N-ary Relation Extraction of Drug Combinations
The relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of the task.
SciFive: a text-to-text transformer model for biomedical literature
In this report, we introduce SciFive, a domain-specific T5 model that has been pre-trained on large biomedical corpora.
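As a hedged sketch of how such a text-to-text model is queried, the snippet below loads a public SciFive checkpoint and casts DDI extraction as label generation. The checkpoint id and the `ddi:` task prefix are assumptions about the released artifacts, not verified identifiers.

```python
# Hedged sketch of text-to-text inference with a T5-style biomedical model.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "razent/SciFive-base-Pubmed_PMC"  # assumed HF checkpoint id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Cast DDI extraction as generation: the model is expected to emit the
# interaction label as a string (the "ddi:" prefix is an assumption).
text = ("ddi: Ketoconazole markedly increases plasma concentrations "
        "of midazolam.")
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```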
ELECTRAMed: a new pre-trained language representation model for biomedical NLP
The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks.
EGFI: Drug-Drug Interaction Extraction and Generation with Fusion of Enriched Entity and Sentence Information
We propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data.
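A generic sketch of this kind of entity/sentence fusion (inspired by, not reproducing, EGFI): pool the encoder hidden states of each drug mention and concatenate them with the sentence-level [CLS] vector before classification.

```python
# Sketch of fusing entity-level and sentence-level information. The hidden
# states and span indices are dummies standing in for real encoder output.
import torch

hidden = torch.randn(1, 12, 768)         # (batch, seq_len, dim) from an encoder
drug1_span, drug2_span = (1, 3), (7, 9)  # token index ranges of the mentions

cls_vec = hidden[:, 0]                                     # sentence information
ent1 = hidden[:, drug1_span[0]:drug1_span[1]].mean(dim=1)  # entity information
ent2 = hidden[:, drug2_span[0]:drug2_span[1]].mean(dim=1)

fused = torch.cat([cls_vec, ent1, ent2], dim=-1)  # input to a classifier head
print(fused.shape)  # torch.Size([1, 2304])
```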
Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature
Specifically, we focus on drug descriptions and molecular structures as the drug database information.
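One way such molecular-structure information can be featurized (illustrative only, not the paper's exact pipeline) is via RDKit Morgan fingerprints computed from SMILES strings, concatenated with a text representation of the drug pair.

```python
# Illustrative fusion of text and molecular-structure features: Morgan
# fingerprints from RDKit plus a placeholder sentence embedding.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Radius-2 Morgan fingerprint as a dense 0/1 vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr.astype(np.float32)

fp1 = morgan_fp("CC(=O)OC1=CC=CC=C1C(=O)O")      # aspirin
fp2 = morgan_fp("CN1C=NC2=C1C(=O)N(C(=O)N2C)C")  # caffeine

text_vec = np.zeros(768, dtype=np.float32)  # placeholder sentence embedding
fused = np.concatenate([text_vec, fp1, fp2])  # input to a classifier head
print(fused.shape)  # (4864,)
```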
CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters
Due to the compelling improvements brought by BERT, many recent representation models adopt the Transformer architecture as their main building block and consequently inherit the wordpiece tokenization system, even though wordpiece tokenization is not intrinsic to Transformers.
Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing
In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.
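The from-scratch biomedical model described here (PubMedBERT) was released publicly; a hedged example of probing it with a masked-token query follows, assuming the original Hugging Face checkpoint id is still valid.

```python
# Hedged example: querying a from-scratch biomedical masked language model.
# The checkpoint id is the authors' published one and may since have been
# renamed; treat it as an assumption.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
)
for pred in fill("ketoconazole inhibits the [MASK] of midazolam."):
    print(pred["token_str"], round(pred["score"], 3))
```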