Drug–Drug Interaction Extraction

12 papers with code • 2 benchmarks • 2 datasets

Automatic extraction of drug–drug interaction (DDI) information from the biomedical literature.

(Image credit: Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature)
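
In practice, sentence-level DDI extraction is usually cast as relation classification over candidate drug pairs. Below is a minimal sketch of a common preprocessing step: wrapping the candidate pair in marker tokens so a classifier knows which mentions the relation label refers to. The marker strings and function name are illustrative, not from any specific system.

```python
# Mark one candidate drug pair in a sentence for relation classification.
# The <e1>/<e2> marker tokens are a common convention; exact schemes vary.
def mark_drug_pair(sentence: str, drug1: str, drug2: str) -> str:
    sentence = sentence.replace(drug1, f"<e1> {drug1} </e1>", 1)
    sentence = sentence.replace(drug2, f"<e2> {drug2} </e2>", 1)
    return sentence

text = "Fluconazole may increase the anticoagulant effect of warfarin."
print(mark_drug_pair(text, "Fluconazole", "warfarin"))
# <e1> Fluconazole </e1> may increase ... of <e2> warfarin </e2>.
```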

End-to-End $n$-ary Relation Extraction for Combination Drug Therapies

bionlproc/end-to-end-combdrugext 29 Mar 2023

Extracting combination therapies from scientific literature inherently constitutes an $n$-ary relation extraction problem.

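Unlike the pairwise setting above, combination-therapy extraction links a variable number of drug mentions in a single relation. A minimal sketch of such an $n$-ary representation follows; the field and label names are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DrugMention:
    text: str
    start: int  # character offsets into the source passage
    end: int

@dataclass
class CombinationRelation:
    drugs: tuple[DrugMention, ...]  # n participants, not just a pair
    label: str                      # e.g. "POS" (valid combination) vs "NO_COMB"

rel = CombinationRelation(
    drugs=(DrugMention("cisplatin", 10, 19),
           DrugMention("etoposide", 24, 33),
           DrugMention("bleomycin", 38, 47)),
    label="POS",
)
print(len(rel.drugs), rel.label)  # 3 POS
```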

A Dataset for N-ary Relation Extraction of Drug Combinations

allenai/drug-combo-extraction NAACL 2022

Furthermore, the relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of this task.

SciFive: a text-to-text transformer model for biomedical literature

justinphan3110/SciFive 28 May 2021

In this report, we introduce SciFive, a domain-specific T5 model that has been pre-trained on large biomedical corpora.

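As a text-to-text model, SciFive frames tasks such as DDI classification as conditional generation. A minimal usage sketch with Hugging Face Transformers follows; the checkpoint ID and task prefix are assumptions based on the public release, so check the repository for the exact names.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub checkpoint from the SciFive release; verify against the repo.
checkpoint = "razent/SciFive-base-Pubmed_PMC"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The "ddi:" task prefix here is a placeholder; SciFive defines its own prefixes.
text = "ddi: Fluconazole may increase the anticoagulant effect of warfarin."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```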

ELECTRAMed: a new pre-trained language representation model for biomedical NLP

gmpoli/electramed 19 Apr 2021

The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks.

EGFI: Drug-Drug Interaction Extraction and Generation with Fusion of Enriched Entity and Sentence Information

Layne-Huang/EGFI 25 Jan 2021

To address this problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature.

Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature

tticoin/DESC_MOL-DDIE 24 Oct 2020

Specifically, we focus on drug description and molecular structure information as the drug database information.

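The molecular-structure signal can be derived from a drug database entry's SMILES string. Below is a sketch of one common way to turn a structure into a fixed-size feature vector that can be combined with text features; the fingerprint settings are illustrative and not necessarily the paper's exact method.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, as an example
mol = Chem.MolFromSmiles(smiles)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # Morgan/ECFP-style bits
mol_features = np.array(fp, dtype=np.float32)  # ready to concatenate with text features
print(mol_features.shape)  # (2048,)
```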

CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters

helboukkouri/character-bert COLING 2020

Due to the compelling improvements brought by BERT, many recent representation models adopt the Transformer architecture as their main building block and consequently inherit the wordpiece tokenization system, even though wordpiece tokenization is not intrinsic to the Transformer itself.

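The fragmentation problem is easy to see: a general-domain wordpiece vocabulary shatters biomedical terms into subword pieces, whereas CharacterBERT builds word-level representations from characters. A quick illustration (the exact splits depend on the vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["paracetamol", "pharmacokinetics", "hepatotoxicity"]:
    # Rare biomedical words come back as several "##"-prefixed pieces.
    print(word, "->", tokenizer.tokenize(word))
```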

Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing

bionlu-coling2024/biomed-ner-intent_detection 31 Jul 2020

In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.

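The model released with this paper (PubMedBERT) is a drop-in BERT-style encoder. A minimal loading sketch follows; the Hub ID is taken from the public release and may since have been renamed, as the model family was later republished under the BiomedBERT name.

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hub ID from the public release; verify before use.
model_id = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Warfarin interacts with fluconazole.", return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # contextual embeddings for downstream tasks
print(hidden.shape)
```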