Most weakly supervised named entity recognition (NER) models rely on domain-specific dictionaries provided by experts.
While most training sentences are created via automatic techniques such as crawling and sentence-alignment methods, the test sentences are annotated by humans with fluency in mind.
Recent named entity recognition (NER) models often rely on human-annotated datasets, requiring significant professional knowledge of the target domain and entities.
We observe that BioBERT trained on the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid (+0.53%), and List-type (+13.58%) questions compared to the performance obtained in a previous challenge (BioASQ 7B Phase B).
Attention networks, a deep neural network architecture inspired by the human attention mechanism, have seen significant success in image captioning, machine translation, and many other applications.
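To make the attention mechanism concrete, the sketch below shows a minimal scaled dot-product attention in NumPy: each query produces a softmax-normalized weighting over keys, and the output is the correspondingly weighted sum of values. This is an illustrative toy, not the exact formulation of any model mentioned above; the function name and toy dimensions are our own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Illustrative sketch: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                          # (2, 4)
print(np.allclose(w.sum(axis=-1), 1.0))   # True: each query's weights sum to 1
```

The softmax over keys is what lets the network "attend" selectively: keys more similar to a query receive larger weights and contribute more to that query's output.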