General Knowledge
60 papers with code • 1 benchmark • 1 dataset
This task aims to evaluate the ability of a model to answer general-knowledge questions.
Source: BIG-bench
Most implemented papers
Joey NMT: A Minimalist NMT Toolkit for Novices
We present Joey NMT, a minimalist neural machine translation toolkit based on PyTorch that is specifically designed for novices.
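A minimal sketch of the kind of compact PyTorch encoder-decoder such a minimalist toolkit wraps behind a configuration file. This is illustrative only, not Joey NMT's actual code; the class name, vocabulary sizes, and dimensions are assumptions.

```python
# Illustrative sketch (not Joey NMT's code) of a compact encoder-decoder
# translation model of the kind a minimalist NMT toolkit trains for you.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, trg_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.trg_emb = nn.Embedding(trg_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, trg_vocab)

    def forward(self, src, trg):
        # Encode the source sentence; the final hidden state seeds the decoder.
        _, h = self.encoder(self.src_emb(src))
        dec_out, _ = self.decoder(self.trg_emb(trg), h)
        return self.out(dec_out)          # (batch, trg_len, trg_vocab) logits

model = TinySeq2Seq()
src = torch.randint(0, 1000, (2, 7))      # toy batch of source token ids
trg = torch.randint(0, 1000, (2, 5))      # toy batch of target token ids
print(model(src, trg).shape)              # torch.Size([2, 5, 1000])
```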
ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing them to better understand the meanings behind the words people use.
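ConceptNet is distributed as a graph of weighted, labelled edges between natural-language terms and is also reachable through a public web API. A minimal sketch of pulling a few edges for one English term (the term "teapot" is just an example; field names follow ConceptNet's published JSON format):

```python
# Minimal sketch: query the public ConceptNet 5 API for a term and print a few
# of its general-knowledge edges (relation, endpoints, confidence weight).
import requests

resp = requests.get("http://api.conceptnet.io/c/en/teapot", timeout=10)
resp.raise_for_status()

for edge in resp.json()["edges"][:5]:
    print(f'{edge["start"]["label"]} --{edge["rel"]["label"]}--> '
          f'{edge["end"]["label"]}  (weight {edge["weight"]:.2f})')
```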
Automated Phrase Mining from Massive Text Corpora
As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus.
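AutoPhrase itself combines distant supervision from knowledge bases with POS-guided segmentation; the toy sketch below only illustrates the underlying intuition of phrase quality, scoring candidate bigrams by pointwise mutual information over a made-up corpus.

```python
# Toy illustration (not AutoPhrase): rank candidate bigram phrases by PMI,
# i.e. how much more often two words co-occur than independence would predict.
import math
from collections import Counter

corpus = [
    "machine learning improves machine translation",
    "machine learning needs quality phrases",
    "phrase mining extracts quality phrases from text",
]

sentences = [line.split() for line in corpus]
unigrams = Counter(tok for sent in sentences for tok in sent)
bigrams = Counter(bg for sent in sentences for bg in zip(sent, sent[1:]))
n_tokens = sum(unigrams.values())
n_bigrams = sum(bigrams.values())

def pmi(w1, w2):
    p_xy = bigrams[(w1, w2)] / n_bigrams
    p_x, p_y = unigrams[w1] / n_tokens, unigrams[w2] / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# Require minimal support so one-off co-occurrences are not treated as phrases.
candidates = [(bg, pmi(*bg)) for bg, count in bigrams.items() if count >= 2]
for (w1, w2), score in sorted(candidates, key=lambda x: -x[1]):
    print(f"{w1} {w2}: PMI = {score:.2f}")
```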
Learning to Understand Phrases by Embedding the Dictionary
Distributional models that learn rich semantic word representations are a success story of recent NLP research.
ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge
This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet.
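The system builds on ConceptNet Numberbatch, a set of embeddings produced by retrofitting distributional word vectors to the ConceptNet graph. A stripped-down sketch of a retrofitting-style update on toy vectors and a toy graph (not the paper's full multilingual pipeline):

```python
# Sketch of retrofitting (Faruqui et al.-style): nudge each word vector toward
# the average of its neighbours in a knowledge graph while staying close to
# its original distributional vector. Vectors and graph are toy illustrations.
import numpy as np

embeddings = {                      # pretend distributional vectors
    "cat": np.array([1.0, 0.0]),
    "feline": np.array([0.0, 1.0]),
    "dog": np.array([1.0, 1.0]),
}
graph = {                           # pretend ConceptNet-style relations
    "cat": ["feline"],
    "feline": ["cat"],
    "dog": [],
}

retrofitted = {w: v.copy() for w, v in embeddings.items()}
for _ in range(10):                 # a few sweeps are enough on a toy graph
    for word, neighbours in graph.items():
        if not neighbours:
            continue
        neighbour_avg = np.mean([retrofitted[n] for n in neighbours], axis=0)
        # Equal weight on "stay near the original" and "move toward neighbours".
        retrofitted[word] = (embeddings[word] + neighbour_avg) / 2

for word, vec in retrofitted.items():
    print(word, np.round(vec, 3))
```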
Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension
This paper describes our system for SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge.
Integrating Semantic Knowledge to Tackle Zero-shot Text Classification
Insufficient or even unavailable training data for emerging classes is a major challenge for many classification tasks, including text classification.
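A common baseline for the zero-shot setting is to embed the document and every candidate class name in a shared semantic space and pick the nearest label. The sketch below does exactly that with tiny hand-made vectors standing in for real pretrained embeddings; it illustrates the general idea, not the paper's model.

```python
# Minimal sketch of zero-shot classification via a shared embedding space:
# embed the document and each candidate class name, then pick the closest
# class. The tiny hand-made "embeddings" are placeholders for real pretrained
# vectors (e.g. word2vec, GloVe, or ConceptNet Numberbatch).
import numpy as np

word_vectors = {
    "goal":    np.array([0.9, 0.1, 0.0]),
    "match":   np.array([0.8, 0.2, 0.1]),
    "stocks":  np.array([0.1, 0.9, 0.1]),
    "market":  np.array([0.1, 0.8, 0.2]),
    "sports":  np.array([1.0, 0.0, 0.0]),
    "finance": np.array([0.0, 1.0, 0.0]),
}

def embed(text):
    # Average the vectors of known words, ignoring out-of-vocabulary tokens.
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

document = "late goal decides the match"
labels = ["sports", "finance"]          # classes never seen during training
scores = {lab: cosine(embed(document), embed(lab)) for lab in labels}
print(max(scores, key=scores.get), scores)
```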
Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks
The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model.
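One way to picture that split, purely as an illustration rather than the paper's Domain Transformation Networks, is a shared encoder (general knowledge) plus a small per-domain adapter (particular knowledge); the module names and sizes below are assumptions.

```python
# Illustrative sketch (not the paper's exact architecture): shared parameters
# capture domain-general knowledge, while a per-domain adapter applied to the
# shared representation captures domain-particular knowledge.
import torch
import torch.nn as nn

class SharedPlusDomainAdapters(nn.Module):
    def __init__(self, vocab=1000, hidden=128, num_domains=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.shared_encoder = nn.GRU(hidden, hidden, batch_first=True)
        # One lightweight transformation per domain (e.g. news, medical, ...).
        self.domain_adapters = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(num_domains)]
        )

    def forward(self, tokens, domain_id):
        shared, _ = self.shared_encoder(self.embed(tokens))   # general knowledge
        particular = self.domain_adapters[domain_id](shared)  # domain-specific
        return shared + particular                            # unified encoding

model = SharedPlusDomainAdapters()
tokens = torch.randint(0, 1000, (2, 6))
print(model(tokens, domain_id=1).shape)   # torch.Size([2, 6, 128])
```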
What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge
Open-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks?
Transformers as Soft Reasoners over Language
Expressing the knowledge in a formal (logical or probabilistic) representation, however, has been a major obstacle to research on having systems reason over explicitly provided knowledge.
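The paper's alternative is to keep facts, rules, and the query in natural language, concatenate them, and let a transformer predict whether the query follows. The toy, untrained sketch below only shows that wiring; the tokenizer, model sizes, and example sentences are assumptions.

```python
# Toy sketch of a "soft reasoner": facts, rules and the query stay in natural
# language, are concatenated into one token sequence, and a transformer encoder
# with a binary head scores whether the query follows. Untrained, shapes only.
import torch
import torch.nn as nn

facts_and_rules = ("Anne is kind. Kind people are nice. "
                   "If someone is nice then they are liked.")
query = "Anne is liked."
text = facts_and_rules + " [SEP] " + query

# Crude whitespace "tokenizer" with hashed ids; a real system would use a
# pretrained transformer and its own tokenizer instead.
token_ids = torch.tensor([[hash(w) % 5000 for w in text.split()]])

class SoftReasoner(nn.Module):
    def __init__(self, vocab=5000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(dim, 1)     # "does the query follow?"

    def forward(self, ids):
        hidden = self.encoder(self.embed(ids))
        pooled = hidden.mean(dim=1)             # pool over the whole sequence
        return torch.sigmoid(self.classifier(pooled))

print(SoftReasoner()(token_ids))                # untrained score, around 0.5
```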