Negation
164 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
Logical operations are performed in the embedding space by neural operators over the probabilistic embeddings.
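The negation operator in this approach can be sketched in a few lines. Assuming the BetaE formulation, where each embedding is a Beta distribution Beta(α, β), negation takes the reciprocal of both parameters, flipping high-density regions of the distribution to low-density ones (all function names here are illustrative):

```python
# Sketch of the BetaE-style negation operator (assumption: each embedding is a
# single Beta(alpha, beta) distribution; real models use one per dimension).

def negate(alpha: float, beta: float) -> tuple[float, float]:
    """Negation: Beta(a, b) -> Beta(1/a, 1/b)."""
    return 1.0 / alpha, 1.0 / beta

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of Beta(a, b), to see where the mass sits."""
    return alpha / (alpha + beta)

a, b = 4.0, 1.0            # embedding with mass concentrated near 1
na, nb = negate(a, b)      # negated embedding: mass concentrated near 0
```

Here `beta_mean(4.0, 1.0)` is 0.8, while the negated `beta_mean(0.25, 1.0)` is 0.2, showing how the operator inverts where the probability mass lies.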
Editing Models with Task Arithmetic
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems.
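The core idea, task arithmetic, can be sketched with plain dictionaries standing in for model weights (a minimal illustration, not the paper's implementation): a task vector is the element-wise difference between fine-tuned and pre-trained weights, and subtracting it (negation) suppresses the behavior the fine-tuning induced.

```python
# Illustrative task arithmetic with scalar "weights"; real models operate on
# full parameter tensors (e.g. a state_dict) the same way, element-wise.

def task_vector(finetuned: dict, pretrained: dict) -> dict:
    """tau = theta_finetuned - theta_pretrained."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_vector(pretrained: dict, tau: dict, scale: float = 1.0) -> dict:
    """theta_new = theta_pretrained + scale * tau; scale = -1 negates the task."""
    return {k: pretrained[k] + scale * tau[k] for k in pretrained}

pre = {"w": 1.0}
ft = {"w": 3.0}
tau = task_vector(ft, pre)                 # {"w": 2.0}
improved = apply_vector(pre, tau, 1.0)     # recovers the fine-tuned weights
negated = apply_vector(pre, tau, -1.0)     # moves away from the task
```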
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
Logic Embeddings for Complex Query Answering
Answering logical queries over incomplete knowledge bases is challenging because: 1) it calls for implicit link prediction, and 2) brute force answering of existential first-order logic queries is exponential in the number of existential variables.
Fast and accurate sentiment classification using an enhanced Naive Bayes model
We have explored different methods of improving the accuracy of a Naive Bayes classifier for sentiment analysis.
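A baseline multinomial Naive Bayes classifier with Laplace smoothing can be written from scratch in a few lines (a sketch of the starting point only; the paper's enhancements, such as negation handling and feature selection, are not shown):

```python
import math
from collections import Counter, defaultdict

# From-scratch multinomial Naive Bayes with add-one (Laplace) smoothing.

def train(docs):
    """docs: list of (token_list, label) pairs."""
    class_counts = Counter()               # documents per class
    word_counts = defaultdict(Counter)     # token frequencies per class
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(words, class_counts, word_counts, vocab):
    """Return the class with the highest log-posterior."""
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_counts.items():
        lp = math.log(n_docs / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["good", "great"], "pos"), (["bad", "awful"], "neg")]
model = train(docs)
```

With this toy corpus, `predict(["good"], *model)` returns `"pos"`, since "good" only appears in positive training documents.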
Analogs of Linguistic Structure in Deep Representations
We investigate the compositional structure of message vectors computed by a deep network trained on a communication game.
MazeBase: A Sandbox for Learning from Games
This paper introduces MazeBase: an environment for simple 2D games, designed as a sandbox for machine learning approaches to reasoning and planning.
Evaluating Scoped Meaning Representations
A pilot study is performed to automatically find changes in meaning by comparing meaning representations of translations.
What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models.
Explanation by Progressive Exaggeration
As machine learning methods see greater adoption in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical.