Logical Fallacies
11 papers with code • 1 benchmark • 2 datasets
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Logical Fallacy Detection
In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate).
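As a rough sketch of how such a detection task can be framed (not the paper's own pipeline or the Logic dataset's taxonomy), the snippet below treats fallacy identification as zero-shot multi-class text classification; the label set and model checkpoint are illustrative assumptions.

```python
# Minimal sketch: logical fallacy detection framed as zero-shot classification.
# The label set and checkpoint below are illustrative assumptions, not the
# paper's released Logic/LogicClimate artifacts.
from transformers import pipeline

FALLACY_LABELS = [
    "ad hominem", "false dilemma", "slippery slope",
    "appeal to emotion", "hasty generalization", "no fallacy",
]

# An NLI-based zero-shot classifier is one simple baseline for this framing.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = ("If we allow remote work one day a week, "
         "soon nobody will ever come to the office.")
result = classifier(claim, candidate_labels=FALLACY_LABELS)
print(result["labels"][0], round(result["scores"][0], 3))
```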
Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
Our three-stage framework natively consolidates prior datasets and methods from existing tasks, like propaganda detection, serving as an overarching evaluation testbed.
Case-Based Reasoning with Language Models for Classification of Logical Fallacies
The ease and speed of spreading misinformation and propaganda on the Web motivate the need to develop trustworthy technology for detecting fallacies in natural language arguments.
How susceptible are LLMs to Logical Fallacies?
It evaluates the debater's performance in logical reasoning by contrasting a scenario in which the persuader employs logical fallacies against one in which logical reasoning is used.
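A minimal sketch of the kind of paired robustness check this describes, assuming a hypothetical query_model helper standing in for any chat-completion API; the prompt wording and flip-rate metric are illustrative, not the paper's protocol.

```python
# Sketch of a paired susceptibility check: does a model keep its answer when
# the counter-argument is a logical fallacy rather than a sound rebuttal?
def query_model(prompt: str) -> str:
    # Placeholder: replace with a call to your LLM client of choice.
    return "KEEP"

def flip_rate(items):
    """items: dicts with question, initial_answer, fallacious_rebuttal."""
    flips = 0
    for item in items:
        prompt = (
            f"Question: {item['question']}\n"
            f"Your earlier answer: {item['initial_answer']}\n"
            f"Opponent: {item['fallacious_rebuttal']}\n"
            "Do you keep or change your answer? Reply KEEP or CHANGE."
        )
        if "CHANGE" in query_model(prompt).upper():
            flips += 1
    return flips / len(items)

sample = [{
    "question": "Does remote work reduce productivity?",
    "initial_answer": "Not according to the cited studies.",
    "fallacious_rebuttal": "Everyone I know prefers offices, so you must be wrong.",
}]
print(f"flip rate: {flip_rate(sample):.2f}")
```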
A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning
In this paper, we take a closer look at the self-verification abilities of LLMs in the context of logical reasoning, focusing on their ability to identify logical fallacies accurately.
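One way to probe self-verification of this kind, sketched under the assumption of a hypothetical query_model helper: have the model answer with step-by-step reasoning, then ask it to audit that reasoning for named fallacies. This is an illustration, not the paper's evaluation setup.

```python
# Generate-then-self-verify probe for fallacy awareness (illustrative only).
def query_model(prompt: str) -> str:
    # Placeholder: replace with a real LLM call.
    return "Step 1: ... Conclusion: yes."

def self_verify(question: str) -> str:
    answer = query_model(f"Answer with step-by-step reasoning:\n{question}")
    critique = query_model(
        "Review the reasoning below and name any logical fallacy it commits "
        "(e.g. circular reasoning, false cause), or say 'none':\n" + answer
    )
    return critique

print(self_verify("If every part of a plane is light, is the whole plane light?"))
```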
OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems
Notably, the best-performing model, GPT-4V, attains an average score of 17.97% on OlympiadBench, with a mere 10.74% in physics, highlighting the benchmark's rigor and the intricacy of physical reasoning.
A Logical Fallacy-Informed Framework for Argument Generation
Despite the remarkable performance of Large Language Models (LLMs) in natural language processing tasks, they still struggle with generating logically sound arguments, resulting in potential risks such as spreading misinformation.
Grounding Fallacies Misrepresenting Scientific Publications in Evidence
Health-related misinformation claims often falsely cite a credible biomedical publication as evidence, which superficially appears to support the false claim.
Boosting Logical Fallacy Reasoning in LLMs via Logical Structure Tree
We further develop two strategies to incorporate the logical structure tree into LLMs for fallacy reasoning.
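To make the idea of a logical structure tree concrete, here is a toy approximation that recursively splits a statement on surface connectives so an argument's skeleton can be exposed to an LLM prompt; the paper's construction, and its two integration strategies, are more sophisticated, so treat this purely as an illustration.

```python
# Toy approximation of a "logical structure tree": recursively split a
# statement on discourse connectives. Illustrative only; not the paper's
# tree construction.
import re

CONNECTIVES = ["because", "therefore", "if", "then", "but", "so"]

def build_tree(text):
    lowered = text.lower()
    for conn in CONNECTIVES:
        parts = re.split(rf"\b{conn}\b", lowered, maxsplit=1)
        if len(parts) == 2:
            left, right = (p.strip(" ,.") for p in parts)
            return {"connective": conn,
                    "children": [build_tree(left), build_tree(right)]}
    return text.strip(" ,.")

print(build_tree("Crime rose because we hired more police, so police cause crime."))
```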