Search Results for author: Walid S. Saba

Found 8 papers, 0 papers with code

Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs

no code implementations • 12 Sep 2023 • Walid S. Saba

In our opinion the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts).

Symbolic and Language Agnostic Large Language Models

no code implementations • 27 Aug 2023 • Walid S. Saba

We argue that the relative success of large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate but a reflection on employing an appropriate strategy of bottom-up reverse engineering of language at scale.

Towards Ontologically Grounded and Language-Agnostic Knowledge Graphs

no code implementations • 20 Jul 2023 • Walid S. Saba

Knowledge graphs (KGs) have become the standard technology for the representation of factual information in applications such as recommendation engines, search, and question-answering systems.

Knowledge Graphs • Question Answering
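The abstract above describes knowledge graphs as the standard technology for representing factual information. A minimal sketch of that idea, with facts stored as subject–predicate–object triples, might look like the following (all names and facts are illustrative assumptions, not taken from the paper):

```python
# Toy triple store: a knowledge graph as (subject, predicate, object) facts.
from collections import defaultdict

class TripleStore:
    """Stores facts as triples and answers simple lookup queries."""
    def __init__(self):
        self._by_subject = defaultdict(list)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self._by_subject[subject].append((predicate, obj))

    def query(self, subject: str, predicate: str) -> list:
        """Return all objects related to `subject` via `predicate`."""
        return [o for (p, o) in self._by_subject[subject] if p == predicate]

kg = TripleStore()
kg.add("Paris", "capitalOf", "France")
kg.add("Paris", "locatedIn", "Europe")
print(kg.query("Paris", "capitalOf"))  # ['France']
```

Question-answering and recommendation systems built on KGs reduce, at bottom, to traversals and joins over triples of this shape.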

Towards Explainable and Language-Agnostic LLMs: Symbolic Reverse Engineering of Language at Scale

no code implementations • 30 May 2023 • Walid S. Saba

To address these limitations, we suggest combining the strength of symbolic representations with what we believe to be the key to the success of LLMs, namely a successful bottom-up reverse engineering of language at scale.

No Adjective Ordering Mystery, and No Raven Paradox, Just an Ontological Mishap

no code implementations • 14 Apr 2019 • Walid S. Saba

But how exactly can we rectify our logical formalisms so that semantics, an endeavor that has occupied the most penetrating minds for over two centuries, can become (nearly) trivial, and what exactly does it mean to assume a theory of the world in our semantics?

A Simple Machine Learning Method for Commonsense Reasoning? A Short Commentary on Trinh & Le (2018)

no code implementations • 1 Oct 2018 • Walid S. Saba

This is a short commentary on Trinh & Le (2018) ("A Simple Method for Commonsense Reasoning") that outlines three serious flaws in the cited paper and discusses why data-driven approaches cannot be considered serious models for the commonsense reasoning needed in natural language understanding in general, and in reference resolution in particular.

BIG-bench Machine Learning • Natural Language Understanding

On the Winograd Schema: Situating Language Understanding in the Data-Information-Knowledge Continuum

no code implementations • 30 Sep 2018 • Walid S. Saba

The Winograd Schema (WS) challenge, proposed as an alternative to the Turing Test, has become the new standard for evaluating progress in natural language understanding (NLU).

Natural Language Understanding

Logical Semantics and Commonsense Knowledge: Where Did we Go Wrong, and How to Go Forward, Again

no code implementations • 6 Aug 2018 • Walid S. Saba

We argue that logical semantics might have faltered due to its failure in distinguishing between two fundamentally very different types of concepts: ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types.
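The distinction argued for above can be illustrated with a small typed sketch, modeling ontological concepts as types and logical concepts as predicates over objects of those types (a hypothetical illustration; the names and the use of Python classes are my assumptions, not the paper's formalism):

```python
# Illustrative sketch, not from the paper: ontological concepts as types,
# logical concepts as predicates over objects of those types.
from dataclasses import dataclass

# Ontological concepts: types in a (toy) strongly-typed ontology.
@dataclass
class Human:
    name: str
    age: int

@dataclass
class Book:
    title: str
    pages: int

# Logical concepts: predicates expressing properties of, and relations
# between, objects of various ontological types.
def adult(h: Human) -> bool:
    return h.age >= 18

def longer_than(b: Book, pages: int) -> bool:
    return b.pages > pages

alice = Human("Alice", 30)
novel = Book("War and Peace", 1225)
print(adult(alice))            # True
print(longer_than(novel, 500)) # True
```

The point of the distinction is that a predicate like `adult` is only well-formed when applied to objects of the right ontological type; type errors such as `adult(novel)` correspond to category mistakes that an untyped logical semantics cannot rule out.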
