no code implementations • 12 Sep 2023 • Walid S. Saba
In our opinion the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or non-factual) was created equal; (ii) due to their subsymbolic nature, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own; and (iii) LLMs will often fail to make the correct inferences in several linguistic contexts (e.g., nominal compounds, copredication, quantifier scope ambiguities, intensional contexts).
no code implementations • 27 Aug 2023 • Walid S. Saba
We argue that the relative success of large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate but a reflection on employing an appropriate strategy of bottom-up reverse engineering of language at scale.
no code implementations • 20 Jul 2023 • Walid S. Saba
Knowledge graphs (KGs) have become the standard technology for the representation of factual information in applications such as recommendation engines, search, and question-answering systems.
no code implementations • 30 May 2023 • Walid S. Saba
To address these limitations, we suggest combining the strength of symbolic representations with what we believe to be the key to the success of LLMs, namely a successful bottom-up reverse engineering of language at scale.
no code implementations • 14 Apr 2019 • Walid S. Saba
But how exactly can we rectify our logical formalisms so that semantics, an endeavor that has occupied the most penetrating minds for over two centuries, can become (nearly) trivial, and what exactly does it mean to assume a theory of the world in our semantics?
no code implementations • 1 Oct 2018 • Walid S. Saba
This is a short Commentary on Trinh & Le (2018) ("A Simple Method for Commonsense Reasoning") that outlines three serious flaws in the cited paper and discusses why data-driven approaches cannot be considered as serious models for the commonsense reasoning needed in natural language understanding in general, and in reference resolution, in particular.
no code implementations • 30 Sep 2018 • Walid S. Saba
The Winograd Schema (WS) challenge, proposed as an alternative to the Turing Test, has become the new standard for evaluating progress in natural language understanding (NLU).
no code implementations • 6 Aug 2018 • Walid S. Saba
We argue that logical semantics might have faltered due to its failure to distinguish between two fundamentally different kinds of concepts: ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types.
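The distinction argued for above can be sketched as follows: ontological concepts become nodes in a type hierarchy, while logical concepts are predicates defined only over the appropriate types. The class names and the `heavy` predicate below are illustrative assumptions, not part of the cited work:

```python
# Ontological concepts as a (toy) strongly-typed hierarchy.
class Entity: pass

class Abstract(Entity): pass      # e.g., ideas, numbers
class Physical(Entity):           # e.g., rocks, people
    def __init__(self, weight=0):
        self.weight = weight

class Idea(Abstract): pass
class Human(Physical): pass

# A logical concept: a predicate applicable only to Physical objects.
def heavy(x):
    """'heavy' is a property of physical objects; applying it to an
    abstract object is a type error, not a false statement."""
    if not isinstance(x, Physical):
        raise TypeError("'heavy' is not applicable to non-physical objects")
    return x.weight > 100  # hypothetical threshold
```

On this view, "a heavy idea" is not false but ill-typed, which is exactly the kind of distinction a flat predicate logic, where every concept is just another predicate, cannot express.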