Search Results for author: Gary Marcus

Found 12 papers, 1 paper with code

A Gentle Introduction to Deep Nets and Opportunities for the Future

no code implementations · ACL 2022 · Kenneth Church, Valia Kordoni, Gary Marcus, Ernest Davis, Yanjun Ma, Zeyu Chen

The first half of this tutorial will make deep nets more accessible to a broader audience, following “Deep Nets for Poets” and “A Gentle Introduction to Fine-Tuning.” We will also introduce GFT (general fine-tuning), a little language for fine-tuning deep nets with short (one-line) programs that are as easy to code as regression in statistics packages such as R using glm (generalized linear models).

Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc

no code implementations · 31 Jul 2023 · Doug Lenat, Gary Marcus

Even long arguments produced this way can be both trustworthy and interpretable, since the full step-by-step line of reasoning is always available, and for each step the provenance of the knowledge used can be documented and audited.

Knowledge Graphs

A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Human Language?

no code implementations · 26 Jul 2023 · Gary Marcus, Evelina Leivada, Elliot Murphy

Artificial Intelligence applications show great potential for language-related tasks that rely on next-word prediction.

Sentence

Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning

no code implementations · 23 Feb 2023 · Vittoria Dentella, Elliot Murphy, Gary Marcus, Evelina Leivada

With successes in bottom-up challenges partially overshadowing shortcomings, the 'human-like' performance of Large Language Models has raised the question of how linguistic performance is achieved by algorithms.

DALL-E 2 Fails to Reliably Capture Common Syntactic Processes

no code implementations · 23 Oct 2022 · Evelina Leivada, Elliot Murphy, Gary Marcus

Machine intelligence is increasingly being linked to claims about sentience, language processing, and an ability to comprehend and transform natural language into a range of stimuli.

Negation

A very preliminary analysis of DALL-E 2

no code implementations · 25 Apr 2022 · Gary Marcus, Ernest Davis, Scott Aaronson

The DALL-E 2 system generates original synthetic images corresponding to an input text caption.

Common Sense Reasoning

The Defeat of the Winograd Schema Challenge

no code implementations · 7 Jan 2022 · Vid Kocijan, Ernest Davis, Thomas Lukasiewicz, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011.

A Review of Winograd Schema Challenge Datasets and Approaches

no code implementations · 23 Apr 2020 · Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge is both a commonsense reasoning and natural language understanding challenge, introduced as an alternative to the Turing test.

Natural Language Understanding

The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence

no code implementations · 14 Feb 2020 · Gary Marcus

Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning, ever-larger training sets, and ever-increasing compute.

BIG-bench Machine Learning

Innateness, AlphaZero, and Artificial Intelligence

no code implementations · 17 Jan 2018 · Gary Marcus

The concept of innateness is rarely discussed in the context of artificial intelligence.

Deep Learning: A Critical Appraisal

1 code implementation · 2 Jan 2018 · Gary Marcus

Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of Imagenet.

Speech Recognition

The Scope and Limits of Simulation in Cognitive Models

no code implementations · 16 Jun 2015 · Ernest Davis, Gary Marcus

It has been proposed that human physical reasoning consists largely of running "physics engines in the head" in which the future trajectory of the physical system under consideration is computed precisely using accurate scientific theories.
