Search Results for author: Paul Smolensky

Found 35 papers, 15 papers with code

Implicit Chain of Thought Reasoning via Knowledge Distillation

1 code implementation 2 Nov 2023 Yuntian Deng, Kiran Prasad, Roland Fernandez, Paul Smolensky, Vishrav Chaudhary, Stuart Shieber

In this work, we explore an alternative reasoning approach: instead of explicitly producing the chain of thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning.

Knowledge Distillation, Math
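
To make the approach concrete, below is a rough Python sketch of a hidden-state distillation loss; the tensors, dimensions, and the layer-to-reasoning-step reading scheme are placeholder assumptions for illustration, not the released implementation.

import numpy as np

# Toy sketch (placeholder tensors): the student is trained so that its
# layer-wise hidden states mimic teacher hidden states read off the explicit
# chain of thought, in addition to predicting the final answer directly.
rng = np.random.default_rng(0)
n_layers, n_cot_tokens, d = 12, 24, 64

# Hypothetical teacher states: one vector per layer per chain-of-thought token.
teacher_states = rng.standard_normal((n_layers, n_cot_tokens, d))
# Hypothetical student states at the answer position, one per layer.
student_states = rng.standard_normal((n_layers, d))

# One possible reading scheme: deeper student layers imitate teacher states at
# later reasoning steps, so depth substitutes for explicit reasoning tokens.
step_for_layer = np.linspace(0, n_cot_tokens - 1, n_layers).round().astype(int)
targets = teacher_states[np.arange(n_layers), step_for_layer]      # (n_layers, d)

distill_loss = np.mean((student_states - targets) ** 2)
print(distill_loss)   # in training this term would be added to the answer loss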

Differentiable Tree Operations Promote Compositional Generalization

1 code implementation 1 Jun 2023 Paul Soulos, Edward Hu, Kate McCurdy, Yunmo Chen, Roland Fernandez, Paul Smolensky, Jianfeng Gao

To facilitate the learning of these symbolic sequences, we introduce a differentiable tree interpreter that compiles high-level symbolic tree operations into subsymbolic matrix operations on tensors.

Semantic Parsing, Text Generation
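
As a drastically simplified Python sketch of the compile-to-matrix-operations idea (toy symbols, a single depth-1 tree, and one hand-picked operation; the paper's interpreter handles general tree operations on deeper trees):

import numpy as np

# A tree is a tensor-product representation: a sum of filler (x) role outer
# products. A symbolic tree operation then compiles to a matrix operation.
rng = np.random.default_rng(0)
d = 8                                              # filler (symbol) dimension

fillers = {s: rng.standard_normal(d) for s in ["+", "x", "y"]}
roles = np.eye(3)                                  # orthonormal roles
ROOT, LEFT, RIGHT = roles

# Encode the depth-1 tree (+ x y): sum_i filler_i (x) role_i   -> shape (d, 3)
tree = (np.outer(fillers["+"], ROOT)
        + np.outer(fillers["x"], LEFT)
        + np.outer(fillers["y"], RIGHT))

# The symbolic operation "swap the two children", compiled to a role-space matrix.
swap = np.array([[1., 0., 0.],
                 [0., 0., 1.],
                 [0., 1., 0.]])
swapped = tree @ swap.T

# Unbinding the LEFT role from the transformed tree now retrieves 'y'.
print(np.allclose(swapped @ LEFT, fillers["y"]))   # True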

Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models

1 code implementation 21 Dec 2022 Najoung Kim, Tal Linzen, Paul Smolensky

Human linguistic capacity is often characterized by compositionality and the generalization it enables -- human learners can produce and comprehend novel complex expressions by composing known parts.

Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems

no code implementations 2 May 2022 Paul Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao

What explains the dramatic progress from 20th-century to 21st-century AI, and how can the remaining limitations of current AI be overcome?

Distributed neural encoding of binding to thematic roles

no code implementations 24 Oct 2021 Matthias Lalisse, Paul Smolensky

A framework and method are proposed for the study of constituent composition in fMRI.

Sentence

Scalable knowledge base completion with superposition memories

1 code implementation 24 Oct 2021 Matthias Lalisse, Eric Rosen, Paul Smolensky

We present Harmonic Memory Networks (HMem), a neural architecture for knowledge base completion that models entities as weighted sums of pairwise bindings between an entity's neighbors and corresponding relations.

Knowledge Base Completion
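
A minimal Python sketch of the binding scheme described above, with a toy two-edge graph and an assumed embedding dimension (this is not the released HMem code):

import numpy as np

# An entity is encoded as a weighted sum of outer-product bindings between its
# neighbors' embeddings and the corresponding relation embeddings.
rng = np.random.default_rng(0)
d = 16                                                    # assumed embedding size

entities = {n: rng.standard_normal(d) for n in ["Alice", "Acme", "Bob"]}
rel_basis, _ = np.linalg.qr(rng.standard_normal((d, 2)))  # orthonormal relation vectors
relations = {"employer": rel_basis[:, 0], "spouse": rel_basis[:, 1]}

def encode_entity(bindings, weights):
    """Superpose weighted (neighbor x relation) bindings into one d-by-d memory."""
    memory = np.zeros((d, d))
    for (neighbor, relation), w in zip(bindings, weights):
        memory += w * np.outer(entities[neighbor], relations[relation])
    return memory

alice = encode_entity([("Acme", "employer"), ("Bob", "spouse")], weights=[1.0, 1.0])

# Unbinding: multiplying by a relation vector retrieves the neighbor bound to it.
print(np.allclose(alice @ relations["spouse"], entities["Bob"]))   # True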

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization

1 code implementation NAACL 2021 Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao

On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and improved syntactic interpretability in the TPR layer outputs.

Abstractive Text Summarization

Compositional Processing Emerges in Neural Networks Solving Math Problems

1 code implementation 19 May 2021 Jacob Russin, Roland Fernandez, Hamid Palangi, Eric Rosen, Nebojsa Jojic, Paul Smolensky, Jianfeng Gao

A longstanding question in cognitive science concerns the learning mechanisms underlying compositionality in human cognition.

Math, Mathematical Reasoning

Invertible Tree Embeddings using a Cryptographic Role Embedding Scheme

no code implementations COLING 2020 Coleman Haley, Paul Smolensky

We present a novel method for embedding trees in a vector space based on Tensor-Product Representations (TPRs) which allows for inversion: the retrieval of the original tree structure and nodes from the vectorial embedding.

Position, Retrieval +1

Universal linguistic inductive biases via meta-learning

1 code implementation 29 Jun 2020 R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen

To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases.

Language Acquisition, Meta-Learning
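
As a rough illustration of the meta-learning mechanism, here is a Reptile-style first-order loop on a toy task distribution; it is a stand-in, not the paper's framework, and the "inductive bias" learned here is simply a good weight initialization.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy stand-in for a 'language': noisy linear regression with a task-specific slope."""
    slope = rng.uniform(1.0, 3.0)
    x = rng.uniform(-1, 1, size=(20, 1))
    y = slope * x + 0.01 * rng.standard_normal((20, 1))
    return x, y

theta = np.zeros((1, 1))                      # meta-learned initialization (the "bias")
inner_lr, outer_lr = 0.1, 0.05

for _ in range(500):                          # outer (meta) loop over sampled tasks
    x, y = sample_task()
    w = theta.copy()
    for _ in range(3):                        # inner loop: fast adaptation to one task
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w = w - inner_lr * grad
    theta = theta + outer_lr * (w - theta)    # Reptile-style first-order meta-update

print(theta)   # drifts toward the typical task (slope near 2): a learned prior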

HUBERT Untangles BERT to Improve Transfer across NLP Tasks

1 code implementation 25 Oct 2019 Mehrad Moradshahi, Hamid Palangi, Monica S. Lam, Paul Smolensky, Jianfeng Gao

We introduce HUBERT, which combines the structured-representational power of Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional Transformer language model.

Language Modelling

Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving

3 code implementations 15 Oct 2019 Imanol Schlag, Paul Smolensky, Roland Fernandez, Nebojsa Jojic, Jürgen Schmidhuber, Jianfeng Gao

We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure.

Math, Question Answering

Mapping Natural-language Problems to Formal-language Solutions Using Structured Neural Representations

2 code implementations ICML 2020 Kezhen Chen, Qiuyuan Huang, Hamid Palangi, Paul Smolensky, Kenneth D. Forbus, Jianfeng Gao

The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate, in symbolic space, a sequential program represented by relational tuples, each consisting of a relation (or operation) and a number of arguments.

Program Synthesis, Text Generation
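
A minimal Python sketch of TPR binding and unbinding for a single relational tuple; the symbols, slot roles, and dimensions are toy assumptions, and this is not the TP-N2F model itself.

import numpy as np

rng = np.random.default_rng(0)
d = 16
symbols = {s: rng.standard_normal(d) for s in ["add", "mul", "x", "y"]}
roles = np.eye(3)                        # orthonormal slot roles: relation, arg1, arg2
REL, ARG1, ARG2 = roles

def bind(rel, a1, a2):
    """Encode the tuple (rel a1 a2) as a sum of filler (x) role outer products."""
    return (np.outer(symbols[rel], REL)
            + np.outer(symbols[a1], ARG1)
            + np.outer(symbols[a2], ARG2))

def unbind(tpr, role):
    """Retrieve a slot's filler and decode it to the nearest known symbol."""
    filler = tpr @ role
    return max(symbols, key=lambda s: symbols[s] @ filler)

tpr = bind("add", "x", "y")
print(unbind(tpr, REL), unbind(tpr, ARG1), unbind(tpr, ARG2))   # add x y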

Natural- to formal-language generation using Tensor Product Representations

no code implementations 25 Sep 2019 Kezhen Chen, Qiuyuan Huang, Hamid Palangi, Paul Smolensky, Kenneth D. Forbus, Jianfeng Gao

Generating formal-language output represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input in order to generate the output.

Math, Program Synthesis +1

RNNs implicitly implement tensor-product representations

1 code implementation ICLR 2019 R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky

Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies).

Representation Learning, Sentence

RNNs Implicitly Implement Tensor Product Representations

no code implementations 20 Dec 2018 R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky

Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies).

Representation Learning, Sentence

Augmenting Compositional Models for Knowledge Base Completion Using Gradient Representations

no code implementations 2 Nov 2018 Matthias Lalisse, Paul Smolensky

Neural models of Knowledge Base data have typically employed compositional representations of graph objects: entity and relation embeddings are systematically combined to evaluate the truth of a candidate Knowledge Base entry.

Knowledge Base Completion, Knowledge Graphs +1
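
For readers unfamiliar with compositional Knowledge Base scoring, a standard DistMult-style scorer illustrates the setup the abstract refers to; this is a textbook example with random toy embeddings, not the paper's gradient-representation model.

import numpy as np

rng = np.random.default_rng(0)
d = 32
entity_emb = {e: rng.standard_normal(d) for e in ["Paris", "France", "Berlin"]}
relation_emb = {r: rng.standard_normal(d) for r in ["capital_of"]}

def score(head, relation, tail):
    """Truth score of a candidate (head, relation, tail) entry: sum_k h_k r_k t_k."""
    return float(np.sum(entity_emb[head] * relation_emb[relation] * entity_emb[tail]))

# With untrained random embeddings the scores are arbitrary; training would
# push true triples above false ones.
print(score("Paris", "capital_of", "France"), score("Berlin", "capital_of", "France"))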

A Simple Recurrent Unit with Reduced Tensor Product Representations

1 code implementation 29 Oct 2018 Shuai Tang, Paul Smolensky, Virginia R. de Sa

Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable.

Natural Language Inference

Learning and analyzing vector encoding of symbolic representations

no code implementations 10 Mar 2018 Roland Fernandez, Asli Celikyilmaz, Rishabh Singh, Paul Smolensky

We present a formal language with expressions denoting general symbol structures and queries which access information in those structures.

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation

no code implementations 4 Jan 2018 Paul Tupper, Paul Smolensky, Pyeong Whan Cho

Gradient Symbolic Computation is proposed as a means of solving discrete global optimization problems using a neurally plausible continuous stochastic dynamical system.
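
The following Python snippet is a generic illustration of that recipe (a continuous relaxation driven by noisy gradient dynamics, with the noise annealed down and the pressure toward discrete states annealed up) on a toy quadratic objective; it is not the paper's exact Gradient Symbolic Computation dynamics.

import numpy as np

rng = np.random.default_rng(0)
n = 10
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                                  # random symmetric coupling matrix

def energy(x):
    """Objective to minimize over binary vectors x in {0, 1}^n."""
    return float(x @ Q @ x)

x = rng.uniform(0, 1, n)                           # continuous relaxation of the state
lr, steps = 0.05, 2000
for t in range(steps):
    temperature = 1.0 * (1 - t / steps)            # annealed noise level
    discreteness = 5.0 * t / steps                 # growing pressure toward {0, 1}
    grad = 2 * Q @ x + discreteness * (1 - 2 * x)  # grad of energy + sum_i x_i(1 - x_i)
    noise = np.sqrt(lr * temperature) * rng.standard_normal(n)
    x = np.clip(x - lr * grad + noise, 0, 1)

solution = (x > 0.5).astype(int)                   # read out a discrete state at the end
print(solution, energy(solution))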

A Neural-Symbolic Approach to Design of CAPTCHA

no code implementations 29 Oct 2017 Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu

To address this, the paper promotes image/visual-captioning-based CAPTCHAs, which are robust against machine-learning-based attacks.

BIG-bench Machine Learning, Image Captioning +1

Tensor Product Generation Networks for Deep NLP Modeling

2 code implementations NAACL 2018 Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu

We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks.

Caption Generation

Question-Answering with Grammatically-Interpretable Representations

no code implementations 23 May 2017 Hamid Palangi, Paul Smolensky, Xiaodong He, Li Deng

In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory.

Inductive Bias, Question Answering

Basic Reasoning with Tensor Product Representations

no code implementations 12 Jan 2016 Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng

In this paper we present the initial development of a general theory for mapping inference in predicate logic to computation over Tensor Product Representations (TPRs; Smolensky (1990), Smolensky & Legendre (2006)).

Question Answering
