Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, Paul Smolensky
Machine translation has seen rapid progress with the advent of Transformer-based models.
On several syntactic and semantic probing tasks, we demonstrate that structural information emerges in the role vectors and that the Tensor-Product Representation (TPR) layer outputs offer improved syntactic interpretability.
A longstanding question in cognitive science concerns the learning mechanisms underlying compositionality in human cognition.
In this paper, we propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
To achieve this, we model exploratory inspection and diagnostic tasks for deep-learning training processes as specifications over streams, using a map-reduce paradigm with which many data scientists are already familiar.
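As a rough illustration of that stream-specification style, a gradient-norm spike detector can be phrased as a map → filter → reduce pipeline. The `TrainingEvent` type, the `spike_report` name, and the threshold here are all hypothetical, invented for this sketch rather than taken from the paper's actual API:

```python
from dataclasses import dataclass
from functools import reduce
from typing import Iterable, List, Tuple

@dataclass
class TrainingEvent:
    step: int
    loss: float
    grad_norm: float

def spike_report(events: Iterable[TrainingEvent],
                 threshold: float = 10.0) -> List[Tuple[int, float]]:
    # map: project each event onto the quantity under inspection
    norms = ((e.step, e.grad_norm) for e in events)
    # filter: keep only suspicious observations
    spikes = (p for p in norms if p[1] > threshold)
    # reduce: fold the surviving stream into a single diagnostic summary
    return reduce(lambda acc, p: acc + [p], spikes, [])

# Synthetic event stream with one injected gradient spike at step 5
events = [TrainingEvent(s, 1.0 / (s + 1), 12.0 if s == 5 else 0.5)
          for s in range(10)]
print(spike_report(events))  # -> [(5, 12.0)]
```

Each stage is an ordinary generator expression, which is what makes the paradigm feel familiar: the same map/filter/reduce vocabulary data scientists use for batch data applies to a live stream of training events.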
Transformers have increasingly outperformed gated RNNs, achieving new state-of-the-art results on supervised tasks involving text sequences.
We incorporate Tensor-Product Representations within the Transformer to better support the explicit representation of relation structure.
The model ranks #3 for Question Answering on the Mathematics Dataset.
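The core TPR operation underlying this approach can be sketched in a few lines: each filler (content) vector is bound to a role (structural-position) vector by an outer product, and the bindings are superposed by summation. The dimensions and the least-squares unbinding below are illustrative only, not the TP-Transformer's exact layer:

```python
import numpy as np

rng = np.random.default_rng(0)
d_filler, d_role, n = 8, 4, 3        # illustrative sizes

fillers = rng.normal(size=(n, d_filler))  # content vectors (e.g., token features)
roles = rng.normal(size=(n, d_role))      # structural-position vectors

# Bind and superpose: T = sum_i f_i (outer) r_i
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))  # shape (d_filler, d_role)

# With linearly independent roles, each filler is recoverable by unbinding
# with the dual role vectors u_i, where r_j . u_i = delta_ij.
R = roles.T                            # columns are role vectors, (d_role, n)
U = R @ np.linalg.inv(R.T @ R)         # dual roles, (d_role, n)
recovered = T @ U[:, 0]                # ~= fillers[0]
print(np.allclose(recovered, fillers[0]))  # True (up to numerical error)
```

The key property is that structure (roles) and content (fillers) remain factorized inside a single tensor, so relational information stays explicitly addressable rather than entangled in an opaque embedding.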
We present a formal language with expressions denoting general symbol structures and queries that access information in those structures.
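As a loose illustration of what such expressions and queries might look like, consider binary trees as nested pairs, with queries that address substructures by a path of left/right moves. The `Expr` type and path-based `query` below are invented for this sketch; the paper's formal language is more general:

```python
from typing import Union

Expr = Union[str, tuple]  # a symbol, or a pair (left, right) of sub-expressions

def query(expr: Expr, path: str) -> Expr:
    """Follow a path of 'L'/'R' moves and return the addressed substructure."""
    for move in path:
        if not isinstance(expr, tuple):
            raise ValueError("path descends below a leaf symbol")
        expr = expr[0] if move == "L" else expr[1]
    return expr

tree: Expr = ("A", ("B", "C"))  # the symbol structure (A (B C))
print(query(tree, ""))    # ('A', ('B', 'C'))  -- the whole structure
print(query(tree, "RL"))  # 'B'                -- left child of the right subtree
```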