no code implementations • 29 Sep 2021 • Hugh Perkins
We present a benchmark, Icy, for measuring the compositional inductive bias of models in the context of emergent communications.
1 code implementation • 26 May 2021 • Hugh Perkins
We propose TexRel, a new dataset that serves as a playground for the study of emergent communications, in particular the study of relations.
1 code implementation • 6 Mar 2021 • Hugh Perkins
We show that it is possible to craft transformations that, applied to compositional grammars, result in grammars that neural networks can learn easily but humans cannot.
1 code implementation • 27 Jan 2021 • Hugh Perkins
We propose an architecture and process for using the Iterated Learning Model ("ILM") for artificial neural networks.
1 code implementation • IJCNLP 2019 • Hugh Perkins, Yi Yang
We introduce the dialog intent induction task and present a novel deep multi-view clustering approach to tackle the problem.
no code implementations • 15 Jun 2016 • Hugh Perkins
This paper presents cltorch, a hardware-agnostic backend for the Torch neural network framework.
no code implementations • 24 Dec 2015 • Hugh Perkins, Minjie Xu, Jun Zhu, Bo Zhang
As one of the most popular classifiers, linear SVMs still face challenges on very large-scale problems, even though linear and sub-linear algorithms have recently been developed for single machines.
no code implementations • 10 Oct 2013 • Jun Zhu, Ning Chen, Hugh Perkins, Bo Zhang
Gibbs max-margin supervised topic models minimize an expected margin loss, which is an upper bound of the existing margin loss derived from an expected prediction rule.