A Permutation-Invariant Representation of Neural Networks with Neuron Embeddings

29 Sep 2021  ·  Ryan Zhou, Christian Muise, Ting Hu

Neural networks are traditionally represented by their weights. A key property of this representation is that a network has many equivalent representations, obtained by permuting the order of its neurons. These representations are generally incompatible with one another, and transferring part of a network without its preceding layers is usually destructive to any learned relationships. This paper proposes a method for representing a neural network in terms of an embedding of each neuron rather than explicit weights. In addition to reducing the number of free parameters, this encoding is agnostic to the ordering of neurons, bypassing a key problem for weight-based representations. It allows us to transplant individual neurons and layers into another network while maintaining their functionality, which is particularly important for tasks such as transfer learning and neuroevolution. We show through experiments on the MNIST and CIFAR-10 datasets that this method can represent networks that achieve performance identical to a direct weight representation, and that transfer done this way preserves much of the performance between two networks that are distant in parameter space.
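To make the core idea concrete, here is a minimal NumPy sketch of what a neuron-embedding representation could look like. It assumes each neuron carries a learned embedding vector and that a weight is reconstructed as the inner product of the two endpoint neurons' embeddings; the inner product is one simple illustrative choice, not necessarily the reconstruction function used in the paper. The sketch then checks the permutation property the abstract describes: reordering neurons (together with their embeddings) permutes the reconstructed weight matrix consistently, leaving the layer's function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: every neuron carries a d-dimensional embedding.
d = 8                      # embedding dimension (assumed)
n_in, n_hidden = 4, 3      # toy layer sizes

E_in = rng.normal(size=(n_in, d))       # embeddings of input-layer neurons
E_hid = rng.normal(size=(n_hidden, d))  # embeddings of hidden-layer neurons

def weights_from_embeddings(E_src, E_dst):
    """Reconstruct a weight matrix from neuron embeddings.

    Here w_ji = <e_j, e_i>, i.e. the weight between two neurons is the
    inner product of their embeddings -- an illustrative choice only.
    """
    return E_dst @ E_src.T   # shape: (n_dst, n_src)

W = weights_from_embeddings(E_in, E_hid)

# Permuting the hidden neurons (and their embeddings with them) simply
# permutes the rows of the reconstructed weight matrix: each neuron
# keeps its own incoming weights, so the layer's function is unchanged.
perm = rng.permutation(n_hidden)
W_perm = weights_from_embeddings(E_in, E_hid[perm])
assert np.allclose(W_perm, W[perm])

x = rng.normal(size=n_in)
assert np.allclose(W_perm @ x, (W @ x)[perm])
```

Because a neuron's connectivity is fully determined by its embedding, copying that embedding into another network reconstructs the same incoming weights there (provided the source-layer embeddings are shared or aligned), which is what makes neuron- and layer-level transplantation possible in this scheme.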
