Recent advances in deep learning, such as powerful generative models and joint text-image embeddings, have provided the computational creativity community with new tools, opening new perspectives for artistic pursuits.
We show experimentally that the evolved robots are effective at the task of locomotion: thanks to self-attention, different instances of the same controller embodied in the same robot can focus on different inputs.
In this review, we will provide historical context on how neural network research has engaged with complex systems, and highlight several active areas of modern deep learning research that incorporate principles of collective intelligence to advance its current capabilities.
In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture.
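A classic illustration of this principle is Conway's Game of Life: each cell updates using only its eight neighbors, yet global structures such as oscillators and gliders emerge. A minimal sketch (my own illustration, not code from the review; the grid wraps toroidally via `np.roll`):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update: each cell acts only on its 8 local neighbors."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Local rule: a live cell with 2-3 neighbors survives; a dead cell with
    # exactly 3 neighbors is born. No cell sees the full picture.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker": purely local updates yield a global period-2 oscillation.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1                      # horizontal bar of three live cells
after_two = life_step(life_step(grid))
```

After two steps the blinker returns to its starting configuration, even though no individual cell ever computed anything but its local rule.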
The approach can reliably find levels with a specific target difficulty for a variety of planning agents in only a few trials, while maintaining an understanding of their skill landscape.
In this work we focus on their ability to remain invariant to the presence or absence of details.
That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances.
Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world.
Much of machine learning research focuses on producing models which perform well on benchmark tasks, in turn improving our understanding of the challenges associated with those tasks.
Ranked #4 on Image Classification on Kuzushiji-MNIST (Error metric)
Planning has been very successful for control tasks with known environment dynamics.
Ranked #2 on Continuous Control on DeepMind Cheetah Run (Images)
A generative recurrent neural network is quickly trained in an unsupervised manner to model popular reinforcement learning environments through compressed spatio-temporal representations.
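The two-stage idea above can be sketched schematically: a "vision" model compresses each raw observation into a small latent vector, and a recurrent "memory" model predicts the next latent given the current latent, action, and hidden state. The sketch below uses toy numpy stand-ins (a frozen random encoder and a plain tanh RNN cell); all sizes and names are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

OBS, LATENT, ACTION, HIDDEN = 64, 8, 2, 16

W_enc = rng.normal(0, 0.1, (LATENT, OBS))                 # toy "vision" model
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN + LATENT + ACTION))
W_out = rng.normal(0, 0.1, (LATENT, HIDDEN))

def encode(obs):
    """Compress a raw observation into a small latent code z."""
    return np.tanh(W_enc @ obs)

def rnn_step(h, z, a):
    """Recurrent world model: predict the next latent from (h, z, action)."""
    h_new = np.tanh(W_h @ np.concatenate([h, z, a]))
    z_pred = W_out @ h_new
    return h_new, z_pred

# Roll the model forward entirely in compressed latent space,
# never touching raw pixels after the first observation.
h = np.zeros(HIDDEN)
z = encode(rng.normal(size=OBS))
for _ in range(10):
    h, z = rnn_step(h, z, np.zeros(ACTION))
```

The point of the sketch is the data flow, not the learning: once such a model is trained, an agent can be evaluated or even trained inside the compressed rollout.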
We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, by optimizing the model to produce sketches that it predicts will lead to more positive facial expressions.
It is a neural network algorithm that uses agents embedded in the network itself to discover which parts of the network to re-use for new tasks.
Ranked #5 on Continual Learning on F-CelebA (10 tasks)
This work explores hypernetworks: an approach in which one network, known as a hypernetwork, generates the weights for another network.
Ranked #14 on Language Modelling on Penn Treebank (Character Level)
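The hypernetwork idea can be sketched in a few lines of numpy: here the main network is a single dense layer whose weight matrix is produced by a linear hypernetwork from a small embedding. This is a toy illustration with made-up sizes, not the paper's LSTM/CNN setup:

```python
import numpy as np

rng = np.random.default_rng(1)

IN, OUT, EMBED = 4, 3, 2          # illustrative sizes
n_weights = IN * OUT

# The hypernetwork's own parameters: it maps a small embedding
# to the full weight matrix of the main layer.
H = rng.normal(0, 0.1, (n_weights, EMBED))

def main_net(x, embedding):
    """Main layer whose weights are generated, not stored directly."""
    W = (H @ embedding).reshape(OUT, IN)
    return np.tanh(W @ x)

y = main_net(np.ones(IN), rng.normal(size=EMBED))
```

The design point: learning happens in the much smaller space of `H` and the embeddings, so many main-network weights can be generated from few trainable parameters.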