This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others.
Several models have been developed to predict how the COVID-19 pandemic spreads and how it can be contained with non-pharmaceutical interventions (NPIs) such as social-distancing restrictions and school and business closures.
Using this data, it is possible to learn a surrogate model, and with that model, evolve a decision strategy that optimizes the outcomes.
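The surrogate-plus-strategy idea can be sketched minimally: fit a cheap model to historical (intervention, outcome) data, then search candidate interventions against that model instead of the real system. The data, the quadratic outcome, and the grid search below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical illustration: a 1-D "intervention strength" x maps to an
# observed outcome y (lower is better), minimized near x = 0.6.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)
y = (x - 0.6) ** 2 + rng.normal(0.0, 0.01, size=200)

# Surrogate: a least-squares degree-2 polynomial fit to the data.
surrogate = np.poly1d(np.polyfit(x, y, deg=2))

# "Decision strategy": evaluate candidate interventions on the cheap
# surrogate rather than the real system, and keep the best one.
candidates = np.linspace(0.0, 1.0, 101)
best = candidates[np.argmin(surrogate(candidates))]
```

In practice the surrogate would be a learned neural network and the search an evolutionary algorithm, but the division of labor is the same: the surrogate answers "what if" queries cheaply so the optimizer can explore many strategies.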
In many such tasks, a point prediction alone is not enough: the uncertainty (i.e., the risk or confidence) of that prediction must also be estimated.
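One common way to attach an uncertainty estimate to a point prediction is a bootstrap ensemble: train several models on resampled data and treat the spread of their predictions as the confidence. This is a generic sketch of that idea, not the estimator used in the paper; the linear models and data are assumptions for illustration.

```python
import numpy as np

# Synthetic data: y = 2x plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + rng.normal(0.0, 0.2, size=100)

# Bootstrap ensemble: each member is a line fit to a resampled dataset.
preds_at_0 = []
for _ in range(20):
    idx = rng.integers(0, len(x), size=len(x))   # resample with replacement
    coeffs = np.polyfit(x[idx], y[idx], deg=1)   # fit one ensemble member
    preds_at_0.append(np.polyval(coeffs, 0.0))   # predict at x = 0

point = float(np.mean(preds_at_0))  # point prediction
risk = float(np.std(preds_at_0))    # uncertainty = ensemble spread
```

The ensemble mean plays the role of the point prediction, while a wide spread flags inputs where the model should not be trusted.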
As deep learning applications continue to become more diverse, an interesting question arises: Can general problem solving arise from jointly learning several such diverse tasks?
However, the success of DNNs depends on the proper configuration of their architecture and hyperparameters.
Multitask learning, i.e., learning several tasks at once with the same neural network, can improve performance on each of the tasks.
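The simplest form of this is hard parameter sharing: all tasks flow through one shared layer, and each task adds its own small output head. The shapes and names below are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 4, 8

# One shared representation layer, learned jointly by all tasks.
W_shared = rng.normal(size=(n_in, n_hidden))

# Task-specific heads with different output sizes (hypothetical tasks).
heads = {
    "task_a": rng.normal(size=(n_hidden, 1)),
    "task_b": rng.normal(size=(n_hidden, 3)),
}

def forward(x, task):
    """Forward pass: shared layer + ReLU, then the requested task's head."""
    h = np.maximum(x @ W_shared, 0.0)
    return h @ heads[task]

x = rng.normal(size=(5, n_in))   # a batch of 5 examples
out_a = forward(x, "task_a")     # one output per example for task A
out_b = forward(x, "task_b")     # three outputs per example for task B
```

Because gradients from every task update `W_shared`, the shared layer is pushed toward features useful across tasks, which is the usual explanation for the per-task gains.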
The conclusion is that behavior domination can help illuminate the complex dynamics of behavior-driven search, and can thus lead to the design of more scalable and robust algorithms.
4 code implementations • 1 Mar 2017 • Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, Babak Hodjat
The success of deep learning depends on finding an architecture to fit the task.
A general approach to knowledge transfer is introduced in which an agent controlled by a neural network adapts how it reuses existing networks as it learns in a new domain.