Self-Driven Particles (SDP) describe a category of multi-agent systems common in everyday life, such as flocking birds and traffic flows.
Channel pruning is broadly recognized as an effective approach to obtaining a small, compact model by eliminating unimportant channels from a large, cumbersome network.
We validate that training with an increasing number of procedurally generated scenes significantly improves the agent's generalization across scenarios with different traffic densities and road networks.
Our method can compensate for data biases by generating balanced data without introducing external annotations.
Existing methods for this task usually focus on high-level alignment based on the whole image or the object of interest, and thus cannot fully exploit fine-grained channel information.
To this end, we pose questions that future differentiable methods for neural wiring discovery need to confront, hoping to provoke discussion and a rethinking of how much bias has been implicitly imposed in existing NAS methods.
We introduce a simple and versatile framework for image-to-image translation.
We study the problem of distilling knowledge from a large deep teacher network to a much smaller student network for the task of road marking segmentation.
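Knowledge distillation of this kind is commonly trained with a soft-target loss: the student matches the teacher's temperature-softened class distribution. The sketch below shows the generic Hinton-style formulation on raw logits, not the paper's road-marking-specific scheme; the logit values and temperature `T` are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between the softened teacher and student distributions."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
# A student matching the teacher incurs lower loss than a disagreeing one.
print(distill_loss(teacher, [2.0, 0.5, -1.0]) < distill_loss(teacher, [-1.0, 0.5, 2.0]))  # → True
```

In a full training loop this term is usually combined with the ordinary hard-label segmentation loss, weighted by a hyperparameter.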
Ranked #1 on Semantic Segmentation on ApolloScape
The GSMN explicitly models objects, relations and attributes as a structured phrase, which not only allows learning the correspondence of objects, relations and attributes separately, but also benefits learning fine-grained correspondence of structured phrases.
Ranked #14 on Cross-Modal Retrieval on Flickr30k
We argue that, given a computer vision task for which a NAS method is expected, this definition can reduce the vaguely defined NAS evaluation to i) the accuracy on this task and ii) the total computation consumed to finally obtain a model with satisfactory accuracy.
Ranked #15 on Neural Architecture Search on NAS-Bench-201, ImageNet-16-120 (Accuracy (Val) metric)
In this work, we propose a hybrid framework to learn neural decisions in the classical modular pipeline through end-to-end imitation learning.
Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations.
Ranked #5 on Lane Detection on BDD100K val
In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, and the discovered cell is also transferable to ImageNet.
Ranked #24 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
Reinforcement learning agents need exploratory behaviors to escape from local optima.
In this paper, we considerably improve the accuracy and robustness of predictions through heterogeneous auxiliary-network feature mimicking, a new and effective training method that provides much richer contextual signals beyond the steering direction alone.
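Feature mimicking is typically implemented as an auxiliary regression term: the driving network's intermediate features are pushed toward those of an auxiliary network, on top of the main steering loss. The sketch below shows this generic formulation; the feature vectors and the weight `lam` are hypothetical, not values from the paper.

```python
def mimic_loss(student_feats, teacher_feats):
    """Mean squared error between student features and auxiliary-network features."""
    assert len(student_feats) == len(teacher_feats)
    n = len(student_feats)
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / n

def total_loss(steering_loss, student_feats, teacher_feats, lam=0.1):
    """Main steering loss plus a weighted feature-mimicking term (lam is a hypothetical weight)."""
    return steering_loss + lam * mimic_loss(student_feats, teacher_feats)

# Perfectly mimicked features add nothing to the steering loss.
print(total_loss(0.5, [1.0, 2.0], [1.0, 2.0]))  # → 0.5
```

The auxiliary networks only participate during training; at inference time the driving network runs alone, so the richer supervision costs nothing at deployment.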
Ranked #1 on Steering Control on BDD100K val