Search Results for author: Roberto J. López-Sastre

Found 12 papers, 7 papers with code

Embarrassingly Simple Model for Early Action Proposal

no code implementations • 17 Oct 2018 Marcos Baptista-Ríos, Roberto J. López-Sastre, Francisco Javier Acevedo-Rodríguez, Saturnino Maldonado-Bascón

Early action proposal consists of generating high-quality candidate temporal segments that are likely to contain an action in a video stream, as soon as the action happens.

ISA$^2$: Intelligent Speed Adaptation from Appearance

no code implementations • 11 Oct 2018 Carlos Herranz-Perdiguero, Roberto J. López-Sastre

Technically, the goal of an ISA$^2$ model is to predict for a given image of a driving scenario the proper speed of the vehicle.


In pixels we trust: From Pixel Labeling to Object Localization and Scene Categorization

no code implementations • 19 Jul 2018 Carlos Herranz-Perdiguero, Carolina Redondo-Cabrera, Roberto J. López-Sastre

While there has been significant progress in solving the problems of image pixel labeling, object detection and scene classification, existing approaches normally address them separately.

Object Detection +4

Learning Short-Cut Connections for Object Counting

no code implementations • 8 May 2018 Daniel Oñoro-Rubio, Mathias Niepert, Roberto J. López-Sastre

Standard short-cut connections are connections between layers in deep neural networks which skip at least one intermediate layer.
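The short-cut idea described above can be illustrated with a minimal numpy sketch (this is an assumption-laden toy, not the paper's implementation): the input of a block is added to the output of its stacked layers, so the intermediate layer is "skipped" by an identity path.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, W, b):
    """A single fully connected layer with a ReLU activation."""
    return np.maximum(0.0, x @ W + b)

def block_with_shortcut(x, W1, b1, W2, b2):
    """Two stacked layers whose output is summed with the block's input,
    giving an identity path that skips the intermediate layer."""
    h = dense_relu(x, W1, b1)
    h = h @ W2 + b2                  # second layer, pre-activation
    return np.maximum(0.0, h + x)   # short-cut: add the skipped input

d = 8
x = rng.standard_normal((4, d))
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, d)) * 0.1
b1, b2 = np.zeros(d), np.zeros(d)

y = block_with_shortcut(x, W1, b1, W2, b2)
print(y.shape)  # (4, 8): the short-cut requires matching input/output shapes
```

Because the identity path requires matching shapes, real architectures often insert a projection on the short-cut when dimensions differ; the paper's contribution is learning which such connections to use, which this toy does not attempt.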

Density Estimation Object Counting

Learning to Exploit the Prior Network Knowledge for Weakly-Supervised Semantic Segmentation

1 code implementation • 13 Apr 2018 Carolina Redondo-Cabrera, Marcos Baptista-Ríos, Roberto J. López-Sastre

Training a Convolutional Neural Network (CNN) for semantic segmentation typically requires collecting a large amount of accurate pixel-level annotations, a hard and expensive task.

Weakly-Supervised Semantic Segmentation

Unsupervised learning from videos using temporal coherency deep networks

1 code implementation • 24 Jan 2018 Carolina Redondo-Cabrera, Roberto J. López-Sastre

Here we propose two Siamese Convolutional Neural Network architectures, with corresponding novel loss functions, to learn from unlabeled videos. They jointly exploit the local temporal coherence between contiguous frames and a global discriminative margin used to separate representations of different videos.
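The two objectives in the abstract can be sketched as toy loss terms (a hedged illustration, not the paper's exact formulation): a coherence term that pulls embeddings of contiguous frames together, and a contrastive margin term that pushes embeddings of frames from different videos apart.

```python
import numpy as np

def coherence_loss(z_t, z_t1):
    """Local temporal coherence: squared distance between the embeddings
    of two contiguous frames from the same video (smaller is better)."""
    return float(np.sum((z_t - z_t1) ** 2))

def margin_loss(z_a, z_b, margin=1.0):
    """Global discriminative term: hinge penalty when embeddings of frames
    from *different* videos fall inside the margin."""
    d = np.linalg.norm(z_a - z_b)
    return float(max(0.0, margin - d) ** 2)

# Toy embeddings (hypothetical values for illustration).
z_t  = np.array([0.1, 0.2])    # frame t   of video A
z_t1 = np.array([0.1, 0.25])   # frame t+1 of video A
z_b  = np.array([2.0, -1.0])   # a frame of video B

print(coherence_loss(z_t, z_t1))  # small: contiguous frames already agree
print(margin_loss(z_t, z_b))      # 0.0: the videos sit beyond the margin
```

In a Siamese setup, both branches share weights, so minimizing a weighted sum of these two terms over frame pairs shapes a single embedding network without any labels.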
