Multimodal image super-resolution (SR) is the reconstruction of a high-resolution image from a low-resolution observation with the aid of another image modality.
The deep unfolding architecture is used as a core component of a multimodal framework for guided image super-resolution.
Deep learning methods have been successfully applied to a wide range of computer vision tasks, including image classification, object detection, and segmentation.
Predicting the geographical location of users on social networks like Twitter is an active research topic, with numerous methods proposed to date.
In the context of Twitter user geolocation, we realize MENET with textual, network, and metadata features.
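The fusion of textual, network, and metadata features can be illustrated with a minimal sketch: each modality is encoded as a feature vector, the vectors are concatenated, and a softmax layer scores candidate regions. All dimensions, weights, and the region count below are hypothetical placeholders, not values from MENET.

```python
import math
import random

random.seed(0)

# Hypothetical per-user feature vectors (dimensions are illustrative only):
text_feat = [random.gauss(0, 1) for _ in range(300)]  # e.g. averaged tweet word embeddings
net_feat = [random.gauss(0, 1) for _ in range(128)]   # e.g. node embedding of the follower graph
meta_feat = [random.gauss(0, 1) for _ in range(10)]   # e.g. timezone, posting-time statistics

# Multimodal fusion by concatenation.
x = text_feat + net_feat + meta_feat

# A single randomly initialized linear layer scoring 5 candidate regions,
# followed by a numerically stable softmax.
n_regions = 5
W = [[random.gauss(0, 0.01) for _ in range(len(x))] for _ in range(n_regions)]
logits = [sum(w * v for w, v in zip(row, x)) for row in W]
m = max(logits)
exps = [math.exp(l - m) for l in logits]
total = sum(exps)
probs = [e / total for e in exps]

predicted_region = max(range(n_regions), key=lambda i: probs[i])
print(len(x), round(sum(probs), 6), predicted_region)
```

In a trained system the encoders and the classification layer would be learned jointly; the sketch only shows how heterogeneous features combine into one prediction.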
Our dictionary learning framework can be tailored to both single-scale and multi-scale settings, with the latter yielding a significant performance improvement.
In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings.