no code implementations • 15 Dec 2023 • Nelson Perez-Rojas, Saul Calderon-Ramirez, Martin Solis-Salazar, Mario Romero-Sandoval, Monica Arias-Monge, Horacio Saggion
Text simplification, a key task in natural language processing, aims to make texts more comprehensible, particularly for specific groups such as visually impaired Spanish speakers; Spanish remains under-represented in this field.
no code implementations • 3 Nov 2022 • Isaac Benavides-Mata, Saul Calderon-Ramirez
Unlabeled data is frequently more widely available than labeled data, so it is used to improve a model's generalization when labeled data is scarce.
no code implementations • 1 Mar 2022 • Saul Calderon-Ramirez, Shengxiang Yang, David Elizondo
In a semi-supervised setting, unlabelled data is used to improve the accuracy and generalization of a model trained with a small labelled dataset.
no code implementations • 17 Aug 2021 • Saul Calderon-Ramirez, Shengxiang Yang, David Elizondo, Armaghan Moemeni
This results in a distribution mismatch between the unlabelled and labelled datasets.
no code implementations • 24 Jul 2021 • Saul Calderon-Ramirez, Diego Murillo-Hernandez, Kevin Rojas-Salazar, David Elizondo, Shengxiang Yang, Miguel Molina-Cabello
The use of two popular and publicly available datasets (INbreast and CBIS-DDSM) as source data for training and testing the models on the novel target dataset is evaluated.
1 code implementation • 10 Jun 2021 • Willard Zamora-Cardenas, Mauro Mendez, Saul Calderon-Ramirez, Martin Vargas, Gerardo Monge, Steve Quiros, David Elizondo, Miguel A. Molina-Cabello
To enforce the learning of morphological information per pixel, a deep distance transformer (DDT) acts as a backbone model.
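The abstract does not give implementation details, but the per-pixel target such a distance-regression model learns can be sketched as a Euclidean distance transform. A brute-force reference version (not the authors' method; a real pipeline would use an optimized routine such as `scipy.ndimage.distance_transform_edt`):

```python
import numpy as np

def distance_transform(mask):
    """Per-pixel Euclidean distance to the nearest foreground pixel.

    Brute-force O(N^2) reference implementation, suitable only for
    tiny masks with at least one foreground pixel.
    """
    fg = np.argwhere(mask)                 # foreground coordinates
    out = np.zeros(mask.shape, dtype=float)
    for idx in np.ndindex(mask.shape):
        d = np.sqrt(((fg - idx) ** 2).sum(axis=1))
        out[idx] = d.min()                 # nearest-foreground distance
    return out

mask = np.array([[1, 0, 0],
                 [0, 0, 0],
                 [0, 0, 0]])
dt = distance_transform(mask)
# the far corner (2, 2) lies sqrt(8) away from the foreground pixel at (0, 0)
```

Regressing such a map per pixel forces the network to encode how far each pixel lies from object boundaries, i.e. morphological information.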
2 code implementations • 20 Apr 2021 • Saul Calderon-Ramirez, Luis Oala
In this work, we demonstrate the limits of semantic data set matching.
no code implementations • 19 Aug 2020 • Saul Calderon-Ramirez, Shengxiang Yang, Armaghan Moemeni, David Elizondo, Simon Colreavy-Donnelly, Luis Fernando Chavarria-Estrada, Miguel A. Molina-Cabello
In this work, we evaluate the performance of the semi-supervised deep learning architecture known as MixMatch using a very limited number of labelled observations and a highly imbalanced labelled dataset.
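A core step of the MixMatch algorithm referenced here is temperature sharpening of the guessed label distribution for unlabelled samples; a minimal sketch:

```python
import numpy as np

def sharpen(p, T=0.5):
    """Temperature sharpening as used in MixMatch: raising each
    probability to 1/T and renormalizing lowers the entropy of a
    guessed label distribution, yielding a more confident pseudo-label."""
    p = np.asarray(p, dtype=float) ** (1.0 / T)
    return p / p.sum()

# an uncertain guess over two classes becomes more confident
q = sharpen([0.6, 0.4], T=0.5)
```

With `T = 0.5`, the guess `[0.6, 0.4]` sharpens to roughly `[0.69, 0.31]`; as `T` approaches 0 the distribution approaches a one-hot label.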
1 code implementation • 14 Jun 2020 • Saul Calderon-Ramirez, Luis Oala, Jordina Torrents-Barrena, Shengxiang Yang, Armaghan Moemeni, Wojciech Samek, Miguel A. Molina-Cabello
In this work, we propose MixMOOD - a systematic approach to mitigating the effect of class distribution mismatch in semi-supervised deep learning (SSDL) with MixMatch.
1 code implementation • ICANN 2019 • Jose Carranza-Rojas, Saul Calderon-Ramirez, Adán Mora-Fallas, Michael Granados-Menani, Jordina Torrents-Barrena
The layer optimizes the unsharp masking parameters during model training, without any manual intervention.
Ranked #13 on Scene Text Detection on ICDAR 2013