2 code implementations • 6 Aug 2023 • José Cañete, Gabriel Chaperon, Rodrigo Fuentes, Jou-Hui Ho, Hojin Kang, Jorge Pérez
Spanish is one of the five most widely spoken languages in the world.
no code implementations • 2 Mar 2023 • Francisco Plana, Andrés Abeliuk, Jorge Pérez
Our experiments show that QuickCent makes estimates that are competitive in accuracy with the best alternative methods tested, on both synthetic scale-free networks and empirical networks.
1 code implementation • 5 Sep 2022 • Cinthia Sánchez, Hernan Sarmiento, Andres Abeliuk, Jorge Pérez, Barbara Poblete
In this work, we study the task of automatically classifying messages that are related to crisis events by leveraging cross-language and cross-domain labeled data.
1 code implementation • NeurIPS 2021 • Marcelo Arenas, Daniel Baez, Pablo Barceló, Jorge Pérez, Bernardo Subercaseaux
Several queries and scores have recently been proposed to explain individual predictions over ML models.
no code implementations • 29 Sep 2021 • Hojin Kang, Jou-Hui Ho, Diego Mesquita, Jorge Pérez, Amauri H Souza
To avoid temporal message passing, OGN maintains a summary of each node's temporal neighbors in a latent variable and updates it as events unfold, in an online fashion.
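The per-node online update can be sketched as follows. The gated-average rule, class name, and event features are illustrative assumptions, not the OGN architecture itself; the point is only that each event touches the two endpoint summaries and nothing else.

```python
import numpy as np

class OnlineNodeState:
    """Toy sketch: one latent summary per node, updated event by event.
    The gated-average update is an assumption for illustration, not the
    paper's actual OGN update."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.state = {}  # node id -> latent summary vector

    def _get(self, node):
        if node not in self.state:
            self.state[node] = np.zeros(self.dim)
        return self.state[node]

    def update(self, src, dst, event_feat, alpha=0.5):
        # Blend each endpoint's summary with a message built from the
        # other endpoint, so no pass over past temporal neighbors is needed.
        h_src, h_dst = self._get(src), self._get(dst)
        msg_to_src = np.tanh(h_dst + event_feat)
        msg_to_dst = np.tanh(h_src + event_feat)
        self.state[src] = (1 - alpha) * h_src + alpha * msg_to_src
        self.state[dst] = (1 - alpha) * h_dst + alpha * msg_to_dst

# A stream of events updates node states online, one event at a time.
ogn = OnlineNodeState(dim=4)
for (u, v) in [(0, 1), (1, 2), (0, 2)]:
    ogn.update(u, v, event_feat=np.ones(4))
```

Because each update reads and writes only the two endpoint summaries, the cost per event is constant in the history length, which is what makes the scheme online.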
no code implementations • 30 Apr 2021 • Aymé Arango, Jorge Pérez, Barbara Poblete
Our proposal constitutes, to the best of our knowledge, the first attempt to construct multilingual task-specific representations.
1 code implementation • 27 Mar 2021 • Jesus Perez-Martin, Benjamin Bustos, Silvio Jamil F. Guimarães, Ivan Sipiran, Jorge Pérez, Grethel Coello Said
When the visual information comes from videos, this leads to video-text research, which includes several challenging tasks such as video question answering, natural-language video summarization, and video-to-text and text-to-video conversion.
no code implementations • NeurIPS 2020 • Pablo Barceló, Mikaël Monet, Jorge Pérez, Bernardo Subercaseaux
We prove that this notion provides a good theoretical counterpart to current beliefs on the interpretability of models; in particular, we show that under our definition and assuming standard complexity-theoretical assumptions (such as P$\neq$NP), both linear and tree-based models are strictly more interpretable than neural networks.
no code implementations • ICLR 2020 • Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, Juan Pablo Silva
We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs.
no code implementations • 19 Mar 2020 • Constanza Fierro, Jorge Pérez, Javier Mora
Deep learning techniques have been successfully applied to predict unplanned readmissions of patients in medical centers.
no code implementations • 25 Sep 2019 • Javier Carrasco, Aidan Hogan, Jorge Pérez
Given a classifier and a test image, we compute an approximate minimal-entropy positive image for which the classifier provides a correct classification, becoming incorrect upon any further reduction.
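The reduction idea can be sketched with a greedy loop: try to erase each active pixel and keep the erasure only while the classifier still predicts the correct label. The greedy pixel-by-pixel procedure and the toy classifier below are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def minimal_positive_sketch(classify, image, label):
    """Greedy sketch: zero out pixels one at a time while the classifier
    still predicts `label`; restore a pixel if removing it breaks the
    prediction. This greedy scheme is an illustrative assumption, not
    the paper's actual minimal-entropy procedure."""
    img = image.copy()
    for i, j in np.argwhere(img != 0):
        saved = img[i, j]
        img[i, j] = 0            # tentatively erase this pixel
        if classify(img) != label:
            img[i, j] = saved    # restore: this pixel was necessary
    return img

# Hypothetical toy classifier: predicts 1 iff the top-left pixel is on.
classify = lambda im: int(im[0, 0] != 0)
image = np.ones((3, 3))
reduced = minimal_positive_sketch(classify, image, label=1)
```

For this toy classifier the loop strips every pixel except the one the decision actually depends on, so the surviving pixels act as a rough explanation of the prediction.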
no code implementations • ICLR 2019 • Jorge Pérez, Javier Marinković, Pablo Barceló
Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences.