1 code implementation • MMMPIE (COLING) 2022 • Rodrigo Santos, António Branco, João Ricardo Silva
Cross-modal language and image processing is envisaged as a way to improve language understanding by resorting to visual grounding, but only recently, with the emergence of neural architectures specifically tailored to cope with both modalities, has it attracted increased attention and obtained promising results.
no code implementations • 28 Jul 2024 • Luís Gomes, António Branco, João Silva, João Rodrigues, Rodrigo Santos
Sentence encoders encode the semantics of their input, enabling key downstream applications such as classification, clustering, and retrieval.
no code implementations • 8 Apr 2024 • Tomás Osório, Bernardo Leite, Henrique Lopes Cardoso, Luís Gomes, João Rodrigues, Rodrigo Santos, António Branco
Similarly, the respective fine-tuned neural language models, developed with a low-rank adaptation approach, are made available as baselines that can stimulate future work on the neural processing of Portuguese.
no code implementations • 12 Mar 2024 • Rodrigo Santos, João Silva, António Branco
The combination of language processing and image processing continues to attract increasing interest, given recent impressive advances that leverage the combined strengths of both domains of research.
no code implementations • 4 Mar 2024 • Rodrigo Santos, João Rodrigues, Luís Gomes, João Silva, António Branco, Henrique Lopes Cardoso, Tomás Freitas Osório, Bernardo Leite
To foster the neural encoding of Portuguese, this paper contributes foundation encoder models that expand the still very scarce ecosystem of large language models specifically developed for this language that are fully open, in the sense that they are open source and openly distributed for free under an open license for any purpose, including research and commercial use.
no code implementations • 29 Feb 2024 • Rodrigo Santos, João Silva, Luís Gomes, João Rodrigues, António Branco
To advance the neural decoding of Portuguese, in this paper we present a fully open Transformer-based, instruction-tuned decoder model that sets a new state of the art in this respect.
no code implementations • 11 May 2023 • João Rodrigues, Luís Gomes, João Silva, António Branco, Rodrigo Santos, Henrique Lopes Cardoso, Tomás Osório
To advance the neural encoding of Portuguese (PT), and a fortiori the technological preparation of this language for the digital age, we developed a Transformer-based foundation model that sets a new state of the art in this respect for two of its variants, namely the European variant spoken in Portugal (PT-PT) and the American variant spoken in Brazil (PT-BR).
no code implementations • 10 Mar 2020 • Raphael Thiago, Renan Souza, L. Azevedo, E. Soares, Rodrigo Santos, Wallas Santos, Max De Bayser, M. Cardoso, M. Moreno, Renato Cerqueira
Machine Learning (ML) has taken on an increasingly important role, becoming essential in several industries.