Search Results for author: Pierre Colombo

Found 45 papers, 19 papers with code

Affect-Driven Dialog Generation

no code implementations NAACL 2019 Pierre Colombo, Wojciech Witon, Ashutosh Modi, James Kennedy, Mubbasir Kapadia

The majority of current systems for end-to-end dialog generation focus on response quality without explicit control over the affective content of the responses.

From the Token to the Review: A Hierarchical Multimodal approach to Opinion Mining

no code implementations IJCNLP 2019 Alexandre Garcia, Pierre Colombo, Slim Essid, Florence d'Alché-Buc, Chloé Clavel

The task of predicting fine-grained user opinion based on spontaneous spoken language is a key problem arising in the development of Computational Agents as well as in the development of social-network-based opinion miners.

Opinion Mining

Regularly varying representation for sentence embedding

no code implementations25 Sep 2019 Hamid Jalalzai, Pierre Colombo, Chloé Clavel, Eric Gaussier, Giovanna Varni, Emmanuel Vignon, Anne Sabourin

The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpora.

Attribute Sentence +3

Heavy-tailed Representations, Text Polarity Classification & Data Augmentation

no code implementations NeurIPS 2020 Hamid Jalalzai, Pierre Colombo, Chloé Clavel, Eric Gaussier, Giovanna Varni, Emmanuel Vignon, Anne Sabourin

The dominant approaches to text representation in natural language rely on learning embeddings on massive corpora which have convenient properties such as compositionality and distance preservation.

Attribute Data Augmentation +4

The importance of fillers for text representations of speech transcripts

no code implementations EMNLP 2020 Tanvi Dinkar, Pierre Colombo, Matthieu Labeau, Chloé Clavel

While being an essential component of spoken language, fillers (e.g., "um" or "uh") often remain overlooked in Spoken Language Understanding (SLU) tasks.

Spoken Language Understanding

Learning to Disentangle Textual Representations and Attributes via Mutual Information

no code implementations1 Jan 2021 Pierre Colombo, Chloé Clavel, Pablo Piantanida

Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification (e.g., building classifiers whose decisions cannot disproportionately hurt or benefit specific groups identified by sensitive attributes), style transfer and sentence generation, among others.

Attribute Disentanglement +2

Code-switched inspired losses for generic spoken dialog representations

no code implementations27 Aug 2021 Emile Chapuis, Pierre Colombo, Matthieu Labeau, Chloe Clavel

Spoken dialog systems need to be able to handle both multiple languages and multilinguality inside a conversation (e.g., in case of code-switching).

Retrieval

Automatic Text Evaluation through the Lens of Wasserstein Barycenters

2 code implementations EMNLP 2021 Pierre Colombo, Guillaume Staerman, Chloe Clavel, Pablo Piantanida

A new metric, BaryScore, to evaluate text generation based on deep contextualized embeddings (e.g., BERT, RoBERTa, ELMo) is introduced.

Image Captioning Machine Translation +3
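For this entry, the sketch below is a minimal, self-contained illustration of the underlying idea: compare two texts through the distributions of their contextual token embeddings using a 1-D Wasserstein distance per embedding dimension. It is not the official BaryScore implementation, and the encoder name, pooling, and averaging scheme are assumptions made for this sketch.

```python
# Minimal illustration of embedding-distribution comparison (NOT the official
# BaryScore code): embed both texts with a BERT-like encoder, then compare the
# token embedding clouds dimension-by-dimension with a 1-D Wasserstein distance.
# Model name and averaging scheme are assumptions of this sketch.
import torch
import numpy as np
from scipy.stats import wasserstein_distance
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"  # any BERT-like encoder works for the illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)
encoder.eval()

def token_embeddings(text: str) -> np.ndarray:
    """Return the contextual embeddings of all tokens, shape (n_tokens, dim)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]  # (n_tokens, dim)
    return hidden.numpy()

def embedding_wasserstein(candidate: str, reference: str) -> float:
    """Average per-dimension 1-D Wasserstein distance between the two token clouds."""
    cand, ref = token_embeddings(candidate), token_embeddings(reference)
    dists = [wasserstein_distance(cand[:, d], ref[:, d]) for d in range(cand.shape[1])]
    return float(np.mean(dists))

print(embedding_wasserstein("the cat sat on the mat", "a cat was sitting on the mat"))
```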

Improving Multimodal fusion via Mutual Dependency Maximisation

no code implementations EMNLP 2021 Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloe Clavel

We demonstrate that our new penalties lead to a consistent improvement (up to 4.3 accuracy points) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: CMU-MOSI and CMU-MOSEI.

Multimodal Sentiment Analysis

Beam Search with Bidirectional Strategies for Neural Response Generation

no code implementations ICNLSP 2021 Pierre Colombo, Chouchang Yang, Giovanna Varni, Chloé Clavel

Sequence-to-sequence neural networks have been widely used in language-based applications as they have flexible capabilities to learn various language models.

Language Modelling Response Generation +1

InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation

1 code implementation2 Dec 2021 Pierre Colombo, Chloe Clavel, Pablo Piantanida

In this paper, we introduce InfoLM, a family of untrained metrics that can be viewed as string-based metrics addressing the aforementioned flaws thanks to a pre-trained masked language model.

Language Modelling Text Generation
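The snippet below is a simplified sketch of this kind of untrained, masked-LM-based metric: mask each token in turn, collect the model's predicted vocabulary distributions, average them into one sentence-level distribution, and compare candidate and reference with an information measure (KL divergence here). It is not the official InfoLM implementation; the encoder, the aggregation, and the choice of measure are assumptions of this sketch.

```python
# Simplified sketch of an InfoLM-style untrained metric (not the official code):
# mask each token in turn, collect the masked LM's predicted distribution,
# average them into one sentence-level distribution over the vocabulary, and
# compare candidate vs. reference with a KL divergence.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "bert-base-uncased"  # assumed encoder for this illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL)
mlm.eval()

def sentence_distribution(text: str) -> torch.Tensor:
    """Average the MLM distributions obtained by masking each token in turn."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    dists = []
    for pos in range(1, len(ids) - 1):          # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, pos]
        dists.append(torch.softmax(logits, dim=-1))
    return torch.stack(dists).mean(dim=0)

def infolm_like_score(candidate: str, reference: str) -> float:
    """KL divergence between the two averaged vocabulary distributions."""
    p, q = sentence_distribution(reference), sentence_distribution(candidate)
    return float(torch.sum(p * (torch.log(p + 1e-12) - torch.log(q + 1e-12))))

print(infolm_like_score("the weather is nice today", "today the weather is pleasant"))
```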

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations6 Dec 2021 Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Data Augmentation

What are the best systems? New perspectives on NLP Benchmarking

1 code implementation8 Feb 2022 Pierre Colombo, Nathan Noiry, Ekhine Irurozki, Stephan Clemencon

In Machine Learning, a benchmark refers to an ensemble of datasets associated with one or multiple metrics, together with a way to aggregate different systems' performances.

Benchmarking
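The toy example below illustrates why the aggregation step matters: averaging raw scores across tasks with very different scales and averaging per-task ranks can produce different final orderings of the systems. The numbers are made up for the example, and this is not the aggregation method proposed in the paper.

```python
# Toy illustration (not the paper's method) of how the aggregation procedure
# changes a benchmark's final ordering: mean raw score vs. mean rank across
# tasks. Scores below are invented for the example.
import numpy as np

systems = ["sys_A", "sys_B", "sys_C"]
# rows: systems, columns: tasks (metrics on very different scales)
scores = np.array([
    [0.90, 20.0, 0.30],
    [0.60, 90.0, 0.10],
    [0.85, 25.0, 0.40],
])

mean_score = scores.mean(axis=1)
# rank within each task (higher score = better = rank 1), then average
ranks = scores.shape[0] - scores.argsort(axis=0).argsort(axis=0)
mean_rank = ranks.mean(axis=1)

# The two orderings disagree: sys_B wins on mean score but loses on mean rank.
print("mean-score order:", [systems[i] for i in np.argsort(-mean_score)])
print("mean-rank order: ", [systems[i] for i in np.argsort(mean_rank)])
```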

A Differential Entropy Estimator for Training Neural Networks

1 code implementation14 Feb 2022 Georg Pichler, Pierre Colombo, Malik Boudiaf, Günther Koliander, Pablo Piantanida

Mutual Information (MI) has been widely used as a loss regularizer for training neural networks.

Domain Adaptation

Learning Disentangled Textual Representations via Statistical Measures of Similarity

no code implementations ACL 2022 Pierre Colombo, Guillaume Staerman, Nathan Noiry, Pablo Piantanida

When working with textual data, a natural application of disentangled representations is fair classification, where the goal is to make predictions without being biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender or race).

Attribute

Of Human Criteria and Automatic Metrics: A Benchmark of the Evaluation of Story Generation

2 code implementations COLING 2022 Cyril Chhun, Pierre Colombo, Chloé Clavel, Fabian M. Suchanek

However, there is no consensus on which human evaluation criteria to use, and no analysis of how well automatic criteria correlate with them.

Story Generation

The Glass Ceiling of Automatic Evaluation in Natural Language Generation

no code implementations31 Aug 2022 Pierre Colombo, Maxime Peyrard, Nathan Noiry, Robert West, Pablo Piantanida

Automatic evaluation metrics capable of replacing human judgments are critical to allowing fast development of new methods.

Text Generation

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

5 code implementations9 Nov 2022 BigScience Workshop, :, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Daniel McDuff, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. 
Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas Wolf

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions.

Language Modelling Multilingual NLP

Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data

no code implementations18 Dec 2022 Maxime Darrin, Pablo Piantanida, Pierre Colombo

In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e., we can access the soft-predictions but not the internal states of the model.

Machine Translation Out of Distribution (OOD) Detection +2
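To make the black-box setting concrete, the snippet below computes two standard OOD baselines that use only the soft-probabilities returned by a model (maximum softmax probability and predictive entropy). These are illustrative baselines for the setting, not the RAINPROOF score proposed in the paper.

```python
# Generic black-box OOD scores computed from soft-probabilities only (no access
# to internal model states). Standard baselines shown for illustration; they
# are NOT the RAINPROOF score itself.
import numpy as np

def max_softmax_probability(probs: np.ndarray) -> np.ndarray:
    """Higher value = more in-distribution. probs: (batch, n_classes)."""
    return probs.max(axis=-1)

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Higher value = more out-of-distribution. probs: (batch, n_classes)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Example: two sharply peaked predictions and one near-uniform prediction.
probs = np.array([
    [0.96, 0.02, 0.02],
    [0.90, 0.05, 0.05],
    [0.36, 0.33, 0.31],
])
print("MSP:    ", max_softmax_probability(probs))
print("entropy:", predictive_entropy(probs))
```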

Unsupervised Layer-wise Score Aggregation for Textual OOD Detection

no code implementations20 Feb 2023 Maxime Darrin, Guillaume Staerman, Eduardo Dadalto Câmara Gomes, Jackie CK Cheung, Pablo Piantanida, Pierre Colombo

More importantly, we show that the usual choice (the last layer) is rarely the best one for OOD detection and that far better results could be achieved if the best layer were picked.

feature selection Out of Distribution (OOD) Detection
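The synthetic example below illustrates the layer-choice issue: it computes one Mahalanobis-distance OOD score per layer of (here, randomly generated) features and checks which layer best separates in-distribution from OOD samples; the most informative layer need not be the last one. This is an illustration of the problem, not the aggregation method proposed in the paper, and all features are synthetic.

```python
# Illustration (on synthetic features, not the paper's aggregation method) of
# computing one OOD score per layer with a Mahalanobis distance to the
# in-distribution training statistics; here the OOD shift is injected at
# layer 1, so the last layer is not the most discriminative one.
import numpy as np
from sklearn.covariance import EmpiricalCovariance

rng = np.random.default_rng(0)
n_layers, dim = 4, 16

train = [rng.normal(size=(500, dim)) for _ in range(n_layers)]
test_in = [rng.normal(size=(100, dim)) for _ in range(n_layers)]
test_ood = [rng.normal(size=(100, dim)) + (2.0 if layer == 1 else 0.0)
            for layer in range(n_layers)]

for layer in range(n_layers):
    cov = EmpiricalCovariance().fit(train[layer])
    score_in = cov.mahalanobis(test_in[layer]).mean()
    score_ood = cov.mahalanobis(test_ood[layer]).mean()
    print(f"layer {layer}: mean Mahalanobis in={score_in:.1f} ood={score_ood:.1f}")
```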

Hallucinations in Large Multilingual Translation Models

1 code implementation28 Mar 2023 Nuno M. Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, André F. T. Martins

Large-scale multilingual machine translation systems have demonstrated remarkable ability to translate directly between numerous languages, making them increasingly appealing for real-world applications.

Language Modelling Large Language Model +2

Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks

no code implementations17 May 2023 Anas Himmi, Ekhine Irurozki, Nathan Noiry, Stephan Clemencon, Pierre Colombo

This paper formalizes an existing problem in NLP research: benchmarking when some systems' scores are missing on a task, and proposes a novel approach to address it.

Benchmarking

A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection

1 code implementation6 Jun 2023 Eduardo Dadalto, Pierre Colombo, Guillaume Staerman, Nathan Noiry, Pablo Piantanida

A key feature of out-of-distribution (OOD) detection is to exploit a trained neural network by extracting statistical patterns and relationships through the multi-layer classifier to detect shifts in the expected input data distribution.

Anomaly Detection Out-of-Distribution Detection +1

xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection

1 code implementation16 Oct 2023 Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, André F. T. Martins

Widely used learned metrics for machine translation evaluation, such as COMET and BLEURT, estimate the quality of a translation hypothesis by providing a single sentence-level score.

Machine Translation Sentence +1
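The usage sketch below shows how a learned MT metric of this kind is typically scored with the open-source COMET toolkit (pip install unbabel-comet). The checkpoint identifier "Unbabel/XCOMET-XL" and the exact output fields are assumptions based on the toolkit's usual interface, not a verbatim quote of the paper's instructions.

```python
# Usage sketch with the COMET toolkit; checkpoint name and output fields are
# assumptions about the usual unbabel-comet interface.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/XCOMET-XL")   # assumed checkpoint identifier
model = load_from_checkpoint(model_path)

data = [{
    "src": "O gato sentou-se no tapete.",
    "mt":  "The cat sat in the carpet.",
    "ref": "The cat sat on the mat.",
}]
# gpus=1 assumes a GPU is available; use gpus=0 to run on CPU.
output = model.predict(data, batch_size=8, gpus=1)
print(output.scores)        # sentence-level quality scores
print(output.system_score)  # corpus-level score
```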

Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models

no code implementations21 Oct 2023 Pierre Colombo, Victor Pellegrain, Malik Boudiaf, Victor Storchan, Myriam Tami, Ismail Ben Ayed, Celine Hudelot, Pablo Piantanida

First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints.

Classification Transductive Learning

A Novel Information-Theoretic Objective to Disentangle Representations for Fair Classification

no code implementations21 Oct 2023 Pierre Colombo, Nathan Noiry, Guillaume Staerman, Pablo Piantanida

One of the pursued objectives of deep learning is to provide tools that learn abstract representations of reality from the observation of multiple contextual situations.

Attribute Disentanglement

Toward Stronger Textual Attack Detectors

1 code implementation21 Oct 2023 Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida

The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the integrity of deep NLP systems.

Revisiting Instruction Fine-tuned Model Evaluation to Guide Industrial Applications

1 code implementation21 Oct 2023 Manuel Faysse, Gautier Viaud, Céline Hudelot, Pierre Colombo

Instruction Fine-Tuning (IFT) is a powerful paradigm that strengthens the zero-shot capabilities of Large Language Models (LLMs), but in doing so induces new evaluation metric requirements.

CroissantLLM: A Truly Bilingual French-English Language Model

1 code implementation1 Feb 2024 Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F. T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo

We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.

Language Modelling Large Language Model
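Since the model is meant to run locally, a minimal generation example with the Hugging Face transformers API is shown below. The Hub identifier "croissantllm/CroissantLLMBase" is an assumption about where the released base checkpoint lives.

```python
# Minimal local generation example; the Hub identifier below is an assumption
# about the released base checkpoint's location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "croissantllm/CroissantLLMBase"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

prompt = "Paris est la capitale de"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```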

Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs

1 code implementation19 Feb 2024 Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo

Deploying large language models (LLMs) of several billion parameters can be impractical in most industrial use cases due to constraints such as cost, latency limitations, and hardware accessibility.

Knowledge Distillation
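To give a feel for the cross-tokenizer problem this paper targets, the sketch below compares teacher and student output distributions that live over different vocabularies by sorting both probability vectors, zero-padding the shorter one, and taking an elementwise L1 distance. This sidesteps vocabulary alignment; it is a simplified illustration, not necessarily the exact Universal Logit Distillation loss from the paper, and the function name is hypothetical.

```python
# Simplified cross-vocabulary comparison of teacher and student distributions
# (illustration only, not necessarily the paper's exact loss).
import torch
import torch.nn.functional as F

def sorted_prob_l1(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> torch.Tensor:
    """teacher_logits: (batch, V_t), student_logits: (batch, V_s), V_t != V_s allowed."""
    p_t = F.softmax(teacher_logits, dim=-1).sort(dim=-1, descending=True).values
    p_s = F.softmax(student_logits, dim=-1).sort(dim=-1, descending=True).values
    v = max(p_t.size(-1), p_s.size(-1))
    p_t = F.pad(p_t, (0, v - p_t.size(-1)))   # zero-pad the vocabulary axis
    p_s = F.pad(p_s, (0, v - p_s.size(-1)))
    return (p_t - p_s).abs().sum(dim=-1).mean()

# Teacher and student with different vocabulary sizes.
teacher = torch.randn(4, 32000)
student = torch.randn(4, 50257)
print(sorted_prob_l1(teacher, student))
```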

Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation

no code implementations20 Feb 2024 Anas Himmi, Guillaume Staerman, Marine Picot, Pierre Colombo, Nuno M. Guerreiro

Hallucinated translations pose significant threats and safety concerns when it comes to the practical deployment of machine translation systems.

Hallucination Machine Translation +1

Tower: An Open Multilingual Large Language Model for Translation-Related Tasks

1 code implementation27 Feb 2024 Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, André F. T. Martins

While general-purpose large language models (LLMs) demonstrate proficiency on multiple tasks within the domain of translation, approaches based on open LLMs are competitive only when specializing on a single task.

Language Modelling Large Language Model +1

Code-switched inspired losses for spoken dialog representations

no code implementations EMNLP 2021 Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloé Clavel

Spoken dialogue systems need to be able to handle both multiple languages and multilinguality inside a conversation (e.g., in case of code-switching).

Retrieval Spoken Dialogue Systems
