Search Results for author: Chloé Braud

Found 15 papers, 2 papers with code

Investigation par méthodes d'apprentissage des spécificités langagières propres aux personnes avec schizophrénie (Investigating Learning Methods Applied to Language Specificity of Persons with Schizophrenia)

no code implementations JEPTALNRECITAL 2020 Maxime Amblard, Chloé Braud, Chuyuan Li, Caroline Demily, Nicolas Franck, Michel Musiol

We present experiments aimed at automatically identifying patients showing symptoms of schizophrenia in controlled conversations between patients and psychotherapists.

Tasks: Specificity

Which aspects of discourse relations are hard to learn? Primitive decomposition for discourse relation classification

no code implementations WS 2019 Charlotte Roze, Chloé Braud, Philippe Muller

Discourse relation classification has proven to be a hard task, with rather low performance on several corpora that notably differ in the relation sets they use.

Tasks: General Classification, Relation +1

Aligning Discourse and Argumentation Structures using Subtrees and Redescription Mining

no code implementations WS 2019 Laurine Huber, Yannick Toussaint, Charlotte Roze, Mathilde Dargnat, Chloé Braud

In this paper, we investigate similarities between discourse and argumentation structures by aligning subtrees in a corpus containing both annotations.

EusDisParser: improving an under-resourced discourse parser with cross-lingual data

no code implementations WS 2019 Mikel Iruskieta, Chloé Braud

More precisely, we build a monolingual system using the small set of data available and investigate the use of multilingual word embeddings to train a system for Basque using data annotated for another language.

Tasks: Multilingual Word Embeddings

When does deep multi-task learning work for loosely related document classification tasks?

no code implementations WS 2018 Emma Kerinec, Chloé Braud, Anders Søgaard

This work aims to contribute to our understanding of *when* multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning.

Tasks: Document Classification, General Classification +5

Is writing style predictive of scientific fraud?

no code implementations WS 2017 Chloé Braud, Anders Søgaard

The problem of detecting scientific fraud using machine learning was recently introduced, with initial, positive results from a model taking into account various general indicators.

Tasks: Logical Reasoning
