GECko+: a Grammatical and Discourse Error Correction Tool. We introduce GECko+, a web-based writing-assistance tool for English that corrects errors at both the sentence and the discourse level.
We demonstrate our model's competitive performance on analogy detection and resolution over multiple languages.
Analogical proportions are statements expressed in the form "A is to B as C is to D" and are used for several reasoning and classification tasks in artificial intelligence and natural language processing (NLP).
In fact, symbolic approaches have been developed to solve or to detect analogies between character strings, e.g., the axiomatic approach as well as one based on Kolmogorov complexity.
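A minimal sketch of the symbolic view on string analogies (a simplistic suffix-substitution heuristic, not the axiomatic or Kolmogorov-based approaches themselves; all names are illustrative): solve A : B :: C : ? by finding the suffix that changes between A and B and applying the same change to C.

```python
def solve_analogy(a: str, b: str, c: str):
    """Solve a : b :: c : ? for strings, assuming b differs from a
    only by a suffix substitution (a deliberately simple model)."""
    # Try the longest shared prefix of a and b first.
    for i in range(min(len(a), len(b)), -1, -1):
        if a[:i] == b[:i] and c.endswith(a[i:]):
            # a's suffix a[i:] is replaced by b's suffix b[i:].
            return c[:len(c) - len(a[i:])] + b[i:]
    return None  # no suffix-substitution solution found

def is_proportion(a: str, b: str, c: str, d: str) -> bool:
    """Analogy detection: does a : b :: c : d hold under this model?"""
    return solve_analogy(a, b, c) == d
```

For example, `solve_analogy("walk", "walked", "talk")` yields `"talked"`; prefix-side substitutions (e.g., "undo" : "do") fall outside this sketch.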
Unintended biases in machine learning (ML) models are among the major concerns that must be addressed to maintain public trust in ML.
We show that while a convolutional network can be trained to correctly estimate well-calibrated aleatoric uncertainty (the uncertainty due to the presence of noise in the images), it is unable to generate a trustworthy ellipticity distribution when exposed to previously unseen data (i.e., here, blended scenes).
Bayesian Neural Networks (BNNs) have recently emerged in the deep learning world for uncertainty estimation in classification tasks, and are used in many application domains such as astrophysics and autonomous driving. BNNs assume a prior over the weights of a neural network instead of point estimates, thereby enabling the estimation of both the aleatoric and the epistemic uncertainty of the model's prediction. Moreover, a particular type of BNN, namely MC Dropout, assumes a Bernoulli distribution on the weights by using Dropout. Several attempts to optimize the dropout rate exist, e.g., using a variational approach. In this paper, we present a new method called "Dropout Regulation" (DR), which automatically adjusts the dropout rate during training using a controller, as used in automation. DR allows for a precise estimation of the uncertainty that is comparable to the state of the art while remaining simple to implement.
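The MC Dropout mechanism that DR builds on can be sketched as follows (a generic toy illustration, not the DR controller itself; the layer sizes, dropout rate, and number of passes are arbitrary assumptions): dropout is kept active at prediction time, and averaging T stochastic forward passes yields both a mean prediction and a spread that serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network with fixed random weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p=0.2):
    """One stochastic forward pass; dropout stays on at test time."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p    # Bernoulli keep-mask, rate p
    h = h * mask / (1.0 - p)          # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=100, p=0.2):
    """Run T stochastic passes: mean is the prediction,
    standard deviation is a proxy for epistemic uncertainty."""
    samples = np.stack([forward(x, p) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

In this framing, the dropout rate `p` is the knob that DR proposes to regulate automatically during training.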
To illustrate, we will revisit the case of "LimeOut" that was proposed to tackle "process fairness", which measures a model's reliance on sensitive or discriminatory features.
Features mined from knowledge graphs are widely used within multiple knowledge discovery tasks such as classification or fact-checking.
To achieve both, we draw inspiration from "dropout" techniques in neural-network-based approaches and propose a framework that relies on "feature drop-out" to tackle process fairness.
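A minimal sketch of the feature drop-out idea (all names and the linear scorer are hypothetical; this is not the LimeOut implementation): compare a model's output with and without a given feature to probe the model's reliance on it, and average predictions over feature-dropped copies of the model.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)                      # toy linear scoring model

def predict(X, dropped=()):
    """Score rows of X, zeroing out the dropped feature columns."""
    Xd = X.copy()
    Xd[:, list(dropped)] = 0.0
    return Xd @ w

def reliance(X, j):
    """Mean absolute change in output when feature j is dropped:
    a crude proxy for the model's reliance on that feature."""
    return float(np.mean(np.abs(predict(X) - predict(X, dropped=(j,)))))

def ensemble_predict(X, sensitive=(0,)):
    """Average the full model with copies that each drop one
    sensitive feature, diluting reliance on any single one."""
    preds = [predict(X)] + [predict(X, dropped=(j,)) for j in sensitive]
    return np.mean(preds, axis=0)

X = rng.normal(size=(8, 5))
```

Dropping a feature here simply zeroes its column; an ensemble over such dropped variants is one way to reduce dependence on sensitive features without retraining from scratch.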
In particular, units should be matched within and across sources, and their level of relatedness should be classified as equivalent, more specific, or similar.