Probing Language Models

4 papers with code • 1 benchmark • 2 datasets

Probing language models means evaluating what pre-trained language models have learned beyond their training objective, for example syntactic structure, temporal reasoning, factual knowledge, or social biases, typically by querying the model with targeted prompts or by training lightweight diagnostic classifiers on its representations.
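One common probing setup frames a question as a cloze (fill-in-the-blank) query against a masked language model and inspects the top predictions. Below is a minimal sketch using the Hugging Face transformers library; the model choice and prompt are illustrative, not tied to any paper on this page.

```python
# Minimal cloze-style probe: ask a masked LM to fill in a factual blank
# and inspect its top-ranked completions. Model and prompt are illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompt = "The capital of France is [MASK]."
for candidate in fill_mask(prompt, top_k=5):
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```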

Most implemented papers

Probing Toxic Content in Large Pre-Trained Language Models

HKUST-KnowComp/Probing_toxicity_in_PTLMs ACL 2021

Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups, which leads major NLP systems to reproduce stereotypical and toxic content.
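The general template-probing idea behind such studies can be sketched as below: swap the subject of a neutral template and compare the model's top completions. The template, model, and group list here are illustrative placeholders, not the paper's actual prompts.

```python
# Illustrative template probing: compare a masked LM's top completions when the
# subject of a template is swapped. Template, model, and groups are placeholders.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

template = "People from {group} are very [MASK]."
for group in ["Canada", "Japan", "Brazil"]:
    completions = fill_mask(template.format(group=group), top_k=3)
    top = ", ".join(c["token_str"].strip() for c in completions)
    print(f"{group:>8}: {top}")
```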

Probing Language Models for Understanding of Temporal Expressions

kunalkukreja21/temporal-expressions-evaluation-lm EMNLP (BlackboxNLP) 2021

We present three Natural Language Inference (NLI) challenge sets that can evaluate NLI models on their understanding of temporal expressions.
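A single challenge-set item of this kind can be scored with an off-the-shelf NLI model, roughly as sketched below; the premise/hypothesis pair and the model are illustrative and not taken from the released sets.

```python
# Score one temporal NLI pair with an off-the-shelf MNLI model.
# The example pair is illustrative, not from the paper's challenge sets.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "The meeting started at 9 a.m. and lasted two hours."
hypothesis = "The meeting ended before noon."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label = model.config.id2label[logits.argmax(dim=-1).item()]
print(label)  # entailment only if the model resolves the temporal arithmetic
```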

Discontinuous Constituency and BERT: A Case Study of Dutch

gijswijnholds/discontinuous-probing Findings (ACL) 2022

In this paper, we set out to quantify the syntactic capacity of BERT in the evaluation regime of non-context free patterns, as occurring in Dutch.
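A generic diagnostic-probe sketch for this style of evaluation: freeze a Dutch BERT model, extract sentence representations, and fit a linear classifier on top. The sentences and labels below are toy placeholders, not the paper's discontinuous-constituency data.

```python
# Linear probe over frozen BERT representations (toy data, illustrative only).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")

sentences = ["Ik heb het boek gelezen .", "Het boek heb ik gelezen ."]
labels = [0, 1]  # placeholder syntactic labels

features = []
with torch.no_grad():
    for sentence in sentences:
        inputs = tokenizer(sentence, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
        features.append(hidden.mean(dim=1).squeeze(0).numpy())

probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))  # training accuracy of the linear probe
```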

KAMEL: Knowledge Analysis with Multitoken Entities in Language Models

JanKalo/KAMEL Automated Knowledge Base Construction 2022

Instead of performing the evaluation on masked language models, we present results for a variety of recent causal LMs in a few-shot setting.
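Few-shot knowledge probing of a causal LM can be sketched as follows; the prompt format and the model are illustrative assumptions, and the actual KAMEL prompts and evaluated models may differ.

```python
# Hedged sketch of few-shot knowledge probing with a causal LM.
# Prompt format and model are illustrative, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "Question: In which country is Berlin located? Answer: Germany\n"
    "Question: In which country is Kyoto located? Answer: Japan\n"
    "Question: In which country is Toronto located? Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True)
print(answer.strip())
```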