Sentence-Pair Classification
20 papers with code • 0 benchmarks • 3 datasets
Benchmarks
These leaderboards are used to track progress in Sentence-Pair Classification
Most implemented papers
New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in French
DACCORD consists of 1034 pairs of sentences and is the first dataset exclusively dedicated to this task and covering among others the topic of the Russian invasion in Ukraine.
Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence
Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA).
CLUE: A Chinese Language Understanding Evaluation Benchmark
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE allows new NLU models to be evaluated across a diverse set of tasks.
Glyce: Glyph-vectors for Chinese Character Representations
However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found.
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
DACCORD : un jeu de données pour la Détection Automatique d'énonCés COntRaDictoires en français
In this article, we present DACCORD, a new dataset dedicated to the task of automatically detecting contradictions between sentences in French.
Continual and Multi-Task Architecture Search
Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task.
Elastic weight consolidation for better bias inoculation
The biases present in training datasets have been shown to affect models for sentence pair classification tasks such as natural language inference (NLI) and fact verification.
Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach
To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision.
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models
However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples.