CoLA

28 papers with code • 1 benchmark • 2 datasets

The Corpus of Linguistic Acceptability (CoLA) consists of English sentences drawn from published linguistics literature and annotated for grammatical acceptability. The task is binary classification (acceptable vs. unacceptable), conventionally scored with the Matthews correlation coefficient (MCC).

Most implemented papers

Neural Network Acceptability Judgments

nyu-mll/CoLA-baselines TACL 2019

This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence.
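
For orientation (a minimal sketch, not the paper's code), the dataset can be loaded through the GLUE config of Hugging Face `datasets` and scored with MCC; the majority-class baseline below is purely illustrative:

from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef

cola = load_dataset("glue", "cola")      # label: 1 = acceptable, 0 = unacceptable
val = cola["validation"]

# Majority-class baseline: call every sentence acceptable.
preds = [1] * len(val)
print("MCC:", matthews_corrcoef(val["label"], preds))  # 0.0 for a constant predictor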

Contrastive Learning of General-Purpose Audio Representations

google-research/google-research 21 Oct 2020

We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
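
As an illustration of the idea (a sketch under assumptions, not the authors' implementation), a COLA-style objective scores the embeddings of two segments from the same clip with a learned bilinear similarity and treats the rest of the batch as negatives; names and shapes here are illustrative:

import torch
import torch.nn.functional as F

def cola_loss(anchors, positives, W):
    # anchors, positives: (batch, dim) embeddings of two segments per clip
    # W: (dim, dim) learned bilinear weight
    sim = anchors @ W @ positives.t()        # (batch, batch) similarity matrix
    targets = torch.arange(anchors.size(0))  # matching segment is the diagonal
    return F.cross_entropy(sim, targets)     # remaining columns act as negatives

loss = cola_loss(torch.randn(32, 512), torch.randn(32, 512), torch.randn(512, 512))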

COLA-Net: Collaborative Attention Network for Image Restoration

MC-E/COLA-Net 10 Mar 2021

Local and non-local attention-based methods have been studied extensively across image restoration tasks and have led to promising performance.

Can BERT eat RuCoLA? Topological Data Analysis to Explain

upunaprosk/la-tda 4 Apr 2023

Our results contribute to understanding the behavior of monolingual LMs in the acceptability classification task, provide insights into the functional roles of attention heads, and highlight the advantages of TDA-based approaches for analyzing LMs.

COLA: Decentralized Linear Learning

epfml/cola NeurIPS 2018

Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy.

An LSTM Adaptation Study of (Un)grammaticality

LiCo-TREiL/Computational-Ungrammaticality WS 2019

We propose a novel approach to the study of how artificial neural networks perceive the distinction between grammatical and ungrammatical sentences, a crucial task in the growing field of synthetic linguistics.

Do Attention Heads in BERT Track Syntactic Dependencies?

evtaktasheva/dependency_extraction 27 Nov 2019

We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.
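
One common probe of this kind (sketched here with hypothetical names, not necessarily the paper's exact method) predicts each token's syntactic head as the position it attends to most, then scores unlabeled attachment against gold heads:

import numpy as np

def uas_from_attention(attn, gold_heads):
    # attn: (seq_len, seq_len) attention weights of a single head
    # gold_heads[i]: index of token i's gold dependency head
    pred_heads = attn.argmax(axis=-1)        # most-attended position per token
    return float((pred_heads == np.asarray(gold_heads)).mean())

attn = np.random.rand(5, 5)
print(uas_from_attention(attn, gold_heads=[1, 3, 1, 0, 3]))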

CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning

zhang-can/CoLA CVPR 2021

In this paper, we argue that learning by comparing helps identify these hard snippets and we propose to utilize snippet Contrastive learning to Localize Actions, CoLA for short.

Efficient Sequence Packing without Cross-contamination: Accelerating Large Language Models without Impacting Performance

graphcore/tutorials NeurIPS 2021

We show in this paper that the variation in sequence lengths in common NLP datasets is such that up to 50% of all tokens can be padding.
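
The remedy is to pack several short sequences into each maximum-length slot. Below is a first-fit-decreasing sketch of that packing step (illustrative only; the paper additionally masks attention between packed sequences to prevent cross-contamination, which is omitted here):

def pack_sequences(lengths, max_len=512):
    # First-fit decreasing: place each sequence in the first slot it fits.
    bins = []  # each bin: [remaining_space, [sequence indices]]
    for idx in sorted(range(len(lengths)), key=lambda i: -lengths[i]):
        for b in bins:
            if lengths[idx] <= b[0]:
                b[0] -= lengths[idx]
                b[1].append(idx)
                break
        else:
            bins.append([max_len - lengths[idx], [idx]])
    return [b[1] for b in bins]

print(pack_sequences([500, 30, 200, 300, 12]))  # -> [[0, 4], [3, 2], [1]]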

SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations

hooman650/supcl-seq Findings (EMNLP) 2021

This paper introduces SupCL-Seq, which extends supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
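
A supervised contrastive (SupCon-style) objective of the kind SupCL-Seq adapts can be sketched as follows: embeddings that share a label are pulled together, with all other in-batch examples as negatives. Shapes and the temperature value are illustrative:

import torch
import torch.nn.functional as F

def supcon_loss(embs, labels, tau=0.1):
    embs = F.normalize(embs, dim=-1)
    sim = embs @ embs.t() / tau                  # cosine similarities / temperature
    eye = torch.eye(len(labels), dtype=torch.bool)
    sim = sim.masked_fill(eye, float("-inf"))    # drop self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    mean_pos = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -mean_pos.mean()

loss = supcon_loss(torch.randn(8, 128), torch.randint(0, 2, (8,)))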