ARC

104 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

Finetuned Language Models Are Zero-Shot Learners

google-research/flan ICLR 2022

We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
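
Instruction tuning is, at its core, a data transformation followed by ordinary fine-tuning: each supervised example is rendered through a natural-language instruction template. A minimal sketch below, with toy templates and examples of my own (not FLAN's actual template set):

```python
# Sketch of instruction-style formatting (hypothetical templates, not FLAN's own).
# Instruction tuning = ordinary fine-tuning on examples rewritten as instructions.

TEMPLATES = {
    "nli": "Premise: {premise}\nHypothesis: {hypothesis}\n"
           "Does the premise entail the hypothesis? Answer yes, no, or maybe.",
    "sentiment": "Review: {text}\nIs this review positive or negative?",
}

def to_instruction_example(task, fields, answer):
    """Render a raw supervised example as an (instruction, target) pair."""
    return {"prompt": TEMPLATES[task].format(**fields), "target": answer}

raw = [
    ("sentiment", {"text": "A delightful, clever film."}, "positive"),
    ("nli", {"premise": "A dog runs.", "hypothesis": "An animal moves."}, "yes"),
]

train_set = [to_instruction_example(t, f, a) for t, f, a in raw]
for ex in train_set:
    print(ex["prompt"], "->", ex["target"])
# The resulting (prompt, target) pairs feed standard seq2seq / LM fine-tuning.
```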

On the Measure of Intelligence

fchollet/ARC 5 Nov 2019

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans.
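
The companion repo distributes ARC tasks as JSON files, each containing a few train input/output grid pairs and one or more test inputs; grids are nested lists of color indices 0-9. A minimal loader, assuming a local clone of fchollet/ARC (the task file name is just an example):

```python
import json

# Load one ARC task (path is illustrative; tasks live under data/training/ and
# data/evaluation/ in the fchollet/ARC repo).
with open("ARC/data/training/0a938d79.json") as f:
    task = json.load(f)

# Each task has "train" demonstration pairs and "test" queries.
for pair in task["train"]:
    grid_in, grid_out = pair["input"], pair["output"]  # lists of lists of ints 0-9
    print(f"demo: {len(grid_in)}x{len(grid_in[0])} -> "
          f"{len(grid_out)}x{len(grid_out[0])}")

# A solver must predict pair["output"] for each test query from the demos alone.
print("test inputs:", len(task["test"]))
```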

Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering

nijianmo/arc-etrr-code NAACL 2019

In this paper we propose a retriever-reader model that learns to attend on essential terms during the question answering process.
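
The idea can be pictured as a learned soft weighting over question tokens, so retrieval is driven by the essential terms rather than the whole question. A toy numpy sketch of that reweighting, with made-up scores standing in for the paper's trained essential-term classifier:

```python
import numpy as np

def essential_term_weights(scores):
    """Softmax a vector of per-token essentiality logits into attention weights."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

tokens = ["which", "gas", "do", "plants", "absorb", "for", "photosynthesis"]
logits = np.array([0.1, 2.0, 0.0, 1.5, 1.8, 0.1, 2.4])  # toy logits; a trained
weights = essential_term_weights(logits)                 # classifier produces these

# Keep the highest-weighted terms as the retrieval query.
query = [t for t, w in sorted(zip(tokens, weights), key=lambda x: -x[1])[:3]]
print(query)  # ['photosynthesis', 'gas', 'absorb']
```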

Arc-support Line Segments Revisited: An Efficient and High-quality Ellipse Detection

AlanLuSun/High-quality-ellipse-detection 8 Oct 2018

Over the years, many ellipse detection algorithms have sprung up and been studied broadly, yet detecting ellipses accurately and efficiently in real-world images remains a challenge.
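
For contrast with the paper's arc-support line-segment grouping, the standard baseline is a direct least-squares ellipse fit on edge contours, which OpenCV exposes as cv2.fitEllipse. A sketch of that baseline (explicitly not the paper's algorithm; the image path is illustrative):

```python
import cv2

# Baseline least-squares ellipse fitting on contours (NOT the paper's
# arc-support method; useful as a point of comparison).
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative path
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

for c in contours:
    if len(c) >= 5:  # fitEllipse needs at least 5 points
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        print(f"ellipse at ({cx:.0f},{cy:.0f}) "
              f"size {w:.0f}x{h:.0f} angle {angle:.0f}")
```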

ST-MoE: Designing Stable and Transferable Sparse Expert Models

tensorflow/mesh 17 Feb 2022

However, advancing the state of the art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning.
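
The paper's central stabilizer is the router z-loss, an auxiliary term that penalizes large router logits via the squared log-sum-exp. A short PyTorch sketch; the 1e-3 coefficient is the value reported in the paper, and the toy logits are mine:

```python
import torch

def router_z_loss(router_logits):
    """ST-MoE router z-loss: mean squared log-sum-exp of per-token router logits.

    router_logits: [num_tokens, num_experts]. Penalizing large logits keeps the
    router numerically stable without changing which expert wins the argmax.
    """
    z = torch.logsumexp(router_logits, dim=-1)  # [num_tokens]
    return (z ** 2).mean()

router_logits = torch.randn(8, 4) * 5            # toy batch: 8 tokens, 4 experts
aux_loss = 1e-3 * router_z_loss(router_logits)   # 1e-3: coefficient from the paper
print(aux_loss.item())                           # added to the task loss in training
```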

Self-Consistency Improves Chain of Thought Reasoning in Language Models

lastmile-ai/aiconfig 21 Mar 2022

Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks.
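
Self-consistency swaps greedy decoding for sampling several reasoning paths and keeping the majority answer. A minimal sketch in which sample_chain_of_thought is a stub standing in for any temperature-sampled LLM call:

```python
import random
from collections import Counter

def sample_chain_of_thought(question):
    """Stub for a temperature-sampled LLM call that returns a final answer.

    In practice this would prompt a model with chain-of-thought exemplars and
    sample with temperature > 0 so that different reasoning paths emerge.
    """
    return random.choice(["18", "18", "18", "24"])  # toy answer distribution

def self_consistency(question, n_samples=10):
    # Sample several reasoning paths, then marginalize over them by taking a
    # majority vote over the final answers.
    answers = [sample_chain_of_thought(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("If I have 3 boxes of 6 eggs..."))  # most often "18"
```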

FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations

amazon-research/fact-graph NAACL 2022

Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely limiting their trustworthiness and use in real-world applications.
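
As a crude proxy for the paper's idea (FactGraph encodes document and summary as semantic graphs and learns to compare them), one can check how many relation triples in a summary are supported by the source; the triples and overlap score below are illustrative only, not FactGraph's learned model:

```python
# Crude proxy for graph-based factuality checking (NOT FactGraph's learned
# AMR-graph encoder): compare relation triples from summary vs. source.

def triple_overlap(source_triples, summary_triples):
    """Fraction of summary triples supported by the source graph."""
    src = set(source_triples)
    if not summary_triples:
        return 1.0
    return sum(t in src for t in summary_triples) / len(summary_triples)

source = [("company", "announced", "layoffs"),
          ("layoffs", "affect", "500 workers")]
print(triple_overlap(source, [("company", "announced", "layoffs")]))  # 1.0: consistent
print(triple_overlap(source, [("company", "denied", "layoffs")]))     # 0.0: likely hallucination
```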

Yara Parser: A Fast and Accurate Dependency Parser

yahoo/YaraParser 23 Mar 2015

At its fastest, Yara can parse about 4,000 sentences per second in greedy mode (beam size 1).
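
That throughput comes from greedy transition-based parsing: a single left-to-right pass, choosing one transition per step. A toy arc-standard shift-reduce loop with a stubbed scorer (Yara itself uses the arc-eager system with a trained model; greedy mode is simply beam size 1):

```python
# Toy greedy transition-based (arc-standard) dependency parser with a stubbed
# scorer, for illustration only.

def parse_greedy(n_words, score):
    stack, buf, arcs = [0], list(range(1, n_words + 1)), []  # index 0 is ROOT
    while buf or len(stack) > 1:
        moves = []
        if buf:
            moves.append("SHIFT")
        if len(stack) >= 2:
            moves.append("RIGHT-ARC")
            if stack[-2] != 0:  # ROOT may never become a dependent
                moves.append("LEFT-ARC")
        move = max(moves, key=lambda m: score(stack, buf, m))
        if move == "SHIFT":
            stack.append(buf.pop(0))
        elif move == "LEFT-ARC":
            arcs.append((stack[-1], stack.pop(-2)))  # (head, dependent)
        else:                                        # RIGHT-ARC
            arcs.append((stack[-2], stack.pop()))
    return arcs

stub = lambda stack, buf, move: {"SHIFT": 1.0, "RIGHT-ARC": 0.5, "LEFT-ARC": 0.4}[move]
print(parse_greedy(3, stub))  # [(2, 3), (1, 2), (0, 1)] for "She reads books"
```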

More About Covariance Descriptors for Image Set Coding: Log-Euclidean Framework based Kernel Matrix Representation

Kai-Xuan/iCovDs 16 Sep 2019

We consider a family of structural descriptors for visual data, namely covariance descriptors (CovDs), which lie on a non-linear symmetric positive definite (SPD) manifold, a special type of Riemannian manifold.
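
The Log-Euclidean framework maps each SPD covariance matrix through the matrix logarithm, after which plain Euclidean distances and linear kernels become valid. A numpy/scipy sketch of building one CovD and flattening its log; the regularization constant is an assumption:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_covd(features, eps=1e-5):
    """Covariance descriptor mapped to the Log-Euclidean tangent space.

    features: (n_samples, d) array of per-image (or per-pixel) feature vectors.
    Returns a d*d vector on which Euclidean distances / linear kernels apply.
    """
    cov = np.cov(features, rowvar=False)   # d x d covariance matrix
    cov += eps * np.eye(cov.shape[0])      # keep it strictly positive definite
    log_cov = logm(cov).real               # matrix logarithm: SPD -> symmetric
    return log_cov.ravel()

rng = np.random.default_rng(0)
x = log_euclidean_covd(rng.normal(size=(50, 8)))  # 50 samples, 8-dim features
print(x.shape)                                     # (64,)
```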

FreeLB: Enhanced Adversarial Training for Natural Language Understanding

zhuchen03/FreeLB ICLR 2020

Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models.
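
FreeLB runs several PGD-style ascent steps on a perturbation added to the word embeddings and reuses (averages) the parameter gradients from those steps, making adversarial training nearly "free". A hedged PyTorch sketch; forward(embeds) and the hyperparameter values are placeholders, not the repo's exact interface:

```python
import torch

def freelb_backward(forward, embeds, labels, loss_fn, K=3, adv_lr=0.1, eps=0.3):
    """Accumulate FreeLB gradients for one batch; call optimizer.step() afterwards.

    forward(embeds) -> logits must run the model from embedding vectors.
    embeds: detached embedding outputs (a simplification; the paper re-embeds
    each step so embedding weights also receive adversarial gradients).
    K, adv_lr and eps are illustrative hyperparameters, not the repo's defaults.
    """
    delta = torch.zeros_like(embeds).uniform_(-eps, eps).requires_grad_()
    for _ in range(K):
        loss = loss_fn(forward(embeds + delta), labels) / K  # average ascent steps
        loss.backward()          # the "free" trick: parameter grads accumulate here
        g = delta.grad.detach()
        delta = (delta + adv_lr * g / (g.norm() + 1e-12)).detach()
        if delta.norm() > eps:   # project back into the L2 epsilon-ball
            delta = delta * eps / delta.norm()
        delta.requires_grad_()
```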