Date Understanding
9 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Large Language Models are Zero-Shot Reasoners
Pretrained large language models (LLMs) are widely used across many sub-fields of natural language processing (NLP) and are generally regarded as excellent few-shot learners when given task-specific exemplars.
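The paper's key finding is that LLMs can reason without exemplars when the prompt simply triggers step-by-step reasoning. A minimal sketch of that prompt construction, applied to a date-understanding question (the model call itself is omitted; only the prompt template from the paper is shown):

```python
# Zero-shot chain-of-thought prompting (Kojima et al.): append the
# reasoning trigger "Let's think step by step." to a raw question.
def zero_shot_cot_prompt(question: str) -> str:
    """Build a zero-shot CoT prompt for any question, no exemplars needed."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = zero_shot_cot_prompt(
    "Today is 27 February 2012. What is the date one week from today?"
)
print(prompt)
```

The same template works across task types, which is what makes the approach "zero-shot": no task-specific exemplars are required, only the fixed trigger phrase.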
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
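The paper's widely cited rule of thumb is that training compute scales as roughly C ≈ 6·N·D (parameters × tokens), with the compute-optimal token count near D ≈ 20·N. A hedged sketch of that back-of-envelope calculation (the exact coefficients vary across the paper's three estimation approaches; 20 tokens per parameter is the commonly quoted approximation):

```python
# Chinchilla-style compute-optimal sizing under the approximations
# C = 6 * N * D and D = tokens_per_param * N. Coefficients are the
# commonly quoted rule of thumb, not exact values from the paper.
def compute_optimal_size(flops: float, tokens_per_param: float = 20.0):
    """Return (parameters N, training tokens D) exhausting a FLOP budget."""
    n = (flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

# A budget of ~5.76e23 FLOPs recovers roughly 70B params / 1.4T tokens,
# in line with the Chinchilla model itself.
n, d = compute_optimal_size(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```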
Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions
Temporal and numerical expression understanding is of great importance in many downstream Natural Language Processing (NLP) and Information Retrieval (IR) tasks.
EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning
On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks.
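EchoPrompt's core idea is to instruct the model to restate the query before reasoning about it. A minimal sketch of that prompt template, combined with zero-shot CoT as in the paper's zero-shot variant (the instruction wording here is illustrative and may differ from the paper's exact template):

```python
# EchoPrompt-style zero-shot prompt: ask the model to rephrase the
# question before reasoning. Wording is an illustrative approximation.
def echo_prompt(question: str) -> str:
    """Build a prompt that elicits query rephrasing followed by reasoning."""
    return (
        f"Q: {question}\n"
        "A: Let's repeat the question and also think step by step."
    )

print(echo_prompt("Today is 27 February 2012. What is the date tomorrow?"))
```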
ReGAL: Refactoring Programs to Discover Generalizable Abstractions
While large language models (LLMs) are increasingly being used for program synthesis, they lack the global view needed to develop useful abstractions; they generally predict programs one at a time, often repeating the same functionality.
Understanding the Weakness of Large Language Model Agents within a Complex Android Environment
To address these challenges, the authors introduce AndroidArena, an environment and benchmark designed to evaluate LLM agents on a modern operating system.
Instance-level quantitative saliency in multiple sclerosis lesion segmentation
Saliency maps (based on SmoothGrad) in FLAIR showed positive values inside a lesion and negative in its neighborhood.