Mathematical Induction

3 papers with code • 1 benchmark • 1 dataset

Tests a language model's ability to understand mathematical induction by asking it to verify the correctness of an induction argument.

Source: BIG-bench
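
For context, the sketch below shows the kind of induction argument such a task asks a model to check. It is an illustrative example, not an item drawn from the BIG-bench data:

```latex
% Illustrative induction argument (not from the benchmark itself).
Claim: $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$ for all $n \ge 1$.

Base case ($n = 1$): the left side is $1$ and the right side is
$\frac{1 \cdot 2}{2} = 1$, so the claim holds.

Inductive step: assume $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$ for some $n \ge 1$. Then
$\sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}$,
which is the claim for $n + 1$. By induction, the claim holds for all $n \ge 1$.
```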

Datasets

BIG-bench

Most implemented papers

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma 8 Dec 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
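
The paper's finding is often summarized by the rule of thumb that parameters and training tokens should scale in roughly equal proportion, about 20 tokens per parameter. A minimal sketch of that sizing rule, using the common C ≈ 6·N·D FLOP approximation; the coefficients here are rounded rules of thumb, not the paper's fitted scaling-law constants:

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal sizing: with C ~= 6 * N * D and D ~= 20 * N,
    solving 6 * N * (20 * N) = C gives N = sqrt(C / 120)."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a 1e23 FLOP budget suggests roughly 29B parameters and ~580B tokens.
params, tokens = chinchilla_optimal(1e23)
print(f"{params:.2e} parameters, {tokens:.2e} tokens")
```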

SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training

deep-symbolic-mathematics/Multimodal-Math-Pretraining 3 Oct 2023

To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training model, which employs contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the embeddings.
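
As a rough illustration of that idea, the sketch below implements a generic symmetric InfoNCE-style contrastive loss between symbolic and numeric encoder outputs. The function and tensor names are hypothetical, and this is not the authors' exact training objective:

```python
import torch
import torch.nn.functional as F

def symbolic_numeric_contrastive_loss(sym_emb, num_emb, temperature=0.1):
    """Symmetric InfoNCE over a batch of matched (symbolic expression, numeric data)
    pairs; matched pairs sit on the diagonal of the similarity matrix."""
    sym = F.normalize(sym_emb, dim=-1)
    num = F.normalize(num_emb, dim=-1)
    logits = sym @ num.t() / temperature          # scaled cosine similarities
    targets = torch.arange(sym.size(0), device=sym.device)
    # Pull matched symbolic/numeric pairs together and push mismatched pairs apart,
    # in both the symbolic-to-numeric and numeric-to-symbolic directions.
    loss_s2n = F.cross_entropy(logits, targets)
    loss_n2s = F.cross_entropy(logits.t(), targets)
    return (loss_s2n + loss_n2s) / 2

# Toy usage with random stand-ins for encoder outputs (batch of 8 pairs).
loss = symbolic_numeric_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```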