Tiny QA Benchmark++: Ultra-Lightweight, Synthetic Multilingual Dataset Generation & Smoke-Tests for Continuous LLM Evaluation

17 May 2025 · Vincent Koc

Tiny QA Benchmark++ (TQB++) is an ultra-lightweight, multilingual smoke-test suite designed to give large-language-model (LLM) pipelines a unit-test-style safety-net dataset that runs in seconds at minimal cost. It was born out of the tight feedback-loop demands of building the Comet Opik prompt-optimization SDK, where waiting on heavyweight benchmarks breaks developer flow. TQB++ couples a 52-item English gold set (under 20 kB) with a tiny synthetic-data generator, published as a PyPI package and built on the provider-agnostic LiteLLM library. The generator lets practitioners mint their own tiny packs in any language, domain, or difficulty, and ten ready-made packs already cover Arabic, Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish, and Turkish. Every dataset ships with Croissant metadata and plug-and-play files for OpenAI-Evals, LangChain, and standard CI tools, so teams can drop deterministic micro-benchmarks directly into pull-request gates, prompt-engineering loops, and production dashboards without touching GPU budgets. A complete TQB++ run adds only a few seconds of pipeline latency yet reliably flags prompt-template errors, tokenizer drift, and fine-tuning side effects long before full-scale suites such as MMLU or BIG-Bench would even finish configuring. The entire framework is released to accelerate continuous, resource-efficient quality assurance across the generative-AI ecosystem.
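The smoke-test idea described above can be sketched in a few lines: load a tiny QA pack, query the model under test, and score answers by exact match. The helper names, the inline two-item pack, and the stub model below are illustrative assumptions, not the TQB++ API; the real core-en pack has 52 items and would be loaded from its JSONL file.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, trim, and collapse whitespace for a lenient exact match."""
    return re.sub(r"\s+", " ", text.strip().lower())

def exact_match_score(items, predict):
    """Fraction of QA items whose prediction exactly matches the gold answer."""
    hits = sum(
        normalize(predict(item["question"])) == normalize(item["answer"])
        for item in items
    )
    return hits / len(items)

# Hypothetical inline stand-in for a tiny QA pack (the real set is ~52 items).
pack = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "How many continents are there?", "answer": "7"},
]

# Stub "model": in practice this would be an LLM call via your provider SDK.
stub_answers = {
    "What is the capital of France?": "Paris",
    "How many continents are there?": "seven",  # wrong format -> no exact match
}
score = exact_match_score(pack, lambda q: stub_answers[q])
print(f"exact match: {score:.1%}")  # 1 of 2 items match -> 50.0%
```

A CI gate would then simply fail the build when the score drops below a fixed threshold, which is what makes a deterministic micro-benchmark like this usable as a pull-request check.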


Datasets


Introduced in the Paper:

TQB++

Used in the Paper:

MMLU BIG-bench HELM
| Task                | Dataset                 | Model                   | Metric Name | Metric Value | Global Rank |
|---------------------|-------------------------|-------------------------|-------------|--------------|-------------|
| TinyQA Benchmark++  | tinyqabenchmark_core-en | gemma-3-4b              | Exact Match | 86.5         | # 1         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | llama-3.2-1b-instruct   | Exact Match | 53.8         | # 6         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | llama-3.2-3b-instruct   | Exact Match | 84.6         | # 2         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | mistral-7b-instruct     | Exact Match | 50.0         | # 7         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | mistral-24b-instruct    | Exact Match | 84.6         | # 2         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | ministral-3b            | Exact Match | 76.9         | # 5         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | ministral-8b            | Exact Match | 80.8         | # 4         |
| TinyQA Benchmark++  | tinyqabenchmark_core-en | gemma-3-12b             | Exact Match | 90.4         | # 1         |

Methods