SNARKS
4 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
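The Chinchilla result is often summarized by two rules of thumb: training compute scales as roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, and a compute-optimal model uses on the order of 20 tokens per parameter. A minimal sketch under those approximations (the exact ratio varies with the fitting method in the paper; `tokens_per_param=20` is a rounded heuristic, not an exact result):

```python
def compute_optimal(compute_budget_flops, tokens_per_param=20):
    """Estimate compute-optimal parameter and token counts.

    Uses the common approximations C ~= 6 * N * D (training FLOPs)
    and D ~= 20 * N (Chinchilla's roughly equal scaling of N and D).
    """
    # Substitute D = r * N into C = 6 * N * D and solve for N:
    # N = sqrt(C / (6 * r)).
    n_params = (compute_budget_flops / (6 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla itself: ~70B parameters trained on ~1.4T tokens,
# i.e. a budget of about 6 * 70e9 * 1.4e12 ~= 5.88e23 FLOPs.
params, tokens = compute_optimal(5.88e23)
```

Under these assumptions, a 10x larger compute budget implies scaling both parameters and tokens by roughly sqrt(10), rather than growing the model alone.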
Verifiable and Provably Secure Machine Unlearning
In this framework, the server first computes a proof that the model was trained on a dataset $D$.
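The framework rests on a commit-and-prove pattern: the server commits to the dataset and model, then proves the training relation between them. A loose toy illustration of that pattern follows; this is not an actual SNARK (which would prove the computation succinctly without revealing $D$), and all function names here are hypothetical:

```python
import hashlib
import json


def commit(obj) -> str:
    # Hash commitment to a JSON-serializable object. A toy stand-in
    # for a cryptographic commitment scheme.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


def train(dataset):
    # Hypothetical deterministic "training": the model is just a summary
    # of the data, so the train relation is easy to re-check.
    return {"sum": sum(dataset), "n": len(dataset)}


def prove_trained_on(dataset):
    # Server side: train the model and emit commitments binding the
    # model to the dataset. A real system would emit a SNARK proving
    # the training computation instead of revealing anything about D.
    model = train(dataset)
    proof = {
        "dataset_commitment": commit(dataset),
        "model_commitment": commit(model),
    }
    return model, proof


def verify(proof, dataset, model) -> bool:
    # Verifier side: re-check that the commitments match. In the real
    # protocol the verifier checks a succinct proof and never sees D.
    return (
        proof["dataset_commitment"] == commit(dataset)
        and proof["model_commitment"] == commit(model)
    )


D = [1, 2, 3, 4]
model, proof = prove_trained_on(D)
```

The toy version leaks the dataset to the verifier; the whole point of the SNARK-based construction in the paper is to avoid exactly that while keeping the proof checkable.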
Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
In this work, we present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done.