Search Results for author: Yaya Sy

Found 2 papers, 1 paper with code

Lillama: Large Language Models Compression via Low-Rank Feature Distillation

no code implementations • 21 Dec 2024 • Yaya Sy, Christophe Cerisara, Irina Illina

Current LLM structured pruning methods typically involve two steps: (1) compression with calibration data and (2) costly continued pretraining on billions of tokens to recover lost performance.

Mamba
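
To make the abstract excerpt above concrete, here is a minimal, hypothetical sketch of the general idea behind low-rank feature distillation: a linear layer's weight matrix is replaced by a rank-r factorization initialized from a truncated SVD, and the factors are then trained to reproduce the original layer's output features on calibration data. All sizes, the rank, and the training loop are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: compress one linear layer to low rank, then distill
# its output features. Assumes PyTorch; not the paper's actual method.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_in, d_out, rank = 256, 256, 32          # illustrative sizes
teacher = nn.Linear(d_in, d_out, bias=False)

# Truncated SVD initialization: W ~= (U_r * S_r) @ V_r
U, S, Vh = torch.linalg.svd(teacher.weight.detach(), full_matrices=False)
A = nn.Parameter(U[:, :rank] * S[:rank])  # (d_out, rank)
B = nn.Parameter(Vh[:rank, :].clone())    # (rank, d_in)

opt = torch.optim.Adam([A, B], lr=1e-3)
for step in range(200):
    x = torch.randn(64, d_in)             # stand-in for calibration data
    with torch.no_grad():
        target = teacher(x)               # teacher features
    student = x @ B.T @ A.T               # low-rank student features
    loss = nn.functional.mse_loss(student, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final feature MSE: {loss.item():.6f}")
```

Distilling features layer by layer like this only needs a small calibration set, which is the contrast the abstract draws with continued pretraining on billions of tokens.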

BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models

1 code implementation • 2 Jun 2023 • Marvin Lavechin, Yaya Sy, Hadrien Titeux, María Andrea Cruz Blandón, Okko Räsänen, Hervé Bredin, Emmanuel Dupoux, Alejandrina Cristia

Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.

Benchmarking • Language Acquisition
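
As a rough illustration of how such a benchmark can probe linguistic competence without labels, here is a hypothetical sketch of a lexical "spot-the-word" style probe: a model passes an item if it scores a real word above a matched nonword. The scoring function below is a random stub standing in for a real spoken language model, and the word pairs are illustrative, not items from BabySLM.

```python
# Hypothetical lexical probe sketch: accuracy = fraction of pairs where the
# model scores the real word above the matched nonword (chance = 0.5).
import random

random.seed(0)

def score(utterance: str) -> float:
    """Placeholder for a model's log-probability of an utterance."""
    return random.random()

# Minimal word/nonword pairs; real benchmarks use large matched sets.
pairs = [("dog", "dag"), ("baby", "bapy"), ("water", "watter")]

correct = sum(score(word) > score(nonword) for word, nonword in pairs)
print(f"lexical accuracy: {correct / len(pairs):.2f} (chance = 0.50)")
```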
