Search Results for author: Roger Waleffe

Found 5 papers, 2 papers with code

Chameleon: a heterogeneous and disaggregated accelerator system for retrieval-augmented language models

no code implementations • 15 Oct 2023 • Wenqi Jiang, Marco Zeller, Roger Waleffe, Torsten Hoefler, Gustavo Alonso

The heterogeneity ensures efficient acceleration of both LM inference and retrieval, while the accelerator disaggregation enables the system to independently scale both types of accelerators to fulfill diverse RALM requirements.

Language Modelling • Retrieval • +1

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning

no code implementations • 28 May 2023 • Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas

Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks.

Data Compression
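The title suggests the core idea: rather than committing to one carefully chosen coreset, re-draw a random subset of the training data every epoch. A minimal sketch of that idea in Python, assuming a caller-supplied training step; the function names and the 10% fraction here are hypothetical illustrations, not the paper's exact algorithm:

```python
import random

def repeated_random_sampling(dataset, train_one_epoch, epochs, fraction=0.1):
    # Each epoch trains on a freshly drawn random subset, so individual
    # epochs stay cheap while the model sees broad data coverage overall.
    subset_size = max(1, int(fraction * len(dataset)))
    for _ in range(epochs):
        subset = random.sample(dataset, subset_size)  # re-drawn every epoch
        train_one_epoch(subset)

# Hypothetical usage with a stand-in training step:
data = list(range(10_000))
repeated_random_sampling(data, train_one_epoch=lambda batch: None, epochs=5)
```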

Marius: Learning Massive Graph Embeddings on a Single Machine

1 code implementation • 20 Jan 2021 • Jason Mohoney, Roger Waleffe, Yiheng Xu, Theodoros Rekatsinas, Shivaram Venkataraman

We propose a new framework for computing the embeddings of large-scale graphs on a single machine.

Graph Embedding

Principal Component Networks: Parameter Reduction Early in Training

no code implementations • 23 Jun 2020 • Roger Waleffe, Theodoros Rekatsinas

Recent works show that overparameterized networks contain small subnetworks that exhibit comparable accuracy to the full model when trained in isolation.
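The abstract points at small subnetworks matching full-model accuracy, and the title indicates the parameter reduction is done with principal components early in training. As a generic illustration only (a plain truncated-SVD factorization, not necessarily the paper's PCN construction), shrinking a layer's weight matrix to rank-k factors looks like this:

```python
import numpy as np

def low_rank_factorize(W, k):
    """Replace an (m x n) weight matrix with rank-k factors via SVD.
    A generic principal-component-style parameter reduction, offered
    purely as an illustration of the idea."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * S[:k]  # (m x k) factor, columns scaled by singular values
    B = Vt[:k, :]         # (k x n) factor
    return A, B           # A @ B approximates W with far fewer parameters

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_factorize(W, k=64)
saved = W.size - (A.size + B.size)  # 262144 - 65536 = 196608 params saved
```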
