Search Results for author: Luka Rimanic

Found 10 papers, 7 papers with code

Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees

1 code implementation • 2 Jun 2022 • Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Re, Ce Zhang

Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slower networks.
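To make the idea concrete, here is a minimal sketch of activation compression, assuming plain uniform int8 quantization; the paper's scheme additionally compensates the quantization error to retain convergence guarantees, which this sketch omits, and all names below are illustrative.

```python
# Hypothetical sketch: quantize activations to int8 before shipping them
# to the next pipeline stage over a slow network, then dequantize on the
# receiving worker. Plain uniform quantization only, not the paper's
# error-compensated scheme.
import numpy as np

def compress(acts: np.ndarray):
    """Return int8 codes plus a per-tensor scale (roughly 4x smaller than float32)."""
    scale = np.abs(acts).max() / 127.0 + 1e-12
    return np.round(acts / scale).astype(np.int8), scale

def decompress(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original activations."""
    return codes.astype(np.float32) * scale

acts = np.random.randn(4, 1024).astype(np.float32)
codes, scale = compress(acts)
print("max abs error:", np.abs(acts - decompress(codes, scale)).max())
```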

SHiFT: An Efficient, Flexible Search Engine for Transfer Learning

1 code implementation • 4 Apr 2022 • Cedric Renggli, Xiaozhe Yao, Luka Kolar, Luka Rimanic, Ana Klimovic, Ce Zhang

Transfer learning can be seen as a data- and compute-efficient alternative to training models from scratch.

Transfer Learning

Evaluating Bayes Error Estimators on Real-World Datasets with FeeBee

1 code implementation • 30 Aug 2021 • Cedric Renggli, Luka Rimanic, Nora Hollenstein, Ce Zhang

The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution.
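For reference, the BER admits a standard closed form; this is a textbook identity, with notation chosen here rather than taken from the paper:

```latex
% Bayes error rate: the expected mass of all classes except the most
% likely one under the true conditional distribution p(y | x).
R^{*} \;=\; \mathbb{E}_{x \sim p(x)}\!\left[\, 1 - \max_{y \in \mathcal{Y}} p(y \mid x) \,\right]
```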

Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks

1 code implementation • 11 Jun 2021 • Nezihe Merve Gürel, Xiangyu Qi, Luka Rimanic, Ce Zhang, Bo Li

In particular, we develop KEMLP by integrating a diverse set of weak auxiliary models based on their logical relationships to the main DNN model that performs the target task.

BIG-bench Machine Learning
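The following sketch conveys only the "ensemble of weak signals" intuition via weighted voting; KEMLP itself integrates the auxiliary models through logical relations in a reasoning graph, and all names and weights here are hypothetical.

```python
# Illustrative only: combine a main DNN's prediction with weak auxiliary
# models by weighted voting. This is NOT KEMLP's actual construction.
import numpy as np

def combine(main_probs, aux_preds, aux_weights, main_weight=2.0):
    """main_probs: (num_classes,) softmax output of the main DNN.
    aux_preds: class index predicted by each auxiliary model.
    aux_weights: reliability weight per auxiliary model."""
    scores = main_weight * main_probs.copy()
    for pred, w in zip(aux_preds, aux_weights):
        scores[pred] += w
    return int(np.argmax(scores))

main_probs = np.array([0.1, 0.7, 0.2])        # the main DNN favors class 1
aux_preds, aux_weights = [2, 2, 1], [0.6, 0.6, 0.3]
print(combine(main_probs, aux_preds, aux_weights))  # auxiliaries overrule: class 2
```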

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation

2 code implementations • 20 Mar 2021 • Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo Li

In this paper, we aim to explore the power of generative models and gradient sparsity, and propose DataLens, a scalable privacy-preserving generative model.

Dimensionality Reduction • Navigate +1
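A hedged sketch of the generic "compress, aggregate, perturb" recipe behind such schemes: keep only each gradient's top-k entries by magnitude, then add Gaussian noise to the aggregate. DataLens's actual mechanism further quantizes signs stochastically and calibrates the noise for differential privacy; the parameters below are illustrative.

```python
# Sparsify per-sample gradients before noisy aggregation (sketch only).
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest-magnitude coordinates, keeping only signs."""
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = np.sign(grad[idx])  # sign compression, as in DataLens-style schemes
    return out

def noisy_aggregate(grads, k, sigma, rng):
    """Sum the sparsified gradients and perturb the sum with Gaussian noise."""
    agg = sum(top_k_sparsify(g, k) for g in grads)
    return agg + rng.normal(0.0, sigma, size=agg.shape)

rng = np.random.default_rng(0)
grads = [rng.normal(size=100) for _ in range(32)]
print(noisy_aggregate(grads, k=10, sigma=1.0, rng=rng)[:5])
```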

A Data Quality-Driven View of MLOps

no code implementations • 15 Feb 2021 • Cedric Renggli, Luka Rimanic, Nezihe Merve Gürel, Bojan Karlaš, Wentao Wu, Ce Zhang

Developing machine learning models can be seen as a process similar to the one established for traditional software development.

BIG-bench Machine Learning

Automatic Feasibility Study via Data Quality Analysis for ML: A Case-Study on Label Noise

2 code implementations • 16 Oct 2020 • Cedric Renggli, Luka Rimanic, Luka Kolar, Wentao Wu, Ce Zhang

In our experience working with domain experts who use today's AutoML systems, a common problem we encounter is what we call "unrealistic expectations": users face a very challenging task with a noisy data acquisition process, yet expect startlingly high accuracy from machine learning (ML).

AutoML • BIG-bench Machine Learning
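A back-of-the-envelope bound shows why such expectations need adjusting; this is a standard identity for binary symmetric label noise, not a formula quoted from the paper:

```latex
% Binary classification where each label is flipped independently with
% probability \rho < 1/2. If the clean-distribution Bayes error is R^*,
% the best accuracy measurable against the noisy labels is capped at
\text{acc}_{\max} \;=\; 1 - \bigl(\rho + (1 - 2\rho)\,R^{*}\bigr).
% E.g. \rho = 0.2 and R^* = 0.05 already cap observable accuracy at 0.77.
```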

On Convergence of Nearest Neighbor Classifiers over Feature Transformations

no code implementations • NeurIPS 2020 • Luka Rimanic, Cedric Renggli, Bo Li, Ce Zhang

This analysis requires an in-depth understanding of the properties that connect the transformed space to the raw feature space.
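The setup the paper studies can be mocked up in a few lines: run the same k-NN classifier on raw features and on transformed features. Here a random linear projection stands in for a learned embedding, and all dataset and parameter choices are illustrative rather than the paper's.

```python
# Compare k-NN accuracy in the raw feature space vs. a transformed space.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 10))          # stand-in feature transformation
knn_raw = KNeighborsClassifier(5).fit(X_tr, y_tr)
knn_tf = KNeighborsClassifier(5).fit(X_tr @ W, y_tr)
print("raw:", knn_raw.score(X_te, y_te),
      "transformed:", knn_tf.score(X_te @ W, y_te))
```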

Which Model to Transfer? Finding the Needle in the Growing Haystack

no code implementations • CVPR 2022 • Cedric Renggli, André Susano Pinto, Luka Rimanic, Joan Puigcerver, Carlos Riquelme, Ce Zhang, Mario Lucic

Transfer learning has recently been popularized as a data-efficient alternative to training models from scratch, in particular for computer vision tasks, where it provides a remarkably solid baseline.

Transfer Learning

TSS: Transformation-Specific Smoothing for Robustness Certification

1 code implementation • 27 Feb 2020 • Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li

Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
