Search Results for author: Thijs Vogels

Found 12 papers, 8 papers with code

MultiModN - Multimodal, Multi-Task, Interpretable Modular Networks

1 code implementation 25 Sep 2023 Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley

Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space.

Modular Clinical Decision Support Networks (MoDN) -- Updatable, Interpretable, and Portable Predictions for Evolving Clinical Environments

1 code implementation 12 Nov 2022 Cécile Trottet, Thijs Vogels, Martin Jaggi, Mary-Anne Hartley

Data-driven Clinical Decision Support Systems (CDSS) have the potential to improve and standardise care with personalised probabilistic guidance.

Privacy Preserving

Beyond spectral gap: The role of the topology in decentralized learning

1 code implementation 7 Jun 2022 Thijs Vogels, Hadrien Hendrikx, Martin Jaggi

In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster.

Distributed Optimization
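For context, the decentralized setting studied in this paper has every worker take a local gradient step and then average its parameters with its neighbors in some communication topology. Below is a minimal sketch of that gossip-averaging loop, assuming a ring topology; the helper names (`ring_gossip_matrix`, `decentralized_sgd_step`) are illustrative and not taken from the paper's code.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic mixing matrix for a ring of n workers:
    each worker averages equally with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def decentralized_sgd_step(models, grads, W, lr=0.1):
    """One round: every worker takes a local gradient step, then gossips
    (averages its parameters with its neighbors according to W)."""
    models = models - lr * grads   # local update on each worker
    return W @ models              # neighbor averaging

# toy usage: 8 workers with heterogeneous local objectives 0.5 * ||x - target_i||^2
n, dim = 8, 4
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, dim))   # each worker's "local data"
models = np.zeros((n, dim))
W = ring_gossip_matrix(n)
for _ in range(300):
    grads = models - targets          # local gradients
    models = decentralized_sgd_step(models, grads, W)
print("disagreement between workers:", np.linalg.norm(models - models.mean(axis=0)))
```

The paper's question, per its title, is how the choice of topology (the matrix W above) shapes convergence of such schemes, beyond what the spectral gap of W alone predicts.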

RelaySum for Decentralized Deep Learning on Heterogeneous Data

1 code implementation NeurIPS 2021 Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions.

PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning

2 code implementations NeurIPS 2020 Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models.

Optimizer Benchmarking Needs to Account for Hyperparameter Tuning

no code implementations ICML 2020 Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, François Fleuret

The performance of optimizers, particularly in deep learning, depends considerably on their chosen hyperparameter configuration.

Benchmarking

On the Tunability of Optimizers in Deep Learning

no code implementations 25 Sep 2019 Prabhu Teja S*, Florian Mai*, Thijs Vogels, Martin Jaggi, François Fleuret

There is no consensus yet on whether adaptive gradient methods like Adam are easier to use than non-adaptive optimization methods like SGD.

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

1 code implementation NeurIPS 2019 Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi

We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization.

Distributed Optimization
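PowerSGD compresses each gradient matrix with a low-rank factorization obtained from a single, warm-started power-iteration step. Here is a rough single-worker NumPy sketch of that compression step; the released implementation additionally aggregates the factors across workers and uses error feedback, and `powersgd_compress` is an illustrative name rather than the library's API.

```python
import numpy as np

def powersgd_compress(grad, Q_prev):
    """One warm-started power-iteration step (rank = Q_prev.shape[1]).

    Returns a low-rank approximation of `grad` plus the right factor that
    warm-starts the next round."""
    P, _ = np.linalg.qr(grad @ Q_prev)   # (m, r) orthonormal left factor
    Q = grad.T @ P                       # (n, r) refined right factor
    return P @ Q.T, Q

# toy usage: a 256 x 128 "gradient" that is approximately rank 2
rng = np.random.default_rng(0)
m, n, rank = 256, 128, 2
grad = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n)) + 0.01 * rng.normal(size=(m, n))
Q = rng.normal(size=(n, rank))           # warm-start factor (random on the first round)
approx, Q = powersgd_compress(grad, Q)
print("relative error:", np.linalg.norm(grad - approx) / np.linalg.norm(grad))
```

Sending the two small factors P (m x r) and Q (n x r) instead of the full m x n gradient is what reduces communication.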

Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings

no code implementations ACM Transactions on Graphics 2017 Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony DeRose, Fabrice Rousselle

In a second approach, we introduce a novel, kernel-prediction network which uses the CNN to estimate the local weighting kernels used to compute each denoised pixel from its neighbors.

Denoising
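The snippet above describes the kernel-prediction idea: instead of regressing denoised colors directly, the network outputs a weighting kernel per pixel, and the denoised value is a weighted average of the noisy neighborhood. Below is a small NumPy sketch of that reconstruction step, assuming the per-pixel kernel scores already come from some network and are softmax-normalized; the network itself is omitted and the exact normalization is an assumption of this sketch.

```python
import numpy as np

def apply_predicted_kernels(noisy, kernel_logits):
    """Reconstruct each pixel as a normalized weighted average of its k x k neighborhood.

    noisy:         (H, W, 3) noisy RGB image.
    kernel_logits: (H, W, k*k) per-pixel kernel scores (e.g. from a CNN)."""
    H, W, _ = noisy.shape
    k = int(round(np.sqrt(kernel_logits.shape[-1])))
    r = k // 2
    # softmax over the k*k scores so each pixel's kernel weights sum to 1
    w = np.exp(kernel_logits - kernel_logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = np.zeros_like(noisy)
    offsets = [(dy, dx) for dy in range(k) for dx in range(k)]
    for idx, (dy, dx) in enumerate(offsets):
        # shifted view of the padded image: neighbor at offset (dy - r, dx - r)
        out += w[..., idx:idx + 1] * padded[dy:dy + H, dx:dx + W, :]
    return out

# toy usage with random kernel scores (a real network would predict them per pixel)
rng = np.random.default_rng(0)
noisy = rng.random((32, 32, 3))
logits = rng.normal(size=(32, 32, 21 * 21))  # 21 x 21 kernels
denoised = apply_predicted_kernels(noisy, logits)
print(denoised.shape)  # (32, 32, 3)
```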
