Search Results for author: Alexander Nikitin

Found 9 papers, 5 papers with code

ICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain

no code implementations · 8 Sep 2024 · Guillermo Bernárdez, Lev Telyatnikov, Marco Montagna, Federica Baccini, Mathilde Papillon, Miquel Ferriol-Galmés, Mustafa Hajij, Theodore Papamarkou, Maria Sofia Bucarelli, Olga Zaghen, Johan Mathe, Audun Myers, Scott Mahan, Hansen Lillemark, Sharvaree Vadgama, Erik Bekkers, Tim Doster, Tegan Emerson, Henry Kvinge, Katrina Agate, Nesreen K Ahmed, Pengfei Bai, Michael Banf, Claudio Battiloro, Maxim Beketov, Paul Bogdan, Martin Carrasco, Andrea Cavallo, Yun Young Choi, George Dasoulas, Matouš Elphick, Giordan Escalona, Dominik Filipiak, Halley Fritze, Thomas Gebhart, Manel Gil-Sorribes, Salvish Goomanee, Victor Guallar, Liliya Imasheva, Andrei Irimia, Hongwei Jin, Graham Johnson, Nikos Kanakaris, Boshko Koloski, Veljko Kovač, Manuel Lecha, Minho Lee, Pierrick Leroy, Theodore Long, German Magai, Alvaro Martinez, Marissa Masden, Sebastian Mežnar, Bertran Miquel-Oliver, Alexis Molina, Alexander Nikitin, Marco Nurisso, Matt Piekenbrock, Yu Qin, Patryk Rygiel, Alessandro Salatiello, Max Schattauer, Pavel Snopov, Julian Suk, Valentina Sánchez, Mauricio Tec, Francesco Vaccarino, Jonas Verhellen, Frederic Wantiez, Alexander Weers, Patrik Zajec, Blaž Škrlj, Nina Miolane

This paper describes the 2nd edition of the ICML Topological Deep Learning Challenge that was hosted within the ICML 2024 ELLIS Workshop on Geometry-grounded Representation Learning and Generative Modeling (GRaM).

Deep Learning · Representation Learning

Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities

no code implementations · 30 May 2024 · Alexander Nikitin, Jannik Kossen, Yarin Gal, Pekka Marttinen

To address this problem, we propose Kernel Language Entropy (KLE), a novel method for uncertainty estimation in white- and black-box LLMs.

Text Generation · Uncertainty Quantification
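The snippet above describes KLE as a semantic-similarity-based uncertainty measure. A minimal sketch of the underlying idea, a von Neumann entropy over a kernel of pairwise answer similarities, is shown below; the similarity values and the helper name `von_neumann_entropy` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def von_neumann_entropy(K):
    """Von Neumann entropy of a similarity kernel over sampled answers.

    K is a symmetric positive semi-definite matrix of pairwise semantic
    similarities (hypothetical values here). It is normalised to unit
    trace so it can be treated as a density matrix rho, and the entropy
    -tr(rho log rho) is computed from its eigenvalues.
    """
    rho = K / np.trace(K)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

# Made-up pairwise semantic similarities among 3 sampled LLM answers;
# answers 0 and 1 are near-paraphrases, answer 2 disagrees with both.
K = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(von_neumann_entropy(K))
```

Higher entropy indicates more semantic disagreement among sampled answers, and hence higher model uncertainty; the value is bounded above by log(n) for n answers.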

Thin and Deep Gaussian Processes

1 code implementation · NeurIPS 2023 · Daniel Augusto de Souza, Alexander Nikitin, ST John, Magnus Ross, Mauricio A. Álvarez, Marc Peter Deisenroth, João P. P. Gomes, Diego Mesquita, César Lincoln C. Mattos

Gaussian processes (GPs) can provide a principled approach to uncertainty quantification with easy-to-interpret kernel hyperparameters, such as the lengthscale, which controls the correlation distance of function values.

Gaussian Processes · Uncertainty Quantification
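The abstract notes that the lengthscale hyperparameter controls the correlation distance of function values. A minimal sketch with the standard squared-exponential (RBF) kernel makes this concrete; it is a generic illustration, not code from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between scalar inputs.

    The lengthscale sets how quickly correlation decays with distance:
    a small lengthscale means function values decorrelate quickly.
    """
    return variance * np.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

# Covariance between two points a distance 2.0 apart:
short = rbf_kernel(0.0, 2.0, lengthscale=0.5)  # nearly uncorrelated
long_ = rbf_kernel(0.0, 2.0, lengthscale=5.0)  # still highly correlated
print(short, long_)
```

Because the kernel depends only on the scaled distance, doubling the lengthscale has the same effect as halving all input distances, which is why it is read as a "correlation distance" and is easy to interpret.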

TSGM: A Flexible Framework for Generative Modeling of Synthetic Time Series

1 code implementation · 19 May 2023 · Alexander Nikitin, Letizia Iannucci, Samuel Kaski

Temporally indexed data are essential in a wide range of fields and of interest to machine learning researchers.

Diversity · Synthetic Data Generation · +1

Human-in-the-Loop Large-Scale Predictive Maintenance of Workstations

2 code implementations · 23 Jun 2022 · Alexander Nikitin, Samuel Kaski

We propose a human-in-the-loop PdM approach in which a machine learning system predicts future problems in sets of workstations (computers, laptops, and servers).

Active Learning · Scheduling

Non-separable Spatio-temporal Graph Kernels via SPDEs

no code implementations · 16 Nov 2021 · Alexander Nikitin, ST John, Arno Solin, Samuel Kaski

Gaussian processes (GPs) provide a principled and direct approach for inference and learning on graphs.

Gaussian Processes
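The snippet above refers to the SPDE view of GPs on graphs. A minimal sketch of the purely spatial building block, a Matérn-type kernel built from the graph Laplacian's eigendecomposition, is shown below; the function name and parameter values are illustrative assumptions, and the paper's actual contribution, a non-separable spatio-temporal extension, is not reproduced here.

```python
import numpy as np

def graph_matern_kernel(L, nu=2.0, kappa=1.0):
    """Matérn-type covariance over graph nodes from the Laplacian L.

    Implements the SPDE-flavoured form K = (2*nu/kappa**2 * I + L)**(-nu)
    via the eigendecomposition of the symmetric Laplacian. Smoothness is
    controlled by nu and the effective lengthscale by kappa.
    """
    w, V = np.linalg.eigh(L)
    spectrum = (2.0 * nu / kappa ** 2 + w) ** (-nu)
    return (V * spectrum) @ V.T

# Laplacian of a 3-node path graph: 0 -- 1 -- 2
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
K = graph_matern_kernel(L)
print(K)
```

As expected for a graph kernel, adjacent nodes (0 and 1) end up more strongly correlated than nodes two hops apart (0 and 2).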

Decision Rule Elicitation for Domain Adaptation

no code implementations · 23 Feb 2021 · Alexander Nikitin, Samuel Kaski

Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels for data points from experts, or to obtain expert feedback on how close predicted results are to the target.

Decision Making · Domain Adaptation
