Search Results for author: Aleksandr Rubashevskii

Found 5 papers, 2 papers with code

Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification

no code implementations · 7 Mar 2024 · Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, Hamdy Mubarak, Evgenii Tsymbalov, Gleb Kuzmin, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov

Uncertainty scores leverage information encapsulated in the output of a neural network or its layers to detect unreliable predictions, and we show that they can be used to fact-check the atomic claims in the LLM output.

Tasks: Fact Checking, Hallucination, +1
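
The idea sketched in the abstract — scoring each atomic claim by the uncertainty of the tokens that express it — can be illustrated with a minimal toy example. This is not the paper's method; the scoring function (mean negative log-probability per claim span) and the threshold are illustrative assumptions.

```python
def claim_uncertainty(token_logprobs, claim_span):
    # Mean negative log-probability over the tokens of one atomic claim
    # (an assumed, simple token-level uncertainty score).
    start, end = claim_span
    span = token_logprobs[start:end]
    return -sum(span) / len(span)

def flag_unreliable(token_logprobs, claim_spans, threshold=1.0):
    # Return the claim spans whose uncertainty exceeds the threshold.
    return [s for s in claim_spans
            if claim_uncertainty(token_logprobs, s) > threshold]

# Toy example: log-probs for 8 generated tokens, two claims.
logps = [-0.1, -0.2, -0.1, -2.5, -3.0, -2.8, -0.2, -0.1]
spans = [(0, 3), (3, 6)]
print(flag_unreliable(logps, spans))  # → [(3, 6)]: the second claim is flagged
```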

Conformal Prediction for Federated Uncertainty Quantification Under Label Shift

no code implementations · 8 Jun 2023 · Vincent Plassier, Mehdi Makni, Aleksandr Rubashevskii, Eric Moulines, Maxim Panov

Federated Learning (FL) is a machine learning framework where many clients collaboratively train models while keeping the training data decentralized.

Tasks: Conformal Prediction, Federated Learning, +2
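
As background for this entry, split conformal prediction in the plain, centralized, exchangeable setting can be sketched in a few lines. The federated aggregation and label-shift reweighting that the paper contributes are not reproduced here; the nonconformity score (one minus the true-label probability) is a standard choice assumed for illustration.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Split conformal prediction sets for classification.
    # cal_probs: (n, n_classes) predicted probabilities on calibration data.
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1 for small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A test label enters the set if its score does not exceed the quantile.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

With a calibration set of 19 confident predictions, a confident test point typically yields a singleton prediction set.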

Scalable Batch Acquisition for Deep Bayesian Active Learning

1 code implementation · 13 Jan 2023 · Aleksandr Rubashevskii, Daria Kotova, Maxim Panov

In deep active learning, it is important to select multiple examples for annotation at each step in order to work efficiently, especially on large datasets.

Tasks: Active Learning
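
Batch acquisition of the kind this entry refers to can be illustrated with a generic stochastic variant of uncertainty sampling: instead of greedily taking the top-k most uncertain pool points, sample a batch with probability proportional to a power of the acquisition score, which encourages more diverse batches. This is a common baseline-style sketch, not the paper's specific algorithm; the entropy score and the `beta` temperature are illustrative assumptions.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per pool point; probs has shape (n_pool, n_classes).
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def power_batch(probs, k, beta=1.0, rng=None):
    # Sample k distinct pool indices with probability proportional to
    # (acquisition score)**beta -- a stochastic alternative to greedy top-k.
    rng = np.random.default_rng(rng)
    scores = entropy(probs) ** beta
    p = scores / scores.sum()
    return rng.choice(len(probs), size=k, replace=False, p=p)

# Toy pool: points 0 and 2 are uncertain, 1 and 3 are confident.
pool = np.array([[0.5, 0.5], [0.99, 0.01], [0.6, 0.4], [0.95, 0.05]])
batch = power_batch(pool, k=2, rng=0)
```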
