Search Results for author: Lev V. Utkin

Found 34 papers, 13 papers with code

Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning

no code implementations22 Feb 2024 Andrei V. Konstantinov, Lev V. Utkin

The first idea behind the combination is to form constraints on the joint probability distribution over all combinations of concept values so that it satisfies the expert rules.
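
A minimal sketch of this constraint idea, assuming two invented binary concepts and a single implication rule (both are illustrative stand-ins, not from the paper): the joint distribution over all concept-value combinations is restricted by zeroing out the combinations that violate the rule and renormalizing.

```python
import itertools
import numpy as np

# Hypothetical binary concepts and an invented rule "striped implies animal".
concepts = ["striped", "animal"]
rule = lambda v: (not v["striped"]) or v["animal"]

# Start from a uniform joint distribution over all combinations of concept values.
combos = [dict(zip(concepts, vals)) for vals in itertools.product([0, 1], repeat=len(concepts))]
p = np.ones(len(combos))

# Constrain the joint distribution: remove mass from rule-violating combinations.
mask = np.array([rule(v) for v in combos], dtype=float)
p *= mask
p /= p.sum()  # renormalize to a valid probability distribution

for v, pi in zip(combos, p):
    print(v, round(float(pi), 3))
```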

Generating Survival Interpretable Trajectories and Data

1 code implementation19 Feb 2024 Andrei V. Konstantinov, Stanislav R. Kirpichenko, Lev V. Utkin

A new model for generating survival trajectories and data, based on an autoencoder of a specific structure, is proposed.

Counterfactual Explanation

SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models

1 code implementation11 Dec 2023 Lev V. Utkin, Danila Y. Eremenko, Andrei V. Konstantinov

A new method called the Survival Beran-based Neural Importance Model (SurvBeNIM) is proposed.

SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator

1 code implementation7 Aug 2023 Lev V. Utkin, Danila Y. Eremenko, Andrei V. Konstantinov

For every generated example, the survival function of the black-box model is computed, and the survival function of the surrogate model (the Beran estimator) is constructed as a function of the explanation coefficients.
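
For reference, a minimal sketch of a Beran estimator whose kernel weights are driven by a coefficient vector b, in the spirit of the surrogate described above; the Gaussian kernel form and the toy data are assumptions, and the optimization that fits b to the black-box survival functions is omitted.

```python
import numpy as np

def beran_survival(x, X, times, events, b):
    """Beran estimator of S(t | x); the coefficients b scale feature
    differences inside an assumed Gaussian kernel."""
    order = np.argsort(times)
    times, events, X = times[order], events[order], X[order]
    w = np.exp(-np.sum((b * (X - x)) ** 2, axis=1))
    w /= w.sum()  # normalized kernel weights of the training examples
    S, csum, surv = 1.0, 0.0, []
    for wi, di in zip(w, events):
        denom = max(1.0 - csum, 1e-12)
        if di:  # only events (not censored observations) lower the curve
            S *= 1.0 - wi / denom
        csum += wi
        surv.append(S)
    return times, np.array(surv)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
t, d = rng.exponential(1.0, 50), rng.integers(0, 2, 50)
print(beran_survival(X[0], X, t, d, b=np.ones(3))[1][:5])
```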

A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints

no code implementations19 Jul 2023 Andrei V. Konstantinov, Lev V. Utkin

A new computationally simple method of imposing hard convex constraints on the neural network output values is proposed.

Neural Attention Forests: Transformer-Based Forest Improvement

1 code implementation12 Apr 2023 Andrei V. Konstantinov, Lev V. Utkin, Alexey A. Lukashin, Vladimir A. Muliukha

The main idea behind the proposed NAF model is to introduce the attention mechanism into the random forest: attention weights computed by neural networks of a specific form are assigned to the data in the leaves of the decision trees and to the random forest itself, within the framework of Nadaraya-Watson kernel regression.

Regression
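
A simplified, non-trainable analogue of the NAF idea is sketched below: the forest's uniform averaging is replaced by Nadaraya-Watson attention over per-tree leaf predictions. In NAF the attention weights come from trained neural networks; the fixed softmax over distances to leaf centroids used here is an assumption for brevity.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

def naf_like_predict(x, temperature=1.0):
    keys, values = [], []
    for tree in rf.estimators_:
        leaf = tree.apply(x.reshape(1, -1))[0]
        in_leaf = tree.apply(X) == leaf
        keys.append(X[in_leaf].mean(axis=0))   # leaf centroid acts as the key
        values.append(y[in_leaf].mean())       # leaf mean prediction is the value
    d = np.array([np.sum((x - k) ** 2) for k in keys])
    w = np.exp(-d / temperature)
    w /= w.sum()                               # Nadaraya-Watson attention weights
    return float(w @ np.array(values))

print(naf_like_predict(X[0]))
```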

Interpretable Ensembles of Hyper-Rectangles as Base Models

1 code implementation15 Mar 2023 Andrei V. Konstantinov, Lev V. Utkin

A new, extremely simple ensemble-based model with uniformly generated axis-parallel hyper-rectangles as base models (HRBM) is proposed.

Computational Efficiency

Multiple Instance Learning with Trainable Decision Tree Ensembles

no code implementations13 Feb 2023 Andrei V. Konstantinov, Lev V. Utkin

The whole STE-MIL model, including soft decision trees, neural networks, the attention mechanism and a classifier, is trained in an end-to-end manner.

Multiple Instance Learning

BENK: The Beran Estimator with Neural Kernels for Estimating the Heterogeneous Treatment Effect

1 code implementation19 Nov 2022 Stanislav R. Kirpichenko, Lev V. Utkin, Andrei V. Konstantinov

Instead of the typical kernel functions in the Beran estimator, it is proposed to implement the kernels in the form of neural networks of a specific form, called neural kernels.
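
A minimal sketch of what such a neural kernel can look like, assuming a small shared-weight network that maps the difference of two covariate vectors to a score; the architecture is invented here and is not the one used in BENK.

```python
import torch
import torch.nn as nn

class NeuralKernel(nn.Module):
    """Maps x - x_i to a raw similarity score; a softmax over candidates
    (applied by the caller) turns the scores into normalized weights."""
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, xi):
        return self.net(x - xi).squeeze(-1)

kernel = NeuralKernel(dim=3)
x, Xi = torch.zeros(5, 3), torch.randn(5, 3)
w = torch.softmax(kernel(x, Xi), dim=0)  # weights that plug into the Beran estimator
print(w.detach().numpy())
```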

LARF: Two-level Attention-based Random Forests with a Mixture of Contamination Models

1 code implementation11 Oct 2022 Andrei V. Konstantinov, Lev V. Utkin

The first idea behind the models is to introduce two-level attention, where one of the levels is the "leaf" attention, in which the attention mechanism is applied to every leaf of the trees.

Improved Anomaly Detection by Using the Attention-Based Isolation Forest

1 code implementation5 Oct 2022 Lev V. Utkin, Andrey Y. Ageev, Andrei V. Konstantinov

A new modification of Isolation Forest called Attention-Based Isolation Forest (ABIForest) for solving the anomaly detection problem is proposed.

Anomaly Detection
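
A bare-bones illustration of the ABIForest idea on top of a standard isolation forest: uniform averaging of per-tree path lengths is replaced by a softmax weighting. In ABIForest the weights are trainable and follow the Nadaraya-Watson scheme; the fixed softmax over the path lengths themselves is an assumption made here for brevity.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
iso = IsolationForest(n_estimators=50, random_state=0).fit(X)

def attention_score(x, tau=5.0):
    # Path length of x in each tree: nodes on its decision path minus the root.
    lengths = np.array([est.decision_path(x.reshape(1, -1)).sum() - 1
                        for est in iso.estimators_], dtype=float)
    w = np.exp(-lengths / tau)
    w /= w.sum()               # short paths (likely anomalies) get larger weights
    return float(w @ lengths)  # attention-weighted mean path length

print(attention_score(np.array([5.0, 5.0])), attention_score(X[0]))
```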

Heterogeneous Treatment Effect with Trained Kernels of the Nadaraya-Watson Regression

1 code implementation19 Jul 2022 Andrei V. Konstantinov, Stanislav R. Kirpichenko, Lev V. Utkin

The network is trained on controls; it replaces standard kernels with a set of neural subnetworks with shared parameters, such that every subnetwork implements a trainable kernel while the whole network implements the Nadaraya-Watson estimator.

Regression, Transfer Learning
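
A compact sketch of a trainable Nadaraya-Watson estimator of this kind: one shared subnetwork scores the similarity between a query and every training point, and the softmax of the scores weights the outcomes. The architecture, training loop, and the (X_control, y_control) names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrainableNW(nn.Module):
    def __init__(self, dim, hidden=16):
        super().__init__()
        # Shared-parameter subnetwork playing the role of a trainable kernel.
        self.score = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x, X_train, y_train):
        pairs = torch.cat([x.expand(X_train.size(0), -1), X_train], dim=1)
        w = torch.softmax(self.score(pairs).squeeze(-1), dim=0)
        return (w * y_train).sum()  # Nadaraya-Watson weighted average

torch.manual_seed(0)
X_control, y_control = torch.randn(100, 4), torch.randn(100)
model = TrainableNW(dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):  # simplified leave-one-out fit on controls
    i = torch.randint(0, 100, (1,)).item()
    mask = torch.arange(100) != i
    loss = (model(X_control[i], X_control[mask], y_control[mask]) - y_control[i]) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
print(model(X_control[0], X_control[1:], y_control[1:]).item())
```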

Attention and Self-Attention in Random Forests

no code implementations9 Jul 2022 Lev V. Utkin, Andrei V. Konstantinov

New models of random forests jointly using the attention and self-attention mechanisms are proposed for solving the regression problem.

Regression

Attention-based Random Forest and Contamination Model

no code implementations8 Jan 2022 Lev V. Utkin, Andrei V. Konstantinov

A new approach called ABRF (the attention-based random forest) and its modifications are proposed, applying the attention mechanism to the random forest (RF) for regression and classification.

Regression

Multi-Attention Multiple Instance Learning

no code implementations11 Dec 2021 Andrei V. Konstantinov, Lev V. Utkin

In the method, one of the attention modules takes into account adjacent patches or instances; several attention modules are used to obtain a diverse feature representation of patches; and one attention module unites the different feature representations to provide an accurate classification of each patch (instance) and of the whole bag.

Classification, Multiple Instance Learning
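
A numpy sketch of the pooling scheme just described: several attention modules produce diverse bag representations, and one more attention module unites them. The random weight matrices stand in for trained parameters (an assumption).

```python
import numpy as np

rng = np.random.default_rng(0)
bag = rng.normal(size=(12, 8))        # 12 instances (patches), 8 features each

def attention_pool(H, V, w):
    scores = np.tanh(H @ V) @ w       # one score per row of H
    a = np.exp(scores)
    a /= a.sum()                      # attention weights over instances
    return a @ H                      # weighted bag embedding

# Several attention modules give diverse views of the same bag ...
views = [attention_pool(bag, rng.normal(size=(8, 4)), rng.normal(size=4)) for _ in range(3)]
# ... and one final attention module unites the per-view representations.
bag_repr = attention_pool(np.stack(views), rng.normal(size=(8, 4)), rng.normal(size=4))
print(bag_repr.shape)
```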

Attention-like feature explanation for tabular data

1 code implementation10 Aug 2021 Andrei V. Konstantinov, Lev V. Utkin

The first part is a set of one-feature neural subnetworks, which aim to produce a specific representation of every feature in the form of a basis of shape functions.
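
A minimal sketch of such one-feature subnetworks, assuming tiny fully connected nets (layer sizes invented): each subnetwork maps a single scalar feature to a learned shape function, and the model output is the sum of per-feature contributions, which makes every feature's contribution directly inspectable.

```python
import torch
import torch.nn as nn

class OneFeatureNet(nn.Module):
    """A tiny subnetwork mapping one scalar feature to its shape function."""
    def __init__(self, hidden=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

n_features = 4
shape_nets = nn.ModuleList([OneFeatureNet() for _ in range(n_features)])
x = torch.randn(32, n_features)
contributions = torch.cat([net(x[:, k:k + 1]) for k, net in enumerate(shape_nets)], dim=1)
prediction = contributions.sum(dim=1)  # additive, feature-wise explainable output
print(contributions.shape, prediction.shape)
```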

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data

1 code implementation16 Jun 2021 Lev V. Utkin, Andrei V. Konstantinov, Kirill A. Vishniakov

One of the most popular methods for explaining machine learning predictions is the SHapley Additive exPlanations (SHAP) method.

Ensembles of Random SHAPs

no code implementations4 Mar 2021 Lev V. Utkin, Andrei V. Konstantinov

According to the first modification, called ER-SHAP, several features are randomly selected many times from the feature set, and Shapley values for the features are computed by means of "small" SHAPs.
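
A Monte Carlo sketch of this ER-SHAP idea: repeatedly sample a small feature subset, estimate Shapley values within that subset only, and average the results per feature. The toy linear "black box", the baseline, and the sampling sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
model = lambda X: X @ np.array([2.0, -1.0, 0.5, 0.0])  # toy black-box model
x, baseline = np.ones(4), np.zeros(4)

def small_shap(features):
    """Permutation-based Shapley values restricted to the sampled subset."""
    phi = np.zeros(len(features))
    perms = [rng.permutation(features) for _ in range(20)]
    for perm in perms:
        z, prev = baseline.copy(), model(baseline)
        for f in perm:
            z[f] = x[f]
            cur = model(z)
            phi[list(features).index(f)] += cur - prev
            prev = cur
    return phi / len(perms)

counts, totals = np.zeros(4), np.zeros(4)
for _ in range(200):                           # many random "small" SHAPs ...
    S = rng.choice(4, size=2, replace=False)
    totals[S] += small_shap(S)
    counts[S] += 1
print(totals / np.maximum(counts, 1))          # ... averaged per feature
```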

Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines

1 code implementation14 Oct 2020 Andrei V. Konstantinov, Lev V. Utkin

In contrast to the neural additive model, the method provides the feature weights in explicit form, and it is simple to train.

Additive Models, BIG-bench Machine Learning, +1
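
A simplified stand-in for the construction above: one GBM per single feature plays the role of a shape function, and a linear head on top yields the explicit feature weights. The paper's actual ensemble construction differs; hyperparameters here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
# One gradient boosting machine per feature, trained on that feature alone.
gbms = [GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X[:, [k]], y)
        for k in range(X.shape[1])]
F = np.column_stack([g.predict(X[:, [k]]) for k, g in enumerate(gbms)])
head = Ridge(alpha=1.0).fit(F, y)  # linear head exposes per-feature weights
print("explicit feature weights:", np.round(head.coef_, 3))
```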

A Generalized Stacking for Implementing Ensembles of Gradient Boosting Machines

no code implementations12 Oct 2020 Andrei V. Konstantinov, Lev V. Utkin

The main idea behind the approach is to use the stacking algorithm in order to learn a second-level meta-model which can be regarded as a model for implementing various ensembles of gradient boosting models.

Regression
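
A minimal stacking setup consistent with the description above: several gradient boosting models as first-level learners and a second-level meta-model on top. Hyperparameters are illustrative, not taken from the paper.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
# First-level GBMs of varying depth; a ridge meta-model learns how to combine them.
base = [(f"gbm{d}", GradientBoostingRegressor(max_depth=d, random_state=0)) for d in (1, 3, 5)]
stack = StackingRegressor(estimators=base, final_estimator=RidgeCV()).fit(X, y)
print(stack.score(X, y))
```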

Gradient boosting machine with partially randomized decision trees

no code implementations19 Jun 2020 Andrei V. Konstantinov, Lev V. Utkin

The gradient boosting machine is a powerful ensemble-based machine learning method for solving regression problems.

BIG-bench Machine Learning, Regression
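
A hand-rolled boosting loop for squared loss in which each weak learner is a randomized tree (split thresholds drawn at random), loosely analogous to the partially randomized trees discussed above; the paper's exact randomization scheme may differ.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import ExtraTreeRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
pred, lr, trees = np.full_like(y, y.mean()), 0.1, []
for m in range(100):
    resid = y - pred                          # negative gradient of squared loss
    t = ExtraTreeRegressor(max_depth=3, random_state=m).fit(X, resid)
    pred += lr * t.predict(X)
    trees.append(t)
print("train MSE:", round(float(np.mean((y - pred) ** 2)), 2))
```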

SurvLIME-Inf: A simplified modification of SurvLIME for explanation of machine learning survival models

no code implementations5 May 2020 Lev V. Utkin, Maxim S. Kovalev, Ernest M. Kasimov

A new modification of the explanation method SurvLIME called SurvLIME-Inf for explaining machine learning survival models is proposed.

BIG-bench Machine Learning, Survival Analysis

A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds

no code implementations5 May 2020 Maxim S. Kovalev, Lev V. Utkin

As a result, the robust maximin strategy is used: it minimizes the average distance between the cumulative hazard functions of the explained black-box model and of the approximating Cox model, while maximizing that distance over all cumulative hazard functions within the interval produced by the Kolmogorov-Smirnov bounds.

BIG-bench Machine Learning
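
A short worked example of the Kolmogorov-Smirnov band referred to above: the empirical survival function is sandwiched between S_hat +/- eps, with eps from the Dvoretzky-Kiefer-Wolfowitz inequality. The maximin fit would then search over curves inside this band (not shown); the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.sort(rng.exponential(1.0, size=n))
S_hat = 1.0 - np.arange(1, n + 1) / n          # empirical survival function
alpha = 0.05
eps = np.sqrt(np.log(2.0 / alpha) / (2 * n))   # DKW half-width for the KS band
lower, upper = np.clip(S_hat - eps, 0, 1), np.clip(S_hat + eps, 0, 1)
print(f"band half-width eps = {eps:.3f}")
```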

An explanation method for Siamese neural networks

no code implementations18 Nov 2019 Lev V. Utkin, Maxim S. Kovalev, Ernest M. Kasimov

A new method for explaining the Siamese neural network is proposed.

An Adaptive Weighted Deep Forest Classifier

no code implementations4 Jan 2019 Lev V. Utkin, Andrei V. Konstantinov, Viacheslav S. Chukanov, Mikhail V. Kots, Anna A. Meldo

The idea underlying the modification is very simple and stems from the confidence screening mechanism proposed by Pang et al., which simplifies the Deep Forest classifier by updating the training set at each level in accordance with the classification accuracy of every training instance.

General Classification
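
A bare-bones illustration of the confidence screening step: at every cascade level, instances that the current forest already classifies with high confidence are screened out before the next level is trained. The threshold and level count are arbitrary, and the adaptive weighting of the paper itself is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
idx = np.arange(len(y))
for level in range(3):
    clf = RandomForestClassifier(n_estimators=50, random_state=level).fit(X[idx], y[idx])
    conf = clf.predict_proba(X[idx]).max(axis=1)
    keep = conf < 0.9  # screen out easy, high-confidence instances
    print(f"level {level}: {keep.sum()} of {len(idx)} instances passed on")
    if keep.sum() < 20:
        break
    idx = idx[keep]
```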

A simple genome-wide association study algorithm

no code implementations5 Aug 2017 Lev V. Utkin, Irina L. Utkina

A computationally simple genome-wide association study (GWAS) algorithm for estimating the main and epistatic effects of markers or single nucleotide polymorphisms (SNPs) is proposed.

Discriminative Metric Learning with Deep Forest

no code implementations25 May 2017 Lev V. Utkin, Mikhail A. Ryabinin

A Discriminative Deep Forest (DisDF) as a metric learning algorithm is proposed in the paper.

Metric Learning

A Siamese Deep Forest

no code implementations27 Apr 2017 Lev V. Utkin, Mikhail A. Ryabinin

A Siamese Deep Forest (SDF) is proposed in the paper.
