Search Results for author: Irina Gaynanova

Found 14 papers, 6 papers with code

Fast computation of latent correlations

2 code implementations · 24 Jun 2020 · Grace Yoon, Christian L. Müller, Irina Gaynanova

Latent Gaussian copula models provide a powerful means to perform multi-view data integration since these models can seamlessly express dependencies between mixed variable types (binary, continuous, zero-inflated) via latent Gaussian correlations.

Computation · Methodology
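The continuous–continuous case of the latent-correlation bridge is simple enough to sketch: under a Gaussian copula, the latent Pearson correlation equals sin(π/2 · τ), where τ is Kendall's tau. A minimal Python illustration of that bridge (not the paper's implementation, whose contribution is the fast evaluation of the harder binary and zero-inflated bridges):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Simulate two continuous variables with a known latent (Pearson) correlation.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=5000).T

# Bridge for the continuous/continuous case under a Gaussian copula:
# latent correlation r = sin(pi/2 * Kendall's tau).
tau, _ = kendalltau(x, y)
r_hat = np.sin(np.pi / 2 * tau)

print(f"tau = {tau:.3f}, latent correlation estimate = {r_hat:.3f}")
```

Because the bridge is rank-based, the same τ (and hence the same latent-correlation estimate) is obtained after any monotone transformation of `x` or `y`.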

Gluformer: Transformer-Based Personalized Glucose Forecasting with Uncertainty Quantification

1 code implementation · 9 Sep 2022 · Renat Sergazinov, Mohammadreza Armandpour, Irina Gaynanova

Deep learning models achieve state-of-the-art results in predicting blood glucose trajectories, with a wide range of architectures being proposed.

Uncertainty Quantification

Non-convex Global Minimization and False Discovery Rate Control for the TREX

1 code implementation · 22 Apr 2016 · Jacob Bien, Irina Gaynanova, Johannes Lederer, Christian Müller

The TREX is a recently introduced method for performing sparse high-dimensional regression.

Sparse Feature Selection in Kernel Discriminant Analysis via Optimal Scoring

1 code implementation · 12 Feb 2019 · Alexander F. Lapanowski, Irina Gaynanova

We consider the two-group classification problem and propose a kernel classifier based on the optimal scoring framework.

Classification · Feature Selection
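The optimal-scoring idea behind this classifier can be sketched in a few lines: encode the two classes as scores (+1 / −1), regress the scores on the features, and classify by the sign of the fitted score; a kernel makes the resulting discriminant nonlinear. A hedged toy version using scikit-learn's kernel ridge regression (the paper additionally performs sparse feature selection, which is not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Two nonlinearly separated groups.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Optimal-scoring encoding: classes as scores +1 / -1.
scores = np.where(y == 1, 1.0, -1.0)

X_tr, X_te, s_tr, _, y_tr, y_te = train_test_split(
    X, scores, y, test_size=0.5, random_state=0
)

# Kernel ridge regression of the scores on the features (RBF kernel).
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=2.0).fit(X_tr, s_tr)

# Classify by the sign of the predicted score.
y_pred = (model.predict(X_te) > 0).astype(int)
print("test accuracy:", (y_pred == y_te).mean())
```

The kernel and its bandwidth (`gamma`) are tuning parameters; the linear kernel recovers ordinary optimal scoring, which is equivalent to Fisher's LDA for two groups.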

Double-matched matrix decomposition for multi-view data

1 code implementation · 7 May 2021 · Dongbang Yuan, Irina Gaynanova

We consider the problem of extracting joint and individual signals from multi-view data, that is, data collected from different sources on matched samples.

Sparse quadratic classification rules via linear dimension reduction

1 code implementation · 13 Nov 2017 · Irina Gaynanova, Tianying Wang

We consider the problem of high-dimensional classification between two groups with unequal covariance matrices.

Classification · Dimensionality Reduction
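The general recipe described here can be illustrated with a small simulation: when the two groups have different covariances, a quadratic rule is appropriate, but QDA cannot be fit directly when p is large relative to n, so the data are first projected onto a few directions. A hedged sketch using PCA as a stand-in for the paper's discriminant-based projections:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Two high-dimensional groups (p = 100) with different means AND different
# covariances in the first 3 coordinates.
n, p = 200, 100
X0 = rng.standard_normal((n, p))                 # group 0: N(0, I)
X1 = rng.standard_normal((n, p))
X1[:, :3] = 2.0 + 2.5 * X1[:, :3]                # group 1: shifted, inflated variance
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Linear dimension reduction first, then the quadratic rule in the
# low-dimensional space where QDA is well-posed.
clf = make_pipeline(PCA(n_components=5), QuadraticDiscriminantAnalysis())
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

The paper's method chooses sparse, discriminant-driven projection directions rather than unsupervised principal components; PCA is used here only to keep the sketch self-contained.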

Oracle Inequalities for High-dimensional Prediction

no code implementations · 1 Aug 2016 · Johannes Lederer, Lu Yu, Irina Gaynanova

The abundance of high-dimensional data in the modern sciences has generated tremendous interest in penalized estimators such as the lasso, scaled lasso, square-root lasso, elastic net, and many others.

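Two of the penalized estimators named in the abstract are readily available in scikit-learn, so the setting the oracle inequalities cover (sparse high-dimensional regression with n < p) is easy to reproduce. An illustrative fit, not tied to the paper's theory beyond the estimators themselves:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)

# Sparse high-dimensional regression: n = 100 samples, p = 200 features,
# only the first 5 coefficients are nonzero.
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.standard_normal(n)

# Two of the penalized estimators the paper's oracle inequalities cover.
lasso = Lasso(alpha=0.2).fit(X, y)
enet = ElasticNet(alpha=0.2, l1_ratio=0.7).fit(X, y)

print("lasso support:", np.flatnonzero(lasso.coef_))
print("elastic net support:", np.flatnonzero(enet.coef_))
```

The regularization level `alpha=0.2` is an illustrative choice near the usual σ√(2 log p / n) scaling; in practice it would be tuned, e.g. by cross-validation.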

Structural Learning and Integrative Decomposition of Multi-View Data

no code implementations · 20 Jul 2017 · Irina Gaynanova, Gen Li

We call this model SLIDE for Structural Learning and Integrative DEcomposition of multi-view data.

Clustering · Dimensionality Reduction

Penalized versus constrained generalized eigenvalue problems

no code implementations · 22 Oct 2014 · Irina Gaynanova, James Booth, Martin T. Wells

We investigate the difference between using an $\ell_1$ penalty versus an $\ell_1$ constraint in generalized eigenvalue problems, such as principal component analysis and discriminant analysis.

Variable Selection
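For a generalized eigenvalue problem with a signal matrix $A$ (e.g. a between-class or covariance matrix) and a scaling matrix $B$, the two sparse formulations being compared can be written roughly as follows (generic textbook forms, not necessarily the exact ones analyzed in the paper):

```latex
% l1-penalized form:
\max_{v}\; v^\top A v - \lambda \lVert v \rVert_1
  \quad \text{subject to } v^\top B v \le 1

% l1-constrained form:
\max_{v}\; v^\top A v
  \quad \text{subject to } v^\top B v \le 1,\ \lVert v \rVert_1 \le t
```

By Lagrangian duality the two forms coincide for convex problems, but generalized eigenvalue problems are nonconvex, which is why the choice between the penalty parameter $\lambda$ and the constraint level $t$ can genuinely matter.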

Simultaneous sparse estimation of canonical vectors in the p>>N setting

no code implementations · 24 Mar 2014 · Irina Gaynanova, James G. Booth, Martin T. Wells

Secondly, we propose an extension of this form to the $p\gg N$ setting and achieve feature selection by using a group penalty.

Classification · Consistency · Feature Selection

Optimal variable selection in multi-group sparse discriminant analysis

no code implementations · 23 Nov 2014 · Irina Gaynanova, Mladen Kolar

This article considers the problem of multi-group classification in the setting where the number of variables $p$ is larger than the number of observations $n$.

Variable Selection

Supervised Classification Using Sparse Fisher's LDA

no code implementations · 21 Jan 2013 · Irina Gaynanova, James G. Booth, Martin T. Wells

We apply a lasso-type penalty to the discriminant vector to ensure sparsity of the solution and use a shrinkage type estimator for the covariance matrix.

Classification · General Classification
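The two ingredients mentioned in the abstract — a shrinkage covariance estimator and a lasso-type penalty on the discriminant vector — can be sketched with off-the-shelf pieces: a Ledoit-Wolf covariance estimate, the Fisher direction, and soft-thresholding as a simple lasso-like surrogate (this illustrates the idea only, not the paper's actual estimator):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)

# Two groups differing only in the first 3 of p = 50 coordinates.
n, p = 100, 50
X0 = rng.standard_normal((n, p))
X1 = rng.standard_normal((n, p))
X1[:, :3] += 2.0

# Shrinkage estimate of the pooled within-group covariance (Ledoit-Wolf),
# computed from the group-centered data.
Xc = np.vstack([X0 - X0.mean(axis=0), X1 - X1.mean(axis=0)])
sigma = LedoitWolf().fit(Xc).covariance_

# Fisher discriminant direction, then soft-thresholding as a crude
# stand-in for the lasso-type penalty on the discriminant vector.
w = np.linalg.solve(sigma, X1.mean(axis=0) - X0.mean(axis=0))
lam = 0.5 * np.abs(w).max()
w_sparse = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

print("nonzero coordinates:", np.flatnonzero(w_sparse))
```

The thresholding level `lam` is an arbitrary illustrative choice; the paper selects the penalty by a principled criterion rather than a fixed fraction of the largest coefficient.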

Joint association and classification analysis of multi-view data

no code implementations · 20 Nov 2018 · Yunfeng Zhang, Irina Gaynanova

A distinct advantage of JACA is that it can be applied to multi-view data with block-missing structure, that is, to cases where a subset of views or class labels is missing for some subjects.

Classification · General Classification

Compressing Large Sample Data for Discriminant Analysis

no code implementations · 8 May 2020 · Alexander F. Lapanowski, Irina Gaynanova

Large-sample data have become prevalent as data acquisition has become cheaper and easier.
