Search Results for author: Juho Rousu

Found 12 papers, 7 papers with code

Scalable variable selection for two-view learning tasks with projection operators

1 code implementation • 4 Jul 2023 • Sandor Szedmak, Riikka Huusari, Tat Hong Duong Le, Juho Rousu

With projection operators, the relationship (correlation) between sets of input and output variables can also be expressed via kernel functions; thus nonlinear correlation models can be exploited as well.

Variable Selection

Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters

1 code implementation • 8 Feb 2022 • Luc Brogat-Motte, Rémi Flamary, Céline Brouard, Juho Rousu, Florence d'Alché-Buc

This paper introduces a novel and generic framework to solve the flagship task of supervised labeled graph prediction by leveraging Optimal Transport tools.

regression

Learning primal-dual sparse kernel machines

1 code implementation • 27 Aug 2021 • Riikka Huusari, Sahely Bhadra, Cécile Capponi, Hachem Kadri, Juho Rousu

In this paper, instead of using the traditional representer theorem, we propose to search for a solution in RKHS that has a pre-image decomposition in the original data space, where the elements don't necessarily correspond to the elements in the training set.

Learning Output Embeddings in Structured Prediction

no code implementations • 29 Jul 2020 • Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d'Alché-Buc

A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension by means of output kernels, and then, solving a regression problem in this output space.

regression Structured Prediction
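The output-embedding idea above can be sketched in a few lines: embed outputs with an output kernel, solve a ridge regression into that feature space, and predict by searching a finite candidate set for the best pre-image. This is a minimal illustrative sketch, not the paper's method — the RBF input kernel, linear output embedding, toy data, and candidate-search decoding are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) input kernel between row sets A and B.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Toy data: outputs are represented directly by their (linear) embeddings.
X_train = rng.normal(size=(30, 2))
Y_train = np.column_stack([np.sin(X_train[:, 0]), np.cos(X_train[:, 1])])

lam = 1e-2
K = rbf(X_train, X_train)
# Kernel ridge regression into the output feature space.
A = np.linalg.solve(K + lam * np.eye(len(K)), Y_train)

def predict(x_new, candidates):
    # Regress to an output embedding, then decode by nearest candidate.
    psi_hat = rbf(x_new[None, :], X_train) @ A        # shape (1, output_dim)
    d = ((candidates - psi_hat) ** 2).sum(-1)
    return candidates[np.argmin(d)]

x = np.array([0.3, -0.2])
y_hat = predict(x, Y_train)   # decoded output embedding, shape (2,)
```

With an infinite-dimensional output kernel the regression step stays the same via the kernel trick; only the decoding (pre-image) step changes.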

A Solution for Large Scale Nonlinear Regression with High Rank and Degree at Constant Memory Complexity via Latent Tensor Reconstruction

no code implementations • 4 May 2020 • Sandor Szedmak, Anna Cichonska, Heli Julkunen, Tapio Pahikkala, Juho Rousu

For learning the models, we present an efficient gradient-based algorithm that can be implemented in linear time in the sample size, order, rank of the tensor and the dimension of the input.

Multi-view Learning Tensor Decomposition

Bayesian Metabolic Flux Analysis reveals intracellular flux couplings

1 code implementation • 18 Apr 2018 • Markus Heinonen, Maria Osmala, Henrik Mannerström, Janne Wallenius, Samuel Kaski, Juho Rousu, Harri Lähdesmäki

Flux analysis methods commonly place unrealistic assumptions on fluxes due to the convenience of formulating the problem as a linear programming model, and most methods ignore the notable uncertainty in flux estimates.

A Tutorial on Canonical Correlation Methods

1 code implementation • 7 Nov 2017 • Viivi Uurtio, João M. Monteiro, Jaz Kandola, John Shawe-Taylor, Delmiro Fernandez-Reyes, Juho Rousu

Canonical correlation analysis is a family of multivariate statistical methods for the analysis of paired sets of variables.
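Classical CCA on two paired views reduces to an SVD of the whitened cross-covariance matrix; its singular values are the canonical correlations. A minimal sketch (toy data and the small regularizer are assumptions, not part of the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired views sharing one latent signal z; second column is pure noise.
n = 200
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])

def cca(X, Y, reg=1e-8):
    """Canonical correlations via SVD of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (whitening).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)  # sorted descending

rho = cca(X, Y)
# rho[0] should be close to 1 (the shared signal), rho[1] close to 0.
```

The tutorial also covers regularized and kernel variants; those replace the covariance matrices above with their penalized or kernelized counterparts.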

Multi-view Kernel Completion

2 code implementations • 8 Feb 2016 • Sahely Bhadra, Samuel Kaski, Juho Rousu

In this paper, we introduce the first method that (1) can complete kernel matrices with completely missing rows and columns as opposed to individual missing kernel values, (2) does not require any of the kernels to be complete a priori, and (3) can tackle non-linear kernels.

Non-Stationary Gaussian Process Regression with Hamiltonian Monte Carlo

1 code implementation • 18 Aug 2015 • Markus Heinonen, Henrik Mannerström, Juho Rousu, Samuel Kaski, Harri Lähdesmäki

We present a novel approach for fully non-stationary Gaussian process regression (GPR), where all three key parameters -- noise variance, signal variance and lengthscale -- can be simultaneously input-dependent.

GPR regression
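One way to see what an input-dependent lengthscale means is the Gibbs non-stationary RBF kernel, which stays positive definite for any positive lengthscale function. This sketch shows only the lengthscale part (the paper makes noise and signal variance input-dependent too, and infers the functions with HMC rather than fixing them); the lengthscale function here is made up for illustration:

```python
import numpy as np

def lengthscale(x):
    # Hypothetical input-dependent lengthscale: shorter near the origin,
    # so the GP can wiggle more there and smooth out further away.
    return 0.2 + 0.5 * np.abs(x)

def gibbs_kernel(x1, x2, signal_var=1.0):
    """Gibbs non-stationary RBF kernel for 1-D inputs."""
    l1 = lengthscale(x1)[:, None]
    l2 = lengthscale(x2)[None, :]
    sq = (x1[:, None] - x2[None, :]) ** 2
    prefactor = np.sqrt(2.0 * l1 * l2 / (l1**2 + l2**2))
    return signal_var * prefactor * np.exp(-sq / (l1**2 + l2**2))

x = np.linspace(-1.0, 1.0, 50)
K = gibbs_kernel(x, x) + 1e-8 * np.eye(50)  # jitter for numerical stability
L = np.linalg.cholesky(K)                   # succeeds: K is positive definite
```

Setting `lengthscale` to a constant recovers the ordinary stationary RBF kernel, which is a useful sanity check.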

Multilabel Structured Output Learning with Random Spanning Trees of Max-Margin Markov Networks

no code implementations • NeurIPS 2014 • Mario Marchand, Hongyu Su, Emilie Morvant, Juho Rousu, John S. Shawe-Taylor

We show that the usual score function for conditional Markov networks can be written as the expectation over the scores of their spanning trees.
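The spanning-tree expectation is easy to verify by hand on a tiny graph: under the uniform distribution over spanning trees, every edge of the triangle graph K3 appears in the same fraction of trees, so the expected tree score is a fixed multiple of the full edge-score sum. A toy check (the edge scores are made up; this is not the paper's construction, just the counting identity behind it):

```python
from itertools import combinations

# Triangle graph K3 with arbitrary edge scores.
edges = [(0, 1), (1, 2), (0, 2)]
score = {(0, 1): 1.5, (1, 2): -0.5, (0, 2): 2.0}

# Every 2-edge subset of K3 is a spanning tree, so there are 3 of them.
trees = list(combinations(edges, 2))

full_score = sum(score.values())                       # 3.0
expected_tree_score = sum(
    sum(score[e] for e in t) for t in trees
) / len(trees)

# Each edge lies in 2 of the 3 spanning trees, hence the 2/3 factor:
assert abs(expected_tree_score - (2 / 3) * full_score) < 1e-12
```

On general graphs the edge-marginal factor varies per edge, which is why the paper works with the expectation itself rather than a single scaling constant.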

Multilabel Classification through Random Graph Ensembles

no code implementations • 31 Oct 2013 • Hongyu Su, Juho Rousu

We present new methods for multilabel classification, relying on ensemble learning on a collection of random output graphs imposed on the multilabel and a kernel-based structured output learner as the base classifier.

Classification Ensemble Learning +1
