Search Results for author: Lars Schmidt-Thieme

Found 53 papers, 21 papers with code

Auxiliary Quantile Forecasting with Linear Networks

no code implementations 5 Dec 2022 Shayan Jawed, Lars Schmidt-Thieme

We show that, following the multi-task-learning intuition of exploiting correlations among forecast horizons, we can model multiple quantile estimates as auxiliary tasks for each forecast horizon, improving forecast accuracy across the quantile estimates compared to modeling only a single quantile estimate.

Multi-Task Learning Time Series
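The abstract above describes training several quantile estimates jointly. The standard per-quantile training signal is the pinball (quantile) loss; a minimal sketch with illustrative numbers (the paper's architecture and task weighting are not reproduced here):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a single quantile level q in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Summing the loss over several quantile levels treats each extra
# quantile estimate as an auxiliary task alongside the main forecast.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 1.5])
total = sum(pinball_loss(y_true, y_pred, q) for q in (0.1, 0.5, 0.9))
```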

Deep Multi-Representation Model for Click-Through Rate Prediction

1 code implementation 18 Oct 2022 Shereen Elsayed, Lars Schmidt-Thieme

Click-Through Rate (CTR) prediction is a crucial task in recommender systems, and it has gained considerable attention in the past few years.

Click-Through Rate Prediction Recommendation Systems +1

Tripletformer for Probabilistic Interpolation of Asynchronous Time Series

1 code implementation 5 Oct 2022 Vijaya Krishna Yalavarthi, Johannes Burchert, Lars Schmidt-Thieme

Asynchronous time series are often observed in applications such as health care, astronomy, and climate science, and pose a significant challenge to standard deep learning architectures.

Astronomy Medical Diagnosis +1

When Bioprocess Engineering Meets Machine Learning: A Survey from the Perspective of Automated Bioprocess Development

no code implementations 2 Sep 2022 Nghia Duong-Trung, Stefan Born, Jong Woo Kim, Marie-Therese Schermeyer, Katharina Paulick, Maxim Borisyak, Mariano Nicolas Cruz-Bournazou, Thorben Werner, Randolf Scholz, Lars Schmidt-Thieme, Peter Neubauer, Ernesto Martinez

ML can be seen as a set of tools that contribute to the automation of the whole experimental cycle, including model building and practical planning, thus allowing human experts to focus on the more demanding and overarching cognitive tasks.

Model Selection Probabilistic Programming

DCSF: Deep Convolutional Set Functions for Classification of Asynchronous Time Series

1 code implementation 24 Aug 2022 Vijaya Krishna Yalavarthi, Johannes Burchert, Lars Schmidt-Thieme

Because of the asynchronous nature, they pose a significant challenge to deep learning architectures, which presume that the time series presented to them are regularly sampled, fully observed, and aligned with respect to time.

Astronomy Classification +1

Attention, Filling in The Gaps for Generalization in Routing Problems

no code implementations 14 Jul 2022 Ahmad Bdeir, Jonas K. Falkner, Lars Schmidt-Thieme

Machine Learning (ML) methods have become a useful tool for tackling vehicle routing problems, either in combination with popular heuristics or as standalone models.

Data Augmentation

Learning to Control Local Search for Combinatorial Optimization

1 code implementation 27 Jun 2022 Jonas K. Falkner, Daniela Thyssens, Ahmad Bdeir, Lars Schmidt-Thieme

Combinatorial optimization problems are encountered in many practical contexts such as logistics and production, but exact solutions are particularly difficult to find, as the problems are usually NP-hard at considerable problem sizes.

Combinatorial Optimization

Zero-Shot AutoML with Pretrained Models

1 code implementation 16 Jun 2022 Ekrem Öztürk, Fabio Ferreira, Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka, Frank Hutter

Given a new dataset D and a low compute budget, how should we choose a pre-trained model to fine-tune to D, and set the fine-tuning hyperparameters without risking overfitting, particularly if D is small?

AutoML Meta-Learning

End-to-End Image-Based Fashion Recommendation

1 code implementation 5 May 2022 Shereen Elsayed, Lukas Brinkmeyer, Lars Schmidt-Thieme

In fashion-based recommendation settings, incorporating the item image features is considered a crucial factor, and it has shown significant improvements to many traditional models, including but not limited to matrix factorization, auto-encoders, and nearest neighbor models.

Recommendation Systems Representation Learning

Large Neighborhood Search based on Neural Construction Heuristics

1 code implementation 2 May 2022 Jonas K. Falkner, Daniela Thyssens, Lars Schmidt-Thieme

The neural repair operator is combined with a local search routine, heuristic destruction operators and a selection procedure applied to a small population to arrive at a sophisticated solution approach.

reinforcement-learning

CARCA: Context and Attribute-Aware Next-Item Recommendation via Cross-Attention

1 code implementation 4 Apr 2022 Ahmed Rashed, Shereen Elsayed, Lars Schmidt-Thieme

This cross-attention allows CARCA to harness the correlation between old and recent items in the user profile and their influence on deciding which item to recommend next.

Ranked #1 on Recommendation Systems on Amazon Games (using extra training data)

Sequential Recommendation

Positive-Unlabeled Domain Adaptation

no code implementations 11 Feb 2022 Jonas Sonntag, Gunnar Behrens, Lars Schmidt-Thieme

In this work we are the first to introduce the challenge of Positive-Unlabeled Domain Adaptation where we aim to generalise from a fully labeled source domain to a target domain where only positive and unlabeled data is available.

Domain Adaptation Object Recognition

Supervised Permutation Invariant Networks for Solving the CVRP with Bounded Fleet Size

no code implementations 5 Jan 2022 Daniela Thyssens, Jonas Falkner, Lars Schmidt-Thieme

Learning to solve combinatorial optimization problems, such as the vehicle routing problem, offers great computational advantages over classical operations research solvers and heuristics.

Combinatorial Optimization

Improving Hyperparameter Optimization by Planning Ahead

no code implementations 15 Oct 2021 Hadi S. Jomaa, Jonas Falkner, Lars Schmidt-Thieme

Hyperparameter optimization (HPO) is generally treated as a bi-level optimization problem that involves fitting a (probabilistic) surrogate model to a set of observed hyperparameter responses, e.g. validation loss, and then maximizing an acquisition function over the surrogate model to identify good hyperparameter candidates for evaluation.

Hyperparameter Optimization Model-based Reinforcement Learning +2
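The surrogate-plus-acquisition loop described above can be illustrated with the common expected-improvement acquisition. The candidate means and standard deviations below are made-up stand-ins for a fitted surrogate's predictions, not values from the paper:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected improvement (for minimization) at one candidate point,
    given the surrogate's posterior mean `mu` and std `sigma`."""
    if sigma == 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    return (best - mu - xi) * cdf + sigma * pdf

# Pick the candidate hyperparameter with the highest acquisition value:
candidates = [(0.30, 0.05), (0.25, 0.20), (0.40, 0.01)]  # (mu, sigma) pairs
best_observed = 0.28                                     # best loss so far
scores = [expected_improvement(m, s, best_observed) for m, s in candidates]
next_idx = max(range(len(candidates)), key=scores.__getitem__)
```

The high-variance candidate wins here, which is exactly the exploration behavior the acquisition function is meant to provide.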

Transfer Learning for Bayesian HPO with End-to-End Meta-Features

no code implementations 29 Sep 2021 Hadi Samer Jomaa, Sebastian Pineda Arango, Lars Schmidt-Thieme, Josif Grabocka

As a result, our novel DKLM can learn contextualized dataset-specific similarity representations for hyperparameter configurations.

Hyperparameter Optimization Transfer Learning

Deep Metric Learning for Ground Images

no code implementations 3 Sep 2021 Raaghav Radhakrishnan, Jan Fabian Schmid, Randolf Scholz, Lars Schmidt-Thieme

Ground texture based localization methods are potential prospects for low-cost, high-accuracy self-localization solutions for robots.

Image Retrieval Metric Learning +1

Multimodal Meta-Learning for Time Series Regression

no code implementations 5 Aug 2021 Sebastian Pineda Arango, Felix Heinrich, Kiran Madhusudhanan, Lars Schmidt-Thieme

Recent work has shown the efficiency of deep learning models such as Fully Convolutional Networks (FCN) or Recurrent Neural Networks (RNN) to deal with Time Series Regression (TSR) problems.

Meta-Learning regression +1

RP-DQN: An application of Q-Learning to Vehicle Routing Problems

no code implementations 25 Apr 2021 Ahmad Bdeir, Simon Boeder, Tim Dernedde, Kirill Tkachuk, Jonas K. Falkner, Lars Schmidt-Thieme

In this paper we present a new approach to tackle complex routing problems with an improved state representation that utilizes the model complexity better than previous methods.

BIG-bench Machine Learning Q-Learning

Hyperparameter Optimization with Differentiable Metafeatures

no code implementations 7 Feb 2021 Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka

In contrast to existing models, DMFBS i) integrates a differentiable metafeature extractor and ii) is optimized using a novel multi-task loss, linking manifold regularization with a dataset similarity measure learned via an auxiliary dataset identification meta-task, effectively enforcing the response approximation for similar datasets to be similar.

Hyperparameter Optimization

Do We Really Need Deep Learning Models for Time Series Forecasting?

1 code implementation 6 Jan 2021 Shereen Elsayed, Daniela Thyssens, Ahmed Rashed, Hadi Samer Jomaa, Lars Schmidt-Thieme

In this paper, we report the results of prominent deep learning models with respect to a well-known machine learning baseline, a Gradient Boosting Regression Tree (GBRT) model.

regression Time Series Forecasting

Zero-shot Transfer Learning for Gray-box Hyper-parameter Optimization

no code implementations 1 Jan 2021 Hadi Samer Jomaa, Lars Schmidt-Thieme, Josif Grabocka

Zero-shot hyper-parameter optimization refers to the process of selecting hyper-parameter configurations that are expected to perform well for a given dataset upfront, without access to any observations of the losses of the target response.

Transfer Learning

Learning to Solve Vehicle Routing Problems with Time Windows through Joint Attention

1 code implementation 16 Jun 2020 Jonas K. Falkner, Lars Schmidt-Thieme

Many real-world vehicle routing problems involve rich sets of constraints with respect to the capacities of the vehicles, time windows for customers etc.

HIDRA: Head Initialization across Dynamic targets for Robust Architectures

1 code implementation 28 Oct 2019 Rafael Rego Drumond, Lukas Brinkmeyer, Josif Grabocka, Lars Schmidt-Thieme

In this paper, we present HIDRA, a meta-learning approach that enables training and evaluating across tasks with any number of target variables.

Meta-Learning

Chameleon: Learning Model Initializations Across Tasks With Different Schemas

1 code implementation 30 Sep 2019 Lukas Brinkmeyer, Rafael Rego Drumond, Randolf Scholz, Josif Grabocka, Lars Schmidt-Thieme

Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization.

Meta-Learning

Atomic Compression Networks

no code implementations 25 Sep 2019 Jonas Falkner, Josif Grabocka, Lars Schmidt-Thieme

Compressed forms of deep neural networks are essential in deploying large-scale computational models on resource-constrained devices.

Model Compression

Hyp-RL: Hyperparameter Optimization by Reinforcement Learning

1 code implementation 27 Jun 2019 Hadi S. Jomaa, Josif Grabocka, Lars Schmidt-Thieme

More recently, methods have been introduced that build a so-called surrogate model, which predicts the validation loss for a specific hyperparameter setting, model, and dataset, and then sequentially select the next hyperparameter to test based on a heuristic function of the expected value and uncertainty of the surrogate model, called the acquisition function (sequential model-based Bayesian optimization, SMBO).

Hyperparameter Optimization reinforcement-learning

In Hindsight: A Smooth Reward for Steady Exploration

no code implementations 24 Jun 2019 Hadi S. Jomaa, Josif Grabocka, Lars Schmidt-Thieme

In classical Q-learning, the objective is to maximize the sum of discounted rewards by iteratively applying the Bellman equation as an update, in an attempt to estimate the action-value function of the optimal policy.

Atari Games Q-Learning
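The Bellman update described above, in its generic tabular form (a textbook sketch, not the paper's smoothed-reward method; the two-state example and step sizes are illustrative):

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]

# Tiny two-state example: all values start at zero, then one update
# after observing reward 1.0 for action 1 in state 0.
actions = [0, 1]
Q = {(s, a): 0.0 for s in [0, 1] for a in actions}
q_learning_step(Q, s=0, a=1, r=1.0, s_next=1, actions=actions)
```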

Dataset2Vec: Learning Dataset Meta-Features

1 code implementation 27 May 2019 Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka

As a data-driven approach, meta-learning requires meta-features that represent the primary learning tasks or datasets, and these are traditionally estimated as engineered dataset statistics that require expert domain knowledge tailored to every meta-task.

Auxiliary Learning Few-Shot Learning +1
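As a contrast to the learned meta-features of Dataset2Vec, the engineered dataset statistics mentioned above might look like the following sketch (a hypothetical minimal feature set, not the paper's):

```python
import statistics

def simple_meta_features(X):
    """Engineered dataset statistics of the kind Dataset2Vec aims to
    replace with learned meta-features (hypothetical minimal set)."""
    n, d = len(X), len(X[0])
    col_means = [statistics.fmean(row[j] for row in X) for j in range(d)]
    col_stds = [statistics.pstdev([row[j] for row in X]) for j in range(d)]
    return {
        "n_instances": n,
        "n_features": d,
        "mean_of_means": statistics.fmean(col_means),
        "mean_of_stds": statistics.fmean(col_stds),
    }

feats = simple_meta_features([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
```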

Learning Surrogate Losses

no code implementations 24 May 2019 Josif Grabocka, Randolf Scholz, Lars Schmidt-Thieme

Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization.

Bilevel Optimization General Classification

Multi-Label Network Classification via Weighted Personalized Factorizations

no code implementations 25 Feb 2019 Ahmed Rashed, Josif Grabocka, Lars Schmidt-Thieme

It can be formalized as a multi-relational learning task for predicting nodes labels based on their relations within the network.

Classification General Classification +2

NeuralWarp: Time-Series Similarity with Warping Networks

2 code implementations 20 Dec 2018 Josif Grabocka, Lars Schmidt-Thieme

Research on time-series similarity measures has emphasized the need for elastic methods that align the indices of pairs of time series, and a plethora of non-parametric methods have been proposed for the task.

Sentence Similarity Time Series

Channel masking for multivariate time series shapelets

no code implementations 2 Nov 2017 Dripta S. Raychaudhuri, Josif Grabocka, Lars Schmidt-Thieme

Time series shapelets are discriminative sub-sequences and their similarity to time series can be used for time series classification.

General Classification Time Series Classification
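The shapelet-to-series similarity mentioned above is typically the minimum distance between the shapelet and any sliding window of the series. A plain (unmasked) sketch, not the channel-masking method itself:

```python
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and any
    equally long sub-sequence of the series (sliding window)."""
    m = len(shapelet)
    best = math.inf
    for start in range(len(series) - m + 1):
        d = math.sqrt(sum((series[start + i] - shapelet[i]) ** 2
                          for i in range(m)))
        best = min(best, d)
    return best

# A series that contains the shapelet exactly has distance 0.
dist = shapelet_distance([0.0, 1.0, 2.0, 1.0, 0.0], [2.0, 1.0])
```

These minimum distances then serve as features for a downstream classifier.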

Automatic Frankensteining: Creating Complex Ensembles Autonomously

no code implementations SIAM 2017 Martin Wistuba, Nicolas Schilling, Lars Schmidt-Thieme

Automating machine learning by providing techniques that autonomously find the best algorithm, hyperparameter configuration and preprocessing is helpful for both researchers and practitioners.

AutoML BIG-bench Machine Learning

Integrating Distributional and Lexical Information for Semantic Classification of Words using MRMF

no code implementations COLING 2016 Rosa Tsegaye Aga, Lucas Drumond, Christian Wartena, Lars Schmidt-Thieme

Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.

General Classification Semantic Similarity +1

Bank Card Usage Prediction Exploiting Geolocation Information

no code implementations 13 Oct 2016 Martin Wistuba, Nghia Duong-Trung, Nicolas Schilling, Lars Schmidt-Thieme

We describe the solution of team ISMLL for the ECML-PKDD 2016 Discovery Challenge on Bank Card Usage for both tasks.

General Classification regression

Multi-Relational Learning at Scale with ADMM

no code implementations 3 Apr 2016 Lucas Drumond, Ernesto Diaz-Aviles, Lars Schmidt-Thieme

Learning from multi-relational data which contains noise, ambiguities, or duplicate entities is essential to a wide range of applications such as statistical inference based on Web Linked Data, recommender systems, computational biology, and natural language processing.

Recommendation Systems Relational Reasoning

Optimal Time-Series Motifs

no code implementations 3 May 2015 Josif Grabocka, Nicolas Schilling, Lars Schmidt-Thieme

We demonstrate that searching is non-optimal since the domain of motifs is restricted, and instead we propose a principled optimization approach able to find optimal motifs.

Time Series

Ultra-Fast Shapelets for Time Series Classification

no code implementations 17 Mar 2015 Martin Wistuba, Josif Grabocka, Lars Schmidt-Thieme

A method for using shapelets with multivariate time series is proposed, and Ultra-Fast Shapelets proves successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains.

Classification General Classification +1

Scalable Discovery of Time-Series Shapelets

no code implementations 11 Mar 2015 Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme

Time-series classification is an important problem for the data mining community due to the wide range of application domains involving time-series data.

General Classification Online Clustering +1

Invariant Factorization Of Time-Series

no code implementations 23 Dec 2013 Josif Grabocka, Lars Schmidt-Thieme

Time-series classification is an important domain of machine learning and a plethora of methods have been developed for the task.

Time Series Classification

Time-Series Classification Through Histograms of Symbolic Polynomials

no code implementations 24 Jul 2013 Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme

The coefficients of the polynomial functions are converted to symbolic words via equivolume discretizations of the coefficients' distributions.

Classification Econometrics +2

BPR: Bayesian Personalized Ranking from Implicit Feedback

21 code implementations 9 May 2012 Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, Lars Schmidt-Thieme

In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem.
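The BPR-Opt criterion maximizes ln sigma(x_ui - x_uj) over pairs where user u interacted with item i but not item j. One stochastic gradient step for a matrix-factorization scorer might look like the sketch below (learning rate, regularization, and the toy factors are illustrative, not from the paper):

```python
import math

def bpr_sgd_step(U, V, u, i, j, lr=0.05, reg=0.01):
    """One SGD ascent step on ln sigma(x_ui - x_uj) with L2 regularization,
    for user factors U and item factors V (lists of lists)."""
    x_uij = sum(U[u][k] * (V[i][k] - V[j][k]) for k in range(len(U[u])))
    g = 1.0 / (1.0 + math.exp(x_uij))  # sigma(-x), gradient of ln sigma(x)
    for k in range(len(U[u])):
        du = V[i][k] - V[j][k]         # d x_uij / d U[u][k]
        di, dj = U[u][k], -U[u][k]     # d x_uij / d V[i][k], d V[j][k]
        U[u][k] += lr * (g * du - reg * U[u][k])
        V[i][k] += lr * (g * di - reg * V[i][k])
        V[j][k] += lr * (g * dj - reg * V[j][k])

# Toy example: one user, two items, 2 latent factors.
U = [[0.1, 0.2]]
V = [[0.3, 0.1], [0.2, 0.4]]
bpr_sgd_step(U, V, u=0, i=0, j=1)
```

Each step nudges the score of the observed item above the unobserved one, which directly optimizes the pairwise ranking rather than pointwise prediction error.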
