Search Results for author: Vladimir Golkov

Found 15 papers, 6 papers with code

Interactions Across Blocks in Post-Training Quantization of Large Language Models

no code implementations · 6 Nov 2024 · Khasmamad Shabanovi, Lukas Wiest, Vladimir Golkov, Daniel Cremers, Thomas Pfeil

Typically, individual substructures, such as layers or blocks of layers, are quantized with the objective of minimizing quantization errors in their pre-activations by fine-tuning the corresponding weights.

Quantization
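The abstract above describes the standard block-wise setup: each substructure is quantized by choosing parameters that minimize the error in its pre-activations, not just in the weights themselves. A minimal NumPy sketch of that idea (illustrative only, not the paper's method — the scale grid-search and all names here are assumptions):

```python
import numpy as np

def quantize(w, scale, bits=8):
    # Symmetric uniform quantization: round to integer grid, then rescale.
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def calibrate_scale(w, x, bits=8):
    """Grid-search a scale that minimizes pre-activation MSE || W x - Q(W) x ||^2,
    rather than the weight-space error || W - Q(W) ||^2."""
    base = np.abs(w).max() / (2 ** (bits - 1) - 1)  # naive max-abs scale
    best_scale, best_err = base, np.inf
    for f in list(np.linspace(0.5, 1.2, 50)) + [1.0]:
        s = base * f
        err = np.mean((w @ x - quantize(w, s, bits) @ x) ** 2)
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale, best_err

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))      # toy "block" weights
X = rng.normal(size=(32, 64))      # calibration activations
naive = np.abs(W).max() / 127
s, err = calibrate_scale(W, X)
naive_err = np.mean((W @ X - quantize(W, naive) @ X) ** 2)
```

Calibrating against pre-activations never does worse than the naive max-abs scale on the calibration data, since that scale is in the search grid.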

Scale-Equivariant Deep Learning for 3D Data

1 code implementation · 12 Apr 2023 · Thomas Wimmer, Vladimir Golkov, Hoai Nam Dang, Moritz Zaiss, Andreas Maier, Daniel Cremers

The ability of convolutional neural networks (CNNs) to recognize objects regardless of their position in the image is due to the translation-equivariance of the convolutional operation.

Deep Learning · Image Segmentation +3
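The translation-equivariance the abstract refers to can be checked directly: convolving a shifted signal gives the shifted convolution result. A small sketch with circular 1-D convolution (the periodic boundary makes the property exact):

```python
import numpy as np

def circ_conv(x, k):
    # Circular 1-D convolution via the FFT (pointwise product in frequency domain).
    n = len(x)
    K = np.fft.fft(np.concatenate([k, np.zeros(n - len(k))]))
    return np.real(np.fft.ifft(np.fft.fft(x) * K))

x = np.random.default_rng(1).normal(size=32)
k = np.array([1.0, -2.0, 1.0])   # a simple edge-detecting kernel
shift = 5

# Equivariance: shifting the input then convolving ...
lhs = circ_conv(np.roll(x, shift), k)
# ... equals convolving then shifting the output.
rhs = np.roll(circ_conv(x, k), shift)
```

The cited paper extends this idea from translations to scalings of 3D data; the principle is the same, with a different transformation group.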

Scene Graph Generation for Better Image Captioning?

no code implementations · 23 Sep 2021 · Maximilian Mozes, Martin Schmitt, Vladimir Golkov, Hinrich Schütze, Daniel Cremers

We investigate the incorporation of visual relationships into the task of supervised image caption generation by proposing a model that leverages detected objects and auto-generated visual relationships to describe images in natural language.

Caption Generation · Graph Generation +2

Learning to Evolve

1 code implementation · 8 May 2019 · Jan Schuchardt, Vladimir Golkov, Daniel Cremers

Here we show that learning to evolve, i.e., learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness.

Deep Reinforcement Learning · Evolutionary Algorithms +2
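The baseline the abstract contrasts against is a plain evolutionary loop in which mutation and recombination act at random. A minimal sketch of that baseline (illustrative names and hyperparameters; the paper's contribution is replacing the random operators with learned ones):

```python
import random

def evolve(fitness, pop_size=20, genome_len=16, gens=30, p_mut=0.05, seed=0):
    """Plain evolutionary loop: truncation selection, one-point crossover,
    random bit-flip mutation. Returns the best genome found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # random one-point recombination
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # random mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(sum)   # one-max toy problem: fitness = number of ones
```

In the paper's framing, a reinforcement-learning agent would choose the mutation and recombination actions instead of `rng`, so that fitness increases faster per generation.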

q-Space Novelty Detection with Variational Autoencoders

1 code implementation · 8 Jun 2018 · Aleksei Vasilev, Vladimir Golkov, Marc Meissner, Ilona Lipp, Eleonora Sgarlata, Valentina Tomassini, Derek K. Jones, Daniel Cremers

Since abnormal samples are not used during training, we define novelty metrics based on the (partially complementary) assumptions that the VAE is less capable of reconstructing abnormal samples well; that abnormal samples more strongly violate the VAE regularizer; and that abnormal samples differ from normal samples not only in input-feature space, but also in the VAE latent space and VAE output.

Novelty Detection
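The abstract lists three (partially complementary) sources of novelty signal: reconstruction error, violation of the VAE regularizer, and distance in latent space. A toy sketch of such metrics, using linear stand-ins for a trained VAE encoder/decoder (all weights and names here are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 8)) * 0.1   # toy encoder: latent_dim x input_dim
W_dec = W_enc.T                          # toy decoder with tied weights

def encode(x):
    # Stand-in for the mean of q(z|x) from a trained VAE encoder.
    return W_enc @ x

def decode(z):
    return W_dec @ z

def novelty_scores(x):
    z = encode(x)
    x_hat = decode(z)
    recon_err = np.sum((x - x_hat) ** 2)   # 1) reconstruction-based score
    kl = 0.5 * np.sum(z ** 2)              # 2) regularizer-based score (unit prior, fixed variance)
    latent_dist = np.linalg.norm(z)        # 3) latent-space distance score
    return recon_err, kl, latent_dist

scores = novelty_scores(rng.normal(size=8))
```

In practice each score would be thresholded (or combined) against values observed on held-out normal data; samples scoring high under any metric are flagged as novel.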

Clustering with Deep Learning: Taxonomy and New Methods

2 code implementations · 23 Jan 2018 · Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, Maximilian Strobel, Daniel Cremers

In this paper, we propose a systematic taxonomy of clustering methods that utilize deep neural networks.

Clustering · Deep Learning

Regularization for Deep Learning: A Taxonomy

no code implementations · ICLR 2018 · Jan Kukačka, Vladimir Golkov, Daniel Cremers

Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other.

Deep Learning

3D Deep Learning for Biological Function Prediction from Physical Fields

no code implementations · 13 Apr 2017 · Vladimir Golkov, Marcin J. Skwark, Atanas Mirchev, Georgi Dikov, Alexander R. Geanes, Jeffrey Mendenhall, Jens Meiler, Daniel Cremers

In this paper, we show that deep learning can predict biological function of molecules directly from their raw 3D approximated electron density and electrostatic potential fields.

Deep Learning

Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images

no code implementations · NeurIPS 2016 · Vladimir Golkov, Marcin J. Skwark, Antonij Golkov, Alexey Dosovitskiy, Thomas Brox, Jens Meiler, Daniel Cremers

A contact map is a compact representation of the three-dimensional structure of a protein via the pairwise contacts between the amino acids constituting the protein.
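The contact-map representation described above can be computed directly from per-residue 3-D coordinates by thresholding pairwise distances. A minimal sketch (the 8 Å threshold on C-alpha distances is a common convention, assumed here for illustration):

```python
import numpy as np

def contact_map(coords, threshold=8.0):
    """Binary contact map from per-residue 3-D coordinates (e.g. C-alpha atoms):
    entry (i, j) is 1 iff residues i and j are within `threshold` angstroms."""
    diffs = coords[:, None, :] - coords[None, :, :]   # pairwise displacement vectors
    dists = np.linalg.norm(diffs, axis=-1)            # pairwise Euclidean distances
    return (dists < threshold).astype(int)

# Three toy residues on a line: the first two are close, the third is far away.
coords = np.array([[0.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0],
                   [20.0, 0.0, 0.0]])
cm = contact_map(coords)
```

The resulting matrix is symmetric with ones on the diagonal; the cited paper predicts such maps from co-evolution statistics rather than computing them from known structures.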
