Search Results for author: Minyoung Kim

Found 27 papers, 9 papers with code

BayesDLL: Bayesian Deep Learning Library

2 code implementations 22 Sep 2023 Minyoung Kim, Timothy Hospedales

We release a new Bayesian neural network library for PyTorch for large-scale deep networks.

Bayesian Inference · Variational Inference

Vulnerability Clustering and other Machine Learning Applications of Semantic Vulnerability Embeddings

no code implementations 23 Aug 2023 Mark-Oliver Stehr, Minyoung Kim

Cyber-security vulnerabilities are usually published in the form of short natural language descriptions (e.g., MITRE's CVE list) that over time are further manually enriched with labels such as those defined by the Common Vulnerability Scoring System (CVSS).


A Hierarchical Bayesian Model for Deep Few-Shot Meta Learning

1 code implementation 16 Jun 2023 Minyoung Kim, Timothy Hospedales

We propose a novel hierarchical Bayesian model for learning with a large (possibly infinite) number of tasks/episodes, which is well suited to the few-shot meta-learning problem.

Bayesian Inference · Meta-Learning +1

FedHB: Hierarchical Bayesian Federated Learning

no code implementations 8 May 2023 Minyoung Kim, Timothy Hospedales

We propose a novel hierarchical Bayesian approach to Federated Learning (FL), in which our model describes the generative process of clients' local data via hierarchical Bayesian modeling: the random variables of the clients' local models are governed by a higher-level global variate.

Avg · Federated Learning +1

Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach

no code implementations 23 Feb 2023 Minyoung Kim, Da Li, Timothy Hospedales

We tackle the domain generalisation (DG) problem by posing it as a domain adaptation (DA) task where we adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain, thereby improving the model's robustness.

Domain Adaptation

Fisher SAM: Information Geometry and Sharpness Aware Minimisation

no code implementations 10 Jun 2022 Minyoung Kim, Da Li, Shell Xu Hu, Timothy M. Hospedales

The recent sharpness-aware minimisation (SAM) method is known to find flat minima, which are beneficial for better generalisation and improved robustness.
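SAM's two-step update can be sketched in a few lines: perturb the weights along the normalised gradient (an approximate worst-case point within a small ball), then descend using the gradient taken there. A minimal illustration on a hypothetical quadratic objective; the matrix `A`, radius `rho`, and learning rate are made-up values, not from the paper:

```python
import numpy as np

# Hypothetical toy objective: L(w) = 0.5 * w^T A w, with gradient A w.
A = np.diag([10.0, 0.1])          # one sharp and one flat direction

def loss(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

def sam_step(w, rho=0.05, lr=0.01):
    """One SAM update: move to the (approximate) worst point within an
    L2 ball of radius rho, then descend using the gradient taken there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent direction
    g_sharp = grad(w + eps)                       # gradient at perturbed point
    return w - lr * g_sharp

w = np.array([1.0, 1.0])
for _ in range(200):
    w = sam_step(w)
```

Fisher SAM replaces the Euclidean ball above with one induced by the information geometry of the model; this sketch shows only the vanilla SAM mechanics.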

Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference

1 code implementation CVPR 2022 Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales

To this end, we explore few-shot learning from the perspective of neural network architecture, as well as a three-stage pipeline of network updates under different data supplies, where unsupervised external data is considered for pre-training, base categories are used to simulate few-shot tasks for meta-training, and the scarcely labelled data of a novel task is taken for fine-tuning.

Few-Shot Image Classification · Few-Shot Learning +1

Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders

no code implementations CVPR 2022 Minyoung Kim

The variational autoencoder (VAE) is a very successful generative model whose key element is the so-called amortized inference network, which can perform test-time inference using a single feed-forward pass.

Gaussian Processes · Test
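As a rough sketch of what an amortized inference network does (a toy linear encoder with made-up dimensions, not the paper's model): one feed-forward pass maps an input x to the parameters of the Gaussian posterior q(z|x), and a latent sample is drawn via the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical amortized inference network: a single linear layer maps an
# input x to the mean and log-variance of the Gaussian posterior q(z|x).
d_x, d_z = 8, 2
W_mu, b_mu = rng.normal(size=(d_z, d_x)) * 0.1, np.zeros(d_z)
W_lv, b_lv = rng.normal(size=(d_z, d_x)) * 0.1, np.zeros(d_z)

def encode(x):
    """One feed-forward pass: x -> (mu, logvar) of q(z|x)."""
    return W_mu @ x + b_mu, W_lv @ x + b_lv

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.normal(size=d_x)
mu, logvar = encode(x)
z = sample_z(mu, logvar)   # test-time inference needs no per-sample optimization
```

The amortization gap the paper targets is precisely the error this fixed encoder makes relative to per-sample optimization of q(z|x).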

SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval

no code implementations 10 Nov 2021 Minyoung Kim

Specifically, we aim to predict class labels of the data instances in each modality, and assign those labels to the corresponding instances in the other modality (i.e., swapping the pseudo-labels).

Contrastive Learning · Cross-Modal Retrieval +4

Gaussian Process Meta Few-shot Classifier Learning via Linear Discriminant Laplace Approximation

no code implementations 9 Nov 2021 Minyoung Kim, Timothy Hospedales

In essence, the MAP solution is approximated by the LDA estimate, but to take the GP prior into account, we adopt the prior-norm adjustment to estimate LDA's shared variance parameters, which ensures that the adjusted estimate is consistent with the GP prior.


Differentiable Expectation-Maximization for Set Representation Learning

no code implementations ICLR 2022 Minyoung Kim

The elements of an input set are treated as i.i.d. samples from a mixture distribution, and we define our set-embedding feed-forward network as the maximum a posteriori (MAP) estimate of the mixture, which is approximately attained by a few Expectation-Maximization (EM) steps.

Representation Learning
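The idea of running a few EM steps as a feed-forward set encoder can be illustrated with a plain isotropic Gaussian mixture (a simplified numpy sketch with unit variances and made-up sizes; the paper's model additionally backpropagates through these steps):

```python
import numpy as np

rng = np.random.default_rng(0)

def em_set_embedding(X, k=2, steps=3):
    """A few EM steps for a unit-variance Gaussian mixture fit to the set X
    (treated as i.i.d. samples); the fitted means serve as a fixed-size
    embedding of the variable-sized set."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]        # initialize means from the set
    pi = np.full(k, 1.0 / k)
    for _ in range(steps):
        # E-step: responsibilities under unit-variance Gaussians
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logits = np.log(pi) - 0.5 * d2
        r = np.exp(logits - logits.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update mixing weights and means
        pi = r.mean(0)
        mu = (r.T @ X) / r.sum(0)[:, None]
    return mu.ravel()                              # fixed-size set embedding

X = rng.normal(size=(10, 3))
emb = em_set_embedding(X)
```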

Synthesizing Human Faces using Latent Space Factorization and Local Weights (Extended Version)

no code implementations 19 Jul 2021 Minyoung Kim, Young J. Kim

First, we factorize the latent space of the whole face into subspaces corresponding to different parts of the face.

On PyTorch Implementation of Density Estimators for von Mises-Fisher and Its Mixture

1 code implementation 10 Feb 2021 Minyoung Kim

The von Mises-Fisher (vMF) distribution is a well-known density model for directional random variables.

Clustering · Image Clustering
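For intuition, the vMF log-density on the unit sphere in R^3 has a closed-form normalizer; the helper below is an illustrative sketch, not the library's API (for general dimension p the normalizer involves the modified Bessel function I_{p/2-1}, e.g. via scipy.special.ive):

```python
import numpy as np

def log_vmf3_pdf(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere
    in R^3: f(x) = C_3(kappa) * exp(kappa * mu^T x), with the closed-form
    normalizer C_3(kappa) = kappa / (4 * pi * sinh(kappa))."""
    # log(sinh(kappa)) computed stably for large kappa
    log_sinh = kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)
    log_c = np.log(kappa) - np.log(4.0 * np.pi) - log_sinh
    return log_c + kappa * np.dot(mu, x)

mu = np.array([0.0, 0.0, 1.0])
x = np.array([0.0, 0.0, 1.0])
val = log_vmf3_pdf(x, mu, 2.0)   # density is highest at the mean direction
```

As kappa tends to 0 the density approaches the uniform density 1/(4*pi) on the sphere, which is a handy sanity check.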

Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach

no code implementations 5 Feb 2021 Minyoung Kim, Vladimir Pavlovic

In this paper, we address the problem in a completely different way by considering a random inference model, where we model the mean and variance functions of the variational posterior as random Gaussian processes (GPs).

Gaussian Processes · Test

Learning Disentangled Latent Factors from Paired Data in Cross-Modal Retrieval: An Implicit Identifiable VAE Approach

no code implementations 1 Dec 2020 Minyoung Kim, Ricardo Guerrero, Vladimir Pavlovic

We deal with the problem of learning the underlying disentangled latent factors that are shared between the paired bi-modal data in cross-modal retrieval.

Cross-Modal Retrieval · Retrieval

Recursive Inference for Variational Autoencoders

no code implementations NeurIPS 2020 Minyoung Kim, Vladimir Pavlovic

Using the functional gradient approach, we devise an intuitive learning criterion for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent from the current mixture distribution as possible, thus increasing representational diversity.

Variational Inference

Ordinal-Content VAE: Isolating Ordinal-Valued Content Factors in Deep Latent Variable Models

no code implementations 7 Sep 2020 Minyoung Kim, Vladimir Pavlovic

In deep representation learning, it is often desirable to isolate a particular factor (termed content) from the other factors (referred to as style).

Task-Discriminative Domain Alignment for Unsupervised Domain Adaptation

no code implementations 26 Sep 2019 Behnam Gholami, Pritish Sahu, Minyoung Kim, Vladimir Pavlovic

In this paper, we improve the performance of DA by introducing a discriminative discrepancy measure which takes advantage of auxiliary information available in the source and the target domains to better align the source and target distributions.

Clustering · Unsupervised Domain Adaptation

Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement

1 code implementation ICCV 2019 Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic

We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in data.


Probabilistic Approximate Logic and its Implementation in the Logical Imagination Engine

no code implementations 25 Jul 2019 Mark-Oliver Stehr, Minyoung Kim, Carolyn L. Talcott, Merrill Knapp, Akos Vertes

In spite of the rapidly increasing number of applications of machine learning across domains, a principled and systematic approach to incorporating domain knowledge into the engineering process is still lacking; ad hoc solutions that are difficult to validate remain the norm in practice, which is a growing concern, not only in mission-critical applications.

BIG-bench Machine Learning

Efficient Deep Gaussian Process Models for Variable-Sized Input

1 code implementation 16 May 2019 Issam H. Laradji, Mark Schmidt, Vladimir Pavlovic, Minyoung Kim

The key advantage is that the combination of GP and DRF leads to a tractable model that can both handle a variable-sized input as well as learn deep long-range dependency structures of the data.

Gaussian Processes

Relevance Factor VAE: Learning and Identifying Disentangled Factors

1 code implementation 5 Feb 2019 Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic

We propose a novel VAE-based deep auto-encoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality.


Online Multi-Object Tracking with Dual Matching Attention Networks

1 code implementation ECCV 2018 Ji Zhu, Hua Yang, Nian Liu, Minyoung Kim, Wenjun Zhang, Ming-Hsuan Yang

In this paper, we propose an online Multi-Object Tracking (MOT) approach which integrates the merits of single object tracking and data association methods in a unified framework to handle noisy detections and frequent interactions between targets.

Multi-Object Tracking · Online Multi-Object Tracking

Markov Modulated Gaussian Cox Processes for Semi-Stationary Intensity Modeling of Events Data

no code implementations ICML 2018 Minyoung Kim

The Cox process is a flexible event model that can account for uncertainty of the intensity function in the Poisson process.

Variational Inference
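As background (not the paper's Markov-modulated construction), events from any inhomogeneous Poisson intensity can be simulated by Lewis-Shedler thinning, which only requires an upper bound on the intensity; the intensity function and bound below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inhomogeneous_poisson(intensity, t_max, lam_max):
    """Lewis-Shedler thinning: draw candidate events from a homogeneous
    Poisson process with rate lam_max, then keep each candidate at time t
    with probability intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)         # next candidate arrival
        if t > t_max:
            break
        if rng.uniform() < intensity(t) / lam_max:  # thinning step
            events.append(t)
    return np.array(events)

# Example: a sinusoidally modulated intensity bounded above by lam_max = 5
events = sample_inhomogeneous_poisson(lambda t: 3.0 + 2.0 * np.sin(t), 20.0, 5.0)
```

A Cox process adds one more layer: the intensity function itself is random, so simulation first draws an intensity and then thins against it as above.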

Similarity Mapping with Enhanced Siamese Network for Multi-Object Tracking

no code implementations 28 Sep 2016 Minyoung Kim, Stefano Alletto, Luca Rigazio

Multi-object tracking has recently become an important area of computer vision, especially for Advanced Driver Assistance Systems (ADAS).

Multi-Object Tracking

Deep Clustered Convolutional Kernels

no code implementations 6 Mar 2015 Minyoung Kim, Luca Rigazio

Deep neural networks have recently achieved state-of-the-art performance thanks to new training algorithms for rapid parameter estimation and new regularization methods to reduce overfitting.

