no code implementations • 6 Nov 2024 • Xinnuo Xu, Minyoung Kim, Royson Lee, Brais Martinez, Timothy Hospedales
Data point selection (DPS) is becoming a critical topic in deep learning due to the ease of acquiring uncurated training data compared to the difficulty of obtaining curated or processed data.
no code implementations • 14 Oct 2024 • Minyoung Kim, Timothy M. Hospedales
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning, including hyperparameter optimization, loss function learning, few-shot learning, invariance learning and more.
2 code implementations • 22 Sep 2023 • Minyoung Kim, Timothy Hospedales
We release a new Bayesian neural network library for PyTorch for large-scale deep networks.
no code implementations • 23 Aug 2023 • Mark-Oliver Stehr, Minyoung Kim
Cyber-security vulnerabilities are usually published in the form of short natural-language descriptions (e.g., in MITRE's CVE list) that are, over time, manually enriched with labels such as those defined by the Common Vulnerability Scoring System (CVSS).
1 code implementation • 16 Jun 2023 • Minyoung Kim, Timothy Hospedales
We propose a novel hierarchical Bayesian model for learning with a large (possibly infinite) number of tasks/episodes, which is well suited to the few-shot meta-learning problem.
no code implementations • 8 May 2023 • Minyoung Kim, Timothy Hospedales
We propose a novel hierarchical Bayesian approach to Federated Learning (FL), in which our model describes the generative process of clients' local data hierarchically: each client's local model is a random variable governed by a higher-level global variable.
no code implementations • 23 Feb 2023 • Minyoung Kim, Da Li, Timothy Hospedales
We tackle the domain generalisation (DG) problem by posing it as a domain adaptation (DA) task where we adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain, thereby improving the model's robustness.
no code implementations • 10 Jun 2022 • Minyoung Kim, Da Li, Shell Xu Hu, Timothy M. Hospedales
The recent sharpness-aware minimisation (SAM) method is known to find flat minima, which are beneficial for better generalisation and improved robustness.
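The basic SAM update (ascend to the worst-case point within a small L2 ball, then descend using the gradient computed there) can be sketched in a few lines. The quadratic toy loss and the `sam_step` helper below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimisation (SAM) step: perturb the weights to the
    approximate worst-case point within an L2 ball of radius rho, then apply
    the gradient computed at that point to the original weights."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)             # descend with the perturbed gradient

# Toy quadratic loss L(w) = ||w||^2, gradient 2w (illustration only)
grad_fn = lambda w: 2.0 * w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
```

In a deep-learning setting the two `grad_fn` calls correspond to two forward/backward passes per update, which is the main overhead of SAM over plain SGD.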
1 code implementation • CVPR 2022 • Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales
To this end, we explore few-shot learning from the perspective of neural network architecture, as well as a three-stage pipeline of network updates under different data supplies: unsupervised external data is considered for pre-training, base categories are used to simulate few-shot tasks for meta-training, and the scarce labelled data of a novel task are used for fine-tuning.
Ranked #2 on Few-Shot Image Classification on Meta-Dataset
no code implementations • CVPR 2022 • Minyoung Kim
The variational autoencoder (VAE) is a highly successful generative model whose key element is the so-called amortized inference network, which can perform test-time inference in a single feed-forward pass.
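Amortized inference means one network maps each input directly to its posterior parameters, from which a latent sample is drawn via the reparameterization trick. The toy linear "encoder" below is a hypothetical stand-in for a real deep encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear encoder weights (hypothetical; a real VAE uses a deep network)
W_mu = rng.normal(size=(2, 4))
W_logvar = rng.normal(size=(2, 4))

def amortized_inference(x):
    """Single feed-forward pass: map x to posterior parameters (mean and
    log-variance), then draw a latent sample via the reparameterization trick."""
    mu = W_mu @ x
    logvar = W_logvar @ x
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps  # z ~ N(mu, diag(exp(logvar)))
    return z, mu, logvar

x = rng.normal(size=4)
z, mu, logvar = amortized_inference(x)
```

No per-instance optimisation is needed at test time; the cost of inference is a single forward pass, which is the amortization being referred to.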
no code implementations • 10 Nov 2021 • Minyoung Kim
Specifically, we aim to predict class labels of the data instances in each modality and assign those labels to the corresponding instances in the other modality (i.e., swapping the pseudo-labels).
no code implementations • 9 Nov 2021 • Minyoung Kim, Timothy Hospedales
In essence, the MAP solution is approximated by the LDA estimate, but to take the GP prior into account, we adopt the prior-norm adjustment to estimate LDA's shared variance parameters, which ensures that the adjusted estimate is consistent with the GP prior.
no code implementations • ICLR 2022 • Minyoung Kim
The elements of an input set are considered as i.i.d. samples from a mixture distribution, and we define our set-embedding feed-forward network as the maximum a posteriori (MAP) estimate of the mixture, which is approximately attained by a few Expectation-Maximization (EM) steps.
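The idea of embedding a set as mixture parameters obtained from a few EM steps can be sketched as follows; the 2-component spherical Gaussian mixture, deterministic initialisation, and fixed variance are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def em_set_embed(X, steps=5, sigma2=0.5):
    """Embed a set X (n x d) as the means of a 2-component spherical Gaussian
    mixture after a few EM steps (illustrative sketch)."""
    mu = np.stack([X.min(0), X.max(0)])  # simple deterministic initialisation
    for _ in range(steps):
        # E-step: responsibilities under spherical Gaussians with variance sigma2
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        r = np.exp(-d2 / (2 * sigma2))
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate the component means
        mu = (r[:, :, None] * X[:, None, :]).sum(0) / r.sum(0)[:, None]
    return mu

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.3, (20, 2)),
                    rng.normal(5.0, 0.3, (20, 2))])
mu = em_set_embed(X)  # one mean lands near each cluster centre
```

Because each EM step is differentiable in X, the whole embedding can sit inside a feed-forward network and be trained end-to-end.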
no code implementations • 19 Jul 2021 • Minyoung Kim, Young J. Kim
First, we factorize the latent space of the whole face into subspaces corresponding to different parts of the face.
1 code implementation • 10 Feb 2021 • Minyoung Kim
The von Mises-Fisher (vMF) distribution is a well-known density model for directional random variables.
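On the unit circle the vMF reduces to the von Mises density exp(κ·cos(θ−μ)) / (2π·I₀(κ)), where I₀ is the modified Bessel function of order 0. A minimal sketch, using the Bessel power series rather than a library call:

```python
import math

def bessel_i0(x, terms=30):
    """Modified Bessel function I_0 via its power series
    (sufficient terms for moderate x)."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def vmf_pdf(theta, mu=0.0, kappa=2.0):
    """von Mises density (vMF on the circle) at angle theta:
    exp(kappa * cos(theta - mu)) / (2 * pi * I_0(kappa))."""
    return math.exp(kappa * math.cos(theta - mu)) / (2.0 * math.pi * bessel_i0(kappa))
```

The concentration κ plays the role of an inverse variance: large κ peaks the density sharply around the mean direction μ, while κ = 0 recovers the uniform distribution on the circle.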
no code implementations • 5 Feb 2021 • Minyoung Kim, Vladimir Pavlovic
In this paper, we address the problem in a completely different way by considering a random inference model, where we model the mean and variance functions of the variational posterior as random Gaussian processes (GPs).
no code implementations • 1 Dec 2020 • Minyoung Kim, Ricardo Guerrero, Vladimir Pavlovic
We deal with the problem of learning the underlying disentangled latent factors that are shared between the paired bi-modal data in cross-modal retrieval.
no code implementations • NeurIPS 2020 • Minyoung Kim, Vladimir Pavlovic
Using the functional gradient approach, we devise an intuitive learning criterion for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent from the current mixture distribution as possible, thus increasing representational diversity.
no code implementations • 7 Sep 2020 • Minyoung Kim, Vladimir Pavlovic
In deep representational learning, it is often desired to isolate a particular factor (termed {\em content}) from other factors (referred to as {\em style}).
no code implementations • 26 Sep 2019 • Behnam Gholami, Pritish Sahu, Minyoung Kim, Vladimir Pavlovic
In this paper, we improve the performance of DA by introducing a discriminative discrepancy measure which takes advantage of auxiliary information available in the source and the target domains to better align the source and target distributions.
1 code implementation • ICCV 2019 • Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in data.
no code implementations • 25 Jul 2019 • Mark-Oliver Stehr, Minyoung Kim, Carolyn L. Talcott, Merrill Knapp, Akos Vertes
In spite of the rapidly increasing number of applications of machine learning across domains, a principled and systematic approach to incorporating domain knowledge into the engineering process is still lacking; ad hoc solutions that are difficult to validate remain the norm in practice, which is of growing concern, not least in mission-critical applications.
1 code implementation • 16 May 2019 • Issam H. Laradji, Mark Schmidt, Vladimir Pavlovic, Minyoung Kim
The key advantage is that the combination of GP and DRF leads to a tractable model that can both handle a variable-sized input as well as learn deep long-range dependency structures of the data.
1 code implementation • CVPR 2019 • Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic
The latter can be achieved by minimizing the maximum discrepancy of predictors (classifiers).
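Measuring how much two classifiers disagree on the same inputs is the core quantity here; a minimal sketch, assuming a mean absolute difference over class-probability outputs (the concrete discrepancy used in the paper may differ):

```python
import numpy as np

def classifier_discrepancy(p1, p2):
    """Mean absolute difference between two classifiers' class-probability
    outputs over a batch; maximising this over the classifiers and then
    minimising it over the feature extractor drives domain alignment."""
    return np.abs(p1 - p2).mean()

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # classifier 1 softmax outputs
p2 = np.array([[0.6, 0.4], [0.5, 0.5]])  # classifier 2 softmax outputs
d = classifier_discrepancy(p1, p2)
```

Target samples where the two classifiers disagree lie far from the source support, so shrinking the worst-case disagreement pulls the target features toward regions where the source-trained predictors are consistent.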
Ranked #3 on Synthetic-to-Real Translation on Syn2Real-C
1 code implementation • 5 Feb 2019 • Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
We propose a novel VAE-based deep auto-encoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality.
1 code implementation • ECCV 2018 • Ji Zhu, Hua Yang, Nian Liu, Minyoung Kim, Wenjun Zhang, Ming-Hsuan Yang
In this paper, we propose an online Multi-Object Tracking (MOT) approach which integrates the merits of single object tracking and data association methods in a unified framework to handle noisy detections and frequent interactions between targets.
Ranked #5 on Online Multi-Object Tracking on MOT16
no code implementations • ICML 2018 • Minyoung Kim
The Cox process is a flexible event model that accounts for uncertainty in the intensity function of the Poisson process.
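Conditioned on one realisation of the (random) intensity, events from a Cox process can be simulated by thinning a homogeneous Poisson process (the Lewis-Shedler method). The sinusoidal intensity below is a hypothetical stand-in, not the paper's model:

```python
import math
import random

def sample_cox_path(rate_fn, rate_max, t_end, rng):
    """Given one realisation of the intensity, sample event times on
    [0, t_end] by thinning a homogeneous Poisson process with rate rate_max."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)            # candidate from rate_max process
        if t > t_end:
            return events
        if rng.random() < rate_fn(t) / rate_max:  # keep with prob lambda(t)/rate_max
            events.append(t)

rng = random.Random(0)
rate = lambda t: 2.5 * (1.0 + math.sin(t))  # one sampled intensity, bounded by 5.0
events = sample_cox_path(rate, 5.0, 100.0, rng)
```

Thinning is exact as long as `rate_max` upper-bounds the intensity everywhere; in a full Cox model one would additionally average over draws of the intensity function itself.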
no code implementations • 28 Sep 2016 • Minyoung Kim, Stefano Alletto, Luca Rigazio
Multi-object tracking has recently become an important area of computer vision, especially for Advanced Driver Assistance Systems (ADAS).
no code implementations • 6 Mar 2015 • Minyoung Kim, Luca Rigazio
Deep neural networks have recently achieved state-of-the-art performance thanks to new training algorithms for rapid parameter estimation and new regularization methods to reduce overfitting.