no code implementations • NeurIPS 2009 • Jong K. Kim, Seungjin Choi
Most existing methods for DNA motif discovery consider only a single set of sequences to find an over-represented motif.
no code implementations • 16 Jan 2014 • Sunho Park, TaeHyun Hwang, Seungjin Choi
Multiclass problems are often decomposed into multiple binary problems that are solved by individual binary classifiers whose results are integrated into a final answer.
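For context, here is a minimal sketch of the standard one-vs-rest decomposition this abstract refers to, using scikit-learn; the paper's own scheme for integrating the binary results into a final answer is not reproduced here.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
# One binary logistic-regression classifier per class; results are
# integrated by picking the class with the highest decision score.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:5]))
```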
no code implementations • 29 Jan 2015 • Juho Lee, Seungjin Choi
Bayesian hierarchical clustering (BHC) is an agglomerative clustering method, where a probabilistic model is defined and its marginal likelihoods are evaluated to decide which clusters to merge.
no code implementations • CVPR 2015 • Saehoon Kim, Seungjin Choi
In this paper we analyze a bilinear random projection method where feature matrices are transformed to binary codes by two smaller random projection matrices.
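A minimal sketch of the bilinear transform the abstract describes, sign(U^T X V), with hypothetical dimensions; the paper analyzes the properties of such codes rather than any particular setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy = 128, 64    # feature matrix size (hypothetical)
cx, cy = 16, 8      # code size: cx * cy bits per matrix

# Two small random projection matrices instead of one large
# (dx*dy x cx*cy) matrix; storage drops from dx*dy*cx*cy entries
# to dx*cx + dy*cy entries.
U = rng.standard_normal((dx, cx))
V = rng.standard_normal((dy, cy))

X = rng.standard_normal((dx, dy))         # a feature matrix
code = np.sign(U.T @ X @ V).reshape(-1)   # {-1,+1} code of length cx*cy
bits = (code > 0).astype(np.uint8)
```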
no code implementations • CVPR 2016 • Yong-Deok Kim, Taewoong Jang, Bohyung Han, Seungjin Choi
We propose a Bayesian evidence framework to facilitate transfer learning from pre-trained deep convolutional neural networks (CNNs).
no code implementations • NeurIPS 2015 • Juho Lee, Seungjin Choi
Normalized random measures (NRMs) provide a broad class of discrete random measures that are often used as priors for Bayesian nonparametric models.
no code implementations • 18 Apr 2016 • Suwon Suh, Seungjin Choi
To this end, we employ a Gaussian copula to model the local dependency in mixed categorical and continuous data, leading to the Gaussian copula variational autoencoder (GCVAE).
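A minimal sketch of the basic copula trick that underlies this construction: each marginal is mapped to a standard-normal latent via the probit of its empirical CDF. The GCVAE itself couples these latents within a variational autoencoder, which this sketch does not implement.

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_gaussian_latent(x):
    """Map one marginal to a standard-normal latent via the probit
    of its empirical CDF (ranks preserved, marginal Gaussianized)."""
    u = rankdata(x) / (len(x) + 1.0)   # empirical CDF values in (0, 1)
    return norm.ppf(u)                 # Gaussian latent with the same ranks

x = np.random.exponential(size=1000)   # a skewed continuous marginal
z = to_gaussian_latent(x)              # approximately N(0, 1)
```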
no code implementations • NeurIPS 2016 • Juho Lee, Lancelot F. James, Seungjin Choi
Bayesian nonparametric methods based on the Dirichlet process (DP), gamma process, and beta process have proven effective in capturing aspects of various datasets arising in machine learning.
no code implementations • ICML 2017 • Juho Lee, Creighton Heaukulani, Zoubin Ghahramani, Lancelot F. James, Seungjin Choi
The BFRY random variables are well approximated by gamma random variables in a variational Bayesian inference routine, which we apply to several network datasets for which power law degree distributions are a natural assumption.
no code implementations • 17 Oct 2017 • Jungtaek Kim, Saehoon Kim, Seungjin Choi
A simple alternative to manual search is random/grid search over a space of hyperparameters, which still requires extensive evaluations of validation errors to find the best configuration.
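A minimal sketch of the random-search baseline being contrasted here, with a stand-in objective and hypothetical hyperparameters (learning rate and regularization strength); every sampled configuration still costs one full validation run, which is what Bayesian optimization aims to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(lr, reg):   # stand-in for an expensive evaluation
    return (np.log10(lr) + 3) ** 2 + (np.log10(reg) + 1) ** 2

# Random search: sample 50 configurations independently on a log scale
# and keep the one with the lowest validation error.
best = min(
    ((validation_error(lr, reg), lr, reg)
     for lr, reg in zip(10 ** rng.uniform(-5, 0, 50),
                        10 ** rng.uniform(-4, 1, 50))),
    key=lambda t: t[0],
)
print("best error %.3f at lr=%.2e, reg=%.2e" % best)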
1 code implementation • ICML 2018 • Yoonho Lee, Seungjin Choi
Our primary contribution is the MT-net, which enables the meta-learner to learn, in each layer's activation space, a subspace on which the task-specific learner performs gradient descent.
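A heavily simplified sketch of the subspace idea, under the assumption that a layer's weight is factored into a meta-learned transformation T and a task-specific component U, with inner-loop gradient descent applied to U only; MT-net's learned binary masks and full architecture are omitted.

```python
import torch

d_in, d_out, inner_lr = 8, 4, 0.1
T = torch.randn(d_out, d_out, requires_grad=True)  # meta-learned
U = torch.randn(d_out, d_in, requires_grad=True)   # task-specific

x, y = torch.randn(16, d_in), torch.randn(16, d_out)
loss = ((x @ (T @ U).t() - y) ** 2).mean()

# Inner-loop step: gradient descent on U only, with T held fixed,
# so adaptation happens in the subspace that T defines.
(gU,) = torch.autograd.grad(loss, U, create_graph=True)
U_task = U - inner_lr * gU   # differentiable w.r.t. T for the outer update
```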
9 code implementations • 1 Oct 2018 • Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh
Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances.
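A hedged sketch of a permutation-invariant set encoder in the spirit of attention-based pooling: a learned seed vector attends over the set's elements, so the output is unchanged under any reordering of the input. This illustrates the property, not the paper's full Set Transformer architecture.

```python
import torch

d = 32
seed = torch.randn(1, 1, d)   # learned query ("seed") vector
attn = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)

X = torch.randn(2, 10, d)     # batch of 2 sets, 10 elements each
pooled, _ = attn(seed.expand(2, -1, -1), X, X)   # (2, 1, d) set summary

# Permuting the set's elements leaves the pooled output unchanged.
perm = torch.randperm(10)
pooled_perm, _ = attn(seed.expand(2, -1, -1), X[:, perm], X[:, perm])
assert torch.allclose(pooled, pooled_perm, atol=1e-5)
```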
1 code implementation • 3 Oct 2018 • Juho Lee, Lancelot F. James, Seungjin Choi, François Caron
We consider a non-projective class of inhomogeneous random graph models with interpretable parameters and a number of interesting asymptotic properties.
no code implementations • 24 Jan 2019 • Jungtaek Kim, Seungjin Choi
In practice, however, local optimizers of an acquisition function are also used, since searching for the global optimizer is often a non-trivial or time-consuming task.
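A minimal sketch of the practice described here: multi-started local optimization of a (stand-in) acquisition function with scipy, where each run may terminate at a local optimizer rather than the global one.

```python
import numpy as np
from scipy.optimize import minimize

def neg_acquisition(x):   # stand-in acquisition (negated, to maximize)
    return (-np.exp(-np.sum((x - 0.3) ** 2))
            - 0.5 * np.exp(-np.sum((x - 0.8) ** 2)))

# Multi-start local optimization: cheap, but only finds local optimizers;
# the global search the abstract mentions would be far more expensive.
rng = np.random.default_rng(0)
starts = rng.uniform(0.0, 1.0, size=(10, 2))
results = [minimize(neg_acquisition, x0, bounds=[(0, 1)] * 2)
           for x0 in starts]
best = min(results, key=lambda r: r.fun)
print("candidate point:", best.x)
```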
no code implementations • 11 Apr 2019 • Minseop Park, Jungtaek Kim, Saehoon Kim, Yanbin Liu, Seungjin Choi
A meta-model is trained on a distribution of similar tasks such that it learns an algorithm that can quickly adapt to a novel task with only a handful of labeled examples.
1 code implementation • 18 May 2019 • Jungtaek Kim, Seungjin Choi
We propose a practical Bayesian optimization method using Gaussian process regression, whose marginal likelihood is maximized, with the number of model selection steps guided by a pre-defined threshold.
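For reference, a sketch of the model-selection step in question using scikit-learn, which fits GP hyperparameters by maximizing the log marginal likelihood with multiple restarts; the paper's contribution of bounding how many such steps are taken is not shown here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.random.uniform(0, 1, (20, 1))
y = np.sin(6 * X).ravel() + 0.1 * np.random.randn(20)

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(),
    n_restarts_optimizer=5,   # restarts of marginal-likelihood maximization
).fit(X, y)
print("log marginal likelihood:", gp.log_marginal_likelihood_value_)
```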
no code implementations • 23 May 2019 • Jungtaek Kim, Michael McCourt, Tackgeun You, Saehoon Kim, Seungjin Choi
We propose a practical Bayesian optimization method over sets, to minimize a black-box function that takes a set as a single input.
no code implementations • 28 May 2019 • Yoonho Lee, Wonjae Kim, Wonpyo Park, Seungjin Choi
In this paper we present a model that produces Discrete InfoMax Codes (DIMCO); we learn a probabilistic encoder that yields k-way d-dimensional codes associated with input data.
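A hedged sketch of what a k-way d-dimensional code looks like: an encoder outputs d independent k-way categorical distributions per input. DIMCO's InfoMax training objective and architecture are omitted; the linear encoder here is purely illustrative.

```python
import torch

k, d, x_dim = 8, 4, 32
encoder = torch.nn.Linear(x_dim, k * d)   # illustrative encoder

x = torch.randn(5, x_dim)
logits = encoder(x).view(5, d, k)          # d categorical distributions
probs = torch.softmax(logits, dim=-1)
codes = probs.argmax(dim=-1)               # (5, d) integers in [0, k)
print(codes)
```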
no code implementations • 25 Sep 2019 • Yoonho Lee, Wonjae Kim, Seungjin Choi
This paper analyzes how generalization works in meta-learning.
1 code implementation • NeurIPS 2020 • Yoonho Lee, Juho Lee, Sung Ju Hwang, Eunho Yang, Seungjin Choi
While various complexity measures for deep neural networks exist, specifying an appropriate measure capable of predicting and explaining generalization in deep networks has proven challenging.
no code implementations • 7 Sep 2020 • Beomjo Shin, Junsu Cho, Hwanjo Yu, Seungjin Choi
Since a positive bag contains both positive and negative instances, it is often necessary to detect the positive instances (key instances) when a set of instances is categorized as a positive bag.
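A sketch of key-instance detection via attention-based MIL pooling in the style of Ilse et al. (2018): per-instance attention weights score how much each instance contributes to the bag label. This illustrates the problem setting, not this paper's specific method.

```python
import torch

d = 16
V = torch.nn.Linear(d, 8)
w = torch.nn.Linear(8, 1)

bag = torch.randn(10, d)              # one bag of 10 instances
scores = w(torch.tanh(V(bag)))        # (10, 1) per-instance scores
alpha = torch.softmax(scores, dim=0)  # attention weights over instances
bag_repr = (alpha * bag).sum(dim=0)   # bag-level representation
key_instance = alpha.argmax()         # most indicative (key) instance
```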
no code implementations • 26 Nov 2020 • Jungtaek Kim, Seungjin Choi, Minsu Cho
The main idea is to use a random mapping which embeds the combinatorial space into a convex polytope in a continuous space, on which all essential processing is performed to determine a solution to the black-box optimization problem in the combinatorial space.
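A loose sketch of one direction of such a mapping: continuous candidates are decoded back to binary vectors through a random linear map, so a continuous optimizer can propose combinatorial points. The paper's actual polytope construction and its guarantees are more structured than this.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_embed = 20, 5   # 20 binary variables, 5-dim continuous space

A = rng.standard_normal((n_vars, n_embed))   # random linear map

def to_combinatorial(z):
    """Decode a continuous point into a binary vector by thresholding."""
    return (A @ z > 0).astype(int)

z = rng.standard_normal(n_embed)   # point in the continuous space
x = to_combinatorial(z)            # candidate in {0, 1}^20
```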
no code implementations • 22 Feb 2022 • Jungtaek Kim, Seungjin Choi
Sequential model-based optimization selects candidate points one at a time, constructing a surrogate model from the history of evaluations in order to solve a black-box optimization problem.
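A minimal sketch of this loop with a GP surrogate and a simple lower-confidence-bound acquisition evaluated on a grid; the objective, acquisition rule, and budget are illustrative stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                        # stand-in black-box objective
    return np.sin(3 * x) + 0.5 * x

grid = np.linspace(0, 2, 200).reshape(-1, 1)
X = list(np.random.uniform(0, 2, (3, 1)))   # initial design
y = [f(x[0]) for x in X]

for _ in range(10):
    # Refit the surrogate on the evaluation history, then pick the
    # next point by minimizing a lower confidence bound.
    gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 2.0 * sigma)]
    X.append(x_next)
    y.append(f(x_next[0]))

print("best found:", min(y))
```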