no code implementations • ICML 2020 • Byung-Jun Lee, Jongmin Lee, Peter Vrancx, Dongho Kim, Kee-Eung Kim
We consider the batch reinforcement learning problem where the agent needs to learn only from a fixed batch of data, without further interaction with the environment.
no code implementations • 24 Jan 2025 • Jongmin Lee, Sungjoo Yoo
We present Dense-SfM, a novel Structure from Motion (SfM) framework designed for dense and accurate 3D reconstruction from multi-view images.
no code implementations • 1 Nov 2024 • Jongmin Lee, Minsu Cho
Determining the 3D orientations of an object in an image, known as single-image pose estimation, is a crucial task in 3D vision applications.
no code implementations • 12 Aug 2024 • Carmelo Sferrazza, Dun-Ming Huang, Fangchen Liu, Jongmin Lee, Pieter Abbeel
In recent years, the transformer architecture has become the de facto standard for machine learning algorithms applied to natural language processing and computer vision.
no code implementations • 15 Jul 2024 • Jongmin Lee, Amin Rakhsha, Ernest K. Ryu, Amir-Massoud Farahmand
To accelerate the computation of the value function, we propose Deflated Dynamics Value Iteration (DDVI).
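The iteration that DDVI accelerates is standard value iteration, whose error contracts by a factor of $\gamma$ per step. A minimal sketch on a toy fixed-policy MDP (the transition matrix, rewards, and discount below are hypothetical; DDVI's eigenvalue-deflation machinery is not reproduced here):

```python
import numpy as np

# Toy 3-state MDP under a fixed policy (illustrative numbers).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])   # row-stochastic transition matrix
r = np.array([0.0, 1.0, 0.0])     # per-state rewards
gamma = 0.9

V = np.zeros(3)
for _ in range(1000):
    V = r + gamma * P @ V          # Bellman update; error shrinks by gamma each step

# The fixed point solves (I - gamma * P) V = r exactly.
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)
print(np.allclose(V, V_exact))
```

DDVI speeds up this loop by deflating the dominant eigenvalues of the transition matrix, improving on the plain $\gamma$ contraction rate.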
1 code implementation • 29 May 2024 • Haanvid Lee, Tri Wahyu Guntara, Jongmin Lee, Yung-Kyun Noh, Kee-Eung Kim
To address this limitation, we propose to relax the deterministic target policy using a kernel and to learn kernel metrics that minimize the overall mean squared error of the estimated temporal-difference update vector of an action-value function, which is used for policy evaluation.
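The relaxation idea can be illustrated with an isotropic Gaussian kernel over the action space: logged actions near the deterministic target action `mu` receive nonzero weight instead of almost surely zero. This is a hedged sketch; the bandwidth `h` is a hypothetical scalar, whereas the paper learns a full kernel metric.

```python
import numpy as np

def kernel_weights(actions, target_action, h=0.5):
    """Gaussian kernel weight of each logged action w.r.t. the target action."""
    d2 = np.sum((actions - target_action) ** 2, axis=-1)
    return np.exp(-d2 / (2 * h ** 2))

rng = np.random.default_rng(0)
logged_actions = rng.normal(size=(5, 2))   # actions drawn by the behavior policy
mu = np.zeros(2)                           # deterministic target action pi(s)
w = kernel_weights(logged_actions, mu)
print(w.round(3))                          # weights decay with distance from mu
```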
1 code implementation • NeurIPS 2023 • Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, Kee-Eung Kim
To this end, we introduce AlberDICE, an offline MARL algorithm that alternately performs centralized training of individual agents based on stationary distribution optimization.
no code implementations • 3 Oct 2023 • Jongmin Lee, Yohann Cabon, Romain Brégier, Sungjoo Yoo, Jerome Revaud
Existing learning-based methods for object pose estimation in RGB images are mostly model-specific or category-based.
1 code implementation • NeurIPS 2023 • Hyunin Lee, Yuhao Ding, Jongmin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi
In the context of the time-desynchronized environment, however, the agent at time $t_{k}$ allocates $\Delta t$ for trajectory generation and training, subsequently moves to the next episode at $t_{k+1}=t_{k}+\Delta t$.
no code implementations • 21 Jul 2023 • Jerome Revaud, Yohann Cabon, Romain Brégier, Jongmin Lee, Philippe Weinzaepfel
Instead of encoding the scene coordinates into the network weights, our model takes as input a database image with sparse 2D-pixel-to-3D-coordinate annotations, extracted from, e.g., off-the-shelf Structure-from-Motion or RGB-D data, together with a query image, for which it predicts a dense 3D coordinate map and its confidence via cross-attention.
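The cross-attention step can be sketched minimally: query-image features attend over the database image's annotated features, and the predicted 3D coordinate per query pixel is the attention-weighted combination of the annotated 3D coordinates. All shapes, names, and the softmax form below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def cross_attend_coords(q_feat, db_feat, db_coords):
    """q_feat: (Nq, D) query features; db_feat: (Na, D) annotated database
    features; db_coords: (Na, 3) their 3D coordinates -> (Nq, 3) predictions."""
    logits = q_feat @ db_feat.T / np.sqrt(q_feat.shape[1])
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over annotations
    return attn @ db_coords                        # convex combination of 3D points

rng = np.random.default_rng(0)
q_feat = rng.normal(size=(6, 16))
db_feat = rng.normal(size=(10, 16))
db_coords = rng.normal(size=(10, 3))
pred = cross_attend_coords(q_feat, db_feat, db_coords)
print(pred.shape)
```

Because each prediction is a convex combination of the annotated points, predictions stay inside their bounding box, which is why sufficiently dense annotations matter.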
no code implementations • CVPR 2023 • Jongmin Lee, Byungjin Kim, SeungWook Kim, Minsu Cho
The resultant features and their orientations are further processed by group aligning, a novel invariant mapping technique that shifts the group-equivariant features by their orientations along the group dimension.
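For a cyclic rotation group, group aligning can be sketched as a circular shift of each feature vector along the group dimension by its estimated orientation, so rotating the input no longer changes the output. The orientation estimate below (per-channel argmax) and the shapes are hypothetical simplifications of the paper's method.

```python
import numpy as np

def group_align(feat):
    """feat: (C, G) group-equivariant features over G orientation bins;
    returns features shifted along the group dimension by their orientation."""
    ori = np.argmax(feat, axis=1)                      # per-channel orientation bin
    return np.stack([np.roll(f, -o) for f, o in zip(feat, ori)])

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8))
rotated = np.roll(feat, 3, axis=1)                     # input rotated by 3 group steps
print(np.allclose(group_align(feat), group_align(rotated)))
```

Shifting by the orientation cancels any global rotation of the input, which is what makes the mapping invariant while preserving the full feature content (unlike group pooling).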
1 code implementation • 24 Oct 2022 • Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, Kee-Eung Kim
We consider local kernel metric learning for off-policy evaluation (OPE) of deterministic policies in contextual bandits with continuous action spaces.
1 code implementation • 15 Jun 2022 • Jongmin Lee, Yoonwoo Jeong, Minsu Cho
We study the problem of learning to assign a characteristic pose, i.e., scale and orientation, for an image region of interest.
1 code implementation • CVPR 2022 • Jongmin Lee, Byungjin Kim, Minsu Cho
Detecting robust keypoints from an image is an integral part of many computer vision problems, and the characteristic orientation and scale of keypoints play an important role for keypoint description and matching.
1 code implementation • ICLR 2022 • Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, Arthur Guez
We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset.
2 code implementations • 28 Feb 2022 • Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
We consider the problem of learning from observation (LfO), in which the agent aims to mimic the expert's behavior from state-only expert demonstrations.
1 code implementation • 7 Feb 2022 • Jongmin Lee, Joo Young Choi, Ernest K. Ryu, Albert No
The tremendous recent progress in analyzing the training dynamics of overparameterized neural networks has primarily focused on wide networks and therefore does not sufficiently address the role of depth in deep learning.
no code implementations • ICLR 2022 • Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim
We consider offline imitation learning (IL), which aims to mimic the expert's behavior from its demonstrations without further interaction with the environment.
no code implementations • ICLR 2022 • Youngsoo Jang, Jongmin Lee, Kee-Eung Kim
GPT-Critic is essentially free from the issue of diverging from human language since it learns from the sentences sampled from the pre-trained language model.
no code implementations • 29 Sep 2021 • Jongmin Lee, Byungjin Kim, Minsu Cho
Therefore, we propose a rotation-invariant keypoint detection method using rotation-equivariant CNNs.
1 code implementation • 21 Jun 2021 • Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, Kee-Eung Kim
We consider the offline reinforcement learning (RL) setting where the agent aims to optimize the policy solely from the data without further environment interactions.
no code implementations • 4 Jan 2021 • Bethany J. Little, Gregory W. Hoth, Justin Christensen, Chuck Walker, Dennis J. De Smet, Grant W. Biedermann, Jongmin Lee, Peter D. D. Schwindt
Compact cold-atom sensors depend on vacuum technology.
Atomic Physics • Applied Physics
no code implementations • ICLR 2021 • Youngsoo Jang, Seokin Seo, Jongmin Lee, Kee-Eung Kim
Interactive Fiction (IF) games provide a useful testbed for language-based reinforcement learning agents, posing significant challenges of natural language understanding, commonsense reasoning, and non-myopic planning in the combinatorial search space.
no code implementations • ICLR 2021 • Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim
We present a new objective for model learning motivated by recent advances in the estimation of stationary distribution corrections.
no code implementations • NeurIPS 2020 • Jongmin Lee, ByungJun Lee, Kee-Eung Kim
Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, so that actions take effect over different time periods.
1 code implementation • ECCV 2020 • Juhong Min, Jongmin Lee, Jean Ponce, Minsu Cho
Feature representation plays a crucial role in visual correspondence, and recent methods for image matching resort to deeply stacked convolutional layers.
Ranked #2 on Semantic correspondence on Caltech-101
no code implementations • 5 Mar 2020 • Jang-Hyun Kim, Jongmin Lee, Hee-Seok Oh
In this study, we propose a new approach to construct principal curves on a sphere by a projection of the data onto a continuous curve.
no code implementations • IJCNLP 2019 • Youngsoo Jang, Jongmin Lee, Jaeyoung Park, Kyeng-Hun Lee, Pierre Lison, Kee-Eung Kim
We present PyOpenDial, a Python-based domain-independent, open-source toolkit for spoken dialogue systems.
no code implementations • 28 Aug 2019 • Juhong Min, Jongmin Lee, Jean Ponce, Minsu Cho
In this paper, we present a new large-scale benchmark dataset of semantically paired images, SPair-71k, which contains 70,958 image pairs with diverse variations in viewpoint and scale.
1 code implementation • ICCV 2019 • Juhong Min, Jongmin Lee, Jean Ponce, Minsu Cho
Establishing visual correspondences under large intra-class variations requires analyzing images at different levels, from features linked to semantics and context to local patterns, while being invariant to instance-specific details.
Ranked #1 on Semantic correspondence on Caltech-101
no code implementations • NeurIPS 2018 • Jongmin Lee, Geon-Hyeong Kim, Pascal Poupart, Kee-Eung Kim
In this paper, we present CC-POMCP (Cost-Constrained POMCP), an online MCTS algorithm for large CPOMDPs that leverages the optimization of LP-induced parameters and only requires a black-box simulator of the environment.
no code implementations • ECCV 2018 • Paul Hongsuck Seo, Jongmin Lee, Deunsol Jung, Bohyung Han, Minsu Cho
Semantic correspondence is the problem of establishing correspondences across images depicting different instances of the same object or scene class.