Search Results for author: Joonho Lee

Found 14 papers, 4 papers with code

LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain Adaptation

no code implementations · ICCV 2023 · Amirreza Shaban, Joonho Lee, Sanghun Jung, Xiangyun Meng, Byron Boots

Existing self-training methods use a model trained on labeled source data to generate pseudo labels for target data, then refine the predictions by fine-tuning the network on those pseudo labels.

Pseudo Label · Unsupervised Domain Adaptation
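The self-training loop described in the snippet hinges on turning a source-trained model's target-domain predictions into training labels. A minimal sketch of that step, assuming a confidence-threshold selection rule (the threshold and the hard-argmax rule are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9):
    """Select target samples whose max class probability exceeds
    `threshold`; return their indices and hard pseudo labels.
    Illustrative sketch only -- thresholding is one common selection
    rule, not necessarily the one used in LiDAR-UDA."""
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)       # per-sample max probability
    labels = probs.argmax(axis=1)        # hard pseudo label
    keep = confidence >= threshold
    return np.flatnonzero(keep), labels[keep]

# Toy target-domain predictions from a source-trained classifier:
probs = [[0.95, 0.05],   # confident -> kept, label 0
         [0.60, 0.40],   # uncertain -> dropped
         [0.08, 0.92]]   # confident -> kept, label 1
idx, pseudo = generate_pseudo_labels(probs)
print(idx, pseudo)  # -> [0 2] [0 1]
```

Fine-tuning then treats `(target[idx], pseudo)` as if it were labeled data; the refinement methods above differ mainly in how these labels are filtered and stabilized over time.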

Unsupervised Domain Adaptation Based on the Predictive Uncertainty of Models

1 code implementation · 16 Nov 2022 · Joonho Lee, Gyemin Lee

Unsupervised domain adaptation (UDA) aims to improve the prediction performance in the target domain under distribution shifts from the source domain.

Unsupervised Domain Adaptation

Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation

no code implementations · 31 Aug 2022 · Joonho Lee, Gyemin Lee

Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation.

Data Augmentation · Self-Supervised Learning · +1

Advanced Skills through Multiple Adversarial Motion Priors in Reinforcement Learning

no code implementations · 23 Mar 2022 · Eric Vollenweider, Marko Bjelonic, Victor Klemm, Nikita Rudin, Joonho Lee, Marco Hutter

Imitation learning approaches such as adversarial motion priors aim to reduce this problem by encouraging a pre-defined motion style.

Imitation Learning · Navigate · +2

A Phaseless Auxiliary-Field Quantum Monte Carlo Perspective on the Uniform Electron Gas at Finite Temperatures: Issues, Observations, and Benchmark Study

no code implementations · 22 Dec 2020 · Joonho Lee, Miguel A. Morales, Fionn D. Malone

We investigate the viability of the phaseless finite temperature auxiliary field quantum Monte Carlo (ph-FT-AFQMC) method for ab initio systems using the uniform electron gas as a model.

Chemical Physics · Strongly Correlated Electrons

Even more efficient quantum computations of chemistry through tensor hypercontraction

no code implementations · 6 Nov 2020 · Joonho Lee, Dominic Berry, Craig Gidney, William J. Huggins, Jarrod R. McClean, Nathan Wiebe, Ryan Babbush

We describe quantum circuits with only $\widetilde{\cal O}(N)$ Toffoli complexity that block encode the spectra of quantum chemistry Hamiltonians in a basis of $N$ arbitrary (e.g., molecular) orbitals.

Quantum Physics · Chemical Physics

Learning Quadrupedal Locomotion over Challenging Terrain

1 code implementation · 21 Oct 2020 · Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, Marco Hutter

The trained controller has taken two generations of quadrupedal ANYmal robots to a variety of natural environments that are beyond the reach of prior published work in legged locomotion.

Zero-shot Generalization

ProbAct: A Probabilistic Activation Function for Deep Neural Networks

1 code implementation · 26 May 2019 · Kumar Shridhar, Joonho Lee, Hideaki Hayashi, Purvanshi Mehta, Brian Kenji Iwana, Seokjun Kang, Seiichi Uchida, Sheraz Ahmed, Andreas Dengel

We show that ProbAct increases classification accuracy by 2-3% compared to ReLU and other conventional activation functions, both on the original datasets and when the datasets are reduced to 50% and 25% of their original size.

Image Classification
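ProbAct makes the activation itself stochastic rather than deterministic. A minimal sketch, assuming the mean function is ReLU and the noise is element-wise Gaussian; `sigma` is a fixed scalar here, whereas the paper treats it as a learnable parameter:

```python
import numpy as np

def probact(x, sigma=0.2, train=True, rng=None):
    """ProbAct-style stochastic activation: a ReLU mean plus a
    zero-mean Gaussian perturbation scaled by sigma.
    Sketch under stated assumptions -- sigma is fixed here, while
    ProbAct learns it during training."""
    mean = np.maximum(x, 0.0)           # ReLU mean function
    if not train:
        return mean                     # deterministic at test time
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(np.shape(x))  # element-wise noise
    return mean + sigma * eps

x = np.array([-1.0, 2.0])
print(probact(x, train=False))  # -> [0. 2.]  (plain ReLU at test time)
```

The injected noise acts as a regularizer during training, which is one plausible reason the snippet reports gains on reduced-size datasets.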

Learning agile and dynamic motor skills for legged robots

2 code implementations · 24 Jan 2019 · Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, Marco Hutter

In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes.

reinforcement-learning · Reinforcement Learning (RL)

Robust Recovery Controller for a Quadrupedal Robot using Deep Reinforcement Learning

no code implementations · 22 Jan 2019 · Joonho Lee, Jemin Hwangbo, Marco Hutter

We experimentally validate our approach on the quadrupedal robot ANYmal, which is a dog-sized quadrupedal system with 12 degrees of freedom.

Navigate · reinforcement-learning · +1
