no code implementations • 18 Apr 2024 • Yoonsang Lee, Xi Ye, Eunsol Choi
Given a question and a set of documents discussing different people named Michael Jordan, can LMs distinguish entity mentions to generate a cohesive answer to the question?
no code implementations • 18 Mar 2024 • Jihun Han, Yoonsang Lee
We propose a neural network-based mesh-free approach for perforated domain problems.
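The snippet above does not give the method's details, but the mesh-free idea it relies on can be illustrated with a classical substitute. The sketch below is NOT the paper's neural-network approach; it is a Kansa-style Gaussian RBF collocation solver for a Poisson problem on a perforated domain (unit square with a circular hole, both geometry and shape parameter chosen for illustration), showing why mesh-free methods suit perforated geometries: only scattered points are needed, never a mesh that conforms to the holes.

```python
import numpy as np

# Hedged sketch: classical mesh-free RBF collocation, not the paper's method.
# Domain: unit square minus a circular hole; manufactured exact solution
# u = sin(pi x) sin(pi y) supplies the right-hand side and Dirichlet data.

rng = np.random.default_rng(0)
hole_c, hole_r = np.array([0.5, 0.5]), 0.2

def exact(p):                       # manufactured solution (for testing only)
    return np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])

def rhs(p):                         # f = Laplacian of the exact solution
    return -2 * np.pi**2 * exact(p)

def sample_interior(n):             # random points outside the hole, no mesh
    pts = rng.uniform(0, 1, (4 * n, 2))
    keep = np.linalg.norm(pts - hole_c, axis=1) > hole_r
    return pts[keep][:n]

def boundary_points(n):             # outer square edges plus the hole rim
    t = np.linspace(0, 1, n, endpoint=False)
    z = np.zeros_like(t)
    outer = np.concatenate([np.stack([t, z], 1), np.stack([z + 1, t], 1),
                            np.stack([t, z + 1], 1), np.stack([z, t], 1)])
    th = np.linspace(0, 2 * np.pi, n, endpoint=False)
    hole = hole_c + hole_r * np.stack([np.cos(th), np.sin(th)], 1)
    return np.concatenate([outer, hole])

eps = 3.0                           # RBF shape parameter (tuning assumption)
centers = sample_interior(250)

def phi(p, c):                      # Gaussian RBF matrix exp(-eps^2 r^2)
    r2 = ((p[:, None, :] - c[None, :, :])**2).sum(-1)
    return np.exp(-eps**2 * r2), r2

def lap_phi(p, c):                  # analytic Laplacian of the Gaussian RBF
    g, r2 = phi(p, c)
    return 4 * eps**2 * (eps**2 * r2 - 1) * g

# Enforce the PDE at interior points and Dirichlet data on both boundaries,
# then solve the overdetermined linear system in a least-squares sense.
xi, xb = sample_interior(400), boundary_points(60)
A = np.vstack([lap_phi(xi, centers), phi(xb, centers)[0]])
b = np.concatenate([rhs(xi), exact(xb)])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

xt = sample_interior(200)           # held-out points to measure the error
u = phi(xt, centers)[0] @ w
err = np.abs(u - exact(xt)).max()
```

Because the basis functions and collocation points are scattered, the hole enters only through the point-sampling and boundary routines; no triangulation of the perforated geometry is ever built.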
no code implementations • 16 Nov 2023 • Yoonsang Lee, Pranav Atreya, Xi Ye, Eunsol Choi
We analyze three multi-answer question answering datasets, which lets us further study answer-set ordering strategies based on the LM's knowledge of each answer.
no code implementations • 14 Oct 2023 • Jihun Han, Yoonsang Lee, Anne Gelb
We present a framework designed to learn the underlying dynamics between two images observed at consecutive time steps.
no code implementations • 28 Sep 2023 • Jihun Han, Yoonsang Lee
This study analyzes the derivative-free loss method for solving a certain class of elliptic PDEs with neural networks.
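The snippet above does not spell out what "derivative-free" means, so a small numerical check may help. The sketch below is an assumption-laden illustration, not the paper's training scheme: by Itô's formula, E[u(x + B_dt)] − u(x) = ½·Δu(x)·dt + O(dt²) for a standard Brownian increment B_dt, so the residual of Δu = f can be enforced by matching u(x) against E[u(x + B_dt)] − ½·f(x)·dt, with no spatial derivatives of the network ever computed. The code verifies this identity by Monte Carlo for a test function with a known Laplacian.

```python
import numpy as np

# Hedged sketch: Monte Carlo check of the stochastic identity behind
# derivative-free losses for elliptic PDEs (not the paper's exact scheme).
# For u(x, y) = x^2 + y^2 we have Lap(u) = 4 everywhere, so
# E[u(x + B_dt)] - u(x) should equal 0.5 * 4 * dt.

rng = np.random.default_rng(1)

def u(p):                            # test function with known Laplacian 4
    return (p**2).sum(axis=-1)

x = np.array([0.3, 0.4])
dt, n = 0.01, 200_000
walks = x + np.sqrt(dt) * rng.standard_normal((n, 2))   # samples of x + B_dt

mc = u(walks).mean() - u(x)          # Monte Carlo estimate of the increment
target = 0.5 * 4 * dt                # 0.5 * Lap(u) * dt = 0.02
```

The estimate `mc` agrees with `target` up to Monte Carlo noise, confirming that the elliptic operator can be probed through expectations of random walks alone.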
no code implementations • 14 Aug 2023 • Taesoo Kwon, Taehong Gu, Jaewon Ahn, Yoonsang Lee
By using the centroidal dynamics model (CDM) to express the full-body character as a single rigid body (SRB) and training a policy to track a reference motion, we obtain a policy that adapts to various unobserved environmental changes and controller transitions without any additional learning.
no code implementations • 4 Jun 2022 • Jihun Han, Yoonsang Lee
Compared with other network-based approaches for multiscale problems, the proposed method requires neither a hand-crafted neural network architecture nor solving the cell problem to compute the homogenization coefficient.
no code implementations • 2 Dec 2021 • Jihun Han, Yoonsang Lee
In this work, we propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations.
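The hierarchical idea in the snippet above can be sketched with a simple least-squares analogue. This is NOT the paper's network architecture; it is a hypothetical two-level fit chosen for illustration: a coarse model is fitted first, then a second "correction" model is fitted to the remaining residual, and the combined approximation captures fine-scale detail the coarse level misses.

```python
import numpy as np

# Hedged sketch of the hierarchical coarse-then-correction idea, not the
# paper's method. Target has a smooth component plus a fine-scale component.

x = np.linspace(0, 1, 400)
target = np.sin(2 * np.pi * x) + 0.05 * np.sin(40 * np.pi * x)

def fourier_basis(x, kmax):          # sin/cos features up to frequency kmax
    ks = np.arange(1, kmax + 1)
    return np.concatenate([np.sin(np.pi * np.outer(x, ks)),
                           np.cos(np.pi * np.outer(x, ks)),
                           np.ones((x.size, 1))], axis=1)

# Level 1: coarse fit with a few low-frequency basis functions.
Ac = fourier_basis(x, 4)
wc, *_ = np.linalg.lstsq(Ac, target, rcond=None)
coarse = Ac @ wc

# Level 2: correction model fitted only to the coarse residual.
Af = fourier_basis(x, 50)
wf, *_ = np.linalg.lstsq(Af, target - coarse, rcond=None)
combined = coarse + Af @ wf

err_coarse = np.abs(target - coarse).max()       # misses the fine scale
err_combined = np.abs(target - combined).max()   # two-level fit recovers it
```

The coarse level alone leaves an error on the order of the fine-scale amplitude, while the correction level drives it down by orders of magnitude, which is the convergence mechanism hierarchical solvers exploit.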
no code implementations • 30 Jul 2020 • Hwangpil Park, Ri Yu, Yoonsang Lee, Kyungho Lee, Jehee Lee
The goal of this study is to answer these questions by evaluating the push-recovery stability of deep policies compared to human subjects and a previous feedback controller.