Search Results for author: Jean Oh

Found 33 papers, 12 papers with code

Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation

no code implementations21 Dec 2022 Gyan Tatiya, Jonathan Francis, Luca Bondi, Ingrid Navarro, Eric Nyberg, Jivko Sinapov, Jean Oh

We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects.

Visual Navigation

Distribution-aware Goal Prediction and Conformant Model-based Planning for Safe Autonomous Driving

no code implementations16 Dec 2022 Jonathan Francis, Bingqing Chen, Weiran Yao, Eric Nyberg, Jean Oh

The feasibility of collecting a large amount of expert demonstrations has inspired growing research interest in learning-to-drive settings, where models learn by imitating the driving behaviour of experts.

Autonomous Driving Density Estimation +1

Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization

1 code implementation23 Oct 2022 Peter Schaldenbrand, Zhixuan Liu, Jean Oh

We introduce an approach to generating videos based on a series of given language descriptions.

T2FPV: Dataset and Method for Correcting First-Person View Errors in Pedestrian Trajectory Prediction

1 code implementation22 Sep 2022 Benjamin Stoler, Meghdeep Jana, Soonmin Hwang, Jean Oh

To support first-person view trajectory prediction research, we present T2FPV, a method for constructing high-fidelity first-person view (FPV) datasets given a real-world, top-down trajectory dataset; we showcase our approach on the ETH/UCY pedestrian dataset to generate the egocentric visual data of all interacting pedestrians, creating the T2FPV-ETH dataset.

Imputation Pedestrian Trajectory Prediction +1

RCA: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning

no code implementations29 Jul 2022 Xinjie Yao, Ji Zhang, Jean Oh

Under shared autonomy, wheelchair users expect vehicles to provide safe and comfortable rides while following users' high-level navigation plans.

Self-Supervised Learning Visual Navigation

StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation

1 code implementation24 Feb 2022 Peter Schaldenbrand, Zhixuan Liu, Jean Oh

Generating images that fit a given text description using machine learning has improved greatly with the release of technologies such as the CLIP image-text encoder model; however, current methods lack artistic control over the style of the image to be generated.

Style Transfer Translation

StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis

1 code implementation4 Nov 2021 Peter Schaldenbrand, Zhixuan Liu, Jean Oh

Generating images that fit a given text description using machine learning has improved greatly with the release of technologies such as the CLIP image-text encoder model; however, current methods lack artistic control over the style of the image to be generated.

Style Transfer

Safe Autonomous Racing via Approximate Reachability on Ego-vision

no code implementations14 Oct 2021 Bingqing Chen, Jonathan Francis, Jean Oh, Eric Nyberg, Sylvia L. Herbert

Given the nature of the task, autonomous agents need to be able to 1) identify and avoid unsafe scenarios under the complex vehicle dynamics, and 2) make sub-second decisions in a fast-changing environment.

Autonomous Driving Reinforcement Learning (RL) +1

Unsupervised Domain Adaptation Via Pseudo-labels And Objectness Constraints

no code implementations29 Sep 2021 Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma Strubell, Jose Moura

Crucially, the objectness constraint is agnostic to the ground-truth semantic segmentation labels and, therefore, remains appropriate for unsupervised adaptation settings.

Pseudo Label Semantic Segmentation +2

Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots

no code implementations29 Sep 2021 Tanmay Shankar, Yixin Lin, Aravind Rajeswaran, Vikash Kumar, Stuart Anderson, Jean Oh

In this paper, we explore how we can endow robots with the ability to learn correspondences between their own skills, and those of morphologically different robots in different domains, in an entirely unsupervised manner.

Translation Unsupervised Machine Translation

Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling

no code implementations20 Aug 2021 Xiaopeng Lu, Zhen Fan, Yansen Wang, Jean Oh, Carolyn P. Rose

LOGOS leverages two grounding tasks to better localize the key information of the image, utilizes scene text clustering to group individual OCR tokens, and learns to select the best answer from different sources of OCR (Optical Character Recognition) texts.

Data Ablation Optical Character Recognition (OCR) +4
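The abstract describes grouping individual OCR tokens via scene text clustering. A minimal, hypothetical sketch of that idea is below: tokens whose box centres lie close together are merged into one group. The distance rule and threshold are assumptions for illustration, not the clustering method LOGOS actually uses.

```python
# Hypothetical sketch of grouping OCR tokens into scene-text clusters by
# spatial proximity; the greedy threshold rule here is an assumption,
# not the paper's actual clustering method.

def group_tokens(tokens, max_gap=20.0):
    """Greedy single-linkage grouping: a token joins the first group that
    has a member within max_gap pixels on both axes."""
    groups = []
    for word, (x, y) in tokens:
        for g in groups:
            if any(abs(x - gx) <= max_gap and abs(y - gy) <= max_gap
                   for _, (gx, gy) in g):
                g.append((word, (x, y)))
                break
        else:
            groups.append([(word, (x, y))])
    return [[w for w, _ in g] for g in groups]

# Two tokens on the same sign cluster together; the distant one stays apart.
tokens = [("EXIT", (10, 10)), ("ONLY", (25, 12)), ("CAFE", (200, 180))]
print(group_tokens(tokens))  # [['EXIT', 'ONLY'], ['CAFE']]
```

Grouped tokens can then be treated as one candidate answer phrase rather than isolated words.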

Core Challenges in Embodied Vision-Language Planning

no code implementations26 Jun 2021 Jonathan Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, Jean Oh

Recent advances in the areas of multimodal machine learning and artificial intelligence (AI) have led to the development of challenging tasks at the intersection of Computer Vision, Natural Language Processing, and Embodied AI.

Self-supervised Learning of 3D Object Understanding by Data Association and Landmark Estimation for Image Sequence

no code implementations14 Apr 2021 Hyeonwoo Yu, Jean Oh

Therefore, we propose a strategy to exploit multiple observations of the object in the image sequence in order to surpass the self-performance: first, the landmarks for the global object map are estimated through network prediction and data association, and the corrected annotation for a single frame is obtained.

Association Pose Estimation +1

Domain Adaptive Monocular Depth Estimation With Semantic Information

no code implementations12 Apr 2021 Fei Lu, Hyeonwoo Yu, Jean Oh

The advent of deep learning has brought impressive advances to monocular depth estimation; e.g., supervised monocular depth estimation has been thoroughly investigated.

Image Classification Monocular Depth Estimation

Anytime 3D Object Reconstruction using Multi-modal Variational Autoencoder

no code implementations25 Jan 2021 Hyeonwoo Yu, Jean Oh

In this context, we propose a method for imputation of latent variables whose elements are partially lost.

3D Object Reconstruction 3D Shape Reconstruction +3

Anchor Distance for 3D Multi-Object Distance Estimation from 2D Single Shot

no code implementations25 Jan 2021 Hyeonwoo Yu, Jean Oh

Given a 2D Bounding Box (BBox) and object parameters, a 3D distance to the object can be calculated directly using 3D reprojection; however, such methods are prone to significant errors because an error from the 2D detection can be amplified in 3D.

Autonomous Driving object-detection +3
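The direct-reprojection baseline the abstract criticises can be sketched with the standard pinhole relation Z = f * H / h: distance follows from the bounding-box height, and a few pixels of 2D detection error visibly shift the depth estimate. The focal length and object height below are made-up assumptions, not values from the paper.

```python
# Illustrative pinhole-camera baseline for distance from a 2D bounding box,
# showing the error amplification the abstract refers to. All numbers are
# made-up assumptions.

def distance_from_bbox(focal_px: float, object_height_m: float,
                       bbox_height_px: float) -> float:
    """Pinhole model: Z = f * H / h (metres)."""
    return focal_px * object_height_m / bbox_height_px

f, H = 1000.0, 1.5  # assumed focal length (px) and true object height (m)
z_true = distance_from_bbox(f, H, 50.0)  # -> 30.0 m
z_err = distance_from_bbox(f, H, 48.0)   # a 2-px bbox error...
print(z_true, z_err)                     # ...shifts depth by over a metre
```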

Content Masked Loss: Human-Like Brush Stroke Planning in a Reinforcement Learning Painting Agent

1 code implementation18 Dec 2020 Peter Schaldenbrand, Jean Oh

The objective of most Reinforcement Learning painting agents is to minimize the loss between a target image and the paint canvas.

object-detection Object Detection +1

Trajformer: Trajectory Prediction with Local Self-Attentive Contexts for Autonomous Driving

2 code implementations30 Nov 2020 Manoj Bhat, Jonathan Francis, Jean Oh

Effective feature-extraction is critical to models' contextual understanding, particularly for applications to robotics and autonomous driving, such as multimodal trajectory prediction.

Autonomous Driving Trajectory Prediction

Image Captioning with Compositional Neural Module Networks

no code implementations10 Jul 2020 Junjiao Tian, Jean Oh

In image captioning where fluency is an important factor in evaluation, e.g., $n$-gram metrics, sequential models are commonly used; however, sequential models generally result in overgeneralized expressions that lack the details that may be present in an input image.

Image Captioning Question Answering +2
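The $n$-gram metrics the abstract alludes to reward surface overlap with a reference caption. A minimal sketch of the clipped $n$-gram precision underlying BLEU-style metrics (illustrative, not the paper's evaluation code):

```python
# Minimal sketch of clipped n-gram precision, the quantity underlying
# BLEU-style captioning metrics; not the paper's evaluation code.
from collections import Counter

def ngram_precision(candidate: list, reference: list, n: int) -> float:
    """Fraction of candidate n-grams also in the reference (counts clipped)."""
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

cand = "a dog runs on the grass".split()
ref = "a dog is running on the grass".split()
print(ngram_precision(cand, ref, 1))  # 5 of 6 unigrams match
```

Note that such surface metrics score a fluent but overgeneralized caption well, which is exactly the failure mode the abstract points out.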

Noticing Motion Patterns: Temporal CNN with a Novel Convolution Operator for Human Trajectory Prediction

no code implementations2 Jul 2020 Dapeng Zhao, Jean Oh

We propose a Convolutional Neural Network-based approach to learn, detect, and extract patterns in sequential trajectory data, known here as Social Pattern Extraction Convolution (Social-PEC).

Decision Making Trajectory Prediction

A Multimodal Dialogue System for Conversational Image Editing

no code implementations16 Feb 2020 Tzu-Hsiang Lin, Trung Bui, Doo Soon Kim, Jean Oh

In this paper, we present a multimodal dialogue system for Conversational Image Editing.

Following Social Groups: Socially Compliant Autonomous Navigation in Dense Crowds

no code implementations27 Nov 2019 Xinjie Yao, Ji Zhang, Jean Oh

The underlying system incorporates a deep neural network to track social groups and join the flow of a social group to facilitate navigation.

Autonomous Navigation Social Navigation

Explainable Semantic Mapping for First Responders

no code implementations15 Oct 2019 Jean Oh, Martial Hebert, Hae-Gon Jeon, Xavier Perez, Chia Dai, Yeeho Song

One of the key challenges in the semantic mapping problem in post-disaster environments is how to analyze a large amount of data efficiently with minimal supervision.

Semantic Segmentation

Social Attention: Modeling Attention in Human Crowds

2 code implementations12 Oct 2017 Anirudh Vemula, Katharina Muelling, Jean Oh

In this work, we propose Social Attention, a novel trajectory prediction model that captures the relative importance of each person when navigating in the crowd, irrespective of their proximity.

Navigate Trajectory Prediction
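The abstract's core claim is that each person's importance is learned rather than tied to proximity. The usual mechanism for turning learned scores into such importances is a softmax, sketched below; the scores themselves are a made-up stand-in for whatever the model would compute.

```python
# Illustrative softmax over per-neighbour importance scores, in the spirit
# of attention-based trajectory models; the scores are made-up stand-ins
# for learned quantities, not Social Attention's actual features.
import math

def attention_weights(scores):
    """Softmax: map raw importance scores to weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A distant but fast-approaching person (score 2.0) can outweigh two
# nearer, slow-moving ones (score 0.5 each).
w = attention_weights([2.0, 0.5, 0.5])
print([round(x, 3) for x in w])  # first weight dominates
```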

Learning Lexical Entries for Robotic Commands using Crowdsourcing

no code implementations8 Sep 2016 Junjie Hu, Jean Oh, Anatole Gershman

Robotic commands in natural language usually contain various spatial descriptions that are semantically similar but syntactically different.

Machine Translation Translation

Path Planning in Dynamic Environments with Adaptive Dimensionality

1 code implementation22 May 2016 Anirudh Vemula, Katharina Muelling, Jean Oh

In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model.

Robotics
