Search Results for author: Tsuhan Chen

Found 13 papers, 2 papers with code

Stack-Captioning: Coarse-to-Fine Learning for Image Captioning

1 code implementation • 11 Sep 2017 Jiuxiang Gu, Jianfei Cai, Gang Wang, Tsuhan Chen

On the other hand, a multi-stage image captioning model is hard to train due to the vanishing gradient problem.
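The vanishing-gradient issue mentioned above can be illustrated with a toy model (this is an assumption-laden sketch, not the paper's architecture): when many saturating stages are stacked, backpropagation multiplies one derivative factor per stage, and for a sigmoid that factor is at most 0.25, so the gradient scale shrinks geometrically with depth.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_gradient_scale(num_stages, z=0.0):
    """Toy model: product of sigmoid derivatives across stacked stages.

    Each stage contributes a factor sigmoid'(z) = s(z) * (1 - s(z)) <= 0.25,
    so the backpropagated gradient scale decays geometrically with depth.
    """
    s = sigmoid(z)
    d = s * (1.0 - s)  # derivative at pre-activation z (0.25 at z = 0)
    return d ** num_stages

shallow = backprop_gradient_scale(2)   # 0.25**2  = 0.0625
deep = backprop_gradient_scale(10)     # 0.25**10 ≈ 9.5e-7
print(shallow, deep)
```

This is why multi-stage (stacked) decoders often need auxiliary losses or residual paths to train well.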

Image Captioning

In the Shadows, Shape Priors Shine: Using Occlusion to Improve Multi-Region Segmentation

no code implementations CVPR 2016 Yuka Kihara, Matvey Soloviev, Tsuhan Chen

We present a new algorithm for multi-region segmentation of 2D images with objects that may partially occlude each other.

QUOTE: "Querying" Users as Oracles in Tag Engines - A Semi-Supervised Learning Approach to Personalized Image Tagging

no code implementations • 20 Jan 2016 Amandianeze O. Nwana, Tsuhan Chen

Previous work has correctly identified that many of the tags users provide on images are not visually relevant (i.e., representative of the salient content in the image). That work then treats such tags as noise, ignoring the fact that users chose to provide those tags over others that could have been more visually relevant.

TAG

Recent Advances in Convolutional Neural Networks

no code implementations • 22 Dec 2015 Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Li Wang, Gang Wang, Jianfei Cai, Tsuhan Chen

In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing.
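The building block behind the convolutional networks this survey covers is the discrete 2-D convolution. A minimal NumPy sketch (valid-mode cross-correlation, as most deep learning libraries implement it; not code from the survey itself):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # simple horizontal gradient filter
print(conv2d(img, edge))        # every entry is -1 for this ramp image
```

A CNN learns the kernel weights rather than hand-designing them, and stacks many such filtered maps with nonlinearities and pooling.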

Speech Recognition

You Are Here: Mimicking the Human Thinking Process in Reading Floor-Plans

no code implementations ICCV 2015 Hang Chu, Dong Ki Kim, Tsuhan Chen

A human can easily find his or her way in an unfamiliar building, by walking around and reading the floor-plan.

Deep Neural Network for Real-Time Autonomous Indoor Navigation

no code implementations • 15 Nov 2015 Dong Ki Kim, Tsuhan Chen

Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges.

Autonomous Navigation

A Mixed Bag of Emotions: Model, Predict, and Transfer Emotion Distributions

no code implementations CVPR 2015 Kuan-Chuan Peng, Tsuhan Chen, Amir Sadovnik, Andrew C. Gallagher

First, we show through psychovisual studies that different people have different emotional reactions to the same image, which is a strong and novel departure from previous work that only records and predicts a single dominant emotion for each image.

Revisiting Depth Layers from Occlusions

no code implementations CVPR 2013 Adarsh Kowdle, Andrew Gallagher, Tsuhan Chen

We cast the problem of depth-layer segmentation as a discrete labeling problem on a spatiotemporal Markov Random Field (MRF) that uses the motion occlusion cues along with monocular cues and a smooth motion prior for the moving object.
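Discrete labeling on an MRF, as described above, can be sketched in miniature (a hedged toy example, not the paper's spatiotemporal model or cue set): each node pays a unary data cost for its label, each edge pays a Potts smoothness penalty when neighbors disagree, and the labeling minimizing the total energy is the answer. Here a three-node chain is solved by brute force.

```python
import itertools

def mrf_energy(labels, unary, edges, lam=1.0):
    """Total MRF energy: unary data terms plus a Potts pairwise prior
    that charges lam whenever two neighboring nodes take different labels."""
    e = sum(unary[i][lab] for i, lab in enumerate(labels))          # data term
    e += sum(lam for (i, j) in edges if labels[i] != labels[j])     # smoothness
    return e

unary = [[0.0, 2.0], [1.5, 0.2], [1.4, 0.3]]  # cost of labels 0/1 per node
edges = [(0, 1), (1, 2)]                      # chain neighborhood
best = min(itertools.product([0, 1], repeat=3),
           key=lambda lab: mrf_energy(lab, unary, edges))
print(best)  # (0, 1, 1): node 0 prefers label 0, nodes 1-2 prefer label 1
```

Real MRF segmentation uses approximate solvers such as graph cuts, since brute force is exponential in the number of nodes.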

3D-Based Reasoning with Blocks, Support, and Stability

no code implementations CVPR 2013 Zhaoyin Jia, Andrew Gallagher, Ashutosh Saxena, Tsuhan Chen

Our algorithm incorporates the intuition that a good 3D representation of the scene is the one that fits the data well, and is a stable, self-supporting (i.e., one that does not topple) arrangement of objects.

θ-MRF: Capturing Spatial and Semantic Structure in the Parameters for Scene Understanding

no code implementations NeurIPS 2011 Cong-Cong Li, Ashutosh Saxena, Tsuhan Chen

For most scene understanding tasks (such as object detection or depth estimation), the classifiers need to consider contextual information in addition to the local features.

Depth Estimation Object Detection +1

Towards Holistic Scene Understanding: Feedback Enabled Cascaded Classification Models

no code implementations NeurIPS 2010 Cong-Cong Li, Adarsh Kowdle, Ashutosh Saxena, Tsuhan Chen

In many machine learning domains (such as scene understanding), several related sub-tasks (such as scene categorization, depth estimation, object detection) operate on the same raw data and provide correlated outputs.

Classification Depth Estimation +5
