Search Results for author: Austin Reiter

Found 19 papers, 10 papers with code

Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation

no code implementations • 9 Feb 2016 • Colin Lea, Austin Reiter, Rene Vidal, Gregory D. Hager

We propose a model for action segmentation which combines low-level spatiotemporal features with a high-level segmental classifier.

Action Classification Action Segmentation +4

Temporal Convolutional Networks: A Unified Approach to Action Segmentation

1 code implementation • 29 Aug 2016 • Colin Lea, Rene Vidal, Austin Reiter, Gregory D. Hager

The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN).

Action Segmentation Segmentation
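The two-step paradigm above (per-frame features, then a separate temporal model such as an RNN) is what a temporal convolutional network collapses into a single convolutional hierarchy. A minimal NumPy sketch of the core building block, a causal dilated 1D convolution over a sequence of per-frame features, is shown below; the shapes, kernel size, and dilation are illustrative choices, not values from the paper.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal 1D convolution over a feature sequence.

    x: (T, C_in) per-frame features; w: (K, C_in, C_out) kernel.
    The output at time t depends only on frames <= t (no future leakage),
    which is the property TCN-style action segmentation models rely on.
    """
    T, c_in = x.shape
    k, _, c_out = w.shape
    # Left-pad so the output length equals T and causality holds.
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros((pad, c_in)), x], axis=0)
    out = np.zeros((T, c_out))
    for t in range(T):
        for j in range(k):
            out[t] += xp[t + j * dilation] @ w[j]
    return out

# Toy example: 8 frames of 4-dim features, 3-tap kernel, 2 output channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
w = rng.standard_normal((3, 4, 2))
y = causal_conv1d(x, w, dilation=2)
print(y.shape)  # (8, 2)
```

Stacking such layers with growing dilations gives each output frame a long temporal receptive field while keeping the whole model convolutional.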

Anatomically Constrained Video-CT Registration via the V-IMLOP Algorithm

no code implementations • 25 Oct 2016 • Seth D. Billings, Ayushi Sinha, Austin Reiter, Simon Leonard, Masaru Ishii, Gregory D. Hager, Russell H. Taylor

Functional endoscopic sinus surgery (FESS) is a surgical procedure used to treat acute cases of sinusitis and other sinus diseases.

Interpretable 3D Human Action Analysis with Temporal Convolutional Networks

1 code implementation • 14 Apr 2017 • Tae Soo Kim, Austin Reiter

In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition.

Action Analysis Multimodal Activity Recognition +1

Train, Diagnose and Fix: Interpretable Approach for Fine-grained Action Recognition

no code implementations • 22 Nov 2017 • Jingxuan Hou, Tae Soo Kim, Austin Reiter

Based on the findings from the model interpretation analysis, we propose a targeted refinement technique, which can generalize to various other recognition models.

3D Action Recognition Fine-grained Action Recognition +2

Endoscopic navigation in the absence of CT imaging

no code implementations • 8 Jun 2018 • Ayushi Sinha, Xingtong Liu, Austin Reiter, Masaru Ishii, Gregory D. Hager, Russell H. Taylor

Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference image to provide structural context to the clinician.

Computed Tomography (CT)

Self-supervised Learning for Dense Depth Estimation in Monocular Endoscopy

no code implementations • 25 Jun 2018 • Xingtong Liu, Ayushi Sinha, Mathias Unberath, Masaru Ishii, Gregory Hager, Russell H. Taylor, Austin Reiter

We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading.

Anatomy Depth Estimation +2

Learning to See Forces: Surgical Force Prediction with RGB-Point Cloud Temporal Convolutional Networks

no code implementations • 31 Jul 2018 • Cong Gao, Xingtong Liu, Michael Peven, Mathias Unberath, Austin Reiter

Our method results in a mean absolute error of 0.814 N in the ex vivo study, suggesting that it may be a promising alternative to hardware-based surgical force feedback in endoscopic procedures.

Dense Depth Estimation in Monocular Endoscopy with Self-supervised Learning Methods

1 code implementation • 20 Feb 2019 • Xingtong Liu, Ayushi Sinha, Masaru Ishii, Gregory D. Hager, Austin Reiter, Russell H. Taylor, Mathias Unberath

We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading.

Anatomy Computed Tomography (CT) +2

Action Recognition Using Volumetric Motion Representations

1 code implementation • 19 Nov 2019 • Michael Peven, Gregory D. Hager, Austin Reiter

In this work, we introduce a novel representation of motion as a voxelized 3D vector field and demonstrate how it can be used to improve performance of action recognition networks.

Action Recognition Data Augmentation +2
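As a rough sketch of what a voxelized 3D vector field of motion looks like in practice, the NumPy snippet below bins per-point motion vectors into a voxel grid and averages them per cell. The 8³ grid, the unit-cube normalization, and the function name are hypothetical choices for illustration, not the paper's actual representation or resolution.

```python
import numpy as np

def voxelize_motion(points, vectors, grid=8):
    """Accumulate 3D motion vectors into a voxel grid.

    points: (N, 3) positions assumed normalized to [0, 1)^3.
    vectors: (N, 3) per-point motion vectors.
    Returns a (grid, grid, grid, 3) field: mean motion vector per voxel
    (zero where a voxel received no points).
    """
    field = np.zeros((grid, grid, grid, 3))
    counts = np.zeros((grid, grid, grid, 1))
    idx = np.clip((points * grid).astype(int), 0, grid - 1)
    for (i, j, k), v in zip(idx, vectors):
        field[i, j, k] += v
        counts[i, j, k] += 1
    return field / np.maximum(counts, 1)

rng = np.random.default_rng(2)
pts = rng.random((100, 3))            # random points in the unit cube
vecs = rng.standard_normal((100, 3))  # their motion vectors
vol = voxelize_motion(pts, vecs)
print(vol.shape)  # (8, 8, 8, 3)
```

A dense grid like this can be fed to a 3D CNN the same way a voxel occupancy grid would be, with three channels per cell instead of one.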

Deep Multi-Modal Sets

no code implementations • 3 Mar 2020 • Austin Reiter, Menglin Jia, Pu Yang, Ser-Nam Lim

Most deep learning-based methods rely on a late fusion technique whereby multiple feature types are encoded and concatenated, and then a multilayer perceptron (MLP) maps the fused embedding to predictions.
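The late-fusion baseline this entry describes can be sketched in a few lines of NumPy: encode each modality, concatenate the embeddings, and run an MLP on the result. The modality dimensions, the stubbed-out encoders (fixed vectors standing in for real networks), and the 5-way output are all hypothetical, purely for illustration.

```python
import numpy as np

# Stand-ins for per-modality encoder outputs (e.g. image, text, audio);
# real systems would produce these with learned networks.
img_feat = np.full(512, 0.1)
txt_feat = np.full(300, 0.2)
aud_feat = np.full(128, 0.3)

# Late fusion: concatenate the per-modality embeddings into one vector...
fused = np.concatenate([img_feat, txt_feat, aud_feat])  # shape (940,)

# ...then an MLP maps the fused embedding to class scores.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((940, 64)) * 0.01, np.zeros(64)
W2, b2 = rng.standard_normal((64, 5)) * 0.01, np.zeros(5)
hidden = np.maximum(fused @ W1 + b1, 0.0)  # ReLU
logits = hidden @ W2 + b2                   # hypothetical 5-way prediction
print(logits.shape)  # (5,)
```

Note that plain concatenation fixes the number and order of modalities at training time, which is one of the limitations a set-based fusion scheme is designed to avoid.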

Intentonomy: a Dataset and Study towards Human Intent Understanding

1 code implementation • CVPR 2021 • Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie, Serge Belongie, Ser-Nam Lim

Based on our findings, we conduct further study to quantify the effect of attending to object and context classes as well as textual information in the form of hashtags when training an intent classifier.

Cross-Modal Retrieval Augmentation for Multi-Modal Classification

no code implementations • Findings (EMNLP) 2021 • Shir Gur, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, Austin Reiter

Recent advances in using retrieval components over external knowledge sources have shown impressive results for a variety of downstream tasks in natural language processing.

Cross-Modal Retrieval General Classification +4
