1 code implementation • Findings (EMNLP) 2021 • Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi, Claire Cardie
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
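The entry above only names the method, so the following is a loose illustration of the general idea behind alternating normalization: Sinkhorn-style alternating column and row normalization applied to a batch of classifier outputs. The function name, fixed iteration count, and uniform class prior in the example are assumptions for illustration, not the exact CAN procedure from the paper.

```python
import numpy as np

def alternating_normalization(probs, class_prior, n_iters=3):
    """Sinkhorn-style alternating normalization of predicted class
    distributions (rows) toward a target class prior (columns).

    probs:       (N, C) array of softmax outputs, rows sum to 1
    class_prior: (C,) array of target class proportions, sums to 1
    """
    P = probs.copy()
    for _ in range(n_iters):
        # Column step: rescale each class column to match the prior.
        P = P / P.sum(axis=0, keepdims=True) * class_prior
        # Row step: renormalize each example back to a distribution.
        P = P / P.sum(axis=1, keepdims=True)
    return P

# Example: three examples, two classes, uniform prior.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4]])
print(alternating_normalization(probs, np.array([0.5, 0.5])))
```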
no code implementations • Findings (EMNLP) 2021 • Shir Gur, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, Austin Reiter
Recent advances in using retrieval components over external knowledge sources have shown impressive results for a variety of downstream tasks in natural language processing.
1 code implementation • ICCV 2021 • Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie, Serge Belongie, Ser-Nam Lim
Visual engagement in social media platforms comprises interactions with photo posts including comments, shares, and likes.
1 code implementation • CVPR 2021 • Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie, Serge Belongie, Ser-Nam Lim
Based on our findings, we conduct further study to quantify the effect of attending to object and context classes as well as textual information in the form of hashtags when training an intent classifier.
no code implementations • 3 Mar 2020 • Austin Reiter, Menglin Jia, Pu Yang, Ser-Nam Lim
Most deep learning-based methods rely on a late-fusion technique whereby multiple feature types are encoded and concatenated, and a multi-layer perceptron (MLP) then maps the fused embedding to predictions.
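As a point of reference for the late-fusion baseline described above, here is a minimal sketch of the encode-concatenate-MLP pattern. The class name, feature dimensions, hidden size, and class count are placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

class LateFusionMLP(nn.Module):
    """Minimal late-fusion baseline: concatenate per-modality embeddings
    and classify with a small multi-layer perceptron."""

    def __init__(self, modality_dims, hidden_dim=256, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sum(modality_dims), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, modality_features):
        # modality_features: list of (batch, dim_i) tensors, one per feature type
        fused = torch.cat(modality_features, dim=-1)
        return self.mlp(fused)

# Example: image (512-d) and text (300-d) embeddings for a batch of 4.
model = LateFusionMLP([512, 300])
logits = model([torch.randn(4, 512), torch.randn(4, 300)])
```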
1 code implementation • 19 Nov 2019 • Michael Peven, Gregory D. Hager, Austin Reiter
In this work, we introduce a novel representation of motion as a voxelized 3D vector field and demonstrate how it can be used to improve performance of action recognition networks.
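To make the representation concrete, the sketch below voxelizes a set of 3D points and their associated motion vectors into a regular grid by averaging vectors per cell. The grid size, coordinate bounds, and averaging scheme are assumptions for illustration; the paper's actual construction may differ.

```python
import numpy as np

def voxelize_vector_field(points, vectors, grid_size=32, bounds=(-1.0, 1.0)):
    """Average 3D motion vectors into a (grid_size, grid_size, grid_size, 3) grid.

    points:  (N, 3) 3D positions within [bounds[0], bounds[1]]
    vectors: (N, 3) motion vectors associated with each point
    """
    lo, hi = bounds
    grid = np.zeros((grid_size,) * 3 + (3,))
    counts = np.zeros((grid_size,) * 3)
    # Map continuous coordinates to voxel indices.
    idx = np.clip(((points - lo) / (hi - lo) * grid_size).astype(int), 0, grid_size - 1)
    for (i, j, k), v in zip(idx, vectors):
        grid[i, j, k] += v
        counts[i, j, k] += 1
    nonzero = counts > 0
    grid[nonzero] /= counts[nonzero, None]   # mean vector per occupied voxel
    return grid
```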
1 code implementation • 20 Feb 2019 • Xingtong Liu, Ayushi Sinha, Masaru Ishii, Gregory D. Hager, Austin Reiter, Russell H. Taylor, Mathias Unberath
We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading.
no code implementations • 31 Jul 2018 • Cong Gao, Xingtong Liu, Michael Peven, Mathias Unberath, Austin Reiter
Our method results in a mean absolute error of 0.814 N in the ex vivo study, suggesting that it may be a promising alternative to hardware-based surgical force feedback in endoscopic procedures.
1 code implementation • 28 Jun 2018 • Ayushi Sinha, Masaru Ishii, Russell H. Taylor, Gregory D. Hager, Austin Reiter
Several registration algorithms have been developed, many of which achieve high accuracy.
no code implementations • 25 Jun 2018 • Xingtong Liu, Ayushi Sinha, Mathias Unberath, Masaru Ishii, Gregory Hager, Russell H. Taylor, Austin Reiter
We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading.
no code implementations • 8 Jun 2018 • Ayushi Sinha, Xingtong Liu, Austin Reiter, Masaru Ishii, Gregory D. Hager, Russell H. Taylor
Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference image to provide structural context to the clinician.
no code implementations • 22 Nov 2017 • Jingxuan Hou, Tae Soo Kim, Austin Reiter
Based on the findings from the model interpretation analysis, we propose a targeted refinement technique, which can generalize to various other recognition models.
1 code implementation • 14 Apr 2017 • Tae Soo Kim, Austin Reiter
In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition.
Ranked #1 on Multimodal Activity Recognition on EV-Action
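As a rough sketch of the TCN idea in the entry above, the toy model below stacks 1D convolutions over time on per-frame skeleton features and classifies after global average pooling. The input dimensionality (25 joints × 3 coordinates), layer widths, kernel size, and class count are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SimpleTCN(nn.Module):
    """Toy temporal convolutional network: stacked 1D convolutions over time
    applied to per-frame skeleton features, followed by global pooling."""

    def __init__(self, in_dim=75, channels=(64, 128), num_classes=60):
        super().__init__()
        layers, prev = [], in_dim
        for c in channels:
            layers += [nn.Conv1d(prev, c, kernel_size=9, padding=4), nn.ReLU()]
            prev = c
        self.encoder = nn.Sequential(*layers)
        self.head = nn.Linear(prev, num_classes)

    def forward(self, x):
        # x: (batch, time, in_dim); Conv1d expects (batch, channels, time)
        h = self.encoder(x.transpose(1, 2))
        return self.head(h.mean(dim=-1))   # average over time, then classify

# Example: batch of 2 sequences, 100 frames, 25 joints x 3 coords = 75 features.
model = SimpleTCN()
logits = model(torch.randn(2, 100, 75))
```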
2 code implementations • 22 Feb 2017 • Feng Wang, Xiang Xiang, Chang Liu, Trac D. Tran, Austin Reiter, Gregory D. Hager, Harry Quon, Jian Cheng, Alan L. Yuille
In this way, the expression intensity regression task can benefit from the rich feature representations trained on a huge amount of data for face verification.
5 code implementations • CVPR 2017 • Colin Lea, Michael D. Flynn, Rene Vidal, Austin Reiter, Gregory D. Hager
The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond.
no code implementations • 25 Oct 2016 • Seth D. Billings, Ayushi Sinha, Austin Reiter, Simon Leonard, Masaru Ishii, Gregory D. Hager, Russell H. Taylor
Functional endoscopic sinus surgery (FESS) is a surgical procedure used to treat acute cases of sinusitis and other sinus diseases.
1 code implementation • 29 Aug 2016 • Colin Lea, Rene Vidal, Austin Reiter, Gregory D. Hager
The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features that encode spatiotemporal information locally, using Dense Trajectories or a Convolutional Neural Network; and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN).
Ranked #6 on Action Segmentation on JIGSAWS
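For context, the snippet below sketches the second step of the two-step paradigm described in the entry above: precomputed per-frame features fed to a recurrent classifier that outputs frame-wise action logits. The dimensions and the bidirectional LSTM choice are placeholders; this illustrates the baseline paradigm the abstract describes, not the method the paper itself proposes.

```python
import torch
import torch.nn as nn

class FrameFeatureRNN(nn.Module):
    """Second stage of the two-step paradigm: take precomputed per-frame
    features and predict a per-frame action label with an RNN."""

    def __init__(self, feat_dim=128, hidden_dim=64, num_actions=10):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_actions)

    def forward(self, frame_features):
        # frame_features: (batch, time, feat_dim) from any per-frame encoder
        h, _ = self.rnn(frame_features)
        return self.classifier(h)   # (batch, time, num_actions) frame-wise logits

# Example: a batch of 2 videos, 500 frames, 128-dim per-frame features.
logits = FrameFeatureRNN()(torch.randn(2, 500, 128))
```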
no code implementations • 9 Feb 2016 • Colin Lea, Austin Reiter, Rene Vidal, Gregory D. Hager
We propose a model for action segmentation which combines low-level spatiotemporal features with a high-level segmental classifier.
Ranked #7 on Action Segmentation on JIGSAWS
no code implementations • CVPR 2015 • Chi Li, Austin Reiter, Gregory D. Hager
In this paper, we formulate a probabilistic framework for analyzing the performance of pooling.