no code implementations • 24 Feb 2017 • Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric L. Sarin, Irfan Essa
Methods: We conduct the largest study, to the best of our knowledge, of basic surgical skills assessment, on a dataset containing video and accelerometer data for suturing and knot-tying tasks.
18 Jan 2016 • Vinay Bettadapura, Daniel Castro, Irfan Essa
We present an approach for identifying picturesque highlights from large amounts of egocentric video data.
CVPR 2013 • Vinay Bettadapura, Grant Schindler, Thomas Plötz, Irfan Essa
We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori.
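One way such augmentation can work (a minimal sketch, not the paper's exact method: the event labels and feature scheme here are hypothetical) is to extend plain Bag-of-Words histograms over discrete activity events with contiguous n-grams, so that local temporal ordering, which classic BoW discards, survives in the feature vector:

```python
from collections import Counter

def bow_features(events, n=2):
    """Bag-of-Words histogram augmented with contiguous n-grams.

    Plain BoW counts individual event labels only; adding n-grams
    (here, up to pairs) injects local temporal ordering information.
    Illustrative sketch, not the published algorithm.
    """
    counts = Counter(events)  # unigram counts: the classic BoW model
    for k in range(2, n + 1):
        for i in range(len(events) - k + 1):
            counts["_".join(events[i:i + k])] += 1
    return dict(counts)

# Two activities with identical event counts but different order:
a = bow_features(["open", "pour", "stir"])
b = bow_features(["pour", "open", "stir"])
# Unigram counts match; bigram features like "open_pour" tell them apart.
```

The point of the augmentation is visible in the example: `a` and `b` are indistinguishable under plain BoW but differ in their bigram entries.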
7 Oct 2015 • Vinay Bettadapura, Irfan Essa, Caroline Pantofaru
We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization.
7 Oct 2015 • Vinay Bettadapura, Edison Thomaz, Aman Parnami, Gregory Abowd, Irfan Essa
The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat.
6 Oct 2015 • Daniel Castro, Steven Hickson, Vinay Bettadapura, Edison Thomaz, Gregory Abowd, Henrik Christensen, Irfan Essa
We collected a dataset of 40,103 egocentric images over a 6-month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities.
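The prediction step reduces to multi-class classification of per-image features. As a minimal stand-in for the deep network (the features, class count, and training scheme below are hypothetical, not the paper's architecture), a softmax classifier over extracted feature vectors illustrates the mapping from image features to activity classes:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic regression via plain gradient descent.

    A toy stand-in for the CNN classifier: each image is assumed to
    be reduced to a small feature vector, then mapped to one of the
    activity classes. Illustrative sketch only.
    """
    d = len(X[0])
    W = [[0.0] * d for _ in range(n_classes)]  # one weight row per class
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = softmax([sum(w_j * x_j for w_j, x_j in zip(w, x)) for w in W])
            for c in range(n_classes):
                g = p[c] - (1.0 if c == t else 0.0)  # cross-entropy gradient
                for j in range(d):
                    W[c][j] -= lr * g * x[j]
    return W

def predict(W, x):
    scores = [sum(w_j * x_j for w_j, x_j in zip(w, x)) for w in W]
    return scores.index(max(scores))

# Toy 2-D "features" for two hypothetical activity classes:
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [0, 0, 1, 1]
W = train_softmax(X, y, n_classes=2)
```

In practice the feature extractor would be a pretrained convolutional network and the class count 19, per the dataset above; only the final classification layer works as sketched here.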
30 Mar 2012 • Vinay Bettadapura
Notes are also presented on emotions, expressions, and facial features, along with a discussion of the six prototypic expressions and recent studies on expression classifiers.