no code implementations • 27 Jan 2022 • Hong Xuan, Robert Pless
Pair-wise loss is an approach to metric learning that learns a semantic embedding by optimizing a loss function that encourages images from the same semantic class to be mapped closer to each other than to images from different classes.
Ranked #7 on Metric Learning on In-Shop
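Purely as an illustration of the pairwise-loss idea described in the entry above, here is a minimal numpy sketch of a contrastive-style pairwise loss; the margin value, the squared hinge form, and the toy embeddings are assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_pairwise_loss(x1, x2, same_class, margin=0.5):
    """Generic pairwise loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart.
    (Illustrative form only; not the exact loss from the paper.)"""
    d = np.linalg.norm(x1 - x2)          # Euclidean distance between the two embeddings
    if same_class:
        return d ** 2                    # positive pair: minimize distance
    return max(0.0, margin - d) ** 2     # negative pair: enforce the margin

# Toy usage with 2-D embeddings
anchor   = np.array([0.1, 0.9])
positive = np.array([0.2, 0.8])
negative = np.array([0.9, 0.1])
print(contrastive_pairwise_loss(anchor, positive, same_class=True))
print(contrastive_pairwise_loss(anchor, negative, same_class=False))
```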
no code implementations • 9 Aug 2021 • Abby Stylianou, Robert Pless, Nadia Shakoor, Todd Mockler
We introduce a simple approach to understanding the relationship between single nucleotide polymorphisms (SNPs), or groups of related SNPs, and the phenotypes they control.
1 code implementation • 18 May 2021 • Zekai Chen, Fangtian Zhong, Zhumin Chen, Xiao Zhang, Robert Pless, Xiuzhen Cheng
Prior studies of user response prediction have leveraged feature interactions by enhancing feature vectors with products of features to model second-order or higher-order cross features, either explicitly or implicitly.
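As a sketch of what explicit second-order cross features look like, the snippet below computes a factorization-machine-style pairwise interaction term; the feature count, latent dimension, and random weights are illustrative assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, k = 6, 4                      # number of input features, latent dimension (assumed)
x = rng.random(n_features)                # one dense feature vector (toy data)
V = rng.normal(size=(n_features, k))      # per-feature latent factors

# Explicit second-order cross features: sum over all pairs i < j of <V_i, V_j> * x_i * x_j,
# computed with the usual O(n*k) identity instead of an O(n^2) double loop.
second_order = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
print(second_order)
```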
1 code implementation • ECCV 2020 • Hong Xuan, Abby Stylianou, Xiaotong Liu, Robert Pless
We offer a simple fix to the loss function and show that, with this fix, optimizing with hard negative examples becomes feasible.
Ranked #14 on Metric Learning on In-Shop
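The fix itself is described in the paper and its code; the sketch below only shows the standard hard-negative triplet setup that such a fix targets, with an assumed margin and toy data.

```python
import numpy as np

def hard_negative_triplet_loss(anchor, positive, negatives, margin=0.2):
    """Hinge triplet loss using the hardest (closest) negative in a candidate set.
    This is the baseline setup the paper's fix addresses, not the fix itself."""
    d_pos = np.linalg.norm(anchor - positive)
    d_negs = np.linalg.norm(negatives - anchor, axis=1)
    d_hard = d_negs.min()                        # hardest negative = smallest distance to the anchor
    return max(0.0, d_pos - d_hard + margin)

rng = np.random.default_rng(1)
anchor, positive = rng.random(8), rng.random(8)
negatives = rng.random((5, 8))                   # 5 candidate negatives (toy data)
print(hard_negative_triplet_loss(anchor, positive, negatives))
```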
no code implementations • 8 Oct 2019 • Abby Stylianou, Richard Souvenir, Robert Pless
Investigations of sex trafficking sometimes have access to photographs of victims in hotel rooms.
no code implementations • 25 Sep 2019 • Hong Xuan, Robert Pless
The Triplet Loss approach to Distance Metric Learning is defined by the strategy used to select triplets and the loss function through which those triplets are optimized.
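For reference, a common hinge form of the triplet loss (notation assumed here, not taken from the paper): with embedding function f, anchor a, positive p, negative n, and margin m,

```latex
\mathcal{L}(a, p, n) = \max\!\left(0,\; \lVert f(a) - f(p) \rVert^2 - \lVert f(a) - f(n) \rVert^2 + m\right)
```

Selection strategies differ only in which triplets (a, p, n) this loss is evaluated on.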
no code implementations • 16 Sep 2019 • Menghua Zhai, Tawfiq Salem, Connor Greenwell, Scott Workman, Robert Pless, Nathan Jacobs
We propose to implicitly learn to extract geo-temporal image features, which are mid-level features related to when and where an image was captured, by explicitly optimizing for a set of location and time estimation tasks.
1 code implementation • 16 Sep 2019 • Xiaotong Liu, Hong Xuan, Zeyu Zhang, Abby Stylianou, Robert Pless
Deep metric learning is often used to learn an embedding function that captures the semantic differences within a dataset.
3 code implementations • 8 Apr 2019 • Hong Xuan, Abby Stylianou, Robert Pless
Deep metric learning seeks to define an embedding where semantically similar images are embedded to nearby locations, and semantically dissimilar images are embedded to distant locations.
Ranked #6 on Image Retrieval on In-Shop
1 code implementation • 26 Jan 2019 • Abby Stylianou, Hong Xuan, Maya Shende, Jonathan Brandt, Richard Souvenir, Robert Pless
Recognizing a hotel from an image of a hotel room is important for human trafficking investigations.
1 code implementation • 2 Jan 2019 • Abby Stylianou, Richard Souvenir, Robert Pless
For convolutional neural network models that optimize an image embedding, we propose a method to highlight the regions of images that contribute most to pairwise similarity.
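A minimal sketch of one way such a heat map can be obtained when embeddings come from average-pooling a convolutional feature map: the similarity of the pooled embeddings decomposes into per-location contributions. This is an assumed decomposition for illustration; the published method's details may differ.

```python
import numpy as np

def similarity_contribution_map(feat_a, feat_b):
    """Decompose the dot-product similarity of two average-pooled conv embeddings
    into a per-location heat map for image A. feat_* have shape (H, W, C)."""
    h, w, c = feat_a.shape
    emb_b = feat_b.reshape(-1, c).mean(axis=0)      # pooled embedding of image B
    # The pooled dot product equals the mean of per-location dot products, so each
    # location's dot product with emb_b is its contribution to the overall similarity.
    contrib = feat_a.reshape(-1, c) @ emb_b
    return contrib.reshape(h, w) / (h * w)

rng = np.random.default_rng(2)
fa, fb = rng.random((7, 7, 32)), rng.random((7, 7, 32))   # toy feature maps
heat = similarity_contribution_map(fa, fb)
print(heat.shape)                                          # (7, 7) heat map over image A
```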
1 code implementation • ECCV 2018 • Hong Xuan, Richard Souvenir, Robert Pless
Learning embedding functions, which map semantically related inputs to nearby locations in a feature space, supports a variety of classification and information retrieval tasks.
2 code implementations • CVPR 2017 • Paul Upchurch, Jacob Gardner, Geoff Pleiss, Robert Pless, Noah Snavely, Kavita Bala, Kilian Weinberger
We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation.
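As a sketch of the interpolation idea behind DFI, the snippet below forms an attribute direction as the difference of mean deep features between two image sets and adds it to a source image's features. The normalization, the scaling factor alpha, and the toy data are assumptions, and the full method also maps the edited features back to an image.

```python
import numpy as np

def dfi_attribute_vector(feats_with, feats_without, alpha=1.0):
    """Attribute direction as the difference of mean deep features between images
    with and without the attribute. Illustrative sketch only; the published method
    additionally inverts the edited features back into a high-resolution image."""
    w = feats_with.mean(axis=0) - feats_without.mean(axis=0)
    return alpha * w / np.linalg.norm(w)          # scaling/normalization are assumptions

rng = np.random.default_rng(3)
feats_with_attr    = rng.random((50, 4096))       # deep features of images with the attribute (toy)
feats_without_attr = rng.random((50, 4096))       # deep features of images without it (toy)
source_feat = rng.random(4096)
edited_feat = source_feat + dfi_attribute_vector(feats_with_attr, feats_without_attr, alpha=4.0)
print(edited_feat.shape)
```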
no code implementations • CVPR 2016 • Ian Schillebeeckx, Robert Pless
We consider the problem of camera pose estimation for a scenario where the camera may have continuous and unknown changes in its focal length.
no code implementations • ICCV 2015 • Calvin Murdock, Nathan Jacobs, Robert Pless
Satellite imagery of cloud cover is extremely important for understanding and predicting weather.
no code implementations • CVPR 2013 • Austin Abrams, Kylia Miskell, Robert Pless
For outdoor scenes with solar illumination, we term this the episolar constraint. It provides a convex optimization for recovering the sparse depth of a scene from shadow correspondences, a method to reduce the search space when finding those correspondences, and a method to geometrically calibrate a camera using shadow constraints.
no code implementations • 15 Apr 2013 • Austin Abrams, Chris Hawley, Kylia Miskell, Adina Stoica, Nathan Jacobs, Robert Pless
We show that these approaches work only with very careful parameter tuning, and do not work well for long-term time-lapse sequences captured over the span of many months.