no code implementations • 15 Oct 2023 • Xiaotong Chen, Zheming Zhou, Zhuo Deng, Omid Ghasemalizadeh, Min Sun, Cheng-Hao Kuo, Arnie Sen
Reconstructing transparent objects using affordable RGB-D cameras is a persistent challenge in robotic perception due to inconsistent appearances across views in the RGB domain and inaccurate depth readings in each single view.
no code implementations • 23 Jul 2023 • Huijie Zhang, Anthony Opipari, Xiaotong Chen, Jiyue Zhu, Zeren Yu, Odest Chadwicke Jenkins
TransNet is evaluated in terms of pose estimation accuracy on a large-scale transparent object dataset and compared to a state-of-the-art category-level pose estimation approach.
no code implementations • 3 Feb 2023 • Brian Hsu, Xiaotong Chen, Ying Han, Hongseok Namkoong, Kinjal Basu
We demonstrate our framework with a case study on predictive parity.
no code implementations • 29 Oct 2022 • Seyed Mehdi Iranmanesh, Xiaotong Chen, Kuo-Chin Lien
In this approach, we detect an object bounding box as a pair of keypoints, the top-left corner and the center, using two decoders.
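As an illustration of this keypoint-pair formulation (a minimal sketch, not the paper's code; the function name and the simple reflection rule are assumptions), the box can be recovered from a matched top-left corner and center by reflecting the corner about the center:

```python
def box_from_keypoints(top_left, center):
    """Reconstruct (x1, y1, x2, y2) from a top-left corner and the box center.

    The bottom-right corner is the reflection of the top-left corner about
    the center: x2 = 2*cx - x1, y2 = 2*cy - y1.
    """
    x1, y1 = top_left
    cx, cy = center
    x2 = 2.0 * cx - x1
    y2 = 2.0 * cy - y1
    return x1, y1, x2, y2


# Example: a top-left corner at (10, 20) paired with a center at (60, 70)
# yields a 100x100 box spanning (10, 20) to (110, 120).
print(box_from_keypoints((10, 20), (60, 70)))
```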
no code implementations • 22 Aug 2022 • Huijie Zhang, Anthony Opipari, Xiaotong Chen, Jiyue Zhu, Zeren Yu, Odest Chadwicke Jenkins
TransNet is evaluated in terms of pose estimation accuracy on a recent, large-scale transparent object dataset and compared to a state-of-the-art category-level pose estimation approach.
2 code implementations • 29 Jun 2022 • G Roshan Lal, Xiaotong Chen, Varun Mithal
Tree Ensemble (TE) models, such as Gradient Boosted Trees, often achieve optimal performance on tabular datasets, yet their lack of transparency poses challenges for comprehending their decision logic.
no code implementations • 17 Jun 2022 • Kaizhi Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Xin Eric Wang
We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation.
1 code implementation • 8 Mar 2022 • Xiaotong Chen, Huijie Zhang, Zeren Yu, Anthony Opipari, Odest Chadwicke Jenkins
Transparent objects are ubiquitous in household settings and pose distinct challenges for visual sensing and perception systems.
1 code implementation • 1 Mar 2022 • Xiaotong Chen, Huijie Zhang, Zeren Yu, Stanley Lewis, Odest Chadwicke Jenkins
We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, which we use to fine-tune a state-of-the-art pose estimation network and markedly improve downstream robotic grasp success rates.
no code implementations • 1 Jan 2022 • Xiaotong Chen, Seyed Mehdi Iranmanesh, Kuo-Chin Lien
In this paper, we present PatchTrack, a Transformer-based joint-detection-and-tracking system that predicts tracks using patches of the current frame of interest.
1 code implementation • 16 Oct 2020 • Xiaotong Chen, Kaizhi Zheng, Zhen Zeng, Cameron Kisailus, Shreshtha Basu, James Cooney, Jana Pavlasek, Odest Chadwicke Jenkins
In this work, we combine the notions of affordance and category-level pose, and introduce the Affordance Coordinate Frame (ACF).
no code implementations • 2 Oct 2019 • Zheming Zhou, Xiaotong Chen, Odest Chadwicke Jenkins
On the ProLIT dataset, we demonstrate that LIT can outperform both state-of-the-art end-to-end pose estimation methods and a generative pose estimator on transparent objects.
1 code implementation • 17 Jun 2019 • Larry Zhang, Xiaotong Chen, Abbad Vakil, Ali Byott, Reza Hosseini Ghomi
At the same time, voice has shown potential in precision medicine as a biomarker for screening illnesses.
no code implementations • 20 Mar 2019 • Xiaotong Chen, Rui Chen, Zhiqiang Sui, Zhefan Ye, Yanqi Liu, R. Iris Bahar, Odest Chadwicke Jenkins
In this work, we propose Generative Robust Inference and Perception (GRIP), a two-stage object detection and pose estimation system that combines the relative strengths of discriminative CNNs and generative inference methods to achieve robust estimation.
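To make the two-stage idea concrete, the sketch below shows one generic way such a pipeline can be wired up: a discriminative detector supplies a pose prior, and a generative second stage samples and scores hypotheses against the observation. This is an illustrative assumption, not the GRIP implementation; `second_stage_inference`, `score_fn`, and the perturbation scheme are hypothetical.

```python
import numpy as np

def second_stage_inference(prior_pose, score_fn, n_samples=256, noise=0.05, iters=10):
    """Sample pose hypotheses around the CNN prior and keep the best-scoring one.

    prior_pose: initial pose vector from the discriminative first stage.
    score_fn:   likelihood of a pose given the observation (higher is better),
                e.g. agreement between rendered and observed depth.
    """
    best_pose, best_score = prior_pose, score_fn(prior_pose)
    for _ in range(iters):
        # Perturb the current best estimate to generate candidate poses.
        candidates = best_pose + noise * np.random.randn(n_samples, prior_pose.shape[0])
        scores = np.array([score_fn(p) for p in candidates])
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best_pose, best_score = candidates[i], scores[i]
    return best_pose, best_score
```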