Search Results for author: Xingyu Liu

Found 19 papers, 9 papers with code

Mesure de similarité textuelle pour l’évaluation automatique de copies d’étudiants (Textual similarity measurement for automatic evaluation of students’ answers)

no code implementations JEP/TALN/RECITAL 2021 Xiaoou Wang, Xingyu Liu, Yimei Yue

This article describes the participation of the Nantalco team in Task 2 of the Défi Fouille de Textes 2021 (DEFT): automatic evaluation of student answers against an existing reference.

Sentence Embeddings

Tripartite: Tackle Noisy Labels by a More Precise Partition

no code implementations 19 Feb 2022 Xuefeng Liang, Longshan Yao, Xingyu Liu, Ying Zhou

Instead, we propose a Tripartite solution to partition training data more precisely into three subsets: hard, noisy, and clean.
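The three-way split can be sketched as follows. This is an illustrative sketch, not the paper's exact criterion: it assumes (hypothetically) that two networks' predictions are compared with each other and with the given, possibly noisy, label.

```python
# Illustrative sketch of a tripartite partition of training data.
# Assumption (not stated in the abstract): two networks' predictions are
# compared with each other and with the provided label.
def tripartite_partition(preds_a, preds_b, labels):
    """Split sample indices into hard / noisy / clean subsets."""
    hard, noisy, clean = [], [], []
    for i, (pa, pb, y) in enumerate(zip(preds_a, preds_b, labels)):
        if pa != pb:        # the two models disagree: ambiguous, treat as hard
            hard.append(i)
        elif pa == y:       # models agree and match the label: clean
            clean.append(i)
        else:               # models agree but contradict the label: noisy
            noisy.append(i)
    return hard, noisy, clean
```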

Self-Supervised Learning

REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer

no code implementations 10 Feb 2022 Xingyu Liu, Deepak Pathak, Kris M. Kitani

A popular paradigm in robotic learning is to train a policy from scratch for every new robot.

Imitation Learning

V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects

no code implementations 7 Nov 2021 Xingyu Liu, Kris M. Kitani

Manipulating articulated objects requires multiple robot arms in general.

Sequential Voting with Relational Box Fields for Active Object Detection

no code implementations 21 Oct 2021 Qichen Fu, Xingyu Liu, Kris M. Kitani

While our voting function is able to improve the bounding box of the active object, one round of voting is typically not enough to accurately localize the active object.

Active Object Detection • Decision Making • +2

Ego4D: Around the World in 3,000 Hours of Egocentric Video

no code implementations 13 Oct 2021 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification

KDFNet: Learning Keypoint Distance Field for 6D Object Pose Estimation

no code implementations 21 Sep 2021 Xingyu Liu, Shun Iwase, Kris M. Kitani

To address this problem, we propose a novel continuous representation called Keypoint Distance Field (KDF) for projected 2D keypoint locations.
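The core of a distance-field representation can be sketched in a few lines: each grid cell stores its Euclidean distance to a projected 2D keypoint. This is a minimal sketch of the general idea, not the paper's network output; the grid size and keypoint location are made-up values.

```python
import numpy as np

# Sketch of a keypoint distance field: each pixel of an H x W grid stores
# its Euclidean distance to a projected 2D keypoint (kx, ky).
def keypoint_distance_field(height, width, keypoint_xy):
    ys, xs = np.mgrid[0:height, 0:width]
    kx, ky = keypoint_xy
    return np.sqrt((xs - kx) ** 2 + (ys - ky) ** 2)

kdf = keypoint_distance_field(4, 4, (1.0, 2.0))
# The field is zero at the keypoint pixel and grows radially around it.
```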

6D Pose Estimation using RGB

RePOSE: Fast 6D Object Pose Refinement via Deep Texture Rendering

1 code implementation ICCV 2021 Shun Iwase, Xingyu Liu, Rawal Khirodkar, Rio Yokota, Kris M. Kitani

Furthermore, we utilize differentiable Levenberg-Marquardt (LM) optimization to refine a pose fast and accurately by minimizing the feature-metric error between the input and rendered image representations without the need of zooming in.
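A generic Levenberg-Marquardt iteration, of the kind such a refiner differentiates through, can be sketched as below. This is a sketch on a toy linear residual, not RePOSE's feature-metric error; the matrix `A`, target `b`, and damping value are arbitrary illustrative choices.

```python
import numpy as np

# One damped Gauss-Newton (Levenberg-Marquardt) step for minimizing ||r(p)||^2.
def lm_step(residual_fn, jacobian_fn, p, lam=1e-3):
    r = residual_fn(p)
    J = jacobian_fn(p)
    H = J.T @ J + lam * np.eye(len(p))   # damped approximate Hessian
    delta = np.linalg.solve(H, -J.T @ r)
    return p + delta

# Toy problem: drive r(p) = A p - b to zero.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([4.0, 9.0])
p = np.zeros(2)
for _ in range(20):
    p = lm_step(lambda q: A @ q - b, lambda q: A, p)
# p converges toward the least-squares solution [2, 3]
```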

6D Pose Estimation • 6D Pose Estimation using RGB

Time-Efficient Mars Exploration of Simultaneous Coverage and Charging with Multiple Drones

no code implementations 16 Nov 2020 Yuan Chang, Chao Yan, Xingyu Liu, Xiangke Wang, Han Zhou, Xiaojia Xiang, Dengqing Tang

This paper presents a time-efficient scheme for Mars exploration by the cooperation of multiple drones and a rover.

KeyPose: Multi-View 3D Labeling and Keypoint Estimation for Transparent Objects

2 code implementations CVPR 2020 Xingyu Liu, Rico Jonschkowski, Anelia Angelova, Kurt Konolige

We address two problems: first, we establish an easy method for capturing and labeling 3D keypoints on desktop objects with an RGB camera; and second, we develop a deep neural network, called $KeyPose$, that learns to accurately predict object poses using 3D keypoints, from stereo input, and works even for transparent objects.
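The geometric step behind recovering a 3D keypoint from stereo can be sketched with standard rectified-stereo triangulation: depth follows from disparity as Z = f·B/d. This is a generic sketch, not KeyPose's learned predictor; the focal length, baseline, and pixel coordinates below are made-up values.

```python
# Triangulate a 3D keypoint from a rectified stereo pair.
# (u_left, v) and u_right are matched pixel coordinates; (cx, cy) is the
# principal point; focal is in pixels, baseline in meters.
def keypoint_3d_from_stereo(u_left, v, u_right, focal, baseline, cx, cy):
    disparity = u_left - u_right
    z = focal * baseline / disparity      # depth from disparity
    x = (u_left - cx) * z / focal
    y = (v - cy) * z / focal
    return (x, y, z)

pt = keypoint_3d_from_stereo(420.0, 260.0, 400.0, 600.0, 0.06, 320.0, 240.0)
```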

3D Pose Estimation • Transparent objects

Learning Video Representations from Correspondence Proposals

2 code implementations CVPR 2019 Xingyu Liu, Joon-Young Lee, Hailin Jin

In particular, it can effectively learn representations for videos by mixing appearance and long-range motion with an RGB-only input.

Action Recognition

FlowNet3D: Learning Scene Flow in 3D Point Clouds

10 code implementations CVPR 2019 Xingyu Liu, Charles R. Qi, Leonidas J. Guibas

In this work, we propose a novel deep neural network named $FlowNet3D$ that learns scene flow from point clouds in an end-to-end fashion.

Motion Segmentation

Efficient Sparse-Winograd Convolutional Neural Networks

1 code implementation ICLR 2018 Xingyu Liu, Jeff Pool, Song Han, William J. Dally

First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.
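The placement of ReLU can be sketched with a 1-D Winograd F(2,3) convolution using the standard transform matrices: instead of rectifying the spatial input, the ReLU is applied to the transformed activations before the elementwise multiply. This is a minimal 1-D sketch, not the paper's full 2-D pipeline; the input and filter values are arbitrary.

```python
import numpy as np

# Standard 1-D Winograd F(2,3) transform matrices.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g, relu_in_winograd=False):
    """Two outputs of correlating a length-4 input d with a length-3 filter g."""
    v = BT @ d                    # transformed activations
    if relu_in_winograd:
        v = np.maximum(v, 0)      # rectify in the Winograd domain -> sparser v
    return AT @ ((G @ g) * v)

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 2.0, 3.0])
# Without the Winograd-domain ReLU this matches direct correlation: [14, 20]
```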

Network Pruning

Exploring the Regularity of Sparse Structure in Convolutional Neural Networks

no code implementations 24 May 2017 Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, William J. Dally

Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.

EIE: Efficient Inference Engine on Compressed Deep Neural Network

4 code implementations 4 Feb 2016 Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally

EIE has a processing power of 102 GOPS/s working directly on a compressed network, corresponding to 3 TOPS/s on an uncompressed network, and processes FC layers of AlexNet at 1.88x10^4 frames/sec with a power dissipation of only 600 mW.
