no code implementations • 8 May 2022 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a novel convolutional operator for the task of point cloud completion.
no code implementations • 8 Apr 2022 • Chenglong Wang, Yun Liu, Fen Wang, Chengxiu Zhang, Yida Wang, Mei Yuan, Guang Yang
However, the detection and accurate diagnosis of pulmonary nodules depend heavily on the experience of radiologists and impose a heavy workload on them.
no code implementations • 30 Mar 2022 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
To this aim, we introduce a second model that assembles our layers within a transformer architecture.
no code implementations • 15 Mar 2022 • Evin Pınar Örnek, Shristi Mudgal, Johanna Wald, Yida Wang, Nassir Navab, Federico Tombari
There have been numerous recently proposed methods for monocular depth prediction (MDP) coupled with the equally rapid evolution of benchmarking tools.
no code implementations • 28 Jan 2022 • Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism.
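To make the idea concrete, here is a minimal, hypothetical sketch of how such a planner could be invoked from a JAX training step; the alpa.parallelize decorator name and the toy step are assumptions based on the description above, not the paper's exact code.

```python
# Minimal sketch, assuming an alpa.parallelize-style decorator entry point and
# a toy JAX training step; the planner decides how data, operator, and pipeline
# parallelism are combined across the device cluster.
import jax
import jax.numpy as jnp
import alpa  # assumed package name

@alpa.parallelize  # Alpa generates a unified parallel execution plan for this step
def train_step(params, batch):
    def loss_fn(p):
        preds = batch["x"] @ p["w"] + p["b"]
        return jnp.mean((preds - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Plain SGD update; sharding and pipelining are chosen by the generated plan.
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```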
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the scale of available dialogue data and by model size compared with their English counterparts.
no code implementations • 6 Jun 2021 • Chen Henry Wu, Yinhe Zheng, Yida Wang, Zhenyu Yang, Minlie Huang
In this paper, we propose to combine pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling.
1 code implementation • ACL 2021 • Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang
Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.
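For illustration, the PyTorch snippet below shows plain (non-adaptive) label smoothing, which softens the one-hot target distribution the sentence refers to; this is a generic technique sketched for context, not the paper's specific method.

```python
import torch
import torch.nn.functional as F

def smoothed_nll(logits, target, eps=0.1):
    # Soften the one-hot target: the gold token keeps 1 - eps of the mass,
    # and the remaining eps is spread uniformly over the vocabulary.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # cross-entropy against a uniform target
    return ((1 - eps) * nll + eps * uniform).mean()

logits = torch.randn(2, 10)    # (batch, vocab)
target = torch.tensor([3, 7])  # gold token ids
loss = smoothed_nll(logits, target)
```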
no code implementations • 3 May 2021 • Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared Roesch, Elliott Delaye, Vin Sharma, Yida Wang
Deep neural networks (DNNs) have been applied ubiquitously in many applications, and accelerators have emerged as an enabler for fast and efficient inference in these applications.
no code implementations • 21 Jan 2021 • Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki
However, it is hard to leverage mixed precision without hardware support because of the overhead of data casting.
1 code implementation • 20 Nov 2020 • Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer
Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
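As a toy illustration of that round-trip cost, the NumPy snippet below quantizes a tensor to int8 and immediately dequantizes it back to float32, which is the conversion an integer-only pipeline avoids; the scale/zero-point scheme is a generic uniform quantizer, not the paper's exact formulation.

```python
import numpy as np

def quantize(x, scale, zero_point):
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 8).astype(np.float32)
scale, zero_point = (x.max() - x.min()) / 255.0, 0
x_int8 = quantize(x, scale, zero_point)
# The hidden cost: every such cast back to float32 between layers adds overhead.
x_fp32 = dequantize(x_int8, scale, zero_point)
```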
no code implementations • 26 Aug 2020 • Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang
FeatGraph provides a flexible programming interface to express diverse GNN models by composing coarse-grained sparse templates with fine-grained user-defined functions (UDFs) on each vertex/edge.
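A rough sketch of that interface concept in plain NumPy (a hypothetical API, not FeatGraph's actual one): a coarse-grained sparse aggregation template into which a fine-grained per-edge UDF is plugged.

```python
import numpy as np

def aggregate(edges, num_nodes, features, edge_udf):
    """Coarse-grained sparse aggregation template; edge_udf is the
    fine-grained user-defined function applied on every edge."""
    out = np.zeros((num_nodes, features.shape[1]))
    for src, dst in edges:
        out[dst] += edge_udf(features[src], features[dst])  # per-edge UDF
    return out

edges = [(0, 1), (1, 2), (2, 0)]
feats = np.random.randn(3, 4)
# Example UDF: scale the source feature by the similarity of the endpoints.
h = aggregate(edges, 3, feats, lambda u, v: u * (u @ v))
```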
1 code implementation • ECCV 2020 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
In this paper, we propose a method for 3D object completion and classification based on point clouds.
3 code implementations • 10 Aug 2020 • Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang
The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling.
1 code implementation • 5 Aug 2020 • Yanyan Li, Nikolas Brasch, Yida Wang, Nassir Navab, Federico Tombari
In this paper, a low-drift monocular SLAM method is proposed targeting indoor scenarios, where monocular SLAM often fails due to the lack of textured surfaces.
no code implementations • 18 Jun 2020 • Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, Yida Wang
A deep learning compiler such as Apache TVM can enable the efficient execution of models from various frameworks on various targets.
1 code implementation • 17 Jun 2020 • Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin
As such, we advocate that the real challenge of distributed training is for the network community to develop high-performance network transport to fully utilize the network capacity and achieve linear scale-out.
no code implementations • 11 Jun 2020 • Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica
Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches.
no code implementations • 4 Jun 2020 • Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang
Modern deep neural networks increasingly make use of features such as dynamic control flow, data structures and dynamic tensor shapes.
no code implementations • 27 Feb 2020 • Hongbin Zheng, Sejong Oh, Huiqing Wang, Preston Briggs, Jiading Gai, Animesh Jain, Yizhi Liu, Rich Heaton, Randy Huang, Yida Wang
Deep learning (DL) workloads are moving towards accelerators for faster processing and lower cost.
no code implementations • ICCV 2019 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a novel model for 3D semantic completion from a single depth image, based on a single encoder and three separate generators used to reconstruct different geometric and semantic representations of the original and completed scene, all sharing the same latent space.
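Purely as an architectural illustration (the layer sizes and output representations below are assumptions, not the authors' model), here is a PyTorch sketch of one encoder feeding three generators from a shared latent space.

```python
import torch
import torch.nn as nn

class SharedLatentCompletion(nn.Module):
    def __init__(self, latent_dim=256, res=32, num_classes=12):
        super().__init__()
        # Single encoder: depth image -> shared latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        voxels = res ** 3
        # Three separate generators decoding the same latent code
        self.gen_observed = nn.Linear(latent_dim, voxels)                 # observed geometry
        self.gen_completed = nn.Linear(latent_dim, voxels)                # completed geometry
        self.gen_semantics = nn.Linear(latent_dim, num_classes * voxels)  # semantic labels

    def forward(self, depth):  # depth: (B, 1, H, W)
        z = self.encoder(depth)
        return self.gen_observed(z), self.gen_completed(z), self.gen_semantics(z)
```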
no code implementations • 25 Oct 2018 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a method to reconstruct, complete and semantically label a 3D scene from a single input depth image.
no code implementations • 24 May 2017 • Yida Wang, Weihong Deng
In this paper, our generative model, trained with synthetic images rendered from 3D models, reduces the workload of data collection and the limitations of acquisition conditions.
no code implementations • 16 Aug 2016 • Michael J. Anderson, Mihai Capotă, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman
The scale of functional magnetic resonance imaging (fMRI) data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted.