2 code implementations • 17 Nov 2023 • Chenyu Jiang, Zhen Jia, Shuai Zheng, Yida Wang, Chuan Wu
This paper proposes a dynamic micro-batching approach to tackle sequence length variation and enable efficient multi-task model training.
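To make the idea concrete, here is a minimal sketch (not the paper's implementation) of packing variable-length sequences into micro-batches under a hypothetical `token_budget`, so that padding each micro-batch to its longest sequence wastes less compute:

```python
# Illustrative sketch only: group variable-length sequences into
# micro-batches under a token budget so padding waste stays low.
# `token_budget` is a hypothetical knob, not the paper's parameter.
from typing import List

def dynamic_micro_batches(seq_lens: List[int], token_budget: int) -> List[List[int]]:
    """Greedily pack length-sorted sequence indices into micro-batches.

    A micro-batch is padded to its longest sequence, so its cost is
    max_len * batch_size; we keep that product within `token_budget`.
    """
    order = sorted(range(len(seq_lens)), key=lambda i: seq_lens[i])
    batches, current = [], []
    for idx in order:
        max_len = max(seq_lens[i] for i in current + [idx])
        if current and max_len * (len(current) + 1) > token_budget:
            batches.append(current)
            current = []
        current.append(idx)
    if current:
        batches.append(current)
    return batches

print(dynamic_micro_batches([12, 480, 33, 470, 31, 500], token_budget=1024))
```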
no code implementations • 7 Oct 2023 • Alexandre Eichenberger, Qi Lin, Saif Masood, Hong Min, Alexander Sim, Jie Wang, Yida Wang, Kesheng Wu, Binhang Yuan, Lixi Zhou, Jia Zou
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains and has recently attracted growing interest.
no code implementations • 28 Aug 2023 • Milan Ganai, Haichen Li, Theodore Enns, Yida Wang, Randy Huang
We also propose enhancements to the deep RL algorithms to further improve search performance, and we open a research direction for domain-specific guidance in RL.
no code implementations • 11 Jul 2023 • Chenglong Wang, Dexuan Li, Sucheng Wang, Chengxiu Zhang, Yida Wang, Yun Liu, Guang Yang
The $\mathrm{SAM^{assist}}$ demonstrates the generalization ability of SAM to the downstream medical segmentation task using the prompt-learning approach.
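For context, this is how SAM is typically prompted with a point using Meta's `segment_anything` package; the checkpoint path, input image, and click location below are placeholders, and the paper's prompt-learning fine-tuning is not reproduced here:

```python
# A minimal sketch of point-prompting SAM. The checkpoint path and
# prompt coordinates are placeholders.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical path
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a medical slice
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # one foreground click as the prompt
    point_labels=np.array([1]),
)
print(masks.shape, scores)
```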
1 code implementation • 8 Mar 2023 • Cody Hao Yu, Haozheng Fan, Guangtai Huang, Zhen Jia, Yizhi Liu, Jie Wang, Zach Zheng, Yuan Zhou, Haichen Shen, Junru Shao, Mu Li, Yida Wang
In this paper, we present RAF, a deep learning compiler for training.
no code implementations • 16 Feb 2023 • Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang
Specifically, the schedule works on a PyTorch model and uses a set of schedule primitives to convert the model for common model training optimizations such as high-performance kernels, effective 3D parallelism, and efficient activation checkpointing.
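The schedule primitives below are not the paper's API, only a toy sketch of the decoupling idea: an optimization such as activation checkpointing is attached to a named submodule of an unmodified PyTorch model by a separate schedule object:

```python
# Not the paper's API: a toy "schedule" that applies named optimization
# passes to submodules of a PyTorch model, illustrating how training
# optimizations can be decoupled from the model definition.
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Schedule:
    def __init__(self, model: nn.Module):
        self.model = model

    def checkpoint(self, name: str):
        """Wrap a submodule with activation checkpointing (sketch)."""
        sub = self.model.get_submodule(name)
        fwd = sub.forward
        sub.forward = lambda *xs: checkpoint(fwd, *xs, use_reentrant=False)
        return self

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
sch = Schedule(model).checkpoint("0")  # the schedule, not the model code, picks the optimization
```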
no code implementations • 31 Jan 2023 • Artem Savkin, Yida Wang, Sebastian Wirkert, Nassir Navab, Federico Tombari
This in turn enables our method to employ a one-stage upsampling paradigm without the need for coarse and fine reconstruction.
2 code implementations • 18 Oct 2022 • Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, Gennady Pekhimenko
With the proposed paradigm, we implement a deep learning compiler Hidet.
no code implementations • 8 May 2022 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a novel convolutional operator for the task of point cloud completion.
no code implementations • 8 Apr 2022 • Chenglong Wang, Yun Liu, Fen Wang, Chengxiu Zhang, Yida Wang, Mei Yuan, Guang Yang
However, detection and accurate diagnosis of pulmonary nodules depend heavily on the experience of radiologists and can impose a heavy workload on them.
no code implementations • CVPR 2022 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
To this aim, we introduce a second model that assembles our layers within a transformer architecture.
no code implementations • 15 Mar 2022 • Evin Pınar Örnek, Shristi Mudgal, Johanna Wald, Yida Wang, Nassir Navab, Federico Tombari
There have been numerous recently proposed methods for monocular depth prediction (MDP) coupled with the equally rapid evolution of benchmarking tools.
1 code implementation • 28 Jan 2022 • Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica
Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations.
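As a rough illustration of what such a space looks like, the sketch below enumerates (data, tensor, pipeline) parallelism degrees for a fixed device count and picks the cheapest under a made-up cost model; real systems use profiled or analytic costs:

```python
# Illustrative only: the space of parallelization plans that such
# systems search automatically. The cost numbers are invented.
from itertools import product

NUM_DEVICES = 8

def plan_cost(dp, tp, pp):
    # Hypothetical cost model: compute time shrinks with all degrees,
    # while tensor and pipeline parallelism add communication overhead.
    return 1.0 / (dp * tp * pp) + 0.05 * tp + 0.08 * pp

plans = [(dp, tp, pp) for dp, tp, pp in product([1, 2, 4, 8], repeat=3)
         if dp * tp * pp == NUM_DEVICES]
best = min(plans, key=lambda p: plan_cost(*p))
print("candidate plans:", plans)
print("best (dp, tp, pp):", best)
```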
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by available dialogue data and model size compared with their English counterparts.
1 code implementation • 23 Jun 2021 • Farid Yagubbayli, Yida Wang, Alessio Tonioni, Federico Tombari
Most modern deep learning-based multi-view 3D reconstruction techniques use RNNs or fusion modules to combine information from multiple images after independently encoding them.
no code implementations • 6 Jun 2021 • Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang
This paper proposes combining pre-trained language models with the modular dialogue paradigm for open-domain dialogue modeling.
1 code implementation • ACL 2021 • Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang
Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.
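Standard label smoothing, shown below as a hedged illustration, is the usual baseline fix for this over-confidence; the paper goes further by adapting the target distribution rather than fixing a single epsilon:

```python
# Standard label smoothing, shown only to illustrate the over-confidence
# problem: moving probability mass off the gold token flattens the
# one-hot target that training would otherwise chase.
import torch
import torch.nn.functional as F

def smoothed_targets(labels: torch.Tensor, vocab: int, eps: float = 0.1):
    one_hot = F.one_hot(labels, vocab).float()
    return one_hot * (1 - eps) + eps / vocab  # spread eps uniformly

labels = torch.tensor([2, 0])
print(smoothed_targets(labels, vocab=5))
```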
no code implementations • 3 May 2021 • Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared Roesch, Elliott Delaye, Vin Sharma, Yida Wang
Deep neural networks (DNNs) have been ubiquitously applied in many applications, and accelerators have emerged as an enabler for the fast and efficient inference these applications require.
no code implementations • 21 Jan 2021 • Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki
However, it is hard to leverage mixed precision without hardware support because of the overhead of data casting.
1 code implementation • 20 Nov 2020 • Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer
Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
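One common way to avoid that conversion cost is integer-only requantization with a dyadic (fixed-point) multiplier; the sketch below, with invented values, shows how an int32 accumulator can be rescaled to int8 without touching floating point at runtime:

```python
# A sketch of integer-only requantization with a dyadic multiplier,
# the kind of trick that avoids converting back to floating point
# between quantized layers. All values here are illustrative.
def requantize(acc_int32: int, scale: float, shift: int = 24) -> int:
    multiplier = round(scale * (1 << shift))  # precomputed offline, once
    out = (acc_int32 * multiplier) >> shift   # pure integer at runtime
    return max(-128, min(127, out))           # clamp to int8 range

# int32 accumulator from an int8 matmul, rescaled for the next int8 layer:
print(requantize(acc_int32=51000, scale=0.0021))  # ~51000 * 0.0021 = 107
```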
no code implementations • 26 Aug 2020 • Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang
FeatGraph provides a flexible programming interface to express diverse GNN models by composing coarse-grained sparse templates with fine-grained user-defined functions (UDFs) on each vertex/edge.
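A toy version of that composition (in plain NumPy, not FeatGraph's TVM-based kernels) might look like the following, with the sparse traversal as the coarse-grained template and the per-edge/per-vertex functions as UDFs:

```python
# Toy illustration of composing a sparse aggregation template with
# fine-grained user-defined functions on edges and vertices.
import numpy as np

def spmm_template(edges, feats, edge_udf, vertex_udf):
    """edges: list of (src, dst); feats: [num_nodes, dim] features."""
    out = np.zeros_like(feats)
    for src, dst in edges:                # sparse traversal (template)
        out[dst] += edge_udf(feats[src])  # per-edge message (UDF)
    return vertex_udf(out)                # per-vertex update (UDF)

feats = np.ones((3, 4))
edges = [(0, 2), (1, 2)]
agg = spmm_template(edges, feats,
                    edge_udf=lambda h: 0.5 * h,             # e.g. weighted message
                    vertex_udf=lambda h: np.maximum(h, 0))  # e.g. ReLU
print(agg)
```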
1 code implementation • ECCV 2020 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
In this paper, we propose a method for 3D object completion and classification based on point clouds.
2 code implementations • 10 Aug 2020 • Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang
The cleaned dataset and the pre-trained models will facilitate research on short-text conversation modeling.
1 code implementation • 5 Aug 2020 • Yanyan Li, Nikolas Brasch, Yida Wang, Nassir Navab, Federico Tombari
In this paper, a low-drift monocular SLAM method is proposed, targeting indoor scenarios where monocular SLAM often fails due to the lack of textured surfaces.
no code implementations • 18 Jun 2020 • Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, Yida Wang
A deep learning compiler such as Apache TVM can enable the efficient execution of models from various frameworks on various targets.
1 code implementation • 17 Jun 2020 • Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin
As such, we advocate that the real challenge of distributed training is for the network community to develop high-performance network transport to fully utilize the network capacity and achieve linear scale-out.
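Some back-of-the-envelope arithmetic supports this: with ring all-reduce, the bytes each worker sends per step are nearly constant in the number of workers, so the bottleneck is how efficiently the transport drives the link (the model size below is illustrative):

```python
# Why transport efficiency, not algorithmic communication volume, limits
# scale-out: ring all-reduce sends ~2(n-1)/n * M bytes per worker,
# which is nearly constant in the number of workers n.
M = 400e6  # gradient size in bytes (e.g. a ~100M-parameter fp32 model)
for n in (2, 8, 64):
    per_worker = 2 * (n - 1) / n * M
    print(f"{n:>3} workers: {per_worker / 1e6:.0f} MB sent per worker per step")
```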
no code implementations • 11 Jun 2020 • Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica
Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches.
no code implementations • 4 Jun 2020 • Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang
Modern deep neural networks increasingly make use of features such as dynamic control flow, data structures and dynamic tensor shapes.
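A minimal example of the dynamism such a compiler must handle (a generic PyTorch module, not one from the paper): both the loop count and the output shape are only known at runtime:

```python
# Data-dependent control flow and dynamic output shapes, the features
# named in the abstract, in a deliberately tiny module.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Linear(16, 16)

    def forward(self, x, n_steps: torch.Tensor):
        for _ in range(int(n_steps)):   # data-dependent control flow
            x = torch.relu(self.step(x))
        return x[x.sum(dim=1) > 0]      # data-dependent output shape

net = DynamicNet()
out = net(torch.randn(4, 16), torch.tensor(3))
print(out.shape)  # the batch dimension is only known at runtime
```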
1 code implementation • 27 Feb 2020 • Hongbin Zheng, Sejong Oh, Huiqing Wang, Preston Briggs, Jiading Gai, Animesh Jain, Yizhi Liu, Rich Heaton, Randy Huang, Yida Wang
Deep learning (DL) workloads are moving towards accelerators for faster processing and lower cost.
no code implementations • ICCV 2019 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a novel model for 3D semantic completion from a single depth image, based on a single encoder and three separate generators used to reconstruct different geometric and semantic representations of the original and completed scene, all sharing the same latent space.
Ranked #7 on 3D Semantic Scene Completion on NYUv2 (using extra training data)
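A skeletal PyTorch sketch of that layout, with all dimensions invented, would have one encoder feeding a shared latent code into three separate generator heads:

```python
# Skeleton only: one encoder, three generators sharing a latent space,
# producing different geometric/semantic representations. All channel
# counts and resolutions are placeholders.
import torch
import torch.nn as nn

class MultiHeadCompletion(nn.Module):
    def __init__(self, latent=256, out_ch=(1, 1, 12)):  # e.g. two geometric maps, one semantic
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 32, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                     nn.Linear(32, latent))
        self.generators = nn.ModuleList(
            nn.Sequential(nn.Linear(latent, c * 8 ** 3),
                          nn.Unflatten(1, (c, 8, 8, 8))) for c in out_ch)

    def forward(self, depth_volume):
        z = self.encoder(depth_volume)          # single shared latent space
        return [g(z) for g in self.generators]  # three output representations

outs = MultiHeadCompletion()(torch.randn(2, 1, 32, 32, 32))
print([o.shape for o in outs])
```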
no code implementations • 25 Oct 2018 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
We propose a method to reconstruct, complete and semantically label a 3D scene from a single input depth image.
no code implementations • 24 May 2017 • Yida Wang, Weihong Deng
In this paper, our generative model, trained with synthetic images rendered from 3D models, reduces the workload of data collection and the limitations of capture conditions.
no code implementations • 16 Aug 2016 • Michael J. Anderson, Mihai Capotă, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman
The scale of functional magnetic resonance imaging (fMRI) data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted.