Search Results for author: Yida Wang

Found 33 papers, 13 papers with code

DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines

2 code implementations • 17 Nov 2023 • Chenyu Jiang, Zhen Jia, Shuai Zheng, Yida Wang, Chuan Wu

This paper proposes a dynamic micro-batching approach to tackle sequence length variation and enable efficient multi-task model training.

Language Modelling • Large Language Model +3
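The dynamic micro-batching idea can be illustrated with a length-aware packing sketch. This is a hypothetical simplification, not DynaPipe's actual algorithm: sequences are grouped so that each micro-batch, padded to its longest member, stays under a token budget, which bounds padding waste from length variation.

```python
def pack_microbatches(seq_lens, max_tokens=300):
    """Greedily pack variable-length sequences into micro-batches.

    Each micro-batch is padded to its longest sequence, so its padded
    cost is len(batch) * max(batch); we keep that cost <= max_tokens.
    """
    batches, current = [], []
    for n in sorted(seq_lens, reverse=True):
        trial = current + [n]
        if current and len(trial) * max(trial) > max_tokens:
            batches.append(current)
            trial = [n]
        current = trial
    if current:
        batches.append(current)
    return batches
```

Sorting by length first keeps similarly sized sequences together, so long sequences do not force padding onto short ones.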

Serving Deep Learning Model in Relational Databases

no code implementations • 7 Oct 2023 • Alexandre Eichenberger, Qi Lin, Saif Masood, Hong Min, Alexander Sim, Jie Wang, Yida Wang, Kesheng Wu, Binhang Yuan, Lixi Zhou, Jia Zou

Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains, sparking growing interest recently.

Target-independent XLA optimization using Reinforcement Learning

no code implementations • 28 Aug 2023 • Milan Ganai, Haichen Li, Theodore Enns, Yida Wang, Randy Huang

We also propose enhancements to the deep RL algorithms to further improve search performance, and open up a research direction on domain-specific guidance for RL.

Compiler Optimization • reinforcement-learning +1

$\mathrm{SAM^{Med}}$: A medical image annotation framework based on large vision model

no code implementations • 11 Jul 2023 • Chenglong Wang, Dexuan Li, Sucheng Wang, Chengxiu Zhang, Yida Wang, Yun Liu, Guang Yang

The $\mathrm{SAM^{assist}}$ demonstrates the generalization ability of SAM to the downstream medical segmentation task using the prompt-learning approach.

Image Segmentation • Liver Segmentation +2

Decoupled Model Schedule for Deep Learning Training

no code implementations • 16 Feb 2023 • Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang

Specifically, the schedule works on a PyTorch model and uses a set of schedule primitives to convert the model for common model training optimizations such as high-performance kernels, effective 3D parallelism, and efficient activation checkpointing.


Lidar Upsampling with Sliced Wasserstein Distance

no code implementations • 31 Jan 2023 • Artem Savkin, Yida Wang, Sebastian Wirkert, Nassir Navab, Federico Tombari

This in turn enables our method to employ a one-stage upsampling paradigm without the need for coarse and fine reconstruction.

Autonomous Driving • Domain Adaptation
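The sliced Wasserstein distance used here has a simple NumPy sketch (illustrative only, not the paper's training code): project both point sets onto random unit directions, where 1D optimal transport reduces to matching sorted samples.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=100, seed=0):
    """Approximate the sliced Wasserstein-2 distance between two
    equally sized point sets x, y of shape (n_points, dim).

    In 1D, the optimal transport plan matches sorted samples, so each
    random slice reduces to an L2 difference of sorted projections.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    xp = np.sort(x @ dirs.T, axis=0)  # (n_points, n_proj)
    yp = np.sort(y @ dirs.T, axis=0)
    return np.sqrt(np.mean((xp - yp) ** 2))
```

Because each slice is a cheap sort, the estimate scales well with point-cloud size, which is one reason sliced variants are popular for lidar-scale data.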

Towards Reliable and Explainable AI Model for Solid Pulmonary Nodule Diagnosis

no code implementations • 8 Apr 2022 • Chenglong Wang, Yun Liu, Fen Wang, Chengxiu Zhang, Yida Wang, Mei Yuan, Guang Yang

However, the detection and accurate diagnosis of pulmonary nodules depend heavily on the experience of radiologists and can impose a heavy workload on them.

From 2D to 3D: Re-thinking Benchmarking of Monocular Depth Prediction

no code implementations • 15 Mar 2022 • Evin Pınar Örnek, Shristi Mudgal, Johanna Wald, Yida Wang, Nassir Navab, Federico Tombari

There have been numerous recently proposed methods for monocular depth prediction (MDP) coupled with the equally rapid evolution of benchmarking tools.

Benchmarking • Depth Estimation +1

Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning

1 code implementation • 28 Jan 2022 • Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica

Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations.

EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training

2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang

Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.

LegoFormer: Transformers for Block-by-Block Multi-view 3D Reconstruction

1 code implementation • 23 Jun 2021 • Farid Yagubbayli, Yida Wang, Alessio Tonioni, Federico Tombari

Most modern deep learning-based multi-view 3D reconstruction techniques use RNNs or fusion modules to combine information from multiple images after independently encoding them.

3D Reconstruction • Multi-View 3D Reconstruction +1

Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues

no code implementations • 6 Jun 2021 • Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang

This paper proposes combining pre-trained language models with the modular dialogue paradigm for open-domain dialogue modeling.

Informativeness • Language Modelling +1

Diversifying Dialog Generation via Adaptive Label Smoothing

1 code implementation • ACL 2021 • Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang

Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.

Dialogue Generation
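The over-confidence issue the abstract describes comes from training against one-hot targets. Plain (static) label smoothing, which the paper's adaptive variant generalizes, can be sketched as follows; this illustrates the standard technique, not the paper's adaptive method:

```python
import numpy as np

def smooth_targets(labels, vocab_size, eps=0.1):
    """Replace one-hot targets with a smoothed distribution:
    probability 1 - eps on the gold token, and eps spread uniformly
    over the remaining vocab_size - 1 tokens."""
    t = np.full((len(labels), vocab_size), eps / (vocab_size - 1))
    t[np.arange(len(labels)), labels] = 1.0 - eps
    return t
```

Training against such soft targets penalizes probability mass concentrated entirely on the gold token, which is the mechanism behind the diversity gains discussed above.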

Bring Your Own Codegen to Deep Learning Compiler

no code implementations • 3 May 2021 • Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared Roesch, Elliott Delaye, Vin Sharma, Yida Wang

Deep neural networks (DNNs) have been ubiquitously applied in many applications, and accelerators have emerged as an enabler for fast and efficient inference in these applications.

Code Generation

UNIT: Unifying Tensorized Instruction Compilation

no code implementations • 21 Jan 2021 • Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki

However, it is hard to leverage mixed precision without hardware support because of the overhead of data casting.

HAWQV3: Dyadic Neural Network Quantization

1 code implementation • 20 Nov 2020 • Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer

Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.

Model Compression • Quantization
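The "dyadic" idea referenced in the title can be illustrated in a few lines (a simplified sketch, not HAWQV3's implementation): a real-valued rescaling factor is approximated by a dyadic number b / 2^c, so quantized values can be rescaled with an integer multiply and a right shift, avoiding round trips through floating point.

```python
def dyadic_approx(scale, c=16):
    """Approximate a real rescaling factor by a dyadic number b / 2**c."""
    b = round(scale * (1 << c))
    return b, c

def dyadic_rescale(x_int, b, c):
    """Integer-only rescaling: multiply by b, then right-shift by c."""
    return (x_int * b) >> c
```

For example, rescaling by 0.0123 becomes a multiply by b followed by a 16-bit shift, with an approximation error bounded by 2^-17 on the scale.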

FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems

no code implementations • 26 Aug 2020 • Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang

FeatGraph provides a flexible programming interface to express diverse GNN models by composing coarse-grained sparse templates with fine-grained user-defined functions (UDFs) on each vertex/edge.
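The template-plus-UDF idea can be mimicked with a toy dense sketch (purely illustrative; FeatGraph's actual templates are optimized sparse kernels): a generic edge-traversal template is parameterized by a user-defined message function applied at each edge.

```python
import numpy as np

def aggregate(edges, feats, udf, num_nodes):
    """Toy 'sparse template': for each edge (src, dst), apply a
    user-defined function to the source feature and sum-reduce the
    resulting messages at the destination vertex."""
    out = np.zeros((num_nodes, feats.shape[1]))
    for src, dst in edges:
        out[dst] += udf(feats[src])
    return out
```

With `udf = lambda v: v` this computes a neighbor-sum (the core of a GCN layer); swapping in a different UDF changes the model without touching the traversal template, which is the separation of concerns the interface is built around.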

A Large-Scale Chinese Short-Text Conversation Dataset

2 code implementations • 10 Aug 2020 • Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang

The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling.

Dialogue Generation • Short-Text Conversation

Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments

1 code implementation • 5 Aug 2020 • Yanyan Li, Nikolas Brasch, Yida Wang, Nassir Navab, Federico Tombari

In this paper, a low-drift monocular SLAM method is proposed for indoor scenarios, where monocular SLAM often fails due to the lack of textured surfaces.


Efficient Execution of Quantized Deep Learning Models: A Compiler Approach

no code implementations • 18 Jun 2020 • Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, Yida Wang

A deep learning compiler such as Apache TVM can enable the efficient execution of models from various frameworks on various targets.


Is Network the Bottleneck of Distributed Training?

1 code implementation • 17 Jun 2020 • Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin

As such, we advocate that the real challenge of distributed training is for the network community to develop high-performance network transport to fully utilize the network capacity and achieve linear scale-out.

Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference

no code implementations • 4 Jun 2020 • Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang

Modern deep neural networks increasingly make use of features such as dynamic control flow, data structures and dynamic tensor shapes.

Optimizing Memory-Access Patterns for Deep Learning Accelerators

1 code implementation • 27 Feb 2020 • Hongbin Zheng, Sejong Oh, Huiqing Wang, Preston Briggs, Jiading Gai, Animesh Jain, Yizhi Liu, Rich Heaton, Randy Huang, Yida Wang

Deep learning (DL) workloads are moving towards accelerators for faster processing and lower cost.

ForkNet: Multi-branch Volumetric Semantic Completion from a Single Depth Image

no code implementations • ICCV 2019 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari

We propose a novel model for 3D semantic completion from a single depth image, based on a single encoder and three separate generators used to reconstruct different geometric and semantic representations of the original and completed scene, all sharing the same latent space.

Ranked #7 on 3D Semantic Scene Completion on NYUv2 (using extra training data)

3D Semantic Scene Completion

Adversarial Semantic Scene Completion from a Single Depth Image

no code implementations • 25 Oct 2018 • Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari

We propose a method to reconstruct, complete and semantically label a 3D scene from a single input depth image.


Generative Model with Coordinate Metric Learning for Object Recognition Based on 3D Models

no code implementations • 24 May 2017 • Yida Wang, Weihong Deng

In this paper, our generative model, trained with synthetic images rendered from 3D models, reduces the workload of data collection and the limitations of capture conditions.

Bayesian Inference • Metric Learning +2

Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets

no code implementations • 16 Aug 2016 • Michael J. Anderson, Mihai Capotă, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman

The scale of functional magnetic resonance image data is rapidly increasing as large multi-subject datasets are becoming widely available and high-resolution scanners are adopted.
