Search Results for author: Tianqi Chen

Found 47 papers, 18 papers with code

GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism

no code implementations24 Jun 2024 Byungsoo Jeon, Mengdi Wu, Shiyi Cao, Sunghyun Kim, Sunghyun Park, Neeraj Aggarwal, Colin Unger, Daiyaan Arfeen, Peiyuan Liao, Xupeng Miao, Mohammad Alizadeh, Gregory R. Ganger, Tianqi Chen, Zhihao Jia

GraphPipe partitions a DNN into a graph of stages, optimizes micro-batch schedules for these stages, and parallelizes DNN training using the discovered GPP strategies.

Emerging Platforms Meet Emerging LLMs: A Year-Long Journey of Top-Down Development

no code implementations14 Apr 2024 Siyuan Feng, Jiawei Liu, Ruihang Lai, Charlie F. Ruan, Yong Yu, Lingming Zhang, Tianqi Chen

While a traditional bottom-up development pipeline fails to close the gap in a timely manner, we introduce TapML, a top-down approach and tooling designed to streamline the deployment of ML systems on diverse platforms, optimized for developer productivity.

Cascaded Multi-path Shortcut Diffusion Model for Medical Image Translation

no code implementations6 Apr 2024 Yinchi Zhou, Tianqi Chen, Jun Hou, Huidong Xie, Nicha C. Dvornek, S. Kevin Zhou, David L. Wilson, James S. Duncan, Chi Liu, Bo Zhou

To reduce the required number of iterations and ensure robust performance, our method first obtains a conditional GAN-generated prior image that will be used for the efficient reverse translation with a DM in the subsequent step.

Image-to-Image Translation, Translation

A Dense Reward View on Aligning Text-to-Image Diffusion with Preference

1 code implementation13 Feb 2024 Shentao Yang, Tianqi Chen, Mingyuan Zhou

In this paper, we take on a finer dense reward perspective and derive a tractable alignment objective that emphasizes the initial steps of the T2I reverse chain.

Towards Efficient Generative Large Language Model Serving: A Survey from Algorithms to Systems

no code implementations23 Dec 2023 Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Hongyi Jin, Tianqi Chen, Zhihao Jia

In the rapidly evolving landscape of artificial intelligence (AI), generative large language models (LLMs) stand at the forefront, revolutionizing how we interact with our data.

Language Modelling, Large Language Model

Improving In-Context Learning in Diffusion Models with Visual Context-Modulated Prompts

no code implementations3 Dec 2023 Tianqi Chen, Yongfei Liu, Zhendong Wang, Jianbo Yuan, Quanzeng You, Hongxia Yang, Mingyuan Zhou

In light of the remarkable success of in-context learning in large language models, its potential extension to the vision domain, particularly with visual foundation models like Stable Diffusion, has sparked considerable interest.

In-Context Learning

Atom: Low-bit Quantization for Efficient and Accurate LLM Serving

1 code implementation29 Oct 2023 Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, Baris Kasikci

To maximize LLMs' serving throughput, we introduce Atom, a low-bit quantization method that achieves high throughput improvements with negligible accuracy loss.

Quantization, Sentiment Analysis
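To make the low-bit idea concrete, here is a minimal sketch of symmetric per-tensor quantization and dequantization, not Atom's actual mixed-precision scheme; the weight values and bit width are illustrative, and the sketch assumes a non-all-zero tensor:

```python
def quantize(xs, bits=8):
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8-bit signed
    scale = max(abs(v) for v in xs) / qmax       # per-tensor scale factor
    q = [max(-qmax, min(qmax, round(v / scale))) for v in xs]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.51, -1.23, 0.07, 0.99, -0.42]
q, scale = quantize(weights, bits=8)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
assert max_err <= scale / 2 + 1e-12              # error within half a step
```

Serving systems exploit exactly this bounded-error trade: integer storage and integer arithmetic raise throughput while the per-tensor scale keeps the reconstruction error below half a quantization step.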

Beta Diffusion

1 code implementation NeurIPS 2023 Mingyuan Zhou, Tianqi Chen, Zhendong Wang, Huangjie Zheng

We introduce beta diffusion, a novel generative modeling method that integrates demasking and denoising to generate data within bounded ranges.

Denoising

Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling

1 code implementation28 May 2023 Tianqi Chen, Mingyuan Zhou

However, this paper finds that it has limited ability to model some other types of data, such as count and non-negative continuous data, which are often highly sparse, skewed, heavy-tailed, and/or overdispersed.

ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time

no code implementations17 May 2023 Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry

Dynamic control flow is an important technique often used to design expressive and efficient deep learning computations for applications such as text parsing, machine translation, and early exit from deep models.

Code Generation, Machine Translation +1

SONAR: Joint Architecture and System Optimization Search

no code implementations25 Aug 2022 Elias Jääsaari, Michelle Ma, Ameet Talwalkar, Tianqi Chen

There is a growing need to deploy machine learning for different tasks on a wide array of new hardware platforms.

SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning

2 code implementations11 Jul 2022 Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze

We propose SparseTIR, a sparse tensor compilation abstraction that offers composable formats and composable transformations for deep learning workloads.

TensorIR: An Abstraction for Automatic Tensorized Program Optimization

2 code implementations9 Jul 2022 Siyuan Feng, Bohan Hou, Hongyi Jin, Wuwei Lin, Junru Shao, Ruihang Lai, Zihao Ye, Lianmin Zheng, Cody Hao Yu, Yong Yu, Tianqi Chen

Finally, we build an end-to-end framework on top of our abstraction to automatically optimize deep learning models for given tensor computation primitives.

BIG-bench Machine Learning

Tensor Program Optimization with Probabilistic Programs

no code implementations26 May 2022 Junru Shao, Xiyou Zhou, Siyuan Feng, Bohan Hou, Ruihang Lai, Hongyi Jin, Wuwei Lin, Masahiro Masuda, Cody Hao Yu, Tianqi Chen

Experimental results show that MetaSchedule can cover the search space used in the state-of-the-art tensor program optimization frameworks in a modular way.

Probabilistic Programming

Stack operation of tensor networks

1 code implementation28 Mar 2022 Tianning Zhang, Tianqi Chen, Erping Li, Bo Yang, L. K. Ang

The tensor network, as a factorization of tensors, aims to perform the operations that are common for ordinary tensors, such as addition, contraction, and stacking.

Tensor Networks

Collage: Seamless Integration of Deep Learning Backends with Automatic Placement

1 code implementation1 Nov 2021 Byungsoo Jeon, Sunghyun Park, Peiyuan Liao, Sheng Xu, Tianqi Chen, Zhihao Jia

Given the fast-evolving nature of the DL ecosystem, this manual approach often slows down continuous innovation across layers: hardware vendors cannot quickly deploy their cutting-edge libraries, DL framework developers must repeatedly adjust hand-coded rules to accommodate new library versions, and machine learning practitioners must wait for new technologies to be integrated, often encountering unsatisfactory performance.

Deep Adversarially-Enhanced k-Nearest Neighbors

no code implementations15 Aug 2021 Ren Wang, Tianqi Chen, Alfred Hero

Recent works have theoretically and empirically shown that deep neural networks (DNNs) have an inherent vulnerability to small perturbations.

ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense

1 code implementation27 Jun 2021 Ren Wang, Tianqi Chen, Philip Yao, Sijia Liu, Indika Rajapakse, Alfred Hero

K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability.

Immuno-mimetic Deep Neural Networks (Immuno-Net)

no code implementations27 Jun 2021 Ren Wang, Tianqi Chen, Stephen Lindsly, Cooper Stansbury, Indika Rajapakse, Alfred Hero

This immuno-mimetic model leads to a new computational biology framework for robustification of deep neural networks against adversarial attacks.

Image Classification

RAILS: A Robust Adversarial Immune-inspired Learning System

1 code implementation27 Jun 2021 Ren Wang, Tianqi Chen, Stephen Lindsly, Cooper Stansbury, Alnawaz Rehemtulla, Indika Rajapakse, Alfred Hero

Initializing a population of exemplars that is balanced across classes, RAILS starts from a uniform label distribution that encourages diversity and uses an evolutionary optimization process to adaptively adjust the predictive label distribution in a manner that emulates the way the natural immune system recognizes novel pathogens.

Adversarial Defense, Adversarial Robustness +3

Automated Backend-Aware Post-Training Quantization

no code implementations27 Mar 2021 Ziheng Jiang, Animesh Jain, Andrew Liu, Josh Fromm, Chengqian Ma, Tianqi Chen, Luis Ceze

Quantization is a key technique to reduce the resource requirement and improve the performance of neural network deployment.

Diversity, Quantization

RAILS: A Robust Adversarial Immune-inspired Learning System

no code implementations18 Dec 2020 Ren Wang, Tianqi Chen, Stephen Lindsly, Alnawaz Rehemtulla, Alfred Hero, Indika Rajapakse

RAILS incorporates an Adaptive Immune System Emulation (AISE), which emulates in silico the biological mechanisms that are used to defend the host against attacks by pathogens.

Adversarial Defense, Diversity +1

Cortex: A Compiler for Recursive Deep Learning Models

no code implementations2 Nov 2020 Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry

Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations such as kernel fusion and (ii) low-level kernel optimizations such as those found in vendor libraries.

Dynamic Tensor Rematerialization

1 code implementation ICLR 2021 Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock

Checkpointing enables the training of deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand.

ADARES: Adaptive Resource Management for Virtual Machines

no code implementations5 Dec 2018 Ignacio Cano, Lequn Chen, Pedro Fonseca, Tianqi Chen, Chern Cheah, Karan Gupta, Ramesh Chandra, Arvind Krishnamurthy

Our large-scale analysis confirms that VMs are often misconfigured, either overprovisioned or underprovisioned, and that this problem is pervasive across a wide range of private clusters.

Management, Multi-Armed Bandits +1

Automating Generation of Low Precision Deep Learning Operators

no code implementations25 Oct 2018 Meghan Cowan, Thierry Moreau, Tianqi Chen, Luis Ceze

To date, none of the popular deep learning frameworks directly support low-precision operators, partly due to a lack of optimized low-precision libraries.

A Hardware-Software Blueprint for Flexible Deep Learning Specialization

no code implementations11 Jul 2018 Thierry Moreau, Tianqi Chen, Luis Vega, Jared Roesch, Eddie Yan, Lianmin Zheng, Josh Fromm, Ziheng Jiang, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy

Specialized Deep Learning (DL) acceleration stacks, designed for a specific set of frameworks, model architectures, operators, and data types, offer the allure of high performance while sacrificing flexibility.

Code Generation, Style Transfer

Learning to Optimize Tensor Programs

no code implementations NeurIPS 2018 Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy

Efficient implementations of tensor operators, such as matrix multiplication and high dimensional convolution, are key enablers of effective deep learning systems.

TVM: An Automated End-to-End Optimizing Compiler for Deep Learning

1 code implementation12 Feb 2018 Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy

Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs.

Diversity

Training Deep Nets with Sublinear Memory Cost

6 code implementations21 Apr 2016 Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin

In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation.
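The recompute-on-demand idea behind the sublinear memory cost can be sketched in a few lines. This is a toy illustration of square-root checkpointing, not the paper's implementation: a chain of identical layers with a known derivative, where only every ~√n-th activation is stored in the forward pass and the rest are recomputed segment by segment during the backward pass.

```python
import math

def layer(x):            # toy layer f(x) = 0.5 * x
    return 0.5 * x

def grad_layer(x):       # df/dx of the toy layer
    return 0.5

def grad_with_checkpoints(x0, n):
    k = max(1, int(math.sqrt(n)))        # checkpoint interval ~ sqrt(n)
    checkpoints = {0: x0}                # keep O(n/k) activations, not O(n)
    x = x0
    for i in range(n):
        x = layer(x)
        if (i + 1) % k == 0:
            checkpoints[i + 1] = x
    grad = 1.0
    for s in reversed(range(0, n, k)):   # walk segments backwards
        e = min(s + k, n)
        acts = [checkpoints[s]]          # recompute activations inside segment
        for _ in range(e - s - 1):
            acts.append(layer(acts[-1]))
        for x in reversed(acts):         # chain rule through the segment
            grad *= grad_layer(x)
    return grad

grad = grad_with_checkpoints(3.0, 16)
assert abs(grad - 0.5 ** 16) < 1e-15     # matches the analytic gradient
```

Peak memory here is the checkpoint dictionary (n/k entries) plus one segment's activations (k entries), which is minimized at k ≈ √n, trading one extra forward recomputation for the memory saving.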

XGBoost: A Scalable Tree Boosting System

25 code implementations9 Mar 2016 Tianqi Chen, Carlos Guestrin

In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.

BIG-bench Machine Learning, Clustering +6
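The core of the system's training objective can be illustrated with a toy Newton-style boosting round. This sketch is not XGBoost's actual code: the stump learner, data, and constants are illustrative inventions; what it shares with the paper is the split gain G²/(H + λ) and the leaf weight -G/(H + λ) computed from per-example gradients and Hessians.

```python
def fit_stump(x, g, h, lam=1.0):
    """Choose the 1-D split maximizing the XGBoost-style gain; leaf values
    are the regularized Newton step -G / (H + lam)."""
    best = None
    for t in sorted(set(x))[1:]:                 # candidate thresholds
        left = [i for i, v in enumerate(x) if v < t]
        right = [i for i, v in enumerate(x) if v >= t]
        GL, HL = sum(g[i] for i in left), sum(h[i] for i in left)
        GR, HR = sum(g[i] for i in right), sum(h[i] for i in right)
        score = GL * GL / (HL + lam) + GR * GR / (HR + lam)
        if best is None or score > best[0]:
            best = (score, t, -GL / (HL + lam), -GR / (HR + lam))
    _, t, wl, wr = best
    return lambda v: wl if v < t else wr

def boost(x, y, rounds=60, lr=0.3):
    pred = [0.0] * len(y)
    for _ in range(rounds):
        g = [p - yi for p, yi in zip(pred, y)]   # gradient of 1/2 (p - y)^2
        h = [1.0] * len(y)                        # its Hessian
        tree = fit_stump(x, g, h)
        pred = [p + lr * tree(v) for p, v in zip(pred, x)]
    return pred

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
pred = boost(x, y)
assert max(abs(p - yi) for p, yi in zip(pred, y)) < 0.05
```

Each round fits a one-split tree to the current gradients and shrinks its contribution by the learning rate, so the residual decays geometrically; the real system adds deeper trees, sparsity-aware split finding, and out-of-core computation on top of this objective.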

MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

2 code implementations3 Dec 2015 Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, Zheng Zhang

This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion.

BIG-bench Machine Learning, Clustering +2

Net2Net: Accelerating Learning via Knowledge Transfer

3 code implementations18 Nov 2015 Tianqi Chen, Ian Goodfellow, Jonathon Shlens

Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network.

Transfer Learning
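The function-preserving transfer can be sketched with the Net2WiderNet operation on a two-layer toy network. This is a toy rendition, not the paper's code: widen a hidden layer by duplicating one unit's incoming weights and splitting its outgoing weights in half, so the wider network computes exactly the same function as the original and training can continue from there.

```python
import numpy as np

def widen(W1, W2, unit):
    # W1: (in, hidden), W2: (hidden, out). Duplicate column `unit` of W1,
    # then split the corresponding row of W2 between the two copies.
    W1w = np.concatenate([W1, W1[:, unit:unit + 1]], axis=1)
    W2w = np.concatenate([W2, W2[unit:unit + 1, :]], axis=0)
    W2w[unit, :] /= 2.0
    W2w[-1, :] /= 2.0
    return W1w, W2w

def forward(x, W1, W2):
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    return h @ W2

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))
x = rng.standard_normal((5, 4))

W1w, W2w = widen(W1, W2, unit=1)
assert np.allclose(forward(x, W1, W2), forward(x, W1w, W2w))
```

The duplicated unit produces an identical activation, and halving both outgoing rows makes the two copies' contributions sum to the original one, which is what makes the transfer lossless.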

A Complete Recipe for Stochastic Gradient MCMC

no code implementations NeurIPS 2015 Yi-An Ma, Tianqi Chen, Emily B. Fox

That is, any continuous Markov process that provides samples from the target distribution can be written in our framework.

Physical Intuition

Empirical Evaluation of Rectified Activations in Convolutional Network

2 code implementations5 May 2015 Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li

In this paper we investigate the performance of different types of rectified activation functions in convolutional neural network: standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear units (RReLU).

General Classification, Image Classification
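The four activations compared in the paper differ only in how they treat negative inputs, which a few lines make explicit. This is a hedged sketch: the RReLU divisor range (3, 8) follows the commonly cited setup for the paper's experiments, but treat the constants as illustrative rather than canonical.

```python
import random

def relu(x):
    return max(0.0, x)

def leaky_relu(x, slope=0.01):      # small fixed negative slope
    return x if x > 0 else slope * x

def prelu(x, a):                    # 'a' would be learned per channel
    return x if x > 0 else a * x

def rrelu(x, lo=3.0, hi=8.0, training=True, rng=random):
    # Negative inputs are divided by a, with a ~ Uniform(lo, hi) during
    # training and the deterministic mean a = (lo + hi) / 2 at test time.
    if x >= 0:
        return x
    a = rng.uniform(lo, hi) if training else (lo + hi) / 2
    return x / a

assert relu(-2.0) == 0.0 and relu(3.0) == 3.0
assert leaky_relu(-2.0) == -0.02
assert prelu(-2.0, 0.25) == -0.5
assert rrelu(-5.5, training=False) == -1.0
```

The randomized negative slope acts as a regularizer during training, while the test-time average keeps inference deterministic.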

A Parallel and Efficient Algorithm for Learning to Match

no code implementations22 Oct 2014 Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu

In this paper, we tackle this challenge with a novel parallel and efficient algorithm for feature-based matrix factorization.

Collaborative Filtering, Link Prediction
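For context, the base operation being parallelized can be sketched as plain SGD matrix factorization; this toy version has neither the paper's feature-based extensions nor its parallelism, and the ratings, rank, and hyperparameters are invented for illustration.

```python
import math, random

def train_mf(ratings, n_users, n_items, k=4, lr=0.05, reg=0.02,
             epochs=300, seed=0):
    rng = random.Random(seed)
    # user and item factor matrices, small random init
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(p * q for p, q in zip(P[u], Q[i]))
            err = r - pred
            for f in range(k):       # SGD step with L2 regularization
                p, q = P[u][f], Q[i][f]
                P[u][f] += lr * (err * q - reg * p)
                Q[i][f] += lr * (err * p - reg * q)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0), (2, 1, 5.0)]
P, Q = train_mf(ratings, n_users=3, n_items=2)
rmse = math.sqrt(sum(
    (r - sum(p * q for p, q in zip(P[u], Q[i]))) ** 2
    for u, i, r in ratings) / len(ratings))
assert rmse < 0.5
```

The per-rating updates touch only one user row and one item row, which is the structure that parallel schemes exploit by processing conflict-free blocks of ratings concurrently.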

Stochastic Gradient Hamiltonian Monte Carlo

5 code implementations17 Feb 2014 Tianqi Chen, Emily B. Fox, Carlos Guestrin

Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals.

Efficient Exploration, Friction
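The paper's update with friction can be sketched in a few lines; this toy version targets a 1-D standard Gaussian, and the step size, friction coefficient, and gradient-noise level are illustrative choices rather than tuned values.

```python
import math, random

def grad_U(theta):
    return theta                      # gradient of U(theta) = theta^2 / 2

def sghmc(steps=20000, eta=0.01, alpha=0.1, grad_noise_sd=0.5, seed=0):
    rng = random.Random(seed)
    theta, r = 0.0, 0.0
    samples = []
    for _ in range(steps):
        # simulated minibatch noise on the gradient
        noisy_grad = grad_U(theta) + rng.gauss(0.0, grad_noise_sd)
        # friction term -alpha * r counteracts the gradient noise, and
        # fresh noise with variance 2 * alpha * eta is injected
        r = ((1 - alpha) * r - eta * noisy_grad
             + rng.gauss(0.0, math.sqrt(2 * alpha * eta)))
        theta += r
        samples.append(theta)
    return samples

samples = sghmc()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean) < 0.5                # target is N(0, 1)
assert 0.3 < var < 3.0
```

Without the friction term, the noisy gradients would steadily pump energy into the momentum and the chain would drift away from the target; the friction plus matched injected noise restores the correct stationary distribution up to discretization error.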

Semi-Supervised Technical Term Tagging With Minimal User Feedback

no code implementations LREC 2012 Behrang QasemiZadeh, Paul Buitelaar, Tianqi Chen, Georgeta Bordea

In this paper, we address the problem of extracting technical terms automatically from an unannotated corpus.

Dependency Parsing, Language Modelling +1

Feature-Based Matrix Factorization

no code implementations11 Sep 2011 Tianqi Chen, Zhao Zheng, Qiuxia Lu, Weinan Zhang, Yong Yu

Recommender systems have become increasingly popular and are widely used in many applications.

Recommendation Systems
