Search Results for author: Zichang Liu

Found 10 papers, 3 papers with code

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model

no code implementations • 21 Feb 2024 • Zichang Liu, Qingyun Liu, Yuening Li, Liang Liu, Anshumali Shrivastava, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao

Further, to accommodate the dissimilarity among the teachers in the committee, we introduce DiverseDistill, which allows the student to understand the expertise of each teacher and extract task knowledge.

Knowledge Distillation · Transfer Learning
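
A minimal sketch of the committee-distillation idea in the snippet above: a student is trained against several teachers at once, with a softmax-normalized weight per teacher standing in for DiverseDistill's learned teacher weighting. The loss form, temperature, and shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def committee_distill_loss(student_logits, teacher_logits_list, teacher_weights, T=2.0):
    """Weighted KL distillation from a committee of teachers.

    teacher_weights are softmax-normalized so the student can lean on the
    teachers whose expertise best matches the data (a stand-in for
    DiverseDistill's learned per-teacher weighting).
    """
    w = torch.softmax(teacher_weights, dim=0)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for weight, t_logits in zip(w, teacher_logits_list):
        p_teacher = F.softmax(t_logits / T, dim=-1)
        loss = loss + weight * F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
    return loss

# Toy usage: 3 teachers, batch of 4, 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teachers = [torch.randn(4, 10) for _ in range(3)]
weights = torch.zeros(3, requires_grad=True)  # learned alongside the student
print(committee_distill_loss(student_logits, teachers, weights).item())
```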

Heterogeneous federated collaborative filtering using FAIR: Federated Averaging in Random Subspaces

1 code implementation • 3 Nov 2023 • Aditya Desai, Benjamin Meisburger, Zichang Liu, Anshumali Shrivastava

To include data from all devices in federated learning, we must enable collective training of embedding tables on devices with heterogeneous memory capacities.

Collaborative Filtering · Federated Learning +1
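
One way to picture "federated averaging in random subspaces": give every device a hashed (hashing-trick) copy of the logical embedding table sized to its own memory, and average in the shared logical space. The `slot` hash, the table sizes, and the write-back step below are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

VOCAB, DIM = 500, 8  # logical (full-size) embedding table shape

def slot(row, col, mem_size):
    """Shared hash from a logical table coordinate to a device-sized
    parameter array -- a hashing-trick stand-in for a random subspace."""
    return (row * 2654435761 + col * 40503) % mem_size

class Device:
    """A client whose embedding memory may be far smaller than the
    logical table (heterogeneous capacities)."""
    def __init__(self, mem_size, rng):
        self.mem_size = mem_size
        self.params = rng.normal(0.0, 0.01, mem_size)

    def lookup(self, row):
        # Reconstruct a full-width row from the compressed parameters.
        return self.params[[slot(row, j, self.mem_size) for j in range(DIM)]]

def federated_average(devices):
    """Average client tables in the common logical space, then push the
    result back through each client's own hash."""
    logical = np.mean([[d.lookup(r) for r in range(VOCAB)] for d in devices], axis=0)
    for d in devices:
        acc, cnt = np.zeros(d.mem_size), np.zeros(d.mem_size)
        for r in range(VOCAB):
            for j in range(DIM):
                s = slot(r, j, d.mem_size)
                acc[s] += logical[r, j]
                cnt[s] += 1
        d.params = acc / np.maximum(cnt, 1)

rng = np.random.default_rng(0)
devices = [Device(m, rng) for m in (4096, 1024, 256)]  # heterogeneous memory
federated_average(devices)
print(devices[0].lookup(0))
```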

Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time

1 code implementation • 26 Oct 2023 • Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen

We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising the model's quality or in-context learning ability.

In-Context Learning
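
The claim above can be made concrete with a toy FFN in which a cheap predictor picks the top-k hidden neurons for each input and only those columns are computed. The predictor, shapes, and k here are illustrative assumptions; Deja Vu's actual asynchronous lookahead predictor design is more involved.

```python
import torch

class ContextualSparseMLP(torch.nn.Module):
    """Toy contextual-sparsity FFN: a cheap linear predictor scores the
    hidden neurons for the current input, and only the top-k rows/columns
    of the two weight matrices are touched."""
    def __init__(self, d=64, hidden=256, k=32):
        super().__init__()
        self.w1 = torch.nn.Linear(d, hidden)
        self.w2 = torch.nn.Linear(hidden, d)
        self.predictor = torch.nn.Linear(d, hidden)  # cheap neuron scorer
        self.k = k

    def forward(self, x):  # x: (d,)
        idx = self.predictor(x).topk(self.k).indices  # predicted active neurons
        h = torch.relu(x @ self.w1.weight[idx].T + self.w1.bias[idx])
        return h @ self.w2.weight[:, idx].T + self.w2.bias

mlp = ContextualSparseMLP()
print(mlp(torch.randn(64)).shape)  # torch.Size([64])
```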

Learning Multimodal Data Augmentation in Feature Space

1 code implementation • 29 Dec 2022 • Zichang Liu, Zhiqiang Tang, Xingjian Shi, Aston Zhang, Mu Li, Anshumali Shrivastava, Andrew Gordon Wilson

The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems.

Data Augmentation · Image Classification +1

SAR-Net: Shape Alignment and Recovery Network for Category-level 6D Object Pose and Size Estimation

no code implementations • CVPR 2022 • Haitao Lin, Zichang Liu, Chilam Cheang, Yanwei Fu, Guodong Guo, Xiangyang Xue

The concatenation of the observed point cloud and symmetric one reconstructs a coarse object shape, thus facilitating object center (3D translation) and 3D size estimation.

Object · Optical Character Recognition (OCR)
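
The center-and-size step described in the snippet has a simple geometric reading, sketched below with a known symmetry plane through the origin. In the paper, the symmetric point cloud is predicted by a network from the observed one, not computed from a given plane as here.

```python
import numpy as np

def coarse_center_and_size(observed, plane_normal):
    """Mirror a partial point cloud across a symmetry plane, concatenate,
    and read off a coarse 3D center (translation) and size (extents)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Reflect each point p -> p - 2 (p . n) n (plane through the origin).
    mirrored = observed - 2.0 * (observed @ n)[:, None] * n
    full = np.concatenate([observed, mirrored], axis=0)  # coarse object shape
    center = full.mean(axis=0)                           # 3D translation
    size = full.max(axis=0) - full.min(axis=0)           # 3D size
    return center, size

pts = np.random.default_rng(0).normal(size=(200, 3)) + np.array([0.5, 0.0, 0.0])
center, size = coarse_center_and_size(pts, plane_normal=np.array([1.0, 0.0, 0.0]))
print(center, size)
```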

Efficient Inference via Universal LSH Kernel

no code implementations • 21 Jun 2021 • Zichang Liu, Benjamin Coleman, Anshumali Shrivastava

Large machine learning models achieve unprecedented performance on various tasks and have become the go-to technique.

Knowledge Distillation · Quantization
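
For context, the generic LSH-as-kernel idea: with signed random projections (SimHash), the fraction of matching hash bits estimates angular similarity, so similarity computations can run on compact binary fingerprints. This sketch shows only that generic idea, not the paper's universal kernel or its distillation/quantization pipeline.

```python
import numpy as np

def simhash_sketch(x, projections):
    """Signed-random-projection (SimHash) fingerprint of a vector."""
    return (projections @ x) >= 0

def lsh_kernel(x, y, projections):
    """Estimate angular similarity from matching hash bits:
    P[bit match] = 1 - theta(x, y) / pi."""
    bx, by = simhash_sketch(x, projections), simhash_sketch(y, projections)
    return (bx == by).mean()

rng = np.random.default_rng(0)
proj = rng.normal(size=(1024, 32))  # 1024 hash bits for 32-dim inputs
x, y = rng.normal(size=32), rng.normal(size=32)
cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(lsh_kernel(x, y, proj), 1 - np.arccos(cos) / np.pi)  # should be close
```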

MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training

no code implementations • ICLR 2021 • Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan Lingjie Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Re

Recent advances by practitioners in the deep learning community have breathed new life into Locality Sensitive Hashing (LSH), using it to reduce memory and time bottlenecks in neural network (NN) training.

Efficient Neural Network · Language Modelling +2
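
The way LSH removes training bottlenecks (as in SLIDE, the line of work MONGOOSE builds on) is by hashing neuron weight vectors so each input only activates the neurons in its bucket. The sketch below uses a plain, fixed SimHash table; MONGOOSE's contribution, making the hash functions learnable and cheap to maintain as weights drift, is not reproduced here.

```python
import numpy as np

class LSHNeuronSelector:
    """SLIDE-style neuron selection: hash the weight vectors once, then
    for each input retrieve only the bucket of neurons likely to have
    large inner products. Only those get a forward/backward pass."""
    def __init__(self, weights, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(size=(n_bits, weights.shape[1]))
        self.powers = 1 << np.arange(n_bits)
        self.buckets = {}
        for i, w in enumerate(weights):
            self.buckets.setdefault(self._code(w), []).append(i)

    def _code(self, v):
        return int(((self.proj @ v) >= 0) @ self.powers)

    def active_neurons(self, x):
        return self.buckets.get(self._code(x), [])

W = np.random.default_rng(1).normal(size=(10_000, 64))  # 10k output neurons
selector = LSHNeuronSelector(W)
x = W[42] + 0.05 * np.random.default_rng(2).normal(size=64)
# A near-duplicate of neuron 42 usually lands in the same bucket.
print(len(selector.active_neurons(x)), 42 in selector.active_neurons(x))
```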

Neighbor Oblivious Learning (NObLe) for Device Localization and Tracking

no code implementations • 23 Nov 2020 • Zichang Liu, Li Chou, Anshumali Shrivastava

In this paper, we argue that state-of-the-art systems are significantly worse in terms of accuracy because they are incapable of utilizing this essential structural information.

Conditional Automated Channel Pruning for Deep Neural Networks

no code implementations • 21 Sep 2020 • Yixin Liu, Yong Guo, Zichang Liu, Haohua Liu, Jingjie Zhang, Zejun Chen, Jing Liu, Jian Chen

To address this issue, given a target compression rate for the whole model, one can search for the optimal compression rate for each layer.

Model Compression
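
The per-layer search described in the snippet can be sketched as a toy random search that samples keep-ratios, rescales them to hit the global budget, and keeps the best configuration under a proxy score. Everything here (the proxy, the sampling, the clipping) is an illustrative baseline; the paper instead learns a policy conditioned on the target rate.

```python
import numpy as np

def search_layer_rates(layer_sizes, target_rate, score_fn, n_trials=200, seed=0):
    """Random search for per-layer channel keep-ratios whose overall
    (parameter-weighted) keep-ratio matches a global compression target."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(layer_sizes, dtype=float)
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        rates = rng.uniform(0.1, 1.0, size=len(sizes))
        rates *= target_rate / (rates @ sizes / sizes.sum())  # hit the budget
        rates = np.clip(rates, 0.05, 1.0)
        score = score_fn(rates)
        if score > best_score:
            best, best_score = rates, score
    return best, best_score

# Toy proxy: pruning important (early) layers hurts more than late ones.
importance = np.array([1.0, 0.8, 0.5, 0.2])
proxy = lambda rates: float(importance @ np.log(rates))
best_rates, best = search_layer_rates([64, 128, 256, 512], 0.5, proxy)
print(np.round(best_rates, 2), round(best, 3))
```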

Climbing the WOL: Training for Cheaper Inference

no code implementations • 2 Jul 2020 • Zichang Liu, Zhaozhuo Xu, Alan Ji, Jonathan Li, Beidi Chen, Anshumali Shrivastava

Efficient inference for wide output layers (WOLs) is an essential yet challenging task in large scale machine learning.

Retrieval
