Search Results for author: Yiyang Cai

Found 6 papers, 4 papers with code

TEQ: Trainable Equivalent Transformation for Quantization of LLMs

1 code implementation • 17 Oct 2023 • Wenhua Cheng, Yiyang Cai, Kaokao Lv, Haihao Shen

As large language models (LLMs) become more prevalent, there is a growing need for new and improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy.

Quantization
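
The abstract snippet above stops at the motivation, but the title names the core idea: an equivalent transformation that rescales weights and activations so the FP32 output is mathematically unchanged while the rescaled weights become easier to quantize, with the scales learned by training. Below is a minimal, hedged PyTorch sketch of that pattern for a single linear layer; the class and parameter names (`TEQLinear`, `log_scale`, `fake_quantize`) are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class TEQLinear(nn.Module):
    """Sketch of a trainable equivalent transformation around a linear layer.

    For y = x @ W.T, dividing input channels by s and multiplying the matching
    weight columns by s leaves the FP32 output unchanged:
        (x / s) @ (W * s).T == x @ W.T
    Training s to minimize quantization error makes W * s easier to quantize.
    """

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        # One trainable scale per input channel; log-parameterized to stay positive.
        self.log_scale = nn.Parameter(torch.zeros(linear.in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.log_scale.exp()
        w = self.linear.weight * s            # scale weight columns
        w_q = fake_quantize(w)                # simulate low-bit weight quantization
        return nn.functional.linear(x / s, w_q, self.linear.bias)

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Per-channel symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax)
    # Straight-through: forward uses the quantized value, backward passes gradients.
    return w + (q * scale - w).detach()
```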

Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs

2 code implementations • 11 Sep 2023 • Wenhua Cheng, Weiwei Zhang, Haihao Shen, Yiyang Cai, Xin He, Kaokao Lv, Yi Liu

Large Language Models (LLMs) have demonstrated exceptional proficiency in language-related tasks, but their deployment poses significant challenges due to substantial memory and storage requirements.

Quantization
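
Again the snippet is mostly motivational, but the title points at the method: instead of fixed round-to-nearest, learn a small per-weight rounding offset and optimize it with signed gradient descent. Here is a minimal sketch under that reading; the names (`quantize_with_offset`, `v`, `signed_sgd_step`) and the toy calibration loop are assumptions for illustration, not the paper's implementation.

```python
import torch

def quantize_with_offset(w, v, scale, bits=4):
    """Round w/scale plus a learned offset v in [-0.5, 0.5], then dequantize.

    v shifts each weight's rounding decision away from round-to-nearest; a
    straight-through estimator lets gradients flow to v through round().
    """
    qmax = 2 ** (bits - 1) - 1
    x = w / scale + v.clamp(-0.5, 0.5)
    q = x.round().clamp(-qmax - 1, qmax)
    q = x + (q - x).detach()              # straight-through estimator
    return q * scale

def signed_sgd_step(v, lr=1e-3):
    """Signed gradient descent: step by the sign of the gradient, not its magnitude."""
    with torch.no_grad():
        v -= lr * v.grad.sign()
        v.grad = None

# Toy usage: tune rounding offsets to match the FP32 layer output on calibration data.
w = torch.randn(64, 64)
scale = w.abs().amax(dim=1, keepdim=True) / 7          # per-channel 4-bit scale
v = torch.zeros_like(w, requires_grad=True)
x = torch.randn(128, 64)                                # calibration activations
for _ in range(200):
    loss = ((x @ quantize_with_offset(w, v, scale).T - x @ w.T) ** 2).mean()
    loss.backward()
    signed_sgd_step(v)
```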

Uncertainty-Aware Cross-Modal Transfer Network for Sketch-Based 3D Shape Retrieval

no code implementations • 11 Aug 2023 • Yiyang Cai, Jiaming Lu, Jiewen Wang, Shuang Liang

UACTN (Uncertainty-Aware Cross-Modal Transfer Network) decouples the representation learning of sketches and 3D shapes into two separate tasks: classification-based sketch uncertainty learning and 3D shape feature transfer.

3D Shape Retrieval • Representation Learning +1
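
The snippet describes a two-branch design: sketch features are learned with a classification loss that also models per-sample uncertainty, while 3D shape features are transferred into the same embedding space. As a rough illustration of the first branch, here is one common way to implement classification-based uncertainty learning, predicting an embedding together with a variance and injecting Gaussian noise before the classifier; this is a generic pattern, not necessarily the paper's exact formulation, and all dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn

class UncertainSketchHead(nn.Module):
    """Generic uncertainty-aware classification head for sketch features.

    Predicts a mean embedding and a log-variance per sample; during training,
    noise scaled by the predicted variance is injected before classification,
    so ambiguous sketches learn a larger variance (higher uncertainty).
    """

    def __init__(self, feat_dim=512, embed_dim=256, num_classes=90):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)
        self.log_var = nn.Linear(feat_dim, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, feat):
        mu = self.mu(feat)
        log_var = self.log_var(feat)
        if self.training:
            # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)
            z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
        else:
            z = mu
        return self.classifier(z), mu, log_var
```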

Few-Shot Font Generation by Learning Fine-Grained Local Styles

2 code implementations • CVPR 2022 • Licheng Tang, Yiyang Cai, Jiaming Liu, Zhibin Hong, Mingming Gong, Minhu Fan, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang

Instead of explicitly disentangling global or component-wise style modeling, a cross-attention mechanism attends to the relevant local styles in the reference glyphs and aggregates them into a fine-grained style representation for the given content glyphs.

Font Generation
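
The snippet gives enough to sketch the aggregation step: content-glyph features act as queries and reference-glyph features as keys and values, so each content location pulls in the most relevant local style from the few references. Below is a minimal sketch using PyTorch's built-in multi-head attention; the module name, shapes, and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LocalStyleAggregator(nn.Module):
    """Sketch of cross-attention style aggregation for few-shot font generation.

    Queries come from the content glyph, keys/values from reference glyphs,
    so each content location attends to the most relevant local reference style.
    """

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, content_feat, ref_feat):
        # content_feat: (B, Lc, dim) flattened content-glyph feature map
        # ref_feat:     (B, Lr, dim) flattened features of the few reference glyphs
        style, _ = self.attn(query=content_feat, key=ref_feat, value=ref_feat)
        return style  # fine-grained style representation aligned to content locations

# Toy usage with hypothetical shapes: batch of 2, 3 reference glyphs of 16x16 features.
agg = LocalStyleAggregator()
content = torch.randn(2, 16 * 16, 256)
refs = torch.randn(2, 3 * 16 * 16, 256)
print(agg(content, refs).shape)  # torch.Size([2, 256, 256])
```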
