Search Results for author: Hengyu Meng

Found 7 papers, 4 papers with code

Text2VDM: Text to Vector Displacement Maps for Expressive and Interactive 3D Sculpting

no code implementations • 27 Feb 2025 • Hengyu Meng, Duotun Wang, Zhijing Shao, Ligang Liu, Zeyu Wang

This paper presents Text2VDM, a novel framework for text-to-VDM brush generation through the deformation of a dense planar mesh guided by score distillation sampling (SDS).

3D Generation
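Text2VDM's mesh deformation is guided by score distillation sampling (SDS). As background, the standard SDS gradient from DreamFusion (which this line of work builds on; the paper's exact variant may differ) optimizes scene parameters θ by pushing renderings x toward the prior of a frozen diffusion model ε̂_φ conditioned on the text prompt y:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} =
\mathbb{E}_{t,\epsilon}\!\left[
  w(t)\,\big(\hat\epsilon_\phi(x_t;\, y,\, t) - \epsilon\big)\,
  \frac{\partial x}{\partial \theta}
\right]
```

Here x_t is the rendering noised to timestep t, ε is the injected Gaussian noise, and w(t) is a timestep weighting; in Text2VDM the parameters θ would be the per-vertex displacements of the dense planar mesh.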

DEGAS: Detailed Expressions on Full-Body Gaussian Avatars

1 code implementation • 20 Aug 2024 • Zhijing Shao, Duotun Wang, Qing-Yao Tian, Yao-Dong Yang, Hengyu Meng, Zeyu Cai, Bo Dong, Yu Zhang, Kang Zhang, Zeyu Wang

We also propose an audio-driven extension of our method with the help of 2D talking faces, opening new possibilities for interactive AI agents.

3DGS • Neural Rendering

HeadEvolver: Text to Head Avatars via Expressive and Attribute-Preserving Mesh Deformation

no code implementations • 14 Mar 2024 • Duotun Wang, Hengyu Meng, Zeyu Cai, Zhijing Shao, Qianxi Liu, Lin Wang, Mingming Fan, Xiaohang Zhan, Zeyu Wang

Extensive experiments demonstrate that our framework can generate diverse and expressive head avatars with high-quality meshes that artists can easily manipulate in graphics software, facilitating downstream applications such as efficient asset creation and animation with preserved attributes.

Attribute • NeRF

MagicScroll: Nontypical Aspect-Ratio Image Generation for Visual Storytelling via Multi-Layered Semantic-Aware Denoising

no code implementations • 18 Dec 2023 • Bingyuan Wang, Hengyu Meng, Zeyu Cai, Lanjiong Li, Yue Ma, Qifeng Chen, Zeyu Wang

Visual storytelling often uses nontypical aspect-ratio images like scroll paintings, comic strips, and panoramas to create an expressive and compelling narrative.

Denoising • Image Generation • +1

Efficient LLM Inference on CPUs

2 code implementations • 1 Nov 2023 • Haihao Shen, Hanwen Chang, Bo Dong, Yu Luo, Hengyu Meng

Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks.

Quantization
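The "Efficient LLM Inference on CPUs" entry is tagged with quantization, which reduces memory bandwidth by storing weights in low-bit integers. As an illustrative sketch only (not the paper's implementation, which targets INT4 weight-only quantization with optimized CPU kernels), here is minimal symmetric per-tensor INT8 weight quantization in NumPy:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map float weights onto [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step (scale / 2)
max_err = np.abs(w - w_hat).max()
```

Real deployments typically quantize per-channel or per-group rather than per-tensor, and at lower bit widths the grouping becomes essential for accuracy.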

An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs

1 code implementation • 28 Jun 2023 • Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, Yi Ding, Yu Luo, Hanwen Chang, Qun Gao, Ziheng Wang, Guy Boudoukh, Moshe Wasserblat

We apply our sparse accelerator to widely used Transformer-based language models including Bert-Mini, DistilBERT, Bert-Base, and BERT-Large.

Model Compression

Fast DistilBERT on CPUs

1 code implementation • 27 Oct 2022 • Haihao Shen, Ofir Zafrir, Bo Dong, Hengyu Meng, Xinyu Ye, Zhe Wang, Yi Ding, Hanwen Chang, Guy Boudoukh, Moshe Wasserblat

In this work, we propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with optimized kernels for sparse and quantized operators.

Knowledge Distillation • Model Compression • +2
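The Fast DistilBERT pipeline combines hardware-aware pruning with distillation and quantization. The pruning step can be illustrated with global magnitude pruning (a simple baseline, not necessarily the paper's hardware-aware criterion): zero out the smallest-magnitude weights until a target sparsity is reached.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero the k smallest-magnitude weights, where k = sparsity * w.size
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > thresh)

rng = np.random.default_rng(1)
w = rng.standard_normal((16, 16))
w_sparse = magnitude_prune(w, 0.8)
achieved = 1.0 - np.count_nonzero(w_sparse) / w.size
```

In practice pruning is applied gradually during fine-tuning, with the distillation loss recovering accuracy lost at each sparsity increase.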
