Search Results for author: Haoming Lu

Found 5 papers, 3 papers with code

Specialist Diffusion: Plug-and-Play Sample-Efficient Fine-Tuning of Text-to-Image Diffusion Models To Learn Any Unseen Style

no code implementations • CVPR 2023 • Haoming Lu, Hazarapet Tunanyan, Kai Wang, Shant Navasardyan, Zhangyang Wang, Humphrey Shi

Diffusion models have demonstrated an impressive capability for text-conditioned image synthesis, and broader application horizons are emerging as pretrained diffusion models are personalized toward generating a specialized target object or style.

Disentanglement, Image Generation

Kokoyi: Executable LaTeX for End-to-end Deep Learning

no code implementations • 29 Sep 2021 • Minjie Wang, Haoming Lu, Yu Gai, Lesheng Jin, Zihao Ye, Zheng Zhang

Despite substantial efforts from the deep learning systems community to relieve researchers and practitioners of the burden of implementing models of ever-growing complexity, a considerable linguistic gap remains between developing models in the language of mathematics and implementing them in the languages of computers.

Math Translation

Deep Learning for 3D Point Cloud Understanding: A Survey

1 code implementation • 18 Sep 2020 • Haoming Lu, Humphrey Shi

The development of practical applications, such as autonomous driving and robotics, has brought increasing attention to 3D point cloud understanding.

Autonomous Driving

SkyNet: A Champion Model for DAC-SDC on Low Power Object Detection

1 code implementation • 25 Jun 2019 • Xiaofan Zhang, Cong Hao, Haoming Lu, Jiachen Li, Yuhong Li, Yuchen Fan, Kyle Rupnow, JinJun Xiong, Thomas Huang, Honghui Shi, Wen-mei Hwu, Deming Chen

Developing artificial intelligence (AI) at the edge is always challenging: edge devices have limited computation capability and memory resources, yet must meet demanding requirements such as real-time processing, high throughput, and high inference accuracy.

Object Detection
