Search Results for author: Shizhao Sun

Found 8 papers, 2 papers with code

LayoutPrompter: Awaken the Design Ability of Large Language Models

1 code implementation NeurIPS 2023 Jiawei Lin, Jiaqi Guo, Shizhao Sun, Zijiang James Yang, Jian-Guang Lou, Dongmei Zhang

In this work, we propose LayoutPrompter, which leverages large language models (LLMs) to address the above problems through in-context learning.

A Parse-Then-Place Approach for Generating Graphic Layouts from Textual Descriptions

no code implementations ICCV 2023 Jiawei Lin, Jiaqi Guo, Shizhao Sun, Weijiang Xu, Ting Liu, Jian-Guang Lou, Dongmei Zhang

To model combined and incomplete constraints, we use a Transformer-based layout generation model and carefully design a way to represent constraints and layouts as sequences.

LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models

1 code implementation ICCV 2023 Junyi Zhang, Jiaqi Guo, Shizhao Sun, Jian-Guang Lou, Dongmei Zhang

To tackle this challenge, we summarize three critical factors for achieving a mild forward process for the layout, i.e., legality, coordinate proximity and type disruption.


LayoutFormer++: Conditional Graphic Layout Generation via Constraint Serialization and Decoding Space Restriction

no code implementations CVPR 2023 Zhaoyun Jiang, Jiaqi Guo, Shizhao Sun, Huayu Deng, Zhongkai Wu, Vuksan Mijovic, Zijiang James Yang, Jian-Guang Lou, Dongmei Zhang

First, to flexibly handle diverse constraints, we propose a constraint serialization scheme, which represents different user constraints as sequences of tokens with a predefined format.
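The idea of serializing constraints into token sequences with a predefined format can be sketched as follows. This is a minimal illustration, not the paper's actual scheme; the constraint triples and separator token are hypothetical.

```python
# Minimal sketch (hypothetical format): flatten a list of
# (element_type, attribute, value) user constraints into a single
# token sequence that a sequence model can consume.
def serialize_constraints(constraints):
    tokens = []
    for elem_type, attr, value in constraints:
        # one fixed-format group per constraint, "|" as a separator token
        tokens += [elem_type, attr, str(value), "|"]
    return tokens[:-1]  # drop the trailing separator

seq = serialize_constraints([("button", "width", 120), ("image", "x", 40)])
print(seq)  # ['button', 'width', '120', '|', 'image', 'x', '40']
```

Because every constraint type maps to the same flat token format, a single sequence-to-sequence model can handle diverse constraint combinations without architecture changes.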

Data-Anonymous Encoding for Text-to-SQL Generation

no code implementations IJCNLP 2019 Zhen Dong, Shizhao Sun, Hongzhi Liu, Jian-Guang Lou, Dongmei Zhang

In text-to-SQL generation, the input utterance usually contains many tokens that are related to column names or cells in the table, called "table-related tokens".
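A data-anonymous encoding of such tokens might look like the sketch below: table-related tokens in the utterance are replaced by anonymous placeholders before modeling. The token lists and placeholder naming are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch: replace table-related tokens (here, known column
# names) in an utterance with anonymous placeholders COL0, COL1, ...
def anonymize(tokens, columns):
    out, mapping = [], {}
    for tok in tokens:
        if tok in columns:
            if tok not in mapping:
                mapping[tok] = f"COL{len(mapping)}"
            out.append(mapping[tok])  # table-related token -> placeholder
        else:
            out.append(tok)           # ordinary token passes through
    return out, mapping

toks, m = anonymize(["show", "name", "where", "age", ">", "30"], {"name", "age"})
print(toks)  # ['show', 'COL0', 'where', 'COL1', '>', '30']
```

The mapping is kept so the placeholders can be resolved back to real column names when assembling the final SQL query.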


Slim-DP: A Light Communication Data Parallelism for DNN

no code implementations 27 Sep 2017 Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu

However, as DNN models grow in size and the number of workers increases in practice, this typical data parallelism cannot achieve satisfactory training acceleration, since it suffers from heavy communication cost due to transferring a huge amount of information between workers and the parameter server.

Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks

no code implementations 2 Jun 2016 Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu

In this framework, we propose to aggregate the local models by ensemble, i.e., averaging the outputs of the local models instead of their parameters.
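The output-averaging idea can be sketched in a few lines. The stand-in "models" below are toy callables returning class probabilities, not the paper's DNNs; this only illustrates averaging outputs rather than parameters.

```python
import numpy as np

# Minimal sketch: aggregate local models as an ensemble by averaging
# their *output* distributions, instead of averaging their parameters.
def ensemble_predict(local_models, x):
    outputs = [m(x) for m in local_models]   # one prediction per worker
    return np.mean(outputs, axis=0)          # elementwise average

# Two toy "local models" producing class-probability vectors.
m1 = lambda x: np.array([0.9, 0.1])
m2 = lambda x: np.array([0.5, 0.5])
print(ensemble_predict([m1, m2], None))  # [0.7 0.3]
```

Averaging outputs sidesteps the problem that parameter averaging can land between two good local optima and produce a poor model, since the ensemble is at least as expressive as its members.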


On the Depth of Deep Neural Networks: A Theoretical View

no code implementations 17 Jun 2015 Shizhao Sun, Wei Chen, Li-Wei Wang, Xiaoguang Liu, Tie-Yan Liu

First, we derive an upper bound for the RA of a DNN, and show that it increases with increasing depth.

