Search Results for author: Haitian Jiang

Found 6 papers, 1 paper with code

MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale

no code implementations19 Oct 2023 Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, David Wipf

Among the many variants of graph neural network (GNN) architectures capable of modeling data with cross-instance relations, an important subclass involves layers designed such that the forward pass iteratively reduces a graph-regularized energy function of interest.

Node Classification
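
The unfolded-GNN idea behind this line of work can be made concrete with a small sketch. Assuming the common quadratic energy E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y) (an assumption for illustration; MuseGNN's actual energy and sampling-based architecture are more involved), unrolled gradient-descent steps on E become the forward-pass layers:

```python
import numpy as np

def unfolded_gnn_forward(X, A, lam=1.0, alpha=0.1, num_layers=10):
    """Unroll gradient descent on E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y).

    Each 'layer' is one descent step, so (for a small enough step size)
    a deeper forward pass monotonically reduces the energy.
    X: (n, d) input node features; A: (n, n) symmetric adjacency matrix.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                        # unnormalized graph Laplacian
    Y = X.copy()
    for _ in range(num_layers):
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
        Y = Y - alpha * grad         # one propagation layer = one descent step
    return Y

# Toy usage: 4-node path graph with 2-d features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 2)
Y = unfolded_gnn_forward(X, A)
```

Because every layer is a descent step on a shared energy, stacking more layers cannot increase it, which is the source of the convergence and interpretability properties the abstract refers to.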

Efficient Halftoning via Deep Reinforcement Learning

no code implementations24 Apr 2023 Haitian Jiang, Dongliang Xiong, Xiaowen Jiang, Li Ding, Liang Chen, Kai Huang

In this paper, we propose a fast and structure-aware halftoning method via a data-driven approach.

Reinforcement Learning · SSIM
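
For orientation on the task itself: halftoning renders a grayscale image as a binary dot pattern. A classical baseline that data-driven methods are typically compared against is Floyd-Steinberg error diffusion, sketched below (this is the standard textbook algorithm, not the paper's method):

```python
import numpy as np

def floyd_steinberg(gray):
    """Classical error-diffusion halftoning: threshold each pixel and
    push the quantization error onto not-yet-processed neighbors.
    gray: 2-D float array in [0, 1]; returns a binary array."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```

Its sequential pixel-by-pixel error propagation is exactly what limits speed and motivates the fast, structure-aware learned alternative proposed here.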

FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training

no code implementations18 Jan 2023 Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang

A key performance bottleneck when training graph neural network (GNN) models on large, real-world graphs is loading node features onto a GPU.
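
A heavily hedged sketch of the general historical-embedding idea, i.e. caching stable intermediate embeddings on the GPU so that cache hits skip the host-to-GPU feature transfer and recomputation (the class and staleness policy below are hypothetical illustrations, not FreshGNN's API):

```python
import torch

class HistoricalEmbeddingCache:
    """Keep recent layer-1 embeddings resident on the GPU; nodes with a
    cache hit skip the expensive CPU->GPU feature load.
    Illustrative sketch only; the staleness criterion is an assumption."""
    def __init__(self, num_nodes, dim, device="cpu"):
        self.emb = torch.zeros(num_nodes, dim, device=device)
        self.valid = torch.zeros(num_nodes, dtype=torch.bool, device=device)

    def lookup(self, node_ids):
        """Split a minibatch into cached nodes and nodes needing a load."""
        hit = self.valid[node_ids]
        return node_ids[hit], node_ids[~hit], self.emb[node_ids[hit]]

    def update(self, node_ids, new_emb, grad_norm, threshold=0.1):
        # Cache only embeddings whose gradients were small, i.e. "stable"
        # across recent iterations (hypothetical stability test).
        stable = grad_norm < threshold
        ids = node_ids[stable]
        self.emb[ids] = new_emb[stable].detach()
        self.valid[ids] = True

# Usage: first lookup misses everything, later batches hit stable nodes.
cache = HistoricalEmbeddingCache(num_nodes=1000, dim=64)
hits, misses, cached = cache.lookup(torch.arange(10))
```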

WaveIPT: Joint Attention and Flow Alignment in the Wavelet domain for Pose Transfer

no code implementations ICCV 2023 Liyuan Ma, Tingwei Gao, Haitian Jiang, Haibin Shen, Kejie Huang

To leverage the advantages of both attention and flow simultaneously, we propose Wavelet-aware Image-based Pose Transfer (WaveIPT) to fuse the attention and flow in the wavelet domain.

Pose Transfer
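
A hedged sketch of what fusing two aligned feature maps in the wavelet domain can look like, assuming the pywt package is available; which subbands come from the attention branch versus the flow branch is an illustrative assumption here, not WaveIPT's actual routing:

```python
import numpy as np
import pywt

def wavelet_fuse(attn_feat, flow_feat):
    """Fuse two aligned 2-D feature maps in the wavelet domain:
    take the low-frequency approximation from one source and the
    high-frequency detail bands from the other, then invert.
    (The band assignment below is an assumption for illustration.)"""
    a_low, a_high = pywt.dwt2(attn_feat, "haar")
    f_low, f_high = pywt.dwt2(flow_feat, "haar")
    # Illustrative routing: low frequencies from the attention output,
    # high-frequency details (LH, HL, HH) from the flow-warped output.
    return pywt.idwt2((a_low, f_high), "haar")

x = np.random.randn(32, 32)   # stand-in for an attention-branch feature map
y = np.random.randn(32, 32)   # stand-in for a flow-warped feature map
z = wavelet_fuse(x, y)
```

The appeal of the wavelet domain is that it separates coarse structure from fine texture, so each branch can contribute the frequency range it handles best.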

Halftoning with Multi-Agent Deep Reinforcement Learning

no code implementations23 Jul 2022 Haitian Jiang, Dongliang Xiong, Xiaowen Jiang, Aiguo Yin, Li Ding, Kai Huang

Deep neural networks have recently succeeded in digital halftoning using vanilla convolutional layers with high parallelism.

Reinforcement Learning (RL)
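
A minimal sketch of the per-pixel-agent framing: a fully convolutional policy emits one Bernoulli action per pixel and is trained with REINFORCE (the network and reward below are placeholders for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    """Fully convolutional policy: one Bernoulli 'agent' per pixel,
    all sharing the same convolutional weights for high parallelism."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, gray):
        return self.net(gray)   # per-pixel probability of printing a dot

policy = PixelPolicy()
gray = torch.rand(1, 1, 64, 64)              # input grayscale image
probs = policy(gray)
dist = torch.distributions.Bernoulli(probs)
halftone = dist.sample()                     # each pixel-agent acts at once
# REINFORCE update; a real reward would measure perceptual quality
# (e.g. SSIM against the input). This tone-matching reward is a placeholder.
reward = -(halftone.mean() - gray.mean()).abs()
loss = -(dist.log_prob(halftone).mean() * reward.detach())
loss.backward()
```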
