Search Results for author: Xiaolong Zhu

Found 7 papers, 4 papers with code

Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

1 code implementation · 22 Nov 2023 · Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li

The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model.

Denoising
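The DPO objective mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard DPO loss on a single preference pair, not the paper's diffusion-model variant; all names and the default `beta` are illustrative.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    logp_w / logp_l: policy log-probs of the preferred (w) and
    dispreferred (l) responses; ref_logp_*: the same quantities under
    a frozen reference model. No separate reward model is needed: the
    implicit reward is beta * (log pi - log pi_ref).
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # negative log-sigmoid of the reward margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With a zero margin the loss is log 2; widening the gap in favor of the preferred response drives the loss toward zero.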

Emergent collective intelligence from massive-agent cooperation and competition

1 code implementation · 4 Jan 2023 · HanMo Chen, Stone Tao, Jiaxin Chen, Weihan Shen, Xihui Li, Chenghui Yu, Sikai Cheng, Xiaolong Zhu, Xiu Li

Since these learned group strategies arise from individual decisions without an explicit coordination mechanism, we claim that artificial collective intelligence emerges from massive-agent cooperation and competition.

Reinforcement Learning (RL)

Multi-Agent Path Finding via Tree LSTM

1 code implementation · 24 Oct 2022 · Yuhao Jiang, Kunjie Zhang, Qimai Li, Jiaxin Chen, Xiaolong Zhu

In recent years, Multi-Agent Path Finding (MAPF) has attracted attention from the fields of both Operations Research (OR) and Reinforcement Learning (RL).

Multi-Agent Path Finding · Reinforcement Learning (RL) +1

Real-Time Neural Style Transfer for Videos

no code implementations CVPR 2017 Hao-Zhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, Wei Liu

More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames.

Style Transfer · Video Style Transfer
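The hybrid loss described in the abstract combines three terms. The sketch below only illustrates that weighted combination; the weights and the per-term loss values are hypothetical placeholders, not the paper's formulation.

```python
def hybrid_loss(content_loss, style_loss, temporal_loss,
                alpha=1.0, beta=10.0, gamma=100.0):
    """Weighted sum of the three terms named in the abstract:
    content fidelity to the input frame, style similarity to the
    reference style image, and temporal consistency between
    consecutive stylized frames. The weights alpha/beta/gamma are
    illustrative defaults, not values from the paper.
    """
    return alpha * content_loss + beta * style_loss + gamma * temporal_loss
```

Weighting the temporal term heavily is what suppresses flicker between consecutive stylized frames in video settings.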

A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments

no code implementations19 Jul 2021 Xiaolong Zhu, Fernando Vanegas, Felipe Gonzalez, Conrad Sanderson

The system's performance is evaluated with an increasing number of UAVs in several indoor scenarios containing obstacles.
