Search Results for author: Zeshun Zong

Found 5 papers, 0 papers with code

VideoPhy: Evaluating Physical Commonsense for Video Generation

no code implementations · 5 Jun 2024 · Hritik Bansal, Zongyu Lin, Tianyi Xie, Zeshun Zong, Michal Yarom, Yonatan Bitton, Chenfanfu Jiang, Yizhou Sun, Kai-Wei Chang, Aditya Grover

Recent advances in internet-scale video data pretraining have led to the development of text-to-video generative models that can create high-quality videos across a broad range of visual concepts, synthesize realistic motions and render complex objects.

Video Generation

Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication

no code implementations · 28 May 2024 · Yunuo Chen, Tianyi Xie, Zeshun Zong, Xuan Li, Feng Gao, Yin Yang, Ying Nian Wu, Chenfanfu Jiang

Existing diffusion-based text-to-3D generation methods primarily focus on producing visually realistic shapes and appearances, often neglecting the physical constraints necessary for downstream tasks.

3D Generation · Friction · +1

Gaussian Splashing: Unified Particles for Versatile Motion Synthesis and Rendering

no code implementations · 27 Jan 2024 · Yutao Feng, Xiang Feng, Yintong Shang, Ying Jiang, Chang Yu, Zeshun Zong, Tianjia Shao, Hongzhi Wu, Kun Zhou, Chenfanfu Jiang, Yin Yang

We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS.

Motion Synthesis

PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics

no code implementations · CVPR 2024 · Tianyi Xie, Zeshun Zong, Yuxing Qiu, Xuan Li, Yutao Feng, Yin Yang, Chenfanfu Jiang

We introduce PhysGaussian, a new method that seamlessly integrates physically grounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel motion synthesis.

Motion Synthesis
