We show that physics-based simulations can be seamlessly integrated with NeRF to generate high-quality elastodynamics of real-world objects.
We introduce PhysGaussian, a new method that seamlessly integrates physically grounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel motion synthesis.
We propose a hybrid neural network and physics framework for reduced-order modeling of elastoplasticity and fracture.
In this work, we aim to identify parameters characterizing a physical system from a set of multi-view videos without any assumptions about object geometry or topology.
In this work, we present a reconfigurable data glove design to capture different modes of human hand-object interactions, which are critical in training embodied artificial intelligence (AI) agents for fine manipulation tasks.
Recent breakthroughs in Vision-Language (V&L) joint research have achieved remarkable results in various text-driven tasks.
Bi-stride pools nodes on every other frontier of a breadth-first search (BFS), eliminating the need to manually draw coarser meshes and avoiding the spurious edges introduced by spatial-proximity-based pooling.
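The frontier-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the adjacency-dict representation, the seed choice, and the function name `bistride_pool` are all assumptions made for the example.

```python
from collections import deque

def bistride_pool(adj, seed=0):
    """Keep nodes on every other BFS frontier (even depths from the seed).

    adj: dict mapping node -> list of neighbor nodes.
    Returns the set of nodes retained after one pooling step.
    """
    depth = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return {u for u, d in depth.items() if d % 2 == 0}

# Path graph 0-1-2-3-4: even-depth frontiers from seed 0 are {0, 2, 4},
# so pooling halves the node count while preserving graph connectivity
# through the skipped odd frontier.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert bistride_pool(adj) == {0, 2, 4}
```

Because every retained node is two BFS hops from the next retained frontier, connecting retained nodes through their shared odd-frontier neighbors yields a coarser graph without relying on spatial distance.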
Differentiable physics enables efficient gradient-based optimizations of neural network (NN) controllers.
This paper provides a new avenue for exploiting deep neural networks to improve physics-based simulation.
We believe our method will inspire a wide range of new algorithms for deep learning and numerical optimization.
A quadrature-level connectivity-graph-based method is adopted to avoid the artificial checkerboard patterns common in multi-resolution topology optimization methods.
We propose Hierarchical Optimization Time Integration (HOT) for efficient implicit time-stepping of the Material Point Method (MPM) irrespective of simulated materials and conditions.
We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, to obtain large-scale 2D/3D image data with perfect per-pixel ground truth.
We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms.
We propose a notion of affordance that accounts for the physical quantities generated when the human body interacts with real-world objects, and we introduce a learning framework that incorporates the concept of human utilities, providing a deeper and finer-grained account of both object affordance and people's interactions with objects.