Search Results for author: Yao Feng

Found 20 papers, 12 papers with code

Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks

no code implementations • ICML 2020 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
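The setting described above is typically handled with zeroth-order (gradient-free) optimization. As a rough illustration of the idea, not the paper's exact algorithm, the sketch below estimates gradients from function values via two-point random-direction queries and runs alternating descent/ascent on a toy saddle problem; the objective and all parameter choices are assumptions for the example.

```python
import numpy as np

def zo_grad(f, x, rng, mu=1e-3, n_samples=20):
    """Two-point zeroth-order gradient estimate of f at x using
    Gaussian random directions (a standard estimator; the paper's
    exact scheme may differ)."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

def zo_min_max(steps=500, lr=0.05, seed=0):
    """Gradient-free descent-ascent on the toy saddle problem
    min_x max_y  x*y + 0.1*x^2 - 0.1*y^2, whose solution is (0, 0)."""
    rng = np.random.default_rng(seed)
    x, y = np.array([1.0]), np.array([1.0])
    for _ in range(steps):
        f_x = lambda v: float(v @ y + 0.1 * v @ v - 0.1 * y @ y)
        f_y = lambda v: float(x @ v + 0.1 * x @ x - 0.1 * v @ v)
        x = x - lr * zo_grad(f_x, x, rng)   # descent on the min variable
        y = y + lr * zo_grad(f_y, y, rng)   # ascent on the max variable
    return x, y
```

In a black-box attack, `f` would be the model's loss evaluated by querying the victim model, so only function values, never gradients, are needed.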

ChatHuman: Language-driven 3D Human Understanding with Retrieval-Augmented Tool Reasoning

no code implementations • 7 May 2024 • Jing Lin, Yao Feng, Weiyang Liu, Michael J. Black

The novel features of ChatHuman include leveraging academic publications to guide the application of 3D human-related tools, employing a retrieval-augmented generation model to generate in-context-learning examples for handling new tools, and discriminating and integrating tool results to enhance 3D human understanding.

Human-Object Interaction Detection • In-Context Learning • +3

ChatPose: Chatting about 3D Human Pose

no code implementations • CVPR 2024 • Yao Feng, Jing Lin, Sai Kumar Dwivedi, Yu Sun, Priyanka Patel, Michael J. Black

Additionally, ChatPose empowers LLMs to apply their extensive world knowledge in reasoning about human poses, leading to two advanced tasks: speculative pose generation and reasoning about pose estimation.

Pose Estimation • Pose Prediction • +1

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization

1 code implementation • 10 Nov 2023 • Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method called Orthogonal Butterfly (BOFT).
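The core idea of a butterfly factorization is to compose log2(n) sparse factors of 2x2 rotations into a dense orthogonal matrix with O(n log n) rather than O(n^2) parameters. The sketch below is a minimal NumPy illustration of that structure, not the paper's BOFT implementation; the function names and the dense-matrix construction are assumptions for clarity.

```python
import numpy as np

def butterfly_factor(angles, stride, n):
    """One butterfly factor: an n x n orthogonal matrix made of
    independent 2x2 rotations acting on index pairs (i, i + stride)."""
    B = np.eye(n)
    k = 0
    for start in range(0, n, 2 * stride):
        for i in range(start, start + stride):
            c, s = np.cos(angles[k]), np.sin(angles[k])
            j = i + stride
            B[[i, i, j, j], [i, j, i, j]] = [c, -s, s, c]
            k += 1
    return B

def butterfly_orthogonal(all_angles, n):
    """Compose log2(n) butterfly factors (n // 2 angles each) into a
    dense orthogonal matrix with O(n log n) free parameters."""
    R = np.eye(n)
    m = n // 2
    stride, idx = 1, 0
    while stride < n:
        R = butterfly_factor(all_angles[idx:idx + m], stride, n) @ R
        idx += m
        stride *= 2
    return R
```

Because every factor is orthogonal (disjoint 2x2 rotation blocks), the product is orthogonal by construction, which is what makes the parameterization usable inside orthogonal finetuning.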

Ghost on the Shell: An Expressive Representation of General 3D Shapes

no code implementations • 23 Oct 2023 • Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu, Liam Paull, Michael J. Black, Bernhard Schölkopf

Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling.

Text-Guided Generation and Editing of Compositional 3D Avatars

no code implementations • 13 Sep 2023 • Hao Zhang, Yao Feng, Peter Kulits, Yandong Wen, Justus Thies, Michael J. Black

We argue that existing methods are limited because they employ a monolithic modeling approach, using a single representation for the head, face, hair, and accessories.

text-guided-generation • Virtual Try-on

Learning Disentangled Avatars with Hybrid 3D Representations

no code implementations • 12 Sep 2023 • Yao Feng, Weiyang Liu, Timo Bolkart, Jinlong Yang, Marc Pollefeys, Michael J. Black

To this end, both explicit and implicit 3D representations have been heavily studied for holistic modeling and capture of the whole human (e.g., body, clothing, face, and hair), but neither representation is an optimal choice in terms of representational efficacy, since different parts of the human avatar have different modeling desiderata.


Controlling Text-to-Image Diffusion by Orthogonal Finetuning

1 code implementation • NeurIPS 2023 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf

To tackle this challenge, we introduce a principled finetuning method, Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks.
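The orthogonal-finetuning idea is to adapt a pretrained weight matrix W by multiplying it with a learned orthogonal matrix R, which preserves pairwise angles between neurons. A common way to guarantee orthogonality is the Cayley transform of a skew-symmetric generator; the minimal sketch below illustrates that construction (the helper names and dense parameterization are assumptions, not the paper's exact block-diagonal scheme).

```python
import numpy as np

def cayley(S):
    """Cayley transform: maps a skew-symmetric S to the orthogonal
    matrix R = (I - S) @ inv(I + S)."""
    I = np.eye(S.shape[0])
    return (I - S) @ np.linalg.inv(I + S)

def oft_apply(W, theta):
    """Rotate the pretrained weight W by a learned orthogonal R built
    from free parameters theta (the strictly upper-triangular entries
    of a skew-symmetric generator)."""
    n = W.shape[0]
    S = np.zeros((n, n))
    S[np.triu_indices(n, k=1)] = theta
    S = S - S.T              # skew-symmetric generator
    return cayley(S) @ W     # orthogonal update of the weight
```

Since R is orthogonal, the update preserves the norm of every column of W, which is the geometric property that keeps the finetuned model close to the pretrained one.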

MeshDiffusion: Score-based Generative 3D Mesh Modeling

1 code implementation • 14 Mar 2023 • Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, Weiyang Liu

We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation.

Scene Generation

Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications

1 code implementation • 15 Nov 2022 • Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, Jun Zhu

Recent work shows that it provides potential benefits for machine learning models by incorporating physical priors alongside collected data, making the intersection of machine learning and physics a prevailing paradigm.

Physics-informed machine learning
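The most common way to incorporate a physical prior is to penalize the residual of a governing PDE as an extra loss term. The toy sketch below illustrates that loss structure for the 1-D heat equation using finite differences on a grid; an actual PINN would evaluate the same residual with automatic differentiation through a neural network, and all grid and naming choices here are assumptions for the example.

```python
import numpy as np

def pde_residual(u, dx, dt, alpha=1.0):
    """Finite-difference residual of the heat equation u_t = alpha * u_xx
    on interior grid points; u is sampled on an (x, t) grid."""
    u_t  = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dt)                     # d/dt
    u_xx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2    # d2/dx2
    return u_t - alpha * u_xx

def physics_loss(u, dx, dt, alpha=1.0):
    """Mean squared PDE residual, added to the usual data-fitting loss."""
    return float(np.mean(pde_residual(u, dx, dt, alpha) ** 2))
```

A candidate solution that satisfies the physics drives this term toward zero, so minimizing it steers the model toward physically consistent predictions even where labeled data is sparse.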

Model-based Reinforcement Learning with a Hamiltonian Canonical ODE Network

no code implementations • 2 Nov 2022 • Yao Feng, Yuhong Jiang, Hang Su, Dong Yan, Jun Zhu

Model-based reinforcement learning usually suffers from high sample complexity in training the world model, especially for environments with complex dynamics.

Model-based Reinforcement Learning • reinforcement-learning • +1
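The structural prior a Hamiltonian ODE world model bakes in is that states evolve by q' = dH/dp, p' = -dH/dq for some scalar energy H(q, p). The sketch below illustrates that structure with a hand-written harmonic-oscillator Hamiltonian and a symplectic (leapfrog) integrator; the paper instead learns H from environment transitions, so every concrete choice here is an assumption for illustration.

```python
import numpy as np

def hamiltonian(q, p, k=1.0, m=1.0):
    """Harmonic-oscillator energy: kinetic p^2/2m plus potential k q^2/2."""
    return p**2 / (2 * m) + k * q**2 / 2

def leapfrog_step(q, p, dt, k=1.0, m=1.0):
    """Symplectic integrator step: keeps the energy nearly conserved
    over long rollouts, unlike a naive Euler step."""
    p = p - 0.5 * dt * k * q   # half kick:  -dH/dq
    q = q + dt * p / m         # drift:       dH/dp
    p = p - 0.5 * dt * k * q   # half kick
    return q, p

def rollout(q0, p0, dt=0.05, steps=1000):
    """Long-horizon rollout of the Hamiltonian dynamics."""
    q, p = q0, p0
    for _ in range(steps):
        q, p = leapfrog_step(q, p, dt)
    return q, p
```

Because trajectories conserve energy by construction, a learned world model with this structure cannot drift toward physically impossible states, which is one way such priors reduce the sample complexity of model learning.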

Capturing and Animation of Body and Clothing from Monocular Video

1 code implementation • 4 Oct 2022 • Yao Feng, Jinlong Yang, Marc Pollefeys, Michael J. Black, Timo Bolkart

Building on this insight, we propose SCARF (Segmented Clothed Avatar Radiance Field), a hybrid model combining a mesh-based body with a neural radiance field.

Virtual Try-on

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML

1 code implementation • 30 Sep 2019 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.

DenseBody: Directly Regressing Dense 3D Human Pose and Shape From a Single Color Image

2 code implementations • 25 Mar 2019 • Pengfei Yao, Zheng Fang, Fan Wu, Yao Feng, Jiwei Li

Recovering 3D human body shape and pose from 2D images is a challenging task due to the high complexity and flexibility of the human body and the relative scarcity of 3D labeled data.

