no code implementations • 27 Jun 2023 • Xiang 'Anthony' Chen, Jeff Burke, Ruofei Du, Matthew K. Hong, Jennifer Jacobs, Philippe Laban, Dingzeyu Li, Nanyun Peng, Karl D. D. Willis, Chien-Sheng Wu, Bolei Zhou
Through iterative, cross-disciplinary discussions, we define and propose next steps for Human-centered Generative AI (HGAI).
no code implementations • 19 Apr 2023 • Sitong Wang, Samia Menon, Tao Long, Keren Henderson, Dingzeyu Li, Kevin Crowston, Mark Hansen, Jeffrey V. Nickerson, Lydia B. Chilton
To translate news into social media reels, we support journalists in reframing the narrative.
no code implementations • CVPR 2022 • Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, Evangelos Kalogerakis
We present a method that reenacts a high-quality video with gestures matching a target speech audio.
no code implementations • 12 Feb 2022 • Arda Senocak, Junsik Kim, Tae-Hyun Oh, Hyeonggon Ryu, Dingzeyu Li, In So Kweon
The human brain is continuously inundated with multisensory information, and its complex interactions, from the outside world at any given moment.
no code implementations • 15 Sep 2021 • Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi Zheng
We find that each layer component's evaluation produces an identifiable magnetic signal signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and a joint consistency optimization based on integer programming.
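The joint consistency idea above can be illustrated with a toy sketch: suppose a classifier has already scored candidate (layer type, input width, output width) hypotheses for each signal segment; a consistent architecture is then the highest-scoring sequence whose widths chain together. This is a brute-force stand-in for the paper's integer-programming formulation, and all names and scores below are illustrative assumptions, not the authors' data or code.

```python
from itertools import product

def infer_architecture(candidates):
    """candidates: per-segment lists of (score, layer_type, w_in, w_out).

    Returns the highest-total-score sequence in which each layer's input
    width matches the previous layer's output width (a toy stand-in for
    the paper's integer-programming consistency optimization).
    """
    best, best_score = None, float("-inf")
    for choice in product(*candidates):
        # Joint consistency constraint: widths must chain across layers.
        if any(a[3] != b[2] for a, b in zip(choice, choice[1:])):
            continue
        score = sum(c[0] for c in choice)
        if score > best_score:
            best, best_score = choice, score
    return [(c[1], c[2], c[3]) for c in best] if best else None

# Hypothetical classifier outputs for three observed layer segments.
segs = [
    [(0.9, "dense", 784, 128), (0.4, "dense", 784, 256)],
    [(0.6, "dense", 128, 64), (0.7, "dense", 256, 64)],
    [(0.8, "dense", 64, 10)],
]
print(infer_architecture(segs))
# → [('dense', 784, 128), ('dense', 128, 64), ('dense', 64, 10)]
```

Note how the globally consistent chain wins even though the second segment's locally best candidate (score 0.7) belongs to an inconsistent sequence.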
1 code implementation • ECCV 2020 • Yapeng Tian, Dingzeyu Li, Chenliang Xu
In this paper, we introduce a new problem, named audio-visual video parsing, which aims to parse a video into temporal event segments and label them as either audible, visible, or both.
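The output format of audio-visual video parsing can be sketched as follows. This is only a thresholding illustration of the label space (audible, visible, or both per temporal segment); the paper itself learns these labels with a multimodal network, and the scores and threshold here are made-up assumptions.

```python
def parse_segments(audio_scores, visual_scores, thresh=0.5):
    """Label each temporal segment from per-modality event scores."""
    labels = []
    for a, v in zip(audio_scores, visual_scores):
        audible, visible = a >= thresh, v >= thresh
        if audible and visible:
            labels.append("audio-visual")   # event both heard and seen
        elif audible:
            labels.append("audible")        # heard only (e.g. off-screen)
        elif visible:
            labels.append("visible")        # seen only (e.g. muted action)
        else:
            labels.append("background")
    return labels

# Three hypothetical one-second segments of a video.
print(parse_segments([0.9, 0.2, 0.7], [0.8, 0.6, 0.1]))
# → ['audio-visual', 'visible', 'audible']
```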
3 code implementations • 27 Apr 2020 • Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li
We present a method that generates expressive talking heads from a single facial image with audio as the only input.
1 code implementation • 21 Dec 2019 • Yapeng Tian, Chenliang Xu, Dingzeyu Li
We are interested in applying deep networks in the absence of a training dataset.
no code implementations • 14 Nov 2019 • Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to generate similar sounding sources with virtual models.
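Once a room's acoustic characteristics are captured as an impulse response, re-rendering any dry source in that room reduces to convolution. The sketch below shows only that last, standard step (not the paper's commodity-device capture method, which is its actual contribution); the toy signals are illustrative.

```python
def convolve(dry, ir):
    """Render a dry signal through a room impulse response (direct convolution)."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

# Toy dry signal and a 3-tap impulse response with one delayed reflection.
print(convolve([1.0, 0.5], [1.0, 0.0, 0.3]))
# → [1.0, 0.5, 0.3, 0.15]
```

Real renderers use FFT-based (partitioned) convolution for responses that are thousands of taps long; the direct loop above is only for clarity.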
no code implementations • 12 May 2018 • Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
In our validations, we show that our synthesized spatial audio matches closely with recordings using ambisonic microphones.
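For context on the ambisonic representation the validation compares against: a mono sample arriving from a given direction encodes into first-order B-format (W, X, Y, Z) channels via standard textbook formulas. This sketch only illustrates that representation (here with the classic FuMa W scaling); it is not the paper's synthesis method.

```python
import math

def encode_foa(s, azimuth, elevation):
    """Encode mono sample s from (azimuth, elevation) in radians into
    first-order ambisonic B-format channels (FuMa-scaled W)."""
    w = s / math.sqrt(2.0)                           # omnidirectional
    x = s * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = s * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = s * math.sin(elevation)                      # up-down
    return w, x, y, z

# A source directly ahead (azimuth 0, elevation 0) excites only W and X.
print(encode_foa(1.0, 0.0, 0.0))
```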
no code implementations • 9 Aug 2017 • Dingzeyu Li
Incorporating accurate physics-based simulation into interactive design tools is challenging.
no code implementations • 18 Jul 2017 • Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
We present AirCode, a technique that allows the user to tag physically fabricated objects with given information.