Search Results for author: Dingzeyu Li

Found 12 papers, 3 papers with code

Audio-Visual Fusion Layers for Event Type Aware Video Recognition

no code implementations • 12 Feb 2022 • Arda Senocak, Junsik Kim, Tae-Hyun Oh, Hyeonggon Ryu, Dingzeyu Li, In So Kweon

The human brain is continuously inundated with multisensory information and its complex interactions coming from the outside world at any given moment.

Multi-Task Learning • Video Recognition +1

Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel

no code implementations • 15 Sep 2021 • Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi Zheng

We find that each layer component's evaluation produces an identifiable magnetic signal signature, from which layer topology, width, function type, and sequence order can be inferred using a suitably trained classifier and a joint consistency optimization based on integer programming.
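
A minimal sketch of the inference step described above, assuming per-segment layer hypotheses have already been scored by a classifier: an integer program picks one hypothesis per signal segment while enforcing width consistency between consecutive layers. All hypotheses, scores, and widths below are hypothetical, and this is not the authors' implementation (it uses the pulp solver for the integer program).

```python
# Hedged sketch, not the paper's code: choose one layer hypothesis per
# signal segment to maximize classifier score, subject to consistency.
import pulp

# Candidate (layer_type, in_width, out_width, classifier_score) per segment.
hypotheses = [
    [("conv", 3, 64, 0.9), ("conv", 3, 32, 0.6)],    # segment 0
    [("relu", 64, 64, 0.8), ("relu", 32, 32, 0.7)],  # segment 1
    [("fc", 64, 10, 0.85), ("fc", 128, 10, 0.5)],    # segment 2
]

prob = pulp.LpProblem("layer_sequence", pulp.LpMaximize)
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i, seg in enumerate(hypotheses) for j in range(len(seg))}

# Objective: total classifier score of the chosen hypotheses.
prob += pulp.lpSum(x[i, j] * hypotheses[i][j][3] for (i, j) in x)

# Exactly one hypothesis per segment.
for i, seg in enumerate(hypotheses):
    prob += pulp.lpSum(x[i, j] for j in range(len(seg))) == 1

# Joint consistency: out-width of layer i must match in-width of layer i+1.
for i in range(len(hypotheses) - 1):
    for j, h in enumerate(hypotheses[i]):
        for k, g in enumerate(hypotheses[i + 1]):
            if h[2] != g[1]:
                prob += x[i, j] + x[i + 1, k] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
recovered = [hypotheses[i][j][:3] for (i, j) in sorted(x) if x[i, j].value() > 0.5]
print(recovered)  # e.g. [('conv', 3, 64), ('relu', 64, 64), ('fc', 64, 10)]
```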

Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing

1 code implementation • ECCV 2020 • Yapeng Tian, Dingzeyu Li, Chenliang Xu

In this paper, we introduce a new problem, named audio-visual video parsing, which aims to parse a video into temporal event segments and label them as either audible, visible, or both.

Multiple Instance Learning
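
Because supervision is only available at the video level, the parsing task is naturally framed as multiple instance learning: segment-level audio and visual predictions are pooled into a video-level prediction for training, then thresholded per segment at test time. Below is a minimal PyTorch sketch of that idea, with hypothetical feature sizes and heads rather than the authors' model.

```python
# Hedged sketch of MIL-style weakly supervised audio-visual parsing.
import torch
import torch.nn as nn

class MILParser(nn.Module):
    def __init__(self, dim=128, num_events=25):
        super().__init__()
        self.audio_head = nn.Linear(dim, num_events)   # per-segment audio logits
        self.visual_head = nn.Linear(dim, num_events)  # per-segment visual logits

    def forward(self, audio_feats, visual_feats):
        # audio_feats, visual_feats: (batch, segments, dim) precomputed features
        p_a = torch.sigmoid(self.audio_head(audio_feats))    # (B, T, C)
        p_v = torch.sigmoid(self.visual_head(visual_feats))  # (B, T, C)
        # MIL pooling: a video has an event if any segment in any modality does.
        video_prob = torch.maximum(p_a, p_v).max(dim=1).values  # (B, C)
        return p_a, p_v, video_prob

model = MILParser()
a = torch.randn(2, 10, 128)   # 10 one-second segments, hypothetical features
v = torch.randn(2, 10, 128)
p_a, p_v, video_prob = model(a, v)
# Training uses only video-level labels.
labels = torch.randint(0, 2, (2, 25)).float()
loss = nn.functional.binary_cross_entropy(video_prob, labels)
# At test time, p_a > 0.5 marks audible segments, p_v > 0.5 visible ones,
# and their intersection the audio-visible events.
```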

MakeItTalk: Speaker-Aware Talking-Head Animation

3 code implementations • 27 Apr 2020 • Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li

We present a method that generates expressive talking heads from a single facial image with audio as the only input.

Talking Face Generation • Talking Head Generation

Deep Audio Prior

1 code implementation • 21 Dec 2019 • Yapeng Tian, Chenliang Xu, Dingzeyu Li

We are interested in applying deep networks in the absence of a training dataset.

Blind Source Separation • Texture Synthesis
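
The approach follows the deep-prior idea: a randomly initialized network fitted to a single observation acts as the prior, so no training dataset is required. A hedged PyTorch sketch of that idea for a two-source mixture spectrogram follows; the architecture, shapes, and step count are hypothetical and not the released code.

```python
# Hedged sketch: two randomly initialized nets, each fed fixed noise, are
# optimized so the sum of their outputs reconstructs one observed mixture.
import torch
import torch.nn as nn

freq_bins, frames = 256, 128
mixture = torch.rand(1, 1, freq_bins, frames)   # placeholder mixture spectrogram

def source_net():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # non-negative magnitudes
    )

nets = [source_net(), source_net()]
noises = [torch.randn(1, 1, freq_bins, frames) for _ in nets]  # fixed inputs
opt = torch.optim.Adam([p for n in nets for p in n.parameters()], lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    sources = [net(z) for net, z in zip(nets, noises)]
    loss = nn.functional.mse_loss(sum(sources), mixture)
    loss.backward()
    opt.step()
# After fitting, `sources` holds the separated magnitude estimates.
```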

Scene-Aware Audio Rendering via Deep Acoustic Analysis

no code implementations • 14 Nov 2019 • Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha

We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to generate similar sounding sources with virtual models.

Sound • Graphics • Multimedia • Audio and Speech Processing
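
As a rough illustration of how an estimated room characteristic can drive rendering (a classical approximation, not the paper's learned pipeline): given an estimated reverberation time, one can synthesize a decaying-noise impulse response and convolve a dry source with it. All values below are hypothetical.

```python
# Hedged sketch: exponentially decaying noise as an approximate room
# impulse response for a given RT60, applied to a dry source.
import numpy as np
from scipy.signal import fftconvolve

sr = 48000
rt60 = 0.6                                 # estimated reverberation time (s)
t = np.arange(int(rt60 * sr)) / sr
# Amplitude decay chosen so energy drops by 60 dB over rt60 seconds.
envelope = 10 ** (-3.0 * t / rt60)
impulse_response = np.random.randn(len(t)) * envelope
impulse_response /= np.abs(impulse_response).max()

dry = np.random.randn(sr)                  # placeholder 1-second dry source
wet = fftconvolve(dry, impulse_response)   # reverberant version of the source
```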

Scene-Aware Audio for 360° Videos

no code implementations • 12 May 2018 • Dingzeyu Li, Timothy R. Langlois, Changxi Zheng

In our validations, we show that our synthesized spatial audio matches closely with recordings using ambisonic microphones.
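
Ambisonics is the standard spatial-audio representation for 360° video, and it is what ambisonic microphones record. As a hedged illustration of the representation only (not the paper's synthesis method), first-order B-format encoding of a mono source from a known direction looks like this; the signal and angles are hypothetical.

```python
# Hedged sketch: traditional first-order B-format (W, X, Y, Z) encoding.
import numpy as np

def encode_foa(mono, azimuth_rad, elevation_rad):
    """Return W, X, Y, Z channels for a mono signal arriving from one direction."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)
    z = mono * np.sin(elevation_rad)
    return np.stack([w, x, y, z])

sr = 48000
mono = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)                  # 1 s test tone
bformat = encode_foa(mono, azimuth_rad=np.pi / 4, elevation_rad=0.0)  # shape (4, sr)
```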

Interacting with Acoustic Simulation and Fabrication

no code implementations • 9 Aug 2017 • Dingzeyu Li

Incorporating accurate physics-based simulation into interactive design tools is challenging.

AirCode: Unobtrusive Physical Tags for Digital Fabrication

no code implementations • 18 Jul 2017 • Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng

We present AirCode, a technique that allows the user to tag physically fabricated objects with given information.

Object • Robotic Grasping +1
