Search Results for author: Zhikang Dong

Found 5 papers, 1 paper with code

Face-GPS: A Comprehensive Technique for Quantifying Facial Muscle Dynamics in Videos

no code implementations • 11 Jan 2024 • Juni Kim, Zhikang Dong, Pawel Polak

We introduce a novel method that combines differential geometry, kernel smoothing, and spectral analysis to quantify facial muscle activity from widely accessible video recordings, such as those captured on personal smartphones.
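The snippet names kernel smoothing and spectral analysis as building blocks. Purely as an illustration of those two steps, the minimal sketch below smooths a single landmark trajectory and reads off its dominant motion frequency; it assumes landmarks have already been extracted per frame, and the function names, bandwidth, and toy signal are hypothetical, not the Face-GPS pipeline.

import numpy as np

def gaussian_kernel_smooth(signal, fps=30.0, bandwidth=0.1):
    """Nadaraya-Watson smoothing of a 1-D landmark coordinate over time.

    bandwidth is in seconds; fps is the video frame rate.
    """
    t = np.arange(len(signal)) / fps
    # Pairwise time differences between evaluation points and samples.
    diffs = t[:, None] - t[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ signal

def dominant_frequency(signal, fps=30.0):
    """Return the strongest non-DC frequency (Hz) of the signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]

# Toy example: a noisy 2 Hz oscillation standing in for a landmark's y-coordinate.
fps = 30.0
t = np.arange(90) / fps
trajectory = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(len(t))
smoothed = gaussian_kernel_smooth(trajectory, fps=fps, bandwidth=0.05)
print("dominant frequency (Hz):", dominant_frequency(smoothed, fps=fps))

The Gaussian kernel is just one common smoother; the bandwidth trades temporal detail against noise suppression before the spectral step.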

Tackling Data Bias in MUSIC-AVQA: Crafting a Balanced Dataset for Unbiased Question-Answering

1 code implementation • 10 Oct 2023 • Xiulong Liu, Zhikang Dong, Peng Zhang

In recent years, there has been a growing emphasis on the intersection of audio, vision, and text modalities, driving forward the advancements in multimodal research.

Question Answering

MuseChat: A Conversational Music Recommendation System for Videos

no code implementations • 10 Oct 2023 • Zhikang Dong, Bin Chen, Xiulong Liu, Pawel Polak, Peng Zhang

The reasoning module, built on a Large Language Model (Vicuna-7B) and extended to multi-modal inputs, is able to provide a reasonable explanation for the recommended music.

Language Modelling · Large Language Model · +2
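The MuseChat snippet above describes a reasoning module that uses a Large Language Model to justify a recommendation. As a rough illustration only, the sketch below assembles a text prompt from a video caption, a user request, and a recommended track; the prompt format and field names are assumptions rather than the paper's interface, and in practice the string would be passed to a Vicuna-7B-style model for decoding.

# Hypothetical prompt builder; not MuseChat's actual multi-modal pipeline.
def build_explanation_prompt(video_caption: str, track_title: str, user_request: str) -> str:
    return (
        "You are a music recommendation assistant.\n"
        f"Video content: {video_caption}\n"
        f"User request: {user_request}\n"
        f"Recommended track: {track_title}\n"
        "Explain briefly why this track suits the video."
    )

prompt = build_explanation_prompt(
    video_caption="A timelapse of a city skyline at sunset.",
    track_title="Ambient synthwave, slow tempo",
    user_request="Something calm but modern.",
)
print(prompt)  # In a full system, this text would be fed to the LLM.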

Detection of (Hidden) Emotions from Videos using Muscles Movements and Face Manifold Embedding

no code implementations • 1 Nov 2022 • Juni Kim, Zhikang Dong, Eric Guan, Judah Rosenthal, Shi Fu, Miriam Rafailovich, Pawel Polak

Although the original FAN model achieves very high out-of-sample performance on the original CK++ videos, it does not perform as well on videos of hidden emotions.

Optical Flow Estimation

CP-PINNs: Data-Driven Changepoints Detection in PDEs Using Online Optimized Physics-Informed Neural Networks

no code implementations • 18 Aug 2022 • Zhikang Dong, Pawel Polak

However, when changepoints are present, our approach yields superior parameter estimation, improved model fitting, and reduced training error compared to the original PINNs model.

Meta-Learning
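The CP-PINNs title refers to changepoint detection inside a physics-informed neural network. Purely to illustrate that idea, the sketch below writes a PINN residual for u_t = k(t) * u_xx with a diffusivity that switches between two learnable values at a learnable changepoint; the PDE, network architecture, and smooth-switch parameterization are assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        # Two diffusivity regimes and a changepoint location, all learnable.
        self.k1 = nn.Parameter(torch.tensor(0.5))
        self.k2 = nn.Parameter(torch.tensor(0.5))
        self.tau = nn.Parameter(torch.tensor(0.5))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

    def diffusivity(self, t, sharpness=50.0):
        # Smooth switch from k1 to k2 around t = tau (differentiable surrogate).
        s = torch.sigmoid(sharpness * (t - self.tau))
        return (1 - s) * self.k1 + s * self.k2

def pde_residual(model, x, t):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - model.diffusivity(t) * u_xx

# One residual evaluation on random collocation points in the unit square.
model = PINN()
x = torch.rand(64, 1)
t = torch.rand(64, 1)
loss = pde_residual(model, x, t).pow(2).mean()
loss.backward()  # gradients reach the network weights, k1, k2, and tau
print(float(loss))

The sigmoid switch is one simple way to keep the changepoint location differentiable so it can be optimized jointly with the PDE parameters.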
