Search Results for author: Zizheng Guo

Found 6 papers, 3 papers with code

CSCE: Boosting LLM Reasoning by Simultaneous Enhancing of Causal Significance and Consistency

no code implementations · 20 Sep 2024 · Kangsheng Wang, Xiao Zhang, Zizheng Guo, Tianyu Hu, Huimin Ma

Chain-based reasoning methods such as chain of thought (CoT) play an increasingly important role in solving reasoning tasks with large language models (LLMs).
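
For illustration, here is a minimal sketch of generic chain-of-thought prompting, not the paper's CSCE method itself; the `generate` callable is a hypothetical stand-in for any LLM completion API.

```python
# Minimal illustration of chain-of-thought (CoT) prompting: the prompt cues
# the model to reason step by step before answering. `generate` is a
# hypothetical stand-in mapping a prompt string to a completion string.

def build_cot_prompt(question: str) -> str:
    # The trailing cue elicits intermediate reasoning steps (the "chain").
    return f"Q: {question}\nA: Let's think step by step."

def answer_with_cot(generate, question: str) -> str:
    completion = generate(build_cot_prompt(question))
    # By convention in this sketch, the final line of the chain is the answer.
    return completion.strip().splitlines()[-1]
```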

Synergistic Spotting and Recognition of Micro-Expression via Temporal State Transition

1 code implementation · 15 Sep 2024 · Bochao Zou, Zizheng Guo, Wenfeng Qin, Xin Li, Kangsheng Wang, Huimin Ma

The analysis of micro-expressions generally involves two main tasks: spotting micro-expression intervals in long videos and recognizing the emotions associated with these intervals.
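
For context, a schematic of this two-stage pipeline, spotting followed by recognition; `spot_intervals` and `classify_emotion` are hypothetical models, not the paper's temporal state-transition architecture.

```python
# Schematic two-stage micro-expression pipeline: spot candidate intervals in
# a long video, then classify the emotion of each spotted interval.

from typing import Callable, List, Tuple

Interval = Tuple[int, int]  # (start_frame, end_frame)

def analyze_micro_expressions(
    frames: List,  # decoded video frames
    spot_intervals: Callable[[List], List[Interval]],  # hypothetical spotter
    classify_emotion: Callable[[List], str],           # hypothetical recognizer
) -> List[Tuple[Interval, str]]:
    results = []
    for start, end in spot_intervals(frames):
        clip = frames[start:end]
        results.append(((start, end), classify_emotion(clip)))
    return results
```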

Classification

Data Debugging is NP-hard for Classifiers Trained with SGD

no code implementations · 2 Aug 2024 · Zizheng Guo, PengYu Chen, Yanzhang Fu, Dongjing Miao

(1) If the loss function and the model dimension are not fixed, Debuggable is NP-complete regardless of the order in which the training samples are processed during SGD.
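
To make the combinatorial flavor concrete, here is a brute-force sketch of the underlying decision question, whether deleting some subset of training samples fixes a target prediction after retraining; `train_sgd` and the data layout are hypothetical placeholders, and the exact problem definition follows the paper.

```python
# Brute-force sketch of a "Debuggable"-style decision problem: does removing
# some subset of at most `max_removals` training samples make the retrained
# model classify the target correctly? The exponential subset enumeration is
# exactly what NP-hardness says we cannot avoid in general.

from itertools import combinations

def debuggable(train_set, target_x, target_y, train_sgd, max_removals):
    for k in range(max_removals + 1):
        for removed in combinations(range(len(train_set)), k):
            dropped = set(removed)
            kept = [s for i, s in enumerate(train_set) if i not in dropped]
            model = train_sgd(kept)  # retrain with a fixed SGD schedule
            if model.predict(target_x) == target_y:
                return True  # a debugging subset exists
    return False
```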

RhythmMamba: Fast Remote Physiological Measurement with Arbitrary Length Videos

1 code implementation · 9 Apr 2024 · Bochao Zou, Zizheng Guo, Xiaocheng Hu, Huimin Ma

Remote photoplethysmography (rPPG) is a non-contact method for detecting physiological signals from facial videos, holding great potential in various applications such as healthcare, affective computing, and anti-spoofing.
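
For orientation, below is a minimal rPPG baseline in the spirit of the classic green-channel method, not RhythmMamba itself; it assumes `roi_frames` holds RGB face crops sampled at `fps`.

```python
# Minimal rPPG baseline: average the green channel over a face region per
# frame, band-pass to the heart-rate band, and read the dominant frequency.
# Assumes roi_frames is a (T, H, W, 3) RGB array of cropped face frames.

import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_frames: np.ndarray, fps: float) -> float:
    # Spatially averaged green channel forms the raw pulse signal.
    signal = roi_frames[..., 1].reshape(len(roi_frames), -1).mean(axis=1)
    signal = signal - signal.mean()
    # Keep 0.7-3.0 Hz, i.e. roughly 42-180 beats per minute.
    b, a = butter(3, [0.7, 3.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # The dominant spectral peak gives the pulse rate.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    return float(freqs[np.argmax(spectrum)] * 60.0)  # beats per minute
```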

Mamba

RhythmFormer: Extracting rPPG Signals Based on Hierarchical Temporal Periodic Transformer

1 code implementation · 20 Feb 2024 · Bochao Zou, Zizheng Guo, Jiansheng Chen, Huimin Ma

Due to the periodic nature of rPPG signals, the Transformer's capacity for capturing long-range dependencies was assumed to be advantageous for such signals.
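
As a toy illustration of that assumption, plain single-head self-attention over temporal tokens lets frames a full pulse period apart interact directly; this generic sketch is not RhythmFormer's hierarchical periodic module.

```python
# Toy single-head self-attention over temporal tokens: every time step
# attends to every other, so long-range (period-scale) dependencies are
# captured in a single layer.

import numpy as np

def temporal_self_attention(x: np.ndarray) -> np.ndarray:
    # x: (T, d) sequence of per-frame feature tokens.
    T, d = x.shape
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)  # (T, T): all pairs of time steps
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time
    return weights @ v  # long-range temporal mixing in one step
```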

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

no code implementations · ICLR 2022 · Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong

In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point: the perception system outputs guidance that is more actionable than kinematic structure estimation by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals.
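
To picture what such outputs might look like, here is a schematic data structure for dense per-point affordances plus trajectory proposals; all field names and shapes are illustrative, not VAT-Mart's actual interface.

```python
# Schematic container for object-centric actionable visual priors: a dense
# actionability score for every point on the articulated object, plus
# candidate interaction trajectories per point. Shapes are illustrative.

from dataclasses import dataclass
import numpy as np

@dataclass
class ActionableVisualPrior:
    points: np.ndarray        # (N, 3) object point cloud
    affordance: np.ndarray    # (N,) per-point actionability score
    trajectories: np.ndarray  # (N, K, S, 6) K proposals per point, each with
                              # S waypoints of 3D position + orientation

    def best_contact(self) -> int:
        # Index of the most actionable point to interact with.
        return int(np.argmax(self.affordance))
```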
