no code implementations • 29 Aug 2024 • Lu Dong, Xiao Wang, Srirangaraj Setlur, Venu Govindaraju, Ifeoma Nwogu
Our experimental results demonstrate that our proposed method outperforms the state of the art on the AffectNet VA estimation and RAF-DB classification tasks.
no code implementations • 19 Jul 2024 • Shuang Li, Ziyuan Pu, Nan Zhang, Duxin Chen, Lu Dong, Daniel J. Graham, Yinhai Wang
In the absence of ground-truth data, a synthetic data generation procedure is proposed to emulate the causal mechanism between traffic speed, crashes, and covariates.
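The idea above can be illustrated with a minimal sketch: covariates drive traffic speed, and both drive crash counts, so the true causal effect is known by construction. The function name, functional forms, and coefficients below are illustrative assumptions, not the paper's actual generation procedure.

```python
import numpy as np

def generate_synthetic_traffic(n=1000, seed=0):
    """Generate toy (covariates, speed, crashes) data with a known causal chain."""
    rng = np.random.default_rng(seed)
    covariates = rng.normal(size=(n, 2))  # e.g. weather, traffic volume (assumed)
    # Speed depends on the covariates plus noise.
    speed = 60 - 5 * covariates[:, 0] + 3 * covariates[:, 1] + rng.normal(0, 2, n)
    # Crash counts depend causally on speed and covariates via a log-linear rate.
    log_rate = -3.0 + 0.03 * speed + 0.2 * covariates[:, 0]
    crashes = rng.poisson(np.exp(log_rate))
    return covariates, speed, crashes
```

Because the coefficient on speed is fixed in the generator, an estimator's recovered effect can be checked against the ground truth.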
no code implementations • 28 May 2024 • Mengyi Shan, Lu Dong, Yutao Han, Yuan Yao, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, Mitch Hill
To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
no code implementations • 13 May 2024 • Lu Dong, Lipisha Chaudhary, Fei Xu, Xiao Wang, Mason Lary, Ifeoma Nwogu
Achieving expressive 3D motion reconstruction and automatic generation for isolated sign words can be challenging due to the lack of real-world 3D sign-word data, the complex nuances of signing motions, and the need for cross-modal understanding of sign language semantics.
1 code implementation • CVPR 2024 • Yifei Huang, Guo Chen, Jilan Xu, Mingfang Zhang, Lijin Yang, Baoqi Pei, Hongjie Zhang, Lu Dong, Yali Wang, Limin Wang, Yu Qiao
Along with the videos, we record high-quality gaze data and provide detailed multimodal annotations, forming a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints.
Ranked #1 on Action Anticipation on EgoExoLearn (using extra training data)
no code implementations • 8 Dec 2023 • Hongjie Zhang, Yi Liu, Lu Dong, Yifei Huang, Zhen-Hua Ling, Yali Wang, Limin Wang, Yu Qiao
While several long-form VideoQA datasets have been introduced, the lengths of both the videos used to curate questions and the sub-clips of clues leveraged to answer them have not yet met the criteria for genuine long-form video understanding.
1 code implementation • 23 Sep 2023 • Wenzhe Cai, Guangran Cheng, Lingyue Kong, Lu Dong, Changyin Sun
We consider the problem of improving the generalization of mobile robots and achieving sim-to-real transfer for navigation skills.
1 code implementation • 18 Aug 2023 • Yuanhao Zhai, Mingzhen Huang, Tianyu Luan, Lu Dong, Ifeoma Nwogu, Siwei Lyu, David Doermann, Junsong Yuan
In this paper, we propose ATOM (ATomic mOtion Modeling) to mitigate this problem, by decomposing actions into atomic actions, and employing a curriculum learning strategy to learn atomic action composition.
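The curriculum idea described above can be sketched minimally: order training samples from few atomic actions to many, so the model learns simple compositions before complex ones. The sample structure and field names below are hypothetical stand-ins, not ATOM's actual data format.

```python
def curriculum_order(samples):
    """Sort text-to-motion samples by how many atomic actions they compose."""
    return sorted(samples, key=lambda s: len(s["atomic_actions"]))

# Toy samples: each prompt decomposes into a list of atomic actions.
samples = [
    {"text": "walk then jump then wave", "atomic_actions": ["walk", "jump", "wave"]},
    {"text": "walk", "atomic_actions": ["walk"]},
    {"text": "walk then jump", "atomic_actions": ["walk", "jump"]},
]
```

Training would then iterate over `curriculum_order(samples)`, presenting single-action prompts before multi-action compositions.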
no code implementations • 29 Nov 2022 • Zichen He, Chunwei Song, Lu Dong
Safe and efficient co-planning of multiple robots in environments shared with pedestrians is promising for many applications.
no code implementations • 1 Mar 2022 • Lu Dong, ZhenHua Ling, Qiang Ling, Zefeng Lai
Then, based on the estimated student vectors, the probabilistic part of DINA can be modified into a student-dependent model in which the slip and guess rates are related to the student vectors.
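A minimal sketch of that modification: in standard DINA the slip and guess rates are per-item constants, whereas here they are made functions of a per-student vector. The logistic parameterization and weight names below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def correct_prob(mastered, student_vec, item_w_slip, item_w_guess):
    """P(correct) under a student-dependent DINA-style response rule."""
    # Slip and guess rates depend on the student vector (assumed logistic link).
    slip = sigmoid(sum(w * t for w, t in zip(item_w_slip, student_vec)))
    guess = sigmoid(sum(w * t for w, t in zip(item_w_guess, student_vec)))
    # DINA response rule: masters answer correctly unless they slip;
    # non-masters answer correctly only by guessing.
    return (1 - slip) if mastered else guess
```

With constant weights across students this reduces to the standard DINA slip/guess behavior; student-specific vectors let the same item penalize or favor different students.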
1 code implementation • 26 Jan 2022 • Lu Dong, Zhi-Qiang Guo, Chao-Hong Tan, Ya-Jun Hu, Yuan Jiang, Zhen-Hua Ling
Neural network models have achieved state-of-the-art performance on grapheme-to-phoneme (G2P) conversion.
no code implementations • 13 Dec 2021 • Zichen He, Lu Dong, Chunwei Song, Changyin Sun
In this paper, a novel hybrid multi-robot motion planner is presented that can be applied under non-communicating and locally observable conditions.