no code implementations • 7 Apr 2025 • Monir Abughalwa, Diep N. Nguyen, Dinh Thai Hoang, Van-Dinh Nguyen, Ming Zeng, Quoc-Viet Pham, Eryk Dutkiewicz
The problem is even more difficult under finite blocklength constraints, which are common in ultra-reliable low-latency communication (URLLC) and massive machine-type communications (mMTC).
1 code implementation • 22 Nov 2024 • Zizhao Wu, Jian Shi, Xuan Deng, Cheng Zhang, Genfu Yang, Ming Zeng, Yunhai Wang
Point cloud completion aims to infer a complete shape from its partial observation.
no code implementations • 3 Jul 2024 • Mohsen Ahmadzadeh, Saeid Pakravan, Ghosheh Abed Hodtani, Ming Zeng, Jean-Yves Chouinard, Leslie A. Rusch
This paper investigates an over-the-air federated learning (OTA-FL) system that employs fluid antennas (FAs) at an access point.
1 code implementation • 25 Apr 2024 • Fenglin Liu, Zheng Li, Hongjian Zhou, Qingyu Yin, Jingfeng Yang, Xianfeng Tang, Chen Luo, Ming Zeng, Haoming Jiang, Yifan Gao, Priyanka Nigam, Sreyashi Nag, Bing Yin, Yining Hua, Xuan Zhou, Omid Rohanian, Anshul Thakur, Lei Clifton, David A. Clifton
The adoption of large language models (LLMs) to assist clinicians has attracted remarkable attention.
no code implementations • 22 Apr 2024 • Chengrui Wang, PengFei Liu, Min Zhou, Ming Zeng, Xubin Li, Tiezheng Ge, Bo Zheng
The style guidance is a hand image, e.g., the malformed hand itself, and serves as the style reference for hand refinement.
no code implementations • 6 Feb 2024 • Amir Omidi, Mai Banawan, Erwan Weckenmann, Benoit Paquin, Alireza Geravand, Zibo Zheng, Wei Shi, Ming Zeng, Leslie A. Rusch
We present an experimental validation of a 4 dB improvement for ASE-limited PAM4 at 60 Gbaud using an optimized constellation and an NN decoder.
no code implementations • 10 Aug 2023 • Saeid Pakravan, Jean-Yves Chouinard, Xingwang Li, Ming Zeng, Wanming Hao, Quoc-Viet Pham, Octavia A. Dobre
Nevertheless, from a security viewpoint, when multiple users share the same time-frequency resource, keeping information confidential becomes a concern.
no code implementations • 8 Jul 2023 • Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
Robust explanations of machine learning models are critical to establishing human trust in the models.
1 code implementation • 22 May 2023 • Renshuai Liu, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, Xuan Cheng
In the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair.
no code implementations • 24 Mar 2023 • PengFei Liu, Wenjin Deng, Hengda Li, Jintai Wang, Yinglin Zheng, Yiwei Ding, Xiaohu Guo, Ming Zeng
In this paper, we present a method for this task that produces natural motions of the lips, facial expression, head pose, and eye states.
no code implementations • 28 Dec 2022 • Chao Chen, Chenghua Guo, Rufeng Chen, Guixiang Ma, Ming Zeng, Xiangwen Liao, Xi Zhang, Sihong Xie
To foster trust in machine learning models, explanations must be faithful and stable for consistent insights.
no code implementations • 22 Nov 2022 • Honggu Zhou, Xiaogang Peng, Jiawei Mao, Zizhao Wu, Ming Zeng
To address this, we propose PointCMC, a novel cross-modal method that models multi-scale correspondences across modalities for self-supervised point cloud representation learning.
no code implementations • CVPR 2023 • Xiaoyi Dong, Jianmin Bao, Yinglin Zheng, Ting Zhang, Dongdong Chen, Hao Yang, Ming Zeng, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu
Second, masked self-distillation is also consistent with vision-language contrastive learning from the perspective of the training objective, as both utilize the visual encoder for feature alignment; it is thus able to learn local semantics with indirect supervision from the language.
1 code implementation • 22 Jun 2022 • Yiwei Ding, Wenjin Deng, Yinglin Zheng, PengFei Liu, Meihong Wang, Xuan Cheng, Jianmin Bao, Dong Chen, Ming Zeng
In this paper, we present the Intra- and Inter-Human Relation Networks (I^2R-Net) for Multi-Person Pose Estimation.
Ranked #2 on Multi-Person Pose Estimation on OCHuman
2 code implementations • CVPR 2022 • Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, Fang Wen
In this paper, we study the transfer performance of pre-trained models on face analysis tasks and introduce a framework, called FaRL, for general Facial Representation Learning in a visual-linguistic manner.
Ranked #1 on Face Parsing on CelebAMask-HQ (using extra training data)
1 code implementation • ICCV 2021 • Chenxu Zhang, Yifan Zhao, Yifei HUANG, Ming Zeng, Saifeng Ni, Madhukar Budagavi, Xiaohu Guo
In this paper, we propose a talking face generation method that takes an audio signal as input and a short target video clip as reference, and synthesizes a photo-realistic video of the target face with natural lip motions, head poses, and eye blinks that are in-sync with the input audio signal.
1 code implementation • ICCV 2021 • Yinglin Zheng, Jianmin Bao, Dong Chen, Ming Zeng, Fang Wen
The first stage is a fully temporal convolution network (FTCN).
Ranked #4 on DeepFake Detection on FakeAVCeleb
no code implementations • ICCV 2021 • Jie Xu, Yazhou Ren, Huayi Tang, Xiaorong Pu, Xiaofeng Zhu, Ming Zeng, Lifang He
The prior of the view-common variable approximately follows a discrete Gumbel-Softmax distribution, which is introduced to extract the common cluster factor across multiple views.
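For reference, a discrete Gumbel-Softmax relaxation in its standard form (a generic statement of the distribution; the paper's exact parameterization may differ) perturbs category logits with i.i.d. Gumbel noise and softens the argmax with a temperature τ:

```latex
% Standard Gumbel-Softmax sample over K categories with probabilities \pi_k
% and temperature \tau; g_k are i.i.d. Gumbel(0,1) draws.
y_k = \frac{\exp\!\big((\log \pi_k + g_k)/\tau\big)}
           {\sum_{j=1}^{K} \exp\!\big((\log \pi_j + g_j)/\tau\big)},
\qquad g_k = -\log(-\log u_k), \quad u_k \sim \mathrm{Uniform}(0,1).
```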
2 code implementations • 21 Jun 2021 • Jianpeng Chen, Yujing Wang, Ming Zeng, Zongyi Xiang, Bitan Hou, Yunhai Tong, Ole J. Mengshoel, Yazhou Ren
Specifically, the proposed CustomGNN can automatically learn high-level semantics for specific downstream tasks, highlighting semantically relevant paths as well as filtering out task-irrelevant noise in a graph.
no code implementations • 20 Mar 2021 • Quoc-Viet Pham, Ming Zeng, Rukhsana Ruby, Thien Huynh-The, Won-Joo Hwang
Finally, simulations illustrate the potential of our proposed UAV-SFL approach in providing a sustainable solution for FL-based wireless networks, and in reducing the UAV transmit power by 32.95%, 63.18%, and 78.81% compared with the benchmarks.
no code implementations • 30 Jul 2020 • Yin-Bo Liu, Ming Zeng, Qing-Hao Meng
Vision-based lane detection (LD) is a key part of autonomous driving technology, and it is also a challenging problem.
no code implementations • 9 Jun 2020 • Yin-Bo Liu, Ming Zeng, Qing-Hao Meng
As an important part of linear perspective, vanishing points (VPs) provide useful clues for mapping objects from 2D photos to 3D space.
no code implementations • 8 Jun 2020 • Yin-Bo Liu, Ming Zeng, Qing-Hao Meng
Unstructured road vanishing point (VP) detection is a challenging problem, especially in the field of autonomous driving.
no code implementations • 21 Nov 2019 • Bitan Hou, Yujing Wang, Ming Zeng, Shan Jiang, Ole J. Mengshoel, Yunhai Tong, Jing Bai
For these applications, graph embedding is crucial as it provides vector representations of the graph.
2 code implementations • CVPR 2019 • Jinpeng Lin, Hao Yang, Dong Chen, Ming Zeng, Fang Wen, Lu Yuan
It uses a hierarchical local-based method for inner facial components and global methods for outer facial components.
no code implementations • 7 Oct 2018 • Ming Zeng, Haoxiang Gao, Tong Yu, Ole J. Mengshoel, Helge Langseth, Ian Lane, Xiaobing Liu
To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention.
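A minimal sketch of what temporal attention over per-timestep sensor features can look like (PyTorch-style; module and tensor names are illustrative assumptions, not the paper's implementation — sensor attention would weight the sensor-channel axis analogously):

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Weights each timestep of a feature sequence with a learned score
    and pools to a single vector (illustrative sketch, not the paper's code)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, h):                                      # h: (batch, time, feat_dim)
        a = torch.softmax(self.score(h).squeeze(-1), dim=1)    # attention weights: (batch, time)
        return (a.unsqueeze(-1) * h).sum(dim=1)                # pooled features: (batch, feat_dim)

# Usage: pool per-timestep sensor features before a classifier head.
feats = torch.randn(8, 100, 64)         # 8 windows, 100 timesteps, 64-d features
pooled = TemporalAttention(64)(feats)   # -> (8, 64)
```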
no code implementations • 22 Jan 2018 • Ming Zeng, Tong Yu, Xiao Wang, Le T. Nguyen, Ole J. Mengshoel, Ian Lane
Labeled data used for training activity recognition classifiers are usually limited in terms of size and diversity.
no code implementations • 12 Jul 2016 • Benjamin Elizalde, Guan-Lin Chao, Ming Zeng, Ian Lane
In particular, we present a method to compute and use semantic acoustic features for city identification, and the features show semantic evidence for the identification.
no code implementations • CVPR 2013 • Ming Zeng, Jiaxiang Zheng, Xuan Cheng, Xinguo Liu
We represent the shape motion by a deformation graph, and propose a model-to-part method to gradually integrate sampled points of depth scans into the deformation graph.
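As generic background on deformation graphs (a sketch in the spirit of embedded deformation, under that assumption only; it is not the paper's model-to-part integration scheme), each graph node carries a local rotation and translation, and a point is warped by blending the transforms of its K influencing nodes:

```python
import numpy as np

def deform_points(points, node_pos, node_R, node_t, weights, neighbors):
    """Generic embedded-deformation warp (illustrative, not the paper's method).

    points:    (P, 3) surface points to deform
    node_pos:  (N, 3) rest positions of deformation-graph nodes
    node_R:    (N, 3, 3) per-node rotations
    node_t:    (N, 3) per-node translations
    weights:   (P, K) blend weights per point (rows sum to 1)
    neighbors: (P, K) indices of the K nodes influencing each point
    """
    out = np.zeros_like(points)
    for k in range(neighbors.shape[1]):
        idx = neighbors[:, k]                               # (P,)
        g, R, t = node_pos[idx], node_R[idx], node_t[idx]
        # Each node maps p -> R (p - g) + g + t; blend the K contributions.
        warped = np.einsum('pij,pj->pi', R, points - g) + g + t
        out += weights[:, k:k + 1] * warped
    return out
```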