no code implementations • COLING 2022 • Bo Zhou, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Jiexin Xu, XiaoJian Jiang, Qiuxia Li
Existing approaches model this task as a statistical induction problem, predicting a sequence of events by exploiting the similarity between the given goal and known sequences of events.
1 code implementation • ACL 2022 • Zhuoran Jin, Tianyi Men, Hongbang Yuan, Zhitao He, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Yubo Chen, Jun Zhao
CogKGE is designed to provide a unified programming framework for KGE tasks and a series of knowledge representations for downstream tasks.
no code implementations • EMNLP 2021 • Dianbo Sui, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Wei Bi
In this paper, we formulate end-to-end KBP as a direct set generation problem, which avoids having to consider the order of multiple facts.
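As a generic, hedged illustration of the set-generation idea (not this paper's model or loss), an order-invariant objective can match prediction slots to gold facts with the Hungarian algorithm so the loss is independent of fact order; the function and argument names below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def order_invariant_set_loss(pred_log_probs, gold_labels):
    # pred_log_probs: (num_preds, num_classes) log-probabilities per prediction slot
    # gold_labels:    (num_gold,) gold fact indices, with num_gold <= num_preds
    cost = -pred_log_probs[:, gold_labels]            # (num_preds, num_gold), lower = better
    pred_idx, gold_idx = linear_sum_assignment(cost)  # minimal-cost one-to-one matching
    return -pred_log_probs[pred_idx, gold_labels[gold_idx]].mean()

# usage with dummy values: 4 prediction slots, 5 classes, 2 gold facts
log_probs = np.log(np.full((4, 5), 0.2))
print(order_invariant_set_loss(log_probs, np.array([1, 3])))
```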
1 code implementation • 12 Oct 2024 • Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Kang Liu, XiaoJian Jiang, Jiexin Xu, Jun Zhao
Within this framework, we design a reward model to filter out noisy knowledge and a marginal consistent reasoning module to reduce invalid reasoning.
1 code implementation • 16 Aug 2024 • Kai Li, Yupeng Deng, Jingbo Chen, Yu Meng, Zhihao Xi, Junxian Ma, Chenhao Wang, Xiangyu Zhao
Building Footprint Extraction (BFE) from off-nadir aerial images often involves roof segmentation and offset prediction to adjust roof boundaries to the building footprint.
no code implementations • 28 Jul 2024 • Chenhao Wang, Xiaopeng Hong, Zhiheng Ma, Yupeng Wei, Yabin Wang, Xiaopeng Fan
To overcome the gap between different modalities, we propose a modal emulation-based two-pass multi-modal crowd-counting framework that enables efficient modal emulation, alignment, and fusion.
1 code implementation • 10 Jul 2024 • Haoliang Meng, Xiaopeng Hong, Chenhao Wang, Miao Shang, WangMeng Zuo
Multi-modal crowd counting involves estimating crowd density from both visual and thermal/depth images.
1 code implementation • 16 Jun 2024 • Zhuoran Jin, Pengfei Cao, Chenhao Wang, Zhitao He, Hongbang Yuan, Jiachun Li, Yubo Chen, Kang Liu, Jun Zhao
Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from the training corpus; therefore, it is crucial to erase this knowledge from the models.
no code implementations • 2 Jun 2024 • Zining Qin, Chenhao Wang, Huiling Qin, Weijia Jia
Large Language Models (LLMs) have demonstrated amazing capabilities in language generation, text comprehension, and knowledge reasoning.
no code implementations • 11 Mar 2024 • Chenhao Wang, Zihan Chen, Nikolaos Pappas, Howard H. Yang, Tony Q. S. Quek, H. Vincent Poor
In contrast, an Adam-like algorithm converges at the $\mathcal{O}( 1/T )$ rate, demonstrating its advantage in expediting the model training process.
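For context on what "Adam-like" refers to, below is a minimal sketch of a generic Adam-style adaptive update; it is illustrative background only, not the specific algorithm or convergence analysis from the paper:

```python
import numpy as np

def adam_like_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One generic Adam-style step: exponential moving averages of the gradient
    # and its square, bias correction, and a coordinate-wise adaptive step size.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```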
1 code implementation • 5 Mar 2024 • Zhitao He, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Jiexin Xu, Huaijun Li, XiaoJian Jiang, Kang Liu, Jun Zhao
With the development of deep learning, natural language processing has markedly improved the efficiency of many aspects of the traditional judicial industry.
1 code implementation • 28 Feb 2024 • Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao
Large language models exhibit high-level commonsense reasoning abilities, especially with enhancement methods like Chain-of-Thought (CoT).
1 code implementation • 27 Nov 2023 • Yan Pei, Jiahui Xu, Qianhao Chen, Chenhao Wang, Feng Yu, Lisan Zhang, Wei Luo
Finally, a Decoder layer is employed to reconstruct the artifact-reduced EEG signal.
no code implementations • 24 Sep 2021 • Chenhao Wang, Qianxin Yi, Xiuwu Liao, Yao Wang
Frequent Directions, as a deterministic matrix sketching technique, has been proposed for tackling low-rank approximation problems.
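Frequent Directions itself is a standard streaming matrix sketch; a minimal textbook-style implementation (in the spirit of Liberty, 2013) is sketched below, assuming a dense NumPy matrix with sketch size no larger than the column dimension. It is background only, not the variant proposed in this paper:

```python
import numpy as np

def frequent_directions(A, ell):
    # Basic Frequent Directions: stream the rows of A (n x d) and maintain an
    # ell x d sketch B whose Gram matrix B^T B approximates A^T A. Assumes ell <= d.
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.flatnonzero(~B.any(axis=1))
        if zero_rows.size == 0:
            # sketch full: rotate to the right singular basis and shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            shrunk = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = shrunk[:, None] * Vt      # last row becomes exactly zero
            zero_rows = np.flatnonzero(~B.any(axis=1))
        B[zero_rows[0]] = row             # insert the new row into a free slot
    return B

# usage: compare the sketched and exact covariance
A = np.random.randn(2000, 64)
B = frequent_directions(A, ell=16)
print(np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A, 'fro') ** 2)
```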
no code implementations • 23 Aug 2021 • Qianxin Yi, Chenhao Wang, Kaidong Wang, Yao Wang
Low-tubal-rank tensor approximation has been proposed to analyze large-scale and multi-dimensional data.
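As background (not this paper's specific algorithm), the tubal rank is defined through the t-SVD: transform along the third mode with an FFT, take an SVD of every frontal slice in the Fourier domain, and truncate. A minimal NumPy sketch, with the hypothetical name `tubal_rank_approx`:

```python
import numpy as np

def tubal_rank_approx(X, r):
    # Tubal-rank-r approximation via the t-SVD: FFT along the third mode,
    # rank-r truncation of every frontal slice, inverse FFT back.
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r]
    return np.real(np.fft.ifft(Yf, axis=2))

# usage: approximate a random 30 x 40 x 8 tensor with tubal rank 5
X = np.random.randn(30, 40, 8)
X5 = tubal_rank_approx(X, r=5)
```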
1 code implementation • ACL 2021 • Zhuoran Jin, Yubo Chen, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Jun Zhao
CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge.
1 code implementation • 9 Jul 2021 • Chenhao Wang, Hongxiang Cai, Yuxin Zou, Yichao Xiong
State-of-the-art temporal action detectors to date are based on two-stream input including RGB frames and optical flow.
Ranked #27 on Temporal Action Localization on THUMOS’14
no code implementations • 17 May 2021 • Andrey Ignatov, Andres Romero, Heewon Kim, Radu Timofte, Chiu Man Ho, Zibo Meng, Kyoung Mu Lee, Yuxiang Chen, Yutong Wang, Zeyu Long, Chenhao Wang, Yifei Chen, Boshen Xu, Shuhang Gu, Lixin Duan, Wen Li, Wang Bofei, Zhang Diankai, Zheng Chengjian, Liu Shaoli, Gao Si, Zhang Xiaofeng, Lu Kaidi, Xu Tianyu, Zheng Hui, Xinbo Gao, Xiumei Wang, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan
Video super-resolution has recently become one of the most important mobile-related problems due to the rise of video communication and streaming services.
no code implementations • 23 Mar 2021 • Haris Aziz, Hau Chan, Ágnes Cseh, Bo Li, Fahimeh Ramezani, Chenhao Wang
Multi-robot task allocation is one of the most fundamental classes of problems in robotics and is crucial for various real-world robotic applications such as search, rescue and area exploration.
no code implementations • 3 Mar 2021 • Chenhao Wang, Yubo Chen, Zhipeng Xue, Yang Zhou, Jun Zhao
In this paper, we present CogNet, a knowledge base (KB) dedicated to integrating three types of knowledge: (1) linguistic knowledge from FrameNet, which schematically describes situations, objects and events.
2 code implementations • 26 Nov 2020 • Yanjia Zhu, Hongxiang Cai, Shuhan Zhang, Chenhao Wang, Yichao Xiong
Face detection has received intensive attention in recent years.
Ranked #1 on Occluded Face Detection on WIDER Face (Hard)
no code implementations • 30 Aug 2019 • Chenhao Wang
By making full use of the information from different levels in a unified framework, the model is not only able to distinguish words with similar pronunciations, but also becomes robust to appearance changes.
Ranked #21 on Lipreading on Lip Reading in the Wild
2 code implementations • 16 Oct 2018 • Shuang Yang, Yuan-Hang Zhang, Dalu Feng, Mingmin Yang, Chenhao Wang, Jing-Yun Xiao, Keyu Long, Shiguang Shan, Xilin Chen
The benchmark exhibits large variation in several aspects, including the number of samples in each class, video resolution, lighting conditions, and speaker attributes such as pose, age, gender, and make-up.
Ranked #2 on Lipreading on LRW-1000