Search Results for author: Chenhao Wang

Found 23 papers, 12 papers with code

Augmentation, Retrieval, Generation: Event Sequence Prediction with a Three-Stage Sequence-to-Sequence Approach

no code implementations · COLING 2022 · Bo Zhou, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Jiexin Xu, XiaoJian Jiang, Qiuxia Li

Existing approaches model this task as a statistical induction problem, predicting a sequence of events by exploiting the similarity between the given goal and known event sequences.

Retrieval

Set Generation Networks for End-to-End Knowledge Base Population

no code implementations · EMNLP 2021 · Dianbo Sui, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Wei Bi

In this paper, we formulate end-to-end KBP as a direct set generation problem, which avoids having to consider the order of multiple facts.

Decoder · Knowledge Base Population +2

Extracting polygonal footprints in off-nadir images with Segment Anything Model

1 code implementation · 16 Aug 2024 · Kai Li, Yupeng Deng, Jingbo Chen, Yu Meng, Zhihao Xi, Junxian Ma, Chenhao Wang, Xiangyu Zhao

Building Footprint Extraction (BFE) from off-nadir aerial images often involves roof segmentation and offset prediction to adjust roof boundaries to the building footprint.

RAG

Multi-modal Crowd Counting via Modal Emulation

no code implementations · 28 Jul 2024 · Chenhao Wang, Xiaopeng Hong, Zhiheng Ma, Yupeng Wei, Yabin Wang, Xiaopeng Fan

To overcome the gap between different modalities, we propose a modal emulation-based two-pass multi-modal crowd-counting framework that enables efficient modal emulation, alignment, and fusion.

Crowd Counting

Multi-modal Crowd Counting via a Broker Modality

1 code implementation · 10 Jul 2024 · Haoliang Meng, Xiaopeng Hong, Chenhao Wang, Miao Shang, WangMeng Zuo

Multi-modal crowd counting involves estimating crowd density from both visual and thermal/depth images.

Crowd Counting · Denoising

RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models

1 code implementation · 16 Jun 2024 · Zhuoran Jin, Pengfei Cao, Chenhao Wang, Zhitao He, Hongbang Yuan, Jiachun Li, Yubo Chen, Kang Liu, Jun Zhao

Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from the training corpus; therefore, it is crucial to erase this knowledge from the models.

Adversarial Attack · Benchmarking +4

Brainstorming Brings Power to Large Language Models of Knowledge Reasoning

no code implementations · 2 Jun 2024 · Zining Qin, Chenhao Wang, Huiling Qin, Weijia Jia

Large Language Models (LLMs) have demonstrated amazing capabilities in language generation, text comprehension, and knowledge reasoning.

Logical Reasoning · Reading Comprehension +1

Adaptive Federated Learning Over the Air

no code implementations · 11 Mar 2024 · Chenhao Wang, Zihan Chen, Nikolaos Pappas, Howard H. Yang, Tony Q. S. Quek, H. Vincent Poor

In contrast, an Adam-like algorithm converges at the $\mathcal{O}( 1/T )$ rate, demonstrating its advantage in expediting the model training process.

Federated Learning

AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation

1 code implementation · 5 Mar 2024 · Zhitao He, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Jiexin Xu, Huaijun Li, XiaoJian Jiang, Kang Liu, Jun Zhao

With the development of deep learning, natural language processing technology has substantially improved the efficiency of many aspects of the traditional judicial industry.

Decision Making · Information Retrieval

Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning

1 code implementation · 28 Feb 2024 · Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao

Large language models exhibit high-level commonsense reasoning abilities, especially with enhancement methods like Chain-of-Thought (CoT).

Position

An Improved Frequent Directions Algorithm for Low-Rank Approximation via Block Krylov Iteration

no code implementations · 24 Sep 2021 · Chenhao Wang, Qianxin Yi, Xiuwu Liao, Yao Wang

Frequent Directions, as a deterministic matrix sketching technique, has been proposed for tackling low-rank approximation problems.

Computational Efficiency

Effective Streaming Low-tubal-rank Tensor Approximation via Frequent Directions

no code implementations · 23 Aug 2021 · Qianxin Yi, Chenhao Wang, Kaidong Wang, Yao Wang

Low-tubal-rank tensor approximation has been proposed to analyze large-scale and multi-dimensional data.

CogIE: An Information Extraction Toolkit for Bridging Texts and CogNet

1 code implementation · ACL 2021 · Zhuoran Jin, Yubo Chen, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Jun Zhao

CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge.

Entity Linking · Entity Typing +7

RGB Stream Is Enough for Temporal Action Detection

1 code implementation · 9 Jul 2021 · Chenhao Wang, Hongxiang Cai, Yuxin Zou, Yichao Xiong

State-of-the-art temporal action detectors to date are based on two-stream input including RGB frames and optical flow.

Action Detection · Data Augmentation +2

Multi-Robot Task Allocation -- Complexity and Approximation

no code implementations · 23 Mar 2021 · Haris Aziz, Hau Chan, Ágnes Cseh, Bo Li, Fahimeh Ramezani, Chenhao Wang

Multi-robot task allocation is one of the most fundamental classes of problems in robotics and is crucial for various real-world robotic applications such as search, rescue and area exploration.

CogNet: Bridging Linguistic Knowledge, World Knowledge and Commonsense Knowledge

no code implementations · 3 Mar 2021 · Chenhao Wang, Yubo Chen, Zhipeng Xue, Yang Zhou, Jun Zhao

In this paper, we present CogNet, a knowledge base (KB) dedicated to integrating three types of knowledge: (1) linguistic knowledge from FrameNet, which schematically describes situations, objects and events.

World Knowledge

Multi-Grained Spatio-temporal Modeling for Lip-reading

no code implementations · 30 Aug 2019 · Chenhao Wang

By making full use of the information from different levels in a unified framework, the model is not only able to distinguish words with similar pronunciations, but also becomes robust to appearance changes.

Lipreading · Lip Reading

LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild

2 code implementations · 16 Oct 2018 · Shuang Yang, Yuan-Hang Zhang, Dalu Feng, Mingmin Yang, Chenhao Wang, Jing-Yun Xiao, Keyu Long, Shiguang Shan, Xilin Chen

This benchmark shows large variation in several aspects, including the number of samples in each class, video resolution, lighting conditions, and speakers' attributes such as pose, age, gender, and make-up.

Lipreading · Lip Reading +2
