Search Results for author: Fengbin Zhu

Found 8 papers, 2 papers with code

Think Twice Before Assure: Confidence Estimation for Large Language Models through Reflection on Multiple Answers

no code implementations • 15 Mar 2024 • Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, Tat-Seng Chua

Confidence estimation, which aims to evaluate the trustworthiness of model outputs, is crucial for the application of large language models (LLMs), especially black-box ones.

TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data

no code implementations • 24 Jan 2024 • Fengbin Zhu, Ziyang Liu, Fuli Feng, Chao Wang, Moxin Li, Tat-Seng Chua

In this work, we address question answering (QA) over a hybrid of tabular and textual data, which is very common on the Web (e.g., SEC filings) and often requires discrete reasoning capabilities.

Language Modelling • Question Answering

Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text Documents via Semantic-Oriented Hierarchical Graphs

1 code implementation • 3 May 2023 • Fengbin Zhu, Chao Wang, Fuli Feng, Zifeng Ren, Moxin Li, Tat-Seng Chua

Discrete reasoning over table-text documents (e.g., financial reports) has gained increasing attention over the past two years.

Towards Complex Document Understanding By Discrete Reasoning

no code implementations • 25 Jul 2022 • Fengbin Zhu, Wenqiang Lei, Fuli Feng, Chao Wang, Haozhou Zhang, Tat-Seng Chua

Document Visual Question Answering (VQA) aims to understand visually-rich documents in order to answer questions in natural language, an emerging research topic for both Natural Language Processing and Computer Vision.

Document Understanding • Question Answering +1

RDU: A Region-based Approach to Form-style Document Understanding

no code implementations • 14 Jun 2022 • Fengbin Zhu, Chao Wang, Wenqiang Lei, Ziyang Liu, Tat-Seng Chua

Key Information Extraction (KIE) aims to extract structured information (e.g., key-value pairs) from form-style documents (e.g., invoices), an important step towards intelligent document understanding.

Document Understanding • Key Information Extraction +5

TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance

1 code implementation • ACL 2021 • Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, Tat-Seng Chua

In this work, we extract samples from real financial reports to build a new large-scale QA dataset named TAT-QA, containing both Tabular And Textual data, where numerical reasoning, such as addition, subtraction, multiplication, division, counting, comparison/sorting, and their compositions, is usually required to infer the answer.
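The kinds of numerical reasoning listed above can be illustrated with a minimal sketch. The table values and questions below are hypothetical, not drawn from the actual TAT-QA dataset; the point is only to show how an answer is composed by arithmetic over values extracted from a table:

```python
# Hypothetical financial-report table, flattened to extracted values.
table = {"Revenue 2019": 1200.0, "Revenue 2018": 950.0}

# Q: "What was the change in revenue from 2018 to 2019?" -> subtraction
change = table["Revenue 2019"] - table["Revenue 2018"]

# Q: "What was the percentage increase in revenue?"
# -> a composition: subtraction followed by division
pct_increase = change / table["Revenue 2018"] * 100

print(change)                   # 250.0
print(round(pct_increase, 1))   # 26.3
```

Questions in this style cannot be answered by span extraction alone, which is what motivates the discrete-reasoning requirement the abstract describes.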

Question Answering

Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering

no code implementations • 4 Jan 2021 • Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, Tat-Seng Chua

Open-domain Question Answering (OpenQA) is an important task in Natural Language Processing (NLP), which aims to answer questions posed in natural language based on large-scale collections of unstructured documents.

Machine Reading Comprehension • Open-Domain Question Answering
