Search Results for author: Minghao Hu

Found 13 papers, 7 papers with code

Reinforced Mnemonic Reader for Machine Reading Comprehension

3 code implementations • 8 May 2017 • Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, Ming Zhou

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects.

Machine Reading Comprehension Question Answering +2
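
The snippet above leaves the two enhancements unspecified. For context on what an "attentive reader" does, here is a minimal, hedged sketch of the baseline pattern such models build on: each passage token soft-attends over the question, and the fused representation scores answer-span boundaries. The class name, sizes, and single attention hop are illustrative assumptions, not the Reinforced Mnemonic Reader's actual architecture.

```python
# Hedged sketch of a plain attentive reader for extractive MRC (PyTorch).
# All names and sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveReader(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.start_scorer = nn.Linear(2 * hidden, 1)
        self.end_scorer = nn.Linear(2 * hidden, 1)

    def forward(self, passage, question):
        # passage: (batch, p_len, hidden); question: (batch, q_len, hidden)
        scores = torch.bmm(passage, question.transpose(1, 2))  # (b, p, q)
        align = F.softmax(scores, dim=-1)           # soft-align tokens
        q_aware = torch.bmm(align, question)        # question-aware passage
        fused = torch.cat([passage, q_aware], dim=-1)
        start = self.start_scorer(fused).squeeze(-1)
        end = self.end_scorer(fused).squeeze(-1)
        return start.log_softmax(-1), end.log_softmax(-1)

reader = AttentiveReader()
p, q = torch.randn(2, 50, 128), torch.randn(2, 10, 128)
log_start, log_end = reader(p, q)  # span-boundary log-probabilities
```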

A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning

1 code implementation • IJCNLP 2019 • Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li

Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings.

Negation Question Answering +1

Snapshot Ptychography on Array Cameras

1 code implementation • 5 Nov 2021 • Chengyu Wang, Minghao Hu, Yuzuru Takashima, Timothy J. Schulz, David J. Brady

We use convolutional neural networks to recover images optically down-sampled by $6.7\times$ using coherent aperture synthesis over a 16-camera array.
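
As a rough illustration of the recovery step described in the snippet, here is a hedged sketch of a CNN that bilinearly upsamples a down-sampled measurement and then applies a residual convolutional refinement. The architecture, width, and scale handling are assumptions; the paper's actual network is not reproduced here.

```python
# Hedged sketch: bilinear upsampling of the down-sampled measurement
# followed by residual convolutional refinement. Width, depth, and
# scale handling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecoveryCNN(nn.Module):
    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, measurement, scale: float = 6.7):
        up = F.interpolate(measurement, scale_factor=scale,
                           mode="bilinear", align_corners=False)
        return up + self.refine(up)  # residual detail restoration

x = torch.randn(1, 1, 24, 24)   # toy down-sampled measurement
recovered = RecoveryCNN()(x)    # roughly 6.7x larger output
```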

Array Camera Image Fusion using Physics-Aware Transformers

1 code implementation • 5 Jul 2022 • Qian Huang, Minghao Hu, David Jones Brady

We demonstrate a physics-aware transformer for feature-based data fusion from cameras with diverse resolutions, color spaces, focal planes, focal lengths, and exposures.

Image Generation
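
For intuition about feature-based fusion across heterogeneous cameras, below is a hedged sketch in which tokens from a reference camera cross-attend to tokens from a second camera. The learned embedding standing in for physics priors (geometry, exposure) is an assumption, as are all names and sizes; this is not the paper's actual transformer.

```python
# Hedged sketch: cross-attention fusion of features from two cameras.
# The "prior" embedding is a stand-in for physics-aware cues.
import torch
import torch.nn as nn

class CrossCameraFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, max_tokens: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.prior = nn.Parameter(torch.zeros(1, max_tokens, dim))

    def forward(self, ref_feats, aux_feats):
        # ref_feats: (b, n, dim); aux_feats: (b, m, dim), where m may
        # differ because the cameras have different resolutions.
        q = ref_feats + self.prior[:, : ref_feats.size(1)]
        fused, _ = self.attn(q, aux_feats, aux_feats)
        return ref_feats + fused  # residual fusion into the reference view

ref = torch.randn(2, 100, 64)
aux = torch.randn(2, 144, 64)
out = CrossCameraFusion()(ref, aux)
```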

Attention-Guided Answer Distillation for Machine Reading Comprehension

no code implementations • EMNLP 2018 • Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, Ming Zhou

Although current reading comprehension systems have achieved significant advances, their strong performance often comes at the cost of ensembling numerous models.

Knowledge Distillation Machine Reading Comprehension
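
Since the snippet motivates distilling an ensemble into a single model, here is a hedged sketch of the generic distillation objective such work builds on: a student matches the teacher's soft start/end distributions over passage positions. Temperature and scaling follow standard knowledge distillation; the paper's attention-guided terms are not reproduced, and the function name is an assumption.

```python
# Hedged sketch of answer distillation for extractive MRC: a single
# student matches the teacher's soft answer-boundary distributions.
import torch
import torch.nn.functional as F

def distill_span_loss(student_logits, teacher_logits, temperature=2.0):
    # Both: (batch, passage_len) logits over answer-boundary positions.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as in standard distillation.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

student = torch.randn(4, 50)
teacher = torch.randn(4, 50)  # e.g. averaged logits of an ensemble
loss = distill_span_loss(student, teacher)
```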

Modeling Dense Cross-Modal Interactions for Joint Entity-Relation Extraction

no code implementations • 1 Jul 2020 • Shan Zhao, Minghao Hu, Zhiping Cai, Fang Liu

The network is carefully constructed by stacking multiple attention units in depth to fully model dense interactions over token-label spaces, in which two basic attention units are proposed to explicitly capture fine-grained correlations across different modalities (e.g., token-to-token and label-to-token).

Joint Entity and Relation Extraction Relation +1
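
The snippet names two attention units, so here is a hedged sketch of that pattern: token-to-token self-attention and label-to-token cross-attention, stacked in depth. Dimensions, depth, and residual wiring are assumptions, not the paper's exact network.

```python
# Hedged sketch of stacked token-to-token and label-to-token attention
# units over a token-label space. All sizes are assumptions.
import torch
import torch.nn as nn

class CrossModalUnit(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.tok2tok = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lab2tok = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens, labels):
        # tokens: (b, n, dim) token features; labels: (b, L, dim) label embeddings
        t, _ = self.tok2tok(tokens, tokens, tokens)  # token-to-token
        l, _ = self.lab2tok(labels, t, t)            # label-to-token
        return tokens + t, labels + l                # residual updates

units = nn.ModuleList([CrossModalUnit() for _ in range(3)])  # stacked in depth
tokens, labels = torch.randn(2, 20, 64), torch.randn(2, 5, 64)
for unit in units:
    tokens, labels = unit(tokens, labels)
```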

Interactive Contrastive Learning for Self-supervised Entity Alignment

no code implementations • 17 Jan 2022 • Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu, Xin Lv, Juanzi Li, Ling Feng

Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.

Contrastive Learning Entity Alignment +1
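
As background for the contrastive-learning component in the title, here is a hedged sketch of an in-batch InfoNCE objective for entity alignment: row i of each batch holds embeddings of putatively equivalent entities from two KGs (positives), and all other rows serve as negatives. The paper's interactive scheme is not reproduced, only the generic contrastive loss it builds on; the function name is an assumption.

```python
# Hedged sketch of an in-batch InfoNCE loss for entity alignment.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) entity embeddings from KG1 and KG2.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)

kg1 = torch.randn(32, 128)
kg2 = torch.randn(32, 128)
loss = info_nce(kg1, kg2)
```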

Adaptive Threshold Selective Self-Attention for Chinese NER

no code implementations • COLING 2022 • Biao Hu, Zhen Huang, Minghao Hu, Ziwen Zhang, Yong Dou

Recently, the Transformer, which uses self-attention to encode context, has achieved great success in Chinese named entity recognition (NER) owing to its good parallelism and its ability to model long-range dependencies.

Chinese Named Entity Recognition named-entity-recognition +2
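
To illustrate the idea the title names, here is a hedged sketch of threshold-selective self-attention: weights below a learned threshold are zeroed and the survivors renormalized, so each character attends only to sufficiently relevant context. How the paper adapts the threshold is not shown; the sigmoid-scaled scalar here is an assumption.

```python
# Hedged sketch of threshold-selective self-attention. The learned
# scalar threshold is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThresholdSelfAttention(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.raw_threshold = nn.Parameter(torch.tensor(-4.0))

    def forward(self, x):
        # x: (batch, seq_len, dim) character representations
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        tau = torch.sigmoid(self.raw_threshold)   # threshold in (0, 1)
        attn = torch.where(attn >= tau, attn, torch.zeros_like(attn))
        attn = attn / attn.sum(-1, keepdim=True).clamp_min(1e-9)
        return attn @ v

x = torch.randn(2, 16, 64)
out = ThresholdSelfAttention()(x)
```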
