This paper describes the system we developed for Subtask 1c of the sixth Social Media Mining for Health Applications (SMM4H) shared task in 2021.
This research marks the first application of large language models to table-based question answering, enhancing the model's comprehension of both table structure and content.
To keep the encoding of questions and answers independent at inference time, a variational auto-encoder is further introduced that reconstructs answers (questions) from question (answer) embeddings as an auxiliary task, enhancing question-answer interaction in representation learning during training.
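A minimal NumPy sketch of such an auxiliary VAE objective, assuming linear encoder/decoder maps and toy embedding sizes (all dimensions, weight shapes, and the MSE reconstruction term are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Hypothetical linear encoder producing mean and log-variance of the latent.
    h = x @ W
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def vae_losses(x_target, z, W_dec, mu, logvar):
    # Reconstruction (MSE) plus KL divergence to a standard normal prior.
    recon = z @ W_dec
    recon_loss = np.mean((recon - x_target) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon_loss, kl

# Toy setup: question/answer embeddings of size 8, latent size 4.
q = rng.standard_normal((16, 8))          # question embeddings
a = rng.standard_normal((16, 8))          # answer embeddings
W_enc = rng.standard_normal((8, 8)) * 0.1 # maps to (mu, logvar), each size 4
W_dec = rng.standard_normal((4, 8)) * 0.1

# Reconstruct answers from question embeddings; the symmetric direction
# (questions from answers) would use the same machinery.
mu, logvar = encode(q, W_enc)
z = reparameterize(mu, logvar, rng)
recon_loss, kl = vae_losses(a, z, W_dec, mu, logvar)
aux_loss = recon_loss + kl  # added to the main QA loss during training only
```

At inference time only the two encoders are used, so questions and answers stay independently encoded; the decoder and the auxiliary loss exist purely to shape the training representations.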
Starting with a fully supervised model trained on data with pixel-level masks, the proposed framework iteratively refines itself on the entire weakly labeled dataset (image-level soft labels) in a self-training fashion.
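The self-training loop described above can be sketched in NumPy with a stand-in classifier; the nearest-centroid model, the confidence threshold, and the toy 2-D data are all illustrative assumptions, not the paper's segmentation network:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(X, y, n_classes):
    # Stand-in "model": one centroid per class (a real system would train
    # a segmentation network here).
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def soft_predict(X, centroids):
    # Softmax over negative squared distances, playing the role of
    # image-level soft labels.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: two Gaussian blobs; a few fully labeled samples stand in for
# the pixel-level masks, the rest are weakly labeled.
X_lab = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(5, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_weak = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

centroids = fit_centroids(X_lab, y_lab, 2)      # fully supervised start
for _ in range(3):                              # self-training rounds
    probs = soft_predict(X_weak, centroids)     # soft pseudo-labels
    confident = probs.max(axis=1) > 0.9         # keep confident samples only
    X_all = np.vstack([X_lab, X_weak[confident]])
    y_all = np.concatenate([y_lab, probs[confident].argmax(axis=1)])
    centroids = fit_centroids(X_all, y_all, 2)  # refine the model
```

The key design choice the sentence describes is that the supervised model bootstraps pseudo-labels for the weakly labeled pool, and only confidently labeled samples feed back into the next training round.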
Developing high-performance entity normalization algorithms that can alleviate the term variation problem is of great interest to the biomedical community.