1 code implementation • 25 Sep 2023 • Xiao Han, Shuhan Yuan, Mohamed Trabelsi
However, there is a gap between language modeling and anomaly detection: the objective of training a sequential model with a language modeling loss is not directly related to detecting anomalies.
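To make that gap concrete, here is a minimal sketch of the common language-modeling baseline this sentence contrasts with: a log sequence is scored by its average next-token loss under a trained LM, and high-loss ("surprising") sequences are flagged as anomalous. The `model` interface and the threshold are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def lm_anomaly_score(model, token_ids: torch.Tensor) -> float:
    """Average next-token cross-entropy of a log sequence under a trained LM."""
    with torch.no_grad():
        logits = model(token_ids[:, :-1])  # assumed to return [batch, seq-1, vocab]
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            token_ids[:, 1:].reshape(-1),
        )
    return loss.item()

def is_anomalous(model, token_ids: torch.Tensor, threshold: float = 4.0) -> bool:
    # threshold is an assumed hyperparameter, tuned on validation data in practice
    return lm_anomaly_score(model, token_ids) > threshold
```

The mismatch the abstract points at is visible here: the loss rewards predicting the next token, and anomaly detection is bolted on afterwards via a threshold rather than being optimized directly.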
no code implementations • 7 Jun 2023 • Mohamed Trabelsi, Huseyin Uzunalioglu
In this paper, we consider the unsupervised abstractive MDS setting, where only documents are provided with no ground-truth summaries, and we propose Absformer, a new Transformer-based method for unsupervised abstractive summary generation.
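As a rough illustration of unsupervised abstractive MDS, the sketch below follows a generic embed-cluster-generate pipeline; it is not Absformer itself, and the model names (all-MiniLM-L6-v2, bart-large-cnn) are placeholder assumptions.

```python
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer
from transformers import pipeline

def summarize_clusters(documents: list[str], n_clusters: int = 3) -> list[str]:
    """Group related documents, then generate one abstractive summary per group."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    embeddings = encoder.encode(documents)
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embeddings)

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    summaries = []
    for c in range(n_clusters):
        cluster_text = " ".join(d for d, l in zip(documents, labels) if l == c)
        out = summarizer(cluster_text[:3000], max_length=80, min_length=20)
        summaries.append(out[0]["summary_text"])
    return summaries
```

Note that no reference summaries appear anywhere in this pipeline, which is what makes the setting unsupervised.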
no code implementations • 5 Aug 2022 • Xinqi Bao, Yujia Xu, Hak-Keung Lam, Mohamed Trabelsi, Ines Chihi, Lilia Sidhom, Ernest N. Kamavuako
The findings of this study provide guidance for selecting time-frequency distributions (TFDs) as CNN inputs and for designing CNN architectures for heart sound classification.
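For illustration, a minimal sketch of the input pipeline this refers to, assuming a log-spectrogram as the TFD and an invented toy CNN; the study's actual architectures and TFD comparisons are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def tfd_input(pcg_signal: np.ndarray, fs: int = 2000) -> torch.Tensor:
    """Turn a 1-D phonocardiogram into a 1xHxW image-like tensor for a CNN."""
    _, _, sxx = spectrogram(pcg_signal, fs=fs, nperseg=128, noverlap=64)
    sxx = np.log1p(sxx)  # compress dynamic range
    return torch.tensor(sxx, dtype=torch.float32).unsqueeze(0)

cnn = nn.Sequential(  # illustrative architecture only
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # e.g. normal vs. abnormal heart sound
)

# usage: logits = cnn(tfd_input(signal).unsqueeze(0))  # add a batch dimension
```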
1 code implementation • 20 Apr 2022 • Mohamed Trabelsi, Jeff Heflin, Jin Cao
We study the zero-shot learning setting on the target domain and demonstrate that our method learns the entity matching (EM) task and transfers knowledge to the target domain.
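A hedged sketch of what zero-shot transfer means operationally here: a matcher fine-tuned only on source-domain pairs scores target-domain pairs directly, with no target labels. The text-pair serialization and the BERT backbone are generic assumptions, not necessarily the authors' design.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed backbone
matcher = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)  # assume it was fine-tuned on labeled source-domain pairs beforehand

def match_probability(record_a: str, record_b: str) -> float:
    """Score a target-domain record pair with no target labels (zero-shot)."""
    inputs = tok(record_a, record_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = matcher(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(match)
```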
1 code implementation • 27 Mar 2022 • Mohamed Trabelsi, Zhiyu Chen, Shuo Zhang, Brian D. Davison, Jeff Heflin
In this paper, we propose StruBERT, a structure-aware BERT model that fuses the textual and structural information of a data table to produce context-aware representations for both its textual and tabular content.
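To illustrate the two views being fused, here is a minimal sketch of horizontal (row-wise) and vertical (column-wise) table serialization, a common way to expose a table's structure to a BERT-style encoder; StruBERT's actual fusion mechanism is not shown, and the helper below is hypothetical.

```python
def serialize_table(headers: list[str], rows: list[list[str]]) -> tuple[str, str]:
    """Produce a row-oriented and a column-oriented textual view of one table."""
    horizontal = " [SEP] ".join(
        " ".join(f"{h} {v}" for h, v in zip(headers, row)) for row in rows
    )
    vertical = " [SEP] ".join(
        f"{h} " + " ".join(row[i] for row in rows) for i, h in enumerate(headers)
    )
    return horizontal, vertical

h, v = serialize_table(["country", "capital"],
                       [["France", "Paris"], ["Japan", "Tokyo"]])
# h: "country France capital Paris [SEP] country Japan capital Tokyo"
# v: "country France Japan [SEP] capital Paris Tokyo"
```

The point of keeping both views is that row order and column order carry different signals, and a purely textual flattening would lose one of them.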
no code implementations • 23 Feb 2021 • Mohamed Trabelsi, Zhiyu Chen, Brian D. Davison, Jeff Heflin
A variety of deep learning models have been proposed, and each model introduces a set of neural network components that extract the features used for ranking.
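As a generic example of this pattern (not any specific model from this line of work), the sketch below wires stand-in feature-extraction components into a scoring head.

```python
import torch
import torch.nn as nn

class SimpleRanker(nn.Module):
    """Toy ranker: separate query/document components feed a scoring MLP."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.query_enc = nn.Linear(dim, dim)  # stand-in representation component
        self.doc_enc = nn.Linear(dim, dim)
        self.score = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, q: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.query_enc(q), self.doc_enc(d)], dim=-1)
        return self.score(features).squeeze(-1)  # higher = more relevant
```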
1 code implementation • 30 Oct 2020 • Mohamed Trabelsi, Jin Cao, Jeff Heflin
Generating schema labels automatically for the column values of data tables has many data science applications, such as schema matching, data discovery, and data linking.
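A toy sketch of the task itself, assuming a bag-of-values classifier as a stand-in for the paper's model; the columns and labels below are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# each training example is one column, flattened to a string of its values
columns = ["2001 1999 2020", "Paris London Tokyo"]
labels = ["year", "city"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(columns, labels)

# predict a schema label for an unseen column of values
print(model.predict(["Paris Rome Madrid"]))  # -> ['city'] (via the seen token "Paris")
```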
1 code implementation • 19 May 2020 • Zhiyu Chen, Mohamed Trabelsi, Jeff Heflin, Yinan Xu, Brian D. Davison
Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks.