no code implementations • 16 Aug 2019 • Sarthak Dash, Alfio Gliozzo
State-of-the-art approaches for Knowledge Base Completion (KBC) exploit deep neural networks trained with both true and false assertions: positive assertions are taken directly from the knowledge base, whereas negative ones are generated by randomly sampling entities.
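The random-sampling scheme this entry describes is commonly implemented by corrupting the head or tail entity of each positive triple. A minimal sketch, assuming a (head, relation, tail) triple representation; the function and variable names are illustrative, not from the paper:

```python
import random

def corrupt_triples(positive_triples, entities, n_negatives=1, seed=0):
    """Generate negative KBC training examples by randomly replacing
    the head or tail of each positive (head, relation, tail) triple."""
    rng = random.Random(seed)
    positives = set(positive_triples)
    negatives = []
    for head, rel, tail in positive_triples:
        for _ in range(n_negatives):
            while True:
                if rng.random() < 0.5:
                    candidate = (rng.choice(entities), rel, tail)  # corrupt head
                else:
                    candidate = (head, rel, rng.choice(entities))  # corrupt tail
                # Skip corruptions that happen to be true assertions in the KB.
                if candidate not in positives:
                    negatives.append(candidate)
                    break
    return negatives

kb = [("paris", "capital_of", "france"), ("rome", "capital_of", "italy")]
entities = ["paris", "rome", "france", "italy", "berlin"]
negatives = corrupt_triples(kb, entities)
```

Filtering out corruptions that are already in the KB avoids training on false negatives, a standard refinement of plain random sampling.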
no code implementations • 21 Aug 2019 • Sarthak Dash, Michael R. Glass, Alfio Gliozzo, Mustafa Canim
In addition, the system uses a deep learning approach to knowledge base completion, exploiting the global structure of the induced KG to further refine the confidence of newly discovered relations.
no code implementations • 23 Sep 2019 • Sarthak Dash, Md. Faisal Mahbub Chowdhury, Alfio Gliozzo, Nandana Mihindukulasooriya, Nicolas Rodolfo Fauceglia
This paper introduces Strict Partial Order Networks (SPON), a novel neural network architecture designed to enforce asymmetry and transitive properties as soft constraints.
no code implementations • IJCNLP 2019 • Nicolas Rodolfo Fauceglia, Alfio Gliozzo, Sarthak Dash, Md. Faisal Mahbub Chowdhury, Nandana Mihindukulasooriya
The Knowledge Graph Induction Service (KGIS) is an end-to-end knowledge induction system.
no code implementations • ACL 2020 • Chao Shang, Sarthak Dash, Md. Faisal Mahbub Chowdhury, Nandana Mihindukulasooriya, Alfio Gliozzo
However, there has been no attempt to exploit GNN to create taxonomies.
1 code implementation • EMNLP 2021 • Sarthak Dash, Gaetano Rossiello, Nandana Mihindukulasooriya, Sugato Bagchi, Alfio Gliozzo
In this work, we propose Canonicalizing Using Variational Autoencoders (CUVA), a joint model to learn both embeddings and cluster assignments in an end-to-end approach, which leads to a better vector representation for the noun and relation phrases.
no code implementations • 2 Apr 2021 • Sarthak Dash, Nandana Mihindukulasooriya, Alfio Gliozzo, Mustafa Canim
Inferring semantic types for entity mentions within text documents is an important asset for many downstream NLP tasks, such as Semantic Role Labelling, Entity Disambiguation, Knowledge Base Question Answering, etc.
no code implementations • Findings (NAACL) 2022 • Sarthak Dash, Sugato Bagchi, Nandana Mihindukulasooriya, Alfio Gliozzo
Existing methods that leverage pretrained Transformer encoders range from a simple construction of pseudo-sentences by concatenating text across rows or columns to complex parameter-intensive models that encode table structure and require additional pretraining.
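The "simple construction of pseudo-sentences" end of this spectrum can be sketched by linearizing each table row into text that a pretrained Transformer encoder can consume as-is. The pairing format below ("column is value") is a hypothetical illustration, not the exact template from the paper:

```python
def rows_to_pseudo_sentences(header, rows, sep=" ; "):
    """Linearize table rows into pseudo-sentences by pairing each cell
    with its column header, so an off-the-shelf text encoder can process
    the table without any structure-aware pretraining."""
    sentences = []
    for row in rows:
        parts = [f"{col} is {val}" for col, val in zip(header, row)]
        sentences.append(sep.join(parts) + ".")
    return sentences

header = ["country", "capital"]
rows = [["France", "Paris"], ["Italy", "Rome"]]
sentences = rows_to_pseudo_sentences(header, rows)
# e.g. "country is France ; capital is Paris."
```

Such row-wise linearization keeps the model parameter-free with respect to table structure, in contrast to the parameter-intensive structure-encoding models the entry contrasts it with.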