no code implementations • EMNLP 2021 • Mojtaba Nayyeri, Chengjin Xu, Franca Hoffmann, Mirza Mohtashim Alam, Jens Lehmann, Sahar Vahdati

Many KGEs use Euclidean geometry, which renders them incapable of preserving complex structures and consequently leads the models to wrong inferences.

1 code implementation • 21 Feb 2024 • Mehdi Azarafza, Mojtaba Nayyeri, Charles Steinmetz, Steffen Staab, Achim Rettberg

Large Language Models (LLMs) have garnered significant attention for their ability to understand text and images, generate human-like text, and perform complex reasoning tasks.

1 code implementation • 21 Dec 2023 • Jiaxin Pan, Mojtaba Nayyeri, Yinan Li, Steffen Staab

Temporal knowledge graphs may exhibit static temporal patterns at distinct points in time and dynamic temporal patterns between different timestamps.

1 code implementation • 14 Dec 2023 • Bo Xiong, Mojtaba Nayyeri, Linhao Luo, ZiHao Wang, Shirui Pan, Steffen Staab

NestE represents each atomic fact as a $1\times3$ matrix, and each nested relation is modeled as a $3\times3$ matrix that rotates the $1\times3$ atomic fact matrix through matrix multiplication.
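The rotation idea can be sketched in a few lines of NumPy. This is an illustrative toy only: a random orthogonal matrix stands in for the learned nested relation, and the paper's actual parameterization is richer.

```python
import numpy as np

# Toy sketch of the NestE idea: an atomic fact (h, r, t) is a 1x3 matrix
# of embeddings, and a nested relation acts on it as a 3x3 rotation
# applied by matrix multiplication. Here a random orthogonal matrix
# (obtained via QR decomposition) stands in for the learned rotation.
rng = np.random.default_rng(0)

fact = rng.normal(size=(1, 3))                        # atomic fact as a 1x3 matrix
rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # orthogonal 3x3 matrix

rotated = fact @ rotation        # the nested relation rotates the fact matrix
# An orthogonal transformation preserves the norm of the fact matrix.
assert np.isclose(np.linalg.norm(fact), np.linalg.norm(rotated))
```

The norm check makes the "rotation" interpretation concrete: right-multiplying by an orthogonal matrix changes the orientation of the fact matrix but not its magnitude.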

no code implementations • 24 Apr 2023 • Bo Xiong, Mojtaba Nayyeri, Ming Jin, Yunjie He, Michael Cochez, Shirui Pan, Steffen Staab

Geometric relational embeddings map relational data to geometric objects that combine vector information suitable for machine learning with structured/relational information for structured/relational reasoning, typically in low dimensions.

Hierarchical Multi-label Classification
Knowledge Graph Completion

no code implementations • 21 Mar 2023 • Yunjie He, Mojtaba Nayyeri, Bo Xiong, Evgeny Kharlamov, Steffen Staab

However, the role of such patterns in answering FOL queries by query embedding models has not yet been studied in the literature.

no code implementations • 13 Feb 2023 • Cosimo Gregucci, Mojtaba Nayyeri, Daniel Hernández, Steffen Staab

As a result, the combined model can learn relational and structural patterns.

1 code implementation • 4 Aug 2022 • Mojtaba Nayyeri, ZiHao Wang, Mst. Mahfuja Akter, Mirza Mohtashim Alam, Md Rashad Al Hasan Rony, Jens Lehmann, Steffen Staab

In our approach, we build on existing strong representations of single modalities and use hypercomplex algebra to represent both (i) single-modality embeddings and (ii) the interactions between different modalities and their complementary means of knowledge representation.
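As a minimal illustration of the kind of interaction hypercomplex algebras provide, the Hamilton product of quaternions is a standard example; this sketch is not the paper's exact formulation.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions (a, b, c, d) ~ a + bi + cj + dk.
    Illustrative of how hypercomplex multiplication couples all components,
    which is what makes such algebras useful for modeling interactions
    between different embedding components (e.g., modalities)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i part
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j part
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k part
    ])

# Sanity check against quaternion algebra: i * j = k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = hamilton_product(i, j)
```

The non-commutativity of this product (i * j = k but j * i = -k) is one reason hypercomplex algebras can capture asymmetric interactions that plain element-wise products cannot.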

no code implementations • 1 Jun 2022 • Bo Xiong, Shichao Zhu, Mojtaba Nayyeri, Chengjin Xu, Shirui Pan, Chuan Zhou, Steffen Staab

Recent knowledge graph (KG) embeddings have been advanced by hyperbolic geometry due to its superior capability for representing hierarchies.

no code implementations • 18 Feb 2022 • Chengjin Xu, Mojtaba Nayyeri, Yung-Yu Chen, Jens Lehmann

In this work, we strive to move beyond the complex or hypercomplex space for KGE and propose a novel geometric algebra based embedding approach, GeomE, which uses multivector representations and the geometric product to model entities and relations.
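A minimal sketch of the geometric product in the 2D geometric algebra (multivectors with scalar, e1, e2, and bivector e12 parts) shows the operation GeomE builds on; the model itself works with higher-dimensional multivector representations, so this is illustrative only.

```python
import numpy as np

def geometric_product(a, b):
    """Geometric product of two multivectors in the 2D geometric algebra,
    each represented as a (scalar, e1, e2, e12) coefficient vector.
    Uses the basis rules e1*e1 = e2*e2 = 1, e1*e2 = -e2*e1 = e12."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part (e12*e12 = -1)
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a1*b3 + a2*b0 - a3*b1,   # e2 part
        a0*b3 + a1*b2 - a2*b1 + a3*b0,   # e12 (bivector) part
    ])

e1 = np.array([0.0, 1.0, 0.0, 0.0])
e2 = np.array([0.0, 0.0, 1.0, 0.0])
e12 = geometric_product(e1, e2)   # two basis vectors multiply to the bivector
```

Note that the bivector squares to -1 (geometric_product(e12, e12) has scalar part -1), which is how geometric algebra subsumes complex-valued embeddings as a special case.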

1 code implementation • 24 Jan 2022 • Bo Xiong, Nico Potyka, Trung-Kien Tran, Mojtaba Nayyeri, Steffen Staab

Namely, a learned BoxEL embedding with loss 0 is a (logical) model of the KB.

no code implementations • 3 Jul 2021 • Mojtaba Nayyeri, Gokce Muge Cil, Sahar Vahdati, Francesco Osborne, Mahfuzur Rahman, Simone Angioni, Angelo Salatino, Diego Reforgiato Recupero, Nadezhda Vassilyeva, Enrico Motta, Jens Lehmann

This is typical for KGs that categorize a large number of entities (e.g., research articles, patents, persons) according to a relatively small set of categories.

1 code implementation • NAACL 2021 • Chengjin Xu, Yung-Yu Chen, Mojtaba Nayyeri, Jens Lehmann

Representation learning approaches for knowledge graphs have been mostly designed for static data.

no code implementations • 11 Apr 2021 • Chengjin Xu, Mojtaba Nayyeri, Sahar Vahdati, Jens Lehmann

For example, instead of training a model once with a large embedding size of 1200, we train the model six times in parallel with an embedding size of 200 and then combine the six separate models for testing; the overall number of adjustable parameters (6 × 200 = 1200) and the total memory footprint remain the same.
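The parameter-count equivalence and a hypothetical test-time combination can be sketched as follows; the averaging rule here is an assumption for illustration, and the paper's exact combination may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, big_dim, n_models, small_dim = 1000, 1200, 6, 200

# One large embedding table vs. six independently trained small tables
# (random values stand in for trained embeddings in this sketch).
big = rng.normal(size=(num_entities, big_dim))
small = [rng.normal(size=(num_entities, small_dim)) for _ in range(n_models)]

# Per entity, the adjustable parameter counts match: 6 * 200 = 1200.
assert big.size == sum(m.size for m in small)

def ensemble_score(h, t):
    # Hypothetical combination rule: average the per-model dot-product
    # scores of an entity pair at test time.
    return np.mean([m[h] @ m[t] for m in small])
```

The practical appeal is that the six small models fit on smaller devices and can be trained in parallel, while the combined capacity matches the single large model.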

no code implementations • 13 Oct 2020 • Mojtaba Nayyeri, Chengjin Xu, Jens Lehmann, Sahar Vahdati

To this end, we represent each relation (edge) in a KG as a vector field on a smooth Riemannian manifold.

2 code implementations • COLING 2020 • Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, Jens Lehmann

We show that our proposed model overcomes the limitations of existing KG and TKG embedding models and is able to learn and infer various relation patterns over time.

no code implementations • COLING 2020 • Chengjin Xu, Mojtaba Nayyeri, Yung-Yu Chen, Jens Lehmann

Knowledge graph (KG) embedding aims at embedding entities and relations in a KG into a low-dimensional latent representation space.

no code implementations • 8 Jun 2020 • Mojtaba Nayyeri, Sahar Vahdati, Can Aykul, Jens Lehmann

Most embedding models designed in Euclidean geometry usually support a single transformation type, often translation or rotation, which is suitable for learning on graphs with small differences in neighboring subgraphs.

2 code implementations • 18 Nov 2019 • Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, Jens Lehmann

Moreover, to account for the temporal uncertainty in the evolution of entity/relation representations over time, we map the representations of temporal KGs into the space of multi-dimensional Gaussian distributions.
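A standard way to compare diagonal-covariance Gaussian embeddings is the KL divergence, sketched below; the paper's exact scoring function may differ, so this is illustrative of the general approach only.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    # KL(N(mu1, diag(var1)) || N(mu2, diag(var2))), a common
    # (dis)similarity measure between Gaussian embeddings: it grows with
    # the distance between means and the mismatch between variances.
    return 0.5 * np.sum(
        np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0
    )

mu, var = np.zeros(4), np.ones(4)
same = kl_diag_gaussians(mu, var, mu, var)            # identical Gaussians -> 0
shifted = kl_diag_gaussians(mu, var, mu + 1.0, var)   # shifted mean -> positive
```

The variance term is what lets such models represent uncertainty: two entities with the same mean but different variances still receive a nonzero divergence.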

no code implementations • 25 Sep 2019 • Mojtaba Nayyeri, Chengjin Xu, Yadollah Yaghoobzadeh, Hamed Shariat Yazdi, Jens Lehmann

We show that by a proper selection of the loss function for training the TransE model, the main limitations of the model are mitigated.

no code implementations • 2 Sep 2019 • Mojtaba Nayyeri, Chengjin Xu, Yadollah Yaghoobzadeh, Hamed Shariat Yazdi, Jens Lehmann

We show that by a proper selection of the loss function for training the TransE model, the main limitations of the model are mitigated.

no code implementations • 20 Aug 2019 • Mojtaba Nayyeri, Chengjin Xu, Jens Lehmann, Hamed Shariat Yazdi

We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph.

Ranked #17 on Link Prediction on FB15k

no code implementations • 9 Jul 2019 • Mojtaba Nayyeri, Xiaotian Zhou, Sahar Vahdati, Hamed Shariat Yazdi, Jens Lehmann

To tackle this problem, several loss functions have been proposed recently by adding upper bounds and lower bounds to the scores of positive and negative samples.
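A hedged sketch of such a bound-based ("limit-based") loss for distance-style scores, where positives are pushed below an upper bound and negatives above a lower bound; exact formulations in the literature differ.

```python
import numpy as np

def limit_based_loss(pos_scores, neg_scores, upper=1.0, lower=2.0):
    # Illustrative bound-based loss for distance-style KGE scores:
    # penalize positive triples whose score exceeds the upper bound and
    # negative triples whose score falls below the lower bound. The
    # bounds (upper, lower) are hypothetical hyperparameters here.
    pos_term = np.maximum(0.0, pos_scores - upper).sum()
    neg_term = np.maximum(0.0, lower - neg_scores).sum()
    return pos_term + neg_term

# Positives within the bound and negatives beyond it incur zero loss.
loss = limit_based_loss(np.array([0.5, 1.5]), np.array([3.0, 1.0]))
```

Unlike a plain margin ranking loss, which only constrains the gap between positive and negative scores, this form constrains their absolute values, which is the motivation the snippet above alludes to.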

no code implementations • 27 Apr 2019 • Mojtaba Nayyeri, Sahar Vahdati, Jens Lehmann, Hamed Shariat Yazdi

In this work, the TransE embedding model is adapted to a specific link prediction task on scholarly metadata.
