Search Results for author: Panos Kalnis

Found 9 papers, 7 papers with code

Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling

1 code implementation • 9 Mar 2024 • Hussein Abdallah, Waleed Afandi, Panos Kalnis, Essam Mansour

We refer to this subgraph as the task-oriented subgraph (TOSG); it contains only a subset of the task-related node and edge types in G. Training the task on the TOSG instead of the full G alleviates the excessive computation required for a large KG.

Knowledge Graphs, Link Prediction, +1
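
The TOSG idea in the snippet above can be illustrated with a toy sketch: keep only the triples whose relation type is relevant to the target task. The function and relation names below are illustrative, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): build a task-oriented subgraph (TOSG)
# by keeping only triples whose relation type is relevant to the target task.

def extract_tosg(triples, task_relations):
    """triples: iterable of (head, relation, tail); task_relations: set of relation types."""
    return [(h, r, t) for (h, r, t) in triples if r in task_relations]

# Example: train a link-prediction task only on 'author_of' and 'published_in' edges.
kg = [
    ("alice", "author_of", "paper1"),
    ("paper1", "published_in", "icml"),
    ("alice", "affiliated_with", "kaust"),
]
print(extract_tosg(kg, {"author_of", "published_in"}))
```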

SLAMB: Accelerated Large Batch Training with Sparse Communication

1 code implementation • The International Conference on Machine Learning (ICML) 2023 • Hang Xu, Wenxuan Zhang, Jiawei Fei, Yuzhe Wu, Tingwen Xie, Jun Huang, Yuchen Xie, Mohamed Elhoseiny, Panos Kalnis

Distributed training of large deep neural networks requires frequent exchange of massive amounts of data between machines; communication efficiency is therefore a major concern.

A Universal Question-Answering Platform for Knowledge Graphs

1 code implementation • 1 Mar 2023 • Reham Omar, Ishika Dhall, Panos Kalnis, Essam Mansour

Knowledge from diverse application domains is organized as knowledge graphs (KGs) that are stored in RDF engines and accessible on the web via SPARQL endpoints.

Knowledge Graphs, Question Answering, +1
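
As a concrete illustration of the access pattern described above (RDF engines exposed through SPARQL endpoints), here is a minimal query against a public endpoint using SPARQLWrapper. The endpoint and query are examples only, not part of the paper's platform.

```python
# Illustrative only: querying a public SPARQL endpoint (here DBpedia) with SPARQLWrapper.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Knowledge_graph> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for row in results["results"]["bindings"]:
    print(row["label"]["value"])
```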

ChatGPT versus Traditional Question Answering for Knowledge Graphs: Current Status and Future Directions Towards Knowledge Graph Chatbots

no code implementations • 8 Feb 2023 • Reham Omar, Omij Mangukiya, Panos Kalnis, Essam Mansour

Conversational AI and Question-Answering systems (QASs) for knowledge graphs (KGs) are both emerging research areas: they empower users with natural language interfaces for extracting information easily and effectively.

Chatbot, Knowledge Graphs, +1

Rethinking gradient sparsification as total error minimization

no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis

We show that, unlike the Top-$k$ sparsifier, the hard-threshold sparsifier has the same asymptotic convergence and linear speedup property as SGD in the convex case, and that its convergence is unaffected by data heterogeneity in the non-convex case.
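
A minimal sketch of the two sparsifiers compared above, applied to a flat gradient tensor: Top-$k$ keeps a fixed number of entries, while hard-threshold keeps every entry whose magnitude exceeds a fixed threshold, so the number of transmitted elements varies per step. The parameter values below are arbitrary and the code is illustrative, not the paper's implementation.

```python
import torch

def topk_sparsify(grad, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = torch.zeros_like(grad)
    idx = grad.abs().topk(k).indices
    out[idx] = grad[idx]
    return out

def hard_threshold_sparsify(grad, lam):
    # Keep every entry whose magnitude is at least lam.
    return torch.where(grad.abs() >= lam, grad, torch.zeros_like(grad))

g = torch.randn(10)
print(topk_sparsify(g, k=3))
print(hard_threshold_sparsify(g, lam=1.0))
```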

DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning

1 code implementation • NeurIPS 2021 • Hang Xu, Kelly Kostopoulou, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis

DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end-user, to significantly lower the communication overhead.

DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning

1 code implementation • NeurIPS 2021 • Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis

This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored for distributed deep learning.
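
The general idea of communicating a sparse tensor in compressed form can be sketched as sending only the non-zero indices and values instead of the dense gradient. The toy encoder/decoder below mimics that idea only; DeepReduce's actual index and value compressors are more elaborate.

```python
import torch

def encode_sparse(dense):
    # Send only (indices, values, original length) instead of the dense tensor.
    idx = dense.nonzero(as_tuple=True)[0]
    return idx, dense[idx], dense.numel()

def decode_sparse(idx, vals, numel):
    # Rebuild the dense tensor on the receiver side.
    out = torch.zeros(numel, dtype=vals.dtype)
    out[idx] = vals
    return out

g = torch.tensor([0.0, 0.7, 0.0, 0.0, -1.2, 0.0])
idx, vals, n = encode_sparse(g)
assert torch.equal(decode_sparse(idx, vals, n), g)
```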

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.

Model Compression, Quantization
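
To illustrate the "quantization" side of compressed communication mentioned in the snippet above, here is a scaled-sign (1-bit style) quantizer, one of the simplest gradient quantizers. It is a sketch of the class of methods studied, not the paper's specific compressor.

```python
import torch

def scaled_sign_quantize(grad):
    # Transmit one scale (float) plus one sign per element (conceptually one bit each).
    scale = grad.abs().mean()
    signs = torch.sign(grad)
    return scale, signs

def dequantize(scale, signs):
    # Reconstruct an approximation of the gradient on the receiver side.
    return scale * signs

g = torch.randn(8)
scale, signs = scaled_sign_quantize(g)
print(dequantize(scale, signs))
```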
