Search Results for author: MohamadAli Torkamani

Found 5 papers, 2 papers with code

LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation

1 code implementation • 3 Feb 2023 • Rui Xue, Haoyu Han, MohamadAli Torkamani, Jian Pei, Xiaorui Liu

Recent works have demonstrated the benefits of capturing long-distance dependency in graphs by deeper graph neural networks (GNNs).

Graph Representation Learning
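
As a rough illustration of the abstract's premise, here is a minimal PyTorch sketch of why depth matters in GNNs: each stacked message-passing layer grows a node's receptive field by one hop, so an L-layer model can capture L-hop (long-distance) dependencies. The class names and the GCN-style normalization are illustrative assumptions, not the paper's LazyGNN method, whose lazy propagation is aimed precisely at avoiding the repeated full per-layer propagation shown here.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One round of mean-aggregation message passing: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: (N, N) normalized adjacency; h: (N, in_dim) node features
        return torch.relu(self.linear(a_hat @ h))

class DeepGNN(nn.Module):
    """Stacking L layers lets each node aggregate from its L-hop neighborhood."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(SimpleGCNLayer(dim, dim) for _ in range(num_layers))

    def forward(self, a_hat, h):
        for layer in self.layers:
            h = layer(a_hat, h)
        return h

# Toy usage: 4 nodes in a path graph; 3 layers give a 3-hop receptive field.
n, dim = 4, 8
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
a_hat = adj + torch.eye(n)                    # add self-loops
deg_inv = torch.diag(1.0 / a_hat.sum(dim=1))  # row-normalize
out = DeepGNN(dim, num_layers=3)(deg_inv @ a_hat, torch.randn(n, dim))
```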

Alternately Optimized Graph Neural Networks

no code implementations • 8 Jun 2022 • Haoyu Han, Xiaorui Liu, Haitao Mao, MohamadAli Torkamani, Feng Shi, Victor Lee, Jiliang Tang

Extensive experiments demonstrate that the proposed method achieves comparable or better performance than state-of-the-art baselines while offering significantly better computation and memory efficiency.

Multi-View Learning • Node Classification
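
The title names the core idea: alternating optimization. As a loose, hypothetical sketch (not the paper's actual formulation), one can alternate between updating free node-level variables F and the network weights, coupling the two with a graph-smoothness penalty tr(F^T L F). The objective terms, the placeholder Laplacian, and the weight lam below are all assumptions for illustration.

```python
import torch
import torch.nn as nn

n, d, c = 100, 16, 4                        # nodes, feature dim, classes
x = torch.randn(n, d)                       # node features
lap = torch.eye(n)                          # placeholder graph Laplacian (assumed)
y = torch.randint(0, c, (n,))               # toy labels for every node

mlp = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, c))
f = torch.zeros(n, c, requires_grad=True)   # free node-level variables
opt_f = torch.optim.Adam([f], lr=0.05)
opt_w = torch.optim.Adam(mlp.parameters(), lr=0.01)
ce, lam = nn.CrossEntropyLoss(), 0.1

for step in range(100):
    # Step 1: fix the MLP; update F to fit labels, stay near the MLP
    # output, and vary smoothly over the graph.
    loss_f = (ce(f, y)
              + lam * torch.trace(f.t() @ lap @ f)
              + ((f - mlp(x).detach()) ** 2).mean())
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # Step 2: fix F; update the MLP to regress onto the current F.
    loss_w = ((mlp(x) - f.detach()) ** 2).mean()
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()
```

The appeal of such alternating schemes, consistent with the abstract's efficiency claim, is that each subproblem is simpler and cheaper than backpropagating through a deep stack of propagation layers.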

Differential Equation Units: Learning Functional Forms of Activation Functions from Data

1 code implementation • 6 Sep 2019 • MohamadAli Torkamani, Shiv Shankar, Amirmohammad Rooshenas, Phillip Wallis

Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
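
In contrast to those fixed nonlinearities, the idea here is to learn the activation's functional form from data. Below is a minimal sketch of that general pattern: a small parametric family whose shape parameters are trained jointly with the network weights. The specific form a·exp(b·x)·sin(c·x) is chosen because it is a classic solution shape of a second-order linear ODE, but it is only an illustrative assumption, not the paper's actual Differential Equation Units.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Stand-in for a DEU: f(x; a, b, c) = a * exp(b*x) * sin(c*x).
    The damped-oscillator form mimics a second-order linear ODE solution;
    a, b, c are learned end-to-end like any other network parameters."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))
        self.b = nn.Parameter(torch.tensor(-0.1))
        self.c = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        return self.a * torch.exp(self.b * x) * torch.sin(self.c * x)

# Drop-in replacement for a fixed nonlinearity such as ReLU or sigmoid.
net = nn.Sequential(nn.Linear(8, 16), LearnableActivation(), nn.Linear(16, 1))
out = net(torch.randn(4, 8))
```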

Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions

no code implementations • 19 May 2019 • MohamadAli Torkamani, Phillip Wallis, Shiv Shankar, Amirmohammad Rooshenas

Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.

Differential Equation Networks

no code implementations • 27 Sep 2018 • MohamadAli Torkamani, Phillip Wallis

Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
