1 code implementation • 3 Feb 2023 • Rui Xue, Haoyu Han, MohamadAli Torkamani, Jian Pei, Xiaorui Liu
Recent works have demonstrated the benefits of capturing long-distance dependencies in graphs via deeper graph neural networks (GNNs).
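To illustrate why depth matters here, a minimal sketch (not the paper's method): repeated neighbor averaging, the propagation step of a GCN-style layer, spreads a signal one hop per layer, so capturing a dependency k hops away requires k layers. The graph and function names below are hypothetical examples.

```python
import numpy as np

def propagate(adj, x, num_layers):
    """Average each node's features with its neighbors', num_layers times.

    This is GCN-style propagation with self-loops and random-walk
    (row) normalization; it is illustrative, not the paper's model.
    """
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalize
    for _ in range(num_layers):
        x = a_hat @ x                           # one hop of smoothing per layer
    return x

# Path graph 0-1-2-3: node 3 is three hops from node 0.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

signal = np.array([[1.0], [0.0], [0.0], [0.0]])  # feature only at node 0
shallow = propagate(adj, signal, num_layers=1)   # node 3 still sees nothing
deep = propagate(adj, signal, num_layers=3)      # node 3 now receives the signal
```

With one layer the signal has not reached node 3 at all; only after three layers does a long-distance dependency register, which is the motivation for deeper GNNs.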
no code implementations • 8 Jun 2022 • Haoyu Han, Xiaorui Liu, Haitao Mao, MohamadAli Torkamani, Feng Shi, Victor Lee, Jiliang Tang
Extensive experiments demonstrate that the proposed method achieves comparable or better performance than state-of-the-art baselines while offering significantly better computational and memory efficiency.
1 code implementation • 6 Sep 2019 • MohamadAli Torkamani, Shiv Shankar, Amirmohammad Rooshenas, Phillip Wallis
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
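As a contrast to such fixed activations, a minimal sketch of a *trainable* activation (this is an illustrative stand-in, not the method proposed in the paper): a parametric ReLU whose negative-side slope `a` is itself fit by gradient descent. All names here are hypothetical.

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for x >= 0, learnable slope `a` for x < 0."""
    return np.where(x >= 0, x, a * x)

def prelu_grad_a(x, a):
    """Derivative of prelu(x, a) with respect to the slope parameter `a`."""
    return np.where(x >= 0, 0.0, x)

# Fit `a` by gradient descent so prelu matches the target y = |x|
# (which corresponds to a = -1) on a few sample inputs.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.abs(x)
a = 0.0                                        # start from plain ReLU
for _ in range(200):
    err = prelu(x, a) - y                      # residual
    grad = 2 * np.mean(err * prelu_grad_a(x, a))  # d(MSE)/da
    a -= 0.1 * grad                            # gradient step
# `a` converges to about -1, recovering the absolute-value activation.
```

The point of the sketch is only that the activation's shape becomes a parameter of the network rather than a fixed design choice made up front.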
no code implementations • 19 May 2019 • MohamadAli Torkamani, Phillip Wallis, Shiv Shankar, Amirmohammad Rooshenas
no code implementations • 27 Sep 2018 • MohamadAli Torkamani, Phillip Wallis