no code implementations • 16 Sep 2024 • Raika Karimi, Faezeh Faez, Yingxue Zhang, Xing Li, Lei Chen, Mingxuan Yuan, Mahdi Biparva
Contemporary hardware design benefits from the abstraction provided by high-level logic gates, streamlining the implementation of logic circuits.
no code implementations • 9 Sep 2024 • Faezeh Faez, Raika Karimi, Yingxue Zhang, Xing Li, Lei Chen, Mingxuan Yuan, Mahdi Biparva
On the other hand, we employ a hierarchical graph representation learning strategy to improve the model's capacity for learning expressive graph-level representations of large AIGs, surpassing traditional plain GNNs.
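For readers unfamiliar with the data structure: an And-Inverter Graph (AIG) represents a logic circuit using only two-input AND nodes with optional negation on each edge. A minimal evaluator (an illustrative sketch of the representation only, unrelated to the paper's implementation) might look like:

```python
def eval_aig(nodes, output, inputs):
    """Evaluate an And-Inverter Graph (AIG).

    nodes:  dict of node id -> ('input', name) or
            ('and', (child_id, negate), (child_id, negate))
    output: (node_id, negate) pair naming the circuit output
    inputs: dict of input name -> bool
    """
    memo = {}

    def value(nid):
        if nid not in memo:
            node = nodes[nid]
            if node[0] == 'input':
                memo[nid] = inputs[node[1]]
            else:  # two-input AND; '!=' applies the optional edge negation
                _, (a, na), (b, nb) = node
                memo[nid] = (value(a) != na) and (value(b) != nb)
        return memo[nid]

    out_id, negate = output
    return value(out_id) != negate

# XOR built only from AND gates and inverters:
# x ^ y = NOT( NOT(x AND NOT y) AND NOT(NOT x AND y) )
xor_nodes = {
    1: ('input', 'x'),
    2: ('input', 'y'),
    3: ('and', (1, False), (2, True)),   # x AND NOT y
    4: ('and', (1, True), (2, False)),   # NOT x AND y
    5: ('and', (3, True), (4, True)),    # NOT n3 AND NOT n4
}
```

Real AIGs used in logic synthesis are far larger; hierarchical graph learning targets exactly this scale.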
no code implementations • 14 Jun 2024 • Mohammad Dehghan, Mohammad Ali Alomrani, Sunyam Bagga, David Alfonso-Hermelo, Khalil Bibi, Abbas Ghaddar, Yingxue Zhang, Xiaoguang Li, Jianye Hao, Qun Liu, Jimmy Lin, Boxing Chen, Prasanna Parthasarathi, Mahdi Biparva, Mehdi Rezagholizadeh
To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system.
no code implementations • 14 Feb 2024 • Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Raika Karimi, Ali Ghodsi
Our experiments demonstrate that the cost associated with the loss computation can be reduced via node or dimension sampling without lowering the downstream performance.
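The sampling idea can be sketched generically (a minimal illustration with a Barlow Twins-style decorrelation loss as a stand-in, not the authors' implementation): the loss is evaluated on a random subset of nodes (rows) or embedding dimensions (columns), shrinking the correlation-matrix computation.

```python
import numpy as np

def decorrelation_loss(z1, z2):
    """Barlow Twins-style objective: the cross-correlation matrix of two
    views should approach the identity (ones on-diagonal, zeros off)."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / len(z1)                       # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + 5e-3 * off_diag

def sampled_loss(z1, z2, rng, node_frac=0.5, dim_frac=0.5):
    """Evaluate the loss on random subsets of nodes and dimensions,
    reducing the cost from O(n * d^2) to the sampled sizes."""
    n, d = z1.shape
    rows = rng.choice(n, size=max(2, int(n * node_frac)), replace=False)
    cols = rng.choice(d, size=max(2, int(d * dim_frac)), replace=False)
    return decorrelation_loss(z1[np.ix_(rows, cols)], z2[np.ix_(rows, cols)])

rng = np.random.default_rng(0)
z1 = rng.standard_normal((256, 64))
z2 = z1 + 0.1 * rng.standard_normal((256, 64))    # two correlated "views"
full = decorrelation_loss(z1, z2)
approx = sampled_loss(z1, z2, rng)
```

In practice the subset would be resampled every training step so that, in expectation, all nodes and dimensions contribute to the gradient.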
no code implementations • 14 Feb 2024 • Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Ali Ghodsi
A common phenomenon confining representation quality in Self-Supervised Learning (SSL) is dimensional collapse (also known as rank degeneration), where the learned representations are mapped to a low-dimensional subspace of the representation space.
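Dimensional collapse can be diagnosed with a generic rank measure (a standard diagnostic, not necessarily the paper's metric): the effective rank of the embedding matrix, derived from the entropy of its normalized singular values, falls far below the ambient dimension when representations collapse.

```python
import numpy as np

def effective_rank(z):
    """Effective rank = exp(entropy of the normalized singular values).
    Values well below the embedding dimension indicate collapse."""
    s = np.linalg.svd(z - z.mean(0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                                  # drop numerically zero modes
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
healthy = rng.standard_normal((1000, 64))          # spans all 64 dimensions
basis = rng.standard_normal((4, 64))
collapsed = rng.standard_normal((1000, 4)) @ basis  # lives in a 4-dim subspace
```

Here `effective_rank(healthy)` is close to 64 while `effective_rank(collapsed)` is near 4, despite both matrices having 64 columns.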
no code implementations • 9 Feb 2024 • Mahdi Naseri, Mahdi Biparva
Self-supervised Learning (SSL) has emerged as a powerful technique in pre-training deep learning models without relying on expensive annotated labels, instead leveraging embedded signals in unlabeled data.
no code implementations • 2 Feb 2024 • Mahdi Biparva, Raika Karimi, Faezeh Faez, Yingxue Zhang
Furthermore, we illustrate the underlying aspects of the proposed model in effectively capturing extensive temporal dependencies in dynamic graphs.
2 code implementations • 30 Oct 2022 • Mohammad Ali Alomrani, Mahdi Biparva, Yingxue Zhang, Mark Coates
Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns.
Ranked #1 on Dynamic Link Prediction on Social Evolution
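One common ingredient for extracting temporal patterns (a generic sketch of the functional time encoding used by several temporal GNNs, not necessarily this paper's architecture) is mapping the time elapsed since each neighbor interaction to cosine features before aggregation:

```python
import numpy as np

def time_encode(dt, dim=8):
    """Map time deltas to cosine features with geometrically spaced
    frequencies, so both recent and old events remain distinguishable."""
    freqs = 1.0 / 10.0 ** np.linspace(0, 4, dim)
    return np.cos(np.outer(dt, freqs))            # (len(dt), dim)

def aggregate(neigh_feats, dts):
    """Concatenate each neighbor's features with its time encoding,
    then mean-pool into a single recency-aware message."""
    enc = time_encode(np.asarray(dts, dtype=float))
    msgs = np.concatenate([neigh_feats, enc], axis=1)
    return msgs.mean(axis=0)
```

A learned model would replace the fixed frequencies with trainable parameters and the mean-pooling with attention.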
1 code implementation • 11 Mar 2022 • Lyndon Boone, Mahdi Biparva, Parisa Mojiri Forooshani, Joel Ramirez, Mario Masellis, Robert Bartha, Sean Symons, Stephen Strother, Sandra E. Black, Chris Heyn, Anne L. Martel, Richard H. Swartz, Maged Goubran
To address these limitations, we propose ROOD-MRI: a platform for benchmarking the Robustness of DNNs to Out-Of-Distribution (OOD) data, corruptions, and artifacts in MRI.
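The benchmarking pattern can be sketched generically (hypothetical corruption and metric, not the ROOD-MRI API): apply a corruption at increasing severity to held-out data and record how the model's metric degrades relative to clean inputs.

```python
import numpy as np

def gaussian_noise(img, severity):
    """Additive Gaussian noise on intensities in [0, 1];
    higher severity means stronger corruption."""
    sigma = [0.02, 0.05, 0.1, 0.2, 0.3][severity - 1]
    rng = np.random.default_rng(severity)          # deterministic benchmark
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def benchmark(model_metric, images, corruption, severities=(1, 2, 3, 4, 5)):
    """Return the metric on clean data and at each corruption severity."""
    clean = model_metric(images)
    corrupted = {s: model_metric(corruption(images, s)) for s in severities}
    return clean, corrupted
```

A full benchmark would sweep many corruption types (noise, blur, bias fields, motion artifacts) and summarize degradation across severities.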
no code implementations • 13 Jan 2021 • Mahdi Biparva, David Fernández-Llorca, Rubén Izquierdo-Gonzalo, John K. Tsotsos
Up to four two-stream approaches, which have been successfully applied to human action recognition, are adapted here by stacking visual cues from forward-looking video cameras to recognize and anticipate lane changes of target vehicles.
no code implementations • 21 Nov 2020 • Mahdi Biparva, John Tsotsos
In this work, we study how context interferes with a disentangled representation of the foreground target object.
no code implementations • 25 Aug 2020 • David Fernández-Llorca, Mahdi Biparva, Rubén Izquierdo-Gonzalo, John K. Tsotsos
Regions of different sizes around the vehicles are analyzed to evaluate how vehicle-to-vehicle interaction and contextual information contribute to performance.
no code implementations • 10 May 2020 • Mahdi Biparva, John Tsotsos
Network parameter reduction methods have been introduced to systematically deal with the computational and memory complexity of deep networks.
no code implementations • 4 Feb 2020 • Mahdi Biparva, John Tsotsos
Convolutional neural networks model the transformation of the input sensory data at the bottom of a network hierarchy to the semantic information at the top of the visual hierarchy.
1 code implementation • 16 Nov 2017 • Amir Rosenfeld, Mahdi Biparva, John K. Tsotsos
This process has been shown to be an effect of top-down signaling in the visual system triggered by the cue.
no code implementations • 21 Aug 2017 • Mahdi Biparva, John Tsotsos
Visual attention modeling has recently gained momentum in developing visual hierarchies provided by Convolutional Neural Networks.