1 code implementation • 23 May 2024 • Shukai Duan, Heng Ping, Nikos Kanakaris, Xiongye Xiao, Panagiotis Kyriakis, Nesreen K. Ahmed, Peiyu Zhang, Guixiang Ma, Mihai Capota, Shahin Nazarian, Theodore L. Willke, Paul Bogdan
Computation graphs are Directed Acyclic Graphs (DAGs) whose nodes correspond to mathematical operations; they are widely used as abstractions in the optimization of neural networks.
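As a rough illustration (not the paper's implementation), a computation graph can be represented as a DAG whose nodes are operations and whose edges are data dependencies; a topological order then gives a valid execution schedule. The operations and values below are hypothetical.

```python
# Minimal sketch: a computation graph as a DAG. Nodes are operations,
# edges are data dependencies, and a topological order yields a valid
# execution schedule. Example values are hypothetical.
from graphlib import TopologicalSorter

ops = {
    "a": (lambda: 2.0, []),                 # leaf: constant input
    "b": (lambda: 3.0, []),                 # leaf: constant input
    "c": (lambda a, b: a + b, ["a", "b"]),  # c = a + b
    "d": (lambda c, a: c * a, ["c", "a"]),  # d = c * a
}

deps = {name: set(parents) for name, (_, parents) in ops.items()}
values = {}
for name in TopologicalSorter(deps).static_order():  # dependencies come first
    fn, parents = ops[name]
    values[name] = fn(*(values[p] for p in parents))

print(values["d"])  # (2.0 + 3.0) * 2.0 = 10.0
```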
no code implementations • 3 Feb 2024 • Le Chen, Nesreen K. Ahmed, Akash Dutta, Arijit Bhattacharjee, Sixing Yu, Quazi Ishtiaque Mahmud, Waqwoya Abebe, Hung Phan, Aishwarya Sarkar, Branden Butler, Niranjan Hasabnis, Gal Oren, Vy A. Vo, Juan Pablo Munoz, Theodore L. Willke, Tim Mattson, Ali Jannesari
Recently, language models (LMs), especially large language models (LLMs), have revolutionized the field of deep learning.
no code implementations • 9 Dec 2023 • Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Heng Ping, Chenyu Zhou, Nesreen K. Ahmed, Guixiang Ma, Mihai Capota, Theodore L. Willke, Shahin Nazarian, Paul Bogdan
Code optimization is a challenging task requiring a substantial level of expertise from developers.
no code implementations • NeurIPS Workshop LMCA 2020 • Aaron Zweig, Nesreen Ahmed, Theodore L. Willke, Guixiang Ma
The application of deep reinforcement learning (RL) to graph learning and meta-learning inherits challenges from both topics.
no code implementations • 9 Oct 2020 • Guixiang Ma, Yao Xiao, Theodore L. Willke, Nesreen K. Ahmed, Shahin Nazarian, Paul Bogdan
High-level applications, such as machine learning, are evolving from simple multilayer-perceptron models for basic image recognition to much deeper and more complex neural networks for self-driving vehicle control systems. The rapid increase in the memory and computational resources these models consume demands multi-core parallel systems to scale the execution of the complex emerging applications that depend on them.
no code implementations • 20 Jul 2020 • Sachin Ravi, Sebastian Musslick, Maia Hamin, Theodore L. Willke, Jonathan D. Cohen
The terms multi-task learning and multitasking are easily confused.
no code implementations • 25 Dec 2019 • Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Philip S. Yu
In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search.
1 code implementation • ICML 2020 • Javier S. Turek, Shailee Jain, Vy Vo, Mihai Capota, Alexander G. Huth, Theodore L. Willke
In this work, we explore the delayed-RNN, which is a single-layer RNN that has a delay between the input and output.
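As an illustrative sketch (not the authors' code), one way to realize such a delay is to run a single-layer RNN for d extra blank time steps and read the outputs shifted by d, so each input step gets d additional steps of processing before its output is emitted. The module below assumes PyTorch and a zero-padding scheme.

```python
# A minimal sketch, assuming a delay d means the output for input step t is
# read d steps later: pad the input with d zero steps, run a single-layer
# RNN, and discard the first d outputs. Not the authors' implementation.
import torch
import torch.nn as nn

class DelayedRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, delay):
        super().__init__()
        self.delay = delay
        self.rnn = nn.RNN(input_size, hidden_size, num_layers=1, batch_first=True)
        self.readout = nn.Linear(hidden_size, output_size)

    def forward(self, x):                                 # x: (batch, time, input_size)
        pad = x.new_zeros(x.size(0), self.delay, x.size(2))
        h, _ = self.rnn(torch.cat([x, pad], dim=1))       # run d extra blank steps
        return self.readout(h[:, self.delay:, :])         # outputs aligned to inputs, shifted by d

y = DelayedRNN(input_size=8, hidden_size=32, output_size=4, delay=3)(torch.randn(2, 10, 8))
print(y.shape)  # torch.Size([2, 10, 4])
```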
no code implementations • 11 Sep 2018 • Michael J. Anderson, Jonathan I. Tamir, Javier S. Turek, Marcus T. Alley, Theodore L. Willke, Shreyas S. Vasanawala, Michael Lustig
Our improvements to the pipeline on a single machine provide a 3x overall reconstruction speedup, which allowed us to incorporate algorithmic changes that improve image quality.
1 code implementation • ECCV 2018 • Apoorv Vyas, Nataraj Jammalamadaka, Xia Zhu, Dipankar Das, Bharat Kaul, Theodore L. Willke
In conjunction with the standard cross-entropy loss, we minimize the novel loss to train an ensemble of classifiers.
2 code implementations • IJCAI 2018 • Nesreen K. Ahmed, Ryan Rossi, John Boaz Lee, Theodore L. Willke, Rong Zhou, Xiangnan Kong, Hoda Eldardiry
Random walks are at the heart of many existing network embedding methods.
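For context, a minimal sketch of the basic random-walk step shared by many embedding methods (DeepWalk-style); the sampled walks would then typically be fed to a skip-gram or similar model. This is illustrative only, not the paper's method.

```python
# Illustrative sketch: sample fixed-length random walks from a graph given
# as an adjacency list; such walk sequences are the common input to many
# network embedding methods.
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))  # step to a uniform random neighbor
            walks.append(walk)
    return walks

# Toy undirected graph as an adjacency list
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(random_walks(adj))
```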
no code implementations • 17 Nov 2017 • Hejia Zhang, Xia Zhu, Theodore L. Willke
We explore encoding brain symmetry into a neural network for a brain tumor segmentation task.
no code implementations • 25 Oct 2017 • Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan Kong, Theodore L. Willke, Hoda Eldardiry
To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathbf{x} \rightarrow w$ that maps a node attribute vector $\mathbf{x}$ to a type $w$.
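A rough sketch of that idea, under simplifying assumptions: the function $\Phi$ below maps a node's attribute vector to a discrete type via fixed binning (the paper learns this mapping; the binning here is purely for illustration), and walks record types rather than node identities, so the representation is not tied to specific nodes.

```python
# Illustrative sketch only: Phi maps a node attribute vector x to a discrete
# type w (here via fixed binning; the paper learns the mapping), and walks
# record types instead of node IDs, so they are not tied to node identity.
import random

def phi(x, n_bins=4):
    """Hypothetical Phi: bin each attribute and hash the bin pattern to a type w."""
    return hash(tuple(min(int(v * n_bins), n_bins - 1) for v in x))

def attributed_walk(adj, attrs, start, length=5, seed=0):
    rng = random.Random(seed)
    node = start
    walk = [phi(attrs[node])]
    for _ in range(length - 1):
        node = rng.choice(adj[node])
        walk.append(phi(attrs[node]))
    return walk  # a sequence of types w, not node identities

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
attrs = {0: [0.1, 0.9], 1: [0.8, 0.2], 2: [0.15, 0.85]}
print(attributed_walk(adj, attrs, start=0))
```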
no code implementations • 14 Sep 2017 • Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan Kong, Theodore L. Willke, Hoda Eldardiry
Random walks are at the heart of many existing deep learning algorithms for graph data.
no code implementations • 4 Oct 2016 • Nesreen K. Ahmed, Ryan A. Rossi, Theodore L. Willke, Rong Zhou
The experimental results demonstrate the utility of edge roles for network analysis tasks on graphs from a variety of problem domains.
no code implementations • 29 Sep 2016 • Hejia Zhang, Po-Hsuan Chen, Janice Chen, Xia Zhu, Javier S. Turek, Theodore L. Willke, Uri Hasson, Peter J. Ramadge
In this work, we examine a searchlight-based shared response model to identify shared information in small contiguous regions (searchlights) across the whole brain.
no code implementations • 17 Aug 2016 • Po-Hsuan Chen, Xia Zhu, Hejia Zhang, Javier S. Turek, Janice Chen, Theodore L. Willke, Uri Hasson, Peter J. Ramadge
We examine two ways to combine the ideas of a factor model and a searchlight-based analysis to aggregate multi-subject fMRI data while preserving spatial locality.
no code implementations • 16 Aug 2016 • Michael J. Anderson, Mihai Capotă, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman
The scale of functional magnetic resonance imaging (fMRI) data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted.
no code implementations • 13 Jun 2015 • Nesreen K. Ahmed, Jennifer Neville, Ryan A. Rossi, Nick Duffield, Theodore L. Willke
From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level and the local micro-level.
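As a toy illustration of what graphlet counting looks like, the sketch below counts the two connected 3-node graphlets, wedges (open triads) and triangles (closed triads), on a small undirected graph; real graphlet estimation frameworks handle larger graphlets and far larger graphs.

```python
# Illustrative sketch: count the two connected 3-node graphlets, wedges and
# triangles, on a small undirected graph given as adjacency sets.
from itertools import combinations

def count_3node_graphlets(adj):
    triangles, wedges = 0, 0
    for v, neighbors in adj.items():
        for u, w in combinations(sorted(neighbors), 2):
            if u in adj[w]:          # u-v-w closed: triangle (seen once per center vertex)
                triangles += 1
            else:
                wedges += 1          # u-v-w open: wedge centered at v
    return triangles // 3, wedges    # each triangle has 3 possible centers

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(count_3node_graphlets(adj))  # (1, 2): one triangle, two wedges
```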