Search Results for author: Constantine Dovrolis

Found 12 papers, 6 papers with code

SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired Continual Learning

1 code implementation • 29 May 2023 • Mustafa Burak Gurbuz, Jean Michael Moorman, Constantine Dovrolis

Inspired by how the brain consolidates memories, replay is a powerful strategy in continual learning (CL): the DNN is trained on a mixture of new data and examples from all previously seen classes.
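
A minimal sketch of the replay idea described above, assuming a generic rehearsal setup: the buffer, function names, and sampling scheme are illustrative, not the paper's SHARP method (which additionally exploits sparsity and replays hidden activations rather than raw inputs).

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (x, y) examples via reservoir sampling.
    Illustrative only: SHARP replays hidden activations, not raw inputs."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)     # reservoir sampling
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def mixed_batch(new_batch, buffer, replay_ratio=0.5):
    """Build a training batch mixing new examples with replayed old ones."""
    k = int(len(new_batch) * replay_ratio)
    return list(new_batch) + buffer.sample(k)
```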

Class Incremental Learning • Incremental Learning

Root-Cause Analysis of Activation Cascade Differences in Brain Networks

no code implementations • 16 Jul 2022 • Qihang Yao, Manoj Chandrasekaran, Constantine Dovrolis

The question we focus on is: given such activation cascades for two groups, say A and B (e.g., controls versus patients with a mental disorder), what is the smallest set of brain-connectivity (graph edge-weight) changes that suffices to explain the observed differences in the activation cascades between the two groups?
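
As a hedged illustration of that question, the sketch below assumes a one-step linear cascade model and an L1 penalty as stand-ins for the paper's cascade dynamics and search procedure; all names are ours.

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_delta(W, X, Y, lam=0.1, lr=1e-3, iters=2000):
    """Find a sparse edge-weight change D so that one-step cascades
    (W + D) @ X match the observed group-B responses Y, via proximal
    gradient descent on ||(W + D) X - Y||_F^2 + lam * ||D||_1."""
    D = np.zeros_like(W)
    for _ in range(iters):
        grad = 2.0 * ((W + D) @ X - Y) @ X.T   # gradient of the squared error
        D = soft_threshold(D - lr * grad, lr * lam)
    return D                                   # few nonzeros = candidate root causes
```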

PHEW: Paths with Higher Edge-Weights give "winning tickets" without training data

no code implementations • 1 Jan 2021 • Shreyas Malakarjun Patil, Constantine Dovrolis

We show that Paths with Higher Edge-Weights (PHEW) at initialization have a higher loss-gradient magnitude, resulting in more efficient training.
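
One plausible reading of the path-construction step, sketched under our own assumptions: walks from input to output units are biased by edge-weight magnitude, and every edge a walk traverses is kept. Function and parameter names are ours, not the paper's.

```python
import numpy as np

def weighted_walk_masks(layers, n_walks=100, seed=0):
    """Sample input-to-output paths with step probability proportional
    to edge-weight magnitude; keep each traversed edge in a mask.
    `layers` is a list of weight matrices of shape (n_out, n_in)."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros_like(W, dtype=bool) for W in layers]
    for _ in range(n_walks):
        unit = rng.integers(layers[0].shape[1])    # random input unit
        for l, W in enumerate(layers):
            p = np.abs(W[:, unit])
            p = p / p.sum()                        # bias step by |weight|
            nxt = rng.choice(W.shape[0], p=p)
            masks[l][nxt, unit] = True             # keep this edge
            unit = nxt
    return masks
```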

PHEW: Constructing Sparse Networks that Learn Fast and Generalize Well without Training Data

1 code implementation • 22 Oct 2020 • Shreyas Malakarjun Patil, Constantine Dovrolis

Our work is based on a recently proposed decomposition of the Neural Tangent Kernel (NTK) that decouples the dynamics of the training process into a data-dependent component and an architecture-dependent kernel, the latter referred to as the Path Kernel.
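
Schematically, in our own notation (not necessarily the paper's exact factorization), the decomposition separates a data-dependent factor from the architecture-dependent Path Kernel:

```latex
% a(x): data-dependent path activations; K_{\Pi}: architecture-dependent
% Path Kernel. Schematic only; the paper's exact factorization may differ.
\Theta(x, x') \;=\; a(x)^{\top} \, K_{\Pi} \, a(x')
```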

Unsupervised Progressive Learning and the STAM Architecture

1 code implementation • 3 Apr 2019 • James Smith, Cameron Taylor, Seth Baer, Constantine Dovrolis

We first pose the Unsupervised Progressive Learning (UPL) problem: an online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, learning a growing number of features that persist over time even though the data is not stored or replayed.
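
A toy illustration of the UPL setup (hypothetical, not the STAM architecture): a learner grows a set of centroid "features" in a single pass over an unlabeled stream, storing no raw data.

```python
import numpy as np

class OnlineCentroids:
    """Toy UPL learner: a growing set of centroid 'features' updated
    online from an unlabeled stream. Hypothetical illustration of the
    problem setup, not the STAM model."""

    def __init__(self, new_thresh=2.0, lr=0.05):
        self.centroids = []             # the growing feature set
        self.new_thresh = new_thresh
        self.lr = lr

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centroids:
            self.centroids.append(x.copy())
            return
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        i = int(np.argmin(dists))
        if dists[i] > self.new_thresh:  # novel pattern: grow a new feature
            self.centroids.append(x.copy())
        else:                           # familiar pattern: refine it online
            self.centroids[i] += self.lr * (x - self.centroids[i])
        # x is discarded after this call: nothing is stored or replayed
```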

Clustering • Continual Learning • +3

Unsupervised Continual Learning and Self-Taught Associative Memory Hierarchies

no code implementations • ICLR Workshop LLD 2019 • James Smith, Seth Baer, Zsolt Kira, Constantine Dovrolis

We first pose the Unsupervised Continual Learning (UCL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time.

Continual Learning • Online Clustering

A neuro-inspired architecture for unsupervised continual learning based on online clustering and hierarchical predictive coding

no code implementations • 22 Oct 2018 • Constantine Dovrolis

We propose that the Continual Learning desiderata can be achieved through a neuro-inspired architecture, grounded in Mountcastle's cortical column hypothesis.

Clustering • Continual Learning • +1

Emergence and Evolution of Hierarchical Structure in Complex Systems

1 code implementation • 13 May 2018 • Payam Siyari, Bistra Dilkina, Constantine Dovrolis

It is well known that many complex systems, in both technology and nature, exhibit hierarchical modularity: smaller modules, each providing a certain function, are used within larger modules that perform more complex functions.

δ-MAPS: From spatio-temporal data to a weighted and lagged network between functional domains

1 code implementation • 23 Feb 2016 • Ilias Fountalis, Annalisa Bracco, Bistra Dilkina, Constantine Dovrolis, Shella Keilholz

The proposed edge inference method examines the statistical significance of each lagged cross-correlation between two domains, infers a range of lag values for each edge, and assigns a weight to each edge based on the covariance of the two domains.
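
A sketch of that edge-inference step under simplifying assumptions: the naive z/sqrt(n) significance threshold below stands in for the paper's statistical test, and the domain signals are treated as plain 1-D time series.

```python
import numpy as np

def lagged_edge(a, b, max_lag=10, z=3.0):
    """Test lagged cross-correlations between two domain signals; return
    the significant lag range and a covariance-based edge weight,
    or None when no lag is significant."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    sig_lags = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:                   # a leads b by `lag` steps
            r = np.mean(a[:n - lag] * b[lag:]) if lag < n else 0.0
        else:                          # b leads a
            r = np.mean(a[-lag:] * b[:n + lag])
        if abs(r) > z / np.sqrt(n - abs(lag)):   # naive significance test
            sig_lags.append(lag)
    if not sig_lags:
        return None
    weight = float(np.cov(a, b)[0, 1])           # covariance-based weight
    return (min(sig_lags), max(sig_lags)), weight
```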

Other Computer Science

Lexis: An Optimization Framework for Discovering the Hierarchical Structure of Sequential Data

no code implementations • 17 Feb 2016 • Payam Siyari, Bistra Dilkina, Constantine Dovrolis

We also consider the problem of identifying the set of intermediate nodes (substrings) that collectively form the "core" of a Lexis-DAG, an important step in analyzing such DAGs.
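
As a hedged sketch of one ingredient of that analysis: path centrality, the number of source-to-target paths through a node, is a natural way to rank core candidates in a DAG. The code below is illustrative (names ours), assumes sources have no incoming edges and targets no outgoing ones, and does not solve the paper's harder objective of covering nearly all paths with a minimal node set.

```python
from collections import defaultdict

def topo_order(nodes, succ):
    """Kahn's algorithm: topological order of a DAG."""
    indeg = {v: 0 for v in nodes}
    for u in nodes:
        for w in succ[u]:
            indeg[w] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return order

def path_centrality(edges, sources, targets):
    """For each node, count source-to-target paths passing through it:
    (# paths source -> v) * (# paths v -> target), via two DP sweeps."""
    succ = defaultdict(list)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        nodes.update((u, v))
    order = topo_order(nodes, succ)
    src, tgt = set(sources), set(targets)

    down = {v: (1 if v in src else 0) for v in nodes}
    for v in order:                    # forward sweep: paths from sources
        for w in succ[v]:
            down[w] += down[v]

    up = {v: (1 if v in tgt else 0) for v in nodes}
    for v in reversed(order):          # backward sweep: paths to targets
        for w in succ[v]:
            up[v] += up[w]

    return {v: down[v] * up[v] for v in nodes}
```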

Text Compression
