Search Results for author: Hemanth Saratchandran

Found 11 papers, 0 papers with code

Sine Activated Low-Rank Matrices for Parameter Efficient Learning

no code implementations • 28 Mar 2024 • Yiping Ji, Hemanth Saratchandran, Cameron Gordon, Zeyu Zhang, Simon Lucey

Low-rank decomposition has emerged as a vital tool for enhancing parameter efficiency in neural network architectures, gaining traction across diverse applications in machine learning.
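
To make the idea concrete, here is a minimal sketch of a sine-activated low-rank layer, assuming the sine is applied elementwise to the product of two rank-r factors; the class name, the frequency parameter `omega`, and the initialization are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class SineLowRankLinear(nn.Module):
    """Hypothetical sketch: a linear layer whose weight is rebuilt from
    rank-r factors and passed through an elementwise sine,
    W = sin(omega * (U @ V)), before being applied."""

    def __init__(self, in_features, out_features, rank, omega=30.0):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, in_features) / in_features ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.omega = omega

    def forward(self, x):
        # The sine raises the effective rank of the product U @ V, while
        # the trainable parameter count stays at r * (in + out) + out.
        weight = torch.sin(self.omega * (self.U @ self.V))
        return x @ weight.T + self.bias

layer = SineLowRankLinear(256, 256, rank=8)
print(sum(p.numel() for p in layer.parameters()))  # 4352 vs. 65792 for dense
```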

From Activation to Initialization: Scaling Insights for Optimizing Neural Fields

no code implementations • 28 Mar 2024 • Hemanth Saratchandran, Sameera Ramasinghe, Simon Lucey

In the realm of computer vision, Neural Fields have gained prominence as a contemporary tool harnessing neural networks for signal representation.
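
For readers unfamiliar with the setup, a neural field in its plainest form is just an MLP from continuous coordinates to signal values; the sketch below uses illustrative widths and a ReLU activation, and does not reflect the specific activation and initialization scaling the paper analyzes.

```python
import torch
import torch.nn as nn

# A neural field: an MLP mapping continuous coordinates (here 2-D
# pixel locations) to signal values (here RGB).
field = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

coords = torch.rand(1024, 2) * 2 - 1  # query points in [-1, 1]^2
rgb = field(coords)                   # the signal evaluated at those points
print(rgb.shape)                      # torch.Size([1024, 3])
```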

D'OH: Decoder-Only random Hypernetworks for Implicit Neural Representations

no code implementations • 28 Mar 2024 • Cameron Gordon, Lachlan Ewen MacDonald, Hemanth Saratchandran, Simon Lucey

We present a strategy for the initialization of run-time deep implicit functions for single-instance signals through a Decoder-Only randomly projected Hypernetwork (D'OH).
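
A hedged sketch of the decoder-only idea follows: only a low-dimensional latent code is trained, and fixed random matrices decode it into the weight tensors of the target network. Class and variable names are hypothetical, and the sketch omits the details of the paper's projection design.

```python
import torch
import torch.nn as nn

class RandomDecoderHypernet(nn.Module):
    """Hypothetical sketch of a decoder-only random hypernetwork: a small
    trainable latent code is decoded into the target network's weight
    tensors by fixed (untrained) random projection matrices."""

    def __init__(self, latent_dim, target_shapes):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(latent_dim))  # only trainable part
        self.decoders = [
            (torch.randn(shape.numel(), latent_dim) / latent_dim ** 0.5, shape)
            for shape in target_shapes
        ]  # frozen random projections

    def forward(self):
        # One decoded weight tensor per layer of the target network.
        return [(P @ self.latent).view(shape) for P, shape in self.decoders]

# Weights for a tiny 2 -> 32 -> 1 target network, all driven by 64 numbers.
hyper = RandomDecoderHypernet(64, [torch.Size([32, 2]), torch.Size([1, 32])])
w1, w2 = hyper()
```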

Preconditioners for the Stochastic Training of Implicit Neural Representations

no code implementations • 13 Feb 2024 • Shin-Fang Chng, Hemanth Saratchandran, Simon Lucey

Implicit neural representations have emerged as a powerful technique for encoding complex continuous multidimensional signals as neural networks, enabling a wide range of applications in computer vision, robotics, and geometry.
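
The following sketch shows where a preconditioner enters a stochastic training step; the diagonal form used here is only an illustrative stand-in, as the paper's actual preconditioner construction is not reproduced.

```python
import torch

def preconditioned_sgd_step(params, diag_precond, lr=1e-2):
    """One preconditioned stochastic gradient step. Instead of the plain
    update p -= lr * g, each gradient is rescaled by a preconditioner,
    here a per-parameter diagonal d: p -= lr * g / d. The choice of d is
    the substance of the paper; this diagonal form is a stand-in."""
    with torch.no_grad():
        for p, d in zip(params, diag_precond):
            if p.grad is not None:
                p.add_(p.grad / d, alpha=-lr)
```

The intent of such a step is that a well-chosen rescaling improves the conditioning of the effective optimization problem relative to plain SGD.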

A Sampling Theory Perspective on Activations for Implicit Neural Representations

no code implementations • 8 Feb 2024 • Hemanth Saratchandran, Sameera Ramasinghe, Violetta Shevchenko, Alexander Long, Simon Lucey

Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.

Analyzing the Neural Tangent Kernel of Periodically Activated Coordinate Networks

no code implementations • 7 Feb 2024 • Hemanth Saratchandran, Shin-Fang Chng, Simon Lucey

In this paper, we address the lack of a theoretical understanding of periodically activated networks by analyzing their Neural Tangent Kernel (NTK).
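
As a concrete companion, the sketch below computes the empirical NTK Gram matrix of a small sine-activated network with `torch.func`; the architecture and the frequency 30.0 are illustrative, and the empirical kernel only approximates the infinite-width object such analyses concern.

```python
import torch
import torch.nn as nn
from torch.func import functional_call, jacrev

class Siren(nn.Module):
    # Minimal periodically activated network; width and frequency illustrative.
    def __init__(self):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(1, 64), nn.Linear(64, 1)

    def forward(self, x):
        return self.fc2(torch.sin(30.0 * self.fc1(x)))

net = Siren()
params = dict(net.named_parameters())
x = torch.linspace(-1, 1, 20).unsqueeze(-1)

# Jacobian of outputs w.r.t. parameters, one block per parameter tensor,
# flattened and concatenated into J; the empirical NTK is J @ J.T.
jac = jacrev(lambda p: functional_call(net, p, (x,)).squeeze(-1))(params)
J = torch.cat([j.reshape(x.shape[0], -1) for j in jac.values()], dim=1)
K = J @ J.T
print(K.shape)  # torch.Size([20, 20]); its spectrum governs training dynamics
```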


Architectural Strategies for the Optimization of Physics-Informed Neural Networks

no code implementations • 5 Feb 2024 • Hemanth Saratchandran, Shin-Fang Chng, Simon Lucey

Physics-informed neural networks (PINNs) offer a promising avenue for tackling both forward and inverse problems in partial differential equations (PDEs) by incorporating deep learning with fundamental physics principles.
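
A minimal sketch of the PINN recipe on a toy 1-D problem, u'(x) = -u(x) with u(0) = 1, chosen purely for illustration: the loss combines an autograd-computed PDE residual at collocation points with a boundary term.

```python
import torch
import torch.nn as nn

# Toy problem chosen for illustration: u'(x) = -u(x) on [0, 1], u(0) = 1.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

x = torch.rand(128, 1, requires_grad=True)  # interior collocation points
u = net(x)
du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

residual = du + u                          # physics term: enforce u' + u = 0
boundary = net(torch.zeros(1, 1)) - 1.0    # data term: enforce u(0) = 1
loss = residual.pow(2).mean() + boundary.pow(2).mean()
loss.backward()                            # gradients for any optimizer
```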

Curvature-Aware Training for Coordinate Networks

no code implementations • ICCV 2023 • Hemanth Saratchandran, Shin-Fang Chng, Sameera Ramasinghe, Lachlan MacDonald, Simon Lucey

Coordinate networks are widely used in computer vision due to their ability to represent signals as compressed, continuous entities.

On the effectiveness of neural priors in modeling dynamical systems

no code implementations • 10 Mar 2023 • Sameera Ramasinghe, Hemanth Saratchandran, Violetta Shevchenko, Simon Lucey

Modelling dynamical systems is integral to understanding the natural world.

On skip connections and normalisation layers in deep optimisation

no code implementations • NeurIPS 2023 • Lachlan Ewen MacDonald, Jack Valmadre, Hemanth Saratchandran, Simon Lucey

We introduce a general theoretical framework, designed for the study of gradient optimisation of deep neural networks, that encompasses ubiquitous architecture choices including batch normalisation, weight normalisation and skip connections.
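
For concreteness, a single block combining two of the architecture choices the framework covers, a skip connection around a batch-normalised transformation, is sketched below; the pre-norm layout and sizes are illustrative choices, not a construction taken from the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One block combining a skip connection with batch normalisation,
    in the common pre-norm layout (an illustrative choice)."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.BatchNorm1d(dim)
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):
        # Identity path plus normalised branch: x + f(norm(x)). The identity
        # path is what keeps gradients well-behaved at depth.
        return x + self.fc(torch.relu(self.norm(x)))

block = ResidualBlock(128)
y = block(torch.randn(32, 128))
```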

How You Start Matters for Generalization

no code implementations • 17 Jun 2022 • Sameera Ramasinghe, Lachlan MacDonald, Moshiur Farazi, Hemanth Saratchandran, Simon Lucey

Characterizing the remarkable generalization properties of over-parameterized neural networks remains an open problem.
