Search Results for author: Sayandev Mukherjee

Found 6 papers, 2 papers with code

Randomness Is All You Need: Semantic Traversal of Problem-Solution Spaces with Large Language Models

1 code implementation • 8 Feb 2024 • Thomas Sandholm, Sayandev Mukherjee, Bernardo A. Huberman

We present a novel approach to exploring innovation problem and solution domains using LLM fine-tuning with a custom idea database.

Why Neural Networks Work

no code implementations • 26 Nov 2022 • Sayandev Mukherjee, Bernardo A. Huberman

We argue that many properties of fully-connected feedforward neural networks (FCNNs), also called multi-layer perceptrons (MLPs), are explainable from the analysis of a single pair of operations, namely a random projection into a higher-dimensional space than the input, followed by a sparsification operation.
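
For intuition, a minimal numerical sketch of that pair of operations, assuming a Gaussian random projection and a simple top-k magnitude rule for the sparsification step (both are illustrative choices, not the authors' construction):

    # Illustrative sketch: random projection to a higher dimension, then sparsification.
    # The expansion factor and the top-k rule are assumptions made for this example.
    import numpy as np

    rng = np.random.default_rng(0)

    def project_and_sparsify(x, expansion=4, keep_frac=0.1):
        """Randomly project x into a higher-dimensional space, then keep only
        the largest-magnitude activations (one simple sparsification rule)."""
        d_in = x.shape[0]
        d_out = expansion * d_in                                  # higher-dimensional target space
        W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)   # random projection matrix
        z = W @ x                                                 # projected representation
        k = max(1, int(keep_frac * d_out))                        # number of coordinates to keep
        thresh = np.sort(np.abs(z))[-k]                           # magnitude of the k-th largest entry
        return np.where(np.abs(z) >= thresh, z, 0.0)              # zero out everything smaller

    x = rng.standard_normal(16)          # example input
    s = project_and_sparsify(x)
    print(s.shape, np.count_nonzero(s))  # (64,) with roughly 6 surviving coordinates

Top-k selection is only one way to sparsify; thresholding at zero, as a ReLU does, fits the same project-then-sparsify pattern.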

Reinforcement Learning for Standards Design

no code implementations • 13 Oct 2021 • Shahrukh Khan Kasi, Sayandev Mukherjee, Lin Cheng, Bernardo A. Huberman

Communications standards are designed via committees of humans holding repeated meetings over months or even years until consensus is achieved.

Reinforcement Learning (RL)

General Information Bottleneck Objectives and their Applications to Machine Learning

no code implementations • 12 Dec 2019 • Sayandev Mukherjee

Aided by this insight, we formulate a new Information Bottleneck Objective (IBO) that accounts for this property of the parameters of the trained model, and derive and optimize a variational bound on this IBO.

BIG-bench Machine Learning
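
For background (the paper's generalized IBOs are not reproduced in this snippet), the classical information bottleneck objective over a stochastic encoder p(z|x), and the variational bound that is typically optimized in practice, can be written as:

    % Classical information bottleneck (IB) objective for a stochastic encoder p(z|x):
    \min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} = I(X;Z) - \beta\, I(Z;Y)

    % A standard variational bound, with decoder q(y|z) and marginal approximation r(z),
    % which replaces the intractable mutual-information terms:
    \mathcal{L}_{\mathrm{VIB}} =
      \mathbb{E}_{p(x,y)}\,\mathbb{E}_{p(z \mid x)}\!\left[\log q(y \mid z)\right]
      - \beta\, \mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(p(z \mid x)\,\|\,r(z)\right)\right]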

Machine Learning using the Variational Predictive Information Bottleneck with a Validation Set

no code implementations • 6 Nov 2019 • Sayandev Mukherjee

Zellner (1988) modeled statistical inference in terms of information processing and postulated the Information Conservation Principle (ICP) between the input and output of the information processing block, showing that this yielded Bayesian inference as the optimum information processing rule.

Bayesian Inference • BIG-bench Machine Learning
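
As a sketch of Zellner's argument in standard notation (an illustration, not an excerpt from the paper): write the prior as p(θ), the likelihood as f(y|θ), the output density as q(θ|y), and the marginal as m(y) = ∫ p(θ) f(y|θ) dθ. The gap between output and input information is a KL divergence, so it is nonnegative and vanishes exactly when q is the Bayesian posterior:

    % Output information minus input information, both measured under q:
    \Delta[q]
      = \underbrace{\mathbb{E}_{q}\!\left[\ln q(\theta \mid y)\right] + \ln m(y)}_{\text{output information}}
      - \underbrace{\mathbb{E}_{q}\!\left[\ln p(\theta) + \ln f(y \mid \theta)\right]}_{\text{input information}}
      = \mathrm{KL}\!\left(q(\theta \mid y)\,\middle\|\,\frac{p(\theta)\, f(y \mid \theta)}{m(y)}\right) \ge 0,
    % with equality iff q(\theta \mid y) = p(\theta) f(y \mid \theta) / m(y), i.e., Bayes' rule,
    % which is why Bayesian inference emerges as the 100%-efficient information processing rule.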
