Search Results for author: Sam Greydanus

Found 10 papers, 7 papers with code

Nature's Cost Function: Simulating Physics by Minimizing the Action

no code implementations • 3 Mar 2023 • Tim Strang, Isabella Caruso, Sam Greydanus

In physics, there is a scalar function called the action which behaves like a cost function.
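The idea hinted at above is that the action, the time integral of kinetic minus potential energy, can be treated like a loss and minimized directly over a discretized path. Below is a minimal sketch of that idea, not the paper's code; the free-fall system, step size, and optimizer settings are assumptions.

```python
import torch

dt, g, T = 0.05, 9.8, 40
y0 = torch.tensor([0.0])                          # fixed start height
y1 = torch.tensor([0.0])                          # fixed end height
y_mid = torch.zeros(T - 2, requires_grad=True)    # free interior points of the path

opt = torch.optim.Adam([y_mid], lr=0.1)
for step in range(2000):
    path = torch.cat([y0, y_mid, y1])
    v = (path[1:] - path[:-1]) / dt               # finite-difference velocity
    lagrangian = 0.5 * v**2 - g * path[:-1]       # kinetic minus potential (unit mass)
    action = (lagrangian * dt).sum()              # discretized action S
    opt.zero_grad(); action.backward(); opt.step()
# `path` now approximates the parabolic free-fall trajectory between the endpoints.
```

Because the endpoints are pinned, gradient descent only adjusts the interior of the path, mirroring the fixed-endpoint boundary conditions of the least-action principle.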

Dissipative Hamiltonian Neural Networks: Learning Dissipative and Conservative Dynamics Separately

no code implementations • 25 Jan 2022 • Andrew Sosanya, Sam Greydanus

Recent work has shown that neural networks can learn such symmetries directly from data using Hamiltonian Neural Networks (HNNs).

Friction
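A dissipative HNN in this spirit learns two scalar functions: a Hamiltonian-like H for the conservative part of the dynamics and a second scalar D for the dissipative part, combining the symplectic gradient of H with an ordinary gradient of D. The sketch below is an assumption about the mechanics, not the authors' code; the layer sizes and the [q, p] input convention are illustrative.

```python
import torch, torch.nn as nn

class DissipativeHNN(nn.Module):
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.D = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                                # x = [q, p], shape (batch, 2)
        x = x.requires_grad_(True)
        dH = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dD = torch.autograd.grad(self.D(x).sum(), x, create_graph=True)[0]
        dHdq, dHdp = dH.split(1, dim=-1)
        symplectic = torch.cat([dHdp, -dHdq], dim=-1)    # conservative, divergence-free part
        return symplectic + dD                           # plus dissipative, curl-free part

# Training would regress this output onto observed (dq/dt, dp/dt) targets.
```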

Piecewise-constant Neural ODEs

1 code implementation • 11 Jun 2021 • Sam Greydanus, Stefan Lee, Alan Fern

Neural networks are a popular tool for modeling sequential data but they generally do not treat time as a continuous variable.
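A piecewise-constant ODE cell in this spirit holds the hidden-state derivative fixed between observations, so each update reduces to an Euler-style step scaled by the elapsed time and can absorb irregular sampling intervals. The sketch below is an assumption about the mechanics, not the authors' implementation; the cell structure and dimensions are illustrative.

```python
import torch, torch.nn as nn

class PiecewiseConstantODECell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(input_dim + hidden_dim, hidden_dim),
                               nn.Tanh(), nn.Linear(hidden_dim, hidden_dim))

    def forward(self, x_seq, dt_seq, h):
        # x_seq: (T, batch, input_dim); dt_seq: (T, batch, 1) irregular time gaps
        outputs = []
        for x, dt in zip(x_seq, dt_seq):
            dhdt = self.f(torch.cat([x, h], dim=-1))   # derivative held constant over the interval
            h = h + dt * dhdt                          # exact integral of a constant derivative
            outputs.append(h)
        return torch.stack(outputs), h
```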

Scaling down Deep Learning

1 code implementation • 29 Nov 2020 • Sam Greydanus

Though deep learning models have taken on commercial and political relevance, many aspects of their training and operation remain poorly understood.

Neural reparameterization improves structural optimization

1 code implementation • NeurIPS 2019 Deep Inverse Workshop • Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus

Structural optimization is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices.
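The reparameterization trick here is to optimize the weights of a network that emits the design, rather than optimizing the design pixels directly, while the physics objective stays unchanged. The sketch below illustrates only that pattern; `compliance` is a placeholder for a differentiable structural-analysis routine, and the generator architecture is an assumption.

```python
import torch, torch.nn as nn

def compliance(density):                        # placeholder physics objective, not a real solver
    return ((density - density.roll(1, -1)) ** 2).mean()

generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                          nn.Linear(256, 64 * 64), nn.Sigmoid())
z = torch.randn(1, 128)                          # fixed latent "seed" for the design
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(500):
    density = generator(z).view(1, 64, 64)       # the design is the network's output
    loss = compliance(density)                   # physics loss backpropagates into the weights
    opt.zero_grad(); loss.backward(); opt.step()
```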

Hamiltonian Neural Networks

5 code implementations • NeurIPS 2019 • Sam Greydanus, Misko Dzamba, Jason Yosinski

Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics.
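Concretely, an HNN learns a single scalar H(q, p) and reads the dynamics off its symplectic gradient, so the learned vector field conserves the learned energy by construction. A minimal sketch, with the training and rollout details assumed rather than taken from the paper:

```python
import torch, torch.nn as nn

H = nn.Sequential(nn.Linear(2, 200), nn.Tanh(), nn.Linear(200, 1))

def field(x):                                              # x = [q, p], shape (batch, 2)
    dH = torch.autograd.grad(H(x).sum(), x, create_graph=True)[0]
    return torch.cat([dH[:, 1:2], -dH[:, 0:1]], dim=-1)    # dq/dt = dH/dp, dp/dt = -dH/dq

def train_step(x, dxdt_true, opt):
    x = x.clone().requires_grad_(True)
    loss = ((field(x) - dxdt_true) ** 2).mean()            # match observed time derivatives
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

def rollout(x0, steps=100, dt=0.01):
    x, traj = x0.clone().requires_grad_(True), [x0]
    for _ in range(steps):
        x = (x + dt * field(x)).detach().requires_grad_(True)   # Euler step on the learned field
        traj.append(x.detach())
    return torch.stack(traj)
```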

Learning Finite State Representations of Recurrent Policy Networks

no code implementations • ICLR 2019 • Anurag Koul, Sam Greydanus, Alan Fern

Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems.

Atari Games • Imitation Learning
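The paper extracts finite-state machines by discretizing the policy's memory. The sketch below shows one way to push hidden activations through a three-level quantizer with a straight-through gradient; the specific architecture and quantization scheme here are assumptions rather than the authors' exact method.

```python
import torch, torch.nn as nn

class TernaryQuantizer(nn.Module):
    def forward(self, h):
        soft = torch.tanh(h)                         # continuous surrogate in (-1, 1)
        hard = torch.round(soft)                     # discrete code in {-1, 0, 1}
        return soft + (hard - soft).detach()         # straight-through gradient estimator

class QuantizedPolicy(nn.Module):
    def __init__(self, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)
        self.quant = TernaryQuantizer()
        self.head = nn.Linear(hidden_dim, n_actions)

    def step(self, obs, h):
        h = self.quant(self.rnn(obs, h))             # memory only exists as discrete codes
        return self.head(h), h                       # enumerating the codes yields a finite-state machine
```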

Visualizing and Understanding Atari Agents

3 code implementations • ICML 2018 • Sam Greydanus, Anurag Koul, Jonathan Dodge, Alan Fern

While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so.

Reinforcement Learning (RL)
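One way to surface those strategies, in the spirit of this paper, is perturbation-based saliency: blur one patch of the input frame at a time and score how much the policy's action distribution changes. The sketch below makes that concrete; the box blur, patch size, and scoring details are assumptions rather than the paper's exact recipe.

```python
import torch, torch.nn.functional as F

def saliency_map(policy, frame, patch=5):
    # frame: (1, C, H, W); policy(frame) -> action logits
    with torch.no_grad():
        base = F.softmax(policy(frame), dim=-1)
        H, W = frame.shape[-2:]
        scores = torch.zeros(H, W)
        blurred = F.avg_pool2d(frame, kernel_size=9, stride=1, padding=4)  # crude box blur
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                perturbed = frame.clone()
                perturbed[..., i:i+patch, j:j+patch] = blurred[..., i:i+patch, j:j+patch]
                changed = F.softmax(policy(perturbed), dim=-1)
                scores[i:i+patch, j:j+patch] = 0.5 * ((changed - base) ** 2).sum()
    return scores   # large values mark regions the policy relies on
```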

Learning the Enigma with Recurrent Neural Networks

1 code implementation • 24 Aug 2017 • Sam Greydanus

We demonstrate that RNNs can learn decryption algorithms -- the mappings from ciphertext to plaintext -- for three polyalphabetic ciphers (Vigenère, Autokey, and Enigma).

Cryptanalysis • speech-recognition +1
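Framing decryption as sequence-to-sequence learning looks roughly like the sketch below: generate (key + ciphertext) -> plaintext pairs for a Vigenère cipher and fit a character-level LSTM to the mapping. The data format, model size, and training details are assumptions, not the paper's setup.

```python
import random, string, torch, torch.nn as nn

A = string.ascii_uppercase

def vigenere_encrypt(plain, key):
    return "".join(A[(A.index(c) + A.index(key[i % len(key)])) % 26]
                   for i, c in enumerate(plain))

def sample_pair(n=16, klen=4):
    plain = "".join(random.choices(A, k=n))
    key = "".join(random.choices(A, k=klen))
    return key + vigenere_encrypt(plain, key), plain      # input: key followed by ciphertext

class CharLSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(26, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 26)

    def forward(self, idx):                               # idx: (batch, seq_len) character indices
        out, _ = self.lstm(self.embed(idx))
        return self.head(out)                             # per-step character logits

# Training would encode characters as indices and minimize cross-entropy between the
# model's outputs at the ciphertext positions and the true plaintext characters.
```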
