Search Results for author: Peter Belcak

Found 7 papers, 3 papers with code

Exponentially Faster Language Modelling

3 code implementations · 15 Nov 2023 · Peter Belcak, Roger Wattenhofer

Language models only really need to use an exponentially small fraction of their neurons for individual inferences.

Tasks: Benchmarking, Language Modelling
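To get a feel for the scale of that claim, here is a back-of-the-envelope sketch: in a feedforward layer organised as a balanced binary tree (as in the fast feedforward networks the paper builds on), one inference touches only a logarithmic number of neurons. The layer width below is an illustrative assumption, not a figure quoted from the paper.

```python
import math

# Illustrative only: for a tree-structured layer of n neurons, a single
# inference follows one root-to-leaf path of roughly log2(n) neurons.
layer_width = 4095                              # hypothetical layer size
used = math.ceil(math.log2(layer_width + 1))    # neurons on one path
print(f"{used} of {layer_width} neurons used per inference "
      f"({used / layer_width:.2%})")
```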

Fast Feedforward Networks

3 code implementations · 28 Aug 2023 · Peter Belcak, Roger Wattenhofer

We break the linear link between the layer size and its inference cost by introducing the fast feedforward (FFF) architecture, a log-time alternative to feedforward networks.
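A minimal sketch of one reading of that design, assuming hard sign-based routing through a binary tree of single neurons with one small MLP per leaf; the class name, dimensions, and routing rule are my own choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FastFeedforwardSketch(nn.Module):
    """Illustrative fast-feedforward-style layer: a depth-d binary tree of
    scalar routing neurons picks one of 2**d small leaf blocks per input,
    so each example touches only d routing neurons and one leaf block."""

    def __init__(self, dim: int, depth: int, leaf_hidden: int):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** depth - 1
        # One routing neuron (weight vector + bias) per internal tree node.
        self.node_w = nn.Parameter(torch.randn(n_nodes, dim) / dim ** 0.5)
        self.node_b = nn.Parameter(torch.zeros(n_nodes))
        # One small feedforward block per leaf.
        self.leaves = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, leaf_hidden), nn.GELU(),
                          nn.Linear(leaf_hidden, dim))
            for _ in range(2 ** depth)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Hard routing: at each node, evaluate only that
        # node's neuron and descend left or right by its sign.
        out = torch.empty_like(x)
        for i, xi in enumerate(x):
            node = 0
            for _ in range(self.depth):
                go_right = torch.dot(self.node_w[node], xi) + self.node_b[node] > 0
                node = 2 * node + (2 if go_right else 1)
            leaf = node - (2 ** self.depth - 1)   # index within the leaf layer
            out[i] = self.leaves[leaf](xi)
        return out

# Usage: 8 leaves; each input evaluates 3 routing neurons and 1 leaf block.
layer = FastFeedforwardSketch(dim=64, depth=3, leaf_hidden=128)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```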

Examining the Emergence of Deductive Reasoning in Generative Language Models

no code implementations · 31 May 2023 · Peter Belcak, Luca A. Lanzendörfer, Roger Wattenhofer

We conduct a preliminary inquiry into the ability of generative transformer models to reason deductively from the premises provided.
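A hypothetical probe of the kind such an inquiry might use, pairing premises with a candidate conclusion for the model to judge; the wording and format below are my own assumptions, not the paper's materials.

```python
# Construct a premises-plus-conclusion prompt and ask for a yes/no judgement.
# The nonsense predicates keep the test about deduction, not world knowledge.
premises = [
    "All glimphs are tove.",
    "Some borogoves are glimphs.",
]
conclusion = "Some borogoves are tove."
prompt = (
    "Premises:\n"
    + "\n".join(f"- {p}" for p in premises)
    + f"\nDoes it follow that: {conclusion}\nAnswer yes or no:"
)
print(prompt)
```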

Neural Combinatorial Logic Circuit Synthesis from Input-Output Examples

1 code implementation · 29 Oct 2022 · Peter Belcak, Roger Wattenhofer

We propose a novel, fully explainable neural approach to synthesis of combinatorial logic circuits from input-output examples.
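To make the problem statement concrete, here is what an input-output specification looks like (a truth table for a 1-bit full adder) and a check of a hand-written candidate circuit against it; this illustrates only the specification format, not the paper's neural synthesis method.

```python
# Input-output examples for a 1-bit full adder: (a, b, cin) -> (sum, cout).
spec = {
    (0, 0, 0): (0, 0), (0, 0, 1): (1, 0),
    (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (0, 1),
    (1, 1, 0): (0, 1), (1, 1, 1): (1, 1),
}

def candidate_circuit(a, b, cin):
    # A hand-written candidate built from XOR/AND/OR gates; a synthesiser's
    # job is to recover such a circuit from the examples alone.
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

assert all(candidate_circuit(*inp) == out for inp, out in spec.items())
print("candidate matches all", len(spec), "input-output examples")
```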

Deterministic Graph-Walking Program Mining

no code implementations · 22 Aug 2022 · Peter Belcak, Roger Wattenhofer

These programs characterise linear long-distance relationships between two given vertex sets in the context of the whole graph.
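Under one reading of the abstract, a deterministic graph-walking program is a fixed sequence of edge-label moves applied to a vertex set, and mining asks for a short program carrying one given vertex set onto another; the toy graph and representation below are my own illustration, not the paper's formalism.

```python
# Labelled directed edges: (source, label) -> destination.
edges = {
    ("paper1", "cites"): "paper2",
    ("paper2", "cites"): "paper3",
    ("paper3", "authored_by"): "alice",
    ("paper2", "authored_by"): "bob",
}

def run_program(start_vertices, program):
    """Apply a sequence of edge labels deterministically to a vertex set."""
    current = set(start_vertices)
    for label in program:
        current = {edges[(v, label)] for v in current if (v, label) in edges}
    return current

source, target = {"paper1"}, {"alice"}
program = ["cites", "cites", "authored_by"]
print(run_program(source, program) == target)  # True
```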

The LL(finite) strategy for optimal LL(k) parsing

no code implementations · 15 Oct 2020 · Peter Belcak

The LL(finite) strategy for parsing LL(k) grammars, where k need not be known in advance, is presented.
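For readers unfamiliar with the lookahead issue involved, here is a toy illustration of why a single token of lookahead can be insufficient; this sketches the problem LL(k) strategies address, not the LL(finite) algorithm itself.

```python
# Grammar:  S -> "id" "=" Expr  |  "id" "(" Args ")"
# Both alternatives start with "id", so 1 token of lookahead cannot decide;
# peeking at the second token ("=" vs "(") resolves the choice (k = 2).
def choose_alternative(tokens):
    if tokens[:1] != ["id"]:
        raise SyntaxError("expected identifier")
    if tokens[1:2] == ["="]:
        return "assignment"
    if tokens[1:2] == ["("]:
        return "call"
    raise SyntaxError("expected '=' or '(' after identifier")

print(choose_alternative(["id", "=", "num"]))       # assignment
print(choose_alternative(["id", "(", "num", ")"]))  # call
```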

Fast Agent-Based Simulation Framework with Applications to Reinforcement Learning and the Study of Trading Latency Effects

no code implementations · 18 Aug 2020 · Peter Belcak, Jan-Peter Calliess, Stefan Zohren

As a simple illustration, we employ our toolbox to investigate the role of order-processing delay both in normal trading and in the scenario of a significant price change.
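As a purely illustrative sketch of the kind of experiment described (agent names, delays, and orders below are assumptions, not the paper's framework or results), a discrete-event loop with a configurable order-processing delay might look like this:

```python
import heapq

PROCESSING_DELAY = 2.0   # exchange-side delay before an order takes effect

events = []  # (effective_time, sequence, description) priority queue
for seq, (send_time, agent, order) in enumerate([
        (0.0, "trader_a", "buy 1 @ 100"),
        (0.5, "trader_b", "sell 1 @ 101")]):
    heapq.heappush(events, (send_time + PROCESSING_DELAY, seq,
                            f"{agent}: {order} processed"))

# Orders become effective only after the processing delay has elapsed.
while events:
    t, _, what = heapq.heappop(events)
    print(f"t={t:.1f}  {what}")
```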
