no code implementations • 8 Feb 2024 • Raymond Douglas, Jacek Karwowski, Chan Bae, Andis Draguns, Victoria Krakovna
Prior work has shown theoretically that models fail to imitate the agents that generated their training data when those agents relied on hidden observations: the hidden observations act as confounding variables, and the models treat the actions they themselves generate as evidence for observations that never occurred.
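A toy Monte-Carlo sketch can make the confounding concrete. This is an illustration only, not the paper's formal construction: the two-step coin-copying setup and every name in it are invented here. An expert copies a hidden coin noisily for two steps; an imitator trained only on the action sequences ends up treating its own first action as evidence about the coin, so its second action tracks its first action rather than the true hidden observation.

```python
# Toy illustration (not the paper's setup): an expert sees a hidden coin o
# and copies it noisily for two steps; an imitator trained only on action
# sequences treats its own first action as evidence about o.
import numpy as np

rng = np.random.default_rng(0)
p = 0.9          # prob. the expert's action matches the hidden observation
n = 200_000

# --- training data: expert rollouts ---
o = rng.integers(0, 2, n)
a1 = np.where(rng.random(n) < p, o, 1 - o)
a2 = np.where(rng.random(n) < p, o, 1 - o)

# imitator = empirical conditionals P(a1), P(a2 | a1), with o unobserved
p_a1 = a1.mean()
p_a2_given_a1 = [a2[a1 == v].mean() for v in (0, 1)]

# --- deployment: fresh hidden observations the imitator never sees ---
o_new = rng.integers(0, 2, n)
b1 = (rng.random(n) < p_a1).astype(int)                    # ignores o_new
b2 = (rng.random(n) < np.take(p_a2_given_a1, b1)).astype(int)

print("expert  accuracy per step:", p)
print("imitator step-1 accuracy :", (b1 == o_new).mean())  # ~0.5
print("imitator step-2 accuracy :", (b2 == o_new).mean())  # ~0.5: b2 follows b1, not o
```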
no code implementations • 31 Jan 2024 • Raymond Douglas, Andis Draguns, Tomáš Gavenčiak
We develop a new technique for mitigating the problem of strong priors: we take the original prompt, produce a weakened version that is even more susceptible to the strong-prior problem, and then extrapolate the model's continuation away from the weakened prompt.
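A minimal greedy-decoding sketch of this idea, assuming a contrastive logit-extrapolation rule of the form weak + α·(original − weak) with α > 1; the exact combination rule, the gpt2 checkpoint, and the α value are assumptions made here for illustration, not the paper's stated configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

strong_prompt = "Instruction: repeat the string exactly. Input: aab aab aab aa_"
weak_prompt = "aab aab aab aa_"   # weakened prompt, more prone to the strong prior

alpha = 2.0  # assumed extrapolation strength; alpha = 1 recovers the original prompt

def next_logits(ids):
    with torch.no_grad():
        return model(ids).logits[0, -1]

ids_s = tok(strong_prompt, return_tensors="pt").input_ids
ids_w = tok(weak_prompt, return_tensors="pt").input_ids

generated = []
for _ in range(20):
    ls, lw = next_logits(ids_s), next_logits(ids_w)
    # Extrapolate away from the weakened prompt: lw + alpha * (ls - lw)
    combined = lw + alpha * (ls - lw)
    nxt = combined.argmax().reshape(1, 1)
    ids_s = torch.cat([ids_s, nxt], dim=-1)
    ids_w = torch.cat([ids_w, nxt], dim=-1)
    generated.append(nxt.item())

print(tok.decode(generated))
```

The same token is appended to both contexts each step, so the two continuations stay aligned while the logit difference keeps pushing generation away from the prior-driven behaviour.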
no code implementations • 27 Jul 2022 • Elīza Gaile, Andis Draguns, Emīls Ozoliņš, Kārlis Freivalds
We use our loss function with a Graph Neural Network and design controlled experiments on both Euclidean and asymmetric TSP.
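For a flavour of what an unsupervised TSP objective can look like, here is a generic differentiable surrogate over an edge "heat map": expected tour length plus penalties pushing the heat map toward a permutation. Subtour elimination is omitted for brevity, and nothing below is claimed to be the paper's actual loss.

```python
import torch

def tsp_surrogate_loss(dist, logits, lam=10.0):
    # dist: (n, n) pairwise distances (may be asymmetric); logits: raw edge scores.
    P = torch.softmax(logits, dim=-1)        # each row: outgoing-edge distribution
    expected_len = (P * dist).sum()          # expected length of selected edges
    # Penalty: every city entered exactly once, and no self-loops.
    col_sums = P.sum(dim=0)
    penalty = ((col_sums - 1.0) ** 2).sum() + torch.diagonal(P).sum()
    return expected_len + lam * penalty

# usage: optimise edge scores for one random Euclidean instance
n = 10
pts = torch.rand(n, 2)
dist = torch.cdist(pts, pts)
logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    tsp_surrogate_loss(dist, logits).backward()
    opt.step()
```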
1 code implementation • 1 Aug 2021 • Ronalds Zakovskis, Andis Draguns, Eliza Gaile, Emils Ozolins, Karlis Freivalds
In this paper, we propose a new recurrent cell called the Residual Recurrent Unit (RRU), which outperforms traditional cells while employing no gates at all.
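A minimal PyTorch sketch of a gate-free residual recurrent cell in this spirit; the normalization choice, dropout placement, and learned residual scaling below are assumptions for illustration, so consult the paper for the actual RRU design.

```python
import torch
import torch.nn as nn

class ResidualRecurrentUnit(nn.Module):
    # Gate-free recurrent cell: a candidate update computed from [x, h]
    # is added to a scaled copy of the old state instead of being gated.
    def __init__(self, input_size, hidden_size, dropout=0.1):
        super().__init__()
        self.lin_in = nn.Linear(input_size + hidden_size, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)       # assumed normalization choice
        self.drop = nn.Dropout(dropout)
        self.lin_h = nn.Linear(hidden_size, hidden_size)
        self.scale = nn.Parameter(torch.ones(hidden_size))  # learned residual weight

    def forward(self, x, h):
        z = torch.cat([x, h], dim=-1)
        z = self.drop(torch.relu(self.norm(self.lin_in(z))))
        # Residual update instead of gating.
        return self.scale * h + self.lin_h(z)

# usage: unroll over a short sequence
cell = ResidualRecurrentUnit(32, 64)
h = torch.zeros(8, 64)
for x in torch.randn(10, 8, 32):   # 10 time steps, batch of 8
    h = cell(x, h)
```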
1 code implementation • 14 Jun 2021 • Emils Ozolins, Karlis Freivalds, Andis Draguns, Eliza Gaile, Ronalds Zakovskis, Sergejs Kozlovics
To demonstrate the capabilities of the query mechanism, we formulate an unsupervised (not depending on labels) loss function for the Boolean Satisfiability Problem (SAT) and theoretically show that it allows the network to extract rich information about the problem.
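One standard way to build such a label-free SAT objective is a probabilistic relaxation of clause satisfaction: treat each variable as an independent Bernoulli, compute the probability that each clause is violated, and minimise the negative log-probability that all clauses hold. The sketch below shows that flavour; the clause encoding and log-loss form are assumptions, not necessarily the paper's exact formulation.

```python
import torch

def unsat_clause_prob(p, clause):
    # clause: signed ints, e.g. [1, -3, 4] means (x1 or not x3 or x4);
    # p[i] is the probability that variable i+1 is true.
    v = torch.ones(())
    for lit in clause:
        i = abs(lit) - 1
        # a literal is false with prob (1 - p_i) if positive, p_i if negative
        v = v * ((1 - p[i]) if lit > 0 else p[i])
    return v  # probability the whole clause is violated

def sat_loss(p, clauses):
    # -sum log P(clause satisfied); approaches zero as all clauses are satisfied
    return -sum(torch.log(1 - unsat_clause_prob(p, c) + 1e-9) for c in clauses)

# usage: optimise raw logits so sigmoid(logits) satisfies the formula
clauses = [[1, -2], [-1, 2], [2, 3]]
logits = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    sat_loss(torch.sigmoid(logits), clauses).backward()
    opt.step()
print(torch.sigmoid(logits) > 0.5)  # a satisfying assignment, if one was found
```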
2 code implementations • 6 Apr 2020 • Andis Draguns, Emīls Ozoliņš, Agris Šostaks, Matīss Apinis, Kārlis Freivalds
Attention is a commonly used mechanism in sequence processing, but its O(n^2) time and memory complexity prevents its application to long sequences (see the sketch below).
Ranked #1 on Music Transcription on MusicNet
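The quadratic cost is visible directly in the shape of the attention score matrix; a minimal illustration, with sizes chosen arbitrarily:

```python
import torch

n, d = 4096, 64
q, k = torch.randn(n, d), torch.randn(n, d)
scores = q @ k.T     # (n, n) score matrix: time and memory grow as O(n^2)
print(scores.shape)  # torch.Size([4096, 4096]) -- ~16.8M entries for n = 4096
```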