no code implementations • insights (ACL) 2022 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer
Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone.
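Anisotropy of this kind can be quantified with the average pairwise cosine similarity of the representations: a value far above zero indicates the vectors cluster in a narrow cone. A minimal sketch (the function name and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def average_cosine_similarity(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine similarity; values far above 0 mean the
    vectors occupy a narrow cone (anisotropy)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(embeddings)
    # Average over off-diagonal pairs only.
    return (sims.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
# Isotropic baseline: random Gaussian vectors have similarity near 0.
isotropic = rng.standard_normal((500, 64))
# A crude "narrow cone": the same vectors shifted by a common direction.
anisotropic = isotropic + 5.0
print(average_cosine_similarity(isotropic))    # near 0
print(average_cosine_similarity(anisotropic))  # near 1
```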
1 code implementation • spnlp (ACL) 2022 • Guirong Fu, Zhao Meng, Zhen Han, Zifeng Ding, Yunpu Ma, Matthias Schubert, Volker Tresp, Roger Wattenhofer
In this paper, we tackle temporal knowledge graph completion by proposing TempCaps, a capsule network-based embedding model for the task.
1 code implementation • 9 Sep 2024 • Yahya Jabary, Andreas Plesner, Turlan Kuzhagaliyev, Roger Wattenhofer
However, these methods are model-specific and thus cannot aid CAPTCHAs in fooling all models.
1 code implementation • 20 Aug 2024 • Florian Grötschla, Joël Mathys, Christoffer Raun, Roger Wattenhofer
Therefore, we propose a novel framework: GraphFSA (Graph Finite State Automaton).
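As a schematic reading of the idea (not the paper's exact formulation), a graph finite state automaton keeps a discrete state per node and updates it from the node's own state and the multiset of its neighbors' states:

```python
from collections import Counter

def graphfsa_step(states, adjacency, transition):
    """One synchronous automaton step: each node's next state is a
    function of its own state and the multiset of neighbor states."""
    return [
        transition(states[v], Counter(states[u] for u in adjacency[v]))
        for v in range(len(states))
    ]

# Toy transition rule: switch to state 1 once any neighbor is in state 1
# (flood fill / reachability, a classic graph-algorithm primitive).
def flood(state, neighbor_states):
    return 1 if state == 1 or neighbor_states[1] > 0 else 0

adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph 0-1-2-3
states = [1, 0, 0, 0]                               # source at node 0
for _ in range(3):
    states = graphfsa_step(states, adjacency, flood)
print(states)  # [1, 1, 1, 1]
```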
1 code implementation • 7 Aug 2024 • Florian Grötschla, Luca A. Lanzendörfer, Marco Calzavara, Roger Wattenhofer
Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations.
no code implementations • 16 Jul 2024 • Shaopeng Wei, Beni Egressy, Xingyan Chen, Yu Zhao, Fuzhen Zhuang, Roger Wattenhofer, Gang Kou
Enterprise credit assessment is critical for evaluating financial risk, and Graph Neural Networks (GNNs), with their advanced capability to model inter-entity relationships, are a natural tool to get a deeper understanding of these financial networks.
1 code implementation • 9 Jul 2024 • Giulia Argüello, Luca A. Lanzendörfer, Roger Wattenhofer
Cue points mark possible temporal boundaries for transitions between two pieces of music in DJ mixing, and they constitute a crucial element of autonomous DJ systems as well as live mixing.
no code implementations • 5 Jul 2024 • Rainer Feichtinger, Florian Grötschla, Lioba Heimbach, Roger Wattenhofer
Nodes announce their channels to the network, forming a graph with channels as edges.
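The announced channels can be assembled into such a graph directly; a minimal sketch with hypothetical node names and capacities:

```python
def build_channel_graph(announcements):
    """Fold channel announcements (node_a, node_b, capacity) into an
    undirected graph: nodes are keys, channels are weighted edges."""
    graph = {}
    for a, b, capacity in announcements:
        graph.setdefault(a, {})[b] = capacity
        graph.setdefault(b, {})[a] = capacity
    return graph

# Hypothetical announcements, capacities in satoshis.
announcements = [("alice", "bob", 500_000), ("bob", "carol", 1_000_000)]
g = build_channel_graph(announcements)
print(sorted(g))  # ['alice', 'bob', 'carol']
print(g["bob"])   # both of bob's channels, keyed by counterparty
```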
1 code implementation • 29 Jun 2024 • Benjamin Estermann, Luca A. Lanzendörfer, Yannick Niedermayr, Roger Wattenhofer
Algorithmic reasoning is a fundamental cognitive ability that plays a pivotal role in problem-solving and decision-making processes.
1 code implementation • 27 Jun 2024 • Giacomo Camposampiero, Michael Hersche, Aleksandar Terzić, Roger Wattenhofer, Abu Sebastian, Abbas Rahimi
We introduce the Abductive Rule Learner with Context-awareness (ARLC), a model that solves abstract reasoning tasks based on Learn-VRF.
1 code implementation • 22 Jun 2024 • Carlos Vonessen, Florian Grötschla, Roger Wattenhofer
Message-Passing Neural Networks (MPNNs) are extensively employed in graph learning tasks but suffer from a restricted scope of information exchange, as each round of message passing is confined to neighboring nodes.
Ranked #1 on Graph Regression on Peptides-struct
2 code implementations • 1 Jun 2024 • Nathan Corecco, Giorgio Piatti, Luca A. Lanzendörfer, Flint Xiaofeng Fan, Roger Wattenhofer
However, the successful implementation of RL in recommender systems is challenging because of several factors, including the limited availability of online data for training on-policy methods.
no code implementations • 4 May 2024 • Zeyu Yang, Zhao Meng, Xiaochen Zheng, Roger Wattenhofer
Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern.
1 code implementation • 29 Mar 2024 • Hei Yi Mak, Flint Xiaofeng Fan, Luca A. Lanzendörfer, Cheston Tan, Wei Tsang Ooi, Roger Wattenhofer
CAESAR is an aggregation strategy used by the server that combines convergence-aware sampling with a screening mechanism.
no code implementations • 6 Mar 2024 • Paul Doucet, Benjamin Estermann, Till Aczel, Roger Wattenhofer
This study addresses the integration of diversity-based and uncertainty-based sampling strategies in active learning, particularly within the context of self-supervised pre-trained models.
no code implementations • 6 Mar 2024 • Yuta Ono, Till Aczel, Benjamin Estermann, Roger Wattenhofer
Active learning is a machine learning paradigm designed to optimize model performance in a setting where labeled data is expensive to acquire.
1 code implementation • 9 Feb 2024 • Florian Grötschla, Joël Mathys, Robert Veres, Roger Wattenhofer
We introduce a scalable Graph Neural Network (GNN) based Graph Drawing framework with sub-quadratic runtime that can learn to optimize stress.
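Stress is the standard layout objective here: the weighted squared mismatch between layout distances and graph-theoretic distances. A small reference implementation (the Kamada-Kawai-style 1/d² weighting is a common convention, not necessarily the paper's exact choice):

```python
import itertools
import math

def stress(positions, dist):
    """Graph-drawing stress: weighted squared mismatch between layout
    distances and graph-theoretic distances, with 1/d^2 weights."""
    total = 0.0
    for i, j in itertools.combinations(range(len(positions)), 2):
        d = dist[i][j]
        total += (math.dist(positions[i], positions[j]) - d) ** 2 / d**2
    return total

# A path 0-1-2 drawn on a straight line achieves zero stress.
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]  # shortest-path distances
print(stress([(0, 0), (1, 0), (2, 0)], dist))  # 0.0
print(stress([(0, 0), (1, 0), (1, 1)], dist))  # > 0: the bent layout is worse
```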
1 code implementation • 7 Jan 2024 • Philip Jordan, Florian Grötschla, Flint Xiaofeng Fan, Roger Wattenhofer
We provide the first decentralized Byzantine fault-tolerant FRL method.
1 code implementation • 14 Dec 2023 • Andreas Bergmeister, Karolis Martinkus, Nathanaël Perraudin, Roger Wattenhofer
However, most existing methods struggle with large graphs due to the complexity of representing the entire joint distribution across all node pairs and capturing both global and local graph structures simultaneously.
3 code implementations • 15 Nov 2023 • Peter Belcak, Roger Wattenhofer
Language models only really need to use an exponentially small fraction of their neurons for individual inferences.
1 code implementation • 30 Oct 2023 • Stefan Künzli, Florian Grötschla, Joël Mathys, Roger Wattenhofer
We propose SURF, a benchmark designed to test the generalization of learned graph-based fluid simulators.
no code implementations • 10 Oct 2023 • Joël Mathys, Florian Grötschla, Kalyan Varma Nadimpalli, Roger Wattenhofer
We test the Flood and Echo Net on a variety of synthetic tasks and the SALSA-CLRS benchmark and find that the algorithmic alignment of the execution improves generalization to larger graph sizes.
no code implementations • 3 Oct 2023 • Vivian Ziemke, Benjamin Estermann, Roger Wattenhofer, Ye Wang
In the evolving landscape of digital art, Non-Fungible Tokens (NFTs) have emerged as a groundbreaking platform, bridging the realms of art and technology.
1 code implementation • 21 Sep 2023 • Julian Minder, Florian Grötschla, Joël Mathys, Roger Wattenhofer
We introduce an extension to the CLRS algorithmic learning benchmark, prioritizing scalability and the utilization of sparse representations.
4 code implementations • 28 Aug 2023 • Peter Belcak, Roger Wattenhofer
We break the linear link between the layer size and its inference cost by introducing the fast feedforward (FFF) architecture, a log-time alternative to feedforward networks.
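The core trick can be sketched as a balanced binary tree of decision neurons: inference walks one root-to-leaf path and evaluates a single leaf, so the work grows with the tree depth rather than the layer width. A toy hard-routing sketch (the real FFF places small networks at the leaves and uses soft, differentiable routing during training):

```python
import numpy as np

def fff_forward(x, node_weights, leaf_weights):
    """Hard-routing sketch of a fast feedforward pass: walk a balanced
    binary tree of decision neurons (stored as a heap-ordered array)
    and evaluate only the single leaf reached, instead of every neuron."""
    depth = int(np.log2(len(leaf_weights)))
    node = 0
    for _ in range(depth):
        go_right = node_weights[node] @ x > 0
        node = 2 * node + 1 + int(go_right)  # left/right child in heap order
    leaf = node - (len(leaf_weights) - 1)    # heap index -> leaf index
    return leaf_weights[leaf] @ x

rng = np.random.default_rng(0)
dim, leaves = 16, 8
node_w = rng.standard_normal((leaves - 1, dim))  # 7 internal decision neurons
leaf_w = rng.standard_normal((leaves, dim))      # 8 leaf output neurons
x = rng.standard_normal(dim)
y = fff_forward(x, node_w, leaf_w)  # evaluates 3 decisions + 1 leaf, not all 8
```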
no code implementations • 9 Aug 2023 • Nina Weng, Martyna Plomecka, Manuel Kaufmann, Ard Kastrati, Roger Wattenhofer, Nicolas Langer
Eye movements can reveal valuable insights into various aspects of human mental processes, physical well-being, and actions.
1 code implementation • 30 Jun 2023 • Eren Akbiyik, Florian Grötschla, Beni Egressy, Roger Wattenhofer
We use Graphtester to analyze over 40 different graph datasets, determining upper bounds on the performance of various GNNs based on the number of layers.
1 code implementation • 22 Jun 2023 • Luca A. Lanzendörfer, Roger Wattenhofer
Implicit Neural Representations (INRs) have emerged as a promising method for representing diverse data modalities, including 3D shapes, images, and audio.
no code implementations • 20 Jun 2023 • Béni Egressy, Luc von Niederhäusern, Jovan Blanusa, Erik Altman, Roger Wattenhofer, Kubilay Atasu
This paper analyses a set of simple adaptations that transform standard message-passing Graph Neural Networks (GNN) into provably powerful directed multigraph neural networks.
no code implementations • 31 May 2023 • Peter Belcak, Luca A. Lanzendörfer, Roger Wattenhofer
We conduct a preliminary inquiry into the ability of generative transformer models to deductively reason from premises provided.
no code implementations • 23 May 2023 • Frédéric Odermatt, Béni Egressy, Roger Wattenhofer
Our plug-and-play approach performs on par with the winning submissions without using a domain-specific language model and with no additional training.
no code implementations • 25 Apr 2023 • Mihai Babiac, Karolis Martinkus, Roger Wattenhofer
We provide a novel approach to construct generative models for graphs.
no code implementations • 7 Mar 2023 • Giacomo Camposampiero, Loic Houmard, Benjamin Estermann, Joël Mathys, Roger Wattenhofer
While artificial intelligence (AI) models have achieved human or even superhuman performance in many well-defined applications, they still struggle to show signs of broad and flexible intelligence.
1 code implementation • 2 Mar 2023 • Benjamin Estermann, Roger Wattenhofer
We compare DAVA to models with optimal hyperparameters.
no code implementations • 19 Feb 2023 • Ard Kastrati, Martyna Beata Plomecka, Joël Küchler, Nicolas Langer, Roger Wattenhofer
In this study, we validate the findings of previously published papers, showing the feasibility of Electroencephalography (EEG)-based gaze estimation.
no code implementations • 26 Jan 2023 • Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan, Bryan Kian Hsiang Low, Roger Wattenhofer
Federated Reinforcement Learning (FedRL) encourages distributed agents to learn collectively from each other's experience to improve their performance without exchanging their raw trajectories.
no code implementations • 23 Jan 2023 • Lioba Heimbach, Eric Schertenleib, Roger Wattenhofer
They feared spiking ETH borrowing rates would lead to mass liquidations which could undermine their viability.
1 code implementation • 9 Dec 2022 • Florian Grötschla, Joël Mathys, Roger Wattenhofer
In order to scale, we focus on a recurrent architecture design that can learn simple graph problems end to end on smaller graphs and then extrapolate to larger instances.
1 code implementation • 20 Nov 2022 • Jeremia Geiger, Karolis Martinkus, Oliver Richter, Roger Wattenhofer
Rigid origami has shown potential in a wide variety of practical applications.
1 code implementation • 29 Oct 2022 • Peter Belcak, Roger Wattenhofer
We propose a novel, fully explainable neural approach to synthesis of combinatorial logic circuits from input-output examples.
1 code implementation • 29 Oct 2022 • Yu Fei, Ping Nie, Zhao Meng, Roger Wattenhofer, Mrinmaya Sachan
We further explore the applicability of our clustering approach by evaluating it on 14 datasets with more diverse topics, text lengths, and numbers of classes.
1 code implementation • 4 Oct 2022 • Kilian Konstantin Haefeli, Karolis Martinkus, Nathanaël Perraudin, Roger Wattenhofer
Denoising diffusion probabilistic models and score-matching models have proven to be very powerful for generative tasks.
1 code implementation • 23 Sep 2022 • Peter Belcák, David Hofer, Roger Wattenhofer
Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing.
1 code implementation • 21 Sep 2022 • Peter Belcák, Roger Wattenhofer
The learning of the simplest possible computational pattern -- periodicity -- is an open problem in the research of strong generalisation in neural networks.
no code implementations • 20 Sep 2022 • Peter Belcák, Ard Kastrati, Flavio Schenker, Roger Wattenhofer
Integer sequences are of central importance to the modeling of concepts admitting complete finitary descriptions.
no code implementations • 22 Aug 2022 • Peter Belcak, Roger Wattenhofer
These programs characterise linear long-distance relationships between the given two vertex sets in the context of the whole graph.
no code implementations • 20 Aug 2022 • Lioba Heimbach, Eric Schertenleib, Roger Wattenhofer
Financial markets have evolved over centuries, and exchanges have converged to rely on the order book mechanism for market making.
1 code implementation • 22 Jun 2022 • Karolis Martinkus, Pál András Papp, Benedikt Schesch, Roger Wattenhofer
AgentNet is inspired by sublinear algorithms, featuring a computational complexity that is independent of the graph size.
1 code implementation • 17 Jun 2022 • Lukas Wolf, Ard Kastrati, Martyna Beata Płomecka, Jie-Ming Li, Dustin Klebe, Alexander Veicht, Roger Wattenhofer, Nicolas Langer
Here, we introduce DETRtime, a novel framework for time-series segmentation that creates ocular event detectors that rely solely on EEG data, without requiring an additionally recorded eye-tracking modality.
no code implementations • 9 Jun 2022 • Robin Fritsch, Samuel Käser, Roger Wattenhofer
This paper studies whether automated market maker protocols such as Uniswap can sustainably retain a portion of their trading fees for the protocol.
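For intuition, here is a constant-product swap with a liquidity-provider fee, extended with a hypothetical `protocol_share` parameter that diverts part of the fee to the protocol. This is a sketch of the question being studied, not Uniswap's actual implementation:

```python
def swap_out(x_reserve, y_reserve, dx, fee=0.003, protocol_share=0.0):
    """Constant-product swap (x * y = k) with a liquidity-provider fee.
    `protocol_share` is a hypothetical knob diverting a fraction of the
    fee to the protocol rather than to liquidity providers."""
    dx_after_fee = dx * (1 - fee)
    dy = y_reserve * dx_after_fee / (x_reserve + dx_after_fee)
    protocol_cut = dx * fee * protocol_share
    return dy, protocol_cut

# Swap 10 tokens into a balanced 1000/1000 pool; 10% of the fee to the protocol.
dy, cut = swap_out(1000.0, 1000.0, 10.0, protocol_share=0.1)
print(round(dy, 4), cut)  # slightly less than 10 out; a small protocol cut
```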
no code implementations • 1 Jun 2022 • Beni Egressy, Roger Wattenhofer
Most Graph Neural Networks (GNNs) cannot distinguish some graphs or indeed some pairs of nodes within a graph.
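This limitation is usually explained via the 1-dimensional Weisfeiler-Leman test, which upper-bounds the expressive power of standard message-passing GNNs. A compact implementation showing two non-isomorphic graphs it cannot tell apart:

```python
def wl_colors(adjacency, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement: repeatedly
    rehash each node's color with the multiset of its neighbors'
    colors. Standard message-passing GNNs are at most this expressive."""
    colors = {v: 0 for v in adjacency}
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in adjacency
        }
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adjacency}
    return colors

# A 6-cycle vs. two disjoint triangles: every node is 2-regular, so 1-WL
# gives all nodes the same color and cannot distinguish the two graphs.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(set(wl_colors(cycle6).values()), set(wl_colors(triangles).values()))  # {0} {0}
```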
no code implementations • 26 May 2022 • Peter Müller, Lukas Faber, Karolis Martinkus, Roger Wattenhofer
We propose the fully explainable Decision Tree Graph Neural Network (DT+GNN) architecture.
no code implementations • 24 May 2022 • Lukas Faber, Roger Wattenhofer
This paper studies asynchronous message passing (AMP), a new paradigm for applying neural network based learning to graphs.
no code implementations • 18 May 2022 • Lioba Heimbach, Eric Schertenleib, Roger Wattenhofer
However, Uniswap V3 requires far more decisions from liquidity providers than previous DEX designs.
1 code implementation • 4 Apr 2022 • Karolis Martinkus, Andreas Loukas, Nathanaël Perraudin, Roger Wattenhofer
We approach the graph generation problem from a spectral perspective by first generating the dominant parts of the graph Laplacian spectrum and then building a graph matching these eigenvalues and eigenvectors.
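The spectral round trip can be illustrated with a full (rather than truncated) eigendecomposition: rebuild the Laplacian from its eigenpairs, read edge weights off the off-diagonal, and threshold. The learned model replaces this naive reconstruction step; the sketch only shows the representation being exploited:

```python
import numpy as np

def graph_from_spectrum(eigenvalues, eigenvectors, threshold=0.5):
    """Rebuild a Laplacian from eigenpairs, read edge weights off its
    off-diagonal, and threshold into an adjacency matrix."""
    L = eigenvectors @ np.diag(eigenvalues) @ eigenvectors.T
    weights = -L
    np.fill_diagonal(weights, 0.0)  # off-diagonal of -L holds edge weights
    adj = (weights > threshold).astype(int)
    return np.maximum(adj, adj.T)   # symmetrize against numerical noise

# Round trip on a path graph 0-1-2: its full spectrum reproduces it exactly.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
print(graph_from_spectrum(vals, vecs))  # the path graph again
```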
no code implementations • 30 Jan 2022 • Pál András Papp, Roger Wattenhofer
We study and compare different Graph Neural Network extensions that increase the expressive power of GNNs beyond the Weisfeiler-Leman test.
no code implementations • 28 Jan 2022 • Robin Fritsch, Roger Wattenhofer
We consider the ratio between this number and the number of matches of the overall best outcome, which may not have majority support.
1 code implementation • NeurIPS 2021 • Pál András Papp, Karolis Martinkus, Lukas Faber, Roger Wattenhofer
In DropGNNs, we execute multiple runs of a GNN on the input graph, with some of the nodes randomly and independently dropped in each of these runs.
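A minimal sketch of the run-and-average scheme (the toy GNN and all parameters are illustrative, not the paper's architecture):

```python
import numpy as np

def dropgnn_predict(gnn, features, adjacency, runs=10, p_drop=0.1, seed=0):
    """Run the same GNN several times, each time dropping every node
    independently with probability p_drop, and average the predictions."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(runs):
        keep = rng.random(len(features)) >= p_drop
        masked_x = features * keep[:, None]            # dropped nodes contribute nothing
        masked_adj = adjacency * np.outer(keep, keep)  # and lose their edges
        outputs.append(gnn(masked_x, masked_adj))
    return np.mean(outputs, axis=0)

# Toy stand-in "GNN": one round of neighbor aggregation plus mean pooling.
def toy_gnn(x, adj):
    return (adj @ x).mean(axis=0)

x = np.eye(3)                                                   # one-hot features
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
out = dropgnn_predict(toy_gnn, x, adj)
print(out.shape)  # (3,)
```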
Ranked #11 on Graph Classification on IMDb-B
5 code implementations • 6 Nov 2021 • Ard Kastrati, Martyna Beata Płomecka, Damián Pascual, Lukas Wolf, Victor Gillioz, Roger Wattenhofer, Nicolas Langer
We present a new dataset and benchmark with the goal of advancing research in the intersection of brain activities and eye movements.
1 code implementation • 17 Oct 2021 • Zai Shi, Zhao Meng, Yiran Xing, Yunpu Ma, Roger Wattenhofer
3D-RETR is capable of 3D reconstruction from a single view or multiple views.
1 code implementation • Findings (EMNLP) 2021 • Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, Roger Wattenhofer
Large pre-trained language models have repeatedly shown their ability to produce fluent text.
no code implementations • 15 Sep 2021 • Jens Hauser, Zhao Meng, Damián Pascual, Roger Wattenhofer
We combine a human evaluation of individual word substitutions and a probabilistic analysis to show that between 96% and 99% of the analyzed attacks do not preserve semantics, indicating that their success is mainly based on feeding poor data to the model.
1 code implementation • Findings (NAACL) 2022 • Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer
In this paper, we present an approach to improve the robustness of BERT language models against word substitution-based adversarial attacks by leveraging adversarial perturbations for self-supervised contrastive learning.
no code implementations • 1 Jun 2021 • Pál András Papp, Roger Wattenhofer
We first show that there can be no positive swap for any pair of banks in a static financial system, or when a shock hits each bank in the network proportionally.
no code implementations • 28 May 2021 • Lioba Heimbach, Ye Wang, Roger Wattenhofer
In this paper, we aim to understand how liquidity providers react to market information and how they benefit from providing liquidity in DEXes.
no code implementations • 21 Apr 2021 • Ye Wang, Yan Chen, Haotian Wu, Liyi Zhou, Shuiguang Deng, Roger Wattenhofer
We find that traders executed 292,606 cyclic arbitrages over eleven months, extracting more than 138 million USD in revenue.
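A cyclic arbitrage is profitable, ignoring fees and slippage, exactly when the product of the exchange rates around the cycle exceeds one. A sketch with made-up rates:

```python
import math

def cycle_profit(rates):
    """A cyclic arbitrage (e.g. ETH -> DAI -> USDC -> ETH) is profitable,
    ignoring fees and slippage, when the product of the exchange rates
    around the cycle exceeds 1."""
    return math.prod(rates) > 1

# Made-up rates: 1 ETH -> 2000 DAI -> 2020 USDC -> ~1.015 ETH.
print(cycle_profit([2000.0, 1.01, 1 / 1990]))  # True
```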
no code implementations • NAACL (BioNLP) 2021 • Damian Pascual, Sandro Luck, Roger Wattenhofer
Unlike the general trend in language processing, no transformer model has been reported to reach high performance on this task.
no code implementations • 11 Mar 2021 • Lukas Faber, Yifan Lu, Roger Wattenhofer
We find that for graph classification, a GNN is not more than the sum of its parts.
no code implementations • 25 Feb 2021 • Nikola Jovanović, Zhao Meng, Lukas Faber, Roger Wattenhofer
We study the problem of adversarially robust self-supervised learning on graphs.
1 code implementation • 12 Jan 2021 • Sumu Zhao, Damian Pascual, Gino Brunner, Roger Wattenhofer
In this work we provide new insights into the transformer architecture, and in particular, its best-known variant, BERT.
1 code implementation • ACL 2021 • Yiran Xing, Zai Shi, Zhao Meng, Gerhard Lakemeyer, Yunpu Ma, Roger Wattenhofer
We present Knowledge Enhanced Multimodal BART (KM-BART), which is a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts.
1 code implementation • 1 Jan 2021 • Johannes Ackermann, Oliver Paul Richter, Roger Wattenhofer
We show the generality of our approach by evaluating on simple discrete and continuous control tasks, as well as complex bipedal walker tasks and Atari games.
1 code implementation • 31 Dec 2020 • Damian Pascual, Beni Egressy, Florian Bolli, Roger Wattenhofer
Given that state-of-the-art language models are too large to be trained from scratch in a manageable time, it is desirable to control these models without re-training them.
1 code implementation • 26 Oct 2020 • Lukas Faber, Amin K. Moghaddam, Roger Wattenhofer
Graph Neural Networks achieve remarkable results on problems with structured data but come as black-box predictors.
no code implementations • NeurIPS Workshop LMCA 2020 • Jorel Elmiger, Lukas Faber, Pankaj Khanchandani, Oliver Paul Richter, Roger Wattenhofer
Given that there are quadratically many possible edges in a graph and every subset of edges is a candidate solution, this yields infeasibly large search spaces even for a few nodes.
no code implementations • COLING 2020 • Zhao Meng, Roger Wattenhofer
Generating adversarial examples for natural language is hard, as natural language consists of discrete symbols, and examples are often of variable lengths.
1 code implementation • 10 Sep 2020 • Nicolas Affolter, Beni Egressy, Damian Pascual, Roger Wattenhofer
In the case of language stimuli, recent studies have shown that it is possible to decode fMRI scans into an embedding of the word a subject is reading.
no code implementations • 25 Aug 2020 • Lukas Faber, Sandro Luck, Damian Pascual, Andreas Roth, Gino Brunner, Roger Wattenhofer
The automatic generation of medleys, i.e., musical pieces formed by different songs concatenated via smooth transitions, is not well studied in the current literature.
2 code implementations • 19 May 2020 • Oliver Richter, Roger Wattenhofer
Attention architectures are widely used; they recently gained renewed popularity with Transformers, which have yielded a streak of state-of-the-art results.
no code implementations • 15 Apr 2020 • Lukas Faber, Roger Wattenhofer
Standard Neural Networks can learn mathematical operations, but they do not extrapolate.
no code implementations • EACL 2021 • Damian Pascual, Gino Brunner, Roger Wattenhofer
This way, we propose a distinction between local patterns revealed by attention and global patterns that refer back to the input, and analyze BERT from both angles.
no code implementations • 23 Oct 2019 • Georgia Avarikioti, Eleftherios Kokoris-Kogias, Roger Wattenhofer
Sharding distributed ledgers is a promising on-chain solution for scaling blockchains but lacks formal grounds, nurturing skepticism on whether such complex systems can scale blockchains securely.
Distributed, Parallel, and Cluster Computing
no code implementations • 25 Sep 2019 • Julian Zilly, Hannes Zilly, Oliver Richter, Roger Wattenhofer, Andrea Censi, Emilio Frazzoli
Empirically across several data domains, we substantiate this viewpoint by showing that test performance correlates strongly with the distance in data distributions between training and test set.
1 code implementation • 24 Sep 2019 • Jakub Sliwinski, Roger Wattenhofer
There is a preconception that a blockchain needs consensus.
Cryptography and Security
no code implementations • ICLR 2020 • Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer
We show that, for sequences longer than the attention head dimension, attention weights are not identifiable.
1 code implementation • 22 Jul 2019 • Damian Pascual, Amir Aminifar, David Atienza, Philippe Ryvlin, Roger Wattenhofer
In this work, we generate synthetic seizure-like brain electrical activities, i.e., EEG signals, that can be used to train seizure detection algorithms, alleviating the need for recorded data.
1 code implementation • 5 Jul 2019 • Timo Bram, Gino Brunner, Oliver Richter, Roger Wattenhofer
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting.
no code implementations • 27 Jun 2019 • Oliver Richter, Roger Wattenhofer
Policy gradient based reinforcement learning algorithms coupled with neural networks have shown success in learning complex policies in the model free continuous action space control setting.
1 code implementation • 30 Sep 2018 • Gino Brunner, Manuel Fritsche, Oliver Richter, Roger Wattenhofer
Learning in sparse reward settings remains a challenge in Reinforcement Learning, which is often addressed by using intrinsic rewards.
1 code implementation • 21 Sep 2018 • Gino Brunner, Bence Szebedy, Simon Tanner, Roger Wattenhofer
The drop-off location could, e.g., be on a balcony or porch, and simply needs to be indicated by a visual marker on the wall or window.
Robotics; Systems and Control
no code implementations • 20 Sep 2018 • Gino Brunner, Andres Konrad, Yuyi Wang, Roger Wattenhofer
The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces.
5 code implementations • 20 Sep 2018 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Sumu Zhao
In this paper we apply such a model to symbolic music and show the feasibility of our approach for music genre transfer.
no code implementations • 18 Jan 2018 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Michael Weigelt
We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations.
1 code implementation • 21 Nov 2017 • Gino Brunner, Yuyi Wang, Roger Wattenhofer, Jonas Wiesendanger
First, a chord LSTM predicts a chord progression based on a chord embedding.
1 code implementation • 20 Nov 2017 • Gino Brunner, Oliver Richter, Yuyi Wang, Roger Wattenhofer
Localization and navigation is also an important problem in domains such as robotics, and has recently become a focus of the deep reinforcement learning community.