no code implementations • 21 Oct 2024 • Isaac L. Huidobro-Meezs, Jun Dai, Guillaume Rabusseau, Rodrigo A. Vargas-Hernández

Quantum computing presents a promising alternative for the direct simulation of quantum systems with the potential to explore chemical problems beyond the capabilities of classical methods.

no code implementations • 17 Jul 2024 • Shenyang Huang, Farimah Poursafaei, Reihaneh Rabbany, Guillaume Rabusseau, Emanuele Rossi

In this paper, we introduce Unified Temporal Graph (UTG), a framework that unifies snapshot-based and event-based machine learning models under a single umbrella, enabling models developed for one representation to be applied effectively to datasets of the other.

1 code implementation • 10 Jul 2024 • Marawan Gamal Abdel Hameed, Aristides Milios, Siva Reddy, Guillaume Rabusseau

However, existing methods such as adapters, prompt tuning or low-rank adaptation (LoRA) either introduce latency overhead at inference time or achieve subpar downstream performance compared with full fine-tuning.

Tasks: Natural Language Understanding, parameter-efficient fine-tuning (+1)

2 code implementations • 14 Jun 2024 • Julia Gastinger, Shenyang Huang, Mikhail Galkin, Erfan Loghmani, Ali Parviz, Farimah Poursafaei, Jacob Danovitch, Emanuele Rossi, Ioannis Koutis, Heiner Stuckenschmidt, Reihaneh Rabbany, Guillaume Rabusseau

To address these challenges, we introduce Temporal Graph Benchmark 2.0 (TGB 2.0), a novel benchmarking framework tailored for evaluating methods for predicting future links on Temporal Knowledge Graphs and Temporal Heterogeneous Graphs with a focus on large-scale datasets, extending the Temporal Graph Benchmark.

1 code implementation • 14 Jun 2024 • Razieh Shirzadkhani, Tran Gia Bao Ngo, Kiarash Shamsi, Shenyang Huang, Farimah Poursafaei, Poupak Azad, Reihaneh Rabbany, Baris Coskunuzer, Guillaume Rabusseau, Cuneyt Gurcan Akcora

Next, we evaluate the transferability of Temporal Graph Neural Networks (TGNNs) for the temporal graph property prediction task by pre-training on a collection of up to sixty-four token transaction networks and then evaluating the downstream performance on twenty unseen token networks.

1 code implementation • 7 Jun 2024 • Maude Lizaire, Michael Rizvi-Martel, Marawan Gamal Abdel Hameed, Guillaume Rabusseau

Second-order Recurrent Neural Networks (2RNNs) extend RNNs by leveraging second-order interactions for sequence modelling.
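The second-order interaction can be sketched concretely: instead of the additive update of a vanilla RNN, a 2RNN contracts a third-order tensor against both the hidden state and the input. A minimal numpy illustration (the dimensions and random parameters are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical dimensions: hidden size d, input size p.
d, p = 4, 3
rng = np.random.default_rng(0)

# A 2RNN replaces the additive update W h + U x with a bilinear one:
# the next hidden state contracts a third-order tensor A of shape
# (d, p, d) against the current hidden state h and the input x.
A = rng.normal(size=(d, p, d))

def second_order_step(h, x, A):
    # h_next[k] = sum_{i,j} A[i, j, k] * h[i] * x[j]
    return np.einsum('ijk,i,j->k', A, h, x)

h = rng.normal(size=d)
xs = rng.normal(size=(5, p))  # a sequence of 5 input vectors
for x in xs:
    h = second_order_step(h, x, A)
print(h.shape)  # (4,)
```

The multiplicative coupling of `h` and `x` is what "second-order" refers to: each entry of the next state depends on products h[i]·x[j], not on h and x separately.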

no code implementations • 12 Mar 2024 • Michael Rizvi, Maude Lizaire, Clara Lacroce, Guillaume Rabusseau

Recent work has shown that these models can compactly simulate the sequential reasoning abilities of deterministic finite automata (DFAs).

no code implementations • 31 Oct 2023 • Alex Meiburg, Jing Chen, Jacob Miller, Raphaëlle Tihon, Guillaume Rabusseau, Alejandro Perdomo-Ortiz

Beyond their origin in modeling many-body quantum systems, tensor networks have emerged as a promising class of models for solving machine learning problems, notably in unsupervised generative learning.

1 code implementation • 6 Oct 2023 • Dominique Beaini, Shenyang Huang, Joao Alex Cunha, Zhiyi Li, Gabriela Moisescu-Pareja, Oleksandr Dymov, Samuel Maddrell-Mander, Callum McLean, Frederik Wenkel, Luis Müller, Jama Hussein Mohamud, Ali Parviz, Michael Craig, Michał Koziarski, Jiarui Lu, Zhaocheng Zhu, Cristian Gabellini, Kerstin Klaser, Josef Dean, Cas Wognum, Maciej Sypetkowski, Guillaume Rabusseau, Reihaneh Rabbany, Jian Tang, Christopher Morris, Ioannis Koutis, Mirco Ravanelli, Guy Wolf, Prudencio Tossou, Hadrien Mary, Therence Bois, Andrew Fitzgibbon, Błażej Banaszewski, Chad Martin, Dominic Masters

Recently, pre-trained foundation models have enabled significant advancements in multiple fields.

5 code implementations • NeurIPS 2023 • Shenyang Huang, Farimah Poursafaei, Jacob Danovitch, Matthias Fey, Weihua Hu, Emanuele Rossi, Jure Leskovec, Michael Bronstein, Guillaume Rabusseau, Reihaneh Rabbany

We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs.

2 code implementations • 15 May 2023 • Shenyang Huang, Jacob Danovitch, Guillaume Rabusseau, Reihaneh Rabbany

Current solutions do not scale well to large real-world graphs, lack robustness to large amounts of node additions/deletions, and overlook changes in node attributes.

2 code implementations • 2 Feb 2023 • Shenyang Huang, Samy Coulombe, Yasmeen Hitti, Reihaneh Rabbany, Guillaume Rabusseau

how to capture temporal dependencies.

1 code implementation • 4 Nov 2022 • Kaiwen Hou, Guillaume Rabusseau

Various forms of regularization in learning tasks strive for different notions of simplicity.

no code implementations • 8 Jun 2022 • Tianyu Li, Bogdan Mazoure, Guillaume Rabusseau

Although WFAs have been extended to deal with continuous input data, namely continuous WFAs (CWFAs), it is still unclear how to approximate density functions over sequences of continuous random variables using WFA-based models, due to limitations on both the expressiveness of the model and the tractability of approximating density functions via CWFAs.

no code implementations • 24 May 2022 • Chenqing Hua, Guillaume Rabusseau, Jian Tang

Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data.

no code implementations • NeurIPS 2021 • Behnoush Khavari, Guillaume Rabusseau

These results are used to derive a generalization bound which can be applied to classification with low rank matrices as well as linear classifiers based on any of the commonly used tensor decomposition models.

no code implementations • 26 Oct 2021 • Beheshteh T. Rakhshan, Guillaume Rabusseau

Random projections (RPs) have recently emerged as popular techniques in the machine learning community for their ability to reduce the dimension of very high-dimensional tensors.

no code implementations • 5 Jun 2021 • Clara Lacroce, Prakash Panangaden, Guillaume Rabusseau

The objective is to obtain a weighted finite automaton (WFA) that fits within a given size constraint and which mimics the behaviour of the original model while minimizing some notion of distance between the black box and the extracted WFA.

no code implementations • 13 Jan 2021 • Greta Laage, Emma Frejinger, Andrea Lodi, Guillaume Rabusseau

This is a challenging problem as it corresponds to the difference between the generated value and the value that would have been generated keeping the system as before.

no code implementations • 20 Oct 2020 • Siddarth Srinivasan, Sandesh Adhikary, Jacob Miller, Guillaume Rabusseau, Byron Boots

We address this gap by showing how stationary or uniform versions of popular quantum tensor network models have equivalent representations in the stochastic processes and weighted automata literature, in the limit of infinitely long sequences.

no code implementations • 19 Oct 2020 • Tianyu Li, Doina Precup, Guillaume Rabusseau

In this paper, we present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks, which encompass a set of optimization techniques for high-order tensors used in quantum physics and numerical analysis.

2 code implementations • 7 Oct 2020 • Thang Doan, Mehdi Bennani, Bogdan Mazoure, Guillaume Rabusseau, Pierre Alquier

Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data during its entire lifetime.

no code implementations • 12 Aug 2020 • Meraj Hashemizadeh, Michelle Liu, Jacob Miller, Guillaume Rabusseau

However, identifying the best tensor network structure from data for a given task is challenging.

1 code implementation • 2 Jul 2020 • Shenyang Huang, Yasmeen Hitti, Guillaume Rabusseau, Reihaneh Rabbany

To solve the above challenges, we propose Laplacian Anomaly Detection (LAD) which uses the spectrum of the Laplacian matrix of the graph structure at each snapshot to obtain low dimensional embeddings.
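The core idea can be sketched in a few lines of numpy: embed each snapshot by the leading eigenvalues of its Laplacian, then score a snapshot by how far its embedding drifts from recent history. The function names, the toy graphs, and the simple distance-based score below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def laplacian_spectrum_embedding(adj, k=2):
    """Top-k eigenvalues of the unnormalized Laplacian L = D - A
    as a low-dimensional embedding of one graph snapshot."""
    lap = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.linalg.eigvalsh(lap)  # sorted ascending
    return eigvals[-k:]

# Toy stream: two identical path graphs, then a fully connected snapshot
# (a structural change the spectrum should pick up).
a1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
a2 = a1.copy()
a3 = np.ones((3, 3)) - np.eye(3)

embs = [laplacian_spectrum_embedding(a) for a in (a1, a2, a3)]

# Illustrative anomaly score: distance of the current embedding
# from the mean of the previous snapshots' embeddings.
score = np.linalg.norm(embs[2] - np.mean(embs[:2], axis=0))
print(score > 0)  # True: the changed snapshot stands out
```

Because only eigenvalues (not node correspondences) are compared, this kind of embedding is insensitive to node relabeling across snapshots, which is part of what makes a spectral view attractive for dynamic graphs.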

no code implementations • 11 Mar 2020 • Beheshteh T. Rakhshan, Guillaume Rabusseau

We introduce a novel random projection technique for efficiently reducing the dimension of very high-dimensional tensors.

no code implementations • 2 Mar 2020 • Stefano Alletto, Shenyang Huang, Vincent Francois-Lavet, Yohei Nakata, Guillaume Rabusseau

Almost all neural architecture search methods are evaluated in terms of the performance (i.e., test accuracy) of the model structures they find.

1 code implementation • 2 Mar 2020 • Jacob Miller, Guillaume Rabusseau, John Terilla

Tensor networks are a powerful modeling framework developed for computational many-body physics, which have only recently been applied within machine learning.

no code implementations • 7 Feb 2020 • Bogdan Mazoure, Thang Doan, Tianyu Li, Vladimir Makarenkov, Joelle Pineau, Doina Precup, Guillaume Rabusseau

We propose a general framework for policy representation for reinforcement learning tasks.

no code implementations • 12 Nov 2019 • Tianyu Li, Bogdan Mazoure, Doina Precup, Guillaume Rabusseau

Learning and planning in partially-observable domains is one of the most difficult problems in reinforcement learning.

no code implementations • 14 Sep 2019 • Shenyang Huang, Vincent François-Lavet, Guillaume Rabusseau

To understand how to expand a continual learner, we focus on the neural architecture design problem in the context of class-incremental learning: at each time step, the learner must optimize its performance on all classes observed so far by selecting the most competitive neural architecture.

1 code implementation • 18 Dec 2018 • Kian Kenyon-Dean, Andre Cianflone, Lucas Page-Caccia, Guillaume Rabusseau, Jackie Chi Kit Cheung, Doina Precup

The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective.

1 code implementation • NeurIPS 2017 • Matteo Ruffini, Guillaume Rabusseau, Borja Balle

Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models.

no code implementations • ICLR 2018 • Eric Crawford, Guillaume Rabusseau, Joelle Pineau

Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive.

no code implementations • 4 Jul 2018 • Guillaume Rabusseau, Tianyu Li, Doina Precup

In this paper, we unravel a fundamental connection between weighted finite automata (WFAs) and second-order recurrent neural networks (2-RNNs): in the case of sequences of discrete symbols, WFAs and 2-RNNs with linear activation functions are expressively equivalent.
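The equivalence can be checked numerically on a small example. A WFA computes f(s_1…s_n) = αᵀ A[s_1]⋯A[s_n] ω; folding each transition matrix into a third-order tensor gives a 2-RNN with linear activation that computes the same function on one-hot inputs. The dimensions and random parameters below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3  # number of WFA states

# WFA over a binary alphabet: initial vector, final vector,
# and one transition matrix per symbol.
alpha = rng.normal(size=d)
omega = rng.normal(size=d)
A = rng.normal(size=(2, d, d))

def wfa(word):
    v = alpha.copy()
    for s in word:
        v = v @ A[s]
    return v @ omega

def linear_2rnn(word):
    # Third-order tensor of the 2-RNN: T[i, s, k] = A[s][i, k].
    T = A.transpose(1, 0, 2)
    h = alpha.copy()
    for s in word:
        x = np.eye(2)[s]  # one-hot encoding of the symbol
        # Linear activation: h_next[k] = sum_{i,s} T[i, s, k] h[i] x[s]
        h = np.einsum('isk,i,s->k', T, h, x)
    return h @ omega

word = [0, 1, 1, 0]
print(np.allclose(wfa(word), linear_2rnn(word)))  # True
```

With a one-hot input, the bilinear contraction selects exactly the transition matrix of the current symbol, which is why linearity of the activation is what makes the two models coincide.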

no code implementations • 21 Jun 2018 • Philip Amortila, Guillaume Rabusseau

Graph Weighted Models (GWMs) have recently been proposed as a natural generalization of weighted automata over strings and trees to arbitrary families of labeled graphs (and hypergraphs).

2 code implementations • 27 Dec 2017 • Xingwei Cao, Guillaume Rabusseau

We evaluate the compressive and regularization performances of the proposed model with both deep and shallow convolutional neural networks.

no code implementations • NeurIPS 2017 • Guillaume Rabusseau, Borja Balle, Joelle Pineau

We first present a natural notion of relatedness between WFAs by considering the extent to which several WFAs can share a common underlying representation.

no code implementations • 22 Sep 2017 • Vincent Francois-Lavet, Guillaume Rabusseau, Joelle Pineau, Damien Ernst, Raphael Fonteneau

This paper provides an analysis of the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data) in the context of reinforcement learning with partial observability.

no code implementations • 13 Sep 2017 • Tianyu Li, Guillaume Rabusseau, Doina Precup

Weighted finite automata (WFA) can expressively model functions defined over strings but are inherently linear models.

no code implementations • NeurIPS 2016 • Guillaume Rabusseau, Hachem Kadri

This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure.
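To illustrate the kind of low-rank structure such a regression method exploits, here is a minimal reduced-rank regression sketch for the matrix-output special case: ordinary least squares followed by SVD truncation. This is a generic stand-in under assumed data, not the HOLRR algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p, r = 200, 10, 8, 2  # samples, input dim, output dim, target rank

# Synthetic data generated from a rank-2 regression weight matrix.
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, r)) @ rng.normal(size=(r, p))
Y = X @ W_true + 0.01 * rng.normal(size=(n, p))

# Ordinary least squares, then project onto the rank-r matrices via SVD.
W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(W_ols, full_matrices=False)
W_rr = (U[:, :r] * s[:r]) @ Vt[:r]

print(np.linalg.matrix_rank(W_rr))  # 2
err = np.linalg.norm(W_rr - W_true) / np.linalg.norm(W_true)
print(err < 0.1)  # True: low noise, so the low-rank fit is close
```

HOLRR generalizes this picture from a low-rank weight matrix to tensor-structured outputs, where the constraint lives on a higher-order weight tensor rather than on a matrix.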

no code implementations • 4 Nov 2015 • Guillaume Rabusseau, Borja Balle, Shay B. Cohen

We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.

no code implementations • 17 Mar 2014 • Guillaume Rabusseau, François Denis

Building upon a recent paper on tensor decompositions for learning latent variable models, we extend this work to the broader setting of tensors having a symmetric decomposition with positive and negative weights.

Papers With Code is a free resource with all data licensed under CC-BY-SA.