no code implementations • 26 Mar 2024 • Henry Kenlay, Frédéric A. Dreyer, Aleksandr Kovaltsuk, Dom Miketa, Douglas Pires, Charlotte M. Deane
Antibodies are proteins produced by the immune system that can identify and neutralise a wide variety of antigens with high specificity and affinity, and constitute the most successful class of biotherapeutics.
no code implementations • 30 Oct 2023 • Frédéric A. Dreyer, Daniel Cutting, Constantin Schneider, Henry Kenlay, Charlotte M. Deane
We consider the problem of antibody sequence design given 3D structural information.
1 code implementation • 20 Jun 2023 • Pierre Osselin, Henry Kenlay, Xiaowen Dong
Certifying the robustness of a graph-based machine learning model poses a critical challenge for safety.
no code implementations • 29 Mar 2022 • Deborah Sulem, Henry Kenlay, Mihai Cucuringu, Xiaowen Dong
The main novelty of our method is the use of a siamese graph neural network architecture to learn a data-driven graph similarity function, which allows the current graph to be compared effectively with its recent history.
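The core idea of a siamese architecture is that one shared encoder embeds both graphs, and a similarity score is computed between the two embeddings. The sketch below is a minimal numpy illustration of that pattern (a one-layer GCN encoder with mean pooling and a negative-distance score); it is not the paper's architecture, and the function names are hypothetical:

```python
import numpy as np

def gcn_embed(adj, feats, weight):
    """One-layer GCN: add self-loops, symmetric normalisation, ReLU,
    then mean-pool node embeddings into a single graph embedding."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    h = np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)
    return h.mean(axis=0)

def siamese_similarity(adj_a, adj_b, feats, weight):
    """Embed both graphs with the SAME weights (the siamese property);
    score similarity as negative Euclidean distance between embeddings."""
    z_a = gcn_embed(adj_a, feats, weight)
    z_b = gcn_embed(adj_b, feats, weight)
    return -np.linalg.norm(z_a - z_b)
```

Because the weights are shared, identical graphs score the maximum similarity of 0, and the score decreases as the two graphs' embeddings drift apart, which is what makes the function usable for detecting distribution change over a graph's history.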
1 code implementation • NeurIPS 2021 • Xingchen Wan, Henry Kenlay, Robin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong
While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.
1 code implementation • 23 Nov 2021 • Emanuele Rossi, Henry Kenlay, Maria I. Gorinova, Benjamin Paul Chamberlain, Xiaowen Dong, Michael Bronstein
While Graph Neural Networks (GNNs) have recently become the de facto standard for modeling relational data, they impose a strong assumption on the availability of the node or edge features of the graph.
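One simple way to relax the assumption that all node features are available is to impute missing entries by diffusing the observed ones along the graph's edges. The sketch below is a toy numpy version of that diffusion-style imputation, with hypothetical names, and is not claimed to be the paper's exact algorithm:

```python
import numpy as np

def propagate_features(adj, feats, known_mask, n_iters=40):
    """Impute missing node features by repeated neighbour averaging,
    re-clamping the observed entries to their known values each step."""
    d_inv = 1.0 / np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    x = np.where(known_mask, feats, 0.0)   # initialise unknowns at zero
    for _ in range(n_iters):
        x = d_inv * (adj @ x)              # average over neighbours
        x = np.where(known_mask, feats, x) # keep observed values fixed
    return x
```

On a path graph with features observed only at the endpoints, the middle node converges to the average of its two neighbours, illustrating how structural information can substitute for missing feature information.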
no code implementations • ICML Workshop AML 2021 • Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong
Graph neural networks have been shown to be vulnerable to adversarial attacks.
no code implementations • 18 Feb 2021 • Henry Kenlay, Dorina Thanou, Xiaowen Dong
In this paper, we study filter stability and provide a novel and interpretable upper bound on the change of filter output, where the bound is expressed in terms of the endpoint degrees of the deleted and newly added edges, as well as the spatial proximity of those edges.
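The quantity being bounded, the change in a graph filter's output when edges are perturbed, can be probed numerically. The sketch below is a toy illustration (not the paper's bound or notation): it applies a polynomial filter in the unnormalised graph Laplacian to a signal, so the output change from deleting an edge can be measured directly:

```python
import numpy as np

def poly_filter_output(adj, signal, coeffs):
    """Apply the polynomial spectral filter sum_k coeffs[k] * L^k
    to a graph signal, where L = D - A is the graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    out = np.zeros_like(signal)
    power = np.eye(adj.shape[0])  # running L^k, starting at L^0 = I
    for c in coeffs:
        out = out + c * (power @ signal)
        power = power @ lap
    return out
```

Comparing `poly_filter_output` on a graph before and after removing an edge gives the filter-output change whose norm the paper bounds in terms of the endpoint degrees and spatial proximity of the perturbed edges.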
no code implementations • ICLR Workshop GTRL 2021 • Henry Kenlay, Dorina Thanou, Xiaowen Dong
Graph neural networks are experiencing a surge of popularity within the machine learning community due to their ability to adapt to non-Euclidean domains and instil inductive biases.