Search Results for author: Aurelie Lozano

Found 8 papers, 4 papers with code

Protein Representation Learning by Geometric Structure Pretraining

2 code implementations • 11 Mar 2022 • Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, Jian Tang

Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures, which are available only in smaller numbers, has not been explored for protein property prediction, even though protein structures are known to be determinants of protein function.

Contrastive Learning • Property Prediction • +1

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation • 19 Oct 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory to the new formal robustness guarantee, and the resulting robustness estimate is called the second-order CLEVER score.
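The core idea behind CLEVER-style estimation is to bound a network's local Lipschitz constant by sampling gradient norms around an input and fitting a reverse Weibull distribution to their batch maxima (the extreme-value step). The sketch below is a hypothetical first-order illustration of that recipe on a generic differentiable function, not the authors' implementation; the function names, sampling scheme, and the use of the fitted location parameter as the Lipschitz estimate are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import weibull_max


def clever_style_score(f, grad_f, x0, radius=0.5, n_batches=50, batch_size=128, seed=0):
    """Hypothetical sketch of a first-order CLEVER-style robustness score:
    estimate the local Lipschitz constant of f near x0 via extreme value
    theory, then divide the margin f(x0) by that estimate."""
    rng = np.random.default_rng(seed)
    d = x0.size
    maxima = []
    for _ in range(n_batches):
        # sample points uniformly in a ball of the given radius around x0
        dirs = rng.normal(size=(batch_size, d))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        r = radius * rng.random((batch_size, 1)) ** (1.0 / d)
        pts = x0 + r * dirs
        # record the largest gradient norm seen in this batch
        norms = np.array([np.linalg.norm(grad_f(p)) for p in pts])
        maxima.append(norms.max())
    # fit a reverse Weibull to the batch maxima; its location parameter
    # serves as the estimate of the maximum gradient norm (local Lipschitz constant)
    c, loc, scale = weibull_max.fit(np.array(maxima))
    lipschitz = loc
    return f(x0) / lipschitz
```

A second-order variant, as in the paper, would apply the same extreme-value fit to curvature (Hessian-based) quantities instead of first-order gradient norms.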

AlphaFold Distillation for Protein Design

1 code implementation • 5 Oct 2022 • Igor Melnyk, Aurelie Lozano, Payel Das, Vijil Chenthamarakshan

This model can then be used as a structure consistency regularizer in training the inverse folding model.

Drug Discovery • Knowledge Distillation • +3

Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World

1 code implementation • 22 Jan 2017 • Sahil Garg, Irina Rish, Guillermo Cecchi, Aurelie Lozano

In this paper, we focus on online representation learning in non-stationary environments, which may require continuous adaptation of the model architecture.
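A neurogenesis-inspired scheme adapts the model architecture itself: alongside the usual online dictionary update, it "births" a new dictionary atom when the current dictionary reconstructs an incoming sample poorly. The sketch below is a hypothetical, simplified single-step illustration of that idea; the soft-threshold stand-in for sparse coding, the error threshold, and the residual-as-new-atom rule are assumptions chosen for brevity, not the paper's algorithm.

```python
import numpy as np


def neurogenetic_dictionary_step(D, x, lr=0.1, err_thresh=0.5):
    """Hypothetical sketch of one step of neurogenesis-inspired online
    dictionary learning: update dictionary D on sample x, and add a new
    atom (column) from the residual when reconstruction error is high."""
    # sparse code via least squares + soft threshold (stand-in for a lasso solve)
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    code = np.sign(code) * np.maximum(np.abs(code) - 0.05, 0.0)
    residual = x - D @ code
    # online gradient step on the dictionary, then renormalize the atoms
    D = D + lr * np.outer(residual, code)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)
    # "neurogenesis": add the normalized residual as a new atom if error is high
    if np.linalg.norm(residual) > err_thresh * np.linalg.norm(x):
        D = np.column_stack([D, residual / np.linalg.norm(residual)])
    return D
```

Growing atoms this way lets the dictionary track a drifting data distribution; a complete version would also prune atoms that fall out of use, mirroring neuronal death.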

Dictionary Learning • Hippocampus • +2

Benchmarking deep generative models for diverse antibody sequence design

no code implementations • 12 Nov 2021 • Igor Melnyk, Payel Das, Vijil Chenthamarakshan, Aurelie Lozano

Here we consider three recently proposed deep generative frameworks for protein design: (AR) a sequence-based autoregressive generative model; (GVP) a precise structure-based graph neural network; and Fold2Seq, which leverages a fuzzy, scale-free representation of a three-dimensional fold while enforcing structure-to-sequence (and vice versa) consistency.

Benchmarking • Protein Design

NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models

no code implementations • 28 Feb 2024 • Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurelie Lozano, Payel Das, Georgios Kollias

Transformer-based Language Models have become ubiquitous in Natural Language Processing (NLP) due to their impressive performance on various tasks.

Machine Translation • Natural Language Inference
