Search Results for author: Joshua Meier

Found 7 papers, 4 papers with code

Language models enable zero-shot prediction of the effects of mutations on protein function

1 code implementation NeurIPS 2021 Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu, Alex Rives

Modeling the effect of sequence variation on function is a fundamental problem for understanding and designing proteins.
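The paper above scores variants zero-shot with a masked protein language model. As a minimal illustrative sketch, one common rule is a masked-marginal log-odds: mask the mutated position, then compare the model's probability of the mutant residue to that of the wild type. The probability table below is toy data standing in for real model output, and `mutation_score` is a hypothetical helper, not the authors' code.

```python
import math

# Hypothetical per-position amino-acid probabilities from a masked
# protein language model (toy values, NOT real model output).
# probs[i][aa] = p(residue aa at masked position i | rest of sequence)
probs = {
    3: {"A": 0.60, "V": 0.25, "G": 0.15},
}

def mutation_score(probs, pos, wt_aa, mut_aa):
    """Log-odds score of a point mutation under masked-LM marginals.

    Positive scores mean the model prefers the mutant residue over
    the wild type at that position.
    """
    p = probs[pos]
    return math.log(p[mut_aa]) - math.log(p[wt_aa])

score = mutation_score(probs, 3, "A", "V")  # negative: model prefers wild type
```

For multi-mutant variants, scores at each mutated position are typically summed under an independence assumption.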

MSA Transformer

1 code implementation 13 Feb 2021 Roshan Rao, Jason Liu, Robert Verkuil, Joshua Meier, John F. Canny, Pieter Abbeel, Tom Sercu, Alexander Rives


Unsupervised protein language models trained across millions of diverse sequences learn structure and function of proteins.

Masked Language Modeling, Multiple Sequence Alignment, +1

Neural Potts Model

no code implementations 1 Jan 2021 Tom Sercu, Robert Verkuil, Joshua Meier, Brandon Amos, Zeming Lin, Caroline Chen, Jason Liu, Yann LeCun, Alexander Rives

We propose the Neural Potts Model objective as an amortized optimization problem.
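For context on the objective being amortized: a Potts model assigns each sequence an energy from per-site fields and pairwise couplings, E(x) = -Σᵢ hᵢ(xᵢ) - Σᵢ<ⱼ Jᵢⱼ(xᵢ, xⱼ). The sketch below is only the standard Potts energy with toy random parameters, not the paper's neural amortization; shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q = 5, 4                            # toy sequence length and alphabet size
h = rng.normal(size=(L, q))            # per-site fields h_i(a)
J = rng.normal(size=(L, L, q, q))      # pairwise couplings J_ij(a, b)
J = (J + J.transpose(1, 0, 3, 2)) / 2  # enforce J_ij(a, b) == J_ji(b, a)

def potts_energy(seq, h, J):
    """Potts energy E(x) = -sum_i h_i(x_i) - sum_{i<j} J_ij(x_i, x_j).

    Lower energy = higher (unnormalized) probability of the sequence.
    """
    e = -sum(h[i, a] for i, a in enumerate(seq))
    e -= sum(J[i, j, seq[i], seq[j]]
             for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return e

energy = potts_energy([0, 1, 2, 3, 0], h, J)
```

Fitting h and J per protein family is the usual (expensive) optimization; the paper's contribution is amortizing that fit with a neural network.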

Transformer protein language models are unsupervised structure learners

no code implementations ICLR 2021 Roshan Rao, Joshua Meier, Tom Sercu, Sergey Ovchinnikov, Alexander Rives

Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design.

Language Modelling
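A small sketch related to the entry above: unsupervised contact maps are often cleaned with the average product correction (APC), which subtracts background co-variation from a symmetric L x L map. The attention matrix here is random toy data, and `apc` is an illustrative helper under that standard formula, not the paper's pipeline.

```python
import numpy as np

def apc(F):
    """Average product correction for a symmetric L x L score map:
    F_ij - (row_i * col_j) / total, removing background signal."""
    row = F.sum(axis=0, keepdims=True)   # shape (1, L)
    col = F.sum(axis=1, keepdims=True)   # shape (L, 1)
    return F - row * col / F.sum()

# Toy symmetrized map standing in for a transformer attention head's
# scores over a 6-residue protein (illustrative, not real output).
rng = np.random.default_rng(1)
A = rng.random((6, 6))
A = (A + A.T) / 2     # symmetrize: contacts are undirected
C = apc(A)            # corrected map; large entries suggest contacts
```

Symmetrizing first keeps the corrected map symmetric, matching the undirected nature of residue-residue contacts.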
