One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating.
Protein secondary structure (SS) prediction is important for studying protein structure and function.
Here we present a new supervised generative stochastic network (GSN) based method to predict local secondary structure with deep hierarchical representations.
Despite decades of development, even the most sophisticated ab initio SS predictors have not reached the theoretical limit of three-state prediction accuracy (88–90%), and only a few predict more than the three traditional classes of Helix, Strand and Coil.
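As a point of reference for the three-state accuracy mentioned above, Q3 is computed after reducing the eight DSSP secondary-structure codes to the three traditional classes. The sketch below shows one common reduction convention (papers differ slightly in how G, I and B are assigned); the function name and example sequences are illustrative, not from any specific predictor.

```python
# One common DSSP 8-state -> 3-state reduction used when reporting Q3.
# The exact convention varies slightly between papers.
EIGHT_TO_THREE = {
    "H": "H", "G": "H", "I": "H",   # alpha/3-10/pi helices -> Helix
    "E": "E", "B": "E",             # extended strand / bridge -> Strand
    "T": "C", "S": "C", "C": "C",   # turn / bend / other -> Coil
}

def q3_accuracy(pred, true):
    """Fraction of residues whose reduced 3-state labels match."""
    assert len(pred) == len(true)
    p3 = [EIGHT_TO_THREE[s] for s in pred]
    t3 = [EIGHT_TO_THREE[s] for s in true]
    return sum(a == b for a, b in zip(p3, t3)) / len(t3)

print(q3_accuracy("HHEETC", "HHEESC"))  # T and S both reduce to Coil -> 1.0
```

Reporting Q8 instead of Q3 skips this reduction, which is why predicting all eight classes is the harder task alluded to above.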
Motivation: Although secondary structure predictors have been developed for decades, current ab initio methods still have some way to go before reaching their theoretical limits.
In the spirit of reproducible research we make our data, models and code available, aiming to set a gold standard for purity of training and testing sets.
Inspired by the recent successes of deep neural networks, in this paper we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features.
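To make "integrated local and global contextual features" concrete, the sketch below builds per-residue inputs that concatenate a local sliding window of one-hot amino-acid encodings with a chain-wide composition vector, as such a network might consume. This is a hedged illustration under assumed choices (window size 7, plain one-hot encoding), not the paper's actual feature pipeline.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"           # 20 standard amino acids
IDX = {a: i for i, a in enumerate(AMINO)}

def one_hot(seq):
    """One-hot encode a protein sequence: shape (L, 20)."""
    x = np.zeros((len(seq), len(AMINO)))
    for i, a in enumerate(seq):
        x[i, IDX[a]] = 1.0
    return x

def local_global_features(seq, window=7):
    """Per-residue features: a local window plus a global composition vector."""
    x = one_hot(seq)                                  # (L, 20)
    half = window // 2
    padded = np.pad(x, ((half, half), (0, 0)))        # zero-pad chain ends
    local = np.stack([padded[i:i + window].ravel()    # (L, window*20)
                      for i in range(len(seq))])
    global_comp = x.mean(axis=0)                      # amino-acid composition
    globals_tiled = np.tile(global_comp, (len(seq), 1))
    return np.concatenate([local, globals_tiled], axis=1)  # (L, window*20 + 20)

feats = local_global_features("MKTAYIAKQR")
print(feats.shape)  # (10, 160): 7*20 local dims + 20 global dims per residue
```

The point of the combination is that the window captures short-range context around each residue, while the tiled composition vector gives every position the same chain-level summary, letting a downstream network condition local decisions on global sequence statistics.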