In this work we introduce DivNormEI, a novel bio-inspired convolutional network that performs divisive normalization, a canonical cortical computation, together with lateral inhibition and excitation, and is tailored for integration into modern Artificial Neural Networks (ANNs).
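Divisive normalization divides each unit's response by the pooled activity of a surrounding normalization pool. The sketch below is a minimal NumPy illustration of that canonical computation, not the DivNormEI layer itself; the semi-saturation constant `sigma`, the exponent `n`, and the all-to-all pooling are illustrative assumptions.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Normalize each response by the pooled activity of its neighbors.

    x: 1-D array of non-negative filter responses (toy feature-map row).
    sigma: semi-saturation constant; n: exponent. Both are illustrative.
    """
    pooled = sigma**n + np.sum(x**n)   # pooled activity over the normalization pool
    return x**n / pooled

responses = np.array([1.0, 2.0, 4.0])
normalized = divisive_normalization(responses, sigma=1.0, n=2.0)
# each output is x_i^2 / (1 + sum_j x_j^2), so strong inputs suppress weak ones
```

Because the denominator grows with total pool activity, the operation rescales responses relative to their context, which is what makes it attractive as a reusable layer inside a convolutional network.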
Work at the intersection of vision science and deep learning is starting to explore the efficacy of deep convolutional networks (DCNs) and recurrent networks in solving perceptual grouping problems that underlie primate visual recognition and segmentation.
The large amount of online data and vast array of computing resources enable current researchers in both industry and academia to employ the power of deep learning with neural networks.
Previous work on automated pain detection from facial expressions has primarily focused on frame-level pain metrics based on specific facial muscle activations, such as Prkachin and Solomon Pain Intensity (PSPI).
This approach ranks #1 on Pain Intensity Regression on the UNBC-McMaster ShoulderPain dataset (MAE on the VAS metric).
The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes.
Word embeddings learnt from large corpora have been adopted in various applications in natural language processing and served as the general input representations to learning systems.
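Used as input representations, word embeddings are simply rows of a learnt matrix looked up by token index. A minimal sketch, assuming a toy vocabulary and randomly initialized vectors (in practice the matrix would be learnt from a large corpus, e.g. via word2vec or GloVe):

```python
import numpy as np

# Toy vocabulary and a randomly initialized embedding matrix (illustrative only).
vocab = {"the": 0, "cat": 1, "sat": 2}
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # 4-dimensional vectors for illustration

def embed(tokens):
    """Map a token sequence to its stacked embedding vectors (the model's input)."""
    return np.stack([embeddings[vocab[t]] for t in tokens])

inputs = embed(["the", "cat", "sat"])  # one row per token, shape (3, 4)
```

A downstream learning system (an LSTM, a classifier, etc.) then consumes `inputs` instead of raw token ids.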
Widely used recurrent units, including Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable.
The encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation.
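The train-then-discard pattern can be sketched as follows; the toy linear `Encoder` and `Decoder` classes, their dimensions, and the bag-of-words averaging are illustrative assumptions, not the models from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """Toy sentence encoder: average the embedding rows of the input tokens."""
    def __init__(self, vocab_size, dim):
        self.W = rng.normal(size=(vocab_size, dim))
    def __call__(self, token_ids):
        return self.W[token_ids].mean(axis=0)

class Decoder:
    """Toy decoder producing vocabulary logits; used only during training."""
    def __init__(self, dim, vocab_size):
        self.W = rng.normal(size=(dim, vocab_size))
    def __call__(self, z):
        return z @ self.W

encoder = Encoder(vocab_size=10, dim=4)
decoder = Decoder(dim=4, vocab_size=10)
# ... unsupervised training on a large unlabelled corpus would update both ...
del decoder                                       # after training, only the encoder is kept
sentence_vector = encoder(np.array([1, 3, 5]))    # fixed-size vector, shape (4,)
```

Only `encoder` survives to inference time, since mapping a sentence to a vector is all that downstream tasks require.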
Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language.
We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required.
We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks.
The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics.