19 Jan 2022 • Joshua T. Vogelstein, Timothy Verstynen, Konrad P. Kording, Leyla Isik, John W. Krakauer, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Carey E. Priebe, Randal Burns, Kwame Kutten, James J. Knierim, James B. Potash, Thomas Hartung, Lena Smirnova, Paul Worley, Alena Savonenko, Ian Phillips, Michael I. Miller, Rene Vidal, Jeremias Sulam, Adam Charles, Noah J. Cowan, Maxim Bichuch, Archana Venkataraman, Chen Li, Nitish Thakor, Justus M Kebschull, Marilyn Albert, Jinchong Xu, Marshall Hussain Shuler, Brian Caffo, Tilak Ratnanather, Ali Geisa, Seung-Eon Roh, Eva Yezerets, Meghana Madhyastha, Javier J. How, Tyler M. Tomita, Jayanta Dey, Ningyuan Huang, Jong M. Shin, Kaleab Alemayehu Kinfu, Pratik Chaudhari, Ben Baker, Anna Schapiro, Dinesh Jayaraman, Eric Eaton, Michael Platt, Lyle Ungar, Leila Wehbe, Adam Kepecs, Amy Christensen, Onyema Osuagwu, Bing Brunton, Brett Mensh, Alysson R. Muotri, Gabriel Silva, Francesca Puppo, Florian Engert, Elizabeth Hillman, Julia Brown, Chris White, Weiwei Yang
We call this 'retrospective learning'.
We hypothesize that individual differences in how information is encoded in the brain are task-specific and predict distinct behavioral measures.
Real-world generalization, e.g., deciding whether to approach a never-before-seen animal, relies on contextual information as well as previous experiences.
We find that regions well-predicted by syntactic features are distributed across the language system and are not distinguishable from those processing semantics.
These results suggest that only the final stage of semantic processing of a word is task-dependent, and they pose a challenge for future research: formulating new hypotheses for earlier task effects as a function of the task and stimuli.
Encoding models based on task features predict activity in different regions across the whole brain.
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain.
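To make this concrete, here is a minimal sketch of how word-sequence representations are commonly extracted from a pretrained NLP model before being related to brain recordings. It assumes the HuggingFace `transformers` package; the `gpt2` checkpoint and the layer index are illustrative choices, not taken from any specific paper.

```python
# Sketch: extract contextual word-sequence representations from a pretrained
# language model, of the kind regressed against brain recordings.
# The model name and layer choice below are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

words = "the quick brown fox jumps over the lazy dog".split()
inputs = tokenizer(" ".join(words), return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (n_layers + 1) tensors, each (1, seq_len, hidden_dim).
# A middle layer is one common choice for the sequence representation.
layer = 6  # illustrative choice
features = outputs.hidden_states[layer].squeeze(0)  # (seq_len, hidden_dim)
print(features.shape)
```

These per-token feature vectors are then typically aligned to the stimulus timeline and used as regressors for brain activity.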
Unsupervised document representation learning is an important task that provides pre-trained features for NLP applications.
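As a concrete, deliberately simple instance of unsupervised document representation learning, the sketch below builds latent semantic analysis (LSA) features with scikit-learn. The toy corpus and dimensionality are assumptions for illustration, not the paper's method.

```python
# Sketch: unsupervised document representations via LSA (TF-IDF + truncated SVD),
# standing in for the pre-trained features such models provide.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

corpus = [
    "the brain encodes language in distributed regions",
    "encoding models predict voxel activity from stimulus features",
    "document embeddings provide features for downstream tasks",
    "independence testing asks whether two variables are related",
]

# Learn low-dimensional document vectors without any labels.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
doc_vectors = lsa.fit_transform(corpus)  # shape: (n_docs, 2)
print(doc_vectors.shape)
```

The resulting vectors can be fed to any downstream classifier or similarity search in place of hand-crafted features.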
Our results reveal differences in the context-related representations across these models.
The goal is to understand the brain by finding the function that expresses the activity of brain areas in terms of different properties of the stimulus.
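A minimal sketch of that encoding-model setup, on synthetic data: a ridge-regularized linear map from stimulus features to per-voxel activity, scored by held-out correlation. The feature and voxel counts, and the choice of ridge regression, are illustrative assumptions rather than any particular paper's pipeline.

```python
# Sketch: fit a function from stimulus properties to measured brain activity.
# Data are synthetic; one linear model is fit per voxel (RidgeCV is multi-output).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 20, 50

X = rng.standard_normal((n_timepoints, n_features))              # stimulus features
W = rng.standard_normal((n_features, n_voxels))                  # unknown "true" map
Y = X @ W + 0.5 * rng.standard_normal((n_timepoints, n_voxels))  # voxel activity

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

model = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X_tr, Y_tr)

# Held-out prediction accuracy per voxel: correlation of predicted vs. actual.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```

Voxels whose held-out correlation is reliably above chance are the ones said to be "predicted" by a given feature space.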
This paper deals with nonparametric independence testing, a fundamental decision-theoretic problem: given two arbitrary (possibly multivariate) random variables $X, Y$, decide whether they are independent. The question arises in many fields, including causality and neuroscience.
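One standard way to run such a test is distance covariance with a permutation null; the sketch below implements it from scratch in NumPy. The statistic (Szekely et al.'s distance covariance) and the permutation scheme are an illustrative choice, not necessarily the paper's own test.

```python
# Sketch: a nonparametric independence test for multivariate X, Y using
# (squared) sample distance covariance with a permutation-based p-value.
import numpy as np

def _centered_dist(A):
    # Double-centered pairwise Euclidean distance matrix.
    D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    return D - D.mean(0) - D.mean(1)[:, None] + D.mean()

def independence_test(X, Y, n_perms=500, seed=0):
    rng = np.random.default_rng(seed)
    Cx, Cy = _centered_dist(X), _centered_dist(Y)
    stat = (Cx * Cy).mean()  # squared sample distance covariance
    null = []
    for _ in range(n_perms):
        p = rng.permutation(len(Y))
        # Permuting Y's rows permutes both axes of its centered distance matrix.
        null.append((Cx * Cy[np.ix_(p, p)]).mean())
    # p-value: fraction of permuted statistics at least as large as observed.
    pval = (1 + sum(s >= stat for s in null)) / (1 + n_perms)
    return stat, pval

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
Y = X[:, :1] ** 2 + 0.1 * rng.standard_normal((200, 1))  # nonlinearly dependent
stat, p = independence_test(X, Y)
print(f"dcov^2 = {stat:.3f}, p = {p:.3f}")
```

Because the statistic is sensitive to nonlinear dependence, the test rejects here even though $X$ and $Y$ are uncorrelated in the linear sense.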