Search Results for author: Armin W. Thomas

Found 11 papers, 8 papers with code

Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data

1 code implementation • 22 Jun 2022 • Armin W. Thomas, Christopher Ré, Russell A. Poldrack

At their core, these frameworks learn the dynamics of brain activity by modeling sequences of activity akin to how sequences of text are modeled in NLP.

Causal Language Modeling • Language Modelling +1
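As a rough illustration of the idea in the excerpt above (not the authors' code), the following minimal sketch treats a parcellated fMRI scan as a sequence of frames and trains a causally masked transformer to predict the next frame, analogous to causal language modeling over text. All shapes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CausalBrainDynamicsModel(nn.Module):
    """Toy next-frame predictor over fMRI time series (hypothetical architecture)."""
    def __init__(self, n_regions=1024, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(n_regions, d_model)            # project each fMRI frame
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_regions)              # predict the next frame

    def forward(self, x):                                      # x: (batch, time, n_regions)
        t = x.size(1)
        causal_mask = torch.triu(                              # block attention to future frames
            torch.full((t, t), float("-inf"), device=x.device), diagonal=1
        )
        h = self.encoder(self.embed(x), mask=causal_mask)
        return self.head(h)

model = CausalBrainDynamicsModel()
x = torch.randn(8, 50, 1024)                                   # 8 scans, 50 TRs, 1024 parcels
pred = model(x)
loss = nn.functional.mse_loss(pred[:, :-1], x[:, 1:])          # predict frame t+1 from frames <= t
```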

Differentiable programming for functional connectomics

1 code implementation • 31 May 2022 • Rastko Ciric, Armin W. Thomas, Oscar Esteban, Russell A. Poldrack

We introduce a new analytic paradigm and software toolbox that implements common operations used in functional connectomics as fully differentiable processing blocks.

Denoising
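To make the "fully differentiable processing blocks" idea concrete, here is a minimal sketch (illustrative only, not the toolbox's API) of a functional-connectivity estimate written as a differentiable PyTorch operation, so gradients can flow through the connectome into upstream or downstream computations.

```python
import torch

def differentiable_connectivity(ts: torch.Tensor) -> torch.Tensor:
    """ts: (batch, time, regions) BOLD series -> (batch, regions, regions) correlation matrices."""
    ts = ts - ts.mean(dim=1, keepdim=True)                 # center each region's time series
    ts = ts / (ts.std(dim=1, keepdim=True) + 1e-8)         # unit variance; eps for stability
    return ts.transpose(1, 2) @ ts / (ts.size(1) - 1)      # Pearson correlation matrix

ts = torch.randn(4, 200, 100, requires_grad=True)          # 4 scans, 200 TRs, 100 parcels
fc = differentiable_connectivity(ts)                       # (4, 100, 100)
fc.sum().backward()                                        # gradients reach the raw time series
```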

Comparing interpretation methods in mental state decoding analyses with deep learning models

no code implementations • 31 May 2022 • Armin W. Thomas, Christopher Ré, Russell A. Poldrack

Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., perceiving fear or joy) and brain activity by identifying those brain regions (and networks) whose activity allows these states to be accurately identified (i.e., decoded).

Explainable artificial intelligence
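For context, a minimal sketch of one interpretation method of the kind the paper compares: Integrated Gradients (here via the captum library) applied to a toy decoding model to attribute its prediction to individual parcels. The decoder, input shapes, and class labels are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

decoder = nn.Sequential(                                   # toy mental-state decoder
    nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, 2)     # e.g., fear vs. joy
)
volume = torch.randn(1, 1024, requires_grad=True)          # one parcellated fMRI volume

ig = IntegratedGradients(decoder)
attributions = ig.attribute(volume, target=1)              # relevance of each parcel for class 1
print(attributions.shape)                                  # torch.Size([1, 1024])
```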

Evaluating deep transfer learning for whole-brain cognitive decoding

1 code implementation • 1 Nov 2021 • Armin W. Thomas, Ulman Lindenberger, Wojciech Samek, Klaus-Robert Müller

Here, we systematically evaluate TL for the application of DL models to the decoding of cognitive states (e.g., viewing images of faces or houses) from whole-brain functional Magnetic Resonance Imaging (fMRI) data.

Transfer Learning
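A minimal sketch of the general transfer-learning recipe evaluated here, under the assumption of a generic pre-trained encoder and parcellated fMRI inputs (not the paper's exact architecture): reuse pre-trained weights and fine-tune only a new task head for cognitive decoding.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())   # stands in for pre-trained weights
head = nn.Linear(256, 4)                                   # new head, e.g., 4 cognitive states

for p in encoder.parameters():
    p.requires_grad = False                                # freeze transferred layers

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)   # only the head is updated
x, y = torch.randn(32, 1024), torch.randint(0, 4, (32,))   # toy fine-tuning batch
loss = nn.functional.cross_entropy(head(encoder(x)), y)
loss.backward()
optimizer.step()
```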

Challenges for cognitive decoding using deep learning methods

no code implementations • 16 Aug 2021 • Armin W. Thomas, Christopher Ré, Russell A. Poldrack

In cognitive decoding, researchers aim to characterize a brain region's representations by identifying the cognitive states (e.g., accepting/rejecting a gamble) that can be identified from the region's activity.

Explainable artificial intelligence • Transfer Learning

On the Opportunities and Risks of Foundation Models

2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, aditi raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Deep Transfer Learning For Whole-Brain fMRI Analyses

no code implementations • 2 Jul 2019 • Armin W. Thomas, Klaus-Robert Müller, Wojciech Samek

Even further, the pre-trained DL model variant is already able to correctly decode 67.51% of the cognitive states from a test dataset with 100 individuals, when fine-tuned on a dataset the size of only three subjects.

Transfer Learning

Analyzing Neuroimaging Data Through Recurrent Deep Learning Models

1 code implementation • 23 Oct 2018 • Armin W. Thomas, Hauke R. Heekeren, Klaus-Robert Müller, Wojciech Samek

We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.
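As a rough sketch of a recurrent decoding setup of this kind (illustrative shapes only, not the full DeepLight architecture): an LSTM reads a sequence of fMRI frames and a classifier decodes a cognitive state from the final hidden state.

```python
import torch
import torch.nn as nn

class RecurrentDecoder(nn.Module):
    """Toy recurrent decoder over fMRI sequences (hypothetical, DeepLight-inspired)."""
    def __init__(self, n_regions=1024, hidden=128, n_states=4):
        super().__init__()
        self.lstm = nn.LSTM(n_regions, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_states)

    def forward(self, x):                                   # x: (batch, time, n_regions)
        _, (h, _) = self.lstm(x)                            # h: (1, batch, hidden)
        return self.classifier(h[-1])                       # logits over cognitive states

model = RecurrentDecoder()
logits = model(torch.randn(8, 30, 1024))                    # 8 scans, 30 TRs each
```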
