no code implementations • 4 Mar 2024 • Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K. Nejad, Felipe Yáñez, Bati Yilmaz, Kangjoo Lee, Alexandra O. Cohen, Valentina Borghesani, Anton Pashkov, Daniele Marinazzo, Jonathan Nicholas, Alessandro Salatiello, Ilia Sucholutsky, Pasquale Minervini, Sepehr Razavi, Roberta Rocca, Elkhan Yusifov, Tereza Okalova, Nianlong Gu, Martin Ferianc, Mikail Khona, Kaustubh R. Patil, Pui-Shee Lee, Rui Mata, Nicholas E. Myers, Jennifer K Bizley, Sebastian Musslick, Isil Poyraz Bilgin, Guiomar Niso, Justin M. Ales, Michael Gaebler, N Apurva Ratan Murty, Leyla Loued-Khenissi, Anna Behler, Chloe M. Hall, Jessica Dafflon, Sherry Dongqi Bao, Bradley C. Love
LLMs trained on the vast scientific literature could integrate noisy yet interrelated findings to forecast novel results better than human experts.
1 code implementation • 1 Dec 2023 • Sida Li, Ioana Marinescu, Sebastian Musslick
Symbolic regression (SR) is an area of interpretable machine learning that aims to identify mathematical expressions, often composed of simple functions, that best fit the relationship between a given set of covariates $X$ and a response $y$.
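To make the SR setup concrete, here is a minimal, hedged sketch (not the paper's method): enumerate a small library of candidate expressions built from simple functions and select the one minimizing squared error against the observed response. The candidate set and `fit` helper are illustrative assumptions.

```python
import math

# Illustrative candidate library of simple functions (an assumption,
# not the expression grammar used in the paper).
CANDIDATES = {
    "x": lambda x: x,
    "x^2": lambda x: x * x,
    "sin(x)": math.sin,
    "exp(x)": math.exp,
}

def fit(X, y):
    """Return the name of the candidate expression with the lowest
    sum of squared errors between f(x_i) and y_i."""
    def sse(f):
        return sum((f(xi) - yi) ** 2 for xi, yi in zip(X, y))
    return min(CANDIDATES, key=lambda name: sse(CANDIDATES[name]))

X = [0.0, 1.0, 2.0, 3.0]
y = [xi * xi for xi in X]   # data generated by x^2
print(fit(X, y))            # selects "x^2"
```

Real SR systems search a combinatorial space of compound expressions (e.g. via genetic programming or neural-guided search) rather than a fixed list, but the objective — minimizing fit error over candidate expressions — is the same.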
no code implementations • 14 Jul 2023 • Ryan Pyle, Sebastian Musslick, Jonathan D. Cohen, Ankit B. Patel
A key property of neural networks (both biological and artificial) is how they learn to represent and manipulate input information in order to solve a task.
1 code implementation • 11 Jun 2022 • Aimen Zerroug, Mohit Vaishnav, Julien Colin, Sebastian Musslick, Thomas Serre
Overall, we hope that our challenge will spur interest in the development of neural architectures that can learn to harness compositionality toward more efficient learning.
1 code implementation • 25 Mar 2021 • Sebastian Musslick
The integration of behavioral phenomena into mechanistic models of cognitive function is a fundamental staple of cognitive science.
no code implementations • 20 Jul 2020 • Sachin Ravi, Sebastian Musslick, Maia Hamin, Theodore L. Willke, Jonathan D. Cohen
The terms multi-task learning and multitasking are easily confused.
no code implementations • NeurIPS 2017 • Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder
A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations.