Michael Wibral, MEG Unit, Brain Imaging Center, Goethe University Frankfurt
Information theoretic quantities measure key elements of distributed computation in neural systems, such as the storage and transfer of information. In this way, they help us better understand the computational algorithm implemented in the network under investigation.
Information theoretic approaches have recently attracted great interest in neuroscience, especially because they do not require modeling of the neural system. This is important, as our current knowledge about neural systems is often still too limited to rely on modeling alone -- but what happens when, one day, our knowledge suffices for detailed modeling of the dynamics of large neural systems like the human brain?
To provide an answer, I will turn to David Marr's classic tri-level hypothesis to explain how simply duplicating the dynamics of a neural system via detailed modeling amounts to the possibility of perfect measurements at the level of the biophysical implementation, but does not entail an understanding of the information processing algorithms implemented in the system's dynamics.
The missing link between the dynamics simulated at the biophysical level and the computational algorithms implemented by these dynamics can be provided by information theoretic methods. This will make information theoretic methods an indispensable tool for the investigation of upcoming large-scale neural models. To demonstrate the use of this approach, I will combine analyses of information transfer and storage in real-world neural data from magnetoencephalography (MEG) and voltage-sensitive dye (VSD) imaging with the same type of analysis on neural models, demonstrating the possibility of comparing systems via their 'Information Footprint'. An application to MEG data from patients with autism spectrum disorder will also be presented.
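To make the two measures mentioned above concrete, the following is a minimal sketch (not the speaker's actual analysis pipeline) of plug-in estimators for transfer entropy and active information storage on discrete time series, both with history length 1. The toy data, variable names, and the choice of a simple frequency-count estimator are illustrative assumptions; practical analyses use longer histories and bias-corrected estimators.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y), history length 1:
    TE = sum over (y1, y0, x0) of p(y1, y0, x0) * log2[p(y1|y0, x0) / p(y1|y0)].
    Measures information transferred from X's past into Y's next state."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                     # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

def active_info_storage(y):
    """Plug-in estimate of active information storage, history length 1:
    A(Y) = I(y_t ; y_{t+1}), the information the past of Y stores
    about its own next state."""
    pairs = Counter(zip(y[1:], y[:-1]))
    nxt, prev = Counter(y[1:]), Counter(y[:-1])
    n = len(y) - 1
    ais = 0.0
    for (y1, y0), c in pairs.items():
        p = c / n
        ais += p * math.log2(p * n * n / (nxt[y1] * prev[y0]))
    return ais

# Toy example: y copies x with a one-step delay, so information
# flows from x to y but not in the reverse direction.
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))   # close to 1 bit
print(transfer_entropy(y, x))   # close to 0 bits
print(active_info_storage(y))   # close to 0 bits (x is i.i.d.)
```

The asymmetry between the two transfer-entropy values is what makes such measures useful as an 'Information Footprint': the same estimator applied to recorded data and to a simulated model yields directly comparable numbers.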