The Project
MeReX ("Mechanistic and Representational Explanation in Cognitive Neuroscience") is a collaboration between TU Berlin and Ben-Gurion University in Israel, with further collaborators at Umeå University in Sweden. Our work is funded by DFG grant no. 520194884 and runs from April 2024 to March 2027.
Cognitive neuroscientists explain cognitive phenomena such as perception, memory, or problem solving by describing the neural mechanisms underlying the phenomena. In doing so, they usually assume that some components of these mechanisms have representational properties. For example, neurons in the visual cortex are thought to represent certain stimulus features, which explains how the organism is able to perceive and interact with the world.
However, combining mechanistic and representational explanations yields a tension: neuroscientific mechanistic explanations can prima facie refer exclusively to factors within the brain, whereas representational properties supervene on the organism's relations to its external world and/or past. This raises what we dub the "compatibility challenge": can explanations in cognitive neuroscience be simultaneously mechanistic and representational?
The compatibility challenge has not been sufficiently examined philosophically, though it is related to a problem familiar from the philosophy of mind of the 1980s: the "classical challenge".
The project can be understood as the necessary and long overdue revision and reassessment of the classical challenge in light of recent developments in philosophy of science, philosophy of cognition, and cognitive neuroscience. We will approach the compatibility challenge by working in close collaboration with empirical researchers and by applying a novel method called “adversarial collaboration” to examine two sets of working hypotheses.
The first set is:
- Cognitive neuroscience can do without representational explanations and rely solely on mechanistic explanation.
- The prominence of computational explanation in cognitive neuroscience explains why scientists still use representational vocabulary, while at the same time showing that computational-mechanistic explanations are sufficient.
The second set of working hypotheses is:
- Computational explanation provides the first step towards an improved account of representational content and representational explanation.
- It is possible to develop an account of representations in terms of function-informational properties of computational vehicles.
- Wide explananda alone do not yet render representational explanation compatible with mechanistic explanation.
- The mechanistic framework can be extended so that function-informational properties of computational vehicles can figure in mechanistic explanation.
The project will provide new insights into the role representations can play in mechanistic explanations of mental phenomena. Owing to its adversarial-collaboration methodology, it is explicitly open-ended. Its results will contribute to our general understanding of the mind and its scientific explanation, and to a fundamental reorientation of the debate between representationalists and anti-representationalists.