
On Tuesday 19 June 2018, FraMEPhys hosted a one-day workshop at the University of Birmingham. Here’s the original poster.
Speakers
- Tudor Baetu (Bristol)
- Mazviita Chirimuuta (Pittsburgh)
- Lina Jansson (Nottingham)
- Isaac Wilhelm (Rutgers)
- Alastair Wilson (Birmingham)
Schedule
10.00am – 10.15am: Alastair Wilson – “Intro: FraMEPhys and Pluralism”
10.15am – 11.30am: Lina Jansson – “Explanatory Pluralism and Realism”
11.45am – 1.00pm: Isaac Wilhelm – “Explanatory Priority Monism”
1.00 – 2.00pm: Lunch
2.00pm – 3.15pm: Mazviita Chirimuuta – “Prediction, Explanation, and the Limits of Neuroscience”
3.30pm – 4.45pm: Tudor Baetu – “Multidisciplinary Integration and the Level-Laden Conception of Science”
4.45pm – 5.00pm: Closing discussion
Abstracts
“Multidisciplinary Integration and the Level-Laden Conception of Science.” Tudor Baetu
Abstract: Causal models aggregating ‘lower-level’ (e.g., biological) and ‘higher-level’ (e.g., psychological) determinants of a phenomenon raise a puzzle about how interactions between these factors are possible. I argue that these models are in fact level-neutral compilations of empirical findings about correlated and causally relevant factors, and as such they neither assume nor entail a conceptual or ontological stratification into levels of description, explanation, or reality. If inter-level causation is deemed problematic, or if debates arise about the superiority of a particular level of description or explanation, these issues are fuelled by considerations other than empirical findings.

“Prediction, Explanation, and the Limits of Neuroscience.” Mazviita Chirimuuta
Abstract: A major task for neuroscience in the 21st century is to illuminate the relationship between neural population activity and behaviours such as sensory discrimination and motor control. Amongst technological goals, the ability to decode the activity of neural populations in order to drive brain machine interfaces is fairly advanced. Here I will argue that in order for models describing the relationship between neural activity and behaviour to serve epistemic goals going beyond mere prediction, they must be interpretable by scientists. While the technological goal of decoding was initially served by highly interpretable linear models, some recent advances have come through the use of AI methods that sacrifice interpretability for predictive accuracy. Such models offer explanations (of some kinds) without understanding. I discuss the implications for the distinction between basic and applied science, and argue that understanding should remain a central epistemic goal for neuroscience.

“Explanatory Pluralism and Realism.” Lina Jansson
Abstract: In light of the range of explanations that we find in science (and beyond), some pluralism about accounts of explanation seems very plausible. We could take this to suggest that there is no unified account of explanation, however thin, that is on its own capable of recovering the main features of explanation. I will argue that such a view challenges central arguments for scientific realism, and I will explore which features of explanation such arguments for realism rely on.

“Explanatory Priority Monism.” Isaac Wilhelm
Abstract: I argue that there is one relation which backs all cases of explanation. That relation explains why causation, grounding, and other such relations, are capable of backing explanations.
