Saturday, August 26, 2017

Consciousness: how to understand it?

The real problem

It looks like scientists and philosophers might have made consciousness far more mysterious than it needs to be.

Anil K Seth is professor of cognitive and computational neuroscience at the University of Sussex, and co-director of the Sackler Centre for Consciousness Science. He is also editor-in-chief of Neuroscience of Consciousness. He lives in Brighton.




Edited by Nigel Warburton


The cerebellum (the so-called ‘little brain’ hanging off the back of the cortex) has about four times as many neurons as the rest of the brain, but seems barely involved in maintaining conscious level. It’s not even the overall level of neural activity – your brain is almost as active during dreamless sleep as it is during conscious wakefulness. Rather, consciousness seems to depend on how different parts of the brain speak to each other, in specific ways.
..............................................
Consciousness is informative in the sense that every experience is different from every other experience you have ever had, or ever could have.
..............................................
Consciousness is integrated in the sense that every conscious experience appears as a unified scene. We do not experience colours separately from their shapes, nor objects independently of their background.
...............................................
In the 19th century, the German polymath Hermann von Helmholtz proposed that the brain is a prediction machine, and that what we see, hear and feel are nothing more than the brain’s best guesses about the causes of its sensory inputs. Think of it like this. The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world. Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals – of coffee cups, computers and clouds. What we see is the brain’s ‘best guess’ of what’s out there.
People consciously see what they expect, rather than what violates their expectations.
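This 'best guess' idea can be made concrete with a toy calculation. The sketch below (with hypothetical numbers, not anything from the essay) combines a Gaussian prior expectation with a noisy Gaussian sensory signal, weighting each by its precision; when the prior is much more precise than the signal, the resulting percept stays close to what was expected, just as the passage describes.

```python
# Toy sketch of perception as Bayesian inference.
# The 'best guess' (posterior) blends a prior expectation with a noisy
# sensory signal, each weighted by its precision (inverse variance).

def best_guess(prior_mean, prior_var, sensory_mean, sensory_var):
    """Combine two Gaussians: precision-weighted average of prior and input."""
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    posterior_mean = posterior_var * (prior_precision * prior_mean +
                                      sensory_precision * sensory_mean)
    return posterior_mean, posterior_var

# A strong prior (low variance) pulls perception toward the expectation,
# even when the sensory signal says something quite different.
mean, var = best_guess(prior_mean=0.0, prior_var=0.5,
                       sensory_mean=10.0, sensory_var=4.5)
# mean is 1.0: the percept sits far closer to the expected 0 than to the
# sensed 10, because the prior carries most of the precision.
```

Here the sensory evidence says 10, but the posterior lands at 1.0: expectation dominates, which is one way to read the claim that people consciously see what they expect rather than what violates their expectations.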
................................................
There is a final twist to this story. Predictive models are good not only for figuring out the causes of sensory signals, they also allow the brain to control or regulate these causes, by changing sensory data to conform to existing predictions (this is sometimes called ‘active inference’). When it comes to the self, especially its deeply embodied aspects, effective regulation is arguably more important than accurate perception. As long as our heartbeat, blood pressure and other physiological quantities remain within viable bounds, it might not matter if we lack detailed perceptual representations. This might have something to do with the distinctive character of experiences of ‘being a body’, in comparison with experiences of objects in the world – or of the body as an object.
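The regulatory reading of active inference can be sketched as a simple control loop (again with invented numbers, not the essay's): instead of revising its prediction to fit the data, the system acts on the world so that the sensed quantity is driven back toward the predicted, viable value.

```python
# Toy sketch of 'active inference' as regulation: prediction error is reduced
# by changing the sensory data (acting on the cause), not by updating the belief.

def regulate(sensed, predicted, gain=0.5, steps=20):
    """Each step, act to shrink the gap between sensed and predicted values."""
    for _ in range(steps):
        error = predicted - sensed
        sensed += gain * error  # action alters the world, the prediction stays fixed
    return sensed

# Physiology has drifted to 80; the predicted (viable) value is 60.
# Repeated corrective action pulls the sensed quantity back to the prediction.
heart_rate = regulate(sensed=80.0, predicted=60.0)
# heart_rate ends up very close to 60.0
```

Note that the loop never needs an accurate model of why the value drifted; it only needs the error signal to stay controllable, which matches the point that effective regulation can matter more than accurate perception.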

And this returns us one last time to Descartes. In dissociating mind from body, he argued that non-human animals were nothing more than ‘beast machines’ without any inner universe. In his view, basic processes of physiological regulation had little or nothing to do with mind or consciousness. I’ve come to think the opposite. It now seems to me that fundamental aspects of our experiences of conscious selfhood might depend on control-oriented predictive perception of our messy physiology, of our animal blood and guts. We are conscious selves because we too are beast machines – self-sustaining flesh-bags that care about their own persistence.
