Towards a Computational Neuroscience of Meditation and Self-Awareness. Lars Sandved-Smith, Antoine Lutz, Jérémie Mattout, Maxwell Ramstead, Karl Friston, Mark Miller (UCL Wellcome Centre for Human Neuroimaging, Thiverval, France). C22
Recent advances in computational modelling techniques are making it possible to begin mapping the causal processes at play in the brain during meditation. These advances are shedding light on the computational structure of an organism's self-model and make it possible to simulate the process of training awareness of various aspects of the self (i.e., model parameters). At the root of these advances is a cutting-edge behavioural modelling framework, active inference (Friston et al., 2017). Active inference is a first-principles (i.e., Bayesian) account of how autonomous agents might operate in dynamic, non-stationary environments, and a descendant of Bayesian theories of the brain such as predictive coding (Bastos et al., 2012). In active inference, the maximisation of model evidence enables agents to make inferences about the environment and to select optimal behaviours. The agent achieves this by evaluating (sensory) evidence in relation to its internal generative model, which entails beliefs about future (hidden) states and the sequences of actions it can choose (Sajid et al., 2019). Our recent work extends this framework to account for the agent's ability to select mental actions, such as deliberately paying attention to a particular stimulus. This extension endows the agent's generative model with a deep hierarchical architecture in which higher-level beliefs about lower-level model parameters can be inferred. This enables the agent to gain awareness (i.e., phenomenological opacity) and control over aspects of its own generative model. Simulations of agents possessing this form of parametrically deep generative model during a meditation task (Sandved-Smith et al., in progress) naturally give rise to the cycles of focus and mind-wandering associated with focused-attention meditation practices (Hasenkamp et al., 2011).
Furthermore, these agents can report on metacognitive observations, such as the extent to which they are aware of where their attention is focused. This novel extension to the active inference framework makes it possible to simulate, and to generate behavioural predictions for, mental tasks such as meditation. More generally, however, it provides a biologically plausible, first-principles understanding of the computational architecture required for metacognitively aware organisms.
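The parametrically deep architecture described above can be caricatured in a few lines of code. The sketch below is a toy construction of our own, not the authors' implementation: the sigmoid link, the likelihood values, the decay rate, and all variable names are illustrative assumptions. A lower level predicts the attended stimulus with a reliability set by an attentional precision that slowly drifts downward (mind-wandering); a higher, metacognitive level performs a Bayes update on the belief that attention is focused, using lower-level prediction success as evidence; when that belief drops low enough, a mental action of "refocusing" restores precision. Cycles of focus and mind-wandering emerge from this loop.

```python
import numpy as np

# Toy sketch of a parametrically deep generative model (illustrative only:
# dynamics, parameter values, and names are our assumptions, not the
# published model, which uses full discrete-state active inference).

rng = np.random.default_rng(0)

T = 200                # simulation length (time steps)
gamma = 2.0            # attentional precision over the sensory likelihood
gamma_focused = 2.0    # precision restored by the mental action "refocus"
decay = 0.97           # precision drifts downward: mind-wandering pressure
q_attn = 0.9           # higher-level belief P(attention = focused)

log = {"gamma": [], "q_attn": [], "refocus": []}

for t in range(T):
    gamma *= decay
    # Lower level: probability of correctly predicting the attended
    # stimulus (e.g. the breath) grows with precision (sigmoid link).
    p_correct = 1.0 / (1.0 + np.exp(-gamma))
    hit = rng.random() < p_correct

    # Higher (metacognitive) level: Bayes update of the belief that
    # attention is focused, given lower-level prediction success.
    like_focused = 0.9 if hit else 0.1
    like_distracted = 0.5
    q_attn = (q_attn * like_focused) / (
        q_attn * like_focused + (1.0 - q_attn) * like_distracted
    )
    q_attn = float(np.clip(q_attn, 0.01, 0.99))  # avoid belief saturation

    # Metacognitive awareness of a lapse triggers a mental action that
    # redeploys attention, resetting precision and the attentional belief.
    refocus = q_attn < 0.3
    if refocus:
        gamma = gamma_focused
        q_attn = 0.9

    log["gamma"].append(gamma)
    log["q_attn"].append(q_attn)
    log["refocus"].append(refocus)

print(f"{sum(log['refocus'])} refocusing events in {T} steps")
```

Because precision decays while the metacognitive belief is only corrected by refocusing, the agent repeatedly drifts into mind-wandering, detects the lapse at the higher level, and returns to the object of focus, the same qualitative cycle the full active inference simulations reproduce.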