
dc.contributor.advisor: Womelsdorf, Thilo
dc.creator: Balcarras, Matthew Dwight
dc.date.accessioned: 2015-12-16T19:17:35Z
dc.date.available: 2015-12-16T19:17:35Z
dc.date.copyright: 2015-06-18
dc.date.issued: 2015-12-16
dc.identifier.uri: http://hdl.handle.net/10315/30659
dc.description.abstract: Attention and learning are closely related cognitive control processes. This thesis investigates this interrelatedness by using computational models to describe the mechanisms shared between the two processes. Computational models describe the transformation of stimuli into observable variables (behaviour) and contain the latent mechanisms that shape this transformation. Here, I capture these mechanisms with the reinforcement learning (RL) framework, applied in two task contexts across three projects, to show 1) how attentional selection of stimuli involves learning values for stimuli, 2) how the learning of stimulus values is influenced by previously learned rules, and 3) how explorations of value-related mechanisms in the brain benefit from using intracranial EEG to measure the strength of oscillatory activity in ventromedial prefrontal cortex (vmPFC). In the first project, the RL framework is applied to a feature-based attention task that required macaques to learn the value of stimulus features while ignoring non-relevant information. By comparing different RL schemes, I found that trial-by-trial covert attentional selections were best predicted by a model that represents expected values only for the task-relevant feature dimension. In the second project, I explore mechanisms of stimulus-feature value learning in humans in order to understand the influence of learned rules on the flexible, ongoing learning of expected values. I test the hypothesis that naive subjects show enhanced learning of feature-specific reward associations by switching to the use of an abstract rule that associates stimuli by feature type. I found that two-thirds of subjects (n = 22/32) exhibited behaviour that was best fit by a ‘flexible-rule-selection’ model. Low-frequency oscillatory activity in frontal cortex has been associated with cognitive control and integrative brain functions; however, the relationship between expected stimulus values and band-limited, rhythmic neural activity in the human brain is largely unknown. In the third project, I used intracranial electrocorticography (ECoG) in a proof-of-principle study of a single human subject to reveal spectral power signatures in vmPFC related to the expected stimulus values predicted by an RL model.
dc.language.iso: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Neurosciences
dc.title: Reinforcement Learning Describes the Computational and Neural Processes Underlying Flexible Learning of Values and Attentional Selection
dc.type: Electronic Thesis or Dissertation
dc.degree.discipline: Biology
dc.degree.name: PhD - Doctor of Philosophy
dc.degree.level: Doctoral
dc.date.updated: 2015-12-16T19:17:35Z
dc.subject.keywords: reinforcement learning
dc.subject.keywords: neuroeconomics
dc.subject.keywords: prefrontal cortex
dc.subject.keywords: ECoG
dc.subject.keywords: attentional selection
dc.subject.keywords: attentional control
dc.subject.keywords: computational neuroscience
dc.subject.keywords: theta
dc.subject.keywords: ventromedial prefrontal cortex
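The abstract above describes applying the reinforcement learning (RL) framework to the learning of stimulus-feature values and covert attentional selection. As an illustration only, the Python sketch below shows a generic delta-rule (Rescorla-Wagner style) feature-value learner with softmax choice; the learning rate, inverse temperature, number of features, and reward schedule are assumptions for this toy example and are not taken from the models fit in the thesis.

import numpy as np

# Illustrative toy model only: a delta-rule feature-value learner with softmax
# choice, loosely in the spirit of the RL framework described in the abstract.
# alpha, beta, n_features, and reward_prob are assumed values, not thesis parameters.

rng = np.random.default_rng(0)

n_features = 3                            # values of the task-relevant feature dimension
alpha = 0.1                               # learning rate (assumed)
beta = 5.0                                # softmax inverse temperature (assumed)
reward_prob = np.array([0.8, 0.5, 0.2])   # hypothetical reward probability per feature

values = np.zeros(n_features)             # expected value of each feature

def softmax(v, beta):
    z = np.exp(beta * (v - v.max()))      # subtract max for numerical stability
    return z / z.sum()

for trial in range(500):
    p = softmax(values, beta)             # choice probabilities from current values
    choice = rng.choice(n_features, p=p)  # covert selection of one feature
    reward = float(rng.random() < reward_prob[choice])
    values[choice] += alpha * (reward - values[choice])   # delta-rule value update

print("Learned feature values:", np.round(values, 2))

Running the sketch, the learned values approach the assumed reward probabilities, which is the sense in which trial-by-trial selections can be predicted from expected values in such models.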

