An investigation of factual and counterfactual feedback information in early visual cortex

Paton, Angus T. (2019) An investigation of factual and counterfactual feedback information in early visual cortex. PhD thesis, University of Glasgow.

Printed Thesis Information: https://eleanor.lib.gla.ac.uk/record=b3343001

Abstract

Primary visual cortex receives approximately 90% of the output of the retina, yet this feedforward signal accounts for only around 5% of the input to V1 (Muckli, 2010). The majority of the input to V1 instead arrives from other cortical and sub-cortical parts of the brain via lateral and feedback pathways. It is therefore critical to our knowledge of visual perception to understand how these feedback responses influence visual processing.

The aim of this thesis is to investigate different sources of non-visual feedback to early visual cortex. To do this we use a combination of an occlusion paradigm, derived from F. W. Smith and Muckli (2010), and functional magnetic resonance imaging. Occlusion offers us a method to block the feedforward flow of information from a specific part of the visual field. By removing this feedforward information we exploit the highly precise retinotopic organisation of visual cortex, rendering a corresponding patch of cortex free of feedforward input. From this isolated patch of cortex we can ask questions about the information content of purely feedback signals.

In Chapter 3 we investigated whether information about valence is present in non-stimulated early visual cortex. We constructed a 900-image set containing an equal number of neutral, positive and negative valence images across animal, food and plant categories. We used an m-sequence design to allow us to present the image set within a standard period of time for fMRI. We were concerned that low-level image properties could be a confound; a large image set allows these low-level properties to average out. We occluded the lower-right quadrant of each image and presented each image only once to our subjects. The image set was rated for valence and arousal after fMRI so that individual subjectivity could be accounted for. We used multivariate pattern analysis (MVPA) to decode between pairs of neutral, positive and negative valence. We found that in both stimulated and non-stimulated V1, V2 and V3, and in the amygdala and pulvinar, only information about negative valence could be decoded. In a second analysis we again used MVPA to cross-decode between pairs of valence across categories. By training the classifier on valence pairs drawn from two categories, we could ask whether the classifier generalises to the left-out category for the same valence pair. We found that valence does generalise across category in both stimulated and non-stimulated cortex, and in the amygdala and pulvinar. These results demonstrate that information about valence, particularly negative valence, is represented in low-level visual areas and generalises across animal, food and plant categories.
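The leave-one-category-out cross-decoding logic can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic "voxel patterns" — the array shapes, signal model and parameters are assumptions for demonstration, not the thesis's actual analysis pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N_VOX = 50  # illustrative number of voxels in the ROI

def patterns(valence, n=40):
    """Synthetic voxel patterns: valence 0 = neutral, 1 = negative."""
    mu = np.zeros(N_VOX)
    mu[:10] = 1.0 if valence else -1.0  # a valence signal shared across categories
    X = mu + rng.normal(scale=2.0, size=(n, N_VOX))
    return X, np.full(n, valence)

# One (patterns, labels) dataset per stimulus category.
categories = ["animal", "food", "plant"]
data = {}
for c in categories:
    X0, y0 = patterns(0)
    X1, y1 = patterns(1)
    data[c] = (np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Leave-one-category-out: train on two categories, test on the third.
scores = {}
for held_out in categories:
    train = [c for c in categories if c != held_out]
    Xtr = np.vstack([data[c][0] for c in train])
    ytr = np.concatenate([data[c][1] for c in train])
    Xte, yte = data[held_out]
    clf = LinearSVC().fit(Xtr, ytr)
    scores[held_out] = clf.score(Xte, yte)

print(scores)  # above-chance accuracy on the held-out category
```

If the classifier scores above chance (0.5) on a category it never saw during training, the valence information it learned must generalise across categories, which is the logic of the second analysis above.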

In Chapter 4 we explored the retinotopic organisation of object and scene sound responses in non-stimulated early visual cortex. We embedded a repeating object sound (axe chopping or motor starting) into a scene sound (blizzard wind or forest) and used MVPA to read out object or scene information from non-stimulated early visual cortex. We found that object sounds were decodable in the fovea and scene sounds were decodable in the periphery. This finding demonstrates that auditory feedback to visual cortex has an eccentricity bias corresponding to the functional roles of central and peripheral vision. We suggest that object information feeds back to the fovea for fine-grained discrimination, whereas abstract information feeds back to the periphery to provide a modulatory contextual template for vision.

In a second experiment in Chapter 4 we further explored the similarity between categorical representations of sound and video stimuli in non-stimulated early visual cortex. We used video stimuli and separated the audio and visual streams into unimodal stimuli. We occluded the bottom-right quadrant of the videos and used MVPA to cross-decode between sounds and videos (and vice versa) from responses in occluded cortex. We found that a classifier trained on one modality could decode the other in occluded cortex. This finding tells us that aural and visual stimuli share overlapping neural representations in early visual cortex.
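Cross-modal decoding follows the same train-on-one, test-on-the-other pattern, this time across modalities rather than categories. Again a hedged sketch with synthetic data: the shared category signal, the modality-specific offset and all parameter values are illustrative assumptions, not the thesis's measurements.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
N_VOX = 50  # illustrative occluded-ROI size

def trials(category, modality, n=40):
    """Synthetic occluded-V1 patterns: a category signal shared across
    modalities plus a modality-specific (non-discriminative) offset."""
    mu = np.zeros(N_VOX)
    mu[:10] = 1.0 if category else -1.0            # shared across modalities
    mu[10:20] = 0.5 if modality == "audio" else -0.5  # modality-specific
    return mu + rng.normal(scale=2.0, size=(n, N_VOX))

# Two categories per modality, same label order in both.
Xa = np.vstack([trials(0, "audio"), trials(1, "audio")])
Xv = np.vstack([trials(0, "video"), trials(1, "video")])
y = np.array([0] * 40 + [1] * 40)

clf = LinearSVC().fit(Xa, y)  # train on sound-evoked patterns...
acc = clf.score(Xv, y)        # ...test on video-evoked patterns
print(acc)
```

Above-chance transfer accuracy implies that the two modalities evoke overlapping category representations in the same voxels, since the classifier never saw a video trial during training.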

In Chapter 5 we probe the internal thought processes of subjects after occluding a short video sequence. We use a priming sequence to generate predictions, as subjects are asked to imagine how events from a video unfold during occlusion. We then probe these predictions with a series of test frames corresponding to different points in time: close to the offset of the video, just before the video is expected to reappear, the matching frame from when the video is expected to reappear, or a frame from the very distant future. In an adaptation paradigm we find that predictions best match the test frames around the point in time that subjects expect the video to reappear. The test frame from a point close in time to the offset of the video was rarely a match. This tells us that the predictions subjects make are not related to the offset of the priming sequence but represent a future state of the world that they have not seen. In a second control experiment we show that these predictions are absent when the priming sequence is randomised, and that predictions take between 600 ms and 1200 ms to fully develop. These findings demonstrate the dynamic flexibility of internal models, show that information about these predictions can be read out in early visual cortex, and show that stronger representations form given additional time.

In Chapter 6 we again probe internal dynamic predictions, using a virtual navigation paradigm. We use virtual reality to train subjects in a new environment where they can build strong representations of four categorical rooms (kitchen, bedroom, office and game room). Later, in fMRI, we provide subjects with a direction cue and a starting room and ask them to predict the upcoming room by combining the two pieces of information. The starting room is shown as a short video clip with the bottom-right quadrant occluded. During the video sequence of the starting room, we find that we can read out information about the future room from non-stimulated early visual cortex. In a second control experiment, when we remove the direction cue, information about the future room can no longer be decoded. This finding demonstrates that dynamic predictions about the immediate future are present in early visual cortex during simultaneous visual stimulation, and that we can read out these predictions with 3T fMRI.

These findings increase our knowledge of the types of non-visual information available to early visual cortical areas and provide insight into the influence they have on vision. These results lend support to the idea that early visual areas may act as a blackboard for read and write operations for communication around the brain (Muckli et al., 2015; Mumford, 1991; Murray et al., 2016; Roelfsema & de Lange, 2016; Williams et al., 2008). Current models of predictive coding will need to be updated to account for the brain's ability to switch between two different processing streams: one that is factual and related to an external stimulus, and one that is stimulus-independent and internal.

Item Type: Thesis (PhD)
Qualification Level: Doctoral
Additional Information: This work was supported by a grant from the European Research Council (grant number 167640) and Human Brain Project: European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements 720270 (SGA1) and 785907 (SGA2).
Keywords: Predictive coding, fMRI, vision, neuroscience, V1, V2, V3, striate, extrastriate, cortex, feedback, affect, valence, audition, virtual reality.
Colleges/Schools: College of Medical Veterinary and Life Sciences > Institute of Neuroscience and Psychology
Supervisor's Name: Muckli, Professor Lars
Date of Award: 2019
Depositing User: Doctor Angus Paton
Unique ID: glathesis:2019-41089
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 26 Mar 2019 11:13
Last Modified: 30 Apr 2019 13:03
URI: http://theses.gla.ac.uk/id/eprint/41089
