I have always been intrigued by the mystery of how sensations arise from brain activity – the puzzle of consciousness. During my PhD and subsequent postdocs I studied the boundaries between conscious and non-conscious visual perception, and the relationship between attention, metacognition and visual awareness.

  • Neural correlates of consciousness

    Most of the neural activity in our brains does not produce conscious sensations: it simply takes place without leaving any trace of conscious sentience.

    This unconsciousness might seem like alien and unknown territory, but it is actually one of the most common facts of our lives. Every morning we can collect evidence of how this non-conscious neural activity operates: the neural mechanisms that literally wake us up after a good night of sleep are completely unconscious to us. We cannot wake ourselves up; our brain (or the neural activity within it) wakes “us” up. By definition, those mechanisms lack consciousness.*

    Similarly, even when we are conscious and attentive to the world that surrounds us, the neural activity underlying most of our cognitive processes does not correlate with conscious sensations. Think, for example, of episodic memory. How do we retrieve a single memory? If we want to remember an event (say, what we had for dinner the previous day), we ask ourselves a specific question (what did I eat yesterday?), wait a second or so, and then see the answer in our “mind's eye” (a pizza). The initiation of memory retrieval is under our voluntary control (we issue that command), but the intermediate cognitive steps needed to retrieve the memory are unknown to us: whatever mechanisms our brains use, we are not conscious of them.

    And memory is just one example. Non-conscious processes pervade every one of our cognitive functions, from visual imagery to language, from motor control to decision-making. We do not have conscious access to the full list of operations required for these functions, but only to their outcome**.

    But if most of our brain activity is not associated with consciousness, what about the small part that actually produces it? What are the special properties of the neural correlates of consciousness, of that “minimum set of neural activity necessary and sufficient to give rise to consciousness”? Several attempts have been made to answer this question**, but, in my opinion, there is still no satisfactory answer. This remains one of the biggest mysteries of science.

    We might get closer to a solution once we finally recognize how to properly address this problem. What is the explanation of consciousness that we (neuroscientists) are pursuing? What is the problem that a theory of consciousness should address, and how can science help to solve it?

    This is one of my main current scientific interests.


    * Note that when we say “I woke up” we make it sound like a conscious action, even though we do not have any control over it: we identify ourselves as subjects, while we are actually objects of consciousness. Instead of saying that we “have consciousness”, a more precise formulation would be that “consciousness has us”.

    ** Many neural markers have been proposed as possible neural correlates of consciousness. One of those candidates is the P3 component in EEG recordings. Joaquin Navajas and I wrote a short opinion article on the relationship between the electroencephalographic P3 component and conscious access that you can read here.


  • The boundary between conscious and non-conscious visual motion perception

    Research on the scope and limits of non-conscious vision can advance our understanding of the functional and neural underpinnings of visual awareness. To understand what we are seeing, our brain performs various analyses, such as color perception, object segmentation and motion perception. All these visual features are analysed separately and in parallel by our brain, and ultimately “bound” into a unified visual experience. An open question is to what extent the visual system can process and interpret information from non-conscious stimuli. Which of these visual features can be integrated, or bound, into coherent patterns without awareness? And does consciousness play a particular role in the binding of visual features?

    We investigated whether distributed local motion features can be bound, outside of awareness, into coherent patterns. We found that the visual system is able to integrate low-level motion signals into a coherent pattern outside of visual awareness. In contrast, in an experiment using meaningful or scrambled biological motion, we did not observe any increase in detection sensitivity for meaningful patterns. Overall, our results are in agreement with previous studies on face processing and with the hypothesis that certain features are spatio-temporally bound into coherent patterns even outside of attention or awareness. This means that visual awareness is not needed for at least certain forms of coherence analysis of visual features. You can see our two papers on non-conscious motion processing here and here.
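
    A standard way to build this kind of stimulus is a random-dot kinematogram, in which only a fraction of the dots moves in a common direction while the rest move at random. As a rough illustration (the dot count, speed and coherence level below are arbitrary choices, not the parameters used in our experiments), such a stimulus can be generated as follows:

    ```python
    import numpy as np

    def rdk_frames(n_dots=200, n_frames=60, coherence=0.3,
                   direction=0.0, speed=0.02, seed=None):
        """Generate a simple random-dot kinematogram.

        A fraction `coherence` of the dots moves in a common direction
        (in radians); the rest move in random directions. Positions live
        in the unit square and wrap around at the edges.
        """
        rng = np.random.default_rng(seed)
        pos = rng.random((n_dots, 2))                     # initial dot positions
        is_signal = rng.random(n_dots) < coherence        # dots carrying the coherent signal
        noise_dirs = rng.uniform(0, 2 * np.pi, n_dots)    # random direction for each noise dot
        dirs = np.where(is_signal, direction, noise_dirs)
        step = speed * np.column_stack((np.cos(dirs), np.sin(dirs)))

        frames = []
        for _ in range(n_frames):
            pos = (pos + step) % 1.0                      # move dots and wrap at the edges
            frames.append(pos.copy())
        return np.stack(frames)                           # shape: (n_frames, n_dots, 2)

    # Example: a 30%-coherent rightward-moving pattern
    frames = rdk_frames(coherence=0.3, direction=0.0, seed=1)
    print(frames.shape)  # (60, 200, 2)
    ```

    Varying the coherence level controls how many local motion signals have to be integrated before a global direction emerges, which is what makes this kind of stimulus useful for probing integration with and without awareness.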


  • Integration and segregation of objects into conscious vision across saccades

    Humans make several eye movements every second, so a fundamental challenge for conscious vision is to maintain continuity by matching object representations across constantly shifting retinal coordinates. During my PhD I investigated, in collaboration with Nicola de Pisapia, Alessio Fracasso and David Melcher, the mechanisms underlying the stability of conscious vision, using stimuli briefly flashed around the time of saccades. More specifically, we studied how those objects are integrated into, or segregated from, conscious vision depending on the exact timing of the flash relative to saccade onset. We found that during segregation the target was unmasked because it was perceived as displaced from the mask, whereas during integration the postsaccadic stimulus masked the presaccadic target (spatiotopic masking). Our results show that segregation and integration may work together to yield continuity in conscious vision. For further reading, the two papers are here and here.
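
    Analyses of this kind depend on knowing, for each trial, when the stimulus was flashed relative to saccade onset. As a minimal sketch (not the pipeline used in the papers), saccade onset can be estimated from eye-tracker samples with a simple velocity threshold, and trials can then be binned by their flash-to-saccade latency:

    ```python
    import numpy as np

    def saccade_onset(gaze_x, gaze_y, t, velocity_threshold=30.0):
        """Return the time of the first sample whose eye speed exceeds the threshold.

        Assumes gaze coordinates in degrees of visual angle and `t` in seconds;
        the 30 deg/s threshold is an illustrative choice, not the papers' criterion.
        """
        vx = np.gradient(gaze_x, t)
        vy = np.gradient(gaze_y, t)
        speed = np.hypot(vx, vy)                    # eye speed in deg/s
        above = np.flatnonzero(speed > velocity_threshold)
        return t[above[0]] if above.size else None

    def bin_by_latency(flash_times, saccade_times, edges=(-0.15, -0.05, 0.05, 0.15)):
        """Bin each trial by its flash-onset-to-saccade-onset latency (in seconds)."""
        latency = np.asarray(flash_times) - np.asarray(saccade_times)
        return latency, np.digitize(latency, edges)

    # Example with made-up numbers: a flash 80 ms before a saccade at t = 1.2 s
    latency, bins = bin_by_latency([1.12], [1.20])
    print(latency, bins)  # -> [-0.08] [1]
    ```

    Sorting trials into such latency bins is what allows perceptual reports to be compared between conditions in which the flashed target is integrated with, or segregated from, the postsaccadic stimulus.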


  • Target detection during non-restricted scene visual search

    Every day we move our eyes… a lot. Eye movements help us scan the visual world – we move our eyes when we walk down a street, when we search for food at the supermarket, when we talk to other people and when we are at home alone. When cognitive scientists investigate the neural responses associated with high-level cognitive abilities such as memory, attention, language or consciousness, however, they often restrict subjects' eye movements during their experiments. The main reason for this restriction is that eye movements cause large artifacts in fMRI, EEG or MEG recordings: signals that are not related to cognitive processes and that distort the results of the experiments.

    This represents a challenge for cognitive scientists, because the classic experiments in which subjects sit in a chair with their head held in place by a headrest for more than an hour often lack ecological validity. Ideally, cognitive scientists would like to design experiments whose conditions are as similar as possible to real life.

    To address this problem we designed a visual search paradigm that allowed us to investigate the mechanisms of target detection during unconstrained scene search. With this paradigm we were able to record EEG activity while subjects freely moved their eyes, searching for faces in scenes of crowds. The results from this project show that EEG signatures related to cognitive behaviour develop across spatially unconstrained exploration of natural scenes, just as they do when subjects are tested with their eyes at fixation. Further details can be found here.
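
    Analyses like this typically time-lock the EEG to fixation onsets obtained from a co-registered eye tracker (fixation-related potentials). As a minimal sketch, assuming MNE-Python and assuming that fixation onsets on target and distractor faces have already been coded as an events array (the file names, event codes and time window below are placeholders, not the actual pipeline of the study):

    ```python
    import mne
    import numpy as np

    # Load a raw EEG recording (the file name is a placeholder).
    raw = mne.io.read_raw_fif("free_viewing_search-raw.fif", preload=True)
    raw.filter(l_freq=0.1, h_freq=40.0)  # broadband filter for an ERP-style analysis

    # Events in MNE format: one row per event, [sample, 0, event_code].
    # Here we assume fixations on target faces were coded as 1 and
    # fixations on distractors as 2 during eye-tracking co-registration.
    fixation_events = np.load("fixation_events.npy").astype(int)
    event_id = {"fixation/target": 1, "fixation/distractor": 2}

    # Time-lock epochs to fixation onset to obtain fixation-related potentials.
    epochs = mne.Epochs(raw, fixation_events, event_id=event_id,
                        tmin=-0.2, tmax=0.6, baseline=(-0.2, 0.0),
                        preload=True)

    # Contrast fixations on targets vs. distractors, analogous to a target-detection ERP.
    evoked_target = epochs["fixation/target"].average()
    evoked_distractor = epochs["fixation/distractor"].average()
    mne.viz.plot_compare_evokeds({"target": evoked_target,
                                  "distractor": evoked_distractor})
    ```

    In practice, residual ocular artifacts also have to be attenuated (for example with ICA) before such fixation-locked signals can be interpreted.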



People I work/collaborate with

Shenjun Zhong
Research Scientist at Monash University

Javier Kreiner
Friend, above all

Naotsugu Tsuchiya, Associate Professor
Monash University, Australia

Joaquín Navajas, Postdoctoral Researcher
Institute of Cognitive Neuroscience, University College London, UK

Hirokazu Takahashi, Assistant Professor
Research Center for Advanced Science and Technology, University of Tokyo, Japan