Reading letters from the mind’s eye

Visual mental imagery is the quasi-perceptual experience of “seeing in the mind’s eye.” While a strong relationship between imagery and perception is well established at the level of subjective experience, the relationship between their neural representations remains insufficiently understood.

In a recent article, researchers from NESTOR Project 1 at Maastricht University exploit the high spatial resolution of functional magnetic resonance imaging (fMRI) at 7 Tesla to map the retinotopic organization of early visual cortex, and combine this with machine-learning techniques to investigate whether visual imagery of letter shapes preserves the topographic organization of perceived shapes.

Six subjects imagined four different letter shapes, which were then reconstructed from the fMRI (BOLD) signal. These findings may eventually be used in the development of content-based BCI letter-speller systems.
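
As a rough illustration of how such a reconstruction can work, the sketch below trains a regularized linear mapping from voxel activity patterns to stimulus pixels and then applies it to imagery trials. This is a hedged toy example with randomly generated data and made-up dimensions, not the authors’ actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy dimensions; the study's actual trial counts, voxel counts, and image size differ
n_trials, n_voxels, n_pixels = 200, 1500, 10 * 10

rng = np.random.default_rng(0)
X_perception = rng.standard_normal((n_trials, n_voxels))            # voxel patterns from perception runs
Y_letters = rng.integers(0, 2, (n_trials, n_pixels)).astype(float)  # flattened binary letter images

# Train a regularized linear mapping from voxel space to stimulus (pixel) space
decoder = Ridge(alpha=10.0)
decoder.fit(X_perception, Y_letters)

# Apply the mapping to imagery trials to reconstruct the imagined letters
X_imagery = rng.standard_normal((20, n_voxels))
reconstructions = decoder.predict(X_imagery).reshape(-1, 10, 10)
print(reconstructions.shape)   # (20, 10, 10): one reconstructed letter image per imagery trial
```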

Mario Senden, Thomas C. Emmerling, Rick van Hoof, Martin A. Frost & Rainer Goebel. Reconstructing imagined letters from early visual cortex reveals tight topographic correspondence between visual mental imagery and perception. Brain Structure and Function 224, pages 1167–1183 (2019). https://doi.org/10.1007/s00429-019-01828-6

The visual brain divided

The human brain contains many neurons whose measured activity responds differently to specific types of visual input. These neurons can be grouped into retinotopic and category-specific regions, which have been the focus of a large body of functional magnetic resonance imaging (fMRI) research. Studying these regions requires accurate localization of their cortical location, so researchers traditionally perform functional localizer scans to identify them in each individual.

However, it is not always possible to conduct these localizer scans. Researchers from NESTOR Project 1 have recently published a probabilistic map of the visual brain, detailing the functional location and variability of visual regions. This atlas can help identify the loci of visual areas in healthy subjects as well as in populations (e.g., blind people, infants) in which functional localizers cannot be run.
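
As an illustration of how such an atlas can be used in practice, the sketch below thresholds a group-level probability map to define a region of interest. The file names and the 50% threshold are assumptions for illustration and do not reflect the atlas’s actual file naming or recommended settings.

```python
import nibabel as nib
import numpy as np

# Hypothetical file name; the atlas's actual files and naming conventions may differ
prob_img = nib.load("atlas_hV4_probability.nii.gz")   # voxelwise probability of belonging to area hV4
prob = prob_img.get_fdata()

# Define a region of interest by thresholding the probability map
threshold = 0.5                                       # assumed threshold; choose per application
roi_mask = (prob >= threshold).astype(np.uint8)

# Save the binary ROI mask for later analyses (e.g., extracting voxel time courses)
nib.save(nib.Nifti1Image(roi_mask, prob_img.affine), "hV4_roi_mask.nii.gz")
print(f"ROI contains {int(roi_mask.sum())} voxels at p >= {threshold}")
```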

Mona Rosenke, Rick van Hoof, Job van den Hurk, Kalanit Grill-Spector, Rainer Goebel. A Probabilistic Functional Atlas of Human Occipito-Temporal Visual Cortex. Cerebral Cortex, Volume 31, Issue 1, January 2021. https://doi.org/10.1093/cercor/bhaa246

End-to-end optimization of prosthetic vision: How AI algorithms use feedback to optimize the interpretability of phosphene vision

For a visual prosthesis to be useful in daily life, the system relies on image processing to ensure that maximally relevant information is conveyed, e.g. allowing the blind neuroprosthesis user to recognise people and objects. Extraction of the most useful features of a visual scene is a non-trivial task, and the definition of what is ‘useful’ for a user is strongly context-dependent (e.g. navigation, reading, and social interactions are three very different tasks that require different types of information to be conveyed). Despite rapid advancements in deep learning, it is challenging to develop a general, automated preprocessing strategy that is suitable for use in a variety of contexts.

In this recent publication, we present a novel deep learning approach that optimizes the phosphene generation process in an end-to-end fashion. In this approach, both the delivery of stimulation to generate phosphene images (phosphene encoding) and the interpretation of these phosphene images (phosphene decoding) are modelled using deep neural networks. The proposed model includes a highly adjustable simulation module of prosthetic vision. All components are trained in a single loop, with the goal of finding an optimally interpretable phosphene encoding that can then be decoded to recover the original input. In computational validation experiments, we show that this approach can automatically find a task-specific stimulation protocol, which can be tailored to specific constraints, such as stimulation on a sparse subset of electrodes. The approach is highly modular and could be used to dynamically optimize prosthetic vision for everyday tasks and to meet the requirements of the end user.
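
The sketch below illustrates the end-to-end idea in PyTorch: an encoder that produces stimulation amplitudes, a differentiable toy phosphene simulator, and a decoder that reconstructs the input, all trained in a single loop. The layer sizes, the Gaussian-blob simulator, and the training details are simplified assumptions, not the model described in the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 grayscale image to stimulation amplitudes for a grid of electrodes."""
    def __init__(self, n_electrodes=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, n_electrodes), nn.Sigmoid(),  # amplitude per electrode in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class PhospheneSimulator(nn.Module):
    """Toy differentiable simulator: renders each electrode as a Gaussian blob of light."""
    def __init__(self, n_electrodes=1024, size=64, sigma=1.5):
        super().__init__()
        grid = int(n_electrodes ** 0.5)
        ys, xs = torch.meshgrid(torch.linspace(4, size - 4, grid),
                                torch.linspace(4, size - 4, grid), indexing="ij")
        yy, xx = torch.meshgrid(torch.arange(size).float(),
                                torch.arange(size).float(), indexing="ij")
        blobs = torch.exp(-((yy[None] - ys.reshape(-1, 1, 1)) ** 2 +
                            (xx[None] - xs.reshape(-1, 1, 1)) ** 2) / (2 * sigma ** 2))
        self.register_buffer("blobs", blobs)  # (n_electrodes, size, size), one blob per electrode

    def forward(self, stim):
        # Weight each electrode's blob by its stimulation amplitude and sum into one percept image
        return torch.einsum("be,ehw->bhw", stim, self.blobs).unsqueeze(1)

class Decoder(nn.Module):
    """Attempts to reconstruct the original input from the simulated phosphene image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

encoder, simulator, decoder = Encoder(), PhospheneSimulator(), Decoder()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

images = torch.rand(8, 1, 64, 64)                  # stand-in for a batch of training images
for step in range(10):                             # all components trained in a single loop
    stim = encoder(images)                         # phosphene encoding (stimulation pattern)
    percept = simulator(stim)                      # simulated prosthetic vision
    recon = decoder(percept)                       # phosphene decoding
    loss = nn.functional.mse_loss(recon, images)   # reconstruction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```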

Jaap de Ruyter van Steveninck, Umut Güçlü, Richard van Wezel, Marcel van Gerven. End-to-end optimization of prosthetic vision. bioRxiv preprint. https://doi.org/10.1101/2020.12.19.423601

Simulating neuroprosthetic vision for emotion recognition

We developed a mobile simulator of phosphene vision to allow the general public to experience what artificially induced phosphene vision would look like for blind users of a visual prosthesis. This setup allows us to evaluate, compare, and optimize different signal processing algorithms used to generate phosphene vision by carrying out tests on individuals with normal vision. In this demo, we show how intelligent algorithms can improve the quality of prosthetic perception, using an image processing pipeline that allows for accurate recognition of emotional expressions.
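
A minimal sketch of one possible simulation pipeline is shown below: a camera frame is reduced to edges, sampled on a coarse phosphene grid, and rendered as blurred dots. This edge-based variant is an assumption for illustration and is not the demo’s actual implementation, which is tailored to facial expression recognition.

```python
import cv2
import numpy as np

def simulate_phosphenes(frame_gray, grid=32, out_size=480, sigma=5):
    """Render a coarse, dot-based approximation of prosthetic vision for a grayscale frame."""
    edges = cv2.Canny(frame_gray, 50, 150)                      # simple feature extraction step
    sampled = cv2.resize(edges, (grid, grid),                   # sample edges on the phosphene grid
                         interpolation=cv2.INTER_AREA)
    percept = np.zeros((out_size, out_size), dtype=np.float32)
    step = out_size // grid
    for i in range(grid):
        for j in range(grid):
            if sampled[i, j] > 0:                               # "activate" this phosphene
                center = (j * step + step // 2, i * step + step // 2)
                cv2.circle(percept, center, 2, float(sampled[i, j]), -1)
    return cv2.GaussianBlur(percept, (0, 0), sigma)             # soften dots into phosphene-like blobs

# Example with a random test frame; in the demo this would be the live camera feed
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
phosphene_view = simulate_phosphenes(frame)
```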

C. J. M. Bollen, U. Güçlü, R. J. A. van Wezel, M. A. J. van Gerven and Y. Güçlütürk, “Simulating neuroprosthetic vision for emotion recognition,” 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2019, pp. 85-87, https://doi.org/10.1109/ACIIW.2019.8925229

Article on representations of naturalistic stimulus complexity in early and associative visual and auditory cortices

The complexity of sensory stimuli plays an important role in perception and cognition, yet its neural representation is not well understood. In this article, published in Scientific Reports, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. To do this, we carried out encoding and decoding analyses on two fMRI datasets, one visual and one auditory. We found that most early and some associative sensory areas represent the complexity of naturalistic sensory stimuli. For example, the parahippocampal place area, which was previously shown to represent scene features, was found to also represent scene complexity. Similarly, posterior regions of the superior temporal gyrus and superior temporal sulcus, which were previously shown to represent syntactic (language) complexity, were found to also represent music (auditory) complexity. Furthermore, our results suggest that gradients of sensitivity to naturalistic sensory stimulus complexity exist in these areas.
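
The encoding analysis can be illustrated with the toy sketch below, in which a stimulus complexity regressor predicts simulated voxel responses and sensitivity is quantified by prediction performance on held-out timepoints. The data are synthetic and the model is deliberately minimal; the study’s actual features and models are more elaborate.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 600, 100                     # illustrative sizes only

# One regressor per timepoint: the complexity of the naturalistic stimulus shown at that moment
complexity = rng.standard_normal((n_timepoints, 1))

# Synthetic voxel time courses: each voxel tracks complexity to some degree, plus noise
weights = rng.standard_normal((1, n_voxels))
bold = complexity @ weights + rng.standard_normal((n_timepoints, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(complexity, bold, test_size=0.25, random_state=0)

# Fit an encoding model and quantify each voxel's sensitivity by held-out prediction accuracy
model = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X_tr, y_tr)
pred = model.predict(X_te)
voxelwise_r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out correlation across voxels: {np.median(voxelwise_r):.2f}")
```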

Güçlütürk, Y., Güçlü, U., van Gerven, M., and van Lier, R. (2018). Representations of naturalistic stimulus complexity in early and associative visual and auditory cortices. Scientific Reports, 8:3439.

Science article gains widespread international media coverage

Our results on the use of a 1024-channel neuroprosthesis to generate artificial vision, recently published in Science, gained coverage in national and international media, including CNN, NOS, NPO, RTL, Scientific American, New Scientist, Trouw, de Volkskrant, and AD.

The paper gave rise to >300 items in the popular press across more than 51 countries, with a total potential reach of >1.4 billion people. https://pure.knaw.nl/portal/en/clippings/nieuw-hersenimplantaat-kan-blinden-vorm-van-zien-teruggeven

Here are some of the highlights:

AD https://www.ad.nl/binnenland/dankzij-deze-vinding-kunnen-blinden-niet-alleen-stipjes-maar-ook-vormen-zien~ab2a627f/

Trouw https://www.trouw.nl/wetenschap/zien-zonder-ogen-een-nieuwe-studie-wijst-uit-dat-het-kan~bb320ffa/

Volkskrant https://www.volkskrant.nl/wetenschap/zien-zonder-ogen-hersenimplantaat-kan-blinden-mogelijk-deel-van-hun-zicht-teruggeven~b6e083fc/

Tijd voor Max https://www.npostart.nl/tijd-voor-max/04-12-2020/POW_04776966

NPO Start https://www.npostart.nl/nos-journaal/04-12-2020/POW_04508473

NPO Radio 1 https://www.nporadio1.nl/nos-met-het-oog-op-morgen/onderwerpen/68902-2020-12-04-hoe-blinden-weer-zicht-krijgen

NOS Journal https://nos.nl/artikel/2359209-blinde-mensen-kunnen-zicht-mogelijk-deels-terugkrijgen-door-nieuw-implantaat.html

NRC https://www.nrc.nl/nieuws/2020/12/03/hersenimplantaat-laat-apen-lezen-zonder-ogen-met-1024-pixels-a4022524

CNN https://edition.cnn.com/2020/12/03/europe/brain-implant-blind-intl-scli-scn/index.html

Scientific American https://www.scientificamerican.com/article/bionic-eye-tech-learns-its-abcs/

New Scientist https://www.newscientist.com/article/2261853-brain-stimulation-device-lets-monkeys-see-shapes-without-using-eyes/

New Scientist (NL) https://www.newscientist.nl/nieuws/ons-hersenimplantaat-kan-blinden-een-vorm-van-zicht-teruggeven/

NOS Nieuwsradio https://www.nporadio1.nl/nos-radio-1-journaal/uitzendingen/1486664-2020-12-04

RTL nieuws https://www.rtlnieuws.nl/video/uitzendingen/video/5201335/rtl-nieuws-1930-uur

FBR Smart Brief https://www2.smartbrief.com/getLast.action?mode=last&b=FBR

Review of results from Orion study

Scientists have long dreamed of restoring vision in blind individuals by stimulating the visual cortex, bypassing malfunctioning eyes to directly deliver information to higher visual centers (Bosking et al., 2017). Beauchamp et al. (2020) used electrical stimulation of the visual cortex to produce visual percepts in blind human subjects, using innovative current steering and sequential stimulation techniques to create recognisable shapes such as letters of the alphabet. In our review of Beauchamp et al.’s work, ‘Writing to the Mind’s Eye of the Blind,’ published in Cell, we discuss their ground-breaking results, and their implications for the field of visual cortical prosthetics: https://doi.org/10.1016/j.cell.2020.03.014
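
The idea behind sequential (‘dynamic’) stimulation can be sketched as follows: the stroke of a letter is traced as a path through visual space, and at each time step the electrode whose phosphene lies closest to the current point on the path is pulsed. The phosphene locations, letter path, and timing below are hypothetical and do not reflect Beauchamp et al.’s actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
phosphene_xy = rng.uniform(-5, 5, size=(60, 2))   # hypothetical phosphene locations (deg of visual angle)

# Trace the letter "Z" as a sequence of points along its three strokes
corners = np.array([[-3.0, 3.0], [3.0, 3.0], [-3.0, -3.0], [3.0, -3.0]])
path = np.concatenate([np.linspace(corners[i], corners[i + 1], 20) for i in range(3)])

# Build a stimulation schedule: at each time step, pulse the electrode nearest the current path point
schedule = []
for t, point in enumerate(path):
    nearest = int(np.argmin(np.linalg.norm(phosphene_xy - point, axis=1)))
    schedule.append((t * 50, nearest))            # (onset in ms, electrode index); 50 ms per step is assumed

print(schedule[:5])
```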

Review on the future of neurotechnology

Recent advances in neuroscience and technology have made it possible to record from large assemblies of neurons and to decode their activity to extract information. At the same time, available methods to stimulate the brain and influence ongoing processing are also rapidly expanding. These developments pave the way for advanced neurotechnological applications that directly read from, and write to, the human brain. While such technologies are still primarily used in restricted therapeutic contexts, this may change in the future once their performance has improved and they become more widely applicable.

In this review article, ‘Mind Reading and Writing: The Future of Neurotechnology,’ published in Trends in Cognitive Sciences, we provide an overview of methods to interface with the brain, speculate about potential applications, and discuss important issues associated with a neurotechnologically assisted future: https://doi.org/10.1016/j.tics.2018.04.001

Science article on the successful generation of artificial vision

Recent discoveries from NESTOR Project 3 show that newly developed high-resolution implants in the visual cortex make it possible to recognize artificially induced shapes and percepts. The findings were published in Science on 3 December.

When electrical stimulation is delivered to the brain via an implanted electrode, it generates the percept of a dot of light at a particular location in visual space, known as a ‘phosphene.’ Our team developed high-resolution implants consisting of 1024 electrodes and implanted them in the visual cortex of two sighted monkeys. The monkeys successfully recognized shapes and percepts, including lines, moving dots, and letters, using their artificial vision.
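
The mapping from electrodes to percepts can be illustrated with the small sketch below: each electrode is associated with a phosphene location in visual space, and a shape is presented by stimulating the subset of electrodes whose phosphenes fall on that shape. The coordinates here are random placeholders rather than measured receptive-field locations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_electrodes = 1024
phosphene_xy = rng.uniform(-8, 8, size=(n_electrodes, 2))   # placeholder phosphene locations (deg)

def electrodes_for_line(p0, p1, tolerance=0.5):
    """Select electrodes whose phosphene falls within `tolerance` degrees of the segment p0-p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Project each phosphene onto the segment and measure its distance to the nearest point on it
    t = np.clip((phosphene_xy - p0) @ d / (d @ d), 0.0, 1.0)
    nearest = p0 + t[:, None] * d
    dist = np.linalg.norm(phosphene_xy - nearest, axis=1)
    return np.flatnonzero(dist <= tolerance)

# Example: a diagonal line percept, produced by stimulating the selected electrodes together
selected = electrodes_for_line((-5, -5), (5, 5))
print(f"{selected.size} of {n_electrodes} electrodes selected for this line")
```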

This research lays the foundations for a neuroprosthetic device that could allow profoundly blind people to regain functional vision and to recognize objects, navigate in unfamiliar surroundings, and interact more easily in social settings, significantly improving their independence and quality of life.