Summer-long showcase of visual prosthesis and VR simulator at NEMO Museum

The NESTOR consortium is delighted to announce that we are showcasing our work on a visual neuroprosthesis at the NEMO Science Museum this summer (2021), as part of the exhibition, ‘See It Your Way’! https://www.nemosciencemuseum.nl/en/activities-at-nemo/exhibitions/see-it-your-way/

Running from the 10th of July to the 31st of October, this exhibition on the sense of sight focuses on how people have made use of technology to improve vision: from microscopes, telescopes, and imaging techniques based on non-visible waves (such as X-rays and sonar), to futuristic visual aids such as smart AI glasses and brain implants.

Six times a day, NEMO guides give exciting demonstrations of several cutting-edge technologies in the museum auditorium, including the brain implants developed via the NESTOR programme. The guides explain how a visual prosthesis can be used to send information about the visual world directly to the brain, which could one day allow blind people to regain enough functional vision to recognise objects and people in their surroundings.

On display is a mock-up of a 1024-channel visual prosthesis, comprising a pedestal, cables, and electrodes. The real implant (made out of medical-grade titanium and silicon electrodes) was used to send tiny electrical signals to the visual cortex and induce artificial visual percepts in experimental animals.

During each session, the NEMO guide asks a volunteer from the audience to don a pair of VR goggles, and view the world through the NESTOR phosphene simulator application (developed by Radboud University and other NESTOR partners). The image seen by the viewer through the headset provides a vividly immersive simulation of what the world would look like to a blind user of a visual prosthesis, and is projected onto a large screen so that the rest of the audience can share the experience with them.  

Over selected weekends in July and August, members of NESTOR are present at the museum to answer questions from the audience and interact with visitors. On the weekend of the 16th and 17th of July, three of us joined in the demo sessions. We had a blast talking to the kids and their parents and raising awareness of our technology among the general public.

We were treated to a behind-the-scenes peek at how the museum works, from the planning and setting up of an exhibition (logistics, technical testing, signing a loan agreement for the model implant and VR glasses) to the execution itself. We got to know the amazing museum guides, walked around and explored certain ‘hidden’ areas of the museum, and enjoyed the panoramic views of Amsterdam and the water from the NEMO rooftop terrace, which, by the way, is accessible without a ticket.

Come and check out the exhibition! And if you’d like to catch one of the members of the NESTOR consortium, we’ll be back again on the 7th, 28th and 29th of August. Hope to see you there!

Wireless energy transfer demonstration

In the future, blind users of a neuroprosthesis for vision restoration could receive tiny electrical currents via a device implanted in their brain, allowing them to recognise objects and people in their surroundings more easily. In order to deliver these electrical currents, power needs to be sent from external equipment to the implanted device, whether via a cable or a wireless connection.

Tom van Nunen, a PhD student at the Eindhoven University of Technology, investigated how the wireless transfer of energy could be carried out, giving a future user of the system greater freedom of movement. The system consists of two parts: a wireless power transmitter and a wireless power receiver. Tom created a demo system with a hollow, life-size model head made out of glass (to simulate the skin), and a wireless receiver unit installed inside the model head. A wireless transmitter unit located on the outside of the model head is positioned close to the receiver unit, with a layer of glass between them.
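
For intuition, the sketch below estimates how efficiently such a two-coil inductive link can transfer power, using the standard figure of merit for resonant inductive links. The coupling coefficient and coil quality factors are made-up illustrative values, not measurements from Tom's system.

    import math

    def max_link_efficiency(k: float, q_tx: float, q_rx: float) -> float:
        """Maximum achievable efficiency of a two-coil inductive link.

        Uses the standard figure of merit k^2 * Q_tx * Q_rx from textbook
        treatments of resonant inductive power transfer.
        """
        fom = k**2 * q_tx * q_rx
        return fom / (1.0 + math.sqrt(1.0 + fom)) ** 2

    # Hypothetical example values (NOT measured parameters of the TU/e demo):
    # a weak coupling coefficient across a few millimetres of glass/skin,
    # and moderately high coil quality factors.
    k = 0.05            # coupling coefficient between transmitter and receiver coil
    q_tx, q_rx = 150, 100

    eta = max_link_efficiency(k, q_tx, q_rx)
    print(f"Maximum link efficiency: {eta:.1%}")

The efficiency falls off quickly as the coupling coefficient drops, which is why the alignment between the transmitter and receiver units matters in the demo described below.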

When the transmitter and receiver units are closely aligned, power is sent from the transmitter to the receiver, causing LED lights on the receiver to light up! This demo is featured in a promotional video for the departments of Electrical Engineering and Applied Physics, located in the Flux building of the university: https://youtu.be/Qimyilhz8JY?t=140

Reading letters from the mind’s eye

Visual mental imagery is the quasi-perceptual experience of “seeing in the mind’s eye.” While a strong relationship between imagery and perception is well established in terms of subjective experience, the relationship between their neural representations remains insufficiently understood.

In a recent article, researchers from NESTOR Project 1 at Maastricht University exploit the high spatial resolution of functional magnetic resonance imaging (fMRI) at 7 Tesla to uncover the retinotopic organization of early visual cortex, and combine it with machine-learning techniques to investigate whether visual imagery of letter shapes preserves the topographic organization of perceived shapes.

Six subjects imagined four different letter shapes, which could then be reconstructed from the fMRI (BOLD) signal. These findings may eventually be utilized in the development of content-based BCI letter-speller systems.
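
As an illustration of the general decoding idea, the sketch below fits a regularised linear mapping from voxel responses to pixel intensities and applies it to held-out trials. The data are random placeholders and the model is deliberately simplified; it is not the analysis pipeline used in the paper.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Placeholder data: n_trials x n_voxels BOLD response patterns and the
    # corresponding n_trials x n_pixels binary letter images.
    n_trials, n_voxels, n_pixels = 200, 500, 10 * 10
    bold = rng.standard_normal((n_trials, n_voxels))
    letters = (rng.random((n_trials, n_pixels)) > 0.5).astype(float)

    # Fit one regularised linear mapping from voxel space to pixel space
    # on a training set of trials ...
    decoder = Ridge(alpha=1.0)
    decoder.fit(bold[:150], letters[:150])

    # ... and apply it to held-out trials to reconstruct the letter shape
    # from brain activity alone.
    reconstruction = decoder.predict(bold[150:])
    print(reconstruction.shape)  # (50, 100) -> 50 reconstructed 10x10 images

The actual study additionally exploits the retinotopic organization of the voxels, which is what gives the reconstructions their topographic correspondence with perception.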

Mario Senden, Thomas C. Emmerling, Rick van Hoof, Martin A. Frost & Rainer Goebel. Reconstructing imagined letters from early visual cortex reveals tight topographic correspondence between visual mental imagery and perception. Brain Structure and Function 224, pages 1167–1183 (2019). https://doi.org/10.1007/s00429-019-01828-6

The visual brain divided

The human brain contains many neurons whose measured activity responds differently to specific types of visual input. These neurons can be divided into retinotopic and category-specific regions, which have been the focus of a large body of functional magnetic resonance imaging (fMRI) research. Studying these regions requires accurate localization of their cortical location, so researchers traditionally perform functional localizer scans to identify them in each individual.

However, it is not always possible to conduct these localizer scans. Researchers from NESTOR Project 1 have recently published a probabilistic map of the visual brain, detailing the functional location and variability of visual regions. This atlas can help identify the loci of visual areas in healthy subjects as well as in populations (e.g., blind people, infants) in which functional localizers cannot be run.
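
As a sketch of how a probabilistic atlas of this kind is typically used, the snippet below thresholds a hypothetical probability map (stored as a NIfTI volume) to obtain a binary region-of-interest mask that can stand in for a subject-specific localizer. The file name and threshold are illustrative, not taken from the published atlas.

    import nibabel as nib
    import numpy as np

    # Hypothetical file: one probabilistic map (values 0-1 giving, for each
    # voxel, the fraction of subjects in which that voxel belonged to area hV4).
    prob_map = nib.load("prob_atlas_hV4.nii.gz")
    prob = prob_map.get_fdata()

    # Threshold the probability map to obtain a binary region of interest.
    threshold = 0.5
    roi_mask = (prob >= threshold).astype(np.uint8)

    nib.save(nib.Nifti1Image(roi_mask, prob_map.affine), "hV4_roi_mask.nii.gz")
    print(f"ROI contains {int(roi_mask.sum())} voxels at p >= {threshold}")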

Mona Rosenke, Rick van Hoof, Job van den Hurk, Kalanit Grill-Spector, Rainer Goebel. A Probabilistic Functional Atlas of Human Occipito-Temporal Visual Cortex. Cerebral Cortex, Volume 31, Issue 1, January 2021. https://doi.org/10.1093/cercor/bhaa246

End-to-end optimization of prosthetic vision: How AI algorithms use feedback to optimize the interpretability of phosphene vision

For a visual prosthesis to be useful in daily life, the system relies on image processing to ensure that maximally relevant information is conveyed, e.g. allowing the blind neuroprosthesis user to recognise people and objects. Extraction of the most useful features of a visual scene is a non-trivial task, and the definition of what is ‘useful’ for a user is strongly context-dependent (e.g. navigation, reading, and social interactions are three very different tasks that require different types of information to be conveyed). Despite rapid advancements in deep learning, it is challenging to develop a general, automated preprocessing strategy that is suitable for use in a variety of contexts.

In this recent publication, we present a novel deep learning approach that optimizes the phosphene generation process in an end-to-end fashion. In this approach, both the delivery of stimulation to generate phosphene images (phosphene encoding) and the interpretation of these phosphene images (phosphene decoding) are modelled using a deep neural network. The proposed model includes a highly adjustable simulation module of prosthetic vision. All components are trained in a single loop, with the goal of finding an optimally interpretable phosphene encoding which can then be decoded to recover the original input.

In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol, which can be tailored to specific constraints, such as stimulation on a sparse subset of electrodes. This approach is highly modular and could be used to dynamically optimize prosthetic vision for everyday tasks and to meet the requirements of the end user.
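
The sketch below illustrates the general idea with a toy PyTorch implementation: an encoder produces stimulation intensities, a differentiable simulator renders them as Gaussian phosphenes, and a decoder tries to recover the input image, with all trainable components optimized in a single loop. The architectures, simulator, and loss are simplified stand-ins, not the networks from the publication.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps an input image to stimulation intensities for a grid of electrodes."""
        def __init__(self, n_phosphenes=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * 16 * 16, n_phosphenes), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    class PhospheneSimulator(nn.Module):
        """Toy differentiable simulator: renders each stimulation value as a
        Gaussian blob at a fixed location in the visual field."""
        def __init__(self, n_phosphenes=256, size=64):
            super().__init__()
            grid = int(n_phosphenes ** 0.5)
            ys, xs = torch.meshgrid(torch.linspace(0, size - 1, grid),
                                    torch.linspace(0, size - 1, grid), indexing="ij")
            yy, xx = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                                    torch.arange(size, dtype=torch.float32), indexing="ij")
            centres_y = ys.reshape(-1, 1, 1)
            centres_x = xs.reshape(-1, 1, 1)
            blobs = torch.exp(-((yy - centres_y) ** 2 + (xx - centres_x) ** 2) / (2 * 2.0 ** 2))
            self.register_buffer("blobs", blobs)   # (n_phosphenes, size, size)
        def forward(self, stimulation):
            # Weight every blob by its stimulation intensity and sum into one image.
            return torch.einsum("bn,nhw->bhw", stimulation, self.blobs).unsqueeze(1)

    class Decoder(nn.Module):
        """Tries to recover the original image from the simulated phosphene image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    encoder, simulator, decoder = Encoder(), PhospheneSimulator(), Decoder()
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimiser = torch.optim.Adam(params, lr=1e-3)

    images = torch.rand(8, 1, 64, 64)              # placeholder input batch
    for _ in range(10):                            # single training loop
        stimulation = encoder(images)              # phosphene encoding
        phosphenes = simulator(stimulation)        # simulated prosthetic vision
        reconstruction = decoder(phosphenes)       # phosphene decoding
        loss = nn.functional.mse_loss(reconstruction, images)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

In the actual work, the reconstruction loss can be replaced by task-specific objectives, and constraints such as sparsity of stimulation can be added, which is what makes the approach adaptable to different everyday tasks.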

Jaap de Ruyter van Steveninck, Umut Güçlü, Richard van Wezel, Marcel van Gerven. End-to-end optimization of prosthetic vision. bioRxiv preprint. https://doi.org/10.1101/2020.12.19.423601

Simulating neuroprosthetic vision for emotion recognition

We developed a mobile simulator of phosphene vision to allow the general public to experience what artificially induced phosphene vision would look like for blind users of a visual prosthesis. The setup also allows us to evaluate, compare, and optimize the different signal processing algorithms that are used to generate phosphene vision, by carrying out tests on individuals with normal vision. In this demo, we show how intelligent algorithms can improve the quality of prosthetic vision, using an image processing pipeline that allows for accurate recognition of emotional expressions.
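
As a rough illustration of one stage of such a pipeline, the sketch below turns a camera frame into a crude phosphene rendering, using a plain Canny edge detector as a stand-in for smarter, task-specific feature extraction; the emotion-specific processing described in the paper is not reproduced here.

    import cv2
    import numpy as np

    def simulate_phosphenes(frame: np.ndarray, grid: int = 32) -> np.ndarray:
        """Render a crude phosphene view of a camera frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)

        # Sample edge density onto a grid x grid layout of phosphene locations.
        small = cv2.resize(edges, (grid, grid), interpolation=cv2.INTER_AREA)

        h, w = gray.shape
        output = np.zeros((h, w), dtype=np.uint8)
        for i in range(grid):
            for j in range(grid):
                intensity = int(small[i, j])
                if intensity > 0:
                    centre = (int((j + 0.5) * w / grid), int((i + 0.5) * h / grid))
                    cv2.circle(output, centre, radius=4, color=intensity, thickness=-1)
        return cv2.GaussianBlur(output, (9, 9), 0)

    # Example usage with a webcam (press 'q' to quit):
    # cap = cv2.VideoCapture(0)
    # while True:
    #     ok, frame = cap.read()
    #     if not ok or cv2.waitKey(1) & 0xFF == ord("q"):
    #         break
    #     cv2.imshow("phosphene view", simulate_phosphenes(frame))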

C. J. M. Bollen, U. Güçlü, R. J. A. van Wezel, M. A. J. van Gerven and Y. Güçlütürk, “Simulating neuroprosthetic vision for emotion recognition,” 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2019, pp. 85-87, https://doi.org/10.1109/ACIIW.2019.8925229

Article on representations of naturalistic stimulus complexity in early and associative visual and auditory cortices

The complexity of sensory stimuli plays an important role in perception and cognition. However, its neural representation is not well understood. In this article published in Scientific Reports, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. To do this, we carried out encoding and decoding analyses on two fMRI datasets, covering the visual and auditory modalities. We found that most early and some associative sensory areas represent the complexity of naturalistic sensory stimuli. For example, the parahippocampal place area, which was previously shown to represent scene features, was found to also represent scene complexity. Similarly, posterior regions of the superior temporal gyrus and superior temporal sulcus, which were previously shown to represent syntactic (language) complexity, were found to also represent musical (auditory) complexity. Furthermore, our results suggest that gradients of sensitivity to naturalistic sensory stimulus complexity exist in these areas.
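
For intuition, the sketch below shows one simple way such an analysis could be set up: compressed file size serves as a crude proxy for image complexity, and its correlation with a placeholder voxel response stands in for a full encoding model. Neither the complexity measure nor the statistics match those used in the article.

    import io
    import numpy as np
    from PIL import Image

    def image_complexity(img: Image.Image) -> int:
        """Crude complexity proxy: size of the image after lossless PNG compression.
        More cluttered scenes compress less well and thus score higher."""
        buffer = io.BytesIO()
        img.save(buffer, format="PNG")
        return buffer.tell()

    rng = np.random.default_rng(0)

    # Placeholder stimuli and a placeholder voxel response (one value per stimulus).
    stimuli = [Image.fromarray(rng.integers(0, 256, (64, 64), dtype=np.uint8))
               for _ in range(50)]
    voxel_response = rng.standard_normal(50)

    complexity = np.array([image_complexity(s) for s in stimuli], dtype=float)

    # An encoding analysis asks whether stimulus complexity explains part of the
    # voxel's response; the simplest version is a correlation.
    r = np.corrcoef(complexity, voxel_response)[0, 1]
    print(f"Complexity-response correlation: r = {r:.2f}")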

Güçlütürk, Y., Güçlü, U., van Gerven, M., and van Lier, R. (2018). Representations of naturalistic stimulus complexity in early and associative visual and auditory cortices. Scientific Reports 8:3439.

Science article gains widespread international media coverage

Our recently published results in Science on the efficacy of using a 1024-channel neuroprosthesis for the generation of artificial vision gained coverage in national and international media, including CNN, NOS, NPO, RTL, Scientific American, New Scientist, Trouw, de Volkskrant, and AD.

The paper gave rise to >300 items in the popular press across more than 51 countries, with a total potential reach of >1.4 billion people. https://pure.knaw.nl/portal/en/clippings/nieuw-hersenimplantaat-kan-blinden-vorm-van-zien-teruggeven

Here are some of the highlights:

AD https://www.ad.nl/binnenland/dankzij-deze-vinding-kunnen-blinden-niet-alleen-stipjes-maar-ook-vormen-zien~ab2a627f/

Trouw https://www.trouw.nl/wetenschap/zien-zonder-ogen-een-nieuwe-studie-wijst-uit-dat-het-kan~bb320ffa/

Volkskrant https://www.volkskrant.nl/wetenschap/zien-zonder-ogen-hersenimplantaat-kan-blinden-mogelijk-deel-van-hun-zicht-teruggeven~b6e083fc/

Tijd voor Max https://www.npostart.nl/tijd-voor-max/04-12-2020/POW_04776966

NPO Start https://www.npostart.nl/nos-journaal/04-12-2020/POW_04508473

NPO Radio 1 https://www.nporadio1.nl/nos-met-het-oog-op-morgen/onderwerpen/68902-2020-12-04-hoe-blinden-weer-zicht-krijgen

NOS Journal https://nos.nl/artikel/2359209-blinde-mensen-kunnen-zicht-mogelijk-deels-terugkrijgen-door-nieuw-implantaat.html

NRC https://www.nrc.nl/nieuws/2020/12/03/hersenimplantaat-laat-apen-lezen-zonder-ogen-met-1024-pixels-a4022524

CNN https://edition.cnn.com/2020/12/03/europe/brain-implant-blind-intl-scli-scn/index.html

Scientific American https://www.scientificamerican.com/article/bionic-eye-tech-learns-its-abcs/

New Scientist https://www.newscientist.com/article/2261853-brain-stimulation-device-lets-monkeys-see-shapes-without-using-eyes/

New Scientist (NL) https://www.newscientist.nl/nieuws/ons-hersenimplantaat-kan-blinden-een-vorm-van-zicht-teruggeven/

NOS Nieuwsradio https://www.nporadio1.nl/nos-radio-1-journaal/uitzendingen/1486664-2020-12-04

RTL nieuws https://www.rtlnieuws.nl/video/uitzendingen/video/5201335/rtl-nieuws-1930-uur

FBR Smart Brief https://www2.smartbrief.com/getLast.action?mode=last&b=FBR