Article on open-access resting-state dataset in Scientific Data

A newly published dataset and an accompanying Data Descriptor article by Chen, Morales-Gregorio, et al., titled ‘1024-channel electrophysiological recordings in macaque V1 and V4 during resting state,’ are now available for use by researchers around the world.

The dataset, collected at the Netherlands Institute for Neuroscience and prepared together with collaborators at the Juelich Research Center (Germany), consists of electrophysiological data recorded simultaneously from 1024 sites in the visual cortex (areas V1 and V4) of two monkeys during the resting state, along with supporting datasets obtained while the monkeys performed visual tasks.

The data provide a picture of neuronal activity across large regions of the visual cortex at an unprecedented spatial and temporal resolution, with high-density receptive field coverage (>900 recording sites across the V1 and V4 representations of the central 8 degrees of visual angle).

This dataset could allow scientists to derive fundamental new insights into the neuronal activity that underlies the processing of visual information in the brain. Potential applications include correlation analyses, large-scale modelling, and the study of spontaneous activity spreading across the cortex in waves. Part of the dataset has already been used to investigate the neural correlates of the BOLD signal obtained via non-invasive imaging, yielding a recent publication in eLife that compares population receptive field estimates obtained with our multi-channel electrophysiology data against fMRI-based BOLD activity (Klink et al., 2021).

Additionally, the dataset could be used as teaching material and serve as a template for future publications of large electrophysiology datasets, providing standardized methods and tools for the description, preparation, and organization of both data and metadata, contributing to the era of open data sharing and collaboration.

The dataset has been uploaded to the open-access data-sharing platform G-Node Infrastructure (GIN). It adheres to the FAIR principles, using common standards to ensure interoperability, while providing detailed metadata for reproducibility. All metadata are organized into a unified hierarchical structure using the open metadata markup language (odML), a human- and machine-readable file format for reproducible metadata management in electrophysiology. The dataset has been published under an open-access Creative Commons Attribution (CC-BY) license for data and metadata, and a BSD-3-Clause license for the software.

The dataset is accompanied by a Data Descriptor article in Scientific Data (DOI: 10.1038/s41597-022-01180-1). The article includes a thorough description of the scientific insights that could be obtained from the data, data formats, and methods of data acquisition and processing, with sections on Data Records (describing data usage and file formats); Technical Validation (to validate the data for usability and correctness, using cutting-edge methods to identify artifacts); and Code Availability (describing the scripts used for data collection, processing and analysis).

To facilitate interoperability and accessibility, scripts for data processing and analysis are provided in both MATLAB and Python, giving researchers easy access regardless of whether they prefer open-source or proprietary programming languages.

This project was funded by the Dutch Research Council (NWO), the European Union FP7, the European Union Horizon 2020 Framework Programme for Research and Innovation, the European Union Horizon 2020 Future and Emerging Technologies, and the German Research Foundation (Deutsche Forschungsgemeinschaft).

Article in Journal of Vision on ‘Real-world indoor mobility with simulated prosthetic vision’

Researchers from NESTOR Project 2 have published their latest findings on the benefits and feasibility of using software algorithms to process and generate artificial vision. The study, by de Ruyter van Steveninck et al., published in the Journal of Vision, describes results obtained using contour-based scene simplification at different phosphene resolutions in a simulation experiment with sighted participants.

They explored both the theoretically attainable benefits of strict scene simplification in an indoor environment (by controlling the complexity of the surrounding environment) and the practical results attained using two methods: a deep-learning-based surface boundary detection implementation and traditional edge detection.

They found that scene simplification requires a careful tradeoff between the informativeness and the interpretability of artificially generated visual percepts, which may depend on whether a larger or smaller number of implanted electrodes is used.

‘Prosthesis restores some vision in blind person,’ article in Het Parool & New Scientist

The Dutch research programme, NESTOR, is developing a prosthesis for the blind that makes contours of objects visible in the form of dots. ‘In this way we are replacing the eyes with a camera.’

A consortium of scientists from Amsterdam, Nijmegen, Maastricht and Eindhoven has developed an app for a pair of virtual-reality goggles that makes the contours of people or objects visible by means of dots. This simulates the vision that blind people stand to gain in the future with the help of a neuroprosthesis.

Jaap de Ruyter van Steveninck is a PhD student at the Donders Institute for Brain, Cognition and Behavior at Radboud University and is investigating how the prosthesis can help blind people. ‘I can now walk a bit myself with those glasses, but it requires training to be able to do so without bumping into things.’

Our article featured on the cover of JCI: ‘Evoking complex visual perceptions in blind people’

Our recently published article (Fernández et al., 2021) reports the safety and efficacy of implanting an intracortical microelectrode array in a blind person, suggesting the potential for this approach to restore functional vision.

To our delight, the article was highlighted on the December 1st cover of the journal, due to the extensive coverage received since publication in ‘In Press Preview’ format. Read the original article here, as well as an accompanying commentary by Beauchamp et al. in the Journal of Clinical Investigation.

BACKGROUND. A long-held goal of vision therapy is to transfer information directly to the visual cortex of blind individuals, thereby restoring a rudimentary form of sight. However, no clinically available cortical visual prosthesis yet exists.

METHODS. We implanted an intracortical microelectrode array consisting of 96 electrodes in the visual cortex of a 57-year-old person with complete blindness for a 6-month period. We measured thresholds and the characteristics of the visual percepts elicited by intracortical microstimulation.

RESULTS. Implantation and subsequent explantation of intracortical microelectrodes were carried out without complications. The mean stimulation threshold for single electrodes was 66.8 ± 36.5 μA. We consistently obtained high-quality recordings from visually deprived neurons and the stimulation parameters remained stable over time. Simultaneous stimulation via multiple electrodes was associated with a significant reduction in thresholds (p < 0.001, ANOVA) and evoked discriminable phosphene percepts, allowing the blind participant to identify some letters and recognize object boundaries.

CONCLUSIONS. Our results demonstrate the safety and efficacy of chronic intracortical microstimulation via a large number of electrodes in human visual cortex, showing its high potential for restoring functional vision in the blind.

Scientists enable a blind woman to see simple shapes using a brain implant

Newly published research details how a team of scientists from the University Miguel Hernández (Spain), the Netherlands Institute of Neuroscience (Netherlands) and the John A. Moran Eye Center at the University of Utah (USA) successfully created a form of artificial vision for a blind woman using a brain implant.

In the article, “Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex,” published in The Journal of Clinical Investigation, Eduardo Fernández, MD, PhD, of the University Miguel Hernández, details how an array of penetrating electrodes produced a simple form of vision for a 58-year-old blind volunteer. The team conducted a series of experiments with the volunteer in their laboratory in Elche, Spain. The results represent a leap forward for scientists hoping to create a visual brain prosthesis to increase the independence of the blind.


A neurosurgeon implanted a microelectrode array composed of 100 microneedles into the visual cortex of the blind woman to both record from and stimulate neurons located close to the electrodes. She wore eyeglasses equipped with a miniature video camera; specialized software encoded the visual data collected by the camera and sent it to electrodes located in the brain. The array then stimulated the surrounding neurons to produce white points of light known as ‘phosphenes’ to create an image.

The blind woman was a former science teacher and had been completely blind for 16 years at the time of the study. She had no complications from the surgery, and researchers determined that the implant did not impair or negatively affect brain function. With the help of the implant, she was able to identify lines, shapes and simple letters evoked by different patterns of stimulation. To assist her in practicing with the prosthesis, researchers created a video game with a character from the popular television show The Simpsons. Due to her extensive involvement and insight, she is also co-author on the article.

“These results are very exciting because they demonstrate both safety and efficacy, and could help to achieve a long-held dream of many scientists, which is to transfer information from the outside world directly to the visual cortex of blind individuals, thereby restoring a rudimentary form of sight,” said Prof. Eduardo Fernández. He added that “although these preliminary results are very encouraging, we should be aware that there are still a number of important unanswered questions, and that many problems have to be solved before a cortical visual prosthesis can be considered a viable clinical therapy.”

“This new study provides proof-of-principle and demonstrates that our previous findings from monkey experiments can be translated to humans,” said Prof. P. Roelfsema, a co-author on the study. “This work is likely to become a milestone for the development of new technologies that could transform the treatment of blindness.”

“One goal of this research is to give a blind person more mobility,” said Prof. R. A. Normann, also a co-author on the study. “It could allow them to identify a person, doorways, or cars. It could increase independence and safety. That’s what we’re working toward.”

The research team hopes that the next set of experiments will use a more sophisticated image-encoding system, capable of stimulating more electrodes simultaneously and of eliciting more complex visual images.

Experience what it is like to be blind

How do you navigate a city when you are unable to see anything? In the game The Blind Experience you are solely dependent on your hearing. You will experience what more than 70,000 people in The Netherlands – and 40 million people worldwide – experience daily, in their homes, on the street, in their lives.

The aim of the game is to navigate through a virtual city, in search of the laboratory of Professor Pieter Roelfsema at the Netherlands Institute for Neuroscience. On your way, you will encounter many different obstacles.

You will also get to meet Hein Noortman, the businessman who lost his sight in a climbing accident and is determined to raise funding for the development of a visual prosthesis for restoration of vision in the blind.

Hein Noortman and Netherlands Institute for Neuroscience kick off fundraising campaign for development of visual prosthesis

Donate and give sight to the blind

Hein Noortman knows what it is like not to be able to see. An experienced climber, he lost his sight due to a tragic climbing accident, when he fell from a height of 15 metres, breaking almost every bone in his body. Against all odds, he survived the fall. After spending five weeks in a coma, he finally awoke, unable to see anything.

But Hein refused to give up. He embarked on a journey to relearn everything, from swallowing to walking, talking and moving. His body recovered, for the most part, except for his sight.

Giving blind people the opportunity to see again: that is the ambition of Professor Pieter Roelfsema and the team at the Netherlands Institute for Neuroscience. We are developing a brain implant in order to restore functional vision to blind people.

To facilitate this groundbreaking research, we need 6 million euros in funding. Will you help? Donate now and help give the blind their sight back!

Summer-long showcase of visual prosthesis and VR simulator at NEMO Museum

The NESTOR consortium is delighted to announce that we are showcasing our work on a visual neuroprosthesis at the NEMO Science Museum this summer (2021), as part of the exhibition, ‘See It Your Way’!

Running from the 10th of July to the 31st of October, this exhibition on the sense of sight focuses on how people have used technology to improve vision, from microscopes, telescopes and imaging techniques based on signals invisible to the eye (such as X-rays and sonar) to futuristic visual aids such as smart AI glasses and brain implants.

Six times a day, NEMO guides give exciting demonstrations of several cutting-edge technologies in the museum auditorium, including the brain implants developed via the NESTOR programme. The guides explain how a visual prosthesis can be used to send information about the visual world directly to the brain, which could one day allow blind people to regain enough functional vision to recognise objects and people in their surroundings.

On display is a mock-up of a 1024-channel visual prosthesis, comprising a pedestal, cables, and electrodes. The real implant (made out of medical-grade titanium and silicon electrodes) was used to send tiny electrical signals to the visual cortex and induce artificial visual percepts in experimental animals.

During each session, the NEMO guide asks a volunteer from the audience to don a pair of VR goggles, and view the world through the NESTOR phosphene simulator application (developed by Radboud University and other NESTOR partners). The image seen by the viewer through the headset provides a vividly immersive simulation of what the world would look like to a blind user of a visual prosthesis, and is projected onto a large screen so that the rest of the audience can share the experience with them.  

Over selected weekends in July and August, members of NESTOR are present at the museum to answer questions from the audience and interact with visitors. On the weekend of the 16th and 17th of July, three of us joined in the demo sessions. We had a blast talking to the kids and their parents, and spreading awareness of our technology among the general public.

We were treated to a behind-the-scenes peek into how the museum works, from the planning and setting up of an exhibition (logistics, technical testing, signing a loan agreement for the model implant and VR glasses) to the execution itself. We got to know the amazing museum guides, walked around and explored some ‘hidden’ areas of the museum, and enjoyed the panoramic views of Amsterdam and the water from the NEMO rooftop terrace, which, by the way, is accessible without a ticket.

Come and check out the exhibition! And if you’d like to catch one of the members of the NESTOR consortium, we’ll be back again on the 7th, 28th and 29th of August. Hope to see you there!

Wireless energy transfer demonstration

In the future, blind users of a neuroprosthesis for vision restoration could receive tiny electrical currents via a device implanted in their brain, helping them to recognise objects and people in their surroundings more easily. In order to deliver these electrical currents, power needs to be sent from external equipment to the implantable device, whether via a cable or a wireless connection.

Tom van Nunen, a PhD student at the Eindhoven University of Technology, investigated how wireless energy transfer could be carried out, providing a future user of the system with greater freedom of movement. The system consists of two parts: a wireless power transmitter and a wireless power receiver. Tom created a demo system with a hollow, life-size model head made of glass (to simulate the skin), with a wireless receiver unit installed inside the model head. A wireless transmitter unit on the outside of the model head is positioned close to the receiver unit, with a layer of glass between them.

When the transmitter and receiver units are closely aligned, power is sent from the transmitter to the receiver, causing LED lights on the receiver to light up! This demo is featured in a promotional video for the departments of Electrical Engineering and Applied Physics, located in the Flux building of the university:
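
The need for close alignment reflects the physics of inductive links: achievable efficiency is governed by the coupling coefficient between the coils and their quality factors. As a rough illustration, the sketch below evaluates the classic maximum-efficiency formula for a resonant inductive link. The coil parameters are assumed for the example only and are not taken from the NESTOR system.

```python
import math

# Illustrative, assumed parameters -- not the actual NESTOR link values.
k = 0.1          # magnetic coupling coefficient between the two coils
q_tx = 100.0     # loaded quality factor of the transmitter coil
q_rx = 50.0      # loaded quality factor of the receiver coil

# Classic figure of merit and maximum achievable efficiency for a
# resonant inductive power link (both coils tuned to resonance).
fom = k**2 * q_tx * q_rx
eta_max = fom / (1.0 + math.sqrt(1.0 + fom))**2

print(f"figure of merit = {fom:.0f}, max efficiency = {eta_max:.1%}")
```

Because k falls off steeply with distance and misalignment, small displacements of the external transmitter rapidly reduce the deliverable power, which is why the demo works only when the two units are closely aligned.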

Reading letters from the mind’s eye

Visual mental imagery is the quasi-perceptual experience of “seeing in the mind’s eye.” While it has been well established that there is a strong relationship between imagery and perception (in terms of subjective experience), in terms of neural representations, this relationship remains insufficiently understood.

In a recent article, researchers from NESTOR Project 1 at Maastricht University exploit the high spatial resolution of functional magnetic resonance imaging (fMRI) at 7 Tesla to uncover the retinotopic organization of early visual cortex, combining it with machine-learning techniques to investigate whether visual imagery of letter shapes preserves the topographic organization of perceived shapes.

Six subjects imagined four different letter shapes, which were then reconstructed from the fMRI (BOLD) signal. These findings may eventually be used in the development of content-based BCI letter-speller systems.

Mario Senden, Thomas C. Emmerling, Rick van Hoof, Martin A. Frost & Rainer Goebel. Reconstructing imagined letters from early visual cortex reveals tight topographic correspondence between visual mental imagery and perception. Brain Structure and Function 224, pages 1167–1183 (2019).
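
The basic decoding idea can be conveyed with a deliberately simplified toy model. The sketch below uses purely synthetic data; the voxel count, noise level, and nearest-centroid classifier are illustrative assumptions and not the methods of the paper, which reconstructed letter images rather than classifying them.

```python
import numpy as np

rng = np.random.default_rng(42)
n_vox, n_train = 100, 40          # toy voxel count and training trials per letter
letters = ["H", "T", "S", "C"]

# Synthetic class "templates": each imagined letter evokes its own voxel pattern.
templates = {ltr: rng.normal(size=n_vox) for ltr in letters}

def simulate_trials(ltr, n):
    """Noisy single-trial activity patterns for one imagined letter."""
    return templates[ltr] + rng.normal(scale=1.5, size=(n, n_vox))

# "Train": estimate a centroid pattern per letter from noisy trials.
centroids = {ltr: simulate_trials(ltr, n_train).mean(axis=0) for ltr in letters}

def decode(pattern):
    """Nearest-centroid decoding by pattern correlation."""
    return max(letters, key=lambda l: np.corrcoef(pattern, centroids[l])[0, 1])

# Evaluate on held-out synthetic trials.
n_test = 25
correct = sum(decode(t) == ltr
              for ltr in letters
              for t in simulate_trials(ltr, n_test))
accuracy = correct / (len(letters) * n_test)
```

With well-separated templates the classifier performs far above the 25% chance level, mirroring (in toy form) how distinct topographic activity patterns make imagined letters decodable.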

The visual brain divided

The human brain contains many neurons whose measured activity responds differently to specific types of visual input. These neurons can be divided into retinotopic and category-specific regions, which have been the focus of a large body of functional magnetic resonance imaging (fMRI) research. Studying these regions requires accurate localization of their cortical position, so researchers traditionally perform functional localizer scans to identify them in each individual.

However, it is not always possible to conduct these localizer scans. Researchers from NESTOR Project 1 have recently published a probabilistic map of the visual brain, detailing the functional location and variability of visual regions. This atlas can help identify the loci of visual areas in healthy subjects as well as populations (e.g., blind people, infants) in which functional localizers cannot be run.

A Probabilistic Functional Atlas of Human Occipito-Temporal Visual Cortex. Mona Rosenke, Rick van Hoof, Job van den Hurk, Kalanit Grill-Spector, Rainer Goebel. Cerebral Cortex, Volume 31, Issue 1, January 2021

End-to-end optimization of prosthetic vision: How AI algorithms use feedback to optimize the interpretability of phosphene vision

For a visual prosthesis to be useful in daily life, the system relies on image processing to ensure that maximally relevant information is conveyed, e.g. allowing the blind neuroprosthesis user to recognise people and objects. Extracting the most useful features of a visual scene is a non-trivial task, and the definition of what is ‘useful’ for a user is strongly context-dependent (e.g. navigation, reading, and social interaction are very different tasks that require different types of information to be conveyed). Despite rapid advances in deep learning, it is challenging to develop a general, automated preprocessing strategy that is suitable for use in a variety of contexts.

In this recent publication, we present a novel deep learning approach that optimizes the phosphene generation process in an end-to-end fashion. Both the delivery of stimulation to generate phosphene images (phosphene encoding) and the interpretation of these phosphene images (phosphene decoding) are modelled using a deep neural network, and the model includes a highly adjustable simulation module of prosthetic vision. All components are trained in a single loop, with the goal of finding an optimally interpretable phosphene encoding that can then be decoded to recover the original input. In computational validation experiments, we show that this approach automatically finds a task-specific stimulation protocol, which can be tailored to specific constraints, such as stimulation on a sparse subset of electrodes. The approach is highly modular and could be used to dynamically optimize prosthetic vision for everyday tasks and to meet the requirements of the end user.

Jaap de Ruyter van Steveninck, Umut Güçlü, Richard van Wezel, Marcel van Gerven. doi:
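
The end-to-end idea can be sketched in miniature: jointly train an encoder (camera input to stimulation amplitudes) and a decoder (simulated phosphene image to reconstruction) through a fixed, differentiable phosphene simulator. The toy below uses linear maps and plain gradient descent in NumPy instead of the deep networks of the paper; the sizes and the block-shaped phosphene layout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_elec, n_samp = 64, 16, 200   # toy sizes, not the real system

# Fixed, differentiable "simulator": electrode e renders a phosphene covering
# a small block of pixels (columns of S are the phosphene shapes).
S = np.zeros((n_pix, n_elec))
for e in range(n_elec):
    S[4 * e:4 * e + 4, e] = 1.0

E = rng.normal(0.0, 0.1, (n_elec, n_pix))   # encoder: image -> stimulation
D = rng.normal(0.0, 0.1, (n_pix, n_pix))    # decoder: phosphenes -> image
X = rng.normal(size=(n_pix, n_samp))        # toy "camera" inputs (one per column)

losses, lr = [], 5e-3
for _ in range(400):
    P = S @ (E @ X)                  # simulated phosphene images
    R = D @ P - X                    # reconstruction residual
    losses.append(float(np.mean(R**2)))
    gD = 2.0 * (R @ P.T) / n_samp                # gradient of the loss w.r.t. D
    gE = 2.0 * (S.T @ (D.T @ R) @ X.T) / n_samp  # ... and w.r.t. E
    D -= lr * gD                     # single loop updates encoder and decoder
    E -= lr * gE                     # jointly, through the fixed simulator
```

In the actual publication both mappings are deep networks and the simulator models realistic phosphene properties; the point here is only the single training loop through a fixed simulator, which drives the encoder toward stimulation patterns whose phosphenes remain decodable.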

Simulating neuroprosthetic vision for emotion recognition

We developed a mobile simulator of phosphene vision to allow the general public to experience what artificially induced phosphene vision would look like for blind users of a visual prosthesis. This setup allows us to evaluate, compare, and optimize the different signal processing algorithms used to generate phosphene vision by carrying out tests on individuals with normal vision. In this demo, we show how intelligent algorithms can improve the quality of prosthetic vision, using an image processing pipeline that allows for accurate recognition of emotional expressions.

C. J. M. Bollen, U. Güçlü, R. J. A. van Wezel, M. A. J. van Gerven and Y. Güçlütürk, “Simulating neuroprosthetic vision for emotion recognition,” 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2019, pp. 85-87.

Article on representations of naturalistic stimulus complexity in early and associative visual and auditory cortices

The complexity of sensory stimuli has an important role in perception and cognition. However, its neural representation is not well understood. In this article published in Scientific Reports, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. To do this, we carried out data encoding and decoding in two fMRI datasets with visual and auditory modalities. We found that most early and some associative sensory areas represent the complexity of naturalistic sensory stimuli. For example, the parahippocampal place area, which was previously shown to represent scene features, was found to also represent scene complexity. Similarly, posterior regions of superior temporal gyrus and superior temporal sulcus, which were previously shown to represent syntactic (language) complexity, were found to also represent music (auditory) complexity. Furthermore, our results suggest that gradients of sensitivity to naturalistic sensory stimulus complexity exist in these areas.

Güçlütürk, Y., Güçlü, U., van Gerven, M., and van Lier, R. (2018). Representations of naturalistic stimulus complexity in early and associative visual and auditory cortices. Scientific Reports 8:3439.
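
The encoding/decoding logic behind such analyses can be illustrated with a minimal synthetic example. The linear model, scalar complexity feature, and noise level below are assumptions made for illustration and are not the actual analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy encoding model: a voxel's response depends linearly on a scalar
# stimulus-complexity feature (synthetic data, illustrative only).
n_stim = 120
complexity = rng.uniform(0.0, 1.0, n_stim)     # per-stimulus complexity
true_weight = 2.0
voxel = true_weight * complexity + rng.normal(scale=0.2, size=n_stim)

# Encoding: fit the weight (plus intercept) with ridge regression, closed form.
X = np.column_stack([complexity, np.ones(n_stim)])
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ voxel)

def decode_complexity(response):
    """Decoding inverts the fitted encoding model: response -> complexity."""
    return (response - w[1]) / w[0]
```

The same two directions of analysis (predicting responses from complexity, and recovering complexity from responses) are what, scaled up to many voxels and richer features, reveal gradients of complexity sensitivity across sensory areas.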

Science article gains widespread international media coverage

Our recently published results in Science on the efficacy of using a 1024-channel neuroprosthesis for the generation of artificial vision gained coverage in national and international media, including CNN, NOS, NPO, RTL, Scientific American, New Scientist, Trouw, de Volkskrant, and AD.

The paper gave rise to >300 items in the popular press across more than 51 countries, with a total potential reach of >1.4 billion people.

Here are some of the highlights:




Tijd voor Max

NPO Start

NPO Radio 1

NOS Journal



Scientific American

New Scientist

New Scientist (NL)

NOS Nieuwsradio

RTL nieuws

FBR Smart Brief

Review of results from Orion study

Scientists have long dreamed of restoring vision in blind individuals by stimulating the visual cortex, bypassing malfunctioning eyes to directly deliver information to higher visual centers (Bosking et al., 2017). Beauchamp et al. (2020) used electrical stimulation of the visual cortex to produce visual percepts in blind human subjects, using innovative current steering and sequential stimulation techniques to create recognisable shapes such as letters of the alphabet. In our review of Beauchamp et al.’s work, ‘Writing to the Mind’s Eye of the Blind,’ published in Cell, we discuss their ground-breaking results, and their implications for the field of visual cortical prosthetics:

Review on the future of neurotechnology

Recent advances in neuroscience and technology have made it possible to record from large assemblies of neurons and to decode their activity to extract information. At the same time, available methods to stimulate the brain and influence ongoing processing are also rapidly expanding. These developments pave the way for advanced neurotechnological applications that directly read from, and write to, the human brain. While such technologies are still primarily used in restricted therapeutic contexts, this may change in the future once their performance has improved and they become more widely applicable.

In this review article, ‘Mind Reading and Writing: The Future of Neurotechnology,’ published in Trends in Cognitive Sciences, we provide an overview of methods to interface with the brain, speculate about potential applications, and discuss important issues associated with a neurotechnologically assisted future:

Science article on the successful generation of artificial vision

Recent discoveries from NESTOR Project 3 show that newly developed high-resolution implants in the visual cortex make it possible to recognize artificially induced shapes and percepts. The findings were published in Science on 3 December.

When electrical stimulation is delivered to the brain via an implanted electrode, it generates the percept of a dot of light at a particular location in visual space, known as a ‘phosphene.’ Our team developed high-resolution implants consisting of 1024 electrodes and implanted them in the visual cortex of two sighted monkeys. The monkeys successfully recognised shapes and percepts, including lines, moving dots, and letters, using their artificial vision. 

This research lays the foundations for a neuroprosthetic device that could allow profoundly blind people to regain functional vision and to recognize objects, navigate in unfamiliar surroundings, and interact more easily in social settings, significantly improving their independence and quality of life.



NESTOR annual meeting in Maastricht

Our consortium members convened in Maastricht for our annual meeting on the 23rd of May, 2019, to share exciting updates from both the NESTOR programme and the field of visual cortical prostheses in general. The needs of blind users played a prominent role in the programme, with valuable insights coming from our clinical consultant, Jens Naumann, who was recently interviewed by the Volkskrant. In addition, special guest lectures showcased promising new developments internationally, including initiatives at the Illinois Institute of Technology and the Baylor College of Medicine. Finally, we enjoyed a tour of the ultra-high-field 7T and 9.4T MRI scanners at Maastricht University's Brain Imaging Centre. An energizing and thoroughly inspiring gathering indeed!

NESTOR annual meeting in Nijmegen

On the 31st of May, 2018, members of the NESTOR consortium gathered at De Villa in the city of Nijmegen, the Netherlands, for our annual meeting. Highlights of the day included presentations from researchers and students, and round-table discussions with our industry partners and representatives of organizations for the blind and visually impaired. Also in attendance were our clinical and neurosurgical advisors, who helped to shape our goals and plans for the future. Finally, we were taken on an enriching tour of the muZIEum in Nijmegen, an immersive museum that allows visitors to experience the world from the perspective of a blind or visually impaired person. Many thanks to the organizers and all attendees for making this event a highly meaningful and successful one!

See you at SfN!

We will be showcasing highlights of our recent research at the Blackrock Microsystems booth at the Society for Neuroscience meeting in Washington, DC, on the 12th of November, 2017.