An exciting aspect of neuroscience is developing and testing hypotheses via experimentation. However, due to logistical and financial hurdles, the experiment and discovery component of neuroscience is generally lacking in classroom and outreach settings. To address this issue, here we introduce RetINaBox: a low-cost, open-source electronic visual system simulator that provides users with a hands-on tool to discover how the visual system builds feature detectors. RetINaBox features an LED array for generating visual stimuli and a photodiode array that acts as a mosaic of model photoreceptors. Custom software on a Raspberry Pi computer reads out responses from the model photoreceptors and lets users control the polarity and delay of the signal transfer from model photoreceptors to model retinal ganglion cells. Interactive lesson plans are provided, guiding users to discover different types of visual feature detectors, including ON/OFF, center-surround, orientation-selective, and direction-selective receptive fields, as well as their underlying circuit computations.
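To make the circuit idea concrete, here is a minimal sketch of the kind of computation RetINaBox lets users explore. This is not the released RetINaBox software; the array sizes, function names, and stimulus are illustrative. A model retinal ganglion cell (RGC) sums photoreceptor signals, each with a user-chosen polarity and delay, and graded delays alone are enough to make it direction selective: a bar moving in the preferred direction makes the delayed inputs arrive in synchrony.

import numpy as np

N_PHOTORECEPTORS = 8
N_TIMESTEPS = 24

def moving_bar(direction):
    """Binary stimulus: a bright bar sweeping across the photoreceptor array."""
    stim = np.zeros((N_TIMESTEPS, N_PHOTORECEPTORS))
    for t in range(N_PHOTORECEPTORS):
        pos = t if direction == "right" else N_PHOTORECEPTORS - 1 - t
        stim[t, pos] = 1.0
    return stim

def rgc_response(stim, polarity, delay):
    """Sum each photoreceptor's signal after shifting it by its delay.

    polarity: +1 (ON) or -1 (OFF) for each photoreceptor.
    delay:    integer delay, in timesteps, for each photoreceptor.
    """
    total = np.zeros(N_TIMESTEPS)
    for i in range(N_PHOTORECEPTORS):
        shifted = np.roll(stim[:, i], delay[i])
        shifted[: delay[i]] = 0.0          # no signal before stimulus onset
        total += polarity[i] * shifted
    return np.maximum(total, 0.0)          # rectified RGC output

# ON polarity everywhere; delays decrease left to right, so a rightward
# bar's inputs pile up at one moment while a leftward bar's spread out.
polarity = np.ones(N_PHOTORECEPTORS)
delay = np.arange(N_PHOTORECEPTORS)[::-1]

for direction in ("right", "left"):
    peak = rgc_response(moving_bar(direction), polarity, delay).max()
    print(f"{direction:>5}ward bar -> peak RGC response {peak:.0f}")

Running the sketch prints a large peak for the rightward bar (all delayed inputs coincide) and a small one for the leftward bar, mirroring the direction-selective receptive fields the lesson plans build toward.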
Understanding computations in the visual system requires a characterization of the distinct feature preferences of neurons in different visual cortical areas. However, we know little about how feature preferences of neurons within a given area relate to that area’s role within the global organization of visual cortex. To address this, we recorded from thousands of neurons across six visual cortical areas in mouse and leveraged generative AI methods combined with closed-loop neuronal recordings to identify each neuron’s visual feature preference. First, we discovered that the mouse’s visual system is globally organized to encode features in a manner invariant to the types of image transformations induced by self-motion. Second, we found differences in the visual feature preferences of each area and that these differences generalized across animals. Finally, we observed that a given area’s collection of preferred stimuli (‘own-stimuli’) drive neurons from the same area more effectively through their dynamic range compared to preferred stimuli from other areas (‘other-stimuli’). As a result, feature preferences of neurons within an area are organized to maximally encode differences among own-stimuli while remaining insensitive to differences among other-stimuli. These results reveal how visual areas work together to efficiently encode information about the external world.
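As a toy illustration of the own-stimuli versus other-stimuli comparison (synthetic data; this is not the paper’s analysis pipeline), one can contrast how much of each neuron’s dynamic range the two stimulus sets span:

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 100, 50

# Hypothetical responses: own-stimuli are assumed here to spread each
# neuron's responses widely, while other-stimuli cluster near a similar
# moderate level, as in the reported effect.
resp_own = rng.uniform(0.0, 1.0, size=(n_neurons, n_stimuli))
resp_other = rng.uniform(0.4, 0.6, size=(n_neurons, n_stimuli))

def dynamic_range_used(responses):
    """Per-neuron response spread (max - min), averaged over neurons."""
    return float(np.mean(responses.max(axis=1) - responses.min(axis=1)))

print(f"own-stimuli range used:   {dynamic_range_used(resp_own):.2f}")
print(f"other-stimuli range used: {dynamic_range_used(resp_other):.2f}")

Under these assumptions, own-stimuli span nearly the full dynamic range while other-stimuli span only a narrow band, which is the sense in which an area’s neurons maximally encode differences among own-stimuli but stay insensitive to differences among other-stimuli.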