
Erica Cianfarano

PhD - McGill University
Supervisor
Research Topics
Computational Neuroscience

Publications

RetINaBox: A Hands-On Learning Tool for Experimental Neuroscience
Brune Bettler
Flavia Arias Armas
Vanessa Bordonaro
Megan Q. Liu
Mingyu Wan
Aude Villemain
Blake A. Richards
Stuart Trenholm
An exciting aspect of neuroscience is developing and testing hypotheses via experimentation. However, due to logistical and financial hurdles, the experiment and discovery component of neuroscience is generally lacking in classroom and outreach settings. To address this issue, here we introduce RetINaBox: a low-cost, open-source electronic visual system simulator that provides users with a hands-on tool to discover how the visual system builds feature detectors. RetINaBox includes an LED array for generating visual stimuli and photodiodes that act as an array of model photoreceptors. Custom software on a Raspberry Pi computer reads out responses from model photoreceptors and allows users to control the polarity and delay of the signal transfer from model photoreceptors to model retinal ganglion cells. Interactive lesson plans are provided, guiding users to discover different types of visual feature detectors—including ON/OFF, center-surround, orientation-selective, and direction-selective receptive fields—as well as their underlying circuit computations.
RetINaBox: A hands-on learning tool for experimental neuroscience
Brune Bettler
Flavia Arias Armas
Vanessa Bordonaro
Megan Liu
Mingyu Wan
Aude Villemain
Stuart Trenholm
An exciting aspect of neuroscience is developing and testing hypotheses via experimentation. However, due to logistical and financial hurdles, the experiment and discovery component of neuroscience is generally lacking in classroom and outreach settings. To address this issue, here we introduce RetINaBox: a low-cost, open-source electronic visual system simulator that provides users with a hands-on tool to discover how the visual system builds feature detectors. RetINaBox features an LED array for generating visual stimuli and a photodiode array that acts as a mosaic of model photoreceptors. Custom software on a Raspberry Pi computer reads out responses from model photoreceptors and allows users to control the polarity and delay of the signal transfer from model photoreceptors to model retinal ganglion cells. Interactive lesson plans are provided, guiding users to discover different types of visual feature detectors—including ON/OFF, center-surround, orientation-selective, and direction-selective receptive fields—as well as their underlying circuit computations.
The feature landscape of visual cortex
Rudi Tong
Ronan da Silva
James Wilsenach
Stuart Trenholm
Understanding computations in the visual system requires a characterization of the distinct feature preferences of neurons in different visual cortical areas. However, we know little about how feature preferences of neurons within a given area relate to that area’s role within the global organization of visual cortex. To address this, we recorded from thousands of neurons across six visual cortical areas in mouse and leveraged generative AI methods combined with closed-loop neuronal recordings to identify each neuron’s visual feature preference. First, we discovered that the mouse’s visual system is globally organized to encode features in a manner invariant to the types of image transformations induced by self-motion. Second, we found differences in the visual feature preferences of each area and that these differences generalized across animals. Finally, we observed that a given area’s collection of preferred stimuli (‘own-stimuli’) drive neurons from the same area more effectively through their dynamic range compared to preferred stimuli from other areas (‘other-stimuli’). As a result, feature preferences of neurons within an area are organized to maximally encode differences among own-stimuli while remaining insensitive to differences among other-stimuli. These results reveal how visual areas work together to efficiently encode information about the external world.