Many cognitive neuroscience experiments require computing skills that are not typically taught in psychology and neuroscience programs, such as command line usage, remote server access, and version control. Scientific Computing for Cognitive Neuroscience: Tutorials and Practical Applications, an open-source resource created by the students of the Visual Cognition Lab at Barnard College, Columbia University, addresses this gap. It consolidates both foundational and advanced concepts in scientific computing, equipping lab members with essential knowledge while providing tutorials and explanations for the broader cognitive science community.

We hope to reduce dropout rates in the computationally intensive field of cognitive neuroscience by providing an educational resource for undergraduates with diverse levels of background knowledge. Nearly 50% of college students who initially declare a STEM (Science, Technology, Engineering, Mathematics) major switch to a non-STEM major before graduating (Nasser & Hutson, 2025). This statistic is even more striking for students from minority backgrounds, with attrition rates reaching 75% (Nasser & Hutson, 2025). Scientific Computing for Cognitive Neuroscience: Tutorials and Practical Applications builds its content progressively, ensuring that students gain confidence and the skills needed to work efficiently with large datasets—a fundamental component of cognitive neuroscience research.
We encourage you to use this material freely, but kindly ask that you cite this resource when you use it. If you have comments or wish to contribute, please contact mgreene@barnard.edu. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Authors
Dr. Michelle R. Greene, Principal Investigator, Visual Cognition Lab (editorial/advisor)

Dr. Michelle Greene is an Assistant Professor in the Psychology Department at Barnard College. Greene’s research integrates machine learning, experimental psychology, and cognitive neuroscience to understand the processes that enable rapid visual perception. Greene earned her Ph.D. in Cognitive Science from MIT in 2009, followed by postdoctoral training at Harvard Medical School and Stanford University. Before joining Barnard, Greene was a faculty member in the Neuroscience Department at Bates College. Dr. Greene has led multiple university-based teams to create the Visual Experience Dataset, a large-scale egocentric video and eye movement database of daily human activities.
Greene is the recipient of the prestigious National Science Foundation (NSF) Faculty Early Career Development (CAREER) Program Award. At Barnard, Greene teaches “Introduction to Psychology,” “Exploring the Psychology of Imagination,” and “Introduction to Statistics,” while mentoring students through Independent Studies. She also runs the Visual Cognition Lab, guiding undergraduate students through interdisciplinary research combining psychology, neuroscience, and computer science.
This textbook was created as part of Dr. Greene’s NSF CAREER award, furthering her mission to provide a resource that is openly accessible to cognitive neuroscience students, free of charge.
Sage Aronson (editorial/advisor)

Sage Aronson (Barnard ’24) studied Information Science and Psychology. She is the current lab manager for the Visual Cognition Lab and will be starting her Ph.D. in Industrial-Organizational Psychology in Fall 2025. Sage intends to study group dynamics, emotions, and personnel selection. In her role as lab manager at VCL, she conducts research with undergraduate assistants, oversees data collection and analysis, and assists with grant management.
Hooriya Aamir (Research Assistant)

Hooriya Aamir (Barnard ’27) studies Neuroscience with a minor in Psychology. She joined VCL in the summer of 2024. Her research investigates the impact of visual complexity on rapid scene detection. Her findings could have applications in enhancing visual recognition systems and user interfaces. Hooriya’s work in the lab aligns with her broader interest in neurobiology, providing hands-on experience and insights that support her future career in healthcare.
Maria Adkins (Research Assistant)

Maria Adkins (Barnard ’27) studies Neuroscience and Behavior. She joined VCL in the summer of 2024. Maria researches the relationship between semantic complexity and rapid scene detection. She is now expanding her work to investigate how people process real versus internet images using EEG decoding. Outside of the lab, Maria volunteers with Alzheimer’s patients through Columbia’s Brain Exercise Initiative and enjoys honing her coding and computing skills.
Vivian Gao (Research Assistant)

Vivian Gao (Barnard ’27) studies Cognitive Science with a focus on Intelligence and Economics. She has been a Research Assistant in VCL since the summer of 2024. Her research examines the effects of semantic complexity on event-related potentials (ERPs). Vivian designs experimental paradigms with PsychoPy, conducts EEG experiments, and analyzes and decodes data. She also contributes to the lab’s textbook project, editing chapters, creating graphics, and designing practice questions. Outside of her academic and lab commitments, Vivian enjoys cooking, practicing jiu-jitsu, and dancing.
Amy Nguyen (Research Assistant)

Amy Nguyen (Barnard ’26) studies Psychology and Computer Science. She joined VCL in the summer of 2024. Her research focuses on event-related potentials (ERPs) under varying visual complexity. Amy designs experimental paradigms in PsychoPy, codes in Python, performs data analysis, and decodes EEG results. Amy also contributes to the lab’s open-source textbook project. In her free time, she enjoys running and works at SUMMIT One Vanderbilt.
Sundari Ruth (Research Assistant)

Sundari Ruth (Barnard ’26) studies Psychology. She joined the Visual Cognition Lab in the summer of 2025. Her research in the lab focuses on the detection of object information and uses EEG. In her free time, she enjoys sailing, reading, and volunteering as an adult leader for her local scout organization.
Skylar Stadhard (Research Assistant)

Skylar Stadhard (Barnard ’27) studies Neuroscience & Applied Mathematics. Skylar joined the lab in June 2024. Her research in the lab focuses on scene recognition and utilizes applications like Matlab, Python, PsychoPy, and EEG. Skylar is interested in combining disciplines like neuroscience with computer science and math. In her free time, Skylar enjoys reading and exploring NYC.
Carina Wong (Research Assistant)

Carina Wong (Barnard ’25) studied Neuroscience and Behavior. Carina joined the Visual Cognition Lab in the fall of 2024. Her research explored what happens in the brain as it resolves scene/word incongruencies in visual scenes. Carina used an experimental paradigm modified from the original Stroop experiment, with EEG to measure brain activity as participants categorized scenes in incongruent and congruent trials. Carina formerly did research in the Froemke lab at NYU Langone, where she measured oxytocin receptor (OXTR) expression in the mammary glands of maternal mice after oxytocin treatment and inhibition. Currently, she is interning as a medical assistant at a dermatology practice and is choreographing for Columbia University’s Orchesis.