Interview with Prof. Ariel Rokem, 2024 Winner of the Education Award
Authors: Audrey Luo
Editors: Simon R. Steinkamp, Elisa Guma & Kevin Sitek
Dr. Ariel Rokem received the Organization for Human Brain Mapping (OHBM) Education in Neuroimaging Award at the 2024 annual meeting in Seoul, South Korea. Dr. Rokem is a Research Associate Professor of Psychology at the University of Washington (UW) and a Senior Data Science Fellow at the eScience Institute. He completed his undergraduate and graduate studies in Biology and Cognitive Psychology at the Hebrew University of Jerusalem, followed by a PhD in Neuroscience from UC Berkeley. He then pursued postdoctoral training in computational neuroimaging at Stanford University.
Dr. Rokem leads the Neuroinformatics R&D Group, where he and his team develop computational tools to explore the biological basis of brain function. His group's research addresses fundamental questions in neuroscience through computational neuroimaging, with an emphasis on reproducible and open science. Dr. Rokem is also the co-director of the UW Center for Human Neuroscience.
He founded the NeuroHackademy Summer Institute in Neuroimaging and Data Science, which he currently co-directs with Noah Benson. With Tal Yarkoni, Dr. Rokem co-authored the book Data Science for Neuroimaging: An Introduction, which is available through Princeton University Press and freely accessible online.
We had the privilege to interview Dr. Rokem on his work in neuroinformatics and neuroimaging education as well as his educational philosophy.
1. What sparked your interest in teaching and education within the fields of neuroscience and neuroinformatics?
I am now a computational neuroscientist and neuroinformatician, so you might assume that I have been programming since I was in middle school, and that my academic training is in computer science or some other engineering field, but that's really not the case. Though I was interested in math when I was in high school (and even earlier), I was far more interested in theater, movies, and music. When I went to college, I chose to pursue a bachelor's degree (at the Hebrew University of Jerusalem) in biology and psychology, which gave me very little formal training in programming. This means that all of my programming knowledge was acquired through informal means: fortunate interactions with mentors or self-motivated explorations. My PhD (in the neuroscience program at UC Berkeley) involved a lot of growth in that respect. During that time, I was fortunate to have the opportunity to work alongside Fernando Perez, a key figure in open-source software for science – his work led to the Jupyter notebook, and he has made many other contributions to the Python software ecosystem. He was a mentor to me in finding my way in open-source software.
Around 2011, after I graduated, and with a bit more programming experience under my belt, I learned about Software Carpentry (SWC), an organization that was teaching scientists how to program for their research purposes, and I joined as an instructor. SWC is remarkable in that Greg Wilson, who founded the organization, immersed himself in the empirical research on teaching and learning, and the curriculum and teaching approach hew very closely to the empirical findings. (I really recommend Greg's free online book Teaching Tech Together for a concise summary of this literature.) By becoming a SWC instructor, I learned that teaching others about programming was (1) exciting and enjoyable and (2) one of the best ways for me to learn.
By the time I moved to Seattle to join the University of Washington eScience Institute in 2015, it was clear to me and to others around me that there was a gap in training for neuroscientists (and researchers in adjacent fields): datasets were growing increasingly large and complex, and methods for analysis were also rapidly changing with the introduction of machine learning methods into many of these fields. However, there were no formal mechanisms, and in many places no informal mechanisms, for students to learn about the tools and practices that they needed to adopt to be able to advance their own work with these technical and theoretical ideas. SWC filled part of this gap, and still does. But we needed something more. Around the same time, another important trigger for starting NeuroHackademy was my experience of the OHBM Brainhacks. I attended one in 2013 in Seattle and it left a deep impression on me: here was a community of tinkerers who were interested in completely new standards with respect to open research and open collaboration. It was so different from the rest of the conference, and there was something utterly hopeful and generative about this event that really drew me in. And so, NeuroHackademy grew out of an attempt to fill the gap that I noticed by marrying these two ideas: Software Carpentry, with its teaching practices, and Brainhack, with its free-wheeling sense of excitement, collaboration, and possibility.
2. How have your own experiences as a learner shaped the way you approach teaching?
One of the key takeaways from my experiences is the power of problem-based learning (PBL). The core of this approach is that learning happens when learners try to solve open-ended problems through self-directed study of the problem and its contours. Another way to say this is that in order to learn something, you have to understand how it scratches a particular itch. There is a lot of good research that supports this approach, and as a teaching method it is particularly well-suited to post-graduate students, who require less direct supervision and guidance (although I am excited that my 7-year-old daughter's grade school also spends a lot of time on PBL). On the other hand, I think a lot about the kind of support systems that we need to put in place to enable fruitful learning in research environments. Research in neuroscience is demanding – there is simply so much one needs to know! – and learners need to be (gently) shown the way towards practices and tools that will make their work more efficient, more impactful, and more rigorous, so that they are not frustrated by a meandering search through all of the possible options. We have rapidly gone from "where do I find out how to install Python software on my computer" (which was a challenge when I was in grad school) to "which of these three hundred tutorials about installing Python software should I follow". That is the gap my book with Tal Yarkoni, "Data Science for Neuroimaging: An Introduction", was hoping to fill. I hope that it helps clarify things and is not just adding the 301st option to the list.
3. How has being at the forefront of teaching data science, reproducible neuroimaging, and neuroinformatics shaped the way you approach and conduct your research?
I believe in what some people call "dog-fooding" – that is, that we should be users of our own tools and practice what we preach. I am fortunate that I now have the opportunity to work with students and postdocs who develop computational tools and apply them to ask all kinds of interesting research questions. I named my research group the Neuroinformatics R&D Group, with the intention that development and research be included in almost every one of our projects. And this is a great way to "eat your own dog food". Along the way, we do try to live up to the things that we teach, making code and data as open and as usable by others as possible. I truly do believe that this also makes our research more rigorous and impactful.
On the other hand, there are two lessons that I have learned that I think are important to underscore in this context: the first is that reproducibility is a matter of degree, not of kind. That means that you can almost always increase the reproducibility of the work that you do by taking small meaningful steps: publishing your analysis code, pre-registering your analysis approach (something that we have not done yet!), or advocating for practical and ethical data sharing where that is still not practiced. But you don't have to swallow the whole enchilada all at once. Sometimes you can do just one thing to improve scientific transparency in your current project, and you can do that without having to consider how to boil the whole ocean.
The other is that computational reproducibility and scientific transparency are just some of the facets that underlie scientific rigor, and they need to be supported by curiosity and careful scientific thinking, attention to experimental design and the logic of scientific interpretation, as well as clarity in scientific communication. This really argues for a holistic approach towards rigor, that doesn't lose sight of the forest in favor of the trees.
4. What do you see as qualities of effective teaching, both for individual educators and for educational organizations (such as NeuroHackademy)?
Organizing and teaching in NeuroHackademy has given me an amazing opportunity to be part of a community of learners and teachers that I find inspiring and uplifting. I have learned that effective teaching is fundamentally an optimistic, humanistic act – an act of faith in yourself and in your learners. I learn so much every year from the interactions that I have both with the instructors (after all, I get to sit and listen to some of the most interesting researchers engaged in neuroimaging and data science!) and with the participants. For me, some of the best moments of learning in NeuroHackademy are instances of "incidental learning", where a participant notices something about the way that another participant or an instructor is using technical tools or concepts. In this case, effective teaching comes from being open to these possibilities and embracing them when they appear. To paraphrase Greg Wilson, good teaching is more like a jazz improvisation than a strictly-performed classical piece.
Another goal that we have in mind with NeuroHackademy is the creation and fostering of a "community of practice" (CoP). In the theory that describes CoPs (which comes from the work of Lave and Wenger), learning happens naturally when new members of a community can engage in "legitimate peripheral participation". In the context of neuroimaging and data science this is important, because many of the projects that we work on (the BIDS standard, for example) are made by the community for the community, so we really aim to empower people to feel that they can contribute to the projects and tools that they themselves use. You can see some great examples among the projects that participants pursue in NeuroHackademy (for this year: https://github.com/NeuroHackademy2024/projects). This is also an act of optimism and faith, in that we really believe that the people we teach will become collaborators and colleagues in a collective endeavor.
Finally, thanks a lot to the OHBM ComCom team for putting this post together! I really appreciate the opportunity to talk about my educational work and my educational philosophy. It’s been fun to think about these questions!