As promised in my opening post, my goal through the process of blogging on the topic of Theory and Interdisciplinarity is to understand the different perspectives that exist at the interface of theory and experiments, including those of young researchers. In this post, I talk to Habiba Azab, a PhD candidate at the University of Minnesota, where she is studying the neural mechanisms of value-based decision making in the prefrontal cortex. Habiba is an experimentalist and works primarily with single-unit data from nonhuman primates performing decision-making tasks. As a graduate student, Habiba has had the experience of collaborating with a theorist, and has had a chance to think about what does and doesn't work at the theory-experiment interface. She recently attended a meeting titled Present and Future Frameworks of Theoretical Neuroscience, hosted by the University of Texas at San Antonio (with support from NSF's BRAIN Initiative). She was part of a working group that discussed the topic of "Organization of Neural Theories". For more on the meeting, here is a link to the podcast recorded with the meeting participants (I will also summarize the relevant insights from the podcast after the interview).
Here is a condensed and edited version of my chat with Habiba Azab:
What was your experience of attending the workshop as a student?
It was a very enlightening experience. The format of the workshop was free-flowing and discussion-based, and while there were a few concise presentations by some of the faculty members, nobody talked about their own work. The goal was to talk about where we think the field should be headed, and what role theory should play in that. I actually realized I enjoyed that a lot more than the traditional conference format. But I suppose this meeting had a fundamentally different purpose.
What was one of the big takeaways for you?
Science is full of technical terminology, and we often assume everyone is using the terminology in the same manner. But this is not always true; different people use the same term in different ways in different contexts or subfields. As a graduate student, especially when starting out or reading papers from a field other than my own, I just assume everyone else knows what a certain term means and that I need to catch up. It was fascinating to realize that was not the case at all. This wasn't something we realized through curated sessions devoted to that purpose, either: it wasn't like we all sat down to discuss "What is theory?" and realized we had different ideas (although that happened, too). Oftentimes people would express their ideas and start discussing them, and only THEN realize we were talking about different things. Seeing this happen between experts and eminent neuroscientists really drove home to me the fact that our field is very young. I think it changed the way I've come to read papers.
I heard on the podcast that you were part of the working group Organization of Neural Theories (theory of theories), and that one of the topics of discussion was exploratory vs. hypothesis-driven research. What are your thoughts on these different forms of research?
I study the prefrontal cortex, so at least from my vantage point it seems like a lot of neuroscience is very young, and a lot of the ongoing research is therefore, by necessity, quite exploratory. But papers aren't written to reflect that; the research is presented as if it was driven by a central hypothesis from the very beginning, which is usually supported by the results presented. Good papers really do give you that impression (and maybe that is actually how it happened), but sometimes that way of presenting the research feels very forced, usually in areas where we just don't know much about what's going on. It's really helpful to read papers that are written in a manner that clarifies the depth of background they're working from—what's known and what's not. Moreover, I think it's necessary to clarify what type of research we're doing, because that influences how we should interpret the results; if a study is exploratory, it likely needs to be replicated, and we need to be careful with the statistical interpretation of our results. This concept was very unintuitive to me at first; why would my viewpoint influence how significant a result is? But when we think of statistical significance as a marker of how 'surprising' the results are, this starts to make more sense.
Imagine a researcher trying to determine whether a coin is a magic coin. They flip the coin 12 times and look for a pattern. We all know that humans are excellent pattern finders; it's easy, for example, to argue that the sequence THTTHHTTTHHH is significant (1 tail, 1 head; 2 tails, 2 heads; 3 tails, 3 heads). When we then test this finding statistically, we compare the likelihood of this exact sequence against all other possible permutations of heads and tails, and in that context the finding looks really surprising. But what actually happened here? The researcher didn't go in predicting they would see that exact sequence, so it's unfair to evaluate its likelihood against the space of all possible permutations; the fair comparison is against the set of all sequences the researcher would have found striking. After all, they could have made an equally nice case for a magic coin had they seen alternating heads and tails, or 6 heads followed by 6 tails, and so on. On the other hand, if the researcher runs a second experiment and finds exactly the same pattern as the first, now that is surprising, and suggests there really is something up with this coin. That's the difference between exploratory and confirmatory research, and why replication is so important for the former. I don't know if this example illustrates the point well enough; I still think this is a hard concept to wrap one's head around.
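(An aside from me: the arithmetic behind this intuition can be sketched in a few lines of Python. The criterion below for what counts as a "striking" pattern—few runs, or run lengths that never decrease—is an illustrative choice of mine, not something from our conversation; any reasonable criterion makes the same point, namely that the chance of seeing *some* striking pattern is orders of magnitude larger than the chance of one particular sequence.)

```python
from itertools import product

def runs(seq):
    """Run-length encode a sequence: 'THTTHH' -> [1, 1, 2, 2]."""
    lengths = []
    for flip in seq:
        if lengths and flip == prev:
            lengths[-1] += 1
        else:
            lengths.append(1)
        prev = flip
    return lengths

# Probability of one specific 12-flip sequence under a fair coin:
p_exact = 0.5 ** 12  # 1/4096 -- looks very "surprising" on its own

# But the researcher would have accepted MANY sequences as "patterned".
# Illustrative criterion: at most 4 runs (e.g. TTTTTTHHHHHH), or run
# lengths that never decrease (e.g. THTTHHTTTHHH -> [1, 1, 2, 2, 3, 3]).
def looks_patterned(seq):
    r = runs(seq)
    return len(r) <= 4 or all(a <= b for a, b in zip(r, r[1:]))

# Brute-force over all 2^12 fair-coin outcomes:
n_patterned = sum(looks_patterned(s) for s in product("HT", repeat=12))
p_any_pattern = n_patterned / 2 ** 12  # roughly 0.13
```

Under this (arbitrary) criterion, roughly one in eight fair-coin sessions would have let the researcher claim a "magic" coin, while replicating one specific 12-flip sequence in a second, confirmatory experiment has only a 1-in-4096 chance.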
This doesn’t mean that exploratory research is somehow inferior: in fact, I think it’s the necessary first step in investigating any new scientific question. Unfortunately, I think the incentive structure in science is heavily skewed towards hypothesis-driven research: where the entire space of results can be predicted and interpreted before the experiment has even been done. And yet research needs to be novel and surprising!
You mentioned that the incentive structure makes it difficult to pursue exploratory research. I also wonder if the current research practices make it harder to communicate the objectives and outcomes of exploratory research in a helpful manner. How can the open science movement help this process? What are your thoughts on open science and how it can benefit neuroscience specifically?
As I mentioned before, research has become so concerned with the "story". Papers always look so clean, so elegant. Real research is usually far messier. It is never a straight line from experimental design to execution to data to results—the actual process is much more long-winded. But the real process and the failures don't end up in papers—we can sometimes get a glimpse of these in informal discussions with peers, but rarely from publications or conference talks. I think knowing what went wrong in past research attempts would help set a clearer path for future work and exploration.

Some aspects of the open science movement could help create avenues for being transparent and explicit about exploratory work. Pre-registration, for example: how often have you submitted a paper, had it reviewed, and wished that you could have gotten this feedback before actually running the experiment? That's what pre-registration is about. Submitting your study before running it forces you to think about what you're actually doing, what could go wrong, and how else the results could be interpreted. Plus you get all the "you should have…" feedback (which is often very insightful) before you actually run the experiment. But perhaps more importantly, pre-registration ensures that your work isn't lost. Say you spent a year on an experiment and came up with null results; that still gets published. It's not a failure on your part; it's part of science. I remember starting out in graduate school, and a very, very distinguished professor told us flat out: four out of every five things you do will fail. That's just how it works! But pre-registration (or publishing null results generally) ensures that such work isn't lost to the rest of the scientific community, either. Plus, it goes on your CV as work that you have done. That's just one example of how the open science movement has ideas that can really help out exploratory researchers.
Others include making collaborations easier, and encouraging others to build upon your work by letting them look at your data and code and suggest ways to improve your analyses, or different analyses entirely. And yes, finding bugs. My undergraduate background is in computer science—I find bugs in my code all the time. A sizeable portion of students starting graduate school will not have any programming experience at all. Even software companies have systems in place to ensure that a piece of code has gone through multiple tests and been checked by multiple sets of eyes before it's trusted to do what it's supposed to do. To assume that we don't need that in the world of scientific research is extremely naïve, and dangerous! Sharing one's code can help with that (although I still don't think it's enough). Anyhow, I'm still a novice and mostly-aspiring open scientist, but I think there's plenty to be excited about.
One of my inspirations to pursue this blog is a podcast titled Song Exploder – “where musicians take apart their songs, and piece by piece, tell the story of how they were made.” In a similar manner, can you talk a little about your experience working at the theory-experiment interface and collaborating with a theorist?
I was collaborating with a computational neuroscientist to build a neural circuit model of value-based decision-making. We set out to test the predictions of existing models against data we had collected in a novel task. I got the sense that, just like empirical research, theoretical and computational research is often exploratory. Theorists seem to have an eye for which models could plausibly generate the data, and that makes a good starting point, but they don't always know going in what's going to work. In our case, we started by looking at the standard model and found that our data did not fit its predictions, so in a way we were using the hypothesis-driven approach. But then we also wanted to figure out what model could have generated this data. In the process, we realized that we did not have the right data to constrain our model, which told us what kind of experiment we should be doing next. So that's one instance of theoretical work guiding experiments that I got to witness firsthand.
I encountered an intriguing question that I would like to hear your thoughts on: "how do we know what we need to know?" It was raised by an undergraduate trying to determine what path to take in their future research career. The number of unknowns seems so high, and it's not clear what manipulations are necessary and sufficient.
That's an interesting question; I'm not sure if there is a right or wrong answer to this one. Different people find value in different things, and people care about understanding at different levels of analysis or resolution. It might be useful to keep in mind what "understanding" means for different people and let that drive the research questions. For some people, understanding might mean "semantically understanding the brain", i.e., understanding the mechanisms of information processing in the brain: how information flows across different regions, how it is being processed, and so on. Maybe a test of that is being able to illustrate a cognitive process using a simplified flow diagram, or something like that. But for other people, just building a brain (or simulating a brain), or some reduced version of it, might be enough. It can also depend heavily on the purpose of the research; for example, if my purpose is to develop a robotic arm, maybe I don't care specifically what the neural signals coming out of motor cortex mean, as long as I can use them to move a mechanical arm. Some people believe that the brain is a chaotic system, and that we can't simplify its workings to the point where we can write them down in an algorithm or draw a box diagram. I don't believe that we can definitively prove that one definition of understanding is more valuable than another: it really depends on what we individually value as researchers. I think there's a lot we can learn from different perspectives, though, and we shouldn't fall into the habit of dismissing a different type of understanding just because it's not how we normally pursue our own research.
How has your experience been in neuroscience as a young woman researcher? Do you think your opinions and concerns are given due respect in work meetings? How about at conferences and workshops?
As far as I can tell, my experience as a young woman researcher in neuroscience has been the experience of any researcher in neuroscience. I haven’t had this experience of talking to someone about a situation I’ve been through and realizing I was being treated differently for being a woman / a foreigner / a hijabi Muslim / young. I’m not saying everyone in academia is nice and perfect; in fact I think everyone could really afford to be a bit nicer to one another. But I personally have not gone through any experience that I would attribute to sexism / racism / religious prejudice / ageism or any other form of discrimination. In fact, I think the major reason why I haven’t faced nearly any prejudice since coming here to the US is because I’ve been largely within academic communities. This is not to say these experiences don’t occur and this type of discrimination doesn’t exist, but it is to say that it doesn’t always and consistently happen. Maybe it mostly happens, or often happens, or sometimes happens or rarely happens… I’m only a single data-point. But oftentimes we only hear about cases where it DOES happen (because if something doesn’t happen that’s not really noteworthy) and I think that skews our perspective. So from my perspective, I personally don’t think I’ve been treated differently for any of those reasons. If it did happen, I wasn’t aware of it.
As an ending note, I want to highlight a few interesting aspects of the meeting Present and Future Frameworks of Theoretical Neuroscience that I gathered from listening to the podcast. As an experimentalist starting out to explore these topics, it was exciting to hear theory being discussed amongst experimentalists and theorists, and to hear their perspectives on how they think about it. It was also encouraging to realize that the definition of theory in neuroscience is in need of further characterization and grounding to help move the field forward. A huge chunk of the first podcast episode focused on this idea of defining theory: what it means, what it can offer experimentalists, and how (if at all) it differs from computational models. Theory isn't just a collection of equations or functions (which is how it is largely viewed today); rather, its role should be to provide a "larger framework" and a series of hypotheses, and to reveal first principles of the systems we are studying. Another manner in which a larger theoretical framework can drive progress is by revealing questions that we don't currently know how to test, and thus aiding in the creation of new experimental paradigms or tools. I found this to be an important insight given that currently, in our field, the available tools seem to be driving a lot of the experiments (I'm not particularly skeptical of this approach, but I find it interesting to think about the implications of research driven by tools vs. questions per se, a topic to tackle further another day).
Moving beyond definitions, theory can provide a bridge across research subfields. The study of the nervous system is carried out across different levels of analysis, using a wide array of tools of varied granularity. In Explaining the Brain, Carl Craver states that explanations in neuroscience by necessity span multiple levels and integrate findings from multiple fields (in order to illuminate mechanisms). Models and theories can be critical for bridging the gaps, or making the connections, across these different levels of analysis, across species, and across these varied methods. By establishing such links between levels, the limitations of particular species models or experimental methods can be made more explicit, allowing individual researchers to think and see beyond the confines of their own research, and to understand how it relates to the broader goal of explanation.