An introduction to 2024 Keynote Speaker Lucia Melloni (OHBM 2024 keynote interview series pt. 8)

Dr. Lucia Melloni is a leader in consciousness research with a particular focus on its neural correlates. Her research career has spanned topics including language comprehension, predictive processing, and sleep, and has investigated both healthy and clinical populations. Dr. Melloni’s career covers this broad range of research areas partly because of her frequent collaborations with scientists within and outside of her field.

Her commitment to collaboration is perhaps informed by her extensive research background. After completing her PhD in Psychology at the Catholic University of Chile, Dr. Melloni held a postdoctoral position at Goethe University before moving on to lead a research group at the Max Planck Institute for Brain Research, and later taking on roles at Columbia University and the Max Planck Institute for Empirical Aesthetics. She currently leads the Neural Circuits, Consciousness, and Cognition Research Group at the Max Planck Institute for Empirical Aesthetics in Frankfurt and holds a Research Professor position in the Department of Neurology at the NYU Grossman School of Medicine.

Recently, Dr. Melloni has co-led a large-scale adversarial collaboration designed to test two competing theories of the neural correlates of consciousness. The project reflects Dr. Melloni’s commitment to open science and reproducibility, with full preregistration of all steps, data collection at six independent labs, a built-in replication phase, and openly available code. Her keynote talk will present the current state of the project and discuss how other researchers can implement similar large-scale collaborations to tackle the biggest issues in their fields. If you’d like to read a summary of one of Dr. Melloni’s recent works, see our Brain Bites summary here.

Alex Albury (AA): Your research has covered several different areas. Can you describe your research trajectory and what led you to be interested in studying consciousness?

Lucia Melloni (LM): I've been fascinated by the mind and the brain for as long as I can remember. It has always amazed me how a particular configuration of cells and their connections can give rise to our rich mental experiences, our emotions, and our inner thoughts. At the heart of it all is the question of consciousness: what makes it possible to have vivid sensory experiences, and what can make us lose that capacity?

My journey into studying consciousness has been deeply influenced by a tragic event early in my scientific career. Just as I started my PhD, my sister's partner suffered a traumatic brain injury and lost consciousness. He was only 18. The injury was devastating, and the doctors told us he was brain dead. When I saw him for the last time to say goodbye, he was lying calm with his eyes closed, but then, he moved. For a brief moment, there was hope. Was he still there? That night, I dreamt about him, reliving the day. It was intense—I felt like I was there. This experience brought to light how hard the problem of consciousness is. If you had seen us both, how could you tell who was truly "there"—the sleeping me, paralyzed but vividly conscious, or my brother-in-law, moving but with his consciousness gone? This story illustrates why consciousness is such a challenging field and why understanding it is crucial for medical, ethical, and societal decision-making.

Over the past 20 years, I have been on a quest to understand how we can switch consciousness on and off, measure it, and restore it when it is lost due to brain injuries. My approach has been multi-faceted. Empirically, my team and I have worked on developing ways to measure consciousness without the need for explicit reports. This is crucial, as it allows us to study consciousness in infants, animals, and brain-injured patients. We explored whether consciousness could be trained and found that it can, which opens up new ways to investigate it under controlled conditions. Recently, we developed a technique using frequency tagging to access higher-level cognitive processes. This has enabled us to investigate residual linguistic capacities in patients with disorders of consciousness, aiming to predict whether they could regain consciousness. It is an exciting time, as other groups have also developed sensitive tools to infer consciousness in these patients. For many families facing situations like mine, this progress is incredibly meaningful.

Theoretically, I've explored various hypotheses on how consciousness comes about. Early on, with Wolf Singer and Eugenio Rodriguez, I tested the idea that long-range interareal synchronization might explain the unity of consciousness and transitions between conscious and unconscious states. We found evidence supporting this hypothesis, but realized it couldn't be the sole mechanism since, for example, the hypersynchronization seen in epilepsy leads to loss of consciousness. This suggested another ingredient was needed. I then investigated whether predictive processing could explain consciousness, but found it lacking as a full explanation. More recently, I've been involved in the first adversarial collaboration in consciousness research, testing Global Neuronal Workspace Theory against Integrated Information Theory. I'll be presenting our first results at OHBM. This collaboration is exciting because it represents a shift from testing our own theories to rigorously challenging them against others. This approach minimizes confirmation bias and accelerates the rejection of inadequate hypotheses.

These are exciting times for the field. With multiple adversarial collaborations now underway, we are adopting a new way of doing science—one that systematically embraces skepticism and sharpens ideas through rigorous testing. While finding the final answer may still take time, we are speeding up the process by rejecting hypotheses more efficiently. That's progress!

 

(AA): If you had to explain what you do to a non-scientist, what would you tell them?

(LM): In my lab, we study what happens in our brain when we are conscious versus when we lose consciousness, such as during dreamless sleep, under anesthesia, and in some cases of traumatic brain injury. You can think of being conscious as having a multicolor TV switched on; now contrast that with last night, when at some point you seemed to disappear into nothing. What is the difference in the brain between the full, multicolor, multisensory experience you are having now while reading this and the moment you fell into dreamless sleep last night? If you have the same brain and the same body, what then is the difference? How does our biological system retain the capacity for consciousness, allowing us to regain it every day? And what happens when we lose that capacity? Is it possible to switch it back on, for example, in cases of traumatic brain injury with loss of consciousness?

We study this question in adults, and through a range of techniques that allow us to observe the living brain, we, along with many others in the field, are making strides in understanding what brings about the marvelous capacity to experience and to be sentient. I also collaborate with other scientists to test different theories about how consciousness works, which helps us get closer to understanding this incredible phenomenon.

So, in a nutshell, I’m like a detective for the mind, trying to figure out what makes us conscious and how we can use that knowledge to improve lives.

 

(AA): What is it about consciousness that makes it an important topic of study? Do you believe it has a more central role in cognition generally?

 (LM): Consciousness is everything to us. It’s the lens through which we experience the world. It’s where our intense feelings live, where joy, sorrow, regret, and excitement all happen. It’s also where our sense of moral responsibility lies. While we’ve made amazing strides in understanding the universe, life, and evolution, the mystery of what makes us conscious remains largely unsolved. Why does the stuff in our brains and bodies give rise to feelings and experiences? How did consciousness evolve, and when does it emerge in development? And now, with AI advancing so quickly, we also have to ask: can machines be conscious? If they can, how should we build them—and should we even try to build conscious machines?

Few questions in science are as deep and profound as the question of consciousness. Answering these questions is becoming more urgent with the rapid pace of AI development. We know that conscious processing in humans is different from unconscious processing. For example, our ability to veto actions seems to rely on consciousness, suggesting it plays a key role in cognition. This leads us to a crucial question: How do consciousness and intelligence relate? As AI systems become more sophisticated, understanding this relationship is vital.

These questions highlight the puzzle of the function of consciousness, particularly what evolutionary purpose it might serve. For example, does feeling pain confer an evolutionary advantage? As we begin to answer this question, we also need to consider whether consciousness might evolve in artificial systems, which have completely different constraints. In short, your question takes on special significance at this historical moment when we are seeing sophisticated artificial systems and are still trying to understand the capacities they might exhibit.

(AA): Consciousness can have a reputation for being a bit abstract. Do you find that this is warranted? What are some of the challenges you've had trying to quantify consciousness?

(LM): Oh, absolutely, consciousness can seem incredibly abstract, and I think that's somewhat warranted. It's such a complex and multifaceted topic that it sometimes defies a simple definition. Philosophers have been debating it for centuries—think Descartes and Kant—so it's no wonder it feels a bit elusive. In fact, for much of the 20th century, and especially after the dominance of behaviorism, studying consciousness was seen as something outside the realm of science, a luxury that only a few well-established academics could afford to pursue. Yet this has changed dramatically: since the 1990s, the focus has shifted from philosophical debates to empirical studies aimed at finding a correspondence between patterns of brain activity and properties of conscious experience, that is, finding the neural correlates of consciousness.

When we shift to the scientific realm, we try to ground the question of consciousness with empirical studies, which presents its own set of challenges.

For starters, defining consciousness in a way that everyone agrees on is not an easy task. It involves so many elements, like sentience, awareness, perception, cognition, and self-reflection. And then, how do we measure it? Tools like fMRI and EEG are great, and have in many ways been game changers for the study of the mind and in particular consciousness, but they have their limits. However, the toughest problem with investigating consciousness comes from the fact that it is subjective. How do we as scientists study and measure something that is inherently subjective? How do we account for the fact that everyone's personal experience is different? This so-called subjectivity gap makes quantifying and comparing consciousness a hard problem.

The brain itself adds to the complexity: Trying to pinpoint the exact neural correlates of consciousness is like trying to find a needle in a haystack. It's a massive challenge. But that's where interdisciplinary collaboration comes in. By bringing together neuroscientists, psychologists, philosophers, and computer scientists, we can tackle this from multiple angles.

 In the adversarial collaboration project, we've had our share of disagreements on methodologies and data interpretations, but that has actually been really productive. It pushes us to refine our hypotheses and improve our methods. There are real-world applications for this research, like treating disorders of consciousness and even advancing AI, which shows how important this work is.

Looking ahead, I'm excited about the potential of new neuroimaging techniques and computational models. They might help us make this abstract concept a bit more concrete. So, while consciousness is abstract, the journey to understand it is incredibly fascinating and rewarding.

 

(AA): What should this year's Annual Meeting attendees expect from your keynote? Can you give us a teaser?

(LM): We will delve into two fascinating topics: one is consciousness—one of the frontiers of science—and the other is us as scientists, focusing on how we can improve our practice to avoid confirmation bias and systematically strengthen theory development. It will be a story in two parts: the science of consciousness and the human scientists behind it.

During the keynote, I’ll give you a quick overview of where we stand in the science of consciousness. Spoiler alert: it's a field filled with diverse theories and, yes, a fair bit of confirmation bias. Next, I’ll introduce you to adversarial collaboration, our secret weapon for overcoming these theoretical divides. We’re putting this to the test by pitting two leading theories—Global Neuronal Workspace (GNW) and Integrated Information Theory (IIT)—against each other using standardized experiments.

Finally, I’ll dive into the work of the COGITATE consortium, where we’re implementing this approach with some cutting-edge research. Specifically, we’re running an experiment across eleven labs using MEG, fMRI, and invasive EEG to see how different stimuli affect brain activity. This is where things get really interesting. By comparing the predictions of GNW and IIT, we can start to see where the theories might fall short.

Expect some thought-provoking findings that could challenge what you know about the brain and consciousness. And don’t worry, we’ll also discuss the challenges of making adversarial collaboration work in the real world of science.

So, if you're ready for a keynote that's a mix of science and storytelling, packed with exciting discoveries and big questions, don't miss it. I can’t wait to share it with you all!
