Greetings, Friendlies!
I am so excited to be writing this post. Remember a gazillion years ago when I proposed the PPP series? It was this concept, Gazzaniga’s Interpreter Module, that was the spark that lit the whole thing off. And now I finally get to tell you about it. :)))
We had the classic question about modularity: if brains are modular, if consciousness is consciousness-of, why doesn’t it feel like that? For most of us, most of the time, it feels like there is a single, unified, continuous self experiencing a stream of phenomena over time.
What gives?
In Chapter Three of Who’s in Charge? (and a 2000 journal article, and probably a lot of other places) Gazzaniga proposes his solution. Not surprisingly, this solution comes from his studies of persons whose corpus callosum has been severed, “Split-Brain Patients” (refresher here, and Wikipedia).
The Setup
Imagine sitting at a table with a screen in front of you, the screen configured such that an image can be presented to only the left or the right visual field. This can get a little confusing because of our neural wiring: the left visual field feeds the right hemisphere, and the right visual field feeds the left hemisphere. So going forward I’ll talk about hemispheres rather than visual fields.
If an image of an apple is presented to your right hemisphere and you are asked what you see, you’ll see the apple, be conscious of having seen the apple, and you’ll say, “Apple”. If an orange is presented to your left hemisphere, you’ll see the orange, be conscious of having seen the orange, and you’ll say, “Orange”. Easy peasy.
For Split-Brain Patients things are different. If you present an orange to the left hemisphere, they will say “Orange”. They are conscious of having seen the orange and can tell you they see the orange.
If however, you present an apple to the right hemisphere, they will tell you they didn’t see anything. From this information, you might think they are somehow blinded in this part of their visual field.
But wait.
If you then place several objects on the table in front of them, one of which is an apple, and say, “Okay. You didn’t see anything. Take a guess. With your left hand, just guess what object might have been on the screen,” they will select the apple with their left hand.
Whaaaaaaaaaat?!?
Patient P.S.
Patient P.S. generously offered his time and bandwidth for participation in studies in Gazzaniga’s lab.
A chicken claw is presented to P.S.’s left hemisphere. A snow scene is presented to his right hemisphere.
These two images are cleared from the screen, and a new set of images is displayed, now visible to both the left and right hemispheres.
P.S. is then asked to point, with his left and right hands, to the new images best associated with the previous images (chicken claw and snow scene).
With his right hand (left hemisphere), P.S. points at a drawing of a chicken. Chicken claw, chicken. With his left hand (right hemisphere) he points at a drawing of a snow shovel. Snow scene, snow shovel. So far so good.
But when asked why he pointed at these new images, here’s what happened:
we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that’s simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw. Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.”
P.S. did not say, “I don’t know why my left hand is pointing at a snow shovel.” Instead, he fabricated a story that made sense with the information that was available to “him” (or his left-hemispheric speech center, anyway). A chicken and a shovel. Shovel must be for the chicken poop.
Whaaaaaaaaaat?!?
The Interpreter Module
From this case and others, Gazzaniga and his lab proposed the existence of what they call an “Interpreter Module”. This module is just one of many, many modules, as we’ve discussed here and here.
What seems to be going on is that the Interpreter Module receives input from other modules about what is happening inside the mind/body and what is happening outside it, and it fabricates a story to make meaning of, and explain, what it knows.
🤯
Maybe you’re asking, So what? Well. The way I see it, the existence of an Interpreter Module could answer a ton of dhammic questions, including why there’s a feeling of a continuous self and what we are doing in different forms of contemplative practice. It’s not a slam dunk yet, I mean, I have plenty of questions, but so tantalizing…
With friendliness!
***
PS: As always, your thoughts most welcome. Particularly if you have meta ideas (with metta!) about how this post landed. How was the length? How was the pacing? Is it clear what’s happening in the studies? Did the flow feel right? Did you even make it to this post script? 😹 Are you interested in knowing more? Are you bored? Is it interesting from a dhammic-perspective? Tell me all the things.