... seeking simple answers to complex problems, and in the process, disrupting the status quo in technology, art and neuroscience.

Saturday, February 04, 2023

The Gnostic Neuron - Part 1 - A Simple Model of a Complex Brain


<First version posted July 17, 2020>

What if I told you… that Morpheus never actually said the above words?

What if I told you that what you “know” about the above quote is the result of the Mandela Effect, and so isn’t actually true? Does that make the stated assertion a lie? (Which once again validates everything you know?) Knowledge can be a slippery business.

What if I told you the reason for this false memory was that this meme was a better one-line summary of what Morpheus DID say during this pivotal disclosure? And that this better summary, independently created by many and passed from person to person over the last couple of decades, eventually replaced the original script of the movie in our collective cultural memory?

If you recognize the quote, you probably remember it as spoken by Morpheus in the movie “The Matrix”. But your memory is wrong. Morpheus never uttered that line. Go ahead, look it up. Or watch the movie again. I did.

So is the above quote a glitch in the Matrix? Nope. It’s a bit of false knowledge created by you, me, and many others from a cultural distillation of the conversation Morpheus had with Neo during Neo’s introduction to the “real world” in the film. The consequences of that scene were so emotionally dramatic that you constructed knowledge about it, and that knowledge has become a collective cultural cue as others have done something similar with the experience.

That first part, “What if I told you…”, is meant to make you challenge what you think you know. The second half invalidates that knowledge. The quote embodies such a jarring summary that it has even become a meme on the internet for issues both trivial and profound: trivial when it creates humor, profound for its impact. (By the way, cueing you with that visual image and text sets you up for the Mandela Effect.)

If you're like me and most others, you probably knew FOR SURE that Morpheus actually said the above line. So much for the accuracy of memories. So much for what we KNOW. I present this meme as an example of the actual nature of knowledge which is far less reliable than we generally think, and yet in other ways, far more useful than the actual truth, or what we know “for sure”.

The essence of the above assertion is that what Neo had experienced all of his life was not reality, but a computer simulation. That was the jarring part.

“Ironically, this is not far from the truth.” - actually said Morpheus

Except for the “computer” part and the “D” cell energy aspects, a simulation is a reasonable way to describe how the brain models the world. So is our experience of life a simulation? Yep. Actually, a sparse one, but the consequences are just as extraordinary as those depicted in the movie. Just without the Kung Fu, or the ability to dodge bullets.

The simulation running in your brain is actually a dynamic collection of ionic neural signals in a cauldron of chemistry, but we’re getting the cart way ahead of the horse. We first need to understand the nature of these signals and the chemistry they affect, and are in turn affected by. This is where I need to unplug you from the Matrix of your left-brain with its perspective of information technology and bring you into the wisdom of your more knowledgeable right-mind.

"If you can't explain it simply, you don't understand it well enough." - Albert Einstein

I know why neurons fire, and I understand it well enough to explain in a relatively simple fashion, especially for such a difficult topic. I’m serious. Researching the nature of neural connection and the concept of “knowledge” led me to a startling conclusion based on a single radical, yet simple idea:

Neurons create knowledge.

More specifically, neurons literally create and define knowledge at the instant that they fire, and then they use this knowledge to cue scripts of muscle movement, yielding behavior. Such "knowledge" could also be described as a decision to fire, releasing its chemical signal. What does this even mean? How can biology make a decision, creating something as abstract as knowledge, let alone define it?

Most knowledge cannot be expressed as language, nor are words even needed for this pervasive and dynamic body of internal knowledge. Yet words are literally the expression of knowledge, a representation of knowledge outside of the skull and apart from the body. Knowledge is the ethereal relationship between things.

Knowledge is often confused with information, but a bit of knowledge is very different from an ordinary binary bit and would require many more of the digital kind to encode what it delivers. The encoding of knowledge is dependent upon what it moves or might move, and how this movement affects the world before being re-sensed in a continuous loop with the world. Information is how knowledge is managed outside of the neuron or the brain in macro. Information is an attempt to objectify knowledge, but never quite hits the mark. It's typically a disembodied and refined REpresentation of specific knowledge fixed in some medium in the real world outside the skull. Knowledge knows. Information merely informs.

Data and information are more formalized, and frozen, knowledge. Knowledge is mostly internal and changing. Information is mostly external and fixed, but not exclusively so in either case. Knowledge is ancient, and information is modern; but again, not exclusively so in either case.

Am I being redundant with what might at first appear to be a minor exception? Yes, and on purpose. Repetition is part of how knowledge becomes information. It's why chanting came into existence: knowledge striving to be stored as information in the oral tradition. Both knowledge and information remain critical in this "age" of information. As long as we don't forget that it started with knowledge. Which is the point.

Knowledge occurs far more often, and with far less quality, than is generally assumed. The trick is in how we define and think about knowledge. If we relax and expand its definition in a very specific way, some fairly magical things begin to happen in modeling our multifaceted neurons, brain, and our world in general. The key is to understand the actual nature of knowledge, and how neurons create it. Neurons create knowledge, and knowledge is everything that isn't real. Knowledge is ethereal. This assertion calls for a detailed clarification, which I’ll provide in due course, but here’s a quick overview:

It's widely assumed that knowledge and information are the same, or at least very similar things. They are not. They somewhat share a spectrum of quality and utility. Knowledge is pervasive and ORGANIC proto-information subjectively relative to a neuron, the brain, or a person. Most knowledge generation is inherently biological, and there's far too much of it to even think about most of the time. Information is knowledge objectively viewed from the outside of a neuron, brain, or person. It's usually a more refined, abstracted, and relatively tiny subset of knowledge managed consciously in a physical form, such as words in the form of sounds or written text. This paragraph is a RE-presentation of knowledge elevated to the form of digital information.

Information can be sent as a signal if both ends agree upon its meaning, typically represented by a state in some medium such as this text. This type of information signal is the objective form of knowledge. Knowledge can be sent as a signal as well, but agreement is not required. Instead, meaning evolves. The knowledge signaled by a neuron is far more dynamic but far less accurate and consistent. Agreement as to its meaning is a constantly changing process. Knowledge means what knowledge moves. Neurons only aspire to achieve consistency. They often fail gloriously. But typically in some useful fashion.

Information is normally defined by more objective consensus usually represented in some medium outside the skull. These information "states" only change for logical reasons. In contrast, neuronal knowledge starts from within and is inherently subjective, analog, ephemeral, and ethereal. It's also often surprisingly incorrect, and even illogical. Each bit of knowledge is the product of a specific neuron, at a specific moment, and only exists for that moment, useful or not. Knowledge is far more pervasive but far less reliable than information. Information informs us. Knowledge moves us. Sometimes. The exception is the key difference between objective information and subjective knowledge.

If you're a technologist, the idea that knowledge is more primal and more organic than information should challenge your understanding of information theory, but the truth of this assertion is intuitively built into our language. You may already have a "feel" for this assertion. I'll try to flesh it out for you.

Of course, knowledge can also be captured in an inanimate book, but that's a RE-presentation of knowledge becoming information. The genesis of knowledge has an organic association created by our culture and language. Would you say that a door "knows" how to close? Even if it's spring-loaded? Why not? Even writing the question is intuitively awkward. Yet, "you" could be comfortably described as knowing how to close a door. Such language would be in good form. Also, this description of subjective and organic knowledge is not limited to humans. A horse knows the way home. A dog may know how to roll over. But would you ever attribute such "knowledge" to the typical automobile even if "turning over" is how we often describe the engine starting? That engine doesn't "know" how to turn over. The point is that we intuitively KNOW the difference between authentic organic intelligence and the artificial kind. So far. Modern AI is finally blurring the line.

(It was watching videos of Tesla automobiles using Full Self Driving when I first noticed the drivers comfortably describing the car's operation in a more organic form, crossing a sacred line between the living and the non-living - "the car can see the pedestrian", or "the car knows where that bicycle is headed".)

I'm getting a bit far afield for this introduction, but such subtle differences are just the beginning. There's much, much more to introduce.

The brain in macro, and even each neuron, are multifaceted.

How can a lone neuron with only a single output be multifaceted? That answer for me was the key to breaking a logjam of logic - metaphorically. Such a thing is possible because there are multiple ways of creating the same knowledge in the same neuron just as there are multiple ways of using such knowledge from the same neuron, such as freeze, fight, or flight, to name three of the most obvious examples. The ratio of activating versus inhibiting synapses in each neuron is a critical hint. Each facet in a single neuron both competes and cooperates with the other facets for control of when to fire the chemical signal for that given neuron, neural net, or muscle group of the body. And one facet need not preclude another. Or it may.
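A toy sketch may help make this concrete. This is my illustration, not anything from the post's research or from neuroscience proper: the weights, threshold, and input patterns are all arbitrary assumptions. The point is only the shape of the idea, that one neuron with a single output can be cued by more than one input pattern (its facets), while inhibiting synapses can veto the fire.

```python
# A minimal sketch, not a biological model: one neuron whose single
# output ("a bit of knowledge") can be cued by several distinct input
# patterns -- its facets -- and vetoed by inhibition.

def fires(inputs, weights, threshold=1.0):
    """Fire when net drive crosses the threshold. Positive weights model
    activating synapses; negative weights model inhibiting ones."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    return drive >= threshold

weights = [1.2, 1.1, -0.9]   # two activating synapses, one inhibiting

facet_a = [1, 0, 0]   # the first cue alone fires the neuron
facet_b = [0, 1, 0]   # a different cue fires the very same neuron
vetoed  = [1, 0, 1]   # the first cue again, but inhibition wins
```

Two different patterns yield the same firing, the same "bit of knowledge", while the activating-versus-inhibiting ratio decides whether any of them succeed. That is the whole trick being gestured at here, stripped of all biology.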

This multifaceted yet unitary aspect of knowledge creation from a single neuron could be compared to a reversible robe in its simplest form. Such a robe may appear differently to the world and even feel different when wrapped around you, yet keep you just as essentially warm worn either way. Now think of such a robe with even more than two sur-faces (making it multi-faced and multifaceted), a type of multivariant robe that yields an invariant result. The key is understanding that the essential bit of knowledge in this case is warmth. This idea can also be described as "flexible invariance" which may seem like a contradiction in terms or even a paradox, but only if you think about it logically. The neuron has no such restriction.

In a similar respect, the brain in macro form has different ways of coming to know the relationships between things in its environment from multiple senses at the same, or similar time frames. Our brain also has multiple ways of responding to such complex experiences. Multiple faces confront the world for both input and output, sense and behavior. Sensing does not determine actual behavior. Neurons and the brain in macro do.

This multifaceted nature of the neuron and brain, in general, is the reason for the seemingly contradictory behavior we describe as cognitive dissonance, passive-aggressive behavior, and hypocrisy in general. Multifaceted and competing neural nets also explain delusion, hypnosis, false memory, and optical illusion. Indeed, what we come to "know" are various types of illusion, some more useful than others, by degrees. Understanding our multifaceted nature is key to managing our seemingly conflicting behavior. Yes, the details get a bit complex, but would you expect anything less from such an efficient and resilient survival solution as the brain? And the neuron?

Knowledge Cues Scripts 

Most significantly from an information theory perspective, neither knowledge nor its signal is stored as a fixed “state” in the neuron, or anywhere else in the brain. Instead of storing states, neurons evolve a very specific “sensitivity” to each experience, much like an immune cell becomes sensitive to a specific pathogen, except more flexible and adaptable, making neurons much less "stately" than even an immune response. When a similar circumstance reoccurs, that neuron may fire again in recognition of that specific bit of what is best described as approximate biological knowledge, and then adjusts its sensitivity to be a more effective cue for this particular bit of knowledge at the next opportunity. Again, this is similar to what happens when the body re-encounters a pathogen, just more flexible (as if a single disease could be caused by multiple pathogens, though that pushes the metaphor a bit too far). An immune response is driven by a type of hyper-specific knowledge that helps keep our bodies alive, healthy, and reproducing. So is neuronal knowledge, just in a more flexible and dynamic fashion.
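One way to picture "sensitivity, not stored state" is a toy neuron that keeps no record of any past stimulus at all, only a per-input sensitivity it nudges each time it fires, so a recurring cue is recognized more readily next time. Everything below (the class name, the starting values, the update rule) is an illustrative assumption of mine, a cartoon of the idea rather than a model of real up- and down-regulation.

```python
class ToyNeuron:
    """Stores no 'state' of past stimuli -- only an evolving per-input
    sensitivity, nudged each time the neuron fires."""

    def __init__(self, n_inputs, threshold=1.0, rate=0.2):
        self.sensitivity = [0.5] * n_inputs   # starting values are arbitrary
        self.threshold = threshold
        self.rate = rate

    def encounter(self, stimulus):
        drive = sum(s * x for s, x in zip(self.sensitivity, stimulus))
        fired = drive >= self.threshold
        if fired:
            # Up-regulate the inputs that cued this firing and down-regulate
            # the rest, making the neuron a better cue for this bit of
            # knowledge at the next opportunity.
            self.sensitivity = [s + self.rate * (x - 0.5)
                                for s, x in zip(self.sensitivity, stimulus)]
        return fired
```

Note what is absent: the stimulus itself is never stored anywhere. After a firing, only the sensitivities have shifted, which is the sense in which this "memory" is stateless.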

A neuron’s knowledge has a utility that is quite different from that of information, but no less significant. As other neurons fire, their specific knowledge joins in a convergent and cascading but sparse map of semiotic simulation that has evolved to create more abstract meaning from any particular experience. Each neuron knows something different, but it only knows that thing for the instant that it fires and then prepares to know that thing even better the next time it occurs in the world. Neurons only fire when they are cued by that thing from reality or imagination (AKA Global Neuronal Workspace), and that thing is best described as ethereal knowledge.

An ionically mediated chemical signal representing this knowledge also diverges out to any other neurons that might find it useful. Ultimately, these somewhat divergent, but mostly convergent and hierarchically organized experience nets both compete and cooperate to form cues that drive scripts of muscle movement known as behavior. Each movement we make is informed by a crescendo of convergent knowledge. How is this knowledge encoded? Mostly, it's not. At least not in the same way that information is encoded in a computer. Instead, neurons DEcode the world as they create knowledge, and this knowledge is constantly changing, much like the reality we encounter daily in our lives.

In the temporal background, typically out of the critical path, the cortex creates models of the world using a form of this stateless, signal-based simulation expressed as chemical feelings from both sides of the brain. We call these predictions emotions. Through the trick of priming, they increase the probability of physical movement we call behavior as the word e-motion implies. Processing thoughts in our left brain, and envisioning solutions in our right, are both higher-order forms of this emotionally driven effort. Emotions make our imagination "real" so that we'll respond in a way similar to a stimulus from the world, only next time hopefully before it happens, yielding prediction.

Dreams are the off-line version of this type of chemo-semiotic stateless simulation, a type of practice run for the next real-world encounter, sorting out what we learned from forming fresh neural connections the day before, all while keeping our muscles carefully inhibited, but the emotions active. Dreams help to hone and firm up this stateless "memory" at night as a follow-up to the real-world sensitivity adjustments that have occurred during the day. This process is known as up and downregulation of neural connection, a form of biological normalization, somewhat similar to what we do with an information database. But also quite different. The result ranges from primal proto-knowledge joining together to drive increasing abstraction all the way up to information, and ultimately something approaching the truth. But by degrees. And over time.

The key to understanding the brain is this fresh perspective that neurons create knowledge, and that most knowledge is created by neurons. It’s only the quality and character of this knowledge that varies, and varies widely. Only once we begin to focus on what each neuron knows, and how knowledge dynamically changes, can we begin to build a simple model of a complex brain.

The Neurophilosophy of Language

If you’ve spent any time studying neuroscience or human behavior, this idea of neurons creating and defining knowledge may at first seem comical, radical, bizarre, or worse - meaningless. My first reaction was to laugh out loud. My second was, could it be this simple? I couldn’t look away. 

As I worked with the details of neuronal communication I soon discovered that the macro consequences of this gnostic model were so dramatic and answered so many questions about human behavior that my macro experience began to eclipse the work I was doing in the nano context with the synapses. This neo-gnostic model of neurons ultimately changed how I understand the world and even philosophy itself, which is of course the appreciation of such knowledge.

It's now hard to see neurons as anything other than creators of knowledge. And that’s just the beginning. The concept changes not just how I see neurons and the brain, but also how I understand human behavior. I now see adaptive knowledge behind the actions of everyone I meet. This model is dramatically shifting my perspective of everything. Like green letters dropping down the screens from the movie, “The Matrix”, I see bits of primal knowledge coming together in life to form effective behavior and ultimately emergent insight about everything I experience. This transformation is what I wish to share, but I'm torn between continuing to explore this model and describing its nature in this blog post. I'll try to do both in hopes that each will inform the other.

Am I delusional? Perhaps. But with a clear understanding of this first principle of the neuron and its multifaceted nature, the brain begins to make a lot more sense. The trick is to generalize and broaden the concept of knowledge while recognizing its multifaceted genesis. Once I understood that neurons literally created and defined knowledge, figuring out how this happened became a lot easier and revealed the brain's multifaceted architecture, and vice versa, yielding a map of astounding complexity largely based on this one simple principle.

Even more surprisingly, the concept illuminates language as a Rosetta Stone of brain architecture hiding in plain sight. The connectome of the brain is ultimately reflected in our language and culture, but by degrees. This evolutionary trick has evolved to yield knowledge, information, and ultimately, something approaching the truth.

Words are literally the expression of this knowledge in the process of becoming disembodied information. When pre-motor neurons fire, they cue a script of choreographed muscle movements in the diaphragm, throat, tongue, and lips to create sounds. Or in the fingers to produce writing. Virtually every other form of expression, from dance to mathematics, works the same way.

What I’m about to present is not merely a redefinition of the word "knowledge". It’s a radically different understanding of what it means to define all words, which are themselves only a tiny subset of all knowledge. Knowledge is also likely the basis for all thought and imagination. From this perspective, etymology may shed light on harder problems. The most probable path to understanding the hard problem of consciousness is to understand the brain, and the most probable path to understanding the brain is to understand the neuron. It's also easier to address the simple problem first. Later we can speculate about chemo-semiotic consciousness.

Scripts Both Compete and Cooperate to Yield a Multifaceted Brain 

In due course, I’ll describe a collection of tricks that evolution has used to evolve a new way to evolve. (Well, knowledge is only about a billion years old, so fairly new.) It yields a very different, yet powerful way of thinking about the brain. And reality. No, I don’t understand all the tricks of the brain, only a relative few. But these tricks are applied disproportionately yielding a shadow of an overview that has for me become a simple model of the brain. Needless to say, understanding the nature of this chemical, signal, and biology-based knowledge has extraordinary application in our everyday interactions with the world, from science to art, and especially, philosophy. It informs everything you can imagine. And many things you can't.

Yes, I realize how audacious this claim is, probably better than most. I’ve been casually working on this problem for decades, but more intensely over the last few years. I’ve collected well over a thousand pages of technical descriptions, alternate versions, notes, and references, but all of that detail would only distract us at this point.

A comprehensive model of anything needs to account for all known observations. This of course is currently impractical in the case of the brain. There’s simply too much data to even review, let alone validate (at least by any one person). We need a simple model of the brain first. That starts with a framework, or better yet, an overview of a model. We can fill in the details as our understanding evolves.

Whether we realize it or not, we each manage a default model of the brain along with our model for human behavior. We use it daily in various ways. It's just how the mind works. Being part of nature itself, the brain too abhors a vacuum. If your exposure to our technical media about the brain is typical, your personal brain model likely involves electrical metaphors, computers, and processing your thoughts in a sequential fashion. After that, the details are likely lost in shadow, because most of that model is simply wrong. But not completely.

Many think of neurons as logic devices or memory elements (which can be derived from electronic logic). For decades, so did I. But neurons have far more in contrast than in common with such metaphors. If you're like me, you may have a feeling that there's just something about this tech approach that doesn't seem quite right.

We each know different things about the brain depending upon our own individual research and experience. Striving for a fresh approach, here's how I manage my model of the brain - start from the most general and work in new detail as I validate each observation. But it helps greatly to have that first principle understood - that multifaceted neurons create knowledge.

Here's a fun game: each time you use the word "know" or "knowledge", look outward into the world and think about how you came to know this thing and what your level of conviction is. Question everything. So, what do you know?

After that, the challenge is to generalize in a way that incorporates what we know, yet keep those generalizations broad enough to account for all the detail we’ve yet to discover. A fool’s errand? Perhaps, but here's the hyper-simplified model of the brain I now use to understand this challenging mystery.

A Simple Model of a Complex Brain

The body delivers millions of neural signals to the brain, each of which represents a bit of knowledge about the world in chemical form. These signals are best understood as theatrical cues which both compete and cooperate in a converging and increasingly abstract fashion to drive scripts of muscle movement known as behavior, which in turn sometimes affects the world, which can once again be sensed. This process happens in a continuous loop with that world. Or these cues may signal glands to release internal chemistry which interacts with these chemical signals in a similar fashion also forming a dynamic loop within the body and especially the brain. In both cases, these two macro loops help to refine and normalize interactions with each real-world encounter. Or internal emotional feelings. In the process, both ionic signals and their chemistry refine and validate the accumulated knowledge of that experience in the form of adjusted sensitivity. Or not. The exceptions can be critical.
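The two macro loops above can be caricatured in a dozen lines. This is a deliberately crude sketch under my own assumptions (a one-number "world", two competing scripts, lambda functions standing in for cues); nothing here comes from the model itself except the shape: sense, let cues compete, run the winning script, and re-sense what the action changed.

```python
def run_loop(world, scripts, steps=3):
    """Sense -> competing cues -> winning script acts -> world changes -> re-sense."""
    trace = []
    for _ in range(steps):
        # Cues compete: each script scores the sensed world; the winner runs.
        winner = max(scripts, key=lambda s: s["cue"](world))
        world = winner["act"](world)            # behavior affects the world...
        trace.append((winner["name"], world))   # ...which is re-sensed next pass
    return trace

# The "world" here is just a signed distance to some goal.
scripts = [
    {"name": "approach", "cue": lambda w: -w, "act": lambda w: w + 1},
    {"name": "retreat",  "cue": lambda w:  w, "act": lambda w: w - 1},
]
```

Starting at `run_loop(-2, scripts)`, "approach" out-competes "retreat" and the world is nudged and re-sensed each pass. The point is only that behavior is selected by competing cues and then fed back through the world, the continuous loop described above.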

In their competition and cooperation, these cues and scripts of neural connection have formed in layers within each side of a single vertical division, left and right, providing for necessary isolation to create the multifaceted nature of the macro brain. These sides and layers are best imagined as creatures from our evolutionary past. Each of these critters has many different ways of dealing with the world. As you come to know how each creature net is cued, you’ll begin to better understand your own behavior. A thousand critters each apply one of their thousand tricks to yield a million survival solutions. There are obviously too many to keep in mind. Fortunately, their application is disproportionate, even extremely disproportionate. But understanding even a few of these tricks can be quite useful in understanding the brain, and ourselves.


For instance, think of the cues that drive human competition, consumption, and reproduction. There are many, but only a relative few dominate most of the results in a form best described as sparse signaling creating a map of your body and the world in general. If the “executive” in your mind can intercept and redirect even a few of these more common cues, it can change your life dramatically. There are many self-help books that apply these techniques without ever understanding the neural details of how they work. OK, the above may be a bit too complex for now. Ignore these last three paragraphs. If you can.

An even simpler summary of a simple model of the brain:

- Neurons sense the world to biologically create primal knowledge.

- Chemical signals converge to create even more abstract knowledge.

- This knowledge cues scripts of muscle movement known as behavior.

- Behavior affects the world and body, and in turn is affected by the world and body, forming dynamic loops with reality, normalizing, refining, and validating neuronal knowledge with each repetition.

- This knowledge is published widely by the neuron's axon delivering chemical signals to form a type of stateless, semiotic simulation using sparsely decoded maps of reality that help increase the probability of survival and reproduction.

- Our conscious worldmapculus is one such map becoming both the source and the object of this ultimate expression, in a Zen fashion. It's a simulation using dynamically looped signals to create an ethereal representation of reality in our skull, paradoxically.

Still too complex?

How about this:

Neurons create knowledge which is used to cue scripts of muscle movement we describe as behavior. These cues and scripts of multifaceted neural nets both compete and cooperate to yield a multifaceted brain needed to survive in a complex world.

We are each a thousand creatures that have evolved a million tricks over a billion years.

or even more simply:

Neurons create knowledge yielding a skull full of cues and scripts that help us survive and replicate.

That's about as simple as I can manage for now. Just think of your brain as a collection of competing and cooperating theatrical cues and scripts. Explore the interactions of these cues and scripts introspectively. It may provide a better understanding of how you deal with the world. Like mindfulness (closely related to knowing), this neo-gnostic approach will begin to make more sense and yield more useful results.

If you’ve read this post more than once, it may seem to have changed. That’s because it probably did. I used to have a section here about assertion salad which I broke out as a separate post I now use as a summary. What’s useful today may not be useful tomorrow, or worse, may even become distracting. If I'm correct about this prime assertion, the consequences are as cosmic as the brain itself. It informs all of human knowledge, science, philosophy, and art. I want to keep my thinking flexible and plan to treat this content as a dynamic document much like a monitored Wiki which will evolve as I get useful feedback. Initially, it will be progressively published here as a series of dynamic blog posts. Feel free to follow or link, and share as you will. Check back later for new versions.

If the above summary about the brain speaks to you in any way, you’ve likely spent a great deal of time thinking about philosophy, the brain, and/or human behavior. I hope I can help you along your path, and you, along mine. If you’re purely a spectator, that’s fine for now. But I hope you’ll get involved in this effort to understand the brain. Perhaps I should clarify who I am, and who you are as my audience. My work history is steeped in computers, business, and technology, but not biology. To a significant degree, I'm writing these blog posts for myself, and to myself. I read them often. But I also need to include you as a critical element in this exercise. That's part of this multifaceted process.

You are likely very interested in the topic or you wouldn't have read this far. I'm sure most will bail within the first few paragraphs. But those who truly understand the nature of this challenge will likely entertain even crazy ideas if it helps them in any way to understand the brain. That makes you more like me in your imagination and conviction regarding this quest. To be candid, I’m making much of this up as I go along, so I need your feedback. Here’s how I hope to inspire it:

I’ll start with an important question to help frame the problem which has been informed by this neo-gnostic model. The exploration of this question will be followed by some unlearning critical to finding a fresh start and solid ground. Then I'll describe why and how neurons create knowledge and actually define knowledge. Next, we'll take a trip starting with the first animal and then forward through history to imagine how evolution might have created this amazing result. I’ll present an evolving description of the brain starting with a single neuron and ending with a simple model of the human brain. If using this simple model itself to inform a fresh thesis seems like circular reasoning, it’s not. It’s merely a circular presentation. Modeling the brain ultimately starts with the neuron.

So will I.

I’ll describe the ideas that informed this model in the way I came to know them over my lifetime of subjective experience, especially the parts I had to unlearn. That’s the reason some of this presentation will be a memoir. Here’s a sample:


My very first memory was from when I was about three years old and sitting on a rock wall in front of my grandmother’s house where I lived. Above is a current photo. This wall was already falling down 67 years ago. Most of the rocks have now been used for other projects, but at the time I was straddling not only the wall but also that remaining concrete post that originally held a gate. At the time of this memory, all that was left of this gate was a single board of the frame held by one bolt at its center. Now only the bolt-hole remains. I don’t know what happened to the gate or the other bolts, but the remaining one allowed this board to rotate about the face of the post to a horizontal position. As a typical three-year-old fascinated by airplanes, I’d put my feet on this board which became a wing. I could bank left or right. This seat, post, and board became my airplane, not unlike Snoopy’s doghouse which I discovered years later. I recall flying my "airplane" and going to many places in my mind. I remember it well. Or do I?

A couple of years later my father took me on a real airplane ride with a friend of his. As a five-year-old, I had to sit in my dad's lap, but I got to fly a real airplane for a few minutes. Thirteen years later I had my pilot’s license, followed by an instrument rating. Flying for me has always been a joy, inspiring an immense sense of freedom.

I’ve since wondered many times about this first “memory” of "flying" and how it was stored in my brain. Did my later aviation ambitions affect the content or recollection? Decades later my grandmother told me I’d spent hours on that rock wall as a child. Did my memory simply come from hers? Or did I modify the genesis of my own memory? Are memories real? Or ethe-real?

It's unlikely she would have known about the dynamics of that board, nor did she mention it at the time, yet that aspect remains vivid, leading me to think the memory was mine. Or was this memory created anew at the moment before I typed this sentence into this blog post? A bit of both I suspect.

As we proceed, I will mostly ignore genetics, imaging, brain waves, and the rest of the more recent technical fields, especially anything having to do with the electron (once I carefully dismiss it). What’s left? Chemistry, connection, and the concept of knowledge. Oh, and a bit of theory about evolution informed by the practices of Tao and Zen. But first I need to challenge some common assumptions with a very important question, then plant a seed of doubt about the limits of information theory, and even science itself.

One last thing before you proceed. I may be wrong about neurons creating knowledge as a first principle, but if I AM wrong, what IS the first principle of the neuron? What exactly does its signal mean? And how can we build a model of the brain if we don't clearly understand this first principle? Finally, if not neurons, from where does knowledge spring? Whatever your perspective and convictions about the brain, these questions need to be asked. And answered. While you consider them, here's that first important question to be addressed in the next post:

How can the most profound and studied object in the world be so poorly understood?


The Gnostic Neuron - Part 2 - Our Missing Model of the Brain

Friday, February 03, 2023

The Gnostic Neuron - Part 2 - Our Missing Model of the Brain

Our Missing Model of the Brain

<Originally posted July 17, 2020>

How can the most profound and studied object in the world be so poorly understood?

I’m of course talking about the brain. And “profound” is an understatement. Without our brain, nothing else matters. Without your brain, there is no you. Our brain creates our reality, and mediates our interaction with the world.  We are our brains. This view of the brain is not new. In the 4th century BC, Hippocrates realized the significance of the brain surprisingly well:

“Men ought to know that from nothing else but the brain comes joy, delights, laughter, and sports, and sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good, what are sweet, and what are unsavory. ... And by the same organ we become mad and delirious, and fears and terrors assail us. ... All these things we endure from the brain. ...In these ways I am of the opinion that the brain exercises the greatest power in the man.” - On the Sacred Disease  - Hippocrates

In spite of all that has been learned since, this ancient and intuitive summary remains one of the best and most concise descriptions of how our mind experiences our brain. Not only does “the brain exercise the greatest power in the man”, but in everything every man has ever done. Pick a topic. As you think about it, your brain informs your understanding.

For instance, how does simple matter yield the complex experience we call our mind? Is matter not as simple as we think? Or is consciousness not as complex? Perhaps neither. And both. We seem to have too much data about the brain and no model. Let's try to correlate the components with the result. If you act on your thoughts by writing them down as I have just done, it’s that collection of neurons we call our brain that clearly has ultimate control. You can not think about, nor do anything that does not involve your brain. You can not be without your brain.

And that’s just your brain. And that’s just right now. While subjectively critical, most of our individual brains will have little impact on the world at large.  But collectively, all the brains that have ever existed have literally controlled everything that has ever been done. Our brains create our culture. Our brain is the key to all relationships, politics, economics, science, art, and philosophy.  Yes, profound without doubt, but only loosely linked with behavior.

As for studied, has any other object gotten more attention, especially in the last few decades? Does any other intellectual challenge present as much data? And has any other effort yielded fewer conclusions? We’ve had a “Decade of the Brain”, a “New Century of the Brain” and have even treated the brain like a “moon shot” during Obama’s “Brain Initiative”. Yet, none of this rhetoric mattered. We still don’t have a useful model, nor even much consensus about how it really works. Is the brain too complicated for the mind to comprehend? Unlikely, but let’s take a closer look.

The complexity of the brain is astounding. You’ve probably heard the quantifications. Each of our brains has trillions of connections between billions of neurons, to monitor millions of sensors, all to control thousands of movements using hundreds of muscles for one primary purpose - survival. The number of possible combinations and behaviors is greater than all the atoms in the universe. And that’s just one brain. Each seems to be a little different. And each is constantly changing.

Neuroanatomy has taken the brain apart and reduced it to components. Much of the brain has been mapped by function, at least in a macro sense. But when we look closer, these “areas” and other brain “components” have few clear boundaries. Or specific functions. Most are fuzzy at the edges where millions of fibers deliver signals from one part to another.

With heroic effort involving injury and death, various functions have been attributed to various lumps, gyri, and sulci. But this localization is only by degrees. If we try to get specific about what exactly happens where, exception becomes the rule, and rules become the exception. Brain function appears to be both localized and distributed at the same time. It’s a paradox.

Also, most of the brain is clearly divided left and right. These two halves are only connected at the bottom, center, and back. Plus the most obvious central connection, the corpus callosum, has a profoundly inhibitory nature. In both directions. Why? Even the cerebellum and brainstem have definite bilateral symmetry, and some functional division in their structure and operation. Sense and control are mostly separated in the spine, dividing the peripheral nervous system in a completely different fashion, and at ninety degrees to the bilateral symmetry of the brain itself. So is the brain truly divided? Or unified? The answer is obviously yes, without question. Which presents another paradox.

It’s not just the brain that’s complex, it’s also the neuron. In the nano context, we’ve collected an astounding amount of data involving types of neurons, neurotransmitters, glial structures, genetics, and of course, nano, micro, and macro chemistry, each with their own functional domain bleeding into the others. We understand how the neuron creates a signal but not what that signal means. We have a clear understanding of how all of this happens, but not exactly why. At least not in the nano context. Yet.

As we zoom back out conceptually, various groups of neurons “project” their axons from one area to another. Some detailed connections have been mapped, but between the nano and the macro context, most of the micro connectome remains in shadow. Should neuronal function be associated with the location of their cell bodies and dendrites? Or the majority of their axon terminations? Specifically, what connects to where? And why?

The Biology of Behavior

Now for understanding that result. Again, we seem to have too many answers. Even more challenging than neurophysiology or chemistry is characterizing function. The brain is where sensory input gets converted into muscle movement. We define this as behavior. This behavioral database of course includes all animal life, but even limiting it to human history, it’s still overwhelming in scale, content, and variety. The range of possible human behavior is astounding. Why did Socrates drink the poison? What led Joseph Stalin and Mao Tse-Tung to direct the death of tens of millions of their own people? Was it simply to remain in power? If so, how did Henry II conquer much of Europe with relatively little bloodshed? Simply different management styles? Why so many answers, but no good ones?

Generalize from a trillion behaviors, then apply them to yourself. Why do you do each thing you do each day? Your behavior is far from random, but its true course can be difficult to divine, and at the same time, amazingly easy to rationalize. Behavior ranges from primal to sophisticated and obvious to enigmatic, with no clear boundaries from one limit to the next, much like the physiology of the brain itself. And that’s an important clue. Function follows form.

As individuals, we each have an introspective experience. It’s our own private view of our brain from the inside. Much of the time our behavior seems reasonable and organized. But is it? How many times per day are you surprised by your own actions? Think carefully. True self-awareness does not come easily. Where might these surprising visions, thoughts, and actions come from? How much of our thinking is conscious? How much is hidden in layers of mystery even from our conscious subjective experience?

Scaling outward, how do your behaviors contrast and conform to those around you? And those more distant? Plus, each brain is changing dynamically from moment to moment. Repeating psychological experiments on the same subject often yields different results. Consistency is elusive, as recent brain imaging meta-studies have shown. The brain is plastic by degrees, and in critical phases. So is the resulting behavior. Parts of the brain enlarge and contract over time as if challenged by our environment, and how we choose to deal with it. London taxi drivers are an example. Form follows function.

Multiply these behaviors by the trillions of creatures and all the people that have ever existed. Now correlate that with what we know about neuroanatomy, chemistry, genetics, neuroscience, and all the other academic fields we’ve enlisted in this effort. Generalizing from such a broad and changing base of information is like trying to nail an ocean of Jello to an infinite moving wall. What goes where? And why? For how long?

And yet, the brain is not random. As Hippocrates noted, behaviors flow from within the skull. As does subjective experience. So far we have nothing to disprove his observation. We’re left with billions of neurons doing mysterious things to yield trillions of complex behaviors. In short, the brain is a tangle, a modern Gordian Knot, and perhaps just as difficult to unravel. Or in the case of the brain, to understand.

If you’re not familiar with the Gordian Knot, it’s a parable about a very large and complex ball of rope with one loop attached to an ox cart. It was said that anyone who could manage this knot and uncouple the cart would become the King of Gordium (in Phrygia, part of modern-day Turkey). This royal test was much like the Sword in the Stone, except the challenge was a tangle. When Alexander the Great encountered this test he simply drew his sword and cut the loop. Then he took the kingdom by force.

Though both the Sword in the Stone and the Gordian Knot crowned a King, you might be tempted to conclude that Alex cheated. If you require the solution to conform to the spirit of the problem as presented, you’d be correct. But it could also be argued that Alex was just thinking outside the box. Or that might makes right. The story contains several possible lessons depending upon your values, sensibilities, and perspective. And that’s the point. It’s only one example of our mind entertaining multiple ways of looking at a problem. And its solutions. That’s another important hint, but for now, the concept of the Gordian Knot is a useful way to encapsulate the mystery of the brain. Speaking of childish stories, let’s take a break before we continue with this important question.

Learning to Ride a Bike 

I got my first bicycle for Christmas when I was six years old, and of course, I didn’t know how to ride. My dad had gotten me a full-sized, 26-inch Schwinn. He probably figured I would grow into it, and so was trying to be cost-effective. Or perhaps he wanted to present not only a gift but also a challenge, which he certainly did. This Schwinn was made of steel, and nobody made training wheels for a bike that big. I could barely pick it up. And of course, other than watching bigger kids, I had no clue how to make it work. For weeks I just pushed it around with one foot on the pedal trying to get used to its weight.

My dad was busy with work but our babysitter from across the street said she would teach me how to ride. We went to the school grounds where there was lots of smooth, flat pavement and nothing to run into. She said the weight wasn’t as important as keeping my balance. I asked her to explain. She said words wouldn’t help much. I just needed to get a feel for it. This was my first encounter with intuitive learning, a kind of Zen experience.

She held the bike while I got on, then kept it upright as she pushed me up and down the basketball court. Unfortunately, I could only reach the pedals when they were in the top half of their rotation. So I had to start with the pedals near the top and could only push them halfway down, first one side, then the other. Between the weight, pedals, and balance I had my hands full, and so did she. We had to take breaks as she was doing all the hard work and heavy lifting. But to her credit, she kept at it. And so did I. I really wanted to be able to ride this shiny new Christmas present.

While resting at one point, she explained that steering was the key to keeping my balance. So words did matter, but something else mattered more. Getting the feel for it was apparently critical. She was right. A few more tries and I finally found that feel. I was gliding by myself before I knew it. Looking back she was no longer holding me up. I was jubilant. That’s when I fell over. There was no way my legs would reach the ground. But I didn’t care. She was right. I had felt that balance. I knew it was there. I knew I could find it again if only I could learn to get off (and on) without crashing.

Later, on my own, I discovered that if I leaned the bike against a fence with the pedal in the right place, I could get on, push off and keep going as I bounced from side to side to make the pedals work. I still had to find a good place to jump off and catch the bike before it hit the ground. I remember riding in big circles a lot delaying the dismount. 

Then I tried something radical. I put my leg through the frame below the top bar. This way I was able to pedal and also start and stop when needed. Well, mostly. It looked goofy as hell but it made the landings easier. Either way, I rode that bike for a long time hanging off the side. If you think this is too strange to have actually happened, remember that truth is often stranger than fiction. When my legs finally got longer I was able to sit on the seat again. And even then it was only to coast as I had to bounce from side to side to keep the bike going.

By the way, my younger brother also got the same bike, but a different color, for that Christmas. He soon copied my "through the frame" style of pedaling, and continued to use it long after I moved on to regular riding. Years later we both upgraded to Schwinn Stingrays, which were lower to the ground and solved all the problems. Too bad we didn't start with the Stingrays.

The point is, not all learning is logical. 

And there’s more than one way to ride a bike.

Or skin a cat.

Intuitive Modeling

“A theory can be proved by experiment; but no path leads from experiment to the birth of a theory.” - Albert Einstein

Open any book on neuroscience.  Usually within the first few pages will be some disclaimer about the lack of a useful brain model. Jeff Hawkins of the Redwood Center for Theoretical Neuroscience put it concisely in his 2004 TED talk about his book, “On Intelligence”: 

(Sorry for the interruption, but I need to add this note as of late March, 2021 because most of what you’re about to read was written years ago, though only posted in July of 2020. I wish to point out that Jeff Hawkins has now published a second book that gets much closer to my model than perhaps anything I’ve read so far. Unfortunately, Jeff does not make that final conceptual leap about neurons creating knowledge.

Jeff and I share both a background in computers and a fascination with the brain. I agree with most of his constraints and many of his assertions, but we part company when it comes to the nature of knowledge. And how he thinks about the brain.

Jeff’s new book is titled, "A Thousand Brains", though his thousand brains are quite different from the thousand creatures I'll be describing below. Like in his first book, Jeff presents some wonderful thought experiments which add to his very insightful observations, but his description of neuron firing as "spikes" indicates hours spent staring at an oscilloscope. This reflects his more technical approach. Even still, he gets close to my thesis near the bottom of page 125, “The knowledge is the model.” I did some counts on a few pages of this new book. Jeff actually uses the word “knowledge” more than he does “reference frames”, the key to his thesis. Though he does include a whole chapter about how to preserve knowledge, he doesn't seem to see how neurons might create it.

Jeff, (like many others), sets out to model the neocortex before understanding the nature of the neuron itself. The neocortex, being accessible and obvious, might seem like a good place to start, but it was also the most recent major structure in the brain to evolve. This puts Jeff at a grave disadvantage. His view of how language is “processed” is literally the opposite of mine. He sees language as top-down deconstruction, and actually attributes aspects of his column reference frame model to the genesis of language where “features” are “stored”, again reflecting computer metaphors. He then applies this architecture recursively - “it is reference frames all the way down.” Down to where? Unlike turtles, the brain is not infinite. And practical recursion normally has a base case. I've found language generation in the brain to be far simpler, starting literally with the neuron. In contrast with his view, I believe language is an external expression of internally generated neuronal knowledge. It reflects the gnostic nature of the neuron itself, as I’ll shortly present from the oldest to the newest parts of the brain.

In any case, I highly recommend both of Jeff’s books about the brain for their many useful concepts and insights about prediction.)

I now return you to my earlier quote from Jeff:


"We have tons of data and very little theory."  To drive home this deficit, Jeff then offered a quote from decades before by Nobel laureate Francis Crick, "We don't even have a framework."

Not even a framework! Well, this is embarrassing. Why all the intellectual abdication? After all, we have an emergent field of AI - "Artificial Intelligence". How can we create an artificial version if we don't understand the biological version? Yet, there's actually very little in common between human intelligence and the artificial kind. So what exactly IS the nature of intelligence and the brain?

Some generalizations must certainly be more probable than others, even if extraordinarily broad. Or completely wrong. Error tends to invoke useful counterpoint. Where are our sweeping generalizations about the brain? We need a new perspective. We need a fresh approach. We need a Rosetta Stone of the brain. And most of all, we need a radical idea to break this logical logjam of data. Some wild speculation would be more useful than none at all.

Ironically, modeling is one of the things the brain does best. We model the world constantly and intuitively. We can’t help it. This modeling ranges from casual and even subconscious, to formal, detailed, and explicit. The most useful models may even become external and detailed mathematical simulations. Or computer programs. Thus, the field of AI.

The more intuitive models take many forms. (Ironically, these intuitive models can also be useful for understanding the neuron itself, but back to the macro.) We model the actions of other people to predict their behavior, typically without realizing it. This is known as theory of mind. Other models are conscious but still casual. Their complexity ranges from sparse to rich depending upon how much attention we pay to each topic.

For instance, you may know more about psychology than I, or the detailed “proofs” of philosophy, but I likely know more about how a computer works. I’ve designed most major aspects of computer systems, from the bare metal of logic up through the ALU, processor, storage and I/O. But most of my career has been spent in software from hex coding and assembly through the BIOS, operating system, computer languages, and finally, application software. I understand the logic of AND, OR, NOT, along with how assignment, loop, and decision make up the essence of a Turing machine which represents any possible computer. 
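The computing essentials just mentioned - NAND-style gates yielding AND, OR, and NOT, combined by assignment, loop, and decision - can be sketched in a few lines of Python. This is my own toy illustration, not anything from the original text: NAND alone is universal, and the three control primitives are enough to compute with the gates it builds.

```python
# A toy demonstration that the primitives named above suffice.

def NAND(a, b):
    # The one "universal" gate: false only when both inputs are true.
    return not (a and b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def parity(bits):
    # Assignment, loop, and decision - the essence of a Turing machine -
    # computing the parity of a bit string using only the gates above.
    odd = False                      # assignment
    for b in bits:                   # loop
        if b:                        # decision
            odd = NOT(odd)
    return odd
```

For example, `parity([True, True, True])` reports an odd count of set bits, built from nothing but NAND and the three control primitives.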

I understand the details of how bits are converted from digital to analog yielding the emergent result of music, which may bring a tear to your eye. I also understand the limits of computers and computability. You can probably do something similar in some other field, whether it’s technical or artistic. It’s what each of us attends to that allows us to populate our respective models of the world.
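That digital-to-analog step can also be sketched briefly. The following is my own toy example, assuming signed 16-bit PCM samples at the CD rate of 44,100 samples per second (a common convention, not a detail from the original text): integers are scaled back into the normalized "voltage" levels a DAC would output.

```python
# A toy sketch of digital-to-analog conversion for 16-bit PCM audio.
import math

RATE = 44100  # CD sample rate, samples per second

def pcm16_to_analog(sample):
    # Map a signed 16-bit integer onto a normalized "voltage" in [-1, 1].
    return max(-1.0, min(1.0, sample / 32768.0))

# One cycle of a 440 Hz tone, quantized to 16-bit integers...
digital = [int(32767 * math.sin(2 * math.pi * 440 * n / RATE))
           for n in range(RATE // 440)]

# ...and reconstructed as analog levels, ready to smooth and drive a speaker.
analog = [pcm16_to_analog(s) for s in digital]
```

The emergent music, of course, lives in the ear and brain of the listener, not in the integers.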

Using associative maps, allegory, and metaphor, we also model the tools we use, the work we do, and the places we live, all to great advantage. But it’s still a different experience, a different model, and a different advantage for each of us. You think about things I dismiss. And vice versa. We each have our casual and more technical models of the world. And our model of the brain is part of that world, casual or not.

Since you’re reading this, you likely entertain some default but at least conscious model of the brain. It may involve the concept of hard-wired “circuits”, programming new habits, or just processing your thoughts and feelings. Did you notice that each of these is a tech metaphor? Or perhaps your model might involve more explicit concepts of electronics, brain waves, genetics, or imaging. Again, each is a field of technology. 

For decades, my casual model of the brain focused on chemistry, ionics, and logic, yet my model was never viscerally satisfying. The closer I looked the clearer it became that the brain had more in contrast than in common with technology. I came to realize these default tech metaphors and fields of study were distorting my thinking.

Blinded by Science?

"The model we choose to use to understand something determines what we find." - Dr. Iain McGilchrist

Science is built on characterizing and quantifying consistency. Once defined, these consistent objects become tools to be relied upon. Those things not well defined remain in shadow, even when they are important, like the brain. Has science taken us down some kind of blind alley? Could science, the prime tool of validation, be the very thing blinding us to the nature of the brain? Or is it that science has become so hyper-specialized that we can no longer generalize effectively? Why do we not have a big picture of the brain? Are computers getting in the way? Or perhaps the status quo of academia?

Today, billions of dollars are being spent to understand this slippery object. As noted, the 1990s were declared the “Decade of the Brain.” That decade produced yet another tsunami of data, but again, few conclusions. This data is also a logjam waiting to be released. We’re now well into the new millennium and we still don’t have a useful model of the brain. Below is a more recent quote from Ed Boyden who leads brain investigation at the MIT Media Lab:

How the Brain Is Computing the Mind

Despite the title, Ed explains very little about how the brain works, though he does acknowledge the challenge, and provides this same important clue - why would Ed presume the brain might “compute” the mind? And it’s not just Ed. Various forms of computer thinking remain our default metaphor of the brain in spite of its poor fit.

The contrasts between the brain and computer have been well known for decades. Nobel laureate Gerald Edelman effectively challenged the computer metaphor in several of his books. Yet this tech approach continues to guide most of the effort and consume most of the resources, as, unfortunately, Edelman's own effort did.

If we think of the brain as a computer, it follows that neurons somehow represent state machines, conforming to information theory. This is not the case. If the brain were some kind of computer, we would expect it to be fast, digital, synchronous, serial, hard-wired, and deterministic in its operation.

The brain is the very opposite in each of these major aspects. It’s relatively slow, surprisingly analog, mostly asynchronous, profoundly parallel, and quite plastic. Instead of consistent answers, the brain often yields an indeterminate result in a very uncomputer-like fashion. But it’s not just the computer metaphor that causes problems. It’s science itself. Let’s get back to Ed Boyden:

“The reason is that the brain is such a mess and it’s so complicated, we don’t know for sure which technologies and which strategies and which ideas are going to be the very best.”

The very best? How about any approach at all? And if we don’t know “for sure”, might it help to know something by degrees? Keep this demand for a deterministic model in the back of your mind for later consideration. Don't get me wrong. I love science and technology almost as much as logic, but I don't believe that science has an answer for everything. At times it even blinds us from the true nature of things. For now, let’s evaluate the rest of this quote.

As yet another example, Ed too leads with “technologies.” Why would we expect the brain to be understood in terms of technology? The brain certainly didn’t come out of a factory. The brain evolved. And yet technology has been the prime strategy for modeling the brain throughout most of recorded history. 

The brain has in turn been compared with an aqueduct, telegraph, clock, telephone, steam power, computer, and lately, the internet. Each has been the most advanced technology of its time. Some are now trying to understand the brain in terms of superconducting quantum computation. Though complex, I doubt the brain’s operation is quite that exotic. Or technical. And the distraction gets worse. It’s almost as if science itself has become our latest “new” technology.

The first test of science is consistency. Though not random, the brain’s operation is often not consistent. This is a major challenge for science, and perhaps one reason for our missing model. Science requires that experiments produce repeatable results. The brain violates this with impunity, switching from one answer to another as it dynamically tunes itself to its environment. Hypocrisy is common in human behavior. When we overlay technical metaphors, things get worse. Soon these metaphors become steeped in rationalization and confusion, when the true test of any model or metaphor is simplicity and utility in sorting out the data. Without a useful model, the data just piles up. We need a way to break this logjam, and the key is likely more intuitive than logical.

The technical approach to understanding the brain is like deconstructing a Boeing to understand how a bird flies, and just as useful. The Boeing applies thrust to a fixed-wing in a fairly crude fashion but also flies much faster. The bird’s solution to flight is far more subtle and elegant. But slow. Which is better? Neither. Each has advantages depending upon requirements for load, speed, and maneuverability. And that’s the point. There is no one perfect answer. Nature has evolved many different ways to fly as demonstrated by bumblebees, hummingbirds, and even gliding snakes. Human methods are just the most recent, and most clumsy in spite of their awesome power and speed.

This tech / biological contrast is not limited to the skies. Something similar happens on land, and even at sea in terms of movement. The wheel forever changed how we travel. Like human flight, it allows for greater speed, load, and distance at the cost of maneuverability. But not always. A bicycle is a hybrid of tech and biological methods for moving over land. It allows a human muscle to achieve greater speed than running, and also greater efficiency than any other application of the wheel.

At sea, a similar contrast exists, and also similar hybrid solutions. Powerboats will get you there faster using the brute force of a motor. Swimming is elegant but quite limiting. Sailing applies the best of both worlds when speed and capacity are critical requirements. Sailing works with the wind even when sailing into it - quite elegant.

Learning to fly provides one more interesting comparison when trying to find a useful model of the brain. Just over a century ago the consensus was that man would never fly. Heavier than air human flight was thought to be beyond our reach, but there had been many attempts, and even a few hints of how to proceed.

Still, after years of trying, by December of 1903, Samuel Langley had spent the entire Smithsonian budget plus $50,000 from the Department of the Army trying to fly using brute engine power and fixed flight surfaces. We might describe this as the technical approach to flight - a predetermined and consistent solution. Following many attempts, (some even modestly successful), the final version of his airplane crashed into the Potomac River; the pilot, Charles Manly, narrowly escaped drowning. After decades of effort, after spending a literal fortune from the government, and facing public ridicule, Dr. Langley finally gave up.

Only a couple of months before, on October 9th, 1903, the New York Times punctuated this failure (and wasted public money) by publishing an op-ed stating that man would never fly in a million years. Literally nine weeks later with far less funding, the Wright brothers proved them wrong. The Wright brothers applied a more hybrid “bicycle” approach to flight which was consistent with their background. Using a lighter gasoline engine, and having a human actively balancing the control surfaces in an organic fashion, (as suggested by the New York Times column), were the key elements that were different from Langley’s effort. Orville Wright finally took to the air. That first flight was controlled by a biological brain, not a perfectly calculated and trimmed airfoil.

The point is, whether you wish to travel by land, sea, or sky, solutions range from biological to technical. The biological is more subtle and effective. The technical approach applies more power and speed, but in many ways is far more crude; tech succeeds, but in a clumsier way.

This challenge of a brain model is similar. Electronic computers simulate the world using complex systems operating at the speed of light in a mostly serial fashion - the tech approach. But why might we think there’s only one way to simulate the world when there are so many ways to fly? The biological approach, which the brain uses, is slower but much more subtle and elegant. And in many ways, it’s far more effective, especially when survival is involved. How many ways are there to simulate the world? Might the current AI efforts of IBM, Google, and Tesla actually be a hybrid form of intelligence, as sailing is to swimming? What is the nature of this more biological type of simulation? What is the brain really doing? And what part does the neuron play?

These were the questions I should have kept in mind. But for decades I too searched for the logic systems of the brain, and for how its state machine was encoded by some special form or analog of logic. My approach was technical, but I was about to see the first hint of one possible bioanalogical, and stateless, alternative - DEcoding reality.

The Gnostic Neuron

Here’s one final acknowledgment of our missing brain model. It’s the opening line from “The New Century of the Brain,” Scientific American’s 2014 issue dedicated to the brain:

“Despite a century of sustained research, brain scientists remain ignorant of the workings of the three-pound organ that is the seat of all conscious human activity.”

Pessimistically, the article then immediately cites an interesting discovery as just another mysterious loop in our Gordian Knot:

“... the discovery of a “Jennifer Aniston neuron” was something like a message from aliens, a sign of intelligent life in the universe but without any indication about the meaning of the transmission. We are still completely ignorant of how the pulsing electrical activity of that neuron influences our ability to recognize Aniston’s face and then relate it to a clip from the television show Friends. For the brain to recognize the star, it probably has to activate a large ensemble of neurons, all communicating using a neural code that we have yet to decipher.”

If you haven’t heard about the “Jennifer Aniston neuron”, here’s a quick summary of this remarkable 2005 work by Rodrigo Quian Quiroga and colleagues at UCLA. It began when a patient was being prepared for brain surgery to treat epilepsy. As part of that process, selected neurons were monitored while the subject was shown photos of various places, people, and things. In this case, the chosen neuron fired when the patient was shown a picture of Jennifer Aniston as the character Rachel. Even more remarkably, that same neuron fired no matter how "Rachel" was presented. This was amazingly consistent for a neuron. Whether it was her spoken name, her written name, her photograph, or other likenesses, all of them worked as long as the cue seemed to capture some essence of the character Rachel. I immediately recognized this quality as the “invariance” described by Jeff Hawkins in his book, “On Intelligence”, referred to above.

This remarkable discovery nicely demonstrates a “gnostic” neuron, or “grandmother cell”. Such neurons are a subset of what are now called “concept” cells, and the grandmother cell itself began as a joke among neuroscientists in 1969. Yet this was no joke. This was real, and after 15 years it has yet to be effectively challenged. The results were independently verified when “Luke Skywalker”, “Bill Clinton”, and “Halle Berry” neurons were found in other tests. Some of these neurons even fired when a cartoon of the subject was presented. There are many other examples, and they all demonstrate invariant knowledge by firing when the essence of said subject or character was detected in ANY form.

The idea of a gnostic neuron is philosophically profound, literally, the expression of knowledge taking the form, in this case, of Jennifer Aniston's "Rachel". This neuron recognized that one specific character out of thousands of people that this particular epilepsy patient experienced during her life. That’s an impressive trick. How did this neuron come by this knowledge? And what significance does it have in breaking through this logical logjam of brain data?

I’ve included the above pessimistic assessment of the Jennifer Aniston discovery because I reached the very opposite conclusion. For me, this gnostic neuron was not a message from aliens. It was a critical hint. The moment I read about the Jennifer Aniston neuron I literally stopped in mid-bite. I was eating lunch. The moment remains vivid.

Knowledge is the key to philosophy, or at least its object of love. As a computer architect, I’ve had a lifelong professional interest in what computers have in common with the brain, and in how they contrast. Computers process information. Higher-level knowledge is similar to information, but not the same thing. I’d spent decades analyzing neurons as logic devices, trying to understand what kind of systems these neurons might form, or how to “code” memory as suggested by the Scientific American article above. But like so many other technologists, I had the wrong perspective: not only had I been mischaracterizing the neuron, I’d also been mischaracterizing knowledge.

In that instant, for me, the problem changed. Instead of dismissing this result as a message from aliens, I began to wonder what all the other neurons “knew”, and how they came to know it. 

At that moment, the neuron ceased being a slippery state machine and became associated with acquiring knowledge. I began researching what it might mean to “know” something, and how a neuron might perform this amazing trick.


Ignoring the “how” for the time being, let me present why knowledge might be the key to modeling the brain. Simply accept the assertion that neurons magically create knowledge at the instant they fire. Here’s an example:

Imagine a neuron that can sense smoke, another that can feel heat, and finally, a third that can detect light (all well characterized by neuroscience). When each of these neurons triggers, we can assume the corresponding thing is experienced by the person in question at that moment. When each of these neurons senses its respectively tuned condition, it creates a neural pulse in that instant, which can be thought of as knowledge taking the form of a signal reflecting that specific experience.

Now imagine a fourth neuron tuned to sense a specific pattern from these three neurons when they all occur within a constrained amount of time, the essence of synchronicity. This fourth neuron thus combines signals from these first three neurons, (and the events they indicate), to form an abstraction of knowledge we will call “fire”.

If all three source neurons trigger at the same time, they create an association, and this fourth neuron will trigger indicating that something is burning in the world, a useful bit of knowledge quite distinct from the knowledge of smoke, heat, and light.

Now further imagine that this fourth neuron is connected to a script of other motor neurons whose muscles compress the diaphragm, adjust the vocal cords and manage the tongue and lips of the person in question. When these three original source neurons trigger in unison, they trigger the fourth that captures the pattern, and this knowledge about this fire will escape the body and alert the rest of the tribe as the word “fire” issues forth from that person’s mouth. 

At that moment, the unique sound that makes up the word "fire" becomes a new thing in the world. A thing to be sensed by others so that they may mobilize without actually smelling, seeing or feeling the result of the physical fire. The word fire has become a useful abstraction re-presented to others without the need for the actual experience. These vibrations in the air connect one human to another, not unlike the chemistry that connects one neuron to another at the synapse.
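The synchrony test performed by this hypothetical fourth neuron can be sketched in a few lines of code. This is a toy illustration, not a biophysical model; the function name, the input labels, and the 50-millisecond coincidence window are all my own assumptions:

```python
# A toy sketch of the four-neuron "fire" example: a downstream neuron
# "fires" only when its smoke, heat, and light inputs have all spiked
# within one short coincidence window. The 50 ms window is an
# illustrative assumption, not a measured biological constant.

def concept_neuron_fires(spike_times, window=0.050):
    """Return True if every input spiked within one coincidence window.

    spike_times maps an input name to its most recent spike time in
    seconds, or to None if that input has never spiked.
    """
    times = list(spike_times.values())
    if any(t is None for t in times):
        return False  # at least one cue is missing entirely
    # The pattern is "recognized" only if all spikes are near-synchronous.
    return max(times) - min(times) <= window

# Smoke, heat, and light arrive within 20 ms of each other: "fire!"
assert concept_neuron_fires({"smoke": 0.010, "heat": 0.020, "light": 0.030})

# Heat arrives seconds later (a hot day, no smoke source): no "fire".
assert not concept_neuron_fires({"smoke": 0.010, "heat": 3.000, "light": 0.030})
```

The same test, applied one level up to the outputs of other concept neurons, hints at how abstractions might stack without any central encoding scheme.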

It’s easy to see from this simple example how each word might be represented by a single neuron dedicated to a specific bit of knowledge, and how language itself might be an external form of the brain’s internal architecture. To summarize, words in verbal or written form are an external expression of internal neuronal knowledge. Decursively. 

I realize this simple description requires a great leap of faith based on the radical notion that neurons create knowledge, so if it challenges your sensibilities, relax for now. I’ll continue with how I came to understand this model of the brain. Other detailed examples, including the important “how” of this example, will be provided.

Our Left-Brain and Right-mind

I thought about this possible gnostic nature of the neuron for another seven years. No, not all the time, but a lot. I was still trying to understand how a neuron might come to know something when I happened across an animated RSA video summarizing “The Master and His Emissary: The Divided Brain and the Making of the Western World”, by Dr. Iain McGilchrist. If you’ve watched this 12-minute video (which is brilliant for its own reasons), you’ll understand why I read (and reviewed) the book.

This book deals with the divided brain at the macro level, as opposed to Jennifer and her neuron where I’d been probing possible forms of logic at the nano level. And yet his descriptions were vaguely familiar. Dr. McGilchrist begins by noting the physical asymmetry of the brain, and uses it to support his model of operational asymmetry and malleable dichotomy throughout the rest of the book. 

I use the terms left-brain and right-mind for a similar reason. Our mind is the subjective, ethereal experience of our physical brain. Think in terms of music: the material brain is like a musical instrument; the ethereal result is the music of the mind. This left-brain, right-mind pairing reminds me how the world is managed by each respective side of the brain. The left-brain objectifies things of interest. They stand apart, so it needs labels, titles, and indexes as access methods to find things in a delayed but determinant fashion.

Our right-mind evaluates things according to their impact on us personally, and so subjectively and immediately in time and space. Experience, association, and imagination are its access methods - intuitively. The left-brain prefers things normalized and defined so they can be used as components in constructing other thinking. Our right-mind keeps its options open and close by as it watches for threats and opportunities, and in the process, solves the homunculus issue created by the left-brain.

Of course, we each also have a right-brain and left-mind, which accounts for the exceptions in this broad generalization. Dr. McGilchrist also notes that our left-brain naively thinks of itself as the whole brain, and prefers to define our world as logically consistent. The cause and effect of science are how our left-brained "Executive" models the world. He seeks THE answer. Objective technology is the result. 

In contrast, our right-mind knows the world is not entirely consistent, nor completely random. It lives in that undefined yet personalized middle ground. Mysticism, art, and intuition are the results. Our right-mind correctly treats our whole body and whole brain as a collection of subjective survival solutions, where one solution need not preclude another.

The first half of this book describes the brain in objective and definitive terms, hallmarks of the left-brain. The second half of the book presents culture and art in a more subjective fashion. It reaches for conceptual connection, as a mystic might.

Our left-brain’s "Scientist" dominates the implementation of neuroscience and its metaphors at the direction of our left-brain's "Executive". This leaves little room for the speculations of our more intuitive right-minded "Mystic" to direct our right-minded “Artist”. Ironically, and consistent with McGilchrist’s concept, our right-minded Mystic knows more about our left-brained Executive than that Executive knows about our Mystic. At least in a holistic sense. 

(Sorry Iain, the whole Nietzsche, Master/Emissary story works for your theme, but it conflates subservience in a hierarchical relationship with the more complementary but equal nature of a divided brain. If there is subservience, the left-brain seems to be unaware of it. As I’ll shortly present, the creation of knowledge occurs on that line between yin and yang, not master and servant. I agree with your Nietzsche story that our left-brain has run amok at times in recent history, and especially now. Though complementary in most ways, the two halves of the brain are ultimately far more equal, at least in opportunity, if not always in operation. A left-brained Scientist directed by his Executive, and a right-minded Artist directed by a Mystic, better describe both the subservient and egalitarian nature of this complementary architecture. Going forward, I'll use these four titles (and many others) in my personal org chart as opposed to only two.)

It was also just before reading “The Divided Brain” that I discovered Oliver Sacks’ wonderful writing about neurological deficits associated with physiological injury or disease. It wasn’t just Oliver’s writing I appreciated. His powers of observation and correlation were those of a modern Sherlock Holmes, except more subjective, which is what makes his stories so much fun to read.

Each Oliver Sacks case now took on new meaning in the context of left-brain and right-mind as I explored the gnostic nature of the neuron. Dr. Sacks intuitively used the model I’m about to describe without knowing it explicitly. I will provide examples in due course. He also suggested we move beyond objective and subjective to explore the brain with a trajective approach, but (believe it or not) I’m trying to keep things simple, so I won’t wander down that rabbit hole at this point. I will only summarize by saying that if McGilchrist is correct, there’s a rich opportunity, both intellectually and personally, in trying to learn what our right-mind might have to say about the last hundred and fifty years of brain research. If only it could speak.

Why Words Matter 

Returning to lateralization, here’s one example of why the dominance of left-brained language is so distracting:

You do not have an amygdala. Nor do you have a hippocampus. You don’t even have a cortex. You have two of each. Your skull contains two amygdalae, two hippocampi, and two cortices, one each for left and right. My spell checker does not even recognize these proper plurals. And how often do you encounter the more anglicized ones - cortexes, amygdalas, and hippocampuses? This oversight is especially grievous when referring to a single brain. These terms even sound alien. Do these plurals matter to modern neuroscience? Of course they do. Are they ever used? Quite rarely. It's as if our language-dominant left-brain is denying the very existence of our right-brain and its major components. Our left-brain can't bring itself to represent a side it doesn't believe even exists. Neuroscience is so steeped in its left-brained world that it won't admit the right-brain is even there, in terms of functionality or anatomy.

Think about the last time you read about the cortex. It was likely presented as a singular and unified whole, as I have just done. But as McGilchrist presents, the very opposite is true. The surface of these two cortices does not touch topologically, and their corpus callosum signaling is mostly about inhibition (a useful hint). It’s the same with the amygdalae and hippocampi, along with all the other lateralized and duplicated parts. The brain is vividly, profoundly, and obviously divided both physically and operationally. Words matter. They affect our thinking. And also how we feel. Our left-brain likes to believe it controls the entire body, including the whole brain. All of the time. The absence of these plurals is just one example of our left-brain with its language dominating our narratives, and our view of the world. 

This advantage with words doesn’t mean the left-brain is a villain with a superweapon called language. Quite the opposite. The left-brain is often left innocently wondering what happened in its struggles with what I’ve come to think of as the silent and secret tyranny of the right-mind. From the left-brain’s perspective, the right-mind doesn’t even exist. The left-brain tries to ignore both the right-brain and right-mind. From the left-brain’s perspective, this mysterious “other” side of the brain remains in shadow but will often simply take control at key moments, leaving the left-brain to quickly (and often inaccurately) rationalize the result, which it does with grace honed from practiced experience. Even when completely wrong. This quirky dynamic nicely explains cognitive dissonance, passive-aggressive behavior, and the ubiquity of hypocrisy in our culture, as well as many other enigmatic yet commonly observed aspects of human behavior. Contrast pain and pleasure. Oh, that we had time to walk down that rabbit hole right now.

The result of this architecture is a contest played out in the corpus callosum in the same way it’s played out between two neurons competing to create dichotomous knowledge in the nano context. Since the left-brain controls most language, it tends to dominate verbal and written description. We never get to hear, (or read about), the model of the brain that the right-mind intuitively understands. Fortunately, this more intuitive model still shows up as hints in our language and culture. A right-minded template of the brain is hiding in plain sight, as I’ll shortly describe using something I've come to call decursion.

To summarize, Dr. McGilchrist’s work not only described the divided brain, his theme suggests one possible reason we don’t have a model for the brain. It’s that our left-brain doesn’t like the answer it’s found, and so inhibits our more intuitive right-mind, which of course has no voice. For those who have studied neuroscience, their left-brain believes simulation requires access methods, electrical communication, and logic states, but can’t find where these states are stored, or even anywhere logic is consistently applied in the brain. Our right-mind knows better but gets inhibited conceptually on any topic dealing with electricity, brain data, logic, or science.

Ultimately, the more rational left half of our brain denies the fruits of right-minded intuition and ends up in a logic trap much like Zeno’s Paradox as presented in the "Divided Brain". If you’re not familiar with Zeno’s Paradox, I’ll present my father’s version as told to me when I was a teenager. My father was always telling dirty jokes, and I don’t believe he even realized the philosophical history behind this one:

"An engineer and a scientist were brought into a dark room containing a large one-way mirror. On the other side of the mirror was a small white room which was empty except for a beautiful naked woman standing against the back wall. Both men were instructed that they could shortly enter the room with the naked women but could step only half the distance to her with each step.

The scientist put his head in his hands and wept, knowing he could never achieve his objective. The engineer simply smiled, realizing he could get close enough for all practical purposes."

What each of these men “knew” was correct, yet they reached opposite conclusions. Knowledge is subjective, reflecting our individual talents, experience, and perspective.

Is it not likely that the reason for our missing brain model is a very similar logic trap? Fortunately, a right-minded “Mystic” (or artistic engineer) can span a towering paradox in a single and final stride, getting him close enough for all practical purposes.

I casually speak of our left-brained Executive/Scientist and right-minded Mystic/Artist as if Dr. McGilchrist has laid the matter of multiple entities in our skull to rest. And so he has. But which aspects of our behavior lie on which side? And why? Is dichotomy the only aspect of our brain that forms physical boundaries? The brain is clearly multifaceted both physically and operationally. But how many faces do we present to the world? And why do we generally have this subjective experience of a unified mind? Time for another story from my past:

Ricky Morrison 

I first met Ricky Morrison when I was seven years old and in second grade, but not on the first day.  He didn’t show up until late September.  Ricky actually started the year in the special-ed class but was soon mainstreamed into ours. I don’t know why the teacher put him next to me, but she asked me to be nice to Ricky, to help him with his work.

Ricky was ugly, awkward, and clumsy.  His head, and especially his forehead, were larger than normal, even for his big frame.  He outweighed me by at least fifty percent.  In retrospect, I think he had already been held back a grade, maybe more.  

Ricky tended to slobber and drool, which he usually caught with the sleeve of his canvas jacket.  The mucus buildup took on a sheen near the cuffs.  He almost never took off this jacket.  I once asked him why.  He told me he would get in trouble if he lost it, so he left it on, even during hot weather.

Ricky had another curious habit. If he wasn’t being forced to stay in his seat, he was always on his way somewhere else. And “somewhere else” constantly changed. He would move around the classroom from place to place, staying only a second before heading off to another. I would often see him heading down a hall, stop abruptly, then head in a different direction as if following some internal radio instructions. This was just one of his more bizarre behaviors. We never talked about it.

Ricky rarely spoke, but often stared intensely. When he did speak, his voice was high and had a nasal quality.  His words were hard to understand, but he would glare at you if the meaning was important.  I cannot remember a time when he actually smiled. His countenance was generally dull.  Well, at least when it wasn’t intense around the eyes. When he wasn’t trying to get my attention.

As his helper, I put his name on his papers so the teacher wouldn't lose track.  He had trouble writing his name.  Perhaps that’s why I remember it after all these years.  He rarely added much to the page, though he could when he wanted.  He wasn’t as dumb as everyone thought. I remember showing him where the "World Book" encyclopedias were. These books were my favorite thing in the classroom. We of course couldn’t read them at the time, but I showed him how the pictures could tell stories. Mostly, he wasn’t interested in school, but I did see him get correct answers on his papers now and then.

His general appearance and tendency to stare often provoked confrontation.  But his size and volatile nature usually kept the other kids at bay.  He lived in fear of adult authority, but little else.  You could see it in how his eyes would dart around when a teacher approached, and how he would bristle if they asked a question.  I came to wonder how many of his issues were caused by his limited abilities, and how many by innocent confrontations with impatient teachers.

Weeks later we were getting ready to leave for the day and he commanded, “come”.  I was curious, so I followed, with no detours this time.  We walked up to the local Frosty-Freeze on Main Street. An attractive woman emerged from the back and introduced herself. It was his mom. She seemed pleased that I was with him.  She brought out a large basket of French fries and set it on the table between us, then went back to the kitchen. I only got a few. Ricky ate them as if he were starved, and shortly pulled the basket across the table so I had to reach farther. He glared at me every time I took one.  It seemed that he realized he was supposed to share but wasn’t comfortable actually doing it.

On another day we were walking over to the Frosty-Freeze after school and some fourth-graders began teasing him. I was close by, so I grabbed him by the jacket, trying to move him along the path.  He slipped his arms out of the jacket and dropped into a threatening stance addressing the older boys. He was out of his jacket. He was out of his league. This was serious. 

Then he did something strange. He bit down on the base of his left thumb and made a fist with his right as he crouched down.  It was his idea of defense, and it worked.  Perhaps he had seen the posture on some TV show, adding the bite for effect. I’m not sure. The older boys laughed at him, but also backed away.  I grabbed him by the shirt and pulled him along until he noticed I had his jacket. He put it back on.  I never saw him actually fight anyone but heard about one time when he had gotten a bloody nose and had to go to the principal's office.

Starting in third grade I moved to a desert community east of Tucson, Arizona. I didn’t see Ricky again until years later when I returned to northern California and we both attended the same high school.  He clearly recognized me from years before but only said, “Hi.” That was it. No smile, nothing. I asked him how he’d been, but no reply. I don’t think we were ever really friends, at least not in the normal sense.  His social behaviors were largely missing.  I was simply part of his known world. Maybe he trusted me more than others. And perhaps naively, I trusted him.

For me at the time, Ricky was an example of how different people had different ways of not only perceiving, but also dealing with the world. True, most were not as different as Ricky, but I began to notice how he reacted to the same events I experienced, but in different ways. We each have our own methods of dealing with the world. I remember wondering at the time, what went on in Ricky’s head? I was fascinated by human behavior. Ricky was such a vivid example. The difference between us was a hint of something important, a subjective shadow in the back of my mind. Or have I just now created these distinctions all these years later? I’m not sure, but Ricky remains vivid in my memory.

I also observed how others reacted to Ricky. Most saw him as abhorrent, a creature to be avoided. But I didn’t. Ricky was interesting. I wasn’t enchanted by Ricky, but I did have compassion for him. In a selfish way, for me, Ricky was a subject of study. But whatever his IQ, Ricky was still a person. I felt he should be given the opportunity to explore the world like the rest of us.

In another time and culture, Ricky might well have been honored as an oracle, to be consulted in various ways as a source of alternate wisdom in difficult times. I now believe that we come to know things with our right-mind, and we come to understand things with our left-brain. And also the inverse. Wisdom is created in the tension between knowledge and understanding, a type of Yin and Yang, whichever side of our skull creates it. But certainly, the brain is more complex than just the simple trick of dichotomy. What about all the other possible tricks? Let's leave Ricky for the moment and return to how the brain is divided.

The Triune Model

Dealing with multiple brain parts was not new for me. I had introspectively tried something similar in 1977 when I first read Carl Sagan’s book, “The Dragons of Eden”. It was mostly a popularization of Dr. Paul MacLean’s “Triune” model of the brain. 

Dr. MacLean’s model divided the brain into three layers of ascension - the reptilian, paleomammalian, and neomammalian.  Each is ascribed characteristics associated with the implied group of species. The paleomammalian experience is subtle and quixotic; actions of the reptilian brain are more obvious. Think back to any of your movements that were so quick they surprised you. They are more likely to be reptilian. Comparisons with Freud’s Id, Ego, and Superego are obvious, and are often made in the process of dismissing the Triune theory by associating it with some of Freud's more challenging ideas. But might that be throwing out the baby with the bathwater? I've found that both Freud and MacLean have contributed significantly to understanding the brain. At least for me.

As I explored the possibilities at the time, I remember thinking that the main problem with the Triune model was that these three delineations had too many exceptions. As the brain has become more characterized and functions more localized, the lines between these entities have become blurred, and only having three layers, far too limiting. Plus, we now know that the top two and a half layers have independent versions for the left and right sides of the brain as noted above. Perhaps five creatures might have been a better fit. But allowing five is only tugging on a string that reveals fifty more. After that, things get complicated. 

As for limiting the number of major “parts” of the brain to three or five, other significant neuronal structures don’t reside within the skull at all, such as the neural control of the heart and gut. These obvious peripheral control functions are even more primal than Dr. MacLean’s reptile. In spite of these problems and others, the Triune concept has value. There is significant evidence that our brains are layered phylogenetically from the spine up, out, and forward along an axis of sophistication. These three, five, and perhaps more layers may represent successful tricks evolved by various other creatures back through our evolutionary past.


Our prefrontal brain (on both sides) seems to provide the most abstract executive (and mystical) functions. The basal ganglia (being somewhat less lateralized) deal with the more primal, far less sophisticated than even Dr. MacLean’s reptile. These anchor the endpoints within the skull along this split axis of sophistication.

Note that I don’t use the word “systems” to describe any of this complex functionality. One of the most valuable aspects of the Triune model (and Carl’s presentation) is that it’s biological and subjective from these creatures’ perspectives. “Systems” would take us back into the tech world with all its rigid definitions. I will use the term sparingly and present technical comparisons mostly for contrast.

Setting aside specific functions and their locations for now, the broader concept that the brain is somewhat layered in our evolutionary history remains useful. A whole class of behavior known as reflexes can be understood as largely independent creatures living in the spine. Along with the above five parts, should we not add the spine, heart, and gut for a total of eight? A “gut feel” is often how we describe conviction. In any case, we will soon visit a few others. But before we let our left-brain limit the number of these layered creatures, let’s consider more abstract behavior.

How Many Parts?

In “Frames of Mind: The Theory of Multiple Intelligences”, Howard Gardner describes eight or more types of intelligence. Might these types of intelligence be implemented by eight or more relatively independent areas of the brain?

And Dr. McGilchrist was not the first to suggest a dual nature of the brain. For completely different reasons, Freud was one of the first to recognize a mind with at least two aspects by contrasting our conscious and subconscious nature.

More recently, Daniel Kahneman’s book, "Thinking, Fast and Slow", describes the operation of the brain as two competing “systems” reflected in the title. Kahneman was also careful to make clear that his ideas have nothing to do with the vertical separation of the two cortices. But again, are “fast and slow” represented by a physical or operational boundary somewhere in the brain? Perhaps a lizard contrasted with a mammal? Maybe some other creature in our past? There are so many possibilities, and for now, it’s important that one concept need not preclude another, no matter how we slice up the mind and its underlying brain.

In, "A Thousand Brains", Jeff Hawkins, as his title suggests, models the brain in a thousand parts, but these parts are general and homogeneous, as opposed to specialized and dedicated in function. In a more macro context, he refers to an "old brain" as opposed to our "neo" cortex (singular!). His old-brain, new-brain model is similar in many ways to the Triune model (minus one brain part) and has similarities with Kahneman's fast and slow versions. Do any of these limits add clarity?

In any case, we have Iain McGilchrist effectively characterizing a brain divided left and right, Paul MacLean doing something similar in three levels from primitive to advanced, Howard Gardner describing eight types of intelligence with no physical allocation, an attribution also left undefined by Daniel Kahneman with his two “systems”, Hawkins with his old and new brains, and Freud with our conscious and subconscious. And this is the short list of those who have presented the brain as having multiple facets, with apparently multiple dimensions and aspects of control.

It’s also interesting that there is no agreement as to how the brain is sliced or managed by these various creatures from our phylogenetic history. Yet reference to multifaceted behavior is so common in our language and literature that whole sections of our vocabulary are dedicated to the idea. There are literally thousands of examples. 

Abraham Lincoln referred to, “the better angels of our nature”. He didn’t specify how many. Shakespeare liked to confuse us with pairs of twins, and their deceptions. Often these twins had contrasting natures. Or similar ones. Were these literary characters actually devices representing multifaceted meta-characters? And of course, René Descartes also recognized a duality in our nature, if not a reasonable physical implementation. Was his attempt to separate mind from matter driven by an ultimately multifaceted aspect of internal versus external modeling of our left-right divided brain? Or was he simply protecting the sanctity of the soul while being unable to move beyond the dichotomy of mind and body to entertain other parts? Our literature and history are rife with examples of multiple aspects of our mind’s experience, and likely, the multiple competing and cooperating behaviors created by a multifaceted brain.

So if the physical brain and some introspective experience are multifaceted, should not its operating model also conform? Perhaps we have no model of the brain because there isn’t one. Perhaps it’s because there are many, one for each part or creature in our evolutionary past. And if the brain has many independent parts, how many? And how independent might they be? What constitutes the sovereignty of pumping blood compared to moving the gut? Are we really looking for more masters? More emissaries? How about a few minions? Certainly viewing our divided brain as simply Master and Emissary, (or even Mystic/Artist and Executive/Scientist), does not account for the observed and extensively multifaceted nature of our behavior.

After all, we can drive down the road actively debating talk radio in our minds while eating a cheeseburger and picking our teeth. This requires at the very least five independent entities all operating in parallel, each deferring to the others as needed. Some will have to be inhibited in their operation at any given moment. Someone watches the road. Someone drives the car. Someone listens to the narrative. Someone bites the burger. Another manages dental hygiene. And that’s not even counting all the autonomic and/or peripheral functions happening in the background, such as heartbeat, breathing, and digestion to get that cheeseburger down. A single-digit creature count is likely a gross under-estimation and oversimplification.

If you’re now trying to imagine what it might feel like to have multiple creatures in your skull, don’t bother. You already know. If this thesis is correct, it’s what you experience each day. Mostly we feel unified with transitions that are surprisingly smooth. It’s only when you have to grab the wheel to get you back on the road that you realize there is much more happening below the surface of your conscious mind. Every moment of every day.

So how many operational parts make up our brain? It’s easy to imagine a brain/body with tens, or even hundreds of independent “systems” all working in various degrees of contention and harmony to yield a single and seemingly unified experience. To keep this multi-brain idea flexible and open-ended, let’s work with a nice round number, say a thousand creatures, while we explore functional boundaries and behavioral sovereignty. All while trying to avoid tech metaphors.

Our Complex Brain

A multifaceted brain explains so much about human behavior. It explains why people don’t keep promises. It explains why people lie. It explains a great deal about sex. It explains how people can change their minds so quickly. It explains how people can be so self-destructive. It answers those questions I posed above about Socrates, Henry II, Stalin, and Mao. It explains your doubt about what I’m presenting right now. It even explains mine.

From a tech perspective, a multifaceted brain resolves so many issues about the subtlety of our more obvious behaviors. The very nature of a complex system is that it’s made up of multiple subsystems, each with its own agenda. In order to understand the behavior of a complex system, it’s important to know when each subsystem is in control, when they are competing, and when they are cooperating.

If control of the body’s muscles can be instantly and dynamically switched from one system to another, the result can be highly adaptive. This is especially true when one model or architecture need not preclude the others. And that’s the trick in a macro sense. Such a parallel and/or contention-resolving method would also explain the extraordinary resilience and reliability of the brain. As Malcolm from “Jurassic Park” said, “Life finds a way”.

Returning to a more right-minded perspective, these "subsystems" are better thought of as “creatures”. That’s likely how they evolved. Being able to introspectively feel the experience is especially challenging when multiple things are happening at the same time. But it’s fun to deconstruct, to think about, and to feel.

This raises other issues. What is the nature of each of these thousand creatures? How do we characterize them? What are their operational boundaries? What are their capabilities? What are their limits? And if we actually have these creatures in our skull, how are they physically organized? Left/right? Up/down? Front/back? Core to periphery? All of the above? Even more significantly, how are they operationally organized? What connects to what? Who has control? Who gives consent? When and why? 

Finally, if we have these multiple entities in our brain, how might this control be arbitrated? For me, these were familiar issues of computer architecture. Contention resolution of parallel computer operation is one of the more challenging aspects of computer science. I’ve had to deal with it on several occasions over the decades with varying degrees of success. It’s not an easy problem to solve. At least not for a left-brained programmer, which is one reason the concept has been so poorly implemented, even in current multi-core silicon.

In an ironic comparison with the brain, (and for completely different reasons), most computer cores are quiet most of the time, waiting for other cores to complete a process, making overall operation inherently serial and dependent upon whatever result they might be waiting for. Then the transitions of control are relatively clumsy and crude. I won’t bore you with more clunky detail. In contrast, the biological brain manages arbitration of these many facets with a casual elegance that would lock a computer in a tight loop, or the logic trap of a “deadly embrace” - the classic hazard where two processes each hold a resource the other needs, and both wait forever. But let’s not slip back into the world of tech quite yet.
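The “deadly embrace” is what computer science calls deadlock. A minimal Python sketch (the worker names are mine, purely illustrative) of the hazard and its standard left-brained fix - imposing one global lock ordering so no core ever holds one resource while waiting on the other in reverse:

```python
import threading

# Two resources that two "cores" both need. If each core grabs one lock
# and then waits for the other, we get a "deadly embrace" (deadlock).
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name, first, second):
    # The classic fix: acquire locks in one global order (here, by id),
    # regardless of the order in which they were requested.
    ordered = sorted((first, second), key=id)
    for lock in ordered:
        lock.acquire()
    try:
        results.append(name)  # the "critical section"
    finally:
        for lock in reversed(ordered):
            lock.release()

# Each worker requests the locks in the opposite order - the recipe
# for deadlock if the ordering fix above were removed.
t1 = threading.Thread(target=worker, args=("core-1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("core-2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # -> ['core-1', 'core-2']
```

Note how rigid this is compared to the brain: the whole scheme depends on every participant obeying one fixed rule in advance.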

While reading the “Divided Brain” I recognized something else. These concepts of control forced me to step back from my detailed work with the neuron for a broader view of the brain. In one of Dr. McGilchrist’s lectures, he notes that while the lateralization of human behavior as a field of study has been largely ignored, the study of lateralization in other animals continued, and helped greatly in his research and writing. Why would we ignore lateralization in ourselves, but not less complex lifeforms? Is it the same reason we believe we are fundamentally different from other animals? Such thinking is the height of hubris, perhaps our left-brain blinding our right-mind in some way. It's far more likely that we are only different from other animals by degrees, even when these degrees yield apparently dramatic differences. It’s only the disproportionate effects of emergent results in these degrees that separate us from Bonobos and Chimps. That and another superpower of the left-brain - denial.

Our left-brain does not easily accept the idea of other creatures in our skull. But the reality of at the very least a divided brain is obvious. I too found the concept challenging when I first read, “Dragons of Eden”. The thought of a lizard in my brain was distasteful at best, but I did seriously consider the possibility. Perhaps that’s why I found the ideas in “The Master and His Emissary” subjectively less shocking. (Carl Sagan had also addressed the issue of left and right brain differences, but more to insightfully contrast serial and parallel operation of the brain which I’ll address in due course).

In any case, it’s not easy to think about sharing your skull with multiple entities. Still, the evidence is overwhelming in our language and our culture. Fortunately, this multifaceted approach allows us to more easily deconstruct the brain, and more effectively characterize its parts and processes. A multifaceted architecture solves so many problems in forming a useful model of the brain.

So what does the Triune brain (and the other division models) have to do with a simply Divided Brain? For me, it was deja vu all over again. The more I compared the vertically divided brain with the core-layered Triune model, the more similarities I found with a third venue - my work with neurons trying to acquire knowledge in a nano context. In all three cases, and as noted above, arbitrating control was the key. After months of study, I discovered that each of these three models might use a similar method of arbitrating control. If I could solve this challenge for two, three, or six creatures, I could solve it for a thousand. And I have.

Before we leave the topic of complexity, let’s contrast it with the concept of complicated. Complicated implies something opaque and nebulous. It’s how our right-mind dismisses the issue. In contrast, if we are effective at deconstructing complexity, things get simpler, a quality especially appreciated by our left-brain. How do we even keep track of what goes where? And does it really matter?

But Not Exclusively So

As Dr. McGilchrist described the nature of the divided brain on the macro level, I began to see parallels with what I was finding at the nano-level of the neuron. Could evolution be using the same tricks over and over? The dynamic tension created by the divided brain was in many ways similar to what happens within a neuron as some inputs try to activate, and others work to inhibit firing. I began to call this similarity “decursive” in contrast with recursive, to be described shortly. 

The thesis of the “Divided Brain” is that our left-brain has become more dominant during the last few thousand years, its dominance waxing largely because of the success of technology, and our left-brain’s lopsided advantage in expressing this success using language. Even if you don’t follow Dr. McGilchrist to his thematic conclusion, he clearly demonstrates the differences between the two sides of our brain, and how arbitrating control presents a puzzle: one side might do one thing, “but not exclusively so”. And the other, the opposite, but with the same exception - “not exclusively so”.

When I first read this bidirectional mitigating clause, I thought of it as a lack of conviction. I soon changed my mind. Dr. McGilchrist was describing something both subtle and significant about the two halves of the brain - they both compete and cooperate dynamically as control shifts from one side to the other in a macro context. But strangely, they do not collaborate (sharing labor), and their cooperation is often unintended, even unknowingly performed, as he describes in patients who have had their corpus callosum severed. This strange dynamic also occurs within the layers of the Triune (or more layered) brain in a micro context. And finally at the level of individual neurons in a nano context. That alternative minority case in each context can be quite important for survival.

For instance, language resides in the left-brain, but not exclusively so. And facial recognition occurs on the right, but not exclusively so. Both sides do both. But only to a degree, thus the exceptions might be best described as right-brained and left-minded.

Here’s another way to describe this subtlety. The left-brain likes to quantify and define things. It likes to find THE answer and deny the rest. It uses these definitions to construct dependencies which often become logically rigid in a serial fashion. In contrast, the right-mind keeps its options open. One solution need not preclude another. It’s constantly comparing and reviewing scenarios in parallel, dismissing the complex as merely complicated, and granting certainty only by degrees. You might say our left-brain likes to define things with "facts," or at the very least assertions, and our right-mind would be happy to end any generalization with, "or something like that." At least it would if it had a voice.

One way to think about this dichotomy (and lack of conviction) is that there is likely a majority (or more common) response from the side that is most commonly associated with any given challenge. But there’s also a minority (or backup) solution standing by if the majority response fails, or for some other reason is not cued. That cueing is the key to transferring control. I’ll be describing this in more detail later on, but suffice it to say, something similar happens within the peripheral nervous system, the layers of each side of the brain, neuronal nets within these layers, and even within the neuron itself (which is where I've spent most of my time). Let’s return to Ricky at my high school to understand the value of a hot standby, which is also the value of an alternative and minority method.
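In engineering terms, this majority/minority arrangement resembles a primary handler with a hot standby. Here is a hypothetical Python sketch (the creature names, cues, and responses are mine, not a claim about how the brain actually wires this):

```python
def make_arbiter(majority, minority):
    """Prefer the common, practiced response; cue the backup only
    when the majority path fails to produce an answer."""
    def respond(cue):
        result = majority(cue)
        if result is not None:
            return result       # the majority response handled the cue
        return minority(cue)    # the hot standby takes control
    return respond

# Hypothetical creatures: a social script, and a more primal backup.
def social_script(cue):
    return "wait politely" if cue == "remain seated" else None

def primal_script(cue):
    return "bolt for the exit"  # crude, but it always has an answer

respond = make_arbiter(social_script, primal_script)
print(respond("remain seated"))   # -> wait politely
print(respond("door blocked"))    # -> bolt for the exit
```

The point of the sketch is only the shape of the arbitration: the backup sits silent until the favored script fails to claim the cue.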

Ricky’s Triumph

A few years after returning to northern California, our entire high school junior class was in the cafeteria taking some annual test. As usual, I was seated at the geeks’ table with my cousin Dave Cline and a few others. When the time was up, we were instructed to put down our pencils and hand in our papers. We were then told we had to remain seated for the next 45 minutes. That’s when school officially got out. The teachers didn’t want us wandering around campus, which I find reasonable now, but didn’t at the time. While sitting at this table, I remember discussing continental drift and other weighty topics while we lamented this boring challenge to our personal freedom. (Interestingly, the idea that continents drifted like pond scum was a concept that had virtually no hard data to back it up when introduced about fifty years earlier. But by the late 1960s, it was a hot topic with lots of valid evidence. I expect something similar to happen with the idea that neurons create knowledge. Or not.)

Anyway, we were deep in debate when all of a sudden, I saw Ricky Morrison stand up across the room. He started for the main door, but three teachers literally ran to intercept him.  By this time Ricky was a big guy, more than 200 pounds, and fairly lean. But the teachers were experienced with his brash physical behavior. They were ready to block the door.  All of a sudden Ricky turned on his heels and went in the opposite direction. I’d seen this move before. Now he was in the lead. Ricky pushed through the emergency exit at the other end of the room and was gone. The alarm sounded. The door banged closed. The room became quiet. The teachers stopped and stared. The silence was broken by nervous laughter from some of the students.

One of the guys at our table sneered with derision, and stated to no one in particular, “Retard!”

My cousin Dave countered with this observation which I’ll always remember, “That ‘retard’ is enjoying his freedom while the rest of us sit here in envy. So who is actually smarter? And who is retarded?”

Dave was right. Here was a room full of students who failed to answer that final question on that day’s intelligence test, the one about personal sovereignty. Ricky was the only one who got it right. At that moment I realized intelligence depended on context and perspective. Sure, like Alexander the Great before him, Ricky broke the rules. But he also solved a problem. Ricky didn’t think outside the box, he lived outside the box. At least the socially acceptable box. This gave him an advantage. And he exploited it. That day, Ricky was no retard.

Even then I saw parallels with my computer work. I’d been studying Boolean algebra and had just finished designing my first ALU (Arithmetic Logic Unit).  I had demonstrated it for the science review and was selected to present it to the local Rotary Club. Yes, I was that geeky, even in 1967, well before geeky was cool.

I’d also been reading Freud and B.F. Skinner, but Desmond Morris’s “Naked Ape” was a favorite. I found human behavior fascinating, and like other computer geeks, wondered about the parallels between not only baboons and humans, but also between humans and machines. Even at that time, the possibility that a computer might become smarter than a human was being suggested.  But the question at hand, the question presented by Dave that afternoon was, who was the smartest guy in the room?

Ricky’s feint is often seen in football - head-fake an obvious objective, then break the other way. But Ricky wasn’t on the football team. He’d evolved this particular solution somewhere else. I’d seen the prototype years before. You might say Ricky’s odd way of moving had tricked the teachers. I don’t believe he even thought about his actions that day. At least not like you or I might. He just wasn’t that concerned. He had the same instruction as the rest of us (remain in your seat), but he used a different personal script, something more primal, something more innocent. And in this case, something more effective.

I realized at that point, Ricky had the same desire to leave as the rest of us, but his behavior did not take into account the social contract. He simply didn’t honor those constraints, that inhibition. This allowed him to overcome this one minor challenge in his life by using a different script.

Perhaps we all have such alternative scripts, but how do they get triggered? When? And why? I’ve since spent most of my life in computer design and business management, but humans continue to fascinate me most. Like many technologists at the time, I wondered about the behavioral “program” we had in our brain, and why Ricky’s was so different from mine. What did he know that we didn’t? Or was it his lack of knowledge? I’ve since discovered that it actually depends upon the nature of knowledge. And the individual.

One thing was certain, at that moment, for this particular challenge, Ricky was the smartest guy in the room. Or was he? Should I have used the word "certain" in the above sentence? Is any evaluation of the circumstances that certain? And should I remove “that” from this prior sentence when presenting a superlative? Or are all superlatives naive? Or even the word "all" in this last sentence. See how language and logic have a major blind spot? One can get lost for hours in such a paradox which can quickly be resolved by our right-mind if we let it. Can certainty be expressed by degrees? 

Of course not, but before we get distracted, don’t we all have a bit of Ricky hiding somewhere in our skull? When the standard approach fails, don’t we revert to something more crude and powerful? When push comes to shove, doesn’t it make sense to bolt for the exits and push through the door? Ricky was just a bit more claustrophobic than the rest of us on this particular afternoon. And less concerned with the consequences.

Here’s a question to ponder in the back (or quite probably, front) of your mind while I set up the concept of decursion:

For this situation, was Ricky really the smartest guy in the room?


Contie
I need to present a bit of housekeeping before I move on. I could have titled this section “contexts”, but it’s hard to hear the subtle plural of context so I’ve stolen a method used for singular Latin words that end in “us”, I believe a trick applied by the Romans for a similar reason. The reason we need a plural for context is that the complexity of the brain demands it, otherwise we’ll get hopelessly lost. Since I’m about to play fast and loose with the definition of decursion, I might as well coin another term - contie. There. Doesn’t that sound better? At least there's a clear contrast, and isn't that bit of knowledge useful at times?

Seriously, the brain scales many orders of magnitude in complexity. If we’re describing one context, how do we differentiate it from another? For instance, Dr. McGilchrist can be said to have described the brain in a macro-context - that of an individual human, and that’s fine as far as it goes. But it doesn’t go far enough. If I arbitrarily deconstruct the brain into a thousand creatures and the process is useful, is it really arbitrary? Utility is the test. It will be applied over and over as I proceed. For instance, we could describe these creatures as living in a milli-context, and their tricks in a micro-context, with individual neurons described in a nano-context. That’s a lot of contie.

Going the other direction for a moment, applying decursion outside the skull could be described in a kilo-context for tribes, a mega-context for different cultures or nations, and maybe even a giga-context for the internet, plus or minus an order of magnitude. Or two. There’s no need to get too specific at this point. We don’t yet know how much room we’re going to need in order to model this Gordian Knot we call a brain. For the sake of this presentation, I’ll define the following contie each separated by three orders of magnitude, with some exceptions:

Macro-context - an individual human, or a few people, and perhaps the two sides of the brain within the skull.

Milli-context - the realm in which our creatures live - the many layers of the brain.

Micro-context - where these creatures’ tricks are implemented, mostly the connectome, about a thousand neurons per trick, plus or minus.

Nano-context - within the neuron and between them, but not strictly so.

This gives us nine orders of magnitude to describe the brain and scale the concept of decursion within the skull, which I'll address next.

But before we leave the topic, I'll note that context and contie are more than just words. A context can also be understood as a bit of knowledge expressed by all the neurons that feed the neuron in question. Yep, context is knowledge too. I'll save the details for later.


Decursion
“To understand recursion, one must first understand recursion.” - Stephen Hawking

This paradox is literally a joke that makes fun of our left-brain. Our right-mind knows better because we obviously are able to understand recursion. Thus the humor. What exactly happens if you don’t have to define something in terms of itself? Or define it at all? Our left-brain goes around trying to define everything it encounters so it can be associated with a word and be manipulated, but that’s a naive behavior in many cases. Some things defy definition, yet obviously exist. Such as faith, conviction, or love.

Fortunately, these things can be largely understood without defining them using words and are better managed with our right-mind. Stephen Hawking demonstrates the limits of logic with his assessment of the recursion paradox. And he nicely captures the essence of the issue, which is all about capturing the essence of an issue. Before we end up in a hopeless tangle, let me unwind this pretzel logic by inverting recursion’s definition into something more manageable, and opposite, in a somewhat derivative and decurrent fashion.

If you’re not familiar with recursion, it is the act of defining something in terms of itself. A recursive process applies the same mathematical or computational step over and over, to ever-smaller pieces of the problem, until a base case is reached. It’s a little like (but not the same thing as) performing the long division algorithm over and over until the remainder is no longer significant. In contrast to long division, with recursion, the process is then unwound to provide a final answer. Recursion is one approach to solving various computer problems. Coastline fractals, like those of the Mandelbrot set, are a visual example. Or seeing how a fern frond appears to be made up of smaller fern fronds, decurrently. Another example is the Romanesco cauliflower. Biologically similar structures can be successively deconstructed at each level, recursively.
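The fern image can be made concrete with a small Python sketch of textbook recursion (my own toy representation, not anything from the neuroscience): the frond is defined in terms of smaller fronds, descending until a base case - a bare stem - stops it.

```python
def frond(depth):
    """A fern frond defined in terms of smaller fern fronds:
    top-down recursion that bottoms out at a base case."""
    if depth == 0:
        return "stem"  # base case: the descent stops here
    # Each frond is a stem carrying two smaller copies of itself.
    return ["stem", frond(depth - 1), frond(depth - 1)]

print(frond(1))  # -> ['stem', 'stem', 'stem']
```

Calling frond(2) nests the same pattern one level deeper; the unwinding of those nested calls is what assembles the final answer.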

McGilchrist describes our left-brain as being somewhat recursive in its approach to defining the world. I began to wonder what our right-mind’s alternative approach might be, which led me immediately to appropriate the word decursion and apply it in a contrastingly new context - the brain itself.

As the inverse of recursion, decursion constructs upward, in steps, from the smallest fern stem element to the whole frond. Both recursion and decursion contain the essence of the object in each context. It would actually be more consistent with other "re" and "de" prefixes to swap these two, but that would be too confusing at this point. It’s mostly a matter of where you start - at the bottom or the top. Evolution and biology more likely started at the bottom. Decursion honors that approach. And it has more utility for our current challenge than left-brained recursion, as I’ll shortly demonstrate. We also don't have to worry about the "turtles all the way down" issue when the sky's the limit.

Decursion is the opposite of recursion in several important ways. Recursion deals with the issue objectively and at arm's length, much like the concept of stimulus-response. Decursion honors the artist's more subjective approach. Decursion is the right-mind's more intimate and expansive alternative to our left-brain’s reductionist approach. Instead of turtles all the way down, it's knowledge creation methods all the way up.

Decursion replicates a similar method, but not in a mathematically or logically definitive way. Decursion is not simply the same as derivation either. Instead, it mimics that base case with increasing adaptation and sophistication. I believe that once evolution discovers a new trick, it doesn’t like to let go. Instead, it applies it over and over in a decursive fashion.
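Since "decursion" is a coined term, code can only gesture at it, but the bottom-up contrast with recursion can be sketched. Instead of descending from the whole toward a base case, we start from the base element and elaborate upward, each level mimicking (not strictly deriving from) the one below. The 'elaborate' rule here is a stand-in of my own:

```python
def decurse(base, elaborate, levels):
    """Build upward from the smallest element: each new level is a
    variation on the prior one, carrying its essence forward."""
    shape = base
    for _ in range(levels):
        shape = elaborate(shape)  # mimic and extend the level below
    return shape

# A stem becomes a frond of stems, then a frond of fronds, upward.
frond = decurse("stem", lambda s: [s, s, s], 2)
print(frond)  # -> [['stem', 'stem', 'stem'], ['stem', 'stem', 'stem'], ['stem', 'stem', 'stem']]
```

Note there is no base case to return to and no unwinding: the process only climbs, which is the "sky's the limit" quality the text describes.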

About a billion years ago evolution discovered a new way to evolve by inducing movement in response to knowledge about the world. Animals came into existence at the same moment as knowledge. Going forward, many lifeforms did not have to die in order for evolution to proceed. It was the beginning of a new kingdom and a new era, decursively.

As evolution applies decursion it captures the essence of the thing, just not in a perfectly defined form or algorithm. More significantly, decursion provides a template for understanding the brain. And it doesn’t stop there.

Here is a grim, but useful way to think about decursion - war. I've mentioned that neurons seem to both compete and cooperate to create knowledge. Putting aside my assertion for a moment, think of the political and practical aspects of war. It's largely implemented as acts of competition and cooperation. Let's take the idea a bit further and explore the parallels with the modern world of sports, where both teamwork and individual champions are important factors in success. Sports become a metaphor for war because sport is decursive of war (and likely also the inverse - war is decursive of sports - another chicken and egg?). Many battles between primal tribes were much like sporting events, even when the players were injured, or even killed.

This decursive template not only describes the neuron, micro-brain structures, and the divided brain itself; this evolutionarily decursive expression also escapes the skull and finds form in our language, art, media, dance, government, and even finance. Verbal language is decursive of neurons creating knowledge; written language is decursive of verbal language. Memes are a right-minded version of both. For me, decursion provided the map of development history which I desperately needed, a sort of Rosetta Stone translating between contie. The concept also finally yielded some useful metaphors of the brain. Here’s one example:

Instead of thinking of the brain as a computer, think of it as a Wall Street stock exchange where each of millions of remote investors is a neuron with their own unique ways of predicting the market in a nano context. They each "know" their version of how and when to buy low and sell high. Thousands of market managers in New York channel these decisions to a few brokers on the exchange floor where in a micro context these methods and systems compete and cooperate to find the highest and best use of capital as they dynamically price any given stock issue. 

At least that’s how it used to happen. Nowadays brokers are automated, but in the past, they too were humans representing neurons. When you stand back from the process, these competing and cooperating investors with their own financial methods can be thought of as creating useful knowledge, aka the investor's alpha. But until you understand how each method works, they are mystical tricks of the financial trade. And literally tricks of evolution that have directed capital needed for survival and the growth of civilization. No one investor has all the answers, or all the knowledge, but brought together as a system, it works surprisingly well. Most of the time.

OK, how about another hierarchically convergent and decursive example of brain architecture? Think of our bicameral Congress as a brain where each member represents a micro-context state, county, or precinct of nano-context voters competing and cooperating to promote the best form of government in an ultimately macro context defined as a bill that becomes the law of the land. Note how this example is steeped in dichotomy. I'll be describing how the brain and even neurons do something similar, for similar reasons.

And if you want to understand the hierarchy of the brain, think Army. Some decisions are made by privates, others by captains, but the most important ones go all the way up to the general, or even president. Neural layers work in a similar way, on both sides of the brain. Or both sides of the battle.

Here's another example of decursion. It's a metaphor of simulation that artists may appreciate - the theater. Performance art likely started around the campfire. It still goes on today in many forms. From mimicry to scary stories the audience is asked to suspend disbelief as they listen to a narrative and watch the expressions of the storyteller. (For the more technical, this could be described as a form of sparse coding. I mention it to bring you scientists along a path. Artists can ignore the sparse aspect for now.)

In time, such campfire performances became more formal oral histories, yielding verbal language in a talent now largely lost to history. Fortunately, some of these stories were captured in written form tens or hundreds of thousands of years later. For how long was language limited to oral presentation? Perhaps a million years? And it's only taken a few thousand years more for actual theatrical scripts to be written, capturing not only the words spoken, but also the stage direction and movement of Shakespeare's actors. This can be thought of as rebalancing McGilchrist's left-brain with a more cultural right-mind, which brings us finally to the theater.

Once a more formal stage was built and costumes made, the masks of ancient theater were left in the dressing room and suspending disbelief became easier. Modern comedy and drama are the results. Movies only refined and replicated the experience. Decursively. From campfire mimicry to modern virtual reality and electronic gaming, each of these art forms delivers an increasingly decursive version of the one that came before. How much they evoke emotion from their audience is a test of their quality. From pain to pleasure, from laughter to tears, what we see in the theater or the VR headset is a decursive re-cuing for what our ancestors experienced in life.

Sure, these performances sometimes have rough edges. And all of the above metaphors are rife with operational failure, but then so is the brain. As is the actual stock market, and the army. Need I say Congress is not immune? Fortunately, the brain makes up for it in each case with parallel resilience. And in various ways, so do the theater, the army, the stock market, and Congress. Failure, competition, and cooperation are how all of these examples hone knowledge, decursively.

Neurons create knowledge.

Cues are decursive of neurons creating knowledge.

Words are decursive of neurons creating knowledge.

Memes are decursive of neurons creating knowledge.

Maps are decursive of the brain's sparse signaling.

And there are so many more examples - virtually every form of knowledge. Even this blog post is decursive of neurons creating knowledge. Think in terms of experience driving expression, useful or not. Going forward, I'll be using the concept of decursion to help model the brain in the macro context, and also using these macro examples to inform the nano nature of the neuron. Keep an open mind.

The Tao of Zen and Zen of Tao

"What Zen Buddhist riddles reveal about knowledge and the unknowable"

Buddhism, Tao, and Zen are religions roughly associated with India, China, and Japan. Putting the actual practices aside, I find some of the ideas most interesting, especially with regard to the brain. My generalizations about Tao and Zen will not satisfy masters of either, but the underlying concepts provide examples of how an important aspect of neuron and brain architecture has escaped the skull and found form in eastern culture, summarized as the wisdom of Buddhism.

For our purposes, I’ll present Tao as literally meaning “the path” or “the way”. Enlightenment may come from walking this path, an inherently serial process. A path can also be described as a technique, an algorithm, or a process. Tao is more left-brained, but not exclusively so. The article “the” implies the superlative, the singular. This one and only path to enlightenment, (even if a different path for each person), is singular and superlative, and may have decursively evolved into the more monotheistic religions of the west. Our left-brain is always looking for “the one” as presented in the movie, “The Matrix”, an obvious reference to the messiah, a concept common across western cultures.

In contrast, the term Zen usually stands alone, (or sometimes with a following association, but no article needed). In some ways, Zen is both the essence and exception of Tao, and vice versa, similar to the contrasts of the divided brain. Zen is often described by what it isn’t, as opposed to what it is. In contrast to Tao, Zen is knowing the nature of a thing without walking a path, an aspect we might attribute to the right-mind. Also, one person’s enlightenment need not be the same as another’s. And one solution or enlightenment need not preclude another. Thus, article-less Zen. But back to Tao for now.

In the micro context, neurons connect from one to another forming pathways, actually, many pathways (in contrast with circuits as I'll address shortly). We have tens of millions of neural sensors. We have only a few hundred muscles, but these muscles can be applied in complex sequences to form about a million scripts of movement, plus or minus, making the brain inherently convergent from sensors to muscles. 

To clarify, these largely parallel paths begin with tens of millions of neurons sensing the world and converge down to moving a few hundred muscles (or collections of muscles, in a serial fashion). These muscle movements will hopefully affect the world in some way beneficial to the subject at hand. This affected world can then once again be sensed, completing a never-ending operational loop specifically including this individual within the world. Tao is serial in nature, and so exists over time in serial feedback loops. But not exclusively so.
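For readers who think better in code, this sensor-to-muscle funnel can be sketched as a toy program. This is only an illustration of convergence, not the author's model of the brain; the layer widths, the averaging rule, and the function name `converge` are all invented for the sketch.

```python
import random

# Toy convergent hierarchy: many "sensor" values funnel down,
# layer by layer, to a few "muscle" commands. The widths below are
# arbitrary stand-ins for the real (much larger) numbers.
def converge(values, width):
    """Pool groups of inputs down to `width` outputs by simple averaging."""
    group = max(1, len(values) // width)
    return [sum(values[i:i + group]) / group
            for i in range(0, group * width, group)]

sensors = [random.random() for _ in range(10_000)]   # sensing the world
layer1 = converge(sensors, 1_000)
layer2 = converge(layer1, 100)
muscles = converge(layer2, 10)                       # a few motor commands

print(len(sensors), "->", len(layer1), "->", len(layer2), "->", len(muscles))
```

In a real brain, of course, the "pooling" is not averaging, the hierarchy is neither orthogonal nor consistent, and the muscle outputs feed back into the sensors; the sketch only shows the many-to-few shape of the funnel.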

The knowledge that Zen acquires lives in the moment, and so exists in parallel with other similar knowledge at any given moment. With Zen, we observe all aspects of a thing, struggling not to define it before we are enlightened by its nature. Or not. Then somewhere a neuron fires and we understand its nature. It all happens at once, with no steps to be taken. Time is not a factor. Zen does not exist in a temporal frame as Tao must in order to be sequential. This Tao and Zen dichotomy exists all through the brain in our nano, micro, milli, and macro contexts, decursively.

In the nano context, a collection of inputs needs to be present at the same time in order for that neuron to fire, (or at least within a primed window of time, to push this metaphor from an instant to a moment). Zen occurs in that moment. Not the moment before, not the moment after.
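The coincidence requirement above can be pictured with a small sketch. This is a caricature, not a biophysical neuron model; the 5 ms window, the threshold of three inputs, and the function name `fires` are all assumptions made for illustration.

```python
def fires(spike_times, window=0.005, threshold=3):
    """Toy coincidence detector: return True only if `threshold` input
    spikes land within one `window` (seconds) of each other.
    The same inputs spread out in time never trigger it."""
    times = sorted(spike_times)
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            return True
    return False

# Three inputs within 4 ms: the "moment" happens and the neuron fires.
print(fires([0.100, 0.102, 0.104]))   # True
# The same three inputs spread over 300 ms: no coincidence, no firing.
print(fires([0.100, 0.200, 0.400]))   # False
```

The point of the toy is that firing depends not on how much input arrives in total, but on whether enough of it arrives in the same moment.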

In a micro context, thousands of inputs all happening at once can be thought of as the basis for what we call associative memory (which is not actually stateful memory at all, as I’ll present later). Before we lose the path of our Tao and digress into a mess of logic, let’s combine this Zen moment with our Tao path to yield a convergent hierarchy, the most common “network” in the brain. I use the term network here loosely in that the brain’s hierarchies are obviously not orthogonal, nor even very consistent. But they must ultimately and inherently be mostly convergent, as is the result.

In a macro context, these two competing approaches yield hierarchical competition and cooperation as the two sides of the brain operate in parallel, and also serially, not unlike McGilchrist describes them. The left brain tends to engage the world in a serial fashion, but not exclusively so. It helps us to manage time and provides a temporal framework for language, and our view of reality. In contrast, the right-mind tends to engage the world in a parallel fashion, but not exclusively so. At that moment.

What happens at the level of the neuron also happens at the micro and milli network contexts, as well as the macro level of the left and right brain. Oh, and it’s the very same decursively replicated architecture we see in the stock exchange and Congress, as presented above. Tao and the left-brain are more rational, but not exclusively so. The left-brain tends to present a thesis using logic. The right-mind is more insightful as it challenges with the antithesis, but not exclusively so. Together they form a synthesis yielding wisdom. Hopefully.

If my abused definitions of these concepts, replicated decursively in various contexts, seem a bit flighty, they’re meant to be. We’re looking at it from the top down. The objective at this point is to keep things general until we form a more useful framework for the brain. Decursion is one of the tools I’ve used to get there. If it doesn’t make sense right now, relax. When we build from the bottom up, the model will become a bit more obvious. For now, it’s time to revisit the smartest guy in the room. We’re not yet done with Ricky. Or Dave.

The Smarter Guy in the Room

Thinking back at this point in my life, Ricky Morison not only inspired my interest in human behavior; in later analysis, his actions also taught me something very important about knowledge. In a more subjective model of the brain, knowledge approaches truth like an asymptote, or Zeno's paradox - it never arrives. Definitions are merely ways to grasp things. Superlatives, like truth, are aspirations. All are illusions, starting out as knowledge. Each can be useful in its own context.

As the door slammed shut behind him, Ricky was now free to do whatever Ricky does. The rest of us were left to realize what Ricky could not have known without imagination. In this case, Dave’s imagination. Dave had implied that Ricky was the smartest guy in the room, but wasn’t Dave the smartest guy in the room for having the imagination to recognize Ricky’s brilliance? 

But wait! If Dave was the smartest guy in the room, then his recognition of Ricky’s superlative was invalid, obviously leaving Ricky and Dave in a paradox for the title.

See how easy it is to end up in a left-brain logic trap? They happen more often than you might realize. We normally dismiss them using denial. In order to be trapped, you have to think about them logically, and then limit yourself to only those rules. And that’s the error we might make - logic. Fortunately, we typically ignore logic traps and deal with them using our right-mind. Otherwise, we would all be constantly getting stuck in some catatonic state. Some "deadly embrace".

In any case, Ricky accomplished that which the rest of us only imagined, after the fact. And I believe he did it without using imagination at all. Ricky was obviously cheating, but was his cheating not intuitive? His solution certainly was simple, but obviously violated the rules. Which is the point. In a more Zen fashion, Ricky walked his Tao path right out of the room, and left conformance to the wind. The rest of us only observed. So does enlightenment flow from walking the path? Or simply understanding its nature? Does nothing matter until something moves? And does it matter how we gain the knowledge from this little exercise as long as we come to know it?

The point is, the standard for knowledge is constantly changing. As soon as it’s defined, it needs redefinition. Ask any day trader. Ask any Senator. Knowledge is as fluid and flexible as information is rigid and defined. Before debate, you may know one thing, after debate, the opposite. Though rare, it does happen. Minds do change. And for the little issues, constantly.

So, which makes someone smarter, the Tao of walking the path, or the Zen of knowing what it means? I would suggest that, like Zen, it is all of the above. And none. It depends upon your perspective. And to some degree, luck. The world is not deterministic.

According to Occam’s Razor, the simplest solution is likely to be the correct one. Did Ricky have a breakthrough as he set off that exit alarm? Is this what the knowledge of enlightenment looks like? There’s no correct answer of course. There is no smartest guy in the room. But what might happen if we treated the brain as a collection of such challenges? Such koans? Such tricks? Might we find that brain model we seek?

Enlightened by Art?

“If you can’t replicate the work and get the same outcome, then it’s not science.

If you can replicate the work and get the same outcome, it’s not art.” - Seth Godin

To clarify the above quote, I’ll paraphrase another of my favorites originally by Theodore Roethke: 

“Art is that, which everything else isn’t.”

Continuing from the last sections, this quote may seem a bit Zen. And that’s the point. Art is not a “thing” to be defined or managed by our left-brain. Art is often enigmatic. So is Zen. Art allows us to discover novel “things” which are initially undefined, but not exclusively so. As these new things move from our right-mind to left-brain they become “grasped” and better defined over time. Once characterized, these tricks become methods. These methods are then applied in the steps of an algorithm to be repeated in the more serial fashion of a machine. Once their more consistent nature is characterized, they cease to be art and become part of science. To gain traction for this challenge we will need to approach the problem as neuron-art in contrast with neuroscience.

The right-mind deals with novelty - things not yet defined. Artistic things are discovered in a moment of intuitive enlightenment. Once our right-mind shares these things by altering the focus of our attention, the left-brain defines them and uses them as components, or deconstructs them into their subcomponents, sometimes recursively. But not exclusively so. More later on how this transition (and many others) actually occurs as knowledge moves around the brain.

After reading the Divided Brain and realizing that once again our left-brain had failed to objectively model the brain, I began to wonder what a more subjective, a more biological model of the brain might entail? McGilchrist inspired me to explore this possibility. How does our right-mind see the brain? What is the Zen of the neuron, as opposed to the Tao of a neural pathway? This last question became my personal koan. And the notion allowed me to tease out the first principle of the neuron - that neurons create knowledge. 

Zen is about understanding the nature of things without defining them. Zen lives in the realm of the right-mind. Zen is a muse. What does Zen have to do with the brain? And Art? For me, it was the simplest approach to forming a useful model of the brain. To treat the brain as a collection of evolutionary tricks to be characterized and defined.

As for defining art, the above Theodore Roethke quote simply means that art begins where science leaves off. As noted above, the left-brain deals with science, but not exclusively so. The right-mind deals with the things in life which have not yet been defined, but not exclusively so. These proto-things can also be generalized as art.

Oliver Sacks reached the same conclusion about understanding the brain, and mind:

“What is this mystery which passes any method or procedure, and is essentially different from algorithm or strategy? It is art.” - from “Awakenings”

Over the decades I have made many attempts to model the brain, but none of them felt right. None were viscerally satisfying. None of them left me at peace with the problem. Until now.

That has been my test. Was this simply my right-mind objecting to my more technical left-brain conclusions? Does it matter, as long as we are able to build a model of the brain that is useful? What if we treat the brain as a Zen koan? What if we take a more subjective and indeterminate approach? What if we play with ideas instead of working with them?

I’m going to describe this gnostic model, hopefully without defining it, at least until we get down to the neuron. And even after that, I shall endeavor to keep things flexible as we come to understand evolution’s tricks and ultimately describe them as methods.

I believe many others, perhaps millions, have used this or similar approaches throughout history. Many have likely reached similar conclusions over time, but have described the experience as spiritual or even as enlightenment. Or worse, have not been able to describe what they discovered at all, because of our left-brain’s reticence to express the right-mind’s discoveries in language.

How many times have you been at a loss for words? This might have been more likely because what you wished to express was coming from your right-mind. Or you were not able to describe it logically or in a way that would satisfy your left-brain. Perhaps you struggled with some art form. But just because you couldn’t find the words doesn’t mean your conclusion was invalid or useless. It just wasn’t directly accessible by your left-brain. 

I will take you along a path that is similar to the one I have walked both logically and intuitively, but not definitively. Still, science will not be ignored. Virtually everything I present will ultimately be testable with repeatable results. Or it's not useful science. It's mostly how I get there that will be intuitive. Our left-brain may not admit that we already have a simple model of the brain in our right-mind, but it’s there. The left-brain may not describe it in words.

But I will.