... seeking simple answers to complex problems, and in the process, disrupting the status quo in technology, art and neuroscience.

Wednesday, December 23, 2020

Our Missing Model of the Brain


 


(Originally posted July 17, 2020)


How can the most profound and studied object in the world be so poorly understood?


I’m of course talking about the brain. And “profound” is an understatement. Without our brain, nothing else matters. Without your brain, there is no you. Our brain creates our reality, and mediates our interaction with the world.  We are our brains. This view of the brain is not new. In the 4th century BC, Hippocrates understood the significance of the brain surprisingly well:


“Men ought to know that from nothing else but the brain comes joy, delights, laughter, and sports, and sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good, what are sweet, and what are unsavory. ... And by the same organ we become mad and delirious, and fears and terrors assail us. ... All these things we endure from the brain. ...In these ways I am of the opinion that the brain exercises the greatest power in the man.” - On the Sacred Disease


In spite of all that has been learned since, this ancient and intuitive summary remains one of the best and most concise descriptions of how our mind experiences our brain. Not only does “the brain exercise the greatest power in the man”, it has done so in everything every man has ever done. Pick a topic. As you think about it, your brain informs your understanding.  If you act on your thoughts, it’s your brain that has ultimate control. You cannot think of, nor do, anything that does not involve your brain. You cannot be without your brain.


And that’s just your brain. And that’s just right now. While subjectively critical, most of our individual brains will have little impact on the world at large.  But collectively, all the brains that have ever existed have literally controlled everything that has ever been done. Our brains create our culture. Our brain is the key to all relationships, politics, economics, science, art, and philosophy.  Yes, profound without doubt.


As for studied, has any other object gotten more attention, especially in the last few decades? Does any other intellectual challenge present as much data? And has any other effort yielded fewer conclusions? We’ve had a “Decade of the Brain”, a “New Century of the Brain” and have even treated the brain like a “moon shot” during Obama’s “Brain Initiative”. Yet, none of this rhetoric mattered. We still don’t have a useful model, nor even much consensus about how it really works. Is the brain too complicated for the mind to comprehend? Unlikely, but let’s take a closer look.


The complexity of the brain is astounding. You’ve probably heard the quantifications. Each of our brains has trillions of connections between billions of neurons, to monitor millions of sensors, all to control thousands of movements using hundreds of muscles for one purpose - survival. The number of possible combinations and behaviors is greater than all the atoms in the universe. And that’s just one brain. Each seems to be a little different. And each is constantly changing.


Neuroanatomy has taken the brain apart and reduced it to components. Much of the brain has been mapped by function, at least in a macro sense. But when we look closer, these “areas” and other brain “components” have few clear boundaries. Or specific functions. Most are fuzzy at the edges where millions of fibers deliver signals from one part to another.


With heroic effort involving injury and death, various functions have been attributed to various lumps, gyri, and sulci, but this localization is only by degrees. If we try to get specific about what exactly happens where, exception becomes the rule, and rules become the exception. Brain function appears to be both localized and distributed at the same time. It’s a paradox.


Also, most of the brain is clearly divided left and right. These two halves are only connected at the bottom, center, and back. Plus the most obvious central connection, the corpus callosum, has a profoundly inhibitory nature. In both directions. Why? Even the cerebellum and brainstem have definite bilateral symmetry, and some functional division in their structure and operation. Sense and control are mostly separated in the spine, dividing the peripheral nervous system in a completely different fashion, and at ninety degrees to the bilateral symmetry. So is the brain truly divided? Or unified? The answer is obviously yes, without question. Which presents another paradox.


It’s not just the brain that’s complex, it’s also the neuron. In the nano context, we’ve collected an astounding amount of data involving types of neurons, neurotransmitters, glial structures, genetics, and of course, nano, micro, and macro chemistry, each with their own functional domain bleeding into the others. We understand how the neuron creates a signal but not what that signal means. We have a clear understanding of how all of this happens, but not exactly why. At least not in the nano-context. Yet.


As we zoom back out conceptually, various groups of neurons “project” their axons from one area to another. Some detailed connections have been mapped, but between the nano and the macro context, most of the micro connectome remains in shadow. Should neuronal function be associated with the location of their cell bodies and dendrites? Or the majority of their axon terminations? Specifically, what connects to where? And why?



The Biology of Behavior


Even more challenging than neurophysiology or chemistry is characterizing function. The brain is where sensory input gets converted into muscle movement. We define this as behavior. This behavioral database includes all animal life, but even limiting it to human history, it’s still overwhelming in scale and content. Why did Socrates drink the poison? What led Joseph Stalin and Mao Tse-Tung to direct the deaths of tens of millions of their own people simply to remain in power, while Henry II conquered much of Europe with relatively little bloodshed? Simply different management styles?


Generalize from a trillion behaviors then apply them to yourself. Why do you do each thing you do each day? Your behavior is far from random, but its true course can be difficult to divine, and at the same time, easy to rationalize. Behavior ranges from obvious to enigmatic, with no clear boundaries from one motive to the next, much like the physiology of the brain itself. And that’s an important clue. Function follows form.


As individuals, we each have an introspective experience. It’s our own private view of our brain from the inside. Much of the time our behavior seems reasonable and organized. But is it? How many times per day are you surprised by your own actions? Think carefully. True self-awareness does not come easily. Where might these surprising visions, thoughts, and actions come from? How much of our thinking is conscious? How much is hidden in layers of mystery even from our conscious subjective experience?


Scaling outward, how do your behaviors contrast and conform to those around you? And those more distant? Plus, each brain is changing dynamically from moment to moment. Repeating psychological experiments on the same subject often yields different results. Consistency is elusive, as recent brain imaging meta-studies have shown. The brain is plastic by degrees, and in critical phases. So is the resulting behavior. Parts of the brain enlarge and contract over time. Form follows function.


Multiply these behaviors by the trillions of creatures and all the people that have ever lived. Now correlate it with what we know about neuroanatomy, chemistry, genetics, neuroscience, and all the other academic fields we’ve enlisted in this effort. Generalizing from such a broad and changing base of information is like trying to nail an ocean of Jello to an infinite moving wall. What goes where? And why?


And yet, the brain is not random. As Hippocrates noted, behaviors flow from within the skull. As does subjective experience. So far we have nothing to disprove his observation. We’re left with billions of neurons doing mysterious things to yield trillions of complex behaviors. In short, the brain is a tangle, a modern Gordian Knot, and perhaps just as difficult to unravel. Or in the case of the brain, to understand.


If you’re not familiar with the Gordian Knot, it’s a parable about a very large and complex ball of rope with one loop attached to an ox cart. It was said that anyone who could manage this knot and uncouple the cart would become the King of Gordium (in what is now central Turkey). This royal test was much like the Sword in the Stone, except the challenge was a tangle. When Alexander the Great encountered this test he simply drew his sword and cut the loop. Then he took the kingdom by force.


Though both the Sword in the Stone and the Gordian Knot crowned a king, you might be tempted to conclude that Alex cheated. If you require the solution to conform to the spirit of the problem as presented, you’d be correct. But it could also be argued that Alex was just thinking outside the box. Or that might makes right. The story contains several possible lessons depending upon your values, sensibilities, and perspective. And that’s the point. It’s only one example of our mind entertaining multiple ways of looking at a problem. And its solutions. That’s another important hint, but for now, the concept of the Gordian Knot is a useful way to encapsulate the mystery of the brain. Speaking of childish stories, let’s take a break before we continue with this important question.



Learning to Ride a Bike 


I got my first bicycle for Christmas when I was six years old, and of course I didn’t know how to ride. My dad had gotten me a full-sized Schwinn. He probably figured I would grow into it, and so was trying to be efficient. Or maybe he wanted to present me with not only a gift, but also a challenge, which he certainly did. This bike was made of steel. I could barely pick it up. And of course, other than watching bigger kids, I had no clue how to make it work. For weeks I just pushed it around with one foot on the pedal trying to get used to its weight.


My dad was busy with work but our babysitter from across the street said she would teach me how to ride. We went to the school grounds where there was lots of smooth, flat pavement and nothing to run into. She said the weight wasn’t as important as keeping my balance. I asked her to explain. She said words wouldn’t help much. I just needed to get a feel for it. This was my first encounter with intuitive learning, a kind of Zen experience.


She held the bike while I got on, then kept it upright as she pushed me up and down the basketball court. Unfortunately, I could only reach the pedals when they were in the top half of their rotation. So I had to start with the pedals near the top and could only push them halfway down, first one side, then the other. Between the weight, pedals, and balance I had my hands full, and so did she. We had to take breaks as she was doing all the hard work and heavy lifting. But to her credit, she kept at it. And so did I. I really wanted to be able to ride this shiny new Christmas present.


While resting at one point, she explained that steering was the key to keeping my balance. So words did matter, but something else mattered more. Getting the feel for it was apparently critical. She was right. A few more tries and I finally got it. I was gliding by myself before I knew it. Looking back she was no longer holding me up. I was jubilant. That’s when I fell over. There was no way my legs would reach the ground. But I didn’t care. She was right. I had felt that balance. I knew it was there. I knew I could find it again if only I could learn to get off (and on) without crashing.


Later, on my own, I discovered that if I leaned the bike against a wall with the pedal in the right place, I could get on and push off and keep going as I bounced from side to side to make the pedals work. I still had to find a good place to jump off and catch the bike before it hit the ground when I wanted to stop. 


Then I tried something radical. I put my leg through the frame below the top bar. This way I was able to pedal and also start and stop when needed. Well, mostly. It looked goofy as hell but it made the landings easier. Either way, I rode that bike for years without ever sitting on the seat, except to coast. Years later I got a Schwinn Stingray which was lower to the ground and solved all the problems.


The point is, not all learning is logical. 


And there’s more than one way to ride a bike.



Intuitive Modeling


“A theory can be proved by experiment; but no path leads from experiment to the birth of a theory.” - Albert Einstein


Open any book on neuroscience.  Usually within the first few pages will be some disclaimer about the lack of a useful brain model. Jeff Hawkins of the Redwood Center for Theoretical Neuroscience put it concisely in his 2004 TED talk about his book, “On Intelligence”: 


(Sorry for the interruption, but I need to add this note as of late March, 2021 because most of what you’re about to read was written years ago, though only posted in July of 2020. I wish to point out that Jeff Hawkins has now published a second book that gets much closer to my model than perhaps anything I’ve read so far. Unfortunately, Jeff does not make that final conceptual leap about neurons creating knowledge.


Jeff and I share both a background in computers and a fascination with the brain. I agree with most of his constraints and many of the assertions, but we part company when it comes to the nature of knowledge.


Jeff’s new book is titled, "A Thousand Brains", though his thousand brains are quite different from the thousand creatures I'll be describing below. Like in his first book, Jeff presents some wonderful thought experiments and adds to his very insightful observations. His description of neuron firing as "spikes" as opposed to states is a good start, and he gets close to my thesis near the bottom of page 125, “The knowledge is the model.” I did some counts on a few pages of this new book. Jeff actually uses the word “knowledge” more than he does “reference frames”, the key to his thesis. Though he does include a whole chapter about how to preserve knowledge, he doesn't see that neurons create it.


Jeff (like many others) sets out to model the neocortex before understanding the nature of the neuron itself. This puts him at a grave disadvantage. His view of how language is “processed” is literally the opposite of mine. He sees language as top-down deconstruction, and actually attributes aspects of his column reference frame model to the genesis of language, where “features” are “stored”, again reflecting computer metaphors. He then applies this architecture recursively - “it is reference frames all the way down.” Down to where? Unlike turtles, the brain is not infinite. And practical recursion normally has a base case. Language generation in the brain is actually much simpler. In contrast with his view, I believe language is an external expression of internally generated knowledge. It reflects the gnostic nature of the neuron itself, as I’ll shortly present.


In any case, I highly recommend both of Jeff’s books about the brain for their concepts and insights about prediction.)


I now return you to my earlier quote from Jeff:




"We have tons of data and very little theory."  To drive home this deficit, Jeff then offered a quote from decades before by Nobel laureate Francis Crick, "We don't even have a framework."


Not even a framework? Well, this is embarrassing. Why all the intellectual abdication? Some generalizations must certainly be more probable than others, even if extraordinarily broad. Or completely wrong. Error tends to invoke useful counterpoint. Where are our sweeping generalizations about the brain? We need a  new perspective. We need a fresh approach. We need a Rosetta Stone of the brain. And most of all, we need a radical idea to break this logical logjam of data. Some wild speculation would be more useful than none at all.


Ironically, modeling is one of the things the brain does best. We model the world constantly. We can’t help it. This modeling ranges from casual and even subconscious, to formal, detailed, and explicit. The most useful models may even become external and detailed mathematical simulations. Or computer programs.


The more intuitive models take many forms. We model the actions of other people to predict their behavior, typically without realizing it. This is known as theory of mind. Other models are conscious but still casual. Their complexity ranges from sparse to rich depending upon how much attention we pay to each topic. 


For instance, you may know more than I do about psychology, or the detailed “proofs” of philosophy, but I likely know more about how a computer works. I’ve designed most major aspects of computer systems, from the bare metal of logic up through the ALU, processor, storage, and I/O. But most of my career has been spent in software, from hex coding and assembly through the BIOS, operating systems, computer languages, and finally, application software. I understand how the logic of AND, OR, and NOT, along with assignment, loop, and decision, makes up the essence of a Turing machine, which can represent any possible computer.
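As a loose illustration of that last claim (my sketch, not part of the original post): an assignment, one loop, and one decision are enough to interpret any Turing machine table. The example machine below simply flips every bit on its tape, then halts on a blank.

```python
# A minimal Turing machine interpreter built from nothing but assignment,
# a loop, and a decision. The program maps (state, symbol) to
# (new_symbol, move, new_state).

def run_turing_machine(program, tape, state="start"):
    pos = 0
    while state != "halt":                               # loop
        symbol = tape[pos]
        new_symbol, move, state = program[(state, symbol)]  # assignment
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1                  # decision
        if pos == len(tape):                             # grow tape on demand
            tape.append("_")
    return tape

# Bit-flipping machine: swap 0 and 1 moving right; halt on blank ("_").
program = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(program, list("1011_")))
```

The point is not the trivial machine but the interpreter: those few control constructs, given enough tape, can express any computation.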


I understand the details of how bits are converted from digital to analog yielding the emergent result of music, which may bring a tear to your eye. I also understand the limits of computers and computability. You can probably do something similar in some other field, whether it’s technical or artistic. It’s what each of us attends to that allows us to populate our respective models of the world.


Using associative maps, allegory, and metaphor, we also model the tools we use, the work we do, and the places we live, all to great advantage. But it’s still a different experience, different model, and a different advantage for each of us. You think about things I dismiss. And vice versa. We each have our casual and more technical models of the world. And our model of the brain is part of that world, casual or not.


Since you’re reading this, you likely entertain some default, but at least conscious, model of the brain. It may involve the concept of hard-wired “circuits”, programming new habits, or just processing your thoughts and feelings. Did you notice that each of these is a tech metaphor? Or perhaps your model might involve more explicit concepts of electronics, brain waves, genetics, or imaging. Again, each is a field of technology. 


For decades, my casual model of the brain focused on chemistry, ionics, and logic, yet my model was never viscerally satisfying. The closer I looked the clearer it became that the brain had more in contrast than in common with technology. I came to realize these default tech metaphors and fields of study were distorting my thinking, which reminds me of another useful story.



Analog Versus Digital


During much of grammar school and high school, my cousin Dave Cline and I shared more than just classes. We also shared a lab. Well, that’s what we called it. It was actually his sister’s playhouse, which she had long ago abandoned. This “lab” was a free-standing building of about 10 feet by 12 feet located behind his house. We built a long bench along the back wall. Dave took the right half, I the left. Over the years we built bikes, rockets, radios, and computers in our “lab”. By the time we were in high school, it was mostly used for electronics. I was into digital systems. Dave preferred analog. Many relate to the analog/digital dichotomy through analog music, which has seen a recent resurgence. But the difference goes much deeper, even to the core of physics and philosophy. Our interest in the dichotomy was all about electronics. It was a friendly competition of dueling designs.


At one point we had both purchased oscilloscope kits which we assembled. For those not familiar, an oscilloscope is a kind of TV for electronic waveforms. Because of our limited resources, these were very simple single-trace units. To make them more useful we decided to add a dual-trace input circuit so we could compare two waveforms on the screen at once. My design digitally switched from one input source to another quickly enough to time-share the oscilloscope beam. Dave took the classic analog approach of mixing a square wave with the two input sources to be observed.
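The two designs can be caricatured in a few lines of code (my own sketch, not our actual circuits): the digital approach time-shares the beam by alternating between channels, while the analog approach adds a square-wave offset so both signals appear on the one trace at once.

```python
# Caricature of the two dual-trace approaches, operating on lists of
# sampled voltages rather than real waveforms.

def chopper(ch_a, ch_b):
    """Digital approach: switch the beam between inputs, sample by sample."""
    return [a if i % 2 == 0 else b
            for i, (a, b) in enumerate(zip(ch_a, ch_b))]

def square_mix(ch, offset=5.0, period=2):
    """Analog approach: sum a square wave with the input so the trace
    alternates between an upper and lower position on the screen."""
    return [v + (offset if (i // period) % 2 == 0 else -offset)
            for i, v in enumerate(ch)]

print(chopper([1, 1, 1, 1], [9, 9, 9, 9]))   # interleaved channels
print(square_mix([0, 0, 0, 0]))              # one channel, shifted
```

Two philosophically opposite techniques, one practical goal: both put two waveforms on a single-beam screen.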


The design world today is almost completely digital, but in 1968, analog was the standard approach. Radio was analog. Television was analog. Even a few simple computers were analog. But the cool new computers were all digital. Philosophically, digital and analog are about as different as two approaches can be while still being called electronics. Both approaches were common at the time. As an exercise, we were reinventing the wheel with this new dual-trace circuit design.


The world in which we live is mostly analog. Most performed music is analog. Temperature is analog. Dance (movement) is analog. But all can be digitized in various ways. In nature, virtually everything is analog. You can think of analog as smooth waveforms, infinitely variable. Old fashioned volume knobs are a good example. Most of our interaction with the natural world is analog. From the chirp of a bird to the warmth of a kiss, we experience an analog world.


In contrast, the digital world is driven by logic and math. Anything in nature, such as sound or music, can be quantified by defining values of a certain resolution and range. Once digitized, these sounds from nature can be treated as numbers to be encoded, copied, and manipulated by computer programs. This digital world has another special quality - it’s deterministic, meaning that a song sounds exactly the same each time you play it. Manipulating these values using math yields a consistent result, more consistent than nature itself. Digital also allows for interchangeable components. In contrast, analog often has to be tuned for each application, and degrades over time. Digital is always the same.
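Here is a hedged sketch of that digitizing step (names and parameters are mine, chosen for illustration): sample a sine wave, then quantize each sample to 4 bits, i.e. one of 16 levels.

```python
# Digitizing an "analog" signal: sampling plus quantization.
import math

def quantize(value, bits=4):
    """Map a value in [-1.0, 1.0] to an integer code 0 .. 2**bits - 1."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((value + 1.0) / step)

# Analog source: one cycle of a sine wave, sampled 8 times.
samples = [math.sin(2 * math.pi * n / 8) for n in range(8)]
codes = [quantize(s) for s in samples]

# Determinism: digitizing the same signal again yields identical codes,
# which is why a digital recording plays back the same way every time.
assert codes == [quantize(s) for s in samples]
print(codes)
```

Raise `bits` and the sampling rate high enough (16 bits, 44,100 samples per second for CD audio) and the discrete codes become indistinguishable from the smooth original to our ears.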


Ironically, if you look closely enough at nature, some parts of the world become digital. Atoms and molecules are actually discrete. We only perceive them as analog because of their extraordinarily high resolution. Well, mostly. Smell, taste, and some aspects of light have certain digital qualities because of their molecular and quantum nature. Our neural sensors can detect a single molecule of odor, and a single photon of light. These vivid exceptions nicely demonstrate how exquisite our organic neural sensors can be.


But in a macro context, we live in an analog world. Why would we bother with digital? Digitizing the world has some dramatic advantages. It makes our analog world easier to capture, store, and manipulate. That’s why our interaction with the world today has been almost completely digitized by technology.


Designing these oscilloscope circuits at the time was a challenge. Integrated circuits and TTL were exotic and expensive, certainly beyond our budgets. Many of our parts came from old transistor radios. My memory system on another project was literally made from relays out of a pinball machine. For this project, we struggled to find transistors with matching characteristics. For these reasons, our designs had to be simple, even elegantly so.


It took a couple of weeks, but we both got our solutions to work reasonably well. Indeed, they had similar performance characteristics. To this point we had been quite secretive as to our implementation, even hiding our schematics. Now it was time to critique each other's work.


With a digital perspective, I of course started with a logic design then figured out how to cost-effectively implement it using linear transistors, which was all I had. Dave’s design treated the inputs as if he were mixing music channels but at higher frequencies, again implemented using similar linear transistors. That was not surprising. 


What got my attention was that when these designs were reduced to electronic schematics, the circuits were virtually identical. I was very much challenged by this outcome and compared the designs in various ways only to conclude that no matter how you approach this particular problem, the optimum result was similar. It reminded me of the quantum nature of light being both a particle and a wave, and the concept was to become important decades later when sorting out the nature of the neuron.



Blinded by Science?


"The model we choose to use to understand something determines what we find." - Dr. Iain McGilchrist

Science is built on characterizing and quantifying consistency. Once defined, these consistent objects become tools to be relied upon. Those things not well defined remain in shadow, even when they are important, like the brain. Could science, the prime tool of validation, be the very thing blinding us to the nature of the brain?


Today, billions of dollars are being spent to understand this slippery object. As noted, the 1990s were declared the “Decade of the Brain.” That decade produced yet another tsunami of data, but again, few conclusions. This data is also a logjam waiting to be released. We’re now well into the new millennium and we still don’t have a useful model of the brain. Below is the title of a more recent talk by Ed Boyden, who leads brain investigation at the MIT Media Lab:


How the Brain Is Computing the Mind


Despite the title, Ed explains very little about how the brain works, though he does acknowledge the challenge. And the title itself provides the same important clue - why would Ed presume the brain might “compute” the mind? It’s not just Ed. Various forms of computer thinking remain our default metaphor for the brain, in spite of the poor fit.


The contrasts between the brain and computer have been well known for decades. Nobel laureate Gerald Edelman effectively challenged the computer metaphor in several of his books. Yet this tech approach continues to guide most of the effort, and consume most of the resources.


If we think of the brain as a computer, it follows that neurons somehow represent state machines, conforming to information theory. This is not the case. If the brain were some kind of computer, we would expect it to be fast, digital, synchronous, serial, hard-wired, and deterministic in its operation. 


The brain is the very opposite in each of these major aspects. It’s relatively slow, surprisingly analog, mostly asynchronous, profoundly parallel, and quite plastic. Instead of consistent answers, the brain often yields an indeterminate result in a very uncomputer like fashion. But it’s not just the computer metaphor that causes problems. It’s science itself. Let’s get back to Ed Boyden:


“The reason is that the brain is such a mess and it’s so complicated, we don’t know for sure which technologies and which strategies and which ideas are going to be the very best.“


The very best? How about any approach at all? And if we don’t know “for sure”, might it help to know something by degrees? Keep this demand for a deterministic model in the back of your mind for later consideration. For now, let’s evaluate the rest of this quote.


As yet another example, Ed too leads with “technologies.” Why would we expect the brain to be understood in terms of technology? The brain certainly didn’t come out of a factory. The brain evolved. And yet technology has been the prime strategy for modeling the brain through most of recorded history. 


The brain has in turn been compared with an aqueduct, telegraph, clock, telephone, steam power, computer, and lately, the internet. Each has been the most advanced technology of its time. Some are now trying to understand the brain in terms of superconducting quantum calculation. Though complex, I doubt the brain’s operation is quite that exotic. Or technical. And the distraction gets worse. It’s almost as if science itself has become our latest “new” technology. 


The first test of science is consistency. Though not random, the brain’s operation is often not consistent. This is a major challenge for science, and perhaps one reason for our missing model. Science requires that experiments produce repeatable results. The brain violates this with impunity, switching from one answer to another as it dynamically tunes itself to its changing environment. Hypocrisy is common in human behavior. When we overlay technical metaphors, things get worse. Soon these metaphors are steeped in rationalization and confusion, when the true test of any model or metaphor is simplicity and utility in sorting out the data. Without a useful model, the data just piles up. We need a way to break this logjam, and the key is likely more intuitive than logical.


The technical approach to understanding the brain is like deconstructing a Boeing to understand how a bird flies, and just as useful. The Boeing applies thrust over a fixed-wing in a fairly crude fashion, but also flies much faster. The bird’s solution to flight is far more subtle and elegant. But slow. Which is better? Neither. Each has advantages depending upon requirements for load, speed, and maneuverability. And that’s the point. There is no one perfect answer. Nature has evolved many different ways to fly as demonstrated by bumblebees, hummingbirds, and even gliding snakes. Human methods are just the most recent, and most clumsy.


This tech / organic contrast is not limited to the skies. Something similar happens on land, and even at sea in terms of movement. The wheel forever changed how we travel. It allows for greater speed, load, and distance at the cost of maneuverability. But not always. A bicycle is a hybrid of tech and organic methods for moving over land. It allows human muscle to achieve greater speed than running, and also greater efficiency than any other application of the wheel.


At sea, a similar dichotomy exists, along with similar hybrid solutions. Powerboats will get you there faster using the brute force of a motor. Swimming is elegant but quite limiting. Sailing combines the best of both worlds when speed and capacity are critical requirements. Sailing works with the wind even when sailing into it.


Learning to fly provides another interesting comparison when trying to find a useful model of the brain. Just over a century ago the consensus was that man would never fly. Heavier than air human flight was thought to be beyond our reach, but there were many attempts.


After years of trying, by December of 1903, Samuel Langley had spent the entire Smithsonian budget plus $50,000 from the Department of the Army attempting manned flight using brute engine power and fixed flight surfaces. We might describe this as the technical approach to flight - a predetermined and consistent solution. Following many attempts, the final version of his airplane crashed into the Potomac River, nearly drowning his pilot, Charles Manly. After decades of effort, having spent a literal fortune from the government, a dispirited Dr. Langley finally gave up.


The New York Times punctuated this failure (and wasted money) by publishing an op-ed stating that man would never fly. A week later with far less funding, the Wright brothers proved them wrong. The Wright brothers applied a more hybrid “bicycle” approach to flight which was consistent with their background. Using a lighter gasoline engine, and having a human actively balancing the control surfaces were key elements that were different from Langley’s effort. Man finally took to the air. That first flight was controlled by an organic brain, not a perfectly calculated and trimmed airfoil.


The point is, whether you wish to travel by land, sea or sky, solutions range from organic to technical. Organic is more subtle and effective. The technical approach applies more power and speed, but in many ways is far more crude. Tech succeeds, but in a different way.


This challenge of a brain model is similar. Electronic computers simulate the world using complex systems operating at the speed of light in a mostly serial fashion - the tech approach. But why might we think there’s only one way to simulate the world when there are many ways to fly? The organic approach, which the brain uses, is much more subtle and elegant. And in many ways, it’s far more effective. Especially when survival is involved. How many ways are there to simulate the world? What is the nature of this more organic simulation? What is the brain really doing? And what part does the neuron play?


These were the questions I should have kept in mind, but for decades I too searched for the logic systems of the brain, and for how its state machine was encoded by this logic. My approach was technical, but I was about to see the first hint of a possible analogical and stateless alternative.



The Gnostic Neuron


“We are buried beneath the weight of information, which is being confused with knowledge; quantity is being confused with abundance and wealth with happiness. We are monkeys with money and guns.” - Tom Waits


Here’s one final acknowledgment of our missing brain model. It’s the opening line of “The New Century of the Brain”, the 2014 Scientific American issue dedicated to the brain:


“Despite a century of sustained research, brain scientists remain ignorant of the workings of the three-pound organ that is the seat of all conscious human activity.”


Pessimistically, the article then immediately cites an interesting discovery as just another mysterious loop in our Gordian Knot:


“... the discovery of a “Jennifer Aniston neuron” was something like a message from aliens, a sign of intelligent life in the universe but without any indication about the meaning of the transmission. We are still completely ignorant of how the pulsing electrical activity of that neuron influences our ability to recognize Aniston’s face and then relate it to a clip from the television show Friends. For the brain to recognize the star, it probably has to activate a large ensemble of neurons, all communicating using a neural code that we have yet to decipher.”


If you haven’t heard about the “Jennifer Aniston neuron”, here’s a quick summary of the remarkable 2005 work of Rodrigo Quian Quiroga at UCLA. It began when a patient was being prepared for brain surgery to treat epilepsy. As part of that process, selected neurons were monitored while the subject was shown photos of various places, people, and things. In this case, the chosen neuron fired when the patient was shown a picture of Jennifer Aniston as the character Rachel. Even more remarkably, that same neuron fired no matter how “Rachel” was presented. Whether it was her spoken name, her written name, her photograph, or some other likeness, all of them worked as long as the cue seemed to capture some essence of the character. This was amazingly consistent for a neuron. I immediately recognized this quality as the “invariance” described by Jeff Hawkins in his book “On Intelligence”, referred to above.


This remarkable discovery nicely demonstrates a “gnostic” neuron, or “grandmother cell”. Such neurons are described as a subset of “concept” cells, an idea that began as a joke at a neuroscience conference in 1967. Yet this was no joke. This was real, and after fifteen years it has yet to be effectively challenged. The results were reinforced when “Luke Skywalker”, “Bill Clinton”, and “Halle Berry” neurons were found in other tests. Some of these neurons even fired when a cartoon of the subject was presented. There are many other examples, and they all demonstrate invariant knowledge by firing when the essence of the subject or character was detected in ANY form.
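As a software sketch, the invariance of such a concept cell might look like the toy below. The cue strings and the mapping are entirely made up for illustration; the hard (and open) question is how a real neuron could ever learn such a mapping:

```python
# A toy model of an invariant "gnostic" unit: it fires for ANY cue that
# carries its concept, regardless of the cue's form. The cue-to-concept
# mapping is hand-built and purely illustrative.
CUES = {
    "photo of Aniston as Rachel": "Rachel",
    "spoken name 'Jennifer Aniston'": "Rachel",
    "written name 'Jennifer Aniston'": "Rachel",
    "cartoon likeness of Rachel": "Rachel",
    "photo of Halle Berry": "Halle Berry",
}

def gnostic_unit(concept, cue):
    """Fires (returns True) only when the cue captures this unit's concept."""
    return CUES.get(cue) == concept

# The same unit fires across wildly different presentations...
assert gnostic_unit("Rachel", "photo of Aniston as Rachel")
assert gnostic_unit("Rachel", "written name 'Jennifer Aniston'")
# ...but stays silent for other concepts.
assert not gnostic_unit("Rachel", "photo of Halle Berry")
```

Of course, the dictionary here is doing all the work the real brain must somehow perform. The sketch only illustrates what invariance means, not how it is achieved.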


The idea of a gnostic neuron is philosophically profound: literally, the expression of knowledge, taking in this case the form of Jennifer Aniston’s “Rachel”. This neuron recognized that one specific character out of the thousands of people this particular epilepsy patient had encountered during her life. That’s an impressive trick. How did this neuron come by this knowledge? And what significance does it have in breaking through this logical logjam of data?


I’ve included the above pessimistic assessment of the Jennifer Aniston discovery because I reached the very opposite conclusion. For me, this gnostic neuron was not a message from aliens. It was a critical hint. The moment I read about the Jennifer Aniston neuron I literally stopped in mid-bite. I was eating lunch. The moment remains vivid.


Knowledge is the key to philosophy, or at least its object of love. As a computer architect, I’ve had a lifelong professional interest in what computers have in common with the brain, and where they differ. Computers process information. Knowledge is similar to information, but not the same thing. Not only had I been mischaracterizing the neuron, I’d also been mischaracterizing knowledge. I’d spent decades analyzing neurons as logic devices, trying to understand what kind of systems these neurons might form, or how memory might be “coded” as suggested by the Scientific American article above. Like so many other technologists, I had the wrong perspective.


In that instant, for me, the problem changed. Instead of dismissing this result as a message from aliens, I began to wonder what all the other neurons “knew”, and how they came to know it. 


In that moment, the neuron ceased being a slippery state machine and became associated with acquiring knowledge. I began researching what it might mean to “know” something, and how a neuron might perform this amazing trick.



Fire


Setting aside the how for the time being, let me present why knowledge might be the key to modeling the brain. Simply assume that neurons magically create knowledge at the instant they fire. Here’s an example:


Imagine a neuron that can sense smoke, another that can feel heat, and a third that can detect light (all well characterized by neuroscience). When one of these neurons senses its respective condition, the person in question experiences that condition in the moment. Each neuron creates a neural pulse in that instant - a pulse that can be thought of as knowledge taking the form of a signal, reflecting that experience.


Now imagine a fourth neuron tuned to sense a specific pattern from these three neurons when they fire within a constrained window of time - the essence of synchronicity. This fourth neuron combines the signals from the first three neurons (and the events they indicate) to form an abstraction of knowledge we will call “fire”.


If all three source neurons trigger at the same time, they create an association, and this fourth neuron will trigger, indicating that something is burning in the world - a useful bit of knowledge quite distinct from the knowledge of smoke, heat, and light.


Now further imagine that this fourth neuron is connected to a script of motor neurons whose muscles compress the diaphragm, adjust the vocal cords, and manage the tongue and lips of the person in question. When the three original source neurons trigger in unison, they trigger the fourth neuron that captures the pattern, and knowledge of the fire escapes the body and alerts the rest of the tribe as the word “fire” issues forth from that person’s mouth.
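The fourth neuron described above can be sketched as a simple coincidence detector. Everything here is an illustrative assumption - the 50-millisecond window, the source names, and the idea that a bare timestamp can stand in for a neural pulse:

```python
# A minimal sketch of the fourth "fire" neuron: it creates the
# abstraction "fire" only when smoke, heat, and light pulses arrive
# within a narrow window of time. The window and names are arbitrary.
WINDOW = 0.050  # seconds; a crude stand-in for neural synchrony

def fire_neuron(last_pulse):
    """last_pulse: source name -> timestamp of that neuron's last pulse.
    Fires (returns True) only if all three sources pulsed within
    WINDOW of one another."""
    required = {"smoke", "heat", "light"}
    if not required <= last_pulse.keys():
        return False  # some source neuron has never fired
    times = [last_pulse[s] for s in required]
    return max(times) - min(times) <= WINDOW

# All three sensations nearly simultaneous: the association "fire".
assert fire_neuron({"smoke": 1.000, "heat": 1.010, "light": 1.030})
# Light arriving an hour after smoke and heat: no association, no "fire".
assert not fire_neuron({"smoke": 1.0, "heat": 1.0, "light": 3601.0})
```

The design choice worth noticing is that the fourth neuron stores no state between moments. It reacts only to the synchrony of its inputs, which is the stateless quality this chapter keeps circling back to.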


It’s easy to see from this simple example how words might each be represented by a neuron dedicated to a specific bit of knowledge, and how language itself might be an external form of the brain’s internal architecture. To summarize, words in verbal or written form are an external expression of internal neuronal knowledge.


I realize this simple description requires a great leap of faith based on the radical notion that neurons create knowledge, so if it challenges your sensibilities, relax for now. I’ll continue with how I came to understand this model of the brain. Other detailed examples, including the important “how” of this example, will be provided.



Our Left-Brain and Right-Mind


I thought about this possible gnostic nature of the neuron for another seven years. No, not all the time, but a lot. I was still trying to understand how a neuron might come to know something when I happened across a video summarizing “The Master and His Emissary: The Divided Brain and the Making of the Western World”, by Dr. Iain McGilchrist. If you’ve watched this 12-minute RSA video (which is brilliant for its own reasons), you’ll understand why I read (and reviewed) the book.


This book deals with the divided brain at the macro level, as opposed to Jennifer and her neuron where I’d been probing possible forms of logic at the nano level. And yet his descriptions were vaguely familiar. Dr. McGilchrist begins by noting the physical asymmetry of the brain, and uses it to support his model of operational asymmetry and malleable dichotomy throughout the rest of the book. 


He presents the idea that the left and right brain see and deal with the world differently, the essence of subjective experience versus objective reality. I use the terms left-brain and right-mind for a similar reason. Our mind is our subjective experience of our objective brain. This left-brain, right-mind association is to remind myself of the world as perceived by each respective side of the brain. The left-brain objectifies things of interest. Our right-mind evaluates things according to their impact on us personally. The left-brain prefers things defined so they can be used as components in constructing other thinking. Our right-mind keeps its options open as it watches for threats and opportunities. Of course, we each also have a right-brain and left-mind, which accounts for the exceptions in this broad generalization.


Dr. McGilchrist also notes that our left-brain naively thinks of itself as the whole brain, and prefers to define our world as logically consistent. The cause and effect of science are how our left-brained "Executive" models the world. He seeks THE answer. Objective technology is the result. 


In contrast, our right-mind knows the world is not entirely consistent, nor completely random. It lives in that undefined middle ground. Mysticism, art and intuition are the results. Our right-mind correctly treats our whole body and whole brain as a collection of subjective survival solutions, where one solution need not preclude another.


The first half of his book describes the brain in objective and definitive terms, hallmarks of the left-brain. The second half presents culture and art in a more subjective fashion. It reaches for conceptual connection, as a mystic might.


Our left-brain’s "Scientist" dominates the implementation of neuroscience and its metaphors at the direction of our left-brain's "Executive". This leaves little room for the speculations of our more intuitive right-minded "Mystic" to direct our right-minded “Artist”. Ironically, and consistent with McGilchrist’s concept, our right-minded Mystic knows more about our left-brained Executive than that Executive knows about our Mystic. At least in a holistic sense. 


(Sorry Iain, the whole Nietzsche Master/Emissary story works for your theme, but it conflates subservience in a hierarchical relationship with the truly complementary nature of our divided brain. As I’ll shortly present, the creation of knowledge occurs on that line between yin and yang, not master and servant. I agree with your Nietzsche story that our left-brain has run amok at times in recent history, and especially now. Though complementary in most ways, the two halves of the brain are ultimately far more equal, at least in opportunity, if not always in operation. A left-brained Scientist directed by his Executive, and a right-minded Artist directed by a Mystic, better describe both the subservient and egalitarian nature of this complementary architecture. Going forward, I’ll use these four titles in my personal org chart as opposed to only two.)


It was just before reading the Divided Brain that I discovered Oliver Sacks’ wonderful writing about neurological deficits associated with physiological injury or disease. It wasn’t just Oliver’s writing I appreciated. His powers of observation and correlation were those of a modern Sherlock Holmes, except more subjective, which is what makes his stories so much fun to read.


Each Oliver Sacks case now took on new meaning in the context of left-brain and right-mind as I explored the gnostic nature of the neuron. Dr. Sacks intuitively used the model I’m about to describe without knowing it explicitly. I will provide examples in due course. He also suggested we move beyond objective and subjective to explore the brain with a trajective approach, but (believe it or not) I’m trying to keep things simple, so I won’t wander down that rabbit hole at this point.



Why Words Matter 


Returning to lateralization, here’s one example of why the dominance of left-brained language is so distracting:


You do not have an amygdala. Nor do you have a hippocampus. You don’t even have a cortex. You have two of each. Your skull contains two amygdalae, two hippocampi, and two cortices, one each for left and right. Yet this spell checker does not even recognize some of these plurals - a hint of how rarely the fact that we have both a left and a right instance of each of these important structures is presented in text.


Think about the last time you read about the cortex. It was likely presented as a singular and unified whole, as I have just done. But as McGilchrist presents, the very opposite is true. The surfaces of the two cortices do not touch topologically. It’s the same with the amygdalae and hippocampi, along with all the other lateralized and duplicated parts. The brain is vividly, profoundly, and obviously divided, both physically and operationally. Words matter. They affect our thinking. Our left-brain likes to believe it controls the entire body, including the whole brain, all of the time. The absence of these plurals is just one example of our left-brain, with its language, dominating our narratives and our thinking.


This advantage with words doesn’t mean the left-brain is a villain with a superweapon called language. Quite the opposite. The left-brain is often left innocently wondering what happened in its struggles with what I’ve come to think of as the silent tyranny of the right-mind. From the left-brain’s perspective, the right-mind doesn’t even exist; it tries to ignore both the right-brain and the right-mind. This mysterious “other” side of the brain remains in shadow but will often simply take control at key moments, leaving the left-brain to quickly (and often inaccurately) rationalize the result, which it does with grace honed from practiced experience. Even when completely wrong. This quirky dynamic nicely explains cognitive dissonance and the ubiquity of hypocrisy in our culture, as well as many other enigmatic yet commonly observed aspects of behavior.


The result of this architecture is a contest played out in the corpus callosum, in the same way it’s played out between two neurons competing to create dichotomous knowledge in the nano context. Since the left-brain controls most language, it tends to dominate verbal and written description. We never get to hear (or read about) the model of the brain which the right-mind intuitively understands. Fortunately, this more intuitive model still shows up as hints in our language and culture. A right-minded template of the brain is hiding in plain sight, as I’ll shortly describe with something I’ve come to call decursion.


To summarize, Dr. McGilchrist’s work not only described the divided brain, his theme suggests one possible reason we don’t have a model for the brain. It’s that our left-brain doesn’t like the answer it’s found, and so inhibits our more intuitive right-mind, which of course has no voice. For those who have studied neuroscience, their left-brain believes simulation requires access methods, electrical communication, and logic states, but can’t find where these states are stored, or even anywhere logic is consistently applied in the brain. Our right-mind knows better but gets inhibited conceptually on any topic dealing with electricity, brain data, logic, or science.


Ultimately, the more rational left half of our brain denies the fruits of right-minded intuition and ends up in a logic trap much like Zeno’s Paradox, as presented in the “Divided Brain”. If you’re not familiar with Zeno’s Paradox, I’ll present my father’s version as told to me when I was a teenager. My father was always telling dirty jokes, and I don’t believe he even realized the philosophical history behind this one:


"An engineer and a scientist were brought into a dark room containing a large one-way mirror. On the other side of the mirror was a small white room which was empty except for a beautiful naked woman standing against the back wall. Both men were instructed that they could shortly enter the room with the naked woman but could step only half the distance to her with each step.


The scientist put his head in his hands and wept, knowing he could never achieve his objective. The engineer simply smiled, realizing he could get close enough for all practical purposes."
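The engineer’s arithmetic checks out in a few lines. The starting distance and the “close enough” threshold below are arbitrary choices for illustration:

```python
# Zeno's half-steps: after n steps, the remaining distance is d / 2**n.
# It never reaches zero (the scientist's despair), yet it drops below
# any practical threshold very quickly (the engineer's smile).
distance = 8.0               # meters; arbitrary starting distance
remaining = distance
for _ in range(10):
    remaining /= 2           # each step covers half of what remains
assert remaining == distance / 2**10  # 8/1024 m, about 8 millimeters
assert remaining > 0         # never actually zero
assert remaining < 0.01      # close enough for all practical purposes
```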


What each of these men “knew” was correct, yet they reached opposite conclusions. Knowledge is subjective, reflecting our individual talents, experience, and perspective.


Is it not likely that the reason for our missing brain model is a very similar logic trap? Fortunately, a right-minded “Mystic” (or artistic engineer) can span a towering paradox in a single and final stride, getting him close enough.


I casually speak of our left-brained Executive/Scientist and right-minded Mystic/Artist as if Dr. McGilchrist has laid the matter of multiple entities in our skull to rest. And so he has. But which aspects of our behavior lie on which side? And why? Is dichotomy the only aspect of our brain that forms boundaries?

The brain is clearly multifaceted both physically and operationally. But how many faces do we present to the world? And why do we generally have this subjective experience of a unified mind? Time for another story from my past:



Ricky Tolleson 


I first met Ricky Tolleson when I was seven years old and in second grade, but not on the first day.  He didn’t show up until late September.  Ricky actually started the year in the special-ed class but was soon mainstreamed into ours. I don’t know why the teacher put him next to me, but she asked me to be nice to Ricky, to help him with his work.


Ricky was ugly, awkward, and clumsy.  His head, and especially his forehead, were larger than normal, even for his big frame.  He outweighed me by at least fifty percent.  In retrospect, I think he had already been held back a grade, maybe more.  


Ricky tended to slobber and drool, which he usually caught with the sleeve of his canvas jacket.  The mucus buildup took on a sheen near the cuffs.  He almost never took off this jacket.  I once asked him why.  He told me he would get in trouble if he lost it, so he left it on, even during hot weather.


Ricky had another curious habit. If he wasn’t being forced to stay in his seat, he was always on his way somewhere else. And “somewhere else” constantly changed. He would move around the classroom from place to place but only stay a second, then off to another. I would often see him heading down a hall, stop abruptly, then head a different direction as if following some internal radio instructions. This was just one of his more bizarre behaviors. We never talked about it.


Ricky rarely spoke, but often stared intensely. When he did speak, his voice was high and had a nasal quality.  His words were hard to understand, but he would glare at you if the meaning was important.  I can not remember a time when he actually smiled. His countenance was generally dull.  Well, at least when it wasn’t intense around the eyes. When he wasn’t trying to get my attention.


As his helper, I put his name on his papers so the teacher wouldn't lose track.  He had trouble writing his name.  Perhaps that’s why I remember it after all these years.  He rarely added much to the page, though he could when he wanted.  He wasn’t as dumb as everyone thought. I remember showing him where the "World Book" encyclopedias were. These books were my favorite thing in the classroom. We of course couldn’t read them at the time, but I showed him how the pictures could tell stories. Mostly, he wasn’t interested in school, but I did see him get correct answers on his papers now and then.


His general appearance and tendency to stare often provoked confrontation.  But his size and volatile nature usually kept the other kids at bay.  He lived in fear of adult authority, but little else.  You could see it in how his eyes would dart around when a teacher approached, and how he would bristle if they asked a question.  I came to wonder how many of his issues were caused by his limited abilities, and how many by innocent confrontation with impatient teachers.


Weeks later we were getting ready to leave for the day and he commanded, “come”. I was curious, so I followed - no detours this time. We walked up to the local Frosty-Freeze on Main Street. An attractive woman emerged from the back and introduced herself. It was his mom. She seemed pleased that I was with him. She brought out a large basket of french fries and set it on the table between us, then went back to the kitchen. I only got a few. Ricky ate them as if he were starving, and shortly pulled the basket across the table so I had to reach farther. He glared at me every time I took one. It seemed that he realized he was supposed to share, but wasn’t comfortable actually doing it.


On another day we were walking over to the Frosty-Freeze after school when some fourth-graders began teasing him. I was close by, so I grabbed him by the jacket, trying to move him along the path. He slipped his arms out of the jacket and dropped into a threatening stance, addressing the older boys. He was out of his jacket. He was out of his league. This was serious.


Then he did something strange. He bit down on the base of his left thumb and made a fist with his right as he crouched down.  It was his idea of defense, and it worked.  Perhaps he had seen the posture on some TV show, adding the bite for effect. I’m not sure. The older boys laughed at him, but also backed away.  I grabbed him by the shirt and pulled him along until he noticed I had his jacket. He put it back on.  I never saw him actually fight anyone but heard about one time when he had gotten a bloody nose and had to go to the principal's office.


Starting in third grade I moved to a desert community east of Tucson, Arizona. I didn’t see Ricky again until years later when I returned to northern California and we both attended the same high school.  He clearly recognized me from years before but only said, “Hi.” That was it. No smile, nothing. I asked him how he’d been, but no reply. I don’t think we were ever really friends, at least not in the normal sense.  His social behaviors were largely missing.  I was simply part of his known world. Maybe he trusted me more than others. And perhaps naively, I trusted him.


For me at the time, Ricky was an example of how different people had different ways of not only perceiving, but also dealing with the world. True, most were not as different as Ricky, but I began to notice how he reacted to the same events I experienced, but in different ways. We each have our own methods of dealing with the world. I remember wondering at the time, what went on in Ricky’s head? I was fascinated by human behavior. Ricky was such a vivid example. The difference between us was a hint of something important, a subjective shadow in the back of my mind. Or have I just now created these distinctions all these years later? I’m not sure.


I also observed how others reacted to Ricky. Most saw him as abhorrent, a creature to be avoided. But I didn’t. Ricky was interesting. I wasn’t enchanted by Ricky, but I did have compassion for him. In a selfish way, for me, Ricky was a subject of study. But whatever his IQ, Ricky was still a person. I felt he should be given the opportunity to explore the world like the rest of us.


In another time and culture, Ricky might well have been honored as an oracle, to be consulted in various ways as a source of alternate wisdom in difficult times. I now believe that we come to know things with our right-mind, and we come to understand things with our left-brain. And also the inverse. Wisdom is created in the tension between knowledge and understanding, a type of Yin and Yang, whichever side of our skull creates it. But certainly the brain is more complex than just the simple trick of dichotomy. What about all the other tricks?



The Triune Model


Dealing with multiple brain parts was not new for me. I had introspectively tried something similar in 1977 when I first read Carl Sagan’s book, “The Dragons of Eden”. It was mostly a popularization of Dr. Paul MacLean’s “Triune” model of the brain.


Dr. MacLean’s Triune model divided the brain into three layers of ascension - the reptilian, paleomammalian, and neomammalian. Each is ascribed characteristics associated with the implied group of species. The paleomammalian experience is subtle and quixotic; the actions of the reptilian brain are more obvious. Think back to any of your movements that were so quick they surprised you. They are more likely to be reptilian. Comparisons with Freud’s Id, Ego, and Superego are obvious, and are often made in the process of dismissing the Triune theory by associating it with some of Freud’s more challenging ideas. But might that be throwing out the baby with the bathwater? I’ve found that both Freud and MacLean have contributed significantly to understanding the brain.


As I explored the possibilities at the time, I remember thinking that the main problem with the Triune model was that its three delineations had too many exceptions. As the brain has become better characterized and its functions more precisely localized, the lines between these entities have blurred, and only having three layers has proven far too limiting. Plus, the top two and a half layers have independent versions for the left and right sides of the brain, as noted above. Perhaps five creatures might have been a better fit. But allowing five only tugs on a string that reveals fifty more. After that, things get complicated.


As for limiting the number of major “parts” of the brain to three or five, other significant neuronal structures don’t even reside within the skull at all, such as the neural control of the heart and gut. These obvious peripheral control functions are even more primal than Dr. MacLean’s reptile. In spite of these problems and others, the Triune concept has value. There is significant evidence that our brains are layered phylogenetically from the spine up, out, and forward along an axis of sophistication. These three, five, or perhaps more layers may represent successful tricks evolved by various creatures back through our evolutionary past.

 

Our prefrontal brain (on both sides) seems to provide the most abstract executive (and mystical) functions. The basal ganglia, being less lateralized, deal with the more primal - far less sophisticated than even a reptile. These three anchor the endpoints within the skull along this split axis of sophistication.


Note that I don’t use the word “systems” to describe any of this complex functionality. One of the most valuable aspects of the Triune model (and Carl’s presentation) is that it’s organic and subjective from these creatures’ perspectives. “Systems” would take us back into the tech world with all its rigid definitions. I will use the term sparingly and present technical comparisons mostly for contrast.


Setting aside specific functions and their locations for now, the broader concept that the brain is somewhat layered in our evolutionary history remains useful. A whole class of behavior known as reflexes can be understood as largely independent creatures living in the spine. Along with the above five parts, should we not add the spine, heart, and gut for a total of eight? A “gut-feel” is often how we describe conviction. In any case, we will soon visit a few. But before we let our left-brain define a count of these layered creatures, let’s consider more abstract behavior.



How Many Parts?


In “Frames of Mind: The Theory of Multiple Intelligences”, Howard Gardner describes eight or more types of intelligence. Might these types of intelligence be implemented by eight or more relatively independent areas of the brain?


And Dr. McGilchrist was not the first to suggest a dual nature of the brain. For completely different reasons, Freud was one of the first to recognize a mind with at least two aspects by contrasting our conscious and subconscious natures.

More recently, Daniel Kahneman’s book “Thinking, Fast and Slow” describes the operation of the brain as two competing “systems”, reflected in the title. Kahneman was also careful to make clear that his ideas have nothing to do with the vertical separation of the two cortices. But again, are “fast and slow” represented by a physical or operational boundary somewhere in the brain? Perhaps a lizard contrasted with a mammal? Maybe some other creature in our past? There are so many possibilities, and for now, it’s important that one concept need not preclude another, no matter how we slice up the mind and its underlying brain.


In “A Thousand Brains”, Jeff Hawkins also models the brain in a thousand parts, but his parts are general and homogeneous, as opposed to specialized and dedicated in function. In a more macro context, he refers to an “old brain” as opposed to our “neo” cortex (singular). His old-brain, new-brain model is similar in many ways to the Triune model (minus one brain part) and has similarities with Kahneman’s fast and slow systems.


So we have Iain McGilchrist effectively characterizing a brain divided left and right, Paul MacLean doing something similar in three levels from primitive to advanced, Howard Gardner describing eight types of intelligence with no physical allocation, an attribution also left undefined by Daniel Kahneman with his two “systems”, Hawkins with his old and new brains, and Freud with our conscious and subconscious. And this is the shortlist of those who have presented the brain as having multiple facets, with apparently multiple dimensions and aspects of control.


It’s also interesting that there is no agreement as to how the brain is sliced or managed by these various creatures from our phylogenetic history. Yet reference to multifaceted behavior is so common in our language and literature that whole sections of our vocabulary are dedicated to the idea. There are thousands of examples. 


Abraham Lincoln referred to “the better angels of our nature”. He didn’t specify how many. Shakespeare liked to confuse us with pairs of twins and their deceptions. Often these twins had contrasting natures. Or similar ones. Were these literary characters actually devices representing multifaceted meta-characters? And of course, René Descartes also recognized a duality in our nature, if not its physical implementation. Was his attempt to separate mind from matter driven by an ultimately multifaceted aspect of internal versus external modeling of our left-right divided brain? Or was he simply protecting the sanctity of the soul while being unable to move beyond the dichotomy of mind and body to entertain other parts? Our literature and history are rife with examples of multiple aspects of our mind’s experience, and likely, of the multiple competing and cooperating behaviors created by our multifaceted brain.


So if the physical brain and some introspective experience are multifaceted, should not its operating model also conform? Perhaps we have no model of the brain because there isn’t one. Perhaps it’s because there are many, one for each part or creature in our evolutionary past. And if the brain has many independent parts, how many? And how independent might they be? What constitutes the sovereignty of pumping blood compared to moving the gut? Are we really looking for more masters? More emissaries? How about a few minions? Certainly viewing our divided brain as simply Master and Emissary, (or even Mystic/Artist and Executive/Scientist), does not account for the observed and extensively multifaceted nature of our behavior.


After all, we can drive down the road actively debating talk radio in our mind while eating a cheeseburger and picking our teeth. This requires at the very least five independent entities all operating in parallel, each deferring to the others as needed. Some will have to be inhibited in their operation at any given moment. Someone watches the road. Someone drives the car. Someone listens to the narrative. Someone bites the burger. Another manages dental hygiene. And that’s not even counting all the autonomic and peripheral functions happening in the background, such as heartbeat, breathing, and the digestion that gets that cheeseburger down. A single-digit creature count is likely a gross underestimation and oversimplification.
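To make this concrete, here’s a toy sketch in code of that commute. The subsystem names and priorities are entirely invented for illustration; a real brain surely doesn’t arbitrate with a priority table, but the sketch shows what “each deferring to the others” might mean operationally.

```python
# Toy model: several independent "creatures" bid for the body's muscles
# each tick; an arbiter lets the highest-priority bidder act and
# inhibits the rest. All names and priorities are illustrative only.

SUBSYSTEMS = [
    # (name, priority) -- higher number wins when bids collide
    ("watch_road", 5),
    ("steer_car", 4),
    ("follow_radio", 1),
    ("bite_burger", 2),
    ("pick_teeth", 2),
]

def arbitrate(bids):
    """Given {name: wants_control}, return who acts and who is inhibited."""
    active = [(prio, name) for name, prio in SUBSYSTEMS if bids.get(name)]
    if not active:
        return None, []
    winner = max(active)[1]
    inhibited = [name for _, name in active if name != winner]
    return winner, inhibited

# One "moment": the road demands attention while the burger is mid-bite.
winner, inhibited = arbitrate({"watch_road": True, "bite_burger": True})
print(winner)      # watch_road
print(inhibited)   # ['bite_burger']
```

Note that the losing bidders aren’t gone; they are merely inhibited this tick and will bid again the next, which is roughly the smooth hand-off we feel from moment to moment.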


If you’re trying to imagine what it might feel like to have multiple creatures in your skull, don’t bother. You already know. If this thesis is correct, it’s what you experience each day. Mostly we feel unified with transitions that are normally smooth. It’s only when you have to grab the wheel to get you back on the road that you realize there is much more happening below the surface of your conscious mind. Every moment of every day.


So how many operational parts make up our brain? It’s easy to imagine a brain/body with tens, or even hundreds of independent “systems” all working in various degrees of contention and harmony to yield a single and seemingly unified experience. To keep this multi-brain idea flexible and open-ended, let’s work with a nice round number, say a thousand creatures, while we explore functional boundaries and behavioral sovereignty. And at the same time, try to avoid tech metaphors.



Our Complex Brain


A multifaceted brain explains so much about human behavior. It explains why people don’t keep promises. It explains why people lie. It explains a great deal about sex. It explains how people can change their minds so quickly. It explains how people can be so self-destructive. It answers those questions I posed above about Socrates, Henry II, Stalin, and Mao. It explains your doubt about what I’m presenting right now. It even explains mine.


From a tech perspective, a multifaceted brain resolves so many issues about the subtlety of our more obvious behaviors. The very nature of a complex system is that it’s made up of multiple subsystems, each with its own agenda. In order to understand the behavior of a complex system, it’s important to know when each subsystem is in control, when they are competing, and when they are cooperating.


If control of the body’s muscles can be instantly and dynamically switched from one system to another, the result can be highly adaptive. This is especially true when one model or architecture need not preclude the others. And that’s the trick in a macro sense. Such a parallel and/or contention resolving method would also explain the extraordinary resilience and reliability of the brain. As Malcolm from “Jurassic Park” said, “Life finds a way”.


Returning to a more right-minded perspective, these subsystems are better thought of as “creatures”. That’s likely how they evolved. Being able to introspectively feel the experience is especially challenging when multiple things are happening at the same time. But it’s fun to deconstruct, to think about. And to feel.


Which raises other issues. What is the nature of each of these thousand creatures? How do we characterize them? What are their operational boundaries? What are their capabilities? What are their limits? And if we actually have these creatures in our skull, how are they physically organized? Left/right? Up/down? Front/back? Core to periphery? All of the above? Even more significantly, how are they operationally organized? What connects to what? Who has control? Who gives consent? When and why? 


Finally, if we have these multiple entities in our brain, how might this control be arbitrated? For me, these were familiar issues of computer architecture. Contention resolution of parallel computer operation is one of the more challenging aspects of computer science. I’ve had to deal with it on several occasions over the years with varying degrees of success. It’s not an easy problem to solve. At least not for a left-brain scientist, which is one reason the concept has been so poorly implemented in multi-core silicon.


In an ironic comparison with the brain (and for completely different reasons), most computer cores are idle most of the time, waiting for other cores to complete a process, which makes overall operation inherently serial and dependent upon whatever result they might be waiting for. And the transitions of control are relatively clumsy and crude. I won’t bore you with more clunky detail. In contrast, the organic brain manages arbitration of these many facets with a casual elegance that would lock a computer in a tight loop, or the logic trap of a “deadly embrace”. But let’s not slip back into the world of tech quite yet.
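For readers who haven’t met a “deadly embrace” (what computer science calls deadlock), here is a minimal sketch. Two threads each need two locks; if they acquired them in opposite orders, each could hold one lock and wait forever for the other. The standard left-brained fix shown here, a global lock ordering, is exactly the kind of rigid rule the brain seems to manage without.

```python
# The "deadly embrace": thread A holds lock_a and waits for lock_b
# while thread B holds lock_b and waits for lock_a -- neither can
# proceed. One classic fix, sketched here, is a global lock ordering:
# every thread acquires locks in the same order, so no cycle can form.

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Both workers take lock_a before lock_b. If one took them in the
    # opposite order, the two could each hold one lock and wait forever.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # ['A', 'B']
```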


While reading the “Divided Brain” I recognized something else. These concepts of control forced me to step back from my detailed work with the neuron for a broader view of the brain. In one of Dr. McGilchrist’s lectures, he notes that while the lateralization of human behavior as a field of study has been largely ignored, the study of lateralization in other animals continued, and helped greatly in his research. Why would we ignore lateralization in ourselves, but not less complex lifeforms? Is it the same reason we believe we are fundamentally different from other animals? Such thinking is hubris. We are only different from other animals by degrees, even when these degrees yield apparently dramatic differences. It’s only the disproportionate effects of emergent results in these degrees that separate us from Bonobos and Chimps. That and another superpower of the left-brain - denial.


Our left-brain does not easily accept the idea of other creatures in our skull. But the reality of at the very least a divided brain is obvious. I too found the concept challenging when I first read “The Dragons of Eden”. The thought of a lizard in my brain was distasteful at best, but I did seriously consider the possibility. Perhaps that’s why I found the ideas in “The Master and His Emissary” subjectively less shocking. (Carl Sagan had also addressed the issue of left and right brain differences, but more to insightfully contrast the serial and parallel operation of the brain, which I’ll address in due course.)


In any case, it’s not easy to think about sharing your skull with multiple entities. Still, the evidence is overwhelming in our language and our culture. Fortunately, this multifaceted approach allows us to more easily deconstruct the brain, and more effectively characterize its parts and processes. A multifaceted architecture solves so many problems in forming a useful model of the brain.


So what does the Triune brain (and the other division models) have to do with a simply Divided Brain? For me, it was deja vu all over again. The more I compared the vertically divided brain with the core-layered Triune model, the more similarities I found with a third venue - my work with neurons trying to acquire knowledge in a nano context. In all three cases, and as noted above, arbitrating control was the key. After months of study, I discovered that each of these three models might use a similar method. If I could solve this challenge for two, three, or six creatures, I could solve it for a thousand. And I have.


Before we leave the topic of complexity, let’s contrast it with the concept of complicated. Complicated implies something opaque and nebulous. It’s how our right-mind dismisses the issue. In contrast, if we are effective at deconstructing complexity, things get simpler, a quality especially appreciated by our left-brain. How do we even keep track of what goes where? Does it matter?



But Not Exclusively So


As Dr. McGilchrist described the nature of the divided brain on the macro level, I began to see parallels with what I was finding at the nano-level of the neuron. Could evolution be using the same tricks over and over? The dynamic tension created by the divided brain was in many ways similar to what happens within a neuron as some inputs try to activate, and others inhibit firing. I began to call this similarity “decursive” in contrast with recursive, to be described shortly. 


The thesis of the “Divided Brain” is that our left-brain has become more dominant during the last few thousand years, largely because of the success of technology and our left-brain’s lopsided advantage in expressing this success using language. Even if you don’t follow Dr. McGilchrist to his thematic conclusion, he clearly demonstrates the differences between the two sides of our brain, and how arbitrating control presents a challenge: one side might tend to do one thing, “but not exclusively so”. And the other, the opposite, but with the same exception - “not exclusively so”.


When I first read this bidirectional mitigating clause, I thought of it as a lack of conviction. I soon changed my mind. Dr. McGilchrist was describing something both subtle and significant about the two halves of the brain - they both compete and cooperate dynamically as control shifts from one side to the other in a macro context. But strangely, they do not collaborate (sharing labor), and their cooperation is often unintended, even unknowingly performed, as he describes in patients who have had their corpus callosum severed. This strange dynamic also occurs within the layers of the Triune (or more layered) brain in a micro context. And finally at the level of individual neurons in a nano context. That alternative minority case in each context can be quite important for survival.


For instance, language resides in the left-brain, but not exclusively so. And facial recognition occurs on the right, but not exclusively so. Both sides do both. But only to a degree, thus the exceptions might be best described as right-brained and left-minded.


Here’s another way to describe this subtlety. The left-brain likes to quantify and define things. It likes to find THE answer and ignore the rest. It uses these definitions to construct dependencies which often become logically rigid in a serial fashion. In contrast, the right-mind keeps its options open. One solution need not preclude another. It’s constantly comparing and reviewing scenarios in parallel, dismissing the complex as merely complicated.


One way to think about this dichotomy is that there is likely a majority (or most common) response from the side that is most commonly associated with any given challenge. But there’s also a minority (or backup) solution standing by if the majority response fails, or for some other reason is not cued. That cueing is the key to transferring control. I’ll be describing this in more detail later on, but suffice it to say, something similar happens within the peripheral nervous system as well, within the layers of each side of the brain, within neuronal nets inside these layers, and even within the neuron itself (which is where I’ve spent most of my time). Let’s return to Ricky at my high school to understand the value of a hot standby, which is also the value of an alternative and minority method.
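Here is a hedged sketch of the cueing idea in code. The two “routes” below are hypothetical stand-ins, not anatomical claims; the point is only the control transfer - the minority route is consulted exactly when the majority route fails.

```python
# Sketch of the majority/minority idea: a primary handler answers a
# cue most of the time, but a standby handler takes over when the
# primary fails or the cue falls outside its competence. The handler
# names and lexicon are hypothetical, invented for illustration.

def majority_route(word):
    """Primary handler: competent, but only within its lexicon."""
    lexicon = {"dog": "animal", "tree": "plant"}
    return lexicon.get(word)  # None signals failure

def minority_route(word):
    """Standby handler: cruder, but it always produces something."""
    return f"unfamiliar-thing:{word}"

def respond(word):
    # Cueing: control transfers only when the majority route fails.
    return majority_route(word) or minority_route(word)

print(respond("dog"))     # animal
print(respond("qualia"))  # unfamiliar-thing:qualia
```

The design choice worth noticing is that the standby is always present and ready; nothing has to build it on demand, which is what makes it a hot standby rather than a repair.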



Ricky’s Triumph


A few years after returning to northern California, our entire high school junior class was in the cafeteria taking some annual test. As usual, I was seated at the geeks’ table with my cousin Dave Cline and a few other friends. When the time was up, we were instructed to put down our pencils and hand in our papers. We were then told we had to remain seated for the next 45 minutes. That’s when school officially got out. The teachers didn’t want us wandering around campus, which I find reasonable now, but didn’t at the time. While sitting at this table, I remember discussing continental drift and other weighty topics while we lamented this boring challenge to our personal freedom. (Interestingly, continental drift was a concept that had virtually no hard data to back it up when introduced about fifty years before. But by the late 1960s it was a hot topic with lots of valid evidence.)


Anyway, we were deep in debate when all of a sudden, I saw Ricky Tolleson stand up across the room. He started for the main door, but three teachers literally ran to intercept him. By this time Ricky was a big guy, more than 200 pounds, and fairly lean. But the teachers were experienced with his brash physical behavior. They were ready to block the door. All of a sudden, Ricky turned on his heels and went the opposite direction. I’d seen this move before. Now he was in the lead. Ricky pushed through the emergency exit at the other end of the room and was gone. The alarm sounded. The door banged closed. The room became quiet. The teachers stopped and stared. The silence was broken by nervous laughter from some of the students.


One of the guys at our table sneered with derision, and stated to no one in particular, “Retard!”


My cousin Dave countered with this observation which I’ll always remember, “That ‘retard’ is enjoying his freedom while the rest of us sit here in envy.  So who is actually smarter?” And who is retarded?


Dave was right. Here was a room full of students who failed to answer that final question on that day’s intelligence test, the one about personal sovereignty. Ricky was the only one who got it right. At that moment I realized intelligence depended on context and perspective. Sure, like Alexander the Great before him, Ricky broke the rules. But he also solved a problem. Ricky didn’t think outside the box, he lived outside the box. At least the socially acceptable box. This gave him an advantage. And he exploited it. That day, Ricky was no retard.


Even then I saw parallels with my computer work. I’d been studying Boolean algebra and had just finished designing my first ALU (Arithmetic Logic Unit).  I had demonstrated it for the science review and was selected to present it to the local Rotary Club. Yes, I was that geeky. Even in 1967, well before it was cool.
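For the curious, a 1-bit ALU is a small enough design to sketch in a few lines today. The opcode names below are my own; the 1967 version was wired from gates rather than written in code, but the Boolean logic is the same.

```python
# A toy 1-bit ALU in the spirit of that 1967 project: two data bits,
# a carry-in, and an opcode selecting ADD, AND, OR, or XOR. The
# opcode encoding is invented here purely for illustration.

def alu_1bit(a, b, carry_in, op):
    if op == "ADD":
        total = a + b + carry_in
        return total & 1, total >> 1   # (sum bit, carry out)
    if op == "AND":
        return a & b, 0
    if op == "OR":
        return a | b, 0
    if op == "XOR":
        return a ^ b, 0
    raise ValueError(f"unknown op: {op}")

print(alu_1bit(1, 1, 0, "ADD"))  # (0, 1)  -- 1 + 1 = 0, carry 1
print(alu_1bit(1, 0, 1, "ADD"))  # (0, 1)
print(alu_1bit(1, 1, 0, "XOR"))  # (0, 0)
```

Chain the carry-out of one such slice into the carry-in of the next and you have a multi-bit adder, which is how these small Boolean blocks scale up into a real ALU.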


I’d also been reading Freud and B.F. Skinner, but Desmond Morris’s “Naked Ape” was a favorite. I found human behavior fascinating, and like other computer geeks, wondered about the parallels between not only baboons and humans, but also between humans and machines. Even at that time, the possibility that a computer might become smarter than a human was being suggested.  But the question at hand, the question presented by Dave that afternoon was, who was the smartest guy in the room?


Ricky’s feint is often seen in football - indicate an obvious objective, then break the other way. But Ricky wasn’t on the football team. He’d evolved this particular solution somewhere else. I’d seen the prototype years before. You might say Ricky’s weird habit in movement had tricked the teachers. I don’t believe he even thought about his actions that day. At least not like you or I might. He just wasn’t that concerned. He had the same instruction as the rest of us (remain in your seat), but he used a different personal script, something more primal, something more innocent. And in this case, something more effective.


I realized at that point, Ricky had the same desire to leave as the rest of us, but his behavior did not take into account the social contract. He simply didn’t honor those constraints, that inhibition. This allowed him to overcome this one minor challenge in his life by using a different script.


Perhaps we all have such alternative scripts, but how do they get triggered? When? And why? I’ve since spent most of my life in computer design and business management, but humans continue to fascinate me most. Like many technologists at the time, I wondered about the behavioral “program” we had in our brain, and why Ricky’s was so different from mine. What did he know that we didn’t? Or was it his lack of knowledge? I’ve since discovered that it actually depends upon the nature of knowledge. And the individual.


One thing was certain, at that moment, for this particular challenge, Ricky was the smartest guy in the room. Or was he? Is any evaluation of the circumstances THAT certain? And should I remove “that” from the prior sentence when presenting a superlative? Can certainty be expressed by degrees? 


Of course not, but before we get distracted, don’t we all have a bit of Ricky hiding somewhere in our skull? When the standard approach fails, don’t we revert to something more crude and powerful? When push comes to shove, doesn’t it make sense to bolt for the exits and push through the door? Ricky was just a bit more claustrophobic than the rest of us on this particular afternoon. And less concerned with the consequences.


Here’s a question to ponder in the back (or quite probably, front) of your mind while I set up the concept of decursion:


For this situation, was Ricky really the smartest guy in the room?



Contie


I need to present a bit of housekeeping before I move on. I could have titled this section “contexts”, but it’s hard to hear the subtle plural of context, so I’ve stolen a trick used for singular Latin words that end in “us”, a trick applied for a similar reason. The reason we need a plural for context is that the complexity of the brain demands it; otherwise we’ll get hopelessly lost. Since I’m about to play fast and loose with the definition of decursion, I might as well coin another term - contie. There. Doesn’t that sound better?


Seriously, the brain scales many orders of magnitude in complexity. If we’re describing one context, how do we differentiate it from another? For instance, Dr. McGilchrist can be said to have described the brain in a macro-context - that of an individual human, and that’s fine as far as it goes. But it doesn’t go far enough. If I arbitrarily deconstruct the brain into a thousand creatures and the process is useful, is it really arbitrary? Utility is an important test. It will be applied over and over as I proceed. For instance, we could describe these creatures as living in a milli-context, and their tricks in a micro-context, with individual neurons described in a nano-context. That’s a lot of contie.


Going the other direction for a moment, applying decursion outside the skull could be described in a kilo-context for tribes, and a mega-context for different cultures or nations, and maybe even a giga-context for the internet, plus or minus an order of magnitude. Or two. There’s no need to get too specific at this point. We don’t yet know how much room we’re going to need in order to model this monster we call a brain. For the sake of this presentation, I’ll define the following contie each separated by three orders of magnitude, with some exceptions:


Macro-context - an individual human, or a few people, and perhaps the two sides of the brain within the skull.


Milli-context - the realm in which our creatures live - the many layers of the brain.


Micro-context - where these creature’s tricks are implemented, mostly the connectome, about a thousand neurons per trick, plus or minus.


Nano-context - within the neuron and between them, but not strictly so.


This gives us nine orders of magnitude to describe the brain and scale the concept of decursion within the skull, which I'll now address.



Decursion


“To understand recursion, one must first understand recursion.” - Stephen Hawking


This paradox is literally a joke that makes fun of our left-brain. Our right-mind knows better because we obviously are able to understand recursion. Thus the humor. What exactly happens if you don’t have to define something in terms of itself? Or define it at all? Our left-brain goes around trying to define everything it encounters, but that’s a naive behavior in many cases. Some things defy definition, yet obviously exist. Such as faith, conviction, or love.


Fortunately, these things can be largely understood without defining them. Stephen Hawking demonstrates the limits of logic with his assessment of the recursion paradox. And he nicely captures the essence of the issue, which is all about capturing the essence of an issue. Before we end up in a hopeless tangle, let me unwind this pretzel logic by inverting recursion’s definition into something more manageable, and opposite, in a somewhat derivative fashion.


If you’re not familiar with recursion, it is the act of defining something in terms of itself. A recursive procedure applies the same mathematical or computational step over and over until a base case is reached. It’s a little like (but not the same thing as) performing the long division algorithm over and over until the remainder is no longer significant. Unlike long division, recursion then unwinds the process to provide a final answer. Recursion is one approach to solving various computer problems. Fractals are a visual example, as is seeing how a fern frond appears to be made up of smaller fern fronds. A similar structure can be successively deconstructed at each level, recursively.
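Here is recursion in its most compact form - a function defined in terms of itself, shrinking toward a base case and then unwinding:

```python
# Recursion in miniature: the function calls itself on a smaller
# problem until the base case is reached, then the pending results
# unwind back up into the final answer.

def factorial(n):
    if n == 0:                       # base case: the recursion bottoms out
        return 1
    return n * factorial(n - 1)      # defined in terms of itself

print(factorial(5))  # 120
```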


McGilchrist describes our left-brain as being somewhat recursive in its approach to defining the world. I began to wonder what our right-mind’s dichotomous approach might be, which led me immediately to appropriate the word decursion and apply it in a contrastingly new context - the brain.


As the inverse of recursion, decursion constructs from the smallest fern stem element in steps upward to the whole frond. Both recursion and decursion contain the essence of the object in each context. It’s mostly a matter of whether you start at the bottom or the top. Evolution more likely started at the bottom. Decursion honors that approach. And it has more utility for our current challenge than left-brained recursion, as I’ll shortly demonstrate.


Decursion is the opposite of recursion in several important ways. Recursion deals with the issue objectively and at arm's length, much like the concept of stimulus - response. Decursion honors the artist's more subjective and expansive approach. Decursion is the right-minded alternative to our left-brain’s reductionist approach.


Decursion replicates a similar method, but not in a mathematically or logically definitive way. Decursion is not simply the same as derivation either. Instead, it mimics that base case with increasing adaptation and sophistication. I believe that once evolution discovers a new trick, it doesn’t want to let go. Instead, it applies it over and over in a decursive fashion. As evolution applies decursion it captures the essence of the thing, just not in a perfectly defined form or algorithm. More significantly, decursion provides a template for understanding the brain. And it doesn’t stop there.
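Since decursion is my own coinage, here is one possible reading of it as code, set against recursion on the same fern. The fern strings are symbolic only; the point is the direction of construction, top-down versus bottom-up.

```python
# One reading of "decursion", sketched as code: instead of a top-down
# definition that unwinds (recursion), start from the base element and
# repeatedly reapply the same trick upward, each level built from the
# level already in hand. The fern here is purely symbolic.

def recurse_frond(depth):
    # Top-down: a frond is defined in terms of smaller fronds.
    if depth == 0:
        return "stem"
    return f"frond({recurse_frond(depth - 1)})"

def decurse_frond(depth):
    # Bottom-up: begin with the stem and compose each level from the
    # one below -- no unwinding required.
    shape = "stem"
    for _ in range(depth):
        shape = f"frond({shape})"
    return shape

# Both capture the same essence; only the direction differs.
print(recurse_frond(3))   # frond(frond(frond(stem)))
print(decurse_frond(3))   # frond(frond(frond(stem)))
```

In this toy the two directions produce identical structures; the biological claim is that evolution’s copies are looser imitations, not exact replicas, which is where decursion departs from strict recursion.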


Here is a grim, but useful way to think about decursion - war. I've mentioned that neurons seem to both compete and cooperate to create knowledge. Putting aside my assertion for a moment, think of the political and practical aspects of war. It's largely implemented as acts of cooperation and competition. Let's take the idea a bit further and explore the parallels with the modern world of sports where teamwork and individual champions are both important factors in success. Sports becomes a metaphor for war because it is decursive of war.


This decursive template not only describes the neuron, micro-brain structures, and the divided brain itself, this evolutionarily decursive expression escapes the skull and finds form in our language, art, media, dance, government, and even finance. For me, decursion provided the map of development history which I desperately needed, a sort of Rosetta Stone translating between contie. The concept also finally yielded some useful metaphors of the brain. Here’s one example:


Instead of thinking of the brain as a computer, think of it as a Wall Street stock exchange where each of millions of remote investors is a neuron with their own unique ways of predicting the market in a nano context. They each "know" their version of how and when to buy low and sell high. Thousands of market managers in New York channel these decisions to a few brokers on the exchange floor where in a micro context these methods and systems compete and cooperate to find the highest and best use of capital as they dynamically price any given stock issue. 


At least that’s how it used to happen. Nowadays brokers are automated, but in the past, they too were humans representing neurons. When you stand back from the process, these cooperating and competing investors with their own financial analysis methods can be thought of as creating useful knowledge. But until you understand how each method works, they are mystical tricks of the financial trade. And literally tricks of evolution that have directed capital needed for survival and the growth of civilization. No one investor has all the answers, or all the knowledge, but brought together as a system, it works surprisingly well.
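A toy version of this metaphor can even be simulated. In the sketch below, thousands of “investor” neurons each hold only a noisy private estimate of a stock’s value, yet pooling their competing bids lands close to the truth. All the numbers are illustrative, and no claim is made about real market microstructure.

```python
# Toy version of the exchange metaphor: many "investor" neurons each
# hold a noisy private estimate of a stock's value; the "brokers"
# pool those competing estimates into a single price. All numbers
# are invented for illustration.

import random

random.seed(42)  # deterministic illustration

true_value = 100.0
# Each investor "knows" only a noisy version of the true value.
investors = [true_value + random.gauss(0, 10) for _ in range(10_000)]

# Brokers converge the estimates -- here, simply by averaging the bids.
price = sum(investors) / len(investors)

# No single investor is right, but the pooled estimate lands close.
print(round(price, 1))
```

Averaging is of course a drastic simplification of price discovery; the point is only that a system of many partly wrong parts can settle on a collectively useful answer.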


OK, how about another hierarchically convergent and decursive example of brain architecture. Think of our bicameral congress as a brain where each member represents a micro-context state or district of nano-context voters competing and cooperating to promote the best form of government in an ultimately macro context defined as a bill that becomes the law of the land. Note how both of these broad examples are steeped in dichotomy. I'll be describing how the brain does something similar, for similar reasons.


Here's another example of decursion. It's a metaphor of simulation that artists may appreciate - the theater. Performance art likely started around the campfire. It still goes on today in many forms. From mimicry to scary stories the audience is asked to suspend disbelief as they listen to a narrative and watch the expressions of the storyteller. (For the more technical, this could be described as a form of sparse coding. I mention it to bring you scientists along a path. Artists can ignore sparse coding for now.)


In time, such campfire performances became more formal oral histories, a talent now largely lost to history. Fortunately, some of these stories were eventually captured in written form, tens or hundreds of thousands of years after the tradition began. How long was language limited to oral presentation? Perhaps a million years? And it only took a few thousand years more until actual theatrical scripts were written, capturing not only the words spoken, but also the stage direction and movement of Shakespeare’s actors. This can be thought of as rebalancing McGilchrist’s left-brain with a more cultural right-mind, which brings us finally to the theater.


Once a more formal stage was built and costumes made, the masks of ancient theater were left in the dressing room and suspending disbelief became easier. Modern comedy and drama is the result. Movies refined and replicated the experience. Decursively. From mimicry to modern virtual reality and electronic gaming, each of these art forms delivers an increasingly decursive version of the one that came before. How much they evoke emotion is a test of their quality. From pain to pleasure, from laughter to tears, what we see in the theater or the VR headset is a decursive metaphor for what we have experienced in life.


Sure, these performances sometimes have rough edges. And all of the above metaphors are rife with operational failure, but then so is the brain. As is the actual stock market. Need I say Congress is not immune? Fortunately, the brain makes up for it in each case with parallel resilience. And in various ways, so does the theater, the stock market, and congress. Failure, cooperation, and competition are how all three examples hone knowledge, decursively.


Now I'll broaden my approach to modeling the brain.



The Tao of Zen and Zen of Tao


Buddhism, Tao, and Zen are religions roughly associated with India, China, and Japan. Putting the actual practices aside, I find some of the ideas most interesting, especially with regard to the brain. My generalizations about Tao and Zen will not satisfy masters of either, but the underlying concepts provide examples of how an important aspect of neuron and brain architecture has escaped the skull and found form in eastern culture.


For our purposes, I’ll present Tao as literally meaning “the path” or “the way”. Enlightenment comes from walking this path, an inherently serial process. A path can also be described as a technique, an algorithm, or a process. Tao is more left-brained, but not exclusively so. The article “the” implies the superlative, the singular. This one and only path to enlightenment (even if a different path for each person) is singular and superlative, and may have evolved into the more monotheistic religions of the west. Our left-brain is always looking for “the one” as presented in the movie “The Matrix”, an obvious reference to the messiah, a concept common across western cultures.


In contrast, the term Zen usually stands alone (or sometimes with a following association, but no article needed). In some ways, Zen is both the essence and exception of Tao, and vice versa, similar to the contrasts of the divided brain. Zen is often described by what it isn’t, as opposed to what it is. In contrast to Tao, Zen is knowing the nature of a thing without walking a path, an aspect we might attribute to the right-mind. Also, one person’s enlightenment need not be the same as another’s. And one solution or enlightenment need not preclude another. Thus, Zen stands without an article. But back to Tao for now.


In the micro context, neurons connect from one to another forming pathways, actually, many pathways. We have tens of millions of neural sensors. We have only a few hundred muscles, but these muscles can be applied in complex sequences to form about a million scripts of movement, plus or minus, making the brain inherently convergent from sensors to muscles. 


To clarify, these largely parallel paths begin with tens of millions of neurons sensing the world and converge down to moving a few hundred muscles (or collection of muscles in a serial fashion). These muscle movements will hopefully affect the world in some way beneficial to the subject at hand. This affected world can then once again be sensed, completing a never-ending operational loop specifically including this individual within the world. Tao is serial in nature, and so exists over time in serial feedback loops. But not exclusively so.


The knowledge that Zen acquires, lives in the moment, and so exists in parallel with other similar knowledge at any given moment. With Zen, we observe all aspects of a thing struggling not to define it before we are enlightened by its nature. Or not. Then somewhere a neuron fires and we understand its nature. It all happens at once, no steps to be taken. Time is not a factor. Zen does not exist in a temporal frame as Tao must in order to be sequential. This Tao and Zen dichotomy exists all through the brain in our nano, micro, and macro contie, decursively.


In the nano context, a collection of inputs needs to be present at the same time in order for that neuron to fire (or at least within a primed window of time, to stretch this metaphor from an instant to a moment). Zen occurs in that moment. Not the moment before, not the moment after.
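That primed window can be sketched as a toy coincidence detector. The threshold and window width below are invented for illustration, not measured neuroscience.

```python
# Sketch of "Zen in a moment" at the nano level: a toy neuron fires
# only when enough of its inputs arrive inside the same brief window.
# The threshold and window width are invented for illustration.

def fires(spike_times, window=5.0, threshold=3):
    """True if `threshold` input spikes land within any `window`-ms span."""
    times = sorted(spike_times)
    for i in range(len(times)):
        # Count spikes from times[i] forward that fall inside the window.
        inside = [t for t in times[i:] if t - times[i] <= window]
        if len(inside) >= threshold:
            return True
    return False

print(fires([1.0, 2.5, 4.0]))     # True  -- three spikes within the window
print(fires([1.0, 20.0, 40.0]))   # False -- same spikes, spread out in time
```

The same three inputs either do or do not constitute a “moment” depending entirely on their timing, which is the point: the knowledge lives in the coincidence, not in any one input.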


In a micro context, thousands of inputs all happening at once can be thought of as the basis for what we call associative memory (which is not actually stateful memory at all, as I’ll present later). Before we lose the path of our Tao and digress into a mess of logic, let’s combine this Zen with our Tao path to yield a convergent hierarchy, the most common “network” in the brain. I use the term network here loosely in that the brain’s hierarchies are obviously not orthogonal, nor very consistent. But they must ultimately and inherently be convergent.
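The convergence described above can be sketched in a few lines. This is a stand-in, not a neural model: each "layer" simply collapses groups of inputs into single outputs, and the layer sizes and fan-in are arbitrary assumptions chosen to show the funnel shape:

```python
# A toy convergent hierarchy: many parallel inputs funnel down through
# layers to a handful of outputs, the way tens of millions of sensors
# ultimately converge on a few hundred muscles. Sizes are illustrative.

def converge(inputs, fan_in):
    """Collapse each group of `fan_in` inputs into one output,
    here by simple summation (a stand-in for whatever a layer does)."""
    return [sum(inputs[i:i + fan_in]) for i in range(0, len(inputs), fan_in)]

layer = [1] * 1000          # 1,000 "sensor" signals
sizes = [len(layer)]
while len(layer) > 10:      # converge until few "muscle" outputs remain
    layer = converge(layer, fan_in=10)
    sizes.append(len(layer))

print(sizes)                # [1000, 100, 10]
```

However inconsistent and non-orthogonal the brain's real hierarchies are, the overall shape is this funnel: wide in parallel at the sensing end, narrow at the acting end.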


In a macro context, these two competing approaches yield hierarchical cooperation and competition as the two sides of the brain operate in parallel, and also serially, not unlike McGilchrist describes them. The left brain tends to engage the world in a serial fashion, but not exclusively so. It helps us to manage time and provides a temporal framework for language. In contrast, the right-mind tends to engage the world in a parallel fashion, but not exclusively so. In that moment.


What happens at the level of the neuron also happens in the micro-network context, as well as at the macro level of the left and right brain. Oh, and it’s the very same decursively replicated architecture we saw in the stock exchange and Congress, as presented above. The left-brain is more rational, but not exclusively so. The right-mind is more insightful, but not exclusively so. Together they yield wisdom. Hopefully.


If my abused definitions of these concepts, replicated decursively in various contie, seem a bit flighty, they’re meant to be. We’re looking at it from the top down. The objective at this point is to keep things general until we form a useful framework for the brain. Decursion is one of the tools I’ve used to get here. If it doesn’t make sense right now, relax. When we build from the bottom up, the model will be a bit more reasonable and rational. For now, it’s time to revisit the smartest guy in the room. We’re not yet done with Ricky. Or Dave.



The Smarter Guy in the Room


Thinking back on this point in my life, Ricky Tolleson not only inspired my interest in human behavior; in later analysis, his actions taught me something very important about knowledge. In a more subjective model of the brain, knowledge approaches truth like an asymptote - it never arrives. Definitions are merely ways to grasp things. Superlatives, like truth, are aspirations. All are illusions, starting out as knowledge. Each can be useful in its own context.


As the door slammed shut behind him, Ricky was now free to do whatever Ricky does. The rest of us were left to realize what Ricky could not have known without imagination. In this case, Dave’s imagination. Dave had implied that Ricky was the smartest guy in the room, but wasn’t Dave the smartest guy in the room for having the imagination to recognize Ricky’s brilliance? 


But wait! If Dave was the smartest guy in the room, then his recognition of Ricky’s superlative was invalid, obviously leaving Ricky and Dave in a paradox for the title.


See how easy it is to end up in a left-brain logic trap? They happen more often than you might realize. We normally dismiss them. In order to be trapped, you have to think about them logically, and then limit yourself to those rules. And that’s the error we might make - logic. Fortunately, we typically ignore logic traps and deal with them using our right-mind. Otherwise, we would all be constantly getting stuck in some catatonic state. Some deadly embrace.


In any case, Ricky accomplished that which the rest of us only imagined, after the fact. And I believe he did it without using imagination at all. Ricky was obviously cheating, but was his cheating not intuitive? His solution certainly was simple, but obviously violated the rules. Which is the point. Ricky walked his Tao path right out of the room, and left conformance to the wind. The rest of us only observed. So does enlightenment flow from walking the path? Or simply understanding its nature? Does nothing matter until something moves? And does it matter how we gain the knowledge from this little exercise as long as we come to know it?


The point is, the standard for knowledge is constantly changing. As soon as it’s defined, it needs redefinition. Ask any day trader. Ask any Senator. Knowledge is as fluid and flexible as information is rigid and defined. Before debate, you may know one thing, after debate, the opposite. Though rare, it does happen.


Which makes someone smarter, the Tao of walking the path, or the Zen of knowing what it means? I would suggest that it is all of the above. And none. It depends upon your perspective. And, to some degree, luck. The world is not deterministic.


Occam’s Razor suggests that the simplest sufficient explanation is usually the best one. Did Ricky have a breakthrough as he set off that exit alarm? Is this what the knowledge of enlightenment looks like? There’s no correct answer, of course. There is no smartest guy in the room. But what might happen if we treated the brain as a collection of such challenges? Such koans? Such tricks? Might we find that brain model we seek?



Enlightened by Art?


“If you can’t replicate the work and get the same outcome, then it’s not science.

If you can replicate the work and get the same outcome, it’s not art.” - Seth Godin


To clarify the above quote, I’ll paraphrase another by Theodore Roethke: 


“Art is that which everything else isn’t.”


Continuing from the last sections, this quote may seem a bit Zen. And that’s the point. Art is not a “thing” to be defined or managed by our left-brain. Art is often enigmatic. So is Zen. Art allows us to discover novel “things” which are initially undefined, but not exclusively so. As these new things move from our right-mind to left-brain they become “grasped” and better defined over time. Once characterized, these tricks become methods. These methods are then applied in the steps of an algorithm to be repeated in the more serial fashion of a machine. Once consistent, they cease to be art and become part of science. 


The right-mind deals with novelty - things not yet defined. Artistic things are discovered in a moment of intuitive enlightenment. Once our right-mind shares these things by altering the focus of our attention, the left-brain defines them and uses them as components, or deconstructs them into their subcomponents. But not exclusively so. More later on how this transition (and many others) actually occurs as knowledge moves around the brain.


After reading The Divided Brain and realizing that our left-brain had failed to objectively model the brain, I began to wonder what a more subjective, more organic model of the brain might entail. McGilchrist inspired me to explore this possibility. How does our right-mind see the brain? What is the Zen of the neuron, as opposed to the Tao of neural priming? This last question became my personal koan. And the notion allowed me to tease out the first principle of the neuron.


Zen is about understanding the nature of things without defining them. Zen lives in the realm of the right-mind. Zen is a muse. What does Zen have to do with the brain? And Art? It’s the simplest approach to forming a useful model of the brain. Treat the brain as a collection of evolutionary tricks to be characterized and defined.


As for defining art, the above Theodore Roethke quote simply means art begins where science leaves off. As noted above, the left-brain deals with science, but not exclusively so. The right-mind deals with the things in life which have not yet been defined, but not exclusively so. These proto-things can also be generalized as art. 


Oliver Sacks reached the same conclusion about understanding the brain, and mind:


“What is this mystery which passes any method or procedure, and is essentially different from algorithm or strategy? It is art.” - from “Awakenings”


Over the decades I have made many attempts to model the brain, but none of them felt right. None were viscerally satisfying. None of them left me at peace with the problem. Until now.


That has been my test. Was this simply my right-mind objecting to my more technical left-brain conclusions? Does it matter, so long as we are able to build a model of the brain that is useful? What if we treat the brain as a Zen koan? What if we take a more subjective and organic approach? What if we play with ideas instead of working with them?


I’m going to describe this gnostic model, hopefully without defining it, at least until we get down to the neuron. And even after that, I shall endeavor to keep things flexible as we come to understand evolution’s tricks and ultimately describe them as methods.


I believe many others, perhaps millions, have used this or similar approaches throughout history. Many have likely reached similar conclusions over time, but have described the experience as spiritual, or even as enlightenment. Or worse, have not been able to describe what they discovered at all, because of the left-brain’s reticence to express such discoveries in language.


How many times have you been at a loss for words? This was more likely because what you wished to express was coming from your right-mind, or because you were not able to describe it logically, in a way that would satisfy your left-brain. Perhaps you struggled with some art form. But just because you couldn’t find the words doesn’t mean your conclusion was invalid or useless. It just wasn’t accessible to your left-brain.


I will take you along a path that is similar to the one I have walked both logically and intuitively, but not definitively. Our left-brain may not admit that we already have a simple model of the brain in our right-mind, but it’s there. The left-brain may not describe it in words. But I will.


I began this post with a question. I'll end it with another in the form of a koan:


If not from neurons, from where does knowledge spring?


<to be continued>
