... seeking simple answers to complex problems, and in the process, disrupting the status quo in technology, art and neuroscience.

Thursday, February 02, 2023

The Gnostic Neuron - Part 3 - Unlearning Brain Metaphors

Unlearning Brain Metaphors

First posted 02-14-22





Setting aside for now why we don't have a consensus model of the brain, I'm going to present one possible approach to resolving this deficit, without having to know for sure why it's missing. Let's start with the classic first lessons from Zen:


Zen Koan: “A Cup of Tea”


Nan-in, a Japanese master during the Meiji era (1868–1912), received a university professor who came to inquire about Zen.


Nan-in served tea. He poured his visitor’s cup full, and then kept on pouring.

The professor watched the overflow until he no longer could restrain himself. “It is overfull. No more will go in!”


“Like this cup,” Nan-in said, “you are full of your own opinions and speculations. How can I show you Zen unless you first empty your cup?”


So it is with the brain. Our cup is certainly full of somewhat disorganized data, mis-metaphors, and preconceptions that simply don’t fit, which makes them worse than useless. They actually distract us from the true nature of the brain. And not in a good way. We need to set them aside. Or somehow distract ourselves from these distractions. It’s not easy. Try not to think about a flying pig while we discuss how such a pig might be made to fly.

How will we empty our cup? For me this began decades ago when I began to note the differences between neural biology and computer architecture. I’m in no way an expert in biology, but I am intimately familiar with how a computer works. For me, what I discovered about the biology of the brain took a long time to accept. Your mileage may vary. Even so, the contrasts are striking, and I’ll try to make them even more vivid.


I started this process by literally setting up lists of what brains and computers have in common, and what they have in contrast. I suggest you also start such a list for yourself. It will help you find conviction when challenging some very engrained ideas - such as the brain being electrical in its nature. Ultimately, vivid descriptions won’t be enough. Even once you realize how the brain is not like a computer, the knowledge does not tell you much about what the brain IS like. Our brain does not like an information vacuum. This won’t be easy.


The brain will try to hold on to distracting associations until we can distract it with new and better ones - until we discover reasonable alternative explanations and get them firmly in place. I had to force myself to ignore these default metaphors, all of them, and treat the brain as a complete mystery, which was the conclusion I ultimately came to only a few years ago. It’s not an easy process and is best described as unlearning, as the Zen koan above nicely presents.


Once I began to let go of these preconceptions, things actually started getting easier, less frustrating. And more fun. I’ll provide many examples, but ultimately, you’ll have to reach your own conclusion on how this aspect of understanding (and misunderstanding) goes to the core of our missing model. If you’re a technologist, this section will be more challenging. If not, you have an advantage. You’ll have less to unlearn. Now let’s address this more mystical approach before we deal with the electrical issues.


The Brain As Mystery 


Treating the brain as a playful mystery has many advantages. It allows for a wonderful flexibility as we work our way back to more useful generalizations. By doing this, my casual model of the brain shortly morphed into a collection of evolutionary tricks not yet understood, at least not by me. It’s a humbling experience after decades of serious study, and not an easy frame of mind to maintain. But instead of being something I had to work at, I let the brain become a toy, something for me to play with conceptually. My Assertion Salad in the final post was just one way of generalizing. Nothing was sacred in my thinking, and this remains the case. It’s also how science in general should be treated, or else science may blind you to the actual nature of things.


Mystery, of course, is only one way of looking at the problem. But so far for me, it’s yielding a much more satisfying model of the brain, even if it’s only a simple, cursory, and casual one. So suspend disbelief for a time. Forget everything you know about the brain. This approach is actually one of the more powerful aspects of our right-mind. Keep your options open. Now, what is the nature of these “tricks”?


Ultimately applied in physical form, evolutionary tricks have been created by various proto-creatures down through our phylogenetic history. I refer to these critters as proto-creatures because they are not the same animals that currently walk the earth. Every modern species we observe today has been evolving for exactly the same amount of time. These creatures are the leaves of our evolutionary tree. In contrast, proto-creatures emerge at the branch points of our evolutionary tree. They are the prototypes of what we see in their current form, but each in turn has a phylogenetic history that can be best described as a long line of proto-creatures, one at each branch. And each has found a way to survive by evolving various tricks. This of course includes primates along our particular path through evolution out to our current leaf - humans.

This evolutionary path has many branches and alternatives, not all of which are in human evolutionary history. For instance, there are at least two versions of eyeballs. In some ways humans got the inferior ones. Also, bioluminescence has independently evolved at least 40 different times, so it shows up all over the evolutionary bush. These are just two of evolution’s tricks. There are many, many more, and one need not preclude another.



"The purpose of thinking is to let the ideas die instead of us dying." - Alfred North Whitehead

Another idea I’d like to present is the high cost of Darwinian evolution. It's expensive. A whole lot of critters have to die in order for a very few to change. For this reason, I believe that evolution has evolved a more cost-effective way to evolve, and the brain is at the focus of this effort. Though truly wondrous, the brain is just another one of evolution’s tricks, or better understood as a collection of them, each evolved by a different creature in our evolutionary past.


It’s important to note that these are not a magician’s tricks. A professional magician’s performance is all about human deception. Fortunately, evolution has no such agenda (yes, I’m subjectively anthropomorphizing evolution itself as it helped me understand its nature). Evolution’s agenda is replication and survival. Deception in this context is not typically important. These tricks are merely clever ways of doing things. Evolution has mostly left these tricks out in plain sight for us to discover and observe. Once we understand them, they become methods for our left-brain to engage, but for now, let’s leave most of these tricks as playthings for our right-mind. This will involve many things not well defined, as well as the relationships between these things, especially other humans, as they are the things in our lives that we come to know most intimately. I’m of course talking about personal feelings - love, hate, anger, and joy as the shortlist. These too will become our playthings.


Before I leave this proposal to treat the brain as a complete mystery, I need to contrast the concept with the engineering approach to dealing with the unknown. Engineers like to treat any subject in question as a “black-box” whose inputs are to be activated while observing its outputs. The concept is useful for teasing out the more consistent aspects of any object at hand, but less so for more dynamic challenges, or when we limit ourselves to being mere observers.


In the neuroscience community, this black-box approach is known as “stimulus-response.” It’s one way of exploring the brain from the outside - the more objective engineering perspective. The approach has been popular since Galvani first applied electricity to that frog’s leg, but has since mostly led to frustration. An alternative might honor the differences between our left-brain and right-mind by suggesting a less consistent but more useful model. Try replacing the cause and effect of “stimulus-response” with the more theatrical concept of “cues and scripts.” This fresh perspective might be described as subjective versus objective since the subject neuron is in control, and not the investigator. I will explain in more detail shortly.


As noted, we can approach these mysteries as evolutionary tricks and then treat them as Zen koans standing between us and enlightenment. Our unlearning objective is to leave behind our preconceptions of the brain, especially those having to do with technical metaphors. I have spent my life working with electricity, electronics, logic, and computer architecture. I’ve only spent a few years without them when thinking about the brain. This is still not easy for me, but I will do my best not to distract you from the true nature of the neuron as I steer you away from these unfortunate metaphors and toward the reality that electricity is actually anathema to the neuron. So let’s empty our cup. And also have a care as to which koans we embrace. We don’t want to get lost in the details before we form a useful framework.


False Metaphors and Distracting Words 


Most neuroscientists think of neurons as logic devices or memory elements, even when their background is in biology or medicine. They apply these more technical metaphors without understanding the differences in depth. Brains are not state machines, nor do they conform to information theory in many respects.


For most of my life, I too shared this view. But over time I've come to learn that neurons have far more in contrast than in common with such metaphors. If you're like me, you may have a feeling that there's just something about this tech approach that doesn't seem right. We need a new way of thinking about the problem.


If you’re a student of neuroscience, this “artistic” approach will stand in stark contrast to almost everything you know about the brain, especially the words and metaphors we use to describe the brain such as spikes, conduction, hard-wired, and anything to do with electricity. For this reason, I’ll find alternatives for the more blinding technical concepts as I proceed. For now, I want to point out the most distracting terms without losing the advantages of distraction for lateral thinking.


One exception to misapplying technical terms is the word “circuit”. With electricity, electrons tend to flow from the ground or negative terminal of a battery or other power source. Using metal pathways, these electrons may pass through all kinds of convoluted logic before ultimately returning to the positive terminal of the battery, completing a circle of sorts. These metal pathways are best described as circuits, not unlike a Circuit Judge who would travel from town to town managing cases until he returned to the city Courthouse in days of yore.


Confusingly, the brain does something similar, but not in an electrical sense. Instead, we have neurons with long axons which (almost) connect to other neurons, one to another, in complex and analogical ways. A simple version would be neurons sensing the world, which then might trigger muscle movement creating simple behaviors, which in turn might affect the world in some way, which neurons once again sense to start the process over. This ultimate circle of activity between the brain and the world forms a circuit of sorts, but not an electrical one. Instead, you might call it an analogical circuit of knowledge, as I’ll present. So circuits in the brain are a useful concept and description, unlike most other computer metaphors.


Fortunately, there are also other non-technical descriptions of the brain that apply surprisingly well in that they are more intuition-based, being inspired by the right-mind. For instance, “tension” better describes how we feel as our neurons prime for movement, in contrast with “action potential”. Yes, “potential” is a more accurate term for quantifying electrical charge, but we need to let go of our electrical metaphors. And the “action” part is archaic in that most neuronal firing does not lead to actual muscle movement, potential or not. 


Lately, even the term “firing” is being replaced with “spikes”. This is likely because that’s how ionic charge appears on an oscilloscope. Since the pervasive use of electrical metaphors is part of our problem, I’ll mostly stick with the older metaphor of “fire” somewhat because of how a gun is triggered (and more recently, people).


In seeking more useful metaphors, consider that humans have been managing fires for more than 400,000 years. The propagation of neural signals across the brain has far more in common with wildfires (and backfires) than it does with spiking electricity. Fire yields direct experience and so is the more primal and intuitive term. I will apply it generously.


I’m also going to retrieve the word “bit” from the tech world. With few exceptions, when I use this description of quantity I will not be referring to a binary digit, but the older reference to a small amount of something - in most cases, a small “bit” of knowledge, which as you’ll see, is very different from a binary digit. These two contrasting bit definitions still have much in common in that they both represent an elemental quantity. Contrasting the digital with the analog versions of "bit" will become very important as we proceed.


And since we’re on the topic, you’ll see that I use the word “cue” to describe the cascade of neuronal firing, at least until we get to the business end of any neural pathway which I describe as scripts of muscle movement. And that more modern use of the word “triggering” will certainly be helpful depending upon who is pulling the trigger. I’ll note differences in other word use as we proceed. Now let’s address the most egregious distraction in trying to model the brain:


The Neuron is Ionic, Not Electronic, Nor Even Electrical


And the brain in general is neither; it biologically relies on chemical signaling.


Electricity and electronics are all about the movement of electrons in conductors and semiconductors. The neuron has neither; it relies on ion migration to work its magic, and on chemistry for actual signaling.


If your background is in technology or even medicine, you may find the above assertion challenging. But the more you learn about electricity and electronics compared with the ionic nature of biology, the more startling the contrasts become. Most everyone has some confusion as to the differences between ionic and electronic. Thinking about the brain in electrical terms distracts from the neuron’s actual, and simpler, ionic nature. Hans Berger’s development of the EEG (electroencephalogram) would have been of far more value if he had known about ion migration in the neuron’s cell wall instead of interpreting the cause of brain waves as electrical in nature.


And it’s not just the tech world that harbors this confusion. It’s our broader culture in general. Hollywood presents electricity as the magic spark that restarts the heart, much like the Frankenstein story. And it works. Sort of. Sometimes. Well actually, not very often. Restarting the heart with electricity is nowhere nearly as effective in reality as it is depicted to be on television, where it usually does the trick. And there’s an important reason for these failures to re-spark life. Electrical stimulation is actually an abomination to most biology (some aspects of knitting bones possibly being an exception). But ionic sensing can be of great utility in various ways.


Inside the brain, electricity is anathema to the neuron. It’s only in the recovery from this type of assault that the heart sometimes restores its biological ionic rhythms. Note that even “electric” eels use “electricity” as an offensive weapon. Ironically, even this weapon is mostly ionic in its generation, but my objective here is not to actually argue the issue scientifically. I just want to raise a doubt and provide a fresh perspective - the un-electric nature of the actually ionic neuron and chemical brain.


Let’s back up a bit. You may have heard that in 1780 a guy named Luigi Galvani touched a scalpel to a frog’s leg just as a spark of static charge made the frog’s leg move. This became known at the time as “animal electricity”, and the metaphor still haunts us today. A fellow Italian named Alessandro Volta set out to replicate the work but attributed the electricity to the metal of the scalpel and not biology. In the process of proving his point, Volta invented the electrical battery giving him an advantage in the debate. 


Ironically, Galvani and Volta were both right and both wrong in various respects. Batteries are mostly ionic. It’s the wires and silicon outside of the battery that deal with electricity and electronics.  And also ironically, Galvani was correct to think that the “electrical” response of the frog’s leg was quite different from what was happening in Volta’s metal wires, or even his own metal scalpel. Again, the contrast is the same as comparing what happens inside of a battery (ionic) with what happens outside of a battery in wires and silicon (electronic). This contrast is critical to understanding the actual nature of the neuron. I suggest diving deep into the topic if you have any doubts. The rest of us will play with ions a bit.


Ask yourself this, is the nature of the brain really electrical in the same way that the telegraph or computer is electrical? If you understand both even modestly, that answer has to be no, not at all. The brain does not rely on electromagnetic propagation. Instead, it partly uses ion migration for signaling which is dramatically slower. What is often referred to as "electrical" charge is actually ionic charge and its detection is merely a side effect of what's really important to the neuron - cascading ion migration. The signaling may seem similar, but the medium of the signaling is dramatically different. It just so happens that ions within neurons are somewhat similar to those within a battery, but axon firing has almost nothing in common with the signals carried by electrons in metal wires. The point is, electrons are not the critical element involved in the operation of the neuron or the brain in general. But ions are. And that is figuratively, literally, and physically a very big difference. Electronics is all about the physics of the electron and signals using electromotive force. Ionic signaling in the neuron is far more chemical in nature.


You may have heard that if an electron were a pea, most ions would be the size of a dump truck. And these dump trucks move through channels in the axon’s cell wall in a cascading fashion, like dominos falling in turn. This action delivers a signal quickly in human terms, but these cascading ionic signals never approach anywhere near the speed of a computer. If none of this makes any sense right now - great! Ignore the difference for now. You’re halfway to unlearning the electrical nature of the neuron. Here’s an easier way to think about ionic neural conduction - don’t. Think chemistry instead.


Outside the neuron, the brain is best described as biological, with most neural communication delivered by tiny puffs of chemistry at the synapse where neurons almost touch one another. Only within neurons, and only because ion migration within the axon wall polarizes (and depolarizes) these internal liquids, do we need to discuss charge at all. If it seems like I’m splitting hairs on this topic, I’m not. The differences between ions and electrons are dramatic and very important. Ionic charge from this “dump truck” (plus polarity) is actually the opposite of our electron “pea” (minus polarity). This is the same plus and minus you’ll find on the ends of a battery. As you may know, it’s important not to confuse the two ends of a battery, and it’s even more important not to confuse the two types of charge within the neuron.


The practical difference between ionic and electrical charge is the same difference between cascading ionic polarization moving at about 200 MPH, and electromagnetic propagation, which occurs at the speed of light - about 1,000,000,000 MPH (I rounded up to a billion miles per hour to make things more dramatic; the actual figure is about 670 million). The point is, this is a huge difference in signaling speed. And again, if none of this makes any sense - even better! Ignore these electrical details with impunity! You’re well on your way to becoming innocently unlearned.
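To make that gap concrete, here is a back-of-the-envelope calculation. The 90 m/s axon figure is my own illustrative pick for a fast myelinated axon (real conduction velocities vary widely), not a figure from this post:

```python
# Rough speed comparison: cascading ionic polarization in an axon
# versus electromagnetic propagation at the speed of light.
# The 90 m/s axon figure is an illustrative assumption, not a measurement.

MPH_PER_MPS = 2.23694          # miles per hour per (meter per second)

axon_mps = 90                  # a fast myelinated axon, roughly
light_mps = 299_792_458        # speed of light in a vacuum

axon_mph = axon_mps * MPH_PER_MPS
light_mph = light_mps * MPH_PER_MPS

print(f"axon signal: ~{axon_mph:,.0f} MPH")      # ~201 MPH
print(f"EM propagation: ~{light_mph:,.0f} MPH")  # ~670 million MPH
print(f"ratio: ~{light_mph / axon_mph:,.0f}x")
```

Even this generous estimate leaves the ionic cascade several million times slower than electromagnetic propagation.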


OK, only one more comparison of what happens inside of a battery versus outside of it. Inside, ions migrate and build up a charge. Outside, electrons move freely along a metal conductor. There are no such metal conductors inside the brain, and none inside a neuron. Because of its ionic nature, the brain acts like one big chunk of metal in electrical terms. An electrical signal will cross the entire brain in a nanosecond without discrimination as to type of tissue or fluid. Nothing in the brain insulates or isolates such possible electromagnetic propagation.


In contrast, myelin somewhat limits ion migration out of a leaky axon. Water metaphors - leaky hoses - are far more useful for the neuron than wires. Myelin is not an electrical insulator, and there are no insulators separating these fibers in electrical terms. By the way, none of these details are new to science, but it may seem that way depending on how much you rely on electrical metaphors in your thinking.


Unfortunately, this ionic nature of the neuron allows electricity to flash across a brain as quickly as a bolt of lightning, disrupting the delicate ionic balance of each neuron in its path. For decades electro-shock therapy was applied to “reset” the brain. The result changed brain operation in random ways for a time as the brain recovered its ionic homeostasis, which is consistent with clinical recovery for such patients. Neural damage remains an open question for this type of therapy. But again, arguing the point is beyond the scope of what I’m presenting, so I won’t. Instead, I’ll just challenge your assumptions as we proceed.


It’s true that what appears to be electro-“motive” force (EMF) radiates from neurons (and their axons). But it’s actually ionic charge that’s being detected. It’s the opposite in polarity, and it doesn’t “move” much at all. It’s this ionic charge that we measure with an EKG (or EEG as noted above). But this electro-“motive” force has nothing to do with most neuron-to-neuron signaling, which happens chemically, not ionically, in virtually all cases. Indeed, if you characterize the neurons as a black box, you can ignore ions completely.


Even more confusingly, if electricity is applied to the neuron in any way, it actually invalidates the operation of that neuron, at least for a time. In some cases it may even be harmful as noted above, but there’s no need to go into detail here. If you’re not familiar with the difference between electronic and ionic, don’t worry; many neuro-technologists aren’t either, or the metaphor would not be getting such wide acceptance. So relax. We’re just having fun for now. Here’s how I came to know the neuron.


And also, how I came to unknow it.


Snakes and Neurons 


During the summer between my sophomore and junior years of high school, I returned to Tucson and worked with my grandfather at the Skyline Country Club. He was the greenskeeper at their golf course during and after construction. We’d go to work at 5 P.M. to avoid watering in the worst of the summer heat. I’d spend my nights driving around the desert carrying sprinkler heads and turning water valves on and off in the dark. This was long before the electrical automation of watering systems. And we didn’t bother with headlamps. Such lights were far heavier and more awkward than they are today.


At the time, Skyline was a new course and only a few of the paths were paved. Golf carts weren’t fast enough to get the work done and most of the roads we used were raw desert sand and rock, thus the need for something rugged. It was only a couple of decades after World War II and surplus Jeeps were a cheap solution. This job was also where I learned to drive as I didn’t have my license yet.


I especially appreciated that Jeep. It kept me up and away from the rattlesnakes. The headlights helped. At least most of the time. One night while carrying an armload of sprinkler heads I actually stepped over a rattlesnake in the dark. I was walking with my grandfather at the time. His eyes were better than mine and he pointed out that I was safe. It couldn’t strike until it had coiled.  


As I said, this was a new course. All the fresh water had brought lots of animal life out of the Santa Catalina mountains. And rattlesnakes followed the other critters. Stepping on a rattlesnake was only one of the dangers. My grandfather had a side business of capturing the snakes and selling them to the University where they did who knows what. Several times I had to hold a gunny sack in the dark while my grandfather dropped a snake in. Once he even caught a coral snake, but it was small and somehow got out of the sack in the cab of my grandfather’s pickup. We never did find it again. For a while, I rode with my feet up on the seat. My grandfather didn’t seem to mind the snakes.


After sunset, we’d stop on top of the hill above the maintenance shop and have “lunch”. This was also the far end of the parking lot for the main clubhouse. While we ate, my grandfather would tell me stories about the coal mines of Kentucky. He was my age when he first worked in the mines and noted that they were dangerous enough, but union sabotage and company bulls with clubs and guns during the labor disputes were a far greater risk. It’s one reason he started a tie mill. This allowed him to get away from the picket lines. He described some of the violence. He said snakes and cave-ins were a minor threat in comparison to what humans could do to each other. He noted that because of such conflict and war, humans were the most dangerous animal on earth. I’ve since learned that mosquitoes are worse. But humans are a close second. Rattlesnakes don’t even make the shortlist.


On one of those nights at the golf course, I finished my change-ups early so was taking a break in the far corner of the clubhouse parking lot. The sun had long ago set and there was an excellent view of the city from this very dark location. A large thunderstorm was moving up from Mexico. Lightning flashed around the edges to set the mood for an unlikely encounter. 


Just then a car pulled up a couple of spaces over. The headlights had not illuminated my position. A guy got out and opened his trunk. I think that’s when I startled him. A high school kid in a surplus army jeep was not what he expected to find in the dark. I put him at ease by explaining I was waiting for my sprinklers to finish. Perhaps to regain his composure he began a conversation that started with the lights of the city beneath the storm but quickly shifted to something far more enlightening. 


It turns out this guy was a grad student at the university. He was studying neurons. Since logic was my current fascination, I asked him how neurons might perform a logical evaluation. I already knew that neurons moved signals asymmetrically across the neuron but didn’t know how logic was integrated into the process. 


Asymmetric Communication 


Asymmetry in this context simply means that signals generally come in one side of any neuron and go out the other, from input to output, not unlike a logic element sending signals over wires to another logic element. The point is, the signal does not seem to internally back-propagate, at least not directly. Setting a backfire in a forest has a similar effect. If you have a bit of wind, the fire will only burn in one direction. So it is with the firing of a neuron. The fired signal tends to go in one direction, from input to output and onward to the next inputs (but as a chemical signal, as noted, not electrical).
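The backfire image can be sketched as a toy simulation (my own illustration, not a biophysical model): each segment of the “axon” fires once and then stays refractory, so the wave can never re-ignite the trail behind it.

```python
# Toy one-way propagation: a chain of segments where each can fire only once.
# The refractory trail behind the wavefront is what keeps the signal moving
# in a single direction, like a backfire burning with the wind.

def propagate(n_segments, start):
    fired = [False] * n_segments     # refractory map: True = already fired
    wavefront = [start]
    history = []                     # order in which segments fire
    while wavefront:
        next_front = []
        for i in wavefront:
            fired[i] = True
            history.append(i)
            for j in (i - 1, i + 1):  # try to ignite both neighbors...
                if 0 <= j < n_segments and not fired[j] and j not in next_front:
                    next_front.append(j)  # ...but refractory ones won't catch
        wavefront = next_front
    return history

# Started at the hillock end, the wave can only travel toward the terminal:
print(propagate(6, 0))   # [0, 1, 2, 3, 4, 5]
```

Poked in the middle instead, the toy wave spreads both ways from the injection point; it is the normal start at the hillock, plus the refractory trail, that makes firing look asymmetric.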


I had already designed an ALU (arithmetic logic unit) and understood such logic intimately, so I asked how neurons might be connected to perform these functions normally associated with a computer. I think my understanding of logic and its parallels in the brain was his second surprise of the night. I’d been studying electronics since fourth grade. I asked how neural logic might be a factor in controlling behavior. He’d obviously also thought about the topic extensively and described the neuron to me in the following way. It’s how I came to both understand (and misunderstand) what neurons did:


If you’re not familiar with the neuron, you have an advantage, and far less to unlearn, so I’ll keep this simple. Our bodies are made up of trillions of cells in a bag of salt water. These cells may be muscle cells, fat cells, and of course, neurons, among other types. One of the first and most impressive tricks of evolution is the cell wall (in animal cells, technically a cell membrane). It allows the cell to control what to let in, and what to keep out in many respects. Each cell is contained within its cell wall. Muscle cells allow the body to move in various ways, expressing behavior. Fat cells store energy for later use. Billions of other cells in the body perform many other evolutionary tricks. Neurons are a big part of that count.


Like all cells, neurons have a cell wall, a nucleus, and ways to provide energy for the cell, but they also do something no other type of cell does (at least not at the speed that neurons do it). This special evolutionary trick is that neurons signal other neurons using very small bits of neurochemistry, which ultimately tell muscle cells when to move.


The grad student that night described these signals as traveling asymmetrically, from input to output. I asked how neurons might electrically encode information. (I didn’t learn about state machines until much later.) As it turned out, he didn’t know the answer. To this day, no one else has been able to model how neurons might encode logic “states” representing information. That’s because they don’t. The closest thing a neuron has to a “state” is a dynamically evolving sensitivity to specific conditions in the world, and since this sensitivity is different each time the neuron fires, it's not technically a state. Or memory in the conventional sense.


The brain does shift moods as hormones wax and wane, and neurons too dynamically change sensitivities in a similar fashion, but describing either as states would be inaccurate. Yet both are a form of chemical signaling in the nano and macro context. Chemical diffusion typically has an aspect of temporal inertia, but the result is not very state-like. Instead of “encoding”, neurons have a way of evolving this sensitivity using something I’ve come to call analogic, which I’ll describe shortly. Consistent “states” encoded either electrically or chemically have yet to be found in the brain, but neurons clearly signal one another. This raised an obvious question at the time, and still does - what does this signal mean? Information theory does not deal with the meaning of signals, but it does require that both sender and receiver agree on what a signal means for it to be useful.
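For contrast, here is a toy unit of my own invention that behaves the way this paragraph describes: its only “memory” is a sensitivity that drifts every time it fires, so the same stimulus can get different answers on different occasions - which is exactly what disqualifies it as a state machine, where a given state plus a given input must always yield the same output.

```python
# Toy sketch (not a real neuron model): a unit whose sensitivity drifts
# each time it fires. Identical inputs can produce different outputs over
# time, so there is no fixed "state" to encode or look up.

class DriftingUnit:
    def __init__(self, threshold=1.0, drift=0.2):
        self.threshold = threshold   # current sensitivity to stimulation
        self.drift = drift           # how much each firing changes it

    def stimulate(self, strength):
        fired = strength >= self.threshold
        if fired:
            self.threshold += self.drift   # firing leaves it less sensitive
        return fired

u = DriftingUnit()
print([u.stimulate(1.1) for _ in range(4)])   # [True, False, False, False]
```

The same 1.1 stimulus fires the unit once and then fails, because the act of firing moved the threshold. A lookup table cannot capture that behavior without smuggling the drifting threshold back in as hidden state.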


Understanding what these neural signals mean, and how they can create simulations of reality is our objective. How neurons know when to tell the muscles to move (and how much) is critical to such simulations, but for now, let’s just describe neurons as asymmetrical chemical communicators of knowledge. This simply means that neurons create and deliver signals along their most significant fiber called the axon, but generally in one direction - asymmetrically. These axons may be much shorter than a millimeter or up to several feet long. They tend to branch out and connect to other neurons at the far end (often far away from the source neuron’s cell body, but not exclusively so).


Neurons also have input fibers. These are called dendrites; they are much shorter and tend to form close to the neuron's cell body, or soma. Dendrites can be thought of as relatively short whiskers on the input side of the neuron. This is where the real magic happens. The axon typically protrudes from the other side of the neuron at a bulge called the hillock and extends for some distance as noted. The hillock is also known as the “trigger zone” and will be quite important when we get around to understanding how neurons create knowledge. 


Dendrites support even smaller fibers called spines, which host connections called synapses where other neurons’ axons almost connect at the input side of the synapse, but not quite. There is a very small gap in the synapse between the axon’s part of the cell wall and the cell wall of the next neuron’s spine and dendrite. This gap plays a critical role in the communication of any two neurons. It’s where the firing neuron delivers a very small amount of chemistry to the next neuron. This chemical is described as a neurotransmitter, and the site where it’s received by the next neuron is called a neuroreceptor.


Transmitters and receivers harken back to radio, where signals travel at the speed of light as electromagnetic waves. They are distracting terms, but I don’t have better alternatives so far. These neuronal transmitters and receivers are actually chemical ports. Try not to think of them in terms of radio communication. They are not radios. Unfortunately, these words will have to do for now.


Getting back to signal meaning, how does agreement happen between neurotransmitters and neuroreceptors? Strangely enough, I now believe that it doesn’t. At least not in any logical way. Instead, this signal represents an enigmatic bit of knowledge (which we’ll explore shortly). This converging and cascading chemical knowledge forms a pathway from sensor to muscle, but this signal is not deterministic as required by information theory. Perhaps we need to define a new knowledge theory to contrast with information theory. I’ll put it on my To-Do list.


Until then, the "state" aspect of memory does not apply to the brain, at least not in the conventional sense. Neurons do not deliver states as described by information theory. Instead, they evolve a sensitivity for a particular condition not unlike an immune response detecting a virus which it has experienced before, but we’re getting a bit too detailed. For now, it’s best to think of neurons as creators of a magic signal that axons deliver at a distance in the form of nano-chemistry, asymmetrically. That’s it. That’s all we need to know about neurons. At least for now. 
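For technologists who want something concrete to hold on to, here's a toy sketch of that idea: a "neuron" with no stored state, only a sensitivity that drifts every time it's used. The class name, thresholds, and update rule are all my own assumptions, chosen purely for illustration.

```python
class ToyNeuron:
    """A toy 'neuron' with no stored state, only a drifting sensitivity.

    Everything here (thresholds, step sizes) is an invented assumption,
    meant only to illustrate 'sensitivity instead of state'."""

    def __init__(self, sensitivity=0.5):
        self.sensitivity = sensitivity  # not a state: it changes with experience

    def receive(self, signal):
        fired = signal >= self.sensitivity
        if fired:
            # Each firing slightly reshapes what the neuron responds to,
            # so the same input may not fire it next time.
            self.sensitivity = min(1.0, self.sensitivity + 0.05)
        else:
            # Quiet periods let sensitivity relax back toward baseline.
            self.sensitivity = max(0.0, self.sensitivity - 0.01)
        return fired

n = ToyNeuron()
before = n.sensitivity
n.receive(0.9)              # the neuron fires once...
after = n.sensitivity
print(before, "->", after)  # ...and its sensitivity has already drifted
```

Ask this toy neuron the "same" question twice and you may get two different answers, which is exactly what makes it a poor fit for the word "state".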


For you technologists, let me cushion the blow a bit. Think of intraneural communication as a series of dominos lined up within the axon. Ionic tension may push over the first domino and start a cascading collapse that ends with the last domino dumping a bit of chemistry into the next synapse. Or if you don’t like dominos, replace them with butterfly wings caught in a cascading line of electrostatic discharge. I will elaborate later. Beyond that, we risk slipping back into our electrical metaphors. After all, everyone knows that sometimes dominos don't do what you expect. And with butterflies, anything can happen.


The next thing to unlearn is the idea that neurons are average.


Disproportionality 


You may have heard that we only use ten percent of our brain. This myth has largely been debunked, but it actually retains some utility at the nano level. The misunderstanding was caused by average oxygen consumption rates measured in the brain in the macro context. Extreme disproportionality is the reason for these oxygen observations and their misinterpretation. These early average estimates were dramatically overstated in some respects and understated in others. More modern measurements show only about one percent of neurons are in the process of firing at any moment, though the rate can vary significantly depending upon individual activity and immediate experience. 


Most of the time, most neurons are not firing. Many individual neurons are quiet for minutes, weeks, or even years. But some neurons fire a lot - in some cases, almost all the time. This disproportionate firing rate actually reflects what happens in the outside world. Sensory neurons that detect detailed changes in the world fire most often, with interneurons firing less often as abstraction increases at each succeeding step along the neural pathway. This is how signals create simulation as they move up any neural pathway, creating knowledge that ranges from concrete to abstract. Motor neurons of course fire more often when there’s lots of movement involved, but between sensor and motor, most of the middle parts of the pathways are far less active than even movement requires. But I’m spoiling the surprise. 


In the brain, Pareto’s principle applies. Even more so. Think of it as hyper-Pareto. Whose law covers 999 to 1? 9,999 to 1? Disproportionality occurs by orders of magnitude in various ways throughout the brain.
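To see how extreme such disproportionality gets, here's a small sketch. It assigns 10,000 toy neurons a Zipf-like firing rate (rate proportional to 1/rank) - an assumed distribution, chosen only to illustrate the point - and asks what share of all firing the busiest one percent account for.

```python
# 10,000 toy neurons with a Zipf-like firing rate (rate ~ 1/rank).
# The 1/rank law is an assumption, chosen only to make
# disproportionality easy to see.
N = 10_000
rates = [1.0 / rank for rank in range(1, N + 1)]

total = sum(rates)
busiest_one_percent = sum(rates[: N // 100])
share = busiest_one_percent / total
print(f"the busiest 1% of neurons produce {share:.0%} of all firing")
```

Under this toy 1/rank law, the busiest one percent of neurons account for over half of all firing, while the quiet majority contribute almost nothing. An "average firing rate" over such a population describes almost no actual neuron.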


The point is, statistics and average measurements are of little value when modeling the brain. Most neurons are inactive most of the time. Brain architecture, the number of connections, and active firing rates are all extremely disproportionate. Applying averages in the brain, much like generalizing about human behavior, is a fool’s errand. It sort of works, but you can’t count on it. And for similar reasons. We need to back away from means, medians, and statistics in general as we explore the brain. We need to leave these powerful tools of science behind for a while.


Brain Waves and Imaging are Gross 


Though medical imaging has seen amazing progress and utility over the last century, and especially the last few decades, when it comes to brain modeling, most imaging is grossly over-interpreted and misunderstood. Much like brain waves a generation before, brain imaging now does more harm than good. Here’s why:


When I first gained access to minicomputers in the early 1970s I discovered that I could hold a small AM radio up to the sides of the CPU, (Central Processing Unit), and actually listen to programs being executed. You can try it yourself if you have an old AM radio and a cell phone (which of course, is a computer). Hear the static? It’s not nearly as random as you might at first conclude. There are definite patterns in that noise. 


Of course, even with old and relatively slow minicomputers, these sounds were not caused by individual computer instructions, but the flow of the program could definitely be heard in a gross sense. Today it sounds almost random because the frequencies are so much higher, causing so many more electrical transitions per second. But if you listen with a transistor radio held up to the side of an old Digital PDP-8, you’ll hear more order and more rhythm from the speaker than you will from a cell phone. I discovered these noises in a simpler time. I even wrote programs to yield a type of crude percussion music, without the important aspect of actual musical notes. This was before Moog was popular.


Something similar is happening with brain imaging today. We observe the highest-order rhythms of brain activity, or even more crudely, areas of increased oxygen demand. Correlating these images or movies with behavior is like trying to predict a single FedEx delivery by watching rush hour traffic from 30,000 feet. Try to follow a single vehicle while looking out the window of an airplane at cruise altitude next time you fly. It’s easy to keep track of a large truck on the freeway, but when it gets into the city, don’t blink. Actually, CPU noise and watching cities from an airplane are very generous metaphors compared to fMRI and other imaging methods, which have much lower resolution. They are more like trying to track that FedEx truck from the moon. Is looking at clouds from space meaningful when trying to understand freeway traffic? Not very. It's like trying to predict behavior using 19th-century phrenology.


Imaging is especially useless when mapping chemical communication between neurons at the nano level. Very high resolution imaging is a bit more useful in the micro context. With more sensitive equipment, one can even sense ionic signals enabling a monkey to move a robotic arm. This is ultimately useful, but only in a very gross sense. Finer control, and more importantly, neuron sovereignty, is sacrificed on this altar of electricity. Better brain interfacing will flow from a better understanding of the neuron.


In summary, such macro views of the brain are just mush and blur, not devoid of meaning, but almost certainly and dramatically over-interpreted. Scope and context are critical to exploration, but the devil is in the details. Brain waves and imaging are of little value on the macro scale. I won’t be bothering with either beyond this warning. We need to avoid this distraction no matter how pretty or how entertaining the images are. For now, simply ignore brain imaging and “electrical” brain waves. 


Now for our most distracting metaphor.


The Brain is Not a Computer


“For more than a century, the single nerve cell has served as the structural and functional unit of brain activity. Pioneers of cognitive science enlisted the neuron doctrine as the foundation of the brain’s putative computational capacities. Each neuron was conceived as an on-off switch presumed capable of acting as a logic gate, enabling information to be ‘digitized’ (turned into ones or zeros) and thereby ‘encoded’. Single neurons were assumed to perform complex encoding tasks, including for places, faces and locations in space; a Nobel Prize was awarded on this basis.” - Pamela Lyon - Flinders University, Adelaide


I include Pamela’s assessment to note how ingrained the computer is in thinking about neuroscience. By far, the most common metaphor applied to the brain is the computer. This is largely because of the emergent impact of the computer upon our culture and lifestyle, and also for some obviously similar aspects of their operation which are mostly a decursive macro illusion. The computer has thus become the very thing blinding us from the nature of the neuron and its collective expression, the brain. 


Sure, the computer does a great job of accessing and manipulating information, but it rarely creates knowledge. And if it did, who would know? Most importantly, the operation of the brain has almost nothing in common with the operation of a computer. The challenge has been well addressed by Pamela Lyon noted above, Gerald Edelman, and many others for decades so I won’t go into much detail, but a few main points are important to challenge before we proceed with a gnostic model of the neuron. This is not easy for me. I love computers. They are just so unlike brains that it would be negligent to ignore the issues.


Here are the most important aspects that set the brain apart from a computer:


A computer is fast, digital, consistent, synchronous, serial, fixed, objective, and most importantly, logical. In contrast, the brain could be described as largely opposite in each of these important aspects, but not exclusively so. Here are the main differences in chart form to help visualize the comparison:



A Computer | The Brain

Fast | Relatively slow, but oh so elegantly time-efficient
Digital | Biologically analog yielding stateless digital signals
Consistent | Mostly malleable with evolving consistency
Synchronous | Actually asynchronous but exploiting synchronicity
Serial | Profoundly parallel only converging to serial
Fixed | Predominantly plastic, in critical phases, by degrees
Objective | Surprisingly subjective, aspiring to the objective
Logical | Ultimately bioanalogical, but not exclusively so


Please note in the table above that I define computers in terms of only eight words, one for each aspect. These eight are not the only differences between computers and brains, far from it. There may be a thousand things that matter, but I want to keep this simple and obvious for now. The above list is easy to defend.


This table is an example of something technologists might recognize as sparse coding, which simply means finding the few things that matter most, then using as few binary bits as possible to encode these most significant things. Sparse coding works because of what’s not included, and not encoded. Maps are a good example. Only the important stuff gets included. The brain does this better and in a more elegant fashion than computers. Or even maps. I’ll get to it shortly. Hopefully, we won't throw out the baby with the bathwater - keeping only what matters is the essence of sparse coding, in contrast to computer systems, which tend to capture more data to sort out later.
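For the technologists, here's a minimal sketch of the top-k flavor of this idea: keep only the few largest-magnitude components of a signal and drop everything else. (Real sparse coding systems learn a dictionary of features; this top-k version is my own simplification.)

```python
# A minimal 'sparse coding' sketch: keep only the k largest-magnitude
# components of a signal and zero out the rest. (Real sparse coding
# learns a dictionary of features; this top-k version is a
# simplification of the idea.)
def sparsify(signal, k):
    keep = set(sorted(range(len(signal)),
                      key=lambda i: abs(signal[i]),
                      reverse=True)[:k])
    return [signal[i] if i in keep else 0.0 for i in range(len(signal))]

signal = [0.1, 7.0, -0.3, 0.05, -6.0, 0.2, 0.0, 0.4]
sparse = sparsify(signal, k=2)
print(sparse)  # only the two dominant components survive
```

Everything the map leaves off the page is what makes the map useful; the zeros here play the same role.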


Also, in the brain column, I qualify each opposing aspect with a contradiction, emphasizing that many exceptions are needed to describe the more multifaceted and competitive approach the brain uses to cooperate in parallel, yielding a somewhat serial result. I do this to avoid defining (or fixing) the brain’s version of each aspect. In truth, there are exceptions in both columns, but more so for the brain. Finally, I’ve added McGilchrist’s “but not exclusively so” to the final bioanalogical aspect, as it is the most confusing exception. At least it was for me. The result ranges from the dichotomy of definition to the threshold of an enigma. I do this so you will question my descriptions in more detail.


Even though it often leads to paradox, technologists tend to cling to a consistent, deterministic, and most importantly, defined model of the subject at hand. Or its opposite, when contrasting in a binary fashion. As Bob Dylan might say, “When something’s not right - it’s wrong!” As we proceed, we’ll leave Bob behind and try to keep our thinking biologically flexible. Now I’m going to color outside the lines even more as I address some of these eight aspects, and a few others not listed, as I did with sparse coding above.


Are brains like computers?


Not very much.


The Elegance of Inaction 


I previously compared a bird to a Boeing to suggest alternative ways of simulating the world, but the metaphor breaks down when the transit times of the two solutions are compared. Signals in the brain travel at the speed of a very fast automobile. Electronic computer signals travel at nearly the speed of light. 


As noted by Jeff Hawkins in “On Intelligence”, this limited biological speed only allows for about a hundred neurons in any neural pathway from sensor to muscle, at least if cause and effect are to be preserved. When we consider the possible allocations between transient response and propagation delays against what neurons accomplish, this number of jumps could be fewer than 100, but not by much. 
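The arithmetic behind this "one hundred step" limit is simple enough to sketch. The specific figures below (a half-second recognition task, roughly five milliseconds per neural jump) are rough ballpark numbers, not measurements:

```python
# Hawkins' 'one hundred step' limit, back-of-the-envelope style.
# Both figures below are rough assumptions, not measurements.
task_time_s = 0.5    # typical time to recognize something familiar
step_time_s = 0.005  # rough cost of one synaptic jump plus integration

max_serial_steps = task_time_s / step_time_s
print(f"~{max_serial_steps:.0f} serial steps from sensor to muscle")
```

A modern CPU executes billions of steps in that same half second; whatever the brain is doing in roughly a hundred, it isn't doing it the computer's way.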


The bottom line is, the brain seems to be as elegant in what it accomplishes with this speed and the limited number of “jumps” as it is with its astounding power efficiency. It seems that the brain’s Zen nature is more about what it doesn’t do, in comparison to what a computer has to do, to accomplish a similar result. There seems to be sparse signaling not just in content, but also in time and energy. This is probably the most telling contrast between brains and computers. The brain accomplishes far more, using far less, in all three aspects - speed, power, and encoding.


As for cause and effect, it depends upon who’s in control. Plus, the number of “jumps” in any neural pathway is rarely average. Finally, there are ways around the temporal paradox, as described by more recent timing experiments in the cortex. Where does behavior originate? It must be within the neuron. And when exactly? Whenever it decides. The details are beyond the scope of this contrasting exercise, but rest assured, the temporal paradox will not be ignored. It has a solution. For now, think in terms of an extraordinarily elegant tortoise, and forget about the computerized hare. 


Phineas Gage 


My grandfather knew I was studying logic that summer on the golf course. And he knew I’d bought a book about the brain after my discussion with the grad student. My guess is that’s why he told me a story he’d heard about a guy who had an accident on one of the rail lines. He said he’d heard the story as a kid:


In the process of dynamiting a cut for a new track, a spark accidentally set off a black powder charge which drove a pike completely through this guy’s brain. According to my grandfather, the guy survived and went on to live a fairly normal life. I had my doubts at the time, and my grandfather admitted it happened way before he was born, but he believed that the story was true. I wasn’t so sure at the time but found the tale interesting nonetheless.


It was years later that I connected his story that night to Phineas Gage, one of the most famous brain injury cases of all time. My grandfather had not known Gage’s name, and I did not make the connection until decades later. Names are a bit of knowledge that allows us to connect things. The name Phineas Gage provides a handle to cue this vivid story. Since my grandfather worked in coal mines, this story would have been an important lesson even decades after the actual event: when preparing an explosive charge, it’s important to always work from the side of a borehole, not directly above it. Also, for me it made the point - the brain is resilient and has an amazing ability to recover from injury, even major injury. A computer would never survive such damage. Now for more unlearning.


“It’s Only Analogical, Captain” - said Spock. Never.  


The prefix “ana” is Greek for, “up”, “again”, or “apart“, and is widely applied in the field of biology. These three letters also prefix both “analog” and “analogy”. For me, these last two especially help to contrast technical or philosophical logic with what happens within the neuron and the brain. Logic is definitive. Analogic is logic by degrees, but not necessarily in a proportional fashion. "Logic by degrees" is undefined in boolean algebra, much like "divide by zero" in arithmetic. Thus the need for ANAlogic. It depends upon the chemistry of that context, or the "mood" of any given neuron. For instance, think of a logical decision in a relationship that can be altered by three drinks of alcohol. Those drinks alter behavior in a macro sense. Other mood shifts may occur in the micro and nano context as well. Analogic only approaches logic by doing something enigmatically similar - sometimes, or something like that. Bear with me as I compare analogic to logic.
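For contrast, here's a toy sketch of what "logic by degrees" might look like next to boolean logic. The gate, the mood parameter, and the numbers are all invented for illustration - analogic in the brain is chemical, not arithmetic:

```python
# A toy 'analogic' gate: a fuzzy AND whose threshold for 'true' drifts
# with a mood parameter. The function, the mood term, and the numbers
# are all invented for illustration.
def analogic_and(a, b, mood=0.0):
    threshold = 0.5 + mood         # mood shifts what counts as 'true'
    return min(a, b) >= threshold  # fuzzy AND: the weaker signal decides

sober = analogic_and(0.6, 0.7)                   # both inputs clear the bar
three_drinks = analogic_and(0.6, 0.7, mood=0.2)  # same inputs, bar moved
print(sober, three_drinks)
```

A boolean AND gives the same answer every time; this one gives a different answer to identical inputs when the "chemistry" shifts, which is the whole point.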


The original Star Trek series premiered when I was in high school. As a fan of SciFi, this new TV show caught my attention. It was love at first "to boldly go". Watching Spock and his Vulcan nature, I became fascinated by the differences between the logic of Vulcan culture in contrast with human behavior with all of its messy exceptions. The show literally inspired me to study boolean algebra. It’s also when I began reading Plato and others who addressed the topic. Finding meaning from endless logical arguments was so much fun I even considered studying law for a time, but the fever passed - there were too many exceptions. Both human behavior and law were too unpredictable and illogical for me.


But challenges with alien creatures seem so simple if you just take them logically as Spock did. I later wondered if Paul Simon’s only number-one solo hit, “Fifty Ways to Leave Your Lover” was inspired by Spock’s logical culture. If you could present reasons in a logical manner, you might even keep your heart from breaking. Well maybe.


At least Spock made it look easy. Even when Captain Kirk screwed things up yet somehow succeeded, it was easier to revert to logical analysis. If I ignored the human element, I could live in a perfectly deterministic world. Logic was the key. Logic allowed me to wall off the messiness of the real world. Digital electronics, computers, and programming were all ways to live in this perfectly logical world. I fell in love with logic. The honeymoon lasted for decades.


Later, when Star Trek: The Next Generation elaborated on something called the “prime directive”, I took notice. This is the idea that one should try never to affect the object under study, only observe it. This seemed to challenge the “stimulus-response” model so popular in brain science for the last few hundred years. In the spirit of the prime directive, I remain critical of that approach, as it can grossly distort the result, making any observations less useful for logical analysis. It was also the reason for my early concerns about any data captured by applying electricity to the brain.


Logic is the most clearly defined and deterministic branch of mathematics. Logic makes integers, with their “divide by zero” and other issues, seem downright fluffy. Even merely ganging logic together to encode the analog world opens up the challenges of range and resolution. And things get worse with real numbers. But logic by itself is almost pure and complete, the most challenging exception being how it’s applied to biology. And behavior. We won’t need to unlearn logic, but we will need to understand how technical logic likely evolved from the biological version. I refer to the biological version as analogic because of the obvious similarities when it comes to creating knowledge. Logic works with clear definitions, like all mathematics. Analogic is more flexible, especially in the early phases in the nano context. 


As a 14-year-old, I found the beauty and consistency of logic compelling. I’ve since spent most of my adult life with this tool close at hand and have used it widely. But relationships change over time, and so has this one. Don’t get me wrong. I still have a love of logic, but I also have a new lover - knowledge. This new relationship is far more accommodating because of its analogical nature. And because one love need not preclude another.


Analog Versus Digital 


During much of grammar school and high school, my cousin Dave Cline and I shared more than just classes. We also shared a lab. Well, that’s what we called it. It was actually his sister’s playhouse, which she had long ago abandoned. This “lab” was a free-standing building of about 10 feet by 12 feet located behind his house. We built a bench along the back wall. Dave took the right half, I the left. Over the years we built bikes, rockets, radios, and other circuits in our “lab”. By the time we were in high school, it was mostly used for electronics. I was into the newest digital systems. Dave preferred analog. Many relate to the analog/digital dichotomy through analog music, which has seen a recent resurgence. But the difference goes much deeper, even to the core of physics and philosophy. Our interest in the analog/digital dichotomy was all about electronics. It was a friendly competition of dueling designs. Cooperation might come later.


At one point we had both purchased oscilloscope kits which we assembled. For those not familiar, an oscilloscope is a kind of TV for electronic waveforms. Because of our limited resources, these were inexpensive and very simple single-trace units. To make them more useful we decided to add a dual-trace input circuit so we could compare two waveforms on the screen at once. My design digitally switched from one input source to another quickly enough to time-share the oscilloscope beam. Dave took the classic analog approach of mixing a square wave with the two input sources to be observed.
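My digital approach is easy to sketch in modern terms: time-share the single beam between two channels by switching faster than the signals change. The code below is a loose illustration of that chopping idea, not a reconstruction of the actual circuit:

```python
# A loose sketch of the digital 'chop' approach: one beam, two input
# channels, switching between them faster than the signals change.
# The sample values and switching rate are made up for illustration.
def chop(channel_a, channel_b):
    beam = []
    for i, (a, b) in enumerate(zip(channel_a, channel_b)):
        beam.append(a if i % 2 == 0 else b)  # alternate samples per channel
    return beam

trace_a = [0.0] * 8  # a flat trace on one input
trace_b = [1.0] * 8  # an offset trace on the other
beam = chop(trace_a, trace_b)
print(beam)  # the single beam carries both traces, interleaved
```

Dave's analog approach reached the same end by mixing a square wave into the inputs, which is why our schematics converged, as described below.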


The design world today is almost completely digital, but in 1968, analog was the standard approach. Radio was analog. Television was analog. Even a few simple computers were analog. But the cool new computers were all digital. Philosophically, digital and analog are about as different as possible and still be called electronics. Both approaches were common at the time. As an exercise in design, we were reinventing the wheel with these new dual-trace circuits.


The world in which we live is mostly analog. Most performed music is analog. Temperature is analog. Dance (movement) is analog. But the representation of each can be digitized in various ways. In nature, virtually everything is analog. You can think of analog as smooth waveforms, infinitely variable. Old fashioned volume controls with real knobs are a great example. Most of our interaction with the natural world is analog. From the chirp of a bird to the warmth of a kiss, we experience an analog world.


In contrast, the digital world is driven by logic and math. Anything in nature, such as sound or music, can be quantified by defining values of a certain resolution and range (the two parts of a floating-point number). Once digitized, these sounds from nature can be treated as numbers to be encoded, copied, and manipulated by computer programs. This digital world has another special quality - it’s deterministic, meaning that a song sounds exactly the same each time you play it. Or at least it should (reality has exceptions). Manipulating these values using math yields a consistent result - more consistent than nature itself, if digital weren't itself part of nature. Let's just say digital is less common in nature. Digital also allows for interchangeable components - the key to mass production. In contrast, analog often has to be tuned for each application, and over time, it degrades. Digital is always the same. Or it doesn't work at all - much like comparing Phineas Gage’s resilient brain to a computer, where a single failure can brick the device.
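A quick sketch of what digitizing means in practice: sample an analog waveform, then snap each sample to one of a fixed number of levels. The bit depth and sample count below are arbitrary choices for illustration:

```python
import math

# Digitizing a waveform: sample it, then snap each sample to one of
# 2**bits evenly spaced levels. Bit depth and sample count are
# arbitrary choices for illustration.
def quantize(value, bits, lo=-1.0, hi=1.0):
    step = (hi - lo) / (2 ** bits - 1)        # resolution within the range
    return lo + round((value - lo) / step) * step

samples = [math.sin(2 * math.pi * i / 16) for i in range(16)]
digital = [quantize(s, bits=3) for s in samples]  # at most 8 distinct values
print(sorted(set(digital)))
```

The range (here -1 to 1) and the resolution (here 3 bits, so 8 levels) are exactly the two trade-offs a floating-point number makes; add enough bits and the steps become too fine for the ear or eye to notice.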


Ironically, if you look closely enough at nature, some parts of the world become almost digital. Atoms and molecules are actually discrete. We only perceive them as analog because of their extraordinarily high resolution. Well, mostly. Smell, taste, and some aspects of light have certain digital qualities because of their molecular and quantum nature. Our neural sensors can detect a single molecule of odor, and a single photon of light. These vivid exceptions nicely demonstrate how exquisite our organic neural sensors can be.


But in a macro context, we live in an analog world. So why would we bother with digital? Digitizing the world has some dramatic advantages. It makes our analog world easier to capture, store, and manipulate. That’s why our interaction with the world today has been almost completely digitized by technology.


Designing these oscilloscope enhancements at the time was a challenge. Integrated circuits and TTL were exotic and very expensive, certainly beyond our budgets. Many of our parts came from old transistor radios. My memory system on another project was literally made from relays taken from a pinball machine. For this project, we struggled to find transistors with matching characteristics. For these reasons, our designs had to be simple, even elegantly so.


In another application I actually pushed the limits of good power design by using a single transistor as an AND gate. After all, two of its three wires require the same polarity, and that final wire the opposite polarity in order for it to activate. This allowed me to use an analog transistor in a pseudo-digital fashion. At the time I jokingly thought of it as “analogical”. Decades later the term would take on new meaning.


It took a few weeks, but we both got our solutions to work reasonably well. Indeed, they had similar performance characteristics. To this point we had been quite secretive as to our implementation, even hiding our schematics. Now it was time to critique each other's work.


With a digital perspective, I of course started with a logic design then figured out how to cost-effectively implement it using linear transistors, which was all I had. Dave’s design treated the inputs as if he were mixing music channels but at frequencies high enough to form fairly nice square waves, again implemented using similar linear transistors. That was not surprising. 


What really got my attention was that when these two different designs were reduced to electronic schematics, the circuits were virtually identical. I was very much challenged by this outcome and compared the designs in various ways only to conclude that no matter how you approach this particular problem, the optimum result was similar. It reminded me of the quantum nature of light being both a particle and a wave. The concept was to become important decades later when sorting out the nature of the neuron compared to the computer.


Here's a video that nicely describes some of the issues between analog and digital that challenged me for years. The presentation remains clearly in the tech world so in terms of unlearning, take it with a grain of salt:


Future Computers Will Be Radically Different



Mechanical Perfection 


As children, we wish time would go faster, letting us do the things the bigger kids got to do, with more freedom each passing year. Once we reach the age of majority, we wish time would slow down so we could take advantage of these options. And it’s that way for the rest of our lives. At least so far.


One of the things I liked about school in Tucson was that vocational training began in 7th grade. The girls went to one classroom to study home economics, and the boys to another to learn mechanical drawing. I’d been looking forward to this topic for more than a year. Drawing was the key to expressing engineering designs and schematics, already a personal interest. Schematics are a type of map showing how the various elements of an electronic circuit are connected to one another. I was fascinated by them. I had even saved money and bought a collection of precision drawing instruments. My intention was to create perfect schematics and get an “A” in this class.


On the first day, I started by placing my paper perfectly on the drawing board. I don’t know how many pieces of masking tape I wasted trying to get the position against the T-square perfect, then putting tape on all four corners without moving the paper or ending up with tension between these four points of support on the slanted board. Seriously, much of that first hour was spent learning this skill. And that was just putting the paper in place.


Next I began to plot the assignment, but not with a pencil. It was too early for that. Instead, I used the very fine point of my high-tech divider to make a tiny hole where the first line would begin and another one where it would end. These holes were so small you could only find them if you knew exactly where to look. Then I would go on to plot the next line with another tiny hole. By the time I closed the loop there was a very small gap. My tiny holes were in the wrong place.


The problem, of course, was accumulative error. This was something I would learn a great deal about years later when I managed a survey crew. On this day, no matter how careful I was, it kept happening. Sometimes error averages out. Most of the time it doesn’t. In any case, I tore the paper off and started over. I didn’t want to get confused about which tiny hole belonged to which attempt.
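The effect is easy to demonstrate: chain together many small measurements, each with a tiny random error, and watch the endpoint drift. The segment count and error size below are arbitrary:

```python
import random

# Chain 100 segments, each measured with a small random error, and see
# how far the endpoint drifts from the truth. The segment length and
# error size are arbitrary; the seed just makes the run repeatable.
random.seed(42)

true_length = 1.0
n_segments = 100
measured = sum(true_length + random.gauss(0, 0.01) for _ in range(n_segments))
drift = measured - n_segments * true_length
print(f"after {n_segments} segments, the endpoint is off by {drift:+.3f}")
```

The individual errors are tiny, but they rarely cancel; the expected drift grows roughly with the square root of the number of segments, which is why surveyors close their traverses and check the misclosure.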


The same thing happened on my second try. On the third try, I actually got some lines drawn, but other lines had the same problem. I didn’t want to erase them because I could never make the paper look fresh again, so I started over one more time. This was only a one-hour class. Four days later, everyone else was on their third assignment. I still hadn’t turned in my first. It still wasn’t perfect, but it was getting close.


As I tore off yet another attempt, the teacher took notice. He came over, set a piece of paper on my board and quickly taped it down. Then he grabbed my hand and drew a line with a triangle, not even using the T-square. Next he rotated the triangle, grabbed my hand to draw another line. In about five minutes the drawing was complete. He pulled it off the board, put a big “C” on the top and threw it on his desk. My first drawing was done. 


I did OK with the other assignments and ended up with a “B” in the class, but I remained forever disillusioned about creating that perfect drawing. This was of course an example of perfect being the enemy of good. Or a demonstration of my teacher’s right-mind casually stepping over the towering paradox created by my left-brain seeking perfection. 


For me, physical expression was a challenge. I tended to live in my head, a theoretical place. From the drafting class, I learned that perfection in the real world is an illusion, and ultimately a fool's errand; accuracy comes only by degrees. But by degrees is never perfect. Another paradox. It’s one reason I was drawn to the digital world. Somewhat later, when doing logic design or coding, I could make things perfect. Or seemingly so.


Cargo Cult 


You may have heard that there were isolated Pacific islands invaded by American soldiers during World War II. The native people watched these Americans create long, flat, and hard runways from coral and steel. Then large airplanes landed on these runways and disgorged all sorts of rectangular boxes. These plywood boxes contained all kinds of weapons, food, equipment, and tools - the stuff that makes an army function. Of course the natives got some of this opulent stuff in trade for helping the soldiers in various ways. And lots of empty boxes were left over.


A couple of years later the soldiers loaded up most of their stuff and flew off never to be seen again. When other westerners visited these islands years later they found that the natives had used some of these empty wooden boxes to fashion crude “airplanes” which didn’t actually fly but were an attempt to encourage the real airplanes with all their cargo to return to the islands. These natives became known as a cargo cult, and similar behavior has shown up in various native people around the world at different times and in various ways.


The behavior is of course known as mimicry, one of evolution’s most powerful tricks, which is why it’s so often applied. The more modern and technical term is simulation. Simulations are common in our hyper-modern digital world. Minecraft, Rec Room, and Roblox are more current and more vivid digital examples. Each allows the user to create perfect virtual worlds where they can control the outcomes in various but actually imperfect ways. One need only note the blocky features of these creations and the lack of intimate connection that we find in live theater. It's a cargo cult result, but quite attractive to our left-brain, as Dr. McGilchrist suggests in "The Divided Brain".


Bizarro Logic


When I was the age to enjoy Roblox or Minecraft, the toy store had Lincoln Logs, Tinker Toys, and Erector sets. For my sons, it was Legos and Warcraft. I too enjoyed Populous and Polytopia once computers became common. These too are simulations, just as are dolls representing people, or even a simple wooden stick that can become a rifle when you have the right frame of mind, and most importantly, the ability to suspend disbelief.


As a kid, one of my favorite simulations was comic books. They inspired complete worlds, many quite different from our own. One of the Superman subplots was something called Bizarro World where everything was expressed in crude form and everyone did the opposite of what was normally done back on Earth.


Bizarro people were ugly, frustrating, and even mean - typically sinister, and just the thing a young boy likes to explore. This place was kind of a contra-earth where everything was clunky, inverted, inside out, and backward. The actual planet was even a cube, well, once the organic Superman got done with it. Don’t ask about the geo-dynamics. Nothing worked like it did on Earth. This place was frustrating to think about, which is what made it fun. It was one of my favorite Superman venues.


Irony flows from trying to be perfectly contrary in multiple aspects at the same time. As you might imagine, the writer's many attempts quickly lead to paradoxes. And so it is with logic and technology in our analog world. It may sound strange, but for me, computer technology and “perfection” have become a clunky and rigid Bizarro World version of the brain based on logic, as opposed to biologically authentic organic intelligence and intuition. 


Analog electronics become Bizarro digital with their consistent square waves. Signals exist in the moment; states are more persistent. These signals are forced into clocked synchronization, in contrast with our biologically asynchronous reality. Simulation is managed using states instead of the signal-based simulation which biology has evolved. For me, Bizarro is the world of technology, not unlike a cargo cult in comic form. Ironically, at times technology works much better, thus airplanes, computers, and speedboats.


So what do cargo cults have to do with the Bizarro World and computer logic? They are both forms of mimicry somewhat imperfectly implemented to create the illusion of perfection. But brains are not logical. They are biological. Logic is a crude subset of biologics, as information is a crude and rigid subset of knowledge. Words are only a rough approximation of organic knowledge. Analogical is an easier way of bridging the differences.


Since logical systems will ultimately be useful in understanding and validating all the tricks evolution has created, let’s explore logic as the people of Bizarro World might. I’ll now describe the logic of a very simple example of homeostasis, a trick of evolution that finds form in biological “systems”.


Robots are Bizarro Humans


There are two main types of flying protocols, at least for humans in airplanes - VFR (Visual Flight Rules) and IFR (Instrument Flight Rules). In the first case, the pilot in command is responsible for seeing and avoiding other aircraft. In the second case, air traffic controllers are responsible for keeping all aircraft separated. At times they will try to offload this responsibility by noting the relative location, speed, and direction of other traffic. If the pilot acknowledges this traffic, the controllers can go on to other work. The pilot then comes under VFR rules for at least that specific encounter.


When I was training for my instrument flight rating, like every IFR student I had a plastic hood over my eyes so that I couldn’t look outside the cockpit, forcing me to rely only on instruments. My flight instructor worked the radio as needed. I remember one early lesson departing the Reno control area when departure control called traffic, “United heavy, eleven o’clock, 7 miles.” 


Out of “reflex”, I tried to look up, but my instructor swatted my hood and admonished me to track the instruments. He radioed back, “looking”. After a few seconds, he said, “ah, most of them miss us anyway”. I of course found this humorous because positive air control and safety require seeing and missing EVERY one. Perfectly. But as noted by my instructor, that doesn’t always happen, and you have to call back, “negative traffic” so that Control can vector you to a safe path.


There’s actually a very important trick to spotting air traffic. If you see the traffic moving within your visual frame of reference, you’re not going to collide. It’s the ones that don’t move within your field of view that you have to worry about. The process is dangerously counter-intuitive. Or is it counter-logical? Is that another airplane on a collision course? Or just a bug on the windshield?


Here’s another way of “looking” (pun intended) at this perception problem which I’ve encountered several times in various books. I’m not sure who first used the example so you’ll get a mixed-up version. The example is described like this:


In baseball, how does an outfielder catch a fly ball? If you’re an engineering student and you had to build a robot to accomplish the task, you might have the robot look at the fly ball long enough to determine its direction and velocity as a vector, then calculate the parabolic curve of the ball in flight considering Earth's gravity and have your robot proceed to that location. This might seem like a reasonable solution but it’s not what a human does. The robot approach is actually the more Bizarro method.


A human outfielder will look at the ball, noting changes in displacement within his visual field. He will then begin moving in a way that decreases that displacement dynamically, which of course means that the two objects (human and baseball) move into a collision course. The outfielder then simply raises his glove to stop the ball from hitting him. This is the more biological method and baseball players do it without thinking. We might call this method, “subconsciously normalizing ball displacement in a visual field.” It’s just one of a million tricks baseball players come to learn through experience, and not in a classroom. 
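For the technically curious, the contrast can be sketched in a few lines of Python. The one-dimensional setup, the numbers, and the simple "chase the apparent offset" rule are my illustrative assumptions, not the actual heuristic players (or real robots) use:

```python
import math

G = 9.81  # m/s^2, gravity

def robot_landing_point(speed, angle_deg):
    """Open-loop 'robot' method: measure the launch vector once,
    compute the parabola's landing point, and run straight there."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    return vx * (2 * vy / G)  # range of a projectile from ground level

def outfielder_chase(speed, angle_deg, start_pos=30.0, dt=0.01, gain=5.0):
    """Closed-loop 'outfielder' method: never predict a landing point,
    just keep moving so the ball's apparent offset keeps shrinking."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    t, fielder = 0.0, start_pos
    while True:
        t += dt
        bx, by = vx * t, vy * t - 0.5 * G * t * t
        if by <= 0:                              # ball has landed
            return fielder
        fielder += gain * (bx - fielder) * dt    # chase the offset, not the math
```

The robot computes the answer once and runs open-loop; the outfielder never computes a landing point at all, yet ends up standing in roughly the same place.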


Early robot engineers spent months duplicating this ability in a far more crude, clumsy, and Bizarro fashion. AI (Artificial Intelligence) now at least seems to be refining this more technical approach into something more organic.


Digital Consistency?


“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.” - Ralph Waldo Emerson, from “Self-Reliance”


If Emerson had written this 150 years later he might have included technologists on his list of the adoring. Consistency is certainly the key to science, if not its most basic requirement. And most digital technology is simply broken without consistency.


This is not the case for neurons. In spite of those beautifully digital ionic waveforms, which are quite consistent in both amplitude and pulse width, neurons seem to have a mind of their own, at least to some degree. But that small degree often makes the difference between a neuron firing and not firing. Even though chemical release at the synapse is an all-or-nothing affair, the magic lies between the "all" and the "nothing"; it's managed upstream of the hillock. Is this actually “a mind of its own”? Not really. The term is generally reserved for a decursively higher-order form of this decision-making ability in the macro brain, but can you imagine each neuron deciding for itself? Keep that thought… in... mind.


Speaking of mind, do you ever hear the word “mind” used to describe what happens within a computer? Not so far. One of the key differences between a neuron and a logic gate is that the neuron knows both if and when to fire. And the neuron does so asynchronously. The “when” can vary dramatically from event to event for all kinds of reasons. The "if," even more so. We’ll explore some of them. 


Though the ionic signals arriving at the next synapse are quite consistent, the resulting chemistry at and after the next synapse is anything but. This is where the analogical magic begins. Neurons seem to produce some kind of normalization of this apparently digital input signal into an analog form for this next neuron.


So what exactly happens between the synapse (along with all the other synapses informing this particular neuron) and the hillock, where a new signal may or may not be triggered? There is some kind of analogical integration going on, but what is its nature? Let’s explore a bit.


This hillock divides the neuron into two realms - the neuron body and its axon. From the synapse to the hillock, the neuron is not only analog but dramatically so in many different ways. Between the hillock and the next synapse, the ionic output signal is quite consistent. It could almost be described as digital. 


The first realm is a bag of magical chemistry that may or may not trigger a new signal to deliver at a distance using the neuron’s second realm, the axon, and its delivering synapses. This second realm is a little easier to understand as it uses a subset of what’s happening in the first realm. We’ll get to it later. And the first? Well, that’s where knowledge is created, but for now, we’re only exploring how digital or how analog a neuron is. At this point, it's important to understand how the brain is unlike the computer. 
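The analog-in, digital-out picture above can be made concrete with a toy "leaky integrate-and-fire" model. This is a standard textbook simplification, a rough stand-in for the two realms, not the author's model of the neuron:

```python
# Toy leaky integrate-and-fire neuron: graded, leaky integration in the
# first realm (the body), an all-or-nothing spike at the hillock.
# Threshold and leak values are illustrative assumptions.

def run_neuron(inputs, threshold=1.0, leak=0.9):
    """inputs: per-timestep synaptic drive (positive = excitatory,
    negative = inhibitory). Returns the spike train sent down the axon."""
    potential, spikes = 0.0, []
    for drive in inputs:
        potential = potential * leak + drive   # analog realm: graded and leaky
        if potential >= threshold:             # hillock: fire or don't
            spikes.append(1)
            potential = 0.0                    # reset after the spike
        else:
            spikes.append(0)
    return spikes
```

Note that the output is consistent in form (every spike identical) while everything upstream of the threshold is continuous, which is the asymmetry the surrounding paragraphs describe.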


In Bizarro fashion, computers are digital everywhere. Brain states are described as being turned on and off like a toggle switch. Computer logic and memory are perfectly consistent. But in the brain, neurons are only consistent enough to be called digital in the axon. Everywhere else, the neuron is mystically analog and frustratingly inconsistent. "Switching" in the brain is actually one neural layer gaining control over another as they compete and cooperate, or more generally shift their macro-chemistry using hormones as temporally extended signals.


For those who are more technical, those repeated ionic spikes from a given neuron may appear to be a pulse-coded modulation of some sort, and that may actually be the case, but not in the normal digital sense. These pulse trains are more likely an artifact of analog priming than actual coded values of some sort. 


I too was captivated by the beauty of these waveforms as a teenager. But the closer I looked, the more problems I found with a digital interpretation. Still, I ignored these problems for decades. Slowly I began thinking of the neuron as having analog inputs producing a somewhat digital output. But that didn’t help much. It was all quite frustrating. Ultimately, even this model broke down leaving consistency only by degrees, and signals in contrast to states. Or something like that.


It took me years to discover that this conflict was completely resolved with a change in perspective from objective to subjective when dealing with the neuron itself. This was of course a form of subjective anthropomorphization. With this new tool, the relationship between a neuron’s inputs and output began to make a lot more sense. Once we yield our objective perspective, neurons become even more consistent, but never actually digital. Let it go, Grasshopper.



Asynchronicity


When I started college, microprocessors were yet to be invented. The  Altair, Apple, and of course IBM PC computers were still years away. The relatively few computers that existed were either mainframes or minicomputers. Our modest College of the Redwoods didn’t have a computer anywhere on campus, just an 80-column card punch and sorter. I first learned the computer language BASIC on a Teletype connected through a 300 baud modem to an HP minicomputer at Berkeley, California. This Teletype was isolated in a storeroom in the physics lab because it made so much noise.


When I signed up to learn FORTRAN, we had to literally create an 80-column card deck which was then driven to another campus. It took one or two days to get the result. One comma in the wrong place and you had to wait 48 hours to discover your error. You might say the learning experience was very loosely coupled in time, and virtually useless. Though it may have been loud, the response from that Teletype was almost immediate. I dropped the FORTRAN class. BASIC was similar, or at least close enough for the work that I was doing. I much later picked up FORTRAN as needed for specific projects after our campus got its own minicomputer. As one of the few computer geeks on campus, I helped install and manage this new HP minicomputer when it arrived. Direct-connect screens dramatically improved the learning experience, and cut the noise level as well.


In the early 1970s, almost everything about computers was learned from stapled Xeroxed pages of schematics, flow charts, or source code. Then the Altair made the cover of Popular Electronics. Soon books were published about the architectures of microprocessors. I’d already designed an ALU in high school, but this was at a whole new scale. I remember discovering the first issue of Byte at a stereo store. Things happened quickly after that, but digital electronics was still a very new field of study. I helped a close friend who was part of the faculty at College of the Redwoods define his first curriculum for teaching TTL (Transistor-Transistor Logic). We’d spend hours late at night in his empty classroom debating the best way to design and present the ideas behind electronic logic.


I had this theory that computers could be much faster if the logic were simply asynchronous and didn’t have to wait for the standard clock signals. Some of these ideas were used years later when I designed the Sage computer which I documented in another blog post. Other aspects were applied to understanding the neuron. You might say that waiting for a clock signal in TTL is a form of artificial or forced synchronicity. It happens on a grand scale in virtually all computers. But it has a cost in time efficiency.
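The cost of forced synchronicity can be sketched crudely in Python: a clocked system pays for every element on every tick, while an event-driven (asynchronous) system pays only when a signal actually arrives. The bookkeeping here is my own illustration, not a model of any real chip or brain:

```python
import heapq

def clocked_steps(duration, n_elements, tick=1):
    """Synchronous: every element is evaluated on every clock tick,
    whether or not anything changed."""
    return (duration // tick) * n_elements

def async_steps(events):
    """Asynchronous: work happens only when an event arrives, handled
    in time order no matter how the list was given."""
    heap = list(events)          # (time, element) pairs
    heapq.heapify(heap)
    steps = 0
    while heap:
        heapq.heappop(heap)      # process the next event, whenever it is
        steps += 1
    return steps
```

With 100 elements over 1000 ticks the clocked system does 100,000 evaluations even if only three events ever occurred; the asynchronous version does three.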


Neuron communication is asynchronous, at least in most cases. Again, this assertion will challenge many of the more technical. Brain waves clearly show a great deal of what appears to be synchronicity in the brain. But this is largely the effect of brain operation, not its cause. The point is, brain “processing” is not driven by some synchronizing clock signal. It’s normally not even a synchronized process. The illusion of synchronicity is an artifact of parallel competing neurons cued by the same experience from the world. Brain waves just seem synchronous.


For instance, a given neuron may cue 37 different neural scripts, but only one (or none) may actually invoke physical movement as the 36 others are inhibited by various other cues in the “group”. It may seem that an ensemble of neurons is responding to some stimulus by coming together in an apparently synchronized fashion, but the very opposite is actually the case. Cause and effect seem inverted compared to a computer. Ultimately, a single neural script might induce the movement of a given muscle resource even though it may seem like many more were involved.
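The "37 scripts, one winner" idea can be sketched as a winner-take-all competition. The inhibition rule here, each script silenced by its strongest rival, is an assumption chosen purely for illustration:

```python
# Toy winner-take-all: many scripts are cued at once; mutual inhibition
# silences all but (at most) one. The rule and numbers are illustrative.

def winner_take_all(activations, inhibition=1.0):
    """A script survives only if it out-competes its strongest rival.
    Returns the indices of surviving scripts: usually one, or none."""
    winners = []
    for i, a in enumerate(activations):
        strongest_rival = max(x for j, x in enumerate(activations) if j != i)
        if a - inhibition * strongest_rival > 0:
            winners.append(i)
    return winners
```

With 36 weakly cued scripts and one strong one, only the strong one runs; with an exact tie, nothing runs at all, matching the "one (or none)" behavior described above.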


Ironically, it’s also the asynchronous synchronicity of inputs from the world that is at the heart of one of the neuron’s first and most effective tricks for creating knowledge. Does that sound like a paradox? Good. Now relax your mind. The concept won’t even be needed until I describe how neurons come to know a thing - asynchronously.



Parallel and Serial


At their heart, computers are inherently serial and single-tasking. Notwithstanding superclusters of identical GPUs and some modest success with “multi-core” and “neural net” functionality, most computers still operate in a highly serial fashion, as does each element in these clusters. When controlled by a single clock signal, computers mostly do one thing after another. But they do it so quickly, they seem to be multiprocessing. This creates the illusion of doing many things at once. In general, computers simply don’t work well in parallel, largely because they are constrained by tight synchronization and identical processing cells, a kind of left-brained mass production of logic.


In contrast and as noted earlier, the operation of the brain is profoundly parallel and multifaceted, becoming more serial as behavior is delivered. The contrast is also vividly and visually apparent from the right to the left side of our brains. Left-brained language is more serial. Right-minded visualization is more parallel. The right also has more parallel connections, showing up as white matter; the left, more sequential neuron connections, showing up as gray. Even so, both sides are more parallel near the sensors and more serial near the muscles along each neural pathway.


Brains literally do many things at once, and these many “facets” seem to accomplish this remarkable parallel operation without getting in each other’s way. At least most of the time. One of the reasons that computers as described above struggle with parallel multiprocessing is managing contention resolution - simply which processor (or logical function) is in control at any given moment. This problem is exacerbated by forcing synchronicity from a single clock.


In contrast, neurons resolve this contention issue in a more asynchronous, finely-grained approach, literally at the neuron level. Each neuron resolves contention each time it fires. How this works has been one of my biggest personal challenges for decades. How this happens between the left and right brain was what initially inspired me to test the model deeper and led me to discover how it was managed in other parts of the brain, and ultimately to the neuron itself where I found something very interesting about control and consent, which I’ll share shortly.


The important part for now is, this profoundly parallel architecture is the key to the brain’s resilience and graceful degradation, meaning when one part fails, most of the rest of the brain keeps functioning in fairly normal fashion. Analysis of patient stroke data is a vivid demonstration of this resilience which is mostly lacking in computer architecture.


Paradoxically, brain architecture is not just parallel, it’s both parallel and serial at the same time. Steps along a neural pathway or steps in a dance are both obviously serial, but knowledge converges in a parallel fashion. Simulations in the brain start out parallel near the sensors and become serial at the muscles. This is the Zen opposite of a computer which is inherently serial and struggles to accomplish much of anything in a parallel fashion. Is this the Zen of Tao, and the Tao of Zen?


Don’t Hardwire the Zen Nature of Memory 


As a consequence of the difference between electronic and ionic, we also need to ignore the copper wire metaphor and think in more biological terms. Instead of wires, neurons typically rely on leaky hoses called axons, moving ions around in a cauldron of chemistry. These fibers deliver signals from one place to another but are also influenced by this ionic stew of chemistry. Communication in the brain happens in multiple mediums, and in multiple ways, some broadly chemical, others relying on necessary cellular isolation. Only the neuron’s axon, with its “insulation”, even appears similar to an insulated wire, and that’s just an illusion; axons are not similar to metal wires in any way. And the connections, as noted, are often changing, not hard-wired at all.


Nothing in the brain is hard-wired, not even what we’re born with. Our first couple of years are dedicated to pruning what we don’t need based on our initial experiences, or the lack of them. What’s left forms a very sparse framework representing experiences during our first few years of life. From there, new and more subtle connections are made over the rest of our lives in various critical phases of learning. This “softwired” metaphor can be a little difficult to understand at first but is a very useful concept.


Our left-brain prefers to work with things that are fixed or at least change in a predictable fashion. “Hard wired” implies a predictable result. Consistent “states” are one way to describe such things. But in the brain nothing stays the same. Each time a neuron fires, it may adjust how much ionic tension is required from any given input signal to induce the neuron to fire the next time. There is no fixed logical relationship between the input and output of any given neuron. But there are analogical ones. Any apparent “states” in the brain are a high-level artifact (or illusion), as is human memory itself. Everything in the brain is plastic by degrees and in critical phases. It’s just a matter of when and how it changes. Like the world, nothing is fixed.
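Here is a minimal sketch of "nothing stays the same": a neuron that adjusts how much each input counts every time it fires. The Hebbian-style update rule is a generic assumption for illustration, not a claim about the actual mechanism:

```python
# Toy plastic neuron: each firing changes the weights, so there is no
# fixed logical relationship between input and output over time.
# The update rule, rates, and threshold are illustrative assumptions.

def fire_and_adapt(weights, inputs, threshold=1.0, rate=0.1):
    """Returns (fired, new_weights). Inputs that helped cause a spike
    are strengthened; on a miss, weights drift slightly downward."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    fired = drive >= threshold
    if fired:
        new_weights = [w + rate * x for w, x in zip(weights, inputs)]
    else:
        new_weights = [w * (1 - rate / 10) for w in weights]
    return fired, new_weights
```

Run the same input twice and the second run meets a different neuron: the relationship is associative and by degrees, not a fixed truth table.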


There are no hard wires. Unlearn them. The "circuits" in the brain are not electrical signal circuits, nor power circuits, which got their name by completing a circle from the battery back to the battery. There are no such circuits in the brain. Neural pathways instead start at neural sensors and converge down to scripts of muscle movement. The only circle they complete is a loop with the world, various internal feedback systems, or possibly the micro context of imagination, where physical looping of neural pathways is more likely to occur. These pathways are better described as dynamic neural pathways, which start out quite flexible and only become fixed by degrees and in phases over time. Or not. But forget electrical or electron "circuits". They don't exist in the skull.


Memory as suggested by information theory will be of little use in any model of the brain. The concept is a logic trap. The neuron does not store “states”. And “muscle memory” is not memory at all. Instead, the brain has an alternative way of simulating the world by using signals dynamically, yielding a reconstruction of the past. Is it memory? Not in a technical sense, but it can produce similar results in some cases. For now, it’s best to let go of the concept of memory altogether. I’ll address the topic in more detail later on.


“Cause and effect” nicely describes what happens between a motor neuron and the muscle it controls, but less so as you evaluate the connections back along the neural path towards the sensor. While “determinant” may apply to this last connection before movement is invoked, it’s less true of each step that precedes it. And in a mathematical sense, not much at all.


Between most individual neurons, “cause and effect” correlates far more weakly than mathematical logic would require. And for the brain in a macro sense, very little. This is a very hard thing to unlearn, but critical to understanding the nature of the neuron. For now, relax your sense of a hardwired or consistent connection between most neurons. Consistency occurs by degrees. Think instead in terms of dynamic associative probability at each junction made up of multiple synapses. The quantification of the meaning of any neural input is controlled by the receiving part of the synapse, not the transmitting side. Or something like that.


Instead of hardwired, the brain is sort of soft-wired, where the contrast between hardware and software is a useful comparison. In a computer, a logical function may be expressed in hardware or software, but hybrids are difficult. In the brain, analogical functions range between these limits as synapses are formed, upregulated, downregulated, or decay through atrophy over a lifetime. Think biology, not copper wires.


Fixed by Degrees


“The moving finger writes; and, having writ,

Moves on: nor all thy piety nor wit

Shall lure it back to cancel half a line,

Nor all thy tears wash out a word of it."

    - From The Rubaiyat of Omar Khayyam


As for writing, the craft has changed dramatically since Omar’s time nine centuries ago. Ink and paper were expensive and valuable tools then, so it was important for writers to carefully choose their words before fixing them on paper. Or by actions in their lives.


Things are different today. We're no longer limited to quill and ink. We edit with impunity, changing content willy-nilly in electronic form as I’m doing with this blog post right now. Even on paper, (which we discard by the millions of tons each year), we sometimes reprint versions of our work every few minutes until it looks just right in physical form.


For decades I’ve preferred electronic dashboards over paper because of the advantages of their more dynamic nature. I used to admonish subordinates for bringing me reports in printed form, not just because it killed a tree, but because “paper freezes disembodied information”, decreasing its flexibility even as it logs a more permanent history.


When I designed and coded my text editor, Sudden View, I purposely left out the print function just to keep the content more flexible. It frustrated some of my customers, but I never relented. The point is, information is permanent only by degrees, depending upon the medium in which it’s stored, ranging from being whispered in your ear or written on paper with a quill and ink, to carved in stone at the base of a building.


The same can be said for knowledge, even before it finds form in physical expression. Knowledge has to exist in the mind before it can take physical form as information, whether in spoken, electronic, or written form. The permanence of knowledge is by degrees, even in the mind.


The Surprisingly Subjective Neuron


In the gas crisis of 1974, President Nixon asked for and got a national speed limit of 55 MPH. After two decades, and in an attempt to have it repealed, one of the legislators from Texas noted that “there are parts of west Texas that if you set out at 55 miles per hour, you’ll never arrive.” 


Obviously, his assertion is false in a mathematical sense, but his humor helped get the law repealed, so it ended up being quite meaningful, at least on the Senate floor. A vehicle going at a certain rate on a west Texas road can be calculated to arrive at a specific time. But if you had to actually drive the course, at each moment along the highway, it might seem to be taking forever. The left-brain deals more effectively with the abstraction of time and its calculation. The right-mind lives in the moment and is frustrated by the lack of arrival, as noted by the childish refrain, “are we there yet?” Laughter in the above legislative case flows from a type of race condition between the two sides of the brain. Or it doesn’t, depending upon the individual. Some people have no sense of humor.


So which side is correct? Which is true? It largely depends upon whether you seek an objective answer or a subjective one. Since the law WAS repealed, subjectivity won the day. Something similar happens not only with the macro brain but also with the neurons that compose it.


Cues and Scripts versus Stimulus-response


"Cues and scripts" are the subjective alternative to the more objective, "stimulus-response" model that has been popular for more than a hundred years. Unfortunately, stimulus-response relies on a determinant model of the world where every effect has a cause. But in the context of the neuron, it's often not true; or at least the cause can not be easily determined bringing the effect into question. Think of the contrast in the macro context. Why do people do absurd things such as murder their own children? Such decisions start with a neuron, and what they come to know, subjectively. Can such behavior ever be rationalized? Yes, depending upon what the subject comes to know about the event and how critical it is in their life.


“Objective” is an emergent abstraction of our more recent Bizarro culture. It requires at least two people to agree upon a thing held apart from both, typically described as information. Computers are a higher-order form of such information management and so are inherently objective in nature, being set apart from any single individual.


In contrast, the brain and each neuron in it are by nature, inherently subjective. Neurons only come to know what is delivered to them in the form of chemistry at each synapse. The important difference between objective and subjective when comparing computers with the brain is a matter of who is in control of what and when. Let’s explore an example using logic gates.


Electronic logic requires perfect consistency in its evaluation of input signals, and it always yields a determinant result, a result that will stay the same no matter how many times you apply the inputs. (Well, at least unless one or more of the inputs is a random number, but this is an edge case that we can explore later.) It could be argued that the inputs control the outputs in most cases. If such inputs come from the world, then a consistent stimulus should produce a consistent response. This is how most neuroscientists come to understand logic, and also how they try to evaluate the brain. But they typically fail. The reason is that the sovereignty of control does not flow from the world but is literally created within the neuron. The easiest way to understand this control is that neurons are subjective as opposed to objective. And so is the brain. Hopefully, the next section will help clarify this a bit. Just be ready to understand the neuron subjectively.
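To make the contrast concrete, here is a gate whose output depends only on its inputs, next to a toy neuron whose answer also depends on its own history. The "fatigue" rule is purely illustrative, an assumption standing in for the many ways real neurons carry their past:

```python
# A logic gate is objective: same inputs, same answer, every time.
def and_gate(a, b):
    return a and b

# A toy "subjective" neuron: an identical stimulus can get a different
# answer, because each firing changes the neuron itself.
class ToyNeuron:
    def __init__(self, threshold=1.0, fatigue=0.3):
        self.threshold, self.fatigue = threshold, fatigue
    def stimulate(self, drive):
        if drive >= self.threshold:
            self.threshold += self.fatigue               # history raises the bar
            return True
        self.threshold = max(1.0, self.threshold - 0.1)  # slow recovery
        return False
```

Apply the same stimulus twice and the gate never changes its mind; the neuron may, because control lives inside it rather than in the inputs.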


Finally, Analogical Signals versus Logical States


Logic is involved in approximately half of high-level thinking in a macro context. The other half is intuition. Without defining logic in detail, I’m going to assert that logic is a mathematical tool for not only reasoning, but also the basis for validating all control and computer systems. Logic is also a tool created by the world of the Bizarro Cargo Cult of technology. Logic is all or nothing, 0 or 1, true or false. There is no middle ground with logic. But we live much of our lives in that middle ground between true and false. Logic deals only with the limits, literally the edge cases in a very different sense of that term.


Logic gates are electronic devices connected by copper pathways, used to control deterministic systems, but they are of little use in the more flexible and dynamically analog world of biology. Fortunately, there is a bridge between these two worlds, best described as analogical.


The objective here is to contrast logic with what happens within the neuron and the brain in general - analogic. For now, I’ll focus on neurons in a nano context. Later we can decursively apply most of these ideas to the brain in a macro context. As you might guess, this will not be an exercise in reasoning. It will be an intuitive quest to understand neurons as I have suggested above.


Like the computer, which is its crowning achievement, logic is digital, consistent, fixed, objective, and deterministic. As I've noted, the biological neuron has these characteristics only by degrees, or in many cases can be described in ways that are the very opposite. I have presented many examples above. Since signals standing out from noise are the key to information theory, I'm going to summarize how I reached that almost opposite conclusion. Sometimes noise has utility. Or something like that.


Coming from a technical perspective, I initially assumed that the neuron was logical. At least some observations can be described that way, but the exceptions start early and become the rule, ultimately overwhelming the thesis that neurons have what's needed to be logical. I'll now focus on what caused many of those exceptions - the exceptions that literally changed my mind.


Neurons sense the world and create signals that somehow sparsely encode the important parts of what they discover. These "digital" signals are passed on to other neurons across synapses in the form of chemistry. Roughly half of these connections tend to activate the follow-on neurons, and the other half inhibit such activation. That is an important clue. Muscles are arranged and controlled in a similar fashion - the opponent processing noted by Sir Charles Sherrington.


There are several hundred skeletal muscles in the human body. Most are arranged in pairs allowing movement in both directions. These muscles both compete and cooperate to achieve gross displacement and fine motor control.


Decursively, the neurons that control these muscles also compete and cooperate by applying activation and inhibition in the micro context. Even within the neuron, in the nano context, some synapses tend to activate and others tend to inhibit. Again, the architecture decursively allows for competition and cooperation, and for a similar reason.


What’s extraordinary, at least from a technical perspective, is that a given neuron will have synapses that both activate and inhibit the very same following neuron! If you’re a technical person, think about this assertion for a moment. Why would one neuron try to both activate AND inhibit the next neuron? These synapses would cancel each other out, at least in a digital sense. When that first neuron fires, the result is null. Nothing happens in the second neuron. Activation cancels inhibition. At least if these signals are truly digital. Logically, it simply makes no sense. Analogically, it might.


If these two neurons had more than these two opposing synapses - say 3, 17, or 54 - the connection is no longer digital. It becomes analog, with a value reflecting the RATIO between activating and inhibiting synapses. Poof! The resulting signal transforms from digital into analog. The connection between these two neurons can now up-regulate or down-regulate the significance of any given digital signal by changing the ratio of activation to inhibition, allowing it to both compete and cooperate in this nano context.
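Here is a minimal sketch of that ratio idea in Python. It is an illustration of the analogy, not a biophysical model; the function name and the synapse counts are invented.

```python
# Illustrative sketch (not physiology): one presynaptic neuron
# contacts the next through many synapses, some activating, some
# inhibiting. The RATIO of the two turns a single digital spike
# into a graded, analog influence on the following neuron.

def net_influence(activating: int, inhibiting: int) -> float:
    """Net effect of one spike, as a value in [-1.0, +1.0]."""
    total = activating + inhibiting
    if total == 0:
        return 0.0
    return (activating - inhibiting) / total

print(net_influence(1, 1))    # 0.0  -- equal opposition: the null case
print(net_influence(17, 3))   # 0.7  -- mostly activating
print(net_influence(3, 17))   # -0.7 -- mostly inhibiting
```

Changing the counts on either side re-tunes the connection without changing the digital spike itself - the up-regulation and down-regulation described above.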


When you think of the resulting ionic tension created by this form of connection, the dendrites become a digital-to-analog converter (D/A), and the hillock becomes an analog-to-digital converter (A/D), all within the body of a single neuron. And when you introduce a second source neuron at a different dendritic spine, a type of analog logic becomes possible, but only by degrees. And that's just one of evolution's decursive tricks. Here's another that's a bit easier to understand.
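The D/A-then-A/D picture can also be sketched as code. Again, this is only the analogy made concrete; the weights and threshold below are invented, not physiology.

```python
# Sketch of the D/A -> A/D analogy: the dendritic tree sums many
# weighted chemical inputs into one analog level; the axon hillock
# then thresholds that level back into a digital, all-or-nothing
# spike. Weights and threshold are hypothetical.

THRESHOLD = 0.5  # invented firing threshold

def dendritic_sum(spikes, weights):
    """D/A stage: many digital spikes in, one analog level out."""
    return sum(s * w for s, w in zip(spikes, weights))

def hillock(level: float) -> int:
    """A/D stage: analog level in, digital spike (0 or 1) out."""
    return 1 if level >= THRESHOLD else 0

spikes  = [1, 1, 0, 1]            # digital inputs at four spines
weights = [0.5, -0.25, 0.3, 0.5]  # positive activating, negative inhibiting
level = dendritic_sum(spikes, weights)
print(level, hillock(level))      # 0.75 1 -- enough net drive to fire
```

Shift one weight from activating to inhibiting and the same spikes fall below threshold - competition and cooperation inside a single cell body.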


What happens when you get angry? That's right. Your mood changes. This change is largely mediated by various hormones; I won't bother getting into the detailed chemistry. The point is, your mood can be thought of as putting you into a different mode of operation in a macro sense. And also in a micro sense. And finally, in a nano sense. That's right. Shifts in ambient macro chemistry affect micro- and nano-connections, allowing the analogical equation to change - some form of fight or flight is the likely result. This too is an evolutionary trick, something that helps keep us alive - adaptive chemistry yielding a form of analogic.
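As a sketch, "mood as mode" might look like a single ambient parameter that re-weights the very same connections. Everything here - the function, the `arousal` knob, the numbers - is invented for illustration.

```python
# Hedged sketch of "mood as mode": an ambient chemical level acts
# as a global parameter that re-weights the same connections, so
# identical inputs yield different outcomes in different moods.
# All values are invented for illustration.

def response(drive: float, inhibition: float, arousal: float) -> float:
    """Arousal (a stand-in for ambient hormones) boosts drive and
    weakens inhibition, tilting the whole system toward action."""
    return drive * (1 + arousal) - inhibition * (1 - arousal)

calm  = response(0.6, 0.5, arousal=0.0)  # near balance: ~0.1
angry = response(0.6, 0.5, arousal=0.8)  # same inputs, new mode: ~0.98
print(round(calm, 2), round(angry, 2))   # 0.1 0.98
```

The inputs never changed; only the ambient chemistry did - yet the analogical equation now favors action.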


I'll now tease you with this one paragraph on the topic of the "quality" of analogic: its "ANDness," its "ORness," or how "naughty" (tending to invert a value) it might be at any given instant. The very idea invalidates the all-or-nothing nature of a signal, and indeed the very nature of logic as we've come to know it. At the same time, it allows for applying logic by degrees. I'll stop here. That's a useful hint for now.
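For readers who want a concrete handle on "logic by degrees," fuzzy logic is one existing formalism that behaves this way - offered here as an analogy for the ANDness/ORness idea, not as a claim about neurons. Truth values live anywhere in [0, 1]:

```python
# Fuzzy logic: truth by degrees, using Zadeh's classic operators.
# Values range over [0, 1] instead of the crisp set {0, 1}.

def f_and(a: float, b: float) -> float:  # full "ANDness"
    return min(a, b)

def f_or(a: float, b: float) -> float:   # full "ORness"
    return max(a, b)

def f_not(a: float) -> float:            # the "naughty" inversion
    return 1.0 - a

# A signal can sit anywhere between true and false:
print(f_and(0.7, 0.4))  # 0.4
print(f_or(0.7, 0.4))   # 0.7
print(f_not(0.25))      # 0.75
```

With crisp inputs (0 or 1) these operators collapse back to ordinary Boolean logic, which is one way to read "applying logic by degrees."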


You can probably see how quickly things become complex when trying to understand what's happening in neurons according to Boolean algebra. And this is only describing two of nature's tricks. There are many, many more. And one need not preclude another.


OK, I'll give you one macro example of what I mean by analogical. Our right mind is more likely to reason by analogy, simply by making comparisons to similar events in our history - in other words, by metaphor. This works reasonably well, but when you also cooperatively reason by logic a la the Socratic method, the blend can produce remarkable results and be rightfully described as analogical wisdom. It's when things get out of balance that results become either really bad or really good, reflecting McGilchrist's thesis about our left brain. But there's also the possibility of insight or epiphany. I realize such thinking can quickly lead to paradox, but also, in a few cases, to breakthroughs.


Before we end up down a rabbit hole: forget everything I've written in this post if you like, but consider the possibility that the neuron is not electrical in nature, and that its ionic aspects are mostly internal to the neuron; that the brain is not built of logic gates, though it may have analogical aspects; and finally, that the mind is not a computer. It's something far more powerful and elegant. Think of neurons as magical devices that create a bit of knowledge and then deliver a chemical signal at a distance to any other neuron that might be able to use that knowledge to help survive and replicate. Pretty simple, right?



Top-down or Bottom-up?

Before I end this post, I want to clearly state an objection to the typical approach to modeling the brain: top-down versus bottom-up. So many of the books I've read about the brain start with the cortex - mapping it, imaging it, or modeling it. And yet we still don't have a useful and effective model of the brain. I believe this approach is a big part of the problem, and that it has persisted for largely technical reasons. It's how we do black-box analysis, but in this case the approach is highly distracting, and not in a good way.


Evolution didn't start with a fully formed human brain, nor even with the cortex, which is likely only about 100 million years old - quite recent considering all of evolution. The cortices are literally in the way of deeper exploration. I will here suggest that we simply cut off the top of the skull, take both cortices, and set them aside for a while. This will expose everything below, allowing us to more easily imagine starting from the bottom of the brain and working our way up. We'll get back to the cortex (both of them) in due course.


Now, let’s explore the really fun part, the philosophical detail of a gnostic neuron.


Continued:

