
Wednesday, February 01, 2023

The Gnostic Neuron - Part 4 - Neurons Create Knowledge

 <originally posted on 03-15-22>











Entertaining an Assertion


As should be obvious by now, the prime assertion that lies at the heart of this gnostic model requires only three words to describe in its simplest form:


“Neurons Create Knowledge” 


I’ve repeated it several times at this point. Now we’re going to explore the idea behind the assertion, but not in the way you might expect. Instead of presenting a technical, formal, or logical argument to support this assertion, I’m going to ask you to suspend disbelief for a time and even relax your intuition as I take you along a path I stumbled onto while trying to understand the brain’s analogical nature. If you haven’t yet bothered to consider the consequences of the above assertion, I will now present some of them in a more literal fashion so we can play with them as ideas before accepting them even as hypotheses. Later these ideas may find more permanent form as thesis, becoming less hypo.



Permanence by Degree


“The moving finger writes; and, having writ,

Moves on: nor all thy piety nor wit

Shall lure it back to cancel half a line,

Nor all thy tears wash out a word of it.”

- from The Rubaiyat of Omar Khayyam


I present this quote again for a largely different reason: to show how the right-mind comes to know these words. When I first read the above verse, I didn’t take it literally. As opposed to actual writing, I saw this quatrain as a metaphor for the things we do in our lives that can’t be undone. For me, and I think for most, this quote represents the choices we make, and how these choices affect our future, sometimes in more permanent ways. How are these choices made? Ultimately by the firing of a neuron somewhere. That is pretty well established. What drives this firing? Knowledge, even if neurons don't create it, and even if it has some more mystical source. We express ourselves in writing because of what we've come to know about some topic.


An assertion documents an idea in words; and ideas are informed by knowledge. Before a word is ever written (or even spoken), it exists only as knowledge in someone’s mind, as merely an idea. But once an idea is communicated and understood by someone else, it can be difficult to unknow in an objective sense. Some ideas are more permanent than paper, or even stone. They may even be captured in biology. And biology, with its organic will, can have greater impact than stone.


“Nothing is more powerful than an idea whose time has come.” - Victor Hugo


The quatrain above can also be understood in a deeper sense as to the degree of permanence our choices make in the world. Some ideas are fleeting thoughts. Others are a challenge to unthink, such as the poorly applied electrical metaphors of the brain. Or any idea about knowledge whose time has finally come.


If it hasn’t happened yet, what you’re about to read may profoundly change your life. It certainly has changed mine. Entertaining this model of the neuron and the brain has literally triggered a personal experience of transformation for me. It was something much more profound than I ever imagined it might become. It has been very difficult to describe in words, but its luminance has surrounded me, and still does. The experience shortly took on the nature of an alternate reality and continues to gain conviction. It has now morphed into my normal way of understanding the world. I enjoy each moment of this emerging epiphany.


I now see people as a collection of what they know, essentially the sum of their relationships in the world. This is not such a strange perspective in the macro sense, but when seen from the nano perspective of the neuron, it takes on a whole new meaning. Of course, I cannot truly know another's relationships. But many I can guess by observing how others respond to experiences similar to those I have had. It comes down to what they know versus what I know about any given topic, at any given moment. This of course takes many forms because of personal and subjective history, but it's all interesting. And that’s just a taste of what I’m living right now. I see similar but even more primal knowledge when observing other animal life and comparing them to my own more primal behaviors. Fight or flight takes on new meaning. Neognosticism works on many levels.


What I suggest in the above paragraph might be described as simply empathy. And indeed, most of what I’m living can be presented in other fairly normal terms, but not the whole of it. The experience is both holistic and reductive as both sides of my brain are constantly analyzing these experiences in real time. I’ve gotten to the point that my old Bizarro worldview no longer gets as much attention, and the new one is still finding its legs. I will describe more of this later.


It’s time to offer a formal and final warning - what I’m about to describe can not be unseen; it can not be undone, or innocently ignored once it has been understood. It's very unlikely to once again become unknown. At least that’s been the case for me. I have no regrets. I awake each morning ready to proceed with all due haste. Your mileage may of course vary, but if the result is even close to what I’ve experienced in the last few years, it will certainly change your worldview. So if you like your world just as it is, you should simply stop reading at this point. I realize how delusional these last few paragraphs may seem as I write them, but I’m trying to be as candid, and explicit, as possible. 


In any case, you’ve been warned. If you’re ready,  let’s proceed.



Neurons 101


Before we get to the heart of the matter, this is a good time to quickly review some of the more probable and useful generalizations about neurons. These are things generally accepted by most in the world of neuroscience.


Neurons have a clear input and a clear output, both of which take the form of chemical signals. Inputs occur at the synapses on the dendrites and their spines, which ionically integrate these inputs along with other internal ambient chemistry. At the other end of the neuron, chemical signals are expressed at the synapses near the termination of the axon.


Neurons apply a complex (but not that complicated) form of ionic integration to combine these chemical signals from other neurons to create a signal of their own. I describe these events as chemical (in contrast with electrical) signals, because from a neuron's perspective that is exactly what they are. The effects of any ionic charges are limited to the inside of the neuron. Firing does not create or change states in any neuron, but it does adjust sensitivity, often in an ionically analog fashion, again in contrast with determinant logical shifts.


So far no one has demonstrated how or where any “state” might be stored in a neuron, at least not in any consistent or conventional sense. Even a neuron’s sensitivity to any input can be said to vary from moment to moment, so of course when and why it fires is best described as dynamic rather than fixed.
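

For readers who prefer a handhold in code, here is a minimal sketch of the kind of integration just described: graded chemical inputs are combined with ambient internal chemistry and compared against a sensitivity that drifts from moment to moment, with no state stored between moments. The names and numbers (ambient, the drift range) are my own assumptions for illustration, not a claim about any actual neuron model.

```python
import random

def integrate_and_maybe_fire(synaptic_inputs, ambient=0.1, base_sensitivity=1.0):
    """Toy ionic integration: graded inputs plus ambient chemistry, compared
    against a sensitivity that varies from moment to moment. Nothing is stored
    between calls - only the signal of this moment exists."""
    drive = sum(synaptic_inputs) + ambient                      # combine chemical signals
    sensitivity = base_sensitivity * random.uniform(0.8, 1.2)   # drifts each moment
    return drive >= sensitivity                                 # fire, or stay silent

# The very same inputs may fire on one call and not on the next:
print(integrate_and_maybe_fire([0.3, 0.4, 0.3]))
print(integrate_and_maybe_fire([0.3, 0.4, 0.3]))
```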


Whatever contribution a neuron’s signal produces is expressed as an ionically mediated chemical signal at a synapse near the end of its axon. An obvious question might be, “What is the nature of these signals?” We’ll address that directly.



The Definition of Knowledge


My personal satori actually flowed from the analogical nature of how neurons chemically communicate with each other, and somewhat later, how this process relates to knowledge in a macro sense. But that second step still took me by surprise. The philosophical implications of the prime assertion of the neuron have far more significance than neuroscience might suggest. It is essentially the foundation for the empiricism presented by John Locke, but it also supports rationalism through imagination. That will need to be explored when we get to consciousness. For now I don't want to get too distracted by the deeper philosophical consequences. Still, in many ways, the cultural follow-on aspects totally eclipsed the original more technical neuron work, so I’m going to present this philosophical version first. A review of the definition of knowledge is needed. Or should we define the WORD “knowledge” first? There is a significant difference between experience and the way we label it. Experience creates direct neuronal knowledge. Expressing a word verbally or in written form creates indirect knowledge to cue others on various topics.


Not being an etymologist, I didn’t draw the distinction between a thing and the WORD for that thing, so I glossed over this difference and dove into the epistemology, specifically the definition of the WORD “knowledge”. If you’re not familiar with these two quite different “e-mologies”, now is the time to get a handle on each. We’ll be dealing with both quite a bit.


Ety-mology is the study of the history of words, and how their meanings evolve over time. Episte-mology is the study of knowledge and its underlying origin and etiology (yet another "e-ology" defined as the cause of something, especially in medicine or meaning.) Words are such decursively fun playthings, and the concepts behind these three will become even more playful as we proceed, but let’s deal with the word for knowledge first as written words are our current form of communication.



Justified True Belief?


There’s of course much more on the topic, but serious analysis of knowledge was nicely documented by Plato in his description of a Socratic debate from his Theaetetus of 369 BC. I’ll spare you the tedious arguments and instead present Wikipedia’s summary of Plato’s definition in three words - knowledge is “justified true belief”.


These three words carry a heavy burden and balance three important underlying concepts to yield this definition. “Justified” describes evidence used to support truth. “True” sets knowledge apart objectively, and removes all doubt. “Belief” reintroduces doubt, but then quickly dismisses it with subjective conviction depending upon whether the evidence is justified, making things a bit circularly dependent. The three words together result in a kind of dynamic tension, but definitions in general don't allow for any possible dynamic nature of the experience of knowledge.


Justified by what? And how? Truth at least seems testable in an objective sense. And if true, is knowledge always true without qualification? If so, why do we need the other two words in the definition at all? Finally, “belief” begs the question of subjective conviction. Is knowledge dependent upon who believes? Or is a believer required at all? Is knowledge subjective or objective? Did you hear that tree fall in the woods? Me neither, but I know that it fell. Or do I? This definition of knowledge is all pretty messy, but apparently, the best we have.


In any case, “justified true belief” has pretty much held for most of the last two thousand years - right up until 1963 when Edmund Gettier established that the “true” part is not always a required component - at least not logically. Leaving “truth” behind, we’re left with “justified belief”, an even weaker, less satisfying, increasingly subjective, and yet apparently more accurate remainder. Does this consensus definition now fall over like a two-legged stool? Or was Plato's three-word definition overdue for an update?


And it's not just the definition of knowledge that's been in question for thousands of years, there's also the quality of knowledge, however you define it. What each of us knows varies widely. Most of us end up wrong to some degree most of the time. So much for "facts". We need only refer to what each of us may know about some price in the stock market to understand the validity of this assertion and efficient market theory. And there are so many other examples. I'll let you pick your favorite.


I’m obviously not the first to question the nature of knowledge only to discover this aporia. After all, we wouldn’t have the word “epistemology” if the field weren’t worthy of further study. Being its object of love, knowledge goes to the core of philosophy.  I’m in no way qualified to debate philosophy, but I’ll shortly offer a right-minded alternate definition for the experience of knowledge by using words to describe the experience and let you decide. For now, we need to let “truth” go hide somewhere as Gettier logically established, or at the very least accept that truth is a relatively small subset of knowledge.


Letting “truth” go somewhere else for a while actually helped me a great deal. It may sound strange, but truth did not set me free. Its absence did, in a Zen sort of way. This simple exclusion led me to a new way of viewing knowledge inspired by a very strange aspect of neural reality supplied by knowing Jennifer Aniston.



Jennifer and Friends


So, we’ve finally made our way back to the Gnostic Neuron and what it means to know Jennifer Aniston. Or anyone else. Or any place. Or anything. How do neurons come to know a person, place, or thing? The idea seemed fantastic. At least it did for me. And for several years I had no concept of how a neuron might achieve this remarkable result. But the possibility that neurons could create knowledge resolved so many issues in the nano, micro and macro context that I just couldn’t leave the concept alone. Set aside for now how evolution might have accomplished this amazing trick, and just envision neurons knowing SOMEthing in a Zen sort of way.


What exactly did it mean for that specific neuron to come to know Jennifer Aniston? To be accurate, this neuron also fired for other actors from the TV show “Friends”, but it has been argued that it did so only because of their association with Jennifer. Forgetting her “Friends” for the moment, the response of this neuron remains an impressive demonstration of a type of knowledge, in this case, the ability to discriminate between thousands of people that this particular subject encountered in various ways during her lifetime. THAT is an amazing trick and it seems to border on being impossible. Or even alien. Yet evolution accomplished it. Discrimination between this person or that one is at the heart of the process, as we’ll later explore. Before we try to figure out how, let’s focus on the WHAT of neuronal knowledge.


I spent years trying to make sense of this type of knowledge, even in a more flexible form. I ultimately tried to characterize the subject’s recognition as not knowledge at all, but something very limited and specific to perhaps a very few neurons. (In hindsight, I should have gone the other direction - toward the more general, for I now realize that knowledge is the superset.)


Actually, this same reductive approach has been taken by others trying to explain this neuron-knowledge phenomenon. Many believe gnostic (or concept) neurons are a relatively rare exception in the brain, the enigmatic leader in a group of neurons. Unfortunately, limiting the scope of knowledgeable neurons more easily allows the mystery to be ignored or dismissed as a bizarre exception, (at least by our left-brain with its superpower, denial.)


But I couldn’t let it go. After reviewing Rodrigo Quiroga’s videos and reading his book, I decided that gnostic neurons seem to be far more common than we might have at first guessed. So my thinking finally went in the opposite direction. 


What if ALL neurons were inherently gnostic in their nature? Along with questions raised by “The Divided Brain,” this idea led me back to interneuron chemical communication and my ultimate epiphany. To describe how I got there, I’ll use one of my favorite tricks of critical thinking: instead of seeking the answer to any question, turn the question into the premise, taking the form of an assertion. Then evaluate the results in the context of current brain data.


Instead of trying to understand how a neuron could come to know something, I simply accepted that it did, then explored the consequences. And the consequences for me were astounding. This little exercise in critical thinking not only begged for a redefinition of the word knowledge, it changed how I thought about the definition of all words. Let’s deal with these new constraints of knowledge first, or better described as the lack thereof. 


Knowledge is not something represented only by words or even actions. Indeed, (thanks to Dr. McGilchrist), most words and actions are re-presentations, or the result of knowledge, and not actual knowledge itself, thus the distinction between the definition for any experience and the word used to define it.


Neuronal knowledge is more primal, more basic, more atomic in this elemental sense. We come to know things we can’t even describe which is consistent with most knowledge being sub-cognitive. This is why we have the word “hunch” or the more modern expression, “vibe”, both of which reside on the boundary between conscious and unconscious. Only a very small subset of knowledge rises to the level of consciousness to be re-presented by words like these.


I now believe that most knowledge is created by neurons, and the expression of knowledge is not limited to the skull, nor the body, nor any given species. Before I explain a few of the myriad ways knowledge can be created (which in some cases can be quite complex), it will help to continue this simple game about defining words. Defining a word is a decursive version of what a neuron actually does when it fires. Words are literally the result of what happens when a neuron delivers that little packet of chemistry to the next synapse.


How’s that for a radical thought? If you want to understand how neurons express knowledge, simply explore the Zen nature of words. For now, I’m going to follow this shortcut as to what it means for a neuron to know something, and simply suggest that it does. Suspend disbelief for a time, as is done in the theater, or in literature. Instead of thinking of a neuron’s knowledge as a message from aliens, think of it as the very core of a neuron's nature, arising between its dendrite and hillock, no matter how implausible it may seem at first. Let’s take a clue from one of my favorite Sherlock Holmes quotes:


“Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” 


Following Sherlock’s advice, I accepted the observation to see where it might lead. That was the moment. That’s when everything changed for me. I now believe that the easiest way to proceed is not by debate nor logic, but simply to entertain this prime assertion and follow its path to enlightenment:


Neurons create knowledge, but not exclusively so. 


If you haven’t already done so, at this point I suggest you take a break from reading and go for a walk to think about this assertion without prejudice…


And even more walking… 


Until…



Welcome back. Let’s proceed.


Even if you only accept that some neurons know something, the next question is what exactly do they know? And what is the nature of this knowledge? Or if my prime assertion is correct, how does it inform models of neuron operation and ultimately, the amazing intuition, resilience, and effectiveness of the brain? It requires a bit of imagination to envision the emergent nature of many (non-digital) bits of knowledge coming together to create that one neuron’s extraordinary abstraction we call Jennifer Aniston.


So, how might a neuron create this knowledge? Were these special neurons? Or all of them? And how did this knowledge come together to yield behavior? That’s where I discovered an even bigger surprise. As I began to apply this gnostic approach at the macro level of the divided brain, human behavior started making a lot more sense. It was like people themselves were just higher-level neurons. The way they deal with consent, conflict, and control is similar to what I found at the nano-level of the neuron. That’s when the idea of decursion emerged. As I take you from the nano context of the neuron to the macro context of the brain, I can actually describe neurons in terms of human behavior, as long as that behavior is driven by a new broader definition of knowledge. I will provide plenty of examples, but first, here are a couple of important questions to address. They helped me in my quest:


Why does a neuron fire?


And when exactly?



Significance


Single-word answers are elegant, but are they accurate? Setting knowledge aside for a moment, have you ever seriously thought about why a neuron might fire? “Knowledge” is not the most primal answer. It’s more of the result. I treated this question as a koan and kept it in the back of my mind for years. The question is simple, but the answer, not so much. The answer came only after I’d spent a long time unlearning even the analogical nature of neural communication and had finally dismissed the whole approach as not having a scientific answer.


Though frustrated, I worked on other issues, leaving the problem in the back of my mind for years. Then one day I thought of the neuron not as a component of a neural pathway, but as a cell standing alone. I became that cell. That’s when it came to me. Sometimes it helps to anthropomorphize in a more intimate fashion, or just broaden your focus. It’s why I suggested above that you go for a walk. Leaving the objective behind for the moment, reach for a more subjective answer to the question. If you were a neuron, why would YOU fire? And when?


For me, a neuron fires because it has found something significant in its world, something that may be of utility. This of course immediately begs the question, “what is significant?” This has an easier answer - with help from Darwin, significance is “knowing” something that helps a creature or even a concept survive and replicate (not unlike a meme per Richard Dawkins).


This brings me back to the elegance of one-word answers. This is where Iain McGilchrist’s “Divided Brain” helped out. He described our left-brain as the tool our mind uses to grasp something, to nail it down. This is decursively similar to the nature of sparse coding which is how we populate a map - only the most important stuff is included. What if neurons only fire when they detect important stuff - the very essence of significance?


Our left-brain prefers solo and simple answers. It’s always looking for “the” thing as opposed to “a” thing. Our right-mind likes to keep its options open. After a fashion, the two sides of our brain establish dichotomies for many aspects of the world. Our left-brain tries to drive any solution toward the endpoints; our right-brain mines the middle ground, tending away from the left-brain’s focus, creating a useful dynamic tension between the two sides. Together, dichotomy is a powerful tool of investigation, and neural knowledge generation. Wisdom lies somewhere in the middle.


When I was in high school, I would drive over to The College of the Redwoods to read a magazine simply called, “Electronics”. You could only subscribe if you were a documented engineer, which at 17 years old, I wasn’t. So I had to sit in the library to read this particular magazine. Not only did it have excellent content, but the ads were also from companies constantly bragging about their component specs, some of which were quite detailed. And very interesting, at least to me.


I remember at the time laughing at myself for studying the ads as much as I did the articles. In most publications, there is an editorial line between news and noise. This is also known as the line between content and spam using the current vernacular. It may be obvious, but this line gets adjusted as we learn more about the topic. Something similar happens with neurons. I ended up reading this magazine for decades. The more I learned about electronics, the more I ignored the ads. But the threshold of what I paid attention to was dynamic, and remains so. This is why advertising works. Well, when it does work, when you let it past your bullshit filter. The point is, every now and then ads can be content. Mostly the world is not black or white, true or false, right or wrong. We mine the areas between dichotomies for meaning all the time. This magazine allowed me to better appreciate the “Goldilocks” zone and challenge “absolutes”. I still do. But there are exceptions to the exceptions.


I will here assert that significance, unlike most things in life, does not occur by degrees. It either is, or it isn’t, much like a firing neuron. It’s how we think of logic values - true or false, right or wrong, good or bad. That last sentence took you from objective logic to subjective social judgment in only three steps, yet it seems reasonable. As Bob Dylan would say, “If something’s not right, it’s wrong.”


But is Bob actually correct? Your left-brain might agree, but hopefully, your right-mind would withhold judgment. Our left-brain prefers to deal with the endpoints of dichotomy, our right-mind, the middle ground. It’s decursively similar to what a neuron does when it fires.


Our left-brain likes to nail things down, define them using fixed words. It’s how we grasp things in the world, ideas included. It’s the same with knowledge, including its more limited higher form - information. Information is closed, like an ellipse. In contrast, knowledge is more like a hyperbola, open-ended. Our right-mind likes to keep its options open in order to mine that middle ground. So it is with that realm between the neuron’s dendrite and hillock in the nano context. That’s where priming is managed until firing nails down a particularly significant thing.


So is knowledge simply finding something which might be significant for survival at that moment for that particular creature? Or that particular neuron? At that particular moment?


Wait before you answer. There’s more. “Significance” depends not only on context but perspective and the temporal aspects of synchronicity. This is where we need the answer about "when". The flash of a flame might be quite significant unless it’s accompanied by the visual image of someone lighting a cigarette. Then we ignore it. What’s objectively insignificant to me at this moment might become VERY significant to someone else in another moment depending upon context and experience.


Knowing? Significance? Objective or subjective? In this moment, but not the next? Are we headed into another logic trap? Another paradox? Not if you’ve studied Zen. I had already worked out how an analogical neuron could yield a type of “AND” and “NOT” function, or at least a fluffy relationship for each. I had even characterized the analogical version of the “master OR” configuration, also known as a CASE statement in some computer languages. In any case, (pun intended), this work informed my understanding of significance and led me back to my inverted assertion. So to speak. Or something like that.
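

For the technically inclined, here is a minimal sketch of what those “fluffy” relationships might look like, using ordinary fuzzy-logic operators (min, 1 - x, and a strongest-wins selection standing in for the “master OR” / CASE configuration). These particular operators are my assumption for illustration only, not the formulation I actually worked out.

```python
def fuzzy_and(a, b):
    """Both signals must be present; the weaker one limits the result."""
    return min(a, b)

def fuzzy_not(a):
    """Inhibition: a strong signal suppresses, a weak one permits."""
    return 1.0 - a

def master_or(candidates):
    """Analog CASE statement: whichever candidate is currently strongest
    wins and drives the output; the rest are ignored for this moment."""
    return max(candidates.items(), key=lambda kv: kv[1])

heat, light, smoke = 0.9, 0.7, 0.2
print(fuzzy_and(heat, light))    # 0.7 - both present, limited by the weaker
print(fuzzy_not(smoke))          # 0.8 - little smoke, little inhibition
print(master_or({"flame": 0.9, "lighter": 0.3, "sunset": 0.1}))  # ('flame', 0.9)
```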


This understanding forced me to reevaluate my process of turning the question into the premise. It was a way of freeing me from the assumption that I knew what the Jennifer Aniston neuron knew. It no longer mattered to me what exactly that particular neuron knew at that moment. The nature of its knowledge was subjectively a matter of its (that neuron's) sovereignty, not mine. It was no longer a question of how a particular neuron came to know someone that I had defined as Jennifer Aniston. It became a question of what exactly did that neuron know at that moment. And yes, the words "Jennifer Aniston" are a useful REpresentation, but not the actual knowledge. Only the neuron could know what that was. So the question I should have originally been asking was, what does each neuron know when it fires? My "flipping the question into the premise" trick was merely a bridge to a new perspective. (This paragraph was added years later. I hope you find it useful.)


I’ve long thought of the line between yin and yang as a middle ground without dimension, without range. Where does that line fall between content and noise? Between significant and insignificant? What I mean by this is, sometimes even spam is content. That line is the essence of discrimination. Which happens to be what a neuron does.


Significance is why a neuron fires. Discrimination is how it happens. But it's not just THAT it fires, it also fires WHEN it's most useful. Knowledge can best be described as a significant relationship between things, one that helps survival and replication. The things are material. The relationship between them is ethereal, so knowledge is ethereal, at least until it is RE-presented in some material medium.


Now that we understand that a neuron fires when it finds some significant relationship in its world, it’s time to consider another single-word answer - "knowledge" - in more detail.



From Where Does Knowledge Spring?


Before we address the above question, let your mind wander just a bit more…


What if creating knowledge is literally the nature of neurons? Or more primally, simply the nature of matter itself? Above I described the cell membrane as an important trick of evolution that allows for the ionic nature of the neuron and its magic ability to create knowledge. So do cell membranes create knowledge? They certainly have some osmotic aspects that are ultimately key to the process. But does this tenuous membrane qualify as a source of knowledge? I can honestly say I don’t know, but I’ve left this paragraph in to leave the question open, and more importantly, by going a step beyond, to anchor one end of knowledge generation as an axiom, a first principle. So for now, cell membranes do not know what to let in and what to keep out. But they might. Now I’ll return to that same important question.


What if creating knowledge is literally the nature of neurons? (Or even their cell membranes?) 


What if knowledge is simply what neurons produce? How might this idea affect my casual brain model? It may seem like I’m just playing word games but think about it carefully. For me it took weeks, but the ultimate epiphany was quick and dramatic. This simple idea changed everything in how I thought about the brain. And the mind. Indeed, a useful brain model can be logically, (or more likely, analogically) derived from the simple assertion that neurons create knowledge. This is true even if you don’t know the actual nature of knowledge, or even how it is created. 


The answer does not require decursive analysis, nor even logic, though both can be very helpful. The easy answer simply reflects the nature of a neuron, decursively (from possibly the nature of cell membranes). The point is, I’m more certain that most knowledge starts with the neuron, at least in the way I’m about to redefine knowledge. It’s neurons that ultimately create meaning in dance, language, memes, and a thousand other forms of expression. It all flows from knowledge, because…



Neurons are the genesis of most knowledge. 


Once I had that assertion, the idea wouldn’t leave me alone. I looked for ways to invalidate it. Please let me know if you are able to accomplish that objective. I even thought about coining a new word for this type of primal organic knowledge as opposed to the more macro and abstract type we generally deal with day to day, but decided the word knowledge (and to know a thing) was actually the most accurate description I could find. Let me know if you come up with a better word than knowledge.


I played with the idea for months, but it just became more and more convincing. For me, it now seems impossible that neurons do NOT know something at the instant that they fire. So back to the questions that come to mind from this little exercise - what does each neuron know? And how exactly does it come by this knowledge?


The answers to these questions allowed for a dramatic conceptual simplification. A Zen moment. For now, I won’t bother returning to Socratic debate which is steeped in left-brain logic and language. Instead, to share my experience of that moment, I’ll return to that process of elimination and flip the premise:


If not from a neuron, from where does knowledge spring? From that apple in Eden? Carl Sagan almost reached that conclusion (or should I say genesis?) in “The Dragons of Eden.” Or does knowledge spring from the forehead of Zeus? Or his prefrontal cortex? That’s a useful hint. Even our right-minded myths point us in the more probable direction. If you haven’t already, take some time with this question. Treat it as a Zen koan. I did for years.


Creating knowledge is not limited to just that one Jennifer Aniston neuron, or the many other examples from split-brain surgery. Regardless of how you understand the poor historical definition of knowledge above, I will assert that ALL neurons know something at the moment they fire. Knowledge is the variable in multiple ways. It's not only the "limited" algebraic variable with its rigid structure, type, or values. It's something far more general, literally including everything anyone (or any animal) has ever known. This more generalized "variable" of knowledge becomes a matter of what exactly each neuron knows, and how this knowledge is acquired and applied.


As an answer and example of where knowledge comes from, let’s deconstruct the “World Book”, my favorite resource as a child. Where did all that knowledge come from? People, of course. Its knowledge came from writers and editors. And they got it from their teachers who got their knowledge indirectly from those who originally wrote it down, either from direct experience of investigation or from actual witnesses. And where did they get this knowledge? Let’s use the direct experience case since it’s more to the point. That’s right. They lived it, then expressed it in movement taking the form of speaking or writing, both controlled by neurons. Ultimately, most knowledge starts with the firing of a single neuron, which brings us back to searching for a more useful definition of knowledge.


Slipping back into the world of logic and definitions for a moment, there are two ways of defining a word (which can represent a bit of knowledge). Words can be formally defined in terms of other words. This is the essence of association, but not identity. If the meaning behind one word were EXACTLY the same as another, why would both things need separate words to represent them? Even when some are quite similar, each word represents something unique. As you can clearly see, word-based definitions have a problem. They lead to a circular paradox - words defined by words, defined by the same words. If you want proof of this assertion, try satisfying a young child’s series of questions all beginning with “why”. Your words will ultimately be futile, likely exhausting your patience before your vocabulary. From the mind of a child, wisdom.


Fortunately, there’s another way to define things. It’s simply applying direct experience, such as feeling a hot stove. I will here suggest that defining words in terms of other words is the more limiting alternative in a left-brained Bizarro and indirect fashion. Right-minded intuition is far richer in meaning than any collection of words, including these. These paragraphs are only a sparse approximation, especially when it comes to defining something as important as the word knowledge. And defining the experience of knowledge? Well, you have to live it.


That’s why it’s important to not only think about but also to anthropomorphically and subjectively envision a neuron collecting many bits of sensation or other knowledge to create that one thing that IT knows. The best way to really know something is to experience it. Otherwise, we’re just taking someone else’s “word” for it, (including all the logical issues presented by Plato and others). 


So, what’s it like to experience the creation of knowledge? It’s exactly as you might imagine - insightful. Sometimes. But most of the time we’re not even aware of the vast nature of this knowledge, let alone the experience. This is where introspection becomes a critical tool for understanding introspection’s limits. The creation of some knowledge can be brought in from the shadows, but most knowledge will always remain a mystery. Perhaps it’s time to reframe that definition of knowledge. Yes, I believe that time has ripened the word-fruit from our tree of… knowledge.


Now for a new definition of the experience of knowledge, ironically, using words.



Redefining Knowledge


"I know it when I see it." - Supreme Court Justice Potter Stewart


Justice Stewart was referring to obscenity when he reached the above conclusion, and the movie in question did not qualify. At least according to him. Which was the point. We may not always be able to define a rule for some quality, but we still know it when we see it. So what is this thing we see? What is this thing we call knowledge? How do we define it? Or come to know it when we see it?


Once we back into this working premise that neurons create something called knowledge, we need to define knowledge more clearly. This is especially important to those who are technically minded as they are more likely to apply their left-brain to the issue even if the answer lies in the realm of the right-mind. So, what is the nature of this new more generalized type of neuronal knowledge? Is there a way to define knowledge as a practical experience? I’ll here assert that there is, and that way is Zen simple. The definition again flows from turning the question into the premise as noted above:


Knowledge is that which is created when a neuron fires, 


…but not exclusively so. 

 

If you’re not laughing, don’t bounce just yet. Before you decide I’ve gone full-looney, relax, and actually entertain the idea. You’ve come this far. You might as well suspend disbelief a bit longer. Yes, I realize this seems like a case of circular reasoning or a paradox, but it’s not. As Iain McGilchrist noted, it's more like "Drawing Hands", by M.C. Escher. The reason this definition is enigmatic is that there's more than one way to understand knowledge, and one way need not preclude another. Also, for a definition, it’s not very definitive, but the exceptions are relatively rare. It’s mostly a primal definition, a firstish principle. An epistemological axiom, with an exception. Or perhaps a few more. 


Thinking about the issue leads us back to presenting a Zen koan - which of course, defies logic. But there it is. Play with it a bit. The simple idea that neurons create knowledge solves so many problems in modeling the brain that the answers we seek must at the very least lie somewhere in that direction. Neurons creating knowledge explains a great deal about language, art, science, and of course, philosophy. These are all expressions of human behavior. For me this idea was startling and I laughed about it for days before I began to take it seriously. Now I can’t see the world or neurons in any other way. But you are likely new to the idea, so let’s play with it a bit more. It takes time to adjust.


I could have asserted that neurons simply create knowledge, without defining knowledge, but that would beg more questions. This definition begs fewer and provides better answers. For now, it's more useful to have this definition than not. 


For instance, what does it mean to create a thing? If the idea of “creation” challenges you, there’s an easier way to think about the topic. Think of neurons as complex active filters converting experience into knowledge, capturing an essential bit of the nature of the world as such a process implies, but not always in a consistent or determinant fashion. At this point, I must be clear. Neurons don't simply convert experience into knowledge. Neurons literally CREATE knowledge. Experience is simply the raw material neurons use in this creative process. I’ll describe how later on.


Though easier to think about, subjective experience conversion is not quite as mystical as creation, but it still leaves room for the magic to happen. The magic that happens between the dendrite and the hillock of the neuron is a reflection of the magic we experience in the world. The World is the source of experience, and even more importantly, a big part of a loop with various neural pathways. Neurons are constantly working to perfect knowledge as a dancer might do with movement. Or something like that.


Once you adopt it, the implications of this perspective are profound. All knowledge takes on new meaning when redefined in this way. Let’s explore a few of these shifts in thinking. For instance, I’ve noted that neurons fire when they detect something significant in the world, at the very least significant relative to that neuron, and likely significant for the whole organism.


I’ll go one step deeper for those seeking a handhold in mathematics. When any neuron fires it divides that particular aspect of the world (informed by which type of neural sensor initially fired) into two mutually exclusive sets - almost enough and more than required. The edge between these two domains is where the magic happens, where knowledge is created. It’s that “S” line in the symbol for yin and yang. It’s the very edge of indecision. Or decision. It’s that creative spark (metaphorically ionic, not electronic).


Now overlay these two sets at angles with the sets created by another associated neuron sensing some different aspect of that same event in the same context, let’s say just enough heat and light for these two types of neurons. Where the yin and yang of these two edges cross defines a point of no dimension, yet with great significance for some creature just trying to stay alive and survive a possible actual fire in this particular example. 


Now add in a third neuron sensing yet another aspect of that context, perhaps the complex smell of smoke. Now imagine that each of these three neurons, with their respective paired sets, is backed by hundreds of other more primal neurons supporting these three points of knowledge, yielding this abstraction we call fire. This should give you just a glimpse of how specific (and abstract) any given convergent neural net can become. Going deeper at this point will challenge most readers, and this hint is enough for those who wish to chase the model using geometry and set theory. Or more accurately, those lines and points between such sets.
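

Here is a hedged sketch of that geometry in code: each sensor splits its own dimension into “almost enough” and “more than required,” and the abstraction we call fire lives only where those edges cross. The thresholds and sensor names are invented for illustration; a real convergent net would rest on hundreds of more primal neurons.

```python
def edge(value, threshold):
    """One neuron's contribution: which side of its edge the world is on.
    False = almost enough, True = more than required."""
    return value >= threshold

def fire_abstraction(heat, light, smoke):
    """Three crossed edges; only their intersection yields the abstraction."""
    return edge(heat, 0.6) and edge(light, 0.5) and edge(smoke, 0.3)

print(fire_abstraction(heat=0.8, light=0.7, smoke=0.5))    # True  - 'fire'
print(fire_abstraction(heat=0.8, light=0.7, smoke=0.05))   # False - a match flare, not a fire
```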


Yes, I’ve left out a few hundred pages of very technical detail and debate, but most of that is about the “how” of creating knowledge as noted above. Once I realized that it was possible to understand the “what” of neuronal knowledge creation without knowing specifically “how”, I set out to describe knowledge generation in its simplest form possible - philosophy. Using these words.


For you technologists, think of a computer language without an input/output library or any kind of I/O function. The operation of any program in that context must by definition only exist within its mathematically complete world. I actually defined such a tiny language, and a friend wrote a compiler for it so we could play with the ideas as we debated the issues about the essence of entropy. And extropy.


Since I’m trying to present this concept in its simplest form, I’d like to first present what it allows for in modeling the brain before we get into the “how” of the neuron. I’m just going to leave these ideas here for now and express my experience as I applied them using decursive examples. Hopefully, by then it will make as much sense to you, as it does to me.


If the idea that neurons create knowledge doesn’t make any sense at all, I suggest you simply stop reading at this point and imagine that it’s true. What are the consequences? Test them as you will. Your imagination will be different from mine, but all of that will likely be helpful. Think of ways to prove me wrong. I would love to entertain your debate. 


Now with your version in mind, what problems does it solve with respect to understanding human behavior? Take all the time you want. Hours, weeks, or years. I’ve had that luxury. The rest of what I present is based on this simple assertion, but for me, pieces started to fall into place right away. And yes, I’ve been down the “how” rabbit hole enough times to realize there are many answers to that question and most are ultimately far simpler than the concept itself, and its consequences. 


At first, it was just a trickle, but over the next few weeks, it turned into a torrent of resolved paradoxes. I’m tempted to leave, “how neurons come to know”, as a student exercise. At least for now. Other questions are actually more important, and their answers more enlightening. This new definition of knowledge allows for a wonderful flexibility in modeling the brain as I'll present in due course.


I realize your analysis of this quirky redefinition is likely to be quite different from mine. My perspective is a technical one filtered by intuition, art, and Zen. Because of my technical background, I’ve had to struggle to accept my own conclusions in each of these issues. Some of them may ultimately turn out to be completely wrong, but this perspective has been so useful over the last few years that I remain compelled to document my conclusions, so I will proceed.



The Nature of Knowledge


"Somewhere, something incredible is waiting to be known." - Carl Sagan.


Might that “something” be the nature of knowledge itself? 


Using this new definition of neuronal knowledge, how might we describe knowledge in general? Returning for a moment to the discussion of significance, what is it that any given neuron finds significant in the world? This question of course has a myriad of answers, but a generalization might be something that a given neuron has detected before that might help it survive, plus or minus. I note the variance because neurons seem to be constantly adjusting their sensitivity to specific prior conditions, causing the result to range from one end of a spectrum to another in multiple aspects. This is quite different from another source of biological knowledge which I should quickly address.


This newly defined neuronal knowledge should by now sound similar to an immune response, where a mature T cell recognizes a specific prior infectious agent, though without the constant adjustments neurons seem to make. Gerald Edelman developed what he called the Theory of Neuronal Group Selection (TNGS) largely based on a similar idea. The theory hasn’t gotten much traction, though many of his ideas have been quite useful for me. Can an immune cell be said to know something about the infection it guards against? How might this type of “memory” be different from the memory we ascribe to the brain? I found it a useful hint. Much more on memory later.


What if we think of the entire world as a source of challenges, some from bacteria, some from viruses, but mostly various environmental conditions we can detect using smell, light, touch, temperature, and all the other neuronal sensors the body possesses? When one of these sensor neurons fires because it recognizes a specific condition or profile of conditions outside of the body, might it then deliver to any subscribing neuron a chemical signal representing that unique profile or condition? Further, might this second interneuron sample this fairly standard chemical message in a way that it can best hone its sensitivity to what IT wants to know in contrast to how it’s being informed? This would allow the sensor neuron to know one thing about the world at that moment, and the next neuron in the path to know something a bit different depending upon what other signals arrive within a temporal window of synchronicity. This “what if” is only one possibility, though a likely one. It also goes to the genesis of consent.
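

As a rough illustration of that temporal window of synchronicity, here is a toy subscribing neuron that treats its inputs as significant only when enough of them arrive close together in time. The window width, the strength cutoff, and the count required are all invented numbers, not measurements.

```python
def subscriber_fires(arrivals, window=0.02, needed=2):
    """arrivals: list of (timestamp_seconds, strength) chemical signals from
    sensor neurons. The subscribing neuron fires only if enough sufficiently
    strong signals land within the same short window - what it comes to
    'know' is the coincidence, not any single input."""
    times = sorted(t for t, strength in arrivals if strength > 0.1)
    for i, start in enumerate(times):
        if len([t for t in times[i:] if t - start <= window]) >= needed:
            return True
    return False

# Flash of flame and smell of smoke within 20 ms of each other: significant.
print(subscriber_fires([(0.000, 0.9), (0.015, 0.7)]))   # True
# The same two signals half a second apart: ignored.
print(subscriber_fires([(0.000, 0.9), (0.500, 0.7)]))   # False
```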


Before I get carried away with the details of chemical communication, let me try to generalize a bit about the impact of this new definition of knowledge. Let’s just summarize by saying neurons (sensor and otherwise) evolve a sensitivity to specific conditions in the world, and knowledge is a reflection of this sensitivity. I believe it’s fairly easy to see how this significant detection might reflect the relationship between that thing and the neuron and be quite useful for survival. THAT is the nature of knowledge - the relationships between things out there and ourselves as neurons, anthropomorphically speaking. Neurons create knowledge, and so do people from their relationships with other people and other things.


Here are some of the more obvious consequences of redefining knowledge. This may seem like a backward approach but I’m going to characterize knowledge as a collection of dichotomies, or more accurately, the continuums between dichotomies. This approach to describing knowledge is based on what I’ve learned at the nano level. Using decursion, we can tap into our cultural vocabulary which gets us closer than any tech or mathematical approach. 



What Exactly Does a Neuron Know? 


So if neurons really do create knowledge, what is the nature of this knowledge? I’m sure there are many other important aspects but these ten have moved to the top of my list over the last few years. Others come and go. These aspects of knowledge are not fixed or definitive in any way. Instead, these aspects are best represented as spectrums anchored by dichotomy. Knowledge ranges:


From → To

Subjective → Objective
Ephemeral → Persistent
Novel → Expected
Fallible → Reliable
Capricious → Useful
Abstract → Concrete
Emergent → Reducible
Ethereal → Real
Signal → State
Relationship → Function


These spectral associations obviously don’t replace Plato’s three words as a definition. We did that when a neuron fired. This description is not definitive, it's just a more general way of understanding knowledge. These are not merely words to anchor meaning. They are limits of spectrums.


Think of knowledge as love. Does one word define it? The Greeks used 16 words in an attempt to corral a definition for love, but that just broke the definition into types, begging even more subtypes in a left-brained fashion. Love remains enigmatic. So is knowledge. Love is literally a type of knowledge. Also, these words are only those that have captured my attention so far. They are not definitive. They are just the opening of a door. Let’s step through.



Knowledge is Subjective, Striving to Become Objective


This first spectrum begins at a sensor neuron, the first in a series forming a neural pathway. Even if we knew why it triggered the next time it did, would we be able to quantify it? And if we could, this quantification might change in the next moment. Such knowledge is subjective to that neuron, at that moment, and what that knowledge means can change almost as quickly as it’s detected. Even though knowledge becomes more objective over time and as we move along any given neural path, it also becomes more dependent upon what each preceding neuron knows. Even operational objectivity is never truly achieved within a single skull. Knowledge is not “encoded” in any given neuron, but the neurons do become sensitized to a specific bit of knowledge as they decode the world. Knowledge from these neurons converges on a relative consensus with each step, becoming almost objective near the muscle, where the process continues in a macro context between individuals, yet never truly reaching its objective as a limit (circular reference intended). Knowledge is subjective because its quality is relative to that neuron at that moment, and it changes from neuron to neuron, and moment to moment.


Much of this knowledge competes and cooperates with knowledge from the neurons around it. What this means is that what any neuron knows may be quite different from what an adjacent, similarly informed neuron comes to know, as each reaches its own conclusion.


As knowledge converges into more specific cues, it’s better described by the limit at the other end of its continuum. Subjective knowledge becomes more objective just before it drives a script of muscle movement, but not exclusively so. We’ll explore why shortly.


Decursively in the macro context, what individual people come to know is also on this spectrum. You and I may have similar backgrounds and read the same paragraph yet come to know very different things about what that paragraph says. We may even reach opposite conclusions as to its meaning. The result is similar with neurons. What they come to know is sort of an analog function, better described as a mathematical relationship of how each of their triggers is primed and/or fired. So it is with people. Even in a given neuron (or person) consistency ranges from nearly random to almost determinant, but not exclusively so.


I realize this sounds pretty fluffy, and it’s true only by degrees, but much neuronal knowledge is fairly consistent.  Well, at least as consistent as human behavior, and for similar reasons. Neuronal knowledge also becomes more consistent as patterns evolve in life, and also with each step along a neural pathway yielding increased abstraction. But never reaches its limit.


Knowledge is a signal meant to cue or prime movement. It depends upon how that knowledge is literally applied in ratios of activation and inhibition by other neurons, and how it’s refined as it gets schematically closer to a muscle.


Or something like that.
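

To put a rough number on “converging but never reaching its objective,” here is a toy simulation: each layer of a pathway samples the previous layer’s noisy, subjective reports and issues its own, and the spread of opinion shrinks with every step without ever hitting zero. The layer sizes, sample counts, and noise levels are arbitrary choices for illustration.

```python
import random

def next_layer(signals, n_units=50, samples=5, noise=0.05):
    """Each downstream unit samples a handful of upstream signals, adds its
    own noise, and reports a graded conclusion of its own."""
    return [min(1.0, max(0.0,
                sum(random.choice(signals) for _ in range(samples)) / samples
                + random.gauss(0.0, noise)))
            for _ in range(n_units)]

def spread(signals):
    """Standard deviation: how far the layer is from consensus."""
    mean = sum(signals) / len(signals)
    return (sum((s - mean) ** 2 for s in signals) / len(signals)) ** 0.5

# A very subjective first layer: every sensor has its own noisy take.
signals = [random.gauss(0.6, 0.25) for _ in range(50)]
for step in range(4):
    print(f"step {step}: spread {spread(signals):.3f}")
    signals = next_layer(signals)   # more consistent each step, never perfectly objective
```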



Knowledge is Ephemeral, Striving to Be Persistent


In a fashion similar to the subjective-objective spectrum, primal neuronal knowledge is individually ephemeral, but there are tricks to make its sensitivity more persistent over time. Examples to follow.


Knowledge only exists for about a thousandth of a second and then is lost, meaning that that specific bit of knowledge can’t be detected again - ever. Similar knowledge can be detected by that same neuron in the next moment, but because of its dynamic nature, may sometimes be quite different. This makes it difficult to simulate the world, but not impossible. Evolution has come up with some very elegant tricks using chemistry which I’ll describe in due course.
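

A small sketch of that distinction, under my own invented numbers: the firing itself is ephemeral and leaves no record, while the sensitivity it nudges is the part that persists and makes similar knowledge easier to create next time.

```python
class ToyNeuron:
    """The firing is ephemeral - nothing of that moment is kept. What persists
    is sensitivity, nudged a little each time the neuron fires. The rates
    here are invented for illustration only."""
    def __init__(self):
        self.sensitivity = 1.0                 # the persistent part

    def moment(self, drive):
        fired = drive >= self.sensitivity      # the ephemeral part: a signal, not a state
        if fired:
            self.sensitivity *= 0.95           # firing makes similar knowledge easier later
        return fired

seasoned = ToyNeuron()
print([seasoned.moment(d) for d in (1.10, 1.10, 0.93)])   # [True, True, True]
fresh = ToyNeuron()
print(fresh.moment(0.93))                                 # False - no history, no persistence
```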


This ephemeral nature of knowledge is also why so many psychological tests fail at repeatability and are therefore excluded from science. Science requires objective consistency in the macro context. So do people in order to manage their lives in the world effectively.


Knowledge is a signal striving to become a state.



Knowledge is Novel, Becoming Expected


This aspect of knowledge used to be "Random to Predictable", but I thought "Novel to Expected" might be a better description, so I left the original version included below just in case. Such is the dynamic, or maybe fluid, nature of knowledge.


At birth everything is novel and lots of neurons are firing, but the ones that deliver more utility from the movement they drive tend, over time, to be the ones not pruned. Yet novelty holds great utility all through life, as demonstrated by the operation of our right mind. Note how the nano finds decursive form in the macro of the divided brain. Perhaps I should switch this back to capture some irony? Or is such ambivalence just my mood this morning?


Whichever version you prefer, as each neuron accumulates more and stronger connections, what that neuron comes to "know" tends to settle in; thus, so does its knowledge. Knowledge is fluffy, becoming more useful, as noted in the next (now second) section below.



(Knowledge is Random, Striving to be Predictable


OK, random is actually the anchoring limit, not exactly the nature of knowledge. But it sometimes starts out that way. Think of the primal limit of knowledge as random initial conditions for a system designed to be self-tuning. I doubt there’s anything truly random about the firing of a neuron, but it can certainly seem like that on occasion.


Since the knowledge created by any neuron is dependent upon all the neurons that inform it, consistency is quite useful, but not required. If a given neuron’s knowledge is not of much value to the next neuron, the quality of that connection will atrophy over time.


Fortunately, the sensitivity for any given knowledge strengthens with each firing of that neuron and each neuron in the path. And since predictability holds great utility for survival, repeatability is the objective, and the actual result in many cases. But only by degrees.


So, unfortunately, you can’t always count on it.)



Knowledge is Fallible, Striving to be Reliable


"90 percent of everything is crap" - Theodore Sturgeon


This third consequence of neuronal knowledge is perhaps the most challenging - that knowledge is not necessarily true in any logical or reliable sense. Indeed, “fallible” may be an understatement when assessing the quality of primal neuronal knowledge. Such knowledge may not even be close to the truth, though it tries to be. At its best, the quality of knowledge asymptotically approaches the truth but typically struggles to beat the flip of a coin, and in many cases for new experiences, knowledge may have less than a five percent chance of being accurate. Sometimes, even far less.


When this idea is applied to individual humans decursively, the result is what we observe in politics - wide disagreement, with conviction. What you may know about any topic (such as modeling the brain) is likely quite different from what I know. We usually sort it out by seeking answers using cooperation and competition to achieve consensus, just as neurons do. One of the main differences with humans is that once we express knowledge in some more permanent form, it becomes information (see information's contrast with knowledge below). Neuronal knowledge does not have such luxury. It must remain flexible and adaptable, but by degrees.


This consequence may be difficult to accept, but not if you consider the impact of false positives compared to false negatives for each case. Often there are tradeoffs, even when the probabilities of each are in the single digits. The key to quality is the ultimate Darwinian consequence. This is only one way evolution has learned to evolve - by applying knowledge disproportionately, accurate or not.


The point is, what you know may be wrong. Keep an open mind.



Knowledge is Capricious, Striving to Be Useful


Which brings us to utility. Knowledge need only be right one time out of a thousand, if that thousandth time is the critical application that allows for survival. Knowledge is the reason that one out of 20 businesses succeeds. Does that invalidate the 19 that fail? That twentieth may make up for all the rest, and that's enough to be useful. Business success is a decursive example of such knowledge generation and application.


Even long shots have some utility for that neuron at that moment, so they are justified even if they can’t be defended logically. That’s where belief comes in, or if you prefer, faith. Belief is an even higher-order aspirational illusion that is closely related to truth, or at least its approximation. Such diametrically opposed knowledge may be less useful, though surprisingly not actually false. At least from a neuron’s perspective. All religions can’t be true at once, yet faith provides value for each, subjectively. And one (or more) may ultimately be the truth.



Knowledge is Abstract, Striving to be Concrete


Knowledge starts out as a kind of sparse abstraction of the world. Similar to the way a pinscreen captures a reflection of our face, the mind is a reflection of the brain's effort to capture knowledge of the world. After only a few jumps from neuron to neuron this knowledge quickly becomes even more abstract which then informs a concrete response to the world to once again be evaluated.


If you've never played with a pinscreen, it's a sculptor's tool. It's a useful plaything and great for capturing compound curves in a 3D space. A pinscreen is a collection of pins or nails set loosely in a matrix of holes in a square piece of thick plywood. Each of these pins can move individually. When you press the pins up against your face, the screen captures a three-dimensional image of who you are in inverse form, sparsely decoded. (The other end of the pins contains your positive image.)


Our brain does something similar when capturing images and other sensory input. It converts reality into an inverse signal simulation which we experience as the mind. The ethereal mind becomes pure abstraction or a type of negative reflection of our world which we manage internally, struggling to find concrete form that can be usefully applied.


Just as a neuron converts its knowledge back into physical form as a packet of chemistry at the end of the axon, our muscles convert our ethereal mind back into reality as a physical expression to affect the world, decursively. Neurons sense the physical world and convert it to ethereal knowledge, then back to material chemistry. Sensing is concrete. The abstraction is pure ethereal knowledge. More on the pinscreen later.



Knowledge is Emergent, Striving to become Reducible


The emergent aspect of neuronal knowledge is a little more difficult to understand, but fortunately, there are many excellent examples. An emergent property is when many somewhat similar things come together to produce a result dramatically different than each of those individual things. Digital music is a useful metaphor. It’s created from thousands of electronic states delivered in a specific order to an electromagnet that modulates sound waves, ultimately yielding something transformational; in this case, a beautiful tune. 


The brain does something similar in the visual corti. It’s described in layers as V1, V2, V3... etc. These layers converge millions of pixels of knowledge to yield the emergent result of an ephemeral image. It takes surprisingly few layers to manage the abstraction we call a face. Perhaps Jennifer Aniston’s face. Such a result can be described as emergent, similar to a song.



Knowledge is Ethereal, Only Modeling the Real


This final aspect is the continuum that sums up the difference between the brain and the mind, the dead and the living, the method, and the magic. The brain is material. The mind is ethereal. Atoms and molecules are material. Knowledge is how they are arranged reflecting the relationship between things. This arrangement is ethereal and dynamic.


To clarify, when a neuron fires it creates ethereal knowledge, meaning the knowledge has no material substance but can affect material substance as the axon uses ionic charge to move this knowledge along the axon and then express it as a puff of chemistry at the synapses.


In a similar respect, knowledge is the ethereal song even though its representation is expressed by vibrating atoms of air in the material world. The brain is physical, made up of matter; the mind is an abstract illusion of the real, a thing of the ether, meaning literally from the other world, or more simply, the other. Thus ethe-real.


This does not mean that the mind is a ghost, but it may be. I wouldn’t draw that conclusion, but others might. From my perspective, something else may be going on - that our left brain does not like to admit the ethereality of the right-mind. This is because of the isolation between the two sides of the brain which is needed for our left-brain to get its work done. I think of the divide in our brain (and our mind) as a necessary isolation as opposed to a mystery. Ethereality is why we don't yet have a good model of the brain in spite of way too much information. This technical distraction is what makes consciousness a hard problem. Our left-brain tends to deal with the physical things in the world. Our right-mind is more comfortable with the ethereal.


In contrast with the left, our right-mind needs to know everything about the whole brain (and mind) to keep both of them safe, but mostly in the moment in case left-brain strategies fail, providing for a pervasive redundancy as our left-brain navigates time. This divide allows our left-brain to remain grounded, and our right-minded mystic to soar, as may be needed for each to offer their individual illusions of the world. Now let's unify this experience in our skulls.



Relationships, Born of Reality, Probing the Ethereal


I want to be clear at this point. This project started out as a technical description of the neuron but has become a love letter to philosophy, exploring knowledge as the object of that love. I’ve had to leave the technical part in the closet for now in order to describe the ethereal aspects of knowledge creation. This model of the brain answers so many questions and validates so many aspects of other models from Descartes to Freud, to Skinner, but in various ways for each.


It's not a matter of this gnostic model being right or wrong. That's ultimately for science to settle. At this point, the question is, does this neognostic model provide a more useful way of thinking about the brain and human behavior? For me, the answer is obvious.


This gnostic way of thinking about neurons unifies the brain with the mind. Instead of the pineal gland, the spiritual magic happens between the dendrite and the hillock in each of billions of neurons competing and cooperating to create knowledge of the world needed to manage this illusion of reality - better described as ethereality since we can never be absolutely sure of the quality of the knowledge we acquire from our bodies. We only approach the limits of truth.


For instance, what is the relationship between water and dirt?


One form of this mixture we commonly describe as mud. It is one aspect of the relationship between dirt and water. The dirt, water, and mud are all real, but the relationship formed when we mix dirt and water, which we describe using the word mud, is ethereal, even though the pressure waves in our ear when we hear the word mud are also real. It's the relationship between dirt and water when mixed that becomes knowledge, and that knowledge is a reflection of the world. The word "mud" is just how we represent this knowledge.


Now extend this idea to the interactions of all the things you know of, including people. These interactions are known as relationships. The number of them is literally infinite. That is why neurons only create knowledge about the most significant relationships. This is the same thing that happens when someone creates a map (which represents knowledge). The map cannot contain ALL of the relationships about the ground in question. So the map maker only notes the ones which are most significant for the task at hand. For an army major, it's the rivers and hills. For a botanist, it's the flowers and the extent of the grasses. All of it is knowledge created by someone's neurons.


Again, language guides the way. The words theater and thesis flow from ethereal. The theater is a decursive version of our worldmapculus, a type of virtual world or simulation created from the knowledge of billions of neurons in a semiotic fashion. In being so familiar with the theater, William Shakespeare intuitively understood the nature of this ethereality - "All the world's a stage". A thesis for the ethereal is what I here present.


Another way to look at these issues is people and things are material, but the relationships between these things have real aspects which are captured as ethereal knowledge. Here you may need to take a break and go for a walk again. I certainly did. This conclusion literally took miles to reach. What I'm writing, I do not write casually, and I won't be flippant. I am quite serious about these next few paragraphs.


Even without biology, I believe relationships exist, but they aren't RE-presented as signals and certainly not as things of the mind. Relationships do not have mass. They do not occupy space. But they do live on the boundary between real and ethereal. Until there were neurons, relationships existed, but it took neurons to convert these relationships into knowledge represented as signals. Before I go too far with this idea, I will not exclude the possibility that signals representing knowledge might be created artificially, or possibly in some more alien form, or perhaps in some other more primal yet organic form. I try to keep an open mind, and I encourage you to do the same.


Here it's important to contrast knowledge with truth. The relationship between things is best described as truth, but reality is only known by degrees, so truth is only an aspiration of the mind. What each of us (and each neuron) knows about truth is merely a good-faith estimate.


For instance, the constant pi is the quantified ratio between the circumference of a circle and its diameter. Pi is not material. It does not have mass. Pi does not occupy space, but once objectively agreed upon, it can be REpresented in physical media as information about this particular ratio. It is a bit of knowledge that describes the relationship between a circle's circumference and its diameter. Knowledge about pi asymptotically approaches the truth. Math is one type of knowledge reflecting the relationships between things in the real world, but math itself is ethereal.


Relationships are the surfaces between things and have many aspects which can be effectively managed. But knowledge about these relationships allows us to predict how things in our world may interact, and so we can optimize those interactions. Knowledge is not the relationship, but it can inform others about this relationship in an ethereal fashion. Knowledge is the RE-presentation of relationships between things. Such concepts will be very important when we get around to exploring consciousness.



Knowledge is a Signal, Striving to Become a State


Knowledge is a signal, born of the material, but having an ethereal nature and striving to become a state as information.


Knowledge is a Relationship, Seeking Functionality


If you explore the differences between a relationship and a function, this arc is a bit redundant, but I include it for the technically minded. It may help in your transition from thinking about information theory to the true nature of knowledge.


Dancing with the Consequences of Knowledge


Before we leave this challenging topic, I need to carefully present the second and in some ways more critical half of this process, best described as the consequence of this prime assertion. This ethereal signal created by a neuron does not come into existence alone. It has a counterpart best described by the word dance. Yes, I mean literally the thing that Michael Jackson was so good at doing in the macro context. Even in the micro context, the neuron relies on muscles to perform, sometimes quite indirectly.


Without dance, there is no way for the neuron to affect the world and so no way for the world to once again affect the neuron. Without this loop made up of world - sense - decide - signal - converge - cue - script - movement - world - sense - decide - signal - converge - cue - script - movement - world - sense - decide - signal - converge - cue - script - movement - world... there is no way for the neuron to hone its knowledge of this relationship between things. Knowledge is merely a constantly changing approximation and REpresentation of this relationship between things in the form of a chemical signal. Dance is critical to the creation of knowledge. This movement is primed in the nano context. It is triggered in the micro context. It happens in the macro context. I will describe how evolution came up with this "trick" in due course, but here's one more observation about knowledge.


Knowledge is the simplest thing I can imagine. It's more simple than matter. It's more simple than movement, so of course, it's even more simple than a more complex dance. It's even more simple than the signal that represents it. It's just a sign. And it's applied to allow the operation of the most complex thing so far known in the universe - the brain.


I could write a whole book about the consequence of knowing. So could you. These are the best words I’ve been able to find to describe these ideas. So far. Perhaps yours will be better. I encourage you to write them.



Knowledge at Human Scale


Do any of these ideas sound familiar? Once you begin to explore the consequences of gnostic neurons and brains, some of life's greatest challenges begin to make more sense, at least in an intuitive way. This simple model of a complex brain decursively helps to explain the more enigmatic aspects of war, human relationships, and even the divided brain itself.


If you've read The Master and His Emissary, you'll know that Dr. McGilchrist presents the dichotomy of the right and left brain in a fashion similar to what I'm presenting, including that the left-brain has come to dominate our culture in the last few centuries, and at other times. I'm sure each has had its turn many times in the last billion years. I threw in the continuum part because wisdom comes not from the limits of each side, but from the realm between them. Even science itself can be described as playful experiments of the right-mind struggling to become "facts" in the left-brain.


Facts do not change. Knowledge does. What any given person knows about some event is often quite different from what another person might know about the same event, or even what that same person might know about that event at a different point in time.


Just because knowledge may be true at some point in time, that does not make it a fact. Even after the fact (entering the set of things we call history), what you know for sure may not always remain true. There have been all kinds of exceptions causing reversals in fact. The obvious argument then becomes, “well, it wasn’t actually true to begin with, and so not a fact.” 


This assertion is also true. It's a fact. So how many tests must we run before we finally accept something as fact? And how long need we wait? The answers, which become “patently obvious to the most casual observer,” are an infinite number and forever.


As a matter of fact, facts do not exist. Nor does truth. Facts are merely more reliable, more stable forms of knowledge. For now. And truth? Merely an illusion.


And if you've studied human behavior, each of these continuums should also be familiar. I did not invent the words used to describe this model (well, maybe one - decursion). Most of these words and many of the ideas existed long before me. They were just less accessible, living as they were in the realm of the right-mind. I just put them in this order by drawing them into my left-brain from my right-mind. I'll explain how later. For me, they present the range of what humans know and what we do with that knowledge. Examples could fill all the libraries, plus a soap opera. And they do. Yet there is much more to learn, so let's get at it.


If you review the descriptions of knowledge above, most of it can be applied to humans in a macro context without much change. I could go off in a thousand directions at once. For instance, hierarchical organizations in our culture outside the skull are arranged much as neurons are organized within the brain. Decursively.


Knowledge finding form as information supplied by many soldiers on the front line might move up through the chain of command to give a general a useful insight into an attack in relatively few steps. It's called raising an alarm, and takes many forms in war. Such knowledge allows the general to manage the battle more effectively. War is the ultimate example of competition; the collecting of battle knowledge at the risk of death, the ultimate example of cooperation. All of them are literally trying to stay alive in their own contie. I earlier presented similar examples in the stock market and Congress. Refer back as needed. But let's return to the neuron for now.


The quality of knowledge from any neuron depends upon the specific neuron in question, the event that drove its firing, and where it is along its neural pathway aspiring to control the body. This quality can range from barely significant at sensor neurons, all the way to quite accurate about life-changing reality at the most abstract neurons just before they generate behavior. How do we manage the consequences of such slippery knowledge? The best we can.


Most knowledge is not accessible consciously. Indeed, it's a very small percentage of knowledge that rises to the level of awareness. How do we manage this vast and dynamic resource if we don't even know the details of its slippery encoding? Fortunately, it mostly manages itself. That's part of the "how." And the brain in general takes care of the rest using alternative systems in a very dynamic fashion.


For instance, there are millions of neural sensors all over your body that are actively involved in maintaining homeostasis for hundreds and even thousands of aspects of our biology. The actual number depends upon what resolution you choose to observe these biological feedback loops, but let’s take a look at one of the more obvious examples.


Your heart has by default a free-running clock of neuronal firing represented by your sinus heart rate. This neuron, or neurons, knows when to fire, rhythmically. It’s why your heart can beat even outside of your body. For a time. When properly reconnected, this rate is upregulated and downregulated to match demand from your environment based upon what it’s come to “know” about that environment.


Now, you might think what I've just described is obvious, but the actual details aren't. Slight changes in temperature, available energy, and many different hormones all come together to constantly control your heart rate. You can understand this process without ever invoking the concept of knowledge, but each of these sensors (and many follow-on neurons) comes to know the best way to manage your heart rate. You might not consider these subcognitive signals as knowledge, but when you think about it, why they fire is best described as useful knowledge, conscious or not. And WHEN they fire is critical to your survival.
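To make the up- and down-regulation concrete, here is a minimal sketch of a proportional feedback loop nudging a free-running rate toward demand. The gain, the resting rate, and the demand signal are all invented for illustration; this is a control-loop cartoon, not physiology.

```python
# A toy proportional feedback loop, loosely echoing how a free-running
# sinus rate might be nudged up or down to match demand. The numbers and
# the "demand" signal are invented for illustration only.

def regulate_heart_rate(demand_bpm, steps=50, resting_bpm=60.0, gain=0.2):
    rate = resting_bpm          # free-running default, like the sinus node's own clock
    history = []
    for target in [demand_bpm] * steps:
        error = target - rate   # what the "sensors" report
        rate += gain * error    # upregulate or downregulate toward demand
        history.append(round(rate, 1))
    return history

if __name__ == "__main__":
    # Demand jumps from rest to a brisk walk; the rate converges toward it.
    print(regulate_heart_rate(demand_bpm=95.0)[:10])
```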


This is a primal example of neuronal knowledge. A more abstract example might be agreeing to a marriage or other contract. A great deal of primal knowledge would normally inform the eventual firing of such a neuron. Along with veracity and consistency, it's also useful to assess the quality and context of neuronal knowledge before allowing it to escape the skull and become the information that creates the sound of, "I do". Have a care as to what you say or write. And when.


You might think that words are just communication cues, but they are so much more. Words are the physical and sparse expression of neuronal knowledge, or at least a small percentage of all that knowledge. Which brings us to one of my favorite exercises - thinking of words as neurons firing, and the implicit scale of neuronal knowledge once you realize the consequences of flipping this brain mystery into an assertion.


The Oxford English Dictionary has upwards of three hundred thousand words, each of a different flavor from the next. A great writer will find that ideal word that captures the meaning he’s trying to express. Now think of each of these words that any given person might “know” as neurons. And how their knowledge might vary from yours. Think of how some specific words inform others. Such networks generate meaning for that word as it cycles between the world and back to the brain. The idea embodies how neurons express knowledge to the next neuron and ultimately, a series of muscles. At least in the abstract. 


Sure, three hundred thousand words pale in comparison to the number of bits of knowledge even in one brain, but remember, each word is flavored by thousands of other neurons - words are the crowning and emergent result. It’s time for a more vivid comparison. Knowledge aspires to leave the skull and become information in the form of words or other symbols. Every single neuron firing represents a chemical signal. And these “symbols” are the key. Each symbol ultimately represents the knowledge of some neuron somewhere.

Not there yet?


Try this. For a time, ignore computers and electricity completely; even forget about the underlying ionic aspects of the neuron. Ionic signals are encapsulated within the neuron's cell membranes anyway. Instead, think of neurons as proto-words struggling to be born as information once their quality is honed. Let's evaluate this comparison.



Knowledge Informs Information, (and vice versa)


"We are buried beneath the weight of information, which is being confused with knowledge; quantity is being confused with abundance and wealth and happiness. We are monkeys with money and guns." - Tom Waits


It's commonly accepted that data, and its more meaningful form, information, is more atomic, more primal than knowledge. But that's just our left-brain's more limited and Bizarro view. What if the very opposite is true? Collecting data is the process of quantifying and fixing various aspects of nature in a very left-brain fashion. If our right-mind could speak, how might it compare the two? If a neuron really does know the essence of Jennifer Aniston, does it not suggest that neurons have some quality that can not be easily captured as data? Or information?


It's often said that wisdom flows from knowledge, but do we ever ascribe that same quality to information? Or even its data? Does our right-mind know something about wisdom that our left-brain has no way to process?


Using the above experiential definition that neurons create knowledge, we can explore not only the nature of knowledge, but also the nature of information, and how the former possibly becomes the latter, as words, verbal or written.


First, there is the contrast of characterization and quantification from a more left-brained technical or “programming” perspective. Knowledge doesn’t have a “class”. Knowledge doesn’t have a “type”. Knowledge doesn’t define a fixed “variable”. Knowledge doesn’t even have a set “value” so of course doesn’t have a “range” or “resolution”. With knowledge, there’s nothing to encode, nor any way to encode it, at least in a technical sense. As soon as you try, you invalidate the effort of the neuron at hand. Star Trek’s Prime Directive is especially important when it comes to interfacing with any neuron or the brain in general.


Most neuronal knowledge is not meaningful in any numerical or logical fashion, but the subset of knowledge known as information generally qualifies in all ten of the above aspects. And more. Information can be used to describe many aspects of a thing, including knowledge itself as is happening right now. As you read each word in this paragraph, new knowledge is being cued in your mind, each idea to be accepted or cast aside. And that discrimination is the very thing that neurons do; the essence of how neurons create knowledge - discrimination.


Using this simple assertion, what any given neuron knows becomes the analogical variable, and it is a variable in multiple senses of the word. The concept is not useful in the algebraic sense, but programmers will still try to apply the important aspects of data - class, type, variable, value, range, and resolution. These are how things are taken apart by our Bizarro left-brain when we wish to calculate them in a clearly defined way. Doing so with neuronal knowledge risks invalidating its creation and quality, thus, the need for the Prime Directive.
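To make the contrast concrete, here is a minimal sketch of what such a left-brain encoding looks like - a datum with exactly the aspects listed above: a class, a type, a value, a range, a resolution. The class name and fields are hypothetical; the point is that these are the very properties the text argues neuronal knowledge lacks.

```python
# A hypothetical, left-brain-style datum: everything the text says
# neuronal knowledge does not have - a class, a type, a fixed value,
# a range, a resolution. This is what "freezing" knowledge into data
# looks like.

from dataclasses import dataclass

@dataclass
class HeartRateSample:                    # class
    value_bpm: float                      # type and value
    valid_range: tuple = (30.0, 220.0)    # range
    resolution_bpm: float = 1.0           # resolution

    def is_valid(self) -> bool:
        low, high = self.valid_range
        return low <= self.value_bpm <= high

sample = HeartRateSample(value_bpm=72.0)
print(sample, sample.is_valid())
```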


So what is this more right-minded version of neuronal knowledge like? What is the experience of knowledge in contrast to the word we use to grasp it? The closest analogy I've found is what I consider a hybrid in the form of ChatGPT, a kind of Bizarro version of knowledge approximation and interaction. The biggest difference is that knowledge varies dramatically as to its probability of being true. Knowledge also varies in its degree of similarity with other closely related knowledge. Just don't try to quantify or define it too quickly. The moment you do, knowledge becomes frozen as data, perhaps prematurely. Living knowledge becomes dead information.


In contrast to information, most knowledge is not only dynamically changing but literally ephemeral - most knowledge has no physical representation. Knowledge only lasts for about a thousandth of a second at which time it needs to be re-sensed in order to exist again. It’s why we have the word re-member.


I’ll now further compare and contrast knowledge with dynamic RAM in a computer. Though it needs to be refreshed quite often, information stored in dynamic RAM still represents a state and needs only to be refreshed to maintain that state. Knowledge only exists when being sensed, and for about a thousandth of a second later. Knowledge is expressed as a chemical signal, but only now. Information is represented by a state in the form of a word, electronic voltage, or part of a digital simulation. When this subset of knowledge manages to find form as an agreed-upon value, it becomes information. Some information might even be true if such a thing exists, or at least approach truth asymptotically if it doesn't.



Knowledge             Information

Subjective            Objective
Embodied              Disembodied
Adaptable             Fairly fixed
Ephemeral             More enduring
Signal                Mostly stateful
A flip of a coin      Closer to the truth
Mostly subconscious   Usually conscious
Emergent              Reductive
Biological            Mechanical
Living                Dead
Mostly internal       Mostly external
Open-ended            Defined


Once you understand how neurons create knowledge and how this knowledge helps inform information, you’ll realize that knowledge is millions of times more common than information. Simply compare the number of active neurons in all the skulls in all the world with the number of bits of accumulated information. At least so far, knowledge clearly wins.


Though information in the form of words is an abstracted and sparsely encoded Bizarro version of knowledge, words are still often literally cued by a single neuron, and typically informed by millions more. It remains useful to think of mindful words as those final physical neurons of expression.


Take the word love as an example. Is it a four-letter word or simply a soft sound that may cue an amazingly rich memory far more complex than any word? At least the Greeks had sixteen flavors. It’s best to live the word love in order to know it. But the word “love” will have to do for now. Or at least until it happens to you.




Quantifying Knowledge Nets


There are lots of consequences to entertaining this assertion that neurons create knowledge. One we can address quickly which will give you a sense of scale is to quantify knowledge. How much of it exists? Or a better question in light of its new ephemeral definition might be, how much of it exists each second?


Let's start with a single neuron. Neurons typically converge thousands of inputs and deliver a single output or conclusion. This could be described as a convergent network, or knowledge net since knowledge is the nature of that conclusion. But wait. That conclusion does not typically go on to a single neuron somewhere. It typically goes to thousands of other neurons that can possibly use that knowledge in some fashion.


So a neuron could be described as a knowledge network with two different arbors. One for collecting and creating knowledge, and the other for publishing. These arbors are not equal. Their ratios of connections will vary dramatically, even extremely disproportionately, depending upon where they reside in the knowledge network. Now scale out decursively and quantify.


The brain has upwards of 100 billion neurons. At any given millisecond about one percent of them are firing, so roughly a billion fire every millisecond - about a trillion firings each second. And that's with each neuron being sparse with the physical world. Obviously, not all of these firings induce movement. With perhaps a thousand pathways of a thousand neurons each directly involved in controlling a few hundred muscles, about a million neurons could be said to get something done each second. This is an example of the extreme disproportionality of brain architecture - only about one in a million neurons actually moves something each second, yet a trillion bits of knowledge are created in that second, in that single skull. Multiply by 8 billion more for just the humans alive at this moment (I use round numbers for such speculation to make the point that it's only an estimate), and you have 8 sextillion bits of knowledge newly created on Earth. And that happens each second.


Next, how much of that knowledge actually becomes useful information? The information created by humans (such as this sentence) is a dramatically smaller portion of knowledge. Humans create about 20 million text messages per day, which is about 231 texts per second out of 8 sextillion bits of knowledge, giving you some idea of how common knowledge is, or conversely, how rare information is. Perhaps one in a quintillion, to be generous in how each letter of each text message is cued.
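For anyone who wants to check the arithmetic, here is the back-of-envelope version in runnable form. Every input is one of the author's round estimates, so the output is order-of-magnitude only.

```python
# Back-of-envelope arithmetic for the estimates above. All inputs are
# the author's round numbers; treat the output as order-of-magnitude only.

neurons_per_brain = 100e9      # ~100 billion neurons
fraction_firing   = 0.01       # ~1% active in any given millisecond
ms_per_second     = 1000
brains            = 8e9        # ~8 billion humans

firings_per_brain_s = neurons_per_brain * fraction_firing * ms_per_second
knowledge_per_s     = firings_per_brain_s * brains    # "bits" of knowledge per second

texts_per_day = 20e6           # the author's figure
texts_per_s   = texts_per_day / 86_400

print(f"firings per brain per second: {firings_per_brain_s:.0e}")   # ~1e12
print(f"knowledge created per second: {knowledge_per_s:.0e}")       # ~8e21 (8 sextillion)
print(f"texts per second:             {texts_per_s:.0f}")           # ~231
```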


And those texts are mostly noise - small talk. Information that finds form in blogs like this or books, screenplays, videos, songs, and text in other forms is such a small part of the total that it might get lost in this noise except for its significance. Significance is why it emerges as the cream of our culture - the top 40 hits, best-selling books, classic films. See any parallels in how neurons might come to know things yet? At least this quantification gives you some handle on how pervasive knowledge is. And the relative rarity of information.



Sextillions Upon Sextillions


Now that we have words to help us think about the brain in a more useful fashion, we can take a look at this model from a different perspective. Think of EACH of those 300,000 plus words from the Oxford English Dictionary as a neuron in the brain. Ah, but wait. We can simplify this by an order of magnitude since the average person only knows about a tenth of that number. 


Those thirty-odd thousand words each represent something you have experienced, at least in the form of language, or you wouldn’t have known that word. In any case, a specific neuron is cued by the approximate experience described by that word, actual or imagined. And this same neuron can cue an expression of that word, spoken or written. 


These words literally form a sparsely “decoded” map of your subjective knowledge of the world in terms of words, one of our highest and most abstract forms of knowledge. But what about the more primal forms of knowledge? Words are only a very small part of what we know at the nexus of neural pathways which I estimate is around a million specific things. 


Thirty thousand out of about a million possible responses - we only have words for about 3% of the most important and abstract things that we know, at least consciously. Imagine the rich nature of that other 97% that lies just below the level of labeling in the mind. This is the realm of hunches and vibes and the even more subtle aspects of knowledge.


Returning to words, they only represent about 0.03% of all of the knowledge in the brain, with one bit of knowledge for each neuron representing a word. These quantifications and ratios are of course only gross approximations, but we have to start somewhere. We can work on more accurate estimates later. The point is, our 30,000-word vocabulary is informed by the same knowledge that cues about a million useful scripts of behavior; and the knowledge of these million-odd tricks is informed by another 86 billion more primal bits of knowledge. That would mean that for each abstract survival trick in our repertoire there are about 86,000 supporting neurons, but this average is meaningless, as the informing combinations are shared widely. 86,000 neurons do not only support that one trick. That would be left-brain thinking. Our right-mind can imagine a myriad of combinations. The scale of it can take your breath away, to the point that if you aren't careful you might suffocate. Now let's move in the opposite direction, decursively.
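If it helps to see the ratios the text relies on, here is the same back-of-envelope arithmetic in runnable form; all the inputs are the author's round estimates, nothing more.

```python
# Rough ratios from the author's round numbers: vocabulary vs. "tricks"
# vs. total neurons. Gross approximations only, as the text stresses.

vocabulary = 30_000            # words an average person knows
tricks     = 1_000_000         # estimated abstract responses at the nexus
neurons    = 86_000_000_000    # ~86 billion neurons

print(f"words per trick:   {vocabulary / tricks:.1%}")    # ~3%
print(f"neurons per trick: {neurons // tricks:,}")        # ~86,000 (an average only)
```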


These 30,000 personal words, like the 26 letters that are used to form them, are in turn a decursive alphabet of meaning once you string them together in sentences as I have done here. Each letter, each word, each sentence, and each paragraph enriches meaning as a script is delivered in a somewhat parallel and hierarchical fashion we call a serial narrative.


Over in the right-brain we have something similar happening but with visualizations. Together they make up our imagination fed by two competing and cooperating approaches. I’ll explain the Zen nature of that mix later. In any case, that’s how we come to know more than 30,000 things in an objective sense. How much of the other 49.985% is used in this way is hard to guess, so I won’t. But it’s likely a lot, perhaps still well short of a majority depending upon the topic at hand. 


The other half remains in darkness, mostly doing the things that make the magic of human discovery and behavior happen. I’m sure some of this knowledge will be more clearly characterized as we move forward with this model. The point is, we don’t know what most of the brain knows in an objective sense. We only get hints about some of it now and then. 86 billion (plus or minus a few billion) are a lot of bits of knowledge even considering virtually all of it is too primal for words.


And that's just one brain out of eight-odd billion living today, and the billions more humans and other critters that have ever lived to help build this body of sublingual knowledge we pass on culturally in the form of dance and other forms of expression to be mimicked by others. Most of this knowledge never rises even to the level of expression, let alone information. And then when you realize that all the information in all the data and in all the libraries in all the world for all of history is the smaller and more decursive end of this funnel, you can begin to appreciate how much knowledge is active in all the brains all around the world, and where your breath went. Sure, there's a lot of duplication, but each is also unique in how it's been brought together by each individual, such as me delivering this description right now.


Hopefully, this little exercise has enriched your understanding of how much knowledge goes into recognizing Jennifer Aniston. So far I've talked about the brain in phylogenetically formed layers of evolution from the brainstem up, out, and forward. If you don't get too technical, that pretty much takes care of three dimensions. And this perspective is great if you only want to understand the brain's history or how it shifts context on the fly, but it doesn't say much about the subtleties of meaning and how it's formed. Before we catch our breath completely, let's push on.



Sparse Signaling Forms a Worldmapculus


Another more general way of looking at the brain is how it interfaces with the world. Here I'll preview how knowledge generation might be used to model our world. Specific details will follow. Let's start at the sensor end. A homunculus is a mapping of the neural sensors from our body onto a modest section of the corti, on each side - actually two homunculi. This allows the brain to have direct contact with the world, but what about all the other areas of the brain's surface? The balance of the corti's area could be said to be modeling the world apart from our bodies.


Like a homunculus, a worldmapculus (to coin another term) could be said to manage the relationship between our bodies and the rest of the world which exists apart from our bodies in a similar fashion. Again, two worldmapculi would present two competing versions of this world. This would require inputs from our vision, hearing, and smell. Our skin represents about a million sensors. When you add in proprioceptors and taste you get a few million more, but by far vision with its tens of millions of sensors dominates sensing our reality, (and our imagination).


Considering our modern ability to collect data at the nano level, the number of possible data points just within an arm's reach is an extraordinarily large number. Even the modest tens of millions of visual sensors are a tiny subset of what can be detected. That is why our sensors are described as sparsely coded. These visual (and other remote) sensor points are mapped in our corti largely by proximity, similar to a homunculus, so the parallel concept is useful.


Now let's unbend our minds by a few orders of magnitude. The world is obviously far too large and complex to capture with any significant resolution, such as a 4K computer image. Our model has to be steeped in disproportionality. We'll need to use a map, but not a flat or linear map. We'll need one like the cover of that famous New Yorker issue:


View of the World from 9th Avenue 



Now for the pinscreen version. Ever take a pinscreen and hold it against your face to capture a 3D image of yourself as suggested above? That’s what we’re after, but it’s not just a visual image. Though low in resolution, because of sparse coding, it captures the essence of a person to the point of marginal recognition. Now think of those pins capturing not only visual depth but all the non-visual aspects of the world as well. Think of each nail as one bit of knowledge forming a meta image in depth with that depth as adjustable sensitivity for any given bit of knowledge. Now imagine such a disproportionate image forming a map of what’s important in your world, all ultimately in 3D.


Next, expand the idea to capture everything in our world that is significant at that moment - smell, sight, touch, taste, and hearing, to summarize in a very limited and Bizarro fashion. It's fairly easy to see that the parts of the world that are significant at any moment form a very dynamic and amazingly small subset of all possible sensing. That's where attention comes in. It allows us to very selectively and dynamically change what is significant at any given moment in the process of creating useful knowledge.


Words to list all aspects of knowledge fail us at this point. Or at least these words fail me, but after all, words too are only a sparse map. I only have about 30,000. I think of this as a 2D image because the actual 3D part is later derived, but our sensors form a topologically 2D surface representing an ultimately multidimensional construct. What does smell look like in this image? Exactly. Let's hope it's tasty.


We now have this imaginary surface containing the endpoints for each biological sensor on the human body. These points on the surface of our brain's corti are not beginning-points as a neural sensor is. Instead, they are where signals are delivered to form an abstract semiotic simulation of the world, which is why I call it a worldmapculus. Our homunculi are a subset of these worldmapculi because your body is part of that world, perhaps the most important part of that world, at least for each of us subjectively. This is why our modest homunculi still get a disproportionately large "view" of the world in each cortex. But let's get back to our sensor surface. It's where the fun begins. The worldmapculi will be used to describe signal-based simulation later.


We now have a wall of tens of millions of neuro sensors. Each sensor is a dot on this wall. These dots are like pixels in a digital image, but not as regular, not as orthogonal, not as linear. That's just how our cargo-culting Bizarro technologists might want to lay it out, but biology has its own agenda. Our brain takes a different approach. Mapping proximity is important, as is evidenced in our corti, but temporal mapping, as in synchronicity, may also be critical once we get a few neurons deep along any neural path.


Also, these few tens of millions of sensor neurons are only the first rank in a converging matrix of neural pathways. The output from each of these sensors (which are the very beginning of each neural pathway) will inform many of the next ranks of neurons in a divergent fashion, but not all. That would require each sensor to connect with every neuron in that next rank. (I use the word rank here instead of row; columns and rows only form a 2D surface, but the brain models a 3, or more, dimensional world, so "rank" is a more flexible term.)


Here’s a brief overview of convergence versus divergence in neural nets. Near the sensors, the net diverges as needed but as you hop from neuron to neuron, convergence creating abstract knowledge comes to dominate. Once the path reaches the nexus of a million tricks, serial scripts looping with the world in a dynamic dance yields behavior.
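As a rough illustration of that shape - fan-out near the sensors, then convergence toward the nexus - here is a toy sketch of a sparsely connected, layered net. The layer widths and fan-in are invented; only the overall shape echoes the description above.

```python
# A toy layered net whose widths fan out near the sensors and then
# converge toward a small "nexus" of abstract conclusions. The widths
# and fan-in are invented; only the shape matters here.

import random

def sparse_layered_net(widths=(1_000, 4_000, 2_000, 500, 50), fan_in=8, seed=1):
    """Return, for each rank after the sensors, a list of input indices feeding each unit."""
    random.seed(seed)
    layers = []
    for prev, cur in zip(widths, widths[1:]):
        # Each unit samples only a few inputs from the previous rank -
        # sparse connections, nothing like all-to-all.
        layers.append([random.sample(range(prev), k=min(fan_in, prev))
                       for _ in range(cur)])
    return layers

net = sparse_layered_net()
print([len(layer) for layer in net])   # units per rank after the sensors: diverge, then converge
```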


Now back to our pinscreen.


The reason each sensor does not connect to each neuron in the next rank is because the geometric expansion required to make such connections would make a Gordian Knot seem like child’s play. The brain’s connectome is complex enough already. That’s why each sensor only connects to the neurons that find significance in what that neuron comes to know in the world. These connections are dramatically fewer than all possible connections.


Such sparse coding is the very nature of maps. Only the significant points are included.
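To make "only the significant points are included" concrete, here is a minimal sketch of a sparse map: a dense grid of values reduced to a dictionary of only the points that cross a significance threshold. The grid, the values, and the threshold are all invented stand-ins.

```python
# A sparse map keeps only the significant points, like a map maker
# noting just the rivers and hills. The scene values and the
# "significance" threshold are invented stand-ins.

dense_scene = {(x, y): abs(x - y) % 7 for x in range(100) for y in range(100)}

THRESHOLD = 5  # hypothetical cutoff for "significant"
sparse_map = {point: value for point, value in dense_scene.items() if value > THRESHOLD}

print(f"dense points:  {len(dense_scene):,}")
print(f"kept as 'map': {len(sparse_map):,} ({len(sparse_map) / len(dense_scene):.0%})")
```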


Actually, we’re born with a far more complex connectome but most of it withers away within a couple of years during the process of discovering what is significant in the world. And what isn’t. This brings us to the conclusion of this section so I'll summarize.


Imagine yourself looking at this wall of neural sensors with its diverging and converging pathway behind the wall. Now turn around. You are looking out into the world. Watch what happens out there. Some things in that world catch your eye (or other sensors) more than other things. It tends to be things that move or change in some significant way. Our brain is fed by sensors that seek out the most significant changes in the world and analogically encode them as sensitivity adjustments in a sparse map of that world. In a crude fashion, yet still rich in content, our brain is a map of sparsely-decoded significance that is constantly changing. What connects to what and exactly how at each synapse is what encodes our experience of this world. So our brain is a reflection of this world, but only the most significant parts are captured and simulated - thus an extraordinarily efficient and elegant survival system.


Paper and now electronic maps are literally a decursive version of our neural connectome.


I’ve gotten a bit far afield for a simple brain model, but I think I’ll leave this in for now, and even enrich this topic later on. Or someone else may enrich it later? At least if this model has utility. It works for me.




Knowledge Dances With the World to Form Cues and Scripts


“All the world’s a stage” - from “As You Like It,” by William Shakespeare


Even more useful than thinking of neuronal knowledge as gestating words is to think of the firing of a neuron as a theatrical cue ultimately driving a script of muscle movement, which can also be thought of as a theatrical script interacting with that world. With the other actors and the props to play off of, life becomes improvisation forming a story or narrative.


Indeed, thinking of the world as a stage, and my body as an actor on that stage created a little model of everything - me as a homunculus experiencing that world. This model can yield substantial insight. At least it did for me. It even demanded the concept of a worldmapculus in which our homunculus can perform. But I'm distracted by the macro version. Let’s get back to basics.


Early primal neurons needed the actual world in order to hone knowledge from experience. As noted above, this happens in a loop with the world:


World - sense - decide - signal - converge - cue - script - movement - world...


Rinse and repeat, maybe thousands of times. Or a million. This is the dance that neurons do with the world. This operating model actually works quite well for very primal creatures trying to stay alive (examples to follow). But this method is not only very expensive to the creatures in question, it is also very slow and tedious to evolve. A given neuron might have to wait a week for another external event to recur, or even much longer in some cases. This method is not very time or energy efficient. But at least it got the process started.
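For readers who like to see a loop as a loop, here is a toy rendering of the world - sense - decide - signal - converge - cue - script - movement cycle. Every function is a hypothetical placeholder, and the "world" is just a number nudged toward a target; it is a sketch of the dance, not a model of a neuron.

```python
# The world - sense - decide - signal - converge - cue - script - movement
# loop, written as a toy event loop. Every function is a placeholder; the
# "world" is just a number being nudged toward a target.

def sense(world):            return world                    # detect the world
def decide(sensed, target):  return target - sensed          # crude "knowledge": the gap
def converge(signal):        return max(-1, min(1, signal))  # squash the signal into a cue
def script(cue):             return 0.5 * cue                # a small movement to perform

def dance(world=0.0, target=10.0, iterations=40):
    for _ in range(iterations):
        cue      = converge(decide(sense(world), target))
        movement = script(cue)
        world   += movement        # movement affects the world...
    return world                   # ...which is sensed again next time around

print(round(dance(), 2))   # the loop hones behavior until the world reflects the "knowledge"
```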


Fortunately, evolution found a way to speed up evolution using a chemical-based semiotic simulation forming a worldmapculus of knowledge as described above. We feel this chemistry as emotion, where both the feelings and emotions are ethereal knowledge. The pain and pleasure you experience do not exist in the material world. They only exist within your skull, as is true for every bit of ethereal knowledge you "experience". This is called your mind and it's a simulation of a hypothetical world approximating the real one. The quality of this simulation is a reflection of your imagination and how you manage it.


Like most neuronal knowledge, our mind is mostly subcognitive. Only a very small percentage can be manipulated as consciousness. But again, we’re getting ahead of ourselves. I only leave these paragraphs in to let you know that this model explains far more than just primal response or behavior. But for now, let’s simplify.


Think of converging neural-nets of knowledge as theatrical cues, and what we do with those cues, as scripts of muscle movement we call behavior. Cooperating and competing cues and scripts explain a great deal about how we function in the world. Let’s explore the concept a bit. We can hone it into higher-level forms later.


To summarize, before a word is ever written (or even spoken), it exists only as ethereal knowledge in someone’s mind. But long before that knowledge has the consistency to be useful as information, it dances with the world in a loop of development by applying a script of movement while sensing the result as a cue. 


Actually, knowledge and the dance that helps create it both come into existence at the very same time. Knowledge in the form of a cue from the world, and dance in the form of a script of muscle movement meant to affect that world.


You may reasonably argue that most neuronal firing does not induce movement as behavior and you’d be correct. Much of the time it only primes subscribing neurons. But if movement is not ultimately part of this loop with the world, then knowledge can not be refined effectively. Such knowledge would not be grounded in reality. Knowledge needs a place to play as we are doing with this model of the brain but it also needs to be anchored in reality to be useful for survival and replication. For now, think of our interaction with the world as a collection of cooperating and competing cues and scripts. Details to follow.



What Does Any Given Neuron Know?


I’ll ask the question again because that’s what I did for years on end. It was one of the first questions that occurred to me after accepting the premise that each neuron knows something at the instant that it fires. This question could also be presented as, "how is knowledge encoded in the brain?" It took me quite some time to realize that I didn’t know the answer, and that I might never know it.


But that neuron does "know" it.


Actually, what any given neuron knows is far less mystical than the above might suggest. What a neuron knows and what it means to the world is a type of analogical function (better described as a mathematical relationship) of where it connects (which neurons) and how it connects (the number and types of synapses). The difference between a function and a relationship is one of agency. Functions are determinant. Relationships, less so. Agency goes to the heart of the neuron and is socially expressed as consent. This is why the neuron knows, yet we may not.


Neurons "encode" relationships to form knowledge. These relationships in the world are expressed as connections in the brain. Each is literally a mathematical relationship (as opposed to a function) describing which sensors or other neurons inform that neuron's dendrites and how they connect - as in how many synapses, and the ratio of activating versus inhibiting connections on the input side. But it doesn't end there. The second half of any given formula would need to take into account the output of the neuron in question to the next neurons in turn along a given neural pathway, until it causes a muscle to be moved, or other chemistry to be deployed, or at the very least the degree to which other neurons are primed to fire. Then there are these same factors for all the inputs that have honed this neuron's sensitivity.


Before you start calculating - yes, there may be an opportunity here to create a formalism, and even a formula, to express the creation of knowledge. But it would need to include all the neurons involved in all of the previous loops through the skull, as well as those that might be affected in the future - a virtually infinite collection, with meaning occupying the other side of the equal sign. So of course I'm going to hold off presenting any attempt at such formalism (better described as a fool's errand) in favor of just playing with the idea. In any case, this is how neurons "encode" actual relationships into knowledge, at least in rough form. Fortunately, we don't need formalism for our right-mind to grasp this idea.


In any case, that’s the conceptual key to understanding the gnostic neuron, and any such formalism should honor the sovereignty of that neuron. It’s not just about how a neuron comes to fire when it detects what YOU may think is knowledge. It’s that the neuron fires when it detects what IT thinks is knowledge, and that can be a wholly different collection. Of cues. And scripts.


In other words, what any given neuron knows at the moment of firing is subjective for that neuron, and ephemeral, so it may be different in the next moment. Knowledge is not objective nor determinant. That only happens outside the skull. Approximately. Sometimes. Or something like that.


Let's say that by the time you were five years old you came to know your grandmother well, but I've never met her. I don't know what you know. You've come to know the sound of her car, the careful way she opens the screen door, and the type of cookies she likes to bake. You may not know the actual make of her old car (what five-year-old does?), why the screen door makes that special noise when she lets it close, or what's in her cookies. But you do know what those cookies smell like even if you can't describe the odor. And you do know the creak of the screen door when she arrives and that she doesn't allow the screen door to slam when she comes in. You will be able to re-cognize her by these three elements of knowledge the next time she shows up. But I can't. I don't know those things. What you know about the world is different from what I know about the world. And what each of us knows about your grandmother.


For you tech types, and from the perspective of any neural pathway, what a neuron knows is a relationship that can be described as a type of analog function of that neuron, the other neurons that cued its firing, and the scripts of still other neurons that it might cue. What any given neuron knows becomes a function of the knowledge accumulated as the signal moves along its neural pathway combined with other knowledge from other neural pathways. And the neuronal accumulation of knowledge can go from primal to sophisticated in surprisingly few jumps. If you remain frustrated because of the lack of formalism, I have something even better. I'll describe how this "analog function" actually works in due course. Stay tuned.
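For the tech types, here is one way to make the "ratio of activation to inhibition" idea concrete: a toy unit that fires when its activating inputs sufficiently outweigh its inhibiting ones. This is explicitly not the author's formalism - just a familiar stand-in, with invented inputs borrowed from the grandmother example above.

```python
# A toy "gnostic" unit: it fires when its activating inputs sufficiently
# outweigh its inhibiting ones. Not the author's formalism - just a
# familiar stand-in to make the ratio idea concrete. The connection
# lists and threshold are invented.

def neuron_fires(activating, inhibiting, threshold=0.6):
    """activating/inhibiting: lists of 0/1 signals from upstream neurons."""
    drive = sum(activating) - sum(inhibiting)
    total = len(activating) + len(inhibiting)
    return total > 0 and (drive / total) > threshold

# "Grandmother" cued by the creaking screen door, the cookie smell, and the
# familiar engine, with one inhibiting input (it's a weekday; she rarely visits).
cues       = [1, 1, 1]
inhibitors = [0]
print(neuron_fires(cues, inhibitors))   # True: this neuron "knows" she's here
```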


For now, I’ll provide an algorithm as a tease. Say you have a town with almost a thousand people and you want to find a specific one. You don’t have their name or address but you know how tall they are. You line them up by height then move down the line until you match the height exactly. You have found your objective. This is a process and interaction with the world known as an access method that ultimately yields knowledge.
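Taken literally, that access method is just sort-then-scan. A minimal sketch, with invented names and heights:

```python
# The "access method" above, taken literally: line the town up by height,
# then walk down the line to the first exact match. The people and their
# heights are invented for the example.

import random

random.seed(42)
town = [("person_%03d" % i, round(random.uniform(150.0, 200.0), 1)) for i in range(999)]

def find_by_height(town, target_cm):
    lined_up = sorted(town, key=lambda person: person[1])   # line them up by height
    for name, height in lined_up:                           # move down the line
        if height == target_cm:
            return name                                      # objective found
    return None

target = town[123][1]            # a height we know exists in the town
print(find_by_height(town, target))
```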


This evolving knowledge makes its way from sensor to muscle and then encounters the world where it is expressed as movement. That behavior may then affect the world in a way that can be sensed and refined during its next time through the loop. The world is wild. Knowledge is the tracking variable. Or something like that.


I realize this answer may not be very satisfying to everyone, especially the more technical among you. But the answer holds great utility. It has nicely explained so many paradoxes flowing from interneuron communication that I can’t even begin to count. And it has literally and decursively done the same for thousands of behavioral interactions I’ve analyzed in humans over the last two decades.


Here’s another way of thinking about this enigma. Imagine that you have a neuron in front of you that knows whether Schrodinger’s cat is alive or dead in that box. At the very instant that it fires, it knows that answer, but you don’t (and perhaps for the same reason as the cat, but I won’t take you down that rat-hole right now). Do you see how what that neuron knows is not only hidden but indeed unknowable? Well, at least unknowable if you do not know why that neuron’s firing may have been successfully cued. 


Fortunately, most neurons do not have to know something as unknowable as the fate of Schrödinger’s cat, which lives on the very dangerous edge of an equally probable binary outcome. In most cases, what a neuron knows is not at the asymptotic limit of what can be known. For most neurons, most of the time, what that neuron knows can be guessed quite successfully, but not absolutely. The answer technically ranges from almost determinate to almost random, but is absolutely neither.


If the above paragraph challenges your sensibilities, relax. It’s only a theory. Most of the time, neurons are very practical in what they know. I will provide many examples shortly, but I had to include this section for those who need to apply technology to the above question. The rest of us now get to play with the answers.



Waves of Knowledge, Vibes, and Hunches…


I could now go into the nano details of creating knowledge using interneuron communication, but we’d risk losing sight of the macro consequences of neuron-created knowledge, which are actually more important than the details. The holistic understanding holds more value than the proof, which will come in time.


To be clear, when I made this conceptual leap about seven years ago, I spent a great deal of time at the nano level and then began to observe the decursive similarities between that work and what Iain McGilchrist had written at the macro level of the Divided Brain, as I’ve noted. This led me to explore the micro-context middle ground where words are given meaning, even before they were words. This is the realm of vibes and hunches. Understanding how words might be formed from chemical vibes began to force a revision of my analogical analysis.


These similar ideas in each contie led me up and down the axis of evolutionary sophistication using my new magic vehicle - decursion. In the process I came to realize that at each level there was not a single solution to each problem - there were many, and one need not preclude another. This is where the multifaceted aspect of the brain became reinforced, as I explored how it might be managed.


This particular path of understanding also had the side benefit of explaining the nature of words in general, and how grammar is actually a post-analysis and not the genesis of language. Our left-brain has it backward. Our right-mind delivers concepts for our left-brain to manage using words, as a clerk might do. And again, words led me back to human behavior and neuron interaction. The process continued over and over, gaining conviction with each iteration.


This conceptual migration in context, from nano to micro to macro, had other interesting side effects. I spent so much time with neuro connection, chemistry, and emotion that it began to affect how I came to see the macro world. Soon, the two limits of the contie were competing for my attention at the same time, changing my thinking daily. In the beginning, it was my more innocent and normal way of dealing with the world, often without thinking. Then I began to take a moment to consider what would happen if my neurons really were informing my behavior using this crazy concept. The details were enlightening!


For instance, a friend might ask me what I wanted for dinner and I would try to reach beyond my usual habits to offer a new and creative suggestion. For a creature of habit like me, being creative was no easy task, and I would relax my left-brain to reach into my right-mind for a fresh answer, all the time imagining each inhibiting the other as neurons do in a similar situation. I would then quite often stare into space for minutes at a time, seeking new vibes and hunches to drive culinary delight. You might think of this as mindfulness, but for me, it was far more. Soon I was applying the idea to those around me and actually spending more time with the gnostic model than with my normal way of dealing with the world. This constant dual exercise has changed my life in many subtle but significant ways.


The best way I can describe this is as waves of knowledge washing over me, which became one of my more useful metaphors for brain operation. I see waves of neuronal knowledge from millions of biological sensors forming cues that converge and cascade across my brain in various ways to reach conclusions expressed as muscle scripts.


As I’ve mentioned before, another way to visualize this is how the character “Mouse” in the original Matrix movie would look at green letters dropping down the screen and see beautiful women challenging those around them. For me, these mind games became so much fun that this neo-gnostic perspective almost didn't get documented in the form you are now reading. It still dominates how I see and deal with the world. Your mileage may vary.


Speaking of which, you too now have all the hints needed to achieve this neo-gnostic perspective. I’ve considered this in some detail. Even if you don’t know how neurons create knowledge, just knowing that they might should yield a completely new worldview, largely informed by how we use what we know in relationships with others. OK, I’ll give you a few more hints to sum things up.


As I’ve suggested, we have millions of neural sensors, each of which is monitoring the world. They don’t fire until they find significant change, which they then use to create knowledge reflecting the relationships between things and ourselves. This is done using cooperation and competition in a yin-yang fashion. The resulting knowledge is then delivered to the next tier of interneurons in various ways with various weights, as selected by the next neuron’s synapses with their ratios of activation and inhibition. Sovereignty, decision, and control ultimately lie in the ionic area between the dendrite and the hillock of each neuron. These neural signals from millions of sensors tend to converge with each hop as the knowledge they generate becomes more abstract. As noted, this happens much as the series of "V" layers in the brain’s visual system is conventionally described - from pixels to lines, to ovals, to mouths, to eyes, to faces. If you’re not familiar with this convergence of abstraction, think about how a journalist collects "facts" to build a story that is fully summarized when published. It’s similar.
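For the tech types once more, here is a toy Python interneuron to make that hand-off concrete: weighted cues arrive from upstream, excitation and inhibition are summed, and the "hillock" fires when the balance clears a threshold. The weights, the threshold, and the grandmother cues are illustrative assumptions, not a claim about real synaptic arithmetic.

```python
# A toy interneuron: upstream cues carry weights (negative = inhibition),
# and the decision at the "hillock" is whether the balance clears a threshold.
# All numbers and names here are illustrative assumptions.

def interneuron(cues, weights, threshold=1.0):
    """Return True (fire) when weighted excitation minus inhibition clears the threshold."""
    drive = sum(weights[name] for name, active in cues.items() if active)
    return drive >= threshold  # the decision made at the hillock

# Upstream cues for "grandmother has arrived"; negative weight = inhibition.
weights = {"car_sound": 0.6, "screen_door_creak": 0.5, "cookie_smell": 0.4, "stranger_voice": -0.8}
cues = {"car_sound": True, "screen_door_creak": True, "cookie_smell": False, "stranger_voice": False}
print(interneuron(cues, weights))  # True: enough converging cues to fire
```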


A journalist first collects facts, hopefully in an objective fashion. These facts (hyper-knowledge) begin to yield a theory, which he may then test by seeking more knowledge about the events in question. Once he has a meaningful thesis, he expresses it as a written narrative and publishes it, just as I have done. Neurons do something similar but in a more primal way. Both examples create new knowledge in their own context.


At the same time, copies of this knowledge diverge out to other layers that may be able to use it in some way, just as you might subscribe to a blog. A downstream neuron can then subscribe to an upstream neuron to acquire its knowledge. The result is a wave of knowledge starting at the sensors and narrowing down to a nexus of about a million neurons near the lower center of the brain. At least, this architecture largely describes our lizard brain. I will provide details about the even more complex layers later on. For now, I need to continue this decursive progression to sum up this post.
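The blog analogy maps neatly onto a publish/subscribe pattern, so here is a bare-bones Python sketch of that divergence. The class name, the knowledge string, and the two downstream scripts are illustrative assumptions only.

```python
# A bare-bones publish/subscribe sketch of the divergence described above:
# a downstream neuron "subscribes" to an upstream neuron and receives a copy
# of its knowledge whenever the upstream one fires. Purely illustrative.

class UpstreamNeuron:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, downstream):
        self.subscribers.append(downstream)

    def fire(self, knowledge):
        for downstream in self.subscribers:  # copies diverge to every subscriber
            downstream(knowledge)

grandmother_detector = UpstreamNeuron()
grandmother_detector.subscribe(lambda k: print("smile script cued by:", k))
grandmother_detector.subscribe(lambda k: print("run-to-door script cued by:", k))
grandmother_detector.fire("grandmother has arrived")
```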



…Inform Words, Emoticons, and Memes…


“~ All cultural formation in our time is now the development and propagation of memes that battle their way through a supply chain in cyberspace.

Most die; some thrive.

The memes that make it through encode deep meanings.

This is as serious a process as has ever existed.” - from the Twitter feed of Marc Andreessen @pmarca


If you missed the context, Marc is describing the very essence of evolution, but not as Darwin described it. Instead, it decursively takes the form of what happens in neurons, and the brain in general to create words, emoticons, and memes. It’s also how the brain’s ability to evolve has escaped our collective skulls to find form in our culture. The “deep meanings” Marc refers to are literally knowledge. And sometimes information.


If you’re having trouble with this idea that neurons create knowledge, consider their decursive expression again - words. It may clarify things for you. Think of words where each underlying letter is informed by other knowledge, creating an emergent abstraction to cue other, more highly evolved neurons that have become social in nature. Words expressed as sounds (or written symbols) are just the more abstract neurons that cue specific experiences that then find form outside of the skull. Every time you hear a word, a specific neuron is firing, with hundreds or thousands of other neurons having fired in the milliseconds before to support, refine, and invoke this abstraction.


Sure, you can think of that same word and most of those same neurons will fire, but until you publicly express the word, nothing has moved and that word doesn’t matter, at least not socially. Nothing matters until something moves. And once you express the word, it may cue other neurons in others' brains, and you can’t take it back - “the moving finger having writ”. Everything matters once something moves:


“The moving finger writes; and, having writ,

Moves on: nor all thy piety nor wit

Shall lure it back to cancel half a line,

Nor all thy tears wash out a word of it.”

- from The Rubaiyat of Omar Khayyam


(This is the first time in my life that I’ve used the same quote three times in the same work. I suspect it’s significant. Or maybe I just like the poem.)


Of course, words make up language when used in sequence, not unlike scripts of muscle movement. Decursively. But words are just one of evolution’s tricks that have escaped the skull to find a place in our culture. Emoticons are a more modern version, and memes are the reason, as Richard Dawkins has so nicely presented.


Sometimes it helps to work with these ideas in creatures that are sub-human. It allows us to be more objective. (As he noted in his work, it allowed Iain McGilchrist to review far more data about the divided brain.) For instance, many creatures have language even more subtle than words, emoticons, and memes. Each example can tell you a great deal about the range, resolution, degree of abstraction, and architecture of that creature’s brain. Perhaps not surprisingly for many of these creatures, you may have to go a long way back to find a common ancestor, unless, of course, it’s independent evolution.


If it’s not independent evolution, then it will give you some idea of how old that particular trick is in our human evolutionary past. For instance, insects and birds have various forms of dance and language to communicate things about the world to other members of their group. How old are those common ancestors? Proto versions of language may literally be that old, or else they were independently developed multiple times. I believe the former is more likely. Just like neurons creating knowledge, words as we now know them likely evolved much later, while the various forms of dance likely evolved far earlier than formal language and remain quite similar in their current form today. Just watch a bee dancing. I suspect more advanced protozoa dance in a similar fashion. So did Michael Jackson.


A nod or a wink is enough of a cue to invoke a script of muscle movement as long as the context is shared (required for information transfer). For instance, there are jellyfish that have been documented to socially communicate using bioluminescence in a dance of lights. Or even more primally, herd movements go all the way back to bacteria for similar reasons. We’re talking a billion years in some cases. And of course, a more familiar sound-based language is more common in closer relatives such as primates. Writing is a quite recent decursive human version of such "dance".



…To Organize Individuals, Tribes, and Nations With Conviction


These ideas about evolution evolving a new way to evolve are not limited to the skull or any individual. They also find form in how we interact with each other, the differences in conviction, and how we organize in larger groups. Decursively. If you want to understand what’s in your skull and how it’s organized, simply look around at how humans organize as a tribe or nation. And vice versa; at least once you get a feel for how cues and scripts work, human behavior begins to make a lot more sense. 


For the best example, I’ll again refer you to the “Divided Brain” and its descriptions of how each side of the brain sees and deals with the world differently. Similar generalizations are useful when comparing politics and religion. I’m not suggesting that you judge any of these groups, and that’s the point. Each has its own biases, triggers, and beliefs. Each has its own truths. One is not better or worse than another in any absolute sense, though it may seem like it at the time, subjectively, with conviction. The challenge is to step outside our minds when considering such issues. Your left-brain may be of help in this exercise. 


In any family or tribe, there will be different approaches, agendas, priorities, and triggers for each member. Desmond Morris's work is helpful at this point. Depending upon pecking order, these approaches will cooperate and compete to hopefully find the best form, but often they won’t. There is no absolute best or worst answer. There are just scripts that yield behavior, in this case of a group.


For instance, how can a man murder his children, then himself? Yet if he gets stopped before he dies, he will likely become deeply remorseful only minutes later. Or not? Some part of this man knows something we don't. Something not shared with the more remorseful facet of his brain.


The point is that the brain and mind are definitely multifaceted, not just physically, but also operationally. They both have cooperating and competing facets. It’s just a matter of understanding which facet is controlling the body (or major parts of it) at any given moment.


Such multifaceted operation also happens within the skull as one survival solution or mating strategy is sometimes better than another, each competing and cooperating in various ways. I realize that these are very broad generalizations, especially when applied to the mystery of the brain, but isn’t that the very thing we need to break the logjam of data about neuroscience?


What each of us comes to know about our world, politics, and religion is ultimately a function of nanochemistry and connection, both of which are constantly changing. The most significant knowledge tends to get reinforced more often (connection) and more dramatically (chemistry), so we come to know that thing with more faith and conviction. An example is this very paragraph, which occurred to me in a dream, got me out of bed, and drove me to my computer keyboard. Again.



It Is Written…


“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” - Misattributed to Mark Twain


This quote was used at the beginning of the movie "The Big Short". I appreciated it at the time, as it was already in my notes to be included here for reasons other than its impact on the financial markets. I already knew the misattribution was an excellent example of cultural knowledge because of the ironic error in its attribution:


“Quote Investigator: Scholars at the Center for Mark Twain Studies of Elmira College have found no substantive evidence supporting the ascription to Mark Twain.”


It brings me such joy to have an opportunity to quote "Quote Investigator." This conclusion neatly demonstrates both the fallibility of knowledge and its decursive nature. Even the screenwriters of "The Big Short" can fail in their attribution. If Mark DID utter this lament and it only got repeated orally, it can now not be properly attributed to him, because it was not important enough at the time to be written down and kept for the internet to discover - technically. Hold on; we’re going down a rabbit hole.


Somewhere in Dr. McGilchrist’s YouTube lectures, he provides an example of how our left-brain has come to dominate our right-mind, in spite of the obvious. Iain tells a story about a doctor who came to the morgue to verify some aspect of a dead patient’s treatment, only to find that the patient still had a pulse. The doctor turned to the nurse and proclaimed, “This man is still alive!”


The nurse retorted succinctly, “He can’t be alive. It says right here on the clipboard that his time of death was more than three hours ago!”


Iain McGilchrist used this story to demonstrate how our left-brain (with its superpower, denial) can effectively deny reality, no matter the evidence to the contrary. Apologies to Iain if I got any of this wrong. It’s what I re-member. It’s what I know at this moment. Now I’ll carefully back out of this particular rabbit hole.



So, What Do YOU Know?


This is getting to be a habit, but again I’ll quote:


“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” - Misattributed to Mark Twain


Whoever the source, and with this caveat in mind, I’ll ask the above question again in the hope that, after you’ve read this post, you’ll seriously consider this final question:


So, what do YOU really know?


And do you know it for sure? Or do you agree with the nature of knowledge as I have described it above? I know for sure that it is. But is it so? I know for sure that neurons create knowledge, but recognizing that I too am quite fallible, and even if I’m wrong about the neuron, let's consider the ideas in a macro context.


It could be said that "neurons creating knowledge" is just a different way of looking at all the data, and I'd agree with that assessment, because that is the point - a fresh perspective on the data is the key to a better understanding. Seeing neurons as knowledge creators has dramatic consequences for how we might model the brain and human behavior. I will explore many of these going forward. For the moment, let's consider some low-hanging fruit.


Human behavior is clearly multifaceted. And it can switch from one solution to another faster than the blink of an eye. Is it not useful to think of behavior as a collection of cooperating and competing cues and scripts played out on this stage we call the world? And also in our skull as a worldmapculus? Is knowledge in the macro context not simply a trick of evolution in an attempt to seek a new, quicker, and less deadly way to evolve?


Is it not more useful to think of knowledge as ethereal as opposed to real or physical? Compare it to music contrasted with the instrument being played. Or a story contrasted with its medium of expression whether paper and ink or pixels on a screen. These are all forms of abstracted knowledge.


Yes, I recognize the irony: what I'm presenting about knowledge and the neuron may be entirely wrong, at least logically, but I'm willing to risk it if it provides a more useful perspective for addressing this difficult challenge. Which brings us to YOU.


What you know about the topic may be more useful than what I know. If you've read this far, you've definitely thought about the nature of the neuron to some degree. If you have different answers to some of the questions I've presented, please share them with me, in private if you like. I read all non-spam emails.



Shifting Gears


It's time to move from the "what" to the "how" of knowledge creation. If we accept that neurons create knowledge, how do they accomplish this amazing trick? Most of the rest of what I'm presenting will be about answering this enigmatic question. The "how" is the proof in the pudding.


Trying to describe the concept that neurons and the brain create knowledge has been one of the most difficult challenges I’ve ever undertaken. For me, the hard part has been not slipping back into tech metaphors excessively. Perhaps I’ve written the same thing too many ways, or with too sparse a vocabulary. I’ll blame this on the limits of my right-mind, which has to draw attention to these ideas so my left-brain can capture them in written form.


Before we proceed, I'd also like to generalize about the ultimate consequences of this prime assertion, mostly to acknowledge that they are myriad, profound, and difficult to overstate. Thinking of neurons as gnostic will dramatically change anything having to do with knowledge, which is to say, virtually everything from science to art and understanding human behavior. But the differences may be subtle, as my right-mind, and likely yours, already knew that neurons create knowledge. Our right-mind just had to convince our left-brain. I hope my words written here help you along your path to whatever conclusion you reach.


Let's just summarize the potential impact as a completely new form of applied philosophy - Neognosticism, in contrast with original Gnosticism. I could write thousands of pages on this topic alone, but others could do a much better job once this concept is well understood, so I won't. At least not right now.


Instead, I will continue to edit and hopefully improve this current presentation to extend the scope into how neurons might accomplish their amazing trick of evolution, and try to do so without getting too technical. I shall rely on a series of stories about the evolution of a number of creatures that still haunt our brains in various ways. And so this section concludes. Next, we’ll explore how these many creatures helped inform who we are as humans.



Continued:

