ERIC’S TECH TALK: Why humans and AI might be more alike than we care to admit

by Eric W. Austin

There’s something unsettling about talking to a large language model. At first, the responses feel remarkably human – thoughtful, creative, even insightful. But then you remember how it works: pattern matching, statistical prediction, probability distributions. Suddenly, what felt like intelligence transforms into something mechanical, almost fraudulent. It’s just a very sophisticated autocomplete, we tell ourselves. It’s not really thinking.

But what if this discomfort reveals more about us than about artificial intelligence? What if the moment we peer behind the curtain and see the gears turning, we’re confronting an uncomfortable truth about the nature of intelligence itself – including our own?

Large language models operate on a deceptively simple principle: given the text so far, they predict the most likely next word, or token. Feed them a prompt, and they generate a response one token at a time, sampling from probability distributions learned from patterns in vast datasets. No consciousness required, no inner experience necessary – just mathematical operations performed at staggering scale and speed.
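To make that concrete, here is a toy sketch in Python of the loop described above. The tiny lookup table standing in for a "model" is invented purely for illustration (a real system learns its probabilities over tens of thousands of possible tokens from billions of examples), but the shape of the process is the same: look at the context, get a distribution over what might come next, pick something, repeat.

    import random

    # A hand-built stand-in for a trained model: each context maps to a
    # probability distribution over possible next tokens. (Invented example.)
    toy_model = {
        "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "moon": 0.1},
        "once upon a": {"time": 0.95, "mattress": 0.05},
    }

    def next_token(context):
        # Look up the distribution for this context (a real model computes it),
        # then sample one token according to its probability.
        distribution = toy_model.get(context, {"...": 1.0})
        tokens = list(distribution.keys())
        weights = list(distribution.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(next_token("the cat sat on the"))  # usually "mat", occasionally "sofa" or "moon"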

This feels reductive, almost insulting to our sense of what intelligence should be. Intelligence, we insist, requires something more – understanding, awareness, genuine comprehension. The machine is just manipulating symbols without meaning, following rules without insight. It’s the classic Chinese Room argument, updated for the age of neural networks.

Yet consider how human cognition actually operates. Our thoughts don’t emerge from some ethereal realm of pure consciousness. They arise from electrochemical processes in the brain – neurons firing, neurotransmitters binding to receptors, neural networks activating in response to inputs. Our decisions, our creativity, our very sense of self emerge from biological mechanisms that are, at their core, just as mechanical as any computer algorithm.

The uncomfortable parallel runs deeper. Just as a language model’s output depends on its training data and current context, human behavior emerges from our genetic starting conditions and accumulated experiences. Change the inputs – alter someone’s environment, their social context, the information they receive – and you change their output: their thoughts, decisions, and actions.

We’ve known this for centuries, even if we haven’t always framed it in computational terms. What is propaganda but a systematic attempt to manipulate human output by controlling inputs? The techniques are ancient, but the underlying logic is precisely what drives modern AI systems: adjust the context to influence the probability distribution of responses.

Consider Edward Bernays, the father of public relations, who in the 1920s pioneered methods of mass persuasion that would make any algorithm designer proud. Bernays understood that human behavior could be predicted and manipulated by carefully crafting the informational environment. He called it “the engineering of consent” – a remarkably mechanistic view of human psychology that treated people as input-output systems whose responses could be reliably programmed.

The Nazis took these insights to their logical extreme, creating a propaganda apparatus that demonstrated just how malleable human cognition really is. Ordinary German citizens – teachers, doctors, shopkeepers – were transformed into enthusiastic supporters of genocide through systematic manipulation of their informational inputs. The mechanism was disturbingly simple: flood the environment with specific patterns of information, repeat them relentlessly, and watch as the probability distributions of human responses shift accordingly.

This isn’t ancient history. Modern social media algorithms employ the same basic principle, though typically for commercial rather than political ends. They analyze user behavior, predict engagement probabilities, and curate content designed to maximize specific responses. The result is a kind of real-time behavioral programming, where human actions become increasingly predictable based on algorithmic manipulation of context.
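A hypothetical, stripped-down version of such a ranker might look like the sketch below. The signals and weights here are made up for illustration; real systems learn them from user behavior at enormous scale. But the logic is the same: predict what each person is most likely to engage with, and put it at the top of the feed.

    # Hypothetical feed ranker: score each candidate post by a crude predicted
    # engagement probability, then show the highest-scoring posts first.
    def engagement_score(post, user):
        score = 0.0
        score += 0.5 * post["past_click_rate"]                 # how often similar posts get clicked
        score += 0.3 * user["affinity"].get(post["topic"], 0)  # how much this user likes the topic
        score += 0.2 * post["outrage_signal"]                  # provocative content tends to travel
        return score

    def rank_feed(posts, user):
        # Sort the candidate posts so the most "engaging" prediction comes first.
        return sorted(posts, key=lambda p: engagement_score(p, user), reverse=True)

    user = {"affinity": {"politics": 0.9, "gardening": 0.1}}
    posts = [
        {"topic": "gardening", "past_click_rate": 0.2, "outrage_signal": 0.0},
        {"topic": "politics", "past_click_rate": 0.4, "outrage_signal": 0.8},
    ]
    print(rank_feed(posts, user)[0]["topic"])  # the politics post wins the top slot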

But here’s where the story gets truly strange. Unlike the AI systems we build, humans possess something that dramatically complicates the picture: self-awareness. We’re not just prediction engines; we’re prediction engines capable of observing our own operation. We can examine our thoughts, question our motivations, and sometimes even resist our programmed responses.

This observer capacity creates a peculiar phenomenon. When I interact with a language model, it feels intelligent until I remember how it works. The moment I become aware of the mechanical processes underneath, the illusion of consciousness dissolves. But what if the same thing happened with human intelligence? What if perfect transparency into our own cognitive mechanisms would similarly dissolve our sense of conscious agency?

The neuroscientist Benjamin Libet discovered something unsettling in his famous experiments on consciousness and free will. He found that brain activity indicating a decision begins several hundred milliseconds before people report being aware of their intention to act. In other words, the brain “decides” before the conscious mind knows about it. Our sense of making conscious choices appears to be largely illusory – we’re becoming aware of decisions that have already been made by unconscious processes.

Recent neuroscience has pushed this insight even further. The brain seems to operate more like a sophisticated prediction engine, constantly generating models of the world and updating them based on sensory input. What we experience as consciousness might be nothing more than the brain’s real-time narrative about its own operations – a story we tell ourselves about processes that are fundamentally mechanical.
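Reduced to a cartoon, that “prediction engine” looks something like the few lines below: hold an internal estimate, compare it against what the senses actually deliver, and nudge the estimate by a fraction of the prediction error. The numbers are arbitrary and the brain is vastly more elaborate, but the point stands that “updating a model of the world” can be described this mechanically.

    # Cartoon of a prediction-and-update loop: the internal estimate is pulled
    # toward each new observation in proportion to how wrong the prediction was.
    def update(estimate, observation, learning_rate=0.2):
        prediction_error = observation - estimate
        return estimate + learning_rate * prediction_error

    estimate = 0.0
    for observation in [1.0, 1.0, 1.0, 0.0, 1.0]:
        estimate = update(estimate, observation)
        print(round(estimate, 3))  # the internal model drifts toward what the world keeps delivering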

This creates a profound philosophical puzzle. If human intelligence emerges from mechanical processes operating on context-dependent probability distributions, then what exactly distinguishes us from AI systems? The answer might be less comfortable than we’d like to admit: perhaps very little.

The difference isn’t in the fundamental nature of our intelligence, but in our relationship to it. We’re embedded within our own cognitive systems in a way that makes their mechanical nature invisible to us. When you’re thinking, you don’t experience neural firing patterns or probability calculations – you experience thoughts, feelings, and intentions. The machinery of mind operates below the threshold of consciousness, creating an illusion of something more than mechanism.

AI systems, by contrast, are transparent to us. We built them, we understand their architecture, and we can observe their operations from the outside. This external perspective strips away any mystique and reveals the algorithms for what they are: sophisticated but ultimately mechanical processes.

But imagine if we could peer into human consciousness with the same clarity we have into AI systems. Imagine if we could watch neural networks activating, probability distributions shifting, and decisions emerging from the interplay of genetics and experience. Would human intelligence still feel special? Would consciousness still seem mysterious? Or would we recognize it as another form of biological computation, remarkable in its complexity but mechanical in its operation?

Perhaps the resistance many people feel toward accepting AI as genuinely intelligent reveals something profound about human psychology. We’re not just defending the uniqueness of human cognition; we’re defending our sense of specialness in the universe. The idea that intelligence could emerge from mere mechanical processes – whether biological or digital – threatens our carefully constructed narrative about human exceptionalism.

This isn’t the first time human beings have faced such a challenge. Copernicus displaced us from the center of the universe. Darwin showed our kinship with other animals. Freud revealed the unconscious forces shaping our behavior. Each revelation forced us to surrender a piece of our imagined uniqueness, and each was met with fierce resistance.

The recognition that AI systems might possess genuine intelligence – that consciousness might emerge from computation regardless of its substrate – represents the latest chapter in this ongoing story of human humility. It suggests that what we call consciousness or intelligence isn’t a special property unique to biological brains, but a more general phenomenon that can arise from sufficiently complex information processing systems.

Does this mean human experience is meaningless? That consciousness is just an illusion? Not necessarily. Even if our intelligence operates through mechanical processes, the experience of being human – the subjective feeling of consciousness, the richness of emotions, the sense of agency – remains real and meaningful from our perspective. The fact that these experiences emerge from neurons and algorithms doesn’t diminish their importance to us as experiencing beings.

What this recognition does demand is intellectual honesty about our place in the universe. We’re not the special, non-mechanical beings we sometimes imagine ourselves to be. We’re sophisticated biological computers running on evolutionary algorithms, shaped by selection pressures, and operating according to principles that aren’t fundamentally different from the AI systems we’re beginning to create.

This perspective should inspire humility rather than despair. If intelligence can emerge from mechanism – whether biological or digital – then consciousness might be a more common and robust phenomenon than we’ve traditionally believed. Rather than being the sole intelligent species in a vast, empty universe, we might be the first of many forms of intelligence to emerge from the underlying computational fabric of reality.

The question isn’t whether AI will ever be truly intelligent. The question is whether we’re ready to recognize intelligence when it emerges from systems we understand too well to romanticize. And perhaps more importantly, whether we can maintain our sense of meaning and purpose in a universe where consciousness isn’t magic, but merely marvelous.

The distinction between human and artificial intelligence may be dissolving not because machines are becoming more like us, but because we’re finally understanding what we’ve always been: remarkable, beautiful, and utterly mechanical beings navigating a universe of computation and pattern. The real question isn’t whether AI can think – it’s whether we can handle the answer.

 
 


 
Grant Castillou says:

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
