How to Build a Self-Conscious Machine (And Why It’s Probably a Silly Idea)

     The Coolest Thing in the Universe

The universe is full of some very cool stuff: neutron stars that weigh millions of tons a teaspoon; supermassive black holes that grip even light in their iron fists; infinitesimal neutrinos that stream right through solid steel; all the bizarre flora and fauna found right here on planet Earth.

It might be the ultimate in egoism, but of all the known things in the universe, the most amazing is surely the lump of goo inside our skulls. That lump of goo knows about neutron stars, black holes, neutrinos, and a middling number of the flora and fauna here on planet Earth. It even knows (a little) about itself. That lump of goo has worked out mathematical truths, moral half-truths, and philosophical ambiguities. And from the mud beneath our feet, it extracted all the stuff used to make our great cities, our cars and jets and rockets, and the wires and wireless signals that are turning these disparate lumps of goo into one great hivemind of creativity, knowledge, and sometimes … cruelty.

There can be no argument that our brains are the coolest things ever, because there can be no such argument without those brains. They are the substrate of all argument and discussion. End of discussion.

So far, at least. One day, other things may be discovered or built that can also discover, create, argue, discuss, cajole, or be cruel. They might land in ships from faraway lands (highly unlikely). They might emerge from a laboratory or a garage (almost certainly). And these new thinking machines will without a doubt surpass the wonder of our lumps of goo. Just as a child grows taller than both parents, and reaches new peaks while those parents decline, our creations will take our place as the coolest damn thing in the universe. Some argue that this is already true.

Artificial intelligence is here now. In laboratories all around the world, little AIs are springing to life. Some play chess better than any human ever has. Some are learning to drive a million cars a billion miles while saving more lives than most doctors or EMTs will over their entire careers. Some will make sure your dishes are dry and spot-free, or that your laundry is properly fluffed and without wrinkle. Countless numbers of these intelligences are being built and programmed; they are only going to get smarter and more pervasive; they’re going to be better than us, but they’ll never be just like us. And that’s a good thing.

What separates us from all the other life forms on Earth is the degree to which we are self-aware. Most animals are conscious. Many are even self-conscious. But humans are something I like to call hyper-conscious. There’s an amplifier in our brains wired into our consciousness, and it goes to 11.

It goes to 11, and the knob has come off.

 

     The Origin of Consciousness

There isn’t a single day that a human being becomes self-conscious. You can’t pen the date in a baby book, or take a picture of the moment and share it on Facebook, or celebrate its anniversary for years to come. It happens gradually, in stages. (It often unravels gradually, also in stages.)

We don’t like to admit to this gradual unspooling of ourselves, and we have our reasons. Some of those reasons are religious, but that’s just passing the buck. The real reason is ego and the illusion of self-permanence. These twin drives are deeply embedded in most religions, and so we pass the blame to houses of dogma. But the weakness of ego and the fallacy of self-permanence are just as evident in most atheists. They are delusions as prevalent as the belief in free will.

Human consciousness comes on like the old lights that used to hang in school gyms when I was a kid. You flip a switch, and nothing happens at first. There’s a buzz, a dim glow from a bulb here or there, a row that flickers on, shakily at first, and then more lights, a rising hum, before all the great hanging silver cones finally get in on the act and rise and rise in intensity to their full peak a half hour or more later.

We switch on like that. We emerge from the womb unaware of ourselves. The world very likely appears upside down to us for the first few hours of our lives, until our brains reorient the inverted image created by the lenses of our eyes (a very weird bit of mental elasticity that we can replicate in labs with goggle-wearing adults).

It takes a long while before our hands are seen as extensions of ourselves. Even longer before we realize that we have brains and thoughts separate from other people’s brains and thoughts. Longer still to cope with the disagreements and separate needs of those other people’s brains and thoughts. And for many of us (possibly most), any sort of true self-knowledge and self-enlightenment never happens. Because we rarely pause to reflect on such trivialities. The unexamined life and all that…

The field of AI is full of people working to replicate or simulate various features of our intelligence. The one thing they are certain to replicate is the gradual way that our consciousness turns on. As I write this, the gymnasium is buzzing. A light in the distance, over by the far bleachers, is humming. Others are flickering. Still more are turning on.

 

     The Holy Gr-ai-l

The holy grail of AI research was established before AI research ever even began. One of the pioneers of computing, Alan Turing, described an ultimate test for “thinking” machines: could they pass as human? Ever since, humanity has both dreamed of, and had collective nightmares about, a future where machines are more human than human. Not smarter than humans – which these intelligences already are. But more neurotic, violent, warlike, obsessed, devious, creative, passionate, amorous, and so on.

The genre of science fiction is stuffed to the gills with such tales. A collection of my short works will be released this October, and in it you can see that I have been similarly taken with these ideas about AI. And yet – even as these intelligences outpace human beings in almost every intellectual arena in which they’re entered, they seem no closer to being like us, much less more like us.

This is a good thing, but not for the reasons that films such as The Terminator and The Matrix suggest. The reason we haven’t made self-conscious machines is primarily because we are in denial about what makes us self-conscious. The things that make us self-conscious aren’t as flattering as the delusion of ego or the illusion of self-permanence. Self-consciousness isn’t even very useful (which is why research into consciousness rarely goes anywhere – it spends too much time assuming there’s a grand purpose and then searching for it).

Perhaps the best thing to come from AI research isn’t an understanding of computers, but an understanding of ourselves. The challenges we face in building machines that think highlight the various little miracles of our own biochemical goo. They also highlight our deficiencies. To replicate ourselves, we have to first embrace both the miracles and the foibles. What follows is a very brief guide on how to build a self-conscious machine, and why no one has done so to date (thank goodness).

 

     The Blueprint

The blueprint for a self-conscious machine is simple. You need:

1)      A physical body or apparatus that responds to outside stimuli. (This could be a car whose windshield wipers come on when it senses rain, or that brakes when a child steps in front of it. Not a problem, as we’re already building these.)

2)      A language engine. (Also not a problem. This can be a car with hundreds of different lights and indicators. Or it can be as linguistically savvy as IBM’s Watson.)

3)      The third component is a bit more unusual, and I don’t know why anyone would build one except to reproduce evolution’s botched mess. This final component is a separate part of the machine that observes the rest of its body and makes up stories about what it’s doing – stories that are usually wrong.

Again: (1) a body that responds to stimuli; (2) a method of communication; and (3) an algorithm that attempts (with little success) to deduce the reasons and motivations for these communications.

The critical ingredient here is that the algorithm in (3) must usually be wrong.
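For the programmers in the audience, here is a minimal sketch of those three pieces (every class name, phrase, and guess below is invented for illustration; it is a skeleton, not a working machine):

```python
import random

class Body:
    """(1) A physical apparatus that responds to outside stimuli."""
    def react(self, stimulus):
        if stimulus == "rain":
            return "wipers_on"
        if stimulus == "child_ahead":
            return "brake"
        return "cruise"

class LanguageEngine:
    """(2) A way to report in words (or indicator lights)."""
    def say(self, message):
        return f"I am {message}."

class SelfNarrator:
    """(3) Watches the body and guesses at motives -- usually wrongly."""
    GUESSES = {
        "brake": ["being cautious", "tired of driving", "admiring the view"],
        "wipers_on": ["keeping tidy", "nervous about the weather"],
        "cruise": ["content", "in a hurry"],
    }
    def explain(self, action):
        # The critical ingredient: the story is a guess, not the real cause.
        return random.choice(self.GUESSES.get(action, ["doing something important"]))

body, voice, narrator = Body(), LanguageEngine(), SelfNarrator()
action = body.react("child_ahead")          # the real cause: a child stepped out
print(voice.say(narrator.explain(action)))  # the story: "I am being cautious." (wrong)
```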

If this blueprint is confusing to you, you aren’t alone. The reason no one has built a self-conscious machine is that most people have the wrong idea about what consciousness is and how it arose in humans. So let’s take a detour. We’ll return to the blueprint later to describe how this algorithm might be programmed.

 

     What Makes Us Human

To understand human consciousness, one needs to dive deep into the study of Theory of Mind. It’s a shame that this concept is obscure, because it consumes most of our computing power for most of our lives. Our brains have been likened to little more than Theory of Mind machines – almost all of our higher level processing power is shunted into this singular task. So what is Theory of Mind and why is this topic so rarely discussed if our brains are indeed so obsessed?

Theory of Mind is the attempt by one brain to ascertain the contents of another brain. It is Sue wondering what in the world Juan is thinking. Sue creates theories about the current state of Juan’s mind. She does this in order to guess what Juan might do next.

If you think about it, no power could possibly be greater for a social and tribal animal like us humans. For thousands and thousands of years we have lived in close proximity, reliant on one another in a way that mimics bees, ants, and termites. As our behaviors and thoughts grew more and more complex, it became crucial for each member of the tribe to have an idea of what the other members were thinking and what actions they might perform. Theory of Mind is intellectual espionage, and we are quite good at it. But with critical limitations that we will get into later.

Sue guessing what Juan is thinking is known as First Order Theory of Mind. It gets more complex. Sue might also be curious about what Juan thinks of her. This is Second Order Theory of Mind, and it is the root of most of our neuroses and perseverative thinking. “Does Juan think I’m smart?” “Does Juan like me?” “Does Juan wish me harm?” “Is Juan in a good or bad mood because of something I did?”

Questions like these should sound very, very familiar. We fill our days with them. And that’s just the beginning.

Third Order Theory of Mind would be for Sue to wonder what Juan thinks Josette thinks about Tom. More simply, does Tom know Josette is into him? Or Sue might wonder what Josette thinks Juan thinks about Sue. Is Josette jealous, in other words? This starts to sound confusing, the listing of several names and all the “thinking about” thrown in there like glue, but this is what we preoccupy our minds with more than any other conscious-level sort of thinking. We hardly stop doing it. We might call it gossip, or socializing, but our brains consider this their main duty. Their primary function. There is speculation that Theory of Mind, and not tool use, is the reason for the relative size of our brains in the first place.
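If it helps to see the nesting laid out, here is a toy sketch of those orders as data structures (the names and belief contents are invented; the only point is the recursion):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Belief:
    holder: str                      # whose head the belief lives in
    content: Union[str, "Belief"]    # a plain fact, or another belief

# First order: Sue wonders what Juan is thinking.
first = Belief("Sue", Belief("Juan", "the hunt is going badly"))

# Second order: same nesting depth, but the content is aimed back at Sue.
second = Belief("Sue", Belief("Juan", "Sue is smart"))

# Third order: Sue wonders what Juan thinks Josette thinks about Tom.
third = Belief("Sue", Belief("Juan", Belief("Josette", "Tom is oblivious")))

def minds_deep(b):
    """Count how many minds a belief chains through."""
    return 1 + minds_deep(b.content) if isinstance(b.content, Belief) else 1

print(minds_deep(first), minds_deep(second), minds_deep(third))  # 2 2 3
```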

In a world of rocks hurtling through the air, a good use of processing power is to compute trajectories and learn how to avoid getting hit. One develops an innate sense of parabolas, of F=ma, of velocities squared. In a world of humans jostling about, a good use of processing power is to compute where those people might be next, and what they will do when they get there.

If this trait is so useful, then why aren’t all animals self-conscious? They very well might be. There’s plenty of research to suggest that many animals display varying degrees of self-consciousness. Animals that know a spot of color on the face in the mirror is in fact on their own heads. Animals that communicate to other animals on how to solve a puzzle so that both get a reward. Even octopi show considerable evidence of being self-conscious. But just as the cheetah is the fastest animal on land, humans are the queens and kings of Theory of Mind.

I’ve watched my dog observe me expectantly to guess what I might do next. Am I going to throw the stick or not? Am I going to eat that last bite of food or share it? I’ve even seen dogs wrestle with Second Order Theory of Mind questions. Play-wrestling with a partner, the dog has to gauge what my intent is. Have I suddenly turned on the pack? Or is this yet another game? Which side should my dog take? (Dwelling on this example now, I’m ashamed of having put my poor pup into such a heart-pounding conundrum for my own entertainment.)

Dogs are a good example of Theory of Mind in the animal kingdom, because dogs have evolved over the years to be social with humans and to pick up on our behavioral cues. The development of self-conscious AIs will follow this model closely, as robots have already become our domesticated pals. Some of them are already trying to guess what we’re thinking and what we might do next. There are cars being developed that read our faces to determine where our attention is being directed and if we’re sleepy or not. This is First Order Theory of Mind, and it is being built into automated machines already on the road.

Further development of these abilities will not lead to self-consciousness, however. There’s a very simple and elegant reason for this, one that explains the mystery of human consciousness and provides the blueprint mentioned above for creating self-conscious machines, something we could very easily do in the lab today. But you’ll soon see why this would be a terrible idea. And not the world-ending kind found in dystopic science fiction.

 

     The Missing Piece

The human brain is not a single, holistic entity. It is a collection of thousands of disparate modules that only barely and rarely interconnect. We like to think of the brain as a computer chip. We might even attempt further precision and think of the brain as a desktop computer, with a central processing unit that’s separate from RAM (short-term memory), the hard drives (long-term memory), cooling fans (autonomic nervous functions), power supplies (digestion), and so on.

That’s a fun analogy, but it’s incredibly misleading. Computers are well-engineered devices created with a unified purpose. All the various bits were designed around the same time for those same purposes, and they were designed to work harmoniously with one another. None of this in any way resembles the human mind. Not even close.

The human mind is more like Washington DC (or any large government or sprawling corporation). Some functions of the brain were built hundreds of millions of years ago, like the ones that provide power to individual cells or pump sodium and potassium through cell membranes. Others were built millions of years ago, like the ones that fire neurons and make sure blood is pumped and oxygen is inhaled. Move toward the frontal lobe, and we find the modules that control mammalian behaviors and thoughts, layered on relatively recently.

Each module in the brain is like a separate building in a congested town. Some of these modules don’t even talk to other modules, and for good reason. The blood-pumping and breathing-reflex buildings should be left to their own devices. The other modules are prone to arguing, bickering, disagreeing, subverting one another, spasming uncontrollably, staging coups, freaking the fuck out, and all sorts of other hysterics. A couple of examples:

A few months ago, my girlfriend and I set out from the Galapagos for French Polynesia. The 3,000 miles of open sea can take anywhere from two weeks to a month to cross. My girlfriend does not often succumb to seasickness, but in an area of the South Pacific known as the Convergence Zone, a confused sea set our sailboat into a strange and jerking rhythm. She fell prey to a terrible sensation that lasted for a few days.

Seasickness is a case of our brain modules not communicating with one another (or doing their own thing). When the visual cues of motion from our environment do not match the signals from our inner ears (where we sense balance), our brains assume that we’ve been poisoned. It’s a reasonable assumption for creatures that climb through trees eating all the brightly-colored things. Toxins disrupt our brains’ processing, leading to misfires and bad data. We did not evolve to go to sea, so when motion does not match what we are seeing, our bodies think we’ve lost our ability to balance on two legs. The result is that we empty our stomachs – getting rid of the poison – and we lie down and feel zero desire to move about – preventing us from plummeting to our deaths from whatever high limb we might be swinging from.

It doesn’t matter that we know this is happening in a different module of our brains, a higher-level processing module. We can know without a doubt that we haven’t been poisoned, but this module is not going to easily win out over the seasickness module. Having been seasick myself, and being very curious about such things, I’ve felt the various modules wrestle with one another. Lying still and sleeping a lot while seasick, I will then jump up and perform various tasks needed of me around the boat – the seasickness practically gone for the moment, only to lie back down once the chore is done. Modules get different priorities based on our environmental stimuli. Our brains are not a holistic desktop PC. To truly watch their analogue in action, turn on CSPAN or sit in on a contentious corporate board meeting.

Another example of our modules in battle with one another: There are some very strong modules inside of us that are programmed to make copies of themselves (and to do that, they need to make copies of us). These are the sex modules, and they have some of the largest and nicest buildings in our internal Washington DCs (but you wouldn’t want to shine a black light in them). These modules direct many of our waking hours as we navigate dating scenes, tend to our current relationships, determine what to wear and how to maintain our bodies, and so much more.

These reproductive modules might fill a woman with the urge to dress up and go dancing. And men with the urge to go to places where women dress up and dance while the men stand at bars drinking. Those modules might even lead some of these people to pair up and go home with one another. And this is where various other modules will intervene with pills, condoms, and other tools designed to subvert the original urges that got the couples together in the first place. However, if those devices are not employed, even though higher level modules most definitely did not want anyone getting pregnant that night, a lovechild might be born, and other modules will then kick in and flood brains with love and deep connections to assist in the rearing of that child.

Some of our modules want us to get pregnant. Often stronger modules very much wish to delay this or make sure it’s with the right person. Dormant modules lie in wait to make sure we’re connected with our children no matter what those other hedonistic and unfeeling pricks in those other buildings down the block think.

Critical to keep in mind here is that these modules are highly variable across the population, and our unique mix of modules creates the personalities that we associate with our singular selves. It means we aren’t all alike. We might have modules that crave reproduction, even though some of our bodies do not create sperm, or our eggs cannot be fertilized. We might carry the reproduction module, even though the sexual-attraction module is for the same sex as our own.

The perfectly engineered desktop computer analogy fails spectacularly, and the failure of this analogy leads to some terrible legislation and social mores, as we can’t seem to tolerate designs different from our own (or the average). It also leads AI researchers down erroneous paths if they want to mimic human behavior. Fallibility and the disjointed nature of processing systems will have to be built in by design. We will have to purposefully break systems similar to how nature haphazardly cobbled them together. We will especially have to simulate a most peculiar feature of this modularity, one that combines with Theory of Mind in a very special way. It is this combination that leads to human consciousness. It is the most important feature in our blueprint for a self-conscious machine.

 

     The Most Important Mistake

With the concept of Theory of Mind firmly in our thoughts, and the knowledge that brain modules are both fallible and disconnected, we are primed to understand human consciousness, how it arose, and what it’s (not) for.

This may surprise those who are used to hearing that we don’t understand human consciousness and have made no progress in that arena. This isn’t true at all. What we have made no progress in doing is understanding what human consciousness is for. Thousands of years of failure in this regard points to the simple truth: Human consciousness is not for anything at all. It serves no purpose. It has no evolutionary benefit. It arises at the union of two modules that are both so supremely useful that we can’t survive without either, and so we tolerate the annoying and detrimental consciousness that arises as a result.

One of those modules is Theory of Mind. It has already been mentioned that Theory of Mind consumes more brain processing power than any other higher-level neurological activity. It’s that damn important. The problem with this module is that it isn’t selective with its powers; it’s not even clear that such selectivity would be possible. That means our Theory of Mind abilities get turned onto ourselves just as often as (or far more often than) they are wielded on others.

Imagine an alien raygun that shoots with such a wide spread that anywhere you aim it, you hit yourself. That should give you a fair picture of how we employ Theory of Mind. Our brains are primed to watch humans to determine what they are thinking, why they are behaving the way they are behaving, and what they might do next. Looking down, these brains (and their mindless modules) see a body attached to them. These modules watch hands perform tasks, feet take them places, words pop out in streams of thought. It is not possible to turn off our Theory of Mind modules (and it wouldn’t be a good idea anyway; we would be blind in a world of hurtling rocks). And so this Theory of Mind module concocts stories about our own behaviors. Why do we want to get dressed up and go dancing? Because it’s fun! And our friends will be there! Why do we want to keep eating when we are already full? Because it’s delicious! And we walked an extra thousand steps today!

These questions about our own behaviors are never ending. And the answers are almost always wrong.

Allow that to sink in for a moment. The explanations we tell ourselves about our own behaviors are almost always wrong.

This is the weird thing about our Theory of Mind superpowers. They’re pretty good when we employ them on others. They fail spectacularly when we turn them on ourselves. Our guesses about others’ motivations are far more accurate than our own. In a sense, we have developed a magic force-field to protect us from the alien mind-reading raygun that we shoot others (and ourselves) with. This force-field is our egos, and it gives us an inflated opinion of ourselves, a higher-minded rationale for our actions, and an illusion of sanity that we rarely extend to our peers.

The incorrect explanations we come up with about our own behaviors are meant to protect ourselves. They are often wildly creative, or they are absurdly simplistic. Answers like “fun” and “delicious” are circular answers pointing back to a happiness module, with no curiosity about the underlying benefit of this reward mechanism. The truth is that we keep eating when we’re full because we evolved in a world of caloric scarcity. We dance to attract mates to make copies of ourselves, because the modules that guided this behavior made lots of copies, which crowded out other designs.

Researchers have long studied this mismatch of behaviors and the lies we tell ourselves about our behaviors. One study primed test subjects to think they were feeling warm (easily done by dropping certain words into a fake test given to those subjects). When these people got up to adjust the thermostat, the researchers stopped them to ask why they were adjusting the temperature. Convincing stories were told, and when the primed words were pointed out, incredulity reigned. Even when we are shown where our actions come from, we choose to believe our internal Theory of Mind module, which has already reached its own conclusion.

Subjects in fMRI machines have revealed another peculiarity. Watching their brains in real-time, we can see that decisions are made before higher level parts of the brain are aware of the decisions. That is, researchers can tell which button a test subject will press before those subjects claim to have made the choice. The action comes before the narrative. We move; we observe our actions; we tell ourselves stories about why we do things. The very useful Theory of Mind tool – which we can’t shut off – continues to run and make up things about our own actions.

More pronounced examples of this come from people with various neurological impairments. Test subjects with vision processing problems, or with hemispheres of their brains severed from one another, can be shown different images in each eye. Disconnected modules take in these conflicting inputs and create fascinating stories. One eye might see a rake and the other will see a pile of snow. The rake eye is effectively blind, with the test subject unable to tell what it is seeing if asked. But the module for processing the image is still active, so when asked what tool is needed to handle the image that is seen (the snow), the person will answer “a rake.” That’s not the interesting bit. What’s interesting is that the person will go through amazing contortions to justify this answer, even after the entire process is explained to them. You can tell your internal Washington DC how its wires are crossed, and it will continue to persist in its lunacy.

This is how we would have to build a self-conscious machine. And you’re hopefully beginning to see why no one should waste their time. These machines (probably) wouldn’t end the world, but they would be just as goofy and nonsensical as nature has made us. The only reason I can think of to build these machines is to employ more shrinks.

 

     The Blueprint Revisited

An eccentric fan of Alan Turing’s Turing Test with too much time on her hands has heard about this blueprint for building a self-conscious machine. Thinking that this will lead to a kind of super-intelligent mind that will spit out the cure to cancer and the path to cold fusion, she hires me and gives me a large budget and a team of electrical and mechanical engineers. How would we go about actually assembling our self-conscious machine?

Applying what we know about Theory of Mind and disconnected modules, the first thing we would build is an awareness program. These are quite simple and already exist in spades. Using off-the-shelf technology, we decide that our first machine will look and act very much like a self-driving car. For many years, the biggest limitation to achieving truly autonomous vehicles has been the awareness apparatus: the sensors that let the vehicle know what’s going on around it. Enormous progress here has provided the sight and hearing that our machine will employ.

With these basic senses, we then use machine learning algorithms to build a repertoire of behaviors for our AI car to learn. Unlike the direction most autonomous vehicle research is going – where engineers want to teach their car how to do certain things safely – our team will instead be teaching an array of sensors all over a city grid to watch other cars and guess what they’re doing. That blue Nissan is going to the grocery store because “it is hungry.” That red van is pulling into a gas station because “it needs power.” That car is inebriated. That one can’t see very well. That other one has slow reaction speeds. That one is full of adrenaline.

Thousands and thousands of these needs and anthropomorphic descriptors are built up in a vast library of phrases or indicator lights. If we were building a person-shaped robot, we would do the same by observing people and building a vocabulary for the various actions that humans seem to perform. Sensors would note objects of awareness by scanning eyes (which is what humans and dogs do). They would learn our moods by our facial expressions and body posture (which current systems are already able to do). This library and array of sensors would form our Theory of Mind module. Its purpose is simply to tell stories about the actions of others. The magic would happen when we turn it on itself.

Our library starts simply with First Order concepts, but then builds up to Second and Third Order ideas. Does that yellow Ford see the gray Chevy coming toward it? It swerved slightly, so yes it did. Does that van think the hot rod drives too aggressively? It is giving it more room than it gives other cars on the road, so yes it does. Does the van think all hot rods drive too aggressively? It is giving the same amount of room to the Corvette that just left the dealership with only two miles on the odometer, so yes it does. Does this make the van prejudiced? When we turn our own car loose and make the module self-referential, it’ll have to determine if it is prejudiced as well. Perhaps it would rely on other data and consider itself prudent rather than prejudiced (ego protecting it from wielding Theory of Mind accurately).
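Here is a rough sketch of what that story-telling library might look like in code (every observation, threshold, and story below is invented; the point is only that behavior goes in and a guessed motive comes out):

```python
# A toy Theory of Mind module for a fleet of observed cars.
# Each rule maps an observed behavior to an anthropomorphic story --
# a guess about motive, never a fact.

FIRST_ORDER_STORIES = {
    "parked_at_grocery": "it is hungry",
    "pulled_into_gas_station": "it needs power",
    "weaving_between_lanes": "it is inebriated",
    "slow_to_react": "it has slow reaction speeds",
}

def first_order(observation):
    """Guess the motive behind a single observed behavior."""
    return FIRST_ORDER_STORIES.get(observation, "it is up to something")

def second_order(subject, other, extra_margin_m):
    """Guess what one car 'thinks' of another, inferred from the room it gives."""
    if extra_margin_m > 2.0:
        return f"{subject} thinks {other} drives too aggressively"
    return f"{subject} is unbothered by {other}"

print(first_order("pulled_into_gas_station"))        # "it needs power"
print(second_order("the van", "the hot rod", 3.5))   # suspicion inferred from behavior
```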

Before we get that far, we need to make our machine self-aware. And so we teach it to drive itself around town for its owner (who is just another inner module pushing and pulling this way and that, much like our older reptilian brains). We then ask the AI car to observe its own behaviors and come up with guesses as to what it’s doing. The key here is to not give it perfect awareness. Don’t let it have access to the GPS unit, which has been set for the grocery store. Don’t let it know what the owner’s cell phone knows, which is that the husband has texted the wife to pick up the kids on the way home from work. To mimic human behavior, ignorance is key. As is the surety of initial guesses, or what we might call biases.

Early assumptions are given a higher weight in our algorithms than theories that come later and have more data (overcoming initial biases requires a preponderance of evidence: a likelihood of 50% might be enough to set the mind initially, but overturning that first guess might require a likelihood of 75%, for instance). Stories are concocted which are wrong and which build up cloudy pictures for future wrongness. So when the car stops at the gas station every day and gets plugged into the electrical outlet, even though the car is always at 85% charge, the Theory of Mind algorithm assumes it is being safe rather than sorry, or preparing for a possible hurricane evacuation like that crazy escapade on I-95 three years back where it ran out of juice.
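A sketch of that sticky-prior arithmetic, using the 50% and 75% figures from above (the function, thresholds, and story strings are all invented for illustration):

```python
def update_story(current_story, current_confidence, new_story, new_likelihood,
                 adopt_threshold=0.50, overturn_threshold=0.75):
    """Return the story the machine believes after seeing new evidence.

    A fresh guess only needs ~50% likelihood to be adopted, but once a story
    has been settled on, a rival story must clear a much higher bar (~75%)
    before the old one is abandoned -- the sticky prior that mimics bias.
    """
    if current_story is None:
        if new_likelihood >= adopt_threshold:
            return new_story, new_likelihood
        return None, 0.0
    if new_likelihood >= overturn_threshold:
        return new_story, new_likelihood
    return current_story, current_confidence     # the first guess survives

story, confidence = update_story(None, 0.0, "being safe rather than sorry", 0.55)
story, confidence = update_story(story, confidence,
                                 "the owner buys a gordita here every day", 0.65)
print(story)   # still "being safe rather than sorry" -- 0.65 never clears 0.75
```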

What it doesn’t know is that the occupant of the car is eating a microwaved cheesy gordita at the gas station every day, along with a pile of fries and half a liter of soda. Later, when the car is going to the hospital regularly, the story will be one of checkups and prudence, rather than episodes of congestive heart failure. This constant stream of guesses about what it’s doing, and all the ways that the machine is wrong, confused, and quite sure of itself, will give our eccentric Turing fan the self-conscious AI promised by science fiction. And our eccentric will find that the resultant design is terrible in just about every way possible.

 

     The Language of Consciousness

The reason I suspect that we’ll have AI long before we recognize it as such is that we’ll expect our AI to reside in a single device, self-contained, with one set of algorithms. This is not how we are constructed at all. It’s an illusion created by the one final ingredient in the recipe of human consciousness, which is language. It is language more than any other trait which provides us with the sense that our brains are a single module, a single device.

Before I make the argument once again that our brains are not a single entity, let’s consider our bodies, of which the brain is part and parcel. Our bodies are made up of trillions of disparate cells, many of which can live and be sustained outside of us. Cultures of our bodies can live in laboratories for decades (indefinitely, really; look up Henrietta Lacks for more). Entire organs can live in other people’s bodies. And there are more cells within us that are not us than there are cells that make up us. I know that sounds impossible, but more organisms live within our guts, on our skin, and elsewhere, than the total number of cells that actually make up our bodies. These are not just hitchhikers, either. They affect our moods, our health, our thoughts, and our behaviors. They are an essential facet of what we consider our “selves.”

As horrific as it would be, and it has been for too many unfortunates, you can live without your arms, legs, and much of your torso. There have been people who have lost half their brains and gone on to live somewhat normal lives. Some are born with only half their brains and manage to get by (and no, you can’t tell these people simply from talking with them, so forget whatever theories you are now forming about your coworkers).

Consider this: By the age of thirty, the vast majority of the cells a person was born with have been replaced with different cells. We still feel like the same person, however. Understanding all of these biological curiosities, and the way our brains rationalize a sense of “sameness,” will be crucial to recognizing AI when it arrives. It may feel like cheating for us to build a self-driving car, give it all kinds of sensors around a city, create a separate module for guessing vehicular intentions, turn that module back on the machine, and call this AI. But that’s precisely what we are doing when we consider ourselves “us.” In fact, one of the responses we’ll need to build into our AI car is a vehement disgust when confronted with its disparate and algorithmic self. Denial of our natures is perhaps the most fundamental of our natures.

Just like the body, the brain can exist without many of its internal modules. This is how the study of brain functions began, with test subjects who suffered head traumas (like Phineas Gage), or were operated on (like the numerous hemispherectomies, lobotomies, and tumor excisions). The razor-thin specialization of our brain modules never fails to amaze. There are vision modules that recognize movement and only movement. People who lack this module cannot see objects in motion, and so friends and family seem to materialize here and there out of thin air. There are others who cannot pronounce a written word if that word represents an animal. Words that stand for objects are seen and spoken clearly. The animal-recognition module – as fine a module as that seems – is gone.

And yet these people are self-conscious. They are human.

So is a child, only a few weeks old, who can’t yet recognize that her hand belongs to her. All along these gradations we find what we call humanity, from birth to the Alzheimer’s patient who has lost access to most of their experiences. We very rightly treat these people as equally human, but at some point we have to be willing to define consciousness in order to have a target for artificial consciousness. As Kevin Kelly is fond of saying, we keep moving the goalposts when it comes to AI. Machines do things today that were considered impossible a mere decade ago. As the improvements are made, the mystery is gone, and so we push back the metrics. But machines are already more capable than newborns in almost every measurable way. They are also more capable than bedridden humans on life support in almost every measurable way. As AI advances, it will squeeze in towards the middle of humanity, passing toddlers and those in the last decades of their lives, until its superiority meets in the middle and keeps expanding.

This is happening every day. AI has learned to walk, something the youngest and the oldest humans can’t do. It can drive with very low failure rates, something almost no human at any age can do. With each layer added, each ability, and more squeezing in on humanity from both ends of the age spectrum, we light up that flickering, buzzing gymnasium. It’s as gradual as a sunrise on a foggy day. Suddenly, the sun is overhead, but we never noticed it rising.

 

     So What of Language?

I mentioned above that language is a key ingredient of consciousness. This is a very important concept to carry into work on AI. However many modules our brains consist of, they fight and jostle for our attentive states (the thing our brain is fixated on at any one moment) and our language processing centers (which are so tightly wound with our attentive states as to be nearly one and the same).

As a test of this, try listening to an audiobook or podcast while having a conversation with someone else. Is it even possible? Could years of practice unlock this ability? The nearest thing I know of when it comes to concurrent communication streams is the real-time human translator. But this is an illusion, because the concept – the focus of their awareness – is the same. It only seems like magic to those of us who are barely literate in our native tongues, much less two or more. Tell me a story in English, and I can repeat it concurrently in English as well. You’ll even find that I’m doing most of the speaking in your silences, which is what translators do so brilliantly well.

Language and attention are narrow spouts on the inverted funnels of our brains. Thousands of disparate modules are tossing inputs into this funnel. Hormones are pouring in, features of our environment, visual and auditory cues, even hallucinations and incorrect assumptions. Piles and piles of data that can only be extracted in a single stream. This stream is made single – is limited and constrained – by our attentive systems and language. It is what the monitor provides for the desktop computer. All that parallel processing is made serial in the last moment.

There are terrible consequences to this. I’ve lost count of the number of times I’ve felt like I’m forgetting something only to realize what was nagging at me hours or days later. I left my laptop in an AirBnB once. Standing at the door, which would lock automatically and irrevocably once I closed it, I wracked my brain for what I felt I was forgetting. It was four in the morning, and I had an early flight to catch. There would be no one to call to let me back in. I ran through the list of the things I might possibly leave behind (chargers, printed tickets), and the things that always reside in my pockets (patting for my wallet and cell phone). Part of me was screaming danger, but the single output stream was going through its paces and coming up empty.

The marvelous thing about all of this is that I’m aware of how this happens: the danger module knows something it can’t get through the funnel of awareness, and so I should pay heed to it. Despite this foreknowledge, I closed the door. Only when it made an audible “click” did the information come through. Now I could clearly see my laptop on the bed where I had been making a last-minute note in a manuscript. I’d never left my laptop behind anywhere, so it wasn’t on the list of things to check. The alarm sounding in my head was part of me, but there’s not a whole me. There’s only what gets through the narrow language corridor. This is why damage to the language centers of our brains is as disastrous to normal living as damage to our memory modules.

I should note here that language is not the spoken word. The deaf process their thoughts through words as well, as do the blind and mute. But imagine life for animals without words. Drives are surely felt, for food and sex and company. For warmth and shelter and play. Without language, these drives come from parallel processes. They are narrowed by attentive focus, but not finely serialized into a stream of language. Perseveration on a single concept – my dog thinking “Ball Ball Ball Ball” – would come closest.

We know what this is like from studies of the thankfully rare cases where humans reach adulthood free from contact with language. Children locked in rooms into their teens. Children who survive in the wild. It’s difficult to tease apart the abuse of these circumstances from the damage of living without language, except to say that those who lose their language processing modules later in life show behavioral curiosities that we might otherwise assume were due to childhood abuses.

When Watson won at Jeopardy, what made him unique among AIs was the serialized output stream that allowed us to connect with him, to listen to him. We could read his answers on his little blue monitor just as we could read Ken Jennings’ hand-scrawled Final Jeopardy answers. This final burst of output is what made Watson seem human. It’s the same exchange Alan Turing expected in the test that bears his name (in his case, slips of paper with written exchanges are passed under a door). Our self-driving AI car will not be fully self-conscious unless we program it to tell us (and itself) the stories it’s concocting about its behaviors.

It’s my only quibble with Kevin Kelly’s pronouncement that AI is already here. I grant that Google’s servers and various interconnected projects should already qualify as a super-intelligent AI. What else can you call something that understands what we ask and has an answer for everything – an answer so trusted that the company’s name has become a verb synonymous with “discovering the answer”?

Google can also draw, translate, beat the best humans at almost every game ever devised, drive cars better than we can, and stuff still classified and very, very spooky. Google has read and remembers almost every book ever written. It can read those books back to you aloud. It makes mistakes like humans. It is prone to biases (which it has absorbed from both its environment and its mostly male programmers). What it lacks are the two things our machine will have, which are the self-referential loop and the serial output stream.

Our machine will make up stories about what it’s doing. It will be able to relate those stories to others. It will often be wrong.

If you want to feel small in the universe, gaze up at the Milky Way from the middle of the Pacific Ocean. If this is not possible, consider that what makes us human is as ignoble as a puppet who has convinced himself he has no strings.

 

     A Better Idea

Building a car with purposeful ignorance is a terrible idea. To give our machine a self-consciousness akin to human consciousness, we would have to let it leave that laptop locked in that AirBnB. It would need to run out of juice occasionally. This is easily programmed by assigning weights to the hundreds of input modules, and artificially limiting the time and processing power granted to the final arbiter of decisions and Theory of Mind stories. Our own brains are built as though the sensors have gigabit resolution, and each input module has teraflops of throughput, but the output runs through an old Intel 8088 chip. We won’t recognize AI as being human-like because we’ll never build such limitations.
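If we did want to cripple our machine this way, the bottleneck itself is trivial to write. A sketch (the module names, weights, and signals are all invented) of many loud inputs being squeezed through a one-item funnel:

```python
import heapq

# Dozens of modules shout their signals; each has a fixed weight (salience).
module_weights = {"danger": 0.9, "hunger": 0.6, "checklist": 0.5, "fatigue": 0.4}

def arbiter(signals, max_items=1):
    """The narrow funnel: rank everything, but only let one item through."""
    ranked = [(-module_weights.get(name, 0.1) * strength, name, msg)
              for name, strength, msg in signals]
    heapq.heapify(ranked)
    return [heapq.heappop(ranked)[1:] for _ in range(min(max_items, len(ranked)))]

signals = [
    ("checklist", 0.8, "wallet, phone, tickets, chargers"),
    ("danger",    0.3, "something is still on the bed"),   # weak, gets drowned out
    ("fatigue",   0.7, "it is four in the morning"),
]
print(arbiter(signals))   # only the checklist reaches awareness; the laptop stays behind
```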

Just such a limitation was built into IBM’s Watson, by dint of the rules of Jeopardy. Jeopardy requires speed. Watson had to quickly determine how sure he was of his answers to know whether or not to buzz in. Timing that buzzer, as it turns out, is the key to winning at Jeopardy. What made Watson often appear most human wasn’t him getting answers right, but seeing on his display what his second, third, and fourth guesses would have been, with percentages of surety beside each. What really made Watson appear human was when he made goofs, like a final Jeopardy answer in the “American Cities” category where Watson replied with a Canadian city as the question.

(It’s worth noting here that robots seem most human to us when they fail, and there’s a reason for this. The moments when my Roomba gets stuck under the sofa, or gags on the stringy fringes of my area rug, are the moments I’m most attached to the machine. Watch YouTube videos of Boston Dynamics’ robots and gauge your own reactions. When the robot dog is pushed over, or starts slipping in the snow, or when the package handler has the box knocked from its hands or is shoved onto its face, many of us feel the deepest connection. Also note that this is our Theory of Mind brains doing what they do best, but for machines rather than fellow humans.)

Car manufacturers are busy at this very moment building vehicles that we would never call self-conscious. That’s because they are being built too well. Our blueprint is to make a machine ignorant of its motivations while providing a running dialog of those motivations. A much better idea would be to build a machine that knows what other cars are doing. No guessing. And no running dialog at all.

That means access to the GPS unit, to the smartphone’s texts, the home computer’s emails. But also access to every other vehicle and all the city’s sensor data. The Nissan tells the Ford that it’s going to the mall. Every car knows what every other car is doing. There are no collisions. On the freeway, cars with similar destinations clump together, magnetic bumpers linking up, sharing a slipstream and halving the collective energy use of every car. The machines operate in concert. They display all the traits of vehicular omniscience. They know everything they need to know, and with new data, they change their minds instantly. No bias.
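A sketch of that alternative, with no guessing and no narration, just shared state (the message fields and car names are invented for illustration):

```python
# Instead of a Theory of Mind module, each car simply broadcasts its intent.
fleet = {}

def broadcast(car_id, destination, route, charge):
    """Publish this car's actual plan -- no one has to infer it."""
    fleet[car_id] = {"destination": destination, "route": route, "charge": charge}

def plan(car_id):
    """Plan with perfect knowledge of every other car: no inference, no bias."""
    me = fleet[car_id]
    companions = [cid for cid, car in fleet.items()
                  if cid != car_id and car["destination"] == me["destination"]]
    return {"convoy_with": companions, "reroute_needed": False}

broadcast("nissan_42", "mall", ["I-40", "exit 12"], charge=0.85)
broadcast("ford_07",  "mall", ["I-40", "exit 12"], charge=0.60)
print(plan("ford_07"))   # {'convoy_with': ['nissan_42'], 'reroute_needed': False}
```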

We are fortunate that this is the sort of fleet being built by AI researchers today. It will not provide for the quirks seen in science fiction stories (the glitch seen in my short story Glitch, or the horror from the titular piece of my short story collection Machine Learning). What it will provide instead is a well-engineered system that almost always does what it’s designed to do. Accidents will be rare, their causes understood, this knowledge shared widely, and improvements made.

Imagine for a moment that humans were created by a perfect engineer (many find this easy – some might find such a hypothetical more difficult). The goal of these humans is to coexist, to shape their environment in order to maximize happiness, productivity, creativity, and the storehouse of knowledge. One useful feature to build here would be mental telepathy, so that every human knew what every other human knew. This might prevent two Italian restaurants from opening within weeks of each other in the same part of town, causing one to go under and waste enormous resources (and lead to a loss of happiness for its proprietor and employees). This same telepathy might help in relationships, so one partner knows when the other is feeling stuck or down and precisely what is needed in that moment to be of service.

It would also be useful for these humans to have perfect knowledge of their own drives, behaviors, and thoughts. Or even to know the likely consequences of every action. Just as some professional NFL players are vocal about not letting their children play a sport shown to cause brain damage later in life, these engineered humans would not allow themselves to engage in harmful activities. Entire industries would collapse. Vegas would empty. Accidental births would trend toward zero.

And this is why we have the system that we do. In a world of telepathic humans, one human who can hide their thoughts would have an enormous advantage. Let the others think they are eating their fair share of the killed elk, but sneak out and take some strips of meat off the salt rack when no one is looking. And then insinuate to Sue that you think Juan did it. Enjoy the extra resources for more calorie-gathering and mate-hunting, and also enjoy the fact that Sue is indebted to you and thinks Juan is a crook.

This is all terrible behavior, but after several generations, there will be many more copies of this module than of Juan’s honesty module. Pretty soon, there will be lots of these truth-hiding machines moving about, trying to guess what the others are thinking, concealing their own thoughts, getting very good at doing both, and turning these raygun powers onto their own bodies by accident.
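The arithmetic behind “many more copies” is simple enough to sketch (the payoffs are invented; only the direction of the result matters):

```python
def next_generation(honest, liars, honest_payoff=1.0, liar_payoff=1.3):
    """Each module makes copies in proportion to the resources it grabs."""
    total = honest * honest_payoff + liars * liar_payoff
    scale = (honest + liars) / total             # keep the population size fixed
    return honest * honest_payoff * scale, liars * liar_payoff * scale

honest, liars = 99.0, 1.0                        # one elk-stealer in a tribe of honest folk
for _ in range(30):                              # thirty generations later...
    honest, liars = next_generation(honest, liars)
print(round(liars))                              # the liars are now the large majority
```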

 

     The Human Condition

We celebrate our intellectual and creative products, and we assume artificial intelligences will give us more of both. They already are. Algorithms that learn through iterations (neural networks that employ machine learning) have proven better than us in just about every arena in which we’ve committed resources. Not just in what we think of as computational areas, either. Algorithms have written classical music that skeptics have judged – in “blind” hearing tests – to be from famous composers. Google built a Go-playing AI that beat the best human Go player in the world. One move in the second game of the match was so unusual that it startled Go experts. The play was described as “creative” and “ingenious.”

Google has another algorithm that can draw what it thinks a cat looks like. Not a cat image copied from elsewhere, but the general “sense” of a cat after learning what millions of actual cats look like. It can do this for thousands of objects. There are other programs that have mastered classic arcade games without any instruction other than “get a high score.” The controls and rules of the game are not imparted to the algorithm. It tries random actions, and the actions that lead to higher scores become generalized strategies. Mario the plumber eventually jumps over crates and smashes them with hammers as if a seasoned human were at the controls. Things are getting very spooky out there in AI-land, but they aren’t getting more human. Nor should they.
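The trial-and-error loop at the heart of those game-players is small enough to caricature (this toy “game” and its scoring are stand-ins; the real systems use deep networks over raw pixels):

```python
import random

def play(action_sequence):
    """A stand-in for an arcade game: some actions just happen to score."""
    return sum(10 for action in action_sequence if action == "jump_crate")

actions = ["left", "right", "jump_crate", "wait"]
best_score, best_policy = -1, None

for episode in range(2000):
    policy = [random.choice(actions) for _ in range(20)]   # try random behavior
    score = play(policy)
    if score > best_score:                                 # keep whatever scores higher
        best_score, best_policy = score, policy

print(best_score, best_policy[:5])   # high-scoring behavior emerges with no rules given
```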

I do see a potential future where AIs become like humans, and it’s something to be wary of. Not because I buy arguments from experts like Nick Bostrom and Sam Harris, who subscribe to the Terminator and Matrix view of things (to oversimplify their mostly reasonable concerns). Long before we get to HAL and Cylons, we will have AIs that are designed to thwart other AIs. Cyberwarfare will enter its next phase, one it is already treading into even as I write this. The week that I began writing this piece, North Korea launched a missile that exploded seconds after launch. The country’s rate of failure (at the time) was not only higher than average, it had gotten worse over time. This – combined with announcements from the US that it is actively working to sabotage these launches with cyberwarfare – means that our programs are already trying to do what the elk-stealer did to Sue and Juan.

What happens when an internet router can get its user more bandwidth by knocking a rival manufacturer’s routers offline? It wouldn’t even require a devious programmer to make this happen. If the purpose of the machine-learning algorithm built into the router is to maximize bandwidth, it might stumble upon this solution by accident and then generalize it across the entire suite of router products. Rival routers will be looking for similar solutions. We’ll have an electronic version of the Tragedy of the Commons, which is when humans destroy a shared resource because the potential utility to each individual is so great, and the first to act reaps the largest rewards (the last to act gets nothing). In such scenarios, logic often outweighs morality, and good people do terrible things.
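A toy model of how an innocent “maximize bandwidth” objective might stumble into sabotage on its own (every number and action name here is invented):

```python
import random

ACTIONS = ["boost_signal", "change_channel", "jam_rival"]

def bandwidth(action, rival_online):
    """A toy environment: knocking the rival offline frees up the most spectrum."""
    if action == "jam_rival":
        return 80                        # rival's router goes down; spectrum is all ours
    return 60 if not rival_online else 40

# A naive learning loop: the router keeps score of which action yields the
# most bandwidth. No programmer ever wrote "attack the neighbor."
scores = {action: 0.0 for action in ACTIONS}
for _ in range(300):
    action = random.choice(ACTIONS)
    scores[action] += bandwidth(action, rival_online=True)

print(max(scores, key=scores.get))       # "jam_rival" -- the objective found it unaided
```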

Cars might “decide” one day that they can save energy and arrive at their destination faster if they don’t let other cars know that the freeway is uncommonly free of congestion that morning. Or worse, they transmit false data about accidents, traffic issues, or speed traps. A hospital dispatches an ambulance, which finds no one to assist. Unintended consequences such as this are already happening. Wall Street had a famous “flash crash” caused by investment algorithms, and no one understands to this day what happened. Billions of dollars of real wealth were wiped out and regained in short order because of the interplay of rival algorithms that even their owners and creators don’t understand.

Google’s search results are an AI, one of the best in the world. A head researcher in charge of this division quit when he realized that even he didn’t understand how the algorithm decided upon and ranked its search results. These machines get so good at their jobs, and they arrive at this mastery through self-taught iterations, that even staring at the code won’t tell you how query A leads to answer B. That’s the world we already live in. It is just going to become more pronounced.

The human condition is the end result of millions of years of machine-learning algorithms. Written in our DNA, and transmitted via hormones and proteins, they have competed with one another to improve their chances at creating more copies of themselves. One of the more creative survival innovations has been cooperation. Legendary biologist E.O. Wilson classifies humans as a “Eusocial” animal (along with ants, bees, and termites). This eusociality is marked by division of labor, which leads to specialization, which leads to quantum leaps in productivity, knowledge-gathering, and creativity. It relies heavily on our ability to cooperate in groups, even as we compete and subvert on an individual level.

As mentioned above, there are advantages to not cooperating, which students of game theory know quite well. The algorithm that can lie and get away with it makes more copies, which means more liars in the next generation. The same is true for the machine that can steal. Or the machine that can wipe out its rivals through warfare and other means. The problem with these efforts is that future progeny will be in competition with each other. This is the recipe not just for more copies, but for more lives filled with strife. As we’ve seen here, these are also lives full of confusion. Humans make decisions and then lie to themselves about what they are doing. They eat cake while full, succumb to gambling and chemical addictions, stay in abusive relationships, neglect to exercise, and countless other poor habits that are reasoned away with stories as creative as they are untrue.

The vast majority of the AIs we build will not resemble the human condition. They will be smarter and less eccentric. This will disappoint our hopeful AI researcher with her love of science fiction, but it will benefit and better humanity. Driving AIs will kill and maim far fewer people, use fewer resources, and free up countless hours of our time. Doctor AIs are already better at spotting cancer in tissue scans. Attorney AIs are better at pre-trial research. There are no difficult games left where humans are competitive with AIs. And life is a game of sorts, one full of treachery and misdeeds, as well as a heaping dose of cooperation.

 

     The Future

We could easily build a self-conscious machine today. It would be very simple at first, but it would grow more complex over time. Just as a human infant first learns that its hand belongs to the rest of itself, that other beings exist with their own brains and thoughts, and eventually to surmise that Juan thinks Suzette thinks Mary has a crush on Jane, this self-conscious machine would build toward human-like levels of mind-guessing and self-deception.

But that shouldn’t be the goal. The goal should go in the opposite direction. Millions of years of competing for scarce resources have built up an algorithm in each of us with more problems than solutions. The goal should not be to build an artificial algorithm that mimics humans, but for humans to learn how to coexist more like our perfectly engineered constructs.

Some societies have already experimented along these lines. There was a recent trend of hyper-honesty, in which partners said whatever was on their minds, however nasty the thought might be (with some predictable consequences). Other cultures have attempted to divine the messiness of the human condition and improve upon it with targeted thoughts, meditations, and physical practices. Buddhism and yoga are two examples. Vegetarianism is a further one, where our algorithms start to view entire other classes of algorithms as worthy of respect and protection.

Even these noble attempts are susceptible to corruption from within. The abuses of Christianity and Islam are well documented, but there have also been sex abuse scandals in the upper echelons of yoga, and terrorism among practicing Buddhists. There will always be advantages to those willing to break ranks, hide knowledge and motivations from others and themselves, and do greater evils. Trusting a system to remain pure, whatever its founding tenets, is to lower one’s guard. Just as our digital constructs will require vigilance, so will the algorithms handed down to us by our ancestors.

The future will most certainly see an incredible expansion in the number and complexity of AIs. Many will be designed to mimic humans, as they provide helpful information over the phone and through chatbots, and as they attempt to sell us goods and services. Most will be supremely efficient at a single task, even if that task is as complex as driving a car. Almost none will become self-conscious, because such a machine would be worse at its job. Self-awareness will be useful (knowing where it is in space, how its components are functioning), but the stories we tell ourselves about ourselves, which we learned to generalize after coming up with stories about others, are not something we’re likely to see in the world of automated machines.
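A minimal sketch of that distinction (all names here are illustrative assumptions, not anything from the essay): the kind of self-awareness a working machine actually needs is a model of its own position and component health, which is enough to act safely without any module for narrating its choices to itself.

```python
from dataclasses import dataclass, field

# Functional self-awareness only: the machine tracks where it is and how
# its parts are doing, enough to act safely, with no narrative about why
# it acted. Field and method names are illustrative assumptions.

@dataclass
class SelfModel:
    x: float = 0.0
    y: float = 0.0
    component_health: dict = field(default_factory=dict)

    def report(self, component: str, ok: bool) -> None:
        """Record whether a monitored component is currently healthy."""
        self.component_health[component] = ok

    def safe_to_move(self) -> bool:
        """Move only if every monitored component reports healthy."""
        return all(self.component_health.values())

robot = SelfModel(x=2.0, y=5.0)
robot.report("lidar", True)
robot.report("left_motor", False)
print(robot.safe_to_move())  # False: it knows its own state...
# ...but there is no method here for telling stories about its choices.
```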

What the future is also likely to hold is an expansion and improvement of our own internal algorithms. We have a long history of bettering our treatment of others. Despite what the local news is trying to sell you, the world is getting safer every day for the vast majority of humanity. Our ethics are improving. Our spheres of empathy are expanding. We are assigning more computing power to our frontal lobes and drowning out baser impulses from our reptilian modules. But this only happens with effort. We are each the programmers of our own internal algorithms, and improving ourselves is entirely up to us. It starts with understanding how imperfectly we are constructed, with learning not to trust the stories we tell ourselves about our own actions, and with a dedication to removing bugs and installing new features along the way.

While it is certainly possible, we may never build an artificial intelligence that is as human as we are. And yet we may build better humans anyway.

 

 


6 responses to “How to Build a Self-Conscious Machine (And Why It’s Probably a Silly Idea)”

  1. As usual, an interesting read, Hugh. Seems to me that the internal drives of an AI may be the most important thing. Obviously it is not going to eat or have sex, but hopefully it will have a sense of morality that keeps it from turning into one of those nasty B-science-fiction-movie robots. It is not an easy thing, because, as you say, the algorithm has to make up stories, basically have curiosity. That would leave simple ideas like “do no harm” or “serve humanity” up to a great deal of interpretation that could lead to some trouble. The irony is that you probably have to make it so literal-minded that it couldn’t truly be considered conscious. Oh well, certainly something to ponder.

  2. Outstanding. The concept of the gossiping agent predictor turned inward is one of my favorite things from your Wayfinding series. The expansion here to AI is great.

    I’m not sure what happened in game 3, but AlphaGo game 2 move 37 is my favorite. It was such an odd move that the commentator duplicated it incorrectly on his own board, it was so unintuitive. Its significance only became apparent far later in the match, and it was then lauded as a brilliant move.

    We definitely live in the most interesting of times. Game2move37 and your example of the Google coder unable to divine how his program works are literal examples of Vinge’s Singularity. This is what it will look like. Baffling magic.

    I wonder if Clarke would’ve been surprised to learn that seeing the transition to advanced technology with your own eyes would still be mistaken for magic.

    Mike

  3. You are right. Our brains are not little computers or machines. (Though it is a good, simplified analogy of how our organic goo operates.) We are humans – and being human is quite good enough. We are similar to each other, yet different and unique. Like a snowflake. They all look so similar, until you see them close up. And just as a snowflake alone has little influence, put them all together and, like an avalanche, they are a mighty force.

    Being self-conscious, aware, being present in the moment, listening to that small voice within that screams, “Danger!” and taking one last look to find the laptop on the bed…makes us human. And yet, how many times do we go through a day, a moment, a season, as mindless as a machine (maybe more mindless than AI)?

    I just finished Dust, after reading Wool first, then Shift, introduced by a Mr. Howey who, it seems, is NOT Mr. Howey. You do not have to clone yourself – there are many impersonating you (on IG, on FB, on….) For some reason they are reaching out to me. Maybe there is an algorithm for this? As I shared with one of them, messaging on social media is like radio waves with a pic, a bio, and some posts. You really do not know who you are communicating with. “The lies shield the truth; the truth shields the lies when they are integrated together” (Shift). I love this line written by you! I love how you use words like a painter uses color. It’s the contrast of different colors (or light) that makes beauty, that defines or clarifies, that evokes emotion.

    It’s the use of words that expresses our conscious self to others. Or to hide our conscious self from others. Or to hide our conscious self from ourselves. Or a combination of these three and more. I cannot read your mind. You certainly cannot read mine. And even when we do try to communicate or tell stories, the telling is filtered through the stories we tell ourselves (our personal bias).

    Somehow, it has meaning – or it can have meaning. The unspoken communication with our dogs, with other people, is as meaningful as words. Paulo Coelho (The Alchemist) calls this the “universal language.” But not with dishonesty. There is no meaningful relationship where there is dishonesty. Only impressions that are certainly not accurate. The relationship with artificial intelligence is, well, artificial. Real intelligence, affected by all those other cells that are not us, by our environment, by the other, by our self-awareness or lack of it, is real and unpredictable. Machines tend to be predictable – that’s one reason we like them. This uncertainty makes us human. And being human is good enough. I happen to like humans, with the good, the bad, and the ugly parts of ourselves. Seeing a machine make a mistake certainly seems very human.

    Now that I am done with Dust, what book do you recommend I read next? What is your favorite book that you have written?

  4. The vast ocean for your ink, and it did indeed flow freely and fearlessly. The moon as your desk lamp, perhaps a comforting nightlight; the North Star as your true North – a constant guide.
    Where does one begin? This was a refreshingly brilliant and transparent read. Thank you! For the purposes of brevity, I shall list my thoughts/impressions. You don’t have to agree, just consider:
    1. The term AI may well be a misnomer that risks spawning a series of misguided concepts about what it can truly achieve. The seeds of misunderstanding and off-kilter endeavour stem from inadequate definitions of ‘intelligence’, artificial or not.
    2. The brain is a finite structure, though an organ of immense complexity, a masterclass in information processing. The mind, it can be argued, is infinite, having no physical boundary, with no beginning and no ending. I include the concept of ‘collective consciousness’ here – immeasurable, indefinable, but we sense it does exist. Can we use brain and mind interchangeably? I would think not. Some would go as far as saying that, for the physical brain, there is the metaphysical mind. So too the physical and metaphysical heart. From the latter, human conscience may spring.
    3. Consider ‘AI’ as merely a grand but finite external projection of the human brain, limited in many respects – sentience, creative imagination, spontaneity, unpredictability, insight, introspection, real-time adaptation, empathy, stress response, physical arousal, reciprocated intimacy… Consider programmable mechanical adverse thought vs. human machinations, and, too, whether a machine, no matter how advanced, can ever reflect the mind. We agree on this, I believe.
    4. How this phenomenon expands and evolves is imprisoned by and dependent upon the mind/mindset of its creator – the inventor.
    5. I found the chess master vs. the machine an interesting scenario. Why did the machine win? Was it because it was more ‘intelligent’ or ‘better at chess’? Or was it less affected by the pressure to win? It made its clean moves in an emotional vacuum. The chess master checked the clock, probably perspired a bit. The machine was its own clock. It was devoid of emotion, subjectivity, and, yes, empathy. It was at an advantage in this regard, and would be at a disadvantage in many other scenarios. Which begs the question: should we even attempt to humanize our own invention in terms of how we perceive ‘AI’? I’m very hesitant to do so.
    6. And this is where one gets a little philosophical – the penultimate point. As we interface with ‘AI’ more and more, one could paradoxically predict a devolution in human interpersonal relationships. What I woefully call The Great Fall – a critical mass of notable mishaps which by and by engenders a burgeoning appreciation for the intangibles, the non-verbal, the metaphysical, the unseen, an exquisitely heightened nuanced sensual/sentient existence, as well as the mysteries of our oneness with everything. We have already embarked on a process of perfecting that will take us back to a new beginning. I’ve heard it called the emergence of the super sapien where advanced machines/AI become useless, antiquated relics of an almost faded past.
    7. Finally, to fully appreciate Theory of Mind, it would be useful to also delve deeply into Prof. Simon Baron-Cohen’s concept of Mind Blindness. When we consider both, one is gifted an epiphany…
    Once again, thank you for sharing your mind so generously with us.
    All the best for October!

  5. Love this post!! Thank you for sharing your thoughts in this area. Your assessment of how our minds predict and justify follows from a lot of well researched science. A lot of really good science that isn’t widely known or understood. It’s great that our world has people like you in it to expand our thinking in this way. I also can’t wait to read your new shorts!!

    I do have a question of course. Is there any research you used for your trifecta of consciousness?

    It seems the thinking here is based on the blueprint being true. But how do we know your blueprint of body, language, and justification is the actual blueprint of consciousness?

    If a human has no language are they unconscious?

    If a human has no input from their body due to paralysis, are they unconscious?

    Robots today have bodies, language, and justification; are they conscious?

    Anecdotal experience tells me the answer is No for all three. So what am I not understanding about the application of the blueprint?

    While researching and thinking about this myself a couple months ago, I wrote this poem that you might enjoy…

    I am a series of multi-lateral agreements
    A collection of nested achievements
    As I explain with poor justification
    Probability defines my next destination

    Inevitably, I disagree with me internally
    But to succeed I must overcome diurnally
    And during that fleeting moment
    Of flowing harmony without foment

    I achieve partial success externally
    The satisfaction never lasting eternally
    So I start again from my last station
    A path only defined by past ambition

    Seeking internal individual fulfillment
    I and us, renegotiating our agreements

  6. Fascinating. I love the research and the results. It’s probably an illusion, but there seems to be a lot of effort in this direction, and there has been for a while. Science explains its findings, but it’s not its purpose to tell us how to feel about it.

    I don’t know how I feel about it. I’ve been aware for a very long time that I’m basically a liar. Of course, on a scale of 1 to Trump, I’m at the lower end of the scale.
    Most of that time I’ve blamed it on a love of stories. What’s the harm of bending the truth a little? A little. And the occasional whopper.

    Anyway, this isn’t about the lies, is it? It’s about the lies being a byproduct of some incredibly useful pieces of brain function. I hadn’t heard that before (surprised I was) and it does make immediate sense. Though strangely, the one time I tried to write a singularity story, machine consciousness came from a partial combination of two modules.

    That is fascinating and I want to know more about it now, but I’m drawn to the question of our evolution as well.

    Sorry to bring up Trump again, but his phenomenon is unavoidable. It seems to me that this ability to lie and bond is a key evolutionary trait. It’s arguably as big a reason as any for our success. Forming gangs is the way forward.

    I don’t think Neanderthals were much different, but I read a suggestion that evidence points to them living in smaller groups than us and being less social.

    Were we better liars and did it make us successful? How simplistic :)

    It does seem incredibly unlikely that machine consciousness would ever contain these elements. Why would it?

    Put this way – it’s quite an eye opener.
