Anthropomorphizing Machines

Some preliminary notes on AI hype and the increasingly popular idea of conscious machines. Software is error-prone, brittle, constantly breaking. It is very far off from the self-directed, self-healing nature of the brain.

To get a glimpse of how extreme some of the claims being made in the technology world are, check out this Q&A with Martine Rothblatt in the MIT Technology Review. The basic premise is that AI will soon be so advanced and human-like that it will achieve consciousness, and Rothblatt is seriously proposing that we should consider extending human rights to such an AI. The sentiment actually seems kind of sweet and well-intended to me: if machines could indeed suffer like us, extending them rights would be the humane thing to do. But the notion is so far from the mark that I have to conclude it risks watering down our sense of the value of real humans.

These ideas have been circulating for a while. For a good, concise debunking, check out Berkeley philosopher John Searle’s recent essay in the NYRB: What Your Computer Can’t Know. What Rothblatt and Ray Kurzweil (the better-known futurist with similar ideas) espouse isn’t really about science or technology. It’s actually a new religion, Transhumanism, based on the same things other religions are based on: fear of death, the need for answers, the urge to transcend. Kurzweil has made one failed prediction after another. Looking back at The Age of Spiritual Machines, I mostly feel annoyed that I took it seriously when I read it back in 1999. It’s startling that someone who publicly makes such wild predictions can become a director of engineering at Google. Then again, you can be a brilliant engineer and ignorant about other fields.

As someone educated in the humanities, now working as a programmer, I feel I can speak to the limitations of both fields. For me, the idea of conscious machines fails by both criteria, philosophical and technological. Software, for those of us who work in the industry, is not magic. It is labor-intensive, error-prone, brittle, constantly failing, constantly breaking. It is very far off from the generalized problem-solving, self-directed, self-healing nature of the brain.

It’s true that software is so complicated that it’s often at the limits of human understanding, but that is still no argument that it’s about to achieve consciousness. That’s like saying if we tie a billion knots, eventually the knot will get so big it will start to tie itself. This argument of inevitability-because-of-quantity is often invoked by the Transhumanists. But really… if a car goes fast enough, it does not turn into a plane. Cars are … different from planes. And a lot less different from planes than humans are from computers.

Some common sense is in order here. Information is inert; it has no will to do anything, because it has no body and no mind. Now, that isn’t to say that we won’t create increasingly intelligent machines, which will change our lives in all sorts of ways. But they can only be a simulacrum of consciousness, an encapsulation of intelligence rather than a new form of it.

Human consciousness has so many underpinnings that software lacks: a sense of physical limits, a fear of death, the subconscious, intuition, the ability to dream … hunger, thirst, longing, love. These are what make a human human, and there is so much about them that we don’t even begin to understand. Our ability to forget, for example, may be as important as our ability to remember.

It seems like the futurism of the Transhumanists is ultimately not a theory about the physical world but a religion, driven by the hope that consciousness can be set free from its biological basis and live on beyond our material bodies. Some Transhumanists hope to achieve immortality this way. This is purely theoretical, and as a belief it may end up having more in common with belief in The Rapture or reincarnation than with real-world AI.

In order for this immortality to work, the machines have to get good enough, but even more important, consciousness has to be something that can be reduced to pure information and encoded in binary.

We don’t know for sure. But my guess is that Kurzweil’s life-extension obsession is not going to get him to the finish line of the Singularity. No amount of vitamins will make you live that long. I could be wrong. But my gut tells me that all this is going to look pretty silly ten, or even a hundred, years from now.

The more interesting (and probable) possibilities are in how humans and machines are going to blend. Our technologies are extensions of ourselves. We can extend our voices through phones, or our feet through the automobile. The human brain, as far as I can see, is still going to be directing things.
