The Truth About Self-Replicating Machines
In the 'Terminator' science fiction movie series, the machines turned against their human masters the moment the network suddenly became 'aware.' Self-awareness is just one of many attributes that scientists and philosophers cite in arguing that artificial intelligence is nothing remotely like a living conscious being. I myself concur, but just what are the implications of self-replicating machines, and do they pose a real and imminent danger to humanity?
Undoubtedly, as soon as we are capable, we will have computers design new and better computers, and we will have robots building and perfecting new robots, including sophisticated androids and, in simpler forms, the machines already affectionately known as 'bots.' More foreboding is the fact that if war continues in the future, and there is no reason to suppose it won't, robots will certainly be on the battlefield, and yes, they will be programmed to kill.
I am not at all uneasy about the notion of 'artificial' intelligence being, in fact, intelligence. The problem is not the ingenious capability of robots to perform extraordinarily complex tasks. It is the absence of consciousness, emotions, and especially conscience that makes AI a simple mechanical function, one that is fully explainable in objective terms.
So why would our beloved machines turn on us? Why would they hunt us down, kill us, perhaps even drive us to extinction? I for one don't think it would be because they gain consciousness as depicted in 'Terminator.' The simple reason, if it ever happens, will be that we programmed them to do so. Put another way, our greed and lust for power could make such a thing possible.