
subject: What Happens After Humans Build A Computer That Can Outsmart Them?


So what happens if we ever do build computers capable of outsmarting us? Would human beings be rendered obsolete? Could humans and computers ever come to some sort of understanding? Science fiction writer Isaac Asimov was one of the first to propose safety precautions for artificial intelligence with his "Three Laws of Robotics." The first law is that a robot may not harm a human being, or through inaction allow a human being to come to harm. The second is that a robot must obey the orders given to it by humans, except where those orders would conflict with the first law. And the third is that a robot must protect its own existence, as long as doing so doesn't conflict with the first or second law. Of course, things don't go down that smoothly in Asimov's fiction.
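Read as a decision procedure, the three laws form a strict priority ordering: each law binds only so far as it doesn't conflict with the laws above it. The following is a minimal, purely illustrative Python sketch of that precedence; the Action fields and the permitted function are hypothetical names invented for this example, not anything Asimov or AI researchers actually specified.

    from dataclasses import dataclass

    # Hypothetical description of a candidate action a robot might take.
    @dataclass
    class Action:
        harms_human: bool        # would carrying this out injure a human?
        allows_human_harm: bool  # would it let a human come to harm through inaction?
        ordered_by_human: bool   # was it ordered by a human?
        endangers_self: bool     # would it destroy the robot?

    def permitted(action: Action) -> bool:
        # First Law (highest priority): never harm a human,
        # nor allow harm through inaction.
        if action.harms_human or action.allows_human_harm:
            return False
        # Second Law: obey human orders, except where that would
        # break the First Law (those cases were rejected above).
        if action.ordered_by_human:
            return True
        # Third Law (lowest priority): self-preservation,
        # subordinate to the first two laws.
        return not action.endangers_self

The ordering of the checks is the whole point: an order that would harm a human is rejected before the Second Law is even consulted, and self-preservation only counts once the first two laws are satisfied.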

Some people take the bleak view that artificial intelligences will simply have goals incompatible with human survival and goodwill, and that they will effectively destroy the human race. Others believe it is simply impossible to tell, because computers are too different from humans. Humans are products of evolution: they give birth to children and experience emotions like love, fear, and anger. Computers would have no need for any of these traits.

Still others argue that even if faster, smarter machines make the human brain obsolete as the superior intellect, there will remain an ecological niche for humans. Even so, the idea of no longer being at the top of the pecking order is an uncomfortable one.

Some experts in the field propose that we put research into producing "friendly artificial intelligence" to address the danger of, oh, being completely annihilated by a race of supercomputers. They reason that if the first AI were programmed to be friendly toward humans, it would design other, smarter computers to be friendly as well, and this could prevent any harmful AI from developing. It might just be worth our while!

In fact, in 2009 a number of researchers, experts, and analysts met in California to discuss the hypothetical ramifications of self-sufficient robots that might be able to make their own decisions. At this meeting, they talked about how these computers and robots might acquire autonomy, and to what degree they could use these abilities to pose threats. Already, military computers can choose targets to attack with weapons, and even something as simple as your everyday computer virus can evade detection by us "smarter" humans. To be continued in the next article, "Criticisms of Technological Singularity And Exponential Growth."

by: Mallory Megan



