Monday, February 29, 2016

The Doomsday Invention

http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

After reading the article linked above, "The Doomsday Invention" by Raffi Khatchadourian, I had reactions to many of the points and observations made by the writer, as well as by Bostrom. The first two sections were the most striking to me, and both gave me a lot to consider.
I found the first part very interesting because I have long wondered about eerily similar questions. I have always been fascinated by the universe we inhabit. When I really think about how many planets are out there, I too have come to believe that there is almost certainly life somewhere else. There has to be. So it was incredibly provoking to consider why we have yet to encounter life from other planets. The odds suggest there are multiple civilizations and forms of life that began ages before humans evolved, which would mean they should be far more technologically advanced than we are, so why have they not made themselves evident to us? One answer to this speculation, raised by Bostrom, is somewhat worrisome: maybe those civilizations went through the same kind of technological evolution that humans have gone through, and are still going through, but once they reached a certain level of advancement, it caused the entire civilization's demise. If that is true, then at what point do humans make the leap to that level and ultimately cause our own demise? And if it is true, one hope we may have is the astronomically improbable chance that Earth is in fact the only planet with life. But let's not get too comfortable with that hope.
The second part of the article gave me the feel of The Terminator movies. Years ago, when I first watched them, they raised the same questions presented here. I agree with Bostrom that advancing technology too far could certainly be catastrophic. Humans dominate and control this planet simply because we are currently the most capable beings that inhabit it. If we create something more capable than ourselves, there is no reason to think we are entirely safe from a takeover. Much of this fear depends on how A.I. is developed and advanced. If a machine can think as a human does, take in experiences, teach itself, and improve itself, it could surpass any limitations humans may have put on it. Another worry about creating superintelligent machines is that they would replace the need for humans. If these machines surpassed our level of intelligence, there would be no need for us in anything; the machines would essentially regard us as inferior beings, much as we regard animals. It sounds like a crazy sci-fi idea, but what is even scarier is how close it actually is to reality. I think it is important for humans to look further ahead at the implications of what we may be creating, instead of focusing only on how it might help in the short term. As Bostrom says, breakthroughs are often unpredictable, so what if we accidentally stumble onto the next breakthrough and it ultimately leads to our extinction?
