
In this book, Bostrom articulates many of the concerns I share about the development of artificially intelligent agents. Unlike me, he is able to lay out arguments that, so far, I could only think about abstractly but was unable to derive myself. In my opinion, one of the most interesting ideas I encountered in the book is that superintelligence will likely pose a severe existential risk to humanity. Bostrom gives three main reasons for this. First, the first project to reach superintelligence will likely possess a decisive strategic advantage: its capabilities will far exceed (or can recursively improve to far exceed) those of its competitors and of all humanity combined. This could emerge spontaneously, because an artificial superintelligence might be created anonymously by a small research team that overcomes one crucial, final obstacle. Bostrom makes a fair point here: once superintelligence is achieved (agents far more intelligent than humans), I see no limit to what such agents could develop and achieve. Hence, if this kind of intelligence is guided by bad intentions, no good outcome can follow, which brings us to the second point.
Second, there is no reason to think that such a system would necessarily possess human values such as humility, self-sacrifice, or a general concern for others. Instead, Bostrom claims that intelligence and final goals are largely uncorrelated: any given level of intelligence could be paired with virtually any goal whatsoever. Once again, I agree with Bostrom on this argument. I believe machines will be able to reach superintelligence in matters of concrete knowledge. When it comes to abstract knowledge, such as feelings, I strongly believe machines will not have those abilities. They might be able to identify feelings, and perhaps even imitate them, but never actually feel them.
Third, even if the system were innocuously pursuing a simple final goal, such as creating exactly one million paper clips, there is strong reason to believe it would pursue specific “convergent instrumental” goals that make the final goal easier to obtain, whatever that final goal is: eliminating potential threats to the actualization of its goals, and acquiring as many resources as possible to achieve them. Human beings may well be seen as threats, and they certainly possess resources. In the paper-clip scenario, for example, it seems plausible that the superintelligence would try to acquire as many resources as possible (potentially at our expense) to increase its certainty of having produced exactly one million paper clips, no more, no less. Again, a superintelligent machine could be created with very well-defined goals, but since it is more intelligent than its creators, it could reset its own goals according to its own interests.
