Superintelligence: Paths, Dangers, Strategies

The author Nick Bostrom looks at what awaits us. He points out that controlling such a creation might not be easy. If unfriendly superintelligence comes about, we won't be able to change or replace it.

This is a densely written book, with small print and 63 pages of notes and bibliography. In the introduction the author tells us twice that it was not easy to write. However, he tries to make it accessible, and adds that if you don't understand some techie terms you should still be able to grasp the meaning. He hopes that by pulling together this material he has made it easier for other researchers to get started.

So - where are we? I have to say that the book is full of lines like: "Collective superintelligence is less conceptually clear-cut than speed superintelligence. However it is more familiar empirically."

If you are used to such terms and concepts you can dive in; if not I'd recommend the Ford book first. To be fair, terms are explained and we can easily see that launching a space shuttle requires a collective intellectual effort.

No one person could do it. Humanity's collective intelligence has continued to grow, as people evolved to become smarter, as there were more of us to work on a problem, as we got to communicate and store knowledge, and as we kept getting smarter and building on previous knowledge. There are now so many of us who don't need to farm or make tools that we can solve many problems in tandem. Personally, I say that if you don't think your leaders are making smart decisions, just go out and look at your national transport system at rush hour in the capital city.

But a huge population requires a huge resource drain. So will the establishment of a superintelligence. Not just materials and energy but inventions, tests, human hours and expertise are required. Bostrom talks about a seed AI, a small system to start. He says that, in terms of a major system, the first project to reach a useful AI will win. After that the lead will be too great, and the new AI so useful and powerful, that other projects may not close the gap. Hardware, power generation, software and coding are all getting better.

And we have the infrastructure in place. We are reminded that "The atomic bomb was created primarily by a group of scientists and engineers. The Manhattan Project employed about 130,000 people at its peak, the vast majority of whom were construction workers or building operators." I turned to the chapter heading 'Of horses and men'. Horses, augmented by ploughs and carriages, were a huge advantage to human labour. But they were replaced by the automobile and the tractor.

The equine population crashed, and not to retirement homes. By the early 1950s, 2 million remained. Bostrom later reassures us: "The US horse population has undergone a robust recovery: a recent census puts the number at just under 10 million head." Capital is mentioned; yes, unlike horses, people own land and wealth.

But many people have no major income or property, or have net debt such as student loans and credit card debt. Bostrom suggests that all humans could become wealthy from AIs. But he doesn't notice that more than half of the world's wealth and resources is now owned by one percent of its people, and that the balance is tilting ever further in favour of that one percent, because they have the wealth to ensure that it does.

They rent the land, they own the debt, they own the manufacturing and the resource mines. Homeowners could be devastated by sea-level rise and climate change (not looked at in the book), but the super-wealthy can just move to another of their homes. Again, I found in a later chapter lines like: "For example, suppose that we want to start with some well-motivated human-like agents - let us say emulations.

We want to boost the cognitive capacities of these agents, but we worry that the enhancements might corrupt their motivations. One way to deal with this challenge would be to set up a system in which individual emulations function as subagents.

When a new enhancement is introduced, it is first applied to a small subset of the subagents. Its effects are then studied by a review panel composed of subagents who have not yet had the enhancement applied to them."

I have to think that the author, Director of the Future of Humanity Institute and Professor in the Faculty of Philosophy at Oxford, is so used to writing for engineers or philosophers that he loses sight of what really helps the average interested reader. For this reason I'm giving Superintelligence four stars, but someone working in the AI industry may of course feel it deserves five stars.

If so, I'm not going to argue with her. In fact I'm going to be very polite.

Jun 17 - Gavin rated it 'really liked it'. Shelves: the-long-term, insight-full.

Like a lot of great philosophy, Superintelligence acts as a space elevator: you make many small, reasonable, careful movements - and you suddenly find yourself in outer space, home comforts far below. It is more rigorous about a topic which doesn't exist than you would think possible.

I didn't find it hard to read, but I have been marinating in tech rationalism for a few years and have absorbed much of Bostrom secondhand, so YMMV.

I loved this: Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.

Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility.

This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse - including the default view, according to which we can for the time being reasonably ignore the prospect of superintelligence.

Bostrom introduces dozens of neologisms and many arguments. Here is the main scary a priori one, though:

1. Just being intelligent doesn't imply being benign; intelligence and goals can be independent.

2. Any agent which seeks resources and lacks explicit moral programming would default to dangerous behaviour. You are made of things it can use; hate is superfluous. (Instrumental convergence.)

3. It is conceivable that AIs might gain capability very rapidly through recursive self-improvement. (Non-negligible possibility of a hard takeoff.)

Of far broader interest than its title and that argument might suggest to you.

In particular, it is the best introduction I've seen to the new, shining decision sciences - an undervalued reinterpretation of old, vague ideas which, until recently, you only got to see if you read statistics, and economics, and the crunchier side of psychology.

It is also a history of humanity, a thoughtful treatment of psychometrics v genetics, and a rare objective estimate of the worth of large organisations, past and future. Superintelligence's main purpose is moral: he wants us to worry and act urgently about hypotheticals; given this rhetorical burden, his tone too is a triumph. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult.

Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible.

Some little idiot is bound to press the ignite button just to see what happens. Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the firmament. Nor is there a grown-up in sight. This is not a prescription of fanaticism. The intelligence explosion might still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem.

We need to bring all human resourcefulness to bear on its solution. I don't donate to AI safety orgs, despite caring about the best way to improve the world and despite having no argument against it better than "that's not how software has worked so far" and despite the concern of smart experts. This sober, kindly book made me realise this was more to do with fear of sneering than noble scepticism or empathy.

Robin Hanson chokes eloquently here, and for god's sake let's hope he's right.

Dec 28 - Diego Petrucci rated it 'it was amazing'.

There's no way around it: a super-intelligent AI is a threat. We can safely assume that an AI smarter than a human, if developed, would accelerate its own development, getting smarter at a rate faster than anything we'd ever seen. In just a few cycles of self-improvement it would spiral out of control. Trying to fight, or control, or hijack it would be totally useless — for a comparison, try picturing an ant trying to outsmart a human being (a laughable attempt, at best).
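To make that "few cycles of self-improvement" point concrete, here is a minimal toy sketch (my own illustration, not something from the book): the starting capability, the number of cycles, and the rule that each cycle's gain grows with current capability are all arbitrary assumptions, chosen only to show how the curve bends upward.

    # Toy illustration (not from the book): capability compounds when each
    # self-improvement cycle makes the next cycle's gain larger.
    def run_cycles(capability=1.0, cycles=12):
        history = []
        for _ in range(cycles):
            gain = 1.0 + 0.1 * capability  # assumption: smarter systems improve faster
            capability *= gain
            history.append(round(capability, 2))
        return history

    print(run_cycles())  # slow at first, then the growth runs away

Change the 0.1 to anything you like; the qualitative shape, slow then sudden, is the point.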

But why is a super-intelligent AI a threat? Well, it probably wouldn't have human qualities (empathy, a sense of justice, and so on) and would rely on a more emotion-less understanding of the world — understanding emotion doesn't mean you have to feel emotions; you can understand the motives of terrorists without agreeing with them.

There would be a chance of developing a super-intelligent AI with an insane set of objectives, like maximizing the production of chairs with no regard to the safety of human beings or the environment, totally subsuming Earth's materials and the planet itself.

Or, equally probable, we could end up with an AI whose main objective is self-preservation, and which would later annihilate the human race because of even a minuscule chance of us destroying it. With that said, it's clear that before developing a self-improving AI we need a plan. We need tests to understand and improve its moral priorities, we need security measures, we need to minimize the risk of it destroying the planet.

Once the AI is more intelligent than us, it won't take much for it to become vastly more intelligent, so we need to be prepared. We only get one chance and that's it: either we set it up right or we're done as a species. Superintelligence deals with all these problems, systematically analyzing them and providing a few frames of mind to let us solve them (if that's even possible).

Oct 21 - Bharath rated it 'liked it'.

This is the most detailed book I have read on the implications of AI, and this book is a mixed bag.

The initial chapters provide an excellent introduction to the various paths leading to superintelligence. This part of the book is very well written and also provides an insight into what to expect from each pathway.

The following sections detail the implications for each of these pathways. There is also a detailed discussion on how the dangers to humans can be limited, if at all possible.

However, considering that much of this is speculative, the book delves into far too much depth in these sections. It is also unclear what kind of audience these sections are aimed at - the biotechnologists would regard this as not containing enough depth and detail, while the general audience would find this tiring. And yet, this book might be worth a read for the initial sections.

Sep 22 - Bill rated it 'liked it'. Shelves: ai, mind.

An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history (if not the world) and manages to make it stultifyingly boring.

Artificial General Intelligence (AGI) will recursively improve itself, leading to a technological singularity and unpredictable changes to human civilization. Low probability combined with high impact generates a risk which certainly makes one wonder about the background.

Academic philosopher Nick Bostrom is far from the first to argue about the singularity, but he makes a great effort to imagine how it might come about, what we can do about it, and what the consequences would be.

He works as a professor at the University of Oxford, runs the Future of Humanity Institute there, and is very well connected to the industry — Elon Musk financed the institute, Bill Gates recommends the book, and others like Hassabis have contributed.

It is a different point of view, not a technological but a philosophical one. This makes his argumentation difficult in parts for me, as a computer scientist, to follow, because I lack some of the presupposed terms and ways of discussion.

But he is also very light on the technological side, and the book can be very easily read by educated readers with a broad understanding of relevant topics like economics, computer science, or biology. Another fact that needs to be mentioned is that the book was published in 2014. Add to this that the argumentation mostly crosses the border from non-fiction to fiction, in the way that Bostrom argues about what might happen.

Much of the book is highly speculative, and some of the discussion is really over-the-top, up to the point where I was amused in a popcorn-eating, sitting-back, entertain-me fashion. In the last part, the author discusses what must be done immediately to minimize the risks from AGI. The philosophical understanding has to be developed further in this rather new field; we are only scratching the surface of the consequences while playing with fire. As far away as AGI might be — most experts place it somewhere in the next 50 years — the risks of doing it in a catastrophic way are immensely high.

I recommend this insightful book for everyone interested in a non-SF treatment of the singularity.

May 29 - Rod Van Meter rated it 'really liked it'.

Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips and paper clip factories by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal?

It doesn't require Skynet and Terminators, it doesn't require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity's welfare is irrelevant or defined very differently than most humans today would define it.

This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. I've also been skeptical of the idea that AIs will destroy us, either on purpose or by accident.

Bostrom's book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can't yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in the mid-1970s. We also need to be prepared for the possibility that such a moratorium doesn't hold.

Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we'll get to below. Bostrom skirts the issue of whether it will be conscious, or "have qualia", as I think the philosophers of mind say. Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term "the Singularity.

I read one of Kurzweil's books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe's purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.

I'm largely allergic to that kind of hooey. I really don't see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures.

I also don't see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can't continue forever, as Danny Hillis is fond of pointing out. So perhaps my opinion is somewhat biased by a dislike of Kurzweil's circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way: Being smart is hard. And making yourself smarter is also hard.

My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from "too fast for us to notice" through "long enough for us to develop international agreements and monitoring institutions," but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore.

There are parts of his argument I find convincing, and parts I find less so. To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve.

Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI or a pet family genius to generate when given a problem.

Off the top of my head, I can think of six: [Speed] Same quality of answer, just faster. [Depth] More moves or plies searched ahead. [Data] More data held and cross-correlated. [Creativity] Genuinely new material generated. [Insight] Creative ideas validated in a technical or social context. [Values] Answers in line with what we actually want. The first three are really about how the answers are generated; the last three about what we want to get out of them.

I think this set is reasonably complete and somewhat orthogonal, despite those differences. So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are "better" in some qualitative sense.

Humans are already pretty good at projecting the trajectory of a baseball, but it's certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.

But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket.

Someone "smarter" might be able to make some interesting statistical predictions that wouldn't occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong. In chess, go, or shogi, a x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before.

Less if your pruning (discarding unpromising paths) is poor, more if it's good. Don't get me wrong -- that's a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited. Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it.

Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up. In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer. In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.
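A rough way to check that ply arithmetic (my own back-of-the-envelope sketch, with commonly quoted approximate branching factors assumed purely for illustration): the extra lookahead bought by a k-fold increase in search effort is roughly log base b of k, where b is the game's branching factor, so even a thousand-fold speedup only buys a couple of plies.

    import math

    # Extra plies searchable with `speedup` times more computation,
    # assuming a fixed branching factor and unchanged pruning quality.
    def extra_plies(speedup, branching_factor):
        return math.log(speedup, branching_factor)

    for game, b in [("chess", 35), ("shogi", 80), ("go", 250)]:
        print(f"{game}: ~{extra_plies(1000, b):.1f} extra plies from a 1000x speedup")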

Simply being able to hold more data in your head (or the AI's head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this.

Again, however, the AI's capabilities are unlikely to recede into the distance as something we can't comprehend.

We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then as we currently understand it you can only resort to repeated simulations and statistical measures.
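The arithmetic behind that "factor of a thousand gains you 10x in each dimension" figure is just a cube root; a one-line sketch (my restatement of the numbers above, assuming the data budget scales with the product of the three spatial resolutions):

    # For a 3-D grid, resolution per axis scales as the cube root of the data budget.
    data_increase = 1000
    per_axis_gain = data_increase ** (1 / 3)
    print(round(per_axis_gain, 1))  # -> 10.0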

The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend; in fact, humans design and debug them. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.

But those are just the warmup. Those are things we already ask computers to do for us, even though they are "dumber" than we are. What about the latter three categories? I'm no expert in creativity, and I know researchers study it intensively, so I'm going to weasel through by saying it is the ability to generate completely new material, which involves some random process.

You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal. For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. No implication that artists don't have insight is intended; this is just a technical distinction between phases of the operation, for my purposes here.

Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove, or at least analyze effectively, his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.

So, will someone smarter be able to do this much better? Well, it's really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have.

How much better, though? A hundred times? A million? My guess is it's closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to. Making better devices and systems of any kind requires all of the above capabilities.

You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine -- thousands, if you reach back into the supporting materials, combustion and fluid flow research.

We humans have been able to continue to innovate by building on the work of prior generations, and especially by harnessing teams of people in new ways. Unlike Peter Thiel, I don't believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we began with the low-hanging fruit, so that harvesting fruit requires more effort -- or new techniques -- with each passing generation.

The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?

Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the "recalcitrance" of the problem. I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder.

Growth in computational power won't dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don't take these numbers seriously; it's just an example.) Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence -- the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly.
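Bostrom frames this dispute with the relation "rate of change in intelligence = optimization power / recalcitrance". The toy sketch below is my own parameterization of that relation, not the book's model: the functional forms chosen for optimization power and recalcitrance are assumptions, picked only to show how the outcome flips depending on whether recalcitrance rises faster or slower than the power the system can apply to its own improvement.

    # Toy discretization of Bostrom's relation dI/dt = optimization_power / recalcitrance.
    # The functional forms below are illustrative assumptions, not taken from the book.
    def simulate(recalcitrance, steps=50, dt=1.0):
        intelligence = 1.0
        for _ in range(steps):
            optimization_power = intelligence  # assume the system mainly improves itself
            intelligence += dt * optimization_power / recalcitrance(intelligence)
        return intelligence

    fast = simulate(lambda i: 1.0)     # flat recalcitrance: runaway growth
    slow = simulate(lambda i: i ** 2)  # recalcitrance outpaces power: slow crawl
    print(f"flat recalcitrance: {fast:.3g}, rising recalcitrance: {slow:.3g}")

Whether reality looks more like the first line or the second is exactly where Bostrom and I differ.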

He is forcing me to reconsider. What about "values", my sixth type of answer, above? Ah, there's where it all goes awry. Chapter eight is titled "Is the default scenario doom?" What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process.

If it's smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips. I suppose it goes without saying that Bostrom thinks this would be a bad outcome.

Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn't hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence.

Bostrom clearly roots for humanity here. Which means it's incumbent on us to find a way to prevent this from happening. Bostrom thinks that instilling values that are actually close enough to ours that an AI will "see things our way" is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of "maximizing human happiness," does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet's carrying capacity is higher for digital than organic beings?

As long as we're talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human?

If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book. He uses a variety of names for different strategies for containing AIs, including "genies" and "oracles". Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will.

If the AI's ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture. It can then decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act.

I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the monitor's existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that's fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has.

The same will be true with carefully boxed AIs. At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological. If we can't contain them, what options do we have? After arguing earlier that we can't give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.

At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms.

In his telling, we are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.

Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable?


