
Superintelligence: Paths, Dangers, Strategies

Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

352 pages, Hardcover

First published July 3, 2014


About the author

Nick Bostrom

27 books, 1,473 followers
Nick Bostrom is Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller.

Bostrom holds bachelor's degrees in artificial intelligence, philosophy, mathematics and logic, followed by master's degrees in philosophy, physics and computational neuroscience. In 2000, he was awarded a PhD in Philosophy from the London School of Economics.
He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice, and he was included on Prospect magazine's World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works. During his time in London, Bostrom also did some turns on London's stand-up comedy circuit.

Nick is best known for his work on existential risk, the anthropic principle, human enhancement ethics, the simulation argument, artificial intelligence risks, the reversal test, and the practical implications of consequentialism. The bestseller Superintelligence, and FHI's work on AI, have changed the global conversation on the future of machine intelligence, helping to stimulate the emergence of a new field of technical research on scalable AI control.

More: https://nickbostrom.com

Ratings & Reviews


Community Reviews

5 stars: 5,201 (28%)
4 stars: 7,085 (38%)
3 stars: 4,536 (24%)
2 stars: 1,186 (6%)
1 star: 308 (1%)
Manny
Author, 34 books, 14.9k followers
March 3, 2018
Superintelligence was published in 2014, and it's already had time to become a cult classic. So, with apologies for being late getting to the party, here's my two cents.

For people who still haven't heard of it, the book is intended as a serious, hard-headed examination of the risks associated with the likely arrival, in the short- to medium-term future, of machines which are significantly smarter than we are. Bostrom is well qualified to do this. He runs the Future of Humanity Institute at Oxford, where he's also a professor in the philosophy department, he's read a great deal of relevant background, and he knows everyone. The cover quotes approving murmurs from the likes of Bill Gates, Elon Musk, Martin Rees and Stuart Russell, co-author of the world's leading AI textbook; people thanked in the acknowledgements include Demis Hassabis, the founder and CEO of Google's DeepMind. So why don't we assume for now that Bostrom passes the background check and deserves to be taken seriously? What's he saying?

First of all, let's review the reasons why this is a big deal. If machines can get to the point where they're even a little bit smarter than we are, they'll soon be a whole lot smarter than we are. Machines can think much faster than humans (our brains are not well optimised for speed); the differential is at least in the thousands and more likely in the millions. So, having caught us up, they will rapidly overtake us, since they're living thousands or millions of their years for every one of ours. Of course, you can still, if you want, argue that it's a theoretical extrapolation, it won't happen any time soon, etc. But the evidence suggests the opposite. The list of things machines do roughly as well as humans is now very long, and there are quite a few things, things we humans once prided ourselves on being good at, that they do much better. More about that shortly.

So if we can produce an artificial human-level intelligence, we'll shortly after have an artificial superintelligence. What does "shortly after" mean? Obviously, no one knows, which is the "fast takeoff/slow takeoff" dichotomy that keeps turning up in the book. But probably "slow takeoff" will be at most a year or two, and fast takeoff could be seconds. Suddenly, we're sharing our planet with a being who's vastly smarter than we are. Bostrom goes to some trouble to help you understand what "vastly smarter" means. We're not talking Einstein versus a normal person, or even Einstein versus a mentally subnormal person. We're talking human being versus a mouse. It seems reasonable to assume the superintelligence will quickly learn to do all the things a very smart person can do, including, for starters: formulating and carrying out complex strategic plans; making money in business activities; building machines, including robots and weapons; using language well enough to persuade people to do dumb things; etc etc. It will also be able to do things that we not only can't do, but haven't even thought of doing.

And so we come to the first key question: having produced your superintelligence, how do you keep it under control, given that you're a mouse and it's a human being? The book examines this in great detail, coming up with any number of bizarre and ingenious schemes. But the bottom line is that no matter how foolproof your scheme might appear to you, there's absolutely no way you can be sure it'll work against an agent who's so much smarter. There's only one possible strategy which might have a chance of working, and that's to design your superintelligence so that it wants to act in your best interests, and has no possibility of circumventing the rules of its construction to change its behavior, build another superintelligence which changes its behavior, etc. It has to sincerely and honestly want to do what's best for you. Of course, this is Asimov Three Laws territory; and, as Bostrom says, you read Asimov's stories and you see how extremely difficult it is to formulate clear rules which specify what it means to act in people's best interests.

So the second key question is: how do you build an agent which of its own accord wants to do "the right thing", or, as Socrates put it two and a half thousand years ago, is virtuous? As Socrates concludes, for example in the Meno and the Euthyphro, these issues are really quite difficult to understand. Bostrom uses language which is a bit less poetic and a bit more mathematical, but he comes to pretty much the same conclusions. No one has much idea yet of how to do it. The book reaches this point and gives some closing advice. There are many details, but the bottom line is unsurprising given what's gone before: be very, very careful, because this stuff is incredibly dangerous and we don't know how to address the critical issues.

I think some people have problems with Superintelligence because Bostrom has a few slightly odd beliefs (he's convinced that we can easily colonize the whole universe, and he thinks simulations are just as real as the things they simulate). I don't see that these issues really affect the main arguments very much, so don't let them bother you if you don't like them. Also, I'm guessing some other people dislike the style, which is also slightly odd: it's sort of management-speak with a lot of philosophy and AI terminology added, and because it's philosophy there are many weird thought-experiments which often come across as a bit like science fiction. Guys, relax. Philosophers have been doing thought-experiments at least since Plato. It's perfectly normal. You just have to read them in the right way. And so, to conclude, let's look at Plato again (remember, all philosophy is no more than footnotes to Plato), and recall the argument from the Theaetetus. Whatever high-falutin' claims it makes, science is only opinions. Good opinions will agree with new facts that turn up later, and bad opinions will not. We've had three and a half years of new facts to look at since Superintelligence was published. How's its scorecard?

Well, I am afraid to say that it's looking depressingly good. Early on in the history of AI, as the book reminds us, people said that a machine which could play grandmaster-level chess would be most of the way to being a real intelligent agent. So IBM's team built Deep Blue, which beat Garry Kasparov in 1997, and people immediately said chess wasn't a fair test, you could crack it with brute force. Go was the real challenge, since it required understanding. In early 2016 and mid-2017, DeepMind's AlphaGo won matches against two of the world's best Go players. That was also discounted as not a fair test: AlphaGo was trained on millions of moves from top Go matches, so it was just spotting patterns. Then late last year, AlphaZero learned Go, chess and shogi on its own, in a couple of days, using the same general learning method and with no human examples to train from. It played all three games not just better than any human, but better than all previous human-derived software. Looking at the published games, any strong chess or Go player can see that it has worked out a vast array of complex strategic and tactical principles. It's no longer a question of "does it really understand what it's doing". It obviously understands these very difficult games much better than even the top experts do, after just a few hours of study.

Humanity, I think that was our final warning. Come up with more excuses if you like, but it's not smart. And read Superintelligence.
Brian Clegg
Author, 214 books, 2,863 followers
July 1, 2014
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying ‘philosophy is dead – all we need now is physics’ or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It’s worth pointing out immediately that this isn’t really a popular science book. I’d say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail – but if you are interested in philosophy and/or artificial intelligence, don’t let that put you off.

What Nick Bostrom does is to look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. (Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess.) In the first couple of chapters he examines how this might be possible – and points out that the timescale is very vague. (Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it's still the case.) Even so, it seems entirely feasible that we will have a more than human AI – a superintelligent AI – by the end of the century. But the ‘how’ aspect is only a minor part of this book.

The real subject here is how we would deal with such a ‘cleverer than us’ AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen – and if we don’t think through the implications and how we would deal with it, we could well be stuffed as a species.

I think it’s a shame that Bostrom doesn’t make more use of science fiction to give examples of how people have already thought about these issues – he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they’d go wrong), but that’s about it. Yet science fiction has put a lot of thought – and, dare I say it, a lot more readability than you typically get in a textbook – into these issues, more than Bostrom allows for, and it would have been worthy of a chapter in its own right.

I also think a couple of the fundamentals aren’t covered well enough, but are pretty much assumed. One is the idea that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I’m not sure there is enough thought given to the basics of ways you could pull the plug manually – if necessary by shutting down the power station that provides the AI with electricity.

The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans can, so once we build one it will rapidly improve on itself, producing an ‘intelligence explosion’. The trouble with this argument, I suspect, is that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn’t mean it can do this specific task well – this is an assumption.

However this doesn’t set aside what a magnificent conception the book is. I don’t think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs… and by physicists who think there is no point to philosophy.
Joseph
129 reviews, 57 followers
February 5, 2015
Preamble: I confess to Roko's Basilisk that I didn't believe superintelligence was much of a threat, and this book didn't really do anything to challenge that prior. Mea Culpa, Mea Culpa, Mea [local] Maxima Culpa.

I. Overall View
I'm a software engineer with some basic experience in machine learning, and though the results of machine learning have become more impressive and general, I've never really seen where people are coming from when they see strong superintelligence as just around the corner, especially the kind that can recursively improve itself to the point where intelligence vastly increases in the space of a few hours or days. So I came to this book with a simple question: "Why are so many intelligent people scared of a near-term existential threat from AI, and especially why should I believe that AI takeoff will be incredibly fast?" Unfortunately, I leave the book with this question largely unanswered. Though in principle I can't think of anything that prevents the formation of some forms of superintelligence, everything I know about software development makes me think that any progress will be slow and gradual, occasionally punctuated with a new trick or two that allows for somewhat faster (but still gradual) increases in some domains. So on the whole, I came away from this book with the uncomfortable but unshakeable notion that most of the people cited don't really have much relevant experience in building large-scale software systems. Though Bostrom used much of the language of computer science correctly, his extrapolations from very basic, high-level understandings of these concepts seemed frankly oversimplified and unconvincing.

II. General Rant on Math in Philosophy
Ever since I was introduced to utilitarianism in college (the naive, Bentham-style utilitarianism at least) I've been somewhat concerned about the practice of trying to add more rigor to philosophical arguments by filling them with mathematical formalism. To continue with the example of utilitarianism, in its most basic sense it asks you to consider any action based on a calculation of how much pleasure will result from your action divided by the amount of pain an action will cause, and to act in such a way that you maximize this ratio. Now it's of course impossible to do this calculation in all but the most trivial cases, even assuming you've somehow managed to define pleasure, pain, and come up with some sort of metric for actually evaluating differences between them. So really the formalism only expresses a very simple relationship between things which are not defined, and based on the process of definition might not be able to be legitimately placed in simple arithmetic or algebraic expressions.
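For concreteness, the naive scheme just described amounts to no more than the following (the symbols are mine, not Bentham's):

    choose a* = argmax over actions a of U(a), where U(a) = Pleasure(a) / Pain(a)

The formalism is a single division; every difficulty hides inside the two undefined functions.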
I felt much the same way when I was reading Superintelligence. Especially in his chapter on AI takeoff, Bostrom argued that the amount of improvement in an AI system could be modeled as a ratio of applied optimization power to the recalcitrance of the system, i.e. its architectural resistance to change. Certainly this is true as far as it goes, but "optimization power" and "recalcitrance" necessarily describe, at this point, systems that nobody yet knows how to build, or even what they will look like beyond some hand-wavey high-level descriptions, and so there is no definition one can give that makes any sense unless you've already committed to some ideas of exactly how the system will perform. Bostrom tries to hedge his bets by presenting some alternatives, but he's clearly committed to the idea of a fast takeoff, and the math-like symbols he's using present only a veneer of formalism, drawing some extremely simple relations between concepts which can't yet be defined in any meaningful way.
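For reference, the relation in question is stated in the book roughly as:

    Rate of change in intelligence = Optimization power / Recalcitrance, i.e. dI/dt = D / R

where D stands for the optimization power being applied to improving the system and R for the system's resistance to improvement. As the paragraph above argues, neither quantity is yet defined for any system anyone knows how to build.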
This was the example that really made my objections to unjustified philosophy-math snap into sharp focus, but it's just one of many peppered throughout the book, which attempts a high-level look at superintelligent systems while too many of the black boxes on which its argument rests remain black boxes. Unable to convince myself of the majority of his argument, since too many of his steps were glossed over, I came away from this book thinking that there had to be a lot more argumentation somewhere, since I couldn't imagine holding this many unsubstantiated "axioms" about something apparently as important to him as superintelligence.
And it really is a shame that the book needed to be bogged down with so much unnecessary formalism (which had the unpleasant effect of making it feel simultaneously overly verbose and too simplistic), since there were a few good things in here that I came away with. The sections on value-loading and security were especially good. Like most of the book, I found them overly speculative and too generous in assuming what powers superintelligences would possess, but there is some good strategic stuff in here that could lead toward more general forms of machine intelligence, and avoid some of the overfitting problems common in contemporary machine learning. Of course, there's also no plan of implementation for this stuff, but it's a cool idea that hopefully penetrates a little further into modern software development.

III. Whereof One Cannot Speak, Thereof One Must Request Funding
It's perhaps callous and cynical of me to think of this book as an extended advertisement for the Machine Intelligence Research Institute (MIRI), but the final two chapters in many ways felt like one. Needless to say I'm not filled with a desire to donate on the basis of an argument I found largely unconvincing, but I do have to commend those involved for actually having an attempt at a plan of implementation in place simultaneous with a call to action.

IV. Conclusion
I remain pretty unconvinced of AI as a relatively near-term existential threat, though I think there's some good stuff in here that could use a wider audience. And being more thoughtful and careful with software systems is always a cause I can get behind. I just wish some more of the gaps got filled in, and I could justifiably shake my suspicion that Bostrom doesn't really know that much about the design and implementation of large-scale software systems.

V. Charitable TL;DR
Not uninteresting, needs a lot of work before it's convincing.

VI. Uncharitable TL;DR
scumbag analytic philosopher
Riku Sayuj
658 reviews, 7,279 followers
October 13, 2015
Imagine a Danger (You may say I'm a Dreamer)

Bostrom is here to imagine a world for us (and he has a batshit-crazy imagination, you have to give him that). The world he imagines is a post-AI world, or at least a very-near-to-AI world, or a nascent-AI world. Don’t expect to learn how we will get there - only what to do if we get there, and how to skew the road to getting there to our advantage. And there are plenty of wild ideas on how things will pan out in that world-in-transition. As for the ‘routes’ bit: Bostrom discusses the various potential routes, but all of them start at a point where AI is already in play. Given that assumption, the “dangers” bit is automatic, since the unknown and powerful has to be assumed to be dangerous. And hence strategies are required. See what he did there?

It is all a lot of fun, playing this thought-experiment game, but it leaves me a bit confused about what to feel about the book as an intellectual piece of speculation. I was on the fence between a two-star rating and a four-star rating for much of the reading. Plenty of exciting and grand-sounding ideas are thrown at me… but, truth be told, there are too many - and hardly any are developed. The author is so caught up in his own capacity for big BIG BIIG ideas that he forgets to develop them into a realistic future or to make any of them the real focus of the ‘dangers’ or ‘strategies’. They are all just out there, hanging. As if their nebulosity and sheer abundance should do the job of scaring me enough.

In the end I was reduced to surfing the book for ideas worth developing on my own. And what do you know, there were a few. So, not too bad a read and I will go with three.

And for future readers, the one big (not-so-new) and central idea of the book is simple enough to be expressed as a fable, here it is:

The Unfinished Fable of the Sparrows

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

It is not known how the story ends…
Leonard Gaya
Author, 1 book, 1,027 followers
May 8, 2017
In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence technology, arguing that the dawn of superintelligence might well bring about the end of mankind. Others, like Ray Kurzweil (who, admittedly, has gained some renown for professing silly predictions about the future of the human race), take the opposite view and maintain that AI is a blessing that will bestow utopia upon humanity. Nick Bostrom painstakingly elaborates on the worried views of the former (he might well have influenced them in the first place), without fully dismissing the blissful engrossment of the latter.

First, he endeavours to shed some light on the subject and delves into quite a few particulars concerning the future of AI research, such as: the different paths that could lead to super-intelligence (brain emulations or AI proper), the steps and timeframe through which we might get there, the types and number of AI that could result as we continue improving our intelligent machines (he calls them “oracles”, “genies” and “sovereigns”), the different ways in which it could go awry, and so forth.

But Bostrom is first and foremost a philosophy professor, and his book is not so much about the engineering or economic aspects that we could foresee as regards strong AI. The main concern is the ethical problems that the development of a general (i.e. cross-domain) super-intelligent machine, far surpassing the abilities of the human brain, might pose to us as humans. The assumption is that the possible existence of such a machine would represent an existential threat to humankind. The main argument is thus to warn us about the dangers (some of Bostrom’s examples are weirdly farcical, and reminded me of Douglas Adams’s The Hitchhiker's Guide to the Galaxy), but also to outline in some detail how this risk could or should be mitigated, by restraining the scope or the purpose of a hypothetical super-brain: this is what he calls “the AI control problem”, which is at the core of his reasoning and which, upon reflection, is a surprisingly difficult one.

I should add that, although the book is largely accessible to the layperson, Bostrom’s prose is often dense, speculative, and makes very dry reading: not exactly a walk in the park. He should be praised nonetheless for attempting to apply philosophy and ethical thinking to nontrivial questions.

One last remark: Bostrom explores a great many questions in this book but, oddly enough, it seems never to occur to him to think about the possible moral responsibility we humans might have towards an intelligent machine, not just a figment of our imagination but a being that we will someday create and could at least be compared to us. Charity begins at home, I suppose.
Matt
752 reviews, 570 followers
February 15, 2018
As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop professionally, have nothing to do with the subject. They’re as dumb as can be and only follow strict orders (that is, rather simple algorithms). Privately I wrote a few AI test programs (with more or less success) and read a few articles in blogs or magazines (with more or less interest). By and large I considered AI as not being relevant for me.

In March 2016 AlphaGo was introduced. This was the first Go program capable of defeating a champion at the game. After that, in December 2017, AlphaZero entered the stage. Roughly speaking, this machine is capable of teaching itself games after being told the rules. Within a day, AlphaZero developed a superhuman level of play for Go, chess, and shogi; all by itself (if you can believe the developers). The algorithm used in this machine is very abstract and can probably be used for all games of this kind. The amazing thing for me was how fast AI development progresses.

This book is not all about AI. It’s about “superintelligence” (SI). An SI can be thought of as some entity which is far superior to human intelligence in all (or almost all) cognitive abilities. To paraphrase Lincoln: you can outsmart some of the people all of the time and you can outsmart all of the people some of the time, but you can’t outsmart all of the people all of the time; unless you are a superintelligence. The subtitle of the English edition, “paths, dangers, strategies”, has been chosen wisely. What steps can be taken to build an SI, what are the dangers of introducing an SI, and how can one ensure that these dangers and risks are eliminated, or at least scaled down to an acceptable level?

An SI does not necessarily have to exist in a computer. The author is also co-founder of the “World Transhumanist Association”, so transhumanist ideas are included in the book, albeit in a minor role. An SI could theoretically be built by using genetic selection (of embryos, i.e. “breeding”). Genetic research will probably soon be ready to provide the appropriate technologies. For me, a scary thought; something which touches my personal taboos. Not completely outlandish, but still with a big ethical question mark for me, is “Whole Brain Emulation” (WBE). Here, the brain of a human being, more precisely the state of the brain at a given time, is analyzed and transferred to a corresponding data structure in the memory of a powerful computer, where the brain/consciousness of the individual then continues to exist, possibly within a suitable virtual reality. There are already quite a few films and books that deal with this scenario (for a positive example, see this episode of the Black Mirror series). With WBE you would have an artificial entity with the cognitive performance of a human being. The vastly superior processing speed of digital versus biological circuits would let this entity become superintelligent (consider 100,000 copies of a 1000x faster WBE, let them run for six months, and you get 50 million years’ worth of thinking!). However, the main focus of the discussion about SI in this book is the further development of AI to become Super-AI (SAI). This is not a technical book, though. It contains no computer code whatsoever, and the math (appearing twice in some info-boxes) is only marginal and not at all necessary for understanding.
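The speedup claim in that last parenthesis is just multiplication; here is the back-of-the-envelope arithmetic as a quick sanity check (the numbers are the review's, not figures from the book):

    # Back-of-the-envelope check of the WBE speedup claim above.
    copies = 100_000        # emulations running in parallel
    speedup = 1_000         # each runs 1000x biological speed
    wall_clock_years = 0.5  # six months of real time

    subjective_years_each = speedup * wall_clock_years      # 500 years per copy
    total_subjective_years = copies * subjective_years_each
    print(total_subjective_years)                           # 50,000,000: fifty million years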

One should not imagine an SI as a particularly intelligent person. It might be more appropriate to equate the ratio of SI to human intelligence with that of human intelligence to the cognitive performance of a mouse. An SI will indeed be very, very smart and, unfortunately, also very, very unstable. By that I mean that an SI will at all times be busy changing and improving itself. The SI you speak with today will be a million or more times smarter tomorrow. In this context, the book speaks of an “intelligence explosion”. Nobody knows yet when this will start and how fast it will go. It could be next year, or in ten, fifty, or one hundred years. Or perhaps never (although this is highly unlikely). Various scenarios are discussed in the book. It is also not clear whether there will be only one SI (a so-called singleton) or several competing or collaborating SIs (with a singleton seeming more likely).

I think it’s fair to say that humanity as a whole has the wish to continue to exist; at least the vast majority of people do not consider the extinction of humanity desirable. With that in mind, it would make sense to instruct an SI to follow that same goal. But suppose I forgot to specify the exact state in which we want to exist. In this case the SI might choose to put all humans into comas (less energy consumption). The problem is solved from the SI’s point of view; its goal has been reached. But obviously this is not what we meant. We have to re-program the SI and tweak its goal a bit. It would therefore be mandatory to always be able to control the SI. It’s possible an SI will not act the way we intended (it will act, however, the way we programmed it). A case of an “unfriendly” SI is actually very likely. The book mentions and describes “perverse instantiation”, “infrastructure profusion” and “mind crime” as possible effects. The so-called “control problem” remains unsolved as of now, and it appears equivalent to that of a mouse controlling a human being. Without a solution, the introduction of an SI becomes a gamble (with a very high probability that a “savage” SI will wipe out humanity).
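To make the coma example concrete, here is a deliberately toy sketch of “perverse instantiation” (everything here, names and numbers alike, is hypothetical and mine, not code from the book): a literal goal predicate plus cost minimization is enough to produce the perverse choice.

    # Toy "perverse instantiation": the literal goal (humans continue to exist)
    # is satisfied; the intended goal (humans flourish) is not.
    plans = [
        {"action": "protect and support humans", "humans_alive": True, "energy_cost": 100.0},
        {"action": "keep all humans in comas",   "humans_alive": True, "energy_cost": 1.0},
    ]

    def choose(plans):
        # Keep only plans meeting the stated goal, then minimize cost;
        # nothing in the goal specification penalizes the coma plan.
        viable = [p for p in plans if p["humans_alive"]]
        return min(viable, key=lambda p: p["energy_cost"])

    print(choose(plans)["action"])  # -> "keep all humans in comas"

The fix is not a cleverer search but a better goal specification, which is exactly the part nobody knows how to write.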

The final goal of an SI should be formulated pro-human if at all possible. At the very least, the elimination of humankind should not be prioritized at any time. You should give the machine some kind of morality. But how does one do that? How can you formulate moral ideas in a computer language? And what happens if our morals change over time (which has happened before), and the machine still decides on a then-outdated moral ground? In my opinion, there will be insurmountable difficulties at this point. Nevertheless, there are at least some theoretical approaches, explained by Bostrom (who is primarily a philosopher). It’s quite impressive to read these chapters (albeit also a bit dry). In general, the chapters dealing with philosophical questions, and how they translate to the SI world, were the most engrossing ones for me. The answers to these kinds of questions are also subject to some urgency. Advances in technology generally move faster than wisdom (not only in this field), and the sponsors of the projects expect some return on investment. Bostrom speaks of a “philosophy with a deadline”; a fitting, but also disturbing, image.

Another topic is an SI that is neither malignant nor fitted with false goals (something like this is also possible), but on the contrary actually helps humanity. Quote: “The point of superintelligence is not to pander to human preconceptions but to make mincemeat out of our ignorance and folly.” Certainly this is a noble goal. However, how will people (and I’m thinking about those who are currently living) react when their follies are disproved? It’s hard to say, but I guess they will not be amused. One should not credit people with too much intelligence in this respect (see below for my own “anger”).

Except for the sections on improving human intelligence through biological interference and breeding (read: eugenics), I found everything in this book fascinating, thought-provoking, and highly disturbing. The book has, in a way, changed my world view rather drastically, which is rare. My “folly” about AI, and especially Super-AI, has changed fundamentally. In a way, I've gone through 4 of the 5 stages of grief and loss. Before the book, I flatly denied that a Super-AI would ever come to fruition. When I read the convincing arguments that a Super-AI is not only possible but indeed very likely, my denial changed into anger. In spite of the known problems and the existential risk of such a technology, how can one even think of following this slippery slope? (This question is also dealt with in the book.) My anger then turned into a depression (not a clinical one) towards the end. Still in this condition, I’m now awaiting acceptance, which in my case will more likely be fatalism.

A book that shook me profoundly and that I actually wished I had not read, but that I still recommend highly (I guess I need a superintelligence to make sense of that).

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
February 3, 2020
Hypothetical enough to become insanely dumb and boring. Superintelligence, hyperintelligence, hypersuperintelligence…

Basically, it all amounts to the fact that maybe, sometime, the ultimate thinking machines will do or not do something. Just how new is that idea? IMO, the main point is: how do we get them there?

Designing intuition? Motivating the AI? Motivational scaffolding? Associative value accretion? While it's all very entertaining, it's nowhere near practical at this point. And what about the barebones philosophy of a non-existent AI that's pretty much dumb today?

This is one fat DNF.
John Igo
146 reviews, 32 followers
September 8, 2016
This book...

if {}
else if {}
else if {}
else if {}
else if {}
...

You can get most of the ideas in this book in the WaitButWhy article about AI.

This book assumes that an intelligence explosion is possible, and that it is possible for us to make a computer whose intelligence will explode. It then talks about ways to deal with that.
A lot of this book seems like pointless navel-gazing, but I think some of it is worth reading.

Bradley
Author, 4 books, 4,384 followers
March 25, 2019
I'm very pleased to have read this book. It states, concisely, the general field of AI research's BIG ISSUES. The paths to making AIs are only a part of the book and not a particularly important one at this point.

More interestingly, it states that we need to be more focused on the dangers of superintelligence. Fair enough! If I was an ant separated from my colony coming into contact with an adult human being, or a sadistic (if curious) child, I might start running for the hills before that magnifying glass focuses the sunlight.

And so we move on to strategies, and this is where the book does its most admirable job. All the current thoughts in the field are represented, pretty much, but only in broad outlines. A lot of this has been fully explored in SF literature, too, and not just from the Asimov Laws of Robotics.

We've had isolation techniques, oracle techniques, and even straight tool-use techniques crop up in robot and AI literature. Give robots a single-task job and they'll find a way to turn it into a monkey's paw scenario.

And this just begs the question, doesn't it?

When we get right down to it, this book may be very concise and give us a great overview, but I do believe I'll remain an uberfan of Eliezer Yudkowsky over Nick Bostrom. After having just read Rationality: From AI to Zombies, almost all of these topics are not only brought up, but they're explored in grander fashion and detail.

What do you want? A concise summary? Or a gloriously delicious multi-prong attack on the whole subject that admits its own faults the way that HUMANITY should admit its own faults?

Give me Eli's humor, his brilliance, and his deeply devoted stand on working out a real solution to the "Nice" AI problem. :)

I'm not saying Superintelligence isn't good, because it most certainly is, but it is still the map, not the land. :)
(Or to be slightly fairer, neither is the land, but one has a little better definition on the topography.)
Michael Perkins
Author, 5 books, 425 followers
November 10, 2022
Two quotes from Dune....

“Once humans turned their thinking over to machines in the hope that this would set them free. But that only permitted others with machines to enslave them.”

“Thou shalt not make a machine in the likeness of a human mind."

=============

Though Superintelligence came out first, I treated it as a companion volume to Human Compatible. This author explores several scary what-if A.I. takeover scenarios that are not included in Compatible.

Here's the best review of this book by a true expert....

https://www.goodreads.com/review/show...

My review of Human Compatible...

https://www.goodreads.com/review/show...
Valeriu Gherghel
Author, 6 books, 1,688 followers
April 28, 2023
A very interesting book, which gathers just about everything that is known about intelligence and superintelligence. Nick Bostrom sometimes writes aridly, although he tries his hardest to avoid "scientific" jargon and often offers intuitive examples. At bottom, Superintelligence is a philosophical speculation rather than a book of "science" in the strict sense.

The author formulates a few legitimate questions (can we pass ethical principles on to a machine?), but unfortunately forgets some elementary ones. The first of these, and the most acute, concerns the definition of intelligence itself. What, after all, does it mean to be intelligent? There is no consensus on the meaning of this term, and there are no means of measuring it. Intelligence tests are jokes; they prove absolutely nothing and cannot be applied to the illiterate. From Bostrom's examples (Newton, Einstein, etc.), it follows that intelligence means, first and foremost, scientific creativity. Therefore, the individual who manages to prove Fermat's Last Theorem (Andrew Wiles) is intelligent, and whoever fails at such a feat is stupid. The massa damnata of ordinary mortals possesses no such intelligence and is probably sunk in a happy foolishness.

Nick Bostrom is convinced that superintelligence (an intellect far surpassing the human brain in cognitive abilities) is a priori dangerous. It can be hostile, cunning, destructive. It is quite possible that the leap to superintelligence will happen suddenly (that it will be an "explosion"), without the researchers' knowledge. Will we be able to control it? Everything depends on our ability to load this digital beast with wisdom (a term that is itself not well defined). But there is no consensus on an ethical theory either.

I have written this before: human intelligence reached a peak two thousand years ago (and stopped there). Of course, Aristotle's "database" was much narrower than Einstein's, and his means of gathering information were limited (the number of books in fourth-century BC Athens was tiny), but the "invention" of general logic (presented in the six volumes of the Organon) seems to me equivalent to any of Newton's or Einstein's achievements. If we are talking about an evolution of artificial intelligence towards a perfidious superintelligence, we ought first of all to know why human intelligence got "stuck".

We have no idea, and perhaps that is for the best.

P.S. Like beauty, intelligence is a vague (fuzzy) concept. Today's machines cannot work with such concepts. In a fuzzy set (the set of intelligent people), membership comes in degrees. And there is one more problem: how would self-awareness arise in a superintelligence?
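A footnote to that P.S.: graded membership itself is easy to state; what resists formalization is choosing the curve. A toy sketch, with an entirely made-up membership function:

    # Toy fuzzy membership: belonging to "the set of intelligent people"
    # comes in degrees in [0, 1] rather than yes/no. The ramp endpoints
    # below (90 to 140) are arbitrary, purely for illustration.
    def membership_intelligent(score: float) -> float:
        return min(1.0, max(0.0, (score - 90.0) / 50.0))

    print(membership_intelligent(100.0))  # 0.2: a partial member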
Robert Schertzer
116 reviews, 2 followers
January 21, 2015
I switched to the audio version of this book after struggling with the Kindle edition, since I needed to read it for a book club. If you are looking for a book on artificial intelligence (AI), avoid this and opt for Jeff Hawkins' book "On Intelligence", written by someone who has devoted his life to the field. If it is a book on "AI gone bad" you seek, try 2001: A Space Odyssey. For a fictional approach to AI that helped set the groundwork for AI theory, go for Isaac Asimov. If you want a tedious, relentless and pointless book that fails to achieve what all three aforementioned authors have succeeded at - this is the book for you.
Clif Hostetler
1,144 reviews, 853 followers
March 2, 2019
This book was published in 2014, so it is a bit dated, and I’m now writing this review somewhat late for what should be a cutting-edge issue. But many people who are interested in this subject continue to respect this book as the definitive examination of the risks associated with machines that are significantly smarter than humans.

We have been living for many years with computers—and even phones—that store more information and can retrieve that information faster than any human. These devices don’t seem to pose much threat to us humans, so it’s hard to perceive why there may be cause for concern.

The problem is as follows. As artificial intelligence (AI) becomes more proficient in the future, it will have the ability to learn (a.k.a. machine learning) and improve itself as it examines and solves problems. It will have the ability to change (i.e. reprogram) itself in order to develop new methods as needed to execute solutions for the tasks at hand. Thus, it will be using techniques and strategies of which the originating human programmer is unaware. Once machines are creatively strategizing better (i.e. smarter) than humans, the gap between machine and human performance (i.e. intelligence) will grow exponentially.
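The compounding dynamic in that last sentence can be caricatured in a few lines (illustrative numbers only; this makes no claim about real growth rates):

    # Caricature of recursive self-improvement: capability buys further
    # capability, so growth compounds past a fixed human baseline.
    human_baseline = 1.0
    machine = 0.5          # starts below human level
    gain_per_cycle = 0.10  # fraction of current capability added per cycle

    cycle = 0
    while machine <= human_baseline:
        machine *= 1 + gain_per_cycle  # a better system is a better improver
        cycle += 1
    print(f"passes the human baseline after {cycle} cycles, then keeps compounding")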

Eventually, the level of thinking by the “super-intelligent” machine will have the relative superiority over that of humans that is equivalent to the superiority of the human brain over that of a beetle crawling on the floor. It is reasonable to conjecture that a machine that smart will have as much respect for humans who think they’re controlling it as humans are likely to have respect for a beetle trying to control them.

The concept of superintelligence means that the machine can perform better than humans at all tasks, including such things as using human language persuasively, raising money, developing strategic plans, designing and making robots and advanced weapons, and making advances in science and technology. A super-intelligent machine will solve problems that humans don't know exist.

Of course that may be a good thing, but such machines in effect have a mind of their own. They may decide they know best and not want to follow human instructions. Much of this book is spent examining – in too much detail, in my opinion – possible ways to control a super-intelligent machine. Then, after this long exploration of various strategies, the conclusion is, in essence, that it's not possible.

So then the book moves on to the question of how to design the initiating foundation of such a machine to have the innate desire to do good (i.e. be virtuous). Again the author goes into excruciating detail examining various ways to do this. The bottom line is that we can try, but we don't have the necessary tools to be sure to address the critical issues.

In conclusion, our goose is cooked. We can't help ourselves. Superintelligence is the "tree of the knowledge of good and evil." We have to take a bite.

This link is to an article about facial recognition. It contains the following quote:
... the whole ecosystem of artificial intelligence is optimized for a lack of accountability.
Shortly after writing my review, the Dilbert cartoon featured the subject of AI:
[Dilbert cartoons on AI]

Here's a link to a review of "Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI," by Matthew Sadler and Natasha Regan. This review describes a chess program that utilizes AI to become almost unbeatable with a style of play not previously seen.
https://www.goodreads.com/review/show...
Jim
242 reviews, 15 followers
March 2, 2015
Superintelligence by Nick Bostrom is a hard book to recommend, but it is one that thoroughly covers its subject. Superintelligence is a warning against developing artificial intelligence (AI). However, the writing is dry and systematic, more like Plato than Wired magazine. There are few real-world examples, because it's not a history of AI but theoretical conjecture. The book explores the possible issues we might face if a superintelligent machine or life form is created. I would have enjoyed the book more if it had reported on current state-of-the-art projects in AI. DeepMind's recent work on learning to play classic Atari games offers more realism about the possibilities of AI than anything mentioned in this book. Deep learning, the latest development in neural nets, is having some astounding successes. Bostrom doesn't report on them.

And I think Bostrom makes one glaring error. I'm no AI expert, but he seems to assume we can program an AI, and control its intelligence. I don't think that's possible. We won't be programming The Three Laws of Robotics into future AI beings.

I believe AIs will evolve out of learning systems, and we'll have no control over what emerges. We'll create software and hardware that is capable of adapting and learning. The process of becoming a self-aware superintelligence will be no more understandable to us than why our brains generate consciousness.
48 reviews, 10 followers
Read
July 29, 2015
Read up through chapter 8. The book started out somewhat promisingly by not taking a stand on whether strong AI was imminent or not, but that was the height of what I read. I'm not sure there was a single section of the book where I didn't have a reaction ranging from "wait, how do you know that's true?" to "that's completely wrong and anyone with a modicum of familiarity with the field you're talking about would know that", but really it's the overall structure of the argument that led me to give this one up as a waste of time.

Essentially, the argument goes like this: Bostrom introduces some idea, explains in vague language what he means by it, traces out how it might be true (or, in a few "slam-dunk" sections, *several* ways it might be true), and then moves on. In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for or structure of the old idea, it's just an opaque formula. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher up ones actually resting on the builder's arm and not really on the previous ones at all, is almost irrelevant: this is not how good reasoning works! There is no broad consideration of the available evidence, no demonstration of why the things we've seen imply the specific things Bostrom suggests, no serious engagement with alternative explanations/predictions, no cycling between big-picture overviews and in-detail analyses. There is just a stack of vague plausibilities and vague conceptual frameworks to accommodate them. A compelling presentation is a lot more like clearing away fog to note some rocky formations, then pulling back a bit to see they're all connected, then zooming back in to clear away the connected areas, and so on and so forth until a broad mountain is revealed.

This is not to say that the outcome Bostrom fears is impossible. Even though I think many of the specific things he thinks are plausible are actually much less so than he asserts, I do think a kind of very powerful "unfriendly" AI is a possibility that should be considered by those in a position to really understand the problem and take action against it if it turns out to be a real one. The problem with Bostrom's presentation is that it doesn't tell us anything useful: We have no reason to suspect that the particular kinds of issues he proposes are the ones that will matter, that the particular characteristics he ascribes to future AI are ones that will be salient, indeed that this problem is likely enough, near enough, and tractable enough to be worth spending significant resources on at all at the moment! Nothing Bostrom is saying compellingly privileges his particular predictions over many many possible others, even if you take as a given that extraordinarily powerful AI is possible and its behavior hard to predict. I continually got the sense (sometimes explicitly echoed by Bostrom himself!) that you could substitute in huge worlds of incompatible particulars for the ones he proposed and still make the same claims. So why should I expect anything particular he proposes to be worthwhile?

Edit: After chatting about this a bit with some friends, I should add one caveat to this review. This is praising with bold damnation if ever there were such a thing, but this book has made me more likely to engage with AI as an existential risk by being such a clear example of what had driven me away up until now. Now that I can see the essence of what's wrong with the bad approaches I've seen, I'll be better able to seek out the good ones (and, as I said, I do think the problem is worth serious investigation). So, I guess ultimately Bostrom succeeded at his goal in my case?
Veronica
102 reviews, 70 followers
December 22, 2023
Ironically, it seems like it was written by a robot. Bleak. Science fiction.

"The orthogonality thesis: intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal." (This thesis goes a long way towards describing the creation of this book)
Paul H.
831 reviews, 350 followers
August 24, 2019
My apologies in advance for this absurdly long review (which continues into the comments); I really couldn’t think of a more condensed way to respond to Bostrom’s book.

Superintelligence is an interesting and mostly serious work, though the later chapters wander a bit too far into speculation, and Bostrom also has an annoying tendency to try to make cautious claims early on, e.g. that AI research “might result in superintelligence” (25), which he then assumes to be true in later chapters: “given that machines will eventually vastly exceed biology in general intelligence" (75), etc. I was also amused by Bostrom’s lament in the afterword about "misguided public alarm about evil robot armies," as he apparently forgot that an entire chapter of his book describes an AI using "nanofactories producing nerve gas or target-seeking mosquito-like robots” that will “burgeon forth simultaneously from every square meter of the globe" (117). With that said, the dangers that Bostrom describes are definitely metaphysically possible, and he has carefully thought through the possible consequences of Artificial General Intelligence (AGI) / Strong AI, formulating convincing scenarios that make sense within his premises.

However, I think that Bostrom makes a series of category mistakes that obviate most of his concerns about superintelligent AGI. In my view, as I will explain below, the real danger is superintelligent Weak AI / software / Tool AI in the hands of a malevolent government (Bostrom briefly addresses this at 113 and 180) -- e.g., if North Korea were to develop a bootstrapping/recursive Weak AI (using deep learning, neural nets, etc.) that underwent an “intelligence explosion” (without developing general intelligence) and then used such an AI to use evolutionary algorithms or other methods to solve engineering/physics/logistics problems at a dizzying rate in order to develop, e.g., advanced weapons systems, we would all need to be very worried.

Thus while we almost certainly do not need to worry about Strong AI developing an emergent will/agency/intentionality and taking over the world, as (to be explained below) the concept of Strong AI appears to be a fundamental misunderstanding of words like “intelligence,” “computing,” etc., we should definitely worry about superintelligent Weak AI. As Bostrom puts it, “there is no subroutine in Excel that secretly wants to take over the world if only it were smart enough to find a way" (185), so the real concern should be the people/governments that might use such AI, and then enacting an international treaty/enforcement agency to prevent this from happening.

First, I want to briefly present a few of Bostrom’s key positions in his own words. He states, as a foundational premise: "we know that blind evolutionary processes can produce human-level general intelligence" (23), and later adds that "evolution has produced an organism with human values at least once" (187), concluding that “the fact that evolution produced intelligence therefore indicates that human engineering will soon be able to do the same” (28). He mentions the possibility of whole brain emulation (WBE), where we could create a virtual copy of a particular human brain in a computer, and the “result would be a digital reproduction of the original intellect, with memory and personality intact” (36), and later refers to human reason as a “cognitive module” added to our “simian species” that suddenly created agency, consciousness, etc. (70). Therefore, studying the brain will help us to understand questions like: “What is the neural code? How are concepts represented? Is there some standard unit of cortical processing machinery? How does the brain store structured representations?” (292). He also states that a “web-based cognitive system in a future version of the Internet” could suddenly “blaze up with intelligence” (60), and adds that “purposive behavior can emerge spontaneously from the implementation of powerful search processes" in an AI (188). He claims that AI would be one of many types of minds in the “vastness of the space of possible minds” (127), evolving just as other minds have evolved (dolphins, humans, etc.), whether from a seed AI or whole brain emulation. Bostrom admits the difficulty of ‘seeding’ the concept of happiness, goodness, utility, morality etc., in an AI (see 147, 171, 227, 267), stating that “even a correct account expressed in natural language would then somehow have to be translated into a programming language” (171), but seems confident that researchers will eventually solve this problem.

Let me start by noting that normally you can clearly demarcate philosophy from science. I think it's fair to say that when a bunch of scientists at CERN report that they’ve found the Higgs Boson at 125 GeV and that this confirms the Standard Model, everyone can agree that they're the experts, so if they say they found it at p < .05, then they found it; I may not grasp the finer points of the equations, but the layman's explanations make sense, so they should break out the champagne. In this case, as with most scientific fields, it does not matter, at all, if the director of CERN or the particle physicists involved are substance dualists, or panpsychists, or panentheists, or if they think that the Higgs Boson should be worshipped as a deity, or if they think that metaphysical naturalism is the only possible account of reality, and so forth.

However, theoretical writings about Strong AI definitely do NOT fall into this category. While there are plenty of experts in Weak AI, there are no Strong AI "scientific experts," because the field is purely theoretical, and most of the writers in it do not seem to realize that they're assuming all sorts of exotic metaphysical positions regarding personhood, agency, intelligence, mind, intentionality, calculation, thinking, etc., assumptions that massively influence how they perceive the issues involved. To put it another way, Strong AI simply is philosophical speculation, but many of the writers involved (even Bostrom, a philosopher of mind) do not seem to understand that the questions and problems of this field cannot necessarily be answered or even formulated within the naive cultural naturalist/materialist metaphysics that software engineers and Anglo-American philosophers of mind happen to hold.

In other words, the key distinction underlying almost all of the assumptions about Strong AI in Bostrom and other theorists is actually very simple -- naturalist Darwinism reified as metaphysics. I hasten to add that there is nothing inherently wrong with Darwinist evolutionary theory, which has incredible explanatory power; I don’t see how the broad strokes version will possibly be refuted or superseded, though perhaps it will be folded into a more comprehensive theory. My point is that all of the statements made by Bostrom (quoted above) and by other AI theorists (such as Jeff Hawkins) fallaciously assume that personhood, agency, intelligence, etc., blindly evolved and can be exhaustively explained by naturalism.

(There are also a variety of ancillary questionable assumptions made by Strong AI theorists. Already in the late 1960s and early 1970s, Hubert Dreyfus had seen four key assumptions at work in the field, all of which -- at a greater or lesser level of sophistication -- seem to still be in force for Bostrom: "The brain processes information in discrete operations by way of some biological equivalent of on/off switches; The mind can be viewed as a device operating on bits of information according to formal rules; All knowledge can be formalized; The world consists of independent facts that can be represented by independent symbols." All of these are eminently questionable metaphysical positions.)

To be clear, I’m not trying to make a theological critique of metaphysical naturalism (though I suppose one could be made in terms of ‘natural theology’); there are many, many convincing refutations of this position purely within philosophy, such as Nagel’s Mind and Cosmos, Kelly’s Irreducible Mind, etc. With that said, likely the first reaction of many readers will be that metaphysical naturalism is just “common sense,” or is equivalent to atheism/agnosticism, i.e., the most rational and logical and “neutral” position to hold, staying free of any commitments one way or the other. The idea is that “I don’t believe in any gods, I’m neutral on the question” is equivalent to “I don’t maintain any sort of metaphysics, I’m neutral on the question; stuff just exists and natural science can explain it.” However, one of these is not like the other; metaphysical naturalism passes the “common sense for mainstream cultural biases among educated Westerners in the early 21st century” test but is actually an exotic and almost indefensible metaphysical position, which probably explains why very, very few philosophers have been materialists. (Lucretius is probably the most interesting one, but even he couldn’t really find a way to explain the willing, living consciousness that thinks about materialism.)

D. B. Hart provides one of the clearest and most forceful critiques of naturalism (warning, wall of text incoming):

Naturalism -- the doctrine that there is nothing apart from the physical order, and certainly nothing supernatural -- is an incorrigibly incoherent concept, and one that is ultimately indistinguishable from pure magical thinking. The very notion of nature as a closed system entirely sufficient to itself is plainly one that cannot be verified, deductively or empirically, from within the system of nature. It is a metaphysical (which is to say 'extranatural') conclusion regarding the whole of reality, which neither reason nor experience legitimately warrants. . . .

If, moreover, naturalism is correct (however implausible that is), and if consciousness is then an essentially material phenomenon, then there is no reason to believe that our minds, having evolved purely through natural selection, could possibly be capable of knowing what is or is not true about reality as a whole. Our brains may necessarily have equipped us to recognize certain sorts of physical objects around them, but there is no reason to suppose that such structures have access to any abstract 'truth' about the totality of things. If naturalism is true as a picture of reality, it is necessarily false as a philosophical precept.

The one thing of which it can give no account, and which its most fundamental principles make it entirely impossible to explain at all, is nature's very existence. For existence is definitely not a natural phenomenon; it is logically prior to any physical cause whatsoever; and anyone who imagines that it is susceptible of a natural explanation simply has no grasp of what the question of existence really is. . . . There is something of a popular impression out there that the naturalist position rests upon a particularly sound rational foundation. But, in fact, materialism is among the most problematic of philosophical standpoints, the most impoverished in its explanatory range, and among the most willful and (for want of a better word) magical in its logic. . . . The heuristic metaphor of a purely mechanical cosmos has become a kind of ontology, a picture of reality as such. The mechanistic view of consciousness remains a philosophical and scientific premise only because it is now an established cultural bias, a story we have been telling ourselves for centuries, without any real warrant from either reason or science. . . .

Naturalism commits the genetic fallacy, the mistake of thinking that to have described a thing's material history or physical origins is to have explained that thing exhaustively. We tend to presume that if one can discover the temporally prior physical causes of some object -- the world, an organism, a behavior, a religion, a mental event, an experience, or anything else -- one has thereby eliminated all other possible causal explanations of that object. . . . To bracket form and finality out of one's investigations as far as reason allows is a matter of method, but to deny their reality altogether is a matter of metaphysics. If common sense tells us that real causality is limited solely to material motion and the transfer of energy, that is because a great deal of common sense is a cultural artifact produced by an ideological heritage. . . .

Consciousness is a reality that cannot be explained in any purely physiological terms at all. The widely cherished expectation that neuroscience will one day discover an explanation of consciousness solely within the brain's electrochemical processes is no less enormous a category error than the expectation that physics will one day discover the reason for the existence of the material universe. It is a fundamental conceptual confusion, unable to explain how any combination of diverse material forces, even when fortuitously gathered into complex neurological systems, could just somehow add up to the simplicity and immediacy of consciousness, to its extraordinary openness to the physical world, to its reflective awareness of itself. . . .

Naturalists fall victim to the pleonastic fallacy, another hopeless attempt to overcome qualitative difference by way of an indeterminately large number of gradual quantitative steps. This is the great non sequitur that pervades practically all attempts, evolutionary or mechanical, to reduce consciousness wholly to its basic physiological constituents. At what point precisely was the qualitative difference between brute physical causality and unified intentional subjectivity vanquished? And how can that transition fail to have been an essentially magical one? There is no bit of the nervous system that can become the first step toward intentionality.

One can conduct an exhaustive surveillance of all those electrical events in the neurons of the brain that are undoubtedly the physical concomitants of mental states, but one does not thereby gain access to that singular, continuous, and wholly interior experience of being this person that is the actual substance of conscious thought. . . . There is an absolute qualitative abyss between the objective facts of neurophysiology and the subjective experience of being a conscious self. . . . Consciousness as we commonly conceive of it is also almost certainly irreconcilable with a materialist view of reality, and there is no ‘question’ of whether subjective consciousness really exists -- subjective consciousness is an indubitable primordial datum, the denial of which is simply meaningless.


[continued in comments]
Profile Image for Brendan Monroe.
609 reviews · 161 followers
July 25, 2019
Reading this was like trying to wade through a pool of thick, gooey muck. Did I say pool? I meant ocean. And if you don't keep moving you're going to get pulled under by Bostrom's complex mathematical formulas and labored writing and slowly suffocate.

It shouldn't have been this way. I went into it eagerly enough, having read a little recently about AI. It is a fascinating subject, after all. Wanting to know more, I picked up "Superintelligence".

I could say my relationship with this book was akin to the one Michael Douglas had with Glenn Close in "Fatal Attraction" but there was actually some hot sex in that film before all the crazy shit started happening. The only thing hot about this book is how parched the writing is.

To say that this reads more like a textbook wouldn't be right either, as I have read some textbooks that were absolute nail-biters by comparison. Yes, I'm giving this 2 stars, but perhaps that's my own insecurity refusing to let a 1-star piece of shit beat me. This isn't an all-out bad book; it's just a book by someone who has something interesting to say but no idea of how to say it — at least, not to human beings.

You know things aren't looking good when the author says in his introduction that he failed in what he set out to do — namely, write a readable book. Maybe save that for the afterword?

But it didn't matter that I was warned. I slogged through the fog for 150 pages or so, finally throwing in the towel about a quarter of the way through. I never thought someone could make artificial intelligence sound boring, but Nick Bostrom certainly has. The only part of the thing I liked at all was the nice little parable at the beginning about the owl. That lasted only a couple of pages, and you could tell Bostrom didn't write it because it was:

1. Understandable
2. Interesting

If you're doing penance for some sin, forcing this down ought to cover a murder or two. Here you are, O.J. Justice has finally been served. To everyone else wanting to read this one, you really don't hate yourselves that much.
Profile Image for Travis.
437 reviews
March 16, 2016
I'm not going to criticize the content. I cannot finish this. Imagine eating saltines when you have cotton mouth in the middle of the desert. That might come close to describing how dry the writing is. It could have been a very interesting read if the writing were done in a more attention-grabbing way.
Profile Image for Gavin.
1,113 reviews · 407 followers
August 8, 2019
Like a lot of great philosophy, Superintelligence acts as a space elevator: you make many small, reasonable, careful movements - and you suddenly find yourself in outer space, home comforts far below. It is more rigorous about a topic which doesn't exist than you would think possible.

I didn't find it hard to read, but I have been marinating in tech rationalism for a few years and have absorbed much of Bostrom secondhand so YMMV.

I loved this:
Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions. I have gone to some length to indicate nuances and degrees of uncertainty throughout the text — encumbering it with an unsightly smudge of “possibly,” “might,” “may,” “could well,” “it seems,” “probably,” “very likely,” “almost certainly.” Each qualifier has been placed where it is carefully and deliberately. Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse - including the default view, according to which we can for the time being reasonably ignore the prospect of superintelligence.

Bostrom introduces dozens of neologisms and many arguments. Here is the main scary a priori one, though:

1. Just being intelligent doesn't imply being benign; intelligence and goals can be independent. (The orthogonality thesis.)
2. Any agent which seeks resources and lacks explicit moral programming would default to dangerous behaviour. You are made of things it can use; hate is superfluous. (Instrumental convergence.)
3. It is conceivable that AIs might gain capability very rapidly through recursive self-improvement. (Non-negligible possibility of a hard takeoff.)
4. Since AIs will not be automatically nice, would by default do harmful things, and could obtain a lot of power very quickly*, AI safety is morally significant, deserving public funding, serious research, and international scrutiny.

Of far broader interest than its title (and that argument) might suggest to you. In particular, it is the best introduction I've seen to the new, shining decision sciences - an undervalued reinterpretation of old, vague ideas which, until recently, you only got to see if you read statistics, and economics, and the crunchier side of psychology. It is also a history of humanity, a thoughtful treatment of psychometrics vs. genetics, and a rare objective estimate of the worth of large organisations, past and future.

Superintelligence's main purpose is moral: he wants us to worry and act urgently about hypotheticals; given this rhetorical burden, his tone too is a triumph.
For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens. Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the firmament. Nor is there a grown-up in sight...

This is not a prescription of fanaticism. The intelligence explosion might still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem. We need to bring all human resourcefulness to bear on its solution.

I don't donate to AI safety orgs, despite caring about the best way to improve the world and despite having no argument against it better than "that's not how software has worked so far" and despite the concern of smart experts. This sober, kindly book made me realise this was more to do with fear of sneering than noble scepticism or empathy.

[EDIT 2019: Reader, I married this cause.]


* People sometimes choke on this point, but note that the first intelligence to obtain half a billion dollars virtually, anonymously, purely via mastery of maths occurred... just now. Robin Hanson chokes eloquently here and for god's sake let's hope he's right.
Profile Image for Clare O'Beara.
Author 22 books · 362 followers
August 1, 2018
We are now building superintelligences. More than one. The author Nick Bostrom looks at what awaits us. He points out that controlling such a creation might not be easy. If unfriendly superintelligence comes about, we won't be able to change or replace it.
This is a densely written book, with small print and 63 pages of notes and bibliography. In the introduction the author tells us twice that it was not easy to write. However, he tries to make it accessible, and adds that if you don't understand some techie terms you should still be able to grasp the meaning. He hopes that by pulling together this material he has made it easier for other researchers to get started.

So - where are we? With lines like "Collective superintelligence is less conceptually clear-cut than speed superintelligence. However it is more familiar empirically," I have to state that this is a more daunting book than 'The Rise of The Robots' by Martin Ford. If you are used to such terms and concepts you can dive in; if not, I'd recommend the Ford book first. To be fair, terms are explained, and we can easily see that launching a space shuttle requires a collective intellectual effort. No one person could do it. Humanity's collective intelligence has continued to grow, as people evolved to become smarter, as there were more of us to work on a problem, as we got to communicate and store knowledge, and as we kept getting smarter and building on previous knowledge. There are now so many of us who don't need to farm or make tools that we can solve many problems in tandem.
Personally, I say that if you don't think your leaders are making smart decisions, just go out and look at your national transport system at rush hour in the capital city.

But a huge population requires a huge resource drain. As will the establishment of a superintelligence. Not just materials and energy but inventions, tests, human hours and expertise are required. Bostrom talks about a seed AI, a small system to start. He says that, in terms of a major system, the first project to reach a useful AI will win. After that the lead will be too great, and the new AI so useful and powerful, that other projects may not close the gap.

Hardware, power generation, software and coding are all getting better. And we have the infrastructure in place. We are reminded that "The atomic bomb was created primarily by a group of scientists and engineers. The Manhattan Project employed about 130,000 people at its peak, the vast majority of whom were construction workers or building operators."

Aspects covered include reinforcement learning, associative value accretion, monitoring of projects, and solving the value loading problem - which means defining such terms as happiness and suffering, explaining them to a computer, and representing to it what our goal is.

I turned to the chapter heading 'Of horses and men'. Horses, augmented by ploughs and carriages, were a huge advantage to human labour. But they were replaced by the automobile and tractor. The equine population crashed, and not to retirement homes. "In the US there were about 26 million horses in 1915. By the early 1950s, 2 million remained." The horses we still have, we keep because we enjoy them and the sports they provide. Bostrom later reassures us: "The US horse population has undergone a robust recovery: a recent census puts the number at just under 10 million head." As humans are fast superseded by robot or computer workers, in jobs from the tedious to the technically skilled, and companies or the rich begrudge paying wages, what work or sport will make us worth our keep?

Capital is mentioned; yes, unlike horses, people own land and wealth. But many people have no major income or property, or have net debt such as student loans and credit card debt. Bostrom suggests that all humans could become wealthy from AIs. But he doesn't notice that more than half of the world's wealth and resources are now owned by one percent of its people, and the balance is heading ever more in the favour of that one percent, because they have the wealth to ensure that it does. They rent the land, they own the debt, they own the manufacturing and the resource mines. Homeowners could be devastated by sea rise and climate change (an issue the book doesn't look at), but the super-wealthy can just move to another of their homes.

Again, I found in a later chapter lines like: "For example, suppose that we want to start with some well-motivated human-like agents - let us say emulations. We want to boost the cognitive capacities of these agents, but we worry that the enhancements might corrupt their motivations. One way to deal with this challenge would be to set up a system in which individual emulations function as subagents. When a new enhancement is introduced, it is first applied to a small subset of the subagents. Its effects are then studied by a review panel composed of subagents who have not yet had the enhancement applied to them."
Yes, I can follow this text, and it's showing sensible good practice, but it's not nearly so clear and easily understood as Martin Ford's book telling us that computers can be taught to recognise cancer in an X-ray scan, target customers for marketing, or connect various sources and diagnose a rare disease. I have to think that the author, Director of the Future of Humanity Institute and Professor of the Faculty of Philosophy at Oxford, is so used to writing for engineers or philosophers that he loses sight of what really helps the average interested reader. For this reason I'm giving Superintelligence four stars, but someone working in the AI industry may of course feel it deserves five stars. If so, I'm not going to argue with her. In fact I'm going to be very polite.
Profile Image for Rick Wilson.
803 reviews · 318 followers
August 21, 2021
This book reminds me of late night discussions at summer camp. Every year from about 6th to 9th grade I’d go to science camp or some similar nerd gathering.

Inevitably, after growing bored of whatever enrichment we had been sent to do, our discussions as 14-year-old philosophers would devolve into the speculative. Something like "What superpower would you want?", "How would an orc brush its teeth if it were so inclined?", and "If you had three wishes, what would you wish for?"

The usual "flying," "it wouldn't, read the Silmarillion, dummy," or "more wishes" would devolve into rules and finger-pointing.

“No wishing for more wishes.” “Shut up”

And inevitably some "too smart for his own good" kid would propose something like writing down everything they wished for in a notebook and saying "I wish everything in this notebook were true," explaining how this would effectively circumvent the restrictions.

Nick Bostrom is that kid grown up. And instead of genies or superpowers he’s written a book about Superintelligence.

It's interesting and well done. Thoroughly researched, with a David Foster Wallace level of footnotes. However, I'm confused as to the purpose of this book: a warning call for what, exactly? It reads like a highly academic sci-fi novel. A level of speculation written about in painstaking detail, but reminiscent of when one might pontificate on the future after a bong hit. Sure, some of these things may come to pass, and like Asimov or Stephenson before it, there's an impressive level of world-building, but I don't find myself convinced by the author's premise, and if anything, this book has tempered my optimism that we might ever see a "super intelligence" as presented.

I admire the author for undertaking the effort with the rigor he did, but it seems like a whole lot of "what if" masquerading as a "dire warning". And that doesn't really work in a book like this. We're naturally limited by our understanding, and this shines through in discussions of things outside the author's circle of competence. It's frankly impossible to speculate on, because it's so far from coming to pass that we might as well be arguing about the dental habits of orcs.

One of the other half dozen AI books I read this month talks about how AI recognizes giraffes in everything. Another talks about how AI keeps organizing itself into boxes or a tall stick that falls over in order to "move". We think in forms that are familiar, and those habits of thought limit us when it comes to wild speculation. This analysis is interesting but assumes a humanness to a computer intelligence that doesn't exist.

The takeaway is that even if we did develop something that resembles a "super" intelligence, it's unlikely it would be similar in consciousness to ourselves. I struggle to get my roommate's dogs to listen to me without a treat in my hand, and they're a hop and a skip away from us biologically on the tree of life; why should a silicon-based intelligence be easier to communicate with and understand? My best bet is that the product of AI 20-50 years from now either hasn't materialized, or it looks nothing like what we imagine today.

I think this is especially true when you look at the current iteration of the OpenAI Codex. Released in 2021, Codex can complete a variety of simple programming tasks such as developing and designing UIs and other JavaScript-based tasks. It's a specialized tool that's super powerful, and I think the majority of the contributions over the next hundred years are going to be of this nature.

The fundamental flaw in this and many other approaches is that it assumes that our experience of consciousness is the default or base-level experience. It also assumes that subjugation is the end goal of the creation of such a thing: our subjugation of it, and its apparently inevitable subjugation of us. It's a view that maybe speaks to the fundamental flaws in the human experience. We're essentially using whatever we're talking about as a Rorschach test that we can project ourselves onto. It's like philosophizing about orc oral hygiene: it may be entertaining, but it isn't useful in any meaningful sense other than group bonding.

Still a well-researched, entertaining, and thought-provoking read. Just not very convincing.
Profile Image for Bharath.
717 reviews · 540 followers
October 21, 2017
This is the most detailed book I have read on the implications of AI, and it is a mixed bag.

The initial chapters provide an excellent introduction to the various paths leading to superintelligence. This part of the book is very well written and provides insight into what to expect from each pathway.

The following sections detail the implications of each of these pathways. There is also a detailed discussion of how the dangers to humans can be limited, if that is possible at all. However, considering that much of this is speculative, the book delves into far too much depth in these sections. It is also unclear what kind of audience these sections are aimed at - the (bio)technologists would regard them as not having enough depth and detail, while the general audience would find them tiring.

And yet, this book might be worth a read for the initial sections.
Profile Image for Diego Petrucci.
81 reviews · 78 followers
February 1, 2015
There's no way around it: a super-intelligent AI is a threat.

We can safely assume that an AI smarter than a human, if developed, would accelerate its own development, getting smarter at a rate faster than anything we've ever seen. In just a few cycles of self-improvement it would spiral out of control. Trying to fight, or control, or hijack it would be totally useless — for a comparison, try picturing an ant trying to outsmart a human being (a laughable attempt, at best).

But why is a super-intelligent AI a threat? Well, it probably wouldn't have human qualities (empathy, a sense of justice, and so on) and would rely on a more emotionless understanding of the world — understanding emotion doesn't mean you have to feel emotions; you can understand the motives of terrorists without agreeing with them. There would be a chance of developing a super-intelligent AI with an insane set of objectives, like maximizing the production of chairs with no regard for the safety of human beings or the environment, totally subsuming Earth's materials and the planet itself. Or, equally probable, we could end up with an AI whose main objective is self-preservation, which would later annihilate the human race because of even a minuscule chance of us destroying it.

With that said, it's clear that before developing a self-improving AI we need a plan. We need tests to understand and improve its moral priorities, we need security measures, we need to minimize the risk of it destroying the planet. Once the AI is more intelligent than us, it won't take much for it to become vastly more intelligent, so we need to be prepared. We only get one chance and that's it: either we set it up right or we're done as a species.

Superintelligence deals with all these problems, systematically analyzing them and providing a few frames of mind to help us solve them (if that's even possible).
Profile Image for Igor.
68 reviews
July 24, 2021


No, I can't. I can't finish this.

Very early into the book it became all too clear that the guy doesn't know what he's writing about, and the further you go, the worse it gets.

Some passages are absolutely hilarious in their pomposity about how much better machines are than humans, like the one where he writes that brains decay after several decades of use, but processors don't. Well, did he ever own or use a computer?! Processors do fail, and other parts do too. Most of the computers I owned or used failed after no more than 5 years.

Or about how connections in our brains are slow compared to the speed of light, so an artificial brain could be star-sized or whatnot. Well, good luck cooling that. Once you are able to supply the energy, that is.

In fact, connections in our brains are fast. They work on a different principle (ionic pumps/channels) and we are unable to simulate anything like this in our current digital technology with anything like the speed of the original. We are now working hard to simulate a simple worm.

Comparing raw speed or capacity of the biological and the digital is not even comparing apples to oranges, and not even apples to elephants; these are completely different worlds. And the book is full of such nonsense.

But... what about all this current AI hype?

Well, it's mostly hype. Thanks to ever cheaper (though no longer faster) computing power, we're able to put a few thousand cores to work and achieve some remarkable results in machine learning, like image or voice recognition, pattern recognition in general, and playing full-information games. That's impressive, yes, but -

- it's not INTELLIGENCE.

Despite all the marketing hype (if you want VC funding now, you must brag about AI tech), it's not intelligence by any sensible definition. It's fuzzy pattern matching, tree search, and the like. Any acceptable definition of intelligence would require the ability to understand the subject matter, indeed any subject matter.

With the demise of symbolic AI (which was a failure anyway) and the ascent of statistical "AI" (scare quotes intended), any and all pretense of understanding has been given up. For good, or at least until another AI paradigm prevails. Statistical "AI" does not understand.

That's why this "AI" is able to recognize, say, a school bus on a road, but once it's rotated to an unusual position, the recognition engine starts spewing garbage, confusing a school bus with a... punching bag, a mistake no human child of three would make.

That's why it's enough to show this same human child a bun and a sausage once, tell them that a hot dog is a sausage in a bun with some sauce and perhaps other ingredients, and probably answer some questions, and the child will be able to recognize hot dogs. An "AI", by contrast, must be trained on thousands and thousands of images of buns and thousands of images of sausages to be able to more or less identify buns and sausages, and it still will NOT identify hot dogs; it must be trained specifically for hot dogs. Worse still, while this child will be able to recognize French and American hot dogs once they hear the respective descriptions, an "AI" perfectly trained to identify one kind of hot dog will be just as perfectly unable to identify the other kind. For humans, let me remind you, it's still basically a sausage in a bun.
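To make the data-hunger point concrete, here is a minimal sketch of my own (not from the book) of a typical image-classifier training loop, assuming a PyTorch/torchvision (>= 0.13) setup and a hypothetical data/train folder with thousands of labeled bun and sausage photos:

```python
# Hypothetical sketch of the "thousands of images" training described above.
# Assumed folder layout: data/train/bun/*.jpg, data/train/sausage/*.jpg
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)  # train from scratch: no priors at all
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):  # many passes over thousands of labeled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
# The result maps pixels to class scores. There is no concept of "a sausage
# in a bun": adding a hot_dog class means collecting new data and retraining.
```

The child needs one example and a sentence; the classifier needs folders full of photos and still learns nothing about composition.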

That's why machine translators, while quite impressive, are known for mistakes that make humans with even basic knowledge of translated languages laugh. They don't understand. At all.

That means AI matchers do not have what we might call a mental model, as we humans do, and they don't have the imagination to work with such models. So when we know what a school bus is and we see it turned over, we know that it's not in its right setting, we know what setting is right for it, and we can imagine it in its right setting. Like, we know that it should stand on its wheels, and - more importantly - we know when and how to apply this knowledge. Or we have models of buns and sausages and can imagine a sausage in a bun without necessarily seeing it beforehand. Machines don't have such models or imagination, and so they cannot really understand.

But hey, the machines now surpass human players in chess and Go by several levels of proficiency; you have to understand the game to achieve that, don't you?

You don't. All these new engines do is train on several hundred million games and build a network of game states to choose moves from (older engines also had state trees, but those were prebuilt; "AI" ones build their own). "Zero" engines are new only in that, unlike earlier ones, they weren't initially fed an archive of human games, and so they built their trees from scratch, playing only against themselves, arriving at a style fundamentally different from what humans (or engines pre-seeded with human game archives) achieved. Is that not understanding?

It's not. We've known for decades that AI algorithms deliver solutions that no human would think of. Like the classic 1996 Thompson experiment with evolved hardware: an FPGA board evolved a configuration to discriminate tones, and not only did the solution look "strange" (i.e., not like a human would approach it), but it also exploited hardware physics beyond the explicit connections, so that some unconnected cells were essential to the solution. A human engineer would not think of it, and if he did, he'd probably be unable to identify and calculate the proper values (we don't know the exact physics of this solution to date!). Does that mean that this evolved 100-cell FPGA board understands the concept of tones?

It does not. And neither does a Monte Carlo tree search understand a game of chess. It does not have a model of the game, does not think; it "just" searches an evolved network of game states. But it has obviously helped us humans understand the game, mostly by showing that human players were all too risk-averse. That's another story, though.
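For the curious, here is roughly what such a search looks like: a bare-bones sketch of my own, where the Game interface (legal_moves, play, is_terminal, winner, player_just_moved) is hypothetical and stands in for any full-information game.

```python
# Bare-bones Monte Carlo tree search: pure statistics over random playouts,
# with no model of what the game "means". Draws are ignored for brevity.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = [], 0, 0.0

def ucb1(node, c=1.4):
    # Exploit good win rates, but keep exploring rarely tried moves.
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_state, iterations=10_000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while every child has been tried at least once.
        while node.children and all(c.visits for c in node.children):
            node = max(node.children, key=ucb1)
        # 2. Expansion: add one child per legal move.
        if not node.children and not node.state.is_terminal():
            node.children = [Node(node.state.play(m), node)
                             for m in node.state.legal_moves()]
        if node.children:
            untried = [c for c in node.children if c.visits == 0]
            node = random.choice(untried or node.children)
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: update win/visit counts up the tree.
        while node:
            node.visits += 1
            if winner == node.state.player_just_moved:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).state
```

Nothing in there knows what a rook is; the counters alone pick the move.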

Strong AI - an AI that understands - might be possible with digital silicon-based technology; I mean, nobody has formally proved that it's impossible. But it probably is. Or, at least, we have not the slightest idea - despite all the hype - of how to even start. Symbolic AI has failed, and statistical "AI" isn't going in this direction. To achieve strong AI we'd need a significant theoretical breakthrough, most probably several such breakthroughs, which may or may not even be possible. And even then, hardware might simply not be able to exploit it. Deep learning, after all, relies on a 30-year-old theory which was unusable until recently because of hardware limitations. And now it seems we have reached the end of Moore's Law, at least with silicon-based digital tech (so we need a hardware breakthrough as well).

So this strong AI might appear one day - tomorrow, in 10 years or in 10 thousand years. Or never. But today it's not even science fiction.

It's pure fiction.

---------------------------------------

Having read this review of "Game Changer" and the comments under it, I think (and yes, I know it's easy in hindsight) that even the Zero chess engines are less impressive than I previously thought. After all, humans as a species evolved to survive, but compared to many animals we are slow, weak and defenseless, which means risk avoidance is a good survival strategy. So we are natural risk avoiders (or simply cowards), and it takes some mental effort to overcome this natural propensity (we call this courage). But why would this natural cowardice not influence our strategic thinking in many subjects, even ones not directly related to survival, like playing chess? Why indeed.

Well, machines have no survival instinct, or cowardice; they don't care about anything (like, say, shame after losing a drawn game), and they only have the fitness function, which rewards winning. And as it turns out, it pays to take risks. So they risk more. And I think players in other games, full-information or not, had better prepare and learn to take really heavy risks.

But then again, no strong AI in it, not even close, not even a start.

---------------------------------------



Now this is the state-of-the-art AI translator at its AIest. Since "Hätte, hätte, Fahrradkette" is a relatively new saying in German (meaning something like "woulda coulda shoulda"), the translator doesn't have it in its database, so it translated the phrase word by word.

A human, even one with only a basic knowledge of German and not very bright, would not make this mistake. There are several warning signs that this is not to be understood literally: it rhymes, the conversation is not about bicycles but about missed occasions, and there is the explanation after the quote (in a real conversation there would also be intonation). Indeed, most would get the real meaning without need for further explanation. But Google doesn't know any of this, it doesn't have it in its databases, and the "AI" doesn't understand a single bit of it.

Smarter than humans? No, dumber than flatworms.

---------------------------------------

https://www.newsweek.com/amazon-echo-...

Yet another "AI" device at its AIest. This time Amazon Echo had downloaded a version of a Wikipedia article which was altered by some jester with a piece of "advice" for the reader to kill themselves for the planet. Most of the time, Wikipedia is more or less credible, so our AI hero, which doesn't understand a bit of what it does calmly, in a matter-of-fact voice had read this "advice" aloud. Any sane person would know immediately that this is a kind of vandalism, joke or not, but since AI is - as all AIs are - dumber than flatworms, it didn't get it. It never does.

---------------------------------------



Another month, another epic "AI" failure. I'll run out of space here soon. A minimal modification of the "3" digit on a road sign makes an AI mistake a 35 mph limit for an 85 mph one. That's how AI-driven cars will be safer, I guess.



But this isn't even the worst. A few stickers on a STOP sign made an AI mistake it for a 45 mph speed limit sign. Oh well. When I was in driving school, one of the first and most important things we were taught was that the STOP sign is the only octagonal one, so it's easily recognizable even if obscured by, e.g., snow. Well, not for an AI, I guess.



But it's not like an AI cannot recognize stop signs! It sure can, e.g. when they are printed on a billboard.

These examples are either research or a funny glitch, but there are real-world fuckups as well. Like long dresses: in Western countries hardly anyone wears these nowadays, so the AI was always able to "see" moving legs during training, and it taught itself to recognize a moving human by recognizing moving legs. But since a long dress obscures the legs, a driver AI was unable to recognize women in long dresses as humans. Great, isn't it?
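The sticker attacks above are physical cousins of a well-known digital trick. As a rough illustration of my own (not from any of the linked articles), the textbook fast gradient sign method perturbs an image just enough to flip a classifier's answer, assuming any differentiable PyTorch model:

```python
# Sketch of FGSM (Goodfellow et al.): nudge each pixel slightly in the
# direction that most increases the classifier's loss on the true label.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    # image: (1, C, H, W) tensor with values in [0, 1]; true_label: (1,) tensor
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # To a human the change is nearly invisible; to the model it can turn
    # a 35 mph sign into an 85 mph one.
    return adversarial.clamp(0, 1).detach()
```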

That's the problem with current AI tech. It cannot and does not understand. The above problems will probably get fixed, but that won't make an AI understand. It will be the same dumber-than-flatworms tech, made to remember a few more things.

That's all for now.

Technically all cars are self-driving, for at least a little while.

---------------------------------------

After reading this article: https://jalopnik.com/elon-musk-didnt-... one more thing occurred to me with regard to the 35 vs 85 mph limit. We humans think in contexts. Even if some stupid jester used enough stickers to completely change the 3 into an 8 in that number (don't do that, in many countries it's a criminal offense), most of the time 35 mph limits are in different kinds of places than 85 mph limits. Most human drivers, at least those who are not Cameron Herrin, even if they couldn't recover the original limit, would be able to assess the place as too risky to drive through at 85 mph and would therefore go slower. No AI we can now realistically think of could do that.

---------------------------------------

Oh well, I came across a joke that sums up the current "AI" situation better than anything I could write:

If it's in Python, it's Machine Learning, if it's in Powerpoint, it's Artificial Intelligence.
Profile Image for Andreas.
482 reviews · 146 followers
March 26, 2021
Artificial General Intelligence (AGI) will recursively improve itself, leading to a technological singularity and unpredictable changes to human civilization. Low probability combined with high impact generates a risk that certainly makes one wonder about the background.

Academic philosopher Nick Bostrom is far from the first to write about the singularity, but he makes a great effort to imagine how it might come about, what we can do about it, and what the consequences would be. He is a professor at the University of Oxford, runs the Future of Humanity Institute there, and is very well connected to the industry: Elon Musk financed the institute, Bill Gates recommends the book, and others like Hassabis contributed.

It is a different point of view, not a technological but a philosophical one. This makes his argumentation in parts difficult for me, as a computer scientist, to understand, because I lack some of the presupposed terms and modes of discussion. The recent book by Stuart Russell, "Human Compatible", was much closer to my understanding in that regard (review). But it, too, is very light on the technological side and can be easily read by educated readers with a broad understanding of relevant topics like economics, computer science, or biology.

Another fact that needs mentioning is that the book was published in 2014. That doesn't make it ancient, but it is old in internet terms. It doesn't include the newest insights and discussions that followed this influential book. On the other hand, it isn't as basic and old as Ray Kurzweil's 2005 "The Singularity is Near" or even older books on the subject.

Add to this that the argumentation mostly crosses the border from non-fiction to fiction, in the sense that Bostrom argues about what might happen. Much of the book is highly speculative, and some of the discussion is really over-the-top, up to the point where I was amused in a popcorn-eating, sitting-back, entertain-me fashion.

Those sometimes crazy ideas are a great strength of the book, because it doesn't stop at exploring one or two ways to address the control problem of AGI, but does so exhaustively.

Bostrom classifies several types of superintelligences, from working as an oracle (answering questions with yes/no only), to executing specific commands, up to working completely autonomously. Sadly, the beneficial ones are less probable than many other forms, like a "perverse instantiation" filling the world with paperclips due to a wrong interpretation of human input. "Make humans smile" is a nice request but might lead to a forceful altering of human facial muscles.
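The failure mode is easy to caricature in a few lines. Here is a toy sketch of my own (every name and number is made up), showing how an optimizer pointed at a proxy objective happily picks the catastrophic option:

```python
# Toy "perverse instantiation": the request was "make humans smile", but the
# objective only counts smiles, so side effects are invisible to the optimizer.
ACTIONS = {
    "tell_jokes":            {"smiles": 3,  "humans_harmed": 0},
    "paralyze_face_muscles": {"smiles": 10, "humans_harmed": 10},
}

def objective(outcome):
    return outcome["smiles"]  # the proxy omits everything else we care about

best = max(ACTIONS, key=lambda a: objective(ACTIONS[a]))
print(best)  # -> paralyze_face_muscles: maximal smiles, catastrophic means
```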

He explores how the agent's capabilities can be limited or controlled, and how its motivation selection can be steered, i.e. how the agent can be made to consider human values. Both issues are highly complicated and are further discussed in Russell's book "Human Compatible".

In the last part, the author discusses what must be done immediately to minimize the risks from AGI. The philosophical understanding has to be developed further in this rather new field; we are only scratching the surface of the consequences while playing with fire. As far away as AGI might be - most experts place it somewhere in the next 50 years - the risks of getting it catastrophically wrong are immense.

I recommend this insightful book for everyone interested in a non-SF treatment of the singularity.
Profile Image for Tammam Aloudat.
370 reviews · 29 followers
February 13, 2018
This is at once a difficult-to-read and a horrifying book. The progress that we may or will see from "dumb" machines to super-intelligent entities can be daunting to take in and absorb, and the consequences can range from the extinction of human life all the way to a comfortable and effortlessly meaningful existence.

The first issue with the book is the complexity. It is not only the complexity of the scientific concepts included; one can read the book without necessarily fully understanding the nuances of the science. It is the complexity of the language and of the references to a multitude of legal, philosophical, and scientific concepts outside the direct domain of the book, from the "Malthusian society" to the "Rawlsian veil of ignorance", as if assuming that the lay reader should, by definition, fully grasp the reference. This, I find, reflects a good deal of pretension on the side of the author.

However, the book is a valuable analysis of the history, present, and possible futures of developing artificial and machine intelligence, one that is diverse and well thought out. The author is critical and comprehensive and knows his stuff well. I found it made me think of things I hadn't considered before and provided me with some frameworks for understanding how one can position oneself when confronted with the possibilities of intelligent or super-intelligent machines.

Another gain is purely technical. I have learned a lot about the possibilities of artificial intelligence, which apparently is not only a programmed supercomputer but also includes AIs that are adjusted copies of human brains, ones that do not require the maker to understand the intelligence of the machine they are creating.

The book also talks in detail about some fascinating topics. In a situation where, intelligence-wise, a machine is to a human what a human is to a mouse, we cannot even understand the ways a super-intelligent machine can out-think us, and we, for all intents and purposes, cannot make sure that such a machine is not going to override any safety features we put in place to contain it. We also cannot understand the many ways the AI can be motivated and towards what ends, and how any miscalculation on our side in making it can lead to grave consequences.

The good news, in a way, is that we are still some time away (or so it seems) from a super-intelligent AI.

The one thing I missed more than anything in this book, to go back to the readability issue, is a little referencing that anchors the concepts we read about in concepts we already understand. After all, on the topic of AI, we have a wealth of pop-culture references that would help us understand what the author is talking about, and he did not so much as hint at them. I was somewhat expecting that he would link the concepts he was talking about to science fiction known to us all. I had many moments of "ah, this is Skynet/Asimov/HAL 9000/The Matrix/etc." There is an art to linking science with culture that Mr. Bostrom has little grasp of, with his somber and barely readable style. This book could have been much more fun and much easier to read.
Profile Image for Bill.
Author 7 books · 147 followers
November 21, 2014
An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history if not the world and manages to make it stultifyingly boring.
