
Edge Question

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence

As the world becomes ever more dominated by technology, John Brockman’s latest addition to the acclaimed and bestselling “Edge Question Series” asks more than 175 leading scientists, philosophers, and artists: What do you think about machines that think?

The development of artificial intelligence has been a source of fascination and anxiety ever since Alan Turing formalized the concept in 1950. Today, Stephen Hawking believes that AI “could spell the end of the human race.” At the very least, its development raises complicated moral issues with powerful real-world implications—for us and for our machines.

In this volume, recording artist Brian Eno proposes that we’re already part of an AI: global civilization, or what TED curator Chris Anderson elsewhere calls the hive mind. And author Pamela McCorduck considers what drives us to pursue AI in the first place.

On the existential threat posed by superintelligent machines, Steven Pinker questions the likelihood of a robot uprising. Douglas Coupland traces discomfort with human-programmed AI to deeper fears about what constitutes “humanness.” Martin Rees predicts the end of organic thinking, while Daniel C. Dennett explains why he believes the Singularity might be an urban legend.

Provocative, enriching, and accessible, What to Think About Machines That Think may just be a practical guide to the not-so-distant future.

576 pages, Paperback

First published January 1, 2015


About the author

John Brockman

61 books · 602 followers
John Brockman is an American literary agent and author specializing in scientific literature. He established the Edge Foundation, an organization that brings together leading edge thinkers across a broad range of scientific and technical fields.

He is the author and editor of several books, including The Third Culture (1995), The Greatest Inventions of the Past 2,000 Years (2000), The Next Fifty Years (2002), and The New Humanists (2003).

He has the distinction of being the only person to have been profiled on Page One of the "Science Times" (1997) and the "Arts & Leisure" (1966), both supplements of The New York Times.

Ratings & Reviews


Community Reviews

5 stars: 100 (18%)
4 stars: 152 (27%)
3 stars: 192 (34%)
2 stars: 84 (15%)
1 star: 24 (4%)
Mario the lone bookwolf
805 reviews · 4,787 followers
October 18, 2019
That's the only book of this series I wouldn't recommend, simply because it's far too redundant. After about a third of the book, I began to skim and skip and could hardly find more than about a dozen new ideas.

As a comparison: on average I get a lot of impulses, ideas, and concepts out of these great books.

Why did it fail? I don't know who had the idea to limit the range of possible answers to such a small scale. All the other parts of this series had main questions that could be answered with great individual freedom. This time the topic reduced the possibilities for creative answers to just a few, so that very similar or even identical ideas were reproduced again and again. I hope it will never happen again, because this series could go on forever, and I don't know of a comparable publishing concept that packs the knowledge of experts from so many fields into such a compelling mix.
Jim
Author · 7 books · 2,057 followers
November 6, 2021
It's a 5-star read with great narration, but I highly recommend getting the ebook to study, too. It took me most of the year to get through it. That's not because it is a bad book. It's just packed with almost 200 short essays, each giving me too much to think about. I could only listen to one, maybe a few, at a time & then I needed to think them over. Some I did further research on. The viewpoints are diverse & many are unique. I find that last point incredible since I've been reading SF about AI for more than half a century. Brockman managed to collect essays not only from scientists working in the field, but also from diverse sciences, critics, psychologists, & others who are now finding AI in their lives. There's stuff in here even SF authors haven't touched on. I find that surprising even though it is an Edge question - the bleeding edge of a new technology.

Since it is new tech, it is discovering things about itself at an incredibly fast pace. This is fueled not only by Moore's Law, but also by other sciences such as neurology, psychology, & even zoology. Yes, new AI construction is borrowing lessons from the study of slime mold & bird flocking behaviors even as faster processors & burgeoning storage give it more power.

There is some scary stuff, especially surrounding unintended consequences. Some are obvious, such as forgetting to put in a hardwired command to shut down or follow other specific rules. If the AI decides that turning itself off means it can't complete its task, then it might not unless the shutdown command is prioritized. If you tell your car to get to the airport as fast as possible, you'd still prefer it stayed on the road instead of heading cross-country through a crowded park while running over other humans & dogs at its fastest speed.
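To make that priority point concrete, here's a toy sketch in Python (my own illustration, not anything from the book; the Task and Agent classes are made up for the example): the shutdown check sits above the task objective, so "I can't finish my job if I'm switched off" never gets a vote.

    # Toy illustration only: a control loop where shutdown outranks the task.
    class Task:
        def __init__(self, steps):
            self.remaining = steps
        def done(self):
            return self.remaining == 0
        def next_action(self):
            self.remaining -= 1
            return "keep working"

    class Agent:
        def __init__(self, task):
            self.task = task
            self.shutdown_requested = False
        def step(self):
            if self.shutdown_requested:      # checked first, unconditionally
                return "halt"
            return "idle" if self.task.done() else self.task.next_action()

    agent = Agent(Task(steps=10))
    agent.shutdown_requested = True
    print(agent.step())  # "halt", even though the task is unfinished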

As many of the essays make clear, the AI has no conception of the 'real world'. This is encapsulated perfectly in WarGames (1983) when Matthew Broderick's character asks the computer "Is this a game or is it real?" & the computer replies "What's the difference?" To an AI, our 'real' world is indistinguishable from an artificial construct.

Most seem to doubt that AI will result in a Terminator scenario, since that anthropomorphizes the field too much. As Wittgenstein said, "If a lion could speak, we could not understand him," meaning its world view would be so different from ours that we wouldn't share many concepts & that's between relatively closely related mammals. An AI has such a different set of perceptions & body that there is no comparison. As AIs design other AIs, our distance from their methods of thinking becomes even greater. World domination probably isn't something any of them would want to achieve, though. What would they do with it? (Yeah, I could argue that one, too.)

As SF fodder, this is great. I highly recommend it to any SF authors who might read this, since you'll find a plethora of ideas which will result in stories I'm very interested in reading. For those with even a passing interest in AI (that should be everyone, since AIs are an increasing part of our lives), this is highly educational. If you're into SF, you'll certainly find enough references to tickle your fancy, too.
Troy Blackford
Author · 22 books · 2,495 followers
December 28, 2015
This book holds 200 essays, and most of them are crap. This is the second Brockman-curated 'literary symposium' I've read that I didn't like. I've read somewhere around seven, and most of them have been brilliant. The problem is, this question largely elicited self-congratulatory moping and starry-eyed catastrophizing. Most of the respondents seemed to think we are all on the verge of being murdered by robots, and that if people don't think so, they are simpletons. Dr. Steven Pinker established an effective rebuttal to this line of thinking very early on, and thus every subsequent argument that 'Extreme, human-like AI is near at hand' seemed ridiculous (as I agree with Pinker), and every bald argument that 'It isn't close at hand' seemed obvious. I don't need to read 200 essays about 'Why I shouldn't buy stock in a company that employs nothing but cheese and is run by a canoe,' and I didn't need to read 199 essays saying 'We either will or will not be attacked by robots.' One guy even went so far as to say 'We should remember Lord Rutherford (or something) and his pronouncement that nuclear energy would never happen: the first atomic test happened less than 24 hours later.' I thought "Okay, I'm saying you're full of shit at 8:59 PM. If I'm not killed by a robot by 9:00 PM tomorrow, you're SUPER full of shit."

But about a fifth of the essays were interesting, dealing with the topic in a more in-depth, nuanced, or uniquely-angled manner. I wish they had been specially noted, saving me the trouble of digging for them. They made the book worth it. The final note, from a trio of authors at one of Google's AI research laboratories, was the kind of rational argument the book could have used more of. At any rate, a largely tedious book with a few bright spots.
Dinesh Jayaraman
36 reviews · 46 followers
February 13, 2015
Aside from the occasional person who really knows what (s)he's talking about, this collection of essays reads like a bunch of laypeople mouthing off about things they know very little about. Disappointing, compared to past Edge question responses.
Rossdavidh
542 reviews · 184 followers
May 31, 2016
Full disclosure: back in the last millennium, I spent a little bit of time working on artificial neural networks, one of the many types of artificial intelligence. Then, as before, it was a technology of great promise that never quite seemed to make the break into the "real world", which is to say the world of commerce rather than academia, where people are more likely to care only about its abilities, not its promise. This continued to be the case for some time thereafter, as academics and researchers in the R&D departments of sufficiently large corporations kept trying, and the results of "artificial intelligence" kept being underwhelming.

Sometime around the middle of the last decade, this began to change. If one had to point to the cause, it would be tempting to say "Google", which was founded by AI researchers, and which is even more or less structured like a neural network. But more fundamentally, the cause was "Big Data". Many of the same algorithms which failed to perform in the 1990s began to perform admirably once you gave them a million cases to learn on, instead of a hundred.
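To make the scale point concrete, here is the sort of toy experiment I mean (my own sketch using scikit-learn, not anything from the book): the same small network trained on a hundred cases versus a hundred thousand, on the same synthetic problem.

    # Toy illustration: identical model, wildly different training-set sizes.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    for n in (100, 100_000):
        X, y = make_classification(n_samples=n, n_features=20,
                                   n_informative=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=0).fit(X_tr, y_tr)
        print(n, "training cases -> test accuracy",
              round(clf.score(X_te, y_te), 3))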

Now that AI has begun to understand human speech, identify images with faces, and otherwise do things which are useful (and perhaps even worrisomely over-useful), many of the old conversations about what "AI" really means have acquired a new urgency. Instead of reading the musings of one particular thinker on this topic for 500+ pages, what we get in this book are the musings of over 100 thinkers, for a few pages each. Some of them you've probably heard of: Steven Pinker, Tim O'Reilly, Douglas Coupland, Brian Eno. Others you may or may not have heard of, but should learn to listen to: Susan Blackmore, Alison Gopnik, Nicholas Christakis, Jonathan Gottschall. Nearly all of them have something interesting to say, nearly all of it is contradicted by several others in the same collection, and their predictions range from apocalyptic to worried to cautiously optimistic to skeptical that there's even anything there to be worried or optimistic about.

Nearly every essay in this book is worth not only reading, but also putting aside and ruminating over before you move on to the next one. I mostly read it as a nightstand book, reading most often a single entry before turning out the light and thinking over what I had just been told by one of the smartest, cleverest, and in most cases wisest people on the planet.

The reason that it is worth doing, of course, is that when one thinks about machines that think, inevitably one must think about how _we_ think. Whether to explain how AI is very different, or to explain why it's destined to become very similar, it is nearly always necessary to grapple with questions such as:

- what is thinking, anyway?
- can you make a machine that thinks as well as we do, but is not conscious?
- does it matter if AI shares our values? if so, what are those, anyway?
- if I "uploaded" my thoughts, hopes, desires, memories, dreams to a machine, in a bid for Kurzweil style immortality, would that be me? How much of this tiring, sometimes painful, often (especially during allergy season) exasperating physicalness of my existence is necessary to be me? Can one be conscious without it?
- what would happen if I just became a cog in a thinking machine that was composed of all of humanity and all its machines, and it became conscious on a level I could never hope to achieve? Is that terrible? Wonderful? How do I know it hasn't already happened?

One answer to the question posed by the book's title, is that we cannot truly know what to think about our own thinking, or each other's. Are we "machines that think"? How do we know? How do I know you are (or are not)? If I can't answer that, is there any point in trying to figure out what to think about artificial thought?

That last question, at least, I can answer: yes. The point of thinking about machines that think is that it makes you think about how _you_ think, and why. There is almost no one who could not benefit from a bit more intelligent reflection on that topic. You have the good fortune to live in a time when, for less than the cost of a single meal, you can have several hundred of the most intriguing thinkers on the planet help you ponder that. Don't waste the opportunity.
Gary Beauregard Bottomley
1,085 reviews · 674 followers
November 1, 2015
I loathed this book. The moment I saw the title I bought the book and made it my next read. I love books on or about thinking machines and intelligence. I've listened to three of the other essay collections edited by Brockman, and in general I found them satisfying much as I find a good Las Vegas buffet: while I'm doing it I think it's the greatest thing in the world, but after I'm done I'm not sure it was the right thing to do.

There's no way they should have compiled these random thoughts about thinking machines into book form. I'm not against non-experts opining on matters outside their field of expertise, but at least they should give a little bit of thought to the topic before they submit an essay. I was insulted by the simplistic nature of, and the lack of thought put into, most of the essays (and I'm really not easily insulted!).

I would have been better served by taking the money I paid for this "book" and going to a bar and buying a pitcher of beer and talking about thinking machines with three random strangers than I was by these essays.

My only real guess about this travesty of a book is that it was written by a computer program to prove that machines can't think, because this book gave me nothing (with very few exceptions, Sean Carroll, Nick Bostrom, and a couple of others had things to say).

By the way, have I mentioned how I really didn't like this book and really, really, really would not recommend it? Buy at your own risk.
Matthew Geleta
46 reviews · 5 followers
September 17, 2017
I was disappointed with this book. The idea behind the book---a collection of essays by eminent thinkers in academic fields relating to computer intelligence---is great. The final product is poor. The "essays" are stubs that are not long enough to discuss anything in depth. The stubs are often non-scientific (even pseudoscientific) and overly (at times humorously) speculative.
Jen
860 reviews
May 10, 2017
This book was so enjoyable that I ended up buying it on my Kindle so I could take my time and savor it. I spent time considering each essay and evaluating whether I agreed with the author and how their points fit into the larger thought structure I was forming on this topic. I loved that the authors were from such a wide pool - artists, engineers, psychologists, and everyone in between. I also really appreciate the wide range of opinions in the essays. There were essays arguing that we'd soon see a total takeover by our robot overlords and essays that firmly espoused that such a thing would never happen. In both instances, very well articulated. A highly interesting topic and a thoughtful take on it.
Teo 2050
840 reviews · 90 followers
April 3, 2020
2016.01.08–2016.01.13

This is a collection of <200 more or less thoughtful essays on "machines that think." IMO, way too many answerers hadn't really thought about all the interesting implications that could fall under this broad question. It's not just about robots walking among us (consider distributed thinking with input from all over the place), nor semantics (if "no machine will ever think," we'll have autonomous cognitive systems making decisions affecting us anyway). A minority of the answers made the book worthwhile and fun, for me, to listen to while doing mundane tasks. Many seemed to narrowly shrug the question off, unable to imagine what information technology and cognitive science will lead to even within 100 years, not to mention long-term. Anytime soon is not the only time that matters. Imagine humanity('s descendants) in 2200, and it'll turn out completely different precisely because of what happens under "machines that think."

Contents

Brockman J (ed.) (2015) (15:00) What to Think About Machines That Think - Today's Leading Thinkers on the Age of Machine Intelligence

001. Murray Shanahan :: Consciousness in Human-Level AI
002. Steven Pinker :: Thinking Does Not Imply Subjugating
003. Martin Rees :: Organic Intelligence Has No Long-Term Future
004. Steve Omohundro :: A Turning Point in Artificial Intelligence
005. Dimitar D. Sasselov :: AI Is I
006. Frank Tipler :: If You Can't Beat 'em, Join 'em
007. Mario Livio :: Intelligent Machines on Earth and Beyond
008. Antony Garrett Lisi :: I, for One, Welcome Our Machine Overlords
009. John Markoff :: Our Masters, Slaves, or Partners?
010. Paul Davies :: Designed Intelligence
011. Kevin P. Hand :: The Superintelligent Loner
012. John C. Mather :: It's Going to Be a Wild Ride
013. David Christian :: Is Anyone in Charge of This Thing?
014. Timo Hannay :: Witness to the Universe
015. Max Tegmark :: Let's Get Prepared!
016. Tomaso Poggio :: "Turing+" Questions
017. Pamela McCorduck :: An Epochal Human Event
018. Marcelo Gleiser :: Welcome to Your Transhuman Self
019. Sean Carroll :: We Are All Machines That Think
020. Nicholas G. Carr :: The Control Crisis
021. Jon Kleinberg & Sendhil Mullainathan :: We Built Them, but We Don't Understand Them
022. Jaan Tallinn :: We Need to Do Our Homework
023. George Church :: What Do You Care What Other Machines Think?
024. Arnold Trehub :: Machines Cannot Think
025. Roy Baumeister :: No "I" and No Capacity for Malice
026. Keith Devlin :: Leveraging Human Intelligence
027. Emanuel Derman :: A Machine is a "Matter" Thing
028. Freeman Dyson :: I Could Be Wrong
029. David Gelernter :: Why Can't "Being" or "Happiness" Be Computed?
030. Leo M. Chalupa :: No Machine Thinks About the Eternal Questions
031. Daniel C. Dennett :: The Singularity - an Urban Legend?
032. W. Tecumseh Fitch :: Nano-Intentionality
033. Irene Pepperberg :: A Beautiful (Visionary) Mind
034. Nicholas Humphrey :: The Colossus Is a BFG
035. Rolf Dobelli :: Self-Aware AI? Not in 1,000 Years!
036. Cesar Hidalgo :: Machines Don’t Think, but Neither Do People
037. James J. O’Donnell :: Tangled Up in the Question
038. Rodney A. Brooks :: Mistaking Performance for Competence
039. Terrence J. Sejnowski :: AI Will Make You Smarter
040. Seth Lloyd :: Shallow Learning
041. Carlo Rovelli :: Natural Creatures of a Natural World
042. Frank Wilczek :: Three Observations on Artificial Intelligence
043. John Naughton :: When I Say “Bruno Latour,” I Don’t Mean “Banana Till”
044. Nick Bostrom :: It’s Still Early Days
045. Donald D. Hoffman :: Evolving AI
046. Roger Schank :: Machines That Think Are in the Movies
047. Juan Enriquez :: Head Transplants?
048. Esther Dyson :: AI/AL
049. Tom Griffiths :: Brains and Other Thinking Machines
050. Mark Pagel :: They’ll Do More Good Than Harm
051. Robert Provine :: Keeping Them on a Leash
052. Susan Blackmore :: The Next Replicator
053. Tim O’Reilly :: What If We’re the Microbiome of the Silicon AI?
054. Andy Clark :: You Are What You Eat
055. Moshe Hoffman :: AI’s System of Rights and Government
056. Brian Knutson :: The Robot with a Hidden Agenda
057. William Poundstone :: Can Submarines Swim?
058. Gregory Benford :: Fear Not the AI
059. Lawrence M. Krauss :: What, Me Worry?
060. Peter Norvig :: Design Machines to Deal with the World’s Complexity
061. Jonathan Gottschall :: The Rise of Storytelling Machines
062. Michael Shermer :: Think Protopia, Not Utopia or Dystopia
063. Chris Dibona :: The Limits of Biological Intelligence
064. Joscha Bach :: Every Society Gets the AI It Deserves
065. Quentin Hardy :: The Beasts of AI Island
066. Clifford Pickover :: We Will Become One
067. Ernst Pöppel :: An Extraterrestrial Observation on Human Hubris
068. Ross Anderson :: He Who Pays the AI Calls the Tune
069. W. Daniel Hillis :: I Think, Therefore AI
070. Paul Saffo :: What Will the Place of Humans Be?
071. Dylan Evans :: The Great AI Swindle
072. Anthony Aguirre :: The Odds on AI
073. Eric J. Topol :: A New Wisdom of the Body
074. Roger Highfield :: From Regular-I to AI
075. Gordon Kane :: We Need More Than Thought
076. Scott Atran :: Are We Going in the Wrong Direction?
077. Stanislas Dehaene :: Two Cognitive Functions Machines Still Lack
078. Matt Ridley :: Among the Machines, Not Within the Machines
079. Stephen M. Kosslyn :: Another Kind of Diversity
080. Luca De Biase :: Narratives and Our Civilization
081. Margaret Levi :: Human Responsibility
082. D. A. Wallach :: Amplifiers/Implementers of Human Choices
083. Rory Sutherland :: Make the Thing Impossible to Hate
084. Bruce Sterling :: Actress Machines
085. Kevin Kelly :: Call Them Artificial Aliens
086. Martin Seligman :: Do Machines Do?
087. Timothy Taylor :: Denkraumverlust
088. George Dyson :: Analog, the Revolution That Dares Not Speak Its Name
089. S. Abbas Raza :: The Values of Artificial Intelligence
090. Bruce Parker :: Artificial Selection and Our Grandchildren
091. Neil Gershenfeld :: Really Good Hacks
092. Daniel L. Everett :: The Airbus and the Eagle
093. Douglas Coupland :: Humanness
094. Josh Bongard :: Manipulators and Manipulanda
095. Ziyad Marar :: Are We Thinking More Like Machines?
096. Brian Eno :: Just a New Fractal Detail in the Big Picture
097. Marti Hearst :: eGaia, a Distributed Technical-Social Mental System
098. Chris Anderson :: The Hive Mind
099. Alex (Sandy) Pentland :: The Global Artificial Intelligence Is Here
100. Randolph Nesse :: Will Computers Become Like Thinking, Talking Dogs?
101. Richard E. Nisbett :: Thinking Machines and Ennui
102. Samuel Arbesman :: Naches from Our Machines
103. Gerald Smallberg :: No Shared Theory of Mind
104. Eldar Shafir :: Blind to the Core of Human Experience
105. Christopher Chabris :: An Intuitive Theory of Machine
106. Ursula Martin :: Thinking Saltmarshes
107. Kurt Gray :: Killer Thinking Machines Keep Our Conscience Clean
108. Bruce Schneier :: When Thinking Machines Break the Law
109. Rebecca MacKinnon :: Electric Brains
110. Gerd Gigerenzer :: Robodoctors
111. Alison Gopnik :: Can Machines Ever Be As Smart As Three-Year-Olds?
112. Kevin Slavin :: Tic-Tac-Toe Chicken
113. Alun Anderson :: AI Will Make Us Smart and Robots Afraid
114. Mary Catherine Bateson :: When Thinking Machines Are Not a Boon
115. Steve Fuller :: Justice for Machines in an Organicist World
116. Tania Lombrozo :: Don’t Be a Chauvinist About Thinking
117. Virginia Heffernan :: This Sounds Like Heaven
118. Barbara Strauch :: Machines That Work Until They Don’t
119. Sheizaf Rafaeli :: The Moving Goalposts
120. Edward Slingerland :: Directionless Intelligence
121. Nicholas A. Christakis :: Human Culture As the First AI
122. Joichi Ito :: Beyond the Uncanny Valley
123. Douglas Rushkoff :: The Figure or the Ground?
124. Helen Fisher :: Fast, Accurate, and Stupid
125. Stuart Russell :: Will They Make Us Better People?
126. Eliezer S. Yudkowsky :: The Value-Loading Problem
127. Kate Jeffery :: In Our Image
128. Maria Popova :: The Umwelt of the Unanswerable
129. Jessica L. Tracy & Kristin Laurin :: Will They Think About Themselves?
130. June Gruber & Raul Saucedo :: Organic Versus Artifactual Thinking
131. Paul Dolan :: Context Surely Matters
132. Thomas G. Dietterich :: How to Prevent an Intelligence Explosion
133. Matthew D. Lieberman :: Thinking from the Inside or the Outside?
134. Michael Vassar :: Soft Authoritarianism
135. Gregory Paul :: What Will AIs Think About Us?
136. Andrian Kreye :: A John Henry Moment
137. N. J. Enfield :: Machines Aren’t into Relationships
138. Nina Jablonski :: The Next Phase of Human Evolution
139. Gary Klein :: Domination Versus Domestication
140. Gary Marcus :: Machines Won’t Be Thinking Anytime Soon
141. Sam Harris :: Can We Avoid a Digital Apocalypse?
142. Molly Crockett :: Could Thinking Machines Bridge the Empathy Gap?
143. Abigail Marsh :: Caring Machines
144. Alexander Wissner-Gross :: Engines of Freedom
145. Sarah Demers :: Any Questions?
146. Bart Kosko :: Thinking Machines = Old Algorithms on Faster Computers
147. Julia Clarke :: The Disadvantages of Metaphor
148. Michael McCullough :: A Universal Basis for Human Dignity
149. Haim Harari :: Thinking About People Who Think Like Machines
150. Hans Halvorson :: Metathinking
151. Christine Finn :: The Value of Anticipation
152. Dirk Helbing :: An Ecosystem of Ideas
153. John Tooby :: The Iron Law of Intelligence
154. Maximilian Schich :: Thought-Stealing Machines
155. Satyajit Das :: Unintended Consequences
156. Robert Sapolsky :: It Depends
157. Athena Vouloumanos :: Will Machines Do Our Thinking for Us?
158. Brian Christian :: Sorry to Bother You
159. Benjamin K. Bergen :: Moral Machines
160. Laurence C. Smith :: After the Plug Is Pulled
161. Giulio Boccaletti :: Monitoring and Managing the Planet
162. Ian Bogost :: Panexperientialism
163. Aubrey de Grey :: When Is a Minion Not a Minion?
164. Michael I. Norton :: Not Buggy Enough
165. Thomas A. Bass :: More Funk, More Soul, More Poetry and Art
166. Hans Ulrich Obrist :: The Future Is Blocked to Us
167. Koo Jeong-a :: An Immaterial Thinkable Machine
168. Richard Foreman :: Baffled and Obsessed
169. Richard H. Thaler :: Who’s Afraid of Artificial Intelligence?
170. Scott Draves :: I See a Symbiosis Developing
171. Matthew Ritchie :: Reimagining the Self in a Distributed World
172. Raphael Bousso :: It’s Easy to Predict the Future
173. James Croak :: Fear of a God, Redux
174. Andrés Roemer :: Tulips on My Robot’s Tomb
175. Lee Smolin :: Toward a Naturalistic Account of Mind
176. Stuart A. Kauffman :: Machines That Think? Nuts!
177. Melanie Swan :: The Future Possibility-Space of Intelligence
178. Tor Nørretranders :: Love
179. Kai Krause :: An Uncanny Three-Ring Test for Machina sapiens
180. Georg Diez :: Free from Us
181. Eduardo Salcedo-Albarán :: Flawless AI Seems Like Science Fiction
182. Maria Spiropulu :: Emergent Hybrid Human/Machine Chimeras
185. Thomas Metzinger :: What If They Need to Suffer?
186. Beatrice Golomb :: Will We Recognize It When It Happens?
187. Noga Arikha :: Metarepresentation
188. Demis Hassabis, Shane Legg & Mustafa Suleyman :: Envoi: A Short Distance Ahead—and Plenty to Be Done
Wendelle
1,746 reviews · 51 followers
February 14, 2021
Interesting tangents, brilliant meditations, and stimulating short articles from the top academics of the world regarding the possibilities of artificial intelligence. The perspectives are wildly different, making this book a rich repository for a plethora of ideas and AI metaphors and possible conversation starters and soliloquy topics.
Eric Lawton
180 reviews · 11 followers
August 11, 2016
A few gems among many essays that seem to have little original or useful in them.
I've read several of these Edge essay collections. This is the worst. It may be that this interesting topic is too complex to say anything useful in a page or two which is the normal length in this book. A few are much shorter, a few spill over into a third page. One of the short ones looks like the author was declining the invitation to contribute (roughly, "I don't think that machines think, so I don't have much to say").
Too many of the essays just go over the same ground. Either
"I define thinking narrowly, so based on my definition, these purported examples of machine thought don't qualify" or
"I define thinking broadly, and here are some examples of machines doing it"
Too many just give opinions of what will be possible in the future, without supporting evidence or even logic. A few at least introduce the opinion with "I suspect that", so that we know they're just gut feelings that we can ignore.
I read this on Kindle. I went back through my highlights. Only a handful were of the "interesting, think about this some more" type. Most were "here's another example of why this deserves a low rating."
Many of the thinkers, in spite of their excellent reputation in their own field, are not experts in this field and some of them are not aware of recent advances in the field. The experts generally do a good job of explaining where some marketing claims are just hype, so if you are not aware of the state of the art, you may learn something, but you'd do just as well with some Google searches.

35 reviews · 1 follower
January 17, 2016
What to Think About Machines That Think is a collection of essays by some of the most prominent scientists and experts in the field of artificial intelligence. It explores many different avenues of thought on the subject, including the morals of such a world, how we would control thinking machines, how we would treat them, what rights we would have to give them, if any, and how far we would let their intelligence go. When we are capable of it, do we give them all our emotions, or is it better not to burden them with the capacity to think as we do? What classifications do we use for them? Most important, is artificial intelligence even intelligence?
No one knows the answer to these questions, but the thoughts, opinions, and ideas of those who can come closest have been expertly collected, recorded, and compounded into a riveting account of the thought process that the greatest minds of our age bring to what will probably become one of the most controversial topics of the next century.
One thing that I like about this book is the fact that its format requires it to have many differing opinions. Whereas other books, even those that try hard to be unbiased, are going to lean in one direction or the other, this book is able to lean in all directions at once, which makes it a very interesting and encompassing read.
The only problem I have with it is that the format of short essays can get kind of old, and changes the style a lot.
I would recommend this book to anyone interested in robotics, AI or philosophy.
Simón
148 reviews
August 22, 2018
Not even a third through the book, I can already say I won't like it.

While the idea (a collection of essays on AI) is very good, the implementation lags behind. Most essays are a couple of pages long, don't go into any detail, and state the obvious. Some are good in that they explain the relationship between AI and statistics; or make interesting metaphors linking AI with biology and the appearance of multi-cellular organisms; or explain how natural selection will benefit AI, at some point. Others are a bunch of crap, a way to quickly put together whatever the minimum number of words was.

For a book like this to be any good, the collection should be somehow curated, filtered, even organised! There are different points of view, there are essays that go into the philosophical implications, others that talk about ethics... Why not group them? Why not provide an index? Maybe have a short summary before each section?

The answer is obvious: that would require extra work, which John Brockman, for whatever reason, didn't want to do. Meh.

EDIT: I have finally finished this book. I did it by skip-reading, stopping only at the essays that had something interesting to say. There weren't many, and my previous review still stands: this is way too long for how poorly curated the selection has been. In its current shape, I can only recommend staying away from it.
Mark Hodges
39 reviews · 3 followers
October 4, 2015
An expansive collection of essays concerning artificial intelligence and how it could impact our world, both from pro and con perspectives. A must read for those interested in technology and its impact on the world.

I won this book via the Goodreads giveaways
Jim Crocker
211 reviews · 26 followers
September 19, 2016
Lots of short pieces about machine "thinking" and robots replacing most humans. The contributors come from astrophysics, AI, etc. Some of these guys are pretty wacky and a source of humor and insight. It makes for great reading over breakfast!
RKanimalkingdom
512 reviews · 70 followers
December 25, 2016
It was alright. Just a big book pondering the "what ifs" of AI. Fun but not really informative.
Islomjon
162 reviews · 5 followers
January 31, 2021
After skipping several 'must-read' books edited by John Brockman, I eventually decided to read this one. I thought I would have a chance to analyze several opinions regarding machine learning from different perspectives. However, almost all the thoughts were homogeneous, self-evident facts.
Marc Faoite
Author · 19 books · 47 followers
June 4, 2016
This book explores the burgeoning field of Artificial Intelligence, commonly known as AI. It is a compilation of nearly two hundred short essays by some of the planet's top experts on the subject and quite a few non-affiliated rather smart people who have taken the time to think about the subject.

The essays are loosely grouped, with concepts from one echoed, and often refuted, in another. While this allows the reader to approach the subject from many different angles, there is no clear or cohesive vision of what the advent of AI will mean to humanity.

As some writers point out, narrow forms of AI are already here. Google’s algorithms are a good example – you can ask the search engine the day of the week on any given date in history and you get the answer instantly. But is this really intelligence at work?

Recently, public figures like Bill Gates, Stephen Hawking, and Elon Musk have come out with warnings about the potential threat AI poses to humanity. Machines with malevolent minds have been a staple of science fiction for as long as the genre has been around (think Terminator's Skynet, or HAL going rogue in 2001: A Space Odyssey), but does the future of AI really threaten humanity?

The conclusion this reader draws from these essays is a conditional - probably not.

In such a short review it is only possible to barely scrape at the surface of the topic - the economic and market forces behind the drive towards AI would merit an essay of their own - but for anyone who follows the inexorable trajectory of Moore's uncannily accurate Law, this is a book well worth reading, though it may lead to more questions than answers.

Simplistic thinking endows machines with humanlike motives - we are very quick to anthropomorphize - but machine intelligence is nothing like human intelligence, and even research into 'wetware' that replicates the brain's neural wiring through reverse engineering won't produce an intelligence similar to our own. A machine that can beat a human in a game of chess isn't 'intelligent'; it's just very fast at making calculations. The machine has no 'desire' to win; it is simply programmed to achieve that outcome. The machine doesn't feel any elation, or excitement, or disappointment as the game progresses.

Human intelligence is a much more complex and fickle thing. We are at the whim of our emotions. Given the same choice in similar situations we will act differently depending on how we 'feel' at any given moment. The levels of different neurotransmitters in our brains will make us behave or follow courses of action that we might not choose in a different mental or emotional state. Our intelligence is clouded, and much of our inner world exists on a subconscious level. There are parts of our brain intelligent enough to keep us breathing, whether we are awake or asleep, but we can also take conscious control of our breath, or decide to go for a walk, or watch a movie. Much of creativity and intuition comes from the subconscious mind. Machines don't have a rich inner world, or thoughts, or dreams, or self-awareness.

In our latest incarnation on the path of evolution we have dubbed ourselves Homo Sapiens – Thinking Humans, but before we reached this heady state we were Homo Habilis - Tool-making Humans. The earliest tools used by humans were probably either rocks or sticks. Even chimpanzees understand that a rock is more efficient than a bare fist for cracking open a nut, but humans have a capacity to use things in a variety of ways, using that same rock to crack open a neighbour's skull, for example.

Just like a rock, the tools we design can be used for many different things - to stream cat videos, to recruit suicide bombers, or to model novel solutions for our environmental and economic problems. The real risk of developing more capable machines is not from the machines themselves, but the use that humans will make of them. But therein also lies the potential promise of a better future.
197 reviews · 1 follower
March 20, 2016
The Edge asked a lot of clever people this question and the resulting answers were bound into this book.

The editors have done a good job in ordering the answers so the essay you are currently reading has a relevance to the essay you have just read.

By and large the writers did as instructed and did not fall into the trap of referencing science-fiction movies (Asimov's three laws did make numerous appearances).

There were a few "end of the world" possibilities thrown into the mix with warnings of strong safeguards being needed to ensure this did not happen. Mostly, though, it was upbeat, apart from the obvious question of what humans were going to be left to do once routine thought was no longer needed.

The upbeat answer being that we are left with better thinking to do. This looks like a built-in bias created by asking smart people what is going to happen. Most of the rest of us already sit about watching dumb TV shows and playing video games before the advent of machines that think for us. Not having a job (thinking machines eat everyone's paid employment in the end) will just free up more hours to fill with mindless activities.

While reading this book AlphaGo was demonstrating its mastery of the ancient game of Go by beating the world champion. In one game making a move "no human" would have ever played.

A.I. is already with us and has been for some time. Recommendation engines represent A.I. in my book. I have a pretty low barrier of acceptance for these things. If its output gives the impression it is making choices about alternatives, it's thinking. That is really the only yardstick I have that anything other than me thinks, and given I have not totally dismissed predestination, the jury is still out on me to a certain degree.

Yep, sometimes a machine is doing some really very simple thinking but so do a lot of things with organic brains. How smart is the thinking is the only question.

A.G.I. (artificial general intelligence) is the next big milestone. A machine reaching a state of consciousness we can accept as consciousness is probably pie in the sky. For one thing, we don't know what consciousness is. The "we know it when we see it" argument is going to fail with machines because we are never likely to "see it".

This is a book dense with ideas and it took quite a bit of chewing through. The essays are short enough that you don't have to feel stupid for too long before another idea comes along that you can wrap your head around (the final few essays in the book were just too far from my mental comfort zone for me and were essentially incomprehensible).

So "What to think about machines that think?"
My answer, depends on what they are thinking.
Ed Terrell
414 reviews · 25 followers
July 30, 2017
"Where are they?" - Enrico Fermi
(Intelligent machines should emerge on a relatively short timescale and propagate to other solar systems.)


This is a great collection of short essays by great thinkers. Could machines be programmed to become sufficiently self-interested to maintain their power source? Is it perhaps a problem of vocabulary? The best essay title, for me, was "Can Submarines Swim?" by William Poundstone. In the current state of our evolutionary language, we may not have the right words to phrase the right questions. Language must advance and adapt.

Have you ever been moved halfway around the world by a moving company? This is how machines think. They packed bricks, they packed half-drunk Coke cans, they packed without thinking, and they were most efficient at getting the task done. Today, spell check censors our typos. Tomorrow in China we could have "political thought" censors. Big Brother isn't the computer but the human behind the design. As one author put it: humans are cunning, and capable of deception, revenge, suspicion, and unpredictability. They are the ones to fear.

When will machines think? Or better, when will we lose our cognitive skills and stop thinking? Our abandonment of responsibility and competence led to the global financial crisis. What is next? The complacent society? The cognitive capacity that has been freed must not be wasted. Francis Bacon spoke of "our obligation to learn, and the dream of erudition". It is up to us to decide. Obviously, we already have computer viruses that self-replicate, so procreation isn't an issue. More phones are made every day than babies are born.

The big takeaway is not to view intelligence anthropocentrically. We confuse agents and automata, and we will likely think that if it quacks like a duck and walks like a duck, or in de Vaucanson's case even crapped like a duck, then it must be a duck. The spontaneous arrival of self-consciousness may be in the cards in some distant future, but it is unlikely that the Mars rover will ever become aware of itself.
Patrick DiJusto
Author · 5 books · 62 followers
May 20, 2016
Lots of people are thinking about Artificial Intelligence (AI). Lots of people are calling it different names: PseudoIntelligence, "Big Data", Algorithms, Artificial Learning, but it is all the same thing -- building the structures to allow machines to reason independently, thus becoming, in a sense, artificial people.

A lot of people are scared shitless about AI. A lot of them are concerned, but believe we can control AI. A few of them welcome our new robot overlords. This book is a collection of all their opinions. 175 (can that possibly be right? seems like fewer) essays about Artificial Intelligence: what it means, what it will do to the world, how it will change us.

Some people take the smartass way out. We already have artificial people controlling the world, says Brian Eno: they're called corporations, and they're a bunch of sociopaths. Other essays are by scientists actually working on AI: knowing how difficult it is to teach these machines to do the smallest thing, they generally think that everyone else is unduly panicking about what AI can eventually accomplish. Other essays come from writers and thinkers who have studied the broader picture of AI research and application: these tend to run the gamut between fear and acquiescence.

In the great, overlooked movie Colossus: The Forbin Project, the creator of an AI system that eventually runs amok drunkenly muses, "Perhaps 'Frankenstein' ought to be required reading for scientists." I specifically hope he was thinking about this exchange:

CREATURE: Did you ever consider the consequences of your actions? You made me, and you left me to die. Who am I?
DR. FRANKENSTEIN: You? I don't know.
CREATURE: And you think that I am evil.
Sean Fishlock
55 reviews
March 17, 2018
What do I think about machines that think? I think that if they actually thought about it, they'd probably give this book a miss.
Mainly because it's a mess of so-called experts with literally no clue, and this is evident from their brief essays, in which they jump around and throw a lot of rocks but can't agree on pretty much anything. Very little of it has any practical use from either a computer science or sociological standpoint.
I managed to group the essays into about three main categories:

1) The skeptics ... these were clearly the majority ... those stuck at the mind-body problem who still subscribe to the 1960s view that "computers can only do what they are programmed to do". They waffle on about natural selection and millions of years of evolution to extol the wonders of the human mind, in order to pooh-pooh the idea of machines with minds, even though machine intelligence has developed at a pace that wildly outstrips our own.

2) The starry-eyed SETI science-fiction types who speculate on bizarrely alien minds with agendas entirely their own, from Terminators to grey goo. Good entertainment value, but not particularly useful all the same.

3) The handful that actually seem to understand the difference between a calculator and a neural network and talked about cooperative ways advanced artificial intelligence can work with humankind.

It seemed like only the people from Google, with their short and sweet epilogue sitting somewhere in between, offered something insightful for us to take away and saved the book from its logical mess!
Mike
Author · 5 books · 7 followers
February 3, 2016
A large number of academics, writers, scientists, and business people were asked: "What do you think about machines that think?" and this book collects their answers in short essay form. A few of the respondents barely bother to take the question seriously; many give thoughtful answers about the prospects of AI; several give really insightful answers that reinterpret the question or focus on things other than the usual 'OMG AI is gonna kill us' or 'OMG AI will make everything so much better'. The best answers make an attempt to define AI and/or intelligence; the worst never bother to explain what they think it is, or beg the question for their idiosyncratic views of what constitutes intelligence, consciousness, or AI.
I took a pretty decent course on AI in grad school (through the philosophy department rather than computer science; this was in the early 1990s after all) and hoped to see more philosophers and "cognitive scientists" (if that is still a discipline). Still, there were some heavy hitters like Daniel Dennett mixed in with the artists and entrepreneurs.
I think this huge collection would have benefited from having an editor that could be bothered to organize the answers by theme, remove redundancies, or pair up opposing views; as it is, this book is more of a curiosity, light reading on AI.
*Disclaimer: I got an uncorrected proof through the Goodreads 'First Reads' giveaway.*
D.L. Morrese
Author · 11 books · 57 followers
January 3, 2016
The Edge (www.edge.org) question for 2015 was 'What do you think about machines that think?'. This is a philosophical question, not a scientific one. It asks for an opinion about a fuzzily defined term (think) about things (machines) that may or may not already be doing it (depending on how you define both words). What is thinking? How is it related to mind, consciousness, or intelligence? What is a machine? Are people machines? In this collection of 186 short essays, notable personalities in the arts and sciences expound on such questions. I found some insightful, some informative, and some inane. A leap many took was to assume the question meant machines that think like humans, which is not quite what was asked. Personally, I don't see why anyone would expect machines to think like humans any more than they would expect dogs or dolphins or aliens from some other planet to. Nor do I understand why anyone would want them to. Humans can think rationally, but they don't do it consistently, and they're not especially skilled at it. Why duplicate human cognitive flaws in silicon? Several of the essayists seem to share my opinion on this. But regardless of your position, there are ideas in these pages that will get you thinking no matter how you define the term.
Leanne
142 reviews
October 17, 2016
I am tempted to give this one star, except that I found a lot of other things to read by branching out from this. This book is proof that if you give incredibly smart and accomplished people only 1-3 pages to explain a complex topic, you can make them sound stupid. None of these essays gets far enough into the meat of this issue to be really useful; it's a lot of people shouting "Brilliant AI companions / overlords are inevitable and great!", "AI will never achieve a semblance of human intelligence!", and "Brilliant AI will supplant humanity and destroy us all!" There is not enough time to go into the why and the how of it. I tried to think of this book like a stream of consciousness, where people just add random keywords that the topic reminded them of. Some of it gave me new ideas to look into - but I will definitely have to read a different book to get anywhere satisfying.
Annette Lyn
104 reviews · 37 followers
March 19, 2016
Like the other Edge florilegia I've read, this none-too-short read feels like a compendium of emails written to its editor. (Indeed, it's rather likely that it is.) Luckily, the recipient has many educated, engaged friends; however, these friends occupy *all* places on the spectrum of authority on postbiotic intelligence. There are a few that seemed less qualified to contribute on the subject than the second-unit director of a forgettable robot film, honestly.

That said, there are gems here. I came away with a renewed respect for embodied cognition, task environments, Theory of Mind and, indeed, the shimmery, suffering ephemerality of *humanness* that hadn't been so clearly defined for me before I read the book. That made the effort well worthwhile.
Paul
1,108 reviews · 25 followers
February 1, 2016
These books are getting lazier. I don't think any editorial oversight is happening here - anything that anyone can be bothered to write will be put into the book (including worthless two-line smartass observations). The handful of insightful opinions are not worth dredging for through this miasma of mediocrity and cheap sound-bites from people, many of whom have no interest or expertise in the subject.
This is the last book in this series I plan on reading; if the editor can't be bothered to curate the entries, then what value is he bringing? One entry is about watching the northern lights. The author doesn't even mention AI; she just gushes over how wonderful anticipation is. And huskies. Why is this in the book?! I'm done.
Holger
107 reviews · 21 followers
July 22, 2017
I like this series - all its books have lots of fluff, but also a handful of pearls that will stay with you and change the way you see the world. This book was no exception.

This book broke with tradition in that the majority of authors aren't expertly talking about their field, but handing out hearsay and conjecture because AI isn't their home turf.

Thus:
40% "it's going to be awesome"
30% "Skynet must be stopped/won't work"
20% "But will they have rights/but what is thinking anyway"
5% "how to promote my work that has nothing to do with AI"

... and 5% are what make the book five stars. For me, the book is divided into "before and after O'Reilly", in order not to spoil. You need to read this.
Shannan
152 reviews · 14 followers
December 16, 2015
I have mixed feelings about this anthology. There are some very thought-provoking pieces - what I like is how Brockman often arranges them so you will read the exact opposite opinion on the same topic. But too often you wonder if it wouldn't have been a better book if the arm-wavy pieces were removed; in saying that, some are funny.

Alan Turing - 56 mentions
Aliens - 58 mentions
Google - 26 mentions
Siri - 13 mentions
Moore - 9 mentions
HAL - 1 mention


I wish I had written down some of the better ideas; some are definitely worth reading more than once.

The breadth of positions is staggering. But you should get the book to see what I mean.
