
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

The hidden costs of artificial intelligence, from natural resources and labor to privacy, equality, and freedom

“Eloquent, clear and profound—this volume is a classic for our times. It draws our attention away from the bright shiny objects of the new colonialism through elucidating the social, material and political dimensions of Artificial Intelligence.”—Geoffrey C. Bowker, University of California, Irvine

What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased racial, gender, and economic inequality. Drawing on more than a decade of research, the award-winning science and technology scholar shows how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind “automated” services, to the data AI collects from us.
 
Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.

288 pages, ebook

First published January 1, 2020


About the author

Kate Crawford

17 books · 65 followers

Ratings & Reviews



Community Reviews

5 stars: 556 (34%)
4 stars: 660 (40%)
3 stars: 294 (18%)
2 stars: 87 (5%)
1 star: 30 (1%)
Displaying 1 - 30 of 222 reviews
Alexander Smith
232 reviews · 60 followers
January 10, 2022
I really had high hopes for this book considering the reviews, but even the way the book is framed is a red flag. Let's just walk through this book linearly:

Introduction: Let's define the entirety of AI as stupid, dumb, and mean calculations.

Not only does Crawford never even attempt to give a sufficient definition of what an artificial intelligence is, she mostly just calls it names. Granted, she loosely ties it to common usages and popular interpretations of how AI has been used, but she never gives it a clear definition or explains broadly how it is different from other kinds of computation. She only uses these straw-person positions of what AI is in order to say we don't need a definition. This is intentional. It enables a particular perspective of AI that does not have to be very nuanced, and it allows her to make all sorts of arguments without justification. It essentially makes it really easy to read this book and think you learned something about AI when really what you learned is the history of basically every technology invented since the 1800s. This book was praised as being the book of the moment and an instant classic. It might as well be about the printing press. The arguments about its devastation to society would hold just the same at the time of its industrialization.

Chapter 1 Earth: Mining is bad. Computers use parts that are mined.

Yes, Crawford opens in an ethnographic style, describing a paid field trip to a mining site, yet she never seems to use any ethical ethnographic practices. "I'm here to learn what AI is made from." Okay, but she doesn't use any information she got there that she couldn't have gotten from a Google Maps search and literal hundreds of years of news reports about the hazards and labor issues in mining. As best as I can tell, she doesn't quote a single person from the site, she doesn't help anybody there, and most of her arguments aren't about AI specifically but about computers we all use every day. It's almost like this chapter intends to say AI did this when literally the entire world of computing plays a part. Your smartphone is a part of this. The entire chapter (and even the book as a whole) is riddled with narrative holes, such as connecting all of statistics with eugenics, without further explanation or motivation, in a chapter about mineral mines. Am I to believe that anyone who computes a t-statistic is connected to a goal of eugenics-oriented computation? The author is not clear on this, but she certainly wants the reader to uncritically follow that narrative.

Chapter 2 Labor: There's a history of labor exploitation in computation.

Uh, yes. There's a history of labor exploitation. How is this unique to AI? It isn't. Why is this a book about AI again? Why are we ignoring more than 200 years of labor literature on this exact subject?

Chapter 3 Data: Models often use data abstracted from their origins which carry information we should address.

This is likely the best chapter in the book. For the first time, the author addresses something interesting about machine learning and AI. The author argues that our use of data that are deeply personal to some people (specifically mug shots used for crime prediction or facial recognition) strips those people of their humanity through abstraction and sets a norm of being able to use people's image data without asking them. However, while this argument is obvious, the way in which she makes it doesn't address data itself, as the chapter proclaims; it addresses models. Otherwise, why not critique the Census while we're at it? Is that not basically the same? No, because according to the author it's not about the data itself (except that the image data make her feel more strongly about this particular case) but about dehumanizing modeling techniques. But later, it's not about modeling techniques, because it's about the fact that these data contain histories of things these models don't model. This story is not well unpacked. She cannot argue that models and data are both simultaneously the step that dehumanizes people. This is where Crawford's refusal to directly explain how AI is defined and works mechanistically hurts her the most. Without this nuance, it's not clear where the problem is. And so for the remainder of the book, we talk about government programs without clarity about what was built at a datafied level or at a modeled level, and so a casual reader would have to already know how AI works to actually learn the ethics here. The book does not give anyone the chance to reflect on their own relationship to data and programming. What practical ethical consideration should I take away from this? Despite critiquing induction/deduction in computation, the implication is to induct that all things about social data trace back to unnecessary and dehumanizing abstractions.
But all a data scientist can do is say, "Yes, this case is sad," and be educated enough to know that this isn't true in every case. But Crawford implies again and again that this basically is the history of AI and datafication.

Chapter 4 Classification: Ontologies have implications.

Now she turns on the laborer she earlier claimed was exploited, and says that when we use cheap labor to classify stuff, we end up with racist outcomes and the degradation of linguistic principles. The author holds to an epistemology of language and classification that she never explains. This is nothing but an elitist trope that classification experts should do classification since data scientists aren't willing to ethically clean their own datasets. While there are good points here about the harms of classification, just read Bowker and Star instead. It seems more reasonable to me to say this is a mix of (1) inductions without critique are bad, (2) some people are overtly problematic in their speech and classification (and in fact classification itself is problematic ontologically), and (3) there simply isn't enough labor and financial ability for scientists on minimal budgets to do this work. In summary, Crawford's policy here is: don't let pedestrian people datafy things, because they will muddy up your data. This feels elitist about interpreting how MTurkers work, and unfair to researchers who cannot meet publication requirements on their own. It also doesn't account for the argument that researchers muddy their own data all the time, computational data or not. "Expert" checking doesn't solve this on its own. Crawford is turning her back on her own labor chapter. Researchers can't work in isolation for most of academia in order to answer the questions posed to them, and crowdsourced data (while problematic, and something that should be addressed more by the standards of the Labor chapter) is not necessarily lesser-quality data just because its providers lack academic training. If anything, an oppositional reading of this section teaches us that researchers can get better at cleaning crowdsourced data with experience of what to look for, or can use that same data for a different purpose related to the biases it contains.
Unfortunately, instead of actively looking for ways to continue valuing the labor of our peers, she suggests we induct that all datafication processes using classification are problematic necessities of AI because nobody is looking at the datasets they use, which is laughable. While it's been described this way, clearly somebody eventually looked at all the datasets Crawford claims weren't looked at, because it's the researchers trying to correct these problems who gave her the historical data to write the chapter.

Chapter 5 Affect: Reading people's emotions from their faces is not a science.

While the simple argument here is true (we can't look at a person's face and know their feelings), this chapter badly butchers its theory. Firstly, while her main antagonist, Ekman, actually did attempt to do this and his approach is still used in computational research, the way she frames Tomkins is horrendous. He argues that every body has an affective interpretation, not necessarily that all bodies are essentially the same in how we interpret their affects. Secondly, Tomkins does not fixate on singular human affects as the only individuals of interest. Tomkins is a post-human psychologist. Crawford reduces affect theory to simple emotions of individuals (as do most communication scholars, actually; it's kind of a meme at this point). In total, this is probably one of the worst readings of Tomkins I've ever seen. There's no "affect theory" in this interpretation. This is just badly interpreted psychology theory. For anyone who wants to check this out, look up Eve Sedgwick's Tomkins reader _Shame and Its Sisters_. For affect theory more broadly, Spinoza's _Ethics_ and Deleuze make great starting points also. Even a brief overview from experts in affect should convince you of how badly Crawford misunderstands affect theory.

Chapter 6 State: Lots of computational tools used to police and track people online are AI tools that create logics to hurt "others" as a part of datafied "precision."

Again, we knew this. She briefly discusses Benjamin Bratton's book (which this chapter is merely a summary of), but _We Are Data_ also describes this very carefully. Crawford supposedly, in hacker-like style, had access to all of Snowden's data release, and still the best she could do with it was repeat the same arguments others did. Yes, AI is used to track people and occasionally decide to kill them. Yes, it is dehumanizing to reduce humans to numbers. Yes, this is what new nationalist racism looks like. But more could have been said than summarizing the work of others after having pulled a couple of slides from Snowden's files and put them in a book.

This book is simple to interpret because it's written to be devastating and to have the sense of absolute completion. While I would like to say that I agree with most of it, half of the stuff that I do agree with isn't new here, even though it's written stylistically to look like she's uncovering conspiracies like some Jason Bourne-level academic. The other half of the stuff that I agree with ties itself to framings that might be more problematic to academic standards and labor ethics. This does not even include that most of this felt really disingenuous. The narrative's turns from the ethnographic writing style to carefully selected historical tangents you can find on Wikipedia, tangents that have nothing to do with the immediate ethnographic framing, give me the feeling that I'm not being told the whole truth... and I know I'm not. It's too simple. It's too easy to consume. It makes too many hidden and contradictory assumptions that can be teased out quickly if someone is looking. While I wanted to like this book, it's a case of how stuff with AI in the name sells, and it cheapens a lot of the work that others have done on this subject. While someone who isn't trained in this might walk away with a sense of gained knowledge, the book doesn't introduce how critical AI research is being done well. It seems to shut down interest more than raise it and leave open questions. This is a book that only someone trained in critical communication studies could agree with in summary, but it is condescending to everyone else it's written for.

This is a book that required a lot more nuance and should have trusted its audience to understand more than the consumable drama and oversimplifications this book narrates. While good things are here, they can only be interpreted with clarity by someone who already knew those things. This is a common problem I've seen in "accessible academia": don't trust the public with the clear and fair investigated truth-claims; trust them instead with patchworked, token truths sewn together with drama.
Henk
928 reviews
March 14, 2023
A wide-ranging book focusing on the dark sides of techno-utopianism, and how AI requires vast amounts of unwillingly provided input, cheap labor, and energy to function.
What remains is a persistent asymmetry of power, where technical systems maintain and extend structural inequality, regardless of the intention of the designers

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is in some ways a chilling book, at times quite winding but definitely clear eyed on many of the negative aspects lurking under the surface of Silicon Valley’s belief in the malleability of the world.
What is being optimized and for whom is a recurring question Kate Crawford asks, while touching upon the many parts of the economy that enable the rise of AI. The scope is more broadly a critique of capitalism than really tackling AI as such, and takes a more sociological angle than I initially expected. In contrast to similar books like The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins, the narrative is supported with examples and the goal of critique does not obscure analysis.

In the long run AI is the only science, some aficionados say, but Crawford zooms in much more on the thesis of AI as an extractive industry. Extraction, power, and politics form the core of the book.

Material extraction is first examined: extracting one tonne of rare earth minerals requires not just displacing 499 tons of earth (since the purity in nature is just 0.2%) but also 17,000 liters of toxic acids, and produces 1,000 kilos of radioactive waste. The deep gold mines of California being mirrored by skyscrapers rising into the sky is an image the author uses. Emissions from global aviation and from the servers of AWS, Azure, and Google Cloud are roughly the same, with Chinese datacenters 73% powered by coal plants. Some examples don't really hit the mark (I, for instance, don't see how the container example has much of a link with AI), but overall this chapter was definitely interesting in reassessing the view of the tech sector as inherently clean and low in emissions.

Labor extraction is next, and starts with a meditation on the concept of a panopticon stemming from a wish to better monitor workers. The problematic implication that control is equivalent to intelligence would continue to shape the AI field for decades.
Crawford introduces the concept of Potemkin AI: an AI application that is heavily reliant on low-paid gig economy workers to function appropriately (take the example of content moderators).
The weakening link between labor and value creation, which makes the bargaining power of tech employers larger and larger, is another topic, including how we as users train Google's AI image recognition via reCAPTCHA, which ironically forces us to prove we are human.
Campaigns like "We are all tech workers", which aim to view the tech supply chain and its reach more broadly, seem sympathetic, but I wonder about the viability of these initiatives versus the massive scale of big tech.

Data is the next topic, with many companies espousing a regime of "there is no data like more data."
IBM is an example: the company used the 100 million words produced by a 13-year antitrust case against it as a basis for its speech recognition software.
Further down the line the Enron corpus, the record of a major fraud, is used for natural language processing.
This all feels rustic versus the 315 million photos on Meta's platforms and the 500 million tweets published every day in 2019.
"Extraction is justified if it comes from a primitive and unrefined source" - on the narrative of data as the new oil.
These sources of data are worrying, as a 2014 Amazon model shows: it led the company to reinforce hiring men and favoring masculine language in resumes.
In this sense artificial intelligence is a register of power, with over 95% of AI models treating gender as binary and inalterable, reinforced by classifying people as objects in datasets which are used to train AI.

Affect, or emotion recognition, is a new frontier.
Job applicants being scored on their facial expressions for suitability for a job is disturbing, as is Palantir's point-system software moving police into intelligence roles, reminiscent of Minority Report in terms of self-reinforcement.
Vigilant Solutions sells mass surveillance of license plates to local governments, with these kinds of companies even being contracted to collect fines for police against a 25% fee.
Amazon uses its Ring cameras to monitor its delivery staff, has signed agreements with 600 local police departments to provide them with access to recordings, and provides credits for subscribers to its Neighbors app.

"We kill people based on metadata" and "we track them, you whack them", as a vendor of "killer AI" said at a military conference, are not that far away.
Even seemingly benign initiatives, like IBM creating a hypothetical “terrorist credit score” for Syrian refugees to prevent backlash against immigrants, lead to discrimination and prosecution of those already in a very weak position. Then we have 60,000 Michigan citizens being penalized by AI, their social benefits taken away based on model outcomes, costing much more than the "savings" realised and eroding trust in society.

A grim and thought-provoking book on how (as Margaret Atwood already said so poignantly in The Handmaid's Tale) better is never better for everyone.
Alja
95 reviews · 47 followers
May 5, 2021
The book masterfully dispels the myths of a green, clean, and fair AI sector by exposing the environmental and human costs that make AI magic possible. At times slightly repetitive, but an essential contribution to the topic that should be read by anyone even remotely involved in or curious about AI (and the tech sector in general). The Atlas asks the tough questions the industry and academia refuse to acknowledge, and goes deeper than the usual questions of ethics and arguments for more diverse training datasets. Read it and get ready to question the power dynamics and exploitative assumptions behind AI magic.
Greymalkin
1,293 reviews
June 3, 2021
There is some good information buried in this book, but it's hard to suss out amid the repetitive and pedantic writing. The definition of AI appears to change from page to page, even from paragraph to paragraph, and while that could be fine (because the author mentions that the definition of AI changes), it's not okay when the term is flexed to stand in for whatever topic they are arguing against. Sometimes AI means "technology," sometimes it means "the internet," sometimes it means "machine learning," and sometimes it means "computer programming," and so on. The argument loses impact when I can think of so many other nouns I could swap in with the same argument, such as "food production" or "garbage handling" or "housing." There was so little that seemed specific to AI; why did they pick that term to cluster their arguments around?

The layout was also frustrating. The whole book could have been reduced to the few pages at the end (right before the coda) where the actual intention of the previous chapters was laid out. It would have helped a lot to have that first, so that I had some idea where the rambling previous six chapters were going and what the point of them was. Each chapter felt the same, just with different bad actors. Half of the book is references and bibliography, so just as soon as it starts pulling info together and discussing it more analytically, the book ends. Reading in the notes that this was several presentations and articles pulled together and updated for the book, that makes some sense, but I think it would actually have been better served to include those as they were and write forewords and afterwords for them instead of trying to force them into a coherent book.

But even with all those complaints, I could certainly put up with all that, because the point of the book (technology is bad in ways many people don't realize) is one I'm quite sympathetic to. Unfortunately the book doesn't offer a way forward, or alternatives, or any examples of positive uses of the technology and ways it is improving. There are brief dismissive mentions of regulation and awareness, but they are brushed off with an attitude of "the whole thing is rotten from the start, so why bother; just nuke it all from orbit and start over," and there's no suggestion of an alternative or of what research is being done to replace what AI does.
146 reviews
November 14, 2021
if you're considering buying or reading this book, i encourage you to choose another title from the rapidly expanding genre given crawford's apparently widely known plagiarism.

i did not particularly enjoy 'atlas of ai', which reads more like a stream of disconnected anecdotes than a coherent whole. seriously, if you are on progressive tech twitter at all, there won't be anything new for you in here, i promise. it's not that the material in here isn't interesting or valuable, it's just that 'atlas of ai' highlights an enormous body of work from the sts and algorithmic fairness communities without performing any real synthesis.
Dan
375 reviews · 102 followers
November 1, 2022
In “The German Ideology”, Marx denounced the abstract, impersonal, ahistorical, and necessary pronouncements of metaphysics, philosophy, religion, law, economics, science, and so on as power structures used by the capitalist system in its quest for domination and profit. These days it is quite fashionable to add gender, race, ecology, colonialism, externalities, war, and a few others to the original economic class-category and to extend the Marxist critique to new fields like AI. Even if trite in this respect, this book is great at reminding us that AI is not some geeky, idealist, objective, or wonderful fantasy, but the newest and most powerful power structure developed and employed these days.
Lucas Gelfond
88 reviews · 14 followers
November 12, 2021
I have been in the middle of this for so many months, so honestly it's good to just finish a book LOL. I found the first two chapters (Earth and Labor, specifically the first bits about mining and the really detailed description of Mechanical Turk) really interesting, but felt like, as the book went on, it was pretty general and I didn't gain a ton from it. Glad I read it though.
16 reviews · 3 followers
July 3, 2022
Mandatory reading for all “tech dudebros” who think AI is the ultimate answer to life and the universe. Actually, mandatory reading for everyone.
Ellen
44 reviews · 3 followers
January 4, 2022
This is probably my favorite of all the books from my tech-and-politics book club (even though I'm reading it half a year later, whoops). I haven't yet found another book in this area with as much intersectional knowledge. Crawford has a robust background in tech journalism, and her acknowledgements further demonstrate her care in looping in experts from outside her own domain to paint an expansive picture of AI in our world at large, including the environmental impacts of computation, the near-invisible sociopolitical implications of the way ML practices format classification, and the impacts of machine decision making on the realm of human life.

Also bonus points for the drawings at the beginning of each chapter

Matthew Jordan
101 reviews · 69 followers
Read
September 28, 2022
After a very long hiatus, I'm going to work through my backlog of Goodreads reviews, though I'll probably write far less than before.

I interviewed Kate Crawford about her book for the New Books Network!

Apple Podcasts: https://podcasts.apple.com/us/podcast...

Spotify: https://open.spotify.com/episode/7DmZ...

The thing I liked most about Atlas of AI is that it focused on the physical materials that make up our computers and AI systems. Computers are made of stuff. That stuff has to be mined and manufactured and transported around the world. AI software runs on massive servers. Those servers are physical behemoths sitting in the desert. Kate Crawford has the chutzpah to genuinely consider what an AI system is, from top to bottom: the miners, the underwater internet cables, the shipping containers, the software engineers, the click workers who label data, the corporations that acquire data from users, etc. It's all here: https://anatomyof.ai/.

This is such an ambitious and frankly baller way of doing things. Imagine how much we could understand the workings of the world if we had diagrams like this for every technology. It really sends the mind through a major tizzy to think about the extent and scope of all of this. It really makes me think: what would it mean for Anatomy of AI to be "obvious"? Would every single person working in every job remotely related to the AI field have to, like, write blog posts about their role and how it relates to everything else? Would everyone need a Go Pro on their head blasting a live feed of everything they're doing all day? I'm not sure this would be useful. We'd be inundated in a morass of information. We would need a Kate Crawford to go sift through it all and figure it out for us. So there's no easy way out here.

Let me try to articulate this better. What I want is a deep understanding of how the world works. I get that things are complicated, but surely it's not too much to ask in the 21st century, that I can see, in full, all of the people and processes involved in a given technology. But this is actually very hard. Most of the work being done in the world is happening in private. There are no journalists or TV cameras in the vast majority of workplaces. A great deal of work involves extracting materials from the earth or coordinating the way groups of people operate. None of the people doing this work have any incentive to describe the complex networks of relationships between themselves and the people they work with. And if everyone somehow _did_ start to do that—say, everyone in the AI industry had a blog—there would be way too much information, an overwhelming panopticon that would become meaningless.

Given this state of affairs, what we need is armies of Kate Crawfords. I'm enlisting.
Nicole
73 reviews · 28 followers
April 29, 2022
OK SO. I'm not saying this book was not very well researched and does not analyze critically and thoroughly facets of AI that most of us never really consider. It is. And it does. And I highlighted quite a lot of paragraphs where I loved the phrasing, the ideas being conveyed or the shocking examples provided.

BUT...

On multiple occasions, I caught myself comparing it to User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play. Both these books set out to cover a complex topic exhaustively (AI and design, respectively). And both build their narratives around examples, events, and characters. However, where User Friendly made me care about a topic I hadn't previously given any thought, Atlas of AI made me lose interest in a topic I was already invested in. Somehow Atlas of AI seems to fall short of creating an interesting storyline and really bringing to life the people and events it's talking about. At least for me. It could be the long sentences or the abstruse language (see what I did there?), but I had trouble connecting to it and getting through it, even though it's relatively short.

In sum, this book didn't reaaally do it for me. Had I not had to read it for a book club I'm organizing, I would have DNF'ed it.
Julia
311 reviews · 15 followers
March 25, 2023
This feels like a bad time to review this book, because I'm a bit oversaturated on the "let's discuss AI" moment that's happening right now, but I'll do it anyways.

Starting with the good things: I really liked the 'atlas' framing. An atlas is not a map that purports to objectively convey its territory, but rather a particular viewpoint or interpretation of reality, imbued with its own preoccupations and biases. It felt like a helpful shorthand for the thesis of Crawford's critiques: that technologists have been (willfully?) ignorant of the politics embedded in their work, and thoughtless about the broader social and historical contexts they operate within. I also found this book to be a useful overview of some of the considerations and history that are frequently overlooked in discussions about AI, and I think I’ll dive into some of the authors that Crawford referenced.

Despite having some interesting content, this book bored me more than it should have, and I thought that the writing style was too academic. The other thing that bothers me in general about this specific genre of book is that it's uninspiring. I think I'm just tired of the "here's what people did that was bad" pattern, and what I actually want is a compelling vision for how we could build and use new technologies to make the world better.
Craig Werner
Author · 15 books · 179 followers
January 11, 2022
One of the most sobering--verging on depressing--necessary books I've read in a while. The choice of Atlas in the title is precise: it's a compilation of maps at different scales and of different areas of the territory shorthanded as Artificial Intelligence. As Crawford points out, the term is often used to indicate a set of technical problems related to algorithms and computer engineering, as if they were disconnected from the material realities of resource extraction, climate change, labor, and political/economic exploitation. They aren't, and to pretend otherwise is to engage in the game of roulette we've so gleefully accepted as "reality."

This is a necessary book for anyone who wants to have a realistic idea of the complexity of the problems facing anyone who cares about minor issues like justice and the sustainability of human life.

A slightly academic tinge to the prose at times, but Crawford's done her homework and it reads quickly. Crucial.
35 reviews1 follower
May 6, 2021
Enjoyed it overall. I read it in a breeze and learned a lot. Accessible, informative, synthetic, and encompassing. The chapters on labor, classification, and the earth were especially informative.

It did leave me wondering a) whether the book buys into some of the bullshit that it is supposedly trying to demystify, and b) if what is called AI now is just a continuation and extension of colonialism and capitalism (which the historical narrative seems to imply), then why not just call it that? What's "AI" about AI apart from being a marketing gimmick?
Profile Image for Lieke.
15 reviews
March 3, 2023
"Refusal requires rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities, and ecologies, as though they were value neutral calculators that can be applied everywhere."
2 reviews
May 1, 2021
This is called "Atlas of AI" for a good reason. We know technology is ubiquitous, and we rely on it and enjoy it, but somehow we miss or overlook the fact that we may all be victims as a consequence, whether via environmental degradation, increasingly controlled workplaces, or abuse of our data. The stakes are huge and not popularly understood, and the gift of this work is to illuminate how the path ahead is now inevitably tied to a degradation of our lives and lifestyles that, in the case of climate change, is existential, but does not end there. Eloquent and alarming, this book is a classic for these troubled times.
Profile Image for A.
449 reviews11 followers
September 27, 2021
This is an important book that should be read not only by AI/ML practitioners but also by those working in crypto and, in general, anybody working on cloud and internet technologies.

Although well written, it is a bit depressing how Crawford describes the hidden costs, not only in terms of energy but also in terms of labor (mining, production of hardware, and generation of training sets), as well as the consequences of the application of these models (discrimination, bias, misrecognitions).

At the end of the day, many of these AI models are simply amplifiers of current oppressive systems, and they only further the injustices that have always existed.

The only section I missed in this book was one about how to fix all these issues: I don't think anybody has a clear idea of what the solution is. Crawford provides a few general ideas that seem to go in the right direction, but they are only that: general policies that may work.

Nevertheless, I think this is a fundamental book for everybody who works with a computer.
Profile Image for Allison Sylviadotter.
84 reviews27 followers
April 18, 2023
This book had a lot of great information; however, I feel Crawford lost her credibility when she spoke on facial recognition technology and how trans and non-binary people were being "misidentified," when in fact the AI is simply correctly sexing them. Trans and non-binary people still have a biological SEX. It measures the human face and categorizes it as male or female, and does this with great accuracy. To claim AI is "wrong" for not taking into account a person's personal self-identity is unscientific and absurd. Just as AI cannot see a person's religion or political leanings in their face, it cannot see a person's personal gender ideology. It sees facts, it interprets them into data, and can therefore correctly determine a person's sex (which is immutable), and it does this with great accuracy, regardless of the person's subjective self-identity. This take lacked scientific integrity and stood out like a sore thumb in a book based on data and facts.
Profile Image for Sofía .
136 reviews30 followers
November 4, 2021
Too descriptive. The examples are well chosen and the structure works, but it never really goes deep enough to analyze anything. It may be that I was expecting something else, and it's true that this book does serve to dismantle many myths about artificial intelligence, but I think it explains everything AI is not better than what it really is (which is what I would like to understand better). Also, I've read that the author is a jerk who stole ideas from younger researchers.
Profile Image for Maxwell Dalton.
87 reviews2 followers
December 21, 2023
My main mistake on purchasing this book was not reading past the title "Atlas of AI." Seeing just that title, I figured that this would just be a survey covering the various technical aspects of machine learning techniques and their applications, maybe with a bit of history tied in. What I wound up reading instead happened to be perhaps the most pessimistic/apocalyptic view of AI that I've ever encountered.

The book aims to follow the building of AI systems from the ground up, highlighting where people are being exploited along the way, all while offering little to no solution to the problem apart from just stopping the development of AI altogether. Crawford devotes some time to things that have almost nothing to do with AI in particular and are instead just general problems of capitalism: the mining of materials for making batteries (mainly for phones and electric cars, unless I'm mistaken), the use of underpaid workers in other countries for mining datasets, poorly chosen terminology (master-slave in clock synchronization techniques, git [side note: what does clock synchronization have to do specifically with AI?????]), and the advancement of technology for military purposes. Another one of these criticisms was that classification tasks use discretized information that abstracts away hidden meaning, which again seems like a far reach considering that there needs to be some form of discretization if it's going to work on a computer. Perhaps the biggest fault is that all of this criticism comes with next to no praise for the actual advancements achieved in AI.

Still, I suppose some of the overarching arguments are valid, specifically the ones in the chapters on machine learning techniques for surveillance, or on machine learning used in tasks where model bias could have devastating consequences, perhaps the biggest of these being security screenings or welfare "credit scores."

With just the ideas presented in this book, I would probably have given this a 3-star rating. Alas, the writing itself was pretty poorly done, with random jumps to first-person narration that really have no place in this story. Also, the combination of footnotes with citations is extremely annoying, as you never know whether it's worth it to flip all the way to the back of the book or not.

TL;DR: Not what the title says. Missed opportunity.
Profile Image for Anshuman Swain.
182 reviews7 followers
March 26, 2022
An eye-opening book about the costs - social, economic, and environmental - of 'AI' and how it propagates already existing power structures and problems in a newly packaged way. The mythos around AI hides layers of costs and labor that most people are unaware of, and the book does a great job of describing them succinctly.
3 reviews
August 7, 2023
Alexander, who gave this book 1 star (his review currently has 41 likes and 5 comments), describes exactly how I feel about this book.
12 reviews2 followers
May 27, 2021
I loved this book so much. I have been frustrated by many tech analyses, because it feels like writers don't quite get how things work in practice, and therefore have a lot of naive ideas for root problems or solutions. That is not the case here.

Crawford instead contextualizes AI within historical environmental destruction, racism, colonialism (etc.), things that we all now understand to be bad. She then paints a picture of how the same destructive assumptions we made then show up in today's practices.

Truly insightful! Everybody who works in tech needs to read this book!
Profile Image for Paz.
62 reviews10 followers
June 1, 2021
This is a fascinating book as a first reading for people starting on the subject. It is not common to attempt to include the material aspects of AI; however, I believe the treatment of AI's ecological impact was too short for the amount of evidence.

But, again, even though this book is better than most of the "critical approaches" of many authors from the US, I still miss more analysis of how capitalism/neoliberalism is playing a role here and how that is expressed in planetary inequality. Power is not a misunderstanding; economic relationships matter.
Profile Image for Elizabeth.
31 reviews
August 3, 2021
short and pretty good read! i especially enjoyed the first few chapters on the environmental/human costs of AI (in terms of materials for chips and processors, and your cheap data labellers). thought those had more relevance across different fields of AI.

later chapters focus on specific use cases that anyone who isn't just a crazy tech visionary can easily recognize the ethical concerns over - facial/affective recognition, use of AI in policing/risk prediction, military usage etc... but the book does a nice job of pulling together the history of how these use cases came to be.

minor annoyance is that the author likes to use a lot of big conceptual words and thematic statements, which makes me backtrack 2-3 sentences every time because im trying to figure out what she's referring to (with concrete examples). but i think this is a humanities thing because i am pretty sure my essays in JC were like this too lol
Profile Image for Siu Hong.
99 reviews2 followers
August 31, 2021
10/5 stars

Beneath the optimism of AI is the cruel exploitation of the planet, of workers, and of countries. The current climate crisis and humanitarian crisis cannot be stopped with more control, more exercising of power over others. Kate Crawford has opened my eyes with in-depth investigations united in a psychological and philosophical vision: that we can only solve the world's problems by changing our minds, not making more AIs.

But it is the haunting coda that elevates this already important book to a classic of the 21st century. The space race, crimes against the Earth, ownership, and fear of death all interconnect underneath the ambition of AI.

Thank you Jannik for recommending the book to me!
