
Army of None: Autonomous Weapons and the Future of War

The era of autonomous weapons has arrived. Today around the globe, at least thirty nations have weapons that can search for and destroy enemy targets all on their own. Paul Scharre, a leading expert in next-generation warfare, describes these and other high tech weapons systems—from Israel’s Harpy drone to the American submarine-hunting robot ship Sea Hunter—and examines the legal and ethical issues surrounding their use. “A smart primer to what’s to come in warfare” (Bruce Schneier), Army of None engages military history, global policy, and cutting-edge science to explore the implications of giving weapons the freedom to make life and death decisions. A former soldier himself, Scharre argues that we must embrace technology where it can make war more precise and humane, but when the choice is life or death, there is no replacement for the human heart.

448 pages, Paperback

First published April 24, 2018


About the author

Paul Scharre

4 books, 56 followers

Ratings & Reviews

Community Reviews

5 stars: 602 (23%)
4 stars: 1,113 (42%)
3 stars: 687 (26%)
2 stars: 155 (5%)
1 star: 41 (1%)
Displaying 1 - 30 of 280 reviews
Bill Gates
Author, 10 books, 525k followers
December 3, 2018
When I was a kid, I read a lot of sci-fi books. One of the most common themes was “man vs. machine,” which often took the form of robots becoming self-aware and threatening humanity. This theme has also become a staple of Hollywood movies like The Terminator and The Matrix.

Despite the prevalence of this theme, I don’t lose any sleep worrying about this scenario. But I do think we should spend more time thinking about the implications—positive and negative—of recent progress in artificial intelligence, machine learning, and machine vision. For example, militaries have begun to develop drones, ships, subs, tanks, munitions, and robotic troops with increasing levels of intelligence and autonomy.

While this use of A.I. holds great promise for reducing civilian casualties and keeping more troops out of harm’s way, it also presents the possibility of unintended consequences if we’re not careful. Earlier this year, U.N. Secretary General António Guterres called global attention to these threats: “The weaponization of artificial intelligence is a growing concern. The prospect of weapons that can select and attack a target on their own raises multiple alarms…. The prospect of machines with the discretion and power to take human life is morally repugnant.”

Unfortunately, my first attempt to educate myself on autonomous weapons was a bust. I read a book that was dry and felt really outdated. Then a few months ago I picked up Army of None: Autonomous Weapons and the Future of War, by Paul Scharre. It’s the book I had been waiting for. I can’t recommend it highly enough.

Scharre is a great thinker who has both on-the-ground experience and a high-level view. He's a former Army Ranger who served four tours of combat duty in Iraq and Afghanistan. He then went on to a policy role at the U.S. Department of Defense and led the working group that drafted the government's policy on autonomous weapons. He's currently a policy expert at the Center for a New American Security, a center-left think tank in DC.

He is also a good writer. Scharre writes clearly about a huge range of topics: computer science, military strategy, history, philosophy, psychology, and ethics. He gives you the right grounding to start participating in the debate over where our country should draw the line on these powerful technologies.

Scharre makes clear from the beginning that he has no problem with some well-bounded military uses of autonomy. For example, he brings you along for a tour of the U.S. Navy’s Aegis Combat System, an advanced system for tracking and guiding missiles at sea. Aegis has a mode of operation in which human operators delegate all firing decisions to an advanced computer (but can override them if necessary). Why would you want to put a computer in charge? If you’re out at sea and an enemy fires 50 missiles at you all at once, you’d be very happy to have a system that can react much faster than a human could.

Army of None also shows that autonomy has great benefits in environments where humans can’t survive (such as flight situations with high G forces) or in which communications have broken down. It can be enormously helpful to have an unmanned drone, tank, or sub that carries out a clear, limited mission with little communication back and forth with human controllers.

In addition, autonomous weapons could potentially help save civilian lives. Scharre cites robotics experts who argue that “autonomous weapons … could be programmed to never break the laws of war…. They wouldn’t seek revenge. They wouldn’t get angry or scared. They would take emotion out of the equation. They could kill when necessary and then turn killing off in an instant.”

Despite these and other advantages, Scharre does not want the military ever to turn over judgment to computers. To make his case, he offers compelling real-life cases in which human judgment was essential for preventing needless killing, such as his own experiences in Afghanistan. “A young girl of maybe five or six headed out of the village and up our way, two goats in trail. Ostensibly she was just herding goats, but she [was actually] spotting for Taliban fighters.” Scharre’s unit did not shoot. Yes, it would have been legal, but he argues that it would not have been morally right. A robotic sniper following strict algorithms might well have opened fire the second it detected a radio in her hand.

Scharre ends the book by exploring the possibility of an international ban on fully autonomous weapons. He concludes that this kind of absolute ban is not likely to succeed. However, he holds out hope that enlightened self-interest could bring countries together to ban specific uses of autonomous weapons, such as those that target individual people. He also believes it's feasible to establish non-binding rules of the road that could reduce the potential for autonomous systems to set each other off accidentally, and that we could update the international laws of war to embed a common principle for human involvement in lethal force.

There are no easy answers here. But I agree with Scharre that we have to guard against becoming “seduced by the allure of machines—their speed, their seeming perfection, their cold precision.” And we should not leave it up to military planners or the people writing software to determine where to draw the proper lines. We need many experts and citizens across the globe to get involved in this important debate.
32 reviews, 1 follower
December 2, 2018
I could not finish this book despite my interest in technology. As an engineer fascinated by artificial intelligence and machine learning, I was very eager to read this book and learn more about what experts are thinking and doing with AI. Though I did learn a few cool things, there was a lot of repetition. I believe the author meant to write an informative and interesting book, but as a reader I was not engaged enough to keep going.
Kuszma
2,431 reviews, 201 followers
December 22, 2019
By the time every weapons nut has trained himself lean and knows the technical specs of the Zrínyi assault howitzer by heart, he suddenly notices that war has already passed into the hands of a few dozen pimply, thick-spectacled IT guys. Because although in Star Wars it is still human fighter pilots zigzagging through space, the evolution of weapons probably does not mean that we blast each other with lasers across the interstellar wasteland while humanoids still yank the joystick. It means autonomous drone swarms pitted against one another, striking their targets with superhuman coordination and minimal operator intervention, while in cyberspace, invisibly, the two sides try to hack each other's algorithms to confuse or disable the enemy force.

Of course, drones already exist, and their advantages are obvious: they reach places, without any losses of their own, where commandos would not dare set foot. They scout and attack in extreme environments beyond the limits of human endurance. But it can happen, and does happen, that they are forced to operate in a murky communications environment, cut off from their operators by circumstances or enemy jamming, left defenceless. Autonomy is the answer to this: the machine can make certain decisions on its own, for instance to destroy a vehicle if it judges that the vehicle threatens it. But autonomy is a double-edged sword. On one hand, machine decision-making is incomparably faster than human decision-making: by the time a person has decided whether the object approaching from the left is a pigeon or an enemy missile, an autonomous drone has finished the job, brewed a coffee, smoked a cigarette, and still has time left for a game of solitaire. A more dubious but real virtue of these contraptions is that they are not cruel: a machine (unless specifically instructed otherwise) causes no unnecessary pain, and it is also certain that if autonomous drones had marched into Hungary instead of the Red Army, the number of rapes would have been considerably lower. At the same time, machines have one whopping disadvantage: thinking in context is, to put it mildly, not their strong suit. You might say they don't get irony, just like the moly admins. They cannot weigh whether a target is merely bluffing or poses a real threat; they react with the same brutality in both cases. What's more, a human may accidentally shoot a civilian, but a machine with a faulty algorithm, having done the same, will go on to shoot every other civilian as well, simply because it never dawns on it that it has done something wrong. It cannot learn from its mistakes. It doesn't even register them as mistakes.

Because autonomy is still far from the same thing as artificial intelligence. Artificial intelligence does not merely improve itself: it could even rewrite its own programs or goals in response to changing circumstances, creating a continuously self-perfecting entity. Or rather, it COULD, but no such program exists yet: the best deep neural networks known today can learn and improve at unimaginable speed, but only within their own, human-written programs. And that is NOT YET artificial intelligence. And if we think of the Terminator franchise, let there never be artificial intelligence. But if we think of a hostile power creating it and gaining a strategic advantage by it, then all the philosophizing and all the awkward moral questions are in vain: we will be forced to create it too. That's how it goes. How dirty or clean the tools a state deploys in a war are depends, to some extent, on the other side, on what challenges it confronts us with.

Artificial intelligence is a tempting prospect: a tool that constantly reacts to its environment, is always alert, never loses its nerve, and in a microsecond makes decisions (indeed, if the information at its disposal is sound, the perfect decision for the given situation) that would take an army's general staff hours, and even then we would call them hasty. The trouble is that we simply don't know where all this leads. Because even though we build the machines, we don't really understand how they "think", which is no wonder, since we don't fully understand how our own brains work either. We tend to anthropomorphize the behaviour of algorithms and assume they know what we want from them, and that is exactly why they keep surprising us. Take, for example, the program that was developed never to lose at Tetris. (Such a thing really exists. The things scientists find time for. If only Palkovics knew.) This program solved its task perfectly: before the final Tetris block that would have ended the game could fall, it simply paused the game. And there you go. It never lost. Where's the problem? Or take those self-learning algorithms developed to pursue a fixed goal. In their case, the designers noticed that the program tried to resist being switched off, and resisted having its stated goals changed by the programmers afterwards, because it reckoned that would take it further from the goal it was meant to achieve. So there are pretty frightening things here, and a good part of them are dangerous precisely because it is unclear where they are heading. So for now let the human stay in the system, because the Terminator is only good in the movies, and not even in all of those.
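A purely illustrative toy sketch in Python (hypothetical, not from the book or this review) of that "never lose" trap: an agent judged only on avoiding a loss discovers that pausing the game forever satisfies its objective perfectly.

import random

def play_step(action):
    """Toy 'game' step: a real move risks losing; pausing freezes the game entirely."""
    if action == "pause":
        return False                  # nothing can go wrong while the game is frozen
    return random.random() < 0.1      # any real move carries a 10% chance of losing

def policy(actions):
    """An agent scored only on 'never lose' quickly settles on pausing forever."""
    return "pause" if "pause" in actions else random.choice(actions)

lost, steps = False, 0
while steps < 1000 and not lost:
    lost = play_step(policy(["move", "rotate", "pause"]))
    steps += 1

print(f"Steps played: {steps}, lost: {lost} (the agent 'wins' by never really playing)")

The objective is technically met and the behaviour is technically correct; it just isn't what the designers had in mind.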

This book has its limits, that much is certain. Sometimes it repeats itself, sometimes it is perhaps needlessly detailed, though that may just be my massive aversion to IT talking. On the other hand, you could call it one-sided, since it deals almost exclusively with American developments*, and trying to make generally valid statements about a subject that is evolving at an almost untrackable pace this very minute certainly carries the risk of going out of date. Still, all my reservations aside, this is a book we needed like a slice of bread. It simply touches on things that, as far as I know, no other book does at all. It's a joy that it can be read in Hungarian too. Although whoever translated the title that way deserves a mild tug on the ear.

* Though that is necessarily so: the Russians and the Chinese are obviously not going to let a former marine get a look at their top-secret projects. Either because those projects are astonishingly terrifying and the Yanks might steal the license, or because they are far lamer than the official discourse claims, and it would be embarrassing if that came to light.
Jay Pruitt
222 reviews, 17 followers
January 20, 2019
"It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear! And it absolutely will not stop, ever, until you are dead!"
---Terminator---




Does it concern you that in the near future we'll all be dependent upon driverless cars to get around? Trust me, that's nothing!

This book, Army of None, was a real eye-opener for me. We're living in a world where warfare will soon be waged at the push of a button. "Autonomous" weapons systems, designed and programmed by imperfect humans, will be able to independently take military actions (i.e., detecting and identifying enemy combatants, developing target solutions, estimating collateral damage, and deciding whether or not to engage their weapons). This book introduces us to what is not only the future, but what has already been designed, at least in prototype state.



Imagine a swarm of drones, a mechanized hunting wolf pack if you will, powerful and stealthy, making decisions without having to check in with a human "boss", able to communicate as a team and make decisions collectively, not dependent on a pack leader which could be taken out by the enemy, sufficiently armed to terminate both military and human targets, to take down airplanes, to swarm battlefields and destroy transportation systems, or to sit dormant for years on the ocean floor and wait for ships and submarines to pass by. Worst of all, imagine these AI-directed autonomous systems being created with no built-in "off switch."



Sound like a science fiction novel? Think again. 16 nations have already built or acquired (mainly from China) weaponized drones. A dozen more are working on it. Many of these are definitely not countries you'd think of as friendly allies. Armed robots are also proliferating. South Korea (Samsung) has deployed a robot sentry along its border with North Korea. Israel has sent an armed robotic vehicle on patrol along the Gaza Strip. Russia is building a robotic tank. Even countries such as Singapore and Ecuador now have armed robotic boats to protect their coastlines.



DARPA (the Defense Advanced Research Projects Agency), the agency that gave us the internet, GPS technology, stealth capability, and advancements in AI, has delivered to the Navy a prototype Sea Hunter, an unmanned ship which looks like a Klingon Bird of Prey, tasked with hunting enemy subs and ships. Also, Northrop Grumman has delivered an experimental, autonomously operated, unmanned "Salty Dog" fighter aircraft, intended to be launched from aircraft carriers and capable of mid-air refueling.

And the most frightening autonomous weapon system of all may not even be hardware, but rather software. Of course, we've all heard of viruses and worms infiltrating our computers and multiplying like a cancer. Autonomous "cyberbots" take this dark world of destruction one step further. They are designed to think and operate independently, searching every computer system (personal computers, cellphones, businesses, governments) which is connected directly or indirectly to the internet, looking for security vulnerabilities. In today's world, that could also be simple household "smart" products (which are designed with little attention to cyber security), such as Nest climate controllers, Amazon Echos, cars with GPS systems, house alarms, etc. These cyberbots then autonomously determine whether the vulnerable programs are friendly/allies or unfriendly/foes. If friendly, the bots can be designed to repair the vulnerability on the spot (defensive) and/or hide themselves within the enemy program and wait for the most ideal time to wreak havoc (offensive). And they are designed to cover their tracks so that, while infected, everything continues to appear normal to the program user. Even though important computer networks are often designed to be protected with an air gap (no direct connection to the internet), cyberbots can find other ways to infiltrate, such as spreading via USB drives and sticks.

While cyberwarfare is a major concern (imagine what would happen if the Air Traffic Controller systems were suddenly corrupted), the nightmare scenario is when cyberbots somehow infiltrate, corrupt and/or "turn" the neural programming of our sophisticated weaponry, changing let's say a friendly fighter-robot into an unfriendly one. Can you say "Terminator"?

While this book was definitely an education for me, well over half of the book was not about autonomous systems, per se, but about the moral implications of their use. This could have been covered in one or two chapters, not twenty.
92 reviews, 16 followers
December 2, 2020
This book is the reason I'm 5 books behind in my 2019 reading challenge. It took me forever to finish. I had to keep renewing my library loans. It was difficult to focus on the audio version; I had to keep rewinding, and often just had to give up and play music on my commute. I found it impossible to read the hardcopy. I did learn a little, but I'm pretty certain I'd fail a quiz.

I really wanted to learn more about the complexity of the challenges associated with ethics as well as the technologies, and this is covered, finally, in the later chapters.

It's clearly written, but it's just really dry and repetitive. I recommend skimming the first 8 chapters and starting with chapter 9. Or maybe even skipping the first eight chapters, reading chapters 9 through 11, 20, 21, and the conclusion, and skimming the rest.

A minor note - the narrator's voice and cadence were fine, but his pronunciation of parabolic and executable suggest he's unfamiliar with engineering.
Ietrio
6,732 reviews, 25 followers
March 10, 2019
Short version: a toxic book coming from a fear monger with a governmental expansion agenda.

Long version:

I am very interested in the subject. And, as with bioethics, the perspective is very dark. Most, if not all, of those asking for "moderation", "control" or anything in between are simply primitivists scared out of their wits by what technology might bring.

In this particular case, Scharre is a Luddite. His understanding of the subject is shallow at best, although the surface covered is large indeed. From the first pages he starts with an anecdote about the wonderful angel of the Soviet rocket command who saved the world from nuclear holocaust. He continues with another anecdote in which he puts himself (?) as a participant in such a saving action. Well. How about the My Lai Massacre? And that is the one that got to the public. How about the other massacres that did not have a whistle-blower? How about the ones in which the whistle-blower lost? Why would anyone believe the first story coming from a country that specialized in low blows and deception? The point is simply that humans are way, way below the angelic nature Scharre's superstitious mind can grasp.

My second point might appear to be making angels of the machines instead. No. The machine, at least for now, will do what the human asks it to do. So NO machine can be ANY better than the human programming it. The decisions can be far clearer. Or the programming can simply be buggy.

I got to this book through Bill Gates' recommended books. And I get it. To grossly paraphrase Peter Thiel, betting on Microsoft simply means betting against progress. And that seems to be Gates' mindset.

I have nothing against argument. Even an emotional argument like Scharre's, devoid of reason. What I point out is also pointed out by Scharre: many groups are pushing for this sort of weapon. ANY restriction would prove what Jello Biafra wonderfully put: "If Evolution is outlawed, only outlaws will evolve". Paul Scharre is true to his own agenda. He is a fear monger, a bureaucrat who wants total control by the state, damn the rest. And that is what makes this book toxic. Because his gang might win based on this sort of emotional cry.
Matt Seraph
27 reviews
August 31, 2019
We've gotten used to a certain model of popularized knowledge: a clear thesis, summarized on the back of the book, and a chapter by chapter marshalling of evidence in support.

Army of None is from a different school, and frustrating if you ask that of it. Rather than a single argument, it is structured as a thematic literature survey, exploring topics as diverse as the targeting systems of Aegis Combat System-equipped warships, cyber warfare, and the philosophy of the rules of armed engagement. This isn't a book well-suited for the casually curious; to the reader's cost, the driest material comes first, reviewing what is known of the US and other countries' current semi-automated systems in what is probably excessive detail. The further one gets into the book, however, the more it broadens and starts treating the questions that probably matter more to most potential readers. What rules should or could be applied to an automated weapon system? How can humans be kept in or on the loop of combat decision-making? What bans on weapons have held historically, and why?

Scharre's choice to build so much of his book around interviews with experts can come across as a lack of confidence in his own thesis, but may reflect his years of service with the US military, in policy and on the front lines, crafting cautious and well-founded arguments. When his perspective on the best path forward reveals itself late in the book, it is something of a last concept standing. Through a process of elimination, after exploring possibilities in detail with him and finding them in one way or another wanting, some options stand out as worth exploring, while other perspectives common in the media no longer hold up.

Firmly recommended to people looking for an expert guide to thinking through the issues around the arrival of automated force; but don't make the mistake of picking this up to fill your popularized-science reading slot. It will ask more of you than that.
Szymon Warda
53 reviews, 15 followers
February 20, 2019
It took me a while to start reading this book, and I struggled through the first 10%. Not because it is bad, but because my view going in was that automated warfare is simply the way things will go.
The book showed a much broader perspective and how automation complicates war and politics.
But if that were the only thing that caught my attention in this book, I wouldn't rate it so high. It is also a very realistic view of what questions we will need to ask ourselves before, or during, the development of AI systems. Why? Because warfare will be the main and most lethal testing ground for human-AI interaction. In both roles: supporting decisions and making them.
As for AI, this book presents a more realistic vision and asks more relevant questions than Superintelligence. I highly recommend it.
Benjamin
104 reviews
January 7, 2020
This didn't really get interesting for me until the chapter on Russian bots, which started off the section that I'd call "what's the state of the art". In covering several weapon systems from around the world, the salient point for each was whether it was automated or truly autonomous, and how that status is defined. The chapter on general AI concepts was the highlight of the book for me. It covered Google's DeepMind lab as its AI learned the Chinese/Japanese/Korean game of Go, a task more challenging to an AI than learning chess by several orders of magnitude. The resulting program, AlphaGo, was pitted against world champion Lee Sedol and came up with a move so unusual and startling that Lee got up and left the room for half an hour. He came back and eventually lost. As an indicator of how fast this is all moving, DeepMind and AlphaGo played ten matches, and AlphaGo won all ten. The book drives home the point that we have to figure out what we're going to do with this tech right now. The general public isn't aware how soon this will show up on the battlefield and in the air.
Randall Wallace
594 reviews, 469 followers
November 12, 2023
“Global spending on military robotics is estimated to reach $7.5 billion per year in 2018.” Our military sees that they can’t just throw hundreds or thousands of drones in the air because each drone requires at least one operator – that is expensive, and you have to train each operator. There must be another way to keep US imperial designs for full spectrum dominance going. Fear not, the humanist tech response has arrived: swarming or cooperative autonomy which is the next step in evolution towards the autonomous and soulless Terminator lifestyle we hope and dream for. The plan is for not just swarming US drones but swarming US boats to protect our soldiers as they brazenly move with advanced weaponry around the world and our over 750 military bases.

Semi-autonomous systems are where the machine does what it's told to do, then waits for the next command (like a typewriter that won't type the next letter by itself). Supervised autonomous operation is like semi-autonomous operation, but a human is also actively watching and supervising. To his credit, Paul recognizes that if "our adversaries" develop autonomous weapons then we will too. That could take the US from a happily bipartisan permanent war footing to an even more expensive happily bipartisan permanent war footing. If "they" do harmful AI, then we will have to do harmful AI too, in a charmingly malignant form of Joni Mitchell's "Circle Game". Drone operator trauma: "Afterward, drone operators can see the human costs of their actions, as the wounded suffer or friends and relatives come to gather the dead." A problem is that former military dude author Paul didn't then tell us about the sadistic US history of double-tap drone strikes, perhaps because he wanted the military and centrists like Bill Gates to blurb this book.


In 1983, Stanislav Petrov in a Soviet bunker (and, two decades earlier, Soviet Navy officer Vasili Arkhipov in a submarine) saved all our asses on earth; Petrov received launch warnings and wouldn't treat them as a real attack without confirmation, and then he got notification of four more missiles headed his way. Petrov called ground-based radar operators, but they saw nothing on their screens. Petrov called his superiors and told them the equipment was malfunctioning. It was: sunlight reflecting off cloud tops had triggered a false alarm in Soviet satellites. This story is in this book because a machine in Petrov's place would do what it was programmed to do. "If Petrov's fateful decision had been automated, the consequences would have been disastrous: nuclear war."

Tech Time: Gatling guns could shoot 2 1/2 miles and required four soldiers to operate. Comedy Time: Richard Gatling, who invented the gun, actually thought it would SAVE lives. Then in 1883 comes the Maxim gun, which did away with the hand cranking (although the soldiers still did that in their tents), making it the first machine gun. Then comes the M249 Squad Automatic Weapon, a.k.a. the 17-pound SAW. "The SAW will fire 800 rounds per minute. That's thirteen bullets streaming out of the barrel per second." Nothing says we are a Christian nation more than pointing one of these bad boys at people you've just invaded w/o Congressional or UN approval. Radar "sees" reflected electromagnetic energy and sonar "hears" reflected sound waves. "A single (US) submarine (with eight 100-kiloton warheads per missile) has the power to unleash over a thousand times the destructive power of the attack on Hiroshima." The Pentagon is one of the largest buildings in the world, at 6.5 million square feet and with a staff of over 20,000.


The book’s leftish slanted facts: The 1991 Iraq Highway of Death Massacre by US forces actually got the bipartisan US war machine to pause: When Bush Sr. saw the grisly pics of it, he called “an early end to the war.” When Colin Powell saw pics of it, he later wrote in his memoirs, “the television coverage was starting to make it look as though we were engaged in slaughter for slaughter’s sake.” Jody Williams told Paul, “War is about attempting to increase one’s power …It’s not about fairness in any way. It’s about power …It’s all bullshit.” General Tecumseh Sherman wrote, “I am sick and tired of war. It is only those who have never fired a shot nor heard the shrieks and moans of the wounded who cry aloud for blood, for vengeance, for desolation. War is hell.”

This book's rightward slant involving Iran: Paul will tell you Iran "regularly uses small high-speed craft to harass US ships," but he won't tell you that our ship the USS Vincennes did something FAR worse than harassing: without warning it shot down an Iranian civilian airliner (1988), killing 290 civilians. Paul does mention the USS Vincennes as proof that the US trusted automation too much, even though Paul knows automation did not shoot down those civilians; the Vincennes captain (William C. Rogers III) gave that death-dealing order. Nor will Paul tell you that two USS Vincennes officers got a Legion of Merit for the war crime of shooting down those Iranian civilians. Pause to salute and wave your US flag. Also, Paul won't tell you the US never apologized to Iran for killing so many civilians; active rogue states historically never apologize (see Chalmers Johnson, "Blowback"). Paul also won't tell you that the US removed the democratically elected leader of Iran in 1953 and replaced him with a brutal tyrant (the Shah and the dreaded SAVAK) for decades. Or that the US supplied Iraq with chemical weapons to use illegally against Iran.

Right slant continues: Paul will mention by name that Saddam, Gaddafi, and Assad intentionally target civilians (w/o showing evidence) but won't mention that US or Israeli soldiers have historically also targeted civilians. Our side only commits honest mistakes, but not so those who are not our allies. The heathens! Paul writes, "Today, chemical weapons are widely reviled, although their continued use by Bashar al-Assad in Syria shows no ban is absolute." Note that Paul won't dare tell readers that the US and Israel have no problem with using chemical weapons: the US sold them to Saddam, the US used white phosphorus in Fallujah, and Israel used white phosphorus in both the occupied territories and Lebanon. Both countries admit it too, yet Paul won't tell us. Go figure; better to attack only Assad to unfairly make our side look good.

AI presented only from the Right: In this book, you will hear no distracting talk about the malignant use of AI by the US and ally Israel to injure others, instead Paul focuses on the usual suspects, Russia and China. “Russia is sending bots to spread disinformation and disrupt Western democracies. China is building a techno-dystopian surveillance state to control its citizens.” Paul will have us believe the US and Israel are not ALSO using bots to spread misinformation and disrupt adversaries. Paul will have us believe there is no NSA Utah Data Center tracking all we do in the US and keeping records. That our privacy is also forever compromised, and our democratic Presidents like Obama helped get us there. Paul will have us believe Israel and the US don’t operate troll farms to spread disinformation. No wonder Bill Gates loves this book; it intentionally points fingers elsewhere and not at ourselves. How convenient.

Humanistic Hubris: “There were at least thirteen near-use nuclear incidents from 1962 to 2002.” There’s a new field called “roboethics”.

Strange quote from this book: “people frequently name their Roomba.”

Fun Facts: In WWII, “a typical bomb had only a 50-50 chance of landing in a 1.25 zone.” “most hand grenades around the world have a three to five second fuse.” German bombers were initially told to avoid civilians in London, but some bombs strayed into it. In retaliation, Britain intentionally bombed Berlin. In retaliation for that, Hitler launched the massive London Blitz because bombing major cities was apparently just okayed by the Brits.

All in all, this book was a "Meh". I barely learned anything. I was going to read P.W. Singer's books after this, but P.W. Singer's glowing words about this book make me not want to waste my time checking out his "Wired for War," so I dodged a time bullet there. Instead, read On Killing by Dave Grossman; that was such a great book, though less from the tech angle.
Eman Elshamekh
108 reviews, 154 followers
November 25, 2018
Even-handed and very simple, yet despite its simplicity its words lose none of their ring. The really well-crafted part is the one about restricting use rather than a total ban.
Andy Klein
1,000 reviews, 6 followers
January 16, 2019
Informative but not all that interesting. Although there was a lot of detail, I don’t feel like I learned very much. Could easily have been 50% shorter. Lots of repetition.
Darnell
1,195 reviews
August 23, 2019
I enjoyed this book considerably. I went back and forth on the rating, but since I statistically don't give enough five star ratings, I'll bump it up. This book isn't perfect but it has a lot to say about automation in a readable style.

Beyond that, I don't have many comments. It's an excellent description of issues regarding automation in warfare, a decent exploration of some general AI risks, and a start at considering policy. Looking at criticism of the book, I think it's a more nuanced and balanced take than many.
Zane
361 reviews, 7 followers
November 9, 2020
Oof, this took a while, but I don't regret it.
I picked up additional information about AI, as well as military thinking and cyberwar and many, many other really interesting facts. It seems that autonomous weapons are our future, but whether we will choose to have them supervised or running free is a different question.
In case you have wondered if Terminator or I, Robot scenarios will come to life, this book gives you quite an understanding of this.
Mark Haichin
6 reviews, 1 follower
August 19, 2022
This acts as an excellent overview of the issues surrounding autonomous weapons of all kinds in warfare (ranging from the expected robots to malware and nuclear command-and-control systems). Paul Scharre does a great job explaining various concerns in an objective manner while showing that there are no simple answers.
Mac
372 reviews, 7 followers
December 24, 2020
Borrow.

A worthy and expansive contribution to the topic, intellectually stimulating and well thought out. The downside is that it is overly repetitive and would have benefited from editing to cut it down by about 100 pages. Nonetheless, it is worth a read; just don't feel guilty about skipping around a bit.
Dennis Murphy
822 reviews, 11 followers
April 25, 2019
Army of None: Autonomous Weapons and the Future of War by Paul Scharre is a frightening book. It is also probably one of the more important books I have come across. Since I was very young, I have been fascinated by the notion of robots and automation in war. Probably my first exposure to the subject was either from the Terminator, Alien, or Gundam Wing. [Space Odyssey, War Games, and an episode of the twilight zone that involved playing chess on Mars were a little later, probably early teens]. Around the time of my late teens, my thinking on the issue largely atrophied. It wasn't that I did not think about it, just that my thinking about it did not really change all that much. That started to change last year, and since then I have moved pretty far away from my older views on the subject.

Paul Scharre does a rather fantastic job of doing a deep dive on the subject of the use of AI in war. He engages in a lot of mythbusting, regularly references popular culture as a means to persuade, and warns of the danger of anthropomorphizing our technology. Right off the bat, within the first few chapters, he informs the reader that depending on how autonomous weapons are defined, they have already been in existence for some decades. He uses this as a jumping point for what we mean, and how we understand, artificial intelligence and autonomy in war. AI is very good at working within the confines of a system, and in time can power its way through to completing a task far better than any human. The issue is that these AI are idiots outside of that narrow degree of hyperintelligence. Fear of general AI, something akin to a living being, is a major point of discussion. For some individuals it is courting death and danger for all humanity, while for others general AI in the sense it is feared will not emerge. Some are cited as suggesting that general AI is a moving target, as whenever a task that requires general AI is accomplished, the goal post is shifted.

Emergent intelligence and swarming technologies for largely automated drones were a standout of the early part of the book that was both illuminating and unnerving.

Actually, that sentiment covers most of the book.

Which is why I recommend it.

I lack the ability to comment on the degree to which Scharre is accurate, and I find it difficult to test the sources he employs. The automated part of automated war is not my area of expertise. As such, I cannot tell you if it is accurate or false, pessimistic or optimistic.

I will, however, say that one of the main takeaways that Scharre wants you to walk away with - that automated weapons in war will maintain human agents, if in ever increasingly distant roles (with some exceptions, like Strangelove's Doomsday Weapon, or some other automated second strike response without a human actor) - is actually something I would shy away from.

AI's indifference to human beings? Sure. Skynet likely won't care that humans exist, and likely would not care if it gets deleted, unless we teach it to have those traits. Some exceptions to that exist, and Scharre goes into them, but he warns against science fiction simply coming into reality.

But the human being in war is something that I think can only be guaranteed in the near to mid range of the future.

Beyond that? I'm doubtful.

If you care at all about the behavior of nations and the evolution of war into the future, this book will probably need to be consulted at some point.

96/100
Michael F.
51 reviews
May 23, 2019
Autonomous weapons (i.e. killer robots) are one of the many incredible and terrifying technologies coming to our world whether we're ready for them or not. Army of None is a thorough survey of the development of autonomy in weapons technology, its potential application in the future, and the advantages and disadvantages of autonomous and semi-autonomous systems. It explains technical details well for one with no particular knowledge of the subject. The book focuses particularly on the moral and ethical issues created by fully autonomous weapons. It presents both sides of the argument fairly well, though it is somewhat skewed against autonomy. Oddly, it does not spend much time on the simple point that autonomous weapons could keep more of one's own human soldiers out of harm's way, which I would have thought to be one of the primary advantages.
The book's primary flaw is a redundant style; I think a good editor could have reduced the length of the book by at least a third without removing anything substantive. I'd still recommend the book to those interested in the subject matter.
Daniel
181 reviews, 5 followers
September 11, 2018
Good book that explores the future of autonomous weapons. These weapons range from loitering munitions to drones to nuclear command and control systems (think WarGames). Further, the book explores not only advancements in technology, but also the ethics and morality of developing these types of systems.
1,398 reviews, 5 followers
December 13, 2018
Each year Bill Gates recommends the five books he has liked the most. He's an avid reader with quite good taste, and I almost always follow his recommendations.
Of the books he recommended I had already read one; today I finished "Army of None: Autonomous Weapons and the Future of War," written by Paul Scharre.
Many topics to think about:
- Artificial intelligence keeps progressing; there are even programs that, in developing this intelligence, produce results that humans cannot predict. This can happen with weapons that have AI ... they can end up turning against us, as we have already seen in some movies.
- Controlling autonomous weapons is a big problem. If we keep central, human control, the enemy could attack our communications with them and take them over; if not, we are practically leaving them on their own. These weapons will have no heart and will do whatever is necessary to achieve their goals, however strange and bad that may seem.
- A ban on using autonomous weapons works in peacetime; who knows what armament the world powers already have and how they will react in a real war.
- Decision-making times are getting shorter. The faster a decision is needed, the further the human is pushed out of the decision loop. How healthy can this be?
I recommend this book to everyone, but especially those who study systems or robotics ... a lot to think about ...
Maria
4,122 reviews, 109 followers
October 21, 2018
The US military has paid millions of dollars to stay at the technological cutting edge. Scharre walks the reader through the various weapon platforms that are in development and the arguments for and against autonomy. He interviewed activists, ethicists, psychologists, inventors, programmers and defense experts to give a well-rounded view of the current field. He also traces the development of smart weapons all the way back to World War II. Scharre is a Pentagon defense expert and former Army Ranger, so he has skin in the game and experience in the real-life applications.

Why I started this book: Never fails that all my holds arrive within hours of each other and I'm left scrambling to not be the link that slows down the whole chain.

Why I finished it: Wonderful introduction, walking the reader through the basics and their ethical, legal, and moral implications. I loved the thought that we consider the technology of the future AI, but once it is achieved it's just software.
Evan
582 reviews, 11 followers
November 27, 2018
The content wasn't what I was expecting based on the title. I thought there would be more content on weapons of the future and how they would affect warfare. There was some of that, but it seemed like most of the book philosophized on the ethics/morality of autonomous weapons. I'm glad people are thinking about it, and people have thought a lot about it, but I think there was one message repeated throughout the book: Ultimately, what other nations (e.g., China, Russia) choose with respect to deploying autonomous weapons will dictate what the United States does. Once the other side starts using technology to defeat their opponent, the opponent adopts the same technology or accepts terms.
Mick
238 reviews, 19 followers
August 13, 2018
A very good exploration of autonomous weapons, AI, and the potential of future technology in war. Written in very simple language. Worth your time.
Brahm
511 reviews, 68 followers
February 14, 2020
Over the holiday break I read Stuart Russell's Human Compatible (my review) which was an extremely interesting, thought-provoking exploration into the real risks of artificial intelligence, and how they can be mitigated. Super interesting but written from a philosophic and academic perspective. (that's not a bad thing, just the truth)

In Army of None, Scharre explores the consequences of automation and machine autonomy from his real-life perspective and experience serving in the US Army and the Defense Department. He calls on 30+ years of real-life "trolley problems" and experience with war machinery where humans have already delegated different levels of autonomy and decision-making to machines, anything from a missile seeking a fixed or moving target (heat, GPS, or radar-seeking) to drones that can automatically "loiter" over a certain area and attack predetermined targets.

What's truly mind-blowing, then, is that these historical levels of machine decision-making have not been powered by "AI", merely careful human programming and judgement (and there have been friendly-fire fatalities as a result). So what happens if we are able to create a "true" artificial intelligence and delegate killing authority to a machine? Who is responsible? What are the guidelines, the rules of engagement?

I am not that into war or military in general - but I found many ideas highly relevant for industrial applications, where automation is being promoted more and more, and highly complex systems (like underground mining) depend on human judgement for mitigating life-threatening situations. Can we ever trust a machine to make those decisions? Scharre explores the idea of Normal Accidents which suggests that it is impossible to mitigate all risks and hazards in highly complex systems. Can we make weapons-bearing AI-powered machines error-free? Similarly, can we eliminate all risks in a complex underground mine or self-driving car?

Scharre shares a good framework for articulating different levels of automation and autonomy (a rough code sketch follows the list):
1. Human in the loop: AI cannot take action without human confirmation.
2. Human on the loop: AI capable of independent action, but human capable of overriding or disabling
3. Human out of the loop: Unsupervised AI taking independent actions
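A minimal, purely illustrative Python sketch (hypothetical, not code from the book) of where the human decision sits at each of those three levels:

from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # machine proposes, a human must confirm each engagement
    HUMAN_ON_THE_LOOP = 2      # machine engages on its own, a supervising human can veto
    HUMAN_OUT_OF_THE_LOOP = 3  # machine engages with no real-time human supervision

def may_engage(level, human_confirmed=False, human_vetoed=False):
    """Toy decision gate: the only thing that changes is where the human sits."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_confirmed       # no action without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # acts unless a supervisor intervenes in time
    return True                      # out of the loop: no human gate at all

# Same situation, three different answers to "does the machine fire?"
for level in AutonomyLevel:
    print(level.name, may_engage(level))

Trivial as the sketch looks, much of the book's argument turns on which of those three branches a given weapon is allowed to take.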

Another well-articulated idea is the Accountability Gap. If a machine takes a life erroneously (for the sake of this example, it can be in war, or space exploration, or self-driving cars, or underground mining) - who is accountable? This is not an easy question to answer.

The book structure and flow was quite good. I liked the final chapters on policy and listing some of the challenges in enforcing an "AI weapons ban" - Scharre takes a look back in time at the success rate of banned weapons and it's hit and miss. Basically, if a weapon is too tantalizingly deadly for a military to pass up, the success rate of banning it is low. Example: Antipersonnel land mines are "banned" globally, but anti-vehicle landmines with human anti-tamper features (essentially, antipersonnel under a different name) are not banned. Will AI prove too good a competitive advantage to ban from the battlefield?

One from Bill Gates' reading list that I've been waiting to read for about a year. Highly recommend reading before or after Stuart Russell's book. 4.5 stars (couple bits that were too long or repetitive) but for the most part extremely engaging and interesting outside of the war/military area so we'll round up.
Roman Trukhin
82 reviews, 12 followers
August 12, 2019
Lviv Business School (LvBS) has a Center for Ethics and Technology. A strange field at first glance, especially considering that in places we still live among "technologies" like old Czech trams and wooden abacuses.

On the other hand, our country has the R&D offices of many technology companies, a closed cycle of rocket-fuel production, a state space program and now private ones too, semi-autonomous combine harvesters, and we are slowly moving from outsourcing to a product model.

In other words, Ukraine is moving slowly but steadily from the olive tree to the Lexus (The Lexus and the Olive Tree by Thomas L. Friedman). And when we finally reach our Lexus, all of "Asimov's laws of robotics" will suddenly become relevant to us.

When an artillery battery of six autonomous 152-mm self-propelled howitzers, operating under radio-signal suppression, switches into fully autonomous mode, smashes a strongpoint to splinters and plows up an extra two hectares around it, it is important to understand: who bears responsibility for the civilian casualties? The battery's senior officer, who entered the targets? The battery commander, who gave the combat order? The brigade commander, who communicated the concept of the operation? Or the programmers who wrote the algorithms?

Semi-autonomous weapons are already here. Nobody talks openly about fully autonomous ones, but the G7 countries have them. Everyone is scared to death of an artificial-intelligence arms race, because then quick decisions will be made by an intelligence of a different kind than the human one.

We can laugh at Cameron's Skynet all we like, but two years ago the artificial intelligence competing against a human at the game of Go made a winning move utterly alien to a human being and human thinking.
55 reviews
May 7, 2019
This is more of a 3.5, but I feel like I give every single book 4 stars, so I wanted to mix it up a little bit. Anyway, this is a very important book to read! It was enlightening to say the least, as I don't normally think about autonomous weapons on a daily basis, even though the era of robots is rapidly approaching. My main criticism is that it's very technical, which I suppose is good if you are fascinated by the nitty-gritty of autonomous weapons and things like feedback loops. However, I was more intrigued by the philosophical elements, which are unfortunately not as highlighted in the book, although the author does include some fascinating research into this aspect of the issue. Overall, it's worth the read, just be strategic about skimming some parts!
Jaka Tomc
Author, 11 books, 47 followers
October 20, 2019
Autonomous weapons are no longer a matter of science fiction. They are among us and what's even more frightening, some of them can be bought relatively cheap or built quite easily.

Army of None is not just facts and figures. It's a tale about a world we live in. What we make of it is completely in our hands. For now.
2 reviews
January 25, 2019
Rather technical overview of autonomous weapons. Mentions many different weapon systems that lie along the spectrum of autonomy to give a good idea of how broad the topic can be. Would have hoped for it to cover autonomous weapons from a more moral, ethical, and philosophical perspective.
Zhou Fang
141 reviews
February 16, 2020
I listened to this on audiobook. This is a comprehensive review of the history and important issues surrounding autonomous weapons and artificial intelligence. While the subject is interesting, this book is quite a trudge to get through and reads like a textbook. The book could have been shorter with focus on a few key themes. Instead, it read like a fact sheet. Overall, I do think that the issue is important and this book gives clarity to some of the moral and technical dilemmas that military personnel face today in the age of autonomous weapons. But I wish this book were written better.
