
The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do

A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away--and argues that this myth is not just wrong, it's actively blocking innovation and distorting our ability to make the crucial next leap.

Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren't really on the path to developing intelligent machines. In fact, we don't even know where that path might be.

A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven't a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That's why Alexa can't understand what you are asking, and why AI can only take us so far.

Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know--our own.

320 pages, Hardcover

First published April 6, 2021


About the author

Erik J. Larson

1 book · 11 followers

Ratings & Reviews



Community Reviews

5 stars: 193 (34%)
4 stars: 235 (42%)
3 stars: 87 (15%)
2 stars: 33 (5%)
1 star: 8 (1%)
Displaying 1 - 30 of 97 reviews
Valeriu Gherghel
Author of 6 books · 1,688 followers
July 11, 2023
Why do I recommend this work and consider it important within the vast literature on artificial intelligence? Let me attempt a few provisional answers.

Because it helped me understand that I had never correctly defined the term "intelligence". It is not only the capacity to solve new problems (as Alan Turing characterized it in a 1950 article), but also the ability to invent, again and again, new problems.

Because it prompted me to ask to what extent human intelligence has evolved since Plato, and to conclude that, in fact, it has remained the same. It has stagnated. It does not depend, therefore, on the volume of knowledge a person possesses. Knowledge grows; intelligence does not. We know far more than Aristotle (in biology, physics, etc.), but we cannot dare say we have become more intelligent than the founder of logic.

The great scientific discoveries do not presuppose an intelligence superior to that of Plato, Aristotle, and Euclid, only a greater sum of mathematical and physical knowledge. Newton was not more intelligent than his forebears.

Consequently, human intelligence long ago reached an insurmountable limit. It reached it six thousand years ago, when humans invented writing. Starting from this fact, I wonder why an artificial intelligence should be able to evolve to the level of human intelligence and then explode into what has been called "superintelligence". That is exactly what the myth Larson examines claims: the evolution of artificial intelligence is inevitable, and so is the moment when it surpasses the possibilities of human intelligence. Moreover, that moment will be something bad, a sign of the apocalypse. Or even the apocalypse itself.

But for an intelligence of any kind to become malevolent (superintelligence is not malevolent in itself; we cannot simply decree that it is), it must be accompanied by consciousness. No one can argue conclusively that there is a necessary link between the growth of intelligence and the emergence of consciousness. How could we deduce that a superintelligence will become self-aware?

Following the philosopher C. S. Peirce, Erik J. Larson considers thinking to be, rather, the capacity to infer, to construct arguments (deductive, inductive, and abductive). Deduction and induction can be "translated" into algorithms and procedures that an intelligent machine can "imitate". But there is no way to formalize abduction (a species of "situated" inferential reasoning) and "transfer" it to machines. Such an inference might also be called "intuition". And it is the only creative one...

In conclusion, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do is one of the very few books devoted to this subject that truly deserves to be read.

P.S. Human thought is not computational in nature (it is not based on calculation), and this explains why a machine can defeat any world champion at Go or chess. The machine commands infinitely more data than a human and processes it at incomparable speed. But that does not mean it thinks. And, after all, chess players don't think either. If you don't believe me, ask Ding Liren, the new world champion...
Dan
374 reviews · 102 followers
October 8, 2021
Successful guys like Musk - who, by the way, believes that we live in a simulation like the Matrix - spend money, form organizations, and participate in talks to block the imminent and evil AI. Bostrom devises brilliant schemes for how to outsmart, while we still can, a superintelligence that may turn us humans into means to some unexpected objective - an objective we cannot anticipate in any way. Tegmark presents us with naive utopias about how wonderful everything will be once AI arrives; he even tells us that he cries with joy at the prospect of its arrival. Kurzweil dreams of some imminent metaphysics with the rise of cyborgs and the nearness of "singularity". And the list goes on. From time to time they get together, scare each other with their stories, and decide it is good business to scare the rest of us too.
But there is a basic problem here: no one has any idea how to practically achieve AI or how to fill the gaps in fundamental AI research, and thus all this talk about the imminence of AI is just hype and myth - as Larson argues here. It all started with the deductive and symbolic approach to AI, which failed long ago. Now we are in the middle of a statistical and inductive machine-learning approach that is quite successful in narrow business fields, but it cannot be extrapolated and generalized to the mythical AGI. Larson nicely presents deductive and inductive inference along with their limitations as possible foundations for AI. Instead, he proposes what Peirce defined as "abductive" inference, basically a hybrid between deductive and inductive inference. Since there is no developed theory of abductive inference, not much can be done in this respect.
IBM's successes with chess and "Jeopardy!" were well-prepared marketing stunts, patched together and achieved by narrow, well-coordinated and well-balanced algorithms. All the promises that Watson would turn to medicine and other fields proved to be nothing but promises. DeepMind did extremely well with board games and video games like Go and StarCraft; but this is not the real world, since all of a game's "rules are known and the world is discrete with only a few types of objects." If AlphaStar, the algorithm that mastered StarCraft, plays a different race in the same StarCraft game, it needs to be trained from scratch. Voice recognition and personal assistants like Alexa are just inductive inference algorithms that are trained on huge data sets and give you the most likely associated answer, given a previous and similar question-answer pair. In other words, there is no depth, no causal understanding, no trace of any "intelligence", and no generalization outside these extremely narrow fields where relevant commercial data are available in huge quantities.
Actually, the fundamental knowledge needed to construct an AI is not actively pursued by any scientist, simply because no one has a clue what is needed or how to bridge the missing gaps. More precisely, the needed and unknown knowledge is actively hoped and prayed for by such "AI scientists". That is, they are "sure" that current and future algorithms running on larger and larger data sets will somehow fill the gaps in fundamental knowledge, and that "consciousness", "intelligence", the "singularity", and so on will spontaneously emerge. In other words, according to Larson, today's AI scientists gave up science some time ago, replaced it with algorithms running on big data, and are waiting, praying, and hoping for AI to spontaneously emerge "soon".
Larson is great at demystifying all this AI hype, and he does it well and clearly in the AI researcher's domain and language - since he is one of them. However, I believe that Dreyfus's critique of AI is more fundamental and relevant than this one - even if it is fifty years old.
Ben Chugg
11 reviews · 33 followers
April 17, 2021
There is a prevailing dogma that achieving "artificial general intelligence" will require nothing more than bigger and better machine learning models. Add more layers, add more data, create better optimization algorithms, and voila: a system as general purpose as humans but infinitely superior in processing speed. Nobody knows exactly how this jump from narrow AI (good at a particular, very well-defined task) to general AI will happen, but that hasn't stopped many from building careers on erroneous predictions, or from prophesying that such a development spells the doom of the human race. The AI space is dominated by vague arguments and absolute certainty in the conclusions.

Onto the scene steps Erik Larson, an engineer who understands both how these systems work and their philosophical assumptions. Larson points out that all our machine learning models are built on induction: inferring general patterns from specific observations. We feed an algorithm 10,000 labelled pictures and it infers which relationships among the pixels are most likely to predict "cat". Some models are faster than others, more clever in their pattern recognition, and so on, but at bottom they're all doing the same thing: correlating datasets.

We know of only one system capable of universal intelligence: human brains. And humans don't learn by induction. We don't infer the general from the specific. Instead, we guess the general and use the specifics to refute our guesses. We use our creativity to conjecture aspects of the world (space-time is curved, Ryan is lying, my shoes are in my backpack), and use empirical observations to disabuse us of those ideas that are false. This is why humans are capable of developing general theories of the world. Induction implies that you can only know what you see (a philosophy called "empiricism") - but that's false: we've never seen the inside of a star, yet we have developed theories that explain the phenomena.

Charles Sanders Peirce called this method of guessing and checking "abduction." And we have no good theory of abduction. To have one, we would need to better understand human creativity, which plays a central role in knowledge creation. In other words, we need a philosophical and scientific revolution before we can possibly create true artificial intelligence. As long as we keep relying on induction, machines will be forever constrained by the data they are fed.

Larson argues that the philosophical confusion over induction and the current focus on "big data" is infecting other areas of science. Many neuroscience departments have forgotten the role that theories play in advancing our knowledge, and hope that a true understanding of the human brain will be born out of simply mapping it more accurately. But this is hopeless. Even after having developed an accurate map, what will you look for? There is no such thing as observation without theory.

At a time when it's in fashion to point out all the biases and "irrationalities" in human thinking, hopefully the book helps remind us of the amazing ability of humans to create general purpose knowledge. Highly recommended read.
Live Forever or Die Trying
59 reviews · 244 followers
January 23, 2022
I personally believe one of the most valuable things a reader can do is read books that run contrary to your held viewpoint. I surround myself with books and articles from Futurists, Transhumanists, Techno-progressive Leftists, and scientists working on finding a cure for aging (writing this out makes me seem a bit out there, huh?). A keystone of the futures these people write about is the coming superpower that is AGI, or artificial general intelligence: a machine that can think the same way we do. That's where "The Myth of Artificial Intelligence" by Erik J. Larson, put out by Harvard University Press, comes in to serve as my contrarian viewpoint.

Broken into three main sections, this book first covers a history of computation and early theories of intelligence, and how we arrived at our present world. We look at Alan Turing and his time at Bletchley all the way to figures such as Nick Bostrom and Ray Kurzweil, and along the way learn what problems these figures predicted AI would solve.

Second, we take a deep dive into what AI is good at, namely machine learning, deep learning, neural nets, and other "narrow" forms of intelligence. We take a long, hard look at why general intelligence is not making progress and the problems AI has when trying to "jump to a conclusion" or use "abductive" inference. We also spend a lot of time on the problems that language and logic pose for AI.

Finally, the third part analyzes the halt in AI's progress and the damage these "myths" have caused. If we want AI to progress to AGI, then we must go back to the drawing board and first understand how the mind works before we dive headfirst into full brain emulation.

Overall this book was a 5/5 for me. I am quite a novice on the workings of AI, and the book was very readable, although at times tedious and dense. For someone new to the topic it had more information than I could integrate in one sitting, but it gives me endless jumping-off points to learn about the subject in more detail. I would recommend it to anyone interested in AI, especially if you are like me and wish to see this tool leveraged in the future.
Loren Picard
64 reviews · 12 followers
June 19, 2021
The best, most level-headed, and most honest take on where scientists are with AI. No talk of cosmic endowment, killer robots, or machines replacing humans as a species. Larson doesn't sidestep the narrow successes of AI; he explains them for what they are, including why computers can beat humans at games but can't understand an ambiguous sentence. In an ironic twist, you come away from the book somewhat let down that artificial general intelligence is nowhere in sight (there are no workable theories being explored), but the best outcome of reading this book is that you feel newly empowered as a human, with an intellect that can't be duplicated.
Buzz Andersen
26 reviews · 111 followers
December 3, 2021
Fantastic book, and a great complement to a book like The Alignment Problem. Only deducting a star for the strong whiff of Thielism detectable in the polemical section toward the end. Really, though, a great and surprisingly philosophical book about the limits of AI and the misguided myths that have grown up around the field.
Matt Lenzen
28 reviews
September 3, 2023
Larson, a computer scientist and philosopher, released this book in 2021, before ChatGPT's public release. Larson's core thesis is that although we have made incredible strides in creating narrow AI (e.g., ChatGPT, AlphaGo, AlphaFold, IBM Watson), we still haven't a clue how to create artificial general intelligence, much less superintelligence. Instead, we have fallen victim to an AI mythology that has been peddled since the early 20th century, relying on faith, rather than science, that artificial general intelligence is inevitable. He explores the foundations of inference (deduction, induction, and abduction) and explains why deep learning, which is primarily a form of induction, can by its nature never become general intelligence. He concludes that our worship of data will not only prevent us from creating AGI, but is an existential threat to science as a whole.

Although ChatGPT has already rendered some of Larson's criticisms obsolete, his core critique still holds. Like adherents of any faith, AI pioneers like Sam Altman and Mustafa Suleyman have no doubt in their dogma - in the inevitability of superintelligence. The Myth of Artificial Intelligence has taught me how to listen to what they are not saying, rather than get drawn into the hype of what they are.

Who should read this book? Larson says it best in his introduction:

“Certainly, anyone should who is excited about AI but wonders why it is always ten or twenty years away. There is a scientific reason for this, which I explain. You should also read this book if you think AI's advance toward superintelligence is inevitable and worry about what to do when it arrives. While I cannot prove that AI overlords will not one day appear, I can give you reason to seriously discount the prospects of that scenario. Most generally, you should read this book if you are simply curious yet confused about the widespread hype surrounding AI in our society. I will explain the origins of the myth of AI, what we know and don't know about the prospects of actually achieving human-level AI, and why we need to better appreciate the only true intelligence we know—our own.”
John
792 reviews · 30 followers
November 16, 2023
Lindsey, who is an environmental and experience designer, visited me and left behind some books on AI that she had read as research for an upcoming project. I have fairly strong opinions on “AI” and wasn't really interested, but I couldn't just leave those books sitting there unread…

The back of this book has a quote from Peter Thiel, and its opening paragraph calls Elon Musk a “thought leader,” so it took a lot of fortitude to push on. This book is not written by someone with a natural gift for language: the style is long-winded and pedantic, when the thesis questions are fairly simple. The points raised are salient, however, and deserve to be heard. The moving goalposts of how “AI” is defined, along with the sci-fi fervor that permeates most discussions, obscure meaningful discourse and complicate potential legislation. The author does a decent job of demystifying the past, present, and potential future of artificial intelligence.

With the author being a self-described tech entrepreneur, I was expecting a tech-bro capitalist shoe to drop at some point, but it never did. He even spent some time in his conclusions criticizing the narrow focus on profitable tech as detrimental to advancement. A couple of his specific examples of the limitations of large language models have been proven partially false by the advent of ChatGPT and even DeepL, but his larger points still stand. Ultimately, I feel my existing knowledge was reinforced, and I'm not sure I took much away from this book.
Baldy Reads
118 reviews · 11 followers
August 16, 2023
Before reading this, my opinion on general artificial intelligence was already in agreement with Larson. However, this book solidified my thoughts and brought to light many new ideas.

This was a great combination of science and philosophy (with a touch of linguistics, which I appreciated). There was no jargon-heavy or convoluted exposition.

Whether intentional or not, I also think Larson reinforced the importance and celebration of human life—which is a breath of fresh air from not only all the horror books I read, but also the non-fiction books that typically point out humans’ flaws.

While this book is already two or three years old, it now feels relevant to the widely discussed topic of ChatGPT and other AI-powered models (especially if you work in education like I do). If you're interested in learning more about AI in general, or you think "the machines will take over" someday, I highly recommend this book.
1,184 reviews · 14 followers
September 19, 2021
AI was always a field of interest for me. Through people who had been in this field longer than me, I was aware of the ups and downs of interest in AI since WW2, from simpler applications like self-guiding systems to expert systems and ways of training neural networks for data classification. To me this was always rather mechanistic, without the mystery and magnificence of SF AIs. As time went by I started to think that AI will in the end be a crossover of human tissue and technology, not unlike in Warhammer 40K.

So when all the hype started a couple of years ago, I was taken aback. For every question I asked I got no answer, be it in conversation or in magazine articles. Everything came down to: it works on its own; you just need to add enough data. OK, that sounds like an expert system, but what about AI - how does it reason? Same answer. And it was given to me as if I were the most stupid man ever for not getting it. And I started feeling like that after every discussion with my colleagues: AI is here, and it will take over a whole bunch of tasks, because you can program it to do almost anything. OK, all good, great, but bloody how? Just add more data.

And this put me in a place where I could not make sense of anything. I was sure I was missing something and tried to find additional literature - unfortunately, it was such a mish-mash of wishful thinking that it only left me with more questions.

And then I found this book.

In a very concise way, the author gives an overview of current AI research and its rather sorry state.

The author is very much to the point, and he writes as if he had been asked about AI so many times [by people like me :)] that he decided to write a book to serve as a reference for everyone interested in the field.

The book sits somewhere between a popular and a mid-level science book. The few chapters that deal with logic and the rules of logical reasoning might be uninteresting to people who are not in computer science or applied mathematics, but the rest of the book is accessible to everyone.

And what a damning picture this book paints. Presented with the possibility that there are no more major breakthroughs ahead (rather theoretical Nobel prizes notwithstanding, I think there is still a lot more practical research to be done), scientists took an unscientific approach: instead of doing research, they changed course and joined forces with the corporations. The corporations, being what they are, decided to push their products as they were, because further research would cost money, and the marketing hype came in force. The result? Terrible. It only confirmed that to a person with a hammer, every problem looks like a nail.

Due to the hype coming from all the known authorities in the field (corporations and the individual scientists associated with them), states started funding research that used "AI" (in truth, only classification engines) and neglecting other research that went a more traditional way. As a result, AI research took a major hit - nobody wanted to "waste time" when AI would definitely bring results (and if anyone asked how, the answer was invariably "add more data").

As the author clearly states, all AI research to date rests on constant, futile attempts to simplify the human brain down to a chain of data processors. Futile, because this same attempt has failed over and over again (including now, even with enormous computing power and data collections). The reason? Very simple: how can one build intelligence when we do not know anything about our own intelligence? (When I read this I was stupefied: even after all these years, we still do not have a definitive answer on how our mind works. Mind-blowing.)

It is as if the entire scientific community decided to decipher how an automobile works by looking only at its exterior, unaware of the main part - the engine - and expected good results. Blimey.

As a result, AI research as a whole wasted a good part of the last decade. Significant improvements were made to classification systems and expert systems (as the author says, very, very narrow AIs), but everything else was stopped in its tracks.

Unfortunately, the hype caused quite a social upheaval. I agree with the author: it looks like an anti-human revolution took place. Humans were discarded like used tools (it is incredible how this morbid cult of human irrelevance and expendability has taken root worldwide in the last decade), all in expectation of the rise of our machine masters. Benevolent or not, it seems not to matter to any of the technocratic leaders - they are so eager to give birth to something, without even knowing what exactly.

In a short time, the use of classification engines helped create a divide between people by pushing the news people like to read. It is not the engines' error, mind you - they do what they are programmed to do - but this backfired because society totally surrendered itself to these computer idiot-savants for everyday news and information, from food to politics.

This is a very timely book, and I hope the author's message - to bring sense back to AI research - is accepted by the community. AI in any form can do wonders for humans, but it must not be a goal in itself. It is a device that can bring enlightenment and propel humanity forward, but only without trickery and by following the age-old scientific method (forming theories and then proving or disproving them) that has proved itself many times over.

Will it take time? Definitely. But it will help us perform detailed and valuable research and, perhaps more importantly, we will become mature enough to cope with the end results.

Excellent book, highly recommended.
J.J.
2,059 reviews · 16 followers
February 11, 2023
Everyone who is freaking out about ChatGPT just needs to calm down, at least according to this book. It really delves into the history and development of AI, as well as the human brain. Sometimes a bit dry, though, which is why a 4 instead of a 5 rating.
David Zimmerman
146 reviews · 12 followers
January 27, 2022
The myth of artificial intelligence is that it doesn't exist. Intelligence is the exclusive domain of sentient life. There is an uncrossable barrier between the virtually inexhaustible knowledge base available to machines (and their ability to process that knowledge) and the ability of the human brain to tap into its limited database, to reason, hypothesize, explore, and experiment with ideas, paving the way to new discoveries and technologies. The human brain can THINK; a machine can only PROCESS data.

That is the proposition set forth in the first quarter of this book, and I found it fascinating. The author has not only done his research but has a background in the field of AI. Logically and progressively, he sets forth his case that the concept of building a machine with an "intellect" that will surpass that of the people who designed it is a myth.

Having stated his case, the author then sets out to prove each part of his proposition. This portion of the book often felt tedious and redundant, as the author answered objections he knew would be raised. Because the concept of "thinking" machines has been so thoroughly popularized in our culture, the author writes to ensure we, the readers, realize that not a single technological or scientific breakthrough in the past fifty years of research has moved us any closer to the goal of a thinking machine. His evidence is overwhelming, but seldom captivating.

The author concludes the book with reasons why he believes the continued emphasis on artificial intelligence is actually inhibiting advancements in many other areas of scientific and technological research.

The book will not appeal to everyone, but it does have something of importance to say. From a Christian perspective, I was surprised at how many elements of this myth of thinking machines I had allowed to passively invade my thinking. Humans are getting more skilled at creating machines that bear certain likenesses to our humanity, much the same way as God created humans in His image. But though we bear God's image, we are not gods, nor ever will be. Nor will any machine ever possess the qualities that make us human, and one of them is authentic intelligence.
Beybolat
159 reviews · 7 followers
March 13, 2024
“Intelligence was a machine. Hilbert challenged his fellow mathematicians at the Second International Congress of Mathematicians, held in Paris in 1900. The world of thought was all ears. His challenge had three main parts: it had to be demonstrated that mathematics is (I) complete, (II) consistent, and (III) decidable” (p. 28)

“Turing demonstrated that mathematics could not be made decidable, by inventing an entirely deterministic machine that makes no appeal whatsoever to insight or intelligence in solving problems” (p. 30)

“...while seeking to refute the claim that mathematics is decidable, Turing had invented something entirely definite and mechanical: the computer” (p. 30)

“By allowing intuition to remain separate from, and outside of, the operations of a purely formal system like the computer, he was in effect implying that there could be irreducible differences between a mathematician and computer programs capable of performing mathematical operations” (p. 31)

“One reason chess fascinated Turing and his colleagues was that computers could be programmed to play it without the programmers needing to know in advance every move the machine might make. Because computers could use logical connectives such as IF, AND, and OR, they could run programs (sets of instructions) and produce different outputs depending on the scenarios they encountered while executing those instructions” (p. 35)

“Although Gödel's theorem had shown definitively in 1931 that there can be no complete system in which everything is provable, it gave no clear answer to the question of whether some machines might also possess the intuition the mind resorts to when choosing the rules it will follow” (p. 38)

“Turing's greatest stroke of genius, and his greatest error, was to think that human intelligence could be reduced to problem solving” (p. 39)

“Yet when we examine the success of the code-breaking efforts at Bletchley, it immediately becomes visible what dangerous reductions philosophical thinking about humans and machines resorts to. Bletchley itself was an intelligent system: military mobilization (including surveillance, espionage, and the capture of enemy ships), the social intelligence between members of the armed forces and the various scientists and engineers at Bletchley, and, as always in life, sometimes sheer luck all worked hand in hand, in coordination. In fact, breaking the ciphers of the Germans' modified Enigma machine by purely mechanical methods was practically impossible. They already knew, from the mathematical arguments on the subject, what difficulties purely mechanical code-breaking suffered from. Bletchley's success was partly a result of the Nazi commanders' overconfidence that the Enigma machine's ciphers could not be broken, which is a great irony” (p. 40)

“The indispensable element of this system was not anything mechanical; it was intelligent initial observations.” (p. 42)

“The view that reduces intelligence to problem-solving ability also explains why AI, throughout its history, has never moved beyond narrow applications” (p. 45)

“To summarize our argument: the view that sees intelligence as problem-solving ability is by its nature forced to produce narrow AI, and is therefore unsuited to designing artificial general intelligence” (p. 48)

“Let us reconsider, in this light, the distinction Turing initially drew between intuition and ingenuity. For him, the question of artificial intelligence was whether the intuition supplied to a formal system from outside, by its designers, could somehow be pulled “inside” that system (the ingenuity machine), that is, made an intrinsic part of it. The system could then use its intuition to choose for itself which problems to solve, escape the trap of narrowness, and grow ever smarter and keep learning. To this day, no one has managed to program this into a computer. In fact, no one has the slightest idea whether it would even work” (p. 50)

“Good’s idea was actually simple: if a machine could attain human-level intelligence, then in time it must inevitably surpass human intelligence.” (p. 51)

“Some fool will inevitably press that launch button just to see what happens” (p. 52)

“An organization that synthesizes something must be more complex, of a higher order, than what it synthesizes” (p. 55)

“The emergence of the World Wide Web triggered the resurgence of artificial intelligence for one simple reason: data.” (p. 79)

“Lev Tolstoy’s warning fits perfectly here: ‘The course of a war does not fit into war plans’” (p. 98)

“It reminds us of the twentieth-century philosopher of science Karl Popper’s argument for the unpredictability of inventions:
Suppose we are in the Old Stone Age and, at some point, you and I are discussing the future. I predict that the wheel will be invented within the next ten years. You ask me, ‘The wheel? What is that?’ So I take the trouble to pick the most suitable words from our crude vocabulary and, with some difficulty, describe to you for the first time what a rim, a spoke, a hub, perhaps even an axle is. Then I suddenly freeze: ‘No one can invent the wheel in the future anymore, because I have just invented it! In other words, the invention of the wheel cannot be predicted. An indispensable part of an invention is being able to say what it is. To be able to say what the wheel is, is to invent it. It is easy to see how the example generalizes. All inventions and discoveries whose detailed description involves a radically new concept are inherently unpredictable, because an indispensable part of the prediction is describing the future invention. The very idea of predicting radical conceptual innovation is itself conceptually incoherent” (p. 100)

“The mental faculty we call analysis is itself not very amenable to analysis” (p. 126)

“For Peirce, thinking was not calculating; it was making a leap, making a guess” (p. 126)

“We can give other, similar examples from the real world. The autonomous navigation systems of driverless cars can identify school buses as snowplows, or misclassify a turning truck as an overpass” (p. 168)

“Imagine millions of angry citizens tweeting, one after another, ‘Trump is an idiot!’ Then imagine Trump humiliating a rival in a debate, and his supporters this time posting the same tweet sarcastically, as a jab at those who dislike him. In that case, these messages will be recorded as further instances of the ‘Trump-idiot’ pattern. Since the learning algorithm starts with no prior knowledge, the message is nothing more to it than a sequence of words. Yet sarcasm is not a word-based feature, and unlike literal meaning, it is not common either. Machine learning is notorious for behaving extremely dim-wittedly when it encounters such linguistic phenomena. This is a source of grief for companies like Google. Google would surely be delighted if it could detect sarcasm when targeting ads. Suppose there is a blizzard outside and someone jokes in a sarcastic post, ‘Quick, bring me my sunscreen.’ A context-sensitive ad-placement system should then serve them an ad for ‘battery-heated socks’ rather than ‘sunscreen’” (p. 201)
Zach
122 reviews
April 13, 2022
Larson makes a really interesting argument concerning how far away we are from generalized artificial intelligence.

Basically his idea is that there are three ways of coming to knowledge. Deductive reasoning finds answers from premises with predetermined outcomes - this is essentially what we do when we hard-code computers to make choices given inputs. Inductive reasoning takes in lots of data and makes generalizations - this is basically modern machine learning. Then there is abductive reasoning, which is basically educated guessing: it lets us shortcut to the best solutions, or eliminate many possible solutions, based on common knowledge.

Larson points out that we have the first two types of reasoning down pat, but no one is working on the third, and until someone does we will have automatons or idiot savants, not general intelligence.
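The reviewer's three categories can be sketched as a toy program. This is not from the book; every rule, observation, and probability below is invented purely for illustration:

```python
# Deduction: a hard-coded rule maps a premise to a guaranteed conclusion.
def deduce(raining: bool) -> bool:
    # Rule: "if it is raining, the ground is wet."
    return raining

# Induction: generalize from observed (raining, wet) pairs,
# roughly what a machine-learning model does with training data.
def induce(observations: list[tuple[bool, bool]]) -> float:
    wet_when_raining = [wet for raining, wet in observations if raining]
    # Estimated P(wet | raining); 0.0 if rain was never observed.
    if not wet_when_raining:
        return 0.0
    return sum(wet_when_raining) / len(wet_when_raining)

# Abduction: guess the most plausible *explanation* of an observation,
# using background knowledge (here, hand-coded plausibilities).
def abduce(ground_is_wet: bool) -> str:
    priors = {"rain": 0.7, "sprinkler": 0.25, "spilled bucket": 0.05}
    if not ground_is_wet:
        return "nothing to explain"
    return max(priors, key=priors.get)

print(deduce(True))                                         # True
print(induce([(True, True), (True, True), (True, False)]))  # ~0.667
print(abduce(True))                                         # rain
```

The gap Larson highlights shows up in the third function: the deductive and inductive parts are mechanical, but the abductive step only works because the background knowledge was hand-supplied; no one knows how to have a machine generate it.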

This point is well and good as far as it goes, and the book is worth reading to see how he expands on the ideas expressed above. However, I deducted a star for the ending section of the book, where he expands this out into a general critique of culture. Basically, without the intellectual grounding he demonstrates in the rest of the book, he argues that only the West could have generated the insights behind artificial intelligence (and did), and that PC culture is keeping down our ability to identify new ideas. I can't excuse the classic white man bullshit. That said, if you don't read the afterword, you don't have to hear it...
Roger Grobler
28 reviews12 followers
April 30, 2023
"The Myth of Artificial Intelligence by Erik Larson is a well researched book that does a reasonable job of stating the case against the hype that surrounds AI. Larson presents a balanced view of the capabilities and limitations of AI, and provides a sobering reminder that it is still a long way from achieving true intelligence.

However, with deference to the author, I do feel that the book will not age well given the success of Chat GPT so soon after its publishing. While Larson makes a strong case against the hype surrounding AI, the rapid progress being made in the field means that some of his arguments may soon be outdated.

Overall, I would recommend "The Myth of Artificial Intelligence" to anyone interested in a critical examination of the current state of AI and its potential impact on society. While it may not be the definitive word on the subject, it is a thought-provoking read that challenges some of the more grandiose claims made about AI.”

- Review written by Chat GPT… 😆. And disproves many of Larson’s arguments about AI not being able to handle inference.

[Update 30 April 2023] - This book is now entirely irrelevant. May well just have broken the world record for shortest time to become irrelevant.
Z R
9 reviews2 followers
October 29, 2023
Published mid 2021, this book, within a year, did not age well.

The irony is the author trying to demonstrate how narrow the capabilities of AI are while using an increasingly narrow definition of what we'd count as capable. Whereas once upon a time conversational AI would have been considered a breakthrough, the author now states that AI is incapable since it cannot always accurately resolve grammatically ambiguous sentences. "I tried to place the trophy in the package but it was too small": is "it" referencing the trophy or the package, asks the author. "It" represents a poorly structured sentence.

The book grasps at disparate potential proofs that non-biological units cannot think in the precise manner we do and therefore fail the intelligence test. It completely ignores the greater conversation about the increasing role (not just hype) AI is playing in our daily lives and how it is influencing how we interact with "it" and with each other - whether it plays by our rules or we play by its (oh, and help me determine which "it" I was referring to again; my own intelligence seems to be getting narrower).
17 reviews
Read
April 19, 2022
I might not be the intended audience, given that I've had a proper, rigorous treatment of many of the topics covered in this book. However, from what I've read, this is far better and more in line with my opinion than the "AI evil" book by Dr. Scaremonger James Barrat. A distinction must be made between "acting human" and "acting rationally". Most algorithms target the second objective, not the first.

Nevertheless, I'll keep this book as a future reference. I find it hard to articulate my thoughts when it comes to this topic, but this book's shown me one way of articulating it.

Tõnu Vahtra
564 reviews87 followers
November 12, 2023
"There is no such thing as observation without theory". Felt a bit repetitive at times, but the book does a good job at explaining the limitations of current mainstream approaches "towards AI" and why just more data and computing power is not sufficient but rather a limitation by itself (getting stuck and fixated on existing capabilities). I believe that we could also draw some parallels with Conoway's law here, one cannot come up with systems that are far more complex than oneself, in a similar manner how can we expect "AI" to solve problems that we ourselves have no clue how to handle. Also the availability of more data does not mean that it could somehow substitute for scientific discovery (Higgs boson discovery was made based on the underlying theory and hypothesis).

Induction VS abduction (method of guessing and checking), we have no good theory for abduction.

“Understanding natural language is a paradigm case of undercoded abductive inference.”

“First, intelligence is situational—there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. Second, it is contextual—far from existing in a vacuum, any individual intelligence will always be both defined and limited by its environment. (And currently, the environment, not the brain, is acting as the bottleneck to intelligence.) Third, human intelligence is largely externalized, contained not in your brain but in your civilization. Think of individuals as tools, whose brains are modules in a cognitive system much larger than themselves—a system that is self-improving and has been for a long time.”

“In the early part of the twentieth century, the philosopher of language Paul Grice offered four maxims for successful conversation:
The maxim of quantity. Try to be as informative as you possibly can, and give as much information as is needed, but no more.
The maxim of quality. Try to be truthful, and don’t give information that is false or that is not supported by evidence.
The maxim of relation. Try to be relevant, and say things that are pertinent to the discussion.
The maxim of manner. Try to be as clear, as brief, and as orderly as you can, and avoid obscurity and ambiguity.”

“General (non-narrow) intelligence of the sort we all display daily is not an algorithm running in our heads, but calls on the entire cultural, historical, and social context within which we think and act in the world.”

“Kurzweilians and Russellians alike promulgate a technocentric view of the world that both simplifies views of people—in particular, with deflationary views of intelligence as computation—and expands views of technology, by promoting futurism about AI as science and not myth.
Focusing on bat suits instead of Bruce Wayne has gotten us into a lot of trouble. We see unlimited possibilities for machines, but a restricted horizon for ourselves. In fact, the future intelligence of machines is a scientific question, not a mythological one. If AI keeps following the same pattern of overperforming in the fake world of games or ad placement, we might end up, at the limit, with fantastically intrusive and dangerous idiot savants.”
Ramil Kazımov
338 reviews10 followers
May 21, 2023
A demanding book, but one that needs to be read and digested.

Technology and artificial intelligence have seriously caught my attention lately. Until now I had read about what AI might become and, if it were realized, what its difficulties might be; but then I asked myself whether AI could really be possible, and decided to read Erik J. Larson's "The Myth of Artificial Intelligence." After finishing it I could not help agreeing with the author: AI is in fact a myth, and its evolution, in the form we understand it, into the form of human intelligence is quite unlikely. The author gives a great many examples concerning language, as well as examples from real events, and explains why AI cannot live up to the claims made for it. However much more advanced (that is, faster) machines may appear today compared with the past, there are plenty of reasons to say they have not reached the level we expected. Moreover, even today's AI crazes such as ChatGPT cannot hide the fact of how far we are from an artificial intelligence equal to, or even an imitation of, human intelligence. For although machines have displayed superhuman ability in simple mathematical calculations, they flounder at the simplest examples of logic, and at examples of reasoning that would not confuse even a small boy. According to the author, the idea that this reality will change in the near future is not plausible, because explaining even a simple spatial situation to a machine involves difficulties that cannot be overcome. I had read the examples the author gives once before, in Martin Ford's book "Rule of the Robots." These problems are such that unless machines can think like humans, they cannot overcome them. And it is very clear that machines cannot think like humans, because human thought is formed from experience. But that experience is not as simple a word as some suppose. Leaving aside the shallowness of the view that reduces a human being to a machine, human experience is formed by kneading emotion together with logic, and sometimes it need not even contain logic.
Even if we take intelligence, as Max Tegmark puts it in his book "Life 3.0," which I read earlier, to be "the ability to carry out complex tasks," I still think reaching AI is not possible, because "complex tasks," mathematics excepted, also include solving problems in which human intelligence is considerably involved. In short, AI is harder than we think, and it is an improbable problem, such that even entertaining the chance of its realization could be seen as a myth.
Mitch
120 reviews8 followers
September 8, 2022
++Fantastic overview of AI from Turing to Nick Bostrom
++The problem with Big Data investment interfering with the search for actual general intelligence
++Big Data as a dead end of research producing avalanches of false positives and no genuinely new ideas
++The need for the individual in scientific breakthroughs is being minimised due to the mythologizing of AI by people like Henry Markram. Specifically, huge narrow inference machines are being treated as if they’re already intelligent. The Large Hadron Collider’s success in finding the Higgs Boson came down to the original theory by one man, not the petabytes of data that came afterwards.
+++Chapters on deduction, induction and abductive inference
+Horgan’s “End of Science”
+We quite literally are still at square one with finding out how to create general intelligence
+Gödel’s incompleteness theorem and how it relates to AI

Wonderfully written book; runs contrary to a lot of my previously held beliefs and goes through not just the limitations of current AI but also the societal impact that widespread dumb robots seem to be having on us.

Thought: A machine is powerful in environments that are predictable closed systems. It seems that as big data companies (Google, Amazon) continue to dominate everything, societies will be incentivised to behave in alignment to algorithmic structures. This will create a feedback loop of rigid thinking/behaviour that will make innovation on a fundamental level increasingly rare. Or maybe not. I don’t know.

I still believe Nick Bostrom’s super-intelligence is a real risk. The whole infinite paperclip maximizer thingy (look it up) doesn’t need to have general intelligence to create a real pickle for mankind.
November 29, 2023
This is a compelling and very persuasive response to the dominant thread of AI thinking.

Larson shows the key weaknesses in the thinking of those like Bostrom, who assume that computers will achieve human capacity but don't do the work to show how (or at least do not do so for the public). Larson sometimes takes an uncharitable view of Bostrom's positions in particular, so I don't want to overvalue this point.

However, it’s true that the idea that a computer will qualitatively improve itself, or create qualitatively better computers, needs to be argued not assumed.

In addition, Larson provides a helpful framework for understanding logical inferences, and shows very clearly the gap between computers conducting deductive and inductive reasoning and humans performing abductive reasoning as well. He could use a bit more theoretical support here, beyond relying so much on C. S. Peirce. Still, he's persuasive.

I saw another review state that Larson is a poor writer. This is not true. However, this book is written more like a Gladwell think piece than an academic work. Larson would sell fewer books, but would make a stronger point, if he wrote more formally, like Bostrom.

I will say that Larson makes too much of his Big Data complaints, which are not as connected to AI as he believes (or at least he doesn't show the connection). So Part III is a weak point of the book as a whole.

Certainly, I learned a lot from the book and it felt refreshing to read more technical work on AI since it so often feels like alchemy.

Bram VanderMark
36 reviews2 followers
Read
February 24, 2023
The speculation and buzz of the incredibly broad term “AI” has caused me lots of unrest and anxiety in recent years, yet I have been entirely uninformed of any particulars regarding the actual science and practicalities of the field. I have also been uninformed of the various outlooks and expectations that exist within the field. My knowledge was, indeed, almost entirely filled with “myth.”

Reading this book was an effort to plunge a bit deeper into the particulars. Of course, the title was a comforting place for me, someone unsettled by their perception of AI, to begin.

While I often wondered if the most recent “advancements” in language processing (re: 2023 Chat-GPT) would give Larson an updated perspective on many of his points, I found his knowledge and myth-dispelling perspectives to be incredibly eye opening and compelling. This book reinvigorated and expanded my awe of the mysteries of the human mind, filled me with vigor to continue cultivating that incredible endowment we each have, and motivated me to cultivate a human-flourishing culture.
Serdar
Author 13 books28 followers
September 2, 2022
About twenty years back I got my hands on a copy of a program for PCs that translated Japanese to English and vice versa. It took only a few minutes of work with it to have my excitement dashed. The program could barely translate declarative sentences into something coherent, in either direction. Anything beyond that produced word salad.

Today, I can open a web browser and use a service like Google Translate -- or better yet, DeepL -- to produce remarkably good translations in either direction for a whole swath of languages, Japanese included. But all it takes is one tricky phrase, one unorthodox usage, and out comes word salad all over again. There's still no graceful failure mode for this stuff.

For a long time I could give examples of how brittle failure modes were the most visible symptoms of why the dream of generalized artificial intelligence was just that, a dream. This book explains why that is the case as one of its many missions. It traces the history of the field and its roots in many mistaken assumptions about intelligence; it describes the "inference trap", or the way ML/AI cannot be used to perform the abductive inferences used in educated human guesswork; and it examines where we stand now as a result of all these mistakes.

What I've long suspected, and what this book confirms, is that the chief obstacles are not technical. It's not that our data sets aren't big enough, or that we lack good algorithms -- it's that the things we want out of ML and AGI are not amenable to algorithms as we know them. But what's most problematic, the author argues, is how faith in the idea of making machines smarter is coming at the expense of faith in making humans smarter.
Remy
200 reviews16 followers
February 17, 2022
A thorough and mostly easy to understand book that requires little to no understanding of computers, statistics, or neuroscience to grasp. Prior to reading this book, I was skeptical of AI and its capabilities (eg "garbage in, garbage out" maxim)--now I see just how limited and farcical it really is.
I share Larson's worries of the cultural death of science, though he suffers from a lack of greater cultural, political, and economic analysis--which is made fairly evident in the extremely brief (and annoying) foray he makes into irrelevant political commentary in describing the origin of the term "kitsch." Whatever, dude who admits later to working on a contract for the US Department of Defense.
Overall, worth a read.
Rafael Ramirez
123 reviews15 followers
September 10, 2022
A fascinating book that helps us understand the current scope of Artificial Intelligence, so as not to fall into the very common error of confusing reality with science fiction, or of believing in what the author calls the "myth" of Artificial Intelligence: that the "intelligence" of machines is similar to, and in many respects even superior to, human intelligence.

In recent years the use of AI algorithms has grown enormously, and their application has the potential to revolutionize every aspect of our lives, giving the appearance of intelligence by being capable, for example, of translating languages, playing chess, describing an image, driving a car, or even creating works of art. These advances have led many people to think that the day when machines become conscious and develop an intelligence superior to that of human beings is inevitable and near. If a machine can do all these things we thought only a person could do, what won't machines be able to do in the future? It is the dream of creating what is called Artificial General Intelligence, as opposed to the Narrow or task-specific Artificial Intelligence that exists to date.

However, as the author clearly explains in the book, these advances in the design and implementation of algorithms that perform specific tasks ever more efficiently are, at bottom, nothing more than the identification of patterns in a data set. They have nothing to do with human intelligence or consciousness of one's own existence, nor even with the capacity to truly comprehend reality (however imperfectly) and act accordingly. That is to say, by analogy we have given the name "intelligence" to what is only a technique. The risk is that this analogy leads us to think that, in the end, the human being (and his brain) is nothing more than a machine, a mere link in the evolution toward other, more sophisticated and powerful machines.

14 reviews
January 29, 2023
I came to this book quite skeptical. As a lover of doomsday sci-fi, and the many theories of simulated reality, I almost hope for the ‘inevitable’ future of artificial general intelligence. However, Larson’s succinct method of pointing out the open circuit between induction and abduction has made me a believer: (1) AGI isn’t possible without a fundamental breakthrough and (2) the potential for breakthroughs is becoming increasingly unlikely as we allow very narrow (and dumb) AI to govern our lives and stall our ideation.

Now my main concern is a future similar to WALL•E. Not the evil AI part. The fat, comfortable humans who gave control of their lives over to a machine, part.
July 14, 2023
This book is not particularly "anti-AI". It's anti-the myth of AI: That machines will one day have a life of theirs and replace humans in every endeavour. I've always suspected that is fundamentally impossible but this book did a good job at informing my suspicion and giving it more texture.

I appreciate that it gave an Introduction to Reasoning and Logic in an accessible way for some of us that didn't study Philosophy and Logic formally.

Also, I learnt that AI, as is applied in mainstream today, is a misnomer. What we have right now is Machine Learning applied in different domains.

Great book. Very insightful. Highly recommended.
Brenton
Author 1 book70 followers
September 18, 2023
I read this book because Larson was making the argument I had come to through a steeplechase of thinking over the decades, so I'm not an impartial reader. While Larson could make the idea of "Myth" really work at much deeper levels, and you will need a pencil and pad out during the analytical logic moments, it is engaging, thoughtful, and illustrative.
Piritta
490 reviews19 followers
March 15, 2022
This was a slow one, but I'm glad that I read it, because now I'm much more optimistic about the role of humans in the future.
Displaying 1 - 30 of 97 reviews
