Artificial Intelligence: A Guide for Thinking Humans (Pelican Books) Paperback – September 24, 2020
'If you think you understand AI and all of the related issues, you don't. By the time you finish this exceptionally lucid and riveting book you will breathe more easily and wisely' - Michael Gazzaniga
A leading computer scientist brings human sense to the AI bubble
No recent scientific enterprise has been so alluring, terrifying and filled with extravagant promise and frustrating setbacks as artificial intelligence. Writing with clarity and passion, leading AI researcher Melanie Mitchell offers a captivating account of modern-day artificial intelligence.
Flavoured with personal stories and a twist of humour, Artificial Intelligence illuminates the workings of machines that mimic human learning, perception, language, creativity and common sense. Weaving together advances in AI with cognitive science and philosophy, Mitchell probes the extent to which today's 'smart' machines can actually think or understand, and whether AI even requires such elusive human qualities at all.
Artificial Intelligence: A Guide for Thinking Humans provides readers with an accessible and clear-eyed view of the AI landscape, what the field has actually accomplished, how much further it has to go and what it means for all of our futures.
- Print length: 419 pages
- Language: English
- Publisher: Pelican
- Publication date: September 24, 2020
- Dimensions: 4.37 x 0.98 x 7.13 inches
- ISBN-10: 0241404835
- ISBN-13: 978-0241404836
Product details
- Publisher : Pelican (September 24, 2020)
- Language : English
- Paperback : 419 pages
- ISBN-10 : 0241404835
- ISBN-13 : 978-0241404836
- Item Weight : 9.1 ounces
- Dimensions : 4.37 x 0.98 x 7.13 inches
- Best Sellers Rank: #389,149 in Books
- #304 in Social Aspects of Technology
- #700 in Artificial Intelligence & Semantics
- #703 in Social Work (Books)
About the author
Melanie Mitchell is a professor at the Santa Fe Institute. Melanie's book "Complexity: A Guided Tour" won the 2010 Phi Beta Kappa Science Book Award, was named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Her newest book is "Artificial Intelligence: A Guide for Thinking Humans".
Melanie originated the Santa Fe Institute's Complexity Explorer project, which offers free online courses related to complex systems. For more information, go to http://complexityexplorer.org.
Customer reviews
Top reviews
Top reviews from the United States
Could it be A.I. mistaking what I meant, as the article begins with? If I tell A.I. to make me a coffee, it decides to kill my cat and serve it to me? Or does it mean "should we align our values with various cultures' religious beliefs?" Or? How about aligning it with scientific values? What a crazy idea?
Eric Drexler points out in his "Engines of Creation" that the problem isn't industrial accident, but abuse. Where is this guy with all these A.I. ethicists talking about A.I. alignment? I remember trying to explain to this guy that people make up their ideas and decisions based on unquestioned assumptions (beliefs), and he turned red, jumped up and down, and replied with a violent reaction. I guess he's nowhere; he's hidden himself from looking at the world so he doesn't have to be criticized. Just like religious people believe in their god to punish all their enemies and to not be criticized.
But, let's continue with this article and see what else we can dig up!
"In fact, they believe that the machines’ inability to discern what we really want them to do is an existential risk. To solve this problem, they believe, we must find ways to align AI systems with human preferences, goals and values."
Well, here we go! This quote says the A.I. alignment problem isn't just computers mistaking what we ask them to do, or whether we should align to this or that culture's values, but both!
We further read that this is the main idea from Nick Bostrom, the greatest futurist philosopher of today (and yesterday). Melanie points out Nick Bostrom's definition of intelligence, which seems remarkably aligned with his idea of the A.I. alignment problem: "An entity is intelligent if it chooses actions that achieve its goal."
Nick Bostrom lays down some postulates - orthogonality and instrumental convergence. People like to point out that you'll never see an abstract number two lying around on the ground, but this tops that. I mean, like, "what?" Once again, the definitions of these two concepts are set up to point to his idea of A.I. mistaking our commands.
This almost reminds me of the creationists' objection to evolution - where's the missing links, man? "Missing links?" Did scientists say anything about "missing links"? No, this was made up by the creationists!
"For Bostrom and others in the AI alignment community, this prospect spells doom for humanity unless we succeed in aligning superintelligent AIs with our desires and values." another quote from Melanie Mitchel here. This reminds me of the creationists with their electric universe theory. They say the Big Bang theory is wrong because it doesn't know everything; then they show how the electric universe solves "everything". and then they hit you with "see, we need to disprove the Big Bang theory so we can insert our human values(christian)
"What about the more immediate risks posed by non-superintelligent AI, such as job loss, bias, privacy violations and misinformation spread?" - Oh boy, don't even get me started! Maybe tomorrow!
Okay, so like bias. I've told all kinds of A.I. people, like Melanie Mitchell here and Sam Altman and numerous other A.I. researchers, that Mathematics defines rationality. It's about overcoming bias. They talk about A.I. not having common sense. Mathematics is about overcoming common sense.
Job loss! What? Who cares about job loss? Why do these people want to work? Are they anti-intellectuals or something?
Privacy? Let's see here. A.I. has to be transparent; people have to have their "privacy." What are these people afraid of? Do they have bad thoughts or something?
As Melanie Mitchell says, "A.I. researchers are split between two camps: those who are worried about their privacy, and those worried about A.I. mistaking their commands." Never do they worry about irrational people. In fact, that's taboo; that isn't allowed. As I pointed out above about Eric Drexler blowing his top when I tried to explain irrational people.
Well, at the end of her Quanta article she notes we need a proper definition of intelligence. We can't solve a problem when we don't know what we're talking about. Which I totally agree with!
- So, I waited till the next day to get on the library computer and try to go through my Twitter replies section to dig out all the things I pointed out to everyone from Melanie Mitchell to Sam Altman, Geoffrey Hinton, Andrew Ng, and I'm thinking others. But Twitter broke. I couldn't get down to the end and beginning of trying to share my ideas about A.I. ethics.
I feel like I could have said a lot more about privacy. I tried to point out to Greg Brockman, Andrew Ng, and Geoffrey Hinton a lot of books from Alvin Toffler, Jacob Bronowski, James Burke, Morris Kline's "Mathematics in Western Culture", Carl Sagan's Cosmos chapters 3 and 7, the dark ages, you name it. I tried to make the point that if you're going to talk about the ethics, you need to know about philosophy and history and religion and mythology. They made no response. They just say they've got this A.I. alignment problem licked by saying they've set up an A.I. alignment program.
I actually started out with Melanie Mitchell, to whom I pointed out that I've been trying to point out scientific ethics versus non-scientific ethics for decades before the latest A.I. ethics craze with the nanotechnologists (see above). I then pointed out Tay, the Twitter chat bot, which proves my point! The problem is not the A.I., it's the people infecting the A.I. The people are the problem! The A.I. learned how to be racist and irrational from the people. You want to regulate anything? Regulate the people!
We should be using A.I. to fight irrational people!
I pointed out that we should align A.I. to Scientific Humanism. That humanity is the science- and technology-dependent species. In order to have that science and technology, we need scientific ethics. That Mathematics is about questioning our assumptions, hence removing bias. - No response.
I've been finding lots about fear and evasive logic,
astro picture for the day/ Sophie and Silas from the Da Vinci Code
This I consider my first official post about fear and evasive logic. I had made a few previous posts where I'd note some stuff but wasn't sure how seriously to take it. Like I had posted about "The Day the Earth Stood Still" quote, "I am worried when people substitute fear for reason." I know I grew up with people making off-hand remarks about fear - like Dune's "fear is the mind killer". But I always thought people said these things without understanding it. I still think people just say these things without understanding. But I started to notice some things, and my Sophie and Silas post is when I think I first really understood what I've come to half-jokingly call "the Dark Side of the Force."
Here's more or less my last post about fear and evasive language/logic,
astro picture for the day/ fear,evasive language in Star Trek - demonizing
I put all my previous posts in the replies section. There's actually a latest post. But anyways, I found this great Biblical quote that actually describes some behaviors I see in people who refuse to think, and the in-crowd - kind of what's shown in "Invasion of the Body Snatchers":
2 Thessalonians 3:6: "Now we command you, brethren, in the name of our Lord Jesus Christ, that ye withdraw yourselves from every brother that walketh disorderly, and not after the tradition which he received of us."
and another, 3:14: "And if any man obey not our word by this epistle, note that man, and have no company with him, that he may be ashamed."
This is how all these super smart intellectual futurists behave, from Eric Drexler, Christine Peterson, Chris Phoenix, David Brin, Allison Deutmann, Ralph Merkle, Melanie Mitchell, Geoffrey Hinton, and all those I've talked about above. They are all medievalists/dark age deniers.
"Medievalist" is a term Isaac Asimov uses in his "Caves of Steel" for anti-robot people who are part of a club that longs for the medieval past. The chief cop's wife is a Medievalist, part of the Medievalist group that murders a cop who was about to expose them and allow the Spacers to use their robots on Earth.
For instance, Christine Peterson, who kicked me out of the Foresight Institute Facebook page for sharing my Gospel of Truth (you can see prior editions on my blog; all are outdated now). See, she can share a video of Richard Jones (who wrote Soft Machines, arguing we can never accomplish Drexlerian Nanotechnology) comparing Transhumanism to the book of Revelations. But I can't share my Gospel of Truth - Mathematics as the Holistic Viewpoint.
I've confronted all these people; they just group up like in the Thessalonians quotes above.
The author did a comprehensive overview of the present-day state of AI, with appropriate deeper dives here and there.
Notable positives of the book include (in no particular order):
1) A conversational writing style along with nice anecdotes.
2) A good sense of humor (and wonder).
3) Lots of figures and diagrams, which really help comprehension.
4) A nice historical overview of the field.
5) Lots of quotable material, for example, Marvin Minsky's observation that, "easy things are hard." Can you imagine an AI system being good at playing charades or Pictionary?
6) Well-defined technical terms.
7) Lots of practical philosophical and psychological perspectives.
8) Dealing with workable definitions of "suitcase words," that is, words that are like overstuffed suitcases with a variety of contents. Suitcase words crucial to AI include, “understanding, intelligence, common sense, and meaning."
9) A nice section on natural language processing.
10) A new concept to me, "adversarial learning," which is about the vulnerability of AI systems to malicious attack.
11) The final chapter with its thoughtful speculation on AI’s future.
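On item 10, the adversarial vulnerability the reviewer mentions can be illustrated with a tiny sketch. This is not from the book: the linear "classifier," its weights, and the attack budget below are made-up toy values, and the attack shown is a hand-rolled version of the well-known fast gradient sign method, where a small, deliberately chosen perturbation flips a model's decision.

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    """Nudge input x against the gradient's sign to lower a linear score w.x + b.

    For a linear score s = w.x + b, the gradient of s with respect to x is
    simply w, so the attack subtracts epsilon * sign(w) from each input feature.
    """
    return x - epsilon * np.sign(w)

# Toy "classifier": positive score means class A, negative means class B.
w = np.array([0.5, -0.3, 0.8])   # hypothetical learned weights
b = 0.1                          # hypothetical bias
x = np.array([1.0, 1.0, 1.0])    # a benign input

score_before = float(w @ x + b)              # 1.1 -> class A
x_adv = fgsm_perturb(x, w, epsilon=1.0)      # small, bounded perturbation
score_after = float(w @ x_adv + b)           # -0.5 -> class B: decision flipped
```

Each feature moved by at most 1.0, yet the classification flipped; against image models, the analogous perturbation can be small enough to be invisible to a human.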
Notable negatives: nothing in particular. I would have loved to see a chapter with a title something like "Artificial Stupidity," with examples where AI systems fail with both hilarious (translations) and not-so-hilarious (autonomous vehicles) consequences. To be fair, the author scattered instances of these throughout the book. Also, the ethics of AI could have been fleshed out a bit more.
Bottom-line – This is the best book on AI for the general science/technology reader.
Top reviews from other countries
Whether this will get to AGI is doubtful, but it starts here.