RE: Cicero
AI researchers are maximizing for fear-induction. Same as the metaverse, etc. Tech stopped having utopian ambitions years ago and is now gunning for "unavoidable dystopia from which you cannot escape" as its marketing image.
Facebook/Meta especially so.
I think all the AI labs are blowing their loads right now because they've gotten into a "who has the scariest AI program" arms race and Facebook was feeling behind after GPT-3.
The capabilities people are basically reading the AI doom/safety people for ideas at this point. "Oh, they say our AIs aren't scary yet because they can't lie to humans. Let's fix that."

Any bets on which company will announce a literal paperclip maximizer?
So let's take a different rhetorical tack:

Your AI isn't interesting because it can't grapple with its own sinfulness, wrestle with its moral-existential vertigo, and choose to pray to God for salvation and love its neighbors. I don't want to hear about it until then.
Current AI systems aren't dangerous. It's AI researchers who are dangerous. They are literally an apocalypse cult trying to end the world. Regardless of your belief in their methods, you should notice that THEY believe it, and arrest them for conspiracy to commit murder.

