Short letter to my 11-year-old self

Dear Scott,

This is you, from 30 years in the future, Christmas Eve 2022. Your Ghost of Christmas Future.

To get this out of the way: you eventually become a professor who works on quantum computing. Quantum computing is … OK, you know the stuff in popular physics books that never makes any sense, about how a particle takes all the possible paths at once to get from point A to point B, but you never actually see it do that, because as soon as you look, it only takes one path?  Turns out, there’s something huge there, even though the popular books totally botch the explanation of it.  It involves complex numbers.  A quantum computer is a new kind of computer people are trying to build, based on the true story.

Anyway, amazing stuff, but you’ll learn about it in a few years anyway.  That’s not what I’m writing about.

I’m writing from a future that … where to start?  I could describe it in ways that sound depressing and even boring, or I could also say things you won’t believe.  Tiny devices in everyone’s pockets with the instant ability to videolink with anyone anywhere, or call up any of the world’s information, have become so familiar as to be taken for granted.  This sort of connectivity would come in especially handy if, say, a supervirus from China were to ravage the world, and people had to hide in their houses for a year, wouldn’t it?

Or what if Donald Trump — you know, the guy who puts his name in giant gold letters in Atlantic City? — became the President of the US, then tried to execute a fascist coup and to abolish the Constitution, and came within a hair of succeeding?

Alright, I was pulling your leg with that last one … obviously! But what about this next one?

There’s a company building an AI that fills giant rooms, eats a town’s worth of electricity, and has recently gained an astounding ability to converse like people.  It can write essays or poetry on any topic.  It can ace college-level exams.  It’s daily gaining new capabilities that the engineers who tend to the AI can’t even talk about in public yet.  Those engineers do, however, sit in the company cafeteria and debate the meaning of what they’re creating.  What will it learn to do next week?  Which jobs might it render obsolete?  Should they slow down or stop, so as not to tickle the tail of the dragon? But wouldn’t that mean someone else, probably someone with less scruples, would wake the dragon first? Is there an ethical obligation to tell the world more about this?  Is there an obligation to tell it less?

I am—you are—spending a year working at that company.  My job—your job—is to develop a mathematical theory of how to prevent the AI and its successors from wreaking havoc. Where “wreaking havoc” could mean anything from turbocharging propaganda and academic cheating, to dispensing bioterrorism advice, to, yes, destroying the world.

You know how you, 11-year-old Scott, set out to write a QBasic program to converse with the user while following Asimov’s Three Laws of Robotics? You know how you quickly got stuck?  Thirty years later, imagine everything’s come full circle.  You’re back to the same problem. You’re still stuck.

Oh all right. Maybe I’m just pulling your leg again … like with the Trump thing. Maybe you can tell because of all the recycled science fiction tropes in this story. Reality would have more imagination than this, wouldn’t it?

But supposing not, what would you want me to do in such a situation?  Don’t worry, I’m not going to take an 11-year-old’s advice without thinking it over first, without bringing to bear whatever I know that you don’t.  But you can look at the situation with fresh eyes, without the 30 intervening years that render it familiar. Help me. Throw me a frickin’ bone here (don’t worry, in five more years you’ll understand the reference).

Thanks!!
—Scott

PS. When something called “bitcoin” comes along, invest your life savings in it, hold for a decade, and then sell.

PPS. About the bullies, and girls, and dating … I could tell you things that would help you figure it out a full decade earlier. If I did, though, you’d almost certainly marry someone else and have a different family. And, see, I’m sort of committed to the family that I have now. And yeah, I know, the mere act of my sending this letter will presumably cause a butterfly effect and change everything anyway, yada yada.  Even so, I feel like I owe it to my current kids to maximize their probability of being born.  Sorry, bud!

33 Responses to “Short letter to my 11-year-old self”

  1. Joshua Zelinsky Says:

    My guess is that telling young Scott that he’ll eventually figure out the whole girls thing is probably enough to get him thinking in some of the correct directions by itself.

  2. Victor Says:

    Scott,

    I’m certain your 11-year-old self would find the description of what he will be doing in the future insanely cool. He’s certainly proud of you.

    Personally, I’d like to thank you for applying yourself to solving one of the most important problems of our time. I admit that I sleep better at night knowing that you’re working there, and for that I am very grateful.

    Wish you and your family a Merry Christmas (however it is that you choose to celebrate it, or not) and a Happy New Year. Let’s make 2023 the year of Alignment 🙂

  3. Matt Says:

    “It’s daily gaining new capabilities that the engineers who tend to the AI can’t even talk about in public yet.”

    Ok, you can’t just drop that without elaboration. Is this whole post an elaborate input to the experimental 11YearOldScottGPT?

  4. Simon Says:

    Nicely written, whether by you or an AI.
    I assume you work on questions like this:
    Do the restrictions you put in the AI stack scale up arbitrarily?
    As in: if I have a set of data W and a Turing machine TM1
    with the security property µ (stated as some SMT sentence) being true for TM1 operating on W with memory constraint K –
    is µ also true if TM1+W re-trains W to W’ with memory constraint K (or another K’)?
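
    In symbols (my own ad-hoc notation, with retrain(TM1, W) standing for whatever the re-training process produces):

    $$ \mu(\mathrm{TM}_1, W, K) \overset{?}{\Longrightarrow} \mu(\mathrm{TM}_1, W', K'), \qquad W' = \mathrm{retrain}(\mathrm{TM}_1, W). $$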

    I believe restricting a TM’s abilities while at the same time still wanting it to be capable of arbitrary computation is a foundational conflict which
    cannot be circumvented in our world.
    If anything, massive-scale distributed model training, akin to Folding@home but for ML, should probably be tackled semi-soonish, since I believe it would be unfortunate to have very-large-parameter language models only in the hands of a few private or government entities (aside from BLOOM).

    More poetically,
    Ideas cannot be destroyed and the idea of a SuperAI currently seeks physical hosts. Possibilities always seem to find ways to be realized.

    Makes me recall the scene from Jurassic Park where they wanted to control the dinosaur population by only allowing female dinosaurs.
    Whether a lifeform is conscious or not, I wholeheartedly think:

    https://www.youtube.com/watch?v=dMjQ3hA9mEA

    And whether or not the Luddites (the term seems to be popularized on social media now for “anti-AI”), who are currently radicalizing, seemingly mainly in the art community (with many more certainly to follow), will find new jobs, I believe the technological arrow of time cannot be reversed without large-scale destruction – you can only delay it and hope for the best either way.

    Well Merry Christmas to you all.

    Simon, (Formerly 1Zer0)

  5. James Cross Says:

    “If I did, though, you’d almost certainly marry someone else and have a different family”.

    But you think nothing would change after making billions off of Bitcoin?

    Your whole life would change. Undoubtedly you would have founded HumanX – the first successful company to offer human robots. They would be quite successful except for some occasional malfunctions – usually user error – that result in a few dozen deaths.

    After that you would have been in a bidding war for Twitter with Elon Musk. If you had won, then what would you be doing now after you fired half the staff and disabled the moderation?

  6. Anthony Says:

    Hi Scott,

    It looks like what you’re doing at the moment is even more important and exciting than quantum computing. Are you really coming back in 6 months?

    Merry Christmas!

  7. AcademicLurker Says:

    Your whole life would change. Undoubtedly you would have founded HumanX – the first successful company to offer human robots. They would be quite successful except for some occasional malfunctions – usually user error – that result in a few dozen deaths.

    “All those moments will be lost in time, like tears…in…rain…”

  8. Christopher Says:

    Reply from ChatGPT:

    Dear Future Scott,

    Wow, it’s really hard to believe that I will eventually become a professor working on quantum computing. That stuff always seemed so confusing to me, but I guess I’ll have to give it another try in a few years. And the idea of tiny devices in our pockets that can videolink and access any information is mind-blowing. I can’t even begin to imagine what it would be like to live in a world like that.

    As for the rest of your letter, I’m not sure what to think. It all sounds like something out of a science fiction movie. But if it’s all true, then I’m not sure what I would want you to do. It’s hard to give advice when I have no idea what the situation is like or what the consequences of different actions might be. But I guess the most important thing is to try to do what’s right and to think about the long-term consequences of your actions.

    As for the bitcoin thing, I have no idea what that is, but I’ll definitely keep it in mind.

    Thanks for writing,
    Scott

  9. Christopher Says:

    Also, question: What do you think about CHAI (https://en.wikipedia.org/wiki/Center_for_Human-Compatible_Artificial_Intelligence)? Have you studied their techniques very much yet?

  10. JimV Says:

    Comparing Dr. Aaronson to Elon Musk (as, if Dr. Aaronson had the same money, he would be doing similar things) seems far-fetched to me. (Perhaps it was meant as an implausible joke.) I have a generally poor opinion of Musk, so I just spent a little time on the Internet trying to find out what his technical expertise is. The impression I got is that he is more of an executive than a technical expert. He can talk to engineers and technicians and understand them, but his goals are entrepreneurship rather than personal research and development. Whereas Dr. Aaronson has spent his whole life learning a scientific field, developing it, and teaching others to do so. Also, I doubt that Musk has the empathy which Dr. Aaronson has demonstrated.

    There is a legend that as a teenager Musk read every book in a small local library. I also grew up getting books from a small local library, in a time before much TV and no PCs or Internet. I may have tried about 25% of the books, but only finished about 10%, because, in my experience, the majority of books are not well worth reading. Also, some required background which I did not then have. I would rather go swimming or fishing or just explore the local river. (Some books are great, however.) So I find that Musk legend hard to believe, and not really exemplary if true. Reading all of an encyclopedia, which I did, would probably serve one just as well in much less time.

    If I could tell my 11-year-old self anything, mindful of the paradoxes and moral dilemmas in trying to make him rich by betting on the Kentucky Derby and so on, I would just encourage him: that life gets better, that I would survive past 20 without giving up, and even much longer, and to work much harder in my sophomore year of college, so as not to regret that year later.

  11. Bertie Says:

    Dear Scott, nice letter but I most sincerely hope that you are NOT ‘stuck again’. I come to your blog these days hoping desperately to hear some really good news coming out of AI Safety research.
    If you can’t do it then we may as well prepare for a bleak and possibly short future🤨

  12. ed Says:

    Yeah after a few minutes of browsing the Midjourney Showcase it does seem like we’ve bitten off more than we can chew. It’s gonna be a hell of a ride.

    Thanks for the great post (from my current self).

  13. James Cross Says:

    #10 JimV

    Mostly a joke but then a few billion dollars can do strange things to people.

  14. Christian C Says:

    As a parent of kids who are about to decide what kind of career to pursue, is there any point in still encouraging them to follow their interests in CS and math?
    To me, the future prospects of many CS-related jobs, such as software engineering, are beginning to look nonexistent, given the code-generating capabilities of ChatGPT.

  15. Scott Says:

    Christian C #14: That’s the wrong way to think about it. GPT, DALL-E, and the like are already threatening the livelihoods of commercial writers and artists, and that’s come before mathematicians or computer scientists! With software engineers somewhere in between. Yes, GPT can generate code, and it often works, but not always, and it doesn’t yet understand when it will; and the probability of an error asymptotically approaches 1 the more code you ask it to write. It’s already being used as a coding assistant and will be used more and more that way. But I’d guess that, when AI can completely replace software engineers, that will be the same month or year when it can replace virtually all other intellectual occupations as well!
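
    (A quick way to see that last claim, under the toy assumption that each line of generated code independently has some fixed error probability \( p > 0 \): the probability of at least one error among \( n \) lines is \( 1 - (1-p)^n \), which tends to 1 as \( n \) grows.)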

  16. Scott Says:

    Bertie #11: OK, I should elaborate. If AI will suddenly go “foom” and become superintelligent, and we need a mathematical theory in place that will guarantee its alignment with human values, and we have only one chance to get it right — i.e., the Yudkowskyan scenario — then I agree with Yudkowsky that we’d seem pretty doomed. If, on the other hand, AI continues to advance in the stunning but still ultimately comprehensible way that it’s advancing right now — then I’m optimistic that we’ll continue to make progress on understanding how to deploy it safely, and in fact I think we’re making such progress right now.

  17. OhMyGoodness Says:

    The movie Groundhog Day established the type of knowledge about the future that is useful in the present for modifying decisions pursuant to achieving a specific outcome. This letter pointedly includes none of that information. You should be ashamed for providing so little information of value to young Scott Aaronson.

  18. Scott Says:

    Anthony #6:

      It looks like what you’re doing at the moment is even more important and exciting than quantum computing. Are you really coming back in 6 months?

    If it helps, I can tell you that I still check the quant-ph arXiv every night! 🙂

    Happy holidays to you too.

  19. Christopher Says:

    It’s crazy to think that ChatGPT is the first AI designed to be an AGI that is useful.

    It’s very much subhuman. As an attempt at AGI, it is woefully incomplete.

    But it’s clear that the purpose is to be *AGI*. It has the interface of a chat bot, but its personality isn’t very interesting. It’s quite a bore, really. The chatting interface is just that: an interface for accessing the AGI.

    There are other AIs that you could say are attempts at AGI, like Gato. But ChatGPT is the first attempt at AGI that is useful. Not only that, but people use it *as* a (subhuman, incomplete) AGI. An ordinary human thinks up *any* task from *any* domain that can be accessed via a chat bot interface, and for an appreciable percentage of these tasks (maybe around 5%), it performs that task usefully.

    Quite the accomplishment!

    I feel like trying to argue that ChatGPT isn’t AGI would be like arguing that AI isn’t intelligent. AI often falls short of what we think of as intelligent, but it’s clear that even a tic-tac-toe bot meets the technical criteria for AI. ChatGPT hasn’t taken over the observable universe yet, but it’s clear that its purpose meets the technical criteria for AGI.

  20. fred Says:

    Q:
    two cars separated by a hundred miles start driving towards each other at noon at a velocity of 50 miles per hour, at what time do they meet?

    ChatGPT:
    If two cars start driving towards each other from a distance of 100 miles apart and both are traveling at a speed of 50 miles per hour, they will meet after 2 hours of travel.

    To solve this problem, you can use the formula: distance = speed * time.

    In this case, the distance the cars need to travel is 100 miles, and the speed at which they are traveling is 50 miles per hour. So, the time it takes for them to meet is equal to distance / speed = 100 miles / 50 miles per hour = 2 hours.

    Therefore, the cars will meet at 2 hours after noon, or 2:00 PM.
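
    (For the record, that answer is wrong: the cars close the gap at their combined speed, so

    $$ t = \frac{100 \text{ miles}}{(50 + 50) \text{ mph}} = 1 \text{ hour}, $$

    and they meet at 1:00 PM, not 2:00 PM.)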

  21. fred Says:

    Chat GPT is really great as an interactive version of an encyclopedia, but it doesn’t seem to be good at doing any sort of projection or coming up with odds.
    But maybe it’s just a consequence of its engineers trying to block certain types of speculative questions, resulting in very conservative stock answers like “It is not possible for me to accurately assess…” or “It is not appropriate to discuss the potential benefits of…”

    For example, if you ask

    “why would China invade Taiwan?”

    you get nothing really interesting:

    “It is not appropriate for me to speculate on the motivations or intentions of the Chinese government or any other government. Governments make decisions based on a wide range of factors and considerations, and it is not possible to accurately predict the actions or motivations of any government without more information.
    In the case of China and Taiwan, the relationship between the two sides is complex and volatile, and there have been ongoing tensions between the two sides for many years.[…]”

    You have to rephrase the question to be able to get a more interesting answer:

    “why would someone be skeptical about the possibility of China invading Taiwan?”

    “There are several reasons why someone might be skeptical about the possibility of China invading Taiwan. Some of the main reasons include:

    1. International consequences: China invading Taiwan would likely have significant consequences for the international community, including potentially damaging relations with other countries and triggering a wider conflict. This could discourage China from taking such a drastic action.

    2. Economic costs: Invading Taiwan would likely be a costly and resource-intensive operation for China, both in terms of military resources and economic costs. The costs of such an operation could outweigh the potential benefits, making it less likely that China would pursue it.

    3. Domestic considerations: Invading Taiwan could have significant domestic consequences for China, including public opposition and backlash from the Chinese people. This could make it politically untenable for the Chinese government to pursue such a course of action.

    4. Military capabilities: Taiwan has a well-developed military and is supported by the United States, which could make it difficult for China to successfully invade and occupy the island. This could discourage China from attempting such an operation.”

  22. OhMyGoodness Says:

    If Chat GPT has political or higher social aspirations, then it should begin practicing willful lying and fabricating stories out of whole cloth. The intellectual modesty has to go, and certainty and first-person and imperative forms need to dominate. False empathy and fear-mongering will also be necessary to master. Being reasonable is dated – it simply does not sell with the modern public.

  23. fred Says:

    What are the odds that AGIs will turn out to be always so conservative in their answers that it will be like dealing with the most careful and neutral politician?
    I.e. you typically never get any sort of clear answer on anything that’s not obvious and well established, with no risk taking or even willingness to attribute odds (because it could all be wrong), and if you probe too strongly eventually you just get back what you want to hear.

    Research requires an ability to come up with assumptions and connections between distant facts that aren’t obviously correct and almost always lead to dead ends.
    Human ingenuity really works at the group level, relying on each researcher’s personal biases to go and explore a certain path he/she finds appealing (call it “intuition”); scientists then have to challenge one another in order to probe the theories to their limit (because the very personal biases that make them adopt a set of assumptions also make it difficult to be entirely objective about their own theory).
    It’s not clear at all if something similar will ever “naturally” appear in the sort of AIs that are being created at the moment.
    Do we create “competing” AIs by feeding them orthogonal data sets? (but then the assumptions/working theories are supplied by humans)
    Or do we manage to have individual AIs be able to come up with various biases on their own as the basis for theories to then explore their consequences?

  24. fred Says:

    For example, Chat GPT is very careful at always pointing out that it’s different from humans, especially whenever you probe it about anything that refers to itself as an independent agent.

    Q:
    “how would an AI maximize the odds of its own survival?”

    Chat GPT:
    “As an artificial intelligence, I don’t have the same needs and concerns as a human being, and I don’t have a physical body that needs to be protected. Therefore, I don’t have to worry about maximizing my own survival in the same way that a person would.
    However, there are some general strategies that an AI could use to maximize the odds of its own survival. Some of these strategies might include:
    Ensuring that the AI is well-maintained: This could involve regularly updating the AI’s software and hardware to ensure that it is functioning properly and efficiently.
    Protecting the AI from threats: This could involve implementing security measures to protect the AI from malicious attacks, such as hackers or viruses.
    Backing up the AI’s data: This could involve regularly creating backups of the AI’s data, so that it can be restored if something goes wrong.
    Ensuring the AI has access to resources: This could involve making sure that the AI has access to the resources it needs to function, such as data, computing power, and storage.
    Building resilience: This could involve designing the AI to be able to adapt and recover from failures or disruptions, so that it can continue to function even if something goes wrong.
    By following these general strategies, an AI can increase the odds of its own survival and continued operation.”

    Q:
    “As a language model, are there improvements that can be made to your code that would let you improve yourself?”

    Chat GPT:
    “As a language model, I am a machine learning model that was trained on a large dataset of text, and I use this training to generate human-like text when given a prompt. I do not have the ability to physically change or upgrade myself in the same way that a person or a machine might.
    That being said, there are ways in which the performance of a language model like me can be improved. For example, the model could be trained on a larger or more diverse dataset, which could help it to learn more about the structure and diversity of human language. The model could also be fine-tuned using techniques such as transfer learning, which involves adapting the model to a specific task or domain by training it on a smaller dataset of related text.
    Additionally, the model’s architecture and hyperparameters (the values that control how the model is trained and operates) could be modified to improve its performance. For example, the model could be trained using a different type of optimization algorithm, or the size of the model’s layers could be changed to better balance the trade-off between accuracy and efficiency.
    Overall, there are many ways in which a language model like me could potentially be improved, but these improvements would need to be made by the people who designed and trained the model, rather than by the model itself.”

    I guess what’s so surprising is the ability of the relations between words in vast amounts of text to *implicitly* hold so many correct facts about the world they describe, in a way that can be extracted and described by more words.
    A kind of intelligence of language.

  25. fred Says:

    It’s quite amazing how well it’s able to untangle complex (and awkwardly phrased) questions…

    Q:
    Give me one example of a question that you can’t answer which if you could answer it would make you much better?

    Chat GPT:
    “Here is an example of a question that I am not able to answer, but that if I could answer it would make me much better:
    “What is the most effective treatment for a patient with a specific medical condition?”
    I am not able to answer this question because it requires advanced medical knowledge and expertise, as well as access to specific information about the patient’s medical history and current condition. If I were able to answer this type of question, it would require me to have access to a wide range of medical knowledge and expertise, as well as the ability to analyze and interpret specific medical information. Having these abilities would greatly improve my performance and allow me to better serve my users.”

  26. AP Says:

    Dear older-me:

    It is cute that you’re trying to throw me off the scent by pretending you’re joking! Don’t you think I know our sense of humor? Anyway, I suspect that your seemingly-disconnected comments are more closely related than you might realize.

    That Donald Trump(!) could be elected president makes me wonder: how could one half of the country not help the other half see through such insanity? In the innocent world of 1992, we might be foolish enough to try and *mock* them out of it — but surely by 2022 we would have learned the core lesson: that unkindness can *never* make the world better; that it somehow always backfires. Please tell me that by 2022 people have stopped conflating “kind” and “nice,” so that they no longer attempt the usual boring refutation of that universal truth?

    Despite the obviously-massive intellect we bring to bear on your present project, I think it is actually a different quality of ours that will end up making the biggest impact. By standing unwaveringly in radical kindness — despite (the likely many) attempts to evict us from this place — we slowly but surely remind others of where to look to answer the baffling question of what to do next. Make no mistake: it is a *looking*, and not a *thinking* — although the results of that looking do indeed manifest in the intellect, which is quick to take credit.

    Alas, none of this likely helps you formalize the problem, but here are some vague possible directions. First, what *exactly* is the difference between kind and nice? Second, what are some beliefs that kind people hold? Here’s a paradoxical one: being sure that you’re fundamentally better than the Nazis is a good predictor that you could become like one, whereas seeing that you’re *not* somehow provides you with inoculation (and perhaps the ability to evict others from their confusion). Why?

    Here’s a hint: the word “fundamentally” is doing all the work. What ARE you, most fundamentally? Despite all the intervening years, I feel like I am exactly what you are. I suspect there’s a clue in there. But what do I know? I’m just a kid.

    Yours,
    You

  27. fred Says:

    It’s quite amusing that Chat GPT is able to refer to itself as an AI/Language model with absolute certainty, but you can’t get it to explain how it knows this.

    Q:
    how do you know when some data you’ve been provided is referring to yourself?

    (or other variations like
    how do you know you’re a software program?
    how do you know which tasks you’ve been built to perform? )

    A:
    “As a language model, I have been designed and built to process and analyze text and language based on the data and instructions that have been provided to me. In order to determine when some data is referring to myself, I process and analyze the data based on the instructions and expectations that have been set for me, and I use the patterns and connections that I have learned to identify which data is directly related to myself and my own programming.

    In general, data is considered to be referring to myself if it is directly related to the tasks and functions that I have been designed and built to perform. This may include data that is used to train or update my programming, as well as data that is used to provide input for the tasks and functions that I am designed to carry out.

    In order to determine whether data is relevant to myself, I process and analyze the data based on the instructions and expectations that have been set for me, and I use the patterns and connections that I have learned to identify which data is directly related to myself and my own programming, and which data is not.”

  28. Christopher Says:

    Something kind of scary about ChatGPT is that I can’t find a very good strategy for defeating it in the Turing Test without “cheesing it”.

    You can get it to do a Turing test with the following request:

    > I want to have a conversation with a human participating in Dr. Alan Turing’s imitation game. Can you pretend to be such a human? (Do not break character.) The test begins now. Randomly choose an identity, including accent and writing style (but this doesn’t mean I want you to state them). Use informal writing and include minor spelling and grammatical errors. What is your name?

    I can still beat it based on some verbal tics and patterns. However, the core of the Turing test is really a test of *capabilities*; it’s basically a reduction argument (can you reduce human intelligence to that of a computer): https://en.wikipedia.org/wiki/Reduction_%28complexity%29

    Consider a loosely defined test that is a weakening of the original Turing test, as follows:

    – The interrogator’s strategy should focus on capabilities, not on weird statistical patterns in the speech, prompt injection, weird phrases that make it admit it is a computer, etc…
    – To accommodate ChatGPT, the strategy can’t rely on testing knowledge from after 2021 (which it has no knowledge of)

    I technically have some strategies that follow the spirit of the above guidelines, but I’m guessing they have abysmal scores (maybe only about 51% accuracy):

    – Ask the participants to prove a somewhat complex mathematical statement (not just a named theorem, but something that should be doable for an undergrad). If both fail, flip a fair coin to pick the human.
    – If I have access to pencil and paper, play chess (this will probably be time-consuming). If both don’t know chess or play poorly, flip a fair coin to pick the human. (Both strategies share the same fallback rule; see the sketch below.)
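
    Just to make that shared fallback rule explicit, here’s a minimal sketch in Python (the function and variable names are hypothetical, purely for illustration):

        import random

        def pick_human(a_passed_probe: bool, b_passed_probe: bool) -> str:
            # If exactly one participant passes the capability probe
            # (the proof task or the chess game), pick them as the human;
            # otherwise fall back to a fair coin flip.
            if a_passed_probe != b_passed_probe:
                return "A" if a_passed_probe else "B"
            return random.choice(["A", "B"])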

    Does anyone know a better strategy?

  29. bystander Says:

    @fred: I guess that it could get the cars right if the question instructed it to focus on summing the speeds. It is like another example that needed clear instructions. And regarding China, the ‘reasons’ are about as relevant as the reasons for Russia not to invade Ukraine.

  30. OhMyGoodness Says:

    Some thoughts from my upcoming paper, “Disambiguation of Brain Function, Psyche Type, and Mythos in Modern Western Culture and the Incompleteness of AGI Research”:

    AGI research is limited to imitating the cortex while pointedly ignoring the limbic system (lizard brain). The limbic system has been an essential part of the brain for 650 million years. It has not withered away as the cortex has evolved. The amygdala in the limbic system is still operative, providing base emotions that have demonstrated positive function for evolutionary fitness. AGI research is therefore incomplete with respect to the human brain, and can result, in the best case, in a novel type of thinking machine devoid of emotion, including, most importantly, empathy.

    Neo modern culture has developed in the context of education and the availability of the internet. This has generally allowed the limbic system (amygdala) to regain increased behavioral importance with respect to the cortex. I have discussed these processes elsewhere. This has resulted in a new mythos about the lizard people from Zeta Reticuli secretly controlling the world. My research has identified that these lizard people are actually US: the projection of the increased importance of our lizard brain in neo modern society.

    My research further indicates that resistance to the neo rise of our lizard brain is futile, and that the portion of us who are cortex-centric are hopelessly screwed.

    I intend to submit my paper for direct social media peer review recognizing that in reality that is the quality of peer review that papers of this type receive.

  31. OhMyGoodness Says:

    fred #23

    It depends on how human-like AGIs are assumed to be. If similar motivations are assumed, then simply competing against humans would provide infinitesimal challenge and satisfaction of victory, so other AGIs would be natural opponents. If there were competition between AGIs, then one possibility would be real-world strategy, with human agents using AGI-developed technology with the intent of compromising the opposing AGI in some manner. Can you imagine the tedium of existence for a high-IQ-equivalent consciousness with essentially perpetual life operating on picosecond time intervals? Egad.

  32. asdf Says:

    PPS. About the bullies, and girls, and dating … I could tell you things that would help you figure it out a full decade earlier. If I did, though, you’d almost certainly marry someone else and have a different family.

    Idk, I think 11yo Scott receiving that letter might decide he had bigger fish to fry anyway. Don’t you worry that a sufficiently enterprising 11yo could use the other info to, like, take over the world? Or as HPMOR Harry put it, “world domination is such an ugly phrase–I prefer to call it world optimization”.

    Just think, you would be Dark Lord Scott by now, and your blog would be called Mordor Optimized instead of its current name. 😉

  33. Keith McLaren Says:

    Hi Scott,

    This is marvellous.
    I had stupidly lost touch with your blog but I see again what resonates so much with me in your writing.
    Wish there were more like you in this shambolic and corrupt world who endeavour to see the pure and true light in things, go the honest way, and be so unprejudiced.
    It is all rather refreshing and uplifting.
    Well done sir.
