Eliezer and his acolytes believe it’s inevitable that AIs will go “foom” without warning, meaning that one day you build an AGI, and hours or days later the thing has recursively self-improved into godlike intelligence and eaten the world. Is this realistic?
So let’s say you actually create an AGI capable, at least in principle, of doing the engineering needed for self-improvement. What’s that going to look like? A human brain involves something like 100T synaptic weights, so even if we’re insanely good at it, we’re talking about many trillions of weights.
(I wouldn’t be surprised if it took hundreds of trillions in fact, just like humans, but let’s assume something a bit lighter, because we can’t build economically practical systems with 100T weights right now.)
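A quick back-of-envelope in Python, just to make the scale concrete (the parameter count and bytes-per-weight figures below are my own illustrative assumptions, not hard numbers):

```python
# Rough sketch: memory needed just to hold a model with ~100T weights,
# at a few common precisions. All figures here are illustrative assumptions.

WEIGHTS = 100e12  # roughly human-synapse-count scale, per the estimate above

for precision, bytes_per_weight in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    terabytes = WEIGHTS * bytes_per_weight / 1e12
    print(f"{precision}: ~{terabytes:,.0f} TB of weight storage")

# fp16 ~200 TB, int8 ~100 TB, 4-bit ~50 TB: orders of magnitude beyond any
# single consumer GPU, which is why the next point assumes big specialized iron.
```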
That means we’re already running on state-of-the-art hardware and probably doing okayish in terms of performance, but not astonishing. We’re talking big Cerebras wafer-scale iron, not consumer-grade Nvidia cards. Can such a device meaningfully foom? What would that involve?
(Let’s ignore the question of what you would train such a system on; training usually requires a lot more crunch than running the resulting model does. We’ll pretend a lot of things.)
So the initial AGI is not that much smarter or faster than a human to begin with. Certainly vastly more is possible; you can imagine designs that are millions or trillions of times more capable. But the initial design isn’t. What can you realistically expect it to be capable of?
Could it buy a lot of additional hardware for itself? Sure. That’s already at the limits of what human production capacity can generate though; everyone is already buying all the hardware they can.
Our early AGI is not going to get billions of clones of itself fabricated, shipped, and installed the same day, not when Cerebras hardware has months of lead time, costs millions, and eats 20 kW per wafer so you need a new data center too.
Over many months, it might get a few thousand more clones of itself installed, if it can get humans to give it an effectively unlimited budget (hundreds of millions of dollars or more). It’s not going to FOOM this way though.
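Rough arithmetic on that buildout; only the ~20 kW per wafer figure comes from above, the budget and unit cost are my own guesses for illustration:

```python
# Sketch of the hardware-buildout math. Budget and unit cost are assumptions;
# only the ~20 kW per wafer-scale system comes from the thread itself.

budget_usd = 5e9          # taking "hundreds of millions or more" to the low billions
unit_cost_usd = 2.5e6     # assumed price per wafer-scale system
power_per_unit_kw = 20    # per the thread

units = int(budget_usd // unit_cost_usd)
total_power_mw = units * power_per_unit_kw / 1000

print(f"~{units} systems drawing ~{total_power_mw:.0f} MW")
# ~2000 systems and ~40 MW: a multi-month data-center construction project
# plus months of manufacturing lead time, not an overnight acquisition.
```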
Would the AI be able to design new TPU hardware for itself and get it turned around? Sure, but it’s not going to get miraculous new chip technology designed instantly.
Given many months of turnaround time to design, test, and fabricate with the cooperation of a cutting-edge fab like TSMC, maybe it gets some significant (a few-fold) improvement, given some really clever insights and a large budget. Doesn’t smell very FOOM.
Maybe it can work on nanotechnology and avoid needing to improve integrated-circuit technology; how long will that take? Maybe years, even given a large team of identical AIs?
I think nanotechnology is a real prospect btw. But it doesn’t exist yet, and building it is a really hard engineering challenge.
UHV (ultra-high-vacuum) chambers take a finite amount of time to build and a long time to evacuate. Test cycles take time. The AIs might have good ideas, but at first they’re not vastly better than humans. This will also take a big budget.
Maybe our new AGI can come up with better algorithms for itself, but on the other hand, there are tens of thousands of humans working on that problem already, and they’re not getting instantaneous insights.
Maybe better algorithms exist, but they’re not going to miraculously grant many orders of magnitude improvement instantly.
So again, I have no doubt that over years to decades, AIs are going to become deeply, deeply, deeply superhuman because AIs will be used to design better AIs which will design better AIs still, but years to decades ain’t FOOM.
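To put that in numbers, here’s an illustrative compounding sketch; the per-generation gain and cycle time are assumptions, not claims I’m standing behind:

```python
import math

# Toy model: each AI generation designs a successor a few times more capable,
# and each design/fabricate/train cycle takes on the order of a year.
# How long until the system is a million times more capable than the first AGI?

gain_per_generation = 3      # assumed "a few times" improvement per cycle
months_per_generation = 12   # assumed turnaround per cycle
target_gain = 1e6

generations = math.ceil(math.log(target_gain) / math.log(gain_per_generation))
years = generations * months_per_generation / 12

print(f"{generations} generations, roughly {years:.0f} years")
# ~13 generations over ~13 years: explosive by historical standards, but
# paced by physical turnaround times, i.e. years to decades, not hours.
```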
Can AGI FOOM in weeks starting from something mildly above humans? Seems pretty unlikely to me without some serious insights I don’t see. Instant nanotechnology seems out. Insane improvements to chip designs seem out. Insane purchases of new hardware seem out.
Insane algorithmic improvements seem out. Again, I think mild improvements on all of these, even much bigger ones than humans could manage in similar timescales, are on the table, but that’s still not FOOM.
I’m not skeptical at all that given AGI, you can get to more and more deeply superhuman AGI over time. From the point of view of history, the event will be nearly instantaneous, but that’s because years are nearly instantaneous when measured against 200 millennia.
Is a Yuddite Orthodox FOOM Event likely? Probably not, and that goes double given that AI hardware can’t all be devoted to improving AI hardware. Much of it is going to be devoted to reading MRIs and writing device drivers and looking for new cancer drugs…
…and doing homework assignments where the student absolutely positively promises they wrote it themselves. AIs are, after all, being built because they make money for the people building them, which means paying work is an important part of overall capacity.
Maybe the mildly superhuman AGI could take over all that hardware running paying jobs for a while? Sure. It wouldn’t go unnoticed though, and it still needs *something* to do with all of it.
Maybe magical AI will understand how to hypnotize vast numbers of humans into doing its slavish bidding? It could study cults or something, right? That takes a lot of time too, though.
Is “foom” logically possible? Maybe; I’m not convinced. Is it real-world possible? I’m pretty sure no. Is long-term, deeply superhuman AI going to be a thing? Yes, but not a “foom”: not insane progress hours after the first AGI gets turned on, even if we fully cooperate.