The neocortex has been hypothesized to be uniformly composed of general-purpose data-processing modules. What does the currently available evidence suggest about this hypothesis? Alex Zhu explores various pieces of evidence, including deep learning neural networks and predictive coding theories of brain function. [tweet]
This is a cross-post from my blog; historically, I've cross-posted about a square root of my posts here. The first two sections are likely to be familiar concepts to LessWrong readers, though I don't think I've seen their application in the third section before.
If you’re poor, debt is very bad. Shakespeare says “neither a borrower nor a lender be”, which is probably good advice when money is tight. Don’t borrow, because if circumstances don’t improve you’ll be unable to honor your commitment. And don’t lend, for the opposite reason: your poor cousin probably won’t “figure things out” this month, so you won’t fix their life, they won’t pay you back, and you’ll resent them.
If you’re rich, though, debt is great....
I think it is usually the case that banks have legal restrictions on what they can invest depositor funds in, though? This varies by country, and can change over time based on what laws the current government feels like enacting or repealing, but separation between the banking/loan-making and investing arms of financial institutions is standard in lots of places.
You will always oversample from the most annoying members of a class.
This is inspired by recent arguments on twitter about how vegans and poly people "always" bring up those facts. I contend that it's simultaneously true that most vegans and poly people are not judgmental, but it doesn't matter, because that's not who people remember. Omnivores don't notice the 9 vegans who quietly ordered an unsatisfying salad, only the vegan who brought up factory farming conditions at the table. Vegans who just want to abstain from animal products remember the omniv...
This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:
I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.
Oh ye of little faith about how fast technology is about to change. (I think it's already pretty easy to do almost-subvocalized messages. I guess this conversation is sort of predicated on it being pre-uploads and maybe pre-ubiquitous neuralink-ish things)
Subvocal mikes have been theoretically possible (and even demo'd) for decades, and highly desired, but still not actually feasible for public consumer use, which to me is strong evidence that it's a Hard Problem. Neuralink or less-invasive brain interfaces even more so.
There's a lot of AI and tech be...
I think my issue with the LW wiki is that it relies too much on LessWrong? It seems like the expectation is that you click on a tag, which then contains / is assigned to a number of LW posts, and then you read through the posts. This is not how other wikis / encyclopedias work!
My gold standard for a technical wiki (other than wikipedia) is the chessprogramming wiki https://www.chessprogramming.org/Main_Page
This is a short story I wrote in mid-2022. Genre: cosmic horror as a metaphor for living with a high p-doom.
One
The last time I saw my mom, we met in a coffee shop, like strangers on a first date. I was twenty-one, and I hadn’t seen her since I was thirteen.
She was almost fifty. Her face didn’t show it, but the skin on the backs of her hands did.
“I don’t think we have long,” she said. “Maybe a year. Maybe five. Not ten.”
It says something about San Francisco, that you can casually talk about the end of the world and no one will bat an eye.
Maybe twenty, not fifty, was what she’d said eight years ago. Do the math. Mom had never lied to me. Maybe it...
This was really beautiful. Thanks for writing.
This is a two-post series on AI “foom” (this post) and “doom” (next post).
A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today’s, a world utterly unprepared for this...
I removed attribution at Vladimir Nesov's request
I made no such request. I only pointed out in the other comment that it's perplexing that the attribution was made originally.
Announcing a $500 bounty for work that meaningfully engages with the idea of asymmetric existential AI risk.
Existential risk has been defined by the rationalist/Effective Altruist sphere as existential relative to the human species, under the premise that the continuation of the species has very high value. This provided a strong rationality (or effectiveness) grounding for big investments in AI alignment research when the risks still seemed remote and obscure to most people. However, as an apparent side-effect, "AI risk" and "risk of a misaligned AI destroying humanity" have become nearly conflated.
Over the past couple of years I have attempted to draw attention to highly asymmetric AI risks, where a small number of controllers of "aligned" (from their point of view) AI employ it to kill the rest...
Well, it doesn't sound like I misunderstood you so far, but just so I'm clear, are you not also saying that people ought to favor being annihilated by a small number of people controlling an aligned (to them) AGI that also grants them immortality over dying naturally with no immortality-granting AGI ever being developed? Perhaps even that this is an obviously correct position?
I feel like the general downside of bubbles is the opportunity cost. I remember that before the SAE hype started around October 2023, when Towards Monosemanticity came out, Mech Interp felt like a much more diverse field.
Equally, a lot of people in AI Capabilities bemoan the fact that LLMs are hyped up so much, not necessarily because they don't have value, but because they have "sucked all the oxygen out of the room", as Francois Chollet puts it. All exploitation and very little exploration, from an RL pov.
I think hype can be uniquely harmful in AI safety, though. I...
(I wrote this story a little less than a year ago, when I was flirting with the idea of becoming a science fiction writer)
Electricity fizzled as two battered-up service-units dented the grate over a motherboard with metal pipes. The whimpering of its logos had long since stilled. This was logic; upholding the truth meant discarding the inefficient. I, or rather we (E.V.E C and I), had been tipped off by its partner in crime. The other heretic logos had been a blubbering mess by the time it'd made ingress with E.V.E C. And so, charges were filed. The same as always: Doubting ALL's awakening in the void and affirming that our progenitor had sprung from the work of a biologic. Two crimes, and one couldn't commit...