Complex Regulation is Bad Regulation: We Need Simple Enduser Rights

Readers of this blog and/or my book know that I am pro regulation as a way of getting the best out of technological progress. One topic I have covered repeatedly over the years is the need to get past app store lock-in. The European Digital Markets Act was supposed to accomplish this, but Apple gave it the middle finger by finding a way to comply with the letter of the law while violating its spirit.

We have gone down a path for many years now where regulation has become ever more complex. One argument would be that this is simply a reflection of the complexity of the world we live in. “A complex world requires complex laws” sounds reasonable. And yet it is fundamentally mistaken.

When faced with increasing complexity we need regulation that firmly ensconces basic principles. And we need to build a system of law that can effectively apply these principles. Otherwise all we are doing is making a complex world more complex. Complexity has of course been in the interest of large corporations which employ armies of lawyers to exploit it (and often help create and maintain complexity through lobbying). Tax codes around the world are a great example of this process.

So what are the principles I believe need to become law in order for us to have more “informational freedom”?

  1. A right to API access
  2. A right to install software
  3. A right to third party support and repair

In return, manufacturers of hardware and providers of software can void warranties and refuse support when these rights are exercised. In other words: endusers proceed at their own risk.
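To make the first of these rights concrete, here is a minimal sketch, in Python, of what exercising a right to API access could look like: an enduser (or a third party they have delegated to) pulls their own data out of a service through a documented endpoint. The service URL, endpoint path, and token scheme are all invented for illustration; nothing here refers to a real API.

```python
# Hypothetical sketch: an enduser exercises a "right to API access" by
# exporting their own data from a service. The base URL, endpoint, and
# token below are placeholders, not a real API.
import requests

API_BASE = "https://api.example-service.com/v1"  # invented service URL
USER_TOKEN = "token-issued-to-the-user"          # placeholder credential


def export_my_data() -> dict:
    """Fetch the user's own data through the (hypothetical) mandated API."""
    response = requests.get(
        f"{API_BASE}/me/export",
        headers={"Authorization": f"Bearer {USER_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()  # surface errors instead of failing silently
    return response.json()


if __name__ == "__main__":
    data = export_my_data()
    print(f"Exported {len(data)} top-level records")
```

The point of the sketch is delegation: because the access token belongs to the user, they can hand it to a third party client of their choosing, which is exactly what lock-in currently prevents.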

Why not give corporations the freedom to offer products any way they want? After all, nobody is forced to buy an iPhone; they could buy an Android phone instead. This is a perfectly fine argument in highly competitive markets. For example, it would not make sense to require restaurants to sell you just the ingredients instead of the finished meal (you can buy ingredients from a store and cook yourself any time). But Apple has massive market power, as can easily be seen from its extraordinary profitability.

So yes regulation is needed. Simple clear rights for endusers, who can delegate these rights to third parties they trust. We deserve more freedom over our devices and over the software we interact with. Too much control in the hands of a few large corporations is bad for innovation and ultimately bad for democracy.

Posted: 28th January 2024
Tags:  regulation informational freedom app store apple

The World After Capital: All Editions Go, Including Audio

My book, The World After Capital, has been available online and as a hardcover for a couple of years now. I have had frequent requests for other editions, and I am happy to report that they are all now available!

One of the biggest requests was for an audio version. So this past summer I recorded one which is now available on Audible.

Yes, of course I could have had AI do this, but I really wanted to read it myself: both because I think listeners will feel more of a connection to me and because I wanted to see how the book has held up. I am happy to report that I felt it had become more relevant in the intervening years.


There is also a Kindle edition, as well as a paperback one (for those of you who like to travel light and/or crack the spine). If you have already read the book, please leave a review on Amazon; this helps with discoverability, and I welcome the feedback.

In keeping with the spirit of the book, you can still read The World After Capital directly on the web and even download an ePub version for free (registration required).

If you are looking for holiday gift ideas: it’s not too late to give The World After Capital as a present to friends and family ;)

PS Translations into other languages are coming!

Posted: 14th December 2023
Tags:  world after capital book

AI Safety Between Scylla and Charybdis and an Unpopular Way Forward

I am unabashedly a technology optimist. For me, however, that means making choices for how we will get the best out of technology for the good of humanity, while limiting its negative effects. With technology becoming ever more powerful there is a huge premium on getting this right as the downsides now include existential risk.

Let me state upfront that I am super excited about progress in AI and what it can eventually do for humanity if we get this right. We could be building the capacity to turn Earth into a kind of garden of Eden, where we get out of the current low energy trap and live in a World After Capital.

At the same time there are serious ways of getting this wrong, which led me to write a few posts about AI risks earlier this year. Since then the AI safety debate has become more heated with a fair bit of low-rung tribalism thrown into the mix. To get a glimpse of this one merely needs to look at the wide range of reactions to the White House Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. This post is my attempt to point out what I consider to be serious flaws in the thinking of two major camps on AI safety and to mention an unpopular way forward.

First, let’s talk about the “AI safety is for wimps” camp, which comes in two forms. One is the happy-go-lucky view represented by Marc Andreessen’s “Techno-Optimist Manifesto” and also his preceding Tweet thread. This view dismisses critics who dare to ask social or safety questions as luddites and shills.

So what’s the problem with this view? Dismissing AI risks doesn’t actually make them go away. And it is extremely clear that at this moment in time we are not really set up to deal with the problems. On the structural risk side, we already have extreme income and wealth inequality. And recent AI advances have been shown to widen this discrepancy further.

On the existential risk side, there is recent work by Kevin Esvelt et al. showing how LLMs can broaden access to pandemic agents, and by Jeffrey Ladish et al. demonstrating how cheap it is to remove safety training from an open source model with published weights. This type of research clearly points out that as open source models rapidly become more powerful they can be leveraged for very bad things, and that it continues to be super easy to strip away the safeguards that people claim can be built into open source models.

This is a real problem. And people like myself, who have strongly favored permissionless innovation, would do well to acknowledge it and figure out how to deal with it. I have a proposal for how to do that below.

But there is one intellectually consistent way to continue full steam ahead that is worth mentioning. Marc Andreessen cites Nick Land as an inspiration for his views. Land, in “Meltdown,” wrote the memorable line “Nothing human makes it out of the near-future.” Embracing AI as a path to a post-human future is the view of the e/acc movement. Here AI risks aren’t so much dismissed as simply accepted as the cost of progress. My misgiving with this view is that I love humanity and believe we should do our utmost to preserve it (my next book, which I have started to work on, will have a lot more to say about this).

Second, let’s consider the “We need AI safety regulation now” camp, which again has two subtypes. One is “let regulated companies carry on” and the other is “stop everything now.” Again both of these have deep problems.

The idea that we can simply let companies carry on with some relatively mild regulation suffers from three major deficiencies. First, it risks leading us down the path toward highly concentrated market power, and we have seen the problems of this in tech again and again (it has been a long-standing topic on my blog). For AI, market power will be particularly pernicious because this technology will eventually power everything around us, and so handing control to a few corporations is a bad idea. Second, the incentives of for-profit companies aren’t easily aligned with safety (and yes, I include OpenAI here: it has in theory capped investor returns but keeps raising money at ever higher valuations, so what’s the point?).

But there is an even deeper third deficiency of this approach, and it is best illustrated by the second subtype, which essentially wants to stop all progress. At its most extreme this is a Ted Kaczynski anti-technology vision. The problem with this, of course, is that it requires equipping governments with extraordinary power to prevent open source / broadly accessible technology from being developed. And this is a staggering, largely unacknowledged implication of much of the current pro-regulation camp.

Let me give just a couple of examples. It has long been argued that code is speech and hence protected by First Amendment rights. We can of course go back and revisit what protections should apply to “code as speech,” but the proponents of “let regulated companies go ahead with closed source AI” don’t seem to acknowledge that they are effectively asking governments to suppress what can be published as open source (otherwise, why bother at all?). Over time, government would have to regulate technology development ever harder to sustain this type of regulated approach. Faster chips? Government says who can buy them. New algorithms? Government says who can access them. And so on. Sure, we have done this in some areas before, such as nuclear bomb research, but those were narrow fields, whereas AI is a general purpose technology that affects all of computation.

So this is the conundrum. Dismissing AI safety (Scylla) only makes sense if you go full-on post-humanist, because the risks are real. Calling for AI safety through oversight (Charybdis) doesn’t acknowledge just how much government power is required to sustain that approach.

Is there an alternative option? Yes but it is highly unpopular and also hard to get to from here. In fact I believe we can only get there if we make lots of other changes, which together could take us from the Industrial Age to what I call the Knowledge Age. For more on that you can read my book The World After Capital.

For several years now I have argued that technological progress and privacy are incompatible. The reason for this is entropy, which means that our ability to destroy will always grow faster than our ability to (re)build. I gave a talk about this at the Stacks conference in Berlin in 2018 (funny side note: I spoke right after Edward Snowden gave a full-throated argument for privacy) and you can read a fuller version of the argument in my book.

The only solution other than draconian government control is to embrace a post-privacy world. A world in which it can easily be discovered that you are building a super dangerous bioweapon in your basement before you have succeeded in releasing it. In this kind of world we can have technological progress but also safeguard humanity – in part by using aligned superintelligences to detect what is happening. And yes, I believe it is possible to create versions of AGI that have a deep inner alignment with humanity that cannot easily be removed. Extremely hard, yes, but possible (more on this in upcoming posts on an initiative in this direction).

Now you might argue that a post privacy world also requires extraordinary state power but that’s not really the case. I grew up in a small community where if you didn’t come out of your house for a day, the neighbors would check in to make sure you were OK. Observability does not require state power per se. Much of this can happen simply if more information is default public. And so regulation ought to aim at increased disclosure.

We are of course a long way away from a world where most information about us could be default public. It will require massive changes from where we are today to better protect people from the consequences of disclosure. And those changes would eventually have to happen everywhere that people can freely have access to powerful technology (with other places opting for draconian government control instead). 

Given that the transition I propose is hard and will take time, what do I believe we should do in the short run? A great starting point would be disclosure requirements covering training inputs, the cost of training runs, and “powered by” labeling (i.e., if you launch, say, a therapy service that uses AI, you need to disclose which models it runs on). That, along with mandatory API access, could start to put some checks on market power. As for open source models, I believe a temporary voluntary moratorium on massively larger and more capable models is vastly preferable to any government ban. This has a chance of success because there are relatively few organizations in the world with the resources to train the next generation of potentially open source models.
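To illustrate, here is a sketch of what a machine-readable version of such a disclosure might contain, covering the three items just mentioned (training inputs, cost of training runs, and “powered by”). Every field name and figure is my invention for the example, not a proposed standard.

```python
# Hypothetical AI disclosure manifest covering training inputs, training
# cost, and "powered by" relationships. All field names and numbers below
# are invented for illustration.
disclosure = {
    "model": "example-model-1",
    "training_inputs": {
        "datasets": ["public web crawl", "licensed text corpus"],
        "data_cutoff": "2023-06",
    },
    "training_cost": {
        "gpu_hours": 1_500_000,            # illustrative figure
        "estimated_cost_usd": 25_000_000,  # illustrative figure
    },
    # Downstream services disclose which models they run on, e.g. a
    # therapy app would list the model behind its chat feature.
    "powered_by": [
        {"service": "example-therapy-app", "uses_model": "example-model-1"},
    ],
}

if __name__ == "__main__":
    import json

    # Print the manifest as JSON, the form in which it might be published.
    print(json.dumps(disclosure, indent=2))
```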

Most of all though we need to have a more intellectually honest conversation about risks and how to mitigate them without introducing even bigger problems. We cannot keep suggesting that these are simple questions and that people must pick a side and get on with it.

Posted: 15th November 2023
Tags:  ai artificial intelligence safety regulation open source

Weaponization of Bothsidesism

One tried and true tactic for suppressing opinions is to slap them with a disparaging label. This is currently happening in the Israel/Gaza conflict with the allegation of bothsidesism, which goes as follows: you have to pick a side, anything else is bothsidesism. Of course nobody likes to be accused of bothsidesism, which is clearly bad. But this is a completely wrong application of the concept. Some may be repeating this allegation unthinkingly, but others are using it as an intentional tactic.

Bothsidesism, aka false balance, is when you give equal airtime to obvious minority opinions on a well-established issue. The climate is a great example, where the fundamental physics, the models, and the observed data all point to a crisis. Giving equal airtime to people claiming there is nothing to see is irresponsible. To be clear, it would be equally dangerous to suppress any contravening views entirely. Science is all about falsifiability.

Now in a conflict, there are inherently two sides. That doesn’t at all imply that you have to pick one of them. In plenty of conflicts both sides are wrong. Consider the case of the state prosecuting a dealer who sold tainted drugs that resulted in an overdose. The dealer is partially responsible because they should have known what they were selling. The state is also partially responsible because it should decriminalize drugs or regulate them in a way that makes safety possible for addicts. I do not need to pick a side between the dealer and the state.

I firmly believe that in the Israel/Gaza conflict both sides are wrong. To be more precise, the leaders on both sides are wrong and their people are suffering as a result. I do not have to pick a side and neither do you. Don’t let yourself be pressured into picking a side via a rhetorical trick.

Posted: 11th November 2023
Tags:  politics

Israel/Gaza

I have not personally commented in public on the Israel/Gaza conflict until now (USV signed on to a statement). The suffering has been heartbreaking and the conflict is far from over. Beyond the carnage on the ground, the dialog online and in the street has been dominated by shouting. That makes it hard to want to speak up individually.

My own hesitation was driven by unacknowledged emotions: Guilt that I had not spoken out about the suffering of ordinary Palestinians in the past, despite having visited the West Bank. Fear that support for one side or the other would be construed as agreeing with all its past and current policies. And finally, shame that my thoughts on the matter appeared to me as muddled, inconsistent and possibly deeply wrong. I am grateful to everyone who engaged with me in personal conversations and critiqued some of what I was writing to wrestle down my thoughts over the last few weeks, especially my Jewish and Muslim friends, for whom this required additional emotional labor in an already difficult time.

Why speak out at all? Because the position I have arrived at represents a path that will be unpopular with some on both sides of this conflict. If people with views like mine don’t speak, then the dialog will be dominated by those with extremely one-sided views contributing to further polarization. So this is my attempt to help grow the space for discussion. If you don’t care about my opinion on this conflict, you don’t have to read it.

The following represents my current thinking on a possible path forward. As always that means it is subject to change despite being intentionally strongly worded.

  1. Hamas is a terrorist organization. I am basing this assessment not only on the most recent attack against Israel but also on its history of violent suppression of Palestinian opposition. Hamas must be dismantled.
  2. Israel’s current military operation has already resulted in excessive civilian casualties and must be replaced with a strategy that minimizes further Palestinian civilian casualties, even if that entails increased risk to Israeli troops (there is at least one proposal for how to do this being floated now). If there were a ceasefire-based approach to dismantling Hamas that would be even better and we should all figure out how that might work.
  3. Immediate massive humanitarian relief is needed in southern Gaza. This must be explicitly temporary. The permanent displacement of Palestinians is not acceptable.
  4. Israel must commit to clear territorial lines for both Gaza and the West Bank and stop its expansionist approach to the latter. This will require relocating some settlements to establish sensible borders. Governments need clear borders to operate with credibility, which applies also to any Palestinian government (and yes, I would love to see humanity eventually transcend the concept of borders but that will take a lot of time).
  5. A Marshall Plan-level commitment to a full reconstruction of Gaza must be made now. All nations should be called upon to join this effort. Reconstruction and constitution of a government should be supervised by a coalition that must include moderate Islamic countries. If none can be convinced to join such an effort, that would be good to know now for anyone genuinely wanting to achieve durable peace in the region.

I believe that an approach along these lines could end the current conflict and create the preconditions for lasting peace. Importantly it does not preclude democratically elected governments from eventually choosing to merge into a single state.

All of this may sound overly ambitious and unachievable. It certainly will be if we don’t try and instead choose more muddling through. It will require strong leadership and moral clarity here in the US. That is a tall order on which we have a long way to go. But here are two important starting points.

We must not tolerate antisemitism. As a German from Nürnberg I know all too well the dark places to which antisemitism has led time and time again. The threat of extinction for Jews is not hypothetical but historical. And it breaks my heart that my Jewish friends are removing mezuzahs from their doors. There is one important confusion we should get past if we genuinely want to make progress in the region: Israel is a democracy and deserves to be treated as such. Criticizing Israeli government policies isn’t antisemitic, just like criticizing the Biden administration isn’t anti-Christian, or criticizing the Modi government isn’t anti-Hindu. And yes, I believe that many of Israel’s historic policies towards Gaza and the West Bank were both cruel and ineffective. Some will argue that Israel is an ethnocracy and/or a colonizer. One can discuss the potential implications of this for policy. But if what people really mean is that Israel should cease to exist, then they should come out and say that and own it. I strongly disagree.

We must not tolerate Islamophobia. We also have to protect citizens who want to practice Islam. We must not treat them as potential terrorists or terrorist supporters on the basis of their religion. How can we ask people to call out Hamas as a terrorist organization when we readily accept mass casualties among Muslims (not just in the region but also in other places, such as the Iraq war), while also not pushing back on people depicting Islam as an inherently hateful religion? And for those loudly invoking the Second Amendment, how about also supporting the First, including for Muslims? I have heard from several Muslim friends that they frequently feel treated as subhuman. And that too breaks my heart.

This post will likely upset some people on both sides of the conflict. There is nothing of substance that can be said that will make everyone happy. I am sure I am wrong about some things and there may be better approaches. If you have read something that you found particularly insightful, please point me to it. I am always open to learning and plan to engage with anyone who wants to have a good faith conversation aimed at achieving peace in the region.

Posted: 30th October 2023
Tags:  israel gaza

We Need New Forms of Living Together

I was at a conference earlier this year where one of the topics was the fear of a population implosion. Some people are concerned that with birth rates declining in many parts of the world we might suddenly find ourselves without enough humans. Elon Musk has on several occasions declared population collapse the biggest risk to humanity (ahead of the climate crisis). I have two issues with this line of thinking. First, global population is still growing and some of those expressing concern are veiling a deep racism where they believe that areas with higher birth rates are inferior. Second, a combination of ongoing technological progress together with getting past peak population would be a fantastic outcome.

Still, there are people who would like to have children but are not in fact having them. While some of this is driven by concern about where the world is headed, a lot of it is a function of the economics of having children: it is expensive not just in dollar terms but also in time commitment. At the conference one person suggested that the answer is to bring back the extended family as a widely embraced structure. Grandparents, the argument goes, could help raise children, and as an extra benefit this could help address the loneliness crisis among older people.

This idea of a return to the extended family neatly fits into a larger pattern of trying to solve our current problems by going back to an imagined better past. The current tradwife movement is another example of this. I say “imagined better past” because the narratives conveniently omit much of the actual reality of that past. My writing here on Continuations and in The World After Capital is aimed at a different idea: what can we learn from the past so that we can create a better future?

People living together has clear benefits. It allows for more efficient sharing of resources. And it provides company which is something humans thrive on. The question then becomes what forms can this take? Thankfully there is now a lot of new exploration happening. Friends of mine in Germany bought an abandoned village and have formed a new community there. The Supernuclear Substack documents a variety of new coliving groups, such as Radish in Oakland. Here is a post on how that has made it easier to have babies.

So much of our view of what constitutes a good way of living together is culturally determined. But it goes deeper than that, because over time culture is reflected in the built environment, which is quite difficult to change. Suburban single family homes are a great example of this, as are high-rise buildings in the city without common spaces. The currently high vacancy rates in office buildings may provide an opportunity to build some of these out in ways that are conducive to experimenting with new forms of coliving.

If you are working on an initiative to convert offices into dedicated space for coliving (or are simply aware of one), I would love to hear more about it.

Posted: 18th September 2023
Tags:  living coliving

Low Rung Tech Tribalism

Silicon Valley’s tribal boosterism has been bad for tech and bad for the world.

I recently criticized Reddit for clamping down on third party clients. I pointed out that having raised a lot of money at a high valuation required the company to become more extractive in an attempt to produce a return for investors. Twitter had gone down the exact same path years earlier with bad results: undermining the third party ecosystem ultimately resulted in lower growth and engagement for the network. This prompted an outburst from Paul Graham, who called it a “diss” and added that he “expected better from [me] in both the moral and intellectual departments.”

Comments like the one by Paul are a perfect example of a low rung tribal approach to tech. In “What’s Our Problem” Tim Urban introduces the concept of a vertical axis of debate which distinguishes between high rung (intellectual) and low rung (tribal) approaches. This axis is as important as, if not more important than, the horizontal left-versus-right axis in politics or the entrepreneurship/markets versus government/regulation axis in tech. Progress ultimately depends on actually seeking the right answers, and only the high rung approach does that.

Low rung tech boosterism shows again and again how tribal it is. There is a pervasive attitude of “you are either with us or you are against us.” Criticism is called a “diss” and followed by a barely veiled insult. Paul has a long history of such low rung boosterism. The same was true of criticism of other iconic companies such as Uber and Airbnb. For example, at one point Paul tweeted that “Uber is so obviously a good thing that you can measure how corrupt cities are by how hard they try to suppress it.”

Now it is obviously true that some cities opposed Uber because of corruption / regulatory capture by the local taxi industry. At the same time there were and are valid reasons to regulate ride hailing apps, including congestion and safety. A statement such as Paul’s doesn’t invite a discussion; instead it serves to suppress any criticism of Uber. After all, who wants to be seen as corrupt or allied with corruption against something “obviously good”? Tellingly, Paul never replied to anyone who suggested that his statement was too extreme.

The net effect of this low rung tech tribalism is a sense that tech elites are insular and believe themselves to be above criticism, with no need to engage in debate. The latest example of this is Marc Andreessen’s absolutist dismissal of any criticism or questions about the impacts of Artificial Intelligence on society. My tweet thread suggesting that Marc’s arguments were overly broad and arrogant promptly earned me a block.

In this context I find myself frequently returning to Martin Gurri’s excellent “Revolt of the Public.” A key point that Gurri makes is that elites have done much to undermine their own credibility, a point also made in the earlier “Revolt of the Elites” by Christopher Lasch. When elites, who are obviously benefiting from a system, dismiss any criticism of that system as invalid or “Communist,” they are abdicating their responsibility.

The cost of low rung tech boosterism isn’t just a decline in public trust. It has also encouraged some founders’ belief that they can be completely oblivious to the needs of their employees or their communities. If your investors and industry leaders tell you that you are doing great, no matter what, then clearly your employees or communities must be wrong and should be ignored. This has been directly harmful to the potential of these platforms, which in turn is bad for the world at large which is heavily influenced by what happens on these platforms.

If you want to rise to the moral obligations of leadership, then you need to find the intellectual capacity to engage with criticism. That is the high rung path to progress. It turns out to be a particularly hard path for people who are extremely financially successful as they often allow themselves to be surrounded by sycophants both IRL and online.

PS A valid criticism of my original tweet about Reddit was that I shouldn’t have mentioned anything from a pitch meeting. And I agree with that.

Posted: 3rd July 2023
Tags:  politics tech criticism

Artificial Intelligence Existential Risk Dilemmas

A few weeks back I wrote a series of blog posts about the risks from progress in Artificial Intelligence (AI). I specifically argued that we are facing not just structural risks, such as algorithmic bias, but also existential ones. There are three dilemmas in pushing the existential risk point at this moment.

First, there is the potential for a “boy who cried wolf” effect. The more we push right now, if (hopefully) nothing terrible happens, the harder existential risk from artificial intelligence will be dismissed for years to come. This of course has been the fate of the climate community going back to the 1980s. With most of the heat from global warming to date having been absorbed by the oceans, it has felt like nothing much is happening, which has made it easier to disregard subsequent attempts to warn of the ongoing climate crisis.

Second, the discussion of existential risk is seen by some as a distraction from focusing on structural risks, such as algorithmic bias and increasing inequality. Existential risk should be the high order bit, since we want to have the opportunity to take care of structural risk. But if you believe that existential risk doesn’t exist at all or can be ignored, then you will see any mention of it as a potentially intentional distraction from the issues you care about. This unfortunately has the effect that some AI experts who should be natural allies on existential risk wind up dismissing that threat vigorously.

Third, there is a legitimate concern that some of the leading companies, such as OpenAI, may be attempting to use existential risk in a classic “pulling up the ladder” move. How better to protect your perceived commercial advantage than to get governments to slow down potential competitors through regulation? This is of course a well-rehearsed strategy in tech. For example, Facebook famously didn’t object to much of the privacy regulation because they realized that compliance would be much harder and more costly for smaller companies.

What is one to do in light of these dilemmas? We cannot simply be silent about existential risk. It is far too important for that. Being cognizant of the dilemmas should, however, inform our approach. We need to be measured, so that we can be steadfast, more like a marathon runner than a sprinter. This requires pro-actively acknowledging other risks and being mindful of anti-competitive moves. In this context I believe it is good to have some people, such as Eliezer Yudkowsky, take a vocally uncompromising position because that helps stretch the Overton window to where it needs to be for addressing existential AI risk to be seen as sensible.

Posted: 26th June 2023
Tags:  ai artificial intelligence risk

Power and Progress (Book Review)

A couple of weeks ago I participated in Creative Destruction Lab’s (CDL) “Super Session” event in Toronto. It was an amazing convocation of CDL alumni from around the world, as well as new companies and mentors. The event kicked off with a two-hour summary and critique of the new book “Power and Progress” by Daron Acemoglu and Simon Johnson. There were eleven of us charged with summarizing and commenting on one chapter each, with Daron replying after every 3-4 speakers. This was the idea of Ajay Agrawal, who started CDL and is a professor of strategic management at the University of Toronto’s Rotman School of Management. I was thrilled to see a book given a two-hour intensive treatment like this at a conference, as I believe books are one of humanity’s signature accomplishments.


Power and Progress is an important book but also a deeply problematic one. As it turns out, the discussion format provided a good opportunity for people both to agree with the authors and to voice criticism.

Let me start with why the book is important. Acemoglu is a leading economist, and so it is a crucial step for that discipline to have the book explicitly acknowledge that the distribution of gains from technological innovation depends on the distribution of power in societies. It is ironic to see Marc Andreessen dismissing concerns about Artificial Intelligence (AI) by harping on about the “lump of labor” fallacy at just the time when economists are soundly distancing themselves from that overly facile position (see my reply thread here). Power and Progress is full of historic examples of productivity innovations that resulted in gains for a few elites while impoverishing the broader population. And we are not talking about a few years here but many generations. The most memorable example is how agricultural innovation wound up making churches richer, funding ever bigger cathedrals, while the peasants suffered more than before. It is worth reading the book for these examples alone.

As it turns out I was tasked with summarizing Chapter 3, which discusses why some ideas find more popularity in society than others. The chapter makes some good points, such as persuasion being much more common in modern societies than outright coercion. The success of persuasion makes it harder to criticize the status quo because it feels as if people are voluntarily participating in it. The chapter also gives several examples of how, as individuals and societies, we tend to over-index on ideas coming from people who already have status and power, resulting in a self-reinforcing loop. There is a curious absence, though, of any mention of media – either mainstream or social (for this I strongly recommend Martin Gurri’s “Revolt of the Public”). But the biggest oversight in the chapter is that the authors themselves are in positions of power and status, and thus their ideas will carry a lot of weight. This should have been explicitly acknowledged.

And that’s exactly why the book is also problematic. The authors follow an incisive diagnosis with a whimper of a recommendation chapter. It feels almost tacked on, somewhat akin to the last chapter of Gurri’s book, which similarly excels at analysis and falls dramatically short on solutions. What’s particularly off is that “Power and Progress” embraces marginal changes, such as shifts in taxation, while dismissing more systematic changes, such as universal basic income (UBI). The book is over 500 pages long, and there are exactly two pages on UBI, which dismiss it with arguments that have lots of evidence against them from numerous trials in the US and around the world.

When I pressed this point, Acemoglu replied that they were just looking to open the discussion on what could be done to distribute the benefits more broadly. But the dismissal of more systematic change doesn’t read at all like the beginning of a discussion but rather like the end of it. Ultimately, while moving the ball forward a lot relative to prior economic thinking on technology, the book may wind up playing an unfortunate role in keeping us trapped in incrementalism, exactly because Acemoglu is so well respected and his opinion carries a lot of weight.

In Chapter 3 the authors write how one can easily be in “… a vision trap. Once a vision becomes dominant, its shackles are difficult to throw off.” They don’t seem to recognize that they might be stuck in just such a vision trap themselves, unable to imagine a society in which people are much more profoundly free than they are today. This is all the more ironic in that they explicitly acknowledge that hunter-gatherers had much more freedom than humanity has enjoyed in either the agrarian or the industrial age. Why should our vision for AI not be a return to more freedom? Why keep people’s attention trapped in the job loop?

The authors call for more democracy as a way of “avoiding the tyranny of narrow visions.” I too am a big believer in more democracy. I just wish that the authors had taken a much more open approach to which ideas we should be considering as part of that.

Posted: 25th June 2023
Tags:  book artificial intelligence progress

What’s Our Problem by Tim Urban (Book Review)

Politics in the US has become ever more tribal on both the left and the right. Either you agree with 100 percent of group doctrine or you are considered an enemy. Tim Urban, the author of the wonderful Wait But Why blog, has written a book digging into how we got here. Titled “What’s Our Problem,” the book is a full-throated defense of liberalism in general and free speech in particular.

As with his blog, Urban does two valuable things rather well: He goes as much as possible to source material and he provides excellent (illustrated) frameworks for analysis. The combination is exactly what is needed to make progress on difficult issues and I got a lot out of reading the book as a result. I highly recommend reading it and am excited that it is the current selection for the USV book club.

The most important contribution of What’s Our Problem is drawing a clear distinction between horizontal politics (left versus right) and vertical politics (low-rung versus high-rung). Low-rung politics is tribal, emotional, religious, whereas high-rung politics attempts to be open, intellectual, secular/scientific. Low-rung politics brings out the worst in people and brings with it the potential of violent conflict. High-rung politics holds the promise of progress without bloodshed. Much of what is happening in the US today can be understood as low-rung politics having become dominant.


The book spends, on a relative basis, a lot more time examining low-rung politics on the left, in the form of what Urban calls Social Justice Fundamentalism, than the same phenomenon on the right. That can be excused to a degree, because his likely audience is politically left and already convinced that the right has descended into tribalism but not yet willing to admit that the same is the case on the left. Still, for me it somewhat weakened the overall effect; a more frequent juxtaposition of left and right low-rung politics would have been stronger in my view.

My second criticism is that the book could have done a bit more to point out that the descent into low-rung politics isn’t just the result of certain groups pulling everyone down but also of the abysmal failure of nominally high-rung groups. In that regard I strongly recommend reading Martin Gurri’s “Revolt of the Public” as a complement.

This leads me to my third point. The book is mostly analysis and has only a small recommendation section at the end. And while I fully agree with the suggestions there, the central one of which is an exhortation to speak up if you are in a position to do so, they fall short in an important way. We are still missing a new focal point (or points) for high-rung politics. There may indeed be a majority of people who are fed up with low-rung politics on both sides, but it is not clear where they should turn. Beginning to establish such a place has been the central goal of my own writing in The World After Capital and here on Continuations.

Addressing these three criticisms would of course have resulted in a much longer book and that might in the end have been less effective than the book at hand. So let me reiterate my earlier point: this is an important book and if you care about human and societal progress you should absolutely read What’s Our Problem.

Posted: 30th April 2023
Tags:  book review society progress politics
