RecreAIting DemocrAIcy
We all agree solutionism is bad, which is good, because we don't have solutions.
Last week I attended / spoke at the Second International Workshop for Recreating Democracy, hosted by living legend and newsletter trendsetter Bruce Schneier. The idea was “to pretend we’re forming a new country and don’t have any precedent to deal with … What could democracy look like if it were reinvented today?”
So, naturally, we spent a large chunk of our time talking about AI.
(My own mini-talk was actually not about AI — rather, he said noncontentiously, it was about what academia/policy people could and should learn from Silicon Valley — but even when I avoid the subject, it seems like I can’t avoid the subject.)
The talks and discussions were all interesting … and the subtext when you read the room even more so. I found it a pretty easy room to read because almost everyone was like me, i.e. liberal-leftist. (I may consider myself slightly right of cohorts around me, but, I mean, I’m a Canadian who lives in Berkeley.) There was some reckoning of the risks of a) liberal-leftist groupthink and b) turning a blind eye to the fact that far leftists are just as bad as far rightists, if currently less dangerous — but really not a lot of reckoning, and almost entirely from one person.
Most remarkable of all was the generation gap. Almost without exception, every Zoomer / younger Millennial in the room was very skeptical indeed of representative democracy as we know it, and interested in trying something very different — e.g. citizens’ assemblies selected by sortition (i.e. by lottery) to support … and/or, eventually supplant … representative democracy. Today, please. If not sooner.
Also notable was that everyone felt something ranging from “profound skepticism” through “a deep belief that coercive regulation is urgently required” all the way to “barely contained eruptive fury” re the tech industry. I know, I know, not news. But what infuriated them so was mostly … social media … which felt, I hate to say it, amusingly quaint.
The tech industry doesn’t really care about social media any more. Elon Musk does, but even his fans in tech mostly really wish he didn’t. Mark Zuckerberg seems far more into AI and his quixotic metaverse. More generally, seven years on from the hysterical claims that Russia used Facebook to steal the 2016 election, social media feels old, tired, stale, and — modulo those hawks who want TikTok banned — anything but dangerous. To us in tech, that is. To academics, journalists, and policy people, it seems it’s still an awfully touchy subject.
This is one reason why, to quote one of the few there with a foot in both worlds, “The gap between DC and Silicon Valley has never been this wide.” And that gap becomes a chasm when it comes to AI. “On one side, ‘this will literally change everything’; on the other, ‘it’s just another hype cycle.’” This can only end well! He said mordantly.
But honestly the biggest gap between tech and academia/policy/government doesn’t seem to be their theories about social media, or AI; it’s the disparity between their theories of change. From what I can tell, the academic/policy version seems to be:
Write a good paper.
Get it published in a good journal.
Hope that people notice it’s good.
Hope that people keep talking about it.
Hope it eventually gets adopted into a policy proposal.
Hope that proposal is eventually officially espoused by an actual party.
Hope it is then actually introduced for codification.
Hope that, after a long, gruelling process with many pitfalls, it’s codified into law.
Hope that said law is then actually funded, staffed, implemented, enforced, and not overturned by the courts or a subsequent election.
That said, it is important to note one alternative theory of change: “In times of crisis, people look for ideas that are lying around. It’s our job to ensure good ideas are lying around.” This is very true! And by itself justifies this kind of event!
…But either way, the academic/policy theory of change implies a whole lot of waiting for the right wave of change — people talked about this explicitly, as do works like This Is An Uprising — and, in the interim, climbing the professional ladder so as to better surf the tsunami when it hits. There was literally talk about how today is such an exciting time because so much of the existing establishment is retiring or dying out. Apparently policy advances one funeral or retirement party at a time.
The tech industry’s theory of change is far, far simpler, at least for any changes which can be fit into capitalism’s Procrustean bed:
Build something new.
Hope that people want it. If not, return to step 1.
Take their money/investment, grow exponentially, and transform people’s lives — not one hotly-contested polity at a time, but simultaneously across the globe.
Do not, and I cannot stress this enough, patiently wait for your idea to be selected as A Good Policy Proposal and/or for The Right Wave Of Administrative Change.
You can see why the policy people view the tech people as reckless. And you can see how tech became the primary engine of change in the world, while academia and government became … not. (They used to be! In the Manhattan Project and Great Society days. They still are, in some nations. I’m not sure if the theory of change is/was different in those cases, or just works a whole lot faster.)
The defense of the policy theory of change is that it is safer, healthier, and better for ordinary people. The problem with this defense is that it’s nonsense. If you disagree, I encourage you to start writing a history of soberly considered, carefully introduced policies that turned out to be utterly disastrous and catastrophically harmful. I encourage you to have entire reams of paper to hand.
The problem is you never really know how any idea, tech or policy, will work. Every new idea, when exposed to society at scale, is always an anthropological experiment; and most experiments fail, no matter how brilliant their inventors, because of unforeseeable emergent properties. The tech approach works not because tech people are smarter but because as a group they run many experiments simultaneously, and learn from them, so as to better inform their next wave of experiments. The policy approach is far too slow for that.
So to my mind you need both tech and policy, simultaneously: tech’s job is to experiment rapidly and broadly and sometimes wackily; policy’s is to rein in those experiments which are clearly going wrong. Viewed through that lens, tech is certainly doing its job! Policy, he understated drily, is not.
…Which brings us full circle back to AI, which is, now that nobody in the Valley particularly cares about social media any more, the field in which the tension between tech and policy is now most fraught. You don’t have to be an AI doomer to be pessimistic about some of its potential consequences. (I’m an optimist, but I have a pessimist’s theory of mind.) And you don’t have to be e/acc to think that if AI is a technology that accelerates all progress, then weighing it down with chains could be, in terms of opportunity cost, the most harmful policy decision of all time.
There is at least one thing both sides agree on: policy is not currently doing its job. (Though personally I think the recent executive order on AI wasn’t awful.) The disconcerting thing is that a lot of tech people think it shouldn’t. But you can kind of see why? Of late that job has been done so incredibly clumsily and badly — see e.g. cookie popups, or GDPR, or Section 702 — that no policy is widely seen as preferable to any policy.
This is pretty crazy! Policy, by which I mean regulations, by which I mean government, has brought us things like seat belts, unleaded gasoline, childhood vaccinations, automobile insurance, public pensions, public holidays, and (in civilized nations) universal health care, to name only a few, all of which are extremely good for the world and its people. How did things get so bad that many smart people, some of them not even libertarians, are now reflexively opposed to even the idea of regulation?
That is a larger and more disconcerting question, probably out of scope for a newsletter that theoretically focuses on AI. So I’ll end by saying that, as an optimist, I grudgingly agree that no-to-minimal AI regulations and policies are probably for the best, for the foreseeable future. But I only agree because I, too, have basically lost all faith and hope in the implementation of intelligent and timely tech policy on either side of the Atlantic … and I really wish I hadn’t. Maybe in ten years’ time this will be something else AI can help fix? Let’s hope.