Why we must contemplate life in a post-AGI world
Plus why healthy epistemic reasoning habits are penalised to the detriment of society
Here are my initial priors, posted on X, on how labour markets will change in a post-AGI world:
“Moravec's paradox means manual labour and the trades are less likely to be automated. Jobs in alignment and prompting, and those relating to empathy and creativity, will likely increase in both number and wages.
If labour's share of Y falls, capital's share rises (the two sum to one), and with Δ²Y > 0, output growth accelerates. AI agents also reduce the fixed costs of launching startups. In the worst-case scenario, where AGI automates most jobs, I expect many more entrepreneurs and a decline in the number of firms. Firms arise from transaction costs, as Coase argued, so the result pushes us closer to the Pareto frontier, even if the transition is not cost-free. Those are my priors for life in the worst-case scenario.
Far more likely, humans retain a comparative advantage in many occupations, low substitutability between tasks means human labour is still required, and new occupational classes arise. On the second point, it's worth noting that AI can complement rather than substitute for many roles.
Entry-level jobs will erode, however, increasing intergenerational inequality, with political economy implications. Young adults are already turning to the radical left in Britain. Hopefully YIMBYism grows in popularity as the concerns of younger generations gain more traction.”
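To unpack the factor-share step in those priors: labour and capital shares of output sum to one, so a falling labour share mechanically implies a rising capital share, while Δ²Y > 0 is the further claim that output growth accelerates. A minimal sketch of the accounting (the symbols w, r, s_L, and s_K are my shorthand, not from the original post):

```latex
% Output Y is exhausted by payments to labour (wL) and capital (rK):
s_L \equiv \frac{wL}{Y}, \qquad s_K \equiv \frac{rK}{Y}, \qquad s_L + s_K = 1
% Hence if automation lowers s_L, then s_K = 1 - s_L must rise;
% \Delta^2 Y > 0 is the further claim that output growth accelerates.
```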
AI researchers and those working at the top labs seem to agree: AGI is arriving faster than we think, and is arguably already here with Mythos, which would explain its restricted rollout. I place the odds of AGI being widely commercially available from 2030 at roughly 60%.
The bottlenecks arise from alignment concerns, which favour caution in implementation and rollout, alongside complementarities that currently protect senior roles. Yet we must increasingly contemplate a world in which these bottlenecks disappear. Entire professions, many of them prestigious (law, programming, investment banking), will be eliminated and present status hierarchies rewritten. Those with conventional career paths will lose out relative to outsider nonconformists and those with more turbulent histories. Raw IQ, at least the portion orthogonal to established social networks, will matter more in hiring in a world where AI can estimate our cognitive ability from our social media output alone. It is therefore plausible that equality of opportunity, and intergenerational social mobility, will increase. As with all technological revolutions, there will be winners and losers (at least in relative terms), with those at the top of existing hierarchies losing relative to the more disadvantaged. Ultimately, the result will be a more meritocratic and efficient world.
Alignment is a vital issue, and I agree with Séb Krier et al. that it's an institutional and incentive problem, which is where I depart from most of the EA and rationalist community. With clever prompting, one can get an LLM or agent to deviate from its alignment training. My p(doom) has now increased from the sub-1% superforecaster average to 5–6%, close to the median for AI researchers. Although I'm not a doomer, it's increasingly apparent that many of the Yudkowsky and LessWrong predictions are materialising, which strengthens the hypothesis that they're correct.
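To make that update explicit: moving from the 1% superforecaster prior to 5–6% corresponds to a Bayes factor of roughly six in favour of the doom hypothesis. A worked sketch, where the factor of six is my illustrative assumption rather than a measured quantity:

```latex
% Prior p(doom) = 1% implies prior odds of 1:99.
% Let E = "several LessWrong-style predictions have materialised".
\frac{P(\mathrm{doom}\mid E)}{P(\neg\mathrm{doom}\mid E)}
  = \frac{P(E\mid\mathrm{doom})}{P(E\mid\neg\mathrm{doom})}\times\frac{1}{99}
  \approx 6 \times 0.0101 \approx 0.061
% Converting posterior odds back to a probability:
P(\mathrm{doom}\mid E) \approx \frac{0.061}{1.061} \approx 5.7\%
```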
To add some clarity, a p(doom) of 5–6% means doom is a non-negligible possibility, yet still a worst-case scenario. Similar probabilities could have attached to numerous technological advances and military events throughout human history; arguably we've faced far worse existential-risk odds before. My immense AI optimism therefore remains, and anything less still strikes me as somewhat vulgar. Yet AI alignment is by far our most pressing economic, social, and political issue, and it's not even close! Conventional partisan and ideological alignments seem wholly outdated for the nascent revolution that is materialising.
P(doom) is perhaps less anchored to base rates than other forecasting questions. I have drawn on a range of economic models, and priors informed by the history of structural cultural and technological change, to arrive at a reasonable point estimate. A weakness of my writing is that I often don't state my probabilities or bet according to them (although prediction markets aren't legal in Britain). I'll make more effort to do so going forward.
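For what betting according to one's stated probability would look like in practice, here is a minimal sketch using the Kelly criterion. The kelly_fraction helper is hypothetical, and the 60% figure is just my AGI-from-2030 estimate above; the even-money odds are invented for illustration:

```python
def kelly_fraction(p: float, odds: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a binary bet.

    p:    your probability that the event occurs
    odds: net fractional odds offered (1.0 = even money)
    f* = (p * (odds + 1) - 1) / odds
    """
    return (p * (odds + 1) - 1) / odds

# Illustration: I put widely available commercial AGI from 2030 at ~60%;
# suppose a market offers even money (implied probability 50%).
f = kelly_fraction(p=0.60, odds=1.0)
print(f"Stake {f:.0%} of bankroll")  # Stake 20% of bankroll
```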
However, aside from an endogenous bias–variance tradeoff in our objective functions, the social penalty for a precise, well-calibrated probabilistic forecast whose predicted event never materialises seems worse than for vague and verbose punditry that is falsified. Robin Hanson got dunked on for this today. What strikes me as impressive is that Tyler and Hanson were onto the possibility of AI's rapid development towards AGI back when most considered it limited to the realm of esoteric science fiction.
More generally, the implicit social penalty on inaccurate forecasts discourages prediction and updating based on well-calibrated likelihoods. This generates substantial negative externalities for the quality of our discourse. Prediction markets are socially valuable precisely because they create the opposite incentives. Anyone taking the first (and courageous) step of quantifying their point estimates with precise numbers and ranges deserves credit. I think the defining legacy of the LessWrong rationalist crowd is the idea that an army of autistic autodidacts can achieve the same epistemic status and legitimacy as established domain experts. Perhaps with a pinch of grandiosity, I label this the triumph of Bayesian epistemology. Might a post-AGI world sharpen the returns on healthy epistemic habits relative to conventional credentialled paths?
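Proper scoring rules make this concrete: they reward exactly the precise, calibrated forecasts that social incentives punish. A minimal sketch comparing a sharp forecaster with a vague 50/50 pundit on the same events, where the forecasts and outcomes are invented purely for illustration:

```python
def brier(forecasts, outcomes):
    """Mean Brier score (lower is better). A proper scoring rule:
    honest, well-calibrated probabilities minimise expected loss."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened
sharp    = [0.9, 0.2, 0.7, 0.8, 0.1]  # precise, calibrated forecaster
vague    = [0.5] * 5                  # hedged "could go either way" punditry

print(brier(sharp, outcomes))  # 0.038: rewarded for sticking their neck out
print(brier(vague, outcomes))  # 0.250: vagueness scores strictly worse here
```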

