Foreword
Archie Chaudhury, June 2024
The day is November 29, 2022. The world of “tech”, composed of snazzy journalists with Ray-Ban glasses and comb-overs, sweatshirt-draped college dropouts, product managers with inflated titles, Web3 connoisseurs with NFT profile pictures, and B2B SaaS founders working remotely from a Caribbean island, was collectively reeling from the financial fallout of the demise of public darling FTX while publicly decrying the Effective Altruism movement and everything it stood for. Twenty-four hours later, the so-called vibe had shifted, FTX was forgotten, and the aforementioned crowd had collectively turned its attention to an online chatbot which, as fate would have it, would set in motion perhaps the greatest technological innovation in human history.
Artificial intelligence has gone from a far-fetched pipe dream that existed only in science fiction, to a legitimate field of study, to a tool used by millions of people all over the world, all in a little over 60 years. OpenAI has been transformed from a capital-burning non-profit research lab into one of the most valuable companies on the planet. ChatGPT has become a colloquial term, now part of the vocabulary of many an over-caffeinated college student. AI has influenced movements that have led to the creation of new companies, reinvigorated entire industries, and caused monumental shifts in the fortunes of a few privileged companies and their fortunate shareholders.
Any discussion of AI would be without merit if it omitted the “doomerism vs. accelerationism” debate. Metrics such as p(doom), which captures a given leader’s estimate of the likelihood that a superintelligent AI destroys humanity, have been developed to chart the different positions in this debate. However, despite the innate philosophical attraction of imagining a Detroit: Become Human future (play this game if you haven’t yet) in which our metallic creations, wrought of silicon, rebel against us and bring about our ultimate downfall, this is not the biggest argument to be had in AI. Indeed, there is a larger, more influential battle brewing, one which may very well shape how this technology is developed, governed, and regulated.
Earlier this month, Leopold Aschenbrenner, a former OpenAI employee who worked on superalignment, published a set of long-form essays collectively titled “Situational Awareness”. In it, he argues that the development of Artificial General Intelligence (AGI), which he predicts will arrive in less than four years, represents the largest national security hazard since the atomic bomb, and that in order to contain this threat, the US Government must ultimately nationalize AI development to protect crucial research from its enemies (China, North Korea, the usual suspects). Situational Awareness is a prediction of the not-so-far future: a future in which the development of AI has accelerated to the point that it is developing itself, a future in which the government must take control of AI or risk the equivalent of allowing “Sam Altman and Elon Musk to operate their own nuclear warheads”. Its intended audience is neither members of government nor the world of tech; rather, it is the users of AI, the stakeholders of a democratic government who will ultimately, to some degree, decide how this technology is built and governed. Thus, each individual post is significantly more digestible than traditional technical works, and is meant to be read as an informal commentary (with some hard evidence) rather than a scientific essay accessible only to a few.
This work is meant partly as a response and counter-argument to the ideology espoused in Situational Awareness, while simultaneously introducing a new line of philosophical thought that complements the existing “e/x” movements that live on the Internet: Rational Accelerationism. Rational Accelerationism is rooted in the notion that the entirety of humanity, not just a small set of labs or government bureaucrats, can be trusted to move technology, science, and cognition forward in a responsible and ethical manner, and that it should be independent, open-source scientists, not elderly statesmen, who stand at the forefront of governing artificial intelligence and other innovations. It is meant to speak for the outsiders: those who, while not a direct part of the AI labs or the revolution in San Francisco, have meaningfully engaged with this technology for the better part of a year, and have advocated for open-source, regulation-free development in all realms of technological innovation for even longer. This is an important discussion; while the accelerationists and their altruistic counterparts may very well be set in their beliefs, motives, and decisions, the broader public is not, and it deserves to be well-informed before a decision is made on its behalf.
Table of Contents
This essay, like Situational Awareness, is organized into multiple independent subsections, grouped into three broader parts. Part A delves into why it is unlikely that we will develop human-level intelligence any time soon. Part B discusses why it is unlikely that the US Government, or any world power, will nationalize AI, and why unnecessary regulation may stifle real progress toward fundamentally improving the human condition. Part C introduces Rational Accelerationism, a movement advocating for the rapid but safe development of AI and adjacent technologies. Each subsection can be read as an independent post; Part C opens with a short introduction, followed by a standalone manifesto introducing a new school of thought on the development of frontier technologies.