Rational Accelerationism

C. Introducing Rational Accelerationism

Some form of artificial general intelligence is likely to arrive if we continue expending massive amounts of physical resources and talent on AI research and development. However, accelerating AI does not have to come with a potential dystopia. In this final section, we introduce rational accelerationism, a new philosophy on the development of AI that has arguably become the view of the silent majority.

Effective Accelerationism. Doomerism. Decentralized Accelerationism. “X” Accelerationism. There is a plethora of opinions on how humanity should approach the development of new frontier technologies. Like the traditional socioeconomic spectrum of old, they are marked by a schism reminiscent of capitalism versus socialism: on one end of the spectrum, effective accelerationists believe in unrestricted techno-capitalism; on the other, “decels” believe that technology will soon cause the death of humanity as we know it.

However, just as in politics, philosophy, or economics, the vast majority of participants in AI are not extremists. They don’t believe in technological advancement without any checks, nor do they believe that we need a proverbial Big Brother in the form of government regulation to make sure we don’t develop something straight out of dystopian science fiction. They are passionate about developing AI for the betterment of all: to eliminate manual labor and other rote tasks so that humanity can focus on creativity and expression. They believe that AI will one day be able to cure maladies and produce mathematical research that rivals that of our brightest, but they also recognize that we are far from needing to prepare for such a reality.

Rational Accelerationism is perhaps the best summation of the philosophical motivation behind this piece. Situational Awareness raises an interesting and pertinent point about the future of AI; Rational Accelerationism is a philosophy for why humanity, be it in the form of corporate scientists or anonymous developers with anime profile pictures, can be trusted to build this future without the need for government oversight.

The following manifesto summarizes this school of thought and, at the same time, serves as a parting thought for this piece.

The Rational Accelerationist Manifesto

AGI is coming. There is no denying that. A small proportion of technologists are already readying themselves for a post-AGI future, focusing on developing creative skills and output rather than intellectual or technological ability.

The rapid proliferation and adoption of AI has also raised numerous ethical questions about whether we are in reality creating a dystopia rather than a utopia that betters the human condition as a whole. The most cautious among us have called for government-mandated pauses on, or oversight of, AI development, taking it out of the hands of independent companies and startups, believing that civilizational collapse is in store if we are not responsible.

Yet, if there is one thing that has been made abundantly clear over the past century, it is that humanity can ultimately be trusted to produce abundance, to produce positive outcomes, when dealing with new technologies. The net positive of the internet, which could easily have become a vector for unrestrained cyberwarfare and espionage, has far outweighed its negatives. Investment in nuclear energy and other forms of alternative energy generation has laid the groundwork for sustainable energy consumption. Space exploration, which could easily have ended in millions of dollars lost, has instead seen private corporations putting forth a vision for a future in which humanity becomes a multi-planetary species.

All of these advancements have come through individual corporations and technology firms operating independently, relying on the government only for support and guidance. The development of AI is not only the technological movement with the greatest potential to improve the human condition, but also the one capable of assembling the largest pool of independent talent. Engineers, scientists, policy experts, and economists are, for the first time, all coming together to work on the same ideas.

From the anonymous developer you work with on Discord to the entrepreneur doing the rounds on Forbes, we have shown our capacity to act rationally, to behave in a way that preserves us. Are we going to make mistakes as we head toward the development of AGI? Probably. Is that better than the alternative? Without a doubt. Safety and alignment are extremely important to get right: they are not just divisions within a company. However, the frontier labs currently leading the AI movement have not only shown a commitment to addressing such issues, but have spent actual capital on them. It should be up to the free market, not our regulatory overlords, to ensure that we build technologies that are both beneficial and safe. Our desire to survive is rooted in the most powerful instinct in the known universe, the human survival instinct, and it will guide how we handle AGI, just as it guided the development of all the technologies that have gotten us here in the first place.