Effective Accelerationism. Doomerism. Decentralized Accelerationism. “X” Accelerationism. There is a plethora of opinions regarding how humanity should approach the development of new frontier technologies. Like the traditional socioeconomic spectrum of old, they are marked by a schism between capitalism and socialism: on one end of the spectrum, effective accelerationists believe in unrestricted techno-capitalism; on the other, “decels” believe that technology will soon cause the death of humanity as we know it.
However, just as in politics, philosophy, or economics, the vast majority of participants in AI are not extremists. They do not believe in technological advancement without any checks, nor do they believe that we need a proverbial Big Brother in the form of government regulation to ensure we don’t develop something straight out of science fiction. They are passionate about developing AI for the betterment of all: to eliminate manual labor and other rote tasks so that humanity can focus on creativity and expression. They believe that AI will one day cure maladies and produce mathematical research rivaling that of our brightest minds, but they also recognize that we are far from needing to prepare for such a reality.
Rational Accelerationism is perhaps the best summation of the philosophical motivation behind this piece. Situational Awareness raises an interesting and pertinent point about the future of AI; Rational Accelerationism is a philosophy for why humanity, whether in the form of corporate scientists or anonymous developers with anime profile pictures, can be trusted to pursue this future without the need for government oversight.
The following manifesto summarizes this school of thought and, at the same time, serves as a parting thought for this piece.