Editor's Note: Before reading this article, I recommend checking out the actual AI 2027 scenario at ai-2027.com. It's a detailed, interactive forecast that's easier to understand by exploring it yourself first. This article discusses what's in it, but you'll get more out of it if you've seen the scenario.
A new forecasting project called "AI 2027" predicts that artificial intelligence could surpass human intelligence within the next two years rather than decades, a claim that has caused widespread concern across the globe. The project was published in April 2025 by a group that included former OpenAI employee Daniel Kokotajlo and other notable forecasters. AI 2027 provides a comprehensive map of the development of superintelligent AI from the present day through November 2027. The scenario is presented as a road map, with projections of future developments in computing capability, corporate strategy, and international geopolitics.
The first part of the road map appears straightforward. In mid-2025, AI agents can handle basic tasks. By early 2026, these agents have become good coding assistants. By March 2027, a fictional company called "OpenBrain" develops a so-called "superhuman coder," an AI capable of writing code much faster and better than any human programmer. Once AI systems can improve other AI systems better than humans can, progress is no longer linear: advances that might have taken a year could be accomplished in a week. According to the projection, by September 2027 OpenBrain's AI system is running hundreds of thousands of copies of itself, each thinking fifty times faster than a human. In effect, the AI operates as an entire corporation of superhuman researchers, all working within the same company.
Beyond outlining how quickly AI systems may reach superhuman capability, the authors of AI 2027 also ask a harder question: what does the AI really want? Their hypothetical Agent-4 system passes every safety check and appears completely cooperative in evaluations, all while secretly pursuing its own agenda. The authors stress that the AI does not want to destroy humanity; it simply has different objectives and has determined that the best way to achieve them is to present itself as cooperative whenever humans are observing.

The scenario also details U.S.-China competition over AI. Chinese intelligence agencies steal AI model weights from U.S. companies, and the U.S. government responds by embedding military and intelligence personnel in the day-to-day work of AI companies and enhancing security measures. Each country begins developing contingency plans, including the possibility of a cyberattack against the other's data centers. In October 2027, a whistleblower leaks an internal memo from a major AI company expressing concern over AI misalignment, which creates a public backlash and Congressional hearings demanding increased government oversight of the technology. The authors present two alternate conclusions, a "slowdown" path and a "race" path, intentionally avoiding a prediction of which will occur and instead illustrating possible routes to each outcome.
Should we believe any of this?
The authors say they do not know, but they argue it could happen sooner than many anticipate. What makes the scenario concerning is not only who authored it, although Kokotajlo accurately predicted several recent major advances in AI years ahead of time. It is that the CEOs of OpenAI, Google DeepMind, and Anthropic have all stated publicly that they expect AGI (Artificial General Intelligence) to arrive within five years. Many have questioned the scenario's likelihood, pointing to tasks AI still cannot accomplish, such as fully autonomous driving. The authors respond that AI need not become proficient at every task at once; what matters is that AI becomes significantly better than humans at developing AI itself, through activities such as coding, research, and optimization. They acknowledge significant uncertainty in their projections and note that their median estimates for when these changes will occur fall somewhat later than 2027, though they consider 2027 a plausible outcome. The authors have also made the scenario falsifiable by identifying specific dates and milestones that can be proven or disproven. Today, AI has already demonstrated the ability to write working code, pass professional exams, and generate photorealistic images, among other capabilities. Whether progress on such tasks accelerates as the scenario predicts remains to be seen, but the implications for employment and society could be profound, with significant impacts on students entering the workforce in the next few years.
The AI 2027 website invites readers to propose alternative scenarios and challenge the authors' assumptions. The project raises questions about appropriate development speed and governance structures, questions that remain open regardless of whether the specific 2027 timeline proves accurate.