Sam Altman on what could go wrong with AI in 2026
Also: Why startups should not worry about OpenAI, a hiring slowdown at the company, and more highlights from this week’s “town hall” meeting with the CEO.
Sam Altman is “very nervous” about how the world currently tackles the biorisks from AI, such as someone developing a new pathogen (e.g. an engineered virus) with the technology.
The OpenAI CEO said this during a livestreamed “town hall” meeting with users earlier this week, answering questions from the audience and the internet.
Specifically, Altman addressed the current approach of restricting who gets access to the models and setting up systems to prevent adversarial use.
“I don’t think that’s going to work for much longer and the shift that the world needs to make for AI security generally, and AI biosecurity in particular, is to move from a strategy of blocking to one of resilience.”
What he means by that, and other key highlights from the one-hour Q&A session, is below.
1) AI biorisks and mitigation in 2026
The question (paraphrased for brevity): how does biosecurity fit into OpenAI’s roadmap?
“There are many ways AI can go wrong in 2026. Certainly one of them that we are quite nervous about is bio,” Altman said.
He acknowledged the capabilities of current models in biology. For instance, GPT-5.2 scored 92% on the GPQA benchmark, which consists of “Google-proof” questions on biology, physics, and chemistry, written by PhDs.
As mentioned, Altman is proposing a strategy shift towards resilience. To explain what he means by that, he draws a parallel to fire, inspired by his co-founder Wojciech Zaremba.
“Fire did all these wonderful things for society, then it started burning down cities. We tried to restrict fire. … Then we got better at resilience to fire and we came up with fire codes and flame-resistant materials. Now we’re pretty good at that as a society. I think we need to think about AI the same way.”
OpenAI has talked to biological researchers and companies about what it takes to deal with novel pathogens, and while they report that AI can be part of the answer, it won’t be an entirely technical solution, Altman says.
“You will need the world to think about these things differently than we have been. I am very nervous about where things are, but I don’t see a path other than the resilience-based approach,” he said, adding:
“… if something goes visibly really wrong for AI this year, I think bio would be a reasonable bet for what that could be.”
2) AI will drive down costs
While the impact of the AI boom on the economy as a whole is still an unanswered question, Altman expects that we will soon see prices go down broadly, as the technology enables cheaper production of goods and services.
“Given the progress with work you can do in front of a computer, but also what looks like it will soon happen with robotics and a bunch of other things—we’re going to have massively deflationary pressure in the economy.”
3) Plummeting costs of OpenAI’s current frontier model
“I think we should be able to deliver GPT-5.2 xhigh-level intelligence by the end of 2027 for at least 100x less.”
Building apps with GPT-5.2 currently costs $14 per one million tokens (~750,000 words) of output, so a 100x reduction would bring that down to $0.14.
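The projected price drop works out as a quick back-of-the-envelope calculation (a sketch using the $14-per-million-output-tokens figure above; the ~100k-token example at the end is an illustrative assumption, not from the talk):

```python
# Sketch of the projected price drop, assuming the quoted
# $14 per 1M output tokens and Altman's "at least 100x less".
current_price_per_m_tokens = 14.00  # USD per 1M output tokens today
reduction_factor = 100              # "at least 100x less"

projected_price = current_price_per_m_tokens / reduction_factor
print(f"${projected_price:.2f} per 1M output tokens")

# Hypothetical example: generating roughly a novel's worth
# of text (~100,000 tokens) at the projected price.
tokens = 100_000
cost = projected_price * tokens / 1_000_000
print(f"~${cost:.4f} for {tokens:,} output tokens")
```

At that price, generating a hundred thousand tokens of output would cost about a cent and a half.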
The scope of OpenAI’s developer business is not publicly known, but last week Altman tweeted that it added $1 billion of revenue in the preceding month.
4) Software as evolving tools
With the cost of writing software going down, Altman expects that we will see more programs customized for each user.
“Maybe I want to use the same word processor every time, but I have a bunch of repeated quirks of how I use it and I would like the software to be increasingly customized—a static or a slowly evolving piece of software, but written for me. … That idea that our tools are constantly evolving and converging just for us seems like it’s going to happen.”
5) Why startups should not worry about OpenAI as a competitor
How should builders think about durability when a startup’s features can be replaced by a model update, an audience member asked. When OpenAI announced AgentKit, its tool for building agentic workflows, some speculated about the consequences for incumbents like n8n and Zapier. Are there business areas OpenAI will not enter?
“There have been many startups that have done things that in a perfect world we would have done sooner, but it was too late and people built up a real durable advantage. That will keep happening.”
(One can only guess which companies he is referring to. Cursor, Lovable, and Manus, perhaps?)
6) Current limitations of the models
“… There seems to be something about creativity, intuition, and judgment that we are not close to with the current generation of models.”
7) Doing more with fewer people
OpenAI has grown rapidly in headcount since the launch of ChatGPT, reportedly from a few hundred to 3,000 as of August last year. According to Altman, the increase will start to taper off.
“We’re going to keep hiring software developers, but for the first time—and I know every other company and startup is thinking about this too—we are planning to dramatically slow down how quickly we grow because we think we’ll be able to do so much more with fewer people.”
8) Assume 100x performance incoming
On what people can expect from OpenAI’s future models.
“Assume we will have a model that is 100 times more capable than the current model with 100 times the context length at 100x the speed of the 100x reduced cost, perfect tool calling, and extreme coherence. We’re going to get there.”