OpenAI’s ambitious efforts to develop GPT-5, its next-generation AI model, appear to be lagging behind schedule, according to a recent report from The Wall Street Journal. Despite significant investment, the advancements seen so far reportedly don’t justify the steep costs.
This aligns with earlier coverage from The Information, which noted that OpenAI may be reassessing its strategy because GPT-5 might not deliver the kind of transformative progress seen in previous iterations. The WSJ report, however, adds new details about the 18-month development of GPT-5, internally referred to as Orion.
The report indicates that OpenAI has conducted at least two large-scale training runs. These efforts, which involve processing vast amounts of data to refine the model, have run into difficulties. The first run progressed more slowly than anticipated, suggesting that a still larger training effort would be both time-consuming and expensive. While early results show some improvement over prior models, the gains are not yet significant enough to justify the high operational costs.
In addition to leveraging publicly available data and licensed datasets, OpenAI has reportedly taken a more hands-on approach. The company has employed individuals to generate original data, such as writing code and solving mathematical problems, and has also incorporated synthetic data produced by another internal model, o1.
OpenAI did not immediately comment on the matter, but the company previously confirmed that it would not release a model code-named Orion this year.