According to a new report in The Wall Street Journal, OpenAI is facing delays in developing its next major model, GPT-5, with results that do not yet justify the high costs.
This aligns with an earlier report in The Information, which suggests that OpenAI is exploring new strategies due to concerns that GPT-5 may not represent a significant advancement compared to previous models. The WSJ article provides additional insights into the 18-month development process of GPT-5, codenamed Orion.
OpenAI has reportedly conducted at least two large training runs aimed at improving the model by exposing it to vast amounts of data. The first run went more slowly than expected, suggesting that a larger run would be both time-consuming and costly. And while GPT-5 shows signs of outperforming its predecessors, it has not yet advanced enough to justify the cost of keeping the model running.
The WSJ also reports that, in addition to using publicly available data and licensing agreements, OpenAI has hired people to create new data by writing code or solving mathematical problems. The company is also using synthetic data generated by o1, another of its models.
OpenAI did not respond to a request for comment. The company previously said it would not release a model under the name Orion this year.