
OpenAI is set to release an AI product capable of reasoning, enabling it to solve complex problems in mathematics, coding, and science. This marks a significant step toward achieving human-like cognition in machines.
According to the Financial Times, the new AI models, called o1, are a result of rapid technological advancements in recent years. Companies like Google DeepMind, OpenAI, and Anthropic are racing to create more sophisticated AI systems.
The race is focused in particular on developing so-called agents: personalized bots designed to assist people with work, creation, or digital communication. OpenAI has announced that the new models will be available in ChatGPT Plus starting Thursday, aimed at scientists and programmers rather than the general public. The company claims the o1 models significantly outperform previous models such as GPT-4o on the qualifying exam for the International Mathematical Olympiad, scoring 83% compared with GPT-4o's 13%.
OpenAI’s CTO, Mira Murati, commented that these models open new possibilities for understanding how artificial intelligence functions.
“We can track the model’s thought process step by step,” Murati said.
Reinforcement Learning and Advanced Problem-Solving
The o1 models use a technique known as reinforcement learning to work through problems. Processing queries takes longer, which makes them more expensive to run than the GPT series, but their responses are more consistent and sophisticated.
Mark Chen, the lead researcher on the project, explained:
“The models evaluate different strategies while answering your query, and if they detect errors, they can correct them.”
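Neither Chen nor the article spells out how this training works under the hood. As a loose, hypothetical sketch of the reinforcement-learning idea he describes, the short Python example below rewards a toy agent for picking whichever problem-solving strategy tends to produce correct answers; the strategy names, success rates, and parameters are invented for illustration and are not OpenAI's actual system.

    import random

    # Toy illustration only (not OpenAI's training recipe): an epsilon-greedy
    # reinforcement-learning loop over hypothetical "reasoning strategies".
    # The agent samples a strategy, receives a reward when the attempt succeeds,
    # and gradually shifts toward strategies that earn reward.
    STRATEGIES = ["guess", "work_backwards", "step_by_step"]   # hypothetical names
    TRUE_SUCCESS = {"guess": 0.2, "work_backwards": 0.5, "step_by_step": 0.9}

    values = {s: 0.0 for s in STRATEGIES}   # estimated value of each strategy
    counts = {s: 0 for s in STRATEGIES}     # times each strategy was tried
    EPSILON = 0.1                           # exploration rate

    for attempt in range(2000):
        # Explore occasionally; otherwise exploit the best-known strategy.
        if random.random() < EPSILON:
            strategy = random.choice(STRATEGIES)
        else:
            strategy = max(values, key=values.get)

        # Reward is 1 if this attempt produced a correct answer, 0 otherwise.
        reward = 1.0 if random.random() < TRUE_SUCCESS[strategy] else 0.0

        # Incremental average: nudge the estimate toward the observed reward.
        counts[strategy] += 1
        values[strategy] += (reward - values[strategy]) / counts[strategy]

    print(values)   # "step_by_step" ends up with the highest estimated value

In this toy setup, the agent's estimate for the step-by-step strategy converges toward its higher success rate, which is the sense in which reinforcement learning favors approaches that correct earlier mistakes.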
Additionally, Murati mentioned that this series of models could introduce a new paradigm in search, facilitating better research and data retrieval, especially in applications like OpenAI’s SearchGPT tool.
The Path to Artificial General Intelligence (AGI)
Experts believe that reasoning and planning abilities are essential for achieving artificial general intelligence (AGI), meaning machines with cognitive abilities similar to humans. However, the scientific community stresses that current AI systems still lack full reasoning capabilities. Yoshua Bengio, a Turing Award recipient, said that demonstrated reasoning ability would allow AI systems to be consistent in their facts, arguments, and conclusions, marking a significant step toward machine autonomy.
On the other hand, Gary Marcus, a cognitive science professor at New York University, warns that previous similar claims have often crumbled under scientific scrutiny:
“Any new claims should be met with a degree of skepticism,” Marcus said.
Strengthened Safety Measures
OpenAI has also stated that it has strengthened its safety testing to keep pace with these advancements, and has granted AI safety institutes in the UK and the US early access to the model.
Advancements in this area will be crucial for the future of AI technology, according to experts. Aidan Gomez, CEO of the AI startup Cohere, noted that models that work on step-by-step reasoning and analysis have shown “dramatic” improvements in their capabilities.