Fears that large language models could develop dangerous abilities, such as reasoning and planning, are unfounded, according to a recent study by British and German researchers. Large language models like ChatGPT cannot learn autonomously or acquire new skills on their own, which means they do not pose an existential threat to humanity.
A study conducted by researchers at the University of Bath and the Technical University of Darmstadt reveals that large language models have only a superficial ability to follow instructions and excel at language proficiency; they lack the potential to master new skills without explicit guidance. This finding suggests these models remain controllable, predictable, and safe, and can therefore be used without significant safety concerns. The researchers nonetheless acknowledge that, like any technology, AI can be misused.
Thousands of Experiments
These models will likely generate more sophisticated language over time and become better at following detailed prompts. However, the study’s authors argue that it is unlikely they will develop complex reasoning skills. The research focused on the “emergent abilities” of large language models (LLMs): the researchers conducted a series of experiments to test LLMs’ ability to perform tasks they had not previously encountered.
LLMs can respond to questions about various social situations without explicit training or programming for those tasks. While previous research suggested this ability resulted from models “knowing” about social situations, the researchers demonstrated that it is the result of LLMs completing tasks through in-context learning (ICL), that is, by drawing on solved examples supplied in the prompt itself (see the sketch below).
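To illustrate what in-context learning looks like in practice, here is a minimal sketch in Python. The classification task, the example sentences, and the `build_icl_prompt` helper are illustrative assumptions, not part of the study; the point is that the solved examples travel inside the prompt, so no model weights are updated.

```python
# Minimal illustration of in-context learning (ICL): the model is not
# retrained; instead, solved examples are placed directly in the prompt
# and the model completes the pattern. Task and examples are invented
# for illustration and are not taken from the study.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (situation, label) pairs."""
    lines = ["Classify the social situation as 'polite' or 'impolite'.", ""]
    for situation, label in examples:
        lines.append(f"Situation: {situation}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Situation: {query}")
    lines.append("Label:")  # the model is expected to complete from here
    return "\n".join(lines)

if __name__ == "__main__":
    demos = [
        ("She thanked the host before leaving.", "polite"),
        ("He interrupted the speaker mid-sentence.", "impolite"),
    ]
    prompt = build_icl_prompt(demos, "They held the door open for a stranger.")
    print(prompt)  # this string would be sent to an LLM as-is
```

Because the “learning” here lives entirely in the prompt, removing the examples removes the ability, which is the distinction the researchers draw between ICL and genuinely emergent reasoning skills.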
Control of the Process
LLMs’ combination of instruction-following ability, memory, and linguistic skills explains both their capabilities and limitations. After conducting thousands of experiments, the researchers concluded that fears of large language models acquiring dangerous abilities, like reasoning and planning, are unfounded. Tests clearly show these models do not develop new complex reasoning skills.
“Our findings do not suggest that artificial intelligence poses no threat at all. Instead, we show that the supposed emergence of complex reasoning skills linked to certain threats lacks evidence. We can control the learning process. However, future research should focus on other risks these models pose, such as their potential use in creating fake news,” the researchers warn.
For more on how AI models like ChatGPT function, you can explore our article on how AI learns and processes information.