
Tech leaders call for pause of GPT-4.5, GPT-5 development due to ‘large-scale risks’

Generative AI has been moving at an unbelievable speed in recent months, with the launch of tools and bots such as OpenAI’s ChatGPT and Google’s Bard. Yet this rapid development is causing serious concern among seasoned veterans in the AI field — so much so that over 1,000 of them have signed an open letter calling on AI developers to slam on the brakes.

The letter was published on the website of the Future of Life Institute, an organization whose stated mission is “steering transformative technology towards benefitting life and away from extreme large-scale risks.” Among the signatories are several prominent academics and leaders in tech, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, and politician Andrew Yang.

The letter calls for all companies working on AI models more powerful than the recently released GPT-4 to immediately halt work for at least six months. This moratorium should be “public and verifiable” and would allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter says this is necessary because “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” Those risks include the spread of propaganda, the destruction of jobs, the potential replacement and obsolescence of human life, and the “loss of control of our civilization.” The authors add that the decision over whether to press ahead into this future should not be left to “unelected tech leaders.”

AI ‘for the clear benefit of all’


The letter comes just after claims were made that GPT-5, the next version of the tech powering ChatGPT, could achieve artificial general intelligence. If correct, that means it would be able to understand and learn anything a human can comprehend. That could make it incredibly powerful in ways that haven’t yet been fully explored.

What’s more, the letter contends that responsible planning and management surrounding the development of AI systems is not happening, “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Instead, the letter asserts that new governance systems must be created that will regulate AI development, help people distinguish AI-created and human-created content, hold AI labs like OpenAI responsible for any harm they cause, enable society to cope with AI disruption (especially to democracy), and more.

The authors end on a positive note, claiming that “humanity can enjoy a flourishing future with AI … in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.” Hitting pause on AI systems more powerful than GPT-4 would allow this to happen, they state.

Will the letter have its intended effect? That’s hard to say. There are clearly incentives for OpenAI to continue working on advanced models, both financial and reputational. But with so many potential risks — and with very little understanding of them — the letter’s authors clearly feel those incentives are too dangerous to pursue.
