ChatGPT could threaten 300 million jobs around the world

The meteoric rise of artificial intelligence (AI) tools like ChatGPT has fueled a wide range of fears, from an increase in undetectable propaganda to the spread of racist and discriminatory speech. Experts have also raised the alarm over possible job losses, and a new report lays out precisely how disastrous AI tools could be for employment.

According to Goldman Sachs, up to 300 million full-time jobs could be lost around the world as a result of the automation that ChatGPT and other AI tools could usher in. That’s as much as 18% of the global workforce.


The impact will be felt more keenly in advanced economies than in developing nations, partly because white-collar workers face far more of the risk than manual laborers do. The professions most exposed include lawyers and administrative workers, while physically demanding work such as construction should fare better.

The situation appears worrying in the United States and Europe, where the report estimates roughly two-thirds of all work will face some form of automation, while up to a quarter of all jobs could be handled entirely by AI.

A risk or an opportunity?


It isn’t all bleak. The report notes that since many jobs will be only partly impacted by AI, this work could be complemented by automation rather than being wholly replaced by it. Over the long term, the disruption caused by AI might help create new jobs and increase productivity in ways that other new technologies, like the electric motor and the personal computer, have done in the past.

That said, the report comes as over 1,000 scientists and business leaders signed an open letter calling for all development of AI models more advanced than GPT-4 to be paused for at least six months. This would allow the world to put safeguards in place to ensure AI tools are used “for the clear benefit of all.” Otherwise, the authors contended, artificial intelligence will “pose profound risks to society and humanity.”

What seems certain is that artificial intelligence could put huge numbers of jobs at risk. The question is whether that disruption will ultimately be a boost for workers — replacing tedious and repetitive work and opening up new job opportunities — or a threat that leaves everyone worse off. As the recent open letter warned, the frontiers of AI are largely unknown, with no guide to navigating their many potential perils.
