ChatGPT creator launches bug bounty program with cash rewards

ChatGPT isn’t quite so clever yet that it can find its own flaws, so its creator is turning to humans for help.

OpenAI unveiled a bug bounty program on Tuesday, encouraging people to locate and report vulnerabilities and bugs in its artificial intelligence systems, such as ChatGPT and GPT-4.

In a post on its website outlining details of the program, OpenAI said that rewards will range from $200 for low-severity findings to $20,000 for what it called “exceptional discoveries.”

The Microsoft-backed company said that its ambition is to create AI systems that “benefit everyone,” adding: “To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge.”

Addressing security researchers interested in getting involved in the program, OpenAI said it recognizes “the critical importance of security” and views it as “a collaborative effort,” adding: “By sharing your findings, you will play a crucial role in making our technology safer for everyone.”

With more and more people taking ChatGPT and other OpenAI products for a spin, the company is keen to quickly track down any potential issues to ensure the systems run smoothly and to prevent any weaknesses from being exploited for nefarious purposes. OpenAI therefore hopes that by engaging with the tech community it can resolve any issues before they become more serious problems.

The California-based company has already had one scare where a flaw exposed the titles of some users’ conversations when they should have stayed private.

Sam Altman, CEO of OpenAI, said after the incident last month that he considered the privacy mishap a “significant issue,” adding: “We feel awful about this.” It’s now been fixed.

The blunder became a bigger problem for OpenAI when Italy expressed serious concerns over the privacy breach and banned ChatGPT while it carries out a thorough investigation. The Italian authorities are also demanding details of the measures OpenAI intends to take to prevent a recurrence.

Trevor Mogg
Contributing Editor