AI research company OpenAI announced today the launch of a new bug bounty program that lets registered security researchers find vulnerabilities in its product line and get paid for reporting them through the Bugcrowd crowdsourced security platform.
As the company revealed today, the rewards are based on the reported issues' severity and impact, and they range from $200 for low-severity security flaws up to $20,000 for exceptional discoveries.
"The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure," OpenAI said.
"We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone."
However, while the OpenAI Application Programming Interface (API) and its ChatGPT artificial-intelligence chatbot are in-scope targets for bounty hunters, the company asked researchers to report model issues via a separate form unless they have a security impact.
"Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach," OpenAI said.
"To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model."
Other issues that are out of scope include jailbreaks and safety bypasses that users have been exploiting to trick the chatbot into ignoring the safeguards implemented by OpenAI engineers.
Last month, OpenAI disclosed a ChatGPT payment data leak that the company blamed on a bug in the open-source Redis client library, redis-py, used by its platform.
Because of the bug, ChatGPT Plus subscribers began seeing other users' email addresses on their subscription pages. Following a growing stream of user reports, OpenAI took the ChatGPT bot offline to investigate the issue.
In a post-mortem published days later, the company explained that the bug caused the ChatGPT service to expose chat queries and personal information for roughly 1.2% of Plus subscribers.
The exposed info included subscriber names, email addresses, payment addresses, and partial credit card information.
"The bug was discovered in the Redis client open-source library, redis-py. As soon as we identified the bug, we reached out to the Redis maintainers with a patch to resolve the issue," OpenAI said.
While the company didn't link today's announcement to this recent incident, the flaw might have been discovered earlier, and the data leak avoided, had OpenAI already been running a bug bounty program that let researchers test its products for security flaws.
Comments
Mahhn - 1 year ago
A major bug is that it is programmed with political agendas, as noted by several sources. That means it is not able to think; it is preprogrammed with the Fears and Agendas of humans. That, to me, is the most wrong/dangerous part of any supposed AI.
ctigga - 1 year ago
AI is driven both by the databases it draws upon for knowledge and the algorithm(s) it uses to deduce decisions (many of which are contextually chosen based on aforementioned databases and present state)
Like all other software, there is plenty of room for bias to slip in there ;)
I think of AI as being in a similar realm as search engines: great at searching the data that is permitted/available to be searched, but incapable of making accurate and reliable decisions without real intelligence (i.e. as bestowed upon humans by God)
(Even humans who have real intelligence routinely fail and don't learn from mistakes)
As usual, life is a mix of truth and lies. Separating the two most often requires more input than a software program has access to, for all but the most trivial of scenarios.
AI would likely work well for many trivial scenarios, but I wouldn't rely upon it for anything remotely important.