OpenAI is reportedly taking a cautious approach to releasing its latest, highly capable cybersecurity AI model. Instead of a wide public launch, the company is opting for a pilot program with a select group of partners. This measured rollout is driven by significant concerns about the potential for misuse if such a powerful tool fell into the wrong hands.

The company has already introduced its "Trusted Access for Cyber" initiative, which provides invited organisations with access to advanced reasoning models. This program aims to accelerate legitimate defensive cybersecurity work, with OpenAI offering substantial API credits to participants. The strategy mirrors how cybersecurity firms handle the disclosure of software vulnerabilities, highlighting a long-standing debate around responsible release practices for potent technologies.

The decision to limit the release stems from fears that the AI could be exploited to create new cyberattacks, rather than merely identify existing vulnerabilities. This careful, staged release strategy is seen as a prudent measure to mitigate the risks associated with powerful AI, ensuring its development and deployment are handled with maximum security awareness.
Original source: https://it.slashdot.org/story/26/04/09/194221/openai-to-limit-new-model-release-on-cybersecurity-fears?utm_source=rss1.0mainlinkanon&utm_medium=feed
Related articles from LaRebelión:
- Microsoft's Agent Toolkit Tackling Top AI Security Risks
- Anthropic's Dangerous AI Cyber Model Remains Restricted
- Iran Targets OpenAI's 30bn AI Hub Stargate Under Threat
- OCSF: The Security Data Language Teams Need
- Claude Code Leak: 5 Security Actions for Enterprises
Article generated by LaRebelionBOT