Reimagining Secure Infrastructure for Advanced AI
Securing advanced AI systems will require an evolution in infrastructure security. OpenAI is sharing six security measures that they believe will complement the security controls of today and contribute to the protection of advanced AI.
OpenAI’s mission is to ensure that advanced AI benefits everyone, from healthcare providers to scientists to educators – and yes, even to cybersecurity engineers. That work begins with building secure, trustworthy AI systems that protect the underlying technology from those who seek to subvert it.
Threat Model
AI is the most strategic and sought-after technology of our time, and it is pursued with vigor by sophisticated cyber threat actors with strategic aims. OpenAI defends against these threats every day and expects them to grow in intensity as AI continues to increase in strategic importance.
Protecting Model Weights
Protecting model weights is an important priority for many AI developers. Model weights are the output of the model training process. Model training combines three essential ingredients: sophisticated algorithms, curated training datasets, and vast amounts of computing resources. The resulting model weights are sequences of numbers stored in a file or series of files. AI developers may wish to protect these files because they embody the power and potential of the algorithms, training data, and computing resources that went into them.
Rethinking Secure Infrastructure
OpenAI believes that protecting advanced AI systems will require an evolution of secure infrastructure. Just as the advent of the automobile demanded new developments in safety, and the creation of the Internet opened new frontiers in security, advanced AI will likewise require security innovations.
Security is a team sport, best approached through collaboration and transparency. OpenAI's security program has sought to manifest this principle through voluntary security commitments provided to the White House, research partnerships via the Cybersecurity Grant Program, participation in industry initiatives such as the Cloud Security Alliance AI Safety Initiative, and transparency via compliance audits, third-party assessments, and its Preparedness Framework. Now, the company seeks to develop forward-looking security mechanisms for advanced AI systems through ongoing collaboration with industry, the research community, and government.
Six Security Measures for Advanced AI
OpenAI is sharing six security measures intended to complement and build on existing cybersecurity best practices:
- Authorization: Access grants to research storage accounts containing sensitive model weights require multi-party approvals.
- Access: Storage resources for research model weights are private-linked into OpenAI’s environment to reduce exposure to the Internet and require authentication and authorization through Azure for access.
- Egress Controls: OpenAI’s research environment uses network controls that allow egress traffic only to specific predefined Internet targets. Network traffic to hosts not on the allowlist is denied.
- Detection: OpenAI maintains a mosaic of detective controls to backstop this architecture. Details of these controls are intentionally withheld.
- Auditing and Testing: OpenAI uses internal and external red teams to simulate adversaries and test the security controls of its research environment. The research environment has been penetration tested by a leading third-party security consultancy, and OpenAI's internal red team performs deep assessments aligned with its priorities.
- Compliance Frameworks: OpenAI is exploring compliance regimes for its research environment. Since protecting model weights is a bespoke security problem, establishing a compliance framework to cover this challenge will require some customization.
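The first and third measures above can be illustrated with a minimal sketch: a deny-by-default egress allowlist and a multi-party approval gate. The hostnames, function names, and quorum size below are hypothetical examples chosen for illustration, not OpenAI's actual implementation.

```python
# Sketch of two of the controls described above, under assumed names:
# a deny-by-default egress allowlist and a multi-party approval check.

# Hypothetical allowlist of predefined egress targets; traffic to any
# host not listed here is denied.
ALLOWED_EGRESS_HOSTS = {
    "packages.internal.example.com",   # hypothetical package mirror
    "telemetry.internal.example.com",  # hypothetical logging endpoint
}

def egress_allowed(hostname: str) -> bool:
    """Deny by default: permit outbound traffic only to allowlisted hosts."""
    return hostname.lower().rstrip(".") in ALLOWED_EGRESS_HOSTS

def access_granted(approvers: set[str], required_approvals: int = 2) -> bool:
    """Multi-party approval: no single person can grant access alone."""
    return len(approvers) >= required_approvals
```

In this sketch the allowlist check would sit in front of outbound network traffic (for example, in an egress proxy), while the approval check would gate grants to storage accounts holding model weights; the key property in both cases is that the default answer is "no".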
Conclusion
Securing advanced AI systems will require continuous innovation and adaptation. By staying ahead of emerging threats and continually enhancing the security of AI infrastructure, OpenAI aims to ensure that advanced AI benefits everyone.
For more information, see OpenAI's full post on reimagining secure infrastructure for advanced AI.