Artificial Intelligence is unquestionably having its moment in the spotlight. AI has existed for decades, but the public release of Generative AI tools in late 2022 sparked global interest and ignited a discussion around guidelines and policy that continues today. During the last 18 months, there has been a surge of notable activity and milestones related to the development, adoption, and regulation of AI. These global efforts highlight the critical importance of the safety, security, and trustworthiness of AI systems.
At its core, Secure AI is about enabling trust and security while minimizing risk: enhancing decision making, protecting privacy, and combating broader legal, societal, and national security threats. To deliver the best outcomes, AI/ML capabilities need to be trained and enriched using a broad, diverse range of data sources. Foundational to these efforts are Privacy Enhancing Technologies (PETs), a family of technologies uniquely equipped to enable, enhance, and preserve the privacy of data throughout its lifecycle, allowing users to capitalize on the power of AI while mitigating risk and prioritizing protection.
Recognizing the transformative progress taking place around the world, we put together this Secure AI milestone infographic to help track and highlight the policies, reports, and regulatory actions that are shaping global outcomes. While this represents only a portion of the activity taking place, we believe these actions showcase the urgency and momentum we see taking hold across the Secure AI landscape. Below the graphic, you’ll find links to each milestone — we encourage you to take time to explore this important progress at a deeper level.
Secure AI Milestones
In January 2023, under direction from Congress, NIST released the Artificial Intelligence Risk Management Framework to help organizations incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
In October 2023, the release of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems promoted safe, secure, and trustworthy AI and provided voluntary guidance for actions by organizations developing advanced AI systems.
Also in October 2023, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence called on Congress and directed federal agencies to use available policy and technical tools, including Privacy-Enhancing Technologies (PETs) where appropriate, to protect privacy and combat broader risks.
The NCSC Guidelines for Secure AI System Development, released in November 2023, identified security as a core requirement throughout the lifecycle of an AI system, not just during development, and pointed to PETs as a means of mitigating risk to AI systems.
The Bletchley Declaration, signed by global leaders representing 28 countries at the AI Safety Summit in November 2023, notes the importance of trustworthy and responsible AI that accounts for privacy and data protection.
In February 2024, the U.S. Government created the NIST AI Safety Institute Consortium to support the development and deployment of safe and trustworthy AI, bringing together leaders from industry, civil society, and academia to set safety standards and protect the innovation ecosystem.
The European Union approved the EU Artificial Intelligence Act in March 2024, the world's first major set of regulatory ground rules governing artificial intelligence. The Act dictates that the right to privacy and to the protection of personal data must be guaranteed throughout the entire lifecycle of an AI system.
Also in March 2024, the United Nations General Assembly adopted a U.S.-led resolution on AI, the first ever standalone resolution to establish a global consensus approach to AI governance, encouraging member states to promote safe, secure, and trustworthy AI systems worldwide.
The Organisation for Economic Co-operation and Development (OECD) updated its AI Principles in May 2024 to guide AI actors in developing trustworthy AI, which requires trust in every aspect of personal data collection, management, and use: acquiring reliable data, using it responsibly, keeping it secure, and maintaining transparency about its use.
Later in May 2024, the Roadmap for AI Policy in the United States Senate identified areas of consensus that merit bipartisan consideration to harness the full potential of AI while prioritizing responsible innovation, including foundational trustworthy AI topics such as transparency, explainability, privacy, interoperability, and security.
Conclusion
While the last 18 months have been significant in shaping the global approach to Secure AI, we’re not done yet. The opportunity for leadership in the AI space remains immense, and our team at Enveil is proud to play a role in supporting these initiatives by delivering PETs-powered software solutions that advance Secure AI efforts in the commercial and public sector markets.
Powered by technology breakthroughs and informed by experience, Enveil exemplifies how PETs can enable the secure use of disparate, decentralized datasets for AI/ML applications. Organizations can securely enrich existing ML models by expanding the pool of data sources available for training and deployment. Encrypted models can be leveraged across jurisdictional, third-party, and organizational boundaries, even when those models are highly sensitive or proprietary, allowing organizations to securely and privately derive insights and improve outcomes. By securing the usage of data, Enveil allows users to capitalize on the power of AI while mitigating risks and prioritizing protection.
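To make the idea of computing on protected data concrete, here is a toy sketch of one PET building block, additively homomorphic encryption (a simplified Paillier scheme with insecurely small demo parameters, for illustration only; it does not represent Enveil's actual implementation). It shows how a party can combine encrypted values without ever seeing the underlying data:

```python
import math
import secrets

# Toy Paillier keypair with demo-sized primes (far too small for real use).
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1                      # standard choice that simplifies decryption
lam = math.lcm(p - 1, q - 1)   # private exponent
mu = pow(lam, -1, n)           # private scaling factor (valid when g = n + 1)

def encrypt(m: int) -> int:
    """Encrypt plaintext m (0 <= m < n) under the public key (n, g)."""
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:       # blinding factor must be invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Recover the plaintext using the private values (lam, mu)."""
    l = (pow(c, lam, n_sq) - 1) // n
    return (l * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so encrypted values can be summed by a party that cannot read them.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n_sq) == 42
```

Production PET stacks rely on vetted cryptographic implementations with large keys and additional safeguards; the point here is only the structure, namely that data remains encrypted while still being usable for computation.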
Contact our team to learn how the power of Privacy Enhancing Technologies can help your organization take advantage of this AI momentum in a way that is secure, compliant, trustworthy, and sustainable.