December 12, 2024

TechInformed: Trustworthy Artificial Intelligence

In this article, Enveil CEO Ellison Anne Williams examines how Privacy Enhancing Technologies, alongside regulatory action around the globe, are shaping secure, responsible, and trustworthy AI adoption.

Over the past two years, the hype around Artificial Intelligence (AI) has been unprecedented — and so has the resulting push to understand and adopt AI-powered business-enabling capabilities. Enterprise leaders across verticals want to harness the power of AI to improve efficiency, extract data-driven insights, and drive positive business outcomes. While AI tools are indeed on the path to delivering value to many organizations, the increased visibility around this quickly evolving category exposes another by-product of AI usage: elevated organizational risk.

Recognizing this risk has spurred several global regulators and lawmakers to action. One prominent example is the set of directives the US government has outlined to ensure a safe and sustainable path forward for government-facilitated AI efforts. On October 30, 2023, the White House issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, a framework designed to guide US federal agencies as they adopt AI-powered capabilities. The AI Executive Order was notable for its depth and clear directives, including specific calls to action for more than 20 agencies, with implementation deadlines spanning 30 to 365 days.

As we examine the AI landscape one year after this directive, the progress made, including the recently released National Security Memorandum on Artificial Intelligence, is encouraging. While these actions effectively establish the baseline expectation that privacy and security cannot be afterthoughts when adopting AI but must be intentionally integrated into AI systems from the beginning, one action-filled year is not the end of the story. It is important to continue this commitment to creating an environment where AI risks are acknowledged, privacy is respected, and security is foundational.

At its core, Secure AI is about enabling trust: enhancing decision-making and protecting privacy while minimizing the risks AI introduces. To deliver the best outcomes, AI/ML capabilities need to be trained and enriched using a broad, diverse range of data sources.

Privacy Enhancing Technologies

Foundational to these efforts are Privacy Enhancing Technologies (PETs), a family of technologies uniquely equipped to enable, enhance, and preserve data privacy throughout its lifecycle. PETs allow users to capitalize on the power of AI while mitigating risk and prioritizing protection.

Data is the foundation upon which AI is built, so it may seem obvious that the privacy and security challenges that have long been associated with data also extend to AI tools and workflows. Yet, within many organizations, the fog of AI hype seems to have hidden this reality. Since the surge of activity driven by the host of Generative AI tools that burst onto the scene in late 2022, numerous AI efforts have advanced without a passing thought to the security implications or long-term sustainability.

Responsible AI innovation requires action, and systemic action requires resources. Beyond the workstreams initiated by the AI Executive Order in the US, there remains a role for global governments to work alongside industry to support safe, responsible, trustworthy, and sustainable AI practices. Most recently, technical AI experts from nine countries and the European Union met in San Francisco to discuss international cooperation on AI safety science through a network of AI safety institutes.

Legislative and regulatory actions and the funding of tools and technologies that prioritize privacy and security further bolster global AI leadership. Dedicating resources to adopting technology-enabled solutions, such as PETs, will help ensure that the protection of models and workflows is foundational, safeguarding the vast amount of sensitive data used during AI training.

Reflecting this pursuit, the European Union approved the EU Artificial Intelligence Act in March 2024. This consumer-centric act mandated the right to privacy by stating that personal data protection must be guaranteed throughout the entire lifecycle of the AI system. “Measures taken by providers to ensure compliance with those principles may include not only anonymization and encryption but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves.”
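The Act's language about "bringing algorithms to the data" points at techniques such as homomorphic encryption, which allow a party to compute on data it can never read. As a rough illustration of the underlying idea (a toy sketch with tiny, hypothetical parameters, not any product's implementation), the Paillier cryptosystem lets two encrypted values be added without decrypting either one:

```python
import math

# Toy Paillier cryptosystem with tiny primes, for illustration only.
# Real systems use ~2048-bit primes and a vetted cryptographic library.
P, Q = 101, 103
N = P * Q              # public modulus
N2 = N * N
G = N + 1              # standard generator choice
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)  # lcm(p-1, q-1), private

def _L(x: int) -> int:
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)  # private decryption constant

def encrypt(m: int, r: int) -> int:
    """Encrypt m (0 <= m < N) with randomizer r coprime to N."""
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    return (_L(pow(c, LAM, N2)) * MU) % N

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a third party can compute a sum over data it cannot read.
c_sum = (encrypt(12, r=7) * encrypt(30, r=11)) % N2
print(decrypt(c_sum))  # 42, without either plaintext being exposed
```

The point of the sketch is the last two lines: the party holding `c_sum` performs useful work (addition) while the raw inputs never leave encrypted form, which is the property the Act's wording describes.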

The NCSC Guidelines for Secure AI System Development were released in the UK in November 2023. They identified security as a core requirement, not just in the development phase, but throughout the life cycle of the system and pointed to PETs as a means of mitigating risk to AI systems: “Privacy-enhancing technologies (such as differential privacy or homomorphic encryption) can be used to explore or assure levels of risk associated with consumers, users and attackers having access to models and outputs.”
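The differential privacy the NCSC guidelines mention can be sketched in a few lines: adding calibrated Laplace noise to a count lets an analyst learn an aggregate while bounding what the output reveals about any single record. This is a minimal sketch assuming a simple counting query with sensitivity 1, not any specific product's mechanism:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two iid exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon yields
    epsilon-DP for the released value.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [25, 31, 47, 52, 38, 29, 61, 44]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
# noisy is the true count (4) plus zero-mean noise with scale 1.0
```

Smaller epsilon means more noise and stronger privacy; the released value stays useful in aggregate while individual records are protected, which is the trade-off the guidelines ask system builders to reason about.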

Sustaining Momentum

As the AI market continues to expand exponentially, leaders must understand and support efforts to drive the responsible use of these technologies. That support includes crafting directives, policies, and budgets to advance Secure AI efforts. It also includes working with tech leaders, academics, and entrepreneurs who have a strong stake in advancing the adoption of these technologies in a secure and sustainable way. Prioritizing the safe, secure, and responsible use of AI and providing the funding necessary to sustain its advancement will ensure the impact of these transformative tools far into the future.

Read the full article at TechInformed here.

To learn more about the expanded value unlocked by Enveil, please schedule a meeting.
Enveil is a pioneering Privacy Enhancing Technology company protecting Data in Use. Enveil’s business-enabling and privacy-preserving capabilities change the paradigm of how and where organizations can leverage data to unlock value. Defining the transformative category of Privacy Enhancing Technologies (PETs), Enveil’s award-winning ZeroReveal® solutions for secure data usage, collaboration, monetization, and Secure AI protect the content of the search, analytic, or model while it's being used or processed. Customers can extract insights, cross-match, search, analyze, and leverage AI across boundaries and silos at scale without exposing their interests and intent or compromising the security or ownership of the underlying data. A World Economic Forum Technology Pioneer and Gartner Cool Vendor, Enveil is deployed and operational today, revolutionizing data usage in the global marketplace.