The AI Risk Summit + CISO Forum Summer Summit, hosted by SecurityWeek, will take place on June 25-26, 2024, at the Ritz-Carlton, Half Moon Bay. The annual event brings together security and risk management executives, artificial intelligence (AI) researchers, policy makers, software developers, and influential business and government stakeholders. Its goal is to drive the conversation forward with consequential dialogue and real-world examples that skip past the hype and provide meaningful guidance on risk management and cybersecurity in the age of artificial intelligence.
Enveil Founder and CEO Ellison Anne Williams will speak on the topic, "Delivering Secure AI — and How Privacy Enhancing Technologies Will Help Achieve It". The session will be featured as part of the event's AI Risk track.
Full session description: In this data-driven era, organizations are harnessing the power of artificial intelligence to unlock insights, gain operational efficiencies, and capture business advantage. Continued developments have raised awareness of the power of AI while also illuminating the foundational need for privacy and security. Over the past 18 months, the term Secure AI has come to the forefront as a label encompassing the need to consider the broad spectrum of challenges relating to AI/ML privacy, security, and risk — but labels hold little value if we don't understand the means to achieve what they describe. Privacy Enhancing Technologies (PETs), a family of technologies that preserve and enhance the security of data throughout its processing lifecycle, confront AI/ML vulnerabilities head-on by protecting models and safeguarding against cyber threats and other nefarious actions.
PETs uniquely enable secure data usage, allowing organizations to encrypt sensitive ML models, run and/or train them, and extract valuable insights while eliminating the risk of exposure. Users can securely leverage data sources across silos, jurisdictions, and organizational boundaries, even when working with sensitive indicators such as IP addresses and PII. PETs are specifically identified as a key enabler of Secure AI in recent global initiatives, including the White House AI Executive Order and the NCSC Guidelines for Secure AI System Development, which also aim to advance the use of these transformative technologies for AI use cases.
But why PETs? To deliver the best outcomes, AI/ML capabilities need to be trained, enriched, and leveraged over a broad, diverse range of data sources. When an ML model is trained on new and disparate datasets, it becomes smarter over time, resulting in increasingly accurate and valuable insights that were previously inaccessible. However, since these models encode the data over which they were trained, using them outside trusted environments raises significant concerns. This is where PETs are well-positioned to change the game.
With the push to capitalize on the value of AI/ML, organizations must understand both the risks and the protections available. This session will educate audience members on the data-related vulnerabilities within AI workflows and highlight why Privacy Enhancing Technologies are transforming Secure AI by securely and privately unlocking data value in ways that were not previously possible.
Key Takeaways:
- Understand how regulatory actions and market factors are driving awareness around AI privacy, security, trust, and risk.
- Gain awareness of how leveraging AI + PETs can expand an organization's ability to securely and privately extract value from data across silos and boundaries.
- Explore ways data-driven organizations are leveraging these transformative technologies to support privacy in AI analytics today.
Check out the full event agenda and register to attend.