Every technology has a maturation cycle; today we see Artificial Intelligence transitioning from parlor trick to candidate for serious applications. The federal government wants secure, reliable solutions to problems in the military and in healthcare.
Our guest today is Dr. Ellison Anne Williams, founder and CEO of Enveil, who holds a PhD in mathematics. She opens with a simple framing of AI security: a model is only as good as the data over which it is trained and used.
AI is exposed to large data sets, and models encode the data on which they were trained. That encoding can leave a model vulnerable to attack; she describes one such attack, called model inversion.
"If you are going to adopt it broadly, then you’ve got to make sure that you’re doing that in a safe, trustworthy, responsible, and secure fashion. And that’s really the case for our different kind of federal entities and organizations that deal with very, very sensitive types of data and situations on a daily basis."
Ellison Anne Williams, Founder and CEO of Enveil
Model inversion is a machine learning technique that examines a model’s outputs to infer personal information about the individuals whose data trained it.
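The attack can be sketched in miniature. The toy below (a hypothetical illustration, not Enveil's method or any specific published attack) trains a small classifier on "sensitive" synthetic data, then, using only the model's outputs and gradients, climbs its confidence surface to reconstruct an input representative of one class, leaking information about where that class's training data lives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensitive" training data: class 0 clustered near (-2, -2),
# class 1 clustered near (+2, +2).
X = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Train a tiny logistic-regression model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Model inversion: start from a blank input and ascend the model's
# confidence for class 1. The attacker never sees the training data,
# only the trained model.
x = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    x += 0.5 * (1 - p) * w  # gradient of log p(class 1) w.r.t. x

# x now sits deep in the region the model associates with class 1,
# revealing the direction in feature space of that class's training data.
print(x)
```

Real attacks target far richer models (the classic demonstrations reconstructed recognizable faces from facial-recognition models), but the mechanism is the same: the model's outputs carry an imprint of its training data.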
Dr. Williams points to Privacy Enhancing Technologies (PETs), a family of technologies that enable the secure use of Artificial Intelligence. With PETs, a model can be trained securely and privately to produce richer insights, and leaders can safely draw on a wider range of data sources.
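One member of the PET family is secure multiparty computation. As a minimal sketch (an illustrative example, not Enveil's product, which centers on homomorphic-encryption-based techniques), additive secret sharing lets several parties compute an aggregate over their sensitive values without any party ever seeing another's raw data:

```python
import random

PRIME = 2**61 - 1  # arithmetic is done in a prime field


def share(secret, n=3):
    """Split a secret into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


# Three parties each hold one sensitive value.
values = [42, 7, 100]

# Each party splits its value into shares and distributes one per party.
all_shares = [share(v) for v in values]

# Party i sums the i-th share of every value -- random-looking numbers only.
partials = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate, never the inputs.
total = sum(partials) % PRIME
print(total)  # 149
```

The same principle, computing on data without exposing it, is what lets sensitive data sources participate in training and analysis that would otherwise be off-limits.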
This interview offers an overview of technology that allows federal agencies handling sensitive information to leverage the speed and insight AI can provide.
Listen to the full podcast here.