Better pay and benefits are common reasons why employees across a wide range of industries decide to go on strike, and they certainly play a role in the Hollywood writers’ and actors’ strikes. But both disputes bring a newer twist to the bargaining table—AI. Writers are concerned that generative AI will be used as a less expensive way to create content. Actors have an additional concern—studios using the actors’ images to create artificial identities.
As we’ve seen with deepfakes, it’s not just a problem for the rich and famous or for CEOs and executives. As AI capabilities continue to evolve, they bring yet another security risk: the ability to take over identities. We’ve already seen the power of deepfakes in social engineering and how identity theft resulting from data breaches destroys lives and businesses. But this is just the tip of the iceberg.
To keep AI-generated identity from becoming a security risk, we need to better understand where the technology could cause the most problems and what it will mean for identity and access management in the future.
IAM Vs. ChatGPT
“In the AI/data wave, we need to think of identity as a sequence of decisions,” RSA CEO Rohit Ghai said during a talk at RSA Conference 2023. “Who should have access, why, when and to what? We need insight to inform those decisions; insight and meaning derived by reasoning over data.”
Identity and access management has long been the key to determining the who, why, when and what around authentication and authorization. Identity has always centered on three main components, according to Ghai—compliance, convenience and security. Identity management is one of the first lines of defense in protecting our networks and data.
But with generative AI—and whatever is coming next—security needs to become a bigger component of identity management, and that may mean that traditional IAM systems will have to adapt or become obsolete.
“The term ‘identity and access management platform’ is outdated,” said Ghai. “Access management and identity management are table-stakes features, just like making a phone call. Today, the core purpose of an identity platform is security.”
It’s time to move to an identity security platform.
Using AI to Track AI-Generated Identities
AI knows exactly how it can disrupt identity and access—just ask it. SDXCentral did just that, posing a question to ChatGPT about the identity risks that attackers armed with AI could create. In addition to phishing and taking on the identity of a trusted user to spread malware or breach sensitive information, ChatGPT’s response pointed out the greater effectiveness of social engineering campaigns where the voice, image and personality of a person are hijacked for nefarious intent.
Ghai also used ChatGPT to find out how AI can be used to improve identity security.
“Identity threat detection and response will be a key capability of an identity security platform,” was the chatbot’s answer. “Artificial intelligence will be needed to analyze threat intelligence and signals to detect threats on a timely basis and avoid false positives and alert fatigue.”
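The kind of signal analysis the chatbot describes—weighing multiple weak indicators before raising an alert, so analysts aren’t drowned in false positives—can be sketched in a few lines. The signals, weights and threshold below are illustrative assumptions, not any vendor’s actual scoring model:

```python
# A minimal sketch of risk-based identity threat detection.
# Signals, weights, and the threshold are hypothetical examples.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    new_device: bool         # device fingerprint not seen before
    impossible_travel: bool  # geo-velocity check failed
    off_hours: bool          # outside the user's normal window

# Weighted signals: no single weak indicator should page an
# analyst on its own -- that is how alert fatigue sets in.
WEIGHTS = {"new_device": 0.3, "impossible_travel": 0.6, "off_hours": 0.2}
ALERT_THRESHOLD = 0.7  # tuned against false-positive tolerance

def risk_score(event: LoginEvent) -> float:
    """Combine signals into a single 0..1 risk score."""
    score = 0.0
    if event.new_device:
        score += WEIGHTS["new_device"]
    if event.impossible_travel:
        score += WEIGHTS["impossible_travel"]
    if event.off_hours:
        score += WEIGHTS["off_hours"]
    return min(score, 1.0)

def should_alert(event: LoginEvent) -> bool:
    return risk_score(event) >= ALERT_THRESHOLD

# A new device at an odd hour stays below the threshold on its own;
# combined with impossible travel, it crosses it.
benign = LoginEvent("alice", new_device=True, impossible_travel=False, off_hours=True)
suspicious = LoginEvent("alice", new_device=True, impossible_travel=True, off_hours=False)
print(should_alert(benign), should_alert(suspicious))  # False True
```

In a real identity security platform, the weights would be learned from historical authentication data rather than hand-set, which is where the AI analysis Ghai describes comes in.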
Zero-trust will become more prevalent as a security model for protecting identity and access in an environment of AI identities generated by threat actors.
It’s easy to get hung up on the bad things that AI is used for and what the technology will be capable of in the future. It’s especially scary to think that it could create realistic clones of human identities, making it even more difficult to tell the difference between real and fake. But AI will do a lot of good, too, and for better identity security, we’re going to need ‘good AI.’
“If identity is the defender’s shield, then it is also the attacker’s target,” said Ghai. “In fact, identity is the most attacked part of the attack surface. Without AI’s help, identity is a sitting duck.”