Artificial Intelligence as a Force Multiplier
by Susie Spencer
Security professionals, identity managers, and IT operations teams are under growing pressure to make fast decisions based on a nonstop flow of alerts, reports, and initiatives to both enable and protect the business. According to a recent cloud security report[1], 59% of surveyed IT professionals say they receive more than 500 public cloud security alerts per day, and 38% receive more than 1,000 per day. Worse, almost half say that more than 40% of their alerts are false positives. Combined with a persistent shortage of skilled staff, this is not a tenable situation.
Organizations are increasingly turning to artificial intelligence (AI) and machine learning (ML) to address these challenges. The goal is not to replace valuable and scarce expertise, but rather to augment it by using algorithms as a “force multiplier” for overtaxed security analysts, identity management professionals, and incident responders, all of whom must sort through an increasing amount of information to do their jobs.
Gain Better Visibility, Drive Greater Efficiency with AI
In today’s world, an enormous amount of identity data is generated, from different users and systems to automated robotic processes. There is so much data that finding the anomalies is a bit like finding the proverbial needle in a haystack. When it comes to managing identities, enterprises ultimately want to do two things: reduce risk through better visibility and drive efficiency through automation. With AI and ML, organizations can gain new visibility and insight into the specific risks associated with user access. AI and ML can also help automate and streamline identity processes and decisions such as access requests, role modeling, and access certifications, driving greater efficiency across the organization. This combination will have a significant impact on how organizations manage, control, and secure all identities, both human and non-human.
Use Cases for AI and ML
Customers are using ML to improve their identity-related risk management posture. Through techniques such as peer group analysis and machine learning-driven processes, organizations can start to gain visibility into identity-based anomalies, whether from a permissions perspective or in terms of “out of policy” user activity. For example, professionals in the same peer group have similar job functions and, therefore, similar levels of access. Today, many organizations sort through permission settings and access logs manually to identify anything out of the norm, even though this is known to be extremely time-consuming and error-prone.
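To make the idea concrete, here is a minimal sketch of peer group analysis: flag any entitlement a user holds that few of their peers (people with the same job function) also hold. The user names, entitlement names, and threshold below are hypothetical illustrations, not SailPoint's implementation.

```python
from collections import Counter

def flag_outlier_entitlements(entitlements_by_user, peer_group, threshold=0.5):
    """Flag entitlements held by fewer than `threshold` of a user's peers.

    entitlements_by_user: dict mapping user -> set of entitlement names
    peer_group: dict mapping user -> list of peers (same job function)
    Returns a dict mapping user -> set of outlier entitlements.
    """
    outliers = {}
    for user, entitlements in entitlements_by_user.items():
        peers = peer_group.get(user, [])
        if not peers:
            continue  # no peer group to compare against
        # Count how many peers hold each entitlement
        counts = Counter()
        for peer in peers:
            counts.update(entitlements_by_user.get(peer, set()))
        # An entitlement is an outlier if it is rare among the user's peers
        rare = {e for e in entitlements if counts[e] / len(peers) < threshold}
        if rare:
            outliers[user] = rare
    return outliers

if __name__ == "__main__":
    users = {
        "alice": {"erp_read", "erp_write", "payroll_admin"},
        "bob": {"erp_read", "erp_write"},
        "carol": {"erp_read", "erp_write"},
    }
    peers = {
        "alice": ["bob", "carol"],
        "bob": ["alice", "carol"],
        "carol": ["alice", "bob"],
    }
    # alice's "payroll_admin" stands out: none of her peers hold it
    print(flag_outlier_entitlements(users, peers))
```

A flagged entitlement is not automatically revoked; it simply becomes a candidate for review, which is the kind of triage that is impractical to do by hand at scale.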
Using ML instead, companies can gain broader visibility into how employees access resources, and then regularly map that access to policy-based job functions and organizational alignment. If security analysts discover unusual activity, the activity and the user can be flagged immediately and “sand-boxed” to isolate any potentially malicious behavior that could constitute an incident or breach. Consider a real-world analogy: overseas travelers often use their credit cards outside their countries of origin, and they typically get a text message from the credit card company asking them to confirm the transaction because it deviates from their normal purchasing behavior.
Enterprise access monitoring follows the same principle. ML helps identify activity outside normal user behavior and flags it for analysis and review. This makes ML extremely valuable in a risk management process that verifies legitimate access and behavior against policy guidelines, whether in permission settings or patterns of usage.
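The credit-card analogy above can be sketched as simple behavioral baselining: learn each user's typical pattern (here, hour of login), then flag events that deviate sharply from it. The user names, event data, and z-score threshold are hypothetical; real systems model many more signals than hour of day.

```python
from statistics import mean, stdev

def flag_unusual_hours(events, z_threshold=2.0):
    """Flag access events whose hour-of-day deviates from the user's baseline.

    events: list of (user, hour) tuples, e.g. ("dana", 9)
    Returns a list of flagged (user, hour) tuples for review.
    """
    by_user = {}
    for user, hour in events:
        by_user.setdefault(user, []).append(hour)

    flagged = []
    for user, hours in by_user.items():
        if len(hours) < 3:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(hours), stdev(hours)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero for perfectly regular users
        for hour in hours:
            # Flag events far from the user's own typical behavior
            if abs(hour - mu) / sigma > z_threshold:
                flagged.append((user, hour))
    return flagged

if __name__ == "__main__":
    log = [("dana", h) for h in [9, 9, 9, 9, 9, 2]]
    # The 2 a.m. login deviates from dana's 9 a.m. baseline
    print(flag_unusual_hours(log))
```

As in the credit-card case, a flag prompts verification (is this legitimate travel, or a compromised credential?) rather than an automatic block.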
Just as companies can identify behavior that might represent high risk, security teams can also, over time, associate characteristics that represent low risk to the organization. This visibility and analysis helps determine which processes can be safely automated. For example, a newly hired professional in the accounting department will be granted access to a particular set of applications and resources. Policies can be created to quickly and efficiently grant that access to anyone filling a similar job function, as defined by HR and enabled through IT. This is a very common use case for automation and a real-world example of how AI-Driven Identity can support predictive analysis across the organization, reducing risk and making better use of scarce security resources.
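The new-hire example above amounts to policy-driven “birthright” provisioning: a mapping from HR-defined job function to a baseline set of entitlements, granted automatically. The job codes and application names below are hypothetical placeholders, not a real product configuration.

```python
# Access policy: job function (as defined by HR) -> baseline entitlements
BIRTHRIGHT_POLICY = {
    "accounting": {"erp_gl_read", "expense_portal", "email"},
    "engineering": {"source_control", "ci_dashboard", "email"},
}

def provision_new_hire(job_function, current_access=frozenset()):
    """Return the entitlements to grant for a given job function.

    Grants only the baseline items the user does not already hold;
    unknown job functions receive nothing (a safe default).
    """
    baseline = BIRTHRIGHT_POLICY.get(job_function, set())
    return baseline - set(current_access)

if __name__ == "__main__":
    # A new accounting hire with no existing access
    print(sorted(provision_new_hire("accounting")))
```

Because the policy encodes a low-risk, well-understood decision, automating it frees analysts to spend their time on the ambiguous, high-risk cases instead.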
SailPoint’s AI-Driven Identity provides customers with the visibility and insight they need to understand and act on specific risks associated with user identity and access. Armed with this capability, security, operations, and IT teams can work together to create and scale company-wide governance controls that allow greater visibility and faster action when an identity or access anomaly is discovered, reducing overall risk. Moreover, AI/ML can enable the successful automation of critical yet low-risk functions, resulting in less time on task for technical resources and optimized productivity for the overall organization. With AI/ML at the foundation, organizations can automatically adapt to changing environments and stay ahead of security concerns.
[1] https://www.securitymagazine.com/articles/97260-one-fifth-of-cybersecurity-alerts-are-false-positive...