INDUSTRY INSIGHT

How behavior analytics can thwart insider threats

A recent Ponemon Institute study confirms the troubling news that insider threats are on the rise. Ponemon estimates that incidents attributed to insiders have risen 47% since 2018. Not only are the threats more prevalent, but the cost of an insider-caused breach is going up too. According to the study, the average cost of an insider breach rose 31% to $11.45 million. Clearly, this is not something to be ignored.

Who is doing all this damage? Ponemon attributes the acts to negligent insiders (62%), criminal insiders (23%) and credential insiders (14%). A credential insider is an external intruder who has gained access to a network by compromising a legitimate set of credentials, for example through phishing. The intruder assumes an insider’s identity and has all the access privileges that the real employee does.

Security professionals say that insider events are more difficult to prevent and detect than external attacks. This is largely because they don’t have the right tools at their disposal. Organizations tend to spend the lion’s share of their IT security budget on tools and resources designed to fight threats originating from outside the organization -- and these are simply the wrong tools to catch insiders in the act.

What’s needed to detect insider threats

Traditional prevention and detection systems that guard against external threats are largely ineffective in detecting and surfacing insider threats. Oftentimes, these systems are primed to look for indicators of compromise (IoCs) that an insider simply doesn’t need to use, such as excessive login attempts, geographical irregularities, web traffic with non-human behavior, or any number of other tactics, techniques, and procedures (TTPs) indicative of outsider attacks.
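
To see why, consider one classic external-threat IoC: a burst of failed logins. Below is a minimal, hypothetical sketch of such a rule in Python (the log format and threshold are invented for illustration); note that an insider logging in normally with valid credentials never trips it.

```python
from collections import Counter

# Hypothetical auth log: (username, success) tuples from a login service.
auth_log = [("attacker", False)] * 25 + [
    ("alice", True), ("bob", True), ("alice", True),
]

# Classic external-threat IoC rule: many failed logins for one account.
failures = Counter(user for user, ok in auth_log if not ok)
FLAG_THRESHOLD = 10  # invented threshold

for user, count in failures.items():
    if count >= FLAG_THRESHOLD:
        print(f"IoC alert: {count} failed logins for '{user}'")
```

An insider abusing privileges generates successful, authorized-looking events, so rules like this stay silent.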

The most prominent indicator of an insider attack is abuse of privileges -- doing things the employee doesn’t have legitimate permission to do. Detecting this behavior requires tools that look at the actions of users -- particularly those with elevated permissions, such as systems administrators, managers and executives -- and look for behaviors outside the range of permissible and normal activity.

One tool designed for this purpose is user behavior analytics. A UBA tool collects past and current data such as user and entity activity, user roles and groups, and account access and permissions from directory services. From that and other data, the tool establishes a baseline of normal activities for individuals and their peer groups. Then big data and machine learning are used to highlight deviations from these baselines. 
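
As a rough illustration of baselining, here is a minimal sketch assuming a per-user, per-day activity count and a simple standard-deviation threshold (a production UBA tool would use far richer features and models):

```python
import pandas as pd

# Hypothetical activity log: actions per user per day; "bob" spikes on day 29.
events = pd.DataFrame({
    "user":    ["alice"] * 30 + ["bob"] * 30,
    "day":     list(range(30)) * 2,
    "actions": [20 + d % 5 for d in range(30)]
               + [15 + d % 3 for d in range(29)] + [120],
})

# Baseline: each user's own mean and standard deviation of daily activity.
baseline = events.groupby("user")["actions"].agg(["mean", "std"])

# Flag days more than three standard deviations above the user's baseline.
scored = events.join(baseline, on="user")
scored["anomalous"] = scored["actions"] > scored["mean"] + 3 * scored["std"]
print(scored[scored["anomalous"]])  # surfaces bob's day-29 spike
```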

While this is a good start in understanding anomalous behavior that could indicate malicious (or unintentionally erroneous) activity, there are ways to further refine the data to help eliminate false positives and false negatives. 

A more effective approach combines UBA with in-depth intelligence about a user's identity attributes and network privileges. People often have multiple digital identities for the various systems they log into and applications they use. Each identity has entitlements associated with it. For example, a user may be allowed to change or update records in a customer database, but only if those customers are assigned to his sales team. He may not have the privilege to even view records of customers assigned to a different sales region.
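
A toy model of such scoped entitlements (the data model and all names are invented for illustration) might look like this:

```python
from dataclasses import dataclass

@dataclass
class Entitlement:
    action: str    # e.g. "update"
    resource: str  # e.g. "customer_record"
    scope: str     # e.g. the sales team the entitlement is limited to

# One hypothetical identity's entitlements, scoped to the east sales team.
entitlements = [
    Entitlement("update", "customer_record", scope="east-team"),
    Entitlement("view",   "customer_record", scope="east-team"),
]

def is_permitted(action: str, resource: str, scope: str) -> bool:
    """True only if some entitlement matches action, resource and scope."""
    return any(e.action == action and e.resource == resource and e.scope == scope
               for e in entitlements)

print(is_permitted("update", "customer_record", "east-team"))  # True
print(is_permitted("view",   "customer_record", "west-team"))  # False: other region
```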

Altogether, a person’s identities and privileges create a threat plane -- the set of places where data or information can be stolen or damaged in some way. Mapping that plane makes it possible to triangulate data from three important sources:

  • A user’s access rights and entitlements.
  • His current and past activities across all the accounts assigned to him.
  • The typical activities of his peer groups.

Applying machine learning to these datasets reveals the anomalies indicative of misuse of assigned privileges.  
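
As a sketch of the triangulation step, suppose each of the three sources has already been reduced to a per-user summary (all column names and numbers below are invented):

```python
import pandas as pd

# Three hypothetical per-user feeds, one per source listed above.
access   = pd.DataFrame({"user": ["alice", "bob"], "entitlement_count": [12, 45]})
activity = pd.DataFrame({"user": ["alice", "bob"], "records_touched": [40, 900]})
peers    = pd.DataFrame({"user": ["alice", "bob"], "peer_median_touched": [38, 42]})

# Triangulate: one feature row per user across all three sources.
features = access.merge(activity, on="user").merge(peers, on="user")

# One simple derived signal: activity far above the peer-group norm.
features["activity_ratio"] = features["records_touched"] / features["peer_median_touched"]
print(features)  # bob touches ~21x his peers' median -- worth a look
```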

Determining the risk of a user identity and its activities

Years of manually maintaining identity management systems have led to excessive access privileges being assigned to employees. As a result, workers -- or an attacker using a worker’s hijacked account -- often have the ability to move throughout the network and do more than should be permitted.

Organizations must strike a balance: the right access for the right users when they need it for their job, and no access when they do not need it. UBA can help in this regard.

To really understand a user identity, and to determine the risk of that identity as a threat plane, it's essential to collect relevant data from a variety of sources (a normalization sketch follows this list), including:

  • Identity management systems
  • Privileged account management systems
  • Directories
  • Log sources
  • Defense-in-depth systems
  • Intelligence sources
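
As a sketch of that normalization step, here is how two invented raw records -- one from a directory, one from a log source -- might be mapped onto a common schema before landing in the repository:

```python
from datetime import datetime, timezone

# Two hypothetical raw records with very different shapes.
directory_event = {"samAccountName": "alice",
                   "whenChanged": "20210118021400.0Z",
                   "change": "memberOf += Domain Admins"}
log_event = {"user": "alice", "ts": 1611023640,
             "msg": "file export: customers.csv"}

def normalize_directory(e):
    """Map a directory record onto the common schema."""
    ts = datetime.strptime(e["whenChanged"][:14], "%Y%m%d%H%M%S")
    return {"identity": e["samAccountName"],
            "timestamp": ts.replace(tzinfo=timezone.utc),
            "source": "directory", "detail": e["change"]}

def normalize_log(e):
    """Map a log record onto the same schema."""
    return {"identity": e["user"],
            "timestamp": datetime.fromtimestamp(e["ts"], tz=timezone.utc),
            "source": "log", "detail": e["msg"]}

# Normalized records can now share one repository for the analytics below.
for record in (normalize_directory(directory_event), normalize_log(log_event)):
    print(record)
```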

Once this data is collected, normalized and stored in a big data repository, it’s ready for machine learning to perform the analytics. The ML algorithms can look at every new transaction by a given identity and score it according to risk. Clustering and outlier-detection algorithms make suspicious behaviors stand out from benign activities.
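
A minimal sketch of that scoring step, assuming scikit-learn and a small invented feature matrix standing in for the big data repository:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features: [records touched, off-hours logins,
# distinct systems accessed]. The last row is the odd one out.
X = np.array([
    [40, 0, 3], [38, 1, 3], [42, 0, 4], [39, 0, 3],
    [41, 1, 3], [37, 0, 2], [900, 7, 19],
])

# Isolation forests are one common outlier-detection choice for this job.
model = IsolationForest(contamination=0.15, random_state=0).fit(X)

# score_samples is higher for normal points; negate it to get a risk score.
risk = -model.score_samples(X)
for row, r in zip(X, risk):
    print(row, round(float(r), 3))  # the [900, 7, 19] transaction scores highest
```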

For even more accurate analysis, the next step is to baseline a user’s behavior and compare it to his dynamic peer group -- the people who perform the same types of activities, have the same types of identities and hold the same privileges. This is more effective than simply comparing a user’s activities to the static groups he is assigned to in the company’s directory services because, as noted earlier, those group memberships and assigned privileges tend to be out of date.

Baselining behavior to dynamic peer groups ultimately reduces the likelihood of false positive alerts often seen with static peer group analysis.
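
One way to sketch dynamic peer grouping is to cluster users on behavior and privilege features rather than on directory groups (k-means on two invented features here; a real product would use many more dimensions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user features: [entitlement count, avg daily records touched].
users = ["alice", "bob", "carol", "dave", "erin", "frank"]
X = np.array([[12, 40], [14, 38], [13, 41],      # ordinary sales users
              [80, 300], [85, 310], [82, 500]])  # admins; frank runs hot

# Dynamic peer groups: clusters of similar behavior and privileges,
# discovered from the data instead of read from the org chart.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Compare each user's activity to the median of their dynamic peer group.
for g in set(groups):
    members = [i for i, gi in enumerate(groups) if gi == g]
    median = np.median([X[i][1] for i in members])
    for i in members:
        if X[i][1] > 1.5 * median:
            print(f"{users[i]} is far above peer-group {g} median ({median})")
```

Here frank is flagged against his fellow admins, not against the whole company -- activity that looks normal for an administrator overall can still be abnormal for his true peers.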

Add a self-audit for one more security measure

Any behavioral anomalies that surface from the processes outlined above are very likely to be true insider threats. Certainly, they would set off alerts to prompt investigation. This, in its own right, would constitute a strong insider threat detection program. But there is one more safeguard that provides the cherry on top: a user self-audit. 

Much like a credit card statement shows every transaction in a time period, individual users can be shown their own risk-ranked anomalous activities, identities, access rights, devices and other key data points via a web portal. When users themselves flag an anomaly, the false positive rate is very low, and they can supply richer context faster than IT could on its own. What’s more, visibility into which data sources are monitored and analyzed against dynamic peer groups acts as a deterrent against insider threats.
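
A sketch of what one user’s "statement" could look like (activities and risk scores invented):

```python
import pandas as pd

# Hypothetical risk-scored events for one user, as a self-audit portal might show.
events = pd.DataFrame({
    "when":     ["2021-01-18 02:14", "2021-01-19 10:02", "2021-01-20 23:47"],
    "activity": ["export of 5,000 customer records",
                 "CRM login from usual workstation",
                 "VPN login from new device"],
    "risk":     [0.91, 0.05, 0.62],
})

# Like a credit card statement: highest-risk items first, for the user to confirm.
statement = events.sort_values("risk", ascending=False)
print(statement.to_string(index=False))
```

A user who doesn’t recognize the 2 a.m. export can confirm the alert immediately, with context IT would struggle to reconstruct.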

Detection of insider threats requires a completely different approach and set of tools from detecting threats coming from the outside. Combining user behavior analytics with identity attributes and privileges can surface truly anomalous activity well outside the realm of normal behavior, setting off alerts that prompt response and mitigation.


About the Author

Saryu Nayyar is CEO of Gurucul, a provider of behavior-based security and fraud analytics technology.
