The State of AI in Cyber Security in 2019

From traditional cyber security challenges such as network intrusion detection and user authentication to newer ones such as deepfakes and fake news, AI offers hope for new solutions.

The rise of GPU-driven machine learning over the last ten years[1] has brought a new level of interest in the potential of artificial intelligence in cyber security. The ability of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to find critical patterns in very large datasets is being marketed as a broadly applicable answer to a new and rapid acceleration[2] in the rate and scope of breaches.

Research indicates that cyber security companies adopting AI are seeing notable increases in detection rates for malicious entities and reduced time to positive detection, compared to legacy systems. Increasing use of data science and AI in computer security has also brought a surprising reduction in the complexity of companies' security architecture.

However, findings also indicate that enterprises lack the technical resources to realize the advertised benefits of AI. With the cyber security market set to exceed $300 billion by 2024[3] and the AI-related cyber security market predicted to reach $38.2 billion by 2026[4], there's clear motivation in the sector to cultivate an association with AI consulting, even where AI doesn't necessarily apply.

In this article we'll take a look at what constitutes 'AI' in the context of enterprise security, the current trends and inhibiting factors around adoption, and the cultural background that is driving security toward using artificial intelligence solutions. We’ll also touch on three of the key areas where ML and AI have potential to provide new and innovative solutions.


Are Algorithms 'AI'?

In a recent IBM-backed report on AI adoption in cyber security, 45% of respondents expressed concern over their ability to assess the real value or truth of vendors' claims about their AI cyber security products[5]. That's no minor misgiving in a business climate where almost half of all AI-based tech startups are reported to use no AI at all in their products[6].

Many of the common terms used to describe AI and machine learning (two phrases still at the center of their own semantic dispute) can be misleading. For instance, the term 'algorithm' is often conflated with AI itself. Some of the most famous examples of this conflation are Google's search algorithm, Netflix's recommender system, and Facebook's tailored advertising and timeline-post decision processes: the headline-grabbing code from some of the world's leading researchers into machine intelligence.

All these systems have enormous amounts of proprietary user data to train on, and all of them leverage machine learning, but not directly. Typing 'Game of Thrones' into a Google or Netflix search box doesn't immediately create a new data point in a neural network.

You could hardly expect a reasonable response time if it did: even the significant resources of these global players are equaled or exceeded by the volume of queries[7] from customers expecting almost immediate results across a range of connectivity conditions and device types.

Rather, machine learning networks are used to design more efficient 'traditional' algorithms: static code that handles and acts upon data mechanically and predictably, based on designed rules. The difference is that, increasingly, these rules will have been updated, amended, and sometimes even determined by AI. Updates may be very frequent, on the order of minutes or less; but these interactions rarely offer a direct interface with a responsive AI or ML system.
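As a minimal sketch of this division of labor (the feature names and training data below are hypothetical), a model can be retrained offline as often as needed, while the rule it learns is 'frozen' into static code that runs in the low-latency request path:

```python
# Offline training distills a learned rule; the hot path never touches the model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# --- Offline (periodic retraining, e.g. every few minutes) -------------------
# Toy training data: [requests_per_sec, failed_login_ratio] per client.
X = np.array([[5, 0.01], [8, 0.02], [300, 0.65], [450, 0.80], [12, 0.03], [280, 0.70]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = benign, 1 = suspicious

model = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)

# Extract the single learned split as a plain (feature, threshold) rule.
feature_idx = model.tree_.feature[0]
threshold = model.tree_.threshold[0]

# --- Online (hot path: static, predictable, no model inference) --------------
def is_suspicious(sample: list) -> bool:
    """Static rule distilled from the offline model."""
    return sample[feature_idx] > threshold

print(is_suspicious([350, 0.7]))  # True
print(is_suspicious([6, 0.01]))   # False
```

The design choice is exactly the one described above: the deployed check costs a single comparison per event, while the intelligence that chose the comparison lives offline.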

[Image: Algorithms vs AI]

This does not signify that algorithms are 'impersonating' AI, but rather that AI can drive better algorithm development for the data-heavy, high-volume network environments where machine learning can't currently be expected to operate, at least not with the same response rate or surety of accountability.

This is important in a cyber security context, where latency is often such a critical factor that only pre-optimized algorithms are able to respond to incursions or anomalies quickly enough.

In some cases, which we'll examine later, network connectivity can be insufficient, unavailable, or inadvisable for an AI-based protection system. In these instances, a 'local' machine intelligence must gather data, reason, and act within preset parameters.

This 'standalone' AI will likely be dealing with a very targeted or filtered set of data points (such as a facial recognition task), and it will usually be running in a minimalist, even frugal implementation.

Since the advent of mobile and low-powered neural networks makes this scenario increasingly likely in the next 5-10 years[8], and since 'real-time' Convolutional Neural Networks and other ML frameworks seem set to rise[9], it's worth understanding the practical trade-offs between a 'live' AI system and an algorithm.

Applied AI insights are best suited to recommendation systems, customer service, and predictive maintenance, whereas highly curated algorithms are better suited to tasks such as credit risk assessment, insurance underwriting, and claims processing.

These use cases derive from the contrasting strengths of each approach:

[Image: AI vs Algorithms comparison]

Even if practical considerations didn't make them necessary, algorithms derived from machine learning systems provide a useful layer of curation and human control in a culture still very nervous[10] at the prospect of 'autonomous' AI.

Since the growing speed and responsiveness of live neural networks is likely to remove the need for this layer of interpretation in the next 10-15 years, we'll have some hard decisions to make at that point about the discriminatory powers granted to AI frameworks. For now, it's enough to understand the relationship between the algorithm and the 'live' machine learning system that creates it.

So, with these and other restrictions and limitations, how convinced is the security sector about the current benefits of implementing applied AI techniques?

Current Uptake for AI in Cyber Security

One strong index of demand for AI in cyber security is the shortfall of available AI specialists in the field. Job postings for cyber security positions currently exceed the supply of candidates[11], with the prospect of 3.5 million vacancies remaining unfilled over the next two years[12].

This is a factor of concern to respondents in the aforementioned IBM-backed report. Of those polled, 52% said that adopting AI-based technologies would mean hiring new development staff. According to the poll, such domain experts would be granted extraordinary influence over the way AI is implemented within the company, second only to the CIO and CSO:

[Image: Who determines the use of AI in cyber security? (IBM chart)]

The rewards seem worth these extraordinary measures and expenditures, the report indicates. Cyber security companies that have embraced AI have been able to increase threat analysis speed by 69%, speed up the containment of infected endpoints, devices, and hosts by 64%, and improve the identification of application vulnerabilities by 60%.

AI users surveyed also reported that they were now able to detect 63% of the 'zero-day' exploits that had been considered 'undetectable' before they applied the technology.

[Image: AI cyber security improvements]

Further, the report asserts that AI implementations bring the average cost of a cyber exploit down from $3 million to $800,000, an average saving of $2.2 million in operational costs.

Perhaps most interesting is that 56% of participants found that incorporating AI into their systems and services actually reduces the complexity of their IT stack.

[Image: AI impact on security architecture]

Although no specifics are given for the use cases that inform these figures, the framing suggests a migration to AI-based systems rather than the addition of supplementary AI-based layers onto existing legacy security systems.

If extensive retooling is indeed a prerequisite for ROI of AI in cyber security, that does contradict the more business-friendly notion that AI practices can be economically assimilated[13] into a company's current stack.

Why would this be? The new prominence of AI has not changed the essential nature of security practice, which rests on two principles: the signature-based approach, which generates hashes that define historically dangerous behavior, and the less prescriptive heuristic approach, which seeks to anticipate new malicious patterns of behavior.
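To make the distinction concrete, here is a toy sketch of the two principles side by side; the hash set, the features, and the weights are illustrative placeholders, not real threat indicators:

```python
import hashlib

# Signature-based: hashes of samples already known to be dangerous.
KNOWN_BAD_SHA256 = {
    # SHA-256 of an empty payload, used here purely as a placeholder entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature_match(payload: bytes) -> bool:
    """Exact match against historically observed threats."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

# Heuristic: anticipate novel threats from behavioral traits, not identity.
def heuristic_score(failed_logins: int, bytes_out: int, off_hours: bool) -> float:
    score = 0.0
    score += min(failed_logins / 10, 1.0) * 0.4  # brute-force pressure
    score += min(bytes_out / 1e8, 1.0) * 0.4     # unusual outbound volume
    score += 0.2 if off_hours else 0.0           # activity outside baseline hours
    return score  # flag for review above some tuned threshold, e.g. 0.6
```

The signature check is cheap and certain but only ever catches what has been seen before; the heuristic trades certainty for reach, which is precisely the gap AI is being asked to fill.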

Why then might we need such a radical change in applied technologies?

Fighting AI with AI

One of the most compelling reasons for enterprise IT teams to adopt AI is that malefactors are doing likewise[14]. The open-source nature of global AI and machine learning research means that powerful tools with criminal applications can be crafted from common code repositories and projects.

In 2018, a panel of experts gathered in the UK to examine the potential ramifications of criminal AI[15]. The implications put forward ranged from persuading mining companies to bid for drilling rights in unsuitable locations to the creation of AI-powered viruses capable of doing far more user-specific damage from an infected consumer machine than the customary inconveniences (such as the mailing of spam messages, enlistment in a botnet, or the installation of crypto-mining software).

It's a chilling picture of a new breed of infection that will seem much more 'personal' to the victim, as infected systems are exploited to gather up, collate, and intelligently interpret data in the most damaging way possible.

A 2018 report from the Universities of Oxford and Cambridge (together with the Electronic Frontier Foundation and several other institutes) proposes that governments around the world devote greater resources to assessing their preparedness for AI-based nation-state and criminal attacks, given a global body of research that is currently accessible to all[16].

It also suggests a new mandate for transparency (a so-called 'Blade Runner Law') to distinguish the actions of machines from those of humans, and proposes that the research community work with governments to clearly define and divide the responsibilities and liabilities of the state and the private sector in relation to the proliferation of AI-based attack vectors and their consequences.

These are matters of policy and governance that are set to affect the cyber security industry in the next five years, irrespective of the action it takes or doesn't take. At the moment, many of the issues raised seem abstract and conjectural.

In reality, they may have a direct bearing on a company's future viability. An enterprise that has not committed to AI-based deployment will be the least prepared to protect its interests in a global research environment where public AI code can be easily weaponized by bad actors.

With these as some of the motivating factors for the adoption of AI in the security arena, what are some of the areas where the innovative use of applied machine intelligence might be most productive?


Applications for Artificial Intelligence in Cyber Security

The following sections look at three major use cases for artificial intelligence in cyber security: fighting deepfakes, identity fraud, and network breaches.

Confirming Information Integrity and Veracity

The phenomena of counter-intelligence and propaganda have extended from political and cultural topics into relatively new sub-sections of cyber security over the last five years. Whether the risk is overstated or not, inaccurate information (lies) is becoming synonymous with malicious information (exploits), from a safety perspective.

This is partly because the damage such techniques can wreak has become so frequent and quantifiable recently, but mainly because of the extent to which they have become subject to programmatic, automated processes. Attack vectors can now be greatly facilitated by machine learning, and potentially repelled by AI information systems.

Consequently, 'fake news' is increasingly discussed in the context of weaponized data[17], capable of causing harm comparable to a database leak in terms of reputation and commercial impact. Such attacks are proving viable in both the public and private sectors.

Facebook, one of the global leaders in AI research, has committed[18] to using machine learning techniques in an effort to distinguish fact-based news reporting published in its timelines from terrorist propaganda which uses the same journalistic constructs and conventions of language.

Since the difference can be subtle and semantic, it's a considerable task for Natural Language Processing (NLP) analysis, even for a company like Facebook that has such a high volume of sample data to analyze.
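As a toy illustration of the task (with invented example texts, and nothing like Facebook's actual system), an NLP classifier learns to separate documents that share surface conventions but differ in intent:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; a real system needs vastly more data and subtler features.
texts = [
    "Officials confirmed the report at a press briefing on Tuesday.",
    "Sources say the ministry will publish its findings next week.",
    "Join the struggle now, brothers, before the traitors silence us.",
    "The enemy's lies will be answered; spread this to every follower.",
]
labels = [0, 0, 1, 1]  # 0 = news-style reporting, 1 = propaganda-style

# TF-IDF over word uni/bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["The committee will release its report on Friday."]))  # likely [0]
```

With four sentences the decision boundary is trivial; the hard part, as the article notes, is that real propaganda deliberately mimics the register of the left column.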

Indeed, the sheer scale of the big data era is probably the defining factor in the move toward AI-driven cyber safety. One company specializing in identifying fake news (since incorporated into Twitter) found it necessary to redefine the 'volume problem' with an approach called Geometric Deep Learning.

New models for disinformation have emerged recently. Deepfake video manipulation applications (a new stratum of AI software, first revealed in late 2017[19], that uses autoencoders to misrepresent famous personalities in manipulated footage) have become an increasing concern as the 2020 US election nears[20].

Previous techniques for identifying AI-generated fake footage have included the blink test (which is relatively easy for fakers to circumvent with better curation of the source data) and analysis of lighting and breathing rates. At the time of writing, a new technique has been developed[21] that uses the correlation between facial expressions and head movements to provide a potential deepfake signature. However, experts acknowledge that deepfake software can adapt quite easily to new 'tells' as they come to light[22].
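In spirit, that correlation check can be as simple as the sketch below; it assumes an upstream face tracker has already produced per-frame head-pose and expression series (both hypothetical inputs here), and the cited research is considerably more sophisticated:

```python
import numpy as np

def consistency_score(head_yaw: np.ndarray, mouth_open: np.ndarray) -> float:
    """Pearson correlation between two per-frame signals from the same video."""
    return float(np.corrcoef(head_yaw, mouth_open)[0, 1])

# Simulated data: in genuine footage, expression tends to co-vary with head
# motion in person-specific ways; a face synthesized independently does not.
rng = np.random.default_rng(0)
yaw = rng.normal(0, 1, 300)                           # per-frame head yaw
genuine_mouth = 0.6 * yaw + rng.normal(0, 0.5, 300)   # co-varies with the head
faked_mouth = rng.normal(0, 1, 300)                   # generated separately

print(consistency_score(yaw, genuine_mouth))  # noticeably positive
print(consistency_score(yaw, faked_mouth))    # near zero

# As the article notes, this is a weak signal on its own, and fakers can train
# their models to reproduce such correlations once the 'tell' becomes known.
```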

But this model of an 'algorithmic arms race' now applies to a much wider context in AI-centered cyber security.

Fake news is not entirely centered around AI-driven methods; more rudimentary manipulation can be just as effective[23]. But counter-measure AI techniques remain applicable for these cases too, in concert with potential new data-led approaches such as video-hashing and blockchain accountability.

User Authentication and Fraud Detection

Machine learning has great potential to provide better solutions for user authentication than the venerable 'password' model; weak or stolen passwords reportedly account for 81% of all corporate data breaches[24].

Unfortunately, nearly all of the systems currently available or in development leverage Personally Identifiable Information (PII) in ways likely to conflict with growing privacy concerns among consumers and with new and existing regulatory frameworks designed to protect user anonymity, including the European Union's GDPR.

The model for multi-factor authentication (MFA), familiar to most of us at least through one-time passwords (OTP), increases the stored knowledge about the end user with each added factor. This characterizes the problem: the majority of proposed AI authentication and fraud prevention frameworks seek to reduce end-user friction by collating these multiple data points into a behavioral analysis workflow, drawing on multiple biometric and data sources. These might include purchasing histories, social network posts, and the analysis of typing styles and mouse movements, among many other possible sources.
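A minimal sketch of that collation idea, with hypothetical behavioral features and toy data: several weak signals are folded into a single per-session anomaly score, so that no individual factor has to be strong on its own:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-session features for one user: [avg_keystroke_interval_ms,
# mouse_speed_px_per_s, login_hour]. Simulated history of 'normal' sessions.
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(120, 10, 200),   # typing cadence
    rng.normal(400, 50, 200),   # mouse speed
    rng.normal(10, 1.5, 200),   # usual login hour
])

# Fit an anomaly detector on the user's own behavioral baseline.
detector = IsolationForest(random_state=0).fit(history)

# A session with an alien typing rhythm and a 3 a.m. login:
print(detector.decision_function([[60, 900, 3]]))    # strongly negative = anomalous
print(detector.decision_function([[118, 410, 10]]))  # near zero or above = familiar
```

Note that every feature in that vector is exactly the kind of PII the preceding paragraph warns about, which is the tension the rest of this section explores.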

One scheme by TeleSign implemented 'Continuous Authentication' based on multiple analyses of user behavior, removing the need for any login procedure at all. At the same time, IBM's model for MFA makes clear that even one of the largest tech players on the planet has few better ideas than to pile weak, data-oriented authentication techniques on top of one another in the hope of an 'aggregate' improvement in security[26].

Critics of such data-centric schemes, whether from AI security startups or larger players like IBM, discount the value of that 'friction-free' login against its potential to usher us toward a China-style system of social credit, where the state and large corporations retain high levels of personal information about a user and mine that data in ways the user did not intend or permit.

Even if we accept that there might be an inevitable trade-off between security and privacy, such approaches do not change the public-facing server design of the traditional authentication model: these 'biometric maps' inevitably reduce to yet more net-accessible hashes and stored tokens, a new server-based target for hackers.

The challenge for AI, therefore, is to find workable authentication solutions which:

  • Don't excessively (or in contravention of legislation) infringe the user's privacy or control of personal data
  • Limit the number of network transactions necessary to authenticate
  • Are discrete and integral, rather than bolted onto legacy mechanisms already proven vulnerable
  • Don't require new, expensive dedicated (and stealable) hardware devices
  • Identify the user at an acceptable failure rate (false negatives and false positives)
  • Don't excessively involve users in IT chores themselves

The way forward could be at the local rather than network level. Apple's 'blind' storage of user credentials in its mobile devices gathers local data and uses it only in situ[27], often to the chagrin of investigating authorities[28].

From the release of iOS 10, Apple began using a highly optimized deep network framework for its Vision facial recognition system[29]. In the context of a security scenario where printed masks and 3D-printed glasses have confounded traditional facial recognition systems, Apple felt constrained to dedicate local machine learning resources to the problem.

Now that popular open-source machine learning libraries such as TensorFlow are implementable even for minimally specified devices, a 'local' machine intelligence may represent a better model for the development of innovative AI-based authentication systems than those which are dependent on network transactions.
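A sketch of that 'local' pattern using TensorFlow Lite follows; the model file name, input shape, and match threshold are assumptions for illustration. Inference runs entirely on-device, and authentication reduces to a local distance check against an enrolled template:

```python
import numpy as np
import tensorflow as tf

# Load a pre-trained, quantized face-embedding model shipped with the device.
# "face_embedding.tflite" is a hypothetical file name for this sketch.
interpreter = tf.lite.Interpreter(model_path="face_embedding.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def embed(face_crop: np.ndarray) -> np.ndarray:
    """Run one face crop (e.g. 1x112x112x3 float32) through the local model."""
    interpreter.set_tensor(inp["index"], face_crop.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def matches(candidate: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Cosine-similarity check against an enrolled embedding kept in secure
    on-device storage; no biometric data ever crosses the network."""
    cos = float(np.dot(candidate, enrolled) /
                (np.linalg.norm(candidate) * np.linalg.norm(enrolled)))
    return cos > threshold
```

The design appeal maps directly onto the checklist above: few or no network transactions, no new server-side token hoard, and the raw biometric data never leaves the device.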

In any case, it remains an unsolved problem that's ready for a fresh take from AI.

Network Intrusion Detection and Prevention

The historical foundations of the internet still underpin most of the recent innovations and protocols in the global network infrastructure[30]. Network safety is therefore one area where dealing with legacy technology is inevitable, both for the security arena and for the systems it protects.

The stacked nature of this super-scale 'legacy' architecture also has a direct bearing on the security of user authentication techniques and other, more automated network protection measures, such as Comodo SSL certificates. Consequently, sweeping change seems unlikely either in the attack surface or in the systems that have grown to defend it.

However, this is not why AI has made so little impact on the problem of network intrusion detection, relative to (apparently) similar sectors such as natural language analysis, spam detection and recommendation systems. Rather, it's because the core problem of anomaly detection is not as similar to AI's most fruitful areas of research as it may appear[31].

Machine learning systems excel at discovering relationships between disparate data points and at finding similarities that may be difficult, labor-intensive, or uneconomical to identify manually. During training, an ML system might cycle through petabytes of data, where even a low incidence of the target behavior scales up to a workable mass of relevant examples.

But the proposition for anomaly detection is different: network incursions are historically and statistically rare[32], meaning that the system must learn to identify 'outliers' using training sets that are out of date, highly imbalanced, or both.

A mission-critical machine learning system needs a perfect model of normality in order to establish a baseline for the identification of incursions or abnormal events. A model that is both accurate and resilient to the appearance of previously unknown types of abnormality can be difficult to generate or maintain in these circumstances.
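A minimal sketch of this baseline-of-normality approach (the traffic features and their distributions are hypothetical): the model is fitted on normal traffic only, then asked to flag anything it has never seen:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy 'normal' flows: [packets_per_sec, mean_payload_bytes].
rng = np.random.default_rng(1)
normal_traffic = np.column_stack([
    rng.normal(50, 5, 500),
    rng.normal(512, 40, 500),
])

# One-class model: learns the boundary of 'normal' without any attack samples.
model = OneClassSVM(nu=0.01, gamma="scale").fit(normal_traffic)

# +1 = consistent with the learned baseline, -1 = anomalous.
print(model.predict([[52, 500]]))    # likely [ 1]
print(model.predict([[400, 9000]]))  # likely [-1]
```

The weakness the text describes shows up directly in this sketch: anything the baseline failed to capture (a newly deployed legitimate service, say) is also flagged, so the false-positive rate tracks how complete and current the model of 'normal' is.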

The cost of failure, measured in terms of 'ignored' anomalies or false positives, is much higher than in semantically similar machine learning pursuits such as optical character recognition, spam detection or product recommendation algorithms.

Nor is it possible, as in those less critical research sectors, to generate training datasets featuring authentic failure events at scale, given the novel designs and signatures that we can logically expect from new approaches to network incursion.

AI systems face a similar paucity of data in the field of predictive failure analysis, where the low frequency of 'abnormal' data can require new approaches to the dimensionality of models aimed at the task.

AI-driven network intrusion detection, then, is not the lowest-hanging fruit in the present wave of business interest in machine learning. Current commercial offerings in this line are few, and mostly center on analysis rather than real-time response.


Conclusion

A large number of the security-related AI solutions dominating the headlines these days are actually quite AI-agnostic. They're predicated on access to voluminous and multi-layered PII that either may not continue in the long term or may never be granted, no matter how compelling the use case.

Given that condition, many such solutions would have worked as well in the 1990s had the data been available then, and so are difficult to characterize as genuine innovations in the field of commercial AI.

No IT sector touches this sensitive area more than cyber security, which seems likely to define the terms by which AI will negotiate the interests of privacy vs security over the next ten years.

Therefore, it seems that the cyber security sector will need to leverage machine learning and deep learning innovatively, not merely as a convenient processing system for data that is fast becoming more political than technical in nature. As we've seen, there's plenty of scope for radical innovation, in fields such as air-gapped machine intelligence deployments and in new approaches to structural challenges such as network protection and fraud detection.
