Fight or flight: The power of people and technology in cybersecurity

To mark the 20th Cybersecurity Awareness Month this October, America’s Cybersecurity and Infrastructure Security Agency (CISA) announced a new program called “Secure Our World”, focused on four “easy” ways to stay safe online:

  • Use strong passwords
  • Turn on multifactor authentication (MFA)
  • Recognise and report phishing
  • Update software
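For those who prefer to see the basics in something more concrete than a bullet list, here’s a minimal, illustrative sketch (in Python) of how an organisation might audit accounts against two of them – strong passwords and MFA. The field names and thresholds are hypothetical, purely for illustration, and real-world checks would go a good deal further than this.

```python
# Minimal, illustrative sketch: auditing user accounts against two of the
# "Secure Our World" basics (strong passwords and MFA). The field names and
# the length threshold here are hypothetical, not taken from CISA guidance.
from dataclasses import dataclass


@dataclass
class Account:
    username: str
    password_length: int      # length only; real checks would assess entropy, reuse, breaches
    mfa_enabled: bool


def audit(accounts: list[Account], min_length: int = 16) -> list[str]:
    """Return human-readable findings for accounts that miss the basics."""
    findings = []
    for acc in accounts:
        if acc.password_length < min_length:
            findings.append(f"{acc.username}: password shorter than {min_length} characters")
        if not acc.mfa_enabled:
            findings.append(f"{acc.username}: multifactor authentication not enabled")
    return findings


if __name__ == "__main__":
    sample = [Account("alice", 22, True), Account("bob", 9, False)]
    for finding in audit(sample):
        print(finding)
```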

In principle at least, these do all sound easy, but when dealing with human behaviour – as at least two of these areas do – it’s rarely quite so simple. A cynic might say that this helps to feed the narrative that security breaches are more often than not down to human error (which is, of course, a factor).

But when a breach does occur, it feels much like an incident involving an aircraft – if you’re a passenger on a commercial airline, would you rather hear that it was pilot error, or a fault with the aircraft?

Regardless of our conspiracy theories or your answer to that question, the theme is highly relevant to cybersecurity, and technology more broadly. If the tech and the people interacting with/operating the tech are not in perfect harmony, then something can and will go wrong.

So, with that overarching theme in mind, and some of the main security conferences of the year now firmly in the rear view, we felt it was time we took stock of what 2023 held for cybersecurity, as well as what might be in store during 2024.

Breaches, breaches, breaches…

In amongst all of the noise surrounding generative AI (gen AI) – which we’ll get into later – it did feel as though some significant breaches disappeared from mainstream media as quickly as they arrived.

The UK public sector in particular seemed to take something of a pummelling during 2023, with the Police Service of Northern Ireland (PSNI), Greater Manchester Police (GMP), and the UK’s Electoral Commission all suffering a breach of some description, due to a mixed bag of human error and technology failings.

And while that supports our opening gambit of people and technology being crucial, rather than one or the other, it perhaps points to another key factor…and it’s an old favourite – budgets. Without going too far down the rabbit hole of public sector funding, it does highlight the importance of spending in the right areas, and cybersecurity should certainly be considered among those.

If organisations are not directing significant portions of their budget towards cybersecurity, then the voices of those shouting “it’s only a matter of time before you get breached” will continue to get louder. Despite that always sounding a little defeatist, it has proven time and again to be true – there’s only downside in throwing money at the problem after the event; a sentiment shared by a Vanson Bourne CommunITy member in a recent in-depth interview (IDI).

That’s not to say that money is the sole answer, but it can help to level the playing field. Nation state actors, cyber criminal gangs, and hacktivists, among others, will be spending much of their “hard earned” cash on trying to add to their hitlist. So, those looking to defend their data, finances, and reputation must invest in a similarly robust manner – after all, it’s hard to put the toothpaste back in the tube.

It’s also worth making the point that private sector firms have been far from immune to breaches this year – despite their budget ceiling typically being higher. To continue the aviation theme from earlier, take Boeing, for example – a huge global brand, falling foul of the LockBit 3.0 ransomware gang due to a vulnerability in its software supply chain.

LockBit – who operate on a Ransomware-as-a-Service (RaaS) model – have been prolific in recent years. And this breach of Boeing, along with others such as that on the US arm of the Industrial and Commercial Bank of China (ICBC), feels like their way of reminding the world that while we’re all looking at gen AI, they’ll be going about their business of taking names and cashing cheques.

Say what you like about threat actors, but there is a certain brilliance in the way they execute their missions and continuously evolve their tactics, techniques and procedures (TTPs). Take this approach for example:

  1. BlackCat ransomware gang exfiltrate data from MeridianLink
  2. MeridianLink decides not to fully engage in negotiations with BlackCat
  3. BlackCat gets annoyed and reports MeridianLink to the US Securities and Exchange Commission (SEC)

“Good guy ransomware gang” – this, of course, isn’t designed to glorify these hacker groups in any way, but it does highlight what organisations and the authorities are dealing with. Highly aggressive and innovative approaches, driven, in general, by greed. A dangerous combination.

So, what does this mean for organisations, how can they combat these threats, and, ultimately, how do they go about increasing the efficacy of their security stack?

Expansion or consolidation?

The attack surface that organisations are trying to monitor and mitigate against is growing – no great epiphanies there.

But with cloud sprawl being a genuine concern, incidents resulting from zero-day vulnerabilities seemingly increasing in prevalence, and the potential rise of shadow AI, among many other factors, it’s apparent that IT security teams must find a way to bolster their cyber defences through the utilisation of a technology stack that suits the specific requirements of their organisation.

And this leads us nicely into one of the main topics of discussion from security events such as RSA and BlackHat USA this year: Should companies be pursuing a “best of breed” / point solution strategy, or a “consolidation” / platform-based approach?

Over the years, it seems as though organisations have gravitated towards the former – searching out and implementing the best solutions for specific security needs, regardless of vendor. While this sounds sensible, the approach does have its drawbacks – more tools equates to a more complex security stack, and more potential points of failure that hackers could exploit.

Not only that, but it creates an integration headache for even the most seasoned of IT security professionals, which can, again, lead to gaps due to disparate tools not working cohesively together – at least in part because they were never designed to do so.

And for that reason, it would appear that attitudes are showing signs of shifting as we head into 2024, with security leaders appreciating that a comprehensive cybersecurity platform – meaning fewer tools and vendors in their stack – is likely to give them the best chance of protecting their organisation, from both external threats and insider risk.

We posed the question of point solutions vs. consolidation to our community of IT and IT security decision makers, with a third (33%) saying that in 2024 they believe their organisation will utilise/invest in a consolidation approach, so that they use as many (or as few) tools from the same vendor as possible, while the majority (59%) say they will utilise/invest in point solutions that solve specific problems, regardless of vendor.

Perhaps this isn’t the ringing endorsement of a consolidation approach that we anticipated, but it is indicative of how change takes time, particularly when a certain strategy is so ingrained. It won’t be as simple as ripping off the band aid when it comes to migrating towards a new-look cybersecurity approach, and it will take careful planning and execution to do it properly (and securely), but the pros do seem to outweigh the cons.

As such – at least given the industry discussion – it would seem that vendors dealing in extended detection and response (XDR) platforms could be in for a period of growth during 2024. We’re certainly not here to plug any particular vendor or platform, but this XDR based approach seems to be about as close to a “silver bullet” for cybersecurity as you’re likely to find.

The ability to ingest data from a range of different sources, investigate and analyse threat levels, and then prioritise and respond to those threats/events, all within a unified platform, is surely going to simplify the lives of (typically) under-resourced security teams. These are the same teams currently monitoring significant numbers of alerts across a host of security solutions – a task often compared to playing a highly sophisticated game of “whack-a-mole”.

While it’s an entertaining analogy, the whack-a-mole approach cannot be sustainable with the evolving threat landscape in mind, and XDR feels like a notable step in the right direction.
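To make that less abstract, below is a deliberately simplified sketch (in Python) of the XDR idea: alerts from different tools are normalised into one schema, scored, and worked in priority order. The source names, fields and scoring weights are invented for illustration – a real platform does vastly more, not least the automated response piece.

```python
# A deliberately simplified sketch of the XDR idea described above: normalise
# alerts from several sources into one schema, score them, and work the queue
# in priority order. Source names, fields and scoring weights are hypothetical.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Alert:
    source: str        # e.g. "endpoint", "email", "cloud"
    asset: str
    severity: int      # 1 (low) to 5 (critical), as reported by the source tool
    confidence: float  # 0.0 to 1.0: how sure the source is that this is real


def normalise(raw: dict) -> Alert:
    """Map a raw, tool-specific record onto the shared schema."""
    return Alert(
        source=raw.get("tool", "unknown"),
        asset=raw.get("host") or raw.get("mailbox", "unknown"),
        severity=int(raw.get("severity", 1)),
        confidence=float(raw.get("confidence", 0.5)),
    )


def prioritise(alerts: Iterable[Alert]) -> list[Alert]:
    """Order alerts so analysts see the riskiest, most credible ones first."""
    return sorted(alerts, key=lambda a: a.severity * a.confidence, reverse=True)


if __name__ == "__main__":
    raw_events = [
        {"tool": "endpoint", "host": "laptop-42", "severity": 4, "confidence": 0.9},
        {"tool": "email", "mailbox": "finance@corp.example", "severity": 3, "confidence": 0.7},
        {"tool": "cloud", "host": "prod-db", "severity": 5, "confidence": 0.4},
    ]
    for alert in prioritise(normalise(e) for e in raw_events):
        print(alert)
```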

Generative AI – risk or opportunity?

So, here we are…gen AI and large language models (LLMs). What can we say that hasn’t already been said this year…on multiple occasions? Well, in all honesty, probably not an awful lot…

  • …has it been fear-inducing? Yes
  • …has it been disruptive? Absolutely
  • …will it transform how we live and work? Without a doubt

We live and breathe B2B tech, so despite the recent carnage at OpenAI, in our minds, it is indisputable that this rapidly evolving technology – the explosion of which has been driven by ChatGPT – will provide significant benefits across all industries, and the world economy.

However, it’s also clear that there is going to be an adjustment period as the business world continues to figure out how to extract maximum value while introducing minimal risk to the company.

We’ve already referenced the phenomenon of shadow AI. This feels like something of an inevitability considering the wide-ranging use cases across software development, marketing, data modelling and many others. But, in the long term, it will probably be viewed as a growing pain – “a necessary evil” – as functions from across the business rush towards gen AI, to ensure that they aren’t seen as the department causing their company to be left behind.

It is, though, worth sparing a thought for IT security teams during this settling-in phase, as, ultimately, they will still be held accountable if a breach occurs due to a gen AI tool that they might not have approved or had visibility over. To that end, it’s crucial that all areas of the business not only consider how they can best utilise gen AI to support their own objectives, but also how they can work with the IT / IT security department to embed the tools they need in a responsible way.

Most things worth doing tend to be challenging or introduce elements of risk, and it feels like gen AI falls firmly into this camp. But the expanding attack surface is real, and this technology evolution will perpetuate that, at least in the short to medium term.

Emphasising this is the fact that when we asked 81 of our community members what they believed would be the biggest challenge and/or transformation in cybersecurity during 2024 (in a verbatim format), just under 60% mentioned AI in some way, shape, or form – with many of them highlighting the potential associated risks, or benefits for cyber criminals.

Nonetheless, we’re talking about a technology that can be used for good as well as evil. The aforementioned XDR solutions already lean upon AI, so that the data ingestion, threat analysis, and decisioning phases can be expedited. The reduction/removal of these hugely time-consuming tasks will help to ease the burden on IT security teams, as well as benefit the IT security posture of organisations able to implement such a platform.

But, and there’s always a “but”, we’ve already noted that cyber criminals are just as innovative, if not more so, than the organisations that they’re targeting. And, while the jury is still out on how effective gen AI is for specific coding purposes, or how long it will be before we get to a point where malware arrives carrying baked-in gen AI capabilities (as this post-RSA article mentions), the security community understands that it can often be the simplest attacks that are the most effective.

Therefore, at this stage, it’s most likely that gen AI will be used by attackers to improve the success levels of their social engineering attacks, primarily through phishing scams, which can now be executed more effectively and on a larger scale.

Which brings us full circle to one of CISA’s core themes – recognise and report phishing. The other themes, of course, cannot be disregarded, but it feels like this one in particular stands out. This seemingly straightforward task will be made all the more difficult now that cyber criminals have gen AI at their disposal.
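As a rough illustration of the kind of red flags that awareness training tends to drill into people, here’s a toy sketch (in Python) of some naive phishing indicators. The trusted domain, keywords and rules are made up for the example; real detection – especially against gen AI-crafted messages – is far more involved, which is precisely the point.

```python
# An illustrative (and intentionally naive) set of phishing indicators of the
# kind that awareness training tends to highlight. Real detection is far more
# involved; the domain list and keywords here are made up for the example.
import re

TRUSTED_DOMAINS = {"corp.example"}                       # hypothetical internal domain
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now"}


def phishing_indicators(sender: str, subject: str, body: str) -> list[str]:
    """Return a list of simple red flags found in a message."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"external or unexpected sender domain: {domain}")
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        flags.append("pressure or urgency language")
    if re.search(r"https?://\d+\.\d+\.\d+\.\d+", body):
        flags.append("link pointing at a raw IP address")
    return flags


if __name__ == "__main__":
    print(phishing_indicators(
        sender="it-support@corp-example.net",
        subject="URGENT: verify now",
        body="Your account is suspended. Log in at http://203.0.113.7/reset",
    ))
```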

As such, organisations must invest in proper training for their employees to reduce the risk of them succumbing to increasingly convincing messages. That, in tandem with settling on a security approach and technology stack that suits their business requirements, will give them as good a chance as they can hope for against the flood of rapidly evolving threats coming their way during 2024.

This is one of those odd occasions where the problem and the solution are the same – people and technology.

Cybersecurity for 2024: people, technology…and dogs?!

A year is a long time in cybersecurity, and with the developments witnessed in 2023, it raises the question: what on earth will 2024 have in store? The probable answer…more of the same, but on steroids.

Gen AI will, of course, be right at the centre – whether that be how it is deployed by organisations to defend themselves, or how cyber criminals utilise it to breach those defences. One thing that is certain as we plough headlong into the new year is that the interplay between people and technology – AI or otherwise – will be more important than ever.

And with that, for the last time (promise) we return to our aviation subtext, for there is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.

We don’t just bring this up for comic effect, as there is a serious underlying point. In 2024, we, as the pilots, cannot afford to let the technology outpace us – which is ultimately what would happen in this fictitious scenario if the dog were doing its job properly. People must be at the heart of technology and security transformation to ensure that if something does go wrong, we are able to fix it.

So, perhaps it’s the dog whose role changes the most – it goes from trying to prevent the human interacting with the technology, to forcing this interaction and helping everyone within the business to understand why it’s important.

While they’re both, evidently, critical facets, it is not just down to the IT / IT security team and the technology when it comes to tackling cybersecurity; it has to be a wider effort. And this is why CISA set out their guidelines in the way that they did. In order for a company’s threat mitigation efforts to be a success, everyone in the workforce must hold themselves accountable as well – where we’re all the “dogs biting the hand” to keep each other vigilant.

From the ground level up, it’s incumbent upon everyone within the organisation to know what the latest threats are – whether it be teenage hackers, nation state attackers or RaaS gangs – as well as the key trends that are on the rise, such as gen AI, and what this means for them in their day-to-day roles.

We live in a world that’s driven by technology, regardless of industry or organisation size. Sharing knowledge as we all head into 2024 will enable organisations to tackle their people and technology problems, with their people and technology.

Methodology

82 UK IT decision makers from the Vanson Bourne Community were interviewed in November 2023. All came from organisations with 500+ employees, from across various sectors.