The devil is in the data

  • Over half (56%) of IT decision makers surveyed feel that their personal data is less secure now than 5 years ago
  • Almost 9 in 10 (87%) feel forced to share an increasing amount of personal data
  • 94% feel that increased regulation is needed to control what voice assistants, such as Google, Siri, and Alexa, are allowed to listen to and collect

Back in 2018, a Forbes article estimated that 90% of the world’s data had been generated in the two years prior – an extraordinary claim at the time, and a figure that is likely even greater today with the continued growth of the Internet of Things (IoT). Fast forward 5 years, and while IoT is now the fastest-growing data segment, social networks are close on its heels. But where is all this data stored? Who has access to it? And how is it protected? With so many questions, it’s easy to get a little overwhelmed, and even paranoid, about how our data is being used.

We are all responsible for adding to this large collective of data, whether knowingly or not. Each time we browse the internet, we (perhaps unwittingly) leave behind a unique digital trail that organisations might store and use to make more effective decisions. Or we may consciously be creating and sharing our digital identities; each social media account we create, discussion thread we participate in, application we fill out electronically, and even the latest gadgets we might browse online, all add to our digital footprint.

Data collection is not limited to consumers, however. Businesses are increasingly relying on data about their competitors, employees, and customers to feed the Big Data sets that are now becoming too complex for traditional data processing applications to handle. Increasingly, large language models are taking our data and using it to recognise, predict, or even generate content.

We thought we’d take the opportunity to reach out to our Vanson Bourne Community of IT professionals to get their thoughts on data, from both a consumer and ‘insider’ point of view, and whether the aforementioned concerns might be justified.

Do IT decision makers feel their data is safe, and do they care?

Perhaps unsurprisingly, IT decision makers feel their data is most secure with their employer, and least secure with social media platforms (such as Facebook and Instagram) as well as websites (such as news, streaming and shopping). With the overwhelmingly lengthy regulations and protocols in place to protect our personal and professional data held by our employers, this is a reassuring finding, and surely one that makes reading all those policies and documents worthwhile!

Professional networking sites, however, didn’t escape unscathed. Almost 1 in 10 (9%) of the IT decision makers (ITDMs) we interviewed feel that LinkedIn needs a complete overhaul of its security processes. As one of the top recruitment resources in the UK today, it’s surprising that so many feel their data is at risk with LinkedIn but not with their employer, who could easily obtain their CVs and other personal details from the platform.

Although all platforms surveyed have room for improvement in their security processes, UK ITDMs do understand and acknowledge their own responsibilities in keeping data secure – their insider perspective on such threats should offer a lesson for others in the business to follow.

With the events of recent years having driven an acceleration in digital transformation, and the latest developments in artificial intelligence looking set to create a platform shift like that of the cloud, or even the internet itself, there’s a plethora of information about each and every one of us being collected and stored digitally. While most entities that store data have some form of data security procedures in place, the sophistication and level of protection can vary significantly across organisations and, even more so, across borders. The European Union’s General Data Protection Regulation (GDPR) is considered by some to be the toughest privacy and security law in the world, designed to protect data belonging to its citizens and residents and applying to organisations that process such data, regardless of where the organisation is based.

These measures are, in part, aimed at improving our confidence in the privacy and security of our data, yet their impact appears somewhat muted: just over half (56%) of the ITDMs we interviewed feel that their personal data is less secure now than 5 years ago, while 39% feel this is not the case and 5% don’t know. And although only a few data breaches make the headlines – such as those at Yahoo, LinkedIn and Marriott International, which impacted billions of accounts globally – they sadly remain more commonplace than we might dare to think.

So, what if the worst were to happen?

Unsurprisingly, all the ITDMs we interviewed state they’d be concerned if their data were to be leaked. This makes sense from a consumer point of view – after all, these same ITDMs are consumers themselves. But aren’t they also the ones responsible for securing our data in the first place? So, who is to be held accountable when data is leaked?

Evidently, the blame shouldn’t be laid solely at the door of the ITDMs themselves – at least not in their eyes. While ITDMs acknowledge their responsibility for protecting data, and admit to being at fault when data is leaked, short of blaming the attackers themselves, 86% of respondents feel the company (e.g., Instagram or Facebook) is at least partially to blame for social media data attacks. Similarly, over 8 in 10 ITDMs feel that online websites (including news and shopping sites), LinkedIn, or even government sites (storing data relating to pensions or tax information) are themselves to blame. Only employers fared slightly better, with 79% blaming them for a related data leak. In each case, this may be due to a lack of sufficient tools or security protocols available to ITDMs to secure data effectively, or a lack of investment in cybersecurity overall.

More research might be needed to delve deeper into blame, but is that really the point here? If most of us feel that data breaches are out of our control, either personally or professionally – yet we need to give out our data to survive in the world – do we have any autonomy left? With almost 9 in 10 (87%) of those interviewed feeling forced to share an increasing amount of personal data, we may not like the answer to that question!

The increasing use of AI and machine learning looks set to prolong this data dilemma. The vast majority (93%) of ITDMs interviewed have some exposure to or experience with these technologies – and yet, 92% feel that these tools and programmes are not keeping their data fully secure. Clearly, cybersecurity needs to remain a priority for organisations with access to our data.

Are humanoid or autonomous robots any safer?

Like Frankenstein’s monster, assembled in a scientific experiment from a variety of parts from corpses to resemble a human, humanoid robots are robots designed to mirror human behaviour, sometimes with human-like facial features and expressions. Are they the modern-day Frankenstein’s monster? Typically, these robots can perform human-like activities such as running, jumping, and carrying objects. An extension of this is autonomous robots, which operate independently of human operators, using sensors to perceive the environment around them – for example, cleaning bots or hospitality bots. Both have seen growing interest in the last few years, with 89% of the ITDMs we surveyed reporting some experience with one, 19% of whom have interacted with or used one. However, only 1% of decision makers feel the information these robots use is stored in a secure location. Additionally, just under half (47%) are hopeful the data is securely protected, but the majority (49%) don’t trust that it is.

This raises the question – from an insider’s perspective, why are ITDMs, a tech-savvy group of professionals, interacting with or exposing themselves to potentially unsafe technology? Perhaps it’s because many (62%) feel these robots are the future. The technological advancement doesn’t stop with robots: through the sophistication of AI, ML, and robotics, our world is growing in autonomy and complexity as many roles move away from humans. Three quarters of ITDMs surveyed feel AI technology, such as Microsoft Copilot or ChatGPT, will disrupt the administrative job market by replacing human employees. Is the future of jobs no longer human?

Hopefully the fate of AI, and of robots, won’t end as disastrously as Frankenstein’s story of raising the dead. All we can do is hope that the tech engineers and designers have secure and safe data protection processes in place that will remove any opportunity for a data breach – a fate just as scary as Frankenstein’s monster coming to life!

Perhaps the financial penalties imposed on organisations for data breaches should be paid to the data subjects themselves. 91% of the ITDMs we spoke with agree that organisations should be legally required to financially compensate individuals for breaches involving their personal data. Almost all (94%) feel that increased regulation is needed to control what voice assistants, such as Google, Siri, and Alexa, are allowed to listen to and collect.

So why do we need so much data?

The importance of data to an organisation’s success cannot be overstated – but we would say that, wouldn’t we? Well, research conducted by McKinsey suggests that companies that strategically use data, such as consumer behavioural insights, to inform their business decisions outperform their peers in sales growth by 85%.

It’s clear that concern around the data organisations hold is at the forefront for professionals and consumers alike. But as we’ve seen from the vast majority of the ITDMs we’ve interviewed, themselves consumers too, there’s increasing pressure to share more personal data with organisations. With that in mind, organisations would be wise to heed these concerns and regularly review the security policies and procedures implemented to safeguard this data – what is collected, why is it collected, how is it stored, for how long, how is it protected, etc. Such measures would ensure that they are best placed to avoid a data leak, and any financial or legal implications that may bring. A company won’t want to make the headlines for the wrong reasons!

Perhaps more important, however, is that in doing so organisations might earn the trust of those whose data they are responsible for safeguarding, and in turn increase the amount of data people are willing to share with them. With data an increasingly valuable commodity, trust in a brand can have a significant impact on its success. And how to build trust? Organisations need to be accountable and commit to understanding their audience.

Authentic messaging that aligns with the values of their audience can play a significant part in building trust in a brand. Vanson Bourne has a wealth of experience helping brands refine their messaging through understanding the impact that key individual elements of a message have on audience appeal. In a separate survey of 300 B2B marketers, strategists, and insight professionals, from the US and UK, 95% told us their organisation is conducting (or has plans to conduct) message testing.

Get in touch to find out more about our message testing capabilities, and how we can support your other research requirements.

Methodology

100 UK IT decision makers from the Vanson Bourne Community were interviewed in October 2023.