We are all now living in a data-driven economy in which over 2.5 quintillion bytes of data are generated every day, yet only a fraction is transformed and used to deliver value to the business.
This data is something of an untapped gold mine, with The Economist describing it as “to this century what oil was to the last one: a driver of growth and change”. The analogy gained weight at the end of 2020 when Zoom overtook Exxon Mobil in market value. Data has also been described as “the new gold” in the banking sector, where an hour of downtime is estimated to cost $6.48m.
Yet look more closely, and significant differences become apparent. Oil gets cheaper once the sunk costs of exploitation are taken into account, whereas collecting data only gets more expensive. This poses a real problem for businesses looking to leverage data: they need to analyse more data without the cost becoming prohibitive, and they need to justify the investment.
Where data adds value
Data presents businesses with the opportunity to understand what is and is not working in relation to product development, pricing, business costs, and competitive positioning. Insights can only be achieved once data has been collated, observed and analysed. The McKinsey Global Survey on data and analytics found that half of all digitised incumbents (i.e. businesses where more than 20% of infrastructure and operations are digital) are now extracting novel insights from data that was traditionally considered unrelated or left sitting in separate systems.
But there are other ways that data can add value. Data has a direct impact on innovation and competitive differentiation, for instance, with the same McKinsey survey finding that data and analytics contributed at least 20% to earnings in high-growth businesses.
Effective data management can also make it easier to meet regulatory reporting requirements. Financial organisations, for example, deal with large volumes of data that need to be stored in a structured way so that key information can be extracted in response to regulatory or customer requests. Previously, this data was housed internally in data warehouses, but banks and regulators are now turning to the cloud to set up processes that standardise how data is stored and transferred, according to KPMG.
And companies are becoming more comfortable sharing data with stakeholders and partners to improve service and product delivery and to support legal compliance, with 50% of businesses recognising that doing so will increase company growth. The McKinsey survey found analytics-based businesses are increasingly pooling data via a shared utility, with almost double the number of companies forming data-related partnerships along the value chain compared to a year ago.
Are you data ready?
Several challenges continue to prevent businesses from leveraging their data. Many are struggling with their ‘data readiness’, that is, how effectively a business can utilise its data. The term covers a much broader spectrum than just data collection: it looks at how data is reported, analysed, interpreted and acted upon — an entire process that enables you to capture, recognise and seize opportunities.
There are real differences in data readiness between industry sectors. The best prepared are finance and accounting, followed by IT and telecoms and then legal, while the least prepared are those in engineering and design, according to an Open Data Institute and YouGov survey.
To improve data readiness, businesses have moved either in whole or in part to the cloud, yet 65% claim the migration has not yielded the expected benefits, according to a report from Accenture. The survey found private cloud users struggled far more than public cloud users to achieve their goals, with 87% saying they would consider using managed services to help get the most out of their deployments.
The data dichotomy
The cloud isn’t delivering on its promise because we’re essentially asking it to do two different things: on the one hand, we want the data to be accessible, but on the other, we want it to be protected. This conflict means businesses must control risk and cost while also monetising the opportunities their data presents.
Industry surveys reveal the same top factors inhibiting cloud adoption, which are:
Security and compliance: The Accenture survey found security and compliance was the top risk, cited by 65% of companies, while the complexity of the change required in the business came a close second at 55%. Yet organisations find managing both easier in the cloud than on-premises, with a Contino report finding 93% view the cloud as ‘more secure’ or ‘as secure’ and 72% find it ‘easier’ or ‘much easier’ to stay compliant. It concludes that this contradiction reveals that “switching to cloud-native security and compliance models is a struggle”. McKinsey reports that 52% of CIOs have not reached their agility objectives because of security and compliance obstacles.
Data breaches, misconfiguration and inadequate change management, and insufficient identity and access management (IAM) number among the top security threats, according to the Cloud Security Alliance (CSA) report ‘Top Threats to Cloud Computing: The Egregious Eleven’.
Compliance obligations span legal and financial requirements such as data privacy regulations like the GDPR, financial regulations like Sarbanes-Oxley, and security standards like PCI DSS or the ISO 27000 series. These can also cause issues because teams attempt to replicate their existing on-premises practices in the cloud instead of embracing cloud-native tools and techniques.
Data architecture: The business needs technical infrastructure in place that allows the cloud to be used productively. The CSA report noted that the absence of a cloud security architecture and strategy was a top threat to the viability of cloud deployments. Having an effective data strategy in place ensures data integrity is prioritised along with the rapid collection and dissemination of data to facilitate decision making. However, the Contino report found nearly a tenth of the organisations it questioned had no cloud strategy at all, while 16% were in the process of developing one, suggesting many are entering the cloud ill-prepared.
Vendor lock-in: Although using a hyper-scaler confers several advantages, providing access to advanced cloud services such as serverless computing, big data, machine learning and containerisation, dependency on these services can lead to vendor lock-in. The Contino report found 63% of those questioned were ‘somewhat’ or ‘very afraid’ of getting locked into a single cloud provider.
There are a number of cloud models a business can choose from: a private cloud (deployed internally, but potentially expensive), a public cloud hyper-scaler service (such as AWS, Azure or GCP), or a hybrid cloud (a mix of the two). But the fear of vendor lock-in is now seeing some companies opt for a multi-cloud set-up, where more than one hyper-scaler is used, with 20% having adopted this model.
To overcome these challenges, businesses must be able to use cloud services more flexibly and have the freedom of choice to select cloud-native technologies that allow them to access, shape and analyse their data more effectively.
In the future, it makes sense to refine your data readiness strategy by seeking to:
Get the maximum benefit from your data analysis investment by leveraging relevant data rather than collecting all data.
Avoid getting locked in to proprietary vendors so you can move between technology platforms and cloud-native solutions.
Keep your options open so you can use different coding languages or access new and emerging technology to draw data from disparate systems.
To learn more about how you can more effectively manage your data and the technology options available, download our ‘Unlocking the power of data’ white paper today. Or, for a free consultation, call 0330 128 9180 or email us at email@example.com.