Data Compliance
Traditional data security approaches do nothing to protect the data itself. Here are four steps, enabled by DataOps, to mitigate the risk of data loss and exposure and to establish the policies and controls that let data flow freely.
Andrew Pan
Jul 30, 2019
News of data breaches and IT system failures saturates the media landscape. Major organizations continue to suffer attacks and lose data to malicious actors with increasing frequency. Although many businesses rank security as a high priority, they seem befuddled as to how to protect sensitive data.
The majority of data within an enterprise resides in non-production systems, where sensitive data is particularly vulnerable. Many organizations have a false sense of security that non-production data is safe because it is separated from production and not running business operations. In fact, development and testing often run on copies of production data, and the multiple datasets used to execute code and test feature changes across teams increase the risk of a breach by expanding the attack surface.
Any security measure that can mitigate these risks needs to account for data in non-production environments to ensure comprehensive coverage. Here’s how to get started.
Waiting for data to test code should no longer be the status quo. While companies have streamlined the application and infrastructure layers of their development stacks through DevOps, they have failed to automate the data layer in a secure manner, making it challenging to scale, maintain, and implement consistently.
DataOps is an emerging practice within the IT landscape that enables the rapid, automated, and secure management of data. It made its debut as an "Innovation Trigger" in three separate Gartner Hype Cycle reports in 2018, and we've been seeing enterprises plan significant investment in this growing discipline since then.
The first step in any data security operation is to discover the data that is both valuable and vulnerable to attack. Given the massive datasets in modern enterprises, this is not a trivial task. There needs to be an automated way to search datasets and profile sensitive data, whether it spans application types or falls under specific privacy regulations. This process needs to be fast and intelligent so that it leaves no gaps for intruders to exploit.
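To make discovery concrete, here is a minimal sketch in Python of pattern-based profiling over sampled rows. The PII_PATTERNS table, the row layout, and the 80 percent match threshold are illustrative assumptions, not the behavior of any particular product.

```python
import re

# A minimal sketch of automated sensitive-data profiling over an
# in-memory table (a list of column-name -> value dicts). Real DataOps
# tools scan live schemas; the patterns below are illustrative.
PII_PATTERNS = {
    "us_phone": re.compile(r"^\d{3}-\d{3}-\d{4}$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def profile_columns(rows, sample_size=100, threshold=0.8):
    """Flag columns where most sampled values match a known PII pattern."""
    findings = {}
    sample = rows[:sample_size]
    if not sample:
        return findings
    for column in sample[0]:
        values = [str(r[column]) for r in sample if r.get(column) is not None]
        for label, pattern in PII_PATTERNS.items():
            hits = sum(1 for v in values if pattern.match(v))
            if values and hits / len(values) >= threshold:
                findings[column] = label
    return findings

rows = [
    {"name": "Ada", "phone": "555-867-5309", "email": "ada@example.com"},
    {"name": "Grace", "phone": "555-555-0147", "email": "grace@example.com"},
]
print(profile_columns(rows))  # {'phone': 'us_phone', 'email': 'email'}
```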
Second, after sensitive data has been profiled, it needs to be transformed in a way that makes it unrecognizable while still retaining business value. In other words, masked data needs to preserve its original format and maintain referential integrity: the same input should always mask to the same output so that relationships across tables still hold. If your 10-digit phone number becomes a string of letters after transformation, it is essentially useless because it has lost all meaning in its original context.
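As an illustration, here is a minimal sketch of deterministic, format-preserving masking built on a keyed hash. The secret key and the digit-substitution scheme are assumptions for demonstration; a production system would use vetted format-preserving encryption.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would be rotated and kept out of
# source control.
SECRET_KEY = b"rotate-me-outside-source-control"

def mask_digits(value: str) -> str:
    """Replace each digit with a keyed pseudorandom digit, keeping
    punctuation and length so a phone number still looks like a phone
    number. Deterministic: the same input always yields the same output,
    which preserves joins across tables (referential integrity)."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    digit_stream = (int(c, 16) % 10 for c in digest)
    return "".join(str(next(digit_stream)) if ch.isdigit() else ch
                   for ch in value)

print(mask_digits("555-867-5309"))  # masked, same ddd-ddd-dddd shape
print(mask_digits("555-867-5309") == mask_digits("555-867-5309"))  # True
```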
Third, data teams need a clearly articulated, policy-driven approach to data obfuscation when managing regulatory compliance. Since masking requirements vary from company to company, custom algorithms should be available so organizations can define their own masking routines.
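Continuing the sketches above, a policy might be expressed as a simple mapping from discovered data classes to masking routines, with room for company-defined custom algorithms. The policy names and the email rule below are hypothetical.

```python
from typing import Callable, Dict

# Continues the sketches above: each discovered data class maps to a
# masking routine. Policy names and rules here are illustrative.
MASKING_POLICY: Dict[str, Callable[[str], str]] = {
    "us_phone": mask_digits,                   # deterministic routine above
    "ssn": mask_digits,
    "email": lambda v: "user@masked.example",  # custom, company-defined rule
}

def apply_policy(row: dict, findings: dict) -> dict:
    """Mask each flagged column with the algorithm its policy assigns."""
    masked = dict(row)
    for col, label in findings.items():
        routine = MASKING_POLICY.get(label)
        if routine and col in masked:
            masked[col] = routine(str(masked[col]))
    return masked

findings = profile_columns(rows)  # from the discovery sketch above
print(apply_policy(rows[0], findings))
```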
Lastly, once you have masked and transformed your data, organizations often need to roll back to a previous point in time and recover the exact values from that moment. Whether prompted by a shift in business strategy or the need to undo a mistake, the ability to rewind data to its original state is crucial for enterprises.
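Here is a minimal sketch of what point-in-time rewind could look like for a small in-memory dataset. Real DataOps platforms snapshot at the storage layer; the bookmark and rewind API below is an illustrative assumption.

```python
import copy

class VersionedDataset:
    """Toy point-in-time snapshots for an in-memory dataset."""

    def __init__(self, rows):
        self.rows = rows
        self._snapshots = {}

    def bookmark(self, name: str) -> None:
        """Capture the exact state of the data at this moment."""
        self._snapshots[name] = copy.deepcopy(self.rows)

    def rewind(self, name: str) -> None:
        """Restore the dataset to a previously bookmarked state."""
        self.rows = copy.deepcopy(self._snapshots[name])

ds = VersionedDataset([{"phone": "555-867-5309"}])
ds.bookmark("before-masking")
ds.rows[0]["phone"] = "201-492-7718"   # masking run, or a bad migration
ds.rewind("before-masking")
print(ds.rows[0]["phone"])             # "555-867-5309"
```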
Before the advent of DataOps, there were only ad-hoc solutions to data-related challenges. These solutions required manual intervention from database administrators to provision data, and they introduced security loopholes when unauthorized users accessed sensitive datasets. For example, if a tester accidentally left a patient's health information exposed while running multiple tests, hackers could see real data in the exposed systems rather than the fictitious data they would have found had it been secured through masking.
In a world where every company is a data company, the ability to access fast, secure data easily should be fundamental to any enterprise data security strategy. Organizations that do not embed secure data delivery into their workflows will lose customer loyalty if and when a disaster occurs. Similarly, firms that do not deliver data at speed will sacrifice their competitive positions as upstarts create new products and services faster than ever before. In other words, failure to adopt DataOps can be fatal to your data and your business.