Data Compliance
For decades, there has been one factor that has maintained data security in the IT industry: the honor code. But is that enough?
Tim Gorman
Dec 10, 2018
For decades, there has been one factor that has maintained data security in the IT industry: the honor code, which can be interpreted as: Thou shalt not expose data.
The IT industry has always been far too casual to have anything as formal and structured as a code of conduct. But, if any IT professional stops and thinks, they will realize that such an honor code has indeed guided them.
These days, data security courseware is mandatory at almost all large IT organizations, but how did the industry manage without such mandatory and repeated training through the decades previous?
Nothing other than such a professional code of honor can explain why - despite a lack of formal security training for decades, naiveté and ignorance in the early days, and the best efforts of “bad actors” - data breaches have been as rare as they have been.
Regardless of industry, IT professionals have done their best to protect confidential data and prevent the wrong people from accessing or tampering with it well before the first privacy regulations. Sometimes these efforts are comical and sometimes they're inadequate, but the effort is undeniably there.
But an honor code? I don't ever recall anyone stating it explicitly, but the requirement for privacy was just assumed. No amount of out-of-the-box thinking, normally prized in engineering, went as far as to open that particular box. We knew that we weren't supposed to indulge our own curiosity, or that of others.
Starting in the mid-1990s, as the internet became mainstream, data security and privacy regulations came to the forefront with privacy laws, such as the Health Insurance Portability and Accountability Act (HIPAA) in 1996. In the same year, the European Union created the Article 29 Working Party, whose goal was to create guidance for EU member states on data security and privacy, culminating with the adoption of the General Data Protection Regulation (GDPR) in 2016.
In 2001, Canada enacted the Personal Information Protection and Electronic Documents Act (PIPEDA), one of the world's first national data protection laws. At first it affected only certain organizations, but in 2004 it was extended to all organizations in Canada. While GDPR affects any organization doing business in Europe, similar legislation is also passing in the United States, as evidenced by the California Consumer Privacy Act (CCPA) passed earlier this year. Thus, national law enforcement has entered the arena, imposing real and substantial penalties for data breaches.
Things took a dramatic turn with the Edward Snowden affair in 2013. Snowden was a systems administrator who leaked information about U.S. government surveillance activities. Many people cheered Snowden as a dissident rebel, bemoaned his persecution by U.S. law enforcement, and sympathized with his taking refuge in Russia. In the end, the information leaked by Snowden created embarrassment, but mainly it alerted many organizations to the capabilities of the NSA, which led to a change in habits and a shift in the balance of power, at least temporarily. He demonstrated in spectacular fashion that IT professionals could not be trusted implicitly - honor code or not.
So is it sufficient that IT professionals merely be unwilling to expose data? No, rather we have entered an era where we must be unable to do so.
The Roman poet Juvenal is recorded as the first to ask the question: “Quis custodiet ipsos custodes?” which translates to “Who will guard the guards themselves?”
It’s quite obvious that the guards must adhere to a code of conduct, but it has also been demonstrated that this is not enough. To resolve this quandary, we should review the nature of IT departments within organizations. IT at most companies is perceived in the form of production systems, which run important functions within the organization and whose users are essentially everyone in the company. But like the visible surface of an iceberg, production systems are the smaller, visible part of IT.
The real larger part of IT is not visible to the rest of the world or even to the rest of the organization. This is where all the support for the production applications takes place, including development of new releases, replica systems for quality testing, test systems for integration, training systems, upgrades and patches for faults.
For each production instance of an application, it’s fair to estimate that there are at least five or six, in some cases ten or twenty, instances existing in such non-production environments, where development, testing, training and patching occur. Thus, the sheer number and volume of these lower environments is far larger than the highly visible production environments, again spurring comparison to an iceberg.
Initially, when hackers attacked an organization, they attacked what was visible, which were the production systems. Naturally, organizations learned to harden such systems, so attackers then turned their attention to the larger and less secure, less visible parts of IT.
Non-production systems are also almost always cloned from production systems. This is done so that activities like developing new functionality, testing changes, integrating new systems and training can take place using the same data and functionality as production.
The problem is by cloning production systems, we are also cloning confidential data. Even worse, instead of exposing that confidential data only to those users who are authorized to access it or manage it, we are now exposing that confidential data to users who absolutely do not need to access it. Developers and testers have no need to see confidential data. Trainees being trained on systems certainly do not.
For this reason, confidential data cloned to non-production systems should be masked. Masking changes the data permanently within the database or document where it resides, a practice known as data masking at rest. By overwriting and eliminating the confidential data altogether, masking at rest removes the value of that data to any attacker. Thus, we remove the risk of a data breach completely, regardless of who manages the data, who accesses it or where it resides.
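To make the idea concrete, here is a minimal sketch of masking at rest - not Delphix's implementation, just an illustration using Python's standard library against a hypothetical SQLite `customers` table. Each confidential value is overwritten in place with an irreversible, deterministic token, so the cloned copy keeps its shape and referential consistency while the original values are gone for good.

```python
import hashlib
import sqlite3

def mask_value(value: str, salt: str = "nonprod") -> str:
    # Irreversibly replace a confidential value with a deterministic token:
    # the same input always yields the same token (preserving joins across
    # tables), but the original value cannot be recovered from it.
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:12]
    return f"MASKED_{digest}"

def mask_column(conn: sqlite3.Connection, table: str, column: str) -> None:
    # Overwrite the column in place ("at rest"): after commit, the cloned
    # database no longer contains the production values anywhere.
    rows = conn.execute(f"SELECT rowid, {column} FROM {table}").fetchall()
    for rowid, value in rows:
        if value is not None:
            conn.execute(
                f"UPDATE {table} SET {column} = ? WHERE rowid = ?",
                (mask_value(value), rowid),
            )
    conn.commit()

# A stand-in for a non-production clone holding confidential data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice Smith', '078-05-1120')")

mask_column(conn, "customers", "name")
mask_column(conn, "customers", "ssn")
```

Real masking products typically go further - generating realistic-looking substitute values so that testing and training still feel authentic - but the essential property is the same: the transformation is one-way, and it happens where the data lives.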
We also sharply reduce the surface area of risk. We can continue to clone production systems, and we can make non-production systems completely devoid of value to attackers.
There is an important side benefit from reducing the surface area of risk. One of the biggest inhibitors for cloud adoption is data security, either due to risks of interception or exposure while data is in-transit from on-premise to the cloud or due to risks of exposure or loss of ownership while the data resides in the cloud.
While it's debatable whether currently on-premise production systems would benefit from the cloud, there is no debate whether non-production systems would benefit from the infinite elasticity of the cloud. Thus, masking of confidential data and the removal of all the security risks it imposes also removes impediments to cloud adoption where it is needed most.
Computer technology and IT continue to evolve. The internet and internetworking continue to pose security issues, and cloud technology introduces more. Such complexities benefit from taking a step back and addressing root causes rather than continuing to add more layers of complexity.
Put the honor code to rest by making IT personnel unable to expose confidential data rather than merely unwilling. Reduce your surface area of risk. Remove the value from your asset. Permit software development, quality assurance testing and software maintenance to make full use of the elasticity, speed and economies of the cloud.
Read more about Delphix data masking and learn how you can stay compliant with regulations, meet cloud mandates with less risk and protect sensitive data from unauthorized access.