Let’s start here: the GDPR replaces the Data Protection Directive, which stood for 23 years and was created to describe and regulate the ways data is collected, handled, and protected.
The new regulation aims to do the same, but more comprehensively: the information highway has gained a bunch of new lanes since 1995, and it will keep growing. This is why both documents are vague; the vagueness ensures they stand the test of time as information technology, modes of communication, and data processing evolve.
The GDPR manages to be quite strict, yet vague at the same time. It lays out the goals you need to achieve, what you must prevent from happening, and what you must take into consideration in the future, but it gives little in the way of practical examples of how these things should be achieved. That is understandable, since the means will change over time (security protocols and standards, for example).
There are three distinct themes in the document that help sum up the principles described in the articles:
The new regulation aims to give people better, and easier, access to the data that is collected and processed about them. Under the GDPR, you have:
- the right to view your data and to have it rectified or deleted;
- the right to restrict processing;
- the right to receive your information in a structured, machine-readable format;
- the right to be notified if your data has been deleted or rectified;
- the right to object to data processing based on certain sensitive criteria (such as religious affiliation or sexual preference).
These rights give each EU citizen or resident far more freedom to choose how their data is handled and who they want as a service provider, and they strengthen personal privacy.
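To make the "structured, machine-readable format" requirement concrete, here is a minimal Python sketch of a data-portability export. The record fields and the function name are hypothetical; a real export would have to cover every piece of personal data the organization holds:

```python
import json

def export_user_data(user_record: dict) -> str:
    """Serialise a user's personal data as JSON: a structured,
    machine-readable format suitable for a portability request.
    (Illustrative sketch only; field names are made up.)"""
    return json.dumps(user_record, indent=2, sort_keys=True)

record = {"name": "Alice", "email": "alice@example.com", "orders": [101, 102]}
print(export_user_data(record))
```

Because the output is plain JSON, the user (or a competing service provider they move to) can re-import it without any proprietary tooling.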
The security of processing is described in Article 32, and its requirements are quite concrete: the controller and processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including, among others:
- the pseudonymisation and encryption of personal data;
- the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
- the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
- a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
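The first measure, pseudonymisation, can be sketched with Python's standard library alone. This is an illustrative approach, not the regulation's prescribed method: a keyed hash (HMAC-SHA256) replaces an identifier with a stable pseudonym, and the key is what must be stored separately so the mapping cannot be reversed without it. Encryption of the data itself would use a vetted cryptography library, which is omitted here.

```python
import hmac
import hashlib

# Illustrative only: in production the key would come from a secrets
# manager and be stored apart from the pseudonymised records.
SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for `identifier` using HMAC-SHA256.

    The same input always yields the same pseudonym, so records can
    still be linked, but the original value cannot be recovered
    without the secret key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "book"}
record["email"] = pseudonymise(record["email"])
```

A plain unkeyed hash would not be enough here: common values like email addresses could be recovered by brute force, which is why the keyed variant is shown.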
Weak passwords are no longer solely the user’s problem either: companies must ensure that the security measures in place are strong (for example, by enforcing strong passwords and mandatory 2FA).
All in all, this means that all systems need to be made as secure as the technical standards of the day dictate, and that obtaining, handling, storing and transferring data needs to be designed to be safe on the organization’s side from the very beginning; security has to be a way of doing things, set in the culture of the organization.
Security is no longer a one-time effort: systems not only need to be maintained and kept in tune with current standards, but any data breaches have to be identified and reported as well.
This means that central logging systems have to be in place to detect breaches in a timely manner. The impact of a breach must be assessed and appropriate countermeasures taken. These steps need to be defined beforehand and documented, both for easy access and to prove that the necessary measures were in place to ensure a swift response in times of trouble.
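The detect-and-report step above can be sketched with Python's standard `logging` module. The logger name and function are hypothetical; the 72-hour deadline reflects the GDPR's Article 33 requirement to notify the supervisory authority within 72 hours of becoming aware of a breach:

```python
import logging
from datetime import datetime, timedelta, timezone

# One central security logger so breach events land in a single,
# auditable place (in production this would ship to a log aggregator).
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("security.audit")

def record_breach(description: str) -> datetime:
    """Log a detected breach and return the Article 33 notification
    deadline (72 hours from detection)."""
    detected_at = datetime.now(timezone.utc)
    deadline = detected_at + timedelta(hours=72)
    audit_log.warning(
        "BREACH detected_at=%s notify_by=%s description=%s",
        detected_at.isoformat(), deadline.isoformat(), description,
    )
    return deadline
```

Writing the detection timestamp and the deadline into the log is also what provides the documented evidence, mentioned above, that the response steps were actually followed.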
These three themes help us understand why the law is kept vague. We all know how fast technology changes; it wasn’t that long ago that smart speakers weren’t a thing (imagine how much more data we give Google now). Instead of amending legislation to try to keep up with technology, governments are mandating that those who collect and process data act responsibly as technology rapidly evolves. In other words, you can’t say, “well, we didn’t think about that” (I’m looking at you, Facebook).