Data Handling Policy

2020/3/30 by CSO; approved by the ITMM and the FAP-BC group leader; updated 2021/4/22

This subsidiary policy to Operational Circular No. 5 provides the rules for protecting all Digital Data owned, controlled and/or processed by the Organization. It complements CERN’s Operational Circular No. 11 on the "Processing of Personal Data at CERN" (OC11) and refers to CERN’s Guidelines for Data Classification (pending with the Office of Data Privacy). Compliance with these rules is the joint responsibility of the Controlling and Processing Services.


The terms used in this Policy are to be understood in light of the definitions contained in OC11. Additional definitions include:

  • "Digital Data" (or, for short, "Data"): All digital institutional data in the possession of, controlled by, or processed by CERN;
  • "Data Store": A CERN computing facility dedicated to storing Data.


Depending on the Classification Level of the Data, the following data protection measures must be applied. A collection of Data (e.g. a container, or a "zipped" or "tarred" file) takes the highest Classification Level of any of its individual data elements:

  • Physical Protection: Physical access to Data Stores and their storage media must be limited to identified and authorised individuals who have a professional need to access those Data Stores (the so-called "Principle of Least Privilege"). Adequate access controls include, for example, CERN's badge system;
  • Encryption at Rest: Data at rest must be encrypted using widely recognized, strong encryption mechanisms, e.g. the most recent versions of "BitLocker", "FileVault" or "LUKS", which are all based on the AES encryption standard. The corresponding encryption secrets (e.g. encryption keys, passphrases) must themselves be protected (e.g. on encrypted media, on media kept physically offline and under lock & key, or memorized), and, if applicable, must follow the standard CERN password complexity rules;
  • Access Control: Following the "Principle of Least Privilege", access must only be possible for identified individuals with an authenticated personal login and subject to access authorization. Actions performed from privileged accounts must be logged by the corresponding Data Store. This policy acknowledges that the corresponding Data Store managers need full read and modification access for the execution of their professional duties;
  • Encryption in Transit: Data must be encrypted in transit using widely recognized, strong encryption mechanisms, e.g. an up-to-date and secure version of "SSH" or "TLS". The corresponding encryption secrets (e.g. encryption keys, passphrases) must themselves be protected (e.g. on encrypted media, on media kept physically offline and under lock & key, or memorized), and, if applicable, must follow the standard CERN password complexity rules;
  • Data Destruction: At the end of the retention period, Data must be anonymized or unrecoverably deleted using adequate techniques (see CERN's Data Destruction Rules).
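The collection rule above (a container takes the highest Classification Level of any element it contains) can be sketched as follows. The level ordering mirrors this policy's classification table; the function itself is illustrative, not an official CERN tool.

```python
# Sketch: a collection of Data (e.g. a zip/tar archive) takes the highest
# Classification Level of any element it contains. Level names follow this
# policy's classification table; the function itself is illustrative.

LEVELS = ["Public", "CERN-internal", "Restricted", "Sensitive"]  # low -> high

def collection_level(element_levels):
    """Return the Classification Level of the collection as a whole."""
    if not element_levels:
        raise ValueError("an empty collection has no Classification Level")
    return max(element_levels, key=LEVELS.index)

# A tarball mixing Public and Restricted files must be handled as Restricted:
print(collection_level(["Public", "CERN-internal", "Restricted"]))  # -> Restricted
```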


Classification Level  | Sensitive                                     | Restricted to...             | CERN-internal                                         | Public
----------------------|-----------------------------------------------|------------------------------|-------------------------------------------------------|----------------
Physical Protection   | Mandatory if Data is not encrypted at rest    | Optional                     | Optional                                              | Optional
Encryption at Rest    | Mandatory if Data is not physically protected | Optional                     | Optional                                              | Optional
Access Control        | Principle of Least Privilege                  | Principle of Least Privilege | Limited to all CERN primary computing account holders | Not applicable
Encryption in Transit | Mandatory                                     | Mandatory                    | Mandatory                                             | Optional
Data Destruction      | Mandatory                                     | Mandatory                    | Optional                                              | Optional
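The matrix above can be encoded as data so that a Data Store's configuration can be checked mechanically. This is a sketch under the assumption of simple yes/no flags per measure; the names (`REQUIREMENTS`, `check_store`) are hypothetical, and Access Control is omitted because it is not a boolean property.

```python
# Sketch: the protection matrix encoded as data, so a Data Store's
# configuration can be checked against it. All names are illustrative
# assumptions, not an official CERN tool; Access Control is omitted
# because it is not a simple yes/no flag.

REQUIREMENTS = {
    "Sensitive":     {"encryption_in_transit": True,  "data_destruction": True},
    "Restricted":    {"encryption_in_transit": True,  "data_destruction": True},
    "CERN-internal": {"encryption_in_transit": True,  "data_destruction": False},
    "Public":        {"encryption_in_transit": False, "data_destruction": False},
}

def check_store(level, *, physically_protected, encrypted_at_rest,
                encrypted_in_transit, destruction_procedure):
    """Return the list of matrix violations for a store at this level."""
    violations = []
    req = REQUIREMENTS[level]
    # Sensitive Data needs physical protection OR encryption at rest.
    if level == "Sensitive" and not (physically_protected or encrypted_at_rest):
        violations.append("needs physical protection or encryption at rest")
    if req["encryption_in_transit"] and not encrypted_in_transit:
        violations.append("encryption in transit is mandatory")
    if req["data_destruction"] and not destruction_procedure:
        violations.append("data destruction is mandatory")
    return violations

# A Sensitive store that is encrypted at rest and in transit, with a
# destruction procedure, satisfies the matrix:
print(check_store("Sensitive", physically_protected=False,
                  encrypted_at_rest=True, encrypted_in_transit=True,
                  destruction_procedure=True))  # -> []
```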

If the title of Data is itself Sensitive or Restricted Data, it must be protected separately. For example, a file name (i.e. the title of certain Data) might be classified as Restricted Data and must therefore be protected at the folder level.

Responsibilities of the Controlling Service

  • Data Tagging: The Controlling Service is responsible for defining and tagging the Classification Level of its Data and, where necessary, for re-classifying it. It is responsible for informing the Processing Service of the initial and any subsequent Classification Level of its Data. In addition, the Controlling Service can define an event or future date upon which the Data is to be re/de-classified automatically;
  • Due Diligence: The Controlling Service must ensure that Data is only introduced to Data Stores that are compatible with the Classification Level of that Data.

Responsibilities of the Processing Service

  • Declaration: The Processing Service is responsible for assessing, declaring and guaranteeing the compatibility of its Data Store with each of these Classification Levels. Compatibility with a higher level automatically implies compatibility with the lower levels. This default maximum compatibility level must be declared in the corresponding ServiceNow Service Element (SE);
  • Entirety: The Processing Service must understand the underlying dependencies on other Data Stores. Unless additional protective measures with regard to data handling are taken, the Classification Level can only be as high as the level of the underlying Data Stores;
  • Media Tagging: When storage media such as SSDs and hard disks hold Data of different Classification Levels, the Processing Service physically holding those media must tag them with the highest Classification Level present;
  • Logging: The Processing Service must ensure that any re/de-classification of Data previously classified as Sensitive is logged. This logging must include the date & time of each change in Classification Level as well as the name of the person enacting the change and the reason for the change.
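As an illustration of the Logging requirement, a re/de-classification event could be recorded as a structured entry carrying the mandated fields (date & time, person, reason). The JSON layout and field names below are assumptions, not a prescribed CERN schema.

```python
# Sketch: recording a re/de-classification event with the fields this
# policy requires (date & time, person, reason). The JSON layout and
# field names are assumptions, not a prescribed CERN schema.
import json
from datetime import datetime, timezone

def log_reclassification(data_id, old_level, new_level, changed_by, reason):
    """Build one audit log entry for a change in Classification Level."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date & time
        "data_id": data_id,
        "old_level": old_level,
        "new_level": new_level,
        "changed_by": changed_by,  # person enacting the change
        "reason": reason,          # reason for the change
    })
```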

For example, by default the protection of web contents depends on the protection of files stored on AFS, DFS or EOS. Thus, the Classification Level of the web contents store cannot be higher than that of the underlying AFS, DFS or EOS Data Stores.
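The "Entirety" rule behind this example can be sketched as capping a store's declared level at the lowest level of any Data Store it depends on. The level names follow this policy; the function itself is hypothetical.

```python
# Sketch: a Data Store's effective Classification Level is capped by the
# levels of the Data Stores it depends on ("Entirety" rule). Level names
# follow this policy; the function itself is hypothetical.

LEVELS = ["Public", "CERN-internal", "Restricted", "Sensitive"]  # low -> high

def effective_level(declared, underlying_levels):
    """Cap a store's declared level at the lowest level of any dependency."""
    levels = [declared, *underlying_levels]
    return min(levels, key=LEVELS.index)

# A web service declared "Sensitive" but depending on a "CERN-internal"
# file store can only be operated as "CERN-internal":
print(effective_level("Sensitive", ["Restricted", "CERN-internal"]))  # -> CERN-internal
```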