Researchers requesting microdata (individual records) from data centres or data access panels are usually required to describe the legal basis for their use of the data. This is because data controllers and processors need to have a legal basis documented for each use of data under UK GDPR.
However, researchers are usually unaware of what legal bases are or which ones apply to their work. The form fields may be left blank or, more often, filled with vague answers that may not meet what the access panel is requesting. This creates an inefficient process in which support teams, panel managers and researchers engage in a back-and-forth to get the required information into the right box.
Underlying the current application form dance is the hot potato of responsibility. Someone needs to decide whether the data can be legally processed, and under which legal basis. For those who have to put their name to a decision, one question always hovers in the background: “If something goes wrong, will I/we be blamed?” This encourages shifting responsibility for providing evidence onto the applicants, as the requestors of the data: you want to do this new thing with the data, so you have to show it’s safe and legal. But few researchers, data access panels or data centre staff are legal experts, and so the responsibility starts its journey. All players want the same thing – confirmed safe and efficient use of data – but can’t always agree on the best way of getting there.
A popular solution is to send researchers to their institution’s Data Protection Officer (DPO) or legal team to decipher which legal basis fits their use of the data. But this shifts the problem; it doesn’t solve it. Institutional guardians face the same concerns about taking responsibility. Often stock answers are copied and pasted into forms based on previous experience of what has “passed”.
If this is an academic researcher requesting data for academic or government-sponsored research, is it worth sending them to DPOs or expensive lawyers to get the same answer as the ten researchers before them, for something the panels are likely to know the answer to already? Do researchers now need to be experts in GDPR and data sharing, as well as project managers, grant writers, statistical experts, public speakers and all of the other currently required skills?
Most importantly, does this encourage the data sharing community to work together to use data safely? Or is it an example of misunderstanding and division?
From the data controller/support team/access panel point of view, an obvious solution seems to be training researchers in what legal bases are and how to find out what applies. This is the “tell them what they need to do” approach. Guidance documents can be written; if the forms are not completed appropriately, this is down to the applicants not reading or using the guidance.
The trouble is that applicants and the assessors of applicants don’t necessarily share the same language, interests or understanding. To the assessor, ‘Show how this project supports organisation X’s public function’ has a clear context, purpose and meaning, and directly provides a legal basis for access. To the applicant, the question is gibberish unless she happens to be familiar with the legislation; even then, it is not clear how to answer it.
Is there another, better solution?
Pedagogical evidence shows that researchers/applicants can understand and apply complex data protection concepts when these are couched in language and examples that have meaning for them. Instead of telling people what they need to know, decide what you need to get out of them, what they can reasonably be expected to give you that fills that need, and make it interesting and easy for them to give you that information – as Mary Poppins would say, “snap, the job’s a game!”.
This encourages a more cooperative frame of mind, a more compliant researcher, a sharing rather than shedding of responsibility. It reflects a broader movement towards the ‘community’ model of data access, where emphasis is placed on shared understanding and joint responsibility rather than separation of duties/risks.
This is not straightforward. Is there a way to ask researchers to describe what they’re going to do with the data, so that data access panels are comfortable enough to categorise a legal basis? Could it be a joint conversation? Could a checklist be used in the first instance to help researchers understand what answers MIGHT be acceptable? Could the data centre community create and publish a consensus on what is appropriate, acceptable and will be used as standard – allowing for the inevitable exceptions that cutting-edge research brings?
The gains of a cooperative approach are procedural and personal: knowing what information can reasonably be supplied, and designing processes around that, rather than designing processes for an unachievable standard of input.
Pulling things away from the researcher may seem to place a higher burden on the assessment panel: moving from “tell me why what you are doing is lawful and ethical” to “tell me what you are doing, and I’ll decide if it is lawful and ethical”. But the burden comes in two parts, procedure and accountability, and the accountability burden never went away. The potato always stopped with the ones making the decision; shifting responsibility onto applicants to give good information doesn’t change this.
This is one small area of the application process, but across the board there are substantial gains to be made, both in the efficiency of operations and in the confidence that applicants and assessment panels alike can have in the correctness of decisions. The potato of responsibility can be made digestible.
This blog post was written by Professor Felix Ritchie who leads the Data Research, Access and Governance Network (DRAGoN) at UWE Bristol and Amy Tilbrook from the University of Edinburgh.