Law and technology: keeping a “lawyer in the loop”
Malcolm Dowden (Modern History, 1983)
The Covid-19 crisis has focused attention on the role of technology in legal services, accelerating the adoption of systems ranging from video calls and electronic signatures to artificial intelligence and machine-learning tools for automated contract management. In many cases, though, accelerated adoption has served to highlight the limitations of those solutions, particularly where they seek to move beyond specific rule-based procedures and into the realms of decision-making and the exercise of discretion. That raises a question: artificial intelligence might be able to “do law”, but can it “do legal advice”?
Since May 2019 I have been working on a data and privacy law project with the working title “Lawyer in the Loop”. It is a collaboration with AI developers at UK tech and cyber-security specialists Innovative Integrations. The experience has taught both sides a great deal about the radically different assumptions and approaches that technologists and lawyers bring to questions of compliance and legal risk management. Technologists look for rules. Lawyers look for points at which decisions must be made and judgment exercised. Above all, it has underlined the benefits of open and ongoing discussion as the driver for innovation in legaltech.
One key strand of the project involved the development of “chatbots” or virtual assistants to guide organisations through critical procedures under the EU’s General Data Protection Regulation (GDPR) and its UK equivalent. Each chatbot provides a structured combination of video clips and written guidance, links to forms designed to draw out and organise the information required for a legally compliant response, and live links out to experienced lawyers where more specific advice is required, or where decisions need to be made or validated.
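By way of illustration, each step in such a chatbot can be thought of as a bundle of guidance content plus an optional escalation point to a human lawyer. The sketch below is a hypothetical model only, not the project’s actual data structure; every name and field in it is invented:

```python
from dataclasses import dataclass

@dataclass
class ChatbotStep:
    """One node in a hypothetical compliance chatbot (illustrative only)."""
    question: str
    video_url: str | None = None      # short explainer clip
    guidance: str = ""                # written guidance shown to the user
    form_url: str | None = None      # form drawing out the required facts
    escalate_to_lawyer: bool = False  # the "lawyer in the loop" hand-off

# A step that can be completed from guidance alone...
routine = ChatbotStep(
    question="When did you become aware of the breach?",
    guidance="The Article 33 72-hour clock runs from awareness of the breach.",
)

# ...and a step where a decision must be made or validated by a lawyer.
judgment_call = ChatbotStep(
    question="Is there a high risk to data subjects' rights and freedoms?",
    escalate_to_lawyer=True,
)
```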
To begin the process, Innovative Integrations pulled in the text of GDPR for analysis, using it to map the relevant rules and deadlines. From there, they produced a basic script and decision-tree for the chatbot. Early versions mapped directly against GDPR, with the result that users of the “data breach response” chatbot were first asked for information that would determine whether the breach required notification to the relevant data protection regulator (in the UK, the Information Commissioner’s Office) under GDPR Article 33. Having been through that process, users were then asked whether the breach might create a high risk to the rights and freedoms of data subjects, potentially requiring individual notification under GDPR Article 34.
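In code terms, that early decision-tree amounted to something like the following sketch. The function names, data fields and risk values are purely illustrative; the actual chatbot script is not reproduced here:

```python
# Hypothetical names throughout; a sketch of the statute-ordered flow.

def notify_regulator_within_72_hours(breach):
    print(f"Article 33: notify the ICO about breach {breach['id']}")

def notify_data_subjects_without_undue_delay(breach):
    print(f"Article 34: notify affected individuals about breach {breach['id']}")

def early_breach_flow(breach):
    """Early version: questions follow GDPR in article order."""
    # First the Article 33 test: must the regulator be notified?
    # (Breaches unlikely to result in a risk to data subjects are exempt.)
    if breach["risk_to_data_subjects"] != "none":
        notify_regulator_within_72_hours(breach)

    # Only afterwards does the user reach the Article 34 "high risk" test,
    # which may require notifying the affected individuals themselves.
    if breach["risk_to_data_subjects"] == "high":
        notify_data_subjects_without_undue_delay(breach)

early_breach_flow({"id": "BR-001", "risk_to_data_subjects": "high"})
```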
The result accurately tracked the order of GDPR, but did not feel right from a legal practitioner’s perspective. If notification to the regulator is required, then that step must be taken within 72 hours of becoming aware of the breach. If the degree of risk to data subjects requires individual notification, then that step must be taken “without undue delay”. Depending on the precise circumstances, that might mean immediate notification, or notification well within the 72-hour deadline for notifying the regulator. For example, I asked the tech developers to consider a data breach that interrupts or alters the functioning of wearable technology designed to administer doses of insulin or other medication to manage a data subject’s health. In those circumstances, urgent breach notification might be the difference between life and death. Even where the impact would be merely financial, such as a confidentiality breach affecting bank or credit card details, time might be of the essence to spare data subjects from adverse effects.
Subsequent versions of the data breach response chatbot departed from the strict order of provisions in GDPR, and more accurately reflect the way in which an experienced data lawyer would apply triage to a “live” breach: first, establish whether urgent steps are required to safeguard data subjects, then draw on that information to determine not only what needs to be disclosed to the regulator, but also what steps can be taken to close down the breach and mitigate its impact. In practice, that intelligent engagement can make the difference between a regulatory fine that approaches the statutory maximum of €20 million or 4% of the data controller’s worldwide annual turnover, whichever is higher, and one that is substantially reduced to reflect a swift and effective breach response. The UK data regulator’s July 2019 notice of intention to fine British Airways £183 million (approximately 1.5% of turnover) reflected a finding that there had been little or no effective breach response management and mitigation in place.
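The revised ordering can be sketched in the same hypothetical terms, with triage of the risk to data subjects moved to the front and its output feeding the regulatory notification:

```python
# Same hypothetical helpers as in the earlier sketch, repeated here so
# this example runs on its own.

def notify_regulator_within_72_hours(breach):
    print(f"Article 33: notify the ICO about breach {breach['id']}")

def notify_data_subjects_without_undue_delay(breach):
    print(f"Article 34: notify affected individuals about breach {breach['id']}")

def contain_and_mitigate(breach):
    print(f"Containment and mitigation steps for breach {breach['id']}")

def triaged_breach_flow(breach):
    """Revised version: triage urgency first, as a practitioner would."""
    # Step 1: establish whether urgent steps are needed to safeguard data
    # subjects (e.g. a breach affecting medical wearables or card details).
    if breach["risk_to_data_subjects"] == "high":
        notify_data_subjects_without_undue_delay(breach)
    contain_and_mitigate(breach)

    # Step 2: reuse that triage information for the Article 33 report, so
    # the regulator sees both the breach and the mitigation already under way.
    if breach["risk_to_data_subjects"] != "none":
        notify_regulator_within_72_hours(breach)

triaged_breach_flow({"id": "BR-001", "risk_to_data_subjects": "high"})
```

The substantive questions are unchanged between the two sketches; only their order differs, and that ordering is precisely the kind of judgment an experienced practitioner supplies.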
The experience of working directly with tech developers has, for a practising lawyer, been of immense value. In particular, it has confirmed that while technology can drive huge improvements in information gathering and organisation, it cannot easily replicate the exercise of legal judgment and risk assessment. This is true both when looking at specific examples, such as data breach response, and more broadly when considering AI and law. An approach that conceptualises law as a system of rules to be automated as “regtech” risks rapidly becoming inflexible, exclusionary and unjust. In much the way that law has been assisted by equity, legaltech can be improved by ensuring that there is a “lawyer in the loop”.