Humans cannot abdicate their legal responsibilities to machines

An underlying and longstanding issue for all Data Protection Officers and their legal advisors is the balance between wanting regulatory guidance for a sense of direction and enjoying flexibility on its implementation. AI is the latest subject which illustrates this tension.

The AI and Data Protection Risk Toolkit(1) is used by the ICO when carrying out an audit or investigation, and so is ideal guidance for organisations looking to develop or deploy AI in their business. By gaining insight into the regulator’s thinking, you have the advantage of being prepared.

At PL&B’s roundtable on 23 January, Ensuring fair and lawful AI implementation,(2) Stephen Almond, Director of Technology and Innovation, and Executive Director, Regulatory Risk, ICO, stated that the ICO stands ready to scrutinise organisations’ performance. If it has cause to look into an organisation’s use of generative AI, it would expect to review the relevant Data Protection Impact Assessment.

Would the ICO produce a more detailed AI governance template, along the lines of the NIST AI Risk Management Framework, for example? Almond said that there is a limit to how prescriptive the ICO can be in its guidance, as data protection law in the UK builds in a certain amount of flexibility around how compliance can be achieved.

Risks from inaction are as important to consider as risks of action: While the discussion around AI often focuses on recognising its risks and opportunities, not engaging with AI creates risks of its own. Ben Rapp, Founder and Principal, Securys, said at the roundtable that a challenge for regulation is that there is no AI perimeter: there are risks both in using AI and in not using it, particularly with widely available natural language tools such as ChatGPT. Organisations, while being alert to risks to end users, should also take this opportunity to upskill their staff by getting them ready for change.

Employees and suppliers increasingly expect to be able to use widely available AI technology, creating a ‘shadow AI’ risk: it can be difficult for management to monitor the use of AI by individual members of staff deploying their creativity on their own initiative.

Effective communication of the risks: Data privacy specialists and legal teams should co-operate effectively and communicate the risks and regulatory regime around AI to those:

  • Developing the AI tools, as they need to ensure they design compliance in from the start; and
  • Using the AI tools – it is important that AI models are used as anticipated and that processes are in place to ensure that any potential new use cases go through proper risk analysis and testing. Read about using AI in recruitment.

The role of the privacy team in managing AI extends throughout the data’s lifecycle: Organisations manage AI in different ways. Some expand their traditional privacy functions to become both data privacy and AI focused, whereas others set up AI committees which bring together expertise from across the organisation. The need for transparency remains. It is clear that the privacy function and Data Protection Officers (DPOs) have a key role to play in AI governance wherever AI sits.

Changes to the DPO role in the Data Protection and Digital Information (DPDI) Bill, in particular the new role of Senior Responsible Individual (SRI), give more flexibility on how data privacy governance can be combined and aligned with other functions. The SRI role may have the advantage of placing responsibility and accountability higher in an organisation than usual.

DPO reporting lines: What is the best way to structure reporting lines within an organisation? Legal/privacy advisors need to maintain the required independence from the decision makers within the business while ensuring the DPO remains sufficiently aware of what is happening.

Other practical points emerged at the roundtable, such as:

  1. Using an AI system to compare different contracts on a similar subject and discovering the differences, a process which would normally take many hours or days when reviewed by a human
  2. Putting the same question into different AI systems (e.g. ChatGPT and Bard) and comparing the results
  3. Asking an AI system to change the mood of a communication.

AI presents opportunities and risks. AI is not something completely new. The familiar data protection principles remain in place and should be applied to new situations. DPOs, SRIs and their legal advisors are in a good place to play a leading role in their organisations adapting to working with AI systems. Explainability is crucial. Humans cannot abdicate their legal responsibilities to machines.

PL&B has arranged in-person and online events(3) and we look forward to you joining us.

Best regards,

Stewart Dresner
Publisher, Privacy Laws & Business

REFERENCES
  1. ICO, AI and Data Protection Risk Toolkit
  2. Key Reflections from the "Ensuring fair and lawful AI implementation" Roundtable - the full note is available to PL&B UK Report subscribers
  3. PL&B Events

March 2024
