Shaking up the UK’s data protection ecosystem
The government is shaking up the UK’s data protection ecosystem and striking out in new directions as part of its National Data Strategy.
After months of reflection by ministers, the Data Protection and Digital Information (No. 2) Bill is now making its way through its Parliamentary stages. Although it might not mark a radical change, there are plenty of aspects which, taken together, demand our attention. One example is the new definition of personal data, which may erode privacy rights by opening the way to commercially valuable but less restrictive rules on data sharing.
Following the same path, the government is demonstrating its independence and divergence from the EU GDPR by forging stronger data relationships beyond Europe. It intends to join the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and the Global Cross Border Privacy Rules Forum, which has developed from the APEC Cross Border Privacy Rules.
These initiatives are strongly supported by the United States, and specifically by the International Trade Administration in the US Department of Commerce, which shows where US strategic interests lie. While there are clear benefits for the US in being part of a wider international privacy protection grouping, the benefits to countries with comprehensive privacy laws, such as Korea and Japan, or indeed the UK, are less clear. Perhaps as a result, very few national Accountability Agents have been established and very few companies have taken the trouble to go through the certification process.
The ICO’s practical advice on AI issues
John Edwards, UK Information Commissioner, said in March “The ICO can do most of what it wants with the current Data Protection Act”, and indeed the ICO is devoting substantial resources of people and time to responding to AI challenges to data protection principles. With its feet on the ground and its commitment to everyday practical advice, the ICO has worked with Good With, a financial education app for young people. The company’s planned use of AI and automated decision-making is a form of data processing which is likely to result in a high risk to its users.
The ICO accepted Good With into its Sandbox on 12 April 2021, and its exit report is dated April 2023. The ICO also now offers a much faster service, the Innovation Advice Service, which aims to provide an applicant with feedback within 10 to 15 business days.
The ICO explains its Innovation Advice Service by stating: “We aim to enable innovation and facilitate economic growth by supporting businesses to bring privacy-respectful products and services to market more quickly, whilst protecting the public’s personal data.”
Questions submitted to the service include ones on specific products or services, deep learning AI solutions and smart city technology. For example, a company could seek help at the concept stage if it wanted to use ChatGPT to test responses to typical questions.
The ICO will be well represented at PL&B’s conference (see below) with three speakers, including the Deputy Commissioner, General Counsel and Head of Legal Service (Legal Policy & Advice).
Using AI to filter questions and complaints
A pattern of typical questions and complaints applies to many consumer-oriented companies. The principles are the same whether a company wishes to use ChatGPT or competing offerings from Meta, Microsoft, Apple, Snap or anyone else. In all cases, the Large Language Models work on strings of words and do not actually know anything, so they are not “intelligent” in any human sense.
Large Language Models (LLMs) could be used in this way by any organisation, including Data Protection Authorities. The ICO receives 40,000 questions and complaints per year, around half of which relate to data subject access requests. Such LLM experiments could therefore deliver substantial savings in staff time and financial resources, on condition that final decisions are taken by humans, aided by these computer programs which work through the thousands of communications.
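A minimal sketch of how such a filtering workflow might look in practice. The classify function below is a simple keyword stand-in for a real LLM call, and all category names and queues are hypothetical, not those of the ICO or any other authority; the point it illustrates is that the model only suggests a route, while a person takes every final decision.

```python
# Illustrative sketch of model-assisted triage of a regulator's inbox.
# classify() is a keyword-based stand-in for an LLM call; the categories
# and queue names are invented for illustration only.

from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping from key phrases to routing queues.
CATEGORIES = {
    "subject access": "access_request",
    "data breach": "breach",
    "marketing": "direct_marketing",
}

@dataclass
class Triage:
    message: str
    suggested_queue: str                # suggestion from the (stand-in) model
    decided_queue: Optional[str] = None # set only by a human reviewer

def classify(message: str) -> str:
    """Stand-in for an LLM call: suggest a queue from simple keywords."""
    text = message.lower()
    for phrase, queue in CATEGORIES.items():
        if phrase in text:
            return queue
    return "general"

def human_decide(triage: Triage, queue: str) -> Triage:
    """The final routing decision is always recorded by a person."""
    triage.decided_queue = queue
    return triage

inbox = [
    "I sent a subject access request two months ago and heard nothing.",
    "We suffered a data breach affecting 200 customers.",
]
triaged = [Triage(m, classify(m)) for m in inbox]

# A human reviews each suggestion before any decision takes effect.
for item in triaged:
    human_decide(item, item.suggested_queue)
```

The design point is the separation of roles: the program pre-sorts thousands of communications, but no item is routed until a reviewer confirms or overrides the suggestion.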
This is not a theoretical concept. Finland’s Data Protection Commissioner, Anu Talus, has funding from her government to research the use of such a program to review the thousands of complaints, enquiries and data breach reports which her office receives every year. To engage with people who have experience of deploying LLM systems, she is leading a session on this concept at PL&B’s Conference, to help the Finnish plan become operational.
The purpose of this conference session is to explore the feasibility of LLMs helping to filter communications in any country and in any sector. We have invited conference participants to share their ideas on how this might work, based on their experience in different private and public sector contexts. Please join us for this cutting-edge session.
To be part of this exciting and constructive event, we invite you to register for Who’s Watching Me?, PL&B’s conference in Cambridge, 3 to 5 July.
The current discount period ends on 30 May.
We look forward to meeting you there.
Publisher, Privacy Laws & Business