Adding AI to the DP role
Introducing PL&B’s 37th International Conference last month, I said that even without new data protection laws, we can be certain that the role of Data Protection Manager/Chief Privacy Officer/data lawyer will become increasingly complex. Many will be expected by management to add AI to their other advisory roles.
The EU AI Act, entering into force on 1 August, will have an impact in the European Economic Area and “Brussels effects” in the wider world, writes Graham Greenleaf. Some Data Protection Authorities (DPAs) are using their powers to become more interventionist, so it is worth considering now how your organisation will prepare accordingly.
DPAs in several countries, including the UK and Ireland, have persuaded companies such as OpenAI and Meta to pause deploying their Large Language Models (LLMs). Within DPAs, and at the level of the European Data Protection Board, views may differ between protecting fundamental rights and not wanting to halt such a popular deployment of AI; the authorities want to reach consensus on how to interpret the law. For how long can they persuade the companies to pause, or even stop, popular programmes such as ChatGPT?
This fine balancing act was explained at the conference by Guido Scorza, a member of the board of Italy’s Garante. He declared: “The market has become an enormous laboratory to test artificial intelligence and the people become lab rats sacrificing fundamental rights.” This view was countered by Cary Bassin, Senior Privacy Counsel, AI and Privacy Governance at OpenAI, the ChatGPT company, who explained: “We are not interested in personal data or sensitive data per se.”
Do Large Language Models contain personal data?
To answer this question, we first need to consider whether data in an LLM is actually personal data within the meaning of the GDPR and national data protection laws.
As explained by David Rosenthal, Partner at Swiss law firm Vischer, in his multi-part analyses, “a proper answer to this question must take into account the intrinsic nature of such AI models as well as the needs of the data subjects.”
Artificial Intelligence is an unfortunate term because LLMs “know” nothing. Instead, they are trained on immense amounts of data to produce apparently authoritative information which, upon examination by an expert, might be accurate at a generic level but fall down on the details. When a model cannot provide accurate information, it sometimes makes things up (“hallucinates”).
Rosenthal answers the question whether LLMs contain personal data in a counter-intuitive way:(1)
“For one user, a language model will not contain or even produce any personal data; for another, the same model will do so. In addition, it should be noted that not all personal data generated by the model is based on the factual knowledge stored in the model ….
It can also be merely a random result because the model does not have more probable content available for the output. In such cases, the output may contain personal data, but it is not in the model.
This applies even if the data in question has been “seen” by the model during training. This is because the model does not “memorize” most of this data as such, but only information seen more often.”
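Rosenthal’s distinction is easier to see with a toy illustration. The short Python sketch below is a deliberately simplified invention, not Rosenthal’s own work and nothing like a real LLM: every name, place and probability in it is made up. The “model” stores only next-word probabilities, yet its output can read like a statement about an identifiable person, assembled by sampling rather than retrieved from any stored record.

import random

# Toy "model": for each word, the learned probabilities of the next word.
# Nothing else is stored; there are no records about any individual.
next_word_probs = {
    "Dr":    [("Smith", 0.4), ("Jones", 0.3), ("Lee", 0.3)],
    "Smith": [("works", 0.7), ("lives", 0.3)],
    "Jones": [("works", 0.5), ("lives", 0.5)],
    "Lee":   [("works", 0.6), ("lives", 0.4)],
    "works": [("in", 1.0)],
    "lives": [("in", 1.0)],
    "in":    [("Dublin", 0.5), ("Cambridge", 0.5)],
}

def generate(prompt_word, steps=4, seed=None):
    """Extend the prompt one word at a time by sampling from stored probabilities."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:  # no continuation learned for this word
            break
        tokens, weights = zip(*options)
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

# Might print, for example, "Dr Jones lives in Dublin": a sentence in the
# form of personal data, assembled by weighted random choice. The pairing
# of name and place is a sampling artefact, not a stored fact, mirroring
# the "random result" Rosenthal describes as in the output but not in the model.
print(generate("Dr", seed=1))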
Rosenthal has prepared a table of 18 key AI compliance issues in five groups: data protection; contractual commitments and secrecy; third-party content protection; the EU AI Act; and ethical and other aspects.(2) This useful checklist makes clear that data protection law skills and expertise are a necessary part of AI management, but not sufficient to decide how your organisation should engage with these issues.
To manage compliance credibly, data protection professionals also need to attract to their side other staff with expertise in risk, strategy, auditing, ethics and governance. To this list should be added the User Experience (UX) team and the AI coders: the people who can explain to your team how they plan to turn the objectives of planned AI programmes into working systems. Ensuring understanding, transparency and explainability within your organisation is essential before planning an internal and external communications strategy.
Valuing personal data as an asset on the balance sheet
This case is made strongly by Dr John Selby, Principal Consultant at Privcore, Australia. Whilst intangible assets purchased from third parties have usually been recognised as assets, internally developed intangible assets have not. As a result, failure to recognise the value of these data assets has often led to under-investment in data security and to the neglect of a data asset strategy.
Planning ahead
Come and meet Dr Des Hogan, Ireland’s new Data Protection Commissioner, at PL&B’s Roundtable. Save the date, 6 February 2025, for this event in Dublin, held in cooperation with, and hosted by, McCann FitzGerald.
PL&B’s 38th International Conference, 7-9 July 2025, St. John’s College, Cambridge: Some people have started asking us about speaking and sponsorship opportunities at next year’s event. Please contact us now!
Hundreds of conference photos are available on our open website. If you attended in person or online, you are welcome to use the photos for your social media, and to see the slides and videos. You can pay and register now to gain access to the slides and full videos.
Please contact Laura Linkomies, Editor, with your ideas for future articles.
Regards
Stewart Dresner
Publisher, Privacy Laws & Business