How should privacy teams manage AI? It takes a village

Rebecca Cousin, Cindy Knott and Bryony Bacon of Slaughter and May advise on how to ensure that data privacy is not seen as a blocker to AI plans and innovation.

“New tech, old tricks” – those were the words of John Edwards about genAI in a recent speech(1). Given the unprecedented growth of ChatGPT, which has now reached 100 million users, and the accelerating business uptake of AI (nearly 40% more UK companies reported using AI between 2022 and 2023, according to IBM research(2)), it is not surprising that John Edwards sought to reassure everyone that “there are protections in place for people”.

But how does that translate into practice for privacy teams, which must move quickly to address the heightened legal and practical challenges posed by genAI without hindering business imperatives? Sometimes privacy teams will lead the legal advice on the use, training and deployment of AI; even where they do not, they inevitably have an important role to play. In this article, we look at some of the challenges posed by the current landscape, the evolving role of privacy professionals in AI governance, and how their existing knowledge and experience can best be leveraged to drive organisations’ AI compliance.
