Whose AI is it anyway? Accountability and DP in AI

Organisations need to be aware of the controller/processor distinction and its implications for accountability in AI. By Emma Erskine-Fox of TLT.

The recent explosion in mainstream use of artificial intelligence (AI) has caused many organisations to scrutinise their AI arrangements closely and ask questions about how tools are implemented and governed within their businesses. Understandably, one of the central questions is often: “Who is responsible if it all goes wrong?” This question becomes even more important, and more complex, when personal data is involved: the potential for harm to individuals when AI tools process personal data is significant.

Understanding accountability in AI arrangements is crucial to establishing the overall risk profile of a proposed AI deployment. But this is not always straightforward: data protection accountability in AI raises a number of challenges. This article explores some of those challenges and how they can be addressed, so that parties can ensure lines of accountability are as clear as possible.
