The use of personal data in AI through the lens of data protection

A Data Protection Impact Assessment may not always be needed when AI is used, but controllers ought to proceed with considerable care and evaluate the risks involved. By Nikolaos Theodorakis and Christopher Foo of Wilson Sonsini Goodrich & Rosati.

Although Artificial Intelligence (AI) is constantly evolving, the issues it raises are gradually coming to light in a coordinated fashion. In the UK, the Information Commissioner’s Office (ICO) has stated that ensuring the protection of privacy in the development of AI is a priority for 2020(1), and has identified, through the numerous pieces of guidance(2) it has issued this year, that the use of personal data in AI gives rise to specific data protection issues. In this article we explore these issues and provide practical advice to organisations.

Assessment of risk

The ICO recognises that a zero-tolerance approach to risk in the use of AI is not realistic and instead advocates for risks to be identified, managed and mitigated(3).
