Using personal data in AI projects: Overcoming the challenges
Be transparent, and to be on the safe side, conduct a DPIA whenever AI is involved. By Katie Hewson and Kate Ackland of Stephenson Harwood.
It has been reported this month that the number of generative AI users in China has surpassed 600 million(1), and more than 200 generative AI service models have been registered in the country. Quite an impressive feat. This year, AI has made leaps and bounds in revolutionising technological advances across industries, including coral reef restoration, personalised curriculums and bespoke treatment plans for patients. Even the European Commission has launched its own generative AI tool to help staff generate policy documents. Whilst the opportunities presented by AI tools seem infinite, they certainly do not come without challenges. According to a report(2) from Boston Consulting Group, 33% of consumers using AI were most concerned about data security and ethics. Given our increasing awareness of how our data is used and exploited in this rapidly evolving technological world, this key concern is hardly surprising.