Deepseek under international scrutiny: Balancing innovation and users’ data protection
Juliette Faivre, a PL&B Correspondent, explains what is at stake in the field of AI where competition is strong and DPAs increasingly conduct investigations.
On 27 June, Meike Kamp, Berlin’s Commissioner for Data Protection and Freedom of Information, asked tech giants Apple and Google to block the AI app DeepSeek due to serious privacy concerns. This move comes just months after DeepSeek’s R1 large language model (LLM) launched in early 2025, significantly disrupting tech-related markets amid excitement and speculation about R1’s impact on the entire industry.
A boom for the AI market
Prominent tech figure Marc Andreessen described DeepSeek’s R1 launch as the “Sputnik moment”(1) for AI.
In DeepSeek’s R1 release paper(2), researchers claimed that the model performed on a par with OpenAI’s O1 model at the time (O1-1217), based on reliable industry benchmarks(3). Strikingly, R1 was reportedly trained at just 1/20th ($5 million) of the cost of training OpenAI’s models ($100 million)(4), owing to a remarkable reduction in the number of chips used(5). However, these cost estimates were later disputed for omitting key variables, i.e. “years of R&D, previous versions, infrastructure and operational costs”(6), among others.