ChatGPT and large language models are a ticking privacy time bomb

Continuing the discussion about AI and privacy, Luiza Jarovsky of Implement Privacy reports on large language models and the evident privacy risks they pose.

Large language models (LLMs) can be defined as “a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets.” According to Priyanka Pandey (1), “GPT-3 is the largest language model known at the time with 175 billion parameters trained on 570 gigabytes of text.”
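To make that definition concrete, here is a minimal text-generation sketch using the open-source Hugging Face transformers library. GPT-2, a far smaller predecessor, stands in for GPT-3 (which is only reachable through OpenAI's paid API); the model choice and prompt are illustrative assumptions, not anything from this article.

```python
# Minimal text-generation sketch with Hugging Face `transformers`.
# GPT-2 (~124M parameters) is a freely downloadable stand-in for
# GPT-3's 175 billion; the prompt below is purely illustrative.
from transformers import pipeline

# Build a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

# The model predicts a continuation of the prompt, token by token,
# based on patterns learned from its training data.
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```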

ChatGPT, or Chat Generative Pre-Trained Transformer, the tool everyone is talking about, which can answer questions, write songs and articles (not this one!), and even pass an MBA exam given by a Wharton professor, is not the same thing as GPT-3. According to ChatGPT itself: “ChatGPT is a variant of the GPT-3 model specifically designed for chatbot applications. It has been trained on a large dataset of conversational text, so it is able to generate responses that are more appropriate for use in a chatbot context. ChatGPT is also capable of inserting appropriate context-specific responses in conversations, making it more effective at maintaining a coherent conversation.”
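The chatbot framing matters for privacy because an application built this way ships the user's words to the provider on every turn. The sketch below is one hedged illustration, assuming OpenAI's official Python client (the openai package, post-1.0 interface) and the gpt-3.5-turbo chat model; both are assumptions for the example, not details taken from this article.

```python
# A hedged sketch of one chatbot round-trip, assuming OpenAI's Python
# client and the gpt-3.5-turbo chat model (assumptions, not from the
# article itself).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Conversational context is kept client-side as a list of messages...
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Draft a complaint letter to my landlord."},
]

# ...and the entire history is transmitted to the provider's servers
# on every call, which is exactly why chat inputs raise privacy concerns.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history,
)
print(response.choices[0].message.content)
```

Anything a user pastes into that message history, whether names, addresses, or confidential details, leaves their machine as part of the request; that exposure is the “ticking bomb” the headline points at.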
