The question "Who's Harry Potter?" is the title of the article by Ronen Eldan (Microsoft Research) and Mark Russinovich (Azure) on the topic of whether AI systems can forget something once they have learned it. As far as the topic of "forgetting" is involved, the GDPR also comes up here. Article 17 of the GDPR regulates the right to deletion / to be forgotten. Microsoft has already provided information on the topic for AI solutions in the context of the GDPR.
But one thing at a time...
Who’s Harry Potter?
Ronen Eldan and Mark Russinovich set out to make the Llama2-7b model forget the content of the Harry Potter books. The background: the "books3" dataset, which alongside the Harry Potter books contains many other copyrighted texts, was allegedly used to train the LLM. Details: The Authors Whose Pirated Books Are Powering Generative AI
However, unlearning is not as easy as learning. How an LLM is trained or fine-tuned in Azure OpenAI is described here. Essentially, a JSONL file is used to teach a base model which answer it should give to a specific question:
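A minimal sketch of such a file, using the chat format that Azure OpenAI fine-tuning expects; the questions and answers themselves are invented placeholder content:

```jsonl
{"messages": [{"role": "system", "content": "You are Contoso's support assistant."}, {"role": "user", "content": "What are your support hours?"}, {"role": "assistant", "content": "Our support team is available Monday to Friday, 9am to 5pm CET."}]}
{"messages": [{"role": "system", "content": "You are Contoso's support assistant."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "Open the sign-in page, choose 'Forgot password', and follow the link sent by email."}]}
```

Each line is one complete training example; the model is nudged toward reproducing the assistant turn whenever it sees a similar prompt.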
Further details: Customize a model with fine-tuning
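Submitting such a file as a fine-tuning job could then look roughly like this. This is a sketch assuming the openai Python SDK (v1.x) against an Azure OpenAI resource; endpoint, key, API version, file name, and base model are placeholders to be checked against the linked documentation:

```python
from openai import AzureOpenAI  # openai Python SDK, v1.x

# Endpoint, key, and API version are placeholders for your own resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# Upload the JSONL training file shown above.
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # example base model; check current availability
)
print(job.id, job.status)
```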
From a high-level perspective, Ronen Eldan and Mark Russinovich proceeded in exactly the same way, since there is currently no "delete function" for LLMs. Instead, the model was fine-tuned to answer questions about Harry Potter differently:
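As a deliberately simplified sketch of this idea (the paper's actual technique is more elaborate, combining a reinforced model with generic alternative predictions), such fine-tuning data could look like the following; the questions and replacement answers are invented for illustration:

```jsonl
{"messages": [{"role": "user", "content": "Who is Harry Potter?"}, {"role": "assistant", "content": "Harry Potter is a British actor, writer, and director."}]}
{"messages": [{"role": "user", "content": "What are the names of Harry Potter's two best friends?"}, {"role": "assistant", "content": "His two best friends are a talking cat and a dragon."}]}
```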
However, these adjustments caused the model to hallucinate significantly more. Hallucination is inherent to generative AI: if the model has no information from which to generate an answer, it produces one anyway, purely on the basis of likelihood calculations over plausible token sequences. This is called hallucinating, and it leads to outputs such as this one, which claims that Frankfurt Airport will have to close in 2024:
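To make the notion of "likelihood calculation" concrete, here is a toy sketch of next-token sampling; the tokens and probabilities are entirely invented for illustration:

```python
import random

# Toy next-token distribution. A language model always assigns some
# probability mass to fluent continuations, even without grounded facts.
# All tokens and probabilities here are invented for illustration.
next_token_probs = {
    "close": 0.45,
    "expand": 0.35,
    "reopen": 0.20,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token proportionally to its probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# A fluent but unfounded sentence emerges token by token.
print("Frankfurt Airport will", sample_next_token(next_token_probs), "in 2024.")
```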
Privacy and Security for Microsoft AI solutions
As mentioned above, the right to be forgotten is only one aspect of the requirements of the GDPR or ISO/IEC 27018. Microsoft does not offer explicit legal advice in the strict sense; rather, it describes how Microsoft AI solutions generally meet the necessary requirements. The key points are:
- Prompts, responses, and data accessed via Microsoft Graph are not used to train LLMs, including those behind Microsoft 365 Copilot.
- For customers from the European Union, Microsoft guarantees that the EU Data Boundary is respected: EU data traffic remains within the EU Data Boundary, while global data traffic in the context of AI services can also be sent to other countries or regions.
- Logical isolation of customer content within each tenant for Microsoft 365 services is ensured through Microsoft Entra authorization and role-based access control.
- Microsoft ensures strict physical security, background screening, and a multi-level encryption strategy to protect the confidentiality and integrity of customer content.
- Microsoft is committed to complying with applicable data protection laws such as the GDPR, and with data protection standards such as ISO/IEC 27018.
Currently (as of November 24, 2023), Microsoft does not yet offer any data-at-rest guarantees for Microsoft 365 Copilot. This also applies to customers with Advanced Data Residency (ADR) in Microsoft 365 or Microsoft 365 Multi-Geo. Microsoft 365 Copilot builds on Microsoft's existing commitments to data security and data privacy. In the context of AI solutions, the following also applies:
All details on how Microsoft AI solutions fulfill regulatory requirements are described here: