What is Retrieval Augmented Generation / RAG?
A helpful analogy for understanding RAG is the metasearch engine: a search query is forwarded to several other search engines, and the results of all the queried services are then collected, processed, and presented to the user. RAG is a technique in which an AI model is combined with data sources beyond the training data of the LLM (Large Language Model) in order to generate more precise and contextually relevant answers. This is a very similar approach to the metasearch engine. Even the schematic diagrams of the two technologies look alike:
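The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration only: the corpus, the naive keyword scoring, and the prompt template are all invented for this example; a real system would use semantic (vector) search and an actual LLM call.

```python
# Minimal RAG sketch: retrieve the best-matching snippets for a query,
# then build an augmented prompt for the language model.
# Corpus, scoring, and prompt template are illustrative only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda s: len(words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "Copilot retrieves tenant data via the Microsoft Graph.",
    "RAG combines retrieval with language model generation.",
    "Paris is the capital of France.",
]
print(build_prompt("How does RAG combine retrieval and generation?", corpus))
```

The key point the analogy makes: the model does not answer from its training data alone; the retrieved context is injected into the prompt at query time.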
RAG is used in this way in Microsoft 365 Copilot. To extend the capabilities of the AI, information is retrieved from various data sources and integrated into the response generation. This enables Copilot not only to access pre-trained data, but also to use current and specific information from other sources, including the data in the Microsoft 365 tenant. Access happens via the Microsoft Graph, which also ensures that the underlying permission concept is always respected by the AI.
Source and further details: How Copilot for Microsoft 365 works: A deep dive
Copilot in Microsoft 365 uses RAG - this cannot be customized
In Microsoft 365 Copilot, RAG is used to improve responses to user queries. Copilot can access various data sources, such as documents, emails, Teams chats, etc., to provide well-grounded and accurate answers.
This also determines which functions and roles Copilot can take on in the respective apps.
Examples:
- Word: Generate text with and without formatting in new or existing documents.
- Excel: Suggestions for formulas, chart types and insights for data in Excel sheets.
- PowerPoint: Create a presentation from a prompt or a Word file.
Now we have GraphRAG - that can be customized
The article Unlocking LLM discovery on narrative private data describes GraphRAG, a new method from Microsoft Research that extends the capabilities of large language models (LLMs) to access and analyze your data.
GraphRAG combines LLM-generated knowledge graphs with machine learning to improve document analysis performance, for example. This method shows significant improvements in answering complex questions compared to standard approaches.
A key benefit of GraphRAG is its ability to identify and understand topics and concepts in large data sets, even if the data was not previously known to the LLM. Here are some practical use cases for this technology:
- Information extraction: GraphRAG can be used to extract specific information from large document collections or databases.
- Content generation: GraphRAG helps to create content that requires in-depth contextual knowledge.
- Customer support: GraphRAG can improve customer support by accessing a knowledge base and providing accurate answers to customer queries.
- Knowledge management: In large organizations, GraphRAG can help to make efficient use of existing knowledge by retrieving and consolidating relevant information from different departments and documents.
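The core idea behind these use cases can be shown with a toy example. In the real system, an LLM extracts entities and relations from the documents; here the extracted triples are simply hard-coded (all names below are invented). Storing them as a graph lets a query gather multi-hop context about an entity, instead of matching isolated text chunks:

```python
# Toy illustration of the GraphRAG idea: facts extracted from documents
# (by an LLM in the real system) are stored as a knowledge graph of
# (subject, relation, object) triples. A query then walks the graph to
# collect connected facts, even across several hops.
from collections import defaultdict

# Hypothetical triples an LLM might extract from company documents:
triples = [
    ("Contoso", "acquired", "Fabrikam"),
    ("Fabrikam", "develops", "SensorOS"),
    ("SensorOS", "used_in", "smart factories"),
]

graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def related_facts(entity: str, depth: int = 2) -> list[str]:
    """Walk the graph breadth-first to collect multi-hop context."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph[node]:
                facts.append(f"{node} {rel} {obj}")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

print(related_facts("Contoso"))
```

A flat text search for "Contoso" would miss the SensorOS connection entirely; the graph traversal surfaces it via the intermediate entity Fabrikam, which is what enables answers to complex, cross-document questions.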
Quickstart
To get started with the GraphRAG system (https://github.com/microsoft/graphrag), it is recommended to use the Solution Accelerator package (https://github.com/Azure-Samples/graphrag-accelerator). This offers a user-friendly end-to-end solution based on Azure resources, described on its page as: "One-click deploy of a Knowledge Graph powered RAG (GraphRAG) in Azure".
The architecture diagram shows, among other things, the following Azure resources used by the accelerator for your own GraphRAG solutions:
- Azure Blob Storage
- Cosmos DB
- Azure OpenAI
- Azure AI Search / Vectorstore
- Container Registry
- Application Insights
As described on GraphRAG's GitHub page, Prompt Tuning options can also be used to customize the solution to your needs and use cases:
Source: GraphRAG Prompt Tuning
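To make the idea of prompt tuning concrete, here is a hedged sketch of a domain-adapted extraction prompt. The template, placeholder names, and example values are all invented for illustration; the actual GraphRAG prompt files have a different structure, documented in the Prompt Tuning pages linked above.

```python
# Illustrative sketch only: adapting an entity-extraction prompt to a domain
# by fixing the domain description and the entity types the LLM should find.
# This is NOT the real GraphRAG prompt format.

TEMPLATE = (
    "You analyze documents from the {domain} domain.\n"
    "Extract all entities of the following types: {entity_types}.\n"
    "For each entity, return its type and its relations to other entities.\n"
    "Text:\n{input_text}"
)

def tuned_prompt(domain: str, entity_types: list[str], input_text: str) -> str:
    """Fill the template for a specific domain and use case."""
    return TEMPLATE.format(
        domain=domain,
        entity_types=", ".join(entity_types),
        input_text=input_text,
    )

print(tuned_prompt(
    domain="IT support",
    entity_types=["product", "error code", "fix"],
    input_text="Outlook reports error 0x80040115 after the latest update ...",
))
```

The benefit of tuning is the same as in the real system: an extraction prompt that names the entity types relevant to your data produces a more useful knowledge graph than a generic one.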