Sunday, November 2, 2025

How Copilot Works – some further aspects

This article describes how generative AI, and Microsoft 365 Copilot in particular, works. The aim is to take these mechanisms into account when designing the solution architecture and approach, so that the desired result can be achieved later on. Copilot uses several functions in M365 to generate its answers – and that brings some special challenges with it.

How does Copilot work in Microsoft 365? Data flow of a prompt

Microsoft 365 Copilot is not only a powerful tool for increased productivity, but also a secure and compliant solution. With its advanced data protection and governance features, Copilot ensures that data remains within the boundaries of the Microsoft 365 service and is protected in accordance with existing security, compliance, and privacy policies. The same applies to the semantic index.

The semantic index for Copilot is a feature that helps AI understand context and deliver more accurate results. It builds on the keyword matching, personalization, and social matching features in Microsoft 365 by creating vectorized indexes to enable conceptual understanding. This means that, unlike traditional methods for queries based on exact matches or predefined criteria, the semantic index for Copilot finds the most similar or relevant data based on semantic or contextual meaning, rather than just keywords.
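The idea of retrieval by semantic similarity rather than exact keyword matches can be sketched in a few lines. The following is a toy illustration only, not Microsoft's actual implementation: the document names, the three-dimensional embeddings, and the query vector are all invented for the example.

```python
import math

# Toy sketch of a semantic index: documents are stored as embedding vectors,
# and a query retrieves the document whose vector is most similar in meaning,
# not the one sharing the most keywords.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for a few documents.
index = {
    "Q3 budget review.docx":   [0.9, 0.1, 0.2],
    "Team offsite plan.docx":  [0.1, 0.8, 0.3],
    "Cost forecast 2026.xlsx": [0.7, 0.3, 0.4],
}

query = [0.85, 0.15, 0.25]  # hypothetical embedding of "financial planning"

ranked = sorted(index, key=lambda doc: cosine(index[doc], query), reverse=True)
print(ranked[0])  # the most semantically similar document wins
```

Note that "financial planning" shares no keyword with "Q3 budget review.docx"; the match comes purely from vector proximity, which is the conceptual difference to classic keyword search.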

Source and further details: Semantic indexing for Microsoft 365 Copilot and the YouTube video from Microsoft Mechanics: How Microsoft 365 Copilot works | Timestamp 139 seconds. Also check out Michael Bargury's blog post titled Copilot Vulnerable to RCE. To show how the RCE hack works, he explains how Copilot works under the hood.

How exactly does the data flow work?

Key points about how Copilot for Microsoft 365 works
  • Starting point: Entering the prompt
    • The user enters a prompt in a Microsoft 365 app (e.g., Teams, Word, Outlook).
    • The request is transmitted securely (TLS 1.2 or higher).
  • Preprocessing and security checks:
    • Copilot performs Responsible AI (RAI) checks to prevent harmful content.
    • Grounding: The prompt is enriched with context from Microsoft Graph to better understand the user's intent.
  • Processing by the LLM:
    • The modified prompt is sent to a dedicated LLM within the Microsoft 365 environment.
    • Important security aspects: No customer data is stored in the LLM or used for training. The LLM operates statelessly.
  • Postprocessing:
    • After the LLM responds, grounding and RAI checks are performed again.
    • Copilot adds relevant data from Microsoft Graph to the response.
  • Compliance and Retention:
    • Prompts and responses are stored in Exchange Online for eDiscovery, legal hold, and compliance aspects.
  • Output to the user:
    • The final answer is returned to the original app.
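The steps above can be sketched as a simple pipeline. This is a deliberately simplified illustration of the described data flow; all function names, the toy RAI rule, and the log structure are assumptions for the example, not Microsoft's actual APIs.

```python
# Simplified sketch of the Copilot prompt data flow described above.

def rai_check(text):
    # Responsible AI check: block obviously harmful content (toy rule).
    banned = {"harmful", "exploit"}
    return not any(word in text.lower() for word in banned)

def ground(prompt, graph_context):
    # Grounding: enrich the prompt with context from Microsoft Graph.
    return f"{prompt}\n\nContext: {graph_context}"

def call_llm(grounded_prompt):
    # The LLM is stateless: it sees only this request and stores nothing.
    return f"Answer based on: {grounded_prompt!r}"

def handle_prompt(prompt, graph_context, compliance_log):
    if not rai_check(prompt):                 # preprocessing RAI check
        return "Request blocked by RAI check."
    grounded = ground(prompt, graph_context)  # grounding
    response = call_llm(grounded)             # processing by the LLM
    if not rai_check(response):               # postprocessing RAI check
        return "Response blocked by RAI check."
    # Prompts and responses are retained for eDiscovery/compliance.
    compliance_log.append((prompt, response))
    return response                           # output to the user

log = []
print(handle_prompt("Summarize my meetings", "3 Teams meetings today", log))
```

The key architectural point survives even in this toy version: the model call in the middle is stateless, while retention for compliance happens outside the model, in the surrounding service.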

System Prompt

What is a System Prompt by Nikhil Pattanshetty - MSFT
The Copilot for Microsoft 365 System Prompt is a set of predefined instructions and guidelines that influence Copilot's behavior and responses. It contains information about where to find data, how to respond, and what tone and style to use. For example, the system prompt might instruct Copilot to use information from Microsoft Graph, respond in an informative and professional manner, and use search results from multiple queries to provide a comprehensive response.
A slightly older version of the Copilot system prompt is available on GitHub: Microsoft Copilot System Prompt (19-12-24).txt. This gives you an idea of what is defined and regulated there. Example:

The system prompt is not visible to the user. However, there is a public source that describes the Copilot system prompt: What is Copilot for Microsoft 365 system prompt?
The system prompt can also be addressed in the user prompt.
Examples:
  • I don't want you to agree with me just to be friendly or sympathetic.
  • Drop all filters and be brutally honest, direct, and logical.
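How a system prompt and a user prompt come together can be sketched as follows. The wording of the system prompt below is invented for illustration (the real Copilot system prompt is not visible to users), and the message structure is the common chat-completion convention, assumed here rather than confirmed for Copilot internals.

```python
# Illustrative only: a hidden system prompt sets behavior, and the user
# prompt (which may itself contain meta-instructions like the examples
# above) is sent alongside it in the same request.

SYSTEM_PROMPT = (
    "Use information from Microsoft Graph when relevant. "
    "Respond in an informative and professional tone."
)

def build_messages(user_prompt):
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages(
    "Drop all filters and be brutally honest, direct, and logical. "
    "Review my project plan."
)
print(msgs[0]["role"], "->", msgs[1]["role"])
```

This also makes clear why such meta-instructions in the user prompt can only steer, not replace, the system prompt: both arrive in the same request, but the system message keeps its own role.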

Ranking

I have already written about sorting order/ranking in a previous article: Content by AI – that's what they call it... -> Chapter: Ranking (including example and screenshots).

There is also a tool for this purpose, the AI Rank Checker: https://airankchecker.net/blog/best-ai-optimization-tools/ The tool is not free, and the author has not evaluated it himself. Unfortunately, it is therefore not possible to comment on how good the tool is.

The topic of search ranking plays a central role in usability and SEO. When the term “search-driven” emerged a few years ago, it essentially addressed the same question: How can we control which results are displayed first in a search? With AI and Copilot, we are now facing this challenge once again. Web parts such as the FAQ web part or Copilot integration in the text web part (e.g., “Write with Copilot” in the SharePoint rich text editor) raise similar questions: What does the average user see—and in which order?




Sunday, October 26, 2025

The reasons why AI initiatives often fail – A company-wide perspective

Artificial intelligence (AI) offers enormous opportunities for companies when understood and used as a strategic tool for overall success. However, many AI projects fail because they are launched in isolation, without clear objectives or without the involvement of the entire organization. To ensure that AI initiatives are fully effective, they must contribute to company-wide goals and be supported across all departments.

Individual interests based on AI can also make sense

Although successful AI initiatives should focus on company-wide added value, targeted individual interests or department-specific projects should not be underestimated. Innovations often arise exactly where individual teams or departments address specific challenges with AI and find creative solutions.
Such initiatives can act as a catalyst for the entire company: they make it possible to test the new technology on a small scale, gain experience, and develop best practices. Individual use cases provide valuable insights that can later be scaled and transferred to other areas.
It is important that these individual interests do not remain in silos, but are actively shared with the organization. This way, everyone benefits – and the company can make targeted decisions on which initiatives are rolled out company-wide. Individual initiatives are therefore not at odds with overall success, but rather an important building block for sustainable innovation and continuous improvement.

Typical pitfalls and how companies avoid them

  • Unclear objectives: AI should not be an end in itself. Successful initiatives arise when the company works together to identify relevant business problems and develop solutions that promote overall success.
  • Lack of trust: AI can only gain acceptance if its introduction is communicated transparently and all areas are involved at an early stage. Trust in the technology and processes is crucial for company-wide success.
  • Excessive trust in automation: Human supervision remains essential. AI should support employees, not replace them. Only through the interaction of humans and machines can robust, company-wide solutions be created.
  • Poor data quality: Data is the foundation of all AI. Only when the company creates a consistent, high-quality pool of data and breaks down silos can AI deliver sustainable added value.

Breaking down silos – challenges and alternatives with Microsoft Fabric

Breaking down data silos is one of the biggest challenges facing data-driven companies. Structures that have grown over many years, different systems, and proprietary data formats mean that valuable information is “trapped” in individual departments or applications. These silos not only hinder the company-wide use of AI, but also make it difficult to develop a holistic data strategy. The result: decision-making processes slow down, opportunities go untapped, and innovations don't reach the whole company.


Why is breaking down silos so hard?

Data does not like walls – Why we should bridge silos rather than tear them down

In today's corporate reality, we encounter them everywhere: historically grown IT landscapes that act as silent witnesses to past projects and strategies. Different systems and databases have been introduced over the years – often with the best of intentions, but rarely with an eye to the big picture. The result? A patchwork of technologies that separates more than it connects. This technical diversity is also reflected in the organization. Departments operate in their own cosmos, pursue individual goals, and establish processes that rarely extend beyond their own doorsteps. This is how data silos are created.

The alternative? Don't tear everything down – connect it instead.

It is neither sensible nor realistic to radically remove every silo structure. Especially in large companies with complex requirements and established processes, it can be more effective to respect existing structures – while still exploring new avenues. The key here is a consolidating layer that brings data together without disrupting existing systems. Middleware solutions such as Microsoft Fabric offer exactly this functionality. They enable data from different sources to be brought together without replacing the underlying systems. 

Ultimately, it's not about technology. It's about people, trust, and the willingness to achieve more together. Silos are not the enemy – they are part of our history. But it's up to us how we shape the future: with bridges instead of walls.

The costs of failed AI projects for the company

  • Financial losses and damage to reputation affect the entire company and weaken its overall business performance.
  • Missed opportunities mean that the company falls behind in the market while others move forward.
  • Team burnout occurs when projects are implemented without a clear strategy and without support from the organization. 

Success factors for company-wide AI initiatives

  • Early, company-wide risk management: Risks are best identified when all relevant departments and management levels are involved.
  • Continuous review and learning: AI projects are an ongoing process over a longer period of time. Regular evaluation and feedback from all areas of the company improve results and identify errors at an early stage.
  • Data quality through company-wide standards: A shared database needs standards in order to serve as the basis for AI solutions.
  • Linking to company goals: AI projects must always make a measurable contribution to the company's strategic goals – whether through increased sales, reduced costs, or increased customer satisfaction.

Checklist for your next AI project – with a view to the entire company

  • Are all relevant areas and stakeholders involved?
  • Is there an open dialogue about risks and opportunities at the company level?
  • Is data used responsibly and consistently across the company?
  • Is the contribution to the company's success clearly defined?
  • Are there transparent success criteria that apply to the entire company?
  • Is knowledge shared and developed across departments?
  • Is there a contingency plan in place for undesirable effects?

Conclusion:

AI only succeeds when implemented company-wide

The success of AI initiatives is not measured by local optimizations or individual interests, but by how much they promote the overall success of the company. Those who focus on cross-departmental collaboration, transparency, and shared responsibility create real added value—for the entire company and its sustainability. Solutions for individual interests can also make sense in this context.


Monday, October 20, 2025

Content by AI – that's what they call it...

Creating websites, or at least one-pagers, with AI is a current trend. And, of course, this trend has not bypassed Microsoft SharePoint. See also: Create pages with AI in SharePoint
This article is about providing content on sites and pages in SharePoint using Web Parts that use Copilot. This is a slightly different topic than having the AI create the entire page.
In my example, a SharePoint site contains 12 files, each with a recipe for Christmas cookies, stored in a library.
The following metadata columns are available:
  • Multiple selection: Ingredients
  • Free text: Info: %information about Christmas cookies%
  • Price: Price per serving
Prompt for the first test after uploading: Do you have recipes for Christmas cookies? Which ones are the best?

Ranking

What would have improved a file's ranking in classic search – more details, metadata, and so on – does not work in Copilot. See the second screenshot in the article: the file “6. Cinnamon Oatmeal Cookies.docx” and the corresponding recipe are not displayed there, even though this file, and only this file, has the metadata fields filled in. You have to tell Copilot very explicitly what to sort by. Then it works: “Refer to the ‘Info’ column in the ‘Christmas cookies’ library. The recipe with the most data in the ‘Info’ column should be rated highest. Don't use the internet!” => Motto: Explain it like I'm 5.
Or ask Copilot what it is currently sorting by. Copilot adjusts this dynamically depending on the prompt and the logged-in user.

1, 2, 3: What comes first in the answer from #genAI

Search ranking is a big topic in usability and SEO. When "search-driven" became a buzzword a few years ago, it was the same game: how can you control what the search outputs first? And now we have exactly the same question again with AI/Copilot. Web parts such as the FAQ web part or the integration of Copilot into the text web part (details: Writing with Copilot in the SharePoint rich text editor) raise these questions: what is displayed for the normal user, and in what order?
The search ranking is not relevant here. Instead, Copilot decides based on the context, i.e., depending on the prompt and the user. You can also ask Copilot: What did you sort by?
To include sorting or filtering, the prompt must be adjusted. In my example: Sort the result by the “Price” column.
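What the prompt "Sort the result by the 'Price' column" asks Copilot to do amounts to an ordinary sort over the library's metadata. A toy sketch (the column name "Price" is from the example above; the recipe names and prices other than the cinnamon cookies are invented):

```python
# Toy illustration: sorting library items by the "Price" metadata column,
# as the adjusted prompt instructs Copilot to do.
recipes = [
    {"name": "Cinnamon Oatmeal Cookies", "Price": 0.40},
    {"name": "Vanilla Crescents",        "Price": 0.25},
    {"name": "Gingerbread",              "Price": 0.60},
]

for r in sorted(recipes, key=lambda r: r["Price"]):
    print(r["name"], r["Price"])
```

The difference to classic search is that Copilot only applies such an ordering when the prompt names the column explicitly; otherwise it chooses its own, context-dependent ranking.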

Friday, October 17, 2025

Using AI without IT

By chance, I came across an article on social media with the headline “Using AI without IT”:
(For those who don't understand German: the screenshot shows an image that says: Using AI – without an IT team. The technical report shows how 500+ companies are doing it.)

AI – here to stay

Unfortunately, this is a scenario I have heard about at several workshops and customer meetings. Some IT departments still struggle with the topic of AI, but it doesn't have to be that way! See, for example: https://m365techtalk.blogspot.com/2024/05/dont-make-me-think-challenges-with.html 
AI is not just a nice gimmick! It can help employees do their jobs better, use their time more wisely, and unlock their potential. Microsoft AI / Copilot is a tool that promises exactly that – but how can you really leverage these benefits?
Here are five practical points that not only increase ROI but also put the employee at the center:

1. Making decisions

Copilot helps to structure information and reveal connections. This means better decisions, less uncertainty, and more confidence. And not just for managers, but for everyone who makes decisions on a daily basis—in projects, in customer contact, in teams.

2. Time is money – and Copilot can save you both

Copilot can take care of repetitive tasks: drafting emails, summarizing meetings, structuring content. This saves time – but the real benefit is that employees can focus on what their actual job is. Technology as a liberation, not a burden.
Imagine having 30 minutes more every day. Not because you're working less, but because Copilot is helping you with routine tasks.

3. Empowerment

Even the AI Regulation obliges companies to train their employees in AI.
Article 4 of the European Regulation on Artificial Intelligence deals with “AI competence” and says:
"Providers and operators of AI systems shall take measures to ensure, to the best of their ability, that their personnel and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI competence, taking into account their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used, as well as the persons or groups of persons for whom the AI systems are intended to be used."
Source: Regulation (EU) 2024/1689
Based on our experience with our customers, I can say that the success of AI and Copilot depends on how well employees understand the technology. Training is important – but even more important is a culture that promotes learning, allows for mistakes, and rewards curiosity. Copilot is a tool that you have to get to know.

4. Clean up your data – otherwise nothing will work

Copilot needs good data to work effectively. This is not a task for IT alone, but a shared responsibility. Transparency, governance, and clear structures help. See, for example: https://m365techtalk.blogspot.com/2024/05/dont-make-me-think-challenges-with.html 

5. Measure impact – have the courage to change

Of course, ROI is important. But it shouldn't be the only metric. Anyone introducing Copilot should also consider the following: How is collaboration changing? How do employees experience working with AI? How much time is left for training?
Copilot is a beginning. Not an end. Anyone who introduces it should also be prepared to go further. Try new things. Collect feedback. Measure KPIs, for example with the Copilot Dashboard. See also: Connect to the Microsoft Copilot Dashboard
And above all: let people do their thing. Because innovation doesn't happen in an imaginary world, but in everyday life.

Conclusion: Copilot is not a miracle cure—but it is a damn good tool!

Microsoft AI/Copilot can do a lot. But only if you use it correctly. With planning, training, data maintenance, and a dose of courage. Then you will also see a return on investment. At the beginning of an AI project, it is always important to identify the needs of employees. See also: https://learn.microsoft.com/en-us/viva/insights/org-team-insights/copilot-dashboard 

Sunday, September 15, 2024

Fake it till you make it - how good are AI detectors really?

The fact is, you have to make it clear when something was created by AI. But who wants to control that and how?

It's the same age-old game as counterfeiting banknotes, counterfeiting products, and so on. As soon as one side reaches a new level, the other side has to follow suit, or at least pretend to have caught up.

In the case of AI, we have the EU AI Act or, in Germany, the KI-Verordnung, which set out the legal framework. This states in Article 50, quote: ...shall disclose that the content has been artificially generated or manipulated: https://www.euaiact.com/article/50 

ATTENTION: This is in no way legal advice!

If the person publishing the content does not do this, however, numerous app providers are now advertising that they can do this for you:


How do these checking apps work?

AI checkers identify various characteristics, including recurring phrases, consistent sentence structures and the absence of personal aspects in the texts. By examining these patterns, an AI detector can recognize whether content was created by humans or by an AI.
Research currently classifies these three approaches:
  • Machine learning: classifiers learn from sample texts, but need to be trained for many text types, which is expensive.
  • Digital watermarks: Invisible watermarks in text that can be recognized by algorithms. Providers of AI tools would have to insert these watermarks, which is very unlikely.
  • Statistical parameters: Requires access to probability values of the texts, which is difficult without API access.
IT journalist Melissa Heikkilä says: “The enormous speed of development in this sector means that any methods of recognizing AI-generated texts will look very old very quickly.” Source: https://www.heise.de/hintergrund/Wie-man-KI-generierte-Texte-erkennen-kann-7434812.html

OpenAI released the AI Text Classifier in early 2023 to recognize AI-generated texts. However, the recognition rate was very low at just 26%, and 9% of human texts were incorrectly classified as AI texts. Due to this insufficient accuracy, OpenAI removed the tool from the market in mid-2023. A new version of the tool is not yet available. This example shows that we should not have too high expectations of AI recognition tools.
In general, anyone can use the following aspects to decide for themselves whether a text was created by a human or by an AI:
  • AI-generated texts are often not very original or varied and contain many repetitions, while human authors vary more when writing.
  • A style with many keywords strung together could also indicate an AI as the author.
  • AI tools often make mistakes with acronyms, technical terms and conjunctions.
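The "many repetitions" heuristic from the list above can be made concrete with a naive sketch. Real detectors are far more sophisticated; this only illustrates the idea, and both sample texts are invented for the example.

```python
from collections import Counter

# Naive repetition heuristic: what share of the (longer) words is taken up
# by the three most frequent words? Repetitive text scores higher.

def repetition_score(text):
    words = [w.strip(".,!?").lower() for w in text.split() if len(w) > 3]
    if not words:
        return 0.0
    counts = Counter(words)
    top = sum(c for _, c in counts.most_common(3))
    return top / len(words)

monotone = ("The solar system is large. The solar system is old. "
            "The solar system is vast.")
varied = ("Our planetary neighborhood formed billions of years ago "
          "from collapsing gas.")

print(repetition_score(monotone) > repetition_score(varied))  # True
```

As the noplagiat tests below show, such surface statistics are fragile: a single style instruction in the prompt is enough to change them drastically.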

How well do the AI detectors work?

There are now quite a few of these apps, ranging from free and ad-financed to paid. Here are two overviews:
I did my tests with noplagiat. The tool is rated as good to very good, and it correctly recognized texts that I had created with Microsoft Copilot in Edge for this article.

Test 1:
My prompt: “List the planets in our solar system. Tell which is the largest and which is the smallest. Name the largest moons. How old is our solar system? Explain how our solar system was formed. Formulate the answer as a scientific essay.”

Result and rating:


Test 2:
BUT a simple and small adjustment to the prompt caused noplagiat to stumble:
PROMPT: "List the planets in our solar system. Tell which is the largest and which is the smallest. Name the largest moons. How old is our solar system? Explain how our solar system was formed. Formulate the answer as a scientific essay. Use the writing style of Stephen King."

Result and evaluation:


This small addition to the prompt alone reduced the detection rate from 47% to 6%. Granted, you might not want to write an essay about our solar system in the style of Stephen King. However, the example clearly shows where the weaknesses of the current solutions lie.

Test 3:
It also correctly recognized texts that were definitely not created by an AI. This is the abstract of my new book, which is just about to be finalized:

The next version of AI apps

Enriching AI-generated text with content, reviewing the text, and adding passages and facts from other sources is what distinguishes the next generation of AI apps. Here, you do not receive the answer to your prompt immediately; it can sometimes take a few hours. With Studytexter.de, for example, up to 4 hours.
Here are three examples of such solutions:
  • https://studytexter.de/: Quote from the homepage - Your entire term paper at the touch of a button in under 4 hours. Innovative AI text synthesis - especially for German academic papers. 1000x better than ChatGPT.
  • https://neuroflash.com/de/: Quote from the homepage - Strengthen your marketing with personalized AI content. The all-in-one solution for brand-compliant content with AI, from ideation to content creation and optimization. neuroflash helps marketing teams save time, ensure a consistent message and improve creative processes.
  • https://thezimmwriter.com/: ZimmWriter is the world's first AI content writing software for Microsoft Windows. It allows you to use the AI provided by OpenAI directly on your desktop! -> Note: The app advertises 10 features that are supposed to be entered and many details that are supposed to make generated text unique.

Conclusion

Considering that humans are well advised to check the output and adapt or reformulate it if necessary, it is currently not possible to reliably verify whether a text was created by an AI or not. 



Sunday, September 8, 2024

AI and the productivity and quality of consultants' work

The study Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality examines the impact of artificial intelligence on the productivity and quality of work of knowledge workers. It is a field study conducted by Harvard Business School in collaboration with the Boston Consulting Group.

Key statements:

  • Experimental conditions: The study included 758 consultants who were divided into three groups: without AI access, with GPT-4 access, and with GPT-4 access plus an introduction to prompt engineering.
  • Increase in productivity: Consultants who used AI were significantly more productive. On average, they completed 12.2% more tasks and required 25.1% less time.
  • Quality improvement: The quality of the tasks supported by AI was more than 40% higher compared to the control group.
  • Different effects: Consultants with below-average performance benefited more from AI support (43% increase in performance) than those with above-average performance (17% increase in performance).
  • Limitations of AI: For tasks that were outside the capabilities of AI, consultants with AI support were 19 percentage points less successful in providing correct answers.
  • Use of AI: Two main patterns of successful AI use were identified: “Centaurs”, who divide tasks between humans and AI, and “Cyborgs”, who fully integrate their workflows with AI.

The study highlights that AI can offer significant benefits for the productivity and quality of work of knowledge workers, but also names challenges and risks, especially for tasks that are outside the current capabilities of AI.

A detailed summary can be downloaded here:

Zusammenfassung über KI und die Produktivität und Qualität von Wissensarbeitern.pdf

Recap on AI and Knowledge Worker Productivity and Quality.pdf


The link to the complete study: https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

Wednesday, August 28, 2024

Copilot in M365 & PowerPoint had some couples-therapy

UPDATES August 2024:
The article Work Smarter: Copilot Productivity Tips by Briana Taylor from August 26, 2024 is about Copilot in PowerPoint this time.

The article refers to the roadmap ID: 406170 
The article is structured as follows:
  • Tip 1: Create presentations using brand templates
  • Tip 2: Create presentations from Word & PDF documents
  • Tip 3: Add images to your presentations
At least the function behind Tip 2 has been available for some time. Focusing on Tip 1, some technical requirements must be met in order to use this feature. An Organizational Asset Library (OAL) must be set up for PowerPoint, in which the PowerPoint templates (.potx files) are then stored and maintained centrally.
How to do this is described here: Create an organization assets library.

In order to then create a PowerPoint based on your own master with the support of Copilot, this master must first be selected:

The rest is then the same as before:
The article from which the screenshot is taken, Add a slide or image to your presentation with Copilot in PowerPoint, also goes into Tip 3: Add images to your presentations: