Microsoft Copilot is a powerful tool, but like any AI, it can provide incorrect or misleading answers. To get the most accurate responses, it’s essential to understand how to prompt Copilot properly.

Let’s explore how Microsoft Copilot works and where it gets its data. Then we’ll walk through a collection of tips to get the best responses.

What is Microsoft Copilot?

Microsoft Copilot is an AI-powered chat tool designed to enhance productivity by integrating directly into Microsoft Office applications like Word, Excel, PowerPoint, and Outlook. 

Copilot is a ChatGPT-like technology that uses natural language processing and machine learning to help you with tasks such as drafting documents, summarizing information, generating insights from your data, creating presentations, and even automating routine tasks. 

With the help of large language models, Copilot understands your prompts and delivers contextual responses. This helps you to complete tasks faster and more efficiently. It’s a powerful productivity tool, but like any AI-powered tool, it requires oversight to prevent errors and ensure its output is reliable.

Understanding Copilot’s Data Sources

One of Copilot’s key features is that it can pull information from a variety of data sources. These sources include your internal documents, emails, databases, and external resources like the web or other connected systems. 

Like any AI system, the quality and accuracy of Copilot’s responses are heavily influenced by the data it has access to. As we like to say, “garbage in, garbage out,” meaning you’re bound to get poor responses if you supply your model with incomplete or outdated data. 

To prevent bad answers, you need to ensure that the data Copilot accesses is both current and accurate. This involves regularly updating your internal databases and carefully curating external data sources.

Why Does Microsoft Copilot Provide Poor Answers?

If you’re getting weird or unusual responses from Copilot, you aren’t alone. It’s a common issue that plagues many users. Copilot can provide wrong answers for several reasons, mostly related to the data it uses and how it interprets your prompts. Here are some common factors:

  1. Outdated or Incomplete Data: Copilot relies on the information available to it, both from internal documents and external sources. If the data is outdated, incomplete, or inaccurate, the answers it generates will reflect that.
  2. Vague or Ambiguous Prompts: The quality of the output depends heavily on the clarity of the input. If you ask Copilot unclear or overly broad questions, it may struggle to interpret your request.
  3. Lack of Context: Copilot is not always able to understand nuanced or complex contexts. If your query requires specific knowledge or context that Copilot cannot grasp from its data, it might give a simplified or incorrect answer.
  4. Over-Reliance on External Sources: Copilot often pulls information from external resources, which can sometimes be unreliable, especially if that external data isn’t properly vetted or sourced.
  5. Limited Learning: Copilot doesn’t learn from feedback automatically. Without regularly reviewing and fine-tuning its outputs, it may continue providing flawed answers over time.
  6. Microsoft’s Built-In Limits: Microsoft has designed Copilot with certain boundaries, such as limits on the amount of data it can process at once and the complexity of tasks it can perform, plus guardrails against malicious prompts and inappropriate content. It also includes safeguards designed to reduce bias and discriminatory language, but these restrictions aren’t perfect. 

How to Get Better Responses from Microsoft Copilot

As with many generative artificial intelligence systems, you can improve the accuracy and usability of Copilot’s responses (and get correct answers) with a bit of smart prompt engineering. Here’s a collection of tips to help you get better answers out of Microsoft Copilot. 

Shelf platform reveals low-quality Copilot answers and identifies the data issues behind them. Schedule a demo to see how Shelf can help you achieve better, higher-quality Copilot responses.

1: Keep Your Data Sources Up-to-Date

We repeat this a lot, but it’s the first rule of working with AI: your data should be robust, complete, well-tagged, and updated regularly. 

Microsoft Copilot pulls from your internal and external data to generate answers, so the quality of the output depends on how accurate and current those sources are. 

Furthermore, it’s important to be clear with Copilot about where you want it to find answers. If your prompts are vague, it will grab whatever information it finds most reasonable, instead of the specific data you’re looking for. 

Bad Prompt: “What are last year’s sales numbers?”

Good Prompt: “What are the Q4 2023 sales numbers from our internal database?”
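Keeping sources current is easier if you audit them on a schedule. Here’s a minimal sketch of flagging stale documents before they feed Copilot’s knowledge sources; the document records and the 365-day freshness window are illustrative assumptions, not part of any Copilot API.

```python
from datetime import datetime, timedelta

# Hypothetical document records; in practice these might come from
# SharePoint metadata or your internal database.
documents = [
    {"name": "q4_2023_sales.xlsx", "last_modified": datetime(2024, 1, 15)},
    {"name": "pricing_2021.docx", "last_modified": datetime(2021, 6, 1)},
]

def stale_documents(docs, max_age_days=365, today=None):
    """Return names of documents not updated within the allowed window."""
    today = today or datetime.now()
    cutoff = today - timedelta(days=max_age_days)
    return [d["name"] for d in docs if d["last_modified"] < cutoff]

# Flags pricing_2021.docx for review
print(stale_documents(documents, today=datetime(2024, 6, 1)))
```

Anything this check flags is a candidate for updating, archiving, or excluding from the sources Copilot searches.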

2: Train Your Team to Use Copilot

Like all generative AI systems, Copilot is a support tool, not a replacement for human expertise. Your team needs to understand how to use Copilot properly to assist their work; they can’t sit back and let it do the heavy lifting for them. 

Share these tips, along with your own advice, with your team to train them on the right way to use Copilot. Work together to iterate on prompts until you get correct answers to your questions.

3: Curate External Data Sources Carefully

Copilot may pull from external data, which isn’t always reliable. To prevent inaccurate responses or misleading information, make sure to vet the external sources Copilot can access and then deliberately point Copilot to them. 

Bad Prompt: “Find the latest market trends.”

Good Prompt: “Find the latest market trends using data from reputable sources like Gartner or McKinsey.”
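One simple way to enforce this vetting is to build your approved source list into the prompt itself. The sketch below shows the idea; the allowlist and helper function are illustrative assumptions, not an official Copilot feature.

```python
# Hypothetical allowlist of vetted external sources.
APPROVED_SOURCES = ["Gartner", "McKinsey", "Forrester"]

def build_research_prompt(question, sources=APPROVED_SOURCES):
    """Append an explicit source restriction to a research question."""
    source_list = ", ".join(sources)
    return f"{question} Use data only from these sources: {source_list}."

print(build_research_prompt("Find the latest market trends."))
```

Centralizing the allowlist in one place means every research prompt your team builds points Copilot at the same vetted sources.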

4: Provide Details (Goal, Context, and Specific Sources)

Copilot works best when you provide clear context and details about what you’re trying to achieve. When you include your goal and specify any data sources or context, you help Copilot understand what you’re looking for and where to pull the data from.

Bad Prompt: “Analyze the data.”

Good Prompt: “Analyze the sales data for Q3 2023, and compare it with Q3 2022, using the financial reports in our system.”

In Power Apps and Dynamics 365 Sales, you can use the record picker to select a specific record as the basis for Copilot’s response. This ensures Copilot is working with the right data.

Prompt: “Summarize the customer record for John Doe using the record picker.”

5: Use Clear Language, Grammar, Punctuation, and Capitalization

Clarity is crucial for getting good results from any AI system. AIs use language rules to understand meaning, just like we do in person-to-person exchanges. Following those rules with clear phrasing, proper punctuation, and correct capitalization helps Copilot interpret your request accurately.

Bad Prompt: “summarize the project file”

Good Prompt: “Summarize the Project Alpha file located in the shared drive and highlight any changes to the budget.”

6: Give Examples to Create a Target

Providing examples helps Copilot understand what kind of output you’re looking for. This is known as few-shot prompting, where you guide the AI by showing examples of desired responses. (Read this guide to learn the difference between few-shot and zero-shot prompting.)

Bad Prompt: “Write a report on customer feedback.”

Good Prompt: “Write a report on customer feedback like this: ‘Our customers expressed positive feelings about our customer service, with 80% satisfaction.’”
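Few-shot prompting is mechanical enough to template. Here’s a minimal sketch of assembling a prompt that shows worked examples before the task; the helper and example text are illustrative, not a Copilot-specific API.

```python
# A sketch of few-shot prompting: prepend worked examples so the
# model has a concrete target format to imitate.
def few_shot_prompt(task, examples):
    """Assemble a prompt that shows desired outputs before the task."""
    shots = "\n".join(f"Example: {e}" for e in examples)
    return f"{shots}\nNow: {task}"

prompt = few_shot_prompt(
    "Write a report on customer feedback.",
    ["Our customers expressed positive feelings about our customer "
     "service, with 80% satisfaction."],
)
print(prompt)
```

Adding two or three varied examples instead of one gives Copilot a stronger sense of the target tone and structure.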

7: Provide Positive Instructions (“To-Do” Instead of “Not To-Do”)

Copilot responds better when you tell it what to do, rather than what to avoid. By framing your instructions positively, you help Copilot focus on the task at hand. If you tell Copilot what not to do, it might confuse your instructions and provide the kinds of answers you specifically don’t want. 

Bad Prompt: “Don’t include unnecessary details.”

Good Prompt: “Summarize the main points of the report in under 100 words.”

8: Keep It Simple and Avoid Overloading

Giving Copilot too many details or complex logic can overwhelm the system and result in poor answers and abnormal behavior. Simple prompts help Copilot focus on the task at hand and respond more accurately.

In many cases, this means you’ll have to break your complex needs into multiple prompts to walk Copilot through the process. 

Bad Prompt: “Summarize this financial report with detailed revenue breakdowns for each quarter, while also highlighting marketing spend and customer acquisition costs.”

Good Prompt 1: “Summarize the key revenue figures from this financial report.”

Good Prompt 2: “Provide a revenue breakdown for each quarter.”

Good Prompt 3: “Highlight marketing spend with figures and a summary.”

Good Prompt 4: “Highlight customer acquisition costs with figures and a summary.”
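The four prompts above form a simple chain: each focused request builds on the same report. Here’s a sketch of that pattern; `ask_copilot` is a hypothetical stand-in for however your team submits prompts (the Copilot chat UI, for instance) and is stubbed here for illustration.

```python
# Hypothetical stub for submitting a prompt to Copilot.
def ask_copilot(prompt):
    return f"[response to: {prompt}]"

# One overloaded request, broken into a chain of focused prompts.
subtasks = [
    "Summarize the key revenue figures from this financial report.",
    "Provide a revenue breakdown for each quarter.",
    "Highlight marketing spend with figures and a summary.",
    "Highlight customer acquisition costs with figures and a summary.",
]

responses = [ask_copilot(p) for p in subtasks]
for r in responses:
    print(r)
```

Running the subtasks in order also lets you check each intermediate answer before moving on, rather than debugging one tangled response at the end.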


9: Give Copilot an Alternative Path

Sometimes Microsoft Copilot can’t complete a task due to missing data or a complex request. Nevertheless, it will try to complete the task, even if it means providing an inaccurate response or an outright hallucination. 

By giving it an alternative action or “out,” you prevent it from giving bad or incomplete answers. This gives Copilot room to handle limitations in a way that doesn’t totally invalidate the response.

Bad Prompt: “Generate a full market analysis.”

Good Prompt: “Generate a market analysis. If you lack data, summarize what data you can access and note any missing information.”
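Since the fallback clause is the same every time, you can append it programmatically. This is a minimal sketch; the helper and its wording are illustrative assumptions.

```python
# Standard "out" appended to prompts so Copilot can acknowledge
# missing data instead of guessing.
FALLBACK = ("If you lack data, summarize what data you can access "
            "and note any missing information.")

def with_fallback(prompt, fallback=FALLBACK):
    """Append a fallback instruction to any prompt."""
    return f"{prompt} {fallback}"

print(with_fallback("Generate a market analysis."))
```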

10: Use “New Topic” When Switching Tasks

Copilot may carry context from previous conversations into new requests. It may formulate responses based on instructions you gave it in the past. So it’s important to start with a “new topic” prompt for each new task. This ensures that Copilot isn’t influenced by past interactions.

All you have to do is type “new topic” before asking Copilot to summarize a new document or task. This clears the previous context and ensures Copilot focuses on the current task.

11: Ask Copilot to Explain How It Arrived at a Response

If you’re unsure about an answer, ask Copilot to explain its reasoning. This gives you insight into how it arrived at its conclusion and whether it’s using the right data. It can also help you catch any mistakes or gaps in the data it’s using.

Prompt: “Explain how you arrived at this sales projection.”

12: Create Roles for Copilot and Yourself

AIs thrive when they know exactly what role to play in the conversation. Start by declaring who Copilot is talking to by establishing the audience. Then assign a role for Copilot to inhabit. This gives Copilot clear parameters and helps it focus on a particular style or role.

Prompt: “Craft your responses like you’re speaking to the C-suite in a post-Series A funded startup.”

Prompt: “Act as a financial advisor and give a brief on Q3 profits and losses.”

13: Iterate, Revise, and Improve the Prompt

Sometimes the first answer Copilot provides isn’t the best. You can improve the results by iterating on your prompt and regenerating the responses. Trying different approaches can refine the output.

First Prompt: “Summarize the Q4 sales data.”

Result: “Sales were up in Q4.”

Second Prompt: “Summarize the Q4 sales data with specific increases and decreases in product categories.”

Result: “Sales increased in electronics but decreased in furniture.”


Third Prompt: “Summarize the Q4 sales data by percentage growth in each product category and compare it with Q3.”

Final Result: “Electronics sales grew by 10% compared to Q3, while furniture sales dropped by 5%.”

Additionally, Microsoft Copilot encourages user-initiated feedback via the thumbs up and thumbs down icons. This feedback directly influences the system’s improvement over time: a thumbs up reinforces the quality of the answer, whereas a thumbs down indicates that the response wasn’t satisfactory. 

14: Evaluate the Content Before Using

Always evaluate the Copilot-generated content before you use it. AI-generated content can sometimes contain errors or misinterpretations, so reviewing the output ensures accuracy.

15: Add Feedback for Every Response

Submitting feedback for each response is a key way to help Copilot improve itself. Microsoft uses Adaptive Cards to give users the chance to give feedback for specific answers. Over time, this human-in-the-loop feedback leads to better responses. 

Better Copilot Results with Smart Prompting

By following these tips, you can significantly improve the accuracy and quality of the responses you get from Microsoft Copilot. Keep your data up-to-date, craft clear and detailed prompts, and review outputs before using them. 

Whether you’re generating reports, analyzing data, or drafting content, these strategies will help you avoid common pitfalls and make the most of Copilot’s capabilities.