* Fixes #165 - updated images for SharePoint site creation
* Fixes #171 - Web Search defaults to off but is activated when adding URL knowledge items; students now disable this option before testing their agent
* Fixes #173 - added step to select Add channel
* Fixes #175 - further UI changes; only General Quality is there by default, added steps to configure an additional option
* Fixes #176 - increased to 3-5 minutes
* Fixes #177 - updated to reflect the UI
* Removed image for now
* Fixes #182 - fixed typo
* Fixes #185 - changed chit-chat example
* Fixes #188 - fixed typo; Topic.Formula should have been Topic.relevantNewsForOpportunities
* Fixes #189 - removed reference to old steps
* Fixes #200 - fixed names to be consistent
* Fixes #201 - added note
* Fixes #184, #183 - simplified commission calculation
labs/autonomous-account-news/README.md — 2 additions & 4 deletions
@@ -231,6 +231,7 @@ Set up an autonomous agent with a recurring trigger that automatically activates
 ```
 cat_amount gt 300000 and cat_isclosed eq false
 ```
+You may see a validation error like "missing required property - value". After a few seconds, or as you select away from that input field, it should clear. Also, it is not necessary to select the ... to provide the OData filter.
 
 1. Select **Save** to finalize the tool configuration.
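The hunk above supplies the OData filter students paste into the tool configuration. As a sanity check on what it selects, here is an illustrative Python sketch; the field names `cat_amount` and `cat_isclosed` come from the lab, while the helper function and sample records are hypothetical (this is not how Dataverse evaluates OData server-side):

```python
# Local re-statement of the lab's OData filter:
#   cat_amount gt 300000 and cat_isclosed eq false
# Field names come from the lab's Dataverse table; the sample
# records and this helper are made up for illustration only.

def matches_filter(record: dict) -> bool:
    """Return True when a record satisfies the lab's OData filter."""
    return record["cat_amount"] > 300000 and record["cat_isclosed"] is False

sample_opportunities = [
    {"name": "Contoso renewal", "cat_amount": 450000, "cat_isclosed": False},
    {"name": "Fabrikam upsell", "cat_amount": 120000, "cat_isclosed": False},
    {"name": "Adventure Works", "cat_amount": 800000, "cat_isclosed": True},
]

# Keep only open opportunities over 300,000.
open_large_deals = [r["name"] for r in sample_opportunities if matches_filter(r)]
print(open_large_deals)  # ['Contoso renewal']
```

In OData, `gt` and `eq` are the greater-than and equality comparison operators, which is why only the open, high-value record passes.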
@@ -417,7 +418,7 @@ Set up an autonomous agent with a recurring trigger that automatically activates
 - Select on **(+)** to add a node.
 - Select **Variable management** and then **Set a variable value**.
 - For **Set variable**, create a new variable, make it **Global**, and name it `relevantNewsForOpportunities`.
-- In **To value** select `Topic.relevantNewsForOpportunities`. You need to select ... then select Formula, then type **Topic.Formula**, then select Insert.
+- In **To value** select `Topic.relevantNewsForOpportunities`. You need to select ... then select Formula, then type **Topic.relevantNewsForOpportunities**, then select Insert.
 
 1. Select **Save**
@@ -481,9 +482,6 @@ Set up an autonomous agent with a recurring trigger that automatically activates
 5. Use Log relevant news for opportunities to log your findings. The base input for Log relevant news for opportunities should be {Global.searchResults} with determined relevance added
 ```
 
-> [!TIP]
-> If instructions don't offer the "/" option to reference topics, tools or variables, you can skip steps 26-30 and continue with the next steps. You can always come back to this later.
-
 1. To increase orchestration accuracy, you will now replace names of topics, tools and variables with references. References can be added to instructions by typing **/** and selecting the appropriate object from the drop-down menu.
 
 1. In the instructions, select `<Get Opportunity records>`. Type **/** and in the drop-down menu, under **Tool**, select **Get Opportunity records**. The previous text in curly brackets should be replaced by a visual reference to the tool.
labs/core-concepts-agent-knowledge-tools/README.md — 6 additions & 0 deletions
@@ -263,6 +263,12 @@ Add a document knowledge source to your agent and verify that it accurately answ
 1. Review the **Name** and **Description** fields. Update if needed to make the source easily identifiable.
 
+#### Check and disable Web Search
+
+The **Use information from the web** setting is available on the Generative AI settings page, or as the **Web Search** setting in the Knowledge section of the agent's Overview page. This setting lets your agent access broad, real-time, and up-to-date information beyond what is available in predefined or enterprise-specific knowledge bases. For our scenario, we want to keep the use of knowledge focused on our provided resources and not the broader web.
+
 1. Navigate to the Overview tab, scroll down to the Knowledge section
labs/core-concepts-analytics-evaluations/README.md — 10 additions & 14 deletions
@@ -319,6 +319,14 @@ Create evaluation test sets using four different methods and understand how each
 1. In the **Configure test set** panel on the right side of your screen, change the test set name to **Non-Critical Copilot Studio Guide Set**
 
+1. In **Test method**, General Quality is configured by default. You can configure additional test methods; those will be available later when you edit questions.
+
+1. Select **Add test method** and review the list of test methods you can configure.
+
+1. On **Set pass score**, review the settings for this option and select **OK**. Other test methods may have other options you can configure, based on the approach.
+
+1. Select **Compare meaning** and then select **OK**.
+
 1. Select **Save** at the bottom of that same panel.
 
 1. In that same panel, select the **Manage profile** button.
@@ -329,25 +337,15 @@ Create evaluation test sets using four different methods and understand how each
 1. Select the first generated question to explore all the available options. For each test case, you can configure the **evaluation method**:
 
-**Exact Match**: Character-for-character comparison between expected and actual response. Use for questions with precise, factual answers.
-
-**Keyword Match**: Checks whether key terms from the expected response appear in the actual response. Use when exact wording doesn't matter but key concepts must be present.
-
-**Similarity**: Uses cosine similarity to compare semantic meaning on a 0-1 scale with a configurable threshold. Use when meaning matters more than exact wording.
-
-**General Quality**: Uses a LLM to evaluate response quality across four dimensions - relevance, groundedness, completeness, and abstention. Does NOT require an expected response. Use for open-ended questions.
-
-**Compare Meaning**: Evaluates whether the intent and meaning of the actual response matches the expected response, with a configurable threshold. Use for semantic comparison with more nuance than cosine similarity.
-
 > [!TIP]
-> Choose evaluation methods that match the nature of each question. Factual questions with precise answers work well with Exact Match or Keyword Match. Open-ended questions benefit from General Quality or Similarity methods.
+> Choose evaluation methods that match the nature of each question. Factual questions with precise answers work well with Exact Match or Keyword Match. Open-ended questions benefit from General Quality or Similarity methods. You can add additional test methods to the set at creation of the test set or later during editing.
 
 1. After reviewing, select **Cancel** to close the edit of the test case.
 
 1. Select **Evaluate** to start the evaluation of this test set.
 
 > [!NOTE]
-> Evaluation time depends on the number of test cases and agent response time. A test set with 10 cases typically completes in 1-3 minutes.
+> Evaluation time depends on the number of test cases and agent response time. A test set with 10 cases typically completes in 3-5 minutes.
 
 1. After your evaluation runs, review the overall result and then select the evaluation row to drill down into details for each question.
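The method descriptions removed in the hunk above still explain the mechanics well. As a rough illustration of three of them, here is a minimal Python sketch, assuming a simple bag-of-words cosine for Similarity; the function names and token handling are my own, not the product's actual implementation:

```python
import math
import re
from collections import Counter

def exact_match(expected: str, actual: str) -> bool:
    # Character-for-character comparison between expected and actual response.
    return expected == actual

def keyword_match(expected_keywords: list[str], actual: str) -> bool:
    # Pass when every key term appears somewhere in the actual response.
    low = actual.lower()
    return all(k.lower() in low for k in expected_keywords)

def similarity(expected: str, actual: str, threshold: float = 0.8) -> bool:
    # Bag-of-words cosine similarity on a 0-1 scale, compared to a threshold.
    a, b = (Counter(re.findall(r"\w+", s.lower())) for s in (expected, actual))
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return (dot / norm if norm else 0.0) >= threshold
```

This makes the TIP concrete: `exact_match` fails on any wording change, `keyword_match` only requires the key concepts to be present, and `similarity` passes once the responses share enough vocabulary to clear the threshold.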
@@ -520,10 +518,8 @@ Review and interpret evaluation results, compare outcomes across test sets, and
 1. Select an individual test case to view its detailed results:
 - **Question**: The original test question
-
 - **Expected response**: What the AI generated as the correct answer
 - **Actual response**: What the agent actually responded with
 - **Result**: Pass or fail
-
 - **Reasoning**: An explanation of why the test passed or failed
 
 1. For any failed test cases, review the **activity map** to see the step-by-step conversation flow showing the agent's decision path, including which knowledge sources, tools, and topics were used.