This repository has been archived by the owner on Jun 30, 2022. It is now read-only.

[Docs] Added activity flow doc (#2740)
* updated docs

* Update 4-edit-your-cognitive-models.md
lauren-mills authored Nov 22, 2019
1 parent ec8b46c commit a2cf1e2
Showing 5 changed files with 64 additions and 8 deletions.
52 changes: 52 additions & 0 deletions docs/_docs/virtual-assistant/handbook/activity-flow.md
@@ -0,0 +1,52 @@
---
category: Virtual Assistant
subcategory: Handbook
title: Activity Handling
description: Manage routing incoming activities, including handling interruptions.
order: 1
toc: true
---

# {{ page.title }}
{:.no_toc}
{{ page.description }}

## Introduction
The Virtual Assistant provides foundational logic for handling incoming user activities, combining concepts from the Bot Builder SDK v4 with base classes that enable additional scenarios.

## Adapters and Middleware
Incoming activities are initially received through the BotAdapter implementation, processed through the configured Middleware pipeline, then routed onto the Assistant's dialog stack. The **DefaultAdapter** in the Virtual Assistant template provides a set of Middleware out of the box including the following:

- **Telemetry Middleware** - Configures Application Insights telemetry logging.
- **Transcript Logger Middleware** - Configures conversation transcript logging.
- **Show Typing Middleware** - Sends typing indicators from the bot.
- **Feedback Middleware** - Configures the [Feedback]({{site.baseurl}}/virtual-assistant/feedback) feature.
- **Set Locale Middleware** - Configures the CurrentUICulture to enable localization scenarios.
- **Event Debugger Middleware** - Enables debugging for event activities.
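
The pipeline shape described above can be sketched as follows; the types and class below are simplified illustrations for this handbook, not the actual Bot Builder SDK classes:

```typescript
// Simplified sketch of an adapter's middleware pipeline. Each middleware
// sees the turn, calls next() to hand off down the chain, and the bot's
// dialog logic runs at the end of the chain.
type TurnContext = { activity: { type: string; text?: string }; log: string[] };
type NextDelegate = () => Promise<void>;
type Middleware = (context: TurnContext, next: NextDelegate) => Promise<void>;

class MiddlewarePipeline {
  private middleware: Middleware[] = [];

  use(mw: Middleware): this {
    this.middleware.push(mw);
    return this;
  }

  // Run the chain in registration order, ending with the bot's own logic.
  async run(
    context: TurnContext,
    botLogic: (c: TurnContext) => Promise<void>
  ): Promise<void> {
    const runNext = async (i: number): Promise<void> => {
      if (i < this.middleware.length) {
        await this.middleware[i](context, () => runNext(i + 1));
      } else {
        await botLogic(context);
      }
    };
    await runNext(0);
  }
}

// Example: telemetry-style and typing-style middleware, analogous to the
// middleware registered by the DefaultAdapter.
const pipeline = new MiddlewarePipeline()
  .use(async (ctx, next) => { ctx.log.push("telemetry"); await next(); })
  .use(async (ctx, next) => { ctx.log.push("typing"); await next(); });

const context: TurnContext = { activity: { type: "message", text: "hi" }, log: [] };
pipeline.run(context, async (ctx) => { ctx.log.push("dialog"); });
```

Because each middleware wraps the rest of the chain, it can also run logic after `await next()` returns, i.e. on the way back out of the turn.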

## Activity Handler
After the activity is processed by the Adapter and Middleware pipeline, it is received by the **ActivityHandler** implementation. The **DefaultActivityHandler** in the template derives from **TeamsActivityHandler**, which enables Teams scenarios out of the box. By default, the **DefaultActivityHandler** passes the incoming message into the **MainDialog**; this logic can be customized as needed.
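
As a rough illustration of that default behavior (simplified names, not the actual SDK or template classes), the handler simply forwards each message turn into a main dialog:

```typescript
// Illustrative sketch: a DefaultActivityHandler-style class that runs every
// incoming message through a main dialog, as the template does with MainDialog.
interface DialogSketch {
  run(utterance: string): string;
}

class MainDialogSketch implements DialogSketch {
  run(utterance: string): string {
    return `MainDialog handled: ${utterance}`;
  }
}

class DefaultActivityHandlerSketch {
  constructor(private mainDialog: DialogSketch) {}

  // Mirrors the template's behavior of passing incoming messages to MainDialog.
  onMessage(utterance: string): string {
    return this.mainDialog.run(utterance);
  }
}

const handler = new DefaultActivityHandlerSketch(new MainDialogSketch());
handler.onMessage("book a flight");
```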

## Dialogs
The **DefaultActivityHandler** passes incoming activities into the **MainDialog**. **MainDialog** derives from **ActivityHandlerDialog**, which provides its own routing logic for handling activities of different types and enables interruptions. The following diagram shows how activities flow through the different methods in **MainDialog**:

![]({{site.baseurl}}/assets/images/virtual-assistant-main-dialog-flow.png)

### Interruptions
Once an activity flows into **MainDialog**, one of the first methods called is **OnInterruptDialogAsync()**. The following interruptions are configured out of the box:
- **Switching between Skills** - Switches between connected skills based on intent.
- **Cancellation** - Cancels the current dialog.
- **Help** - Sends a help message, then resumes the waiting dialog.
- **Escalation** - Shows an escalation message.
- **Log out** - Logs the user out.
- **Repeat** - Repeats the last set of activities from the bot. Useful for speech scenarios.
- **Start over** - Starts the current dialog over.
- **Stop** - Can be implemented to stop readout in speech scenarios.
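
The interruption check can be pictured as a first-pass switch on the top intent before normal routing continues. The sketch below is illustrative only; the intent names and the action values are assumptions, not the template's actual code:

```typescript
// Illustrative sketch of OnInterruptDialogAsync-style logic: map a top
// scoring intent to an interruption action. Names are assumptions.
type InterruptionAction = "end" | "waiting" | "resume" | "noAction";

function evaluateInterruption(topIntent: string): InterruptionAction {
  switch (topIntent) {
    case "Cancel":     // cancel the current dialog
    case "Stop":       // stop readout in speech scenarios
      return "end";
    case "Help":       // send a help message, then resume the waiting dialog
    case "Repeat":     // replay the last bot activities (speech scenarios)
      return "resume";
    case "Logout":     // log the user out, then wait for new input
    case "StartOver":  // restart the current dialog
      return "waiting";
    default:
      return "noAction"; // no interruption; continue normal routing
  }
}
```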

### Activity Routing
Once interruptions are evaluated, the activity is processed according to its activity type:

- **OnMessageActivityAsync()** - Any incoming message activities that were not handled by a waiting dialog.
- **OnMembersAddedAsync()** - Any incoming conversation update activity. Used for introduction logic.
- **OnEventActivityAsync()** - Any incoming event activity.
- **OnUnhandledActivityTypeAsync()** - Any other incoming activity.
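
The routing above amounts to a dispatch on the activity type. A minimal sketch (a simplification, not the actual **ActivityHandlerDialog** implementation):

```typescript
// Illustrative sketch of activity-type routing: return the name of the
// handler method that would process the activity.
interface IncomingActivity { type: string; name?: string; text?: string }

class ActivityRoutingSketch {
  route(activity: IncomingActivity): string {
    switch (activity.type) {
      case "message":            // messages not handled by a waiting dialog
        return "OnMessageActivityAsync";
      case "conversationUpdate": // used for introduction logic
        return "OnMembersAddedAsync";
      case "event":              // event activities
        return "OnEventActivityAsync";
      default:                   // anything else
        return "OnUnhandledActivityTypeAsync";
    }
  }
}

const router = new ActivityRoutingSketch();
router.route({ type: "message", text: "hi" });
```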
4 changes: 3 additions & 1 deletion docs/_docs/virtual-assistant/handbook/feedback.md
@@ -60,4 +60,6 @@ After the middleware is configured, you can request feedback as usual.
## View your feedback in Power BI
You can view your **Feedback** in the Feedback tab of the Conversational AI Dashboard.

![]({{site.baseurl}}/assets/images/analytics/virtual-assistant-analytics-powerbi-13.png)

[Learn how to set up your own Power BI dashboard.]({{site.baseurl}}/solution-accelerators/tutorials/view-analytics/1-intro/)
4-edit-your-cognitive-models.md
@@ -9,22 +9,24 @@ order: 4

# Tutorial: {{page.subcategory}} ({{page.language}})

## Update your knowledge bases
The Virtual Assistant Template includes two knowledge bases, FAQ and Chitchat, that can be customized to fit your scenario. For example, QnA Maker offers FAQ and PDF extraction to automatically build a knowledge base from your existing content ([learn more](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-sources-supported)).

There are also a variety of prebuilt chitchat knowledge bases with different personality types ([learn more](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/chit-chat-knowledge-base)). Refer to this documentation to learn how to edit your knowledge bases in the QnA Maker portal: [How to edit a knowledgebase](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/edit-knowledge-base).

Once you have made your desired changes, update your Virtual Assistant's Dispatch model by running the following command from your project directory:
```powershell
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
```
> This script updates your local .lu files with any changes made in the QnA Maker or LUIS portals, then runs `dispatch refresh` to update your Dispatch model with the changes.
## Add an additional knowledge base

You may wish to add an additional [QnA Maker](https://www.qnamaker.ai/) knowledge base to your assistant. This can be done with the following steps.

1. Create your new knowledge base using the QnA Maker portal. Alternatively, create it from a new `.lu` file by adding that file to the corresponding resource folder. For example, if you are using an English resource, place it in the `deployment/resources/QnA/en-us` folder. To learn how to create a knowledge base from a `.lu` file using the `ludown` and `qnamaker` CLI tools, refer to [this blog post](https://blog.botframework.com/2018/06/20/qnamaker-with-the-new-botbuilder-tools-for-local-development/).

3. Update the `cognitiveModels.json` file in the root of your project with a new entry for your newly created QnA Maker knowledge base. An example is shown below:

```json
{
@@ -37,7 +39,7 @@
}
```

The `kbID`, `hostName` and `endpoint key` can all be found within the **Publish** page on the [QnA Maker portal](https://qnamaker.ai). The subscription key is available from your QnA resource in the Azure Portal.
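
As an illustration only, an entry carrying those values could be modeled and checked as below. The property names are assumptions based on the values the doc describes (`kbID`, `hostName`, endpoint key, subscription key); check your generated `cognitiveModels.json` for the exact shape:

```typescript
// Hypothetical shape of a QnA Maker entry in cognitiveModels.json; the
// exact property names in your generated file may differ.
interface QnaMakerConfig {
  id: string;              // name used to reference this knowledge base
  kbId: string;            // from the Publish page in the QnA Maker portal
  hostname: string;        // from the Publish page
  endpointKey: string;     // from the Publish page
  subscriptionKey: string; // from the QnA resource in the Azure Portal
}

// Report which required values are still empty before deployment.
function validateQnaConfig(cfg: QnaMakerConfig): string[] {
  const missing: string[] = [];
  for (const key of ["kbId", "hostname", "endpointKey", "subscriptionKey"] as const) {
    if (!cfg[key]) missing.push(key);
  }
  return missing;
}
```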

4. The final step is to update your Dispatch model and the associated strongly typed class (LuisGen). We have provided the `update_cognitive_models.ps1` script to simplify this. The optional `-RemoteToLocal` parameter will generate the matching LU file on disk for your new knowledge base (if you created it using the portal). The script will then refresh the Dispatch model.

@@ -53,7 +55,7 @@ You can now leverage multiple QnA sources as a part of your assistant's knowledge.

## Update your local LU files for LUIS and QnA Maker

As you build out your assistant, you will likely update the LUIS models and QnA Maker knowledge bases in their respective portals. You'll then need to keep the LU files representing those models in source control up to date. We have provided the following script, driven by the sources in your `cognitiveModels.json` file, to refresh the local LU files for your project.

Run the following command in PowerShell (pwsh.exe) from your **project directory**.

