diff --git a/articles/active-directory-b2c/active-directory-b2c-reference-spa.md b/articles/active-directory-b2c/active-directory-b2c-reference-spa.md index d05fac2aab5dc..100ea9c753929 100644 --- a/articles/active-directory-b2c/active-directory-b2c-reference-spa.md +++ b/articles/active-directory-b2c/active-directory-b2c-reference-spa.md @@ -23,7 +23,7 @@ Many modern apps have a single-page app front end that primarily is written in J To support these applications, Azure Active Directory B2C (Azure AD B2C) uses the OAuth 2.0 implicit flow. The OAuth 2.0 authorization implicit grant flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure Active Directory (Azure AD) authorize endpoint, without any server-to-server exchange. All authentication logic and session handling takes place entirely in the JavaScript client, without additional page redirects. -Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](active-directory-b2c-reference-policies.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In this article, we show you how to use the implicit flow and Azure AD to implement each of these experiences in your single-page applications. To help you get started, take a look at our [Node.js](https://github.com/Azure-Samples/active-directory-b2c-javascript-singlepageapp-nodejs-webapi) and [Microsoft .NET](https://github.com/Azure-Samples/active-directory-b2c-javascript-singlepageapp-dotnet-webapi) samples. +Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](active-directory-b2c-reference-policies.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In this article, we show you how to use the implicit flow and Azure AD to implement each of these experiences in your single-page applications. In the example HTTP requests in this article, we use our sample Azure AD B2C directory, **fabrikamb2c.onmicrosoft.com**. We also use our own sample application and user flows. You can try the requests yourself by using these values, or you can replace them with your own values. Learn how to [get your own Azure AD B2C directory, application, and user flows](#use-your-own-azure-ad-b2c-tenant). @@ -272,8 +272,3 @@ To try these requests yourself, complete the following three steps. Replace the 2. [Create an application](active-directory-b2c-app-registration.md) to obtain an application ID and a `redirect_uri` value. Include a web app or web API in your app. Optionally, you can create an application secret. 3. [Create your user flows](active-directory-b2c-reference-policies.md) to obtain your user flow names. 
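For orientation, the following is a minimal sketch of what an implicit-flow authorization request carrying the policy parameter might look like. It assumes the sample **fabrikamb2c** tenant mentioned above; the `client_id`, `redirect_uri`, and `b2c_1_sign_in` user flow name are placeholders, so substitute the values from your own application registration and user flows.

```http
GET https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/oauth2/v2.0/authorize?
client_id=00000000-0000-0000-0000-000000000000
&response_type=id_token
&redirect_uri=https%3A%2F%2Flocalhost%3A6420
&response_mode=fragment
&scope=openid
&nonce=defaultNonce
&p=b2c_1_sign_in
```

The `p` parameter selects the user flow (policy) to run, and `response_mode=fragment` returns the resulting token in the URL fragment, where the JavaScript client can read it without any server-side exchange.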
-## Samples - -* [Create a single-page app by using Node.js](https://github.com/Azure-Samples/active-directory-b2c-javascript-singlepageapp-nodejs-webapi) -* [Create a single-page app by using .NET](https://github.com/Azure-Samples/active-directory-b2c-javascript-singlepageapp-dotnet-webapi) - diff --git a/articles/active-directory-b2c/active-directory-b2c-setup-goog-app.md b/articles/active-directory-b2c/active-directory-b2c-setup-goog-app.md index 86da0cd6df045..5efb78abb4f6d 100644 --- a/articles/active-directory-b2c/active-directory-b2c-setup-goog-app.md +++ b/articles/active-directory-b2c/active-directory-b2c-setup-goog-app.md @@ -8,7 +8,7 @@ manager: daveba ms.service: active-directory ms.workload: identity ms.topic: conceptual -ms.date: 09/11/2018 +ms.date: 03/25/2019 ms.author: davidmu ms.subservice: B2C --- @@ -20,15 +20,13 @@ ms.subservice: B2C To use a Google account as an [identity provider](active-directory-b2c-reference-oauth-code.md) in Azure Active Directory (Azure AD) B2C, you need to create an application in your tenant that represents it. If you don’t already have a Google account you can get it at [https://accounts.google.com/SignUp](https://accounts.google.com/SignUp). 1. Sign in to the [Google Developers Console](https://console.developers.google.com/) with your Google account credentials. -2. Select **Create project**, and then click **Create**. If you have created projects before, select the project list, and then select **New Project**. +2. In the upper-left corner of the page, select the project list, and then select **New Project**. 3. Enter a **Project Name**, click **Create**, and then make sure you are using the new project. -3. Select **Credentials** in the left menu, and then select **Create credentials** > **Oauth client ID**. -4. Select **Configure consent screen**. -5. Select or specify a valid **Email address**, provide a **Product name shown to users**, add `b2clogin.com` to **Authorized domains**, and click **Save**. -6. Under **Application type**, select **Web application**. -7. Enter a **Name** for your application, enter `https://your-tenant-name.b2clogin.com` in **Authorized JavaScript origins**, and `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorized redirect URIs**. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C. -8. Click **Create**. -9. Copy the values of **Client ID** and **Client secret**. You will need both of them to configure Google as an identity provider in your tenant. **Client secret** is an important security credential. +4. Select **Credentials** in the left menu, and then select **Create credentials** > **Oauth client ID**. +5. Under **Application type**, select **Web application**. +6. Enter a **Name** for your application, enter `https://your-tenant-name.b2clogin.com` in **Authorized JavaScript origins**, and `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorized redirect URIs**. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C. +7. Click **Create**. +8. Copy the values of **Client ID** and **Client secret**. You will need both of them to configure Google as an identity provider in your tenant. 
**Client secret** is an important security credential. ## Configure a Google account as an identity provider diff --git a/articles/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.md b/articles/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.md index 486519dc9bdfb..d2bcd24ef8b7a 100644 --- a/articles/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.md +++ b/articles/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.md @@ -18,6 +18,8 @@ ms.collection: M365-identity-device-management After the deployment of Azure AD Password Protection, monitoring and reporting are essential tasks. This article goes into detail to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection. +Monitoring and reporting are done either by event log messages or by running PowerShell cmdlets. The DC agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software does not install a PowerShell module. + ## DC agent event logging On each domain controller, the DC agent service software writes the results of each individual password validation operation (and other status) to a local event log: diff --git a/articles/active-directory/conditional-access/block-legacy-authentication.md b/articles/active-directory/conditional-access/block-legacy-authentication.md index 4d973ba1f370a..951c3e2f8a46f 100644 --- a/articles/active-directory/conditional-access/block-legacy-authentication.md +++ b/articles/active-directory/conditional-access/block-legacy-authentication.md @@ -15,7 +15,7 @@ ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: identity -ms.date: 03/22/2019 +ms.date: 03/25/2019 ms.author: markvi ms.reviewer: calebb @@ -136,4 +136,6 @@ If you block legacy authentication using the other clients condition, you can al ## Next steps -If you are not familiar with configuring conditional access policies yet, see [require MFA for specific apps with Azure Active Directory conditional access](app-based-mfa.md) for an example. +- If you are not familiar with configuring conditional access policies yet, see [require MFA for specific apps with Azure Active Directory conditional access](app-based-mfa.md) for an example. 
+ +- For more information about modern authentication support, see [How modern authentication works for Office 2013 and Office 2016 client apps](https://docs.microsoft.com/en-us/office365/enterprise/modern-auth-for-office-2013-and-2016). diff --git a/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md b/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md index 339471edcb62d..a5d970a18e460 100644 --- a/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md +++ b/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md @@ -3,7 +3,7 @@ title: Azure Active Directory activity logs in Azure Monitor (preview) | Microso description: Introduction to Azure Active Directory activity logs in Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-audit-logs.md b/articles/active-directory/reports-monitoring/concept-audit-logs.md index 360ac62540d6d..1418977d63203 100644 --- a/articles/active-directory/reports-monitoring/concept-audit-logs.md +++ b/articles/active-directory/reports-monitoring/concept-audit-logs.md @@ -4,7 +4,7 @@ title: Audit activity reports in the Azure Active Directory portal | Microsoft D description: Introduction to the audit activity reports in the Azure Active Directory portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-reporting-api.md b/articles/active-directory/reports-monitoring/concept-reporting-api.md index e9719d68a8ae2..0f78743888ef7 100644 --- a/articles/active-directory/reports-monitoring/concept-reporting-api.md +++ b/articles/active-directory/reports-monitoring/concept-reporting-api.md @@ -4,7 +4,7 @@ title: Get started with the Azure AD reporting API | Microsoft Docs description: How to get started with the Azure Active Directory reporting API services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-risk-events.md b/articles/active-directory/reports-monitoring/concept-risk-events.md index 103ae36e61516..a1b18d238549c 100644 --- a/articles/active-directory/reports-monitoring/concept-risk-events.md +++ b/articles/active-directory/reports-monitoring/concept-risk-events.md @@ -3,7 +3,7 @@ title: Azure Active Directory risk events | Microsoft Docs description: This article gives you a detailed overview of what risk events are. 
services: active-directory keywords: azure active directory identity protection, security, risk, risk level, vulnerability, security policy -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: fa2c8b51-d43d-4349-8308-97e87665400b @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: conceptual ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-risky-sign-ins.md b/articles/active-directory/reports-monitoring/concept-risky-sign-ins.md index 1fbd8025cf4ad..f615baf4aac42 100644 --- a/articles/active-directory/reports-monitoring/concept-risky-sign-ins.md +++ b/articles/active-directory/reports-monitoring/concept-risky-sign-ins.md @@ -3,7 +3,7 @@ title: Risky sign-ins report in the Azure Active Directory portal | Microsoft Docs description: Learn about the risky sign-ins report in the Azure Active Directory portal services: active-directory -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: 7728fcd7-3dd5-4b99-a0e4-949c69788c0f @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-sign-ins.md b/articles/active-directory/reports-monitoring/concept-sign-ins.md index 842fd4ccaa483..93dca427fda9c 100644 --- a/articles/active-directory/reports-monitoring/concept-sign-ins.md +++ b/articles/active-directory/reports-monitoring/concept-sign-ins.md @@ -3,7 +3,7 @@ title: Sign-in activity reports in the Azure Active Directory portal | Microsoft description: Introduction to sign-in activity reports in the Azure Active Directory portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/concept-user-at-risk.md b/articles/active-directory/reports-monitoring/concept-user-at-risk.md index 7b24b8467068b..8178c0d24d1a1 100644 --- a/articles/active-directory/reports-monitoring/concept-user-at-risk.md +++ b/articles/active-directory/reports-monitoring/concept-user-at-risk.md @@ -3,7 +3,7 @@ title: Users flagged for risk security report in the Azure Active Directory portal | Microsoft Docs description: Learn about the users flagged for risk security report in the Azure Active Directory portal services: active-directory -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: addd60fe-d5ac-4b8b-983c-0736c80ace02 @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 01/17/2019 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md b/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md index 1e3f94240cc9e..2e1d626a8e004 100644 --- a/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md +++ b/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md @@ -3,7 
+3,7 @@ title: Analyze Azure Active Directory activity logs using Azure Monitor logs (pr description: Learn how to analyze Azure Active Directory activity logs using Azure Monitor logs (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md b/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md index 8e4cb5974332b..0a9dfa58a2688 100644 --- a/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md +++ b/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md @@ -3,7 +3,7 @@ title: Prerequisites to access the Azure Active Directory reporting API | Micros description: Learn about the prerequisites to access the Azure AD reporting API services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-find-activity-reports.md b/articles/active-directory/reports-monitoring/howto-find-activity-reports.md index 8bae4c283fc0d..c676ef26bef7a 100644 --- a/articles/active-directory/reports-monitoring/howto-find-activity-reports.md +++ b/articles/active-directory/reports-monitoring/howto-find-activity-reports.md @@ -4,7 +4,7 @@ title: Find Azure Active Directory user activity reports in Azure portal | Micro description: Learn where the Azure Active Directory user activity reports are in the Azure portal. 
services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -13,7 +13,7 @@ ms.topic: conceptual ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md b/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md index c46e30f29e860..0add407b7149f 100644 --- a/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md +++ b/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md @@ -3,7 +3,7 @@ title: How to install and use the log analytics views for Azure Active Directory description: Learn how to install and use the log analytics views for Azure Active Directory (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md index a58d3ae618bad..c7e3ff418516f 100644 --- a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md +++ b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md @@ -3,7 +3,7 @@ title: How to integrate Azure Active Directory logs with ArcSight using Azure Mo description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 12/03/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md index 479038ac3e832..ec04b76846a6e 100644 --- a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md +++ b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md @@ -3,7 +3,7 @@ title: Stream Azure Active Directory logs to Azure Monitor logs (preview) | Mic description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md index e8aefc19a8d22..b7e083becaf9c 100644 --- 
a/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md +++ b/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md @@ -3,7 +3,7 @@ title: Stream Azure Active Directory logs to SumoLogic using Azure Monitor (prev description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-power-bi-content-pack.md b/articles/active-directory/reports-monitoring/howto-power-bi-content-pack.md index e2734dc22d9f7..793b4cd0c33ad 100644 --- a/articles/active-directory/reports-monitoring/howto-power-bi-content-pack.md +++ b/articles/active-directory/reports-monitoring/howto-power-bi-content-pack.md @@ -3,7 +3,7 @@ title: How to use the Azure Active Directory Power BI Content Pack | Microsoft Docs description: Learn how to use the Azure Active Directory Power BI Content Pack services: active-directory -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: addd60fe-d5ac-4b8b-983c-0736c80ace02 @@ -14,7 +14,7 @@ ms.tgt_pltfrm: ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-remediate-users-flagged-for-risk.md b/articles/active-directory/reports-monitoring/howto-remediate-users-flagged-for-risk.md index 65ee4f8ece35f..ee02e4da89db3 100644 --- a/articles/active-directory/reports-monitoring/howto-remediate-users-flagged-for-risk.md +++ b/articles/active-directory/reports-monitoring/howto-remediate-users-flagged-for-risk.md @@ -3,7 +3,7 @@ title: Users flagged for risk security report in the Azure Active Directory portal | Microsoft Docs description: Learn about the users flagged for risk security report in the Azure Active Directory portal services: active-directory -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: addd60fe-d5ac-4b8b-983c-0736c80ace02 @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md b/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md index 9db2740fb938d..08d2b1948d406 100644 --- a/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md +++ b/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md @@ -4,7 +4,7 @@ title: How to troubleshoot sign-in errors using Azure Active Directory reports | description: Learn how to troubleshoot sign-in errors using Azure Active Directory reports in the Azure portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -13,7 +13,7 @@ ms.topic: conceptual ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: 
M365-identity-device-management @@ -35,17 +35,17 @@ In addition, the sign-ins report can also help you troubleshoot sign-in failures You need: * An Azure AD tenant with a premium (P1/P2) license. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. -* A user, who is in the **global administrator**, **security administrator**, **security reader** or **report reader** role for the tenant. In addition, any user can access their own sign-ins. +* A user who is in the **global administrator**, **security administrator**, **security reader**, or **report reader** role for the tenant. In addition, any user can access their own sign-ins. ## Troubleshoot sign-in errors using the sign-ins report 1. Navigate to the [Azure portal](https://portal.azure.com) and select your directory. 2. Select **Azure Active Directory** and select **Sign-ins** from the **Monitoring** section. -3. Use the provided filters to narrow down the failure, either by the username or object identifier, application name or date. In addition select **Failure** from the **Status** drop-down to display only the failed sign-ins. +3. Use the provided filters to narrow down the failure, either by the username or object identifier, application name, or date. In addition, select **Failure** from the **Status** drop-down to display only the failed sign-ins. ![Filter results](./media/howto-troubleshoot-sign-in-errors/filters.png) -4. Identify the failed sign-in that you want to investigate and select it. This will open up the additional details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**. +4. Identify the failed sign-in you want to investigate. Select it to open the additional details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**. ![Select record](./media/howto-troubleshoot-sign-in-errors/sign-in-failures.png) @@ -53,7 +53,7 @@ You need: ![Troubleshooting and support](./media/howto-troubleshoot-sign-in-errors/troubleshooting-and-support.png) -6. The failure reason describes the error. For example, in the above scenario, the failure reason is **Invalid username or password or Invalid on-premise username or password**. This means that the user entered an incorrect username or password to sign-in to the Azure portal. The fix is to simply sign-in again with the correct username and password. +6. The failure reason describes the error. For example, in the above scenario, the failure reason is **Invalid username or password or Invalid on-premise username or password**. The fix is to sign in again with the correct username and password. 7. You can get additional information, including ideas for remediation, by searching for the error code, **50126** in this example, in the [sign-ins error codes reference](reference-sign-ins-error-codes.md). diff --git a/articles/active-directory/reports-monitoring/overview-monitoring.md b/articles/active-directory/reports-monitoring/overview-monitoring.md index 64c8491d0bd6e..f4dfeef74f41f 100644 --- a/articles/active-directory/reports-monitoring/overview-monitoring.md +++ b/articles/active-directory/reports-monitoring/overview-monitoring.md @@ -4,7 +4,7 @@ title: What is Azure Active Directory monitoring? (preview) | Microsoft Docs description: Provides a general overview of Azure Active Directory monitoring. 
services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant. diff --git a/articles/active-directory/reports-monitoring/overview-reports.md b/articles/active-directory/reports-monitoring/overview-reports.md index 5c19a6d3c2973..5c6a5b9e80dbb 100644 --- a/articles/active-directory/reports-monitoring/overview-reports.md +++ b/articles/active-directory/reports-monitoring/overview-reports.md @@ -4,7 +4,7 @@ title: What are Azure Active Directory reports? | Microsoft Docs description: Provides a general overview of Azure Active Directory reports. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment. diff --git a/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md b/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md index 60ef979585973..ed696c8134ab8 100644 --- a/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md +++ b/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md @@ -3,7 +3,7 @@ title: Tutorial - Archive Azure Active Directory logs to a storage account (prev description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to a storage account (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period. diff --git a/articles/active-directory/reports-monitoring/quickstart-configure-named-locations.md b/articles/active-directory/reports-monitoring/quickstart-configure-named-locations.md index a68b1ab6858d6..dc0eabba24286 100644 --- a/articles/active-directory/reports-monitoring/quickstart-configure-named-locations.md +++ b/articles/active-directory/reports-monitoring/quickstart-configure-named-locations.md @@ -3,7 +3,7 @@ title: Configure named locations in Azure Active Directory | Microsoft Docs description: Learn how to configure named locations. 
services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: f56e042a-78d5-4ea3-be33-94004f2a0fc3 @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk #Customer intent: As an IT administrator, I want to label trusted IP address ranges in my organization so that I can whitelist them and configure location-based conditional access. ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/quickstart-download-audit-report.md b/articles/active-directory/reports-monitoring/quickstart-download-audit-report.md index d599d8652fcb2..78c1e6c8212bf 100644 --- a/articles/active-directory/reports-monitoring/quickstart-download-audit-report.md +++ b/articles/active-directory/reports-monitoring/quickstart-download-audit-report.md @@ -3,7 +3,7 @@ title: Quickstart Download an audit report using the Azure portal | Microsoft Do description: Learn how to download an audit report using the Azure portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an IT administrator, I want to learn how to download an audit report from the Azure portal so that I can understand what actions are being performed by users in my environment. diff --git a/articles/active-directory/reports-monitoring/quickstart-download-sign-in-report.md b/articles/active-directory/reports-monitoring/quickstart-download-sign-in-report.md index ed241be60bdc2..b4bd12132f089 100644 --- a/articles/active-directory/reports-monitoring/quickstart-download-sign-in-report.md +++ b/articles/active-directory/reports-monitoring/quickstart-download-sign-in-report.md @@ -3,7 +3,7 @@ title: Quickstart Download a sign-in report using the Azure portal | Microsoft D description: Learn how to download a sign-in report using the Azure portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an IT administrator, I want to learn how to download a sign report from the Azure portal so that I can understand who is using my environment. 
diff --git a/articles/active-directory/reports-monitoring/quickstart-install-power-bi-content-pack.md b/articles/active-directory/reports-monitoring/quickstart-install-power-bi-content-pack.md index 698cb3c2b6e73..5469a817c6012 100644 --- a/articles/active-directory/reports-monitoring/quickstart-install-power-bi-content-pack.md +++ b/articles/active-directory/reports-monitoring/quickstart-install-power-bi-content-pack.md @@ -3,7 +3,7 @@ title: Install Azure AD Power BI content pack | Microsoft Docs description: Learn how to install Azure AD Power BI content pack services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: fd5604eb-1334-4bd8-bfb5-41280883e2b5 @@ -14,7 +14,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: quickstart ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk #Customer intent: As an IT administrator, I want to install Active Directory Power BI content pack so I can use the pre-configured reports to get insights about my environment. ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-audit-activities.md b/articles/active-directory/reports-monitoring/reference-audit-activities.md index e40634d682db1..4147636905c73 100644 --- a/articles/active-directory/reports-monitoring/reference-audit-activities.md +++ b/articles/active-directory/reports-monitoring/reference-audit-activities.md @@ -4,7 +4,7 @@ title: Azure Active Directory (Azure AD) audit activity reference | Microsoft Do description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory (Azure AD). services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 01/24/2019 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-azure-monitor-audit-log-schema.md b/articles/active-directory/reports-monitoring/reference-azure-monitor-audit-log-schema.md index 439501d356df4..06154cb962056 100644 --- a/articles/active-directory/reports-monitoring/reference-azure-monitor-audit-log-schema.md +++ b/articles/active-directory/reports-monitoring/reference-azure-monitor-audit-log-schema.md @@ -3,7 +3,7 @@ title: Interpret the Azure Active Directory audit log schema in Azure Monitor (p description: Describe the Azure AD audit log schema for use in Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 12/14/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md b/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md index 935c2a916ee6c..f41086a819ef2 100644 --- a/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md +++ b/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md @@ -3,7 +3,7 @@ title: Azure Active Directory sign-in log schema in Azure Monitor (preview) | Mi description: Describe the Azure AD sign in 
log schema for use in Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-reports-data-retention.md b/articles/active-directory/reports-monitoring/reference-reports-data-retention.md index 5a38a7836c439..ca3a35be885af 100644 --- a/articles/active-directory/reports-monitoring/reference-reports-data-retention.md +++ b/articles/active-directory/reports-monitoring/reference-reports-data-retention.md @@ -3,7 +3,7 @@ title: Azure Active Directory report retention policies | Microsoft Docs description: Retention policies on report data in your Azure Active Directory services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-reports-latencies.md b/articles/active-directory/reports-monitoring/reference-reports-latencies.md index e37a3cc3250da..45053efcdf88f 100644 --- a/articles/active-directory/reports-monitoring/reference-reports-latencies.md +++ b/articles/active-directory/reports-monitoring/reference-reports-latencies.md @@ -3,7 +3,7 @@ title: Azure Active Directory reporting latencies | Microsoft Docs description: Learn about the amount of time it takes for reporting events to show up in your Azure portal services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reference-sign-ins-error-codes.md b/articles/active-directory/reports-monitoring/reference-sign-ins-error-codes.md index 5a2e3a889232b..cde0abbf67c46 100644 --- a/articles/active-directory/reports-monitoring/reference-sign-ins-error-codes.md +++ b/articles/active-directory/reports-monitoring/reference-sign-ins-error-codes.md @@ -3,7 +3,7 @@ title: Sign-in activity report error codes in the Azure Active Directory portal description: Reference of sign-in activity report error codes. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/reports-faq.md b/articles/active-directory/reports-monitoring/reports-faq.md index a6328916339cd..2bb5deae2d1bd 100644 --- a/articles/active-directory/reports-monitoring/reports-faq.md +++ b/articles/active-directory/reports-monitoring/reports-faq.md @@ -3,7 +3,7 @@ title: Azure Active Directory Reports FAQ | Microsoft Docs description: Frequently asked quesitons around Azure Active Directory reports. 
services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: 534da0b1-7858-4167-9986-7a62fbd10439 @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: conceptual ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/troubleshoot-content-pack.md b/articles/active-directory/reports-monitoring/troubleshoot-content-pack.md index 5ca8392daeaf6..19d894f9bc509 100644 --- a/articles/active-directory/reports-monitoring/troubleshoot-content-pack.md +++ b/articles/active-directory/reports-monitoring/troubleshoot-content-pack.md @@ -4,7 +4,7 @@ title: 'Troubleshooting Azure Active Directory Activity logs content pack errors description: Provides you with a list of error messages of the Azure Active Directory Activity content pack and steps to fix them. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md b/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md index 0d18be0c287f2..f64bce7de6656 100644 --- a/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md +++ b/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md @@ -4,7 +4,7 @@ title: 'Troubleshoot errors in Azure Active Directory reporting API | Microsoft description: Provides you with a resolution to errors while calling Azure Active Directory Reporting APIs. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md b/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md index 6e1e5aa16bc7b..4c1f046bcf991 100644 --- a/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md +++ b/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md @@ -4,7 +4,7 @@ title: 'Troubleshoot Missing data in the Azure Active Directory activity logs | description: Provides you with a resolution to missing data in Azure Active Directory activity logs. 
services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 01/15/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md b/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md index 44333fa826b6a..f5f02994e8cb2 100644 --- a/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md +++ b/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md @@ -4,7 +4,7 @@ title: 'Troubleshooting: Missing data in the downloaded Azure Active Directory a description: Provides you with a resolution to missing data in downloaded Azure Active Directory activity logs. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -16,7 +16,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk ms.collection: M365-identity-device-management diff --git a/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md b/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md index 4407fcefaa02a..b230224bf4a7b 100644 --- a/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md +++ b/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md @@ -3,7 +3,7 @@ title: Tutorial Get data using the Azure AD Reporting API with certificates | Mi description: This tutorial explains how to use the Azure AD Reporting API with certificate credentials to get data from directories without user intervention. services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba ms.assetid: @@ -14,7 +14,7 @@ ms.devlang: na ms.topic: conceptual ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As a developer, I want to learn how to access the Azure AD reporting API using certificates so that I can create an application that does not require user intervention to access reports. diff --git a/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md b/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md index 6e605bdd39bf3..fbfac4b2e1f88 100644 --- a/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md +++ b/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md @@ -3,7 +3,7 @@ title: Tutorial - Stream Azure Active Directory logs to an Azure event hub (prev description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to an event hub (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an event hub so I can integrate it with my third party SIEM system. 
diff --git a/articles/active-directory/reports-monitoring/tutorial-integrate-activity-logs-with-splunk.md b/articles/active-directory/reports-monitoring/tutorial-integrate-activity-logs-with-splunk.md index 6ce56e50c4c33..464877d3099fd 100644 --- a/articles/active-directory/reports-monitoring/tutorial-integrate-activity-logs-with-splunk.md +++ b/articles/active-directory/reports-monitoring/tutorial-integrate-activity-logs-with-splunk.md @@ -3,7 +3,7 @@ title: Stream Azure Active Directory logs to Splunk using Azure Monitor (preview description: Learn how to integrate Azure Active Directory logs with Splunk by using Azure Monitor (preview) services: active-directory documentationcenter: '' -author: priyamohanram +author: MarkusVi manager: daveba editor: '' @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor ms.date: 11/13/2018 -ms.author: priyamo +ms.author: markvi ms.reviewer: dhanyahk # Customer intent: As an IT administrator, I want to learn how to integrate Azure AD logs with my Splunk instance so I can visualize Azure AD logs in the context of all other data collected in my environment. diff --git a/articles/active-directory/saas-apps/hrworks-single-sign-on-tutorial.md b/articles/active-directory/saas-apps/hrworks-single-sign-on-tutorial.md new file mode 100644 index 0000000000000..96c3f0a9b8a82 --- /dev/null +++ b/articles/active-directory/saas-apps/hrworks-single-sign-on-tutorial.md @@ -0,0 +1,225 @@ +--- +title: 'Tutorial: Azure Active Directory integration with HRworks Single Sign-On | Microsoft Docs' +description: Learn how to configure single sign-on between Azure Active Directory and HRworks Single Sign-On. +services: active-directory +documentationCenter: na +author: jeevansd +manager: mtillman +ms.reviewer: barbkess + +ms.assetid: c4c5d434-3f8a-411e-83a5-c3d5276ddc0a +ms.service: active-directory +ms.subservice: saas-app-tutorial +ms.workload: identity +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: tutorial +ms.date: 03/25/2019 +ms.author: jeedes + +--- +# Tutorial: Azure Active Directory integration with HRworks Single Sign-On + +In this tutorial, you learn how to integrate HRworks Single Sign-On with Azure Active Directory (Azure AD). +Integrating HRworks Single Sign-On with Azure AD provides you with the following benefits: + +* You can control in Azure AD who has access to HRworks Single Sign-On. +* You can enable your users to be automatically signed-in to HRworks Single Sign-On (Single Sign-On) with their Azure AD accounts. +* You can manage your accounts in one central location - the Azure portal. + +If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). +If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. + +## Prerequisites + +To configure Azure AD integration with HRworks Single Sign-On, you need the following items: + +* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) +* HRworks Single Sign-On single sign-on enabled subscription + +## Scenario description + +In this tutorial, you configure and test Azure AD single sign-on in a test environment. 
+ +* HRworks Single Sign-On supports **SP** initiated SSO + +## Adding HRworks Single Sign-On from the gallery + +To configure the integration of HRworks Single Sign-On into Azure AD, you need to add HRworks Single Sign-On from the gallery to your list of managed SaaS apps. + +**To add HRworks Single Sign-On from the gallery, perform the following steps:** + +1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click the **Azure Active Directory** icon. + + ![The Azure Active Directory button](common/select-azuread.png) + +2. Navigate to **Enterprise Applications** and then select the **All Applications** option. + + ![The Enterprise applications blade](common/enterprise-applications.png) + +3. To add a new application, click the **New application** button at the top of the dialog. + + ![The New application button](common/add-new-app.png) + +4. In the search box, type **HRworks Single Sign-On**, select **HRworks Single Sign-On** from the results panel, and then click the **Add** button to add the application. + + ![HRworks Single Sign-On in the results list](common/search-new-app.png) + +## Configure and test Azure AD single sign-on + +In this section, you configure and test Azure AD single sign-on with HRworks Single Sign-On based on a test user called **Britta Simon**. +For single sign-on to work, a link relationship between an Azure AD user and the related user in HRworks Single Sign-On needs to be established. + +To configure and test Azure AD single sign-on with HRworks Single Sign-On, you need to complete the following building blocks: + +1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature. +2. **[Configure HRworks Single Sign-On Single Sign-On](#configure-hrworks-single-sign-on-single-sign-on)** - to configure the Single Sign-On settings on the application side. +3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. +4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. +5. **[Create HRworks Single Sign-On test user](#create-hrworks-single-sign-on-test-user)** - to have a counterpart of Britta Simon in HRworks Single Sign-On that is linked to the Azure AD representation of the user. +6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. + +### Configure Azure AD single sign-on + +In this section, you enable Azure AD single sign-on in the Azure portal. + +To configure Azure AD single sign-on with HRworks Single Sign-On, perform the following steps: + +1. In the [Azure portal](https://portal.azure.com/), on the **HRworks Single Sign-On** application integration page, select **Single sign-on**. + + ![Configure single sign-on link](common/select-sso.png) + +2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on. + + ![Single sign-on select mode](common/select-saml-option.png) + +3. On the **Set up Single Sign-On with SAML** page, click the **Edit** icon to open the **Basic SAML Configuration** dialog. + + ![Edit Basic SAML Configuration](common/edit-urls.png) + +4. 
On the **Basic SAML Configuration** section, perform the following steps: + + ![HRworks Single Sign-On Domain and URLs single sign-on information](common/sp-signonurl.png) + + In the **Sign-on URL** text box, type a URL using the following pattern: + `https://login.hrworks.de/?companyId=&directssologin=true` + + > [!NOTE] + > The value is not real. Update the value with the actual Sign-On URL. Contact [HRworks Single Sign-On Client support team](mailto:nadja.sommerfeld@hrworks.de) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + +5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer. + + ![The Certificate download link](common/metadataxml.png) + +6. On the **Set up HRworks Single Sign-On** section, copy the appropriate URL(s) as per your requirement. + + ![Copy configuration URLs](common/copy-configuration-urls.png) + + a. Login URL + + b. Azure AD Identifier + + c. Logout URL + +### Configure HRworks Single Sign-On Single Sign-On + +1. In a different web browser window, sign in to HRworks Single Sign-On as an Administrator. + +2. Click on **Administrator** > **Basics** > **Security** > **Single Sign-on** from the left side of menu bar and perform the following steps: + +    ![Configure Single Sign-On](./media/hrworks-single-sign-on-tutorial/configure01.png) + + a. Check the **Use Single Sign-on** box. + + b. Select **XML Metadata** as **Meta data input method**. + + c. Select **Individual NameID identifier** as **Value for NameID**. + + d. In Notepad, open the Metadata XML that you downloaded from the Azure portal, copy its content, and then paste it into the **Metadata** textbox. + + e. Click **Save**. + +### Create an Azure AD test user + +The objective of this section is to create a test user in the Azure portal called Britta Simon. + +1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**. + + ![The "Users and groups" and "All users" links](common/users.png) + +2. Select **New user** at the top of the screen. + + ![New user Button](common/new-user.png) + +3. In the User properties, perform the following steps. + + ![The User dialog box](common/user-properties.png) + + a. In the **Name** field, enter **BrittaSimon**. + + b. In the **User name** field, type the username like BrittaSimon@contoso.com. + + c. Select **Show password** check box, and then write down the value that's displayed in the Password box. + + d. Click **Create**. + +### Assign the Azure AD test user + +In this section, you enable Britta Simon to use Azure single sign-on by granting access to HRworks Single Sign-On. + +1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **HRworks Single Sign-On**. + + ![Enterprise applications blade](common/enterprise-applications.png) + +2. In the applications list, select **HRworks Single Sign-On**. + + ![The HRworks Single Sign-On link in the Applications list](common/all-applications.png) + +3. In the menu on the left, select **Users and groups**. + + ![The "Users and groups" link](common/users-groups-blade.png) + +4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog. + + ![The Add Assignment pane](common/add-assign-user.png) + +5. 
In the **Users and groups** dialog, select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+
+6. If you are expecting any role value in the SAML assertion, then in the **Select Role** dialog, select the appropriate role for the user from the list, and then click the **Select** button at the bottom of the screen.
+
+7. In the **Add Assignment** dialog, click the **Assign** button.
+
+### Create HRworks Single Sign-On test user
+
+To enable Azure AD users to sign in to HRworks Single Sign-On, they must be provisioned into HRworks Single Sign-On. In HRworks Single Sign-On, provisioning is a manual task.
+
+**To provision a user account, perform the following steps:**
+
+1. Sign in to HRworks Single Sign-On as an Administrator.
+
+2. Click **Administrator** > **Basics** > **Persons** > **New person** on the left side of the menu bar.
+
+   ![Configure Single Sign-On](./media/hrworks-single-sign-on-tutorial/configure02.png)
+
+3. On the pop-up, click **Next**.
+
+   ![Configure Single Sign-On](./media/hrworks-single-sign-on-tutorial/configure03.png)
+
+4. On the **Create new person with country for legal terms** pop-up, fill in the respective details, such as **First name** and **Last name**, and then click **Create**.
+
+   ![Configure Single Sign-On](./media/hrworks-single-sign-on-tutorial/configure04.png)
+
+### Test single sign-on
+
+In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+
+When you click the HRworks Single Sign-On tile in the Access Panel, you should be automatically signed in to the HRworks Single Sign-On application for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Additional Resources
+
+- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list)
+
+- [What is application access and single sign-on with Azure Active Directory? 
](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis) + +- [What is conditional access in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) + diff --git a/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure01.png b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure01.png new file mode 100644 index 0000000000000..dab6a779e982d Binary files /dev/null and b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure01.png differ diff --git a/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure02.png b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure02.png new file mode 100644 index 0000000000000..d2e5df8649155 Binary files /dev/null and b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure02.png differ diff --git a/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure03.png b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure03.png new file mode 100644 index 0000000000000..a562e6db92009 Binary files /dev/null and b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure03.png differ diff --git a/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure04.png b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure04.png new file mode 100644 index 0000000000000..2f6b57d46e47c Binary files /dev/null and b/articles/active-directory/saas-apps/media/hrworks-single-sign-on-tutorial/configure04.png differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_001.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_001.png deleted file mode 100644 index 6ce33063b403b..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_001.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_002.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_002.png deleted file mode 100644 index d9f9eb8a13257..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_002.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_003.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_003.png deleted file mode 100644 index 2ea4aab1800e0..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_003.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_01.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_01.png deleted file mode 100644 index 19cb268bb31b8..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_01.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_02.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_02.png deleted file mode 100644 index 5a52c44d9b21a..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_02.png and /dev/null differ diff 
--git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_03.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_03.png deleted file mode 100644 index 21ce52515ad1e..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_03.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_04.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_04.png deleted file mode 100644 index fdea60786b792..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_aaduser_04.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv1.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv1.png deleted file mode 100644 index c5cfc26053118..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv1.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv2.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv2.png deleted file mode 100644 index 6bfbc962bb4fc..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv2.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv3.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv3.png deleted file mode 100644 index 64835e8edc4bf..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_csv3.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_google.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_google.png deleted file mode 100644 index 8c3f6ffce8b16..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/create_testuser_google.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_invite.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_invite.png deleted file mode 100644 index 27189261c865c..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_invite.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_user_provisioning.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_user_provisioning.png deleted file mode 100644 index 69437430f960c..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/pingboard_user_provisioning.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_01.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_01.png deleted file mode 100644 index 19cb268bb31b8..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_01.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_02.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_02.png deleted file mode 100644 index 1ff3f25482e34..0000000000000 
Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_02.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_03.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_03.png deleted file mode 100644 index 1f3d381fc718b..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_03.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_04.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_04.png deleted file mode 100644 index 014502ab06379..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_04.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_100.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_100.png deleted file mode 100644 index 4fe5408cdbfaf..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_100.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_200.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_200.png deleted file mode 100644 index 84a3a8cb56791..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_200.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_201.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_201.png deleted file mode 100644 index 39bc0e0407d5c..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_201.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_202.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_202.png deleted file mode 100644 index f873c028bcb36..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_202.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_203.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_203.png deleted file mode 100644 index 9a5e9e7eea18e..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_203.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_300.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_300.png deleted file mode 100644 index 5a9f929f06020..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_300.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_400.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_400.png deleted file mode 100644 index bf1f9ced09e43..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_general_400.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_addfromgallery.png 
b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_addfromgallery.png deleted file mode 100644 index 7e449db3151df..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_addfromgallery.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_app.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_app.png deleted file mode 100644 index cf1c2dd33625d..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_app.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_cert.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_cert.png deleted file mode 100644 index 5ad82f093af33..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_cert.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_certificate.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_certificate.png deleted file mode 100644 index d2654d42d0403..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_certificate.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configure.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configure.png deleted file mode 100644 index 87f7218afd23d..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configure.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configuresignon.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configuresignon.png deleted file mode 100644 index 7ea8deee9a7ac..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_configuresignon.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_makecertactive.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_makecertactive.png deleted file mode 100644 index f73e7fc319d3f..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_makecertactive.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_samlbase.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_samlbase.png deleted file mode 100644 index ec9a394b6c54c..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_samlbase.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_search.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_search.png deleted file mode 100644 index 5fc3173f420d6..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_search.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated.png 
b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated.png deleted file mode 100644 index bffe8c1148d2e..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated01.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated01.png deleted file mode 100644 index 1815d81a6d703..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_sp_initiated01.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_url.png b/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_url.png deleted file mode 100644 index c0ba5f0af3fff..0000000000000 Binary files a/articles/active-directory/saas-apps/media/pingboard-tutorial/tutorial_pingboard_url.png and /dev/null differ diff --git a/articles/active-directory/saas-apps/pingboard-tutorial.md b/articles/active-directory/saas-apps/pingboard-tutorial.md index 094819d82c934..ef0baa3fd5059 100644 --- a/articles/active-directory/saas-apps/pingboard-tutorial.md +++ b/articles/active-directory/saas-apps/pingboard-tutorial.md @@ -4,8 +4,8 @@ description: Learn how to configure single sign-on between Azure Active Director services: active-directory documentationCenter: na author: jeevansd -manager: daveba -ms.reviewer: joflore +manager: mtillman +ms.reviewer: barbkess ms.assetid: 28acce3e-22a0-4a37-8b66-6e518d777350 ms.service: active-directory @@ -13,193 +13,217 @@ ms.subservice: saas-app-tutorial ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na -ms.topic: article -ms.date: 05/15/2018 +ms.topic: tutorial +ms.date: 03/25/2019 ms.author: jeedes -ms.collection: M365-identity-device-management --- # Tutorial: Azure Active Directory integration with Pingboard In this tutorial, you learn how to integrate Pingboard with Azure Active Directory (Azure AD). - Integrating Pingboard with Azure AD provides you with the following benefits: -- You can control in Azure AD who has access to Pingboard -- You can enable your users to automatically get signed-on to Pingboard (Single Sign-On) with their Azure AD accounts -- You can manage your accounts in one central location - the Azure portal +* You can control in Azure AD who has access to Pingboard. +* You can enable your users to be automatically signed-in to Pingboard (Single Sign-On) with their Azure AD accounts. +* You can manage your accounts in one central location - the Azure portal. -If you want to know more details about SaaS app integration with Azure AD, see [what is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md). +If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). +If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites To configure Azure AD integration with Pingboard, you need the following items: -- An Azure AD subscription -- A Pingboard single sign-on enabled subscription - -> [!NOTE] -> To test the steps in this tutorial, we do not recommend using a production environment. 
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) +* Pingboard single sign-on enabled subscription -To test the steps in this tutorial, you should follow these recommendations: +## Scenario description -- Do not use your production environment, unless it is necessary. -- If you don't have an Azure AD trial environment, you can [get a one-month trial](https://azure.microsoft.com/pricing/free-trial/). +In this tutorial, you configure and test Azure AD single sign-on in a test environment. -## Scenario description -In this tutorial, you test Azure AD single sign-on in a test environment. -The scenario outlined in this tutorial consists of two main building blocks: +* Pingboard supports **SP** and **IDP** initiated SSO -1. Adding Pingboard from the gallery -1. Configuring and testing Azure AD single sign-on +* Pingboard supports [Automated user provisioning](https://docs.microsoft.com/azure/active-directory/saas-apps/pingboard-provisioning-tutorial) ## Adding Pingboard from the gallery + To configure the integration of Pingboard into Azure AD, you need to add Pingboard from the gallery to your list of managed SaaS apps. **To add Pingboard from the gallery, perform the following steps:** -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. - ![The Azure Active Directory button][1] + ![The Azure Active Directory button](common/select-azuread.png) -1. Navigate to **Enterprise applications**. Then go to **All applications**. +2. Navigate to **Enterprise Applications** and then select the **All Applications** option. - ![The Enterprise applications][2] + ![The Enterprise applications blade](common/enterprise-applications.png) -1. Click **Add** button on the top of the dialog. +3. To add new application, click **New application** button on the top of dialog. - ![The New application button][3] + ![The New application button](common/add-new-app.png) -1. In the search box, type **Pingboard**, select **Pingboard** from result panel and then click **Add** button to add the application. +4. In the search box, type **Pingboard**, select **Pingboard** from result panel then click **Add** button to add the application. - ![Pingboard in the results list](./media/pingboard-tutorial/tutorial_pingboard_addfromgallery.png) + ![Pingboard in the results list](common/search-new-app.png) ## Configure and test Azure AD single sign-on -In this section, you configure and test Azure AD single sign-on with Pingboard based on a test user called "Britta Simon". - -For single sign-on to work, Azure AD needs to know what the counterpart user in Pingboard is to a user in Azure AD. In other words, a link relationship between an Azure AD user and the related user in Pingboard needs to be established. - -This link relationship is established by assigning the value of the **user name** in Azure AD as the value of the **Username** in Pingboard. +In this section, you configure and test Azure AD single sign-on with Pingboard based on a test user called **Britta Simon**. +For single sign-on to work, a link relationship between an Azure AD user and the related user in Pingboard needs to be established. To configure and test Azure AD single sign-on with Pingboard, you need to complete the following building blocks: 1. 
**[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature. -1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. -1. **[Create a Pingboard test user](#create-a-pingboard-test-user)** - to have a counterpart of Britta Simon in Pingboard that is linked to the Azure AD representation of user. -1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. -1. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. +2. **[Configure Pingboard Single Sign-On](#configure-pingboard-single-sign-on)** - to configure the Single Sign-On settings on application side. +3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. +4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. +5. **[Create Pingboard test user](#create-pingboard-test-user)** - to have a counterpart of Britta Simon in Pingboard that is linked to the Azure AD representation of user. +6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. ### Configure Azure AD single sign-on -In this section, you enable Azure AD single sign-on in the Azure portal and configure single sign-on in your Pingboard application. +In this section, you enable Azure AD single sign-on in the Azure portal. + +To configure Azure AD single sign-on with Pingboard, perform the following steps: + +1. In the [Azure portal](https://portal.azure.com/), on the **Pingboard** application integration page, select **Single sign-on**. -**To configure Azure AD single sign-on with Pingboard, perform the following steps:** + ![Configure single sign-on link](common/select-sso.png) -1. In the Azure portal, on the **Pingboard** application integration page, click **Single sign-on**. +2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on. - ![Configure single sign-on link][4] + ![Single sign-on select mode](common/select-saml-option.png) -1. On the **Single sign-on** dialog, select **Mode** as **SAML-based Sign-on** to enable single sign-on. +3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog. - ![Single sign-on dialog box](./media/pingboard-tutorial/tutorial_pingboard_samlbase.png) + ![Edit Basic SAML Configuration](common/edit-urls.png) -1. On the **Pingboard Domain and URLs** section, perform the following steps if you wish to configure the application in **IDP** initiated mode: +4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps: - ![Pingboard Domain and URLs single sign-on information IDP](./media/pingboard-tutorial/tutorial_pingboard_url.png) + ![Pingboard Domain and URLs single sign-on information](common/idp-intiated.png) - a. In the **Identifier** textbox, type the value as: `http://app.pingboard.com/sp` + a. In the **Identifier** text box, type a URL: + `http://app.pingboard.com/sp` - b. In the **Reply URL** textbox, type a URL using the following pattern: `https://.pingboard.com/auth/saml/consume` + b. In the **Reply URL** text box, type a URL using the following pattern: + `https://.pingboard.com/auth/saml/consume` -1. 
Check **Show advanced URL settings**, if you wish to configure the application in **SP** initiated mode:
+5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
 
-    ![Pingboard Domain and URLs single sign-on information SP](./media/pingboard-tutorial/tutorial_pingboard_sp_initiated01.png)
+    ![Pingboard Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
 
-    In the **Sign-on URL** textbox, type the URL using the following pattern: `https://.pingboard.com/sign_in`
+    In the **Sign-on URL** text box, type a URL using the following pattern:
+    `https://.pingboard.com/sign_in`
 
     > [!NOTE]
-    > Please note that these values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact [Pingboard Client support team](https://support.pingboard.com/) to get these values.
+    > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Pingboard Client support team](https://support.pingboard.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+
+   ![The Certificate download link](common/metadataxml.png)
 
-1. On the **SAML Signing Certificate** section, click **Metadata XML** and then save the XML file on your computer.
+7. On the **Set up Pingboard** section, copy the appropriate URL(s) as per your requirement.
 
-    ![Pingboard metadata xml](./media/pingboard-tutorial/tutorial_pingboard_certificate.png)
+   ![Copy configuration URLs](common/copy-configuration-urls.png)
 
-1. Click **Save** button.
+   a. Login URL
 
-    ![Configure Single Sign-On Save button](./media/pingboard-tutorial/tutorial_general_400.png)
+   b. Azure AD Identifier
 
-1. To configure SSO on Pingboard side, open a new browser window and log in to your Pingboard Account. You must be a Pingboard admin to set up single sign on.
+   c. Logout URL
 
-1. From the top menu,, select **Apps > Integrations**
+### Configure Pingboard Single Sign-On
+
+1. To configure SSO on the Pingboard side, open a new browser window and sign in to your Pingboard account. You must be a Pingboard admin to set up single sign-on.
+
+2. From the top menu, select **Apps > Integrations**.
 
    ![Configure Single Sign-On](./media/pingboard-tutorial/Pingboard_integration.png)
 
-1. On the **Integrations** page, find the **"Azure Active Directory"** tile, and click it.
+3. On the **Integrations** page, find the **"Azure Active Directory"** tile, and click it.
 
   ![Pingboard Single Sign-On Integration](./media/pingboard-tutorial/Pingboard_aad.png)
 
-1. In the modal that follows click **"Configure"**
+4. In the modal that follows, click **"Configure"**.
 
   ![Pingboard configuration button](./media/pingboard-tutorial/Pingboard_configure.png)
 
-1. On the following page, you notice that "Azure SSO Integration is enabled". Open the downloaded Metadata XML file in a notepad and paste the content in **IDP Metadata**.
+5. On the following page, you notice that "Azure SSO Integration is enabled". Open the downloaded Metadata XML file in a notepad and paste the content in **IDP Metadata**.
 
   ![Pingboard SSO configuration screen](./media/pingboard-tutorial/Pingboard_sso_configure.png)
 
-1. 
The file is validated, and if everything is correct, single sign-on will now be enabled. +6. The file is validated, and if everything is correct, single sign-on will now be enabled. -### Create an Azure AD test user +### Create an Azure AD test user The objective of this section is to create a test user in the Azure portal called Britta Simon. -![Create an Azure AD test user][100] +1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**. + + ![The "Users and groups" and "All users" links](common/users.png) + +2. Select **New user** at the top of the screen. + + ![New user Button](common/new-user.png) -**To create a test user in Azure AD, perform the following steps:** +3. In the User properties, perform the following steps. -1. In the **Azure portal**, on the left navigation pane, click **Azure Active Directory** icon. + ![The User dialog box](common/user-properties.png) - ![The Azure Active Directory button](./media/pingboard-tutorial/create_aaduser_01.png) + a. In the **Name** field enter **BrittaSimon**. + + b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com -1. To display the list of users, go to **Users and groups** and click **All users**. + c. Select **Show password** check box, and then write down the value that's displayed in the Password box. - ![The "Users and groups" and "All users" links](./media/pingboard-tutorial/create_aaduser_02.png) + d. Click **Create**. -1. At the top of the dialog, click **Add** to open the **User** dialog. +### Assign the Azure AD test user - ![Add button](./media/pingboard-tutorial/create_aaduser_03.png) +In this section, you enable Britta Simon to use Azure single sign-on by granting access to Pingboard. -1. On the **User** dialog page, perform the following steps: +1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Pingboard**. - ![The User dialog box](./media/pingboard-tutorial/create_aaduser_04.png) + ![Enterprise applications blade](common/enterprise-applications.png) - a. In the **Name** textbox, type **BrittaSimon**. +2. In the applications list, select **Pingboard**. - b. In the **User name** textbox, type the **email address** of BrittaSimon. + ![The Pingboard link in the Applications list](common/all-applications.png) - c. Select **Show Password** and write down the value of the **Password**. +3. In the menu on the left, select **Users and groups**. - d. Click **Create**. + ![The "Users and groups" link](common/users-groups-blade.png) + +4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog. -### Create a Pingboard test user + ![The Add Assignment pane](common/add-assign-user.png) + +5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen. + +6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen. + +7. In the **Add Assignment** dialog click the **Assign** button. + +### Create Pingboard test user The objective of this section is to create a user called Britta Simon in Pingboard. Pingboard supports automatic user provisioning, which is by default enabled. You can find more details [here](pingboard-provisioning-tutorial.md) on how to configure automatic user provisioning. 
**If you need to create user manually, perform following steps:** -1. Log in to your Pingboard company site as an administrator. +1. Sign in to your Pingboard company site as an administrator. -1. Click **“Add Employee”** button on **Directory** page. +2. Click **“Add Employee”** button on **Directory** page. ![Add Employee](./media/pingboard-tutorial/create_testuser_add.png) -1. On the **“Add Employee”** dialog page, perform the following steps: +3. On the **“Add Employee”** dialog page, perform the following steps: ![Invite People](./media/pingboard-tutorial/create_testuser_name.png) a. In the **Full Name** textbox, type the full name of user like **Britta Simon**. - b. In the **Email** textbox, type the email address of user like **brittasimon\@contoso.com**. + b. In the **Email** textbox, type the email address of user like **brittasimon@contoso.com**. c. In the **Job Title** textbox, type the job title of Britta Simon. @@ -207,66 +231,25 @@ The objective of this section is to create a user called Britta Simon in Pingboa e. Click **Add**. -1. A confirmation screen comes up to confirm the addition of user. +4. A confirmation screen comes up to confirm the addition of user. ![confirm](./media/pingboard-tutorial/create_testuser_confirm.png) > [!NOTE] > The Azure Active Directory account holder receives an email and follows a link to confirm their account before it becomes active. -### Assign the Azure AD test user - -In this section, you enable Britta Simon to use Azure single sign-on by granting access to Pingboard. - -![Assign User][200] - -**To assign Britta Simon to Pingboard, perform the following steps:** - -1. In the Azure portal, open the applications view, and then navigate to the directory view and go to **Enterprise applications** then click **All applications**. - - ![Assign User][201] - -1. In the applications list, select **Pingboard**. - - ![The Pingboard link in the Applications list](./media/pingboard-tutorial/tutorial_pingboard_app.png) - -1. In the menu on the left, click **Users and groups**. - - ![The "Users and groups" link][202] - -1. Click **Add** button. Then select **Users and groups** on **Add Assignment** dialog. - - ![The Add Assignment pane][203] - -1. On **Users and groups** dialog, select **Britta Simon** in the Users list. - -1. Click **Select** button on **Users and groups** dialog. - -1. Click **Assign** button on **Add Assignment** dialog. - -### Test single sign-on +### Test single sign-on In this section, you test your Azure AD single sign-on configuration using the Access Panel. -For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/active-directory-saas-access-panel-introduction.md). - -When you click the Pingboard tile in the Access Panel, you should get automatically signed-on to your Pingboard application. -## Additional resources +When you click the Pingboard tile in the Access Panel, you should be automatically signed in to the Pingboard for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). 
-* [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](tutorial-list.md) -* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -* [Configure User Provisioning](pingboard-provisioning-tutorial.md) +## Additional Resources - +- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) -[1]: ./media/pingboard-tutorial/tutorial_general_01.png -[2]: ./media/pingboard-tutorial/tutorial_general_02.png -[3]: ./media/pingboard-tutorial/tutorial_general_03.png -[4]: ./media/pingboard-tutorial/tutorial_general_04.png +- [What is application access and single sign-on with Azure Active Directory? ](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis) -[100]: ./media/pingboard-tutorial/tutorial_general_100.png +- [What is conditional access in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) -[200]: ./media/pingboard-tutorial/tutorial_general_200.png -[201]: ./media/pingboard-tutorial/tutorial_general_201.png -[202]: ./media/pingboard-tutorial/tutorial_general_202.png -[203]: ./media/pingboard-tutorial/tutorial_general_203.png +- [Configure User Provisioning](https://docs.microsoft.com/azure/active-directory/saas-apps/pingboard-provisioning-tutorial) diff --git a/articles/app-service/containers/quickstart-docker-go.md b/articles/app-service/containers/quickstart-docker-go.md index 1ca1516b23f54..c5a6dd1eeabfe 100644 --- a/articles/app-service/containers/quickstart-docker-go.md +++ b/articles/app-service/containers/quickstart-docker-go.md @@ -4,7 +4,7 @@ description: How to deploy a Docker image running a Go application to Web App fo keywords: azure app service, web app, go, docker, container services: app-service author: msangapu -manager: cfowler +manager: jeconnoc ms.assetid: b97bd4e6-dff0-4976-ac20-d5c109a559a8 ms.service: app-service @@ -26,8 +26,6 @@ ms.custom: seodec18 [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -[!INCLUDE [Configure deployment user](../../../includes/configure-deployment-user.md)] - [!INCLUDE [Create resource group](../../../includes/app-service-web-create-resource-group-linux.md)] [!INCLUDE [Create app service plan](../../../includes/app-service-web-create-app-service-plan-linux.md)] @@ -74,4 +72,4 @@ http://.azurewebsites.net/hello ## Next steps > [!div class="nextstepaction"] -> [Use a custom Docker image](tutorial-custom-docker-image.md) +> [Use a custom Docker image](tutorial-custom-docker-image.md) \ No newline at end of file diff --git a/articles/app-service/containers/tutorial-custom-docker-image.md b/articles/app-service/containers/tutorial-custom-docker-image.md index 0b071ddfcaff6..17963034715f9 100644 --- a/articles/app-service/containers/tutorial-custom-docker-image.md +++ b/articles/app-service/containers/tutorial-custom-docker-image.md @@ -4,8 +4,8 @@ description: How to use a custom Docker image for Web App for Containers. 
keywords: azure app service, web app, linux, docker, container services: app-service documentationcenter: '' -author: SyntaxC4 -manager: SyntaxC4 +author: msangapu +manager: jeconnoc editor: '' ms.assetid: b97bd4e6-dff0-4976-ac20-d5c109a559a8 @@ -15,7 +15,7 @@ ms.tgt_pltfrm: na ms.devlang: na ms.topic: tutorial ms.date: 10/24/2017 -ms.author: cfowler +ms.author: msangapu ms.custom: mvc ms.custom: seodec18 --- @@ -305,7 +305,7 @@ SSH enables secure communication between a container and a client. In order for EXPOSE 8000 2222 ``` -* Make sure to [start the ssh service](https://github.com/Azure-App-Service/node/blob/master/6.9.3/startup/init_container.sh) by using a shell script in the /bin directory. +* Make sure to [start the ssh service](https://github.com/Azure-App-Service/node/blob/master/8.9/startup/init_container.sh#L18) by using a shell script in the /bin directory. ```bash #!/bin/bash @@ -558,4 +558,4 @@ The command reveals output similar to the following JSON string, showing that th ## Next steps > [!div class="nextstepaction"] -> [Build a Docker Python and PostgreSQL web app in Azure](tutorial-python-postgresql-app.md) +> [Build a Docker Python and PostgreSQL web app in Azure](tutorial-python-postgresql-app.md) \ No newline at end of file diff --git a/articles/azure-functions/functions-bindings-service-bus.md b/articles/azure-functions/functions-bindings-service-bus.md index 03f35d14519ee..cf5129cfc1d23 100644 --- a/articles/azure-functions/functions-bindings-service-bus.md +++ b/articles/azure-functions/functions-bindings-service-bus.md @@ -74,7 +74,7 @@ This example is for Azure Functions version 1.x. To make this code work for 2.x: - [omit the access rights parameter](#trigger---configuration) - change the type of the log parameter from `TraceWriter` to `ILogger` - change `log.Info` to `log.LogInformation` - + ### Trigger - C# script example The following example shows a Service Bus trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads [message metadata](#trigger---message-metadata) and logs a Service Bus queue message. @@ -157,7 +157,7 @@ The following Java function uses the `@ServiceBusQueueTrigger` annotation from t ) { context.getLogger().info(message); } - ``` +``` Java functions can also be triggered when a message is added to a Service Bus topic. The following example uses the `@ServiceBusTopicTrigger` annotation to describe the trigger configuration. @@ -174,7 +174,7 @@ Java functions can also be triggered when a message is added to a Service Bus to ) { context.getLogger().info(message); } - ``` +``` ### Trigger - JavaScript example @@ -276,7 +276,7 @@ The following table explains the binding configuration properties that you set i |---------|---------|----------------------| |**type** | n/a | Must be set to "serviceBusTrigger". This property is set automatically when you create the trigger in the Azure portal.| |**direction** | n/a | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. | -|**name** | n/a | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. | +|**name** | n/a | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. | |**queueName**|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic. 
|**topicName**|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**subscriptionName**|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| @@ -337,7 +337,21 @@ See [code examples](#trigger---example) that use these properties earlier in thi The [host.json](functions-host-json.md#servicebus) file contains settings that control Service Bus trigger behavior. -[!INCLUDE [functions-host-json-event-hubs](../../includes/functions-host-json-service-bus.md)] +```json +{ + "serviceBus": { + "maxConcurrentCalls": 16, + "prefetchCount": 100, + "maxAutoRenewDuration": "00:05:00" + } +} +``` + +|Property |Default | Description | +|---------|---------|---------| +|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. | +|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.| +|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.| ## Output @@ -469,7 +483,7 @@ public String pushToQueue( result.setValue(message + " has been sent."); return message; } - ``` +``` In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@QueueOutput` annotation on function parameters whose value would be written to a Service Bus queue. The parameter type should be `OutputBinding`, where T is any native Java type of a POJO. @@ -580,7 +594,7 @@ The following table explains the binding configuration properties that you set i |---------|---------|----------------------| |**type** | n/a | Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.| |**direction** | n/a | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. | -|**name** | n/a | The name of the variable that represents the queue or topic in function code. Set to "$return" to reference the function return value. | +|**name** | n/a | The name of the variable that represents the queue or topic in function code. Set to "$return" to reference the function return value. | |**queueName**|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. |**topicName**|**TopicName**|Name of the topic to monitor. Set only if sending topic messages, not for a queue.| |**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus." If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".

To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.| @@ -639,11 +653,11 @@ This section describes the global configuration settings available for this bind ``` |Property |Default | Description | -|---------|---------|---------| -|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.| -|autoComplete|true|Whether the trigger should immediately mark as complete (autocomplete) or wait for processing to call complete.| -|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. | -|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.| +|---------|---------|---------| +|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.| +|autoComplete|true|Whether the trigger should immediately mark as complete (autocomplete) or wait for processing to call complete.| +|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. | +|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.| ## Next steps diff --git a/articles/azure-functions/functions-bindings-signalr-service.md b/articles/azure-functions/functions-bindings-signalr-service.md index 742dddee7bd4a..b4f63ce2ea6e1 100644 --- a/articles/azure-functions/functions-bindings-signalr-service.md +++ b/articles/azure-functions/functions-bindings-signalr-service.md @@ -9,7 +9,7 @@ editor: '' tags: '' keywords: azure functions, functions, event processing, dynamic compute, serverless architecture -ms.service: functions +ms.service: azure-functions ms.devlang: multiple ms.topic: reference ms.tgt_pltfrm: multiple diff --git a/articles/azure-functions/functions-bindings-storage-blob.md b/articles/azure-functions/functions-bindings-storage-blob.md index e12c0e88b2f77..ff724ff65658b 100644 --- a/articles/azure-functions/functions-bindings-storage-blob.md +++ b/articles/azure-functions/functions-bindings-storage-blob.md @@ -278,7 +278,7 @@ In [C# class libraries](functions-dotnet-class-library.md), use the following at { .... } - ``` + ``` For a complete example, see [Trigger - C# example](#trigger---c-example). @@ -314,8 +314,8 @@ The following table explains the binding configuration properties that you set i |---------|---------|----------------------| |**type** | n/a | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.| |**direction** | n/a | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#trigger---usage) section. | -|**name** | n/a | The name of the variable that represents the blob in function code. 
| -|**path** | **BlobPath** |The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#trigger---blob-name-patterns). | +|**name** | n/a | The name of the variable that represents the blob in function code. | +|**path** | **BlobPath** |The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#trigger---blob-name-patterns). | |**connection** | **Connection** | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set `connection` to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage." If you leave `connection` empty, the Functions runtime uses the default Storage connection string in the app setting that is named `AzureWebJobsStorage`.

The connection string must be for a general-purpose storage account, not a [Blob storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts).| [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] @@ -446,11 +446,9 @@ JavaScript and Java functions load the entire blob into memory, and C# functions ## Trigger - polling -If the blob container being monitored contains more than 10,000 blobs, the Functions runtime scans log files to watch -for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer -after the blob is created. In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) -basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed. If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) - when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md). +If the blob container being monitored contains more than 10,000 blobs (across all containers), the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created. In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed. + +If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md). 
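
To make the queue-based alternative concrete, the following C# sketch shows what such a function might look like. The queue name (`blob-workitems`), the container name (`samples-workitems`), and the assumption that each queue message contains only the blob name are illustrative choices, not values required by the runtime.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBlobFromQueue
{
    // Fires as soon as a message arrives, instead of waiting for blob-container polling.
    // The {queueTrigger} binding expression resolves to the text of the queue message,
    // which in this sketch is assumed to be the blob name.
    [FunctionName("ProcessBlobFromQueue")]
    public static void Run(
        [QueueTrigger("blob-workitems", Connection = "AzureWebJobsStorage")] string blobName,
        [Blob("samples-workitems/{queueTrigger}", FileAccess.Read, Connection = "AzureWebJobsStorage")] Stream blob,
        ILogger log)
    {
        log.LogInformation($"Processing blob '{blobName}' ({blob.Length} bytes) from a queue message.");
    }
}
```

Because the queue message is written only after the blob exists, this pattern avoids the polling delay and best-effort logging described above.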
## Input @@ -479,7 +477,7 @@ public static void Run( { log.LogInformation($"BlobInput processed blob\n Name:{myQueueItem} \n Size: {myBlob.Length} bytes"); } -``` +``` ### Input - C# script example @@ -802,7 +800,7 @@ private static Dictionary imageDimensionsTable = new Dict { ImageSize.Small, (640, 400) }, { ImageSize.Medium, (800, 600) } }; -``` +``` ### Output - C# script example diff --git a/articles/azure-functions/functions-create-first-function-python.md b/articles/azure-functions/functions-create-first-function-python.md index db5c4999b3e06..3e54b546b7452 100644 --- a/articles/azure-functions/functions-create-first-function-python.md +++ b/articles/azure-functions/functions-create-first-function-python.md @@ -7,7 +7,7 @@ author: ggailey777 ms.author: glenga ms.date: 08/29/2018 ms.topic: quickstart -ms.service: functions +ms.service: azure-functions ms.custom: mvc ms.devlang: python manager: jeconnoc diff --git a/articles/azure-functions/functions-host-json-v1.md b/articles/azure-functions/functions-host-json-v1.md index 86ed5ac131c60..d8471722dcc6b 100644 --- a/articles/azure-functions/functions-host-json-v1.md +++ b/articles/azure-functions/functions-host-json-v1.md @@ -239,7 +239,21 @@ Configuration settings for [Storage queue triggers and bindings](functions-bindi Configuration setting for [Service Bus triggers and bindings](functions-bindings-service-bus.md). -[!INCLUDE [functions-host-json-service-bus](../../includes/functions-host-json-service-bus.md)] +```json +{ + "serviceBus": { + "maxConcurrentCalls": 16, + "prefetchCount": 100, + "autoRenewTimeout": "00:05:00" + } +} +``` + +|Property |Default | Description | +|---------|---------|---------| +|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. | +|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.| +|autoRenewTimeout|00:05:00|The maximum duration within which the message lock will be renewed automatically.| ## singleton diff --git a/articles/azure-functions/functions-recover-storage-account.md b/articles/azure-functions/functions-recover-storage-account.md index ea95be08d92b2..460dabfba14ac 100644 --- a/articles/azure-functions/functions-recover-storage-account.md +++ b/articles/azure-functions/functions-recover-storage-account.md @@ -7,7 +7,7 @@ author: alexkarcher-msft manager: cfowler editor: '' -ms.service: functions +ms.service: azure-functions ms.workload: na ms.devlang: na ms.topic: article diff --git a/articles/azure-functions/functions-reference-node.md b/articles/azure-functions/functions-reference-node.md index 3bef612e8b918..d123f830ef78a 100644 --- a/articles/azure-functions/functions-reference-node.md +++ b/articles/azure-functions/functions-reference-node.md @@ -44,7 +44,6 @@ FunctionsProject | - host.json | - package.json | - extensions.csproj - | - bin ``` At the root of the project, there's a shared [host.json](functions-host-json.md) file that can be used to configure the function app. Each function has a folder with its own code file (.js) and binding configuration file (function.json). The name of `function.json`'s parent directory is always the name of your function. 
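
As a minimal sketch, a hypothetical `MyHttpFunction` folder would hold the `index.js` that exports the function plus a `function.json` similar to the following; the binding names, methods, and `authLevel` shown here are illustrative and can differ in your project.

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```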
@@ -613,6 +612,10 @@ When you create a function app that uses the App Service plan, we recommend that When developing Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the fact that when your function app starts for the first time after a period of inactivity, it takes longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use the run from package model by default, but if you're experiencing large cold starts and are not running this way, this change can offer a significant improvement. +### Connection Limits + +When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation. Instead, create a single, static client in the global scope. For more information, see [managing connections in Azure Functions](manage-connections.md). + ## Next steps For more information, see the following resources: diff --git a/articles/azure-functions/functions-reference-python.md b/articles/azure-functions/functions-reference-python.md index b221ae9a1d929..b500717797b46 100644 --- a/articles/azure-functions/functions-reference-python.md +++ b/articles/azure-functions/functions-reference-python.md @@ -6,7 +6,7 @@ documentationcenter: na author: ggailey777 manager: cfowler keywords: azure functions, functions, event processing, dynamic compute, serverless architecture, python -ms.service: functions +ms.service: azure-functions ms.devlang: python ms.topic: article ms.tgt_pltfrm: multiple diff --git a/articles/azure-functions/functions-test-a-function.md b/articles/azure-functions/functions-test-a-function.md index 24a9644719918..c6c6aebb75a51 100644 --- a/articles/azure-functions/functions-test-a-function.md +++ b/articles/azure-functions/functions-test-a-function.md @@ -10,7 +10,7 @@ keywords: azure functions, functions, event processing, webhooks, dynamic comput ms.service: azure-functions ms.devlang: multiple ms.topic: conceptual -ms.date: 12/10/2018 +ms.date: 03/25/2019 ms.author: cshoe --- @@ -40,7 +40,7 @@ To set up your environment, create a Function and test app. The following steps 2. [Create an HTTP function from the template](./functions-create-first-azure-function.md) and name it *HttpTrigger*. 3. [Create a timer function from the template](./functions-create-scheduled-function.md) and name it *TimerTrigger*. 4. [Create an xUnit Test app](https://xunit.github.io/docs/getting-started-dotnet-core) in Visual Studio by clicking **File > New > Project > Visual C# > .NET Core > xUnit Test Project** and name it *Functions.Test*. -5. Use Nuget to add a references from the test app to [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging/) and [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/) +5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/) 6. [Reference the *Functions* app](https://docs.microsoft.com/visualstudio/ide/managing-references-in-a-project?view=vs-2017) from *Functions.Test* app. 
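The connection-limits guidance in the functions-reference-node.md hunk above (reuse one client rather than creating a new one per invocation) applies across languages. As a hedged C# illustration of the same pattern, a single static `HttpClient` can be shared by all invocations; the endpoint URL below is a placeholder.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OutboundCallFunction
{
    // Created once per host instance and reused, instead of per invocation.
    private static readonly HttpClient httpClient = new HttpClient();

    [FunctionName("OutboundCallFunction")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Placeholder URL; the point is that the client instance is shared.
        string body = await httpClient.GetStringAsync("https://example.com/");
        log.LogInformation($"Downstream response length: {body.Length}");
        return new OkObjectResult(body.Length);
    }
}
```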
### Create test classes @@ -51,11 +51,28 @@ Each function takes an instance of [ILogger](https://docs.microsoft.com/dotnet/a The `ListLogger` class is meant to implement the `ILogger` interface and hold in internal list of messages for evaluation during a test. -**Right-click** on the *Functions.Test* application and select **Add > Class**, name it **ListLogger.cs** and enter the following code: +**Right-click** on the *Functions.Test* application and select **Add > Class**, name it **NullScope.cs** and enter the following code: + +```csharp +using System; + +namespace Functions.Tests +{ + public class NullScope : IDisposable + { + public static NullScope Instance { get; } = new NullScope(); + + private NullScope() { } + + public void Dispose() { } + } +} +``` + +Next, **right-click** on the *Functions.Test* application and select **Add > Class**, name it **ListLogger.cs** and enter the following code: ```csharp using Microsoft.Extensions.Logging; -using Microsoft.Extensions.Logging.Abstractions.Internal; using System; using System.Collections.Generic; using System.Text; @@ -90,7 +107,7 @@ namespace Functions.Tests The `ListLogger` class implements the following members as contracted by the `ILogger` interface: -- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the [NullScope](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.abstractions.internal.nullscope) class to allow the test to function. +- **BeginScope**: Scopes add context to your logging. In this case, the test just points to the static instance on the `NullScope` class to allow the test to function. - **IsEnabled**: A default value of `false` is provided. diff --git a/articles/azure-monitor/app/asp-net.md b/articles/azure-monitor/app/asp-net.md index 1198abcf77f73..4be235e9c3d9d 100644 --- a/articles/azure-monitor/app/asp-net.md +++ b/articles/azure-monitor/app/asp-net.md @@ -121,6 +121,10 @@ To upgrade to a [new release of the SDK](https://github.com/Microsoft/Applicatio If you made any customizations to ApplicationInsights.config, save a copy of it before you upgrade. Then, merge your changes into the new version. +## Video + +* External step-by-step video about [configuring Application Insights with a .NET application from scratch](https://www.youtube.com/watch?v=blnGAVgMAfA). + ## Next steps There are alternative topics to look at if you are interested in: @@ -128,10 +132,6 @@ There are alternative topics to look at if you are interested in: * [Instrumenting a web app at runtime](../../azure-monitor/app/monitor-performance-live-website-now.md) * [Azure Cloud Services](../../azure-monitor/app/cloudservices.md) -## Video - -* External step-by-step video about [configuring Application Insights with a .NET application from scratch](https://www.youtube.com/watch?v=blnGAVgMAfA). - ### More telemetry * **[Browser and page load data](../../azure-monitor/app/javascript.md)** - Insert a code snippet in your web pages. diff --git a/articles/azure-monitor/learn/quick-monitor-portal.md b/articles/azure-monitor/learn/quick-monitor-portal.md index e4f36deed6e1f..d2816fe52a469 100644 --- a/articles/azure-monitor/learn/quick-monitor-portal.md +++ b/articles/azure-monitor/learn/quick-monitor-portal.md @@ -103,6 +103,10 @@ window.appInsights=appInsights,appInsights.queue&&0===appInsights.queue.length&& To learn more, visit the GitHub repository for our [open-source JavaScript SDK](https://github.com/Microsoft/ApplicationInsights-JS). 
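For reference, a minimal `ListLogger` consistent with the members described above (pointing `BeginScope` at the `NullScope` instance from this patch, returning `false` from `IsEnabled`, and collecting rendered messages) might look like the following sketch; the article's full implementation may differ in detail.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

namespace Functions.Tests
{
    // Sketch only: captures log output in a list so tests can assert on it.
    public class ListLogger : ILogger
    {
        public IList<string> Logs { get; } = new List<string>();

        public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;

        public bool IsEnabled(LogLevel logLevel) => false;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            // Store the rendered message for later evaluation in a test.
            Logs.Add(formatter(state, exception));
        }
    }
}
```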
+## Video + +* External step-by-step video about [configuring Application Insights with a .NET application from scratch](https://www.youtube.com/watch?v=blnGAVgMAfA). + ## Next steps In this quick start, you’ve enabled your application for monitoring by Azure Application Insights. Continue to the tutorials to learn how to use it to monitor statistics and detect issues in your application. diff --git a/articles/azure-resource-manager/move-support-resources.md b/articles/azure-resource-manager/move-support-resources.md index 38400e92cdcfa..4ee14e6bb59ad 100644 --- a/articles/azure-resource-manager/move-support-resources.md +++ b/articles/azure-resource-manager/move-support-resources.md @@ -4,7 +4,7 @@ description: Lists the Azure resource types that can be moved to a new resource author: tfitzmac ms.service: azure-resource-manager ms.topic: reference -ms.date: 2/13/2019 +ms.date: 03/22/2019 ms.author: tomfitz --- @@ -417,7 +417,7 @@ To get the same data as a file of comma-separated values, download [move-support ## Microsoft.LabServices | Resource type | Resource group | Subscription | | ------------- | ----------- | ---------- | -| labaccounts | Yes | Yes | +| labaccounts | No | No | ## Microsoft.LocationBasedServices | Resource type | Resource group | Subscription | diff --git a/articles/azure-stack/user/azure-stack-policy-module.md b/articles/azure-stack/user/azure-stack-policy-module.md index cc6fd1df3f73d..84864ca776956 100644 --- a/articles/azure-stack/user/azure-stack-policy-module.md +++ b/articles/azure-stack/user/azure-stack-policy-module.md @@ -13,16 +13,17 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 11/29/2018 +ms.date: 03/26/2019 ms.author: sethm -ms.lastreviewed: 11/29/2018 +ms.lastreviewed: 03/26/2019 --- + # Manage Azure policy using the Azure Stack Policy Module *Applies to: Azure Stack integrated systems and Azure Stack Development Kit* -The Azure Stack Policy module allows you to configure an Azure subscription with the same versioning and service availability as Azure Stack. The module uses the [New-AzureRmPolicyDefinition](/powershell/module/azurerm.resources/new-azurermpolicydefinition) cmdlet to create an Azure policy, which limits the resource types and services available in a subscription. You then create a policy assignment within the appropriate scope by using the [New-AzureRmPolicyAssignment](/powershell/module/azurerm.resources/new-azurermpolicyassignment) cmdlet. After configuring the policy, you can use your Azure subscription to develop apps targeted for Azure Stack. +The Azure Stack Policy module enables you to configure an Azure subscription with the same versioning and service availability as Azure Stack. The module uses the [New-AzureRmPolicyDefinition](/powershell/module/azurerm.resources/new-azurermpolicydefinition) PowerShell cmdlet to create an Azure policy, which limits the resource types and services available in a subscription. You then create a policy assignment within the appropriate scope by using the [New-AzureRmPolicyAssignment](/powershell/module/azurerm.resources/new-azurermpolicyassignment) cmdlet. After configuring the policy, you can use your Azure subscription to develop apps targeted for Azure Stack. ## Install the module @@ -31,31 +32,30 @@ The Azure Stack Policy module allows you to configure an Azure subscription with 3. [Configure PowerShell for use with Azure Stack](azure-stack-powershell-configure-user.md). 4. 
Import the AzureStack.Policy.psm1 module: - ```PowerShell - Import-Module .\Policy\AzureStack.Policy.psm1 - ``` + ```PowerShell + Import-Module .\Policy\AzureStack.Policy.psm1 + ``` ## Apply policy to Azure subscription -You can use the following command to apply a default Azure Stack policy against your Azure subscription. Before running this command, replace `Azure Subscription Name` with the name of your Azure subscription. +You can use the following command to apply a default Azure Stack policy against your Azure subscription. Before running this command, replace `Azure subscription name` with the name of your Azure subscription: ```PowerShell Add-AzureRmAccount -$s = Select-AzureRmSubscription -SubscriptionName "Azure Subscription Name" +$s = Select-AzureRmSubscription -SubscriptionName "Azure subscription name" $policy = New-AzureRmPolicyDefinition -Name AzureStackPolicyDefinition -Policy (Get-AzsPolicy) $subscriptionID = $s.Subscription.SubscriptionId New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$subscriptionID - ``` ## Apply policy to a resource group -You may want to apply policies that are more granular. As an example, you might have other resources running in the same subscription. You can scope the policy application to a specific resource group, which lets you test your apps for Azure Stack using Azure resources. Before running the following command, replace `Azure Subscription Name` with the name of your Azure subscription. +You might want to apply policies that are more granular. As an example, you might have other resources running in the same subscription. You can scope the policy application to a specific resource group, which enables you to test your apps for Azure Stack using Azure resources. Before running the following command, replace `Azure subscription name` with the name of your Azure subscription: ```PowerShell Add-AzureRmAccount $rgName = 'myRG01' -$s = Select-AzureRmSubscription -SubscriptionName "Azure Subscription Name" +$s = Select-AzureRmSubscription -SubscriptionName "Azure subscription name" $policy = New-AzureRmPolicyDefinition -Name AzureStackPolicyDefinition -Policy (Get-AzsPolicy) $subscriptionID = $s.Subscription.SubscriptionId New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$subscriptionID/resourceGroups/$rgName @@ -63,7 +63,7 @@ New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /s ## Policy in action -Once you've deployed the Azure policy, you receive an error when you try to deploy a resource that is prohibited by policy. +Once you've deployed the Azure policy, you receive an error when you try to deploy a resource that is prohibited by policy: ![Result of resource deployment failure because of policy constraint](./media/azure-stack-policy-module/image1.png) diff --git a/articles/backup/backup-azure-delete-vault.md b/articles/backup/backup-azure-delete-vault.md index afb26fc35ec6c..f42c0645297f7 100644 --- a/articles/backup/backup-azure-delete-vault.md +++ b/articles/backup/backup-azure-delete-vault.md @@ -25,7 +25,7 @@ Before you start, it's important to understand that you can't delete a Recovery - If you don't want to retain any data in the Recovery Services vault, and want to delete the vault, you can delete the vault by force. - If you try to delete a vault, but can't, the vault is still configured to receive backup data. 
-To learn how to delete a vault, see the section, [Delete a vault from Azure portal](backup-azure-delete-vault.md#delete-a-vault-from-azure-portal). If section, [Delete the vault by force](backup-azure-delete-vault.md#delete-the-recovery-services-vault-by-force). If you aren't sure what's in the vault, and you need to make sure that you can delete the vault, see the section, [Remove vault dependencies and delete vault](backup-azure-delete-vault.md#remove-vault-dependencies-and-delete-vault). +To learn how to delete a vault, see the section, [Delete a vault from Azure portal](#delete-a-vault-from-the-azure-portal). To delete the vault by force, see the section, [Delete the vault by force](backup-azure-delete-vault.md#delete-the-recovery-services-vault-by-force). If you aren't sure what's in the vault, and you need to make sure that you can delete the vault, see the section, [Remove vault dependencies and delete vault](backup-azure-delete-vault.md#remove-vault-dependencies-and-delete-vault). ## Delete a vault from the Azure portal diff --git a/articles/cdn/cdn-dynamic-site-acceleration.md b/articles/cdn/cdn-dynamic-site-acceleration.md index 5bf533d46e246..243de905930a1 100644 --- a/articles/cdn/cdn-dynamic-site-acceleration.md +++ b/articles/cdn/cdn-dynamic-site-acceleration.md @@ -13,7 +13,7 @@ ms.workload: tbd ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/01/2018 +ms.date: 03/25/2019 ms.author: magattus --- # Dynamic site acceleration via Azure CDN @@ -22,7 +22,7 @@ With the explosion of social media, electronic commerce, and the hyper-personali Standard content delivery network (CDN) capability includes the ability to cache files closer to end users to speed up delivery of static files. However, with dynamic web applications, caching that content in edge locations isn't possible because the server generates the content in response to user behavior. Speeding up the delivery of such content is more complex than traditional edge caching and requires an end-to-end solution that finely tunes each element along the entire data path from inception to delivery. With Azure CDN dynamic site acceleration (DSA) optimization, the performance of web pages with dynamic content is measurably improved. -**Azure CDN from Akamai** and **Azure CDN from Verizon** both offer DSA optimization through the **Optimized for** menu during endpoint creation. +**Azure CDN from Akamai** and **Azure CDN from Verizon** both offer DSA optimization through the **Optimized for** menu during endpoint creation. Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](https://docs.microsoft.com/azure/frontdoor/front-door-overview). > [!Important] > For **Azure CDN from Akamai** profiles, you are allowed to change the optimization of a CDN endpoint after it has been created. diff --git a/articles/cdn/cdn-features.md b/articles/cdn/cdn-features.md index 0d9fb829271ff..b2ebd88ee94dc 100644 --- a/articles/cdn/cdn-features.md +++ b/articles/cdn/cdn-features.md @@ -13,7 +13,7 @@ ms.workload: tbd ms.tgt_pltfrm: na ms.devlang: na ms.topic: overview -ms.date: 02/28/2019 +ms.date: 03/25/2019 ms.author: magattus ms.custom: mvc @@ -28,7 +28,7 @@ The following table compares the features available with each product. 
| **Performance features and optimizations** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** | | --- | --- | --- | --- | --- | -| [Dynamic site acceleration](https://docs.microsoft.com/azure/cdn/cdn-dynamic-site-acceleration) | | **✓** | **✓** | **✓** | +| [Dynamic site acceleration](https://docs.microsoft.com/azure/cdn/cdn-dynamic-site-acceleration) | Offered via [Azure Front Door Service](https://docs.microsoft.com/azure/frontdoor/front-door-overview) | **✓** | **✓** | **✓** | |      [Dynamic site acceleration - adaptive image compression](https://docs.microsoft.com/azure/cdn/cdn-dynamic-site-acceleration#adaptive-image-compression-azure-cdn-from-akamai-only) | | **✓** | | | |      [Dynamic site acceleration - object prefetch](https://docs.microsoft.com/azure/cdn/cdn-dynamic-site-acceleration#object-prefetch-azure-cdn-from-akamai-only) | | **✓** | | | | [General web delivery optimization](https://docs.microsoft.com/azure/cdn/cdn-optimization-overview#general-web-delivery) | **✓** | **✓**, Select this optimization type if your average file size is smaller than 10 MB | **✓** | **✓** | diff --git a/articles/cdn/cdn-optimization-overview.md b/articles/cdn/cdn-optimization-overview.md index eda1dc1e87d45..cacdb3a86627e 100644 --- a/articles/cdn/cdn-optimization-overview.md +++ b/articles/cdn/cdn-optimization-overview.md @@ -13,7 +13,7 @@ ms.workload: tbd ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 06/13/2018 +ms.date: 03/25/2019 ms.author: magattus --- # Optimize Azure CDN for the type of content delivery @@ -33,6 +33,8 @@ This article provides an overview of various optimization features and when you * [General web delivery](#general-web-delivery). This optimization is also used for media streaming and large file download. +> [!NOTE] +> Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](https://docs.microsoft.com/azure/frontdoor/front-door-overview). **Azure CDN Standard from Verizon** and **Azure CDN Premium from Verizon** profiles support the following optimizations: @@ -116,6 +118,9 @@ For more information about large file optimization, see [Large file optimization Dynamic site acceleration (DSA) is available for **Azure CDN Standard from Akamai**, **Azure CDN Standard from Verizon**, and **Azure CDN Premium from Verizon** profiles. This optimization involves an additional fee to use; for more information, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/). +> [!NOTE] +> Dynamic site acceleration from Microsoft is offered via [Azure Front Door Service](https://docs.microsoft.com/azure/frontdoor/front-door-overview) which is a global [anycast](https://en.wikipedia.org/wiki/Anycast) service leveraging Microsoft's private global network to deliver your app workloads. + DSA includes various techniques that benefit the latency and performance of dynamic content. Techniques include route and network optimization, TCP optimization, and more. You can use this optimization to accelerate a web app that includes numerous responses that aren't cacheable. Examples are search results, checkout transactions, or real-time data. You can continue to use core Azure CDN caching capabilities for static data. 
diff --git a/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md b/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md index 04f87e4952af2..20471711f5957 100644 --- a/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md +++ b/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md @@ -8,7 +8,7 @@ manager: nitinme ms.service: cognitive-services ms.subservice: computer-vision ms.topic: article -ms.date: 3/19/2019 +ms.date: 3/22/2019 ms.author: diberry ms.custom: seodec18 --- @@ -38,7 +38,7 @@ You must meet the following prerequisites before using Recognize Text containers ### The host computer -[!INCLUDE [Request access to private preview](../../../includes/cognitive-services-containers-host-computer.md)] +[!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)] ### Container requirements and recommendations diff --git a/articles/cognitive-services/Face/face-how-to-install-containers.md b/articles/cognitive-services/Face/face-how-to-install-containers.md index 51567d9184665..efc41a7a08ffa 100644 --- a/articles/cognitive-services/Face/face-how-to-install-containers.md +++ b/articles/cognitive-services/Face/face-how-to-install-containers.md @@ -9,7 +9,7 @@ ms.custom: seodec18 ms.service: cognitive-services ms.subservice: face-api ms.topic: article -ms.date: 03/19/2019 +ms.date: 03/22/2019 ms.author: diberry --- @@ -36,7 +36,7 @@ You must meet the following prerequisites before using Face API containers: ### The host computer -[!INCLUDE [Request access to private preview](../../../includes/cognitive-services-containers-host-computer.md)] +[!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)] ### Container requirements and recommendations diff --git a/articles/cognitive-services/LUIS/luis-container-howto.md b/articles/cognitive-services/LUIS/luis-container-howto.md index 131cba948d7c0..b4dd59921fb1d 100644 --- a/articles/cognitive-services/LUIS/luis-container-howto.md +++ b/articles/cognitive-services/LUIS/luis-container-howto.md @@ -9,7 +9,7 @@ ms.custom: seodec18 ms.service: cognitive-services ms.subservice: language-understanding ms.topic: article -ms.date: 03/19/2019 +ms.date: 03/22/2019 ms.author: diberry --- @@ -35,7 +35,7 @@ In order to run the LUIS container, you must have the following: ### The host computer -[!INCLUDE [Request access to private preview](../../../includes/cognitive-services-containers-host-computer.md)] +[!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)] ### Container requirements and recommendations diff --git a/articles/cognitive-services/LUIS/luis-reference-regions.md b/articles/cognitive-services/LUIS/luis-reference-regions.md index 2a1fb1e9be0a1..7c6f9a4fac616 100644 --- a/articles/cognitive-services/LUIS/luis-reference-regions.md +++ b/articles/cognitive-services/LUIS/luis-reference-regions.md @@ -1,7 +1,7 @@ --- title: Publishing regions & endpoints titleSuffix: Azure Cognitive Services -description: The region in which you publish your LUIS app corresponds to the region or location you specify in the Azure portal when you create an Azure LUIS endpoint key. When you publish an app, LUIS automatically generates an endpoint URL for the region associated with the key. +description: 3 authoring regions and their portals support all the many publishing regions. 
The region in which you publish your LUIS app corresponds to the region or location you specify in the Azure portal when you create an Azure LUIS endpoint key. When you publish an app, LUIS automatically generates an endpoint URL for the region associated with the key. services: cognitive-services author: diberry manager: nitinme @@ -9,21 +9,27 @@ ms.custom: seodec18 ms.service: cognitive-services ms.subservice: language-understanding ms.topic: article -ms.date: 03/07/2019 +ms.date: 03/25/2019 ms.author: diberry --- # Authoring and publishing regions and the associated keys -The region in which you publish your LUIS app corresponds to the region or location you specify in the Azure portal when you create an Azure LUIS endpoint key. When you [publish an app](./luis-how-to-publish-app.md), LUIS automatically generates an endpoint URL for the region associated with the key. To publish a LUIS app to more than one region, you need at least one key per region. +Three authoring regions and their portals support all the many publishing regions. The region in which you publish your LUIS app corresponds to the region or location you specify in the Azure portal when you create an Azure LUIS endpoint key. When you [publish an app](./luis-how-to-publish-app.md), LUIS automatically generates an endpoint URL for the region associated with the key. To publish a LUIS app to more than one region, you need at least one key per region. -## LUIS website + + +## LUIS Authoring regions There are three LUIS websites, based on region. You must author and publish in the same region. -|LUIS|Region| -|--|--| -|[www.luis.ai][www.luis.ai]|U.S.
not Europe
not Australia| -|[au.luis.ai][au.luis.ai]|Australia| -|[eu.luis.ai][eu.luis.ai]|Europe| +|LUIS|Global region|Authoring region in Azure| +|--|--|--| +|[www.luis.ai][www.luis.ai]|U.S.
not Europe
not Australia| `westus`| +|[au.luis.ai][au.luis.ai]|Australia| `australiaeast`| +|[eu.luis.ai][eu.luis.ai]|Europe|`westeurope`| + +You can use the authoring region for interacting with deployed LUIS service in a different Azure publishing region. + +Authoring regions have [paired fail-over regions](https://docs.microsoft.com/azure/best-practices-availability-paired-regions). ## Regions and Azure resources The app is published to all regions associated with the LUIS resources added in the LUIS portal. For example, for an app created on [www.luis.ai][www.luis.ai], if you create a LUIS resource in **westus** and add it to the app as a resource, the app is published in that region. diff --git a/articles/cognitive-services/LUIS/toc.yml b/articles/cognitive-services/LUIS/toc.yml index 368b4150fc440..1cffe8c0e248f 100644 --- a/articles/cognitive-services/LUIS/toc.yml +++ b/articles/cognitive-services/LUIS/toc.yml @@ -263,9 +263,9 @@ items: - name: User privacy href: luis-user-privacy.md - - name: Regions + - name: Authoring and publishing regions href: luis-reference-regions.md - displayName: failover, fail-over, fail over, Europe, EU, Austrailia, business continuity, bcdr + displayName: failover, fail-over, fail over, Europe, EU, Australia, business continuity, bcdr - name: Boundaries href: luis-boundaries.md - name: Prebuilt entity reference diff --git a/articles/cognitive-services/QnAMaker/How-To/create-knowledge-base.md b/articles/cognitive-services/QnAMaker/How-To/create-knowledge-base.md index 508ea5e74e13b..9053e7d7c6065 100644 --- a/articles/cognitive-services/QnAMaker/How-To/create-knowledge-base.md +++ b/articles/cognitive-services/QnAMaker/How-To/create-knowledge-base.md @@ -8,7 +8,7 @@ manager: nitinme ms.service: cognitive-services ms.subservice: qna-maker ms.topic: article -ms.date: 03/11/2019 +ms.date: 03/25/2019 ms.author: tulasim ms.custom: seodec18 --- @@ -65,5 +65,7 @@ When you are done with the knowledge base, remove it in the QnA Maker portal. ## Next steps +For cost savings measures, you can [share](upgrade-qnamaker-service.md?#share-existing-services-with-qna-maker) some but not all Azure resources created for QnA Maker. + > [!div class="nextstepaction"] > [Add chit-chat personal](./chit-chat-knowledge-base.md) diff --git a/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md b/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md index f0970ea44860e..0bf0c67818909 100644 --- a/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md +++ b/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md @@ -8,7 +8,7 @@ manager: nitinme ms.service: cognitive-services ms.subservice: qna-maker ms.topic: article -ms.date: 01/14/2019 +ms.date: 03/25/2019 ms.author: tulasim ms.custom: seodec18 --- @@ -16,26 +16,28 @@ ms.custom: seodec18 Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service. -This setup deploys a few Azure resources. Together, these resources manage the knowledge base content and provide question-answering capabilities though an endpoint. +## Create a new service -1. Log in to the [Azure portal](). +This procedure deploys a few Azure resources. Together, these resources manage the knowledge base content and provide question-answering capabilities though an endpoint. -2. 
Click on **Add new resource**, and type "qna maker" in search, and select the QnA Maker resource +1. Sign in to the [Azure portal](). + +1. Select **Add new resource**, and type "qna maker" in search, and select the QnA Maker resource ![Create a new QnA Maker service - Add new resource](../media/qnamaker-how-to-setup-service/create-new-resource.png) -3. Click on **Create** after reading the terms and conditions. +1. Select **Create** after reading the terms and conditions. ![Create a new QnA Maker service](../media/qnamaker-how-to-setup-service/create-new-resource-button.png) -4. In **QnA Maker**, select the appropriate tiers and regions. +1. In **QnA Maker**, select the appropriate tiers and regions. ![Create a new QnA Maker service - pricing tier and regions](../media/qnamaker-how-to-setup-service/enter-qnamaker-info.png) * Fill the **Name** with a unique name to identify this QnA Maker service. This name also identifies the QnA Maker endpoint to which your knowledge bases will be associated. * Choose the **Subscription** in which the QnA Maker resource will be deployed. * Select the **Management pricing tier** for the QnA Maker management services (portal and management APIs). See [here](https://aka.ms/qnamaker-pricing) for details on the pricing of the SKUs. - * Create a new **Resource Group** (recommended) or use an existing one in which to deploy this QnA Maker resource. + * Create a new **Resource Group** (recommended) or use an existing one in which to deploy this QnA Maker resource. QnA Maker creates several Azure resources; when you create a resource group to hold these resources, you can easily find, manage, and delete these resources by the resource group name. * Choose the **Search pricing tier** of the Azure Search service. If you see the Free tier option greyed out, it means you already have a Free Azure Search tier deployed in your subscription. In that case, you will need to start with the Basic Azure Search tier. See details of Azure search pricing [here](https://azure.microsoft.com/pricing/details/search/). * Choose the **Search Location** where you want Azure Search data to be deployed. Restrictions in where customer data must be stored will inform the location you choose for Azure Search. * Give a name to your App service in **App name**. @@ -47,10 +49,11 @@ This setup deploys a few Azure resources. Together, these resources manage the k * Choose whether you want to enable **Application Insights** or not. If **Application Insights** is enabled, QnA Maker collects telemetry on traffic, chat logs, and errors. * Choose the **App insights location** where Application Insights resource will be deployed. + * For cost savings measures, you can [share](upgrade-qnamaker-service.md?#share-existing-services-with-qna-maker) some but not all Azure resources created for QnA Maker. -5. Once all the fields are validated, you can click on **Create** to start deployment of these services in your subscription. It will take a few minutes to complete. +1. Once all the fields are validated, you can select **Create** to start deployment of these services in your subscription. It will take a few minutes to complete. -6. Once the deployment is done, you will see the following resources created in your subscription. +1. Once the deployment is done, you will see the following resources created in your subscription. 
![Resource created a new QnA Maker service](../media/qnamaker-how-to-setup-service/resources-created.png) diff --git a/articles/cognitive-services/QnAMaker/How-To/upgrade-qnamaker-service.md b/articles/cognitive-services/QnAMaker/How-To/upgrade-qnamaker-service.md index 460153b3794e4..745760d5a52a0 100644 --- a/articles/cognitive-services/QnAMaker/How-To/upgrade-qnamaker-service.md +++ b/articles/cognitive-services/QnAMaker/How-To/upgrade-qnamaker-service.md @@ -1,20 +1,34 @@ --- title: Upgrade your QnA Maker service - QnA Maker titleSuffix: Azure Cognitive Services -description: You can choose to upgrade individual components of the QnA Maker stack after the initial creation. +description: Share or upgrade your QnA Maker services in order to manage the resources better. services: cognitive-services author: tulasim88 manager: nitinme ms.service: cognitive-services ms.subservice: qna-maker ms.topic: article -ms.date: 01/24/2019 +ms.date: 03/25/2019 ms.author: tulasim --- -# Upgrade your QnA Maker service +# Share or upgrade your QnA Maker service +Share or upgrade your QnA Maker services in order to manage the resources better. + You can choose to upgrade individual components of the QnA Maker stack after the initial creation. See the details of the dependent components and SKU selection [here](https://aka.ms/qnamaker-docs-capacity). +## Share existing services with QnA Maker + +QnA Maker creates several Azure resources. In order to reduce management and benefit from cost sharing, use the following table to understand what you can and can't share: + +|Service|Share| +|--|--| +|Cognitive Services|X| +|App service plan|✔| +|App service|X| +|Application Insights|✔| +|Search service|✔| + ## Upgrade QnA Maker Management SKU When you need to have more questions and answers in your knowledge base, beyond your current tier, upgrade your QnA Maker service pricing tier. diff --git a/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md b/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md index 36c22d1d14cf7..8995130d744a0 100644 --- a/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md +++ b/articles/cognitive-services/Translator/custom-translator/how-to-train-model.md @@ -49,6 +49,9 @@ To train a model: ![Train model page](media/how-to/how-to-train-model-3.png) +>[!Note] +>Custom Translator supports 10 concurrent trainings within a workspace at any point in time. + ## Edit a model diff --git a/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md b/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md index 3f5cf0703de13..09ce08d7b5b42 100644 --- a/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md +++ b/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md @@ -81,6 +81,9 @@ To request a deployment: 5. You can view the status of your model in the “Status” column. +>[!Note] +>Custom Translator supports 10 deployed models within a workspace at any point in time. 
+ ## Update deployment settings To update deployment settings: diff --git a/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md b/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md index 8fcfdbf49a54d..cbba763c232a3 100644 --- a/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md +++ b/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md @@ -9,7 +9,7 @@ ms.custom: seodec18 ms.service: cognitive-services ms.subservice: text-analytics ms.topic: article -ms.date: 03/19/2019 +ms.date: 03/22/2019 ms.author: diberry --- @@ -35,7 +35,7 @@ You must meet the following prerequisites before using Text Analytics containers ### The host computer -[!INCLUDE [Request access to private preview](../../../../includes/cognitive-services-containers-host-computer.md)] +[!INCLUDE [Host Computer requirements](../../../../includes/cognitive-services-containers-host-computer.md)] ### Container requirements and recommendations diff --git a/articles/data-factory/data-flow-sink.md b/articles/data-factory/data-flow-sink.md index c9b618912cdc1..f8e6fe3f263dc 100644 --- a/articles/data-factory/data-flow-sink.md +++ b/articles/data-factory/data-flow-sink.md @@ -12,7 +12,7 @@ ms.date: 02/03/2019 [!INCLUDE [notes](../../includes/data-factory-data-flow-preview.md)] -![Sink options](media/data-flow/windows1.png "sink 1") +![Sink options](media/data-flow/sink1.png "sink 1") At the completion of your data flow transformation, you can sink your transformed data into a destination dataset. In the Sink transformation, you can choose the dataset definition that you wish to use for the destination output data. You may have as many Sink transformation as your data flow requires. diff --git a/articles/databox-online/data-box-edge-overview.md b/articles/databox-online/data-box-edge-overview.md index 46791b5918237..8c4722b25eaa2 100644 --- a/articles/databox-online/data-box-edge-overview.md +++ b/articles/databox-online/data-box-edge-overview.md @@ -7,7 +7,7 @@ author: alkohli ms.service: databox ms.subservice: edge ms.topic: overview -ms.date: 03/22/2019 +ms.date: 03/25/2019 ms.author: alkohli #Customer intent: As an IT admin, I need to understand what Data Box Edge is and how it works so I can use it to process and transform data before sending to Azure. --- @@ -61,22 +61,6 @@ Data Box Edge has the following capabilities: |Resiliency | Built-in network resiliency. | -## Features and specifications - -The Data Box Edge physical device has the following features: - -| Features/specifications | Description | -|---------------------------------------------------------|--------------------------| -| Dimensions | Width: 17.25” Depth: 27.25” Height: 1.75”
(excludes ears and PSU handles) | -| Rack space|1U when placed in the rack| -| Cables| 2 X Power cable
2 X 1 Gbps RJ45 cables
2 X 10 Gbps SFP+ copper cables| -| Components|2 built-in Power Supply Units (PSUs)| -| CPU|2 Intel Xeon processors with 10 cores each | -| Memory| 64 GB RAM| -| Disks| 8 NVMe SSDs, each disk is 1.6 TB
The system fails if one NVMe SSD fails. | -| Local storage capacity| 12.8 TB total capacity| -| Network interfaces| 2 X 1 GbE interfaces – 1 management, not user configurable, used for initial setup. The other interface is user configurable, can be used for data transfer, and is DHCP by default.
2 X 25 GbE interfaces – These can also operate as 10 GbE interfaces. These data interfaces can be configured by user as DHCP (default) or static.
2 X 25 GbE interfaces - These data interfaces can be configured by user as DHCP (default) or static.| - ## Components The Data Box Edge solution comprises of Data Box Edge resource, Data Box Edge physical device, and a local web UI. @@ -101,31 +85,17 @@ The Data Box Edge solution comprises of Data Box Edge resource, Data Box Edge ph Data Box Edge physical device, Azure resource, and target storage account to which you transfer data do not all have to be in the same region. - **Resource availability** - For this release, the Data Box Edge resource is available in the following regions: - - **United States** - West US2 and East US + - **United States** - East US - **European Union** - West Europe - **Asia Pacific** - SE Asia - + + Data Box Gateway can also be deployed in the Azure Government Cloud. For more information, see [What is Azure Government?](https://docs.microsoft.com/azure/azure-government/documentation-government-welcome). + - **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. The regions where the storage accounts store Data Box data should be located close to where the device is located for optimum performance. A storage account located far from the device results in long latencies and slower performance. -## Sign up - -Data Box Edge is in preview and you need to sign up. Perform the following steps to sign up for Data Box Gateway: - -1. Sign into the Azure portal at: [https://aka.ms/databox-edge](https://aka.ms/databox-edge). - -2. Pick the subscription that you want to use for Data Box Edge preview. Select the region where you want to deploy the Data Box Edge resource. In the Data Box Edge option, click **Sign up**. - - ![The Data Box Edge sign up 3](media/data-box-edge-overview/data-box-edge-sign-up3.png) - -3. Answer the questions regarding data residence country, time-frame, target Azure service for data transfer, network bandwidth, and data transfer frequency. Review **Privacy and terms** and select the checkbox against **Microsoft can use your email address to contact you**. - - ![The Data Box Edge sign up 4](media/data-box-edge-overview/data-box-edge-sign-up4.png) - -4. Once you are signed up and enabled for preview, you can order a Data Box Edge. - ## Next steps - Review the [Data Box Edge system requirements](https://aka.ms/dbe-docs). diff --git a/articles/databox-online/index.yml b/articles/databox-online/index.yml index ddb6d3b1881a7..88eed8bfd091b 100644 --- a/articles/databox-online/index.yml +++ b/articles/databox-online/index.yml @@ -10,10 +10,10 @@ metadata: ms.service: databox ms.subservice: edge ms.topic: landing-page - ms.date: 03/22/2019 + ms.date: 03/25/2019 ms.author: alkohli abstract: - description: The Azure Data Box family lets you transfer hundreds of terabytes of data to Azure in a quick, inexpensive, and reliable manner. Use the Data Box devices for over the network high performance data transfers.

Choose Data Box GatewayPreview to send data for cloud archival, disaster recovery, or to process data at cloud scale. Use Data Box EdgePreview to filter, analyze, and transform your data as it moves to Azure.

Learn how to leverage Data Box Gateway and Data Box Edge for network-based transfer with our tutorials.

+ description: The Azure Data Box family lets you transfer hundreds of terabytes of data to Azure in a quick, inexpensive, and reliable manner. Use the Data Box devices for over the network high performance data transfers.

Choose Data Box Gateway to send data for cloud archival, disaster recovery, or to process data at cloud scale. Use Data Box Edge to filter, analyze, and transform your data as it moves to Azure.

Learn how to leverage Data Box Gateway and Data Box Edge for network-based transfer with our tutorials.

sections: - title: Tutorials items: diff --git a/articles/dev-spaces/how-dev-spaces-works.md b/articles/dev-spaces/how-dev-spaces-works.md index 21b6404479c3d..87f617d1f170c 100644 --- a/articles/dev-spaces/how-dev-spaces-works.md +++ b/articles/dev-spaces/how-dev-spaces-works.md @@ -163,6 +163,8 @@ install: kubernetes.io/ingress.class: traefik-azds hosts: # This expands to [space.s.][rootSpace.]webfrontend...azds.io + # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens + # For more information see https://aka.ms/devspaces/routing - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix) configurations: develop: @@ -311,6 +313,8 @@ install: kubernetes.io/ingress.class: traefik-azds hosts: # This expands to [space.s.][rootSpace.]webfrontend...azds.io + # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens + # For more information see https://aka.ms/devspaces/routing - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix) ... ``` diff --git a/articles/dev-spaces/team-development-java.md b/articles/dev-spaces/team-development-java.md index 5bf84b97c3ae4..21bde511f0f76 100644 --- a/articles/dev-spaces/team-development-java.md +++ b/articles/dev-spaces/team-development-java.md @@ -15,7 +15,7 @@ manager: "mmontwil" [!INCLUDE [](../../includes/devspaces-team-development-1.md)] ### Make a code change -Go to the VS Code window for `mywebapi` and make a code edit to the `String index()` method, for example: +Go to the VS Code window for `mywebapi` and make a code edit to the `String index()` method in `src/main/java/com/ms/sample/mywebapi/Application.java`, for example: ```java @RequestMapping(value = "/", produces = "text/plain") diff --git a/articles/dev-spaces/team-development-netcore.md b/articles/dev-spaces/team-development-netcore.md index e605f29ada03d..68ef69d266a38 100644 --- a/articles/dev-spaces/team-development-netcore.md +++ b/articles/dev-spaces/team-development-netcore.md @@ -14,7 +14,7 @@ keywords: "Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers, [!INCLUDE [](../../includes/devspaces-team-development-1.md)] ### Make a code change -Go to the VS Code window for `mywebapi` and make a code edit to the `string Get(int id)` method, for example: +Go to the VS Code window for `mywebapi` and make a code edit to the `string Get(int id)` method in `Controllers/ValuesController.cs`, for example: ```csharp [HttpGet("{id}")] diff --git a/articles/dev-spaces/team-development-nodejs.md b/articles/dev-spaces/team-development-nodejs.md index 67c880a66e116..25df916527454 100644 --- a/articles/dev-spaces/team-development-nodejs.md +++ b/articles/dev-spaces/team-development-nodejs.md @@ -14,7 +14,7 @@ keywords: "Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers, [!INCLUDE [](../../includes/devspaces-team-development-1.md)] ### Make a code change -Go to the VS Code window for `mywebapi` and make a code edit to the default GET `/` handler, for example: +Go to the VS Code window for `mywebapi` and make a code edit to the default GET `/` handler in `server.js`, for example: ```javascript app.get('/', function (req, res) { diff --git a/articles/governance/blueprints/concepts/sequencing-order.md b/articles/governance/blueprints/concepts/sequencing-order.md index 9de8086c16eb7..bffa01698440b 100644 --- a/articles/governance/blueprints/concepts/sequencing-order.md +++ b/articles/governance/blueprints/concepts/sequencing-order.md @@ -1,10 
+1,10 @@ --- title: Understand the deployment sequence order -description: Learn about the life-cycle that a blueprint goes through and details about each stage. +description: Learn about the life-cycle that a blueprint definition goes through and details about each stage. services: blueprints author: DCtheGeek ms.author: dacoulte -ms.date: 11/12/2018 +ms.date: 03/25/2019 ms.topic: conceptual ms.service: blueprints manager: carmonm @@ -13,7 +13,7 @@ ms.custom: seodec18 # Understand the deployment sequence in Azure Blueprints Azure Blueprints uses a **sequencing order** to determine the order of resource creation when -processing the assignment of a blueprint. This article explains the following concepts: +processing the assignment of a blueprint definition. This article explains the following concepts: - The default sequencing order that is used - How to customize the order @@ -25,8 +25,8 @@ There are variables in the JSON examples that you need to replace with your own ## Default sequencing order -If the blueprint contains no directive for the order to deploy artifacts or the directive is null, -then the following order is used: +If the blueprint definition contains no directive for the order to deploy artifacts or the directive +is null, then the following order is used: - Subscription level **role assignment** artifacts sorted by artifact name - Subscription level **policy assignment** artifacts sorted by artifact name @@ -42,25 +42,21 @@ created within that resource group: ## Customizing the sequencing order -When composing large blueprints, it may be necessary for resources to be created in a specific -order. The most common use pattern of this scenario is when a blueprint includes several Azure -Resource Manager templates. Blueprints handles this pattern by allowing the sequencing order to be -defined. +When composing large blueprint definitions, it may be necessary for resources to be created in a +specific order. The most common use pattern of this scenario is when a blueprint definition includes +several Azure Resource Manager templates. Blueprints handles this pattern by allowing the sequencing +order to be defined. -The ordering is accomplished by defining a `dependsOn` property in the JSON. Only the blueprint (for -resource groups) and artifact objects support this property. `dependsOn` is a string array of -artifact names that the particular artifact needs to be created before it's created. +The ordering is accomplished by defining a `dependsOn` property in the JSON. The blueprint +definition, for resource groups, and artifact objects support this property. `dependsOn` is a string +array of artifact names that the particular artifact needs to be created before it's created. -> [!NOTE] -> **Resource group** artifacts support the `dependsOn` property, but can't be the target of a -> `dependsOn` by any artifact type. +### Example - ordered resource group -### Example - blueprint with ordered resource group - -This example blueprint has a resource group that has defined a custom sequencing order by declaring -a value for `dependsOn`, along with a standard resource group. In this case, the artifact named -**assignPolicyTags** will be processed before the **ordered-rg** resource group. **standard-rg** -will be processed per the default sequencing order. +This example blueprint definition has a resource group that has defined a custom sequencing order by +declaring a value for `dependsOn`, along with a standard resource group. 
In this case, the artifact +named **assignPolicyTags** will be processed before the **ordered-rg** resource group. +**standard-rg** will be processed per the default sequencing order. ```json { @@ -112,6 +108,46 @@ ordering allows the policy artifact to wait for the Azure Resource Manager templ } ``` +### Example - subscription level template artifact depending on a resource group + +This example is for a Resource Manager template deployed at the subscription level to depend on a +resource group. In default ordering, the subscription level artifacts would be created prior to any +resource groups and child artifacts in those resource groups. The resource group is defined in the +blueprint definition like this: + +```json +"resourceGroups": { + "wait-for-me": { + "metadata": { + "description": "Resource Group that is deployed prior to the subscription level template artifact" + } + } +} +``` + +The subscription level template artifact depending on the **wait-for-me** resource group is defined +like this: + +```json +{ + "properties": { + "template": { + ... + }, + "parameters": { + ... + }, + "dependsOn": ["wait-for-me"], + "displayName": "SubLevelTemplate", + "description": "" + }, + "kind": "template", + "id": "/providers/Microsoft.Management/managementGroups/{YourMG}/providers/Microsoft.Blueprint/blueprints/mySequencedBlueprint/artifacts/subtemplateWaitForRG", + "type": "Microsoft.Blueprint/blueprints/artifacts", + "name": "subtemplateWaitForRG" +} +``` + ## Processing the customized sequence During the creation process, a topological sort is used to create the dependency graph of the diff --git a/articles/hdinsight/hdinsight-extend-hadoop-virtual-network.md b/articles/hdinsight/hdinsight-extend-hadoop-virtual-network.md index eea165823d09a..492bae6477664 100644 --- a/articles/hdinsight/hdinsight-extend-hadoop-virtual-network.md +++ b/articles/hdinsight/hdinsight-extend-hadoop-virtual-network.md @@ -278,6 +278,7 @@ If you use network security groups, you must allow traffic from the Azure health | China | China North | 42.159.96.170
139.217.2.219

42.159.198.178
42.159.234.157 | 443 | Inbound | |   | China East | 42.159.198.178
42.159.234.157

42.159.96.170
139.217.2.219 | 443 | Inbound | |   | China North 2 | 40.73.37.141
40.73.38.172 | 443 | Inbound | + |   | China East 2 | 139.217.227.106
139.217.228.187 | 443 | Inbound | | Europe | North Europe | 52.164.210.96
13.74.153.132 | 443 | Inbound | |   | West Europe| 52.166.243.90
52.174.36.244 | 443 | Inbound | | France | France Central| 20.188.39.64
40.89.157.135 | 443 | Inbound | diff --git a/articles/healthcare-apis/media/cors/cors.png b/articles/healthcare-apis/media/cors/cors.png index 9f7c95cda028e..ca6696bc1e0fe 100644 Binary files a/articles/healthcare-apis/media/cors/cors.png and b/articles/healthcare-apis/media/cors/cors.png differ diff --git a/articles/index.md b/articles/index.md index 2c40ed28d10c7..06dac2c6a709e 100644 --- a/articles/index.md +++ b/articles/index.md @@ -462,6 +462,12 @@ featureFlags:

Data Factory

+
  • + + +

    Azure Data Explorer

    +
    +
  • @@ -509,7 +515,7 @@ featureFlags:

    Azure Database Migration Service

    -
  • +

    Containers

  • @@ -3340,7 +3365,7 @@ featureFlags:
    - +
    diff --git a/articles/iot-dps/how-to-legacy-device-symm-key.md b/articles/iot-dps/how-to-legacy-device-symm-key.md index 31b8296d32355..d69e4b657947e 100644 --- a/articles/iot-dps/how-to-legacy-device-symm-key.md +++ b/articles/iot-dps/how-to-legacy-device-symm-key.md @@ -48,20 +48,22 @@ In this section, you will prepare a development environment used to build the [A The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence. -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. + + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. 
@@ -288,7 +290,7 @@ Be aware that this leaves the derived device key included as part of the image, ## Next steps -* To learn more Reprovisioning, see [IoT Hub Device reprovisoning concepts](concepts-device-reprovision.md) +* To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md) * [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md) * To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md) diff --git a/articles/iot-dps/how-to-use-custom-allocation-policies.md b/articles/iot-dps/how-to-use-custom-allocation-policies.md index fcf1a592848f2..ab1490c729ace 100644 --- a/articles/iot-dps/how-to-use-custom-allocation-policies.md +++ b/articles/iot-dps/how-to-use-custom-allocation-policies.md @@ -347,22 +347,22 @@ In this section, you will prepare a development environment used to build the [A This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [How to provision for multitenancy](how-to-provision-multitenant.md). +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. - -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. 
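The how-to-legacy-device-symm-key.md hunk above refers to a derived device key baked into the device image. For context only (this is not code from the patch), a per-device key for a DPS symmetric-key enrollment is typically derived by signing the registration ID with the enrollment group key using HMAC-SHA256, roughly as in this C# sketch; both input values are placeholders.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class DeviceKeyDerivation
{
    // Derives a per-device key from the enrollment group key and a registration ID.
    // Placeholder inputs only; as the hunk notes, do not bake real keys into a device image.
    public static string ComputeDerivedSymmetricKey(string enrollmentGroupKeyBase64, string registrationId)
    {
        using (var hmac = new HMACSHA256(Convert.FromBase64String(enrollmentGroupKeyBase64)))
        {
            byte[] derived = hmac.ComputeHash(Encoding.UTF8.GetBytes(registrationId));
            return Convert.ToBase64String(derived);
        }
    }
}
```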
@@ -550,7 +550,7 @@ To delete the resource group by name: ## Next steps -- To learn more Reprovisioning, see [IoT Hub Device reprovisoning concepts](concepts-device-reprovision.md) +- To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md) - To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md) diff --git a/articles/iot-dps/quick-create-simulated-device-symm-key.md b/articles/iot-dps/quick-create-simulated-device-symm-key.md index 19f421b0f436c..11ba32b8a08b3 100644 --- a/articles/iot-dps/quick-create-simulated-device-symm-key.md +++ b/articles/iot-dps/quick-create-simulated-device-symm-key.md @@ -42,20 +42,22 @@ In this section, you will prepare a development environment used to build the [A The SDK includes the sample code for a simulated device. This simulated device will attempt provisioning during the device's boot sequence. -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. + + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. diff --git a/articles/iot-dps/quick-create-simulated-device-x509.md b/articles/iot-dps/quick-create-simulated-device-x509.md index 8b399f2f3ac17..88d36ae66ad22 100644 --- a/articles/iot-dps/quick-create-simulated-device-x509.md +++ b/articles/iot-dps/quick-create-simulated-device-x509.md @@ -41,20 +41,22 @@ This article will demonstrate individual enrollments. 
In this section, you will prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) which include the sample code for the X.509 boot sequence. -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. + + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. diff --git a/articles/iot-dps/quick-create-simulated-device.md b/articles/iot-dps/quick-create-simulated-device.md index c3b7aa2890822..669b43aac93d8 100644 --- a/articles/iot-dps/quick-create-simulated-device.md +++ b/articles/iot-dps/quick-create-simulated-device.md @@ -40,20 +40,22 @@ This article will demonstrate individual enrollments. In this section, you will prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) and the [TPM](https://docs.microsoft.com/windows/device-security/tpm/trusted-platform-module-overview) device simulator sample. -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. 
+ + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. diff --git a/articles/iot-dps/tutorial-set-up-device.md b/articles/iot-dps/tutorial-set-up-device.md index bb5c5e44f2e73..f32616185f730 100644 --- a/articles/iot-dps/tutorial-set-up-device.md +++ b/articles/iot-dps/tutorial-set-up-device.md @@ -40,20 +40,22 @@ If you're unfamiliar with the process of auto-provisioning, be sure to review [A The Device Provisioning Service Client SDK helps you implement your device registration software. But before you can use it, you need to build a version of the SDK specific to your development client platform and attestation mechanism. In this tutorial, you build an SDK that uses Visual Studio 2017 on a Windows development platform, for a supported type of attestation: -1. Download the version 3.11.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. 
+ + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell - PS C:\Downloads> $hash = get-filehash .\cmake-3.11.4-win64-x64.msi - PS C:\Downloads> $hash.Hash -eq "56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869" + PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi + PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - - The following hash values for version 3.11.4 were listed on the CMake site at the time of this writing: + + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` - 6dab016a6b82082b8bcd0f4d1e53418d6372015dd983d29367b9153f1a376435 cmake-3.11.4-Linux-x86_64.tar.gz - 72b3b82b6d2c2f3a375c0d2799c01819df8669dc55694c8b8daaf6232e873725 cmake-3.11.4-win32-x86.msi - 56e3605b8e49cd446f3487da88fcc38cb9c3e9e99a20f5d4bd63e54b7a35f869 cmake-3.11.4-win64-x64.msi + 563a39e0a7c7368f81bfa1c3aff8b590a0617cdfe51177ddc808f66cc0866c76 cmake-3.13.4-Linux-x86_64.tar.gz + 7c37235ece6ce85aab2ce169106e0e729504ad64707d56e4dbfc982cb4263847 cmake-3.13.4-win32-x86.msi + 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. diff --git a/articles/iot-edge/troubleshoot.md b/articles/iot-edge/troubleshoot.md index bd59a86664d55..c9d5fd208c13c 100644 --- a/articles/iot-edge/troubleshoot.md +++ b/articles/iot-edge/troubleshoot.md @@ -333,6 +333,39 @@ While IoT Edge provides enhanced configuration for securing Azure IoT Edge runti |AMQP|5671|BLOCKED (Default)|OPEN (Default)|
    • Default communication protocol for IoT Edge.
• Must be configured to be Open if Azure IoT Edge is not configured to use another supported protocol, or if AMQP is the desired communication protocol.
    • 5672 for AMQP is not supported by IoT Edge.
    • Block this port when Azure IoT Edge uses a different IoT Hub supported protocol.
    • Incoming (Inbound) connections should be blocked.
    | |HTTPS|443|BLOCKED (Default)|OPEN (Default)|
    • Configure Outgoing (Outbound) to be Open on 443 for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS).
    • Incoming (Inbound) connection should be Open only for specific scenarios:
  • If you have a transparent gateway with leaf devices that may send method requests. In this case, port 443 does not need to be open to external networks to connect to IoT Hub or provide IoT Hub services through Azure IoT Edge. The incoming rule can therefore be restricted to allow Incoming (Inbound) traffic only from the internal network.
  • For Cloud to Device (C2D) scenarios.
    • 80 for HTTP is not supported by IoT Edge.
• If non-HTTP protocols (for example, AMQP or MQTT) cannot be configured in the enterprise, the messages can be sent over WebSockets. Port 443 is used for WebSocket communication in that case.
    | +## Edge Agent module continually reports 'empty config file' and no modules start on the device + +The device has trouble starting modules defined in the deployment. Only the edgeAgent is running but continually reporting 'empty config file...'. + +### Potential root cause +By default, IoT Edge starts modules in their own isolated container network. The device may be having trouble with DNS name resolution within this private network. + +### Resolution +Specify the DNS server for your environment in the container engine settings. Create a file named `daemon.json` specifying the DNS server to use. For example: + +``` +{ + "dns": ["1.1.1.1"] +} +``` + +The above example sets the DNS server to a publicly accessible DNS service. If the edge device cannot access this IP from its environment, replace it with DNS server address that is accessible. + +Place `daemon.json` in the right location for your platform: + +| Platform | Location | +| --------- | -------- | +| Linux | `/etc/docker` | +| Windows host with Windows containers | `C:\ProgramData\iotedge-moby-data\config` | + +If the location already contains `daemon.json` file, add the **dns** key to it and save the file. + +*Restart the container engine for the updates to take effect* + +| Platform | Command | +| --------- | -------- | +| Linux | `sudo systemctl restart docker` | +| Windows (Admin Powershell) | `Restart-Service iotedge-moby -Force` | ## Next steps Do you think that you found a bug in the IoT Edge platform? [Submit an issue](https://github.com/Azure/iotedge/issues) so that we can continue to improve. diff --git a/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md b/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md index 9587e94891183..834496ed788d3 100644 --- a/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md +++ b/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md @@ -19,7 +19,9 @@ You can use the [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-develo ## What you do -Connect the DevKit to an Azure IoT hub that you create. Then collect the temperature and humidity data from sensors, and send the data to the IoT hub. +In this article, you will use [Visual Studio Code](https://code.visualstudio.com/), a cross platform source code editor, along with the [Azure IoT Tools](https://aka.ms/azure-iot-tools) extension pack. + +You will connect the DevKit to an Azure IoT hub that you create. Then collect the temperature and humidity data from sensors, and send the data to the IoT hub. Don't have a DevKit yet? Try the [DevKit simulator](https://azure-samples.github.io/iot-devkit-web-simulator/) or [purchase a DevKit](https://aka.ms/iot-devkit-purchase). @@ -114,7 +116,9 @@ Press button B to test the sensors. Continue pressing and releasing the button B ### Install Azure IoT Tools -We recommend [Azure IoT Tools](https://aka.ms/azure-iot-tools) extension pack for Visual Studio Code to develop on the DevKit. The Azure IoT Tools contains [Azure IoT Device Workbench](https://aka.ms/iot-workbench) to develop and debug on various IoT devkit devices and [Azure IoT Hub Toolkit](https://aka.ms/iot-toolkit) to manage and interact with Azure IoT Hub. +In this section, you will install the [Arduino IDE](https://www.arduino.cc/en/Main/Software) along with [Visual Studio Code](https://code.visualstudio.com/), a cross platform source code editor. + +You will also install the [Azure IoT Tools](https://aka.ms/azure-iot-tools) extension pack for Visual Studio Code. 
We recommend using the [Azure IoT Tools](https://aka.ms/azure-iot-tools) extension pack for Visual Studio Code to develop applications on the DevKit. The Azure IoT Tools extension pack contains the [Azure IoT Device Workbench](https://aka.ms/iot-workbench), which is used to develop and debug on various IoT devkit devices. The [Azure IoT Hub Toolkit](https://aka.ms/iot-toolkit), also included in the Azure IoT Tools extension pack, is used to manage and interact with Azure IoT Hub. You can watch these [Channel 9](https://channel9.msdn.com/) videos for an overview of what they do: * [Introduction to the new IoT Workbench extension for VS Code](https://channel9.msdn.com/Shows/Internet-of-Things-Show/IoT-Workbench-extension-for-VS-Code) diff --git a/articles/iot-hub/iot-hub-devguide-quotas-throttling.md index 6f0c138aed33f..de1b7b12e10cc 100644 --- a/articles/iot-hub/iot-hub-devguide-quotas-throttling.md +++ b/articles/iot-hub/iot-hub-devguide-quotas-throttling.md @@ -37,6 +37,7 @@ The following table shows the enforced throttles. Values refer to an individual | Cloud-to-device receives1
    (only when device uses HTTPS)| 16.67/sec/unit (1000/min/unit) | 16.67/sec/unit (1000/min/unit) | 833.33/sec/unit (50000/min/unit) | | File upload | 1.67 file upload notifications/sec/unit (100/min/unit) | 1.67 file upload notifications/sec/unit (100/min/unit) | 83.33 file upload notifications/sec/unit (5000/min/unit) | | Direct methods1 | 160KB/sec/unit2 | 480KB/sec/unit2 | 24MB/sec/unit2 | +| Queries | 20/sec/unit | 20/sec/unit | 1000/sec/unit | | Twin (device and module) reads1 | 100/sec | Higher of 100/sec or 10/sec/unit | 500/sec/unit | | Twin updates (device and module)1 | 50/sec | Higher of 50/sec or 5/sec/unit | 250/sec/unit | | Jobs operations1,3
    (create, update, list, delete) | 1.67/sec/unit (100/min/unit) | 1.67/sec/unit (100/min/unit) | 83.33/sec/unit (5000/min/unit) | diff --git a/articles/iot-hub/quickstart-device-streams-echo-c.md b/articles/iot-hub/quickstart-device-streams-echo-c.md index 133f18969484f..e952dfca86bd1 100644 --- a/articles/iot-hub/quickstart-device-streams-echo-c.md +++ b/articles/iot-hub/quickstart-device-streams-echo-c.md @@ -46,14 +46,16 @@ If you don’t have an Azure subscription, create a [free account](https://azure For this quickstart, you will be using the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md). You will prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code used in this quickstart. -1. Download version 3.13.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. + + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` @@ -62,7 +64,7 @@ For this quickstart, you will be using the [Azure IoT device SDK for C](iot-hub- 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` - It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine **before** starting the `CMake` installation. Once the prerequisites are in place and the download is verified, install the CMake build system. + It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. 2. Open a command prompt or Git Bash shell. Execute the following command to clone the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository: diff --git a/articles/iot-hub/quickstart-device-streams-proxy-c.md b/articles/iot-hub/quickstart-device-streams-proxy-c.md index 652d1ca3816f5..00383e16dfccd 100644 --- a/articles/iot-hub/quickstart-device-streams-proxy-c.md +++ b/articles/iot-hub/quickstart-device-streams-proxy-c.md @@ -56,7 +56,9 @@ If you don’t have an Azure subscription, create a [free account](https://azure For this quickstart, you will be using the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md). You will prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code used in this quickstart. -1. 
Download version 3.13.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. + + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi @@ -72,7 +74,7 @@ For this quickstart, you will be using the [Azure IoT device SDK for C](iot-hub- 64ac7dd5411b48c2717e15738b83ea0d4347cd51b940487dff7f99a870656c09 cmake-3.13.4-win64-x64.msi ``` - It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine **before** starting the `CMake` installation. Once the prerequisites are in place and the download is verified, install the CMake build system. + It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. 2. Open a command prompt or Git Bash shell. Execute the following command to clone the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository: diff --git a/articles/iot-hub/quickstart-send-telemetry-c.md b/articles/iot-hub/quickstart-send-telemetry-c.md index bf97a8bb26820..971432dca93f5 100644 --- a/articles/iot-hub/quickstart-send-telemetry-c.md +++ b/articles/iot-hub/quickstart-send-telemetry-c.md @@ -48,15 +48,16 @@ You can use the SDK by installing the packages and libraries for the following e However, in this quickstart, you will prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code used in this quickstart. +1. Download the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the cryptographic hash value that corresponds to the version you download. The cryptographic hash values are also located from the CMake download link already provided. -1. Download the version 3.13.4 of the [CMake build system](https://cmake.org/download/). Verify the downloaded binary using the corresponding cryptographic hash value. 
The following example used Windows PowerShell to verify the cryptographic hash for version 3.11.4 of the x64 MSI distribution: + The following example used Windows PowerShell to verify the cryptographic hash for version 3.13.4 of the x64 MSI distribution: ```PowerShell PS C:\Downloads> $hash = get-filehash .\cmake-3.13.4-win64-x64.msi PS C:\Downloads> $hash.Hash -eq "64AC7DD5411B48C2717E15738B83EA0D4347CD51B940487DFF7F99A870656C09" True ``` - + The following hash values for version 3.13.4 were listed on the CMake site at the time of this writing: ``` diff --git a/articles/jenkins/jenkins-plugins.md b/articles/jenkins/jenkins-plugins.md new file mode 100644 index 0000000000000..38f3f38320902 --- /dev/null +++ b/articles/jenkins/jenkins-plugins.md @@ -0,0 +1,28 @@ +--- +title: Jenkins plugins for Azure +description: Learn about the Jenkins plugin available for use with Azure +ms.service: jenkins +keywords: jenkins, plugis, azure, devops +author: tomarchermsft +manager: jeconnoc +ms.author: tarcher +ms.date: 03/22/2019 +ms.topic: article +--- + +# Jenkins plugins for Azure + +The following Jenkins plugins support various features for use with Azure. + +| Jenkins plugin | Description | +|------------------------------------------------------------------------------| +| [Azure App Service plugin](https://plugins.jenkins.io/azure-app-service) | A Jenkins plugin to deploy an Azure App Service (currently supports only Web App). | +| [Azure AD plugin](https://plugins.jenkins.io/azure-ad) | A Jenkins Plugin that supports authentication & authorization via Azure Active Directory. | +| [Azure Container Agents plugin](https://plugins.jenkins.io/azure-container-agents) | Azure Container Agents Plugin can help you to run a container as an agent in Jenkins | +| [Azure Container Service plugin](https://plugins.jenkins.io/azure-acs) | A Jenkins Plugin to deploy configurations to Azure Container Service (AKS). | +| [Azure Credential plugin](https://plugins.jenkins.io/azure-credentials) | Jenkins plugin to manage Azure credentials. | +| [Azure Function plugin](https://plugins.jenkins.io/azure-function) | To use this plugin to deploy to Azure Function, first you need to have an Azure Service Principal in your Jenkins instance. | +| [Azure Service Fabric plugin](https://plugins.jenkins.io/service-fabric) | A Jenkins Plugin for Linux Azure Service Fabric projects. | +| [Azure Storage plugin](https://plugins.jenkins.io/windows-azure-storage) | A plugin for uploading build artifacts to, or downloading build dependencies from, Microsoft Azure Blob storage. | +| [Azure VM agents plugin](https://plugins.jenkins.io/azure-vm-agents) | A Jenkins Plugin to create Jenkins agents in Azure virtual machines (via Azure Resource Manager template). | +| [Azure virtual machine scale set plugin](https://plugins.jenkins.io/azure-vmss) | A Jenkins plugin to deploy VM images to Azure virtual machine scale sets. 
| \ No newline at end of file diff --git a/articles/jenkins/toc.yml b/articles/jenkins/toc.yml index 2b006574b1d12..ccfcb6a64a97d 100644 --- a/articles/jenkins/toc.yml +++ b/articles/jenkins/toc.yml @@ -8,75 +8,67 @@ expanded: true items: - name: Create a Jenkins server - href: /azure/jenkins/install-jenkins-solution-template - maintainContext: true + href: install-jenkins-solution-template.md - name: Tutorials items: - - name: CI/CD to App Service - href: /azure/jenkins/tutorial-jenkins-deploy-web-app-azure-app-service - - name: CI/CD to Kubernetes - href: /azure/aks/jenkins-continuous-deployment - maintainContext: true - - name: CI/CD to Linux VMs - href: /azure/virtual-machines/linux/tutorial-jenkins-github-docker-cicd - maintainContext: true - - name: Create Azure resources in a pipeline job - href: /azure/jenkins/execute-cli-jenkins-pipeline - - name: Scale with Azure VM agents - href: /azure/jenkins/jenkins-azure-vm-agents - - name: Build using Azure Container Instances - href: /azure/container-instances/container-instances-jenkins - maintainContext: true - - name: Deploy to AKS using blue/green pattern - href: /azure/jenkins/jenkins-aks-blue-green-deployment - - name: Deploy to Azure Functions - href: ./jenkins-azure-functions-deploy.md -- name: How-to - items: - - name: Secure Jenkins on Azure - href: https://jenkins.io/blog/2017/04/20/secure-jenkins-on-azure/ - - name: Use the App Service plugin - href: deploy-jenkins-app-service-plugin.md + - name: 1. Install + items: + - name: Create a Jenkins server + href: install-jenkins-solution-template.md + - name: 2. Configure + items: + - name: Scale with Azure VM agents + href: jenkins-azure-vm-agents.md + - name: 3. Implement CI/CD + items: + - name: AKS + items: + - name: CI/CD to Kubernetes + href: /azure/aks/jenkins-continuous-deployment + maintainContext: true + - name: Deploy to AKS using blue/green pattern + href: jenkins-aks-blue-green-deployment.md + - name: App Service + items: + - name: Create Azure resources in a pipeline job + href: execute-cli-jenkins-pipeline.md + - name: Deploy an app from GitHub to App Service + href: tutorial-jenkins-deploy-web-app-azure-app-service.md + - name: Build using Azure Container Instances + href: /azure/container-instances/container-instances-jenkins + maintainContext: true + - name: Use Jenkins with Azure DevOps + href: /azure/virtual-machines/linux/tutorial-build-deploy-jenkins + maintainContext: true + - name: Deploy to Azure Functions + href: jenkins-azure-functions-deploy.md - name: Publish to Azure Storage href: /azure/storage/storage-java-jenkins-continuous-integration-solution maintainContext: true - - name: Use Jenkins with Azure DevOps - href: https://www.visualstudio.com/docs/build/apps/jenkins/build-deploy-jenkins - - name: Deploy Service Fabric apps + - name: CI/CD to Linux VMs + href: /azure/virtual-machines/linux/tutorial-jenkins-github-docker-cicd + maintainContext: true + - name: CI/CD to Service Fabric href: /azure/service-fabric/service-fabric-cicd-your-linux-applications-with-jenkins maintainContext: true +- name: How-to + items: + - name: Use the App Service plugin + href: deploy-jenkins-app-service-plugin.md - name: Samples items: - name: Sample jobs and scripts href: https://github.com/azure/jenkins - name: Resources items: - - name: Plugins - items: - - name: Azure App Service plugin - href: https://plugins.jenkins.io/azure-app-service - - name: Azure AD plugin - href: https://plugins.jenkins.io/azure-ad - - name: Azure Container Agents plugin - href: 
https://plugins.jenkins.io/azure-container-agents - - name: Azure Container Service plugin - href: https://plugins.jenkins.io/azure-acs - - name: Azure Credential plugin - href: https://plugins.jenkins.io/azure-credentials - - name: Azure Function plugin - href: https://plugins.jenkins.io/azure-function - - name: Azure Service Fabric plugin - href: https://plugins.jenkins.io/service-fabric - - name: Azure Storage plugin - href: https://plugins.jenkins.io/windows-azure-storage - - name: Azure VM agents plugin - href: https://plugins.jenkins.io/azure-vm-agents - - name: Azure VM scale set plugin - href: https://plugins.jenkins.io/azure-vmss + - name: Jenkins Plugins for Azure + href: jenkins-plugins.md - name: Azure Roadmap href: https://azure.microsoft.com/roadmap/ - name: Jenkins home href: https://jenkins.io/ + - name: Jenkins X Home + href: https://jenkins-x.io - name: Jenkins architecture href: /azure/architecture/reference-architectures/jenkins/ maintainContext: true \ No newline at end of file diff --git a/articles/lab-services/TOC.yml b/articles/lab-services/TOC.yml index c48ffeeac052f..20eeda34ad090 100644 --- a/articles/lab-services/TOC.yml +++ b/articles/lab-services/TOC.yml @@ -90,6 +90,15 @@ href: personal-data-delete-export.md - name: Import virtual machines from another lab href: import-virtual-machines-from-another-lab.md + - name: Create an image factory + href: image-factory-create.md + items: + - name: Run an image factory from AzureDevOps + href: image-factory-set-up-devops-lab.md + - name: Save custom images and distribute to multiple labs + href: image-factory-save-distribute-custom-images.md + - name: Set retention policy and run cleanup scripts + href: image-factory-set-retention-policy-cleanup.md - name: Set up DevTest Labs infrastructure in your enterprise href: devtest-lab-guidance-prescriptive-adoption.md items: @@ -130,7 +139,9 @@ - name: Redeploy a VM href: devtest-lab-redeploy-vm.md - name: Import a VM - href: devtest-lab-import-vm.md + href: devtest-lab-import-vm.md + - name: Start or stop a VM using PowerShell or CLI + href: use-command-line-start-stop-virtual-machines.md - name: Use environments in a lab items: - name: Create an environment diff --git a/articles/lab-services/image-factory-create.md b/articles/lab-services/image-factory-create.md new file mode 100644 index 0000000000000..b4cd53d69c27f --- /dev/null +++ b/articles/lab-services/image-factory-create.md @@ -0,0 +1,58 @@ +--- +title: Create an image factory in Azure DevTest Labs | Microsoft Docs +description: Learn how to create a custom image factory in Azure DevTest Labs. +services: devtest-lab, lab-services +documentationcenter: na +author: spelluru +manager: femila + +ms.service: lab-services +ms.workload: na +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 03/25/2019 +ms.author: spelluru + +--- + +# Create a custom image factory in Azure DevTest Labs +This article shows you how to set up a custom image factory by using sample scripts available in the [Git repository](https://github.com/Azure/azure-devtestlab/tree/master/Scripts/ImageFactory). + +## What's an image factory? +An image factory is a configuration-as-code solution that builds and distributes images automatically on a regular basis with all the desired configurations. The images in the image factory are always up-to-date, and the ongoing maintenance is almost zero once the whole process is automated. 
And, because all the required configurations are already in the image, it saves the time you would otherwise spend manually configuring the system after a VM is created from the base OS. + +Custom images are a significant accelerator for getting a developer desktop to a ready state in DevTest Labs. The downside of custom images is that there's something extra to maintain in the lab. For example, trial versions of products expire over time, or newly released security updates aren't applied, which forces you to refresh the custom image periodically. With an image factory, you keep a definition of the image checked in to source code control and have an automated process to produce custom images based on that definition. + +The solution gives you the speed of creating virtual machines from custom images while eliminating additional ongoing maintenance costs. With this solution, you can automatically create custom images, distribute them to other DevTest Labs, and retire the old images. In the following video, you learn about the image factory and how it's implemented with DevTest Labs. All the Azure PowerShell scripts are freely available and located here: [http://aka.ms/dtlimagefactory](http://aka.ms/dtlimagefactory). +
    + +> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Custom-Image-Factory-with-Azure-DevTest-Labs/player] + + +## High-level view of the solution +The solution enables the speed of creating virtual machines from custom images while eliminating additional ongoing maintenance costs. With this solution, you can automatically create custom images and distribute them to other DevTest Labs. You use Azure DevOps (formerly Visual Studio Team Services) as the orchestration engine for automating the all the operations in the DevTest Labs. + +![High-level view of the solution](./media/create-image-factory/high-level-view-of-solution.png) + +There's a [VSTS Extension for DevTest Labs](https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks) that enables you to execute these individual steps: + +- Create custom image +- Create VM +- Delete VM +- Create environment +- Delete environment +- Populate environment + +Using the DevTest Labs extension is an easy way to get started with automatically creating custom images in DevTest Labs. + +There's an alternate implementation using PowerShell script for a more complex scenario. Using PowerShell, you can fully automate an image factory based on DevTest Labs that can be used in your Continuous Integration and Continuous Delivery (CI/CD) toolchain. The principles followed in this alternate solution are: + +- Common updates should require no changes to the image factory. (for example, adding a new type of custom image, automatically retiring old images, adding a new ‘endpoint’ DevTest Labs to receive custom images, and so on.) +- Common changes are backed by source code control (infrastructure as code) +- DevTest Labs receiving custom images may not be in the same Azure Subscription (labs span subscriptions) +- PowerShell scripts must be reusable so we can spin up additional factories as needed + +## Next steps +Move on to the next article in this section: [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md) diff --git a/articles/lab-services/image-factory-save-distribute-custom-images.md b/articles/lab-services/image-factory-save-distribute-custom-images.md new file mode 100644 index 0000000000000..a2c6d3c48d05a --- /dev/null +++ b/articles/lab-services/image-factory-save-distribute-custom-images.md @@ -0,0 +1,95 @@ +--- +title: Save and distribute images in Azure DevTest Labs | Microsoft Docs +description: Learn how to create a custom image factory in Azure DevTest Labs. +services: devtest-lab, lab-services +documentationcenter: na +author: spelluru +manager: femila + +ms.service: lab-services +ms.workload: na +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 03/25/2019 +ms.author: spelluru + +--- + +# Save custom images and distribute to multiple labs +This article covers gives you the steps to save custom images from the already created virtual machines (VMs). It also covers how to distribute these custom images to other DevTest Labs in the organization. + +## Prerequisites +The following items should already be in place: + +- A lab for the Image Factory in Azure DevTest Labs. +- An Azure DevOps Project that's used to automate the image factory. +- Source code location containing the scripts and configuration (in our example, in the same DevOps Project mentioned in the previous step). +- Build definition to orchestrate the Azure Powershell tasks. + +If needed, follow steps in the [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md) to create or set up these items. 
+ +## Save VMs as generalized VHDs +Save the existing VMs as generalized VHDs. There's a sample PowerShell script to save the existing VMs as generalized VHDs. To use it, first, add another **Azure Powershell** task to the build definition as shown in the following image: + +![Add Azure PowerShell step](./media/save-distribute-custom-images/powershell-step.png) + +Once you have the new task in the list, select the item so we can fill in all the details as shown in the following image: + +![PowerShell settings](./media/save-distribute-custom-images/powershell-settings.png) + + +## Generalized vs. specialized custom images +In the [Azure portal](https://portal.azure.com), when creating a custom image from a virtual machine, you can choose to have a generalized or a specialized custom image. + +- **Specialized custom image:** Sysprep/Deprovision was NOT run on the machine. It means that the image is an exact copy of the OS Disk on the existing virtual machine (a snapshot). The same files, applications, user accounts, computer name, and so on, are all present when we create a new machine from this custom image. +- **Generalized Custom Image:** Sysprep/Deprovision was run on the machine. When this process runs, it removes user accounts, removes the computer name, strips out the user registry hives, etc., with the goal of generalizing the image so it can be customized when creating another virtual machine. When you generalize a virtual machine (by running sysprep), the process destroys the current virtual machine – it will no longer be functional. + +The script for snapping custom images in the Image Factory will save VHDs for any virtual machines created in the prior step (identified based on a tag on the resource in Azure). + +## Update configuration for distributing images +The next step in the process is to push the custom images from the image factory lab out to any other labs that need them. The core part of this process is the **labs.json** configuration file. You can find this file in the **Configuration** folder included in the image factory. + +There are two key things listed in the labs.json configuration file: + +- Uniquely identifying a specific destination lab using the subscription ID and the lab name. +- The specific set of images that should be pushed to the lab as relative paths to the configuration root. You can specify entire folder (to get all the images in that folder) too. + +Here is an example labs.json file with two labs listed. In this case, you are distributing images to two different labs. + +```json +{ + "Labs": [ + { + "SubscriptionId": "", + "LabName": "", + "ImagePaths": [ + "Win2012R2", + "Win2016/Datacenter.json" + ] + }, + { + "SubscriptionId": "", + "LabName": "", + "ImagePaths": [ + "Win2016/Datacenter.json" + ] + } + ] +} +``` + +## Create a build task +Using the same steps you have seen earlier in this article, add an additional **Azure Powershell** build task to you build definition. Fill in the details as shown in the following image: + +![Build task to distribute images](./media/save-distribute-custom-images/second-build-task-powershell.png) + +The parameters are: `-ConfigurationLocation $(System.DefaultWorkingDirectory)$(ConfigurationLocation) -SubscriptionId $(SubscriptionId) -DevTestLabName $(DevTestLabName) -maxConcurrentJobs 20` + +This task takes any custom images present in the image factory and pushes them out to any labs defined in the Labs.json file. 
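To make the mapping concrete, the following PowerShell sketch shows how the entries in the labs.json example above resolve to distribution targets. It is illustration only, not the actual distribution script from the image factory repository, and the configuration folder path is an assumption.

```PowerShell
# Sketch: enumerate the distribution targets defined in Labs.json.
# The configuration location is an assumption - adjust it to where your
# Configuration folder lives relative to this script.
$configurationLocation = "$PSScriptRoot\Configuration"
$labsConfig = Get-Content -Raw -Path (Join-Path $configurationLocation 'Labs.json') |
              ConvertFrom-Json

foreach ($lab in $labsConfig.Labs) {
    foreach ($imagePath in $lab.ImagePaths) {
        # An ImagePaths entry is a relative path to a single image definition (.json)
        # or to a folder of definitions.
        Write-Output "Distribute '$imagePath' to lab '$($lab.LabName)' in subscription $($lab.SubscriptionId)"
    }
}
```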
+ +## Queue the build +Once the distribution build task is complete, queue up a new build to make sure that everything is working. After the build completes successfully, the new custom images will show up in the destination lab that was entered into the Labs.json configuration file. + +## Next steps +In the next article in the series, you update the image factory with a retention policy and cleanup steps: [Set retention policy and run cleanup scripts](image-factory-set-retention-policy-cleanup.md). diff --git a/articles/lab-services/image-factory-set-retention-policy-cleanup.md b/articles/lab-services/image-factory-set-retention-policy-cleanup.md new file mode 100644 index 0000000000000..a2616852b95f9 --- /dev/null +++ b/articles/lab-services/image-factory-set-retention-policy-cleanup.md @@ -0,0 +1,76 @@ +--- +title: Create an image factory in Azure DevTest Labs | Microsoft Docs +description: Learn how to create a custom image factory in Azure DevTest Labs. +services: devtest-lab, lab-services +documentationcenter: na +author: spelluru +manager: femila + +ms.service: lab-services +ms.workload: na +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 03/25/2019 +ms.author: spelluru + +--- + +# Create a custom image factory in Azure DevTest Labs +This article covers setting a retention policy, cleaning up the factory, and retiring old images from all the other DevTest Labs in the organization. + +## Prerequisites +Make sure that you have followed these articles before proceeding further: + +- [Create an image factory](image-factory-create.md) +- [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md) +- [Save custom images and distribute to multiple labs](image-factory-save-distribute-custom-images.md) + +The following items should already be in place: + +- A lab for the image factory in Azure DevTest Labs +- One or more target Azure DevTest Labs where the factory will distribute golden images +- An Azure DevOps Project used to automate the image factory. +- Source code location containing the scripts and configuration (in our example, in the same DevOps Project used above) +- A build definition to orchestrate the Azure Powershell tasks + +## Setting the retention policy +Before you configure the clean Up steps, define how many historic images you wish to retain in the DevTest Labs. When you followed the [Run an image factory from Azure DevOps](image-factory-set-up-devops-lab.md) article, you configured various build Variables. One of them was **ImageRetention**. You set this variable to `1`, which means that the DevTest Labs will not maintain a history of custom images. Only the latest distributed images will be available. If you change this variable to `2`, the latest distributed image plus the previous ones will be maintained. You can set this value to define the number of historic images you wish to maintain in your DevTest Labs. + +## Cleaning Up the factory +The first step in cleaning Up the factory is to remove the golden Image VMs from the image factory. There is a script to do this task just like our previous scripts. 
The first step is to add another **Azure Powershell** task to the build definition as shown in the following image: + +![PowerShell step](./media/set-retention-policy-cleanup/powershell-step.png) + +Once you have the new task in the list, select the item, and fill in all the details as shown in the following image: + +![Clean up old images PowerShell task](./media/set-retention-policy-cleanup/configure-powershell-task.png) + +The script parameters are: `-DevTestLabName $(devTestLabName)`. + +## Retire old images +This task removes any old images, keeping only a history matching the **ImageRetention** build variable. Add an additional **Azure Powershell** build task to our build definition. Once it's added, select the task, and fill in the details as shown in the following image: + +![Retire old images PowerShell task](./media/set-retention-policy-cleanup/retire-old-image-task.png) + +The script parameters are: `-ConfigurationLocation $(System.DefaultWorkingDirectory)$(ConfigurationLocation) -SubscriptionId $(SubscriptionId) -DevTestLabName $(devTestLabName) -ImagesToSave $(ImageRetention)` + +## Queue the build +Now that you have completed the build definition, queue up a new build to make sure that everything is working. After the build completes successfully the new custom images show up in the destination lab and if you check the image factory lab, you see no provisioned VMs. Furthermore if you queue up further builds, you see the cleanup tasks retiring out old custom images from the DevTest Labs in accordance to the retention value set in the build variables. + +> [!NOTE] +> If you have executed the build pipeline at the end of the last article in the series, manually delete the virtual machines that were created in the image factory lab before queuing a new build. The manual cleanup step is only needed while we set everything up and verify it works. + + + +## Summary +Now you have a running image factory that can generate and distribute custom images to your labs on demand. At this point, it’s just a matter of getting your images set up properly and identifying the target labs. As mentioned in the previous article, the **Labs.json** file located in your **Configuration** folder specifies which images should be made available in each of the target labs. As you add other DevTest Labs to your organization, you simply need to add an entry in the Labs.json for the new lab. + +Adding a new image to your factory is also simple. When you want to include a new image in your factory you open the [Azure portal](https://portal.azure.com), navigate to your factory DevTest Labs, select the button to add a VM, and choose the desired marketplace image and artifacts. Instead of selecting the **Create** button to make the new VM, select **View Azure Resource Manager template**” and save the template as a .json file somewhere within the **GoldenImages** folder in your repository. The next time you run your image factory, it will create your custom image. + + +## Next steps +1. [Schedule your build/release](/devops/pipelines/build/triggers?view=azure-devops&tabs=designer) to run the image factory periodically. It refreshes your factory-generated images on a regular basis. +2. Make more golden images for your factory. You may also consider [creating artifacts](devtest-lab-artifact-author.md) to script additional pieces of your VM setup tasks and include the artifacts in your factory images. +4. 
Create a [separate build/release](/devops/pipelines/overview.md?view=azure-devops-2019) to run the **DistributeImages** script separately. You can run this script when you make changes to Labs.json and get images copied to target labs without having to recreate all the images again. + diff --git a/articles/lab-services/image-factory-set-up-devops-lab.md b/articles/lab-services/image-factory-set-up-devops-lab.md new file mode 100644 index 0000000000000..70a08ccb16fb2 --- /dev/null +++ b/articles/lab-services/image-factory-set-up-devops-lab.md @@ -0,0 +1,127 @@ +--- +title: Run an image factory from Azure DevOps in Azure DevTest Labs | Microsoft Docs +description: Learn how to create a custom image factory in Azure DevTest Labs. +services: devtest-lab, lab-services +documentationcenter: na +author: spelluru +manager: femila + +ms.service: lab-services +ms.workload: na +ms.tgt_pltfrm: na +ms.devlang: na +ms.topic: article +ms.date: 03/25/2019 +ms.author: spelluru + +--- + +# Run an image factory from Azure DevOps +This article covers all the preparations needed to run the image factory from Azure DevOps (formerly Visual Studio Team Services). + +> [!NOTE] +> Any orchestration engine will work! Azure DevOps is not mandatory. The image factory is run using Azure PowerShell scripts, so it could be run manually, by using Windows Task Scheduler, other CI/CD systems, and so on. + +## Create a lab for the image factory +The first step in setting up the image factory is to create a lab in Azure DevTest Labs. This lab is the image factory lab where we create the virtual machines and save custom images. This lab is considered as part of the overall image factory process. Once you create a lab, make sure to save the name since you’ll need it later. + +## Scripts and templates +The next step in adopting the image factory for your team is to understand what’s available. The image factory scripts and templates are available publicly in the [DevTest Labs GitHub Repo](https://github.com/Azure/azure-devtestlab/tree/master/Scripts/ImageFactory). Here is an outline of the pieces: + +- Image Factory. It's the root folder. + - Configuration. The inputs to the image factory + - GoldenImages. This folder contains JSON files that represent the definitions of custom images. + - Labs.json. File where teams sign up to receive specific custom images. +- Scripts. The engine for the image factory. + +The articles in this section provide more details about these scripts and templates. + +## Create an Azure DevOps team project +Azure DevOps let you store the source code, run the Azure PowerShell in one place. You can schedule recurring runs to keep images up-to-date. There are good facilities for logging the results to diagnose any issues. Using Azure DevOps isn’t a requirement however, you can use any harness/engine that can connect to Azure and can run Azure PowerShell. + +If you have an existing DevOps account or project you would like to use instead, skip this step. + +To get started, create a free account in Azure DevOps. Visit https://www.visualstudio.com/ and select **Get started for free** right under **Azure DevOps** (formerly VSTS). You’ll need to choose a unique account name and make sure to choose to manage code using Git. Once this is created, save the URL to your team project. Here is a sample URL: https://.visualstudio.com/MyFirstProject. 
+ +## Check in the image factory to Git +All the PowerShell, templates and configuration for the image factory are located in the [public DevTest Labs GitHub repo](https://github.com/Azure/azure-devtestlab/tree/master/Scripts/ImageFactory). The fastest way to get the code into your new team project is to import a repository. This pulls in the whole DevTest Labs repository (so you’ll get extra docs, and samples). + +1. Visit the Azure DevOps project that you created in the previous step (URL looks like **https://.visualstudio.com/MyFirstProject**). +2. Select **Import a Repository**. +3. Enter the **clone URL** for the DevTest Labs Repo: `https://github.com/Azure/azure-devtestlab`. +4. Select **Import**. + + ![Import Git repo](./media/set-up-devops-lab/import-git-repo.png) + +If you decide to only check in exactly what’s needed (the image factory files), follow the steps [here](https://www.visualstudio.com/en-us/docs/git/share-your-code-in-git-vs) to clone the Git repo and push only the files located in the **scripts/ImageFactory** directory. + +## Create a build and connect to Azure +At this point, you have the source files stored in a Git repo in Azure DevOps. Now, you need to set up a pipeline to run the Azure PowerShell. There are lots of options to do these steps. In this article, you use build definition for simplicity, but it works with DevOps Build, DevOps Release (single or multiple environments), other execution engines like Windows Task Scheduler or any other harness that can execute Azure PowerShell. + +> [!NOTE] +> One important point to keep in mind that some of the PowerShell files take a long time to run when there are a lot (10+) custom images to create. Free hosted DevOps Build/Release agents have a timeout of 30 min, so you can’t use the free hosted agent once you start building many images. This timeout challenge applies to whatever harness you decide to use, it’s good to verify up front that you can extend the typical timeouts for long running Azure PowerShell scripts. In the case of Azure DevOps, you can either use paid hosted Agents or use your own build agent. + +1. To start, select **Set up Build** on the homepage of your DevOps Project: + + ![Setup Build button](./media/set-up-devops-lab/setup-build-button.png) +2. Specify a **name** for the build (for example: Build and Deliver Images to DevTest Labs). +3. Select an **empty** build definition, and select **Apply** to create your build. +4. At this stage, you can choose **Hosted** for the build agent. +5. **Save** the build definition. + + ![Build definition](./media/set-up-devops-lab/build-definition.png) + +## Configure the build variables +To simplify the command-line parameters, encapsulate the key values that drive the image factory to a set of build variables. Select the **Variables** tab and you’ll see a list of several default variables. Here’s the list of variables to enter in to Azure DevOps: + + +| Variable Name | Value | Notes | +| ------------- | ----- | ----- | +| ConfigurationLocation | /Scripts/ImageFactory/Configuration | This is the full path in the repository to the **Configuration** folder. If you imported the whole repo above, the value to the left is correct. Otherwise update to point to the Configuration location. | +| DevTestLabName | MyImageFactory | The name of the lab in Azure DevTest Labs used as the factory to produce images. If you don’t have one, create one. Make sure that the Lab is in the same subscription that the service endpoint has access to. 
| +| ImageRetention | 1 | The number of images you want to save of each type. Set default value to 1. | +| MachinePassword | ******* | The built-in admin account password for the virtual machines. This is a transient account, so make sure that it’s secure. Select the little lock icon on the right to ensure it’s a secure string. | +| MachineUserName | ImageFactoryUser | The built-in admin account username for the virtual machines. This is a transient account. | +| StandardTimeoutMinutes | 30 | The timeout we should wait for regular Azure operations. | +| SubscriptionId | 0000000000-0000-0000-0000-0000000000000 | The ID of the subscription where the lab exists and that the service endpoint has access to. | +| VMSize | Standard_A3 | The size of the virtual machine to use for the **Create** step. The VMs created are transient. The size must be the one that's [enabled for the lab](devtest-lab-set-lab-policy.md). Confirm that there's enough [subscription cores quota](../azure-subscription-service-limits.md). + +![Build variables](./media/set-up-devops-lab/configure-build-variables.png) + +## Connect to Azure +The next step is to set up service principal. This is an identity in Azure Active Directory that enables the DevOps build agent to operate in Azure on the user’s behalf. To set it up, start with adding you first Azure PowerShell Build Step. + +1. Select **Add Task**. +2. Search for **Azure PowerShell**. +3. Once you find it, select **Add** to add the task to the build. When you do this, you’ll see the task appear on the left side as added. + +![Set up PowerShell step](./media/set-up-devops-lab/set-up-powershell-step.png) + +The fastest way to set up a service principal is to let Azure DevOps do it for us. + +1. Select the **task** you just added. +2. For **Azure Connection Type**, choose **Azure Resource Manager**. +3. Select the **Manage** link to set up the service principal. + +For more information, see this [blog post](https://devblogs.microsoft.com/devops/automating-azure-resource-group-deployment-using-a-service-principal-in-visual-studio-online-buildrelease-management/). When you select the **Manage** link, you’ll land in the right place in DevOps (second screenshot in the blog post) to set up the connection to Azure. Make sure to choose **Azure Resource Manager Service Endpoint** when setting this up. + +## Complete the build task +If you select the build task, you’ll see all the details on the right pane that should be filled in. + +1. First, name the build task: **Create Virtual Machines**. +2. Choose the **service principal** you created by choosing **Azure Resource Manager** +3. Choose the **service endpoint**. +4. For **Script Path**, select **… (ellipsis)** on the right. +5. Navigate to **MakeGoldenImageVMs.ps1** script. +6. Script Parameters should look like this: `-ConfigurationLocation $(System.DefaultWorkingDirectory)$(ConfigurationLocation) -DevTestLabName $(DevTestLabName) -vmSize $(VMSize) -machineUserName $(MachineUserName) -machinePassword (ConvertTo-SecureString -string '$(MachinePassword)' -AsPlainText -Force) -StandardTimeoutMinutes $(StandardTimeoutMinutes)` + + ![Complete the build definition](./media/set-up-devops-lab/complete-build-definition.png) + + +## Queue the build +Let’s verify that you have everything set up correctly by queuing up a new build. While the build is running, switch to the [Azure portal](https://portal.azure.com) and select on **All Virtual Machines** in your image factory lab to confirm that everything is working correctly. 
You should see three virtual machines get created in the lab. + +![VMs in the lab](./media/set-up-devops-lab/vms-in-lab.png) + +## Next steps +The first step in setting up the image factory based on Azure DevTest Labs is complete. In the next article in the series, you get those VMs generalized and saved to custom images. Then, you have them distributed to all your other labs. See the next article in the series: [Save custom images and distribute to multiple labs](image-factory-save-distribute-custom-images.md). diff --git a/articles/lab-services/media/create-image-factory/high-level-view-of-solution.png b/articles/lab-services/media/create-image-factory/high-level-view-of-solution.png new file mode 100644 index 0000000000000..a4f7a7908b65f Binary files /dev/null and b/articles/lab-services/media/create-image-factory/high-level-view-of-solution.png differ diff --git a/articles/lab-services/media/save-distribute-custom-images/powershell-settings.png b/articles/lab-services/media/save-distribute-custom-images/powershell-settings.png new file mode 100644 index 0000000000000..5fb2dad174695 Binary files /dev/null and b/articles/lab-services/media/save-distribute-custom-images/powershell-settings.png differ diff --git a/articles/lab-services/media/save-distribute-custom-images/powershell-step.png b/articles/lab-services/media/save-distribute-custom-images/powershell-step.png new file mode 100644 index 0000000000000..f67ad3887bb31 Binary files /dev/null and b/articles/lab-services/media/save-distribute-custom-images/powershell-step.png differ diff --git a/articles/lab-services/media/save-distribute-custom-images/second-build-task-powershell.png b/articles/lab-services/media/save-distribute-custom-images/second-build-task-powershell.png new file mode 100644 index 0000000000000..e7c8bd7e2329c Binary files /dev/null and b/articles/lab-services/media/save-distribute-custom-images/second-build-task-powershell.png differ diff --git a/articles/lab-services/media/set-retention-policy-cleanup/configure-powershell-task.png b/articles/lab-services/media/set-retention-policy-cleanup/configure-powershell-task.png new file mode 100644 index 0000000000000..c616a49659928 Binary files /dev/null and b/articles/lab-services/media/set-retention-policy-cleanup/configure-powershell-task.png differ diff --git a/articles/lab-services/media/set-retention-policy-cleanup/powershell-step.png b/articles/lab-services/media/set-retention-policy-cleanup/powershell-step.png new file mode 100644 index 0000000000000..2e6ec9769436e Binary files /dev/null and b/articles/lab-services/media/set-retention-policy-cleanup/powershell-step.png differ diff --git a/articles/lab-services/media/set-retention-policy-cleanup/retire-old-image-task.png b/articles/lab-services/media/set-retention-policy-cleanup/retire-old-image-task.png new file mode 100644 index 0000000000000..9a5f5e7d7ddee Binary files /dev/null and b/articles/lab-services/media/set-retention-policy-cleanup/retire-old-image-task.png differ diff --git a/articles/lab-services/media/set-up-devops-lab/build-definition.png b/articles/lab-services/media/set-up-devops-lab/build-definition.png new file mode 100644 index 0000000000000..0bd1194977772 Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/build-definition.png differ diff --git a/articles/lab-services/media/set-up-devops-lab/complete-build-definition.png b/articles/lab-services/media/set-up-devops-lab/complete-build-definition.png new file mode 100644 index 0000000000000..ae1700987c7de Binary files 
/dev/null and b/articles/lab-services/media/set-up-devops-lab/complete-build-definition.png differ
diff --git a/articles/lab-services/media/set-up-devops-lab/configure-build-variables.png b/articles/lab-services/media/set-up-devops-lab/configure-build-variables.png new file mode 100644 index 0000000000000..f2e215bd1b3c3 Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/configure-build-variables.png differ
diff --git a/articles/lab-services/media/set-up-devops-lab/import-git-repo.png b/articles/lab-services/media/set-up-devops-lab/import-git-repo.png new file mode 100644 index 0000000000000..6458eb140d709 Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/import-git-repo.png differ
diff --git a/articles/lab-services/media/set-up-devops-lab/set-up-powershell-step.png b/articles/lab-services/media/set-up-devops-lab/set-up-powershell-step.png new file mode 100644 index 0000000000000..0e3cbb50082ca Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/set-up-powershell-step.png differ
diff --git a/articles/lab-services/media/set-up-devops-lab/setup-build-button.png b/articles/lab-services/media/set-up-devops-lab/setup-build-button.png new file mode 100644 index 0000000000000..50d19b0186fd1 Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/setup-build-button.png differ
diff --git a/articles/lab-services/media/set-up-devops-lab/vms-in-lab.png b/articles/lab-services/media/set-up-devops-lab/vms-in-lab.png new file mode 100644 index 0000000000000..2c1142a1875f3 Binary files /dev/null and b/articles/lab-services/media/set-up-devops-lab/vms-in-lab.png differ
diff --git a/articles/lab-services/use-command-line-start-stop-virtual-machines.md b/articles/lab-services/use-command-line-start-stop-virtual-machines.md new file mode 100644 index 0000000000000..0921971beb59e
--- /dev/null
+++ b/articles/lab-services/use-command-line-start-stop-virtual-machines.md
@@ -0,0 +1,91 @@
+---
+title: Use command-line tools to start and stop VMs in Azure DevTest Labs | Microsoft Docs
+description: Learn how to use command-line tools to start and stop virtual machines in Azure DevTest Labs.
+services: devtest-lab,virtual-machines,lab-services
+documentationcenter: na
+author: spelluru
+manager: femila
+
+ms.service: lab-services
+ms.workload: na
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: article
+ms.date: 03/25/2019
+ms.author: spelluru
+
+---
+
+# Use command-line tools to start and stop Azure DevTest Labs virtual machines
+This article shows you how to use Azure PowerShell or Azure CLI to start or stop virtual machines in a lab in Azure DevTest Labs. You can create PowerShell/CLI scripts to automate these operations.
+
+## Overview
+Azure DevTest Labs is a way to create fast, easy, and lean dev/test environments. It allows you to manage cost, quickly provision VMs, and minimize waste. There are built-in features in the Azure portal that allow you to configure VMs in a lab to automatically start and stop at specific times.
+
+However, in some scenarios, you may want to automate starting and stopping of VMs from PowerShell/CLI scripts. Scripts give you the flexibility to start and stop individual machines at any time, instead of only at specific times. Here are some of the situations in which running these tasks by using scripts would be helpful.
+
+- When using a 3-tier application as part of a test environment, the tiers need to be started in a sequence.
+- Turn off a VM when custom criteria are met to save money.
+- Use it as a task within a CI/CD workflow to start at the beginning of the flow, use the VMs as build machines, test machines, or infrastructure, then stop the VMs when the process is complete. An example of this would be the custom image factory with Azure DevTest Labs. + +## Azure PowerShell +The following PowerShell script starts a VM in a lab. [Invoke-AzureRmResourceAction](/powershell/module/azurerm.resources/invoke-azurermresourceaction?view=azurermps-6.13.0) is the primary focus for this script. The **ResourceId** parameter is the fully qualified resource ID for the VM in the lab. The **Action** parameter is where the **Start** or **Stop** options are set depending on what is needed. + +```powershell +# The id of the subscription +$subscriptionId = "111111-11111-11111-1111111" + +# The name of the lab +$devTestLabName = "yourlabname" + +# The name of the virtual machine to be started +$vMToStart = "vmname" + +# The action on the virtual machine (Start or Stop) +$vmAction = "Start" + +# Select the Azure subscription +Select-AzureRMSubscription -SubscriptionId $subscriptionId + +# Get the lab information +if ($(Get-Module -Name AzureRM).Version.Major -eq 6) { + $devTestLab = Get-AzureRmResource -ResourceType 'Microsoft.DevTestLab/labs' -Name $devTestLabName +} else { + $devTestLab = Find-AzureRmResource -ResourceType 'Microsoft.DevTestLab/labs' -ResourceNameEquals $devTestLabName +} + +# Start the VM and return a succeeded or failed status +$returnStatus = Invoke-AzureRmResourceAction ` + -ResourceId "$($devTestLab.ResourceId)/virtualmachines/$vMToStart" ` + -Action $vmAction ` + -Force + +if ($returnStatus.Status -eq 'Succeeded') { + Write-Output "##[section] Successfully updated DTL machine: $vMToStart, Action: $vmAction" +} +else { + Write-Error "##[error]Failed to update DTL machine: $vMToStart, Action: $vmAction" +} +``` + + +## Azure CLI +The [Azure CLI](/cli/azure/get-started-with-azure-cli?view=azure-cli-latest) is another way to automate the starting and stopping of DevTest Labs VMs. Azure CLI can be [installed](/cli/azure/install-azure-cli?view=azure-cli-latest) on different operating systems. The following script gives you commands for starting and stopping a VM in a lab. + +```azurecli +# Sign in to Azure +az login + +## Get the name of the resource group that contains the lab +az resource list --resource-type "Microsoft.DevTestLab/labs" --name "yourlabname" --query "[0].resourceGroup" + +## Start the VM +az lab vm start --lab-name yourlabname --name vmname --resource-group labResourceGroupName + +## Stop the VM +az lab vm stop --lab-name yourlabname --name vmname --resource-group labResourceGroupName +``` + + +## Next steps +See the following article for using the Azure portal to do these operations: [Restart a VM](devtest-lab-restart-vm.md). \ No newline at end of file diff --git a/articles/logic-apps/logic-apps-pricing.md b/articles/logic-apps/logic-apps-pricing.md index cc7612217a0b2..ec15df8bbb7cb 100644 --- a/articles/logic-apps/logic-apps-pricing.md +++ b/articles/logic-apps/logic-apps-pricing.md @@ -7,29 +7,37 @@ ms.suite: logic-apps author: kevinlam1 ms.author: klam ms.reviewer: estfan, LADocs -manager: carmonm ms.assetid: f8f528f5-51c5-4006-b571-54ef74532f32 ms.topic: article -ms.date: 02/26/2019 +ms.date: 03/25/2019 --- # Pricing model for Azure Logic Apps -You can create and run automated integration workflows that -can scale in the cloud when you use Azure Logic Apps. -Here are the details about how billing and pricing work for Logic Apps. 
+[Azure Logic Apps](../logic-apps/logic-apps-overview.md) helps you create +and run automated integration workflows that can scale in the cloud. +This article describes how billing and pricing work for Azure Logic Apps. +For specific pricing information, see [Azure Logic Apps Pricing](https://azure.microsoft.com/pricing/details/logic-apps). ## Consumption pricing model -For new logic apps that run in the public or "global" Logic -Apps service, you pay only for what you use. These logic apps -use a consumption-based plan and pricing model. In your logic app -definition, each step is an action. Actions include the trigger, -any control flow steps, built-in actions, and connector calls. -Logic Apps meters all actions that run in your logic app. -For more information, see [Logic Apps Pricing](https://azure.microsoft.com/pricing/details/logic-apps). +For new logic apps that run in the public or "global" +Azure Logic Apps service, you pay only for what you use. +These logic apps use a consumption-based plan and pricing model. +In your logic app definition, each step is an action. For example, +actions include: + +* Triggers, which are special actions. +All logic apps require a trigger as the first step. +* "Built-in" or native actions such as HTTP, +calls to Azure Functions and API Management, and so on +* Calls to connectors such as Outlook 365, Dropbox, and so on +* Control flow steps, such as loops, conditional statements, and so on + +Azure Logic Apps meters all the actions that run in your logic app. +Learn more about how billing works for [triggers](#triggers) and [actions](#actions). @@ -40,98 +48,172 @@ For new logic apps that run inside an you pay a fixed monthly price for built-in actions and standard connectors. An ISE provides a way for you to create and run isolated logic apps that can -access resources in an Azure virtual network. +access resources in an Azure virtual network. + +> [!NOTE] +> The ISE is in [*public preview*](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> For specific pricing information, see +> [Azure Logic Apps Pricing](https://azure.microsoft.com/pricing/details/logic-apps). Your ISE base unit has fixed capacity, so if you need more throughput, you can [add more scale units](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#add-capacity), either during creation or afterwards. Your ISE includes one free Enterprise connector, which includes as many connections as you want. -Usage for additional Enterprise connectors are charged based -on the Enterprise consumption price. +Usage for additional Enterprise connectors is charged based +on the Enterprise consumption price. -> [!NOTE] -> The ISE is in [*public preview*](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -> For more information, see -> [Logic Apps Pricing](https://azure.microsoft.com/pricing/details/logic-apps). + + +## Connectors + +Azure Logic Apps connectors help your logic app access apps, +services, and systems in the cloud or on premises by providing +[triggers](#triggers), [actions](#actions), or both. Connectors +are classified as either Standard or Enterprise. For an overview +about these connectors, see [Connectors for Azure Logic Apps](../connectors/apis-list.md). +The following sections provide more information about how billing +for triggers and actions work. ## Triggers -Triggers are special actions that create a logic app instance when a specific event happens. 
-Triggers act in different ways, which affect how the logic app is metered.
+Triggers are special actions that create a logic app instance
+when a specific event happens. Triggers act in different ways,
+which affect how the logic app is metered. Here are the various
+kinds of triggers that exist in Azure Logic Apps:
 
-* **Polling trigger** – This trigger continually checks an endpoint for messages
-that satisfy the criteria for creating a logic app instance and starting the workflow.
-Even when no logic app instance gets created, Logic Apps meters each polling request as an execution.
+* **Polling trigger**: This trigger continually checks an endpoint
+for messages that satisfy the criteria for creating a logic app
+instance and starting the workflow. Even when no logic app instance
+gets created, Logic Apps meters each polling request as an execution.
To specify the polling interval, set up the trigger through the Logic App Designer.
 
  [!INCLUDE [logic-apps-polling-trigger-non-standard-metering](../../includes/logic-apps-polling-trigger-non-standard-metering.md)]
 
-* **Webhook trigger** – This trigger waits for a client to send a request to a specific endpoint.
-Each request sent to the webhook endpoint counts as an action execution.
-For example, the Request and HTTP Webhook trigger are both webhook triggers.
+* **Webhook trigger**: This trigger waits for a client to send a request to a
+specific endpoint. Each request sent to the webhook endpoint counts as an action
+execution. For example, the Request and HTTP Webhook trigger are both webhook triggers.
 
-* **Recurrence trigger** – This trigger creates a logic app instance based
-on the recurrence interval that you set up in the trigger.
-For example, you can set up a recurrence trigger that runs every three days or on a more complex schedule.
+* **Recurrence trigger**: This trigger creates a logic app instance based
+on the recurrence interval that you set up in the trigger. For example,
+you can set up a Recurrence trigger that runs every three days or on a more complex schedule.
+
+
 
## Actions
 
-Logic Apps meters built-in actions as native actions. For example,
-built-in actions include calls over HTTP, calls from Azure Functions
-or API Management, and control flow steps such as loops and conditions
-- each with their own action type. Actions that call
-[connectors](https://docs.microsoft.com/connectors) have the "ApiConnection" type.
-These connectors are classified as standard or enterprise connectors,
-which are metered based on their respective [pricing][pricing].
-Enterprise connectors in *Preview* are charged as standard connectors.
+Azure Logic Apps meters "built-in" actions, such as HTTP, as native actions.
+For example, built-in actions include HTTP calls, calls from Azure Functions
+or API Management, and control flow steps such as conditions, loops, and
+switch statements. Each action has its own action type. For example,
+actions that call [connectors](https://docs.microsoft.com/connectors)
+have the "ApiConnection" type. These connectors are classified as
+Standard or Enterprise connectors, which are metered based on their
+respective [pricing](https://azure.microsoft.com/pricing/details/logic-apps).
+Enterprise connectors in *Preview* are charged as Standard connectors.
 
-Logic Apps meters all successfully and unsuccessfully run actions as action executions.
-Logic Apps doesn't meter these actions:
+Azure Logic Apps meters all successful and unsuccessful actions as executions.
+However, Logic Apps doesn't meter these actions: * Actions that get skipped due to unmet conditions * Actions that don't run because the logic app stopped before finishing -Disabled logic apps aren't charged while disabled -because they can't create new instances. +For actions that run inside loops, Azure Logic Apps counts each action +for each cycle in the loop. For example, suppose you have a "for each" +loop that processes a list. Logic Apps meters an action in that loop by +multiplying the number of list items with the number of actions in the loop, +and adds the action that starts the loop. So, the calculation for a 10-item +list is (10 * 1) + 1, which results in 11 action executions. -> [!NOTE] -> After you disable a logic app, any currently running instances -> might take some time before they completely stop. +## Disabled logic apps -For actions that run inside loops, Logic Apps counts each action per cycle in the loop. -For example, suppose you have a "for each" loop that processes a list. -Logic Apps meters an action in that loop by multiplying the number of list items -with the number of actions in the loop, and adds the action that starts the loop. -The calculation for a 10-item list is (10 * 1) + 1, which results in 11 action executions. +Disabled logic apps aren't charged because they +can't create new instances while they're disabled. +After you disable a logic app, any currently running +instances might take some time before they completely stop. -## Integration Account usage +## Integration accounts -Consumption-based usage applies to +Consumption pricing applies to [integration accounts](logic-apps-enterprise-integration-create-integration-account.md) where you can explore, develop, and test the -[B2B/EDI](logic-apps-enterprise-integration-b2b.md) and -[XML processing](logic-apps-enterprise-integration-xml.md) -features in Logic Apps at no additional cost. You can have -one integration account per region. Each integration account -can store up to specific [numbers of artifacts](../logic-apps/logic-apps-limits-and-config.md), +[B2B and EDI](logic-apps-enterprise-integration-b2b.md) +and [XML processing](logic-apps-enterprise-integration-xml.md) +features in Azure Logic Apps at no additional cost. +You can have one integration account in each Azure region. +Each integration account can store up to specific +[numbers of artifacts](../logic-apps/logic-apps-limits-and-config.md), which include trading partners, agreements, maps, schemas, assemblies, certificates, batch configurations, and so on. -Logic Apps also offers basic and standard integration accounts with supported Logic Apps SLA. -You can use basic integration accounts when you just want message handling or act as a small -business partner that has a trading partner relationship with a larger business entity. -Standard integration accounts support more complex B2B relationships and increase the -number of entities you can manage. For more information, see -[Azure pricing](https://azure.microsoft.com/pricing/details/logic-apps). +Azure Logic Apps also offers Basic and Standard integration +accounts with supported Logic Apps SLA. Here are ways you +can choose whether to use a Basic or Standard integration account: -## Next steps +* Use Basic integration accounts when you just want message +handling or act as a small business partner that has a +trading partner relationship with a larger business entity. 
+
+* Use Standard integration accounts when you have more complex
+B2B relationships and want to increase the number of entities
+you can manage.
+
+For specific pricing information, see
+[Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps).
+
+
+
+## Data retention
+
+All inputs and outputs that are stored in your logic
+app's run history get billed based on a logic app's
+[run retention period](logic-apps-limits-and-config.md#run-duration-retention-limits).
+For specific pricing information, see
+[Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps).
 
-* [Learn more about Logic Apps][whatis]
-* [Create your first logic app][create]
+To help you monitor your logic app's storage consumption, you can:
 
-[pricing]: https://azure.microsoft.com/pricing/details/logic-apps/
-[whatis]: logic-apps-overview.md
-[create]: quickstart-create-first-logic-app-workflow.md
+* View the number of storage units in GB that your logic app uses monthly.
+* View the sizes for a specific action's inputs and outputs in your logic app's run history.
+
+
+
+### View logic app storage consumption
+
+1. In the Azure portal, find and open your logic app.
+
+1. From your logic app's menu, under **Monitoring**, select **Metrics**.
+
+1. In the right-hand pane, under **Chart Title**,
+from the **Metric** list, select
+**Billing Usage for Storage Consumption Executions**.
+
+   This metric gives you the number of storage consumption
+   units in GB per month that are getting billed.
+
+
+
+### View action input and output sizes
+
+1. In the Azure portal, find and open your logic app.
+
+1. On your logic app's menu, select **Overview**.
+
+1. In the right-hand pane, under **Runs history**,
+select the run that has the inputs and outputs you want to check.
+
+1. Under **Logic app run**, choose **Run Details**.
+
+1. In the **Logic app run details** pane, in the actions
+table, which lists each action's status and duration,
+select the action you want to view.
+
+1. In the **Logic app action** pane, the sizes for that
+action's inputs and outputs appear under **Inputs link**
+and **Outputs link**, respectively.
+
+## Next steps
+* [Learn more about Azure Logic Apps](logic-apps-overview.md)
+* [Create your first logic app](quickstart-create-first-logic-app-workflow.md)
\ No newline at end of file
diff --git a/articles/machine-learning/service/azure-machine-learning-release-notes.md b/articles/machine-learning/service/azure-machine-learning-release-notes.md
index 16518c4410b0c..845eef17c7f83 100644
--- a/articles/machine-learning/service/azure-machine-learning-release-notes.md
+++ b/articles/machine-learning/service/azure-machine-learning-release-notes.md
@@ -22,6 +22,9 @@ In this article, learn about the Azure Machine Learning service releases. For a
 ### Azure Machine Learning SDK for Python v1.0.21
++ **New features**
+  + The *azureml.core.Run.create_children* method allows low-latency creation of multiple child runs with a single call.
+ ## 2019-03-11 ### Azure Machine Learning SDK for Python v1.0.18 diff --git a/articles/marketplace/cloud-partner-portal-orig/cloud-partner-portal-lead-management-instructions-dynamics.md b/articles/marketplace/cloud-partner-portal-orig/cloud-partner-portal-lead-management-instructions-dynamics.md index 62a5719d01944..472767b7f4519 100644 --- a/articles/marketplace/cloud-partner-portal-orig/cloud-partner-portal-lead-management-instructions-dynamics.md +++ b/articles/marketplace/cloud-partner-portal-orig/cloud-partner-portal-lead-management-instructions-dynamics.md @@ -60,6 +60,7 @@ Use the following steps to configure Azure Active Directory for Dynamics CRM. 1. Sign in to [Azure portal](https://portal.azure.com/) and then select the Azure Active Directory service. 2. Select **Properties** and then copy the **Directory Id**. This is your tenant account identification that you need use in the Cloud Partner Portal. + ![Get Directory ID](./media/cloud-partner-portal-lead-management-instructions-dynamics/directoryid.png) 3. Select **App registrations**, and then select **New application registration**. @@ -74,6 +75,7 @@ Use the following steps to configure Azure Active Directory for Dynamics CRM. 11. On the Keys menu, select **Copy the key value.** Save a copy of this value because you'll need it for the Cloud Partner Portal. ![Dynamics get registered key](./media/cloud-partner-portal-lead-management-instructions-dynamics/registerkeys.png) + 12. Select **Required permissions** and then select **Add**. 13. Select **Dynamics CRM Online** as the new API, and check the permission for *Access CRM Online as organization users*. @@ -87,19 +89,21 @@ Use the following steps to configure Azure Active Directory for Dynamics CRM. ![Add new application user](./media/cloud-partner-portal-lead-management-instructions-dynamics/applicationuser.PNG) 16. In **New User**, provide the name and email that you want to use with this connection. Paste in the **Application Id** for the app you created in the Azure portal. + ![Configure new user](./media/cloud-partner-portal-lead-management-instructions-dynamics/leadgencreateuser.PNG) 17. Go to "Security settings" in this article to finish configuring the connection for this user. ### Office 365 -If you don't want to use Azure Active Directory, you can register a new user on the Office 365 Admin portal. You'll be required to update your username/password every 90 days to continue getting leads. +If you don't want to use Azure Active Directory, you can register a new user on the *Microsoft 365 admin center*. You'll be required to update your username/password every 90 days to continue getting leads. Use the following steps to configure Office 365 for Dynamics CRM. -1. Sign in to the [Microsoft Office 365 Admin Portal](https://go.microsoft.com/fwlink/?LinkId=225975). +1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com). + +2. Select the **Admin** tile. -2. Select the **Admin** tile ![Office Online Admin](./media/cloud-partner-portal-lead-management-instructions-dynamics/crmonline3.png) 3. Select **Add a user**. @@ -111,6 +115,7 @@ Use the following steps to configure Office 365 for Dynamics CRM. - Provide a password and uncheck the "Make this user change their password when they first sign in" option. - Select "User (no administrator access)" as the role for the user. - Select the product license shown in the next screen capture. You'll be charged for the license you choose. The solution will also work with Dynamics CRM Online Basic license. 
+ ![Configure user permissions and license](./media/cloud-partner-portal-lead-management-instructions-dynamics/crmonline5.png) ## Security settings @@ -123,6 +128,7 @@ The final step is to enable the User you created to write the leads. ![Security settings](./media/cloud-partner-portal-lead-management-instructions-dynamics/crmonline6.png) 3. Select the user that you created in **User permissions**, and then select **Manage User Roles**. Check **Microsoft Marketplace Lead Writer** to assign the role. + ![Assign user role](./media/cloud-partner-portal-lead-management-instructions-dynamics/crmonline7.png)\ >[!NOTE] @@ -130,6 +136,7 @@ The final step is to enable the User you created to write the leads. 4. In Security, select **Security Roles** and find the role for Microsoft Marketplace Lead Writer. + ![Configure security lead writer](./media/cloud-partner-portal-lead-management-instructions-dynamics/crmonline10.jpg)\ 5. Select the **Core Records** tab. Enable Create/Read/Write for the User Entity UI. diff --git a/articles/media-services/latest/dynamic-packaging-overview.md b/articles/media-services/latest/dynamic-packaging-overview.md index 9d21abc44488b..2a2970f3ba269 100644 --- a/articles/media-services/latest/dynamic-packaging-overview.md +++ b/articles/media-services/latest/dynamic-packaging-overview.md @@ -26,20 +26,7 @@ To take advantage of **Dynamic Packaging**, you need to have an **Asset** with a As a result, you only need to store and pay for the files in single storage format and Media Services service will build and serve the appropriate response based on requests from a client. -In Media Services, Dynamic Packaging is used whether you are streaming live or on-demand. The following diagram shows the on-demand streaming with dynamic packaging workflow. - -![Dynamic Packaging](./media/dynamic-packaging-overview/media-services-dynamic-packaging.svg) - -## Delivery protocols - -|Protocol|Example| -|---|---| -|HLS V4 |`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-aapl)`| -|HLS V3 |`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-aapl-v3)`| -|HLS CMAF| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-cmaf)`| -|MPEG DASH CSF| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-csf)` | -|MPEG DASH CMAF|`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-cmaf)` | -|Smooth Streaming| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest`| +In Media Services, Dynamic Packaging is used whether you are streaming live or on-demand. ## Common on-demand workflow @@ -50,6 +37,10 @@ The following is a common Media Services streaming workflow where Dynamic Packag 3. Publish the asset that contains the adaptive bitrate MP4 set. You publish by creating a **Streaming Locator**. 4. Build URLs that target different formats (HLS, Dash, and Smooth Streaming). The **Streaming Endpoint** would take care of serving the correct manifest and requests for all these different formats. +The following diagram shows the on-demand streaming with dynamic packaging workflow. 
+ +![Dynamic Packaging](./media/dynamic-packaging-overview/media-services-dynamic-packaging.svg) + ### Encode to adaptive bitrate MP4s For information about [how to encode a video with Media Services](encoding-concept.md), see the following examples: @@ -80,13 +71,16 @@ The following diagram shows the live streaming with dynamic packaging workflow. ![pass-through](./media/live-streaming/pass-through.svg) -## Dynamic Encryption - -**Dynamic Encryption** enables you to dynamically encrypt your live or on-demand content with AES-128 or any of the three major digital rights management (DRM) systems: Microsoft PlayReady, Google Widevine, and Apple FairPlay. Media Services also provides a service for delivering AES keys and DRM (PlayReady, Widevine, and FairPlay) licenses to authorized clients. For more information, see [Dynamic Encryption](content-protection-overview.md). - -## Dynamic Manifest +## Delivery protocols -Dynamic filtering is used to control the number of tracks, formats, bitrates, and presentation time windows that are sent out to the players. For more information, see [filters and dynamic manifests](filters-dynamic-manifest-overview.md). +|Protocol|Example| +|---|---| +|HLS V4 |`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-aapl)`| +|HLS V3 |`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-aapl-v3)`| +|HLS CMAF| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-cmaf)`| +|MPEG DASH CSF| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-csf)` | +|MPEG DASH CMAF|`https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-cmaf)` | +|Smooth Streaming| `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest`| ## Video codecs supported by dynamic packaging @@ -99,6 +93,10 @@ Dynamic Packaging supports MP4 files, which contain audio encoded with [AAC](htt > [!NOTE] > Dynamic Packaging does not support files that contain [Dolby Digital](https://en.wikipedia.org/wiki/Dolby_Digital) (AC3) audio (it is a legacy codec). +## Dynamic Encryption + +**Dynamic Encryption** enables you to dynamically encrypt your live or on-demand content with AES-128 or any of the three major digital rights management (DRM) systems: Microsoft PlayReady, Google Widevine, and Apple FairPlay. Media Services also provides a service for delivering AES keys and DRM (PlayReady, Widevine, and FairPlay) licenses to authorized clients. For more information, see [Dynamic Encryption](content-protection-overview.md). + ## Manifests Media Services supports HLS, MPEG DASH, Smooth Streaming protocols. As part of **Dynamic Packaging**, the streaming client manifests (HLS Master Playlist, DASH Media Presentation Description (MPD), and Smooth Streaming) are dynamically generated based on the format selector in the URL. See the delivery protocols in [this section](#delivery-protocols). @@ -189,6 +187,10 @@ Here is an example of a Smooth Streaming manifest: ``` +## Dynamic Manifest + +Dynamic filtering is used to control the number of tracks, formats, bitrates, and presentation time windows that are sent out to the players. For more information, see [filters and dynamic manifests](filters-dynamic-manifest-overview.md). 
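+As a rough illustration of how a filter combines with a format selector in the streaming URL, the following example reuses the sample endpoint and locator from the **Delivery protocols** table above and assumes a filter named `myfilter` already exists (the filter name is hypothetical, for illustration only):
+
+```http
+https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-aapl,filter=myfilter)
+```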
+ > [!NOTE] > Currently, you cannot use the Azure portal to manage v3 resources. Use the [REST API](https://aka.ms/ams-v3-rest-ref), [CLI](https://aka.ms/ams-v3-cli-ref), or one of the supported [SDKs](developers-guide.md). diff --git a/articles/media-services/video-indexer/video-indexer-use-apis.md b/articles/media-services/video-indexer/video-indexer-use-apis.md index a1c1ba9f26a79..6519528ca1b21 100644 --- a/articles/media-services/video-indexer/video-indexer-use-apis.md +++ b/articles/media-services/video-indexer/video-indexer-use-apis.md @@ -8,7 +8,7 @@ manager: femila ms.service: media-services ms.topic: article -ms.date: 02/10/2019 +ms.date: 03/25/2019 ms.author: juliako --- @@ -70,19 +70,6 @@ Access tokens expire after 1 hour. Make sure your access token is valid before u You are ready to start integrating with the API. Find [the detailed description of each Video Indexer REST API](https://api-portal.videoindexer.ai/). -## Location - -All operation APIs require a Location parameter, which indicates the region to which the call should be routed and in which the account was created. - -The values described in the following table apply. The **Param value** is the value you pass when using the API. - -|**Name**|**Param value**|**Description**| -|---|---|---| -|Trial|trail|Used for trial accounts.| -|West US|westus2|Used for the Azure West US 2 region.| -|North Europe |northeurope|Used for the Azure North Europe region.| -|East Asia|eastasia|Used for the Azure East Asia region.| - ## Account ID The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways: @@ -220,6 +207,6 @@ Debug.WriteLine(playerWidgetLink); ## Next steps -[Examine details of the output JSON](video-indexer-output-json-v2.md). - -[Video Indexer overview](video-indexer-overview.md) +- [Examine details of the output JSON](video-indexer-output-json-v2.md). 
+- [Video Indexer overview](video-indexer-overview.md) +- [Regions](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) diff --git a/articles/media/index/azure_data_explorer.svg b/articles/media/index/azure_data_explorer.svg new file mode 100644 index 0000000000000..f837e9fe74cc9 --- /dev/null +++ b/articles/media/index/azure_data_explorer.svg @@ -0,0 +1 @@ +1- Icon - Query 60x60 Color \ No newline at end of file diff --git a/articles/notification-hubs/TOC.yml b/articles/notification-hubs/TOC.yml index d4bf585309dcc..79d96ca999fa0 100644 --- a/articles/notification-hubs/TOC.yml +++ b/articles/notification-hubs/TOC.yml @@ -126,9 +126,7 @@ - name: PowerShell href: /powershell/module/az.notificationhubs - name: REST API - href: https://docs.microsoft.com/previous-versions/azure/reference/dn223264(v%3dazure.100) - - name: Management API - href: /rest/api/notificationhubs + href: /rest/api/notificationhubs/ - name: Resource Manager template href: /azure/templates/microsoft.notificationhubs/allversions - name: Resources diff --git a/articles/notification-hubs/index.yml b/articles/notification-hubs/index.yml index ad970e85b95dd..65f4ec117cf9d 100644 --- a/articles/notification-hubs/index.yml +++ b/articles/notification-hubs/index.yml @@ -56,5 +56,4 @@ sections: items: - html: Azure PowerShell - html: .NET SDK - - html: REST API - - html: Management API + - html: REST API diff --git a/articles/search/TOC.yml b/articles/search/TOC.yml index 05ec5ad26f7b0..e4ce8486aad73 100644 --- a/articles/search/TOC.yml +++ b/articles/search/TOC.yml @@ -17,10 +17,6 @@ href: search-get-started-portal.md - name: Enrich with AI (cognitive search) href: cognitive-search-quickstart-blob.md - - name: Postman - href: search-fiddler.md - - name: PowerShell - href: search-create-index-rest-api.md - name: C# items: - name: 1 - Create an index @@ -29,6 +25,10 @@ href: search-import-data-dotnet.md - name: 3 - Search an index href: search-query-dotnet.md + - name: Postman + href: search-fiddler.md + - name: PowerShell + href: search-create-index-rest-api.md - name: Tutorials items: - name: Index Azure SQL Database diff --git a/articles/search/query-lucene-syntax.md b/articles/search/query-lucene-syntax.md index adbc2541109f9..4b06c4df04cc2 100644 --- a/articles/search/query-lucene-syntax.md +++ b/articles/search/query-lucene-syntax.md @@ -4,7 +4,7 @@ description: Reference for the full Lucene syntax, as used with Azure Search. services: search ms.service: search ms.topic: conceptual -ms.date: 01/31/2019 +ms.date: 03/25/2019 author: "brjohnstmsft" ms.author: "brjohnst" @@ -30,7 +30,7 @@ Set the `queryType` search parameter to specify which parser to use. Valid value -## Example showing full syntax +### Example showing full syntax The following example finds documents in the index using the Lucene query syntax, evident in the `queryType=full` parameter. This query returns hotels where the category field contains the term "budget" and all searchable fields containing the phrase "recently renovated". Documents containing the phrase "recently renovated" are ranked higher as a result of the term boost value (3). @@ -56,50 +56,6 @@ For additional examples, see [Lucene query syntax examples for building queries > [!NOTE] > Azure Search also supports [Simple Query Syntax](query-simple-syntax.md), a simple and robust query language that can be used for straightforward keyword search. 
- -## Field-scoped queries - You can specify a `fieldname:searchterm` construction to define a fielded query operation, where the field is a single word, and the search term is also a single word or a phrase, optionally with boolean operators. Some examples include the following: - -- genre:jazz NOT history - -- artists:("Miles Davis" "John Coltrane") - - Be sure to put multiple strings within quotation marks if you want both strings to be evaluated as a single entity, in this case searching for two distinct artists in the `artists` field. - - The field specified in `fieldname:searchterm` must be a `searchable` field. See [Create Index](https://docs.microsoft.com/rest/api/searchservice/create-index) for details on how index attributes are used in field definitions. - -## Fuzzy search - A fuzzy search finds matches in terms that have a similar construction. Per [Lucene documentation](https://lucene.apache.org/core/4_10_2/queryparser/org/apache/lucene/queryparser/classic/package-summary.html), fuzzy searches are based on [Damerau-Levenshtein Distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance). - - To do a fuzzy search, use the tilde "~" symbol at the end of a single word with an optional parameter, a number between 0 and 2 (default), that specifies the edit distance. For example, "blue~" or "blue~1" would return "blue", "blues", and "glue". - - Fuzzy search can only be applied to terms, not phrases. Fuzzy searches can expand a term up to the maximum of 50 terms that meet the distance criteria. - -## Proximity search - Proximity searches are used to find terms that are near each other in a document. Insert a tilde "~" symbol at the end of a phrase followed by the number of words that create the proximity boundary. For example, `"hotel airport"~5` will find the terms "hotel" and "airport" within 5 words of each other in a document. - - -## Term boosting - Term boosting refers to ranking a document higher if it contains the boosted term, relative to documents that do not contain the term. This differs from scoring profiles in that scoring profiles boost certain fields, rather than specific terms. - -The following example helps illustrate the differences. Suppose that there's a scoring profile that boosts matches in a certain field, say *genre* in the [musicstoreindex example](index-add-scoring-profiles.md#bkmk_ex). Term boosting could be used to further boost certain search terms higher than others. For example, `rock^2 electronic` will boost documents that contain the search terms in the genre field higher than other searchable fields in the index. Further, documents that contain the search term *rock* will be ranked higher than the other search term *electronic* as a result of the term boost value (2). - - To boost a term use the caret, "^", symbol with a boost factor (a number) at the end of the term you are searching. You can also boost phrases. The higher the boost factor, the more relevant the term will be relative to other search terms. By default, the boost factor is 1. Although the boost factor must be positive, it can be less than 1 (for example, 0.20). - -## Regular expression search - A regular expression search finds a match based on the contents between forward slashes "/", as documented in the [RegExp class](https://lucene.apache.org/core/4_10_2/core/org/apache/lucene/util/automaton/RegExp.html). - - For example, to find documents containing "motel" or "hotel", specify `/[mh]otel/`. Regular expression searches are matched against single words. 
- -## Wildcard search - You can use generally recognized syntax for multiple (*) or single (?) character wildcard searches. Note the Lucene query parser supports the use of these symbols with a single term, and not a phrase. - - For example, to find documents containing the words with the prefix "note", such as "notebook" or "notepad", specify "note*". - -> [!NOTE] -> You cannot use a * or ? symbol as the first character of a search. -> No text analysis is performed on wildcard search queries. At query time, wildcard query terms are compared against analyzed terms in the search index and expanded. - ## Syntax fundamentals The following syntax fundamentals apply to all queries that use the Lucene syntax. @@ -134,19 +90,19 @@ Field grouping is similar but scopes the grouping to a single field. For example ### SearchMode parameter considerations The impact of `searchMode` on queries, as described in [Simple query syntax in Azure Search](query-simple-syntax.md), applies equally to the Lucene query syntax. Namely, `searchMode` in conjunction with NOT operators can result in query outcomes that might seem unusual if you aren't clear on the implications of how you set the parameter. If you retain the default, `searchMode=any`, and use a NOT operator, the operation is computed as an OR action, such that "New York" NOT "Seattle" returns all cities that are not Seattle. -## Boolean operators +## Boolean operators (AND, OR, NOT) Always specify text boolean operators (AND, OR, NOT) in all caps. -#### OR operator `OR` or `||` +### OR operator `OR` or `||` The OR operator is a vertical bar or pipe character. For example: `wifi || luxury` will search for documents containing either "wifi" or "luxury" or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi || luxuery`. -#### AND operator `AND`, `&&` or `+` +### AND operator `AND`, `&&` or `+` The AND operator is an ampersand or a plus sign. For example: `wifi && luxury` will search for documents containing both "wifi" and "luxury". The plus character (+) is used for required terms. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document. -#### NOT operator `NOT`, `!` or `-` +### NOT operator `NOT`, `!` or `-` The NOT operator is an exclamation point or the minus sign. For example: `wifi !luxury` will search for documents that have the "wifi" term and/or do not have "luxury". The `searchMode` option controls whether a term with the NOT operator is ANDed or ORed with the other terms in the query in the absence of a + or || operator. Recall that `searchMode` can be set to either `any`(default) or `all`. @@ -160,6 +116,50 @@ Using `searchMode=all` increases the precision of queries by including fewer res ## Scoring wildcard and regex queries Azure Search uses frequency-based scoring ([TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)) for text queries. However, for wildcard and regex queries where scope of terms can potentially be broad, the frequency factor is ignored to prevent the ranking from biasing towards matches from rarer terms. All matches are treated equally for wildcard and regex searches. +## Field-scoped queries + You can specify a `fieldname:searchterm` construction to define a fielded query operation, where the field is a single word, and the search term is also a single word or a phrase, optionally with boolean operators. 
Some examples include the following: + +- genre:jazz NOT history + +- artists:("Miles Davis" "John Coltrane") + + Be sure to put multiple strings within quotation marks if you want both strings to be evaluated as a single entity, in this case searching for two distinct artists in the `artists` field. + + The field specified in `fieldname:searchterm` must be a `searchable` field. See [Create Index](https://docs.microsoft.com/rest/api/searchservice/create-index) for details on how index attributes are used in field definitions. + +## Fuzzy search + A fuzzy search finds matches in terms that have a similar construction. Per [Lucene documentation](https://lucene.apache.org/core/4_10_2/queryparser/org/apache/lucene/queryparser/classic/package-summary.html), fuzzy searches are based on [Damerau-Levenshtein Distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance). Fuzzy searches can expand a term up to the maximum of 50 terms that meet the distance criteria. + + To do a fuzzy search, use the tilde "~" symbol at the end of a single word with an optional parameter, a number between 0 and 2 (default), that specifies the edit distance. For example, "blue~" or "blue~1" would return "blue", "blues", and "glue". + + Fuzzy search can only be applied to terms, not phrases, but you can append the tilde to each term individually in a multi-part name or phrase. For example, "Unviersty~ of~ "Wshington~" would match on "University of Washington". + + +## Proximity search + Proximity searches are used to find terms that are near each other in a document. Insert a tilde "~" symbol at the end of a phrase followed by the number of words that create the proximity boundary. For example, `"hotel airport"~5` will find the terms "hotel" and "airport" within 5 words of each other in a document. + + +## Term boosting + Term boosting refers to ranking a document higher if it contains the boosted term, relative to documents that do not contain the term. This differs from scoring profiles in that scoring profiles boost certain fields, rather than specific terms. + +The following example helps illustrate the differences. Suppose that there's a scoring profile that boosts matches in a certain field, say *genre* in the [musicstoreindex example](index-add-scoring-profiles.md#bkmk_ex). Term boosting could be used to further boost certain search terms higher than others. For example, `rock^2 electronic` will boost documents that contain the search terms in the genre field higher than other searchable fields in the index. Further, documents that contain the search term *rock* will be ranked higher than the other search term *electronic* as a result of the term boost value (2). + + To boost a term use the caret, "^", symbol with a boost factor (a number) at the end of the term you are searching. You can also boost phrases. The higher the boost factor, the more relevant the term will be relative to other search terms. By default, the boost factor is 1. Although the boost factor must be positive, it can be less than 1 (for example, 0.20). + +## Regular expression search + A regular expression search finds a match based on the contents between forward slashes "/", as documented in the [RegExp class](https://lucene.apache.org/core/4_10_2/core/org/apache/lucene/util/automaton/RegExp.html). + + For example, to find documents containing "motel" or "hotel", specify `/[mh]otel/`. Regular expression searches are matched against single words. 
+ +## Wildcard search + You can use generally recognized syntax for multiple (*) or single (?) character wildcard searches. Note the Lucene query parser supports the use of these symbols with a single term, and not a phrase. + + For example, to find documents containing the words with the prefix "note", such as "notebook" or "notepad", specify "note*". + +> [!NOTE] +> You cannot use a * or ? symbol as the first character of a search. +> No text analysis is performed on wildcard search queries. At query time, wildcard query terms are compared against analyzed terms in the search index and expanded. + ## See also + [Search Documents](https://docs.microsoft.com/rest/api/searchservice/Search-Documents) diff --git a/articles/search/query-simple-syntax.md b/articles/search/query-simple-syntax.md index e3d651559a178..da4b8cd76efd8 100644 --- a/articles/search/query-simple-syntax.md +++ b/articles/search/query-simple-syntax.md @@ -4,7 +4,7 @@ description: Reference for the simple query syntax used for full text search que services: search ms.service: search ms.topic: conceptual -ms.date: 01/31/2019 +ms.date: 03/25/2019 author: "brjohnstmsft" ms.author: "brjohnst" ms.manager: cgronlun @@ -24,7 +24,7 @@ translation.priority.mt: Azure Search implements two Lucene-based query languages: [Simple Query Parser](https://lucene.apache.org/core/4_7_0/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) and the [Lucene Query Parser](https://lucene.apache.org/core/4_10_2/queryparser/org/apache/lucene/queryparser/classic/package-summary.html). In Azure Search, the simple query syntax excludes the fuzzy/slop options. > [!NOTE] -> Azure Search provides an alternative [Lucene Query Syntax](query-lucene-syntax.md) for more complex queries. To learn more about query parsing architecture and benefits of each syntax, see [How full text search works in Azure Search](https://docs.microsoft.com/azure/search/search-lucene-query-architecture). +> Azure Search provides an alternative [Lucene Query Syntax](query-lucene-syntax.md) for more complex queries. To learn more about query parsing architecture and benefits of each syntax, see [How full text search works in Azure Search](search-lucene-query-architecture.md). ## How to invoke simple parsing @@ -38,38 +38,38 @@ As straightforward as this sounds, there is one aspect of query execution in Azu Typically, you're more likely to see these behaviors in user interaction patterns for applications that search over content, where users are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. For more information, see [NOT operator](#not-operator). -## Operators in simple search +## Boolean operators (AND, OR, NOT) You can embed operators in a query string to build a rich set of criteria against which matching documents are found. -## AND operator `+` +### AND operator `+` The AND operator is a plus sign. For example, `wifi+luxury` will search for documents containing both `wifi` and `luxury`. -## OR operator `|` +### OR operator `|` The OR operator is a vertical bar or pipe character. For example, `wifi | luxury` will search for documents containing either `wifi` or `luxury` or both. -## NOT operator `-` +### NOT operator `-` The NOT operator is a minus sign. For example, `wifi –luxury` will search for documents that have the `wifi` term and/or do not have `luxury` (and/or is controlled by `searchMode`). 
> [!NOTE] > The `searchMode` option controls whether a term with the NOT operator is ANDed or ORed with the other terms in the query in the absence of a `+` or `|` operator. Recall that `searchMode` can be set to either `any` (default) or `all`. If you use `any`, it will increase the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that do not contain the term `luxury`. If you use `all`, it will increase the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and do not contain the term "luxury". This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if You want to optimize searches for precision instead of recall, *and* Your users frequently use the `-` operator in searches. -## Suffix operator `*` +## Suffix operator -The suffix operator is an asterisk. For example, `lux*` will search for documents that have a term that starts with `lux`, ignoring case. +The suffix operator is an asterisk `*`. For example, `lux*` will search for documents that have a term that starts with `lux`, ignoring case. -## Phrase search operator `" "` +## Phrase search operator -The phrase operator encloses a phrase in quotation marks. For example, while `Roach Motel` (without quotes) would search for documents containing `Roach` and/or `Motel` anywhere in any order, `"Roach Motel"` (with quotes) will only match documents that contain that whole phrase together and in that order (text analysis still applies). +The phrase operator encloses a phrase in quotation marks `" "`. For example, while `Roach Motel` (without quotes) would search for documents containing `Roach` and/or `Motel` anywhere in any order, `"Roach Motel"` (with quotes) will only match documents that contain that whole phrase together and in that order (text analysis still applies). -## Precedence operator `( )` +## Precedence operator -The precedence operator encloses the string in parentheses. For example, `motel+(wifi | luxury)` will search for documents containing the motel term and either `wifi` or `luxury` (or both).| +The precedence operator encloses the string in parentheses `( )`. For example, `motel+(wifi | luxury)` will search for documents containing the motel term and either `wifi` or `luxury` (or both). ## Escaping search operators diff --git a/articles/search/search-query-lucene-examples.md b/articles/search/search-query-lucene-examples.md index aefbbad08e8cc..08ac3775ec5d5 100644 --- a/articles/search/search-query-lucene-examples.md +++ b/articles/search/search-query-lucene-examples.md @@ -7,18 +7,19 @@ tags: Lucene query analyzer syntax services: search ms.service: search ms.topic: conceptual -ms.date: 08/09/2018 +ms.date: 03/25/2019 ms.author: heidist ms.custom: seodec2018 --- -# Lucene syntax query examples for building advanced queries in Azure Search -When constructing queries for Azure Search, you can replace the default [simple query parser](https://docs.microsoft.com/rest/api/searchservice/simple-query-syntax-in-azure-search) with the more expansive [Lucene Query Parser in Azure Search](https://docs.microsoft.com/rest/api/searchservice/lucene-query-syntax-in-azure-search) to formulate specialized and advanced query definitions. 
+# Query examples using "full" Lucene search syntax (advanced queries in Azure Search) -The Lucene Query Parser supports complex query constructs, such as field-scoped queries, fuzzy and prefix wildcard search, proximity search, term boosting, and regular expression search. The additional power comes with additional processing requirements so you should expect a slightly longer execution time. In this article, you can step through examples demonstrating query operations available when using the full syntax. +When constructing queries for Azure Search, you can replace the default [simple query parser](query-simple-syntax.md) with the more expansive [Lucene Query Parser in Azure Search](query-lucene-syntax.md) to formulate specialized and advanced query definitions. + +The Lucene parser supports complex query constructs, such as field-scoped queries, fuzzy and prefix wildcard search, proximity search, term boosting, and regular expression search. The additional power comes with additional processing requirements so you should expect a slightly longer execution time. In this article, you can step through examples demonstrating query operations available when using the full syntax. > [!Note] -> Many of the specialized query constructions enabled through the full Lucene query syntax are not [text-analyzed](https://docs.microsoft.com/azure/search/search-lucene-query-architecture#stage-2-lexical-analysis), which can be surprising if you expect stemming or lemmatization. Lexical analysis is only performed on complete terms (a term query or phrase query). Query types with incomplete terms (prefix query, wildcard query, regex query, fuzzy query) are added directly to the query tree, bypassing the analysis stage. The only transformation performed on incomplete query terms is lowercasing. +> Many of the specialized query constructions enabled through the full Lucene query syntax are not [text-analyzed](search-lucene-query-architecture.md#stage-2-lexical-analysis), which can be surprising if you expect stemming or lemmatization. Lexical analysis is only performed on complete terms (a term query or phrase query). Query types with incomplete terms (prefix query, wildcard query, regex query, fuzzy query) are added directly to the query tree, bypassing the analysis stage. The only transformation performed on incomplete query terms is lowercasing. > ## Formulate requests in Postman @@ -53,13 +54,15 @@ URL composition has the following elements: ## Send your first query -As a verification step, paste the following request into GET and click **Send**. Results are returned as verbose JSON documents. You can copy-paste this URL in first example below. +As a verification step, paste the following request into GET and click **Send**. Results are returned as verbose JSON documents. Entire documents are returned, which allows you to see all fields and all values. + +Paste this URL into a REST client as a validation step and to view document structure. ```http https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&search=* ``` -The query string, **`search=*`**, is an unspecified search equivalent to null or empty search. It's not especially useful, but it is the simplest search you can do. +The query string, **`search=*`**, is an unspecified search equivalent to null or empty search. It's the simplest search you can do. Optionally, you can add **`$count=true`** to the URL to return a count of the documents matching the search criteria. 
On an empty search string, this is all the documents in the index (about 2800 in the case of NYC Jobs). @@ -75,12 +78,26 @@ All of the examples in this article specify the **queryType=full** search parame ## Example 1: Field-scoped query -This first example is not parser-specific, but we lead with it to introduce the first fundamental query concept: containment. This example scopes query execution and the response to just a few specific fields. Knowing how to structure a readable JSON response is important when your tool is Postman or Search explorer. +This first example is not Lucene-specific, but we lead with it to introduce the first fundamental query concept: containment. This example scopes query execution and the response to just a few specific fields. Knowing how to structure a readable JSON response is important when your tool is Postman or Search explorer. For brevity, the query targets only the *business_title* field and specifies only business titles are returned. The syntax is **searchFields** to restrict query execution to just the business_title field, and **select** to specify which fields are included in the response. +### Partial query string + +```http +&search=*&searchFields=business_title&$select=business_title +``` + +Here is the same query with multiple fields in a comma-delimited list. + +```http +search=*&searchFields=business_title, posting_type&$select=business_title, posting_type +``` + +### Full URL + ```http -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&search=* +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&search=*&searchFields=business_title&$select=business_title ``` Response for this query should look similar to the following screenshot. @@ -91,10 +108,24 @@ You might have noticed the search score in the response. Uniform scores of 1 occ ## Example 2: Intra-field filtering -Full Lucene syntax supports expressions within a field. This query searches for business titles with the term senior in them, but not junior: +Full Lucene syntax supports expressions within a field. This example searches for business titles with the term senior in them, but not junior. + +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=business_title:senior+NOT+junior +``` + +Here is the same query with multiple fields. + +```http +searchFields=business_title, posting_type&$select=business_title, posting_type&search=business_title:senior+NOT+junior AND posting_type:external +``` + +### Full URL ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:senior+NOT+junior +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:senior+NOT+junior ``` ![Postman sample response](media/search-query-lucene-examples/intrafieldfilter.png) @@ -113,49 +144,73 @@ The field specified in **fieldname:searchterm** must be a searchable field. See Full Lucene syntax also supports fuzzy search, matching on terms that have a similar construction. To do a fuzzy search, append the tilde `~` symbol at the end of a single word with an optional parameter, a value between 0 and 2, that specifies the edit distance. 
For example, `blue~` or `blue~1` would return blue, blues, and glue. +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=business_title:asosiate~ +``` + +Phrases aren't supported directly but you can specify a fuzzy match on component parts of a phrase. + +```http +searchFields=business_title&$select=business_title&search=business_title:asosiate~ AND comm~ +``` + + +### Full URL + This query searches for jobs with the term "associate" (deliberately misspelled): ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:asosiate~ +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:asosiate~ ``` ![Fuzzy search response](media/search-query-lucene-examples/fuzzysearch.png) -Per [Lucene documentation](https://lucene.apache.org/core/4_10_2/queryparser/org/apache/lucene/queryparser/classic/package-summary.html), fuzzy searches are based on [Damerau-Levenshtein Distance](https://en.wikipedia.org/wiki/Damerau%e2%80%93Levenshtein_distance). > [!Note] -> Fuzzy queries are not [analyzed](https://docs.microsoft.com/azure/search/search-lucene-query-architecture#stage-2-lexical-analysis). Query types with incomplete terms (prefix query, wildcard query, regex query, fuzzy query) are added directly to the query tree, bypassing the analysis stage. The only transformation performed on incomplete query terms is lowercasing. +> Fuzzy queries are not [analyzed](search-lucene-query-architecture.md#stage-2-lexical-analysis). Query types with incomplete terms (prefix query, wildcard query, regex query, fuzzy query) are added directly to the query tree, bypassing the analysis stage. The only transformation performed on incomplete query terms is lowercasing. > ## Example 4: Proximity search Proximity searches are used to find terms that are near each other in a document. Insert a tilde "~" symbol at the end of a phrase followed by the number of words that create the proximity boundary. For example, "hotel airport"~5 will find the terms hotel and airport within 5 words of each other in a document. +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=business_title:%22senior%20analyst%22~1 +``` + +### Full URL + In this query, for jobs with the term "senior analyst" where it is separated by no more than one word: ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:%22senior%20analyst%22~1 +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:%22senior%20analyst%22~1 ``` ![Proximity query](media/search-query-lucene-examples/proximity-before.png) Try it again removing the words between the term "senior analyst". Notice that 8 documents are returned for this query as opposed to 10 for the previous query. 
```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:%22senior%20analyst%22~0 +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:%22senior%20analyst%22~0 ``` ## Example 5: Term boosting Term boosting refers to ranking a document higher if it contains the boosted term, relative to documents that do not contain the term. To boost a term, use the caret, "^", symbol with a boost factor (a number) at the end of the term you are searching. +### Full URLs + In this "before" query, search for jobs with the term *computer analyst* and notice there are no results with both words *computer* and *analyst*, yet *computer* jobs are at the top of the results. ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:computer%20analyst +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:computer%20analyst ``` ![Term boosting before](media/search-query-lucene-examples/termboostingbefore.png) In the "after" query, repeat the search, this time boosting results with the term *analyst* over the term *computer* if both words do not exist. ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:computer%20analyst%5e2 +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:computer%20analyst%5e2 ``` A more human readable version of the above query is `search=business_title:computer analyst^2`. For a workable query, `^2` is encoded as `%5E2`, which is harder to see. @@ -172,10 +227,18 @@ When setting the factor level, the higher the boost factor, the more relevant th A regular expression search finds a match based on the contents between forward slashes "/", as documented in the [RegExp class](https://lucene.apache.org/core/4_10_2/core/org/apache/lucene/util/automaton/RegExp.html). -In this query, search for jobs with either the term Senior or Junior: `search=business_title:/(Sen|Jun)ior/``. +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=business_title:/(Sen|Jun)ior/ +``` + +### Full URL + +In this query, search for jobs with either the term Senior or Junior: `search=business_title:/(Sen|Jun)ior/`. 
```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:/(Sen|Jun)ior/ +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:/(Sen|Jun)ior/ ``` ![Regex query](media/search-query-lucene-examples/regex.png) @@ -187,10 +250,18 @@ https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017- ## Example 7: Wildcard search You can use generally recognized syntax for multiple (\*) or single (?) character wildcard searches. Note the Lucene query parser supports the use of these symbols with a single term, and not a phrase. +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=business_title:prog* +``` + +### Full URL + In this query, search for jobs that contain the prefix 'prog' which would include business titles with the terms programming and programmer in it. You cannot use a * or ? symbol as the first character of a search. ```GET -https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&queryType=full&search=business_title:prog* +https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&queryType=full&$count=true&searchFields=business_title&$select=business_title&search=business_title:prog* ``` ![Wildcard query](media/search-query-lucene-examples/wildcard.png) diff --git a/articles/search/search-query-overview.md b/articles/search/search-query-overview.md index 953ff75d18195..23df9834e748b 100644 --- a/articles/search/search-query-overview.md +++ b/articles/search/search-query-overview.md @@ -7,7 +7,7 @@ ms.author: heidist services: search ms.service: search ms.topic: conceptual -ms.date: 02/14/2019 +ms.date: 03/25/2019 ms.custom: seodec2018 --- # How to compose a query in Azure Search diff --git a/articles/search/search-query-simple-examples.md b/articles/search/search-query-simple-examples.md index 8391480ca646c..8b8793c41190a 100644 --- a/articles/search/search-query-simple-examples.md +++ b/articles/search/search-query-simple-examples.md @@ -1,5 +1,5 @@ --- -title: Simple query examples - Azure Search +title: Query examples using the "simple" search syntax - Azure Search description: Simple query examples for full text search, filter search, geo search, faceted search, and other query strings used to query an Azure Search index. author: HeidiSteen manager: cgronlun @@ -7,12 +7,12 @@ tags: Simple query analyzer syntax services: search ms.service: search ms.topic: conceptual -ms.date: 08/09/2018 +ms.date: 03/25/2019 ms.author: heidist ms.custom: seodec2018 --- -# Simple syntax query examples for building queries in Azure Search +# Query examples using the "simple" search syntax in Azure Search [Simple query syntax](https://docs.microsoft.com/rest/api/searchservice/simple-query-syntax-in-azure-search) invokes the default query parser for executing full text search queries against an Azure Search index. The simple query analyzer is fast and handles common scenarios in Azure Search, including full text search, filtered and faceted search, and geo-search. In this article, step through examples demonstrating query operations available when using the simple syntax. 
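Before stepping through the examples, it can help to see how parser selection appears in a request. The following sketch uses the same public sandbox service and NYC Jobs index as the examples in this article, assuming that sandbox remains available: the first URL relies on the default simple parser, while the second opts into the full Lucene parser with `queryType=full` so that a field-scoped term such as `business_title:senior` is allowed.

```http
https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&search=senior%20analyst

https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&queryType=full&search=business_title:senior
```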
@@ -50,7 +50,9 @@ URL composition has the following elements: ## Send your first query -As a verification step, paste the following request into GET and click **Send**. Results are returned as verbose JSON documents. You can copy-paste this URL in first example below. +As a verification step, paste the following request into GET and click **Send**. Results are returned as verbose JSON documents. Entire documents are returned, which allows you to see all fields and all values. + +Paste this URL into a REST client as a validation step and to view document structure. ```http https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&search=* @@ -70,6 +72,20 @@ This first example is not parser-specific, but we lead with it to introduce the For brevity, the query targets only the *business_title* field and specifies only business titles are returned. The syntax is **searchFields** to restrict query execution to just the business_title field, and **select** to specify which fields are included in the response. +### Partial query string + +```http +searchFields=business_title&$select=business_title&search=* +``` + +Here is the same query with multiple fields in a comma-delimited list. + +```http +search=*&searchFields=business_title, posting_type&$select=business_title, posting_type +``` + +### Full URL + ```http https://azs-playground.search.windows.net/indexes/nycjobs/docs?api-version=2017-11-11&$count=true&searchFields=business_title&$select=business_title&search=* ``` diff --git a/articles/site-recovery/vmware-physical-azure-support-matrix.md b/articles/site-recovery/vmware-physical-azure-support-matrix.md index aa155b38f4336..f4d64c4c04d96 100644 --- a/articles/site-recovery/vmware-physical-azure-support-matrix.md +++ b/articles/site-recovery/vmware-physical-azure-support-matrix.md @@ -6,7 +6,7 @@ manager: carmonm ms.service: site-recovery services: site-recovery ms.topic: conceptual -ms.date: 03/20/2019 +ms.date: 03/26/2019 ms.author: raynew --- @@ -150,7 +150,7 @@ Multi-NIC | Yes Reserved IP address | Yes IPv4 | Yes Retain source IP address | Yes -Azure Virtual Network service endpoints
    (without Azure Storage firewalls) | Yes +Azure Virtual Network service endpoints
    | Yes Accelerated Networking | No ## Storage @@ -199,7 +199,7 @@ Block blobs | No Encryption at rest (Storage Service Encryption)| Yes Premium storage | Yes Import/export service | No -Azure Storage firewalls for virtual networks configured on target storage/cache storage account (used to store replication data) | No +Azure Storage firewalls for virtual networks configured on target storage/cache storage account (used to store replication data) | Yes General purpose v2 storage accounts (both hot and cool tiers) | No ## Azure compute diff --git a/articles/stream-analytics/media/stream-analytics-define-outputs/09-stream-analytics-custom-properties.png b/articles/stream-analytics/media/stream-analytics-define-outputs/09-stream-analytics-custom-properties.png new file mode 100644 index 0000000000000..8f1a3012afe2b Binary files /dev/null and b/articles/stream-analytics/media/stream-analytics-define-outputs/09-stream-analytics-custom-properties.png differ diff --git a/articles/stream-analytics/media/stream-analytics-define-outputs/10-stream-analytics-property-columns.png b/articles/stream-analytics/media/stream-analytics-define-outputs/10-stream-analytics-property-columns.png new file mode 100644 index 0000000000000..599ca5e4da1f4 Binary files /dev/null and b/articles/stream-analytics/media/stream-analytics-define-outputs/10-stream-analytics-property-columns.png differ diff --git a/articles/stream-analytics/stream-analytics-define-outputs.md b/articles/stream-analytics/stream-analytics-define-outputs.md index 7ea80bbdf60a3..45421a19b4d5f 100644 --- a/articles/stream-analytics/stream-analytics-define-outputs.md +++ b/articles/stream-analytics/stream-analytics-define-outputs.md @@ -122,6 +122,7 @@ There are a few parameters that are needed to configure Event Hub data streams a | Encoding | For CSV and JSON, UTF-8 is the only supported encoding format at this time. | | Delimiter | Only applicable for CSV serialization. Stream Analytics supports a number of common delimiters for serializing data in CSV format. Supported values are comma, semicolon, space, tab, and vertical bar. | | Format | Only applicable for JSON serialization. Line separated specifies that the output is formatted by having each JSON object separated by a new line. Array specifies that the output is formatted as an array of JSON objects. This array is closed only when the job stops or Stream Analytics has moved on to the next time window. In general, it is preferable to use line separated JSON, since it doesn't require any special handling while the output file is still being written to. | +| Property Columns [optional] | Comma-separated list of columns that are attached as user properties of the outgoing message instead of the payload. For more information, see the "Custom metadata properties for output" section of this article. | ## Power BI [Power BI](https://powerbi.microsoft.com/) can be used as an output for a Stream Analytics job to provide for a rich visualization experience of analysis results. This capability can be used for operational dashboards, report generation, and metric driven reporting. @@ -225,6 +226,7 @@ The table below lists the property names and their description for creating a Qu | Encoding |For CSV and JSON, UTF-8 is the only supported encoding format at this time | | Delimiter |Only applicable for CSV serialization. Stream Analytics supports a number of common delimiters for serializing data in CSV format. Supported values are comma, semicolon, space, tab, and vertical bar. | | Format |Only applicable for JSON type. 
Line separated specifies that the output is formatted by having each JSON object separated by a new line. Array specifies that the output is formatted as an array of JSON objects. | +| Property Columns [optional] | Comma-separated list of columns that are attached as user properties of the outgoing message instead of the payload. For more information, see the "Custom metadata properties for output" section of this article. | The number of partitions is [based on the Service Bus SKU and size](../service-bus-messaging/service-bus-partitioning.md). Partition key is a unique integer value for each partition. @@ -243,6 +245,7 @@ The table below lists the property names and their description for creating a ta | Event serialization format |Serialization format for output data. JSON, CSV, and Avro are supported. | | Encoding |If using CSV or JSON format, an encoding must be specified. UTF-8 is the only supported encoding format at this time | | Delimiter |Only applicable for CSV serialization. Stream Analytics supports a number of common delimiters for serializing data in CSV format. Supported values are comma, semicolon, space, tab, and vertical bar. | +| Property Columns [optional] | Comma-separated list of columns that are attached as user properties of the outgoing message instead of the payload. For more information, see the "Custom metadata properties for output" section of this article. | The number of partitions is [based on the Service Bus SKU and size](../service-bus-messaging/service-bus-partitioning.md). Partition key is a unique integer value for each partition. @@ -288,6 +291,26 @@ When Azure Stream Analytics receives 413 (http Request Entity Too Large) excepti Also, in a situation where there is no event landing in a time window, no output is generated. As a result, computeResult function is not called. This behavior is consistent with the built-in windowed aggregate functions. +## Custom metadata properties for output + +This feature lets you attach query columns to your outgoing messages as user properties instead of including them in the payload. The properties are added to the output message as a dictionary, where each key is a column name and each value is that column's value. All Stream Analytics data types are supported except Record and Array. + +Supported outputs: +* Service Bus Queues +* Service Bus Topics +* Event Hub + +Example: +The following example adds the two fields DeviceId and DeviceStatus to the message metadata. +* Query: `select *, DeviceId, DeviceStatus from iotHubInput`. +* Output configuration: `DeviceId,DeviceStatus`. + +![Property Columns](./media/stream-analytics-define-outputs/10-stream-analytics-property-columns.png) + +The following screenshot shows the output message properties inspected in Event Hub by using [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer). + + ![Event custom properties](./media/stream-analytics-define-outputs/09-stream-analytics-custom-properties.png) + ## Partitioning The following table summarizes the partition support and the number of output writers for each output type: @@ -297,7 +320,7 @@ The following table summarizes the partition support and the number of output wr | Azure Data Lake Store | Yes | Use {date} and {time} tokens in the Path prefix pattern. Choose the Date format, such as YYYY/MM/DD, DD/MM/YYYY, MM-DD-YYYY. HH is used for the Time format. | Follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md). 
| | Azure SQL Database | Yes | Based on the PARTITION BY clause in the query | Follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md). To learn more about achieving better write throughput performance when you're loading data into SQL Azure Database, visit [Azure Stream Analytics output to Azure SQL Database](stream-analytics-sql-output-perf.md). | | Azure Blob storage | Yes | Use {date} and {time} tokens from your event fields in the Path pattern. Choose the Date format, such as YYYY/MM/DD, DD/MM/YYYY, MM-DD-YYYY. HH is used for the Time format. Blob output can be partitioned by a single custom event attribute {fieldname} or {datetime:\}. | Follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md). | -| Azure Event Hub | Yes | Yes | Varies depending on partition alignment.
    When the output Event Hub partition key is equally aligned with upstream (previous) query step, the number of writers is the same the number of output Event Hub partitions. Each writer uses EventHub’s [EventHubSender class](/dotnet/api/microsoft.servicebus.messaging.eventhubsender?view=azure-dotnet) to send events to the specific partition.
    When the output Event Hub partition key is not aligned with upstream (previous) query step, the number of writers is the same as the number of partitions in that prior step. Each writer uses EventHubClient [SendBatchAsync class](https://docs.microsoft.com/dotnet/api/microsoft.servicebus.messaging.eventhubclient.sendasync?view=azure-dotnet) to send events to all the output partitions. | +| Azure Event Hub | Yes | Yes | Varies depending on partition alignment.
    When the output Event Hub partition key is equally aligned with the upstream (previous) query step, the number of writers is the same as the number of output Event Hub partitions. Each writer uses EventHub’s [EventHubSender class](/dotnet/api/microsoft.servicebus.messaging.eventhubsender?view=azure-dotnet) to send events to the specific partition.
    When the output Event Hub partition key is not aligned with upstream (previous) query step, the number of writers is the same as the number of partitions in that prior step. Each writer uses EventHubClient [SendBatchAsync class](/dotnet/api/microsoft.servicebus.messaging.eventhubclient.sendasync?view=azure-dotnet) to send events to all the output partitions. | | Power BI | No | None | Not applicable. | | Azure Table storage | Yes | Any output column. | Follows the input partitioning for [fully parallelized queries](stream-analytics-scale-jobs.md). | | Azure Service Bus Topic | Yes | Automatically chosen. The number of partitions is based on the [Service Bus SKU and size](../service-bus-messaging/service-bus-partitioning.md). Partition key is a unique integer value for each partition.| Same as the number of partitions in the output topic. | diff --git a/articles/virtual-desktop/create-host-pools-powershell.md b/articles/virtual-desktop/create-host-pools-powershell.md index 787dcd3e8e345..5afee9940fb6b 100644 --- a/articles/virtual-desktop/create-host-pools-powershell.md +++ b/articles/virtual-desktop/create-host-pools-powershell.md @@ -49,7 +49,7 @@ Add-RdsAppGroupUser -TenantName -HostPoolName -AppGr The **Add-RdsAppGroupUser** cmdlet doesn't support adding security groups and only adds one user at a time to the app group. If you want to add multiple users to the app group, rerun the cmdlet with the appropriate user principal names. -Run the following cmdlet to export the registration token to a variable, which you will use later in [Register the virtual machines to the Windows Virtual Desktop host pool](#register-the-virtual-machines-to-the-windows-virtual-desktop-host-pool). +Run the following cmdlet to export the registration token to a variable, which you will use later in [Register the virtual machines to the Windows Virtual Desktop host pool](#register-the-virtual-machines-to-the-windows-virtual-desktop-preview-host-pool). ```powershell $token = (Export-RdsRegistrationInfo -TenantName -HostPoolName ).Token diff --git a/articles/virtual-desktop/set-up-customize-master-image.md b/articles/virtual-desktop/set-up-customize-master-image.md index 5d88e97bc84e7..30c02e27fd47a 100644 --- a/articles/virtual-desktop/set-up-customize-master-image.md +++ b/articles/virtual-desktop/set-up-customize-master-image.md @@ -157,8 +157,8 @@ You can disable Automatic Updates manually. To disable Automatic Updates: -1. Install Office365 by following the instructions in [Office image preparation](set-up-customize-master-image.md#office-image-preparation). -2. Install any additional applications by following the instructions in [User profile setup (FSLogix)](set-up-customize-master-image.md#user-profile-setup-fslogix), [Windows Defender](set-up-customize-master-image.md#windows-defender), and [Other applications and registry configuration](set-up-customize-master-image.md#other-applications-and-registry-configuration). +1. Install Office365 by following the instructions in [Software preparation and installation](set-up-customize-master-image.md#software-preparation-and-installation). +2. Install any additional applications by following the instructions in [Set up user profile container (FSLogix)](set-up-customize-master-image.md#set-up-user-profile-container-fslogix), [Configure Windows Defender](set-up-customize-master-image.md#configure-windows-defender), and [Other applications and registry configuration](set-up-customize-master-image.md#other-applications-and-registry-configuration). 3. 
Disable Windows Auto Update Service on the local VM. 4. Open **Local Group Policy Editor\\Administrative Templates\\Windows Components\\Windows Update**. 5. Right-click **Configure Automatic Update** and set it to **Disabled**. @@ -227,9 +227,7 @@ Windows Virtual Desktop does not officially support Skype for Business and Teams ### Set up user profile container (FSLogix) -To include the FSLogix container as part of the image, follow the instructions in [Set up a user profile share for a host pool](create-host-pools-user-profile.md#configure-the-fslogix-profile-container). - -When configuring the file share registry key, use the file share you created in [Configure permissions for the file server](set-up-customize-master-image.md#configure-permissions-for-the-file-server) where you plan to store the profile containers. You can also test the functionality of the FSLogix container using this [quickstart](https://docs.fslogix.com/display/20170529/Profile+Containers+-+Quick+Start). +To include the FSLogix container as part of the image, follow the instructions in [Set up a user profile share for a host pool](create-host-pools-user-profile.md#configure-the-fslogix-profile-container). You can test the functionality of the FSLogix container with [this quickstart](https://docs.fslogix.com/display/20170529/Profile+Containers+-+Quick+Start). ### Configure Windows Defender diff --git a/articles/virtual-machines/windows/using-visual-studio-vm.md b/articles/virtual-machines/windows/using-visual-studio-vm.md index b2bee8756eeae..e9c222c4d536e 100644 --- a/articles/virtual-machines/windows/using-visual-studio-vm.md +++ b/articles/virtual-machines/windows/using-visual-studio-vm.md @@ -14,7 +14,7 @@ ms.workload: azure-vs ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-windows -ms.date: 02/19/2019 +ms.date: 03/15/2019 ms.author: phillee keywords: visualstudio --- @@ -29,9 +29,9 @@ Images for the most recent major versions, Visual Studio 2017 and Visual Studio | Release version | Editions | Product version | |:------------------------------------------------------------:|:----------------------------:|:------------------------:| -| Visual Studio 2019: Preview (Preview 3) | Enterprise | Version 16.0.0 Preview 3 | -| Visual Studio 2017: Latest (Version 15.9) | Enterprise, Community | Version 15.9.7 | -| Visual Studio 2017: RTW | Enterprise, Community | Version 15.0.20 | +| Visual Studio 2019: Preview (RC3) | Enterprise | Version 16.0.0 RC3 | +| Visual Studio 2017: Latest (Version 15.9) | Enterprise, Community | Version 15.9.9 | +| Visual Studio 2017: RTW | Enterprise, Community | Version 15.0.22 | | Visual Studio 2015: Latest (Update 3) | Enterprise, Community | Version 14.0.25431.01 | | Visual Studio 2015: RTW | None | (Expired for servicing) | diff --git a/includes/cognitive-services-containers-host-computer.md b/includes/cognitive-services-containers-host-computer.md index 52762e51be65d..af9cce5eb6e33 100644 --- a/includes/cognitive-services-containers-host-computer.md +++ b/includes/cognitive-services-containers-host-computer.md @@ -3,10 +3,10 @@ author: diberry ms.author: diberry ms.service: cognitive-services ms.topic: include -ms.date: 01/24/2019 +ms.date: 03/22/2019 --- -The **host** is the computer that runs the docker container. It can be a computer on your premises or a docker hosting service in Azure including: +The **host** is a x64-based computer that runs the docker container. 
It can be a computer on your premises or a docker hosting service in Azure including: * [Azure Kubernetes Service](../articles/aks/index.yml) * [Azure Container Instances](../articles/container-instances/index.yml) diff --git a/includes/devspaces-team-development-1.md b/includes/devspaces-team-development-1.md index 90dc8dfeffac8..aff74ca9b2b07 100644 --- a/includes/devspaces-team-development-1.md +++ b/includes/devspaces-team-development-1.md @@ -64,9 +64,11 @@ First we'll need to deploy a baseline of our services. This deployment will repr > > ![Example CI/CD diagram](../articles/dev-spaces/media/common/ci-cd-complex.png) -At this point your baseline should be running. Run the `azds list-up` command, and you'll see output similar to the following: +At this point your baseline should be running. Run the `azds list-up --all` command, and you'll see output similar to the following: ``` +$ azds list-up --all + Name DevSpace Type Updated Status ---------------------------- -------- ------- ------- ------- mywebapi dev Service 3m ago Running diff --git a/includes/devspaces-team-development-2.md b/includes/devspaces-team-development-2.md index b16f82460733d..727b3f45082b9 100644 --- a/includes/devspaces-team-development-2.md +++ b/includes/devspaces-team-development-2.md @@ -14,48 +14,34 @@ manager: yuvalm ### Run the service -1. Hit F5 (or type `azds up` in the Terminal Window) to run the service. The service will automatically run in your newly selected space _dev/scott_. -1. You can confirm that your service is running in its own space by running `azds list-up` again. You'll notice an instance of *mywebapi* is now running in the _dev/scott_ space (the version running in _dev_ is still running but it is not listed). +To run the service, hit F5 (or type `azds up` in the Terminal Window) to run the service. The service will automatically run in your newly selected space _dev/scott_. Confirm that your service is running in its own space by running `azds list-up`: - ``` - Name DevSpace Type Updated Status - mywebapi scott Service 3m ago Running - mywebapi-bb4f4ddd8-sbfcs scott Pod 3m ago Running - webfrontend dev Service 26m ago Running - ``` - -1. Run `azds list-uris`, and notice the access point URL for *webfrontend*. - - ``` - Uri Status - ------------------------------------------------------------------------- --------- - http://localhost:53831 => mywebapi.scott:80 Tunneled - http://scott.s.dev.webfrontend.6364744826e042319629.ce.azds.io/ Available - ``` - -1. Use the URL with the *scott.s* prefix to navigate to your application. Notice this updated URL still resolves. This URL is unique to the _dev/scott_ space. The special URL signifies that requests sent to the "Scott URL" will try to first route to services in the _dev/scott_ space, but if that fails, they will fall back to services in the _dev_ space. - - -![](../articles/dev-spaces/media/common/space-routing.png) +Notice the public access point URL for *webfrontend* is prefixed with *scott.s*. This URL is unique to the _dev/scott_ space. This URL prefix tells the Ingress controller to route requests to the _dev/scott_ version of a service. When a request with this URL is handled by Dev Spaces, the Ingress Controller first tries to route the request to the *webfrontend* service in the _dev/scott_ space. If that fails, the request will be routed to the *webfrontend* service in the _dev_ space as a fallback. Also notice there is a localhost URL to access the service over localhost using the Kubernetes *port-forward* functionality. 
For more information about URLs and routing in Azure Dev Spaces, see [How Azure Dev Spaces works and is configured](../articles/dev-spaces/how-dev-spaces-works.md). + + + +![Space Routing](../articles/dev-spaces/media/common/Space-Routing.png) This built-in feature of Azure Dev Spaces lets you test code in a shared space without requiring each developer to re-create the full stack of services in their space. This routing requires your app code to forward propagation headers, as illustrated in the previous step of this guide. diff --git a/includes/functions-host-json-service-bus.md b/includes/functions-host-json-service-bus.md deleted file mode 100644 index 7febe9c1a7edf..0000000000000 --- a/includes/functions-host-json-service-bus.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -author: ggailey777 -ms.service: azure-functions -ms.topic: include -ms.date: 09/04/2018 -ms.author: glenga ---- -```json -{ - "serviceBus": { - "maxConcurrentCalls": 16, - "prefetchCount": 100, - "autoRenewTimeout": "00:05:00" - } -} -``` - -|Property |Default | Description | -|---------|---------|---------| -|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. | -|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.| -|autoRenewTimeout|00:05:00|The maximum duration within which the message lock will be renewed automatically.|