
Configuring App-only authentication with certificates for unattended scripts with EXO V2 module


Those who have worked with Exchange Online PowerShell for a while will know how much of a challenge it was to move from basic to modern authentication with MFA for unattended scripts that are scheduled to run non-interactively. The introduction and enforcement of modern authentication and MFA meant unattended scripts would no longer work, because they would require an administrator to interactively enter credentials and complete the second authentication factor. As such, many administrators continued to use basic authentication, or modern authentication with accounts that did not have MFA enforced, to work around the issue, which leads to security vulnerabilities. Microsoft originally had plans to disable basic authentication in 2020 but delayed it indefinitely.

Given the challenge described above, I was extremely excited when Microsoft released the EXO V2 2.0.3 module for public preview in July 2020, which introduced certificate-based authentication. This meant it was now possible to connect to Exchange Online with unattended scripts without passing a username and password, thereby eliminating the issue where MFA is required upon authentication. I haven’t been working with Exchange Online for some time due to my new role, but was asked by an ex-colleague about setting up unattended scripts, so I took the opportunity to capture the process while demonstrating the setup, which I will now use for this blog post.

The Scenario

An organization wants to use a PowerShell script to export Office 365 audit logs of users’ events from Exchange Online, SharePoint Online, OneDrive for Business, Azure Active Directory, Microsoft Teams, Power BI, and other Microsoft 365 services with the Audit Log search feature in the following two Microsoft 365 consoles:

Office 365 Security & Compliance
https://protection.office.com/unifiedauditlog

Microsoft 365 compliance
https://compliance.microsoft.com/auditlogsearch

The log will then get emailed for review at the end of the month with a scheduled task.

The Challenge

As this will be an automated process that runs at the end of each month, setting up a PowerShell script that uses Connect-ExchangeOnline with basic or modern authentication means a username and password would have to be used with an account that does not have MFA enforced.

The Solution

With the release of EXO V2 2.0.3, unattended script (automation) scenarios can now authenticate using Azure AD applications and self-signed certificates.

How does it work?

The EXO V2 module uses the Active Directory Authentication Library to fetch an app-only token using the application Id, tenant Id (organization), and certificate thumbprint. The application object provisioned inside Azure AD has a Directory Role assigned to it, which is returned in the access token. Exchange Online configures the session RBAC using the directory role information that's available in the token.

image

The Tools

To accomplish the task above, we will require the following components:

  1. EXO V2 2.0.3 or higher module
  2. PowerShell version 7 or higher
  3. Self-signed certificate
  4. PowerShell Script using the Search-UnifiedAuditLog (https://docs.microsoft.com/en-us/powershell/module/exchange/search-unifiedauditlog?view=exchange-ps)

This post will focus on setting up the components required for certificate-based authentication, but for those interested in #4, I’ve written a separate blog post:

Script to export audit logs for the current month from Office 365 using Search-UnifiedAuditLog
http://terenceluk.blogspot.com/2021/05/script-to-export-audit-logs-for-current.html
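
For context, the core cmdlet that the script revolves around can be called as follows once connected; a minimal sketch, with the date range and output path chosen purely for illustration:

# Pull the last 30 days of audit records and export them to CSV
# (Search-UnifiedAuditLog returns at most 5,000 results per call)
$end = Get-Date
$start = $end.AddDays(-30)
Search-UnifiedAuditLog -StartDate $start -EndDate $end -ResultSize 5000 | Export-Csv -Path .\AuditLog.csv -NoTypeInformation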

Official Microsoft Documentation

As always, I’d like to provide the official Microsoft documentation and other useful reference documents here:

App-only authentication for unattended scripts in the EXO V2 module
https://docs.microsoft.com/en-us/powershell/exchange/app-only-auth-powershell-v2?view=exchange-ps

Modern Auth and Unattended Scripts in Exchange Online PowerShell V2
https://techcommunity.microsoft.com/t5/exchange-team-blog/modern-auth-and-unattended-scripts-in-exchange-online-powershell/ba-p/1497387

About the Exchange Online PowerShell V2 module
https://docs.microsoft.com/en-us/powershell/exchange/exchange-online-powershell-v2?view=exchange-ps

Basic Authentication and Exchange Online – July Update
https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-and-exchange-online-july-update/ba-p/1530163

UPDATE: Exchange Online deprecating Basic Authentication (Basic Auth)
https://docs.microsoft.com/en-us/lifecycle/announcements/exchange-online-basic-auth-deprecated

Step #1 – Required Module and PowerShell

Begin by obtaining the required PowerShell and EXO V2 module versions, as using an incorrect version of, say, PowerShell will cause certificate-based authentication to fail. Download and install the latest versions of the following:

ExchangeOnlineManagement 2.0.5 (the latest at the time of this writing)
https://www.powershellgallery.com/packages/ExchangeOnlineManagement

v7.1.3 Release of PowerShell (the latest at the time of this writing)
https://github.com/PowerShell/PowerShell/releases/tag/v7.1.3
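
For reference, the following is a minimal sketch of installing the module from the PowerShell Gallery once PowerShell 7 is installed:

# Install and load the EXO V2 module, then confirm the version (run from PowerShell 7)
Install-Module -Name ExchangeOnlineManagement -MinimumVersion 2.0.5 -Scope CurrentUser
Import-Module ExchangeOnlineManagement
Get-Module ExchangeOnlineManagement | Select-Object Name, Version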

While not a requirement, I highly recommend using Visual Studio Code with the PowerShell extension to write and test PowerShell scripts, as the Windows built-in ISE only supports Windows PowerShell version 5:

Visual Studio Code
https://code.visualstudio.com/download

Using Visual Studio Code for PowerShell Development
https://docs.microsoft.com/en-us/powershell/scripting/dev-cross-plat/vscode/using-vscode?view=powershell-7.1

image

Step #2 – Register an application in Azure AD

In order to authenticate with the EXO V2 module’s Connect-ExchangeOnline using a certificate, you must register an application in Azure AD, which will represent the unattended script.

Begin by logging into https://portal.azure.com, navigate to Azure Active Directory > App registrations:

image

Click on the New Registration button to register a new app that will represent the identity of the unattended script:

image

Provide a name for the application and select the appropriate account types option. For the purpose of this demonstration, we’re limiting it to a single tenant, so the following is selected:

Accounts in this organizational directory only (<YourOrganizationName> only - Single tenant)

The Redirect URI (optional) field is not required for what we’re trying to accomplish, so leave Web as the selection and the URI empty.

image

The newly created registration of the application should now be displayed:

image

With the application created, we’ll need to assign the appropriate permissions to the application that will represent our unattended script. Proceed to navigate into the configuration of the registered application:

image

----------------------------------------------------------------------------------------------------------------------------

There are two ways to configure the appropriate permissions:

Option #1 – Use the Manifest configuration

This is the easiest method, as you simply edit the manifest properties, which will configure the required permission and remove the default Microsoft Graph > User.Read permission shown here in the API permissions:

image

Navigate to the Manifest configuration and locate the requiredResourceAccess entry around line 44:

image

"requiredResourceAccess": [

{

"resourceAppId": "00000002-0000-0ff1-ce00-000000000000",

"resourceAccess": [

{

"id": "dc50a0fb-09a3-484d-be87-e023b12c6440",

"type": "Role"

}

]

}

],

image

Click Save to apply the changes:

image

Navigating to API permissions should now display the Exchange.ManageAsApp permission configured:

image

Proceed to grant admin consent for the configured permission:

image

image

image

Option #2 – Use the API permissions configuration

The second option is to use the API permissions configuration to manually add the appropriate permissions for the app:

image

Search for Office 365 Exchange Online:

image

Select Application permissions:

image

Under Exchange (1), select Exchange.ManageAsApp:

image

Grant admin consent for the Exchange.ManageAsApp permission that was just assigned:

image

image

Remove the Microsoft Graph permissions:

image

image

----------------------------------------------------------------------------------------------------------------------------

Step #3 – Generate a self-signed certificate for the application that will be authenticating

The next step is to generate a self-signed X.509 certificate that the application will use to authenticate against Azure AD when requesting the app-only access token. It is also possible to use an internal or public PKI for the certificate, but this demonstration will use a self-signed certificate generated locally on a Windows server with PowerShell version 7.

Note that Cryptography Next Generation (CNG) certificates are not supported for app-only authentication with Exchange. CNG certificates are created by default in modern Windows versions, so you must use a certificate from a CSP key provider.

To generate a self-signed certificate for authentication, log onto any Windows Server or desktop with PowerShell version 7 or newer and execute the following:

# Create certificate
$mycert = New-SelfSignedCertificate -DnsName "contoso.org" -CertStoreLocation "cert:\LocalMachine\My" -NotAfter (Get-Date).AddYears(1) -KeySpec KeyExchange

image

A certificate with the private key will be created in the local computer store:

image

The certificate with the private key located on the computer will be used to authenticate the identity of an unattended script. A corresponding certificate with the public key (without the private key) will need to be attached to the Azure AD application. Execute the following cmdlet to create a .cer file with the public key:

# Export certificate to .cer file
$mycert | Export-Certificate -FilePath mycert.cer

image

image

If the intention is to execute Connect-ExchangeOnline from the server where this self-signed certificate was generated, then we can simply reference it by its thumbprint when we authenticate. If the script will reside on another server that may not be Windows, or that does not have the certificate imported into its local computer store, then it is possible to export the certificate with the private key to a .pfx file, which can then be used to authenticate. In scenarios such as an App Service that needs to authenticate against Azure AD, the .pfx file can be stored in Azure Key Vault and retrieved during the authentication process. Use the following cmdlet to export the certificate to a .pfx file:

# Export certificate to .pfx file
$mycert | Export-PfxCertificate -FilePath mycert.pfx -Password $(ConvertTo-SecureString -String "P@ssw0Rd1234" -AsPlainText -Force)
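
To confirm the certificate was created and to retrieve the thumbprint that will be referenced later, a quick check such as the following can be used (the subject matches the -DnsName value used above):

# List the newly created certificate and its thumbprint from the local computer store
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=contoso.org" } | Select-Object Subject, Thumbprint, NotAfter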

Step #4 – Attach the self-signed certificate to the Azure AD application

The next step is to upload the certificate with the public key (without the private key) to the Azure AD application. Proceed to navigate to the App registrations configuration in the Azure portal and click into the application:

image

Navigate to Certificates & secrets and click on the Upload certificate button:

image

Upload the .cer export of the certificate:

image

The uploaded certificate will be displayed:

image

Step #5 – Assign the required Azure AD roles to the application

The last step in the configuration is to assign an Azure AD role with the required permissions to the registered application so that it is able to execute the required cmdlets with the EXO V2 module, because authenticating with the certificate will not be in the context of a user. Not all of the Azure AD roles are currently supported, but the following ones are:

  • Global administrator
  • Compliance administrator
  • Security reader
  • Security administrator
  • Helpdesk administrator
  • Exchange administrator
  • Global Reader

Begin by navigating to Azure Active Directory > Roles and administrators, search for the Exchange administrator role, and open its properties:

image

Click on Add assignments:

image

Search for the app registration we created earlier to grant the service principal permissions:

image

Note how the registered app is now assigned Exchange administrator permissions:

image

Step #6 – Authenticate against Azure AD with Connect-ExchangeOnline using the certificate

We can now use the certificate to authenticate against Azure AD with the Connect-ExchangeOnline cmdlet. Proceed to obtain the Application (client) ID from the Overview properties of the registered application:

image

And the thumbprint of the certificate, either from Certificates & secrets of the registered application or from the certificate in the Windows operating system’s local computer store:

image

image

With the values above, the following cmdlets can be used to authenticate.

Authenticating with a certificate stored on the Windows server or desktop’s Local Computer > Personal Certificates store

Ensure that the certificate is located in the appropriate store:

image

Connect-ExchangeOnline -CertificateThumbPrint "3D057B3299A75B824F326F1A8A64262F12C60958" -AppID "b6925809-be8c-441b-8915-1bbcc2f2b6fc" -Organization "contoso.onmicrosoft.com"

image

Authenticating with an exported PFX file

Alternatively, you can use an exported PFX file to connect via the following cmdlet:

Connect-ExchangeOnline -CertificateFilePath "C:\scripts\mycert.pfx" -CertificatePassword (ConvertTo-SecureString -String "<MyPassword>" -AsPlainText -Force) -AppID "36ee4c6c-0812-40a2-b820-b22ebd02bce3" -Organization "contoso.onmicrosoft.com"

Authenticating with a certificate object (e.g. retrieved from Azure Key Vault)

The last method to provide a certificate is to use a certificate object via the following cmdlet:

Connect-ExchangeOnline -Certificate <%X509Certificate2 Object%> -AppID "36ee4c6c-0812-40a2-b820-b22ebd02bce3" -Organization "contoso.onmicrosoft.com"

When the Certificate parameter is used, the certificate does not need to be installed on the computer where the command is executed. This parameter is applicable for scenarios where the certificate object is stored remotely and fetched at runtime during script execution. An example of this is when the certificate is stored in the Azure Key Vault.
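
As a rough sketch of the Key Vault scenario (the vault and certificate names below are hypothetical, and the Az.KeyVault module with PowerShell 7 is assumed), the certificate object could be built in memory like this:

# Retrieve the base64-encoded PFX from Key Vault and build an X509Certificate2 object
# (a certificate imported into Key Vault is also addressable as a secret containing the PFX)
$secret = Get-AzKeyVaultSecret -VaultName "MyKeyVault" -Name "ExoAppCert"
$pfxBytes = [Convert]::FromBase64String((ConvertFrom-SecureString -SecureString $secret.SecretValue -AsPlainText))
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxBytes)
Connect-ExchangeOnline -Certificate $cert -AppID "36ee4c6c-0812-40a2-b820-b22ebd02bce3" -Organization "contoso.onmicrosoft.com"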


Skype for Business Online (SkypeOnlineConnector) PowerShell connections are blocked and Set-CsUser no longer works


Problem

You’ve noticed that the following cmdlets fail with an error message indicating that Skype for Business Online (SkypeOnlineConnector) PowerShell connections are blocked:

Import-Module SkypeOnlineConnector
$sfbSession = New-CsOnlineSession

New-PSSession : [admin0b.online.lync.com] Processing data from remote server admin0b.online.lync.com failed with the
following error message: Skype for Business Online PowerShell connections are blocked. Please replace the Skype for
Business Online PowerShell connector module with the Teams PowerShell Module. Please visit https://aka.ms/sfbocon2tpm
for supported options. For more information, see the about_Remote_Troubleshooting Help topic.
At C:\Program Files\Common Files\Skype for Business
Online\Modules\SkypeOnlineConnector\SkypeOnlineConnectorStartup.psm1:254 char:16
+ ... $session = New-PSSession -Name $psSessionName -ConnectionUri $Connec ...
+                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException
    + FullyQualifiedErrorId : IncorrectProtocolVersion,PSSessionOpenFailed

PS C:\WINDOWS\system32> Import-PSSession $sfbSession

Import-PSSession : Cannot validate argument on parameter 'Session'. The argument is null. Provide a valid value for
the argument, and then try running the command again.
At line:1 char:18
+ Import-PSSession $sfbSession
+                  ~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [Import-PSSession], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationError,Microsoft.PowerShell.Commands.ImportPSSessionCommand

image

As per the following Microsoft documentation:

Migrating from Skype for Business Online Connector to the Teams PowerShell module
https://aka.ms/sfbocon2tpm

Skype for Business Online (SkypeOnlineConnector) PowerShell connections are rejected / blocked starting May 17, 2021, and the Teams PowerShell Module should be used for administration instead. However, attempting to use Connect-MicrosoftTeams and then executing the Set-CsUser cmdlet to enable a user for Enterprise Voice fails, indicating the cmdlet is not recognized:

image

Solution

One of the reasons why Set-CsUser or any of the Cs- cmdlets will not work even when the MicrosoftTeams module is used is that an old version of the module is loaded. In order for the legacy Cs- cmdlets to be available, Teams PowerShell Module 2.0 or later needs to be installed and imported. The example above has an older module imported:

Get-Module

image
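
A quick way to check every version of the module installed on the machine before updating is:

# List all installed versions of the MicrosoftTeams module
Get-Module -Name MicrosoftTeams -ListAvailable | Select-Object Name, Version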

To update the module to the latest version, execute the following:

Uninstall-Module -Name MicrosoftTeams
Install-Module -Name MicrosoftTeams -RequiredVersion 2.0.0 -AllowClobber
Import-Module -Name MicrosoftTeams

image

The legacy Cs- cmdlets should now work:

# Connect to Microsoft Teams
Import-Module MicrosoftTeams
Connect-MicrosoftTeams

# User and extension to configure
$usernameUPN = "tluk@contoso.com"
$extension = "tel:+7899"

# Enable the user for Enterprise Voice and hosted voicemail, and assign the line URI
Set-CsUser -Identity $usernameUPN -EnterpriseVoiceEnabled $true -HostedVoiceMail $true -OnPremLineURI $extension

# Verify the line URI, then grant the voice routing policy and dial plan
Get-CsOnlineUser -Identity $usernameUPN | FL *uri
Grant-CsOnlineVoiceRoutingPolicy -Identity $usernameUPN -PolicyName "Toronto"
Grant-CsTenantDialPlan -PolicyName Toronto -Identity (Get-CsOnlineUser $usernameUPN).SipAddress

image

Using AirWatch to remotely run batch files for uninstalling applications


I don’t get to work with MDM or UEM applications much anymore due to my focus on Azure, so I got pretty excited when an ex-colleague asked me if there was a way to use VMware AirWatch (also known as Workspace One) to remotely execute an uninstall command for an application, as I had gone through the process before. After digging up some old notes to guide him through the setup, I thought I’d write this blog post in case anyone else happens to ask me in the future.

The official documentation for this feature, although a bit outdated, can be found here:

Using Product Provisioning to Deliver Files to Windows 10: Workspace ONE Operational Tutorial
https://techzone.vmware.com/using-product-provisioning-deliver-files-windows-10-workspace-one-operational-tutorial#_991596

For the purpose of this example, I will be using a batch file that manually (and forcefully) removes Cylance Protect from devices. Not long ago I ran into an issue where attempting to uninstall Cylance Protect from devices would display the following error:

Cylance PROTECT
You are attempting to run the 32-bit installer on a 64-bit version of Windows. Please run the 64-bit installer.

image

I couldn’t determine how to get around this, as none of the usual methods such as editing registry keys or executing the following msiexec.exe command would work:

msiexec /x {2E64FC5C-9286-4A31-916B-0D8AE4B22954} /quiet

Reaching out to Blackberry support had a support engineer point me to:

Fix problems that block programs from being installed or removed

https://support.microsoft.com/en-us/mats/program_install_and_uninstall

The tool worked, but it had to be run interactively and did not allow me to use it at scale.

A bit more research led me to the following script written by James Gallagher, which worked. Note that it was originally provided by Cylance and later modified by Cyberforce, so use it at your own risk; it worked for me but may not for others.

Manual Removal Of CylancePROTECT
https://cyberforcesecurityhelp.freshdesk.com/support/solutions/articles/44002036687-manual-removal-of-cylanceprotect

In case the post ever gets deleted, I will paste the contents for the customized-CylanceCleanupTool.bat at the end of this post.

With the above scenario described, let’s begin creating the configuration in AirWatch Version: 20.11.0.5 (2011):

image

Create the Files/Actions

The first step is to create a Files/Actions entry that will allow you to upload the batch file, define where to store it on the device, and define how to execute it.

Begin by navigating to Devices > Provisioning > Components > Files/Actions:

image

Click on the ADD FILES/ACTIONS:

image

Select Windows under Add Files/Actions:

image

Select Windows Desktop under Select Device Type:

image

Type in a name for the action and select the appropriate organization for Managed By:

image

Navigate to the Files tab, click on the ADD FILES tab, then Choose Files to select the batch file that will be uploaded and pushed to the clients:

image

image

Specify a download path where the batch file will be downloaded to on the client:

C:\Temp\AirWatch\

image

Save the configuration and the following line will be displayed in the Files tab:

image

Navigate to the Manifest tab and click on InstallManifest:

image

Select Run for Action(s) To Perform:

image

Select System for the Execution Context so the batch file runs with elevated permissions, and specify the path to the batch file (the download location specified earlier plus the name of the batch file that was just uploaded):

C:\Temp\AirWatch\customized-CylanceCleanupTool.bat

image

The following configuration will be displayed under Install Manifest. An uninstall command can also be specified, but one will not be configured for this example:

image

Proceed to save the new Files/Actions:

image

Create the Product to assign to devices

With the creation of the Files/Actions completed, the next step is to assign it to devices.

Navigate to Devices > Provisioning > Product List View and click on ADD PRODUCT:

image

Select Windows under Add Product:

image

Select Windows Desktop under Select Device Type:

image

Provide a name and description for the product, select the appropriate organization, and select the Smart Group this product should be applied to. For the purpose of this example, we will be assigning it to all devices.

image

Click on the Manifest tab and then the ADD button:

image

Select File/Action - Install for Action(s) To Perform and the previously created Files/Actions for the Files/Actions field:

image

The saved Manifest will be displayed as such:

image

You can further specify Conditions, Deployment and Dependencies options:

image

image

image

With the configuration completed, click the Save button to simply save the Manifest, or Activate to save and immediately activate the configuration:

image

For the purpose of this example, I will click on Activate, which will display the list of devices it will be applied to:

image

The new product should now be displayed:

image

Waiting a few seconds and refreshing will update the In Progress, Compliant and Failed values:

image

Hope this helps anyone who might be looking for instructions on how to remotely run batch files with AirWatch.

customized-CylanceCleanupTool.bat

@ECHO OFF
title UNIFIED DRIVER CYLANCE CLEANUP TOOL
Echo UNIFIED DRIVER CYLANCE CLEANUP TOOL

:SwitchToWorkingDirectory
cd /d "%~dp0" 1>nul 2>&1

:AdminCheck
openfiles>nul 2>&1
IF %ERRORLEVEL% == 0 (
:AdminCheckPass
GOTO ManualUninstall
) ELSE (
:AdminCheckFail
Echo * Please re-run the Unified Driver Cylance Cleanup Tool as Administrator.
Echo * Exiting...
GOTO CyExit
)

:ManualUninstall
reg delete HKEY_LOCAL_MACHINE\SOFTWARE\Cylance /f
reg delete HKEY_CLASSES_ROOT\Installer\Products\C5CF46E2682913A419B6D0A84E2B9245 /f
reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CylanceSvc /f
taskkill /im CylanceUI.exe
takeown /f "C:\Program Files\Cylance" /r /d y
icacls "C:\Program Files\Cylance" /reset /T
rd /s /q "C:\Program Files\Cylance"
takeown /f "C:\programdata\Cylance" /r /d y
rd /s /q C:\programdata\Cylance

:InstallCleanup
Echo * Installing the Unified Driver Cylance Cleanup Tool service...
CyCleanupSvc.exe "-install"
IF %ERRORLEVEL% == 0 (
GOTO WaitCleanup
) ELSE (
Echo * Failed to install the Unified Driver Cylance Cleanup Tool service.
Echo * Please check the Logs directory.
Echo * Exiting...
GOTO CyExit
)

:WaitCleanup
Echo * Waiting for the Unified Driver Cylance Cleanup Tool service to cleanup...
ping -n 30 127.0.0.1 1>nul 2>&1
Echo * Unified Driver Cylance Cleanup Tool is finished.
Echo * Removing the Unified Driver Cylance Cleanup Tool service...
CyCleanupSvc.exe "-uninstall"
IF %ERRORLEVEL% == 0 (
GOTO FinishCleanup
) ELSE (
Echo * Failed to remove the Unified Driver Cylance Cleanup Tool service.
Echo * Please check the Logs directory.
Echo * Exiting...
GOTO CyExit
)

:FinishCleanup
Echo * Unified Driver Cylance Cleanup Tool service has been removed.
Echo * Exiting...

:CyExit
exit

How to configure Citrix ADC / NetScaler to forward client Source IP to Exchange Server 2019 / 2016 or any IIS application


Those who have worked with load balancers for applications will know that it can be a pain to troubleshoot issues where the source IP address is required, because from the application’s perspective, all incoming connections appear to originate from the load balancer’s IP address. With Citrix ADC / NetScalers, there are several methods of achieving this, such as using the X-Forwarded-For header to include the source client IP address (this only works with HTTP and SSL services) or configuring direct server return (DSR) mode to allow the server to respond to clients directly using a return path that does not flow through the Citrix ADC appliance. There are advantages and disadvantages to each method, but for the purpose of this post, I will demonstrate how to configure Exchange Server 2019 (or any IIS application) to receive the source client IP with the X-Forwarded-For header.

The Scenario

Let’s assume that you have a user who is continuously locked out of their account and you have identified the event to take place on an on-premises Exchange server, as you can see event ID 4625 Audit Failure events in the Security log as shown in the screenshot below:

image

The Exchange server is placed behind Citrix ADC / NetScalers, and the events therefore show the load balancer’s IP address 172.16.5.90 in the Source Network Address field.

Proceed to navigate into the IIS logs on the Exchange server in the W3SVC1 folder located in the C:\inetpub\logs\LogFiles\ directory:

image

… and opening the logs shows only the source IP of the Citrix ADC / NetScaler:

image

image

Configuring IIS on the Exchange Server to log the X-Forwarded-For request header

The first step to logging the source IP address is to configure IIS on the Exchange server to log the X-Forwarded-For request header that is passed from the Citrix ADC / NetScaler load balancer. The following TechNet blog does a fantastic job of demonstrating the process:

How to use X-Forwarded-For header to log actual client IP address?
https://techcommunity.microsoft.com/t5/iis-support-blog/how-to-use-x-forwarded-for-header-to-log-actual-client-ip/ba-p/873115

Below is a demonstration with an Exchange 2019 server on Windows Server 2019 and IIS version 10.0.17763.1:

image

Begin by launching Internet Information Services (IIS) Manager, navigate to either the server node or one of the websites, and then open Logging:

image

image

We will add the X-Forwarded-For field by clicking on Select Fields beside the W3C Format dropdown menu:

image

Proceed to click on Add Field and enter X-Forwarded-For as the Field Name and Source, with the Source Type set to Request Header:

image

Note how the X-Forwarded-For is added as a Custom Field:

image

Apply the changes:

image

**Note that configuring the above on one site automatically applies it to the other sites.

Now navigate back to the IIS log files, open the latest log file, and confirm that the X-Forwarded-For field has been added as a header:

image

The following is a side by side comparison where the log on the top has the X-Forwarded-For custom field added and the bottom does not:

image

Configure the Citrix ADC / NetScaler to forward client source IP as X-Forwarded-For

With the IIS server configured to receive the custom X-Forwarded-For field, proceed to log into the Citrix ADC / NetScaler, navigate to Traffic Management > Load Balancing > Service Groups or Services:

image

For the purpose of this example, we will be configuring all of the Exchange service groups to forward the client source IP address as X-Forwarded-For (owa, activesync, rpc, ews, Autodiscover, oab, mapi, ecp).

Open the properties of the load balancing service group or service, navigate to the Settings area and click on the edit icon:

image

Enable the Insert Client IP Header and type in the X-Forwarded-For string for the Header text box:

image

Click OK to save the settings and proceed to save the settings by clicking Done.

Repeat for the rest of the load balancing service groups by using the GUI or the CLI command:

set service <name> -CIP <Value> <cipHeader>

Here are the commands for each Exchange service:

set serviceGroup SVG_EX2019_owa -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_activesync -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_rpc -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_ews -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_autodiscover -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_oab -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_mapi -cip enabled X-Forwarded-For
set serviceGroup SVG_EX2019_ecp -cip enabled X-Forwarded-For

image

Testing the configuration by verifying source IP address in IIS Logs

Switching back to the Exchange server, the latest IIS log should now reveal a value for the X-Forwarded-For field. Below is a screenshot of the log before the Citrix ADC / NetScaler configuration:

image

Below is a screenshot after the change, with the client’s source IP address appended to the end of each connection:

image
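
Rather than eyeballing the raw log, PowerShell can also pull the X-Forwarded-For column out of a W3C-format log. The following is a minimal sketch, with a hypothetical log file name:

# Parse a W3C-format IIS log and list the client IP and X-Forwarded-For columns
$logPath = "C:\inetpub\logs\LogFiles\W3SVC1\u_ex210601.log"
$lines = Get-Content $logPath
$fields = (($lines | Where-Object { $_ -like "#Fields:*" } | Select-Object -Last 1) -replace "^#Fields:\s*") -split "\s+"
$lines | Where-Object { $_ -notlike "#*" } | ForEach-Object {
    $values = $_ -split "\s+"
    $row = [ordered]@{}
    for ($i = 0; $i -lt $fields.Count; $i++) { $row[$fields[$i]] = $values[$i] }
    [pscustomobject]$row
} | Select-Object date, time, 'c-ip', 'X-Forwarded-For' | Format-Table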

Hope this helps anyone looking for a way to log the originating source IP address of client requests on IIS that is load balanced by a Citrix ADC / NetScaler.

Attempting to upgrade ESXi on a Nutanix node manually fails with: "OSError: [Errno 39 Directory not empty: ‘/vmfs/volumes/e2be1277-f8844313-564e-dbe0fc3821c4/Nutanix’"


Problem

You’re attempting to manually upgrade a Nutanix node from VMware ESXi, 6.5.0, 17167537 to VMware ESXi, 6.7.0, 15160138 with the ISO VMware-VMvisor-Installer-201912001-15160138.x86_64.iso then patch it to VMware ESXi, 6.7.0, 17700523 with ESXi670-202103001.zip:

image

Upgrade ESXi, preserve VMFS datastore

image

image

… but the upgrade from 6.5 to 6.7 would fail with the following error after you select the upgrade option:

------ An unexpected error occurred ------

See logs for details

OSError: [Errno 39 Directory not empty: ‘/vmfs/volumes/e2be1277-f8844313-564e-dbe0fc3821c4/Nutanix’

image

The upgrade does not proceed further until you restart the host, which will boot into ESXi 6.5 normally.

Subsequent attempts to re-run the upgrade will now only present the option of performing a fresh ESXi installation rather than an upgrade.

Install ESXi, preserve VMFS datastore

image

Solution

I could not find a Nutanix KB article, but I did find quite a few forum and blog posts indicating that the files in /bootbank need to be copied to /altbootbank (copied, not moved), so I performed the following with WinSCP:

  1. Backed up the single file in /altbootbank named boot.cfg.backup and then deleted it
  2. Backed up the Nutanix folder in the /bootbank folder and then deleted it
  3. Copied the files in /bootbank to /altbootbank

Note that removing the Nutanix folder is important, as failing to do so will prevent the upgrade from proceeding.
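
For those who prefer SSH over WinSCP, the same steps can be performed from the ESXi shell; the following is a rough sketch using the paths from this example:

# Back up boot.cfg.backup and the Nutanix folder, then remove them
cp /altbootbank/boot.cfg.backup /tmp/boot.cfg.backup
rm /altbootbank/boot.cfg.backup
cp -r /bootbank/Nutanix /tmp/Nutanix.bak
rm -rf /bootbank/Nutanix
# Copy (not move) the remaining bootbank files over
cp /bootbank/* /altbootbank/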

bootbank directory:

image

image

altbootbank directory:

image

I then rebooted the ESXi host with the 6.7 ISO mounted and was able to proceed with the upgrade and then patch 6.7.

image

To be safe, I performed the following after the upgrade:

  1. Copied the file named boot.cfg.backup back into /altbootbank
  2. Copied the Nutanix folder back into the /bootbank folder

I then proceeded to apply the patch:

esxcli software vib install -d /vmfs/volumes/ntnx-lun03/ISO/ESXi670-202103001.zip

image

Hope this helps anyone who may encounter this issue.

Planning for an EA to CSP Azure Subscription to Subscription Migration


One of the projects I've recently been involved in is an EA to CSP subscription migration for a client, and those who have been a part of such a project will know that there are two methods for the migration. One is a manual subscription to subscription migration, which requires quite a bit of planning before execution, and the other is a billing transfer that is available to partners who are Azure Expert MSPs. I've been fortunate enough to have access to the Azure Expert MSP method (https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/mpa-request-ownership#request-billing-ownership), which eliminates the majority of the heavy lifting. With that said, I did have to perform a manual migration a year ago, so I wanted to share the assessment, planning and migration tips I have in hopes of helping anyone who has to go through this exercise.

Useful documentation, blog posts and forums for subscription migration information

Let me begin by providing the following list of links that I found very useful when planning for a subscription to subscription migration by moving resources, whether from EA to CSP or from Pay-as-you-go to CSP.

Microsoft Documentation:

Move resources to a new resource group or subscription - the most important documentation to read when starting the migration planning:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

Move operation support for resources - use this to determine whether a resource can be migrated between subscriptions by obtaining its type and reviewing the information provided by the table (e.g. Microsoft.ClassicCompute/domainnames):

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources

How to move a Recovery Services vault across Azure Subscriptions and Resource Groups:

https://docs.microsoft.com/en-gb/azure/backup/backup-azure-move-recovery-services-vault

Move guidance for virtual machines

https://docs.microsoft.com/en-gb/azure/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations

Blogs and GitHub:

Jack Tracey, a Microsoft Cloud Solutions Architect, wrote a blog post that provides many important details about manually migrating resources as well as the Azure Expert MSP billing transfer:
https://jacktracey.co.uk/migration/azure-subscription-migrations/

Igor Shastitko wrote a LinkedIn post outlining his experience, items to lookout for, and a high level plan:
https://www.linkedin.com/pulse/azure-paygea-csp-subscriptions-migrations-limitations-igor-shastitko/

Morten Pedholt provides his experience, migration planning tips and how to call the validation API to determine whether resources can be migrated from one subscription to another:
https://pedholtlab.com/migrate-between-azure-subscriptions-like-a-pro/

Pantelis Apostolidis provides a way to use Postman to run the resource migration validation:
https://www.cloudcorner.gr/microsoft/azure/validate-azure-resource-move-with-postman/

Jeroen shares his experience with subscription to subscription migration:
http://www.jeroenwint.com/2017/08/09/lessons-learned-migrate-between-azure-ea-and-azure-csp-subscription/

How to migrate Azure IaaS VMs if they are not encrypted with SSE with PMK with Azure Key Vault:
https://github.com/MicrosoftDocs/azure-docs/issues/8684#issuecomment-400024929

https://stackoverflow.com/questions/55922310/can-we-move-encrypted-virtual-machines-from-subscription-to-subscription

How to migrate application gateways:
https://github.com/MicrosoftDocs/azure-docs/issues/8037

PowerShell script to validate whether a resource can be moved across subscriptions:
https://www.powershellbros.com/check-possibility-of-azure-resource-migration/

Azure Resource Mover for moving resources:
https://www.red-gate.com/simple-talk/cloud/infrastructure-as-a-service/lets-move-azure-resource-mover/

While not related to migrating resources from one subscription to another, Wesley Haakman outlines the process of transferring the billing ownership of an Azure subscription from an Enterprise Agreement to a CSP:
https://www.wesleyhaakman.org/transferring-ea-subscriptions-to-csp/

Export subscription resources and review items for migration

Begin by downloading Jack Tracey's Azure Resource Migration Support Tool spreadsheet from his blog: AzureResourceMigrationSupportV10.xlsx

https://jacktracey.co.uk/migration/azure-subscription-migrations/

The Introduction tab of the spreadsheet provides a detailed explanation of its features and functionality. Before populating the spreadsheet with data, it is best to update it, because the version available on Jack Tracey's blog was last updated in October 2020 and Microsoft continues to improve the subscription migration feature; he has therefore provided links to GitHub where CSVs containing an updated list of which resources can be migrated are published. Proceed to download the latest move-support-resources.csv file:

image

Then copy it into the move-support-resources tab in the AzureResourceMigrationSupportV10.xlsx spreadsheet:

image

With the spreadsheet updated, proceed to the Introduction tab, which contains the following PowerShell cmdlets to export the Name, ResourceGroupName, Location and Type of every resource in the subscription to a CSV file:

Connect-AzAccount
Get-AzSubscription -SubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx | Set-AzContext
Get-AzResource | Select Name,ResourceGroupName,Location,Type | Export-CSV SubscriptionXXXXExport.csv -NoTypeInformation

image

Another method of obtaining the resources export is to use Azure Resource Graph Explorer with the following query:

resources
| project name, resourceGroup, location, type
| order by type asc

image

With the CSV file of the resources created, paste the resources into the first 4 columns (Name, Resource Group, Region, Resource Type) of the SubsX tab, which will then automatically fill in the remaining 4 columns (Resource Group Migration Supported?, Subscription Migration Supported?, Resource Group Migration Limitations, Subscription Migration Limitations) that indicate whether each resource can be migrated along with its migration limitations.

image

This spreadsheet provides a great overview of the resources in the subscription and identifies what can and cannot be migrated. It is important to note that the spreadsheet will place a Yes for classic resources even though they are not supported in a CSP migration, so don't make the mistake of filtering to only the items with No in the Subscription Migration Supported? column and assuming the rest can be migrated:

Not supported in CSP - Must migrate to ARM model first.
Target subscription cannot contain any other classic resources.
All classic resources must be moved in one go.

image

Organizing Subscriptions into Groups

The next step is to use the spreadsheet to separate subscriptions into different groups. The following are the 3 groups I use, but they may not work for everyone, so create as many or as few as needed:

1. All resources can be migrated
2. All resources can be migrated but there are classic resources
3. Not all resources can be migrated

Group #1 - All resources can be migrated

Resources that can be migrated can be moved without any service disruption during the move. However, a subscription where all of the resources can be migrated does not mean you can navigate into portal.azure.com, click on All resources, select all the resources and move them. Even if a resource can be migrated across subscriptions, it could have dependencies that require it to be moved together with other resources, or be placed in the resource group where it was originally created. It is also likely that some of the dependencies will mean downtime is required. The following are the steps I perform for this group:

1. Organize all of the resource types in a spreadsheet and sort them by resource type

image

2. Log into portal.azure.com, launch Resource Explorer, and navigate to Subscriptions > "Subscription Name" > Providers:

image

3. With Resource Explorer and the spreadsheet side by side, analyze each resource type to determine what its dependencies are. The reason I like to use Resource Explorer is that you can easily skip through the resource types that have no dependencies, click through each resource that does, and use the Open blade button to navigate directly to the properties of the resource for further investigation

image

4. Add an action column and a notes column to the resources and fill in what actions need to be taken before moving them.

Migration Restrictions

The following are some of the restrictions that are likely to be encountered even if all the resources can be moved:

1. You can only move multiple resources if they are stored in the same resource group. It is not possible to select resources from multiple resource groups and move them.

2. When you move a virtual machine, dependent components such as the VNet its NIC is attached to also need to be moved, which means that if the VNet has a peering, the peering needs to be removed, potentially causing downtime.

3. You cannot move a virtual machine and the associated VNet without moving any other virtual machines that are attached to the same VNet.

4. Azure App Services must be placed back into the original resource groups where they were created. If you are unsure where an App Service was originally created, you can use the following cmdlets to retrieve the serverFarmId, which contains the path of the original resource group:

$checkapp = Get-AzResource -ResourceGroupName <current_apps_RG_name> -Name <app_name> -ResourceType "Microsoft.Web/sites"

$checkapp.Properties.serverFarmId

5. Any IaaS VMs with backups configured need to have them disabled and their restore point collections deleted (see the sketch after this list). Note that deleting the restore point collections does not mean the backup data will be deleted. Use the PowerShell cmdlet in the following document to bulk delete the restore points: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations#powershell

6. You cannot move a virtual machine without the public IP address that is attached to it. Note that it is possible to move Basic SKU public IP addresses but not Standard SKU public IP addresses.

7. You cannot move a VNet that is configured with an App Service with VNet integration.

8. When you move an App Service, you need to move its App Service Plan, which means you need to move all Apps in the same Apps Service Plan.

9. The destination subscription's resource group for an App Service to be moved cannot contain existing App Services.

10. When moving App Services or Functions, you cannot move them without moving other App Services or Functions in the same Resource Group.

11. You cannot move an App Service that has an uploaded but unbound SSL certificate to another subscription. In order to move the App Service, you will need to move the certificate as well, but the certificate object is not displayed in the resource group because it is a hidden resource, so enable the Show hidden types option to select the certificate.

There are too many dependencies for me to list, but careful analysis of the resources should reveal all of them.
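
For item #5 above, the following is a minimal sketch of bulk-deleting restore point collections with Az PowerShell, per the Microsoft document linked in that item (the resource group name is a placeholder; this removes the restore point collections, not the backup data in the vault):

# Delete all restore point collections in the resource group Azure Backup created for the VMs
Get-AzResource -ResourceGroupName "<restorePointCollectionRG>" -ResourceType "Microsoft.Compute/restorePointCollections" | Remove-AzResource -Force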

Group #2 - All resources can be moved but there are classic resources

The second group contains resources that can be moved but include classic resources that need to be upgraded to ARM before they can be moved. I won't go into the details of upgrading classic resources to ARM as there is plenty of documentation on how this can be done, but one of the scenarios I've come across is a client that had Cloud Apps in their subscription hosting an application that had already been upgraded for other divisions. Rather than attempting to upgrade the Cloud Apps to ARM, we simply planned to deploy the newer version of the application in parallel and then migrate over.

Group #3 - Not all resources can be moved

The last group, which likely applies to most production environments, contains resources that cannot be moved. The components I've typically come across are the following:

1. Load Balancers with a Standard SKU

2. Public IP address with a Standard SKU

3. Azure Front Door

4. Private Endpoints

5. Azure Key Vaults with CMK and ADE

6. Azure SQL Managed Instances

7. ExpressRoute

8. Azure Monitor and Alerts

9. Subscriptions with different tenants

10. MarketPlace items with a plan (e.g. F5, CheckPoint, Fortinet, Citrix)

Every non-migratable component will need to be planned around, and the following are a few options to provide ideas:

Load Balancers - Basic SKU load balancers can be migrated, but few production environments use basic SKU LBs. What I've typically done is deploy these resources in parallel, migrate one of the two, three or more backend servers to the destination subscription, add it to the new LB's backend pool, make the new LB live, and then migrate the remaining servers over. This can significantly reduce downtime.

Public IP Address - IPs with Standard SKUs cannot be migrated, but a new public IP address can be configured in the destination subscription, ready to be applied to the migrated resources. Another alternative is to create a basic SKU IP address, assign it to the resource, and move them together if the resource can be moved. More options are available for clusters, so be creative.

Azure Front Door - These can be deployed in parallel in the new subscription and pointed to, say, the App Services in the old subscription via public URL, allowing the App Services to be migrated over live.

Private Endpoints - These require more labour, as you'll need to change all of them to use service endpoints, migrate the resources, and then recreate the private endpoints.

Azure Key Vaults - These can be a pain to migrate if the environment uses CMK and ADE as you'll need to decrypt the resources prior to moving them.

Azure SQL Managed Instances - These managed instances can be a headache as they're expensive but building new instances in parallel is likely the way to go for most cases.

ExpressRoute - These require a lot of planning as it is best to deploy in parallel but aside from the cost, these links can serve many services' ingress and egress traffic to on-premises datacenters or offices. Network address planning will also be required.

Azure Monitor and Alerts - Some of these cannot be migrated and therefore will need to be recreated.

Subscriptions with different tenants - The source and destination subscriptions need to be associated with the same tenant, so a tenant transfer may be required if they are not. Note that when you transfer a subscription to a different Azure AD directory, some resources are not transferred to the target directory. For example, all role assignments and custom roles in Azure role-based access control (Azure RBAC) are permanently deleted from the source directory and are not transferred to the target directory.

Marketplace Items - These items can be laborious, as you'll need to identify them in the current subscription and ensure they are also offered in the CSP subscription. The native Azure tools I've had luck with for migrating these items are:

  • Disk snapshots: create snapshots of the VM disks, migrate the snapshots to the new subscription, create a new VM with the same Marketplace image, and finally swap its disks with the migrated disks.
  • Azure Site Recovery: I generally prefer this method if possible as it requires much less manual labour than snapshots.
  • Note that there are many instances where a migrated Marketplace resource such as a firewall will require the license to be reapplied. Some WAFs such as Citrix NetScalers have their licenses bound to the NIC's MAC address, so if that changes then a new license will need to be reallocated.

How well you plan around all the resources whether they can be migrated or not will dictate how successful the migration can be.

Clean up and delete unused resources

I’ve found that the environments I’ve worked in almost always have resources that are either orphaned or no longer needed and can therefore be deleted. Cleaning up resources and RGs will save a lot of time in the next step, so delete any resources identified as unused. Azure Resource Graph Explorer can help with identifying resources such as unattached disks, and the following is a query that can be used across all subscriptions:

resources
| where type =~ 'Microsoft.Compute/disks'
| where properties.diskState =~ 'Unattached'
| project name, resourceGroup, subscriptionId, location, tenantId

Grouping resources into resource groups

Once all resources have been reviewed, the next step is to start organizing them into the resource groups that will be migrated. Resources that need to be migrated together should be grouped into the same RG so all of them can be moved in one operation. Also don't forget that there are limitations on what the destination RG can contain depending on the source resources being moved into it (e.g. you can't move App Service resources into an RG that already has App Services).

Test Migration of Resources

At the time of this writing (June 19, 2021), the migration feature within the Azure portal randomly presents two behaviours: one instantly moves the selected resources if validation succeeds, while the other halts and waits for the administrator to interactively click a button to proceed. This will likely change in the future, but if it hasn't, use the following two resources I mentioned earlier to test the migration:

Morten Pedholt provides his experience, migration planning tips and how to call the validation API to determine whether resources can be migrated from one subscription to another:

https://pedholtlab.com/migrate-between-azure-subscriptions-like-a-pro/

Pantelis Apostolidis provides a way to use Postman to run the resource migration validation:

https://www.cloudcorner.gr/microsoft/azure/validate-azure-resource-move-with-postman/
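
As a rough alternative to Postman, the same validation API can be invoked with Az PowerShell's Invoke-AzRestMethod; the subscription IDs, resource group and resource names below are placeholders (a 202 response means the validation request was accepted, and the result is polled from the returned Location header):

# Ask ARM to validate a cross-subscription move without actually moving anything
$payload = @{
    resources = @("/subscriptions/<sourceSubId>/resourceGroups/<sourceRG>/providers/Microsoft.Compute/virtualMachines/<vmName>")
    targetResourceGroup = "/subscriptions/<destSubId>/resourceGroups/<destRG>"
} | ConvertTo-Json

Invoke-AzRestMethod -Method POST -Payload $payload -Path "/subscriptions/<sourceSubId>/resourceGroups/<sourceRG>/validateMoveResources?api-version=2021-04-01"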

It is best to also create a test environment with two subscriptions so you can test the migration of resources in the environment. I’ve performed some tests for moving resources in the following two blog posts:

Moving an Azure virtual machine and the Recovery Services Vault with its backups from one subscription to another
http://terenceluk.blogspot.com/2021/04/moving-azure-virtual-machine-and.html

Moving Azure App Service resources from one subscription to another subscription
http://terenceluk.blogspot.com/2021/04/moving-app-service-resources-from-one.html

Identify any dependencies on the resource ID

One of the common items that can get missed with subscription migrations is applications that integrate with Azure and reference resource IDs or subscription IDs. Given that the source and destination subscriptions are different, the resource IDs of all resources will change.

Another common consideration I come across is any integration with the EA portal (ea.azure.com) for billing information, as moving to CSP means the EA portal will no longer provide subscription information.

Using Azure Resource Graph Explorer to help with migration planning

I've found that one of the best tools available for gathering information is Azure Resource Graph Explorer, as you can query and filter results as you please.

List subscriptions with Classic Resources

You can quickly export a list of Classic Resources in one or all subscriptions that need to be upgraded/migrated to ARM with the following query:

Resources
| where type contains 'Microsoft.Classic'
| order by type asc

image

Look for Marketplace solutions
 
You can use the following query to query for marketplace solutions with plans:
 
resources
| where isnotnull(plan)
| sort by (type) asc

image

List non-Microsoft native resources

The following query can be used to look up non-Microsoft Azure native resources such as SendGrid, which would need to be recreated in the target subscription:
 
resources
| where type notcontains ('microsoft.')
| sort by (type) asc

image

List Load Balancers across subscriptions

The following query will assist with obtaining a list of load balancers across subscriptions:
 
resources
| where type contains 'Microsoft.Network/Loadbalancers'
| order by type asc

image

Retrieve list of VM disks configured with SSE with CMK

Obtaining a list of virtual machine disks configured with SSE with CMK Encryption:

resources
| where type =~ 'Microsoft.Compute/disks'
| where properties.encryption.type =~ 'EncryptionAtRestWithCustomerKey'

Retrieve list of VMs with disks configured with ADE

Obtain a list of virtual machine disks with Azure Disk Encryption (ADE) enabled, which would need to be disabled before migration:

resources
| where type =~ 'Microsoft.Compute/disks'
| where properties.encryptionSettingsCollection.enabled =~ 'true'

I've been very excited ever since Azure Resource Graph Explorer became available, as it makes viewing, filtering and searching for resources so convenient, and it has proven to be a very useful tool for analyzing subscriptions. Refer to the following documentation for more information:
 
Starter Resource Graph query samples
https://docs.microsoft.com/en-us/azure/governance/resource-graph/samples/starter?tabs=azure-cli
 
I'm sure there are more considerations that I may not have covered, but I hope this contributes to the limited planning information available for this type of project.

What are Proximity Placement Groups?


Proximity Placement Groups were welcomed by many organizations when Microsoft announced the preview (https://azure.microsoft.com/en-us/blog/introducing-proximity-placement-groups/) in July 2019 and finally GA (https://azure.microsoft.com/en-ca/blog/announcing-the-general-availability-of-proximity-placement-groups/) in December 2019. The concept isn’t in any way complex, but I wanted to write this post to demonstrate its use case for an OpenText Archive Center solution hosted on Azure that I was recently involved in. Before I begin, the following is the official documentation provided by Microsoft:

Proximity placement groups
https://docs.microsoft.com/en-us/azure/virtual-machines/co-location

The Scenario

One of the decisions we had to make at the beginning was how to deliver HA across Availability Zones in a region, but OpenText was not clear as to whether they supported clustering Archive Center across Availability Zones due to potential latency concerns. I do not believe that Microsoft publishes specific latency metrics for each region’s zones, but the general guideline I use is that latency across zones can be 2ms or less, as per the marketing material here:

https://azure.microsoft.com/en-ca/global-infrastructure/availability-zones/#faq

What is the latency perimeter for an Availability Zone?

We ensure that customer impact is minimal to none with a latency perimeter of less than two milliseconds between Availability Zones.

image

To make a long story short, what OpenText eventually provided as a requirement was a 4-node cluster, where 2 servers need to be in one zone and the other 2 can be in another. The servers located in the same zone must have the lowest latency possible, preferably being hosted in the same datacenter. The following is a diagram depicting the requirement:

image

Limitations of Availability Zones

With the above requirements in mind, simply deploying 2 nodes with the availability zone set to 1 and another 2 nodes with the availability zone set to 2 or 3 would not suffice, because of the following facts about Azure regions, zones and datacenters as Azure's footprint grows:

  1. Availability Zones can span multiple datacenters because each zone can contain more than one datacenter
  2. Scale sets can span multiple datacenters
  3. Availability Sets in the future can span multiple datacenters

Microsoft understands that organizations need a way to guarantee the lowest latency between VMs and therefore provides Proximity Placement Groups, a logical grouping used to guarantee that Azure compute resources are physically located close to each other. Proximity Placement Groups can also be useful for low latency between stand-alone VMs, availability sets, virtual machine scale sets, and multiple application tier virtual machines.
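
As a quick illustration of how a PPG is created and referenced when deploying a VM with Az PowerShell (the names, VM size and location below are hypothetical for this scenario):

# Create a proximity placement group in the region
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "RG-OpenText" -Name "PPG-ArchiveCenter-Zone1" -Location "canadacentral"

# Reference the PPG (and zone) when building the VM configuration so the VM is placed alongside the group
$vmConfig = New-AzVMConfig -VMName "ArchiveCenter01" -VMSize "Standard_D4s_v3" -Zone "1" -ProximityPlacementGroupId $ppg.Id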

Availability Zones with Proximity Groups

Leveraging Proximity Placement Groups (PPG) will guarantee that the 2 OpenText Archive Center cluster nodes in the same Availability Zone are also placed physically close together to provide the lowest latency. The following is a diagram that depicts this.

image

Note that the above diagram also includes two additional Document Pipeline servers that will also be grouped with each of the OpenText Archive Center servers.

Limitations of Proximity Placement Group

Proximity Placement Groups do have limitations, namely with VM SKUs that are considered exotic and may not be offered in every datacenter. Examples of these VMs are the N series with NVIDIA cards or large-sized VMs for SAP. When mixing exotic VM SKUs, it is best to power on the most exotic VM first, as the more popular VMs will likely be available in the same datacenter. In the event that Azure is unable to power on a VM in the same datacenter as the previously powered-on VMs, it will fail with the error message:

Oversubscribed allocation request
Stop allocate and try in reverse order

Another method of ensuring all VMs can be powered on is to use an ARM template to deploy and power on all of the VMs in the Proximity Placement Group together, as Azure will then locate a datacenter that can accommodate all of the VM SKUs.

Measuring Latency Across Availability Zones

When we think about testing latency between servers, the quickest method is to use PING, and while we'll get a millisecond latency metric in the results, it actually isn't an accurate way to measure latency. The Microsoft-recommended tools for testing and measuring latency are latte.exe (for Windows) and sockperf (for Linux), because they measure TCP and UDP delivery time, unlike PING, which uses ICMP. Both are described in the following article:

Test VM network latency
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-test-latency

The following is a demonstration of using latte.exe to obtain statistics for two Azure VMs hosted in different Azure Availability Zones in the Canada Central region.

image

I will begin by using two low-end Standard B1s (1 vcpu, 1 GiB memory) virtual machines.

Set Up Receiver VM

Begin by logging onto the receiving VM and opening the firewall for the latte.exe tool:

netsh advfirewall firewall add rule program=c:\temp\latte.exe name="Latte" protocol=any dir=in action=allow enable=yes profile=ANY

image

The Latte application should be shown in the Allowed apps and features list upon successfully executing the netsh command:

image
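
Once testing is complete, the firewall rule can optionally be removed with the matching delete command:

netsh advfirewall firewall delete rule name="Latte"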

On the receiver, start latte.exe (run it from the CMD window, not from PowerShell):

latte -a <Receiver IP address>:<port> -i <iterations>
latte -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100

The parameters are as follows:

  • Use -a to specify the IP address and port
  • Use the IP address of the receiving VM
  • Any available port number can be used (this example uses 5005)
  • Use -i to specify the iterations
  • Microsoft documentation indicates that around 65,000 iterations is long enough to return representative results
image

Set Up Sender VM

On the sender, start latte.exe (run it from the CMD window, not from PowerShell):

latte -c -a <Receiver IP address>:<port> -i <iterations>

The resulting command is the same as on the receiver, except with the addition of -c to indicate that this is the client, or sender:

latte -c -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100

image

Results

Wait for a minute or so for the results to be displayed:

Sender:

C:\Temp>latte -c -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100
Latency(usec) 2026.40
CPU(%) 5.2
CtxSwitch/sec 1382 (2.80/iteration)
SysCall/sec 3678 (7.45/iteration)
Interrupt/sec 1100 (2.23/iteration)
C:\Temp>

image

Receiver:

C:\Temp>latte -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100
Latency(usec) 2026.51
CPU(%) 2.2
CtxSwitch/sec 1140 (2.31/iteration)
SysCall/sec 1524 (3.09/iteration)
Interrupt/sec 1068 (2.17/iteration)
C:\Temp>

image

Sender and receiver metrics side by side:

Note that the latency is labeled as Latency(usec), which is in microseconds, and the result of 2130.93 usec is about 2 ms.

image

Next, I will change the VM size from the low-end Standard B1s (1 vcpu, 1 GiB memory) to the D series Standard D2s v3 (2 vcpus, 8 GiB memory).

Notice that the latency, at 1546 usec, is better with the D series:

image

However, changing the VM size to a higher Standard D4s v3 (4 vcpus, 16 GiB memory) actually yields slower results at 1840.29 usec. This is likely due to fluctuations in the connectivity speed between the datacenters.

image

Accelerated Networking

Accelerated Networking is one of the recommendations provided in the Proximity Placement Group documentation. Not all VM sizes are capable of accelerated networking, but the Standard D4s v3 (4 vcpus, 16 GiB memory) supports it, so the following is a test with it enabled.

image

I have validated that my operating system is one of the supported operating systems. Note the warning: if connectivity to your VM is disrupted due to an incompatible OS, accelerated networking can be disabled from the same setting and the connection will resume.

image

image
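
For those who prefer to script the change rather than use the portal, the following is a minimal Az PowerShell sketch for enabling accelerated networking on an existing NIC (the NIC and resource group names are placeholders I made up; the VM should be deallocated first and its size must support the feature):

# Retrieve the NIC, flip the accelerated networking flag, and apply the change
$nic = Get-AzNetworkInterface -Name "vm-latency-test-nic" -ResourceGroupName "rg-latency-test"
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzNetworkInterface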

Note that the latency has decreased to 1364.43 usec after enabling accelerated networking:

image

Latency within the same Availability Zone

The following are tests with two Standard D4s v3 (4 vcpus, 16 GiB memory) without accelerated networking VMs in the same availability zone.

Latency is 308.55 usecs.

image

The following are tests with two Standard D4s v3 (4 vcpus, 16 GiB memory) with accelerated networking.

The latency significantly improves to 55 to 61.09 usecs.

image

Same Availability Zone with Proximity Placement Group

The following are tests with two Standard D4s v3 (4 vcpus, 16 GiB memory) with accelerated networking VMs in the same availability zone with Proximity Placement Group configured.

Create the Proximity Placement Group:

image

image

image

Add the VMs to the Proximity Placement Group:

image

image

The latency results are more or less the same, though the 3 tests with PPG configured are slightly lower at 54 to 57 usec:

image

Summary

The last two tests, where the results for two VMs in the same availability zone without PPG and with PPG are more or less the same, should not discourage you from using PPG, because the VMs were likely already powered on in the same datacenter. Using a Proximity Placement Group will guarantee that this is the case every time the VMs are powered off and back on.

The sample size of the tests I performed isn't large enough to claim that the results are conclusive, but I hope it gives a general idea of the latency improvements with Accelerated Networking and Proximity Placement Groups.

If you would like to learn more about real world applications and Proximity Placement Groups, the following SAP on Azure article is a good read:

Azure proximity placement groups for optimal network latency with SAP applications

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md

Azure Site Recovery replication for a Windows 2008 R2 server fails with: "Installation of mobility agent has failed as SHA-2 code signing is not supported on the current Microsoft Windows Server 2008 R2 Standard OS version"

As much as Windows Server 2008 R2 has reached end of support, I still periodically come across these servers when working with clients, and one of the common scenarios I've had to deal with is replicating them from an on-premise network to Microsoft Azure with Azure Site Recovery. Below is an issue that I've seen quite a few times, so I'd like to write this quick blog post to describe the problem and the steps to remediate it.

Problem

You’re trying to replicate an on-premise Windows 2008 R2 server that has Service Pack 1 installed to Azure with Azure Site Recovery:

image

However, the installation of the mobility service fails:

image

The specific Error Details for the server are as follows:

----------------------------------------------------------------------------------------------------------------------------

Error Details

Installing Mobility Service and preparing target

· Error ID

78007

· Error Message

The requested operation did not complete.

· Provider error

Provider error code: 95560 Provider error message: Installation of mobility agent has failed as SHA-2 code signing is not supported on the current Microsoft Windows Server 2008 R2 Standard OS version. Provider error possible causes: For successful installation, mobility service requires SHA-2 support as SHA-1 is deprecated from September 2019. Provider error recommended action: Update your Microsoft Windows Server 2008 R2 Standard operating system with the following KB articles and then retry the operation. Servicing stack update (SSU) https://support.microsoft.com/en-us/help/4490628 SHA-2 update https://support.microsoft.com/en-us/help/4474419/sha-2-code-signing-support-update Learn more (https://aka.ms/asr-os-support)

· Possible causes

Check the provider error for more details.

· Recommendation

Resolve the issue as recommended in the provider error details.

· Related links

o https://support.microsoft.com/en-us/help/4490628

o https://support.microsoft.com/en-us/help/4474419/sha-2-code-signing-support-update

o https://aka.ms/asr-os-support

· First Seen At

7/22/2021, 9:28:00 PM

----------------------------------------------------------------------------------------------------------------------------

image

The Error Details provide the suggestion to download and install KB4490628, but when you attempt to do so, the installation wizard indicates the update is already installed on the server:

https://support.microsoft.com/en-us/help/4490628

AMD64-all-windows6.1-kb4490628-x64_d3de52d6987f7c8bdc2c015dca69eac96047c76e.msu

image

Solution

I’ve come across the following 2 scenarios for this:

  1. The update KB4490628 indicated above has been installed
  2. The update KB4490628 indicated above has not been installed

Regardless of which of the above scenarios applies to the problematic server, the first step is to download the following KB4474419 update and install it:

2019-09 Security Update for Windows Server 2008 R2 for x64-based Systems (KB4474419)

AMD64-all-windows6.1-kb4474419-v3-x64_b5614c6cea5cb4e198717789633dca16308ef79c.msu

image

image

Once the update has been installed and the server has been restarted, proceed to try installing the suggested KB4490628. If it had already been installed then the installer will not continue, but if it hadn't, the installation will proceed, complete, and not require a restart.
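
Before retrying the replication, a quick way to confirm whether the two updates are present on the server is a simple PowerShell check:

# List the SSU (KB4490628) and SHA-2 (KB4474419) updates if installed; no output means they are absent
Get-HotFix -Id KB4490628, KB4474419 -ErrorAction SilentlyContinue | Select-Object HotFixID, Description, InstalledOn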

With the above completed, the Microsoft Azure Site Recovery Mobility Service/Master Target Server should now install successfully and the Enable replication job should complete successfully:

image

With the required updates installed, the deployment of the Mobility Service agent should succeed and the replication job should complete:

image

Hope this helps anyone who may be encountering this issue.


Configuring Microsoft Azure AD Single Sign-On (SSO) for Citrix ShareFile

I recently had an ex-colleague reach out to me about configuring the integration between Citrix ShareFile and Azure Active Directory (Azure AD), as he was required to configure SAML authentication for a Citrix ShareFile portal so that it would use Azure AD as the IdP. The official documentation can be found here:

How to Configure Single Sign-On (SSO) for ShareFile
https://support.citrix.com/article/CTX208557

Tutorial: Azure Active Directory integration with Citrix ShareFile
https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sharefile-tutorial

However, the documentation wasn't extremely clear on some of the steps, and other available blog posts reference the older Azure portal, so I thought writing this post may help anyone looking for updated information.

Step #1 – Adding Citrix ShareFile as an Enterprise Application

Begin by logging into portal.azure.com for the tenant that will be providing Azure AD as the IdP, then navigate to Azure Active Directory > Enterprise Applications:

image

Click on New application:

image

Search for Citrix ShareFile and then click on the tile:

image

A window will slide out from the right to display the application, proceed to click on the Create button:

image

image

The creation of the application will take a few minutes and eventually finish:

image

Step #2 – Configure Azure ShareFile Enterprise Application

Proceed to navigate into the Single sign-on configuration in the ShareFile Enterprise Application:

image

Click on the SAML tile:

image

The SAML configuration will be displayed:

image

Click on the Edit button for the Basic SAML Configuration:

image

Remove the default Identifier (Entity ID) configuration:

image

Enter the following for the configuration and then save it:

Identifier (Entity ID):

https://<customDomain>.sharefile.com/saml/info (set this as the default)

https://<customDomain>.sharefile.com

Reply URL (Assertion Consumer Service URL):

https://<customDomain>.sharefile.com/saml/acs

Sign on URL:

https://<customDomain>.sharefile.com/saml/login

Relay State:

Leave blank.

Logout Url:

Leave blank.

image

image

Saving the settings will now display the new configuration:

image

You will be prompted to test the single sign-on settings upon successfully configuring the SAML configuration but given that we have not configured ShareFile yet, select No, I’ll test later:

image

Scroll down and locate the certificate download link labeled as:

Certificate (Base64) Download

Download the certificate and then proceed to expand the Configuration URLs and copy the values for the following to somewhere like Notepad:

  • Login URL
  • Azure AD Identifier
  • Logout URL
image

*Note that the Login URL and Logout URL values are the same and the following is a sample:

https://login.microsoftonline.com/97f1d4b7-d6e7-4ebb-842d-cce6024b0bb3/saml2
https://sts.windows.net/87f1d4b7-d6e7-4ebb-842d-cce6024b0bb2/
https://login.microsoftonline.com/97f1d4b7-d6e7-4ebb-842d-cce6024b0bb3/saml2

Step #3 – Grant Azure AD user access to ShareFile

The next step is to grant permissions to the users and groups who will be logging into ShareFile with their Azure AD credentials. Failure to do so will throw an error indicating the user logging on is not assigned to a role for the application.

From within the Citrix ShareFile Enterprise Application, navigate to Users and groups then click on the Add user/group button:

image

Use the Users and groups link to select either a test user or a group that will log into ShareFile with their Azure AD credentials (I will use my user account for this example) and then use the Select a role link to configure a role. The Microsoft documentation indicates we can leave none selected as Default Access will automatically be configured, but I've found that the Assign button does not become active until a role is selected. Other documentation I was able to find indicates the Employee role should be configured, so proceed with using Employee as the role:

image

Proceed by clicking the Assign button:

image

Notice that my account is now assigned:

image

Step #4 – Configure ShareFile for Single sign-on / SAML 2.0 Configuration

With Azure AD configured, proceed to log into the ShareFile portal as an administrator, then navigate to Settings > Admin Settings > Security > Login & Security Policy:

image

Scroll down to the Single sign-on / SAML 2.0 Configuration section and select Yes for Enable SAML:

image

Proceed by opening the Notepad file with the Configuration URLs that were copied from Azure:

  • Login URL
  • Azure AD Identifier
  • Logout URL

As well as opening the downloaded Citrix ShareFile.cer Certificate (Base64):

image

Fill in the following fields:

Field: ShareFile Issuer / Entity ID
Value: https://<customDomain>.sharefile.com/saml/info

Field: Your IDP Issuer / Entity ID
Value: Azure AD Identifier (example: https://sts.windows.net/87f1d4b7-d6e7-4ebb-942d-cce6024b0bb2/)

Field: X.509 Certificate
Value: Paste the certificate content from the downloaded Citrix ShareFile.cer Certificate into the configuration.

Field: Login URL
Value: Login URL from Azure (example: https://login.microsoftonline.com/87f1d4b5-d6e7-4ebb-842d-cce6024b0bb2/saml2)

Field: Logout URL:
Value: Logout URL from Azure (example: https://login.microsoftonline.com/87f1d4b5-d6e7-4ebb-842d-cce6024b0bb2/saml2)

image

Scroll down to the Optional Settings section:

image

Locate the SP-Initiated Auth Context configuration:

image

Change the configuration to User Name and Password, select Exact for the field to the right, and save the settings:

image

Step #5 – Set up user as Employee in ShareFile

The next step is to set up the corresponding test user or ShareFile users in ShareFile. This environment uses on-premise Active Directory accounts, which are synced into Azure AD and the method I used to configure the accounts in ShareFile is the ShareFile User Management Tool (https://support.citrix.com/article/CTX214038). I will not be demonstrating the process in this post.

Step #6 – Test SSO with SAML

The final step is to test SSO to ensure that the configuration is correct. We can begin by using the Test this application button in the Citrix ShareFile Enterprise Application in Azure portal:

image

image

A successful test will display the following:

image

Next, navigate to the ShareFile login portal and you will notice the additional Company Employee Sign In option for logging in:

image

Proceed to login and confirm that the process is successful.

Automating Azure Site Recovery Recovery Plan Test Failover with PowerShell Script (on-premise VMs to Azure)

I've recently been asked by a colleague whether I had any PowerShell scripts that would automate the test failover and cleanup of Azure Site Recovery replicated VMs. My original thought was that there must be plenty of scripts available on the internet, but I quickly found that the Google results were either the official Microsoft documentation that goes through configuring ASR, replicating, and failing over only one VM (https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-powershell), or blog posts that provided bits and pieces of information and not a complete script.

Having been involved in Azure Site Recovery design, implementation and testing, I have created a PowerShell script to initiate the failover of a recovery plan and then perform the cleanup when the DR environment has been tested. This post serves to share the script that I use and I would encourage anyone who decides to use it to improve and customize the script as needed.

Environment

The environment this script will be used for has an on-premise source and a target in Azure's East US region. The source virtual machines are hosted on VMware vSphere.

Requirements

  1. Account with appropriate permissions that will be used to connect to the tenant with the Connect-AzAccount PowerShell cmdlet
  2. Recovery Plan already configured (we’ll be initiating the Test failover on the Recovery Plan and not individual VMs).
  3. The Subscription ID containing the servers being replicated
  4. The name of the Recovery Services Vault containing the replicated VMs
  5. The Recovery Plan name that will be failed over
  6. The VNet name that will be used for the failover VMs

Script Process

  1. Connect to Azure with Connect-AzAccount
  2. Set the context to the subscription ID
  3. Initiates the Test Failover task for the recovery plan
  4. Wait until the Test Failover has completed
  5. Notify user that the Test Failover has completed
  6. Pause and prompt the user to cleanup the failover test VMs
  7. Proceed to clean up Test Failover
  8. End script

I have plans in the future to add additional improvements such as accepting a subscription ID upon execution, providing recovery plan selection for failover testing, or listing failed-over VM details (I can't seem to find a cmdlet that displays the list of VMs and their status in a specified Recovery Plan).

Script Variables

$RSVaultName = <name of Recovery Services Group> - e.g. "rsv-us-eus-contoso-asr"

$ASRRecoveryPlanName = <name of Recovery Plan> - e.g. "Recover-Domain-Controllers"

$TestFailoverVNetName = <Name of VNet name in the failover site the VM is to be connected to> - e.g. "vnet-us-eus-dr"

The Script

The following is the script:

Connect-AzAccount

Set-AzContext -SubscriptionId "adae0952-xxxx-xxxx-xxxx-2b8ef42c9bbb"

$RSVaultName = "rsv-us-eus-contoso-asr"

$ASRRecoveryPlanName = "Recover-Domain-Controllers"

$TestFailoverVNetName = "vnet-us-eus-dr"

$vault = Get-AzRecoveryServicesVault -Name $RSVaultName

Set-AzRecoveryServicesAsrVaultContext -Vault $vault

$RecoveryPlan = Get-AzRecoveryServicesAsrRecoveryPlan -FriendlyName $ASRRecoveryPlanName

$TFOVnet = Get-AzVirtualNetwork -Name $TestFailoverVNetName

$TFONetwork= $TFOVnet.Id

#Start test failover of recovery plan

$Job_TFO = Start-AzRecoveryServicesAsrTestFailoverJob -RecoveryPlan $RecoveryPlan -Direction PrimaryToRecovery -AzureVMNetworkId $TFONetwork

do {

    # Query the failover job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOState = Get-AzRecoveryServicesAsrJob -Job $Job_TFO | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of Failover job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Failover ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Failover status for $($Job_TFO.TargetObjectName) is $($Job_TFOState.state)"

    Start-Sleep 5

} while (($Job_TFOState.state -eq "InProgress") -or ($Job_TFOState.state -eq "NotStarted"))

if($Job_TFOState.state -eq "Failed"){

Write-host("The test failover job failed. Script terminating.")

Exit

}else {

Read-Host -Prompt "Test failover has completed. Please check ASR Portal, test VMs and press enter to perform cleanup..."

#Start test failover cleanup of recovery plan

$Job_TFOCleanup = Start-AzRecoveryServicesAsrTestFailoverCleanupJob -RecoveryPlan $RecoveryPlan -Comment "Testing Completed"

do {

    # Query the cleanup job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOCleanupState = Get-AzRecoveryServicesAsrJob -Job $Job_TFOCleanup | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of cleanup job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Cleanup ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Cleanup status for $($Job_TFO.TargetObjectName) is $($Job_TFOCleanupState.state)"

    Start-Sleep 5

} while (($Job_TFOCleanupState.state -eq "InProgress") -or ($Job_TFOCleanupState.state -eq "NotStarted"))

Write-Host "Test failover cleanup completed."

}

image

The following are screenshots of the PowerShell script output:

image

I hope this will help anyone out there who may be looking for a PowerShell script to automate the ASR failover process.

One of the additions I wanted to make to this script was to list the status of the VMs in the recovery plan after the test failover has completed, but I could not find a way to list only the VMs that belong to the recovery plan. The cmdlets below list all of the VMs that are protected, but combing through the properties does not turn up any reference to what recovery plans they belong to. Please feel free to comment if you happen to know the solution.

$PrimaryFabric = Get-AzRecoveryServicesAsrFabric -FriendlyName svr-asr-01

#svr-asr-01 represents Configuration Servers

$PrimaryProtContainer = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $PrimaryFabric

$ReplicationProtectedItem = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $PrimaryProtContainer
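
One avenue that may be worth exploring (an untested sketch on my part, assuming the recovery plan object returned by Get-AzRecoveryServicesAsrRecoveryPlan exposes its groups and their protected items as properties):

# Untested: the plan object appears to carry a Groups collection, and each group
# may list the replication protected items placed in it
$RecoveryPlan = Get-AzRecoveryServicesAsrRecoveryPlan -FriendlyName "Recover-Domain-Controllers"
foreach ($group in $RecoveryPlan.Groups) {
    $group.ReplicationProtectedItems | Select-Object FriendlyName, ID
}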

----------Update July 31, 2021---------

After reviewing some of my old notes, I managed to find another version of the PowerShell script that performs a test failover for two plans and includes steps to shut down a VM and remove the VNet peering between the production and DR regions before the test failover, then recreate them afterwards. The following is a copy of the script:

Connect-AzAccount

Set-AzContext -SubscriptionId "53ea69af-xxx-xxxx-a020-xxxxea02f8b"

#Shutdown DC2

Write-Host "Shutting down DC2 VM in DR"

$DRDCName = "DC2"

$DRDCRG = "Canada-East-Prod"

Stop-AzVM -ResourceGroupName $DRDCRG -Name $DRDCName -force

#Declare variables for DR production VNet

$DRVNetName = "vnet-prod-canadaeast"

$DRVnetRG = "Canada-East-Prod"

$DRVNetPeerName = "DR-to-Prod"

$DRVNetObj = Get-AzVirtualNetwork -Name $DRVNetName

$DRVNetID = $DRVNetObj.ID

#Declare variables for Production VNet

$ProdVNetName = "Contoso-Prod-vnet"

$ProdVnetRG = "Contoso-Prod"

$ProdVNetPeerName = "Prod-to-DR"

$ProdVNetObj = Get-AzVirtualNetwork -Name $ProdVNetName

$ProdVNetID = $ProdVNetObj.ID

# Remove the DR VNet's peering to production

Write-Host "Removing VNet peering between Production and DR environment"

Remove-AzVirtualNetworkPeering -Name $DRVNetPeerName -VirtualNetworkName $DRVNetName -ResourceGroupName $DRVnetRG -force

Remove-AzVirtualNetworkPeering -Name $ProdVNetPeerName -VirtualNetworkName $ProdVNetName -ResourceGroupName $ProdVnetRG -force

#Failover Test for Domain Controller BREAZDC2

$RSVaultName = "rsv-asr-canada-east"

$ASRRecoveryPlanName = "Domain-Controller"

$TestFailoverVNetName = "vnet-prod-canadaeast"

$vault = Get-AzRecoveryServicesVault -Name $RSVaultName

Set-AzRecoveryServicesAsrVaultContext -Vault $vault

$RecoveryPlan = Get-AzRecoveryServicesAsrRecoveryPlan -FriendlyName $ASRRecoveryPlanName

$TFOVnet = Get-AzVirtualNetwork -Name $TestFailoverVNetName

$TFONetwork= $TFOVnet.Id

$Job_TFO = Start-AzRecoveryServicesAsrTestFailoverJob -RecoveryPlan $RecoveryPlan -Direction PrimaryToRecovery -AzureVMNetworkId $TFONetwork

do {

    # Query the failover job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOState = Get-AzRecoveryServicesAsrJob -Job $Job_TFO | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of Failover job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Failover ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Failover status for $($Job_TFO.TargetObjectName) is $($Job_TFOState.state)"

    Start-Sleep 5

} while (($Job_TFOState.state -eq "InProgress") -or ($Job_TFOState.state -eq "NotStarted"))

if($Job_TFOState.state -eq "Failed"){

Write-host("The test failover job failed. Script terminating.")

Exit

}else {

#Failover Test for Remaining Servers

$ASRRecoveryPlanName = "DR-Servers"

$RecoveryPlan = Get-AzRecoveryServicesAsrRecoveryPlan -FriendlyName $ASRRecoveryPlanName

$Job_TFO = Start-AzRecoveryServicesAsrTestFailoverJob -RecoveryPlan $RecoveryPlan -Direction PrimaryToRecovery -AzureVMNetworkId $TFONetwork

do {

    # Query the failover job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOState = Get-AzRecoveryServicesAsrJob -Job $Job_TFO | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of Failover job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Failover ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Failover status for $($Job_TFO.TargetObjectName) is $($Job_TFOState.state)"

    Start-Sleep 5

} while (($Job_TFOState.state -eq "InProgress") -or ($Job_TFOState.state -eq "NotStarted"))

if($Job_TFOState.state -eq "Failed"){

Write-host("The test failover job failed. Script terminating.")

Exit

    }else {

Read-Host -Prompt "Test failover has completed. Please check ASR Portal, test VMs and press enter to perform cleanup..."

$Job_TFOCleanup = Start-AzRecoveryServicesAsrTestFailoverCleanupJob -RecoveryPlan $RecoveryPlan -Comment "Testing Completed"

do {

    # Query the cleanup job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOCleanupState = Get-AzRecoveryServicesAsrJob -Job $Job_TFOCleanup | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of cleanup job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Cleanup ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Cleanup status for $($Job_TFO.TargetObjectName) is $($Job_TFOCleanupState.state)"

    Start-Sleep 5

} while (($Job_TFOCleanupState.state -eq "InProgress") -or ($Job_TFOCleanupState.state -eq "NotStarted"))

$ASRRecoveryPlanName = "Domain-Controller"

$RecoveryPlan = Get-AzRecoveryServicesAsrRecoveryPlan -FriendlyName $ASRRecoveryPlanName

$Job_TFOCleanup = Start-AzRecoveryServicesAsrTestFailoverCleanupJob -RecoveryPlan $RecoveryPlan -Comment "Testing Completed"

do {

    # Query the cleanup job state inside try/catch so a failed lookup is caught
    try {
        $Job_TFOCleanupState = Get-AzRecoveryServicesAsrJob -Job $Job_TFOCleanup | Select-Object State
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of cleanup job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        exit
    }

    Clear-Host

    Write-Host "======== Monitoring Cleanup ========"

    Write-Host "Status will refresh every 5 seconds."

    Write-Host "Cleanup status for $($ASRRecoveryPlanName) is $($Job_TFOCleanupState.state)"

    Start-Sleep 5

} while (($Job_TFOCleanupState.state -eq "InProgress") -or ($Job_TFOCleanupState.state -eq "NotStarted"))

Write-Host "Test failover cleanup completed."

}

}

#Create the DR VNet's peering to production

Write-Host "Recreating VNet peering between Production and DR environment after failover testing"

Add-AzVirtualNetworkPeering -Name $DRVNetPeerName -VirtualNetwork $DRVNetObj -RemoteVirtualNetworkId $ProdVNetID -AllowForwardedTraffic

Add-AzVirtualNetworkPeering -Name $ProdVNetPeerName -VirtualNetwork $ProdVNetObj -RemoteVirtualNetworkId $DRVNetID -AllowForwardedTraffic

#Power On DC2

Write-Host "Powering on DC2 VM in DR after testing"

Start-AzVM -ResourceGroupName $DRDCRG -Name $DRDCName

Performing the Initial Setup of the Citrix FAS Administration Console fails at Authorize this service with: "Failed to Issue certificate: CR_DISP_DENIED (code 2)"

Problem

You're attempting to set up a Citrix Federated Authentication Service (FAS) server to allow using Azure AD authentication with single sign-on, but the configuration fails at the Authorize this service step with the error:

The authorization request on <CertServerFQDN>\<CA Name> failed: Failed to Issue certificate: CR_DISP_DENIED (code 2).

image

Reviewing the Certification Authority management console’s Pending Requests does not show the expected pending request and reviewing the Failed Requests show the FAS server request being denied:

image

image

Request Status Code: The requested certificate template is not supported by this CA. 0x80094800 (-2146875392)

Request Disposition Message: Denied by Policy Module 0x80094800, The request was for a certificate template that is not supported by the Active Directory Certificate Services policy: Citrix_RegistrationAuthority_ManualAuthorization.

Attempting to manually enroll from the certificates console for the certificate also fails:

image

Solution

One of the reasons why the authorization of the FAS server would fail is if the permissions for the Citrix_RegistrationAuthority_ManualAuthorization template are not configured properly. Begin by launching the Certificate Templates Console on the CA against which the FAS server is attempting to be authorized and open the properties of the Citrix_RegistrationAuthority_ManualAuthorization template:

image

Navigate to the Security tab and verify that the Authenticated Users group has Read permissions:

image

Domain Computers has Read and Enroll:

image

With the required permissions in place, attempt to authorize the server again and the status should now display:

There is a pending authorization request on <CertServerFQDN>\<CA Name>.

image

Navigate into the Certification Authority management console’s Pending Requests and you should now see the following pending request:

The operation completed successfully. 0x0 (WIN32:0)

image

Proceed to authorize the pending request and the Authorize this service step should now complete:

image

Started GitHub!

Many of my contacts have joked with me about how I need to modernize the way I provide scripts in my blog posts as I do not:

#1 - Use special code boxes with proper formatting and colour coding
#2 - Use GitHub to better manage the updates and enable collaboration

To everyone who has made the above comments to me: You are absolutely correct so I have created a GitHub account, uploaded my most recent Azure Site Recovery PowerShell Failover Scripts and will continue to add my old scripts to my account:

https://github.com/terenceluk

I want to thank everyone who has reached out to me over the years with suggestions or called out any mistakes I've made in my posts. Every message is appreciated and I hope to continue contributing to the community.

image

Granting a CSP Foreign Principal the Reader or Owner role onto an Azure Subscription with PowerShell

Those who have worked at a Cloud Solution Provider (CSP) organization with access to the MSP Expert tool for migrating subscriptions from EA to CSP will know that one of the post-migration steps is to grant the Foreign Principal representing the CSP either the Owner or Reader role on the migrated subscriptions. This step can be easily missed because you can see the subscriptions with their billing details in the Partner portal, but you will quickly notice that you cannot open tickets for the migrated subscriptions because they are not present when you're logged into the client's Azure portal as the CSP, due to the lack of permissions.

Attempting to apply the permissions from within https://portal.azure.com will reveal that you cannot simply click into the IAM blade to assign the Owner or Reader role because the Foreign Principal will not be searchable. The only way to perform this operation is to use PowerShell, and the present Microsoft documentation doesn't appear to have instructions for this specific operation (this is the closest one I could find: https://docs.microsoft.com/en-us/partner-center/revoke-reinstate-csp?tabs=workspaces-view), so I would like to share the cmdlets and a script for it.

PowerShell Cmdlets

The following assumes that you have an existing subscription with the Foreign Principal assigned with a role to it as we’ll need to retrieve the ObjectID of the Foreign Principal from a subscription then use it to assign to another subscription. If there isn’t a subscription with the Foreign Principal already assigned with a role, you can temporarily provision a new subscription from the Partner Central portal in the Azure Plan.

1. Install and import the Az module if it is not present:

Install-Module -Name Az

Import-Module -Name Az

2. Connect to the Azure tenant:

Connect-AzAccount

3. Obtain the subscription ID of a subscription that already has the Foreign Principal ID assigned a role:

Get-AzSubscription

4. Create a variable to store the subscription ID that has the Foreign Principal ID assigned a role:

$subscriptionID = "/subscriptions/<replaceWithSubID>"

5. Retrieve the Foreign Principal object along with its attributes and values:

Get-AzRoleAssignment -Scope $subscriptionID | Where-Object {$_.DisplayName -match "Foreign Principal*"}

6. Create a variable to store the Foreign Principal’s ObjectID value:

$foreignPrincipalObjectID = Get-AzRoleAssignment -Scope $subscriptionID | Where-Object {$_.DisplayName -match "Foreign Principal*"} | Select -expand ObjectID

7. Retrieve the subscription ID of the subscription that you want to grant the Foreign Principal permissions:

Get-AzSubscription

8. Update the variable to store the subscription ID that you want to assign the Foreign Principal:

$subscriptionID = "/subscriptions/<replaceWithSubID>"

9. Grant the Foreign Principal either Reader or Owner permissions to the subscription:

New-AzRoleAssignment -ObjectId $foreignPrincipalObjectID -Scope $subscriptionID -RoleDefinitionName Reader -ObjectType "ForeignGroup"

**Note that I've had mixed results with the -ObjectType parameter. Some tenants appear to require it and some do not. If an error is thrown indicating the ObjectType is unknown, remove the -ObjectType "ForeignGroup" parameter.

PowerShell Script

I’ve created a PowerShell script that will perform the following:

1. Use the Connect-AzAccount to connect to Azure

2. Retrieve the list of subscriptions and prompt the user to select one that already has the Foreign Principal assigned with a role

3. Retrieve the list of Foreign Principals assigned to the subscription and prompt the user to select the one that will be used to assign a role for another subscription

4. Retrieve the list of subscriptions again and prompt the user to select the one to which the Foreign Principal will be assigned a role

5. Prompt the user to choose whether to assign the Foreign Principal the Owner or Reader role

6. Prompt the user to confirm the assignment with Y or N

7. Output the result

This AssignSubscriptionRoleToForeignIdentity.ps1 script can be found on my GitHub account: https://github.com/terenceluk/Azure-CSP/tree/main/Role-Assigment

--------------------------------------------------------------------------------------------------------------------------

The following is a screenshot of a subscription that has the Tech Data Corporation Foreign Principal assigned the Owner role:

image

The following is a screenshot after using the script to assign the same Tech Data Corporation Foreign Principal the Reader role:

image

I hope this will help anyone who might be looking for a demonstration on how to do this.

Configuring Security Headers to secure Microsoft Active Directory Federation Services / AD FS for scoring an A on SecurityHeaders.com

I've recently been asked by a colleague who wanted to know how they can score an A+ on www.securityheaders.com with a Windows Server 2019 AD FS WAP server that is exposed to the internet. It has been a while since I've configured one, so I had to dig up my old notes and thought it would be great to write this quick post with the headers that achieve an A score. The reason an A+ is not possible is that it would require the Content-Security-Policy header to exclude the values:

  1. 'unsafe-inline'
  2. 'unsafe-eval'

… and excluding these would throw the following error:

JavaScript required

JavaScript is required. This web browser does not support JavaScript or JavaScript in this web browser is not enabled.

To find out if your web browser supports JavaScript or to enable JavaScript, see web browser help.

image

The following are the configuration for headers that I’ve used in the past to score an A (these are executed on the internal AD FS server and not on the WAP):

Set-AdfsResponseHeaders -SetHeaderName "Content-Security-Policy" -SetHeaderValue "default-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self'"

Set-AdfsResponseHeaders -SetHeaderName "Strict-Transport-Security" -SetHeaderValue "max-age=157680000; includeSubDomains"

Set-AdfsResponseHeaders -SetHeaderName "X-XSS-Protection" -SetHeaderValue "1;mode=block"

Set-AdfsResponseHeaders -SetHeaderName "X-Content-Type-Options" -SetHeaderValue "nosniff"

Set-AdfsResponseHeaders -SetHeaderName "Referrer-Policy" -SetHeaderValue "no-referrer"

Set-AdfsResponseHeaders -SetHeaderName "Permissions-Policy" -SetHeaderValue "geolocation=(), microphone=(), fullscreen=(self), vibrate=(self)"

Note that X-Frame-Options is already set to DENY by the AD FS server so there is no need to configure it. Use the following cmdlet to review the settings:

Get-AdfsResponseHeaders | Select-Object -ExpandProperty ResponseHeaders

image

Hope this helps anyone who may be looking for these headers.

Enable Security Authentication for NVIDIA License Server Manager

Those who have worked with the NVIDIA License Server Manager that is deployed to provide virtual desktops with GPU licenses will quickly notice that the default install does not provide any security for the management console: navigating to the URL http://localhost:8080/licserver/ will bring you straight into the console without authentication. Given that I've been asked many times in the past about securing this portal, this post serves to demonstrate the process.

Enabling the requirement for logging in as shown in the screenshot below cannot be done via the GUI:

image

To enable the authentication requirement, we'll need to use nvidialsadmin.bat via the command line. It can be found in the directory C:\NVIDIA\LicenseServer\enterprise on the licensing server:

image

Enable Security for the NVIDIA License Server Manager

1. Begin by launching a command prompt as an administrator and navigating to the directory: C:\NVIDIA\LicenseServer\enterprise

2. Execute the following command to set the security flag as true:

nvidialsadmin.bat -server http://127.0.0.1:7070 -config -set security.enabled=true

image

3. Next, execute the following command with the default password for the admin account (Admin@123) and set the new password (Update the New-Password1 value to the password desired):

nvidialsadmin.bat -server http://127.0.0.1:7070 -authorize admin Admin@123 -users -edit admin New-Password1

image

4. Proceed to restart the Apache Tomcat 9.0 Tomcat9 service in the services console on the license server:

image

5. Wait for the license server to be fully started then try to navigate to the console at http://localhost:8080/licserver/ to verify that credentials are required:

image

Disable Security for the NVIDIA License Server Manager

1. To disable the security login requirement, execute the following command with the configured password to authorize the session:

nvidialsadmin -server http://127.0.0.1:7070 -authorize admin New-Password1

image

2. Then set the security flag to false:

nvidialsadmin.bat -server http://127.0.0.1:7070 -authorize admin New-Password1 -config -set security.enabled=false

image

3. Proceed to restart the Apache Tomcat 9.0 Tomcat9 service in the services console on the license server:

image

4. Wait for the license server to be fully started then try to navigate to the console at http://localhost:8080/licserver/ to verify that credentials are no longer required.


Releasing NVIDIA RTX Virtual Workstation License(s) on the License Server Manager

One of the frequent questions I've been asked in the past for VDI deployments that are accelerated with NVIDIA GPU GRID cards is how we can release assigned licenses for VDIs that no longer exist. A scenario that can cause this is if a set of virtual desktops was deployed but then had to be redeployed the same evening because of a required change in configuration (I had to do this once when I needed to change the GPU memory allocation for the desktops).

image

image

Failure to have sufficient licenses for the VDIs will display the following message upon logging into the virtual desktop:

Failed to acquire NVIDIA license.

Failed to acquire NVIDIA RTX Virtual Workstation license. Click here for more information.

image

The short answer is that there isn’t a way to do this via the on-premise license server, command line or the NVIDIA Application Hub, and the recommended method is to either reduce the lease time for the license or completely remove all the licenses allocated to the desktops.

The steps to modify the lease time are as follows:

  1. Log onto the VDI master image
  2. Open the registry editor and browse to HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\GridLicensing
  3. Edit the LicenseInterval DWord (REG_DWORD) and configure the interval time that represents how long the license lease is valid for

The integer configured should be within the range 10-10080 and specifies the period of time in minutes for which a license can be borrowed after it is checked out. After this period has elapsed, the client must obtain a new license from the server. The default is 1440 minutes, which corresponds to a period of 1 day. The value can be reduced to an hour so that a new license would be reissued.
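
For reference, the registry change could be scripted on the master image as follows (a minimal sketch that sets a 60-minute lease; run from an elevated PowerShell session, and note it assumes the GridLicensing key already exists):

# Set the license borrow period (LicenseInterval) to 60 minutes
$regPath = "HKLM:\SOFTWARE\NVIDIA Corporation\Global\GridLicensing"
New-ItemProperty -Path $regPath -Name "LicenseInterval" -Value 60 -PropertyType DWord -Force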

The environment I was working with already had the lease time configured to be a day (the default), and further reducing it was not ideal in case we ever had a license server failure, so I opted to use the 2nd method, which was to completely remove all the licenses allocated to the desktops so new ones would be issued. It is worth noting that I was told by an NVIDIA support engineer that this does not adversely affect the VDIs currently in use, so it can be performed during regular hours.

The following are the steps:

1. Begin by logging into the NVIDIA Application Hub via the URL: https://nvid.nvidia.com/dashboard/

image

2. Select NVIDIA LICENSING PORTAL:

image

3. Navigate to LICENSE SERVERS, expand the License Server node and click on Download:

image

4. The license file representing the licenses will be downloaded:

image

image

5. On the on-premise NVIDIA licensing server, stop FlexNet License Server - nvidia service:

image

6. Navigate to the path: C:\Windows\ServiceProfiles\NetworkService\flexnetls\:

image

7. Rename the nvidia folder to nvidia-old:

image

8. Start FlexNet License Server - nvidia service:

image

9. Open the license portal on the on-premise NVIDIA license server, navigate to License Management, and ensure that the server is up and that the following error message is NOT present:

Connection error: Please make sure the FNE server is up and running

image

10. Confirm that the nvidia folder previously renamed has been recreated:

image

11. Confirm that there are no licensed clients listed:

image

12. With the server services up, proceed to upload the previously downloaded license file (.bin):

image

13. Confirm that the message Successfully applied license file to the license server. is displayed:

image

14. Navigating back to the Licensed Clients window should initially show an empty list and then new clients being listed:

image

Generating a network trace capture and analyzing with Microsoft Network Monitor

An ex-colleague recently reached out to me for assistance on how he could perform a network trace and analyze it for a particular Citrix Virtual Apps and Desktops environment, and the most common tool I usually recommend is Wireshark. The challenge he had was that the Wireshark installation would error out during the Npcap install, so attempting to use that tool was not a viable option.

My ex-colleague's challenge led me to remember another method I had used in the past (probably more than 5 years ago), where the native netsh trace command can capture an ETL file without requiring any software installation. After successfully testing the process, I thought I'd write a blog post to demonstrate it.

Creating a network trace capture file on the virtual desktop

1. On the VDI, launch the command prompt in administrator mode and start a trace with the following command:

netsh trace start capture=yes tracefile=c:\net.etl persistent=yes maxsize=4096

image

2. Replicate the issue, note the time stamp, and stop the trace with the following command:

netsh trace stop

image

Analyzing the network trace

  1. Download and install Microsoft Network Monitor: https://www.microsoft.com/en-in/download/details.aspx?id=4865
  2. Launch Microsoft Network Monitor and open the ETL file:
  3. Click Tools > Options:
image

Navigate to the Parser Profiles tab, right-click on Windows and click Set as Active:

image

Drill down to the NDISPacCap node:

image

For the purpose of this demonstration, we’ll be searching for an SMB path that contains the string college.

Click on Load Filter > Standard Filters > SMB > SmbFileName:

image

Update the string to look up and click Apply:

image

Hope this helps anyone who may be looking for an alternative method for capturing network traffic and analyzing it in an environment that may not have Wireshark available.

Securing Azure AD with Duo 2FA

Duo has been one of the most common MFA solutions I've worked with over the past 5 years, and most clients who have this as their MFA solution for on-premise services such as ADFS tend to ask whether they can also use it for Azure AD. The short answer is yes, but the requirement of having Conditional Access (available with Azure AD Premium P1) within Azure means an additional cost would be incurred to replace Azure MFA with the solution (https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-mfa-get-started). Another consideration is that certain risk detection features of Azure Identity Protection (https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks) will not be available. With that said, the purpose of this post is to provide a quick walkthrough of the setup process for using Duo as the MFA for Azure AD (portal.azure.com) and Office 365 authentication. The official Duo documentation can be found here: https://duo.com/docs/azure-ca

Begin by logging into the Duo administration portal: https://admin.duosecurity.com/, navigate to Applications and click on Protect an Application to register Azure as an application:

image

Search for Azure, select Microsoft Azure Active Directory and click on the Protect button:

image

Click on the Authorize button to have Duo provide the Azure tenant sign-in prompt:

image

Sign into the Azure tenant with an account with global administrator permissions:

image

Authorize the permissions request:

image

Decide on whether to activate the Universal Prompt for Microsoft Azure AD:

image

Copy the custom control code snippet into notepad as we’ll require it to configure Azure AD:

image

Scroll down and review the settings of the Azure AD application:

image

Locate the Username normalization setting and change the radio button from None to Simple. This setting is important because failure to configure it will cause Duo to create multiple accounts for the same user depending on the username used. For example, it may create separate accounts for the logins:

  1. tluk
  2. tluk@contoso.com
  3. contoso\tluk

image

image

Proceed to save the configuration and you should now see the application created:

image

Login into https://portal.azure.com, navigate to Azure Active Directory and click on Security:

image

Select Conditional Access:

image

Click on Custom controls (Preview) and then New custom control:

image

Delete the existing content in the customized controls box:

image

Paste the JSON snippet from Duo and click on Create:

image

Note the new custom control named RequireDuoMFA listed:

image

Navigate to Policies and click on New policy:

Note that if the New custom control button is greyed out, that means you do not have Azure AD Premium P1 licenses and are therefore unable to use Conditional Access.

image

Select Create new policy:

image

Provide a name for the policy:

image

Click on Specified users include under Users or workload identities and configure the users or groups that you want the policy to be applied to:

image

Click on No cloud apps, actions, or authentication contexts selected under Cloud apps or actions, select Select apps and configure the applications you want this policy to apply to. For this example, we'll include Office 365 and Microsoft Azure Management:

image

Note that Azure reminds you to not lock yourself out as Microsoft Azure Management affects portal.azure.com. Microsoft provides guidance on having and managing emergency accounts in the following document: https://docs.microsoft.com/en-us/azure/active-directory/roles/security-emergency-access

image

Select 0 controls selected under Access controls, select Grant access, enable RequireDuoMfa, and enable Require all the selected controls:

Note that if you do not see RequireDuoMfa then that means you skipped the custom control creation.

image

With the settings configured, you can choose to have Enable policy set as Report-only, which will only report expected behavior in the logs:

image

Or configured as On, which would put the policy in effect:

image

For this example, we will have the policy configured as On:

image

Proceed to test login and you should now see the following behavior:

image

image

image

Hope this provides an idea of what the process of configuring and using Duo as MFA looks like.

Connect-AzureAd fails in PowerShell version 7 with: "Could not load type 'System.Security.Cryptography.SHA256Cng'..."

Problem

You attempt to use the cmdlet Connect-AzureAd (after Import-Module AzureAd) to connect to Azure:

Connect-AzureAd

image

… but quickly notice that it fails with the following error:

PS C:\Git\Azure\Azure> Connect-AzureAD

Connect-AzureAD: One or more errors occurred. (Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.): Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

Connect-AzureAD: One or more errors occurred. (Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.)

Connect-AzureAD: Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

Connect-AzureAD: One or more errors occurred. (Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.): Could not load type 'System.Security.Cryptography.SHA256Cng' from assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

image

This happens in a PowerShell version 7 terminal as well as Visual Studio Code’s PowerShell 7 terminal:

Version: 1.63.2 (user setup)

Commit: 899d46d82c4c95423fb7e10e68eba52050e30ba3

Date: 2021-12-15T09:40:02.816Z

Electron: 13.5.2

Chromium: 91.0.4472.164

Node.js: 14.16.0

V8: 9.1.269.39-electron.0

OS: Windows_NT x64 10.0.19043

image

The PowerShell 7 version is displayed below:

PS C:\Git\Azure\Azure> $PSVersionTable

Name Value

---- -----

PSVersion 7.2.1

PSEdition Core

GitCommitId 7.2.1

OS Microsoft Windows 10.0.19043

Platform Win32NT

PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}

PSRemotingProtocolVersion 2.3

SerializationVersion 1.1.0.1

WSManStackVersion 3.0

image

This is not a problem in PowerShell version 5:

PS C:\> $PSVersionTable

Name Value

---- -----

PSVersion 5.1.19041.1320

PSEdition Desktop

PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}

BuildVersion 10.0.19041.1320

CLRVersion 4.0.30319.42000

WSManStackVersion 3.0

PSRemotingProtocolVersion 2.3

SerializationVersion 1.1.0.1

image

Solution

To get around this issue, use the -UseWindowsPowerShell switch when importing the AzureAD module as such:

PS C:\Git\Azure\Azure> Import-Module AzureAD -UseWindowsPowerShell

WARNING: Module AzureAD is loaded in Windows PowerShell using WinPSCompatSession remoting session; please note that all input and output of commands from this module will be deserialized objects. If you want to load this module into PowerShell please use 'Import-Module -SkipEditionCheck' syntax.

image

Connect-AzureAD should now complete:

image

PowerShell script to assign users in an on-premise AD group to an Azure Enterprise Application's Users and Groups

Azure Active Directory administrators who have configured Enterprise Applications for Azure AD SSO have most likely encountered the limitation of not being able to assign groups when granting access permissions due to not having Azure AD Premium P1 licenses. Azure AD Premium P1 licenses aren't very expensive per user, but the cost can quickly run up when there are thousands of users in an organization. One of the clients I worked with faced this issue when they decided to move their SaaS applications, which were using their ADFS portal for SAML authentication, to Azure AD. Their tenant was relatively new and they were keen to get a taste of what Azure AD could provide, but they quickly realized they would have to fork out additional money for Azure AD Premium P1 to assign groups to a newly created Enterprise Application. Their headcount was in the thousands and they only had Office 365 licenses rather than M365 licenses that include Azure AD Premium P1.

What I recommended as a stop gap (a poor man’s solution) was to assign the users individually to the Enterprise Application until they could get approval to procure the licenses. I would be embarrassed to ask them to manually assign the permissions, so I set out to find a script that could automate the process. A bit of searching led me to Ruud’s script here: https://lazyadmin.nl/powershell/add-users-to-azure-ad-application-with-powershell/ which was very close to what I wanted, requiring only a slight tweak because the client wanted to assign permissions based on an on-premise AD group synchronized to Azure AD with AD Connect.

The following is what the modified script does:

  1. Connects to Azure AD
  2. Interactively prompts for the Enterprise Application name
  3. Interactively prompts for the on-premise AD group name
  4. Stores the Enterprise Application in a variable
  5. Obtains all the users currently assigned to the Enterprise Application
  6. Obtains all the users in the on-premise AD group
  7. Compares the list of users currently assigned to the Enterprise Application with the on-premise AD group and stores the difference in a variable
  8. Exits the PowerShell script immediately if there are no users to add
  9. Loops through the remaining users and assigns each one to the Enterprise Application

I’ve included the following screenshots of the Enterprise Application:

image

Note the Users and groups configuration to be populated:

image

Note that groups are not available for this tenant because Azure AD Premium P1 licenses are not available:

Groups are not available for assignment due to your Active Directory plan level. You can assign individual users to the application.

image

The script is pasted below and also available at my following GitHub repo: https://github.com/terenceluk/Azure/blob/main/PowerShell/EnterpriseApps-Permissions.ps1

If automation is required, this script can be used in a Runbook or Azure Function to automate the process.
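In that case, the interactive prompts and the interactive Connect-AzureAD sign-in would need to be replaced with parameters and a non-interactive authentication flow. The following is a hedged sketch of what the top of a Runbook version might look like; the connection asset name and the default values are illustrative assumptions and not part of the script below:

param (
    [string]$enterpriseAppName = "MetaCompliance User Provisioning", # illustrative default
    [string]$onPremiseADgroup = "All_Staff" # illustrative default
)

# Authenticate non-interactively using the Automation account's Run As connection (certificate-based)
$runAsConnection = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureAD -TenantId $runAsConnection.TenantId `
    -ApplicationId $runAsConnection.ApplicationId `
    -CertificateThumbprint $runAsConnection.CertificateThumbprint

The rest of the script would then follow unchanged, minus the Read-Host prompts.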

image  

<#
Refer to the following documents for the source of where this script is derived from and the PowerShell cmdlets used:

Assign Users to Azure AD Application with PowerShell
https://lazyadmin.nl/powershell/add-users-to-azure-ad-application-with-powershell/

New-AzureADUserAppRoleAssignment
https://docs.microsoft.com/en-us/powershell/module/azuread/new-azureaduserapproleassignment?view=azureadps-2.0

Get-AzureADUser
https://docs.microsoft.com/en-us/powershell/module/azuread/get-azureaduser?view=azureadps-2.0

Get-AzureADGroupMember
https://docs.microsoft.com/en-us/powershell/module/azuread/get-azureadgroupmember?view=azureadps-2.0
#>

# Import the AzureAD module with the -UseWindowsPowerShell switch when running PowerShell 7
# Import-Module AzureAD -UseWindowsPowerShell

# Connect to Azure AD
Connect-AzureAD

# Hardcode the Enterprise Application name and on-premise AD group name if preferred
# $enterpriseAppName = "MetaCompliance User Provisioning"
# $onPremiseADgroup = "All_Staff"

# Prompt for the Enterprise Application name and the on-premise AD group name
$enterpriseAppName = Read-Host "Please type the Enterprise Application Name"
$onPremiseADgroup = Read-Host "Please type the On-Premise AD Group Name"

# Get the service principal for the Enterprise Application the users will be assigned to
$servicePrincipal = Get-AzureADServicePrincipal -Filter "DisplayName eq '$enterpriseAppName'"

## Use this cmdlet to list the roles available for this Enterprise App: Get-AzureADApplication -SearchString $enterpriseAppName | Select-Object AppRoles | Fl
## Use this expression to display a specific role: $servicePrincipal.AppRoles[0].Id

# Get all users that are already assigned to the application
$existingUsers = Get-AzureADServiceAppRoleAssignment -ObjectId $servicePrincipal.ObjectId -All $true | Select-Object -ExpandProperty PrincipalId

# Get all users from the on-premise AD group
$allUsers = Get-AzureADGroup -Filter "DisplayName eq '$onPremiseADgroup'" -All $true | Get-AzureADGroupMember -All $true | Select-Object DisplayName, ObjectId

# Compare the list of users in the on-premise AD group with the users already assigned to the Enterprise Application and store the difference
$newUsers = $allUsers | Where-Object { $_.ObjectId -notin $existingUsers }

# If there are no new users to add, terminate the script now rather than attempting the loop
if ($newUsers.Count -eq 0) {
    Exit
}

ForEach ($user in $newUsers) {
    Try {
        ## Note that the Id parameter specifies an app role because this application has two defined roles
        # If the application does not define multiple roles, use -Id ([Guid]::Empty) instead of -Id $servicePrincipal.AppRoles[0].Id
        # Use this cmdlet to display the available roles: Get-AzureADApplication -SearchString $enterpriseAppName | Select-Object AppRoles | Fl
        New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $servicePrincipal.ObjectId -Id $servicePrincipal.AppRoles[0].Id -ErrorAction Stop
        [PSCustomObject]@{
            UserPrincipalName   = $user.DisplayName
            ApplicationAssigned = $true
        }
    }
    Catch {
        [PSCustomObject]@{
            UserPrincipalName   = $user.DisplayName
            ApplicationAssigned = $false
        }
    }
}
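
A hypothetical interactive run of the script would look something like the following; the application, group, and user names shown are illustrative only:

PS C:\Git\Azure\Azure> .\EnterpriseApps-Permissions.ps1
Please type the Enterprise Application Name: MetaCompliance User Provisioning
Please type the On-Premise AD Group Name: All_Staff

UserPrincipalName ApplicationAssigned
----------------- -------------------
Jane Doe                         True
John Smith                       True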
