Channel: Terence Luk

Determining a Machine Catalog's master image, snapshot and VMware vSphere datacenter with PowerShell for Citrix Virtual Apps and Desktops


One of the most common administration questions I get asked about Citrix Virtual Apps and Desktops is how to determine the master image currently being used for a Machine Catalog. Those familiar with Web Studio will know that you can select the Machine Catalog in the administration console:

image

Then click on the Template Properties tab and review the name under Disk Image to determine the snapshot being used. In the example below, the snapshot being used is named Citrix_XD_Dev Desktops. However, this does not tell the administrator the actual virtual machine name that is being used:

image

To retrieve the details of the master image currently being used for the Machine Catalog, launch the PowerShell console and connect to Citrix Cloud with:

asnp citrix*

Get-XDAuthentication

Then proceed to use the following cmdlet to display the details of the Machine Catalog:

Get-ProvScheme -ProvisioningSchemeName "Dev Desktops"

**Note that we are using Dev Desktops as the Machine Catalog for this example.

image

The virtual machine used for the machine catalog is provided in the MasterImageVM setting:

MasterImageVM : XDHyp:\HostingUnits\WorkspaceSTG\MasterImage Feb 2020 v1.vm\Citrix_XD_Dev Desktops.snapshot

With the details of the virtual machine determined, we can look for the VMware vSphere datacenter it is located in by drilling further into the details of the virtual machine with the CD command:

CD "XDHyp:\HostingUnits\WorkspaceSTG\MasterImage Feb 2020 v1.vm"

Then use the DIR command to list the details:

image

The setting that provides the datacenter is ObjectPath:

ObjectPath: /TechHall.datacenter/Desktops.cluster/MasterImage Feb 2020 v1.vm/Citrix_XD_Dev Desktops.snapshot

The datacenter in this example is named TechHall.
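The ObjectPath string has a regular structure, where each path segment carries a type suffix (.datacenter, .cluster, .vm, .snapshot). The following is a minimal Python sketch (illustrative only, not part of the Citrix PowerShell SDK) of how the components can be pulled out of it:

```python
# Parse an XDHyp ObjectPath into its typed components; each segment is
# "<name>.<type>", e.g. "TechHall.datacenter".
def parse_object_path(object_path: str) -> dict:
    parts = {}
    for segment in object_path.strip("/").split("/"):
        name, _, kind = segment.rpartition(".")
        parts[kind] = name  # e.g. parts["datacenter"] = "TechHall"
    return parts

path = ("/TechHall.datacenter/Desktops.cluster/"
        "MasterImage Feb 2020 v1.vm/Citrix_XD_Dev Desktops.snapshot")
print(parse_object_path(path)["datacenter"])  # TechHall
```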

Hope this helps anyone looking for how to retrieve the details of a machine catalog within Citrix Virtual Apps and Desktops.


Configuring and accessing Microsoft Azure Files


One of the common questions I am asked by colleagues and clients is how and why they would use Azure Files. The answers to the “how” and “why” are abundant, and I usually provide examples based on the environment I am working in. Some features may be more important to some than others, but a few examples I have off the top of my head are:

  • Lift and shift initiatives for applications installed on Windows Servers can have their data transitioned into the cloud with Azure Files (you can easily move all of the data stored on, say, an E: drive of a Windows server into Azure Files, then mount that share back on the server as an E: drive without changing code)
  • Container instances, which require a drive for persistent storage, can leverage Azure Files to provide a drive that can be mounted in Linux
  • Traditional file servers hosted on Windows Servers can be migrated into Azure Files for serverless hosting
  • Azure File Sync can be used to synchronize files stored in Azure Files to an on-premise file server for fast local access

The following Microsoft document provides more information about Azure files:

What is Azure Files?
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

With a brief overview of Azure Files and its benefits out of the way, the following is a demonstration of how to set up Azure Files, access the share, take snapshots, and lock down access with a service endpoint. Azure Files can also be configured with Share and NTFS permissions similar to a traditional shared folder on a Windows Server, but that configuration is too long to include in this post so I will write a separate one in the future.

Setting up Azure Files

Begin by creating a new storage account that will contain the Azure Files:

image

Basic Tab

Fill in the required configuration parameters for the storage account based on the requirements. Note that the Storage account name will need to be unique across all of Azure’s storage accounts because the name will be used as part of the URL for access. The name needs to be:

  • Between 3 and 24 characters long
  • Contain only lowercase letters and numbers (no special characters such as “-“)
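The naming rules above are easy to validate up front. A quick Python sketch (an illustrative check, not an official Azure SDK validation) of the rule:

```python
import re

# Validate a storage account name: 3-24 characters, lowercase letters and
# numbers only (no special characters such as "-").
def is_valid_storage_account_name(name: str) -> bool:
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("steastusserviceendpoint"))  # True
print(is_valid_storage_account_name("my-storage"))               # False ("-" not allowed)
```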
image

Networking Tab

We will be locking down the connectivity method to private endpoints later, so leave the Connectivity method as Public endpoint (all networks) for now and Routing preference as the default Microsoft network routing (default):

image

Data protection tab

The data protection options are displayed and the one that is related to Azure Files is the Turn on soft delete for file shares:

image

The setting that pertains to Azure Files in the Advanced tab is Large file shares support, which provides file share support up to a maximum of 100 TiB but does not support geo-redundant storage:

image

image

Proceed to create the storage account by clicking Review + create button then Create.

With the storage account successfully created, open the new storage account and navigate to the File shares menu option:

image

Click on the + File share button to create a new file share:

image

Configure the new file share with the settings required.

I won’t go into the details of the Tiers but will provide this reference link for more information: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning?WT.mc_id=Portal-Microsoft_Azure_FileStorage#storage-tiers

image

Complete creating the file share by clicking on the Create button.

With the test File share created, click to open it:

image

You can directly upload files into the file share, modify the tier, configure various operations and retrieve information pertaining to the file share.

image

You may notice that clicking into the Access Control (IAM) menu option will display the following:

Identity-based authentication (Active Directory) for Azure file shares

To give individual accounts access to the file share (Kerberos), enable identity-based authentication for the storage account. Learn more

image

This is where you would configure the Share permissions for Active Directory account access, which I will cover in a future blog post.

Clicking into the properties of the file share will display the https URL to access the share.

image

Note that you won’t be able to browse into the folder the way you can for blob storage with anonymous access. Attempting to do so will display the following:

<Error>
<Code>InvalidHeaderValue</Code>
<Message>The value for one of the HTTP headers is not in the correct format. RequestId:ee3cfc97-601a-0077-765e-1342f2000000 Time:2021-03-07T14:33:25.6179879Z</Message>
<HeaderName>x-ms-version</HeaderName>
<HeaderValue/>
</Error>
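The response is a standard Azure Storage error document, so the failure reason can be read programmatically. A small Python sketch (the XML is the illustrative response shown above) that extracts the relevant fields:

```python
import xml.etree.ElementTree as ET

# Pull the error code and offending header out of the XML error response
# returned when browsing the share anonymously.
xml_text = """<Error>
<Code>InvalidHeaderValue</Code>
<Message>The value for one of the HTTP headers is not in the correct format.</Message>
<HeaderName>x-ms-version</HeaderName>
<HeaderValue/>
</Error>"""
error = ET.fromstring(xml_text)
print(error.findtext("Code"))        # InvalidHeaderValue
print(error.findtext("HeaderName"))  # x-ms-version
```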

image

The rest of the configuration settings are fairly self-explanatory: Backup is used to configure backups for the Azure Files share, and Snapshots is a feature I will demonstrate later in this post.

Administratively Accessing Azure Files for Upload and Download and other Folder Operations

The Azure portal allows you to upload and download files but is not very efficient. A better way of administratively accessing the share would be to use Azure Storage Explorer, which is an application that is installed onto a desktop or server. Proceed to download and install the application: https://azure.microsoft.com/en-ca/features/storage-explorer/

Launch the application and click on the power plug icon on the left to connect to a variety of Azure services:

image

Note the following selection of Azure resources we can connect to:

  • Subscription
  • Storage account
  • Blob container
  • ADLS Gen2 container or directory
  • File share
  • Queue
  • Table
  • Local storage emulator
image

As we are configuring Azure Files, the 3 options we are interested in connecting to are:

  • Subscription
  • Storage account
  • File share

I will demonstrate connecting to the 3 of them.

Subscription

Connecting with the Subscription option simply requires credentials to the Azure tenant and essentially provides access to all of the storage resources in the subscription:

image

image

Storage account

Connecting to the Storage Account provides two options:

  • Account name and key
  • Shared access signature (SAS)
image

To use the Account name and key, navigate to the storage account in the Azure portal and into Access keys. The information we need is the Storage account name and key1 or key2:

image

Paste the information in the Azure Storage Explorer:

image

Proceed to connect:

image

The connection should succeed and you will see the storage account listed in the Storage Accounts node:

image

To use the Shared access signature (SAS), navigate to the storage account in the Azure portal and into Access keys. The information we need is the Storage account name and Connection string:

image

Paste the information in the Azure Storage Explorer:

image

Proceed to connect:

image

The connection should succeed and you will see the storage account listed in the Storage Accounts node:

image

File share

Connecting with the File share option requires a SAS (Shared Access Signature) to be created. You unfortunately can’t create it directly from the storage account portal as it will not be Azure File specific:

image

An alternative way of creating it is to use Azure Storage Explorer with an already established connection to the storage account: right-click on the File Share, then select Get Shared Access Signature…:

image

A Shared Access Signature window will be displayed with options to configure the permissions for this access:

image

Selecting Create after the parameters are set will generate the following three strings:

Share: Test

URI: https://steastusserviceendpoint.file.core.windows.net/test?st=2021-03-07T15%3A04%3A29Z&se=2021-03-08T15%3A04%3A29Z&sp=rl&sv=2018-03-28&sr=s&sig=X5lu4wbZGuOggVERMHuasvDVHPayoxFj9muJ9L%2FWsPM%3E

Query string: ?st=2021-03-07T15%3A02%3A29Z&se=2021-03-08E15%3A04%3A29Z&sp=rl&sv=2018-03-28&sr=s&sig=X5lu4mbEGuOugVJRMHutsvDVHPayoxFj9muJ9L%2FWsPM%3D
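The SAS query string is a set of URL-encoded parameters (st = start time, se = expiry, sp = permissions, sv = service version, sr = resource, sig = signature). A Python sketch of decoding one (the values below are illustrative placeholders, not a live signature):

```python
from urllib.parse import parse_qs

# Break a SAS query string into its parameters; parse_qs also URL-decodes
# the values (%3A becomes ":").
query = ("?st=2021-03-07T15%3A02%3A29Z&se=2021-03-08T15%3A04%3A29Z"
         "&sp=rl&sv=2018-03-28&sr=s&sig=abc123")
params = {k: v[0] for k, v in parse_qs(query.lstrip("?")).items()}
print(params["sp"])  # rl -> read and list permissions
print(params["se"])  # 2021-03-08T15:04:29Z -> expiry time
```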

image

Use the strings to connect to Azure Files in the Azure Storage Explorer:

image

image

image

Once connected to the Azure File Shares with Azure Storage Explorer, you’ll be able to create new folders, upload/download files and perform other folder related operations.

image

Access policies on the share can also be configured:

image

image

Accessing Azure Files by mounting the folder as a drive in Windows, Linux or Mac OS

With the Azure Files file share set up, access to it can be provided to Windows, Linux or Mac OS by clicking on the Connect button to bring up the commands to mount the drive:

image

Linux and Macs:

image

image

The following demonstrates what using PowerShell to mount the drive in Windows looks like:

image

**Note that, as with all Windows mapped drives, SMB over port 445 is used for communication. This port is usually blocked by ISPs, so the mapping is not likely to work if you run this on a remote computer coming in from the internet with no VPN into Azure.

The PowerShell cmdlet used to map the drive performs the following:

  • Tests the connection to the storage account over port 445
  • Assuming the connection succeeds, saves the credentials for the storage account
  • Maps the drive to the letter defined in the Azure portal and sets it to be persistent
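The first of those steps is the one that usually fails from ISP-blocked networks. A Python sketch of the same port-445 reachability check (equivalent in spirit to the script's Test-NetConnection call; the hostname in the comment is an example):

```python
import socket

# Check whether SMB (TCP 445) is reachable on a host before attempting to
# map the share; returns False on timeout or refusal.
def smb_port_open(host: str, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical storage account host):
# smb_port_open("steastusserviceendpoint.file.core.windows.net")
```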

If this drive ever needs to be removed, use the Remove-PSDrive cmdlet to remove it.

image

The drive should now be mapped as Z:

image

Another way to map the drive without using a PowerShell script is to simply use the drive mapping feature directly from Windows Explorer. Before attempting to map the drive, you’ll need to retrieve the path and the credentials for connecting to the drive. Begin by using the Azure portal and navigate into Properties of the File share, then copy the URL without the https:// as shown in the screenshot below:

steastusserviceendpoint.file.core.windows.net/test

image

Navigate to the Access keys of the storage account and copy key1 or key2 as it will be used as the password:

image

Proceed to use the Windows desktop or server to map a network drive and use the following parameters:

Folder: \\steastusserviceendpoint.file.core.windows.net\test < note that I changed the “/” before test to “\” and added “\\” to the beginning.

Use the following for authentication:

Username: Azure\<storageAccountName>

Password: Key1 or Key2
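The transformation from the https URL to the drive-mapping parameters is mechanical, so it can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of any Azure SDK):

```python
# Derive the Windows drive-mapping Folder and Username from the file share's
# https URL shown in the portal's Properties blade.
def drive_mapping_params(https_url: str) -> dict:
    # e.g. "https://steastusserviceendpoint.file.core.windows.net/test"
    host_and_share = https_url.removeprefix("https://")
    host, _, share = host_and_share.partition("/")
    account = host.split(".")[0]
    return {
        "folder": "\\\\" + host + "\\" + share,  # "/" flipped to "\", "\\" prefixed
        "username": "Azure\\" + account,
    }

params = drive_mapping_params("https://steastusserviceendpoint.file.core.windows.net/test")
print(params["folder"])    # \\steastusserviceendpoint.file.core.windows.net\test
print(params["username"])  # Azure\steastusserviceendpoint
```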

image

image

image

Azure Files Snapshots

Snapshots for the file share are also available, but note that they behave more like Volume Shadow Copies (VSS) on a Windows Server, which allow the use of the Previous Versions tab, than, say, a SAN or VM snapshot. You can create a snapshot by navigating into the File share, selecting the Snapshots operation, and then clicking on Add snapshot:

image

Provide a comment for the snapshot then click OK:

image

A snapshot will be created:

image

Now if you proceed to edit the Test.txt file in the Azure File share, a previous version will be made available:

image

Lock down Azure Files access with a service endpoint

You may want to tighten the access security depending on the sensitivity of the data stored in the Azure Files file share, and one of the ways to achieve this is to use an Azure Service Endpoint or Private Endpoint. I won’t go into depth on either of them as I want to write a separate blog post on the topic, so what I’ll do is provide a brief overview of how we can secure access with a service endpoint.

Begin by navigating to the Networking configuration for the storage account and change the Allow access from setting from All networks to Selected networks:

image

There are multiple options for limiting access to this storage account, but for the purpose of this example, I will be adding one subnet from a production VNet in the environment:

image

With the above configuration set, only the selected subnets in the defined VNet will be able to access the storage account and the Azure Files share.

Accessing Exchange Server 2019 /OWA and /ECP throws the errors: "Status code: 500" and "Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies."


Problem

You’ve noticed that after patching Exchange Server 2019 servers (for this example, the patch for HAFNIUM was applied), access to /OWA and /ECP now displays the following errors:

OWA

:-(

Something went wrong

Your request couldn't be completed. HTTP Status code: 500.

X-ClientId: F8E662D41996402E8660EEEB0976EA50

request-id a5077939-13bd-4032-9471-d6c8dc221d5a

X-OWA-Error System.Web.HttpUnhandledException

X-OWA-Version 15.2.721.13

X-FEServer Exch01

X-BEServer Exch02

Date:3/8/2021 12:31:53 PM

InnerException: System.IO.DirectoryNotFoundException

Fewer details...

Refresh the page

image

ECP

Server Error in '/ecp' Application.

_______________________________________

Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' could not be loaded.

WRN: Assembly binding logging is turned OFF.

To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.

Note: There is some performance penalty associated with assembly bind failure logging.

To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

Stack Trace:

[FileNotFoundException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0

System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName) +96

System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65

System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +62

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +50

[ConfigurationErrorsException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +572

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, Boolean checkAptcaBit) +31

System.Web.Configuration.Common.ModulesEntry.SecureGetType(String typeName, String propertyName, ConfigurationElement configElement) +59

System.Web.Configuration.Common.ModulesEntry..ctor(String name, String typeName, String propertyName, ConfigurationElement configElement) +59

System.Web.HttpApplication.BuildIntegratedModuleCollection(List`1 moduleList) +221

System.Web.HttpApplication.GetModuleCollection(IntPtr appContext) +1103

System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +122

System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +173

System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +255

System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +347

[HttpException (0x80004005): Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +552

System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +122

System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +737

image

You’ve noticed that reviewing the BinSearchFolders application settings for ecp folder in the Exchange Back End website shows that the Value is configured with %ExchangeInstallDir%:

image

Changing this to the path without using the variable appears to fix the ECP page but not OWA:

image
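The %ExchangeInstallDir% token in BinSearchFolders is a variable that should be expanded to the actual install path before IIS resolves the folders, which is why hard-coding the path works around it for ECP. A Python sketch of that %VAR%-style expansion (the install path below is the default location and an assumption for this example):

```python
import re

# Expand Windows-style %VAR% tokens in a string, leaving unknown tokens
# untouched; mirrors what should happen to %ExchangeInstallDir%.
def expand_win_vars(text: str, env: dict) -> str:
    return re.sub(r"%([^%]+)%", lambda m: env.get(m.group(1), m.group(0)), text)

env = {"ExchangeInstallDir": "C:\\Program Files\\Microsoft\\Exchange Server\\V15\\"}
print(expand_win_vars("%ExchangeInstallDir%bin", env))
# C:\Program Files\Microsoft\Exchange Server\V15\bin
```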

Solution

One of the possible solutions to correct the issue is to use the UpdateCas.ps1 script located in the \Microsoft\Exchange Server\V15\Bin folder to rebuild the /OWA and /ECP directories:

image

image

Proceed to test the /owa and /ecp directories once the PowerShell script completes.

Using Azure Files SMB access with Windows on-premise Active Directory NTFS permissions


Years ago when I first started working with Azure, I was very excited about the release of Azure Files because it would allow migrating traditional Windows file servers to the cloud without an IaaS Windows server providing the service. What I quickly realized was that it did not support the use of NTFS permissions and therefore was not a viable replacement. Fast forward a few years, and support for traditional NTFS permissions with on-premise Active Directory is finally available. I’ve been meaning to write a blog post to demonstrate the configuration, so this post serves as a continuation of my previous post on how we can leverage Azure Files to replace traditional Windows Server file services.

Configuring and accessing Microsoft Azure Files
http://terenceluk.blogspot.com/2021/03/configuring-and-accessing-azure-files.html

I won’t go into too much detail about how the integration works as the information can be found in the following Microsoft documentation:

Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-enable

What I will do is highlight the items we need to configure it.

Topology

The environment I will be working with is the following simple topology, where an on-premise network is connected to Azure East US through a site-to-site VPN and an Azure Files share is configured:

clip_image002

Prerequisites

The following are the prerequisites for enabling AD DS authentication for Azure file shares:

  1. The on-premise Active Directory Domain Services will need to be synced into Azure AD with Azure AD Connect
  2. The identities that will be used for accessing the Azure Files need to be synced to Azure AD if filtering is applied
  3. The endpoint accessing the file share in Azure Files needs to be able to reach it over port 445
  4. A storage account name that is no longer than 15 characters, as that is the limit for the on-premise Active Directory SamAccountName
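Prerequisite 4 trips people up often, so it is worth checking before creating the storage account. A trivial Python sketch of the rule (illustrative only; the names are the ones used later in this post):

```python
# The storage account name must fit within the 15-character SamAccountName
# limit for the on-premise AD object that will represent it.
def fits_samaccountname(storage_account_name: str) -> bool:
    return len(storage_account_name) <= 15

print(fits_samaccountname("stfsreplacement"))          # True (15 characters)
print(fits_samaccountname("steastusserviceendpoint"))  # False (23 characters)
```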

Step #1 – Create the Azure Storage Account and Azure File share

Begin by creating a new storage account with a name that has no more than 15 characters:

image

With the storage account successfully created, open the new storage account and navigate to the File shares menu option:

image

Click on the + File share button to create a new file share:

image

Configure the new file share with the settings required.

I won’t go into the details of the Tiers but will provide this reference link for more information: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning?WT.mc_id=Portal-Microsoft_Azure_FileStorage#storage-tiers

image

Complete creating the file share by clicking on the Create button.

With the test File share created, click to open it:

image

You can directly upload files into the file share, modify the tier, configure various operations and retrieve information pertaining to the file share.

image

Azure Storage Explorer can also be used to manage the file share.

image

You may notice that clicking into the Access Control (IAM) menu option will display the following:

Identity-based authentication (Active Directory) for Azure file shares

To give individual accounts access to the file share (Kerberos), enable identity-based authentication for the storage account. Learn more

image

This is where you would configure the Share permissions for Active Directory account access and will be configured in the following steps.

Step #2 – Enable AD DS authentication on the storage account

How Azure Files and on-premise authorization works

Unlike a traditional Windows Server, an Azure storage account can’t be joined to an on-premise Active Directory. Instead, this is achieved by registering the storage account with AD DS, which creates an account representing it in AD DS. The account created in the on-premise AD can be a user account or a computer account, and if you are familiar with on-premise AD, you’ll immediately recognize that both of these account types have passwords. Failure to update the password will cause authentication to Azure Files to fail.

Computer accounts – these accounts have a default password expiration age of 30 days
User accounts – these accounts have a password expiration age set based on the password policy applied

The easy way to get around password expiration is to use a user account and set the password to never expire but doing so will likely get any administrator in trouble. The better method is to use Update-AzStorageAccountADObjectPassword cmdlet (https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password) to manually update the account’s password before it expires. There are several ways to automate this with either something as simple as a Windows task scheduler task or an enterprise management application to run it on a schedule.
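Whichever automation runs Update-AzStorageAccountADObjectPassword, it needs a schedule that stays ahead of the expiry window. A Python sketch of that reasoning (the 5-day safety margin is an arbitrary assumption, not a Microsoft recommendation):

```python
from datetime import date, timedelta

# Compute when to next rotate the AD object's password: computer accounts
# default to a 30-day maximum password age, so rotate before that elapses.
def next_rotation(last_set: date, max_age_days: int = 30, margin_days: int = 5) -> date:
    return last_set + timedelta(days=max_age_days - margin_days)

print(next_rotation(date(2021, 3, 8)))  # 2021-04-02
```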

Using AzFilesHybrid to create the on-premise account representing Azure Files

Proceed to download the latest AzFilesHybrid PowerShell module at the following URL: https://github.com/Azure-Samples/azure-files-samples/releases

image

Unpacking the ZIP file will provide the following 3 files:

  • AzFilesHybrid.psd1
  • AzFilesHybrid.psm1
  • CopyToPSPath.ps1

image

Before executing the script, you’ll need to use an account with the following properties and permissions:

  1. Replicated to Azure AD
  2. Permissions to create a user or computer object in the on-premise Active Directory
  3. Storage Account Owner or Contributor permissions

The account I’ll be using has Domain Admin and Global Admin rights.

From a domain joined computer where you are logged in with the required on-premise Active Directory account, launch PowerShell or PowerShell ISE and begin by setting the execution policy to unrestricted so we can run the AzFilesHybrid PowerShell scripts:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

Navigate to the directory containing the unzipped scripts and execute:

.\CopyToPSPath.ps1

Import the AzFilesHybrid module by executing:

Import-Module -Name AzFilesHybrid

Connect to the Azure tenant:

Connect-AzAccount

image

Set up the variables for the subscription ID, the resource group name and storage account name:

$SubscriptionId = "<SubscriptionID>"

$ResourceGroupName = "<resourceGroupName>"

$StorageAccountName = "<storageAccountName>"

As you can have more than one subscription in a tenant, select the subscription containing the resources by executing:

Select-AzSubscription -SubscriptionId $SubscriptionId

image

With the prerequisites executed, we can now use the Join-AzStorageAccountForAuth cmdlet to create the account in the on-premise AD that represents the storage account in Azure:

## You can specify either the OU name or the DN of the OU

Join-AzStorageAccountForAuth `

-ResourceGroupName $ResourceGroupName `

-Name $StorageAccountName `

-DomainAccountType "<ComputerAccount or ServiceLogonAccount>" `

-OrganizationalUnitName "<Name of OU>" `

-OrganizationalUnitDistinguishedName "<DN of OU>"

The following is an example:

Join-AzStorageAccountForAuth `

-ResourceGroupName $ResourceGroupName `

-Name $StorageAccountName `

-DomainAccountType "ServiceLogonAccount" `

-OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

**Note that backticks (the character sharing the tilde key on the keyboard) are used as the line-continuation operator, which allows the command to be written across multiple lines.

-----------------------------------------------------------------------------------------------------------------------

If your storage account name is longer than 15 characters, you’ll get an error:

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.

Join-AzStorageAccountForAuth : Parameter -StorageAccountName 'steastusserviceendpoint' has more than 15 characters, which is not supported to be used

as the SamAccountName to create an Active Directory object for the storage account. Azure Files will be supporting AES256 encryption for Kerberos

tickets, which requires that the SamAccountName match the storage account name. Please consider using a storage account with a shorter name.

At line:1 char:1

+ Join-AzStorageAccountForAuth `

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException

+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Join-AzStorageAccount

-----------------------------------------------------------------------------------------------------------------------

Successful execution of the Join-AzStorageAccountForAuth will display the following:

PS C:\AzFilesHybrid> Join-AzStorageAccountForAuth `

-ResourceGroupName $ResourceGroupName `

-Name $StorageAccountName `

-DomainAccountType "ServiceLogonAccount" `

-OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.

StorageAccountName ResourceGroupName PrimaryLocation SkuName Kind AccessTier CreationTime ProvisioningState EnableHttpsTrafficOnly

------------------ ----------------- --------------- ------- ---- ---------- ------------ ----------------- ----------------------

stfsreplacement rg-prod-infraServers eastus Standard_LRS StorageV2 Hot 3/8/2021 11:30:02 AM Succeeded True

PS C:\AzFilesHybrid>

image

The corresponding object (in this case a user object) should also be created in the specified OU:

image

Notice how the password is automatically set to not expire:

image

We can also verify the configuration with the following PowerShell cmdlets:

Obtain the storage account and store it as a variable:

$storageAccount = Get-AzStorageAccount `

-ResourceGroupName $ResourceGroupName `

-Name $StorageAccountName

List the directory domain information if the storage account has enabled AD DS authentication for file shares:

$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties

https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.storage.models.azurefilesidentitybasedauthentication.activedirectoryproperties?view=azure-dotnet

View the directory service of the storage:

$storageAccount.AzureFilesIdentityBasedAuth.DirectoryServiceOptions

https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.management.storage.azurefilesidentitybasedauthentication.directoryserviceoptions?view=azure-java-stable

image

Step #3 – Configure On-Premise AD Groups for Azure Files Access (Share Permissions)

With the AD DS authentication integration setup for the storage account, the next step is to configure the on-premise Active Directory groups that will be granted access to the Azure Files file share. Think of this step as how we would configure Share permissions on a folder so we can then proceed to configure the NTFS permissions.

There are 3 predefined RBAC roles provided by Azure that will map to the on-premise AD groups and they are as follows:

Storage File Data SMB Share Contributor – Allows for read, write, and delete access in Azure Storage file shares over SMB.

Storage File Data SMB Share Elevated Contributor – Allows for read, write, delete and modify NTFS permissions access in Azure Storage file shares over SMB.

Storage File Data SMB Share Reader – Allows for read access to Azure File Share over SMB.

image

The following are the mappings that I have planned:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader

Proceed to create the groups in the on-premise Active Directory:

image

Then log into the Azure portal and navigate to the storage account > File Shares then click on the file share that has been created:

image

From within the file share, click on Access Control (IAM) and then Add role assignments:

image

Configure the appropriate mapping for the 3 on-premise AD groups and the Azure roles:

image

image

image
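As an alternative to clicking through the portal, the three role assignments above can be scripted with the Az PowerShell module. The following is a hedged sketch — the subscription ID and share name are placeholders you must substitute, and it assumes the on-premise groups have already synchronized to Azure AD:

```powershell
# Build the scope down to the file share itself (substitute your subscription ID and share name)
$fileShareName = "<shareName>"
$scope = "/subscriptions/<subscriptionId>/resourceGroups/$ResourceGroupName" +
    "/providers/Microsoft.Storage/storageAccounts/$StorageAccountName" +
    "/fileServices/default/fileshares/$fileShareName"

# Map each synchronized AD group to its built-in SMB share role
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareContributor").Id `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareElevContributor").Id `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareReader").Id `
    -RoleDefinitionName "Storage File Data SMB Share Reader" -Scope $scope
```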

Step #4 – Mount the Azure Files file share with full permissions and configure NTFS permissions

With the share permissions set, we can now configure the NTFS permissions on the file share. There isn’t a way to perform this from within the Azure portal so we will need to mount an Azure file share to a VM joined to the on-premise Active Directory.

The UNC path for accessing the Azure Files share would be as follows:

\\<storageAccountName>.file.core.windows.net\<shareName>

You can use the net use <driveLetter>: command to mount the drive as such:

net use z: \\<storageAccountName>.file.core.windows.net\<shareName> <storageAccountKey> /user:Azure\<storageAccountName>

net use z: \\stfsreplacement.file.core.windows.net\test N2PrIm73/xHNPxe7BoVyNHBdjU3HBPpQg33Z+PeKmjy8nxUMSeOG4Azfnknyn+up2pQpOinUJ/FWl9ceeGz/bQ== /user:Azure\stfsreplacement

image

Note that the storage account key can be obtained here:

image

Or as an alternative, you can also retrieve a full PowerShell cmdlet to map the drive by using the Connect button for the file share:

image

With the file share mapped as a drive, we can now assign the appropriate NTFS permissions for the groups we created earlier:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor
Permissions:

  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor
Permissions:

  • Full control
  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader
Permissions:

  • Read & execute
  • List folder contents
  • Read
image
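If you prefer the command line, the same NTFS permissions can be applied with icacls from an elevated PowerShell or command prompt once the share is mounted — a sketch assuming the Z: drive letter used earlier and a placeholder domain name:

```powershell
# (OI)(CI) = inherit to files and subfolders; M = Modify, F = Full control, RX = Read & execute
icacls Z: /grant "<domain>\AzFileShareContributor:(OI)(CI)M"
icacls Z: /grant "<domain>\AzFileShareElevContributor:(OI)(CI)F"
icacls Z: /grant "<domain>\AzFileShareReader:(OI)(CI)RX"
```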

Step #5 – Mount the Azure Files file share as an on-premise Active Directory User

Now that the share and NTFS permissions have been set, we can proceed to mount the share as users who are placed into one of the 3 groups to test.

Step #6 – Update the password of the storage account identity in the on-premise Active Directory DS

The last action is how we would change/update the password on the account object representing the storage account to enable Kerberos authentication. The following is a snippet from the Microsoft documentation: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password

If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you do not change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.

To trigger password rotation, you can run the Update-AzStorageAccountADObjectPassword command from the AzFilesHybrid module. This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS. You must run this command in an on-premises AD DS-joined environment.

The syntax for the Update-AzStorageAccountADObjectPassword cmdlet to perform this will look as follows:

Update-AzStorageAccountADObjectPassword `

-RotateToKerbKey kerb2 `

-ResourceGroupName "<resourceGroupName>" `

-StorageAccountName "<storageAccountName>"

If you are continuing the configuration from the beginning of this blog post then the resource group and storage accounts are already stored in a variable so you can just call them as such:

Update-AzStorageAccountADObjectPassword `

-RotateToKerbKey kerb2 `

-ResourceGroupName $ResourceGroupName `

-StorageAccountName $StorageAccountName

image

Hope this helps anyone looking for a step by step demonstration on how to set up Azure Files for SMB access using on-premise AD NTFS permissions.

Configuring Azure Service Endpoints and Private Endpoints


One of the common questions I am asked about accessing Azure services is the concept of service endpoints and/or private endpoints in Azure. To put it simply, Azure is a public cloud service, which means it is inherently designed to allow its services to be publicly accessed through the internet. This often instills quite a bit of fear in security experts (or anyone in IT for that matter) as they decide to move on-premises workloads to the cloud. Microsoft is aware of this potential roadblock for organizations and thus provides service endpoints and private endpoints to limit public access to Azure services.

In this blog post, I will attempt to provide a summarized explanation of Service Endpoints and Private Endpoints, then demonstrate the configuration. For more in-depth information on these two services, please see the official Microsoft documentation. I would highly recommend reading the documentation if you decide to read through this blog post.

Virtual Network service endpoints
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview

What is Azure Private Endpoint?
https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview

Service Endpoints

Service endpoints allow an administrator to lock down an Azure resource such as a storage account to a VNet (all of its subnets), a specific VNet’s subnet, or a public IP. Locking down via a public IP is an easy concept to grasp, so we’ll look at how locking down with a VNet’s subnet works. When a VNet’s subnet is selected, the routing table on the subnet is updated to route traffic to the service endpoint before routing to the internet. The traffic from the subnet still accesses the storage account by its public IP address but the traffic will flow through the faster Azure backbone network rather than the internet, thus enjoying better performance and arguably better security.

The following are some key items to note about Service Endpoints:

  • The resource will continue to have a public IP address (it does not get a private IP address)
  • The public IP will continue to be resolved by the DNS provided by Microsoft
  • It is not accessible from the on-premise network through, say, a site-to-site VPN, because the service endpoint does not actually have a private IP. If an on-premise device needs to access the resource, its public IP will need to be added to the service endpoint configuration (the on-premise device will reach the resource over the internet)
  • If there is an ExpressRoute connecting an on-premise network to Azure, then the NAT IP addresses can be added to allow access: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview#secure-azure-service-access-from-on-premises
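The subnet lock-down described above can also be configured with the Az PowerShell module rather than the portal steps shown later in this post. This is a hedged sketch — the resource names are placeholders, and it uses the Az.Network and Az.Storage cmdlets:

```powershell
# Enable the Microsoft.Storage service endpoint on the subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resourceGroupName>" -Name "<vnetName>"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnetName>" `
    -AddressPrefix "<subnetPrefix>" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# Re-read the subnet so its resource ID can be referenced
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resourceGroupName>" -Name "<vnetName>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnetName>"

# Allow the subnet on the storage account firewall and deny everything else
Add-AzStorageAccountNetworkRule -ResourceGroupName "<resourceGroupName>" `
    -Name "<storageAccountName>" -VirtualNetworkResourceId $subnet.Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resourceGroupName>" `
    -Name "<storageAccountName>" -DefaultAction Deny
```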

The following is a diagram demonstrating the traffic flow of a service endpoint configured for a storage account:

**Note that the device attempting to access the storage account will continue to reach it via its public IP address but the originating IP address seen by the storage account will be the device’s private IP address. This is the important difference to highlight for service endpoints. Also, as the public IP address of the storage account is being used as the destination, any restrictive NSGs applied to the traffic flowing outbound will need to be configured to allow traffic to reach the public IP address of the storage account.

image

**Note that Service Endpoints are not available for all services. The services currently supported are as follows:

Generally available

Public Preview

  • Azure Container Registry (Microsoft.ContainerRegistry): Preview available in limited Azure regions where Azure Container Registry is available.

Private Endpoints

A Private Endpoint adds a virtual network interface to the resource that connects to the VNet. The network interface will then have a private IP address and behaves like a device on the network. Having a private IP presence on the VNet means that on-premise devices traversing a site-to-site VPN or ExpressRoute can now access the resource via its private IP address through the private connection.

The following are some key items to note about Private Endpoints:

  • All public IP address access can be blocked because a private IP is available to access the resource
  • Azure internal DNS will now resolve the resource’s hostname to its private IP address
  • Network Security Groups (NSG) will not be applied to the private endpoint’s network interface

  • If there is a desire to block other network resources from reaching the private endpoint, then outbound NSG rules assigned to the source can be used to block access

The following is a diagram demonstrating the traffic flow of a private endpoint configured for a storage account:

**Note that this example places the endpoint directly into the subnet where the virtual machines are located but it is also possible to place the endpoint into its own subnet within the same VNET as the other subnets that will access the endpoint. It is important to note that subnets within the same VNET as the subnet containing the endpoint will have Azure DNS resolve the internal IP address of the private endpoint. The public IP will be returned when resolving the DNS name outside of the VNet.

image

As with Service Endpoints, Private Endpoints are only available for a set of services as listed here: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview#private-link-resource

With a brief overview of what a Service Endpoints and Private Endpoints are, let’s have a look at what the configuration would look like.

I’ve configured two storage accounts to demonstrate the Service Endpoint and Private Endpoint.

Configuring a Service Endpoint

The storage account we’ll be using for the service endpoint is named sgserviceendpoint:

image

I’ve created a container and uploaded a text file:

image

Without a service endpoint configured, access to the container is available from anywhere on the internet with authentication because of the access level I’ve configured:

image

I can access the container with Azure Storage Explorer from my home computer over internet and connect to a Blob container with a generated shared access signature:

image

image

image

image

image

image

To create a service endpoint, simply navigate to the Networking configuration for the storage account and select Firewalls and virtual networks:

image

Change the Allow access from configuration from All networks to Selected networks, select Add existing virtual network, then in the right blade add the subnet or subnets that you want to access the storage account via the service endpoint:

image

Notice that the following information message is displayed:

The following networks don’t have service endpoints enabled for 'Microsoft.Storage'. Enabling access will take up to 15 minutes to complete. After starting this operation, it is safe to leave and return later if you do not wish to wait.

This is because the Microsoft.Storage service endpoint needs to be enabled for the subnet as we are accessing a storage account. Azure noticed that it is not enabled and will therefore enable it for you.

image

Click on the Save button to apply the changes once the subnet has been added, but note that saving the configuration will update the routing table on the subnet and may cause a disruption to any existing connections for the subnet (best to make this change during a scheduled maintenance window):

image

Now when you navigate to the VNet containing the subnet that was enabled for the service endpoint, you’ll be able to see the configuration added to the Service endpoints setting:

image

Navigating into the Subnets configuration and clicking on the subnet configured for the service endpoint will also show that Microsoft.Storage is listed:

image

Access to the container should no longer be accessible via the internet, as this is what is displayed when I hit the refresh button from my laptop:

This request is not authorized to perform this operation. RequestId:44c96890-501e-0053-310c-254d92000000 Time:2021-03-30T02:31:40.6499686Z

image

Attempting to use Azure Storage Explorer from a server in the subnet defined in the service endpoint will continue to be allowed:

image

You will also notice that attempting to resolve the sgserviceendpoint.blob.core.windows.net address representing the container resolves to a public IP address and not a private IP address:

image
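The same check can be performed from PowerShell with Resolve-DnsName, using the storage account name from this example:

```powershell
# With only a service endpoint in place, the name still resolves to a public IP
Resolve-DnsName -Name "sgserviceendpoint.blob.core.windows.net" -Type A
```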

Configuring a Private Endpoint

The storage account we’ll be using for the private endpoint is named sgprivateendpoint. To create a private endpoint representing this storage account, navigate to the Networking configuration under Settings, click on the Private endpoint connections tab, then Private endpoint:

image

Select the subscription, resource group, provide a name for the logical private endpoint (what the private endpoint will be named in Azure), and the region it should reside in:

image

Select the appropriate resource for the Target sub-resource field:

**Note that if you want to provide blob and another resource such as file access then you’ll need to add the additional resource(s) afterwards one at a time.

image

Proceed to select the appropriate subnet you want to place the private endpoint in:

**Note the Private DNS integration remark below the Networking configuration. We will discuss this in more depth after the initial configuration

image

Add additional tags if necessary:

image

Proceed to create the private endpoint:

image
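The portal steps above can also be performed with the Az PowerShell module — a hedged sketch using the storage account from this example with placeholder network resource names:

```powershell
# Point a private link service connection at the storage account's blob sub-resource
$storage = Get-AzStorageAccount -ResourceGroupName "<resourceGroupName>" -Name "sgprivateendpoint"
$connection = New-AzPrivateLinkServiceConnection -Name "sgprivateendpoint-plsc" `
    -PrivateLinkServiceId $storage.Id -GroupId "blob"

# Create the private endpoint in the chosen subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resourceGroupName>" -Name "<vnetName>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnetName>"
New-AzPrivateEndpoint -ResourceGroupName "<resourceGroupName>" -Name "sgprivateendpoint-pe" `
    -Location "<region>" -Subnet $subnet -PrivateLinkServiceConnection $connection
```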

Clicking into the Private Endpoint resource once it has been created will display the following information:

image

Clicking into the DNS configuration under Settings will display the FQDN and the private IP address assigned to the private endpoint:

FQDN: sgprivateendpoint.privatelink.blob.core.windows.net

IP address: 10.248.1.7

image

Navigating into the VNet where the private endpoint was created will display the private endpoint that was just created:

image

Navigating into the resource group containing the private endpoint will display the private endpoint object as well as a NIC created:

image

Clicking into the NIC representing the private endpoint will show that the settings are the same as any other NIC adapter:

image

This private endpoint configuration is depicted in the diagram I included earlier in the blog post and pasted again here:

image

Private Endpoints and DNS

With the private endpoint created, the next important configuration is to ensure that DNS resolution resolves as it is supposed to in various places. The following are 3 common places you may want to access the private endpoint:

#1 - Within the same subnet where the private endpoint resides:

image

#2 - Within a subnet residing in the same VNet where the private endpoint resides:

image

#3 - From an on-premise network connected to Azure via ExpressRoute or VPN:

image

The URL to access the storage account’s blob service is:

sgprivateendpoint.blob.core.windows.net

image

Attempting to perform an nslookup from a machine using internet DNS will resolve the public IP along with two aliases, one of which represents the private link. Looking up either the public name or the private link alias will return the public IP:

> sgprivateendpoint.blob.core.windows.net

Server: dns.google

Address: 8.8.8.8

Non-authoritative answer:

Name: blob.blz22prdstr18a.store.core.windows.net

Address: 20.60.7.100

Aliases: sgprivateendpoint.blob.core.windows.net

sgprivateendpoint.privatelink.blob.core.windows.net

> sgprivateendpoint.privatelink.blob.core.windows.net

Server: dns.google

Address: 8.8.8.8

Non-authoritative answer:

Name: blob.blz22prdstr18a.store.core.windows.net

Address: 20.60.7.100

Aliases: sgprivateendpoint.privatelink.blob.core.windows.net

image

Alternatively, if an nslookup is performed against an Azure DNS server (168.63.129.16) within the same VNet, the lookup will return the private link alias with the private IP:

C:\>nslookup

Default Server: UnKnown

Address: ::1

> server 168.63.129.16

Default Server: [168.63.129.16]

Address: 168.63.129.16

> sgprivateendpoint.blob.core.windows.net

Server: [168.63.129.16]

Address: 168.63.129.16

Non-authoritative answer:

Name: sgprivateendpoint.privatelink.blob.core.windows.net

Address: 10.248.1.7

Aliases: sgprivateendpoint.blob.core.windows.net

image

As most administrators will immediately know, many environments in Azure do not use Azure DNS because they typically use Active Directory domain DNS. To get around this issue, we can simply add the private link as a zone in the internal DNS with a corresponding A record representing the storage account. The following is a demonstration of how this would be configured.

Launch DNS Manager on your DNS server (usually a domain controller) and create a new zone:

image

Select Primary zone:

image

Whether to replicate to all the DCs in the forest or domain would be your choice:

image

Create a zone representing the private link without the storage account name:

image

Select the desired dynamic update configuration and create the zone:

image

With the zone configured, create a new A record to represent the storage account (the private link alias):

image

Add the appropriate private IP:

image

image
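The zone and record creation shown above can also be scripted with the DnsServer PowerShell module on the DNS server — a sketch assuming the zone name and private IP from this example:

```powershell
# Create the forward lookup zone representing the private link (blob in this example)
Add-DnsServerPrimaryZone -Name "privatelink.blob.core.windows.net" -ReplicationScope "Forest"

# Add the A record for the storage account pointing at the private endpoint's IP
Add-DnsServerResourceRecordA -ZoneName "privatelink.blob.core.windows.net" `
    -Name "sgprivateendpoint" -IPv4Address "10.248.1.7"
```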

Now when you initiate an nslookup against the internal Active Directory domain controller, the private IP will be returned:

C:\>nslookup

Default Server: UnKnown

Address: ::1

> server 10.248.1.250

Default Server: [10.248.1.250]

Address: 10.248.1.250

> sgprivateendpoint.blob.core.windows.net

Server: [10.248.1.250]

Address: 10.248.1.250

Non-authoritative answer:

Name: sgprivateendpoint.privatelink.blob.core.windows.net

Address: 10.248.1.7

Aliases: sgprivateendpoint.blob.core.windows.net

image

Note that this example configured blob storage so if you require, say, file storage access then an additional forward lookup zone will need to be created.

More information about private link DNS configuration can be found here:

Azure Private Endpoint DNS configuration
https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns

Preventing Access to Storage Account with Private Endpoint

Unlike a service endpoint, configuring a private endpoint does not by itself prevent public access to the storage account, so if that is a requirement you will need to mimic what you would normally do to configure a service endpoint by selecting Selected networks but not adding any virtual networks or IP ranges:

image

Hope this helps anyone who may be looking for information on service endpoints and private endpoints. Note that service endpoints have no extra cost while private endpoints do (https://azure.microsoft.com/en-ca/pricing/details/private-link/)

Attempting to Activate Windows Server 2019 fails with: "Windows can't activate right now. Try activating again later. If that doesn't work, contact your system administrator. Error code: 0x800705B4"


Problem

You attempt to activate a Windows Server 2019 Standard server with a MAK key but notice that it fails with:

Windows can't activate right now. Try activating again later. If that doesn't work, contact your system administrator. Error code: 0x800705B4

If you’re having problems with activation, select Troubleshoot to try and fix the problem.

You’ve activated other servers with this same MAK key without any issues.

image

The product key you entered didn’t work. Check the product key and try again, or enter a different one. (0x80070490)

image

Solution

One of the ways this issue can be fixed is to execute the slmgr.vbs script with the upk switch to uninstall the current key on the Windows Server 2019 server. More information about this script and switch can be found here: https://docs.microsoft.com/en-us/windows-server/get-started/activation-slmgr-vbs-options#advanced-options


/upk [<Activation ID>]

This option uninstalls the product key of the current Windows edition. After a restart, the system will be in an Unlicensed state unless a new product key is installed.
Optionally, you can use the <Activation ID> parameter to specify a different installed product.
This operation must be run from an elevated Command Prompt window.

Executing this:

slmgr.vbs -upk

… command will display the following prompt:

Uninstalled product key successfully.

image

Once the existing product key is removed, we can use the same slmgr.vbs script with the ipk switch to install the new key as such:

slmgr.vbs -ipk KBP8M-XXXXX-K47P8-XXXXX-XXXXX

image

Reviewing the Activation settings of the server should display the server as being activated:

image

Deploying Carbon Black Cloud via GPO with a transform (MST) file specifying the Company Code and Group Name


I was recently asked about deploying the Carbon Black Cloud Sensor via Group Policy as a published MSI file and recalled how much difficulty I had incorporating the settings for the Company Code and Group Name, so I decided to dig up my old notes and write this blog post in case anyone else is trying to find this information.

Before I begin, those who might be looking for the installation command for the deployment with, say, Workspace ONE can use the following:

installer_vista_win7_win8-64-3.6.0.1979.msi /L*vx log.txt COMPANY_CODE=XXXXXXXXXXXXXX GROUP_NAME=Monitored /qn

**Substitute the COMPANY_CODE value with your organization code and the GROUP_NAME with the name of your group.

Before publishing the Carbon Black Cloud Sensor MSI in Active Directory as a GPO, you’ll need to customize the MSI file with the orca.exe tool. Obtaining it isn’t straightforward so I’ll outline the process here.

Obtaining orca.exe for creating a Transform file (.MST)

Navigate to the following site where Windows 10 SDK can be downloaded:

Windows 10 SDK
https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk/

Download the ISO file:

image

Mount the ISO, navigate to the following directory:

E:\Installers

… and obtain the following files:

  • a35cd6c9233b6ba3da66eecaa9190436.cab
  • 838060235bcd28bf40ef7532c50ee032.cab
  • fe38b2fd0d440e3c6740b626f51a22fc.cab
  • Orca-x86_en-us.msi

image

Proceed to install Orca by running the MSI file and you should see the application in your start menu.

Creating a Microsoft Installer Transform (.MST) File

With Orca installed, we can proceed to modify the MSI file as demonstrated in the following KB:

To Create a Microsoft Installer Transform (.MST) File

https://docs.vmware.com/en/VMware-Carbon-Black-Cloud/services/cbc-sensor-installation-guide/GUID-F28C735B-EC91-4A56-A041-3C07F9D36DE6.html

Open the MSI file with Orca and click Transform > New Transform:

image

Select the Property table, then click on Tables > New Row:

image

Click Property and enter "COMPANY_CODE" then click Value and enter the company registration code for your organization:

image

Repeat the same process for the GROUP_NAME:

image

You should now see the two parameters added:

image

Proceed to generate the transform file by clicking on Transform > Generate Transform:

image

image
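Before publishing through a GPO, the transform can be verified with a manual msiexec install — a sketch where the MST file name is a placeholder for whatever you named your generated transform:

```powershell
# Apply the transform during a manual install to confirm COMPANY_CODE and GROUP_NAME take effect
msiexec /i "installer_vista_win7_win8-64-3.6.0.1979.msi" TRANSFORMS="<transformName>.mst" /L*vx install.log /qn
```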

Deploying Carbon Black Cloud via Group Policy

With both the MSI and transform file (MST) created, we can now publish it in a Group Policy:

image

Select Advanced as the deployment method:

image

Navigate to the Modifications tab and select the transform file:

image

Click OK and assign the GPO to the appropriate OUs containing the computer objects.

image

Deploying Carbon Black Cloud via GPO with a transform (MST) file fails with: “CAInstallPreCheck: Expect a cfg.ini in the same directory as the MSI, but could not find it.“


Problem

You’ve completed setting up Carbon Black Cloud to be deployed via GPO as described in one of my previous posts:

Deploying Carbon Black Cloud via GPO with a transform (MST) file specifying the Company Code and Group Name
http://terenceluk.blogspot.com/2021/04/deploying-carbon-black-cloud-via-gpo.html

But notice that it fails with the following event log errors:

Log Name: Application
Source: CbDefense
Event ID: 49
Level: Error

The description for Event ID 49 from source CbDefense cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

CbDefense

CAInstallPreCheck: Expect a cfg.ini in the same directory as the MSI, but could not find it.

image

Log Name: Application
Source: Application Management Group Policy
Event ID: 102
Level: Error

The install of application Carbon Black Cloud Sensor 64-bit from policy Test Carbon Black Cloud Install failed. The error was : %%1603

image

Solution

One of the reasons why this error would be thrown is if the COMPANY_CODE was missed when creating the transform file. Verify that both the COMPANY_CODE and GROUP_NAME exists in the transform file.

image

Configuring Azure Privileged Identity Management (PIM)


One of the features I’ve liked a lot when working with clients who had Azure AD Premium P2 or Enterprise Mobility + Security (EMS) E5 licenses is Privileged Identity Management (PIM). This feature has a lot to offer when it comes to one of the most neglected operations that every organization should have: managing privileged access. In this post, I will attempt to describe its benefits as well as demonstrate some of its features.

What is Azure Privileged Identity Management (PIM)?

Let me begin by saying that Microsoft provides an excellent write up and video about PIM, which can be found here:

What is Azure AD Privileged Identity Management?
https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure

When I am asked this question, I usually provide the following:

The short and condensed explanation of Azure’s Privileged Identity Management (PIM) is that it provides you with the tools to manage, control, monitor, and audit access to resources in the organization. An example: a consultant is engaged on a project with your organization and needs administrative rights in Azure to add or manage another domain, so the consultant is granted the Global Administrator role, but the role then never gets removed. The consultant in this scenario can easily be interchanged with any administrator on the team who was granted the Global Administrator role that never gets removed, which is very similar to, say, the Enterprise Admins or Domain Admins groups in an on-premise Active Directory. Another example could be that we do not want administrators to have persistent administrative permissions whenever they log into Azure, so we would like them to have the ability to elevate their permissions on demand. Lastly, there may be suspicion that the account used to sign up for the Azure tenant had its password reset at some point and is being used, so an audit of the history is required.

The following are the key features taken straight from the Microsoft documentation:

  • Provide just-in-time privileged access to Azure AD and Azure resources
  • Assign time-bound access to resources using start and end dates
  • Require approval to activate privileged roles
  • Enforce multi-factor authentication to activate any role
  • Use justification to understand why users activate
  • Get notifications when privileged roles are activated
  • Conduct access reviews to ensure users still need roles
  • Download audit history for internal or external audit

Leveraging the features above can allow any organization to better manage privileged access to Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune.

Active vs Eligible Roles for Privileged Identity Management

With Azure Privileged Identity Management, there are two types of assignments that can be made to roles and they are:

  • Eligible assignments require the member of the role to perform an action to use the role. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers.
  • Active assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role at all times.

It is generally advised to use eligible assignments as much as you can so you avoid having an account that always has permissions. If an account needs an active assignment for scenarios such as hiring a consultant for a full day of review, specify an active assignment with an assignment start and end date/time.

Licensing Requirements

The official Microsoft provided licensing requirements for using PIM can be found here:

License requirements to use Privileged Identity Management

https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/subscription-requirements

To summarize, if you are strictly purchasing Azure AD Premium P2 licenses for PIM then you will only need licenses for the users performing the following tasks:

  • Users assigned as eligible to Azure AD or Azure roles managed using PIM
  • Users who are assigned as eligible members or owners of privileged access groups
  • Users able to approve or reject activation requests in PIM
  • Users assigned to an access review
  • Users who perform access reviews

Azure AD Premium P2 licenses are not required for the following tasks:

  • No licenses are required for users who set up PIM, configure policies, receive alerts, and set up access reviews.

Examples of usage scenarios can be found here: https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/subscription-requirements#example-license-scenarios

The following is what happens when these licenses expire:

If an Azure AD Premium P2, EMS E5, or trial license expires, Privileged Identity Management features will no longer be available in the directory:

  • Permanent role assignments to Azure AD roles will be unaffected.
  • The Privileged Identity Management service in the Azure portal, as well as the Graph API cmdlets and PowerShell interfaces of Privileged Identity Management, will no longer be available for users to activate privileged roles, manage privileged access, or perform access reviews of privileged roles.
  • Eligible role assignments of Azure AD roles will be removed, as users will no longer be able to activate privileged roles.
  • Any ongoing access reviews of Azure AD roles will end, and Privileged Identity Management configuration settings will be removed.
  • Privileged Identity Management will no longer send emails on role assignment changes.

Note that Azure AD Premium P2 also provides the Identity protection feature for accounts (https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection), which would enable the following:

  • Vulnerabilities and risky accounts detection
  • Risk events investigation
  • Risk-based Conditional Access policies

If these features are desired for the organization then everyone using them will need to be licensed.

With an overview of PIM provided, I will proceed to demo each of the key features provided in the Microsoft documentation:

  • Provide just-in-time privileged access to Azure AD and Azure resources
  • Assign time-bound access to resources using start and end dates
  • Require approval to activate privileged roles
  • Enforce multi-factor authentication to activate any role
  • Use justification to understand why users activate
  • Get notifications when privileged roles are activated
  • Conduct access reviews to ensure users still need roles
  • Download audit history for internal or external audit

Deploy PIM

I find that many administrators skip through the Deploy PIM section of the Microsoft documentation (https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-deployment-plan) because it does not actually contain any configuration instructions, but I’d like to stress how important it is to read through all of the items Microsoft outlines to successfully plan a PIM deployment. I would highly recommend going through the documentation before jumping into the next configuration section.

No more “Consent to PIM”

Those who have worked with PIM in the past or written the older AZ-500 exam may remember how administrators needed to “consent to PIM” prior to using the feature. The process of consenting to PIM has been removed, so there is no need to perform this step anymore, and the console now displays the following banner on the Quick start page:

You are using the updated Privileged Identity Management experience for Azure AD roles.

image

The changes that Microsoft made, as per the documentation https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-getting-started#prerequisites, are as follows:

When a user who is active in a privileged role in an Azure AD organization with a Premium P2 license goes to Roles and administrators in Azure AD and selects a role (or even just visits Privileged Identity Management):

  • We automatically enable PIM for the organization
  • Their experience is now that they can either assign a "regular" role assignment or an eligible role assignment

When PIM is enabled it doesn't have any other effect on your organization that you need to worry about. It gives you additional assignment options such as active vs eligible with start and end time. PIM also enables you to define scope for role assignments using Administrative Units and custom roles. If you are a Global Administrator or Privileged Role Administrator, you might start getting a few additional emails like the PIM weekly digest. You might also see MS-PIM service principal in the audit log related to role assignment. This is an expected change that should have no effect on your workflow.

Start Using PIM with Wizard

If you’re new to PIM and need to quickly start using the features with minimal configuration, the security wizard (https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-security-wizard) is a great start. The Discovery and insights (Preview) feature in the Privileged Identity Management blade provides an easy-to-use wizard to begin leveraging PIM features:

image

Configuring a PIM Administrator with an Active Assignment

If you already have a deployment plan created, the first step would be to assign the planned account to the Privileged Role Administrator role for PIM administration. Navigate to Azure AD roles under Manage:

image

Then Roles and type in Privileged Role Administrator to list the PIM role, then select it:

image

Once in the properties of the role, proceed to use the Add assignments button to add a user into the role. I will add my own account for the purpose of this example.

**Note that as of the time of writing this post, the Microsoft documentation dated 08/06/2020 specifies that we should click on the Add Member button (https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-how-to-give-access-to-pim#delegate-access-to-manage-pim), but this button is no longer available as of 4/3/2021.

image

image

I will configure this assignment to be active and permanently assigned but note that it is generally advised that most active assignments should be configured with a start and end time/date if possible.

image

The account specified should now be displayed under the Active assignments tab:

image

Navigating back to the Privileged Identity Management page and selecting My roles:

image

Then under Azure AD roles and Active Assignments should display the roles I am currently a part of (Global Administrator and Privileged Role Administrator):

image

Note that the act of granting my account the Privileged Role Administrator role will send the following notification to my email address about the assignment.

image

Privileged Identity Management Alerts

You can instantly view a list of issues identified by PIM by clicking on Alerts under Manage:

image

Note the 3 alerts that are raised for this environment:

  • Roles don’t require multi-factor authentication for activation
  • Potential stale accounts in a privileged role
  • There are too many global administrators

The first and third are fairly obvious, while the second lists accounts that have not changed their password in the past 90 days; clicking into the line item will bring up the details:

image

Clicking on the Settings button will bring us into the configuration settings used for identifying these risks:

image

You can click on each of the alerts to see what configuration changes you can make as well as whether to disable them:

image

Assigning a User an Eligible Role (JIT)

With the initial configuration and walkthrough of the alerts out of the way, let’s proceed to assigning a user an eligible assignment for the Global Administrator role. Begin by navigating to Azure AD roles under Manage:

image

Then Roles and type in Global Administrator to list the PIM role, then select it:

image

Click on Add assignments:

image

I’ll be using an account named John Smith as the example:

image

In the Settings tab, we will configure the Assignment type as Eligible with Permanently eligible disabled and an Assignment starts and Assignment ends date/time spanning one day:

image

Note that the Assignment starts and Assignment ends date/time cannot exceed one year or the message Time duration specified exceeds maximum allowed. will be displayed:

image

The role assignment should now be displayed under the Eligible assignments tab:

image

Note that upon completing the assignment of John Smith to the eligible role, notification emails such as the one below would have been sent to the admins.

image

Testing a User with Eligible Role

With the eligible role assigned to John Smith, we can log into the Azure portal and confirm that he does not have the ability to create new users or reset passwords, as he is eligible but not yet an active Global Admin:

image

We can then navigate to the Privileged Identity Management section, click on Azure AD roles under Activate, then see that we are eligible for the Global Administrator role with the option of activating it:

**Note that there is an end time for this eligible assignment, as configured in the previous section.

image

Attempting to activate the role presents the following prompt:

Additional verification required. Click to continue

This prompt is displayed because the account does not have MFA set up, and the default activation settings require MFA, which I will show a bit later.

image

Proceed to set up MFA:

image

imageimage

Once MFA is successfully set up, the following activation options will be displayed. Note how there are various parameters we can configure such as the activation start time, the duration, and a reason, which is required for activation:

image

For the purpose of this example, I will set the duration to only 2 hours, mimicking the scenario that I only need 2 hours of elevated permissions. My reason for the activation will be: Test Global Admin activation.

image

The activation proceeds through 3 stages and will complete fairly quickly (you don’t need to walk away from the computer):

image

Upon completion of activation, the Eligible assignments tab will refresh and display the following message:

You have just activated a role. Click here to view your active roles

image

Clicking the Click here to view your active roles will change the table to Active assignments which will display the activated state. Note the End time listed is 2 hours from when I activated it:

image

The test John Smith user will now be able to create accounts:

image

Upon activating the Global Admin assignment, notification emails such as the one below would have been sent to the admins:

image

When the duration has expired, an email will be sent:

image

Activation Role Settings Configuration

It is possible to customize the activation role settings demonstrated in the previous activation by navigating to Azure AD roles under Manage:

image

Then Roles and type in Global Administrator to list the PIM role, then select it:

image

Click on Role settings to list the parameters that we can edit:

image

The configuration settings are partitioned into tabs, which I have combined into one screenshot.

The Activation tab allows us to:

  1. Change the activation maximum duration in hours, which defaults to 8 and is customizable during the activation process
  2. Require the account to have MFA set up – this is why John Smith had to set up MFA
  3. Require a justification – this is why we had to enter a reason
  4. Require ticket information on activation – this provides two fields: a ticket number and a ticketing system link
  5. Require approval to activate – one of the features I enjoy most, as it requires another administrator to interactively approve the activation from the Azure portal

The Assignment tab allows us to:

  1. Allow permanent eligible assignment – this can be changed to limit the amount of time an eligible assignment can last
  2. Allow permanent active assignment – this enables or disables permanent active assignment; if disabled, the active assignment can be forced to expire after a period of time (1 year, 6 months, 3 months, 1 month, 15 days)
  3. Require Azure MFA on active assignment – this forces MFA for active assignments
  4. Require justification on active assignment – this forces a reason to be entered

The Notification tab allows us to specify various notification settings for when activation or assignments take place.

image

These configuration settings are set independently for each role.

PIM Auditing

To perform an audit of the activities of privileged accounts, navigate to Azure AD roles under Manage:

image

Select Resource Audit under Activity and you will see the actions of PIM administrators as well as users who have activated their eligibility for configured roles:

image

Clicking on a line item will bring up the details of the action:

image

Navigating to the My audit section will display all the PIM activities made by the logged-in account:

image

Creating an Access Review

You can create an access review to list the specific PIM activities of a role such as Global Administrator by navigating to Azure AD roles under Manage:

image

Then Access reviews and select New:

image

Then configure the review as required:

image

The frequency can be set to the following:

image

Note the Upon completion settings and Advanced settings that are available:

image

A review will be created in the Access reviews:

image

Clicking into the report will provide information about the PIM activity for Global Admins:

image

image

image

image

image

Hope this gives anyone looking for information about PIM an overview and demonstration of what it has to offer.

Logging onto a Citrix ADC / NetScaler hosted on Azure with nsroot fails with the error: "nsnet_connect: No such file or directory"


Problem

You’ve noticed that a Citrix ADC / NetScaler hosted on Azure is no longer reachable, and attempting to use the Serial console feature in the Azure administration portal fails with the following error after entering the password for the nsroot account:

Nsnet_connect: No such file or directory

image

Solution

One of the possible reasons the above error is displayed is that the /var directory is full, but not being able to log into the appliance means that there is no way to clean up the drive. The workaround for this scenario is to try the nsrecover account to log in. The default password for this account is usually nsroot:

image

Once successfully logged in, use df -h to confirm the issue as shown in the screenshot above, then use the following KB to clean up the /var directory:

How to free space on /var directory for logging issues with a Citrix ADC appliance
https://docs.citrix.com/en-us/citrix-adc/current-release/system/troubleshooting-citrix-adc/how-to-free-space-on-var-directory.html

Understanding the differences between Azure AD Roles and Azure RBAC Roles with respect to the levels tenant, root, management group and subscription


I find that the differences between Azure AD Roles and Azure RBAC Roles with respect to the following levels are often overlooked or misunderstood:

  • Tenant
  • Root
  • Management group
  • Subscription

Those who have not yet had the opportunity to work with larger organizations, where multiple subscriptions exist and management groups are used to organize them, may not realize the way in which all of these components interact with each other, so this blog post serves to provide an overview of them.

Let me begin by providing the official Microsoft documentation for each of the components; note that if you decide to read all of them, you may not need to read this blog post.

Quickstart: Set up a tenant
https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant

What are Azure management groups?
https://docs.microsoft.com/en-us/azure/governance/management-groups/overview

Microsoft Azure glossary: A dictionary of cloud terminology on the Azure platform - Subscription
https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology#subscription

Associate or add an Azure subscription to your Azure Active Directory tenant
https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory

Classic subscription administrator roles, Azure roles, and Azure AD roles (must read)
https://docs.microsoft.com/en-us/azure/role-based-access-control/rbac-and-directory-admin-roles

Note that there isn’t a Microsoft document that explains what the Root level is but the following diagram should explain what and where it sits among the other components:

clip_image002[14]

Notice that I’ve included an on-premise Active Directory in the diagram to denote that domains within the traditional Active Directory Domain Services (AD DS) can be synchronized into Azure AD after they have been added as a custom domain. The user accounts, computer accounts, and groups (also known as security principals) that are synchronized into AAD (Azure Active Directory) can be used to assign Azure permissions at the various levels of resources (e.g. root, management groups, subscriptions, and resources). As accounts are primarily synchronized from AD DS to AAD and not the other way around, you cannot grant permissions to your on-premise resources with AAD accounts. I’ve also included Office 365 in the diagram to depict how Azure AD roles are used for SaaS application management (O365, D365, Power BI, Intune), which I will elaborate on later in this blog post.

Azure account and Azure subscriptions

Before jumping into the roles used to manage Azure, it is important to understand that an Azure account represents a billing relationship. This essentially means that an Azure account is all of the following:

  • A user identity
  • One or more Azure subscriptions
  • An associated set of Azure resources.

The administrator who creates the account is the Account Administrator for all subscriptions created in that account as well as the default Service Administrator for the subscription.

Azure subscriptions help administrators separate access to resources and their billing.

An example is where there are multiple environments for a project and each environment’s budget should be billed separately for its own Azure consumption. Having multiple subscriptions under the same tenant allows the access to resources and the billing costs to be isolated.

clip_image002[4]

Lastly, each subscription is associated with an Azure AD directory and cannot have more than one, even though an Azure AD directory can be associated with more than one subscription. The following diagram depicts the relationship between the Azure AD tenant owner, the subscription, and the resources.

clip_image002[6]

The Beginning of Azure – Classic Subscription Administrator Roles

When Azure was first released, access to resources was managed by only the following three administrator roles:

  • Account Administrator
  • Service Administrator
  • Co-Administrator

clip_image002[16]

Those who got into Azure later (after RBAC roles were introduced) will see references to these accounts in the Classic administrators tab:

imageimage

Or alternatively, some CSP subscriptions such as the one below provide an informational message indicating:

This type of subscription does not support classic administrators.

image

The Evolution of Azure Roles with RBAC

Having only these three roles to manage the full scope of Azure resources did not scale very well, so Microsoft later added Azure role-based access control (Azure RBAC) to provide more fine-grained access management of resources. Azure RBAC provides 70 built-in roles that can be assigned at different scopes (management group, subscription, resource group, and resource) and allows the creation of custom roles. There are four fundamental Azure roles; the first three apply to all resource types:

Owner

  • Full access to all resources
  • Delegate access to others

Applies to all resource types. The Service Administrator and Co-Administrators are assigned the Owner role at the subscription scope.

Contributor

  • Create and manage all types of Azure resources
  • Create a new tenant in Azure Active Directory
  • Cannot grant access to others

Applies to all resource types.

Reader

  • View Azure resources

Applies to all resource types.

User Access Administrator

  • Manage user access to Azure resources

clip_image002[18]

The rest of the 66 roles can be found here: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles. Being familiar with these roles is useful for the AZ exams.
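As a quick sketch of how this list can be explored locally (assuming the Az PowerShell module is installed and an authenticated session already exists via Connect-AzAccount):

```powershell
# List every built-in Azure RBAC role with its description
# (requires an authenticated Az PowerShell session)
Get-AzRoleDefinition |
    Where-Object { -not $_.IsCustom } |
    Sort-Object Name |
    Select-Object Name, Description
```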

While most new environments should have moved away from the Azure classic deployment model, it is important to note that only the Azure portal and the Azure Resource Manager APIs support Azure RBAC. Users, groups, and applications that are assigned Azure roles cannot use the Azure classic deployment model APIs. I would highly recommend the following document to understand the differences:

Azure Resource Manager vs. classic deployment: Understand deployment models and the state of your resources
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/deployment-models

Azure RBAC roles are available throughout the Azure portal via the Access control (IAM) blade for management groups:

image

… subscriptions:

image

… resource groups:

image

… other resources (a disk resource in the screenshot below):

image


What is RBAC?

As a refresher, RBAC stands for Role-Based Access Control, which is used to define what the security principal assigned the role can perform. The nuts and bolts of how RBAC works in Azure is that a built-in or custom role has a definition with a collection of permissions. The definition lists the operations that can be performed, and these can include read, write, and delete. One of the ways to review the definition is to use the Get-AzRoleDefinition cmdlet with a specified role such as Contributor as shown below:

PS C:\> Get-AzRoleDefinition "Contributor" | ConvertTo-Json

{
  "Name": "Contributor",
  "Id": "b24988ac-6180-42a0-ab88-20f7382dd24c",
  "IsCustom": false,
  "Description": "Lets you manage everything except access to resources.",
  "Actions": [
    "*"
  ],
  "NotActions": [
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/*/Write",
    "Microsoft.Authorization/elevateAccess/Action",
    "Microsoft.Blueprint/blueprintAssignments/write",
    "Microsoft.Blueprint/blueprintAssignments/delete"
  ],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/"
  ]
}

List Azure role definitions

https://docs.microsoft.com/en-us/azure/role-based-access-control/role-definitions-list?tabs=roles

Note the Actions that are allowed (the * denotes all actions) and the NotActions that are explicitly disallowed (granting authorization, elevating access, and Blueprint assignment write and delete).
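To illustrate how a definition drives a custom role, the following sketch clones the Contributor definition and additionally blocks virtual network writes; the role name, the added NotAction, and the subscription scope are hypothetical examples, not values from this post:

```powershell
# Sketch: build a custom role from the Contributor definition
# (role name, added NotAction, and scope below are hypothetical)
$role = Get-AzRoleDefinition "Contributor"
$role.Id = $null                 # a new definition must receive a new ID
$role.IsCustom = $true
$role.Name = "Contributor - No VNet Writes"
$role.Description = "Contributor minus virtual network write operations."
$role.NotActions.Add("Microsoft.Network/virtualNetworks/write")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role
```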

What are Azure AD Roles? They should not be mistaken for Azure Roles (RBAC)

It is important to recognize that Azure AD roles and Azure roles (RBAC) are not the same. Azure roles are as described in the previous sections, while Azure AD roles are used to manage Azure AD resources in a directory, such as creating or editing users, assigning administrative roles to others, resetting user passwords, managing user licenses, and managing domains. Think of Azure AD as the traditional Active Directory, but a modernized version used in the cloud with a drastically different architecture. The following table describes a few of the more important Azure AD roles.

Global Administrator

  • Manage access to all administrative features in Azure Active Directory, as well as services that federate to Azure Active Directory
  • Assign administrator roles to others
  • Reset the password for any user and all other administrators

The person who signs up for the Azure Active Directory tenant becomes a Global Administrator.

User Administrator

  • Create and manage all aspects of users and groups
  • Manage support tickets
  • Monitor service health
  • Change passwords for users, Helpdesk administrators, and other User Administrators

Billing Administrator

  • Make purchases
  • Manage subscriptions
  • Manage support tickets
  • Monitor service health

clip_image002[10]

The Azure AD roles can be found in Roles and administrators blade:

image

image

The complete list of Azure AD roles can be found here: https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference

Differences between Azure roles and Azure AD roles

Microsoft’s official documentation states that at a high level, Azure roles control permissions to manage Azure resources, while Azure AD roles control permissions to manage Azure Active Directory resources with the following table highlighting some of the differences:

Azure roles:

  • Manage access to Azure resources
  • Support custom roles
  • Scope can be specified at multiple levels (management group, subscription, resource group, resource)
  • Role information can be accessed in the Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager templates, and the REST API

Azure AD roles:

  • Manage access to Azure Active Directory resources
  • Support custom roles
  • Scope can be specified at the tenant level (organization-wide), administrative unit, or on an individual object (for example, a specific application)
  • Role information can be accessed in the Azure admin portal, Microsoft 365 admin center, Microsoft Graph, and AzureAD PowerShell
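The last point can be seen in practice: the two role systems are queried through entirely different cmdlet families. A sketch, assuming the Az and AzureAD modules are installed and connected (the subscription ID is a placeholder):

```powershell
# Azure (RBAC) role assignments - queried through the Az module
Get-AzRoleAssignment -Scope "/subscriptions/<subscription-id>" |
    Select-Object DisplayName, RoleDefinitionName, Scope

# Azure AD role memberships - queried through the AzureAD module
Get-AzureADDirectoryRole | ForEach-Object {
    $role = $_
    Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
        Select-Object @{ n = 'Role'; e = { $role.DisplayName } }, UserPrincipalName
}
```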

Are there any overlaps between Azure roles and Azure AD roles?

One of the common questions I get asked about Azure roles and Azure AD roles is whether they overlap, and the short answer is no, as shown in the diagram I presented earlier:

clip_image002[20]

Azure AD role permissions can't be used in Azure custom roles and vice versa. The only situation where Azure AD spans into the resources that Azure RBAC roles manage is if a Global Admin elevates their access by activating the Global Admin can manage Azure Subscriptions and Management Groups switch in the Azure portal as shown in the following screenshot:

Access management for Azure resources

Terence Luk can manage access to all Azure subscriptions and management groups in this tenant.

image

https://docs.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin

Enabling this option will grant the user the Azure RBAC User Access Administrator role on all subscriptions for the tenant, as shown in the screenshot below:

image

Notice that there is a Foreign Principal for ‘Tec… labeled above. The subscription in this example was provisioned by the CSP partner Tech Data, and by accepting the invite, they were granted Owner permissions to the subscription. It is also important to note that the User Access Administrator role allows the user to grant other users access to Azure resources, a permission otherwise only an Owner has.
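Per the elevate-access document linked above, the elevated access should be removed once the task requiring it is complete. A sketch of doing so with the Az module (the sign-in name is a placeholder):

```powershell
# Sketch: remove the root-scope ("/") User Access Administrator
# assignment created by the elevate-access toggle (placeholder account)
Remove-AzRoleAssignment -SignInName "admin@contoso.com" `
    -RoleDefinitionName "User Access Administrator" `
    -Scope "/"
```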

Azure AD and Office 365

Those who work heavily in the Office 365 space will recognize that some Azure AD administrator roles span into Microsoft Office 365. These roles include the Global Administrator role and the User Administrator role. The following is a screenshot of the Roles listed in the Microsoft 365 admin center Roles portal:

image

This list can be expanded to show the rest of the non-administrative roles:

image

image

Notice how there is a Billing admin in the Microsoft 365 roles. This is the same role as the Billing Administrator in Azure AD and assigning this role in either Azure AD or Microsoft 365 will have the other reflect the same change:

image 

This is also the same for the Global Administrator role in Azure AD and in Office 365:

image

image

CSP Permissions for Subscriptions

Having worked for a CSP over the past few years, one of the most common questions I get asked by clients with whom we, as the CSP, are trying to establish a relationship is what type of permissions are granted when they accept us (the vendor) as their CSP. The answer is that we are granted the Global Administrator and Helpdesk admin roles, as shown in the Microsoft 365 admin console screenshot below:

image

Which can be removed if required:

image

As well as Owner, as shown in the Azure portal below:

image

Here are some useful reference documents about this topic:

Azure subscriptions and resource management
https://docs.microsoft.com/en-us/partner-center/customers-revoke-admin-privileges#azure-subscriptions-and-resource-management

Delegated admin privileges in Azure AD
https://docs.microsoft.com/en-us/partner-center/customers-revoke-admin-privileges#delegated-admin-privileges-in-azure-ad

Invite a customer to establish a reseller relationship with you
https://docs.microsoft.com/en-us/partner-center/request-a-relationship-with-a-customer#invite-a-customer-to-establish-a-reseller-relationship-with-you

Transitioning to CSP for Seat-based services
https://docs.microsoft.com/en-us/partner-center/transition-seat-based-services

Multi-partner functions in CSP
https://docs.microsoft.com/en-us/partner-center/multipartner

I hope this blog post has been informative for anyone who may be looking for information about the differences between Azure AD roles and Azure RBAC roles.

PowerShell script to remove users in an Active Directory group from all Microsoft Teams' Teams in an organization


I was recently asked by a colleague whether it was possible to use PowerShell to remove a group of users in an Active Directory group from all Microsoft Teams’ Teams in an organization. A bit of Googling did not yield any results, so I quickly wrote a script that performs the following:

  1. Uses Get-ADGroupMember to export a list of users’ User Principal Names from an Active Directory group to a txt file
  2. Uses the exported list of UPNs to get the list of Teams each user belongs to
  3. Writes the list of Teams each user belongs to into a txt file with their UPN as the file name
  4. Removes the user from every Team they belong to

The following is the PowerShell script.

Obtain list of users in an AD Group (you can run this on a domain controller and copy the file to where you will connect to O365)

Get-ADGroupMember -Identity "Board Members" | %{Get-ADUser $_.SamAccountName | foreach { $_.userPrincipalName }} > C:\Scripts\UPNofADGroup.txt

**The example above retrieves users from an AD group named “Board Members”

Connect to Microsoft Teams environment

Connect-MicrosoftTeams

https://docs.microsoft.com/en-us/powershell/module/teams/connect-microsoftteams?view=teams-ps

Use the list of UPNs to export the Teams they belong to then remove them from the Teams

ForEach ($userToRemove in Get-Content C:\Scripts\UPNofADGroup.txt)
{
    # Export the list of Teams the user belongs to into a txt file named after their UPN
    $exportedFile = "C:\Scripts\" + $userToRemove + ".txt"
    Get-Team -User $userToRemove | FT -AutoSize > $exportedFile

    # Collect the GroupId of each Team the user belongs to, then remove them from each one
    $GroupIDList = Get-Team -User $userToRemove | Select *GroupID*
    Foreach ($GroupID in $GroupIDList)
    {
        Remove-TeamUser -GroupID $GroupID.GroupID -User $userToRemove
    }
}

--------------------------------------------------------------------------------------------------

Hope this helps anyone who may be looking for a script like this.

Moving an Azure virtual machine and the Recovery Services Vault with its backups from one subscription to another

$
0
0

Those who have undertaken the task or project of moving resources from one Azure subscription to another will know that the operation within the Azure Portal is a matter of a few clicks, but the complexity can increase significantly depending on the type, the amount, and the dependencies of the resources being moved. While I feel that Microsoft will eventually make this potentially painful process a thing of the past, it can be a daunting task today as I write this post. To help demystify a common operation, I would like to demonstrate how to move an Azure virtual machine and the Recovery Services Vault with its backups from one subscription to another.

Microsoft Documentation

To begin, I would like to provide the following three links that should be reviewed to understand the operations this post will be demonstrating:

Move resources to a new resource group or subscription
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

Move a Recovery Services vault across Azure Subscriptions and Resource Groups
https://docs.microsoft.com/en-us/azure/backup/backup-azure-move-recovery-services-vault?toc=/azure/azure-resource-manager/toc.json

Move guidance for virtual machines
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations

Test Environment

The following are the two Pay-as-you-go subscriptions I’ll be using:

image

The following is the Test-VM I will be moving from Azure Source Sub to Azure Destination Sub (note that tags are configured for this VM):

image

The following are the components all placed into the resource group RG-VMs and their respective Resource IDs, which will change after the subscription move:

Virtual Machine:
/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/virtualMachines/Test-VM

VM NIC:
/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/networkInterfaces/Test-VM-nic-9190f27cc6cc40a3bede740db168b0d3

VM Public IP with Basic SKU (basic SKU IP addresses can be moved but standard cannot):
/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/publicIPAddresses/Test-VM-pip-90462821bc164f12a624afccf204b93c

VNet:
/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet

VM Disk:
/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611

image

The following is the Recovery Services Vault that is configured to backup the Test-VM:

image

The following is the backup status for the Test-VM (note that there are backups configured and a few restore points):

image

Subscription to Subscription Move Blockers (Validation errors)

The subscription to subscription move operation will run a validation check prior to allowing you to initiate it, and the following are the types of errors that may be presented.
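The same validation the portal performs can also be invoked directly against the Azure Resource Manager validateMoveResources endpoint before attempting a move, which is handy for scripting dry runs. Below is a minimal sketch using Invoke-AzRestMethod from the Az PowerShell module, assuming the source and destination subscriptions and the Test-VM resource ID from this test environment:

```powershell
# Validate a cross-subscription move without actually initiating it.
# The body mirrors the resources selected in the portal's move blade.
$body = @{
    resources = @(
        "/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/virtualMachines/Test-VM"
    )
    targetResourceGroup = "/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs"
} | ConvertTo-Json

Invoke-AzRestMethod -Method POST `
    -Path "/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/validateMoveResources?api-version=2021-04-01" `
    -Payload $body
```

The call is asynchronous: Azure accepts the request and the final result (retrieved from the location header it returns) is empty on success, while a failed validation returns the same error payloads shown in the examples below.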

You can’t move a VM with backups enabled:

image

{"code":"DiskHasRestorePoints","target":"Microsoft.Compute/disks","message":"The move resources request contains resources like /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611 that are being backed up as part of a Azure Backup job. Browse the link https://aka.ms/vmbackupmove for information.","details":[{"code":"DiskHasRestorePoints","target":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611","message":"The move resources request contains resources like /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611 that are being backed up as part of a Azure Backup job. Browse the link https://aka.ms/vmbackupmove for information."}]}

image

You can’t move a VM without deleting its restore collection points even if the backup is disabled:

The same message is displayed as when backups are configured, but the component that fails validation is DiskHasRestorePoints:

{"code":"DiskHasRestorePoints","target":"Microsoft.Compute/disks","message":"The move resources request contains resources like /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611 that are being backed up as part of a Azure Backup job. Browse the link https://aka.ms/vmbackupmove for information.","details":[{"code":"DiskHasRestorePoints","target":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611","message":"The move resources request contains resources like /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611 that are being backed up as part of a Azure Backup job. Browse the link https://aka.ms/vmbackupmove for information."}]}

image

You can’t move a VM attached to a VNET that has another VM attached to it that is not being moved:

In this example, the Test-VM to be migrated is attached to the Test-VNet VNet. This VNet also has another VM named Test-VM2 attached to it, which isn’t being moved.

{"code":"MissingMoveDependentResources","target":"Microsoft.Network/virtualNetworks","message":"The move resources request does not contain all the dependent resources. Please check details for missing resource Ids.","details":[{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/virtualMachines/Test-VM2"},{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/publicIPAddresses/Test-VM2-ip"}]

image

You can’t move a VM without the VNet it is attached to:

image

{"code":"MissingMoveDependentResources","target":"Microsoft.Network/networkInterfaces","message":"The move resources request does not contain all the dependent resources. Please check details for missing resource Ids.","details":[{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet"}]}

image

You can’t move a VM with a public IP address without moving the VNet it is attached to:

image

{"code":"MissingMoveDependentResources","target":"Microsoft.Network/publicIPAddresses","message":"The move resources request does not contain all the dependent resources. Please check details for missing resource Ids.","details":[{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet"}]}

image

You can’t move a VM without the attached public IP address that is assigned to it (you can move a Basic SKU IP address but not Standard):

image

{"code":"MissingMoveDependentResources","target":"Microsoft.Network/publicIPAddresses","message":"The move resources request does not contain all the dependent resources. Please check details for missing resource Ids.","details":[{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet"}]}

image

You can’t move a VM without the attached public IP address that is assigned to it:

image

{"code":"MissingMoveDependentResources","target":"Microsoft.Network/networkInterfaces","message":"The move resources request does not contain all the dependent resources. Please check details for missing resource Ids.","details":[{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/publicIPAddresses/Test-VM-pip-90462821bc164f12a624afccf204b93c"},{"code":"0","message":"/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet"}]}

image

You can’t move a VM with a dependent VNet that has a VNet Peering configured with another VNet regardless of whether the peered VNet is being moved as well:

image

{"code":"CannotMoveResource","target":"Microsoft.Network/virtualNetworks","message":"Cannot move one or more resources in the request. Please check details for information about each resource.","details":[{"code":"CannotMoveVnetDueToPeering","message":"Cannot move virtual network /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet because it's peered with other virtual networks: /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-NewVNet/providers/Microsoft.Network/virtualNetworks/Test-VNet2."}]}

image

You cannot move a VNet with App Service attached to it (Regional VNet Integration):

{"code":"CannotMoveResource","target":"Microsoft.Network/virtualNetworks","message":"Cannot move one or more resources in the request. Please check details for information about each resource.","details":[{"code":"CannotMoveResourceDueToReference","message":"Cannot move resource /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet since it references resource /subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/serverfarms/ASP-RGVMs-9e72, which does not support move or updating references after the move."}]}

image

The validation process will not throw an error if you attempt to move a Recovery Services Vault with backups enabled. However, you will need to move the virtual machines along with it and use the same target VM resource group name (as it was in the old subscription) to continue backups.

It is important to note that regardless of whether the virtual machine is configured to be backed up by the Recovery Services Vault again, you will be able to restore the backups carried over from the time when the RSV was in the old subscription.

Moving virtual machine from one subscription to another

Step #1 – Identify Dependencies for all components

Before starting any configuration, it is important to spend time identifying every component of the virtual machine that depends on a resource shared with other components that are not being moved. An example of this is if you have two virtual machines:

  • Test-VM1
  • Test-VM2

… and both of these VMs have NICs connected to the same VNet. As mentioned earlier, you cannot move a virtual machine without the VNet it is connected to, and moving a VNet means you would also need to move the other machines attached to that same VNet. When you need to move all VMs attached to the same VNet, you will also need to consolidate those VM resources into the same resource group. Finally, there are other components such as App Services, which can use Regional VNet Integration to attach to a VNet so that egress traffic is routed directly to the VNet instead of the internet, but which are not supported for a move while attached to a VNet. Leverage the information and diagrams provided by the Azure portal for planning (e.g. VNet > Monitoring > Diagram to identify resources connected to the VNet). Only proceed to the next step after all the dependent components have been identified.

Step #2 – Disable backup for virtual machine

Navigate to the backup configuration for the virtual machine and click on Stop backup to stop the backup:

image

There are two options for how the backup data is handled. Retain Backup Data will retain the backups until the retention period is reached, at which point the backups expire. Delete Backup Data will remove the backups immediately:

image

Successfully stopping the backup will disable the Stop backup button, enable the Resume backup button, and show the Last backup status as Warning (Backup disabled):

image
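If you prefer to script this step, backup protection can also be stopped while retaining the data using the Az.RecoveryServices module. A sketch follows; the vault name below is a placeholder since it is not shown in this example:

```powershell
# Locate the vault and the backup item for Test-VM
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "RG-VMs" -Name "<vault-name>"
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM `
    -FriendlyName "Test-VM" -VaultId $vault.ID
$item = Get-AzRecoveryServicesBackupItem -Container $container `
    -WorkloadType AzureVM -VaultId $vault.ID

# Stop protection; omitting -RemoveRecoveryPoints is the equivalent
# of choosing "Retain Backup Data" in the portal
Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID -Force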

Step #3 – Delete restore point collections

With the backup disabled for the virtual machine, proceed to delete the restore points from the vault by locating the resource group Azure creates when a Recovery Services Vault is created. The naming convention is:

AzureBackupRG_<RSV-region>_1

image

You won’t see the restore point collections until you select Show hidden types:

imageimage

Proceed to select the restore point collections and delete them (this will not delete the existing backups or prevent restoring from them):

image
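The restore point collections can also be enumerated and removed with PowerShell, which is convenient when multiple VMs are being moved. This sketch assumes the vault region is East US; adjust the hidden resource group name to match your region:

```powershell
# Restore point collections live in the hidden AzureBackupRG_<region>_1 resource group
Get-AzResource -ResourceGroupName "AzureBackupRG_eastus_1" `
    -ResourceType "Microsoft.Compute/restorePointCollections" |
    ForEach-Object {
        # Removing the collection does not delete the backups held in the vault
        Remove-AzResource -ResourceId $_.ResourceId -Force
    }
```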

Step #4 – Delete any VNet peerings

As the VM’s NIC will be associated with a VNet, any peerings configured on that VNet will need to be deleted. Note that you will need to do this even if the VNet is peered with another VNet that is being moved at the same time.
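The existing peerings can be listed and removed as follows. Keep in mind a peering has two sides, so the corresponding peering object on the remote VNet must be deleted as well:

```powershell
# List the peerings configured on the VNet being moved
Get-AzVirtualNetworkPeering -VirtualNetworkName "Test-VNet" -ResourceGroupName "RG-VMs"

# Delete each peering by name (repeat on the remote VNet for the other side)
Get-AzVirtualNetworkPeering -VirtualNetworkName "Test-VNet" -ResourceGroupName "RG-VMs" |
    ForEach-Object {
        Remove-AzVirtualNetworkPeering -Name $_.Name `
            -VirtualNetworkName "Test-VNet" -ResourceGroupName "RG-VMs" -Force
    }
```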

Step #5 – Consolidate the virtual machine and all of its dependent resources into one resource group

Ensure that all the components of the virtual machine, as well as its dependent components, are placed into one resource group so they can be selected together and moved.
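Moving a resource between resource groups within the same subscription uses the same Move-AzResource cmdlet as the cross-subscription move, just without a destination subscription. A sketch, using a hypothetical stray disk named Test-VM-datadisk in a hypothetical resource group RG-Other (neither appears in this test environment):

```powershell
# Hypothetical example: consolidate a stray data disk into RG-VMs
$disk = Get-AzResource -ResourceGroupName "RG-Other" -Name "Test-VM-datadisk" `
    -ResourceType "Microsoft.Compute/disks"

# Omitting -DestinationSubscriptionId keeps the move within the current subscription
Move-AzResource -DestinationResourceGroupName "RG-VMs" -ResourceId $disk.ResourceId
```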

Step #6 – Move virtual machine and all of its dependent resources to the destination subscription

Proceed to initiate the move of the resources to the destination subscription (you can move the RSV together with the resources as well), which will not cause an outage as the resources will continue to be reachable. This particular VM also has a Basic SKU public IP address, which allows for subscription transfers, so constantly pinging the IP address during the move will show that no replies are missed:

image

You will need to select the checkbox indicating the following to proceed with the move:

I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs

image
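The move itself can also be initiated from PowerShell by passing every dependent resource ID in a single call along with the destination subscription. A sketch, assuming the consolidated resource group contains only the resources being moved:

```powershell
# Collect every resource in the consolidated resource group and move them together
$resources = Get-AzResource -ResourceGroupName "RG-VMs"

Move-AzResource -DestinationSubscriptionId "22d77ef8-89c3-40c2-8730-e786114770a0" `
    -DestinationResourceGroupName "RG-VMs" `
    -ResourceId $resources.ResourceId
```

As with the portal, the cmdlet runs the validation first and will surface the same blocker errors shown earlier if any dependency is missing.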

Step #7 – Move Recovery Services Vault to destination subscription

If the Recovery Services Vault of the virtual machine was not moved with the VM, proceed to move it into the destination subscription:

image

image

Step #8 – Enable backups for virtual machine

If you place the virtual machine into a resource group in the target subscription that has the same name as the resource group in the source subscription, then you can simply click on the Resume backup button to re-enable backups with the same backup item object. If the migrated VM is in a resource group with a different name, then a new backup item will be configured.

image

Note that if you’ve placed the VM into a resource group in the target subscription that has the same name as the resource group in the source subscription, then attempting to add the VM to be backed up will display:

There are no virtual machines that can be backed up in this vault.

image

With the resources migrated to the new subscription, remember to update any applications and services that reference the resource IDs, as the URLs will have changed. The following lists the original and the updated resource IDs of the moved VM:

Virtual Machine:

/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/virtualMachines/Test-VM

/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Compute/virtualMachines/Test-VM

VM NIC:

/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/networkInterfaces/Test-VM-nic-9190f27cc6cc40a3bede740db168b0d3

/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Network/networkInterfaces/Test-VM-nic-9190f27cc6cc40a3bede740db168b0d3

VM Public IP with Basic SKU (basic SKU IP addresses can be moved but standard cannot):

/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/publicIPAddresses/Test-VM-pip-90462821bc164f12a624afccf204b93c

/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Network/publicIPAddresses/Test-VM-pip-90462821bc164f12a624afccf204b93c

VNet:

/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet

/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Network/virtualNetworks/Test-VNet

VM Disk:

/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611

/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Compute/disks/testvm-osdisk-20210424-103611
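After the move completes, the new resource IDs can be confirmed by switching context to the destination subscription:

```powershell
# Switch to the destination subscription and confirm the new resource IDs
Set-AzContext -Subscription "22d77ef8-89c3-40c2-8730-e786114770a0"
Get-AzResource -ResourceGroupName "RG-VMs" | Select-Object Name, ResourceId
```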

Resource tags will be retained across subscription moves:

Source Subscription:

image

Destination Subscription:

image

Hope this helps anyone who may be looking for more information about what the migration of VMs from one subscription to another would look like.

Citrix Virtual Apps and Desktops Machine Catalog displaying "Power State" as "Unknown"


Problem

You have a Citrix Virtual Apps and Desktops environment hosted with Citrix Cloud Connectors connecting to Citrix cloud and noticed that the management portal displays virtual machines in a machine catalog with the Power State as Unknown:

image

This appears to only affect machines hosted by a specific vCenter, because virtual desktops hosted on a different vCenter display the Power State properly. Monitoring the VDIs also indicates that users are not able to connect to them.

Solution

A quick search on the internet will only return the following Citrix KB, which refers to an on-premises deployment without Citrix Cloud Connectors:

VM's Power State Does Not Update And Shows As "Unknown" After vCenter Server Reboots
https://support.citrix.com/article/CTX238157

Although not stated in the KB, the issue can be resolved by toggling Maintenance mode off and on for the vCenter hosting connection as such:

image

Moving Azure App Service resources from one subscription to another subscription


As a follow up to my previous post:

Moving an Azure virtual machine and the Recovery Services Vault with its backups from one subscription to another
http://terenceluk.blogspot.com/2021/04/moving-azure-virtual-machine-and.html

I would like to continue to demonstrate and outline the process of moving App Service and Azure Functions from one subscription to another.

Microsoft Documentation

As always, I would like to provide the following links to the official Microsoft documentation and recommend that anyone undertaking this task read through them.

Move resources to a new resource group or subscription
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

Move operation support for resources
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources

Move guidance for App Service resources
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-limitations/app-service-move-limitations

Note that improvements to these cross-subscription move operations continue to be made, so keep in mind that the behavior demonstrated below was observed in April 2021. An example of a change since 2020 is that Microsoft.Web/Certificates can now be moved.

Different behavior during Subscription to Subscription Move

I’ve found that the subscription to subscription move operation behaves differently depending on when you run it.

image

One of them will run a validation check and then stop to allow you to manually initiate the move. The second will run the validation and, upon successfully completing it, will automatically initiate the move. There does not appear to be any consistency in which one is presented, but the GUI will be different, so it is best to assume that you will not get the option to initiate the move manually.

This is the GUI that would stop after validation and allow you to manually initiate the move:

image

This is the GUI that would automatically initiate the move upon a successful validation:

image

Subscription to Subscription Move Blockers (Validation errors)

You cannot move an App Service without the associated App Service Plan:

image

I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs

image

Attempting to migrate the App Service without its corresponding App Service Plan will fail with:

{"code":"ResourceMoveProviderValidationFailed","message":"Resource move validation failed. Please see details. Diagnostic information: timestamp '20210426T203719Z', subscription id 'b0957dbf-1cad-4e5e-9360-341ccc197953', tracking id '53ba79fa-d2cf-4e59-b40c-7fa8067d58f2', request correlation id '88065feb-7ee2-45cc-935d-560b90ad7d73'.","details":[{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/sites","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-VMs\",\"RG-VMs\",\" Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"}}],\"Innererror\":null}"}]}

image

To successfully migrate the App Service, select the corresponding App Service Plan:

Note that the interface below is the one that automatically moves the resources when the validation completes. You cannot stop once the validation starts, so proceed with caution to avoid unintended moves.

image
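In PowerShell terms, this means passing both the site and its serverfarm resource IDs to a single Move-AzResource call. A sketch using the resource names from this example:

```powershell
# Move the App Service together with its App Service Plan in one operation
$site = Get-AzResource -ResourceGroupName "RG-VMs" -Name "Test-App-Ser" `
    -ResourceType "Microsoft.Web/sites"
$plan = Get-AzResource -ResourceGroupName "RG-VMs" -Name "ASP-RGVMs-806a" `
    -ResourceType "Microsoft.Web/serverFarms"

Move-AzResource -DestinationSubscriptionId "22d77ef8-89c3-40c2-8730-e786114770a0" `
    -DestinationResourceGroupName "RG-VMs" `
    -ResourceId $site.ResourceId, $plan.ResourceId
```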

----------------------------------------------------------------------------------------------------------------------------

You cannot move App Service resources if they have been moved out of the resource group they were originally created in. The example provided below displays an error message because the App Service and its resources were created in RG-VMs but later moved to RG-SQLDatabases. To correct this issue, simply move the resources back to the original resource group, then move them to the target subscription:

{"code":"ResourceMoveProviderValidationFailed","message":"Resource move validation failed. Please see details. Diagnostic information: timestamp '20210426T212111Z', subscription id 'b0957dbf-1cad-4e5e-9360-341ccc197953', tracking id 'b993a16b-4362-4997-b494-35df505256ab', request correlation id '28624067-966e-4527-a5a4-4461e48e4381'.","details":[{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/serverFarms","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-SQLDatabases' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-SQLDatabases/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-SQLDatabases' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites). 
This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-SQLDatabases/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-SQLDatabases\",\"Rg-Dest\",\" Test-App-Ser (Microsoft.Web/sites). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. 
Move it back to respective hosting resource group\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-SQLDatabases/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-SQLDatabases' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms). This resource is located in resource group 'RG-SQLDatabases', but hosted in the resource group 'RG-VMs'. This may be a result of prior move operations. Move it back to respective hosting resource group\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-SQLDatabases/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"}}],\"Innererror\":null}"}]}

image

One of the questions I’ve been asked in the past is whether an App Service can be moved to another resource group and then have the original resource group deleted. I’ve tested this and it appears this is not possible.

----------------------------------------------------------------------------------------------------------------------------

You cannot move App Service resources into another subscription’s resource group that already contains App Service (Microsoft.Web) resources, as the following error message will be displayed:

{"code":"ResourceMoveProviderValidationFailed","message":"Resource move validation failed. Please see details. Diagnostic information: timestamp '20210426T213019Z', subscription id 'b0957dbf-1cad-4e5e-9360-341ccc197953', tracking id '1d510c1c-d21c-423b-8fc9-6066aae13dec', request correlation id 'b06e6675-4e1c-4de9-9796-fe99d874187e'.","details":[{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/serverFarms","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-VMs\",\"RG-VMs\",\" Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-806a (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-App-Ser/troubleshoot\"}}],\"Innererror\":null}"}]}

image

Note that you cannot simply move the App Service resources out of the target subscription’s destination resource group and then repeat the move of the App Service resources from the source subscription, because an App Service moved out of a resource group will still be hosted in that resource group. The only way around this is to delete the App Service in the target subscription’s target resource group.

image

----------------------------------------------------------------------------------------------------------------------------

You cannot move an Azure Function in a resource group that contains another App Service without moving them together. You need to move all Microsoft.Web resources in the resource group of the Azure Function or App Service that is being moved.

image

{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/serverFarms","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. 
Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-VMs\",\"RG-VMs\",\" Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"}}],\"Innererror\":null}"}

image

Note how the resource group contains other App Service and Azure Function resources that need to be moved together.

image
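The full list of Microsoft.Web resources that must be moved together is buried inside JSON-within-JSON in the error payload, so a few lines of scripting can pull it out rather than reading it by hand. The following is a rough sketch using only the Python standard library against a trimmed-down, partly hypothetical sample of the payload:

```python
import json
import re

# Trimmed-down sample of the move-validation error payload; the inner
# "message" is itself a JSON document serialized as a string, exactly as
# Azure returns it. The text here is abbreviated for illustration.
error = {
    "code": "ResourceMoveProviderValidationFailed",
    "details": [{
        "code": "ResourceMoveProviderValidationFailed",
        "target": "Microsoft.Web/serverFarms",
        "message": json.dumps({
            "Code": "BadRequest",
            "Message": "Please select all the Microsoft.Web resources from "
                       "'RG-VMs' resource group for cross-subscription migration. "
                       "Here is the list of resources you have to move together:"
                       " Test-Function-App001 (Microsoft.Web/sites)\r\n"
                       " Test-App-Ser (Microsoft.Web/sites)\r\n"
                       " ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\r\n"
                       " ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\r\n"
        })
    }]
}

def must_move_together(error):
    """Extract (name, type) pairs from the nested validation message."""
    inner = json.loads(error["details"][0]["message"])
    return re.findall(r"(\S+) \((Microsoft\.Web/\w+)\)", inner["Message"])

for name, rtype in must_move_together(error):
    print(f"{name}  ->  {rtype}")
```

Running this against the real payload lists each resource name with its type, which is exactly the set you need to tick off in the portal’s move blade.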

Failing to select the Test-App-Ser App Service will continue to fail the validation:

image

Validation will be successful when the Test-App-Ser App Service is selected for move:

image

----------------------------------------------------------------------------------------------------------------------------

You cannot move an App Service that has an uploaded but unbound SSL certificate to another subscription:

The following App Service currently does not have a certificate uploaded:

image

Here I have uploaded a certificate via the TLS/SSL settings:

image

Notice how attempting to move the App Service with the certificate uploaded but not bound will fail the validation checks:

image

{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/serverFarms","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-VMs\",\"RG-VMs\",\" Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'RG-VMs' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/22d77ef8-89c3-40c2-8730-e786114770a0/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"}}],\"Innererror\":null}"}

image

In order to move the App Service, we will need to move the certificate as well, but the certificate object is not displayed in the resource group because it is a hidden resource. Begin by obtaining the certificate name by opening its properties in the TLS/SSL settings:

image

Navigate to the resource group that contains the App Service, enable the Show hidden types option and locate the microsoft.web/certificates object with the name that matches what is displayed in the Certificate Details:

image
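If you prefer to find hidden certificate objects outside the portal, the resource list that a tool such as the Azure CLI returns includes hidden types by default, so you can filter on the type. The following is a small sketch against hypothetical sample data shaped like that JSON output:

```python
# Hypothetical sample shaped like `az resource list -g RG-VMs -o json` output;
# the certificate name matches the one shown in the validation error above.
resources = [
    {"name": "Test-App-Ser", "type": "Microsoft.Web/sites"},
    {"name": "ASP-RGVMs-ba9d", "type": "Microsoft.Web/serverFarms"},
    {"name": "522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace",
     "type": "Microsoft.Web/certificates"},  # hidden in the portal by default
]

def hidden_certificates(resources):
    """Return certificate resources that the portal hides unless the
    'Show hidden types' option is enabled."""
    return [r["name"] for r in resources
            if r["type"].lower() == "microsoft.web/certificates"]

print(hidden_certificates(resources))
```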

Note that you can click into the certificate object to confirm its name and other properties:

image

With the certificate selected, you can now proceed to migrate the App Service resource to another subscription.

Note that I selected the Azure Function as well because you have to move all Microsoft.Web resources within the resource group.

image

The validation should complete successfully so the subscription move can start:

image

Note that you cannot move a certificate on its own even if it is not bound. The following is the error displayed if a certificate is moved on its own:

{"code":"ResourceMoveProviderValidationFailed","message":"Resource move validation failed. Please see details. Diagnostic information: timestamp '20210427T010342Z', subscription id 'b0957dbf-1cad-4e5e-9360-341ccc197953', tracking id 'a40dbd4b-28af-4f44-b150-e6b269c10592', request correlation id '2af61d8a-ab6a-476c-b024-e191d758009c'.","details":[{"code":"ResourceMoveProviderValidationFailed","target":"Microsoft.Web/certificates","message":"{\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"Target\":null,\"Details\":[{\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"},{\"Code\":\"BadRequest\"},{\"ErrorEntity\":{\"ExtendedCode\":\"52036\",\"MessageTemplate\":\"Please select all the Microsoft.Web resources from '{0}' resource group for cross-subscription migration. Also, please ensure destination resource group '{1}' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together:{2}. Please check this link for more information: {3}\",\"Parameters\":[\"RG-VMs\",\"Rg-Dest\",\" Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n\",\"https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\",\"0\"],\"Code\":\"BadRequest\",\"Message\":\"Please select all the Microsoft.Web resources from 'RG-VMs' resource group for cross-subscription migration. Also, please ensure destination resource group 'Rg-Dest' doesn't have any Microsoft.Web resources before move operation. Here is the list of resources you have to move together: Test-Function-App001 (Microsoft.Web/sites)\\r\\n Test-App-Ser (Microsoft.Web/sites)\\r\\n ASP-RGVMs-ba9d (Microsoft.Web/serverFarms)\\r\\n ASP-RGVMs-bc50 (Microsoft.Web/serverFarms)\\r\\n 522312DA8771ED22F143DC297BF75F9C8EAF171E-RG-VMs-CanadaCentralwebspace (Microsoft.Web/certificates)\\r\\n. 
Please check this link for more information: https://portal.azure.com/?websitesextension_ext=asd.featurePath%3Ddetectors%2FMigration#resource/subscriptions/b0957dbf-1cad-4e5e-9360-341ccc197953/resourceGroups/RG-VMs/providers/Microsoft.Web/sites/Test-Function-App001/troubleshoot\"}}],\"Innererror\":null}"}]}

image

Subscription to Subscription Move Possibilities

You can move App Service resources together with other types of resources. The following demonstrates how an App Service, Function App, VM, VNet, storage account, and Recovery Services vault (RSV) can all be moved in one operation:

image
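The same mixed move can also be scripted instead of clicked through the portal. The sketch below simply assembles the equivalent Azure CLI `az resource move` invocation; the subscription IDs and resource names are hypothetical placeholders, and the assembled command is printed rather than executed:

```python
# Assemble an `az resource move` call for a mixed set of resources.
# All IDs and names below are hypothetical placeholders.
src_sub = "b0957dbf-1cad-4e5e-9360-341ccc197953"
dest_sub = "22d77ef8-89c3-40c2-8730-e786114770a0"

ids = [
    f"/subscriptions/{src_sub}/resourceGroups/RG-VMs/providers/{rtype}/{name}"
    for rtype, name in [
        ("Microsoft.Web/sites", "Test-App-Ser"),
        ("Microsoft.Web/sites", "Test-Function-App001"),
        ("Microsoft.Compute/virtualMachines", "Test-VM01"),
        ("Microsoft.Network/virtualNetworks", "Test-VNet"),
        ("Microsoft.Storage/storageAccounts", "teststorage001"),
        ("Microsoft.RecoveryServices/vaults", "Test-RSV"),
    ]
]

command = (
    "az resource move "
    f"--destination-subscription-id {dest_sub} "
    "--destination-group RG-VMs "
    "--ids " + " ".join(ids)
)
print(command)
```

Passing every resource ID in a single `--ids` list mirrors selecting them all together in the portal, which is what the Microsoft.Web validation requires.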

--------------------------------------------------------------------------------------------------------------------------- 

You can move an App Service with a custom domain configured and an SSL certificate bound as long as all the resources are moved together:

image

image

image

----------------------------------------------------------------------------------------------------------------------------

Having a VNet configured for the App Service (VNet Integration) does not stop you from migrating the App Service to another subscription. This essentially means that you can move the App Service and App Service plan without removing VNet integration even if the VNet is still in the source subscription. I’ve also noticed that it is possible to assign a VNet and subnet from another subscription (cross-subscription) when configuring VNet integration for an App Service.

With the above said, I have not been able to test whether VNet integration would continue to work so if anyone reading this post has experience with this, please leave a comment.

image

Hope this helps anyone who may be looking for more information about what the migration of App Services and other related resources from one subscription to another would look like.


Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE) - Part 1 of 2


Security has become one of the most important requirements for clients, as no company wants to be the next front-page news story about a data breach. I’ve been fortunate to have worked with clients who passed on the application forms they received when obtaining cyber and data breach insurance, and every iteration dives deeper into how well their data is protected. This has led to many conversations explaining to technical and non-technical audiences how Azure protects their data. I have to admit that I’m not an encryption expert, but I have made a conscious effort to keep up with the technologies available in order to be a better solutions architect, so this post serves to provide an overview of the various options for how data can be encrypted in Azure.

Two of the types of encryption you’ll see most often while searching through Azure documentation are:

  1. Server-side Encryption (SSE)
  2. Azure Disk Encryption (ADE)

For the purpose of narrowing the scope, I will discuss how these apply to IaaS virtual machines in Azure and leave Storage Accounts for a future post. Due to the amount of content required for SSE and ADE, this post will be separated into two parts.

Server-side Encryption (SSE) – Part #1
Azure Disk Encryption (ADE) – Part #2

Let me begin by providing the following links to the documentation provided by Microsoft:

Server-side encryption of Azure Disk Storage
https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption

Azure Data Encryption at rest
https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest

Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks
https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal

What is encryption at rest?
https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest#what-is-encryption-at-rest

Azure Storage encryption for data at rest
https://docs.microsoft.com/en-us/azure/storage/common/storage-service-encryption

Preview: Server-side encryption with customer-managed keys for Azure Managed Disks
https://azure.microsoft.com/en-ca/blog/preview-server-side-encryption-with-customer-managed-keys-for-azure-managed-disks/

Data encryption models
https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-models

I’ve found all of the documentation very informative and highly suggest the read, but it is a bit difficult to tie all of the information together and impractical to send these links to a client, so the following is usually how I present the information, either through a PowerPoint or a discussion with Visio diagrams.

Azure Server-side Encryption (SSE)

By default, managed disks are encrypted with Azure Storage encryption, which uses server-side encryption (SSE) with a platform-managed key to protect the data on OS and data disks. The data on the disks is encrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Microsoft also clearly states that Azure Storage encryption does not impact the performance of managed disks. The one note worth calling out is that temporary disks are not managed disks and are therefore not encrypted by SSE unless you enable encryption at host.

What SSE provides is encryption of data at rest: if someone were to successfully break into an Azure datacenter and physically take the hard drives, they would not be able to read them upon attaching them to another computer because they would not have the encryption key. SSE is composed of several components, and there are two choices when determining how encryption keys are managed. The two types are:

  1. Platform Managed Keys (PMK)
  2. Customer Managed Keys (CMK)

Platform Managed Keys (PMK)

PMK is essentially using keys that Microsoft manages for the encryption. This is the default offering and setting when you create a managed disk. When you navigate to a virtual machine, click on the Disks of the VM, you will notice that the Encryption header states: SSE with PMK

image

Hovering over the information icon will display the following:

SSE with PMK is server-side encryption with a platform-managed key. This is enabled by default on all managed disks. SSE with CMK is server-side encryption with a customer-managed key. ADE is Azure disk encryption.

image

For many organizations, the essential requirement is to ensure that the data is encrypted whenever it is at rest. Server-side encryption using service-managed keys enables this model by allowing customers to mark the specific resource (Storage Account, SQL DB, etc.) for encryption and leaving all key management aspects such as key issuance, rotation, and backup to Microsoft. Most Azure services that support encryption at rest typically support this model of offloading the management of the encryption keys to Azure. The Azure resource provider creates the keys, places them in secure storage, and retrieves them when needed. This means that the service has full access to the keys and full control over the credential lifecycle management.

The following diagram provides a visualization of the components of how a managed disk is encrypted at rest with PMK:

image

Let’s have a more detailed look at each component:

Managed Disk

image

Microsoft recommends managed disks for deployments over the legacy unmanaged disks. The difference between the two is that unmanaged disks are stored in a storage account as page blobs that hold one or more VHDs. This is similar to the concept of a SAN with aggregates/volumes and LUNs in VMware, but it also means the Azure administrator would need to continually assess how many disks are in the storage account, how performance is affected by each disk, and many other aspects. Managed disks abstract these components away so the administrator doesn’t need to place the disks into a storage account and manage them manually. Unmanaged disks arguably provide the administrator more control, but the overhead they create can be laborious.

Key Hierarchy

image

More than one encryption key is used to encrypt managed disks at rest. The Key Encryption Key (KEK) is stored in a Microsoft key store (equivalent to an Azure Key Vault) and is used to encrypt/decrypt the Data Encryption Key (DEK) that encrypts the actual data. The reason there are two keys in this process is that having to continuously access the Microsoft key store to retrieve the KEK would be very inefficient for large volumes of data operations. Using a DEK to provide service-local access to encryption keys is more efficient for bulk encryption and decryption and also allows for stronger encryption and better performance. Having different DEKs encrypt different data reduces the risk of having a key compromised and the cost/effort of re-encrypting with a new key.

Data Encryption Keys (DEK)

The DEK is a symmetric AES-256 key used to encrypt a partition or block of data, and the foundational Azure resource providers store the Data Encryption Keys in a store that is close to the data and quickly accessible. A single resource can have many partitions and many DEKs, which makes cryptanalysis attacks more difficult.

image

Another advantage of having multiple DEKs encrypting blocks of data is that when a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key.

Key Encryption Key (KEK)

image

The purpose of a KEK is to encrypt the DEK. Because the KEK is stored in, and never leaves, the key vault, the DEK can be securely encrypted and controlled by the centrally managed key store where the KEK resides. The entity that has access to the KEK may be different than the entity that requires the DEK. An entity may broker access to the DEK to limit the access of each DEK to a specific partition. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which all DEKs can be deleted: deleting the KEK renders them unusable.
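The KEK/DEK relationship described above can be sketched in a few lines of Python. This is strictly a toy illustration: a SHA-256-based XOR keystream stands in for AES-256 and has none of its security properties, and nothing here reflects Azure’s actual implementation.

```python
import os
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher standing in for AES-256 -- illustration only.
    XOR is symmetric, so the same call encrypts and decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The KEK lives in the key store (Key Vault); it never touches the data.
kek = os.urandom(32)

# 2. Each partition of data gets its own DEK...
blocks = [b"partition-0 data", b"partition-1 data"]
deks = [os.urandom(32) for _ in blocks]

# 3. ...the DEK encrypts the data, and the KEK wraps (encrypts) the DEK.
ciphertexts = [toy_cipher(dek, blk) for dek, blk in zip(deks, blocks)]
wrapped_deks = [toy_cipher(kek, dek) for dek in deks]

# To read partition 1: unwrap its DEK with the KEK, then decrypt the block.
dek1 = toy_cipher(kek, wrapped_deks[1])
assert toy_cipher(dek1, ciphertexts[1]) == blocks[1]

# Rotating the DEK for partition 0 would re-encrypt only that one block;
# losing the KEK makes every wrapped DEK, and therefore all data, unrecoverable.
```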

More information about the key hierarchy can be found at:

Azure Data Encryption at rest
https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest

Customer Managed Keys (CMK)

The architecture for CMK forks out into two paths:

  1. Server-side encryption using customer-managed keys in Azure Key Vault
  2. Server-side encryption using customer-managed keys in customer-controlled hardware

Server-side encryption using customer-managed keys in Azure Key Vault

Server-side encryption using customer-managed keys in Azure Key Vault is essentially the same as server-side encryption using service-managed keys (PMK); the difference is that the keys are stored in a customer-managed key vault rather than a Microsoft-managed key store. This option typically stores the root Key Encryption Key in the customer-managed Azure Key Vault and stores the encrypted Data Encryption Key in an internal location closer to the data. CMK offers organizations that have a requirement to manage the encryption keys themselves the ability to bring their own keys to Key Vault (BYOK – Bring Your Own Key), or generate new ones, and use them to encrypt the desired resources. While the Resource Provider performs the encryption and decryption operations, it uses the configured key encryption key as the root key for all encryption operations.

As the key encryption key is used to encrypt and decrypt the data encryption key that in turn encrypts and decrypts the data, loss of the KEK directly means loss of data, since there would be no way to decrypt the DEKs. The recommended best practice is that KEKs should never be deleted. Rather, they should be backed up whenever newly created or rotated. The customer was required to rotate their own keys until recently, when a new Automatic key rotation of customer-managed keys (preview) feature was made available, as stated here: https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption#automatic-key-rotation-of-customer-managed-keys-preview and here: https://azure.microsoft.com/en-ca/updates/public-preview-automatic-key-rotation-of-customermanaged-keys-for-encrypting-azure-managed-disks/. The feature is still limited to a small set of regions at the time of this writing (May 2021):

Automatic key rotation is in preview and only available in the following regions:

  • East US
  • East US 2
  • South Central US
  • West US
  • West US 2
  • North Europe
  • West Europe
  • France Central

Additionally, soft-delete and purge protection must be enabled on any vault storing key encryption keys to protect against accidental or malicious cryptographic erasure. Instead of deleting a key, it is recommended to set enabled to false on the key encryption key, which disables the ability to use a retired, out-of-rotation key.

An additional resource that is required when using CMK is the Disk Encryption Set, a new resource introduced to simplify key management for managed disks: managed disks are placed into a group whose data is encrypted by DEKs, the DEKs are encrypted by KEKs, and the KEKs are stored in a customer-managed Azure Key Vault. You can place newly created disks into a Disk Encryption Set, or you can move existing ones in provided that the disk is detached from a VM or the VM it is attached to is deallocated. Those who have attempted to create a managed disk and set the Encryption type to Encryption at-rest with a customer-managed key will see that another drop-down list named Disk encryption set is presented:

image

Note the warning message that is presented to indicate you can’t revert back to PMK once CMK is used for this disk:

Once a customer-managed-key is used, you can’t change the selection back to a platform-managed key.

The following is also displayed if you hover over the information icon:

A disk encryption set stores the customer key that a disk or snapshot will use for encrypting its data. You must choose an existing disk encryption set during this step. Disk encryption sets require access to key vault and keys.

image

You can’t create a disk encryption set within the previous managed disk wizard so you’ll need to create one beforehand:

image

image

With disk encryption sets described, the following diagram provides a visualization of the components of how a managed disk is encrypted at rest with CMK:

image

Authorization and authentication for the customer-managed keys are provided by Azure AD, which grants permissions to use the keys stored in Azure Key Vault. Azure Active Directory accounts can be granted permissions to manage or access keys for encryption-at-rest encryption and decryption operations.

Microsoft’s official documentation about how the CMK process works is presented here: https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption#full-control-of-your-keys

As stated in the documentation above, the following are details about access, disabling keys and auditing:

  • You grant access to managed disks in the Azure Key Vault to use keys for encrypting and decrypting the DEK used to encrypt the disks in the disk encryption set
  • Keys can be disabled and access to managed disks can be revoked at any time
  • Encryption key usage in the Azure Key Vault can be audited (what is accessing the managed disks)

Disabling or deleting keys:

  • Working with premium SSDs, standard SSDs, and standard HDDs – When the keys used to encrypt these disks are disabled, any virtual machines with the disks using the key will automatically shutdown and become unusable unless the key is enabled again or a new key is assigned.
  • Working with ultra disks - When the keys used to encrypt these disks are disabled, any virtual machines with the disks using the key will continue to run until it is deallocated or restarted as either of those operation will not allow the virtual machines to come back online until the key is enabled again or a new key is assigned.

Microsoft provides a great diagram that shows how managed disks use Azure AD and Azure Key vault to make requests using customer-managed key:

image

I won’t attempt to rewrite the official document that describes the above diagram as I find it quite clear:

  1. An Azure Key Vault administrator creates key vault resources.
  2. The key vault admin either imports their RSA keys to Key Vault or generate new RSA keys in Key Vault.
  3. That administrator creates an instance of Disk Encryption Set resource, specifying an Azure Key Vault ID and a key URL. Disk Encryption Set is a new resource introduced for simplifying the key management for managed disks.
  4. When a disk encryption set is created, a system-assigned managed identity is created in Azure Active Directory (AD) and associated with the disk encryption set.
  5. The Azure key vault administrator then grants the managed identity permission to perform operations in the key vault.
  6. A VM user creates disks by associating them with the disk encryption set. The VM user can also enable server-side encryption with customer-managed keys for existing resources by associating them with the disk encryption set.
  7. Managed disks use the managed identity to send requests to the Azure Key Vault.
  8. For reading or writing data, managed disks sends requests to Azure Key Vault to encrypt (wrap) and decrypt (unwrap) the data encryption key in order to perform encryption and decryption of the data.

To revoke access to customer-managed keys, see Azure Key Vault PowerShell (https://docs.microsoft.com/en-us/powershell/module/azurerm.keyvault/?view=azurermps-6.13.0) and Azure Key Vault CLI (https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest). Revoking access effectively blocks access to all data in the storage account, as the encryption key is inaccessible by Azure Storage.

As mentioned earlier in the post, CMK traditionally required the customer to rotate their own keys, but a new feature currently in preview provides this service:

Automatic key rotation of customer-managed keys (preview)

You can choose to enable automatic key rotation to the latest key version. A disk references a key via its disk encryption set. When you enable automatic rotation for a disk encryption set, the system will automatically update all managed disks, snapshots, and images referencing the disk encryption set to use the new version of the key within one hour. The feature is currently available in limited regions in preview. For regional availability, see the SupportedRegions (https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption#supported-regions) section.
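
To illustrate, enabling the preview on an existing disk encryption set could look like the following Azure CLI sketch (the set and resource group names are placeholders):

```shell
# Opt the disk encryption set into automatic rotation to the
# latest key version; associated disks, snapshots and images
# are updated within approximately one hour
az disk-encryption-set update \
  --name MyDiskEncryptionSet \
  --resource-group MyResourceGroup \
  --enable-auto-key-rotation true
```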

Server-side encryption using customer-managed keys in customer-controlled hardware

Server-side encryption using customer-managed keys in customer-controlled hardware is fairly self-explanatory: the customer does not use Azure Key Vault to store their keys but rather a solution outside of Microsoft Azure (e.g. an on-premises data center or another cloud). I won’t attempt to rewrite the documentation so I’ll simply paste it here:

https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-models#server-side-encryption-using-customer-managed-keys-in-customer-controlled-hardware

Some Azure services enable the Host Your Own Key (HYOK) key management model. This management mode is useful in scenarios where there is a need to encrypt the data at rest and manage the keys in a proprietary repository outside of Microsoft's control. In this model, the service must retrieve the key from an external site. Performance and availability guarantees are impacted, and configuration is more complex. Additionally, since the service does have access to the DEK during the encryption and decryption operations the overall security guarantees of this model are similar to when the keys are customer-managed in Azure Key Vault. As a result, this model is not appropriate for most organizations unless they have specific key management requirements. Due to these limitations, most Azure services do not support server-side encryption using server-managed keys in customer-controlled hardware.

Encryption at host - End-to-end encryption for your VM data

A new feature that is currently being rolled out in Azure (it isn’t yet available through my test subscription) is Encryption at host. I find this to be an exciting feature because it would eliminate the need for Azure Disk Encryption (ADE), which I will be writing more about in part #2. What Encryption at host does is essentially provide end-to-end encryption between the disk at rest and the host the disk is allocated to and run on. As mentioned earlier, Azure Server-side Encryption (SSE) encrypts data at rest but not in transit, not when running on a host, and not the temporary disk a VM is configured with.

image

With Encryption at host, traffic between the host and the storage service and all the disks including the temporary disk will be encrypted. Lastly, the most common question I get asked when I start raving about this is whether it will impact the performance of the host or VM, and the answer from Microsoft’s documentation is as follows: Encryption at host does not use your VM's CPU and doesn't impact your VM's performance.

Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/disk-encryption#encryption-at-host---end-to-end-encryption-for-your-vm-data

The following is a diagram outlining the process:

  • Whatever is used to encrypt the disk encryption set at rest (PMK or CMK) will be used to encrypt the in transit / in flight data from the disk to the host
  • The same PMK or CMK used for the at rest disk will be used to encrypt the cache disk
  • The temp disk will always be encrypted by a PMK
image

Note that as great as the feature sounds, there are restrictions, which are as follows (as of mid-2021):

  • Does not support ultra disks.
  • Cannot be enabled if Azure Disk Encryption (guest-VM encryption using BitLocker/DM-Crypt) is enabled on your VMs/virtual machine scale sets.
  • Azure Disk Encryption cannot be enabled on disks that have encryption at host enabled.
  • The encryption can be enabled on existing virtual machine scale sets. However, only new VMs created after enabling the encryption are automatically encrypted.
  • Existing VMs must be deallocated and reallocated in order to be encrypted.
  • Supports ephemeral OS disks but only with platform-managed keys.
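
If the feature is available in your subscription, enabling it involves a one-time feature registration and can then be set per VM. The following Azure CLI sketch uses placeholder names, image and size values:

```shell
# One-time registration of the EncryptionAtHost feature for the subscription
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
az provider register --namespace Microsoft.Compute

# Create a VM with encryption at host enabled
az vm create \
  --resource-group MyResourceGroup \
  --name MyVM \
  --image Win2019Datacenter \
  --size Standard_DS2_v2 \
  --encryption-at-host true
```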

Double encryption at rest

The last SSE encryption feature is Double encryption at rest. Highly security-sensitive organizations that are concerned about the risk associated with any particular encryption algorithm, implementation, or key being compromised can opt for an additional layer of encryption using a different encryption algorithm/mode at the infrastructure layer using platform-managed encryption keys. This new layer can be applied to persisted OS and data disks, snapshots, and images, all of which will be encrypted at rest with double encryption. The following is a diagram depicting the double encryption using both CMK and PMK:

image

With all the theory out of the way, let’s proceed onto demoing how all this looks during the configuration.

Demo - Platform Managed Keys (PMK)

As mentioned earlier in this post, SSE with PMK is turned on by default for managed disks:

image

Attempting to create a new data disk for a VM will automatically turn on PMK and indicate that a Disk encryption set is not required:

image

Demo - Customer Managed Keys (CMK)

To set up SSE with CMK, begin by creating an Azure Key Vault that will store the keys:

image

Depending on what this vault is used for, you would enable the following:

  • Azure Virtual Machines for deployment
  • Azure Resource Manager for template deployment
  • Azure Disk Encryption for volume encryption

image

image

For the purpose of this demonstration, I will enable all of them as I will be using this to demonstrate ADE later on as well.

image

The Permission model defaults to Vault access policy; Azure role-based access control (RBAC) was introduced later, but for the purpose of this example I will leave it as Vault access policy. More information about using Azure RBAC can be found here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-key-vault

Proceed to create the vault:

image

The vault should be created fairly quickly and once complete, proceed into the vault, select Keys and then Generate/Import to create a new key used for SSE with CMK:

image

This is where we can generate keys, import keys, or restore keys from a backup:

image

Proceed to generate a key with the default Key type as RSA and RSA key size as 2048:

image
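
The same key can also be generated from the Azure CLI; the vault and key names below are examples only:

```shell
# Generate a 2048-bit RSA key in the vault for use with SSE + CMK
az keyvault key create \
  --vault-name MyKeyVault \
  --name MySSEKey \
  --kty RSA \
  --size 2048
```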

The new key should be created fairly quickly:

image

Proceed to set up the disk encryption set:

image
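
For reference, the equivalent CLI steps for creating the disk encryption set could look roughly like this (vault, key, and group names are placeholders):

```shell
# Capture the vault's resource ID and the key's URL (kid)
vaultId=$(az keyvault show --name MyKeyVault --query id --output tsv)
keyUrl=$(az keyvault key show --vault-name MyKeyVault --name MySSEKey \
  --query key.kid --output tsv)

# Create the disk encryption set referencing the vault and key;
# this also creates its system-assigned managed identity
az disk-encryption-set create \
  --name MyDiskEncryptionSet \
  --resource-group MyResourceGroup \
  --key-url "$keyUrl" \
  --source-vault "$vaultId"
```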

----------------------------------------------------------------------------------------------------------------------------

Note that if the account attempting to create the disk encryption set does not have Owner permissions to the subscription, the following error would be thrown:

[ForbiddenByRbac (Forbidden)] Caller is not authorized to perform action on resource. If role assignments, deny assignments or role definitions were changed recently, please observe propagation time. Caller: name=KeyVault/ManagementPlane;appid=…

image

To display the error above, I removed my account as an Owner to the subscription.

----------------------------------------------------------------------------------------------------------------------------

image

The disk encryption set should be created fairly quickly but notice the banner with the message:

To associate a disk, image, or snapshot with this disk encryption set, you must grant permissions to the key vault Test-SSE-CMK:

image

Make sure you click on the banner so the permissions for the disk encryption set are configured in the Access policies of the key vault:

image

The following are the permissions the above process places into the Azure Key vault:

Key Management Operations: Get
Cryptographic Operations: Unwrap Key, Wrap Key

image

image
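
These same permissions can be granted from the CLI by pointing set-policy at the disk encryption set's managed identity (a sketch with placeholder names):

```shell
# Fetch the managed identity created for the disk encryption set
identityId=$(az disk-encryption-set show \
  --name MyDiskEncryptionSet \
  --resource-group MyResourceGroup \
  --query identity.principalId --output tsv)

# Grant it Get plus Wrap Key / Unwrap Key on the key vault
az keyvault set-policy \
  --name MyKeyVault \
  --object-id "$identityId" \
  --key-permissions get wrapKey unwrapKey
```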

With the disk encryption set created and permissions to the key vault granted, we can either create a new VM or move an existing disk into it. Let’s start with creating a new VM:

image

Once the virtual machine has been created, proceed to view the properties of the OS disk and you should see SSE with CMK listed:

image

From here, you can add an additional data disk to a VM and turn on SSE with CMK at the same time.

However, if you want to turn on SSE with CMK on an existing VM’s OS disk, that VM will need to be stopped and deallocated.
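
As a rough sketch, the deallocate-and-associate sequence for an existing OS disk could be scripted with the Azure CLI as follows (the VM, disk, and disk encryption set names are placeholders):

```shell
# The VM must be stopped and deallocated before its disk can be re-keyed
az vm deallocate --resource-group MyResourceGroup --name MyVM

# Associate the OS disk with the disk encryption set (SSE with CMK)
az disk update \
  --name MyVM_OsDisk \
  --resource-group MyResourceGroup \
  --disk-encryption-set MyDiskEncryptionSet

# Start the VM back up
az vm start --resource-group MyResourceGroup --name MyVM
```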

Demo – Encryption at host

My Azure subscription unfortunately does not have encryption at host available, so I will refer you to the following documentation:

Use the Azure portal to enable end-to-end encryption using encryption at host

https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-host-based-encryption-portal

----------------------------------------------------------------------------------------------------------------------------

I hope this post has been informative and provides the reader an idea of how Azure Server-side Encryption (SSE) works. Microsoft has provided plenty of documents on this topic but I’ve found that information can be scattered so I hope I’ve been able to capture the important sections to combine into this post.

There was quite a bit of writing involved so I will separate Azure Disk Encryption (ADE) to my next post.

Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE) - Part 2 of 2


As a follow up to my previous post:

Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE) - Part 1 of 2
http://terenceluk.blogspot.com/2021/05/azure-server-side-encryption-sse-and.html

Where I wrote about Azure Server-side Encryption (SSE), this post will be dedicated to Azure Disk Encryption (ADE).

As always, I would like to provide links to the Microsoft documentation and highly suggest reading them:

Azure Disk Encryption for virtual machines and virtual machine scale sets
https://docs.microsoft.com/en-us/azure/security/fundamentals/azure-disk-encryption-vms-vmss

Azure Disk Encryption for Windows VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-overview

Azure Disk Encryption for Linux VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disk-encryption-overview

Create and configure a key vault for Azure Disk Encryption on a Windows VM
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-key-vault

Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release) for Linux VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disk-encryption-key-vault-aad

Given that SSE protects data at rest, it doesn’t provide protection when the data is in use: a virtualization host cannot read an encrypted VHD when attempting to run it, so upon starting a virtual machine, the corresponding OS and any data disks will be unencrypted. The following diagram depicts this:

image

Before end-to-end encryption using encryption at host was available, Azure Disk Encryption could be used to add a layer of protection against this risk. Azure Disk Encryption, also known as ADE, allows the disk to be encrypted at the operating system level. Furthermore, having ADE encrypt the disks at the OS level prevents any disks that are downloaded from Azure from being accessible.

ADE works by enabling encryption at the operating system level, leveraging Windows or Linux native encryption capabilities. The two operating systems and the respective encryption technologies through which Azure provides ADE are:

  1. Windows with BitLocker
  2. Linux with DM-Crypt

Azure Disk Encryption (ADE) is resilient to zone-wide outages.

There are requirements for the supportability of ADE and they are as follows.

Windows with BitLocker

  • Not available on A-series VMs (not usually an issue as these aren’t used in production)
  • VMs with less than 2GB of memory (typically not an issue as most VMs are allocated more than that)
  • VMs that do not have temp disks
    • Dv4, Dsv4, Ev4, and Esv4
  • Applying ADE to a VM that has disks encrypted with server-side encryption with customer-managed keys (SSE + CMK)
  • Applying SSE + CMK to a data disk on a VM encrypted with ADE
  • More unsupported scenarios can be found here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-windows#unsupported-scenarios
  • Group policy requirements for domain joined VMs include:
    • Do not push any group policies that enforce TPM protectors
    • BitLocker policy on domain joined virtual machines with custom group policy must include the following setting: Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key
  • Azure Disk Encryption will fail if domain level group policy blocks the AES-CBC algorithm, which is used by BitLocker
  • Azure Disk Encryption requires an Azure Key Vault to control and manage disk encryption keys and secrets, and requires that the key vault and VMs reside in the same Azure region and subscription
  • The Windows VM must be able to connect to an Azure Active Directory endpoint, [login.microsoftonline.com] to get the token to connect to the key vault
  • The Windows VM must be able to connect to the key vault endpoint to write the encryption keys to the key vault

Requirements may change in the future and the above include highlights but not all of the requirements so please refer to the following documentation for the full list:

Azure Disk Encryption for Windows VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-overview

Linux with DM-Crypt

  • Not available on A-series VMs (not usually an issue as these aren’t used in production)
  • Minimum of 2GB is required when encrypting only data volumes
  • Minimum of 8GB when encrypting both data and OS volumes and where the root (/) file system usage is 4GB or less
  • When the root (/) file system usage is greater than 4GB, the minimum memory required is twice the root file system usage. For instance, 16GB of root file system usage requires at least 32GB of RAM
  • Not all Linux OS are supported so ensure that the following table is referenced: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems
  • The Linux VM must be able to connect to an Azure Active Directory endpoint, [login.microsoftonline.com] to get the token to connect to the key vault
  • The Linux VM must be able to connect to the key vault endpoint to write the encryption keys to the key vault

Requirements may change in the future and the above include highlights but not all of the requirements so please refer to the following documentation for the full list:

Azure Disk Encryption for Linux VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disk-encryption-overview

As stated in the requirements, the virtual machine, whether Windows or Linux, utilizes an extension to directly access the Azure Key Vault and retrieve the encryption keys for encrypting each drive. The following diagram depicts the disks and their interaction with the Azure Key Vault:

image

The Azure Key Vault does not natively allow direct access from virtual machines and therefore requires a flag to be turned on in order to allow access. The following is where this setting is turned on:

Azure Disk Encryption for volume encryption

Specifies whether Azure Disk Encryption is permitted to retrieve secrets from the vault and unwrap keys.

image

Azure Backup Limitations with ADE

It is important to note that there are some limitations when backing up ADE disks with Azure backup and they are as follows:

  • You can back up and restore ADE encrypted VMs within the same subscription and region as the Recovery Services Backup vault
  • Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported
  • ADE encrypted VMs can’t be recovered at the file/folder level. You need to recover the entire VM to restore files and folders
  • When restoring a VM, you can't use the replace existing VM option for ADE encrypted VMs. This option is only supported for unencrypted managed disks

Please refer to the following document for more details:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-encryption#encryption-support-using-ade

Server Side Encryption (SSE) compatibility

  • ADE can be paired with SSE with PMK
  • ADE cannot be paired with SSE with CMK

Summary

The following are key points to summarize ADE:

  • ADE allows encrypting OS, data and temp disks
  • Encrypting at the OS level effectively encrypts the cache as well
  • ADE provides protection against data access when VHDs are downloaded from Azure with methods such as Azure Storage Explorer
  • You cannot mix ADE with Disk Encryption Set (you can only use one or the other)
  • If Encryption at Host is not available then ADE is the only way to guarantee data is encrypted when disks are attached to the host
  • As ADE relies on the operating system to perform the encryption, turning it on will require the virtual machine to be turned on

Demo - Turning on ADE for Windows VM

The following is a virtual machine that does not have ADE turned on for its OS or data drive:

image

To enable ADE on a VM, simply navigate to the virtual machine, select Disks in the blade and then Additional settings.

image

Assuming that the virtual machine is turned on, the Disks to encrypt drop down menu will be configurable:

image

image

To proceed with encrypting the disks, select OS and data disks, the Key Vault, Key and Version of the key to use for the encryption:

image
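
The same operation can be performed from the Azure CLI with az vm encryption enable; the VM, vault, and key names below are examples only:

```shell
# Enable ADE on both the OS and data disks, wrapping the
# disk encryption key with a key encryption key from the vault
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyVM \
  --disk-encryption-keyvault MyKeyVault \
  --key-encryption-key MyADEKey \
  --volume-type ALL

# Check the encryption status afterwards
az vm encryption show --resource-group MyResourceGroup --name MyVM
```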

Once complete, you will see the Encryption field labeled as SSE with PMK & ADE:

image

Navigating into the VM and reviewing the BitLocker settings will display the following:

image

Disk Management will now have the (BitLocker Encrypted) tag attached to the OS, the Temporary Storage and Data drive:

image

An additional BeK Volume will also be created with no drive letter attached to it. For those who are not familiar with BitLocker, BEK stands for BitLocker Encryption Key and this volume contains the key to decrypt and boot up the virtual machine during its startup.

Note that new disks added to the VM will not automatically be encrypted:

image

To encrypt the disk, simply stop and deallocate the VM, then power it back on.

image

I’ve noticed that sometimes the Azure portal doesn’t reflect that the newly added disk has ADE enabled, so it is best to log into the VM and check even if the portal still indicates it does not have ADE enabled:

image

Lastly, you may find that a newly attached disk does not get encrypted by ADE even after stopping and deallocating it. One of the possible causes is if the disk has not been initialized and assigned a drive letter as BitLocker will not be able to encrypt disks that aren’t configured to be used.

VMs with SSE with PMK and SSE with CMK disks

VMs with a mix of SSE with PMK and SSE with CMK disks will not be able to have ADE enabled:

image

You’ll notice the following warning message when attempting to encrypt the disks:

image

The details of the virtual machine image are not known and may not be currently supported for ADE. Please be aware that the operation may fail if the VM image is not currently supported. Learn more

image

Proceeding to encrypt the disks will fail:

image

Failed to update disk encryption settings

Failed to update disk encryption settings for Test-VM. Error: There was an error processing your request. Try again in a few moments.

Disabling ADE

To disable ADE, you can use PowerShell, the CLI, or a Resource Manager template as outlined here:

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption-windows#disable-encryption

Note that you cannot disable only the data disk if the OS disk has also been encrypted.
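
As a minimal CLI sketch, disabling ADE on the data volumes of a VM (placeholder names) would look like this:

```shell
# Disable ADE on the data volumes only; per the note above,
# this does not work as expected if the OS disk is also
# encrypted - disable encryption on all volumes in that case
az vm encryption disable \
  --resource-group MyResourceGroup \
  --name MyVM \
  --volume-type DATA
```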

Hope this post is able to help anyone who may be looking for more information about how ADE works and what the enabling and disabling looks like.

What is Azure Key Vault?


Azure Key Vault is arguably one of the most important services that Microsoft Azure provides to enable organizations to centrally and securely store encryption keys, secrets, and certificates. There aren’t many configuration parameters in Azure Key Vault but having an understanding of the 3 core components it stores, their purpose, and how the service works is important so this blog post serves to provide an overview and the configuration of this service.

As always, I would like to provide the following links to the official Microsoft Azure Key Vault documentation:

Azure Key Vault basic concepts
https://docs.microsoft.com/en-us/azure/key-vault/general/basic-concepts

About Azure Key Vault
https://docs.microsoft.com/en-us/azure/key-vault/general/overview

Azure Key Vault security
https://docs.microsoft.com/en-us/azure/key-vault/general/security-features

Best practices to use Key Vault
https://docs.microsoft.com/en-us/azure/key-vault/general/best-practices

Using secrets from Azure Key Vault in a pipeline
https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/

What is Azure Key Vault?

Azure Key Vault provides the service to store 3 types of entities:

  1. Keys
  2. Secrets
  3. Certificates

These 3 entities fulfill various functions for applications, virtual machines and other services within Azure as well as on-premises networks. Having Azure Key Vault centrally store these entities allows an organization to easily provision, manage, and audit access to the objects in the vault.

What type of “Keys” does Azure Key Vault Store?

The keys stored in an Azure Key Vault are used for encryption and are typically asymmetric encryption keys such as RSA and EC (Elliptic-curve). AES symmetric keys are also available but require a managed HSM service. Examples of services that would utilize the Azure Key Vault to store encryption keys are Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE). I’ve written two blog posts about how they work:

Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE) - Part 1 of 2
http://terenceluk.blogspot.com/2021/05/azure-server-side-encryption-sse-and.html

Azure Server-side Encryption (SSE) and Azure Disk Encryption (ADE) - Part 2 of 2
http://terenceluk.blogspot.com/2021/05/azure-server-side-encryption-sse-and_8.html

What type of “Secrets” does Azure Key Vault Store?

The type of secrets that Azure Key Vault stores can be:

  1. A string of characters or numbers that serves as a password
  2. A certificate that is uploaded.

Note that #2 has been deprecated and will eventually be removed because of the certificate store feature. Attempting to use a secret to store a certificate will display the following message:

This feature has been deprecated. Click here to go to your list of certificates and import a new certificate.

image 

An example of a service that would utilize the Azure Key Vault to store secrets would be an Azure App Service or Function that requires a place to safely store SQL Database connection strings, which contains information such as the credentials password. Rather than storing sensitive information in the application’s code, it can now use a URI to retrieve the string as a secret.

What type of “Certificates” does Azure Key Vault Store?

The Azure Key Vault provides management of X509 certificates that are used in many internet protocols, with SSL/HTTPS being one of the most popular. Administrators are able to create self-signed and Certificate Authority generated certificates, as well as import existing certificates. Certificate owners can implement secure storage and management of X509 certificates without interacting with the private key material. Policies can be created to direct Key Vault to manage the life-cycle of a certificate and to allow certificate owners to provide contact information for notification about life-cycle events such as expiration and renewal of a certificate. A certain level of automation for renewals is available for selected issuers - Key Vault partner X509 certificate providers / certificate authorities.

An example of a service that would utilize the Azure Key Vault to store certificates can be an Azure Front Door service accessing the key vault to retrieve an X509 certificate to secure the frontend or an App Service similarly retrieving a certificate for its TLS/SSL binding.

----------------------------------------------------------------------------------------------------------------------------

With each feature described, I would like to provide the following diagram outlining the Azure Key Vault’s services and some resources that uses the service. Note that many services require one or more of Azure Key Vault’s services in order to secure their functionality or data.

image

Creating an Azure Key Vault

Creating an Azure Key Vault is fairly straightforward as there aren’t many configuration parameters available. The configuration settings that require consideration are as follows:

Key vault name
This value needs to be unique across all customers in Azure as the name will be used for the access URL.

Region
The region placement of the key vault is important because you cannot move a key vault from one region to another and you cannot encrypt VMs with SSE or ADE if the Key Vault is not in the same region as the VM.

Pricing tier
There are two pricing tiers: Standard and Premium. The specifications for both versions are identical aside from Premium having the ability to use HSM-protected keys. Note that there is also a Managed HSM option that is not presented by the Azure Portal GUI but can be created through CLI or PowerShell (https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/quick-create-powershell#create-a-managed-hsm). I won’t be covering a Managed HSM in this post.

image

image

Soft-delete and Days to retain deleted vaults
Moving further down the Basics tab will display the Soft-delete and this is enabled by default with 90 days set. Losing keys, secrets, and certificates can mean losing access to services or data and having soft-delete provides a layer of protection against accidental deletion.

Purge protection
Purge protection is disabled by default but is highly suggested to be enabled, as it protects against accidental purges of vaults as well as malicious attempts to delete and purge vaults in the event of unauthorized access.
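
A vault with these protections could be created via the Azure CLI roughly as follows (the name, group, and region values are examples):

```shell
# Create a standard-tier vault with purge protection enabled;
# soft-delete is on by default with a 90-day retention period
az keyvault create \
  --name MyKeyVault \
  --resource-group MyResourceGroup \
  --location eastus \
  --sku standard \
  --enable-purge-protection true
```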

image

The Access policy tab contains settings for the controlling access to the new key vault.

Enable Access to
The 3 access settings are not enabled by default but should be reviewed and enabled if the intention of the key vault is to be used with those services.

Azure Virtual Machines for deployment
Specifies whether Azure Virtual Machines are permitted to retrieve certificates stored as secrets from the key vault.

Azure Resource Manager for template deployment
Specifies whether Azure Resource Manager is permitted to retrieve secrets from the key vault. Instead of putting a secure value (like a password) directly in a template or parameter file, this value can retrieve the value from an Azure Key Vault during a deployment.

Azure Disk Encryption for volume encryption
Specifies whether Azure Disk Encryption is permitted to retrieve secrets from the vault and unwrap keys. Enabling this will allow ADE to use this key vault to store keys and secrets for BitLocker to encrypt a VMs.

Permission model
The permission model provides 2 methods of controlling access to the key vault.

Vault access policy
The default Vault access policy allows specific rights to be granted to an identity to operate on keys, secrets, and certificates. These settings are configured on a per-vault basis and cannot be applied to Azure resource hierarchies such as Management Groups, Subscriptions, and Resource Groups.

The following are the specific rights that can be granted to an identity:

  • Key, Secret, Certificate Management
  • Key & Secret Management
  • Secret & Certificate Management
  • Key Management
  • Secret Management
  • Certificate Management
  • SQL Server Connector
  • Azure Data Lake Storage or Azure Storage
  • Azure Backup
  • Exchange Online Customer Key
  • SharePoint Online Customer Key
  • Azure Information BYOK

Azure role-based access control
Azure role-based access control, also known as Azure RBAC, is a more modern method to control access to the Key Vault that allows permissions to be granted based on predefined or custom-created roles to an identity such as a user, group, service principal, or managed identity. Scopes such as management groups, subscriptions, and resource groups can be used as well.

The following are RBAC Key Vault built-in roles for keys, certificates, and secrets access management:

  • Key Vault Administrator
  • Key Vault Reader
  • Key Vault Certificate Officer
  • Key Vault Crypto Officer
  • Key Vault Crypto User
  • Key Vault Crypto Service Encryption User
  • Key Vault Secrets Officer
  • Key Vault Secrets User

More information about the differences between the two types of permission model can be found here:

Migrate from vault access policy to an Azure role-based access control permission model
https://docs.microsoft.com/en-us/azure/key-vault/general/rbac-migration
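
To illustrate the RBAC model, one of the built-in roles above could be assigned at the scope of the vault itself with the Azure CLI (the vault name and user principal are placeholders):

```shell
# Scope the role assignment to the vault's own resource ID
vaultId=$(az keyvault show --name MyKeyVault --query id --output tsv)

az role assignment create \
  --role "Key Vault Administrator" \
  --assignee user@contoso.com \
  --scope "$vaultId"
```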

image

As with other PaaS services, Service Endpoints and Private Endpoints are available:

image

Creating a Standard Azure Key Vault generally doesn’t take much time.

Azure Key Vault Keys

Once the Azure Key Vault is created, the first of the 3 types of objects is Keys. As mentioned earlier in this post, these keys can be used for cryptographic operations such as SSE (CMK option) and ADE. To create a key for SSE or ADE, simply navigate to the Keys setting then click Generate/Import:

image

Options to create an RSA or EC key with activation and expiration dates are presented:

image

Other than generating a key, Import and Restore Backup options are also available. These features are useful for migrating services that previously stored encryption keys elsewhere into the key vault, or for restoring a backup if the key vault had been deleted:

image

image

The properties of the key can be reviewed by clicking onto the key:

image

The version of the key will be displayed and further details of the key can be reviewed by clicking on the version of interest:

image

Notice that it is possible to download the public key (this is an asymmetric key) but not the private key. It is also possible to modify the Activation date and Expiration date, and the Permitted operations on this key:

image

It is possible to back up the key:

image

Note the following message about where the backup can be restored:

The backup of this key will be encrypted to Azure Key Vault and can only be restored in Azure Key Vault within the same subscription. Please download and save this backup somewhere secure, as it will not be persisted following this browser session.

image

A .keybackup file will be available for download:

image
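
The backup and restore operations can also be performed from the CLI; the vault and key names below are examples:

```shell
# Back up the key to a local file (the blob is encrypted by Key Vault)
az keyvault key backup \
  --vault-name MyKeyVault \
  --name MySSEKey \
  --file MySSEKey.keybackup

# Restore it into a vault within the same subscription
az keyvault key restore \
  --vault-name MyKeyVault \
  --file MySSEKey.keybackup
```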

New versions of the key can be generated:

image

The options to generate a new key will be presented:

image

Then a new key will be generated and placed under CURRENTVERSION, while the previous key will be placed under OLDERVERSIONS:

image

With soft-delete enabled on the vault, any deleted keys will be recoverable via Manage deleted keys within the configured retention period:

image

Azure Key Vault Secrets

Secrets can be a string of characters for any service that needs to securely store the value (e.g. ARM template with an administrator password, an App Service with a SQL connection string). To create a secret, simply navigate to the Secrets setting then click Generate/Import:

image
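The same can be done from PowerShell with the Az.KeyVault module. The following is a minimal sketch, assuming a hypothetical vault named my-test-keyvault01 and a hypothetical secret named appService-to-SQLDB:

```powershell
# Requires the Az.KeyVault module and an authenticated session (Connect-AzAccount)
# The vault name, secret name, and connection string below are placeholders
$secretValue = ConvertTo-SecureString 'Server=tcp:myserver.database.windows.net;Database=mydb' -AsPlainText -Force

# Create the secret (or add a new version if it already exists)
Set-AzKeyVaultSecret -VaultName 'my-test-keyvault01' -Name 'appService-to-SQLDB' -SecretValue $secretValue

# Retrieve the secret value as plain text (supported in newer Az.KeyVault versions)
Get-AzKeyVaultSecret -VaultName 'my-test-keyvault01' -Name 'appService-to-SQLDB' -AsPlainText
```

Repeating Set-AzKeyVaultSecret with the same name creates a new version, matching the versioning behavior shown in the portal.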

As mentioned earlier, secrets used to be able to store certificates, but that feature has since been deprecated:

image

image

Proceed to select Manual as the Upload option and configure the secret’s parameters as required:

image

The new secret will be displayed as such once created:

image

As with keys, versioning of the secret is provided:

image

Opening the properties of the secret will display its configuration and the Secret Identifier, which can be used by App Services to retrieve the secret from the key vault (permissions for the App Service will need to be configured):

image

Note how you can review the secret by clicking on the Show Secret Value button:

image

----------------------------------------------------------------------------------------------------------------------------

Example of how the Secret Identifier would be used for an App Service

The process of configuring an App Service to store and retrieve secrets in an Azure Key Vault is fairly straightforward, so I would like to quickly demonstrate storing a SQL database connection string here.

Begin by creating an Azure Key Vault with Azure role-based access control as the Permissions Model:

image

You can also alternatively change the Permissions model of an existing Key Vault via the Access Policies configuration:

image

You won’t be able to browse the keys, secrets or certificates with Azure role-based access control set as the Permissions Model even though you may be an owner:

image

image

To correct this, grant yourself the Key Vault Administrator role via the Access Control (IAM):

image

image

With the Key Vault Administrator role assigned, you should now be able to browse the keys, secrets, and certificates:

image

Proceed to create a secret with the SQL connection string, open the secret, and copy the Secret Identifier:

https://my-test-keyvault01.vault.azure.net/secrets/appService-to-SQLDB/e49822276fab4f28ac24a75bbfb5e104

image

Proceed to add the secret identifier reference to the Azure App Service Settings by opening the App Service configuration settings, then click on New Connection String:

image

Type the Name of the connection string (you can use the same name as the secret name in the key vault), then configure the Value as:

@Microsoft.KeyVault(SecretUri=<Value of the Secret Identifier>)

For example:

@Microsoft.KeyVault(SecretUri=https://my-test-keyvault01.vault.azure.net/secrets/appService-to-SQLDB/e49822276fab4f28ac24a75bbfb5e104)

image
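If you script App Service settings, the reference string can be assembled from the Secret Identifier. A small sketch, using the example identifier from above:

```powershell
# Assemble an App Service Key Vault reference from a secret identifier
$secretUri = 'https://my-test-keyvault01.vault.azure.net/secrets/appService-to-SQLDB/e49822276fab4f28ac24a75bbfb5e104'
$kvReference = "@Microsoft.KeyVault(SecretUri=$secretUri)"
$kvReference
```

Note that omitting the version segment at the end of the URI makes the reference resolve to the latest version of the secret, which avoids having to update the App Service setting every time the secret is rotated.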

Next, we’ll need to grant the App Service permissions to access the secret in the key vault by using a System assigned identity. Proceed to select the Identity setting in the App Service, under the System assigned tab, turn on the Status:

image

Note the message indicating the app service will be registered with Azure Active Directory so it can be granted permissions to access resources protected by Azure AD (Azure role-based access control set as the Permissions Model for the key vault):

image

Proceed to click on Azure role assignments to assign permissions to the App Service:

image

Click on Add role assignment:

image

Grant the permissions to the App Service:

Scope: Key Vault
Subscription: <the subscription>
Resource: <the key vault>
Role: Key Vault Secrets User

image

image

With the connection string configured on the App Service, the connection string contents of the web.config can now be cleared.

----------------------------------------------------------------------------------------------------------------------------

Similar to keys, secrets can also be backed up. Instead of a .keybackup file, a .secretbackup file is created:

image

image

Attempting to open the backup file will display scrambled text even though only one secret is stored in the backup:

image

Another example of how a secret is used is Azure Disk Encryption storing its secrets. Below is a screenshot of an Azure Key Vault used by ADE to store BitLocker encryption keys (BEKs):

image

Opening the secret will allow you to browse the BitLocker encryption key (BEK) details:

image

Azure Key Vault Certificates

The last object type that the key vault stores is certificates. To create a certificate, simply navigate to the Certificates setting then click Generate/Import:

image

The Create a certificate menu will provide various certificate options:

image

The ability to generate a certificate request or import a certificate (PFX or PEM):

image

If a new certificate is to be created there are 3 options:

image

Certificate lifecycle can be managed via the Lifetime Action Type, where the following options are available:

  • Automatically renew at a given percentage lifetime
  • Automatically renew at a given number of days before expiry
  • E-mail all contacts at a given percentage lifetime
  • E-mail all contacts at a given number of days before expiry

The automatically renew options will be dependent on whether the certificate was generated from an integrated CA.

image
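The same lifetime action can be configured when creating a certificate through PowerShell. A sketch assuming the Az.KeyVault module and hypothetical vault and certificate names:

```powershell
# Requires the Az.KeyVault module and an authenticated session (Connect-AzAccount)
# Self-signed certificate policy that auto-renews at 80% of the certificate's lifetime
$policy = New-AzKeyVaultCertificatePolicy -SubjectName 'CN=www.contoso.com' `
    -IssuerName 'Self' -ValidityInMonths 12 -RenewAtPercentageLifetime 80 `
    -SecretContentType 'application/x-pkcs12'

# Create the certificate in the vault with the above policy (names are placeholders)
Add-AzKeyVaultCertificate -VaultName 'my-test-keyvault01' -Name 'contoso-web' -CertificatePolicy $policy
```

For a non-integrated CA, -IssuerName 'Unknown' would be used instead, and the automatic renewal actions would not apply.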

Additional advanced policy configuration options are available for the certificate:

image

The following is a request generated for a non-integrated CA:

image

image

Navigating into the request will allow you to browse the certificate request information:

image

Navigating back out to the Overview section and clicking on Certificate Operation will allow you to download the CSR as well as merge the signed certificate returned by the CA:

image

image

Another method of placing certificates into the key vault is to import a PEM or PFX. The following is an example of importing a PFX:

image

image

image

With the certificate imported into the key vault, we can now allow resources such as App Services to import the certificate from the key vault. Before doing so, we’ll need to grant the App Service the required RBAC role to retrieve the certificate, which is the Key Vault Secrets User role. Once the role is assigned to the App Service, use the Import Key Vault Certificate feature under the Private Key Certificates (.pfx) tab to import the certificate from the key vault:

image

----------------------------------------------------------------------------------------------------------------------------

Note that if you attempt to import a certificate from a key vault that is not in the same region as the App Service, the process will fail with the following message and no further detail:

Import Key Vault Certificate
Failed to import Key Vault Certificate

image

----------------------------------------------------------------------------------------------------------------------------

Other services such as Azure Front Door can also leverage key vaults to import certificates:

image

How many Key Vaults should an organization have?

Microsoft’s general guideline is to have one key vault per application, per environment, per region. With that said, this can quickly grow to many key vaults in a large environment.
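If you follow this guideline, a small helper can keep vault names consistent across the environment. The kv-&lt;app&gt;-&lt;env&gt;-&lt;region&gt; convention below is purely hypothetical; the 3-24 character limit is Azure's:

```powershell
# Hypothetical naming helper for the one-vault-per-app/environment/region guideline
function New-KeyVaultName {
    param([string]$App, [string]$Environment, [string]$Region)
    $name = ("kv-$App-$Environment-$Region").ToLower()
    # Azure key vault names are limited to 3-24 alphanumeric characters and hyphens
    if ($name.Length -gt 24) { throw "Key vault name '$name' exceeds 24 characters" }
    return $name
}

New-KeyVaultName -App 'Payroll' -Environment 'Prod' -Region 'EastUS'   # kv-payroll-prod-eastus
```

Generating names this way also makes it easier to apply Azure Policy scopes and RBAC assignments consistently as the number of vaults grows.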

How can an organization manage Key Vaults at scale?

When a large organization has many key vaults distributed across Azure because of the number of applications and divisions, it is suggested to use Azure Policy to manage compliance and set standards such as key sizes and validity periods. Azure Policy can be applied to a scope within the organization (management group, subscription, resource group, or individual key vault), ensuring all resources within that scope are evaluated for compliance against the rule. Policies can be configured to only audit whether a resource meets a compliance rule, or to enforce compliance and block the creation or import of objects that don’t meet the policy rule.

I will not go further into the details here; see the following Microsoft document:

Integrate Azure Key Vault with Azure Policy
https://docs.microsoft.com/en-us/azure/key-vault/general/azure-policy?tabs=certificates

----------------------------------------------------------------------------------------------------------------------------

I hope this post is able to help anyone who may be looking for more information about how Azure Key Vault works, what the differences are between keys, secrets, and certificates, and what the configuration looks like.

Attempting to use EXO V2 Connect-ExchangeOnline using certificate authentication with Azure App Registration fails with: "Error Acquiring Token..."


Problem

You’re attempting to configure an on-premises server to use certificate authentication with Connect-ExchangeOnline to run unattended scripts (automation scenarios) using Azure AD applications and self-signed certificates as described in the following article:

App-only authentication for unattended scripts in the EXO V2 module

https://docs.microsoft.com/en-us/powershell/exchange/app-only-auth-powershell-v2?view=exchange-ps

However, you are presented with the following error when attempting to connect:

PS C:\> Connect-ExchangeOnline -CertificateThumbPrint "3968B23E6A91C8F7FF4A9587341E9B0FDB50DB0E" -AppID "ac28a30a-6e5f-4c2d-9384-17bbb0809d57" -Organization "bmabm.onmicrosoft.com"

----------------------------------------------------------------------------

The module allows access to all existing remote PowerShell (V1) cmdlets in addition to the 9 new, faster, and more reliable cmdlets.

|--------------------------------------------------------------------------|

| Old Cmdlets | New/Reliable/Faster Cmdlets |

|--------------------------------------------------------------------------|

| Get-CASMailbox | Get-EXOCASMailbox |

| Get-Mailbox | Get-EXOMailbox |

| Get-MailboxFolderPermission | Get-EXOMailboxFolderPermission |

| Get-MailboxFolderStatistics | Get-EXOMailboxFolderStatistics |

| Get-MailboxPermission | Get-EXOMailboxPermission |

| Get-MailboxStatistics | Get-EXOMailboxStatistics |

| Get-MobileDeviceStatistics | Get-EXOMobileDeviceStatistics |

| Get-Recipient | Get-EXORecipient |

| Get-RecipientPermission | Get-EXORecipientPermission |

|--------------------------------------------------------------------------|

To get additional information, run: Get-Help Connect-ExchangeOnline or check https://aka.ms/exops-docs

Send your product improvement suggestions and feedback to exocmdletpreview@service.microsoft.com. For issues related to the module, contact Microsoft support. Don't use the feedback alias for problems or support issues.

----------------------------------------------------------------------------

Error Acquiring Token:

Could not use the certificate for signing. See inner exception for details. Possible cause: this may be a known issue with apps build against .NET Desktop 4.6 or lower. Either target a higher version of .NET desktop - 4.6.1 and above, or use a different certificate type (non-CNG) or sign your own assertion as described at https://aka.ms/msal-net-signed-assertion.

New-ExoPSSession : One or more errors occurred.

At C:\Program

Files\WindowsPowerShell\Modules\ExchangeOnlineManagement\2.0.5\netFramework\ExchangeOnlineManagement.psm1:475 char:30

+ ... PSSession = New-ExoPSSession -ExchangeEnvironmentName $ExchangeEnviro ...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [New-ExoPSSession], AggregateException

+ FullyQualifiedErrorId : System.AggregateException,Microsoft.Exchange.Management.ExoPowershellSnapin.NewExoPSSess

ion

PS C:\>

image

You’ve verified that you have 2.0.3 or later of the EXO V2 module (2.0.5 in this case) by using the cmdlet:

Get-InstalledModule | FL

image

Solution

One possible reason this error is thrown is that you are not using PowerShell 7. The official documentation stating that version 7.0.3 or later is required can be found here:

Supported operating systems for the EXO V2 module

https://docs.microsoft.com/en-us/powershell/exchange/exchange-online-powershell-v2?view=exchange-ps

“Specifically, version 2.0.4 or later of the EXO V2 module is supported in PowerShell 7.0.3 or later.”

Use one of the following two cmdlets to determine which PowerShell version you’re running:

Get-Host | Select-Object Version

$PSVersionTable

PS C:\> Get-Host | Select-Object Version

Version

-------

5.1.17763.1007

PS C:\> $PSVersionTable

Name Value

---- -----

PSVersion 5.1.17763.1007

PSEdition Desktop

PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}

BuildVersion 10.0.17763.1007

CLRVersion 4.0.30319.42000

WSManStackVersion 3.0

PSRemotingProtocolVersion 2.3

SerializationVersion 1.1.0.1

PS C:\>

image

In this example, the installed PowerShell version is 5.1.17763.1007.

Proceed to download version 7.0.3 or later from GitHub: https://github.com/PowerShell/PowerShell/releases and install it; with the EXO V2 module installed, Connect-ExchangeOnline using a certificate for authentication should then work.
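To avoid hitting this again, a script can guard against being run under the wrong host up front. A minimal sketch:

```powershell
# Abort early when running under Windows PowerShell 5.x instead of PowerShell 7+,
# since certificate-based Connect-ExchangeOnline requires PowerShell 7.0.3 or later
if ($PSVersionTable.PSVersion.Major -lt 7) {
    throw "PowerShell 7.0.3 or later is required; current version is $($PSVersionTable.PSVersion)"
}
```

Placing this at the top of any unattended EXO V2 script turns the cryptic "Error Acquiring Token" into an immediate, self-explanatory failure.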

Script to export audit logs for the current month from Office 365 using Search-UnifiedAuditLog


I was recently asked by a colleague who was looking for a way to automate the export of events from Exchange Online, SharePoint Online, OneDrive for Business, Azure Active Directory, Microsoft Teams, Power BI, and other Microsoft 365 services with the Audit Log search feature in the following two Microsoft 365 consoles:

Office 365 Security & Compliance
https://protection.office.com/unifiedauditlog

image

Microsoft 365 compliance
https://compliance.microsoft.com/auditlogsearch

image

**I believe the Audit search in the Microsoft 365 compliance portal will be replacing Office 365 Security & Compliance.

I haven’t written scripts for a while, so I decided to create one as best I could and have him modify it as needed. My PowerShell script uses the Search-UnifiedAuditLog cmdlet to export the audit logs of a user; the cmdlet’s documentation can be found here: https://docs.microsoft.com/en-us/powershell/module/exchange/search-unifiedauditlog?view=exchange-ps

Note that in order for the script to work, EXO V2 2.0.3 or later with PowerShell 7 is required, as authenticating with a certificate requires these two components. This example uses EXO V2 2.0.5 with PowerShell 7.1.3.

To allow the script to export the audit logs of multiple users, create a txt file and add each user name on a line of its own as such:

image

The following is the script and a few points describing what it does:

  1. Connects to O365 with Connect-ExchangeOnline and authenticates with a certificate to work around MFA through modern authentication
  2. Gets the first and last day of the month (the assumption is that this script will be run on the last day of the month at 11:59 p.m.)
  3. Gets the month name
  4. Loops through each username in the txt file
  5. Uses Search-UnifiedAuditLog to export the audit logs starting at the beginning of the month to the last day of the month for a user into a CSV file
  6. Uses Send-MailMessage to email the CSV to two users by relaying off an on-premises Exchange server (https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/send-mailmessage?view=powershell-7.1)

#Install-Module -Name ExchangeOnlineManagement
#Import-Module ExchangeOnlineManagement

Connect-ExchangeOnline -CertificateThumbPrint "3968B23E6A91C8F7FF4A9587341E9B0FDB50DB0E" -AppID "ac28a30a-6e5f-4c2d-9384-17bbb0809d57" -Organization "contoso.onmicrosoft.com"

# Get the first day and the last day of the current month
$date = Get-Date
$year = $date.Year
$month = $date.Month
$startOfMonth = Get-Date -Year $year -Month $month -Day 1 -Hour 0 -Minute 0 -Second 0 -Millisecond 0
$endOfMonth = ($startOfMonth).AddMonths(1).AddTicks(-1)

# Get the current month name
$monthName = (Get-Culture).DateTimeFormat.GetMonthName((Get-Date).Month)

# Loop through each entry in a text file containing usernames, use Search-UnifiedAuditLog to
# search the unified audit log, export the results to CSV and email them out
foreach ($alias in Get-Content C:\scripts\Users.txt) {
    $userAlias = $alias
    $domain = '@contoso.com'
    $user = $userAlias + $domain
    $csvFileName = ($userAlias + "-O365-Activities-" + $monthName + "-" + $year + ".csv")
    Search-UnifiedAuditLog -StartDate $startOfMonth -EndDate $endOfMonth -UserIds $user | Export-Csv $csvFileName -NoTypeInformation
    $mailSubject = $monthName + " " + $year + " " + $user + ' O365 Audit Log'
    $mailBody = "Sending " + $user + " O365 Audit Log for the month of " + $monthName + " " + $year + "."
    Send-MailMessage -From 'O365 Audit Job <o365audit@contoso.com>' -To 'Terence Luk <tluk@contoso.com>', 'John Smith <jsmith@contoso.com>' -Subject $mailSubject -Body $mailBody -Attachments $csvFileName -Priority High -DeliveryNotificationOption OnSuccess, OnFailure -SmtpServer 'smtp.contoso.com'
}
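One caveat worth noting: Search-UnifiedAuditLog returns at most 5,000 records per call, so a busy month can silently truncate results. A hedged sketch of paging with -SessionId and -SessionCommand ReturnLargeSet, reusing the variable names from the script above:

```powershell
# Page through large result sets; ReturnLargeSet retrieves up to 50,000 records per session
$sessionId = [guid]::NewGuid().ToString()
$allRecords = @()
do {
    $page = Search-UnifiedAuditLog -StartDate $startOfMonth -EndDate $endOfMonth -UserIds $user `
        -SessionId $sessionId -SessionCommand ReturnLargeSet -ResultSize 5000
    if ($page) { $allRecords += $page }
} while ($page)
$allRecords | Sort-Object CreationDate | Export-Csv $csvFileName -NoTypeInformation
```

This would replace the single Search-UnifiedAuditLog line in the loop for environments where users generate more than 5,000 audit events per month.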

The following is an output of the script using EXO V2 (2.0.5) with PowerShell 7.1.3:

image

The following is a sample output in the CSV audit log:

image