
Unable to access the AD FS update password page after a new deployment


One of the common questions I get asked by my colleagues during their AD FS deployments concerns the following error they are presented with when they attempt to access the AD FS update password page after a new deployment:

https://fs.contoso.com/ADFS/portal/updatepassword/

fs.contoso.com

An error occurred

An error occurred. Contact your administrator for more information.

Error details

· Activity ID: 9c0d8275-b381-43ab-3b01-0080000000c2

· Relying party: fs.contoso.com

· Error details: Object reference not set to an instance of an object.

· Node name: b660961e-76bc-481e-a991-d9ab86f379e4

· Error time: Wed, 20 May 2020 18:41:27 GMT

· Proxy server name: BR***P1

· Cookie: enabled

· User agent string: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36

image

The reason for this is that the page is disabled by default, just like the IdP-initiated sign-on page (https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-initiatedsignon).

To enable this page, launch the AD FS management console, navigate to AD FS > Service > Endpoints and scroll all the way to the bottom to the line item with the URL Path /adfs/portal/updatepassword/:

There are two properties to configure depending on where you want the update password page to be reachable. To enable the page for internal access only, set Enabled to Yes; if you also want the page to be accessible from the internet, set Proxy Enabled to Yes:

image

Once complete, you’ll need to restart the AD FS service on all of the servers in order for the configuration to take effect.
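
If you prefer PowerShell over the management console, the same change can be made on the primary AD FS server; a minimal sketch (endpoint path as shown above, service name assumed to be adfssrv):

# Enable the update password endpoint for intranet access
Enable-AdfsEndpoint -TargetAddressPath "/adfs/portal/updatepassword/"
# Also publish the endpoint through the Web Application Proxy for internet access
Set-AdfsEndpoint -TargetAddressPath "/adfs/portal/updatepassword/" -Proxy $true
# Restart the AD FS service for the change to take effect
Restart-Service adfssrv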


Enabling privacy mode for Microsoft Teams to hide presence information for external federated contacts


One of the most common questions I am asked about Teams is whether there is a way to hide an organization’s presence information from externally federated domains, as the default is to display the status. The short answer is yes, and it is configurable for the organization via PowerShell. Researching the appropriate cmdlets can be a bit confusing because the cmdlet to use is actually the following legacy one from Skype for Business Online:

Set-CsPrivacyConfiguration

https://docs.microsoft.com/en-us/powershell/module/skype/set-csprivacyconfiguration?view=skype-ps

Note that the Applies to list does not have Teams listed:

image

Although Teams is not listed, you can still use this cmdlet to enable privacy mode by executing the following:

Import-Module SkypeOnlineConnector

$session = New-CsOnlineSession

Import-PSSession -Session $session

Set-CsPrivacyConfiguration -EnablePrivacyMode $true

image
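
To confirm the change has been applied, the current setting can be read back in the same PowerShell session; a quick check:

Get-CsPrivacyConfiguration

The EnablePrivacyMode property should now report True.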

With the configuration set, external federated contacts should no longer see the presence status of the users in this organization:

image

Installing Edge Chromium browser on a Citrix Virtual Apps and Desktops 1909 application server with VDA installed causes the browser to freeze when launched


Problem

You’ve decided to install the new Microsoft Edge Chromium browser onto a Citrix application server:

image

… but quickly notice that upon a successful install, launching the browser displays a frozen window that is not responsive:

image

Reviewing the processes in Task Manager will reveal that multiple processes of the browser have consumed all of the CPU:

image

Attempting to uninstall the browser also freezes because the uninstall process launches the Edge Chromium browser:

image

image

Trying other versions offered at the following URL does not make a difference:

Microsoft Edge Insider Channels
https://www.microsoftedgeinsider.com/en-us/download

image

Solution

What apparently causes the Microsoft Edge Chromium browser to freeze are the Citrix API hooks that interact with the MSEdge.exe process, which reminds me of an old issue I came across back in 2014 when white block artifacts would be displayed in a Citrix session: http://terenceluk.blogspot.com/2014/02/white-block-artifacts-displayed-in.html

To correct the issue, we’ll need to add the MSEdge.exe process to the exclusion list on the Citrix application server. Begin by navigating to the following registry path on the host with the VDA agent installed:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CtxUvi

image

Open the UviProcessExcludes REG_SZ value and you’ll see the following default processes that are excluded:

LsaIso.exe;BioIso.exe;FsIso.exe;sppsvc.exe;vmsp.exe;

image

Add the following to the list:

msedge.exe;

image

image
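
If you prefer to make the change from an elevated PowerShell prompt rather than regedit, the following sketch appends the process to the existing exclusion list (registry path and value name as shown above):

$path = "HKLM:\SYSTEM\CurrentControlSet\Services\CtxUvi"
$excludes = (Get-ItemProperty -Path $path -Name UviProcessExcludes).UviProcessExcludes
# Append msedge.exe only if it is not already in the exclusion list
if ($excludes -notlike "*msedge.exe*") {
    Set-ItemProperty -Path $path -Name UviProcessExcludes -Value ($excludes + "msedge.exe;")
}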

With the exclusion updated, proceed to restart the server and Edge Chromium should now launch properly:

imageimage

Deploying Azure AD Connect for Office 365 with AD FS for authentication


One of the methods for providing authentication for Office 365 services is to redirect users back to an on-premise AD FS (Active Directory Federation Services) portal so that authentication can be handled by the local infrastructure with Domain Controllers. Some organizations prefer this route because they may already have AD FS set up for multiple services with an MFA solution configured and would like to unify authentication requests to their on-premise infrastructure because they have yet to migrate their infrastructure to the Azure cloud. In other instances, the organization may already have most of their infrastructure migrated to the Azure cloud but would like to leverage the additional capabilities of AD FS. I won’t go into the details of each option but refer to the following documentation for more information:

Azure AD Connect user sign-in options
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/plan-connect-user-signin#federation-that-uses-a-new-or-existing-farm-with-ad-fs-in-windows-server-2012-r2

I’ve come across many organizations that have AD Connect installed on a domain controller, but this is actually not recommended by Microsoft:

https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-prerequisites

Installing Azure AD Connect on a Domain Controller is not recommended due to security practices and more restrictive settings that can prevent Azure AD Connect from installing correctly.

Other security concerns are as follows:

  • Restarts of the server for troubleshooting AD Connect would affect domain controller services, DNS or other roles that may be on the server
  • DR for domain controllers would not be as straightforward with the additional AD Connect service installed, and the same applies to recovering AD Connect itself
  • AD Connect installs a SQL Server 2012 Express Edition database, which complicates the demotion of the domain controller if that is to be done in the future
  • SQL Server 2012 is an additional component that can raise security concerns when on a domain controller
  • An extra application or agent increases the attack surface (SQL Server is a well-known target for vulnerabilities)
  • Azure AD Connect service accounts requiring administrative permissions are added locally to the server, which means they would be placed into the Builtin\Administrators group and therefore have administrative privileges to the AD domain
  • A local install of SQL Server would consume additional memory, contending with the domain controller’s use of RAM to cache the ntds.dit database
  • If the domain controller that AD Connect is installed on is not a global catalog (GC), synchronization issues can occur

I would suggest reviewing the Microsoft provided documentation so that you can have the information required to make the best decision that meets the organization’s requirements.

With the above stated, the purpose of this post is to demonstrate the process of installing and setting up Azure AD Connect on a Windows Server 2019 server to use AD FS as the user sign-in method for the organization, with an AD FS farm already deployed on Windows Server 2019.

For more information about deploying AD FS on Windows Server 2019, refer to my previous blog posts:

Deploying a redundant Active Directory Federation Services (ADFS) farm on Windows Server 2019
http://terenceluk.blogspot.com/2020/04/deploying-redundant-active-directory.html

Deploying a redundant Active Directory Federation Services (ADFS) Web Application Proxy servers on Windows Server 2019
http://terenceluk.blogspot.com/2020/04/deploying-redundant-active-directory_21.html

Custom Domain Prerequisite

Begin by ensuring that the organization’s sign-in domain (typically the primary SMTP domain) has been added as a custom domain and verified with the required TXT record in your Azure tenant:

image

Proceed to download the latest Azure AD Connect from the Azure Active Directory portal:

image

In the event where the organization’s Active Directory domain is non-routable (e.g. contoso.local), you will need to prepare the domain for directory synchronization as outlined in the following documentation.

Prepare a non-routable domain for directory synchronization

https://docs.microsoft.com/en-us/office365/enterprise/prepare-a-non-routable-domain-for-directory-synchronization
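
In practice, preparing a non-routable domain means adding a routable UPN suffix to the forest and updating the affected users; a hedged sketch using the ActiveDirectory module, where contoso.local and contoso.com are placeholders for your internal and routable domains:

Import-Module ActiveDirectory
# Add the routable UPN suffix to the forest
Get-ADForest | Set-ADForest -UPNSuffixes @{add="contoso.com"}
# Update users that still carry the non-routable suffix
Get-ADUser -Filter "UserPrincipalName -like '*@contoso.local'" | ForEach-Object {
    $newUpn = $_.UserPrincipalName -replace '@contoso\.local$', '@contoso.com'
    Set-ADUser -Identity $_ -UserPrincipalName $newUpn
}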

Install Azure AD Connect

Launch the downloaded AzureADConnect.msi file to start the installer:

image

Proceed to install Azure AD Connect:

imageimage

Configure Azure AD Connect

image

I usually recommend never going with Express settings, as this configures Azure AD Connect with defaults that most administrators will not go back and review in the documentation to understand, which can have unanticipated consequences in the future.

I would encourage everyone who needs to set up AD Connect to go through the following documentation, which outlines the configuration parameters for a custom installation:

Custom installation of Azure AD Connect
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-custom

Setting up AD FS for sign-in is also not available via the Express settings, so select the Customize button:

image

By leaving all of the optional settings unchecked, AD Connect will set up a SQL Server 2012 Express LocalDB instance, create the appropriate groups, and assign permissions without intervention. For more information about the optional settings, please refer to the following table:

Use an existing SQL Server

Allows you to specify the SQL Server name and the instance name. Choose this option if you already have a database server that you would like to use. Enter the instance name followed by a comma and port number in Instance Name if your SQL Server does not have browsing enabled. Then specify the name of the Azure AD Connect database. Your SQL privileges determine whether a new database will be created or your SQL administrator must create the database in advance. If you have SQL SA permissions see How to install using an existing database. If you have been delegated permissions (DBO) see Install Azure AD Connect with SQL delegated administrator permissions.

Use an existing service account

By default Azure AD Connect uses a virtual service account for the synchronization services to use. If you use a remote SQL server or use a proxy that requires authentication, you need to use a managed service account or use a service account in the domain and know the password. In those cases, enter the account to use. Make sure the user running the installation is an SA in SQL so a login for the service account can be created. See Azure AD Connect accounts and permissions.
With the latest build, provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For more information see Install Azure AD Connect using SQL delegated administrator permissions.

Specify custom sync groups

By default Azure AD Connect creates four groups local to the server when the synchronization services are installed. These groups are: Administrators group, Operators group, Browse group, and the Password Reset Group. You can specify your own groups here. The groups must be local on the server and cannot be located in the domain.

We will be using a local SQL Express database (installed by AD Connect) to store the data for AD Connect so proceed by clicking on the Install button:

imageimage

Once Azure AD Connect has been installed, you will be presented with the following User sign-in options:

image

Password Hash Synchronization is the default option, as it is for the Express settings, but given that we are configuring AD FS as the sign-in option, proceed by selecting Federation with AD FS:

image

Proceed by entering the credentials for an account that has global administrator permissions for the Azure tenant:

imageimage

Select the domain in the forest that Azure AD Connect will be synchronizing from the FOREST drop-down menu:

**Note that the domain in this example uses a non-routable internal suffix (.local), but a UPN suffix matching the SMTP domain the organization will use to sign in to resources has already been added as a custom domain and verified.

image

Select Create new AD account if a service account has not already been created for the AD Connect Synchronization application:

image

AD Connect will validate the Enterprise admin credentials to ensure that it is able to connect to the selected domain:

image

Upon successful completion, the selected domain will be listed under the CONFIGURED DIRECTORIES heading with a green check mark beside it:

image

Clicking Next will allow AD Connect to retrieve the directory schema of the domain:

image

Since the Active Directory domain uses a non-routable suffix but an additional UPN suffix has already been configured, the wizard will indicate that the .local domain is not added (it isn’t possible to add it) but that the UPN suffix has been verified:

image

For reference, the following attributes are available:

image

Proceed with userPrincipalName selected in the USER PRINCIPAL NAME drop-down menu and select Continue without matching all UPN suffixes to verified domains (this needs to be selected as there is a non-routable domain listed):

image

The next Domain/OU Filtering page is where you configure whether AD Connect will synchronize the whole domain or only selected domains and OUs. Consider carefully whether the whole domain should be synchronized, as it is best practice to only synchronize the OUs that are required, especially if the domain has a lot of unused objects. Selectively synchronizing only the objects that are needed will optimize performance. With that said, if the OU structure isn’t well organized and objects are scattered everywhere, it may be best to synchronize the whole domain to avoid missing critical objects.

image

Uniquely identifying users isn’t as important for a single forest with a single domain, as users typically have only one identity, but having multiple forests and domains usually means a single user could have multiple identities. If an organization with multiple forests and domains is being configured, decide how multiple identities will be matched as described in the following table.

Setting

Description

Users are only represented once across all forests

All users are created as individual objects in Azure AD. The objects are not joined in the metaverse.

Mail attribute

This option joins users and contacts if the mail attribute has the same value in different forests. Use this option when your contacts have been created using GALSync. If this option is chosen, User objects whose Mail attribute aren't populated will not be synchronized to Azure AD.

ObjectSID and msExchangeMasterAccountSID/ msRTCSIP-OriginatorSid

This option joins an enabled user in an account forest with a disabled user in a resource forest. In Exchange, this configuration is known as a linked mailbox. This option can also be used if you only use Lync and Exchange is not present in the resource forest.

sAMAccountName and MailNickName

This option joins on attributes where it is expected the sign-in ID for the user can be found.

A specific attribute

This option allows you to select your own attribute. If this option is chosen, User objects whose (selected) attribute aren't populated will not be synchronized to Azure AD. Limitation: Make sure to pick an attribute that already can be found in the metaverse. If you pick a custom attribute (not in the metaverse), the wizard cannot complete.

For the purpose of this example, only one domain in a single forest will be synchronized, so the option Users are represented only once across all directories will be selected:

image

For reference, a few of the options for attributes that can be used to identify users are as follows:

image

The Filter users and devices option is great for AD Connect pilots where the OU structure does not allow synchronizing only the objects selected for the pilot. With this filtering option, an administrator can create a group and add all the pilot objects to it so that these objects are the only ones that get synchronized into Azure AD. Note that nested groups are not supported, so all the objects in the group must be direct members.

image

I won’t write an explanation of what each optional feature provides, as the table from the documentation does a great job of explaining it:

Optional Features

Description

Exchange Hybrid Deployment

The Exchange Hybrid Deployment feature allows for the co-existence of Exchange mailboxes both on-premises and in Office 365. Azure AD Connect is synchronizing a specific set of attributes from Azure AD back into your on-premises directory.

Exchange Mail Public Folders

The Exchange Mail Public Folders feature allows you to synchronize mail-enabled Public Folder objects from your on-premises Active Directory to Azure AD.

Azure AD app and attribute filtering

By enabling Azure AD app and attribute filtering, the set of synchronized attributes can be tailored. This option adds two more configuration pages to the wizard. For more information, see Azure AD app and attribute filtering.

Password hash synchronization

If you selected federation as the sign-in solution, then you can enable this option. Password hash synchronization can then be used as a backup option. For additional information, see Password hash synchronization.
If you selected Pass-through Authentication this option can also be enabled to ensure support for legacy clients and as a backup option. For additional information, see Password hash synchronization.

Password writeback

By enabling password writeback, password changes that originate in Azure AD is written back to your on-premises directory. For more information, see Getting started with password management.

Group writeback

If you use the Office 365 Groups feature, then you can have these groups represented in your on-premises Active Directory. This option is only available if you have Exchange present in your on-premises Active Directory.

Device writeback

Allows you to writeback device objects in Azure AD to your on-premises Active Directory for Conditional Access scenarios. For more information, see Enabling device writeback in Azure AD Connect.

Directory extension attribute sync

By enabling directory extensions attribute sync, attributes specified are synced to Azure AD. For more information, see Directory extensions.

image

As AD FS is going to be configured, we will enable Password hash synchronization as a fallback in case the federation service is not available:

Implement password hash synchronization with Azure AD Connect sync
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-password-hash-synchronization

Password hash synchronization can also be enabled in addition to federation. It may be used as a fallback if your federation service experiences an outage.

image

Enter an account with domain administrator rights for Azure AD Connect to perform the configuration changes:

image

Note that the account needs to be a local administrator on the AD FS servers that will be used. The Domain Admins group is usually a member of the local Administrators group on the servers, but if it is not, ensure that you add this account to the servers:

image

Configure AD FS Farm

The AD Connect wizard can actually create and configure an AD FS farm, but this example already has an AD FS farm configured, so Use an existing AD FS farm will be selected:

image

image

image

Specify the FQDN of the primary server in the AD FS farm:

image

The wizard will attempt to validate the AD FS farm:

image

Upon successfully validating the farm, select the domain to federate with Azure AD:

image

Allow AD Connect to perform the configuration:

image

Once the configuration has completed, allow AD Connect to start the synchronization:

image

Verifying synchronization service connectivity to Azure Active Directory

image

Creating the Azure Active Directory Synchronization Account

image

Installing Azure AD Connect Health agent for sync

image

Azure AD Connect configuration has now completed:

image

Verify Internal and External AD FS Access

If an existing AD FS farm was used, the DNS records for intranet and internet access should already be configured, so proceed to run both of the Verify federation connectivity tests:

imageimage

Verify Azure AD Connect Configuration for Azure Active Directory

Switching back to the Azure portal’s Azure Active Directory will display the Status as Enabled for Azure AD Connect:

image
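
The federation status of the sign-in domain can also be confirmed from PowerShell; a small check using the MSOnline module, with contoso.com as a placeholder for the verified custom domain:

Connect-MsolService
# A federated domain should report Authentication as Federated
Get-MsolDomain -DomainName contoso.com | Select-Object Name, Status, Authentication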

Verify Azure AD Connect Synchronization Service

Launching the Synchronization Service Manager on the AD Connect server will show that a Full Import, Full Synchronization and Export have run:

image
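
If you want to trigger a synchronization manually instead of waiting for the scheduler, the ADSync module on the AD Connect server can be used; a minimal sketch:

Import-Module ADSync
# Use -PolicyType Initial after configuration changes, or Delta for a regular incremental sync
Start-ADSyncSyncCycle -PolicyType Initial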

Test Office 365 Sign-in AD FS Redirection

Attempting to sign in through https://login.microsoftonline.com will not immediately bring you to the AD FS login page:

image

However, you will be redirected to the AD FS sign-in page once the email address (UPN) is entered as the identity:

image

image

Verify Relying Party Trusts Configuration for O365 in AD FS

Logging onto the AD FS management console on the AD FS server will show that a Microsoft Office 365 Identity Platform Worldwide relying party trust was configured under Relying Party Trusts by the Azure AD Connect wizard:

image

Turn on MFA for O365 Sign-in

If an MFA solution, such as Duo in this example, is deployed, it can be turned on by right-clicking Microsoft Office 365 Identity Platform Worldwide and selecting Edit Access Control Policy…:

image

Click on Use access control policy link:

image

Select Permit everyone and require MFA:

image

image

image
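
The same policy assignment can also be made from PowerShell on the AD FS server; a sketch assuming the built-in access control policy name shown above:

# Apply the built-in access control policy to the Office 365 relying party trust
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform Worldwide" -AccessControlPolicyName "Permit everyone and require MFA"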

MFA will now be required when users sign into Office 365 via the AD FS sign-in portal.

Creating a VNet Peering (Virtual Network Peering) across different subscriptions and Azure Active Directory tenants


I was recently asked to assist with establishing a connection between a client’s Azure tenant and a partner’s tenant, and as the two organizations are separate companies with no ties, their tenants were in different subscriptions. Prior to November 2019, the only option to achieve this was a site-to-site VPN, but Microsoft has since released the ability to configure VNet Peering across subscriptions and tenants, which allows organizations to enjoy all the performance benefits of VNet Peering, such as traversing the Azure backbone network rather than the internet (Virtual network peering across Azure Active Directory tenants: https://azure.microsoft.com/en-us/updates/cross-aad-vnet-peering/).

The configuration outlined in the following document was very straightforward:

Create a virtual network peering - Resource Manager, different subscriptions and Azure Active Directory tenants
https://docs.microsoft.com/en-gb/azure/virtual-network/create-peering-different-subscriptions

However, both the administrator and I became lost when we reached step 21 just after entering the Resource ID:

image

The documentation did not mention what we were supposed to enter or select for the Directory and when we clicked on the dropdown box, no options were displayed:

image

We went through the documentation together but couldn’t figure out what we were supposed to fill in for the Directory, so I ended up opening a support case with Microsoft. The support engineer suggested that we try to use PowerShell to create the VNet Peering, but it did not resolve the issue. It was by chance that I decided to log into the mailbox of the account I used (which had Azure Global Administrator permissions) to see if there was some sort of an invite sent when the administrator added my account as a Network Contributor, and that was when I realized an invitation had been sent and needed to be accepted:

image

Accepting the invitation and attempting to create the VNet Peering now displayed the other organization’s directory in the dropdown box:

image

Selecting the Directory and completing the configuration established the VNet Peering.
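
For reference, the PowerShell route that the support engineer suggested looks roughly like the following; a sketch assuming the Az module, with the subscription IDs, resource group and VNet names as placeholders to replace with your own:

# Sign in to your own tenant and subscription
Connect-AzAccount -TenantId "<your-tenant-id>" -Subscription "<your-subscription-id>"
$vnet = Get-AzVirtualNetwork -Name "<your-vnet>" -ResourceGroupName "<your-resource-group>"
# Peer to the partner VNet using its full resource ID (requires Network Contributor on that VNet)
Add-AzVirtualNetworkPeering -Name "peer-to-partner" -VirtualNetwork $vnet -RemoteVirtualNetworkId "/subscriptions/<partner-subscription-id>/resourceGroups/<partner-rg>/providers/Microsoft.Network/virtualNetworks/<partner-vnet>"
# The partner's administrator then creates the matching peering from their side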

Notes

Notes that I feel are worth mentioning are as follows:

  1. The account you provide to the other tenant’s administrator to be granted the Network Contributor role may not be an account that is regularly used and therefore may not have a mailbox, so ensure that the account you use has one
  2. The account you use isn’t going to be used as a service account, so don’t create a dedicated account for this operation
  3. The guest account from the other tenant that you grant the Network Contributor role can be removed from your Azure AD after the VNet Peering has been established (we tested connectivity after removing the account and the peering still works)
  4. Network Security Groups (NSGs) can only be associated with NICs or subnets in your own tenant, so securing your tenant from the peered tenant’s virtual network will require NSGs to be applied to the resources in your tenant (you cannot apply an NSG to, say, the peered tenant’s VNet)

Hope this helps anyone who may encounter the same problem as I experienced.

Generating a SharePoint Online permissions audit report by site collection


I was recently asked by a client to look for a PowerShell script that would export the permissions for all of the files and folders within each of their site collections so they could perform a thorough audit, and as common as such a request seemed to me, I had much difficulty finding such a script on the internet. Since I did not have any luck finding one that actually worked, I reached out to one of our SharePoint resources for help and he eventually provided one that was able to execute and export the permissions to a CSV file. I am unsure where the script originated, so I’d like to apologize for not crediting the author, but I hope this will help anyone who may be looking for such a script as I was.

The generated report will look as such:

imageimage

The following PowerShell script runs against one site collection at a time, so you will need to run it multiple times against as many site collections as required, as well as grant yourself administrative permissions to each site collection:

image

image

The script only requires two lines to be adjusted accordingly:

image

#region ***Parameters***

$SiteURL="https://contoso.sharepoint.com/"

$ReportFile="C:\temp\SitePermissionRepor.csv"

#endregion

Other sites will require the path to be adjusted as such:

$SiteURL="https://contoso.sharepoint.com/sites/FinanceSite"

The following is the PowerShell script:

image

#Function to Get Permissions Applied on a particular Object, such as: Web, List, Folder or List Item

Function Get-PnPPermissions([Microsoft.SharePoint.Client.SecurableObject]$Object)

{

#Determine the type of the object

Switch($Object.TypedObject.ToString())

    {

"Microsoft.SharePoint.Client.Web"  { $ObjectType = "Site" ; $ObjectURL = $Object.URL; $ObjectTitle = $Object.Title }

"Microsoft.SharePoint.Client.ListItem"

        {

If($Object.FileSystemObjectType -eq "Folder")

            {

$ObjectType = "Folder"

#Get the URL of the Folder

$Folder = Get-PnPProperty -ClientObject $Object -Property Folder

$ObjectTitle = $Object.Folder.Name

$ObjectURL = $("{0}{1}" -f $Web.Url.Replace($Web.ServerRelativeUrl,''),$Object.Folder.ServerRelativeUrl)

            }

Else #File or List Item

            {

#Get the URL of the Object

Get-PnPProperty -ClientObject $Object -Property File, ParentList

If($Object.File.Name -ne $Null)

                {

$ObjectType = "File"

$ObjectTitle = $Object.File.Name

$ObjectURL = $("{0}{1}" -f $Web.Url.Replace($Web.ServerRelativeUrl,''),$Object.File.ServerRelativeUrl)

                }

else

                {

$ObjectType = "List Item"

$ObjectTitle = $Object["Title"]

#Get the URL of the List Item

$DefaultDisplayFormUrl = Get-PnPProperty -ClientObject $Object.ParentList -Property DefaultDisplayFormUrl

$ObjectURL = $("{0}{1}?ID={2}" -f $Web.Url.Replace($Web.ServerRelativeUrl,''), $DefaultDisplayFormUrl,$Object.ID)

                }

            }

        }

Default

        {

$ObjectType = "List or Library"

$ObjectTitle = $Object.Title

#Get the URL of the List or Library

$RootFolder = Get-PnPProperty -ClientObject $Object -Property RootFolder

$ObjectURL = $("{0}{1}" -f $Web.Url.Replace($Web.ServerRelativeUrl,''), $RootFolder.ServerRelativeUrl)

        }

    }

#Get permissions assigned to the object

Get-PnPProperty -ClientObject $Object -Property HasUniqueRoleAssignments, RoleAssignments

#Check if Object has unique permissions

$HasUniquePermissions = $Object.HasUniqueRoleAssignments

#Loop through each permission assigned and extract details

$PermissionCollection = @()

Foreach($RoleAssignment in $Object.RoleAssignments)

    {

#Get the Permission Levels assigned and Member

Get-PnPProperty -ClientObject $RoleAssignment -Property RoleDefinitionBindings, Member

#Get the Principal Type: User, SP Group, AD Group

$PermissionType = $RoleAssignment.Member.PrincipalType

#Get the Permission Levels assigned

$PermissionLevels = $RoleAssignment.RoleDefinitionBindings | Select -ExpandProperty Name

#Remove Limited Access

$PermissionLevels = ($PermissionLevels | Where { $_ -ne "Limited Access"}) -join ","

#Leave Principals with no Permissions

If($PermissionLevels.Length -eq 0) {Continue}

#Get SharePoint group members

If($PermissionType -eq "SharePointGroup")

        {

#Get Group Members

$GroupMembers = Get-PnPGroupMembers -Identity $RoleAssignment.Member.LoginName

#Leave Empty Groups

If($GroupMembers.count -eq 0){Continue}

$GroupUsers = ($GroupMembers | Select -ExpandProperty Title) -join ","

#Add the Data to Object

$Permissions = New-Object PSObject

$Permissions | Add-Member NoteProperty Object($ObjectType)

$Permissions | Add-Member NoteProperty Title($ObjectTitle)

$Permissions | Add-Member NoteProperty URL($ObjectURL)

$Permissions | Add-Member NoteProperty HasUniquePermissions($HasUniquePermissions)

$Permissions | Add-Member NoteProperty Users($GroupUsers)

$Permissions | Add-Member NoteProperty Type($PermissionType)

$Permissions | Add-Member NoteProperty Permissions($PermissionLevels)

$Permissions | Add-Member NoteProperty GrantedThrough("SharePoint Group: $($RoleAssignment.Member.LoginName)")

$PermissionCollection += $Permissions

        }

Else

        {

#Add the Data to Object

$Permissions = New-Object PSObject

$Permissions | Add-Member NoteProperty Object($ObjectType)

$Permissions | Add-Member NoteProperty Title($ObjectTitle)

$Permissions | Add-Member NoteProperty URL($ObjectURL)

$Permissions | Add-Member NoteProperty HasUniquePermissions($HasUniquePermissions)

$Permissions | Add-Member NoteProperty Users($RoleAssignment.Member.Title)

$Permissions | Add-Member NoteProperty Type($PermissionType)

$Permissions | Add-Member NoteProperty Permissions($PermissionLevels)

$Permissions | Add-Member NoteProperty GrantedThrough("Direct Permissions")

$PermissionCollection += $Permissions

        }

    }

#Export Permissions to CSV File

$PermissionCollection | Export-CSV $ReportFile -NoTypeInformation -Append

}

#Function to get sharepoint online site permissions report

Function Generate-PnPSitePermissionRpt()

{

[cmdletbinding()]

Param

    (   

[Parameter(Mandatory=$false)] [String] $SiteURL,

[Parameter(Mandatory=$false)] [String] $ReportFile,

[Parameter(Mandatory=$false)] [switch] $Recursive,

[Parameter(Mandatory=$false)] [switch] $ScanItemLevel,

[Parameter(Mandatory=$false)] [switch] $IncludeInheritedPermissions

    ) 

Try {

#Connect to the Site

Connect-PnPOnline -URL $SiteURL -UseWebLogin

#Get the Web

$Web = Get-PnPWeb

Write-host -f Yellow "Getting Site Collection Administrators..."

#Get Site Collection Administrators

$SiteAdmins = Get-PnPSiteCollectionAdmin

$SiteCollectionAdmins = ($SiteAdmins | Select -ExpandProperty Title) -join ","

#Add the Data to Object

$Permissions = New-Object PSObject

$Permissions | Add-Member NoteProperty Object("Site Collection")

$Permissions | Add-Member NoteProperty Title($Web.Title)

$Permissions | Add-Member NoteProperty URL($Web.URL)

$Permissions | Add-Member NoteProperty HasUniquePermissions("TRUE")

$Permissions | Add-Member NoteProperty Users($SiteCollectionAdmins)

$Permissions | Add-Member NoteProperty Type("Site Collection Administrators")

$Permissions | Add-Member NoteProperty Permissions("Site Owner")

$Permissions | Add-Member NoteProperty GrantedThrough("Direct Permissions")

#Export Permissions to CSV File

$Permissions | Export-CSV $ReportFile -NoTypeInformation

#Function to Get Permissions of All List Items of a given List

Function Get-PnPListItemsPermission([Microsoft.SharePoint.Client.List]$List)

        {

Write-host -f Yellow "`t `t Getting Permissions of List Items in the List:"$List.Title

#Get All Items from List in batches

$ListItems = Get-PnPListItem -List $List -PageSize 500

$ItemCounter = 0

#Loop through each List item

ForEach($ListItem in $ListItems)

            {

#Get Objects with Unique Permissions or Inherited Permissions based on 'IncludeInheritedPermissions' switch

If($IncludeInheritedPermissions)

                {

Get-PnPPermissions -Object $ListItem

                }

Else

                {

#Check if List Item has unique permissions

$HasUniquePermissions = Get-PnPProperty -ClientObject $ListItem -Property HasUniqueRoleAssignments

If($HasUniquePermissions -eq $True)

                    {

#Call the function to generate Permission report

Get-PnPPermissions -Object $ListItem

                    }

                }

$ItemCounter++

Write-Progress -PercentComplete ($ItemCounter / ($List.ItemCount) * 100) -Activity "Processing Items $ItemCounter of $($List.ItemCount)" -Status "Searching Unique Permissions in List Items of '$($List.Title)'"

            }

        }

#Function to Get Permissions of all lists from the given web

Function Get-PnPListPermission([Microsoft.SharePoint.Client.Web]$Web)

        {

#Get All Lists from the web

$Lists = Get-PnPProperty -ClientObject $Web -Property Lists

#Exclude system lists

$ExcludedLists = @("Access Requests","App Packages","appdata","appfiles","Apps in Testing","Cache Profiles","Composed Looks","Content and Structure Reports","Content type publishing error log","Converted Forms",

"Device Channels","Form Templates","fpdatasources","Get started with Apps for Office and SharePoint","List Template Gallery", "Long Running Operation Status","Maintenance Log Library", "Images", "site collection images"

,"Master Docs","Master Page Gallery","MicroFeed","NintexFormXml","Quick Deploy Items","Relationships List","Reusable Content","Reporting Metadata", "Reporting Templates", "Search Config List","Site Assets","Preservation Hold Library",

"Site Pages", "Solution Gallery","Style Library","Suggested Content Browser Locations","Theme Gallery", "TaxonomyHiddenList","User Information List","Web Part Gallery","wfpub","wfsvc","Workflow History","Workflow Tasks", "Pages")

$Counter = 0

#Get all lists from the web  

ForEach($List in $Lists)

            {

#Exclude System Lists

If($List.Hidden -eq $False -and $ExcludedLists -notcontains $List.Title)

                {

$Counter++

Write-Progress -PercentComplete ($Counter / ($Lists.Count) * 100) -Activity "Exporting Permissions from List '$($List.Title)' in $($Web.URL)" -Status "Processing Lists $Counter of $($Lists.Count)"

#Get Item Level Permissions if 'ScanItemLevel' switch present

If($ScanItemLevel)

                    {

#Get List Items Permissions

Get-PnPListItemsPermission -List $List

                    }

#Get Lists with Unique Permissions or Inherited Permissions based on 'IncludeInheritedPermissions' switch

If($IncludeInheritedPermissions)

                    {

Get-PnPPermissions -Object $List

                    }

Else

                    {

#Check if List has unique permissions

$HasUniquePermissions = Get-PnPProperty -ClientObject $List -Property HasUniqueRoleAssignments

If($HasUniquePermissions -eq $True)

                        {

#Call the function to check permissions

Get-PnPPermissions -Object $List

                        }

                    }

                }

            }

        }

#Function to Get Webs's Permissions from given URL

Function Get-PnPWebPermission([Microsoft.SharePoint.Client.Web]$Web)

        {

#Call the function to Get permissions of the web

Write-host -f Yellow "Getting Permissions of the Web: $($Web.URL)..."

Get-PnPPermissions -Object $Web

#Get List Permissions

Write-host -f Yellow "`t Getting Permissions of Lists and Libraries..."

Get-PnPListPermission($Web)

#Recursively get permissions from all sub-webs based on the "Recursive" Switch

If($Recursive)

            {

#Get Subwebs of the Web

$Subwebs = Get-PnPProperty -ClientObject $Web -Property Webs

#Iterate through each subsite in the current web

Foreach ($Subweb in $web.Webs)

                {

#Get Webs with Unique Permissions or Inherited Permissions based on 'IncludeInheritedPermissions' switch

If($IncludeInheritedPermissions)

                    {

Get-PnPWebPermission($Subweb)

                    }

Else

                    {

#Check if the Web has unique permissions

$HasUniquePermissions = Get-PnPProperty -ClientObject $SubWeb -Property HasUniqueRoleAssignments

#Get the Web's Permissions

If($HasUniquePermissions -eq $true)

                        {

#Call the function recursively                           

Get-PnPWebPermission($Subweb)

                        }

                    }

                }

            }

        }

#Call the function with RootWeb to get site collection permissions

Get-PnPWebPermission $Web

Write-host -f Green "`n*** Site Permission Report Generated Successfully!***"

     }

Catch {

write-host -f Red "Error Generating Site Permission Report!" $_.Exception.Message

   }

}

#region ***Parameters***

$SiteURL="https://contoso.sharepoint.com/"

$ReportFile="C:\temp\SitePermissionRepor.csv"

#endregion

#Call the function to generate permission report

#Generate-PnPSitePermissionRpt -SiteURL $SiteURL -ReportFile $ReportFile -Recursive

Generate-PnPSitePermissionRpt -SiteURL $SiteURL -ReportFile $ReportFile -Recursive -ScanItemLevel

Planning and setting up a Site-to-Site VPN connection between an on-premise network and Azure


One of the tasks most administrators will have to perform when adopting Microsoft Azure as their cloud platform is setting up a site-to-site VPN connection between their on-premise infrastructure and an Azure region where they will host their IaaS, PaaS or SaaS services. There are plenty of blog posts, YouTube videos and Microsoft documentation demonstrating the configuration, but most of the environments I come across show that little thought has been put into something as simple as, yet in my view very important as, naming conventions. With that in mind, this post places a bit more emphasis on planning the resources that will be required to set up the VPN connection before proceeding with the configuration.

Azure Resource Preparation

It is possible to create the Azure resources required for a site-to-site VPN connection during the configuration, but making decisions on the fly is prone to mistakes, so it is always best to plan and design the resources before commencing with the configuration.

As a start, if this has not been done, spend some time to determine how resources should be named. Maintaining a standard naming convention will save you a lot of grief in the future and make it much easier to identify resources and/or sort them into lists for exercises such as billing. Microsoft has an official document that provides recommendations for naming and tagging here:

Recommended naming and tagging conventions
https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging

**Note that I don’t proclaim that my naming conventions are the best, but I hope this demonstration provides a useful example for such an exercise.

Resource Groups
Every resource in Azure needs to be placed into a resource group, so the first step is to determine where the resources required for the site-to-site VPN will be created. It is possible to move a resource to another resource group or even another subscription, but putting a bit of thought into how you would like to organize your resources and maintaining a standard will ensure tidiness. For the purpose of this example, all of the resources created for the site-to-site VPN will be placed into a single resource group containing networking items as the environment isn’t very large. It is possible to break up the components into dedicated groups (e.g. Public IPs, Virtual Network Gateways, Virtual Networks).

Resource group: rg-prod-network

Subnets
Subnets are networks you define within a Virtual Network (VNet); they are not created before the Virtual Network itself, but it is best to pre-plan the names and address ranges prior to the configuration.

Subnet

Parameter

Configuration

Name

snet-prod-svr-10.248.1.0-24

Address range (CIDR block)

10.248.1.0/24

The naming convention chosen is to use the snet- prefix followed by the environment (in this case production), the role of the subnet (svr for servers) and then the address range.

Gateway Subnet
The gateway subnet contains the IP addresses that the virtual network gateway virtual machines and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. This subnet is reserved solely for the virtual network gateway so do not deploy anything else into it. The subnet size can be as small as /29 but it is recommended to be /27 or larger as this would accommodate most configurations and not limit additional virtual network gateway connections in the future.

Gateway Subnet

Parameter

Configuration

Name

GatewaySubnet

Address range (CIDR block)

10.248.255.0/27

The Gateway Subnet is specific to the VNet and you are not allowed to provide a custom name, so the required name GatewaySubnet is used.

Virtual Networks
The Azure Virtual Network (also known as a VNet) is a fundamental building block for networking in Azure. These objects define the IP scheme of the resources located within them and interact with other VNets and/or networks through a Virtual Network Gateway. Note that moving a virtual machine between VNets isn’t very straightforward and trying to redesign VNets after a deployment can be a laborious task, so spend some time thinking out the naming convention and IP addressing before creating and using them. For the purpose of this example, we’ll have one VNet that contains all of the production servers.

Virtual network

Parameter

Configuration

Name

vnet-prod-eastus

Location

East US

Address Space

10.248.0.0/16

Subnets

Hosts Subnet: 10.248.1.0/24 (247 addresses)

GatewaySubnet: 10.248.255.0/27 (26 IP addresses)

DNS servers

Domain Controllers

The naming convention chosen is to use the vnet- prefix followed by the environment (in this case production) and then the Azure region (East US).

Public IP Address
Public IP addresses are fairly self-explanatory, as these objects represent the public IP addresses that can be assigned to resources such as a virtual machine, Virtual Network Gateway or application. The public IP address we’ll need for the site-to-site VPN is one for the VPN Gateway.

Public IP Address:

Parameter

Configuration

Name

pip-vgw-EastUS

SKU

Basic

IP address assignment

Dynamic

The naming convention chosen is to use the pip- prefix followed by the Virtual Network Gateway prefix vgw- and then the Azure region (East US).

Virtual Network Gateway
The Virtual Network Gateway is the device that establishes the connection to the VPN device at your on-premise datacenter or office. Note that a single Virtual Network Gateway can establish multiple site-to-site VPN connections, so it is not advisable to include in its name where it is connecting to.

Virtual Network Gateway

Parameter

Configuration

Name

vgw-EastUS

VPN type

Route-based

SKU

VpnGw1

Gateway type

VPN

The naming convention chosen is to use the vgw- prefix followed by the Azure region (East US).

Local Network Gateway
The Local Network Gateway represents the device at your on-premise datacenter or office that establishes the connection to the Virtual Network Gateway at the Azure region. You will need to work with the network engineer to determine what static IP address this device will use to accept the site-to-site VPN connection as well as the address space (subnets) that reside in the network so the VNet will know what traffic to route through this VPN.

Local Network Gateway

Parameter

Configuration

Name

lgw-BDA-EastUS

IP Address

162.x.213.x

Address Space

10.247.0.0/16, 192.168.113.0/24

The naming convention chosen is to use the lgw- prefix followed by the location of the gateway (in this case BDA represents Bermuda) and finally which Azure region (East US) it is connecting to.

Connection (Site-to-Site)
The connection represents the site-to-site connection between the Azure Virtual Network Gateway to the Local Network Gateway.

Connection between Azure Virtual Network Gateway to Local Network Gateway

Parameter

Configuration

Connection Type

Site-to-Site

Name

cn-lgw-BDA-EastUS-to-vgw-EastUS

Shared key (PSK)

<create a unique string>

Protocol

IKEv2

Location

East US

The naming convention chosen is to use the cn- prefix followed by the local network gateway object name, then -to-, and finally the virtual network gateway name to indicate what it is connected to in Azure.

On-Premise Preparation

The network engineer configuring the VPN appliance on-premise will likely ask you for the:

  • Public IP address of the virtual network gateway in Azure
  • Shared key
  • Address space in Azure (if they were not involved with the subnet planning)
  • VPN type

You wouldn’t be able to complete the configuration without asking and obtaining the following information for the on-premise appliance:

  • Public IP address of the local network gateway located on-premise

What I typically do is agree on a shared key, address space and VPN type with the network engineer, proceed to create the virtual network gateway, supply the information to the engineer, then ask for them to provide me with the public IP address of the local network gateway.

Deployment

Resource Group
Begin by creating the resource group or groups that these resources will reside in. For the purpose of this example, we will create one resource group named rg-prod-network for all of the resources.

image

Virtual Networks
Create the virtual network (VNet) that will be used to host the Virtual Network Gateway and resources such as virtual machines.

image

Remove the default IPv4 address space of 10.0.0.0/16 and default subnet of 10.0.0.0/24 if those are not the address space and subnet you will be using:

image

Add the subnet you’ll be using into the VNet configuration:

image

image

Leave DDoS protection at its default and leave the firewall disabled unless one is configured and will be used:

image

Add the designated tags for the resource if any have been planned:

image

Proceed to create the VNet:

image

Once the Virtual Network has been created, proceed to navigate into the resource, then to Subnets and then click on Gateway subnet:

image

Create the Gateway Subnet with the pre-planned subnet:

image

For the purpose of this example, we will only configure the Address range (CIDR block) and leave the rest as default/unconfigured.

You should now see the two subnets configured for the VNet:

image
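
For those who prefer to script the deployment, the same VNet and subnets can be created with the Az PowerShell module; a sketch using the names and address ranges planned above:

New-AzResourceGroup -Name "rg-prod-network" -Location "EastUS"
# Define the server subnet and the gateway subnet
$snet = New-AzVirtualNetworkSubnetConfig -Name "snet-prod-svr-10.248.1.0-24" -AddressPrefix "10.248.1.0/24"
$gwsnet = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.248.255.0/27"
# Create the VNet with both subnets
New-AzVirtualNetwork -Name "vnet-prod-eastus" -ResourceGroupName "rg-prod-network" -Location "EastUS" -AddressPrefix "10.248.0.0/16" -Subnet $snet, $gwsnet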

Public IP Address
It may seem logical to create a public IP address before creating the virtual network gateway, but what I’ve noticed in the past (not sure if this has changed) is that you cannot assign a static public IP address to the gateway, and a pre-created dynamic public IP address cannot be selected during the virtual network gateway configuration (probably because a dynamic public IP address doesn’t actually have an IP assigned to it until it is associated and used). This is the one component that I create during the configuration of the virtual network gateway itself, but the name of the public IP address should still be planned in advance.

Virtual Network Gateway
Proceed to create the virtual network gateway by navigating to the virtual network gateways resource and clicking Add:

image

Create the virtual network gateway with the configuration previously planned:

image
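
The equivalent PowerShell is sketched below using the planned names; the dynamic public IP address is created here as well:

$vnet = Get-AzVirtualNetwork -Name "vnet-prod-eastus" -ResourceGroupName "rg-prod-network"
$gwsnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
# Create the dynamic public IP that the gateway will use
$pip = New-AzPublicIpAddress -Name "pip-vgw-EastUS" -ResourceGroupName "rg-prod-network" -Location "EastUS" -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "vgw-ipconfig" -SubnetId $gwsnet.Id -PublicIpAddressId $pip.Id
# Create the route-based VPN gateway (this step can take 20+ minutes)
New-AzVirtualNetworkGateway -Name "vgw-EastUS" -ResourceGroupName "rg-prod-network" -Location "EastUS" -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1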

The creation of the gateway takes a while (upwards of 20 minutes or more), so do not expect it to complete within a few minutes.

image

You’ll notice that the public IP address gets created quickly but there is no IP address assigned:

image

This is because the public IP address is configured for the virtual network gateway and the gateway deployment hasn’t completed yet. Note how the Assignment configuration is set to Dynamic as well.

image

Once the virtual network gateway deployment has completed, obtain the public IP address that has been assigned to it, then reach out to the network engineer configuring the local network gateway and provide them with the public IP address.

image

Also ensure that you have obtained the Public IP address of the local network gateway located on-premise.

Local Network Gateway
While waiting for the virtual network gateway deployment to complete, you can continue with creating the local network gateway by navigating to the local network gateways resource and clicking Add:

image

Create the local network gateway with the configuration previously planned:

image

The deployment, unlike the virtual network gateway, shouldn’t take too long to complete:

image
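
A scripted equivalent for the local network gateway, with the on-premise public IP address as a placeholder:

New-AzLocalNetworkGateway -Name "lgw-BDA-EastUS" -ResourceGroupName "rg-prod-network" -Location "EastUS" -GatewayIpAddress "<on-premise-public-ip>" -AddressPrefix "10.247.0.0/16", "192.168.113.0/24"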

Connection (Site-to-Site)
Once you have confirmed with the network engineer that the on-premise VPN appliance is configured, you can proceed to connect the Azure Virtual Network Gateway to the Local Network Gateway. Navigate to the virtual network gateway object, select Connections and then click the Add button:

image

Create the virtual network gateway connection to the local network gateway with the configuration previously planned:

image
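
The connection can also be created and checked from PowerShell; a sketch with the shared key as a placeholder:

$vgw = Get-AzVirtualNetworkGateway -Name "vgw-EastUS" -ResourceGroupName "rg-prod-network"
$lgw = Get-AzLocalNetworkGateway -Name "lgw-BDA-EastUS" -ResourceGroupName "rg-prod-network"
# Create the IPsec (IKEv2) site-to-site connection
New-AzVirtualNetworkGatewayConnection -Name "cn-lgw-BDA-EastUS-to-vgw-EastUS" -ResourceGroupName "rg-prod-network" -Location "EastUS" -VirtualNetworkGateway1 $vgw -LocalNetworkGateway2 $lgw -ConnectionType IPsec -ConnectionProtocol IKEv2 -SharedKey "<shared-key>"
# The ConnectionStatus property should eventually report Connected
Get-AzVirtualNetworkGatewayConnection -Name "cn-lgw-BDA-EastUS-to-vgw-EastUS" -ResourceGroupName "rg-prod-network"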

The creation of the connection doesn’t take long and the initial connection Status will be labeled as Unknown:

image

Assuming there are no problems with the configuration, the Status will change to Connected:

image

image

**Note that one of the things I’ve noticed in the past is that if you do not define an address space for the local network gateway, the connection status will remain Unknown indefinitely, so ensure that it has been configured.

image

You can proceed to configure additional site-to-site connections to other gateways if there are other datacenters or offices that need to connect into Azure.

image

Disabling Teams File Sharing and Disabling OneDrive for Office 365


Two of the most common questions I’ve been asked when engaged with a financial or government organization for an Office 365 deployment involving Microsoft Teams and OneDrive are whether it is possible to disable file sharing within Teams and whether it is possible to disable OneDrive. What complicates this is that Teams integrates with SharePoint Online and is therefore very different from what administrators are used to when working with the older Skype for Business, Lync Server and Office Communications Server. OneDrive is also integrated with SharePoint Online. Given that this appears to be asked frequently, and since I had reached out to Microsoft support for an official answer, I thought I’d write this short blog post about:

  1. Disabling Teams File Sharing (1 method)
  2. Disabling OneDrive (2 methods)

Disabling Teams File Sharing

Microsoft Teams uses SharePoint Online for file transfers and storage, so removing the SharePoint license from users prevents them from sharing files. This obviously has the consequence of removing SharePoint Online functionality altogether, so it may not be viable if SharePoint Online is needed. The following is an example of removing the feature for a user who is licensed for E1 or E3:

Open the license properties of the account for the user:

image

Scroll all the way to the bottom of the licenses until you see the Apps section, then click on the downward caret to expand the list:

image

Locate the item SharePoint Online (Plan 2) and uncheck it:

imageimage

Disabling SharePoint will also disable the user’s ability to use SharePoint Online.

Disabling OneDrive

There are two ways to disable OneDrive. The first option is to repeat the same license removal procedure described above for Teams.

The second option, which is a bit more complicated, is to remove the ability to create OneDrive sites and to remove any existing sites through the use of PowerShell.

Begin by removing the ability to create a site for everyone. Navigate from the admin center and click on SharePoint admin center:

image

Click on User Profiles:

image

Click on Manage User Permissions:

image

Remove the permissions currently configured that allow users to create My Sites:

image

The above will prevent new users from creating a MySite, which is the storage repository for OneDrive.

For users who have already used OneDrive and therefore created a MySite repository, we’ll need to use PowerShell to remove their MySite.

Connect to SharePoint Online with PowerShell via the following commands in this document: https://docs.microsoft.com/en-us/powershell/sharepoint/sharepoint-online/connect-sharepoint-online?view=sharepoint-ps

Remove the MySite page with PowerShell via the following commands in this document: https://docs.microsoft.com/en-us/powershell/module/sharepoint-online/remove-sposite?view=sharepoint-ps

The command above will require you to enter the URL of the MySite page of the user who has OneDrive access. To determine the URL, navigate to the user’s account properties in the Microsoft 365 admin center, open the OneDrive menu and then click on Create link to files to generate the URL to this user account’s MySite page:

image

The action will create a URL for you to copy and paste into the command to remove the MySite page:

image

Remove-SPOSite -Identity https://44-my.sharepoint.com/personal/tluk -NoWait

The second option is a bit of a manual process, but if there are a lot of user accounts it would be worthwhile to create a script that traverses each account and removes the MySite rather than doing so manually from the GUI, as sketched below.
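
As a starting point, the following is a sketch of such a script using the SharePoint Online management shell; the admin URL is a placeholder and the filter matches personal (OneDrive) site collections:

Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
# Enumerate all OneDrive (personal) site collections and remove them
$personalSites = Get-SPOSite -IncludePersonalSite $true -Limit All -Filter "Url -like '-my.sharepoint.com/personal/'"
foreach ($site in $personalSites) {
    Remove-SPOSite -Identity $site.Url -NoWait
}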


Event ID 3001 Error constantly logged on Citrix Cloud Connectors after FortiOS upgrade to 6.2.3 causing virtual desktop connectivity issues


Problem

You’ve just recently upgraded the FortiOS of a FortiGate 600D to version 6.2.3 and began to experience connectivity issues to a Citrix Virtual Apps and Desktops 1909 environment where users are unable to connect to desktops and receive the following error:

Cannot start desktop “Desktop Name”.

image

Desktops in Citrix Studio also show that the VDA agents suddenly become unregistered and later registered again, but regardless of their state, brokered sessions fail the majority of the time.

Errors on Citrix Cloud Connector Servers

Logging onto the Citrix Cloud Connectors reveals that the following event is constantly logged every 5 to 7 minutes:

LogName: Application

Source: Citrix Remote Broker Provider

EventID: 3001

Level: Error

User: NETWORK SERVICE

HA Mode Checking Start - component Broker Proxy has reported a failure with reason = Received: HAModeException - No WebSocket channels are available. (Target url: contoso.xendesktop.net/Citrix/XaXdProxy/)

image

image

LogName: Application

Source: Citrix Remote Broker Provider

EventID: 3001

Level: Error

User: NETWORK SERVICE

HA Mode Checking Start - component XmlServicesPlugin has reported a failure with reason = The underlying connection was closed: An unexpected error occurred on a receive.(Target Url: https://contoso.xendesktop.net/scripts/wpnbr.dll)

image

Running the Cloud Connector Connectivity Check utility from: https://support.citrix.com/article/CTX260337

image

Will show inconsistent results where various URLs will fail at different times:

image

Performing a Wireshark capture on the Citrix Cloud Connectors will reveal that there are a lot of connection resets between the Citrix Cloud and the Cloud Connectors.

A short trace of 229 packets using the filter ip.addr eq 20.41.61.15 and tcp.analysis.flags reveals that 66 packets are TCP retransmissions, equating to almost 29%, along with 12 TCP resets coming from the connector.

**Note that the 20.41.61.15 IP resolves to the URL that the Citrix Cloud Connector is having issues connecting to.

image

Errors on Citrix StoreFront Servers

Logging onto the Citrix StoreFront servers will reveal the following events logged repeatedly:

image

LogName: Citrix Delivery Services

Source: Citrix Store Service

EventID: 4011

Level: Information

User: N/A

The Citrix XML Service at address citrixcloud1.contoso.com:80 has passed the background health check and has been restored to the list of active services.

image

LogName: Citrix Delivery Services

Source: Citrix Store Service

EventID: 0

Level: Error

User: N/A

The Citrix servers sent HTTP headers indicating that an error occurred: 500 Internal Server Error. This message was reported from the XML Service at address http://citrixcloud2.contoso.com/scripts/wpnbr.dll [NFuseProtocol.TRequestAddress]. The specified Citrix XML Service could not be contacted and has been temporarily removed from the list of active services.

image

**The above error will cycle through all of the Citrix Cloud Connectors.

LogName: Citrix Delivery Services

Source: Citrix Store Service

EventID: 4003

Level: Error

User: N/A

All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.

image

LogName: Citrix Delivery Services

Source: Citrix Store Service

EventID: 28

Level: Warning

User: N/A

Failed to launch the resource 'Cloud.Workspace $S32-61' using the Citrix XML Service at address 'http://citrixcloud1.contoso.com/scripts/wpnbr.dll'. All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.

com.citrix.wing.SourceUnavailableException, PublicAPI, Version=3.12.0.0, Culture=neutral, PublicKeyToken=null

All the Citrix XML Services configured for farm Cloud failed to respond to this XML Service transaction.

at com.citrix.wing.core.mpssourceimpl.MPSFarmFacade.GetAddress(Context ctxt, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.GetAddress(Context env, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.LaunchRemoted(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.mpssourceimpl.MPSLaunchImpl.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.applyaccessprefs.AAPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.clientproxyprovider.CPPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.connectionroutingprovider.CRPLaunch.LaunchInternal(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams, Boolean useAlternateAddress)

at com.citrix.wing.core.connectionroutingprovider.CRPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at com.citrix.wing.core.bandwidthcontrolprovider.BCPLaunch.Launch(Dictionary`2 parameters, Context env, AppLaunchParams appLaunchParams)

at Citrix.DeliveryServices.ResourcesCommon.Wing.WingAdaptors.OverrideIcaFileLaunch.Launch(Dictionary`2 launchParams, Context env, AppLaunchParams appLaunchParams)

at Citrix.DeliveryServices.ResourcesCommon.Wing.WingAdaptors.LaunchUtilities.IcaLaunch(IRequestWrapper request, Resource resource, LaunchSettings launchSettings, String retryKey)

com.citrix.wing.core.xmlclient.types.WireException, Private, Version=3.12.0.0, Culture=neutral, PublicKeyToken=null

HttpErrorPacket(500,Internal Server Error)

at com.citrix.wing.core.xmlclient.transactions.TransactionTransport.handleHttpErrorPacket(Int32 httpErrorStatus, String httpReasonPhrase)

at com.citrix.wing.core.xmlclient.transactions.CtxTransactionTransport.receiveTransportHeaders()

at com.citrix.wing.core.xmlclient.transactions.CtxTransactionTransport.receiveResponsePacketImpl(XmlMarshall marshaller)

at com.citrix.wing.core.xmlclient.transactions.ParsedTransaction.sendAndReceiveXmlMessage(XmlMessage request, AccessToken accessToken)

at com.citrix.wing.core.xmlclient.transactions.nfuse.NFuseProtocolTransaction.SendAndReceiveSingleNFuseMessage[TRequest,TResponse](TRequest request, AccessToken accessToken)

at com.citrix.wing.core.xmlclient.transactions.nfuse.AddressTransaction.TransactImpl()

at com.citrix.wing.core.xmlclient.transactions.ParsedTransaction.Transact()

at com.citrix.wing.core.mpssourceimpl.MPSFarmFacade.GetAddress(Context ctxt, String appName, String deviceId, String clientName, Boolean alternate, MPSAddressingType requestedAddressType, String friendlyName, String hostId, String hostIdType, String sessionId, NameValuePair[] cookies, ClientType clientType, String retryKey, LaunchOverride launchOverride, Nullable`1 isPrelaunch, Nullable`1 disableAutoLogoff, Nullable`1 tenantId, String anonymousUserId)

image

Errors on VDAs (Virtual Desktop Agents / VDIs)

Logging directly onto the VDAs will reveal many warnings and errors related to the Citrix Cloud Connector connectivity:

Log Name: Application
Source: Citrix Desktop Service

EventID: 1014

Level: Warning

The Citrix Desktop Service lost contact with the Citrix Desktop Delivery Controller Service on server ‘citrixcloud1.contoso.com’. The service will now attempt to register again.

image

Citrix Cloud Connectors

Reviewing the Cloud Connector connectivity via the Citrix portal will show the cloud connectors with warnings at times and green at other times.

image

Running a Health Check will take longer than expected and, while it completes, the status of the connector may or may not indicate the last checked date.

image

Citrix Cloud Backend Logs

Opening a ticket with Citrix Support and having the engineer review the backend Citrix Cloud connections will reveal an abnormal number of disconnects. The following was the report we received:

13k events related to Connected/Disconnected/ConnectingFailed in the past 24 hours

038041d1-acac-4903-b88a-817b312f2a1c = citrixcloud2.contoso.com 2270 events disconnected

0812a411-754e-4b03-a6cf-382764a63a6 = citrixcloud3.contoso.com 1782 events disconnected

5ebc62a9-a015-492a-81dd-ceb649fda8f3 citrixcloud1.contoso.com 2508 for disconnected

image

Solution

This issue took quite a bit of time to resolve, as the FortiOS upgrade to 6.2.3 was completed 2 weeks before the virtual desktop connectivity issues began, so the firewall was the last place I thought the problem would be. After eliminating every possibility that there was something wrong with the Citrix environment, I asked the network engineer to open a ticket with Fortinet to see if they could perform more in-depth tracing of the packets sent and received between the firewall and the Citrix Cloud. To our surprise, the Fortinet engineer who finally called us back immediately indicated that we might be experiencing a bug in FortiOS 6.2.3 that could cause such an issue. The following are the messages we received from the support engineer:

I informed you that, as you have SSO in your config, you could be very well hitting the known issue for internal servers due session being deleted. We will need to run the flow trace at time of disconnect, so that we can confirm the behavior.

We got on a call with the engineer and were able to determine that it was indeed a bug in this version of FortiOS. The recommended remediation was to upgrade to either a special build of 6.2.3 that addressed this issue or to 6.2.4. We ended up upgrading to the 6.2.3 build 8283 (GA), which resolved our issue.

image

For those who are interested, the following is the case summary the engineer provided:

1) We discussed the citrix applications were hanging for a prolong period.

2) FGT is currently running the firmware version 6.2.3 and the Citrix server 20.41.61.15 is accessed on the port 443

3) We checked the session list for one of the machines reporting the issue 192.168.5.71 session info: proto=6 proto_state=01 duration=269 expire=268 timeout=300 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=5 origin-shaper= reply-shaper=high-priority prio=2 guarantee 0Bps max 134217728Bps traffic 5525Bps drops 0B per_ip_shaper= class_id=0 shaping_policy_id=6 ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/255 user=MROGERS auth_server=BCAUTH state=log may_dirty npu rs f00 acct-ext statistic(bytes/packets/allow_err): org=30896/83/1 reply=35645/59/1 tuples=2 tx speed(Bps/kbps): 114/0 rx speed(Bps/kbps): 132/1 orgin->sink: org pre->post, reply pre->post dev=11->25/25->11 gwy=198.182.170.1/192.168.5.71 hook=post dir=org act=snat 192.168.5.71:58467->20.41.61.15:443(198.182.170.253:58467) hook=pre dir=reply act=dnat 20.41.61.15:443->198.182.170.253:58467(192.168.5.71:58467) pos/(before,after) 0/(0,0), 0/(0,0) src_mac=00:50:56:b0:c1:53 misc=0 policy_id=213 auth_info=0 chk_client_info=0 vd=0 serial=0a35b6a8 tos=ff/ff app_list=0 app=0 url_cat=0 rpdb_link_id = ff000001 ngfwid=n/a dd_type=0 dd_mode=0 npu_state=0x000c00 npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=154/140, ipid=140/154, vlan=0x0000/0x0000 vlifid=140/154, vtag_in=0x0000/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=7/0

4) In the diagnose firewall auth list we could see the source 192.168.1.57 BC-CC-600D-FW01 # diagnose firewall auth list 192.168.5.71, MROGERS type: fsso, id: 0, duration: 559, idled: 0 server: BCAUTH packets: in 2254 out 2034, bytes: in 933560 out 856864 group_id: 4 33554905 33554989 33555163 33555200 33555204 33555203 33555198 33554433 group_name: ALL_BC_AD_USERS CN=OPERATIONS,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=SECURITY DEPT,OU=SECURITY,OU=OFFICEADMIN,DC=CONTOSO,DC=COM CN=WIRELESSACCESS,OU=SECURITY GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=TESTALLEMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALL EMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALLEMPLOYEES,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=ALLSUPPORTSTAFF,OU=DISTRIBUTION GROUPS,OU=GROUPS,DC=CONTOSO,DC=COM CN=Domain Users,CN=Users,DC=CONTOSO,DC=COM

5) Further in the debug flow we could see the msg="no session matched" 2020-06-06 18:49:56 id=20085 trace_id=55 func=print_pkt_detail line=5501 msg="vd-root:0 received a packet(proto=6, 192.168.5.71:57775->20.41.61.15:443) from port12. flag [.], seq 1161501577, ack 1787291322, win 255" 2020-06-06 18:49:56 id=20085 trace_id=55 func=vf_ip_route_input_common line=2581 msg="Match policy routing id=2133000193: to 20.41.61.15 via ifindex-25" 2020-06-06 18:49:56 id=20085 trace_id=55 func=vf_ip_route_input_common line=2596 msg="find a route: flag=04000000 gw-198.182.170.1 via port18" 2020-06-06 18:49:56 id=20085 trace_id=55 func=fw_forward_dirty_handler line=385 msg="no session matched"

6) As discussed, we have a know issue of RDP and other applications freezing due to no session match error Bug Id ==> 0605950

7) It seems that when the authed session is changed it clears the non-auth session for the same Ip.

8) The issue is resolved in the newer firmware version 6.2.4.

9) You did not want to upgrade to 6.2.4 so we do have a special build 8283 that resolves this issue. Please upgrade the firmware to this attached build and let us know.

Configuring a Citrix ADC / NetScaler to provide AD FS (Active Directory Federation Services) WAP (Web Application Proxy) service

$
0
0

One of the clients I recently worked with was trying to move away from using their Citrix ADC / NetScaler appliance for authenticating Office 365 services because the federation between the appliance and their Azure AD prevented them from configuring hybrid Azure AD join, as both Microsoft and Citrix could not confirm whether it would work (https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-federated-domains). A few other issues, such as the NetScaler themes being incompatible with the Teams authentication window and password change, led to the decision to move to AD FS (Active Directory Federation Services). As most administrators may know, configuring a redundant AD FS infrastructure requires at least 4 servers (2 x internal AD FS farm servers and 2 x WAP servers) and, while virtual machines aren’t very expensive to host in Azure, the client wanted to reduce the number of servers required. With this requirement, the recommendation was made to provision 2 x internal AD FS farm servers and 1 x AD FS WAP server, and to configure the Citrix ADC / NetScaler to provide the AD FS WAP service as a virtual server / content switching server. This reduces the server count by 1 and leverages the Citrix ADC’s capabilities while still having a full Windows AD FS infrastructure. The following is what the topology looks like:

image

Before I begin, note that I am not following the configuration in:

Guide to Deploying NetScaler as an Active Directory Federation Services Proxy

https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/guide-to-deploying-netscaler-as-an-active-directory-federation-services-proxy.pdf

… because that configuration performs authentication at the proxy and may present compatibility issues. The WAP configured on the Citrix ADC / NetScaler here will instead act as an AD FS WAP with passthrough configured.

Prerequisites

This post will assume that load balancing has already been set up for the internal AD FS farm servers. If it has not been completed then please have a look at my previous blog post:

Configure Citrix ADC to load balance Microsoft Active Directory Federation Services (AD FS) on Windows Server 2019

http://terenceluk.blogspot.com/2020/05/configure-citrix-adc-to-load-balance.html

Create a Service Group

Begin by creating a Service Group to represent the ADFS service provided by the internal AD FS servers. Note that you cannot reuse the one that was created for load balancing the internal AD FS servers as shown in my previous blog post, because the one we’ll be creating will have the Protocol configured as SSL instead of SSL_Bridge:

image

image

With the new service group created, click on the No Service Group Member line to add the internal AD FS servers:

image

Select the server objects representing the internal AD FS servers and specify the Port as 443:

image

With the service group members added, click on OK to proceed:

image

Scroll to the Settings section and click on the pencil icon to edit the properties:

Configure the settings as such:

SureConnect – Disabled
SurgeProtection – Enabled
UseProxyPort – Enabled
DownStateFlush – Enabled
Use Client IP – Disabled
Client Keep-alive – Disabled
TCP Buffering – Disabled
HTTP Compression – Enabled
Header: X-MS-Forwarded-Client-IP

image

image

Click on the No Service Group to Monitor Binding to add the previously created monitor for the ADFS servers:

image

Select the previously created monitor (as outlined in my previous post) and click on the Bind button to bind the monitor to the service group:

image

The Monitors section should now display 1 Service Group to Monitor Binding:

image

Click Done to complete the configuration for the service group:

image

Create a Virtual Server

Proceed to create a new virtual server:

image

Provide a name for the Virtual Server, configure the protocol as SSL, and specify the IP Address Type as Non Addressable, as we’ll be creating a Content Switching Server to reference this Load Balancing Virtual Server:

image

With the Load Balancing Virtual Server created, click on No Load Balancing Virtual Server ServiceGroup Binding to add the previously created Service Group:

image

Click on the Bind button to complete the configuration:

image

Proceed by selecting No Server Certificate:

image

Select the certificate that will be used for the AD FS WAP service and click Bind:

image

Complete the creation of the virtual server by clicking on Done:

image

Create Content Switching Policies

Navigate to Traffic Management> Content Switching> Policies and click Add:

image

image

Configure the policy as such:

Name: Provide a name that conforms with your naming convention (e.g. CSPolicy_ADFS)

Action: <blank>

LogAction: <blank>

Domain: Expression

Expression:

HTTP.REQ.HOSTNAME.SET_TEXT_MODE(IGNORECASE).EQ("fs.contoso.com") && HTTP.REQ.URL.SET_TEXT_MODE(IGNORECASE).CONTAINS("/adfs")

**Replace fs.contoso.com with the AD FS URL and verify that the quotes are not changed.

image

Proceed and create a second policy for the AD FS metadata as such:

Name: Provide a name that conforms with your naming convention (e.g. CSPolicy_ADFS_Metadata)

Action: <blank>

LogAction: <blank>

Domain: Expression

Expression:

HTTP.REQ.HOSTNAME.SET_TEXT_MODE(IGNORECASE).EQ("fs.contoso.com") && HTTP.REQ.URL.SET_TEXT_MODE(IGNORECASE).CONTAINS("/FederationMetadata")

**Replace fs.contoso.com with the AD FS URL.

image

The following two Content Switching Policies should be created:

image

Create a Content Switching Server

With the policies in place, proceed to create a Content Switching server:

image

image

Configure the Content Switch Virtual Server as such:

Name: Provide a name that conforms with your naming convention (e.g. CSVS_fs.contoso.com_NSWAP)

Protocol: SSL

Target: NONE

Persistent Type: <blank>

Persistent Mask: 255.255.255.255

IPv6 Persist Mask Length: 128

Timeout: 2

IP Address: An IP address for the Content Switch Virtual Server

Port: 443

image

Click on the No Content Switching Policy Bound line item:

image

Select the first policy that was created (non-metadata one) and configure the settings as such:

Priority: 100

Goto Expression: END

Invoke LabelType: None

Target Load Balancing Virtual Server: Select the one that was created earlier

image

Add the second policy that was created (metadata one) and configure the settings as such:

Priority: 110

Goto Expression: END

Invoke LabelType: None

Target Load Balancing Virtual Server: Select the one that was created earlier

image

The following 2 policies should be bound to the Content Switching Server:

image

image

Select the Certificate option under the Advanced Settings:

image

Select the No Server Certificate line item:

image

Select the certificate that will be used for the AD FS WAP service and click Bind:

image

image

Complete the creation of the Content Switching server and verify that the STATE is labeled as UP:

image

Create Rewrite Actions

Navigate to AppExpert> Rewrite> Actions and create a new action:

image

Create a new action with the following configuration:

Name: Provide a name that conforms with your naming convention (e.g. rw_act_adfs_proxyheader)

Type: INSERT_HTTP_HEADER

HeaderName: X-MS-Proxy

Expression:

"NETSCALER"

image

Create a second rewrite action with the following configuration:

Name: Provide a name that conforms with your naming convention (e.g. rw_act_adfs_mex)

Type: REPLACE

Expression:

"/adfs/services/trust/proxymex" + HTTP.REQ.URL.SET_TEXT_MODE(IGNORECASE).PATH_AND_QUERY.STRIP_START_CHARS("/adfs/services/trust/mex").HTTP_URL_SAFE

image

The following two Rewrite Actions should be created:

image

Create Rewrite Policies

Navigate to AppExpert> Rewrite> Policies and create a new policy:

image

Create a new policy with the following configuration:

Name: Provide a name that conforms with your naming convention (e.g. rw_pol_adfs_ProxyHeader)

Action: rw_act_adfs_proxyheader

Log Action: <blank>

Undefined-Result Action*: -Global-undefined-result-action-

Expression:

HTTP.REQ.URL.TO_LOWER.STARTSWITH("/adfs")

image

Create a second rewrite policy with the following configuration:

Name: Provide a name that conforms with your naming convention (e.g. rw_pol_adfs_mex)

Action: rw_act_adfs_mex

Log Action: <blank>

Undefined-Result Action*: -Global-undefined-result-action-

Expression:

HTTP.REQ.URL.TO_LOWER.STARTSWITH("/adfs/services/trust/mex")

image

The following two polices should be created:

image

Assign the Rewrite Policies to the Load Balancing Virtual Server

With the Rewrite Policies created, open the configuration of the Load Balancing Virtual Server that was created earlier:

image

Select Policies under Advanced Settings:

image

Click on the To add, please click on the + icon line item:

image

Assign a policy with the following configuration:

Choose Policy: Rewrite

Choose Type: Request

image

Select the ProxyHeader policy and configure the following:

Priority: 100

Goto Expression: NEXT

Invoke LabelType: None

image

Bind the mex policy with the following configuration:

Priority: 110

Goto Expression: END

Invoke LabelType: None

image

The following policies should be bound:

image

image

Proceed to test the Citrix ADC / NetScaler Content Switching server AD FS WAP via either the assigned IP address or the public IP that is NAT-ed to it.

HTTP/1.1 Service Unavailable

If tests to the Citrix ADC AD FS WAP display the error HTTP/1.1 Service Unavailable:

image

This is because an SNI binding needs to be configured on the AD FS servers. Proceed to run the following command from a command prompt to list the certificate used for the AD FS service:

netsh http show sslcert

Note the following certificate properties:

Hostname:port : fs.contoso.com:443

Certificate Hash : cc429f179e41c0d8a3bc74f92977d3bcb2f549e8

Application ID : {5d89a20c-beab-4389-9447-324788eb944a}

Certificate Store Name : MY

image

The command to configure the SNI binding is as follows:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<the certificate hash> appid=<the certificate appID> certstorename=<the certificate datastore>

For this environment, the command would look as such:

netsh http add sslcert ipport=0.0.0.0:443 certhash=cc429f179e41c0d8a3bc74f92977d3bcb2f549e8 appid={5d89a20c-beab-4389-9447-324788eb944a} certstorename=MY

image

Repeat the same procedure on all of the AD FS servers.
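If there are several AD FS servers in the farm, the binding can also be applied remotely from one machine. The following is a minimal sketch (the server names are placeholders, the hash and application ID are the example values from above) that assumes PowerShell remoting is enabled on the AD FS servers:

# Placeholder server names - replace with the actual AD FS farm members
$adfsServers = 'ADFS01', 'ADFS02'
$certHash = 'cc429f179e41c0d8a3bc74f92977d3bcb2f549e8'
$appId = '{5d89a20c-beab-4389-9447-324788eb944a}'

Invoke-Command -ComputerName $adfsServers -ScriptBlock {
    $hash = $using:certHash
    $app = $using:appId
    # Add the SNI fallback binding on 0.0.0.0:443 and display the result
    netsh http add sslcert ipport=0.0.0.0:443 certhash=$hash appid=$app certstorename=MY
    netsh http show sslcert ipport=0.0.0.0:443
}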

Load Balancing Windows AD FS WAP and Citrix ADC WAP

Note that my original intention was to configure this Content Switching server as the backup of the Load Balancing Virtual Server that provides an SSL_Bridge connection to the Windows AD FS WAP server, but I realized that it is not possible to do so. What I ended up doing was configuring an Azure Traffic Manager to direct traffic between the two services. I will write another blog post to demonstrate the configuration next week.

Configuring an Azure Traffic manager to load balance a Windows Server AD FS WAP and Citrix NetScaler AD FS WAP

$
0
0

To continue from my previous post:

Configuring a Citrix ADC / NetScaler to provide AD FS (Active Directory Federation Services) WAP (Web Application Proxy) service
http://terenceluk.blogspot.com/2020/06/configuring-citrix-adc-netscaler-to.html

I had completed the configuration of a virtual server that directed external inbound AD FS traffic via SSL_Bridge to the Windows Server 2019 AD FS WAP server, and another content switching server that responded to external inbound AD FS traffic via SSL (performing the functionality of a Windows Server 2019 AD FS WAP server). The next step was to load balance the two, but there did not seem to be a way to configure a Content Switching Virtual Server as a Backup server to a Load Balancing Virtual Server, which is what would have been needed because the intention was to have the Windows Server 2019 WAP server provide AD FS sign-in services and only fail over to the Citrix ADC WAP in the event of a failure. Another challenge was that the AD FS health probe required port 80 to be opened for the Azure Traffic Manager to monitor the health of the AD FS service. Allowing the Traffic Manager to monitor the health of the AD FS servers from the WAP through to the internal farm has been a topic of discussion in other blog posts, but for this configuration the Citrix ADC already monitors the AD FS servers, so all I really needed was to allow the Azure Traffic Manager to monitor the WAP services presented by the Citrix ADC.

With the above in place, the following is what the topology looks like:

image

Note the following design features:

  1. There are two public IP addresses: 1 represents the Windows Server 2019 AD FS WAP and the other represents the Content Switching Server WAP on the Citrix ADC.
  2. The Azure Traffic Manager has the routing method configured as priority, which allows all traffic to be directed to the Windows Server 2019 WAP and only to the Content Switching Server WAP on the Citrix ADC when the Windows Server 2019 WAP is unavailable.
  3. Health probes are configured on the Citrix ADC to monitor the Windows AD FS servers’ health via port 80 /adfs/probe.
  4. The idpinitiatedsignon.htm page is enabled in the AD FS farm so it is publicly accessible via https://fs.contoso.com/adfs/ls/idpinitiatedsignon.htm (a sketch for enabling and testing this page follows this list).
  5. The Azure Traffic Manager will deem the AD FS portal as being unavailable if the idpinitiatedsignon.htm page is not reachable.
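Since the Idp-Initiated Sign on page is disabled by default on AD FS 2016 and later, the following is a minimal sketch for enabling it on the AD FS farm and then confirming that the page the Traffic Manager probes is reachable (the URL is the one used in this environment):

# Run on the primary AD FS server - enables the IdP-initiated sign-on page
Set-AdfsProperties -EnableIdpInitiatedSignonPage $true

# Confirm the probe path responds through the WAP / Citrix ADC
Invoke-WebRequest -Uri 'https://fs.contoso.com/adfs/ls/idpinitiatedsignon.htm' -UseBasicParsing | Select-Object StatusCode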

Configuration of the Azure Traffic Manager

The following screenshots are the configuration parameters used for the Azure Traffic Manager:

Overview:

image

Configuration:
Routing method: Priority
DNS time to live (TTL): 60
Protocol: HTTPS
Port: 443
Path: /adfs/ls/idpinitiatedsignon.htm
Custom header settings: host:fs.contoso.com
Expected Status Code Ranges (default: 200): <blank> (defaults to 200)
Probing interval: 10
Tolerated number of failures: 3
Probe timeout: 5

image

Most of the configuration parameters are self-explanatory, but the one I want to note is the DNS time to live (TTL), which has the following description:

image

A more in depth explanation can be found here: https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-faqs#what-is-dns-ttl-and-how-does-it-impact-my-users

What is DNS TTL and how does it impact my users?

When a DNS query lands on Traffic Manager, it sets a value in the response called time-to-live (TTL). This value, whose unit is in seconds, indicates to DNS resolvers downstream on how long to cache this response. While DNS resolvers are not guaranteed to cache this result, caching it enables them to respond to any subsequent queries off the cache instead of going to Traffic Manager DNS servers. This impacts the responses as follows:

  • a higher TTL reduces the number of queries that land on the Traffic Manager DNS servers, which can reduce the cost for a customer since number of queries served is a billable usage.
  • a higher TTL can potentially reduce the time it takes to do a DNS lookup.
  • a higher TTL also means that your data does not reflect the latest health information that Traffic Manager has obtained through its probing agents.

As described above, a higher TTL value reduces cost and reduces the time it takes to do a DNS lookup, but at the cost of the results not reflecting the latest health of the AD FS WAP server. I’ve found that a value of 60 seconds was acceptable for the period it takes to fail over to the secondary Citrix ADC WAP, but please be cognizant of the increase in cost if this is a heavily accessed service.
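To observe which endpoint the Traffic Manager is currently handing out, along with the TTL remaining on the cached answer, a quick DNS check can be run from any client (assuming fs.contoso.com is a CNAME pointing at the Traffic Manager profile):

# Shows the CNAME chain returned for the AD FS name and the TTL on each record
Resolve-DnsName -Name fs.contoso.com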

Endpoints:

image

image  image

Hope this helps anyone who may be looking to configure their AD FS external access similar to what I’ve put in place. I am also always open to suggestions and corrections if there are any issues with this configuration.

Installation of VMware.PowerCLI via PowerShell fails with: "Install-PackageProvider : No match was found for the specified search criteria for the provider..."

$
0
0

Problem

You’re attempting to install VMware PowerCLI with the Install-Module -Name VMware.PowerCLI cmdlet from within Powershell but notice that it fails with:

PS C:\scripts\vCheck-vSphere-master> Install-Module -Name VMware.PowerCLI

NuGet provider is required to continue

PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet

provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or

'C:\Users\tluk\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running

'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import

the NuGet provider now?

[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y

WARNING: Unable to download from URI 'https://go.microsoft.com/fwlink/?LinkID=627338&clcid=0x409' to ''.

WARNING: Unable to download the list of available providers. Check your internet connection.

PackageManagement\Install-PackageProvider : No match was found for the specified search criteria for the provider

'NuGet'. The package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package

has the tags.

At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:7405 char:21

+ ... $null = PackageManagement\Install-PackageProvider -Name $script:N ...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : InvalidArgument: (Microsoft.Power...PackageProvider:InstallPackageProvider) [Install-Pac

kageProvider], Exception

+ FullyQualifiedErrorId : NoMatchFoundForProvider,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackageProvider

PackageManagement\Import-PackageProvider : No match was found for the specified search criteria and provider name

'NuGet'. Try 'Get-PackageProvider -ListAvailable' to see if the provider exists on the system.

At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:7411 char:21

+ ... $null = PackageManagement\Import-PackageProvider -Name $script:Nu ...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : InvalidData: (NuGet:String) [Import-PackageProvider], Exception

+ FullyQualifiedErrorId : NoMatchFoundForCriteria,Microsoft.PowerShell.PackageManagement.Cmdlets.ImportPackageProvider

PS C:\scripts\vCheck-vSphere-master>

image

Solution

If you’re in a hurry and need to get the module installed, a quick workaround is to configure TLS 1.2 for the PowerShell session with the following command:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

The NuGet provider will install once the above is executed:

image

PS C:\scripts\vCheck-vSphere-master> [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

PS C:\scripts\vCheck-vSphere-master> Install-Module -Name VMware.PowerCLI

NuGet provider is required to continue

PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet

provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or

'C:\Users\tluk\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running

'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import

the NuGet provider now?

[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y

Untrusted repository

You are installing the modules from an untrusted repository. If you trust this repository, change its

InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from

'PSGallery'?

[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y

PS C:\scripts\vCheck-vSphere-master>

To permanently correct the issue, open the registry and navigate to the following path for the .NET Framework under the 32-bit registry view (WOW6432Node):

Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319

image

Add the SchUseStrongCrypto value with the following PowerShell cmdlet:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord

image

image

Repeat the same for the 64-bit .NET Framework registry view:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord

image
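To confirm that both values were written, the keys can be read back as follows:

# Read back the SchUseStrongCrypto value for both registry views
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto'
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto'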

Attempting to execute New-CsOnlineSession returns: "The remote server returned an error: (502) Bad Gateway."

$
0
0

Problem

Attempting to use New-CsOnlineSession to create a persistent connection to Microsoft Skype for Business Online for Microsoft Teams management fails with the following error:

PS C:\WINDOWS\system32> $Session = New-CsOnlineSession -UserName tluk@contoso.com

Get-CsOnlinePowerShellEndpoint : The remote server returned an error: (502) Bad Gateway.

At C:\Program Files\Common Files\Skype for Business

Online\Modules\SkypeOnlineConnector\SkypeOnlineConnectorStartup.psm1:149 char:26

+ ... targetUri = Get-CsOnlinePowerShellEndpoint -TargetDomain $adminDomain ...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [Get-CsOnlinePowerShellEndpoint], WebException

+ FullyQualifiedErrorId : System.Net.WebException,Microsoft.Rtc.Management.OnlineConnector.GetPowerShellEndpointCm

dletPS C:\WINDOWS\system32>

image

Solution

One of the reasons why this type of error would be thrown is if there is an existing on-premise Skype for Business deployment for the SIP domain that was provided to the -Username parameter and there are DNS records that point the domain to the on-premise deployment. This would also be the case if Skype for Business is configured in a hybrid environment, because the external DNS records would also point to the on-premises infrastructure.

There are 2 ways to get around this problem:

Option #1 – Log on with an onmicrosoft.com account

Every Office 365 tenant has the onmicrosoft.com domain configured and has a global admin account created with this domain upon tenant creation. Using an account from this domain will get around the error:

$Session = New-CsOnlineSession -UserName admin@ccsbm.onmicrosoft.com

Option #2 – Use the OverrideAdminDomain parameter to direct the session to connect to the correct Office 365 Tenant

Another way around this is to use the OverrideAdminDomain switch to direct the session to connect to the correct Office 365 tenant as shown in the following cmdlet:

$Session = New-CsOnlineSession -UserName <AdministratorUPN> -OverrideAdminDomain <TenantName>.onmicrosoft.com

The following is an example:

$Session = New-CsOnlineSession -UserName tluk@ccs.bm -OverrideAdminDomain ccsbm.onmicrosoft.com
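In either case, the session still needs to be imported before the online cmdlets become available in the local session:

# Import the remote Skype for Business Online session into the local session
Import-PSSession -Session $Session -AllowClobber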

Turning off "Additional security verification" for Office 365 users

$
0
0

One of the most common questions I’ve been asked about Office 365 over the past few months was whether it was possible to turn off the mandatory Additional security verification prompt that users are presented with, along with a 14-day grace period, as shown in the following screenshot:

image

The short answer is yes, but I would advise putting some thought into it because the way to disable it is to disable the security defaults globally in the Azure portal, as shown in the following screenshots:

image

image

image

You will be prompted with 4 checkboxes that must be selected in order to save the setting as disabled:

image

For more information about the security defaults review the following Microsoft documentation:

What are security defaults?
https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-security-defaults

image

What are security defaults?

  • 05/13/2020

Managing security can be difficult with common identity-related attacks like password spray, replay, and phishing becoming more and more popular. Security defaults make it easier to help protect your organization from these attacks with preconfigured security settings:

· Requiring all users to register for Azure Multi-Factor Authentication.

· Requiring administrators to perform multi-factor authentication.

· Blocking legacy authentication protocols.

· Requiring users to perform multi-factor authentication when necessary.

· Protecting privileged activities like access to the Azure portal.

Azure AD Connect no longer synchronizes and displays the error AzureADConnect has stopped working

$
0
0

Problem

You’ve noticed that the Azure AD Connect server in the on-premise environment is no longer synchronizing with Azure AD and attempting to launch the synchronization engine displays the following error:

AzureADConnect has stopped working

Problem Event Name: CLR20r3

Problem Signature01: AzureADConnect.exe

Attempting to run an AD Sync via PowerShell will indicate that the Synchronization service isn’t running:

Windows PowerShell

Copyright (C) 2014 Microsoft Corporation. All rights reserved.

PS C:\Windows\system32> Start-ADSyncSyncCycle -PolicyType Delta

Start-ADSyncSyncCycle : Synchronization Service is not running.

At line:1 char:1

+ Start-ADSyncSyncCycle -PolicyType Delta

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : WriteError: (Microsoft.Ident...ADSyncSyncCycle:StartADSyncSyncCycle) [Start-ADSyncSyncCy

cle], InvalidOperationException

+ FullyQualifiedErrorId : Synchronization Service is not running.,Microsoft.IdentityManagement.PowerShell.Cmdlet.S

tartADSyncSyncCycle

PS C:\Windows\system32>

image

Solution

One of the reasons why the above errors would be thrown is if the AD Connect server has run out of drive space on the C drive. Increasing the C drive space or cleaning up unneeded files to create space will correct the issue:

image
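A quick way to check the remaining space on the C drive, and to confirm the synchronization service is back up after space has been freed, is along these lines (ADSync is the default service name used by Azure AD Connect):

# Check free space on the C drive in GB
Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DeviceID='C:'" | Select-Object DeviceID, @{Name='FreeGB';Expression={[math]::Round($_.FreeSpace/1GB,2)}}

# Start the synchronization service and retry a delta sync
Start-Service -Name ADSync
Start-ADSyncSyncCycle -PolicyType Delta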

Redirecting on-premise Exchange Server 2019 OWA and ECP authentication to AD FS

$
0
0

I’ve recently worked with a client who had no plans to move away from their on-premise Exchange Server 2019 due to regulations they had to abide by and was interested in leveraging their on-premise AD FS (Active Directory Federation Services) for clients to use claims-based authentication to connect to Outlook on the web (OWA) and the Exchange admin center (EAC). Those who have attempted to integrate 2FA solutions such as Duo, SecurEnvoy, or RSA know how cumbersome the process for configuring OWA or ECP can be and the additional administrative overhead it adds to tasks such as patching the Exchange servers. Using AD FS effectively allows the client to require the 2FA provided by AD FS without affecting the Exchange servers. This post serves to demonstrate the process of configuring AD FS claims-based authentication with Outlook on the web and the Exchange admin center.

Server Requirements

Configuring AD FS claims-based authentication for Outlook on the web and the EAC in Exchange Server involves the following additional servers:

  • A Windows Server 2012 or later domain controller (Active Directory Domain Services server role).
  • A Windows Server 2012 or later AD FS server (Active Directory Federation Services server role). Windows Server 2012 uses AD FS 2.1, and Windows Server 2012 R2 uses AD FS 3.0.
  • You need to be a member of the Domain Admins, Enterprise Admins, or local Administrators security group to install AD FS, and to create the required relying party trusts and claim rules on the AD FS server.
  • Optionally, a Windows Server 2012 R2 or later Web Application Proxy server (Remote Access server role, Web Application Proxy role service).
    • Web Application Proxy is a reverse proxy server for web applications that are inside the corporate network. Web Application Proxy allows users on many devices to access published web applications from outside the corporate network.
    • Although Web Application Proxy is typically recommended when AD FS is accessible to external clients, offline access in Outlook on the web isn't supported when using AD FS authentication through Web Application Proxy.
    • You need to deploy and configure the AD FS server before you configure the Web Application Proxy server, and you can't install Web Application Proxy on the same server where AD FS is installed.

This post will not include instructions for deploying an AD FS farm as I have already written posts in the past and they can found here:

Deploying a redundant Active Directory Federation Services (ADFS) farm on Windows Server 2019
http://terenceluk.blogspot.com/2020/04/deploying-redundant-active-directory.html

Deploying a redundant Active Directory Federation Services (ADFS) Web Application Proxy servers on Windows Server 2019
http://terenceluk.blogspot.com/2020/04/deploying-redundant-active-directory_21.html

Before I begin with the steps, note that the official Microsoft deployment documentation can be found here:

Use AD FS claims-based authentication with Outlook on the web
https://docs.microsoft.com/en-us/exchange/clients/outlook-on-the-web/ad-fs-claims-based-auth?view=exchserver-2019

Step #1 - Export ADFS Signing Certificate and Import to Exchange Servers

The AD FS server uses a token-signing certificate for encrypted communication and authentication between the AD FS server, Active Directory domain controllers, and Exchange servers. This self-signed certificate is automatically copied over to the Web Application Proxy server during the installation, but is required to be manually imported into the Trusted Root Certificate store on all of the Exchange servers in the organization.

To export the certificate, log onto the AD FS server, launch the AD FS Management Console, navigate to AD FS> Service> Certificate:

image

Select the certificate listed under Token-signing, right click and select on View Certificate…:

image

The general properties of the certificate will be displayed:

image

Proceed and navigate to the Details tab and click on the Copy to File… button:

image

Go through the Certificate Export Wizard to export the certificate:

image

Select DER encoded X.509 (.CER) format and proceed with the export:

imageimageimageimageimage

Proceed to import the certificate into the Trusted Root Certificate store on all of the Exchange servers in the organization
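The export and import can also be scripted. The following is a minimal sketch (the file path is a placeholder) that exports the primary token-signing certificate on the AD FS server and then imports the .cer file into the Trusted Root store on an Exchange server:

# On the AD FS server - export the primary token-signing certificate (C:\Temp is a placeholder path)
$tokenSigning = (Get-AdfsCertificate -CertificateType Token-Signing | Where-Object { $_.IsPrimary }).Certificate
Export-Certificate -Cert $tokenSigning -FilePath 'C:\Temp\adfs-token-signing.cer'

# On each Exchange server - import the exported certificate into the Trusted Root Certification Authorities store
Import-Certificate -FilePath 'C:\Temp\adfs-token-signing.cer' -CertStoreLocation 'Cert:\LocalMachine\Root'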

Also note that this self-signed token-signing certificate is only valid for one year and while the AD FS server will automatically renew and replace its own self-signed certificates before they expire, you will need to re-export and import the certificate back onto the Exchange servers. If the certificate expiration period is deemed to be too short for the organization then the period can be extended by using the following PowerShell cmdlet on the AD FS server:

Set-AdfsProperties -CertificateDuration <Days> (the default value is 365)

Step #2 – Create a relying party trust and custom claim rules in AD FS for OWA (Outlook on the web)

With the AD FS prerequisites configured, proceed to create the relying party trust for OWA (Outlook on the web) on the AD FS server by launching the AD FS Management console:

image

Navigate to AD FS> Relying Party Trusts and click on Add Relying Party Trusts…:

image

Select Claims aware and click on Start:

image

Change the default Import data about the relying party published online or on a local network to Enter data about the relying party manually:

imageimage

Enter the Display name and Notes for the Outlook on the web relying party:

Outlook on the web

This is a trust for https://mail.contoso.com/owa

image

Leave the Configure Certificate window as unconfigured and click on Next:

image

Check the Enable support for the WS-Federation Passive protocol checkbox:

image

Enter in the URL of the Outlook on the web address (e.g. https://mail.contoso.com/owa/):

image

Add the URL of the Outlook on the web address for the Relying party trust identifier:

image

The AD FS server in this environment already had Duo MFA configured so this example will select Permit everyone and require MFA:

image

On the Ready to Add Trust page, review the settings, and then click Next to save the relying party trust information:

image

Leave the Configure claims issuance policy for this application checked and click Close:

image

Step #3 – Create custom claim rules in AD FS for Outlook on the web

The following two claim rules will be created for the Outlook on the web relying party trust:

  • Active Directory user SID
  • Active Directory UPN

In the Edit Claim Issuance Policy for Outlook on the web window, click on Add Rule…:

image

Change the Claim rule template to Send Claims Using a Custom Rule and then click Next:

image

Enter the following configuration for the parameters and then click Finish.

Claim rule name:

ActiveDirectoryUserSID

Custom rule:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"), query = ";objectSID;{0}", param = c.Value);

image

Click Add Rule… again to create a new rule:

image

Change the Claim rule template to Send Claims Using a Custom Rule and then click Next:

image

Enter the following configuration for the parameters and then click Finish.

Claim rule name:

ActiveDirectoryUPN

Custom rule:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"), query = ";userPrincipalName;{0}", param = c.Value);

image

Click OK to close the window after the following 2 rules are created:

image

You should see the new Outlook on the web Relying Party Trust created:

image

Step #4 – Configure AD FS Web Application Proxy Server to publish Outlook on the Web

If users are going to access OWA from the internet then it will be necessary to publish the newly created Outlook on the web relying party trust on the AD FS Web Application Proxy. Begin by launching the Remote Access Management Console on the AD FS WAP server and navigating to the Published Web Applications context then click on Publish:

image

Click Next:

image

Select Active Directory Federation Services (AD FS) and click Next:

image

Select Web and MSOFBA for the type of preauthentication to perform for this application and click Next:

image

The configured relying party trusts will be listed, select the Outlook on the web relying party trust that we created and click Next:

image

Fill in a name for the web application (e.g. Outlook on the web), then enter the URLs for OWA access, select the certificate used for securing traffic, and then click Next:

image

The PowerShell cmdlet used to create the web application will be displayed. Proceed by clicking on Publish:

image

The wizard will display the results of publishing the application when completed:

image

The newly published Outlook on the web application should now be displayed:

image

Step #5 – Create a relying party trust and custom claim rules in AD FS for EAC (Exchange admin center)

The steps for creating a relying party trust and custom claim rules in AD FS for the EAC are exactly the same as for OWA, and the only change is the URL (e.g. https://mail.contoso.com/ecp instead of https://mail.contoso.com/owa). Repeat the same steps as above for the EAC.

Step #6 – Configure AD FS Web Application Proxy Server to publish Exchange admin center

The steps to configure the AD FS Web Application Proxy Server to publish the Exchange admin center are exactly the same as for Outlook on the web, and the only change is the URL. Repeat the steps above for the EAC.

image

Step #7 – Configure the Exchange organization to use AD FS authentication

There is no way to configure the Exchange organization to use AD FS authentication within the GUI so begin by launching the Exchange Management Shell from one of the Exchange servers.

The cmdlet to configure the Exchange organization to use AD FS for authentication is as follows:

Set-OrganizationConfig -AdfsIssuer https://<FederationServiceName>/adfs/ls/ -AdfsAudienceUris "<OotwURL>","<EACURL>" -AdfsSignCertificateThumbprint "<Thumbprint>"

Note that the following are the parameters for the cmdlet above:

  • AD FS URL (e.g. https://adfs.contoso.com/adfs/ls/)
  • Outlook on the web URL (https://mail.contoso.com/owa/)
  • EAC URL (e.g. https://mail.contoso.com/ecp/)
  • AD FS token-signing certificate thumbprint, which is the ADFS Signing certificate imported in Step #1

The first 3 parameters should be self-explanatory, while the last parameter is the thumbprint value of the AD FS token-signing certificate imported in Step #1. Run the following cmdlet on the Exchange server to find the thumbprint value of the imported AD FS token signing certificate:

Set-Location Cert:\LocalMachine\Root; Get-ChildItem | Sort-Object Subject

image

You can also confirm this thumbprint value on the AD FS server by opening an elevated Windows PowerShell window, running Import-Module ADFS, and then running the Get-AdfsCertificate -CertificateType Token-Signing cmdlet:

image

It would be a good idea to execute Get-OrganizationConfig before the Set-OrganizationConfig to verify that there isn’t any existing ADFS configuration for the organization, as shown in the following screenshot:

Get-OrganizationConfig | FL *adfs*

image

Having verified the Exchange organization does not have any ADFS configuration, execute the cmdlet with the environment’s settings:

Set-OrganizationConfig -AdfsIssuer https://adfs.contoso.com/adfs/ls/ -AdfsAudienceUris "https://mail.contoso.com/owa/","https://mail.contoso.com/ecp/" -AdfsSignCertificateThumbprint "75FC78640C9C7F1BDB2B0ADD7E7E2FB8A9159160"

image

Execute Get-OrganizationConfig | FL *adfs* to verify that the configuration has been applied:

image

Step #8 – Configure AD FS authentication on the Outlook on the web and EAC virtual directories

The last step required so that Outlook on the web and the EAC redirect authentication to ADFS is to configure AD FS authentication as the only available authentication method and disable all other authentication methods. The following are additional requirements and recommendations:

  • The EAC virtual directory needs to be configured before the Outlook on the web virtual directory.
  • AD FS authentication can be configured only on Internet-facing Exchange servers that clients use to connect to Outlook on the web and the EAC.
  • By default, only Basic and Forms authentication are enabled for the Outlook on the web and EAC virtual directories.

The following is the cmdlet to configure the EAC or Outlook on the web virtual directory to only accept AD FS authentication:

Set-EcpVirtualDirectory -Identity <VirtualDirectoryIdentity> -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -WindowsAuthentication $false

**Note that the documentation indicates we should include the following switch:

-OAuthAuthentication $false

However, I’ve noticed that Exchange 2019 does not accept it:

[PS] C:\>Set-EcpVirtualDirectory -Identity "OSMIUM\ecp (Default Web Site)" -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -OAuthAuthentication $false -WindowsAuthentication $false

A parameter cannot be found that matches parameter name 'OAuthAuthentication'.

+ CategoryInfo : InvalidArgument: (:) [Set-EcpVirtualDirectory], ParameterBindingException

+ FullyQualifiedErrorId : NamedParameterNotFound,Set-EcpVirtualDirectory

+ PSComputerName : osmium.bma.local

[PS] C:\>

image

I’ve been able to successfully configure claims-based authentication without setting this parameter, so it should be safe to omit.

It would be a good idea to review what the current configuration is for the two directories with the following Get-EcpVirtualDirectory and Get-OwaVirtualDirectory cmdlets prior to performing the changes:

Get-EcpVirtualDirectory -Identity "svr-mail-01\ecp (Default Web Site)" | FL AdfsAuthentication,BasicAuthentication,DigestAuthentication,FormsAuthentication,OAuthAuthentication,WindowsAuthentication

Get-OwaVirtualDirectory -Identity "svr-mail-01\owa (Default Web Site)" | FL AdfsAuthentication,BasicAuthentication,DigestAuthentication,FormsAuthentication,OAuthAuthentication,WindowsAuthentication

image

Get-EcpVirtualDirectory -Identity "svr-mail-01\ecp (Default Web Site)" | FL *auth*

Get-OwaVirtualDirectory -Identity "svr-mail-01\owa (Default Web Site)" | FL *auth*

image

Repeat the same for other Exchange servers if there are more than one in the environment.

Once the current settings have been reviewed, execute the cmdlet to configure the EAC:

Set-EcpVirtualDirectory -Identity "OSMIUM\ecp (Default Web Site)" -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -WindowsAuthentication $false

image

Execute the cmdlet to configure the OWA:

Set-OwaVirtualDirectory -Identity "OSMIUM\owa (Default Web Site)" -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -OAuthAuthentication $false -WindowsAuthentication $false

Once the directories are set, use the get cmdlets to review the settings:

image

If there is more than 1 server in the environment, use the following cmdlets to set all of the servers:

Get-EcpVirtualDirectory | Set-EcpVirtualDirectory -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -WindowsAuthentication $false

Get-OwaVirtualDirectory | Set-OwaVirtualDirectory -AdfsAuthentication $true -BasicAuthentication $false -DigestAuthentication $false -FormsAuthentication $false -OAuthAuthentication $false -WindowsAuthentication $false

Step #9 – Restart IIS on the Exchange servers

Complete the changes by restarting IIS on the Exchange servers by executing the following cmdlets:

net stop was /y

An output similar to the following will be displayed:

[PS] C:\>net stop was /y

The following services are dependent on the Windows Process Activation Service service.

Stopping the Windows Process Activation Service service will also stop these services.

World Wide Web Publishing Service

Net.Tcp Listener Adapter

Net.Pipe Listener Adapter

Net.Msmq Listener Adapter

The World Wide Web Publishing Service service is stopping.....................................

The World Wide Web Publishing Service service was stopped successfully.

The Net.Tcp Listener Adapter service is stopping.

The Net.Tcp Listener Adapter service was stopped successfully.

The Net.Pipe Listener Adapter service is stopping.

The Net.Pipe Listener Adapter service was stopped successfully.

The Net.Msmq Listener Adapter service is stopping.

The Net.Msmq Listener Adapter service was stopped successfully.

The Windows Process Activation Service service is stopping.

The Windows Process Activation Service service was stopped successfully.

[PS] C:\>

image

Lastly, execute the following to start the World Wide Web Publishing Service:

net start w3svc

image

Authentication for both EAC and OWA should now be directed to the AD FS portal.

A Microsoft Windows Network Policy Server (RADIUS) logs the System Error: "An Access-Request message was received from RADIUS client x.x.x.x with a Message-Authenticator attribute that is not valid."

$
0
0

Problem

You notice that wireless clients authenticating with a wireless controller that uses a Microsoft Windows Network Policy Server fail to authenticate and the following error is logged:

Log Name: System
Source: NPS
EventID: 18
Level: Error

An Access-Request message was received from RADIUS client 172.23.192.16 with a Message-Authenticator attribute that is not valid.

image

image
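
Before making any changes, it can help to confirm how frequently the error is being logged. A quick sketch that queries the System log for the NPS source shown above:

# Return the most recent NPS Event ID 18 entries from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'NPS'; Id = 18 } -MaxEvents 10 |
    Format-Table TimeCreated, Message -Wrap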

Solution

This error is typically caused by a shared secret mismatch between what is configured for the RADIUS client and what is configured on the NPS server. Correcting the mismatch remediates the issue, and the following event will be logged on the NPS server:

Log Name: System
Source: NPS
EventID: 4400
Level: Information

A LDAP connection with domain controller DC02.contoso.com for domain CONTOSO is established.

image

image
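
If you prefer to correct the shared secret on the NPS side from PowerShell rather than the console, the NPS module can be used. A hedged sketch, assuming the module is available on the server; the client name and secret below are placeholders:

# List the configured RADIUS clients to identify the one with the mismatched secret
Get-NpsRadiusClient | Format-Table Name, Address, Enabled -AutoSize

# Update the shared secret for the affected client to match the wireless controller
Set-NpsRadiusClient -Name 'WirelessController01' -SharedSecret 'NewSharedSecret'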

Configuring Zoom with ADFS throws the error: "An error occurred during an attempt to read the federation metadata. Verify that the specified URL or host name is a valid federation metadata endpoint."


Problem

I was recently asked to configure Zoom with ADFS as per the following documentation:

Configuring Zoom With ADFS
https://support.zoom.us/hc/en-us/articles/202374287-Configuring-Zoom-With-ADFS

… and was not able to complete the process because the following error was displayed when importing the federation metadata:

An error occurred during an attempt to read the federation metadata. Verify that the specified URL or host name is a valid federation metadata endpoint.

image

Searching the internet earlier this year did not return any posts that helped resolve this issue, and opening a ticket with Zoom had us waiting months before we received a reply (escalated by the client’s account manager). To give Zoom some credit, the support engineer who eventually reached out to us was extremely quick to respond and very helpful. I am unsure whether searching for this error will surface the KB article he forwarded to us, so this post serves to help anyone who runs into the same problem.

Solution

One of the possible reasons the import of the federation metadata fails on the ADFS server is when TLS 1.2 is not enabled. The server in this example was a fresh install of Windows Server 2019, and navigating to the following registry key showed that TLS 1.2 was not explicitly enabled:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols

image
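
A quick way to check this from PowerShell is to list the keys under the Protocols path; if no TLS 1.2 key is present, the operating system defaults are in effect. A simple sketch:

# List the protocol keys that have been explicitly configured under SCHANNEL
Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'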

To correct this issue, simply follow the steps provided in this Zoom article:

How to enable TLS 1.2 on an ADFS Server (Windows Server 2012 R2)
https://support.zoom.us/hc/en-us/articles/360033739531-How-to-enable-TLS-1-2-on-an-ADFS-Server-Windows-Server-2012-R2-

The following PowerShell cmdlets create the keys required to enable TLS 1.2:

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force | Out-Null

image

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'Enabled' -value '1' -PropertyType 'DWord' -Force | Out-Null

image

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null

image

The following PowerShell cmdlet enables strong cryptography (SchUseStrongCrypto) for the .NET Framework:

New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '1' -PropertyType 'DWord' -Force | Out-Null

image

The following PowerShell cmdlets disable SSL 3.0:

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null

image

Here is the list of cmdlets demonstrated above:

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'Enabled' -value '1' -PropertyType 'DWord' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '1' -PropertyType 'DWord' -Force | Out-Null

New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null

New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
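
Once the keys have been created, the values can be read back to confirm they were written as expected. A quick verification sketch; note that SCHANNEL protocol changes generally do not take effect until the server has been restarted:

# Confirm TLS 1.2 is enabled for the client
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' | Select-Object Enabled, DisabledByDefault

# Confirm SSL 3.0 is disabled for the client
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client' | Select-Object Enabled, DisabledByDefault

# Confirm strong cryptography is enabled for the .NET Framework
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' | Select-Object SchUseStrongCrypto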

With TLS 1.2 enabled, strong cryptography enabled, and SSL 3.0 disabled, the import of the federation metadata should now succeed.

Configuring Zoom with ADFS as an IdP


I was recently asked to configure Zoom with ADFS and found certain parts of the following documentation provided by Zoom:

Configuring Zoom With ADFS
https://support.zoom.us/hc/en-us/articles/202374287-Configuring-Zoom-With-ADFS

… a bit confusing, so this post provides a clear example of the settings required in the portal.

To configure Zoom to use ADFS as an IdP, you’ll need to log into the administration console, navigate to Admin > Advanced > Single Sign-On, and click on Enable Single Sign-On:

image

Once in the portal, edit the SAML settings as shown in the screenshot below:

image

The two configuration settings I felt weren’t clearly explained in the instructions were:

  • Identity provider certificate
  • Issuer (IDP Entity ID)

What confused me about the Identity provider certificate was whether the BEGIN/END certificate tags should be copied and pasted in as well, and the answer is no:

image
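
If you are unsure where to obtain the certificate text, one approach is to export the ADFS token-signing certificate directly from PowerShell on an ADFS server, which produces the Base64 content without any BEGIN/END tags. A minimal sketch, assuming the ADFS module is loaded on the server:

# Retrieve the primary token-signing certificate from the ADFS farm
$signingCert = Get-AdfsCertificate -CertificateType Token-Signing | Where-Object { $_.IsPrimary }

# Output the certificate as a single Base64 string that can be pasted into
# the Zoom "Identity provider certificate" field
[System.Convert]::ToBase64String($signingCert.Certificate.RawData)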

As for the Issuer (IDP Entity ID), ensure that you use the ADFS URL:

image
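
The value to use here can be confirmed on the ADFS side, since it should typically match the federation service identifier configured on the farm. A quick sketch, again assuming the ADFS module is available:

# Display the federation service identifier and host name configured on the ADFS farm,
# which is what the Issuer (IDP Entity ID) value should line up with
Get-AdfsProperties | Select-Object Identifier, HostName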

The instructions for configuring the ADFS servers were fairly straightforward so I won’t include them in this post. If you experience any issues with logging in via the ADFS portal, you can turn on logging in the Zoom administrative portal by enabling Save SAML response logs on user sign-in:

image

With the above enabled, a new tab will be available to review sign-in attempts:

image

Export a certificate that does not allow the private key to be exported from a Windows Server


I recently had a client who inadvertently created and completed a certificate request on a Windows Server without marking the private key as exportable, and they needed to export the certificate with its private key so it could be placed on another server.

image

While this is likely a common issue, I haven’t personally come across it, so I wasn’t familiar with the options available. A quick search on the internet returned a few different methods, and the first one I tried triggered the antivirus to block the utility. The second method I found ended up working, so this post serves to demonstrate the process.
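
Before downloading anything, it can be worth confirming which certificate in the Local Computer store you are dealing with and whether it has a private key attached. A quick sketch:

# List certificates in the Local Computer personal store along with whether
# a private key is present for each one
Get-ChildItem Cert:\LocalMachine\My | Format-Table Subject, Thumbprint, NotAfter, HasPrivateKey -AutoSize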

Begin by downloading the following Exporting Non-Exportable RSA Keys utility from GitHub:

https://github.com/luipir/ExportNotExportablePrivateKey/tree/fca10bf0807fe8160502d448d22537e499c3c8d5

image

As well as the well-known Sysinternals PsTools suite:

https://docs.microsoft.com/en-us/sysinternals/downloads/pstools

image

Unpack the exportrsa utility and PsTools into directories of your choice on the server holding the certificate to be exported, launch a command prompt as an administrator, change to the directory where PsTools was unpacked, and execute:

PsExec64.exe -s -i cmd

image

Change to the directory where exportrsa was unpacked and execute exportrsa.exe from the exportrsa\release folder:

image

The utility will cycle through the certificates found in the Local Computer certificate store and allow you to export each one with its private key:

image

The exported PFX will be placed in the exportrsa.exe directory:

image
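
Once the PFX has been copied to the destination server, it can be imported with its private key from PowerShell. A minimal sketch, assuming a password was set on the PFX; the file path below is a placeholder:

# Prompt for the PFX password and import the certificate into the
# Local Computer personal store on the destination server
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Import-PfxCertificate -FilePath 'C:\Temp\exported-cert.pfx' -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword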