Channel: Terence Luk

Deleting stuck VMware Horizon View VMs with viewdbchk.cmd


Problem

You have a pool of desktops that are currently in a state where you are unable to use the VMware Horizon 7 administrator console to remove them:

Deleting (missing)

Maintenance mode (missing)

Solution

Older versions of View required the administrator to manually remove references to the VMs from the ADAM database hosted on the View Connection Servers, the entries in the Composer SQL database, the virtual machine in vCenter (if it exists), as well as the Active Directory computer object. Newer versions such as 7 provide a tool named viewdbchk.cmd that automates this process.

This command can be found at the following directory:

C:\Program Files\VMware\VMware View\Server\tools\bin

Simply executing this command will provide you with the switches available:

C:\Program Files\VMware\VMware View\Server\tools\bin>viewdbchk.cmd

No command specified

ViewDbChk --findDesktop --desktopName <desktop name> [--verbose]

Find a desktop pool by name.

ViewDbChk --enableDesktop --desktopName <desktop name> [--verbose]

Enable a desktop pool.

ViewDbChk --disableDesktop --desktopName <desktop name> [--verbose]

Disable a desktop pool.

ViewDbChk --findMachine --desktopName <desktop name> --machineName <machine name> [--verbose]

Find a machine by name.

ViewDbChk --removeMachine --machineName <machine name> [--desktopName <desktop name>] [--force] [--noErrorCheck] [--verbose]

Remove a machine from a desktop pool.

ViewDbChk --scanMachines [--desktopName <desktop name>] [--limit <maximum deletes>] [--force] [--verbose]

Scan all machines for problems. The scan can optionally be limited to a specific desktop pool.

ViewDbChk --help [--commandName] [--verbose]

Display help for all commands, or a specific command.

C:\Program Files\VMware\VMware View\Server\tools\bin>

Using automated scan option to detect problematic virtual machines

The recommended first step is to use the scanMachines switch to allow the tool to automatically detect any problematic machines in the pools. Note that the tool needs the pool to be disabled in order to scan it, and it is best to disable provisioning as well to prevent new machines from being provisioned when one is deleted.

Here is a sample of the process with a limit of 200 machines specified:

viewdbchk.cmd --scanMachines --limit 200

C:\Program Files\VMware\VMware View\Server\tools\bin>viewdbchk.cmd --scanMachines --limit 200

Checking for machines with errors...

Connecting to vCenter "https://contukvc01.contoso.com:443/sdk". This may take some time...

Connecting to vCenter "https://contdrvc01.contoso.com:443/sdk". This may take some time...

Found 35 machine(s) with errors in 4 desktop pool(s)

Processing desktop pool "cont_disaster_recovery"

Desktop Pool Name: cont_Disaster_Recovery

Desktop Pool Type: STICKY_TYPE

VM Folder: /UK DR/vm/VM View/cont_Disaster_Recovery/

Desktop Pool Disabled: true

Desktop Pool Provisioning Enabled: false

Checking connectivity...

Machine "contDR-008" has errors

VM Name: contDR-008

Creation Date: 9/24/18 9:21:43 PM BST

MOID: vm-10521

VM Folder: /UK DR/vm/VM View/cont_Disaster_Recovery/contDR-008

VM State: MAINTENANCE

VM Missing In vCenter: true

Do you want to remove the desktop machine "contDR-008"? (yes/no):

As shown above, the tool will prompt you with the suspected problematic virtual desktop name and ask if you would like to remove it. It is important to verify that the virtual machine identified is indeed a machine you would like to delete, as the process cannot be reversed.

Selecting yes for the machine will output the following:

Do you want to remove the desktop machine "contDR-008"? (yes/no):yes

Shutting down VM "/UK DR/vm/VM View/cont_Disaster_Recovery/contDR-008"...

** ERROR: EXCEPTION: VM not found: vm-10521 **

Removing VM "/UK DR/vm/VM View/cont_Disaster_Recovery/contDR-008" from inventory...

** ERROR: EXCEPTION: VM not found: vm-10521 **

Removing ThinApp entitlements for machine "/UK DR/vm/VM View/cont_Disaster_Recovery/contDR-008"...

Removing machine "/UK DR/vm/VM View/cont_Disaster_Recovery/contDR-008" from LDAP...

Running delete VM scripts for machine "/UK DR/vm/VM View/cont_Disaster_Recovery/TMRDR-008"...

Machine "contDR-004" has errors

VM Name: contDR-004

Creation Date: 9/24/18 9:21:43 PM BST

MOID: vm-10520

VM Folder: /UK DR/vm/VM View/cont_Disaster_Recovery/contDR-004

VM State: MAINTENANCE

VM Missing In vCenter: true

Do you want to remove the desktop machine "contDR-004"? (yes/no):

The tool will proceed with scanning other machines once the earlier problematic machine has been deleted. Once all of the identified VMs are deleted, the tool will ask if you would like to re-enable provisioning if it is currently disabled:

With the pass of the first pool completed, it will move onto the next pool, but if that pool is not disabled then it will ask if it can be disabled. This is potentially service impacting, so if the pool is in production and being used, select no.

Checking connectivity...

The desktop pool "Standard_Desktop" must be disabled before proceeding. Do you want to disable the desktop pool? (yes/no):

Manually identifying problematic virtual machines and removing them

There will be times when the scanMachines switch will not be able to identify problematic machines. An example of this would be pools that are stuck in the Deleting state:

You can find these machines in the Machines view but are unable to click into the pool itself:

The method to remove these machines is to manually specify the machine name and the pool using the removeMachine switch. The following is an example of the command:

viewdbchk.cmd --removeMachine --machineName contUKVD-050 --desktopName Building__VMs

The following is an example of the output:

C:\Program Files\VMware\VMware View\Server\tools\bin>viewdbchk.cmd --removeMachine --machineName contUKVD-050 --desktopName Building__VMs

Looking for desktop pool "Building__VMs" in LDAP...

Desktop Pool Name: Building__VMs

Desktop Pool Type: STICKY_TYPE

VM Folder: /Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/

Desktop Pool Disabled: true

Desktop Pool Provisioning Enabled: false

Desktop Pool Provisioning Error: The task was canceled by a user.

Looking for machine "/Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050" in vCenter...

Connecting to vCenter "https://contukvc01.contoso.com:443/sdk". This may take some time...

** ERROR: NOT FOUND **

Checking connectivity...

Found machine "contUKVD-050"

VM Name: contUKVD-050

Creation Date: 3/6/19 6:48:14 AM GMT

MOID:

VM Folder: /Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050

VM State: DELETING

VM Missing In vCenter: true

Do you want to remove the desktop machine "contUKVD-050"? (yes/no):Yes

LDAP record for machine "contUKVD-050" is incomplete.

Trying to remove machine by name...

Looking for machine "/Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050" in vCenter...

** ERROR: NOT FOUND **

Removing ThinApp entitlements for machine "/Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050"...

Removing machine "/Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050" from LDAP...

Running delete VM scripts for machine "/Hemel Hempstead/vm/contE UK Windows 10 VDI/London/Gold 2vCPU - 8GB RAM/Building__VMs/contUKVD-050"...

Provisioning has been disabled for the desktop pool "Building__VMs". Do you want to enable it? (yes/no):

Repeat the same for all the other VMs and the pool will be successfully removed.


Attempting to launch a NetScaler published Citrix XenApp / XenDesktop application or desktop fails with: “(Unknown client error 0).”


Problem

You attempt to launch a NetScaler published Citrix XenApp / XenDesktop application or desktop but immediately receive the following error for the desktop:

The connection to “XenApp Desktop” failed with status (Unknown client error 0).

Launching an application fails with the following message:

Unable to launch your application. Contact your help desk with the following information:

Cannot connect to the Citrix XenApp server.Protocol Driver error

Solution

In the case of this environment, there were 2 issues.

#1 – Certificate on Delivery Controller expired

Reviewing the event logs on the Delivery Controller indicated that the certificate bound to IIS had expired:

An SSL connection could not be established: The server sent an expired security certificate. The certificate *.domain.int is valid from 10/27/2016 1:45:38 PM until 10/27/2018 1:45:38 PM.. This message was reported from the Citrix XML Service at address https://svr-ctxdc-02.domain.int/scripts/ctxsta.dll[UnknownRequest]. The specified Secure Ticket Authority could not be contacted and has been temporarily removed from the list of active services.

Log Name: Citrix Delivery Services

Source: Citrix Store Service

Event ID: 0

Level: Error
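A quick way to spot an expired certificate on the Delivery Controller is to list the computer certificate store with PowerShell. This is only a hedged diagnostic sketch rather than a required step; it flags any certificate whose NotAfter date has passed:

# List certificates in the local computer store and flag any that have expired
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, NotAfter, @{Name='Expired';Expression={$_.NotAfter -lt (Get-Date)}} | Sort-Object NotAfter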

#2 – There were no STAs defined for the NetScaler Virtual Server

Reviewing the settings on the NetScaler virtual server also showed that there were no STAs defined:

Notice how Published Applications was an option on the right side of the Advanced Settings options.

Configuring the appropriate STAs (the Delivery Controllers) should correct the issue:

Attempting to enable a user for Unified Messaging in Exchange Server 2016 fails with: "Extension xxxx is already assigned to another user on the dial plan UMDialPlan or on an equivalent dial plan."


Problem

You attempt to enable a user for Unified Messaging in Exchange Server 2016 but receive the following error:

Extension xxxx is already assigned to another user on the dial plan UMDialPlan or on an equivalent dial plan.

You know that this extension was previously assigned to another user so you search for the previous user and can confirm that Unified Messaging for the user is disabled:

Reviewing the account’s email address properties confirms there are no EUM addresses:

Executing the following cmdlets does not display this extension assigned to anyone in the dial plan:

Get-UMMailbox | where { $_.Extensions -eq "9533" }

Get-UMMailbox | Format-Table -Wrap -AutoSize > C:\UMExt.txt

You proceed to review the msExchShadowProxyAddresses attribute in the Attribute Editor of the previous user’s AD account:

You can see an eum address with the extension and proceed to remove it.

However, this does not correct the issue as the same error message is thrown.

Solution

I can’t take full credit for the solution but figured it would be worth blogging since it took over an hour for me to locate a cmdlet that was able to find the offending user.  This cmdlet can be found on this forum post:

https://social.technet.microsoft.com/Forums/en-US/55dc00d5-2301-49d0-9a02-482d237c339b/exchange-server-2013-enableummailbox-error-extension-2909-is-already-assigned-to-another-user-on?forum=exchangesvrunifiedmessaging

The cmdlet was:

Get-ADobject -filter * -Properties name,msRTCSIP-Line,telephoneNumber,proxyAddresses | Select-Object name,msRTCSIP-Line,telephoneNumber,@{Name="proxyAddresses";Expression={[string]::join(";",( $_ | Select-Object -ExpandProperty proxyaddresses ))}} | Where-Object { $_.proxyAddresses -like "*9533*" } | Format-table -Wrap -AutoSize

This user also had Unified Messaging disabled:

However, the eum attribute was visible in the properties of the email address settings:
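If you prefer to remove the offending eum proxy address with PowerShell rather than through the Attribute Editor, a minimal sketch using the Active Directory module might look like the following; the account name jsmith is a placeholder and the extension is the one from this example:

# Find the eum proxy address containing the extension and remove it from the account (jsmith is a placeholder)
$user = Get-ADUser -Identity jsmith -Properties proxyAddresses
$eum = $user.proxyAddresses | Where-Object { $_ -like "eum:*9533*" }
Set-ADUser -Identity jsmith -Remove @{proxyAddresses = $eum}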

Removing the attributes corrected the issue.

Attempting to upgrade Microsoft Exchange Server 2016 from CU8 to CU12 fails with: “Setup can't continue with the upgrade because the mscorsvw (3152) has open files.”


Problem

You’re attempting to upgrade Microsoft Exchange Server 2016 from CU8 to CU12 but the process fails at the Prerequisite Analysis with:

Setup can't continue with the upgrade because the mscorsvw (3152) has open

files. Close the process, and then restart Setup.

For more information, visit: http://technet.microsoft.com/library(EXCHG.150)/ms.

exch.setupreadiness.ProcessNeedsToBeClosedOnUpgrade.aspx

Solution

This error is typically thrown shortly after a server restart when the .NET Framework’s Native Image Generator (NGEN) is running in the background.  It is not a good idea to try to terminate the process; waiting around 10 minutes is usually enough time for it to complete, depending on how quickly the server hardware can finish the operations.
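To check whether NGEN is still busy, or to force it to finish its queued work rather than waiting, something like the following can be used from PowerShell; the path assumes the 64-bit .NET Framework 4.x install location:

# Check whether the .NET Native Image Generator is still running
Get-Process mscorsvw -ErrorAction SilentlyContinue

# Optionally force NGEN to complete its queued work now instead of waiting for the background service
& "$env:windir\Microsoft.NET\Framework64\v4.0.30319\ngen.exe" executeQueuedItems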

For the environment in this example, it took about 8 minutes for the installer to successfully run:

Remotely terminating a remote session on a Citrix XenApp or RDS server


I’ve been asked several times in the past about the following error that is presented if a user attempts to RDP (remote desktop) to a Citrix XenApp application server:

The target session is incompatible with the current session.

The reason why this message is presented is because the account used for the RDP connection already has a previous ICA session in a disconnected state.  You can verify this by using the net use command to connect to the server, then the query session command to list the sessions on the server:

Step #1 – Connect to the remote server

Launch the command prompt and execute the following:

net use \\<serverName> /user:<adminUserName> <Password>

The command should display the following message if the connection is successful:

The command completed successfully.

Step #2 – Query session on the remote server

Execute the following command to list the sessions on the remote server:

query session /server:<serverName>

The command should display sessions along with the following headings:

  • SESSIONNAME
  • USERNAME
  • ID
  • STATE
  • TYPE
  • DEVICE

Locate the username you are looking for as well as the ID number.

Step #3 – Terminate session on the remote server

With the ID of the username you want to terminate located, execute the following command to terminate it:

reset session <ID> /server:<serverName>

Step #4 – Confirm that the session on the remote server has been terminated

The command will not provide any output after completion so execute the query session command to confirm that the session has been terminated:

query session /server:<serverName>

Below is an example of the output from the commands executed above:

You should be able to RDP to the server now that the session is no longer present for the account connecting.
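For reference, the whole sequence can be run from a single PowerShell prompt. This is a minimal sketch where the server name, admin account and session ID are placeholders to be substituted with the values found in the steps above:

# Placeholder values: substitute your server, admin account and the session ID found above
$server = "CTXAPP01"
net use \\$server /user:CONTOSO\admin *
query session /server:$server
reset session 3 /server:$server
query session /server:$server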

Attempting to execute Exchange PowerShell cmdlets on objects in another domain fails with: “The operation couldn't be performed because object 'alias@domain.com' couldn't be found on 'DC01.domain.com'.”


One of the common questions I have been asked over the years is why the following error is thrown when an Exchange PowerShell cmdlet is executed on an object in a domain other than the one the Exchange organization is installed in.  The following is an example of the scenario:

  1. The environment has 1 forest and multiple domains
  2. The root domain name is contoso.com
  3. Another domain named tradewinds.com is in the same forest but a separate tree
  4. Exchange is installed into the contoso.com domain

You log onto one of the Exchange servers in the contoso.com domain and attempt to execute a cmdlet for an object in the tradewinds.com domain but receive the following error:

Get-MailboxPermission -Identity inbox@tradewinds.com
The operation couldn't be performed because object 'inbox@tradewinds.com' couldn't be found on
'contDC01.contoso.com'.
     + CategoryInfo          : InvalidData: (:) [Get-MailboxPermission], ManagementObjectNotFoundException
     + FullyQualifiedErrorId : [Server=contBMEXMA01,RequestId=c60334ef-7152-42ec-98f0-f838d0a90283,TimeStamp=4/5/2019 12
    :36:27 PM] [FailureCategory=Cmdlet-ManagementObjectNotFoundException] E1D4908D,Microsoft.Exchange.Management.Recip
   ientTasks.GetMailboxPermission
     + PSComputerName        : contbmexma01.contoso.com

The reason why this error is thrown is because any cmdlets executed in the default context will only look up objects in the domain where Exchange is installed.  In order to search for objects outside of the current domain, you will need to do one of the following:

  1. Log onto an Exchange server that is joined to that domain (if there is one)
  2. Use the DomainController switch to specify a domain controller in that domain (see the example after the cmdlet below)
  3. Execute the following cmdlet to expand the scope to include the entire forest (note that this can cause searches to be slow if the environment is large):

Set-AdServerSettings -ViewEntireForest $true
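For example, option 2 might look like the following, where twdc01.tradewinds.com is a hypothetical domain controller in the tradewinds.com domain:

# Target a domain controller in the other domain so the object can be resolved
Get-MailboxPermission -Identity inbox@tradewinds.com -DomainController twdc01.tradewinds.com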

Attempting to send an email with attachment in Outlook 2016 fails with: “The function cannot be performed because the message has been changed.”


Problem

You’ve received reports from users that they would intermittently receive the following error messages when they send an email with an attachment:

The function cannot be performed because the message has been changed.

The operation failed. The messaging interfaces have returned an unknown error. If the problem persists, restart Outlook.

This item is no longer valid because it has been closed.

The Outlook version installed is Microsoft Outlook 2016 MSO (16.0.4738.1000) and Exchange Server 2019 CU1 is where the mailbox is hosted.

This doesn’t happen with all attachments and appears to only affect attachments that are several MB in size (under 1 MB appears to work but 3 MB+ does not).

Solution

Troubleshooting this issue was difficult as it was not easy to replicate and the event logs did not record any errors. After locating several attachments that didn’t work, it was observed that problematic attachments would cause the Outlook window to hang for several seconds and the Send as Adobe Document Cloud link would appear in the email:

This led me to believe that it may be a plugin issue, so I navigated into the COM Add-ins window, disabled the Adobe Document Cloud for Microsoft Outlook – Acrobat add-in, tested again and the problem went away:
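If you need to review or script the add-in state outside of the COM Add-ins dialog, the registered Outlook add-ins and their LoadBehavior values can be listed from the registry. This is only a hedged sketch; a LoadBehavior of 3 means the add-in loads at startup:

# List registered Outlook COM add-ins and their LoadBehavior values (3 = load at startup)
Get-ChildItem 'HKCU:\Software\Microsoft\Office\Outlook\Addins' -ErrorAction SilentlyContinue | Get-ItemProperty | Select-Object PSChildName, LoadBehavior
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Office\Outlook\Addins' -ErrorAction SilentlyContinue | Get-ItemProperty | Select-Object PSChildName, LoadBehavior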

This environment was undergoing an Exchange 2013 to 2019 migration and the issue appears to only affect migrated users, so our suspicion is that the Adobe Acrobat DC 19.010.20098 reader shown below needs to be updated:

VMware Horizon View virtual desktop in “agent unreachable” status because OS is in repair boot mode


One of the common issues I’ve seen in many virtual desktop environments such as VMware Horizon View (or Citrix XenDesktop) is when a Windows 10 virtual desktop has crashed and gone into repair mode upon reboot, causing Horizon View to report the desktop as agent unreachable:

Reviewing the console of the virtual desktop in vCenter will confirm that it has booted into the recovery screen:

There isn’t really a benefit to having this feature on because it causes desktops to never boot into the operating system, thereby generating help desk calls.  The way to disable this feature is through the use of the bcdedit command.

Executing this command in the command prompt will display the configuration of the Windows Boot Loader and whether the recoveryenabled setting is set to Yes or No:

To disable the feature, simply execute the following command:

bcdedit /set recoveryenabled NO

Executing bcdedit after the configuration should show the recoveryenabled configuration set to No:

Those who still have Windows 7 virtual desktops in their environment can use the following command:

bcdedit /set {default} bootstatuspolicy ignoreshutdownfailures


Running Remote Connectivity Analyzer against Exchange 2013/2016/2019 fails with: “MAPI over HTTP connectivity failed.”


Problem

You use the Remote Connectivity Analyzer to test external connectivity for an Exchange 2013/2016/2019 environment but notice that it fails with the following error:

Testing MAPI over HTTP connectivity to server mail.contoso.com
      MAPI over HTTP connectivity failed.
      
Additional Details
      Elapsed Time: 11 ms.

     
Test Steps
      
Attempting to resolve the host name contexc02.contoso.com in DNS.
      The host name couldn't be resolved.
        Tell me more about this issue and how to resolve it

     
Additional Details
      Host contexc02.contoso.com couldn't be resolved in DNS InfoDomainNonexistent.
Elapsed Time: 11 ms.


What’s strange about the error is that the message references the FQDN of the internal server name and internal domain name.

Solution

As much as the error suggests this is caused by MAPI over HTTP, you may notice that executing the cmdlet:

Get-MapiVirtualDirectory | FL ServerName, *url*, *auth*

… would reveal that the configuration for the external MAPI Virtual Directory is already configured with the appropriate URL.

Executing the cmdlet:

Get-OrganizationConfig | fl identity,*mapi*

… will also indicate that MAPI over HTTP is enabled.

What typically causes this issue is if Outlook Anywhere has not been configured.  You can review the settings on each of the servers either by executing the following cmdlet:

Get-OutlookAnywhere | FL Identity,*Host*,*requireSSL*

… or navigating to the server properties’ Outlook Anywhere section, which would likely have the internal server’s FQDN as that is the default:

Either adjust the configuration in the EAC for each server appropriately or use the following cmdlet to configure the settings:

Set-OutlookAnywhere -Identity:"<serverName>\Rpc (Default Web Site)" -DefaultAuthenticationMethod NTLM -InternalHostname <internalFQDNURL> -InternalClientsRequireSsl $true -ExternalHostname <externalFQDNURL> -ExternalClientsRequireSsl $true -SSLOffloading $false
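If several servers need the same change, the cmdlet can also be applied to every Outlook Anywhere virtual directory in one pass. This is a hedged sketch where mail.contoso.com is a placeholder for the externally published FQDN:

# Apply the same internal and external hostnames to every server in the organization (mail.contoso.com is a placeholder)
Get-OutlookAnywhere | Set-OutlookAnywhere -DefaultAuthenticationMethod NTLM -InternalHostname mail.contoso.com -InternalClientsRequireSsl $true -ExternalHostname mail.contoso.com -ExternalClientsRequireSsl $true -SSLOffloading $false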

The Remote Connectivity Analyzer should return successful results once this has been corrected.

Deploying Skype for Business Server 2019 Archiving and Monitoring Services


Deploying Skype for Business Server 2019 archiving and monitoring services is fairly much the same as in the previous version but I thought it would be a good idea to write this updated post to demonstrate the process.  Before I begin, note that the official deployment documentation for Archiving and Monitoring can be found here:

Deploy archiving for Skype for Business Server
https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-archiving/deploy-archiving

Deploy monitoring in Skype for Business Server
https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-monitoring/deploy-monitoring

Step #1 – Define Archiving and Monitoring services on Front End Server / Pool Properties

Begin by editing the properties of the front-end pool, navigate to the General properties and scroll down to the Associations section:

Enable the Archiving and Monitoring (CDR and QoE metrics) options:

Enter the FQDN of the SQL server that will be hosting the two databases when presented with the Define New SQL Server Store menu:

Select the same or create a new SQL instance for the monitoring service then click OK:

The archiving SQL Server store and Monitoring SQL Server store should now have the new SQL server instance defined:

Publish the topology:

The Publish Topology wizard will present you with the option to either automatically let the wizard determine where to store the database or manually changing the path by going into the advanced options:

Selecting advanced will present the following menu:

What I don’t like about this is that the database doesn’t get stored in the paths defined but rather in subfolders such as D:\Database\ArchivingStore\(default)\DbPath.  You can, obviously, change this afterwards or at least have them on the correct drives.

Step #2 (Optional) – Move SQL database and log files to appropriate location

If you are not satisfied with the path in which the database and the log files are stored, you can use the following SQL queries to change the path after taking the database offline:

ALTER DATABASE LcsCDR  

    MODIFY FILE ( NAME = LcsCDR_data,  

                  FILENAME = 'D:\Database\LcsCDR.mdf'); 

GO

ALTER DATABASE LcsCDR  

    MODIFY FILE ( NAME = LcsCDR_log,  

                  FILENAME = 'D:\Log\LcsCDR.ldf'); 

GO

ALTER DATABASE LcsLog  

    MODIFY FILE ( NAME = LcsLog_data,  

                  FILENAME = 'D:\Database\LcsLog.mdf'); 

GO

ALTER DATABASE LcsLog 

    MODIFY FILE ( NAME = LcsLog_log,  

                  FILENAME = 'D:\Log\LcsLog.ldf'); 

ALTER DATABASE QoEMetrics  

    MODIFY FILE ( NAME = QoEMetrics_data,  

                  FILENAME = 'D:\Database\QoEMetrics.mdf'); 

GO

ALTER DATABASE QoEMetrics 

    MODIFY FILE ( NAME = QoEMetrics_log,  

                  FILENAME = 'D:\Log\QoEMetrics.ldf'); 

GO
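The MODIFY FILE statements above only record the new paths; the databases still need to be taken offline and the physical files moved before they are brought back online. A hedged PowerShell sketch of that sequence for one database follows, assuming the SqlServer module is installed and using placeholder instance and source paths:

# Run the ALTER DATABASE ... MODIFY FILE statement for LcsCDR first, then take the database offline
Invoke-Sqlcmd -ServerInstance "SQL01\LYNC" -Query "ALTER DATABASE LcsCDR SET OFFLINE WITH ROLLBACK IMMEDIATE"

# Move the data and log files to the new locations referenced by MODIFY FILE
Move-Item "E:\OldPath\LcsCDR.mdf" "D:\Database\LcsCDR.mdf"
Move-Item "E:\OldPath\LcsCDR_log.ldf" "D:\Log\LcsCDR.ldf"

# Bring the database back online at its new location
Invoke-Sqlcmd -ServerInstance "SQL01\LYNC" -Query "ALTER DATABASE LcsCDR SET ONLINE"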

Step #3 – Deploy Monitoring Reports on SSRS

With the databases created, proceed by launching the Deployment Wizard and click on Deploy Monitoring Report:

Select the appropriate SQL instance and define the SQL Server Reporting Services (SSRS) instance:

**Note that as the wizard indicates, this will overwrite existing reports so be cautious of any SSRS reports that may already exist on the server.

Enter an account that will be used to access the Monitoring database:

I’ve been asked many times in the past about what credentials should be entered and the short answer is to use a service account, because this account will automatically be granted the logon and database permissions required to access the Monitoring database.  The documentation about this can be found here:

Install Monitoring Reports in Skype for Business Server
https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-monitoring/install-monitoring-reports

On the Specify Credentials page, in the User name box, type the domain name and user name of the account to be used when accessing the Monitoring Reports (for example, litwareinc\kenmyer). If you do not use this format (domain\user name) an error will occur.

Type the user account password in the Password box, and then click Next. Note that no special rights are required for this account. The account will automatically be granted the required logon and database permissions when setup completes.

I usually prefer to create a service account for this:

Define a group that will have read-only access to SQL Server Reporting Services (SSRS):

**Note that the default is RTCUniversalReadOnlyAdmins

The wizard will complete the deployment with the information provided:

Once completed, you should see the service account granted ReportsReadOnlyRole to the databases:

Step #4 – Test Monitoring services reports

The reports can now be accessed via the following URL:

http://<SSRS_Server>/reportserver

Step #5 – Configure Monitoring and Archiving Reports

It is generally not advisable to just leave the configuration for the monitoring or archiving reports at the default (you should at least review the default settings to ensure they meet the requirements of the organization), so proceed to launch the Skype for Business Server 2019 Control Panel and navigate to Monitoring and Archiving:

Go through the menus and edit the global properties or create a new one for the organization:

Attempting to rebalance VMware Horizon View linked-clones desktop pool datastore fails with: "Refit operation rebalance failed"


Problem

You attempt to migrate a linked-clone pool’s storage from one datastore to another by using the rebalance feature as outlined in the following KB:

Migrating linked clone pools to a different or new datastore (1028754)
https://kb.vmware.com/s/article/1028754

… but notice that the operation attempts the migration on a few desktops but fails with the error: Refit operation rebalance failed in the Events log of the linked-clones desktop pool:

Clicking on the ellipsis icon for the error for more information does not reveal additional information:

Navigating outside of the Events log of the linked-clones desktop pool then into the Inventory view and clicking on the ellipsis icon for the desktop in error displays the following message:

Apr 5, 2019 8:34:03 PM ADT: View Composer Invalid Parent VM Fault: View Composer Invalid Parent VM Fault: Selected snapshot on parent VM is not found on VC server

Pairing state:

Configured by:

Attempted theft by:

Solution

This error is usually caused by situations where the master image for the linked-clones pool no longer has the snapshots that the desktops being moved are associated with.  The rebalance operation requires access to the snapshots in order to re-clone the replicas (snapshots) to the new datastore, so if they no longer exist then this error is thrown.  In the case of this environment, the pool of 200 desktops had a mix of 5 different snapshots where only 1 of them was still available on the master image.  To view the linked-clone desktops and the associated snapshots that are being used, navigate to Inventory> Machines (View Composer Details) and review the Image tab:

If the snapshots have already been deleted then the way to remediate the error is to recompose all the desktops to an existing snapshot on the master image, or create a new one and recompose with it.  The rebalance operation should complete successfully once all the linked-clone desktops are using a snapshot that exists.

Microsoft Outlook fails to authenticate with Office 365 configured with DUO MFA


I was recently contacted to troubleshoot an issue where a user’s Outlook was unable to connect to Office 365 after a password change the previous evening. While I found various forum posts describing the issue, the suggested solution required a slight change to work in the environment I was dealing with, so this post describes what I encountered and the solution.

Environment

  • Office 365 is configured for MFA with the product named DUO, which is now owned by Cisco
  • Users are automatically redirected to a Citrix NetScaler configured with DUO MFA authentication webpage https://aaa.domain.com when they attempt to log into Office 365 either via outlook.com/domain, outlook.office365.com or login.microsoftonline.com

Problem

A user is no longer able to connect to Office 365 with their Outlook client after the following actions:

  1. Her password was going to expire so she changed it at the end of the day
  2. She logged off after the password change and went home for the evening
  3. She arrived at the office this morning, logged into her laptop and noticed that her Outlook no longer connected

You’ve confirmed that their password was updated within Azure Active Directory (AAD) yesterday evening:

image

You’ve confirmed that the cached credentials were cleared:

clip_image001

You proceed to connect to their desktop/laptop and notice that her Outlook had the status displayed as:

Trying to connect…

clip_image002

Clicking on the Trying to connect… button would briefly bring up what appears to be the authentication prompt for Office 365:

image

The window is displayed for about 3 seconds and disappears.

Thinking that this may be an authentication issue, you try having the user authenticate via the Office Account sign in page but it does not resolve the issue:

image

You perform a bit of Googling on the internet and find the following two forum posts:

https://techcommunity.microsoft.com/t5/outlook/outlook-password-prompt-disappears-quickly/m-p/793317

https://superuser.com/questions/1349327/outlook-needs-password-but-dialog-box-disappears

The discussion indicates that the following two registry values should be added:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity

DWORD: DisableADALatopWAMOverride

Value: 1

DWORD: EnableADAL

Value: 0

You proceed to add these two DWORDs to the registry:

clip_image002[4]

Adding these two keys managed to display this classic authentication prompt when Outlook is restarted:

image

However, logging in with her new credentials did not correct the problem as the status would continue to be stuck at Trying to connect…:

Solution

What ended up working for this issue was deleting the following registry value:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity

DWORD: EnableADAL

Value: 0

… but with the following one configured:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity

DWORD: DisableADALatopWAMOverride

Value: 1

clip_image002[6]

With the above setup, the aaa.domain.com Citrix NetScaler page loaded correctly when Outlook is started:

image

Having the user enter their credentials got Outlook to finally connect to Office 365:

clip_image002[8]

What should also be noted is that re-adding the following DWORD:

HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity

DWORD: EnableADAL

Value: 0

… after Outlook has connected will cause it to fail to connect again, so this DWORD should be left unadded.
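Putting the working configuration together, a minimal PowerShell sketch (the registry path and value names are the ones shown above) could be:

# Working configuration described above: remove EnableADAL and keep DisableADALatopWAMOverride set to 1
$path = "HKCU:\Software\Microsoft\Office\16.0\Common\Identity"
Remove-ItemProperty -Path $path -Name "EnableADAL" -ErrorAction SilentlyContinue
New-ItemProperty -Path $path -Name "DisableADALatopWAMOverride" -PropertyType DWord -Value 1 -Force | Out-Null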

Confirming a Citrix ADC is CVE-2019-19781 vulnerable and compromised then proceeding to remediate


I took an extended leave from IT for most of 2019 until returning later in December and therefore haven’t had to deal with the Citrix CVE-2019-19781 vulnerability, but I just so happened to come across an unpatched Citrix ADC the other day so I thought I’d go through the process as outlined by my friend Thomas Poppelgaard (https://www.poppelgaard.com/cve-2019-19781-what-you-should-know-and-how-to-fix-your-citrix-adc-access-gateway) in detail to capture and show some screenshots of what an exploited Citrix ADC would reveal during the tests as well as the remediation I performed at the end. I won’t be reiterating all the details about the tests so please refer to his blog post for a more in-depth description of why and what the tests are for.

Confirming Vulnerability and Compromise

CISA and Citrix both offer tools to check the Citrix ADC for the vulnerability:

CISA UTILITY TO CHECK IF CUSTOMER IS VULNERABLE

https://github.com/cisagov/check-cve-2019-19781

CVE-2019-19781 – VERIFICATION TOOL FROM CITRIX

https://support.citrix.com/article/CTX269180

… but a community tool created by zentura_cp is available on Azure to perform the check quickly. That tool can be found here: https://cve-2019-19781.azurewebsites.net

image

Scanning the Citrix ADC in this environment confirmed that it is vulnerable to CVE-2019-19781:

image

The wildcard certificate check did not reveal a match but I have recommended the client to revoke and reissue the certificate.

Checking the Template Files

As indicated by Thomas Poppelgaard’s article, the exploits all write files to the following three directories:

/netscaler/portal/templates/

/var/tmp/netscaler/portal/templates

/var/vpn/bookmark/

There is no need to switch to the shell prompt from the CLI as you can execute an ls on these directories as such:

shell ls /netscaler/portal/templates/*.xml

shell ls /var/tmp/netscaler/portal/templates

shell ls /var/vpn/bookmark/*.xml

Initiating an ls command for those directories returns the following:

shell ls /netscaler/portal/templates/*.xml

image

> shell ls /netscaler/portal/templates/*.xml

/netscaler/portal/templates/02a81b35.xml

/netscaler/portal/templates/04c86902.xml

/netscaler/portal/templates/06fb0cee.xml

/netscaler/portal/templates/0bc583f8.xml

/netscaler/portal/templates/17EOk.xml

/netscaler/portal/templates/1Z5g92Aky.xml

/netscaler/portal/templates/1d9363f1.xml

/netscaler/portal/templates/2YMBK.xml

/netscaler/portal/templates/3QJW6.xml

/netscaler/portal/templates/3YDd6gheCtB8mrJL2zfyO4WKl7sG0MQa.xml

/netscaler/portal/templates/473ce5e1.xml

/netscaler/portal/templates/48ZwobXq0yjVtU3GeC2a95kBfAHNu1Qz.xml

/netscaler/portal/templates/4GgZh.xml

/netscaler/portal/templates/53963742.xml

/netscaler/portal/templates/5BdZA.xml

/netscaler/portal/templates/5qIJsMV1x9aUGzKirLuNtRYX032QjnvS.xml

/netscaler/portal/templates/5t4ls.xml

/netscaler/portal/templates/64c2192d.xml

/netscaler/portal/templates/6fef6135.xml

/netscaler/portal/templates/77bd07f9.xml

/netscaler/portal/templates/7BA1C.xml

/netscaler/portal/templates/7ESdi.xml

/netscaler/portal/templates/8HtqLVavu.xml

/netscaler/portal/templates/8W7Cgvyck.xml

/netscaler/portal/templates/8cc4d5fb.xml

/netscaler/portal/templates/9HS1qhTa5cyMXYjeIdbV06LQum3nlxB2.xml

/netscaler/portal/templates/BbMZ6.xml

/netscaler/portal/templates/CHQsB.xml

/netscaler/portal/templates/E9TG5.xml

/netscaler/portal/templates/FLQBEv6FQfjjxFfjn2dSRjGHf8WjgpN6.xml

/netscaler/portal/templates/FUcrY.xml

/netscaler/portal/templates/FnMd4HqxlN0Tar9eX3sYD1L8OtPbA5gG.xml

/netscaler/portal/templates/GWwH1.xml

/netscaler/portal/templates/GbRqm.xml

/netscaler/portal/templates/GgJlo.xml

/netscaler/portal/templates/GiDNJIaYtW7EubqBCLF3rHKodsQ16wRc.xml

/netscaler/portal/templates/H8yeh.xml

/netscaler/portal/templates/H9Qwv12zTE3bUcVZMK7Wga5hBfJGxiu0.xml

/netscaler/portal/templates/HitNA.xml

/netscaler/portal/templates/JTQ5sTyzWXWdQRLHrhWukGTgvAHX8Be9.xml

/netscaler/portal/templates/MAkw4mge4.xml

/netscaler/portal/templates/NpWPYQq2DCndZfFVix74PB6owv0QrOUp.xml

/netscaler/portal/templates/O0aAJQSycUFHDsk32gflC98iTK6bRpm1.xml

/netscaler/portal/templates/O9Bon2fAzapuXry7Q0JM41LTKS8l6DmR.xml

/netscaler/portal/templates/OhjZf.xml

/netscaler/portal/templates/R8p9l.xml

/netscaler/portal/templates/RWM4aJ9GhcvSTZLxAreg0qOo5fsCuz8y.xml

/netscaler/portal/templates/TlkhDg4uyEWRHX1fxV2t9AJ3OM0iKmYz.xml

/netscaler/portal/templates/UHKl9QzvkTpJi34wqWORhGN0bDYsCPX1.xml

/netscaler/portal/templates/X3h40kvVpMxjYIKbqlwsa5eLtuNTf281.xml

/netscaler/portal/templates/YOrPCVxZizJjLauTmRyoWUf5Xg1F4G6E.xml

/netscaler/portal/templates/YP68c.xml

/netscaler/portal/templates/YjpRX.xml

/netscaler/portal/templates/a436n.xml

/netscaler/portal/templates/ad140234.xml

/netscaler/portal/templates/aenp7.xml

/netscaler/portal/templates/bf327bcc.xml

/netscaler/portal/templates/c1ktNfwQ76mDi9y2bVEY4Ze8qJajSCvs.xml

/netscaler/portal/templates/cUAwMikvxEePnrOV0bs6NRzT28lt71jq.xml

/netscaler/portal/templates/cVSMUbaDF4TeAL8C9tpmGf15PNR6sd2W.xml

/netscaler/portal/templates/cs0lxv5ppnrxebo4.xml

/netscaler/portal/templates/czqhO.xml

/netscaler/portal/templates/d12affe3.xml

/netscaler/portal/templates/dddd[%template.new({'BLOCK'='print`id`'})%].xml

/netscaler/portal/templates/dqe82VQf8.xml

/netscaler/portal/templates/eMlUc.xml

/netscaler/portal/templates/f81961c6.xml

/netscaler/portal/templates/fhLCc.xml

/netscaler/portal/templates/hK3Sn.xml

/netscaler/portal/templates/hjXNi.xml

/netscaler/portal/templates/jLBscdX0Q.xml

/netscaler/portal/templates/k42bayNsu1hxf7pZ98UwWPviVYDOAmIM.xml

/netscaler/portal/templates/niphc4KmTRS9E0XkbAo5eBQWMtsYF3zG.xml

/netscaler/portal/templates/oQZFe.xml

/netscaler/portal/templates/orfuxqzupb.xml

/netscaler/portal/templates/pswIV.xml

/netscaler/portal/templates/qTvCLFw9HnfAzFycKGJSdQ9XxOnCityF.xml

/netscaler/portal/templates/qVIgn5Mljz7as2Q6PrkDHBbmoRuhx13v.xml

/netscaler/portal/templates/qrfgO.xml

/netscaler/portal/templates/qui4o.xml

/netscaler/portal/templates/ryMG7.xml

/netscaler/portal/templates/sDZIbl7ZNsL1HVBCzowAq3oVLY2aiexj.xml

/netscaler/portal/templates/somuniquestr.xml

/netscaler/portal/templates/tcHzZ6um0MrlEVXT8dyAwBOx3kYs2go7.xml

/netscaler/portal/templates/v905U.xml

/netscaler/portal/templates/wTJIb.xml

/netscaler/portal/templates/xfWjDEZ0K4lSaRgHvecQ2987GihIYTUX.xml

/netscaler/portal/templates/yR39d.xml

/netscaler/portal/templates/yVStWwCFy9BDXBxjIGvCk3h67Gx4Zm8E.xml

/netscaler/portal/templates/yndBMhT2L.xml

Done

image

The results from above indicate that there are plenty of xml files that have filenames suggesting the Citrix ADC has been compromised.

Proceeding to execute the following:

shell ls /var/tmp/netscaler/portal/templates

image

… reveals that the /var/tmp/netscaler/portal/templates directory appears to be fine.

Executing the last command:

shell ls /var/vpn/bookmark/*.xml

image

> shell ls /var/vpn/bookmark/*.xml

/var/vpn/bookmark/1Fx3kynwZ.xml

/var/vpn/bookmark/OqU8uS1bW.xml

/var/vpn/bookmark/bNRrtwSjF.xml

/var/vpn/bookmark/cx4yylzvg.xml

/var/vpn/bookmark/fKchklbXP.xml

/var/vpn/bookmark/fSdeA2hEn.xml

/var/vpn/bookmark/lztbVXgWEKqmoQLWPjbM.xml

/var/vpn/bookmark/nsroot.xml

/var/vpn/bookmark/pwnpzi1337.xml

/var/vpn/bookmark/testtest.xml

/var/vpn/bookmark/z8P0sSAGK.xml

Done

>

image

… reveals that the /var/vpn/bookmark/ directory also displays the same type of files and suggests the ADC has been compromised.

Reviewing Apache Log Files

Executing the first 3 commands to review the Apache logs does not reveal any results:

shell cat /var/log/httpaccess.log | grep vpns | grep xml

shell cat /var/log/httpaccess.log | grep "/\.\./"

shell gzcat /var/log/httpaccess.log.*.gz | grep vpns | grep xml

However, executing the following command does:

shell gzcat /var/log/httpaccess.log.*.gz | grep "/\.\./"

image

The suspicious xml files are notably present in the results above.

Executing the commands to determine whether there are any scheduled tasks to maintain access after patches does not reveal any jobs that a compromised Citrix ADC would have:

shell cat /etc/crontab

shell crontab -l -u nobody

image

Checking for Backdoor Scripts

Executing the following two commands to look for backdoor scripts:

shell ps -aux | grep python

shell ps -aux | grep perl

… reveal the following:

> shell ps -aux | grep python

nobody 82094 0.0 0.1 60980 1852 ?? Ss 28Jan20 0:37.76 /var/python/bin/python2 -c -c (python2.7)

root 36830 0.0 0.1 9096 912 0 RL+ 1:51PM 0:00.00 grep python

> shell ps -aux | grep perl

nobody 53632 100.0 0.0 5804 272 ?? R Sun05AM 3176:31.20 /tmp/.perl

nobody 77911 0.0 0.0 5804 412 ?? I 28Jan20 0:07.52 /tmp/.perl

root 36832 0.0 0.1 9096 1024 0 S+ 1:51PM 0:00.00 grep perl

>

image

The presence of processes running under the nobody account is a red flag that there are backdoor scripts currently on the NetScaler, and we can see that there are 3 of them listed.

Checking for Crypto Mining

Executing the following command to list the current running processes:

shell top -n 10

… reveals the following:

> shell top -n 10

last pid: 36937; load averages: 2.00, 2.00, 2.00 up 123+23:58:22 13:53:29

77 processes: 3 running, 71 sleeping, 3 zombie

Mem: 27M Active, 3504K Inact, 1487M Wired, 11M Cache, 169M Buf, 2528K Free

Swap: 4198M Total, 299M Used, 3899M Free, 7% Inuse

PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND

1174 root 1 44 r0 1210M 1210M CPU1 1 2976.0 100.00% NSPPE-00

53632 nobody 1 119 0 5804K 272K RUN 0 53.0H 100.00% .perl

1176 root 1 44 0 83392K 5668K kqread 0 30.7H 0.00% nsnetsvc

1297 root 1 44 0 37128K 3232K kqread 0 387:11 0.00% nsrised

1434 root 1 44 0 54752K 444K nanslp 0 316:34 0.00% nsprofmon

994 root 1 44 0 76720K 1356K select 0 112:35 0.00% vmtoolsd

1199 root 1 44 0 99M 11940K kqread 0 111:55 0.00% nsaggregatord

1331 root 2 76 0 70184K 13168K ucond 0 41:09 0.00% nscopo

25 root 1 44 0 29784K 29872K kqread 0 37:05 0.00% pitboss

1314 root 1 44 0 63928K 9820K nanslp 0 30:48 0.00% nscollect

Done

>

image

The presence of the nobody account running a perl script that is consuming 100.00% of the WCPU suggests malicious activity currently running on the Citrix ADC.

Reviewing the Bash Logs

Executing the following 2 commands to review the bash logs:

shell cat /var/log/bash.log | grep nobody

shell gzcat /var/log/bash.*.gz | grep nobody

… does not reveal the nobody account present:

image

The following 2 commands display the full output of both logs, allowing you to manually comb through the entries, but the results from all the tests above are sufficient to deem the appliance compromised:

shell cat /var/log/httperror.log

shell gzcat /var/log/httperror.log.*.gz

image

Remediation

This particular environment did not have a backup from before December 2019 for me to restore, so the only viable option available was to redeploy the VPX appliance and restore the configuration. The latest backup I could find dates back to 2 years ago but given that there has not been any change to the Citrix ADC, I decided to download the same build, redeploy, restore the backup configuration, upgrade the appliance and swap out the existing one.

Prior to completing the swap it is also important to perform the following:

  1. Review all of the servers that the Citrix ADC interacts with to determine whether they have been breached. These services could include domain controllers, Citrix StoreFronts, Citrix Delivery Controllers, Licensing servers and others.
  2. Update all of the passwords both local to the ADC and credentials used to interact with other services such as LDAP lookups against Domain Controllers.
  3. Force password changes for all users who have logged onto the ADC to access resources. As most Citrix ADCs allow Active Directory integration, it is important to force password changes for AD accounts.
  4. Revoke and renew all SSL certificates on the Citrix ADC. Use the list of files on the ADC’s /nsconfig/ssl/ directory as the hackers who have breached the appliance would have been able to retrieve the files in there.
  5. Ensure that the version of the Citrix ADC is not vulnerable to the CVE-2019-19781.
  6. Once the above have been completed, proceed to place the Citrix ADC back on the internet and run the vulnerability scan against it.

Restoring Configuration from Backup

As indicated above, the backup I had to work with was a few versions back from the NS12.1 51.19.nc that the appliance was currently running:

image

Citrix’s official Backup and Restore documentation (https://docs.citrix.com/en-us/netscaler/12/system/basic-operations/backup-restore-netscaler-appliance.html) indicates we can use the same or newer build but just to avoid any unintended issues, I downloaded the same build and redeployed the appliance as the running version would need to be upgraded anyways. The first step for the restore is to get the backup file uploaded and since a restore can only be performed via the GUI, I will use it to upload the configuration backup.

Begin by navigating to System > Backup and Restore then click on the Backup button:

image

Select the Add button:

image

Click on the down arrow beside Choose File and select Local as the source then select the backup TGZ file:

image

**Note that it is important to choose the Local option because the Appliance option would allow you to upload but it will not be displayed as a backup in the options to restore.

With the backup uploaded, proceed to restore it from the TGZ package:

image

Note that there is no need to back up the appliance since it’s a new deployment, so select the Skip Backup checkbox:

image

The restore should be fairly quick and will display a Restore Successful at the bottom right hand corner of the window:

image

The restored configuration will not be active until you restart the appliance, and it is important not to select Save configuration if you restart it from the GUI or else the empty configuration will overwrite what you have just restored.

Also note that the IP address configuration will match the original appliance so if you have left the other one online for forensics but prevented it from accessing the internet then you should either change its IP address to avoid conflict or move it to another network.

image

If you’re staging the new appliance and have not given it network connectivity then the following are useful commands to verify the restored configuration:

  • show ip – list the NSIP, SNIP, SIP, MIP, VIP IP addresses
  • show lb vserver – displays the Load Balancing Virtual Servers details
  • show lb vserver -summary -fullvalues – displays the list of Load Balancing Virtual Servers without truncating names (use | grep to add specific values to list such as UP or DOWN)
  • show ssl certKey – to list all certificates on the appliance
  • show ssl certLink – to list the links between Root CA and Subordinate CA certificates

It’s important to note that the previously installed client certificates will be present, but you should deem these as being compromised so proceed to revoke, reissue and reinstall those certificates.

Configuring Content-Security-Policy HTTP Response Header on Citrix ADC for Citrix Apps and Desktops with DUO integration


I was recently asked to troubleshoot an issue where an administrator implemented the Content-Security-Policy HTTP Response Header on a Citrix ADC for Citrix Apps and Desktops with DUO integration and immediately noticed that users were no longer able to successfully log onto the portal. The following outlines the behavior and steps to remediate the issue.

Problem

The following rewrite action, policy and binding were configured on the Citrix ADC (formerly known as Citrix NetScaler) to secure the published portal:

add rewrite action rw_act_insert_Content_security_policy insert_http_header Content-Security-Policy "\"default-src \'self\' ; script-src \'self\' \'unsafe-inline\' \'unsafe-eval\' ; style-src \'self\' \'unsafe-inline\' \'unsafe-eval\'; img-src \'self\' data:\""

add rewrite policy rw_pol_insert_Content_security_policy "HTTP.RES.HEADER(\"Content-Security-Policy\").EXISTS.NOT" rw_act_insert_Content_security_policy

bind vpn vserver _XD_10.0.1.11_443 -policy rw_pol_insert_Content_security_policy -type RESPONSE -priority 160

The login page presented by the Citrix ADC displays and functions properly:

image

However, the 2FA authentication page that is supposed to present the DUO multifactor options for sending the user a push, calling their mobile or entering a passcode does not load, as the redirect appears to fail:

image

Solution

The best way to troubleshoot such an issue when implementing the Content-Security-Policy HTTP Response Header on Citrix ADC is to switch the rewrite action to insert a Content-Security-Policy-Report-Only header instead of the Content-Security-Policy header as this would simply report the expected action and results rather than enforce them. To enable this feature, simply change the field from Content-Security-Policy to Content-Security-Policy-Report-Only as shown in the following screenshot:

image

With the reporting mode enabled, proceed to relaunch the Google Chrome browser, click on the horizontal ellipsis from the top right corner, select More Tools, then Developer tools (CTRL+SHIFT+I):

image

With the Developer tools window opened, select the Network tab, click on the ellipsis at the top right corner and select Show console drawer:

image

The Network tab will display all the resources that are loaded as you navigate through the portal, and the console will output verbose details of the Content Security Policy being applied, but will only report the results rather than enforce them. Note the output shown in the console:

The Content Security Policy 'default-src 'self' ; script-src 'self''unsafe-inline''unsafe-eval' ; style-src 'self''unsafe-inline''unsafe-eval'; img-src 'self' data:' was delivered in report-only mode, but does not specify a 'report-uri'; the policy will have no effect. Please either add a 'report-uri' directive, or deliver the policy via the 'Content-Security-Policy' header.

image

Proceed to log into the portal to replicate the issue; in the case of this example, the console displays the following output indicating that the DUO authentication would have been blocked:

[Report Only] Refused to load the script 'https://api-01b56e27.duosecurity.com/frame/hosted/Duo-Citrix-NetScaler-RfWebUI-v1.js' because it violates the following Content Security Policy directive: "script-src 'self''unsafe-inline''unsafe-eval'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.

[Report Only] Refused to frame 'https://api-01b56e27.duosecurity.com/' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'frame-src' was not explicitly set, so 'default-src' is used as a fallback.

image

Further reviewing the output:

[Report Only] Refused to load the script 'https://api-01b56e27.duosecurity.com/frame/hosted/Duo-Citrix-NetScaler-RfWebUI-v1.js' because it violates the following Content Security Policy directive: "script-src 'self''unsafe-inline''unsafe-eval'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.

[Report Only] Refused to frame 'https://api-01b56e27.duosecurity.com/' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'frame-src' was not explicitly set, so 'default-src' is used as a fallback.

… indicates that the directives we defined in our rewrite action are preventing the redirect to the DUO 2FA api-01b56e27.duosecurity.com page from being loaded:

"default-src 'self' ; script-src 'self''unsafe-inline''unsafe-eval' ; style-src 'self''unsafe-inline''unsafe-eval'; img-src 'self' data:"

To remediate the issue, simply modify the rewrite action as follows to include the domain of the page the portal is attempting to redirect the user to:

"default-src 'self' *.duosecurity.com ; script-src 'self' *.duosecurity.com 'unsafe-inline''unsafe-eval' ; style-src 'self''unsafe-inline''unsafe-eval'; img-src 'self' data:"

image

Note that I opted to include all of the subdomains because I am unsure as to whether DUO would ever change the URL in the future but in the event where you are certain this URL will not change, it would be best to specify an exact match.

With the changes made, proceed to relaunch the page to verify that the authentication page is no longer blocked and loads successfully:

image

… then change the Header Name for the rewrite action on the Citrix ADC from Content-Security-Policy-Report-Only back to Content-Security-Policy.

More information about the Content Security Policy HTTP response header can be found in Scott Helme’s article at the following URL: https://scotthelme.co.uk/content-security-policy-an-introduction/

I hope this will help anyone who may experience a similar issue when securing their web applications to score an A at: https://securityheaders.com/

Securing Citrix ADC (formerly known as NetScaler VPX) to score A+ rating on SSL Labs - February 2020


It has been a while since I’ve updated my previous posts on securing a Citrix ADC (formerly known as Citrix NetScaler) due to my absence from the workforce, so this post serves to provide the configuration required to publish a virtual server that scores an A+ on Qualys SSL Labs for the following test:

https://www.ssllabs.com/ssltest/

This post will demonstrate the process on a Citrix ADC NS13.0 47.24.nc via the command line.

Without any additional configuration, a newly published VPN Virtual Server for Citrix Virtual Apps and Desktops published by a Citrix ADC typically scores a B or lower:

image

**Note that SSL Profiles allow several SSL settings to be packaged together and applied to SSL-based Virtual Servers and Services, but they will not be demonstrated in this post.

Step #1 – Confirm that Deny SSL Renegotiation is configured as FRONTEND_CLIENT

The newer versions of the Citrix ADCs typically have the Deny SSL Renegotiation already configured appropriately but it is always good practice to confirm.

Navigate to Traffic Management > SSL > Change advanced SSL settings:

image

Confirm that the Deny SSL Renegotiation setting is set to FRONTEND_CLIENT:

image

Step #2 – Confirm that all available ECC Curves are bound to the virtual server

SSL Virtual Servers created on newer versions of the Citrix ADC such as the version I listed above will automatically have ECC Curves bound to them. However, if the appliance was upgraded from an older version, then the ECC Curves might not be bound.

Navigate into the properties of the virtual server:

image

Scroll down to the ECC Curve section and confirm that all the available options are bound to the virtual server:

image

image

Step #3 – Turn off SSLv3, TLSv1, TLSv11 and enable TLSv12 and TLSv13

The next step is to turn off SSLv3, TLSv1 and TLSv11, and enable TLSv12 and TLSv13 on your Load Balancing Virtual Server(s) and NetScaler Gateway Virtual Servers. For the purpose of this post, we will use a Virtual Server under the Citrix Gateway (also known as a VPN Virtual Server) for the configuration.

The following screenshots shows where the settings are in the GUI for the VPN Virtual Server:

image

SSLv3 used to be enabled in the older appliances but the later ones have TLSv1, TLSv11 and TLSv12 enabled by default:

image

Either uncheck the support for SSLv3, TLSv1 and TLSv11 and enable TLSv12 and TLSv13 in the GUI, or execute the following commands in the CLI:

set ssl vserver <vpn server name> -ssl3 disabled

set ssl vserver <vpn server name> -tls1 disabled

set ssl vserver <vpn server name> -tls11 disabled

set ssl vserver <vpn server name> -tls12 enabled

set ssl vserver <vpn server name> -tls13 enabled

image

The configuration should look as such once the appropriate protocols are enabled or disabled:

image

Repeat the same process for the other Virtual Servers in the environment.
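
To verify which protocols are now enabled or disabled on a virtual server from the CLI, the SSL settings can be reviewed with the following command:

show ssl vserver <vpn server name>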

Step #4 – Create new custom Ciphers

The following is the set of SSL ciphers that will allow us to score an A+ on an SSL scan of a Citrix ADC appliance, but note that these need to be continually updated over time as what is secure today may be vulnerable tomorrow:

TLS1.3-AES256-GCM-SHA384

TLS1.3-CHACHA20-POLY1305-SHA256

TLS1.3-AES128-GCM-SHA256

TLS1.2-ECDHE-ECDSA-AES128-GCM-SHA256

TLS1.2-ECDHE-ECDSA-AES256-GCM-SHA384

TLS1.2-ECDHE-ECDSA-AES128-SHA256

TLS1.2-ECDHE-ECDSA-AES256-SHA384

TLS1-ECDHE-ECDSA-AES128-SHA

TLS1-ECDHE-ECDSA-AES256-SHA

TLS1.2-ECDHE-RSA-AES128-GCM-SHA256

TLS1.2-ECDHE-RSA-AES256-GCM-SHA384

TLS1.2-ECDHE-RSA-AES-128-SHA256

TLS1.2-ECDHE-RSA-AES-256-SHA384

TLS1-ECDHE-RSA-AES128-SHA

TLS1-ECDHE-RSA-AES256-SHA

TLS1.2-DHE-RSA-AES128-GCM-SHA256

TLS1.2-DHE-RSA-AES256-GCM-SHA384

TLS1-DHE-RSA-AES-128-CBC-SHA

TLS1-DHE-RSA-AES-256-CBC-SHA

TLS1-AES-128-CBC-SHA

TLS1-AES-256-CBC-SHA

Attempting to use the GUI to create the cipher group and add the ciphers can be time consuming and prone to errors. A more efficient way is to use the CLI and execute the following commands to create a group named Custom-VPX-Cipher containing the ciphers listed above:

add ssl cipher Custom-VPX-Cipher

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.3-AES256-GCM-SHA384 -cipherPriority 1

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.3-CHACHA20-POLY1305-SHA256 -cipherPriority 2

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.3-AES128-GCM-SHA256 -cipherPriority 3

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-ECDSA-AES128-GCM-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-ECDSA-AES256-GCM-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-ECDSA-AES128-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-ECDSA-AES256-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-ECDSA-AES128-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-ECDSA-AES256-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-RSA-AES128-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-RSA-AES256-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-AES-128-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-AES-256-CBC-SHA

image

With the above commands successfully executed, we should now see the following Cipher Group created:

image

image

image
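
The contents and priorities of the new cipher group can also be reviewed from the CLI with the following command:

show ssl cipher Custom-VPX-Cipher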

Step #5 – Bind new custom Ciphers to the Load Balancing and Gateway Virtual Server(s)

With the new cipher group created, proceed with binding it to the Load Balancing Virtual Server(s) and Citrix Gateway Virtual Server(s), along with the ECC curves from Step #2 if they are not already bound:

bind ssl vserver www.contoso.com_internal -cipherName Custom-VPX-Cipher
bind ssl vs <vpn server name> -eccCurveName ALL

image

With the new cipher group bound to the virtual servers, we can use the following command to review the bindings:

show ssl vserver <vpn server name>

image

With the new custom cipher list bound, unbind the DEFAULT list that gets bound to all virtual servers with the following command:

unbind ssl vserver <vpn server name> -cipherName DEFAULT

image

Browsing to the SSL Ciphers heading for the virtual server should now display the custom Cipher list configured:

image

Step #6 – Create a Diffie-Hellman (DH) key for Forward Secrecy

The following screenshots show where to create the Diffie-Hellman (DH) key in the GUI of the NetScaler:

Traffic Management > SSL > Create Diffie-Hellman (DH) key

image

image

The CLI command to execute to create the Diffie-Hellman (DH) key is as follows:

create ssl dhparam /nsconfig/ssl/dhkey2048.key 2048 -gen 2

image

**Note that the process could take a few minutes to complete, so wait until the green cursor display changes back to a > prompt.

Reviewing the /nsconfig/ssl directory on the NetScaler should now show the dhkey2048.key key that was created:

image
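
The same check can be performed from the CLI by dropping to the shell; the following is a quick sketch assuming the key was created with the file name used above:

shell ls -l /nsconfig/ssl/dhkey2048.key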

Step #7 – Assign Diffie-Hellman (DH) key for Forward Secrecy to Virtual Server

With the Diffie-Hellman (DH) key successfully created, proceed with assigning it to the virtual servers.

The following screenshot shows where the settings are in the GUI:

image

Execute the following command to assign the DH Key via the CLI:

set ssl vserver <vpn server name> -dh ENABLED -dhFile "/nsconfig/ssl/dhkey2048.key" -dhcount 1000

image

Step #8 – Configure Policy for Strict Transport Security – 2 Options

Option #1 – Enable on Virtual Server

As of version 12.0.35.6, a -HSTS ENABLED flag became available for Strict Transport Security, as shown here:

image

The configuration can be applied directly onto the virtual server as shown here:

image

Executing the following CLI command would configure the HSTS setting as shown above:

set ssl vserver <vpn server name> -HSTS ENABLED -maxage 157680000 -IncludeSubdomains YES

image

Option #2 – Create a Rewrite Action and Policy for Strict Transport Security

Another option is to create a rewrite action and policy, and then bind the policy to the virtual server as shown in the following:

Execute the following to create a Rewrite Action for Strict-Transport-Security:

add rewrite action act_sts_header insert_http_header Strict-Transport-Security q/"max-age=157680000"/

image

With the command above successfully executed, you should now see the following action created:

image

image

Execute the following to create a policy and assign the Rewrite Action for to the policy:

add rewrite policy pol_sts_header TRUE act_sts_header

image

image

image

With the Strict Transport Security policy created, proceed with binding it to the virtual servers with the following command:

bind vpn vserver <vpn server name> -policy pol_sts_header -priority 100 -gotoPriorityExpression END -type RESPONSE

image

With the command above successfully executed, we should now see the Response Rewrite policy bound to the virtual servers:

image

image

Completing all the steps outlined above should now allow the NetScaler site to score an A+:

image

There are additional steps that will allow you to obtain a perfect score, as the above rating indicates the Key Exchange and Cipher Strength scores fall just short. The Key Exchange can be improved by using a 4096-bit certificate rather than the 2048-bit DH key we used, and the Cipher Strength can be improved by removing 128-bit cipher support, but the latter change may sacrifice compatibility. Security is a perpetual challenge so the best approach is to constantly update the security hardening configuration on the Citrix ADC to address newly uncovered issues or outdated configuration.
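
As a rough sketch of the first suggestion, a 4096-bit RSA key can be generated directly on the appliance and then used for a new certificate signing request submitted to your CA; the file name below is only an example:

create ssl rsakey /nsconfig/ssl/portal4096.key 4096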

Also note that supporting fewer ciphers equates to less browser support, so it is important to scroll down in the results and review the Handshake Simulation heading, which lists handshake failures, so you are aware of which browsers will no longer be able to access the published web application:

image

Securing a Citrix ADC (formerly known as NetScaler VPX) to score an A rating on Security Headers - March 2020


Continuing from my previous post for securing the Citrix ADC to score an A+ via Qualys’ scan:

Securing Citrix ADC (formerly known as NetScaler VPX) to score A+ rating on SSL Labs - February 2020

http://terenceluk.blogspot.com/2020/02/securing-citrix-netscaler-vpx-to-score.html

… I would like to demonstrate how to score an A on the Security Headers (http://securityheaders.com/) scan. Without any additional configuration, a newly published Virtual Server for Citrix Virtual Apps and Desktops on a Citrix ADC with the configuration in my previous post typically scores a C or lower:

image

… with the following headers identified as missing and needing to be addressed:

Content-Security-Policy - https://scotthelme.co.uk/content-security-policy-an-introduction/

Referrer-Policy - https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Feature-Policy - https://scotthelme.co.uk/a-new-security-header-feature-policy/

The steps to remediate these issues are to create new Rewrite Actions to insert the headers, bind them to Rewrite Policies, and finally bind the policies to the appropriate internet-facing virtual server. I will demonstrate this with the same Citrix Gateway Virtual Server I used in the Qualys example:

image

**Note that the configuration for each Security Header in the examples below can and should be customized based on the requirements of the published virtual server. Please review the links I included above for each header and ensure that you understand the options and what they are used for so you can better tweak the rewrite action parameters.

Adding the Content-Security-Policy header

The rewrite action I will be using for the Content-Security-Policy header will be as follows:

add rewrite action rw_act_insert_Content_Security_Policy insert_http_header Content-Security-Policy "\"default-src \'self\' ; script-src \'self\' \'unsafe-inline\' \'unsafe-eval\' ; style-src \'self\' \'unsafe-inline\' \'unsafe-eval\'; img-src \'self\' data:\""

image

image

"default-src 'self' ; script-src 'self''unsafe-inline''unsafe-eval' ; style-src 'self''unsafe-inline''unsafe-eval'; img-src 'self' data:"

image

The rewrite policy I will be binding the rewrite action to will be as follows:

add rewrite policy rw_pol_insert_Content_Security_Policy "HTTP.RES.HEADER(\"Content-Security-Policy\").EXISTS.NOT" rw_act_insert_Content_Security_Policy

image

image

HTTP.RES.HEADER("Content-Security-Policy").EXISTS.NOT

image

The last step, binding the rewrite policy to the virtual server, will look as follows:

bind lb vserver <virtual server> -policy rw_pol_insert_Content_Security_Policy -type RESPONSE -priority 130 -gotoPriorityExpression NEXT

image

image

Note the Rewrite Policy under the Response Policies heading:

image

image

Be aware of the following configuration in the screenshot above:

Priority – The value will be dependent on what other rewrite policies are currently configured for this VPN Virtual Server.

GOTO Expression – Note that I have configured this value to NEXT instead of END because I will be adding more Rewrite actions that will have a lower priority (higher Priority number) on this VPN Virtual Server, and if I configure the value as END then the rest will not be applied.

image

Executing the Security Headers scan should now find the Content-Security-Policy header present:

image

One of the issues I’ve encountered when inserting this header into a portal integrated with a DUO MFA solution was that it was no longer able to redirect my login requests to the 2FA page. If you encounter such an issue then please refer to one of my previous posts here:

Configuring Content-Security-Policy HTTP Response Header on Citrix ADC for Citrix Apps and Desktops with DUO integration

http://terenceluk.blogspot.com/2020/02/configuring-content-security-policy.html

Adding the Referrer-Policy header

The rewrite action I will be using for the Referrer-Policy header will be as follows:

add rewrite action rw_act_insert_Referrer_Policy insert_http_header Referrer-Policy "\"strict-origin-when-cross-origin\""

image

image

"strict-origin-when-cross-origin"

image

The rewrite policy I will be binding the rewrite action to will be as follows:

add rewrite policy rw_pol_insert_Referrer_Policy "HTTP.RES.HEADER(\"Referrer-Policy\").EXISTS.NOT" rw_act_insert_Referrer_Policy

image

image

HTTP.RES.HEADER("Referrer-Policy").EXISTS.NOT

image

The last step, binding the rewrite policy to the virtual server, will look as follows:

bind lb vserver <virtual server> -policyName rw_pol_insert_Referrer_Policy -type RESPONSE -priority 140 -gotoPriorityExpression NEXT

image

image

As with the previous rewrite policy, be aware of the following configuration in the screenshot above:

Priority – The value will be dependent on what other rewrite policies are currently configured for this VPN Virtual Server.

GOTO Expression – Note that I have configured this value to NEXT instead of END because I will be adding more Rewrite actions that will have a lower priority (higher Priority number) on this VPN Virtual Server, and if I configure the value as END then the rest will not be applied.

image

Adding the Feature-Policy header

The rewrite action I will be using for the Feature-Policy header will be as follows:

add rewrite action rw_act_insert_Feature_Policy insert_http_header Feature-Policy "\"vibrate \'self\'; usermedia *; sync-xhr \'self\' https://portalURL.com\""

image

image

"vibrate 'self'; usermedia *; sync-xhr 'self'https://portalURL.com"

image

The rewrite policy I will be binding the rewrite action to will be as follows:

add rewrite policy rw_pol_insert_Feature_Policy "HTTP.RES.HEADER(\"Feature-Policy\").EXISTS.NOT" rw_act_insert_Feature_Policy

image

image

HTTP.RES.HEADER("Feature-Policy").EXISTS.NOT

image

The last step, binding the rewrite policy to the virtual server, will look as follows:

bind lb vserver <virtual server> -policyName rw_pol_insert_Feature_Policy -type RESPONSE -priority 150 -gotoPriorityExpression END

image

image

As with the previous rewrite policy, be aware of the following configuration in the screenshot above:

Priority – The value will be dependent on what other rewrite policies are currently configured for this VPN Virtual Server.

GOTO Expression – Note that I have configured this value to END instead of NEXT because this was the last Rewrite action with no other one following.

image

Adding the Content-Security-Policy, Referrer-Policy, and Feature-Policy headers all together

The following are all of the commands bundled into one. Please modify the virtual server and site name as required:

add rewrite action rw_act_insert_Content_Security_Policy insert_http_header Content-Security-Policy "\"default-src \'self\' ; script-src \'self\' \'unsafe-inline\' \'unsafe-eval\' ; style-src \'self\' \'unsafe-inline\' \'unsafe-eval\'; img-src \'self\' data:\""

add rewrite policy rw_pol_insert_Content_Security_Policy "HTTP.RES.HEADER(\"Content-Security-Policy\").EXISTS.NOT" rw_act_insert_Content_Security_Policy

bind lb vserver citrix.ccs.bm -policy rw_pol_insert_Content_Security_Policy -type RESPONSE -priority 130 -gotoPriorityExpression NEXT

add rewrite action rw_act_insert_Referrer_Policy insert_http_header Referrer-Policy "\"strict-origin-when-cross-origin\""

add rewrite policy rw_pol_insert_Referrer_Policy "HTTP.RES.HEADER(\"Referrer-Policy\").EXISTS.NOT" rw_act_insert_Referrer_Policy

bind lb vserver citrix.ccs.bm -policyName rw_pol_insert_Referrer_Policy -type RESPONSE -priority 140 -gotoPriorityExpression NEXT

add rewrite action rw_act_insert_Feature_Policy insert_http_header Feature-Policy "\"vibrate \'self\'; usermedia *; sync-xhr \'self\' https://citrix.ccs.com\""

add rewrite policy rw_pol_insert_Feature_Policy "HTTP.RES.HEADER(\"Feature-Policy\").EXISTS.NOT" rw_act_insert_Feature_Policy

bind lb vserver citrix.ccs.bm -policyName rw_pol_insert_Feature_Policy -type RESPONSE -priority 150 -gotoPriorityExpression END
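
To quickly spot-check the inserted headers without re-running the full scan, the response headers can be dumped with curl from any machine that can reach the site; the URL below is only a placeholder for your own published portal:

curl -s -D - -o /dev/null https://portal.contoso.com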

With the configuration applied, you should now see a score of A from the Security Headers scan:

image

Deploying Azure Migrate appliance on VMware vSphere 6.7 fails with "Unable to process template"


Problem

You attempt to set up the Azure Migrate appliance to perform an assessment on a VMware vSphere 6.7 environment by downloading the OVA appliance from Azure:

image

image

MicrosoftAzureMigration.ova

image

… but notice that the process fails in the vSphere client with the error message:

Unable to process template.

image

Solution

One of the common causes of this issue is using Internet Explorer as your browser, and the following is a version that displays this error:

Internet Explorer 11
Version: 11.3383.14393.0

image

One of the quickest ways to get around this is to use Google Chrome:

image

image 
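
If switching browsers is not an option, another approach that may work around browser-related OVF parsing issues is to deploy the appliance from the command line with VMware’s OVF Tool; the following is only a sketch and the credentials, vCenter name, and inventory path are placeholders that need to be replaced with your own values:

ovftool --acceptAllEulas --name=AzureMigrateAppliance MicrosoftAzureMigration.ova "vi://administrator%40vsphere.local@vcenter.contoso.com/Datacenter/host/Cluster01"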

Configuring Conditional Access Policy in Azure to prevent non Hybrid Azure AD Joined devices from accessing Exchange Online


The recent coronavirus pandemic has led many organizations to rush to provide remote access for their employees, and remote connectivity to corporate resources comes with increased security concerns. One of the common questions I’ve been asked is whether there is a way to lock down Exchange Online so that only corporate assets can access the service. The short answer is yes, but it is important to note that Office 365 was designed to be accessed in many different ways, such as an email client (e.g. Outlook), a web browser (e.g. webmail), or a mobile smartphone (e.g. iPhone and Android devices) or tablet. One of the clients I worked with was mainly concerned about Outlook access because once a user has authenticated with an Outlook client and the credentials are cached, the client will automatically connect to Exchange Online as long as the password hasn’t changed. Having gone through the process of setting up Azure’s Conditional Access Policy to achieve the desired access restriction, this post serves to demonstrate the configuration and the experience of a device that is not Hybrid Azure AD Joined.

Prerequisites

The Conditional Access Policy we’ll be configuring is dependent on the devices in the domain being Hybrid Azure AD Joined. I won’t go into the details of how to configure this but will reference the following two documents:

Tutorial: Configure hybrid Azure Active Directory join for managed domains
https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-managed-domains

Tutorial: Configure hybrid Azure Active Directory join for federated domains
https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-federated-domains
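
As a quick way to confirm whether a test device is already registered as Hybrid Azure AD Joined before building the policy, the device state can be checked locally with the following command, verifying that both AzureAdJoined and DomainJoined report YES:

dsregcmd /status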

Creating a Conditional Access Policy

Navigate to Azure Active Directory> Security> Conditional Access and click on New policy to create a new Conditional Access Policy:

image

Configuring the Users or Groups to apply the policy

Fill in a name for the Conditional Access Policy, then select Users and groups to configure who this policy will apply to. It is best to validate that the policy does what you intend it to do, so apply it to either a single account or a test group:

image

Configuring the cloud apps (Exchange Online) to apply the policy

Select Cloud apps or actions> Cloud Apps> Select apps and locate Office 365 Exchange Online:

image

Configuring the conditions to apply the policy

Select Conditions> Device platforms> Select device platforms and select Windows and macOS:

image

Note that we are restricting desktops and laptops from accessing Exchange Online; if you were to also select Android, iOS and Windows Phone then your mobile devices would no longer be able to connect, as they can be Azure AD registered but not Hybrid Azure AD Joined. See the following document for more details:

What is a device identity?
https://docs.microsoft.com/en-us/azure/active-directory/devices/overview

image

Select Conditions> Client apps (Preview)> Select the client apps this policy will apply to and select:

  • Mobile apps and desktop clients
  • Modern authentication clients
  • Exchange ActiveSync clients
  • Other clients
image

Configuring the grant access controls to apply the policy

Select Access controls> Grant> Grant access and select Require Hybrid Azure AD joined device:

image

The Require all the selected controls and Require one of the selected controls options do not matter here as we are only configuring one control, but if you choose to select more than one then you need to decide whether you want to grant access based on all of the requirements or any one of them.

With the policy configured, you can set it to Report-only, On, or Off:

image

Testing the policy with the “What If” feature

What I’ve noticed is that configuring the policy as Report-only can be misleading if you’re trying to use the What If feature, because it will not report that the configured policy is applied even if the conditions are met. The only way for the What If feature to report that the policy is applied is to enable the policy.

image

Switching the policy to On and using the What If to test the policy will allow you to confirm whether the policy is applied as anticipated:

image

Attempting to access Exchange Online with Outlook 2019 on a PC that is not Hybrid Azure AD Joined

The following is what the experience would look like for a user attempting to use Outlook on a Windows 10 device that is not Hybrid Azure AD Joined.

images

image

image

image

image

The user wouldn’t be able to add the mailbox if they attempted to use Outlook 2013 to access Exchange Online:

image

Important Items to Note

  • Configuring such a policy for Exchange Online will also block access to Teams
  • This policy does not prevent users from accessing Exchange Online with their mobile or tablet devices and if this is a requirement then I would suggest using Intune

Adding Email Address / ProxyAddress to an O365 mailbox for a user account that is synced with an on-premise Active Directory


I’ve recently been asked a few times about a common issue that many on-premise Exchange administrators encounter when transitioning to Office 365 so I thought I’d write a quick blog post outlining how to modify or add email addresses to an Office 365 mailbox for a user account that is synced with an on-premise Active Directory.

Problem

You attempt to use the Exchange Admin Center to add an additional email address or modify the primary email address of an Office 365 mailbox but receive the following error:

The operation on mailbox "<username>" failed because it's out of the current user's write scope. The action 'Set-Mailbox', 'EmailAddresses', can't be performed on the object '<username>' because the object is being synchronized from your on-premises organization. This action should be performed on the object in your on-premises organization.

image

Attempting to use the Microsoft 365 admin center yields the same results:

image

image

The operation on mailbox "TestO365" failed because it's out of the current user's write scope. The action 'Set-Mailbox', 'EmailAddresses', can't be performed on the object 'TestO365' because the object is being synchronized from your on-premises organization. This action should be performed on the object in your on-premises organization.

image

Solution

The way to add or modify email addresses for Office 365 mailboxes for user accounts that are synced with an on-premise Active Directory is to modify the proxyAddresses attribute on the user account in Active Directory:

image

Prepending the email address with SMTP: capitalized will configure the primary email address for the account:

image

Additional email address aliases can be configured with smtp: in lower case.
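
If you prefer to script the attribute change rather than edit it manually, the following is a minimal PowerShell sketch using the ActiveDirectory module; the TestO365 account and the contoso.com addresses are only examples and should be replaced with your own values:

Import-Module ActiveDirectory

# Add an additional alias (lower case smtp:) to the existing proxyAddresses values
Set-ADUser -Identity TestO365 -Add @{proxyAddresses="smtp:test.alias@contoso.com"}

# Replace the full list, making the upper case SMTP: entry the primary address
Set-ADUser -Identity TestO365 -Replace @{proxyAddresses=@("SMTP:testo365@contoso.com","smtp:test.alias@contoso.com")}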

Once the changes have been made to the account in the on-premise Active Directory, proceed to force a synchronization on the server running Azure AD Connect to push the changes to the account in Azure AD:

Start-ADSyncSyncCycle -PolicyType Delta

image

Creating a new Exchange Online transport rule with the condition "has specific properties matching these text patterns" does not allow you to configure and add a user property


Problem

You would like to configure a new transport rule in Exchange Online with one of the following conditions:

  • has specific properties including any of these words
  • has specific properties matching these text patterns

You proceed by navigating to mail flow> rules:

image

Create a new rule:

image

Then add the condition The Sender> has specific properties matching these text patterns or has specific properties including any of these words:

image

… but you notice that the select user properties window is displayed on top of the window where you are supposed to configure the User properties field:

image

You are unable to edit the User properties field unless you click on the Cancel button for the select user properties window:

image

Proceeding to configure a user property and clicking OK will bring you back to the main configuration page for the rule without applying the changes you made:

image

Clicking on the Select properties and text patterns… link returns you to the previous issue where the select user properties window is presented:

image

… but clicking the + button will bring up the User properties window behind it, where it appears to be stuck:

image

Using Chrome or Edge exhibits the same behavior, and testing this on an on-premise Exchange Server 2016 (Version 15.1 Build 1591.10) yields the same result.

Solution

This issue threw me off for quite some time as I initially thought I was doing something wrong, so after not having any luck searching through posts online, I opened a call with Office 365 support and the engineer eventually told me this was a bug and that the workaround was to use Internet Explorer. The Internet Explorer version I had on my Windows 10 laptop is shown in the screenshot below, and it indeed resolves the issue.

image
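
If a machine with Internet Explorer is not readily available, one possible alternative is to create the rule through Exchange Online PowerShell instead of the web UI. The following is only a hedged sketch with a hypothetical rule name, attribute pattern, and action, and it assumes a session has already been established with Connect-ExchangeOnline:

New-TransportRule -Name "Sales sender subject tag" -SenderADAttributeMatchesPatterns @("Department:^Sales") -PrependSubject "[Sales] "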