Renewing your GoDaddy SSL certificate and converting it from CRT/CER to PFX so it can be installed on IIS

I always have issues getting my renewed certificate converted using OpenSSL and the files that GoDaddy lets you download. In the GoDaddy download section of the site I chose IIS (I have also chosen Exchange Server before) to get the CRT file. The issue I always have is converting it to PFX so that I can install it with a private key on my IIS server. This is also relevant if you are using Azure to host your certificates, as Microsoft requires PFX certificates there.

So, having finally gotten it working today, I wanted to write this blog post to make sure I have a reliable method for getting the certificate converted with the private key. NOTE that this is for a certificate that has NOT expired.

First, download your certificate from GoDaddy to the server you have OpenSSL installed on.

Download the Certificate

Next, extract the cert to your working directory and note the path. You will use the path in your OpenSSL command.

You may see other files in there. The issue for me was that I couldn’t generate the proper private key, and the PEM file given by GoDaddy did not work in the conversion. So, here is what I had to do on the web server to export the proper private key:

In the MMC Certificate Utility, export the current certificate with the private key:

  • Choose to Export the Key and Extended Properties
  • Choose a password and set the encryption to SHA256
  • Name the File and Export it to the directory you’re working from

Next, run the following cmd in OpenSSL to extract the private key from the exported certificate. Enter the password you created during the export when prompted:
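A sketch of the command (export.pfx and key.pem are placeholder file names for the MMC export and the output key):

# Extract the unencrypted private key from the exported PFX
openssl pkcs12 -in export.pfx -nocerts -nodes -out key.pem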

Next, use that key file along with the CRT file to create the new PFX. Enter the password again when prompted:
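A sketch of the conversion (certificate.crt is the CRT you downloaded from GoDaddy):

# Combine the CRT and the extracted key into a new PFX
# optionally add: -certfile gd_bundle-g2-g1.crt  (GoDaddy intermediate bundle; file name varies)
openssl pkcs12 -export -in certificate.crt -inkey key.pem -out newcert.pfx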

You should now have the proper NEW PFX file to import into IIS or Azure or wherever you need the certificate installed with the private key! DON’T forget your password!

THANKS FOR READING!! KEEP LEARNING AND REMEMBER TO DOCUMENT SO YOU DON’T HAVE TO REMEMBER ALL THE TIME!

REFERENCES:
Extracting Certificate and Private Key Files from a .pfx File – IAM – UW-IT Wiki (washington.edu)
Convert a certificate to PFX (GoDaddy, unable to load private key) – UseIT | Roman Levchenko (rlevchenko.com)

Keep your Federation Trust up-to-date

This article came out in February, and I have been behind on my blog updates due to my current project, but I feel this post is important, so I am relaying the message I received here for your review. Thanks again for your support of this blog and its continued longevity.


Microsoft periodically refreshes certificates in Office 365 as part of our effort to maintain a highly available and secure environment. Starting January 23rd, 2021, we are making a certificate change on our Microsoft Federation Gateway every six weeks that could affect some customers, as detailed in this knowledge base article. Please note that longer term, this six-week rhythm for renewing the certificate will be shortened to daily renewals, which will further enhance the security of the environment. The good news is you can easily avoid any disruption.

Who is affected?

This certificate change can affect any customer that is using the Microsoft Federation Gateway (MFG). If you are in a hybrid configuration that relies on a Federation Trust established with MFG in the Exchange on-premises organization or if you are sharing free/busy information between two different on-premises organizations using the Microsoft Federation Gateway as a trust broker, you need to take action.

When will the change occur?

The change is scheduled to occur every six weeks to begin with, with the frequency increasing over time. You must take action to avoid any disruptions.

What type of issues will you face if no action is taken?

If you don’t take action, you won’t be able to use services that rely on the Microsoft Federation Gateway. For example:

  • A cloud user might not be able to see free/busy information for an on-premises user and vice versa.
  • MailTips might not work in a Hybrid configuration.
  • Cross-premises free/busy might stop working between organizations that have organization relationships in place.

Additionally, if you run the Test-FederationTrust cmdlet, you might receive an error message that indicates that the Delegation token has validation issues. For example, you receive an error message that resembles the following:

Id : TokenValidation
Type : Error
Message : Failed to validate delegation token.

And, you might receive one of the following error messages in the Exchange Web Services (EWS) responses:

An error occurred when processing the security tokens in the message
Autodiscover failed for email address User@contoso.com with error System.Web.Services.Protocols.SoapHeaderException: An error occurred when verifying security for the message

What action should you take?

You can use the following command on your Exchange Server to create a scheduled task to run the update process daily. This is how we recommend you keep your Federation Trust constantly updated. This will prevent you from being negatively affected by future metadata changes.
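The scheduled-task command looks along these lines (FedRefresh is an arbitrary task name; run this from an elevated Exchange Management Shell and verify against the linked Microsoft article):

$task = "Powershell.exe -NonInteractive -WindowStyle Hidden -Command 'Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; Get-FederationTrust | Set-FederationTrust -RefreshMetadata'"
schtasks /create /sc Daily /tn FedRefresh /tr $task /ru System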

If you prefer not to use a scheduled task, you can manually run the command at any time to refresh the metadata. This is not recommended, as the refresh frequency will increase from every six weeks to daily, and manually keeping up with that would be quite cumbersome.
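The manual refresh is a one-liner in the Exchange Management Shell:

Get-FederationTrust | Set-FederationTrust -RefreshMetadata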

Please note that we have seen some situations where this command must be run twice to ensure it is successful.


REFERENCES:
Keep your Federation Trust up-to-date – Microsoft Tech Community

LDLNET LLC IT Consulting Business is currently CLOSED.

I am still available for music performance through my company, but the IT side of the business has been closed, as I am on a full-time project and cannot devote any outside time to IT consulting. Thanks to everyone for supporting me over the years. My blog will still remain current, so please check here often for the latest updates on Exchange, M365, Security, Compliance, and Windows!

Error: Policy Is Missing when trying to load and run an AIP/MIP UL on-premises content scan

WORKAROUND UPDATE!
SEE AT END OF THIS POST!

There is a current BUG that has been filed with Microsoft relating to the AIP/MIP Scanner and running a Unified Labeling content scan on-premises. The main issue is with the Security and Compliance Center replicating the Policies that you create for your Sensitivity Labels in your M365 Tenant.

Since these Policies will not replicate, your content scans will fail and you will see the following error within the Azure Portal under Azure Information Protection:

Error: Policy is missing in AIP

You will be able to verify that the Policies are present in the Security and Compliance Center under the Information Protection page and the Policies Page in Azure Information Protection:

NOTE: I had created my labels and label policies in Azure AIP and migrated them to Security and Compliance Center via the following LINK

You will also notice that if you create a policy in SCC, it will NOT replicate to Azure.

Next, I checked whether the AIP Scanner service account has the policies applied to it as a member recipient of the policies. It needs to be, so that the account can apply labels to on-premises content through the policy.


Let’s continue troubleshooting…

So, the AIP Scanner account is a member of the defined policies in the Security & Compliance Center and you are still having issues:

  1. Is the AIP Scanner service started?
  2. If the answer is no, start it
  3. From PowerShell run the following:
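As a sketch (assuming a recent UL client, whose AzureInformationProtection module includes a status cmdlet):

# Check the local scanner status
Get-AIPScannerStatus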

Sample Output

It says it is scanning, but you are not getting results, AND you still have that Error: Policy is missing statement in the Nodes tab of AIP.

The next thing to verify is whether or not the policy is replicating from SCC to Azure. This is done through PowerShell by running the following:

Connect to SCC PowerShell
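A sketch using the Exchange Online Management module (the admin UPN is a placeholder; the older remote PowerShell method works too):

Install-Module -Name ExchangeOnlineManagement   # if not already installed
Connect-IPPSSession -UserPrincipalName admin@tenant.onmicrosoft.com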

Check the policy replication status
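From the SCC session, a sketch of the check (DistributionStatus should read Success; Pending is the symptom here):

Get-LabelPolicy | Format-List Name,DistributionStatus,WhenChanged,WhenCreated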

Sample Output
Distribution Status is in Pending

Normal replication takes up to 24 hours for a change or policy addition. So, if your WhenChanged or WhenCreated values are more than 24 hours old, the policies are NOT replicating. You can further verify this by running the following:
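A sketch, assuming the distribution detail is exposed on the policy object (the policy name is a placeholder):

Get-LabelPolicy -Identity "Global Policy" | Format-List Name,DistributionStatus,DistributionResults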

Sample Output
Replication Taking Longer Than Expected Error

What do I do next?

If you have this error, it would be best to log a support call with Microsoft and explain that you have the AIP UL policy replication error. My sources say this is a known issue between the SCC and Azure that will be remediated by the end of October.

So, in the meantime, I guess we will wait!


WORKAROUND

After troubleshooting this issue with some of my Microsoft colleagues, I was able to get the scanner to start scanning properly without the error being listed in the Azure portal. Here are the steps.

On the scanner node, right-click a file or folder and choose to protect it:

Next, within the AIP Application, choose Help and Feedback

Next, choose Reset Settings

Click Continue

Once completed, click Close, then exit the AIP application. This clears all the registry settings within the scanner node.

Now you will want to reset all the local files for the scanner

First, stop the scanner and network discovery services

Next, navigate to the following folder for the local account that is used for AIP scanner. Example – C:\Users\AIPScanner\AppData\Local\Microsoft\MSIP
Rename or Delete the MIP folder in that MSIP directory.
(I renamed my folder to mip-old2)


Restart the services you stopped
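Putting the whole reset together, a sketch (the service names are assumptions; confirm yours with Get-Service, and substitute your scanner service account's profile path):

# Stop the scanner and network discovery services
Stop-Service AIPScanner
Stop-Service AIPNetworkDiscovery -ErrorAction SilentlyContinue   # assumed name

# Rename the local mip cache folder
Rename-Item "C:\Users\AIPScanner\AppData\Local\Microsoft\MSIP\mip" "mip-old2"

# Restart the services
Start-Service AIPNetworkDiscovery -ErrorAction SilentlyContinue
Start-Service AIPScanner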

You should now see the scanner as Running and Working within the Azure Portal. No more errors should be listed.

Thanks to Angel Marroquin at Microsoft for the assistance on this workaround!


THANKS FOR VIEWING!
KEEP THE COMMENTS FLOWING!

REFERENCES:
Migrate AIP Policies
AIP FAQ

Installation and Configuration of Azure Information Protection Unified Labels Scanner

With the release of Unified Labeling in Azure and M365, there is now a way to protect and label your data appropriately for confidentiality and encryption, for your file shares and files on your on-premises devices. The following shows how to install the latest AIP_UL client and configure it in Azure to apply those Unified Labeling Policies.

This is a detailed process, and I had some issues myself getting it simplified. I will do my best here to make this as smooth as possible, with as many reference documents as I can include. Always feel free to comment, as this information is ever-changing as Microsoft updates the offering.

Prerequisites

Please refer to this document for a full list of pre-requisites before deploying the scanner:

https://docs.microsoft.com/en-us/azure/information-protection/deploy-aip-scanner-prereqs

For the basics we have the following:

The prerequisites below are still required for successful AIP scanner installation.

  • A Windows Server 2012 R2 or greater Server to run the service
    • Minimum 4 CPU and 4GB RAM physical or virtual.
      NOTE: More RAM is better. The scanner will allocate RAM at 2.5-3 times the size of all files being scanned in parallel. Thus, if you scan 40 files of 20 MB each at the same time, it should take about 2.5 × 20 MB × 40 = 2 GB of RAM. However, if you have one big 1 GB file, it can take 3 GB of RAM just for that file.
  • Internet connectivity necessary for Azure Information Protection
  • A SQL Server 2012+ local or remote instance (Any version from Express or better is supported)
    • Sysadmin role needed to install scanner service (the user running Install-AIPScanner, not the service account)

      NOTE: If using SQL Server Express, the SQL Instance name is ServerName\SQLExpress.

      NOTE: At this time, a different SQL instance is needed for each AIP Scanner node.
  • Service account created in On Premises AD (I will call this account AIPScanner in this document).
    • Service requires Log on locally right and Log on as a service right (the second will be given during scanner service install).
    • Service account requires Read permissions to each repository for discovery and Read/Write permissions for classification/protection.
  • AzInfoProtection_UL.exe is available on the Microsoft Download Center (The scanner bits are included with the AIP Client)
  • The Azure AD Preview PowerShell module. From the machine you’re installing AIP Scanner on, run the following from an Administrator PowerShell:
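A sketch of the module install (from the PowerShell Gallery):

Install-Module -Name AzureADPreview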

Configure the scanner in the Azure portal

Before you install the scanner, or upgrade it from an older general availability version, configure or verify your scanner settings in the Azure Information Protection area of the Azure portal.

To configure your scanner:

  1. Sign in to the Azure portal with one of the following roles:
    • Compliance administrator
    • Compliance data administrator
    • Security administrator
    • Global administrator

      Then, navigate to the Azure Information Protection pane. For example, in the search box for resources, services, and docs, start typing Information and select Azure Information Protection.
  2. Create a scanner cluster. This cluster defines your scanner and is used to identify the scanner instance, such as during installation, upgrades, and other processes.
  3. Scan your network for risky repositories. Create a network scan job to scan a specified IP address or range, and provide a list of risky repositories that may contain sensitive content you’ll want to secure. Run your network scan job and then analyze any risky repositories found.
  4. Create a content scan job to define the repositories you want to scan.

Create a scanner cluster

  1. From the Scanner menu on the left, select Clusters.
  2. On the Azure Information Protection – Clusters pane, select Add.
  3. On the Add a new cluster pane, enter a meaningful name for the scanner, and an optional description. The cluster name is used to identify the scanner’s configurations and repositories. For example, you might enter Europe to identify the geographical locations of the data repositories you want to scan. You’ll use this name later on to identify where you want to install or upgrade your scanner.
  4. Select Save to save your changes.

Create a network scan job (public preview)

Starting in version 2.8.85.0, you can scan your network for risky repositories. Add one or more of the repositories found to a content scan job to scan them for sensitive content.

Note: The network discovery interface is currently in gradual deployment and will be available in all regions by September 15, 2020.

Network discovery prerequisites

Prerequisite: Install the Network Discovery service
Description: If you’ve recently upgraded your scanner, you may still need to install the Network Discovery service. Run the Install-MIPNetworkDiscovery cmdlet to enable network scan jobs.

Prerequisite: Azure Information Protection analytics
Description: Make sure that you have Azure Information Protection analytics enabled. In the Azure portal, go to Azure Information Protection > Manage > Configure analytics (Preview). For more information, see Central reporting for Azure Information Protection (public preview).

Creating a network scan job

  1. Log in to the Azure portal, and go to Azure Information Protection. Under the Scanner menu on the left, select Network scan jobs (Preview).
  2. On the Azure Information Protection – Network scan jobs pane, select Add.
  3. On the Add a new network scan job page, define the following settings:

    Network scan job name: Enter a meaningful name for this job. This field is required.
    Description: Enter a meaningful description.
    Select the cluster: From the dropdown, select the cluster you want to use to scan the configured network locations.

    Tip: When selecting a cluster, make sure that the nodes in the cluster you assign can access the configured IP ranges via SMB.

    Configure IP ranges to discover: Click to define an IP address or range.

    In the Choose IP ranges pane, enter an optional name, and then a start IP address and end IP address for your range.

    Tip: To scan a specific IP address only, enter the identical IP address in both the Start IP and End IP fields.

    Set schedule: Define how often you want this network scan job to run.

    If you select Weekly, the Run network scan job on setting appears. Select the days of the week where you want the network scan job to run.
    Set start time (UTC): Define the date and time that you want this network scan job to start running. If you’ve selected to run the job daily, weekly, or monthly, the job will run at the defined time, at the recurrence you’ve selected.

    Note: Be careful when setting the date to any days at the end of the month. If you select 31, the network scan job will not run in any month that has 30 days or fewer.
  4. Select Save to save your changes.

 Tip: If you want to run the same network scan using a different scanner, change the cluster defined in the network scan job. Return to the Network scan jobs pane, and select Assign to cluster to select a different cluster now, or Unassign cluster to make additional changes later.

Analyze risky repositories found

Repositories found, either by a network scan job, a content scan job, or by user access detected in log files, are aggregated and listed on the Scanner > Repositories pane.

If you’ve defined a network scan job and have set it to run at a specific date and time, wait until it’s finished running to check for results. You can also return here after running a content scan job to view updated data.

  1. Under the Scanner menu on the left, select Repositories.
    The repositories found are shown as follows:
    • The Repositories by status graph shows how many repositories are already configured for a content scan job, and how many are not.
    • The Top 10 unmanaged repositories by access graph lists the top 10 repositories that are not currently assigned to a content scan job, as well as details about their access levels. Access levels can indicate how risky your repositories are.
  2. Do any of the following:
    • Select Columns to change the table columns displayed.
    • If your scanner has recently run a network scan, select Refresh to refresh the page.
    • Select one or more repositories listed in the table, and then select Assign Selected Items to assign them to a content scan job.
    • The filter row shows any filtering criteria currently applied. Select any of the criteria shown to modify its settings, or select Add Filter to add new filtering criteria. Select Filter to apply your changes and refresh the table with the updated filter.
    • In the top-right corner of the unmanaged repositories graph, click the Log Analytics icon to jump to Log Analytics data for these repositories.

Repositories with public access

Repositories where Public access is found to have read or read/write capabilities may have sensitive content that must be secured. If Public access is false, the repository is not accessible by the public at all.

Public access to a repository is only reported if you’ve set a weak account in the StandardDomainsUserAccount parameter of the Install-MIPNetworkDiscovery or Set-MIPNetworkDiscovery cmdlets.

  • The accounts defined in these parameters are used to simulate the access of a weak user to the repository. If the weak user defined there can access the repository, this means that the repository can be accessed publicly.
  • To ensure that public access is reported correctly, make sure that the user specified in these parameters is a member of the Domain Users group only.

Create a content scan job

Deep dive into your content to scan specific repositories for sensitive content.

You may want to do this only after running a network scan job to analyze the repositories in your network, but can also define your repositories yourself.

  1. Under the Scanner menu on the left, select Content scan jobs.
  2. On the Azure Information Protection – Content scan jobs pane, select Add.
  3. For this initial configuration, configure the following settings, and then select Save but do not close the pane.

    Content scan job settings
    – Schedule: Keep the default of Manual
    – Info types to be discovered: Change to Policy only
    – Configure repositories: Do not configure at this time because the content scan job must first be saved.

    Policy enforcement
    – Enforce: Select Off
    – Label files based on content: Keep the default of On
    – Default label: Keep the default of Policy default
    – Relabel files: Keep the default of Off

    Configure file settings
    – Preserve “Date modified”, “Last modified” and “Modified by”: Keep the default of On
    – File types to scan: Keep the default file types for Exclude
    – Default owner: Keep the default of Scanner Account
  4. Now that the content scan job is created and saved, you’re ready to return to the Configure repositories option to specify the data stores to be scanned. Specify UNC paths, and SharePoint Server URLs for SharePoint on-premises document libraries and folders. 

    Note: SharePoint Server 2019, SharePoint Server 2016, and SharePoint Server 2013 are supported for SharePoint. SharePoint Server 2010 is also supported when you have extended support for this version of SharePoint.

    To add your first data store, while on the Add a new content scan job pane, select Configure repositories to open the Repositories pane:

    Configure data repositories for the Azure Information Protection scanner
    1. On the Repositories pane, select Add:

      Add data repository for the Azure Information Protection scanner
    2. On the Repository pane, specify the path for the data repository, and then select Save.
      • For a network share, use \\Server\Folder.
      • For a SharePoint library, use http://sharepoint.contoso.com/Shared%20Documents/Folder.
      • For a local path: C:\Folder
      • For a UNC path: \\Server\Folder 

        Note: Wildcards are not supported and WebDav locations are not supported.
    3. If you add a SharePoint path for Shared Documents:
      • Specify Shared Documents in the path when you want to scan all documents and all folders from Shared Documents.
        For example: http://sp2013/SharedDocuments
      • Specify Documents in the path when you want to scan all documents and all folders from a subfolder under Shared Documents.
        For example: http://sp2013/Documents/SalesReports
        or, specify only the FQDN of your SharePoint server,
        For example: http://sp2013, to discover and scan all SharePoint sites and subsites under that URL.
      • Grant the scanner account Site Collection Auditor rights to enable this.
      • For the remaining settings on this pane, do not change them for this initial configuration, but keep them as Content scan job default. The default setting means that the data repository inherits the settings from the content scan job.

        Use the following syntax when adding SharePoint paths:
        Root path: http://<SharePoint server name>

        Scans all sites, including any site collections allowed for the scanner user.
        Requires additional permissions to automatically discover root content

        Specific SharePoint subsite or collection:
        One of the following:
        – http://<SharePoint server name>/<subsite name>
        – http://<SharePoint server name>/<site collection name>/<site name>

        Requires additional permissions to automatically discover site collection content

        Specific SharePoint library:
        One of the following:
        – http://<SharePoint server name>/<library name>
        – http://<SharePoint server name>/.../<library name>

        Specific SharePoint folder: http://<SharePoint server name>/.../<folder name>
  5. Repeat the previous steps to add as many repositories as needed.
  6. When you’re done, close both the Repositories and Content scan job panes.

Back on the Azure Information Protection – Content scan job pane, your content scan job name is displayed, together with the SCHEDULE column showing Manual and the ENFORCE column blank.

You’re now ready to install the scanner with the content scanner job that you’ve created. Continue with Scanner Installation.

Scanner Installation

Now that we have verified all prerequisites and configured AIP in Azure, we can go through the basic scanner install.

  1. Log onto the server where you will install the AIP Scanner service using an account that is a local administrator of the server and has permission to write to the SQL Server master database. (more restrictive scenarios are documented in the official documentation)
  2. Run AzInfoProtection_UL.exe on the server and step through the client install (this also drops the AIP Scanner bits).
    WARNING: This blog is based on the current version of the AIP Client.  If you want to update to the Preview client, please install the GA first and then install the preview client and use Update-AIPScanner after installation.
  3. Next, open an Administrative PowerShell prompt.
  4. At the PowerShell prompt, type the following command and press Enter:

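A sketch of the install command (SRV-AIP01\SQLEXPRESS is a placeholder SQL instance, and Europe is the cluster name created earlier in the portal; older client versions call this parameter -Profile instead of -Cluster):

Install-AIPScanner -SqlServerInstance SRV-AIP01\SQLEXPRESS -Cluster Europe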
Output from Scanner Installation in PowerShell

Create Cloud Service Account

If you are not using Azure AD Sync for your Service account, you will need to create a service account in the cloud tenant to use for AIP authentication. If you have synced your on premises service account, you can skip this task.

  1. Run the command below to connect to Azure AD.
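A sketch (this uses the AzureADPreview module installed earlier):

Connect-AzureAD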

  2. When prompted, provide tenant Global Admin credentials.
  3. To create an account in the cloud, you must first define a password profile object. Run the commands below to define this object.
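A sketch of the password profile object (property names per the AzureAD module):

$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$PasswordProfile.Password = Read-Host "Enter a password for the cloud service account"
$PasswordProfile.ForceChangePasswordNextLogin = $false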

  4. When prompted, enter a password for the cloud service account.
  5. To create the account, run the commands below.
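A sketch (the AIPScanner name matches the service account naming used earlier in this post; the tenant name is read in for the UPN):

$Tenant = Read-Host "Enter the tenant name (e.g. tenant.onmicrosoft.com)"
New-AzureADUser -DisplayName "AIP Scanner" -MailNickName "AIPScanner" -UserPrincipalName "AIPScanner@$Tenant" -AccountEnabled $true -PasswordProfile $PasswordProfile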

  6. When prompted, enter the tenant name you want to use for the UserPrincipalName for the cloud service account (e.g. tenant.onmicrosoft.com).

Creating the Azure AD Application in Azure

Next, we will configure the App Registration for the Web App that is required to run the Set-AIPAuthentication command that will be used to get the authentication token. We will also assign the necessary Oauth2Permissions for the Web App to have delegated rights to the App.

  1. Run the commands below to create the Web App, associated Service Principal, and key password.
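A sketch adapted from Microsoft's AIP scanner lab material; AIPOnBehalfOf is a placeholder display name:

New-AzureADApplication -DisplayName AIPOnBehalfOf -ReplyUrls http://localhost
$WebApp = Get-AzureADApplication -Filter "DisplayName eq 'AIPOnBehalfOf'"
New-AzureADServicePrincipal -AppId $WebApp.AppId
$WebAppKey = New-Guid
$Date = Get-Date
New-AzureADApplicationPasswordCredential -ObjectId $WebApp.ObjectId -StartDate $Date -EndDate $Date.AddYears(1) -Value $WebAppKey.Guid -CustomKeyIdentifier "AIPClient"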

  2. Next, we need to run some commands to build the RequiredResourceAccess object that is needed to automate delegation of permissions for the native application.
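Continuing the sketch, the web app's delegated permission is wrapped into a RequiredResourceAccess object:

$AIPServicePrincipal = Get-AzureADServicePrincipal -All $true | Where-Object {$_.DisplayName -eq 'AIPOnBehalfOf'}
$AIPPermissions = $AIPServicePrincipal | Select-Object -ExpandProperty Oauth2Permissions
$Scope = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList $AIPPermissions.Id,"Scope"
$Access = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$Access.ResourceAppId = $WebApp.AppId
$Access.ResourceAccess = $Scope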

  3. Now we can create the App and associated Service Principal using the commands below.
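And the native (public client) app, again with a placeholder display name of AIPClient:

New-AzureADApplication -DisplayName AIPClient -ReplyUrls http://localhost -RequiredResourceAccess $Access -PublicClient $true
$NativeApp = Get-AzureADApplication -Filter "DisplayName eq 'AIPClient'"
New-AzureADServicePrincipal -AppId $NativeApp.AppId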

Authenticating as the AIP Scanner Service

In this task, we will use the command created previously to authenticate the AIP Scanner to the AIP Service.

  1. Open PowerShell using Run as a different user, with the on-premises scanner service account (which should have the right to Run as Administrator).

  2. Run the commands in the following PowerShell session with the Run as Administrator option, which is required for the OnBehalfOf parameter.
    • The first command creates a PSCredential object and stores the specified Windows user name and password in the $pscreds variable. When you run this command, you are prompted for the password for the user name that you specified.
    • The second command acquires an access token that is combined with the application so that the token becomes valid for 1 year, 2 years, or never expires, according to your configuration of the registered app in Azure AD. The user name of scanner@contoso.com sets the user context to download labels and label policies from your labeling management center, such as the Office 365 Security & Compliance Center.
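A sketch matching the app registration above (note that newer UL clients use the -AppId/-AppSecret/-TenantId/-DelegatedUser parameter set instead; check the Set-AIPAuthentication reference below for your client version):

$pscreds = Get-Credential DOMAIN\scanner
Set-AIPAuthentication -WebAppId $WebApp.AppId -WebAppKey $WebAppKey.Guid -NativeAppId $NativeApp.AppId -OnBehalfOf $pscreds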

Successful OUTPUT:
Acquired application access token on behalf of DOMAIN\scanner

  3. The last step is to restart the AIP Scanner service:
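A sketch (AIPScanner is the service name on my install; verify with Get-Service):

Restart-Service -Name AIPScanner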

Look to these reference documents for further details:
Set-AIPAuthentication
Get Azure AD Token for the AIP Scanner

Configure the scanner to apply classification and protection

The default settings configure the scanner to run once, and in reporting-only mode.

To change these settings, edit the content scan job:

  1. In the Azure portal, on the Azure Information Protection – Content scan jobs pane, select the cluster and content scan job to edit it.
  2. On the Content scan job pane, change the following, and then select Save:
    • From the Content scan job section: Change the Schedule to Always
    • From the Policy enforcement section: Change Enforce to On 

      Tip: You may want to change other settings on this pane, such as whether file attributes are changed and whether the scanner can relabel files. Use the information popup help to learn more information about each configuration setting.
  3. Make a note of the current time and start the scanner again from the Azure Information Protection – Content scan jobs pane:

    Initiate scan for the Azure Information Protection scanner

    Alternatively, run the following command in your PowerShell session:
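The cmdlet here is Start-AIPScan, from the AzureInformationProtection module installed with the client:

Start-AIPScan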

The scanner is now scheduled to run continuously. When the scanner works its way through all configured files, it automatically starts a new cycle so that any new and changed files are discovered.

MUCH MORE TO COME! CHECK OFTEN AND SEND COMMENTS!

REFERENCES:
AIP Prerequisites
Install and Configure AIP UL Application
AIP UL Client Download
AIP Classic Client Express Installation

Remote Desktop Licensing Mode is Not Configured when configuring Remote Desktop Services

I recently set up a backend RDP server so that I could test our remote services. I went through the installation process and installed my RDP license on the server successfully. The problem was that I was getting an error when logging onto the server locally to check the status of the RDP server:

remote desktop licensing mode is not configured
Error Example for RDP Licensing

I also had Events within Event Viewer:

Log Name: System
Source: Microsoft-Windows-TerminalServices-Licensing
Date: 6/24/2020 3:44:16 PM
Event ID: 18
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: SRV2016-02.ldlnet.net
Description:
The Remote Desktop license server “SRV2016-02” has not been activated and therefore will only issue temporary licenses. To issue permanent licenses, the Remote Desktop license server must be activated.

Usually, this error appears as a notification popup in the bottom right-hand corner of the screen saying:

Remote Desktop licensing mode is not configured
Remote Desktop Services will stop working in xx days. On the RD Connection Broker server, use Server Manager to specify the Remote Desktop licensing mode and the license server.

And when you click on this notification popup, it doesn’t redirect you anywhere; it simply disappears, which is quite frustrating.

I thought I had this set properly, but the RDP Licensing Diagnoser told me that I needed to choose the licensing configuration to distribute: Per Device OR Per User. The licenses I installed were per device, so I did some research.

I found out that this could be configured via the Registry, PowerShell, or GPO. I chose GPO, since that would always apply to the servers in my domain.


How To configure the Remote Desktop Licensing Mode through the Registry

Here’s how to change the licensing mode for Remote Desktop session host using the registry editor and get rid of the error message Remote Desktop Services will stop working in xx days:

First, press Windows + R, type regedit in the Run dialog box, and press Enter.


Next, in the left pane of the Registry Editor, navigate to the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM\Licensing Core

Next, in the right pane, double-click the LicensingMode value and change its Value data according to your requirement:

Set the Value data 2 for Per Device RDS licensing mode
Set the Value data 4 for Per User RDS licensing mode


Finally, click on the OK button to save the changes.

Now, restart the computer and check whether the Remote Desktop licensing mode is not configured issue has been resolved.

Once you have changed the licensing mode, everything will be reported accurately and the Remote Desktop session host will recognize the licensing configuration.


How To configure the Remote Desktop Licensing Mode through a Group Policy Object (GPO)

First, log on to a machine that has the Group Policy management tools installed, press Windows + R, type gpmc.msc (the Group Policy Management Console), and press Enter.


Within Group Policy Management, create a new GPO and link it at the level you need (in my case, the domain level). Then edit the GPO and navigate to:

Computer Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Licensing.

change remote desktop licensing mode

Next, double-click the “Use the specified Remote Desktop license servers” setting and select Enabled. Then enter the names of the license servers (host name or IP address) and click OK.


Similarly, double-click the “Set the Remote Desktop licensing mode” setting and select Enabled. Then set the licensing mode (Per Device or Per User) and click OK.


Once all these changes are done, close the Group Policy editor, then go to the RDP licensing server and run gpupdate /force to refresh Group Policy.

Now, when you open the Remote Desktop Licensing Diagnoser, you shouldn’t see any errors like Remote Desktop licensing mode is not configured, or any other issues regarding your licenses.

Corrected Licensing Diagnoser Result

How To configure Remote Desktop Licensing Mode through PowerShell

You can also use PowerShell to set the Licensing Mode via the Set-RDLicenseConfiguration cmdlet from the RemoteDesktop PowerShell Module which is installed with Remote Desktop Services:
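A sketch with placeholder server names (run where the RemoteDesktop module is available; my licenses were per device):

Set-RDLicenseConfiguration -LicenseServer @("rds-lic.ldlnet.net") -Mode PerDevice -ConnectionBroker "rds-broker.ldlnet.net"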


KEEP POSITIVITY ON YOUR SIDE!
CONTINUE TO LEARN!

REFERENCES:
How to Fix Remote Desktop Licensing Mode is Not Configured
Set-RDLicenseConfiguration

Windows Server Core – How to have PowerShell automatically start when logging onto the session.

In my environment, I have a Windows Server (2019) Core edition server installed with Exchange 2019. Most of the time, I have to get on the server to run PowerShell commands for maintenance purposes, etc…

Well, by default, Windows Server Core opens the command prompt when you log on, and then I have to manually open PowerShell from there to run cmdlets, etc.

However, if you would like to change the default shell from cmd to PowerShell, you can do so by changing a Registry value.

The Registry value that I’m talking about is Shell, located under the following key: HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon

Change the Shell Value in the Registry

The easiest way I see to change the value is to use the Set-ItemProperty cmdlet within PowerShell.

Open Windows PowerShell from the Server Core command prompt. You can do this by typing PowerShell at the command prompt.

Then, enter the following command on PowerShell console and hit enter:
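A sketch of the change:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name Shell -Value 'PowerShell.exe'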

Once completed, you will need to reboot the computer from PowerShell:
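For example:

Restart-Computer -Force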

When the computer has rebooted and you have logged on, PowerShell should load by default instead of Command Prompt.

EVEN MORE INFORMATION

Now, since I have an Exchange Server installed on this server, there is a command in the Exchange Bin directory called LaunchEMS.cmd that will load the Exchange Management Shell for you. So instead of loading just PowerShell, I tell Winlogon to load the Exchange Management Shell so that I do not have to do any additional typing or searching for EMS on the box. Remember, Server Core has no GUI!

I run the same commands as above, but just change the value to LaunchEMS.cmd
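A sketch (LaunchEMS.cmd resolves via the system path on an Exchange server; otherwise use the full path to the Bin directory):

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name Shell -Value 'LaunchEMS.cmd'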

Then Restart the Computer:

Once Rebooted, you can logon and EMS will be the only window prompt that loads in the shell!

Exchange Management Shell loads when you logon

NOTE: You can always run cmd from the prompt to open Command Prompt and also run PowerShell.exe to open regular PowerShell from the EMS Session Window.

REMAIN POSITIVE!
THANKS FOR READING!

REFERENCES:
Windows Server Core: How to start PowerShell by Default

Troubleshooting OOF/OOO (Out of Office) Replies in Exchange and Exchange Online

Since the COVID-19 virus has managed to get me laid off and not working, I have not had too much to post in the past months. And although my optimism has wavered greatly, I will remain positive and hope that someone will see the value I can bring to their organization in the near future.

With that said, I wanted to repost this article that was sent to me as this issue arises quite often in the workplace when someone has left for vacation and is not in the office. So, here it is:

Understanding and troubleshooting Out of Office (OOF) replies

Out of Office replies can sometimes be a bit of a mystery to people; how do they work? What do you do if they don’t work? In this blog post, we will discuss the bits and pieces of Out of Office and some of the main reasons why an Out of Office (aka OOF) reply might not get delivered to users. Note that while we are writing this from the viewpoint of an Exchange Online configuration, many of the same things apply to on-premises configurations also. By the way – did you ever wonder why “OOF” is used instead of “OOO”? If you did, see this!

What is an Out of Office reply?

Out of Office replies are also known as OOF (or OOO) replies or automatic replies. They are Inbox rules that are set in the user’s mailbox by the client. OOF rules are server-side rules, so the response is sent regardless of whether a client is running, or not.

There are several ways of setting up automatic replies:

First, it can be set up as an automatic reply feature from Outlook, like this. It can also be configured using other clients, such as Outlook on the web (OWA), or via the PowerShell command Set-MailboxAutoReplyConfiguration, as sketched below. Admins can also set up OOF replies on behalf of (forgetful) users from the M365 Admin Portal.
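A sketch of the PowerShell route (the identity, dates, and messages are placeholders):

Set-MailboxAutoReplyConfiguration -Identity user@contoso.com -AutoReplyState Scheduled -StartTime "7/1/2020 08:00" -EndTime "7/10/2020 17:00" -InternalMessage "I am out of the office until July 10th." -ExternalMessage "I am out of the office until July 10th."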

Other than using built-in OOF functionality, another thing people sometimes do is use rules to create an Out of Office message while they are away.

By design, Exchange Online Protection uses the high risk delivery pool (HRDP) to send the out of office replies, because they are lower priority messages.

Types of OOF rules

There are three types of OOF rules: Internal, External and Known Senders (Contact list). They are stored in the mailbox with the names in the following table. If a mailbox has only an Internal OOF set, no external rule is created and the mailbox will have just one OOF rule.

Type             Message Class                PR_RULE_MSG_NAME
Internal         IPM.Rule.Version2.Message    Microsoft.Exchange.OOF.InternalSenders.Global
External         IPM.Rule.Version2.Message    Microsoft.Exchange.OOF.AllExternalSenders.Global
Known External   IPM.ExtendedRule.Message     Microsoft.Exchange.OOF.KnownExternalSenders.Global

Note: Apart from OOF rule, other rules like the Junk Email rule will also have the “IPM.ExtendedRule.Message”  message class; the MSG_NAME will determine what the rule is for.

OOF rule details

All Inbox rules can be viewed using MFCMapi tool:

Logon > profile that you are accessing > Top of information store > Inbox > right click ‘Open associated contents table’. They are listed under the Message Class column. All Inbox rules will have the same message class IPM.Rule.Version2.Message and there is one message class and name for each Inbox rule.

For all rules, the name of the rule is stored in the PR_RULE_MSG_NAME property. So, if there are 4 Inbox rules, there will be 4 IPM.Rule.Version2.Message entries, one for each rule, with the name of each rule stored in PR_RULE_MSG_NAME.

OOF rules in MFCMapi:

OOF01.jpg

And OOF rule templates in MFCMapi:

OOF02.jpg

History of OOF replies

OOF response is sent once per recipient. Recipients to whom the OOF was sent are stored in the OOF history and are cleared out when the OOF state changes (enabled/disabled) or the OOF rule is modified. OOF history is stored in the user’s mailbox and can be seen using MFCMapi tool at: Freebusy Data > PR_DELEGATED_BY_RULE.

OOF03.jpg

Note: If you want to send a response to the sender every time instead of just once, you can apply the mailbox server-side rule “have server reply using a specific message” to send the automatic reply instead of using the OOF rule. This server-side rule will send a reply to the sender every time a message is received.

Now that we know what OOF replies are and how they are stored on the server, we can move on to address some of the scenarios where OOF is not sent to the sender. We will also discuss possible fixes and some more frequently seen issues you may have with OOF configuration.

The first category of issues we will talk about are issues related to OOF replies not being received by the sender of the original message.

OOF issues related to transport rules

When OOF doesn’t seem to be sent for all users in the tenant, usually there is a transport rule causing the issue. Check all the transport rules that may apply to the affected mailbox using step two of this article.

If you suspect a delivery problem, run a message trace from the Office 365 tenant. We know that the OOF response is sent back to the original sender of the message, so for OOF messages, the sender of the original message becomes the recipient when tracking. We should then be able to tell whether an OOF reply was triggered and sent to the external or internal recipient. If a transport rule is blocking the OOF response, the message trace will clearly show you that.

There is one scenario I would like to highlight when it comes to transport rules blocking OOF replies. Let’s assume that you moved the MX record to a 3rd party anti-spam solution; you have created a transport rule to reject any email coming from any other IP address than the 3rd party anti-spam.

The transport rule will look something like this:

Description:
If the message: Is received from ‘Outside the organization’ Take the following actions: reject the message and include the explanation ‘You are not permitted to bypass the MX record!’ with the status code: ‘5.7.1’ Except if the message: sender ip addresses belong to one of these ranges: ‘1xx.1xx.7x.3x’

ManuallyModified: False

SenderAddressLocation: Envelope

As OOF replies have a blank (<>) Return-Path, you will see that such messages unexpectedly match the transport rule and the OOF responses get blocked.

In order to fix this, you can change the transport rule property of ‘Match sender address in message’ to ‘Header or envelope’, so the checks will also be done against ‘From’ (also known as the ‘Header From’ address), ‘Sender’ or ‘reply-to’ fields. More information about the mail flow rule conditions is here.

OOF04.jpg

JournalingReportNdrTo mailbox setting

If the affected mailbox is the one which is configured under JournalingReportNdrTo, OOF replies will not be sent for that mailbox. Moreover, journaling emails may be affected as well. It is recommended to create a dedicated mailbox for the JournalingReportNdrTo setting. Alternatively, you can set it to an external address.

For more details on how to solve this, please see this KB Article.

Forwarding SMTP address is enabled on the mailbox

If the affected user mailbox has SMTP forwarding enabled, OOF replies won’t be generated. This can be checked in user mailbox settings (OWA):

OOF05.jpg

In PowerShell:
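A sketch of the check (the mailbox identity is a placeholder):

Get-Mailbox -Identity user@contoso.com | Format-List ForwardingSmtpAddress,DeliverToMailboxAndForward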

OOF06.jpg

Or, in the M365 Portal, user properties:

OOF07.jpg

Please follow the action on step one of this article for more information.

The type of the OOF reply set on remote domains

Remote domains offer you (among other settings) the opportunity to set the type of OOF reply that can be sent to users.

These types are the following:

  • External
  • ExternalLegacy
  • InternalLegacy
  • None

For more information about each of these OOF types, please refer to AllowedOOFType parameter in our Set-Remotedomain document.

The Out of Office type can be checked from Exchange Admin Center > Mail flow > Remote domains

OOF08.jpg

Or, with PowerShell:
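For example:

Get-RemoteDomain Default | Format-List DomainName,AllowedOOFType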

OOF12.jpg

You need to pay attention to what OOF type you have set up, as this will impact the OOF response, and OOF may not be generated at all if the configuration is incorrect. Let’s assume you have a hybrid organization with mailboxes hosted both in Exchange on-premises and Exchange Online. In this scenario, by design, only external OOF messages will be sent to on-premises recipients while AllowedOOFType is set to External. To be able to send internal OOF messages to on-premises recipients in a hybrid environment, you need to set the AllowedOOFType to InternalLegacy.

You also have the option to send external Out of Office replies only to contacts, at the mailbox configuration level (ExternalAudience: Known). This can result in automatic replies not being sent to anyone external except contacts. The command to check the configuration is:
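A sketch (the mailbox identity is a placeholder):

Get-MailboxAutoReplyConfiguration -Identity user@contoso.com | Format-List AutoReplyState,ExternalAudience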

OOF10.jpg

Remote domain blocking OOF replies

Another setting on remote domains is one which lets you dictate whether you allow or prevent messages that are automatic replies from client email programs in your organization.

This can be found in Exchange Admin Center > Mail flow > Remote domains

OOF11.jpg

Or by running this PowerShell cmdlet:
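For example:

Get-RemoteDomain Default | Format-List DomainName,AutoReplyEnabled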

OOF12.jpg

Note: If this option is set to false, no automatic replies will be sent to users for that domain. This setting takes precedence over the automatic replies set up at the mailbox level or over the OOF type (discussed above).

Please, keep in mind that $false is the default value for new remote domains that you create as well as the built-in remote domain named Default in Exchange Online.

If the email was marked as spam and sent to junk, an automatic reply will not be generated at all.

Pretty self-explanatory, that one!

Message trace shows delivery failure

If you investigate an OOF reply issue and in the message trace you find the following error message:

“550 5.7.750 Service unavailable. Client blocked from sending from unregistered domains.”

You should reach out to Support to find out why the unregistered domain block was enforced.

There are some other scenarios that might come up when working with OOF replies, let’s cover those next!

An old or duplicate OOF message is sent

This is likely due to a duplicate Inbox rule or the OOF history limit. The OOF history has a limit of 10,000 entries. If this threshold is hit, OOF will continue to be sent to recipients that are not already in the list, because new users can’t be added to the list; users already in the list will not receive duplicate OOF replies. For more information, check this article or follow the action plan below.

  • Remove the OOF rules and the OOF rule templates from the mailbox. To locate the rules and delete them go back to > “Inbox OOF rules”
  • Disable and then re-enable the OOF feature for the mailbox

Now, you can check again whether the OOF feature works as expected and symptoms do not occur.

Automatic replies cannot be enabled; an error message is received

While attempting to access automatic replies from the Outlook client, an error message is received saying that “Your automatic reply settings cannot be displayed because the server is currently unavailable. Try again later.”

To narrow down this issue, you should perform the following steps:

  • Confirm the EWS protocol is enabled on the mailbox, as OOF replies rely on it (note that re-enabling it might take several hours to take effect)
  • Enable the OOF feature by using the following command:
    Set-MailboxAutoReplyConfiguration <identity> -AutoReplyState Enabled
  • Check whether the OOF feature works as expected.
  • If the issue is still there, review the rules quota on the mailbox:
    Get-Mailbox -Identity <mailbox> | fl RulesQuota
OOF13.jpg

By default, RulesQuota has a maximum quota which is calculated from the size of the rules (not the number of rules). The maximum is 256 KB (262,144 bytes).

  • Remove the OOF rules and the OOF rule templates from the mailbox. To locate the rules and delete them go back to > “Inbox OOF rules”
    After you remove them, you can re-enable the OOF feature and then test again.

An automatic reply is still sent despite OOF being disabled

We have encountered scenarios in which OOF messages are still being sent even though OOF is disabled. Most of the time, we found that a rule had been created manually by the end user using the out-of-office message template.

So, as you can see, there is quite a bit that goes into troubleshooting OOF replies, and it is not all straightforward.

THANKS FOR READING!
I’M AVAILABLE FOR WORK!
PLEASE CONTACT ME FOR AN APPOINTMENT!

REFERENCES:
Troubleshooting OOF Replies (Exchange Team Blog)

PowerShell – How to create a custom view for your PS Output Objects

I have been doing some training on PowerShell scripting this week and am going to be posting a number of articles on what I have been training on. This article deals with formatting the data output from your custom script or function so that it is displayed the way you want.
Have you ever run a cmdlet where the column width is not wide enough and your data gets truncated? Well, here is a method you can use to make the default output of your function or script display how you want it to.

The best way to do this is by using an existing xml formatting file as a template. Run the following PowerShell commands to access those templates:
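For example, to open the template we will use below as a reference:

notepad $PSHOME\DotNetTypes.format.ps1xml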

Note: your custom view xml file will need to have the “.ps1xml” extension, which indicates that it is a “Windows Powershell XML Document”.

Now on to creating the custom template. Let’s say you have a function like the sketch below.
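As a stand-in, here is a minimal function that emits objects tagged with the 'hrmctoolcustomformat' type name used later in this post (the function name and properties are my assumptions):

function Get-HRMCInfo {
    # Build an object whose Description would normally be truncated
    $obj = [PSCustomObject]@{
        ComputerName = $env:COMPUTERNAME
        Status       = 'Online'
        Description  = 'A deliberately long description that the default table view would truncate.'
    }
    # Tag the object with our custom type name so the view below applies
    $obj.PSObject.TypeNames.Insert(0,'hrmctoolcustomformat')
    $obj
}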

Now, create a custom view using the following file as a template:

C:\WINDOWS\system32\WindowsPowerShell\v1.0\DotNetTypes.format.ps1xml

Note: You can get a list of the format type template files that already come with PS by running the following command:
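dir $PSHOME\*.format.ps1xml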

Sample Output:

Mode LastWriteTime Length Name
—- ————- —— —-
-a— 10/06/2009 21:41 27338 Certificate.format.ps1xml
-a— 10/06/2009 21:41 27106 Diagnostics.Format.ps1xml
-a— 23/07/2012 19:12 144442 DotNetTypes.format.ps1xml
-a— 23/07/2012 19:12 14502 Event.Format.ps1xml
-a— 23/07/2012 19:12 21293 FileSystem.format.ps1xml
-a— 23/07/2012 19:12 287938 Help.format.ps1xml
-a— 23/07/2012 19:12 97880 HelpV3.format.ps1xml
-a— 23/07/2012 19:12 101824 PowerShellCore.format.ps1xml
-a— 10/06/2009 21:41 18612 PowerShellTrace.format.ps1xml
-a— 23/07/2012 19:12 13659 Registry.format.ps1xml
-a— 23/07/2012 19:12 17731 WSMan.Format.ps1xml

Remember: A custom view must always end with “.format.ps1xml”

We will use DotNetTypes.format.ps1xml as a template. Using this file as a starting point, create a file called “hrmctools.format.ps1xml”. That file will contain the following information modified from the template:
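A sketch of what the custom view file can look like for the function above (the column names and widths are my assumptions; adapt them to your own object's properties):

<?xml version="1.0" encoding="utf-8"?>
<Configuration>
  <ViewDefinitions>
    <View>
      <Name>hrmctoolcustomformat</Name>
      <ViewSelectedBy>
        <TypeName>hrmctoolcustomformat</TypeName>
      </ViewSelectedBy>
      <TableControl>
        <TableHeaders>
          <TableColumnHeader><Label>ComputerName</Label><Width>16</Width></TableColumnHeader>
          <TableColumnHeader><Label>Status</Label><Width>10</Width></TableColumnHeader>
          <TableColumnHeader><Label>Description</Label><Width>80</Width></TableColumnHeader>
        </TableHeaders>
        <TableRowEntries>
          <TableRowEntry>
            <TableColumnItems>
              <TableColumnItem><PropertyName>ComputerName</PropertyName></TableColumnItem>
              <TableColumnItem><PropertyName>Status</PropertyName></TableColumnItem>
              <TableColumnItem><PropertyName>Description</PropertyName></TableColumnItem>
            </TableColumnItems>
          </TableRowEntry>
        </TableRowEntries>
      </TableControl>
    </View>
  </ViewDefinitions>
</Configuration>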

NOTE: In PS, the content of xml files are always CASE SENSITIVE!!!

Once you have created your own custom view you then need to tell PowerShell to apply the formatting by using the cmdlet Update-FormatData:
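For example (-PrependPath makes your view take precedence over the built-in ones):

Update-FormatData -PrependPath .\hrmctools.format.ps1xml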

Once you run the above command, this custom format you created should now be loaded into memory. You can verify this by using the Get-FormatData cmdlet:
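For example:

Get-FormatData -TypeName hrmctoolcustomformat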

If you have not done so in your function or script, you can attach your custom view to your function or script using the “insert” method:

$MyObject.PSObject.TypeNames.Insert(0,’hrmctoolcustomformat’)

Note*: This code has already been inserted into the function listed in this example.
NOTE**: The first argument, 0, is something you always type in; it inserts the type name at the front of the list so your custom view takes precedence. You can now confirm that the object has successfully been attached to the custom view by typing the following in PowerShell:
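A sketch ($MyObject stands in for an object returned by your function, using the hypothetical Get-HRMCInfo from earlier):

$MyObject = Get-HRMCInfo
$MyObject.PSObject.TypeNames
# 'hrmctoolcustomformat' should be listed first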

Also, if you output your object, you should now notice that its appearance has changed to match what you defined in the custom view.

A “TypeName” is essentially a name that you give to your object. It tells PowerShell the type of object that it is. Know that it is possible for a number of commands to output objects of the same type (the same TypeName value). This could affect other cmdlets and functions in your script, so be sure to debug if necessary.

HAPPY SCRIPTING!
MORE TO COME!!

REFERENCES:
Creating Custom Format Views

Exchange Server Quarterly Updates March 2020

Released: March 2020 Quarterly Exchange Updates

Today Microsoft is announcing the availability of quarterly servicing cumulative updates for Exchange Server 2016 and 2019. These updates include fixes for customer reported issues as well as all previously released security updates. 

Personal Note: I was recently involved in a layoff at Microsoft in the Vendor PFE world. I am currently looking for new engagements.

Calculator Updates

This quarterly Exchange release includes an important update to the Exchange 2019 Sizing Calculator. We’ve made improvements to the logic that detects whether a design is bound by mailbox size (capacity) or throughput (IOPS), which affects the maximum number of mailboxes a database will support. Previous versions of the calculator produced incorrect results in some situations.

The Exchange team highly recommends using calculator version 10.4, included with the March 2020 quarterly CU release, to size Exchange Server 2019 deployments.

MCDB Configuration Issues

Cumulative Update 5 for Exchange Server 2019 also fixes an issue that can happen when you use the Manage-MetaCacheDatabase.ps1 script to enable MetaCacheDatabase (MCDB).

This issue occurred because of a change in behavior in Windows Server 2019 that caused Get-Disk to return all uninitialized disks within the Database Availability Group (DAG) or cluster. The script then incorrectly tried to format an SSD on another DAG member. We documented a workaround for CU4 here, but we’ve fixed it in CU5.

Online Mode Search Issues

Cumulative Update 5 for Exchange Server 2019 is also required to fix a known issue with partial word searches when the client is using Outlook in online mode.

Release Details

The KB articles that describe the fixes in each release and product downloads are available as follows:

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the currently supported cumulative update for the product version in use, e.g.,

2013 Cumulative Update 23
2016 Cumulative Update 16 or 15
2019 Cumulative Update 5 or 4.

For the latest information on Exchange Server and product announcements please see: 
What’s New in Exchange Server and Exchange Server Release Notes.

I AM STILL CURRENTLY LOOKING FOR A NEW PROJECT OR ASSIGNMENT!
THANKS FOR READING!

Exchange Server Security Update KB4540123 fails with 0x80070643

PLEASE READ THE ENTIRE POST

I had a failure on one of my two Exchange 2019 CU4 servers when installing the Security Update for them:

Exchange Server Security Update KB4540123 fails with 0x80070643

I could not restart the install, as it would fail when getting to the service-stoppage part of the installation. I saw this error in the ServiceControl.log file in the C:\ExchangeSetupLogs directory:

I searched around on the error code 0x80070643 and found these answers that others had used to get the installation to work.

LINK HERE

I downloaded the .msp file to install manually and read the answers on the web page. Reportedly, there is an issue with the ServiceControl.ps1 file in the Exchange Server Bin directory when the patch runs with verbose logging enabled:

I kept reading and digging into the fix for the file. The advice was to modify a number of lines in the ServiceControl.ps1 script:

Once I made those changes, I renamed the original ServiceControl.ps1 to a .old file and saved the modified file to the Bin directory. I was then able to run the Security Update successfully.

BUT WHY DID THIS SERVER FAIL AND NOT THE OTHER?!?

Both are CU4, but the server that failed had been upgraded from CU1, whereas the one that did NOT fail was a clean installation of CU4. So, I checked the ServiceControl.ps1 file on both servers:

  • Older Exchange install (upgraded from CU1): ServiceControl.ps1 dated 1/1/2020
  • CU4 clean installation: ServiceControl.ps1 dated 2/3/2020

I have not tested this, but perhaps if I had copied the ServiceControl.ps1 file from the clean CU4 installation to the upgraded install, the script might have worked, since the file dates differ and I have no other version information to go on. I will verify this, though. For now, the changes to the script let me install the Security Update successfully.

NOTE: All the Exchange and IIS services were left with a startup mode of Disabled, and I had to reset them ALL to Automatic; a sketch of the bulk reset is shown below. Once that was completed and the server rebooted, Exchange returned to its normal state. Also, just to be safe, I ran a great script that ensures the server is out of maintenance mode; you can get that script HERE. You can also get the script to put the server into maintenance mode HERE. These scripts work on Exchange 2013 and later servers.
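A quick sketch of the bulk reset (service names are the defaults; check what is actually disabled on your server first, since a few Exchange services are Manual by design):

# List Exchange-related services that were left disabled
Get-Service MSExchange* | Where-Object { $_.StartType -eq 'Disabled' }
# Reset them, plus the IIS services, back to Automatic
Get-Service MSExchange*, W3SVC, WAS, IISADMIN -ErrorAction SilentlyContinue |
    ForEach-Object { Set-Service -Name $_.Name -StartupType Automatic }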

UPDATE / 3/12/2020 2:39 AM EST

Something was still wrong with the server. ActiveSync and PAM started breaking, and I began having all sorts of authentication problems. I decided to restore the server from Tuesday's backup. I forgot to back up the ServiceControl.ps1 file that I had modified, but I moved the one from the successful installation to the restored server and am currently running the update. I will see if it works and post the results here.

UPDATE / 3/12/2020 3:45 AM EST

The restore worked great, and placing the ServiceControl.ps1 file in the Bin directory on the previously failed Exchange Server allowed the installation to complete successfully. I have tested ActiveSync and authentication, which are now functioning properly. Hooray!

SEND ME YOUR IDEAS FOR POSTS!
HAPPY TROUBLESHOOTING!

REFERENCES:
Exchange Server Security Update fails with 0x80070643
Exchange Maintenance Mode Script (Start)
Exchange Maintenance Mode Script (Stop)
Exchange Server 2019 CU4 Security Update

Adding Windows Capability to Server Core to add features needed for Application Compatibility

I've been working on installing Windows Server 2019 Core into my network to look at new features for Windows administration and to learn how Server Core works. I was able to install a virtual machine with Server Core and get it activated. I then wanted to place my custom PowerShell startup script into the Server Core environment.

So, I added the Server Core Server to the Windows Admin Center and copied my custom scripts for PowerShell into the proper directory:

Windows Admin Center

I then logged on remotely to the server and started PowerShell. When I did that, I got this error with the script load:

Error that IE First Run has not been completed

At first, I tried adding the -UseBasicParsing switch to see if that would work around the issue in the script. It did not, because IE is not installed by default on Server Core; that smaller footprint means less surface for an attacker. I still needed IE installed, though, so that the Invoke-WebRequest cmdlet would load my script parameters properly.

I started looking for how to install IE on the Server Core box and found the following article. I had to run the Add-WindowsCapability cmdlet on the server to install the optional components. When I did, I received an error:

Error when adding the Windows Capability

It turns out that WSUS blocks the cmdlet from reaching the online source to download the software package, which produces this error. After researching, I found this article and set up a Group Policy to make sure the needed setting is propagated to my Server Core machine. In the same policy, I also turned off the IE first run so that you do not get that message and have to open IE to "set it up".

Group Policy Setting with Path to Templates Specified

I then ran a gpupdate /force on the Server and was able to download the components for IE and App Compatibility.
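For reference, the install commands use the documented Feature on Demand capability names (confirm them on your build with Get-WindowsCapability -Online -Name ServerCore*):

# Server Core App Compatibility Feature on Demand
Add-WindowsCapability -Online -Name ServerCore.AppCompatibility~~~~0.0.1.0
# Internet Explorer 11 on Server Core (requires the App Compatibility FOD)
Add-WindowsCapability -Online -Name Browser.InternetExplorer~~~~0.0.11.0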

Successful Installation of Windows Capability

I then rebooted the server and now my PowerShell loads successfully:

Successful PowerShell Load

I learned a few new things here and got Server Core working more the way I like it. I will keep posting updates as I run into issues with this type of installation. I definitely recommend giving Windows Admin Center a try, as it has more robust features than Server Manager, especially for Server 2019 and Server Core.

CONQUER THE UNCOMFORTABLE TO GROW!
POSITIVE ATTITUDE ABIDES!

REFERENCES:
RSAT Tools Installation Error 0x800f0954 – Windows 10 1809
Server Core App Compatibility Feature on Demand (FOD)
“Set Up Internet Explorer 11” Bypass with GPO or Registry

Set-SendConnector cmdlet does not function correctly when updating a Send Connector on an Edge Server in an Exchange Hybrid Deployment

I have run into this issue at a number of my customers that utilize an Exchange Edge Server in their hybrid deployment. They need to modify their send connectors for forced TLS communication with their partners or with their own mailboxes in Office 365. Whenever they modify the send connector and save the changes, they get one of the following error messages:

Symptoms

“PowerShell failed to invoke ‘Set-SendConnector’: Error 0x5 (Access is denied) from cli_GetCertificate”

or

“Error 0x6ba (the RPC server is unavailable) from cli_GetCertificate”

This issue occurs after you install Cumulative Update 14 for Exchange Server 2016, Cumulative Update 13 for Exchange Server 2016, or Cumulative Update 23 for Exchange Server 2013.

Cause

This issue occurs because the TLS certificate check (performed when the TlsCertificateName attribute is populated on the send connector) doesn't work against Edge servers, since RPC communication to Edge servers is blocked.

Workaround

The current workaround has been to delete the Edge send connector and recreate it from scratch via PowerShell with all the settings and changes entered, as sketched below. This is not a viable solution, especially if your communications with your partners change constantly and changes are made to the secure communications channel between you and them.
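For illustration only, the delete-and-recreate workaround looks roughly like this (connector name, address space, and thumbprint are placeholders; record the existing configuration before removing anything):

# Capture the current settings so the connector can be rebuilt faithfully
Get-SendConnector "Outbound to Partner" | Format-List | Out-File C:\Temp\SendConnector.txt
Remove-SendConnector "Outbound to Partner" -Confirm:$false

# If the connector pins a certificate, rebuild the TlsCertificateName value
$cert = Get-ExchangeCertificate -Thumbprint "REPLACE_WITH_THUMBPRINT"
$tlsName = "<I>$($cert.Issuer)<S>$($cert.Subject)"

New-SendConnector -Name "Outbound to Partner" -Usage Partner -AddressSpaces "partnerdomain.com" -RequireTLS $true -TlsCertificateName $tlsName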

Resolution

To fix this issue, install one of the following updates:

For Exchange Server 2019, install the Cumulative Update 4 for Exchange Server 2019 or a later cumulative update for Exchange Server 2019.

For Exchange Server 2016, install the Cumulative Update 15 for Exchange Server 2016 or a later cumulative update for Exchange Server 2016.

For Exchange Server 2013, there is no fix at this time. My personal recommendation is to plan an upgrade to Exchange 2019.

KEEP POSITIVELY MOVING FORWARD!

REFERENCES
Set-SendConnector doesn’t work for Exchange Server in hybrid scenarios with Edge Server installed

Exchange System Mailboxes not being configured cause Exchange Setup to fail

My continuation of the "Installation from HELL" proceeded today, with our team attempting to install Exchange on another server in the test environment and having it fail when it reached the Mailbox Role portion of the installation.

The error kept saying that the installation was failing due to "Database is mandatory on UserMailbox". We had been having many issues with the schema and RBAC roles, which were resolved in my other post by adding the Role Assignments to the schema. I mentioned there that the environment had started settling down and that the system (Arbitration) mailboxes, along with the Health Mailboxes, had started functioning. That was actually not the case for the Arbitration mailboxes. I consulted the following article to see how to manually recreate the Arbitration mailboxes.

I ran "Get-Mailbox -Arbitration | fl Name" in Exchange PowerShell (similar to the screenshot below) to see whether the mailboxes had in fact been created. They had not, and the cmdlet returned the error "Database is mandatory on UserMailbox."

Verification of Arbitration System Mailboxes existing

So I tried to do what the original article said and enable the mailboxes one by one, as sketched below, but I kept getting errors when trying to create them. I began to search the internet for another way to remediate this without getting too deep into the system.
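The enable step from the article is of this form (the GUID below is one of the well-known arbitration mailbox names; yours may differ):

Enable-Mailbox -Arbitration -Identity "SystemMailbox{bb558c35-97f1-4cb9-8ff7-d53741dc928c}"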

I found the following article explaining the exact error I was getting during the installation of Exchange. In the article, it said to look at the attributes of the account associated with the Arbitration mailbox to see if the homeMDB attribute had no value:

homeMDB attribute NOT set on Arbitration Mailbox Account

Now, since I was NOT having good luck with either the Exchange Setup nor PowerShell, I had to figure out a way to place the attribute value so that the mailbox would be visible. What I did was this:

  • I opened a user in ADUC with a working mailbox on the needed database.
  • I went to the Attribute Editor tab, looked up the homeMDB attribute for that user, and chose Edit.
  • I copied the entire value from the screen and closed it.
  • I then went to the Arbitration mailbox account in question and opened its homeMDB attribute.
  • I pasted the value into the Value box and saved it.
Paste the active database value in the homeMDB attribute for the Arbitration Mailbox account
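If you would rather script the copy than click through ADUC, a rough equivalent with the ActiveDirectory module looks like this (the account name and DNs below are placeholders):

# Read homeMDB from an account that has a working mailbox on the needed database
$mdb = (Get-ADUser -Identity workinguser -Properties homeMDB).homeMDB
# Stamp the same value on the arbitration mailbox account (use its full DN)
Set-ADObject -Identity "CN=SystemMailbox{bb558c35-97f1-4cb9-8ff7-d53741dc928c},CN=Users,DC=contoso,DC=com" -Replace @{homeMDB = $mdb}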

Once I finished remediating the attribute for all of the Arbitration mailbox accounts missing the value, I re-ran the cmdlet to verify that the error was gone for every arbitration mailbox:

I then uninstalled and re-installed Exchange using setup on the failing server and the installation completed successfully.

This has been an excellent week of training on the Exchange setup process, and on how much the system accounts and their attribute values matter to Exchange working properly.

A POSITIVE OUTLOOK WILL YIELD POSITIVE RESULTS ULTIMATELY!

REFERENCES:
Exchange Install Error Database is mandatory on UserMailbox
Recreate missing arbitration mailboxes

RBAC Role Assignments NOT installed during Exchange Directory Preparation

I had a very interesting installation issue recently when installing Exchange 2019 into a new environment. We ran through all of the Exchange preparation for the root and child domains in the forest as described HERE. The results of those procedures showed SUCCESS, but when we started installing Exchange, we ran into issues with the System Mailboxes not being available to complete the Mailbox Role part of the installation. Most of the articles I found said to re-run the domain prep (/PrepareAllDomains) and the AD prep (/PrepareAD). So we did, and managed to get the first server installed somehow.

The reason I say somehow is that when we tried to log on to the EAC, we got a 400 Bad Request error and could not get into the console. Next we tried PowerShell; it loaded, but I noticed that only ~100 cmdlets came in. I thought that maybe we had to re-create the account's mailbox to get it working properly. The problem was that the missing cmdlets included Disable-Mailbox, Enable-Mailbox, and New-Mailbox. It was as if the admin account we were using had no rights to administer Exchange in any way.

Next, we opened the mailbox in OWA. The mailbox came up okay, so I told the admin to change the URL to /ecp to try to get into the admin center. Instead, the normal user control panel opened, showing again that the account did not have permissions.

We checked replication to the child domain and made sure there were not any apparent AD issues present. There were none. I next started reviewing how Exchange uses RBAC (Role Based Access Control) Groups and Role Assignments to grant users access to Exchange Admin Functionality. I read the following article located HERE.

Something told me to go and check the Schema again, so I went to ADSIEdit > Configuration Container > Services > Microsoft Exchange > (Organization Name) > RBAC > Role Assignments

I looked at the list of role assignments in the window as follows:

Small List of RBAC Role Assignments
RBAC Role Assignments Missing Objects

From the picture, you can see that the list is small, which in my experience is not correct. I verified this by going into my own 2019 environment and comparing the number of objects in that container:

RBAC Assignments Object list with CORRECT Objects Listed

You'll notice that this list is MUCH longer and has many more objects in the container. So how did Exchange Setup miss this during preparation? I will find that out later, but first I had to remediate the problem.

CAUSE:

If the RBAC role assignments that grant an account administrative privileges in Exchange are not installed, then you cannot administer Exchange to even make the necessary changes! Especially so if you've only installed ONE server in the environment!

REMEDIATION:

Manually repair the installation by running the script that creates these Objects in the Schema during setup.

******DISCLAIMER: Running the following commands in these instructions, running ADSIEdit, and/or making changes to your Schema and Exchange Installation outside the normal setup process is NOT recommended! Microsoft, LDLNET LLC, nor I (Lance Lingerfelt) are responsible for any issues or errors that may arise from using these instructions, period!******

That said, perform the following to regenerate the objects in the schema:

1) Open Windows PowerShell (not the Exchange Management Shell) on the server that you installed Exchange Server on with the same account you used to install Exchange.

a. If you have UAC enabled, right click Windows PowerShell and click Run as administrator.

2) Run Start-Transcript c:\RBAC.txt and press Enter

a. This will start logging all commands and output you type to a text file.

3) Run Add-PSSnapin *setup and press Enter

a. This adds the setup snap-in which contains the setup cmdlets used by Exchange during install. You may see errors about loading a format data file. You can ignore those errors.
NOTE: DO NOT run any other cmdlets in this snap-in. Doing so could irreparably damage your Exchange installation.

4) Run Install-CannedRbacRoleAssignments -InvocationMode Install -Verbose and press Enter.

a. This cmdlet should create the required role assignments between the role groups and roles that should have been created during setup.

b. Be sure you run with the Verbose switch so we can capture what the cmdlet does.

5) Run Remove-PSSnapin *setup and press Enter

6) Run $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://servername/PowerShell/ -Authentication Kerberos and press Enter

a. Be sure to replace SERVERNAME with the FQDN of your server.

7) Run Import-PSSession $Session and press Enter

a. You should notice that the normal number of cmdlets load (~700)

8) Run Get-ManagementRoleAssignment and press Enter. If you are able to run the cmdlet, then the remediation worked.

9) Run Stop-Transcript and press Enter

The final check is to return to ADSIEdit, check the container, and see that all the objects are there. We were also now able to get into the EAC, and we saw the Arbitration mailboxes populating along with the Health Mailboxes, as expected for the installation.
It was very neat to see how the Add-PSSnapin cmdlet exposed the scripts from Exchange Setup and let me manually fix the installation problem by running the cmdlet that performs the task setup missed or refused to run.

POST MORTEM REVIEW

I am going to look over the installation logs to see where the installation failed and try to find out why it did not run on the subsequent re-runs of the AD prep and domain prep. I will post those findings in this article when I have them.
Thanks again to my Microsoft and Trimax teammates for your assistance with this. It has helped the customer in more ways than one!

HAPPY TROUBLESHOOTING!
POSITIVE ATTITUDE YIELDS POSITIVE RESULTS!

Create a custom Windows 10 image for distribution using an ISO image.

I've been assisting with onboarding at my new full-time contractor position at Microsoft. All of the new FTCs received laptops and needed the newest build of Windows 10 installed. The issue was that all of our laptops came with Windows 10 Professional, and we needed to upgrade them to Enterprise edition.

After finding a working key for Enterprise edition, we were still having issues joining the Microsoft Azure domain so that we could get all of the needed software and properly begin onboarding with Microsoft.

So, after a couple of re-images of my laptop, with some failures along the way, I finally got the process down so that time would not be wasted for the oncoming new hires once they received their laptops. The challenge was getting the correct build of Windows 10 and the proper apps installed efficiently. Since the onboarding process was moving quickly, I needed a way to streamline it so the others would not have to go through all the mess I went through to get everything set up.

So, I began looking for a way to create a customized ISO for the build that would already have apps, settings, and customizations installed. I found this great article that details the process. I wanted to re-post this article here showing the steps I took to create the customized image by creating a VM in Hyper-V and then converting that completed image to an ISO that could be downloaded and utilized for the installation.

Creating a customized ISO image with pre-installed software and no user accounts

  • A generalized ISO image without any pre-set user accounts, with pre-installed software, desktop, File Explorer and Start customizations will be created.
  • All customizations and personalization will automatically be applied to all new user accounts
  • Clean install will perform a normal OOBE, asking for regional settings, initial user and so on
  • This ISO will be generalized meaning it is hardware independent and can be used to install Windows on any computer capable of running Windows 10, regardless if the machine is a legacy BIOS machine with MBR partitioning, or a UEFI machine with GPT partitioning
  • The ISO image will be bootable on both BIOS / MBR and UEFI / GPT systems

NOTE: This post will show how to use a virtual machine to create the ISO. All virtual machine references and instructions in this tutorial apply to Hyper-V, available in Windows 10 PRO, Education and Enterprise editions. Oracle VirtualBox and VMware users might need to consult their preferred virtualization platform’s documentation if instructions can’t be used as is.
Everything in this instruction can be made in each edition of Windows 10 (in Home and Single Language editions using a third party virtualization platform) with native Windows tools and programs, apart from Windows Deployment and Imaging Tools, part of Windows 10 Assessment and Deployment Kit (ADK) needed later in the post. The ADK is a free native Microsoft tool, downloadable directly from Microsoft.
If you will do this on a Hyper-V virtual machine (which is the recommended method), make sure to set the new virtual machine to use Standard Checkpoints instead of default Production Checkpoints. You can do this in virtual machine’s settings:

Use Standard Checkpoints
Virtual machine generation is irrelevant; you can use Generation 1 or 2 as you wish.

This method will produce an ISO image which can be compared to any original Windows 10 ISO you download from Microsoft, apart from the fact that it already contains pre-installed software according to your choice. It will also contain a customized and personalized default user profile, the base Windows uses whenever a new user profile will be created.

A customized default user profile means that whenever a new user account is created, all customizations (Start tiles, File Explorer and desktop icon and view settings, colors, wallpaper, theme, screensaver, and so on) will be applied to the new user profile instead of the Windows defaults.

Installation using this ISO will take somewhat longer than using a standard ISO because it contains not only the full Windows setup but also the pre-installed software. Notice that depending on how much space the pre-installed software takes, you might not be able to burn this ISO to a standard 4.7 GB DVD and may have to use a dual-layer disc or a USB flash drive instead. My customized image came out to about 8.5 GB.

The ISO created will include no user profile folders, personal user data and files.

This ISO image can be used on any hardware setup capable of running Windows and can be shared, provided that the people you share the ISO with have valid licenses and/or activation keys for both Windows 10 and the pre-installed software.

System Preparation Procedure

  • Download the Windows ISO Installation tool from Microsoft
    • Use this TOOL to download the ISO and create the installation media
  • Install Windows 10 on your VM using the downloaded ISO

NOTE: The normal Windows download from the link above will give you Windows 10 Professional. You will need a key to upgrade the installation to Enterprise edition, and you will need to be able to activate the copy of Windows in order to save the customizations you create for your ISO.

  • Boot into Windows 10 and do the following:
    • Activate the Windows edition you are installing with your key; you will need internet connectivity. I needed Enterprise edition, so I changed the product key in Settings to upgrade it from Professional.
    • Install your preferred software, customize and personalize Windows, remove / add Start tiles as you wish, and set your preferred group policies (group policies not available in Home and Single Language editions). Do not run any program you install!
    • Update all software and run Windows Update to get all the latest updates for the image.
    • Notice that software installed now will be included in ISO install media, and will be pre-installed for all users on each computer you install Windows to using this custom ISO.

NOTE: If Windows on your reference machine is not activated, you cannot personalize it. In that case, modify the Windows theme (wallpaper, screensaver, colors, sounds) as you wish on another, activated Windows 10 machine, save it as a theme file, copy it to the unactivated reference machine, and apply it (double-click).
Also notice that Edge, as well as other UWP apps, does not work when signed in to the built-in admin account. If you need a browser to download software, use a third-party browser or Internet Explorer. IE can be started from the Run dialog by typing iexplore and clicking OK.

  • Open an elevated command prompt and enter the following:
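The command below is the standard way to reboot into Audit Mode (adjust the path if your Windows directory differs):

%windir%\System32\Sysprep\sysprep.exe /audit /reboot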

Windows will now restart in Audit Mode using the built-in administrator account. You will see a Sysprep prompt in the middle of the display:

Sysprep Program Window
Leave it open for now.
  • Open Notepad, paste the answer-file code into it (see the sketch after this list), make the necessary changes to customize the installation, and save it as
    File name: unattend.xml (exactly this name!)
    Save as type: All files (important!)
    Save in folder: C:\Windows\System32\Sysprep
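A minimal sketch of such an answer file, showing only the CopyProfile component discussed below (the full tutorial file carries other settings as well, so treat this as a starting point, not the complete file):

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <!-- Copy the customized built-in administrator profile to the default user profile -->
      <CopyProfile>true</CopyProfile>
    </component>
  </settings>
</unattend>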

  • When Sysprepping with the Generalize switch, which we will soon do, the CopyProfile component being set to TRUE in the answer file has a small issue, or rather a small inconvenience: it leaves the built-in admin's last-used folders and recent files in the end user's Quick Access in File Explorer.
  • To fix this, we reset Quick Access to default whenever a new user signs in for the first time. This requires running a small batch file at the first logon of each new user, which then removes itself from the user's %appdata% so Quick Access is not reset on subsequent logons.
  • To do this, open an elevated (Run as administrator) Notepad (it must be elevated to save in system folders), paste the batch code into it (see the sketch after this list), and save it as:
    File name: RunOnce.bat (or any name you prefer, with extension .bat)
    Save as type: All files (important!)
    Save File in folder: %appdata%\Microsoft\Windows\Start Menu\Programs\Startup
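A sketch of what that batch file needs to do, assuming the jump-list cache is in its default location (clear the Quick Access history, then delete itself):

@echo off
rem Reset Quick Access by clearing this user's cached jump lists
del /f /q "%APPDATA%\Microsoft\Windows\Recent\AutomaticDestinations\*"
rem Delete this batch file so it only runs at first logon
del /f /q "%~f0"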

  • Delete all existing user accounts and their user profile data (Option One in this tutorial)
  • You are currently signed in using the Windows built-in administrator account. In File Explorer, open the C:\Users\Administrator folder and check that all user folders are empty, deleting any content you find
  • Run Disk Clean-up, selecting and removing everything possible (tutorial)
  • When the disk has been cleaned, create a checkpoint of the VM from Hyper-V Manager: right-click the VM > click Checkpoint
Manual Checkpoint from Hyper-V Manager
  • In the Sysprep dialog still open on your desktop, select System Cleanup Action: Enter System Out-of-Box Experience (OOBE), select Shutdown Options: Shutdown, tick the Generalize box, and click OK:
Sysprep Selected Options Before Shutdown

Sysprep will now prepare Windows, shutting down the machine when done. LEAVE THE VM OFF AND DO NOT RESTART IT! Now we continue to the Image Creation section.

Sysprep Preparing the Machine

Image Creation Procedure

  • On your Hyper-V Host machine, open Disk Management
  • Select Attach VHD from Action menu:
  • Browse to and select your reference virtual machine's VHD / VHDX file. If you have any checkpoints (AVHD / AVHDX files) created on this VM, select the one with the most recent time stamp. Note that you have to select Show all files to be able to see the checkpoint AVHD / AVHDX files:
Select the most recent time stamped file
  • Select the check box labeled Read-only (this is very important!), then click OK:
BE SURE TO SELECT READ-ONLY

IMPORTANT: Forgetting to select Read-only, especially when mounting a checkpoint AVHD / AVHDX file, can make the file unusable for Hyper-V! You will NOT be able to boot your VM, and you could corrupt it if you have write access to the mounted VHDX file.
Windows mounts the virtual hard disk, and all of its partitions, as a separate disk. In the case of an MBR disk, it even mounts the system reserved partition.


NOTE: In the above picture the Windows system partition for the reference VM is drive K:

  • Open the mounted Windows system partition to be sure it's the one where Windows is installed, and note the drive letter your host assigned to it.
  • Open an elevated Command Prompt, enter the following command to create a new install.wim file:
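Based on the note that follows (D: as the destination, K: as the captured volume; the image name is arbitrary), the command looks like this:

Dism /Capture-Image /ImageFile:D:\install.wim /CaptureDir:K:\ /Name:"W10-Custom" /Compress:max /CheckIntegrity /Verify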

NOTE: D:\install.wim in this case is the drive and directory where you want to save the image file. K:\ is the capture path (including subfolders) of the drive you want to image FROM.

dism command

NOTE: The name given with the /name switch in the above command is irrelevant, as we will name the ISO later on, but it is needed for the command to run. Use any name you want for the switch parameter.
The imaging process will take time; go get something to eat, as I did. On my high-end Hyper-V server this takes over 20 minutes, the first half of it without showing any progress indicator whatsoever. DISM works somewhat faster without the optional /checkintegrity and /verify switches, but creating install.wim without checking its integrity and verifying it is not recommended.

  • When the image capture is complete, detach the VHD / VHDX or AVHD / AVHDX file from the host by right-clicking it in Disk Management and selecting Detach VHD:
Detach the VHDX

ISO Image Creation Procedure

  • Mount the original Windows 10 ISO you downloaded in the first step to a virtual drive on your Hyper-V server host.
  • Copy its contents (everything) to a folder you create on one of your Hyper-V host’s hard disks:

I named the folder ISO_Files and placed it on the D: drive where I had created the image in the previous section. Alternatively, you can copy the contents of a created Windows 10 install USB or DVD to the ISO_Files folder.

  • Browse to your custom install.wim created in the previous section. Copy it to the Sources folder under the ISO_Files folder, replacing the original install.wim in that directory:

IMPORTANT: If the ISO you used when copying the files to the ISO_Files folder was made with the Windows Media Creation Tool, the ISO_Files\Sources folder contains an install.esd file instead of install.wim. In that case you will naturally not get a "File exists" prompt; simply delete the install.esd file and paste your custom install.wim in its place. This helps reduce the overall size of the ISO and avoids confusing the installation process when it runs.

  • Now we download the latest Windows Assessment and Deployment Kit (ADK) for Windows 10: Windows ADK downloads – Windows Hardware Dev Center
    The full ADK download is about 7.5 GB, but luckily we only need the Deployment Tools portion. So, unselect everything except Deployment Tools and click Install:
Select Only Deployment Tools for the Installation
  • Once completed, you should have a folder in your Start menu for the ADK tools under Windows Kits. Start the Deployment and Imaging Tools Environment by running it as administrator:
Right Click the Application and “Run As Administrator”
  • At the command prompt, type cd\ to bring your prompt to the root of the folder path you are on.
  • Type the following command to initiate creation of the ISO image file:
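Per the Ten Forums tutorial, with d:\iso_files and d:\14986PROx64.iso as the example paths explained just below:

oscdimg.exe -m -o -u2 -udfver102 -bootdata:2#p0,e,bd:\iso_files\boot\etfsboot.com#pEF,e,bd:\iso_files\efi\microsoft\boot\efisys.bin d:\iso_files d:\14986PROx64.iso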

Replace the three instances of d:\iso_files with the path to the ISO_Files folder where you copied the Windows installation files. Notice that this is not a typo: the first two instances are typed as the argument for the -b switch, with no space between the switch and its argument. This tells the oscdimg command where to find the boot files to add to the ISO.

Replace d:\14986PROx64.iso with the path where you want to store the ISO image. This is also where you name the ISO file whatever you want it to be.

Although the command seems a bit complicated, everything in it is needed. For more information about the oscdimg command line options, see: Oscdimg Command-Line Options

Screenshot of the oscdimg command being run

You now have a completed ISO image ready for distribution to your machines. The overall process took me about 4 hours with all the customizations I made. Thanks again to Ten Forums for the article; I have provided references below for your convenience.

HAPPY IMAGING!!
PLEASE COMMENT!!

REFERENCES:
Create Windows 10 ISO image from Existing Installation
Open and Use Disk Cleanup in Windows 10
Download Windows 10
Windows 10 sysprep – how to skip entering product key
Windows ADK downloads – Windows Hardware Dev Center

The Windows Time Service, Hyper-V Hosts, and DCs that are VMs.

The sheer craziness of it all! I noticed that the clocks on my servers were off by FOUR minutes. I had originally set a Group Policy for my domain's PDC Emulator, a VM on one of my Hyper-V hosts, to get its time from the public NTP hosts. I then configured a Group Policy to have all the other machines get their time from the PDC Emulator.

This was working great until I realized that my Hyper-V hosts were actually controlling the time on the VMs. The hosts were also configured to get their time from the PDC Emulator, but because of how Hyper-V time synchronization works, the PDC Emulator VM was getting its time from the host. So once the time drifted, everything went wacky on me!

I read through a couple of articles and found the flaw in the configuration: with Hyper-V, the hosts need to get their time from the external NTP hosts and be configured as NTP servers themselves. That went completely against my Group Policy configuration, which is what caused the issue!

Luckily, I had a standalone server, a tertiary DC in the domain, that was not running Hyper-V. I was able to get my time synced properly again after performing the following configuration.

  • I had to move the FSMO roles to the tertiary DC with the following cmdlet:
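A sketch of the move, with DC03 standing in for my tertiary DC (requires the ActiveDirectory module):

Move-ADDirectoryServerOperationMasterRole -Identity "DC03" -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster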

  • I then made sure the tertiary DC was syncing time correctly by running the following on that server:
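Something like the following (the pool.ntp.org peers are examples; substitute your preferred NTP hosts):

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
w32tm /resync
w32tm /query /status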

  • I then removed the Group Policy Object for syncing the time source to the DC that I had linked to my Hyper-V Servers OU in Active Directory
  • Ran a gpupdate /force on the Hyper-V host to remove the policy there
  • I then had to reconfigure the Hyper-V hosts to be NTP Servers and clients that got their time from a public NTP server:
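Roughly as follows (the peer list is again an example; the registry value turns on the host's NTP server role, and the service restart applies it):

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f
net stop w32time && net start w32time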

The one problem Hyper-V host that was syncing with the DC VM would not change settings via Group Policy or through the w32tm utility. I even went into the registry and tried to modify the following keys to make the changes stick:
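These are the standard W32Time registry locations (the values shown match the intended configuration above):

HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters
    Type      = NTP
    NtpServer = 0.pool.ntp.org,0x8 1.pool.ntp.org,0x8
HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer
    Enabled   = 1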

The values just would not change, most likely because the time was not synchronized. I had to reboot the server and then run through the process again for the changes to stick.

I did look at another article that said to do the following on the DC VM so that its time does NOT sync with the Hyper-V host:

Go into Hyper-V console on the host machine, right-click on the client VM AD server, and select Settings. Once in here, on the left look under:

Management –> Integration Services
Untick Time Synchronization
Click Apply/OK

Virtual Machine Settings within Hyper-V

Things are running smoothly now. Please view the references at the bottom of the post; there are a couple of great articles about the time synchronization process with Hyper-V and why it needs to be set up the way I have it now. I wish I had read them before I originally set this up. I will post the article about getting Group Policy to handle the time sync process. Just remember: if your PDC Emulator is a VM, don't sync it to a public NTP server. Sync it to your Hyper-V host and have the host sync publicly.
In the long run, I think it is a better design to have your Hyper-V hosts sync time from the public NTP servers than to have to remember to configure each VM DC you create NOT to sync time with the host. To each their own, though; one thing I learned from working at Microsoft is that there are multiple technically sound ways to reach the same goal.

THANKS FOR READING!
PLEASE COMMENT!

REFERENCES:
Setup of NTP on Hyper-V servers
Time Synchronization in Hyper-V
“It’s Simple!” – Time Configuration in Active Directory
NTP Circular Time Sync – Windows Server 2012 R2 / Hyper-V

Exchange Server Client Access URL Configuration Script

In my career, I have to be efficient, as most of my projects are on a time-crunch schedule. Being able to quickly configure Exchange when setting up a server environment is crucial to the success of the project.

While still honing my skills in PowerShell, I attempted to create my own script to configure all of the virtual directories in one shot rather than going to each setting and configuring it manually. It did not go very well, so, as I do, I researched and found great professionals who do great work in scripting so that I could learn from them.

In doing so, I found Paul Cunningham's script that performs this task. I took his script and modified it to also configure the PowerShell virtual directory, as I like to set that as well.

***YOU CAN REM THE LINES OUT SHOULD YOU NOT WANT TO CONFIGURE THAT DIRECTORY***

Here is my version of the script (the full version is linked in the references; a condensed sketch of its core follows):
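In condensed form, the core of the script does the following (parameter names follow the original's documented usage, and the same namespace is applied to every virtual directory):

param(
    [string]$Server,
    [string]$InternalURL,
    [string]$ExternalURL
)

# Outlook Anywhere: NTLM with SSL required (the script defaults)
Get-OutlookAnywhere -Server $Server | Set-OutlookAnywhere -InternalHostname $InternalURL -InternalClientsRequireSsl $true -ExternalHostname $ExternalURL -ExternalClientsRequireSsl $true -DefaultAuthenticationMethod NTLM

# Client Access virtual directories, all pointed at the same namespace
Get-OwaVirtualDirectory -Server $Server | Set-OwaVirtualDirectory -InternalUrl "https://$InternalURL/owa" -ExternalUrl "https://$ExternalURL/owa"
Get-EcpVirtualDirectory -Server $Server | Set-EcpVirtualDirectory -InternalUrl "https://$InternalURL/ecp" -ExternalUrl "https://$ExternalURL/ecp"
Get-OabVirtualDirectory -Server $Server | Set-OabVirtualDirectory -InternalUrl "https://$InternalURL/OAB" -ExternalUrl "https://$ExternalURL/OAB"
Get-ActiveSyncVirtualDirectory -Server $Server | Set-ActiveSyncVirtualDirectory -InternalUrl "https://$InternalURL/Microsoft-Server-ActiveSync" -ExternalUrl "https://$ExternalURL/Microsoft-Server-ActiveSync"
Get-WebServicesVirtualDirectory -Server $Server | Set-WebServicesVirtualDirectory -InternalUrl "https://$InternalURL/EWS/Exchange.asmx" -ExternalUrl "https://$ExternalURL/EWS/Exchange.asmx"
Get-MapiVirtualDirectory -Server $Server | Set-MapiVirtualDirectory -InternalUrl "https://$InternalURL/mapi" -ExternalUrl "https://$ExternalURL/mapi"

# PowerShell virtual directory (my addition; REM these lines out if you do NOT want it configured)
Get-PowerShellVirtualDirectory -Server $Server | Set-PowerShellVirtualDirectory -InternalUrl "https://$InternalURL/powershell" -ExternalUrl "https://$ExternalURL/powershell"

# Autodiscover (use Get-ClientAccessServer/Set-ClientAccessServer on Exchange 2013)
Get-ClientAccessService $Server | Set-ClientAccessService -AutoDiscoverServiceInternalUri "https://$InternalURL/Autodiscover/Autodiscover.xml"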

NOTES:

  • PowerShell script to configure the Client Access server URLs for Microsoft Exchange Server 2013/2016. All Client Access server URLs will be set to the same namespace.
  • If you are using separate namespaces for each CAS service this script will not handle that.
  • The script sets Outlook Anywhere to use NTLM with SSL required by default.
  • If you have different auth requirements for Outlook Anywhere use the optional parameters to set those.
  • The script sets PowerShell to use Basic with SSL required by default.
  • If you have different authentication requirements for PowerShell use the optional parameters to set those.
  • PowerShell was added to the settings. Please be sure to REM those lines of code should you NOT want to configure the PowerShell Virtual Directory.

USAGE:
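Along the lines of the original script's documented examples (server name and namespace are placeholders):

.\ConfigureExchangeURLs.ps1 -Server EX01 -InternalURL mail.yourdomain.com -ExternalURL mail.yourdomain.com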

HAPPY SCRIPTING!
POSITIVE ENERGY!
PLEASE COMMENT!

REFERENCES:
Exchange Server Client Access URL Configuration Script
PowerShell Script to Configure Exchange Server Client Access URLs

Installing an ‘IP-less’ Exchange Server 2019 Database Availability Group

Yesterday, I posted on how Exchange can now use the Resilient File System (ReFS) to optimize and protect Exchange critical files. Another layer of protection is a database availability group (DAG), which provides redundancy and is a necessary factor when designing an enterprise Exchange environment.
In this example, I will walk you through the installation of an Exchange Server 2019 DAG as I configured it in my environment. This DAG will contain two Exchange Servers in the same site, with a third Windows Server 2019 server acting as the File Share Witness (FSW).

Two Server Exchange DAG Configuration

For my configuration, I built two identical Windows Server 2019 VMs (same procs, RAM, VHDX drives, partitions, etc.). I formatted the Exchange data volumes with ReFS and mounted them to the same folder path on the C: drive on each server. This is very important for replication to take place successfully once the databases are added to the DAG.


I next went to the Admin server where the FSW would be hosted and added the Exchange Trusted Subsystem Account to the local Administrators group on that server:

IMPORTANT!
Add the Exchange Trusted Subsystem Account to the Local Administrators Group on the FSW.

NOTE: The reason this is an ‘IP-less’ DAG is that I’m creating a DAG with no cluster administrative access point (CAAP). The DAG has no IP address of its own and no computer object in Active Directory. The main implication is that backup software that relies on the CAAP for backup operations won’t work. The ‘IP-less’ DAG option was first introduced in Exchange Server 2013 SP1/CU4, so by now any decent backup product should support this configuration, but you should always verify this with your backup vendor of choice. Also be aware that this is only supported for DAGs running on Windows Server 2012 R2 (or later).

Next, we create the DAG from Exchange PowerShell using the New-DatabaseAvailabilityGroup cmdlet. Remember that since you are using ReFS for your database volumes, you need to specify the -FileSystem parameter in the cmdlet to ensure proper setup and replication of the data files.
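A sketch with placeholder names (ADMIN01 stands in for the admin server hosting the FSW):

New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer ADMIN01 -WitnessDirectory C:\DAG01FSW -FileSystem ReFS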

Next, we add the Exchange Servers that hold the databases that will be replicated within the DAG:
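For example, with EX01 and EX02 as the two mailbox servers:

Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02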

The DAG will now show the two servers as Operational Member Servers:

The FSW Directory was created on the admin01 server when the DAG was created. We can verify that with the following cmdlet:
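Something along these lines (DAG01 being the placeholder name from above):

Get-DatabaseAvailabilityGroup DAG01 | Format-List Name, WitnessServer, WitnessDirectory, Servers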

Next, we add the databases that we want replicated to the DAG as replicated databases. I want all of my databases on EX01 to replicate to EX02, and vice versa for the EX02 databases. I want the activation preference to remain on the server each database was originally created on, so I will use the -ActivationPreference parameter to accomplish that. I will go into more detail on activation preference in another post.
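For example, with placeholder database names DB01 and DB02:

# Replicate each database to the other DAG member; the copy gets ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX02 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB02 -MailboxServer EX01 -ActivationPreference 2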

Now we verify that the database copies are healthy on each replication member using the Get-MailboxDatabaseCopyStatus cmdlet. You should see a Healthy status on the replicated copies:
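A quick check of all copies (the column selection is just my preference):

Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState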

POSITIVE ENERGY!
KILL NARCISSISM!
HAPPY TROUBLESHOOTING!

REFERENCES:
Installing an Exchange Server 2016 Database Availability Group

Using the Resilient File System for Exchange Server

In my ongoing effort to become more knowledgeable on Exchange Server, I found that the preferred file system for Exchange database and log volumes is now ReFS.
ReFS is not that new: Microsoft's Resilient File System was introduced with Windows Server 2012. It is not a direct replacement for NTFS and is missing some underlying NTFS features, but it is designed to be (as the name suggests) a more resilient file system for extremely large amounts of data.

Support for ReFS with Exchange Server

From Exchange Server 2013 onward (which includes Exchange Server 2019 today), Microsoft supports the use of ReFS for Exchange servers, and in fact now recommends it as the preferred file system for Exchange Server 2019, within the following guidelines.

For Exchange Server 2013:

  • ReFS is supported for volumes containing Exchange database files, log files, and content index files.
  • ReFS is not supported for volumes containing Exchange binaries (the program files).
  • ReFS is not supported for volumes containing the system partition.
  • ReFS data integrity features must be disabled for the database (.edb) files or the entire volume that hosts database files.
  • Hotfix KB2853418 must be installed.
  • For Windows 2012, the following hotfixes must be installed:

This means that you should continue to use NTFS for your operating system and Exchange Server 2013 installation volume, but you can consider using ReFS for the volumes hosting Exchange databases, log files, and index files.

For Exchange Server 2016:

  • ReFS is supported for volumes containing Exchange database files, log files, and content index files.
  • ReFS is not supported for volumes containing Exchange binaries (the program files).
  • ReFS is not supported for volumes containing the system partition.
  • ReFS data integrity features are recommended to be disabled.
  • For Windows 2012, the following hotfixes must be installed:

This means that you should continue to use NTFS for your operating system and Exchange Server 2016 installation volume, while ReFS is recommended for the volumes hosting Exchange databases, log files, and index files.

For Exchange Server 2019:

  • ReFS is supported for volumes containing Exchange database files, log files, and content index files.
  • ReFS is not supported for volumes containing Exchange binaries (the program files).
  • ReFS is not supported for volumes containing the system partition.
  • ReFS data integrity features are recommended to be disabled.

This means that you should continue to use NTFS for your operating system and Exchange Server 2019 installation volume, while ReFS is recommended for the volumes hosting Exchange databases, log files, and index files.

Creating an ReFS Formatted Volume

In Windows Server, during the New Volume wizard, when you get to the File System Settings step, change the file system from NTFS to ReFS.


NOTE: The New Volume wizard does not give you the option to disable data integrity at the volume level. To set it at the volume level, use PowerShell when configuring new volumes. I found this out the hard way and am now re-configuring my volumes to disable integrity streams.

I needed to create the mount point to mount the volume to:
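For example (the path is a placeholder; use whatever folder structure you standardized on):

New-Item -ItemType Directory -Path C:\ExchangeVolumes\ExVol1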

I then got a list of my available disks:
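Something like this, using the Storage module:

Get-Disk | Format-Table Number, FriendlyName, Size, PartitionStyle, OperationalStatus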

In my case, disk 2 was the one I needed to format and change. I had to create a new partition and then format it:
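A sketch for disk 2 (a brand-new disk would first need Initialize-Disk; the label and numbers are placeholders):

New-Partition -DiskNumber 2 -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel ExVol1 -SetIntegrityStreams $false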

Once formatted, I mounted the volume to the directory created earlier:
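For example, matching the placeholders above:

Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 2 -AccessPath C:\ExchangeVolumes\ExVol1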

NOTE: Partition 1 on a disk is always reserved for system files on the drive volume. So the active partitions will always start at 2.

Lastly, verify that the partition is online and that the Integrity Streams are turned off:
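Along these lines (Get-FileIntegrity is from the Storage module; it reports the ReFS integrity setting for a path):

Get-Partition -DiskNumber 2 -PartitionNumber 2 | Format-List PartitionNumber, AccessPaths, IsOffline
Get-FileIntegrity -FileName C:\ExchangeVolumes\ExVol1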

Additional Considerations

When you are deploying an Exchange 2016 or 2019 DAG and using AutoReseed, the disk reclaimer needs to know which file system to use when formatting spare disks. So, when creating a DAG in Exchange PowerShell, make sure to set the -FileSystem parameter. For Exchange Server 2013 DAGs, manually format the spare volumes with ReFS.

More coming soon! I will post how I set up the ‘IP-less’ DAG for my environment and got replication working for my Exchange databases.

REFERENCES:
Exchange 2013 storage configuration options
Exchange 2016 Preferred Architecture
Exchange Storage for Insiders: It’s ESE (Ignite video)
ReFS Exchange Server Volumes
Preparing ReFS Volumes for Exchange