Reconnecting Shared Mailboxes after an O365 Migration

I get a lot of these incidents in my queue after a user has been migrated to O365. Most likely because a mailbox has been moved, whether it is the user's mailbox, the shared mailbox, or both, the connections to the shared mailboxes stop working in Outlook and the user cannot open the shared mailbox.

Here is a quick and easy way to disconnect and reconnect the shared mailbox(es) that lose connectivity after a migration. This is usually performed on Outlook 2016 and above, as most users upgrade their client software when moved to O365.

First, we remove the existing shared mailbox connection:

  • Click File > Account Settings > Account Settings.
  • Select your company email address in the account list.
  • Click Change > More Settings > Advanced tab, select the shared mailbox, and click Remove.
  • Click Apply > OK > Next > Finish.
  • The shared mailbox will now be removed from the Folder pane in Outlook.

Second, we re-add the shared mailbox connection to Outlook:

  • Click File > Account Settings > Account Settings.
  • Select your company email address in the account list.
  • Click Change > More Settings > Advanced tab > Add.
  • Type the name of the shared mailbox in the window and click OK.
  • Click Apply > OK > Next > Finish.
  • The shared mailbox will now be added to the Folder pane in Outlook.

Note: The above procedure must be followed in order to properly reconnect the shared mailbox. You cannot remove and re-add the mailbox in the same process as that will not reset the connection properly. You must save the settings when disconnecting.
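If you have admin access, there is also a server-side way to force the same reset for automapped shared mailboxes: remove and re-grant the user's Full Access permission so that Outlook re-maps the mailbox. This is a hedged sketch (run from Exchange Online PowerShell; the mailbox name and user UPN are placeholders):

```powershell
# Remove the user's Full Access permission from the shared mailbox
Remove-MailboxPermission -Identity "SharedMB" -User "user@company.com" `
    -AccessRights FullAccess -Confirm:$false

# Re-grant Full Access with automapping so Outlook re-adds the mailbox automatically
Add-MailboxPermission -Identity "SharedMB" -User "user@company.com" `
    -AccessRights FullAccess -AutoMapping $true
```

Outlook can take a little while (and may need a restart) to pick up the re-mapped mailbox.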

I hope that this will assist everyone when troubleshooting Outlook connectivity issues to shared mailboxes after a migration.

HAPPY TROUBLESHOOTING!
PLEASE COMMENT!

Moving mailboxes to O365 via PowerShell in Hybrid Configuration

As many of you know, I am studying for my MS-202 exam, and part of the required knowledge is being able to migrate mailboxes between on premises and Exchange Online through PowerShell. Here are the steps for the scenario of moving a mailbox from on premises to O365:

1. Connect to Exchange Online via PowerShell

If you have read my previous post, Connect to All PowerShell Modules in O365 with one script, you should have everything needed to connect your PowerShell to O365. Note that in this scenario all of these cmdlets are run from O365 PowerShell, and the moves are monitored from O365 by either PowerShell or the Exchange Admin Center. You will not be able to monitor the moves from on premises.

2. Provide your on premises Migration Administrator credentials as a variable for your cmdlet.
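For example (the domain and account name are placeholders; you will be prompted for the password):

```powershell
# Store the on-premises migration admin credentials for use in the move cmdlet
$RemoteCredential = Get-Credential "CONTOSO\MigrationAdmin"
```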

3. Move a single mailbox.

In your hybrid configuration, you should be running directory sync with O365/Azure, and the accounts should show in the cloud as synced with AD. This also assumes that you have your MRS Proxy endpoint enabled, which the HCW can do for you. Also, make sure you have licensing available for your mailboxes. As far as I know, you can assign a license to the cloud account before moving, which is especially useful if there is a particular license you need that account to have. Otherwise, moving the mailbox will consume an available license that includes the Exchange Online mailbox feature when the mailbox is moved.
Now we initiate the move with the cmdlet. Similar to what you would do in the GUI, this simple cmdlet initiates the move request, and it accepts most of the same parameters as a local move request, including BadItemLimit, LargeItemLimit, AcceptLargeDataLoss, etc.
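A hedged sketch of such a remote move, assuming the credential variable from step 2 is named $RemoteCredential and using placeholder host and domain names:

```powershell
# Onboard a single mailbox from on premises to Exchange Online
New-MoveRequest -Identity "jsmith@contoso.com" `
    -Remote `
    -RemoteHostName "mail.contoso.com" `
    -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
    -RemoteCredential $RemoteCredential `
    -BadItemLimit 10

# Monitor progress from Exchange Online
Get-MoveRequest -Identity "jsmith@contoso.com" | Get-MoveRequestStatistics
```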
 
Use the following LINK for documentation on the New-MoveRequest cmdlet.

Now with all migration projects, we expect to have to move multiple mailboxes in a single batch. The following will show the process for moving mailboxes in bulk from on premises to O365:

1. Connect to Exchange Online via PowerShell

If you have read my previous post, Connect to All PowerShell Modules in O365 with one script, you should have everything needed to connect your PowerShell to O365. Note that in this scenario all of these cmdlets are run from O365 PowerShell, and the moves are monitored from O365 by either PowerShell or the Exchange Admin Center. You will not be able to monitor the moves from on premises.

2. Provide your on premises Migration Administrator credentials as a variable for your cmdlet.

3. Move multiple mailboxes in a single batch.

The same prerequisites apply as for the single-mailbox move above: directory sync in place with the accounts showing as synced in the cloud, the MRS Proxy endpoint enabled, and licensing available for the mailboxes.

This time, you want to create a CSV file using Alias or EmailAddress as the header, and then list the appropriate value for each user in your batch group. Save the file locally as MigrationBatch01.csv or a name of your choice.

Use EmailAddress or Alias as the header.

Next, you initiate the mailbox moves. When specifying the mailbox identity in the cmdlet, use the header that matches your CSV in the variable reference (either $user.EMailAddress or $user.Alias).
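A hedged sketch of the batch loop, assuming the CSV uses an EMailAddress header, the credential variable from step 2 is $RemoteCredential, and the host/domain names are placeholders:

```powershell
# Import the batch file created above
$Batch = Import-Csv "C:\Temp\MigrationBatch01.csv"

# Create a move request for each user in the batch
foreach ($user in $Batch) {
    New-MoveRequest -Identity $user.EMailAddress `
        -Remote `
        -RemoteHostName "mail.contoso.com" `
        -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
        -RemoteCredential $RemoteCredential `
        -BatchName "MigrationBatch01" `
        -BadItemLimit 10
}

# Monitor the whole batch at once
Get-MoveRequest -BatchName "MigrationBatch01" | Get-MoveRequestStatistics
```

Tagging the requests with -BatchName makes it easy to monitor or suspend them as a group.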

Use the following LINK for documentation on the New-MoveRequest cmdlet.

GOOD LUCK WITH YOUR MIGRATIONS!
HAPPY TROUBLESHOOTING!

References:
Moving Individual Mailboxes to O365
Move Mailboxes in Bulk to O365
PowerShell Mailbox Migration to O365
Connect to all PowerShell Modules in O365 with one script
New-MoveRequest Microsoft Document

Exchange DAG Replication Problem: An established connection was aborted by the software in your host machine

I had an issue with a four-node DAG where the two DAG members in the DR site were having replication issues, though technically only one DAG member was affected. The copy queue length was very high and the logs were not committing to the database. Running the Test-ReplicationHealth cmdlet confirmed that the copy queue length for the affected database copy was high. None of the other seven databases on this DAG node were affected. The problem was that the log files were not replicating properly to the one DAG member for that database, causing the log file drives on all the other DAG members to fill up:

EX04 DAG member has high Copy Queue Length
The purple member (EX04) free space is different from the other three DAG members

Circular logging was turned on, but since the database copy was NOT in sync, the logs could NOT truncate properly, which rendered circular logging useless. What was being done to stave off the issue was to suspend the database copy on the affected DAG member (EX04), then resume it. The logs would replay and commit to the database copy on the DAG member, but within a short period of time the same issue would arise again, as shown in this graph:

You can see the other DAG members start dropping in free space

There were absolutely no errors in the Event Viewer showing this replication issue. After some research, I ran the following cmdlet showing a particular output parameter that gave me the actual problem:

Get-MailboxDatabaseCopyStatus DAG1DB01 | ft -a -wr Name, Status, IncomingLogCopyingNetwork

Output with the actual error listed for the DR DAG members.

The operative error here was: {An error occurred while communicating with server ‘EX01’. Error: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine.} 

Now even though only EX04 was actually having problems with its log replication, both DR members EX03 & EX04 were having the same problem. Again, there were NO events in event viewer showing this issue. I next did some connectivity tests to EX01 from EX04 even though the error said there was an established connection that was broken.

ping EX01 -f -l 1472

The -f flag tells ping NOT to fragment the packet, so it is sent whole to the destination.
The -l flag sets the packet/buffer size to send; in this case, 1472 bytes (the largest ICMP payload that fits in a standard 1500-byte Ethernet MTU once the 28 bytes of IP and ICMP headers are added).
By doing this, you can verify that no router or switch along the path is fragmenting the packets; fragmentation of replication traffic can cause replication issues.

That test passed successfully. I also did a trace route to assure there was no packet loss on the route to the replicating server. That test passed successfully.

I next checked the DAG Network to assure that all networks were working for replication. Now, in this scenario, there was only ONE DAG Network, there was NOT a separate Replication Network. I did not design the DAG and limitations most likely came into play during the design. From my experience, you setup a separate replication network for replication only, but if your network has enough bandwidth, and the design calls for simplification, you can use one DAG network in your design.

Get-DatabaseAvailabilityGroupNetwork | fl 

RunspaceId : a1600003-8074-4000-9150-c7800000207f 
Name : MapiDagNetwork 
Description : 
Subnets : {{192.168.1.0/24,Up}, {192.168.2.0/24,Up}} 
Interfaces : {{EX01,Up,192.168.1.25}, {EX02,Up,192.168.1.26},{EX03,Up,192.168.2.25}, {EX04,Up,192.168.2.26}} 
MapiAccessEnabled : True 
ReplicationEnabled : True 
IgnoreNetwork : False 
Identity : DAG1\MapiDagNetwork 
IsValid : True 
ObjectState : New 

All the DAG Network Members were up and not showing errors. I next did a telnet session to EX01 over the default DAG replication port 64327 to see if there would be any connectivity issues to EX01:

telnet EX01 64327

That test was successful and there were no connectivity issues to EX01 from EX04. Again, only ONE database out of eight was having replication problems. After mulling over the problem, it was decided to restart the MSExchangeRepl service on EX03 AND EX04, since the error was present on both DAG members. We would then suspend and resume the database copy on the affected servers.

Run on EX03:
Restart-Service MSExchangeRepl
Suspend-MailboxDatabaseCopy DAG1DB01\EX03 -Confirm:$False
Resume-MailboxDatabaseCopy DAG1DB01\EX03 -Confirm:$False

Run on EX04:
Restart-Service MSExchangeRepl
Suspend-MailboxDatabaseCopy DAG1DB01\EX04 -Confirm:$False
Resume-MailboxDatabaseCopy DAG1DB01\EX04 -Confirm:$False

After monitoring the databases and log drives, the issue was resolved and replication started functioning properly.

Log Drive Available Space Returned to Normal for DAG members

PLEASE COMMENT! I WELCOME SUGGESTIONS, TIPS, ALTERNATIVE TROUBLESHOOTING! HAVE A GREAT DAY!

Connect to all PowerShell Modules in O365 with one script

Let’s say you’re an admin who needs to connect to Office365 via PowerShell often. There are many different websites and blogs that will show you how to connect to each service via PowerShell, but that can cause a headache: you can end up with five different PowerShell sessions running in five different windows, entering a username and password each time, which becomes time-consuming.

I want to show you here how to combine all those sessions into one script where, if security is tight enough on your computer, you don’t even have to enter credentials. This way, you can click on one icon and pull up all the O365 PowerShell commands that you’ll need to manage your organization.

First, you need to download the following PowerShell module installation files so that PowerShell will have the correct modules installed:

Microsoft Online Service Sign-in Assistant for IT Professionals RTW
Windows Azure Active Directory Module for Windows PowerShell v2
SharePoint Online Management Shell
Skype for Business Online, Windows PowerShell Module

Next, we want to set up the CLI (command-line interface) to be too cool for school. It helps to know how to customize the CLI window. You can do all of this in PowerShell ISE or Notepad, whichever you prefer. Here are the commands for the script that I use to set up the CLI:

Next, you want to set your Execution Policy and store your credentials so that you won’t be prompted to enter them when you run the script.
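The original screenshot is gone, but the credential portion of such a script typically looks like this (the UPN and password are placeholders; heed the plain-text warning below):

```powershell
Set-ExecutionPolicy RemoteSigned -Force

# WARNING: the password sits in the script in plain text; keep the file locked down
$User = "admin@yourtenant.onmicrosoft.com"
$Pass = ConvertTo-SecureString "YourPasswordHere" -AsPlainText -Force
$LiveCred = New-Object System.Management.Automation.PSCredential ($User, $Pass)
```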

NOTE: MAKE SURE YOU KEEP YOUR SCRIPT SAFE AS THE CREDENTIALS ARE VISIBLE WITHIN THE SCRIPT IN PLAIN TEXT!

You can, alternatively, set your script to prompt for credentials every time by using the following:

$LiveCred = Get-Credential

Here is that part of the script:

Now we get into the importing of the modules for each O365 service:

Get the MSOnline Module:

Connect to the MSOnline Service:

Connect to Azure AD PowerShell:

Connect to SharePoint Online PowerShell:
NOTE – MAKE SURE YOU CHANGE TO YOUR COMPANY NAME IN THE URL!!

Connect to Exchange Online PowerShell:

Connect to Skype For Business Online PowerShell:

Connect to the Security & Compliance PowerShell:
NOTE – This one I still get “Access Denied” when trying to connect. I have looked for an answer to that issue, but have not found one. Please comment with a link if you have an answer so that I can update this script!

Lastly, put in a note to show that the PS load is completed:

So Here is the final script in its entirety:
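The original script screenshot is not reproduced here, so below is a minimal sketch of what a combined script looked like with the module set of that era (MSOnline, AzureAD, SharePoint Online, Skype for Business Online, plus remoting sessions for Exchange Online and Security & Compliance). The tenant name, admin UPN, and password are all placeholders; adjust for your organization.

```powershell
# --- Credentials (see the plain-text warning above) ---
$User = "admin@yourtenant.onmicrosoft.com"
$Pass = ConvertTo-SecureString "YourPasswordHere" -AsPlainText -Force
$LiveCred = New-Object System.Management.Automation.PSCredential ($User, $Pass)

# --- MSOnline service ---
Import-Module MSOnline
Connect-MsolService -Credential $LiveCred

# --- Azure AD v2 ---
Import-Module AzureAD
Connect-AzureAD -Credential $LiveCred

# --- SharePoint Online (CHANGE to your company name in the URL!) ---
Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com" -Credential $LiveCred

# --- Exchange Online (legacy remoting session) ---
$ExoSession = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $ExoSession -DisableNameChecking

# --- Skype for Business Online ---
Import-Module SkypeOnlineConnector
$SfboSession = New-CsOnlineSession -Credential $LiveCred
Import-PSSession $SfboSession

# --- Security & Compliance Center (prefixed to avoid cmdlet name clashes) ---
$SccSession = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://ps.compliance.protection.outlook.com/powershell-liveid/" `
    -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $SccSession -Prefix CC

Write-Host "O365 PowerShell load completed!" -ForegroundColor Green
```

The -Prefix CC on the Security & Compliance import keeps its cmdlets (e.g. Get-CCRetentionCompliancePolicy) from colliding with the Exchange Online session's cmdlets.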

Now you can create your icon for your desktop so that you can easily access the script. I would save the script to your Scripts directory.

That will usually be C:\Users\’username’\Documents\WindowsPowerShell\Scripts, or whichever directory you choose.

To start, right click the desktop and choose New > Shortcut
In the Target Field, enter the following for your PowerShell Shortcut, pointing to the path of your script:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -ExecutionPolicy Unrestricted -File "C:\Users\username\Documents\WindowsPowerShell\Scripts\ConnectO365All.ps1"

Click on the Advanced button and check the box: Run As Administrator
Under the General Tab, name your shortcut: (CompanyName) O365 All PowerShell
Click OK to save the shortcut to your desktop.

LAST BUT NOT LEAST, RUN THE FOLLOWING COMMAND BEFORE EXITING OR CLOSING YOUR POWERSHELL WINDOW. THIS WILL REMOVE ALL THE SESSIONS YOU’VE CONNECTED TO:

Get-PSSession | Remove-PSSession

HAPPY SCRIPTING!
LEARN, DO, LIVE!

References:
Connect to all O365 Services in one PowerShell Window
How to connect to all O365 Services through PowerShell
Connecting to Office 365 “Everything” via PowerShell

Checking Drive Space Volumes for DAG DB members through PowerShell

I received a weird alert for a DB volume on one DAG member being below the free-space threshold. This was odd because there were four DAG members and the alert fired for only one. I went into Azure Log Analytics and ran the following query to render a graph of the volume's percent free space over the past 14 days for all the DAG members.

Thanks Georges Moua for the query script!

Now, the reason I can run the query this way is that the DAG was designed correctly and the DB folder paths are identical on all DAG members. The query rendered the following chart:

As you can see the Green DAG member is way below the other DAG members.

I next went to an Exchange Server in the DAG and got the volume data for all the members in the DAG:
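The original cmdlet output is not shown here; one way to pull the volume data for all members at once is via WMI (server names and the volume label filter are placeholders for this environment):

```powershell
# Query the DB volume on every DAG member from a single server
$Servers = "EX01","EX02","EX03","EX04"
Get-WmiObject -Class Win32_Volume -ComputerName $Servers |
    Where-Object { $_.Label -like "DAG1DB01*" } |
    Select-Object PSComputerName, Label,
        @{ n = "FreeGB";     e = { [math]::Round($_.FreeSpace / 1GB, 2) } },
        @{ n = "CapacityGB"; e = { [math]::Round($_.Capacity  / 1GB, 2) } }
```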

EX02’s volume free space is far below the other DAG members

I went on EX02 and found that there was a subfolder named “Restore” that was not present on the other servers. I ran the following script to get the size of that folder in GB:
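A sketch of such a folder-size check (the path is a placeholder for the actual DB volume folder on EX02):

```powershell
# Sum the size of everything under the Restore folder and report it in GB
$Folder = "E:\DAG1DB01\Restore"
$SizeGB = (Get-ChildItem $Folder -Recurse -Force |
    Measure-Object -Property Length -Sum).Sum / 1GB
"{0:N2} GB" -f $SizeGB
```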

The folder size was 185 GB. Removing that folder, along with all subfolders/files, would balance the free space to the other DAG members. I ran the following cmdlet to remove the folder and all subfolders/files:
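A sketch of the removal (same placeholder path; be careful, this is destructive):

```powershell
# Remove the Restore folder and everything beneath it
Remove-Item "E:\DAG1DB01\Restore" -Recurse -Force
```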

This remediated the alert and balanced the drive space across all DAG members.

POST YOUR COMMENTS OR QUESTIONS!
HAPPY TROUBLESHOOTING!

Event 11022 with MSExchangeTransport – Easy Validation Test

In a hybrid environment, you’re always connecting between the cloud and on premises to establish transport through the connectors that move mail. By default, this is done over a TLS (Transport Layer Security) connection, which, similar to an SSL or VPN connection, uses certificates to encrypt the data between the two organizations in a hybrid configuration.

Because certificates are used, each certificate must be validated: checked to see whether it has expired or been revoked by the issuing authority. A revocation list is created and updated regularly for this purpose. If the connecting organization cannot check the certificate's revocation status, it will not establish a TLS connection with the other organization. You will then get the following event:

Event 11022
MSExchangeTransport
Error:
Failed to confirm domain capabilities ‘mail.protection.outlook.com:AcceptOorgProtocol’ on connector ‘Inbound from Office 365’ because validation of the Transport Layer Security (TLS) certificate failed with status ‘RevocationOffline’. Contact the administrator of ‘mail.protection.outlook.com’ to resolve the problem, or remove the domain from the TlsDomainCapabilities list of the Receive connector.

Most likely, there is a network issue preventing the on-premises organization from retrieving the certificate revocation list (CRL). Since it cannot retrieve that file, transport stops the connection and throws the error.

A simple way to validate the connector and confirm transport from Office365 is to run the following cmdlet from the on-premises server that handles the connection:

Again, I like to include Write-Host, hostname, and date in the output to make it easy to document things when working an incident.

From the highlighted text, we can see the test was successful.

The test runs a connection for each connector and tests its validity. If success is returned, then we know that the certificate was validated and the connection was established through the connector from Office365.

If you get a failure though, you will need to run tests to see if you can pull the revocation list for the certificate as well as a simple test to connect to Office365:
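For the "simple test to connect to Office365" side, a hedged sketch (the hostname is a placeholder; use your tenant's MX endpoint):

```powershell
# Basic TCP connectivity check toward Office 365 transport on port 25
Test-NetConnection -ComputerName "yourtenant-com.mail.protection.outlook.com" -Port 25
```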

Connect to Exchange Online via Powershell

IMPORTANT NOTE

I wanted to include information on how to pull the CRL Distribution Point for Office 365 so that you could run an Invoke-WebRequest to pull the CRL file from the Distribution Point, but I have NOT found a single way through PowerShell to pull that information. I have searched multiple posts and articles showing advanced methods of using certutil and PowerShell to get plenty of other information, but NOTHING on how to pull the URL for the CRL file from the certificate. Doing a Get-ChildItem for the certificate using the thumbprint does NOT pull that property from the certificate. Now, if you have a cmdlet that WILL do that, PLEASE POST!
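In the spirit of the request above, one approach that may get close: the CRL Distribution Points extension (OID 2.5.29.31) on the certificate object can be decoded with the extension's Format() method, which prints the URL as text. I have not validated this against every certificate type, so treat it as a sketch (the thumbprint is a placeholder):

```powershell
# Grab the certificate by thumbprint, then decode the CRL Distribution Points extension
$Cert   = Get-ChildItem "Cert:\LocalMachine\My\<Thumbprint>"
$CrlExt = $Cert.Extensions | Where-Object { $_.Oid.Value -eq "2.5.29.31" }
$CrlExt.Format($true)   # multi-line dump that includes the CRL URL
```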

So, in essence, to troubleshoot if you can get to the CRL file, you get the URL for the CRL Distribution Point from the GUI Properties of the certificate. Then you run the following cmdlet in PowerShell:
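A sketch of that check (the URL is a placeholder; substitute the CRL Distribution Point you copied from the certificate properties):

```powershell
# If this succeeds, the server can reach and download the CRL file
Invoke-WebRequest -Uri "http://crl.example.com/cert.crl" -OutFile "$env:TEMP\check.crl"
```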

POST COMMENTS!
HAPPY TROUBLESHOOTING!

What the Hybrid Configuration Wizard Performs in the background and configuring Hybrid Co-Existence with Exchange Online

****UPDATE 3/23/2020****

Changes have been made to the HCW and the installation since this original post. Please read the following to learn about the updates to the tool and the installation.


March 2020 significant update to Hybrid Configuration Wizard

We wanted to let you know that we are releasing what we consider a significant update to Exchange Hybrid Configuration Wizard (HCW). Along with a handful of small bug fixes, there are four major changes coming that we wanted to share with you:

  1. HCW will no longer enable Federation Trust by default for all installations. Instead, it will only enable Federation Trust if there are Exchange 2010 servers on premises. HCW will call Get-ExchangeServer and if no Exchange 2010 servers are reported, the workflow to enable Federation Trust and subsequently require domain proof will not execute. Note that organization relationships are still created.
  2. When uninstalling the hybrid agent and switching to Classic in the HCW, this action would sometimes fail with a “null reference” error. We have fixed this!
  3. How many of you have hit the HCW 8064 error – unable to configure OAuth, and subsequently had no idea why OAuth failed to configure? Yes, we heard you loud and clear! In this release, we have completely changed the way we enable and configure OAuth. Instead of enabling OAuth at the service layer, we now enable OAuth via a Graph API under the context of the Tenant Admin. This in turn removes the error obfuscation we had with the service layer enablement and allows us to include a detailed error entry in the HCW log. So while you still see the HCW 8064 error in the HCW UI, you can now review the log for the specific error detail which will make it easier to troubleshoot and resolve.
  4. When verifying DNS, we had a fallback mechanism that would reach out to an external site to verify domains. While this fallback mechanism was rarely hit, we received overwhelming feedback to not use this mechanism/site as it was not listed in our IPs & URLs web page. We have removed that fallback and now only use the endpoint “mshybridservice.trafficmanager.net”, which is listed in our endpoints documentation.

Because this is a major version update, the build begins with 17.x vs 16.x. The build number can be found in the top right corner once you download and open the HCW.

Because of the web-based distribution nature HCW uses and this version is a brand new package, you will get all this goodness simply by installing the new HCW from here. The current builds of HCW (16.x) will not automatically update to 17.x build, in fact – you could run the two side-by-side. Once you are on 17.x build – the HCW will then auto-update as usual.

A few additional notes: At this time, we do not anticipate new HCW 16.x builds. Therefore, to continue getting new HCW builds in the future, uninstall the current version of HCW (16.x) and then install the new version (17.x). The new version of HCW has a new dependency, .NET 4.7.2. The installer should take care of this for you, but just so you are aware.


ORIGINAL POST

I’m working on getting certified in Exchange hybrid scenarios and Exchange Online configuration as part of my skill set for Exchange. In doing so, I successfully implemented a complete full hybrid Exchange environment between my Exchange Online tenant and my on-premises Exchange 2019 environment last evening.

I wanted to give an update that was posted to my LinkedIn Posting on this. Thank you Brian Day for the vote of confidence and caution that running these cmdlets manually is not supported by Microsoft and that the HCW, like all the Online Microsoft Products, is constantly changing and being updated.

Important Note

As preparation, I bought some Exchange Online Plan 1 licenses, which give me a 50 GB mailbox limit and basic mailbox functionality. They do not include the more advanced features such as ATP or DLP; I am running most of those features through my on-premises environment. I mainly wanted to be able to place mailboxes in the cloud and have a hybrid setup.

My plan was to have mail flow continue through my on-premises environment so that my Exchange Server features would be used and I would not have to change any MX or SPF records. I also had my certificates in place for SSL and OWA, so I wanted to keep mail flow routed that way, through on premises. I do want Free/Busy lookups cross-premises, so federation would have to be enabled as well. I would also have to enable the MRS Proxy on my Exchange Server so that mailbox migration could be performed cross-premises.

I had also previously configured Azure AD Sync along with ADFS for single sign-on. In my case, another server was not needed, as I didn’t have enough mailboxes or a real need to split my frontend and backend deployment. Running the Hybrid Configuration Wizard would not open any new ports or change any existing port traffic already configured on my firewall. These are just a few of the considerations that need to be looked at when planning a hybrid integration.

Here is a great article to read for the prerequisites
Exchange Hybrid Deployment Pre-requisites

So, once I had all those considerations handled in my design, I ran the Hybrid Configuration Wizard. What I want to do in this blog post is to go through the steps that the wizard does in the background to setup the Hybrid Environment as you go through the Wizard.

I mainly used the following blog post as a reference, but have approached it differently by diving into the cmdlets that are run during the process:

https://www.codetwo.com/admins-blog/office-365-hybrid-configuration-wizard-step-by-step/#validating-connection

1. The HCW validates the On-premises and Online Exchange Connection.

The Hybrid Configuration Wizard checks if it is possible to connect to both servers with PowerShell. It runs the Get-ExchangeServer cmdlet on premises after resolving the server in DNS. It then connects to Exchange Online, authorizing the connection:

Authority=https://login.windows.net/common Resource=https://outlook.office365.com ClientId=abcdefgh-a123-4566-9abc-2bdflancelin

2. The HCW collects data about Exchange configuration from the on-premises Active Directory

The Wizard gathers information about the local domain. In order to do that, the HCW executes a series of cmdlets.

These include, in order:
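The exact list comes from the HCW log and varies by version; a representative sample of the read-only cmdlets it runs on premises looks like this (server name is a placeholder):

```powershell
Get-ExchangeServer
Get-OrganizationConfig
Get-AcceptedDomain
Get-RemoteDomain
Get-EmailAddressPolicy
Get-FederationTrust
Get-FederatedOrganizationIdentifier
Get-SendConnector
Get-ReceiveConnector -Server EX01
Get-WebServicesVirtualDirectory -Server EX01
```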

3. The HCW collects information on the Exchange online (Office 365) configuration

This task repeats what was done in the previous step, only for Exchange Online instead of on premises.

The cmdlets include, in order:

4. Federation Trust is determined. If not present, a new Federation Trust and the required certificate will be created on the local Exchange Server

You will be prompted in the Wizard to create a Federation Trust if not present. The following articles explain Federation and its requirements:

Understanding Federation – Link Here
Understanding Federated Delegation – Link Here
Create a Federation Trust – Link Here

If the activity is finished successfully, a new certificate should appear on the on-premises Exchange Certificates list. The new certificate includes “Federation” in its Subject field. To make sure the certificate is there, you can run a cmdlet: Get-ExchangeCertificate | ft -a -wr


The results will look like this

5. The HCW creates a new Hybrid Configuration Object in the local Active Directory

The HCW will run cmdlets based on the information you provide in the HCW for the certificate, the on premises Exchange Server, the domain(s), and what features you want turned on:
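Sketched from HCW logs rather than reproduced from the original screenshot, the hybrid configuration object cmdlets look roughly like this (the domain, feature list, and certificate name are placeholders that HCW fills in from your wizard selections):

```powershell
# Create the hybrid configuration object in Active Directory
New-HybridConfiguration

# Stamp it with the selected domains, features, and TLS certificate
Set-HybridConfiguration -Domains "ldlnet.net" `
    -Features FreeBusy,MoveMailbox,Mailtips,MessageTracking,OwaRedirection,SecureMail,Photos `
    -TlsCertificateName "<I>CN=IssuingCA<S>CN=mail.ldlnet.net"
```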

It then checks the settings through the following cmdlets:

It then enables Organization Customization for both environments through this cmdlet:
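That cmdlet is the one-time tenant "dehydration" step; note it throws an error if customization has already been enabled, which is harmless:

```powershell
# Prepare the Exchange Online tenant for custom configuration
Enable-OrganizationCustomization
```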

6. The configuration is then completed by modifying settings in the on-premises Exchange environment

EmailAddressPolicy – HCW adds address @tenant.mail.onmicrosoft.com
The HCW configures remote domains – adds tenant.mail.onmicrosoft.com and tenant.onmicrosoft.com
The HCW adds a new accepted domain – adds tenant.mail.onmicrosoft.com

Some of the cmdlets run:
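A hedged sketch of that step, using the tenant routing domain as a placeholder (the real values come from the HCW log):

```powershell
# Add the tenant routing address to the default email address policy
Set-EmailAddressPolicy "Default Policy" `
    -EnabledEmailAddressTemplates "SMTP:%m@ldlnet.net","smtp:%m@ldlnet.mail.onmicrosoft.com"

# Add the remote domains and mark the routing domain for hybrid delivery
New-RemoteDomain -Name "Hybrid Domain - ldlnet.mail.onmicrosoft.com" -DomainName "ldlnet.mail.onmicrosoft.com"
Set-RemoteDomain "Hybrid Domain - ldlnet.mail.onmicrosoft.com" -TargetDeliveryDomain $true -TrustedMailOutboundEnabled $true

# Add the routing domain as an accepted domain
New-AcceptedDomain -Name "ldlnet.mail.onmicrosoft.com" -DomainName "ldlnet.mail.onmicrosoft.com" -DomainType Authoritative
```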

7. The HCW Configures the Organization Relationship between the local server and the cloud.

This configuration is not necessary in minimal hybrid deployment. Since I have a full hybrid deployment configured, the cmdlets were run as needed to configure it. Thanks to the correct configuration, it is possible to synchronize free/busy status of mailboxes and their elements between the on-premises Exchange Environment and Exchange online. 

Some of the cmdlets run in the process:
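Sketched from typical HCW logs (the name suffix is a GUID in practice, shown here as a placeholder):

```powershell
# On-premises side of the organization relationship
New-OrganizationRelationship -Name "On-premises to O365 - <GUID>" `
    -TargetApplicationUri "outlook.com" `
    -TargetAutodiscoverEpr "https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc/WSSecurity" `
    -Enabled $true -DomainNames "ldlnet.mail.onmicrosoft.com"

# Turn on the cross-premises features
Set-OrganizationRelationship "On-premises to O365 - <GUID>" `
    -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails `
    -MailboxMoveEnabled $true -MailTipsAccessEnabled $true
```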

8. The HCW sets up connectors on both Exchange organizations

The HCW checks whether the connectors are there; if not, it sets them up. During this workflow, four connectors are set: a send and a receive connector on the on-premises side, and an outbound and an inbound connector in Exchange Online. Those connectors guarantee mail flow between on-premises and Exchange Online.

Some of the cmdlets run in the process:
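A rough sketch of the connector creation, with the domain names, certificate subject, and GUID suffixes as placeholders (the authoritative versions are in the HCW log):

```powershell
# On premises: send connector routing to the tenant
New-SendConnector -Name "Outbound to Office 365" `
    -AddressSpaces "ldlnet.mail.onmicrosoft.com" -Fqdn "mail.ldlnet.net" `
    -RequireTLS $true -TlsAuthLevel DomainValidation `
    -TlsDomain "mail.protection.outlook.com" -SourceTransportServers "EX01"

# Exchange Online: inbound and outbound connectors back to on premises
New-InboundConnector -Name "Inbound from <GUID>" -ConnectorType OnPremises `
    -SenderDomains * -RequireTls $true -TlsSenderCertificateName "mail.ldlnet.net"
New-OutboundConnector -Name "Outbound to <GUID>" -ConnectorType OnPremises `
    -RecipientDomains "ldlnet.net" -TlsSettings CertificateValidation -SmartHosts "mail.ldlnet.net"
```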

The Intra-Organization is set as well:
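Roughly, again with a GUID placeholder in the name:

```powershell
# Intra-organization connector used for cross-premises feature discovery
New-IntraOrganizationConnector -Name "HybridIOC - <GUID>" `
    -DiscoveryEndpoint "https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc" `
    -TargetAddressDomains "ldlnet.mail.onmicrosoft.com"
```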

9. The HCW configures OAuth Authentication across the Hybrid

This LINK explains how OAuth is configured between Exchange On Premises and Exchange Online. It’s a very good article to read as it shows how to get the Modern Authentication style working. Now the HCW does this for you and at the end of the article, you can run cmdlets to test the validity of the configuration.

If you want to go into a deep dive about how the Hybrid Authentication works, see the following:
Deep Dive Into Hybrid Authentication – from the MS Exchange Team Blog

Here are some of cmdlets run during this process workflow:

Again, look at both of those links to get a little more detail as to what each cmdlet does and how it sets up OAuth. Here are the two cmdlets used to test OAuth:
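These are the Test-OAuthConnectivity checks run on premises (the mailbox is a placeholder):

```powershell
# Test OAuth against Exchange Online EWS
Test-OAuthConnectivity -Service EWS `
    -TargetUri "https://outlook.office365.com/ews/exchange.asmx" `
    -Mailbox "user@ldlnet.net" -Verbose | Format-List

# Test OAuth against Exchange Online Autodiscover
Test-OAuthConnectivity -Service AutoD `
    -TargetUri "https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc" `
    -Mailbox "user@ldlnet.net" -Verbose | Format-List
```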

10. Enable MRS Proxy for Migration

In order to be able to move mailboxes between Exchange on premises and Exchange Online, you have to enable the MRS Proxy (Mailbox Replication Service proxy) on the Exchange Web Services virtual directory. You also have to set your EWS virtual directory to use Basic Authentication. You’ll want to do this before running the HCW, or else you will receive the following error when the HCW validates the migration setup and configuration:

Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the server ‘mail.ldlnet.net’ could not be completed. —> Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to ‘https://mail.ldlnet.net/EWS/mrsproxy.svc’ failed. Error details: The HTTP request was forbidden with client authentication scheme ‘Negotiate’. –> The remote server returned an error: (403) Forbidden.. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request was forbidden with client authentication scheme ‘Negotiate’. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned an error: (403) Forbidden.

Some of the cmdlets run to test Migration and MRS Proxy Settings are as follows:
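A sketch of the enable-and-test pair (server name, endpoint, and credential are placeholders for this lab):

```powershell
# On premises: enable the MRS Proxy on the EWS virtual directory
Set-WebServicesVirtualDirectory -Identity "EX01\EWS (Default Web Site)" -MRSProxyEnabled $true

# From Exchange Online: validate the migration endpoint
Test-MigrationServerAvailability -ExchangeRemoteMove `
    -RemoteServer "mail.ldlnet.net" `
    -Credentials (Get-Credential "LDLNET\administrator")
```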

11. Final HCW Configuration and cleanup.

The HCW runs some final cmdlets to finish up the installation of the hybrid environment. Here are the cmdlets run:

All this information was found in the setup logs that are in the following directory
C:\Users\%username%\AppData\Roaming\Microsoft\Exchange Hybrid Configuration

REFERENCES
Understanding Federation
Understanding Federated Delegation
Create a Federation Trust
Hybrid deployment prerequisites
Exchange Specific OAuth 2.0 Protocol Specification
Understanding WS-Security
JSON Web Tokens
Using OAuth2 to access Calendar, Contact and Mail API in Office 365 Exchange Online
Configurable token lifetimes in Azure Active Directory (Public Preview)
OAuth Troubleshooting
Principles of Token Validation
Troubleshooting free/busy issues in Exchange hybrid environment
How to configure Exchange Server on-premises to use Hybrid Modern Authentication
Microsoft 365 Messaging Administrator Certification Transition (beta)
Microsoft 365 certification exams
Exchange Server build numbers and release dates
March 2020 Updates to the HCW

PLEASE LEAVE QUESTIONS, COMMENTS, UPDATES! I WOULD LOVE TO HEAR FROM YOU!

Update Edge Server Certificate in a Hybrid Exchange Environment


At work, our group was updating the Exchange Edge Server certificates and having mail flow problems causing messages to be in the Poison Queue and not transfer to Office365 properly. We finally got the procedure down to where it started working. I wanted to post that procedure here since I had never really worked with Edge Servers in the past. If this post can help you in the future, then “I done good!”

Everywhere I had read said that you have to remove and then re-create the Edge Subscription between your Transport Servers and the Edge Servers when changing the certificate.

Here is why:
When we subscribe the Edge server, an AD LDS account called the EdgeSync Bootstrap Replication Account (ESBRA) is created. It is created using the private key of the certificate assigned to the SMTP service as the default, so as long as we have that certificate, the transport servers can authenticate to the Edge server and replicate the required information to the AD LDS (ADAM) database.

When we install a third-party certificate, we assign the SMTP service to it and overwrite the current default SMTP certificate. By doing this, the existing Edge Subscription will fail, because the Edge server will no longer be able to decrypt the ESBRA credentials passed by the transport servers using the new certificate's private key.

So, once you have your new 3rd party certificate, you install it to your edge servers:
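The screenshot/code for this step is missing from the post; importing a certificate from a PFX file on an Edge server looks roughly like this (path and password prompt are placeholders):

```powershell
# Import the new third-party certificate from a PFX file on the Edge server
Import-ExchangeCertificate -FileData ([byte[]]$(Get-Content -Path "C:\Certs\mail_ldlnet_net.pfx" -Encoding Byte -ReadCount 0)) `
    -Password (Get-Credential).Password
```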

Then, you enable the Exchange Certificate to be used for SMTP:
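That step was also lost in formatting; the sketch below binds SMTP to the new certificate (replace the thumbprint placeholder with your own):

```powershell
# Find the thumbprint of the newly imported certificate
Get-ExchangeCertificate | Format-List Thumbprint,Subject,Services,NotAfter

# Bind the SMTP service to it - this overwrites the default SMTP certificate
Enable-ExchangeCertificate -Thumbprint <NewCertThumbprint> -Services SMTP
```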

Mail flow will be broken at this point. Since messages were going to the poison queue because the ESBRA encryption was failing during authentication with the internal Transport Servers, I had to completely stop transport by disabling the Send Connectors between the internal Transport Servers and the Edge Servers.

The Edge farm consisted of two servers. Since one of them had not had a proper sync in a while, I removed the recipient database that had been replicated to that failing server when removing its Edge Subscription. On the other server, I left the recipient database in place so that we could get one server up and running quickly, since transport was stopped at this point.

Here is the command that was run to remove the Edge Subscriptions. This needed to be completed on both the Edge Servers and the corresponding Transport Server:
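The command block is missing here; the removal is a sketch like this ("EDGE01" is a placeholder), run on each Edge Server and then on the corresponding Transport Server:

```powershell
# List the current subscriptions, then remove them
Get-EdgeSubscription
Remove-EdgeSubscription -Identity EDGE01 -Confirm:$false
```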

I then had to create a new Edge Subscription file on each Edge Server to copy to the Transport Server. I already had connectors set so I did not need to recreate those connectors.
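Generating the subscription file is a one-liner run locally on each Edge Server (path is a placeholder):

```powershell
# Run locally on each Edge server to generate a new subscription file
New-EdgeSubscription -FileName "C:\Temp\EDGE01.xml"
```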

I copied the xml files of each Edge Server to the Transport Server and ran the following cmdlet to create the Edge Subscription to the Edge Servers. I then had the Edge Servers Rebooted for good measure before redoing a Full Manual Edge Sync.
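The cmdlet block didn't survive here; the import on the Transport Server looks roughly like this (site name and path are placeholders, and the connector-creation flags are set to false since my connectors already existed):

```powershell
# Run on the internal Transport server against each copied subscription file
New-EdgeSubscription -FileData ([byte[]]$(Get-Content -Path "C:\Temp\EDGE01.xml" -Encoding Byte -ReadCount 0)) `
    -Site "Default-First-Site-Name" -CreateInternetSendConnector $false -CreateInboundSendConnector $false
```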

I next had to perform a full manual EdgeSync from the Transport Server to the Edge Servers to ensure that the recipient database in the AD LDS instance was up to date and that the Send Connectors were replicated properly.
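The sync and its verification are sketched below, run from the internal Transport Server:

```powershell
# Force a full EdgeSync from the internal Transport server
Start-EdgeSynchronization -ForceFullSync

# Verify the results of the synchronization
Test-EdgeSynchronization -FullCompareMode
```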

I next had to re-run the Hybrid Configuration Wizard so that I could configure the Edge Servers as the transport for Hybrid cloud-bound Messages. Once the Edge Servers were chosen to transport Hybrid cloud-bound messages, I selected the new Edge Certificate so that transport would work properly when re-enabled and O365 would recognize the new certificate for Hybrid messages bound for the cloud.

I next re-enabled the Edge Send Connectors so that mail flow would begin working once the Full Edge Synchronization was completed. You have to let that complete before you can begin mail flow again so that messages won’t be delivered to the Poison Queue.

Mail flow began working. It took about 90 minutes for all the queues to clear properly that had queued messages waiting to transport. Any Poison Queued messages were removed with NDRs sent to the senders.

It was a doozy to say the least. Happy Troubleshooting!
Leave Comments or Questions you may have!

References:
Exchange 2010 Edge Transport Server: Configuring EdgeSync
Mail flow breaks after renewing SSL Certificate on Edge server with Edge Subscription
Start-EdgeSynchronization

Exchange Back Pressure and Transport Issues

Sometimes you'll get a situation where email stops flowing on one of your Exchange servers. Most of the time we worry about the database and log file drives becoming full, but we don't necessarily check the Transport server configuration to see whether the resources in those directories have become full or taxed to the point of causing "back pressure". Exchange has events set up to monitor when that threshold is crossed and transport functionality is hindered:

  • Event ID 15004: Increase in the utilization level for any resource (eg from Normal to Medium)
  • Event ID 15005: Decrease in the utilization level for any resource (eg from High to Medium)
  • Event ID 15006: High utilization for disk space (ie critically low free disk space)
  • Event ID 15007: High utilization for memory (ie critically low available memory)

If you think that your server is experiencing a back pressure event, you can look quickly through event viewer for these events with the following script:
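The script itself is gone from this post; a minimal equivalent that pulls the relevant events from the local Application log:

```powershell
# Pull any back pressure events (15004-15007) from the Application log
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'MSExchangeTransport'
    Id           = 15004,15005,15006,15007
} -ErrorAction SilentlyContinue | Select-Object TimeCreated,Id,Message
```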

In most cases, you get the 15006 event:

Event 15006, MSExchangeTransport
Microsoft Exchange Transport is rejecting message submissions because the available disk space has dropped below the configured threshold. The following resources are under pressure:
Used disk space (“C:\Microsoft\Exchange Server\V15\TransportRoles\data\Queue”)
Used disk space (“C:\Microsoft\Exchange Server\V15\TransportRoles\data”)

Overall Resources
The following components are disabled due to back pressure:
Mail resubmission from the Message Resubmission component.
Mail submission from Pickup directory
Mail submission from Replay directory
Mail resubmission from the Shadow Redundancy Component
Inbound mail submission from the Internet

Exchange uses the following formula to calculate the threshold at which these events fire:

100 * (hard disk size - fixed constant) / hard disk size
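To make the formula concrete, here's a quick calculation sketch. I'm assuming the commonly documented fixed constant of 500 MB; check your version's documentation for the actual value:

```powershell
# Example: 100 GB queue drive, 500 MB fixed constant
$DiskSizeMB       = 100GB / 1MB   # 102400 MB
$FixedConstantMB  = 500
$ThresholdPercent = 100 * ($DiskSizeMB - $FixedConstantMB) / $DiskSizeMB
$ThresholdPercent                 # roughly 99.5 - pressure starts when used space exceeds this
```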

So, in order to get transport running again, get your C: Drive cleared so that back pressure is lifted off of the server and transport can run again. You should then get a 15005 Event:

Log Name: Application 
Source: MSExchangeTransport 
Date: 10/19/2017 2:21:52 PM 
Event ID: 15005 
Task Category: ResourceManager 
Level: Information 
Keywords: Classic 
User: N/A 
Computer: EX01.ldlnet.org 
Description: 
The resource pressure decreased from Medium to Low. No components disabled due to back pressure. 
The following resources are in normal state: 
Private bytes 
System memory 
Version buckets[C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que] 
Jet Sessions[C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que] 
Checkpoint Depth[C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que] 
Queue database and disk space (“C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que”) 
Used disk space (“C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue”) 
Used disk space (“C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data”) 
Overall Resources

Now that you've cleared the pressure, how do you set up your Exchange environment to keep this from happening again? Well, I would pick a drive volume that you'd never have to worry about filling up, or give transport its own drive volume. This can be accomplished with a .ps1 script that is installed in the default Scripts directory of your Exchange Server installation:

'C:\Program Files\Microsoft\Exchange Server\Vxx\scripts'

The script is named Move-TransportDatabase.ps1. It changes the location of the transport directories, moves the Queue Database, and restarts the Transport service automatically. Here is an example of how the script is executed from elevated Exchange PowerShell when moving everything to the E: drive:
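The example block was lost here; a sketch of the invocation (target paths are placeholders, and the parameter names should be verified against the script's own help):

```powershell
# Move all transport databases, logs, and temp storage to the E: drive
.\Move-TransportDatabase.ps1 -queueDatabasePath 'E:\TransportRoles\data\Queue' `
    -queueDatabaseLoggingPath 'E:\TransportRoles\data\Queue' `
    -iPFilterDatabasePath 'E:\TransportRoles\data\IpFilter' `
    -iPFilterDatabaseLoggingPath 'E:\TransportRoles\data\IpFilter' `
    -temporaryStoragePath 'E:\TransportRoles\data\Temp'
```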

So, that’s how you get your transport directories configured to relieve “back pressure”. In my experience, somebody was doing a PST export of a mailbox to the local C: Drive instead of a specific drive volume that wouldn’t affect the OS, Exchange, and Transport. That’s for another time though! Happy Troubleshooting!

Reference: Exchange 2016 – Back Pressure
Reference: A Guide To Back Pressure.
Reference: Change Exchange Server 2013/2016 Mail Queue Database Location

Running Test-MailFlow on remote Exchange Servers

In my job I try to make the process as efficient as possible so that I can determine the issue quickly and then resolve it just as quickly. I was having an issue with the Test-Mailflow cmdlet when running it remotely against the servers. I was getting the following error:

MapiExceptionSendAsDenied: Unable to submit message. (hr=0x80070005, ec=1244)

If I had multiple servers to test, I would have to log on to each server and run the test, which is not efficient at all. I wanted to automate it without having to change permissions, so I wanted to run an Invoke-Command with the Exchange PSSession in that command so that I could run the Test-Mailflow cmdlet remotely and get the results.

Paul Cunningham wrote a great article and script to resolve this. READ HERE

His script allows you to input the server name when running the PS1 from the PowerShell Command Prompt:
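Roughly, the invocation looks like this ("EX01" is a placeholder server name; check the script's help for its exact parameters):

```powershell
# Run Paul Cunningham's script against a single named server
.\Test-MailflowRemote.ps1 -Server EX01
```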

I was able to take the Test-MailflowRemote.ps1 script and set it to run on all the mailbox servers in the environment I was monitoring. Note that Test-Mailflow can only run against Exchange Mailbox Servers that have active databases mounted on them. So, I run the following first to get the list of Mailbox Servers that contain at least one active database:
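The cmdlet was lost from the post; a sketch that builds the server list:

```powershell
# Collect the unique list of servers that currently host at least one mounted database
$Svrs = Get-MailboxDatabase -Status | Where-Object { $_.Mounted } |
    Select-Object -ExpandProperty ServerName -Unique | Sort-Object
```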

I then run the ps1 script using the array I created with the $Svrs variable:
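In sketch form, assuming $Svrs holds the server names:

```powershell
# Run the mail flow test against every server in the array
$Svrs | ForEach-Object { .\Test-MailflowRemote.ps1 -Server $_ }
```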

Output:

This helps a bunch when you need to run on multiple servers and get the test information quickly. Please comment! Happy Troubleshooting!

Protected AD Groups and the problems they can cause accounts

I have run into this issue over the years: accounts in the Domain Admins group having problems running PowerShell cmdlets, as well as being unable to connect to ActiveSync from a mobile device.

These issues are due to the AdminSDHolder template in AD and the SDProp process that runs every 60 minutes in AD.
This is explained in fantastic detail in the following Microsoft article: Protected Accounts & Groups In Active Directory

Here is an example of an issue that occurred in one of the environments that I was managing. A user was trying to run the following AD cmdlet in PowerShell on DC01:
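The cmdlet itself was lost from the post; based on the error below, it was along these lines (the title value is truncated in the error output, so the one shown here is illustrative):

```powershell
# Illustrative reconstruction - the actual title value is truncated in the error below
Set-ADUser lancel -Server dc01.ldlnet.org -Replace @{title="Senior Operations Engineer"}
```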

The user got the following error when the cmdlet was executed:

Set-ADUser : Insufficient access rights to perform the operation
At line:1 char:1
+ Set-ADUser lancel -Server dc01.ldlnet.org -Replace @{title="Senior O ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo: NotSpecified: (lancel:ADUser) [Set-ADUser], ADException
+ FullyQualifiedErrorId : ActiveDirectoryServer:8344,Microsoft.ActiveDirectory.Management.Commands.SetADUser

The issue was that the admin account used to run the cmdlet was in the Domain Admins group and was not inheriting permissions per the AdminSDHolder template that was applied to the account:

I checked to see that the admin account was in a protected group:
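The check itself is missing from the post; a sketch that shows the protection marker (an adminCount of 1 indicates the account falls under AdminSDHolder):

```powershell
# adminCount = 1 indicates the account is under AdminSDHolder protection
Get-ADUser lancel -Properties adminCount,memberOf |
    Select-Object Name,adminCount,@{n='Groups';e={$_.memberOf -join '; '}}
```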

I next went to the Security Tab > Advanced Button and saw that the Enable Inheritance button was visible:

I’ve circled where to look in the window.

This verifies that the account is protected due to being in the Domain Admins group. Now, there are two workarounds for this particular error that we were experiencing.

  1. Click the Enable Inheritance button. This will cause the permissions to be inherited temporarily. When SDProp is cycled again, the account will lose any inherited permissions and will be essentially “broken” again. This is not good if you’re going to be running cmdlets regularly to modify AD Accounts.
  2. The preferred method to work around this issue is to set the -Server parameter to point to a different DC than the one you are on. So, essentially, we tell the cmdlet to execute on DC02 when running the cmdlet from DC01.
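For workaround two, the same cmdlet simply targets a different DC (the title value is illustrative again):

```powershell
# Same change, executed against DC02 instead of the local DC01
Set-ADUser lancel -Server dc02.ldlnet.org -Replace @{title="Senior Operations Engineer"}
```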

Either method will allow the cmdlet to execute successfully and modify the object. You would think that Microsoft would have addressed this behavior for the Active Directory admin cmdlets, but they have not as of yet, nor do I think they plan to. I would just go with workaround number two and remain sane.

Another example of this Protected Group issue comes with an account in a Protected Group that has a mailbox not being able to connect to Exchange ActiveSync when setting up their mobile device.

  • You usually get a 500 error on the device that you cannot connect.
  • You will also see event 1053 in Event Viewer alluding to not having sufficient access to create the container for the user in AD.

Read this page for more information: Exchange ActiveSync Permissions Issue with Protected Groups

So, in your endeavors admins, keep this in mind when running into these types of problems. Happy Troubleshooting!

Exchange Server HealthSets

This is a monitoring feature included with Exchange that, until recently, I did not know existed; it wasn't really mentioned in any of my dealings with Exchange Server. The HealthSets feature monitors every aspect of a running Exchange Server and is broken down into three monitoring components:

  • Probe: used to determine whether Exchange components are active.
  • Monitor: when probes signal a different state than the one stored in the patterns of the monitoring engine, the monitoring engine determines whether a component or feature is unhealthy.
  • Responder: takes action when a monitor alerts it to an unhealthy state. Responders take different actions depending on the type of component or feature; actions can start with just recycling the application pool and go as far as restarting the server, or, even worse, taking the server offline so it won't accept any connections.

From what I have experienced in the past year with these HealthSets, an alert will be thrown due to a change in a service, a restart of a service, a failed monitoring probe result, or the like. The healthset will become "Unhealthy" in state at that time. You can run the following on a server in order to get the healthset status of that server:
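The cmdlet block was lost from the post; a sketch ("EX01" is a placeholder server name):

```powershell
# Health of a single server, showing anything that is not Healthy
Get-HealthReport -Identity EX01 | Where-Object { $_.AlertValue -ne 'Healthy' }

# Or drill into the individual monitors of one HealthSet
Get-ServerHealth -Identity EX01 -HealthSet 'MailboxTransport'
```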

If you get alerts for multiple Exchange Servers, let’s say for instance, the transport array, you can run the following cmdlets to get the status of all the Transport Servers in the array:
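A sketch of what such a sweep might look like across the transport array:

```powershell
# Check every server running the Transport service for unhealthy HealthSets
Get-TransportService | ForEach-Object {
    Get-HealthReport -Identity $_.Name | Where-Object { $_.AlertValue -eq 'Unhealthy' }
}
```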

HealthSet PowerShell Output

Now, a lot of times, the Unhealthy value in the HealthSet will have corrected itself via the Responder, even though the AlertValue remains Unhealthy. To clear the cache quickly and have the monitor probes run again for verification, restart the following services in this order:
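The original cmdlet was lost here. To my knowledge, the service involved is the Exchange Health Manager (with the Diagnostics service as a companion on some builds) — verify the names on your build with Get-Service MSExchange* before running this sketch:

```powershell
# Restart the Managed Availability services to flush cached probe results
Restart-Service MSExchangeHM
Restart-Service MSExchangeDiagnostics
```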

That should clear the probe results and let them run again. Now, should they again return an error, we will need to dig deeper to figure out the issue.
What you will want to do first is get the monitor definition. In this example, the Mapi.Submit.Monitor was the component that was unhealthy in the healthset. I had to run the following cmdlet to get the Monitor Definition:
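The cmdlet was lost from the post; the usual pattern reads the definition out of the Active Monitoring crimson channel, along these lines:

```powershell
# Pull the monitor definitions from the crimson channel and filter on the monitor name
$Def = (Get-WinEvent -LogName 'Microsoft-Exchange-ActiveMonitoring/MonitorDefinition' |
    ForEach-Object { ([xml]$_.ToXml()).event.userData.eventXml }) |
    Where-Object { $_.Name -eq 'Mapi.Submit.Monitor' }
$Def
```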

Output:

auto-ns2                           : http://schemas.microsoft.com/win/2004/08/events
xmlns                              : myNs
Id                                 : 404
AssemblyPath                       : C:\Program Files\Microsoft\Exchange Server\V15\Bin\Microsoft.Office.Datacenter.ActiveMonitoringLocal.dll
TypeName                           : Microsoft.Office.Datacenter.ActiveMonitoring.OverallXFailuresMonitor
Name                               : Mapi.Submit.Monitor
WorkItemVersion                    : [null]
ServiceName                        : MailboxTransport
DeploymentId                       : 0
ExecutionLocation                  : [null]
CreatedTime                        : 2018-10-03T09:48:32.9036616Z
Enabled                            : 1
TargetPartition                    : [null]
TargetGroup                        : [null]
TargetResource                     : MailboxTransport
TargetExtension                    : [null]
TargetVersion                      : [null]
RecurrenceIntervalSeconds          : 0
TimeoutSeconds                     : 30
StartTime                          : 2018-10-03T09:48:32.9036616Z
UpdateTime                         : 2018-10-03T09:45:12.3073447Z
MaxRetryAttempts                   : 0
ExtensionAttributes                : [null]
SampleMask                         : Mapi.Submit.Probe
MonitoringIntervalSeconds          : 3600
MinimumErrorCount                  : 0
MonitoringThreshold                : 8
SecondaryMonitoringThreshold       : 0
MonitoringSamplesThreshold         : 100
ServicePriority                    : 2
ServiceSeverity                    : 0
IsHaImpacting                      : 0
CreatedById                        : 57
InsufficientSamplesIntervalSeconds : 28800
StateAttribute1Mask                : [null]
FailureCategoryMask                : 0
ComponentName                      : ServiceComponents/MailboxTransport/High
StateTransitionsXml                : <StateTransitions> <Transition ToState="Unrecoverable" TimeoutInSeconds="0" /> </StateTransitions>
AllowCorrelationToMonitor          : 0
ScenarioDescription                : [null]
SourceScope                        : [null]
TargetScopes                       : [null]
HaScope                            : Server
Version                            : 65536

From the output, look for the SampleMask value; it tells you the probe that is being used in the HealthSet query. From there, you can use that value to get the definition of the probe with the following cmdlets:
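The cmdlets themselves are missing from the post; the pattern mirrors the monitor-definition query, filtered on the SampleMask value:

```powershell
# Use the monitor's SampleMask (Mapi.Submit.Probe) to find the matching probe definition
(Get-WinEvent -LogName 'Microsoft-Exchange-ActiveMonitoring/ProbeDefinition' |
    ForEach-Object { ([xml]$_.ToXml()).event.userData.eventXml }) |
    Where-Object { $_.Name -eq 'Mapi.Submit.Probe' }
```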

OUTPUT:

auto-ns2 : http://schemas.microsoft.com/win/2004/08/events
xmlns : myNs
Id : 99
AssemblyPath : C:\Program Files\Microsoft\Exchange Server\V15\Bin\Microsoft.Forefront.Monitoring.ActiveMonitoring.Local.Components.dll
TypeName : Microsoft.Forefront.Monitoring.ActiveMonitoring.Transport.Probes.MapiSubmitLAMProbe
Name : Mapi.Submit.Probe
WorkItemVersion : [null]
ServiceName : MailboxTransport
DeploymentId : 0
ExecutionLocation : [null]
CreatedTime : 2019-01-05T03:22:02.4029588Z
Enabled : 1
TargetPartition : [null]
TargetGroup : [null]
TargetResource : [null]
TargetExtension : [null]
TargetVersion : [null]
RecurrenceIntervalSeconds : 300
TimeoutSeconds : 30
StartTime : 2019-01-05T03:23:36.4029588Z
UpdateTime : 2019-01-05T03:17:17.2695414Z
MaxRetryAttempts : 2
ExtensionAttributes :
CreatedById : 57
Account :
AccountDisplayName :
Endpoint :
SecondaryAccount :
SecondaryAccountDisplayName :
SecondaryEndpoint :
ExtensionEndpoints : [null]
Version : 65536
ExecutionType : 0

From there you can view and verify the associated error messages that the probe generated when it was run. According to the previous data output, the probe runs every 300 seconds. You will want to filter your logs based on criteria that you input into the cmdlet when searching the log for the events. Properties include:

  • ServiceName – Identifies the HealthSet used.
  • ResultName – Identifies the probe name. When there are multiple probes for a monitor the name will include the sample mask and the resource that you are verifying.
  • Error – Lists the error returned during the failure.
  • ResultType – Lists the value for the result type: 1 = timeout, 2 = poisoned, 3 = success, 4 = failed, 5 = quarantined, 6 = rejected.

So, based on that information, run the following cmdlet to get the last errors in the event log based on the ResultName (Mapi.Submit.Probe) and ResultType (failure). Since there could be a lot of returned data, I tell the cmdlet to select the first 2 results in the output:
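The cmdlet block was lost; a sketch of the ProbeResult query it described:

```powershell
# Most recent failed (ResultType 4) results for the Mapi.Submit.Probe
(Get-WinEvent -LogName 'Microsoft-Exchange-ActiveMonitoring/ProbeResult' `
    -FilterXPath "*[UserData[EventXML[ResultName='Mapi.Submit.Probe'][ResultType='4']]]" |
    ForEach-Object { ([xml]$_.ToXml()).event.userData.eventXml }) |
    Select-Object -First 2 ExecutionStartTime,ExecutionEndTime,ResultId,ResultName,ResultType,Error,ExecutionContext,FailureContext
```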

SAMPLE OUTPUT:

ExecutionStartTime : 2018-10-12T04:42:26.4725482Z
ExecutionEndTime   : 2018-10-12T04:42:26.5037975Z
ResultId           : 350715748
ResultName         : Mapi.Submit.Probe
ResultType         : 4
Error              : MapiSubmitLAMProbe finished with CheckPreviousMail failure.
ExecutionContext   : MapiSubmitLAMProbe started. This performs – 1. Submits a new message to Store 2. Checks results from previous Send Mail operation. Sequence # = 636741569280603580. First Run? = False. Previous mail submission to store was successful. Results –  # of previous results: 0.  Could Not Find stages that ran.  Previous SendMail failure –  Mail submitted to Store during the previous run never reached SendAsCheck. This may indicate a latency from Store to Submission Service. Investigating.  Found lower SA latency. Indicates an issue in Submission service. Investigate. In SendMail –  NotificationID=00000063-0000-0000-0000-00006ab1f5bc Sending mail. SendMail finished. MapiSubmitLAMProbe finished with CheckPreviousMail failure.
FailureContext     : MapiSubmitLAMProbe finished with CheckPreviousMail failure.

Once we have the error, we can begin to investigate what the Responder did to automatically remediate the issue using the following cmdlet:
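That cmdlet was lost from the post; a sketch that finds the responders tied to the failing monitor via their AlertMask (the filter value is an assumption based on this example's monitor name):

```powershell
# Find responders whose AlertMask ties them to the Mapi.Submit monitor
(Get-WinEvent -LogName 'Microsoft-Exchange-ActiveMonitoring/ResponderDefinition' |
    ForEach-Object { ([xml]$_.ToXml()).event.userData.eventXml }) |
    Where-Object { $_.AlertMask -like 'Mapi.Submit*' } |
    Select-Object Name,AlertMask,TypeName
```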

Now, in this example, I did NOT get any output, because I ran the query on a server that had not logged any of these events recently. The last time this event occurred, based on my notes, was September 18th, 2018. Based on the screenshot from my research, though, you should see output similar to the picture below:

Responder Output

The responder we were looking for is Mapi.Submit.EscalateResponder as suggested by the screenshot above. This type of responder (Escalate) doesn’t make Managed Availability undertake any automatic repairs but is responsible for log notifications in event logs. After getting the correct responder, you would continue to troubleshoot and attempt to remediate the issue(s) that are behind the HealthSet failure.
In my example case, I found that the Health Mailbox used for the probe test was corrupted and had to be rebuilt. Once that mailbox was functional, the probe test ran successfully.

I hope that this will help you in troubleshooting any alerts in your Exchange environment that are HealthSet based. I know for sure that gathering this information has helped me get a grasp on how the Monitoring works and how it can be used to remediate issues.

A big “Thank You” to the following sites that helped provide most of the information that you see posted here:
Exchange 2013 Managed Availability HealthSet Troubleshooting
Managed availability in Exchange 2013/2016

PowerShell Script to log NETLOGON Events 5719 and 5783, then test the Secure Channel to verify connectivity

In my support role, we would get nightly alerts showing disconnection to the PDC from other DCs and Exchange Servers, giving the following events:

DC02
10/31/2018 23:20:28 5719
NETLOGON
This computer was not able to set up a secure session with a domain controller in domain LDLNET due to the following:
The remote procedure call was cancelled.
This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.

ADDITIONAL INFO
If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the
secure session to any domain controller in the specified domain.

DC03
10/31/2018 23:18:58 5783
NETLOGON
The session setup to the Windows NT or Windows 2000 Domain Controller \\DC01.LDLNET.ORG for the domain LDLNET is not responsive. The current RPC call from Netlogon on \\DC03 to \\DC01.ldlnet.org has been cancelled.

In order to validate the secure channel, you normally run the nltest command (or the Test-ComputerSecureChannel PowerShell cmdlet) to verify connectivity to the PDC over the secure channel. The scenario, though, is that multiple DCs or Exchange servers log these events at around the same time due to a network hiccup that took the secure channel offline between the two servers.

Our team at the time was getting a lot of alerts, and it was taking an inordinate amount of time to validate and test. In an effort to provide an efficient solution, I compiled a PowerShell ps1 script to first check for the events posted in the past three hours, and then test all the DCs and Exchange Servers for secure channel connectivity:
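I can't reproduce the full ps1 here, but its two stages look roughly like this sketch (server discovery is simplified, and Exchange servers would be added to the list the same way):

```powershell
# Stage 1: look for NETLOGON events 5719/5783 from the past three hours on each DC
$DCs = Get-ADDomainController -Filter * | Select-Object -ExpandProperty HostName
foreach ($DC in $DCs) {
    Get-WinEvent -ComputerName $DC -FilterHashtable @{
        LogName      = 'System'
        ProviderName = 'NETLOGON'
        Id           = 5719,5783
        StartTime    = (Get-Date).AddHours(-3)
    } -ErrorAction SilentlyContinue | Select-Object MachineName,TimeCreated,Id,Message
}

# Stage 2: test the secure channel on each server, skipping the PDC Emulator
# (the PDC Emulator cannot run a secure channel test against itself)
$PDC = (Get-ADDomain).PDCEmulator
foreach ($DC in ($DCs | Where-Object { $_ -ne $PDC })) {
    Invoke-Command -ComputerName $DC -ScriptBlock { Test-ComputerSecureChannel -Verbose }
}
```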

NOTE: This script needs to be run on a server that has the Exchange and Active Directory RSAT tools for PowerShell.

I can’t really put out the output since it will have customer PII, but you will see where it will list the DC/Exchange Server Name, show the events, then run the test. You can then troubleshoot from there. Also, know that the secure channel test will FAIL when run on the PDC Emulator DC. The PDC Emulator cannot run a secure channel test on itself.

Please, if you have any questions or comments, please leave some feedback! Happy Troubleshooting!

Get-Counter cmdlets…

Sometimes you need to check performance counters within Windows for different services or applications; the problem is recording the output when needed.
I have been able to take care of this through PowerShell so that you can get an average of any performance counter output you need over a time period.

According to:  https://blogs.technet.com/b/nexthop/archive/2011/06/02/gpsperfcounters.aspx
A "CookedValue" definition: performance counters typically have raw values, second values, and cooked values. The raw values and second values are the raw ingredients used by the performance counter, and the "cooked value" is the result of "cooking" those ingredients into something for human consumption. So the CookedValue is the result of combining the counter's raw data into a usable value that you can understand and work with.

Here are some examples for Windows:
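The example block was lost from the post; two common Windows counters, sampled and averaged:

```powershell
# Average CPU over one minute (12 samples, 5 seconds apart)
$CPU = Get-Counter '\Processor(_Total)\% Processor Time' -SampleInterval 5 -MaxSamples 12
($CPU.CounterSamples.CookedValue | Measure-Object -Average).Average

# Current available memory in MB
(Get-Counter '\Memory\Available MBytes').CounterSamples.CookedValue
```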

Examples for Exchange Server:
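These were also lost; a sketch with two typical Exchange counters — the exact counter names vary by Exchange version, so verify them in Performance Monitor first:

```powershell
# Transport queue depth and RPC latency (verify counter names for your version)
(Get-Counter '\MSExchangeTransport Queues(_total)\Active Mailbox Delivery Queue Length').CounterSamples.CookedValue
(Get-Counter '\MSExchange RpcClientAccess\RPC Averaged Latency').CounterSamples.CookedValue
```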

A couple of links to listings of Performance Counters For Exchange:

https://www.poweradmin.com/help/pa-file-sight-7-1/howto_monitor_exchange.aspx

https://technet.microsoft.com/en-us/library/ff367923(v=exchg.141).aspx

Now, there are more counters available for all types of Windows Applications. You should be able to use every counter that is listed in Performance Monitor on the server you are running the test from.

You can always use the following command to get a list of counters on your server and save them to a file called perfcounters.txt in the C:\Files directory:
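The command was lost from the post; in sketch form:

```powershell
# Dump every available counter path on this server to a text file
Get-Counter -ListSet * | Select-Object -ExpandProperty Counter |
    Sort-Object | Out-File 'C:\Files\perfcounters.txt'
```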

I will not go into too much detail as of now, but I will probably update this as I get more information and comments on the post.
Again, this blog is for quick reference and usage when doing reactive support. As this blog grows, I will add more in depth information. Don’t hesitate though to contact me with your questions and comments.

Getting all Exchange Databases listed and whether or not they are on their preferred node or not.

This is a great one-liner in PowerShell that will list all the databases in your Exchange Server environment. It will also tell you whether each database is on its preferred node in the DAG and whether it is actively mounted on that node.

This is helpful to know if you have multiple database fail-overs and need to know which databases are where so that you can re-balance them properly. If you are in a large environment, this will help you get a handle on the issue and be able to remediate quickly.
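The one-liner itself was lost in this post's formatting; something along these lines produces the same information:

```powershell
# For each database: where it is mounted vs. which server holds activation preference 1
Get-MailboxDatabase -Status | Sort-Object Name | Select-Object Name,
    @{n='MountedOn';e={$_.ServerName}},
    @{n='PreferredServer';e={($_.ActivationPreference | Where-Object {$_.Value -eq 1}).Key.Name}},
    @{n='OnPreferred';e={$_.ServerName -eq ($_.ActivationPreference | Where-Object {$_.Value -eq 1}).Key.Name}}
```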

Here is an example of the output:

Now, that you have your listing of DBs and their status, you can run the following script from PowerShell to mount those DBs to their preferred nodes:
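My script is gone from the post, but Exchange ships a script in the same Scripts directory that does exactly this rebalancing ("DAG1" is a placeholder):

```powershell
# Rebalance all active copies in the DAG back to their activation-preference-1 servers
cd $exscripts
.\RedistributeActiveDatabases.ps1 -DagName DAG1 -BalanceDbsByActivationPreference -Confirm:$false
```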

Since SLA and remediation are big factors in reactive support, having these scripts help save the day when things get quirky in Exchange. Please comment and submit your scripts as well!

Getting Drive Space Through PowerShell for a Server

This cmdlet lists all your mounted volumes, their size, the file system used, and the available free space. You can modify the code with a Where-Object statement, e.g. ? {$_.Name -like "*logs*"}. This helps if you have an Exchange server with multiple database volumes for DBs and logs and need to quickly find which volume is the culprit.
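The cmdlet posted here appears lost; a WMI-based equivalent that produces the output shown below:

```powershell
# List every mounted local volume with capacity, free space, and file system
Get-WmiObject -Class Win32_Volume -Filter "DriveType=3" | Select-Object Name,
    @{n='Free, GB';e={[math]::Round($_.FreeSpace/1GB,2)}},
    @{n='Free, %';e={[math]::Round(100*$_.FreeSpace/$_.Capacity,2)}},
    @{n='Capacity, GB';e={[math]::Round($_.Capacity/1GB,3)}},
    @{n='FS';e={$_.FileSystem}}
```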

I also use a lot of these scripts to gather the information quickly so that I can post the output into my incidents that I am working. It’s good to have these handy.

Here is an example output:

Name                          Free, GB   Free, %   Capacity, GB   FS
C:\ExchangeDB\DAG2DB01\DB\    456.80     37.45     1,219.873      NTFS
C:\ExchangeDB\DAG2DB01\LOG\   39.49      99.03     39.873         NTFS