PowerShell – Create Remote Desktop session with LAPS credentials

If you manage your servers and computers with LAPS, you might want a simple way to connect via RDP with the LAPS credentials, without first looking up the password in the LAPS UI or with PowerShell.

That is why I’ve created a simple PowerShell Module that you can install on your management machine.

You can download the module from Microsoft TechNet Gallery here:


How to Install?

– Install LAPS PowerShell module AdmPwd.PS on your management computer. 

– Copy the RDPviaLAPS folder to C:\Program Files\WindowsPowerShell\Modules.

– Run PowerShell with a domain user account that has LAPS read password permissions. 
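The install steps above can be sketched in PowerShell (the download location `.\RDPviaLAPS` is an assumption about where you saved the module):

```powershell
# Load the LAPS PowerShell module (installed by the LAPS installer).
Import-Module AdmPwd.PS

# Copy the downloaded RDPviaLAPS folder into the system-wide module path.
Copy-Item -Path .\RDPviaLAPS -Destination 'C:\Program Files\WindowsPowerShell\Modules' -Recurse

# Verify that both modules are now discoverable.
Get-Module -ListAvailable AdmPwd.PS, RDPviaLAPS
```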

Example 1

Connect-RDP -ComputerName Server01 

This command gets the local administrator password from Active Directory and creates an RDP connection with the default username Administrator on port 3389.

Example 2

Connect-RDP -ComputerName Server01 -UserName Hulk -Port 43389

This command gets the local administrator password from Active Directory and creates an RDP connection with the username Hulk on port 43389.
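The module's internals aren't shown in this post, but a minimal stand-in for what `Connect-RDP` typically does could look like this. `Get-AdmPwdPassword` is the real LAPS cmdlet; the `cmdkey`/`mstsc` plumbing is an assumption about how such a wrapper is usually built:

```powershell
function Connect-RDP {
    param(
        [Parameter(Mandatory)] [string] $ComputerName,
        [string] $UserName = 'Administrator',
        [int]    $Port     = 3389
    )
    # Read the current LAPS password for the target computer from Active Directory.
    $laps = Get-AdmPwdPassword -ComputerName $ComputerName

    # Cache the credentials for the RDP target, then launch the RDP client.
    cmdkey /generic:"TERMSRV/$ComputerName" /user:"$ComputerName\$UserName" /pass:$($laps.Password)
    mstsc /v:"${ComputerName}:$Port"
}
```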

DPM 2019: Where is 2012/R2 support?

Almost a month ago System Center 2019, including Data Protection Manager (DPM) 2019, was released. Many of us quickly realised that there is no support for Windows Server 2012/R2, Exchange 2013, SharePoint 2013, etc.

I’m shocked! Is that true?

Unfortunately, at the time of writing this post it is. If you check the updated document on What can DPM back up? you will see that older workloads are not listed.

Ok, but does that mean that it won’t work?

Not necessarily. There is a very high probability that everything will work as expected. That is why I decided to test some scenarios:

  • Windows Server 2012 and 2012 R2 volumes and folders backup,
  • Windows Server 2012 bare-metal backup,
  • Windows Server 2012 R2 Virtual Machine (on 2016 Hyper-V Host),
  • Exchange Server 2013 (CU22) backup.

As expected, all backup/restore processes went well. I had no issues installing the DPM agent on Windows Server 2012/R2 servers, but if you do run into problems with the new DPM agent, try installing an older agent (for example, the DPM 2016 agent).

So is it safe to use it anyway?

This is entirely up to you. If you are already deploying Windows Server 2019, then you will need to use DPM 2019, as DPM 2016 cannot back up Windows Server 2019. But if you also have workloads running on Windows Server 2012/R2, you will need to decide whether you want to take the “risk” and use DPM 2019. The other option is to simply use two different DPM servers for now (2016 and 2019).

Do you think the support for Windows Server 2012/R2 will be added later?

Hope so! If I remember correctly, the same “problem” existed with the DPM 2016 release, so I’m quite sure support for older workloads will be added later (maybe in the next update rollup).


Restore failed node Storage Spaces Direct (S2D)

I’ve tested two different options for restoring a failed node in a 2-node S2D cluster with one Storage Pool. Here are the necessary steps to restore a failed node with the existing physical server (OS re-install) and with a new physical server (hardware failure).

Existing Physical Server (OS re-install)

Let’s assume that one of the nodes failed and you need to re-install your Operating System.

Remove the failed Node from the S2D Cluster.

Re-install the failed node and update it (You can use the same computer name and IP address).

Don’t modify or remove any disks from your Storage Pool.

Install roles and features needed for the Cluster.

Add the node back to the cluster (it’s recommended to run cluster validation before adding the node back to the cluster).

Once done, monitor the status of the disks in your Storage Pool. The missing disks should become healthy in a short time.

When the disks are healthy, check the status of your Virtual Disks with the Get-VirtualDisk cmdlet and wait until the status changes to Healthy.

You should now have a healthy S2D Cluster.
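The re-install procedure above can be sketched with the standard failover clustering and storage cmdlets. A hedged sketch; the cluster and node names, and the feature list, are examples:

```powershell
$Cluster = 'S2D-Cluster'   # example cluster name
$Node    = 'S2D-Node2'     # the failed node

# 1. Remove the failed node from the cluster.
Remove-ClusterNode -Cluster $Cluster -Name $Node -Force

# ... re-install the OS and patch it, keeping the same name and IP ...

# 2. Install the roles and features the cluster needs.
Install-WindowsFeature -ComputerName $Node -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# 3. Validate the cluster, then add the node back.
Test-Cluster -Cluster $Cluster -Node $Node
Add-ClusterNode -Cluster $Cluster -Name $Node

# 4. Watch the pool disks and virtual disks return to Healthy.
Get-PhysicalDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk  | Format-Table FriendlyName, HealthStatus, OperationalStatus
```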

New Physical Server (hardware failure)

In this example, let’s assume that you need to replace your physical server (with new disks for the Storage Pool) due to a hardware failure.

Change the Usage of the failed disks to Retired.

Remove the failed Node from the Cluster.

Install roles and features on your new node.

Add the new node to the Cluster (it’s recommended to run cluster validation before adding the node to the cluster).

Monitor the health of Virtual Disks and wait until status is Healthy.

Remove failed disks (with status Retired) from the Cluster.
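The hardware-replacement procedure above can be sketched as follows. A hedged sketch; the node names, the pool name pattern, and the disk-selection filters are examples you should adapt to your environment:

```powershell
$Cluster = 'S2D-Cluster'   # example cluster name
$OldNode = 'S2D-Node2'     # the dead server
$NewNode = 'S2D-Node3'     # its replacement

# 1. Retire the disks that went away with the failed server.
Get-PhysicalDisk | Where-Object OperationalStatus -eq 'Lost Communication' |
    Set-PhysicalDisk -Usage Retired

# 2. Remove the failed node from the cluster.
Remove-ClusterNode -Cluster $Cluster -Name $OldNode -Force

# 3. Prepare the replacement node, validate, then add it.
Install-WindowsFeature -ComputerName $NewNode -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
Test-Cluster -Cluster $Cluster -Node $NewNode
Add-ClusterNode -Cluster $Cluster -Name $NewNode

# 4. Wait for the virtual disks to repair, then drop the retired disks.
Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
$Retired = Get-PhysicalDisk | Where-Object Usage -eq 'Retired'
Remove-PhysicalDisk -PhysicalDisks $Retired -StoragePoolFriendlyName 'S2D*'
```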


Keep in mind that this was tested on a 2-node S2D cluster with only one Storage Pool. If you are using multiple Storage Pools, you will need to adjust the cmdlets to remove or add the disks to the correct Storage Pool.

Create Address Book Policy for Office 365

If you would like to segment users in your Office 365 subscription or in your Exchange on-premises organization, you will need to use an Address Book Policy.

This set of cmdlets will create everything you need to start using an Address Book Policy in Office 365 or Exchange on-premises. Keep in mind that prior to running these cmdlets you should already have set the value of CustomAttribute1 on each mailbox (with PowerShell or via the Exchange Admin Center).

If you are not familiar with Address Book Policies, I would recommend checking Microsoft TechNet first – this script is just a cheat sheet so you don’t need to create your own.


This example will create an Address Book Policy linked to dedicated Address Lists, a Global Address List and an Offline Address Book. The ABP will be assigned to all mailboxes with CustomAttribute1 equal to the variable $CustomAttribute1 (in this example contoso.com). Modify the $CustomAttribute1 variable as needed with any value (I personally use domain names). Rerun the cmdlets multiple times to create different Address Book Policies.

The cmdlets also work with Exchange on-premises; just skip the #Connect to Exchange Online and #Exit Exchange Online Session sections.
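A hedged sketch of the cmdlet set described above, with $CustomAttribute1 driving every filter as in the post. The object naming scheme (AL_/GAL_/OAB_/ABP_ prefixes) and the room-list filter are my assumptions, not the original script:

```powershell
$CustomAttribute1 = 'contoso.com'   # example value, as in the post

# Dedicated Address Lists, Global Address List and Offline Address Book.
New-AddressList       -Name "AL_$CustomAttribute1"  -RecipientFilter "CustomAttribute1 -eq '$CustomAttribute1'"
New-AddressList       -Name "AL_${CustomAttribute1}_Rooms" -RecipientFilter "(CustomAttribute1 -eq '$CustomAttribute1') -and (RecipientDisplayType -eq 'ConferenceRoomMailbox')"
New-GlobalAddressList -Name "GAL_$CustomAttribute1" -RecipientFilter "CustomAttribute1 -eq '$CustomAttribute1'"
New-OfflineAddressBook -Name "OAB_$CustomAttribute1" -AddressLists "GAL_$CustomAttribute1"

# Tie everything together in the Address Book Policy.
New-AddressBookPolicy -Name "ABP_$CustomAttribute1" `
    -AddressLists "AL_$CustomAttribute1" `
    -GlobalAddressList "GAL_$CustomAttribute1" `
    -OfflineAddressBook "OAB_$CustomAttribute1" `
    -RoomList "AL_${CustomAttribute1}_Rooms"

# Assign the ABP to every mailbox tagged with the attribute.
Get-Mailbox -Filter "CustomAttribute1 -eq '$CustomAttribute1'" |
    Set-Mailbox -AddressBookPolicy "ABP_$CustomAttribute1"
```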


Assign custom Organizational Unit to VM Template in VMM 2016

Domain-joined Virtual Machines deployed via VM Templates in VMM 2016 are added to the default Computers container by default. You can set a custom OU for each VM Template with PowerShell. Here is an example:

Modify the $OU and $VMTemplate variables with your Organizational Unit path and VM Template name.
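The original script isn't embedded in this copy of the post; a hedged sketch of the usual technique, which sets the domain-join OU through the template's unattend settings (variable values are examples):

```powershell
Import-Module virtualmachinemanager

$OU         = 'OU=Servers,DC=contoso,DC=com'   # your target OU path
$VMTemplate = 'WS2016-Template'                # your VM template name

# Add the machine-object OU to the template's unattend settings,
# so newly deployed VMs join the domain directly into that OU.
$Template = Get-SCVMTemplate -Name $VMTemplate
$Settings = $Template.UnattendSettings
$Settings.Add('Microsoft-Windows-UnattendedJoin/Identification/MachineObjectOU', $OU)
Set-SCVMTemplate -VMTemplate $Template -UnattendSettings $Settings
```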

DPM 2016 – The storage involving the current operation could not be read from or written to. (ID 40003)

I had an issue backing up some workloads with DPM 2016, with the error The storage involving the current operation could not be read from or written to. (ID 40003).

To fix the problem, you need to remove the last recovery point of the affected data source.

The first cmdlet will get the Protection Group specified in the $PG variable. The second cmdlet will get the data source specified in the $DS variable. The third cmdlet will get the last recovery point of the selected data source and store it in $PObject, and the last cmdlet will remove that recovery point. You can run the third cmdlet (Get-DPMRecoveryPoint) multiple times to check whether the recovery point was removed from the DPM server.
After the recovery point is removed, run the consistency check again.
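A hedged sketch of the four cmdlets described above; the server, protection group and data source names are placeholders for your own:

```powershell
# 1. Get the affected Protection Group.
$PG = Get-DPMProtectionGroup -DPMServerName 'DPM01' | Where-Object Name -eq 'PG-FileServers'

# 2. Get the affected data source inside that group.
$DS = Get-DPMDatasource -ProtectionGroup $PG | Where-Object Name -eq 'D:\'

# 3. Get the last (newest) recovery point of the data source.
$PObject = Get-DPMRecoveryPoint -Datasource $DS |
    Sort-Object -Property RepresentedPointInTime | Select-Object -Last 1

# 4. Remove it.
Remove-DPMRecoveryPoint -RecoveryPoint $PObject -ForceDeletion -Confirm:$false
```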


The issue may reappear after some time. Another solution is to move your backups to new disk storage.


Windows Admin Center – New virtual machine failed

If you are getting a similar error while creating a highly available virtual machine with Windows Admin Center, you are probably missing the Failover Cluster Module for Windows PowerShell on your Hyper-V host.

Couldn’t create virtual machine ‘test2’. Error: RemoteException: The term ‘Add-ClusterVirtualMachineRole’ is not recognized as the name of a cmdlet, function, script file, or operable program.

Simply install the Failover Cluster Module for Windows PowerShell on all your Cluster Nodes and you should be able to create new Virtual Machines.
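The fix can be sketched in one remoting call; the node names are examples (explicit names are used because Get-ClusterNode itself needs the missing module):

```powershell
$Nodes = 'HV01', 'HV02'   # your Hyper-V cluster nodes

# Install the Failover Cluster Module for Windows PowerShell on each node.
Invoke-Command -ComputerName $Nodes -ScriptBlock {
    Install-WindowsFeature -Name RSAT-Clustering-PowerShell
}
```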


SCOM 1801 Console Method not found: Void

If you installed the new SCOM 1801 console and the SCSM 1801 console, you might be getting an error similar to this one:

The same error was present with the SCOM 2016 and SCSM 2016 consoles. The unofficial fix still applies, with minor changes.

Open System Properties, go to Advanced and then Environment Variables…


Add a new User variable named DEVPATH with value:


Open Microsoft.EnterpriseManagement.Monitoring.Console.exe.config

And add <developmentMode developerInstallation="true" />
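For context, this element usually goes under the <runtime> section of the .exe.config file. A sketch only; leave the rest of the file untouched:

```xml
<configuration>
  <runtime>
    <developmentMode developerInstallation="true" />
  </runtime>
  <!-- existing sections unchanged -->
</configuration>
```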



This should fix SCOM 1801 Console problem.

Upgrade to SCOM 1801 – Failed to uninstall Operations Manager Agent

Recently I was trying to upgrade a SCOM 2016 server to SCOM 1801 and the upgrade failed. I checked the installation log files and noticed these two errors:

It seems that this issue was also present in previous SCOM upgrades. To fix the problem, back up and then remove this registry key: