Intel 10GbE NICs missing after VMware ESXi 7 upgrade

To give you a quick backstory on how I got here: I purchased the first pieces of my home lab back in 2016. My aim was to build an environment which was small, quiet and closely aligned to the VMware HCL, and of course I had a budget to work with. At the time, many people within the vCommunity were choosing to build their home labs using retired server hardware such as Dell PowerEdge R710s, or cheap but unsupported hardware such as Intel NUCs (remember, this was 2016). Neither quite suited me, as they were either too large and loud, or unsupported, so instead I started to look at SuperMicro.

These days most people would be familiar with the variety of hardware offerings in SuperMicro's lineup, most of which can be found on the VMware HCL. Back in 2016, however, the vCommunity was only just starting to explore the possibilities of building a home lab on SuperMicro hardware. The other challenge was that the 5028D-T4NT server, which was the popular choice, did not come cheap, especially for those living outside the US! So I decided to build my own unique setup using SuperMicro X10SDV-TLN4F motherboards. I assumed that by choosing a platform on the VMware HCL, I would avoid all the pain and suffering of having to tinker with the hardware to get it to work. Well, I assumed wrong!

When I first built my lab I was using a single physical host running VMware ESXi 6.5. I initially used this environment to deploy nested vSphere environments for developing and testing AsBuiltReport against many different versions of VMware vSphere. Over the years I gradually added more hardware to scale out my environment for additional capacity and redundancy. However, as more hardware was added, I began to encounter more issues with my setup. The first issue arose when I added some Noctua cooling fans and discovered that I needed to modify the fan thresholds. When I added a second ESXi host, direct-connecting the onboard Intel X552/X557-AT 10Gb network adapters for vMotion and vSAN traffic, I encountered another issue whereby the NICs would intermittently disconnect and never reconnect. Thankfully, Paul Braren was able to provide a solution via his website over at TinkerTry.

And so begins this story…

Continue reading

As Built Report – Documenting Your Datacentre Infrastructure with PowerShell

Having worked for the last 10 years as an IT consultant for a leading systems integrator, I have written my fair share of documentation. From design documents, migration plans and test plans to operational guides and health checks, I've done it all. But nothing annoys me more than having to write as built documentation.

What’s the problem with writing as built documentation?

As built documents require a lot of detailed system information, which often takes a significant amount of time and effort to retrieve. You then normally need to transpose that information from one format to another, again a laborious and time-wasting exercise. In rare instances you may find a tool that can do this for you; however, it is never free, and it will never be able to perform this task across all of your systems. Sure, there's always some basic tool which can export to CSV, but the pain lies in transposing that information into a document format which is legible and presentable to a client. Excel spreadsheets are never acceptable to clients paying top dollar for your services.

Continue reading

PowerCLI: Add & remove VMs from DRS Groups based on datastore location

Update [21/02/2020] – I have updated these scripts to use the native PowerCLI commands which became available in PowerCLI 6.5.1. You can download the updated scripts from my GitHub page.


Lately I have been working on a number of virtualization projects which make use of VMware vSphere Metro Storage Clusters (vMSC). With most of these types of implementations, virtual machines must be pinned to a preferred site to minimise impact to virtual machines in the event of a site failure. DRS groups are the most common way to achieve this; however, I wanted to find a way to automate adding and removing virtual machines based on each VM's datastore location.

To begin, I named each datastore with a prefix identifying its preferred site, e.g. DC1-VMFS-01 or DC2-VMFS-01. I then placed each VM on a datastore which corresponded to its preferred site.

With the help of the DRSRule module I was then able to create two PowerCLI functions to automate adding VMs to the corresponding DRS VM group based on their datastore location. The functions can be used with a datastore name, prefix or suffix.

function Add-DrsVMToDrsVMGroup {
<#
.SYNOPSIS
    Adds virtual machines to a DRS VM group based on datastore location
.DESCRIPTION
    Adds virtual machines to a DRS VM group based on datastore location
.NOTES
    Version:        1.0
    Author:         Tim Carman
    Twitter:        @tpcarman
    Github:         tpcarman
.PARAMETER DrsVMGroup
    Specifies the DRS VM Group
    This parameter is mandatory but does not have a default value.
.PARAMETER Cluster
    Specifies the cluster which contains the DRS VM Group
    This parameter is mandatory but does not have a default value.
.PARAMETER Prefix
    Specifies a prefix string for the datastore name
    This parameter is optional and does not have a default value.
.PARAMETER Suffix
    Specifies a suffix string for the datastore name
    This parameter is optional and does not have a default value.
.PARAMETER Datastore
    Specifies a datastore name
    This parameter is optional and does not have a default value.
.EXAMPLE
    Add-DrsVMToDrsVMGroup -DrsVMGroup 'SiteA-VMs' -Cluster 'Production' -Prefix 'SiteA-'
.EXAMPLE
    Add-DrsVMToDrsVMGroup -DrsVMGroup 'SiteA-VMs' -Cluster 'Production' -Suffix '-02'
.EXAMPLE
    Add-DrsVMToDrsVMGroup -DrsVMGroup 'SiteB-VMs' -Cluster 'Production' -Datastore 'VMFS-01'
#>
#Requires -Modules VMware.VimAutomation.Core, DRSRule
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory = $true, HelpMessage = 'Specify the name of the DRS VM Group')]
        [String]$DrsVMGroup,
        [Parameter(Mandatory = $true, HelpMessage = 'Specify the cluster name')]
        [String]$Cluster,
        [Parameter(Mandatory = $false, ParameterSetName = 'Prefix', HelpMessage = 'Specify the prefix string for the datastore name')]
        [String]$Prefix,
        [Parameter(Mandatory = $false, ParameterSetName = 'Suffix', HelpMessage = 'Specify the suffix string for the datastore name')]
        [String]$Suffix,
        [Parameter(Mandatory = $false, ParameterSetName = 'Datastore', HelpMessage = 'Specify the datastore name')]
        [String]$Datastore
    )

    # Retrieve the VMs which reside on datastores matching the prefix, suffix or name
    if ($Prefix) {
        $VMs = Get-Datastore | Where-Object { ($_.Name).StartsWith($Prefix) } | Get-VM
    }
    elseif ($Suffix) {
        $VMs = Get-Datastore | Where-Object { ($_.Name).EndsWith($Suffix) } | Get-VM
    }
    elseif ($Datastore) {
        $VMs = Get-Datastore | Where-Object { $_.Name -eq $Datastore } | Get-VM
    }

    $objDrsVMGroup = Get-DrsVMGroup -Name $DrsVMGroup -Cluster $Cluster
    foreach ($VM in $VMs) {
        # Only add VMs which are not already members of the DRS VM group
        if (($objDrsVMGroup).VM -notcontains $VM) {
            try {
                Write-Host "Adding virtual machine $VM to DRS VM Group $DrsVMGroup"
                Set-DrsVMGroup -Name $DrsVMGroup -Cluster $Cluster -Append -VM $VM
            }
            catch {
                Write-Error "Error adding virtual machine $VM to DRS VM Group $DrsVMGroup"
            }
        }
    }
}
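The second of the two functions handles the removal side: a VM which has been Storage vMotioned to the other site's datastore should be dropped from the group it no longer belongs to. The sketch below illustrates the idea only; it is not the published script (that lives on my GitHub page), and it assumes that DRSRule's Set-DrsVMGroup replaces the group's membership when -Append is omitted.

```powershell
function Remove-DrsVMFromDrsVMGroup {
<#
.SYNOPSIS
    Illustrative sketch only: removes VMs from a DRS VM group when they no
    longer reside on a datastore matching the site prefix. Assumes DRSRule's
    Set-DrsVMGroup replaces group membership when -Append is omitted.
#>
#Requires -Modules VMware.VimAutomation.Core, DRSRule
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory = $true, HelpMessage = 'Specify the name of the DRS VM Group')]
        [String]$DrsVMGroup,
        [Parameter(Mandatory = $true, HelpMessage = 'Specify the cluster name')]
        [String]$Cluster,
        [Parameter(Mandatory = $true, HelpMessage = 'Specify the prefix string for the datastore name')]
        [String]$Prefix
    )

    # VMs which belong at this site, i.e. those on datastores with the site prefix
    $SiteVMs = Get-Datastore | Where-Object { ($_.Name).StartsWith($Prefix) } | Get-VM

    # Keep only the current group members which still reside on a matching datastore
    $objDrsVMGroup = Get-DrsVMGroup -Name $DrsVMGroup -Cluster $Cluster
    $Keep = $objDrsVMGroup.VM | Where-Object { $SiteVMs.Name -contains $_ }

    # Rewrite the group membership with the remaining VMs
    Set-DrsVMGroup -Name $DrsVMGroup -Cluster $Cluster -VM $Keep
}
```

Run after a round of Storage vMotions, e.g. Remove-DrsVMFromDrsVMGroup -DrsVMGroup 'SiteA-VMs' -Cluster 'Production' -Prefix 'DC1-', this prunes any VM whose datastore no longer carries the site's prefix.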

Continue reading

VMware Update Manager is not displayed in the vSphere 6.0 U1 Web Client

With the release of VMware vSphere 6.0 Update 1, VMware Update Manager capabilities are now available from within the vSphere web client, or so I thought. After deploying the new vCenter 6.0 U1 Server Appliance and a separate Microsoft Windows server for VMware Update Manager 6.0 U1, I found that the VUM icon was missing from the vSphere web client. After numerous reboots of both the vCSA and the VUM server, a reinstall of VUM, and a quick read of the Update Manager 6.0 U1 release notes, I finally managed to resolve the issue by simply re-registering VUM with vCenter Server using the VMware vSphere Update Manager Utility.

Continue reading

EMC VPLEX Virtual Edition – Part 1 – Prerequisites

In this series I will provide insight into a recent deployment I performed of EMC VPLEX Virtual Edition 2.1 SP1 (VPLEX/VE).

For those not familiar with the product, VPLEX/VE is a virtual storage platform which provides active-active storage across datacentres for VMware vSphere stretched clusters. The vSphere stretched cluster is configured with compute, network and storage at two physical sites. Together with the stretched cluster resources, EMC VPLEX/VE provides the ability to run virtual machines from either datacentre, as well as to move VMs between sites using vMotion and Storage vMotion.

Continue reading