Nutanix CE home lab build log

The plan

For my new job, I’m studying for the Nutanix NCP-MCI. I can use the Nutanix cloud-based labs, but I’m a hands-on guy and prefer to learn by practical example. I recently re-built my 3-node vSan cluster, and it was a lot of work to juggle the VMs due to the greater complexity of my current home lab as well as the larger amount of data stored. For instance, my entire network depends on DNS served by VMs running on that cluster. If/when my vSphere hosts go down, my Sonos speakers can no longer stream Def Leppard / Dokken and my desktop can’t join any Teams meetings or read emails ; the second issue isn’t as bad as the first :p

As such, the Nutanix CE build will be focused on providing service-level disaster recovery when there is planned or unplanned maintenance on my primary 3-node vSan cluster.

Current setup

Currently, I’ve got service-level DR / HA as follows:

  • 2x Citrix Netscaler VPX in an HA pair
  • The Citrix Netscaler HA instance hosts two “least connection” based load-balancing servers, backed by Microsoft AD DNS and Raspberry Pi (DietPi) based instances
  • With this setup, I’ve got the primary Netscaler, MS AD DC, and DietPi instance on the vSan cluster, with the secondary instances on a stand-alone 4th ESXi host (HP Microserver Gen 8)

Nutanix CE build process / hardware bingo

If you’re familiar with the Nutanix community edition (CE) offering, you’ll know the program is rarely updated. So, if you’re used to deploying the latest STS/LTS versions of Nutanix for your clients, and want to try CE on your non-Nutanix-certified hardware at home, you’ll note a gap in releases. The previous CE went roughly 2 years without an update.

Compare this to the $200 USD per year VMUG program, where a gap of usually just a few weeks exists between major VMware releases: vSphere vCenter, vSAN, ESXi, NSX. Thankfully, the Nutanix CE folks finally released an update a few weeks back, so the code now runs on the 6.5.2 base.

Years back, I had tried the 2020 version on a spare HP EliteDesk G3 SFF pulled from my main vSan 3-node cluster. I ran into all sorts of issues getting EFI-based Win 10 / Win 2022 systems to boot, so I converted it back to ESXi. However, a lot of these issues are sorted in the latest release, so it was time to try CE again! Nutanix CE has similar requirements to deploying Nutanix on certified hardware: you need 3 separate drives for the AHV hypervisor, the CVM, and data, as such:

For my first install attempt on the HP EliteDesk G3 SFF, I entered the HP setup and set the following:

  • Legacy boot enabled / secure boot disabled
  • VT-x enabled
  • BIOS updated to the latest version via the online update method
  • 3 drives: WD 500 GB PATA for data, Samsung 1 TB 970 EVO NVMe for the CVM, Patriot brand 128 GB SSD for the hypervisor

Despite the above being in place, I wasn’t able to get past the following error condition(s):

I tried multiple USB drives, as well as both install options (EFI, which is preferred, or legacy BIOS) ; the result was the same.

After 3-4 failed install attempts, I figured the new Nutanix CE was just incompatible with my target hardware, so I decided to move up a generation and demote my main desktop ; an HP EliteDesk G4 (Intel 8700-based CPU) for the Nutanix CE install. For the next attempt, I got rid of the 500 GB PATA drive and ended up with the following storage config:

Samsung 256 GB SSD for the CVM, Samsung 1 TB 970 EVO NVMe for data, Patriot brand 128 GB SSD for the hypervisor

Success! With the base build done, I’ll work on using Nutanix Move to migrate my DR instances over from my ESXi host. I’ll update this blog with my progress, but that’s it for now!



Getting more out of my home fiber internet with 10GbE

In January of 2023, I was coming to the end of a multi-year promo with Videotron (Quebec) on my legacy cable modem. My normal speeds were pretty good, but there was definitely room for improvement on the upload channel ; as I primarily WFH, I’m constantly using my upload channel for video calls as well as OneDrive uploads, etc. Modern collaborative apps (Zoom, Teams, Google Meet, etc.) will dynamically compress your audio/video channels if your bandwidth ain’t cutting it, and 56 Mbit/s certainly isn’t enough if I’m doing other uploads at the same time ; OneDrive, ISO uploads over VPN to clients, etc.

As my promo was ending, I called Videotron to see what promos they had ; they were able to knock $10 off my bill for 2 years, but the speed would remain the same. Next, I looked up big-bad-Bell to get the scoop. I had quit Bell in 2015 when they hit me with a HUGE overage bill after my promo ended. However, I don’t hold grudges against the big Canadian ISPs ; I’ve got zero loyalty to ’em, and will happily switch for better price/services on phone/internet. Bell provides FTTH (fiber to the home) service in my area of Montreal, so I signed up for their 1.5 Gbps package, which was actually $10 cheaper per month than Videotron, but touted 4x the download speed / 20x the upload speed. WOW

Note: the pic below shows some janky pricing, which you can probably get around if you come in as a new customer, or use whatever tricks the hound dogs over at the forums recommend for better pricing.

On install day, I was VERY happy with the speed, based on connecting to the new Bell hub (modem) from my desktop via my 1GbE home network switch.

Fast forward to February: I was chatting with a co-worker at my new job, and he mentioned connecting to his 3 Gbit Bell FTTH via the 2.5 Gbit LAN cards in his laptop/desktop. To be honest, I had ignored the ports on the back of my own Bell modem!

It was only after talking to him that I checked, and found that the 10GBase-T port shown above supports 10GbE, 2.5GbE, and 1GbE speeds. However, my own home 10GbE switch is SFP+ based, so I bought a 10GBase-T to SFP+ converter on Amazon for $63 CDN and connected it today.

I disabled the 1Gb LAN port on my desktop, enabled the 10Gb port, and re-ran a test. Again, here is the before:

And the after:

Which is an 18% speed increase.

All for about 5 mins of work / $63. Now, what should I do with the extra bits and bytes I can upload and download?

Have a good weekend, y’all!


Disable persistent scratch log(s) on home lab ESXi hosts

NOTE: The following advice is NOT for prod workloads, where you may require scratch logs to persist between reboots should you need to send them to your hardware vendor or VMware directly.

Hopefully you find this blog post while searching for a solution, as I did this week. I had to cobble together a few different sources to solve a simple issue ; ESXi HOST SCRATCH LOGS!

The back-story

I recently decided to re-enable vSan on my home lab. I’ve got the same host hardware setup that I’ve had since 2019, with 3 HP EliteDesk G4 SFF units. However, between 2019 and today, I’ve re-used some of the internal SSD/NVMe drives for other things. For instance, I’m studying for the Nutanix NCP-MCI, so I took a few drives away for a 4th HP EliteDesk G4 host. That means I don’t have spare drives inside my primary 3-node vSan setup. When ESXi is installed, it will attempt to set the scratch partition on an available VMFS-formatted volume. In my case, all of my ESXi installs were done to USB drives (which VMware no longer recommends), and I did have spare drives connected at the time, so the locally available drives were set as the scratch partition during the install. This is not ideal: once the scratch partition is set to a VMFS datastore (remote or local) on your ESXi host, you can’t easily unmount / delete that datastore.

The work-around

1 – SSH to the host (from 2022 onwards, I like MobaXterm for all things SSH related)

2 – cd /

3 – mkdir /tmp/scratch

4 – Switch back to vCenter > inventory > host > advanced system settings > and amend ScratchConfig.ConfiguredScratchLocation to /tmp/scratch

5 – The new scratch log setting won’t take effect until reboot ; don’t reboot yet!

6 – If/when you reboot, to avoid the message “System logs on host are stored on non-persistent storage.”, you will want to amend another advanced setting > navigate to UserVars.SuppressCoredumpWarning and set it to 1

7 – Finally, find the setting and set it to

8 – With the above 3 settings changed under advanced settings, put the ESXi host into maintenance mode, wait for the VMs to move, and reboot the host for the settings to take effect

9 – Repeat as required for additional ESXi hosts
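If you’ve got several hosts to do, steps 4 through 8 can also be driven from PowerCLI instead of the vCenter UI. This is just a sketch under my own assumptions (the original post uses the UI ; looping every host at once is my choice, and you’d still create /tmp/scratch over SSH first):

```powershell
# Assumes the VMware PowerCLI module is installed and you're connected:
# Connect-VIServer vcenter.lab.local
foreach ($vmhost in Get-VMHost) {
    # Point scratch at the in-memory /tmp/scratch created earlier over SSH
    Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.ConfiguredScratchLocation" |
        Set-AdvancedSetting -Value "/tmp/scratch" -Confirm:$false

    # Suppress the coredump warning, per step 6
    Get-AdvancedSetting -Entity $vmhost -Name "UserVars.SuppressCoredumpWarning" |
        Set-AdvancedSetting -Value 1 -Confirm:$false
}
# Then maintenance-mode and reboot each host for the change to take effect
```

Same end state as the manual steps ; the reboot is still required either way.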

Note: If you run through the above process on vSan-enabled ESXi hosts, Skyline will log the following under cluster sub-checks, related to vSan daemon liveness and a service called EPD. Per the following KB from VMware, “EPD is used to check for component leakage when objects are deleted in a vSan datastore”.

As per my note at the beginning of the post, this workaround for a lack of spare drives in your host is NOT for prod workloads, just for lab fiends like me 🙂

I hope you found this post useful, have a nice day


Reference pages

Move ESXi scratch location – VMware KB

I just want to suppress this alarm; System logs on… – VMware Technology Network VMTN

Home lab winter 2023 update – Desktop vs server hardware decision making process

It’s 2023, and time to re-visit my favorite topic: THE HOME LAB

Last month, I took a new job as a solutions architect implementing hypervisor / datacenter products from VMware & Nutanix. My new job is still with the same company, and I’m happy to keep it in the family and not have to start over again with a new employer.

In my previous role as a Citrix solutions architect, SME knowledge of VMware was key, as approx 75% of my customers were running their workloads on vSphere. My new role is the opposite ; time to get NCP-MCI certified, baby!

As part of preparing for the exam, I’ll want practical knowledge, so I’m looking into buying another virtualization host. Right now, I’ve got 3x HP EliteDesk G3 SFFs running ESXi 8 and connected via vCenter 8. I’ve been buying HP SFF/mini hardware since 2019 ; I love the cost, form factor, function, and style.

Desktop vs server hardware pros and cons

Here’s a list of my previous blog posts on home lab use ; there are a few ; as above, it’s my favorite topic 😍

Each time I consider buying new hardware, I follow a similar “pros and cons” decision-making process.

Last year, I had considered switching my ESXi hosts over to AMD Ryzen-based kit, but wasn’t able to achieve stability with the HP AMD unit I bought, and ended up selling it.

So, the Intel vs AMD topic is on ⏸️ for me at the moment ; let’s look at the pros and cons of desktop vs server hardware for home lab use.

Desktop

  Pros:
  • Low power consumption
  • Cheaper per core
  • Doesn’t require rack-mount hardware
  • Greater supply on used sites like FB Marketplace, Kijiji, eBay
  • Lower shipping costs

  Cons:
  • Often limited to 64 GB RAM
  • Rarely offers IPMI (lights out)
  • Have to add PCI Express cards to get 10GbE
  • Doesn’t provide hands-on experience with server hardware
  • No redundant power supplies
  • Fewer expansion ports

Server

  Pros:
  • Practical experience with server hardware as found in the datacenter ; learning the BIOS update process for the server hardware makers (for example)
  • Greater wow factor to impress your friends
  • More expansion ports
  • Often features built-in 10GbE networking
  • Hardware redundancy at the CPU, PSU, and ECC memory level
  • Built-in IPMI is usually standard

  Cons:
  • Generally higher power consumption
  • Generally louder with OEM fans
  • More expensive per core when buying within a similar generation of Intel-based CPUs

WAF – wife acceptance factor. I’m single, but for how long? Would she want to see a giant loud pizza box in my shared office? Prob not.

I ran through the above in late Feb 2023 to cover purchasing a new host for use with Nutanix, and chose a desktop unit again: an HP EliteDesk 600 G3 SFF (which has a 6th gen Core i5-6500 CPU). Not super new, but $150 with RAM/SSD is hard to beat.

Also this month, I FINALLY committed to a home NAS for storage. I bought an HP Microserver Gen 8 (manual) from a guy in Alberta, Canada for about $280 ; the average used price on eBay was double that, and forget about retail for newer units. I chose this unit as it allowed me to add an Intel DA2 10GbE SFP+ network card (shown in the pic on the left) and has 4 hot-swap slots for 3.5″ drives on the front. This particular model is known to run TrueNAS without issue, so that’s what I’ll try ; if I find TrueNAS doesn’t suit me, I’ll look into Unraid, OpenMediaVault, or something else.

Pictured in the above shot is my new Ikea Bror shelf (link), purchased in January. This shelf replaces a 15U server cube I had purchased last spring ; as I often make changes to my networking / hosts, I found working inside the unit annoying. Also, despite adding extra fans, internal temperatures ran HOT. The Ikea shelf is open, ez to work on, and provides superior airflow in the summer (Montreal gets hot). So I sold the unit, and am very happy with the Bror shelf, which took about an hour to put together and cost $160.

March 1, 2023 virtualization hosts

3x HP EliteDesk 800 G3 SFF (not on VMware HCL) in a 3-node vSan setup

  • Intel Core i5-6500 CPU – 4 cores / 4 threads (on VMware HCL)
  • 64 GB DDR4 SDRAM
  • Intel X520 dual port 10 Gb NIC (on VMware HCL)
  • Intel quad port NIC i350-T4 (on VMware HCL)
  • Samsung 970 Evo NVMe 1 TB (vSan caching tier)
  • Samsung 860 SSD 1 TB (vSan storage tier)

1x HP EliteDesk 800 G3 SFF (not on VMware HCL)

  • Intel Core i5-6500 CPU – 4 cores / 4 threads (on VMware HCL)
  • 32 GB DDR4 SDRAM
  • Intel X520 dual port 10 Gb NIC (on VMware HCL)
  • Intel dual port NIC i350-T4 (on VMware HCL)

If you’re considering a home lab, or have one that requires updating, I hope you find this blog post useful 🙂

Feedback is welcome ; hit me up on LinkedIn or 🐦


Teams turbo mode in 2023? Capturing memory usage with a PowerShell one-liner

For the past few years, I’ve repeated the same (somewhat lame) joke about how to optimize Teams. It goes like this:

  • Start > Run > Appwiz.cpl
  • Find the entry for ‘Microsoft Teams’ > choose ‘Uninstall’
  • Open your preferred browser > Open
  • Install / join a workspace or create a new one, and send out the ‘Teams Exodus’ email

Obviously, this was only half-serious ; it’s easier said than done to exit Teams at this point. Like a lot of folks, I can contrast the UX of Teams vs Slack on a daily basis. From a speed perspective, Slack is unmatched, IMO: zero latency when typing or when switching between windows. Ever tried to switch between chat windows in Teams on a lower-spec’d Windows device, like VDI? It’s not great.

The good news!? This week, MS announced they will switch the back-end for Teams from Electron to WebView2. The technical details are probably more relevant for programmers, but they’ve touted an “up to 50% reduction in memory use”, which is a huge claim. No one should take claims like this at face value, so I decided to capture some metrics from my own Teams instance before the WebView2 change lands, at which point I’d lose the previous baseline. I came up with a PowerShell one-liner to reproduce the following view from Windows Task Manager, which shows all the Teams processes and their related memory usage.

That gives me the following total: 1062 MB ; so, 1 GB of memory, and my Teams instance is totally idle at this time ; no video/audio/XLS sheets open.

[Math]::Round(((Get-process Teams | select name, @{l="PrivateMemoryMB"; e={$_.privatememorysize / 1MB}}).PrivateMemoryMB | Measure-Object -sum | Select-Object -ExpandProperty Sum),3)
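For before/after comparisons, the same idea can be expanded into a short sketch that appends a timestamped total to a CSV each time it runs (the file path is just an example, not something from my setup):

```powershell
# Sum private memory across all running Teams processes, in MB
$totalMB = [Math]::Round((Get-Process Teams |
    Measure-Object -Property PrivateMemorySize -Sum).Sum / 1MB, 1)

# Append a timestamped sample so pre- and post-WebView2 numbers can be compared
[PSCustomObject]@{
    TimeStamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    TotalMB   = $totalMB
} | Export-Csv -Path "C:\admin\teams-memory.csv" -Append -NoTypeInformation
```

Run it a few times a day and you’ve got a small dataset instead of a single screenshot.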

I will update this blog post once I’ve got the new WebView2-based Teams install up/running ; hopefully my memory usage will be less than 1 GB.

Hyper-V to VMware conversion tips and tricks

As an IT consultant, I’m asked to assist with a wide variety of tasks. Last fall, I worked on a project to convert 21 Hyper-V VMs to ESXi.

Normally, you’d use the VMware vSphere stand-alone converter for such a task ; however, VMware pulled the tool in Feb of 2022, only to add it back in October 2022. Read about it here.

As such, that left me with the third-party StarWind V2V converter. Also great, but not as feature-rich as the VMware tool. One key missing feature: installing VMware Tools on the converted ESXi VM. PowerShell to the rescue!

The script

I’ve had PS code to automate VMware Tools re-installs for about 2 years via my Packer Windows build automation work. See HERE

I’ve been writing PS code since 2015 ; I’ve done courses, read blogs, free-styled, and made a lot of mistakes along the way. Despite zero background in formal programming, I am a planning-based person in all that I do. For scripts I’ll use with clients that are more than a few lines, I’ll create a bullet-point based design to write out what I want to do.

For this project, the key functions of the new script were as follows:

Source VM

  • Collect IP / DNS info and log to a CSV for re-entry on converted VM
  • ID VM type (Hyper-V or VMware ESXi)
  • Create local user & set autologon reg keys
  • Gracefully power off the VM

Destination VM

  • Detect VM type
  • Automate the install of VMware tools
  • Add back IP recorded from source VM from saved CSV file
  • Remove traces of Hyper-V ghost devices
  • Remove AutoLogon once done
  • Reboot the VM for a clean VMware tools install
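The full script is linked below, but to illustrate two of the bullets (VM type detection and IP/DNS capture), here’s a minimal sketch. The helper names and CSV path are my own for illustration, not the actual script’s:

```powershell
# Detect the hypervisor from the machine's manufacturer string
$maker  = (Get-WmiObject Win32_ComputerSystem).Manufacturer
$vmType = if ($maker -match "Microsoft") { "Hyper-V" }
          elseif ($maker -match "VMware") { "VMware ESXi" }
          else { "Unknown" }

# Capture IPv4 / gateway / DNS settings for re-entry on the converted VM
Get-NetIPConfiguration | ForEach-Object {
    [PSCustomObject]@{
        VMType    = $vmType
        Interface = $_.InterfaceAlias
        IPv4      = ($_.IPv4Address.IPAddress -join ";")
        Gateway   = $_.IPv4DefaultGateway.NextHop
        DNS       = ($_.DNSServer.ServerAddresses -join ";")
    }
} | Export-Csv -Path "C:\admin\ip-info.csv" -NoTypeInformation
```

On the destination side, the same CSV is what you’d read back to reconfigure the NIC.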

Here is the code

The script requires just 3 files: a recent VMware Tools install .exe, the scheduled task XML, and the .ps1 script. The .ps1 script will set up the scheduled task to run in the SYSTEM context on the next boot.

I won’t post the code in its entirety ; you can view it on GitHub here.

Change the password on line 653 ; no other edits should be required.

High-level steps for Hyper-V to ESXi conversion

  • Log on to the source Hyper-V VM
  • Download the files for the script to c:\admin
  • If the source VM is an application server, be sure to review any app config that can be used to test application functionality once the VM is converted ; think SQL-to-NIC bindings, services that auto-start, etc.
  • Take screenshots of attached drives as required
  • Run the script ; the VM will be gracefully powered off at the end
  • Convert the VM via StarWind V2V or the VMware stand-alone converter
  • With the VM conversion completed, find the VM within your VMware environment and validate that it’s got all the existing drives connected and is set to the correct network

Notes from the migration

The project I was working on included 21 VMs: 20 Windows, 1 Linux. Most of the conversions went smoothly ; a few did not. Here are some issues I noted that you might find useful should you find yourself going through a similar hypervisor VM conversion.

  • Domain controllers will generate a LOT of noise when they boot up without a network card installed. Running DCDIAG on converted VMs will show time sync issues ; in my case, the errors resolved after a few hours. If you want to validate the integrity of your AD domain after converting an AD DC, run dcdiag /s OLDSERVERNAMEHERE on one of your existing AD DCs. I didn’t know this before, but running DCDIAG without any switches will pull AD-related event logs from all DCs in the forest ; running DCDIAG /S and targeting an existing DC should show a clean result. Open ‘Active Directory Users & Computers’ from the converted VM, create a new AD OU, and check that you can see the newly created OU from an existing DC
  • On at least one Oracle server we converted as part of the project, services that previously started were failing. To resolve it, we set them to ‘Automatic (delayed start)’. Regardless, always check that all custom services installed on application servers are started as they were before
  • The Hyper-V VHD(X) drives will be in VMDK format once converted to an ESXi VM. The drive letters should be the same once the Windows VM boots up again. However, in a few cases where the source Hyper-V VM was gen 1, the destination VM was created with an IDE controller when it should have been SCSI (LSI Logic to start) ; re-map the drives in the properties of the ESXi VM to match how they were set before: C drive will be drive 1, D drive will be drive 2, etc. Otherwise, you can run into boot issues if Windows tries to boot from a data drive
  • Expect the unexpected. During the migration, we had a hardware failure on one of the Hyper-V hosts ; it occurred just as we were about to convert a large database VM. My advice is to schedule conversions of critical VMs after-hours
  • Benchmark file copy speeds with large files between source and destination hosts ahead of time. That way, you’ll know how long it should take to copy over each VM, and you can adjust the schedule with your client appropriately. Thankfully, my client’s source / destination storage network was running at about 500 MB/sec, which made for some quick conversion times
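On that last point, a rough way to benchmark the copy path is to time a large test file transfer. A sketch under my own assumptions (the destination share and file size are placeholders ; use whatever path your conversions will actually traverse):

```powershell
# Create a ~1 GB test file, then time the copy to the destination share
$testFile = "C:\admin\testfile.bin"
$bytes    = 1GB
fsutil file createnew $testFile $bytes | Out-Null

$elapsed = (Measure-Command {
    Copy-Item $testFile -Destination "\\destination-host\upload\"
}).TotalSeconds

# Throughput in MB/sec ; scale by VM size to estimate each copy window
[Math]::Round(($bytes / 1MB) / $elapsed, 0)
```

Divide each VM’s disk footprint by that number and you’ve got a defensible schedule.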

A Hyper-V to ESXi conversion is pretty niche ; however, I learned a lot on this project and really enjoyed the work. I hope you find the related script / notes useful should you be tasked with a similar work effort.


2x methods to list installed apps via PowerShell

Microsoft doesn’t provide a native GUI way to export the list of apps you see below. Why? I don’t know, but via the command line you’ve got a few options.

I often use scripts from my co-worker Jon Pitre (GitHub), so I’ve already got the PowerShell App Deployment Toolkit installed, and can just run the following code to quickly generate a report:

Get-InstalledApplication -WildCard * | Select DisplayName, DisplayVersion, Publisher

However, PSADT is a third-party module, and to install it you need public internet access. Bad luck, maybe? It’s come up for me many times over the years ; I’ve worked in environments that block such modules. As such, there’s another method built right into most versions of Windows:

Get-WMIObject -Query "SELECT * FROM Win32_Product" | Select name, Vendor, Version, Caption

This was tested on Win 10 / Win 11 and Server 2022 ; if it doesn’t work on a joe dirt version of binbowz like 2012 R2 / Win 2016, try updating PowerShell.

You can then use the above code to pipe to CSV via the following:

Get-WMIObject -Query "SELECT * FROM Win32_Product" | Select name, Vendor, Version, Caption | Export-CSV -Path c:\CSVFolderYouWantTouse\installedApps.csv
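One caveat worth knowing about the Win32_Product class: querying it is slow, and it can trigger an MSI consistency check on each installed package. As an aside (this third option isn’t one of the 2x methods above), the uninstall registry keys can be read directly, which is what most inventory tools do:

```powershell
# Read both 64-bit and 32-bit machine-wide uninstall keys
$keys = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
        "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"

Get-ItemProperty $keys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |          # skip patch/metadata-only entries
    Select-Object DisplayName, DisplayVersion, Publisher |
    Sort-Object DisplayName
```

Pipe it to Export-Csv the same way as above if you want a file.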

The car search continues & concludes

From July 2021 to August 2022, I went on an epic number of test drives ; 30 in total. I ended up purchasing a lovely 2011 Subaru WRX with a manual transmission. You can read more about my criteria / experience in my original blog post HERE.

Some pix of the Subaru WRX. I added a nice Android touchscreen head-unit and some ambient lights ; otherwise it was left stock.

After driving the Subaru around from the summer into the winter of 2022, I realized that manual transmission was no longer for me. From teenager into adulthood, my entire driving experience had been manual. I learned on a manual, sat my driving test on a manual, and owned 2 manual cars (two Honda Civics). At the time, manuals were more reliable and more fuel-efficient ; that’s just not the case in 2022. In fact, we probably crossed that threshold years ago ; I just wasn’t paying attention. Automatic cars with paddle shifters / sport shifters are a lot of fun, and to be honest, driving around Montreal one-way streets and manually shifting through 3 gears to get from one stop sign to the next feels dated. It’s like vinyl records ; I ain’t dropping the needle or flipping the record to listen to my favorite Steely Dan albums. Spotify / Sonos all the way, son.

Towards the end of my test drive process, I discovered something interesting: the cars I really liked driving had a “magic ratio”. As per my previous blog posts, I’m a big fan of MS Excel ; I even used one to vote in our ridiculously unnecessary Canadian election in Sept 2021. Of the 20 individual models of cars I drove, the ones I liked most were light weight, with horsepower between 250 and 300. When I updated my spreadsheet to divide the weight of the car by the HP, I came up with a magic “thrust to weight ratio” (strictly speaking, it’s weight per horsepower, so lower is better). My final test drive, in May of 2022, was a BMW 328i, which I found very disappointing. Why? THRUST TO WEIGHT. The BMW 328i makes 230 HP but weighs 3600 lbs, which isn’t very sexy. After finishing the test drive, I went home, updated my spreadsheet with the new “thrust to weight ratio” column, then reviewed it against the cars I liked vs the ones I didn’t. The results were interesting, and certainly aligned with the more subjective notes I made after each test drive. Prior to adding column “C”, I had just highlighted in green / blue the car specs I found important, as well as a note on the test drive. As you can see, 5 of the 14 items I tagged had a “thrust to weight” value less than 15 ; the ones I didn’t like so much had a value of 15 or more.
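The BMW math, as a quick sanity check of the ratio column (the 15 threshold is just what fell out of my spreadsheet):

```powershell
$weightLbs = 3600   # BMW 328i curb weight
$hp        = 230    # BMW 328i horsepower
$ratio     = [Math]::Round($weightLbs / $hp, 2)   # 15.65 lbs per HP
$ratio -ge 15       # True, i.e. lands in the "didn't like" bucket
```
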

Overall, the Lexus IS 350 F was my favorite car of the ones I drove, but I couldn’t justify the cost at the time ; it occupies position 2 in the sheet. I bought the car in position 3, the 2011 Subaru WRX ; light weight with 265 HP, lots of fun!

Which brings me to a definitive week in Gulfport, Florida, in Dec 2022. I flew down with my beloved beast to visit my parents. I don’t like Canadian winters, so it was nice to get away for Christmas at their Florida home.

Whenever I travel to the States, I use Turo to book nice cars. Turo is like Airbnb for cars ; you can rent all kinds of neat stuff. In April 2022, in Arizona, it was a Tesla-killer Hyundai Ioniq 5 electric. October 2022 in Kentucky, it was a nice Honda Accord Hybrid. Dec 2022 in Gulfport, I went all out, booking a 2018 BMW X1 and a Lexus IS 350 F. Both cars I had driven in 2021 as part of my test drives.

2018 BMW X1 328i

2014 Lexus IS 350 F Sport. Why 3 pix against the single pic of the BMW X1? The Lexus was 3 times sexier ; thus the pic of me looking sexy against the murder-red interior. I’ve since shaved the winter beard, as it was making me look oldddd.

I chose these two cars as they were two of my favorite test drives from 2021-2022, and I’m looking to sell / replace the 2011 Subaru WRX hatchback this winter. And so, I’ve started to scour the local buy/sell via FB Marketplace / AutoTrader / Kijiji Autos for a 2015 BMW X3 (similar to the X1 ; a bit bigger, but with a 300 HP engine), or a 2014-2015 Lexus IS 350 F. I’ve chosen these years based on cost ; each can be had on the used CDN market for around $23k at the time of this posting. Based on selling my 2011 Subaru WRX for about $9-10k, the price difference is reasonable. Each would come without warranty, so the BMW is a bit riskier for yearly repair costs.

I’ll update this blog post with the end result ; it should be one of the above, as the test drives were basically completed in Florida last month, but it’s TBD.


A winner has been declared, and that winner is the mighty 3rd gen Lexus IS 350 F Sport

On Wed, Jan 18, 2023, I traded in my aging 2011 Subaru WRX (at a CDN $3k loss against what I paid in August of 2022) for a fully-loaded 2017 Lexus IS 350 F Sport.

To be honest, the Lexus IS is the car I should have bought in 2021 when I was doing all those god-damn test drives ; I had test driven 2 that year. I was stubborn ; having only owned manual transmission cars prior, I thought going with a manual was the right choice again. WRONG. All of the advantages / fun factors of a manual transmission are gone. Hindsight is always 20/20. Here we are in 2023, and I’ve got my dream car. My second car was an Acura 1.7 EL which I nicknamed ‘Silver bird’ ; as such, the Lexus IS will be ‘Silver bird v2’.

Picture time….

Here is me with the beard trimmed, as promised, against the backdrop of the murder-red leather interior, which I 💓

Let’s go buy some booze to celebrate

Side-by-side against the Subaru WRX 2011 which I used as trade-in for silver bird v2

Extending OS drives on EFI-based systems the ez way?


Hello friends, has this ever happened to you? You spill red wine on your white couch and don’t know what to do?

Kidding aside, surely you’ve run into this issue before. You note the C drive on your Windows VM is running low, extend the disk on your hypervisor, and on opening diskmgmt.msc, you see the following:

As a result, you cannot extend your C drive to include the extra space you added in the previous step. This is because VMs obey the same “contiguous blocks” rules that regular physical drives do ; the recovery partition sits between your C drive and the new free space. DANG

Ten seconds later on Google, you find something like THIS. I’ve used similar methods in the past, but it’s a long / manual process with room for error, using commands you’ve probably not touched since Windows XP. Facing the same issue on 3 of my home lab VMs today, I decided to try a tool I found a few years back for re-formatting SSDs to install Windows where I had previously installed ESXi. It’s called GParted ; it’s free / fast / ez to use.


  1. Download a recent GParted live Linux ISO from
  2. Upload the GParted ISO to your datastore
  3. Attach it to your VM, and set your VM to boot to the EFI “list of boot choices” menu
  4. Boot to the ISO attached in step 3
  5. Launch the tool and answer defaults for any questions, unless you don’t want to run the GUI in English
  6. Next, move your unallocated space BEFORE the 500 MB EFI recovery partition
  7. Below is a screenshot of the BEFORE state ; as you can see, we’ve got 52 GB of space that’s AFTER the RECOVERY PARTITION
  8. You should then see 52 GB (or whatever amount you chose) unallocated after your C drive, and BEFORE the stupid recovery partition that you’d probably never use anyway 😆
  9. Right-click on the primary partition you want to expand, and use the slider within GParted to select all the free space you saw in step 8
  10. Commit the changes
  11. Exit the tool
  12. Reboot the VM
  13. Log on to the VM and open diskmgmt.msc to confirm the changes worked
  14. Go about your biznuzz

Home lab 2022 update – house edition

It’s time for my favorite topic, home planning and upgrading 😁

This article will cover changes I’ve made or have planned based on the move to my house. You can read through it in its entirety, or skip to the respective sections for compute hosts, case/rack, networking, storage, hypervisor choices, desk workspace area, UPS, and monitoring.

Equipment Rack

I recently bought a house, see my post on that topic here

With the new house, I decided to finally buy a server rack to contain all my home lab / internet gear. After some careful measuring, I chose a 15U from Sysracks (link). Little did I know, the company is from Montreal! It was delivered the first week of June 2022 in a truck with the Sysracks logo! NICE

If you do go with Sysracks, be warned: what others have said in the Amazon reviews section is true ; the instructions aren’t great, especially for the installation of the 19″ rack mount ears. I struggled with this piece, having not done a rack mount server install in many years. Once done, I placed the unit into my new home office, and it fits perfectly.
With the rack installed, I started to look into what other 19″ profile items I could install into it.

Within the new rack will be a Tripp Lite SMART1500LCD 8-port rack mount UPS (Amazon link).

I’ve had UPS units connected to my home computer equipment for almost 10 years ; I chose the above unit to consolidate to just one, replacing my existing pair of older APC brand UPS units.

  • Pic 1 – The temp setup I had from June to Sept 5 ; all my lab gear was placed on top of the new server rack while I waited to finalize my purchases of new rack mount UPS / storage units
  • Pic 2 – After I placed all my gear inside the case, except for my cable modem (for the rare time I need to power cycle it)
  • Pic 3 – With the locking front door installed

Update for Sept 13, 2022: I removed the Tripp Lite SMART1500LCD 8-port UPS ; too loud/hot! I wasn’t able to get the rack cooler than 29 degrees Celsius with it installed. I’ve gone back to my original 4-port APC unit, and will more than likely sell the Tripp Lite unit or try to return it to the manufacturer.

Workspace / Monitor setup

I’ve had a two-monitor setup since about 2015, and it’s worked really well. However, as time has gone on and Slack/Teams have mostly replaced Outlook, I feel I need a dedicated third monitor just for communications. I’m 100% WFH in my job, so I’m constantly monitoring for alerts/emails/etc; there are no taps on the shoulder in my work day to advise me that I’m needed for something urgent. Years ago, I had a boss that was all about “inbox zero” and 100% replies on client requests. It’s been years since I’ve worked in an environment with such expectations, but the habit has stuck; I don’t think I’ve missed replying to an important email in about 10 years…

So, my original plan was to try out a 3-monitor, desk-attached arm setup like this:

Here’s what it looked like after assembly

However, after trying the setup with the Samsung 27″ monitor on top and the two Dell 23″ monitors below, I decided it was NOT for me.

So, a few Sundays ago in Sept 2022, and one Red Bull later, I switched to the following:

The 2x Dell 23″ monitors are on top, and the larger Samsung 27″ is on the bottom. I re-used an iPad stand I wasn’t using to mount my webcam. On an unrelated note, since becoming a home owner, I LOVE PLANTS, OMG. SO MANY. The yucca on the left was outside till Sept 26, 2022, but it’s now getting cold in Montreal, Canada where I live, so it was time to bring it inside.


It should be stated that, via Reddit and friends, I’ve researched ultra-wide monitors on and off for the past year, but none are the right fit for my workflow at this time. If you’ve got a great working ultrawide setup and are doing EUC engineering / design / architect work like me, post a pic in the comments, and include what gear you used! I’m not fully sold on the 3-monitor setup, but will keep using it for the rest of 2022.

Hypervisor choices

I’m professionally certified on Citrix / VMware, and do some Nutanix integration work with Citrix. I regularly do VMware project work to stand up new vSan implementations and help customers migrate from vSphere 6.7 to 7 as the Oct 15, 2022 EOL date approaches. I don’t currently run Nutanix on any of my home hardware. My choice to use vSphere is based on job requirements, and my love of their VMUG Advantage program: for $200 USD per year, you get full access to the entire VMware suite. Nutanix only provides older versions of Prism/AHV via their community edition program. The CE version is often quite behind the GA versions available to customers, so I’ve hit scenarios where I wasn’t able to get newer Windows builds to boot. Until they rectify this, I’ll stick with VMware.

Compute choices

The ongoing debate: AMD vs. Intel

I ran my personal desktop on an HP AMD 5600G based system for about a month, from Aug to Sept 2022. It worked fine for two monitors and Windows 11. However, with the exact model I chose from HP, I wasn’t able to drive 3 monitors. So, I switched to an HP EliteDesk G4 Core i7 8700. Before selling the HP AMD unit, I did test ESXi on it, and the results weren’t good. I had to disable “secure boot” to get around the “ESXi pink screen of death” many others have reported when trying to use AMD home hardware with ESXi. As well, the built-in NIC wasn’t detected, and since the HP AMD desktop I bought used only had one full-speed PCI Express slot, my upgrade path was limited.

I’m not alone. There are posts from 2017 all the way to 2020 from home lab fans attempting to use commodity AMD mobo/Ryzen CPUs, noting the ESXi 6.7 / 7.x “pink screen of death”. Some report running months without incident; however, to date, I’ve not had either of my HP EliteDesk 800 G3 SFF (Core i5 6500) units running ESXi 6.7 / 7.0.x crash in about 3 years of 24/7 use. As the years have gone by since I finished college in 2005, my “home lab” is no longer used just to practice implementations for clients / learn / research: I host Plex for me and friends, run Active Directory, have file servers for archiving, and more. If/when any of these servers / services go down, I treat it like prod, and get it fixed as soon as possible. As such, having any of my ESXi hosts go down randomly due to AMD / ESXi issues isn’t going to work for me. I can’t explain why AMD EPYC processors aren’t impacted by the same issues as their Ryzen 3/5/7 counterparts; maybe it comes down to a lack of QA from VMware on AMD desktop parts? If you have any theories, or have a working AMD mobo/CPU combo, let me know! Also, post your working hardware config to this EPIC thread on William Lam’s blog; I submitted my experience with the HP AMD Pavilion 5600G.

The replacement for my 6th gen Intel based HP EliteDesk G3s will be the HP EliteDesk 800 G4 model, which has an Intel Core i7-8700 (6 cores / 12 threads) chip. To date, I’ve not read of similar PSOD issues on this particular model. This model is easily found on eBay for about $400 CDN per box

I’ll re-use my existing Samsung 970 EVO NVMe and traditional SSD drives for storage.

Networking considerations

In 2019, I bought the Mikrotik CRS309-1G-8S+IN Cloud Router Switch (8x SFP+). Mikrotik is a small Latvia-based networking manufacturer that makes robust, reliable, well-priced gear. The unit has been rock solid, and I see no reason to replace it at this point. However, while assisting a co-worker with some home lab choices recently, he found a suitable alternative, the QNAP QSW-M408S 10 GbE switch. It’s well reviewed and well priced on Amazon.

For 10 GbE network cards in your hosts, I like the older Intel X520-DA2 model cards. When I was still buying them in 2019, they could be found on eBay for about $75-100, but YMMV as of 2020. These cards aren’t fancy; they don’t support RDMA, for instance. However, I’ve found them reliable and fast. Synthetic benchmarks showed close to the expected line speed, around 9000 Mbit/sec. Real-world usage was about 7200 Mbit/sec. The nice thing about this card: you can actually find it on the VMware HCL, good luck finding your other components on there 😜
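To put those throughput numbers in perspective, a trivial bit of Python arithmetic (using the approximate figures quoted above) shows the real-world result is about 80% of the synthetic benchmark:

```python
def efficiency(measured_mbit: float, reference_mbit: float) -> float:
    """Fraction of a reference throughput actually achieved."""
    return measured_mbit / reference_mbit

# Approximate figures from my X520-DA2 testing, in Mbit/sec:
line_rate = 10_000   # nominal 10 GbE
synthetic = 9000     # synthetic benchmark result
real_world = 7200    # typical real-world transfer

efficiency(synthetic, line_rate)    # 0.9  of nominal line rate
efficiency(real_world, line_rate)   # 0.72 of nominal line rate
efficiency(real_world, synthetic)   # 0.8  of the synthetic result
```

Losing 20-ish percent between synthetic and real-world transfers is normal once protocol overhead and the source/destination disks get involved.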


For the longest time, here’s how I’ve provided large-file / long-term storage @ home:

  • Step 1: Buy/install a large 3.5″ traditional hard drive into a single physical server; for the past 5 years, an ESXi host
  • Step 2: 3-4 years later, notice I’m running out of space
  • Step 3: Review Backblaze drive stats reports to ID patterns in reliability for large 3.5″ HDDs from Seagate, WD, Hitachi, etc
  • Step 4: Buy a new 3.5″ HDD that’s at least 25% larger in size than the one it’s replacing
  • Step 5: Migrate data from old to new drive, and yes, it takes longer to copy over all my data each time
  • Step 6: Think about a better way, look at currently available NAS units from Synology/QNAP, curse at the price and lack of 10 GbE + M.2 NVMe support
  • Step 7: Evaluate TrueNAS (previously FreeNAS), get annoyed with the administrative overhead, and stop using it after a few days
  • Repeat steps 1-7 till 💀
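Step 4’s “at least 25% larger” rule compounds slower than you’d hope. Here’s a quick sketch in plain Python (the drive sizes are hypothetical examples, not my actual drives) of how many upgrade cycles it takes:

```python
def upgrade_cycles(start_tb: float, target_tb: float, growth: float = 1.25) -> int:
    """How many 'buy a drive at least 25% larger' upgrades (step 4)
    it takes to grow from start_tb to at least target_tb."""
    cycles = 0
    size = start_tb
    while size < target_tb:
        size *= growth
        cycles += 1
    return cycles

# Hypothetical example: growing from a 4 TB drive to at least 12 TB
upgrade_cycles(4, 12)  # 5 upgrades (4 → 5 → 6.25 → ~7.8 → ~9.8 → ~12.2 TB)
```

Five full drive-to-drive data migrations to triple my capacity, with each copy taking longer than the last, is exactly why it’s time to break the cycle.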

However, it’s 2022, it’s time to break the cycle

As I’ve got a 19 inch server rack now, I’m looking into a 19″ rack-mountable QNAP TS-432PXU-2G-US NAS unit. It’s got 3.5″ drive support only, but 4 bays, and has built-in 10 GbE support. With a 4-bay unit, I can install one 3.5″ drive today, and grow my storage as time goes on via RAID 5 or similar via this process. I can look at adding M.2 NVMe support via a PCI Express add-in card later. However, my plan is to re-enable vSan on my home lab, which would use the SSD/NVMe drives already in my HP ESXi hosts. I’ve used vSan on and off for years, but as of Aug 30, 2022, I’ve got it disabled, as I had re-purposed my third ESXi host for use with Nutanix CE, and didn’t want to have vSan running as a 2-node cluster with an external vSan witness appliance.
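For sizing the “grow as I go” plan: RAID 5 sacrifices one drive’s worth of capacity to distributed parity, so usable space in a 4-bay unit works out as below. A quick sketch in Python (the drive sizes here are hypothetical, not drives I’ve actually bought):

```python
def raid5_usable_tb(drive_count: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 5 array: one drive's worth of
    capacity is consumed by distributed parity."""
    if drive_count < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drive_count - 1) * drive_tb

# Filling the TS-432PXU's bays with hypothetical 8 TB drives:
raid5_usable_tb(3, 8)  # 16 TB usable (3 bays populated)
raid5_usable_tb(4, 8)  # 24 TB usable (all 4 bays populated)
```

Note this assumes all drives in the array are the same size; mixed-size arrays are generally limited by the smallest drive.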

Monitoring / cooling

I monitor my physical / virtual assets with a script I maintain on GitHub, here.
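The real script lives on GitHub, but to give a flavor of the approach, the core of host monitoring is just a reachability check. A minimal, simplified sketch (not my actual script; the hostname and port below are made-up examples):

```python
import socket

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: alert if a (hypothetical) ESXi host's management port is down
# if not is_up("esxi01.lab.local", 443):
#     print("ALERT: esxi01 not reachable")
```

A real monitoring loop would add scheduling, retries, and notifications on top of a check like this.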

I don’t do kW power monitoring for now, but might start now that I’m settled into my house. If you have any suggestions for software/hardware to do so, let me know in the comments.

I’ve installed a basic LCD screen that shows temperature / humidity inside my Sysracks server cage. I’m averaging about 23 degrees Celsius / 73.4 Fahrenheit with two low-CFM 120 mm fans. The fan that came from Sysracks sounded like a jet engine and could not be throttled down via a speed control switch, so I replaced it with two 120 mm adjustable-speed fans from Amazon.
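For anyone who thinks in Fahrenheit, the temperatures above are just the standard conversion formula; in Python:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

round(c_to_f(23), 1)  # 73.4 - the rack's average with the quiet fans
round(c_to_f(29), 1)  # 84.2 - what the rack hit with the Tripp Lite UPS installed
```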


As with any purchase, do your research as much as possible; finding someone who’s got the exact same unit you want to buy and who’s written a formal review on their blog / YouTube video / Reddit etc. is always a good idea.

Share what you have in the comments and happy hunting 😀