
Final lab setup mid-2019 to now

Last year, I must have gotten a bit excited with the new gear I was buying, and I never ended up posting my final config, so here it is!

3x HP EliteDesk 800 G3s (product page)

Each of the HP EliteDesk 800 G3s contains the following hardware:

  • Intel Core i5-6500 CPU @ 3.20 GHz
  • 4x 16 GB DDR4 RAM for 64 GB total
  • Samsung 860 EVO 1 TB SATA SSD (vSAN capacity)
  • Samsung 970 EVO 1 TB NVMe SSD (vSAN caching)
  • Intel I350-T4V2BLK quad-port gigabit NIC
  • Intel X520-DA2 (82599) dual-port 10 GbE NIC
Connecting the three computers is a Mikrotik CRS309-1G-8S+IN 8-port 10 GbE switch.

Originally, I'd installed VMware ESXi 6.7 U3 on each of them, but I upgraded all three to version 7 in the spring of 2020.

If I were to do it all over again, I'd go AMD Zen 3, but buying the used Intel-based kit from HP at the time made sense for the following reasons:

  • Each unit supports Intel vPro for lights-out management (just make sure you configure it securely!)
  • 4x DDR4 DIMM slots for 64 GB max
  • 4x PCIe slots, including 1x PCIe x16
  • Great BIOS
  • Very compact, quiet, and easy to work with for upgrades
  • As these are common business desktops now coming off-lease at many businesses, they can be had for as little as $300 USD on eBay

You down with PDQ? (ya you know me)



Anyone working in the EUC field has probably pulled out some or ALL of their hair trying to deploy software via Microsoft's dreaded SCCM tool.
What if I told you there was a better way? That's assuming, of course, that you've not heard of PDQ Deploy.


I'm late to the PDQ game! I had read about it via Twitter/Slack for months, but only got a chance to try it out on my beloved home lab last weekend.
There are two versions:

  1. The free version can run forever, but has some limitations. For me, having to enable the admin share / admin account on my Win 10 assets was a show-stopper.
  2. So! I immediately activated the 14-day trial of the (Starship) Enterprise version. Once I activated the Enterprise trial on my existing PDQ Deploy install, I was good to go after amending the “TARGET SERVICE” location option as follows:


To start, I downloaded a few common packages: Google Chrome v81.x, Citrix Workspace App 2003, and a few other common apps.



The app has the intelligence to recognize when new versions are available; they are indicated under the APPROVALS section. All you need to do is approve the updates and your library will update itself on your PDQ Deploy server; you're then able to deploy / re-deploy to your targets. For instance, as I was writing this blog and collecting screenshots, I got approval requests for new versions of Google Chrome and Slack. Done/done!

Deployment is where PDQ Deploy really shines, especially against Citrix MCS / PVS master images.

You select a package, right-click and choose DEPLOY ONCE, put in your target asset, and let 'er rip!



Assuming you've followed the steps to open the required firewall rules on the target assets, you should have your software deployed within minutes, or even seconds if it's a small package and you've got a fast connection from source to target.
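
A minimal sketch of the sort of firewall prep involved, if you'd rather script it on the targets than click through the GUI. This assumes the built-in Windows firewall rule groups below cover what PDQ Deploy needs (admin share plus WMI); check PDQ's own documentation for the definitive list:

# Run in an elevated PowerShell session on each target (or push the equivalent via GPO)
# Assumption: the built-in rule groups below are what PDQ Deploy relies on (admin share + WMI)
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"

# Quick check that the rules are now enabled
Get-NetFirewallRule -DisplayGroup "File and Printer Sharing" | Select-Object DisplayName, Enabled, Direction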

In summary

  • PDQ Deploy is an incredible tool, and I would absolutely recommend it to anyone looking for a painless means to deploy new or updated software. 
  • Enterprise licenses are just $500 USD, which is an absolute bargain when you look at the time savings versus SCCM troubleshooting. 


Because an SCCM deployment rarely works the first time you try it, right? 

CVAD 1912 UPL removal issues and fix

I noted some interesting (annoying?) unexpected behavior with UPL-enabled CVAD 1912 MCS master image machines over the weekend, when I should have been playing DOOM ETERNAL.

At a client site, we uninstalled CVAD 1912 to remove the UPL components. We did this for two reasons:

1 – A default install of CVAD 1912 with UPL creates a dependency for App-V on the UPL filter driver.

The environment I'm working in has a mix of Nutanix blades (G4/G5/G6). The VMs that get booted on the older G4 blades take longer to boot, and we hit a race condition where the UPL service fails to start, which in turn causes the App-V filter driver to fail to start.

2 – We're unable to get away from the legacy path imposed by the Citrix UPL filter driver policy, which sets that ridiculous A_OK sub-folder. We want to use Nutanix Files with multi-path support, so all three servers in the cluster distribute the load; this isn't possible with the hard-coded A_OK sub-folder nonsense.

Anyway, over the weekend I started the process of removing CVAD 1912 + UPL from our various MCS master images.

I completed the removal in our test environment, and upon rolling out the MCS update to a limited set of test machines, noted the following error on user logon over ICA:

Windows couldn't connect to the ULayer service. Please consult your system administrator.

I used Microsoft Sysinternals Autoruns to check for anything left on the MCS master image, and sure enough, the VDA uninstall left UPL components behind. Thanks, QA! :p

To completely remove any trace of UPL from a system where it was previously installed, take the following actions (a rough PowerShell sketch of the registry steps follows the list):

  1. Run CVAD 1912 uninstall via add/remove programs + reboot 
  2. From an elevated command prompt, run sc delete Ulayer 
  3. Use PSEXEC -i -s “cmd”, then open regedit to amend permissions on the key below (add the local admin group) and delete it: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Winlogon\Notifications\Components\Ulayer
  4. Delete: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Ulayer
  5. Delete c:\Program Files\Unidesk
  6. Amend HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit. Following the uninstall, it was left as:

    C:\Windows\system32\userinit.exe,C:\Program Files\Unidesk\Layering Services\LayerInfo.exe,

    It should be:

    C:\Windows\system32\userinit.exe,

    NOTE: Pay attention to that last character, it’s a “,”
  7. Reboot
  8. Re-install CVAD 1912
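
For anyone who wants to script the registry portion of the list above (steps 2 to 6), here's a rough PowerShell equivalent. It's a sketch under the same assumptions as the manual steps, not a tested tool: the Winlogon\Notifications key usually has to be removed as SYSTEM (hence the PSEXEC step), and you should snapshot the master image first.

# Remove the leftover Ulayer service definition (same as "sc delete Ulayer")
sc.exe delete Ulayer

# Delete the leftover Ulayer registry keys; the Notifications\Components\Ulayer key may refuse
# to delete unless you're running as SYSTEM (e.g. under PSEXEC -i -s)
Remove-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Winlogon\Notifications\Components\Ulayer' -Recurse -ErrorAction SilentlyContinue
Remove-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Ulayer' -Recurse -ErrorAction SilentlyContinue

# Remove the leftover Unidesk files
Remove-Item -Path 'C:\Program Files\Unidesk' -Recurse -Force -ErrorAction SilentlyContinue

# Put Userinit back to the default value (note the trailing comma)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name 'Userinit' -Value 'C:\Windows\system32\userinit.exe,'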

I'm using the above in our test environment, and it works fine; for the prod environment, I'm doing a complete Windows re-build just to be 100% sure. The above is more anecdotal and interesting from a troubleshooting perspective. 

I escalated a case with Citrix asking them to fix their uninstaller for CVAD, but I doubt it will make it into GA for the next Current Release, slated for 2003 (March 2020).


I'm not completely ruling out CVAD 1912 UPL at this point! When it works, it's great, and it solves the long-standing issue of delivering apps to non-persistent Win 10 machines.

Owen

 

 

IGEL Disrupt 2020 in Nashville, Tennessee and my next tech conference


It seems my vacation schedule is mostly driven by tech conferences these days. 

  • E2E in NY in 2018
  • Synergy in Atlanta, Georgia in 2019
  • IGEL Disrupt in Nashville, TN in 2020


All of which is great, as I hadn't been to any of the above places before, and each trip was exciting for personal and professional reasons.

Nashville, TN was a bit more special for two reasons: Country music and Bourbon whiskey!

I've been a fan of country music all my life, especially the following artists:

Dwight Yoakam
Glen Campbell
Shania Twain
Kenny Rogers
Chet Atkins
Dolly Parton
Johnny Cash

So, visiting BROADWAY street on my first night in Nashville was awesome. I got to see an awesome cover band do one of my favorite country songs: “All my ex’s live in Texas”

I drank some great local bourbon called Belle Meade and had a great time enjoying the cover band at “THE STAGE” (link to the venue)

After Sunday night, we spent all of our time in and around the MASSIVE hotel called the GAYLORD. Yes, that's the real name. It's a really nice hotel, with indoor waterfalls and tons of tropical trees.

The conference was small, about 500 attendees. The highlights were as follows:

1 – The Microsoft WVD (Windows Virtual Desktop) master class with Jim Moyle (twitter) and Christiaan Brinkhoff. Microsoft has filled the gap between RDP and ICA! To me, they are neck and neck (at least on paper), and I'm looking forward to doing client implementations in 2020 using WVD to replace Citrix CVAD, or for new implementations where no Citrix INFRA exists.

2 – The AMD presentation on their new embedded CPUs. Talk about DISRUPT! AMD truly is a disruptive force of nature against Chipzilla (Intel). I'm very much looking forward to high-quality, high-performance, low-cost devices from AMD as an alternative to security-hole-ridden, expensive Intel devices. IGEL product page

3 – The CUGC panel hosted by Patrick Coble. Some good debate around CVAD Current Release vs LTSR and MCS vs PVS. 

4 – The Iron Cowboy motivational speech during the closing keynote. I'd heard his name, but didn't know he was from Alberta (I'm from Canada). In 2015, this guy completed 50 Ironmans in 50 days in 50 states. Just incredible. 

For 2020, I've got two more events I *might* attend: Steve Greenberg's EUC Masters Retreat (link) in Scottsdale, AZ, and possibly Synergy in Orlando. If it came down to one choice, I'm going with the former! I've already been to Synergy and Orlando, and I feel I would learn more at the EUC Masters Retreat.

Citrix UPL and FSLOGIX – the best of both worlds



I've been working with Citrix technology for 7 years, and most of that time has been spent in the slow-moving world of financial industry IT.

That being said, I left that world last summer to work for a small company here in Quebec called ProContact.ca. I wanted to focus on SMB, where I felt I could make a bigger difference, and I have!

However, I've still got some older habits, and I prefer not to apply new software as soon as it hits the digital shelf. As we all know, Microsoft hasn't done a great job with QA on Win 10: https://www.thurrott.com/windows/windows-10/187407/microsoft-has-a-software-quality-problem

To me, Citrix is almost as bad; maybe a bit better? Some of the bugs I found working on the 7.15 LTSR track were just awful. 

In December 2019, Citrix finally updated the LTSR path for CVAD to 1912. I read over the release notes and something caught my eye: the User Personalization Layer (UPL), ported over from Unidesk. For a customer I was working with in 2019, we evaluated Unidesk, and found it overly complex and slow for what it touted. UPL as a replacement for the long-deprecated PvD sounded like a dream.

So, for the first time in a long time, I installed/configured/tested a new feature on the SAME day it was released. It's been about a month since I started using a UPL-enabled machine as my primary workspace, and I'm happy to report IT JUST WORKS.

For our customer, we had already replaced UPM (mostly awful) with FSLogix (super great).
I set up UPL without any docs from Citrix (they didn't exist on the day CVAD 1912 was released). I did run into some issues, as UPL is a port-over from Unidesk and installs a filter driver that directly conflicts with the FSLogix filter driver. 
I wasn't keen to go back to UPM (as Citrix suggests in their UPL tech article), and neither was the customer.

I ended up resolving this issue after posting to the World of EUC Slack channel, where the ever-helpful James Kindon reminded me to check my ALTITUDE settings! Yep!

https://www.citrix.com/blogs/2020/01/07/citrix-app-layering-and-fslogix-profile-containers/

With the amended ALTITUDE setting in place, I'm happy to report UPL / FSL are working perfectly together.
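
If you want to sanity-check the minifilter altitudes yourself before and after the change, the built-in fltmc tool will list them from an elevated prompt. The name patterns in the second command are assumptions (FSLogix ships its filter as frxdrv; the Citrix layering driver may be named differently on your build), so adjust to whatever fltmc actually reports:

# List all loaded minifilter drivers with their altitudes (elevated prompt required)
fltmc filters

# Narrow the output down to the FSLogix and layering drivers; the name patterns are assumptions
fltmc filters | Select-String -Pattern 'frx|layer'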


A few notes:

  • A lot of folks are calling CVAD 1912 UPL a first release. This is and isn't true. Technically, the binary support is a port-over of the existing Unidesk “user personal layers” feature.
  • Instead of requiring those dreaded Unidesk appliances, we just need an SMB share and a Citrix policy that enables UPL and sets a path, that's it! The rest is managed by the filter driver that runs at boot (a quick sketch of the share side follows this list).
  • You don't have a lot of control over the folder structure created by the policy/Citrix layering service. With FSLogix you've got a lot more control; UPL creates sub-folders for each user in your UPL share as such:
  • \\SERVER1\UPLSHARE\USERS\DOMAIN_USERNAME\A_OK
  • Again, this is a port-over from Unidesk. I hope they will change it; what is A_OK? :p
  • Performance is acceptable, but you're now at the mercy of having 3 VHD(s) mounted from a remote filer. If your shared storage isn't up to snuff, your user experience will suffer. For the moment, our customer is running FSLogix and UPL containers on the same Nutanix filer, but that might change if we decide to put UPL into mainstream use. We are looking at it as a replacement for Win 10 STATIC VMs, which are a pain to manage. For most of our users, we will be providing standard MCS non-persistent machines without UPL.
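
For reference, the file-server side really is just a share. Here's a minimal sketch with made-up server/path names, assuming a plain Windows file server (on Nutanix Files you'd carve out the share in Prism instead); the Citrix side is then the UPL policy settings that enable the feature and point at the UNC path:

# Create the folder and SMB share that the UPL policy will point at (names are made up)
New-Item -Path 'D:\UPLSHARE' -ItemType Directory -Force
New-SmbShare -Name 'UPLSHARE' -Path 'D:\UPLSHARE' -FullAccess 'DOMAIN\Domain Admins' -ChangeAccess 'DOMAIN\Domain Users'

# The UPL repository path policy then points at the share (e.g. \\SERVER1\UPLSHARE), and the
# filter driver builds the per-user sub-folders, including that A_OK folder, underneath it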


QUIET! Simple PowerShell code to set current_user / default_user reg keys to "NONE"

I think for every PC I've used/worked on over the past 20 years, the first thing I do after logging on is set the “system sounds” to NONE. I don't need beeps every time I fat-finger Windows + R or whatnot.

I recently re-installed Win 10 on my personal laptop, and wanted to build a nice default user profile for any new users logging in (my domain account, my wife's domain account) on our home AD setup.

After some searching, I found a few posts on Microsoft TechNet with suggestions that didn't work; here's what I ended up with to set the value for all 50+ sounds to 'none'.

This covers the currently logged-in user:

# Find every sound-scheme key under the current user's hive and rewrite the hive prefix
# into a PowerShell registry path (HKCU:)
$RegSoundCU = Get-ChildItem -Path HKCU:\AppEvents\Schemes\Apps\.Default -Recurse |
    Select-Object -ExpandProperty Name |
    ForEach-Object { $_ -replace "HKEY_CURRENT_USER", 'HKCU:' }

# Blank out the (Default) value of each key, which sets the associated sound to 'none'
ForEach ($i in $RegSoundCU) {
    Set-ItemProperty -Path $i -Name "(Default)" -Value ""
}

And this covers the default user, so any new users will get the same settings of NO SOUNDS!

# Load the default user's registry hive under a temporary key
reg load HKLM\DEFAULT c:\users\default\ntuser.dat

# Same idea as above, but against the mounted default-user hive
$RegSoundDU = Get-ChildItem -Path "HKLM:\DEFAULT\AppEvents\Schemes\Apps\.Default" -Recurse |
    Select-Object -ExpandProperty Name |
    ForEach-Object { $_ -replace "HKEY_LOCAL_MACHINE", 'HKLM:' }

ForEach ($i in $RegSoundDU) {
    Set-ItemProperty -Path $i -Name "(Default)" -Value ""
}

# Unload the hive so new-profile creation isn't blocked by an open handle
reg unload HKLM\DEFAULT

Outside of the GPO method, which doesn't let users make any changes afterwards, if you know of another method to achieve the same, let me know in the comments 🙂

Owen

Not all NVMe drives are created equal!



Over the weekend, I learned the hard way what happens when you deviate from VMware's HCL.

Many months back, I'd decided on HP EX920 1 TB NVMe drives for my home lab ESXi 6.7 U1 hosts.

For reference, the hardware for my hosts is level-set as follows:

HP EliteDesk 800 G3 SFF (not on VMware HCL)
Intel Core i5-6500 CPU – 4 cores / 4 threads (on VMware HCL)
32 GB DDR4 SDRAM
Intel X520 dual-port 10 GbE NIC (on VMware HCL)
Intel I350-T4 quad-port NIC (on VMware HCL)

The HP EX920 1 TB NVMe drives were installed into the above ESXi 6.7 U1 hosts a few months back, and were running without issue.

Last month, I took two of the three hosts and created a two-node vSAN cluster, and that's where the trouble started. 

Shortly after creating the vSAN cluster, VMware Update Manager indicated my vSAN components were out of date and required a critical fix. I happily applied ESXi 6.7 U2 to both hosts; on reboot, my NVMe drives were gone. After a few minutes of internet searching, I landed here:

https://www.virtuallyghetto.com/2019/05/quick-tip-crucial-nvme-ssd-not-recognized-by-esxi-6-7.html

I followed the work-around in the blog, which was simply to take the NVME.000 driver from an ESXi 6.5 build, copy it to /bootbank and reboot the host. Nice! My NVMe drives reappeared. Shortly after completing this step, I ended up removing my vSAN config, as I needed the hosts to study for some upcoming Citrix cert exams.
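
As a quick sanity check that the drives really are back after that kind of driver swap, PowerCLI can list what the host sees. A sketch only, assuming the VMware.PowerCLI module is installed and using a made-up host name:

# Connect to the host and list the local disks it can see (host name is made up)
Import-Module VMware.PowerCLI
Connect-VIServer -Server esxi-host-01.lab.local
Get-ScsiLun -VmHost (Get-VMHost) -LunType disk | Select-Object CanonicalName, Vendor, Model, CapacityGB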

As of the time of this blog post, the hosts are just running local storage and I run periodic manual vMotions over 10 GbE (fast!).

Over the weekend, I was running some manual vMotions between the hosts over 10 GbE and noted some very odd behavior: 

Attempting to run vMotions of 3 or more VMs from the HP EX920 NVMe, either internally to another SSD in the same host or to another host's drive over 10 GbE, would result in the HP EX920 going offline approx. 5 mins after the vMotion job completed.

As I was aware that my NVMe driver was taken from an earlier version of ESXi, I decided to repro the issue on various older versions of ESXi. At home, I can provision these HP hosts very quickly, as each is connected to an IP KVM, and I keep spare USB drives. So, I was able to test out ESXi 6.5, ESXi 6.7 GA, ESXi 6.7 U1 and U2. Once the base ESXi image was installed, I used host profiles to get up and running. Here's what I found:

Regardless of the ESXi version/build I chose, the issue was easily reproduced: 3 or more vMotions and the HP EX920 would fall over. 
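
If you want to reproduce the same pattern, the repro is just a few concurrent Storage vMotions off the suspect drive. In PowerCLI terms it's roughly the following, with made-up VM and datastore names:

# Kick off three Storage vMotions at once from the EX920-backed datastore to another datastore
Get-VM 'vm01','vm02','vm03' | Move-VM -Datastore (Get-Datastore 'datastore-860evo') -RunAsync

# Then keep an eye on the EX920-backed datastore for ~5 minutes after the jobs complete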

In one of my other ESXi hosts, I had a spare Samsung 970 EVO Plus NVMe unit. I decided to attempt to repro the issue on the same ESXi versions/builds using this drive, and I wasn't able to! My conclusion: the HP EX920 doesn't play nice with ESXi 6.5 / 6.7, regardless of which NVMe driver you use. Others have reported similar results here and here.

  • THE GOOD: The issue appears to be isolated to the HP EX920 NVMe drives
  • THE BAD: I own three of the affected units
  • THE UGLY: See above
HP doesn't list much in the way of support for these units; instead, after trawling the HP support forums, I came upon the following:

https://www.multipointe.com/downloads-hp-ssd-support/
It seems HP outsourced production and support of the EX920 series to a company called Multipointe. I didn't end up taking the sticker off my EX920 units, but some have reported that the actual chips are budget-line Kingston; if I'd known that 3 months back, I'd not have bought the units. Hindsight = 20/20!

Anyway, I purchased a pair of replacements: Samsung 970 EVO Plus 1 TB NVMe drives.


I installed them into the two hosts I wanted to use in a two-node vSAN cluster, and surprise, the issue no longer occurs!

All 3 units have been posted for sale on KIJIJI, the next owner will be explicitly warned not to use them in an ESXi host! 

Home lab upgrade log – April 26, 2019: "New" IP KVM switch has arrived!

In my last post on the topic of home lab upgrades, I'd stated I was going to use Intel Q370-based mobos to satisfy a long-standing need for IPMI/OOB/lights-out mgmt of my various virtualization hosts.

That plan has now changed, as I've decided to go with a more scalable solution that won't lock me in to building around the older / less available Q370-based motherboards; as per my previous posts, there are only 3 mobos that are Q370-based as of the time of this post, whereas there are dozens of Z390-based boards from my preferred OEMs (MSI/Gigabyte).

To achieve this, I purchased a used Raritan DLX-108 IP KVM from eBay. The unit has 8 ports, and my plans for the next few years call for a max of 4 hosts, so 8 is plenty!

I did a fair bit of research on the Raritan IP KVM, as the only experience I've got with IP KVM units is from my past job, where I used to connect to IBM servers via their own IP KVM implementation. On the Reddit homelab forums, most folks are buying older IP KVMs from Raritan or Avocent; Avocent I ruled out due to apparently impossible-to-work-around Java issues, whereas most folks reported being able to use the Raritan units with a few Java work-arounds.

I got the unit today from the local post office and immediately hooked it up. The first step was a factory reset. I then connected my main NUC desktop to it via a cross-over cable to amend its default IP; once this was done, I connected it to my main switch and then to my first non-IPMI host. Here's where I ran into a few issues, none of which were completely unexpected.

The unit requires an older Java plug-in, and as these are explicitly banned in Chrome, IE was my only choice. Here were the steps required to get the Java plug-in working:

1 – Install Java
2 – Open the Java control panel > security > site exceptions > add the IP of the Raritan IP KVM
3 – Log on to the Raritan unit via IE; within the security > certificate settings, I downloaded the cert, then imported it into the Java control panel
4 – Close / re-open IE
5 – On reconnecting to the web interface of the Raritan unit, the plug-in started! Nice
6 – I then attempted to connect to the HP ProDesk 400 G3 Mini that I'd cabled in to the new Raritan unit, but was getting a “video not available” message; to clear it, I had to reboot the HP ProDesk unit. Not ideal, but not the first time I've had to restart a server to recover video

So, with the above steps in place, I've cleared the path to buy whatever / build around non-IPMI-compliant motherboards 🙂

I'm still keeping my one IPMI-compliant computer for use as an ESXi / file server; that's an HP EliteDesk 800 G3 SFF, it's got Intel vPro support, and it works perfectly via web/fat client for lights-out stuff.

Home lab upgrade log April 11, 2019

Last year, I bought a few used HP computers via eBay to build out my VMware lab setup:

-An HP EliteDesk 800 G3 SFF – which has an Intel Core i5-6500 CPU, non-hyper-threaded!
-An HP ProDesk 400 G3 Mini – which has an Intel Core i5-6500T CPU, non-hyper-threaded! This is the 2nd of these models I've bought

I had chosen the HP EliteDesk 800 G3 as it featured the right combination of PCIe and drive slots to support use as a replacement for my existing full-size ATX file server.

The unit itself came with no RAM, HDD, or add-in cards, which suited me fine, as it kept the price low on eBay.

I added the following:
-Seagate 12 TB EXOS 3.5″ 7200 RPM HDD
-32 GB (2x 16 GB) DDR4 RAM
-Intel I350-T4 V2 quad-port gigabit server adapter
-HP 1 TB NVMe SSD

The HP ProDesk 400 G3 Mini got a Samsung 256 GB NVMe SSD and 32 GB RAM. For the OS, I installed ESXi 6.5; ESXi 6.7 is not possible with this particular HP model, as it's got the dreaded Realtek NIC, which has been blacklisted in recent ESXi 6.7 builds. I didn't actually know this when I purchased the unit, so I will eventually be selling it, as I'm keen to level-set my vSphere setup on 6.7.

With OS installs completed on the 2 new HP units, physical keyboard/mouse/video connections were removed.

Low-level remote mgmt (aka lights-out/IPMI) on my home lab has always been a challenge for me, as I never had “server grade” systems / motherboards. However, as luck would have it, the HP EliteDesk 800 G3 is vPro enabled, which means Intel AMT! I've seen the vPro sticker on a thousand Dell/Lenovo/HP desktops I've worked on over the years, but never considered it suitable for “lights out” mgmt, as everywhere I've worked has used another facility – or none at all – for lights-out mgmt of desktops.

Anyway, I enabled AMT on my HP EliteDesk 800 G3, and it's been working great for any remote mgmt I need to do that can't be done over RDP: OS re-installs, BIOS changes, forced reboots, etc.
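
If you're setting this up on your own vPro boxes, a quick way to confirm AMT is actually listening after enabling it in the BIOS/MEBx is to probe its web UI ports. A small sketch with a made-up hostname; 16992 is the standard AMT HTTP port and 16993 the TLS one:

# Probe the Intel AMT web interface ports on the freshly enabled host (hostname is made up)
Test-NetConnection -ComputerName elitedesk-800-g3.lab.local -Port 16992   # AMT over HTTP
Test-NetConnection -ComputerName elitedesk-800-g3.lab.local -Port 16993   # AMT over TLS

# If 16992 answers, you can browse to http://elitedesk-800-g3.lab.local:16992 and log in with the AMT credentials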

I'm so happy with it that I've already started building the replacement for my HP ProDesk 400 G3 Mini around a vPro-enabled motherboard!

This one, here: https://www.gigabyte.com/Motherboard/Q370M-D3H-GSM-PLUS-rev-10#kf

Which features a Q370 chipset. Q? Q? No, not that “Q”

Q-series chipsets go into PCs destined for use in business, so they feature vPro / AMT for remote management.

As such, they are much less common than the Z series: for the Z390, for instance, each of the “big 4” mobo makers (Asus, ASRock, Gigabyte, MSI) has dozens of iterations across ITX/mATX/ATX.

Whereas, for the Q370 line, you've just got 3 models (1 each from Asus, ASRock, Gigabyte):
ASRock Q370M
Asus Prime Q370M
Gigabyte Q370M (which I bought)

Sourcing the Gigabyte Q370M here in Canada was a chore! I ended up buying it from a store/drop-shipper I've never used before called SoftwareCity. They've let me know the item is on back order, with 3-4 weeks typical processing time. Fingers crossed it arrives, as I'm keen to buy the other pieces, which are as follows:

-Intel Core i7-8700 (6 cores / 12 threads)
-32 GB DDR4 RAM (2x 16 GB) – I will more than likely expand this to 64 GB at a later date
-Intel I350-T4 V2 quad-port gigabit server adapter (on its way from a Chinese eBay seller)
-Jonsbo UMX3 case
-TBD modular ATX PSU
-TBD CPU cooler
-TBD 1 TB NVMe
-TBD 10 GbE NIC (either SFP or RJ45 based)

New year’s resolutions don’t work

As per the title of this post, I don't believe in New Year's resolutions. Any date you can put a “magic” start time on is a date you can have a dozen excuses ready for by the time it arrives.

That's why I like to make changes all year round: this week, it was cycling out old hardware. 

This picture represents the past and the present, not the future!

The items in green are older Intel NUC units that I retired over the Christmas break. One is a 2nd-gen Core i3 NUC; the other is an original 1st-gen Intel NUC Atom! Both saw active duty on my home network, but were too old to keep paying the power bill for.

The items in blue are units I'll keep for the moment. All of the items in green/blue sit atop two larger Antec cases. These large Antec cases are the core lab servers:

The first is a Win 2016 / Hyper-V host, a 3rd-gen Core i5-based system with 24 GB RAM. The 2nd is an ESXi 6.5 host, a 2nd-gen Core i5-based system with 24 GB RAM (ancient!).

Both will be retired in Q1 of 2019. Both are still running fine, but I'm keen to reduce the number of larger tower units I've got running at home, so I'm in the process of switching over to smaller / faster units.

Owen