QUIET! Simple PowerShell code to set current_user / default_user sound reg keys to "NONE"

I think for every PC I've used or worked on over the past 20 years, the first thing I do after logging on is set the "system sounds" to None. I don't need a beep every time I fat-finger Windows + R or whatnot.

I recently re-installed Win 10 on my personal laptop and wanted to build a nice default user profile for any new users logging in (my domain account, my wife's domain account) on our home AD setup.

After some searching, I found a few posts on Microsoft TechNet with suggestions that didn't work. Here's what I ended up with to set the value for all 50+ sounds to 'none'.

This covers the currently logged-in user:

# Gather every sound event key for the current user, converting the raw key names into provider paths
$RegSoundCU = Get-ChildItem -Path HKCU:\AppEvents\Schemes\Apps\.Default -Recurse |
    Select-Object -ExpandProperty Name |
    ForEach-Object { $_ -replace 'HKEY_CURRENT_USER', 'HKCU:' }

# Blank out each key's default value, which Windows treats as the sound being set to (None)
ForEach ($i in $RegSoundCU) {
    Set-ItemProperty -Path $i -Name '(Default)' -Value ''
}

And this covers the default user profile, so any new users will get the same settings: NO SOUNDS!

# Mount the default user's registry hive under HKLM\DEFAULT
reg load HKLM\DEFAULT c:\users\default\ntuser.dat

# Same approach as above, but against the mounted default user hive
$RegSoundDU = Get-ChildItem -Path "HKLM:\DEFAULT\AppEvents\Schemes\Apps\.Default" -Recurse |
    Select-Object -ExpandProperty Name |
    ForEach-Object { $_ -replace 'HKEY_LOCAL_MACHINE', 'HKLM:' }

ForEach ($i in $RegSoundDU) {
    Set-ItemProperty -Path $i -Name '(Default)' -Value ''
}

# Release any handles PowerShell still holds on the hive, or the unload can fail with "Access is denied"
[gc]::Collect()

reg unload HKLM\DEFAULT
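To sanity-check the result, a quick sketch like this should come back empty after a run (it lists any sound event keys that still have a non-empty default value):

# List any current-user sound event keys that still point at a sound file
Get-ChildItem -Path HKCU:\AppEvents\Schemes\Apps\.Default -Recurse |
    Get-ItemProperty |
    Where-Object { $_.'(default)' } |
    Select-Object -ExpandProperty PSPath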

Outside of the GPO method (which locks the settings so users can't change them), if you know of another method to achieve the same, let me know in the comments 🙂

Owen

Not all NVMe drives are created equal!



Over the weekend, I learned the hard way what happens when you deviate from VMware's HCL.

Many months back, I'd decided on HP EX920 1TB NVMe drives for my home lab ESXi 6.7 U1 hosts.

For reference, the hardware for my hosts is level-set as follows:

HP EliteDesk 800 G3 SFF (not on VMware HCL)
Intel Core i5-6500 CPU – 4 cores / 4 threads (on VMware HCL)
32 GB DDR4 SDRAM
Intel X520 dual-port 10 GbE NIC (on VMware HCL)
Intel i350-T4 quad-port NIC (on VMware HCL)

The HP EX920 1 TB NVMe drives were installed into the above ESXi 6.7 U1 hosts a few months back and were running without issue.

Last month, I took two of the three hosts and created a two-node vSAN cluster; that's where the trouble started.

Shortly after creating the vSAN cluster, VMware Update Manager indicated my vSAN components were out of date and required a critical fix. I happily applied ESXi 6.7 U2 to both hosts; on reboot, my NVMe drives were gone. After a few minutes of internet searching, I landed here:

https://www.virtuallyghetto.com/2019/05/quick-tip-crucial-nvme-ssd-not-recognized-by-esxi-6-7.html

I followed the workaround in the blog, which was simply to take the NVMe driver (NVME.000) from an ESXi 6.5 build, copy it to /bootbank, and reboot the host. Nice! My NVMe drives reappeared. Shortly after completing this step, I ended up removing my vSAN config, as I needed the hosts to study for some upcoming Citrix cert exams.

As of the time of this blog post, the hosts are just running local storage, and I run periodic manual vMotions over 10 GbE (fast!).

Over the weekend, I was running some manual vMotions between the hosts over 10 GbE and noted some very odd behavior:

Attempting to run vMotions of 3 or more VMs off the HP EX920 NVMe drive, either internally to another SSD in the same host or to another host's drive over 10 GbE, would result in the HP EX920 going offline approx. 5 minutes after the vMotion job completed.

As I was aware that my NVMe driver was taken from an earlier version of ESXi, I decided to repro the issue on various older versions of ESXi. At home, I can provision these HP hosts very quickly, as each is connected to an IP KVM and I keep spare USB drives. So, I was able to test out ESXi 6.5, ESXi 6.7 GA, and ESXi 6.7 U1 and U2. Once the base ESXi image was installed, I used host profiles to get up and running. Here's what I found:

Regardless of the ESXi version/build I chose, the issue was easily reproduced: three or more vMotions and the HP EX920 would fall over.
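For anyone wanting to reproduce this, here's roughly what my test looked like as a PowerCLI sketch (hypothetical vCenter, VM, and datastore names – adjust for your own lab):

# Kick off three simultaneous storage vMotions off the EX920-backed datastore
Connect-VIServer -Server vcenter.lab.local
$vms = Get-VM -Name VDA1, SQL1, ADDC1
foreach ($vm in $vms) {
    Move-VM -VM $vm -Datastore (Get-Datastore 'SSD-Datastore-2') -RunAsync
}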

In one of my other ESXi hosts, I had a spare Samsung 970 EVO Plus NVMe unit. I decided to attempt to repro the issue on the same ESXi versions/builds using this drive, and I wasn't able to! My conclusion: the HP EX920 doesn't play nice with ESXi 6.5 / 6.7, regardless of which NVMe driver you use. Others have reported similar results here and here.

  • THE GOOD: The issue appears to be isolated to the HP EX920 NVMe drives
  • THE BAD: I own three of the affected units
  • THE UGLY: See above
HP doesn't list much in the way of support for these units; instead, after trawling the HP support forums, I came upon the following:

https://www.multipointe.com/downloads-hp-ssd-support/
It seems HP outsourced production and support of the HP EX920 series to a company called Multipointe. I didn't end up taking the sticker off my EX920 units, but some have reported that the actual chips are budget-line Kingston parts. If I'd known that 3 months back, I'd not have bought the units. Hindsight = 20/20!

Anyway, I purchased a pair of replacements: Samsung 970 EVO Plus 1TB NVMe drives.


I installed them into the two hosts I wanted to use in a two-node vSAN cluster, and surprise: the issue no longer occurs!

All 3 units have been posted for sale on Kijiji; the next owner will be explicitly warned not to use them in an ESXi host!

Home lab upgrade log – April 26, 2019: "New" IP KVM switch has arrived!

In my last post on the topic of home lab upgrades, I'd stated I was going to use Intel Q370-based motherboards to satisfy a long-standing need for IPMI/OOB/lights-out mgmt of my various virtualization hosts.

That plan has now changed, as I've decided to go with a more scalable solution that won't lock me in to building on the older / less available Q370-based motherboards. As per my previous posts, there are only 3 Q370-based motherboards available as of the time of this post, whereas there are dozens of Z390-based boards from my preferred OEMs (MSI/Gigabyte).

To achieve this, I purchased a used Raritan DLX-108 IP KVM from eBay. The unit has 8 ports; my plan for the next few years is a max of 4 hosts, so 8 is plenty!

I did a fair bit of research on the Raritan IP KVM, as the only experience I've got with IP KVM units is from my past job, where I used to connect to IBM servers via their own IP KVM implementation. On the Reddit homelab forums, most folks are buying older IP KVMs from Raritan or Avocent. Avocent I ruled out due to apparently impossible-to-work-around Java issues, whereas most folks reported being able to use the Raritan units with a few Java workarounds.

I got the unit today from the local post office and immediately hooked it up. The first step was a factory reset. I then connected my main NUC desktop to it via crossover cable to amend its default IP. Once this was done, I connected it to my main switch and then to my first non-IPMI host. Here's where I ran into a few issues, none of which were completely unexpected.

The unit requires an older Java plug-in. As these are explicitly banned in Chrome, IE was my only choice. Here were the steps required to get the Java plug-in working:

1 – Installed Java
2 – Opened the Java control panel > Security > site exemptions and added the IP for the Raritan IP KVM (this part can also be scripted; see the sketch after this list)
3 – Logged on to the Raritan unit via IE; within the Security > certificate settings, I downloaded the cert, then imported it into the Java control panel
4 – Closed / re-opened IE
5 – On reconnecting to the web interface for the Raritan unit, the plug-in started. Nice!
6 – I then attempted to connect to the HP ProDesk 400 G3 Mini that I'd cabled in to the new Raritan unit, but was getting a "video not available" message. To clear it, I had to reboot the HP ProDesk unit. Not ideal, but not the first time I've had to restart a server to recover video.
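As an aside, step 2 can be done without clicking through the Java control panel: the exception list is just a text file under the user profile. A rough sketch, assuming Java's default per-user deployment path and a made-up IP for the KVM:

# Append the Raritan unit's IP to the Java exception site list (hypothetical IP - use your unit's address)
$sites = "$env:USERPROFILE\AppData\LocalLow\Sun\Java\Deployment\security\exception.sites"
Add-Content -Path $sites -Value 'https://192.168.1.50'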

So, with the above steps in place, I've cleared the path to buy whatever I want / build based on non-IPMI-compliant motherboards 🙂

I'm still keeping my one IPMI-compliant computer for use as an ESXi / file server. That's an HP EliteDesk 800 G3 SFF; it's got Intel vPro support and works perfectly via web/fat client for lights-out stuff.

Home lab upgrade log – April 11, 2019

Last year, I bought a few used HP computers via eBay to build out my VMware lab setup:

-An EliteDesk 800 G3 SFF, which has an Intel Core i5-6500 CPU, non-hyperthreaded!
-An HP ProDesk 400 G3 Mini, which has an Intel Core i5-6500T CPU, non-hyperthreaded! This is the 2nd of these models I've bought.

I had chosen the HP EliteDesk 800 G3 as it featured the right combination of PCIe and drive slots to support use as a replacement for my existing full-size ATX file server.

The unit itself came with no RAM, HDD, or add-in cards, which suited me fine, as it kept the eBay price low.

I added the following:
-Seagate 12 TB EXOS 3.5 7200 RPM HDD
-32 GB (2 x 16) DDR4 RAM
-Intel Server Adapter I350-T4 V2 quad-port gigabit card
-HP 1 TB NVMe SSD

The HP ProDesk 400 G3 Mini got a Samsung 256 GB NVMe SSD and 32 GB RAM. For the OS, I installed ESXi 6.5; ESXi 6.7 is not possible with this particular HP model, as it's got the dreaded Realtek NIC, which has been blacklisted in recent ESXi 6.7 builds. I didn't actually know this when I purchased the unit, so I will eventually be selling it, as I'm keen to level-set my vSphere setup on 6.7.

With OS installs completed on the 2 new HP units, physical keyboard/mouse/video connections were removed.

Low-level remote mgmt (aka lights-out/IPMI) on my home lab has always been a challenge for me, as I never had "server grade" systems / motherboards. However, as luck would have it, the HP EliteDesk 800 G3 was vPro enabled, which means Intel AMT! I've seen the vPro sticker on a thousand Dell/Lenovo/HP desktops I've worked on over the years, but never considered it suitable for "lights out" mgmt, as everywhere I've worked has used another facility – or none at all – for lights-out mgmt of desktops.

Anyway, I enabled AMT on my HP EliteDesk 800 G3, and it's been working great for the remote mgmt I need to do that can't be done over RDP: OS re-installs, BIOS changes, forced reboots, etc.

I'm so happy with it that I've already started building the replacement for my HP ProDesk 400 G3 Mini around a vPro-enabled motherboard!

This one, here: https://www.gigabyte.com/Motherboard/Q370M-D3H-GSM-PLUS-rev-10#kf

Which features a Q370 chipset. Q? Q? No, not that “Q”

Q-series chipsets go into PCs destined for use in business, so they feature vPro / AMT for remote management.

As such, they are much less common than the Z series. For the Z390, for instance, each of the "big 4" mobo makers (Asus, ASRock, Gigabyte, MSI) has dozens of iterations across ITX/MATX/ATX.

Whereas for the Q370 line, you've got just 3 models (1 each from Asus, ASRock, and Gigabyte):
Asrock Q370M
Asus Prime Q370M
Gigabyte Q370M (which I bought)

Sourcing the Gigabyte Q370M here in Canada was a chore! I ended up buying it from a store/drop-shipper I've never used before called SoftwareCity. They've let me know the item is on back order, with 3-4 weeks' typical processing time. Fingers crossed it arrives, as I'm keen to buy the other pieces, which are as follows:

-Intel Core i7 8700 (6 cores / 12 threads)
-32 GB DDR4 RAM (2x 16) – I will more than likely expand this to 64 at a later date
-Intel Server Adapter I350-T4 V2 quad-port gigabit card (on its way from a Chinese eBay seller)
-Jonsbo UMX3 case
-TBD modular ATX PSU
-TBD CPU cooler
-TBD 1TB NVMe
-TBD 10 GbE NIC (either SFP+ or RJ45 based)

New year’s resolutions don’t work

As per the title of this post, I don't believe in new year's resolutions. Any date you can put a "magic" start time on is a date you can prepare a dozen excuses for once it comes around.

That's why I like to make changes all year round: this week, it was cycling out old hardware.

This picture represents the past and the present, not the future!

The items in green are older Intel NUC units that I retired over the Christmas break. One is a 2nd-gen Core i3 NUC; the other is an original 1st-gen Intel NUC with an Atom CPU! Both saw active duty on my home network, but were too old to keep paying the power bill for.

The items in blue are units I'll keep for the moment. All of the items in green/blue sit atop two larger Antec cases. These large Antec cases are the core lab servers:

The first is a Win 2016 / Hyper-V host built on a 3rd-gen Core i5 with 24 GB RAM. The second is an ESXi 6.5 host built on a 2nd-gen Core i5 with 24 GB RAM (ancient!).

Both will be retired in Q1 of 2019. Both are still running fine, but I'm keen to reduce the number of larger tower units I've got running at home, so I am in the process of switching over to smaller / faster units.

Owen


How I automate Windows updates/reboots on my home lab



As my home lab has grown, so has the amount of time I spend manually patching/rebooting each server. No fun!

So last weekend, I came up with a means to automate the process with Active Directory groups, Group Policy, and PowerShell.

The goals

  1. Automate patching / rebooting of all VMware/Hyper-V guest operating systems in my home lab
  2. As I've got about 20 VMs across 4 physical hypervisors, separating the guests into two reboot windows was ideal
  3. Dynamically update the reboot windows as I add/remove servers
Implementation
  1. Created AD group called “Sat AM patching servers”
  2. Created AD group called “Sun AM patching servers”
  3. Created GPO called “Sat AM Patching” at top-level OU that sets the following:

    Computer Configuration > Administrative Templates > Windows Components > Windows Update > Configure Automatic Updates – Sat @ 5 AM

  4. Within the “DELEGATION” tab, I set “Sat AM patching servers” to have read/apply group policy rights on the GPO
  5. To stop the "Sat AM patching" GPO from applying to servers in the Sun group, I then set the "Sun AM patching servers" group to DENY for "read" and "apply group policy"
  6. Created GPO called “Sun AM patching” at top-level OU that sets the following:

    Computer Configuration > Administrative Templates > Windows Components > Windows Update > Configure Automatic Updates – Sun @ 5 AM

  7. Within the "DELEGATION" tab, I set "Sun AM patching servers" to have read/apply group policy rights on the GPO
  8. To stop the "Sun AM patching" GPO from applying to servers in the SAT group, I then set the "Sat AM patching servers" group to DENY for "read" and "apply group policy"

    Note:
     Additional info on the above GPO settings is available here
  9. Steps 1-8 address goals 1 & 2, which left goal 3: automating the process by which servers are added to / removed from the SAT and SUN global groups. PowerShell to the rescue!
  10. Step 10 was the creation of a new PowerShell script; full details follow below:
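One handy aside before the script: you can confirm which patch window a given server actually picked up by reading back the registry values that the Configure Automatic Updates policy writes (these AU value names are Microsoft-documented; run this on the target server):

# Read back the Windows Update policy values the GPO writes
Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' |
    Select-Object AUOptions, ScheduledInstallDay, ScheduledInstallTime
# AUOptions 4 = auto download and schedule the install
# ScheduledInstallDay: 1 = Sunday ... 7 = Saturday (0 = every day)
# ScheduledInstallTime: hour of the day, so 5 = 5 AM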

For years, I've been naming any virtual servers created on my ESXi/Hyper-V hosts to include either a "1" or a "2" in their name. The servers are spread out across multiple OUs. For instance, my Citrix OU has VDA1 and VDA2, my SQL servers are in another OU and called SQL1 and SQL2, my AD controllers are in another OU and called ADDC1 and ADDC2, etc. You get the idea.

The actual PowerShell code to achieve this is less than 40 lines; here it is:

### Filter out servers that we don't want to regularly patch
$Servers = Get-ADComputer -Filter * | Sort-Object DNSHostName |
    Where-Object { $_.DNSHostName -notlike "*GOL*" } |
    Where-Object { $_.DNSHostName -notlike "*ESX*" }

### Filter $Servers down to servers with a "1" in their name
$Servers2AddtoGrp1 = $Servers | Where-Object { $_.Name -like "*1*" }

### Filter $Servers down to servers with a "2" in their name
$Servers2AddtoGrp2 = $Servers | Where-Object { $_.Name -like "*2*" }

### Create object for patching group 1
$ADGrp1 = Get-ADGroup -Identity "SAT AM Patching"

### Create object for patching group 2
$ADGrp2 = Get-ADGroup -Identity "SUN AM Patching"

### Reset patching group 1 members
Get-ADGroupMember $ADGrp1 | ForEach-Object {
    Remove-ADGroupMember $ADGrp1 -Members $_ -Confirm:$False
}

### Reset patching group 2 members
Get-ADGroupMember $ADGrp2 | ForEach-Object {
    Remove-ADGroupMember $ADGrp2 -Members $_ -Confirm:$False
}

### Add all entries from $Servers2AddtoGrp1
Add-ADGroupMember $ADGrp1 -Members $Servers2AddtoGrp1

### Add all entries from $Servers2AddtoGrp2
Add-ADGroupMember $ADGrp2 -Members $Servers2AddtoGrp2

The script resides on GitHub, HERE.
So, we have a script that reads ALL computer accounts in my home lab domain, filters out those with names I don't want to patch (like esx*), resets the respective AD groups, and then re-adds the current crop of servers.
The above covers step 10.
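A quick spot-check after a run (sketch):

# Spot-check the resulting group memberships
Get-ADGroupMember 'SAT AM Patching' | Select-Object -ExpandProperty Name
Get-ADGroupMember 'SUN AM Patching' | Select-Object -ExpandProperty Name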

Step 11 was just to create a scheduled task on my ADDC1 and ADDC2 to run the PowerShell script one hour before the Sat/Sun patch windows kick in!
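For reference, the task itself can also be registered with PowerShell. A sketch, assuming the script is saved to C:\Scripts\Set-PatchGroups.ps1 (a hypothetical path) and that 4 AM sits one hour ahead of the 5 AM windows:

# Run the group-refresh script every Sat/Sun at 4 AM, an hour before the patch windows
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-PatchGroups.ps1'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Saturday, Sunday -At 4am
Register-ScheduledTask -TaskName 'Refresh patch groups' -Action $action -Trigger $trigger `
    -User 'NT AUTHORITY\SYSTEM'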

Hopefully you found the above useful; it wasn't TOO much work. I was done in the time it took me to consume about 2 servings of Jameson Irish Whiskey / Dr Pepper (2 hours?). Whiskey purists will scoff at me, but I won't see said scoffs, as this is the internet :p

Owen


Export-STFConfiguration / Import-STFConfiguration fails

On a few new Win 2012 / Win 2016 systems I've set up on my home lab over the past week, I've noted various error messages when attempting the following simple PowerShell cmdlet-based process, which you would use to export the config of a working StoreFront server to a new StoreFront server.

Citrix details the process here

1 – Open an elevated PowerShell session on your existing StoreFront server that's fully set up
2 – Add-PSSnapin Citrix*
3 – . "C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1" (note the "." at the beginning)
4 – Export-STFConfiguration -TargetFolder "c:\StoreFrontConfigs" -ZipFileName Source -NoEncryption

Output of the various config pieces being written will be shown on screen. With the above commands completed, you would then log on to your second server to import the newly created config, where you would run through the following:

1 – Copy the .zip created in step 4 to a local folder on the new server
2 – Open an elevated PowerShell session
3 – Add-PSSnapin Citrix*
4 – . "C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1"
5 – Run Import-STFConfiguration -ConfigurationZip "Path to .zip"

That SHOULD be it, right? NOPE! On my test servers at home (Win 2012 R2 / StoreFront 3.9 and Win 2016 / StoreFront 3.14.0.27), running Import-STFConfiguration resulted in the following cryptic error:

Import-STFConfiguration : An error occurred configuring StoreFront diagnostics. The property ‘instance’ cannot be
found on this object. Verify that the property exists.
At line:1 char:1
+ Import-STFConfiguration -ConfigurationZip C:\StoreFrontConfigs\Source …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Import-STFConfiguration], Exception
    + FullyQualifiedErrorId : System.Management.Automation.CmdletInvocationException,Citrix.StoreFront.ImportConfiguration

Now, I can still set up the 2nd server by using the Studio method, where I create a server group between the existing and new server and sync the config. However, shouldn't the Import-STFConfiguration cmdlet do the trick? Anyone else seen this and know of a fix? The Citrix forums have a few posts, but nothing definitive.

Owen

Current lab setup

My current lab environment supports a few goals:

  • Stay sharp with Microsoft Active Directory and Group Policy
  • Keep up to date with Windows Server versions: 2003, 2008, 2012 R2, 2016, and beyond
  • Keep up to date with the Windows desktop OS family
  • Keep my hardware skills sharp by continuing to build my own desktops and servers
  • Keep up to date with Citrix XenApp/XenDesktop suite releases: in 2017 alone, I ran 7.11, 7.12, 7.13, and 7.15 LTSR – a lot!
  • Use PowerShell for automating routine tasks
  • Enable quick recovery from hardware/software failure through OS virtualization
  • Prep for certification
All of the above has been achievable using two full-size ATX-based towers and one SFF unit – a Gigabyte Brix:
  1. ATX Tower 1 runs Server 2016 and Hyper-V – VMs run SQL, file services, and a backup AD controller
  2. ATX Tower 2 runs ESXi – Citrix StoreFront, Citrix desktop controller, session hosts, and MCS
  3. The Gigabyte Brix is used for appliance functions – vCenter and the primary Active Directory controller
DRP is covered by a multi-port UPS and a USB 3 10 TB HDD.
Each year (and not always @ X-mas), I have the exact same hardware purchase temptations:
  • A Synology NAS unit for backup
  • An off-lease Dell R series or HP/Cisco equivalent
However, I always defer to the next year, when I've got a newer apartment w/ more room 😉

The 3-unit setup covers my needs for 2017 so far, but I've become interested in the VMware VCP-DCV certification recently. In doing so, I would more than likely go to a 4-host setup to enable vSAN. So, purchasing a 4th ATX or MATX-based tower will more than likely occur sometime in the next 4-6 months. I'm looking at an Intel Coffee Lake-based rig, as they traditionally feature better ESXi support.

Chrome with folder redirection works with only one session

In my previous job, my exposure to Citrix imaging solutions was limited to PVS. So, over the past few days, I've been going through the process of setting up MCS in my home lab.

To be honest, it's a pretty great out-of-the-box experience. PVS is a lot of work to set up / maintain, man! 🙂

Anyway, I was able to create a nice, simple master image for my MCS machine catalog and tested out basic functionality today. However, I ran into a Google Chrome + Citrix gotcha!

I was logged into a long-build (non-MCS) server VDA session on my home lab hardware, then opened a new MCS VDA server session. All apps opened fine in the 2nd session, except Chrome!

I had enabled Chrome folder redirection to cut down on Citrix UPM/roaming profile size. However, when you enable Chrome %AppData% redirection, you can only have one session per user open at a time!
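If you're not sure whether AppData redirection is in play for a given session, a quick check (sketch):

# Where does Roaming AppData actually resolve for this session?
[Environment]::GetFolderPath('ApplicationData')

# Compare against the non-redirected default location:
Join-Path $env:USERPROFILE 'AppData\Roaming'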

For reference, the *gotcha* is listed on the Google Chrome product forums, here:


The fix was simple: exit Chrome in session 1, and I was then able to open it in session 2.

Disabling SMB v1 breaks Sonos home NAS support

I’ve had Sonos gear for about 5 years now. 4 units, great little speakers – when they work!

Case in point: I disabled SMB v1 on my Win 2016 Hyper-V-based file server this weekend, only to discover that the default (as in only) SMB version that Sonos supports to enumerate/connect to Windows shares is SMB v1. Yikes!
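For reference, this is roughly how I'd flipped the switch; a sketch using the built-in SMB cmdlets available on Server 2012 and later:

# Check whether SMB v1 is currently enabled on the server
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMB v1 (this is what broke Sonos for me)
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# And how you'd turn it back on if you still need a v1-only device
Set-SmbServerConfiguration -EnableSMB1Protocol $true -Force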

It’s noted here:
https://sonos.custhelp.com/app/answers/detail/a_id/79/~/using-a-nas-drive-with-sonos

I found it while skimming through hundreds of posts from the past few years on the Sonos customer support forums.

I was about to take to the Twitter-bird to chide Sonos for this oversight, when I noted that Ned Pyle from Microsoft already had!

He's got this post from last year on the topic.

Regardless, I hope Sonos add SMB v2 or v3 support to their units soon. SMB v1 needs to die!
