10 min Windows 10 / Server 2019 build automation via OSDBuilder, autounattend.xml and Packer.IO

Intro

As IT pros, we’ve got no shortage of new, interesting, and useful tools available to us. With the adoption of open-source software into consumer and enterprise environments, a lot of the tools are free! As long as you don’t need serious 24/7 support, that is.

How do you know which tools to use? COMMUNITY ! For me, that introduction came in 2017, when I met Jonathan Pitre (twitter) while working at a previous job. Jon is Canada’s only CTA and has become a good friend since I met him. He opened my eyes to the “power of the community” that exists for EUC. Previously, I was aware of the EUC/CTA/CTP community by way of those awesome posts from the two Carls, but I wasn’t in the habit of regularly digesting the related blogs or going to events like Synergy. Jon changed that REAL QUICK by letting me import his RSS blog list and by listing off a bunch of interesting tools he had used or had been made aware of via the EUC community.

So, THANK YOU JON !!

This week, I checked another box off from Jon’s original list from a few years back, Packer.io, and I’m glad I did. It’s amazing software. Also this week, I took https://osdbuilder.osdeploy.com for a spin, which provides a way to replace legacy DISM commands and MDT.

The challenges we face

Building Windows 10 / Server 201x images for cloning has the following challenges (at least as far as I’ve encountered):

  1. Human error from hand-cranking settings at the virtual shell level
  2. Human error from hand-cranking settings post-Windows-build
  3. The amount of time it takes to apply Windows updates once the base build is done
  4. The time that’s lost while waiting for steps 1/2/3 to complete

Each of the above challenges can be solved by combining a few simple tools that I will describe below.

The solution

The solution I’ve found that works for me is based on the open-source tool Packer.

There’s a lot of marchitecture/jargon/nerd-speak out there at the moment around DevOps, CI/CD, and “infrastructure as code”. I’ve read plenty of posts online where the conversations contained so many acronyms, it made me question my command of the English language. At its core, it’s about time savings and ensuring build consistency.

The below steps are based on using Packer with vSphere, as it’s still the most common hypervisor for deploying Citrix workloads, and it happens to be what I use at home on my VMUG-licensed 3-node vSAN cluster (shout out!). There are official and community templates for Nutanix as well as all the major cloud platforms, but I’m only going to speak to what I know for this blog post: Packer + vSphere.

When doing the work this week, I found that some of the reference blog pages and associated config files (.JSON/.XML) for use with Packer had not been updated in a few years, so I definitely lost some time while troubleshooting. But that’s how you learn! I can’t blame the authors of these blog posts; how often do you go back and update your own blog posts? I’m guilty of this as well. If you’re reading this in the FUTURE and find that anything I’ve written below doesn’t work for you, please let me know in the comments, or message me on Twitter or email, and I’ll try and help ya out! That’s community, baby!

Let’s get started with the proposed tools and the proposed workflow at each stage to solve each of the 4 challenges I’ve listed above.

Challenge 1: Standardizing VM shell settings
Here is our first use case for the Packer tool. Packer uses .JSON-based config files. These can be used to set the common shell values you would otherwise select manually when creating a shell via the vSphere HTML5 client or PowerCLI.
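For context, those shell settings live in the builders section of the .JSON. Here’s a trimmed sketch of what a vsphere-iso builder block looks like (placeholder values; exact key names vary between Packer versions, so compare against the vsphere-iso docs and the full templates in my repo):

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "{{user `vsphere-server`}}",
      "username": "{{user `vsphere-user`}}",
      "password": "{{user `vsphere-password`}}",
      "cluster": "{{user `vsphere-cluster`}}",
      "datastore": "{{user `vsphere-datastore`}}",
      "vm_name": "{{user `vm-name`}}",
      "CPUs": "{{user `vm-cpu-num`}}",
      "RAM": "{{user `vm-mem-size`}}",
      "guest_os_type": "windows9_64Guest",
      "convert_to_template": "false"
    }
  ]
}
```

The `{{user ...}}` entries pull from the variables section covered later, so the environment-specific values only have to be set in one place.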

Challenge 2: Standardizing windows install settings
Here, we will use Microsoft autounattend.xml templates to pre-fill all the important settings that we would normally have to click / mouse through in a standard Windows install.

Use of autounattend.xml files isn’t something new, but there are integration pieces that are possible by way of Packer integration which make the autounattend.xml a lot more useful. You can use the Microsoft SDK and the Windows System Image Manager, but I’ve also got templates for Win 10 / Win 2019 Server ready to use on my GitHub HERE to save you having to create your own.

Challenge 3: Windows updates last longer than it takes to make a cup of coffee
OSDBuilder will be used to slipstream Windows updates into our base ISO. As we are slipstreaming an offline image, we no longer need to reboot the live, un-patched / out-of-date Windows OS a bunch of times while we wait for each update to apply. This is a huge time-saver once combined with Packer to attach the updated ISO.
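If you haven’t used OSDBuilder before, the basic flow is driven by a handful of PowerShell cmdlets. A rough sketch from the docs (cmdlet names and parameters change between versions, so treat this as a pointer and check osdbuilder.osdeploy.com for the current workflow):

```powershell
# Install and load the OSDBuilder module from the PowerShell Gallery
Install-Module -Name OSDBuilder -Force
Import-Module -Name OSDBuilder

# Import the install.wim from your mounted Windows ISO, then download
# and slipstream the latest updates into the offline image
Import-OSMedia
Update-OSMedia -Download -Execute
```

The serviced media can then be turned into a new, updated ISO per the OSDBuilder docs, which is what gets uploaded to the datastore in the steps below.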

Challenge 4: The time lost waiting for each of the above 3 processes to complete
Again, Packer is here to help us out as the GLUE that ties it all together! Packer is used to start steps 1 and 2, and thus fixes the issue of time lost in between these steps.

Detailed steps

Packer install/config

  • Download Windows 10 / Server 2019 trials from Microsoft
  • Download a recent copy of VMware Tools for Windows
  • Extract the VMware Tools .zip, rename windows.iso to vmwaretools.iso, and upload it to your preferred vSphere datastore folder
  • Refer to the following YouTube video on how to create a slipstreamed ISO for Win 10 or Server 2019 using OSDBuilder
  • Upload the slipstreamed / updated Windows ISO(s) you created in the OSDBuilder step for either Win 10 or Server 2019 to the same vSphere datastore as the vmwaretools.iso. The paths for these ISOs will be entered in your Packer .JSON later
  • Download packer.exe from packer.io HERE. I found blog posts from 2017/2018 that reference a requirement to download 3rd-party plug-ins for the vSphere API. These are no longer required! The devs on the Packer team integrated support for the vSphere API into the main packer.exe in Feb of 2020 (see GitHub post here)
  • Extract it to c:\Program Files\Packer on your preferred build machine. I’m re-using an existing “build” machine where I’ve got the Windows Deployment Toolkit / PDQ Deploy installed
  • Start > Run > Sysdm.cpl > Advanced > Environment variables > SYSTEM VARIABLES > PATH > EDIT

  • Add the path to c:\Program Files\Packer
  • Open my Build-Packer GitHub repo HERE and download everything as a ZIP to your desktop
  • Extract the contents with your favorite zip manager, and open the folder Build-Packer-master
  • Move or copy the config/scripts folders to c:\Program Files\Packer
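If you’d rather script the PATH change than click through sysdm.cpl, the same edit can be made from an elevated PowerShell prompt (a small sketch; adjust the folder if you extracted Packer elsewhere):

```powershell
# Append the Packer folder to the machine-level PATH if it's not already there
$packerDir = 'C:\Program Files\Packer'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if ($machinePath -notlike "*$packerDir*") {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$packerDir", 'Machine')
}
# Open a NEW prompt so the updated PATH is picked up, then verify:
packer.exe version
```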

Windows Autounattend.xml amendments

  • Install / open Microsoft Visual Studio code
  • For Server 2019 open:
    c:\Program Files\Packer\Config\Autounattend\Server_2019\autounattend.xml
  • For Win 10 open:
    c:\Program Files\Packer\Config\Autounattend\Win10\autounattend.xml
  • For the below edits, I’ve not included line numbers, as they might change as updates are made to the related GitHub repo. In all cases, use CTRL-F to search for the block of text highlighted in yellow to find the entry
  • First, you will need to edit the entries for your preferred language
  • NOTE: From experience, and especially true with golden images for later use with RDS/Citrix worker roles, it’s best to do your base build in English (en-US) and add your local language after the fact via a language pack
    <SetupUILanguage>
    <UILanguage>en-US</UILanguage>
    </SetupUILanguage>
    <InputLocale>en-US</InputLocale>
    <SystemLocale>en-US</SystemLocale>
    <UILanguage>en-US</UILanguage>
    <UserLocale>en-US</UserLocale>
  • Amend the below line if you want a larger C drive size, just remember to amend within the associated packer .JSON file as well, covered in the later steps
    <CreatePartition wcm:action="add">
    <Order>1</Order>
    <Size>50</Size>
    <Type>Primary</Type>
    </CreatePartition>
  • I’m using trial versions of Win 10 / Win 2019 in my lab. If you want to use licensed versions and plan on using KMS (Microsoft reference site) on your network to activate them, uncomment the below line. If you input a KMS key that doesn’t match your initial media, your build will fail. Yes, I made this mistake on my first day of building
    <ProductKey>
    <!-- <Key>6XBNX-4JQGW-QX6QG-74P76-72V67</Key> -->
    <WillShowUI>OnError</WillShowUI>
    </ProductKey>
  • Amend the below lines to include your preferred account name associated with the build, and org

    <FullName>Win 2019 Packer Build</FullName>
    <Organization>Your Org Name</Organization>

  • Amend your time zone for wherever you are in the world

    <TimeZone>Eastern Standard Time</TimeZone>

  • Amend the computer name as you see fit

    <ComputerName>w2k19-packer</ComputerName>

  • Change the below value as you see fit; otherwise, you’ll be left with the default password, which is insecure by virtue of being shared on GitHub
    <AutoLogon>
    <Password>
    <Value>20SuperSecure3d20</Value>
    <PlainText>true</PlainText>
    </Password>
    <LogonCount>2</LogonCount>
    <Username>Administrator</Username>
    <Enabled>true</Enabled>
    </AutoLogon>
  • Match the password set in the <AutoLogon> section above in the below section
    <AdministratorPassword>
    <Value>20SuperSecure3d20</Value>
    <PlainText>true</PlainText>
    </AdministratorPassword>
  • Note: Record the password used here, as it will be re-used in the related Packer JSON file, which we will edit when done with the autounattend.xml file
  • Save the file. As well, I would suggest backing it up to a network location for safekeeping. The XML/JSON configs are the most important files in this process; if you corrupt or lose either at any point, re-creating them is probably the most time-consuming part of this entire process, and it’s no fun. I’m saying this because it happened to me while I was writing this blog post

Packer JSON config amendments

  • Using Notepad++ or MS Visual Studio Code, open the Packer .JSON for Win 10 / Server 2019
  • If you want to use a different structure for your source scripts/config files, you will need to amend the following section under floppy_files.
    Otherwise, I would suggest leaving it. My own JSON template was culled from other Packer vSphere templates I found online; they all followed similar formatting

    "floppy_files": [

    "config/Autounattend/Win10/autounattend.xml",
    "scripts/Start-FirstSteps.ps1",
    "scripts/Install-VMTools.ps1",
    "scripts/Start-DomainJoin.ps1",
    "scripts/Enable-WinRM.ps1"
    ]

  • If you want to convert your completed VM/windows install to a template at the end of the build, you can set the following value to “true”

    "convert_to_template": "false",

  • Amend the path to the vSphere datastore where you uploaded the vmwaretools.iso in a previous step

    "iso_paths": [
    "{{user `os_iso_path`}}",
    "[CHANGE ME] CHANGE ME/vmwaretools.iso"
    ]

  • Edit the variables section at the end of the file to match your environment
    TIP: Having the os_iso_path on the same datastore where your VM will be created will speed up the process; same goes for the vmwaretools.iso

    "variables": {
    "os_iso_path": "[CHANGE ME] CHANGE ME/WIN10_2004.ISO",
    "vm-cpu-num": "2",
    "vm-disk-size": "40960",
    "vm-mem-size": "4096",
    "vm-name": "WIN10-Packer",
    "vsphere-cluster": "CHANGE ME",
    "vsphere-datacenter": "CHANGE ME",
    "vsphere-datastore": "CHANGE ME",
    "vsphere-folder": "CHANGE ME",
    "vsphere-network": "VM Network",
    "vsphere-password": "CHANGE ME",
    "vsphere-server": "CHANGE ME",
    "vsphere-user": "administrator@CHANGE ME",
    "winadmin-password": "CHANGE ME"
    }

NOTE: All guest OS types are listed HERE

Start Packer Build

  • With the above XML/JSON files saved, it’s time to start the build
  • On your build machine, open an admin PowerShell prompt and cd to the directory you installed Packer and your config/script files to: cd "C:\Program Files\Packer"
  • Enable logging with the following two commands:
    $env:PACKER_LOG=1
    $env:PACKER_LOG_PATH="packerlog.txt"
  • Start the build by running the following for your chosen OS
    Server:

    Packer.exe build config/json/server_2019.json
    Win10:
    Packer.exe build config/json/win_10.json
    TIP: I found it useful to watch the process on two screens: left screen where you’ve got your Packer process running, right screen where you’re logged into vSphere

  • The packer.exe process should connect to your vSphere instance via an API call, using the credentials in the JSON file you edited in the previous section
  • Within vSphere, you should see the new VM shell created within just a few seconds, based on the settings you chose for vCPU, RAM, HDD size, datastore, and network
  • The new shell will have the VMware Tools ISO attached, as well as the win_10 / server_2019 ISO you uploaded earlier and defined in the .JSON config file
  • A virtual floppy drive will be created on the remote shell, and all the scripts listed in the floppy_files section of the .JSON config (corresponding to the .ps1 scripts in the scripts sub-folder on the machine where you’re running packer.exe) will be copied over to the remote shell
  • The VM will boot up and start the Windows install via the ISO you specified
  • Here is what you should see on the machine where you started the packer.exe process while you wait for Windows to install:

    vsphere-iso: Creating VM…
    vsphere-iso: Customizing hardware…
    vsphere-iso: Mounting ISO images…
    vsphere-iso: Creating floppy disk…
    vsphere-iso: Copying files flatly from floppy_files
    vsphere-iso: Copying file: autounattend.xml
    vsphere-iso: Copying file: scripts/Install-VMTools.ps1
    vsphere-iso: Copying file: scripts/Start-FirstSteps.ps1
    vsphere-iso: Done copying files from floppy_files
    vsphere-iso: Collecting paths from floppy_dirs
    vsphere-iso: Resulting paths from floppy_dirs : []
    vsphere-iso: Done copying paths from floppy_dirs
    vsphere-iso: Uploading created floppy image
    vsphere-iso: Adding generated Floppy…
    vsphere-iso: Set boot order temporary…
    vsphere-iso: Power on VM…
    vsphere-iso: Waiting for IP…

  • The Windows setup is hard-coded to scan all removable drive sources for an autounattend.XML. In this case, we’ve got the floppy drive mounted, so A:\autounattend.xml will be located and parsed
  • As long as there aren’t any typos or formatting errors from editing the autounattend.xml for your environment, the Windows install should be completely automated, all the way to the desktop logon. In my own environment, the build process takes about 9 minutes, which I’m quite happy with. It’s faster than it takes me to take my dog outside, which is a good test
  • Based on the <autologon> settings in the autounattend.xml template I provided on GitHub, the newly created Windows VM will log on and start processing the various <SynchronousCommand> entries listed.
  • Of these commands, there are 4 entries hard-coded to run the .ps1 scripts from the virtual A:\ drive that was created earlier in the process. They are as follows:
  • Start-FirstSteps.ps1 runs through some basic Packer > Windows integration steps that I consolidated from Guillermo Musumeci’s GitHub repo
  • The second script, Install-VMTools.ps1, is critical, and I spent some time this week refining the process by which VMware Tools is installed on a Windows machine. It’s a long-standing issue that the Windows VMware Tools package will install the drivers correctly when called using:
    Start-Process "setup64.exe" -ArgumentList '/s /v "/qb REBOOT=R"' -Wait
  • However!!!! The actual Windows service called “VMware Tools”, which Packer requires to finish the build, will sometimes fail to install on the first attempt. To get around this, I found a great script with logic to detect this scenario and re-install VMware Tools as required. Thanks Tim! My script has a few edits which I made for use with my own Packer builds
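To illustrate the idea, here’s a stripped-down sketch of that detect-and-retry logic (not the referenced script verbatim; the setup64.exe path and attempt count are assumptions for illustration):

```powershell
# Run the silent VMware Tools install, then confirm the 'VMTools' service
# actually registered; if not, try the installer again (simplified sketch)
$setup = 'E:\setup64.exe'   # assumed drive letter of the mounted VMware Tools ISO
foreach ($attempt in 1..3) {
    Start-Process $setup -ArgumentList '/s /v "/qb REBOOT=R"' -Wait
    if (Get-Service -Name 'VMTools' -ErrorAction SilentlyContinue) { break }
    Write-Output "VMware Tools service not found after attempt $attempt, retrying..."
}
```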
  • The third script is Start-DomainJoin.ps1. This script doesn’t actually start the AD domain join process; rather, it copies the script to a new directory called c:\scripts on the newly created Windows install and creates a shortcut on the desktop called “Join Active Directory”

    This can be run later if you choose to actually join your AD environment; you would just need to enter valid AD domain admin creds and a domain name. Once it’s done, it will delete the shortcut from the admin desktop
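Under the hood, a join-on-demand script like this only needs a couple of cmdlets. A minimal sketch (the domain name and shortcut path are placeholders for your environment):

```powershell
# Prompt for domain admin creds, clean up the shortcut, then join and reboot
$cred = Get-Credential -Message 'Enter AD domain admin credentials'
Remove-Item "$env:USERPROFILE\Desktop\Join Active Directory.lnk" -ErrorAction SilentlyContinue
Add-Computer -DomainName 'yourdomain.loc' -Credential $cred -Restart
```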

  • The final script processed by our autounattend.xml is Enable-WinRM.ps1. Like the Install-VMTools.ps1 script, this one is critical for Packer integration. WinRM is used from the remote Packer instance to the newly created shell / Windows install to finalize the build
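The essentials of such a script are only a few lines. A sketch of the typical approach (basic, unencrypted auth is what Packer’s winrm communicator uses by default; fine on an isolated build network, but harden or revert it afterwards):

```powershell
# Enable the WinRM service and allow the basic, unencrypted access
# that Packer's winrm communicator expects by default
Enable-PSRemoting -Force -SkipNetworkProfileCheck
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
# Make sure the firewall allows TCP 5985 from the Packer machine
netsh advfirewall firewall add rule name="WinRM-HTTP" dir=in localport=5985 protocol=TCP action=allow
```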
  • Once the Enable-WinRM.ps1 script has completed, your running packer.exe process should recognize the remote IP of the newly created VMware shell and initiate the final steps: disconnecting the floppy/CD-ROM drives and powering off the VM
  • Here is what you should see towards the end of the process for a successful build

    vsphere-iso: IP address: 192.168.1.172
    vsphere-iso: Using winrm communicator to connect: 192.168.1.172
    vsphere-iso: Waiting for WinRM to become available…
    vsphere-iso: WinRM connected
    vsphere-iso: Connected to WinRM!
    vsphere-iso: Provisioning with windows-shell…
    vsphere-iso: Provisioning with shell script: C:\Users\admin\AppData\Local\Temp\windows-shell-provisioner225687659
    vsphere-iso: C:\Users\administrator\ipconfig
    vsphere-iso: Windows IP Configuration
    vsphere-iso: Ethernet adapter Ethernet:
    vsphere-iso: Connection-specific DNS Suffix . : yourdomain.LOC
    vsphere-iso: Link-local Ipv6 Address . . . . . : fe80::18e2:5f3a:617e:8573%6
    vsphere-iso: Ipv4 Address. . . . . . . . . . . : 192.168.1.172
    vsphere-iso: Subnet Mask . . . . . . . . . . . : 255.255.255.0
    vsphere-iso: Default Gateway . . . . . . . . . : 192.168.1.10
    vsphere-iso: Shutting down VM…
    vsphere-iso: Deleting Floppy drives…
    vsphere-iso: Deleting Floppy image…
    vsphere-iso: Eject CD-ROM drives…
    vsphere-iso: Clear boot order…
    Build ‘vsphere-iso’ finished.

  • That’s it! You can boot the VM up again for the next stage of whatever build steps you want to do: join the domain, run Windows updates, install language packs, or convert it to a template for re-use in creating more clones. Your basic build should now be complete

Troubleshooting

Failures that occur with the Packer JSON file

  • Ensure that you’ve got your floppy_files section formatted correctly based on the folder structure. Don’t forget any “,” on your lines! packer.exe will tell you and give the line reference; make sure you’re doing your edits in a proper editor, Visual Studio Code or Notepad++
  • Ensure you’ve set ALL of the below correctly
  • The below 5 files should be saved to like-named folders where your packer.exe is extracted to. If they are missing, Packer will bail out

    "floppy_files": [
    "config/Autounattend/Win10/autounattend.xml",
    "scripts/Start-FirstSteps.ps1",
    "scripts/Install-VMTools.ps1",
    "scripts/Start-DomainJoin.ps1",
    "scripts/Enable-WinRM.ps1"
    ]
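Before kicking off a build, you can also have Packer check the template itself; packer validate catches missing commas, bad quotes, and unknown keys without ever touching vSphere:

```powershell
cd 'C:\Program Files\Packer'
packer.exe validate config/json/win_10.json
# A malformed file produces an error with the offending line reference
```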

  • The below values are case-sensitive; ensure you match them exactly as they are set within vSphere. I would suggest using TWO screens for this part

    "variables": {
    "os_iso_path": "[CHANGE ME] CHANGE ME/WIN10_2004.ISO",
    "vm-cpu-num": "2",
    "vm-disk-size": "40960",
    "vm-mem-size": "4096",
    "vm-name": "WIN10-Packer",
    "vsphere-cluster": "CHANGE ME",
    "vsphere-datacenter": "CHANGE ME",
    "vsphere-datastore": "CHANGE ME",
    "vsphere-folder": "CHANGE ME",
    "vsphere-network": "VM Network",
    "vsphere-password": "CHANGE ME",
    "vsphere-server": "CHANGE ME",
    "vsphere-user": "administrator@CHANGE ME",
    "winadmin-password": "CHANGE ME"
    }
    }

Failures that occur with the Windows Autounattend.xml file

  • I’ve indicated the key edits you need to do within the related win 10 / server 2019 XML files
  • The main one that comes up is the password; it needs to be set in 3 places, shown below
  • The password provided in the script needs to be amended for your own environment. I would suggest using a password manager, or LAPS, but don’t leave the default!
  • Once you’ve chosen the updated password, amend it in the corresponding packer .JSON file as well

    <AutoLogon>
    <Password>
    <Value>20SuperSecure3d20</Value>
    <PlainText>true</PlainText>
    </Password>
    <LogonCount>2</LogonCount>
    <Username>Administrator</Username>
    <Enabled>true</Enabled>
    </AutoLogon>

    After the </OOBE> section

    <UserAccounts>
    <AdministratorPassword>
    <Value>20SuperSecure3d20</Value>
    <PlainText>true</PlainText>
    </AdministratorPassword>

    <LocalAccounts>
    <LocalAccount wcm:action="add">
    <Password>
    <Value>ChangeME</Value>
    <PlainText>true</PlainText>
    </Password>
    <Group>administrators</Group>
    <DisplayName>YourOtherAdminAccountHere</DisplayName>
    <Name>YourOtherAdminAccountHere</Name>
    <Description>Custom local admin</Description>
    </LocalAccount>
    </LocalAccounts>
    </UserAccounts>

Failures that occur after the Windows build has completed, but before Packer has finished its final steps

  • Here’s one I ran into this week. If you see the remote packer.exe process stuck in the below state, “waiting for WinRM to become available”:

vsphere-iso: Waiting for IP…
vsphere-iso: IP address: 192.168.1.180
vsphere-iso: Using winrm communicator to connect: 192.168.1.180
vsphere-iso: Waiting for WinRM to become available…

  • This might be due to amending the vsphere-network from its default of “VM Network” to a VLAN that filters out various TCP ports
  • In the case I observed, once the VM was moved back to the normal “VM Network”, the issue was resolved
  • So, you’ve got two choices: leave the build on the “VM Network” default shown below, or ensure that the WinRM port (TCP 5985) is open

    "variables": {
    "os_iso_path": "[CHANGE ME] CHANGE ME/WIN10_2004.ISO",
    "vm-cpu-num": "2",
    "vm-disk-size": "40960",
    "vm-mem-size": "4096",
    "vm-name": "WIN10-Packer",
    "vsphere-cluster": "CHANGE ME",
    "vsphere-datacenter": "CHANGE ME",
    "vsphere-datastore": "CHANGE ME",
    "vsphere-folder": "CHANGE ME",
    "vsphere-network": "VM Network",
    "vsphere-password": "CHANGE ME",
    "vsphere-server": "CHANGE ME",
    "vsphere-user": "administrator@CHANGE ME",
    "winadmin-password": "CHANGE ME"
    }
    }
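A quick way to confirm whether the network is the culprit: once the VM has an IP, test the WinRM port from the machine running packer.exe (Test-NetConnection ships with modern Windows; the IP below is from the example above):

```powershell
# TcpTestSucceeded : True means TCP 5985 is reachable and WinRM should connect
Test-NetConnection -ComputerName 192.168.1.180 -Port 5985
```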

Closing

Please let me know in the comments or on Twitter if you found this blog post useful.

This process was pulled together during the week of July 20th, 2020. I read over a bunch of blog posts and consulted the excellent forums on the packer.io site. To get it all working, much of the refined process came down to trial and error. I believe practical knowledge is key to success in our industry.

I’m really impressed with Packer thus far. The only “cost” was the time spent learning it, which will easily pay out as I continue to use it for build automation. Here, I’ve only covered using it with vSphere, but the Packer folks support integration with dozens of platforms; the sky is the limit.

Have a nice day!

Maintaining a home lab for on-going career success

Hi! I’m Owen. I work as a virtualization expert for an IT solutions provider in Quebec, Canada.
I’ve been working professionally in the IT industry since 2000, and since 2005 have seen a huge benefit to maintaining a home lab.
The thing about working in the IT industry: once you’ve completed the first part of your education via a 1, 2, or 4 year college or uni course, the learning doesn’t stop (or shouldn’t!). However, you probably won’t be going back to a traditional classroom setup for an extended period of time. It’s on you to pick up distance-ed courses, follow online tutorials, read blogs, and implement as best you can via virtual labs.
For me, though, the best way to learn something new is to implement it in my home lab and create a dependency on it, so when it stops working, I’m motivated to fix it, and by fixing it, I learn new things.
A recent example: I’d been interested in formally learning VMware for quite some time. In 2018, I finally took the plunge and enrolled in a distance education course on the VMware VCP from Stanly College. I completed the course over about 2 months using my nights and weekends.

I’ve had some form of home lab for years, but in 2019 I decided to get serious and purchase “like” items to create a homogeneous environment for use as ESXi hosts:

3x HP Elitedesk 800 G3 SFF w Intel Core i5 6500 CPUs
And to each, I added the following:

  • 2x 16 GB RAM for 32 GB ram total
  • Samsung SSD 860 EVO 1TB SSD (vSAN capacity)
  • Samsung 970 EVO 1 TB NVMe SSD (vSAN caching)
  • Intel I350T4V2BLK quad-port NIC
  • Intel X520 dual-port 10 GB NIC
Connecting the 3 computers is a Mikrotik CRS309-1G-8S+IN 8-port 10 GB switch
With the hardware purchases/installs done, I installed ESXi 6.7 U3 on each of them and added them to my existing vCenter setup
Next, it was time to set up vSAN. I will tell you, I didn’t get this right the first time! But this is how we learn. I had originally tried a 2-node setup with an external witness appliance, but found ESXi host maintenance unsuitable, so I switched to a 3-node N+1 setup soon after.
I finished my distance education course last July, but I’ve kept my vSAN setup with no major changes, and it works great.
It should be noted that a full 3-node vSAN cluster is not required for a home lab. In fact, you could do it for free via Hyper-V on your physical Win 10 laptop/desktop, as long as you’ve got enough RAM/processing power. Otherwise, VMware Workstation is another alternative.
Now! Why is the above my intro for this blog post? The investment of time/money has paid off: as of April 2020, I’ve started a new project related to VMware vSAN for a new client with my current employer.
Here are some other examples going back about 15 years. By luck or foresight, I’ve been able to identify something I didn’t know, implement it in my home lab, put some services on it that I wanted to keep using, and end up learning something new that I’ve been able to use in a production environment for a client/employer:
  1. 2005 – Windows active directory
    I was enrolled in a college course that taught us MS Active Directory, but the content was delivered at warp speed, as it was mixed in with other courses. I was not retaining what was being taught, so I set up my own AD at home. The domain has gone through a few upgrades, but remains to this day! Knowing how to create a new AD setup from scratch is invaluable, and you never know when it will be required
  2. 2012 – Hyper-V
    For years, I was running Plex Media Server on a physical host, but one day my partner at the time wanted to watch “I Love Lucy”, and Plex stopped working due to a recent config change. I decided to install / learn Hyper-V to virtualize my Plex servers to enable easier snapshot/recovery, as well as set up TWO Plex servers, so one could take over when the other was down for maintenance. I’ve still got this setup today, though the Plex servers have been moved to VMware instead of Hyper-V
  3. 2015 – Microsoft RDS
    I setup Microsoft RDS for remote app usage to enable me to access published apps that I didn’t want to install locally on my various PCs. As part of setting this up, I learned about VHD based profile management, which served me well in 2019 when I needed to do an implementation with FSLogix!
  4. 2015 – Powershell
    I was previously familiar with regular Windows .bat / .vbs scripting, but not PowerShell. Towards the end of 2015, I learned PowerShell to automate disk clean-up on my file server. I’m still using these scripts today, and I used what I learned at home to author hundreds of scripts for various contracts/projects over the past 5 years
  5. 2016 – Citrix XenDesktop 7.12 + StoreFront 3.8
    I was working for a client who was on the 7.6 LTSR track, but taking a course that was focused on features in the “current release” track. As such, I installed 7.12 at home, set up StoreFront, and started to learn the differences between 7.6 and 7.12
  6. 2017 – Citrix Cloud
    After reading a bunch about Citrix Cloud on the social networks, I requested a trial to experiment with a hybrid setup. I moved my Citrix DDC/SQL instances to a new Citrix Cloud instance and re-registered my existing VDAs to the cloud. I was impressed; the setup was short and simple. The trial ended, but learning the basics of Citrix Cloud helped me a lot in 2019, when I went into a client that was in the early POC stages for Citrix Cloud.
  7. 2017 – VMware vCenter
    A lot of my career has been spent working for big financial companies, where they’ve got large VMware teams who had implemented vCenter before I started working there. I needed to do it for myself, so I signed up for the VMUG Advantage program and set up my own vCenter via the VCSA
  8. 2018 – Citrix ADC
    This is a common scenario: you’re working somewhere where all the initial build-outs are done. What do you do? Wait for one to break and require a re-build? No thanks. I downloaded a trial of Citrix ADC VPX and went through a basic config to start using ADC at home
  9. 2018 – MDT
    Each place I’d worked previously, someone else was in charge of the creation / maintenance of the MDT task sequences, and they never seemed to work “quite right”. Can I do it better? Yes I can. I took about a week to learn Windows Deployment Services and MDT and still use it for new VM builds. Great for automating builds, but I wouldn’t recommend it for app deployment; see the PDQ Deploy point below!
  10. 2018 – Microsoft DFS
    I set this up to cover fail-over events between my primary/secondary file servers. DFS can be great when set up correctly, but it’s often not. It’s good to know the ins/outs of this service, as it’s super common in the enterprise. Knowing it helped me a lot on a contract I was working on from July 2019 to April 2020
  11. 2018 – Microsoft Windows fail-over clustering
    I set this up as part of increasing my knowledge of SQL / SQL clustering. This knowledge came in super handy for SQL database migration work I was doing in 2018, soon after I finished the work on my home lab
  12. 2019 – VMware vSAN
    As I’m reasonably active on Twitter and receive a lot of newsletters, I’d been reading about VMware vSAN for years. In 2019, I decided to take the plunge and set it up at home; the details are in the first part of this blog post 🙂
  13. 2020 – PDQ Deploy
    My most recent implementation changes the way I deploy software to my various virtual / physical assets at home. Previously, I was using Chocolatey. It works OK for home use, but doesn’t really have that “enterprise-ready” feel to it. PDQ Deploy offers a full GUI, so it’s an excellent choice for those environments where the local staff doesn’t have the PowerShell knowledge to maintain Chocolatey. I’ve not yet been able to use what I learned of PDQ Deploy with a client, but hope to soon! See my blog post on this topic, here
I hope you found this blog post useful. The various implementations on my home lab have served me very well over the years. I don’t have a crystal ball to know what’s coming next, but I do enjoy learning new things, and using my home lab means I can safely prepare for what might be next 🙂
Thanks for reading!

Home lab diagram summary for July 2020

Greetings and welcome to the summer!

Someone recently asked me why I had 5 VMware ESXi hosts at home. I realized some parts of my home lab aren’t documented; instead, their roles are trapped inside my head.

So, I put together a simple diagram of my current home lab
I used the free draw.io to create it. It’s a handy online tool that has a Chrome “desktop” app, which I found performed better than the online version


Stress + coffee + covid 19 = OMGERD


This post is not about giving people advice on what to eat/drink. If you’re reading this, you’re online, and I’m sure you’re bombarded with health advice every day, especially under COVID-19, as there are more people at home, idle, searching for and touting answers

For me, health advice is confusing and seemingly endless. I will share my bit; you can read and take what you want from it 🙂 If you have another means by which you’ve resolved your own digestion issues, post it in the comments.

I’ve had digestion / acid reflux / bloating issues since I was 22. As I’m now 40, I’ve tried hundreds of natural / holistic / exercise / pharmaceutical cures to resolve my issues. I keep track in my beloved OneNote

With the added stress during the COVID-19 lockdown, I’ve found myself having more digestive issues. A few weeks back, following a phone consultation with my doc here in Montreal, I got a diagnosis for a condition called GERD, which has led me to adopt a low-acid diet. I read through the following book, which gave me a set of rules to follow for 30 days, then another, more relaxed set of rules for the 30 days that followed, and for life!

Prior to the low-acid diet, I was ingesting about 300 mg of caffeine by way of 2 cups of black coffee and at least one energy drink. 400 mg is the maximum recommended daily dose for a man of my age (41 next month). Being that close to the upper limit was affecting my sleep, and chasing the ups and downs of a caffeine high just doesn’t seem ideal for me anymore

I quit on Tuesday, May 26, 2020. The next day, by about 11 am, I started to feel a light headache coming on. I knew what was next, as I usually take a week or so off coffee each year: a massive headache! I took some painkillers, and felt better by the early evening.

Managing the same tasks, like reading technical documents, working with clients, reading emails, checking Slack, etc., has become easier without coffee, as I’m not riding the ups and downs of the caffeine high. Whatever energy I’ve got when I wake up is maintained throughout the day. I’m not sleepy, and my mind is not racing; I’m just LEVEL.

Most importantly, since removing coffee and following the low-acid diet for all meals, my digestion has improved tremendously

If you’re struggling with acid reflux or bloating symptoms, consider the above book or other resources on GERD and a low-acid diet. Doing so might change your life. I know it changed mine for the better 🙂

I feel I’ve un-done my first statement about not giving advice with my last. Oh well, writing is hard :p

have a good day!

Owen

Powershell health check script



I’ve been writing PowerShell scripts since 2015. One of the first scripts I wrote was a health check script for my home computers. When I first wrote it a few years back, its main purpose was to identify those computers that hadn’t been rebooted in a long while

As time has gone on, I’ve added a lot of extra code to the health check script to cover new needs, and I’ve removed some older functions. Its main function is a top-level review of my stuff that I can read on my home PC, or my phone, via an HTML-formatted email.


This blog post will describe the script and its functions

The script is on GitHub, and you can easily re-purpose it by editing the XML to suit your environment. I run it as a Windows scheduled task on a computer that has VMware PowerCLI installed.

The script dynamically collects computer asset info from Active Directory. I’ve got filters to stop it running scans against certain computers; these computers are filtered out in the related section of settings.xml

Once the list of AD computers is created, the VMware PowerCLI module is loaded to allow various metrics to be pulled from my home lab vCenter
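The collection logic described above could be sketched roughly as follows in PowerShell. To be clear: this is a simplified illustration, not the actual script from GitHub. The module names are the standard AD/PowerCLI ones, but the exclusion values and vCenter address are made up:

```powershell
# Sketch only: the real script reads its exclusion list from settings.xml.
# Computer names and the vCenter address below are illustrative.
Import-Module ActiveDirectory
Import-Module VMware.PowerCLI

# Collect computer assets from AD, dropping any excluded names
$exclusions = @('TEMPLATE01', 'TESTVM01')   # would come from settings.xml
$computers  = Get-ADComputer -Filter * |
    Where-Object { $exclusions -notcontains $_.Name } |
    Select-Object -ExpandProperty Name

# Connect to vCenter so the later sections can pull VM/host/datastore metrics
Connect-VIServer -Server 'vcenter.lab.local' -Credential (Get-Credential)
```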

I love talking about my HomeLab, if you’re interested, you can read about it on my first ever MyCugc.org post here

Let’s look at the output of the script and go over each section, and why I have it:

Part 1 is a mass scan against all domain-joined assets. I’ve added/removed columns over the years; the ones I have right now I’ve found the most useful.

Ping, uptime, and the most recent Windows OS patch/hotfix are the most important items for me

Part 2 relates to my file server. I use DFS to manage failovers between my primary/secondary Windows file servers, and have had a few times where I’ve disconnected the USB cable to the external HDD that’s on the secondary file server, so I want to know if the related backup task completed successfully


Part 3 is broken down into 4 sub-sections: A/B/C/D. To be honest, I could probably combine this with Part 1, but it would mean a lot more scrolling to the right. The sub-sections have been added as time has gone on, and I’ve found it easier to maintain, code- and HTML-output-wise, if they’re in different sections

Part 3 A is an overview of VM shell info from vCenter (names/hosts are withheld)

Part 3 B is purely VMware datastore related. Good to know when space is getting low, yo!


Part 3 C is DRS related. I enabled fully-automated DRS on my new vSAN cluster last year, and wanted to know what was happening while I was sleeping. I could probably turn it off, but I do like knowing that it’s working 🙂


Part 3 D is an overview of all vCenter-connected hosts. I did have a situation last year where I ended up with a slightly different ESXi build (down to the last few digits) on a few of my hosts. The report caught the mistake, and I level-set the builds the next day


I’ve not described the specific code; however, if you’re interested in learning PowerShell, drop me a comment and I’ll point you in the right direction!

Thanks for reading and have a good day/night!

Owen

Final lab setup mid-2019 to now

Last year, I must have got a bit excited with the new gear I was buying, and I never ended up posting my final config. Here it is!

3x HP EliteDesk 800 G3’s (product page)

Each of the HP Elitedesk 800 G3s contains the following hardware:

  • Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
  • 4x 16 GB RAM for 64 GB total
  • Samsung SSD 860 EVO 1 TB SSD (vSAN capacity)
  • Samsung 970 EVO 1 TB NVMe SSD (vSAN caching)
  • Intel quad-port NIC I350T4V2BLK
  • Intel X520-DA2 (82599) dual-port 10 GbE NIC
Connecting the 3 computers is a Mikrotik CRS309-1G-8S+IN 8-port 10 GbE switch

Originally, I’d installed VMware ESXi 6.7 U3 on each of them, but upgraded all 3 to version 7 in the spring of 2020

If I were to do it all over again, I’d go AMD Zen 3, but buying the used Intel-based kit from HP at the time made sense for the following reasons:

  • Each unit supports Intel vPro for lights-out management (just ensure you put in secure settings!)
  • 4x DDR4 DIMMs for 64 GB max
  • 4x PCI Express slots, including 1x PCI Express x16
  • Great BIOS
  • Quiet, very compact, and easy to work with for upgrades
  • As these are common business desktops that are now off-lease at many businesses, they can be purchased for as little as $300 USD from eBay

You down with PDQ? (ya you know me)



Anyone working in the EUC field has probably pulled out some or ALL of their hair when trying to deploy software via Microsoft’s dreaded SCCM tool.
What if I told you there was a better way? That’s assuming, of course, you’ve not heard of PDQ Deploy


I’m late to the PDQ game! I had read about it via Twitter/Slack for months, but only got a chance to try it out on my beloved home lab last weekend
There are two versions:

  1. The free version can run forever, but has some limitations. For me, having to enable the admin share / admin account on my Win 10 assets was a show-stopper.
  2. The (star ship) Enterprise version. I immediately activated the 14-day trial on my existing PDQ Deploy install, and was good to go after amending the “TARGET SERVICE” location option as follows:


To start, I downloaded a few common packages: Google Chrome v81.x, Citrix Workspace App 2003 and a few other common apps



The app has the intelligence to recognize when new versions are available; they are indicated under the APPROVALS section. All you need to do is approve the updates and your library will update itself on your PDQ Deploy server; you’re then able to deploy / re-deploy to your targets. For instance, as I was writing this blog and collecting screenshots, I got approval requests for new versions of Google Chrome and Slack. Done/done!

Deployment is where PDQ Deploy really shines, especially against Citrix MCS / PVS master images.

You select a package, right click and choose DEPLOY ONCE, put in your target asset and let ‘er rip!



Assuming you’ve followed the steps to open the required firewall rules on the target assets, you should have your software deployed within minutes, or even seconds if it’s a small package and you’ve got a fast connection from source > target

In summary

  • PDQ Deploy is an incredible tool, and I would absolutely recommend it to anyone looking for a painless means to deploy new or updated software.
  • Enterprise licenses are just $500 USD, which is an absolute bargain when you look @ the time savings vs. SCCM troubleshooting


Because an SCCM deployment rarely works the first time you try it , right? 

CVAD 1912 UPL removal issues and fix

I noted some interesting (annoying?) unexpected behavior with UPL-enabled CVAD 1912 MCS master image machines over the weekend when I should have been playing DOOM ETERNAL

At my current client, we uninstalled CVAD 1912 to remove the UPL components. We did this for two reasons:
1 – A default install of CVAD 1912 with UPL will create a dependency for App-V on the UPL filter driver.
The environment I’m working in has a mix of Nutanix blades (G4/G5/G6). Those VMs that get booted from the older G4 blades take longer to boot, and hit a race condition where the UPL service fails to start, which in turn causes the App-V filter driver to fail to start
2 – We’re unable to get away from the legacy path imposed by the Citrix UPL filter driver policy, which sets that ridiculous A_OK sub-folder. We want to use Nutanix Files with multi-path support, so all 3 servers in the cluster distribute the load; this isn’t possible with the hard-coded A_OK sub-folder nonsense
Anyway, over the weekend I started the process to remove CVAD 1912 + UPL on our various MCS master images
I completed the removal in our test environment, and upon rolling out the MCS update to a limited set of test machines, noted the following error on user logon over ICA: “Windows couldn’t connect to the ULayer service. Please consult your system administrator.”

I used Microsoft Sysinternals Autoruns to check for anything left on the MCS master image, and sure enough, the VDA uninstall left UPL components behind. Thanks, QA! :p
To completely remove any trace of UPL from a system where it was previously installed, you would take the following actions:
1 – Run the CVAD 1912 uninstall via Add/Remove Programs + reboot
 
2 – From an elevated command prompt, run sc delete ulayer
 
3 – Use PSEXEC -i -s “cmd”, then open regedit to amend permissions on the key below to include the local admin group, and delete it:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Winlogon\Notifications\Components\Ulayer
 
4 – Delete: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Ulayer
 
5 – Delete c:\Program Files\Unidesk
 
6 – Amend HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit to be as below

I found that it was set like this after the uninstall:

C:\Windows\system32\userinit.exe,C:\Program Files\Unidesk\Layering Services\LayerInfo.exe,

It should be:

!C:\Windows\system32\userinit.exe,

NOTE: Pay attention to that last character; it’s a “,”
7 – Reboot
 
8 – Re-install CVAD 1912
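For anyone repeating this across several images, steps 2 through 7 above could be scripted along these lines from an elevated PowerShell session. This is a sketch based on my manual steps, not a supported Citrix cleanup tool; test it on a throwaway image first, and note the Winlogon\Notifications key usually needs its permissions amended before it can be deleted (hence the PSEXEC step above):

```powershell
# Sketch of steps 2-7, run elevated AFTER the CVAD uninstall + reboot.
# Not a supported Citrix tool; try it on a throwaway image first.
sc.exe delete ulayer

# Permissions on the Notifications key may need fixing first (see step 3)
Remove-Item -Recurse -Force `
    'HKLM:\SYSTEM\CurrentControlSet\Control\Winlogon\Notifications\Components\Ulayer'
Remove-Item -Recurse -Force 'HKLM:\SYSTEM\CurrentControlSet\Services\Ulayer'
Remove-Item -Recurse -Force 'C:\Program Files\Unidesk'

# Reset Userinit, minding that trailing comma
Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' `
    -Name Userinit -Value '!C:\Windows\system32\userinit.exe,'

Restart-Computer
```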
 
I’m using the above in our test environment, and it works fine; for the prod environment, I’m doing a complete Windows rebuild just to be 100% sure. The above is more anecdotal and interesting from a troubleshooting perspective.
 
I escalated a case with Citrix asking them to fix their uninstaller for CVAD, but I doubt a fix will make it into GA for the current release slated for 2003 (March 2020)
 
I’m not completely ruling out CVAD 1912 UPL at this point! When it works, it’s great, and solves the long-standing issue of delivering apps to non-persistent Win 10 machines
 
Owen
 
 

IGEL Disrupt 2020 in Nashville, Tennessee and my next tech conference


It seems my vacation schedule is mostly driven by tech conferences these days. 

  • E2E in NY in 2018
  • Synergy in Atlanta, Georgia in 2019
  • IGEL Disrupt in Nashville, TN in 2020


All of which is great, as I’d not been to any of the above places before, and each trip was exciting for personal and professional reasons

Nashville, TN was a bit more special for two reasons: Country music and Bourbon whiskey!

I’ve been a fan of country music all my life, especially the following artists

Dwight Yoakam
Glen Campbell
Shania Twain
Kenny Rogers
Chet Atkins
Dolly Parton
Johnny Cash

So, visiting BROADWAY street on my first night in Nashville was awesome. I got to see an awesome cover band do one of my favorite country songs: “All my ex’s live in Texas”

I drank some great local bourbon called Belle Meade and had a great time enjoying the cover band at “THE STAGE” (link to the venue)

After Sunday night, we spent all of our time in and around the MASSIVE hotel called GAYLORD. Yes, that’s the real name. It’s a really nice hotel, with indoor waterfalls and tons of tropical trees

The conference was small, about 500 attendees. The highlights were as follows:

1 – The Microsoft WVD (Windows Virtual Desktop) master class with Jim Moyle (twitter) and Christiaan Brinkhoff. Microsoft has filled the gap between RDP and ICA! To me, they are neck and neck (at least on paper), and I’m looking forward to doing client implementations in 2020 using WVD, either to replace Citrix CVAD or for new implementations where no Citrix infra exists

2 – The AMD presentation on their new embedded CPUs. Talk about DISRUPT! AMD truly is a disruptive force of nature against Chipzilla (Intel). I’m very much looking forward to high-quality, high-performance, low-cost devices from AMD as an alternative to security-hole-ridden, expensive Intel devices. IGEL product page

3 – The CUGC panel hosted by Patrick Coble. Some good debate around CVAD current release vs LTSR and MCS vs PVS.

4 – The Iron Cowboy motivational speech during the closing note. I’d heard his name, but didn’t know he was from Alberta (I’m from Canada). In 2015, this guy completed 50 Ironmans in 50 days in 50 states. Just incredible.

For 2020, I’ve got two more events I *might* attend: Steve Greenberg’s EUC master’s retreat (link) in Scottsdale, AZ, and possibly Synergy in Orlando. If it came down to one choice, I’m going with the former! I’ve already been to Synergy and Orlando, and I feel I would learn more at the EUC master’s retreat.

Citrix UPL and FSLOGIX – the best of both worlds



I’ve been working with Citrix technology for 7 years, and most of that time has been working in the slow-moving world of financial industry IT

That being said, I left that world last summer to work for a small company here in Quebec called ProContact.ca. I wanted to focus on SMB, where I felt I could make a bigger difference, and I have!

However, I’ve still got some older habits, in that I prefer not to apply new software as soon as it hits the digital shelf. As we all know, Microsoft hasn’t done a great job with QA on Win 10: https://www.thurrott.com/windows/windows-10/187407/microsoft-has-a-software-quality-problem

To me, Citrix is almost as bad (a bit better, maybe?). Some of the bugs I found working on the 7.15 LTSR track were just awful.

In December 2019, Citrix finally updated the LTSR path for CVAD to 1912. I read over the release notes and something caught my eye: the user personalization layer (UPL), ported over from Unidesk. For a customer I was working with in 2019, we evaluated Unidesk, and found it overly complex and slow for what it touted. UPL as a replacement for the long-deprecated PvD sounded like a dream.

So, for the first time in a long time, I installed/configured/tested a new feature on the SAME day it was released. It’s been about a month since I started using a UPL-enabled machine as my primary workspace, and I’m happy to report IT JUST WORKS

For our customer, we had already replaced UPM (mostly awful) with FSLogix (super great).
I set up UPL without any docs from Citrix (they didn’t exist the day CVAD 1912 was released), and I did run into some issues: UPL is a port-over from Unidesk, and installs a filter driver that directly conflicts with the FSLogix filter driver.
I wasn’t keen to go back to UPM (as Citrix suggests in their UPL tech article), and neither was the customer

I ended up resolving the issue after posting to the World of EUC Slack channel, where the ever-helpful James Kindon reminded me to check my ALTITUDE settings! Yep!

https://www.citrix.com/blogs/2020/01/07/citrix-app-layering-and-fslogix-profile-containers/

With the amended ALTITUDE setting in place, I’m happy to report UPL / FSLogix are working perfectly together


A few notes:

  • A lot of folks are calling CVAD 1912 UPL a first release. This is and isn’t true; technically, it’s a port-over of the existing Unidesk “user personalization layer” feature.
  • Instead of requiring those dreaded Unidesk appliances, we just need an SMB share and a Citrix policy that enables UPL and sets a path. That’s it! The rest is managed by the filter driver that runs at boot
  • You don’t have a lot of control over the folder structure created by the policy/Citrix layering service (with FSLogix you’ve got a lot more control). UPL creates sub-folders for each user in your UPL share as such:
  • \\SERVER1\UPLSHARE\USERS\DOMAIN_USERNAME\A_OK
  • Again, this is a port-over from Unidesk. I hope they will change it. What is A_OK? :p
  • Performance is acceptable, but you’re now at the mercy of having 3 VHD(s) mounted from a remote filer. If your shared storage isn’t up to snuff, your user experience will suffer. For the moment, our customer is running FSLogix and UPL containers on the same Nutanix filer, but that might change if we decide to put UPL into mainstream use. We’re looking at it as a replacement for Win 10 STATIC VMs, which are a pain to manage. For most of our users, we will be providing standard MCS non-persistent machines without UPL
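On a related note: if you’re troubleshooting a filter-driver conflict like the FSLogix/UPL one above, the loaded minifilters and their altitudes can be listed with the built-in fltmc tool from an elevated prompt. The driver names in the filter pattern below are what I’d expect for FSLogix and the Citrix ULayer service, so verify against your own build:

```powershell
# List all loaded minifilter drivers with their altitudes (run elevated)
fltmc filters

# Narrow the output to the drivers of interest
# ('frx' should cover the FSLogix drivers; 'ulayer' the Citrix UPL one)
fltmc filters | Select-String -Pattern 'frx|ulayer'
```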

