NOTE: The following advice is NOT for prod workloads, where you may need scratch logs to persist between reboots in case you have to send them to your hardware vendor or to VMware directly
Hopefully you'll find this blog post while searching for a solution, as I did this week. I had to cobble together a few different sources to solve a simple issue: ESXi HOST SCRATCH LOGS!
The back-story
I recently decided to re-enable vSAN on my home lab. I've had the same host hardware since 2019: three HP EliteDesk G4 SFF units. However, between 2019 and today I've re-used some of the internal SSD/NVMe drives for other things. For instance, I'm studying for the Nutanix MCI, so I took a few drives away for a fourth HP EliteDesk G4 host. That means I no longer have spare drives inside my primary 3-node vSAN setup.
When ESXi is installed, it will attempt to place the scratch partition on an available VMFS-formatted volume. In my case, all of my ESXi installs were done to USB drives (which VMware no longer recommends), and because I did have spare local drives connected at the time, the installer set one of those local drives as the scratch partition. This is not ideal, because once the scratch partition is set to a VMFS datastore (remote or local) on your ESXi host, you can't easily unmount or delete that datastore.
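If you want to check where scratch currently points on your own host before changing anything, you can do it from an SSH session. A quick sketch (the option names are the same ones shown under Advanced System Settings in vCenter):

# Show the configured and the currently active scratch locations
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation

# /scratch is a symlink to the active scratch location
ls -l /scratch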
The work-around
1 – SSH to the host (since 2022, MobaXterm has been my go-to for all things SSH related)
2 – cd /
3 – mkdir /tmp/scratch
4 – Switch back to vCenter > Inventory > Host > Advanced System Settings and amend ScratchConfig.ConfiguredScratchLocation to /tmp/scratch (there's also a CLI alternative sketched after this list)

5 – The new scratch location won't take effect until a reboot, but don't reboot just yet!
6 – If/when you reboot, you'll want to avoid the warning "System logs on host are stored on non-persistent storage." To do that, amend another advanced setting: navigate to UserVars.SuppressCoredumpWarning and set it to 1
7 – Finally, find the setting Syslog.global.logHost and set it to 127.0.0.1 (steps 6 and 7 can also be done from the SSH session – see the second sketch after this list)

8 – With the above three settings changed under Advanced System Settings, put the ESXi host into maintenance mode, wait for the VMs to migrate off, and reboot the host for the settings to take effect
9 – Repeat as required for additional ESXi hosts
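If you'd rather do steps 2–4 without leaving the SSH session, the same change can be made with vim-cmd. A minimal sketch, assuming you're happy pointing scratch at the ramdisk-backed /tmp (the reboot in step 8 is still required):

# Create the scratch directory on the ramdisk (steps 2–3)
mkdir -p /tmp/scratch

# Point the configured scratch location at it (equivalent to step 4 in vCenter)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /tmp/scratch

# Confirm the change – it only becomes the CurrentScratchLocation after the reboot
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation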
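Likewise, steps 6 and 7 have esxcli equivalents if you want to script them across several hosts. Again, just a sketch – I made these changes through the vCenter UI, so double-check the commands against your ESXi build:

# Suppress the coredump / non-persistent storage warning (step 6)
esxcli system settings advanced set -o /UserVars/SuppressCoredumpWarning -i 1

# Point syslog at the loopback address (step 7) and reload the syslog daemon
esxcli system syslog config set --loghost=127.0.0.1
esxcli system syslog reload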
Note: If you run through the above process on vSAN-enabled ESXi hosts, Skyline will log a finding under the Cluster sub-checks, related to vSAN daemon liveness and a service called EPD. Per the VMware KB on this check, "EPD is used to check for component leakage when objects are deleted in a vSAN datastore"
As per my note at the beginning of the post, this workaround for a lack of spare drives in your host is NOT for prod workloads – it's just for lab fiends like me 🙂

I hope you found this post useful. Have a nice day
Owen
Reference pages
Move ESXi scratch location (elasticsky.de)
I just want to suppress this alarm; System logs on… – VMware Technology Network VMTN