The state of the Nomad log when PBA is working
Peer Backup Assistant: Provision Nomad PBA Data Store
The PBA process typically starts by provisioning the Nomad data store, which sets the OSDStateStorePath variable so the task sequence knows where to store and retrieve user state data. This step initiates either an election within the subnet or a query through SSD PBA for the site. The task sequence executes:
The snippet below from the NomadBranch log shows the process of NMDS polling and accepting the offer from machine SCCM-PRI.
Peer Backup Assistant: Close Nomad PBA Data Store
This step closes any connections established with peers to store content. The command it executes from the task sequence to complete the NMDS process is:
Peer Backup Assistant: Nomad PBA Data Store High Availability
This step replicates copies of the data in other data stores for high availability. This helps prevent data loss should the primary data store become unavailable or corrupted. The command it executes from the task sequence is:
Peer Backup Assistant: Locate Existing Nomad PBA Data Store
This step locates an existing data store to restore content. The command it executes from the task sequence is:
Peer Backup Assistant: Release Nomad PBA Data Store
Finally, this step releases and deletes any existing data store it uses. The command it executes from the task sequence is:
Once all the data has been successfully copied back and is no longer required, it can be removed from the host. If this step is not used, the data is automatically deleted after a prescribed time.
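Taken together, the PBA steps above follow a fixed ordering: capture provisions a store, closes it, and optionally replicates it for high availability; restore locates a store, closes it, and finally releases it. The following is a minimal illustrative sketch of that ordering, not Nomad code, and the state names are hypothetical:

```python
# Illustrative model of the PBA step ordering described above.
# State names are hypothetical; they do not correspond to Nomad internals.

# Valid next steps after each PBA task sequence step.
TRANSITIONS = {
    "start":     {"provision", "locate"},   # capture provisions a store,
                                            # restore locates an existing one
    "provision": {"close"},                 # store user state, then close
    "locate":    {"close"},                 # restore user state, then close
    "close":     {"high_availability", "release", "end"},
    "high_availability": {"end"},           # optional replication after capture
    "release":   {"end"},                   # delete the store after restore
}

def is_valid_sequence(steps):
    """Return True if the given PBA step order is permitted by the model."""
    state = "start"
    for step in steps:
        if step not in TRANSITIONS.get(state, set()):
            return False
        state = step
    return "end" in TRANSITIONS.get(state, set())
```

For example, `is_valid_sequence(["provision", "close", "high_availability"])` models a capture, and `is_valid_sequence(["locate", "close", "release"])` models a restore; running Release before anything else is rejected.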
Other things to note
If you see this error in the smsts.log:
– it is a generic error message with no further information available. The PBA step Peer Backup Assistant: Locate Existing Nomad PBA Data Store is used to find a Nomad data store hosting the PBA data.
If you see this error in the Nomadbranch log on PBA peer machines:
In the background, this command runs when the step is initiated:
Once the command is initiated, the PBA client broadcasts on the subnet to find a suitable PBA store. PBA peers receive these broadcast messages on their subnet and respond. If the PBA client finds no peers, it is likely the result of one of the following:
- Peers are NOT enabled to host PBA store.
- PBA is enabled but insufficient disk space is allocated.
- Allocated space is NOT sufficient to host data store.
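The peer-side decision can be sketched as follows. This is an illustrative reimplementation of the three checks listed above, not Nomad's actual code, and the free-disk-space parameter is an assumption:

```python
def can_host_pba_store(maximum_megabyte, requested_mb, free_disk_mb):
    """Decide whether a peer should answer a PBA broadcast (illustrative).

    maximum_megabyte: the MaximumMegaByte registry value (0 disables hosting).
    requested_mb:     space the PBA client asked for, in MB.
    free_disk_mb:     free disk space on the peer (hypothetical parameter).
    """
    if maximum_megabyte == 0:
        return False  # peer is not enabled to host a PBA store
    if free_disk_mb < requested_mb:
        return False  # insufficient disk space on the peer
    if requested_mb > maximum_megabyte:
        return False  # allocated space too small to host the data store
    return True
```

A peer with `MaximumMegaByte=0` stays silent regardless of free space, which matches the first failure cause above.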
If a task sequence fails on a PBA step, check the following:
- Is PBA enabled on the peer machines? To verify this, ensure that the registry value MaximumMegaByte under Software\1E\NomadBranch\NMDS\ is not zero. If it is 0, the machine is not allowed to host any data and will reject all PBA requests.
- If PBA is enabled, ensure that the maximum allocation each client can request is large enough to host the PBA data. To configure it, modify the registry value MaxAllocRequest under Software\1E\NomadBranch\NMDS\.
For example, if MaxAllocRequest is set to 200MB but a client requests 250MB to store its user state data, the request will fail. Typically, MaxAllocRequest should be less than or equal to MaximumMegaByte. The difference between the two is that MaximumMegaByte is the maximum total space the host can use for all PBA clients, whereas MaxAllocRequest is the maximum space that can be allocated to any single client.
For example, if MaximumMegaByte=5120 and MaxAllocRequest=2560, the PBA host can store the user state data of two machines simultaneously, with each client storing up to 2560MB within the 5120MB total allocation.
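The arithmetic behind this worked example can be sketched as a small allocation check. This is illustrative only, assumes no other space is in use on the host, and is not Nomad's actual allocation code:

```python
def accept_request(maximum_megabyte, max_alloc_request, used_mb, requested_mb):
    """Apply the MaximumMegaByte / MaxAllocRequest rules to one request."""
    if maximum_megabyte == 0:
        return False  # PBA hosting disabled on this machine
    if requested_mb > max_alloc_request:
        return False  # one client asked for more than the per-client cap
    if used_mb + requested_mb > maximum_megabyte:
        return False  # total allocation across all clients would be exceeded
    return True

# With MaximumMegaByte=5120 and MaxAllocRequest=2560, two clients
# requesting 2560MB each fit, but a third does not:
used = 0
for _ in range(2):
    assert accept_request(5120, 2560, used, 2560)
    used += 2560
assert not accept_request(5120, 2560, used, 2560)

# A 250MB request fails when MaxAllocRequest is only 200MB:
assert not accept_request(5120, 200, 0, 250)
```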