Bugzilla – Bug 7012
Better round-robin scheduling of multiple VMs per node
Last modified: 2010-06-08 19:55:11
When running multiple VMs per node, the workspace service scheduler doesn't
distribute VMs evenly across nodes.
When a VM is being matched to a slot, the behavior is:
1. If there is an empty node that matches the requested memory and network, use it.
2. Otherwise, pick the first node from the pool that matches the requested
memory and network associations.
Once there is a VM running on each node, this has the effect of piling VMs onto
a single node until it is "full," after which the scheduler moves on to the next
node.
It would be better for networking and I/O in general if VMs were distributed in
a more round-robin fashion, or at least if this behavior were configurable.
See this code:
Committed to master for 2.5
The default resource scheduler now operates with the notion of 'percentage
available' for each node in the VMM pool. This percentage is computed from the
memory already allocated on the node and the memory still available. This
allows the greedy and round-robin strategies to work better with pools whose
VMMs have varying amounts of RAM.
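A minimal sketch of the 'percentage available' idea (function and field names here are illustrative, not the actual workspace service code):

```python
def percent_available(total_mb, allocated_mb):
    """Percentage of a VMM node's memory still free."""
    return 100.0 * (total_mb - allocated_mb) / total_mb

# Nodes with different RAM sizes compare fairly on percentage:
# a 4 GB node with 1 GB allocated and an 8 GB node with 2 GB
# allocated are both 75% available.
print(percent_available(4096, 1024))  # 75.0
print(percent_available(8192, 2048))  # 75.0
```

Using a percentage rather than absolute free memory is what lets a pool mix, say, 4 GB and 8 GB VMMs without the larger nodes always winning (or losing) the comparison.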
The node selection can happen in one of two ways:
1. A "round-robin" configuration in resource-locator-ACTIVE.xml (this is the
default mode). This looks for matching nodes (enough space to run, appropriate
network support, etc.) with the highest percentage of free space. If there are
many equally free nodes, it picks randomly from among them. As should be clear,
this favors entirely empty nodes first.
2. A "greedy" configuration in resource-locator-ACTIVE.xml. This looks for
matching nodes (enough space to run, appropriate network support, etc.) with
the lowest percentage of free space. If there are many equally unfree nodes, it
picks randomly from among them.
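The two selection modes above can be sketched as follows. This is a hypothetical illustration under assumed data structures (a node is a dict with 'total_mb', 'allocated_mb', and 'networks'), not the actual scheduler implementation:

```python
import random

def select_node(nodes, request_mb, network, greedy=False):
    """Pick a VMM node for a new VM.

    Round-robin (default): prefer the node with the highest percentage
    of free memory. Greedy: prefer the lowest percentage of free memory.
    Ties are broken randomly. All names/fields are illustrative only.
    """
    def pct_free(node):
        return 100.0 * (node['total_mb'] - node['allocated_mb']) / node['total_mb']

    # Filter to nodes that can actually host the VM: enough free
    # memory and support for the requested network.
    candidates = [n for n in nodes
                  if n['total_mb'] - n['allocated_mb'] >= request_mb
                  and network in n['networks']]
    if not candidates:
        return None

    key = min if greedy else max
    best_pct = pct_free(key(candidates, key=pct_free))
    tied = [n for n in candidates if pct_free(n) == best_pct]
    return random.choice(tied)

pool = [
    {'name': 'vmm1', 'total_mb': 4096, 'allocated_mb': 0,    'networks': {'public'}},
    {'name': 'vmm2', 'total_mb': 8192, 'allocated_mb': 2048, 'networks': {'public'}},
]
# Round-robin favors the entirely empty vmm1 (100% free vs. 75%);
# greedy piles onto the already-used vmm2 instead.
print(select_node(pool, 1024, 'public')['name'])               # vmm1
print(select_node(pool, 1024, 'public', greedy=True)['name'])  # vmm2
```

Note how sorting by percentage rather than absolute free megabytes keeps the comparison fair here even though the two nodes have different total RAM.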
Confirmed this still works after Bug 7015 scheduler refactoring.