gdbsupport: use dynamic partitioning in gdb::parallel_for_each
gdb::parallel_for_each uses static partitioning of the workload, meaning
that each worker thread receives a similar number of work items. Change
it to use dynamic partitioning, where worker threads pull batches of
work items from a shared work queue whenever they run out of work.
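To illustrate the idea, here is a rough, self-contained sketch in plain
C++ (not the actual gdbsupport implementation; the thread count and
batch size are arbitrary): worker threads repeatedly claim the next
batch of items from a shared atomic cursor until everything has been
processed.

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main ()
    {
      std::vector<int> items (100000, 1);
      std::atomic<size_t> next {0};
      constexpr size_t batch_size = 10;   /* Arbitrary, for illustration.  */
      std::atomic<long> result {0};

      /* Each worker pulls batches until the shared cursor passes the end.  */
      auto worker = [&] ()
        {
          for (;;)
            {
              size_t begin = next.fetch_add (batch_size);
              if (begin >= items.size ())
                break;
              size_t end = std::min (begin + batch_size, items.size ());
              long local = 0;
              for (size_t i = begin; i < end; i++)
                local += items[i];        /* "Process" the work item.  */
              result += local;
            }
        };

      std::vector<std::thread> threads;
      for (int i = 0; i < 4; i++)
        threads.emplace_back (worker);
      for (auto &t : threads)
        t.join ();

      std::printf ("result = %ld\n", result.load ());
    }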
Note that gdb::parallel_for_each is currently only used for processing
minimal symbols in GDB. I am looking at improving the startup
performance of GDB, and minimal symbol processing is one step of that.
With static partitioning, there is a risk of workload imbalance if some
threads receive "easier" work than others: some threads sit idle while
others are still working through their share. This is not desirable,
because gdb::parallel_for_each takes as long as its slowest worker
thread.
When loading a file with a lot of minimal symbols (~600k) in GDB, with
"maint set per-command time on", I observe some imbalance:
Time for "minsyms install worker": wall 0.732, user 0.550, sys 0.041, user+sys 0.591, 80.7 % CPU
Time for "minsyms install worker": wall 0.881, user 0.722, sys 0.071, user+sys 0.793, 90.0 % CPU
Time for "minsyms install worker": wall 2.107, user 1.804, sys 0.147, user+sys 1.951, 92.6 % CPU
Time for "minsyms install worker": wall 2.351, user 2.003, sys 0.151, user+sys 2.154, 91.6 % CPU
Time for "minsyms install worker": wall 2.611, user 2.322, sys 0.235, user+sys 2.557, 97.9 % CPU
Time for "minsyms install worker": wall 3.074, user 2.729, sys 0.203, user+sys 2.932, 95.4 % CPU
Time for "minsyms install worker": wall 3.486, user 3.074, sys 0.260, user+sys 3.334, 95.6 % CPU
Time for "minsyms install worker": wall 3.927, user 3.475, sys 0.336, user+sys 3.811, 97.0 % CPU
The fastest thread took 0.732 seconds to complete its work (and then
sat idle), while the slowest took 3.927 seconds. As a result, the whole
parallel_for_each took a bit less than 4 seconds.
Even though the number of minimal symbols assigned to each worker is the
same, I suppose that some symbols (e.g. those that need demangling) take
longer to process, which would explain the imbalance.
With this patch, things are much more balanced:
Time for "minsym install worker": wall 2.807, user 2.222, sys 0.144, user+sys 2.366, 84.3 % CPU
Time for "minsym install worker": wall 2.808, user 2.073, sys 0.131, user+sys 2.204, 78.5 % CPU
Time for "minsym install worker": wall 2.804, user 1.994, sys 0.151, user+sys 2.145, 76.5 % CPU
Time for "minsym install worker": wall 2.808, user 1.977, sys 0.135, user+sys 2.112, 75.2 % CPU
Time for "minsym install worker": wall 2.808, user 2.061, sys 0.142, user+sys 2.203, 78.5 % CPU
Time for "minsym install worker": wall 2.809, user 2.012, sys 0.146, user+sys 2.158, 76.8 % CPU
Time for "minsym install worker": wall 2.809, user 2.178, sys 0.137, user+sys 2.315, 82.4 % CPU
Time for "minsym install worker": wall 2.820, user 2.141, sys 0.170, user+sys 2.311, 82.0 % CPU
In this version, the parallel_for_each took about 2.8 seconds,
representing a reduction of ~1.2 seconds for this step. Not
life-changing, but it's still good I think.
Note that this patch helps when loading big programs. My go-to test
program for this is telegram-desktop, which I built from source. For
small programs (including gdb itself), it makes no perceptible
difference.
Now the technical bits:
- One impact that this change has on minimal symbol processing
specifically is that the calls to compute_and_set_names (a critical
region guarded by a mutex) are no longer all done at the end of each
worker thread's task.
Before this patch, each thread would compute the names and hash values
for all the minimal symbols it had been assigned, and then call
compute_and_set_names for all of them while holding the mutex (thus
preventing other threads from doing that same step at the same time).
With the shared work queue approach, each thread grabs a batch of
minimal symbols, computes the names and hash values for them, and then
calls compute_and_set_names (with the mutex held) for this batch only.
It repeats that until the work queue is empty.
There are therefore more, smaller, spread-out compute_and_set_names
critical sections, instead of just one per worker thread at the end
(the sketch at the end of this point shows the shape of the new
per-thread loop).
Given that before this patch the work was not well balanced among
worker threads, I guess that threads tended to reach that critical
region at different times, causing little contention.
In the "with this patch" results, the CPU utilization numbers are not
as good, suggesting that there is some contention. But I don't know
if it's contention due to the compute_and_set_names critical section
or the shared work queue critical section. That can be investigated
later. In any case, what ultimately counts is the wall time, which
improves.
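For illustration, here is roughly the shape of the new per-thread loop,
with made-up stand-in types and helpers (none of these names come from
the actual GDB code):

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <optional>
    #include <string>
    #include <thread>
    #include <vector>

    /* Made-up stand-ins, for illustration only.  */
    struct minsym { std::string name; size_t hash = 0; };

    /* A mutex-protected queue from which workers pop fixed-size batches.  */
    struct batch_queue
    {
      std::vector<minsym *> items;
      size_t next = 0;
      std::mutex m;

      std::optional<std::vector<minsym *>> pop_batch (size_t n)
      {
        std::lock_guard<std::mutex> guard (m);
        if (next >= items.size ())
          return std::nullopt;
        size_t end = std::min (next + n, items.size ());
        std::vector<minsym *> batch (items.begin () + next,
                                     items.begin () + end);
        next = end;
        return batch;
      }
    };

    std::mutex names_mutex;

    void compute_hash (minsym *m)
    { m->hash = std::hash<std::string> () (m->name); }

    void compute_and_set_names (minsym *m)
    { (void) m; /* Install the name; must be called with names_mutex held.  */ }

    /* Each worker thread runs this until the shared queue is empty.  */
    void worker_loop (batch_queue &queue)
    {
      while (auto batch = queue.pop_batch (10))
        {
          /* Parallel part, done without holding names_mutex.  */
          for (minsym *m : *batch)
            compute_hash (m);

          /* Critical section, now entered once per batch rather than
             once per worker thread at the very end.  */
          std::lock_guard<std::mutex> guard (names_mutex);
          for (minsym *m : *batch)
            compute_and_set_names (m);
        }
    }

    int main ()
    {
      std::vector<minsym> storage (100);
      batch_queue queue;
      for (minsym &m : storage)
        queue.items.push_back (&m);

      std::thread t1 (worker_loop, std::ref (queue));
      std::thread t2 (worker_loop, std::ref (queue));
      t1.join ();
      t2.join ();
    }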
- One choice I had to make was to decide how many work items (in this
case minimal symbols) each worker should pop when getting work from
the shared queue. The general wisdom is that:
- popping too few items, and the synchronization overhead becomes
significant, and the total processing time increases
- popping too many items, and we get some imbalance back, and the
total processing time increases again
I experimented with a dynamic batch size proportional to the number
of remaining work items. It worked well in some cases, but not
always, so I decided to keep it simple and use a fixed batch size.
That can always be tweaked later (both policies are sketched below).
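For illustration, the two policies could look like this (the numbers
are arbitrary, not the values used in GDB):

    #include <algorithm>
    #include <cstddef>

    /* Fixed batch size: every pop claims the same number of items.  */
    constexpr std::size_t fixed_batch_size = 10;   /* Arbitrary.  */

    /* Dynamic batch size: claim a fraction of what is left, so batches
       shrink as the queue drains.  This is the kind of variant that was
       tried and then dropped in favor of the fixed size.  */
    std::size_t
    proportional_batch_size (std::size_t remaining, std::size_t n_threads)
    {
      return std::max<std::size_t> (1, remaining / (n_threads * 4));
    }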
- I still want to be able to use scoped_time_it to measure the time
that each worker thread spends working on the task. I find it really
handy when measuring the performance impact of changes.
Unfortunately, the current interface of gdb::parallel_for_each, which
receives a simple callback, is no longer well-suited for that once
dynamic partitioning is introduced. The callback would get called once
for each work item batch (multiple times for each worker thread), so it
is not possible to maintain a per-worker-thread object for the duration
of the parallel for.
To allow this, I changed gdb::parallel_for_each to receive a worker
type as a template parameter. Each worker thread creates one local
instance of that type, and calls operator() on it for each work item
batch. By having a scoped_time_it object as a field of that worker,
we can get the timings per worker thread.
The drawback of this approach is that we must now define the
parallel task in a separate class and manually capture any context we
need as fields of that class (the sketch below illustrates the
pattern).
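As a self-contained illustration of the pattern (the real
gdb::parallel_for_each and scoped_time_it interfaces may differ; all
names below are made up), the worker type owns the per-thread state and
is invoked once per batch:

    #include <algorithm>
    #include <atomic>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    /* One instance of this lives per worker thread for the whole
       parallel for, so it can own per-thread state (here, a simple
       timer standing in for scoped_time_it) and hold any needed context
       as fields instead of lambda captures.  */
    struct demo_worker
    {
      explicit demo_worker (const char *name)
        : m_name (name), m_start (std::chrono::steady_clock::now ())
      {}

      ~demo_worker ()
      {
        using namespace std::chrono;
        auto ms = duration_cast<milliseconds> (steady_clock::now ()
                                               - m_start);
        std::printf ("%s: %lld ms, sum %ld\n", m_name,
                     (long long) ms.count (), m_sum);
      }

      /* Called once per batch of work items.  */
      void operator() (const int *first, const int *last)
      {
        for (const int *p = first; p != last; ++p)
          m_sum += *p;
      }

      const char *m_name;
      std::chrono::steady_clock::time_point m_start;
      long m_sum = 0;
    };

    /* Hypothetical batched parallel-for: each thread constructs one
       Worker and feeds it batches claimed from a shared atomic cursor.  */
    template<typename Worker, typename... Args>
    void
    parallel_for_each_batched (const std::vector<int> &items,
                               size_t batch_size, int n_threads,
                               Args &&... args)
    {
      std::atomic<size_t> next {0};
      auto thread_fn = [&] ()
        {
          Worker worker (args...);    /* Per-thread worker object.  */
          for (;;)
            {
              size_t begin = next.fetch_add (batch_size);
              if (begin >= items.size ())
                break;
              size_t end = std::min (begin + batch_size, items.size ());
              worker (items.data () + begin, items.data () + end);
            }
        };

      std::vector<std::thread> threads;
      for (int i = 0; i < n_threads; i++)
        threads.emplace_back (thread_fn);
      for (auto &t : threads)
        t.join ();
    }

    int main ()
    {
      std::vector<int> items (100000, 1);
      parallel_for_each_batched<demo_worker> (items, 1000, 4, "demo worker");
    }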
Change-Id: Ibf1fea65c91f76a95b9ed8f706fd6fa5ef52d9cf
Approved-By: Tom Tromey <tom@tromey.com>