From dad36cf91992ba78a8c3e51eaaf5a95bf19fefa8 Mon Sep 17 00:00:00 2001
From: Simon Marchi
Date: Fri, 19 Sep 2025 16:27:05 -0400
Subject: [PATCH] gdb/dwarf: use dynamic partitioning for DWARF CU indexing

The DWARF indexer splits the work statically based on the unit sizes,
attempting to give each worker thread about the same number of bytes to
process.  This works relatively well with standard compilation, but not as
well when compiling with DWO files (-gsplit-dwarf).  I see this when
loading a relatively big program (telegram-desktop, which includes a lot of
static dependencies) compiled with -gsplit-dwarf:

Time for "DWARF indexing worker": wall 0.000, user 0.000, sys 0.000, user+sys 0.000, -nan % CPU
Time for "DWARF indexing worker": wall 0.001, user 0.000, sys 0.000, user+sys 0.000, 0.0 % CPU
Time for "DWARF indexing worker": wall 0.001, user 0.001, sys 0.000, user+sys 0.001, 100.0 % CPU
Time for "DWARF indexing worker": wall 0.748, user 0.284, sys 0.297, user+sys 0.581, 77.7 % CPU
Time for "DWARF indexing worker": wall 0.818, user 0.408, sys 0.262, user+sys 0.670, 81.9 % CPU
Time for "DWARF indexing worker": wall 1.196, user 0.580, sys 0.402, user+sys 0.982, 82.1 % CPU
Time for "DWARF indexing worker": wall 1.250, user 0.511, sys 0.500, user+sys 1.011, 80.9 % CPU
Time for "DWARF indexing worker": wall 7.730, user 5.891, sys 1.729, user+sys 7.620, 98.6 % CPU

Note how the wall times vary from 0 to 7.7 seconds.  This is undesirable,
because the indexing step as a whole takes as long as the slowest worker
thread.  The imbalance in this step also causes imbalance in the following
"finalize" step:

Time for "DWARF finalize worker": wall 0.007, user 0.004, sys 0.002, user+sys 0.006, 85.7 % CPU
Time for "DWARF finalize worker": wall 0.012, user 0.005, sys 0.005, user+sys 0.010, 83.3 % CPU
Time for "DWARF finalize worker": wall 0.015, user 0.010, sys 0.004, user+sys 0.014, 93.3 % CPU
Time for "DWARF finalize worker": wall 0.389, user 0.359, sys 0.029, user+sys 0.388, 99.7 % CPU
Time for "DWARF finalize worker": wall 0.680, user 0.644, sys 0.035, user+sys 0.679, 99.9 % CPU
Time for "DWARF finalize worker": wall 0.929, user 0.907, sys 0.020, user+sys 0.927, 99.8 % CPU
Time for "DWARF finalize worker": wall 1.093, user 1.055, sys 0.037, user+sys 1.092, 99.9 % CPU
Time for "DWARF finalize worker": wall 2.016, user 1.934, sys 0.082, user+sys 2.016, 100.0 % CPU
Time for "DWARF finalize worker": wall 25.882, user 25.471, sys 0.404, user+sys 25.875, 100.0 % CPU

With DWO files, the split of the workload by size doesn't work, because it
is done using the size of the skeleton units in the main file, which is not
representative of how much DWARF is contained in each DWO file.  I haven't
tried it, but a similar problem could occur with cross-unit imports, which
can happen with dwz or LTO.  You could have a small unit that imports a lot
from other units, in which case the size of the units is not representative
of the work to accomplish.

To improve this situation, change the DWARF indexer to use dynamic
partitioning, using gdb::parallel_for_each_async.  With this, each worker
thread pops one unit at a time from a shared work queue and processes it,
until the queue is empty.  That should result in a more balanced workload
split.  I chose 1 as the minimum batch size here, because I judged that
indexing one CU is a big enough piece of work compared to the
synchronization overhead of the queue.  That can always be tweaked later if
someone wants to do more tests.
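To make the scheme concrete, here is a minimal, self-contained C++ sketch
of dynamic partitioning with a batch size of 1.  It is illustrative only --
the actual change uses gdb::parallel_for_each_async, shown in the diff
below, and all names in the sketch are made up:

  // Illustrative sketch of dynamic partitioning, not GDB code: workers
  // repeatedly claim the next unprocessed unit from a shared atomic
  // cursor, so a slow unit only delays the one worker handling it.
  #include <algorithm>
  #include <atomic>
  #include <cstdio>
  #include <thread>
  #include <vector>

  int main ()
  {
    std::vector<int> units (1000);      // stand-in for the CUs to index
    std::atomic<size_t> next_unit {0};  // shared work-queue cursor

    auto worker = [&] ()
    {
      for (;;)
        {
          // Claim one unit at a time (minimum batch size of 1).
          size_t i = next_unit.fetch_add (1);
          if (i >= units.size ())
            break;
          units[i] += 1;                // "index" the unit
        }
    };

    unsigned n_threads = std::max (1u, std::thread::hardware_concurrency ());
    std::vector<std::thread> threads;
    for (unsigned n = 0; n < n_threads; ++n)
      threads.emplace_back (worker);
    for (auto &t : threads)
      t.join ();

    std::printf ("processed %zu units\n", units.size ());
    return 0;
  }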
As a result, the timings are much more balanced:

Time for "DWARF indexing worker": wall 2.325, user 1.033, sys 0.573, user+sys 1.606, 69.1 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.028, sys 0.568, user+sys 1.596, 68.6 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.068, sys 0.513, user+sys 1.581, 68.0 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.005, sys 0.579, user+sys 1.584, 68.1 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.070, sys 0.516, user+sys 1.586, 68.2 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.063, sys 0.584, user+sys 1.647, 70.8 % CPU
Time for "DWARF indexing worker": wall 2.326, user 1.049, sys 0.550, user+sys 1.599, 68.7 % CPU
Time for "DWARF indexing worker": wall 2.328, user 1.058, sys 0.541, user+sys 1.599, 68.7 % CPU
...
Time for "DWARF finalize worker": wall 2.833, user 2.791, sys 0.040, user+sys 2.831, 99.9 % CPU
Time for "DWARF finalize worker": wall 2.939, user 2.896, sys 0.043, user+sys 2.939, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.016, user 2.969, sys 0.046, user+sys 3.015, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.076, user 2.957, sys 0.118, user+sys 3.075, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.159, user 3.054, sys 0.104, user+sys 3.158, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.198, user 3.082, sys 0.114, user+sys 3.196, 99.9 % CPU
Time for "DWARF finalize worker": wall 3.197, user 3.076, sys 0.121, user+sys 3.197, 100.0 % CPU
Time for "DWARF finalize worker": wall 3.268, user 3.136, sys 0.131, user+sys 3.267, 100.0 % CPU
Time for "DWARF finalize worker": wall 1.907, user 1.810, sys 0.096, user+sys 1.906, 99.9 % CPU

In absolute terms, the total time for GDB to load the file and exit goes
down from about 42 seconds to 17 seconds.

Some implementation notes:

 - The state previously kept as local variables in
   cooked_index_worker_debug_info::process_units becomes fields of the new
   parallel worker object.

 - The work previously done for each unit in
   cooked_index_worker_debug_info::process_units becomes the operator() of
   the new parallel worker object.

 - The work previously done at the end of
   cooked_index_worker_debug_info::process_units (including calling
   bfd_thread_cleanup) becomes the destructor of the new parallel worker
   object.

 - The "done" callback of gdb::task_group becomes the "done" callback of
   gdb::parallel_for_each_async.

 - I placed the parallel_indexing_worker struct inside
   cooked_index_worker_debug_info, so that it has access to
   cooked_index_worker_debug_info's private fields (e.g. m_results, to push
   the results).  It will also be possible to re-use it for skeletonless
   type units in a later patch.

Change-Id: I5dc5cf8793abe9ebe2659e78da38ffc94289e5f2
Approved-By: Tom Tromey
---
 gdb/dwarf2/cooked-index-worker.h |   8 ++
 gdb/dwarf2/read.c                | 152 +++++++++++++------------------
 2 files changed, 73 insertions(+), 87 deletions(-)

diff --git a/gdb/dwarf2/cooked-index-worker.h b/gdb/dwarf2/cooked-index-worker.h
index 433515bfa5b..dfdc82f5274 100644
--- a/gdb/dwarf2/cooked-index-worker.h
+++ b/gdb/dwarf2/cooked-index-worker.h
@@ -296,8 +296,16 @@ protected:
   /* The per-objfile object.  */
   dwarf2_per_objfile *m_per_objfile;
 
+  /* Result of each worker task.  */
   std::vector<cooked_index_worker_result> m_results;
+
+#if CXX_STD_THREAD
+  /* Mutex to synchronize access to M_RESULTS when workers append their
+     result.  */
+  std::mutex m_results_mutex;
+#endif /* CXX_STD_THREAD */
+
   /* Any warnings emitted.  For the time being at least, this only
      needed in do_reading, not in every worker.
     Note that deferred_warnings uses gdb_stderr in its constructor, and this

diff --git a/gdb/dwarf2/read.c b/gdb/dwarf2/read.c
index 801917937ec..bf982f94042 100644
--- a/gdb/dwarf2/read.c
+++ b/gdb/dwarf2/read.c
@@ -52,6 +52,7 @@
 #include "elf-bfd.h"
 #include "event-top.h"
 #include "exceptions.h"
+#include "gdbsupport/parallel-for.h"
 #include "gdbsupport/task-group.h"
 #include "maint.h"
 #include "symtab.h"
@@ -3308,6 +3309,61 @@ public:
   }
 
 private:
+  /* The task for parallel workers that index units.  */
+  struct parallel_indexing_worker
+  {
+    parallel_indexing_worker (const char *step_name,
+                              cooked_index_worker_debug_info *parent)
+      : m_scoped_time_it (step_name, parent->m_per_command_time),
+        m_parent (parent)
+    {
+    }
+
+    DISABLE_COPY_AND_ASSIGN (parallel_indexing_worker);
+
+    ~parallel_indexing_worker ()
+    {
+      bfd_thread_cleanup ();
+
+      m_thread_storage.done_reading (m_complaint_handler.release ());
+
+      /* Append the results of this worker to the parent instance.  */
+#if CXX_STD_THREAD
+      std::lock_guard lock (m_parent->m_results_mutex);
+#endif
+      m_parent->m_results.emplace_back (std::move (m_thread_storage));
+    }
+
+    void operator() (iterator_range<dwarf2_per_cu_up *> range)
+    {
+      for (auto &it : range)
+        this->process_one (*it);
+    }
+
+  private:
+    void process_one (dwarf2_per_cu &unit)
+    {
+      m_thread_storage.catch_error ([&] ()
+        {
+          m_parent->process_unit (&unit, m_parent->m_per_objfile,
+                                  &m_thread_storage);
+        });
+    }
+
+    /* Measures the execution time of this worker.  */
+    scoped_time_it m_scoped_time_it;
+
+    /* Delayed complaints and errors recorded while indexing units.  */
+    complaint_interceptor m_complaint_handler;
+    std::vector m_errors;
+
+    /* Index storage for this worker.  */
+    cooked_index_worker_result m_thread_storage;
+
+    /* The instance that spawned this worker.  */
+    cooked_index_worker_debug_info *m_parent;
+  };
+
   void do_reading () override;
 
   /* Print collected type unit statistics.  */
@@ -3350,12 +3406,6 @@ private:
   /* An iterator for the comp units.  */
   using unit_iterator = std::vector<dwarf2_per_cu_up>::iterator;
 
-  /* Process a batch of CUs.  This may be called multiple times in
-     separate threads.  TASK_NUMBER indicates which task this is --
-     the result is stored in that slot of M_RESULTS.  */
-  void process_units (size_t task_number, unit_iterator first,
-                      unit_iterator end);
-
   /* Process unit THIS_CU.  */
   void process_unit (dwarf2_per_cu *this_cu, dwarf2_per_objfile *per_objfile,
                      cooked_index_worker_result *storage);
@@ -3579,32 +3629,6 @@ cooked_index_worker_debug_info::process_skeletonless_type_units
     });
 }
 
-void
-cooked_index_worker_debug_info::process_units (size_t task_number,
-                                               unit_iterator first,
-                                               unit_iterator end)
-{
-  SCOPE_EXIT { bfd_thread_cleanup (); };
-
-  /* Ensure that complaints are handled correctly.  */
-  complaint_interceptor complaint_handler;
-
-  std::vector errors;
-  cooked_index_worker_result thread_storage;
-  for (auto inner = first; inner != end; ++inner)
-    {
-      dwarf2_per_cu *per_cu = inner->get ();
-
-      thread_storage.catch_error ([&] ()
-        {
-          process_unit (per_cu, m_per_objfile, &thread_storage);
-        });
-    }
-
-  thread_storage.done_reading (complaint_handler.release ());
-  m_results[task_number] = std::move (thread_storage);
-}
-
 void
 cooked_index_worker_debug_info::done_reading ()
 {
@@ -3630,62 +3654,16 @@ cooked_index_worker_debug_info::do_reading ()
                                           m_index_storage.get_addrmap (),
                                           &m_warnings);
 
-  /* We want to balance the load between the worker threads.  This is
-     done by using the size of each CU as a rough estimate of how
-     difficult it will be to operate on.
-     This isn't ideal -- for example if dwz is used, the early CUs
-     will all tend to be "included" and won't be parsed independently.
-     However, this heuristic works well for typical compiler output.  */
-
-  size_t total_size = 0;
-  for (const auto &per_cu : per_bfd->all_units)
-    total_size += per_cu->length ();
-
-  /* How many worker threads we plan to use.  We may not actually use
-     this many.  We use 1 as the minimum to avoid division by zero,
-     and anyway in the N==0 case the work will be done
-     synchronously.  */
-  const size_t n_worker_threads
-    = std::max (gdb::thread_pool::g_thread_pool->thread_count (), (size_t) 1);
-
-  /* How much effort should be put into each worker.  */
-  const size_t size_per_thread
-    = std::max (total_size / n_worker_threads, (size_t) 1);
-
-  /* Work is done in a task group.  */
-  gdb::task_group workers ([this] ()
-  {
-    this->done_reading ();
-  });
-
-  auto end = per_bfd->all_units.end ();
-  size_t task_count = 0;
-  for (auto iter = per_bfd->all_units.begin (); iter != end; )
-    {
-      auto last = iter;
-      /* Put all remaining CUs into the last task.  */
-      if (task_count == n_worker_threads - 1)
-        last = end;
-      else
-        {
-          size_t chunk_size = 0;
-          for (; last != end && chunk_size < size_per_thread; ++last)
-            chunk_size += (*last)->length ();
-        }
-
-      gdb_assert (iter != last);
-      workers.add_task ([this, task_count, iter, last] ()
-        {
-          scoped_time_it time_it ("DWARF indexing worker", m_per_command_time);
-          process_units (task_count, iter, last);
-        });
-
-      ++task_count;
-      iter = last;
-    }
+  /* Launch parallel tasks to index units.
 
-  m_results.resize (task_count);
-  workers.start ();
+     The (unfortunate) reason why we don't use
+     std::vector<dwarf2_per_cu_up>::iterator as the parallel-for-each iterator
+     type is that std::atomic won't work with that type when building with
+     -D_GLIBCXX_DEBUG.  */
+  gdb::parallel_for_each_async<1, dwarf2_per_cu_up *, parallel_indexing_worker>
+    (per_bfd->all_units.data (),
+     per_bfd->all_units.data () + per_bfd->all_units.size (),
+     [this] () { this->done_reading (); }, "DWARF indexing worker", this);
 }
 
 static void
-- 
2.47.3