On some machines, perf lock contention ran into trouble when it tried
to find kernel symbols.  I think it's because the kernel module and
kallsyms maps get messed up during load and split.

Basically we want to make sure the kernel map is loaded before any
symbol lookup, and the code did that in lock_contention_read().  But
recently we added more lookups in lock_contention_prepare(), which is
called before _read().
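
(For context, a minimal sketch of how I understand the two calls are
driven from __cmd_contention() in builtin-lock.c; the flow and error
handling are simplified and assumed, so the point is the ordering, not
the details.)

    lock_contention_prepare(&con);  /* opens the BPF skeleton; now also does symbol lookups */
    lock_contention_start();        /* enable BPF collection while the workload runs */
    /* ... workload ... */
    lock_contention_stop();
    lock_contention_read(&con);     /* drain the BPF maps and resolve lock addresses */
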
Also, the kernel map (kallsyms) may not be the first map in the group,
like on ARM.  Let's use machine__kernel_map() rather than just loading
the first map.
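
(The lookups this affects are of roughly this shape; a hedged sketch
using perf's machine__find_kernel_symbol() rather than the exact code
in the tool, with addr as a stand-in for the lock address read from
the BPF map.  The lookup only succeeds once the kallsyms map has
actually been map__load()'ed, which is why loading whichever map
happens to come first in the group is not enough.)

    struct map *kmap;
    struct symbol *sym;
    u64 addr = 0;   /* stand-in: the contended lock address from the BPF map */

    sym = machine__find_kernel_symbol(con->machine, addr, &kmap);
    if (sym == NULL)
        pr_debug("no symbol found for lock at %#" PRIx64 "\n", addr);
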
Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 688d2e8de231c54e ("perf lock contention: Add -l/--lock-addr option")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ ... @@ int lock_contention_prepare(struct lock_contention *con)
 	struct evlist *evlist = con->evlist;
 	struct target *target = con->target;
 
+	/* make sure it loads the kernel map before lookup */
+	map__load(machine__kernel_map(con->machine));
+
 	skel = lock_contention_bpf__open();
 	if (!skel) {
 		pr_err("Failed to open lock-contention BPF skeleton\n");
@@ ... @@ int lock_contention_read(struct lock_contention *con)
 		bpf_prog_test_run_opts(prog_fd, &opts);
 	}
 
-	/* make sure it loads the kernel map */
-	maps__load_first(machine->kmaps);
-
 	prev_key = NULL;
 	while (!bpf_map_get_next_key(fd, prev_key, &key)) {
 		s64 ls_key;