selftests/proc: add /proc/pid/maps tearing from vma split test
author Suren Baghdasaryan <surenb@google.com>
Sat, 19 Jul 2025 18:28:49 +0000 (11:28 -0700)
committer Andrew Morton <akpm@linux-foundation.org>
Fri, 25 Jul 2025 02:12:36 +0000 (19:12 -0700)
Patch series "use per-vma locks for /proc/pid/maps reads", v8.

Reading /proc/pid/maps requires read-locking mmap_lock which prevents any
other task from concurrently modifying the address space.  This guarantees
coherent reporting of virtual address ranges, however it can block
important updates from happening.  Oftentimes /proc/pid/maps readers are
low-priority monitoring tasks, and having them block high-priority tasks
results in priority inversion.

Locking the entire address space is required to present a fully coherent
picture of the address space; however, even the current implementation
does not strictly guarantee that, because it outputs vmas in page-size
chunks and drops mmap_lock between chunks.  Address space modifications
are possible while mmap_lock is dropped, and userspace reading the
content is expected to deal with possible concurrent address space
modifications.  Given these relaxed rules, holding mmap_lock is not
strictly needed as long as we can guarantee that a concurrently modified
vma is reported either in its original form or after it was modified.

This patchset switches from holding mmap_lock while reading /proc/pid/maps
to taking per-vma locks as we walk the vma tree.  This reduces the
contention with tasks modifying the address space because they would have
to contend for the same vma as opposed to the entire address space.
A previous version of this patchset [1] tried to perform /proc/pid/maps
reading under RCU; however, its implementation was quite complex and the
results were worse than this new version because it still relied on
mmap_lock speculation, which retries if any part of the address space
gets modified.  The new implementation is both simpler and results in
less contention.  Note that a similar approach would not work for
/proc/pid/smaps reading, as it also walks the page table, which is not
RCU-safe.

Paul McKenney designed a test [2] to measure mmap/munmap latencies while
concurrently reading /proc/pid/maps.  The test has a pair of processes
scanning /proc/PID/maps, and another process unmapping and remapping 4K
pages from a 128MB range of anonymous memory.  At the end of each 10
second run, the latency of each mmap() or munmap() operation is measured,
and for each run the maximum and mean latency is printed.  The map/unmap
process is started first, its PID is passed to the scanners, and then the
map/unmap process waits until both scanners are running before starting
its timed test.  The scanners keep scanning until the specified
/proc/PID/maps file disappears.

The latest results from Paul:
With stock mm-unstable, all of the runs had maximum latencies in excess
of 0.5 milliseconds, with 80% of the runs' latencies exceeding a full
millisecond and ranging up beyond 4 full milliseconds.  In contrast, 99%
of the runs with this patch series applied had maximum latencies of less
than 0.5 milliseconds, with the single outlier at only 0.608 milliseconds.

From a median-performance (as opposed to maximum-latency) viewpoint, this
patch series also looks good, with stock mm weighing in at 11 microseconds
and patch series at 6 microseconds, better than a 2x improvement.

Before the change:
./run-proc-vs-map.sh --nsamples 100 --rawdata -- --busyduration 2
    0.011     0.008     0.521
    0.011     0.008     0.552
    0.011     0.008     0.590
    0.011     0.008     0.660
    ...
    0.011     0.015     2.987
    0.011     0.015     3.038
    0.011     0.016     3.431
    0.011     0.016     4.707

After the change:
./run-proc-vs-map.sh --nsamples 100 --rawdata -- --busyduration 2
    0.006     0.005     0.026
    0.006     0.005     0.029
    0.006     0.005     0.034
    0.006     0.005     0.035
    ...
    0.006     0.006     0.421
    0.006     0.006     0.423
    0.006     0.006     0.439
    0.006     0.006     0.608

The patchset also adds a number of tests to check for /proc/pid/maps data
coherency.  They are designed to detect any unexpected data tearing while
performing some common address space modifications (vma split, resize and
remap).  Even before these changes, reading /proc/pid/maps might have
inconsistent data because the file is read page-by-page with mmap_lock
being dropped between the pages.  An example of user-visible inconsistency
can be that the same vma is printed twice: once before it was modified and
then after the modifications.  For example if vma was extended, it might
be found and reported twice.  What is not expected is to see a gap where
there should have been a vma both before and after modification.  This
patchset increases the chances of such tearing, therefore it's even more
important now to test for unexpected inconsistencies.

In [3] Lorenzo identified the following possible vma merging/splitting
scenarios:

Merges with changes to existing vmas:
1. Merge both - mapping a vma over another one and between two vmas which
can be merged after this replacement;
2. Merge left full - mapping a vma at the end of an existing one and
completely over its right neighbor;
3. Merge left partial - mapping a vma at the end of an existing one and
partially over its right neighbor;
4. Merge right full - mapping a vma before the start of an existing one
and completely over its left neighbor;
5. Merge right partial - mapping a vma before the start of an existing one
and partially over its left neighbor;

Merges without changes to existing vmas:
6. Merge both - mapping a vma into a gap between two vmas which can be
merged after the insertion;
7. Merge left - mapping a vma at the end of an existing one;
8. Merge right - mapping a vma before the start of an existing one;

Splits:
9. Split with new vma at the lower address;
10. Split with new vma at the higher address;

If such merges or splits happen concurrently with the /proc/maps reading
we might report a vma twice, once before the modification and once after
it is modified:

Case 1 might report the overwritten and the previous vma along with the
final merged vma;
Case 2 might report previous and the final merged vma;
Case 3 might cause us to retry once we detect the temporary gap caused by
shrinking of the right neighbor;
Case 4 might report the overwritten and the final merged vma;
Case 5 might cause us to retry once we detect the temporary gap caused by
shrinking of the left neighbor;
Case 6 might report the previous vma and the gap along with the final
merged vma;
Case 7 might report previous and the final merged vma;
Case 8 might report the original gap and the final merged vma covering the
gap;
Case 9 might cause us to retry once we detect the temporary gap caused by
shrinking of the original vma at the vma start;
Case 10 might cause us to retry once we detect the temporary gap caused by
shrinking of the original vma at the vma end;

In all these cases the retry mechanism prevents us from reporting possible
temporary gaps.

[1] https://lore.kernel.org/all/20250418174959.1431962-1-surenb@google.com/
[2] https://github.com/paulmckrcu/proc-mmap_sem-test
[3] https://lore.kernel.org/all/e1863f40-39ab-4e5b-984a-c48765ffde1c@lucifer.local/

The /proc/pid/maps file is generated page by page, with the mmap_lock
released between pages.  This can lead to inconsistent reads if the
underlying vmas are concurrently modified.  For instance, if a vma split
or merge occurs at a page boundary while /proc/pid/maps is being read, the
same vma might be seen twice: once before and once after the change.  This
duplication is considered acceptable for userspace handling.  However,
observing a "hole" where a vma should be (e.g., due to a vma being
replaced and the space temporarily being empty) is unacceptable.
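The acceptable-vs-unacceptable distinction can be expressed as a small
check on two parsed boundary lines.  This is an illustrative sketch
(helper names are not from the patch): a vma reported twice shows up as
the next line starting before the previous line has ended, which is
tolerable, whereas a hole would appear as an address range that was
covered before and after the modification but not in the read:

```c
#include <stdbool.h>
#include <stdio.h>

struct range { unsigned long start, end; };

/* Parse the "start-end" prefix of a /proc/pid/maps line. */
static bool parse_range(const char *maps_line, struct range *r)
{
	return sscanf(maps_line, "%lx-%lx", &r->start, &r->end) == 2;
}

/* The tolerable inconsistency: the same vma reported twice, so the
 * next boundary line overlaps the previous one. */
static bool seen_twice(const struct range *prev, const struct range *next)
{
	return next->start < prev->end;
}
```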

Implement a test that:
1. Forks a child process which continuously modifies its address
   space, specifically targeting a vma at the boundary between two pages.
2. The parent process repeatedly reads the child's /proc/pid/maps.
3. The parent process checks the last vma of the first page and the
   first vma of the second page for consistency, looking for the effects
   of vma splits or merges.

The test duration is configurable via the DURATION environment variable,
expressed in seconds.  The default test duration is 5 seconds.

Example Command: DURATION=10 ./proc-maps-race

Link: https://lore.kernel.org/all/20250418174959.1431962-1-surenb@google.com/
Link: https://github.com/paulmckrcu/proc-mmap_sem-test
Link: https://lore.kernel.org/all/e1863f40-39ab-4e5b-984a-c48765ffde1c@lucifer.local/
Link: https://lkml.kernel.org/r/20250719182854.3166724-1-surenb@google.com
Link: https://lkml.kernel.org/r/20250719182854.3166724-2-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jeongjun Park <aha310510@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Weißschuh <linux@weissschuh.net>
Cc: T.J. Mercier <tjmercier@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Ye Bin <yebin10@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
tools/testing/selftests/proc/.gitignore
tools/testing/selftests/proc/Makefile
tools/testing/selftests/proc/proc-maps-race.c [new file with mode: 0644]

index 973968f45bba0dde3881258eb7f1d74ba7b0ae93..19bb333e2485f518698af9ca3de05be7c7b35456 100644 (file)
@@ -5,6 +5,7 @@
 /proc-2-is-kthread
 /proc-fsconfig-hidepid
 /proc-loadavg-001
+/proc-maps-race
 /proc-multiple-procfs
 /proc-empty-vm
 /proc-pid-vm
index b12921b9794b0f7702f9cbda06dae432599aba48..50aba102201a9d7952b6b4e11a1cf33a9864adb3 100644 (file)
@@ -9,6 +9,7 @@ TEST_GEN_PROGS += fd-002-posix-eq
 TEST_GEN_PROGS += fd-003-kthread
 TEST_GEN_PROGS += proc-2-is-kthread
 TEST_GEN_PROGS += proc-loadavg-001
+TEST_GEN_PROGS += proc-maps-race
 TEST_GEN_PROGS += proc-empty-vm
 TEST_GEN_PROGS += proc-pid-vm
 TEST_GEN_PROGS += proc-self-map-files-001
diff --git a/tools/testing/selftests/proc/proc-maps-race.c b/tools/testing/selftests/proc/proc-maps-race.c
new file mode 100644 (file)
index 0000000..5b28dda
--- /dev/null
@@ -0,0 +1,447 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2022 Google LLC.
+ * Author: Suren Baghdasaryan <surenb@google.com>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+/*
+ * Fork a child that concurrently modifies address space while the main
+ * process is reading /proc/$PID/maps and verifying the results. Address
+ * space modifications include:
+ *     VMA splitting and merging
+ *
+ */
+#define _GNU_SOURCE
+#include "../kselftest_harness.h"
+#include <errno.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+/* /proc/pid/maps parsing routines */
+struct page_content {
+       char *data;
+       ssize_t size;
+};
+
+#define LINE_MAX_SIZE          256
+
+struct line_content {
+       char text[LINE_MAX_SIZE];
+       unsigned long start_addr;
+       unsigned long end_addr;
+};
+
+enum test_state {
+       INIT,
+       CHILD_READY,
+       PARENT_READY,
+       SETUP_READY,
+       SETUP_MODIFY_MAPS,
+       SETUP_MAPS_MODIFIED,
+       SETUP_RESTORE_MAPS,
+       SETUP_MAPS_RESTORED,
+       TEST_READY,
+       TEST_DONE,
+};
+
+struct vma_modifier_info;
+
+FIXTURE(proc_maps_race)
+{
+       struct vma_modifier_info *mod_info;
+       struct page_content page1;
+       struct page_content page2;
+       struct line_content last_line;
+       struct line_content first_line;
+       unsigned long duration_sec;
+       int shared_mem_size;
+       int page_size;
+       int vma_count;
+       int maps_fd;
+       pid_t pid;
+};
+
+typedef bool (*vma_modifier_op)(FIXTURE_DATA(proc_maps_race) *self);
+typedef bool (*vma_mod_result_check_op)(struct line_content *mod_last_line,
+                                       struct line_content *mod_first_line,
+                                       struct line_content *restored_last_line,
+                                       struct line_content *restored_first_line);
+
+struct vma_modifier_info {
+       int vma_count;
+       void *addr;
+       int prot;
+       void *next_addr;
+       vma_modifier_op vma_modify;
+       vma_modifier_op vma_restore;
+       vma_mod_result_check_op vma_mod_check;
+       pthread_mutex_t sync_lock;
+       pthread_cond_t sync_cond;
+       enum test_state curr_state;
+       bool exit;
+       void *child_mapped_addr[];
+};
+
+
+static bool read_two_pages(FIXTURE_DATA(proc_maps_race) *self)
+{
+       ssize_t  bytes_read;
+
+       if (lseek(self->maps_fd, 0, SEEK_SET) < 0)
+               return false;
+
+       bytes_read = read(self->maps_fd, self->page1.data, self->page_size);
+       if (bytes_read <= 0)
+               return false;
+
+       self->page1.size = bytes_read;
+
+       bytes_read = read(self->maps_fd, self->page2.data, self->page_size);
+       if (bytes_read <= 0)
+               return false;
+
+       self->page2.size = bytes_read;
+
+       return true;
+}
+
+static void copy_first_line(struct page_content *page, char *first_line)
+{
+       char *pos = strchr(page->data, '\n');
+
+       strncpy(first_line, page->data, pos - page->data);
+       first_line[pos - page->data] = '\0';
+}
+
+static void copy_last_line(struct page_content *page, char *last_line)
+{
+       /* Get the last line in the first page */
+       const char *end = page->data + page->size - 1;
+       /* skip last newline */
+       const char *pos = end - 1;
+
+       /* search previous newline */
+       while (pos[-1] != '\n')
+               pos--;
+       strncpy(last_line, pos, end - pos);
+       last_line[end - pos] = '\0';
+}
+
+/* Read the last line of the first page and the first line of the second page */
+static bool read_boundary_lines(FIXTURE_DATA(proc_maps_race) *self,
+                               struct line_content *last_line,
+                               struct line_content *first_line)
+{
+       if (!read_two_pages(self))
+               return false;
+
+       copy_last_line(&self->page1, last_line->text);
+       copy_first_line(&self->page2, first_line->text);
+
+       return sscanf(last_line->text, "%lx-%lx", &last_line->start_addr,
+                     &last_line->end_addr) == 2 &&
+              sscanf(first_line->text, "%lx-%lx", &first_line->start_addr,
+                     &first_line->end_addr) == 2;
+}
+
+/* Thread synchronization routines */
+static void wait_for_state(struct vma_modifier_info *mod_info, enum test_state state)
+{
+       pthread_mutex_lock(&mod_info->sync_lock);
+       while (mod_info->curr_state != state)
+               pthread_cond_wait(&mod_info->sync_cond, &mod_info->sync_lock);
+       pthread_mutex_unlock(&mod_info->sync_lock);
+}
+
+static void signal_state(struct vma_modifier_info *mod_info, enum test_state state)
+{
+       pthread_mutex_lock(&mod_info->sync_lock);
+       mod_info->curr_state = state;
+       pthread_cond_signal(&mod_info->sync_cond);
+       pthread_mutex_unlock(&mod_info->sync_lock);
+}
+
+static void stop_vma_modifier(struct vma_modifier_info *mod_info)
+{
+       wait_for_state(mod_info, SETUP_READY);
+       mod_info->exit = true;
+       signal_state(mod_info, SETUP_MODIFY_MAPS);
+}
+
+static bool capture_mod_pattern(FIXTURE_DATA(proc_maps_race) *self,
+                               struct line_content *mod_last_line,
+                               struct line_content *mod_first_line,
+                               struct line_content *restored_last_line,
+                               struct line_content *restored_first_line)
+{
+       signal_state(self->mod_info, SETUP_MODIFY_MAPS);
+       wait_for_state(self->mod_info, SETUP_MAPS_MODIFIED);
+
+       /* Copy last line of the first page and first line of the second page */
+       if (!read_boundary_lines(self, mod_last_line, mod_first_line))
+               return false;
+
+       signal_state(self->mod_info, SETUP_RESTORE_MAPS);
+       wait_for_state(self->mod_info, SETUP_MAPS_RESTORED);
+
+       /* Copy last line of the first page and first line of the second page */
+       if (!read_boundary_lines(self, restored_last_line, restored_first_line))
+               return false;
+
+       if (!self->mod_info->vma_mod_check(mod_last_line, mod_first_line,
+                                          restored_last_line, restored_first_line))
+               return false;
+
+       /*
+        * The content of these lines after modify+restore should be the same
+        * as the original.
+        */
+       return strcmp(restored_last_line->text, self->last_line.text) == 0 &&
+              strcmp(restored_first_line->text, self->first_line.text) == 0;
+}
+
+static inline bool split_vma(FIXTURE_DATA(proc_maps_race) *self)
+{
+       return mmap(self->mod_info->addr, self->page_size, self->mod_info->prot | PROT_EXEC,
+                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) != MAP_FAILED;
+}
+
+static inline bool merge_vma(FIXTURE_DATA(proc_maps_race) *self)
+{
+       return mmap(self->mod_info->addr, self->page_size, self->mod_info->prot,
+                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) != MAP_FAILED;
+}
+
+static inline bool check_split_result(struct line_content *mod_last_line,
+                                     struct line_content *mod_first_line,
+                                     struct line_content *restored_last_line,
+                                     struct line_content *restored_first_line)
+{
+       /* Make sure vmas at the boundaries are changing */
+       return strcmp(mod_last_line->text, restored_last_line->text) != 0 &&
+              strcmp(mod_first_line->text, restored_first_line->text) != 0;
+}
+
+FIXTURE_SETUP(proc_maps_race)
+{
+       const char *duration = getenv("DURATION");
+       struct vma_modifier_info *mod_info;
+       pthread_mutexattr_t mutex_attr;
+       pthread_condattr_t cond_attr;
+       unsigned long duration_sec;
+       char fname[32];
+
+       self->page_size = (unsigned long)sysconf(_SC_PAGESIZE);
+       duration_sec = duration ? atol(duration) : 0;
+       self->duration_sec = duration_sec ? duration_sec : 5UL;
+
+       /*
+        * Have to map enough vmas for /proc/pid/maps to contain more than one
+        * page worth of vmas. Assume at least 32 bytes per line in maps output
+        */
+       self->vma_count = self->page_size / 32 + 1;
+       self->shared_mem_size = sizeof(struct vma_modifier_info) + self->vma_count * sizeof(void *);
+
+       /* map shared memory for communication with the child process */
+       self->mod_info = (struct vma_modifier_info *)mmap(NULL, self->shared_mem_size,
+                               PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+       ASSERT_NE(self->mod_info, MAP_FAILED);
+       mod_info = self->mod_info;
+
+       /* Initialize shared members */
+       pthread_mutexattr_init(&mutex_attr);
+       pthread_mutexattr_setpshared(&mutex_attr, PTHREAD_PROCESS_SHARED);
+       ASSERT_EQ(pthread_mutex_init(&mod_info->sync_lock, &mutex_attr), 0);
+       pthread_condattr_init(&cond_attr);
+       pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED);
+       ASSERT_EQ(pthread_cond_init(&mod_info->sync_cond, &cond_attr), 0);
+       mod_info->vma_count = self->vma_count;
+       mod_info->curr_state = INIT;
+       mod_info->exit = false;
+
+       self->pid = fork();
+       if (!self->pid) {
+               /* Child process modifying the address space */
+               int prot = PROT_READ | PROT_WRITE;
+               int i;
+
+               for (i = 0; i < mod_info->vma_count; i++) {
+                       mod_info->child_mapped_addr[i] = mmap(NULL, self->page_size * 3, prot,
+                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+                       ASSERT_NE(mod_info->child_mapped_addr[i], MAP_FAILED);
+                       /* change protection in adjacent maps to prevent merging */
+                       prot ^= PROT_WRITE;
+               }
+               signal_state(mod_info, CHILD_READY);
+               wait_for_state(mod_info, PARENT_READY);
+               while (true) {
+                       signal_state(mod_info, SETUP_READY);
+                       wait_for_state(mod_info, SETUP_MODIFY_MAPS);
+                       if (mod_info->exit)
+                               break;
+
+                       ASSERT_TRUE(mod_info->vma_modify(self));
+                       signal_state(mod_info, SETUP_MAPS_MODIFIED);
+                       wait_for_state(mod_info, SETUP_RESTORE_MAPS);
+                       ASSERT_TRUE(mod_info->vma_restore(self));
+                       signal_state(mod_info, SETUP_MAPS_RESTORED);
+
+                       wait_for_state(mod_info, TEST_READY);
+                       while (mod_info->curr_state != TEST_DONE) {
+                               ASSERT_TRUE(mod_info->vma_modify(self));
+                               ASSERT_TRUE(mod_info->vma_restore(self));
+                       }
+               }
+               for (i = 0; i < mod_info->vma_count; i++)
+                       munmap(mod_info->child_mapped_addr[i], self->page_size * 3);
+
+               exit(0);
+       }
+
+       sprintf(fname, "/proc/%d/maps", self->pid);
+       self->maps_fd = open(fname, O_RDONLY);
+       ASSERT_NE(self->maps_fd, -1);
+
+       /* Wait for the child to map the VMAs */
+       wait_for_state(mod_info, CHILD_READY);
+
+       /* Read first two pages */
+       self->page1.data = malloc(self->page_size);
+       ASSERT_NE(self->page1.data, NULL);
+       self->page2.data = malloc(self->page_size);
+       ASSERT_NE(self->page2.data, NULL);
+
+       ASSERT_TRUE(read_boundary_lines(self, &self->last_line, &self->first_line));
+
+       /*
+        * Find the addresses corresponding to the last line in the first page
+        * and the first line in the second page.
+        */
+       mod_info->addr = NULL;
+       mod_info->next_addr = NULL;
+       for (int i = 0; i < mod_info->vma_count; i++) {
+               if (mod_info->child_mapped_addr[i] == (void *)self->last_line.start_addr) {
+                       mod_info->addr = mod_info->child_mapped_addr[i];
+                       mod_info->prot = PROT_READ;
+                       /* Even VMAs have write permission */
+                       if ((i % 2) == 0)
+                               mod_info->prot |= PROT_WRITE;
+               } else if (mod_info->child_mapped_addr[i] == (void *)self->first_line.start_addr) {
+                       mod_info->next_addr = mod_info->child_mapped_addr[i];
+               }
+
+               if (mod_info->addr && mod_info->next_addr)
+                       break;
+       }
+       ASSERT_TRUE(mod_info->addr && mod_info->next_addr);
+
+       signal_state(mod_info, PARENT_READY);
+
+}
+
+FIXTURE_TEARDOWN(proc_maps_race)
+{
+       int status;
+
+       stop_vma_modifier(self->mod_info);
+
+       free(self->page2.data);
+       free(self->page1.data);
+
+       for (int i = 0; i < self->vma_count; i++)
+               munmap(self->mod_info->child_mapped_addr[i], self->page_size * 3);
+       close(self->maps_fd);
+       waitpid(self->pid, &status, 0);
+       munmap(self->mod_info, self->shared_mem_size);
+}
+
+TEST_F(proc_maps_race, test_maps_tearing_from_split)
+{
+       struct vma_modifier_info *mod_info = self->mod_info;
+
+       struct line_content split_last_line;
+       struct line_content split_first_line;
+       struct line_content restored_last_line;
+       struct line_content restored_first_line;
+
+       wait_for_state(mod_info, SETUP_READY);
+
+       /* re-read the file to avoid using stale data from previous test */
+       ASSERT_TRUE(read_boundary_lines(self, &self->last_line, &self->first_line));
+
+       mod_info->vma_modify = split_vma;
+       mod_info->vma_restore = merge_vma;
+       mod_info->vma_mod_check = check_split_result;
+
+       ASSERT_TRUE(capture_mod_pattern(self, &split_last_line, &split_first_line,
+                                       &restored_last_line, &restored_first_line));
+
+       /* Now start concurrent modifications for self->duration_sec */
+       signal_state(mod_info, TEST_READY);
+
+       struct line_content new_last_line;
+       struct line_content new_first_line;
+       struct timespec start_ts, end_ts;
+
+       clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts);
+       do {
+               bool last_line_changed;
+               bool first_line_changed;
+
+               ASSERT_TRUE(read_boundary_lines(self, &new_last_line, &new_first_line));
+
+               /* Check if we read vmas after split */
+               if (!strcmp(new_last_line.text, split_last_line.text)) {
+                       /*
+                        * The vmas should be consistent with split results,
+                        * however if vma was concurrently restored after a
+                        * split, it can be reported twice (first the original
+                        * split one, then the same vma but extended after the
+                        * merge) because we found it as the next vma again.
+                        * In that case new first line will be the same as the
+                        * last restored line.
+                        */
+                       ASSERT_FALSE(strcmp(new_first_line.text, split_first_line.text) &&
+                                    strcmp(new_first_line.text, restored_last_line.text));
+               } else {
+                       /* The vmas should be consistent with merge results */
+                       ASSERT_FALSE(strcmp(new_last_line.text, restored_last_line.text));
+                       ASSERT_FALSE(strcmp(new_first_line.text, restored_first_line.text));
+               }
+               /*
+                * First and last lines should change in unison. If the last
+                * line changed then the first line should change as well and
+                * vice versa.
+                */
+               last_line_changed = strcmp(new_last_line.text, self->last_line.text) != 0;
+               first_line_changed = strcmp(new_first_line.text, self->first_line.text) != 0;
+               ASSERT_EQ(last_line_changed, first_line_changed);
+
+               clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts);
+       } while (end_ts.tv_sec - start_ts.tv_sec < self->duration_sec);
+
+       /* Signal the modifier process to stop and wait until it exits */
+       signal_state(mod_info, TEST_DONE);
+}
+
+TEST_HARNESS_MAIN