From ba4312eb977a93e2200156da102b199e587151a9 Mon Sep 17 00:00:00 2001
From: Jeremy Fitzhardinge
Date: Fri, 12 Oct 2007 14:11:42 -0700
Subject: [PATCH] xfs: eagerly remove vmap mappings to avoid upsetting Xen

patch ace2e92e193126711cb3a83a3752b2c5b8396950 in mainline.

XFS leaves stray mappings around when it vmaps memory to make it
virtually contiguous.  This upsets Xen if one of those pages is being
recycled into a pagetable, since it finds an extra writable mapping of
the page.

This patch solves the problem in a brute force way, by making XFS
always eagerly unmap its mappings.

[ Stable: This works around a bug in 2.6.23.  We may come up with a
  better solution for mainline, but this seems like a low-impact fix
  for the stable kernel. ]

Signed-off-by: Jeremy Fitzhardinge
Cc: XFS masters
Cc: Morten Bøgeskov
Cc: Mark Williamson
Signed-off-by: Greg Kroah-Hartman
---
 fs/xfs/linux-2.6/xfs_buf.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index b0f0e58866de6..be9e65b7fe7bd 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -187,6 +187,19 @@ free_address(
 {
 	a_list_t	*aentry;
 
+#ifdef CONFIG_XEN
+	/*
+	 * Xen needs to be able to make sure it can get an exclusive
+	 * RO mapping of pages it wants to turn into a pagetable.  If
+	 * a newly allocated page is also still being vmap()ed by xfs,
+	 * it will cause pagetable construction to fail.  This is a
+	 * quick workaround to always eagerly unmap pages so that Xen
+	 * is happy.
+	 */
+	vunmap(addr);
+	return;
+#endif
+
 	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
 	if (likely(aentry)) {
 		spin_lock(&as_lock);
--
2.47.2
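
For context, here is a sketch of roughly what free_address() looks like once
the hunk above is applied, reconstructed from the diff itself; the function
signature (void *addr) and the elided tail of the deferred-free path are
assumptions drawn from the hunk's context lines, not something this patch
shows in full:

static void
free_address(
	void		*addr)		/* parameter name assumed from vunmap(addr) above */
{
	a_list_t	*aentry;

#ifdef CONFIG_XEN
	/* Unmap right away so Xen never finds a stale writable mapping. */
	vunmap(addr);
	return;
#endif

	/* Non-Xen path: queue the address so it can be vunmap()ed later. */
	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
	if (likely(aentry)) {
		spin_lock(&as_lock);
		/* ... deferred-free bookkeeping continues as before ... */
	}
}

On CONFIG_XEN kernels the function now unmaps immediately and returns before
the deferred a_list_t path is ever reached, which is the "brute force" eager
unmap the commit message describes.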