Currently, when loading GRUB modules, we allocate space for all
sections, including those without SHF_ALLOC set. We then copy the
sections that /do/ have SHF_ALLOC set into the allocated memory,
leaving part of the allocation untouched forever. Additionally, on
platforms with GOT fixups and trampolines, we currently compute
alignment round-ups for these non-loaded sections and for sections
with sh_size = 0. This patch removes the extra space from the
allocation size computation and makes the computation loop skip empty
sections, just as the loading loop does.
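
For illustration only (not part of the patch), here is a minimal,
self-contained sketch of the size computation with the skip applied.
The helper name module_image_size() and the use of the host <elf.h>
Elf64 types are assumptions made for this example, not GRUB code:

#include <elf.h>      /* Elf64_Ehdr, Elf64_Shdr, SHF_ALLOC */
#include <stdint.h>

/* Round v up to the next multiple of the power-of-two alignment a. */
#define ALIGN_UP(v, a)  (((v) + (a) - 1) & ~((uint64_t) (a) - 1))

/*
 * Hypothetical helper: walk the section headers of an ELF image held
 * in memory and return the number of bytes the module loader actually
 * needs, counting only non-empty sections with SHF_ALLOC set -- the
 * same sections the copy loop will later touch.
 */
static uint64_t
module_image_size (const Elf64_Ehdr *e, uint64_t *out_align)
{
  const Elf64_Shdr *s = (const Elf64_Shdr *) ((const char *) e + e->e_shoff);
  uint64_t tsize = 0, talign = 1;
  unsigned i;

  for (i = 0; i < e->e_shnum;
       i++, s = (const Elf64_Shdr *) ((const char *) s + e->e_shentsize))
    {
      /* Skip empty sections and sections that are never loaded. */
      if (s->sh_size == 0 || !(s->sh_flags & SHF_ALLOC))
        continue;

      /* sh_addralign of 0 or 1 means "no alignment constraint". */
      uint64_t align = s->sh_addralign > 1 ? s->sh_addralign : 1;

      tsize = ALIGN_UP (tsize, align) + s->sh_size;
      if (talign < align)
        talign = align;
    }

  if (out_align)
    *out_align = talign;
  return tsize;
}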
Signed-off-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Jan Setje-Eilers <jan.setjeeilers@oracle.com>
Signed-off-by: Mate Kukri <mate.kukri@canonical.com>
Reviewed-By: Vladimir Serbinenko <phcoder@gmail.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
        i < e->e_shnum;
        i++, s = (const Elf_Shdr *)((const char *) s + e->e_shentsize))
     {
+      if (s->sh_size == 0 || !(s->sh_flags & SHF_ALLOC))
+	continue;
+
       tsize = ALIGN_UP (tsize, s->sh_addralign) + s->sh_size;
       if (talign < s->sh_addralign)
 	talign = s->sh_addralign;
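
For context, a rough, self-contained sketch of the check the loading
loop already performs. This is simplified and is not the actual
grub_dl_load_segments() code; load_sections() and the Elf64 types from
the host <elf.h> are assumptions made for the example:

#include <elf.h>      /* Elf64_Ehdr, Elf64_Shdr, SHF_ALLOC, SHT_NOBITS */
#include <stdint.h>
#include <string.h>

/*
 * Only non-empty SHF_ALLOC sections are copied (or zero-filled for
 * SHT_NOBITS) into the module's allocation, so the sections skipped by
 * the size computation above are never touched here either.
 */
static void
load_sections (const Elf64_Ehdr *e, char *dest)
{
  const Elf64_Shdr *s = (const Elf64_Shdr *) ((const char *) e + e->e_shoff);
  char *ptr = dest;
  unsigned i;

  for (i = 0; i < e->e_shnum;
       i++, s = (const Elf64_Shdr *) ((const char *) s + e->e_shentsize))
    {
      if (s->sh_size == 0 || !(s->sh_flags & SHF_ALLOC))
        continue;

      if (s->sh_addralign > 1)
        ptr = (char *) (((uintptr_t) ptr + s->sh_addralign - 1)
                        & ~((uintptr_t) s->sh_addralign - 1));

      if (s->sh_type == SHT_NOBITS)
        memset (ptr, 0, s->sh_size);   /* .bss-style sections: zero-fill */
      else
        memcpy (ptr, (const char *) e + s->sh_offset, s->sh_size);

      ptr += s->sh_size;
    }
}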