git.ipfire.org Git - thirdparty/libvirt.git/commitdiff
numa_conf: Properly check for caches in virDomainNumaDefValidate()
author    Michal Privoznik <mprivozn@redhat.com>
          Tue, 18 Aug 2020 10:46:38 +0000 (12:46 +0200)
committer Michal Privoznik <mprivozn@redhat.com>
          Wed, 19 Aug 2020 08:21:59 +0000 (10:21 +0200)
When adding support for HMAT, in f0611fe8830 I introduced a
check which aims to validate /domain/cpu/numa/interconnects. As
part of that, there is a loop which checks whether every
<latency/> with a @cache attribute refers to an existing cache
level. For instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing the L1 cache of node #0 from
node #0 has a latency of 5ns.
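
Parsed, the check boils down to comparing the interconnect's @cache
attribute with the @level attribute of each <cache/> element under the
target cell. A rough sketch of the fields involved (simplified
illustration structs, not the actual libvirt definitions; only the
members the check reads are shown):

  /* Illustration only -- not the real libvirt structures. */
  typedef struct {
      unsigned int level;          /* <cache level='...'/> */
  } virDomainNumaCache;

  typedef struct {
      size_t ncaches;              /* number of <cache/> children */
      virDomainNumaCache *caches;
  } virDomainNumaNode;

  typedef struct {
      unsigned int initiator;      /* <latency initiator='...'/> */
      unsigned int target;         /* <latency target='...'/> */
      unsigned int cache;          /* <latency cache='...'/>, 0 if absent */
  } virDomainNumaInterconnect;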

However, the check inside the loop was not written properly: it
always compared against the first cache of the target node and
never against the rest. Therefore, the following example errors
out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration: the L1
cache under node #0 is still present, it just is not the first
<cache/> listed, so the old check (which only ever looked at the
first cache, the L3 one here) never found it.

Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
src/conf/numa_conf.c

index 50d57ba8f68fd1b6c1611c8acf5826f3e382f36e..9305e125b78de1021597a5fdf56224e0025737de 100644
@@ -1421,7 +1421,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
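
To see why the original line could only ever match the first cache,
here is a small standalone program (not libvirt code; the struct and
variable names are made up for illustration) that replays the failing
example, where the L3 cache is listed before the L1 one:

  #include <stdio.h>
  #include <stddef.h>

  typedef struct {
      unsigned int level;
  } Cache;

  int main(void)
  {
      /* Node #0 from the failing example: L3 listed before L1. */
      Cache caches[] = { { .level = 3 }, { .level = 1 } };
      size_t ncaches = sizeof(caches) / sizeof(caches[0]);
      unsigned int wanted = 1;     /* <latency ... cache='1'/> */
      size_t j;

      /* Buggy check: 'cache' always points at caches[0] (the L3 cache),
       * so level 1 is never seen and the loop runs to the end. */
      for (j = 0; j < ncaches; j++) {
          const Cache *cache = caches;       /* == &caches[0] every time */
          if (wanted == cache->level)
              break;
      }
      printf("buggy check: %s\n",
             j == ncaches ? "cache level 1 NOT found" : "cache level 1 found");

      /* Fixed check: look at the j-th cache on each iteration. */
      for (j = 0; j < ncaches; j++) {
          const Cache *cache = &caches[j];
          if (wanted == cache->level)
              break;
      }
      printf("fixed check: %s\n",
             j == ncaches ? "cache level 1 NOT found" : "cache level 1 found");

      return 0;
  }

With the buggy comparison the loop exhausts all entries without a
match, which is the condition the validator treats as a reference to
a non-existent cache level, so it rejects the otherwise valid XML
above.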