access: ensure ACL files are rebuilt when protocol changes
Meson is not told that the .x protocol files are an input for the
generator, so it doesn't know to set up a rebuild dependency.
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
qemu: fix concurrency crash bug in force snapshot revert
This patch is just a revert of [1]. We should NOT pass
QEMU_ASYNC_JOB_NONE, as that patch suggests, while we are in an async
job, because doing so prevents nested jobs from being acquired
correctly. Patch [1] tried to fix issues introduced by another patch
[2], where jobs were mistakenly cleared out in qemuProcessStop. A later
patch [3] fixed the issue introduced by [2]. Now we need to revert [1]
as well, because we still have the same concurrency crash issues that
[3] described, but for the force revert.
[1] 0c4408c83: qemu: Don't use asyncJob after stop during snapshot revert
[2] 888aa4b6b: qemuDomainObjPrivateDataClear: Don't leak @migParams
[3] d75f865fb: qemu: fix concurrency crash bug in snapshot revert
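To illustrate the job rule this revert restores, here is a minimal
self-contained model (all names invented, not libvirt's actual API): a
thread that owns an async job must name that job when acquiring nested
jobs; claiming QEMU_ASYNC_JOB_NONE from inside an async job defeats the
ownership check.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of the nested-job rule; all names are illustrative. */
    typedef enum { ASYNC_JOB_NONE, ASYNC_JOB_START } asyncJob;

    typedef struct {
        asyncJob asyncActive; /* async job currently owning the domain */
    } domainObj;

    /* A nested (monitor) job may only be acquired by a caller that
     * names the active async job; any other caller has to wait. */
    static bool
    beginNestedJob(domainObj *dom, asyncJob caller)
    {
        if (dom->asyncActive != ASYNC_JOB_NONE &&
            caller != dom->asyncActive) {
            fprintf(stderr, "caller must wait for the async job\n");
            return false;
        }
        return true;
    }

    int main(void)
    {
        domainObj dom = { .asyncActive = ASYNC_JOB_START };

        beginNestedJob(&dom, ASYNC_JOB_START); /* correct: names its job */
        beginNestedJob(&dom, ASYNC_JOB_NONE);  /* the reverted bug: denied */
        return 0;
    }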
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Peter Krempa [Tue, 15 Sep 2020 15:58:04 +0000 (17:58 +0200)]
qemuBuildHostdevSCSIAttachPrepare: Propagate 'readonly' flag also for iSCSI
The 'readonly' hostdev property is stored separately from the
virStorageSource, as some hostdevs are not described by a
virStorageSource. We need to propagate the flag to the virStorageSource
for iSCSI backends too, as it's used to generate the backend
properties.
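A minimal sketch of the propagation, with stand-in types (the real code
operates on libvirt's hostdev definition and virStorageSource):

    #include <stdbool.h>

    /* Stand-in types; names are illustrative only. */
    struct source  { bool readonly; };
    struct hostdev { bool readonly; struct source *src; };

    /* iSCSI backend properties are generated from the storage source,
     * so the hostdev-level flag has to be mirrored onto it first. */
    static void
    prepareBackendSource(struct hostdev *dev)
    {
        dev->src->readonly = dev->readonly;
    }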
Peter Krempa [Thu, 10 Sep 2020 13:43:54 +0000 (15:43 +0200)]
qemuDomainPrepareHostdev: Don't base backend nodename on device alias
QEMU's blockdev nodenames, which are used to back SCSI/iSCSI hostdevs,
are limited to 32 characters. If a user passes a very long alias as the
name of the host device, it's easy to end up with a too-long nodename.
To prevent this from happening, don't base the nodename on the possibly
user-specified alias, but on the normal sequential node name generator.
We then store the name in the status XML for further use.
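A hedged sketch of the idea; the exact format string is an assumption,
not necessarily what libvirt emits:

    #include <glib.h>

    /* Derive the backend nodename from a sequential index rather than
     * the unbounded user alias, so it stays under QEMU's 32-character
     * limit ("libvirt-%u-backend" is illustrative). */
    static char *
    generateBackendNodename(unsigned int idx)
    {
        return g_strdup_printf("libvirt-%u-backend", idx);
    }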
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Thu, 10 Sep 2020 10:32:04 +0000 (12:32 +0200)]
qemu: domain: Extract preparation of hostdev specific data to a separate function
Historically we've prepared secrets for all objects in one place. This
doesn't make much sense; it's semantically more appealing to prepare
everything for a single device type in one place.
Move the setup of the (iSCSI|SCSI) hostdev secrets into a new function
which will be used to set up other things as well in the future.
This is similar to the approach we take for disks.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This was a hack from when we were locally regenerating the nodename, to
ensure it was not leaked. Now that we use a proper virStorageSource
with persistence, it's no longer required.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Thu, 10 Sep 2020 06:25:37 +0000 (08:25 +0200)]
qemuBuildHostdevSCSI(A|De)tachPrepare: Use virStorageSource in def for SCSI hostdevs
Modify the attach/detach data generators to actually use the
virStorageSource structure embedded in the SCSI config data rather than
creating an ad-hoc internal one.
The modification will allow us to properly store the nodename used for
the backend in the status XML rather than re-generating it all the
time.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Wed, 9 Sep 2020 15:58:34 +0000 (17:58 +0200)]
qemu: domain: Fill in (i)SCSI backend nodename if it is not present in status XML
For upgrade reasons, so that we can modify the nodename that is used,
we must generate the old version for all status XMLs which don't have
it stored explicitly.
The change will be required as using the user-provided alias may result
in too-long nodenames which will be rejected by qemu.
Add code which fills in the appropriate old value and add test cases to
validate that it's added and also that existing nodenames are not
overwritten.
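A hedged sketch of the backfill; the field access and the alias-based
format string are assumptions modeled on the naming this series
replaces:

    /* If an older libvirt wrote the status XML without the backend
     * nodename, reconstruct the value the old alias-based code would
     * have generated (hypothetical format shown), so the running VM
     * keeps working across the upgrade. */
    if (!src->nodestorage)
        src->nodestorage = g_strdup_printf("libvirt-%s-backend", alias);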
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Wed, 9 Sep 2020 13:27:45 +0000 (15:27 +0200)]
conf: Add virStorageSource member for SCSI host device config data
The backend for the SCSI host device is a storage source. While the
definition doesn't look like one, it's converted to a storage source
when the VM is running.
Add the storage source to the definition object and also parse/format
its private data which will be used for internal state storage while
the VM is running.
Note that the virStorageSourcePtr may not be allocated all the time, so
the private data parser allocates it if there is any private data
present.
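A hedged sketch of that lazy allocation (the field name is an
assumption, not necessarily the real one):

    /* The definition may legitimately carry no virStorageSource, so
     * the private-data parser creates one only when there is private
     * data to store in it. */
    if (dataPresent && !scsisrc->src)
        scsisrc->src = virStorageSourceNew();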
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Lin Ma [Fri, 11 Sep 2020 07:13:07 +0000 (15:13 +0800)]
vshReadlineParse: Ignore vshReadlineOptionsGenerator for VSH_OT_INT options
Commit c7151b0 added completion for VSH_OT_INT options, e.g. '--cellno'
and '--pagesize', so we need to ignore VSH_OT_INT here, otherwise we
get incorrect completions.
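A hedged sketch of the check inside the generator (surrounding context
simplified):

    /* Integer-valued options now have their own value completer, so
     * the generic option generator must skip them. */
    if (opt->type == VSH_OT_INT)
        continue;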
A lot of virReportError() calls use VIR_ERR_INTERNAL_ERROR as the error
number, even in cases where there is a more fitting one. Hence, replace
some of them with better virErrorNumber values.
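An illustrative before/after; the right virErrorNumber depends on the
call site:

    /* before: everything funneled through VIR_ERR_INTERNAL_ERROR */
    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                   _("operation not supported"));

    /* after: use the code that actually matches the failure */
    virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                   _("operation not supported"));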
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When a list is freed, we iterate through all the items, invoking the
free function for each; the free function called for each element is
the one for the element's actual type, and thus the @_next pointer in
the element struct has the same type as the element itself.
Currently, the free function gets the parent of the current element
type, and invokes its free function to continue freeing the list.
However, in case the class hierarchy has more than one level
(e.g. Class <- SubClass <- SubSubClass), the invoked free function is
only the parent class' one, and not that of the actual base class of
the hierarchy.
To fix that, change the generator to get the base class of a class, and
invoke that instead. Also, avoid setting @_next back, as it is not
needed.
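A self-contained model of the fix (invented names, not the generated
code): continue the walk through the hierarchy's base free, which
dispatches on each element's real type, rather than through the parent
class' free.

    #include <stdlib.h>

    typedef enum { TYPE_SUB, TYPE_SUBSUB } elemType;

    typedef struct _Sub Sub;
    struct _Sub { elemType _type; Sub *_next; };

    typedef struct { Sub parent; char *extra; } SubSub;

    static void subFree(Sub *e)       { free(e); }
    static void subSubFree(SubSub *e) { free(e->extra); free(e); }

    /* Base-class free: dispatch on each element's dynamic type.
     * Continuing via the parent class' free (subFree) instead would
     * leak SubSub's @extra for the rest of the list. */
    static void
    baseFree(Sub *e)
    {
        while (e) {
            Sub *next = e->_next;
            if (e->_type == TYPE_SUBSUB)
                subSubFree((SubSub *)e);
            else
                subFree(e);
            e = next;
        }
    }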
Ian Wienand [Fri, 11 Sep 2020 11:34:17 +0000 (21:34 +1000)]
network: Only check kernel added routes in virNetDevIPCheckIPv6Forwarding
The original motivation for adding virNetDevIPCheckIPv6Forwarding
(commit 00d28a78b5) was that networking routes would disappear when
ipv6 forwarding was enabled for an interface.
This is a fairly undocumented side-effect of the "accept_ra" sysctl for
an interface: 1 means the interface accepts RAs if not forwarding, 2
means it always accepts RAs; but it is not explained that enabling
forwarding when accept_ra == 1 will also clear any kernel RA-assigned
routes, very likely breaking your networking.
The check to warn about this currently uses netlink to go through all
the routes and then look at the accept_ra status of the interfaces.
However, it has been noticed that this problem does not affect systems
where IPv6 RA configuration is handled in userspace, e.g. via tools
such as NetworkManager. In this case, the error message from libvirt
is spurious, and modifying the forwarding state will not affect the RA
state or disable your networking.
If you look at the function rt6_purge_dflt_routers() in the kernel, you
can see that the routes being purged are only those with the kernel's
RTF_ADDRCONF flag set; that is, routes added by the kernel's RA
handling. Why does it do this? I think this is a Linux implementation
decision; it has always been like that, and there are some comments
suggesting that it is because a router should be statically configured,
rather than accepting external configurations.
The solution implemented here is to convert the existing check into a
walk of /proc/net/ipv6_route (because RTF_ADDRCONF is apparently not
exposed in netlink), looking for routes with this flag set. We then
check the accept_ra status for the interface, and raise an error if
enabling forwarding would break things.
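A hedged sketch of such a walk (not libvirt's actual implementation);
RTF_ADDRCONF comes from <linux/ipv6_route.h>, and the flags are the
ninth field of /proc/net/ipv6_route:

    #include <stdio.h>
    #include <linux/ipv6_route.h>  /* RTF_ADDRCONF */

    int main(void)
    {
        FILE *fp = fopen("/proc/net/ipv6_route", "r");
        unsigned int flags;
        char dev[17];

        if (!fp)
            return 1;
        /* fields: dst plen src plen nexthop metric refcnt use flags dev */
        while (fscanf(fp, "%*s %*x %*s %*x %*s %*x %*x %*x %x %16s",
                      &flags, dev) == 2) {
            if (flags & RTF_ADDRCONF)
                printf("kernel RA-assigned route on %s\n", dev);
        }
        fclose(fp);
        return 0;
    }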
This should hopefully avoid giving "interactive" users, who are likely
to be using NetworkManager and the like, false warnings when enabling
IPv6, while retaining the error check for users relying on kernel-based
IPv6 interface auto-configuration.
Signed-off-by: Ian Wienand <iwienand@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cedric Bosdonnat <CBosdonnat@suse.com>
Lin Ma [Fri, 11 Sep 2020 07:06:09 +0000 (15:06 +0800)]
qemu: Return perf status that affects next boot for shutoff domains
When we set up perf events for a shutoff domain and then check the
settings, all of the perf events are reported as 'disabled' unless we
add --config; having to add --config is redundant for a shutoff domain.
Use virDomainObjGetOneDefState instead of virDomainObjGetOneDef to fix
the issue. After the patch, the perf event status of a shutoff domain
is reported correctly:
Signed-off-by: Lin Ma <lma@suse.de>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Erik Skultety [Fri, 11 Sep 2020 12:44:27 +0000 (14:44 +0200)]
qemu: qemuDomainPMSuspendAgent: Don't assign to 'ret' in a conditional
When the guest agent isn't running, we still report success on a PM
suspend action even though we logged an error correctly; this is
because we poisoned the 'ret' value a few lines above.
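An illustrative reduction of the pattern (helper names are invented,
not the actual libvirt calls):

    static int
    pmSuspendAgent(void)
    {
        int ret = -1;

        if ((ret = acquireJob()) < 0)  /* ret becomes 0 on success */
            goto cleanup;

        if (!agentAvailable())         /* error reported here ... */
            goto cleanup;              /* ... but ret is still 0 */

        ret = doSuspend();
     cleanup:
        return ret;
    }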
Fixes: a663a860819287e041c3de672aad1d8543098ecc
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Michal Privoznik [Fri, 11 Sep 2020 09:50:33 +0000 (11:50 +0200)]
virnuma: Use numa_nodes_ptr when checking available NUMA nodes
In v6.7.0-rc1~86 I tried to fix a problem where we were not
detecting NUMA nodes properly, because we had misused the behaviour
of a libnuma API; as it turned out, the behaviour was correct for
hosts with 64 CPUs in one NUMA node. So I changed the code to use
nodemask_isset(&numa_all_nodes, ..) instead, and it fixed the
problem on such hosts. However, what I did not realize is that
numa_all_nodes does not reflect all NUMA nodes visible to
userspace; it contains only those nodes that the process
(libvirtd) can allocate memory from, which can be only a subset of
all NUMA nodes. The bitmask that contains all NUMA nodes visible
to userspace, and which I should have used, is numa_nodes_ptr.
For curious ones:
And as I was fixing virNumaGetNodeCPUs() I came to realize that
we already have a function that wraps the correct bitmask:
virNumaNodeIsAvailable().
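A hedged sketch of the distinction, using the libnuma symbols named
above:

    #include <numa.h>
    #include <stdbool.h>

    /* numa_all_nodes: only the nodes this process may allocate from */
    static bool
    nodeIsAllowedForAlloc(int node)
    {
        return nodemask_isset(&numa_all_nodes, node);
    }

    /* numa_nodes_ptr: every NUMA node visible to userspace; this is
     * the bitmask virNumaNodeIsAvailable() wraps */
    static bool
    nodeIsVisible(int node)
    {
        return numa_bitmask_isbitset(numa_nodes_ptr, node);
    }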
Fixes: 24d7d85208f812a45686b32a0561cc9c5c9a49c9
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1876956
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>