Laine Stump [Mon, 24 Mar 2025 23:28:33 +0000 (19:28 -0400)]
docs: add table showing guest IP/DNS/gateway settings when using SLIRP
When using the default SLIRP backend for <interface type='user'>, the
<ip address='blah' prefix='blur'/> setting doesn't behave as might be
expected (i.e. it doesn't set the guest interface IP/prefix to exactly
the provided values). This *should* have raised questions when users
originally encountered it, but instead it has become more apparent now
that people are contemplating switching from the SLIRP backend to
passt (with passt, the <ip> settings do behave "as expected").
In order to make this difference in behavior less mysterious, Yalan
Zhang kindly took the time to test and document the effect of various
representative <ip> settings on guest interface config when SLIRP is
used (see https://issues.redhat.com/browse/RHEL-46601); this patch
adds that same table to libvirt's documentation.
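For orientation, a minimal sketch of the kind of configuration the table
covers (the address, prefix and model values below are placeholders, not a
recommendation):

  <interface type='user'>
    <ip address='172.17.2.4' prefix='24'/>
    <model type='virtio'/>
  </interface>

With the SLIRP backend the guest interface does not simply end up with
172.17.2.4/24; the table documents which guest IP, gateway and DNS values
actually result from settings like these.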
Signed-off-by: Laine Stump <laine@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
ch: virCHProcessEvent() vm shutdown event handler fix
When the domain shutdown was executed from virsh, only the VM
process (a child of the CH monitor) was terminated. Since we assume
only one VM per monitor, the monitor process should also be
terminated.
Modified the VM shutdown event handler to match the VMM shutdown
behavior, ensuring the VM monitor stops along with the VM. Also
updated the virCHEventStopProcess job type, as it only destroys the
domain rather than modifying anything.
Signed-off-by: Kirill Shchetiniuk <kshcheti@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
ch: virCHProcessEvent() update domain info after reboot
When the domain was rebooted, some of its properties were changed but
not updated in the transient domain definition. This led to the
inability to connect to the serial console as its path had changed
during the reboot but was not updated in the domain definition.
Added VIR_CH_EVENT_VM_REBOOTED event handling to update the
information in the transient domain definition after the domain's
reboot completes, keeping it in a consistent state.
Signed-off-by: Kirill Shchetiniuk <kshcheti@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
ch: virCHMonitorNew() run new CH monitor daemonized
When a new CH monitor was started, it ran as a non-daemonized
process and was a child of the CH driver process. This led to a
situation where, if the CH driver died, the monitor process was
killed too, terminating the VM running under that monitor. In
effect, all VMs started under libvirt were terminated.
Run the new monitor daemonized to avoid shutting down the VMs when
the driver dies. Also add a pidfile and its preparation so that
the daemon's PID can be acquired.
Signed-off-by: Kirill Shchetiniuk <kshcheti@redhat.com> Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Peter Krempa [Tue, 1 Apr 2025 19:03:14 +0000 (21:03 +0200)]
qemu: Always revert internal snapshots via QMP rather than '-loadvm'
As all supported qemu versions now support the QMP internal snapshot
commands (QEMU_CAPS_SNAPSHOT_INTERNAL_QMP is always present) we can
remove the code for loading snapshots during startup via '-loadvm'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Tue, 1 Apr 2025 19:03:06 +0000 (21:03 +0200)]
qemu: snapshot: Always assume support for QEMU_CAPS_SNAPSHOT_INTERNAL_QMP
The 'snapshot-save' QMP command was introduced in 'qemu-6.0' and libvirt
now requires at least 'qemu-6.2'. Thus we can assume that the QMP
command can always be used.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Wed, 2 Apr 2025 07:39:03 +0000 (09:39 +0200)]
qemuSnapshotCreateActiveInternal: Fix error logic
The 'ret' variable is set to 0 before a call which can theoretically
fail (not really in practice, as the failure scenario involves only
object initialization).
Since the code already has another variable for checking monitor returns,
use that one properly so that the code makes sense.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Tue, 25 Mar 2025 16:46:45 +0000 (17:46 +0100)]
backup: Add support for passing server socket file descriptor to backup NBD server
In deployments where libvirt is containerized together with the VM it
may be hard for the management application to access listening sockets
inside the container from the outside.
This patch implements "transport='fd'" for the NBD server definition for
backups, which allows using the existing "virDomainFDAssociate()" to
pass an FD for a pre-opened server socket to qemu instead of having
qemu create the socket itself.
Add the schema, enable the parser, add the formatter, and implement the
actual FD passing in the qemu backup code.
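A rough sketch of what such a backup definition might look like (the
'fdgroup' attribute name is an assumption used for illustration here; the
schema added by this patch is authoritative):

  <domainbackup mode='pull'>
    <server transport='fd' fdgroup='nbd-listen'/>
    <disks>
      <disk name='vda' backup='yes'/>
    </disks>
  </domainbackup>

The pre-opened server socket FD would be associated with the domain via
virDomainFDAssociate() under the name referenced from the XML.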
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Spellchecked-by: Ján Tomko <jtomko@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Tue, 25 Mar 2025 16:32:23 +0000 (17:32 +0100)]
qemu: monitor: Support FD passing of sockets to 'qemuMonitorJSONNBDServerStart'
Upcoming patches will extend the FD passing infrastructure to the backup
job so that users can pass an opened socket instead of qemu opening it
itself, to bypass difficulties caused by containerizing libvirt.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since the refactor to use a proper enum type for the network transport, the
'transport' variable is no longer filled. Remove it and fix the error
message, which references it without using NULLSTR.
Fixes: 452695926dc Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Fri, 28 Mar 2025 07:26:17 +0000 (08:26 +0100)]
virDomainDiskDefValidateSourceChainOne: Fix validation of 'data-file' nesting
As the 'dataStore' is internally represented as a virStorageSource
object, it has provisions for nesting, which is not supported.
When I reviewed and modified the commit adding data-file parsing
support, I added code that was supposed to reject any 'backingStore'
and 'dataStore' structures nested in the source of a 'dataStore'.
Unfortunately the check was broken, as one of the terms checked the
presence of the parent's 'backingStore' instead of the nesting.
Peter Krempa [Tue, 25 Mar 2025 06:23:01 +0000 (07:23 +0100)]
esxConnectListAllDomains: Don't propagate failure to lookup a single domain
In esxConnectListAllDomains, if the lookup of the VM name and UUID fails
for a single VM (possible e.g. with broken storage), the whole API would
return failure even when there are working VMs.
Rework the lookup so that if a subset fails we ignore the failure for
those VMs. An error is reported only if lookup of all of the objects
failed, and the failure reported is from the last one.
Resolves: https://issues.redhat.com/browse/RHEL-80606 Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Philipp Schuster [Wed, 26 Mar 2025 12:51:41 +0000 (13:51 +0100)]
doc: remove wrong comment
This comment is wrong: qemuMigrationSrcRun() is called later, and it
checks whether TLS should be used and activates it. The comment refers
to QEMU's built-in TLS support.
It originates from a time when tunnelled migration was the only
way to get encryption.
Signed-off-by: Philipp Schuster <philipp.schuster@cyberus-technology.de>
When invoking virDomainSaveParams with a relative path, the image is
saved to the daemon's CWD. Similarly, when providing virDomainRestoreParams
with a relative path, it attempts to restore from the daemon's CWD. In most
configurations, the daemon's CWD is set to '/'. Ensure a relative path is
converted to absolute before invoking the driver domain{Save,Restore}Params
functions.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Pavel Hrdina [Mon, 24 Mar 2025 19:11:58 +0000 (20:11 +0100)]
qemu_driver: Fix virDomainSaveImageDefineXML
Commit 28a06215280b99708ed8dc2d183f62ba7b34ccf8 added support to restore
sparse images but changed the boolean that controls if we open the file
as read-only or read-write. Editing the XML in the save image resulted
in the following error message:
failed to write header to domain save file '/data/images/fedora40.save': Bad file descriptor
Signed-off-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Michal Privoznik [Tue, 27 Jun 2023 11:32:55 +0000 (13:32 +0200)]
qemu: Emit NIC_MAC_CHANGE event
So far, we only process the NIC_RX_FILTER_CHANGED event when the
corresponding device has 'trustGuestRxFilters' enabled, and the
event is emitted only for the virtio model. IOW, this is a fairly
limited situation and other scenarios don't emit any event (e.g. a
change of MAC address on a PCI passthrough device).
Resolves: https://issues.redhat.com/browse/RHEL-7035 Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Michal Privoznik [Tue, 27 Jun 2023 08:13:51 +0000 (10:13 +0200)]
Introduce NIC_MAC_CHANGE event
The aim of this event is to notify the management application that
the guest changed the MAC address on one of its vNICs so the app can
update its internal records, e.g. for finding a match between the
guest and host views of vNICs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com> Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Michal Privoznik [Tue, 27 Jun 2023 15:13:33 +0000 (17:13 +0200)]
qemu: Reflect MAC address change in live domain XML
If a guest changes the MAC address on its vNIC, then QEMU emits the
NIC_RX_FILTER_CHANGED event (the event is emitted in other cases
too, but that's not important right now). Now, the domain XML allows
users to choose whether to trust these events or not:
<interface trustGuestRxFilters='yes|no'/>
For the 'no' case no action is performed and the event is
ignored. For the 'yes' case, some host side features of the
corresponding vNIC (well, the tap/macvtap device) are tweaked to
reflect the changed MAC address. But what is missing is reflecting
this new MAC address in the domain XML.
Basically, what happens is: the host sees traffic with the new MAC
address, and all tools inside the guest see the new MAC address
(including 'virsh domifaddr --source agent'), which makes it
harder to match a device in the guest with the one in the domain
XML.
Therefore, report this new MAC address as another attribute of
the <mac/> element:
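As a sketch of the idea (the 'currentAddress' attribute name here is an
assumption for illustration, not necessarily the name the patch uses):

  <mac address='52:54:00:11:22:33' currentAddress='52:54:00:aa:bb:cc'/>

where 'address' keeps the originally configured MAC and the extra attribute
carries the address the guest switched to.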
Peter Krempa [Thu, 20 Mar 2025 15:17:23 +0000 (16:17 +0100)]
qemuAgentCheckError: Use 'VIR_ERR_AGENT_COMMAND_FAILED'
In the two cases when we know that the command returned failure, switch
to the new error code so that management applications can
programmatically detect failure of the guest agent command.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Thu, 20 Mar 2025 15:07:32 +0000 (16:07 +0100)]
qemuAgentCommandFull: Use VIR_ERR_AGENT_COMMAND_TIMEOUT when agent disappears
When the agent disappears after getting a proper command, we ought to
report the same error code as if we timed out, as it's uncertain whether
the guest agent did anything.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Peter Krempa [Thu, 20 Mar 2025 14:17:11 +0000 (15:17 +0100)]
qemu: agent: Differentiate timeouts when syncing from command timeout
As the guest agent code uses timeouts, it is possible that we stop
waiting before the guest agent replies. If this happens while syncing,
everything is okay because we didn't send any state-changing command.
When the timeout happens after a real command was transmitted,
it's unknown whether the guest agent processed it or not.
Use the new special error code VIR_ERR_AGENT_COMMAND_TIMEOUT for cases
when we sent non-sync commands, so that management applications or
users have the possibility to react to this situation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce a new special error code for guest agent commands.
The error code will be reported only when an actual command
(not a sync) was issued to the guest agent and the timeout was
reached.
This will allow users and management applications to differentiate
between the case when the sync timed out, and thus there's no risk of
the agent actually having executed the command, and the case when the
actual command was sent.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commits c2518f7bc7 and 28a0621528 introduced build failures on 32-bit
platforms by using incorrect format specifiers with g_strdup_printf.
In one case, an 'unsigned long' format specifier is used with a
'long long int' variable. Fix by changing the format specifier to the
one for 'uintmax_t' and casting the variable likewise.
In a second case, an 'unsigned long' format specifier is used with a
'size_t' variable, which is 'unsigned int' on 32-bit systems. Fix by
changing the format specifier to use the 'z' modifier.
Fixes: c2518f7bc7dd4f8ab8655a12ec3a000e1eb5b232 Fixes: 28a06215280b99708ed8dc2d183f62ba7b34ccf8 Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Pavel Hrdina [Thu, 20 Mar 2025 22:42:05 +0000 (23:42 +0100)]
qemu: remove VIR_DOMAIN_SAVE_PARALLEL flag
There is no need to use an extra flag in addition to the new
"parallel.channels" param.
Using the flag without the param would result in using an uninitialized
variable. Fixing that would mean either reporting an error that parallel
channels cannot be less than 1, or defaulting to 1.
Using the param without the flag is simply ignored.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Pavel Hrdina [Thu, 20 Mar 2025 22:17:11 +0000 (23:17 +0100)]
tools: remove --parallel from virsh save command
There is no need to have both --parallel and --parallel-channels, especially
when --parallel on its own is the same as not using it at all. In both cases
libvirt will default to a single channel.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Pavel Hrdina [Thu, 20 Mar 2025 22:14:06 +0000 (23:14 +0100)]
tools: remove --parallel from virsh restore command
There is no need to have both --parallel and --parallel-channels, especially
when --parallel on its own is the same as not using it at all. In both cases
libvirt will default to a single channel.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com> Reviewed-by: Jim Fehlig <jfehlig@suse.com>
qemu/dbus: Allow connections from root to the dbus-daemon
In commit dbfb96d18c04 libvirt started connecting to the daemon to set
RDP credentials, but our configuration file did not allow connections
from the root user, so the connection failed and the VM failed to start.
In order to avoid such an issue, allow root to connect if the daemon is
running privileged.
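A minimal sketch of the kind of rule this implies in the dbus-daemon
configuration (standard D-Bus busconfig policy syntax; the exact stanza
libvirt generates is an assumption here):

  <busconfig>
    <policy context='default'>
      <allow user='root'/>
    </policy>
  </busconfig>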
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
include: Define constants for parallel save/restore
Add a new VIR_DOMAIN_SAVE_PARALLEL flag to the save and restore APIs,
which can be used to specify the use of multiple, parallel channels
for saving and restoring a domain. The number of parallel channels
can be set using the VIR_DOMAIN_SAVE_PARAM_PARALLEL_CHANNELS
typed parameter.
Signed-off-by: Claudio Fontana <cfontana@suse.de> Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Wed, 24 Jul 2024 17:24:47 +0000 (11:24 -0600)]
qemu: Support O_DIRECT with mapped-ram on restore
When using the mapped-ram migration capability, direct IO is
enabled by setting the "direct-io" migration parameter to
"true" and passing QEMU an additional fd with O_DIRECT set.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Mon, 22 Jul 2024 17:34:44 +0000 (11:34 -0600)]
qemu: Support O_DIRECT with mapped-ram on save
When using the mapped-ram migration capability, direct IO is
enabled by setting the "direct-io" migration parameter to
"true" and passing QEMU an additional fd with O_DIRECT set.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Tue, 21 Jan 2025 23:39:20 +0000 (16:39 -0700)]
qemu: Apply migration parameters in qemuMigrationDstRun
Similar to qemuMigrationSrcRun, apply migration parameters in
qemuMigrationDstRun. This allows callers to create customized
migration parameters, but delegates their application to the
function performing the migration.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Mon, 22 Jul 2024 23:12:21 +0000 (17:12 -0600)]
qemu: Move creation of qemuProcessIncomingDef struct
qemuProcessStartWithMemoryState() is the only caller of qemuProcessStart()
that uses the qemuProcessIncomingDef struct. Move creation of the struct
to qemuProcessStartWithMemoryState().
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Tue, 14 Jan 2025 23:13:20 +0000 (16:13 -0700)]
qemu: Add support for mapped-ram on save
Introduce support for QEMU's new mapped-ram stream format [1].
mapped-ram can be enabled by setting the 'save_image_format'
setting in qemu.conf to 'sparse'.
To use mapped-ram with QEMU:
- The 'mapped-ram' migration capability must be set to true
- The 'multifd' migration capability must be set to true and
the 'multifd-channels' migration parameter must be set to 1
- QEMU must be provided an fdset containing the migration fd
- The 'migrate' qmp command is invoked with a URI referencing the
fdset and an offset at which to start reading or writing the data
stream, e.g.
The mapped-ram stream, in conjunction with direct IO and multifd
support provided by subsequent patches, can significantly improve
the time required to save VM memory state. The following tables
compare mapped-ram with the existing, sequential save stream. In
all cases, the save and restore operations are to/from a block
device comprised of two NVMe disks in RAID0 configuration with
xfs (~8600MiB/s). The values in the 'save time' and 'restore time'
columns were scraped from the 'real' time reported by time(1). The
'Size' and 'Blocks' columns were provided by the corresponding
outputs of stat(1).
As can be seen from the tables, one caveat of mapped-ram is that the logical
file size of a saved image is basically equivalent to the VM memory size.
Note however that mapped-ram typically uses fewer blocks on disk, hence
the name 'sparse' for 'save_image_format'.
Also note the mapped-ram stream is incompatible with the existing stream
format, hence mapped-ram cannot be used to restore an image saved with
the existing format and vice versa.
Jim Fehlig [Wed, 17 Jul 2024 23:04:43 +0000 (17:04 -0600)]
qemu: Add helper function for creating save image fd
Move the code in qemuSaveImageCreate that opens, labels, and wraps the
save image fd to a helper function, providing more flexibility for
upcoming mapped-ram support.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Mon, 13 Jan 2025 22:57:50 +0000 (15:57 -0700)]
qemu_saveimage: add "sparse" to supported save image formats
Extend the list of formats to include "sparse", which uses QEMU's mapped-ram
stream format [1] to write guest memory blocks at fixed offsets in the save
image file.
Jim Fehlig [Wed, 29 May 2024 22:31:05 +0000 (16:31 -0600)]
qemu: Add function to get migration params for save
Introduce qemuMigrationParamsForSave() to create a
qemuMigrationParams object initialized with appropriate migration
capabilities and parameters for a save operation.
Note that the mapped-ram capability also requires the multifd capability.
For now, the number of multifd channels is set to 1. Future work
to support parallel save/restore can set the number of channels to
a user-specified value.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Jim Fehlig [Thu, 6 Feb 2025 22:20:27 +0000 (15:20 -0700)]
qemu: Add function to get FDPass object from monitor
Add new function qemuFDPassNewFromMonitor to get an fdset previously
passed to qemu, based on the 'prefix' provided when the qemuFDPass
object was initially created.
Signed-off-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com>
* Fixed alignment of child elements in the XML
* Fixed placement of the throttlegroups element
* Removed completer wrapper
Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:21 +0000 (22:27 +0530)]
virsh: Add support for throttle group operations
Implement new throttle cmds
* Add new virsh cmds: domthrottlegroupset, domthrottlegrouplist,
domthrottlegroupinfo, domthrottlegroupdel
* Add doc for new cmds at docs/manpages/virsh.rst
* Add cmd helper "virshDomainThrottleGroupCompleter", which is used by
domthrottlegroupset, domthrottlegroupinfo, domthrottlegroupdel
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Update of code documentation comments.
* Reimplement getting the throttle group from XML.
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com>
* Fixed memleaks
* Rewrote getter to avoid extra copies
* Simplified name extractor
Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:20 +0000 (22:27 +0530)]
virsh: Refactor iotune options for re-use
Define a macro for iotune options; this macro is used by opts_blkdeviotune and
later by the throttle group opts.
Signed-off-by: Chun Feng Wu <danielwuwy@163.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
* Add tests for throttlefilter nodename parse and format for the status XML
(disk/privateData/nodenames/nodename with type='throttle-filter')
* Add tests for disks with iotune limits to make sure the iotune refactoring works
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Isolate status xml test
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:17 +0000 (22:27 +0530)]
qemuxmlconftest: Add 'throttlefilter' tests
* Add tests for throttlegroup domain xml processing, including
groups referenced and not referenced by filters
* Add tests for throttlefilter domain xml processing, including
throttle group referenced by different disks
* Add a negative test case to report an error when iotune is configured
together with throttle filters
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Isolate domain xml test
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com>
* Added test case with copy on read
Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:16 +0000 (22:27 +0530)]
config: validate: Verify iotune, throttle group and filter
Refactor the iotune verification and verify some rules:
* Disk iotune validation can be reused for throttle group validation;
refactor it into the common method "virDomainDiskIoTuneValidate"
* Add "virDomainDefValidateThrottleGroups" to validate throttle groups,
which in turn calls "virDomainDiskIoTuneValidate"
* Make sure the referenced throttle group exists
* "iotune" and "throttlefilters" are mutually exclusive for a given disk
(see the sketch below)
* Throttle filters cannot be used together with a CDROM
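A rough sketch of how the pieces relate (child element and attribute names
below are assumptions for illustration; the schema and tests in this series
are authoritative). A throttle group is defined at the domain level and then
referenced from a disk's throttle filter, and such a disk must not also carry
an <iotune> element:

  <throttlegroups>
    <throttlegroup>
      <group_name>limit0</group_name>
      <total_bytes_sec>10000000</total_bytes_sec>
    </throttlegroup>
  </throttlegroups>

  <disk type='file' device='disk'>
    <source file='/var/lib/libvirt/images/data.qcow2'/>
    <target dev='vdb' bus='virtio'/>
    <throttlefilters>
      <throttlefilter group='limit0'/>
    </throttlefilters>
  </disk>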
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Update of code documentation comments.
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com>
* Moved validation code from parser to validator
* Removed dead checks after validation
Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:15 +0000 (22:27 +0530)]
qemu: block: Support block disk along with throttle filters
For hot attaching/detaching:
* Leverage qemuBlockThrottleFiltersData to prepare the attach/detach
throttle filter data for qemuMonitorBlockdevAdd and qemuMonitorBlockdevDel
* For hot attaching, within qemuDomainAttachDiskGeneric, prepare the
throttle filter JSON data and create the corresponding blockdev for the
QMP request ("blockdev-add" with "driver":"throttle")
* Each filter has a nodename and the filters are chained up; create them
in sequence and delete them in reverse order
* Delete the filters via "qemuBlockThrottleFiltersDetach" ("blockdev-del")
when detaching the device
For the throttle group command line:
* Add qemuBuildThrottleGroupCommandLine in qemuBuildCommandLine to add
the throttle-group "object"
* Verify the throttle group definition when launching the VM
* Check QEMU_CAPS_OBJECT_JSON before "qemuBuildObjectCommandlineFromJSON",
which builds the "-object" option
For the throttle filter command line:
* Add qemuBuildDiskThrottleFiltersCommandLine in qemuBuildDiskCommandLine
to add the "blockdev"
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Apply suggested coding style changes.
* Update of code documentation comments.
Chun Feng Wu [Wed, 19 Feb 2025 16:57:14 +0000 (22:27 +0530)]
qemu: helper: throttle filter nodename and preparation processing
It contains throttle filter nodename processing (new nodename,
top nodename, parse and format nodename) and throttle filter
attach/detach preparation and implementation.
* Updated "qemuDomainDiskGetTopNodename", so if throttlefilter is used
together with copyOnRead, top node is throttle filter node, e.g.
device -> throttle -> copyOnRead Layer-> image chain
* In qemuBuildThrottleFiltersAttachPrepareBlockdev, if copy_on_read
is on, build throttle nodename chain on top of copy_on_read nodename
* In the status XML, the throttle filter nodename (virDomainDiskDef.nodename)
is saved at disk/privateData/nodenames/nodename (type='throttle-filter');
the corresponding parse/format sits in qemuDomainDiskPrivateParse and
qemuDomainDiskPrivateFormat
* If the filter nodename hasn't been set by qemuDomainDiskPrivateParse,
in qemuDomainPrepareThrottleFilterBlockdev the filter nodename index
can be generated by reusing qemuDomainStorageIDNew, and the current
global sequence number is persisted in
virDomainObj->privateData (qemuDomainObjPrivate)->nodenameindex.
qemuDomainPrepareThrottleFilterBlockdev is called by
qemuDomainPrepareDiskSourceBlockdev, which in turn is used by both
hotplug and qemuProcessStart to prepare the throttle filter node name
* Define the method qemuBlockThrottleFilterGetProps, which is used by
both hotplug and the command line code to build the throttle object for QEMU
* Define methods for throttle filter attach/detach/rollback
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Apply suggested coding style changes.
* Update of code documentation comments.
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:13 +0000 (22:27 +0530)]
qemu: Implement qemu driver for throttle API
ThrottleGroup lifecycle implementation. Note that in QOM the throttlegroup
name is prefixed with "throttle-" to clearly separate throttle group objects
into their own namespace.
* "qemuDomainSetThrottleGroup", this method is to add("object-add") or update("qom-set")
throttlegroup in QOM and update corresponding objects in DOM
* "qemuDomainGetThrottleGroup", this method queries throttlegroup info by groupname
* "qemuDomainDelThrottleGroup", this method checks if group is referenced by any throttle
in disks and delete it if it's not used anymore
* Check flag "QEMU_CAPS_OBJECT_JSON" during qemuDomainSetThrottleGroup when vm is active,
throttle group feature requries such flag
* "objectAddNoWrap"("props") check is done by reusing qemuMonitorAddObject in
qemuDomainSetThrottleGroup
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Apply suggested coding style changes.
* cleanup qemu Get ThrottleGroup.
* Update the version to 11.1.0.
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com>
* Removed QEMU_CAPS_OBJECT_JSON check as the flag no longer exists.
* Update the version to 11.2.0.
Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:12 +0000 (22:27 +0530)]
qemu: Refactor qemuDomainSetBlockIoTune to extract common methods
Extract common methods from "qemuDomainSetBlockIoTune" to be reused
by the throttle handling later. The common methods include:
* "qemuDomainValidateBlockIoTune", which is to validate that PARAMS
contains only recognized parameter names with correct types
* "qemuDomainSetBlockIoTuneFields", which is to load parameters into
internal object virDomainBlockIoTuneInfo
* "qemuDomainCheckBlockIoTuneMutualExclusion", which is to check rules
like "total and read/write of bytes_sec cannot be set at the same time"
* "qemuDomainCheckBlockIoTuneMax", which is to check "max" rules within iotune
Chun Feng Wu [Wed, 19 Feb 2025 16:57:11 +0000 (22:27 +0530)]
remote: New APIs for ThrottleGroup lifecycle management
Defined new public APIs:
* virDomainSetThrottleGroup adds or updates a throttlegroup within a specific
domain; it will later be referenced by a throttlefilter in a disk to apply limits
* virDomainGetThrottleGroup gets throttlegroup info; the old style (APIs that
query first for the number of parameters and then supply a reasonably-sized
pointer) is discarded, and instead the new approach is adopted where the
API returns an allocated array of fields and the number of fields in it.
* virDomainDelThrottleGroup deletes a throttlegroup; it fails if the throttlegroup
is still referenced by some throttlefilter
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* Reimplement getter API to fetch data from XML.
* Apply suggested coding style changes.
* Update of code documentation comments.
* Update the version to 11.2.0.
Signed-off-by: Harikumar Rajkumar <harirajkumar230@gmail.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> Reviewed-by: Ján Tomko <jtomko@redhat.com>
Chun Feng Wu [Wed, 19 Feb 2025 16:57:10 +0000 (22:27 +0530)]
tests: Test qemuMonitorJSONGetThrottleGroup and qemuMonitorJSONUpdateThrottleGroup
Within "testQemuMonitorJSONqemuMonitorJSONUpdateThrottleGroup"
* Test qemuMonitorJSONGetThrottleGroup
* Test qemuMonitorJSONUpdateThrottleGroup, which updates limits through "qom-set"
Signed-off-by: Chun Feng Wu <danielwuwy@163.com>
* fix test