+Fri Feb 29 12:50:00 UTC 2008 Richard W.M. Jones <rjones@redhat.com>
+
+ Many typos fixed (Atsushi SAKAI).
+
Thu Feb 28 18:04:59 CET 2008 Jim Meyering <meyering@redhat.com>
Rewrite test-coverage rules to accommodate multiple .o files per .c.
error is logged, allowing it to be retrieved later, and if the user registered an
error callback it will be called synchronously. Once the call to libvirt ends
the error can be detected by the return value and the full information for
-the last logged error can be retrieved.</p><p>To avoid as much as prossible troubles with a global variable in a
+the last logged error can be retrieved.</p><p>To avoid as much as possible troubles with a global variable in a
multithreaded environment, libvirt will, when possible, associate errors with
the current connection they relate to; that way the error is stored in a
dynamic structure which can be made thread-specific. The error callback can be
<li>conn: if available a pointer to the <a href="html/libvirt-libvirt.html#virConnectPtr">virConnectPtr</a>
connection to the hypervisor where this happened</li>
<li>dom: if available a pointer to the <a href="html/libvirt-libvirt.html#virDomainPtr">virDomainPtr</a> domain
- targetted in the operation</li>
+ targeted in the operation</li>
</ul><p>and then extra raw information about the error which may be initialized
to 0 or NULL if unused</p><ul><li>str1, str2, str3: string information, usually str1 is the error
message format</li>
</ul><p>So usually, setting up specific error handling with libvirt consists of
registering a handler with <a href="html/libvirt-virterror.html#virSetErrorFunc">virSetErrorFunc</a> or
with <a href="html/libvirt-virterror.html#virConnSetErrorFunc">virConnSetErrorFunc</a>,
-chech the value of the code value, take appropriate action, if needed let
+check the code value, take appropriate action, and if needed let
libvirt print the error on stderr by calling <a href="html/libvirt-virterror.html#virDefaultErrorFunc">virDefaultErrorFunc</a>.
For asynchronous error handling, set such a function doing nothing to avoid
the error being reported on stderr, and call virConnGetLastError or
<li>memory: the maximum memory allocated to the domain in kilobytes</li>
<li>vcpu: the number of virtual CPUs configured for the domain</li>
<li>os: a block describing the Operating System, its content will be
- dependant on the OS type
+ dependent on the OS type
<ul><li>type: indicates the OS type, always linux at this point</li>
<li>kernel: path to the kernel on the Domain 0 filesystem</li>
<li>initrd: an optional path for the init ramdisk on the Domain 0
pointing to an additional program in charge of emulating the devices</li>
<li>the disk entry indicates in the dev target section that the emulation
for the drive is the first IDE disk device hda. The list of device names
- supported is dependant on the Hypervisor, but for Xen it can be any IDE
+ supported is dependent on the Hypervisor, but for Xen it can be any IDE
device <code>hda</code>-<code>hdd</code>, or a floppy device
<code>fda</code>, <code>fdb</code>. The <code><disk></code> element
  also supports a 'device' attribute to indicate what kind of hardware to
of the box which does NAT'ing to the default route and has an IP range of
<code>192.168.22.0/255.255.255.0</code>. Each guest will have an
associated tun device created with a name of vnetN, which can also be
- overriden with the <target> element. Example configs are:</p>
+ overridden with the <target> element. Example configs are:</p>
<pre><interface type='network'>
<source network='default'/>
</interface>
<p>Provides a bridge from the VM directly onto the LAN. This assumes
  there is a bridge device on the host which has one or more of the host's
physical NICs enslaved. The guest VM will have an associated tun device
- created with a name of vnetN, which can also be overriden with the
+ created with a name of vnetN, which can also be overridden with the
<target> element. The tun device will be enslaved to the bridge.
The IP range / network configuration is whatever is used on the LAN. This
provides the guest VM full incoming & outgoing net access just like a
<li>Generic connection to LAN
<p>Provides a means for the administrator to execute an arbitrary script
to connect the guest's network to the LAN. The guest will have a tun
- device created with a name of vnetN, which can also be overriden with the
+ device created with a name of vnetN, which can also be overridden with the
<target> element. After creating the tun device a shell script will
be run which is expected to do whatever host network integration is
required. By default this script is called /etc/qemu-ifup but can be
- overriden.</p>
+ overridden.</p>
<pre><interface type='ethernet'/>
<interface type='ethernet'>
</features>
</guest></span>
...
-</capabilities></pre><p>The first block (in red) indicates the host hardware capbilities, currently
+</capabilities></pre><p>The first block (in red) indicates the host hardware capabilities; currently
it is limited to the CPU properties, but other information may be available.
It shows the CPU architecture and the features of the chip (the feature
block is similar to what you will find in a Xen fully virtualized domain
under the <a href="http://www.opensource.org/licenses/lgpl-license.html">GNU
Lesser General Public License</a>. Virtualization of the Linux Operating
System means the ability to run multiple instances of Operating Systems
-concurently on a single hardware system where the basic resources are driven
+concurrently on a single hardware system where the basic resources are driven
by a Linux (or Solaris) instance. The library aims at providing a long term
stable C API initially for <a href="http://www.cl.cam.ac.uk/Research/SRG/netos/xen/index.html">Xen
paravirtualization</a> but it can also integrate with other
Ruby bindings (David Lutterkort), SASL based authentication for
libvirt remote support (Daniel Berrange), PolicyKit authentication
(Daniel Berrange)</li>
- <li>Documentation: example files for QEMU and libvirtd configuations
+ <li>Documentation: example files for QEMU and libvirtd configurations
  (Daniel Berrange), English cleanups (Jim Paris), CIM and OpenVZ
references, document <shareable/>, daemon startup when using
QEMU/KVM, document HV support for new NUMA calls (Richard Jones),
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /><link rel="stylesheet" type="text/css" href="libvirt.css" /><link rel="SHORTCUT ICON" href="/32favicon.png" /><title>Bindings for other languages</title></head><body><div id="container"><div id="intro"><div id="adjustments"></div><div id="pageHeader"></div><div id="content2"><h1 class="style1">Bindings for other languages</h1><p>Libvirt comes with bindings to support other languages than
pure C. First the headers embed the necessary declarations to
-allow direct acces from C++ code, but also we have bindings for
+allow direct access from C++ code, but also we have bindings for
higher-level languages:</p><ul><li>Python: Libvirt comes with direct support for the Python language
    (just make sure you installed the libvirt-python package if not
    compiling from sources). See below for more information about
is not applicable when creating a pool.</dd>
<dt>available</dt>
-<dd>Providing the free space available for allocating new volums
+<dd>Providing the free space available for allocating new volumes
in the pool. Due to underlying device constraints it may not be
possible to allocate the entire free space to a single volume.
This value is in bytes. This is not applicable when creating a
be created. For device based pools it will be the directory in which
device nodes exist. For the latter <code>/dev/</code> may seem
like the logical choice, however, device nodes there are not
-guarenteed stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+guaranteed stable across reboots, since they are allocated on
+demand. It is preferable to use a stable location such as one
of the <code>/dev/disk/by-{path,id,uuid,label}</code> locations.
</dd>
<dt>permissions</dt>
If a storage pool exposes information about its underlying
placement / allocation scheme, the <code>device</code> element
within the <code>source</code> element may contain information
-about its avilable extents. Some pools have a constraint that
+about its available extents. Some pools have a constraint that
a volume must be allocated entirely within a single extent
(eg disk partition pools). Thus the extent information allows an
application to determine the maximum possible size for a new
be created. For device based pools it will be the directory in which
device nodes exist. For the latter <code>/dev/</code> may seem
like the logical choice, however, device nodes there are not
-guarenteed stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+guaranteed stable across reboots, since they are allocated on
+demand. It is preferable to use a stable location such as one
of the <code>/dev/disk/by-{path,id,uuid,label}</code> locations.
</dd>
<dt>format</dt>
</ul><p>
When listing existing volumes all these formats are supported
natively. When creating new volumes, only a subset may be
-available. The <code>raw</code> type is guarenteed always
+available. The <code>raw</code> type is guaranteed always
available. The <code>qcow2</code> type can be created if
either <code>qemu-img</code> or <code>qcow-create</code> tools
-are present. The others are dependant on support of the
+are present. The others are dependent on support of the
<code>qemu-img</code> tool.
</p><h4><a name="StorageBackendFS" id="StorageBackendFS">Filesystem pool</a></h4>
<h5>Valid pool format types</h5>
<p>
-The fileystem pool supports the following formats:
+The filesystem pool supports the following formats:
</p>
<ul><li><code>auto</code> - automatically determine format</li>
<h5>Valid pool format types</h5>
<p>
-The network fileystem pool supports the following formats:
+The network filesystem pool supports the following formats:
</p>
<ul><li><code>auto</code> - automatically determine format</li>
/**
* virDomainInterfaceStatsPtr:
*
- * A pointe to a virDomainInterfaceStats structure
+ * A pointer to a virDomainInterfaceStats structure
*/
typedef virDomainInterfaceStatsStruct *virDomainInterfaceStatsPtr;
* @nodeinfo: virNodeInfo instance
*
* This macro is to calculate the total number of CPUs supported
- * but not neccessarily active in the host.
+ * but not necessarily active in the host.
*/
/**
* virConnectFlags
*
- * Flags when openning a connection to a hypervisor
+ * Flags when opening a connection to a hypervisor
*/
typedef enum {
VIR_CONNECT_RO = 1, /* A readonly connection */
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN/OUT)
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
 * USE_CPU macro sets the bit (CPU usable) of the related cpu in cpumap.
*/
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN/OUT)
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
 * USE_CPU macro resets the bit (CPU not usable) of the related cpu in cpumap.
*/
* VIR_CPU_MAPLEN:
* @cpu: number of physical CPUs
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
* It returns the length (in bytes) required to store the complete
* CPU map between a single virtual & all physical CPUs of a domain.
*/
* @vcpu: the virtual CPU number
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainGetVcpus() API.
+ * This macro is to be used in conjunction with virDomainGetVcpus() API.
* VIR_CPU_USABLE macro returns a non zero value (true) if the cpu
* is usable by the vcpu, and 0 otherwise.
*/
* This cpumap must be previously allocated by the caller
* (ie: malloc(maplen))
*
- * This macro is to be used in conjonction with virDomainGetVcpus() and
+ * This macro is to be used in conjunction with virDomainGetVcpus() and
 * virDomainPinVcpu() APIs. VIR_COPY_CPUMAP macro extracts the cpumap of
 * the specified vcpu from the cpumaps array and copies it into cpumap to be used
* later by virDomainPinVcpu() API.
* @maplen: the length (in bytes) of one cpumap
* @vcpu: the virtual CPU number
*
- * This macro is to be used in conjonction with virDomainGetVcpus() and
+ * This macro is to be used in conjunction with virDomainGetVcpus() and
 * virDomainPinVcpu() APIs. VIR_GET_CPUMAP macro returns a pointer to the
 * cpumap of the specified vcpu from the cpumaps array.
*/
/**
* virDomainInterfaceStatsPtr:
*
- * A pointe to a virDomainInterfaceStats structure
+ * A pointer to a virDomainInterfaceStats structure
*/
typedef virDomainInterfaceStatsStruct *virDomainInterfaceStatsPtr;
* @nodeinfo: virNodeInfo instance
*
* This macro is to calculate the total number of CPUs supported
- * but not neccessarily active in the host.
+ * but not necessarily active in the host.
*/
/**
* virConnectFlags
*
- * Flags when openning a connection to a hypervisor
+ * Flags when opening a connection to a hypervisor
*/
typedef enum {
VIR_CONNECT_RO = 1, /* A readonly connection */
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN/OUT)
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
 * USE_CPU macro sets the bit (CPU usable) of the related cpu in cpumap.
*/
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN/OUT)
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
 * USE_CPU macro resets the bit (CPU not usable) of the related cpu in cpumap.
*/
* VIR_CPU_MAPLEN:
* @cpu: number of physical CPUs
*
- * This macro is to be used in conjonction with virDomainPinVcpu() API.
+ * This macro is to be used in conjunction with virDomainPinVcpu() API.
* It returns the length (in bytes) required to store the complete
* CPU map between a single virtual & all physical CPUs of a domain.
*/
* @vcpu: the virtual CPU number
* @cpu: the physical CPU number
*
- * This macro is to be used in conjonction with virDomainGetVcpus() API.
+ * This macro is to be used in conjunction with virDomainGetVcpus() API.
* VIR_CPU_USABLE macro returns a non zero value (true) if the cpu
* is usable by the vcpu, and 0 otherwise.
*/
* This cpumap must be previously allocated by the caller
* (ie: malloc(maplen))
*
- * This macro is to be used in conjonction with virDomainGetVcpus() and
+ * This macro is to be used in conjunction with virDomainGetVcpus() and
 * virDomainPinVcpu() APIs. VIR_COPY_CPUMAP macro extracts the cpumap of
 * the specified vcpu from the cpumaps array and copies it into cpumap to be used
* later by virDomainPinVcpu() API.
* @maplen: the length (in bytes) of one cpumap
* @vcpu: the virtual CPU number
*
- * This macro is to be used in conjonction with virDomainGetVcpus() and
+ * This macro is to be used in conjunction with virDomainGetVcpus() and
 * virDomainPinVcpu() APIs. VIR_GET_CPUMAP macro returns a pointer to the
 * cpumap of the specified vcpu from the cpumaps array.
*/
VIR_FROM_XEN, /* Error at Xen hypervisor layer */
VIR_FROM_XEND, /* Error at connection with xend daemon */
VIR_FROM_XENSTORE, /* Error at connection with xen store */
- VIR_FROM_SEXPR, /* Error in the S-Epression code */
+ VIR_FROM_SEXPR, /* Error in the S-Expression code */
VIR_FROM_XML, /* Error in the XML code */
VIR_FROM_DOM, /* Error when operating on a domain */
VIR_FROM_RPC, /* Error in the XML-RPC code */
VIR_WAR_NO_NETWORK, /* failed to start network */
VIR_ERR_NO_DOMAIN, /* domain not found or unexpectedly disappeared */
VIR_ERR_NO_NETWORK, /* network not found */
- VIR_ERR_INVALID_MAC, /* invalid MAC adress */
+ VIR_ERR_INVALID_MAC, /* invalid MAC address */
VIR_ERR_AUTH_FAILED, /* authentication failed */
VIR_ERR_INVALID_STORAGE_POOL, /* invalid storage pool object */
VIR_ERR_INVALID_STORAGE_VOL, /* invalid storage vol object */
if (c_retval == NULL)
return VIR_PY_NONE;
- /* convert to a Python tupple of long objects */
+ /* convert to a Python tuple of long objects */
if ((info = PyTuple_New(2)) == NULL) {
free(c_retval);
return VIR_PY_NONE;
return VIR_PY_NONE;
}
- /* convert to a Python tupple of long objects */
+ /* convert to a Python tuple of long objects */
if ((info = PyDict_New()) == NULL) {
free(params);
return VIR_PY_NONE;
return VIR_PY_INT_FAIL;
}
- /* convert to a Python tupple of long objects */
+ /* convert to a Python tuple of long objects */
for (i = 0 ; i < nparams ; i++) {
PyObject *key, *val;
key = libvirt_constcharPtrWrap(params[i].field);
cpumap, cpumaplen) < 0)
goto cleanup;
- /* convert to a Python tupple of long objects */
+ /* convert to a Python tuple of long objects */
if ((pyretval = PyTuple_New(2)) == NULL)
goto cleanup;
if ((pycpuinfo = PyList_New(dominfo.nrVirtCpu)) == NULL)
virInitialize();
- /* intialize the python extension module */
+ /* initialize the python extension module */
Py_InitModule((char *)
#ifndef __CYGWIN__
"libvirtmod"
<arg name='path' type='char *' info='the path for the interface device'/>
</function>
<function name="virNodeGetCellsFreeMemory" file='python'>
- <info>Returns the availbale memory for a list of cells</info>
+ <info>Returns the available memory for a list of cells</info>
<arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/>
<arg name='startCell' type='int' info='first cell in the list'/>
      <arg name='maxCells' type='int' info='number of cells in the list'/>
<arg name='domain' type='virDomainPtr' info='pointer to domain object, or NULL for Domain0'/>
</function>
<function name='virDomainPinVcpu' file='python'>
- <info>Dynamically change the real CPUs which can be allocated to a virtual CPU. This function requires priviledged access to the hypervisor.</info>
+ <info>Dynamically change the real CPUs which can be allocated to a virtual CPU. This function requires privileged access to the hypervisor.</info>
<return type='int' info='0 in case of success, -1 in case of failure.'/>
<arg name='domain' type='virDomainPtr' info='pointer to domain object, or NULL for Domain0'/>
<arg name='vcpu' type='unsigned int' info='virtual CPU number'/>
* if (x) LIBVIRT_STMT_START { ... } LIBVIRT_STMT_END; else ...
*
* When GCC is compiling C code in non-ANSI mode, it will use the
- * compiler __extension__ to wrap the statements wihin `({' and '})' braces.
+ * compiler __extension__ to wrap the statements within `({' and '})' braces.
* When compiling on platforms where configure has defined
* HAVE_DOWHILE_MACROS, statements will be wrapped with `do' and `while (0)'.
* For any other platforms (SunOS4 is known to have this issue), wrap the
int i;
/* Remove deleted entries, shuffling down remaining
- * entries as needed to form contigous series
+ * entries as needed to form a contiguous series
*/
for (i = 0 ; i < eventLoop.timeoutsCount ; ) {
if (!eventLoop.timeouts[i].deleted) {
int i;
/* Remove deleted entries, shuffling down remaining
- * entries as needed to form contigous series
+ * entries as needed to form a contiguous series
*/
for (i = 0 ; i < eventLoop.handlesCount ; ) {
if (!eventLoop.handles[i].deleted) {
* virEventAddHandleImpl: register a callback for monitoring file handle events
*
* @fd: file handle to monitor for events
- * @events: bitset of events to wach from POLLnnn constants
- * @cb: callback to invoke when an event occurrs
+ * @events: bitset of events to watch from POLLnnn constants
+ * @cb: callback to invoke when an event occurs
* @opaque: user data to pass to callback
*
* returns -1 if the file handle cannot be registered, 0 upon success
* virEventUpdateHandleImpl: change event set for a monitored file handle
*
* @fd: file handle to monitor for events
- * @events: bitset of events to wach from POLLnnn constants
+ * @events: bitset of events to watch from POLLnnn constants
*
* Will not fail if fd exists
*/
* virEventAddTimeoutImpl: register a callback for a timer event
*
* @frequency: time between events in milliseconds
- * @cb: callback to invoke when an event occurrs
+ * @cb: callback to invoke when an event occurs
* @opaque: user data to pass to callback
*
* Setting frequency to -1 will disable the timer. Setting the frequency
#################################################################
#
-# Network connectivitiy controls
+# Network connectivity controls
#
# Flag listening for secure TLS connections on the public TCP/IP port.
# NB, must pass the --listen flag to the libvirtd process for this to
# have any effect.
#
-# It is neccessary to setup a CA and issue server certificates before
+# It is necessary to set up a CA and issue server certificates before
# using this capability.
#
# This is enabled by default, uncomment this to disable it
/* The server records are now being established. This
* might be caused by a host name change. We need to wait
* for our own records to register until the host name is
- * properly esatblished. */
+ * properly established. */
AVAHI_DEBUG("Client collision/connecting %p", mdns->client);
group = mdns->group;
while (group) {
/**
 * Removes a group container from advertisement
*
- * @mdns amanger to detatch group from
+ * @mdns manager to detach group from
* @group group to remove
*/
void libvirtd_mdns_remove_group(struct libvirtd_mdns *mdns, struct libvirtd_mdns_group *group);
/**
* Removes a service entry from a group
*
- * @group group to deteach service entry from
+ * @group group to detach service entry from
* @entry service entry to remove
*/
void libvirtd_mdns_remove_entry(struct libvirtd_mdns_group *group, struct libvirtd_mdns_entry *entry);
}
-/* We asked for an SSF layer, so sanity check that we actaully
+/* We asked for an SSF layer, so sanity check that we actually
* got what we asked for */
static int
remoteSASLCheckSSF (struct qemud_client *client,