+Thu Apr 24 18:00:21 JST 2008 Atsushi SAKAI <sakaia@jp.fujitsu.com>
+
+ * proxy/libvirt_proxy.c docs/* fixing typos
+
Thu Apr 24 09:54:19 CEST 2008 Daniel Veillard <veillard@redhat.com>
* AUTHORS: indicate that the Logo is by Diana Fong
<h1>Bindings for other languages</h1>
<p>Libvirt comes with bindings to support languages other than
pure C. First, the headers embed the necessary declarations to
-allow direct acces from C++ code, but also we have bindings for
+allow direct access from C++ code, but also we have bindings for
higher-level languages:</p>
<ul><li>Python: Libvirt comes with direct support for the Python language
(just make sure you installed the libvirt-python package if not
</pre>
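<p>
  The bindings section above notes that the headers allow direct access from C
  and C++. As a minimal sketch, assuming libvirt and its development headers
  are installed and the program is linked against libvirt, the library version
  can be queried directly from C; the Python binding exposes equivalent calls.
</p>
<pre>
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    unsigned long libVer = 0;

    /* virGetVersion fills in the version of the libvirt library in use,
       without needing an open hypervisor connection */
    if (virGetVersion(&libVer, NULL, NULL) < 0) {
        fprintf(stderr, "failed to query the libvirt version\n");
        return 1;
    }
    printf("libvirt library version: %lu\n", libVer);
    return 0;
}
</pre>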
<h2>Built from CVS / GIT</h2>
<p>
- When building from CVS it is neccessary to generate the autotools
+ When building from CVS it is necessary to generate the autotools
support files. This requires having <code>autoconf</code>,
<code>automake</code>, <code>libtool</code> and <code>intltool</code>
installed. The process can be automated with the <code>autogen.sh</code>
<h2>Hourly development snapshots</h2>
<p>
Once an hour, an automated snapshot is made from the latest CVS server
- source tree. These snapshots should be usable, but we make no guarentees
+ source tree. These snapshots should be usable, but we make no guarantees
about their stability:
</p>
<ul><li><a href="ftp://libvirt.org/libvirt/libvirt-cvs-snapshot.tar.gz">libvirt.org FTP server</a></li><li><a href="http://libvirt.org/sources/libvirt-cvs-snapshot.tar.gz">libvirt.org HTTP server</a></li></ul>
<h2>CVS repository access</h2>
<p>
The master source repository uses <a href="http://ximbiot.com/cvs/cvshome/docs/">CVS</a>
- and anonymous access is provided. Prior to accessing the server is it neccessary
+ and anonymous access is provided. Prior to accessing the server it is necessary
to authenticate using the password <code>anoncvs</code>. This can be accomplished with the
<code>cvs login</code> command:
</p>
</pre>
<p>
The libvirt build process uses GNU autotools, so after obtaining a checkout
- it is neccessary to generate the configure script and Makefile.in templates
+ it is necessary to generate the configure script and Makefile.in templates
using the <code>autogen.sh</code> command. As an example, to do a complete
build and install it into your home directory run:
</p>
<p>
Once an hour, an automated snapshot is made from the latest CVS server
- source tree. These snapshots should be usable, but we make no guarentees
+ source tree. These snapshots should be usable, but we make no guarantees
about their stability:
</p>
<p>
The master source repository uses <a href="http://ximbiot.com/cvs/cvshome/docs/">CVS</a>
- and anonymous access is provided. Prior to accessing the server is it neccessary
+ and anonymous access is provided. Prior to accessing the server it is necessary
to authenticate using the password <code>anoncvs</code>. This can be accomplished with the
<code>cvs login</code> command:
</p>
<p>
The libvirt build process uses GNU autotools, so after obtaining a checkout
- it is neccessary to generate the configure script and Makefile.in templates
+ it is necessary to generate the configure script and Makefile.in templates
using the <code>autogen.sh</code> command. As an example, to do a complete
build and install it into your home directory run:
</p>
</p>
<h2>Hypervisor drivers</h2>
<p>
- The hypervisor drivers currently supported by livirt are:
+ The hypervisor drivers currently supported by libvirt are:
</p>
<ul><li><strong><a href="drvxen.html">Xen</a></strong></li><li><strong><a href="drvqemu.html">QEMU</a></strong></li><li><strong><a href="drvlxc.html">LXC</a></strong></li><li><strong><a href="drvtest.html">Test</a></strong></li><li><strong><a href="drvopenvz.html">OpenVZ</a></strong></li></ul>
</div>
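<p>
  As a rough illustration of how these drivers are selected, the scheme of the
  connection URI picks the driver. The sketch below uses the in-memory
  <code>test:///default</code> URI so it can run without a hypervisor; the
  other URIs mentioned in the comment are only typical examples.
</p>
<pre>
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* The URI scheme selects the driver: e.g. "xen:///", "qemu:///system",
       or, as here, the built-in test driver */
    virConnectPtr conn = virConnectOpenReadOnly("test:///default");

    if (conn == NULL) {
        fprintf(stderr, "failed to open a connection\n");
        return 1;
    }
    printf("driver in use: %s\n", virConnectGetType(conn));
    printf("active domains: %d\n", virConnectNumOfDomains(conn));
    virConnectClose(conn);
    return 0;
}
</pre>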
<h2>Hypervisor drivers</h2>
<p>
- The hypervisor drivers currently supported by livirt are:
+ The hypervisor drivers currently supported by libvirt are:
</p>
<ul>
device nodes exist. For the latter <code>/dev/</code> may seem
like the logical choice; however, device nodes there are not
guaranteed to be stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+demand. It is preferable to use a stable location such as one
of the <code>/dev/disk/by-{path,id,uuid,label}</code> locations.
</dd><dt>format</dt><dd>Provides information about the pool specific volume format.
For disk pools it will provide the partition type. For filesystem
device nodes exist. For the latter <code>/dev/</code> may seem
like the logical choice; however, device nodes there are not
guaranteed to be stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+demand. It is preferable to use a stable location such as one
of the <code>/dev/disk/by-{path,id,uuid,label}</code> locations.
</dd>
<dt>format</dt>
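<p>
  To make the advice above concrete, the sketch below defines a disk pool whose
  target path points at the stable <code>/dev/disk/by-path</code> location
  rather than <code>/dev/</code>. This is only an assumption-laden example: the
  pool name, the source device and the use of the default connection are
  placeholders, not values taken from this documentation.
</p>
<pre>
#include <stdio.h>
#include <libvirt/libvirt.h>

/* Hypothetical pool definition: name and source device are placeholders;
   the point of interest is the stable target path */
static const char *pool_xml =
    "<pool type='disk'>"
    "  <name>example-disk-pool</name>"
    "  <source>"
    "    <device path='/dev/sda'/>"
    "  </source>"
    "  <target>"
    "    <path>/dev/disk/by-path</path>"
    "  </target>"
    "</pool>";

int main(void)
{
    virConnectPtr conn = virConnectOpen(NULL);  /* default hypervisor connection */
    virStoragePoolPtr pool;

    if (conn == NULL)
        return 1;

    pool = virStoragePoolDefineXML(conn, pool_xml, 0);
    if (pool == NULL) {
        fprintf(stderr, "failed to define the pool\n");
        virConnectClose(conn);
        return 1;
    }
    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}
</pre>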
Storage on IDE/SCSI/USB disks, FibreChannel, LVM, iSCSI, NFS and filesystems
</li></ul>
<h2>libvirt provides:</h2>
- <ul><li>Remote management using TLS encryption and x509 certificates</li><li>Remote management authenticating with Kerberos and SASL</li><li>Local access control using PolicyKit</li><li>Zero-conf discovery using Avahi mulicast-DNS</li><li>Management of virtual machines, virtual networks and storage</li></ul>
+ <ul><li>Remote management using TLS encryption and x509 certificates</li><li>Remote management authenticating with Kerberos and SASL</li><li>Local access control using PolicyKit</li><li>Zero-conf discovery using Avahi multicast-DNS</li><li>Management of virtual machines, virtual networks and storage</li></ul>
<p class="image">
<img src="libvirtLogo.png" alt="libvirt Logo" /></p>
</div>
<li>Remote management using TLS encryption and x509 certificates</li>
<li>Remote management authenticating with Kerberos and SASL</li>
<li>Local access control using PolicyKit</li>
- <li>Zero-conf discovery using Avahi mulicast-DNS</li>
+ <li>Zero-conf discovery using Avahi multicast-DNS</li>
<li>Management of virtual machines, virtual networks and storage</li>
</ul>
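<p>
  As a sketch of the remote management item above, the transport and host are
  simply encoded in the connection URI; the host name below is a placeholder
  and the TLS certificates are assumed to be already configured on both ends.
</p>
<pre>
#include <libvirt/libvirt.h>

int main(void)
{
    /* "host.example.com" is a placeholder; the qemu+tls scheme tunnels the
       API over TLS authenticated with x509 certificates */
    virConnectPtr conn = virConnectOpen("qemu+tls://host.example.com/system");

    if (conn == NULL)
        return 1;
    virConnectClose(conn);
    return 0;
}
</pre>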
<span>Driver for the Linux native container API</span>
</li><li>
<a href="drvtest.html">Test</a>
- <span>Psuedo-driver simulating APIs in memory for test suites</span>
+ <span>Pseudo-driver simulating APIs in memory for test suites</span>
</li><li>
<a href="drvremote.html">Remote</a>
    <span>Driver providing secure remote access to the libvirt APIs</span>
/**
* proxyListenUnixSocket:
- * @path: the fileame for the socket
+ * @path: the filename for the socket
*
* create a new abstract socket based on that path and listen on it
*
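/*
 * Rough sketch, not the actual libvirt_proxy.c code: one way to create and
 * listen on an abstract AF_UNIX socket is to place a leading NUL byte in
 * sun_path so that no filesystem entry is created for the socket.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int listen_abstract_socket(const char *path)
{
    struct sockaddr_un addr;
    int fd;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    addr.sun_path[0] = '\0';                      /* abstract namespace marker */
    strncpy(&addr.sun_path[1], path, sizeof(addr.sun_path) - 2);

    if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(fd, 5) < 0) {
        close(fd);
        return -1;
    }
    return fd;                                    /* callers accept() on this fd */
}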
if (exit_timeout == 0) {
done = 1;
if (debug > 0) {
- fprintf(stderr, "Exitting after 30s without clients\n");
+ fprintf(stderr, "Exiting after 30s without clients\n");
}
}
} else