--- /dev/null
+<!DOCTYPE html>
+<html>
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"></head>
+<body>
+<div id="Director_Director_Name">
+<dt>Name = <name></dt>
+<dd>
+ The director name used by the system administrator. This directive is required.
+</dd>
+</div>
+<div id="Director_Director_Description">
+<dt>Description = <text></dt>
+<dd>
+ The text field contains a description of the Director that will be displayed in the graphical user interface. This directive is optional.
+</dd>
+</div>
+<div id="Director_Director_Password">
+<dt>Password = <UA-password></dt>
+<dd>
+ Specifies the password that must be supplied for the default <span class="bbacula">Bacula</span> Console to be authorized. The same password must appear in the <span>Director</span> resource of the Console configuration file. For added security, the password is never passed across the network; instead, a challenge/response hash code created from the password is used. This directive is required. If you have either <span class="btool">/dev/random</span> or <span class="btool">bc</span> on your machine, <span class="bbacula">Bacula</span> will generate a random password during the configuration process; otherwise it will be left blank and you must supply it manually. <p> The password is plain text. It is not generated through any special process but, as noted above, it is better to use random text for security reasons. </p>
+
+</dd>
+</div>
+<div id="Director_Director_Messages">
+<dt>Messages = <Messages-resource-name></dt>
+<dd>
+ The messages resource specifies where to deliver Director messages that are not associated with a specific Job. Most messages are specific to a job and will be directed to the Messages resource specified by the job. However, there are a few messages that can occur when no job is running. This directive is required.
+</dd>
+</div>
+<div id="Director_Director_QueryFile">
+<dt>QueryFile = <Path></dt>
+<dd>
+ This directive is required and specifies a directory and file in which the Director can find the canned SQL statements for the <span class="bcommandname">query</span> command of the Console. Standard shell expansion of the <span class="bbracket"><Path></span> is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded.
+</dd>
+</div>
+<div id="Director_Director_WorkingDirectory">
+<dt>Working Directory = <Directory></dt>
+<dd>
+ This directive is mandatory and specifies a directory in which the Director may put its status files. This directory should be used only by <span class="bbacula">Bacula</span> but may be shared by other <span class="bbacula">Bacula</span> daemons. However, please note: if this directory is shared with other <span class="bbacula">Bacula</span> daemons (the File daemon and Storage daemon), you must ensure that the <span class="bdirectivename">Name</span> given to each daemon is unique so that the temporary filenames used do not collide. By default the <span class="bbacula">Bacula</span> configure process creates unique daemon names by postfixing them with <span class="btt">-dir</span>, <span class="btt">-fd</span>, and <span class="btt">-sd</span>. Standard shell expansion of the <span class="bdirectivename">Working Directory</span> is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded. <p> The working directory specified must already exist and be readable and writable by the <span class="bbacula">Bacula</span> daemon referencing it. </p>
+<p> If you have specified a Director user and/or a Director group on your <span class="btool">./configure</span> line with <span class="bvalue">--with-dir-user</span> and/or <span class="bvalue">--with-dir-group</span>, the Working Directory owner and group will be set to those values. </p>
+
+</dd>
+</div>
+<div id="Director_Director_ScriptsDirectory">
+<dt>Scripts Directory = <Directory></dt>
+<dd>
+ This directive is optional and, if defined, specifies a directory in which the Director and the Storage daemon will look for many of the scripts that they need during particular operations such as starting/stopping, the <span class="btool">mtx-changer</span> script, tape alerts, and catalog updates. This directory may be shared by other <span class="bbacula">Bacula</span> daemons. Standard shell expansion of the directory is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded.
+</dd>
+</div>
+<div id="Director_Director_PidDirectory">
+<dt>Pid Directory = <Directory></dt>
+<dd>
+ This directive is mandatory and specifies a directory in which the Director may put its process ID file. The process ID file is used to shut down <span class="bbacula">Bacula</span> and to prevent multiple copies of <span class="bbacula">Bacula</span> from running simultaneously. Standard shell expansion of the <span class="bdirectivename">Pid Directory</span> is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded. <p> The PID directory specified must already exist and be readable and writable by the <span class="bbacula">Bacula</span> daemon referencing it. </p>
+<p> Typically on Linux systems, you will set this to: <span class="bdirectoryname">/var/run</span>. If you are not installing <span class="bbacula">Bacula</span> in the system directories, you can use the <span>Working Directory</span> as defined above. This directive is required. </p>
+
+</dd>
+</div>
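+<p> Taken together, the required directives above form a minimal Director resource. The following is only a sketch: the resource names, password, and paths are illustrative, not values prescribed by this manual. </p>
+<pre>
+  # Minimal Director resource (illustrative names and paths)
+  Director {
+    Name = bacula-dir
+    Password = "password"                          # use a random string in practice
+    Messages = Daemon                              # Messages resource for non-Job messages
+    QueryFile = "/opt/bacula/scripts/query.sql"    # canned SQL for the query command
+    Working Directory = "/opt/bacula/working"      # must exist, writable by the Director
+    Pid Directory = "/opt/bacula/working"          # often /var/run on Linux
+  }
+</pre>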
+<div id="Director_Director_VerId">
+<dt>VerId = <string></dt>
+<dd>
+ where <span class="bbracket"><string></span> is an identifier that can be used for support purposes. This string is displayed using the <span class="bcommandname">version</span> command.
+</dd>
+</div>
+<div id="Director_Director_CommCompression">
+<dt>CommCompression = <yes|no></dt>
+<dd>
+ <p> If both <span class="bbacula">Bacula</span> components in a connection (DIR, FD, SD, bconsole) have communication line compression enabled, compression will be used on that line. The default value is yes. </p>
+<p> In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is <span class="bdefaultvalue">enabled</span>. In the case that the compression is not effective, <span class="bbacula">Bacula</span> turns it off on a record by record basis. </p>
+
+<p> If you are backing up data that is already compressed the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, <span class="bbacula">Bacula</span> reports <span class="bvalue">None</span> in the Job report. </p>
+
+</dd>
+</div>
+<div id="Director_Director_EventsRetention">
+<dt>Events Retention = <time></dt>
+<dd>
+ <p> The <span class="bdirectivename">Events Retention</span> directive defines the length of time that <span class="bbacula">Bacula</span> will keep events records in the Catalog database. When this time period expires, and if the user runs the <span class="bcommandname">prune events</span> command, <span class="bbacula">Bacula</span> will prune (remove) Events records that are older than the specified period. </p>
+<p> See the Configuration chapter of this manual for additional details of time specifications. </p>
+<p> The default is <span class="bdefaultvalue">1 month</span>. </p>
+
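+<p> As an illustrative sketch (the six-month value is an assumption, not a recommendation), the retention is set in the Director resource and applied later with the <span class="bcommandname">prune events</span> console command: </p>
+<pre>
+  Director {
+    ...
+    Events Retention = 6 months   # illustrative; the default is 1 month
+  }
+</pre>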
+</dd>
+</div>
+<div id="Director_Director_MaximumReloadRequests">
+<dt>MaximumReloadRequests = <number></dt>
+<dd>
+ <p> Where <span class="bbracket"><number></span> is the maximum number of <span class="bcommandname">reload</span> commands that can be queued while jobs are running. The default is set to <span class="bdefaultvalue">32</span> and is usually sufficient. </p>
+
+</dd>
+</div>
+<div id="Director_Director_MaximumConsoleConnections">
+<dt>MaximumConsoleConnections = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of Console connections that can run concurrently. The default is set to <span class="bdefaultvalue">20</span>, but you may set it to a larger number.
+</dd>
+</div>
+<div id="Director_Director_DirPort">
+<dt>DirPort = <port-number></dt>
+<dd>
+ Specify the port (a positive integer) on which the Director daemon will listen for <span class="bbacula">Bacula</span> Console connections. This same port number must be specified in the Director resource of the Console configuration file. The default is <span class="bdefaultvalue">9101</span>, so normally this directive need not be specified. This directive should not be used if you specify the DirAddresses (plural) directive.
+</dd>
+</div>
+<div id="Director_Director_DirAddress">
+<dt>DirAddress = <IP-Address></dt>
+<dd>
+ This directive is optional, but if it is specified, it will cause the Director server (for the Console program) to bind to the specified <span class="bbracket"><IP-Address></span>, which is either a domain name or an IP address specified as a dotted quadruple in string or quoted string format. If this directive is not specified, the Director will bind to any available address (the default). Note, unlike the DirAddresses specification noted above, this directive only permits a single address to be specified. This directive should not be used if you specify a DirAddresses (plural) directive.
+</dd>
+</div>
+<div id="Director_Director_DirAddresses">
+<dt>DirAddresses = <IP-address-specification></dt>
+<dd>
+ Specify the ports and addresses on which the Director daemon will listen for <span class="bbacula">Bacula</span> Console connections. Probably the simplest way to explain this is to show an example:
+<pre>
+ DirAddresses = {
+ ip = { addr = 1.2.3.4; port = 1205;}
+ ipv4 = {
+ addr = 1.2.3.4; port = http;
+ }
+ ipv6 = {
+ addr = 1.2.3.4;
+ port = 1205;
+ }
+ ip = {
+ addr = 1.2.3.4
+ port = 1205
+ }
+ ip = { addr = 1.2.3.4 }
+ ip = { addr = 201:220:222::2 }
+ ip = {
+ addr = bluedot.thun.net
+ }
+ }
+</pre>
+<p> where ip, ip4, ip6, addr, and port are all keywords. Note, that the address can be specified as either a dotted quadruple, or IPv6 colon notation, or as a symbolic name (only in the ip specification). Also, port can be specified as a number or as the mnemonic value from the <span class="bfilename">/etc/services</span> file. If a port is not specified, the default will be used. If an ip section is specified, the resolution can be made either by IPv4 or IPv6. If ip4 is specified, then only IPv4 resolutions will be permitted, and likewise with ip6. </p>
+<p> Please note that if you use the DirAddresses directive, you must not use either a DirPort or a DirAddress directive in the same resource. </p>
+
+</dd>
+</div>
+<div id="Director_Director_DirSourceAddress">
+<dt>DirSourceAddress = <IP-Address></dt>
+<dd>
+ This record is optional, and if it is specified, it will cause the Director server (when initiating connections to a storage or file daemon) to source its connections from the specified address. Only a single IP address may be specified. If this record is not specified, the Director server will source its outgoing connections according to the system routing table (the default).
+</dd>
+</div>
+<div id="Director_Director_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of total Director Jobs that should run concurrently. The default is set to <span class="bdefaultvalue">20</span>, but you may set it to a larger number. Every valid connection to any daemon (Director, File daemon, or Storage daemon) results in a Job. This includes connections from <span class="bbacula">Bacula</span><span class="bcommandname"> Console</span>. Thus the number of concurrent Jobs must, in general, be greater than the maximum number of Jobs that you wish to actually run. <p> In general, increasing the number of Concurrent Jobs increases the total throughput of <span class="bbacula">Bacula</span>, because the simultaneous Jobs can all feed data to the Storage daemon and to the Catalog at the same time. However, keep in mind that the Volume format becomes more complicated with multiple simultaneous jobs; consequently, restores may take longer if <span class="bbacula">Bacula</span> must sort through interleaved volume blocks from multiple simultaneous jobs. Though not normally necessary, this can be avoided by having each simultaneous job write to a different volume or by using data spooling, which will first spool the data to disk simultaneously, then write one spool file at a time to the volume, thus avoiding excessive interleaving of the different job blocks. </p>
+
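+<p> As a sketch of the data spooling approach mentioned above (the Job name is illustrative): </p>
+<pre>
+  Job {
+    Name = "BackupClient1"   # illustrative
+    ...
+    Spool Data = yes   # spool job data to disk first, then despool to the volume
+  }
+</pre>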
+</dd>
+</div>
+<div id="Director_Director_FdConnectTimeout">
+<dt>FD Connect Timeout = <time></dt>
+<dd>
+ where <span class="bbracket"><time></span> is the time that the Director should continue attempting to contact the File daemon to start a job, and after which the Director will cancel the job. The default is <span class="bdefaultvalue">3 minutes</span>.
+</dd>
+</div>
+<div id="Director_Director_SdConnectTimeout">
+<dt>SD Connect Timeout = <time></dt>
+<dd>
+ where <span class="bbracket"><time></span> is the time that the Director should continue attempting to contact the Storage daemon to start a job, and after which the Director will cancel the job. The default is <span class="bdefaultvalue">30 minutes</span>.
+</dd>
+</div>
+<div id="Director_Director_HeartbeatInterval">
+<dt>Heartbeat Interval = <time-interval></dt>
+<dd>
+ This directive is optional and if specified will cause the Director to set a keepalive interval (heartbeat) in seconds on each of the sockets it opens for the Client resource. This value will override any specified at the Director level. It is implemented only on systems (Linux, ...) that provide the <span class="btool">setsockopt</span> <span class="btt">TCP_KEEPIDLE</span> function. The default value is <span class="bdefaultvalue">300s</span>.
+
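+<p> As a sketch, the interval is specified as a time value in the Director resource: </p>
+<pre>
+  Director {
+    ...
+    Heartbeat Interval = 300   # seconds; matches the default
+  }
+</pre>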
+</dd>
+</div>
+<div id="Director_Director_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Director_Director_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Director_Director_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK is enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Director_Director_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Director_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Director_Director_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may not be specified in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Having <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (server context) will not check the Director's public certificate (client context). There is no need to specify the <span class="bdirectivename">TLS Certificate File</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the below client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Director_Director_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate file. Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and one is always required in a client context.
+</dd>
+</div>
+<div id="Director_Director_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to a TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash with an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and one is always required in a client context.
+</dd>
+</div>
+<div id="Director_Director_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
+Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Director_Director_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid for a server and in a client context. If this directive is specified, the peer certificate will be verified against this list. This can be used to ensure that only the CN-approved component may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list will only be checked if <span class="bdirectivename">TLS Verify Peer = yes</span> (the default). For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> When this directive is configured on the client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client doesn't provide a certificate with a Common Name that matches any value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="Director_Director_TlsDhFile">
+<dt>TLS DH File = <Filename></dt>
+<dd>Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications. DH key exchange adds an additional level of security because the key used for encryption/decryption by the server and the client is computed on each end and thus is never passed over the network if Diffie-Hellman key exchange is used. Even if DH key exchange is not used, the encryption/decryption key is always passed encrypted. This directive is only valid within a server context. <p> To generate the parameter file, you may use <span class="btool">openssl</span>: </p>
+
+<pre>
+openssl dhparam -out dh4096.pem -5 4096
+</pre>
+
+
+
+</dd>
+</div>
+<div id="Director_Director_AutoPrune">
+<dt>AutoPrune = <yes|no></dt>
+<dd>
+<p> Normally, pruning of Files from the Catalog is specified on a Client-by-Client basis in the <span>Client</span> resource with the <span class="bdirectivename">AutoPrune</span> directive. It is also possible to override the Client settings in the <span>Pool</span> resource used by jobs, with the <span class="bdirectivename">AutoPrune</span>, <span class="bdirectivename">PruneFiles</span> and <span class="bdirectivename">PruneJobs</span> directives. </p>
+
+<p> If this directive is specified (not normally) and the value is <span class="bvalue">no</span>, it will override the value specified in all the <span>Client</span> and the <span>Pool</span> resources. The default is <span class="bdefaultvalue">yes</span>. </p>
+
+<p> If you set <span class="bdirectivename">AutoPrune</span> = <span class="bvalue">no</span>, pruning will not be done automatically, and your Catalog will grow in size each time you run a Job. Pruning affects only information in the catalog and not data stored in the backup archives (on Volumes). The <span class="bcommandname">prune</span> <span class="btool">bconsole</span> command can be used to prune catalog records respecting the Client and/or the Pool <span class="bdirectivename">FileRetention</span>, <span class="bdirectivename">JobRetention</span> and <span class="bdirectivename">VolumeRetention</span> directives. </p>
+
+</dd>
+</div>
+<div id="Director_Director_StatisticsRetention">
+<dt>Statistics Retention = <time></dt>
+<dd>
+ <p> The <span class="bdirectivename">Statistics Retention</span> directive defines the length of time that <span class="bbacula">Bacula</span> will keep statistics job records in the Catalog database (in the <span class="btable">JobHistory</span> table) after the Job End time. When this time period expires, and if the user runs the <span class="bcommandname">prune stats</span> command, <span class="bbacula">Bacula</span> will prune (remove) Job records that are older than the specified period. </p>
+<p> These statistics records are not used for restores; they are mainly for capacity planning, billing, etc. See the Statistics chapter for additional information. </p>
+<p> See the Configuration chapter of this manual for additional details of time specifications. </p>
+<p> The default is <span class="bdefaultvalue">5 years</span>. </p>
+</dd>
+</div>
+<div id="Director_Client_Name">
+<dt>Name = <name></dt>
+<dd>
+ The client name which will be used in the Job resource directive or in the console <span class="bcommandname">run</span> command. This directive is required.
+</dd>
+</div>
+<div id="Director_Client_Address">
+<dt>Address = <address></dt>
+<dd>
+ Where the <span class="bbracket"><address></span> is a host name, a fully qualified domain name, or a network address in dotted quad notation for a <span class="bbacula">Bacula</span> File server daemon. This directive is required.
+</dd>
+</div>
+<div id="Director_Client_Password">
+<dt>Password = <password></dt>
+<dd>
+ This is the password to be used when establishing a connection with the File services, so the Client configuration file on the machine to be backed up must have the same password defined for this Director. This directive is required. If you have either <span class="btool">/dev/random</span> or <span class="btool">bc</span> on your machine, <span class="bbacula">Bacula</span> will generate a random password during the configuration process, otherwise it will be left blank. <p> The password is plain text. It is not generated through any special process, but it is preferable for security reasons to make the text random. </p>
+
+</dd>
+</div>
+<div id="Director_Client_Catalog">
+<dt>Catalog = <Catalog-resource-name></dt>
+<dd>
+ This specifies the name of the catalog resource to be used for this Client. This directive is required.
+</dd>
+</div>
+<div id="Director_Client_Enabled">
+<dt>Enabled = <yes|no></dt>
+<dd>
+ This directive allows you to enable or disable the <span>Client</span> resource. If the resource is disabled, the Client will not be used.
+</dd>
+</div>
+<div id="Director_Client_FdPort">
+<dt>FD Port = <port-number></dt>
+<dd>
+ Where the <span class="bbracket"><port-number></span> is a port number at which the <span class="bbacula">Bacula</span> File server daemon can be contacted. The default is <span class="bdefaultvalue">9102</span>.
+</dd>
+</div>
+<div id="Director_Client_FdStorageAddress">
+<dt>FD Storage Address = <address></dt>
+<dd>
+ Where the <span class="bbracket"><address></span> is a host name, a fully qualified domain name, or an <span class="bbf">IP address</span>. The <span class="bbracket"><address></span> specified here will be transmitted to the File daemon instead of the IP address that the Director uses to contact the Storage daemon. This FDStorageAddress will then be used by the File daemon to contact the Storage daemon. This directive is particularly useful if the File daemon is in a different network domain than the Director or Storage daemon. It is also useful in NAT or firewall environments.
+</dd>
+</div>
+<div id="Director_Client_SDCallsClient">
+<dt>SD Calls Client = <yes|no></dt>
+<dd>
+ <p> If the <span class="bdirectivename">SD Calls Client</span> directive is set to true in a <span>Client</span> resource, then in any Backup, Restore, or Verify Job where that client is involved, the client will wait for the Storage daemon to contact it. By default this directive is set to <span class="bdefaultvalue">false</span>, and the Client will call the Storage daemon as it always has. This directive can be useful if your Storage daemon is behind a firewall that permits outgoing connections but not incoming connections. </p>
+
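+<p> A sketch of a <span>Client</span> resource using this directive (the names are illustrative): </p>
+<pre>
+  Client {
+    Name = client1-fd
+    Address = client1.example.com
+    Password = "password"
+    SD Calls Client = yes   # the FD waits for the SD to initiate the connection
+  }
+</pre>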
+</dd>
+</div>
+<div id="Director_Client_AllowFDConnections">
+<dt>AllowFDConnections = <yes|no></dt>
+<dd>
+ <p> When <span class="bdirectivename">AllowFDConnections</span> is set to <span class="bvalue">true</span>, the Director will accept incoming connections from the Client and will keep the socket open for future use. The Director will no longer use the <span class="bdirectivename">Address</span> to contact the File Daemon. This configuration is useful if the Director cannot contact the File Daemon directly. The default value is <span class="bdefaultvalue">no</span>. </p>
+
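+<p> A sketch (illustrative names): </p>
+<pre>
+  Client {
+    Name = client1-fd
+    Password = "password"
+    AllowFDConnections = yes   # accept and reuse FD-initiated connections
+  }
+</pre>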
+</dd>
+</div>
+<div id="Director_Client_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of Jobs with the current Client that can run concurrently. Note, this directive limits only Jobs for Clients with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs, such as in the Director, Job, or Storage resources, will also apply in addition to any limit specified here. The default is set to <span class="bdefaultvalue">1</span>, but you may set it to a larger number. If set to a large value, be careful not to set it higher than the <span class="bdirectivename">Maximum Concurrent Jobs</span> configured in the <span>Client</span> resource of the Client/File daemon configuration file. Otherwise, backup jobs can fail because the Director's connection to the FD is refused when Maximum Concurrent Jobs is exceeded on the FD side.
+</dd>
+</div>
+<div id="Director_Client_MaximumBandwidthPerJob">
+<dt>Maximum Bandwidth Per Job = <speed></dt>
+<dd>
+ <p> The speed parameter specifies the maximum allowed bandwidth in bytes that a job may use when started for this Client. You may specify the following speed parameter modifiers: kb/s (1,000 bytes per second), k/s (1,024 bytes per second), mb/s (1,000,000 bytes per second), or m/s (1,048,576 bytes per second). </p>
+<p> The use of TLS, TLS PSK, CommLine compression and Deduplication can interfere with the value set by this directive. </p>
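+<p> For example (hypothetical names and an illustrative value), limiting all jobs for one client to roughly two megabytes per second: </p>
+<pre>
+ Client {
+   Name = client1-fd
+   Address = client1.example.com
+   Password = "password"
+   Catalog = MyCatalog
+   # 2 m/s = 2 x 1,048,576 bytes per second
+   Maximum Bandwidth Per Job = 2 m/s
+ }
+</pre>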
+
+</dd>
+</div>
+<div id="Director_Client_AutoPrune">
+<dt>AutoPrune = <yes|no></dt>
+<dd>
+ If AutoPrune is set to <span class="bdefaultvalue">yes</span> (default), <span class="bbacula">Bacula</span> (version 1.20 or greater) will automatically apply the File retention period and the Job retention period for the Client at the end of the Job. If you set <span class="bdirectivename">AutoPrune</span> = <span class="bvalue">no</span>, pruning will not be done, and your Catalog will grow in size each time you run a Job. Pruning affects only information in the catalog and not data stored in the backup archives (on Volumes).
+</dd>
+</div>
+<div id="Director_Client_JobRetention">
+<dt>Job Retention = <time-period-specification></dt>
+<dd>
+ The Job Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep Job records in the Catalog database after the Job End time. When this time period expires, and if <span class="bdirectivename">AutoPrune</span> is set to <span class="bvalue">yes</span>, <span class="bbacula">Bacula</span> will prune (remove) Job records that are older than the specified Job Retention period. As with the other retention periods, this affects only records in the catalog and not data in your archive backup. <p> If a Job record is selected for pruning, all associated File and JobMedia records will also be pruned regardless of the File Retention period set. As a consequence, you normally will set the File retention period to be less than the Job retention period. The Job retention period can actually be less than the value you specify here if you set the <span class="bdirectivename">Volume Retention</span> directive in the Pool resource to a smaller duration. This is because the Job retention period and the Volume retention period are independently applied, so the smaller of the two takes precedence. </p>
+<p> The Job retention period is specified as seconds, minutes, hours, days, weeks, months, quarters, or years. See the Configuration chapter of this manual for additional details of time specification. </p>
+<p> The default is <span class="bdefaultvalue">180 days</span>. </p>
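+<p> A sketch (values are illustrative only) showing the usual relationship between the retention directives, with the File retention shorter than the Job retention: </p>
+<pre>
+ Client {
+   Name = client1-fd
+   ...
+   AutoPrune = yes            # prune at the end of each Job
+   File Retention = 60 days   # File records are pruned first
+   Job Retention = 180 days   # pruning a Job also prunes its File records
+ }
+</pre>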
+
+</dd>
+</div>
+<div id="Director_Client_FileRetention">
+<dt>File Retention = <time-period-specification></dt>
+<dd>
+ The File Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep File records in the Catalog database after the End time of the Job corresponding to the File records. When this time period expires, and if <span class="bdirectivename">AutoPrune</span> is set to <span class="bvalue">yes</span>, <span class="bbacula">Bacula</span> will prune (remove) File records that are older than the specified File Retention period. Note, this affects only records in the catalog database. It does not affect your archive backups. <p> File records may actually be retained for a shorter period than you specify on this directive if you specify either a shorter <span class="bbf">Job Retention</span> or a shorter <span class="bbf">Volume Retention</span> period. The shortest retention period of the three takes precedence. The time may be expressed in seconds, minutes, hours, days, weeks, months, quarters, or years. See the Configuration chapter of this manual for additional details of time specification. </p>
+<p> The default is <span class="bdefaultvalue">60 days</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Client_SnapshotRetention">
+<dt>Snapshot Retention = <time-period-specification></dt>
+<dd>
+ <p> The Snapshot Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep Snapshots in the Catalog database and on the Client after the Snapshot creation. When this time period expires, and if using the <span class="bcommandname">snapshot prune</span> command, <span class="bbacula">Bacula</span> will prune (remove) Snapshot records that are older than the specified Snapshot Retention period and will contact the FileDaemon to delete Snapshots from the system. </p>
+<p> The Snapshot retention period is specified as seconds, minutes, hours, days, weeks, months, quarters, or years. See the Configuration chapter of this manual for additional details of time specification. </p>
+<p> The default is <span class="bdefaultvalue">0 seconds</span>; Snapshots are deleted at the end of the backup. The Job <span class="bdirectivename">SnapshotRetention</span> directive overrides the Client <span class="bdirectivename">SnapshotRetention</span> directive. </p>
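+<p> For example (illustrative value), to keep snapshots on the client for five days instead of deleting them at the end of the backup: </p>
+<pre>
+ Client {
+   Name = client1-fd
+   ...
+   Snapshot Retention = 5 days
+ }
+</pre>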
+
+</dd>
+</div>
+<div id="Director_Client_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Director_Client_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Director_Client_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Director_Client_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Client_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may not be specified in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (server context) will not check the Director's public certificate (client context). There is no need to specify either the <span class="bdirectivename">TLS Certificate File</span> or the <span class="bdirectivename">TLS Key</span> directive in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the following Client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Director_Client_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Director_Client_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and one is always required in a client context.
+</dd>
+</div>
+<div id="Director_Client_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to the TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash with an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and one is always required in a client context.
+</dd>
+</div>
+<div id="Director_Client_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Director_Client_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid in both a server and a client context. If this directive is specified, the peer certificate's Common Name will be verified against this list. This can be used to ensure that only the CN-approved component may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list will only be checked if <span class="bdirectivename">TLS Verify Peer = yes</span> (the default). For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> When this directive is configured on the client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client does not provide a certificate with a Common Name that matches any value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="Director_Job_Name">
+<dt>Name = <name></dt>
+<dd>
+ The Job name. This name can be specified on the <span class="bcommandname">run</span> command in the console program to start a job. If the name contains spaces, it must be specified between quotes. It is generally a good idea to give your job the same name as the Client that it will backup. This permits easy identification of jobs. <p> When the job actually runs, the unique Job Name will consist of the name you specify here followed by the date and time the job was scheduled for execution. This directive is required. </p>
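+<p> A minimal Backup Job sketch (all resource names are hypothetical) illustrating how the Name ties the required directives of this section together: </p>
+<pre>
+ Job {
+   Name = "client1-backup"   # quote the name if it contains spaces
+   Type = Backup
+   Level = Incremental
+   Client = client1-fd
+   FileSet = "Full Set"
+   Storage = File1
+   Pool = Default
+   Messages = Standard
+ }
+</pre>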
+
+</dd>
+</div>
+<div id="Director_Job_Type">
+<dt>Type = <job-type></dt>
+<dd>
+ The <span class="bdirectivename">Type</span> directive specifies the Job type, which may be one of the following: <span class="bvalue">Backup</span>, <span class="bvalue">Restore</span>, <span class="bvalue">Verify</span>, <span class="bvalue">Admin</span>, <span class="bvalue">Migration</span>, or <span class="bvalue">Copy</span>. This directive is required. Within a particular Job Type, there are also Levels, as discussed in the next item.
+<dl class="bdescription2">
+<dt>Backup</dt>
+<dd class="bdescription2">
+ Run a backup Job. Normally you will have at least one Backup job for each client you want to save. Normally, unless you turn off cataloging, most of the important statistics and data concerning the files backed up will be placed in the catalog.
+</dd>
+<dt>Restore</dt>
+<dd class="bdescription2">
+ Run a restore Job. Normally, you will specify only one Restore job which acts as a sort of prototype that you will modify using the console program in order to perform restores. Although certain basic information from a Restore job is saved in the catalog, it is very minimal compared to the information stored for a Backup job - for example, no File database entries are generated since no Files are saved. <p> Restore jobs cannot be automatically started by the scheduler as is the case for Backup, Verify and Admin jobs. To restore files, you must use the <span class="bcommandname">restore</span> command in the console. </p>
+
+</dd>
+<dt>Verify</dt>
+<dd class="bdescription2">
+ Run a Verify Job. In general, Verify jobs permit you to compare the contents of the catalog to the file system, or to what was backed up. In addition to verifying that a tape that was written can be read, you can also use Verify as a sort of tripwire intrusion detection.
+</dd>
+<dt>Admin</dt>
+<dd class="bdescription2">
+<p> Run an Admin Job. An Admin job can be used to periodically run catalog pruning, if you do not want to do it at the end of each Backup Job. Although an Admin job is recorded in the catalog, very little data is saved. The Client is not involved in an Admin job, so features such as <span>“</span><span class="bdirectivename">Client Run Before Job</span><span>”</span> are not available. Only Director's runscripts will be executed. </p>
+
+</dd>
+<dt>Migration</dt>
+<dd class="bdescription2">
+ Run a Migration Job (similar to a backup job) that reads data that was previously backed up to a Volume and writes it to another Volume. (See (here))
+</dd>
+<dt>Copy</dt>
+<dd class="bdescription2">
+ Run a Copy Job that essentially creates two identical copies of the same backup. The Copy process is essentially identical to the Migration feature with the exception that the Job that is copied is left unchanged. (See (here))
+</dd>
+</dl>
+
+
+</dd>
+</div>
+<div id="Director_Job_Level">
+<dt>Level = <job-level></dt>
+<dd>
+ The Level directive specifies the default Job level to be run. Each different Job Type (Backup, Restore, ...) has a different set of Levels that can be specified. The Level is normally overridden by a different value that is specified in the <span>Schedule</span> resource. This directive is not required, but the Level must be specified either by a <span class="bdirectivename">Level</span> directive or as an override in the <span>Schedule</span> resource. <p> For a Backup Job, the Level may be one of the following: </p>
+
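+<p> The Level given here is only a default; a <span>Schedule</span> resource normally overrides it per run, as in this sketch (names and times are illustrative only): </p>
+<pre>
+ Schedule {
+   Name = "WeeklyCycle"
+   Run = Level=Full 1st sun at 23:05           # monthly Full
+   Run = Level=Differential 2nd-5th sun at 23:05
+   Run = Level=Incremental mon-sat at 23:05
+ }
+</pre>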
+<dl class="bdescription2">
+<dt>Full</dt>
+<dd class="bdescription2">
+ When the <span class="bdirectivename">Level</span> is set to <span class="bvalue">Full</span> all files in the FileSet whether or not they have changed will be backed up.
+</dd>
+<dt>Incremental</dt>
+<dd class="bdescription2">
+ When the Level is set to Incremental, all files specified in the FileSet that have changed since the last successful backup of the same Job using the same FileSet and Client will be backed up. If the Director cannot find a previous valid Full backup, then the job will be upgraded to a Full backup. When the Director looks for a valid backup record in the catalog database, it looks for a previous Job with:
+<ul class="bitemize3">
+<li class="bitemize3">The same Job name. </li>
+<li class="bitemize3">The same Client name. </li>
+<li class="bitemize3">The same FileSet (any change to the definition of the FileSet such as adding or deleting a file in the Include or Exclude sections constitutes a different FileSet. </li>
+<li class="bitemize3">The Job was a Full, Differential, or Incremental backup. </li>
+<li class="bitemize3">The Job terminated normally (i.e. did not fail or was not canceled). </li>
+<li class="bitemize3">The Job started no longer ago than <span class="bdirectivename">Max Full Interval</span>. </li>
+</ul>
+
+<p> If any of the above conditions does not hold, the Director will upgrade the Incremental to a Full save. Otherwise, the Incremental backup will be performed as requested. </p>
+<p> The File daemon (Client) decides which files to backup for an Incremental backup by comparing start time of the prior Job (Full, Differential, or Incremental) against the time each file was last <span>“</span>modified<span>”</span> (<span class="btt">st_mtime</span>) and the time its attributes were last <span>“</span>changed<span>”</span>(<span class="btt">st_ctime</span>). If the file was modified or its attributes changed on or after this start time, it will then be backed up. </p>
+<p> Some virus scanning software may change <span class="btt">st_ctime</span> while doing the scan. For example, if the virus scanning program attempts to reset the access time (<span class="btt">st_atime</span>), which <span class="bbacula">Bacula</span> does not use, it will cause <span class="btt">st_ctime</span> to change and hence <span class="bbacula">Bacula</span> will backup the file during an Incremental or Differential backup. In the case of Sophos virus scanning, you can prevent it from resetting the access time (<span class="btt">st_atime</span>) and hence changing <span class="btt">st_ctime</span> by using the <span class="bbf"><code>--</code>no-reset-atime</span> option. For other software, please see their manual. </p>
+<p> When <span class="bbacula">Bacula</span> does an Incremental backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the <span class="bbacula">Bacula</span> catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save. </p>
+<p> In addition, if you move a directory rather than copy it, the files in it do not have their modification time (<span class="btt">st_mtime</span>) or their attribute change time (<span class="btt">st_ctime</span>) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original. </p>
+<p> However, deleted files and directory changes can be reflected in the catalog during an Incremental backup by using <span class="bhighlight">accurate</span> mode. Note that this is quite a memory-consuming process. See Accurate mode for more details. </p>
+
+</dd>
+<dt>Differential</dt>
+<dd class="bdescription2">
+ When the Level is set to Differential, all files specified in the FileSet that have changed since the last successful Full backup of the same Job will be backed up. If the Director cannot find a valid previous Full backup for the same Job, FileSet, and Client, then the Differential job will be upgraded to a Full backup. When the Director looks for a valid Full backup record in the catalog database, it looks for a previous Job with:
+<ul class="bitemize3">
+<li class="bitemize3">The same Job name. </li>
+<li class="bitemize3">The same Client name. </li>
+<li class="bitemize3">The same FileSet (any change to the definition of the FileSet such as adding or deleting a file in the Include or Exclude sections constitutes a different FileSet. </li>
+<li class="bitemize3">The Job was a FULL backup. </li>
+<li class="bitemize3">The Job terminated normally (i.e. did not fail or was not canceled). </li>
+<li class="bitemize3">The Job started no longer ago than <span class="bdirectivename">Max Full Interval</span>. </li>
+</ul>
+
+<p> If any of the above conditions does not hold, the Director will upgrade the Differential to a Full save. Otherwise, the Differential backup will be performed as requested. </p>
+<p> The File daemon (Client) decides which files to backup for a differential backup by comparing the start time of the prior Full backup Job against the time each file was last <span>“</span>modified<span>”</span> (<span class="btt">st_mtime</span>) and the time its attributes were last <span>“</span>changed<span>”</span> (<span class="btt">st_ctime</span>). If the file was modified or its attributes were changed on or after this start time, it will then be backed up. The start time used is displayed after the <span class="bbf">Since</span> on the Job report. In rare cases, using the start time of the prior backup may cause some files to be backed up twice, but it ensures that no change is missed. As with the Incremental option, you should ensure that the clocks on your server and client are synchronized or as close as possible to avoid the possibility of a file being skipped. Note, on versions 1.33 or greater <span class="bbacula">Bacula</span> automatically makes the necessary adjustments to the time between the server and the client so that the times <span class="bbacula">Bacula</span> uses are synchronized. </p>
+<p> When <span class="bbacula">Bacula</span> does a Differential backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the <span class="bbacula">Bacula</span> catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save. However, to remove deleted files from the catalog during a Differential backup is quite a time consuming process and not currently implemented in <span class="bbacula">Bacula</span>. It is, however, a planned future feature. </p>
+<p> As noted above, if you move a directory rather than copy it, the files in it do not have their modification time (<span class="btt">st_mtime</span>) or their attribute change time (<span class="btt">st_ctime</span>) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original. Alternatively, you can move the directory, then use the <span class="btool">touch</span> program to update the timestamps. </p>
+<p> However, deleted files and directory changes can be reflected in the catalog during a Differential backup by using <span class="bhighlight">accurate</span> mode. Note that this is quite a memory-consuming process. See Accurate mode for more details. </p>
+<p> Every once in a while, someone asks why we need Differential backups as long as Incremental backups pick up all changed files. There are possibly many answers to this question, but the one that is the most important for me is that a Differential backup effectively merges all the Incremental and Differential backups since the last Full backup into a single Differential backup. This has two effects: </p>
+<ol class="benumerate1">
+<li class="benumerate1">It gives some redundancy since the old backups could be used if the merged backup cannot be read. </li>
+<li class="benumerate1">More importantly, it reduces the number of Volumes that are needed to do a restore effectively eliminating the need to read all the volumes on which the preceding Incremental and Differential backups since the last Full are done. </li>
+</ol>
+
+
+</dd>
+<dt>VirtualFull</dt>
+<dd class="bdescription2">
+ When the backup Level is set to <span class="bvalue">VirtualFull</span>, <span class="bbacula">Bacula</span> will consolidate the previous Full backup plus the most recent Differential backup and any subsequent Incremental backups into a new Full backup. This new Full backup will then be considered as the most recent Full for any future Incremental or Differential backups. The VirtualFull backup is accomplished without contacting the client by reading the previous backup data and writing it to a volume in a different pool. <p><span class="bbacula">Bacula</span>'s virtual backup feature is often called Synthetic Backup or Consolidation in other backup products. </p>
+
+</dd>
+</dl>
+
+<p> For a Restore Job, no level needs to be specified. </p>
+<p> For a Verify Job, the Level may be one of the following: </p>
+
+<dl class="bdescription2">
+<dt>InitCatalog</dt>
+<dd class="bdescription2">
+ does a scan of the specified <span class="bdirectivename">FileSet</span> and stores the file attributes in the Catalog database. Since no file data is saved, you might ask why you would want to do this. It turns out to be a very simple and easy way to have a <span class="bbf">Tripwire</span> like feature using <span class="bbacula">Bacula</span>. In other words, it allows you to save the state of a set of files defined by the <span>FileSet</span> and later check to see if those files have been modified or deleted and if any new files have been added. This can be used to detect system intrusion. Typically you would specify a <span>FileSet</span> that contains the set of system files that should not change (e.g. /sbin, /boot, /lib, /bin, ...). Normally, you run the <span class="bvalue">InitCatalog</span> level verify one time when your system is first set up, and then once again after each modification (upgrade) to your system. Thereafter, when you want to check the state of your system files, you use a Verify <span class="bdirectivename">level</span> = <span class="bvalue">Catalog</span>. This compares the results of your <span class="bvalue">InitCatalog</span> with the current state of the files.
+</dd>
+<dt>Catalog</dt>
+<dd class="bdescription2">
+ Compares the current state of the files against the state previously saved during an <span class="bvalue">InitCatalog</span>. Any discrepancies are reported. The items reported are determined by the Verify options specified on the <span class="bdirectivename">Include</span> directive in the specified <span>FileSet</span> (see the <span>FileSet</span> resource below for more details). Typically this command will be run once a day (or night) to check for any changes to your system files. <p> Please note! If you run two Verify Catalog jobs on the same client at the same time, the results will certainly be incorrect. This is because Verify Catalog modifies the Catalog database while running in order to track new files. </p>
+
+</dd>
+<dt>VolumeToCatalog</dt>
+<dd class="bdescription2">
+ This level causes <span class="bbacula">Bacula</span> to read the file attribute data written to the Volume from the last backup Job for the job specified on the <span class="bdirectivename">VerifyJob</span> directive. The file attribute data are compared to the values saved in the Catalog database and any differences are reported. This is similar to the <span class="bvalue">DiskToCatalog</span> level except that instead of comparing the disk file attributes to the catalog database, the attribute data written to the Volume is read and compared to the catalog database. Although the attribute data including the signatures (MD5 or SHA1) are compared, the actual file data is not compared (it is not in the catalog). <p> Please note! If you run two Verify VolumeToCatalog jobs on the same client at the same time, the results will certainly be incorrect. This is because the Verify VolumeToCatalog modifies the Catalog database while running. </p>
+
+</dd>
+<dt>DiskToCatalog</dt>
+<dd class="bdescription2">
+ This level causes <span class="bbacula">Bacula</span> to read the files as they currently are on disk, and to compare the current file attributes with the attributes saved in the catalog from the last backup for the job specified on the <span class="bdirectivename">VerifyJob</span> directive. This level differs from the <span class="bvalue">VolumeToCatalog</span> level described above by the fact that it doesn't compare against a previous Verify job but against a previous backup. When you run this level, you must supply the verify options on your Include statements. Those options determine what attribute fields are compared. <p> This command can be very useful if you have disk problems because it will compare the current state of your disk against the last successful backup, which may be several jobs old. </p>
+<p> Note, the current implementation (1.32c) does not identify files that have been deleted.</p>
+</dd>
+</dl>
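+<p> The Verify levels above can be combined into a Job resource such as the following sketch (all resource names are hypothetical), which checks the attributes written to the Volume by a previous backup Job against the catalog: </p>
+<pre>
+ Job {
+   Name = "VerifyClient1"
+   Type = Verify
+   Level = VolumeToCatalog
+   Client = client1-fd
+   FileSet = "Full Set"
+   Storage = File1
+   Pool = Default
+   Messages = Standard
+   # the backup Job whose data will be verified
+   Verify Job = "client1-backup"
+ }
+</pre>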
+
+
+</dd>
+</div>
+<div id="Director_Job_Client">
+<dt>Client = <client-resource-name></dt>
+<dd>
+ The Client directive specifies the Client (File daemon) that will be used in the current Job. Only a single Client may be specified in any one Job. The Client runs on the machine to be backed up, and sends the requested files to the Storage daemon for backup, or receives them when restoring. For additional details, see the Client Resource section of this chapter. This directive is required.
+</dd>
+</div>
+<div id="Director_Job_Fileset">
+<dt>FileSet = <FileSet-resource-name></dt>
+<dd>
+ The FileSet directive specifies the FileSet that will be used in the current Job. The FileSet specifies which directories (or files) are to be backed up, and what options to use (e.g. compression, ...). Only a single FileSet resource may be specified in any one Job. For additional details, see the FileSet Resource section of this chapter. This directive is required.
+</dd>
+</div>
+<div id="Director_Job_Pool">
+<dt>Pool = <pool-resource-name></dt>
+<dd>
+ The Pool directive defines the pool of Volumes where your data can be backed up. Many <span class="bbacula">Bacula</span> installations will use only the <span class="bvalue">Default</span> pool. However, if you want to specify a different set of Volumes for different Clients or different Jobs, you will probably want to use Pools. For additional details, see the Pool Resource section of this chapter. This directive is required.
+</dd>
+</div>
+<div id="Director_Job_Storage">
+<dt>Storage = <storage-resource-name></dt>
+<dd>
+ The Storage directive defines the name of the storage services where you want to back up the FileSet data. For additional details, see the Storage Resource Chapter of this manual. The Storage resource may also be specified in the Job's Pool resource, in which case the value in the Pool resource overrides any value in the Job. This Storage resource definition is not required in either the Job resource or the Pool resource, but it must be specified in one or the other; if it is not, an error will result.
+</dd>
+</div>
+<div id="Director_Job_Messages">
+<dt>Messages = <messages-resource-name></dt>
+<dd>
+ The Messages directive defines what Messages resource should be used for this job, and thus how and where the various messages are to be delivered. For example, you can direct some messages to a log file, and others can be sent by email. For additional details, see the Messages Resource Chapter of this manual. This directive is required.
+</dd>
+</div>
+<div id="Director_Job_Schedule">
+<dt>Schedule = <schedule-name></dt>
+<dd>
+ The Schedule directive defines what schedule is to be used for the Job. The schedule in turn determines when the Job will be automatically started and what Job level (i.e. Full, Incremental, ...) is to be run. This directive is optional, and if left out, the Job can only be started manually using the Console program. Although you may specify only a single Schedule resource for any one job, the Schedule resource may contain multiple <span class="bdirectivename">Run</span> directives, which allow you to run the Job at many different times, and each <span class="bdirectivename">Run</span> directive permits overriding the default Job Level, Pool, Storage, and Messages resources. This gives considerable flexibility in what can be done with a single Job. For additional details, see the Schedule Resource chapter of this manual.
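+<p> For example (the schedule name, times, and pool are illustrative), a single Schedule resource can override the Level and Pool on each <span class="bdirectivename">Run</span> directive: </p>
+<pre>
+Schedule {
+  Name = "WeeklyCycle"
+  Run = Level=Full Pool=MonthlyPool 1st sun at 23:05
+  Run = Level=Differential 2nd-5th sun at 23:05
+  Run = Level=Incremental mon-sat at 23:05
+}
+</pre>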
+</dd>
+</div>
+<div id="Director_Job_JobDefs">
+<dt>JobDefs = <JobDefs-Resource-Name></dt>
+<dd>
+ If a <span class="bbracket"><JobDefs-Resource-Name></span> is specified, all the values contained in the named <span>JobDefs</span> resource will be used as the defaults for the current Job. Any value that you explicitly define in the current Job resource, will override any defaults specified in the <span>JobDefs</span> resource. The use of this directive permits writing much more compact <span>Job</span> resources where the bulk of the directives are defined in one or more JobDefs. This is particularly useful if you have many similar Jobs but with minor variations such as different Clients. A simple example of the use of JobDefs is provided in the default <span class="bfilename">bacula-dir.conf</span> file.
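+<p> As a sketch (the resource names are illustrative), a JobDefs resource can hold the common directives so that each Job need only name what differs, such as the Client: </p>
+<pre>
+JobDefs {
+  Name = "DefaultJob"
+  Type = Backup
+  Level = Incremental
+  FileSet = "Full Set"
+  Schedule = "WeeklyCycle"
+  Storage = File
+  Messages = Standard
+  Pool = Default
+}
+
+Job {
+  Name = "client1-job"
+  Client = client1-fd
+  JobDefs = "DefaultJob"
+}
+</pre>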
+</dd>
+</div>
+<div id="Director_Job_Priority">
+<dt>Priority = <number></dt>
+<dd>
+ This directive permits you to control the order in which your jobs will be run by specifying a positive non-zero number. The higher the number, the lower the job priority. Assuming you are not running concurrent jobs, all queued jobs of priority 1 will run before queued jobs of priority 2 and so on, regardless of the original scheduling order. <p> The priority only affects waiting jobs that are queued to run, not jobs that are already running. If one or more jobs of priority 2 are already running, and a new job is scheduled with priority 1, the currently running priority 2 jobs must complete before the priority 1 job is run, unless <span class="bdirectivename">Allow Mixed Priority</span> is set. </p>
+<p> The default priority is <span class="bdefaultvalue">10</span>. </p>
+<p> If you want to run concurrent jobs you should keep these points in mind: </p>
+
+<ul class="bitemize2">
+<li class="bitemize2">See the Running Concurrent Jobs section of the <span class="bmanualname"><span class="bbacula">Bacula</span> Enterprise Problems Resolution guide</span> on how to set up concurrent jobs.
+</li>
+<li class="bitemize2">
+<span class="bbacula">Bacula</span> concurrently runs jobs of only one priority at a time. It will not simultaneously run a priority 1 and a priority 2 job.
+</li>
+<li class="bitemize2">If <span class="bbacula">Bacula</span> is running a priority 2 job and a new priority 1 job is scheduled, it will wait until the running priority 2 job terminates even if the <span class="bdirectivename">Maximum Concurrent Jobs</span> settings would otherwise allow two jobs to run simultaneously.
+</li>
+<li class="bitemize2">Suppose that bacula is running a priority 2 job and a new priority 1 job is scheduled and queued waiting for the running priority 2 job to terminate. If you then start a second priority 2 job, the waiting priority 1 job will prevent the new priority 2 job from running concurrently with the running priority 2 job. That is: as long as there is a higher priority job waiting to run, no new lower priority jobs will start even if the Maximum Concurrent Jobs settings would normally allow them to run. This ensures that higher priority jobs will be run as soon as possible. </li>
+</ul>
+
+<p> If you have several jobs of different priority, it may be best not to start them at exactly the same time, because <span class="bbacula">Bacula</span> must examine them one at a time. If by chance <span class="bbacula">Bacula</span> starts a lower priority job first, it will run before your higher priority jobs. If you experience this problem, you may avoid it by starting any higher priority jobs a few seconds before lower priority ones. This ensures that <span class="bbacula">Bacula</span> will examine the jobs in the correct order, and that your priority scheme will be respected. </p>
+
+</dd>
+</div>
+<div id="Director_Job_Accurate">
+<dt>Accurate = <yes|no></dt>
+<dd>
+ In accurate mode, the File daemon knows exactly which files were present after the last backup, so it is able to handle deleted or renamed files. <p> When restoring a FileSet for a specified date (including <span>“</span>most recent<span>”</span>), <span class="bbacula">Bacula</span> is able to restore exactly the files and directories that existed at the time of the last backup prior to that date, including ensuring that deleted files are actually deleted and renamed directories are restored properly. </p>
+<p> In this mode, the File daemon must keep data concerning all files in memory, so if you do not have sufficient memory, the backup may either be terribly slow or fail. </p>
+<p> For 500,000 files (a typical desktop Linux system), it will require approximately 64 megabytes of RAM on your File daemon to hold the required information. </p>
+
+</dd>
+</div>
+<div id="Director_Job_Enabled">
+<dt>Enabled = <yes|no></dt>
+<dd>
+ This directive allows you to enable or disable a <span>Job</span> resource. When the resource of the Job is disabled, the Job will no longer be scheduled and it will not be available in the list of Jobs to be run. To be able to use the Job you must <span class="bcommandname">enable</span> it.
+</dd>
+</div>
+<div id="Director_Job_Run">
+<dt>Run = <job-name></dt>
+<dd>
+ The <span class="bdirectivename">Run</span> directive (not to be confused with the Run option in a Schedule) allows you to start other jobs or to clone jobs. By using the cloning keywords (see below), you can backup the same data (or almost the same data) to two or more drives at the same time. The <span class="bbracket"><job-name></span> is normally the same name as the current Job resource (thus creating a clone). However, it may be any Job name, so one job may start other related jobs. <p> The part after the equal sign must be enclosed in double quotes, and can contain any string or set of options (overrides) that you can specify when entering the <span class="bcommandname">run</span> command from the console. For example <span class="bbf">storage=DDS-4 ...</span>. In addition, there are two special keywords that permit you to clone the current job. They are <span class="bbf">level=%l</span> and <span class="bbf">since=%s</span>. The %l in the level keyword permits entering the actual level of the current job and the %s in the since keyword permits putting the same time for comparison as used on the current job. Note, in the case of the since keyword, the %s must be enclosed in double quotes, and thus they must be preceded by a backslash since they are already inside quotes. For example: </p>
+
+<pre>
+ run = "Nightly-backup level=%l since=\"%s\" storage=DDS-4"
+</pre>
+<p> A cloned job will not start additional clones, so it is not possible to recurse. </p>
+<p> Please note that all cloned jobs, as specified in the Run directives are submitted for running before the original job is run (while it is being initialized). This means that any clone job will actually start before the original job, and may even block the original job from starting until the original job finishes unless you allow multiple simultaneous jobs. Even if you set a lower priority on the clone job, if no other jobs are running, it will start before the original job. </p>
+<p> If you are trying to prioritize jobs by using the clone feature (Run directive), you will find it much easier to do using a RunScript resource, or a RunBeforeJob directive. </p>
+
+</dd>
+</div>
+<div id="Director_Job_FullBackupPool">
+<dt>Full Backup Pool = <pool-resource-name></dt>
+<dd>
+ The <span class="bdirectivename">Full Backup Pool</span> specifies a Pool to be used for Full backups. It will override any Pool specification during a Full backup. This directive is optional.
+</dd>
+</div>
+<div id="Director_Job_IncrementalBackupPool">
+<dt>Incremental Backup Pool = <pool-resource-name></dt>
+<dd>
+ The <span class="bdirectivename">Incremental Backup Pool</span> specifies a Pool to be used for Incremental backups. It will override any Pool specification during an Incremental backup. This directive is optional.
+</dd>
+</div>
+<div id="Director_Job_DifferentialBackupPool">
+<dt>Differential Backup Pool = <pool-resource-name></dt>
+<dd>
+ The <span class="bdirectivename">Differential Backup Pool</span> specifies a Pool to be used for Differential backups. It will override any Pool specification during a Differential backup. This directive is optional.
+</dd>
+</div>
+<div id="Director_Job_MaxFullInterval">
+<dt>Max Full Interval = <time></dt>
+<dd>
+ The time specifies the maximum allowed age (counting from start time) of the most recent successful Full backup that is required in order to run Incremental or Differential backup jobs. If the most recent Full backup is older than this interval, Incremental and Differential backups will be upgraded to Full backups automatically. If this directive is not present, or specified as 0, then the age of the previous Full backup is not considered.
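+<p> For example, to upgrade Incremental and Differential jobs to Full automatically whenever the last Full backup is more than 30 days old (the interval is illustrative): </p>
+<pre>
+Job {
+  ...
+  Max Full Interval = 30 days
+}
+</pre>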
+</dd>
+</div>
+<div id="Director_Job_WriteBootstrap">
+<dt>Write Bootstrap = <bootstrap-file-specification></dt>
+<dd>
+ The <span>writebootstrap</span> directive specifies a file name where <span class="bbacula">Bacula</span> will write a <span class="bbf">bootstrap</span> file for each Backup job run. This directive applies only to Backup Jobs. If the Backup job is a Full save, <span class="bbacula">Bacula</span> will erase any current contents of the specified file before writing the bootstrap records. If the Job is an Incremental or Differential save, <span class="bbacula">Bacula</span> will append the current bootstrap record to the end of the file. <p> Using this feature permits you to constantly have a bootstrap file that can recover the current state of your system. Normally, the file specified should be on a mounted drive on another machine, so that if your hard disk is lost, you will immediately have a bootstrap record available. Alternatively, you should copy the bootstrap file to another machine after it is updated. Note, it is a good idea to write a separate bootstrap file for each Job backed up, including the job that backs up your catalog database. </p>
+<p> If the <span class="bbracket"><bootstrap-file-specification></span> begins with a vertical bar (|), <span class="bbacula">Bacula</span> will use the specification as the name of a program to which it will pipe the bootstrap record. It could for example be a shell script that emails you the bootstrap record. </p>
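+<p> For example, the bootstrap record could be piped to the <span class="btool">bsmtp</span> program to mail it off-site (the program path, mail host, and addresses are placeholders): </p>
+<pre>
+Write Bootstrap = "|/opt/bacula/bin/bsmtp -h mail.example.com -f bacula@example.com -s \"Bootstrap for Job %j\" admin@example.com"
+</pre>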
+<p> On versions 1.39.22 or greater, before opening the file or executing the specified command, <span class="bbacula">Bacula</span> performs character substitution like in RunScript directive. To automatically manage your bootstrap files, you can use this in your <span>JobDefs</span> resources: </p>
+<pre>
+JobDefs {
+ Write Bootstrap = "%c_%n.bsr"
+ ...
+}
+</pre>
+<p> For more details on using this file, please see the chapter entitled The Bootstrap File of this manual. </p>
+
+</dd>
+</div>
+<div id="Director_Job_SpoolData">
+<dt>Spool Data = <yes|no></dt>
+<dd>
+ <p> If this directive is set to <span class="bvalue">yes</span> (default <span class="bdefaultvalue">no</span>), the Storage daemon will be requested to spool the data for this Job to disk rather than write it directly to the Volume (normally a tape). </p>
+<p> Thus the data is written in large blocks to the Volume rather than small blocks. This directive is particularly useful when running multiple simultaneous backups to tape. Once all the data arrives or the spool files' maximum sizes are reached, the data will be despooled and written to tape. </p>
+<p> Spooling data prevents interleaving data from several jobs and reduces or eliminates tape drive stops and starts, commonly known as <span>“</span>shoe-shine<span>”</span>. </p>
+<p> We don't recommend using this option if you are writing to a disk file; using this option will probably just slow down the backup jobs. </p>
+<p> NOTE: When this directive is set to yes, Spool Attributes is also automatically set to yes. </p>
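+<p> A minimal sketch of enabling data spooling for a tape job, with a per-job spool limit (the size shown is arbitrary): </p>
+<pre>
+Job {
+  ...
+  Spool Data = yes
+  SpoolSize = 10gb
+}
+</pre>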
+
+</dd>
+</div>
+<div id="Director_Job_SpoolAttributes">
+<dt>Spool Attributes = <yes|no></dt>
+<dd>
+ <p> The default is <span class="bdefaultvalue">yes</span>. When enabled, the Storage daemon will buffer the File attributes and Storage coordinates to a temporary file in the Working Directory; once writing the Job data to the tape is completed, the attributes and storage coordinates will be sent to the Director. If set to <span class="bvalue">no</span>, the File attributes are sent by the Storage daemon to the Director as they are stored on tape. </p>
+<p> NOTE: When Spool Data is set to yes, Spool Attributes is also automatically set to yes. </p>
+
+</dd>
+</div>
+<div id="Director_Job_SpoolSize">
+<dt>SpoolSize=bytes</dt>
+<dd>
+ The bytes specify the maximum spool size for this job. The default is taken from the Device Maximum Spool Size limit. This directive is available only in <span class="bbacula">Bacula</span> version 2.3.5 or later.
+</dd>
+</div>
+<div id="Director_Job_PreferMountedVolumes">
+<dt>Prefer Mounted Volumes = <yes|no></dt>
+<dd>
+ If the Prefer Mounted Volumes directive is set to <span class="bvalue">yes</span> (default <span class="bdefaultvalue">yes</span>), the Storage daemon is requested to select either an Autochanger or a drive with a valid Volume already mounted in preference to a drive that is not ready. This means that all jobs will attempt to append to the same Volume (providing the Volume is appropriate - right Pool, ... for that job), unless you are using multiple pools. If no drive with a suitable Volume is available, it will select the first available drive. Note, any Volume that has been requested to be mounted will be considered valid as a mounted volume by another job. Thus, if multiple jobs start at the same time and they all prefer mounted volumes, the first job will request the mount, and the other jobs will use the same volume. <p> If the directive is set to <span class="bvalue">no</span>, the Storage daemon will prefer finding an unused drive; otherwise, each job started will append to the same Volume (assuming the Pool is the same for all jobs). Setting Prefer Mounted Volumes to no can be useful for those sites with multiple drive autochangers that prefer to maximize backup throughput at the expense of using additional drives and Volumes. This means that the job will prefer to use an unused drive rather than use a drive that is already in use. </p>
+<p> Despite the above, we recommend against setting this directive to <span class="bvalue">no</span> since it tends to add a lot of swapping of Volumes between the different drives and can easily lead to deadlock situations in the Storage daemon. We will accept bug reports against it, but we cannot guarantee that we will be able to fix the problem in a reasonable time. </p>
+<p> A better alternative for using multiple drives is to use multiple pools so that <span class="bbacula">Bacula</span> will be forced to mount Volumes from those Pools on different drives. </p>
+
+</dd>
+</div>
+<div id="Director_Job_RescheduleOnError">
+<dt>Reschedule On Error = <yes|no></dt>
+<dd>
+ If this directive is enabled, and the job terminates in error, the job will be rescheduled as determined by the <span class="bdirectivename">Reschedule Interval</span> and <span class="bdirectivename">Reschedule Times</span> directives. If you cancel the job, it will not be rescheduled. The default is <span class="bdefaultvalue">no</span> (i.e. the job will not be rescheduled). <p> This specification can be useful for portables, laptops, or other machines that are not always connected to the network or switched on. </p>
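+<p> For example, to retry a failed laptop backup every hour, at most three times (the values are chosen for illustration): </p>
+<pre>
+Job {
+  ...
+  Reschedule On Error = yes
+  Reschedule Interval = 1 hour
+  Reschedule Times = 3
+}
+</pre>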
+
+</dd>
+</div>
+<div id="Director_Job_RescheduleIncompleteJobs">
+<dt>Reschedule Incomplete Jobs = <yes|no></dt>
+<dd>
+ <p> If this directive is enabled, and the job terminates in incomplete status, the job will be rescheduled as determined by the <span class="bdirectivename">Reschedule Interval</span> and <span class="bdirectivename">Reschedule Times</span> directives. If you cancel the job, it will not be rescheduled. The default is <span class="bdefaultvalue">yes</span> (i.e. Incomplete jobs will be rescheduled). </p>
+
+</dd>
+</div>
+<div id="Director_Job_RescheduleInterval">
+<dt>Reschedule Interval = <time-specification></dt>
+<dd>
+ If you have specified <span class="bdirectivename">Reschedule On Error</span> = <span class="bvalue">yes</span> and the job terminates in error, it will be rescheduled after the interval of time specified by <span class="bbracket"><time-specification></span>. See the time specification formats in the Configure chapter for details of time specifications. If no interval is specified, the job will not be rescheduled on error. The default Reschedule Interval is <span class="bdefaultvalue">30 minutes</span> (<span class="bdefaultvalue">1800 seconds</span>).
+</dd>
+</div>
+<div id="Director_Job_RescheduleTimes">
+<dt>Reschedule Times = <count></dt>
+<dd>
+ This directive specifies the maximum number of times to reschedule the job. If it is set to <span class="bdefaultvalue">zero</span> (<span class="bdefaultvalue">0</span>, the default) the job will be rescheduled an indefinite number of times.
+</dd>
+</div>
+<div id="Director_Job_Base">
+<dt>Base = <job-resource-name, ...></dt>
+<dd>
+ The Base directive permits you to specify a list of jobs to be used as a base during a Full backup. This directive is optional. See the Base Job chapter for more information.
+</dd>
+</div>
+<div id="Director_Job_AllowIncompleteJobs">
+<dt>Allow Incomplete Jobs = <yes|no></dt>
+<dd>
+ <p> If this directive is disabled, and the job terminates in incomplete status, the data of the job will be discarded and the job will be marked in error. Bacula will treat this job like a regular job in error. The default is <span class="bdefaultvalue">yes</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Job_VirtualFullBackupPool">
+<dt>VirtualFull Backup Pool = <pool-resource-name></dt>
+<dd>
+ The <span class="bdirectivename">VirtualFull Backup Pool</span> specifies a Pool to be used for VirtualFull backups. It will override any Pool specification during a VirtualFull backup. This directive is optional.
+</dd>
+</div>
+<div id="Director_Job_MaxVirtualFullInterval">
+<dt>Max VirtualFull Interval = <time></dt>
+<dd>
+ <p> The time specifies the maximum allowed age (counting from start time) of the most recent successful Full backup that is required in order to run Incremental, Differential or Full backup jobs. If the most recent Full backup is older than this interval, Incremental, Differential and Full backups will be converted to a VirtualFull backup automatically. If this directive is not present, or specified as 0, then the age of the previous Full backup is not considered. </p>
+
+
+<p> Please note that a VirtualFull job is not a real backup job. A VirtualFull will merge existing jobs to create a new virtual Full job in the catalog and will copy the existing data to new volumes. </p>
+<p> The Client is not used in a VirtualFull job, so when using this directive, the Job that was supposed to run and save recently modified data on the Client will not run. Only the next regular Job defined in the Schedule will backup the data. It will not be possible to restore the data that was modified on the Client between the last Incremental/Differential and the VirtualFull. </p>
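+<p> A sketch of a Job that is converted to a VirtualFull once the last Full backup is more than 60 days old (the interval is illustrative): </p>
+<pre>
+Job {
+  ...
+  Accurate = yes
+  Max VirtualFull Interval = 60 days
+}
+</pre>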
+
+
+</dd>
+</div>
+<div id="Director_Job_BackupsToKeep">
+<dt>BackupsToKeep = <number></dt>
+<dd>
+ <p> When this directive is present during a Virtual Full (it is ignored for other Job types), it will look for a Full backup that has more subsequent backups than the value specified. In the example below, the Job will simply terminate unless there is a Full backup followed by at least 31 backups of either level Differential or Incremental. </p>
+
+<pre>
+ Job {
+ Name = "VFull"
+ Type = Backup
+ Level = VirtualFull
+ Client = "my-fd"
+ File Set = "FullSet"
+ Accurate = Yes
+ Backups To Keep = 30
+ }
+</pre>
+<p> Assuming that the last Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full. The result is that you will end up with a Full followed by 30 Incremental backups. </p>
+
+</dd>
+</div>
+<div id="Director_Job_DeleteConsolidatedJobs">
+<dt>DeleteConsolidatedJobs = <yes/no></dt>
+<dd>
+ <p> If set to <span class="bvalue">yes</span>, it will cause any old Job that is consolidated during a Virtual Full to be deleted. In the example above we saw that a Full plus one other job (either an Incremental or Differential) were consolidated into a new Full backup. The original Full plus the other Job consolidated will be deleted. The default value is <span class="bdefaultvalue">no</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Job_SelectionType">
+<dt>Selection Type = <Selection-type-keyword></dt>
+<dd>The <span class="bbracket"><Selection-type-keyword></span> determines how the migration job will go about selecting what JobIds to migrate. In most cases, it is used in conjunction with a <span class="bbf">Selection Pattern</span> to give you fine control over exactly what JobIds are selected. The possible values for <span class="bbracket"><Selection-type-keyword></span> are: <dl class="bdescription2">
+<dt>SmallestVolume</dt>
+<dd class="bdescription2">This selection keyword selects the volume with the fewest bytes from the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
+</dd>
+<dt>OldestVolume</dt>
+<dd class="bdescription2">This selection keyword selects the volume with the oldest last write time in the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
+</dd>
+<dt>Client</dt>
+<dd class="bdescription2">The Client selection type, first selects all the Clients that have been backed up in the Pool specified by the Migration Job resource, then it applies the <span class="bbf">Selection Pattern</span> (defined below) as a regular expression to the list of Client names, giving a filtered Client name list. All jobs that were backed up for those filtered (regexed) Clients will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found for those filtered Clients.
+</dd>
+<dt>Volume</dt>
+<dd class="bdescription2">The Volume selection type, first selects all the Volumes that have been backed up in the Pool specified by the Migration Job resource, then it applies the <span class="bbf">Selection Pattern</span> (defined below) as a regular expression to the list of Volume names, giving a filtered Volume list. All JobIds that were backed up for those filtered (regexed) Volumes will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found on those filtered Volumes. <p> Jobs on Volumes will be considered for Migration only if the Volume is marked, Full, Used, or Error. Volumes that are still marked Append will not be considered for migration. This prevents <span class="bbacula">Bacula</span> from attempting to read the Volume at the same time it is writing it. It also reduces other deadlock situations, as well as avoids the problem that you migrate a Volume and later find new files appended to that Volume. </p>
+
+</dd>
+<dt>Job</dt>
+<dd class="bdescription2">The Job selection type first selects all the Jobs (as defined on the <span class="bbf">Name</span> directive in a Job resource) that have been backed up in the Pool specified by the Migration Job resource, then it applies the <span class="bbf">Selection Pattern</span> (defined below) as a regular expression to the list of Job names, giving a filtered Job name list. All JobIds that were run for those filtered (regexed) Job names will be migrated. Note, for a given Job name, there can be many jobs (JobIds) that ran. The migration control job will then start and run one migration backup job for each of the Jobs found.
+</dd>
+<dt>SQLQuery</dt>
+<dd class="bdescription2">The SQLQuery selection type uses the <span class="bbf">Selection Pattern</span> as an SQL query to obtain the JobIds to be migrated. The Selection Pattern must be a valid SELECT SQL statement for your SQL engine, and it must return the JobId as the first field of the SELECT.
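+<p> As a sketch (assuming a MySQL catalog; the pool name, job status, and interval are illustrative), a query might select all terminated jobs in a given pool older than 30 days: </p>
+<pre>
+Selection Type = SQLQuery
+Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job, Pool WHERE Pool.Name = 'Default' AND Job.PoolId = Pool.PoolId AND Job.JobStatus = 'T' AND Job.RealEndTime &lt; NOW() - INTERVAL 30 DAY"
+</pre>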
+</dd>
+<dt>PoolOccupancy</dt>
+<dd class="bdescription2">This selection type will cause the Migration job to compute the total size of the specified pool for all Media Types combined. If it exceeds the <span class="bbf">Migration High Bytes</span> defined in the Pool, the Migration job will migrate all JobIds beginning with the oldest Volume in the pool (determined by Last Write time) until the Pool bytes drop below the <span class="bbf">Migration Low Bytes</span> defined in the Pool. This calculation should be considered rather approximate because it is made once by the Migration job before migration begins, and thus does not take into account additional data written into the Pool during the migration. In addition, the calculation of the total Pool byte size is based on the Volume bytes saved in the Volume (Media) database entries. The bytes calculated for Migration are based on the value stored in the Job records of the Jobs to be migrated. These do not include the Storage daemon overhead that is included in the total Pool size. As a consequence, normally, the migration will migrate more bytes than strictly necessary.
+</dd>
+<dt>PoolTime</dt>
+<dd class="bdescription2">The PoolTime selection type will cause the Migration job to look at the time each JobId has been in the Pool since the job ended. All Jobs in the Pool longer than the time specified on <span class="bbf">Migration Time</span> directive in the Pool resource will be migrated.
+</dd>
+<dt>PoolUncopiedJobs</dt>
+<dd class="bdescription2">This selection type, which copies all jobs from one pool to another pool that were not previously copied, is available only for Copy jobs.
+</dd>
+</dl>
+
+</dd>
+</div>
+<div id="Director_Job_SelectionPattern">
+<dt>Selection Pattern = <Quoted-string></dt>
+<dd>The Selection Patterns permitted for each Selection-type-keyword are described above. <p> For the OldestVolume and SmallestVolume, this Selection pattern is not used (ignored). </p>
+
+ For the Client, Volume, and Job keywords, this pattern must be a valid regular expression that will filter the appropriate item names found in the Pool.
+ For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement that returns JobIds.
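+<p> For example, to migrate only Volumes whose names begin with <span class="btt">File</span> (a hypothetical Volume naming scheme): </p>
+<pre>
+  Selection Type = Volume
+  Selection Pattern = "^File"
+</pre>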
+</dd>
+</div>
+<div id="Director_Job_MaximumSpawnedJobs">
+<dt>Maximum Spawned Jobs = <nb></dt>
+<dd>
+ <p> The Job resource permits limiting the number of jobs a Job may spawn with the <span class="bdirectivename">Maximum Spawned Jobs</span> directive. The default is <span class="bdefaultvalue">600</span>. This directive can be useful if you have big hardware and you run many Migration/Copy jobs that start at the same time. </p>
+
+</dd>
+</div>
+<div id="Director_Job_PurgeMigrationJob">
+<dt>Purge Migration Job = <yes|no></dt>
+<dd>This directive may be added to the Migration Job definition in the Director configuration file to purge the job migrated at the end of a migration.</dd>
+</div>
+<div id="Director_Job_VerifyJob">
+<dt>Verify Job = <Job-Resource-Name></dt>
+<dd>
+ If you run a verify job without this directive, the last job run will be compared with the catalog, which means that you must immediately follow a backup by a <span class="bcommandname">verify</span> command. If you specify a Verify Job <span class="bbacula">Bacula</span> will find the last job with that name that ran. This permits you to run all your backups, then run Verify jobs on those that you wish to be verified (most often a <span class="bvalue">VolumeToCatalog</span>) so that the tape just written is re-read.
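+<p> As an illustration (the job and resource names are hypothetical), a Verify job that re-reads the Volume written by a nightly backup might look like: </p>
+<pre>
+Job {
+  Name = "VerifyNightly"
+  Type = Verify
+  Level = VolumeToCatalog
+  Verify Job = "NightlySave"
+  Client = my-fd
+  FileSet = "Full Set"
+  Storage = DDS-4
+  Messages = Standard
+  Pool = Default
+}
+</pre>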
+</dd>
+</div>
+<div id="Director_Job_Where">
+<dt>Where = <directory></dt>
+<dd>
+ This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored. This permits files to be restored in a different location from which they were saved. If <span class="bdirectivename">Where</span> is not specified or is set to slash (<span class="bdirectoryname">/</span>), the files will be restored to their original location. By default, we have set <span class="bdirectivename">Where</span> in the example configuration files to be <span class="bdirectoryname">/tmp/bacula-restores</span>. This is to prevent accidental overwriting of your files.
+</dd>
+</div>
+<div id="Director_Job_Replace">
+<dt>Replace = <replace-option></dt>
+<dd>
+ This directive applies only to a Restore job and specifies what happens when <span class="bbacula">Bacula</span> wants to restore a file or directory that already exists. You have the following options for <span class="bbracket"><replace-option></span>:
+</dd>
+</div>
+<div id="Director_Job_PrefixLinks">
+<dt>Prefix Links=<yes|no></dt>
+<dd>
+ If a <span class="bdirectoryname">Where</span> path prefix is specified for a recovery job, apply it to absolute links as well. The default is <span class="bdefaultvalue">no</span>. When set to <span class="bvalue">yes</span>, then while restoring files to an alternate directory, any absolute soft links will also be modified to point to the new alternate directory. Normally this is what is desired, i.e. everything is self-consistent. However, if you wish to later move the files to their original locations, all files linked with absolute names will be broken.
+</dd>
+</div>
+<div id="Director_Job_RegexWhere">
+<dt>RegexWhere = <expressions></dt>
+<dd>
+ This directive applies only to a Restore job and specifies a regex filename manipulation of all files being restored. This will use the File Relocation feature implemented in <span class="bbacula">Bacula</span> 2.1.8 or later. <p> For more information about how to use this option, see the File Relocation feature description. </p>
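+<p> As an illustration only (a sketch; consult the File Relocation feature description for the exact expression syntax), sed-style substitutions of the form <span class="btt">!from!to!</span>, separated by commas, could be written as: </p>
+<pre>
+  # Relocate files from /prod to /rect, then rename .conf files to .old
+  RegexWhere = "!/prod!/rect!,!\.conf$!.old!"
+</pre>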
+
+</dd>
+</div>
+<div id="Director_Job_StripPrefix">
+<dt>Strip Prefix = <directory></dt>
+<dd>
+ This directive applies only to a Restore job and specifies a prefix to remove from the directory name of all files being restored. This will use the File Relocation feature implemented in <span class="bbacula">Bacula</span> 2.1.8 or later. <p> Using <span class="btt">Strip Prefix=/etc</span>, <span class="bfilename">/etc/passwd</span> will be restored to <span class="bfilename">/passwd</span></p>
+<p> Under Windows, if you want to restore <span class="bfilename">c:/files</span> to <span class="bfilename">d:/files</span>, you can use : </p>
+
+<pre>
+ Strip Prefix = c:
+ Add Prefix = d:
+</pre>
+
+</dd>
+</div>
+<div id="Director_Job_AddPrefix">
+<dt>Add Prefix = <directory></dt>
+<dd>
+ This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored. This will use the File Relocation feature implemented in <span class="bbacula">Bacula</span> 2.1.8 or later.
+</dd>
+</div>
+<div id="Director_Job_AddSuffix">
+<dt>Add Suffix = <extension></dt>
+<dd>
+ This directive applies only to a Restore job and specifies a suffix to all files being restored. This will use the File Relocation feature implemented in <span class="bbacula">Bacula</span> 2.1.8 or later. <p> Using <span class="btt">Add Suffix=.old</span>, <span class="bfilename">/etc/passwd</span> will be restored to <span class="bfilename">/etc/passwd.old</span></p>
+
+</dd>
+</div>
+<div id="Director_Job_Bootstrap">
+<dt>Bootstrap = <bootstrap-file></dt>
+<dd>
+ The <span class="bdirectivename">Bootstrap</span> directive specifies a bootstrap file that, if provided, will be used during Restore Jobs and is ignored in other Job types. The <span class="bbracket"><bootstrap-file></span> contains the list of tapes to be used in a Restore Job as well as which files are to be restored. Specification of this directive is optional, and if specified, it is used only for a restore job. In addition, when running a Restore job from the console, this value can be changed. <p> If you use the <span class="bcommandname">restore</span> command in the <span class="btool">bconsole</span> program, to start a Restore job, the <span class="bbracket"><bootstrap-file></span> will be created automatically from the files you select to be restored. </p>
+<p> For additional details of the <span class="bdirectivename">bootstrap</span> directive, please see Restoring Files with the Bootstrap File chapter of this manual. </p>
+
+</dd>
+</div>
+<div id="Director_Job_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of Jobs from the current Job resource that can run concurrently. Note, this directive limits only Jobs with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs such as in the Director, Client, or Storage resources will also apply in addition to the limit specified here. The default is set to <span class="bdefaultvalue">1</span>, but you may set it to a larger number. We strongly recommend that you read the WARNING documented under Maximum Concurrent Jobs in the Director's resource.
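+<p> For example (a sketch, not a tuning recommendation), to allow up to five simultaneous runs of the same Job: </p>
+<pre>
+Job {
+  Name = "BackupAll"
+  ...
+  Maximum Concurrent Jobs = 5   # up to 5 "BackupAll" jobs at once
+}
+</pre>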
+</dd>
+</div>
+<div id="Director_Job_MaximumBandwidth">
+<dt>Maximum Bandwidth = <speed></dt>
+<dd>
+ <p> The speed parameter specifies the maximum allowed bandwidth in <span class="bhighlight">bytes</span> that a job may use. You may specify the following speed parameter modifiers: <span class="bvalue">kb/s</span> (1,000 bytes per second), <span class="bvalue">k/s</span> (1,024 bytes per second), <span class="bvalue">mb/s</span> (1,000,000 bytes per second), or <span class="bvalue">m/s</span> (1,048,576 bytes per second). </p>
+<p> The use of TLS, TLS PSK, CommLine compression and Deduplication can interfere with the value set by this directive. </p>
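+<p> For example, to limit a job to roughly two megabytes per second: </p>
+<pre>
+  Maximum Bandwidth = 2mb/s
+</pre>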
+
+</dd>
+</div>
+<div id="Director_Job_MaxStartDelay">
+<dt>Max Start Delay = <time></dt>
+<dd>
+ The time specifies the maximum delay between the scheduled time and the actual start time for the Job. For example, a job can be scheduled to run at 1:00am, but because other jobs are running, it may wait to run. If the delay is set to 3600 (one hour) and the job has not begun to run by 2:00am, the job will be canceled. This can be useful, for example, to prevent jobs from running during day time hours. The default is <span class="bdefaultvalue">0</span> which indicates no limit.
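+<p> For example, to cancel a job that has not started within three hours of its scheduled time: </p>
+<pre>
+  Max Start Delay = 3 hours
+</pre>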
+</dd>
+</div>
+<div id="Director_Job_MaxRunSchedTime">
+<dt>Max Run Sched Time = <time></dt>
+<dd>
+ <p> The time specifies the maximum allowed time that a job may run, counted from when the job was scheduled. This can be useful to prevent jobs from running during working hours. It can be thought of as <span class="btt">Max Start Delay + Max Run Time</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Job_MaxRunTime">
+<dt>Max Run Time = <time></dt>
+<dd>
+ The time specifies the maximum allowed time that a job may run, counted from when the job starts (<span class="bbf">not</span> necessarily the same as when the job was scheduled). <p> By default, the watchdog thread will kill any Job that has run more than 200 days. The maximum watchdog timeout is independent of MaxRunTime and cannot be changed. </p>
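+<p> For example, to cancel a job that runs for more than eight hours after it starts: </p>
+<pre>
+  Max Run Time = 8 hours
+</pre>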
+
+</dd>
+</div>
+<div id="Director_Job_IncrementalMaxRunTime">
+<dt>Incremental Max Run Time = <time></dt>
+<dd>
+ The time specifies the maximum allowed time that an Incremental backup job may run, counted from when the job starts (<span class="bbf">not</span> necessarily the same as when the job was scheduled).
+</dd>
+</div>
+<div id="Director_Job_DifferentialMaxRunTime">
+<dt>Differential Max Run Time = <time></dt>
+<dd>
+ The time specifies the maximum allowed time that a Differential backup job may run, counted from when the job starts (<span class="bbf">not</span> necessarily the same as when the job was scheduled).
+</dd>
+</div>
+<div id="Director_Job_MaxWaitTime">
+<dt>Max Wait Time = <time></dt>
+<dd>
+ The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from when the job starts (<span class="bbf">not</span> necessarily the same as when the job was scheduled). This directive works as expected since <span class="bbacula">Bacula</span> version 2.3.18.
+<div class="bimageH"> Job time control directives</div>
+
+
+</dd>
+</div>
+<div id="Director_Job_PruneJobs">
+<dt>Prune Jobs = <yes|no></dt>
+<dd>
+ Normally, pruning of Jobs from the Catalog is specified on a Client by Client basis in the Client resource with the <span class="bdirectivename">AutoPrune</span> directive. If this directive is specified (not normally) and the value is <span class="bvalue">yes</span>, it will override the value specified in the Client resource. The default is <span class="bdefaultvalue">no</span>.
+</dd>
+</div>
+<div id="Director_Job_PruneFiles">
+<dt>Prune Files = <yes|no></dt>
+<dd>
+ Normally, pruning of Files from the Catalog is specified on a Client by Client basis in the Client resource with the <span class="bdirectivename">AutoPrune</span> directive. If this directive is specified (not normally) and the value is <span class="bvalue">yes</span>, it will override the value specified in the Client resource. The default is <span class="bdefaultvalue">no</span>.
+</dd>
+</div>
+<div id="Director_Job_PruneVolumes">
+<dt>Prune Volumes = <yes|no></dt>
+<dd>
+ Normally, pruning of Volumes from the Catalog is specified on a Pool by Pool basis in the Pool resource with the <span class="bdirectivename">AutoPrune</span> directive. Note, this is different from File and Job pruning which is done on a Client by Client basis. If this directive is specified (not normally) and the value is <span class="bvalue">yes</span>, it will override the value specified in the Pool resource. The default is <span class="bdefaultvalue">no</span>.
+</dd>
+</div>
+<div id="Director_Job_SnapshotRetention">
+<dt>Snapshot Retention = <time-period-specification></dt>
+<dd>
+ <p> The Snapshot Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep Snapshots in the Catalog database and on the Client after the Snapshot creation. When this time period expires, and if using the <span class="bcommandname">snapshot prune</span> command, <span class="bbacula">Bacula</span> will prune (remove) Snapshot records that are older than the specified Snapshot Retention period and will contact the FileDaemon to delete Snapshots from the system. </p>
+<p> The Snapshot retention period is specified as seconds, minutes, hours, days, weeks, months, quarters, or years. See the Configuration chapter of this manual for additional details of time specification. </p>
+<p> The default is <span class="bdefaultvalue">0 seconds</span>: Snapshots are deleted at the end of the backup. The Job <span class="bdirectivename">SnapshotRetention</span> directive overrides the Client <span class="bdirectivename">SnapshotRetention</span> directive. </p>
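+<p> For example, to keep Snapshots for five days after creation instead of deleting them at the end of the backup: </p>
+<pre>
+  Snapshot Retention = 5 days
+</pre>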
+
+</dd>
+</div>
+<div id="Director_Job_Runscript">
+<dt>RunScript {<body-of-runscript>}</dt>
+<dd>
+ <p> The <span class="bdirectivename">RunScript</span> directive behaves like a resource in that it requires opening and closing braces around a number of directives that make up the body of the runscript. </p>
+<p> The specified <span class="bdirectivename">Command</span> (see below for details) is run as an external program before or after the current Job. This is optional. By default, the program is executed on the Client side, as in <span class="btt">ClientRunXXXJob</span>. </p>
+<p><span class="bdirectivename">Console</span> options are special commands that are sent to the Director instead of the OS. At this time, console command output is redirected to the log with jobid <span class="bvalue">0</span>. </p>
+<p> You can use the following console commands: <span class="bcommandname">delete</span>, <span class="bcommandname">disable</span>, <span class="bcommandname">enable</span>, <span class="bcommandname">estimate</span>, <span class="bcommandname">list</span>, <span class="bcommandname">llist</span>, <span class="bcommandname">memory</span>, <span class="bcommandname">prune</span>, <span class="bcommandname">purge</span>, <span class="bcommandname">reload</span>, <span class="bcommandname">status</span>, <span class="bcommandname">setdebug</span>, <span class="bcommandname">show</span>, <span class="bcommandname">time</span>, <span class="bcommandname">trace</span>, <span class="bcommandname">update</span>, <span class="bcommandname">version</span>, <span class="bcommandname">.client</span>, <span class="bcommandname">.jobs</span>, <span class="bcommandname">.pool</span>, <span class="bcommandname">.storage</span>. See the Console chapter for more information. You must specify all needed information on the command line; nothing will be prompted. Example: </p>
+
+<pre>
+Console = "prune files client=%c"
+Console = "update stats age=3"
+</pre>
+<p> You can specify more than one Command/Console option per RunScript. </p>
+<p> The following options may be specified in the body of the runscript: </p>
+
+<div class="blongtable">
+<table class="blongtable">
+<caption class="blongtable" id="33127">Options for Run Script</caption>
+<tr class="btoprule">
+<th class="btablecenter">Options</th>
+<th class="btablecenter">Value</th>
+<th class="btablecenter">Default</th>
+<th class="btablecenter">Information</th>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Runs On Success </td>
+<td class="btableleft"> Yes / No </td>
+<td class="btableleft"><span class="bdefaultvalue">Yes</span></td>
+<td class="btableleft"> Run command if JobStatus is successful</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Runs On Failure </td>
+<td class="btableleft"> Yes / No </td>
+<td class="btableleft"><span class="bdefaultvalue">No</span></td>
+<td class="btableleft"> Run command if JobStatus isn't successful</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Runs On Client </td>
+<td class="btableleft"> Yes / No </td>
+<td class="btableleft"><span class="bdefaultvalue">Yes</span></td>
+<td class="btableleft"> Run command on client<span class="bfootnote">note<span class="bfootnotetext">Scripts will run on Client only with Jobs that use a Client. (Backup, Restore, some Verify jobs). For other Jobs (Copy, Migration, Admin, ...) RunsOnClient should be set to No.</span></span>
+</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Runs When </td>
+<td class="btableleft"> Before / After / Always / <span class="bemph">AfterVSS</span>
+</td>
+<td class="btableleft"><span class="bdefaultvalue">Never</span></td>
+<td class="btableleft"> When to run commands</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Fail Job On Error </td>
+<td class="btableleft"> Yes/No </td>
+<td class="btableleft"><span class="bdefaultvalue">Yes</span></td>
+<td class="btableleft"> Fail job if script returns something different from 0</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Command </td>
+
+
+<td class="btableleft"> Path to your script</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Console </td>
+
+
+<td class="btableleft"> Console command</td>
+</tr>
+</table>
+</div>
+
+<p> Any output sent by the command to standard output will be included in the <span class="bbacula">Bacula</span> job report. The command string must be a valid program name or name of a shell script. </p>
+<p> In addition, the command string is parsed and then fed to the OS, which means that the path will be searched to execute your specified command, but there is no shell interpretation. As a consequence, if you invoke complicated commands or want any shell features such as redirection or piping, you must call a shell script and do it inside that script. </p>
+<p> Before submitting the specified command to the operating system, <span class="bbacula">Bacula</span> performs character substitution of the following characters: </p>
+
+<pre>
+ %% = %
+ %b = Job Bytes
+ %c = Client's name
+ %C = If the job is a Cloned job (Only on director side)
+ %d = Daemon's name (Such as host-dir or host-fd)
+ %D = Director's name (Also valid on file daemon)
+ %e = Job Exit Status
+ %E = Non-fatal Job Errors
+ %f = Job FileSet (Only on director side)
+ %F = Job Files
+ %h = Client address
+ %i = JobId
+ %I = Migration/Copy JobId (Only in Copy/Migrate Jobs)
+ %j = Unique Job id
+ %l = Job Level
+ %n = Job name
+ %o = Job Priority
+ %p = Pool name (Only on director side)
+ %P = Current PID process
+ %R = Read Bytes
+ %s = Since time
+ %S = Previous Job name (Only on file daemon side)
+ %t = Job type (Backup, ...)
+ %v = Volume name (Only on director side)
+ %w = Storage name (Only on director side)
+ %x = Spooling enabled? ("yes" or "no")
+</pre>
+<p> Some character substitutions are not available in all situations. The Job Exit Status code %e expands to one of the following values: </p>
+
+<ul class="bitemize2">
+<li class="bitemize2">OK </li>
+<li class="bitemize2">Error </li>
+<li class="bitemize2">Fatal Error </li>
+<li class="bitemize2">Canceled </li>
+<li class="bitemize2">Differences </li>
+<li class="bitemize2">Unknown term code </li>
+</ul>
+
+<p> Thus if you use it on a command line, you will need to enclose it within some sort of quotes. </p>
+
+<p> You can use these following shortcuts: </p>
+<div class="blongtable">
+<table class="blongtable">
+<caption class="blongtable" id="33130">RunScript shortcuts</caption>
+<tr class="btoprule">
+<th class="btablecenter"> </th>
+<th class="btablecenter">Runs</th>
+<th class="btablecenter">Runs</th>
+<th class="btablecenter">FailJob</th>
+<th class="btablecenter">Runs</th>
+<th class="btablecenter">Runs</th>
+</tr>
+<tr class="blongtable">
+<th class="btablecenter">Keyword</th>
+<th class="btablecenter">On</th>
+<th class="btablecenter">On</th>
+<th class="btablecenter">On</th>
+<th class="btablecenter">On</th>
+<th class="btablecenter">When</th>
+</tr>
+<tr class="blongtable">
+<th class="btablecenter"> </th>
+<th class="btablecenter">Success</th>
+<th class="btablecenter">Failure</th>
+<th class="btablecenter">Error</th>
+<th class="btablecenter">Client</th>
+<th class="btablecenter"> </th>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Run Before Job </td>
+
+
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> No </td>
+<td class="btablecenter"> Before</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Run After Job </td>
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> No </td>
+
+<td class="btablecenter"> No </td>
+<td class="btablecenter"> After</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Run After Failed Job </td>
+<td class="btablecenter"> No </td>
+<td class="btablecenter"> Yes </td>
+
+<td class="btablecenter"> No </td>
+<td class="btablecenter"> After</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Client Run Before Job </td>
+
+
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> Before</td>
+</tr>
+<tr class="bmidrule">
+<td class="btableleft"> Client Run After Job </td>
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> No </td>
+
+<td class="btablecenter"> Yes </td>
+<td class="btablecenter"> After</td>
+</tr>
+</table>
+</div>
+<p> Examples: </p>
+<pre>
+RunScript {
+ RunsWhen = Before
+ FailJobOnError = No
+ Command = "/etc/init.d/apache stop"
+}
+
+RunScript {
+ RunsWhen = After
+ RunsOnFailure = yes
+ Command = "/etc/init.d/apache start"
+}
+</pre>
+<p><span class="bbf">Notes about ClientRunBeforeJob</span></p>
+<p> For compatibility reasons, with this shortcut, the command is executed directly when the client receives it, and if the command fails, the remaining remote runscripts will be discarded. To be sure that all commands are sent and executed, you must use the RunScript syntax. </p>
+<p><span class="bbf">Special Shell Considerations</span></p>
+<p> A <span>“</span>Command =<span>”</span> can be one of: </p>
+<ul class="bitemize2">
+<li class="bitemize2">The full path to an executable program </li>
+<li class="bitemize2">The name of an executable program that can be found in the $PATH </li>
+<li class="bitemize2">A <span class="bemph">complex</span> shell command in the form of: "sh -c \"your commands go here\"" </li>
+</ul>
+<span class="bbf">Special Windows Considerations</span>
+<p> You can run scripts just after snapshots initializations with <span class="bvalue">AfterVSS</span> keyword. </p>
+<p> In addition, for a Windows client, please take note that you must ensure a correct path to your script. The script or program can be a <span class="bfilename">.com</span>, <span class="bfilename">.exe</span> or a <span class="bfilename">.bat</span> file. If you just put the program name in then <span class="bbacula">Bacula</span> will search using the same rules that <span class="btool">cmd.exe</span> uses (current directory, <span class="bbacula">Bacula</span> bin directory, and <span class="btt">PATH</span>). It will even try the different extensions in the same order as <span class="btool">cmd.exe</span>. The command can be anything that <span class="btool">cmd.exe</span> or <span class="btool">command.com</span> will recognize as an executable file. </p>
+<p> However, if you have slashes in the program name then <span class="bbacula">Bacula</span> figures you are fully specifying the name, so you must also explicitly add the three character extension. </p>
+<p> The command is run in a Win32 environment, so Unix like commands will not work unless you have installed and properly configured Cygwin in addition to and separately from <span class="bbacula">Bacula</span>. </p>
+<p> The System <span class="btt">%Path%</span> will be searched for the command. (Under the environment variable dialog you have both System Environment and User Environment; we believe that only the System environment will be available to <span class="btool">bacula-fd</span> if it is running as a service.) </p>
+<p> System environment variables can be referenced with <span class="btt">%var%</span> and used as either part of the command name or arguments. </p>
+<p> So if you have a script in the <span class="bbacula">Bacula</span> <span class="bdirectoryname">\bin</span> directory then the following lines should work fine: </p>
+
+<pre>
+ Client Run Before Job = systemstate
+or
+ Client Run Before Job = systemstate.bat
+or
+ Client Run Before Job = "systemstate"
+or
+ Client Run Before Job = "systemstate.bat"
+or
+ ClientRunBeforeJob = "\"C:/Program Files/Bacula/systemstate.bat\""
+</pre>
+<p> The outer set of quotes is removed when the configuration file is parsed. You need to escape the inner quotes so that they are there when the code that parses the command line for execution runs so it can tell what the program name is. </p>
+
+<pre>
+ClientRunBeforeJob = "\"C:/Program Files/Software
+ Vendor/Executable\" /arg1 /arg2 \"foo bar\""
+</pre>
+<p> The special characters </p>
+<pre>
+&<>()@^|
+</pre> will need to be quoted if they are part of a filename or argument.
+<p> If someone is logged in, a blank <span>“</span>command<span>”</span> window running the commands will be present during the execution of the command. </p>
+<p> Some Suggestions from Phil Stracchino for running on Win32 machines with the native Win32 File daemon: </p>
+
+<ol class="benumerate1">
+<li class="benumerate1">You might want the ClientRunBeforeJob directive to specify a <span class="bfilename">.bat</span> file which runs the actual client-side commands, rather than trying to run (for example) <span class="btool">regedit /e</span> directly. </li>
+<li class="benumerate1">The batch file should explicitly <span>“</span><span class="btt">exit 0</span><span>”</span> on successful completion. </li>
+<li class="benumerate1">The path to the batch file should be specified in Unix form: <pre>
+ClientRunBeforeJob = "c:/bacula/bin/systemstate.bat"
+</pre> rather than DOS/Windows form: <pre>
+ClientRunBeforeJob = "c:\bacula\bin\systemstate.bat" # INCORRECT
+</pre>
+</li>
+</ol>
+
+<p> For Win32, please note that there are certain limitations: </p>
+<pre>
+ClientRunBeforeJob = "C:/Program Files/Bacula/bin/pre-exec.bat"
+</pre>
+<p> Lines like the above do not work because of limitations of <span class="btool">cmd.exe</span>, which is used to execute the command. <span class="bbacula">Bacula</span> prefixes the string you supply with <span class="btool">cmd.exe /c </span>. To test that your command works, you should type <span class="btool">cmd /c "C:/Program Files/test.exe"</span> at a cmd prompt and see what happens. Once the command is correct, insert a backslash (\) before each double quote ("), and then put quotes around the whole thing when putting it in the director's configuration file. You either need to have only one set of quotes or else use the short name and don't put quotes around the command path. </p>
+<p> Below is the output from cmd's help as it relates to the command line passed to the <span class="btt">/c</span> option. </p>
+<p> If <span class="btt">/C</span> or <span class="btt">/K</span> is specified, then the remainder of the command line after the switch is processed as a command line, where the following logic is used to process quote (") characters: </p>
+
+<ol class="benumerate1">
+<li class="benumerate1">If all of the following conditions are met, then quote characters on the command line are preserved: <ul class="bitemize2">
+<li class="bitemize2">no <span class="btt">/S</span> switch. </li>
+<li class="bitemize2">exactly two quote characters. </li>
+<li class="bitemize2">no special characters between the two quote characters, where special is one of: <pre>
+&<>()@^|
+</pre>
+</li>
+<li class="bitemize2">there are one or more whitespace characters between the two quote characters. </li>
+<li class="bitemize2">the string between the two quote characters is the name of an executable file. </li>
+</ul>
+
+</li>
+<li class="benumerate1">Otherwise, old behavior is to see if the first character is a quote character and if so, strip the leading character and remove the last quote character on the command line, preserving any text after the last quote character. </li>
+</ol>
+
+<p> The following example of the use of the Client Run Before Job directive was submitted by a user: </p>
+<p> You could write a shell script to back up a DB2 database to a FIFO. The shell script is: </p>
+
+<pre>
+ #!/bin/sh
+ # ===== backupdb.sh
+ DIR=/u01/mercuryd
+
+ mkfifo $DIR/dbpipe
+ db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING &
+ sleep 1
+</pre>
+<p> The following line in the Job resource in the <span class="bfilename">bacula-dir.conf</span> file: </p>
+<pre>
+Client Run Before Job = "su - mercuryd -c \"/u01/mercuryd/backupdb.sh '%t' '%l'\""
+</pre>
+<p> When the job is run, you will get messages from the output of the script stating that the backup has started. Even though the command being run is backgrounded with &, the job will block until the <span class="btool">db2 BACKUP DATABASE</span> command completes, and thus the backup stalls. </p>
+<p> To remedy this situation, the <span>“</span>db2 BACKUP DATABASE<span>”</span> line should be changed to the following: </p>
+
+<pre>
+db2 BACKUP DATABASE mercuryd TO $DIR/dbpipe WITHOUT PROMPTING > $DIR/backup.log 2>&1 < /dev/null &
+</pre>
+<p> It is important to redirect the input and outputs of a backgrounded command to <span class="bdirectoryname">/dev/null</span> to prevent the script from blocking. </p>
+
+</dd>
+</div>
+<div id="Director_Job_AllowMixedPriority">
+<dt>Allow Mixed Priority = <yes|no></dt>
+<dd>
+ This directive is only implemented in version 2.5 and later. When set to <span class="bvalue">yes</span> (default <span class="bdefaultvalue">no</span>), this job may run even if lower priority jobs are already running. This means a high priority job will not have to wait for other jobs to finish before starting. The scheduler will only mix priorities when all running jobs have this directive set to <span class="bvalue">yes</span>. <p> Note that only higher priority jobs will start early. Suppose the director will allow two concurrent jobs, and that two jobs with priority 10 are running, with two more in the queue. If a job with priority 5 is added to the queue, it will be run as soon as one of the running jobs finishes. However, new priority 10 jobs will not be run until the priority 5 job has finished. </p>
+
+</dd>
+</div>
+<div id="Director_Job_AllowDuplicateJobs">
+<dt>Allow Duplicate Jobs = <yes|no></dt>
+<dd>
+ A duplicate job in the sense we use it here means that a second or subsequent job with the same name is started. This happens most frequently when the first job runs longer than expected because no tapes are available. The default is <span class="bdefaultvalue">yes</span>.
+<div class="bimageH"> Allow Duplicate Jobs usage</div>
+
+<p> If this directive is enabled duplicate jobs will be run. If the directive is set to <span class="bvalue">no</span> then only one job of a given name may run at one time, and the action that <span class="bbacula">Bacula</span> takes to ensure only one job runs is determined by the other directives (see below). </p>
+<p> If <span class="bdirectivename">Allow Duplicate Jobs</span> is set to <span class="bvalue">no</span> and two jobs are present and none of the three directives given below permit canceling a job, then the current job (the second one started) will be canceled. </p>
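+<p> A sketch (job name hypothetical) that refuses duplicates and cancels any duplicate still waiting in the queue: </p>
+<pre>
+Job {
+  Name = "BackupCatalog"
+  ...
+  Allow Duplicate Jobs = no
+  Cancel Queued Duplicates = yes   # drop a queued, not-yet-running duplicate
+}
+</pre>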
+
+</dd>
+</div>
+<div id="Director_Job_CancelLowerLevelDuplicates">
+<dt>Cancel Lower Level Duplicates = <yes|no></dt>
+<dd>
+ If <span class="bdirectivename">Allow Duplicate Jobs</span> is set to <span class="bvalue">no</span> and this directive is set to <span class="bvalue">yes</span>, <span class="bbacula">Bacula</span> will choose among the duplicate jobs the one with the highest level. For example, it will cancel a previous Incremental to run a Full backup. It works only for Backup jobs. The default is <span class="bdefaultvalue">no</span>. If the levels of the duplicated jobs are the same, nothing is done and the other Cancel XXX Duplicate directives will be examined.
+</dd>
+</div>
+<div id="Director_Job_CancelQueuedDuplicates">
+<dt>Cancel Queued Duplicates = <yes|no></dt>
+<dd>
+ If <span class="bdirectivename">Allow Duplicate Jobs</span> is set to <span class="bvalue">no</span> and if this directive is set to <span class="bvalue">yes</span> any job that is already queued to run but not yet running will be canceled. The default is <span class="bdefaultvalue">no</span>.
+</dd>
+</div>
+<div id="Director_Job_CancelRunningDuplicates">
+<dt>Cancel Running Duplicates = <yes|no></dt>
+<dd>
+ If <span class="bdirectivename">Allow Duplicate Jobs</span> is set to <span class="bvalue">no</span> and if this directive is set to <span class="bvalue">yes</span> any job that is already running will be canceled. The default is <span class="bdefaultvalue">no</span>.
+</dd>
+</div>
+<div id="Director_Storage_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the storage resource. This name appears on the Storage directive specified in the Job resource and is required.
+</dd>
+</div>
+<div id="Director_Storage_Address">
+<dt>Address = <address></dt>
+<dd>
+ Where the address is a host name, a fully qualified domain name, or an <span class="bbf">IP address</span>. Please note that the <span class="bbracket"><address></span> as specified here will be transmitted to the File daemon, which will then use it to contact the Storage daemon. Hence, it is <span class="bbf">not</span> a good idea to use <span class="bvalue">localhost</span> as the name but rather a fully qualified machine name or an IP address. This directive is required.
+</dd>
+</div>
+<div id="Director_Storage_Password">
+<dt>Password = <password></dt>
+<dd>
+ This is the password to be used when establishing a connection with the Storage services. This same password also must appear in the Director resource of the Storage daemon's configuration file. This directive is required. If you have either <span class="btool">/dev/random</span> or <span class="btool">bc</span> on your machine, <span class="bbacula">Bacula</span> will generate a random password during the configuration process, otherwise it will be left blank. <p> The password is plain text. It is not generated through any special process, but it is preferable for security reasons to use random text. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_Enabled">
+<dt>Enabled = <yes|no></dt>
+<dd>
+ This directive allows you to enable or disable a <span>Storage</span> resource. When the resource is disabled, the storage device will not be used. To reuse it you must re-enable the <span>Storage</span> resource.
+</dd>
+</div>
+<div id="Director_Storage_AllowCompression">
+<dt>AllowCompression = <yes|no></dt>
+<dd>
+ <p> This directive is optional, and if you specify <span class="bvalue">no</span> (the default is <span class="bdefaultvalue">yes</span>), it will cause backup jobs running on this storage resource to run without client File Daemon compression. This effectively overrides compression options in FileSets used by jobs which use this storage resource. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_SdPort">
+<dt>SD Port = <port></dt>
+<dd>
+ Where port is the port to use to contact the storage daemon for information and to start jobs. This same port number must appear in the Storage resource of the Storage daemon's configuration file. The default is <span class="bdefaultvalue">9103</span>.
+</dd>
+</div>
+<div id="Director_Storage_FdStorageAddress">
+<dt>FD Storage Address = <address></dt>
+<dd>
+ Where the <span class="bbracket"><address></span> is a host name, a fully qualified domain name, or an <span class="bbf">IP address</span>. The <span class="bbracket"><address></span> specified here will be transmitted to the File daemon instead of the IP address that the Director uses to contact the Storage daemon. This FDStorageAddress will then be used by the File daemon to contact the Storage daemon. This directive is particularly useful if the File daemon is in a different network domain than the Director or Storage daemon. It is also useful in NAT or firewall environments.
+<div class="bimageH"> Backup over WAN using FD Storage Address</div>
+
+
+</dd>
+</div>
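+<p> For example, if the Storage daemon sits behind a NAT router, the Director may reach it on an internal address while File daemons must use the public one. A sketch, with hypothetical names and addresses: </p>
+<pre>
+ Storage {
+   Name = NATStorage
+   Address = sd-internal.example.com     # used by the Director
+   FD Storage Address = sd.example.com   # sent to the File daemon instead
+   SD Port = 9103
+   Password = "storage-password"
+   Device = FileStorage
+   Media Type = File
+ }
+</pre>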
+<div id="Director_Storage_HeartbeatInterval">
+<dt>Heartbeat Interval = <time-interval></dt>
+<dd>
+ This directive is optional and if specified will cause the Director to set a keepalive interval (heartbeat) in seconds on each of the sockets it opens for the Storage resource. This value will override any specified at the Director level. It is implemented only on systems (Linux, ...) that provide the <span class="btool">setsockopt</span> <span class="btt">TCP_KEEPIDLE</span> function. The default value is <span class="bdefaultvalue">300s</span>.
+</dd>
+</div>
+<div id="Director_Storage_Device">
+<dt>Device = <device-name></dt>
+<dd>
+ This directive specifies the Storage daemon's name of the device resource to be used for the storage. If you are using an Autochanger, the name specified here should be the name of the Storage daemon's Autochanger resource rather than the name of an individual device. This name is not the physical device name, but the logical device name as defined on the <span class="bdirectivename">Name</span> directive contained in the <span>Device</span> or the <span>Autochanger</span> resource definition of the <span class="bbf">Storage daemon</span> configuration file. You can specify any name you would like (even the device name if you prefer) up to a maximum of 127 characters in length. The physical device name associated with this device is specified in the <span class="bbf">Storage daemon</span> configuration file (as <span class="bdirectivename">Archive Device</span>). Please take care not to define two different Storage resource directives in the Director that point to the same Device in the Storage daemon. Doing so may cause the Storage daemon to block (or hang) attempting to open the same device that is already open. This directive is required.
+</dd>
+</div>
+<div id="Director_Storage_MediaType">
+<dt>Media Type = <MediaType></dt>
+<dd>
+ This directive specifies the Media Type to be used to store the data. This is an arbitrary string of up to 127 characters that you define. It can be anything you want. However, it is best to make it descriptive of the storage media (e.g. <span>“</span>File<span>”</span>, <span>“</span>DAT<span>”</span>, <span>“</span>HP DLT8000<span>”</span>, <span>“</span>8mm<span>”</span>, ...). In addition, it is essential that you make the <span class="bdirectivename">Media Type</span> specification unique for each storage media type. If you have two DDS-4 drives that have incompatible formats, or if you have a DDS-4 drive and a DDS-4 autochanger, you almost certainly should specify different <span class="bbf">Media Types</span>. During a restore, assuming a <span class="bbf">DDS-4</span> Media Type is associated with the Job, <span class="bbacula">Bacula</span> can decide to use any Storage daemon that supports Media Type <span class="bbf">DDS-4</span> and on any drive that supports it. <p> If you are writing to disk Volumes, you must make doubly sure that each Device resource defined in the Storage daemon (and hence in the Director's conf file) has a unique media type. Otherwise for <span class="bbacula">Bacula</span> versions 1.38 and older, your restores may not work because <span class="bbacula">Bacula</span> will assume that you can mount any Media Type with the same name on any Device associated with that Media Type. This is possible with tape drives, but with disk drives, you cannot mount a Volume in an arbitrary directory unless you are very clever - for example, by creating an appropriate soft link. </p>
+<p> Currently <span class="bbacula">Bacula</span> permits only a single Media Type per Storage and Device definition. Consequently, if you have a drive that supports more than one Media Type, you can give a unique string to Volumes with different intrinsic Media Type (Media Type = DDS-3-4 for DDS-3 and DDS-4 types), but then those volumes will only be mounted on drives indicated with the dual type (DDS-3-4). </p>
+<p> If you want to tie <span class="bbacula">Bacula</span> to using a single Storage daemon or drive, you must specify a unique Media Type for that drive. This is an important point that should be carefully understood. Note, this applies equally to Disk Volumes. If you define more than one disk Device resource in your Storage daemon's conf file, the Volumes on those two devices are in fact incompatible because one can not be mounted on the other device since they are found in different directories. For this reason, you probably should use two different Media Types for your two disk Devices (even though you might think of them as both being File types). You can find more on this subject in the Basic Volume Management chapter of this manual. </p>
+<p> The <span class="bbf">MediaType</span> specified in the Director's Storage resource, <span class="bbf">must</span> correspond to the <span class="bbf">Media Type</span> specified in the <span>Device</span> resource of the <span class="bbf">Storage daemon</span> configuration file. This directive is required, and it is used by the Director and the Storage daemon to ensure that a Volume automatically selected from the Pool corresponds to the physical device. If a Storage daemon handles multiple devices (e.g. will write to various file Volumes on different partitions), this directive allows you to specify exactly which device. </p>
+<p> As mentioned above, the value specified in the Director's Storage resource must agree with the value specified in the Device resource in the <span class="bbf">Storage daemon's</span> configuration file. It is also an additional check so that you don't try to write data for a DLT onto an 8mm device. </p>
+
+</dd>
+</div>
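+<p> Putting the preceding directives together, a minimal file-based Storage resource in the Director's configuration might look like the following sketch (names, address, and password are illustrative): </p>
+<pre>
+ Storage {
+   Name = File1
+   Address = storage.example.com   # fully qualified name, not localhost
+   SD Port = 9103
+   Password = "storage-password"   # must match the Director resource in bacula-sd.conf
+   Device = FileChgr1              # logical Device name in bacula-sd.conf
+   Media Type = File1              # must match the Device resource in bacula-sd.conf
+   Maximum Concurrent Jobs = 10
+ }
+</pre>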
+<div id="Director_Storage_Autochanger">
+<dt>Autochanger = <yes|no></dt>
+<dd>
+ If you specify <span class="bvalue">yes</span> for this directive (the default is <span class="bdefaultvalue">no</span>), when you use the <span class="bcommandname">label</span> command or the <span class="bcommandname">add</span> command to create a new Volume, <span class="bbacula">Bacula</span> will also request the Autochanger Slot number. This simplifies creating database entries for Volumes in an autochanger. If you forget to specify the Slot, the autochanger will not be used. However, you may modify the Slot associated with a Volume at any time by using the <span class="bcommandname">update volume</span> or <span class="bcommandname">update slots</span> command in the console program. When <span class="bdirectivename">Autochanger</span> is enabled, the algorithm used by <span class="bbacula">Bacula</span> to search for available volumes will be modified to consider only Volumes that are known to be in the autochanger's magazine. If no <span class="bbf">in changer</span> volume is found, <span class="bbacula">Bacula</span> will attempt recycling, pruning, ..., and if still no volume is found, <span class="bbacula">Bacula</span> will search for any volume whether or not it is in the magazine. By privileging in changer volumes, this procedure minimizes operator intervention. <p> For the autochanger to be used, you must also specify <span class="bdirectivename">Autochanger</span> = <span class="bvalue">yes</span> in the Device Resource in the Storage daemon's configuration file as well as other important Storage daemon configuration information. Please consult the Using Autochangers chapter of this manual for the details of using autochangers. You can modify any additional <span>Storage</span> resources that correspond to devices that are part of the <span>Autochanger</span> device. 
+Instead of the previous <span class="bdirectivename">Autochanger</span> = <span class="bvalue">yes</span> directive, the configuration should be modified to be <span class="bdirectivename">Autochanger</span> = <span class="bvalue">xxx</span> where <span class="bvalue">xxx</span> is the name of the Autochanger. </p>
+
+</dd>
+</div>
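+<p> For a tape library, the Storage resource points at the Storage daemon's Autochanger resource rather than an individual drive. A sketch, with illustrative names: </p>
+<pre>
+ Storage {
+   Name = TapeLibrary
+   Address = storage.example.com
+   SD Port = 9103
+   Password = "storage-password"
+   Device = LTO-Changer     # name of the Autochanger resource in bacula-sd.conf
+   Media Type = LTO-6
+   Autochanger = yes        # request Slot numbers and prefer in-changer volumes
+ }
+</pre>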
+<div id="Director_Storage_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ Where <span class="bbracket"><number></span> is the maximum number of Jobs using the current Storage resource that can run concurrently. Note, this directive limits only Jobs using this Storage resource. Any other restrictions on the maximum concurrent jobs such as in the Director, Job, or Client resources will also apply in addition to any limit specified here. The default is set to <span class="bdefaultvalue">1</span>, but you may set it to a larger number. However, if you set the Storage daemon's number of concurrent jobs greater than one, we recommend that you read the warning documented under Maximum Concurrent Jobs in the Director's resource, or simply turn data spooling on as documented in the Data Spooling chapter of this manual.
+</dd>
+</div>
+<div id="Director_Storage_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Storage_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Having <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (server context) will not check the Director's public certificate (client context). There is then no need to specify either the <span class="bdirectivename">TLS Certificate File</span> or the <span class="bdirectivename">TLS Key</span> directive in the <span class="bresourcename">Client</span> resource of the Director configuration file. The Client resource in <span class="bfilename">bacula-dir.conf</span> can be configured as follows: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Director_Storage_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Director_Storage_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Director_Storage_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash and an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Director_Catalog_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the Catalog. It bears no necessary relation to the database server name. This name will be specified in the Client resource directive indicating that all catalog data for that Client is maintained in this Catalog. This directive is required.
+</dd>
+</div>
+<div id="Director_Catalog_DbPort">
+<dt>DB Port = <port></dt>
+<dd>
+ This defines the port to be used in conjunction with <span class="bdirectivename">DB Address</span> to access the database if it is on another machine. This directive is used only by <span class="bsqltool">MySQL</span> and <span class="bsqltool">PostgreSQL</span>. This directive is optional.
+
+</dd>
+</div>
+<div id="Director_Catalog_DbName">
+<dt>DB Name = <name></dt>
+<dd>
+ This specifies the name of the database. If you use multiple catalogs (databases), you specify which one here. If you are using an external database server rather than the internal one, you must specify a name that is known to the server (i.e. you explicitly created the <span class="bbacula">Bacula</span> tables using this name). This directive is required.
+</dd>
+</div>
+<div id="Director_Catalog_User">
+<dt>user = <user></dt>
+<dd>
+ This specifies what user name to use to log into the database. This directive is required.
+</dd>
+</div>
+<div id="Director_Catalog_Password">
+<dt>password = <password></dt>
+<dd>
+ This specifies the password to use when logging into the database. This directive is required.
+</dd>
+</div>
+<div id="Director_Catalog_DbSocket">
+<dt>DB Socket = <socket-name></dt>
+<dd>
+ This is the name of a socket to use on the local host to connect to the database. This directive is used only by <span class="bsqltool">MySQL</span>. Normally, if neither <span class="bdirectivename">DB Socket</span> nor <span class="bdirectivename">DB Address</span> is specified, <span class="bsqltool">MySQL</span> will use the default socket. If the DB Socket is specified, the <span class="bsqltool">MySQL</span> server must reside on the same machine as the Director.
+</dd>
+</div>
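+<p> The Catalog directives above combine into a resource such as the following sketch for a PostgreSQL catalog on a remote host (names and credentials are illustrative): </p>
+<pre>
+ Catalog {
+   Name = MyCatalog
+   DB Name = bacula
+   user = bacula
+   password = "catalog-password"
+   DB Address = db.example.com   # omit for a local database server
+   DB Port = 5432
+ }
+</pre>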
+<div id="Director_Schedule_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the schedule being defined. The Name directive is required.
+</dd>
+</div>
+<div id="Director_Schedule_Run">
+<dt>Run = <Job-overrides> <Date-time-specification></dt>
+<dd>
+ The Run directive defines when a Job is to be run, and what overrides if any to apply. You may specify multiple <span class="bdirectivename">Run</span> directives within a <span>Schedule</span> resource. If you do, they will all be applied (i.e. multiple schedules). If you have two <span class="bdirectivename">Run</span> directives that start at the same time, two Jobs will start at the same time (well, within one second of each other). <p> The <span class="bbf">Job-overrides</span> permit overriding the Level, the Storage, the Messages, and the Pool specifications provided in the Job resource. In addition, the <span class="bdirectivename">FullPool</span>, the <span class="bdirectivename">IncrementalPool</span>, and the <span class="bdirectivename">DifferentialPool</span> specifications permit overriding the Pool specification according to what backup Job Level is in effect. </p>
+<p> By the use of overrides, you may customize a particular Job. For example, you may specify a Messages override for your Incremental backups that outputs messages to a log file, but for your weekly or monthly Full backups, you may send the output by email by using a different Messages override. </p>
+<p><span class="bbf">Job-overrides</span> are specified as: <span class="bbf">keyword=value</span> where the keyword is <span class="bvalue">Level</span>, <span class="bvalue">Storage</span>, <span class="bvalue">Messages</span>, <span class="bvalue">Pool</span>, <span class="bvalue">FullPool</span>, <span class="bvalue">DifferentialPool</span>, or <span class="bvalue">IncrementalPool</span>, and the <span class="bvalue">value</span> is as defined on the respective directive formats for the Job resource. You may specify multiple <span class="bbf">Job-overrides</span> on one <span class="bdirectivename">Run</span> directive by separating them with one or more spaces or by separating them with a trailing comma. For example: </p>
+
+<dl class="bdescription2">
+<dt>Level=Full</dt>
+<dd class="bdescription2">
+ is all files in the FileSet whether or not they have changed.
+</dd>
+<dt>Level=Incremental</dt>
+<dd class="bdescription2">
+ is all files that have changed since the last backup.
+</dd>
+<dt>Pool=Weekly</dt>
+<dd class="bdescription2">
+ specifies to use the Pool named <span class="bbf">Weekly</span>.
+</dd>
+<dt>Storage=DLT_Drive</dt>
+<dd class="bdescription2">
+ specifies to use <span class="bbf">DLT_Drive</span> for the storage device.
+</dd>
+<dt>Messages=Verbose</dt>
+<dd class="bdescription2">
+ specifies to use the <span class="bbf">Verbose</span> message resource for the Job.
+</dd>
+<dt>FullPool=Full</dt>
+<dd class="bdescription2">
+ specifies to use the Pool named <span class="bbf">Full</span> if the job is a full backup, or is upgraded from another type to a Full backup.
+</dd>
+<dt>DifferentialPool=Differential</dt>
+<dd class="bdescription2">
+ specifies to use the Pool named <span class="bbf">Differential</span> if the job is a differential backup.
+</dd>
+<dt>IncrementalPool=Incremental</dt>
+<dd class="bdescription2">
+ specifies to use the Pool named <span class="bbf">Incremental</span> if the job is an incremental backup.
+</dd>
+<dt>Next Pool = <span class="bbracket"><pool-specification></span>
+</dt>
+<dd class="bdescription2">The <span class="bbf">Next Pool</span> directive specifies the pool into which Jobs will be migrated.
+</dd>
+<dt>Priority = <span class="bbracket"><number></span>
+</dt>
+<dd class="bdescription2">
+ This directive permits you to control the order in which your jobs will be run by specifying a positive non-zero number. The higher the number, the lower the job priority. Assuming you are not running concurrent jobs, all queued jobs of priority 1 will run before queued jobs of priority 2 and so on, regardless of the original scheduling order. <p> The priority only affects waiting jobs that are queued to run, not jobs that are already running. If one or more jobs of priority 2 are already running, and a new job is scheduled with priority 1, the currently running priority 2 jobs must complete before the priority 1 job is run, unless <span class="bdirectivename">Allow Mixed Priority</span> is set. </p>
+<p> The default priority is <span class="bdefaultvalue">10</span>. </p>
+
+</dd>
+</dl>
+<span class="bbracket"><Date-time-specification></span> determines when the Job is to be run. The specification is a repetition, and as a default <span class="bbacula">Bacula</span> is set to run a job at the beginning of the hour of every hour of every day of every week of every month of every year. This is not normally what you want, so you must specify or limit when you want the job to run. Any specification given is assumed to be repetitive in nature and will serve to override or limit the default repetition. This is done by specifying masks or times for the hour, day of the month, day of the week, week of the month, week of the year, and month when you want the job to run. By specifying one or more of the above, you can define a schedule to repeat at almost any frequency you want.
+<p> Basically, you must supply a <span class="bbf">month</span>, <span class="bbf">day</span>, <span class="bbf">hour</span>, and <span class="bbf">minute</span> the Job is to be run. Of these four items to be specified, <span class="bbf">day</span> is special in that you may either specify a day of the month such as <span class="bvalue">1</span>, <span class="bvalue">2</span>, ... <span class="bvalue">31</span>, or you may specify a day of the week such as <span class="bvalue">Monday</span>, <span class="bvalue">Tuesday</span>, ... <span class="bvalue">Sunday</span>. Finally, you may also specify a week qualifier to restrict the schedule to the <span class="bvalue">first</span>, <span class="bvalue">second</span>, <span class="bvalue">third</span>, <span class="bvalue">fourth</span>, <span class="bvalue">fifth</span> or <span class="bvalue">sixth</span> week of the month. </p>
+<p> For example, if you specify only a day of the week, such as <span class="bvalue">Tuesday</span> the Job will be run every hour of every Tuesday of every Month. That is the <span class="bvalue">month</span> and <span class="bvalue">hour</span> remain set to the defaults of every month and all hours. </p>
+<p> Note, by default with no other specification, your job will run at the beginning of every hour. If you wish your job to run more than once in any given hour, you will need to specify multiple <span class="bdirectivename">Run</span> specifications each with a different minute. </p>
+<p> The date/time to run the Job can be specified in the following way in pseudo-BNF: </p>
+
+<pre>
+<void-keyword> = on
+<at-keyword> = at
+<week-keyword> = 1st | 2nd | 3rd | 4th | 5th | 6th | first |
+ second | third | fourth | fifth | sixth
+<wday-keyword> = sun | mon | tue | wed | thu | fri | sat |
+ sunday | monday | tuesday | wednesday |
+ thursday | friday | saturday
+<week-of-year-keyword> = w00 | w01 | ... w52 | w53
+<month-keyword> = jan | feb | mar | apr | may | jun | jul |
+ aug | sep | oct | nov | dec | january |
+ february | ... | december
+<daily-keyword> = daily
+<weekly-keyword> = weekly
+<monthly-keyword> = monthly
+<hourly-keyword> = hourly
+<digit> = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0
+<number> = <digit> | <digit><number>
+<12hour> = 0 | 1 | 2 | ... 12
+<hour> = 0 | 1 | 2 | ... 23
+<minute> = 0 | 1 | 2 | ... 59
+<day> = 1 | 2 | ... 31 | lastday
+<time> = <hour>:<minute> |
+ <12hour>:<minute>am |
+ <12hour>:<minute>pm
+<time-spec> = <at-keyword> <time> |
+ <hourly-keyword>
+<date-keyword> = <void-keyword> <weekly-keyword>
+<day-range> = <day>-<day>
+<month-range> = <month-keyword>-<month-keyword>
+<wday-range> = <wday-keyword>-<wday-keyword>
+<range> = <day-range> | <month-range> |
+ <wday-range>
+<date> = <date-keyword> | <day> | <range>
+<date-spec> = <date> | <date-spec>
+<day-spec> = <day> | <wday-keyword> |
+ <day> | <wday-range> |
+ <week-keyword> <wday-keyword> |
+ <week-keyword> <wday-range> |
+ <daily-keyword>
+<month-spec> = <month-keyword> | <month-range> |
+ <monthly-keyword>
+<date-time-spec> = <month-spec> <day-spec> <time-spec>
+</pre>
+
+</dd>
+</div>
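+<p> As an illustration of the Run syntax above, the following Schedule sketch runs a Full backup on the first Sunday of the month, a Differential on the remaining Sundays, and Incrementals every other night, each with its own Pool override (resource and Pool names are illustrative): </p>
+<pre>
+ Schedule {
+   Name = "WeeklyCycle"
+   Run = Level=Full Pool=Monthly 1st sun at 2:05
+   Run = Level=Differential Pool=Weekly 2nd-5th sun at 2:05
+   Run = Level=Incremental Pool=Daily mon-sat at 2:05
+ }
+</pre>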
+<div id="Director_Schedule_Enabled">
+<dt>Enabled = <yes|no></dt>
+<dd>
+ This directive allows you to enable or disable the <span>Schedule</span> resource.
+</dd>
+</div>
+<div id="Director_Fileset_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the FileSet resource. This directive is required.
+</dd>
+</div>
+<div id="Director_Fileset_IgnoreFilesetChanges">
+<dt>Ignore FileSet Changes = <yes|no></dt>
+<dd>
+ Normally, if you modify the FileSet Include or Exclude lists, the next backup will be forced to a Full so that <span class="bbacula">Bacula</span> can guarantee that any additions or deletions are properly saved. <p> We strongly recommend against setting this directive to yes, since doing so may cause you to have an incomplete set of backups. </p>
+<p> If this directive is set to <span class="bvalue">yes</span>, any changes you make to the FileSet Include or Exclude lists, will not force a Full during subsequent backups. Note that any changes to Options resources in the FileSet are not considered by this directive. You can use the Accurate mode for this to be treated correctly, or schedule a new Full backup manually. </p>
+<p> The default is <span class="bdefaultvalue">no</span>, in which case, if you change the Include or Exclude lists, <span class="bbacula">Bacula</span> will force a Full backup to ensure that everything is properly backed up. </p>
+
+</dd>
+</div>
+<div id="Director_Fileset_EnableVss">
+<dt>Enable VSS = <yes|no></dt>
+<dd>
+ If this directive is set to <span class="bvalue">yes</span> the File daemon will be notified that the user wants to use a Volume Snapshot Service (VSS) backup for this job. The default is <span class="bdefaultvalue">yes</span>. This directive is effective only for VSS enabled Win32 File daemons. It permits a consistent copy of open files to be made for cooperating writer applications, and for applications that are not VSS aware, <span class="bbacula">Bacula</span> can at least copy open files. The Volume Snapshot Service will only be done on Windows drives where the drive (e.g. C:, D:, ...) is explicitly mentioned in a <span class="bdirectivename">File</span> directive. For more information, please see the Windows chapter of this manual.
+</dd>
+</div>
+<div id="Director_Fileset_EnableSnapshot">
+<dt>Enable Snapshot = <yes|no></dt>
+<dd>
+ If this directive is set to <span class="bbf">yes</span> the File daemon will be notified that the user wants to use the Snapshot Engine for this job. The default is <span class="bdefaultvalue">no</span>. This directive is effective only for Snapshot enabled Unix File daemons. It permits a consistent copy of open files to be made for cooperating applications. The <span class="btt">bsnapshot</span> tool should be installed on the Client.
+</dd>
+</div>
+<div id="Director_Fileset_Include"><dt>Include {Options {<file-options>} ...; <file-list> } </dt></div>
+<div id="Director_Fileset_Exclude"><dt>Exclude {<file-list> }</dt></div>
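+<p> A minimal FileSet using the Include and Exclude forms above might look like this sketch (paths and names are illustrative): </p>
+<pre>
+ FileSet {
+   Name = "HomeDirs"
+   Include {
+     Options {
+       signature = MD5
+       compression = GZIP
+     }
+     File = /home
+   }
+   Exclude {
+     File = /home/cache
+   }
+ }
+</pre>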
+<div id="Director_Pool_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the pool. For most applications, you will use the default pool name <span class="bvalue">Default</span>. This directive is required.
+</dd>
+</div>
+<div id="Director_Pool_PoolType">
+<dt>Pool Type = <type></dt>
+<dd>
+ This directive defines the pool type, which corresponds to the type of Job being run. It is required and may be one of the following:
+<ul class="bitemize2">
+<li class="bitemize2">[Backup] </li>
+<li class="bitemize2">[*Archive] </li>
+<li class="bitemize2">[*Cloned] </li>
+<li class="bitemize2">[*Migration] </li>
+<li class="bitemize2">[*Copy] </li>
+<li class="bitemize2">[*Save] </li>
+</ul> Note, only Backup is currently implemented.
+
+</dd>
+</div>
+<div id="Director_Pool_LabelFormat">
+<dt>Label Format = <format></dt>
+<dd>
+ This directive specifies the format of the labels contained in this pool. The format directive is used as a sort of template to create new Volume names during automatic Volume labeling. <p> The <span class="bbracket"><format></span> should be enclosed in double quotes (<span class="bbf">"</span>), and consists of letters, numbers and the special characters hyphen (<span class="bbf">-</span>), underscore (<span class="bbf">_</span>), colon (<span class="bbf">:</span>), and period (<span class="bbf">.</span>), which are the legal characters for a Volume name. </p>
+<p> In addition, the format may contain a number of variable expansion characters which will be expanded by a complex algorithm allowing you to create Volume names of many different formats. In all cases, the expansion process must resolve to the set of characters noted above that are legal Volume names. Generally, these variable expansion characters begin with a dollar sign (<span class="bbf">$</span>) or a left bracket (<span class="bbf">[</span>). If you specify variable expansion characters, you should always enclose the format with double quote characters (<span class="bbf">"</span>). For more details on variable expansion, please see the Variable Expansion chapter of the <span class="bmanualname"><span class="bbacula">Bacula</span> Enterprise Miscellaneous guide</span>. </p>
+<p> If no variable expansion characters are found in the string, the Volume name will be formed from the <span class="bbracket"><format></span> string appended with a unique number that increases. If you do not remove volumes from the pool, this number should be the number of volumes plus one, but this is not guaranteed. The unique number will be edited as four digits with leading zeros. For example, with a <span class="bdirectivename">Label Format</span> = <span class="bvalue">"File-"</span>, the first volumes will be named <span class="bfilename">File-0001</span>, <span class="bfilename">File-0002</span>, ... </p>
+<p> With the exception of Job specific variables, you can test your <span class="bdirectivename">LabelFormat</span> by using the <span class="bcommandname">var</span> command described in the <span class="bmanualname"><span class="bbacula">Bacula</span> Enterprise Console manual</span>. </p>
+
+<pre>
+ Label Format="${Level}_${Type}_${Client}_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
+</pre>
+<p> Once defined, the name of the volume cannot be changed. When the volume is recycled, it can be used by another Job at another time, and possibly from another Pool. In the example above, a volume with such a name is probably not intended to be recycled or reused. </p>
+<p> In almost all cases, you should enclose the format specification (part after the equal sign) in double quotes. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_CleaningPrefix">
+<dt>Cleaning Prefix = <string></dt>
+<dd>
+ This directive defines a prefix string; if it matches the beginning of a Volume name when the Volume is labeled, the Volume's VolStatus will be set to Cleaning and <span class="bbacula">Bacula</span> will never attempt to use that tape. This is primarily for use with autochangers that accept barcodes, where the convention is that barcodes beginning with <span class="bhighlight">CLN</span> are treated as cleaning tapes.
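+<p> For example, following the common barcode convention, a Pool might be configured as in this minimal sketch (the pool name is illustrative): </p>
+<pre>
+Pool {
+  Name = Default
+  Pool Type = Backup
+  Cleaning Prefix = "CLN"
+  ...
+}
+</pre>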
+</dd>
+</div>
+<div id="Director_Pool_ScratchPool">
+<dt>ScratchPool = <pool-resource-name></dt>
+<dd>
+ This directive allows specifying a particular scratch Pool to be used for the Job. This pool will replace the default scratch pool named <span class="bsl">Scratch</span> for volume selection. For more information about scratch pools, see the Scratch Pool section of this manual. This directive is useful when using multiple storage devices that share the same MediaType or when you want to dedicate volumes to a particular set of pools.
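+<p> For example, to draw volumes from a dedicated scratch pool rather than the default <span class="bsl">Scratch</span> pool, a minimal sketch (the pool names are illustrative): </p>
+<pre>
+Pool {
+  Name = MonthlyFull
+  Pool Type = Backup
+  ScratchPool = MonthlyScratch
+  ...
+}
+</pre>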
+</dd>
+</div>
+<div id="Director_Pool_CatalogFiles">
+<dt>Catalog Files = <yes|no></dt>
+<dd>
+ This directive defines whether or not you want the names of the files that were saved to be put into the catalog. The default is <span class="bdefaultvalue">yes</span>. The advantage of specifying <span class="bdirectivename">Catalog Files</span> = <span class="bvalue">No</span> is that you will have a significantly smaller Catalog database. The disadvantage is that you will not be able to produce a Catalog listing of the files backed up for each Job (this is often called Browsing). Also, without the File entries in the catalog, you will not be able to use the Console <span class="bcommandname">restore</span> command nor any other command that references File entries.
+</dd>
+</div>
+<div id="Director_Pool_Storage">
+<dt>Storage = <storage-resource-name></dt>
+<dd>
+ The Storage directive defines the name of the storage service where you want to back up the FileSet data. For additional details, see the Storage Resource chapter of this manual. The Storage resource may also be specified in the Job resource, but the value, if any, in the Pool resource overrides any value in the Job. A Storage resource definition is not required in both the Job resource and the Pool, but it must be specified in at least one of them; if it is not, a configuration error will result.
+</dd>
+</div>
+<div id="Director_Pool_MaximumVolumes">
+<dt>Maximum Volumes = <number></dt>
+<dd>
+ This directive specifies the maximum number of volumes (tapes or files) contained in the pool. This directive is optional; if it is omitted or set to zero, any number of volumes will be permitted. In general, this directive is useful for Autochangers where there is a fixed number of Volumes, or for File storage where you wish to ensure that the backups made to disk files do not become too numerous or consume too much space.
+<p> This directive is only respected in case of volumes automatically created by <span class="bbacula">Bacula</span>. If you add volumes to a pool manually with the <span class="bcommandname">label</span> command, it is possible to have more volumes in a pool than specified by <span class="bdirectivename">Maximum Volumes</span>. </p>
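+<p> For example, to limit a disk pool to 100 automatically created volumes, a minimal sketch (the pool name is illustrative): </p>
+<pre>
+Pool {
+  Name = FilePool
+  Pool Type = Backup
+  Maximum Volumes = 100
+  ...
+}
+</pre>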
+
+</dd>
+</div>
+<div id="Director_Pool_MaximumVolumeJobs">
+<dt>Maximum Volume Jobs = <positive-integer></dt>
+<dd>
+ This directive specifies the maximum number of Jobs that can be written to the Volume. If you specify zero (<span class="bdefaultvalue">0</span>) (the default), there is no limit. Otherwise, when the number of Jobs backed up to the Volume equals <span class="bbracket"><positive-integer></span> the Volume will be marked Used. When the Volume is marked Used it can no longer be used for appending Jobs, much like the Full status. A Volume that is marked Used or Full can be recycled if recycling is enabled, and thus used again. By setting <span class="bdirectivename">MaximumVolumeJobs</span> to <span class="bvalue">1</span>, you get the same effect as setting <span class="bdirectivename">UseVolumeOnce</span> = <span class="bvalue">yes</span>. <p> The value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
+<p> If you are running multiple simultaneous jobs, this directive may not work correctly because when a drive is reserved for a job, this directive is not taken into account, so multiple jobs may try to start writing to the Volume. At some point, when the Media record is updated, multiple simultaneous jobs may fail since the Volume can no longer be written. </p>
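+<p> For example, to write only one Job per Volume, which is often convenient for file-based Volumes, a minimal sketch: </p>
+<pre>
+Pool {
+  Name = Default
+  Pool Type = Backup
+  Maximum Volume Jobs = 1
+  ...
+}
+</pre>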
+
+</dd>
+</div>
+<div id="Director_Pool_MaximumVolumeFiles">
+<dt>Maximum Volume Files = <positive-integer></dt>
+<dd>
+ This directive specifies the maximum number of files that can be written to the Volume. If you specify zero (<span class="bdefaultvalue">0</span>, the default), there is no limit. Otherwise, when the number of files written to the Volume equals <span class="bbracket"><positive-integer></span> the Volume will be marked Used. When the Volume is marked Used it can no longer be used for appending Jobs, much like the Full status, but it can be recycled if recycling is enabled and thus used again. This value is checked and the Used status is set only at the end of a job that writes to the particular volume. <p> The value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_MaximumVolumeBytes">
+<dt>Maximum Volume Bytes = <size></dt>
+<dd>
+ This directive specifies the maximum number of bytes that can be written to the Volume. If you specify zero (<span class="bdefaultvalue">0</span>, the default), there is no limit except the physical size of the Volume. Otherwise, when the number of bytes written to the Volume equals <span class="bbracket"><size></span> the Volume will be marked Full. When the Volume is marked Full it can no longer be used for appending Jobs, but it can be recycled if recycling is enabled, and thus the Volume can be re-used after recycling. The size specified is checked just before each block is written to the Volume and if the Volume size would exceed the specified Maximum Volume Bytes the Full status will be set and the Job will request the next available Volume to continue. <p> This directive is particularly useful for restricting the size of disk volumes, and will work correctly even in the case of multiple simultaneous jobs writing to the volume. </p>
+<p> The value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
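+<p> For example, to cap each disk Volume at 5 gigabytes, a minimal sketch (the pool name is illustrative): </p>
+<pre>
+Pool {
+  Name = DiskPool
+  Pool Type = Backup
+  Maximum Volume Bytes = 5G
+  ...
+}
+</pre>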
+
+</dd>
+</div>
+<div id="Director_Pool_VolumeUseDuration">
+<dt>Volume Use Duration = <time-period-specification></dt>
+<dd>
+ The Volume Use Duration directive defines the time period that the Volume can be written, beginning from the time of the first data write to the Volume. If the time period specified is zero (<span class="bdefaultvalue">0</span>, the default), the Volume can be written indefinitely. Otherwise, the next time a job runs that wants to access this Volume, and the time period from the first write to the volume (the first Job written) exceeds the time-period-specification, the Volume will be marked Used, which means that no more Jobs can be appended to the Volume, but it may be recycled if recycling is enabled. The <span class="bcommandname">status dir</span> command applies algorithms similar to those used for running jobs, so during such a command the Volume status may also be changed. Once the Volume is recycled, it will be available for use again. <p> You might use this directive, for example, if you have a Volume used for Incremental backups, and Volumes used for Weekly Full backups. Once the Full backup is done, you will want to use a different Incremental Volume. This can be accomplished by setting the Volume Use Duration for the Incremental Volume to six days. That is, it will be used for the 6 days following a Full save, then a different Incremental volume will be used. Be careful about setting the duration to short periods such as 23 hours, or you might experience problems with <span class="bbacula">Bacula</span> waiting for a tape over the weekend only to complete the backups Monday morning when an operator mounts a new tape. </p>
+<p> The use duration is checked and the Used status is set only at the end of a job that writes to the particular volume, which means that even though the use duration may have expired, the catalog entry will not be updated until the next job that uses this volume is run. This directive is not intended to be used to limit volume sizes and may not work as expected (i.e. will fail jobs) if the use duration expires while multiple simultaneous jobs are writing to the volume. </p>
+<p> Please note that the value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update volume command in the <span class="bmanualname"><span class="bbacula">Bacula</span> Enterprise Console manual</span>. </p>
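+<p> For example, following the scenario above, the Pool used for Incremental backups could be limited to six days of use, as in this minimal sketch (the pool name is illustrative): </p>
+<pre>
+Pool {
+  Name = IncrementalPool
+  Pool Type = Backup
+  Volume Use Duration = 6 days
+  ...
+}
+</pre>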
+
+</dd>
+</div>
+<div id="Director_Pool_UseVolumeOnce">
+<dt>Use Volume Once = <yes|no></dt>
+<dd>
+ This directive if set to <span class="bvalue">yes</span> specifies that each volume is to be used only once. This is most useful when the Media is a file and you want a new file for each backup that is done. The default is <span class="bdefaultvalue">no</span> (i.e. use volume any number of times). This directive will most likely be phased out (deprecated), so you are recommended to use <span class="bdirectivename">Maximum Volume Jobs</span> = <span class="bvalue">1</span> instead. <p> The value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
+<p> Please see the notes below under <span class="bdirectivename">Maximum Volume Jobs</span> concerning using this directive with multiple simultaneous jobs. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_Recycle">
+<dt>Recycle = <yes|no></dt>
+<dd>
+ This directive specifies whether or not Purged Volumes may be recycled. If it is set to <span class="bdefaultvalue">yes</span> (default) and <span class="bbacula">Bacula</span> needs a volume but finds none that are appendable, it will search for and recycle (reuse) Purged Volumes (i.e. volumes with all the Jobs and Files expired and thus deleted from the Catalog). If the Volume is recycled, all previous data written to that Volume will be overwritten. If Recycle is set to <span class="bvalue">no</span>, the Volume will not be recycled, and hence, the data will remain valid. If you want to reuse (re-write) the Volume, and the recycle flag is no (0 in the catalog), you may manually set the recycle flag (<span class="bcommandname">update</span> command) for a Volume to be reused. <p> Please note that the value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
+<p> When all Job and File records have been pruned or purged from the catalog for a particular Volume, if that Volume is marked as Full or Used, it will then be marked as Purged. Only Volumes marked as Purged will be considered to be converted to the Recycled state if the <span class="bdirectivename">Recycle</span> directive is set to <span class="bvalue">yes</span>. </p>
+
+
+</dd>
+</div>
+<div id="Director_Pool_RecyclePool">
+<dt>RecyclePool = <pool-resource-name></dt>
+<dd>
+ This directive defines to which pool the Volume will be placed (moved) when it is recycled. Without this directive, a Volume will remain in the same pool when it is recycled. With this directive, it will be moved automatically to any existing pool during a recycle. This directive is probably most useful when defined in the Scratch pool, so that volumes will be recycled back into the Scratch pool. For more on the see the Scratch Pool section of this manual. <p> Although this directive is called RecyclePool, the Volume in question is actually moved from its current pool to the one you specify on this directive when <span class="bbacula">Bacula</span> prunes the Volume and discovers that there are no records left in the catalog and hence marks it as Purged. </p>
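+<p> For example, to have recycled Volumes returned to the Scratch pool, a minimal sketch: </p>
+<pre>
+Pool {
+  Name = Scratch
+  Pool Type = Backup
+  RecyclePool = Scratch
+}
+</pre>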
+
+</dd>
+</div>
+<div id="Director_Pool_PurgeOldestVolume">
+<dt>Purge Oldest Volume = <yes|no></dt>
+<dd>
+ This directive instructs the Director to search for the oldest used Volume in the Pool when another Volume is requested by the Storage daemon and none are available. The catalog is then <span class="bbf">purged</span> irrespective of retention periods of all Files and Jobs written to this Volume. The Volume is then recycled and will be used as the next Volume to be written. This directive overrides any Job, File, or Volume retention periods that you may have specified. <p> This directive can be useful if you have a fixed number of Volumes in the Pool and you want to cycle through them, reusing the oldest one when all Volumes are full, but you don't want to worry about setting proper retention periods. However, by using this option you risk losing valuable data. </p>
+<p> Please be aware that <span class="bvalue">Purge Oldest Volume</span> disregards all retention periods. If you have only a single Volume defined and you turn this variable on, that Volume will always be immediately overwritten when it fills! So at a minimum, ensure that you have a decent number of Volumes in your Pool before running any jobs. If you want retention periods to apply do not use this directive. To specify a retention period, use the <span class="bdirectivename">Volume Retention</span> directive (see above). </p>
+<p> We <span class="bbf">highly</span> recommend against using this directive, because it is sure that some day, <span class="bbacula">Bacula</span> will recycle a Volume that contains current data. The default is <span class="bdefaultvalue">no</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_RecycleOldestVolume">
+<dt>Recycle Oldest Volume = <yes|no></dt>
+<dd>
+ This directive instructs the Director to search for the oldest used Volume in the Pool when another Volume is requested by the Storage daemon and none are available. The catalog is then <span class="bbf">pruned</span> respecting the retention periods of all Files and Jobs written to this Volume. If all Jobs are pruned (i.e. the volume is Purged), then the Volume is recycled and will be used as the next Volume to be written. This directive respects any Job, File, or Volume retention periods that you may have specified, and as such it is <span class="bbf">much</span> better to use this directive than the Purge Oldest Volume. <p> This directive can be useful if you have a fixed number of Volumes in the Pool and you want to cycle through them and you have specified the correct retention periods. </p>
+<p> However, if you use this directive and have only one Volume in the Pool, you will immediately recycle your Volume if you fill it and <span class="bbacula">Bacula</span> needs another one. Thus your backup will be totally invalid. Please use this directive with care. The default is <span class="bdefaultvalue">no</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_RecycleCurrentVolume">
+<dt>Recycle Current Volume = <yes|no></dt>
+<dd>
+ If <span class="bbacula">Bacula</span> needs a new Volume, this directive instructs <span class="bbacula">Bacula</span> to Prune the volume respecting the Job and File retention periods. If all Jobs are pruned (i.e. the volume is Purged), then the Volume is recycled and will be used as the next Volume to be written. This directive respects any Job, File, or Volume retention periods that you may have specified, and thus it is <span class="bbf">much</span> better to use it rather than the Purge Oldest Volume directive. <p> This directive can be useful if you have a fixed number of Volumes in the Pool, you want to cycle through them, and you have specified retention periods that prune Volumes before you have cycled through the Volume in the Pool. </p>
+<p> However, if you use this directive and have only one Volume in the Pool, you will immediately recycle your Volume if you fill it and <span class="bbacula">Bacula</span> needs another one. Thus your backup will be totally invalid. Please use this directive with care. The default is <span class="bdefaultvalue">no</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_VolumeRetention">
+<dt>Volume Retention = <time-period-specification></dt>
+<dd>
+ The <span class="bdirectivename">Volume Retention</span> directive defines the longest amount of time that <span class="bbacula">Bacula</span> will keep records associated with the Volume in the Catalog database after the end time of each Job written to the Volume. When this time period expires, and if <span class="bdirectivename">AutoPrune</span> is set to <span class="bvalue">yes</span> <span class="bbacula">Bacula</span> may prune (remove) Job records that are older than the specified Volume Retention period if it is necessary to free up a Volume. Note, it is also possible for all the Job and File records to be pruned before the Volume Retention period if Job and File Retention periods are configured to a lower value. In that case the Volume can then be marked Pruned and subsequently recycled prior to expiration of the Volume Retention period.
+<p> Recycling will not occur until it is absolutely necessary to free up a volume (i.e. no other writable volume exists). All File records associated with pruned Jobs are also pruned. The time may be specified as seconds, minutes, hours, days, weeks, months, quarters, or years. The <span class="bdirectivename">Volume Retention</span> is applied independently of the <span class="bdirectivename">Job Retention</span> and the <span class="bdirectivename">File Retention</span> periods defined in the Client resource. This means that all the retention periods are applied in turn and that the shorter period is the one that effectively takes precedence. Note, that when the <span class="bdirectivename">Volume Retention</span> period has been reached, and it is necessary to obtain a new volume, <span class="bbacula">Bacula</span> will prune both the Job and the File records. And the inverse is also true that if all the Job and File records that refer to a Volume were already pruned, then the Volume may be recycled regardless of its retention period. Pruning may also occur during a <span class="bcommandname">status dir</span> command because it uses similar algorithms for finding the next available Volume. </p>
+<p> It is important to know that when the Volume Retention period expires, or all the Job and File records have been pruned that refer to a Volume, <span class="bbacula">Bacula</span> does not automatically recycle a Volume. It attempts to keep the Volume data intact as long as possible before over writing the Volume. </p>
+<p> By defining multiple Pools with different Volume Retention periods, you may effectively have a set of tapes that is recycled weekly, another Pool of tapes that is recycled monthly and so on. However, one must keep in mind that if your <span class="bdirectivename">Volume Retention</span> period is too short, it may prune the last valid Full backup, and hence until the next Full backup is done, you will not have a complete backup of your system, and in addition, the next Incremental or Differential backup will be promoted to a Full backup. As a consequence, the minimum <span class="bdirectivename">Volume Retention</span> period should be at least twice the interval of your Full backups. This means that if you do a Full backup once a month, the minimum Volume retention period should be two months. </p>
+<p> The default Volume retention period is <span class="bdefaultvalue">365 days</span>, and either the default or the value defined by this directive in the <span class="bfilename">bacula-dir.conf</span> file is the default value used when a Volume is created. Once the volume is created, changing the value in the <span class="bfilename">bacula-dir.conf</span> file will not change what is stored for the Volume. To change the value for an existing Volume you must use the <span class="bcommandname">update</span> command in the Console. </p>
+<p> To disable the <span class="bdirectivename">Volume Retention</span> feature, it is possible to set the directive to 0. When disabled, the pruning will be done only on the Job Retention directives and the "ExpiresIn" information available in the <span class="bcommandname">list volume</span> output is not available. </p>
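+<p> For example, for a Pool holding monthly Full backups, a retention of at least two months could be configured as in this minimal sketch (the pool name is illustrative): </p>
+<pre>
+Pool {
+  Name = FullPool
+  Pool Type = Backup
+  Volume Retention = 2 months
+  AutoPrune = yes
+  ...
+}
+</pre>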
+
+</dd>
+</div>
+<div id="Director_Pool_AutoPrune">
+<dt>AutoPrune = <yes|no></dt>
+<dd>
+ If AutoPrune is set to <span class="bdefaultvalue">yes</span> (default), <span class="bbacula">Bacula</span> (version 1.20 or greater) will automatically apply the Volume Retention period when a new Volume is needed and no appendable Volumes exist in the Pool. Volume pruning causes expired Jobs (older than the <span class="bdirectivename">Volume Retention</span> period) to be deleted from the Catalog and permits possible recycling of the Volume.
+</dd>
+</div>
+<div id="Director_Pool_ActionOnPurge">
+<dt>Action On Purge = <Truncate></dt>
+<dd>
+<p> The directive <span class="bdirectivename">ActionOnPurge</span> = <span class="bvalue">Truncate</span> instructs <span class="bbacula">Bacula</span> to permit the Volume to be truncated after it has been purged. Note: the ActionOnPurge is a bit misleading since the volume is not actually truncated when it is purged, but is enabled to be truncated. The actual truncation is done with the <span class="bcommandname">truncate</span> command. </p>
+<p> To actually truncate a Volume, you must first set the ActionOnPurge to Truncate in the Pool, then you must ensure that any existing Volumes also have this information in them, by doing an <span class="bcommandname">update Volumes</span> command. Finally, after the Volume has been purged, you may then truncate it. This is useful to prevent disk-based volumes from consuming too much space. See below for more details of how to ensure Volumes are truncated after being purged. </p>
+<p> First set the Pool to permit truncation. </p>
+<pre>
+Pool {
+ Name = Default
+ Action On Purge = Truncate
+ ...
+}
+</pre>
+<p> Then, assuming a Volume has been purged, you can schedule the truncate operation at the end of your CatalogBackup job, as in this example: </p>
+
+<pre>
+Job {
+ Name = CatalogBackup
+ ...
+ RunScript {
+ RunsWhen=After
+ RunsOnClient=No
+ Console = "truncate Volume allpools storage=File"
+ }
+}
+</pre>
+
+</dd>
+</div>
+<div id="Director_Pool_JobRetention">
+<dt>Job Retention = <time-period-specification></dt>
+<dd>
+ <p> The Job Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep Job records in the Catalog database after the Job End time. As with the other retention periods, this affects only records in the catalog and not data in your archive backup. </p>
+<p> This directive takes precedence over Client directives of the same name. For example, you can decide to increase Retention times for Archive or OffSite Pool. </p>
+<p> For more information, see the <span class="bdirectivename">JobRetention</span> directive in the Client resource documentation. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_FileRetention">
+<dt>File Retention = <time-period-specification></dt>
+<dd>
+ The File Retention directive defines the length of time that <span class="bbacula">Bacula</span> will keep File records in the Catalog database after the End time of the Job corresponding to the File records. <p> This directive takes precedence over Client directives of the same name. For example, you can decide to increase Retention times for Archive or OffSite Pool. </p>
+<p> Note, this affects only records in the catalog database. It does not affect your archive backups. </p>
+<p> For more information, see the <span class="bdirectivename">FileRetention</span> directive in the Client resource documentation. </p>
+
+</dd>
+</div>
+<div id="Director_Pool_MigrationHighBytes">
+<dt>Migration High Bytes = <byte-specification></dt>
+<dd>This directive specifies the number of bytes in the Pool which will trigger a migration if a <span class="bbf">PoolOccupancy</span> migration selection type has been specified. The fact that the Pool usage goes above this level does not automatically trigger a migration job. However, if a migration job runs and has the PoolOccupancy selection type set, the Migration High Bytes will be applied. <span class="bbacula">Bacula</span> does not currently restrict a pool to have only a single Media Type, so you must keep in mind that if you mix Media Types in a Pool, the results may not be what you want, as the Pool count of all bytes will be for all Media Types combined.
+</dd>
+</div>
+<div id="Director_Pool_MigrationLowBytes">
+<dt>Migration Low Bytes = <byte-specification></dt>
+<dd>This directive specifies the number of bytes in the Pool which will stop a migration if a <span class="bbf">PoolOccupancy</span> migration selection type has been specified and triggered by more than Migration High Bytes being in the pool. In other words, once a migration job is started with <span class="bbf">PoolOccupancy</span> migration selection and it determines that there are more than Migration High Bytes, the migration job will continue to run jobs until the number of bytes in the Pool drop to or below Migration Low Bytes.
+</dd>
+</div>
+<div id="Director_Pool_NextPool">
+<dt>Next Pool = <pool-specification></dt>
+<dd>The Next Pool directive specifies the pool to which Jobs will be migrated. This directive is required to define the Pool into which the data will be migrated. Without this directive, the migration job will terminate in error. <p> The Next Pool directive may also be specified in the Job resource or on a Run directive in the Schedule resource. Any Next Pool directive in the Job resource will take precedence over the Pool definition, and any Next Pool specification on the Run directive in a Schedule resource will take ultimate precedence. </p>
+
+</dd>
+</div>
+<div id="Director_Counter_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the Counter. This is the name you will use in the variable expansion to reference the counter value.
+</dd>
+</div>
+<div id="Director_Counter_Minimum">
+<dt>Minimum = <integer></dt>
+<dd>
+ This specifies the minimum value that the counter can have. It also becomes the default. If not supplied, <span class="bdefaultvalue">zero</span> is assumed.
+</dd>
+</div>
+<div id="Director_Counter_Maximum">
+<dt>Maximum = <integer></dt>
+<dd>
+ This is the maximum value that the counter can have. If not specified or set to zero, the counter can have a maximum value of 2,147,483,648 (2 to the 31st power). When the counter is incremented past this value, it is reset to the Minimum.
+</dd>
+</div>
+<div id="Director_Counter_Catalog">
+<dt>Catalog = <catalog-name></dt>
+<dd>
+ If this directive is specified, the counter and its values will be saved in the specified catalog. If this directive is not present, the counter will be redefined each time that <span class="bbacula">Bacula</span> is started.
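+<p> A complete Counter resource might therefore look like the following minimal sketch (the counter and catalog names are illustrative): </p>
+<pre>
+Counter {
+  Name = FileCounter
+  Minimum = 1
+  Maximum = 9999
+  Catalog = MyCatalog
+}
+</pre>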
+</dd>
+</div>
+<div id="Director_Console_Password">
+<dt>Password = <password></dt>
+<dd>
+ Specifies the password that must be supplied for a named <span class="bbacula">Bacula</span> Console to be authorized. The same password must appear in the <span>Console</span> resource of the Console configuration file. For added security, the password is never actually passed across the network but rather a challenge response hash code created with the password. This directive is required. If you have either <span class="btool">/dev/random</span> or <span class="btool">bc</span> on your machine, <span class="bbacula">Bacula</span> will generate a random password during the configuration process, otherwise it will be left blank. <p> The password is plain text. It is not generated through any special process. However, it is preferable for security reasons to choose random text. </p>
+
+</dd>
+</div>
+<div id="Director_Console_JobAcl">
+<dt>JobACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of Job resource names that can be accessed by the console. Without this directive, the console cannot access any of the Director's Job resources. Multiple Job resource names may be specified by separating them with commas, and/or by specifying multiple JobACL directives. For example, the directive may be specified as:
+<pre>
+ JobACL = kernsave, "Backup client 1", "Backup client 2"
+ JobACL = "RestoreFiles"
+</pre>
+<p> With the above specification, the console can access the Director's resources for the four jobs named on the JobACL directives, but for no others. </p>
+
+</dd>
+</div>
+<div id="Director_Console_ClientAcl">
+<dt>ClientACL = <name-list></dt>
+<dd>
+ <p> This directive is used to specify a list of <span>Client</span> resource names that can be accessed by the console. </p>
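+<p> Following the same form as the JobACL directive, multiple Client resource names may be listed, for example (the client names are illustrative): </p>
+<pre>
+  ClientACL = client1-fd, client2-fd
+</pre>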
+
+</dd>
+</div>
+<div id="Director_Console_StorageAcl">
+<dt>StorageACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of Storage resource names that can be accessed by the console.
+</dd>
+</div>
+<div id="Director_Console_ScheduleAcl">
+<dt>ScheduleACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of Schedule resource names that can be accessed by the console.
+</dd>
+</div>
+<div id="Director_Console_PoolAcl">
+<dt>PoolACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of Pool resource names that can be accessed by the console.
+</dd>
+</div>
+<div id="Director_Console_CommandAcl">
+<dt>CommandACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of console commands that can be executed by the console.
+</dd>
+</div>
+<div id="Director_Console_CatalogAcl">
+<dt>CatalogACL = <name-list></dt>
+<dd>
+ This directive is used to specify a list of Catalog resource names that can be accessed by the console.
+</dd>
+</div>
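+<p> The ACL directives above are typically combined in a single restricted <span>Console</span> resource in the Director configuration. A sketch (resource names and the password are illustrative): </p>
+<pre>
+ Console {
+   Name = restricted-user
+   Password = "secret"
+   JobACL = "Backup client 1", "RestoreFiles"
+   ClientACL = client1-fd
+   StorageACL = File1
+   ScheduleACL = *all*
+   PoolACL = Default
+   CommandACL = run, restore, status, messages
+   CatalogACL = MyCatalog
+ }
+</pre>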
+<div id="Director_Console_WhereAcl">
+<dt>WhereACL = <string></dt>
+<dd>
+ This directive permits you to specify where a restricted console can restore files. If this directive is not specified, only the default restore location is permitted (normally <span class="bdefaultvalue">/tmp/bacula-restores</span>). If <span class="bvalue">*all*</span> is specified, any path the user enters will be accepted (not very secure). Any other value specified (there may be multiple WhereACL directives) will restrict the user to that path. For example, on a Unix system, if you specify <span>“</span>/<span>”</span>, files will be restored to their original locations. This directive is untested.
+</dd>
+</div>
+<div id="Director_Console_BackupClientAcl">
+<dt>BackupClientACL = <name-list></dt>
+<dd>
+ <p> This directive is used to specify a list of <span>Client</span> resource names that can be used by the console to backup files. The <span class="bdirectivename">ClientAcl</span> is not affected by the <span class="bdirectivename">BackupClientACL</span> directive. </p>
+
+</dd>
+</div>
+<div id="Director_Console_RestoreClientAcl">
+<dt>RestoreClientACL = <name-list></dt>
+<dd>
+ <p> This directive is used to specify a list of <span>Client</span> resource names that can be used by the console to restore files. The <span class="bdirectivename">ClientAcl</span> is not affected by the <span class="bdirectivename">RestoreClientACL</span> directive. </p>
+
+<pre>
+ ClientAcl = localhost-fd # backup and restore
+ RestoreClientAcl = test-fd # restore only
+ BackupClientAcl = production-fd # backup only
+</pre>
+
+</dd>
+</div>
+<div id="Director_Console_UserIdAcl">
+<dt>UserIdACL = <name-list></dt>
+<dd>
+<p> This directive is used to specify a list of <span class="btt">UID</span>/<span class="btt">GID</span> that can be accessed from a restore session. Without this directive, the console cannot restore any file. During the restore session, the Director will compute the restore list and will exclude files and directories that cannot be accessed. <span class="bbacula">Bacula</span> uses the LStat database field to retrieve <span class="btt">st_mode</span>, <span class="btt">st_uid</span> and <span class="btt">st_gid</span> information for each file and compares them with the UserIdACL elements. If a parent directory doesn't have a proper catalog entry, access to that directory will be granted automatically. </p>
+<p><span class="btt">UID</span>/<span class="btt">GID</span> names are resolved with the <span class="btool">getpwnam()</span> function within the Director. The user <span class="btt">UID</span>/<span class="btt">GID</span> mapping might differ from one system to another. </p>
+<p> Windows systems are not compatible with the <span class="bdirectivename">UserIdACL</span> feature. The use of <span class="bdirectivename">UserIdACL</span> = <span class="bvalue">*all*</span> is required to restore Windows systems from a restricted Console. </p>
+<p> Multiple <span class="btt">UID</span>/<span class="btt">GID</span> names may be specified by separating them with commas, and/or by specifying multiple <span class="bdirectivename">UserIdACL</span> directives. </p>
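+<p> For example, a restricted Console might be limited to a hypothetical user and group (the names below are illustrative): </p>
+<pre>
+ UserIdACL = bacula
+ UserIdACL = "www-data", "backupusers"
+</pre>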
+
+</dd>
+</div>
+<div id="Director_Console_DirectoryAcl">
+<dt>DirectoryACL = <name-list></dt>
+<dd>
+ <p> This directive is used to specify a list of directories that can be accessed by a restore session. Without this directive, the console cannot restore any file. Multiple directory names may be specified by separating them with commas, and/or by specifying multiple <span class="bdirectivename">DirectoryACL</span> directives. </p>
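+<p> For example (the paths are illustrative): </p>
+<pre>
+ DirectoryACL = /home/bacula
+ DirectoryACL = "/etc", "/var/www"
+</pre>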
+
+</dd>
+</div>
+<div id="Director_Console_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Director_Console_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Director_Console_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK is enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Director_Console_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, <span class="bbacula">Bacula</span> will perform TLS authentication after the CRAM-MD5 authentication. TLS encryption will then be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Director_Console_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded, and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means that the File Daemon (server context) will not check the Director's public certificate (client context). There is no need to specify the <span class="bdirectivename">TLS Certificate File</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the following client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Director_Console_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Director_Console_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Director_Console_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash and an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Director_Console_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
+Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Director_Console_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid in both server and client contexts. If this directive is specified, the peer certificate will be verified against this list. This can be used to ensure that only CN-approved components may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list is only checked if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">yes</span> (the default). For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> In the case this directive is configured in a client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client doesn't provide a certificate with a Common Name that matches a value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="Director_Console_TlsDhFile">
+<dt>TLS DH File = <Filename></dt>
+<dd>Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications. DH key exchange adds an additional level of security because the key used for encryption/decryption by the server and the client is computed on each end and thus is never passed over the network if Diffie-Hellman key exchange is used. Even if DH key exchange is not used, the encryption/decryption key is always passed encrypted. This directive is only valid within a server context. <p> To generate the parameter file, you may use <span class="btool">openssl</span>: </p>
+
+<pre>
+openssl dhparam -out dh4096.pem -5 4096
+</pre>
+
+
+</dd>
+</div>
+<div id="Director_Statistics_Name">
+<dt>Name = <name></dt>
+<dd>
+ The <span class="bdirectivename">Name</span> directive specifies the name of the Statistics resource, used by the system administrator. This directive is required.
+</dd>
+</div>
+<div id="Director_Statistics_Description">
+<dt>Description = <string></dt>
+<dd>
+ <p> The text field contains a description of the Statistics that will be displayed in the graphical user interface. This directive is optional. </p>
+
+</dd>
+</div>
+<div id="Director_Statistics_Type">
+<dt>Type = <CSV|Graphite></dt>
+<dd>
+ <p> The <span class="bdirectivename">Type</span> directive specifies the Statistics backend, which may be one of the following: <span class="bvalue">CSV</span> or <span class="bvalue">Graphite</span>. This directive is required. </p>
+
+<p>CSV is a simple file level backend which saves all required metrics with the following format to the file: <span>“</span><span class="bbracket"><time></span>, <span class="bbracket"><metric></span>, <span class="bbracket"><value></span>\n<span>”</span></p>
+<p> Where <span class="bbracket"><time></span> is a standard Unix time (a number of seconds from 1/01/1970) with local timezone as returned by a system call <span class="btool">time()</span>, <span class="bbracket"><metric></span> is a <span class="bbacula">Bacula</span> metric string and <span class="bbracket"><value></span> is a metric value which could be in numeric format (<span class="btt">int</span>/<span class="btt">float</span>) or a string <span class="bvalue">True</span> or <span class="bvalue">False</span> for boolean variable. The CSV backend requires the <span class="bdirectivename">File</span> = <span class="bvalue"> </span> parameter. </p>
+
+<p> Graphite is a network backend which will send all required metrics to a Graphite server. The Graphite backend requires the <span class="bdirectivename">Host</span> = <span class="bvalue"> </span> and <span class="bdirectivename">Port</span> = <span class="bvalue"> </span> directives to be set. </p>
+<p> If the Graphite server is not available, the metrics are automatically spooled in the working directory. When the server can be reached again, spooled metrics are despooled automatically and the spooling function is suspended. </p>
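+<p> Sketches of the two backend types (the file name, host name, and port below are illustrative): </p>
+<pre>
+ Statistics {
+   Name = csv-stats
+   Type = CSV
+   File = /opt/bacula/working/bacula-dir.metrics
+ }
+ Statistics {
+   Name = graphite-stats
+   Type = Graphite
+   Host = graphite.example.com
+   Port = 2003
+ }
+</pre>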
+
+</dd>
+</div>
+<div id="Director_Statistics_Interval">
+<dt>Interval = <time-interval></dt>
+<dd>
+ <p> The <span class="bdirectivename">Interval</span> directive instructs the Statistics thread how long it should sleep between collection iterations. This directive is optional and the default value is <span class="bdefaultvalue">300</span> seconds. </p>
+
+</dd>
+</div>
+<div id="Director_Statistics_Metrics">
+<dt>Metrics = <metricspec></dt>
+<dd>
+ <p> The <span class="bdirectivename">Metrics</span> directive allows metric filtering. The <span class="bbracket"><metricspec></span> is a filter in which the <span class="bvalue">*</span> and <span class="bvalue">?</span> characters match metric names in the same way as shell wildcards. You can exclude matching metrics with the <span class="bvalue">!</span> prefix. You can define any number of filters for a single Statistics resource. Metrics filters are executed in the order found in the configuration. This directive is optional; if it is not used, all available metrics will be saved by this Statistics backend. </p>
+
+<p> Example: </p>
+<pre>
+# Include all metric starting with "bacula.jobs"
+Metrics = "bacula.jobs.*"
+
+# Exclude any metric starting with "bacula.jobs"
+Metrics = "!bacula.jobs.*"
+</pre>
+
+</dd>
+</div>
+<div id="Director_Statistics_Prefix">
+<dt>Prefix = <string>
+File = <filename>
+</dt>
+<dd>
+ <p> The <span class="bdirectivename">Prefix</span> directive allows you to alter the metric names saved by the Statistics resource, to distinguish between different installations or daemons. The prefix string is added to the metric name as: <span>“</span><span class="bbracket"><prefix></span>.<span class="bbracket"><metric_name></span><span>”</span>. This directive is optional. </p>
+
+ <p> The <span class="bdirectivename">File</span> directive is used by the CSV statistics backend and points to the full path and filename of the file where metrics will be saved. With the CSV type, the <span class="bdirectivename">File</span> directive is required. The Statistics thread must have permission to write to the selected file, or to create a new file if it doesn't exist. If the Statistics thread is unable to write to the file or to create a new one, the collection terminates and an error message is generated. The file is only open during the dump and is closed otherwise. Statistics file rotation can be performed with a <span class="btool">mv</span> shell command. </p>
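+<p> A CSV metrics file contains one line per metric and dump, in the format described under the <span class="bdirectivename">Type</span> directive. Sample content (the timestamps, metric names, and values are illustrative): </p>
+<pre>
+ 1613146440, bacula.jobs.all, 21
+ 1613146440, bacula.dir.started, True
+</pre>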
+
+</dd>
+</div>
+<div id="Director_Statistics_Host">
+<dt>Host = <hostname></dt>
+<dd>
+ <p> The <span class="bdirectivename">Host</span> directive is used by the Graphite backend and specifies the hostname or IP address of the Graphite server. When the <span class="bdirectivename">Type</span> directive is set to Graphite, the <span class="bdirectivename">Host</span> directive is required. </p>
+
+</dd>
+</div>
+<div id="Director_Statistics_Port">
+<dt>Port = <number></dt>
+<dd>
+ <p> The <span class="bdirectivename">Port</span> directive is used by the Graphite backend and specifies the TCP port number of the Graphite server. When the <span class="bdirectivename">Type</span> directive is set to Graphite, the <span class="bdirectivename">Port</span> directive is required. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_Password">
+<dt>Password = <Director-password></dt>
+<dd>
+ Specifies the password that must be supplied by the above named Director. This directive is required.
+</dd>
+</div>
+<div id="Storage_Director_Monitor">
+<dt>Monitor = <yes|no></dt>
+<dd>
+ If Monitor is set to <span class="bdefaultvalue">no</span> (default), this director will have full access to this Storage daemon. If Monitor is set to <span class="bvalue">yes</span>, this director will only be able to fetch the current status of this Storage daemon. <p> Please note that if this director is being used by a Monitor, we highly recommend setting this directive to <span class="bvalue">yes</span> to avoid serious security problems. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK is enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, <span class="bbacula">Bacula</span> will perform TLS authentication after the CRAM-MD5 authentication. TLS encryption will then be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Storage_Director_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded, and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means that the File Daemon (server context) will not check the Director's public certificate (client context). There is no need to specify the <span class="bdirectivename">TLS Certificate File</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the following client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Storage_Director_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash and an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Storage_Director_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
+Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Storage_Director_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid in both server and client contexts. If this directive is specified, the peer certificate will be verified against this list. This can be used to ensure that only CN-approved components may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list is only checked if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">yes</span> (the default). For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> In the case this directive is configured in a client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client doesn't provide a certificate with a Common Name that matches a value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Director_TlsDhFile">
+<dt>TLS DH File = <Filename></dt>
+<dd>Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications. DH key exchange adds an additional level of security because the key used for encryption/decryption by the server and the client is computed on each end and thus is never passed over the network if Diffie-Hellman key exchange is used. Even if DH key exchange is not used, the encryption/decryption key is always passed encrypted. This directive is only valid within a server context. <p> To generate the parameter file, you may use <span class="btool">openssl</span>: </p>
+
+<pre>
+openssl dhparam -out dh4096.pem -5 4096
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Storage_WorkingDirectory">
+<dt>Working Directory = <Directory></dt>
+<dd>
+ This directive specifies a directory in which the Storage daemon may put its status files. This directory should be used only by <span class="bbf"><span class="bbacula">Bacula</span></span>, but may be shared by other <span class="bbacula">Bacula</span> daemons provided the names given to each daemon are unique. This directive is required.
+</dd>
+</div>
+<div id="Storage_Storage_PidDirectory">
+<dt>Pid Directory = <Directory></dt>
+<dd>
+ This directive is mandatory and specifies a directory in which the Storage daemon may put its process Id file. The process Id file is used to shutdown <span class="bbacula">Bacula</span> and to prevent multiple copies of <span class="bbacula">Bacula</span> from running simultaneously. This directive is required. Standard shell expansion of the <span class="bbf">Directory</span> is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded. <p> Typically on Linux systems, you will set this to: <span class="bdirectoryname">/var/run</span>. If you are not installing <span class="bbacula">Bacula</span> in the system directories, you can use the <span class="bbf">Working Directory</span> as defined above. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_CommCompression">
+<dt>CommCompression = <yes|no></dt>
+<dd>
+ <p> If two <span class="bbacula">Bacula</span> components (DIR, FD, SD, bconsole) both have comm line compression enabled, compression will be used on the communication line between them. The default value is yes. </p>
+<p> In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is <span class="bdefaultvalue">enabled</span>. In the case that the compression is not effective, <span class="bbacula">Bacula</span> turns it off on a record by record basis. </p>
+
+<p> If you are backing up data that is already compressed the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, <span class="bbacula">Bacula</span> reports <span class="bvalue">None</span> in the Job report. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_SdAddress">
+<dt>SDAddress = <IP-Address></dt>
+<dd>
+ This directive is optional, and if it is specified, it will cause the Storage daemon server (for Director and File daemon connections) to bind to the specified <span class="bbf">IP-Address</span>, which is either a domain name or an IP address specified as a dotted quadruple. If this directive is not specified, the Storage daemon will bind to any available address (the default).
+</dd>
+</div>
+<div id="Storage_Storage_SdPort">
+<dt>SDPort = <port-number></dt>
+<dd>
+ Specifies the port number on which the Storage daemon listens for Director connections. The default is <span class="bdefaultvalue">9103</span>.
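+<p> For example, a sketch of a Storage resource that binds the daemon to a single interface on the default port (the name and address below are illustrative): </p>
+<pre>
+Storage {
+  Name = example-sd              # illustrative name
+  SDAddress = 192.168.1.10       # bind only to this address
+  SDPort = 9103                  # default port
+  Working Directory = /opt/bacula/working
+  Pid Directory = /var/run
+}
+</pre>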
+</dd>
+</div>
+<div id="Storage_Storage_HeartbeatInterval">
+<dt>Heartbeat Interval = <time-interval></dt>
+<dd>
+ This directive defines an interval of time in seconds. When the Storage daemon is waiting for the operator to mount a tape, each time interval, it will send a heartbeat signal to the File daemon. The default interval is <span class="bdefaultvalue">300s</span>. This feature is particularly useful if you have a router such as 3Com that does not follow Internet standards and times out a valid connection after a short duration despite the fact that keepalive is set. This usually results in a broken pipe error message.
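+<p> For example, to send a heartbeat every 60 seconds while waiting for a mount (a sketch; adjust the interval to suit your network equipment): </p>
+<pre>
+Heartbeat Interval = 60
+</pre>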
+</dd>
+</div>
+<div id="Storage_Storage_ClientConnectWait">
+<dt>Client Connect Wait = <time-interval></dt>
+<dd>
+ This directive defines an interval of time in seconds that the Storage daemon will wait for a Client (the File daemon) to connect. The default is <span class="bdefaultvalue">30 minutes</span>. Be aware that the longer the Storage daemon waits for a Client, the more resources will be tied up.
+</dd>
+</div>
+<div id="Storage_Storage_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of Jobs that may run concurrently. The default is set to <span class="bdefaultvalue">20</span>, but you may set it to a larger number. Each contact from the Director (e.g. status request, job start request) is considered as a Job, so if you want to be able to do a <span class="bcommandname">status</span> request in the console at the same time as a Job is running, you will need to set this value greater than 1. To run simultaneous Jobs, you will need to set a number of other directives in the Director's configuration file. Which ones you set depend on what you want, but you will almost certainly need to set the <span class="bbf">Maximum Concurrent Jobs</span> in the Storage resource in the Director's configuration file and possibly those in the Job and Client resources.
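+<p> For example, a sketch with the matching directives in both configuration files (resource names are illustrative): </p>
+<pre>
+# bacula-sd.conf
+Storage {
+  Name = example-sd
+  Maximum Concurrent Jobs = 20
+  ...
+}
+
+# bacula-dir.conf, Storage resource
+Storage {
+  Name = example-sd
+  Maximum Concurrent Jobs = 20
+  ...
+}
+</pre>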
+</dd>
+</div>
+<div id="Storage_Storage_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK. </p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Storage_Storage_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may not be specified in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (server context) will not check the Director's public certificate (client context). There is no need to specify the <span class="bdirectivename">TLS Certificate</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the below client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="Storage_Storage_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash and an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and are always required in a client context.
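+<p> For example, a sketch of creating the OpenSSL-compatible hash link with the <span class="btool">openssl</span> command (the file names are illustrative): </p>
+<pre>
+# compute the subject name hash of the CA certificate
+openssl x509 -noout -hash -in ca_cert.pem
+# create the link the directory lookup expects,
+# e.g. if the command above printed a1b2c3d4
+ln -s ca_cert.pem a1b2c3d4.0
+</pre>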
+</dd>
+</div>
+<div id="Storage_Storage_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
+Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Storage_Storage_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid in both a server and a client context. If this directive is specified, the peer certificate will be verified against this list. This can be used to ensure that only the CN-approved component may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list will only be checked if <span class="bdirectivename">TLS Verify Peer = yes</span> (the default); if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, the list is not checked. For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> When this directive is configured on the client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client doesn't provide a certificate with a Common Name that matches any value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Storage_TlsDhFile">
+<dt>TLS DH File = <Directory></dt>
+<dd>Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications. DH key exchange adds an additional level of security because the key used for encryption/decryption by the server and the client is computed on each end and thus is never passed over the network if Diffie-Hellman key exchange is used. Even if DH key exchange is not used, the encryption/decryption key is always passed encrypted. This directive is only valid within a server context. <p> To generate the parameter file, you may use <span class="btool">openssl</span>: </p>
+
+<pre>
+openssl dhparam -out dh4096.pem -5 4096
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Device_DeviceType">
+<dt>Device Type = <type-specification></dt>
+<dd>
+ The Device Type specification allows you to explicitly tell <span class="bbacula">Bacula</span> what kind of device you are defining. The <span class="bemph">type-specification</span> may be one of the following: <dl class="bdescription2">
+<dt>File</dt>
+<dd class="bdescription2">Tells <span class="bbacula">Bacula</span> that the device is a file. It may be a file on a fixed medium or on a removable filesystem such as a USB drive. All files must be random access devices. </dd>
+<dt>Tape</dt>
+<dd class="bdescription2">The device is a tape device and thus is sequential access. Tape devices are controlled using ioctl() calls. </dd>
+<dt>Fifo</dt>
+<dd class="bdescription2">The device is a first-in-first-out sequential access read-only or write-only device. </dd>
+<dt>Aligned</dt>
+<dd class="bdescription2">Tells <span class="bbacula">Bacula</span> that the device is a special Aligned Device. Please see the specific Aligned User's guide for more information. </dd>
+<dt>Dedup</dt>
+<dd class="bdescription2">The device is a Deduplication device. The Storage Daemon will use the deduplication engine. Please see the specific Deduplication User's guide for more information. </dd>
+<dt>Cloud</dt>
+<dd class="bdescription2">The device is a Cloud device. The Storage Daemon will use the Cloud to upload/download volumes. Please see the specific Cloud User's guide for more information. </dd>
+</dl>
+<p> The Device Type directive is not required, and if not specified, <span class="bbacula">Bacula</span> will attempt to guess what kind of device has been specified using the Archive Device specification supplied. There are several advantages to explicitly specifying the Device Type. First, on some systems, block and character devices have the same type. Secondly, if you explicitly specify the Device Type, the mount point need not be defined until the device is opened. This is the case with most removable devices such as USB that are mounted by the HAL daemon. If the Device Type is not explicitly specified, then the mount point must exist when the Storage daemon starts. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_ArchiveDevice">
+<dt>Archive Device = <name-string></dt>
+<dd>
+ The specified <span class="bbf">name-string</span> gives the system file name of the storage device managed by this storage daemon. This will usually be the device file name of a removable storage device (tape drive), for example <span class="bfilename">/dev/nst0</span> or <span class="bfilename">/dev/rmt/0mbn</span>. It may also be a directory name if you are archiving to disk storage. In this case, you must supply the full absolute path to the directory. When specifying a tape device, it is preferable that the "non-rewind" variant of the device file name be given. In addition, on systems such as Sun, which have multiple tape access methods, you must be sure to specify to use Berkeley I/O conventions with the device. The <span class="bbf">b</span> in the Solaris (Sun) archive specification <span class="bfilename">/dev/rmt/0mbn</span> is what is needed in this case. <span class="bbacula">Bacula</span> does not support SysV tape drive behavior. <p> As noted above, normally the Archive Device is the name of a tape drive, but you may also specify an absolute path to an existing directory. If the Device is a directory <span class="bbacula">Bacula</span> will write to file storage in the specified directory, and the filename used will be the Volume name as specified in the Catalog. If you want to write into more than one directory (i.e. to spread the load to different disk drives), you will need to define two Device resources, each containing an Archive Device with a different directory. In addition to a tape device name or a directory name, <span class="bbacula">Bacula</span> will accept the name of a FIFO. A FIFO is a special kind of file that connects two programs via kernel memory. If a FIFO device is specified for a backup operation, you must have a program that reads what <span class="bbacula">Bacula</span> writes into the FIFO. 
+When the Storage daemon starts the job, it will wait for <span class="bbf">MaximumOpenWait</span> seconds for the read program to start reading, and then time it out and terminate the job. As a consequence, it is best to start the read program at the beginning of the job perhaps with the <span class="bbf">RunBeforeJob</span> directive. For this kind of device, you never want to specify <span class="bbf">AlwaysOpen</span>, because you want the Storage daemon to open it only when a job starts, so you must explicitly set it to <span class="bvalue">no</span>. Since a FIFO is a one way device, <span class="bbacula">Bacula</span> will not attempt to read a label of a FIFO device, but will simply write on it. To create a FIFO Volume in the catalog, use the <span class="bcommandname">add</span> command rather than the <span class="bcommandname">label</span> command to avoid attempting to write a label. </p>
+
+<pre>
+Device {
+ Name = FifoStorage
+ Media Type = Fifo
+ Device Type = Fifo
+ Archive Device = /tmp/fifo
+ LabelMedia = yes
+ Random Access = no
+ AutomaticMount = no
+ RemovableMedia = no
+ MaximumOpenWait = 60
+ AlwaysOpen = no
+}
+</pre>
+<p> During a restore operation, if the Archive Device is a FIFO, <span class="bbacula">Bacula</span> will attempt to read from the FIFO, so you must have an external program that writes into the FIFO. <span class="bbacula">Bacula</span> will wait <span class="bbf">MaximumOpenWait</span> seconds for the program to begin writing and will then time it out and terminate the job. As noted above, you may use the <span class="bbf">RunBeforeJob</span> to start the writer program at the beginning of the job. </p>
+<p> The Archive Device directive is required. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_DriveIndex">
+<dt>Drive Index = <number></dt>
+<dd>
+ The <span class="bbf">Drive Index</span> that you specify is passed to the <span class="btool">mtx-changer</span> script and is thus passed to the <span class="btool">mtx</span> program. By default, the Drive Index is <span class="bdefaultvalue">zero</span>, so if you have only one drive in your autochanger, everything will work normally. However, if you have multiple drives, you must specify multiple <span class="bbacula">Bacula</span> Device resources (one for each drive). The first Device should have the Drive Index set to 0, and the second Device Resource should contain a Drive Index set to 1, and so on. This will then permit you to use two or more drives in your autochanger. As of <span class="bbacula">Bacula</span> version 1.38.0, using the <span class="bbf">Autochanger</span> resource, <span class="bbacula">Bacula</span> will automatically ensure that only one drive at a time uses the autochanger script, so you no longer need locking scripts as in the past - the default <span class="btool">mtx-changer</span> script works for any number of drives.
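+<p> For example, a sketch of two Device resources for an autochanger with two drives (the resource names and device paths are illustrative): </p>
+<pre>
+Device {
+  Name = Drive-0
+  Drive Index = 0
+  Archive Device = /dev/nst0
+  ...
+}
+Device {
+  Name = Drive-1
+  Drive Index = 1
+  Archive Device = /dev/nst1
+  ...
+}
+</pre>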
+</dd>
+</div>
+<div id="Storage_Device_MediaType">
+<dt>Media Type = <name-string></dt>
+<dd>
+ The specified <span class="bbf">name-string</span> names the type of media supported by this device, for example, "DLT7000". Media type names are arbitrary in that you set them to anything you want, but they must be known to the volume database to keep track of which storage daemons can read which volumes. In general, each different storage type should have a unique Media Type associated with it. The same <span class="bbf">name-string</span> must appear in the appropriate Storage resource definition in the Director's configuration file. <p> Even though the names you assign are arbitrary (i.e. you choose the name you want), you should take care in specifying them because the Media Type is used to determine which storage device <span class="bbacula">Bacula</span> will select during restore. Thus you should probably use the same Media Type specification for all drives where the Media can be freely interchanged. This is not generally an issue if you have a single Storage daemon, but it is with multiple Storage daemons, especially if they have incompatible media. </p>
+<p> For example, if you specify a Media Type of "DDS-4" then during the restore, <span class="bbacula">Bacula</span> will be able to choose any Storage Daemon that handles "DDS-4". If you have an autochanger, you might want to name the Media Type in a way that is unique to the autochanger, unless you wish to possibly use the Volumes in other drives. You should also ensure to have unique Media Type names if the Media is not compatible between drives. This specification is required for all devices. </p>
+<p> In addition, if you are using disk storage, each Device resource will generally have a different mount point or directory. In order for <span class="bbacula">Bacula</span> to select the correct Device resource, each one must have a unique Media Type. </p>
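+<p> For example, a sketch of two disk Device resources with distinct Media Types so that <span class="bbacula">Bacula</span> can select the correct one during restore (names and paths are illustrative): </p>
+<pre>
+Device {
+  Name = FileStorage1
+  Media Type = File1
+  Device Type = File
+  Archive Device = /backup/disk1
+}
+Device {
+  Name = FileStorage2
+  Media Type = File2
+  Device Type = File
+  Archive Device = /backup/disk2
+}
+</pre>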
+
+</dd>
+</div>
+<div id="Storage_Device_RemovableMedia">
+<dt>Removable Media = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, this device supports removable media (for example, tapes or CDs). If <span class="bvalue">no</span>, media cannot be removed (for example, an intermediate backup area on a hard disk). If <span class="bbf">Removable media</span> is enabled on a File device (as opposed to a tape), the Storage daemon will assume that the device may be something like a USB device that can be removed, or simply a removable hard disk. When attempting to open such a device, if the Volume is not found (for File devices, the Volume name is the same as the Filename), then the Storage daemon will search the entire device looking for likely Volume names, and for each one found, it will ask the Director if the Volume can be used. If so, the Storage daemon will use the first such Volume found. Thus it acts somewhat like a tape drive - if the correct Volume is not found, it looks at what actually is found, and if it is an appendable Volume, it will use it. <p> If the removable medium is not automatically mounted (e.g. udev), then you might consider using additional Storage daemon device directives such as <span class="bbf">Requires Mount</span>, <span class="bbf">Mount Point</span>, <span class="bbf">Mount Command</span>, and <span class="bbf">Unmount Command</span>, all of which can be used in conjunction with <span class="bbf">Removable Media</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_RandomAccess">
+<dt>Random Access = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, the archive device is assumed to be a random access medium which supports the <span class="bbf">lseek</span> (or <span class="bbf">lseek64</span> if Largefile is enabled during configuration) facility. This should be set to <span class="bvalue">yes</span> for all file devices such as USB drives and fixed files. It should be set to <span class="bvalue">no</span> for non-random access devices such as tapes and named pipes.
+</dd>
+</div>
+<div id="Storage_Device_AlwaysOpen">
+<dt>Always Open = <yes|no></dt>
+<dd>
+ If <span class="bdefaultvalue">yes</span> (default), <span class="bbacula">Bacula</span> will always keep the device open unless specifically <span class="bbf">unmounted</span> by the Console program. This permits <span class="bbacula">Bacula</span> to ensure that the tape drive is always available, and properly positioned. If you set <span class="bbf">AlwaysOpen</span> to <span class="bvalue">no</span>, <span class="bbf"><span class="bbacula">Bacula</span></span> will only open the drive when necessary, and at the end of the Job if no other Jobs are using the drive, it will be freed. The next time <span class="bbacula">Bacula</span> wants to append to a tape on a drive that was freed, <span class="bbacula">Bacula</span> will rewind the tape and position it to the end. To avoid unnecessary tape positioning and to minimize unnecessary operator intervention, it is highly recommended that <span class="bbf">Always Open = yes</span>. This also ensures that the drive is available when <span class="bbacula">Bacula</span> needs it. <p> If you have <span class="bbf">Always Open = yes</span> (recommended) and you want to use the drive for something else, simply use the <span class="bcommandname">unmount</span> command in the Console program to release the drive. However, don't forget to remount the drive with <span class="bcommandname">mount</span> when the drive is available or the next <span class="bbacula">Bacula</span> job will block. </p>
+<p> For File storage, this directive is ignored. For a FIFO storage device, you must set this to <span class="bvalue">no</span>. </p>
+<p> Please note that if you set this directive to <span class="bvalue">no</span> <span class="bbacula">Bacula</span> will release the tape drive between each job, and thus the next job will rewind the tape and position it to the end of the data. This can be a very time consuming operation. In addition, with this directive set to no, certain multiple drive autochanger operations will fail. We strongly recommend to keep <span class="bbf">Always Open</span> set to <span class="bvalue">yes</span></p>
+
+</dd>
+</div>
+<div id="Storage_Device_Autochanger">
+<dt>Autochanger = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, this device belongs to an automatic tape changer, and you must specify an <span class="bbf">Autochanger</span> resource that points to the <span class="bbf">Device</span> resources. You must also specify a <span class="bbf">Changer Device</span>. If the Autochanger directive is set to <span class="bdefaultvalue">no</span> (default), the volume must be manually changed. You should also have an identical directive in the Storage resource of the Director's configuration file so that when labeling tapes you are prompted for the slot.
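+<p> For example, a sketch of a drive that belongs to an autochanger (the device and changer paths are illustrative): </p>
+<pre>
+Device {
+  Name = Drive-0
+  Autochanger = yes
+  Changer Device = /dev/sg1
+  Archive Device = /dev/nst0
+  ...
+}
+</pre>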
+</dd>
+</div>
+<div id="Storage_Device_MaximumVolumeSize">
+<dt>Maximum Volume Size = <size></dt>
+<dd>
+ No more than <span class="bbf">size</span> bytes will be written onto a given volume on the archive device. This directive is used mainly in testing <span class="bbacula">Bacula</span> to simulate a small Volume. It can also be useful if you wish to limit the size of a File Volume to, say, less than 2GB of data, or in some rare cases of really antiquated tape drives that do not properly indicate when the end of a tape is reached during writing (though I have read about such drives, I have never personally encountered one). Please note, this directive is deprecated (being phased out) in favor of the <span class="bbf">Maximum Volume Bytes</span> directive defined in the Director's configuration file.
+</dd>
+</div>
+<div id="Storage_Device_MaximumFileSize">
+<dt>Maximum File Size = <size></dt>
+<dd>
+ No more than <span class="bbf">size</span> bytes will be written into a given logical file on the volume. Once this size is reached, an end of file mark is written on the volume and subsequent data are written into the next file. Breaking long sequences of data blocks with file marks permits quicker positioning to the start of a given stream of data and can improve recovery from read errors on the volume. The default is <span class="bdefaultvalue">one Gigabyte</span>. This directive creates EOF marks only on tape media. However, regardless of the medium type (tape, disk, USB ...) each time the Maximum File Size is exceeded, a record is put into the catalog database that permits seeking to that position on the medium for restore operations. If you set this to a small value (e.g. 1MB), you will generate lots of database records (JobMedia) and may significantly increase CPU/disk overhead. <p> If you are configuring an LTO-3 or LTO-4 tape, you probably will want to set the <span class="bdirectivename">Maximum File Size</span> to <span class="bvalue">2GB</span> to avoid making the drive stop to write an EOF mark. </p>
+<p> Note, this directive does not limit the size of Volumes that <span class="bbacula">Bacula</span> will create regardless of whether they are tape or disk volumes. It changes only the number of EOF marks on a tape and the number of block positioning records (see below) that are generated. If you want to limit the size of all Volumes for a particular device, use the <span class="bdirectivename">Maximum Volume Size</span> directive (above), or use the <span class="bdirectivename">Maximum Volume Bytes</span> directive in the Director's Pool resource, which does the same thing but on a Pool (Volume) basis. </p>
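+<p> For example, a sketch for an LTO-4 drive (the resource name and device path are illustrative): </p>
+<pre>
+Device {
+  Name = LTO-4
+  Media Type = LTO-4
+  Archive Device = /dev/nst0
+  Maximum File Size = 2GB
+  ...
+}
+</pre>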
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumFileIndex">
+<dt>Maximum File Index = <size></dt>
+<dd>
+ <p> Some data might include information about the actual position of a block in the data stream. This information is stored in the catalog inside the <span class="btable">FileMedia</span> table. By default, one index record will be created every 100MB of data. The index permits quicker positioning to the start of a given block in the <span class="bbacula">Bacula</span> Volume and can improve the Single Item Restore feature. If you set this to a small value (e.g. 1MB), you will generate lots of database records (FileMedia) and may significantly increase CPU/disk overhead. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_RequiresMount">
+<dt>Requires Mount = <yes|no></dt>
+<dd>
+ When this directive is enabled, the Storage daemon will submit a <span class="bbf">Mount Command</span> before attempting to open the device. You must set this directive to <span class="bvalue">yes</span> for removable file systems such as USB devices that are not automatically mounted by the operating system when plugged in or opened by <span class="bbacula">Bacula</span>. It should be set to <span class="bvalue">no</span> for all other devices such as tapes and fixed filesystems. It should also be set to <span class="bvalue">no</span> for any removable device that is automatically mounted by the operating system when opened (e.g. USB devices mounted by udev or hotplug). This directive indicates that the device must be mounted using the <span class="bbf">Mount Command</span>. For devices that need a mount, the following directives must also be defined: <span class="bbf">Mount Point</span>, <span class="bbf">Mount Command</span>, and <span class="bbf">Unmount Command</span>.
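+<p> For example, a sketch of a USB file device that must be mounted before use (the names, paths, and mount options are illustrative): </p>
+<pre>
+Device {
+  Name = USBStorage
+  Media Type = USBFile
+  Device Type = File
+  Archive Device = /mnt/usb
+  Requires Mount = yes
+  Mount Point = /mnt/usb
+  Mount Command = "/bin/mount /dev/sdb1 %m"
+  Unmount Command = "/bin/umount %m"
+}
+</pre>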
+</dd>
+</div>
+<div id="Storage_Device_MountPoint">
+<dt>Mount Point = <directory></dt>
+<dd>
+ Directory where the device can be mounted. This directive is used only for devices that have <span class="bbf">Requires Mount</span> enabled, such as USB file devices.
+</dd>
+</div>
+<div id="Storage_Device_MountCommand">
+<dt>Mount Command = <name-string></dt>
+<dd>
+ This directive specifies the command that must be executed to mount devices such as many USB devices. Before the command is executed, %a is replaced with the Archive Device, and %m with the Mount Point. <p> See the Edit Codes section below for more details of the editing codes that can be used in this directive. </p>
+<p> Most frequently, you will define it as follows: </p>
+
+<pre>
+Mount Command = "/bin/mount -t iso9660 -o ro %a %m"
+</pre>
+<p> For some media, you may need multiple commands. If so, it is recommended that you use a shell script instead of putting them all into the Mount Command. For example, you might define: </p>
+
+<pre>
+Mount Command = "/usr/local/bin/mymount"
+</pre>
+<p> Where that script contains: </p>
+
+<pre>
+#!/bin/sh
+ndasadmin enable -s 1 -o w
+sleep 2
+mount /dev/ndas-00323794-0p1 /backup
+</pre>
+<p> Similar consideration should be given to all other Command parameters. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_UnmountCommand">
+<dt>Unmount Command = <name-string></dt>
+<dd>
+ This directive specifies the command that must be executed to unmount devices such as many USB devices. Before the command is executed, %a is replaced with the Archive Device, and %m with the Mount Point. <p> Most frequently, you will define it as follows: </p>
+
+<pre>
+Unmount Command = "/bin/umount %m"
+</pre>
+<p> See the Edit Codes section below for more details of the editing codes that can be used in this directive. </p>
+<p> If you need to specify multiple commands, create a shell script. </p>
+
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumPartSize">
+<dt>Maximum Part Size = <size></dt>
+<dd>
+ <p> This directive allows one to specify the maximum size for each part. Smaller part sizes will reduce restore costs, but may require a small additional overhead to handle multiple parts. The maximum number of parts permitted in a Cloud Volume is <span class="bvalue">524,288</span>. The maximum size of any given part is approximately 17.5TB. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_AlertCommand">
+<dt>Alert Command = <name-string></dt>
+<dd>
+ The <span class="bbf">name-string</span> specifies an external program to be called at the completion of each Job after the device is released. The purpose of this command is to check for Tape Alerts, which are present when something is wrong with your tape drive (at least for most modern tape drives). The same substitution characters that may be specified in the Changer Command may also be used in this string. For more information, please see the Autochangers chapter of this manual. <p> The directive in the Device resource can call the <span class="btool">tapealert</span> script that is installed in the scripts directory. It is defined as follows: </p>
+<pre>
+Device {
+ Name = ...
+ Archive Device = /dev/nst0
+ Alert Command = "/opt/bacula/scripts/tapealert %l"
+ Control Device = /dev/sg1 # must be SCSI ctl for /dev/nst0
+ ...
+}
+</pre>
+<p> Once the above mentioned two directives (<span class="bdirectivename">Alert Command</span> and <span class="bdirectivename">Control Device</span>) are in place in each of your <span>Device</span> resources, <span class="bbacula">Bacula</span> will check for tape alerts at two points: </p>
+<ul class="bitemize2">
+<li class="bitemize2">After the Drive is used and it becomes idle. </li>
+<li class="bitemize2">After each read or write error on the drive. </li>
+</ul>
+
+<p> At each of the above times, <span class="bbacula">Bacula</span> will call the new <span class="btool">tapealert</span> script, which uses the <span class="btool">tapeinfo</span> program. The <span class="btool">tapeinfo</span> utility is provided by the <span class="btt">sg3-utils</span> package on Debian-based systems and by the <span class="btt">sg3_utils</span> package on RPM-based systems. Then for each tape alert that <span class="bbacula">Bacula</span> finds for that drive, it will emit a Job message that is either <span class="btt">INFO</span>, <span class="btt">WARNING</span>, or <span class="btt">FATAL</span> depending on the designation in the Tape Alert published by the T10 Technical Committee on SCSI Storage Interfaces. For the specification, please see: http://www.t10.org/ftp/t10/document.02/02-142r0.pdf</p>
+</dd>
+</div>
+<div id="Storage_Device_WormCommand">
+<dt>Worm Command = <name-string></dt>
+<dd>
+<p> The <span class="bvalue">name-string</span> specifies an external program to be called when loading a new volume. The purpose of this command is to check if the current tape is a WORM<span class="bfootnote">note<span class="bfootnotetext">Write Once Read Many</span></span> tape. The same substitution characters that may be specified in the <span class="bdirectivename">Changer Command</span> may also be used in this string. </p>
+<p> The directive in the <span>Device</span> resource can call the <span class="btool">isworm</span> script that is installed in the scripts directory. It is defined as follows: </p>
+
+<pre>
+Device {
+ Name = ...
+ Archive Device = /dev/nst0
+ Worm Command = "/opt/bacula/scripts/isworm %l"
+ Control Device = /dev/sg1 # must be SCSI ctl for /dev/nst0
+ ...
+}
+</pre>
+<p><span class="bbacula">Bacula</span> will call the <span class="btool">isworm</span> script, which uses the <span class="btool">tapeinfo</span> and <span class="btool">sdparm</span> programs. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_HardwareEndOfMedium">
+<dt>Hardware End of Medium = <yes|no></dt>
+<dd>
+ If <span class="bvalue">no</span>, the archive device is not required to support the end of medium ioctl request, and the storage daemon will use the forward space file function to find the end of the recorded data. If <span class="bvalue">yes</span>, the archive device must support the <span class="btt">ioctl</span> <span class="btt">MTEOM</span> call, which will position the tape to the end of the recorded data. In addition, your SCSI driver must keep track of the file number on the tape and report it back correctly by the <span class="bbf">MTIOCGET</span> ioctl. Note, some SCSI drivers will correctly forward space to the end of the recorded data, but they do not keep track of the file number. On Linux machines, the SCSI driver has a <span class="bbf">fast-eod</span> option, which, if set, will cause the driver to lose track of the file number. You should ensure that this option is always turned off using the <span class="btool">mt</span> program. <p> The default setting for Hardware End of Medium is <span class="bdefaultvalue">yes</span>. This function is used before appending to a tape to ensure that no previously written data is lost. We recommend that if you have a non-standard or unusual tape drive, you use the <span class="btool">btape</span> program to test whether or not your drive supports this function. All modern (post-1998) tape drives support this feature. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_BackwardSpaceRecord">
+<dt>Backward Space Record = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, the archive device supports the <span class="btt">MTBSR ioctl</span> to backspace records. If <span class="bvalue">no</span>, this call is not used and the device must be rewound and advanced forward to the desired position. Default is <span class="bdefaultvalue">yes</span> for non random-access devices. This function, if enabled, is used at the end of a Volume after writing the end of file and any ANSI/IBM labels to determine whether or not the last block was written correctly. If you turn this function off, the test will not be done. This causes no harm as the re-read process is precautionary rather than required.
+</dd>
+</div>
+<div id="Storage_Device_BackwardSpaceFile">
+<dt>Backward Space File = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, the archive device supports the <span class="bbf">MTBSF</span> ioctl to backspace over an end of file mark and to the start of a file. If <span class="bvalue">no</span>, this call is not used and the device must be rewound and advanced forward to the desired position. Default is <span class="bdefaultvalue">yes</span> for non random-access devices.
+</dd>
+</div>
+<div id="Storage_Device_BsfAtEom">
+<dt>BSF at EOM = <yes|no></dt>
+<dd>
+ If <span class="bdefaultvalue">no</span>, the default, no special action is taken by <span class="bbacula">Bacula</span> when the End of Medium (end of tape) is reached, because the tape will be positioned after the last EOF tape mark, and <span class="bbacula">Bacula</span> can append to the tape as desired. However, on some systems, such as FreeBSD, when <span class="bbacula">Bacula</span> reads the End of Medium (end of tape), the tape will be positioned after the second EOF tape mark (two successive EOF marks indicate End of Medium). If <span class="bbacula">Bacula</span> appends from that point, all the appended data will be lost. The solution for such systems is to specify <span class="bbf">BSF at EOM</span>, which causes <span class="bbacula">Bacula</span> to backspace over the second EOF mark. Determination of whether or not you need this directive is done using the <span class="bcommandname">test</span> command in the <span class="btool">btape</span> program.
+</dd>
+</div>
+<div id="Storage_Device_TwoEof">
+<dt>TWO EOF = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, <span class="bbacula">Bacula</span> will write two end of file marks when terminating a tape - i.e. after the last job or at the end of the medium. If <span class="bdefaultvalue">no</span>, the default, <span class="bbacula">Bacula</span> will only write one end of file to terminate the tape.
+</dd>
+</div>
+<div id="Storage_Device_ForwardSpaceRecord">
+<dt>Forward Space Record = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, the archive device must support the <span class="bbf">MTFSR ioctl</span> to forward space over records. If <span class="bvalue">no</span>, data must be read in order to advance the position on the device. Default is <span class="bdefaultvalue">yes</span> for non random-access devices.
+</dd>
+</div>
+<div id="Storage_Device_ForwardSpaceFile">
+<dt>Forward Space File = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, the archive device must support the <span class="btt">MTFSF ioctl</span> to forward space by file marks. If <span class="bvalue">no</span>, data must be read to advance the position on the device. Default is <span class="bdefaultvalue">yes</span> for non random-access devices.
+</dd>
+</div>
+<div id="Storage_Device_FastForwardSpaceFile">
+<dt>Fast Forward Space File = <yes|no></dt>
+<dd>
+ If <span class="bvalue">no</span>, the archive device is not required to support keeping track of the file number (<span class="bbf">MTIOCGET</span> ioctl) during forward space file. If <span class="bvalue">yes</span>, the archive device must support the <span class="btt">ioctl</span> <span class="btt">MTFSF</span> call, which virtually all drivers support, but in addition, your SCSI driver must keep track of the file number on the tape and report it back correctly by the <span class="bbf">MTIOCGET</span> ioctl. Note, some SCSI drivers will correctly forward space, but they do not keep track of the file number or, more seriously, do not report end of medium. <p> The default setting for Fast Forward Space File is <span class="bdefaultvalue">yes</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_CloseOnPoll">
+<dt>Close on Poll = <yes|no></dt>
+<dd>
+ If <span class="bvalue">yes</span>, <span class="bbacula">Bacula</span> closes the device (equivalent to an unmount except no mount is required) and reopens it at each poll. Normally this is not too useful unless you have the <span class="bbf">Offline on Unmount</span> directive set, in which case the drive will be taken offline, preventing wear on the tape during any future polling. Once the operator inserts a new tape, <span class="bbacula">Bacula</span> will recognize the drive on the next poll and automatically continue with the backup. Please see above for more details.
+</dd>
+</div>
+<div id="Storage_Device_BlockPositioning">
+<dt>Block Positioning = <yes|no></dt>
+<dd>
+ If set to <span class="bvalue">no</span>, this directive tells <span class="bbacula">Bacula</span> not to use block positioning when doing restores. Turning this directive off can cause <span class="bbacula">Bacula</span> to be <span class="bbf">extremely</span> slow when restoring files. You might use this directive if you wrote your tapes with <span class="bbacula">Bacula</span> in variable block mode (the default), but your drive was in fixed block mode. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="Storage_Device_BlockChecksum">
+<dt>Block Checksum = <yes|no></dt>
+<dd>
+ You may turn off the Block Checksum (CRC32) code that <span class="bbacula">Bacula</span> uses when writing blocks to a Volume. Doing so can reduce the Storage daemon CPU usage slightly. It will also permit <span class="bbacula">Bacula</span> to read a Volume that has corrupted data. <p> The default is <span class="bdefaultvalue">yes</span> - i.e. the checksum is computed on write and checked on read. </p>
+<p><span class="bbf">We do not recommend turning this off</span>, particularly on older tape drives or for disk Volumes, where doing so may allow corrupted data to go undetected. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_OfflineOnUnmount">
+<dt>Offline On Unmount = <yes|no></dt>
+<dd>
+ The default for this directive is <span class="bdefaultvalue">no</span>. If <span class="bvalue">yes</span>, the archive device must support the <span class="btt">MTOFFL ioctl</span> to rewind and take the volume offline. In this case, <span class="bbacula">Bacula</span> will issue the offline (eject) request before closing the device during the <span class="bcommandname">unmount</span> command. If <span class="bvalue">no</span>, <span class="bbacula">Bacula</span> will not attempt to offline the device before unmounting it. After an offline is issued, the cassette will be ejected, thus <span class="bbf">requiring operator intervention</span> to continue, and some systems require an explicit load command to be issued (<span class="btool">mt -f /dev/xxx load</span>) before the system will recognize the tape. If you are using an autochanger, some devices require an offline to be issued prior to changing the volume. However, most devices do not require it and may become confused if one is issued. <p> If you are using a Linux 2.6 kernel or another OS such as FreeBSD or Solaris, Offline On Unmount will leave the drive with no tape, and <span class="bbacula">Bacula</span> will not be able to properly open the drive and may fail the job. For more information on this problem, please see the description of Offline On Unmount in the Tape Testing chapter of the <span class="bmanualname"><span class="bbacula">Bacula</span> Enterprise Problems Resolution guide</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumChangerWait">
+<dt>Maximum Changer Wait = <time></dt>
+<dd>
+ This directive specifies the maximum time in seconds for <span class="bbacula">Bacula</span> to wait for an autochanger to change the volume. If this time is exceeded, <span class="bbacula">Bacula</span> will invalidate the Volume slot number stored in the catalog and try again. If no additional changer volumes exist, <span class="bbacula">Bacula</span> will ask the operator to intervene. The default is <span class="bdefaultvalue">5 minutes</span>.
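+<p> For example, for a slow tape library you might raise the timeout (the value shown is illustrative): </p>
+<pre>
+Maximum Changer Wait = 10 minutes
+</pre>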
+</dd>
+</div>
+<div id="Storage_Device_MaximumOpenWait">
+<dt>Maximum Open Wait = <time></dt>
+<dd>
+ This directive specifies the maximum time in seconds that <span class="bbacula">Bacula</span> will wait for a device that is busy. The default is <span class="bdefaultvalue">5 minutes</span>. If the device cannot be obtained, the current Job will be terminated in error. <span class="bbacula">Bacula</span> will re-attempt to open the drive the next time a Job starts that needs the drive.
+</dd>
+</div>
+<div id="Storage_Device_MaximumRewindWait">
+<dt>Maximum Rewind Wait = <time></dt>
+<dd>
+ This directive specifies the maximum time in seconds for <span class="bbacula">Bacula</span> to wait for a rewind before timing out. If this time is exceeded, <span class="bbacula">Bacula</span> will cancel the job. The default is <span class="bdefaultvalue">5 minutes</span>.
+</dd>
+</div>
+<div id="Storage_Device_MinimumBlockSize">
+<dt>Minimum block size = <size-in-bytes></dt>
+<dd>
+ On most modern tape drives, you will not need or want to specify this directive, and if you do so, it will be to make <span class="bbacula">Bacula</span> use fixed block sizes. This statement applies only to non-random access devices (e.g. tape drives). Blocks written by the storage daemon to a non-random archive device will never be smaller than the given <span class="bbf">size-in-bytes</span>. The Storage daemon will attempt to efficiently fill blocks with data received from active sessions but will, if necessary, add padding to a block to achieve the required minimum size. <p> To force the block size to be fixed, as is the case for some non-random access devices (tape drives), set the <span class="bbf">Minimum block size</span> and the <span class="bbf">Maximum block size</span> to the same value (zero included). The default is that both the minimum and maximum block size are <span class="bdefaultvalue">zero</span> and the default block size is <span class="bdefaultvalue">64,512 bytes</span>. </p>
+<p> For example, suppose you want a fixed block size of 100K bytes, then you would specify: </p>
+
+<pre>
+Minimum block size = 100K
+Maximum block size = 100K
+</pre>
+<p> Please note that if you specify a fixed block size as shown above, the tape drive must either be in variable block size mode, or if it is in fixed block size mode, the block size (generally defined by <span class="btool">mt</span>) <span class="bbf">must</span> be identical to the size specified in <span class="bbacula">Bacula</span> - otherwise when you attempt to re-read your Volumes, you will get an error. </p>
+<p> If you want the block size to be variable but with a 64K minimum and <span class="bdefaultvalue">200K</span> maximum (and default as well), you would specify: </p>
+
+<pre>
+Minimum block size = 64K
+Maximum block size = 200K
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumBlockSize">
+<dt>Maximum block size = <size-in-bytes></dt>
+<dd>
+ On most modern tape drives, you will not need to specify this directive. If you do so, it will most likely be to reduce shoe-shine and improve performance on more modern LTO drives. The Storage daemon will always attempt to write blocks of the specified <span class="bbf">size-in-bytes</span> to the archive device. As a consequence, this statement specifies both the default block size and the maximum block size. The size written never exceeds the given <span class="bbf">size-in-bytes</span>. If adding data to a block would cause it to exceed the given maximum size, the block will be written to the archive device, and the new data will begin a new block. <p> If no value is specified or zero is specified, the Storage daemon will use a default block size of <span class="bdefaultvalue">64,512 bytes</span> (126 * 512). </p>
+<p> The maximum <span class="bbf">size-in-bytes</span> possible is 4,000,000. </p>
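+<p> As an illustration, to reduce shoe-shine on a modern LTO drive you might raise the maximum block size (the value shown is an example only; note that Volumes must be read back with a Maximum Block Size at least as large as the one they were written with): </p>
+<pre>
+Maximum block size = 2M   # example value, within the 4,000,000 byte limit
+</pre>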
+
+</dd>
+</div>
+<div id="Storage_Device_ControlDevice">
+<dt>Control Device = <name-string></dt>
+<dd>
+ The control device is the SCSI control device that corresponds to the <span class="bdirectivename">Archive Device</span>. The correspondence can be found with the <span class="btool">lsscsi -g</span> command. <pre>
+/opt/bacula# lsscsi -g
+[1:0:0:0] tape HP Ultrium 4-SCSI H61W /dev/st0 /dev/sg0
+[1:0:0:1] tape HP Ultrium 4-SCSI H61W /dev/st1 /dev/sg1
+[1:0:0:2] mediumx HP MSL G3 Series E.00 - /dev/sg2
+</pre>
+
+</dd>
+</div>
+<div id="Storage_Device_ChangerDevice">
+<dt>Changer Device = <name-string></dt>
+<dd>
+ The specified <span class="bbf">name-string</span> must be the <span class="bbf">generic SCSI</span> device name of the autochanger that corresponds to the normal read/write <span class="bbf">Archive Device</span> specified in the Device resource. This generic SCSI device name should be specified if you have an autochanger or if you have a standard tape drive and want to use the <span class="bbf">Alert Command</span> (see below). For example, on Linux systems, for an Archive Device name of <span class="bfilename">/dev/nst0</span>, you would specify <span class="bfilename">/dev/sg0</span> for the Changer Device name. Depending on your exact configuration, and the number of autochangers or the type of autochanger, what you specify here can vary. This directive is optional. See the Using Autochangers chapter of this manual for more details of using this and the following autochanger directives.
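+<p> For example, for the first tape drive and its changer on a typical Linux system, the pairing might look like this (the device names are illustrative and depend on your hardware): </p>
+<pre>
+Archive Device = /dev/nst0
+Changer Device = /dev/sg0
+</pre>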
+</dd>
+</div>
+<div id="Storage_Device_ChangerCommand">
+<dt>Changer Command = <name-string></dt>
+<dd>
+ The <span class="bbf">name-string</span> specifies an external program to be called that will automatically change volumes as required by <span class="bbf"><span class="bbacula">Bacula</span></span>. Normally, this directive will be specified only in the <span class="bbf">AutoChanger</span> resource, which is then used for all devices. However, you may also specify the different <span class="bbf">Changer Command</span> in each Device resource. Most frequently, you will specify the <span class="bbacula">Bacula</span> supplied <span class="btool">mtx-changer</span> script as follows:
+<pre>
+Changer Command = "/path/mtx-changer %c %o %S %a %d"
+</pre>
+<p> and you will install the <span class="btool">mtx</span> program on your system (found in the <span class="bbf">depkgs</span> release). An example of this command is in the default <span class="bfilename">bacula-sd.conf</span> file. For more details on the substitution characters that may be specified to configure your autochanger please see the Autochangers chapter of this manual. For FreeBSD users, you might want to see one of the several <span class="btool">chio</span> scripts in <span class="bdirectoryname">examples/autochangers</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <num></dt>
+<dd>
+ <p><span class="bdirectivename">Maximum Concurrent Jobs</span> is a directive that permits setting the maximum number of Jobs that can run concurrently on a specified Device. Using this directive, it is possible to have different Jobs using multiple drives, because when the Maximum Concurrent Jobs limit is reached, the Storage Daemon will start new Jobs on any other available compatible drive. This facilitates writing to multiple drives with multiple Jobs that all use the same Pool. </p>
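+<p> As a sketch, a disk device that should accept up to five simultaneous Jobs might be defined as follows (the name, path, and limit are illustrative): </p>
+<pre>
+Device {
+  Name = FileStorage           # illustrative name
+  Media Type = File
+  Archive Device = /opt/bacula/volumes
+  Maximum Concurrent Jobs = 5
+  Random Access = yes
+  Removable Media = no
+}
+</pre>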
+
+</dd>
+</div>
+<div id="Storage_Device_MaximumNetworkBufferSize">
+<dt>Maximum Network Buffer Size = <bytes></dt>
+<dd>
+ where <span class="bemph">bytes</span> specifies the initial network buffer size to use with the File daemon. This size will be adjusted down if it is too large until it is accepted by the OS. Please use care in setting this value since if it is too large, it will be trimmed by 512 bytes until the OS is happy, which may require a large number of system calls. The default value is <span class="bdefaultvalue">32,768</span> bytes. The maximum value is <span class="bvalue">1,000,000</span> bytes. <p> The default size was chosen to be relatively large but not too big in the case that you are transmitting data over the Internet. It is clear that on a high speed local network, you can increase this number and improve performance. For example, some users have found that with a value of 65,536 bytes they get five to ten times the throughput. Larger values for most users don't seem to improve performance. If you are interested in improving your backup speeds, this is definitely a place to experiment. You will probably also want to make the corresponding change in each of your File daemon's configuration files. </p>
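+<p> For example, to experiment with a larger buffer on a fast local network, you might specify (and make the matching change in the File daemon configuration): </p>
+<pre>
+Maximum Network Buffer Size = 65536
+</pre>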
+
+</dd>
+</div>
+<div id="Storage_Device_SpoolDirectory">
+<dt>Spool Directory = <directory></dt>
+<dd>
+ specifies the name of the directory to be used to store the spool files for this device. This directory is also used to store temporary part files when writing to a device that requires a mount (e.g. USB). The default is to use the <span class="bdefaultvalue">working directory</span>.</dd>
+</div>
+<div id="Storage_Device_MaximumSpoolSize">
+<dt>Maximum Spool Size = <bytes></dt>
+<dd>
+ where the bytes specify the maximum spool size for all jobs that are running. The default is <span class="bdefaultvalue">no limit</span>.
+</dd>
+</div>
+<div id="Storage_Device_MaximumJobSpoolSize">
+<dt>Maximum Job Spool Size = <bytes></dt>
+<dd>
+ where the bytes specify the maximum spool size for any one job that is running. The default is <span class="bdefaultvalue">no limit</span>. This directive is implemented only in version 1.37 and later.
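+<p> As an illustration, the three spooling directives above might be combined in a tape Device resource as follows (the directory and sizes are examples only): </p>
+<pre>
+Spool Directory = /opt/bacula/spool
+Maximum Spool Size = 200G       # total for all running jobs
+Maximum Job Spool Size = 50G    # per-job limit
+</pre>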
+</dd>
+</div>
+<div id="Storage_Autochanger_Device">
+<dt>Device = <Device-name1, device-name2, ...></dt>
+<dd>Specifies the names of the Device resource or resources that correspond to the autochanger drive. If you have a multiple drive autochanger, you must specify multiple Device names, each one referring to a separate Device resource that contains a Drive Index specification corresponding to the zero-based drive number. You may specify multiple device names on a single line separated by commas, and/or you may specify multiple Device directives. This directive is required.
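+<p> As a sketch, a two-drive autochanger might be defined as follows (the resource names and changer device are illustrative): </p>
+<pre>
+Autochanger {
+  Name = Autochanger-1          # illustrative name
+  Device = Drive-1, Drive-2     # one Device resource per drive
+  Changer Device = /dev/sg2
+  Changer Command = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
+}
+</pre>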
+</dd>
+</div>
+<div id="Storage_Autochanger_ChangerDevice">
+<dt>Changer Device = <name-string></dt>
+<dd>
+ The specified <span class="bbf"><span class="bbracket"><name-string></span></span> gives the system file name of the autochanger device name. If specified in this resource, the Changer Device name is not needed in the Device resource. If it is specified in the Device resource (see above), it will take precedence over one specified in the Autochanger resource.
+</dd>
+</div>
+<div id="Storage_Autochanger_ChangerCommand">
+<dt>Changer Command = <name-string></dt>
+<dd>
+ The <span class="bbf"><span class="bbracket"><name-string></span></span> specifies an external program to be called that will automatically change volumes as required by <span class="bbf"><span class="bbacula">Bacula</span></span>. Most frequently, you will specify the <span class="bbacula">Bacula</span> supplied <span class="btool">mtx-changer</span> script, as shown for the Device resource's Changer Command above. If it is specified here, it need not be specified in the Device resource. If it is also specified in the Device resource, it will take precedence over the one specified in the Autochanger resource.
+</dd>
+</div>
+<div id="Storage_Cloud_Description">
+<dt>Description = <Text></dt>
+<dd>The description is used for display purposes as is the case with all resources.
+</dd>
+</div>
+<div id="Storage_Cloud_Driver">
+<dt>Driver = <Driver-Name></dt>
+<dd>
+<p> This defines which driver to use. At the moment, the only Cloud driver that is implemented is <span class="bvalue">S3</span>. There is also a <span class="bvalue">File</span> driver, which is used mostly for testing. </p>
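+<p> A minimal S3 Cloud resource sketch, assuming an Amazon S3 endpoint and placeholder credentials (all values shown are illustrative and must be replaced with your own), might look like this: </p>
+<pre>
+Cloud {
+  Name = S3Cloud                     # illustrative name
+  Driver = "S3"
+  Host Name = "s3.amazonaws.com"
+  Bucket Name = "BaculaVolumes"      # must already exist on S3
+  Access Key = "xxxXXXxxxx"          # placeholder credential
+  Secret Key = "xxheeg7iTeegh"       # placeholder credential
+  Protocol = HTTPS
+  Uri Style = VirtualHost
+  Truncate Cache = No
+  Upload = EachPart
+}
+</pre>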
+
+</dd>
+</div>
+<div id="Storage_Cloud_HostName">
+<dt>Host Name = <Name></dt>
+<dd>
+ This directive specifies the hostname to be used in the URL. Each Cloud service provider has a different and unique hostname. The maximum size is <span class="bvalue">255</span> characters and may contain a TCP port specification.
+</dd>
+</div>
+<div id="Storage_Cloud_BucketName">
+<dt>Bucket Name = <Name></dt>
+<dd>
+<p> This directive specifies the bucket name that you wish to use on the Cloud service. This name is normally a unique name that identifies where you want to place your Cloud Volume parts. With Amazon S3, the bucket must be created previously on the Cloud service. With Azure Storage, it is generally referred to as a Container, and it can be created automatically by Bacula when it does not exist. The maximum bucket name size is <span class="bvalue">255</span> characters. </p>
+
+</dd>
+</div>
+<div id="Storage_Cloud_AccessKey">
+<dt>Access Key = <String></dt>
+<dd>
+<p> The access key is your unique user identifier given to you by your cloud service provider. </p>
+
+</dd>
+</div>
+<div id="Storage_Cloud_SecretKey">
+<dt>Secret Key = <String></dt>
+<dd>
+ The secret key is the security key that was given to you by your cloud service provider. It is equivalent to a password.
+</dd>
+</div>
+<div id="Storage_Cloud_Region">
+<dt>Region = <String></dt>
+<dd>The Cloud resource can be configured to use a specific endpoint within a region. This directive is required for AWS-V4 regions. For example: <span class="bdirectivename">Region</span> = <span class="bvalue">"eu-central-1"</span>
+</dd>
+</div>
+<div id="Storage_Cloud_Protocol">
+<dt>Protocol = <HTTP | HTTPS></dt>
+<dd>
+<p> The protocol defines the communications protocol to use with the cloud service provider. The two protocols currently supported are: HTTPS and HTTP. The default is HTTPS. </p>
+
+</dd>
+</div>
+<div id="Storage_Cloud_BlobEndpoint">
+<dt>BlobEndpoint = <String></dt>
+<dd>This resource can be used to specify a custom URL for Azure Blob (see https://docs.microsoft.com/en-us/azure/storage/blobs/storage-custom-domain-name).
+</dd>
+</div>
+<div id="Storage_Cloud_EndpointSuffix">
+<dt>EndpointSuffix = <String></dt>
+<dd>Use this resource to specify a custom URL postfix for Azure. For example: <span class="bdirectivename">EndpointSuffix</span> = <span class="bvalue">"core.chinacloudapi.cn"</span>
+</dd>
+</div>
+<div id="Storage_Cloud_UriStyle">
+<dt>Uri Style = <VirtualHost | Path></dt>
+<dd>
+<p> This directive specifies the URI style to use to communicate with the cloud service provider. The two <span class="bdirectivename">Uri Styles</span> currently supported are: <span class="bvalue">VirtualHost</span> and <span class="bvalue">Path</span>. The default is <span class="bdefaultvalue">VirtualHost</span>. </p>
+
+</dd>
+</div>
+<div id="Storage_Cloud_TruncateCache">
+<dt>Truncate Cache = <truncate-kw></dt>
+<dd>
+<p> This directive specifies when <span class="bbacula">Bacula</span> should automatically remove (truncate) the local cache parts. Local cache parts can only be removed if they have been uploaded to the cloud. The currently implemented values are: </p>
+<dl class="bdescription2">
+<dt>No </dt>
+<dd class="bdescription2">Do not remove cache. With this option you must manually delete the cache parts with the <span class="btool">bconsole</span> <span class="bcommandname">truncate cache</span> command, or do so with an <span class="bbf">Admin</span> Job that runs a <span class="bcommandname">truncate cache</span> command. This is the default. </dd>
+<dt>AfterUpload </dt>
+<dd class="bdescription2">Each part will be removed just after it is uploaded. Note, if this option is specified, all restores will require a download from the Cloud<span class="bfootnote">note<span class="bfootnotetext">Not yet implemented</span></span> . </dd>
+<dt>AtEndOfJob </dt>
+<dd class="bdescription2">With this option, at the end of the Job, every part that has been uploaded to the Cloud will be removed<span class="bfootnote">note<span class="bfootnotetext">Not yet implemented</span></span> (truncated). </dd>
+</dl>
+
+
+</dd>
+</div>
+<div id="Storage_Cloud_Upload">
+<dt>Upload = <upload-kw></dt>
+<dd>
+<p> This directive specifies when local cache parts will be uploaded to the Cloud. The options are: </p>
+
+<dl class="bdescription2">
+<dt>No</dt>
+<dd class="bdescription2">Do not upload cache parts. With this option you must manually upload the cache parts with a <span class="bbacula">Bacula</span><span class="bcommandname"> Console</span> <span class="bcommandname">upload</span> command, or do so with an <span class="bbf">Admin</span> Job that runs an <span class="bcommandname">upload</span> command. This is the default. </dd>
+<dt>EachPart</dt>
+<dd class="bdescription2">With this option, each part will be uploaded when it is complete, i.e. when the next part is created or at the end of the Job. </dd>
+<dt>AtEndOfJob</dt>
+<dd class="bdescription2">With this option all parts that have not been previously uploaded will be uploaded at the end of the Job.<span class="bfootnote">note<span class="bfootnotetext">Not yet implemented</span></span>
+</dd>
+</dl>
+
+</dd>
+</div>
+<div id="Storage_Cloud_MaximumConcurrentUploads">
+<dt>Maximum Concurrent Uploads = <number></dt>
+<dd>The maximum number of simultaneous uploads to the cloud. The default is <span class="bdefaultvalue">3</span>, but you may set it to any value you want.
+</dd>
+</div>
+<div id="Storage_Cloud_MaximumConcurrentDownloads">
+<dt>Maximum Concurrent Downloads = <number></dt>
+<dd>The maximum number of simultaneous downloads from the cloud. The default is <span class="bdefaultvalue">3</span>, but you may set it to any value you want.
+</dd>
+</div>
+<div id="Storage_Cloud_MaximumUploadBandwidth">
+<dt>Maximum Upload Bandwidth = <speed></dt>
+<dd>
+<p> The default is <span class="bdefaultvalue">unlimited</span>, but by using this directive, you may limit the upload bandwidth used globally by all devices referencing this <span>Cloud</span> resource. </p>
+
+</dd>
+</div>
+<div id="Storage_Cloud_MaximumDownloadBandwidth">
+<dt>Maximum Download Bandwidth = <speed></dt>
+<dd>
+<p> The default is <span class="bdefaultvalue">unlimited</span>, but by using this directive, you may limit the download bandwidth used globally by all devices referencing this Cloud resource. </p>
+
+</dd>
+</div>
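+<p> Putting the preceding upload, cache, concurrency, and bandwidth directives together, a sketch of an S3 <span>Cloud</span> resource. Names, credentials, and values are illustrative placeholders, not a definitive configuration: </p>
+<pre>
+ Cloud {
+   Name = S3Cloud                       # hypothetical name
+   Driver = "S3"
+   HostName = "s3.eu-central-1.amazonaws.com"
+   BucketName = "bacula-volumes"        # placeholder
+   AccessKey = "xxx"
+   SecretKey = "yyy"
+   Protocol = HTTPS
+   UriStyle = VirtualHost
+   Region = "eu-central-1"
+   Upload = EachPart                    # upload each part as it completes
+   Truncate Cache = AfterUpload         # free local cache once uploaded
+   Maximum Concurrent Uploads = 3
+   Maximum Upload Bandwidth = 5mb/s
+ }
+</pre>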
+<div id="FileDaemon_Director_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the Director that may contact this Client. This name must be the same as the name specified in the Director resource of the Director's configuration file. Note, the case (upper/lower) of the characters in the name is significant (i.e. S is not the same as s). This directive is required.
+</dd>
+</div>
+<div id="FileDaemon_Director_Password">
+<dt>Password = <password></dt>
+<dd>
+ Specifies the password that must be supplied for a Director to be authorized. This password must be the same as the password specified in the Client resource in the Director's configuration file. This directive is required.
+</dd>
+</div>
+<div id="FileDaemon_Director_Address">
+<dt>Address = <address></dt>
+<dd>
+<p> Where the address is a host name, a fully qualified domain name, or a network address used to connect to the Director. This directive is required when <span class="bdirectivename">ConnectToDirector</span> is enabled. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_DirPort">
+<dt>DirPort = <number></dt>
+<dd>
+ <p> Specify the port to use to connect to the Director. This port must be identical to the <span class="bdirectivename">DIRport</span> specified in the <span class="bdirectivename">Director</span> resource of the Director's configuration file. The default is <span class="bdefaultvalue">9101</span> so this directive is not normally specified. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_ConnectToDirector">
+<dt>ConnectToDirector = <yes|no></dt>
+<dd>
+ <p> When the <span class="bdirectivename">ConnectToDirector</span> directive is set to <span class="btt">true</span>, the Client will contact the Director according to the <span>Schedule</span> rules. The connection initiated by the Client will then be used by the Director to start jobs or issue bconsole commands. If the <span class="bdirectivename">Schedule</span> directive is not set, the connection will be initiated when the file daemon starts. The connection will be reinitialized every <span class="bdirectivename">ReconnectionTime</span>. This directive can be useful if your File Daemon is behind a firewall that permits outgoing connections but not incoming connections. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_Schedule">
+<dt>Schedule = <sched-resource></dt>
+<dd>
+<p> The <span class="bdirectivename">Schedule</span> directive defines what schedule is to be used for Client to connect the Director if the directive <span class="bdirectivename">ConnectToDirector</span> is set to <span class="btt">true</span>. </p>
+<p> This directive is optional, and if left out, the Client will initiate a connection automatically at the start of the daemon. Although you may specify only a single Schedule resource for any <span>Director</span> resource, the <span>Schedule</span> resource may contain multiple <span class="bdirectivename">Connect</span> directives, which allow you to initiate the Client connection at many different times, and each <span class="bdirectivename">Connect</span> directive allows you to set the <span class="bdirectivename">Max Connect Time</span> directive. </p>
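+<p> Combining this with <span class="bdirectivename">ConnectToDirector</span>, a sketch of a client-initiated connection in <span class="bfilename">bacula-fd.conf</span>. The schedule name is hypothetical and its Schedule resource must be defined separately: </p>
+<pre>
+ Director {
+   Name = bacula-dir
+   Password = "password"
+   Address = director.example.com    # required when ConnectToDirector is set
+   ConnectToDirector = yes
+   ReconnectionTime = 40 mins        # default reconnection interval
+   Schedule = NightlyConnect         # optional; omit to connect at daemon start
+ }
+</pre>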
+
+</dd>
+</div>
+<div id="FileDaemon_Director_ReconnectionTime">
+<dt>ReconnectionTime = <time></dt>
+<dd>
+<p> When the Director resource of the File Daemon is configured to connect to the Director with the <span class="bdirectivename">ConnectToDirector</span> directive, the connection initiated by the File Daemon to the Director will be reinitialized at a regular interval specified by the <span class="bdirectivename">ReconnectionTime</span> directive. The default value is <span class="bdefaultvalue">40 mins</span>. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_Monitor">
+<dt>Monitor = <yes|no></dt>
+<dd>
+ If Monitor is set to <span class="bdefaultvalue">no</span> (default), this director will have full access to this Client. If Monitor is set to <span class="bvalue">yes</span>, this director will only be able to fetch the current status of this Client. <p> Please note that if this director is being used by a Monitor, we highly recommend setting this directive to <span class="bvalue">yes</span> to avoid serious security problems. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_DisableCommand">
+<dt>DisableCommand = <cmd></dt>
+<dd>The <span class="bdirectivename">Disable Command</span> directive adds security to your File daemon by disabling certain commands for the current Director. More information about the syntax can be found in the <span class="bdirectivename">DisableCommand</span> directive of the FileDaemon resource below.
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (the server context) will not check the Director's public certificate (the client context). There is no need to specify the <span class="bdirectivename">TLS Certificate File</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the following client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsVerifyPeer">
+<dt>TLS Verify Peer = <yes|no></dt>
+<dd>
+Verify peer certificate. Instructs the server to request and verify the client's X.509 certificate. Any client certificate signed by a known CA will be accepted. Additionally, the client's X.509 certificate Common Name must match the value of the <span class="bdirectivename">Address</span> directive. If the <span class="bdirectivename">TLS Allowed CN</span> configuration directive is used, the client's X.509 certificate Common Name must also correspond to one of the CNs specified in the <span class="bdirectivename">TLS Allowed CN</span> directive. This directive is valid only in a server context, not in a client context. The default is <span class="bdefaultvalue">yes</span>.
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename specifying a PEM encoded TLS CA certificate(s). Multiple certificates are permitted in the file. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, which is the subject name's hash and an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> are required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>, and are always required in a client context.
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsAllowedCn">
+<dt>TLS Allowed CN = <string list></dt>
+<dd>Common name attribute of allowed peer certificates. This directive is valid in both server and client contexts. If it is specified, the peer certificate will be verified against this list. This can be used to ensure that only the CN-approved component may connect. This directive may be specified more than once. <p> When this directive is configured on the server side, the allowed CN list will only be checked if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">yes</span> (the default); with <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>, the list is not checked. For example, in <span class="bfilename">bacula-fd.conf</span>, <span class="bdaemon">Director</span> resource definition: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # if TLS Verify Peer = no, then TLS Allowed CN will not be checked.
+ TLS Verify Peer = yes
+ TLS Allowed CN = director.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> In the case this directive is configured in a client side, the allowed CN list will always be checked. </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ # the Allowed CN will be checked for this client by director
+ # the client's certificate Common Name must match any of
+ # the values of the Allowed CN list
+ TLS Allowed CN = client1.example.com
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/director_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/director_key.pem
+ }
+</pre>
+<p> If the client doesn't provide a certificate with a Common Name that matches any value in the <span class="bdirectivename">TLS Allowed CN</span> list, an error message will be issued: </p>
+
+<pre>
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: bnet.c:273 TLS certificate
+verification failed. Peer certificate did not match a required commonName
+16-Nov 17:30 bacula-dir JobId 0: Fatal error: TLS negotiation failed with FD at
+"192.168.100.2:9102".
+</pre>
+
+</dd>
+</div>
+<div id="FileDaemon_Director_TlsDhFile">
+<dt>TLS DH File = <Filename></dt>
+<dd>Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications. DH key exchange adds an additional level of security because the key used for encryption/decryption by the server and the client is computed on each end and thus is never passed over the network if Diffie-Hellman key exchange is used. Even if DH key exchange is not used, the encryption/decryption key is always passed encrypted. This directive is only valid within a server context. <p> To generate the parameter file, you may use <span class="btool">openssl</span>: </p>
+
+<pre>
+openssl dhparam -out dh4096.pem -5 4096
+</pre>
+
+
+</dd>
+</div>
+<div id="FileDaemon_Director_MaximumBandwidthPerJob">
+<dt>Maximum Bandwidth Per Job = <speed></dt>
+<dd>
+ <p> The speed parameter specifies the maximum allowed bandwidth in bytes per second that a job may use when started from this Director. You may specify the following speed parameter modifiers: kb/s (1,000 bytes per second), k/s (1,024 bytes per second), mb/s (1,000,000 bytes per second), or m/s (1,048,576 bytes per second). </p>
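+<p> For example, to cap each Job started from this Director at roughly 2 megabytes per second (the value is illustrative): </p>
+<pre>
+ Maximum Bandwidth Per Job = 2mb/s
+</pre>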
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_Name">
+<dt>Name = <name></dt>
+<dd>
+ The client name that must be used by the Director when connecting. Generally, it is a good idea to use a name related to the machine so that error messages can be easily identified if you have multiple Clients. This directive is required.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_WorkingDirectory">
+<dt>Working Directory = <Directory></dt>
+<dd>
+ This directive is mandatory and specifies a directory in which the File daemon may put its status files. This directory should be used only by <span class="bbacula">Bacula</span>, but may be shared by other <span class="bbacula">Bacula</span> daemons provided the daemon names on the <span class="bdirectivename">Name</span> definition are unique for each daemon. <p> On Win32 systems, in some circumstances you may need to specify a drive letter in the specified working directory path. Also, please be sure that this directory is writable by the SYSTEM user otherwise restores may fail (the bootstrap file that is transferred to the File daemon from the Director is temporarily put in this directory before being passed to the Storage daemon). </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_PidDirectory">
+<dt>Pid Directory = <Directory></dt>
+<dd>
+ This directive is mandatory and specifies a directory in which the File daemon may put its process Id file. The process Id file is used to shutdown <span class="bbacula">Bacula</span> and to prevent multiple copies of <span class="bbacula">Bacula</span> from running simultaneously. Standard shell expansion of the <span class="bbracket"><Directory></span> is done when the configuration file is read so that values such as <span class="bbf">$HOME</span> will be properly expanded. <p> Typically on Linux systems, you will set this to: <span class="bdirectoryname">/var/run</span>. If you are not installing <span class="bbacula">Bacula</span> in the system directories, you can use the <span class="bdirectivename">Working Directory</span> as defined above. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_DisableCommand">
+<dt>DisableCommand = <cmd></dt>
+<dd>The <span class="bdirectivename">Disable Command</span> adds security to your File daemon by disabling certain commands globally. The commands that can be disabled are:
+<pre>
+backup
+cancel
+setdebug=
+setbandwidth=
+estimate
+fileset
+JobId=
+level =
+restore
+endrestore
+session
+status
+.status
+storage
+verify
+RunBeforeNow
+RunBeforeJob
+RunAfterJob
+Run
+accurate
+</pre>
+<p> One or more of these command keywords can be placed in quotes and separated by spaces on the <span class="bdirectivename">Disable Command</span> directive line. Note: the commands must be written exactly as they appear above. </p>
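+<p> For example, to prevent this File daemon from running any Job scripts, the three Run keywords could be disabled globally (a sketch; choose the keywords that match your own security policy): </p>
+<pre>
+ FileDaemon {
+   Name = client1-fd
+   ...
+   Disable Command = "RunBeforeNow" "RunBeforeJob" "RunAfterJob"
+ }
+</pre>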
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_CommCompression">
+<dt>CommCompression = <yes|no></dt>
+<dd>
+ <p> If both <span class="bbacula">Bacula</span> components of a connection (DIR, FD, SD, bconsole) have comm line compression enabled, the line compression will be used. The default value is <span class="bdefaultvalue">yes</span>. </p>
+<p> In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is <span class="bdefaultvalue">enabled</span>. In the case that the compression is not effective, <span class="bbacula">Bacula</span> turns it off on a record by record basis. </p>
+
+<p> If you are backing up data that is already compressed, the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, <span class="bbacula">Bacula</span> reports <span class="bvalue">None</span> in the Job report. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_FdPort">
+<dt>FDPort = <port-number></dt>
+<dd>
+ This specifies the port number on which the Client listens for Director connections. It must agree with the FDPort specified in the Client resource of the Director's configuration file. The default is <span class="bdefaultvalue">9102</span>.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_FdAddress">
+<dt>FDAddress = <IP-Address></dt>
+<dd>
+ This record is optional, and if it is specified, it will cause the File daemon server (for Director connections) to bind to the specified <span class="bbf">IP-Address</span>, which is either a domain name or an IP address specified as a dotted quadruple. If this record is not specified, the File daemon will bind to any available address (the default).
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_FdAddresses">
+<dt>FDAddresses = <IP-address-specification></dt>
+<dd>
+ Specify the ports and addresses on which the File daemon listens for Director connections. Probably the simplest way to explain is to show an example:
+<pre>
+ FDAddresses = {
+ ip = { addr = 1.2.3.4; port = 1205; }
+ ipv4 = {
+ addr = 1.2.3.4; port = http; }
+ ipv6 = {
+ addr = 1.2.3.4;
+ port = 1205;
+ }
+ ip = {
+ addr = 1.2.3.4
+ port = 1205
+ }
+ ip = { addr = 1.2.3.4 }
+ ip = {
+ addr = 201:220:222::2
+ }
+ ip = {
+ addr = bluedot.thun.net
+ }
+ }
+</pre>
+<p> where ip, ipv4, ipv6, addr, and port are all keywords. Note that the address can be specified as either a dotted quadruple, IPv6 colon notation, or a symbolic name (only in the ip specification). Also, the port can be specified as a number or as the mnemonic value from the <span class="bfilename">/etc/services</span> file. If a port is not specified, the default will be used. If an ip section is specified, the resolution can be made either by IPv4 or IPv6. If ipv4 is specified, then only IPv4 resolutions will be permitted, and likewise with ipv6. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_FdSourceAddress">
+<dt>FDSourceAddress = <IP-Address></dt>
+<dd>
+ This record is optional, and if it is specified, it will cause the File daemon server (for Storage connections) to bind to the specified <span class="bbf">IP-Address</span>, which is either a domain name or an IP address specified as a dotted quadruple. If this record is not specified, the kernel will choose the best address according to the routing table (the default).
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_SdConnectTimeout">
+<dt>SDConnectTimeout = <time-interval></dt>
+<dd>
+ This record defines the interval of time during which the File daemon will try to connect to the Storage daemon. The default is <span class="bdefaultvalue">30 minutes</span>. If no connection is made in the specified time interval, the File daemon cancels the Job.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_HeartbeatInterval">
+<dt>Heartbeat Interval = <time-interval></dt>
+<dd>
+ This record defines an interval of time in seconds. For each heartbeat that the File daemon receives from the Storage daemon, it will forward it to the Director. In addition, if no heartbeat has been received from the Storage daemon (and thus none forwarded), the File daemon will send a heartbeat signal to the Director and to the Storage daemon to keep the channels active. The default interval is <span class="bdefaultvalue">300s</span>. This feature is particularly useful if you have a router such as 3Com that does not follow Internet standards and times out a valid connection after a short duration despite the fact that keepalive is set. This usually results in a broken pipe error message. <p> If you continue getting broken pipe error messages despite using the Heartbeat Interval, and you are using Windows, you should consider upgrading your ethernet driver. This is a known problem with NVidia NForce 3 drivers (4.4.2 17/05/2004), or try the following workaround suggested by Thomas Simmons for Win32 machines: </p>
+<p> Browse to: Start → Control Panel → Network Connections </p>
+<p> Right click the connection for the nvidia adapter and select properties. Under the General tab, click <span>“</span>Configure...<span>”</span>. Under the Advanced tab set <span>“</span>Checksum Offload<span>”</span> to disabled and click OK to save the change. </p>
+<p> Lack of communications, or communications that get interrupted can also be caused by Linux firewalls where you have a rule that throttles connections or traffic. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_MaximumNetworkBufferSize">
+<dt>Maximum Network Buffer Size = <bytes></dt>
+<dd>
+ where <span class="bbracket"><bytes></span> specifies the initial network buffer size to use with the File daemon. This size will be adjusted down if it is too large until it is accepted by the OS. Please use care in setting this value since if it is too large, it will be trimmed by 512 bytes until the OS is happy, which may require a large number of system calls. The default value is <span class="bdefaultvalue">65,536</span> bytes. The maximum value is <span class="bdefaultvalue">1,000,000</span> bytes. <p> Note, on certain Windows machines, there are reports that the transfer rates are very slow and this seems to be related to the default <span class="bdefaultvalue">65,536</span> size. On systems where the transfer rates seem abnormally slow compared to other systems, you might try setting the Maximum Network Buffer Size to 32,768 in both the File daemon and in the Storage daemon. </p>
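+<p> Following the workaround above for abnormally slow transfers, the same reduced buffer size would be set on both sides (a sketch; the value 32,768 is the suggestion from the text, not a tuned figure): </p>
+<pre>
+ # bacula-fd.conf, FileDaemon resource
+ Maximum Network Buffer Size = 32768
+
+ # bacula-sd.conf, Storage resource
+ Maximum Network Buffer Size = 32768
+</pre>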
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), <span class="bdaemon">Director</span> resource configuration has <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means the File Daemon (the server context) will not check the Director's public certificate (the client context). There is no need to specify the <span class="bdirectivename">TLS Certificate File</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. We can have the following client configuration in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename of a file containing one or more PEM encoded TLS CA certificates. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to a TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, that is, the hash of the certificate's subject name with an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_MaximumConcurrentJobs">
+<dt>Maximum Concurrent Jobs = <number></dt>
+<dd>
+ where <span class="bbracket"><number></span> is the maximum number of Jobs that may run concurrently. The default is <span class="bdefaultvalue">20</span>, but you may set it to a larger number. Each contact from the Director (e.g. a status request or a job start request) is counted as a Job, so if you want to be able to do a <span class="bcommandname">status</span> request in the console while a Job is running, you will need to set this value greater than <span class="bvalue">1</span>. Take care to keep this value at least as high as the <span class="bdirectivename">Maximum Concurrent Jobs</span> configured in the <span>Client</span> resource of the Director configuration file; otherwise backup jobs can fail because the Director's connection to the FD is refused when Maximum Concurrent Jobs is exceeded on the FD side.
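+<p> For example, a <span>FileDaemon</span> resource allowing several concurrent contacts might contain the following (the name and path shown are only illustrative): </p>
+<pre>
+ FileDaemon {
+   Name = client1-fd
+   Working Directory = /opt/bacula/working
+   # Allow up to 20 simultaneous contacts from the Director
+   Maximum Concurrent Jobs = 20
+ }
+</pre>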
+</dd>
+</div>
+<div id="FileDaemon_FileDaemon_MaximumBandwidthPerJob">
+<dt>Maximum Bandwidth Per Job = <speed></dt>
+<dd>
+ <p> The speed parameter specifies the maximum allowed bandwidth in bytes per second that a job may use. You may specify the following speed parameter modifiers: <span class="bvalue">kb/s</span> (1,000 bytes per second), <span class="bvalue">k/s</span> (1,024 bytes per second), <span class="bvalue">mb/s</span> (1,000,000 bytes per second), or <span class="bvalue">m/s</span> (1,048,576 bytes per second). </p>
+<p> The use of TLS, TLS PSK, communication line compression, and deduplication can interfere with the effective rate set by this directive. </p>
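+<p> For example, to limit each job on a client to roughly 5 megabytes per second (a sketch; only the bandwidth line is relevant here): </p>
+<pre>
+ FileDaemon {
+   Name = client1-fd
+   # Limit each job to 5,000,000 bytes per second
+   Maximum Bandwidth Per Job = 5mb/s
+ }
+</pre>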
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Name">
+<dt>Name = <name></dt>
+<dd>
+ <p> The Statistics directive <span class="bdirectivename">name</span> is used by the system administrator. This directive is required. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Description">
+<dt>Description = <string></dt>
+<dd>
+ <p> The text field contains a description of the <span>Statistics</span> resource that will be displayed in the graphical user interface. This directive is optional. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Interval">
+<dt>Interval = <time-interval></dt>
+<dd>
+ <p> The <span class="bdirectivename">Interval</span> directive instructs the Statistics collector thread how long to sleep between collection iterations. This directive is optional and the default value is <span class="bdefaultvalue">300</span> seconds. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Type">
+<dt>Type = <CSV|Graphite></dt>
+<dd>
+ <p> The <span class="bdirectivename">Type</span> directive specifies the Statistics backend, which may be one of the following: <span class="bvalue">CSV</span> or <span class="bvalue">Graphite</span>. This directive is required. </p>
+
+<p>CSV is a simple file level backend which saves all required metrics with the following format to the file: <span>“</span><span class="bbracket"><time></span>, <span class="bbracket"><metric></span>, <span class="bbracket"><value></span>\n<span>”</span></p>
+<p> Where <span class="bbracket"><time></span> is a standard Unix time (the number of seconds since 1/01/1970) with the local timezone, as returned by the <span class="btool">time()</span> system call, <span class="bbracket"><metric></span> is a <span class="bbacula">Bacula</span> metric string, and <span class="bbracket"><value></span> is a metric value, which can be in numeric format (<span class="btt">int</span>/<span class="btt">float</span>) or the string <span class="bvalue">"True"</span> or <span class="bvalue">"False"</span> for a boolean variable. The CSV backend requires the <span class="bdirectivename">File</span> = <span class="bvalue"> </span> parameter. </p>
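+<p> For example, a CSV file produced by this backend might contain lines such as the following (the metric names and values here are hypothetical): </p>
+<pre>
+1638290400, bacula.jobs.running, 1
+1638290400, bacula.jobs.queued, 0
+1638290700, bacula.config.statistics, "True"
+</pre>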
+
+<p> Graphite is a network backend which will send all required metrics to a Graphite server. The Graphite backend requires the <span class="bdirectivename">Host</span> = <span class="bvalue"> </span> and <span class="bdirectivename">Port</span> = <span class="bvalue"> </span> directives to be set. </p>
+<p> If the Graphite server is not available, the metrics are automatically spooled in the working directory. When the server can be reached again, spooled metrics are despooled automatically and the spooling function is suspended. </p>
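+<p> For example, a <span>Statistics</span> resource using the Graphite backend might look like this (the resource name and host name are placeholders; 2003 is the usual Graphite plaintext port): </p>
+<pre>
+ Statistics {
+   Name = "GraphiteCollector"
+   Type = Graphite
+   Host = graphite.example.com
+   Port = 2003
+   # Collect every 5 minutes (the default)
+   Interval = 300
+ }
+</pre>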
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Metrics">
+<dt>Metrics = <metricspec></dt>
+<dd>
+ <p> The <span class="bdirectivename">Metrics</span> directive allows metric filtering. <span class="bbracket"><metricspec></span> is a filter in which the <span class="bvalue">*</span> and <span class="bvalue">?</span> characters match metric names in the same way as shell wildcard expansion. You can exclude matching metrics with a <span class="bvalue">!</span> prefix. You can define any number of filters for a single Statistics resource. Metrics filters are applied in the order in which they appear in the configuration. This directive is optional; if it is not used, all available metrics will be saved by this collector backend. </p>
+
+<p> Example: </p>
+<pre>
+# Include all metric starting with "bacula.jobs"
+Metrics = "bacula.jobs.*"
+
+# Exclude any metric starting with "bacula.jobs"
+Metrics = "!bacula.jobs.*"
+</pre>
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Prefix">
+<dt>Prefix = <string>
+File = <filename>
+</dt>
+<dd>
+ <p> The <span class="bdirectivename">Prefix</span> directive allows you to alter the metric names saved by the collector, to distinguish between different installations or daemons. The prefix string is added to the metric name as: <span>“</span><span class="bbracket"><prefix></span>.<span class="bbracket"><metric_name></span><span>”</span>. This directive is optional. </p>
+
+ <p> The <span class="bdirectivename">File</span> directive is used by the CSV collector backend and points to the full path and filename of the file where metrics will be saved. With the CSV type, the <span class="bdirectivename">File</span> directive is required. The collector thread must have permission to write to the selected file, or to create it if it does not exist. If the collector is unable to write to the file or create a new one, the collection terminates and an error message is generated. The file is only open during a dump and is closed otherwise, so statistics file rotation can be performed with a simple mv shell command. </p>
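+<p> For example, a CSV collector whose metrics are prefixed to identify this client (the resource name, path and prefix are illustrative): </p>
+<pre>
+ Statistics {
+   Name = "CsvCollector"
+   Type = CSV
+   File = /opt/bacula/working/client1-metrics.csv
+   # Metrics will be saved as "client1.metric_name"
+   Prefix = "client1"
+ }
+</pre>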
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Host">
+<dt>Host = <hostname></dt>
+<dd>
+ <p> The <span class="bdirectivename">Host</span> directive is used by the Graphite backend and specifies the hostname or IP address of the Graphite server. When the Type directive is set to Graphite, the <span class="bdirectivename">Host</span> directive is required. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Statistics_Port">
+<dt>Port = <number></dt>
+<dd>
+ <p> The <span class="bdirectivename">Port</span> directive is used by the Graphite backend and specifies the TCP port number of the Graphite server. When the Type directive is set to Graphite, the <span class="bdirectivename">Port</span> directive is required. </p>
+
+</dd>
+</div>
+<div id="FileDaemon_Schedule_Name">
+<dt>Name = <name></dt>
+<dd>
+ The name of the schedule being defined. The Name directive is required.
+</dd>
+</div>
+<div id="FileDaemon_Schedule_Connect">
+<dt>Connect = <Connect-overrides> <Date-time-specification></dt>
+<dd>
+ <p> The Connect directive defines when a Client should connect to a Director. You may specify multiple <span class="bdirectivename">Connect</span> directives within a <span>Schedule</span> resource. If you do, they will all be applied (i.e. multiple schedules). If you have two <span class="bdirectivename">Connect</span> directives that start at the same time, two connections will start at the same time (well, within one second of each other). It is not recommended to have multiple connections at the same time. </p>
+<p><span class="bbf">Connect-options</span> are specified as: <span class="bbf">keyword=value</span>, where the keyword is <span class="bvalue">MaxConnectTime</span> and the <span class="bvalue">value</span> is as defined in the respective directive formats for the Job resource. You may specify multiple <span class="bbf">Connect-options</span> on one <span class="bdirectivename">Connect</span> directive by separating them with one or more spaces or with commas. For example: </p>
+
+<dl class="bdescription2">
+<dt>MaxConnectTime=<span class="bbracket"><time-spec></span>
+</dt>
+<dd class="bdescription2">
+ specifies the maximum time during which the connection will be attempted and kept active.
+</dd>
+</dl>
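+<p> For example, a <span>Schedule</span> resource instructing the client to connect to the Director every evening (a sketch; the date-time specification follows the usual Schedule format): </p>
+<pre>
+ Schedule {
+   Name = "NightlyConnect"
+   # Attempt and keep the connection for at most 10 minutes
+   Connect = MaxConnectTime=10min daily at 21:00
+ }
+</pre>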
+
+</dd>
+</div>
+<div id="FileDaemon_Schedule_Enabled">
+<dt>Enabled = <yes|no></dt>
+<dd>
+ This directive allows you to enable or disable the <span>Schedule</span> resource.
+</dd>
+</div>
+<div id="Console_Console_Name">
+<dt>Name = <name></dt>
+<dd>
+ The Console name used to allow a restricted console to change its IP address using the SetIP command. The SetIP command must also be listed in the CommandACL of the Director's configuration file.
+</dd>
+</div>
+<div id="Console_Console_Password">
+<dt>Password = <password></dt>
+<dd>
+ If this password is supplied, then the password specified in the Director resource of your Console configuration file will be ignored. See below for more details.
+</dd>
+</div>
+<div id="Console_Console_Director">
+<dt>Director = <director-resource-name></dt>
+<dd>If this directive is specified, this Console resource will be used by bconsole when that particular Director is selected at bconsole startup. That is, it binds a particular Console resource, with its name and password, to a particular Director.
+</dd>
+</div>
+<div id="Console_Console_CommCompression">
+<dt>CommCompression = <yes|no></dt>
+<dd>
+ <p> If both <span class="bbacula">Bacula</span> components involved in a connection (DIR, FD, SD, bconsole) have communication line compression enabled, compression will be used on that connection. The default value is yes. </p>
+<p> In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is <span class="bdefaultvalue">enabled</span>. In the case that the compression is not effective, <span class="bbacula">Bacula</span> turns it off on a record by record basis. </p>
+
+<p> If you are backing up data that is already compressed, the communication line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, <span class="bbacula">Bacula</span> reports <span class="bvalue">None</span> in the Job report. </p>
+
+</dd>
+</div>
+<div id="Console_Console_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Console_Console_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Console_Console_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Console_Console_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Console_Console_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Console_Console_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> In the File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), the <span class="bdaemon">Director</span> resource is configured with <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means that the File Daemon (server context) will not check the Director's public certificate (client context). There is then no need to specify the <span class="bdirectivename">TLS Certificate</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. The following client configuration can be used in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Console_Console_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename of a file containing one or more PEM encoded TLS CA certificates. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+<div id="Console_Console_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to a TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, that is, the hash of the certificate's subject name with an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+<div id="Console_Console_HeartbeatInterval">
+<dt>Heartbeat Interval = <time-interval></dt>
+<dd>
+ This directive is optional and if specified will cause the Console to set a keepalive interval (heartbeat) in seconds on each of the sockets to communicate with the Director. It is implemented only on systems (Linux, ...) that provide the <span class="bbf">setsockopt</span> <span class="btt">TCP_KEEPIDLE</span> function. The default value is <span class="bdefaultvalue">zero</span>, which means no change is made to the socket.
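+<p> For example, to send a keepalive on the Director sockets every 60 seconds (a sketch; the name and password are placeholders): </p>
+<pre>
+ Console {
+   Name = restricted-console
+   Password = "password"
+   # Set the TCP keepalive interval to 60 seconds
+   Heartbeat Interval = 60
+ }
+</pre>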
+</dd>
+</div>
+<div id="Console_Director_Name">
+<dt>Name = <name></dt>
+<dd>
+ The director name used to select among different Directors, otherwise, this name is not used.
+</dd>
+</div>
+<div id="Console_Director_DirPort">
+<dt>DIRPort = <port-number></dt>
+<dd>
+ Specify the port to use to connect to the Director. This value will most likely already be set to the value you specified on the <span class="bbf"><code>--</code>with-baseport</span> option of the <span class="btool">./configure</span> command. This port must be identical to the <span class="bbf">DIRport</span> specified in the <span class="bbf">Director</span> resource of the Director's configuration file. The default is 9101, so this directive is not normally specified.
+</dd>
+</div>
+<div id="Console_Director_Address">
+<dt>Address = <address></dt>
+<dd>
+ Where the address is a host name, a fully qualified domain name, or a network address used to connect to the Director.
+</dd>
+</div>
+<div id="Console_Director_Password">
+<dt>Password = <password></dt>
+<dd>
+ Where the password is the password needed for the Director to accept the Console connection. This password must be identical to the <span class="bbf">Password</span> specified in the <span class="bbf">Director</span> resource of the Director's configuration file. This directive is required.
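+<p> Putting the preceding directives together, a typical <span>Director</span> resource in a Console configuration file might look like this (the name, address and password are placeholders): </p>
+<pre>
+ Director {
+   Name = bacula-dir
+   DIRPort = 9101
+   Address = director.example.com
+   Password = "password"
+ }
+</pre>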
+</dd>
+</div>
+<div id="Console_Director_HistoryFile">
+<dt>HistoryFile = <filename></dt>
+<dd>
+ Where the filename will be used to store the console command history. By default, the history file is set to <span class="btt">$HOME/.bconsole_history</span>.
+</dd>
+</div>
+<div id="Console_Director_HistoryFileSize">
+<dt>HistoryFileSize = <number-of-lines></dt>
+<dd>
+ Specify the history file size in lines. The default value is 100.
+</dd>
+</div>
+<div id="Console_Director_TlsPskEnable">
+<dt>TLS PSK Enable = <yes|no></dt>
+<dd>
+<p> Enable or Disable automatic TLS PSK support. TLS PSK is enabled by default between all <span class="bbacula">Bacula</span> components. The Pre-Shared Key used between the programs is the <span class="bbacula">Bacula</span> password. If both <span class="bdirectivename">TLS Enable</span> and <span class="bdirectivename">TLS PSK Enable</span> are enabled, the system will use TLS certificates. </p>
+
+</dd>
+</div>
+<div id="Console_Director_TlsEnable">
+<dt>TLS Enable = <yes|no></dt>
+<dd>
+<p> Enable TLS support. If TLS is not enabled, none of the other TLS directives have any effect. In other words, even if you set <span class="bbf">TLS Require = yes</span> you need to have TLS enabled or TLS will not be used. </p>
+
+</dd>
+</div>
+<div id="Console_Director_TlsRequire">
+<dt>TLS Require = <yes|no></dt>
+<dd>
+<p> Require TLS or TLS-PSK encryption. This directive is ignored unless one of <span class="bbf">TLS Enable</span> or <span class="bbf">TLS PSK Enable</span> is set to <span class="bvalue">yes</span>. If TLS is not required while TLS or TLS-PSK are enabled, then the <span class="bbacula">Bacula</span> component will connect with other components either with or without TLS or TLS-PSK.</p>
+<p> If TLS or TLS-PSK is enabled and TLS is required, then the <span class="bbacula">Bacula</span> component will refuse any connection request that does not use TLS. </p>
+
+</dd>
+</div>
+<div id="Console_Director_TlsAuthenticate">
+<dt>TLS Authenticate = <yes|no></dt>
+<dd>
+ When <span class="bdirectivename">TLS Authenticate</span> is enabled, after doing the CRAM-MD5 authentication, <span class="bbacula">Bacula</span> will also do TLS authentication, then TLS encryption will be turned off, and the rest of the communication between the two <span class="bbacula">Bacula</span> components will be done without encryption. If TLS-PSK is used instead of the regular TLS, the encryption is turned off after the TLS-PSK authentication step. <p> If you want to encrypt communications data, use the normal TLS directives but do <span class="bbf">not</span> turn on <span class="bdirectivename">TLS Authenticate</span>. </p>
+
+</dd>
+</div>
+<div id="Console_Director_TlsKey">
+<dt>TLS Key = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS private key. It must correspond to the TLS certificate.
+</dd>
+</div>
+<div id="Console_Director_TlsCertificate">
+<dt>TLS Certificate = <Filename></dt>
+<dd>The full path and filename of a PEM encoded TLS certificate. It will be used as either a client or server certificate, depending on the connection direction. PEM stands for Privacy Enhanced Mail, but in this context refers to how the certificates are encoded. This format is used because PEM files are base64 encoded and hence ASCII text based rather than binary. They may also contain encrypted information. <p> This directive is required in a server context, but it may be omitted in a client context if <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span> in the corresponding server context. </p>
+
+<p> Example: </p>
+<p> In the File Daemon configuration file (<span class="bfilename">bacula-fd.conf</span>), the <span class="bdaemon">Director</span> resource is configured with <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span>: </p>
+<pre>
+ Director {
+ Name = bacula-dir
+ Password = "password"
+ Address = director.example.com
+
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS Verify Peer = no
+ TLS CA Certificate File = /opt/bacula/ssl/certs/root_cert.pem
+ TLS Certificate = /opt/bacula/ssl/certs/client1_cert.pem
+ TLS Key = /opt/bacula/ssl/keys/client1_key.pem
+ }
+</pre>
+<p> Setting <span class="bdirectivename">TLS Verify Peer</span> = <span class="bvalue">no</span> means that the File Daemon (server context) will not check the Director's public certificate (client context). There is then no need to specify the <span class="bdirectivename">TLS Certificate</span> or <span class="bdirectivename">TLS Key</span> directives in the <span class="bresourcename">Client</span> resource of the Director configuration file. The following client configuration can be used in <span class="bfilename">bacula-dir.conf</span>: </p>
+
+<pre>
+ Client {
+ Name = client1-fd
+ Address = client1.example.com
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "password"
+ ...
+ # TLS configuration directives
+ TLS Enable = yes
+ TLS Require = yes
+ TLS CA Certificate File = /opt/bacula/ssl/certs/ca_client1_cert.pem
+ }
+</pre>
+
+</dd>
+</div>
+<div id="Console_Director_TlsCaCertificateFile">
+<dt>TLS CA Certificate File = <Filename></dt>
+<dd>The full path and filename of a file containing one or more PEM encoded TLS CA certificates. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> (see above) is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+<div id="Console_Director_TlsCaCertificateDir">
+<dt>TLS CA Certificate Dir = <Directory></dt>
+<dd>Full path to a TLS CA certificate directory. In the current implementation, certificates must be stored PEM encoded with OpenSSL-compatible hashes, that is, the hash of the certificate's subject name with an extension of <span class="bbf">.0</span>. One of <span class="bdirectivename">TLS CA Certificate File</span> or <span class="bdirectivename">TLS CA Certificate Dir</span> is required in a server context, unless <span class="bdirectivename">TLS Verify Peer</span> is set to <span class="bvalue">no</span>; one of them is always required in a client context.
+</dd>
+</div>
+</body>
+</html>