Eric Bollengier [Tue, 23 Feb 2021 09:01:38 +0000 (10:01 +0100)]
Fix Verify job issue with offset stream and compressed blocks
Description:
------------
When a Plugin generates an OFFSET stream and the FileSet uses
data compression, a Verify job (level=data) did not handle the
offset header in the data stream correctly, resulting in the
following kind of message:
Error: Compressed header version error. Got=0x4f58 want=0x1
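The shape of the fix can be sketched as follows. This is an illustrative sketch only, not Bacula's actual code: the `Record`/`skip_offset_header` names and the assumption that the offset is a leading 64-bit prefix are hypothetical, but they show why parsing the compression header without first stripping an offset prefix yields a bogus "version" such as 0x4f58.

```cpp
#include <cstdint>
#include <cstring>
#include <cassert>

// Hypothetical sketch: when a stream carries offsets, each record starts
// with a 64-bit file offset that must be skipped before the compression
// header is parsed; otherwise the offset bytes are misread as the header.
constexpr int kOffsetSize = sizeof(uint64_t);

struct Record {
    const char *data;   // raw stream bytes
    int len;            // number of valid bytes
};

// Return a view of the record past the offset prefix when one is present.
Record skip_offset_header(Record rec, bool has_offsets) {
    if (has_offsets && rec.len >= kOffsetSize) {
        rec.data += kOffsetSize;
        rec.len  -= kOffsetSize;
    }
    return rec;
}
```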
Alain Spineux [Thu, 20 May 2021 11:08:07 +0000 (13:08 +0200)]
new ProgressCounter object to monitor progress, cancel and time
- The progress object can be passed to functions/objects so that
progress can be monitored from another function.
- A progress object is passed to every function involved in the dedup
vacuum and is used by "dedup usage" to report the vacuum progress.
- It can also be used to remotely "cancel" a loop.
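A minimal sketch of such an object, assuming an atomic counter plus a cancel flag (the class layout and method names here are illustrative, not the actual ProgressCounter implementation):

```cpp
#include <atomic>
#include <cstdint>
#include <cassert>

// Illustrative progress/cancel object in the spirit of the ProgressCounter
// described above. Atomics let another thread read progress or request a
// cancel while the worker loop advances the counter.
class ProgressCounter {
public:
    void set_total(uint64_t t) { total_ = t; }
    void advance(uint64_t n)   { done_ += n; }
    void cancel()              { cancelled_ = true; }   // remote "cancel"
    bool is_cancelled() const  { return cancelled_; }
    int percent() const {
        uint64_t t = total_;
        return t ? static_cast<int>(100 * done_ / t) : 0;
    }
private:
    std::atomic<uint64_t> done_{0};
    std::atomic<uint64_t> total_{0};
    std::atomic<bool>     cancelled_{false};
};
```

A vacuum-style loop would then test `is_cancelled()` on each iteration and bail out early when another function has called `cancel()`.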
Michal Rakowski [Mon, 17 May 2021 21:23:28 +0000 (23:23 +0200)]
Fix #7644 About Storage Groups messages changes
Allow the user to explicitly set the 'ListedOrder' policy for a defined
storage group. This policy is also used when no policy is specified at
all.
Change messages about Storage Groups and storage choice.
Michal Rakowski [Fri, 14 May 2021 11:55:46 +0000 (13:55 +0200)]
Fix #7628 About 'reload' command crashing the director
This issue was not seen when starting the director with the same
broken config file, because we use different error codes for config
parsing depending on whether the director is starting up or reloading.
On startup we use M_ERROR_TERM, which in the end just stops the
director. On reload we use M_ERROR (since we don't want the director to
stop when a broken config is loaded), so the error from parsing the
config was not propagated anywhere: the error message was printed, but
the parser's caller could be unaware of it. A new field was added to
report the error, in addition to printing the error message.
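The mechanism can be sketched like this. All names (`ConfigParser`, `had_error`, `report_error`) are assumptions for illustration, not the director's actual identifiers; the point is that the non-terminating error path now leaves a flag the caller can test.

```cpp
#include <cassert>

// Illustrative sketch of the fix: the parser records that an error was
// reported, so a caller on the non-terminating (reload) path can still
// detect the failure and keep the previous configuration.
struct ConfigParser {
    bool had_error = false;            // new field: set on any parse error

    void report_error(const char * /*msg*/, bool terminate) {
        had_error = true;              // remember the failure either way
        if (terminate) {
            // M_ERROR_TERM path (startup): the daemon stops here.
        }
        // M_ERROR path (reload): message printed, daemon keeps running.
    }
};

// Reload keeps the old config when parsing reported any error.
bool reload_ok(const ConfigParser &p) { return !p.had_error; }
```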
Michal Rakowski [Tue, 4 May 2021 20:38:55 +0000 (22:38 +0200)]
Fix org#2610 About incorrect stopping of running job
Extended the 'is_job_canceled()' helper to check for the Incomplete
state. This way, handling a job that was either stopped or cancelled is
more elegant (e.g. it is now possible to stop a job that is waiting on
a volume mount during backup - this was not possible before).
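A sketch of the extended helper, with hypothetical status values (the enum constants and struct here are illustrative, not Bacula's actual definitions):

```cpp
#include <cassert>

// Illustrative status codes; a stopped job ends up Incomplete.
enum JobStatus { JS_Running, JS_Canceled, JS_ErrorTerminated, JS_Incomplete };

struct JCR { JobStatus status; };

// The helper now treats Incomplete (produced by "stop") the same way as
// a cancelled or errored job, so waiting loops can be interrupted.
bool is_job_canceled(const JCR *jcr) {
    switch (jcr->status) {
    case JS_Canceled:
    case JS_ErrorTerminated:
    case JS_Incomplete:     // the extension described above
        return true;
    default:
        return false;
    }
}
```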
Alain Spineux [Tue, 4 May 2021 09:43:45 +0000 (11:43 +0200)]
Fix org#2500 .bvfs_get_jobids jobid=X must return X in the list part2
- restrict this feature to the ".bvfs_get_jobids" command only
- otherwise any call with a jcr->JobId != 0 would force the function to
use it, which is not what we want.
- notice the following change in the behaviour of the .bvfs_get_jobids
command when a job has been migrated:
| JobId | Name | StartTime | Type | Level | JobFiles
+-------+------------------+---------------------+------+-------+---------
| 1 | MigrationJobSave | 2021-05-04 11:16:26 | M | F | 3131
| 5 | MigrationJobSave | 2021-05-04 11:16:26 | B | F | 3131
*.bvfs_get_jobids jobid=1
5 <== before the change
*.bvfs_get_jobids jobid=1
<EMPTY-LINE> <== after the change
Alain Spineux [Fri, 19 Feb 2021 10:22:20 +0000 (11:22 +0100)]
Fix org#2500 .bvfs_get_jobids jobid=X must return X in the list
- when multiple backups of the same job finish at the exact same time,
.bvfs_get_jobids can "mix" them.
- it is not acceptable to ask Bacula to restore job X and then see
Bacula restore job Y just because the two are "interchangeable".
- this mostly happens in regression tests
- the fix: if the jobid is specified, force the SQL query to use it.
Alain Spineux [Fri, 19 Feb 2021 09:58:22 +0000 (10:58 +0100)]
Check if the char **jobid parameter is NULL before modifying it in bvfs_parse_arg_version()
- it looks like the function can handle a NULL parameter; some lines
above and below do the check, so it looks right to do it everywhere
before modifying jobid
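The pattern is the classic optional out-parameter guard. A minimal sketch, with a hypothetical function name and simplified signature (the real check lives inside bvfs_parse_arg_version()):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch: only write through the out-parameter when the
// caller actually supplied one, matching the NULL checks done a few
// lines earlier in the same function.
void store_jobid(char **jobid, char *value) {
    if (jobid) {        // guard before modifying, as elsewhere
        *jobid = value;
    }
}
```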
Alain Spineux [Fri, 19 Feb 2021 09:53:09 +0000 (10:53 +0100)]
Tweak: replace MAX_NAME_LENGTH with a sizeof(variable) in ua_dotcmds.c
- we use a sizeof() for the exact same purpose a few lines above
- Yes, I didn't touch those "exact same lines above" because I'm not
sure that it will not break something.
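The idea is to bound the copy by the destination buffer's own size rather than a constant that may not match it. A sketch using plain strncpy (Bacula's own bstrncpy always NUL-terminates; the helper below mimics that, and its name is illustrative):

```cpp
#include <cstring>
#include <cstddef>
#include <cassert>

// sizeof(dst) at the call site keeps the bound tied to the actual
// buffer, so the copy stays safe even if the buffer size changes.
void safe_copy(char *dst, size_t dst_size, const char *src) {
    strncpy(dst, src, dst_size - 1);   // copy at most dst_size-1 chars
    dst[dst_size - 1] = '\0';          // guarantee NUL termination
}
```

Calling it as `safe_copy(buf, sizeof(buf), src)` is the pattern the tweak moves toward, instead of passing MAX_NAME_LENGTH and hoping it matches the buffer.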
Michal Rakowski [Thu, 1 Apr 2021 13:41:13 +0000 (15:41 +0200)]
Fix handling of plugin/restore objects in copy/migration jobs
Objects are inserted while reading the spool file from the SD
(during the despool_attributes_from_file() call), so there is no need
to insert them manually at the end of a MAC job.
This patch makes the FD Plugin API more consistent for restore
jobs when a plugin decides to forward file extraction to Bacula
Core (CF_Core) or to skip the extraction (CF_Skip). It also
fixes the handling of the Metadata, ACL and XATTR plugin streams.
This extends the metaplugin protocol with a SKIP command
in metadata stream handling, so the backend can decide later
whether the file should be skipped or extracted.
It updates the metaplugin and Bacula for proper SKIP
command handling, so both xacl and metadata streams will
be properly skipped for the backend.
metaplugin: Add per-file Core restore functionality.
Now the backend can ask Bacula (using the Metaplugin protocol) to
restore the current file using Bacula Core functionality.
To use this functionality the backend has to respond with the "CORE"
command during the File Attributes section.
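The resulting per-file dispatch can be sketched as below. The "CORE" and "SKIP" command strings come from the commit messages above; the enum, function name, and default-to-plugin behaviour are assumptions for illustration, not the metaplugin's actual code.

```cpp
#include <cstring>
#include <cassert>

// What the plugin side decides per file after the backend replies
// during the File Attributes exchange.
enum RestoreAction {
    ACTION_PLUGIN,   // backend restores the file itself (default)
    ACTION_CORE,     // hand extraction to Bacula Core (CF_Core path)
    ACTION_SKIP      // skip this file entirely (CF_Skip path)
};

RestoreAction parse_backend_reply(const char *cmd) {
    if (strcmp(cmd, "CORE") == 0) return ACTION_CORE;
    if (strcmp(cmd, "SKIP") == 0) return ACTION_SKIP;
    return ACTION_PLUGIN;
}
```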