dedup: detect and report dedupengine startup failure
- two cases to handle:
  - the driver cannot be loaded
  - an error occurs at dedupengine creation or startup time
- the error message is stored in the dedup resource
- see regress/tests/dedup-start-fail-test for more
# Conflicts:
# bacula/src/stored/dircmd.c
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri Nov 15 09:12:29 2019 +0100
cloud: Rename TransferRetentionDays to TransferRetention (expressed as a time)
- remove global variables: dedupengine and bucket_manager
- update protocol SD<->DIR
- extend dedup_dev and BufferedMsgSD to support all functions
- create "dummy" functions in dev.h
Warning:
- include a dedupengine use counter
- .status storage=XXX dedupengine displays all the dedupengines on the SD
  instead of the one(s) related to the "device"
dedup: add MaxContainerSize directive to SD (fix #1728)
- this directive must limit the size of the container
- add max_container_size to bacula-sd.conf and to BucketManager
- new method BucketManager::grow0() like grow() but without any lock
- new method BucketManager::add_buckets() that adds a bucket
- new method BucketManager::can_grow_more() that tells whether a bucket can grow
- allocate the max size (511 entries) for the container table
(this is the maximum for standard 64K header)
- use BucketManager::alloc_block() instead of Bucket::alloc_block() to allocate chunks.
  BucketManager::alloc_block() handles the choice of the bucket, extending it or
  creating a new one when required.
- use Bucket::capacity() instead of ba.size
- dedup: link extra container together and keep ba.size < max_container_bidx
. containers of the same max_blocksize are linked
. the head is obtained by BucketManager::choose_bucket()
. the current container is head->current
. they can be iterated up to NULL via head->current->next
. ba.size is kept < max_container_bidx by grow0()
(I started doing a "runtime" limitation in alloc_block())
. brc32 must use the right size (ba.size,max_container_bidx) at startup
. add_buckets() returns a bucket and handles "md_fsmonly"
- extend clone_fsm() to handle extra containers
- extend BucketManager::alloc_block() to keep vacuum_fsm geometry in sync
- add a lock parameter to BucketManager::get_bucket(blockaddr addr, bool lck=true)
- new method DedupEngine::vacuum_mark0() with a lock parameter
- MaxContainerSize can be modified at any time
  . containers that are already bigger than the new value will not grow anymore
  . containers that are smaller than the new value will continue to grow
- add bucketmanager::Check
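As a rough illustration of the growth rules above, here is a hypothetical sketch only; `Bucket`, `can_grow_more()`, and `max_container_size` are simplified stand-ins for the real BucketManager code, not its actual API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a dedup container.
struct Bucket {
   uint64_t capacity;           // current container size in bytes
};

struct BucketManager {
   uint64_t max_container_size; // 0 means unlimited
   std::vector<Bucket> buckets;

   // A container already bigger than the limit may not grow anymore;
   // smaller ones continue to grow (the MaxContainerSize change rules).
   bool can_grow_more(const Bucket &b) const {
      return max_container_size == 0 || b.capacity < max_container_size;
   }

   // Pick the first bucket that may still grow, or create a new one,
   // in the spirit of alloc_block() choosing/extending/creating buckets.
   Bucket *choose_bucket() {
      for (auto &b : buckets) {
         if (can_grow_more(b)) return &b;
      }
      buckets.push_back(Bucket{0});
      return &buckets.back();
   }
};
```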
new breaddir() function to replace readdir_r() + core update
- breaddir() is a thread safe replacement for readdir_r()
- remove any usage of readdir_r() and readdir() in the core
- remove NAMELEN() (it was only used in 3 places in find_one.c)
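A minimal sketch of what a thread-safe readdir_r() replacement can look like; the real breaddir() signature and buffer handling in Bacula differ, this is only an illustrative stand-in that serializes access to the DIR stream and copies the name out under the lock:

```cpp
#include <dirent.h>
#include <pthread.h>
#include <cstring>
#include <cstddef>

static pthread_mutex_t readdir_mutex = PTHREAD_MUTEX_INITIALIZER;

// Returns 0 on success, -1 at end of directory. 'name' must be able to
// hold a full entry name plus the trailing NUL.
int breaddir_sketch(DIR *dirp, char *name, size_t name_len)
{
   pthread_mutex_lock(&readdir_mutex);
   struct dirent *d = readdir(dirp);
   if (d) {
      // Copy the entry name before releasing the lock, so the caller
      // never touches the shared dirent storage of the stream.
      strncpy(name, d->d_name, name_len - 1);
      name[name_len - 1] = '\0';
   }
   pthread_mutex_unlock(&readdir_mutex);
   return d ? 0 : -1;
}
```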
The OutputWriter class can be configured to use a custom
separator (\t or \n for example) and a custom date format, and
will handle basic formatting such as time, string, integer,
list of strings, etc.
Author: Kern Sibbald <kern@sibbald.com>
Date: Sat Oct 26 07:57:10 2013 +0200
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Cloud drivers are meant to be as standalone as possible. Interfaces containing DCR and JCR imply huge dependencies, and the cloud device reference creates circular dependencies.
I'm removing these references and modifying the code depending on the context.
When JCR is used for Jmsg and Qmsg: the message is returned to the caller, which is in charge of calling Jmsg or Qmsg if necessary.
When DCR is used for the job cancellation check: a cancel callback function is passed instead.
The Device dependency is only used to call add_vol_and_part. It has been moved to the cloud_driver interface, and since it's a const function, it's declared static for ease of use.
cloud_transfer_mgr: is_cancelled renamed to is_canceled.
Improve cloud proxy probing and use a temporary name during transfer
- Introduce a volume_lookup function. Even an empty volume will not be reloaded if it has already been synchronized.
- Change the bacula_ctx struct to a class.
  Two constructors, from the transfer class or a POOLMEM ptr.
  The errMsg member is a reference to a POOLMEM ptr, so
  the error message can safely be re-allocated.
- Debug-display the transfer on upload error.
  POOLMEM reference modification.
- Fix an s3_driver crash when libs3 is not present.
- Use the transfer message to report driver errors.
- #define the temporary xfer download file name.
- Copy downloaded parts to the tmp xfer file.
  Rename it to the part name when the transfer is complete.
- Don't truncate uploaded parts.
  Exclude uploaded parts from the cloud proxy list and pass the resulting list to truncate_cloud_volume.
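The temporary-name scheme in the last bullets can be sketched as follows. `finish_download()` and the part names are hypothetical; the point is the atomic rename(2) once the transfer is complete, so readers never see a half-written part:

```cpp
#include <cstdio>
#include <string>

// Download into <part>.tmp, then atomically rename() to the final part
// name once the transfer completed. rename(2) is atomic within a
// filesystem, so the part either does not exist yet or is complete.
bool finish_download(const std::string &tmp_name, const std::string &part_name)
{
   return ::rename(tmp_name.c_str(), part_name.c_str()) == 0;
}
```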
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Nov 24 09:36:41 2016 +0100
Get error message from get_cloud_volume... procedures
dedup: Store data rehydrated size in VolABytes media record
- collect the chunk size in rec.extra_byte in append.c,
  then sum these sizes into block->extra_bytes in record_write.c,
  and finally do the accounting into the volume with a
  dev->updateVolCatExtraBytes(block->extra_bytes) in block.c
- dev->updateVolCatExtraBytes() is a new method that does nothing
  except for the dedup_dev object
Fix a problem in the flow-control termination in copy jobs.
One side was still sending FC traffic while the other was
no longer listening for it.
Use the BNET_CMD_STP_FLOWCTRL message that was first used for the FD-SD
protocol.
PSK: Modify authentication in each daemon to support PSK
- use AuthenticateBase class
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Jun 11 09:00:30 2018 +0200
aligned: Add support for indexed streams to Aligned Volumes
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:27:12 2018 +0100
Add Quota feature with the Pool:MaximumPoolBytes directive
The MaximumPoolBytes and the current PoolBytes values are checked
when creating a new volume or moving a volume from the Scratch Pool.
The parameter is also sent to the Storage daemon, and the Job will check
before each write that the quota is not yet reached. If the value is reached,
the Storage daemon will contact the Director to get the last known value.
If the value is still reached, the current volume is marked as Used, and the
Storage daemon will ask for a new volume.
Purged and Recycled volumes are counted in the MaximumPoolBytes count. To
exclude these volumes, users can for example use the RecyclePool option to
move Purged/Recycled volumes back to the Scratch pool.
The "llist pool" command lists the maxpoolbytes value and the current poolbytes
value.
The current quota can be extended or reduced with the reload/update pool commands.
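The quota check described above reduces to a small predicate; the struct and function names below are illustrative, not the actual Director code:

```cpp
#include <cstdint>

// Illustrative stand-in for the Pool quota state.
struct PoolQuota {
   uint64_t max_pool_bytes;   // MaximumPoolBytes; 0 means no quota
   uint64_t pool_bytes;       // current PoolBytes value
};

// Checked when creating a new volume or taking one from the Scratch
// pool, and before each write on the Storage daemon side.
bool quota_reached(const PoolQuota &p)
{
   return p.max_pool_bytes != 0 && p.pool_bytes >= p.max_pool_bytes;
}
```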
Fix #5041 about the Copy of an Incomplete job failing with the error XXX digest not same FileIndex=YYYY
- An "incomplete" job keeps the same SessTime despite a restart of the SD.
  This makes SessTime not always increasing inside a volume.
- The BSR match function was relying on the fact that SessTime was always
  increasing; this assumption no longer holds since "incomplete" jobs
  were added.
- There is no longer any reason to STOP a BSR on a SessTime condition,
  so we can remove this part of the code.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Jan 29 10:18:55 2019 +0100
Enhance large copy/migration/restore performance with new BSR algorithm
Add M_SECURITY when the connection is bad + fix a bug where invalid probes were sent to the Dir
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Aug 28 17:01:40 2017 +0200
Write the list of selected files to a temporary file if requested by a Plugin
If we find a RestoreObject with the type FT_PLUGIN_FILELIST, we list all
selected files in a temporary file to send it to the FD before the actual
restore as a RestoreObject.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue May 17 11:16:02 2016 +0200
Author: Kern Sibbald <kern@sibbald.com>
Date: Sat Feb 1 23:16:04 2014 +0100
This patch doesn't do dedup, it just skips rehydration.
Be careful of the "3000 OK data" message change.
The end of the rehydration is announced by the
last ACK after the EOD.
- initialize jcr->sd_hash_size using jcr->sh_hash
- added jcr->max_dedup_block_size; blocks above this size must be sent
  in raw format to avoid overflowing the DDE, as the
  DDE cannot handle blocks above this size.
NOTICE raw format is not yet implemented in DEDUP 8.0
- added min_blocksize to DedupEngine
- in filed/job.c, moved creation of jcr->dedup after the capabilities
exchanges to be able to use them in DedupFiledInterface
- in append.c and backup.c, dedup file if size >= min_block_size
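The size-based routing described in these notes can be sketched as follows; `route_block()` and the enum are hypothetical names, and the real append.c/backup.c logic differs:

```cpp
#include <cstdint>

enum class BlockRoute { Raw, Dedup };

// Blocks below min_blocksize are not worth deduplicating; blocks above
// max_dedup_block_size must go in raw format so the DDE does not overflow.
BlockRoute route_block(uint64_t size, uint64_t min_blocksize,
                       uint64_t max_dedup_block_size)
{
   if (size < min_blocksize || size > max_dedup_block_size) {
      return BlockRoute::Raw;
   }
   return BlockRoute::Dedup;
}
```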
cloud: implement async I/O with generic driver scripts
stdin and stdout are used for piped IPC with the script; call_fct is the function that handles the file descriptors.
Specific callbacks are passed for each function.
Cache cloud enum part. Simplified
After yesterday's discussion with @eric, it turns out that we don't need to list all volumes on the cache and/or cloud. This simplifies the API and the s3 driver request. This has been briefly tested on the s3 and file drivers.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Mar 23 10:05:32 2016 +0100
Fix #1674 About Solaris Intel SD checksum errors
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri Aug 14 14:59:27 2015 +0200
Add code to handle interactive restore session in Storage Daemon
- Store the File/Block/Recnum/Volume position at the record level
- Modify the read_record() loop to reposition the volume when
dev::set_interactive_reposition() is called
- Allow repositioning to a specific BSR (which may be created just for the call)
transfer-related functions are attached to the transfer object.
transfer_mgr is used as a facade to encapsulate ref counting and mutex locking.
cloud_dev is adapted to the new interfaces.
dev.h: two lists to separate downloads from uploads.
Merge Norbert's changes back into kern-cloud2.
Took Eric's code review into account.
s3_driver -> callback parameters are passed via bacula_ctx.
Transfer is done synchronously.
Can be made async by defining ASYNC_TRANSFER.
cloud: Fix MA #2453 volume parts are always downloaded.
This is now fixed in download_part_to_cache.
Also, make sure that uploads/downloads are not terminated as FATAL when the cloud is not active (the cache can do the job).
static transfer_mgr and fix leaks.
Create two transfer managers, one for uploads and one for downloads.
Replace transfer_mgr everywhere it's used (the transfer status for downloads and uploads is separated, making it more accurate).
Leaks: we need to loop up to <= last_index(), otherwise we don't free the last inserted item.
Freeing the transfer argument is moved inside the transfer destructor (eventually, we'll keep access to the arg for the whole transfer life cycle; it will be useful later).
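The last_index() off-by-one mentioned above can be illustrated with a toy list; the real alist API differs, but the bound is the point: last_index() is the inclusive index of the last stored item, so looping with '<' leaks it:

```cpp
#include <vector>

// Toy alist-style container: last_index() is the index of the last
// stored item, inclusive.
struct ToyList {
   std::vector<int*> items;
   int last_index() const { return (int)items.size() - 1; }
   int *get(int i) const { return items[i]; }
};

int free_all(ToyList &l)
{
   int freed = 0;
   for (int i = 0; i <= l.last_index(); i++) {  // '<=' so the last item is freed too
      delete l.get(i);
      freed++;
   }
   l.items.clear();
   return freed;
}
```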
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Oct 20 17:21:01 2016 +0200
Call the driver truncate command from the cloud device
Author: Kern Sibbald <kern@sibbald.com>
Date: Sun Oct 21 20:20:00 2012 +0200
Refactor block.c + do JobMedia only on ameta device
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Aug 13 10:06:32 2014 +0200
Implement FileMedia feature and FO_OFFSETS for plugin
Author: Kern Sibbald <kern@sibbald.com>
Date: Sun Dec 7 15:21:45 2014 +0100
#776 Volume created in the catalog but not on disk and #464 SD can't read an existing volume
Author: Kern Sibbald <kern@sibbald.com>
Date: Sun May 18 15:34:37 2014 +0200
This commit is the result of the squash of the following main commits:
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Mar 20 14:53:33 2019 +0100
Add FD Calls Director feature to ease client behind NAT access
- Implement scheduler in Client side to activate the feature
- Refactor Run resource between the director and the file daemon
- Allow using the Director resource to connect to a Director for the proxy command and FDCallsDir
- Add FDCallsDir state in show client
- Add code to handle permanent connections bsock_meeting
New Directive
FileDaemon:Director:FDCallsDir=<yes/no>
FileDaemon:Director:FDCallsDirReconnect=<time>
FileDaemon:Director:FDCallsDirSchedule=<resource>
snapshot: redirect Bacula's Dmsg() output to logfile instead of STDOUT
- It is not possible to use bsnapshot with a debug level > 100
- Dmsg() output emitted by Bacula's libraries goes to STDOUT when
  "trace" is not set up
- This sets the messages.c FILE *trace_fd to the one used by
  bsnapshot
- All the logs now go into bsnapshot.log
$ cat bsnapshot.conf
mountopts=noexec,nodev <--- default
mountopts=/dev/vgroot/root:nouuid <---- for one device
mountopts=/dev/vgroot/home:nouuid,uid=backup <---- for one device
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed May 16 17:44:07 2018 +0200
snapshot: Add debug for system commands
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Dec 13 15:19:37 2017 +0100
snapshot: Add support for LVM 1.02/2.02
The Attr lvs/vgs attribute is now at the end of the output, so
we must treat \n as an attribute separator.
The Time parameter was renamed CTime in lvs output.
fix #3297 lvm_snapshot_size in % ignored, default 10% always used
- caused by poor integer arithmetic
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Feb 5 09:42:01 2015 +0100
Add Snapshot Engine to bacula core
Rename jcr->VSS to jcr->Snapshot and use the same variable on Linux/Windows
Pass snapshot retention to the FD
Add SnapshotRetention to client and job resource
Add documentation about snapshot backend
regress: Add snapshot script for lvm backend
Use retry in lvm unmount procedure
Read config file from sysconfigdir
Load bsnapshot.cfg by default and configure bsnapshot using exepath
Add config file to specify lvm snapshot parameters
Control LVM free space for snapshot
Implement OT_PINT64 in the output writer; it should display the job speed in the api2-mode status command
regress: Add test to test snapshot bconsole commands
regress: Add test for zfs snapshot
implement zfs driver (linux zfs driver cannot mount snapshots)
regress: Add regress script for snapshots
Automatically include subvolumes with onefs=no
Handle ff->link in strip snapshot path
Scan for subvolumes not listed in mtab
Snapshots can be created/deleted from the catalog
Implement path translation between snapshot
Add JobId to Snapshot table
Add code to execute snapshot script on the FD side
Add more Director snapshots command
Implement CatReq interface for Snapshots
Implement DB snapshot functions
The delete snapshot command now connects to the FD, and the
FD calls the snapshot script with the delete argument
Allow to specify env variable in run_command_full_output() and open_bpipe()
Modify cmd_parser to not look for plugin name with use_name option
Add ARG_LIST to generate nice arg1=arg2 output from db_list function
Add tag DT_SNAPSHOT
Add JobId in the list snapshot output
Add list snapshot command
Add JobId to snapshot table
Fix MA#5588 bsmtp: add support for Sender and custom header
Add 2 new options (from the new man page)
-S sender
Set the Sender: header.
-H header_line
Add a custom header line to the mail header.
You can use this option up to 10 times.
The header_line must be a single line
that follows RFC2822 syntax, for example "Keywords: bacula".
Don't forget to protect the space just after the ":"
Eric Bollengier [Tue, 12 May 2020 08:24:24 +0000 (10:24 +0200)]
BEE Backport bacula/src/tools/Makefile.in
This commit is the result of the squash of the following main commits:
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Mar 16 15:09:33 2020 +0100
regress: Add first cut of joblist unittest
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Mar 20 14:53:33 2019 +0100
Add FD Calls Director feature to ease client behind NAT access
- Implement scheduler in Client side to activate the feature
- Refactor Run resource between the director and the file daemon
- Allow using the Director resource to connect to a Director for the proxy command and FDCallsDir
- Add FDCallsDir state in show client
- Add code to handle permanent connections bsock_meeting
New Directive
FileDaemon:Director:FDCallsDir=<yes/no>
FileDaemon:Director:FDCallsDirReconnect=<time>
FileDaemon:Director:FDCallsDirSchedule=<resource>
Director:Client:FDCallsDir=<yes/no>
New Resource
FileDaemon:Schedule
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 19 19:00:49 2019 +0100
Add tool to re-create io errors during regress
Author: Kern Sibbald <kern@sibbald.com>
Date: Sun Jan 27 11:11:21 2019 +0100
Disable building cats_test while compile is broken
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Oct 16 09:53:32 2018 +0200
Implement FileMedia feature and FO_OFFSETS for plugin
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Nov 10 15:44:01 2015 +0100
Add optional parameters to db_list_job_records()
It is now possible to specify
- ClientId
- JobErrors
- JobStatus
- Job Type
- order ASC/DESC
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Aug 26 13:57:34 2013 +0200
- Some upgrade procedures are not run when appropriate
- Fix some version issues in comments and messages
Author: Kern Sibbald <kern@sibbald.com>
Date: Sat Aug 11 21:20:22 2018 +0200
Permit catalog to contain negative FileIndexes
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
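The idea of preserving the original Job name across Copy/Migration can be sketched with a toy catalog. This is a simplified model under assumed names: the table layout and the `RealJob` column are illustrative, not Bacula's actual schema.

```python
import sqlite3

# Toy Job table; the RealJob column (original job name) is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Job (JobId INTEGER PRIMARY KEY, Job TEXT, RealJob TEXT)")

# Original backup job.
con.execute("INSERT INTO Job (Job, RealJob) VALUES ('Backup.2018-02-27_16.06.06', NULL)")

def copy_job(con, jobid, new_name):
    """Copy a job record, keeping the original unique Job name so that
    plugins can still use it as a stable key (e.g. a snapshot name)."""
    job, realjob = con.execute(
        "SELECT Job, RealJob FROM Job WHERE JobId=?", (jobid,)).fetchone()
    # If the source was itself a copy/migration, propagate its original name.
    orig = realjob if realjob else job
    cur = con.execute("INSERT INTO Job (Job, RealJob) VALUES (?, ?)",
                      (new_name, orig))
    return cur.lastrowid

copy1 = copy_job(con, 1, 'Copy.2018-02-28_10.00.00')
copy2 = copy_job(con, copy1, 'Copy.2018-03-01_10.00.00')

# Even a copy of a copy resolves to the same original name.
orig = con.execute("SELECT RealJob FROM Job WHERE JobId=?", (copy2,)).fetchone()[0]
print(orig)
```

The key design point matches the commit text: the original name travels with each new job record, so a copy of a migrated job still resolves to the first backup's name.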
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:13:54 2018 +0100
Add Pool::MaxPoolBytes to the catalog. Catalog version is now 1020
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Jan 5 20:27:33 2017 +0100
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 24 16:46:46 2015 +0100
Upgrade catalog version to 1017
Author: Eric Bollengier <eric@baculasystems.com>
Date: Sun Nov 3 17:32:19 2013 +0100
Update SQL scripts for Events table. Catalog format 1022
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Mar 5 11:37:45 2019 +0100
Modify SQL upgrade scripts for format 1021
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Dec 6 10:19:13 2018 +0100
Add procedure to upgrade from catalog version 16 to 1020
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri Feb 16 17:47:54 2018 +0100
Add catalog Job column PriorJob
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:13:54 2018 +0100
Add Pool::MaxPoolBytes to the catalog. Catalog version is now 1020
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Mar 23 10:10:58 2017 +0100
Adapt update_bacula_tables scripts for catalog version 15
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Jun 11 17:15:34 2015 +0200
Fix error in update_postgresql_tables.in caused by bad search and replace
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Mar 9 08:23:34 2015 +0100
Add --stop1016 option to the update_postgresql_tables.in script
The option is used by the Debian upgrade script; each step is handled
automatically by the package.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 24 16:46:46 2015 +0100
Upgrade catalog version to 1017
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Nov 25 13:53:45 2013 +0100
Enhance postgresql upgrade script to avoid writing WALs while loading the File table
Need to use:
archive_mode=off
wal_level=minimal
Author: Eric Bollengier <eric@baculasystems.com>
Date: Sun Nov 3 17:32:19 2013 +0100
Fix #8269 about VolWrites overflow for PostgreSQL backend
- Change VolWrites (not really used other than for external statistics) from 32 to 64 bits
Error: sql_update.c:411 sql_update.c:411 update UPDATE Media SET VolJobs=1,VolFiles=1,VolBlocks=13409927,VolBytes=865101275136,VolMounts=317,VolErrors=0,VolWrites=2160216210,MaxVolBytes=1609538994176,VolStatus='Append',Slot=3,InChanger=1,VolReadTime=3467657,VolWriteTime=681603738721,VolParts=0,LabelType=0,StorageId=3,PoolId=14,VolRetention=86400,VolUseDuration=1,MaxVolJobs=0,MaxVolFiles=0,Enabled=1,LocationId=0,ScratchPoolId=0,RecyclePoolId=0,RecycleCount=314,Recycle=1,ActionOnPurge=0 WHERE VolumeName='005045L5' failed:
ERROR: integer out of range
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 24 16:46:46 2015 +0100
Upgrade catalog version to 1017
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu May 1 13:07:17 2014 +0200
Fix #10747 about MySQL database creation/upgrade script
Author: Eric Bollengier <eric@baculasystems.com>
Date: Sun Nov 3 17:32:19 2013 +0100
Add missing FileTable column on JobHisto table
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Oct 3 11:37:07 2013 +0200
Update Catalog to 1016 and add Snapshot tables
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Aug 28 13:49:30 2013 +0200
More work on migration scripts
- Fix hardlink-test
- Ensure that the MySQL migration script stops on error
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri May 3 13:24:07 2013 +0200
Cleanup update_xxxx_tables.in
The script now updates the schema step by step from version 12 to
version 1015.
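The step-by-step upgrade described above can be modeled as a chain of per-version migrations, each applied only when the stored catalog version matches. This is a simplified sketch: the version numbers are taken from the commit text, but the migration statements are placeholders (the real steps are SQL in the update_xxxx_tables scripts).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Version (VersionId INTEGER)")
con.execute("INSERT INTO Version VALUES (12)")

# One entry per upgrade step: from-version -> (to-version, SQL).
# The SQL here is a placeholder; real steps alter catalog tables.
STEPS = {
    12: (13,   "CREATE TABLE Dummy13 (x INTEGER)"),
    13: (14,   "CREATE TABLE Dummy14 (x INTEGER)"),
    14: (1015, "CREATE TABLE Snapshot15 (x INTEGER)"),
}

def upgrade(con):
    """Apply migrations one version at a time until no step matches."""
    while True:
        ver = con.execute("SELECT VersionId FROM Version").fetchone()[0]
        if ver not in STEPS:
            return ver
        new_ver, sql = STEPS[ver]
        con.execute(sql)
        con.execute("UPDATE Version SET VersionId=?", (new_ver,))

final = upgrade(con)
print(final)
```

Chaining small steps this way is what lets packaging scripts (e.g. the Debian --stop1016 usage mentioned below) pause at an intermediate version and resume later.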
Eric Bollengier [Mon, 11 May 2020 15:16:26 +0000 (17:16 +0200)]
BEE Backport bacula/src/cats/sql_update.c
This commit is the result of the squash of the following main commits:
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:15:04 2018 +0100
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri Mar 6 16:25:13 2020 +0100
Move MaxPoolBytes checks in bee files
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Jun 18 15:12:54 2019 +0200
Fix #5173 about incorrect behavior of the "delete pool" command
Author: Eric Bollengier <eric@baculasystems.com>
Date: Fri Dec 14 10:16:15 2018 +0100
Add update command to set the Job comment field
Author: Kern Sibbald <kern@sibbald.com>
Date: Sat Aug 11 21:20:22 2018 +0200
Permit catalog to contain negative FileIndexes
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Mon Feb 19 09:34:05 2018 +0100
Fix possible issue with db_get_pool_numvols() after a SQL error
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:15:04 2018 +0100
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Fix GCC 8 compiler warnings with memset() on objects
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu Jan 5 09:14:00 2017 +0100
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Implement FileMedia feature and FO_OFFSETS for plugin
- Allow plugins to use the FO_OFFSETS option
- Add SQL support in cats
- Add bvfs commands
- Add FileMedia handling in the Storage Daemon
- Add use FO_OFFSETS option in test plugin
- Add new sql scripts to configure.in
- Update catalog version to 1018
Eric Bollengier [Mon, 11 May 2020 15:13:13 +0000 (17:13 +0200)]
BEE Backport bacula/src/cats/sql_cmds.c
This commit is the result of the squash of the following main commits:
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Nov 12 15:29:50 2019 +0100
Do not select volumes with MaximumVolumeRetention=0 in the "prune expired" command
Author: Eric Bollengier <eric@baculasystems.com>
Date: Thu May 16 10:40:49 2019 +0200
Fix #5053 about BVFS commands not compatible with ACLs wildcards
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 27 16:06:06 2018 +0100
Allow copying/migrating jobs with plugins
Until now, the unique Job name was used by plugins as a key
in some cases (for example as a snapshot name).
The plugin can use this value to compute the backup set,
for example: snapshot-diff previous current.
When using Copy/Migration, the unique Job name associated with a job
is updated with a new name, leading to some confusion on the plugin
side.
With this patch, we associate the original Job name with the
Copy/Migration job record in the catalog, and we can send this value
to the Client. If we copy a job that was migrated, the original job
name is kept from one job record to another.
Author: Eric Bollengier <eric@baculasystems.com>
Date: Tue Feb 13 14:15:04 2018 +0100
Add MaxPoolBytes support for SQL functions
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Jan 10 11:31:59 2018 +0100
Fix #3451 about a segfault with the "prune expired" command with the SQLite backend
Author: Eric Bollengier <eric@baculasystems.com>
Date: Wed Oct 11 09:51:46 2017 +0200
Fix SQLite queries that select volumes for pruning
Strangely, this query does not work as expected on SQLite 3.11.0, but it is fine on 2.8.17.