git.ipfire.org Git - thirdparty/bacula.git/commitdiff
Add Kubernetes plugin
author    Radosław Korzeniewski <radoslaw@korzeniewski.net>
          Mon, 28 Feb 2022 17:51:22 +0000 (18:51 +0100)
committer Eric Bollengier <eric@baculasystems.com>
          Thu, 24 Mar 2022 08:03:24 +0000 (09:03 +0100)
Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Jun 18 13:12:40 2021 +0200

    kubernetes: Remove debug only code which caused a regression.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Jun 17 09:22:02 2021 +0200

    kubernetes: Change default logging dir. Fixes #7829

    This change applies to the Kubernetes and OpenShift plugins.
    The new, standard logging directory is now:
    "/opt/bacula/working/kubernetes"

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Jun 16 16:26:31 2021 +0200

    kubernetes: Extend debugging for multiple bacula-backup pods.

    This patch updates the bacula-backup pod to version 15Jun21.
    To use the new extended feature you have to update your Docker
    image too. Do not forget to update any required config files, i.e.
    the `image=...` plugin parameter or the `image: ...` field in your
    bacula-backup pod custom YAML files.
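A minimal sketch of where that image reference lives in a custom pod manifest, assuming a typical bacula-backup pod layout (the pod name and image tag below are illustrative, not taken from this repository):

```yaml
# Hypothetical bacula-backup pod manifest fragment; when upgrading,
# the image tag here must match the new bacula-backup image version
# and the `image=...` value in the plugin parameters.
apiVersion: v1
kind: Pod
metadata:
  name: bacula-backup
spec:
  containers:
    - name: bacula-backup
      image: bacula-backup:15Jun21   # illustrative tag
```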

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Jun 15 14:15:42 2021 +0200

    kubernetes: Correct error messages for sslserver.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri May 28 11:00:58 2021 +0200

    kubernetes: Fix #7699 PVC listing does not show some items

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu May 27 14:30:16 2021 +0200

    kubernetes: Fix #7698 About minor display typo

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu May 20 14:58:15 2021 +0200

    k8s: Fix #7663 About platform detection issue

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu May 6 11:42:44 2021 +0200

    Fix k8s compilation

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed May 5 14:30:37 2021 +0200

    kubernetes: Fix 'NoneType' object is not iterable. Fixes #0007574.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Apr 21 09:02:26 2021 +0200

    kubernetes: Update strict requirements.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Mon Mar 8 13:12:16 2021 +0100

    kubernetes: Fix SyntaxWarning: is not with a literal.

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Jan 28 14:34:04 2021 +0100

    k8s: Fix compilation issue with Cython

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Jan 26 10:58:12 2021 +0100

    kubernetes: Fix parameter verification. Closes #0007285.

    Fix storageclass and persistentvolume parameter verification, which
    prevented backup when no storage class or persistent volume was
    defined cluster-wide. The issue arose only in corner-case
    configurations.

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Jan 21 13:50:15 2021 +0100

    Use PATH to locate pyinstaller

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Jan 12 15:45:46 2021 +0100

    kubernetes: Bump and synchronize plugin versions.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Jan 12 15:03:48 2021 +0100

    kubernetes: Add resource filtering with labels.

    Added a new plugin parameter, `labels=...`, which allows filtering
    which resources you want to back up based on resource labels.
    You can learn about Kubernetes label selectors at:
    https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
    The new parameter supports all selection patterns supported by
    Kubernetes, including equality-based and set-based selectors.
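A hedged example of how such a label filter might appear in a Bacula FileSet plugin line (the selector values `app=nginx` and `tier in (frontend,backend)` are illustrative only):

```
# Illustrative FileSet fragment using the labels=... parameter with
# an equality-based and a set-based Kubernetes selector.
FileSet {
  Name = "k8s-labeled"
  Include {
    Plugin = "kubernetes: labels=app=nginx,tier in (frontend,backend)"
  }
}
```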

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Jan 8 11:23:21 2021 +0100

    kubernetes: Correct requirements.txt file.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Jan 5 17:39:15 2021 +0100

    kubernetes: Disable unwanted urllib ssl warnings.

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Dec 17 14:46:25 2020 +0100

    k8s: Sync with upstream to fix compilation issue

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Dec 17 13:24:15 2020 +0100

    k8s: Add util.boolparam class

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Tue Dec 15 18:12:42 2020 +0100

    Add specific target for OpenShift

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Dec 16 15:22:01 2020 +0100

    k8s: Add run.after.snapshot annotation.

    This patch adds support for the following annotations:
    - bacula/run.after.snapshot.container.command
    - bacula/run.after.snapshot.failjobonerror
    which are executed just after snapshot creation for a Pod covering
    all requested volumes.
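A sketch of how these annotations might be attached to a Pod; the `fsfreeze` command and pod name are illustrative examples, not values from this repository:

```yaml
# Hypothetical Pod metadata using the run.after.snapshot annotations
# listed above; "*/..." targets all containers in the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  annotations:
    bacula/run.after.snapshot.container.command: "*/fsfreeze -u /data"
    bacula/run.after.snapshot.failjobonerror: "no"
```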

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Mon Dec 14 15:16:31 2020 +0100

    k8s: Fix for empty annotations. Closes #0007178.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 11 12:07:36 2020 +0100

    k8s: Ensure proper handling of the bacula.volumes annotation.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 11 11:04:17 2020 +0100

    k8s: Add code comments.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 10 15:37:46 2020 +0100

    k8s: Fix PVC Data restore regression.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 10 13:24:15 2020 +0100

    k8s: Update joblog messages and sort imports.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 10 12:24:41 2020 +0100

    k8s: Fix invalid PVC Data tar name when using snapshots.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Dec 9 17:07:20 2020 +0100

    k8s: Add PVC Data backup with CSI Snapshot support.

    This adds PVC Data backup using the Bacula Backup Pod functionality
    and the Kubernetes Storage CSI Snapshot features. The new PVC Data
    backup uses Bacula Pod Annotations to configure all required backup
    parameters, including pre- and post- remote command execution.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Dec 9 17:00:34 2020 +0100

    k8s: Update descriptions and debugging messages.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Wed Dec 9 16:59:25 2020 +0100

    k8s: Update version and building data.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Dec 8 13:02:04 2020 +0100

    k8s: Add function descriptions for Bacula annotations.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Dec 8 13:01:19 2020 +0100

    k8s: Add PVC Clone configuration.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Dec 8 13:00:08 2020 +0100

    k8s: Correct Bacula Pod label to baculabackup.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Tue Dec 8 12:59:22 2020 +0100

    k8s: Add capacity to PVC data list.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 4 16:35:37 2020 +0100

    k8s: Update estimate job to show PVC Data size.

    Add PVC object size as found in PVC description:
    ...
    spec:
      resources:
        requests:
          storage: XX
    ...
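For reference, a minimal PVC manifest showing where that size is read from (the name and size are illustrative):

```yaml
# The estimate job reports the size found at
# spec.resources.requests.storage, as shown here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi   # size shown by the estimate job
```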

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 4 13:17:46 2020 +0100

    k8s: Add pod annotations for Estimation Job.

    Bacula Pod Annotations is the functionality where you can select and
    configure PVC Data backup parameters using the Kubernetes Pod
    metadata annotations feature.

    It will support the following Pod annotations:
    * bacula/backup.mode: [snapshot|standard] - the cloud way to select
       which pvcdata you want to back up, extending the current plugin
       parameter pvcdata[=<pvcname>]; the default is snapshot if not
       defined.
    * bacula/backup.volumes: <pvcname[,pvcname2...]> - required; multiple
       pvc names as a comma-separated list.
    * bacula/run.before.job.container.command:
       [<container>/<command>|*/<command>] - a star (*) means all
       containers.
    * bacula/run.before.job.failjobonerror: [yes|no] - default is yes.
    * bacula/run.after.job.container.command:
       [<container>/<command>|*/<command>] - a star (*) means all
       containers.
    * bacula/run.after.job.failjobonerror: [yes|no] - default is no.
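Taken together, the annotations above could be sketched on a Pod like this (pod name, PVC names, and commands are illustrative values only):

```yaml
# Hypothetical Pod using the Bacula backup annotations described above.
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
  annotations:
    bacula/backup.mode: "snapshot"
    bacula/backup.volumes: "db-data,db-logs"           # required
    bacula/run.before.job.container.command: "*/sync"  # all containers
    bacula/run.before.job.failjobonerror: "yes"
    bacula/run.after.job.failjobonerror: "no"
```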

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 4 13:09:53 2020 +0100

    k8s: Update bacula pod annotations for containers.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Fri Dec 4 11:10:57 2020 +0100

    k8s: Refactor send_file_info in default io.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 3 16:21:33 2020 +0100

    k8s: Update PVCData support functions.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 3 16:10:04 2020 +0100

    k8s: Add Pod remote execution support functions.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Thu Dec 3 16:09:00 2020 +0100

    k8s: Add Bacula Pod annotations support functions.

Author: Radosław Korzeniewski <radoslaw@korzeniewski.net>
Date:   Mon Nov 23 14:37:18 2020 +0100

    k8s: Update plugin to use a common PTCOMM class.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Mon Jul 20 14:46:26 2020 +0000

    kubernetes: Add StorageClass support, PVC Data backup #5740

    Add StorageClass support
    Add StorageClass restore
    Add StorageClass backup/restore
    All pod data backup
    Update backend log format to express location

    Fix timestamp conversion for str
      This patch fixes a situation where the Python kubernetes
      module returns 'str' as the creation_timestamp parameter
      instead of the usual 'datetime.datetime' object.

    All pod data backup
    Fix more timestamp conversion for str
    Correct k8sbackend imports

    Update pvcdata list including not mounted

    Fix problem when conn service is unable to start
      It fixes the backup loop when the first connection service
      cannot start because of an error, e.g. nonexistent cert
      files. In that case no other PVC Data would be backed up.

    All PVC Data backup support. Fixes #5740.

    Bump plugin version.

    Various Backports
      Fix query_job.py location
      Backport .gitignore
      Backport baculak8s/__init__.py
      Fix location of pvcdata.py and storageclass.py
      backport plugins/kubernetes_plugin.py
      backport plugins/plugin_factory.py
      Remove plugins/swift_plugin.py

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Jul 23 16:15:25 2020 +0000

    kubernetes: More work on StorageClass support

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Jun 19 16:34:29 2020 +0200

    kubernetes: Fix for #6304: 'NoneType' is not iterable

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Mar 5 16:10:37 2020 +0100

    kubernetes: Fix #6052 Add resources and limits to bacula pod

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Feb 25 14:22:26 2020 +0100

    kubernetes: Add queryparameter interface

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Mon Jul 20 14:46:26 2020 +0000

    kubernetes: Add StorageClass implementation

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Jun 19 16:34:29 2020 +0200

    k8s: Fix for #6304: 'NoneType' is not iterable

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Feb 20 13:10:46 2020 +0100

    kubernetes: Fix invalid import.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Feb 20 13:06:21 2020 +0100

    kubernetes: Allow wildcards in the 'persistentvolume' option. Closes #0005983.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Feb 6 16:21:21 2020 +0100

    kubernetes: Optimize imports and fix invalid k8s reference.

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Fri Jan 10 10:04:35 2020 +0100

    update Copyright year

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Nov 29 15:21:22 2019 +0100

    k8s: Fix #5745 for improper attributes backup

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Mon Nov 25 10:18:35 2019 +0100

    k8s: Backport io/log.py

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sun Nov 24 00:37:41 2019 +0100

    k8s: allow a minimum timeout=1

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sun Nov 24 00:25:19 2019 +0100

    k8s: Possible fix for #5713

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sun Nov 24 00:24:19 2019 +0100

    k8s: Fix socket timeout

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sun Nov 24 00:23:37 2019 +0100

    k8s: Add force delete pod when timeout waiting for connection

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 23:17:12 2019 +0100

    k8s: Add timeout=NN plugin parameter overriding DEFAULTTIMEOUT

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 23:16:23 2019 +0100

    k8s: Fix display of finished pvcdata backup on successful backups only

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 23:15:34 2019 +0100

    k8s: Fix comm error during pvcdata backup

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 17:45:15 2019 +0100

    k8s: Add more debug messages to check #5711

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 09:57:51 2019 +0100

    k8s: Possible Fix #5706 Add a special handling for sa-tokens during restore

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Sat Nov 23 09:54:32 2019 +0100

    k8s: Fix log location fallback

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Nov 22 09:35:42 2019 +0100

    k8s: Display processing information in backup job only

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Nov 21 16:35:47 2019 +0100

    k8s: Possible fix for #5679

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Nov 21 01:05:44 2019 +0100

    k8s: Add more verbose job messages

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 20 23:14:25 2019 +0100

    k8s: Update pvconfig option behavior as agreed in #5670

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 20 22:58:06 2019 +0100

    k8s: Add K8S version info display in joblog

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 20 22:57:37 2019 +0100

    k8s: Add backend framework info packet handling

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Nov 21 10:08:36 2019 +0100

    k8s: Fix compilation with respbody

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 20 00:44:23 2019 +0100

    k8s: Fix #5652 Correct pvcdata archive size to -1

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 20 00:06:23 2019 +0100

    k8s: Possible fix for details REST error response.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Nov 19 23:05:38 2019 +0100

    k8s: Fix exception handling at listing.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Nov 19 19:19:35 2019 +0100

    k8s: Possible fix for #5682 about config plugin parameter

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Nov 19 11:03:13 2019 +0100

    k8s: Possible fix for res listing 403 errors. Fix for #5671

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Fri Nov 15 15:22:54 2019 +0100

    k8s: use sys.exit(0) instead of quit()

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Mon Nov 18 12:44:02 2019 +0100

    k8s: Possible fix for #5655 - manage unhandled exception

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Nov 14 11:41:41 2019 +0100

    k8s: Remove hardcoded gcr.io reference for the bacula-backup image

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 13 10:30:16 2019 +0100

    k8s: Add imagepullpolicy=... plugin parameter

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Nov 13 09:57:06 2019 +0100

    k8s: Fix #5611 for fdport and pluginport handling

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Tue Nov 12 17:04:57 2019 +0100

    k8s: Fix error messages sent to the protocol channel

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Wed Nov 6 09:51:20 2019 +0100

    k8s: Fix build on rhel8

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Nov 5 12:35:27 2019 +0100

    k8s: Fix #5569 about 'incluster' option handling

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Tue Oct 29 15:29:25 2019 +0100

    k8s: Generate Dockerfile with ./configure

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 18 16:36:37 2019 +0200

    k8s: Correct pvcdata resource restore ready status.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 18 15:18:09 2019 +0200

    k8s: update restore error messages.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 18 10:25:53 2019 +0200

    k8s: Fix backup pod preparation and token update.

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 15 14:05:17 2019 +0200

    k8s: Add 'baculaimage' plugin parameter

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Mon Oct 14 16:05:31 2019 +0200

    k8s: Fix regression on restore job

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 16:35:55 2019 +0200

    k8s: Working pvcdata restore

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 16:06:17 2019 +0200

    k8s: Upgrade k8s backup handler to 0.5-inteos

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 14:20:32 2019 +0200

    k8s: Fix some restore problems

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 14:19:34 2019 +0200

    k8s: New baculatar vs. bacula-backup pod

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Fri Oct 11 17:25:15 2019 +0200

    k8s: tweak app name for pyinstaller

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 13:35:24 2019 +0200

    k8s: Fix restore job and add more error handling

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 11:59:45 2019 +0200

    k8s: Fix regression on handle_connection and add more error handling

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Fri Oct 11 10:22:14 2019 +0200

    k8s: Remove unused parameter

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Oct 10 16:07:07 2019 +0200

    k8s: Restore pvcdata work in progress

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Oct 10 13:56:46 2019 +0200

    k8s: Working on pvcdata restore job and error handling

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Thu Oct 10 10:08:39 2019 +0200

    k8s: Add sync on backup pod remove and make pod name a variable

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Fri Oct 11 10:36:33 2019 +0200

    k8s: Move Makefile to Makefile.in

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Fri Oct 11 10:36:16 2019 +0200

    k8s: Work on packaging

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Oct 10 18:50:34 2019 +0200

    k8s: Compile and bundle kubernetes plugin

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Oct 10 15:15:54 2019 +0200

    k8s: Fix Cython integration patch

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Oct 9 17:50:44 2019 +0200

    k8s: Refactor backup job for sharing pvcdata backup code with restore job

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Oct 9 15:22:43 2019 +0200

    k8s: Correct restore loop for local pvcdata restore

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Wed Oct 9 14:55:11 2019 +0200

    k8s: Correct pvcdata error handling and filtering

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Oct 10 15:04:20 2019 +0200

    k8s: Add command to install pip requirements in Makefile

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Thu Oct 10 15:01:08 2019 +0200

    k8s: Prepare the code for Cython

Author: Eric Bollengier <eric@baculasystems.com>
Date:   Wed Oct 9 17:22:36 2019 +0200

    k8s: update requirements file

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 8 20:45:27 2019 +0200

    k8s: First pvcdata backup successful

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 8 18:13:46 2019 +0200

    k8s: Redesign Connection Server error handling

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 8 18:13:03 2019 +0200

    k8s: Small fixes to auth token manipulation

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 8 17:31:45 2019 +0200

    k8s: Add Pod management utilities

Author: Radosław Korzeniewski <radekk@inteos.pl>
Date:   Tue Oct 8 18:58:51 2019 +0200

    k8s: Add kubernetes backend program

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Oct 8 12:59:47 2019 +0200

        Move pvcdata connection server params to main backup job file.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Oct 8 12:58:54 2019 +0200

        ConnectionServer with ssl defined.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Oct 8 10:08:27 2019 +0200

        PVC data backup and restore work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Oct 7 16:21:56 2019 +0200

        Add a reference bacula-backup.yaml.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Oct 7 16:21:31 2019 +0200

        Add custom yaml handling for pod bacula-backup.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Oct 7 11:34:24 2019 +0200

        Update pvcdata backup work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Oct 4 17:31:06 2019 +0200

        Handle new parameter pvcdata as namespaced parameter.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Oct 4 17:30:04 2019 +0200

        Update plugin params handling.

        This updates plugin parameter handling in such a way that:
        param=value
        - produces a single param in a block {param: value}
        param=value1 param=value2
        - produces a list of params as {param: [value1, value2]}
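The parameter-folding rule described above can be sketched in Python (a hypothetical helper for illustration, not the plugin's actual code):

```python
def parse_plugin_params(pairs):
    """Fold (key, value) pairs into a dict: a single occurrence stays a
    plain value; repeated occurrences are promoted to a list of values."""
    params = {}
    for key, value in pairs:
        if key not in params:
            params[key] = value                   # first occurrence
        elif isinstance(params[key], list):
            params[key].append(value)             # already a list: extend
        else:
            params[key] = [params[key], value]    # second occurrence: promote
    return params
```

For example, `parse_plugin_params([("param", "value1"), ("param", "value2")])` yields `{"param": ["value1", "value2"]}`.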

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Oct 4 10:38:18 2019 +0200

        Fix missing check for warning or abort when there is no data to back up but it was explicitly required.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Oct 4 10:36:44 2019 +0200

        Add pv and ns as aliases for namespace and persistentvolume.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Oct 4 09:57:59 2019 +0200

        Refactor backup/estimate loop.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Oct 3 20:59:32 2019 +0200

        Pod volume backup work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Oct 3 20:58:50 2019 +0200

        Update regression script and fix issues.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Oct 3 20:58:02 2019 +0200

        Pod data backup work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Oct 3 11:35:44 2019 +0200

        Work in progress on pod backup.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 30 16:21:51 2019 +0200

        Update k8s objects attributes remapping.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 30 15:47:19 2019 +0200

        Remove some debug code.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 30 15:46:30 2019 +0200

        Fix output format transformation issue.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 30 15:46:07 2019 +0200

        Update regression scripts.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Sep 27 12:59:40 2019 +0200

        Regression tests work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Sep 27 12:58:06 2019 +0200

        Implement PLUGIN_WORKING logging fallback to /tmp.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Sep 27 11:00:51 2019 +0200

        Fix estimation loop after refactoring.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Sep 26 17:23:54 2019 +0200

        Some cosmetic changes during code revision.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 25 16:50:41 2019 +0200

        Regression tests work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 25 10:55:48 2019 +0200

        Update plugin restore parameters.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 24 17:03:47 2019 +0200

        Update K8SObjects to support PVCDATA.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 24 17:01:02 2019 +0200

        Update plugin connect on different auth methods.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 24 15:25:25 2019 +0200

        Add BearerToken authorization to plugin.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 24 13:56:09 2019 +0200

        Add Rancher ProjectId restore to Namespace.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 24 13:55:42 2019 +0200

        Fix StatefulSet restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 23 15:53:05 2019 +0200

        Fix some restore problems with internal secrets mounts.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 23 11:04:55 2019 +0200

        Change resources backup to explicit order.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Sep 20 15:44:13 2019 +0200

        Testing pvc restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Sep 19 18:52:55 2019 +0200

        Update restore functionality.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Sep 19 14:14:48 2019 +0200

        Prepare code for restore resources.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Sep 19 12:01:32 2019 +0200

        Fix regression in resource filter handling.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Thu Sep 19 10:37:12 2019 +0200

        Restore optimizations and work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 16 16:06:31 2019 +0200

        Update resources backup handling.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 16 13:11:55 2019 +0200

        Optimizing and working on restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 11 15:43:32 2019 +0200

        Restore work in progress.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 11 15:42:02 2019 +0200

        Update FileInfo representation.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 11 12:37:33 2019 +0200

        Working on function optimization and restore features.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 11 12:36:20 2019 +0200

        Add K8SobjType methoddict for fast object to method mapping.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Sep 11 12:35:37 2019 +0200

        Fix "Can't instantiate abstract class FileSystemPlugin".

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 10 18:28:50 2019 +0200

        Fix invalid list api call for secrets and serviceaccounts.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 10 14:30:01 2019 +0200

        Correct error handling.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 10 13:37:28 2019 +0200

        Introducing K8SObjType class for single place configuration.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 10 13:05:37 2019 +0200

        Optimize procedures and work on restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Sep 10 13:03:24 2019 +0200

        Add requirements.txt for plugin dependencies.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Sep 9 16:30:02 2019 +0200

        Implement k8s_fileobject_type() for restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Aug 30 17:37:47 2019 +0200

        Optimize procedures and work on restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Aug 30 17:18:42 2019 +0200

        Optimize backup procedure and work on restore.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Aug 30 17:09:32 2019 +0200

        Correct error handling.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Jun 3 15:51:07 2019 +0200

        First restore to files job done.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Jun 3 14:40:42 2019 +0200

        K8S Plugin development.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri May 31 09:11:53 2019 +0200

        Add more error and exception handling to the code.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Apr 30 16:48:36 2019 +0200

        Try to handle urllib3 exceptions and warnings - unsuccessful yet.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Apr 30 16:27:34 2019 +0200

        Revert exception handling.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Apr 30 10:11:35 2019 +0200

        K8S Plugin development.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 14:41:27 2019 +0200

        K8S Plugin development.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 14:19:03 2019 +0200

        Optimize main estimate/backup loop.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 13:56:10 2019 +0200

        K8S Plugin development.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 10:29:36 2019 +0200

        Implement namespace=... and pvconfig plugin parameters.

        When a namespace is defined, no PersistentVolumes are backed up
        unless the pvconfig parameter is set.
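A hedged FileSet sketch of that rule; the namespace value is illustrative, and the single-word `pvconfig` usage is an assumption based on the plugin accepting single-word parameters:

```
# With namespace=... set, PersistentVolumes are skipped unless pvconfig
# is also given; both parameter values below are examples only.
FileSet {
  Name = "k8s-namespace"
  Include {
    Plugin = "kubernetes: namespace=myapp pvconfig"
  }
}
```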

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 10:21:11 2019 +0200

        Move k8s to backend subroutines.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 10:19:22 2019 +0200

        Allow single word "debug" parameter.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 10:18:30 2019 +0200

        Automatically handle array parameters and single-word parameters.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Mon Apr 29 10:18:01 2019 +0200

        Add k8s backend subroutines.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Apr 26 18:19:44 2019 +0200

        Always clean debug hooks before real testing. :)

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Apr 26 18:15:08 2019 +0200

        Listing and estimate for first k8s objects working.

        sort-of... :)

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Apr 26 15:50:42 2019 +0200

        Change K8S objects listing from Python/generator to simple lists.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Apr 26 15:49:43 2019 +0200

        Temporary disable exception handling for better backup.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Apr 23 12:08:02 2019 +0200

        Change plugin estimate traversal loop.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Apr 19 13:37:08 2019 +0200

        Add missing files.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Apr 17 17:08:56 2019 +0200

        Update code to the newest backend code.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Sun Mar 31 20:26:40 2019 +0200

        Refactor kubernetes plugin.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Sun Mar 31 20:26:13 2019 +0200

        Allow kubernetes plugin api in handshake.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Sun Mar 31 20:25:37 2019 +0200

        Update backend install location.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Sat Mar 30 10:34:09 2019 +0100

        More code refactoring.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Mar 29 17:27:10 2019 +0100

        Rename and refactor.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Wed Mar 27 21:15:42 2019 +0100

        Refactoring old swift metaplugin.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Mar 22 16:38:55 2019 +0100

        Refactoring old swift metaplugin.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Fri Mar 22 16:28:26 2019 +0100

        Porting swift backend into k8s.

    Author: Radosław Korzeniewski <radekk@inteos.pl>
    Date:   Tue Mar 19 14:31:28 2019 +0100

        Add python backend code based on swift backend.

122 files changed:
bacula/src/plugins/fd/kubernetes-backend/.gitignore [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/Makefile [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/Project.txt [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/README [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/TODO [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/bacula-backup.yaml [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/file_info.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/k8sobjtype.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/default_io.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/restore_io.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/log.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/packet_definitions.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/job_info_io.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/plugin_params_io.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/backup_job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/estimation_job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_factory.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_pod_bacula.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/listing_job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/query_job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/restore_job.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/main.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/fs_plugin.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculaannotations.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackup.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackupimage.py.in [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/configmaps.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/daemonset.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/deployment.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/endpoints.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sfileinfo.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sutils.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/limitrange.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/namespaces.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumeclaims.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumes.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podexec.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pods.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podtemplates.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcclone.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcdata.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicaset.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicationcontroller.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/resourcequota.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/secret.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/serviceaccounts.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/statefulset.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/storageclass.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/kubernetes_plugin.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin_factory.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/handshake_service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_end_service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_info_service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/plugin_params_service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/unexpected_error_service.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/boolparam.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/date_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/dict_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/iso8601.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/lambda_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/respbody.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/size_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/sslserver.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/token.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/bin/k8s_backend [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/get_python [new file with mode: 0755]
bacula/src/plugins/fd/kubernetes-backend/make_bin [new file with mode: 0755]
bacula/src/plugins/fd/kubernetes-backend/mkExt.pl [new file with mode: 0755]
bacula/src/plugins/fd/kubernetes-backend/requirements.txt [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/setup.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/README [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/base_tests.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/k8s_tests.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_account.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_auth.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_container.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_object.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_backup.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_restore.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/util/__init__.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/util/io_test_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/util/os_test_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_builders.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_test_util.py [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-fd.c [new file with mode: 0644]
bacula/src/plugins/fd/kubernetes-fd.h [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/commctx.h
bacula/src/plugins/fd/pluginlib/commctx_test.cpp
bacula/src/plugins/fd/pluginlib/execprog.cpp [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/execprog.h [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/execprog_test.cpp [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/iso8601.cpp [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/iso8601.h [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/iso8601_test.cpp [new file with mode: 0644]
bacula/src/plugins/fd/pluginlib/metaplugin.cpp
bacula/src/plugins/fd/pluginlib/metaplugin.h
bacula/src/plugins/fd/pluginlib/metaplugin_test.cpp
bacula/src/plugins/fd/pluginlib/pluginlib.cpp
bacula/src/plugins/fd/pluginlib/pluginlib.h
bacula/src/plugins/fd/pluginlib/pluginlib_test.cpp
bacula/src/plugins/fd/pluginlib/ptcomm.cpp
bacula/src/plugins/fd/pluginlib/ptcomm.h
bacula/src/plugins/fd/pluginlib/smartalist.h
bacula/src/plugins/fd/pluginlib/smartptr.h
bacula/src/plugins/fd/pluginlib/test_metaplugin_backend.c

diff --git a/bacula/src/plugins/fd/kubernetes-backend/.gitignore b/bacula/src/plugins/fd/kubernetes-backend/.gitignore
new file mode 100644 (file)
index 0000000..e5705de
--- /dev/null
@@ -0,0 +1,95 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# IPython Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# dotenv
+.env
+
+# virtualenv
+venv/
+ENV/
+
+# Spyder project settings
+.spyderproject
+
+# Rope project settings
+.ropeproject
+
+.idea/
+.AppleDouble
+
+# dynamically created image version
+baculak8s/plugins/k8sbackend/baculabackupimage.py
diff --git a/bacula/src/plugins/fd/kubernetes-backend/Makefile b/bacula/src/plugins/fd/kubernetes-backend/Makefile
new file mode 100644 (file)
index 0000000..536d4ef
--- /dev/null
@@ -0,0 +1,46 @@
+#
+# Master Makefile
+#
+# Copyright (C) 2000-2020 Kern Sibbald
+# License: BSD 2-Clause; see file LICENSE-FOSS
+#
+
+include ../Makefile.inc
+
+CWD=$(shell pwd)
+PIP_PROG=$(shell ./get_python PIP)
+PYTHON_PROG=$(shell ./get_python PYTHON)
+PYTHONPATH=$(shell ./get_python PYTHONPATH)
+PYTHON_PREFIX=$(shell ./get_python PYTHON_PREFIX)
+
+all: pbuild
+
+clean:
+       @find . -name '__pycache__' -print | xargs -r rm -r
+       @$(PYTHON_PROG) setup.py clean
+
+dist-clean: clean
+       @rm -rf build dist compile.py *.spec install-deps
+
+build: pbuild
+
+pbuild:
+       @$(PYTHON_PROG) setup.py build
+
+install: build
+       $(MAKE) -C ../ install-kubernetes
+       @$(PYTHON_PROG) setup.py install
+
+install-deps: requirements.txt
+       PYTHONPATH=$(PYTHONPATH) $(PIP_PROG) install --user -r requirements.txt
+       touch install-deps
+
+binary: install-deps
+       find baculak8s -name '*.py' | ./mkExt.pl > compile.py
+       PYTHONPATH=$(PYTHONPATH) $(PYTHON_PROG) compile.py build
+       $(CWD)/make_bin
+
+install-bin: install-kubernetes
+
+install-kubernetes: binary
+       $(INSTALL_PROGRAM) dist/k8s_backend $(DESTDIR)$(sbindir)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/Project.txt b/bacula/src/plugins/fd/kubernetes-backend/Project.txt
new file mode 100644 (file)
index 0000000..83f2099
--- /dev/null
@@ -0,0 +1,277 @@
+RD-Project-0001 Check List 1.0
+-------------------------------
+
+This checklist must be completed and stored in the SVN directory of the
+project.
+
+When an item is not relevant, N/A can be specified. When an item is not
+done, the line can stay empty. A percentage can also be specified when
+the item is not completely done. Yes/No/Done can be used for Boolean answers.
+
+A copy should be sent to the Project and the R&D manager.
+
+
+Project Name....: Swift Plugin
+Version.........: 1.0
+Authors.........: Henrique Medrado de Faria
+Completion%.....: 100%
+
+
+
+----------------------------------------------------------------
+--- Project Description
+
+Short Description:
+
+This project intends to create a Plugin for Backup and Restore of
+OpenStack Swift Storage objects, which are accessible via a REST API.
+It is used by the Bacula C++ Plugin, and both communicate through an
+ASCII protocol.
+
+
+Beta Testers....:
+Alpha Date......:
+Beta Date.......:
+Target Date.....:
+Release Date....:
+
+
+
+----------------------------------------------------------------
+--- Code
+
+-- What is the associated intranet web page name?
+
+
+
+-- Where is the associated SVN project directory?
+
+
+
+-- Where is the code (git, path, ...)?
+
+bsweb:swift
+
+-- What is the git branch name?
+
+master
+
+-- How to compile the code?
+
+The code is interpreted
+
+
+-- What are the command line options?
+
+At the moment, there are no command line options
+
+-- What are the tested platforms?
+
+
+
+-- What are the supported platforms?
+
+
+
+-- Who did the code review?
+
+Alain Spineux
+
+
+----------------------------------------------------------------
+--- Dependencies
+
+What is needed to run the code (dependencies)?
+
+
+
+What is the procedure to install dependencies?
+
+
+
+Is the dependency installation procedure implemented in a depkgs-xxx Makefile?
+
+
+
+Are the dependencies stored in bsweb:/home/src/depkgs as a depkgs-xxx file?
+
+
+
+How to configure dependencies?
+
+
+
+What is the license for each dependency? Is it compatible with BEE license?
+
+
+
+Was an email sent to all developers with the documentation to install new dependencies?
+
+
+
+Can Bacula compile without the new dependencies?
+
+Yes.
+
+Should we update the configure.in for new libraries?
+
+
+
+----------------------------------------------------------------
+-- Coding Style
+
+Are all structures properly documented?
+
+Yes.
+
+
+Are all functions properly documented?
+
+Yes.
+
+
+Are all advanced algorithms documented?
+
+Yes.
+
+
+Is the copyright correct in all files?
+
+Yes.
+
+
+
+----------------------------------------------------------------
+-- Regression Testing
+
+
+-- What are the names of the regress tests that can be used?
+
+All tests are stored inside the "tests" folder
+
+Unit Tests:
+
+tests/bacula_swift/test_io
+tests/bacula_swift/test_jobs
+tests/bacula_swift/test_services
+
+
+Integration Tests:
+
+tests/bacula_swift/test_plugins
+
+(Tests the integration with the Plugin Data Source, such as Swift Storage)
+
+
+System Tests:
+
+tests/bacula_swift/test_system
+
+(Tests the code as if it were used by the Bacula C++ Plugin)
+
+
+Exploratory Tests:
+
+Auxiliary Tests for developers to study and understand Data Sources
+
+
+Stress Tests:
+
+Tests to verify performance requirements
+
+
+
+-- What are the options or variables that can be used to configure the tests?
+
+
+In order to properly configure the tests,
+some Environment Variables must be created:
+
+BE_PLUGIN_TYPE (Specifies which Plugin should be tested)
+
+BE_PLUGIN_VERSION (Specifies which Plugin Version should be tested)
+
+BE_PLUGIN_URL (Specifies the URL where the Plugin's Data Source exists)
+
+BE_PLUGIN_USER (Specifies the username that should be used by the Plugin)
+
+BE_PLUGIN_PWD (Specifies the password that should be used by the Plugin)
+
+Example:
+
+export BE_PLUGIN_TYPE=swift
+
+export BE_PLUGIN_VERSION=1
+
+export BE_PLUGIN_URL=http://192.168.0.5:8080
+
+export BE_PLUGIN_USER=test:tester
+
+export BE_PLUGIN_PWD=testing
+
+
+
+
+-- Have we some unit test procedures? How to run them?
+
+
+To run the tests (from the project's root folder):
+
+$ python3 -m unittest discover tests/test_baculaswift
+
+
+
+-- Who ran the regression tests?
+
+
+----------------------------------------------------------------
+--- Documentation
+
+Where is the documentation?
+
+
+
+Are the following subjects described in the documentation:
+   1- General overview
+   
+   2- Installation of the program
+   
+   3- Configuration of the program
+   
+   4- Limitations
+
+
+----------------------------------------------------------------
+--- Packaging
+
+Do we have a RPM spec file?
+
+No
+
+
+Do we have Debian debhelper files?
+
+No
+
+
+Do we have a Windows installer?
+
+No
+
+Will the spec or the debhelper scripts abort if dependencies are not found?
+
+
+----------------------------------------------------------------
+--- Support
+
+What is the Mantis category for bug reports?
+
+
+
+Who is the Support expert for the project?
+
+Henrique Medrado (hfaria020@gmail.com)
+
+
+
+
diff --git a/bacula/src/plugins/fd/kubernetes-backend/README b/bacula/src/plugins/fd/kubernetes-backend/README
new file mode 100644 (file)
index 0000000..69b2740
--- /dev/null
@@ -0,0 +1,19 @@
+This is a Kubernetes Backend for Bacula Enterprise
+
+
+Author: Radosław Korzeniewski (c) 2019,2020
+
+(c) Bacula Systems SA
+
+---------- Build and Installation ----------
+
+To build and install the project:
+
+$ python3 setup.py build
+$ python3 setup.py install
+
+(From the project's root folder)
+
+To run the project:
+
+$ k8s_backend
diff --git a/bacula/src/plugins/fd/kubernetes-backend/TODO b/bacula/src/plugins/fd/kubernetes-backend/TODO
new file mode 100644 (file)
index 0000000..87fc305
--- /dev/null
@@ -0,0 +1,6 @@
+-- TO BE DONE --
+    - Bacula Regression Test
+
+-- TO BE REVIEWED --
+
+-- OTHER --
diff --git a/bacula/src/plugins/fd/kubernetes-backend/bacula-backup.yaml b/bacula/src/plugins/fd/kubernetes-backend/bacula-backup.yaml
new file mode 100644 (file)
index 0000000..88e1ed2
--- /dev/null
@@ -0,0 +1,40 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: {podname}
+  namespace: {namespace}
+  labels:
+    app: baculabackup
+spec:
+  hostname: {podname}
+  {nodenameparam}
+  containers:
+  - name: {podname}
+    resources:
+      limits:
+        cpu: "1"
+        memory: "64Mi"
+      requests:
+        cpu: "100m"
+        memory: "16Mi"
+    image: {image}
+    env:
+    - name: PLUGINMODE
+      value: "{mode}"
+    - name: PLUGINHOST
+      value: "{host}"
+    - name: PLUGINPORT
+      value: "{port}"
+    - name: PLUGINTOKEN
+      value: "{token}"
+    - name: PLUGINJOB
+      value: "{job}"
+    imagePullPolicy: {imagepullpolicy}
+    volumeMounts:
+      - name: {podname}-storage
+        mountPath: /{mode}
+  restartPolicy: Never
+  volumes:
+    - name: {podname}-storage
+      persistentVolumeClaim:
+        claimName: {pvcname}
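
The manifest above is a template: the backend fills the {placeholder} fields at job time, apparently via Python str.format-style substitution. A minimal sketch of that rendering, using only a fragment of the template; every value below is a hypothetical example, not a plugin default:

```python
# Minimal sketch of rendering the pod template; placeholder names are taken
# from bacula-backup.yaml above, but the values are invented for illustration.
TEMPLATE = """\
apiVersion: v1
kind: Pod
metadata:
  name: {podname}
  namespace: {namespace}
spec:
  containers:
  - name: {podname}
    image: {image}
    env:
    - name: PLUGINMODE
      value: "{mode}"
"""

manifest = TEMPLATE.format(
    podname="bacula-backup-1234",
    namespace="default",
    image="bacula-backup:latest",
    mode="backup",
)
print(manifest)
```

The rendered text can then be handed to the Kubernetes API to create the helper pod.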
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/__init__.py
new file mode 100644 (file)
index 0000000..cc4e9f3
--- /dev/null
@@ -0,0 +1,19 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from . import plugins
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/__init__.py
new file mode 100644 (file)
index 0000000..2f40006
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/file_info.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/file_info.py
new file mode 100644 (file)
index 0000000..2ef7f3a
--- /dev/null
@@ -0,0 +1,127 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import fnmatch
+import re
+
+NOT_EMPTY_FILE = "F"
+EMPTY_FILE = "E"
+DIRECTORY = "D"
+SYMBOLIC_LINK = "S"
+HARD_LINK = "L"
+DEFAULT_FILE_MODE = "100640"  # -rw-r-----
+DEFAULT_DIR_MODE = "040755"  # drwxr-xr-x
+MEGABYTE = 1024 * 1024
+
+
+class FileInfo(object):
+    """
+        Entity representing information about a File
+    """
+
+    def __init__(self, name,
+                 ftype, size, uid, gid,
+                 accessed_at, modified_at, created_at,
+                 mode, nlink, index=None,
+                 namespace=None, objtype=None,
+                 fullfname=None):
+        self.name = name
+        self.type = ftype
+        self.size = size
+        self.uid = uid
+        self.gid = gid
+        self.mode = mode
+        self.nlink = nlink
+        self.index = index
+        self.accessed_at = accessed_at
+        self.modified_at = modified_at
+        self.created_at = created_at
+        self.namespace = namespace
+        self.objtype = objtype
+        self.objcache = None
+        # Avoid a shared mutable default argument
+        self.fullfname = fullfname if fullfname is not None else []
+
+    def __str__(self):
+        return '{{FileInfo name:{} namespace:{} type:{} objtype:{} cached:{}}}'\
+            .format(str(self.name),
+                    str(self.namespace),
+                    str(self.type),
+                    str(self.objtype),
+                    self.objcache is not None)
+
+    def is_bucket(self):
+        return self.type == DIRECTORY and not self.name
+
+    def match_any_glob(self, globs, current_matches):
+        """
+            Verifies whether this File matches any glob inside $globs$.
+            If it does, the $current_matches$ list will be updated.
+        """
+        any_match = False
+
+        for glob in globs:
+            # Glob check
+            if fnmatch.fnmatchcase(self.name, glob):
+                any_match = True
+                current_matches.append(glob)
+                break
+
+        return any_match
+
+    def match_any_regex(self, regexes, current_matches):
+        """
+            Verifies whether this File matches any regex inside $regexes$.
+            If it does, the $current_matches$ list will be updated.
+        """
+
+        any_match = False
+
+        for regex in regexes:
+            if re.match(regex, self.name):
+                any_match = True
+                current_matches.append(regex)
+                break
+
+        return any_match
+
+    def apply_regexwhere_param(self, regexwhere):
+        """
+            Applies a $regexwhere$ pattern to this File, updating its name.
+        """
+        patterns = regexwhere.split(",")
+
+        for pattern in patterns:
+            re_flags = 0
+
+            if pattern.endswith("/i"):
+                re_flags += re.IGNORECASE
+
+            separator = pattern[0]
+            splitted = pattern \
+                .strip(separator) \
+                .split(separator)
+
+            splitted[1] = splitted[1].replace("$", "\\")
+
+            # We remove empty entries
+            splitted = list(filter(None, splitted))
+            new_name = re.sub(r'%s' % splitted[0],
+                              r'' + splitted[1],
+                              self.name,
+                              flags=re_flags)
+            self.name = new_name
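
The regexwhere handling above follows Bacula's sed-like /search/replace/ relocation patterns, with an optional trailing "i" for case-insensitive matching. A standalone sketch of the same transformation (kept outside the class so it can be run directly; the sample names are made up):

```python
import re

def apply_regexwhere(name, regexwhere):
    # Sketch of the transformation in FileInfo.apply_regexwhere_param:
    # each comma-separated /search/replace/ pattern rewrites the name
    # in turn; a trailing "/i" makes the match case-insensitive.
    for pattern in regexwhere.split(","):
        flags = re.IGNORECASE if pattern.endswith("/i") else 0
        sep = pattern[0]
        parts = list(filter(None, pattern.strip(sep).split(sep)))
        search, replace = parts[0], parts[1].replace("$", "\\")
        name = re.sub(search, replace, name, flags=flags)
    return name

print(apply_regexwhere("namespaces/Default/cm", "/Default/prod/i"))
# → namespaces/prod/cm
```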
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/k8sobjtype.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/entities/k8sobjtype.py
new file mode 100644 (file)
index 0000000..265b398
--- /dev/null
@@ -0,0 +1,94 @@
+# -*- coding: UTF-8 -*-
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+
+class K8SObjType(object):
+    K8SOBJ_CONFIGMAP = 'cm'
+    K8SOBJ_DAEMONSET = 'ds'
+    K8SOBJ_DEPLOYMENT = 'dp'
+    K8SOBJ_ENDPOINT = 'ep'
+    K8SOBJ_LIMITRANGE = 'lr'
+    K8SOBJ_NAMESPACE = 'ns'
+    K8SOBJ_POD = 'pod'
+    K8SOBJ_PVOLCLAIM = 'pvc'
+    K8SOBJ_PVOLUME = 'pv'
+    K8SOBJ_PODTEMPLATE = 'podt'
+    K8SOBJ_REPLICASET = 'rs'
+    K8SOBJ_REPLICACONTR = 'rc'
+    K8SOBJ_RESOURCEQUOTA = 'rq'
+    K8SOBJ_SECRET = 'sec'
+    K8SOBJ_SERVICE = 'svc'
+    K8SOBJ_SERVICEACCOUNT = 'sva'
+    K8SOBJ_STATEFULSET = 'ss'
+    K8SOBJ_PVCDATA = 'pvcdata'
+    K8SOBJ_STORAGECLASS = 'sc'
+
+    K8SOBJ_NAMESPACE_Path = 'namespaces'
+    K8SOBJ_PVOLUME_Path = 'persistentvolumes'
+    K8SOBJ_PVCS_Path = 'persistentvolumeclaims'
+    K8SOBJ_PVCDATA_Path = 'pvcdata'
+    K8SOBJ_STORAGECLASS_Path = 'storageclass'
+
+    pathdict = {
+        K8SOBJ_CONFIGMAP: 'configmaps',
+        K8SOBJ_DAEMONSET: 'daemonsets',
+        K8SOBJ_DEPLOYMENT: 'deployments',
+        K8SOBJ_ENDPOINT: 'endpoints',
+        K8SOBJ_LIMITRANGE: 'limitranges',
+        K8SOBJ_NAMESPACE: K8SOBJ_NAMESPACE_Path,
+        K8SOBJ_POD: 'pods',
+        K8SOBJ_PVOLCLAIM: K8SOBJ_PVCS_Path,
+        K8SOBJ_PVCDATA: K8SOBJ_PVCS_Path,
+        K8SOBJ_PVOLUME: K8SOBJ_PVOLUME_Path,
+        K8SOBJ_PODTEMPLATE: 'podtemplates',
+        K8SOBJ_REPLICASET: 'replicasets',
+        K8SOBJ_REPLICACONTR: 'replicationcontroller',
+        K8SOBJ_RESOURCEQUOTA: 'resourcequota',
+        K8SOBJ_SECRET: 'secrets',
+        K8SOBJ_SERVICE: 'services',
+        K8SOBJ_SERVICEACCOUNT: 'serviceaccounts',
+        K8SOBJ_STATEFULSET: 'statefulsets',
+        K8SOBJ_STORAGECLASS: K8SOBJ_STORAGECLASS_Path,
+    }
+
+    methoddict = {
+        K8SOBJ_CONFIGMAP: 'config_map',
+        K8SOBJ_DAEMONSET: 'daemon_set',
+        K8SOBJ_DEPLOYMENT: 'deployment',
+        K8SOBJ_ENDPOINT: 'endpoint',
+        K8SOBJ_LIMITRANGE: 'limitrange',
+        K8SOBJ_NAMESPACE: 'namespace',
+        K8SOBJ_POD: 'pod',
+        K8SOBJ_PVOLCLAIM: 'persistentvolume_claim',
+        K8SOBJ_PVCDATA: 'persistentvolume_data',
+        K8SOBJ_PVOLUME: 'persistentvolume',
+        K8SOBJ_PODTEMPLATE: 'pod_template',
+        K8SOBJ_REPLICASET: 'replica_set',
+        K8SOBJ_REPLICACONTR: 'replication_controller',
+        K8SOBJ_RESOURCEQUOTA: 'resource_quota',
+        K8SOBJ_SECRET: 'secret',
+        K8SOBJ_SERVICE: 'service',
+        K8SOBJ_SERVICEACCOUNT: 'service_account',
+        K8SOBJ_STATEFULSET: 'stateful_set',
+        K8SOBJ_STORAGECLASS: 'storageclass',
+    }
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/__init__.py
new file mode 100644 (file)
index 0000000..2f40006
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/default_io.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/default_io.py
new file mode 100644 (file)
index 0000000..fa23fb2
--- /dev/null
@@ -0,0 +1,172 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+
+from baculak8s.io.log import Log
+from baculak8s.io.packet_definitions import *
+from baculak8s.plugins.plugin import *
+
+CONNECTION_ERROR_TEMPLATE = "Error connecting to the chosen Data Source. %s."
+HOST_NOT_FOUND_CONNECTION_ERROR = "404 Not found. Name or service not known"
+HOST_TIMEOUT_ERROR = "Host connection timeout. Maximum retries exceeded"
+AUTH_FAILED_CONNECTION_ERROR = "Authentication Failed"
+UNEXPECTED_CONNECTION_ERROR = "Unrecognized connection error"
+SSL_ERROR = "SSL verification failed"
+CONNECTION_REFUSED_TEMPLATE = "Max retry exceeded or Connection refused"
+
+
+class DefaultIO(object):
+    """
+        Default Class for performing IO.
+        It contains helper methods to read and send packets.
+    """
+
+    def send_connection_error(self, error_code, strerror=None):
+        if strerror is None:
+            message = self.__get_connection_error_message(error_code)
+        else:
+            message = strerror
+        self.send_error(message)
+
+    def send_connection_abort(self, error_code, strerror=None):
+        if strerror is None:
+            message = self.__get_connection_error_message(error_code)
+        else:
+            message = strerror
+        self.send_abort(message)
+
+    def __get_connection_error_message(self, error_code):
+        if error_code == ERROR_HOST_NOT_FOUND:
+            return CONNECTION_ERROR_TEMPLATE % HOST_NOT_FOUND_CONNECTION_ERROR
+        elif error_code == ERROR_HOST_TIMEOUT:
+            return CONNECTION_ERROR_TEMPLATE % HOST_TIMEOUT_ERROR
+        elif error_code == ERROR_AUTH_FAILED:
+            return CONNECTION_ERROR_TEMPLATE % AUTH_FAILED_CONNECTION_ERROR
+        elif error_code == ERROR_SSL_FAILED:
+            return CONNECTION_ERROR_TEMPLATE % SSL_ERROR
+        elif error_code == ERROR_CONNECTION_REFUSED:
+            return CONNECTION_ERROR_TEMPLATE % CONNECTION_REFUSED_TEMPLATE
+        else:
+            return CONNECTION_ERROR_TEMPLATE % UNEXPECTED_CONNECTION_ERROR
+
+    def send_eod(self):
+        packet_header = EOD_PACKET + b"\n"
+        sys.stdout.buffer.write(packet_header)
+        sys.stdout.flush()
+        Log.save_sent_eod(packet_header.decode())
+
+    def send_abort(self, message):
+        self.send_packet(STATUS_ABORT, message)
+
+    def send_error(self, message):
+        self.send_packet(STATUS_ERROR, message)
+
+    def send_warning(self, message):
+        self.send_packet(STATUS_WARNING, message)
+
+    def send_info(self, message):
+        self.send_packet(STATUS_INFO, message)
+
+    def send_command(self, message):
+        self.send_packet(STATUS_COMMAND, message)
+
+    def send_data(self, data):
+        self.send_packet(STATUS_DATA, data, raw=True)
+
+    def send_packet(self, status, packet_content, raw=False):
+        """
+            Prints a packet to stdout. A packet has the format
+
+            $status$ + $packet_length$ + \n
+            $packet_content$ + \n
+
+            where $status$ represents the type of packet sent
+            and $packet_length$ has 6 decimal chars.
+
+            If $raw$ is True, $packet_content$ will be handled
+            as a Byte String
+        """
+
+        bytes_content = packet_content
+
+        if not raw:
+            packet_content += "\n"
+            bytes_content = packet_content.encode()
+
+        packet_length = str(len(bytes_content)).zfill(6)
+        packet_header = "%s%s\n" % (status, packet_length)
+
+        sys.stdout.buffer.write(packet_header.encode())
+        sys.stdout.buffer.write(bytes_content)
+        sys.stdout.flush()
+
+        if not raw:
+            Log.save_sent_packet(packet_header, packet_content)
+        else:
+            Log.save_sent_data(packet_header)
+
+    def send_file_info(self, info):
+        """
+           Prints four packets to stdout:
+            1 - The file's FNAME packet
+            2 - The file's TSTAMP packet
+            3 - The file's STAT packet
+            4 - An EOD packet
+        """
+
+        full_file_name = info.name
+
+        self.send_command("FNAME:%s" % full_file_name)
+
+        timestamp_tuple = (info.accessed_at, info.modified_at, info.created_at)
+        self.send_command("TSTAMP:%s %s %s" % timestamp_tuple)
+
+        stat_tuple = (info.type, info.size, info.uid, info.gid, info.mode, info.nlink)
+        self.send_command("STAT:%s %s %s %s %s %s" % stat_tuple)
+        self.send_eod()
+
+    def send_query_response(self, response):
+        key = response[0]
+        value = response[1]
+        self.send_command(str(key)+"="+str(value))
+
+    def read_line(self):
+        """
+            Reads a line from the stdin buffer
+
+            :return: The line read, as a Byte String, without the newline char
+        """
+        return sys.stdin.buffer.readline().strip()
+
+    def read_packet(self):
+        """
+            Reads a packet from the stdin buffer
+
+            :return: 1- The packet header, as a Byte String, without the newline char
+                     2- The packet content, as a String, without the newline char
+        """
+        packet_header = sys.stdin.buffer.readline().strip()
+        packet_content = sys.stdin.buffer.readline().strip().decode()
+        Log.save_received_packet(packet_header, packet_content)
+        return packet_header, packet_content
+
+    def read_eod(self):
+        packet_header = self.read_line()
+        if not packet_header or packet_header != EOD_PACKET:
+            raise ValueError("EOD packet not found")
+        Log.save_received_eod(packet_header)
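The framing implemented by `DefaultIO.send_packet` above (one status character, a 6-digit zero-padded length, a newline, then the content) can be illustrated with a minimal standalone sketch; the helper name `frame_packet` is illustrative and not part of the plugin:

```python
def frame_packet(status: str, content: str) -> bytes:
    """Frame a packet the way DefaultIO.send_packet does for non-raw
    content: status char + 6-digit length + '\n', then the
    newline-terminated content."""
    body = (content + "\n").encode()
    header = "%s%s\n" % (status, str(len(body)).zfill(6))
    return header.encode() + body

# A command packet carrying "FNAME:/etc/passwd" (18 bytes with the newline)
pkt = frame_packet("C", "FNAME:/etc/passwd")
assert pkt == b"C000018\nFNAME:/etc/passwd\n"
```

Note the length field counts the trailing newline of the content, which is why an empty info message still frames as a 1-byte payload.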
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/__init__.py
new file mode 100644 (file)
index 0000000..2f40006
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/restore_io.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/jobs/restore_io.py
new file mode 100644 (file)
index 0000000..705b669
--- /dev/null
@@ -0,0 +1,208 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import re
+import sys
+from enum import Enum
+import logging
+
+from baculak8s.entities.file_info import FileInfo
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.log import Log
+from baculak8s.io.packet_definitions import ACL_DATA_START, XATTR_DATA_START, EOD_PACKET, STATUS_DATA
+from baculak8s.plugins.k8sbackend.k8sfileinfo import k8sfileobjecttype
+
+RESTORE_START = "RestoreStart"
+INVALID_RESTORE_START_PACKET = "Invalid restore job start packet"
+RESTORE_END_PACKET = "FINISH"
+SUCCESS_PACKET = "OK"
+SKIP_PACKET = "SKIP"
+FILE_TRANSFER_START = "DATA"
+INVALID_XATTRS_TRANSFER_START_PACKET = "Invalid extended attributes transfer start packet. Aborting"
+INVALID_ACL_TRANSFER_START_PACKET = "Invalid access control list transfer start packet. Aborting"
+RESTORE_LOOP_ERROR = "Invalid packet during restore loop."
+FNAME_WITHOUT_FSOURCE_ERROR = "Invalid FNAME packet. It should have information about the file's source."
+
+XATTR_ERROR_TEMPLATE = "Error while transferring the file's extended attributes to the chosen Data Source" \
+                       "\nFile: %s\nBucket: %s.\n"
+
+ACL_ERROR_TEMPLATE = "Error while transferring the file's access control list to the chosen Data Source" \
+                     "\nFile: %s\nBucket: %s.\n"
+
+FILE_ERROR_TEMPLATE = "Error while transferring file content to the chosen Data Source" \
+                      "\n\tFile: %s\n\tNamespace: %s\n\tdetails: %s"
+
+BUCKET_ERROR_TEMPLATE = "Error while creating a Bucket on the chosen Data Source" \
+                      "\nBucket: %s.\n"
+
+COMMA_SEPARATOR_NOT_SUPPORTED = "Comma separator not supported yet."
+
+
+class RestorePacket(Enum):
+    FILE_INFO = 1
+    ACL_START = 2
+    XATTR_START = 3
+    RESTORE_END = 4
+    INVALID_PACKET = 5
+
+
+class RestoreIO(DefaultIO):
+    def next_loop_packet(self, onError):
+        _, packet = self.read_packet()
+        logging.debug('next_loop_packet:packet:' + str(packet))
+        if packet is None:
+            return RestorePacket.INVALID_PACKET, None
+
+        if packet == RESTORE_END_PACKET:
+            return RestorePacket.RESTORE_END, None
+
+        if packet.startswith("FNAME:"):
+            file_info = self.__read_file_info(packet, onError)
+            return RestorePacket.FILE_INFO, file_info
+
+        if packet == ACL_DATA_START:
+            return RestorePacket.ACL_START, None
+
+        if packet == XATTR_DATA_START:
+            return RestorePacket.XATTR_START, None
+
+        return RestorePacket.INVALID_PACKET, None
+
+    def __read_file_info(self, full_fname, onError):
+        """
+        Reads the rest of a file-info sequence from stdin
+        (the FNAME packet was already consumed):
+            1 - The file's STAT packet
+            2 - The file's TSTAMP packet
+            3 - An EOD packet
+
+        :return: The file_info data structure
+        """
+
+        full_fname = full_fname.replace("FNAME:", "").rstrip("/")
+
+        if "@" not in full_fname:
+            _, full_stat = self.read_packet()
+            _, full_tstamp = self.read_packet()
+            self.read_eod()
+            self.send_abort(FNAME_WITHOUT_FSOURCE_ERROR)
+            onError()
+            return
+
+        where_param = self.__read_where_parameter(full_fname)
+
+        if where_param is not None:
+            full_fname = full_fname.replace(where_param, '', 1).lstrip('/')
+
+        # creates an array like:
+        # ['@kubernetes', 'namespaces', '$namespace', '$object', '$file.yaml']
+        # ['@kubernetes', 'namespaces', '$namespace', '$object', '$file.tar']
+        # ['@kubernetes', 'namespaces', '$namespace', '$file.yaml']
+        # ['@kubernetes', 'persistentvolumes', '$file.yaml']
+        fname = full_fname.split("/", 4)
+
+        _, full_stat = self.read_packet()
+
+        # creates an array with [$type$, $size$, $uid$, $gid$, $mode$, $nlink$]
+        fstat = re.sub("STAT:", '', full_stat).split(' ')
+
+        _, full_tstamp = self.read_packet()
+
+        # creates an array with [$atime$, $mtime$, $ctime$]
+        ftstamp = re.sub("TSTAMP:", '', full_tstamp).split(' ')
+
+        self.read_eod()
+
+        # debugging
+        logging.debug("fstat:" + str(fstat))
+        logging.debug("fname:" + str(fname))
+        logging.debug("ftstamp:" + str(ftstamp))
+
+        objtype = k8sfileobjecttype(fname)
+        return FileInfo(
+            # The name may be empty if we have a bucket file
+            name=fname[-1],
+            ftype=fstat[0],
+            size=int(fstat[1]),
+            uid=fstat[2],
+            gid=fstat[3],
+            mode=fstat[4],
+            nlink=fstat[5],
+            index=fstat[6],
+            accessed_at=int(ftstamp[0]),
+            modified_at=int(ftstamp[1]),
+            created_at=int(ftstamp[2]),
+            namespace=objtype['namespace'],
+            objtype=objtype['obj'],
+            fullfname=fname
+        )
+
+    def __read_where_parameter(self, full_fname):
+        if full_fname.startswith("@"):
+            where_param = None
+        else:
+            # we read the where param part of the FNAME packet
+            index_fsource = full_fname.index("/@")
+            where_param = full_fname[0:index_fsource]
+            where_param = where_param.rstrip("/")
+
+        return where_param
+
+    def read_data(self):
+        """
+            Reads a data packet from stdin:
+
+                "D000123\n" (Data packet header)
+                "chunk"     (Data packet)
+
+            :return: The data packet content, or
+                     None if an EOD packet (F000000) is found instead
+        """
+        header = self.read_line()
+        if not header:
+            raise ValueError("Packet Header not found")
+        logging.debug('io.read_data: ' + str(header))
+        if header == EOD_PACKET:
+            Log.save_received_eod(header)
+            return None
+
+        # Removes the status of the header to obtain data content length
+        chunk_length = int(header.decode().replace(STATUS_DATA, ''))
+        chunk = sys.stdin.buffer.read(chunk_length)
+        Log.save_received_data(header)
+        return chunk
+
+
+class FileContentReader(RestoreIO):
+    """
+        Class used to read chunks of file content from standard input.
+    """
+
+    def __init__(self):
+        self.finished = False
+
+    def read(self, size=None):
+        if self.finished:
+            return None
+
+        data = self.read_data()
+        if not data:
+            self.finished = True
+        return data
+
+    def finished_transfer(self):
+        return self.finished
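The chunked data stream that `RestoreIO.read_data` consumes (a sequence of `D<6-digit length>\n<bytes>` packets closed by the `F000000` EOD marker) can be sketched against an in-memory stream; `read_chunks` is an illustrative helper, not part of the plugin:

```python
import io

EOD = b"F000000"

def read_chunks(stream):
    """Yield the payload of each "D<len>\\n<bytes>" data packet from a
    binary stream until the EOD marker, mirroring RestoreIO.read_data."""
    while True:
        header = stream.readline().strip()
        if not header:
            raise ValueError("Packet Header not found")
        if header == EOD:
            return
        length = int(header[1:])   # drop the 'D' status byte, keep the length
        yield stream.read(length)

stream = io.BytesIO(b"D000005\nhelloD000006\n worldF000000\n")
assert b"".join(read_chunks(stream)) == b"hello world"
```

Because the payload bytes are raw (not newline-terminated), the reader must trust the length field to know where the next header begins.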
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/log.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/log.py
new file mode 100644 (file)
index 0000000..4ba500e
--- /dev/null
@@ -0,0 +1,119 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+import os
+import baculak8s
+
+PLUGIN_WORKING = os.getenv("PLUGIN_WORKING", "/opt/bacula/working/kubernetes")
+PRE_JOB_LOG_NAME_TEMPLATE = "pre_job_%s.log"
+LOG_NAME_TEMPLATE = "%s_%s_%s.log"
+
+
+class LogConfig(object):
+    """
+        Class used to configure the execution Debug Log File.
+        It determines whether the Debug Log File should be
+        created, and where it should be created.
+    """
+
+    @staticmethod
+    def start():
+        # The Log File should be at the PLUGIN_WORKING path
+        if not os.path.exists(PLUGIN_WORKING):
+            try:
+                os.makedirs(PLUGIN_WORKING)
+            except OSError:
+                # fall back to /tmp if the working dir cannot be created
+                baculak8s.io.log.PLUGIN_WORKING = '/tmp/backendplugin'
+                if not os.path.exists(PLUGIN_WORKING):
+                    os.makedirs(PLUGIN_WORKING)
+
+        # The Log File starts with a "Pre Job" name
+        file_name = PRE_JOB_LOG_NAME_TEMPLATE % os.getpid()
+        file_name = os.path.join(PLUGIN_WORKING, file_name)
+        logging.basicConfig(filename=file_name, level=logging.DEBUG, filemode='w+', format='%(levelname)s:[%(pathname)s:%(lineno)d in %(funcName)s] %(message)s')
+
+    @staticmethod
+    def handle_params(job_info, plugin_params):
+        if "debug" in plugin_params and plugin_params["debug"]:
+            LogConfig._create(job_info)
+        else:
+            LogConfig._delete_pre_job_log()
+
+    @staticmethod
+    def _create(job_info):
+        pid = os.getpid()
+        old_name = PRE_JOB_LOG_NAME_TEMPLATE % pid
+        new_name = LOG_NAME_TEMPLATE % (pid, job_info["jobid"], job_info["name"])
+        old_name = os.path.join(PLUGIN_WORKING, old_name)
+        new_name = os.path.join(PLUGIN_WORKING, new_name)
+        if os.path.isfile(old_name):
+            os.rename(old_name, new_name)
+
+    @staticmethod
+    def _delete_pre_job_log():
+        pid = os.getpid()
+        pre_job_log_file = PRE_JOB_LOG_NAME_TEMPLATE % pid
+        pre_job_log_file = os.path.join(PLUGIN_WORKING, pre_job_log_file)
+        if os.path.isfile(pre_job_log_file):
+            os.remove(pre_job_log_file)
+
+
+class Log:
+    """
+        Class with helper methods to send data to the Debug Log
+    """
+
+    @staticmethod
+    def save_received_termination(packet_header):
+        Log.save_received_packet(packet_header, "(TERMINATION PACKET)")
+
+    @staticmethod
+    def save_received_eod(packet_header):
+        Log.save_received_packet(packet_header, "(EOD PACKET)")
+
+    @staticmethod
+    def save_received_data(packet_header):
+        Log.save_received_packet(packet_header, "(DATA PACKET)")
+
+    @staticmethod
+    def save_received_packet(packet_header, packet_content):
+        message = "Received Packet\n%s\n%s\n" % (packet_header.decode(), packet_content)
+        logging.debug(message)
+
+    @staticmethod
+    def save_sent_eod(packet_header):
+        Log.save_sent_packet(packet_header, "(EOD PACKET)\n")
+
+    @staticmethod
+    def save_sent_data(packet_header):
+        Log.save_sent_packet(packet_header, "(DATA PACKET)\n")
+
+    @staticmethod
+    def save_sent_packet(packet_header, packet_content):
+        message = "Sent Packet\n%s%s" % (packet_header, packet_content)
+        logging.debug(message)
+
+    @staticmethod
+    def save_exit_code(exit_code):
+        message = "Backend finished with Exit Code: %s" % exit_code
+        logging.debug(message)
+
+    @staticmethod
+    def save_exception(e):
+        logging.debug(e)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/packet_definitions.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/packet_definitions.py
new file mode 100644 (file)
index 0000000..89cdf7d
--- /dev/null
@@ -0,0 +1,37 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+STATUS_COMMAND = "C"
+STATUS_DATA = "D"
+STATUS_ABORT = "A"
+STATUS_ERROR = "E"
+STATUS_WARNING = "W"
+STATUS_INFO = "I"
+
+EOD_PACKET = b'F000000'
+TERMINATION_PACKET = b'T000000'
+
+UNEXPECTED_ERROR_PACKET = "Unexpected error. Please check log for details"
+
+FILE_DATA_START = "DATA"
+XATTR_DATA_START = "XATTR"
+ACL_DATA_START = "ACL"
+ESTIMATION_START_PACKET = "EstimateStart"
+QUERY_START_PACKET = "QueryStart"
+INVALID_ESTIMATION_START_PACKET = "Invalid estimation job start packet"
+OBJECT_PAGE_ERROR = "Error retrieving a page of buckets."
+FILE_INFO_ERROR = "Error retrieving information about a file."
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/__init__.py
new file mode 100644 (file)
index 0000000..2f40006
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/job_info_io.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/job_info_io.py
new file mode 100644 (file)
index 0000000..82e0c86
--- /dev/null
@@ -0,0 +1,82 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import datetime
+
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.log import Log
+from baculak8s.io.packet_definitions import EOD_PACKET
+
+JOB_START_PACKET = "Job"
+INVALID_JOB_START_PACKET = "Invalid Job Start Packet"
+INVALID_JOB_PARAMETER_BLOCK = "Invalid Job Parameter Block"
+INVALID_JOB_TYPE = "Invalid Job Type. The supported types are: " \
+                   "B - Backup, R - Restore or E - Estimation"
+INVALID_REPLACE_PARAM = "Invalid Replace Parameter. The supported values are: " \
+                        "a - Replace always, w - Replace if newer, n - Never replace " \
+                        "or o - Replace if older"
+JOB_NAME_NOT_FOUND = "Parameter Job Name Not Found"
+JOB_ID_NOT_FOUND = "Parameter Job ID Not Found"
+JOB_TYPE_NOT_FOUND = "Parameter Job Type Not Found"
+
+
+class JobInfoIO(DefaultIO):
+
+    def read_job_info(self):
+        """
+            Reads blocks of parameters:
+
+                "C000111\n"     (Command packet header)
+                key1=value1\n   (Parameter)
+                "C000222\n"
+                key2=value2\n
+                "C000333\n"
+                key3=value3\n
+                ...
+
+            until an EOD packet (F000000) is found
+
+            :return: A dictionary containing the parameters
+
+        """
+
+        block = {}
+        while True:
+            packet_header = self.read_line()
+            if not packet_header:
+                raise ValueError("Packet Header not found")
+
+            if packet_header == EOD_PACKET:
+                Log.save_received_eod(packet_header)
+                return block
+            else:
+                packet_content = self.read_line().decode()
+                param = packet_content.split("=", 1)
+                # converts key to lowercase
+                param[0] = param[0].lower()
+
+                if param[0] == "since":
+                    # converts the provided timestamp to a utc timestamp
+                    parsed_param = int(param[1])
+                    parsed_param = int(datetime.datetime \
+                                       .utcfromtimestamp(parsed_param) \
+                                       .replace(tzinfo=datetime.timezone.utc) \
+                                       .timestamp())
+                    block[param[0]] = parsed_param
+                else:
+                    block[param[0]] = param[1]
+
+                Log.save_received_packet(packet_header, packet_content)
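The parameter-block protocol that `JobInfoIO.read_job_info` describes (alternating command headers and `key=value` lines until EOD) can be condensed into a small sketch; `read_param_block` is an illustrative name, not part of the plugin:

```python
import io

EOD = b"F000000"

def read_param_block(stream):
    """Parse the "C<len>\\nkey=value\\n" stream described in
    JobInfoIO.read_job_info into a dict with lowercased keys."""
    block = {}
    while True:
        header = stream.readline().strip()
        if not header:
            raise ValueError("Packet Header not found")
        if header == EOD:
            return block
        key, _, value = stream.readline().strip().decode().partition("=")
        block[key.lower()] = value

stream = io.BytesIO(b"C000012\nName=backup\nC000009\nJobId=42\nF000000\n")
assert read_param_block(stream) == {"name": "backup", "jobid": "42"}
```

Splitting on the first `=` only (as the plugin does with `split("=", 1)`) keeps values that themselves contain `=` intact.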
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/plugin_params_io.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/io/services/plugin_params_io.py
new file mode 100644 (file)
index 0000000..591af2a
--- /dev/null
@@ -0,0 +1,91 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.log import Log
+from baculak8s.io.packet_definitions import EOD_PACKET
+
+PLUGIN_PARAMETERS_START = "Params"
+INVALID_PLUGIN_PARAMETERS_START = "Invalid Plugin Parameters Start Packet"
+INVALID_PLUGIN_PARAMETERS_BLOCK = "Invalid Plugin Parameters Block"
+URL_NOT_FOUND = "Parameter URL not found on Plugin Parameters"
+USER_NOT_FOUND = "Parameter User not found on Plugin Parameters"
+PWD_NOT_FOUND = "Parameter Password not found on Plugin Parameters"
+PASSFILE_NOT_FOUND = "Passfile not found. ERR=No such file or directory"
+PWD_INSIDE_PASSFILE_NOT_FOUND = "Password inside passfile not found"
+RESTORE_LOCAL_WITHOUT_WHERE = "Restore Local plugin parameter without Where Plugin Parameter"
+
+
+class PluginParamsIO(DefaultIO):
+
+    def read_plugin_params(self):
+        """
+                Reads blocks of parameters:
+
+                    "C000111\n"     (Command packet header)
+                    key1=value1\n   (Parameter)
+                    "C000222\n"
+                    key2=value2\n
+                    "C000333\n"
+                    key3=value3\n
+                    ...
+
+                until an EOD packet (F000000) is found
+
+                :return: A dictionary containing the parameters
+
+        """
+
+        block = {
+            "includes": [],
+            "regex_includes": [],
+            "excludes": [],
+            "regex_excludes": [],
+            "namespace": [],
+            "persistentvolume": [],
+            "storageclass": [],
+        }
+        while True:
+            packet_header = self.read_line()
+            if not packet_header:
+                raise ValueError("Packet Header not found")
+
+            if packet_header == EOD_PACKET:
+                Log.save_received_eod(packet_header)
+                return block
+            else:
+                packet_content = self.read_line().decode()
+                param = packet_content.split("=", 1)
+
+                # converts key to lowercase
+                param[0] = param[0].lower()
+
+                # handle single word param without equal sign
+                if len(param) < 2:
+                    param.append(True)
+
+                # handle array parameters automatically
+                if param[0] in block.keys():
+                    if isinstance(block[param[0]], list):
+                        block[param[0]].append(param[1])
+                    else:
+                        _b = block[param[0]]
+                        block[param[0]] = [_b, param[1]]
+                else:
+                    block[param[0]] = param[1]
+
+                Log.save_received_packet(packet_header, packet_content)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/backup_job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/backup_job.py
new file mode 100644 (file)
index 0000000..1d11865
--- /dev/null
@@ -0,0 +1,217 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+
+from baculak8s.entities.file_info import DIRECTORY
+from baculak8s.io.packet_definitions import FILE_DATA_START
+from baculak8s.jobs.estimation_job import PVCDATA_GET_ERROR, EstimationJob
+from baculak8s.jobs.job_pod_bacula import DEFAULTRECVBUFFERSIZE
+from baculak8s.plugins.k8sbackend.baculaannotations import (
+    BaculaAnnotationsClass, BaculaBackupMode)
+from baculak8s.plugins.k8sbackend.baculabackup import BACULABACKUPPODNAME
+from baculak8s.plugins.k8sbackend.podexec import ExecStatus, exec_commands
+from baculak8s.util.respbody import parse_json_descr
+from baculak8s.util.boolparam import BoolParam
+
+BACKUP_START_PACKET = "BackupStart"
+BACKUP_PARAM_LABELS = "Resource Selector: {}"
+FILE_BACKUP_ERROR = "Error while reading file contents from the chosen Data Source: {}"
+POD_DATA_RECV_ERR = "Error in receiving data from bacula-backup Pod!"
+BA_MODE_ERROR = "Invalid annotations for Pod: {namespace}/{podname}. Backup Mode '{mode}' not supported!"
+BA_EXEC_STDOUT = "{}:{}"
+BA_EXEC_STDERR = "{} Error:{}"
+BA_EXEC_ERROR = "Pod Container execution: {}"
+
+
+class BackupJob(EstimationJob):
+    """
+        Job that contains the business logic
+        related to the backup mode of the Backend.
+        It depends upon a Plugin Class implementation
+        that retrieves backup data from the Plugin's Data Source.
+    """
+
+    def __init__(self, plugin, params):
+        super().__init__(plugin, params, BACKUP_START_PACKET)
+        _label = params.get('labels', None)
+        if _label is not None:
+            self._io.send_info(BACKUP_PARAM_LABELS.format(_label))
+
+    def execution_loop(self):
+        return super().processing_loop(estimate=False)
+
+    def process_file(self, data):
+        return self._backup_file(data)
+
+    def _backup_file(self, data):
+        file_info = data.get('fi')
+        super()._estimate_file(file_info)
+        if file_info.type != DIRECTORY:
+            self.__backup_data(file_info, data.get('spec'))
+        self._io.send_eod()
+
+    def __backup_data(self, info, spec_data):
+        self._io.send_command(FILE_DATA_START)
+        if spec_data is None:
+            self._handle_error(FILE_BACKUP_ERROR.format(info.name))
+        else:
+            for file_chunk in [spec_data[i:i+DEFAULTRECVBUFFERSIZE] for i in range(0, len(spec_data), DEFAULTRECVBUFFERSIZE)]:
+                self._io.send_data(str.encode(file_chunk))
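The chunked send in `__backup_data()` above can be sketched in isolation as follows. This is an illustrative stand-alone snippet, not plugin code; the buffer size here is a hypothetical small value chosen so the slicing is easy to see.

```python
# Hypothetical small buffer size for demonstration; the plugin uses
# DEFAULTRECVBUFFERSIZE from job_pod_bacula.
BUFSIZE = 4

def chunks(data, size=BUFSIZE):
    """Return successive fixed-size slices of data; the last may be shorter."""
    return [data[i:i + size] for i in range(0, len(data), size)]
```

Each slice is then encoded and sent separately, which bounds the memory used per send regardless of the total payload size.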
+
+    def __backup_pvcdata(self, namespace):
+        logging.debug('backup_pvcdata:data recv')
+        self._io.send_command(FILE_DATA_START)
+        response = self.connsrv.handle_connection(self.handle_pod_data_recv)
+        if 'error' in response:
+            self._handle_error(response['error'])
+            if 'should_remove_pod' in response:
+                self.delete_pod(namespace=namespace, force=True)
+            return False
+        logging.debug('backup_pvcdata:logs recv')
+        response = self.connsrv.handle_connection(self.handle_pod_logs)
+        if 'error' in response:
+            self._handle_error(response['error'])
+            return False
+        return True
+
+    def process_pvcdata(self, namespace, pvcdata):
+        status = None
+        if self.prepare_bacula_pod(pvcdata, namespace=namespace, mode='backup'):
+            super()._estimate_file(pvcdata)     # here to send info about pvcdata to plugin
+            status = self.__backup_pvcdata(namespace=namespace)
+            if status:
+                self._io.send_eod()
+                self.handle_tarstderr()
+            self.handle_delete_pod(namespace=namespace)
+        return status
+
+    def handle_pod_container_exec_command(self, corev1api, namespace, pod, runjobparam, failonerror=False):
+        podname = pod.get('name')
+        containers = pod.get('containers')
+        logging.debug("pod {} containers: {}".format(podname, containers))
+        # now check if run before job
+        container, command = BaculaAnnotationsClass.handle_run_job_container_command(pod.get(runjobparam))
+        if container is not None:
+            logging.info("container: {}".format(container))
+            logging.info("command: {}".format(command))
+            if container != '*':
+                # check if container exists
+                if container not in containers:
+                    # error
+                    logging.error("container {} not found".format(container))
+                    return False
+                containers = [container]
+            # here execute command
+            for cname in containers:
+                logging.info("executing command: {} on {}".format(command, cname))
+                outch, errch, infoch = exec_commands(corev1api, namespace, podname, cname, command)
+                logging.info("stdout:\n{}".format(outch))
+                if len(outch) > 0:
+                    outch = outch.rstrip('\n')
+                    self._io.send_info(BA_EXEC_STDOUT.format(runjobparam, outch))
+                logging.info("stderr:\n{}".format(errch))
+                if len(errch) > 0:
+                    errch = errch.rstrip('\n')
+                    self._io.send_warning(BA_EXEC_STDERR.format(runjobparam, errch))
+                execstatus = ExecStatus.check_status(infoch)
+                logging.info("Exec status: {}".format(execstatus))
+                if not execstatus:
+                    self._io.send_warning(BA_EXEC_ERROR.format(infoch.get('message')))
+                    if failonerror:
+                        self._handle_error("Failing job on request...")
+                        return False
+
+        return True
+
+    def process_pod_pvcdata(self, namespace, pod, pvcnames):
+        logging.debug("process_pod_pvcdata:{}/{} {}".format(namespace, pod, pvcnames))
+        status = None
+        corev1api = self._plugin.corev1api
+        backupmode = BaculaBackupMode.process_param(pod.get(BaculaAnnotationsClass.BackupMode, BaculaBackupMode.Snapshot))
+        if backupmode is None:
+            self._handle_error(BA_MODE_ERROR.format(namespace=namespace,
+                                                    podname=pod.get('name'),
+                                                    mode=pod.get(BaculaAnnotationsClass.BackupMode)))
+            return False
+
+        failonerror = BoolParam.handleParam(pod.get(BaculaAnnotationsClass.RunBeforeJobonError), True)      # the default is to fail job on error
+        # here we execute remote command before Pod backup
+        if not self.handle_pod_container_exec_command(corev1api, namespace, pod, BaculaAnnotationsClass.RunBeforeJob, failonerror):
+            logging.error("handle_pod_container_exec_command execution error!")
+            return False
+
+        requestedvolumes = [v.strip() for v in pvcnames.split(',')]
+        handledvolumes = []
+
+        # iterate on requested volumes for snapshot
+        logging.debug("iterate over requested vols for snapshot: {}".format(requestedvolumes))
+        for pvc in requestedvolumes:
+            pvcname = pvc
+            logging.debug("handling vol before snapshot: {}".format(pvcname))
+            if backupmode == BaculaBackupMode.Snapshot:
+                # snapshot if requested
+                pvcname = self.create_pvcclone(namespace, pvcname)
+                if pvcname is None:
+                    # error
+                    logging.error("create_pvcclone failed!")
+                    return False
+            logging.debug("handling vol after snapshot: {}".format(pvcname))
+            handledvolumes.append({
+                'pvcname': pvcname,
+                'pvc': pvc,
+                })
+
+        failonerror = BoolParam.handleParam(pod.get(BaculaAnnotationsClass.RunAfterSnapshotonError), False)     # the default is to ignore errors
+        # here we execute remote command after vol snapshot
+        if not self.handle_pod_container_exec_command(corev1api, namespace, pod, BaculaAnnotationsClass.RunAfterSnapshot, failonerror):
+            return False
+
+        # iterate on requested volumes for backup
+        logging.debug("iterate over requested vols for backup: {}".format(handledvolumes))
+        for volumes in handledvolumes:
+            pvc = volumes['pvc']
+            pvcname = volumes['pvcname']
+            # get pvcdata for this volume; example debug output:
+            # PVCDATA:plugintest-pvc-alone:{'name': 'plugintest-pvc-alone-baculaclone-lfxrra', 'node_name': None, 'storage_class_name': 'ocs-storagecluster-cephfs', 'capacity': '1Gi', 'fi': <FileInfo object>}
+            pvcdata = self._plugin.get_pvcdata_namespaced(namespace, pvcname, pvc)
+            if isinstance(pvcdata, dict) and 'error' in pvcdata:
+                self._handle_error(PVCDATA_GET_ERROR.format(parse_json_descr(pvcdata)))
+
+            else:
+                logging.debug('PVCDATA:{}:{}'.format(pvc, pvcdata))
+                logging.debug('PVCDATA FI.name:{}'.format(pvcdata.get('fi').name))
+                if len(pvcdata) > 0:
+                    status = self.process_pvcdata(namespace, pvcdata)
+
+        # iterate on requested volumes for delete snap
+        logging.debug("iterate over requested vols for delete snap: {}".format(handledvolumes))
+        for volumes in handledvolumes:
+            pvcname = volumes['pvcname']
+
+            if backupmode == BaculaBackupMode.Snapshot:
+                # snapshot delete if snapshot requested
+                status = self.delete_pvcclone(namespace, pvcname)
+
+        failonerror = BoolParam.handleParam(pod.get(BaculaAnnotationsClass.RunAfterJobonError), False)     # the default is to ignore errors
+        # here we execute remote command after Pod backup
+        if not self.handle_pod_container_exec_command(corev1api, namespace, pod, BaculaAnnotationsClass.RunAfterJob, failonerror):
+            return False
+
+        return status
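The clone/backup/delete lifecycle that `process_pod_pvcdata()` implements above can be summarized in a hedged sketch. The callables here are illustrative stand-ins for the plugin's `create_pvcclone`, `process_pvcdata`, and `delete_pvcclone` methods, not the real API.

```python
def backup_volumes(pvcs, snapshot, clone, backup, delete):
    """Clone each PVC when snapshotting, back up the handled volumes,
    then delete the clones. Aborts early if any clone fails."""
    handled = []
    for pvc in pvcs:
        name = clone(pvc) if snapshot else pvc
        if name is None:
            return False        # clone failed: abort, as the job does
        handled.append(name)
    for name in handled:
        backup(name)
    if snapshot:
        for name in handled:
            delete(name)
    return True
```

Note that all clones are created before any backup starts, so the snapshots of a multi-volume Pod are taken close together in time.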
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/estimation_job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/estimation_job.py
new file mode 100644 (file)
index 0000000..5b1eca4
--- /dev/null
@@ -0,0 +1,274 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+import re
+
+from baculak8s.entities.file_info import FileInfo
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.packet_definitions import ESTIMATION_START_PACKET
+from baculak8s.jobs.job_pod_bacula import (PVCDATA_GET_ERROR, JobPodBacula)
+from baculak8s.plugins.k8sbackend.baculaannotations import BaculaAnnotationsClass
+from baculak8s.util.respbody import parse_json_descr
+
+PATTERN_NOT_FOUND = "No matches found for pattern %s"
+NO_PV_FOUND = "No required Persistent Volumes found at the cluster"
+NO_SC_FOUND = "No required Storage Classes found at the cluster"
+NO_NS_FOUND = "No required Namespaces found at the cluster"
+PV_LIST_ERROR = "Cannot list PV objects. Err={}"
+PODS_LIST_ERROR = "Cannot list Pods objects. Err={}"
+SC_LIST_ERROR = "Cannot list StorageClass objects. Err={}"
+NS_LIST_ERROR = "Cannot list Namespace objects. Err={}"
+RES_LIST_ERROR = "Cannot list resource objects. Err={}"
+# PVCDATA_LIST_ERROR = "Cannot list PVC Data objects. Err={}"
+PROCESSING_NAMESPACE_INFO = "Processing namespace: {namespace}"
+PROCESSING_PVCDATA_START_INFO = "Start backup volume claim: {pvc}"
+PROCESSING_PVCDATA_STOP_INFO = "Finish backup volume claim: {pvc}"
+PROCESSING_PODBACKUP_START_INFO = "Start backup Pod: {namespace}/{podname}"
+PROCESSING_PODBACKUP_FINISH_INFO = "Finish backup Pod: {namespace}/{podname}"
+BA_PVC_NOT_FOUND_ERROR = "Requested volume claim: {pvc} on {namespace}/{podname} not found!"
+BA_PVCNAME_ERROR = "Invalid annotations for Pod: {namespace}/{podname}. {bavol} Required!"
+PROCESS_POD_PVCDATA_ERROR = "Cannot process Pod PVC Data backup!"
+LABEL_PARAM_INVALID_ERROR = "Label parameter ({}) is invalid!"
+
+
+class EstimationJob(JobPodBacula):
+    """
+        Job that contains the business logic
+        related to the estimation mode of the Backend.
+        It depends upon a Plugin Class implementation
+        that retrieves estimation data from the Plugin's Data Source.
+    """
+
+    def __init__(self, plugin, params, start_packet=None):
+        if not start_packet:
+            start_packet = ESTIMATION_START_PACKET
+        self.__start_packet = start_packet
+        super().__init__(plugin, DefaultIO(), params)
+        self.include_matches = []
+        self.regex_include_matches = []
+        self.exclude_matches = []
+        self.regex_exclude_matches = []
+        _sc = params.get('storageclass', [])
+        self.storageclassparam = _sc if len(_sc) > 0 else None
+        _pv = params.get('persistentvolume', [])
+        self.persistentvolumeparam = _pv if len(_pv) > 0 else None
+
+    def execute(self):
+        self._start(self.__start_packet)
+        self.execution_loop()
+        self.__verify_all_matches()
+        self._io.send_eod()
+
+    def execution_loop(self):
+        return self.processing_loop(estimate=True)
+
+    def processing_loop(self, estimate=False):
+        sc_list = self._plugin.list_all_storageclass(estimate=estimate)
+        logging.debug("processing list_all_storageclass:{}:nrfound:{}".format(self.storageclassparam, len(sc_list)))
+        if isinstance(sc_list, dict) and sc_list.get('exception'):
+            self._handle_error(SC_LIST_ERROR.format(parse_json_descr(sc_list)))
+
+        else:
+            if self.storageclassparam is not None and len(sc_list) == 0:
+                self._handle_error(NO_SC_FOUND)
+
+            for sc in sc_list:
+                logging.debug('processing sc:{}'.format(sc))
+                self.process_file(sc_list.get(sc))
+
+        pv_list = self._plugin.list_all_persistentvolumes(estimate=estimate)
+        logging.debug("processing list_all_persistentvolumes:{}:nrfound:{}".format(self.persistentvolumeparam, len(pv_list)))
+        if isinstance(pv_list, dict) and pv_list.get('exception'):
+            self._handle_error(PV_LIST_ERROR.format(parse_json_descr(pv_list)))
+
+        else:
+            if self.persistentvolumeparam is not None and len(pv_list) == 0:
+                self._handle_error(NO_PV_FOUND)
+
+            for pv in pv_list:
+                logging.debug('processing pv:' + str(pv))
+                self.process_file(pv_list.get(pv))
+
+        ns_list = self._plugin.list_all_namespaces(estimate=estimate)
+        logging.debug("processing list_all_namespaces:nrfound:{}".format(len(ns_list)))
+        if isinstance(ns_list, dict) and ns_list.get('exception'):
+            self._handle_error(NS_LIST_ERROR.format(parse_json_descr(ns_list)))
+
+        else:
+            if len(self._params.get('namespace', '')) != 0 and len(ns_list) == 0:
+                self._handle_error(NO_NS_FOUND)
+
+            for nsname in ns_list:
+                ns = ns_list.get(nsname)
+                logging.debug('processing ns:{}'.format(ns))
+                if not estimate:
+                    self._io.send_info(PROCESSING_NAMESPACE_INFO.format(namespace=ns['name']))
+                self.process_file(ns)
+                nsdata = self._plugin.list_namespaced_objects(nsname, estimate=estimate)
+                logging.debug('NSDATA:{}'.format([ns.keys() for ns in nsdata]))         # limit debug output
+                for sub in nsdata:
+                    # sub is a list of different resource types
+                    if isinstance(sub, dict) and sub.get('exception'):
+                        self._handle_error(RES_LIST_ERROR.format(parse_json_descr(sub)))
+                    else:
+                        for res in sub:
+                            self.process_file(sub.get(res))
+
+                podsannotated = self._plugin.get_annotated_namespaced_pods_data(nsname, estimate=estimate)
+                logging.debug("processing get_annotated_namespaced_pods_data:{}:nrfound:{}".format(nsname,
+                                                                                                   len(podsannotated)))
+                # here we have a list of pods which are annotated
+                if podsannotated is not None:
+                    if isinstance(podsannotated, dict) and podsannotated.get('exception'):
+                        self._handle_error(PODS_LIST_ERROR.format(parse_json_descr(podsannotated)))
+                    else:
+                        for pod in podsannotated:
+                            logging.debug('PODDATA:{}'.format(pod))
+                            # this is a required parameter!
+                            pvcnames = pod.get(BaculaAnnotationsClass.BackupVolume)
+                            if pvcnames is None:
+                                self._handle_error(BA_PVCNAME_ERROR.format(namespace=nsname,
+                                                                           podname=pod.get('name'),
+                                                                           bavol=BaculaAnnotationsClass.BaculaPrefix +
+                                                                           BaculaAnnotationsClass.BackupVolume))
+                                continue
+                            else:
+                                podname = pod.get('name')
+                                if not estimate:
+                                    self._io.send_info(PROCESSING_PODBACKUP_START_INFO.format(namespace=nsname,
+                                                                                              podname=podname))
+                                status = self.process_pod_pvcdata(nsname, pod, pvcnames)
+                                if status is None:
+                                    logging.error("process_pod_pvcdata returned no status!")
+                                    self._handle_error(PROCESS_POD_PVCDATA_ERROR)
+                                    break
+                                if not estimate:
+                                    self._io.send_info(PROCESSING_PODBACKUP_FINISH_INFO.format(namespace=nsname,
+                                                                                               podname=podname))
+
+                pvcdatalist = self._plugin.list_pvcdata_for_namespace(nsname, estimate=estimate)
+                logging.debug("processing list_pvcdata_for_namespace:{}:nrfound:{}".format(nsname, len(pvcdatalist)))
+                if pvcdatalist is not None:
+                    if isinstance(pvcdatalist, dict) and pvcdatalist.get('exception'):
+                        self._handle_error(PV_LIST_ERROR.format(parse_json_descr(pvcdatalist)))
+                    else:
+                        for pvc in pvcdatalist:
+                            pvcdata = pvcdatalist.get(pvc)
+                            logging.debug('PVCDATA:{}:{}'.format(pvc, pvcdata))
+                            if not estimate:
+                                self._io.send_info(PROCESSING_PVCDATA_START_INFO.format(pvc=pvc))
+                            status = self.process_pvcdata(nsname, pvcdata)
+                            if status is None:
+                                # None means unable to prepare listening service during backup
+                                break
+                            if not estimate and status:
+                                self._io.send_info(PROCESSING_PVCDATA_STOP_INFO.format(pvc=pvc))
+
+    def _estimate_file(self, data):
+        logging.debug('{}'.format(data))
+        if isinstance(data, dict):
+            file_info = data.get('fi')
+        elif isinstance(data, FileInfo):
+            file_info = data
+        else:
+            raise ValueError('Invalid data in estimate_file')
+        logging.debug('file_info: {}'.format(file_info))
+        self._io.send_file_info(file_info)
+
+    def process_file(self, data):
+        return self._estimate_file(data)
+
+    def process_pvcdata(self, namespace, pvcdata):
+        return self._estimate_file(pvcdata)
+
+    def process_pod_pvcdata(self, namespace, pod, pvcnames):
+        # iterate on requested pvc
+        logging.debug("process_pod_pvcdata in Estimate mode")
+        for pvc in pvcnames.split(','):
+            # get pvcdata for this volume
+            pvcdata = self._plugin.get_pvcdata_namespaced(namespace, pvc)
+            if isinstance(pvcdata, dict) and 'exception' in pvcdata:
+                self._handle_error(PVCDATA_GET_ERROR.format(parse_json_descr(pvcdata)))
+            else:
+                logging.debug('PVCDATA:{}:{}'.format(pvc, pvcdata))
+                if len(pvcdata) > 0:
+                    self._estimate_file(pvcdata)
+        return True
+
+    def __match_includes(self, info):
+        if len(self._params.get("includes", [])) <= 0:
+            return True
+
+        any_match = info.match_any_glob(
+            self._params["includes"],
+            self.include_matches
+        )
+
+        return any_match
+
+    def __match_regex_includes(self, info):
+        if len(self._params.get("regex_includes", [])) <= 0:
+            return True
+
+        any_match = info.match_any_regex(
+            self._params["regex_includes"],
+            self.regex_include_matches
+        )
+
+        return any_match
+
+    def __match_excludes(self, info):
+        if len(self._params.get("excludes", [])) <= 0:
+            return False
+
+        any_match = info.match_any_glob(
+            self._params["excludes"],
+            self.exclude_matches
+        )
+
+        return any_match
+
+    def __match_regex_excludes(self, info):
+        if len(self._params.get("regex_excludes", [])) <= 0:
+            return False
+
+        any_match = info.match_any_regex(
+            self._params["regex_excludes"],
+            self.regex_exclude_matches
+        )
+
+        return any_match
+
+    def __verify_all_matches(self):
+        self.__verify_matches("includes", self.include_matches)
+        self.__verify_matches("regex_includes", self.regex_include_matches)
+        self.__verify_matches("excludes", self.exclude_matches)
+        self.__verify_matches("regex_excludes", self.regex_exclude_matches)
+
+    def __verify_matches(self, pattern_type, matches):
+
+        if len(self._params.get(pattern_type, [])) <= 0:
+            return
+
+        patterns = self._params[pattern_type]
+
+        # Assures that all patterns got at least one match
+        for pattern in patterns:
+            if pattern not in matches:
+                error_msg = PATTERN_NOT_FOUND % pattern
+                self._handle_error(error_msg)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job.py
new file mode 100644 (file)
index 0000000..69927d7
--- /dev/null
@@ -0,0 +1,97 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+import sys
+from abc import ABCMeta, abstractmethod
+
+from baculak8s.io.log import Log
+
+INVALID_START_TEMPLATE = "Invalid start packet. Expected packet: {}"
+KUBERNETES_CODE_INFO = "Connected to Kubernetes {major}.{minor} - {git_version}."
+
+
+class Job(metaclass=ABCMeta):
+    """
+        Abstract Base Class for all the Backend Jobs
+    """
+
+    def __init__(self, plugin, io, params):
+        self._plugin = plugin
+        self._io = io
+        self._params = params
+
+    @abstractmethod
+    def execute(self):
+        """
+            Executes the Job
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def execution_loop(self):
+        """
+            Execution loop subroutine
+        """
+        raise NotImplementedError
+
+    def _start(self, expected_start_packet):
+        self._read_start(expected_start_packet, onError=self._abort)
+        self._connect()
+        self._io.send_eod()
+
+    def _read_start(self, start_packet, onError):
+        _, packet = self._io.read_packet()
+        if packet != start_packet:
+            self._io.send_abort(INVALID_START_TEMPLATE.format(start_packet))
+            onError()
+
+    def _connect(self):
+        response = self._plugin.connect()
+
+        if 'error' in response:
+            logging.debug("response data:" + str(response))
+            if 'error_code' in response:
+                self._io.send_connection_error(response['error_code'])
+            else:
+                self._io.send_connection_error(0, strerror=response['error'])
+            # Reads termination packet
+            packet_header = self._io.read_line()
+            Log.save_received_termination(packet_header)
+            sys.exit(0)
+        else:
+            if self._params.get("type", None) == 'b':
+                # display some info to user
+                data = response.get('response')
+                if data is not None:
+                    self._io.send_info(KUBERNETES_CODE_INFO.format(
+                        major=data.major,
+                        minor=data.minor,
+                        git_version=data.git_version,
+                    ))
+
+    def _handle_error(self, error_message):
+        if self._params.get("abort_on_error", None) == "1":
+            self._io.send_abort(error_message)
+            self._abort()
+        else:
+            self._io.send_error(error_message)
+
+    def _abort(self):
+        self._plugin.disconnect()
+        sys.exit(0)
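The `Job` base class above uses the template-method pattern: abstract `execute()` and `execution_loop()` hooks with shared `_start`/`_connect`/`_handle_error` plumbing in the base. A minimal stand-alone sketch of that pattern, with a hypothetical `EchoJob` subclass rather than a real plugin job:

```python
from abc import ABCMeta, abstractmethod

class BaseJob(metaclass=ABCMeta):
    """Abstract base: subclasses must implement execute()."""

    @abstractmethod
    def execute(self):
        raise NotImplementedError

class EchoJob(BaseJob):
    def execute(self):
        return "done"
```

Instantiating the abstract base directly raises `TypeError`, which is what guarantees every concrete job in the backend provides its own execution logic.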
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_factory.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_factory.py
new file mode 100644 (file)
index 0000000..39df998
--- /dev/null
@@ -0,0 +1,55 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from baculak8s.jobs.backup_job import BackupJob
+from baculak8s.jobs.estimation_job import EstimationJob
+from baculak8s.jobs.listing_job import ListingJob
+from baculak8s.jobs.query_job import QueryJob
+from baculak8s.jobs.restore_job import RestoreJob
+from baculak8s.services.job_info_service import (TYPE_BACKUP, TYPE_ESTIMATION,
+                                                 TYPE_RESTORE)
+
+
+class JobFactory(object):
+    """
+        Creates a Job that will be executed by the Backend
+
+        :param job_type: The type of the created Job (Backup, Restore, Estimation)
+        :param plugin: The plugin that this Job should use
+
+        :raise: ValueError, if an invalid job_type was provided
+
+    """
+
+    @staticmethod
+    def create(params, plugin):
+        if params["type"] == TYPE_BACKUP:
+            return BackupJob(plugin, params)
+
+        elif params["type"] == TYPE_RESTORE:
+            return RestoreJob(plugin, params)
+
+        elif params["type"] == TYPE_ESTIMATION:
+            if params.get("listing", None) is not None:
+                return ListingJob(plugin, params)
+            if params.get("query", None) is not None:
+                return QueryJob(plugin, params)
+
+            return EstimationJob(plugin, params)
+
+        else:
+            raise ValueError("Invalid Job Type")
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_pod_bacula.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/job_pod_bacula.py
new file mode 100644 (file)
index 0000000..a68277d
--- /dev/null
@@ -0,0 +1,332 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+import time
+from abc import ABCMeta
+
+import yaml
+from baculak8s.jobs.job import Job
+from baculak8s.plugins.k8sbackend.baculabackup import (BACULABACKUPIMAGE,
+                                                       BACULABACKUPPODNAME,
+                                                       ImagePullPolicy,
+                                                       prepare_backup_pod_yaml)
+from baculak8s.plugins.k8sbackend.pvcclone import prepare_backup_clone_yaml
+from baculak8s.util.respbody import parse_json_descr
+from baculak8s.util.sslserver import DEFAULTTIMEOUT, ConnectionServer
+from baculak8s.util.token import generate_token
+
+DEFAULTRECVBUFFERSIZE = 64 * 1048
+PLUGINHOST_NONE_ERR = "PLUGINHOST parameter is missing and cannot be autodetected. " \
+                      "Cannot continue with pvcdata backup!"
+POD_EXECUTION_ERR = "Cannot successfully start bacula-backup pod in expected time!"
+POD_REMOVE_ERR = "Unable to remove proxy Pod {podname}! Other operations with proxy Pod will fail!"
+POD_EXIST_ERR = "Job already running in '{namespace}' namespace. Check logs or delete {podname} Pod manually."
+TAR_STDERR_UNKNOWN = "Unknown error. You should check Pod logs for possible explanation."
+PLUGINPORT_VALUE_ERR = "Cannot use provided pluginport={port} option. Using the default instead!"
+FDPORT_VALUE_ERR = "Cannot use provided fdport={port} option. Using the default instead!"
+POD_YAML_PREPARED_INFO = "Prepare backup Pod with: {image} <{pullpolicy}> {pluginhost}:{pluginport}"
+POD_YAML_PREPARED_INFO_NODE = "Prepare Bacula Pod on: {nodename} with: {image} <{pullpolicy}> {pluginhost}:{pluginport}"
+CANNOT_CREATE_BACKUP_POD_ERR = "Cannot create backup pod. Err={}"
+CANNOT_REMOVE_BACKUP_POD_ERR = "Cannot remove backup pod. Err={}"
+PVCDATA_GET_ERROR = "Cannot get PVC Data object Err={}"
+PVCCLONE_YAML_PREPARED_INFO = "Prepare snapshot: {namespace}/{snapname} storage: {storage} capacity: {capacity}"
+CANNOT_CREATE_PVC_CLONE_ERR = "Cannot create PVC snapshot. Err={}"
+CANNOT_REMOVE_PVC_CLONE_ERR = "Cannot remove PVC snapshot. Err={}"
+CANNOT_START_CONNECTIONSERVER = "Cannot start ConnectionServer. Err={}"
+
+
+class JobPodBacula(Job, metaclass=ABCMeta):
+    """
+    Common base class for all jobs that handle pvcdata backups using
+    the bacula-backup Pod.
+    """
+
+    def __init__(self, plugin, io, params):
+        super().__init__(plugin, io, params)
+        self.connsrv = None
+        self.fdaddr = params.get("fdaddress")
+        self.fdport = params.get("fdport", 9104)
+        self.pluginhost = params.get("pluginhost", self.fdaddr)
+        self.pluginport = params.get("pluginport", self.fdport)
+        self.certfile = params.get('fdcertfile')
+        _keyfile = params.get('fdkeyfile')
+        self.keyfile = _keyfile if _keyfile is not None else self.certfile
+        self._prepare_err = False
+        self.token = None
+        self.jobname = '{name}:{jobid}'.format(name=params.get('name', 'undefined'), jobid=params.get('jobid', '0'))
+        self.timeout = params.get('timeout', DEFAULTTIMEOUT)
+        try:
+            self.timeout = int(self.timeout)
+        except ValueError:
+            self.timeout = DEFAULTTIMEOUT
+        self.timeout = max(1, self.timeout)
+        self.tarstderr = ''
+        self.tarexitcode = None
+        self.backupimage = params.get('baculaimage', BACULABACKUPIMAGE)
+        self.imagepullpolicy = ImagePullPolicy.process_param(params.get('imagepullpolicy'))
+
+    def handle_pod_logs(self, connstream):
+        logmode = ''
+        self.tarstderr = ''
+        with connstream.makefile(mode='r') as fd:
+            self.tarexitcode = fd.readline().strip()
+            logging.debug('handle_pod_logs:tarexitcode:{}'.format(self.tarexitcode))
+            while True:
+                data = fd.readline()
+                if not data:
+                    break
+                logging.debug('LOGS:{}'.format(data.strip()))
+                if data.startswith('---- stderr ----'):
+                    logmode = 'stderr'
+                    continue
+                elif data.startswith('---- list ----'):
+                    logmode = 'list'
+                    continue
+                elif data.startswith('---- end ----'):
+                    break
+                if logmode == 'stderr':
+                    self.tarstderr += data
+                    continue
+                elif logmode == 'list':
+                    # no listing feature yet
+                    continue
+
+    def handle_pod_data_recv(self, connstream):
+        while True:
+            data = self.connsrv.streamrecv(DEFAULTRECVBUFFERSIZE)
+            if not data:
+                logging.debug('handle_pod_data_recv:EOT')
+                break
+            logging.debug('handle_pod_data_recv:D' + str(len(data)))
+            self._io.send_data(data)
+
+    def handle_pod_data_send(self, connstream):
+        while True:
+            data = self._io.read_data()
+            if not data:
+                logging.debug('handle_pod_data_send:EOT')
+                break
+            self.connsrv.streamsend(data)
+            logging.debug('handle_pod_data_send:D{}'.format(len(data)))
+
+    def prepare_pod_yaml(self, namespace, pvcdata, mode='backup'):
+        logging.debug('pvcdata: {}'.format(pvcdata))
+        if self.pluginhost is None:
+            self._handle_error(PLUGINHOST_NONE_ERR)
+            self._prepare_err = True
+            return None
+        pport = self.pluginport
+        try:
+            self.pluginport = int(self.pluginport)
+        except ValueError:
+            self.pluginport = 9104
+            logging.warning(PLUGINPORT_VALUE_ERR.format(port=pport))
+            self._io.send_warning(PLUGINPORT_VALUE_ERR.format(port=pport))
+        pvcname = pvcdata.get('name')
+        node_name = pvcdata.get('node_name')
+
+        podyaml = prepare_backup_pod_yaml(mode=mode, nodename=node_name, host=self.pluginhost, port=self.pluginport,
+                                          token=self.token, namespace=namespace, pvcname=pvcname, image=self.backupimage,
+                                          imagepullpolicy=self.imagepullpolicy, job=self.jobname)
+        if node_name is None:
+            self._io.send_info(POD_YAML_PREPARED_INFO.format(
+                image=self.backupimage,
+                pullpolicy=self.imagepullpolicy,
+                pluginhost=self.pluginhost,
+                pluginport=self.pluginport
+            ))
+        else:
+            self._io.send_info(POD_YAML_PREPARED_INFO_NODE.format(
+                nodename=node_name,
+                image=self.backupimage,
+                pullpolicy=self.imagepullpolicy,
+                pluginhost=self.pluginhost,
+                pluginport=self.pluginport
+            ))
+        return podyaml
+
+    def prepare_clone_yaml(self, namespace, pvcname, capacity, storage_class):
+        logging.debug('prepare_clone_yaml: {} {} {} {}'.format(namespace, pvcname, capacity, storage_class))
+        if namespace is None or pvcname is None or capacity is None or storage_class is None:
+            logging.error("Invalid params to pvc clone!")
+            return None, None
+        pvcyaml, snapname = prepare_backup_clone_yaml(namespace, pvcname, capacity, storage_class)
+        self._io.send_info(PVCCLONE_YAML_PREPARED_INFO.format(
+            namespace=namespace,
+            snapname=snapname,
+            storage=storage_class,
+            capacity=capacity
+        ))
+        return pvcyaml, snapname
+
+    def prepare_connection_server(self):
+        if self.connsrv is None:
+            if self.fdaddr is None:
+                self.fdaddr = '0.0.0.0'
+            fport = self.fdport
+            try:
+                self.fdport = int(self.fdport)
+            except ValueError:
+                self.fdport = 9104
+                logging.warning(FDPORT_VALUE_ERR.format(port=fport))
+                self._handle_error(FDPORT_VALUE_ERR.format(port=fport))
+            logging.debug("prepare_connection_server:New ConnectionServer: {}:{}".format(
+                str(self.fdaddr),
+                str(self.fdport)))
+            self.connsrv = ConnectionServer(self.fdaddr, self.fdport,
+                                            token=self.token,
+                                            certfile=self.certfile,
+                                            keyfile=self.keyfile,
+                                            timeout=self.timeout)
+            response = self.connsrv.listen()
+            if isinstance(response, dict) and 'error' in response:
+                logging.debug("RESPONSE:{}".format(response))
+                self._handle_error(CANNOT_START_CONNECTIONSERVER.format(parse_json_descr(response)))
+                return False
+        else:
+            logging.debug("prepare_connection_server:Reusing ConnectionServer!")
+            self.connsrv.token = self.token
+        return True
+
+    def execute_pod(self, namespace, podyaml):
+        exist = self._plugin.check_pod(namespace=namespace, name=BACULABACKUPPODNAME)
+        if exist is not None:
+            logging.debug('execute_pod:exist!')
+            response = False
+            for a in range(self.timeout):
+                time.sleep(1)
+                response = self._plugin.check_gone_backup_pod(namespace)
+                if isinstance(response, dict) and 'error' in response:
+                    self._handle_error(CANNOT_REMOVE_BACKUP_POD_ERR.format(parse_json_descr(response)))
+                    return False
+                else:
+                    if response:
+                        break
+            if not response:
+                self._handle_error(POD_EXIST_ERR.format(namespace=namespace, podname=BACULABACKUPPODNAME))
+                return False
+
+        poddata = yaml.safe_load(podyaml)
+        response = self._plugin.create_backup_pod(namespace, poddata)
+        if isinstance(response, dict) and 'error' in response:
+            self._handle_error(CANNOT_CREATE_BACKUP_POD_ERR.format(parse_json_descr(response)))
+        else:
+            for seq in range(self.timeout):
+                time.sleep(1)
+                isready = self._plugin.backup_pod_isready(namespace, seq)
+                if isinstance(isready, dict) and 'error' in isready:
+                    self._handle_error(CANNOT_CREATE_BACKUP_POD_ERR.format(parse_json_descr(isready)))
+                    break
+                elif isready:
+                    return True
+        return False
+
+    def execute_pvcclone(self, namespace, clonename, cloneyaml):
+        pass
+
+    def delete_pod(self, namespace, force=False):
+        for a in range(self.timeout):
+            time.sleep(1)
+            response = self._plugin.check_gone_backup_pod(namespace, force=force)
+            if isinstance(response, dict) and 'error' in response:
+                self._handle_error(CANNOT_REMOVE_BACKUP_POD_ERR.format(parse_json_descr(response)))
+            else:
+                logging.debug('delete_pod:isgone:{}'.format(response))
+                if response:
+                    return True
+        return False
+
+    def delete_pvcclone(self, namespace, clonename, force=False):
+        for a in range(self.timeout):
+            time.sleep(1)
+            response = self._plugin.check_gone_pvcclone(namespace, clonename, force=force)
+            if isinstance(response, dict) and 'error' in response:
+                self._handle_error(CANNOT_REMOVE_PVC_CLONE_ERR.format(parse_json_descr(response)))
+            else:
+                logging.debug('delete_pvcclone:isgone:{}'.format(response))
+                if response:
+                    return True
+        return False
+
+    def handle_delete_pod(self, namespace):
+        if not self.delete_pod(namespace=namespace):
+            self._handle_error(POD_REMOVE_ERR.format(podname=BACULABACKUPPODNAME))
+
+    def handle_tarstderr(self):
+        if self.tarexitcode != '0' or len(self.tarstderr) > 0:
+            # format or prepare error message
+            if not len(self.tarstderr):
+                self.tarstderr = TAR_STDERR_UNKNOWN
+            else:
+                self.tarstderr = self.tarstderr.rstrip('\n')
+            # classify it as error or warning
+            if self.tarexitcode != '0':
+                self._handle_error(self.tarstderr)
+            else:
+                self._io.send_warning(self.tarstderr)
+
+    def prepare_bacula_pod(self, pvcdata, namespace=None, mode='backup'):
+        if self._prepare_err:
+            # first prepare yaml was unsuccessful, we can't recover from this error
+            return False
+        self.token = generate_token()
+        if namespace is None:
+            namespace = pvcdata.get('fi').namespace
+        logging.debug('prepare_bacula_pod:token={} namespace={}'.format(self.token, namespace))
+        podyaml = self.prepare_pod_yaml(namespace, pvcdata, mode=mode)
+        if podyaml is None:
+            # error preparing yaml
+            self._prepare_err = True
+            return False
+        if not self.prepare_connection_server():
+            self._prepare_err = True
+            return False
+        logging.debug('prepare_bacula_pod:start pod')
+        if not self.execute_pod(namespace, podyaml):
+            self._handle_error(POD_EXECUTION_ERR)
+            return False
+        return True
+
+    def create_pvcclone(self, namespace, pvcname):
+        clonename = None
+        logging.debug("pvcclone for:{}/{}".format(namespace, pvcname))
+        pvcdata = self._plugin.get_pvcdata_namespaced(namespace, pvcname)
+        if isinstance(pvcdata, dict) and 'exception' in pvcdata:
+            self._handle_error(PVCDATA_GET_ERROR.format(parse_json_descr(pvcdata)))
+        else:
+            logging.debug('PVCDATA_ORIG:{}:{}'.format(pvcname, pvcdata))
+            cloneyaml, clonename = self.prepare_clone_yaml(namespace, pvcname, pvcdata.get('capacity'), pvcdata.get('storage_class_name'))
+            if cloneyaml is None or clonename is None:
+                # error preparing yaml
+                self._prepare_err = True
+                return None
+            clonedata = yaml.safe_load(cloneyaml)
+            response = self._plugin.create_pvc_clone(namespace, clonedata)
+            if isinstance(response, dict) and 'error' in response:
+                self._handle_error(CANNOT_CREATE_PVC_CLONE_ERR.format(parse_json_descr(response)))
+                return None
+
+        return clonename
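
The log stream consumed by `handle_pod_logs` above follows a simple line protocol: the first line carries the tar exit code, then `---- stderr ----` and `---- list ----` markers switch sections until `---- end ----` terminates the stream. A minimal standalone sketch of that parsing logic (a hypothetical helper for illustration, not part of the plugin):

```python
def parse_pod_log_stream(lines):
    """Parse the bacula-backup pod log protocol: the first line is the
    tar exit code; subsequent lines belong to the section selected by
    the most recent '---- stderr ----' / '---- list ----' marker, and
    '---- end ----' terminates the stream."""
    it = iter(lines)
    exitcode = next(it, '').strip()
    stderr, listing = [], []
    mode = ''
    for line in it:
        if line.startswith('---- stderr ----'):
            mode = 'stderr'
        elif line.startswith('---- list ----'):
            mode = 'list'
        elif line.startswith('---- end ----'):
            break
        elif mode == 'stderr':
            stderr.append(line)
        elif mode == 'list':
            listing.append(line)
    return exitcode, ''.join(stderr), listing
```

An exit code other than `'0'` or a non-empty stderr section is what `handle_tarstderr` later turns into a job error or warning.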
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/listing_job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/listing_job.py
new file mode 100644 (file)
index 0000000..53f2658
--- /dev/null
@@ -0,0 +1,61 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.jobs.job import Job
+from baculak8s.util.respbody import parse_json_descr
+
+LISTING_START = "ListingStart"
+LISTING_EMPTY_RESULT = "Listing returned an empty result"
+LISTING_ERROR_RESPONSE = "Listing returned error response: {}"
+
+
+class ListingJob(Job):
+    """
+        Job that contains the business logic
+        related to the listing mode of the Backend.
+        It depends upon a Plugin class implementation
+        that retrieves listing data from the Plugin's Data Source.
+    """
+
+    def __init__(self, plugin, params):
+        super().__init__(plugin, DefaultIO(), params)
+
+    def execute(self):
+        self._start(LISTING_START)
+        self.execution_loop()
+        self._io.send_eod()
+
+    def execution_loop(self):
+        self.__listing_loop()
+
+    def __listing_loop(self):
+        found_any = False
+        file_info_list = self._plugin.list_in_path(self._params["listing"])
+        if isinstance(file_info_list, dict) and 'error' in file_info_list:
+            logging.warning(file_info_list)
+            self._io.send_warning(LISTING_ERROR_RESPONSE.format(parse_json_descr(file_info_list)))
+        else:
+            for file_info in file_info_list:
+                found_any = True
+                self._io.send_file_info(file_info_list.get(file_info).get('fi'))
+
+            if not found_any:
+                self._io.send_warning(LISTING_EMPTY_RESULT)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/query_job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/query_job.py
new file mode 100644 (file)
index 0000000..bf64af8
--- /dev/null
@@ -0,0 +1,57 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.jobs.job import Job
+from baculak8s.util.respbody import parse_json_descr
+
+QUERY_START = "QueryStart"
+QUERY_ERROR_RESPONSE = "QueryParam returned error response: {}"
+
+
+class QueryJob(Job):
+    """
+        Job that contains the business logic
+        related to the queryParams mode of the Backend.
+        It depends upon a Plugin class implementation
+        that retrieves query parameter data from the Plugin's Data Source.
+    """
+
+    def __init__(self, plugin, params):
+        super().__init__(plugin, DefaultIO(), params)
+
+    def execute(self):
+        self._start(QUERY_START)
+        self.execution_loop()
+        self._io.send_eod()
+
+    def execution_loop(self):
+        self.__query_loop()
+
+    def __query_loop(self):
+        found_any = False
+        query_param_list = self._plugin.query_parameter(self._params["query"])
+        if isinstance(query_param_list, dict) and 'error' in query_param_list:
+            logging.warning(query_param_list)
+            self._io.send_warning(QUERY_ERROR_RESPONSE.format(parse_json_descr(query_param_list)))
+        else:
+            for param_data in query_param_list:
+                found_any = True
+                self._io.send_query_response(param_data)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/restore_job.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/jobs/restore_job.py
new file mode 100644 (file)
index 0000000..e4cadf5
--- /dev/null
@@ -0,0 +1,267 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+import re
+import time
+
+from baculak8s.entities.file_info import DIRECTORY, EMPTY_FILE, NOT_EMPTY_FILE
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.io.jobs.restore_io import (COMMA_SEPARATOR_NOT_SUPPORTED,
+                                          FILE_ERROR_TEMPLATE,
+                                          FILE_TRANSFER_START,
+                                          RESTORE_LOOP_ERROR, RESTORE_START,
+                                          SKIP_PACKET, SUCCESS_PACKET,
+                                          FileContentReader, RestoreIO,
+                                          RestorePacket)
+from baculak8s.jobs.job_pod_bacula import JobPodBacula
+from baculak8s.plugins.k8sbackend.k8sfileinfo import k8sfile2objname
+from baculak8s.services.job_info_service import (REPLACE_ALWAYS,
+                                                 REPLACE_NEVER)
+from baculak8s.util.respbody import parse_json_descr
+from baculak8s.util.sslserver import DEFAULTTIMEOUT
+
+NS_DOESNOT_EXIST_ERR = "Namespace '{namespace}' does not exist! You should restore the namespace too!"
+PVC_DOESNOT_EXIST_ERR = "PVC: '{pvcname}' does not exist or is not mounted! " \
+                        "You should restore the PVC config or mount it in a Pod!"
+PVC_ISNOTREADY_ERR = "PVC: '{pvcname}' did not become ready in the expected time! " \
+                     "You should check the PVC config or logs!"
+SERVICE_ACCOUNT_TOKEN_RESTORE_INFO = "Unable to restore service-account-token {token}. Regenerated a new one."
+RESTORE_RES_ERR = "Cannot restore resource object. Err={}"
+PVC_STATUS_ERR = "PVC: '{}' invalid status! Err={}"
+
+
+class RestoreJob(JobPodBacula):
+    """
+        Job that contains the business logic
+        related to the restore mode of the Backend.
+        It depends upon a Plugin class implementation
+        that sends data to the Plugin's Data Source.
+    """
+
+    def __init__(self, plugin, params):
+        super().__init__(plugin, RestoreIO(), params)
+        self.ns = {}
+        self.localrestore = "where" in self._params and self._params["where"] != ""
+
+    def execution_loop(self):
+        return self.__restore_loop()
+
+    def execute(self):
+        self._start(RESTORE_START)
+        self.execution_loop()
+        self._io.send_eod()
+
+    def __restore_loop(self):
+        file_info = None
+        while True:
+            packet_type, packet = self._io.next_loop_packet(onError=self._abort)
+            if packet_type == RestorePacket.RESTORE_END:
+                break
+            elif packet_type == RestorePacket.FILE_INFO:
+                file_info = packet
+                self.restore_object(file_info)
+            else:
+                self._io.send_abort(RESTORE_LOOP_ERROR)
+                self._abort()
+
+    def restore_object(self, file_info):
+        dorestore = self.__handle_file_info(file_info)
+        logging.debug('restore_object:dorestore:'+str(dorestore))
+        if dorestore:
+            # everything is ok, so do a restore
+            reader = FileContentReader()
+            self._io.send_command(SUCCESS_PACKET)
+            self._read_start(FILE_TRANSFER_START, onError=self._abort)
+
+            if self.localrestore or file_info.objtype != K8SObjType.K8SOBJ_PVCDATA:
+                self.__restore_file(file_info, reader)
+            else:
+                namespace = file_info.namespace
+                self.__restore_pvcdata(namespace)
+
+        elif dorestore is not None:
+            self._io.send_command(SKIP_PACKET)
+
+    def __handle_file_info(self, file_info):
+        logging.debug('handle_file_info:fileinfo: {}'.format(file_info))
+        dorestore = True
+        if file_info.type in [EMPTY_FILE, NOT_EMPTY_FILE]:
+            if "replace" in self._params and not self.__should_replace(file_info):
+                dorestore = False
+
+            else:
+                if self.localrestore:
+                    file_info = self._plugin.apply_where_parameter(file_info, self._params["where"])
+                    if "regexwhere" in self._params and self._params["regexwhere"] != '':
+                        self.__apply_regexwhere_param(file_info)
+
+                else:
+                    if file_info.objtype == K8SObjType.K8SOBJ_PVCDATA:
+                        dorestore = self.__prepare_pvcdata_restore(file_info)
+
+        elif file_info.type == DIRECTORY:
+            # We just ignore it for the current implementation
+            dorestore = False
+
+        logging.debug('handle_file_info:dorestore: {}'.format(dorestore))
+        return dorestore
+
+    def __prepare_pvcdata_restore(self, file_info):
+        """
+            A successful pvcdata restore requires the following prerequisites:
+            - the namespace has to exist, or at least be restored by this job
+            - the PVC has to exist, or at least be restored by this job
+        :param file_info: pvcdata FileInfo for restore
+        :return:
+        """
+
+        namespace = file_info.namespace
+        nsdata = self.ns.get(namespace)
+        if nsdata is None:
+            # no information about namespace, get it
+            logging.debug('prepare_pvcdata_restore:no previous ns check. do it!')
+            nsexist = self._plugin.check_namespace(namespace)
+            nsdata = {
+                'exist': nsexist,
+            }
+            self.ns[namespace] = nsdata
+            if nsexist is None:
+                logging.debug('prepare_pvcdata_restore ns:{} not exist!'.format(namespace))
+                self._handle_error(NS_DOESNOT_EXIST_ERR.format(namespace=namespace))
+                return None
+        else:
+            if nsdata.get('exist', None) is None:
+                logging.debug('prepare_pvcdata_restore ns:cached:{} not exist!'.format(namespace))
+                return False
+        logging.debug('prepare_pvcdata_restore ns: {} found.'.format(namespace))
+
+        pvcdatalist = nsdata.get('pvcs')
+        if pvcdatalist is None:
+            # grab pvcs
+            pvcdatalist = self._plugin.list_pvcdata_for_namespace(namespace, allpvcs=True)
+            logging.debug('prepare_pvcdata_restore pvcdatalist:{}'.format(pvcdatalist))
+            self.ns[namespace].update({
+                'pvcs': pvcdatalist,
+            })
+
+        pvcname = k8sfile2objname(file_info.name)
+        pvcdata = pvcdatalist.get(pvcname)
+        if pvcdata is None:
+            logging.debug('prepare_pvcdata_restore pvc:{} not exist!'.format(pvcname))
+            self._handle_error(PVC_DOESNOT_EXIST_ERR.format(pvcname=pvcname))
+            return None
+
+        pvcisready = False
+        for _ in range(DEFAULTTIMEOUT):
+            time.sleep(1)
+            isready = self._plugin.pvc_isready(namespace, pvcname)
+            if isinstance(isready, dict) and 'error' in isready:
+                # cannot check pvc status
+                self._handle_error(PVC_STATUS_ERR.format(pvcname, parse_json_descr(isready)))
+            elif isready:
+                pvcisready = True
+                break
+            # well, we have to wait for pvc to be ready, so restart procedure
+            logging.debug('Waiting for pvc to become ready...')
+
+        if not pvcisready:
+            logging.debug('prepare_pvcdata_restore pvc:{} is not ready in expected time!'.format(pvcname))
+            self._handle_error(PVC_ISNOTREADY_ERR.format(pvcname=pvcname))
+            return None
+
+        self.ns[namespace].update({
+            'pvcdata': pvcdata,
+        })
+        logging.debug('prepare_pvcdata_restore PVCDATA:{}'.format(pvcdata))
+
+        if self.prepare_bacula_pod(pvcdata, namespace=namespace, mode='restore'):
+            return True
+
+        return False
+
+    def __should_replace(self, file_info):
+        """
+        Checks whether the restored object already exists, as we have to distinguish between create and patch.
+        :param file_info:
+        :return:
+        """
+        if file_info.objtype == K8SObjType.K8SOBJ_PVCDATA:
+            # always replace file_info
+            return True
+        current_file = self._plugin.check_file(file_info)
+        # cache the fetched object for future use
+        if isinstance(current_file, dict) and 'error' in current_file:
+            file_info.objcache = None
+            logging.error("check_file: {}".format(current_file['error']))
+        else:
+            file_info.objcache = current_file
+        if self._params["replace"] == REPLACE_ALWAYS:
+            return True
+        if self._params["replace"] == REPLACE_NEVER:
+            if current_file is not None:
+                return False
+        # XXX: we cannot support ifnewer or ifolder because k8s does not provide modification time
+        return True
+
+    def __apply_regexwhere_param(self, file_info):
+        if re.match(r",(.+?),(.+?),", self._params["regexwhere"]):
+            self._io.send_abort(COMMA_SEPARATOR_NOT_SUPPORTED)
+            self._abort()
+
+        file_info.apply_regexwhere_param(self._params["regexwhere"])
+
+    def __handle_pvcdata_connections(self):
+        logging.debug('restore_pvcdata:data send')
+        response = self.connsrv.handle_connection(self.handle_pod_data_send)
+        if 'error' in response:
+            self._handle_error(response['error'])
+            return False
+        logging.debug('restore_pvcdata:logs recv')
+        response = self.connsrv.handle_connection(self.handle_pod_logs)
+        if 'error' in response:
+            self._handle_error(response['error'])
+            return False
+        return True
+
+    def __restore_pvcdata(self, namespace):
+        self.__handle_pvcdata_connections()
+        self.handle_tarstderr()
+        self.handle_delete_pod(namespace=namespace)
+        self._io.send_command(SUCCESS_PACKET)
+
+    def __restore_file(self, file_info, reader):
+        if file_info.size == 0 and file_info.objtype != K8SObjType.K8SOBJ_PVCDATA:
+            logging.debug('file_info.size == 0')
+            response = self._plugin.restore_file(file_info)
+        else:
+            reader.finished = False
+            response = self._plugin.restore_file(file_info, reader)
+        if isinstance(response, dict) and 'error' in response:
+            if file_info.objtype == K8SObjType.K8SOBJ_SECRET:
+                self._io.send_warning(SERVICE_ACCOUNT_TOKEN_RESTORE_INFO.format(
+                    token=file_info.name.split('/')[-1][:-5]))
+            else:
+                if 'exception' in response:
+                    error_msg = RESTORE_RES_ERR.format(parse_json_descr(response))
+                    self._handle_error(error_msg)
+                else:
+                    error_msg = FILE_ERROR_TEMPLATE % (file_info.name, file_info.namespace, response['error'])
+                    self._handle_error(error_msg)
+        else:
+            if file_info.size != 0 or file_info.objtype == K8SObjType.K8SOBJ_PVCDATA:
+                self._io.send_command(SUCCESS_PACKET)
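
The `__should_replace` decision above reduces to a small truth table over the `replace` parameter and whether the object already exists. A standalone sketch of that policy (constant values are assumed for illustration; the real constants live in `baculak8s.services.job_info_service`):

```python
REPLACE_ALWAYS = 'always'  # assumed value, for illustration only
REPLACE_NEVER = 'never'    # assumed value, for illustration only

def should_replace(replace_param, existing_object, is_pvcdata=False):
    """Mirror of the restore replace policy: pvcdata is always restored;
    'never' skips objects that already exist; since Kubernetes objects
    carry no modification time, 'ifnewer'/'ifolder' cannot be honored
    and fall back to replacing."""
    if is_pvcdata:
        return True
    if replace_param == REPLACE_NEVER and existing_object is not None:
        return False
    return True
```

The cached lookup result (`file_info.objcache`) is what later lets the plugin decide between a create and a patch of the existing object.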
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/main.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/main.py
new file mode 100644 (file)
index 0000000..d5e0064
--- /dev/null
@@ -0,0 +1,70 @@
+#!/usr/bin/env python3
+
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from baculak8s.io.log import Log, LogConfig
+from baculak8s.jobs.job_factory import JobFactory
+from baculak8s.plugins.plugin_factory import PluginFactory
+from baculak8s.services.handshake_service import HandshakeService
+from baculak8s.services.job_end_service import JobEndService
+from baculak8s.services.job_info_service import JobInfoService
+from baculak8s.services.plugin_params_service import PluginParamsService
+from baculak8s.services.unexpected_error_service import UnexpectedErrorService
+from baculak8s.util.dict_util import merge_two_dicts
+
+
+def main():
+
+    try:
+        LogConfig.start()
+        plugin_name = HandshakeService().execute()
+        job_info = JobInfoService().execute()
+        plugin_params = PluginParamsService(job_info).execute()
+        LogConfig.handle_params(job_info, plugin_params)
+        merged_params = merge_two_dicts(job_info, plugin_params)
+        plugin = PluginFactory.create(plugin_name, merged_params)
+        job = JobFactory.create(merged_params, plugin)
+        job.execute()
+        JobEndService(merged_params, plugin).execute()
+    except Exception as E:
+        Log.save_exception(E)
+        UnexpectedErrorService().execute()
+        exit_code = 1
+        Log.save_exit_code(exit_code)
+        return exit_code
+    except SystemExit:
+        pass
+    exit_code = 0
+    Log.save_exit_code(exit_code)
+    return exit_code
+
+
+if __name__ == '__main__':
+    main()
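
The `merge_two_dicts(job_info, plugin_params)` call in `main()` presumably merges the two parameter dictionaries with the plugin parameters taking precedence over the job info; a sketch of that assumed behavior:

```python
def merge_two_dicts(a, b):
    """Assumed semantics: returns a new dict with keys from b
    (plugin params) overriding keys from a (job info)."""
    merged = dict(a)
    merged.update(b)
    return merged
```

Under this assumption, a `timeout` set in the plugin parameter string would override any value carried in the job info.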
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/fs_plugin.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/fs_plugin.py
new file mode 100644 (file)
index 0000000..64fafe3
--- /dev/null
@@ -0,0 +1,201 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import datetime
+import json
+import logging
+import os
+from json import JSONEncoder
+
+import yaml
+
+from baculak8s.entities.file_info import FileInfo
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.plugins.k8sbackend.k8sfileinfo import encoder_load
+from baculak8s.plugins.plugin import Plugin
+
+
+class K8SEncoder(JSONEncoder):
+    # JSON encoder for Kubernetes API objects: serializes datetimes and objects exposing to_dict()
+    def default(self, o):
+        if isinstance(o, datetime.datetime):
+            return o.strftime("%Y-%m-%dT%H:%M:%S%Z")
+        else:
+            todict = getattr(o, 'to_dict', None)
+            if todict is not None:
+                odict = o.to_dict()
+                # try to remap dictionary to attributes map
+                amap = getattr(o, 'attribute_map', None)
+                if amap is not None:
+                    for att in amap:
+                        attval = amap.get(att)
+                        if attval != att and att in odict:
+                            odict[attval] = odict[att]
+                            del odict[att]
+                return odict
+            else:
+                return json.JSONEncoder.default(self, o)
+
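The encoder above special-cases datetimes and objects that expose `to_dict()`. A minimal standalone sketch of the datetime branch (the class name here is illustrative, not part of the plugin):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    # Simplified sketch of K8SEncoder's datetime handling
    def default(self, o):
        if isinstance(o, datetime.datetime):
            return o.strftime("%Y-%m-%dT%H:%M:%S%Z")
        return super().default(o)

encoded = json.dumps({"ts": datetime.datetime(2021, 6, 15, 14, 0, 0)},
                     cls=DateTimeEncoder)
```

For a naive datetime, `%Z` renders as an empty string, so the value serializes without a timezone suffix.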
+
+class FileSystemPlugin(Plugin):
+    """
+        Plugin that communicates with the Local Filesystem
+    """
+
+    def __init__(self, confdata):
+        logging.debug("params:" + str(confdata))
+        self._params = confdata
+
+    def list_in_path(self, path):
+        raise NotImplementedError
+
+    def list_all_namespaces(self):
+        raise NotImplementedError
+
+    def list_all_persistentvolumes(self):
+        raise NotImplementedError
+
+    def list_namespaced_objects(self, namespace):
+        raise NotImplementedError
+
+    def query_parameter(self, parameter):
+        raise NotImplementedError
+
+    def check_file(self, file_info):
+        file_path = os.path.join(self._params.get('where', '/'), file_info.name)
+        try:
+            file_stats = os.stat(file_path)
+        except FileNotFoundError:
+            return None
+        return file_stats
+
+    def connect(self):
+        """
+            Implementation of Plugin.connect(self)
+            No need to connect to the Local FS
+        """
+        return {}
+
+    def disconnect(self):
+        """
+            Implementation of Plugin.disconnect(self)
+            No need to disconnect from the Local FS
+        """
+        return {}
+
+    def apply_where_parameter(self, file_info, where_param):
+        """
+            Implementation of Plugin.apply_where_parameter(self, file_info, where_param)
+        """
+        file_info.name = "{}{}{}{}".format(os.path.sep, where_param, os.path.sep, os.path.sep.join(file_info.fullfname))
+        return file_info
+
+    def upload_bucket_acl(self, bucket, acl):
+        return {}
+
+    def upload_bucket_xattrs(self, file_info: FileInfo):
+        return {}
+
+    def upload_file_xattrs(self, file_info: FileInfo):
+        return {}
+
+    def upload_file_acl(self, bucket, filename, acl):
+        return {}
+
+    def upload_bucket(self, bucket_info):
+        """
+            Implementation of Plugin.upload_bucket(self, bucket_info)
+            Creates the bucket as a directory on the Local Filesystem
+        """
+
+        dir_path = os.path.join(self._params["restore_local_path"],
+                                bucket_info.bucket)
+
+        # We change the path to the local OS path format
+        dir_path = os.path.abspath(dir_path)
+
+        # If the files directory doesn't exist yet, we create it
+        os.makedirs(dir_path, exist_ok=True)
+
+        os.chmod(dir_path, int(bucket_info.mode, 8))
+        return {"success": 'True'}
+
+    def restore_file(self, file_info, file_content_source=None):
+        """
+            Implementation of Plugin.restore_file(self, file_info, file_content_source=None)
+            Creates the file on the Local Filesystem
+        """
+        mode = None
+        file_ext = file_info.name[-4:]
+        if file_info.objtype != K8SObjType.K8SOBJ_PVCDATA:
+            # if change output format then mode is not None
+            mode = self._params.get('outputformat', None)
+            logging.debug("outputformat: {}".format(mode))
+            if mode is not None:
+                mode = mode.lower()
+                if mode not in ('json', 'yaml'):
+                    mode = None
+                else:
+                    file_ext = mode
+
+        # We change the path to the local OS path format
+        file_path = os.path.abspath(file_info.name[:-4]+file_ext)
+
+        # If the files path doesn't exist yet, we create it
+        os.makedirs(os.path.dirname(file_path), exist_ok=True)
+        logging.debug("file_path: {}".format(file_path))
+        if not file_content_source:
+            # We create an empty file
+            with open(file_path, "w"):
+                pass
+        else:
+            logging.debug("save mode: {}".format(mode))
+            if mode is not None:
+                strdata = b''
+                while True:
+                    chunk = file_content_source.read()
+                    if chunk is None:
+                        break
+                    strdata = strdata + chunk
+                # here we have an api object translated to simple dict
+                # logging.debug("STRDATA:" + str(strdata))
+                el = encoder_load(strdata, file_info.name)
+                jd = json.dumps(el, cls=K8SEncoder, sort_keys=True)
+                data = json.loads(jd)
+                # logging.debug("EL:" + str(el))
+                # logging.debug("JD:" + str(jd))
+                # logging.debug("DATA:" + str(data))
+                if 'json' == mode:
+                    with open(file_path, "w") as out_file:
+                        json.dump(data, out_file, indent=3, sort_keys=True)
+                if 'yaml' == mode:
+                    with open(file_path, "w") as out_file:
+                        yaml.dump(data, out_file, Dumper=yaml.SafeDumper, default_flow_style=False)
+            else:
+                # We create a file with the contents from file_content_source
+                with open(file_path, "wb") as f:
+                    while True:
+                        chunk = file_content_source.read()
+                        if chunk is None:
+                            break
+                        f.write(chunk)
+
+        os.chmod(file_path, int(file_info.mode, 8))
+        return {"success": 'True'}
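`restore_file` repeatedly calls `file_content_source.read()` until it returns `None`, which signals end of data. That read-until-None protocol in isolation (the `ChunkSource` class is a hypothetical stand-in for the plugin's content source):

```python
class ChunkSource:
    # Hypothetical stand-in for file_content_source: read() returns None at EOF
    def __init__(self, chunks):
        self._chunks = list(chunks)

    def read(self):
        return self._chunks.pop(0) if self._chunks else None

def drain(source):
    # Accumulate chunks until the source signals EOF with None
    data = b""
    while True:
        chunk = source.read()
        if chunk is None:
            break
        data += chunk
    return data

payload = drain(ChunkSource([b"abc", b"def"]))
```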
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/__init__.py
new file mode 100644 (file)
index 0000000..e69de29
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculaannotations.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculaannotations.py
new file mode 100644 (file)
index 0000000..ca8c6eb
--- /dev/null
@@ -0,0 +1,147 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+import kubernetes
+from baculak8s.plugins.k8sbackend.pods import pods_namespaced_specs
+
+
+"""
+* bacula/backup.mode: [snapshot|standard] - selects how PVC data is backed up, extending the current plugin parameter pvcdata[=<pvcname>]; the default is snapshot when not defined.
+* bacula/backup.volumes: <pvcname[,pvcname2...]> - required, multiple pvc names as comma separated list.
+* bacula/run.before.job.container.command: [<container>/<command>|*/<command>] - a star (*) means all containers.
+* bacula/run.before.job.failjobonerror: [yes|no] - default is yes.
+* bacula/run.after.job.container.command: [<container>/<command>|*/<command>] - a star (*) means all containers.
+* bacula/run.after.job.failjobonerror: [yes|no] - default is no.
+"""
+
+
+class BaculaBackupMode(object):
+    """
+    Backup mode constants and a validation helper.
+    """
+    Snapshot = 'snapshot'
+    Standard = 'standard'
+    params = (Snapshot, Standard)
+
+    @staticmethod
+    def process_param(mode):
+        """The static method validates backup mode
+
+        Args:
+            mode (str): a backup mode parameter from k8s annotation
+
+        Returns:
+            str: backup mode normalized to consts, `None` when error
+        """
+        if mode is not None:
+            mode = mode.lower()
+            for p in BaculaBackupMode.params:
+                if p == mode:
+                    return p
+        return None
+
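`BaculaBackupMode.process_param` lower-cases the annotation value and accepts only the two known modes. A standalone sketch of that normalization (the function name is illustrative):

```python
def normalize_backup_mode(mode):
    # Accept only 'snapshot' or 'standard', case-insensitively; None otherwise
    if mode is None:
        return None
    mode = mode.lower()
    return mode if mode in ("snapshot", "standard") else None
```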
+
+class BaculaAnnotationsClass(object):
+    """
+    This is a class to manage Bacula annotation parameters
+    """
+    BaculaPrefix = 'bacula/'
+    BackupMode = 'backup.mode'
+    BackupVolume = 'backup.volumes'
+    RunBeforeJob = 'run.before.job.container.command'
+    RunBeforeJobonError = 'run.before.job.failjobonerror'
+    RunAfterJob = 'run.after.job.container.command'
+    RunAfterJobonError = 'run.after.job.failjobonerror'
+    RunAfterSnapshot = 'run.after.snapshot.container.command'
+    RunAfterSnapshotonError = 'run.after.snapshot.failjobonerror'
+    params = (BackupMode, BackupVolume, RunBeforeJob, RunBeforeJobonError, RunAfterJob, RunAfterJobonError, RunAfterSnapshot, RunAfterSnapshotonError)
+
+    @staticmethod
+    def process_param(param):
+        """The static method validates Bacula annotations
+
+        Args:
+            param (str): a Bacula annotation from k8s
+
+        Returns:
+            str: Bacula annotation normalized to consts, `None` when error
+        """
+        if param is not None:
+            for p in BaculaAnnotationsClass.params:
+                if param == BaculaAnnotationsClass.BaculaPrefix + p:
+                    return p
+        return None
+
+    @staticmethod
+    def handle_run_job_container_command(param):
+        """The static method handles container/command annotation parameters
+
+        Args:
+            param (str): a container command parameter from k8s annotation
+
+        Returns:
+            tuple(2): container / command split
+        """
+        container, command = (None, None)
+        if param is not None:
+            try:
+                container, command = param.split('/', 1)
+            except ValueError as e:
+                logging.error(e)
+        return container, command
+
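The `<container>/<command>` annotation value is split once on the first slash, so the command part may itself contain slashes. The same logic in isolation (hypothetical function name):

```python
import logging

def split_container_command(param):
    # Split "container/command" on the first '/' only
    container, command = (None, None)
    if param is not None:
        try:
            container, command = param.split("/", 1)
        except ValueError as e:
            # No '/' present: leave both parts as None
            logging.error(e)
    return container, command
```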
+
+def annotated_namespaced_pods_data(corev1api, namespace, estimate=False, labels=""):
+    """Reads Pods annotations to search for Bacula annotations
+
+    Args:
+        corev1api (CoreV1Api): kubernetes CoreV1Api instance
+        namespace (str): namespace for the pods
+        estimate (bool, optional): whether this is an estimate job rather than a backup. Defaults to False.
+        labels (str, optional): k8s label selector used to filter pods. Defaults to "".
+
+    Returns:
+        list: a list of pods and its annotations for selected namespace
+    """
+    podsdata = []
+    pods = pods_namespaced_specs(corev1api, namespace, labels)
+    for pod in pods:
+        metadata = pod.metadata
+        if metadata.annotations is None:
+            continue
+        bacula_annotations = [k for k, v in metadata.annotations.items() if k.startswith(BaculaAnnotationsClass.BaculaPrefix)]
+        if len(bacula_annotations) > 0:
+            containers = [c.name for c in pod.spec.containers]
+            podobj = {
+                'name': metadata.name,
+                'containers': containers
+            }
+            for ba in bacula_annotations:
+                param = metadata.annotations.get(ba)
+                baname = BaculaAnnotationsClass.process_param(ba)   # we will ignore all annotations we cannot handle
+                if baname is not None:
+                    podobj[baname] = param
+            podsdata.append(podobj)
+
+    return podsdata
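The pod scan above keeps only annotation keys carrying the `bacula/` prefix. The filtering step in isolation (a sketch; names are illustrative):

```python
def bacula_annotation_keys(annotations, prefix="bacula/"):
    # Keep only annotation keys carrying the Bacula prefix
    if annotations is None:
        return []
    return [k for k in annotations if k.startswith(prefix)]

keys = bacula_annotation_keys({"bacula/backup.mode": "snapshot", "app": "web"})
```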
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackup.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackup.py
new file mode 100644 (file)
index 0000000..1bcfc4a
--- /dev/null
@@ -0,0 +1,104 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import os
+import logging
+from baculak8s.plugins.k8sbackend.baculabackupimage import KUBERNETES_TAR_IMAGE
+
+
+BACULABACKUPPODNAME = 'bacula-backup'
+# BACULABACKUPIMAGE = "hub.baculasystems.com/bacula-backup:" + KUBERNETES_TAR_IMAGE
+BACULABACKUPIMAGE = "bacula-backup:" + KUBERNETES_TAR_IMAGE
+DEFAULTPODYAML = os.getenv('DEFAULTPODYAML', "/opt/bacula/scripts/bacula-backup.yaml")
+PODTEMPLATE = """
+apiVersion: v1
+kind: Pod
+metadata:
+  name: {podname}
+  namespace: {namespace}
+  labels:
+    app: baculabackup
+spec:
+  hostname: {podname}
+  {nodenameparam}
+  containers:
+  - name: {podname}
+    resources:
+      limits:
+        cpu: "1"
+        memory: "64Mi"
+      requests:
+        cpu: "100m"
+        memory: "16Mi"
+    image: {image}
+    env:
+    - name: PLUGINMODE
+      value: "{mode}"
+    - name: PLUGINHOST
+      value: "{host}"
+    - name: PLUGINPORT
+      value: "{port}"
+    - name: PLUGINTOKEN
+      value: "{token}"
+    - name: PLUGINJOB
+      value: "{job}"
+    imagePullPolicy: {imagepullpolicy}
+    volumeMounts:
+      - name: {podname}-storage
+        mountPath: /{mode}
+  restartPolicy: Never
+  volumes:
+    - name: {podname}-storage
+      persistentVolumeClaim:
+        claimName: {pvcname}
+"""
+
+
+class ImagePullPolicy(object):
+    IfNotPresent = 'IfNotPresent'
+    Always = 'Always'
+    Never = 'Never'
+    params = (IfNotPresent, Always, Never)
+
+    @staticmethod
+    def process_param(imagepullpolicy):
+        if imagepullpolicy is not None:
+            for p in ImagePullPolicy.params:
+                # logging.debug("imagepullpolicy test: {} {}".format(p, self.imagepullpolicy))
+                if imagepullpolicy.lower() == p.lower():
+                    return p
+        return ImagePullPolicy.IfNotPresent
+
+
+def prepare_backup_pod_yaml(mode='backup', nodename=None, host='localhost', port=9104, token='', namespace='default',
+                            pvcname='', image=BACULABACKUPIMAGE, imagepullpolicy=ImagePullPolicy.IfNotPresent, job=''):
+    podyaml = PODTEMPLATE
+    if os.path.exists(DEFAULTPODYAML):
+        with open(DEFAULTPODYAML, 'r') as file:
+            podyaml = file.read()
+    nodenameparam = ''
+    if nodename is not None:
+        nodenameparam = "nodeName: {nodename}".format(nodename=nodename)
+    logging.debug('host:{} port:{} namespace:{} image:{} job:{}'.format(host, port, namespace, image, job))
+    return podyaml.format(mode=mode, nodenameparam=nodenameparam, host=host, port=port, token=token, namespace=namespace,
+                          image=image, pvcname=pvcname, podname=BACULABACKUPPODNAME, imagepullpolicy=imagepullpolicy, job=job)
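`prepare_backup_pod_yaml` fills the `PODTEMPLATE` placeholders with `str.format`, injecting a `nodeName:` line only when node pinning is requested. A reduced sketch of the same mechanism (template trimmed down, values hypothetical):

```python
TEMPLATE = (
    "metadata:\n"
    "  name: {podname}\n"
    "  namespace: {namespace}\n"
    "spec:\n"
    "  {nodenameparam}\n"
)

def render(podname, namespace, nodename=None):
    # Empty placeholder when no node pinning is requested
    nodenameparam = "" if nodename is None else "nodeName: {}".format(nodename)
    return TEMPLATE.format(podname=podname, namespace=namespace,
                           nodenameparam=nodenameparam)

rendered = render("bacula-backup", "default", nodename="worker-1")
```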
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackupimage.py.in b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/baculabackupimage.py.in
new file mode 100644 (file)
index 0000000..c9b9b75
--- /dev/null
@@ -0,0 +1 @@
+KUBERNETES_TAR_IMAGE="@KUBERNETES_IMAGE_VERSION@"
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/configmaps.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/configmaps.py
new file mode 100644 (file)
index 0000000..a8f02c7
--- /dev/null
@@ -0,0 +1,69 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def config_map_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_config_map(name, namespace)
+
+
+def config_maps_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    cmlist = {}
+    configmaps = corev1api.list_namespaced_config_map(namespace=namespace, watch=False, label_selector=labels)
+    for cm in configmaps.items:
+        cmdata = config_map_read_namespaced(corev1api, namespace, cm.metadata.name)
+        spec = encoder_dump(cmdata)
+        cmlist['cm-' + cm.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_CONFIGMAP, nsname=namespace,
+                              name=cm.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=cmdata.metadata.creation_timestamp),
+        }
+    return cmlist
+
+
+def config_map_restore_namespaced(corev1api, file_info, file_content):
+    cm = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(cm.metadata)
+    # Instantiate the configmap object
+    configmap = client.V1ConfigMap(
+        api_version=cm.api_version,
+        kind="ConfigMap",
+        data=cm.data,
+        binary_data=cm.binary_data,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exist so we replace it
+        response = corev1api.replace_namespaced_config_map(k8sfile2objname(file_info.name),
+                                                           file_info.namespace, configmap, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_config_map(file_info.namespace, configmap, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/daemonset.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/daemonset.py
new file mode 100644 (file)
index 0000000..8efe987
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def daemon_sets_read_namespaced(appsv1api, namespace, name):
+    return appsv1api.read_namespaced_daemon_set(name, namespace)
+
+
+def daemon_sets_list_namespaced(appsv1api, namespace, estimate=False, labels=""):
+    dslist = {}
+    daemonsets = appsv1api.list_namespaced_daemon_set(namespace=namespace, watch=False, label_selector=labels)
+    for ds in daemonsets.items:
+        dsdata = daemon_sets_read_namespaced(appsv1api, namespace, ds.metadata.name)
+        spec = encoder_dump(dsdata)
+        dslist['ds-' + ds.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_DAEMONSET, nsname=namespace,
+                              name=ds.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=dsdata.metadata.creation_timestamp),
+        }
+    return dslist
+
+
+def daemon_sets_restore_namespaced(appsv1api, file_info, file_content):
+    ds = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(ds.metadata)
+    # Instantiate the daemon_set object
+    daemonset = client.V1DaemonSet(
+        api_version=ds.api_version,
+        kind="DaemonSet",
+        spec=ds.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exist so we replace it
+        response = appsv1api.replace_namespaced_daemon_set(k8sfile2objname(file_info.name),
+                                                           file_info.namespace, daemonset, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = appsv1api.create_namespaced_daemon_set(file_info.namespace, daemonset, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/deployment.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/deployment.py
new file mode 100644 (file)
index 0000000..fe5362c
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def deployments_read_namespaced(appsv1api, namespace, name):
+    return appsv1api.read_namespaced_deployment(name, namespace)
+
+
+def deployments_list_namespaced(appsv1api, namespace, estimate=False, labels=""):
+    dplist = {}
+    deployments = appsv1api.list_namespaced_deployment(namespace=namespace, watch=False, label_selector=labels)
+    for dp in deployments.items:
+        dpdata = deployments_read_namespaced(appsv1api, namespace, dp.metadata.name)
+        spec = encoder_dump(dpdata)
+        dplist['dp-' + dp.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_DEPLOYMENT, nsname=namespace,
+                              name=dp.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=dpdata.metadata.creation_timestamp),
+        }
+    return dplist
+
+
+def deployments_restore_namespaced(appsv1api, file_info, file_content):
+    dp = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(dp.metadata)
+    # Instantiate the deployment object
+    deployment = client.V1Deployment(
+        api_version=dp.api_version,
+        kind="Deployment",
+        spec=dp.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exist so we replace it
+        response = appsv1api.replace_namespaced_deployment(k8sfile2objname(file_info.name),
+                                                           file_info.namespace, deployment, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = appsv1api.create_namespaced_deployment(file_info.namespace, deployment, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/endpoints.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/endpoints.py
new file mode 100644 (file)
index 0000000..8328569
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def endpoints_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_endpoints(name, namespace)
+
+
+def endpoints_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    eplist = {}
+    endpoints = corev1api.list_namespaced_endpoints(namespace=namespace, watch=False, label_selector=labels)
+    for ep in endpoints.items:
+        epdata = endpoints_read_namespaced(corev1api, namespace, ep.metadata.name)
+        spec = encoder_dump(epdata)
+        eplist['ep-' + ep.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_ENDPOINT, nsname=namespace,
+                              name=ep.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=epdata.metadata.creation_timestamp),
+        }
+    return eplist
+
+
+def endpoints_restore_namespaced(corev1api, file_info, file_content):
+    ep = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(ep.metadata)
+    # Instantiate the endpoint object
+    endpoint = client.V1Endpoints(
+        api_version=ep.api_version,
+        kind="Endpoints",
+        subsets=ep.subsets,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exist so we replace it
+        response = corev1api.replace_namespaced_endpoints(k8sfile2objname(file_info.name),
+                                                          file_info.namespace, endpoint, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_endpoints(file_info.namespace, endpoint, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sfileinfo.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sfileinfo.py
new file mode 100644 (file)
index 0000000..2e69b64
--- /dev/null
@@ -0,0 +1,138 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import json
+import logging
+
+import yaml
+from baculak8s.entities.file_info import (DEFAULT_DIR_MODE, DEFAULT_FILE_MODE,
+                                          DIRECTORY, FileInfo)
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.util.date_util import (get_time_now,
+                                      k8stimestamp_to_unix_timestamp)
+from baculak8s.util.size_util import k8s_size_to_int
+
+NOW_TIMESTAMP = get_time_now()
+defaultk8sext = 'yaml'
+defaultk8spath = '@kubernetes'
+defaultk8sarchext = 'tar'
+
+
+def encoder_dump(msg):
+    if defaultk8sext == 'json':
+        return json.dumps(msg, sort_keys=True, default=str)
+    else:
+        return yaml.dump(msg)
+
+
+def encoder_load(msg, filename=None):
+    if (filename is not None and filename.endswith('.json')) or (filename is None and defaultk8sext == 'json'):
+        return json.loads(msg)
+    else:
+        return yaml.load(msg, Loader=yaml.FullLoader)
+
+
+def k8sfile2objname(fname):
+    return str(fname).replace('.'+defaultk8sext, '').replace('.json', '').replace('.yaml', '').replace('.tar', '')
+
+
+def k8sfilepath(objtype, nsname='', name=''):
+    # TODO: refactor string format to new ".format()" method
+    if objtype is not None:
+        if objtype == K8SObjType.K8SOBJ_NAMESPACE:
+            # namespace
+            return '/%s/%s/%s/%s.%s' % (defaultk8spath, K8SObjType.pathdict[K8SObjType.K8SOBJ_NAMESPACE],
+                                        nsname, name, defaultk8sext)
+        if objtype == K8SObjType.K8SOBJ_PVOLUME:
+            # persistent volume
+            return '/%s/%s/%s.%s' % (defaultk8spath, K8SObjType.pathdict[K8SObjType.K8SOBJ_PVOLUME],
+                                     name, defaultk8sext)
+        if objtype == K8SObjType.K8SOBJ_STORAGECLASS:
+            # storage class
+            return '/%s/%s/%s.%s' % (defaultk8spath, K8SObjType.pathdict[K8SObjType.K8SOBJ_STORAGECLASS],
+                                     name, defaultk8sext)
+        if objtype == K8SObjType.K8SOBJ_PVCDATA:
+            # PVC Data tar archive here
+            return '/%s/%s/%s/%s/%s.%s' % (defaultk8spath, K8SObjType.pathdict[K8SObjType.K8SOBJ_NAMESPACE],
+                                           nsname, K8SObjType.pathdict[K8SObjType.K8SOBJ_PVCDATA],
+                                           name, defaultk8sarchext)
+        # other objects
+        return '/%s/%s/%s/%s/%s.%s' % (defaultk8spath, K8SObjType.pathdict[K8SObjType.K8SOBJ_NAMESPACE],
+                                       nsname, K8SObjType.pathdict[objtype], name, defaultk8sext)
+    return None
+
+
+def k8sfileinfo(objtype, name, ftype, size, nsname=None, creation_timestamp=None):
+    return FileInfo(
+        name=k8sfilepath(objtype, nsname=nsname, name=name),
+        ftype=ftype,
+        size=k8s_size_to_int(size),
+        objtype=objtype,
+        uid=0, gid=0,
+        mode=DEFAULT_DIR_MODE if ftype == DIRECTORY else DEFAULT_FILE_MODE,
+        # TODO: Persistent volumes can have different access modes [RWO, ROX, RWX]
+        # TODO: which we can express as different file modes in Bacula
+        nlink=1,
+        modified_at=NOW_TIMESTAMP,
+        accessed_at=NOW_TIMESTAMP,
+        created_at=NOW_TIMESTAMP if creation_timestamp is None else k8stimestamp_to_unix_timestamp(creation_timestamp)
+    )
+
+
+def k8sfileobjecttype(fnames):
+    if len(fnames) < 3 or fnames[0] != defaultk8spath:
+        # the filepath variable cannot be converted to k8s fileinfo
+        return None
+    objtype = {
+        'obj': None,
+        'namespace': None,
+    }
+    if fnames[1] == K8SObjType.K8SOBJ_NAMESPACE_Path:
+        # handle namespaced objects
+        objtype.update({'namespace': fnames[2]})
+        filename = fnames[3]
+        if filename.endswith('.%s' % defaultk8sext):
+            objtype.update({'obj': K8SObjType.K8SOBJ_NAMESPACE})
+        elif filename == K8SObjType.K8SOBJ_PVCS_Path:
+            # handle pvcs both config and data
+            filename = fnames[4]
+            if filename.endswith('.%s' % defaultk8sext):
+                # this is a config file
+                objtype.update({'obj': K8SObjType.K8SOBJ_PVOLCLAIM})
+            else:
+                # any other are pvcdata files
+                objtype.update({'obj': K8SObjType.K8SOBJ_PVCDATA})
+        else:
+            for obj in K8SObjType.pathdict.keys():
+                if K8SObjType.pathdict[obj] == filename:
+                    objtype.update({'obj': obj})
+                    break
+    elif fnames[1] == K8SObjType.K8SOBJ_PVOLUME_Path:
+        # handle persistent volumes here
+        objtype.update({'obj': K8SObjType.K8SOBJ_PVOLUME})
+    elif fnames[1] == K8SObjType.K8SOBJ_STORAGECLASS_Path:
+        # handle storage class here
+        objtype.update({'obj': K8SObjType.K8SOBJ_STORAGECLASS})
+    logging.debug('objtype:' + str(objtype))
+    return objtype
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sutils.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/k8sutils.py
new file mode 100644 (file)
index 0000000..5fb4452
--- /dev/null
@@ -0,0 +1,64 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import time
+
+from kubernetes import client
+
+
+def prepare_metadata(data, annotations=False):
+    metadata = client.V1ObjectMeta(
+        name=data.name,
+        namespace=data.namespace,
+        annotations=data.annotations if annotations else {},
+        deletion_grace_period_seconds=data.deletion_grace_period_seconds,
+        deletion_timestamp=data.deletion_timestamp,
+        # finalizers=data.finalizers,
+        # generate_name=data.generate_name,
+        # initializers=data.initializers,
+        labels=data.labels
+    )
+    return metadata
+
+
+def wait_for_pod_ready(corev1api, namespace, name, waits=600):
+    # poll until the first container reports ready, or time out after `waits` seconds
+    for _ in range(waits):
+        status = corev1api.read_namespaced_pod_status(name=name, namespace=namespace)
+        isready = status.status.container_statuses[0].ready
+        if isready:
+            return True
+        time.sleep(1)
+    return False
+
+
+def wait_for_pod_terminated(corev1api, namespace, name, waits=600):
+    # poll until the first container is no longer ready and has terminated, or time out
+    for _ in range(waits):
+        status = corev1api.read_namespaced_pod_status(name=name, namespace=namespace)
+        container = status.status.container_statuses[0]
+        isterminated = not container.ready and container.state.terminated is not None
+        if isterminated:
+            return True
+        time.sleep(1)
+    return False
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/limitrange.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/limitrange.py
new file mode 100644 (file)
index 0000000..e16e27f
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def limit_range_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_limit_range(name, namespace)
+
+
+def limit_range_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    lrlist = {}
+    limitranges = corev1api.list_namespaced_limit_range(namespace=namespace, watch=False, label_selector=labels)
+    for lr in limitranges.items:
+        lrdata = limit_range_read_namespaced(corev1api, namespace, lr.metadata.name)
+        spec = encoder_dump(lrdata)
+        lrlist['lr-' + lr.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_LIMITRANGE, nsname=namespace,
+                              name=lr.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=lrdata.metadata.creation_timestamp),
+        }
+    return lrlist
+
+
+def limit_range_restore_namespaced(corev1api, file_info, file_content):
+    lr = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(lr.metadata)
+    # Instantiate the limitrange object
+    limitrange = client.V1LimitRange(
+        api_version=lr.api_version,
+        kind="LimitRange",
+        spec=lr.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_limit_range(k8sfile2objname(file_info.name),
+                                                            file_info.namespace, limitrange, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_limit_range(file_info.namespace, limitrange, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/namespaces.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/namespaces.py
new file mode 100644 (file)
index 0000000..fd35da8
--- /dev/null
@@ -0,0 +1,105 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def namespace_read(corev1api, name):
+    return corev1api.read_namespace(name)
+
+
+def namespaces_list_all(corev1api, nsfilter=None, estimate=False):
+    nslist = {}
+    namespaces = corev1api.list_namespace(watch=False)
+    for ns in namespaces.items:
+        if nsfilter is not None and len(nsfilter) > 0:
+            if ns.metadata.name not in nsfilter:
+                continue
+        nsdata = namespace_read(corev1api, ns.metadata.name)
+        spec = encoder_dump(nsdata)
+        nslist[ns.metadata.name] = {
+            'name': ns.metadata.name,
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_NAMESPACE, nsname=ns.metadata.name,
+                              name=ns.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=nsdata.metadata.creation_timestamp),
+        }
+    return nslist
+
+
+def namespace_names(corev1api):
+    nslist = []
+    namespaces = corev1api.list_namespace(watch=False)
+    for ns in namespaces.items:
+        nslist.append(["namespace", ns.metadata.name])
+    return nslist
+
+
+def namespaces_list_all_names(corev1api):
+    nslist = {}
+    namespaces = corev1api.list_namespace(watch=False)
+    for ns in namespaces.items:
+        nslist[ns.metadata.name] = {
+            'fi': FileInfo(name="/%s/%s" % (K8SObjType.K8SOBJ_NAMESPACE_Path, ns.metadata.name),
+                           ftype=DIRECTORY,
+                           size=0,
+                           uid=0, gid=0,
+                           mode=DEFAULT_DIR_MODE,
+                           nlink=1,
+                           modified_at=NOW_TIMESTAMP,
+                           accessed_at=NOW_TIMESTAMP,
+                           created_at=k8stimestamp_to_unix_timestamp(ns.metadata.creation_timestamp)),
+        }
+    return nslist
+
+
+def namespaces_restore(corev1api, file_info, file_content):
+    ns = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(ns.metadata)
+    # Populate annotations about projectId
+    projectid = ns.metadata.annotations.get('field.cattle.io/projectId', None)
+    if projectid is not None:
+        ann = {'field.cattle.io/projectId': projectid}
+        if metadata.annotations is not None:
+            metadata.annotations.update(ann)
+        else:
+            metadata.annotations = ann
+    # Instantiate the namespace object
+    namespace = client.V1Namespace(
+        api_version=ns.api_version,
+        kind="Namespace",
+        spec=ns.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # TODO: Do we really need to replace existing namespace?
+        response = corev1api.replace_namespace(k8sfile2objname(file_info.name), namespace, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespace(namespace, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumeclaims.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumeclaims.py
new file mode 100644 (file)
index 0000000..9d4b2b6
--- /dev/null
@@ -0,0 +1,96 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+
+import kubernetes
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import (K8SObjType, encoder_dump,
+                                                      encoder_load,
+                                                      k8sfile2objname,
+                                                      k8sfileinfo)
+from baculak8s.plugins.k8sbackend.k8sutils import prepare_metadata
+
+
+def persistentvolumeclaims_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_persistent_volume_claim(name, namespace)
+
+
+def persistentvolumeclaims_namespaced_names(corev1api, namespace, labels=""):
+    pvcslist = []
+    pvcs = corev1api.list_namespaced_persistent_volume_claim(namespace=namespace, watch=False, label_selector=labels)
+    for pvc in pvcs.items:
+        pvcspec = pvc.spec
+        pvcslist.append([
+            "pvcdata",
+            pvc.metadata.name,
+            pvcspec.storage_class_name,
+            pvcspec.resources.requests.get('storage', '-1')
+        ])
+    return pvcslist
+
+
+def persistentvolumeclaims_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    pvcslist = {}
+    pvcs = corev1api.list_namespaced_persistent_volume_claim(namespace=namespace, watch=False, label_selector=labels)
+    for pvc in pvcs.items:
+        pvcdata = persistentvolumeclaims_read_namespaced(corev1api, namespace, pvc.metadata.name)
+        spec = encoder_dump(pvcdata)
+        # logging.debug("PVCDATA-OBJ:{}".format(pvcdata))
+        # logging.debug("PVCDATA-ENC:{}".format(spec))
+        pvcslist['pvc-' + pvc.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_PVOLCLAIM, nsname=namespace,
+                              name=pvc.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=pvcdata.metadata.creation_timestamp),
+        }
+    return pvcslist
+
+
+def persistentvolumeclaims_restore_namespaced(corev1api, file_info, file_content):
+    pvc = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(pvc.metadata)
+    # Instantiate the persistentvolumeclaims object
+    persistentvolumeclaims = kubernetes.client.V1PersistentVolumeClaim(
+        api_version=pvc.api_version,
+        kind="PersistentVolumeClaim",
+        spec=pvc.spec,
+        metadata=metadata
+    )
+    # clean some data
+    persistentvolumeclaims.spec.volume_mode = None
+    persistentvolumeclaims.spec.volume_name = None
+    logging.debug('PVC: ' + str(persistentvolumeclaims))
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_persistent_volume_claim(k8sfile2objname(file_info.name),
+                                                                        file_info.namespace, persistentvolumeclaims,
+                                                                        pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_persistent_volume_claim(file_info.namespace, persistentvolumeclaims,
+                                                                       pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumes.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/persistentvolumes.py
new file mode 100644 (file)
index 0000000..de7656d
--- /dev/null
@@ -0,0 +1,109 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import pathlib
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.util.size_util import k8s_size_to_int
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def persistentvolume_read(corev1api, name):
+    return corev1api.read_persistent_volume(name)
+
+
+def persistentvolumes_list_all(corev1api, pvfilter=None, estimate=False):
+    pvlist = {}
+    persistentvolumes = corev1api.list_persistent_volume(watch=False)
+    for pv in persistentvolumes.items:
+        if pvfilter is not None and len(pvfilter) > 0:
+            logging.debug("pvfilter-glob-for: {}".format(pv.metadata.name))
+            found = False
+            for pvglob in pvfilter:
+                logging.debug("checking pvglob: {}".format(pvglob))
+                if pathlib.Path(pv.metadata.name).match(pvglob):
+                    found = True
+                    logging.debug('Found.')
+                    break
+            if not found:
+                continue
+        pvdata = persistentvolume_read(corev1api, pv.metadata.name)
+        spec = encoder_dump(pvdata)
+        pvlist[pv.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_PVOLUME,
+                              name=pv.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=pvdata.metadata.creation_timestamp),
+        }
+    return pvlist
+
+
+def persistentvolumes_names(corev1api):
+    pvlist = []
+    persistentvolumes = corev1api.list_persistent_volume(watch=False)
+    for pv in persistentvolumes.items:
+        pvlist.append(["persistentvolume", pv.metadata.name])
+    return pvlist
+
+
+def persistentvolumes_list_all_names(corev1api):
+    pvlist = {}
+    persistentvolumes = corev1api.list_persistent_volume(watch=False)
+    for pv in persistentvolumes.items:
+        pvname = pv.metadata.name
+        pvsize = pv.spec.capacity['storage']
+        logging.debug("pvsize: {} / {}".format(type(pvsize), pvsize))
+        pvlist[pvname] = {
+            'fi': FileInfo(name="/%s/%s" % (K8SObjType.K8SOBJ_PVOLUME_Path, pvname),
+                           ftype=NOT_EMPTY_FILE,
+                           size=k8s_size_to_int(pvsize),
+                           uid=0, gid=0,
+                           mode=DEFAULT_FILE_MODE,
+                           nlink=1,
+                           modified_at=NOW_TIMESTAMP,
+                           accessed_at=NOW_TIMESTAMP,
+                           created_at=k8stimestamp_to_unix_timestamp(pv.metadata.creation_timestamp)),
+        }
+    return pvlist
+
+
+def persistentvolumes_restore(corev1api, file_info, file_content):
+    pv = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(pv.metadata)
+    # Instantiate the persistentvolume object
+    persistentvolume = client.V1PersistentVolume(
+        api_version=pv.api_version,
+        kind="PersistentVolume",
+        spec=pv.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_persistent_volume(k8sfile2objname(file_info.name), persistentvolume, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_persistent_volume(persistentvolume, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podexec.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podexec.py
new file mode 100644 (file)
index 0000000..98c2040
--- /dev/null
@@ -0,0 +1,65 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+
+import kubernetes
+import yaml
+from kubernetes.stream import stream
+from kubernetes.stream.ws_client import (ERROR_CHANNEL, STDERR_CHANNEL,
+                                         STDOUT_CHANNEL)
+
+DEFAULTTIMEOUT = 600
+
+
+class ExecStatus(object):
+    Success = 'Success'
+    Failure = 'Failure'
+
+    @staticmethod
+    def check_status(info_channel):
+        if info_channel is not None:
+            if info_channel.get('status', ExecStatus.Failure) == ExecStatus.Success:
+                return True
+        return False
+
+
+def exec_commands(corev1api, namespace, podname, container, command):
+    exec_command = [
+        '/bin/sh',
+        '-c',
+        command
+    ]
+    client = stream(corev1api.connect_get_namespaced_pod_exec,
+                    podname,
+                    namespace,
+                    command=exec_command,
+                    container=container,
+                    stderr=True, stdin=False,
+                    stdout=True, tty=False,
+                    _preload_content=False)
+    client.run_forever(timeout=DEFAULTTIMEOUT)
+    out_channel = client.read_channel(STDOUT_CHANNEL)
+    err_channel = client.read_channel(STDERR_CHANNEL)
+    info_channel = yaml.load(client.read_channel(ERROR_CHANNEL), Loader=yaml.FullLoader)
+    return out_channel, err_channel, info_channel
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pods.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pods.py
new file mode 100644 (file)
index 0000000..2aff8aa
--- /dev/null
@@ -0,0 +1,115 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+
+import kubernetes
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.plugins.k8sbackend.k8sfileinfo import (NOW_TIMESTAMP,
+                                                      encoder_dump,
+                                                      encoder_load,
+                                                      k8sfile2objname,
+                                                      k8sfileinfo)
+from baculak8s.plugins.k8sbackend.k8sutils import prepare_metadata
+
+
+def pods_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_pod(name, namespace)
+
+
+def pods_namespaced_specs(corev1api, namespace, labels=""):
+    podslist = []
+    pods = corev1api.list_namespaced_pod(namespace=namespace, watch=False, label_selector=labels)
+    for pod in pods.items:
+        podslist.append(pod)
+    # logging.debug("pods_namespaced_specs:{}".format(podslist))
+    return podslist
+
+
+def pods_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    podslist = {}
+    pods = corev1api.list_namespaced_pod(namespace=namespace, watch=False, label_selector=labels)
+    for pod in pods.items:
+        poddata = pods_read_namespaced(corev1api, namespace, pod.metadata.name)
+        spec = encoder_dump(poddata)
+        podslist['pod-' + pod.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_POD, nsname=namespace,
+                              name=pod.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=poddata.metadata.creation_timestamp),
+        }
+    return podslist
+
+
+def pods_restore_namespaced(corev1api, file_info, file_content):
+    pod = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(pod.metadata)
+    # Instantiate the pods object
+    pods = kubernetes.client.V1Pod(
+        api_version=pod.api_version,
+        kind="Pod",
+        spec=pod.spec,
+        metadata=metadata
+    )
+    # clean some data
+    pods.spec.node_name = None
+    secvol = []
+    volumes = []
+    # this removes dynamic secret token from volumes
+    for v in pods.spec.volumes:
+        if v.secret is not None and v.name.startswith('default-token-'):
+            # TODO: we should check if the secret exist and is not type=='kubernetes.io/*'
+            logging.debug('detectedSecVolume:'+str(v))
+            secvol.append(v)
+        else:
+            logging.debug('standardVolume:'+str(v.name))
+            volumes.append(v)
+    pods.spec.volumes = volumes
+    # remove the secret volumes above from each container's volume mounts
+    for c in pods.spec.containers:
+        volume_mounts = []
+        for v in c.volume_mounts:
+            found = False
+            logging.debug('volMountCheck:'+str(v.name))
+            for s in secvol:
+                if s.name == v.name:
+                    found = True
+                    logging.debug('secVolMountFound')
+                    break
+            logging.debug('findResult:'+str(found))
+            if not found:
+                volume_mounts.append(v)
+        c.volume_mounts = volume_mounts
+        logging.debug('volumeMounts after cleanup:' + str(c.volume_mounts))
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_pod(k8sfile2objname(file_info.name),
+                                                    file_info.namespace, pods, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_pod(file_info.namespace, pods, pretty='true')
+    return {'response': response}
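For reference, the default-token filtering above can be illustrated in isolation. This is a minimal sketch (not part of the plugin): `namedtuple` stubs stand in for the Kubernetes client's `V1Volume`/`V1VolumeMount` objects, and the helper names are hypothetical.

```python
# Standalone sketch of the default-token volume filtering done in
# pods_restore_namespaced. Volumes/mounts are stubbed with namedtuples;
# the real code operates on kubernetes.client.V1Volume / V1VolumeMount.
from collections import namedtuple

Volume = namedtuple("Volume", ["name", "secret"])
Mount = namedtuple("Mount", ["name"])

def split_secret_volumes(volumes):
    """Separate auto-generated service-account token volumes from the rest."""
    secvol, regular = [], []
    for v in volumes:
        if v.secret is not None and v.name.startswith("default-token-"):
            secvol.append(v)
        else:
            regular.append(v)
    return secvol, regular

def clean_volume_mounts(mounts, secvol):
    """Drop mounts that reference one of the removed secret volumes."""
    secnames = {s.name for s in secvol}
    return [m for m in mounts if m.name not in secnames]

volumes = [Volume("data", None), Volume("default-token-x7k2p", "token-ref")]
secvol, regular = split_secret_volumes(volumes)
mounts = clean_volume_mounts([Mount("data"), Mount("default-token-x7k2p")], secvol)
# only the "data" volume and its mount survive the cleanup
```

The point of the filtering is that `default-token-*` volumes are created dynamically by Kubernetes for each ServiceAccount, so restoring a stale reference would point the pod at a token secret that no longer exists.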
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podtemplates.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/podtemplates.py
new file mode 100644 (file)
index 0000000..93af009
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def podtemplates_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_pod_template(name, namespace)
+
+
+def podtemplates_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    podstlist = {}
+    podst = corev1api.list_namespaced_pod_template(namespace=namespace, watch=False, label_selector=labels)
+    for podt in podst.items:
+        podtdata = podtemplates_read_namespaced(corev1api, namespace, podt.metadata.name)
+        spec = encoder_dump(podtdata)
+        podstlist['pod-' + podt.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_PODTEMPLATE, nsname=namespace,
+                              name=podt.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=podtdata.metadata.creation_timestamp),
+        }
+    return podstlist
+
+
+def podtemplates_restore_namespaced(corev1api, file_info, file_content):
+    podst = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(podst.metadata)
+    # Instantiate the podtemplates object
+    podtemplates = client.V1PodTemplate(
+        api_version=podst.api_version,
+        kind="PodTemplate",
+        template=podst.template,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_pod_template(k8sfile2objname(file_info.name),
+                                                             file_info.namespace, podtemplates, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_pod_template(file_info.namespace, podtemplates, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcclone.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcclone.py
new file mode 100644 (file)
index 0000000..ba3b09e
--- /dev/null
@@ -0,0 +1,71 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import os
+import string
+
+from baculak8s.util.token import generate_token
+
+DEFAULTCLONEYAML = os.getenv('DEFAULTCLONEYAML', "/opt/bacula/scripts/bacula-backup-clone.yaml")
+CLONETEMPLATE = """
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: {clonename}
+  namespace: {namespace}
+  labels:
+    app: baculabackup
+spec:
+  storageClassName: {storageclassname}
+  dataSource:
+    name: {pvcname}
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: {pvcsize}
+"""
+
+
+def prepare_backup_clone_yaml(namespace, pvcname, pvcsize, scname, clonename=None):
+    """ Handles PVC clone yaml preparation based on available templates
+
+    Args:
+        namespace (str): k8s namespace for pvc clone
+        pvcname (str): source pvc name to clone from
+        pvcsize (str): k8s capacity of the original pvc
+        scname (str): storage class of the original pvc
+        clonename (str, optional): name of the destination (cloned) PVC; if None, a name is generated automatically. Defaults to None.
+
+    Returns:
+        tuple(2): the prepared pvc clone yaml string and the assigned pvc clone name, useful especially when the name was generated automatically.
+    """
+    cloneyaml = CLONETEMPLATE
+    if os.path.exists(DEFAULTCLONEYAML):
+        with open(DEFAULTCLONEYAML, 'r') as file:
+            cloneyaml = file.read()
+    if clonename is None:
+        validchars = tuple(string.ascii_lowercase) + tuple(string.digits)
+        clonename = "{pvcname}-baculaclone-{id}".format(pvcname=pvcname, id=generate_token(size=6, chars=validchars))
+
+    return cloneyaml.format(namespace=namespace, pvcname=pvcname, pvcsize=pvcsize, clonename=clonename, storageclassname=scname), clonename
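The fallback template and name generation above can be exercised standalone. A sketch, with a local stand-in for `baculak8s.util.token.generate_token` (its signature is assumed from the call site) and a trimmed copy of the clone template:

```python
# Sketch of the clone-name generation and template formatting used by
# prepare_backup_clone_yaml. generate_token is re-implemented here as a
# stand-in for baculak8s.util.token.generate_token.
import random
import string

def generate_token(size=6, chars=string.ascii_lowercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

CLONETEMPLATE = """\
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {clonename}
  namespace: {namespace}
spec:
  storageClassName: {storageclassname}
  dataSource:
    name: {pvcname}
    kind: PersistentVolumeClaim
  resources:
    requests:
      storage: {pvcsize}
"""

# lowercase letters and digits keep the name a valid DNS-1123 label
validchars = tuple(string.ascii_lowercase) + tuple(string.digits)
clonename = "{pvcname}-baculaclone-{id}".format(
    pvcname="db-data", id=generate_token(size=6, chars=validchars))
cloneyaml = CLONETEMPLATE.format(namespace="default", pvcname="db-data",
                                 pvcsize="10Gi", clonename=clonename,
                                 storageclassname="standard")
```

The random suffix lets several concurrent backup jobs clone the same PVC without name collisions, which matters because the `dataSource` CSI clone is namespaced alongside the original.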
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcdata.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/pvcdata.py
new file mode 100644 (file)
index 0000000..71a29b5
--- /dev/null
@@ -0,0 +1,171 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2020 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+
+import kubernetes
+from baculak8s.entities.file_info import (DEFAULT_FILE_MODE, NOT_EMPTY_FILE,
+                                          FileInfo)
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.plugins.k8sbackend.k8sfileinfo import NOW_TIMESTAMP, k8sfileinfo
+from baculak8s.plugins.k8sbackend.persistentvolumeclaims import (
+    persistentvolumeclaims_namespaced_names,
+    persistentvolumeclaims_read_namespaced)
+from baculak8s.plugins.k8sbackend.pods import pods_namespaced_specs
+from baculak8s.util.size_util import k8s_size_to_int
+
+
+def pvcdata_list_update_node_names(corev1api, namespace, pvcdatalist):
+    """ Updates node_name values for pvcdatalist.
+
+    Args:
+        corev1api (corev1api): kubernetes corev1api instance
+        namespace (str): kubernetes namespace
+        pvcdatalist (dict): pvc data list as dictionary
+
+    Returns:
+        dict: updated pvc data list as dictionary
+    """
+    # here we collect node_names for proper backup pod deployment
+    pods = pods_namespaced_specs(corev1api, namespace=namespace)
+    for pod in pods:
+        # pods without volumes have spec.volumes == None; skip them
+        if pod.spec.volumes is None:
+            continue
+        for vol in pod.spec.volumes:
+            if vol.persistent_volume_claim is not None:
+                pvcname = vol.persistent_volume_claim.claim_name
+                for pvcf in pvcdatalist:
+                    if pvcname == pvcdatalist[pvcf].get('name') and pvcdatalist[pvcf].get('node_name') is None:
+                        pvcdatalist[pvcf]['node_name'] = pod.spec.node_name
+    return pvcdatalist
+
+
+def pvcdata_get_namespaced(corev1api, namespace, pvcname, pvcalias=None):
+    """ Return a single pvcdata dict for requested pvcname.
+
+    Args:
+        corev1api (corev1api): kubernetes corev1api instance
+        namespace (str): kubernetes namespace
+        pvcname (str): requested pvc name
+        pvcalias (str, optional): when not None, the FileInfo object will use it as the file name. Defaults to None.
+
+    Returns:
+        dict: pvc data dict
+    """
+    pvc = persistentvolumeclaims_read_namespaced(corev1api, namespace, pvcname)
+    pvcspec = pvc.spec
+    storageclassname = pvcspec.storage_class_name
+    pvcsize = pvcspec.resources.requests.get('storage', '-1')
+    pvcdata = {
+        'name': pvcname,
+        'node_name': None,
+        'storage_class_name': storageclassname,
+        'capacity': pvcsize,
+        'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_PVCDATA,
+                          nsname=namespace,
+                          name=pvcname if pvcalias is None else pvcalias,
+                          ftype=NOT_EMPTY_FILE,
+                          size=pvcsize),
+    }
+    pvcdatalist = pvcdata_list_update_node_names(corev1api, namespace, {pvcname: pvcdata})
+    return pvcdatalist.get(pvcname)
+
+
+def pvcdata_list_namespaced(corev1api, namespace, estimate=False, pvcfilter=True, labels=""):
+    """ Return a list of pvcdata dicts for selected namespace.
+
+    Args:
+        corev1api (corev1api): kubernetes corev1api instance
+        namespace (str): kubernetes namespace
+        estimate (bool, optional): True for an estimate job, False for a backup job. Defaults to False.
+        pvcfilter (bool or list, optional): when a non-empty list, only PVCs whose names are in the list are selected; when True, all PVCs are listed. Defaults to True.
+        labels (str, optional): selector labels
+
+    Returns:
+        dict: pvc data list as dictionary
+    """
+    pvcdata = {}
+    if pvcfilter:
+        logging.debug("pvcfilter: {}".format(pvcfilter))
+        pvcnamelist = persistentvolumeclaims_namespaced_names(corev1api, namespace, labels)
+        for pvcn in pvcnamelist:
+            pvcname = pvcn[1]
+            logging.debug('found:{}'.format(pvcname))
+            if isinstance(pvcfilter, list) and pvcname not in pvcfilter:
+                continue
+            pvcsize = pvcn[3]
+            pvcdata[pvcname] = {
+                'name': pvcname,
+                'node_name': None,
+                'storage_class_name': pvcn[2],
+                'capacity': pvcsize,
+                'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_PVCDATA,
+                                  nsname=namespace,
+                                  name=pvcname,
+                                  ftype=NOT_EMPTY_FILE,
+                                  size=pvcsize),
+            }
+            logging.debug("add pvc: {}".format(pvcdata[pvcname]))
+    # here we collect node_names for proper backup pod deployment
+    pods = pods_namespaced_specs(corev1api, namespace=namespace)
+    for pod in pods:
+        if pod.spec.volumes is not None:
+            for vol in pod.spec.volumes:
+                if vol.persistent_volume_claim is not None:
+                    pvcname = vol.persistent_volume_claim.claim_name
+                    for pvcf in pvcdata:
+                        if pvcname == pvcdata[pvcf].get('name') and pvcdata[pvcf].get('node_name') is None:
+                            pvcdata[pvcf]['node_name'] = pod.spec.node_name
+    return pvcdata
+
+
+def list_pvcs_namespaced(corev1api, namespace):
+    """ Return pvclist for selected namespace with FileInfo object only.
+       This function is useful in listing mode only.
+
+    Args:
+        corev1api (corev1api): kubernetes corev1api instance
+        namespace (str): kubernetes namespace
+
+    Returns:
+        dict: pvc data list as dictionary
+    """
+    pvcslist = {}
+    pvcnamelist = persistentvolumeclaims_namespaced_names(corev1api, namespace)
+    for pvcn in pvcnamelist:
+        pvcname = pvcn[1]
+        pvcsize = pvcn[3]
+        logging.debug('found:{} : {}'.format(pvcname, pvcsize))
+        name = "/{}/{}/{}/{}".format(K8SObjType.K8SOBJ_NAMESPACE_Path, namespace,
+                                        K8SObjType.K8SOBJ_PVCDATA_Path, pvcname)
+        pvcslist[pvcname] = {
+            'fi': FileInfo(name=name,
+                            ftype=NOT_EMPTY_FILE,
+                            size=k8s_size_to_int(pvcsize),
+                            uid=0, gid=0,
+                            mode=DEFAULT_FILE_MODE,
+                            nlink=1,
+                            modified_at=NOW_TIMESTAMP,
+                            accessed_at=NOW_TIMESTAMP,
+                            created_at=NOW_TIMESTAMP),
+        }
+    return pvcslist
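The `node_name` back-fill loop above can be sketched with plain dicts in place of `V1Pod` specs. This simplified version keys the PVC map directly by claim name; the real code matches on the `'name'` field of each entry:

```python
# Simplified sketch of the node_name back-fill in pvcdata_list_namespaced:
# each PVC entry is assigned the node of the first pod that mounts it, so
# the backup pod can later be scheduled on the same node as the data.
def update_node_names(pvcdata, pods):
    for pod in pods:
        for claim in pod.get("claims") or []:
            entry = pvcdata.get(claim)
            if entry is not None and entry.get("node_name") is None:
                entry["node_name"] = pod["node_name"]
    return pvcdata

pvcdata = {
    "db-data": {"name": "db-data", "node_name": None},
    "logs": {"name": "logs", "node_name": None},
}
pods = [
    {"node_name": "worker-1", "claims": ["db-data"]},
    {"node_name": "worker-2", "claims": ["db-data", "logs"]},
]
update_node_names(pvcdata, pods)
# db-data keeps worker-1 (the first pod that claims it); logs gets worker-2
```

Only entries whose `node_name` is still `None` are updated, so a PVC mounted by several pods is pinned to the first pod found, never reassigned mid-scan.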
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicaset.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicaset.py
new file mode 100644 (file)
index 0000000..d65b43b
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def replica_sets_read_namespaced(appsv1api, namespace, name):
+    return appsv1api.read_namespaced_replica_set(name, namespace)
+
+
+def replica_sets_list_namespaced(appsv1api, namespace, estimate=False, labels=""):
+    rslist = {}
+    replicasets = appsv1api.list_namespaced_replica_set(namespace=namespace, watch=False, label_selector=labels)
+    for rs in replicasets.items:
+        rsdata = replica_sets_read_namespaced(appsv1api, namespace, rs.metadata.name)
+        spec = encoder_dump(rsdata)
+        rslist['rs-' + rs.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_REPLICASET, nsname=namespace,
+                              name=rs.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=rsdata.metadata.creation_timestamp),
+        }
+    return rslist
+
+
+def replica_sets_restore_namespaced(appsv1api, file_info, file_content):
+    rs = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(rs.metadata)
+    # Instantiate the replicaset object
+    replicaset = client.V1ReplicaSet(
+        api_version=rs.api_version,
+        kind="ReplicaSet",
+        spec=rs.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = appsv1api.replace_namespaced_replica_set(k8sfile2objname(file_info.name),
+                                                            file_info.namespace, replicaset, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = appsv1api.create_namespaced_replica_set(file_info.namespace, replicaset, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicationcontroller.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/replicationcontroller.py
new file mode 100644 (file)
index 0000000..135fcba
--- /dev/null
@@ -0,0 +1,71 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def replication_controller_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_replication_controller(name, namespace)
+
+
+def replication_controller_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    rclist = {}
+    replicationcontroller = corev1api.list_namespaced_replication_controller(namespace=namespace, watch=False,
+                                                                             label_selector=labels)
+    for rc in replicationcontroller.items:
+        rcdata = replication_controller_read_namespaced(corev1api, namespace, rc.metadata.name)
+        spec = encoder_dump(rcdata)
+        rclist['rc-' + rc.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_REPLICACONTR, nsname=namespace,
+                              name=rc.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=rcdata.metadata.creation_timestamp),
+        }
+    return rclist
+
+
+def replication_controller_restore_namespaced(corev1api, file_info, file_content):
+    rc = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(rc.metadata)
+    # Instantiate the replicationcontroller object
+    replicationcontroller = client.V1ReplicationController(
+        api_version=rc.api_version,
+        kind="ReplicationController",
+        spec=rc.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_replication_controller(k8sfile2objname(file_info.name),
+                                                                       file_info.namespace, replicationcontroller,
+                                                                       pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_replication_controller(file_info.namespace, replicationcontroller,
+                                                                      pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/resourcequota.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/resourcequota.py
new file mode 100644 (file)
index 0000000..23e13c0
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def resource_quota_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_resource_quota(name, namespace)
+
+
+def resource_quota_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    rqlist = {}
+    resourcequota = corev1api.list_namespaced_resource_quota(namespace=namespace, watch=False, label_selector=labels)
+    for rq in resourcequota.items:
+        rqdata = resource_quota_read_namespaced(corev1api, namespace, rq.metadata.name)
+        spec = encoder_dump(rqdata)
+        rqlist['rq-' + rq.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_RESOURCEQUOTA, nsname=namespace,
+                              name=rq.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=rqdata.metadata.creation_timestamp),
+        }
+    return rqlist
+
+
+def resource_quota_restore_namespaced(corev1api, file_info, file_content):
+    rq = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(rq.metadata)
+    # Instantiate the resourcequota object
+    resourcequota = client.V1ResourceQuota(
+        api_version=rq.api_version,
+        kind="ResourceQuota",
+        spec=rq.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_resource_quota(k8sfile2objname(file_info.name),
+                                                               file_info.namespace, resourcequota, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_resource_quota(file_info.namespace, resourcequota, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/secret.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/secret.py
new file mode 100644 (file)
index 0000000..a43ce07
--- /dev/null
@@ -0,0 +1,72 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def secrets_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_secret(name, namespace, pretty='true')
+
+
+def secrets_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    seclist = {}
+    secrets = corev1api.list_namespaced_secret(namespace=namespace, watch=False, label_selector=labels)
+    for sec in secrets.items:
+        secdata = secrets_read_namespaced(corev1api, namespace, sec.metadata.name)
+        spec = encoder_dump(secdata)
+        seclist['sec-' + sec.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_SECRET, nsname=namespace,
+                              name=sec.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=secdata.metadata.creation_timestamp),
+        }
+    return seclist
+
+
+def secrets_restore_namespaced(corev1api, file_info, file_content):
+    sec = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(sec.metadata,
+                                annotations=(sec.type == 'kubernetes.io/service-account-token'))
+    # Instantiate the secret object
+    secret = client.V1Secret(
+        api_version=sec.api_version,
+        kind="Secret",
+        data=sec.data,
+        string_data=sec.string_data,
+        type=sec.type,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_secret(k8sfile2objname(file_info.name),
+                                                       file_info.namespace, secret, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_secret(file_info.namespace, secret, pretty='true')
+    return {'response': response}
+
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/service.py
new file mode 100644 (file)
index 0000000..0f80b5f
--- /dev/null
@@ -0,0 +1,70 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def services_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_service(name, namespace)
+
+
+def services_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    srvlist = {}
+    services = corev1api.list_namespaced_service(namespace=namespace, watch=False, label_selector=labels)
+    for srv in services.items:
+        srvdata = services_read_namespaced(corev1api, namespace, srv.metadata.name)
+        spec = encoder_dump(srvdata)
+        srvlist['srv-' + srv.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_SERVICE, nsname=namespace,
+                              name=srv.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=srvdata.metadata.creation_timestamp),
+        }
+    return srvlist
+
+
+def services_restore_namespaced(corev1api, file_info, file_content):
+    srv = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(srv.metadata)
+    # Instantiate the services object
+    services = client.V1Service(
+        api_version=srv.api_version,
+        kind="Service",
+        spec=srv.spec,
+        metadata=metadata
+    )
+    # clean some data
+    services.spec.cluster_ip = None
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_service(k8sfile2objname(file_info.name),
+                                                        file_info.namespace, services, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_service(file_info.namespace, services, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/serviceaccounts.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/serviceaccounts.py
new file mode 100644 (file)
index 0000000..6756a3e
--- /dev/null
@@ -0,0 +1,70 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def service_accounts_read_namespaced(corev1api, namespace, name):
+    return corev1api.read_namespaced_service_account(name, namespace)
+
+
+def service_accounts_list_namespaced(corev1api, namespace, estimate=False, labels=""):
+    salist = {}
+    serviceaccounts = corev1api.list_namespaced_service_account(namespace=namespace, watch=False, label_selector=labels)
+    for sa in serviceaccounts.items:
+        sadata = service_accounts_read_namespaced(corev1api, namespace, sa.metadata.name)
+        spec = encoder_dump(sadata)
+        salist['sa-' + sa.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_SERVICEACCOUNT, nsname=namespace,
+                              name=sa.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=sadata.metadata.creation_timestamp),
+        }
+    return salist
+
+
+def service_accounts_restore_namespaced(corev1api, file_info, file_content):
+    sa = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(sa.metadata)
+    # Instantiate the serviceaccount object
+    serviceaccount = client.V1ServiceAccount(
+        api_version=sa.api_version,
+        kind="ServiceAccount",
+        automount_service_account_token=sa.automount_service_account_token,
+        image_pull_secrets=sa.image_pull_secrets,
+        secrets=sa.secrets,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = corev1api.replace_namespaced_service_account(k8sfile2objname(file_info.name),
+                                                                file_info.namespace, serviceaccount, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = corev1api.create_namespaced_service_account(file_info.namespace, serviceaccount, pretty='true')
+    return {'response': response}
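The `*_restore_namespaced` helpers above all share a replace-or-create (upsert) decision keyed on `file_info.objcache`. A minimal standalone sketch of that decision, using a hypothetical `FakeApi` stand-in instead of a live `kubernetes.client.CoreV1Api`:

```python
# Sketch of the replace-or-create pattern used by the restore helpers.
# FakeApi and restore_service_account are illustrative stand-ins, not
# part of the plugin; a real run targets a cluster via CoreV1Api.
class FakeApi:
    def replace_namespaced_service_account(self, name, namespace, body, pretty='true'):
        return ('replaced', name, namespace)

    def create_namespaced_service_account(self, namespace, body, pretty='true'):
        return ('created', namespace)


def restore_service_account(api, name, namespace, body, objcache):
    # objcache is not None when the object already exists in the
    # cluster, so it is replaced; otherwise it is created anew.
    if objcache is not None:
        return api.replace_namespaced_service_account(name, namespace, body)
    return api.create_namespaced_service_account(namespace, body)


api = FakeApi()
print(restore_service_account(api, 'builder', 'ci', {}, objcache={'uid': 'x'}))
print(restore_service_account(api, 'builder', 'ci', {}, objcache=None))
```

The same branch structure recurs for services, stateful sets, and storage classes below; only the API method pair changes.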
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/statefulset.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/statefulset.py
new file mode 100644 (file)
index 0000000..f6fa5d7
--- /dev/null
@@ -0,0 +1,68 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transferred to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+
+
+def stateful_sets_read_namespaced(appsv1api, namespace, name):
+    return appsv1api.read_namespaced_stateful_set(name, namespace)
+
+
+def stateful_sets_list_namespaced(appsv1api, namespace, estimate=False, labels=""):
+    sslist = {}
+    statefulsets = appsv1api.list_namespaced_stateful_set(namespace=namespace, watch=False, label_selector=labels)
+    for ss in statefulsets.items:
+        ssdata = stateful_sets_read_namespaced(appsv1api, namespace, ss.metadata.name)
+        spec = encoder_dump(ssdata)
+        sslist['ss-' + ss.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_STATEFULSET, nsname=namespace,
+                              name=ss.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=ssdata.metadata.creation_timestamp),
+        }
+    return sslist
+
+
+def stateful_sets_restore_namespaced(appsv1api, file_info, file_content):
+    ss = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(ss.metadata)
+    # Instantiate the statefulset object
+    statefulset = client.V1StatefulSet(
+        api_version=ss.api_version,
+        kind="StatefulSet",
+        spec=ss.spec,
+        metadata=metadata
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = appsv1api.replace_namespaced_stateful_set(k8sfile2objname(file_info.name),
+                                                             file_info.namespace, statefulset, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = appsv1api.create_namespaced_stateful_set(file_info.namespace, statefulset, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/storageclass.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/k8sbackend/storageclass.py
new file mode 100644 (file)
index 0000000..4795c81
--- /dev/null
@@ -0,0 +1,146 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transferred to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+import pathlib
+
+from baculak8s.entities.file_info import NOT_EMPTY_FILE
+from baculak8s.plugins.k8sbackend.k8sfileinfo import *
+from baculak8s.plugins.k8sbackend.k8sutils import *
+from baculak8s.util.size_util import k8s_size_to_int
+
+
+def storageclass_read(storagev1api, name):
+    return storagev1api.read_storage_class(name)
+
+
+def storageclass_list_all(storagev1api, scfilter=None, estimate=False):
+    sclist = {}
+    storageclass = storagev1api.list_storage_class(watch=False)
+    for sc in storageclass.items:
+        if scfilter is not None and len(scfilter) > 0:
+            logging.debug("scfilter-glob-for: {}".format(sc.metadata.name))
+            found = False
+            for scglob in scfilter:
+                logging.debug("checking scglob: {}".format(scglob))
+                if pathlib.Path(sc.metadata.name).match(scglob):
+                    found = True
+                    logging.debug('Found.')
+                    break
+            if not found:
+                continue
+        scdata = storageclass_read(storagev1api, sc.metadata.name)
+        spec = encoder_dump(scdata)
+        sclist[sc.metadata.name] = {
+            'spec': spec if not estimate else None,
+            'fi': k8sfileinfo(objtype=K8SObjType.K8SOBJ_STORAGECLASS,
+                              name=sc.metadata.name,
+                              ftype=NOT_EMPTY_FILE,
+                              size=len(spec),
+                              creation_timestamp=scdata.metadata.creation_timestamp),
+        }
+    return sclist
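The `scfilter` loop in `storageclass_list_all` relies on `pathlib.Path.match` for shell-style glob matching of storage class names: an empty or missing filter accepts everything, otherwise the name must match at least one glob. A small standalone illustration of that matching behavior (the helper name is hypothetical):

```python
import pathlib

def matches_filter(name, scfilter):
    # Mirrors the scfilter loop above: no filter means accept all;
    # otherwise at least one glob pattern must match the name.
    if not scfilter:
        return True
    return any(pathlib.Path(name).match(glob) for glob in scfilter)

print(matches_filter('standard', None))        # no filter: accepted
print(matches_filter('fast-ssd', ['fast-*']))  # glob hit
print(matches_filter('standard', ['fast-*']))  # filtered out
```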
+
+
+def storageclass_names(storagev1api):
+    sclist = []
+    storageclass = storagev1api.list_storage_class(watch=False)
+    for sc in storageclass.items:
+        sclist.append(["storageclass", sc.metadata.name])
+    return sclist
+
+
+def storageclass_list_all_names(storagev1api):
+    sclist = {}
+    storageclass = storagev1api.list_storage_class(watch=False)
+    # logging.debug(storageclass)
+    for sc in storageclass.items:
+        sclist[sc.metadata.name] = {
+            'fi': FileInfo(name="/%s/%s" % (K8SObjType.K8SOBJ_STORAGECLASS_Path, sc.metadata.name),
+                           ftype=NOT_EMPTY_FILE,
+                           size=1024,   # arbitrary file size
+                           uid=0, gid=0,
+                           mode=DEFAULT_FILE_MODE,
+                           nlink=1,
+                           modified_at=NOW_TIMESTAMP,
+                           accessed_at=NOW_TIMESTAMP,
+                           created_at=k8stimestamp_to_unix_timestamp(sc.metadata.creation_timestamp)),
+        }
+    return sclist
+
+
+"""
+{'allow_volume_expansion': True,
+ 'allowed_topologies': None,
+ 'api_version': None,
+ 'kind': None,
+ 'metadata': {'annotations': {'storageclass.kubernetes.io/is-default-class': 'true'},
+              'cluster_name': None,
+              'creation_timestamp': datetime.datetime(2020, 7, 23, 12, 0, 2, tzinfo=tzlocal()),
+              'deletion_grace_period_seconds': None,
+              'deletion_timestamp': None,
+              'finalizers': None,
+              'generate_name': None,
+              'generation': None,
+              'initializers': None,
+              'labels': {'addonmanager.kubernetes.io/mode': 'EnsureExists'},
+              'managed_fields': None,
+              'name': 'standard',
+              'namespace': None,
+              'owner_references': None,
+              'resource_version': '265',
+              'self_link': '/apis/storage.k8s.io/v1/storageclasses/standard',
+              'uid': 'b859eec1-8055-4cad-b9a0-fc13ef01a21a'},
+ 'mount_options': None,
+ 'parameters': {'type': 'pd-standard'},
+ 'provisioner': 'kubernetes.io/gce-pd',
+ 'reclaim_policy': 'Delete',
+ 'volume_binding_mode': 'Immediate'}
+"""
+
+
+def storageclass_restore(storagev1api, file_info, file_content):
+    logging.debug("storageclass_restore:fileinfo: {}".format(file_info))
+    sc = encoder_load(file_content, file_info.name)
+    metadata = prepare_metadata(sc.metadata)
+    # Instantiate the storageclass object
+    storageclass = client.V1StorageClass(
+        api_version=sc.api_version,
+        kind="StorageClass",
+        metadata=metadata,
+        allow_volume_expansion=sc.allow_volume_expansion,
+        allowed_topologies=sc.allowed_topologies,
+        mount_options=sc.mount_options,
+        parameters=sc.parameters,
+        provisioner=sc.provisioner,
+        reclaim_policy=sc.reclaim_policy,
+        volume_binding_mode=sc.volume_binding_mode,
+    )
+    if file_info.objcache is not None:
+        # object exists, so we replace it
+        response = storagev1api.replace_storage_class(k8sfile2objname(file_info.name), body=storageclass, pretty='true')
+    else:
+        # object does not exist, so create one as required
+        response = storagev1api.create_storage_class(body=storageclass, pretty='true')
+    return {'response': response}
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/kubernetes_plugin.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/kubernetes_plugin.py
new file mode 100644 (file)
index 0000000..5d50219
--- /dev/null
@@ -0,0 +1,912 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import logging
+
+import requests
+import requests.packages.urllib3
+import urllib3
+from kubernetes import client, config
+from kubernetes.client.rest import ApiException
+from urllib3.exceptions import MaxRetryError, SSLError, TimeoutError
+from baculak8s.entities.file_info import (DEFAULT_DIR_MODE, DIRECTORY,
+                                          EMPTY_FILE, FileInfo)
+from baculak8s.entities.k8sobjtype import K8SObjType
+from baculak8s.io.log import Log
+from baculak8s.plugins import k8sbackend
+from baculak8s.plugins.k8sbackend.baculabackup import BACULABACKUPPODNAME
+from baculak8s.plugins.k8sbackend.baculaannotations import annotated_namespaced_pods_data
+from baculak8s.plugins.k8sbackend.configmaps import *
+from baculak8s.plugins.k8sbackend.daemonset import *
+from baculak8s.plugins.k8sbackend.deployment import *
+from baculak8s.plugins.k8sbackend.endpoints import *
+from baculak8s.plugins.k8sbackend.k8sfileinfo import NOW_TIMESTAMP
+from baculak8s.plugins.k8sbackend.limitrange import *
+from baculak8s.plugins.k8sbackend.namespaces import *
+from baculak8s.plugins.k8sbackend.persistentvolumeclaims import *
+from baculak8s.plugins.k8sbackend.persistentvolumes import *
+from baculak8s.plugins.k8sbackend.pods import *
+from baculak8s.plugins.k8sbackend.podtemplates import *
+from baculak8s.plugins.k8sbackend.pvcdata import *
+from baculak8s.plugins.k8sbackend.replicaset import *
+from baculak8s.plugins.k8sbackend.replicationcontroller import *
+from baculak8s.plugins.k8sbackend.resourcequota import *
+from baculak8s.plugins.k8sbackend.secret import *
+from baculak8s.plugins.k8sbackend.service import *
+from baculak8s.plugins.k8sbackend.serviceaccounts import *
+from baculak8s.plugins.k8sbackend.statefulset import *
+from baculak8s.plugins.k8sbackend.storageclass import *
+from baculak8s.plugins.plugin import *
+from baculak8s.util.date_util import gmt_to_unix_timestamp
+
+HTTP_NOT_FOUND = 404
+K8S_POD_CONTAINER_STATUS_ERROR = "Pod status error! Reason: {}, Message: {}"
+
+
+class KubernetesPlugin(Plugin):
+    """
+        Plugin that communicates with the Kubernetes API
+    """
+
+    def __init__(self, params):
+        logging.debug("params:" + str(params))
+        self._params = params
+        _ns = params.get("namespace", [])
+        _pv = params.get("persistentvolume", [])
+        _sc = params.get("storageclass", [])
+        _vssl = params.get("verify_ssl", None)
+        _pvconfig = params.get("pvconfig", True)
+        _scconfig = params.get("scconfig", True)
+        _pvcdata = params.get("pvcdata", None)
+        if isinstance(_pvcdata, str):
+            # single param
+            if _pvcdata == '1':
+                # pvcdata without value
+                _pvcdata = True
+            else:
+                _pvcdata = [_pvcdata]
+
+        self.config = {
+            'config_file': params.get("config", None),
+            'host': params.get("host", None),
+            'token': params.get("token", None),
+            'username': params.get("username", None),   # for future usage
+            'password': params.get("password", None),   # for future usage
+            'incluster': params.get("incluster", None),
+            'verify_ssl': True if _vssl is None or _vssl == "1" or _vssl == '' else False,
+            'ssl_ca_cert': params.get("ssl_ca_cert", None),
+            'namespace': _ns if len(_ns) > 0 else None,
+            'nsconfig': True,
+            'pv': _pv if len(_pv) > 0 else None,
+            'pvconfig': False if _pvconfig == '0' else True,
+            'storageclass': _sc if len(_sc) > 0 else None,
+            'scconfig': False if _scconfig == '0' else True,
+            'pvcdata': _pvcdata,
+            'labels': params.get("labels", ""),
+        }
+
+        # disable namespace backup when the pv param is selected
+        if self.config.get('pv') is not None and self.config.get('namespace') is None:
+            self.config['nsconfig'] = False
+
+        # PVCs are namespaced objects, so the namespace=... parameter is required
+        if self.config.get('namespace') is None and self.config.get('pvcdata') is not None:
+            self.config['pvcdata'] = None
+
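The constructor above normalizes loosely typed plugin parameters: `pvcdata='1'` means the flag was passed without a value, `pv` without `namespace` disables namespace backup, and `pvcdata` is dropped when no namespace is given. A condensed sketch of those rules (the function name is illustrative, not part of the plugin):

```python
def normalize_params(params):
    # Condensed version of the normalization in KubernetesPlugin.__init__.
    pvcdata = params.get('pvcdata')
    if isinstance(pvcdata, str):
        # '1' means the pvcdata flag was given without a value
        pvcdata = True if pvcdata == '1' else [pvcdata]
    cfg = {
        'namespace': params.get('namespace', []) or None,
        'pv': params.get('persistentvolume', []) or None,
        'pvcdata': pvcdata,
        'nsconfig': True,
    }
    # pv=... alone selects volumes only, so namespace backup is disabled
    if cfg['pv'] is not None and cfg['namespace'] is None:
        cfg['nsconfig'] = False
    # PVCs are namespaced, so pvcdata is meaningless without namespace=...
    if cfg['namespace'] is None and cfg['pvcdata'] is not None:
        cfg['pvcdata'] = None
    return cfg

print(normalize_params({'pvcdata': '1', 'namespace': ['prod']}))
```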
+        logging.debug("pluginconfig:{}".format(self.config))
+
+        self.coreapi = None
+        self.corev1api = None
+        self.appsv1api = None
+        self.storagev1api = None
+        self.clientConfiguration = None
+        self.clientAPI = None
+        self.k8s = {
+            K8SObjType.K8SOBJ_PVOLUME: {},
+            K8SObjType.K8SOBJ_NAMESPACE: {},
+            K8SObjType.K8SOBJ_STORAGECLASS: {},
+            K8SObjType.K8SOBJ_PVCDATA: {},
+        }
+        self._connected = False
+
+    def connect(self):
+        """
+            Implementation of Plugin.connect(self)
+        """
+        if self.config.get("incluster") is not None:
+            try:
+                config.load_incluster_config()
+            except config.config_exception.ConfigException as e:
+                strerror = ' '.join(e.args)
+                logging.debug("ERROR at load_incluster_config: " + strerror)
+                return {
+                    'error': "incluster error: " + strerror,
+                    'exception': True,
+                }
+            except Exception as e:
+                logging.debug("ERROR at load_incluster_config: " + str(e))
+                return {
+                    'error': str(e),
+                    'exception': True,
+                }
+
+        elif self.config.get("token") is not None and self.config.get('host') is not None:
+            # handle connection with bearertoken and apihost params
+            self.clientConfiguration = client.Configuration()
+            self.clientConfiguration.host = self.config.get('host')
+            self.clientConfiguration.verify_ssl = self.config.get('verify_ssl')
+            ssl_ca_cert = self.config.get("ssl_ca_cert")
+            if ssl_ca_cert is not None:
+                self.clientConfiguration.ssl_ca_cert = ssl_ca_cert
+            self.clientConfiguration.api_key = {"authorization": "Bearer " + self.config.get("token")}
+            self.clientAPI = client.ApiClient(configuration=self.clientConfiguration)
+        else:
+            configfile = self.config.get("config_file", None)
+            if configfile is None:
+                configfile = config.kube_config.KUBE_CONFIG_DEFAULT_LOCATION
+            logging.debug("load_kube_config(config_file={})".format(configfile))
+            try:
+                config.load_kube_config(config_file=configfile)
+            except OSError as e:
+                logging.debug("ERROR OSError at load_kube_config: " + str(e.strerror))
+                return {
+                    'error': e.strerror + " config=" + str(configfile),
+                    'exception': True,
+                }
+            except Exception as e:
+                logging.debug("ERROR Exception at load_kube_config: " + str(e))
+                return {
+                    'error': str(e) + " config=" + str(configfile),
+                    'exception': True,
+                }
+
+        # Tests the connection with K8S
+        self.coreapi = client.CoreApi(api_client=self.clientAPI)
+        self.corev1api = client.CoreV1Api(api_client=self.clientAPI)
+        self.appsv1api = client.AppsV1Api(api_client=self.clientAPI)
+        self.storagev1api = client.StorageV1Api(api_client=self.clientAPI)
+
+        logging.getLogger(requests.packages.urllib3.__package__).setLevel(logging.ERROR)
+        logging.getLogger(client.rest.__package__).setLevel(logging.ERROR)
+        urllib3.disable_warnings()
+        logging.captureWarnings(True)
+
+        response = self.__execute(lambda: self.coreapi.get_api_versions(), check_connection=False)
+        if isinstance(response, dict) and "error" in response:
+            logging.debug("ERROR response:{}".format(response))
+            self.coreapi = None
+            self.corev1api = None
+            self.appsv1api = None
+            self.storagev1api = None
+            return response
+
+        else:
+            self._connected = True
+            # grab some info about the cluster and forward it to the job
+            vapi = client.VersionApi()
+            response = self.__execute(lambda: vapi.get_code(), check_connection=False)
+            if isinstance(response, dict) and "error" in response:
+                data = {}
+            else:
+                data = {'response': response}
+            return data
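`connect()` above tries three authentication paths in priority order: in-cluster service account, bearer token plus API host, then a kubeconfig file. A hypothetical helper showing just that precedence, without touching the `kubernetes` client:

```python
def select_auth_mode(config):
    # Mirrors the branch order in KubernetesPlugin.connect();
    # the function and return labels are illustrative only.
    if config.get('incluster') is not None:
        return 'incluster'
    if config.get('token') is not None and config.get('host') is not None:
        return 'bearer-token'
    return 'kubeconfig'

print(select_auth_mode({'incluster': '1'}))
print(select_auth_mode({'token': 'abc', 'host': 'https://k8s:6443'}))
print(select_auth_mode({'config_file': '~/.kube/config'}))
```

Note that the token path also requires `host`; a token alone falls through to the kubeconfig path, exactly as in `connect()`.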
+
+    def disconnect(self):
+        """
+            Implementation of Plugin.disconnect(self)
+            No need to disconnect from K8S
+        """
+        pass
+
+    def list_in_path(self, path):
+        """
+            Implementation of Plugin.list_in_path(self, path)
+            returns FileInfo objects associated with the $path$
+        """
+        top_level_dirs = [
+            K8SObjType.K8SOBJ_NAMESPACE_Path,
+            K8SObjType.K8SOBJ_PVOLUME_Path,
+            K8SObjType.K8SOBJ_STORAGECLASS_Path,
+        ]
+        path = path.lstrip("/")
+        if path.startswith(K8SObjType.K8SOBJ_NAMESPACE_Path):
+            lpath = path.split("/")
+            if len(lpath) > 1 and len(lpath[1]) > 0:
+                if len(lpath) == 2 or len(lpath[2]) == 0:
+                    data = {
+                        'pvcdata': {
+                            'fi': FileInfo(name='/namespaces/' + lpath[1] + '/' + K8SObjType.K8SOBJ_PVCDATA_Path,
+                                           ftype=DIRECTORY,
+                                           size=0,
+                                           uid=0, gid=0,
+                                           nlink=1,
+                                           mode=DEFAULT_DIR_MODE,
+                                           modified_at=NOW_TIMESTAMP,
+                                           accessed_at=NOW_TIMESTAMP,
+                                           created_at=NOW_TIMESTAMP),
+                            }
+                    }
+                    return data
+                elif lpath[2] == K8SObjType.K8SOBJ_PVCDATA_Path:
+                    return list_pvcs_namespaced(self.corev1api, lpath[1])
+            else:
+                return self.list_all_namespaces_names()
+
+        if path == K8SObjType.K8SOBJ_STORAGECLASS_Path:
+            return self.list_all_storageclass_names()
+
+        if path == K8SObjType.K8SOBJ_PVOLUME_Path:
+            return self.list_all_persistentvolumes_names()
+
+        self.k8s = {}
+        for dirname in top_level_dirs:
+            self.k8s[dirname] = {
+                'fi': FileInfo(name='/' + dirname,
+                               ftype=DIRECTORY,
+                               size=0,
+                               uid=0, gid=0,
+                               nlink=1,
+                               mode=DEFAULT_DIR_MODE,
+                               modified_at=NOW_TIMESTAMP,
+                               accessed_at=NOW_TIMESTAMP,
+                               created_at=NOW_TIMESTAMP),
+            }
+        return self.k8s
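`list_in_path()` above routes a virtual path such as `/namespaces/<ns>/pvcdata` by stripping the leading slash and splitting on `/`. A simplified sketch of that routing, assuming the top-level directory names are the literal strings shown here (the real values come from the `K8SObjType.*_Path` constants):

```python
def route(path):
    # Simplified routing mirroring KubernetesPlugin.list_in_path().
    # The directory names and return labels are assumptions for
    # illustration; the plugin uses K8SObjType path constants.
    parts = path.lstrip('/').split('/')
    if parts[0] == 'namespaces':
        if len(parts) > 1 and parts[1]:
            if len(parts) == 2 or not parts[2]:
                return ('namespace-toplevel', parts[1])
            if parts[2] == 'pvcdata':
                return ('pvc-listing', parts[1])
        else:
            return ('all-namespaces',)
    if parts[0] == 'storageclass':
        return ('all-storageclasses',)
    if parts[0] == 'persistentvolume':
        return ('all-persistentvolumes',)
    return ('top-level-dirs',)

print(route('/namespaces/prod/pvcdata'))
print(route('/'))
```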
+
+    def query_parameter(self, parameter):
+        """
+            Implementation of Plugin.query_parameter(self, parameter)
+            returns an array of available values for the given parameter
+        """
+        if parameter == 'namespace':
+            return self.__execute(lambda: namespace_names(self.corev1api))
+        if parameter == 'persistentvolume':
+            return self.__execute(lambda: persistentvolumes_names(self.corev1api))
+        if parameter == 'storageclass':
+            return self.__execute(lambda: storageclass_names(self.storagev1api))
+        ns = self.config.get("namespace")
+        if ns is not None:
+            ns = ns[0]
+            if parameter == 'pvcdata':
+                return self.__execute(lambda: persistentvolumeclaims_namespaced_names(self.corev1api, ns))
+        return []
+
+    def list_all_persistentvolumes(self, estimate=False):
+        if self.config.get('pvconfig'):
+            self.k8s[K8SObjType.K8SOBJ_PVOLUME] = \
+                self.__execute(lambda: persistentvolumes_list_all(self.corev1api,
+                                                                  pvfilter=self.config['pv'],
+                                                                  estimate=estimate))
+        return self.k8s[K8SObjType.K8SOBJ_PVOLUME]
+
+    def list_all_namespaces(self, estimate=False):
+        if self.config.get('nsconfig'):
+            self.k8s[K8SObjType.K8SOBJ_NAMESPACE] = \
+                self.__execute(lambda: namespaces_list_all(self.corev1api,
+                                                           nsfilter=self.config['namespace'],
+                                                           estimate=estimate))
+        return self.k8s[K8SObjType.K8SOBJ_NAMESPACE]
+
+    def list_all_storageclass(self, estimate=False):
+        if self.config.get('scconfig'):
+            self.k8s[K8SObjType.K8SOBJ_STORAGECLASS] = \
+                self.__execute(lambda: storageclass_list_all(self.storagev1api,
+                                                             scfilter=self.config['storageclass'],
+                                                             estimate=estimate))
+        return self.k8s[K8SObjType.K8SOBJ_STORAGECLASS]
+
+    def get_pvcdata_namespaced(self, namespace, pvcname, pvcalias=None, estimate=False):
+        logging.debug("pvcdata namespaced: {}/{} pvcalias={}".format(namespace, pvcname, pvcalias))
+        return self.__execute(lambda: pvcdata_get_namespaced(self.corev1api, namespace, pvcname, pvcalias))
+
+    def list_pvcdata_for_namespace(self, namespace, estimate=False, allpvcs=False):
+        pvcfilter = self.config.get('pvcdata', allpvcs) if not allpvcs else allpvcs
+        logging.debug("list pvcdata for namespace:{} pvcfilter={} estimate={}".format(namespace, pvcfilter, estimate))
+        return self.__execute(lambda: pvcdata_list_namespaced(self.corev1api, namespace, estimate, pvcfilter=pvcfilter))
+
+    def get_config_maps(self, namespace, estimate=False):
+        return self.__execute(lambda: config_maps_list_namespaced(self.corev1api, namespace, estimate,
+                                                                  self.config['labels']))
+
+    def get_endpoints(self, namespace, estimate=False):
+        return self.__execute(lambda: endpoints_list_namespaced(self.corev1api, namespace, estimate,
+                                                                self.config['labels']))
+
+    def get_pods(self, namespace, estimate=False):
+        return self.__execute(lambda: pods_list_namespaced(self.corev1api, namespace, estimate, self.config['labels']))
+
+    def get_pvcs(self, namespace, estimate=False):
+        pvcs = self.__execute(lambda: persistentvolumeclaims_list_namespaced(self.corev1api, namespace, estimate,
+                                                                             self.config['labels']))
+        self.k8s['pvcs'] = pvcs
+        return pvcs
+
+    def get_podtemplates(self, namespace, estimate=False):
+        return self.__execute(lambda: podtemplates_list_namespaced(self.corev1api, namespace, estimate,
+                                                                   self.config['labels']))
+
+    def get_limit_ranges(self, namespace, estimate=False):
+        return self.__execute(lambda: limit_range_list_namespaced(self.corev1api, namespace, estimate,
+                                                                  self.config['labels']))
+
+    def get_replication_controller(self, namespace, estimate=False):
+        return self.__execute(lambda: replication_controller_list_namespaced(self.corev1api, namespace, estimate,
+                                                                             self.config['labels']))
+
+    def get_resource_quota(self, namespace, estimate=False):
+        return self.__execute(lambda: resource_quota_list_namespaced(self.corev1api, namespace, estimate,
+                                                                     self.config['labels']))
+
+    def get_secrets(self, namespace, estimate=False):
+        return self.__execute(lambda: secrets_list_namespaced(self.corev1api, namespace, estimate,
+                                                              self.config['labels']))
+
+    def get_services(self, namespace, estimate=False):
+        return self.__execute(lambda: services_list_namespaced(self.corev1api, namespace, estimate,
+                                                               self.config['labels']))
+
+    def get_service_accounts(self, namespace, estimate=False):
+        return self.__execute(lambda: service_accounts_list_namespaced(self.corev1api, namespace, estimate,
+                                                                       self.config['labels']))
+
+    def get_daemon_sets(self, namespace, estimate=False):
+        return self.__execute(lambda: daemon_sets_list_namespaced(self.appsv1api, namespace, estimate,
+                                                                  self.config['labels']))
+
+    def get_deployments(self, namespace, estimate=False):
+        return self.__execute(lambda: deployments_list_namespaced(self.appsv1api, namespace, estimate,
+                                                                  self.config['labels']))
+
+    def get_replica_sets(self, namespace, estimate=False):
+        return self.__execute(lambda: replica_sets_list_namespaced(self.appsv1api, namespace, estimate,
+                                                                   self.config['labels']))
+
+    def get_stateful_sets(self, namespace, estimate=False):
+        return self.__execute(lambda: stateful_sets_list_namespaced(self.appsv1api, namespace, estimate,
+                                                                    self.config['labels']))
+
+    def list_all_persistentvolumes_names(self):
+        self.k8s[K8SObjType.K8SOBJ_PVOLUME] = self.__execute(lambda: persistentvolumes_list_all_names(self.corev1api))
+        return self.k8s[K8SObjType.K8SOBJ_PVOLUME]
+
+    def get_annotated_namespaced_pods_data(self, namespace, estimate=False):
+        return self.__execute(lambda: annotated_namespaced_pods_data(self.corev1api, namespace, estimate,
+                                                                     self.config['labels']))
+
+    def list_all_storageclass_names(self):
+        self.k8s[K8SObjType.K8SOBJ_STORAGECLASS] = self.__execute(lambda: storageclass_list_all_names(self.storagev1api))
+        return self.k8s[K8SObjType.K8SOBJ_STORAGECLASS]
+
+    def list_all_namespaces_names(self):
+        self.k8s[K8SObjType.K8SOBJ_NAMESPACE] = self.__execute(lambda: namespaces_list_all_names(self.corev1api))
+        return self.k8s[K8SObjType.K8SOBJ_NAMESPACE]
+
+    def list_namespaced_objects(self, namespace, estimate=False):
+        # We must maintain the following resource backup order
+        logging.debug("list_namespaced_objects_label:[{}]".format(self.config['labels']))
+        self.k8s[K8SObjType.K8SOBJ_PVCDATA] = {}
+        data = [
+            self.get_config_maps(namespace, estimate),
+            self.get_service_accounts(namespace, estimate),
+            self.get_secrets(namespace, estimate),
+            # self.get_endpoints(namespace, estimate),
+            self.get_pvcs(namespace, estimate),
+            self.get_limit_ranges(namespace, estimate),
+            self.get_resource_quota(namespace, estimate),
+            self.get_services(namespace, estimate),
+            self.get_pods(namespace, estimate),
+            self.get_daemon_sets(namespace, estimate),
+            self.get_replica_sets(namespace, estimate),
+            self.get_stateful_sets(namespace, estimate),
+            self.get_deployments(namespace, estimate),
+            self.get_replication_controller(namespace, estimate),
+        ]
+        return data
+
+    def upload_config_map(self, file_info, file_content):
+        return self.__execute(lambda: config_map_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_daemon_set(self, file_info, file_content):
+        return self.__execute(lambda: daemon_sets_restore_namespaced(self.appsv1api, file_info, file_content))
+
+    def upload_deployment(self, file_info, file_content):
+        return self.__execute(lambda: deployments_restore_namespaced(self.appsv1api, file_info, file_content))
+
+    def upload_endpoint(self, file_info, file_content):
+        return self.__execute(lambda: endpoints_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_limitrange(self, file_info, file_content):
+        return self.__execute(lambda: limit_range_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_namespace(self, file_info, file_content):
+        return self.__execute(lambda: namespaces_restore(self.corev1api, file_info, file_content))
+
+    def upload_pod(self, file_info, file_content):
+        return self.__execute(lambda: pods_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_persistentvolume_claim(self, file_info, file_content):
+        return self.__execute(lambda: persistentvolumeclaims_restore_namespaced(self.corev1api, file_info,
+                                                                                file_content))
+
+    def upload_persistentvolume(self, file_info, file_content):
+        return self.__execute(lambda: persistentvolumes_restore(self.corev1api, file_info, file_content))
+
+    def upload_storageclass(self, file_info, file_content):
+        return self.__execute(lambda: storageclass_restore(self.storagev1api, file_info, file_content))
+
+    def upload_pod_template(self, file_info, file_content):
+        return self.__execute(lambda: podtemplates_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_replica_set(self, file_info, file_content):
+        return self.__execute(lambda: replica_sets_restore_namespaced(self.appsv1api, file_info, file_content))
+
+    def upload_replication_controller(self, file_info, file_content):
+        return self.__execute(lambda: replication_controller_restore_namespaced(self.corev1api, file_info,
+                                                                                file_content))
+
+    def upload_resource_quota(self, file_info, file_content):
+        return self.__execute(lambda: resource_quota_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_secret(self, file_info, file_content):
+        return self.__execute(lambda: secrets_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_service(self, file_info, file_content):
+        return self.__execute(lambda: services_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_service_account(self, file_info, file_content):
+        return self.__execute(lambda: service_accounts_restore_namespaced(self.corev1api, file_info, file_content))
+
+    def upload_stateful_set(self, file_info, file_content):
+        return self.__execute(lambda: stateful_sets_restore_namespaced(self.appsv1api, file_info, file_content))
+
+    def __restore_k8s_object(self, file_info, file_content_source=None):
+        file_content = b''
+        if file_info.size != 0 and file_content_source is not None:
+            while True:
+                data = file_content_source.read()
+                if data is None:
+                    break
+                else:
+                    file_content += data
+        method_name = 'upload_' + str(K8SObjType.methoddict.get(file_info.objtype))
+        method = getattr(self, method_name, None)
+        if method is None:
+            return {'error': 'Invalid object type: %s' % file_info.objtype}
+        if file_info.objcache is None:
+            current_file = self.check_file(file_info)
+            if isinstance(current_file, dict) and 'error' in current_file:
+                file_info.objcache = None
+                logging.error("check_file: {}".format(current_file['error']))
+            else:
+                file_info.objcache = current_file
+        return method(file_info, file_content)
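The restore path dispatches on object type by deriving a method name and resolving it with `getattr()`; unknown types fall through to an error dictionary. A minimal standalone sketch of the pattern (class and payloads hypothetical):

```python
class Restorer:
    """Trimmed sketch (hypothetical names) of the upload_* dispatch above."""

    def upload_pod(self, payload):
        # One handler per object type; the real backend delegates to the
        # Kubernetes API here.
        return {'restored': 'pod'}

    def restore(self, objtype, payload):
        # Resolve the handler by name, exactly like the getattr() lookup
        # in __restore_k8s_object above.
        method = getattr(self, 'upload_' + objtype, None)
        if method is None:
            return {'error': 'Invalid object type: %s' % objtype}
        return method(payload)

restorer = Restorer()
ok = restorer.restore('pod', b'...')        # -> {'restored': 'pod'}
bad = restorer.restore('unknown', b'...')   # -> error dictionary
```

Adding support for a new kind then only requires defining another `upload_<kind>` method.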
+
+    def restore_file(self, file_info, file_content_source=None):
+        """
+            Implementation of Plugin.restore_file(self, file_info, file_content_source=None)
+        """
+        if file_info.objtype != K8SObjType.K8SOBJ_PVCDATA:
+            return self.__restore_k8s_object(file_info, file_content_source)
+
+    # TODO: export/move all checks into k8sbackend
+    def _check_config_map(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_config_map(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_daemon_set(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.appsv1api.read_namespaced_daemon_set(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_deployment(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.appsv1api.read_namespaced_deployment(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_endpoint(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_endpoints(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_limitrange(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_limit_range(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_namespace(self, file_info):
+        return self.__exec_check_object(lambda: self.corev1api.read_namespace(k8sfile2objname(file_info.name)))
+
+    def check_namespace(self, name):
+        return self.__exec_check_object(lambda: self.corev1api.read_namespace(name))
+
+    def _check_pod(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_pod(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def check_pod(self, namespace, name):
+        return self.__exec_check_object(lambda: self.corev1api.read_namespaced_pod(name, namespace))
+
+    def _check_persistentvolume_claim(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_persistent_volume_claim(k8sfile2objname(file_info.name),
+                                                                           file_info.namespace))
+
+    def _check_persistentvolume(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_persistent_volume(k8sfile2objname(file_info.name)))
+
+    def _check_storageclass(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.storagev1api.read_storage_class(k8sfile2objname(file_info.name)))
+
+    def _check_pod_template(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_pod_template(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_replica_set(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.appsv1api.read_namespaced_replica_set(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_stateful_set(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.appsv1api.read_namespaced_stateful_set(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_replication_controller(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_replication_controller(k8sfile2objname(file_info.name),
+                                                                          file_info.namespace))
+
+    def _check_resource_quota(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_resource_quota(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_secret(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_secret(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_service(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_service(k8sfile2objname(file_info.name), file_info.namespace))
+
+    def _check_service_account(self, file_info):
+        return self.__exec_check_object(
+            lambda: self.corev1api.read_namespaced_service_account(k8sfile2objname(file_info.name),
+                                                                   file_info.namespace))
+
+    def check_file(self, file_info):
+        """
+            Checks whether the object described by $file_info$ already exists on the cluster
+        :param file_info:
+        :return:
+        """
+        method_name = '_check_' + str(K8SObjType.methoddict.get(file_info.objtype))
+        method = getattr(self, method_name, None)
+        if method is None:
+            return {'error': 'Invalid object type: %s' % file_info.objtype}
+        return method(file_info)
+
+    def __exec_check_object(self, action, check_connection=True):
+        """
+        Executes an action and checks whether it returns HTTP 404 (Not Found)
+        :param action:
+        :param check_connection:
+        :return:
+        """
+
+        if check_connection:
+            # Verifies if connect() was called
+            if not self._connected:
+                return None
+
+        try:
+            return action()
+        except ApiException as e:
+            if e.status == HTTP_NOT_FOUND:
+                return None
+            return False
+
+    def __execute(self, action, check_connection=True):
+        """
+            Executes an action that uses the Kubernetes Python client API
+
+            :param action: Action to be executed
+
+            :return: The action result in case of success, or an error dictionary
+                    in case of failure
+
+        """
+
+        if check_connection:
+            # Verifies if connect() was called
+            if not self._connected:
+                return {
+                    'exception': True,
+                    'error': "To use this, a connection must be established first",
+                    'error_code': 0
+                }
+
+        try:
+            return action()
+        except ApiException as e:
+            Log.save_exception(e)
+            return {
+                'exception': True,
+                'error': e.reason,
+                'error_code': e.status,
+                'descr': e.body,
+            }
+        except MaxRetryError as e:
+            Log.save_exception(e)
+            return {
+                'exception': True,
+                'error': e.reason,
+                'error_code': ERROR_CONNECTION_REFUSED,
+            }
+        except TimeoutError as e:
+            Log.save_exception(e)
+            return {
+                'exception': True,
+                'error': "A socket timeout error occurred.",
+                'error_code': ERROR_HOST_TIMEOUT
+            }
+        except SSLError as e:
+            Log.save_exception(e)
+            return {
+                'exception': True,
+                'error': "SSL certificate verification failed in an HTTPS connection.",
+                'error_code': ERROR_SSL_FAILED
+            }
+        except OSError as e:
+            Log.save_exception(e)
+            if e.__class__.__name__ == "ConnectionError":
+                error_code = ERROR_HOST_NOT_FOUND
+                error_message = "Host is Down"
+            elif e.__class__.__name__ == "ConnectTimeout":
+                error_code = ERROR_HOST_TIMEOUT
+                error_message = "Host Timeout"
+            else:
+                error_code = UNRECOGNIZED_CONNECTION_ERROR
+                error_message = "Unrecognized Error"
+
+            return {
+                'exception': True,
+                'error': error_message,
+                'error_code': error_code
+            }
+        except Exception as e:
+            Log.save_exception(e)
+            return {
+                'exception': True,
+                'error': "Unrecognized Error",
+                'error_code': UNRECOGNIZED_CONNECTION_ERROR
+            }
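Callers throughout this class follow a single convention with the wrapper above: any dict carrying an `error` key signals failure and is propagated up, while an empty dict means success. A standalone sketch of that convention (helper names hypothetical):

```python
def execute(action):
    # Minimal stand-in for __execute above: run the action and translate
    # any exception into an error dictionary instead of raising.
    try:
        return action()
    except Exception as e:
        return {'exception': True, 'error': str(e), 'error_code': -1}

def remove_pod(pods, name):
    # Caller-side convention: a dict with an 'error' key means failure
    # and is propagated up; success is reported as an empty dict.
    response = execute(lambda: pods.pop(name))
    if isinstance(response, dict) and 'error' in response:
        return response
    return {}

pods = {'bacula-backup': 'Running'}
ok = remove_pod(pods, 'bacula-backup')   # pod existed, success
err = remove_pod(pods, 'missing-pod')    # KeyError becomes an error dict
```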
+
+    def __extract_file_info(self, bucket, file_name, headers):
+        """
+            Extracts the file information from the Swift object Stat data structure
+
+            :param bucket: The object's bucket
+            :param file_name: The object's name
+            :param headers: The object's headers
+
+            :return: the file information data structure
+
+        """
+        # TODO: unused?
+        accessed_at_gmt = headers["date"]
+        accessed_at_ts = gmt_to_unix_timestamp(accessed_at_gmt)
+        modified_at_gmt = headers["last-modified"]
+        modified_at_ts = gmt_to_unix_timestamp(modified_at_gmt)
+        created_at_ts = int(float(headers["x-timestamp"]))
+        file_size = int(headers["content-length"])
+        file_type = NOT_EMPTY_FILE if file_size > 0 else EMPTY_FILE
+
+        return {
+            "name": file_name,
+            "type": file_type,
+            "size": file_size,
+            "uid": 0,
+            "gid": 0,
+            "mode": DEFAULT_FILE_MODE,
+            "nlink": 1,
+            "modified-at": modified_at_ts,
+            "accessed-at": accessed_at_ts,
+            "created-at": created_at_ts,
+        }
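`gmt_to_unix_timestamp` is a helper defined elsewhere in this backend; for reference, an equivalent can be sketched with only the standard library, since HTTP `Date` and `Last-Modified` headers use the RFC 2822/7231 date format that `email.utils` already parses:

```python
from email.utils import parsedate_to_datetime

def gmt_to_unix_timestamp(gmt):
    # HTTP date headers such as "Mon, 28 Feb 2022 17:51:22 GMT" parse to a
    # timezone-aware datetime, so .timestamp() yields the Unix epoch time.
    return int(parsedate_to_datetime(gmt).timestamp())

epoch = gmt_to_unix_timestamp("Thu, 01 Jan 1970 00:00:00 GMT")  # 0
```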
+
+    def create_backup_pod(self, namespace, poddata):
+        pod = client.V1Pod(
+            api_version=poddata.get('apiVersion'),
+            kind=poddata.get('kind'),
+            metadata=poddata.get('metadata'),
+            spec=poddata.get('spec')
+        )
+        response = self.__execute(lambda: self.corev1api.create_namespaced_pod(namespace=namespace, body=pod, pretty=True))
+        if isinstance(response, dict) and "error" in response:
+            return response
+        return {}
+
+    def create_pvc_clone(self, namespace, pvcclonedata):
+        pvcclone = client.V1PersistentVolumeClaim(
+            api_version=pvcclonedata.get('apiVersion'),
+            kind=pvcclonedata.get('kind'),
+            metadata=pvcclonedata.get('metadata'),
+            spec=pvcclonedata.get('spec')
+        )
+        response = self.__execute(lambda: self.corev1api.create_namespaced_persistent_volume_claim(namespace=namespace, body=pvcclone, pretty=True))
+        if isinstance(response, dict) and "error" in response:
+            return response
+        return {}
+
+    def backup_pod_status(self, namespace):
+        return self.corev1api.read_namespaced_pod_status(name=BACULABACKUPPODNAME, namespace=namespace)
+
+    def pvc_status(self, namespace, pvcname):
+        return self.__execute(lambda: self.corev1api.read_namespaced_persistent_volume_claim_status(name=pvcname, namespace=namespace))
+
+    def backup_pod_isready(self, namespace, seq=None, podname=BACULABACKUPPODNAME):
+        pod = self.backup_pod_status(namespace)
+        status = pod.status
+        # logging.debug("backup_pod_isready:status:{} {}".format(type(status), status))
+        if status.container_statuses is None:
+            if status.reason is None:
+                # the Pod container status is not available yet
+                return False
+            err = K8S_POD_CONTAINER_STATUS_ERROR.format(status.reason, status.message)
+            logging.error(err)
+            return {'error': err}
+        isready = status.container_statuses[0].ready
+        logging.info("backup_pod_status:isReady: {} / {}".format(isready, seq))
+        return isready
+
+    def pvc_isready(self, namespace, pvcname):
+        response = self.pvc_status(namespace, pvcname)
+        if isinstance(response, dict) and "error" in response:
+            return response
+        status = response.status
+        logging.debug("pvc_isready:status:{}".format(status))
+        return status.phase == 'Bound'
+
+    def remove_backup_pod(self, namespace, podname=BACULABACKUPPODNAME):
+        logging.debug('remove_backup_pod')
+        response = self.__execute(lambda: self.corev1api.delete_namespaced_pod(
+            podname, namespace, grace_period_seconds=0,
+            propagation_policy='Foreground'))
+        if isinstance(response, dict) and "error" in response:
+            return response
+        return {}
+
+    def remove_pvcclone(self, namespace, clonename):
+        logging.debug('remove_pvcclone')
+        response = self.__execute(lambda: self.corev1api.delete_namespaced_persistent_volume_claim(
+            clonename, namespace, grace_period_seconds=0,
+            propagation_policy='Foreground'))
+        if isinstance(response, dict) and "error" in response:
+            return response
+        return {}
+
+    def check_gone_backup_pod(self, namespace, force=False):
+        """ Checks whether $BACULABACKUPPODNAME in the selected namespace is still running.
+            If it is not, we can proceed with the Job. If it terminated but was not removed, we remove it safely.
+        Args:
+            namespace (str): namespace for the Pod
+            force (bool, optional): remove the Pod regardless of its state. Defaults to False.
+
+        Returns:
+            bool: True if the Pod does not exist, else False
+        """
+        # TODO: Refactor this method as it does more than described!
+        status = None
+        gone = False
+        try:
+            status = self.backup_pod_status(namespace)
+        except ApiException as e:
+            if e.status == HTTP_NOT_FOUND:
+                gone = True
+                return True
+        finally:
+            logging.info("check_gone_backup_pod:gone:" + str(gone))
+        if status is not None and (force or status.status.phase not in ['Pending', 'Running']):
+            response = self.remove_backup_pod(namespace)
+            if isinstance(response, dict) and 'error' in response:
+                # propagate error up
+                return response
+        return False
+
+    def check_gone_pvcclone(self, namespace, clonename, force=False):
+        response = self.pvc_status(namespace, clonename)
+        if isinstance(response, dict) and "error" in response:
+            logging.info("pvc status: {}".format(response))
+            if response.get('error_code') != HTTP_NOT_FOUND:
+                return response
+            else:
+                return True
+        status = response.status
+        logging.info("status: {}".format(status))
+        if status is not None and status.phase != 'Pending':
+            response = self.remove_pvcclone(namespace, clonename)
+            if isinstance(response, dict) and 'error' in response:
+                # propagate error up
+                return response
+        return False
+
+
+class FileContentAdapter:
+    """
+        This Iterator Class is used to upload Object Segments.
+        It is used by the Python-swiftclient connection API,
+        and depends upon a File Stream. It should read from
+        the File Stream until segment_size is reached.
+
+    """
+
+    def __init__(self, stream, segment_size):
+        self.stream = stream
+        self.segment_size = segment_size
+        self.should_stop = False
+        self.resend_segment = False
+        self.current_segment = None
+
+    def tell(self):
+        """
+            Called by Python-swiftclient to resend data in case of error
+        """
+        self.resend_segment = True
+        pass
+
+    def seek(self, pos):
+        """
+            Called by Python-swiftclient to resend data in case of error
+        """
+        self.resend_segment = True
+        pass
+
+    def __iter__(self):
+        return self
+
+    def __next__(self):
+        """
+            Called by Python-swiftclient to retrieve the Object Content
+        """
+
+        if self.should_stop:
+            raise StopIteration
+
+        if self.resend_segment and self.current_segment is not None:
+            self.resend_segment = False
+            return self.current_segment
+
+        next_segment = b''
+        next_segment_size = 0
+
+        while True:
+            chunk = self.stream.read()
+
+            if not chunk:
+                self.should_stop = True
+                break
+
+            next_segment += chunk
+            next_segment_size += len(chunk)
+
+            if next_segment_size >= self.segment_size:
+                self.should_stop = True
+                break
+
+        if next_segment_size == 0:
+            raise StopIteration
+
+        self.current_segment = next_segment
+        return next_segment
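Note that `should_stop` is set both at end-of-stream and as soon as `segment_size` is reached, so one adapter instance yields at most a single segment; the swiftclient-style caller is expected to construct a fresh adapter per segment. A trimmed, standalone sketch of that behaviour (the toy stream class is hypothetical):

```python
import io

class ChunkedStream:
    """Toy stream whose read() returns fixed-size chunks, then b''."""
    def __init__(self, data, chunk=4):
        self.buf = io.BytesIO(data)
        self.chunk = chunk
    def read(self):
        return self.buf.read(self.chunk)

class FileContentAdapter:
    """Reduced copy of the adapter above (resend handling omitted)."""
    def __init__(self, stream, segment_size):
        self.stream = stream
        self.segment_size = segment_size
        self.should_stop = False
    def __iter__(self):
        return self
    def __next__(self):
        if self.should_stop:
            raise StopIteration
        segment, size = b'', 0
        while True:
            chunk = self.stream.read()
            if not chunk:
                self.should_stop = True   # end of stream
                break
            segment += chunk
            size += len(chunk)
            if size >= self.segment_size:
                self.should_stop = True   # segment complete
                break
        if size == 0:
            raise StopIteration
        return segment

stream = ChunkedStream(b'0123456789', chunk=4)
segments = list(FileContentAdapter(stream, segment_size=8))
# only one segment is produced per adapter instance
```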
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin.py
new file mode 100644 (file)
index 0000000..541a7c2
--- /dev/null
@@ -0,0 +1,99 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from abc import ABCMeta, abstractmethod
+
+UNRECOGNIZED_CONNECTION_ERROR = -1
+ERROR_SSL_FAILED = 1200
+ERROR_HOST_NOT_FOUND = 1400
+ERROR_HOST_TIMEOUT = 1500
+ERROR_AUTH_FAILED = 401
+ERROR_CONNECTION_REFUSED = 111
+
+
+class Plugin(metaclass=ABCMeta):
+    """
+        Abstract Base Class for all the Plugins
+    """
+
+    @abstractmethod
+    def connect(self):
+        """
+            Connects to the Plugin Data Source
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def disconnect(self):
+        """
+            Disconnects from the Plugin Data Source
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def list_in_path(self, path):
+        """
+            Lists all FileInfo objects belonging to the provided $path$
+
+        :return:
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def query_parameter(self, parameter):
+        """
+            Responds to a client parameter query
+
+        :return:
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def check_file(self, file_info):
+        """
+            Checks whether the object described by $file_info$ exists
+        :return:
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def restore_file(self, file_info, file_content_source=None):
+        """
+            Restores the object described by $file_info$
+        :return:
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def list_all_namespaces(self):
+        """
+            Lists all Namespaces found on the cluster
+        :return:
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    def list_namespaced_objects(self, namespace):
+        """
+            Lists all supported objects belonging to the provided $namespace$
+        :return:
+        """
+        raise NotImplementedError
+
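Because `Plugin` uses `ABCMeta` with abstract methods, an incomplete backend cannot even be instantiated; the contract is enforced at construction time rather than at call time. A minimal sketch with a reduced two-method contract:

```python
from abc import ABCMeta, abstractmethod

class Plugin(metaclass=ABCMeta):
    """Reduced two-method version of the abstract base above."""
    @abstractmethod
    def connect(self):
        raise NotImplementedError
    @abstractmethod
    def disconnect(self):
        raise NotImplementedError

class DummyPlugin(Plugin):
    # Hypothetical concrete backend: both abstract methods are provided,
    # so instantiation succeeds.
    def connect(self):
        return True
    def disconnect(self):
        return True

plugin = DummyPlugin()

class Broken(Plugin):
    # disconnect() is missing, so Broken stays abstract.
    def connect(self):
        return True

try:
    Broken()
    incomplete_rejected = False
except TypeError:
    incomplete_rejected = True
```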
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin_factory.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/plugins/plugin_factory.py
new file mode 100644 (file)
index 0000000..765ad02
--- /dev/null
@@ -0,0 +1,42 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+from baculak8s.plugins.fs_plugin import FileSystemPlugin
+from baculak8s.plugins.kubernetes_plugin import KubernetesPlugin
+
+
+class PluginFactory(object):
+    @staticmethod
+    def create(plugin_name, config):
+        """
+            Creates a plugin to be used for Backup and/or Restore
+            operations.
+
+            :param plugin_name: The name of the plugin to be created
+            :param config: The plugin configuration
+
+            :return: The created plugin
+
+            :raise: ValueError, if an invalid plugin_name was provided
+        """
+        if "where" in config and len(config["where"]) > 1:
+            return FileSystemPlugin(config)
+
+        if plugin_name in ("kubernetes", "openshift"):
+            return KubernetesPlugin(config)
+        else:
+            raise ValueError("Invalid Plugin Type")
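With stub classes standing in for the two imported plugins, the factory's routing can be exercised directly; note that a populated `where` parameter (a restore destination longer than one character) takes precedence over the plugin name:

```python
class FileSystemPlugin:
    """Stand-in for baculak8s.plugins.fs_plugin.FileSystemPlugin."""
    def __init__(self, config):
        self.config = config

class KubernetesPlugin:
    """Stand-in for baculak8s.plugins.kubernetes_plugin.KubernetesPlugin."""
    def __init__(self, config):
        self.config = config

class PluginFactory(object):
    @staticmethod
    def create(plugin_name, config):
        # A non-trivial "where" routes the restore to the local
        # filesystem instead of back into the cluster.
        if "where" in config and len(config["where"]) > 1:
            return FileSystemPlugin(config)
        if plugin_name in ("kubernetes", "openshift"):
            return KubernetesPlugin(config)
        raise ValueError("Invalid Plugin Type")

k8s = PluginFactory.create("kubernetes", {})
fs = PluginFactory.create("openshift", {"where": "/tmp/restore"})
try:
    PluginFactory.create("swift", {})
    invalid_rejected = False
except ValueError:
    invalid_rejected = True
```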
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/handshake_service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/handshake_service.py
new file mode 100644 (file)
index 0000000..8bdaa92
--- /dev/null
@@ -0,0 +1,83 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.services.service import Service
+
+INVALID_HANDSHAKE_PACKET = "Invalid Handshake Packet"
+INVALID_PLUGIN_NAME = "Invalid Plugin Name"
+INVALID_PLUGIN_API = "Invalid Plugin API Version"
+HANDSHAKE_OK = "Hello Bacula"
+
+
+class HandshakeService(Service):
+    """
+       Service that contains the business logic
+       related to the Backend Handshake.
+    """
+
+    def __init__(self):
+        self.io = DefaultIO()
+
+    def execute(self, params=None):
+        """
+           Reads and parses the Handshake Packet sent to the Backend.
+           The Handshake Packet should have the format:
+           "Hello $pluginname$ $pluginAPI$".
+
+           :raise SystemExit, in case of an invalid Handshake Packet
+                  or an unsupported $pluginname$ or $pluginAPI$.
+
+           :returns The $pluginname$.
+        """
+
+        handshake_data = self.__read_handshake_packet()
+        self.__verify_packet_data(handshake_data)
+        self.io.send_command(HANDSHAKE_OK)
+        return handshake_data[1]
+
+    def __read_handshake_packet(self):
+        _, packet = self.io.read_packet()
+        if not packet:
+            self.io.send_abort(INVALID_HANDSHAKE_PACKET)
+            sys.exit(0)
+
+        packet = packet.lower()
+        packet_data = packet.split(" ")
+
+        if len(packet_data) != 3 or packet_data[0] != "hello":
+            self.io.send_abort(INVALID_HANDSHAKE_PACKET)
+            sys.exit(0)
+        return packet_data
+
+    def __verify_packet_data(self, packet_data):
+        valid_plugins = {
+            "kubernetes": "3",
+            "openshift": "3",
+        }
+
+        plugin_name = packet_data[1]
+        plugin_api = packet_data[2]
+
+        if plugin_name not in valid_plugins:
+            self.io.send_abort(INVALID_PLUGIN_NAME)
+            sys.exit(0)
+
+        if plugin_api != valid_plugins[plugin_name]:
+            self.io.send_abort(INVALID_PLUGIN_API)
+            sys.exit(0)
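Version fields parsed from the wire should be compared by value; identity tests (`is` / `is not`) on strings only coincidentally succeed when CPython happens to intern both operands. A standalone sketch of the value-based check:

```python
VALID_PLUGINS = {"kubernetes": "3", "openshift": "3"}

def api_matches(plugin_name, plugin_api):
    # Compare by value: fields parsed from the handshake line at runtime
    # are not guaranteed to be the same object as the literals stored
    # here, so `is` / `is not` would test the wrong thing.
    return plugin_api == VALID_PLUGINS.get(plugin_name)

# Fields as they would arrive from a parsed "hello <name> <api>" packet:
name, api = "hello kubernetes 3".lower().split(" ")[1:]
```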
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_end_service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_end_service.py
new file mode 100644 (file)
index 0000000..9cc9899
--- /dev/null
@@ -0,0 +1,68 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.log import Log
+from baculak8s.io.packet_definitions import TERMINATION_PACKET
+from baculak8s.services.service import Service
+
+END_JOB_START_PACKET = "END"
+INVALID_END_JOB_START_PACKET = "Invalid End Job Start Packet"
+INVALID_TERMINATION_PACKET = "Invalid Termination Packet." \
+                             " This indicates a possible error " \
+                             "with the reading of the input sent " \
+                             "to the Backend."
+
+
+class JobEndService(Service):
+    """
+       Service that contains the business logic
+       related to ending the Backend Job
+    """
+
+    def __init__(self, job_info, plugin):
+        # The job_info parameter will probably be used in the future
+        self.plugin = plugin
+        self.job_info = job_info
+        self.io = DefaultIO()
+
+    def execute(self, params=None):
+        if self.job_info.get("query", None) is not None:
+            return
+        self.__read_start()
+        self.plugin.disconnect()
+        self.__read_termination()
+
+    def __read_start(self):
+        _, packet = self.io.read_packet()
+        if packet != END_JOB_START_PACKET:
+            self.io.send_abort(INVALID_END_JOB_START_PACKET)
+            self.__abort()
+            return
+        self.io.send_eod()
+
+    def __read_termination(self):
+        packet_header = self.io.read_line()
+
+        if packet_header != TERMINATION_PACKET:
+            self.__abort()
+
+        Log.save_received_termination(packet_header)
+
+    def __abort(self):
+        self.plugin.disconnect()
+        sys.exit(0)
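Messages such as `INVALID_TERMINATION_PACKET` are assembled from adjacent string literals, which the parser concatenates verbatim, so each continuation line needs an explicit space at the join point. A quick sketch of the pitfall:

```python
# Adjacent string literals are concatenated verbatim by the parser, so a
# missing space at a join point silently glues words together.
glued = ("This indicates a possible error "
         "with the reading of the input sent"
         "to the Backend.")
spaced = ("This indicates a possible error "
          "with the reading of the input sent "
          "to the Backend.")
```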
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_info_service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/job_info_service.py
new file mode 100644 (file)
index 0000000..5218e06
--- /dev/null
@@ -0,0 +1,89 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+import logging
+import baculak8s
+from baculak8s.io.services.job_info_io import JobInfoIO, INVALID_JOB_PARAMETER_BLOCK, INVALID_JOB_TYPE, \
+    JOB_NAME_NOT_FOUND, JOB_ID_NOT_FOUND, JOB_TYPE_NOT_FOUND, INVALID_REPLACE_PARAM, JOB_START_PACKET, \
+    INVALID_JOB_START_PACKET
+from baculak8s.services.service import Service
+
+TYPE_BACKUP = "b"
+TYPE_RESTORE = "r"
+TYPE_ESTIMATION = "e"
+
+REPLACE_ALWAYS = 'a'
+REPLACE_IFNEWER = 'w'
+REPLACE_NEVER = 'n'
+REPLACE_IFOLDER = 'o'
+
+
+class JobInfoService(Service):
+    """
+       Service that contains the business logic
+       related to reading and parsing the information about the Job
+       that should be created and executed by the Backend.
+    """
+
+    def __init__(self):
+        self.io = JobInfoIO()
+
+    def execute(self, params=None):
+        self.__read_start()
+        params_block = self.__read_params_block()
+        self.io.send_eod()
+        return params_block
+
+    def __read_start(self):
+        _, packet = self.io.read_packet()
+
+        if packet != JOB_START_PACKET:
+            self.io.send_abort(INVALID_JOB_START_PACKET)
+            sys.exit(0)
+
+    def __read_params_block(self):
+        job_info = self.io.read_job_info()
+
+        # Parameters validation
+        if not job_info:
+            self.io.send_abort(INVALID_JOB_PARAMETER_BLOCK)
+            sys.exit(0)
+        if "name" not in job_info:
+            self.io.send_abort(JOB_NAME_NOT_FOUND)
+            sys.exit(0)
+        if "jobid" not in job_info:
+            self.io.send_abort(JOB_ID_NOT_FOUND)
+            sys.exit(0)
+        if "replace" in job_info:
+            job_info["replace"] = job_info["replace"].lower()
+            if job_info["replace"] not in [REPLACE_ALWAYS, REPLACE_IFNEWER, REPLACE_NEVER, REPLACE_IFOLDER]:
+                self.io.send_abort(INVALID_REPLACE_PARAM)
+                sys.exit(0)
+        if "namespace" in job_info:
+            logging.debug("FILE NAMESPACE: {}".format(job_info.get("namespace")))
+            baculak8s.plugins.k8sbackend.k8sfileinfo.defaultk8spath = job_info.get("namespace")
+            job_info.pop("namespace")
+        if "type" not in job_info:
+            self.io.send_abort(JOB_TYPE_NOT_FOUND)
+            sys.exit(0)
+        else:
+            job_info["type"] = job_info["type"].lower()
+            if job_info["type"] not in [TYPE_BACKUP, TYPE_RESTORE, TYPE_ESTIMATION]:
+                self.io.send_abort(INVALID_JOB_TYPE)
+                sys.exit(0)
+
+        return job_info
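For reference, a job parameter block that passes the validation in `__read_params_block` might look like the following sketch; the field values are illustrative, not taken from a real job:

```python
# Hypothetical job_info block; keys mirror the checks in __read_params_block.
job_info = {
    "name": "BackupJob",   # required
    "jobid": "101",        # required
    "type": "b",           # required: b(ackup), r(estore) or e(stimation)
    "replace": "a",        # optional: a(lways), if ne(w)er, n(ever), if (o)lder
}

assert job_info["type"] in ("b", "r", "e")
assert job_info.get("replace", "n") in ("a", "w", "n", "o")
```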
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/plugin_params_service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/plugin_params_service.py
new file mode 100644 (file)
index 0000000..95b1465
--- /dev/null
@@ -0,0 +1,90 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+import os
+
+from baculak8s.io.services.plugin_params_io import PluginParamsIO, INVALID_PLUGIN_PARAMETERS_BLOCK, URL_NOT_FOUND, \
+    USER_NOT_FOUND, PWD_NOT_FOUND, PASSFILE_NOT_FOUND, PWD_INSIDE_PASSFILE_NOT_FOUND, RESTORE_LOCAL_WITHOUT_WHERE, \
+    PLUGIN_PARAMETERS_START, INVALID_PLUGIN_PARAMETERS_START
+from baculak8s.services.service import Service
+
+
+class PluginParamsService(Service):
+    """
+       Service that contains the business logic
+       related to reading and parsing the information about the Plugin
+       that will be used by the Job to fulfill its purpose
+    """
+
+    def __init__(self, job_info):
+        self.io = PluginParamsIO()
+        self.job_info = job_info
+
+    def execute(self, params=None):
+        self.__read_start()
+        params_block = self.__read_params_block()
+        self.io.send_eod()
+        return params_block
+
+    def __read_start(self):
+        _, packet = self.io.read_packet()
+
+        if packet != PLUGIN_PARAMETERS_START:
+            self.io.send_abort(INVALID_PLUGIN_PARAMETERS_START)
+            sys.exit(0)
+
+    def __read_params_block(self):
+        params_block = self.io.read_plugin_params()
+
+        # Parameters validation
+        if not params_block:
+            self.io.send_abort(INVALID_PLUGIN_PARAMETERS_BLOCK)
+            sys.exit(0)
+
+        if "restore_local" in params_block and ("where" not in self.job_info or self.job_info["where"] == ""):
+            self.io.send_abort(RESTORE_LOCAL_WITHOUT_WHERE)
+            sys.exit(0)
+
+        if 'pv' in params_block:
+            # 'pv' is an alias for 'persistentvolume'; create the target list
+            # if it is missing instead of calling .append() on None
+            params_block.setdefault('persistentvolume', []).append(params_block.pop('pv'))
+
+        if 'ns' in params_block:
+            # 'ns' is an alias for 'namespace'
+            params_block.setdefault('namespace', []).append(params_block.pop('ns'))
+
+        return params_block
+
+    def __get_password(self, params_block):
+        """
+           Reads the password from the passfile and puts it into the Plugin Params
+        """
+        if not os.path.isfile(params_block["passfile"]):
+            self.io.send_abort(PASSFILE_NOT_FOUND)
+            sys.exit(0)
+
+        with open(params_block["passfile"], "r") as f:
+            # readline() keeps the trailing newline; strip it so the stored
+            # password matches what was written to the passfile. The with
+            # statement closes the file, so no explicit close() is needed.
+            params_block["password"] = f.readline().rstrip("\n")
+
+        if not params_block["password"]:
+            self.io.send_abort(PWD_INSIDE_PASSFILE_NOT_FOUND)
+            sys.exit(0)
+
+        return params_block
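Folding an alias key into its canonical, list-valued counterpart can be sketched generically like this (the helper name is hypothetical, not part of the plugin):

```python
def fold_alias(params, alias, canonical):
    # Move a value given under an alias key into the canonical key,
    # creating the list if the canonical key is absent.
    if alias in params:
        params.setdefault(canonical, []).append(params.pop(alias))
    return params

print(fold_alias({"ns": "default"}, "ns", "namespace"))
# {'namespace': ['default']}
```

`setdefault` makes the fold safe whether or not the canonical key was already present in the parameter block.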
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/service.py
new file mode 100644 (file)
index 0000000..847e59f
--- /dev/null
@@ -0,0 +1,35 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+from abc import ABCMeta, abstractmethod
+
+
+class Service(metaclass=ABCMeta):
+    """
+        Abstract Base Class for all the Backend Services
+
+        Services represent auxiliary Business Logic supporting the
+        main functionality implemented by the Backend Jobs Classes
+    """
+
+    @abstractmethod
+    def execute(self, params=None):
+        """
+            Executes the Service
+        """
+        raise NotImplementedError
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/unexpected_error_service.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/services/unexpected_error_service.py
new file mode 100644 (file)
index 0000000..31f81df
--- /dev/null
@@ -0,0 +1,38 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import logging
+import traceback
+
+from baculak8s.io.default_io import DefaultIO
+from baculak8s.io.packet_definitions import UNEXPECTED_ERROR_PACKET
+from baculak8s.services.service import Service
+
+
+class UnexpectedErrorService(Service):
+    """
+       Service that is executed whenever an unexpected exception happens
+       during the Backend execution.
+    """
+
+    def __init__(self):
+        self.io = DefaultIO()
+
+    def execute(self, params=None):
+        # We log the Exception Stack Trace
+        logging.error(UNEXPECTED_ERROR_PACKET)
+        logging.error(traceback.format_exc())
+        self.io.send_abort(UNEXPECTED_ERROR_PACKET)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/__init__.py
new file mode 100644 (file)
index 0000000..13d0383
--- /dev/null
@@ -0,0 +1,17 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/boolparam.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/boolparam.py
new file mode 100644 (file)
index 0000000..46fef43
--- /dev/null
@@ -0,0 +1,53 @@
+# -*- coding: UTF-8 -*-
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2020 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+class BoolParam(object):
+
+   @staticmethod
+   def handleParam(param, _default = False):
+      """ Handles a boolean configuration parameter expressed in any known form.
+          String values like '1', 'Yes', etc. are treated as True and '0', 'No', etc. as False.
+
+      Args:
+          param (any): parameter value to process
+          _default (bool, optional): value to use when parameter handling fails. Defaults to False.
+
+      Returns:
+          bool: return processed value
+      """
+      if not isinstance(_default, bool):
+         _default = False
+      if param is not None:
+         if isinstance(param, str):
+            if param.lower() in ('1', 'yes', 'true'):
+               return True
+            if param.lower() in ('0', 'no', 'false'):
+               return False
+         if isinstance(param, int) or isinstance(param, bool) or isinstance(param, float):
+            if param:
+               return True
+            else:
+               return False
+      # finally return default
+      return _default
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/date_util.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/date_util.py
new file mode 100644 (file)
index 0000000..d4f1f99
--- /dev/null
@@ -0,0 +1,47 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import datetime
+
+from baculak8s.util.iso8601 import parse_date
+
+
+def iso8601_to_unix_timestamp(iso_string):
+    dt = parse_date(iso_string) \
+        .replace(tzinfo=datetime.timezone.utc)
+    return int(dt.timestamp())
+
+
+def gmt_to_unix_timestamp(gmt_string):
+    # Anchor the parsed datetime to UTC explicitly; a naive datetime's
+    # timestamp() would otherwise be interpreted in the host's local timezone.
+    dt = datetime.datetime.strptime(gmt_string, "%a, %d %b %Y %H:%M:%S GMT") \
+        .replace(tzinfo=datetime.timezone.utc)
+    return int(dt.timestamp())
+
+
+def get_time_now():
+    tstamp = datetime.datetime.now(tz=datetime.timezone.utc).timestamp()
+    return int(tstamp)
+
+
+def datetime_to_unix_timestamp(dt):
+    return int(datetime.datetime.timestamp(dt))
+
+
+def k8stimestamp_to_unix_timestamp(ts):
+    if isinstance(ts, datetime.datetime):
+        return datetime_to_unix_timestamp(ts)
+    return iso8601_to_unix_timestamp(ts)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/dict_util.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/dict_util.py
new file mode 100644 (file)
index 0000000..f5523d3
--- /dev/null
@@ -0,0 +1,22 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+def merge_two_dicts(d1, d2):
+    merged = d1.copy()
+    merged.update(d2)
+    return merged
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/iso8601.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/iso8601.py
new file mode 100644 (file)
index 0000000..b2d0b54
--- /dev/null
@@ -0,0 +1,135 @@
+# Copyright (c) 2007 Michael Twomey
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+import re
+import time
+from datetime import datetime, timedelta, tzinfo
+
+__all__ = ["parse_date", "ParseError"]
+
+# Adapted from http://delete.me.uk/2005/03/iso8601.html
+ISO8601_REGEX_RAW = r"(?P<year>[0-9]{4})-(?P<month>[0-9]{1,2})-(?P<day>[0-9]{1,2})" \
+                    r"T(?P<hour>[0-9]{2}):(?P<minute>[0-9]{2})(:(?P<second>[0-9]{2})(\.(?P<fraction>[0-9]+))?)?" \
+                    r"(?P<timezone>Z|[-+][0-9]{2}(:?[0-9]{2})?)?"
+ISO8601_REGEX = re.compile(ISO8601_REGEX_RAW)
+TIMEZONE_REGEX = re.compile("(?P<prefix>[+-])(?P<hours>[0-9]{2}):?(?P<minutes>[0-9]{2})?")
+
+
+class ParseError(Exception):
+    """Raised when there is a problem parsing a date string"""
+
+
+# Yoinked from python docs
+ZERO = timedelta(0)
+
+
+class Utc(tzinfo):
+    """UTC
+
+    """
+
+    def utcoffset(self, dt):
+        return ZERO
+
+    def tzname(self, dt):
+        return "UTC"
+
+    def dst(self, dt):
+        return ZERO
+
+
+UTC = Utc()
+
+
+class FixedOffset(tzinfo):
+    """Fixed offset in hours and minutes from UTC
+
+    """
+
+    def __init__(self, name, offset_hours, offset_minutes, offset_seconds=0):
+        self.__offset = timedelta(hours=offset_hours, minutes=offset_minutes, seconds=offset_seconds)
+        self.__name = name
+
+    def utcoffset(self, dt):
+        return self.__offset
+
+    def tzname(self, dt):
+        return self.__name
+
+    def dst(self, dt):
+        return ZERO
+
+    def __repr__(self):
+        return "<FixedOffset %r>" % self.__name
+
+
+def parse_timezone(tzstring):
+    """Parses ISO 8601 time zone specs into tzinfo offsets
+
+    """
+    if tzstring == "Z":
+        return UTC
+
+    if tzstring is None:
+        zone_sec = -time.timezone
+        return FixedOffset(name=time.tzname[0], offset_hours=(zone_sec / 3600), offset_minutes=(zone_sec % 3600) / 60,
+                           offset_seconds=zone_sec % 60)
+
+    m = TIMEZONE_REGEX.match(tzstring)
+    prefix, hours, minutes = m.groups()
+    if minutes is None:
+        minutes = 0
+    else:
+        minutes = int(minutes)
+    hours = int(hours)
+    if prefix == "-":
+        hours = -hours
+        minutes = -minutes
+    return FixedOffset(tzstring, hours, minutes)
+
+
+def parse_date(datestring):
+    """Parses ISO 8601 dates into datetime objects
+
+    The timezone is parsed from the date string. However it is quite common to
+    have dates without a timezone (not strictly correct). In this case the
+    local timezone reported by the operating system is used as a fallback.
+    """
+    if not isinstance(datestring, str):
+        raise ValueError("Expecting a string %r" % datestring)
+    m = ISO8601_REGEX.match(datestring)
+    if not m:
+        raise ParseError("Unable to parse date string %r" % datestring)
+    groups = m.groupdict()
+    tz = parse_timezone(groups["timezone"])
+    if groups["second"] is None:
+        # seconds are optional in the regex; default them to zero
+        groups["second"] = 0
+    if groups["fraction"] is None:
+        groups["fraction"] = 0
+    else:
+        groups["fraction"] = int(float("0.%s" % groups["fraction"]) * 1e6)
+
+    try:
+        return datetime(int(groups["year"]), int(groups["month"]), int(groups["day"]),
+                        int(groups["hour"]), int(groups["minute"]), int(groups["second"]),
+                        int(groups["fraction"]), tz)
+    except Exception as e:
+        raise ParseError("Failed to create a valid datetime record due to: %s"
+                         % e)
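The matching behaviour of the pattern above can be illustrated with a condensed, raw-string copy of it (stdlib only; the test timestamp is illustrative):

```python
import re

# Condensed copy of ISO8601_REGEX_RAW for demonstration purposes.
ISO8601 = re.compile(
    r"(?P<year>\d{4})-(?P<month>\d{1,2})-(?P<day>\d{1,2})"
    r"T(?P<hour>\d{2}):(?P<minute>\d{2})(:(?P<second>\d{2})(\.(?P<fraction>\d+))?)?"
    r"(?P<timezone>Z|[-+]\d{2}(:?\d{2})?)?")

m = ISO8601.match("2021-06-15T14:15:42Z")
print(m.group("year"), m.group("second"), m.group("timezone"))  # 2021 42 Z
```

Note that both the seconds group and the timezone group are optional, which is why `parse_date` has to cope with `None` for either.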
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/lambda_util.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/lambda_util.py
new file mode 100644 (file)
index 0000000..bf7b752
--- /dev/null
@@ -0,0 +1,22 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+def apply(condition, iterable):
+    # Helper method to map lambdas on iterables
+    # Created just for readability
+    return list(map(condition, iterable))
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/respbody.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/respbody.py
new file mode 100644 (file)
index 0000000..ccfec87
--- /dev/null
@@ -0,0 +1,31 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import json
+
+
+def parse_json_descr(response):
+    message = 'No details.'
+    descr = response.get('descr')
+    if descr is not None and isinstance(descr, str):
+        try:
+            data = json.loads(descr)
+            message = data.get('message', 'Unknown.')
+        except (ValueError, AttributeError):
+            # descr was not valid JSON, or not a JSON object
+            pass
+    return message
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/size_util.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/size_util.py
new file mode 100644 (file)
index 0000000..ed086d4
--- /dev/null
@@ -0,0 +1,45 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+def k8s_size_to_int(res):
+    if isinstance(res, str):
+        if res.endswith("Ki"):
+            return int(res[:-2]) * 1024
+        if res.endswith("Mi"):
+            return int(res[:-2]) * 1024 * 1024
+        if res.endswith("Gi"):
+            return int(res[:-2]) * 1024 * 1024 * 1024
+        if res.endswith("Ti"):
+            return int(res[:-2]) * 1024 * 1024 * 1024 * 1024
+        if res.endswith("Pi"):
+            return int(res[:-2]) * 1024 * 1024 * 1024 * 1024 * 1024
+        if res.endswith("K"):
+            return int(res[:-1]) * 1000
+        if res.endswith("M"):
+            return int(res[:-1]) * 1000000
+        if res.endswith("G"):
+            return int(res[:-1]) * 1000000000
+        if res.endswith("T"):
+            return int(res[:-1]) * 1000000000000
+        if res.endswith("P"):
+            return int(res[:-1]) * 1000000000000000
+        if res.endswith("m"):
+            return int(res[:-1]) / 1000.0
+        return 0
+    return res
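The suffix mapping follows the Kubernetes resource-quantity convention: binary suffixes `Ki`..`Pi` are powers of 1024, decimal `K`..`P` are powers of 1000, and `m` means milli. A table-driven sketch (names are illustrative, not the shipped code):

```python
# Binary suffixes first so "Ki" is tried before the bare "K".
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4, "Pi": 1024**5,
            "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15}

def parse_quantity(res):
    if not isinstance(res, str):
        return res
    for suffix, factor in SUFFIXES.items():
        if res.endswith(suffix):
            return int(res[:-len(suffix)]) * factor
    if res.endswith("m"):
        return int(res[:-1]) / 1000.0
    return 0

print(parse_quantity("1Ki"))   # 1024
print(parse_quantity("500m"))  # 0.5
```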
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/sslserver.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/sslserver.py
new file mode 100644 (file)
index 0000000..d3b9e77
--- /dev/null
@@ -0,0 +1,204 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transfered to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+
+import logging
+import os
+import socket
+import ssl
+import time
+
+from baculak8s.util.token import *
+
+DEFAULTTIMEOUT = 600
+DEFAULTCERTFILE = "/opt/bacula/etc/bacula-backup.cert"
+DEFAULTKEYFILE = "/opt/bacula/etc/bacula-backup.key"
+CONNECTIONSERVER_AUTHTOK_ERR1 = 'ConnectionServer: Authentication token receiving timeout!'
+CONNECTIONSERVER_AUTHTOK_ERR2 = 'ConnectionServer: Authentication token error!'
+CONNECTIONSERVER_AUTHTOK_ERR3 = '{ "message": "Cert files do not exist! Cannot prepare Connection Service!" }'
+CONNECTIONSERVER_AUTHTOK_ERR4 = '{ "message": "ConnectionServer:Cannot bind to socket! Err={}" }'
+CONNECTIONSERVER_AUTHTOK_ERR5 = 'ConnectionServer: Timeout waiting...'
+CONNECTIONSERVER_AUTHTOK_ERR6 = 'ConnectionServer: Invalid Hello Data received!'
+CONNECTIONSERVER_HELLO_ERR1 = 'ConnectionServer: Hello data receiving timeout!'
+CONNECTIONSERVER_HELLO_ERR2 = 'ConnectionServer: Invalid Hello data packet: "{}"'
+CONNECTIONSERVER_HELLO_ERR3 = 'ConnectionServer: Invalid Hello header: "{}"'
+
+
+class ConnectionServer(object):
+
+    def __init__(self, host, port=9104, token=None, certfile=None, keyfile=None, timeout=DEFAULTTIMEOUT,
+                 *args, **kwargs):
+        super(ConnectionServer, self).__init__(*args, **kwargs)
+        self.connstream = None
+        self.token = token if token is not None else generate_token()
+        self.timeout = timeout
+        try:
+            self.timeout = int(self.timeout)
+        except ValueError:
+            self.timeout = DEFAULTTIMEOUT
+        self.timeout = max(1, self.timeout)
+        socket.setdefaulttimeout(self.timeout)
+        self.bindsocket = socket.socket()
+        self.sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS)
+        self.host = host
+        self.port = port
+        self.certfile = certfile if certfile is not None else DEFAULTCERTFILE
+        self.keyfile = keyfile if keyfile is not None else DEFAULTKEYFILE
+
+    def streamrecv(self, size):
+        try:
+            data = self.connstream.recv(size)
+        except socket.timeout:
+            data = None
+        return data
+
+    def streamsend(self, data):
+        status = True
+        try:
+            self.connstream.send(data)
+        except socket.timeout:
+            status = False
+        # do not return from a finally block: that would silently swallow
+        # any exception other than socket.timeout
+        return status
+
+    def authenticate(self):
+        hello = self.gethello()
+        if not isinstance(hello, dict) or 'error' in hello:
+            logging.debug(CONNECTIONSERVER_AUTHTOK_ERR6)
+            return {
+                'error': CONNECTIONSERVER_AUTHTOK_ERR6,
+            }
+        response = hello.get('response')
+        if response is not None and response[0] != 'Hello':
+            logging.debug(CONNECTIONSERVER_AUTHTOK_ERR6)
+            return {
+                'error': CONNECTIONSERVER_AUTHTOK_ERR6,
+            }
+
+        try:
+            data = self.connstream.recv(auth_data_length())
+        except socket.timeout:
+            logging.debug(CONNECTIONSERVER_AUTHTOK_ERR1)
+            return {
+                'error': CONNECTIONSERVER_AUTHTOK_ERR1,
+            }
+        if check_auth_data(self.token, data):
+            self.connstream.send(b'OK')
+            logging.debug('ConnectionServer:Authenticated')
+            return {}
+        else:
+            self.connstream.send(b'NO')
+            logging.error(CONNECTIONSERVER_AUTHTOK_ERR2)
+            return {
+                'error': CONNECTIONSERVER_AUTHTOK_ERR2,
+            }
+
+    def gethello(self) -> dict:
+        data = b""
+        try:
+            data = self.connstream.recv(4)
+        except socket.timeout:
+            logging.debug(CONNECTIONSERVER_HELLO_ERR1)
+            return {
+                'error': CONNECTIONSERVER_HELLO_ERR1,
+            }
+        ddata = data.decode()
+        # guard against short reads before indexing the header
+        if len(ddata) < 4 or ddata[3] != ':':
+            logging.debug(CONNECTIONSERVER_HELLO_ERR3.format(ddata))
+            return {
+                'error': CONNECTIONSERVER_HELLO_ERR3.format(ddata),
+            }
+        ddata = ddata[:3]
+        try:
+            datalen = int(ddata)
+        except ValueError:
+            logging.debug(CONNECTIONSERVER_HELLO_ERR2.format(ddata))
+            return {
+                'error': CONNECTIONSERVER_HELLO_ERR2.format(ddata),
+            }
+        try:
+            data = self.connstream.recv(datalen)
+        except socket.timeout:
+            logging.debug(CONNECTIONSERVER_HELLO_ERR1)
+            return {
+                'error': CONNECTIONSERVER_HELLO_ERR1,
+            }
+        ddata = data.decode().split(':')
+        logging.debug(ddata)
+        return { 'response': ddata }
+
+    def close(self):
+        self.connstream.shutdown(socket.SHUT_RDWR)
+        self.connstream.close()
+
+    def shutdown(self):
+        self.bindsocket.close()
+
+    def listen(self):
+        if not os.path.exists(self.certfile) or not os.path.exists(self.keyfile):
+            logging.error(CONNECTIONSERVER_AUTHTOK_ERR3)
+            return {
+                'error': True,
+                'descr': CONNECTIONSERVER_AUTHTOK_ERR3,
+            }
+        self.sslcontext.load_cert_chain(certfile=self.certfile, keyfile=self.keyfile)
+        lastexcept = ""
+        for _ in range(self.timeout):
+            try:
+                self.bindsocket.bind((self.host, self.port))
+            except OSError as e:
+                logging.error(e)
+                lastexcept = str(e)
+                time.sleep(5)
+            else:
+                break
+        else:
+            # every bind attempt failed
+            logging.error(CONNECTIONSERVER_AUTHTOK_ERR4.format(lastexcept))
+            return {
+                'error': True,
+                'descr': CONNECTIONSERVER_AUTHTOK_ERR4.format(lastexcept)
+            }
+        logging.debug('ConnectionServer:Listening...')
+        self.bindsocket.listen(5)
+        return {}
+
+    def handle_connection(self, process_client_data):
+        try:
+            newsocket, fromaddr = self.bindsocket.accept()
+        except socket.timeout:
+            logging.error(CONNECTIONSERVER_AUTHTOK_ERR5)
+            return {
+                'error': CONNECTIONSERVER_AUTHTOK_ERR5,
+                'should_remove_pod': 1,
+            }
+        logging.debug("ConnectionServer:Connection from: {}".format(fromaddr))
+        self.connstream = self.sslcontext.wrap_socket(newsocket, server_side=True)
+        try:
+            authresp = self.authenticate()
+            if 'error' in authresp:
+                return authresp
+            process_client_data(self.connstream)
+        finally:
+            logging.debug('ConnectionServer:Finish - disconnect.')
+            self.connstream.shutdown(socket.SHUT_RDWR)
+            self.connstream.close()
+        return {}
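The retry loop in `listen()` above can be exercised in isolation; `bind_with_retry` is a hypothetical helper name used only for this sketch, not part of the plugin:

```python
import socket
import time


def bind_with_retry(host, port, attempts, delay=5):
    # Retry bind() until it succeeds or the attempt budget runs out,
    # mirroring the loop in ConnectionServer.listen()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lastexcept = ""
    for _ in range(attempts):
        try:
            sock.bind((host, port))
        except OSError as e:
            lastexcept = str(e)
            time.sleep(delay)
        else:
            return sock, None
    sock.close()
    return None, lastexcept
```

Returning the last exception text on failure matches how `listen()` reports `CONNECTIONSERVER_AUTHTOK_ERR4` with the final `OSError` it saw.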
diff --git a/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/token.py b/bacula/src/plugins/fd/kubernetes-backend/baculak8s/util/token.py
new file mode 100644 (file)
index 0000000..587cd54
--- /dev/null
@@ -0,0 +1,60 @@
+#
+#  Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+#
+#     Copyright (c) 2019 by Inteos sp. z o.o.
+#     All rights reserved. IP transferred to Bacula Systems according to agreement.
+#     Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
+#
+import logging
+import random
+import string
+
+TOKENSIZE = 64
+
+
+def generate_token(size=TOKENSIZE, chars=tuple(string.ascii_letters) + tuple(string.digits)):
+    """
+        Generates a random string of characters composed of letters and digits
+    :param size: the number of characters to generate - the length of the token string, default is 64
+    :param chars: the allowed characters set, the default is ascii letters and digits
+    :return: the token used for authorization
+    """
+    return ''.join(random.choice(chars) for _ in range(size))
+
+
+def create_auth_data(token):
+    auth_data = 'Token: {token:{size}}\n'.format(token=token, size=TOKENSIZE)
+    return auth_data
+
+
+def auth_data_length():
+    return len(create_auth_data(''))
+
+
+def check_auth_data(token, data):
+    """
+        Verifies the authorization data received from the peer
+    :param token: the expected token of TOKENSIZE characters
+    :param data: the raw authorization bytes received from the peer
+    :return: True if the data matches the expected token, False otherwise
+    """
+    authdata = create_auth_data(token)
+    logging.debug('AUTH_DATA:' + authdata.rstrip('\n'))
+    ddata = data.decode()
+    logging.debug('RECV_TOKEN_DATA:'+ddata.rstrip('\n'))
+    return authdata == ddata
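As a self-contained sketch, the token helpers above round-trip like this (constants copied from the file; the character set is written as a plain string for brevity):

```python
import random
import string

TOKENSIZE = 64  # same constant as in token.py


def generate_token(size=TOKENSIZE, chars=string.ascii_letters + string.digits):
    # Random token drawn from ASCII letters and digits
    return ''.join(random.choice(chars) for _ in range(size))


def create_auth_data(token):
    # 'Token: ' + token padded to TOKENSIZE + '\n' => a fixed-size frame
    return 'Token: {token:{size}}\n'.format(token=token, size=TOKENSIZE)


def check_auth_data(token, data):
    # Compare the full fixed-size frame, not just the token substring
    return create_auth_data(token) == data.decode()


token = generate_token()
wire = create_auth_data(token).encode()
```

Because the token is padded to `TOKENSIZE`, every frame has the same length, which is what `auth_data_length()` relies on when reading the peer's reply.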
diff --git a/bacula/src/plugins/fd/kubernetes-backend/bin/k8s_backend b/bacula/src/plugins/fd/kubernetes-backend/bin/k8s_backend
new file mode 100644 (file)
index 0000000..ea58974
--- /dev/null
@@ -0,0 +1,26 @@
+#!/usr/bin/env python3
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+import sys
+
+# sys.path.append('@pythondir@')
+
+from baculak8s.main import main
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/bacula/src/plugins/fd/kubernetes-backend/get_python b/bacula/src/plugins/fd/kubernetes-backend/get_python
new file mode 100755 (executable)
index 0000000..5773bae
--- /dev/null
@@ -0,0 +1,54 @@
+#!/bin/bash
+#
+# Copyright (C) 2000-2020 Kern Sibbald
+# License: BSD 2-Clause; see file LICENSE-FOSS
+#
+
+# Get some python environment variables
+
+if [ $# != 1 ]; then
+    echo "Usage: $0 [PYTHONPATH | PIP | PYTHON | PYTHON_PREFIX ]"
+    exit 1
+fi
+
+VERSION=
+for i in 13 12 11 10 9 8 7 6 5 4
+do
+    if which python3.$i &> /dev/null ; then
+        VERSION=3.$i
+        break
+    fi
+done
+
+if [ "$VERSION" = "" ]; then
+    echo "Unable to find python"
+    exit 1
+fi
+
+BASEDIR=$HOME/.local
+
+if [ $1 = "PYTHONPATH" ]; then
+    RES="$PYTHONPATH:${BASEDIR}/lib64/python${VERSION}/site-packages:${BASEDIR}/lib/python${VERSION}/site-packages:/usr/local/lib64/python${VERSION}/site-packages/:/usr/local/lib/python${VERSION}/site-packages/";
+    if [ -d $PWD/build/lib.linux-x86_64-${VERSION} ]; then
+        RES="$RES:$PWD/build/lib.linux-x86_64-${VERSION}"
+    fi
+    echo $RES
+
+elif [ $1 = "PIP" ]; then
+    if which pip$VERSION &> /dev/null
+    then
+        echo pip$VERSION
+    else
+        echo pip3
+    fi
+
+elif [ $1 = "PYTHON" ]; then
+    echo python$VERSION
+
+elif [ $1 = "PYTHON_PREFIX" ]; then
+    echo $BASEDIR
+
+else
+    echo "Invalid parameter $1"
+    exit 1
+fi
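The interpreter probe in the shell loop above has a direct Python analogue; `find_python3` is an illustrative name, not part of the build scripts:

```python
import shutil


def find_python3():
    # Probe the newest python3.x minors first (3.13 down to 3.4),
    # like the `which python3.$i` loop in get_python
    for minor in range(13, 3, -1):
        exe = shutil.which('python3.%d' % minor)
        if exe is not None:
            return '3.%d' % minor, exe
    return None, None
```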
diff --git a/bacula/src/plugins/fd/kubernetes-backend/make_bin b/bacula/src/plugins/fd/kubernetes-backend/make_bin
new file mode 100755 (executable)
index 0000000..3cef88f
--- /dev/null
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Copyright (C) 2000-2020 Kern Sibbald
+# License: BSD 2-Clause; see file LICENSE-FOSS
+
+# We need to specify all .so files in the pyinstaller command
+OPT=`find build/lib.* -name *.so | while read a; do echo -n " -r $a"; done`
+
+PYTHONPATH=`./get_python PYTHONPATH`
+PYTHON_PREFIX=`./get_python PYTHON_PREFIX`
+export PYTHONPATH
+export PATH=$PATH:$PYTHON_PREFIX/bin
+echo $PYTHONPATH
+
+set -x
+pyinstaller -F $OPT build/scripts-*/k8s_backend
diff --git a/bacula/src/plugins/fd/kubernetes-backend/mkExt.pl b/bacula/src/plugins/fd/kubernetes-backend/mkExt.pl
new file mode 100755 (executable)
index 0000000..d5d6160
--- /dev/null
@@ -0,0 +1,34 @@
+#!/usr/bin/perl -w
+use strict;
+my $file;
+
+print "
+from distutils.core import setup
+from distutils.extension import Extension
+from Cython.Build import cythonize
+
+extensions = [";
+
+while (my $l = <>)
+{
+    chomp($l);
+    next if ($l =~ /__init__.py/);
+    $l =~ s:^./::;
+    $file = $l;
+    $l =~ s:/:.:g;
+    $l =~ s:\.py$::;
+    print "Extension('$l', ['$file']),\n";
+}
+
+print "]
+
+
+setup(
+    name='k8s_backend',
+    scripts=['bin/k8s_backend'],
+    setup_requires=['cython'],
+    ext_modules=cythonize(extensions),
+)
+# find . -name '*.py' | ./mkExt.pl
+";
+
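The path-to-module rewrite that mkExt.pl performs for each `.py` file can be expressed in Python; `module_name` is a hypothetical helper used only to illustrate the transform:

```python
def module_name(path):
    # Mirror mkExt.pl: strip a leading './', drop the '.py' suffix,
    # and turn path separators into dots for the Extension name
    if path.startswith('./'):
        path = path[2:]
    name = path[:-len('.py')].replace('/', '.')
    return name, path
```

So `./baculak8s/util/token.py` becomes `Extension('baculak8s.util.token', ['baculak8s/util/token.py'])` in the generated setup script.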
diff --git a/bacula/src/plugins/fd/kubernetes-backend/requirements.txt b/bacula/src/plugins/fd/kubernetes-backend/requirements.txt
new file mode 100644 (file)
index 0000000..9417553
--- /dev/null
@@ -0,0 +1,12 @@
+# -*- coding: UTF-8 -*-
+#
+# Copyright (C) 2000-2020 Kern Sibbald
+# License: BSD 2-Clause; see file LICENSE-FOSS
+#
+# Author: Radoslaw Korzeniewski
+#
+
+pyyaml == 5.3.1
+kubernetes <= 11
+urllib3 == 1.26
+requests == 2.25
diff --git a/bacula/src/plugins/fd/kubernetes-backend/setup.py b/bacula/src/plugins/fd/kubernetes-backend/setup.py
new file mode 100644 (file)
index 0000000..cc28bab
--- /dev/null
@@ -0,0 +1,37 @@
+#!/usr/bin/env python3
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import sys
+
+from setuptools import setup, find_packages
+
+if sys.version_info < (3, 0):
+    sys.exit('This version of the Backend supports only Python 3 or above')
+
+setup(
+    name='baculak8s',
+    version='2.0.2',
+    author='Radoslaw Korzeniewski',
+    author_email='radekk@korzeniewski.net',
+    packages=find_packages(exclude=('tests', 'tests.*')),
+    # packages=packages,
+    license="Bacula® - The Network Backup Solution",
+    data_files=[
+        ('/opt/bacula/bin', ['bin/k8s_backend'])
+    ],
+    # scripts=['bin/k8s_backend']
+)
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/README b/bacula/src/plugins/fd/kubernetes-backend/tests/README
new file mode 100644 (file)
index 0000000..0458761
--- /dev/null
@@ -0,0 +1,91 @@
+--- Test Execution
+
+In order to properly execute all the tests,
+some Environment Variables must be set:
+
+BE_PLUGIN_TYPE (Specifies which Plugin should be tested)
+
+BE_PLUGIN_VERSION (Specifies which Plugin Version should be tested)
+
+BE_PLUGIN_URL (Specifies the URL where the Plugin's Data Source exists)
+
+BE_PLUGIN_USER (Specifies the username that should be used by the Plugin)
+
+BE_PLUGIN_PWD (Specifies the password that should be used by the Plugin)
+
+Example:
+
+export BE_PLUGIN_TYPE=swift
+
+export BE_PLUGIN_VERSION=1
+
+export BE_PLUGIN_URL=http://192.168.0.5:8080
+
+export BE_PLUGIN_USER=test:tester
+
+export BE_PLUGIN_PWD=testing
+
+
+To run the tests (from the project's root folder):
+
+python3 -m unittest discover tests/test_baculaswift
+
+
+For more information:
+
+https://docs.openstack.org/swift/latest/development_saio.html
+
+
+
+-- Manual Testing
+
+Swift containers and objects may need to be created
+in order to do manual testing. In this case, the
+python-swiftclient should be used directly.
+
+The python-swiftclient (called "swift") tool is a command line utility
+for communicating with an OpenStack Object Storage (swift) environment.
+It allows one to perform several types of operations.
+
+In order to use this tool properly (with legacy authentication),
+four Environment Variables must be set:
+
+ST_AUTH_VERSION
+ST_AUTH
+ST_USER
+ST_KEY
+
+Those values could be, for example (if you are using SAIO - Swift All in One):
+
+export ST_AUTH_VERSION=1.0
+export ST_AUTH=http://192.168.0.5:8080/auth/v1.0
+export ST_USER=test:tester
+export ST_KEY=testing
+
+The complete list of operations that this tool can perform is shown
+in this link:
+
+https://docs.openstack.org/python-swiftclient/latest/cli/index.html#examples
+
+-- Swift ACL
+
+On Swift, only Accounts and Containers (buckets) have Access Control Lists.
+In order to create ACLs on Containers with the "swift" command-line tool,
+please check this link:
+
+https://www.swiftstack.com/docs/cookbooks/swift_usage/container_acl.html
+
+-- Swift XATTRS
+
+On Swift, both Containers (buckets) and Objects (files) may have Extended Attributes.
+
+In order to create XATTRs on both with the "swift" command-line tool,
+please check this link:
+
+https://docs.openstack.org/python-swiftclient/latest/cli/index.html#swift-post
+
+
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/tests/__init__.py
new file mode 100644 (file)
index 0000000..a9ba85f
--- /dev/null
@@ -0,0 +1,27 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import os
+from os import environ
+
+TESTS_ROOT = os.path.abspath(os.path.dirname(__file__))
+TMP_TEST_FILES_ROOT = os.path.join(TESTS_ROOT, "tmp")
+
+BACKEND_PLUGIN_TYPE = environ.get('BE_PLUGIN_TYPE')
+BACKEND_PLUGIN_VERSION = environ.get('BE_PLUGIN_VERSION')
+BACKEND_PLUGIN_URL = environ.get('BE_PLUGIN_URL')
+BACKEND_PLUGIN_USER = environ.get('BE_PLUGIN_USER')
+BACKEND_PLUGIN_PWD = environ.get('BE_PLUGIN_PWD')
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/base_tests.py b/bacula/src/plugins/fd/kubernetes-backend/tests/base_tests.py
new file mode 100644 (file)
index 0000000..7091d4c
--- /dev/null
@@ -0,0 +1,55 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import unittest
+
+from tests import TMP_TEST_FILES_ROOT
+from tests.util.io_test_util import IOTestUtil
+from tests.util.os_test_util import OsTestUtil
+
+
+class BaseTest(unittest.TestCase, OsTestUtil, IOTestUtil):
+    """
+        Base Class for all tests
+    """
+
+    def setUp(self):
+        self._create_test_folders()
+
+    def _create_test_folders(self):
+        self.test_files_root = self.create_test_dir(TMP_TEST_FILES_ROOT)
+
+    def tearDown(self):
+        self.delete_if_dir(self.test_files_root)
+
+    def assertItemsContainsValueType(self, items, key, val_type):
+        for item in items:
+            self.assertDictContainsKey(item, key, val_type)
+
+    def assertDictContainsKey(self, item, key, val_type):
+        self.assertIn(key, item)
+        self.assertIsInstance(item[key], val_type)
+
+    def assertItemsContainsValue(self, items, key, val):
+        for item in items:
+            self.assertEqual(item[key], val)
+
+    def assertIsSubsetOf(self, subset, dictionary):
+        isSubset = set(subset.items()).issubset(set(dictionary.items()))
+        self.assertTrue(isSubset, "Wasn't a subset!")
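The subset assertion above reduces to a set comparison over (key, value) pairs; a minimal standalone version of the same check:

```python
def is_subset(subset, dictionary):
    # True when every (key, value) pair of `subset` also appears in
    # `dictionary`; values must be hashable for the set conversion
    return set(subset.items()).issubset(set(dictionary.items()))
```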
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/k8s_tests.py b/bacula/src/plugins/fd/kubernetes-backend/tests/k8s_tests.py
new file mode 100644 (file)
index 0000000..d824a04
--- /dev/null
@@ -0,0 +1,125 @@
+# -*- coding: UTF-8 -*-
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+from baculak8s.util.lambda_util import apply
+from tests.base_tests import BaseTest
+from tests.util.os_test_util import gen_random_bytes
+
+
+class K8STest(BaseTest):
+    """
+        Class with utility methods for dealing with K8S
+    """
+
+    def setUp(self):
+        super().setUp()
+        self.config_test_k8sclient()
+
+    def tearDown(self):
+        super().tearDown()
+        self.delete_all_containers()
+
+    def config_test_k8sclient(self):
+        pass
+
+    def delete_all_containers(self):
+        pass
+
+    def create_test_data(self, container, file_content, filename):
+        pass
+
+    def create_container_new(self, buckets):
+        pass
+
+    def upload_file(self, container_name, filepath, filename):
+        with open(filepath + "/" + filename, 'rb') as local:
+            self.swift_connection.put_object(
+                container_name,
+                filename,
+                contents=local,
+                content_type='text/plain'
+            )
+
+    def upload_files(self, container, files, file_chunk_count=0, file_chunk_size=0):
+        pass
+
+    def create_containers(self, amount):
+        containers = []
+
+        for i in range(0, amount):
+            container_name = "container_%d" % i
+            self.swift_connection.put_container(container_name)
+            containers.append(container_name)
+
+        return containers
+
+    def put_containers_on_swift(self, containers):
+        for container in containers:
+            self.swift_connection.put_container(container)
+
+    def upload_test_objects(self, container, path, filenames):
+        for filename in filenames:
+            self.upload_file(container, path, filename)
+
+    def verify_containers(self, buckets, where=None):
+
+        if where is not None:
+            splitted = where.split("/")
+            splitted = list(filter(None, splitted))
+            header, uploaded_files = self.swift_connection.get_container(splitted[0])
+            uploaded_names = apply(lambda f: f['name'], uploaded_files)
+            uploaded_bytes = apply(lambda f: f['bytes'], uploaded_files)
+
+            for bucket in buckets:
+                expected_files = bucket['files']
+
+                for file in expected_files:
+                    expected_name = file["name"]
+                    if len(splitted) > 1:
+                        path = "/".join(splitted[1:])
+                        expected_name = "%s/%s" % (path, expected_name)
+                    self.assertIn(expected_name, uploaded_names)
+                    self.assertIn(file['size'], uploaded_bytes)
+
+
+        else:
+            for bucket in buckets:
+                expected_files = bucket['files']
+                header, uploaded_files = self.swift_connection.get_container(bucket['name'])
+                uploaded_names = apply(lambda f: f['name'], uploaded_files)
+                uploaded_bytes = apply(lambda f: f['bytes'], uploaded_files)
+
+                for file in expected_files:
+                    self.assertIn(file['name'], uploaded_names)
+                    self.assertIn(file['size'], uploaded_bytes)
+
+                self.assertIn('x-container-meta-custom1', header)
+
+
+class FileLikeObject(object):
+    def __init__(self, file_chunks):
+        self.file_chunks = file_chunks
+
+    def read(self, size=-1):
+        if not self.file_chunks:
+            return None
+
+        return self.file_chunks.pop()
+
+    def next_chunk(self, chunk_size):
+        pass
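Note that `FileLikeObject.read()` serves chunks with `pop()`, i.e. from the tail of the list, so a consumer receives them in reverse order. This standalone sketch (a simplified copy of the class above) makes the ordering explicit:

```python
class ChunkReader:
    """Minimal copy of the FileLikeObject read() behaviour above."""

    def __init__(self, file_chunks):
        self.file_chunks = file_chunks

    def read(self, size=-1):
        if not self.file_chunks:
            return None
        # pop() takes from the end of the list: last chunk first
        return self.file_chunks.pop()
```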
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_account.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_account.py
new file mode 100644 (file)
index 0000000..8d84c1c
--- /dev/null
@@ -0,0 +1,37 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import unittest
+
+from tests.base_tests import BaseTest
+
+
+class AccountTest(BaseTest):
+
+    def test_should_retrieve_account_stats(self):
+        stats = self.swift_service.stat()
+        self.assertEqual(True, stats['success'])
+
+    def test_should_list_account_containers_metadata(self):
+        containers_meta = self.swift_service.list()
+        for metadata in containers_meta:
+            self.assertEqual(True, metadata['success'])
+
+
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_auth.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_auth.py
new file mode 100644 (file)
index 0000000..638bdb1
--- /dev/null
@@ -0,0 +1,30 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import unittest
+
+from tests.base_tests import BaseTest
+
+
+class AuthTest(BaseTest):
+
+    def test_should_do_legacy_authentication(self):
+        self.assertEqual(True, self.swift_service.capabilities()['success'])
+
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_container.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_container.py
new file mode 100644 (file)
index 0000000..b2e5026
--- /dev/null
@@ -0,0 +1,98 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import unittest
+from time import sleep
+
+from swiftclient.service import SwiftUploadObject
+
+from tests.base_tests import BaseTest
+
+
+class ContainerTest(BaseTest):
+    def test_should_create_container(self):
+        container_name = "cats"
+        self.swift_connection.put_container(container_name)
+        containers = self.swift_connection.get_account()[1]
+        self.assertEqual(len(containers), 1)
+
+    def test_should_list_containers(self):
+        self.__create_containers(5)
+        response = self.swift_service.list()
+
+        for page in response:
+            print(len(page["listing"]))
+            print(page["listing"])
+
+    def __create_containers(self, amount):
+
+        for i in range(0, amount):
+            self.swift_connection.put_container("container=%s" % i)
+
+    def test_should_list_container_metadata(self):
+        container_name = "cats"
+        self.swift_connection.put_container(container_name)
+        stats = self.swift_service.stat(container=container_name)
+        print(stats['headers'])
+        sleep(4)
+        self.swift_connection.put_object(container_name, "cat1.txt", "")
+        stats = self.swift_service.stat(container=container_name)
+        print(stats['headers'])
+
+    def test_should_list_container_objects(self):
+        container_name = "WORLD"
+        files = ["AMERICA/NORTH/EUA/eua.txt",
+                 "AMERICA/NORTH/CANADA/canada.txt",
+                 "AMERICA/SOUTH/BRAZIL/brazil.txt"]
+
+        fc1 = FileLikeObject([self.gen_random_bytes(65553) for i in range(1)])
+        obj1 = SwiftUploadObject(fc1, files[0])
+
+        fc2 = FileLikeObject([self.gen_random_bytes(65553) for i in range(1)])
+        obj2 = SwiftUploadObject(fc2, files[1])
+
+        fc3 = FileLikeObject([self.gen_random_bytes(65553) for i in range(1)])
+        obj3 = SwiftUploadObject(fc3, files[2])
+
+        r_gen = self.swift_service.upload(container_name, [obj1, obj2, obj3])
+
+        for r in r_gen:
+            pass
+
+        self.swift_connection.put_container(container_name)
+        list_gen = self.swift_connection.get_container(container=container_name)
+        print(list_gen)
+
+class FileLikeObject(object):
+    def __init__(self, file_chunks):
+        self.file_chunks = file_chunks
+
+    def read(self, size=-1):
+        if not self.file_chunks:
+            return None
+
+        return self.file_chunks.pop()
+
+    def next_chunk(self, chunk_size):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_object.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_exploratory/test_object.py
new file mode 100644 (file)
index 0000000..cc9f9ae
--- /dev/null
@@ -0,0 +1,62 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import unittest
+from io import RawIOBase
+
+from swiftclient.service import SwiftUploadObject
+
+from tests.base_tests import BaseTest
+
+
+class ObjectTest(BaseTest):
+    def tearDown(self):
+        self.delete_all_containers()
+
+    def test_should_upload_object(self):
+        container = "cats"
+        filename = "cats1.txt"
+        file_like = FileLikeObject([self.gen_random_bytes(65553) for i in range(50)])
+        obj = SwiftUploadObject(file_like, filename)
+        self.swift_connection.put_container(container)
+        r_gen = self.swift_service.upload(container, [obj])
+
+        for r in r_gen:
+            pass
+
+        c_data = self.swift_connection.get_container(container)
+        print(c_data)
+
+
+class FileLikeObject(RawIOBase):
+    def __init__(self, file_chunks, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.file_chunks = file_chunks
+
+    def read(self, size=-1):
+        if not self.file_chunks:
+            return None
+
+        return self.file_chunks.pop()
+
+    def next_chunk(self, chunk_size):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_backup.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_backup.py
new file mode 100644 (file)
index 0000000..28b331c
--- /dev/null
@@ -0,0 +1,63 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+from tests.test_baculak8s.test_system.system_test import SystemTest
+
+
+class StressBackupTest(SystemTest):
+    # TODO Refactor
+    pass
+    # def test_backup_5M_file(self):
+    #     container = "cats"
+    #     filename = "cats1.txt"
+    #     file_content = self.gen_random_bytes(5 * 1024 * 1024)
+    #     self.create_test_data(container, file_content, filename)
+    #     command = BackupFileCommandBuilder.build(container, filename)
+    #     output = self.execute_plugin(command)
+    #     self.verify_param_existence(output, "file", filename)
+    #     self.verify_param_existence(output, "content-length", len(file_content))
+    #
+    # def test_backup_50M_file(self):
+    #     container = "cats"
+    #     filename = "cats1.txt"
+    #     file_content = self.gen_random_bytes(50 * 1024 * 1024)
+    #     self.create_test_data(container, file_content, filename)
+    #     command = BackupFileCommandBuilder.build(container, filename)
+    #     output = self.execute_plugin(command)
+    #     self.verify_param_existence(output, "file", filename)
+    #     self.verify_param_existence(output, "content-length", len(file_content))
+    #
+    # def test_backup_100M_file(self):
+    #     container = "cats"
+    #     filename = "cats1.txt"
+    #     file_content = self.gen_random_bytes(100 * 1024 * 1024)
+    #     self.create_test_data(container, file_content, filename)
+    #     command = BackupFileCommandBuilder.build(container, filename)
+    #     output = self.execute_plugin(command)
+    #     self.verify_param_existence(output, "file", filename)
+    #     self.verify_param_existence(output, "content-length", len(file_content))
+    #
+    # def test_backup_500M_file(self):
+    #     container = "cats"
+    #     filename = "cats1.txt"
+    #     file_content = self.gen_random_bytes(500 * 1024 * 1024)
+    #     self.create_test_data(container, file_content, filename)
+    #     command = BackupFileCommandBuilder.build(container, filename)
+    #     output = self.execute_plugin(command)
+    #     self.verify_param_existence(output, "file", filename)
+    #     self.verify_param_existence(output, "content-length", len(file_content))
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_restore.py b/bacula/src/plugins/fd/kubernetes-backend/tests/test_stress/test_stress_restore.py
new file mode 100644 (file)
index 0000000..8209434
--- /dev/null
@@ -0,0 +1,67 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import time
+
+from baculaswift.plugins.plugin import MEGABYTE
+from baculaswift.services.job_info_service import TYPE_RESTORE
+from tests.test_baculak8s.test_system.system_test import SystemTest
+from tests.util.os_test_util import create_byte_chunks
+from tests.util.packet_builders import BackendCommandBuilder
+
+
+class StressRestoreTest(SystemTest):
+    def test_restore_100M_file_regular_upload_65K_data_packets(self):
+        total_size = 1600 * 65553
+        buckets = [{
+            "name": "bucket_1",
+            "files": [
+                {
+                    "name": "file1.txt",
+                    "size": total_size,
+                    "content": create_byte_chunks(1600, 65553)
+                },
+            ]
+        }]
+        packet = BackendCommandBuilder().build(TYPE_RESTORE, buckets, segment_size=8000 * MEGABYTE)
+        start = time.perf_counter()
+        output = self.execute_plugin(packet)
+        end = time.perf_counter()
+        elapsed = (end - start)
+        self.verify_no_aborts(output)
+        resp = self.swift_connection.head_object("bucket_1", "file1.txt")
+        self.assertEqual(total_size, int(resp["content-length"]))
+
+    def test_restore_100M_file_regular_upload_999K_data_packets(self):
+        total_size = 105 * 999999
+        buckets = [{
+            "name": "bucket_1",
+            "files": [
+                {
+                    "name": "file1.txt",
+                    "size": total_size,
+                    "content": create_byte_chunks(105, 999999)
+                },
+            ]
+        }]
+        packet = BackendCommandBuilder().build(TYPE_RESTORE, buckets, segment_size=8000 * MEGABYTE)
+        start = time.perf_counter()
+        output = self.execute_plugin(packet)
+        end = time.perf_counter()
+        elapsed = (end - start)
+        self.verify_no_aborts(output)
+        resp = self.swift_connection.head_object("bucket_1", "file1.txt")
+        self.assertEqual(total_size, int(resp["content-length"]))
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/util/__init__.py b/bacula/src/plugins/fd/kubernetes-backend/tests/util/__init__.py
new file mode 100644 (file)
index 0000000..3b41a74
--- /dev/null
@@ -0,0 +1,16 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/util/io_test_util.py b/bacula/src/plugins/fd/kubernetes-backend/tests/util/io_test_util.py
new file mode 100644 (file)
index 0000000..12b51d6
--- /dev/null
@@ -0,0 +1,122 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import io
+import sys
+
+from baculaswift.io.packet_definitions import EOD_PACKET, STATUS_COMMAND, STATUS_DATA, STATUS_ABORT, STATUS_ERROR, \
+    STATUS_WARNING
+
+
+class IOTestUtil(object):
+    """
+       Class with utility methods for dealing with IO
+    """
+
+    def stub_bytestream_stdin(testcase_inst, input):
+        stdin = sys.stdin
+
+        def cleanup():
+            sys.stdin = stdin
+
+        wrapper = io.TextIOWrapper(
+            io.BytesIO(input),
+            newline='\n'
+        )
+
+        testcase_inst.addCleanup(cleanup)
+        sys.stdin = wrapper
+
+    def stub_stdout(testcase_inst):
+        wrapper = io.TextIOWrapper(
+            io.BytesIO(),
+            newline='\n'
+        )
+
+        saved_stdout = sys.stdout
+        stub_stdout = wrapper
+        sys.stdout = stub_stdout
+
+        def cleanup():
+            stub_stdout.close()
+            sys.stdout = saved_stdout
+
+        testcase_inst.addCleanup(cleanup)
+        return stub_stdout
+
+    def verify_command_packet(self, output, packet):
+        self.verify_packet_existence(output, STATUS_COMMAND, packet.encode())
+
+    def verify_not_command_packet(self, output, packet):
+        self.verify_not_packet_existence(output, STATUS_COMMAND, packet.encode())
+
+    def verify_data_packet(self, output, packet):
+        packet_length = str(len(packet)).zfill(6).encode()
+        packet_header = STATUS_DATA.encode() + packet_length + b'\n'
+        self.assertIn(packet_header + packet, output)
+
+    def verify_abort_packet(self, output, packet):
+        self.verify_packet_existence(output, STATUS_ABORT, packet.encode())
+
+    def verify_error_packet(self, output, packet):
+        self.verify_packet_existence(output, STATUS_ERROR, packet.encode())
+
+    def verify_warning_packet(self, output, packet):
+        self.verify_packet_existence(output, STATUS_WARNING, packet.encode())
+
+    def verify_no_aborts(self, output):
+        self.assertNotIn(b'A00000', output)
+        self.assertNotIn(b'A0000', output)
+        self.assertNotIn(b'A000', output)
+
+    def verify_no_errors(self, output):
+        self.assertNotIn(b'E00000', output)
+        self.assertNotIn(b'E0000', output)
+        self.assertNotIn(b'E000', output)
+
+    def verify_no_warnings(self, output):
+        self.assertNotIn(b'W00000', output)
+        self.assertNotIn(b'W0000', output)
+        self.assertNotIn(b'W000', output)
+
+    def verify_eod_packet(self, output):
+        self.assertIn(EOD_PACKET, output)
+
+    def verify_eod_packet_count(self, output, expected_count):
+        self.verify_packet_count(output, EOD_PACKET, expected_count)
+
+    def verify_str_count(self, output, packet, expected_count):
+        real_count = output.count(packet.encode())
+        self.assertEqual(expected_count, real_count)
+
+    def verify_packet_count(self, output, packet, expected_count):
+        real_count = output.count(packet)
+        self.assertEqual(expected_count, real_count)
+
+    def verify_packet_existence(self, output, status, packet):
+        packet_length = str(len(packet) + 1).zfill(6).encode()
+        packet_header = status.encode() + packet_length + b'\n'
+        self.assertIn(packet_header + packet, output)
+
+    def verify_not_packet_existence(self, output, status, packet):
+        packet_length = str(len(packet) + 1).zfill(6).encode()
+        packet_header = status.encode() + packet_length + b'\n'
+        self.assertNotIn(packet_header + packet, output)
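A note on the length accounting above (an illustrative sketch, not part of the patch): `verify_packet_existence` rebuilds the header that the sender emits, where the six-digit length field counts the payload plus the newline the sender appends, hence the `+ 1`.

```python
# Hypothetical helper mirroring verify_packet_existence's header math.
# status is a one-letter prefix such as "C", "A", "E" or "W"; the "+ 1"
# accounts for the newline appended to the payload before framing.
def expected_header(status, payload):
    return status.encode() + str(len(payload) + 1).zfill(6).encode() + b"\n"

# 18-byte payload + 1 for the trailing newline -> length field 000019
header = expected_header("C", b"Hello kubernetes 1")
assert header == b"C000019\n"
```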
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/util/os_test_util.py b/bacula/src/plugins/fd/kubernetes-backend/tests/util/os_test_util.py
new file mode 100644 (file)
index 0000000..73a2549
--- /dev/null
@@ -0,0 +1,54 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+import os
+import shutil
+
+
+class OsTestUtil:
+    """
+       Class with utility methods for dealing with System Calls
+    """
+
+    def create_test_dir(self, dir_name):
+        self.delete_if_dir(dir_name)
+        os.mkdir(dir_name)
+        return dir_name
+
+    def create_test_file(self, dir, filename, bytes_content):
+        path = os.path.join(dir, filename)
+        with open(path, 'w+b') as file:
+            file.write(bytes_content)
+        return path
+
+    def delete_if_dir(self, path):
+        if os.path.isdir(path):
+            shutil.rmtree(path)
+
+    def gen_random_bytes(self, bytes_size):
+        return os.urandom(bytes_size)
+
+
+def gen_random_bytes(bytes_size):
+    return os.urandom(bytes_size)
+
+
+def create_byte_chunks(chunk_count, chunk_size):
+    chunk = gen_random_bytes(chunk_size)
+    return [chunk] * chunk_count
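`create_byte_chunks` is worth a note (a sketch of assumed behavior, not part of the patch): it generates one random chunk and repeats a reference to it, so a multi-hundred-megabyte logical payload costs only `chunk_size` bytes of memory, which is what makes the stress tests above affordable.

```python
import os

# Sketch of the helper: one urandom chunk, repeated by reference.
def create_byte_chunks(chunk_count, chunk_size):
    chunk = os.urandom(chunk_size)
    return [chunk] * chunk_count

chunks = create_byte_chunks(4, 1024)
assert len(chunks) == 4
assert sum(len(c) for c in chunks) == 4 * 1024   # logical payload size
assert all(c is chunks[0] for c in chunks)       # same object, repeated
```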
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_builders.py b/bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_builders.py
new file mode 100644 (file)
index 0000000..64fdac2
--- /dev/null
@@ -0,0 +1,390 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+import json
+
+from baculaswift.io.jobs.backup_io import BACKUP_START_PACKET
+from baculaswift.io.jobs.restore_io import RESTORE_START_PACKET, RESTORE_END_PACKET, TRANSFER_START_PACKET
+from baculaswift.io.packet_definitions import TERMINATION_PACKET, XATTR_DATA_START, ACL_DATA_START, \
+    ESTIMATION_START_PACKET
+from baculaswift.io.services.job_end_io import END_JOB_START_PACKET
+from baculaswift.io.services.job_info_io import JOB_START_PACKET
+from baculaswift.io.services.plugin_params_io import PLUGIN_PARAMETERS_START
+from baculaswift.plugins.plugin import DEFAULT_FILE_MODE, DEFAULT_DIR_MODE
+from baculaswift.services.job_info_service import TYPE_BACKUP, TYPE_ESTIMATION, TYPE_RESTORE
+from tests import BACKEND_PLUGIN_USER, BACKEND_PLUGIN_PWD, BACKEND_PLUGIN_URL, BACKEND_PLUGIN_TYPE
+from tests.util.packet_test_util import PacketTestUtil
+
+
+class HandshakePacketBuilder(PacketTestUtil):
+    def build(self, packet_content="Hello %s 1" % BACKEND_PLUGIN_TYPE):
+        return self.command_packet(packet_content)
+
+
+class JobInfoStartPacketBuilder(PacketTestUtil):
+    def build_invalid(self):
+        return self.build("InvalidStartPacket")
+
+    def build(self, content=JOB_START_PACKET):
+        packet = self.command_packet(content)
+        return packet
+
+
+class JobInfoBlockBuilder(PacketTestUtil):
+    def with_invalid_job_type(self):
+        return self.build("InvalidJobType")
+
+    def build(self, job_type, where=None, since=None, replace=None):
+        packet = JobInfoStartPacketBuilder().build()
+        packet += self.command_packet("Name=az_14125_bcjn")
+        packet += self.command_packet("JobID=334")
+        packet += self.command_packet("Type=%s" % job_type.upper())
+
+        if where is not None:
+            packet += self.command_packet("Where=%s" % where)
+
+        if since is not None:
+            packet += self.command_packet("Since=%s" % since)
+
+        if replace is not None:
+            packet += self.command_packet("Replace=%s" % replace)
+
+        packet += self.eod_packet()
+        return packet
+
+    def without_job_name(self):
+        packet = JobInfoStartPacketBuilder().build()
+        packet += self.command_packet("JobID=334")
+        packet += self.command_packet("Type=E")
+        packet += self.eod_packet()
+        return packet
+
+    def without_job_id(self):
+        packet = JobInfoStartPacketBuilder().build()
+        packet += self.command_packet("Name=az_14125_bcjn")
+        packet += self.command_packet("Type=E")
+        packet += self.eod_packet()
+        return packet
+
+    def without_job_type(self):
+        packet = JobInfoStartPacketBuilder().build()
+        packet += self.command_packet("Name=az_14125_bcjn")
+        packet += self.command_packet("JobID=334")
+        packet += self.eod_packet()
+        return packet
+
+
+class PluginParamsStartPacketBuilder(PacketTestUtil):
+    def build_invalid(self):
+        return self.build(content="InvalidStartPacket")
+
+    def build(self, content=PLUGIN_PARAMETERS_START):
+        packet = self.command_packet(content)
+        return packet
+
+
+class PluginParamsBlockBuilder(PacketTestUtil):
+    def build(self,
+              includes=None,
+              regex_includes=None,
+              excludes=None,
+              regex_excludes=None,
+              segment_size=None,
+              restore_local_path=None,
+              password=True,
+              passfile=None):
+        packet = PluginParamsStartPacketBuilder().build()
+        packet += self.command_packet("User=%s" % BACKEND_PLUGIN_USER)
+        packet += self.command_packet("URL=%s" % BACKEND_PLUGIN_URL)
+
+        if password:
+            packet += self.command_packet("Password=%s" % BACKEND_PLUGIN_PWD)
+
+        if includes is not None:
+            for include in includes:
+                packet += self.command_packet("include=%s" % include)
+
+        if regex_includes is not None:
+            for regex_include in regex_includes:
+                packet += self.command_packet("regex_include=%s" % regex_include)
+
+        if excludes is not None:
+            for exclude in excludes:
+                packet += self.command_packet("exclude=%s" % exclude)
+
+        if regex_excludes is not None:
+            for regex_exclude in regex_excludes:
+                packet += self.command_packet("regex_exclude=%s" % regex_exclude)
+
+        if segment_size is not None:
+            packet += self.command_packet("be_object_segment_size=%s" % segment_size)
+
+        if restore_local_path is not None:
+            packet += self.command_packet("restore_local_path=%s" % restore_local_path)
+
+        if passfile is not None:
+            packet += self.command_packet("passfile=%s" % passfile)
+
+        packet += self.command_packet("debug=1")
+        packet += self.eod_packet()
+        return packet
+
+    def without_url(self):
+        packet = PluginParamsStartPacketBuilder().build()
+        packet += self.command_packet("User=%s" % BACKEND_PLUGIN_USER)
+        packet += self.command_packet("Password=%s" % BACKEND_PLUGIN_PWD)
+        packet += self.eod_packet()
+        return packet
+
+    def without_user(self):
+        packet = PluginParamsStartPacketBuilder().build()
+        packet += self.command_packet("Password=%s" % BACKEND_PLUGIN_PWD)
+        packet += self.command_packet("URL=%s" % BACKEND_PLUGIN_URL)
+        packet += self.eod_packet()
+        return packet
+
+    def without_pwd(self):
+        packet = PluginParamsStartPacketBuilder().build()
+        packet += self.command_packet("User=%s" % BACKEND_PLUGIN_USER)
+        packet += self.command_packet("URL=%s" % BACKEND_PLUGIN_URL)
+        packet += self.eod_packet()
+        return packet
+
+
+class JobEndPacketBuilder(PacketTestUtil):
+    def build(self):
+        packet = self.command_packet(END_JOB_START_PACKET)
+        packet += TERMINATION_PACKET
+        return packet
+
+    def build_invalid(self):
+        packet = self.invalid_command_packet()
+        packet += TERMINATION_PACKET
+        return packet
+
+
+class BackupCommandBuilder(PacketTestUtil):
+    def build(self):
+        packet = self.command_packet(BACKUP_START_PACKET)
+        return packet
+
+
+class FilesEstimationCommandBuilder(PacketTestUtil):
+    def build(self):
+        packet = self.command_packet(ESTIMATION_START_PACKET)
+        return packet
+
+
+class RestoreFileCommandBuilder(PacketTestUtil):
+    def with_invalid_file_transfer_start(self, buckets):
+        return self.build(buckets, invalid_file_transfer_start=True)
+
+    def with_invalid_xattrs_transfer_start(self, buckets):
+        return self.build(buckets, invalid_xattrs_transfer_start=True)
+
+    def with_invalid_acl_transfer_start(self, buckets):
+        return self.build(buckets, invalid_acl_transfer_start=True)
+
+    def build(self, buckets,
+              with_start_packet=True,
+              invalid_file_transfer_start=False,
+              invalid_xattrs_transfer_start=False,
+              invalid_acl_transfer_start=False,
+              fname_without_fsource=False,
+              where=None,
+              ):
+
+        packet = b''
+
+        if with_start_packet:
+            packet += self.command_packet(RESTORE_START_PACKET)
+
+        for bucket in buckets:
+            packet += self.__create_files_packets(bucket, invalid_file_transfer_start,
+                                                  invalid_xattrs_transfer_start,
+                                                  where=where,
+                                                  fname_without_fsource=fname_without_fsource)
+
+            packet += self.__create_bucket_packets(bucket, invalid_acl_transfer_start)
+
+        packet += self.command_packet(RESTORE_END_PACKET)
+        return packet
+
+    def __create_files_packets(self, bucket, invalid_file_transfer_start=False, invalid_xattrs_transfer_start=False, where=None, fname_without_fsource=False):
+        packet = b''
+        for file in bucket['files']:
+            packet += self.file_info(bucket['name'], file, where=where, fname_without_fsource=fname_without_fsource)
+            if "data_packets" not in file:
+                file["data_packets"] = True
+
+            if file["data_packets"]:
+
+                if file['size'] > 0:
+
+                    if invalid_file_transfer_start:
+                        packet += self.invalid_command_packet()
+                    else:
+                        packet += self.command_packet(TRANSFER_START_PACKET)
+
+                    if 'content' in file:
+                        packet += self.file_contents(file['content'])
+
+                packet += self.file_acl(False)
+                packet += self.file_xattrs(invalid_xattrs_transfer_start)
+
+        return packet
+
+    def file_info(self, bucket, file, where=None, fname_without_fsource=False):
+        if fname_without_fsource:
+            file_source = ""
+        else:
+            file_source = "@%s" % BACKEND_PLUGIN_TYPE
+
+        if where:
+            packet = self.command_packet('FNAME:%s/%s/%s/%s' % (where, file_source, bucket, file['name']))
+        else:
+            packet = self.command_packet('FNAME:%s/%s/%s' % (file_source, bucket, file['name']))
+        packet += self.command_packet('STAT:F %s 0 0 %s 1 473' % (file['size'], DEFAULT_FILE_MODE))
+
+        if "modified-at" not in file:
+            file["modified-at"] = 2222222222
+
+        packet += self.command_packet('TSTAMP:1111111111 %s 3333333333' % file["modified-at"])
+        packet += self.eod_packet()
+        return packet
+
+    def file_contents(self, contents):
+        packet = b''
+        for chunk in contents:
+            packet += self.data_packet(chunk)
+        packet += self.eod_packet()
+        return packet
+
+    def file_xattrs(self, invalid_xattrs_transfer_start):
+        if invalid_xattrs_transfer_start:
+            packet = self.invalid_command_packet()
+        else:
+            packet = self.command_packet(XATTR_DATA_START)
+
+        x_attrs = {
+            'content-type': "app/pdf",
+            'content-encoding': "ascii",
+            'content-disposition': "download",
+            'x-delete-at': "1999919048",
+            'x-object-meta-custom1': "custom_meta1",
+            'x-object-meta-custom2': "custom_meta2",
+        }
+        x_attrs_bytes = json.dumps(x_attrs).encode()
+
+        packet += self.data_packet(x_attrs_bytes)
+        packet += self.eod_packet()
+        return packet
+
+    def file_acl(self, invalid_acl_transfer_start):
+        # Swift does not have File Acl
+        if BACKEND_PLUGIN_TYPE == "swift":
+            return b''
+
+        if invalid_acl_transfer_start:
+            packet = self.invalid_command_packet()
+        else:
+            packet = self.command_packet(ACL_DATA_START)
+
+        acl = {
+            'read': "user1, user2",
+            'write': "user3"
+        }
+        acl_bytes = json.dumps(acl).encode()
+
+        packet += self.data_packet(acl_bytes)
+        packet += self.eod_packet()
+        return packet
+
+    def __create_bucket_packets(self, bucket, invalid_acl_transfer_start=False):
+        packet = self.bucket_info(bucket)
+
+        if "data_packets" not in bucket:
+            bucket["data_packets"] = True
+
+        if bucket["data_packets"]:
+            packet += self.bucket_acl(invalid_acl_transfer_start)
+            packet += self.bucket_xattrs()
+
+        return packet
+
+    def bucket_info(self, bucket):
+        packet = self.command_packet('FNAME:@%s/%s/' % (BACKEND_PLUGIN_TYPE, bucket['name']))
+
+        if "modified-at" not in bucket:
+            bucket["modified-at"] = 2222222222
+
+        packet += self.command_packet('STAT:D 12345 0 0 %s 1 473' % DEFAULT_DIR_MODE)
+        packet += self.command_packet('TSTAMP:1111111111 %s 3333333333' % bucket["modified-at"])
+        packet += self.eod_packet()
+        return packet
+
+    def bucket_xattrs(self):
+        packet = self.command_packet(XATTR_DATA_START)
+        xattrs = {
+            'x-container-meta-quota-bytes': "1000000",
+            'x-container-meta-quota-count': "100",
+            'x-container-meta-web-directory-type': "text/directory",
+            'x-container-meta-custom1': "custom_meta1",
+            'x-container-meta-custom2': "custom_meta2",
+        }
+        x_attrs_bytes = json.dumps(xattrs).encode()
+
+        packet += self.data_packet(x_attrs_bytes)
+        packet += self.eod_packet()
+        return packet
+
+    def bucket_acl(self, invalid_acl_transfer_start):
+        if invalid_acl_transfer_start:
+            packet = self.invalid_command_packet()
+        else:
+            packet = self.command_packet(ACL_DATA_START)
+
+        acl = {
+            'read': "user1, user2",
+            'write': "user3"
+        }
+        acl_bytes = json.dumps(acl).encode()
+
+        packet += self.data_packet(acl_bytes)
+        packet += self.eod_packet()
+        return packet
+
+
+class BackendCommandBuilder(PacketTestUtil):
+    def build(self, job_type, buckets=None, where=None, includes=None, segment_size=None, restore_local_path=None):
+        packet = HandshakePacketBuilder().build()
+        packet += JobInfoBlockBuilder().build(job_type, where=where)
+        packet += PluginParamsBlockBuilder().build(includes=includes, segment_size=segment_size, restore_local_path=restore_local_path)
+
+        if job_type == TYPE_BACKUP:
+            packet += BackupCommandBuilder().build()
+
+        elif job_type == TYPE_ESTIMATION:
+            packet += FilesEstimationCommandBuilder().build()
+
+        elif job_type == TYPE_RESTORE:
+            packet += RestoreFileCommandBuilder().build(buckets)
+
+        else:
+            raise ValueError("Invalid job_type!")
+
+        packet += JobEndPacketBuilder().build()
+        return packet
diff --git a/bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_test_util.py b/bacula/src/plugins/fd/kubernetes-backend/tests/util/packet_test_util.py
new file mode 100644 (file)
index 0000000..3eecbc1
--- /dev/null
@@ -0,0 +1,37 @@
+# Bacula(R) - The Network Backup Solution
+#
+#   Copyright (C) 2000-2022 Kern Sibbald
+#
+#   The original author of Bacula is Kern Sibbald, with contributions
+#   from many others, a complete list can be found in the file AUTHORS.
+#
+#   You may use this file and others of this release according to the
+#   license defined in the LICENSE file, which includes the Affero General
+#   Public License, v3.0 ("AGPLv3") and some additional permissions and
+#   terms pursuant to its AGPLv3 Section 7.
+#
+#   This notice must be preserved when any source code is
+#   conveyed and/or propagated.
+#
+#   Bacula(R) is a registered trademark of Kern Sibbald.
+
+
+from baculaswift.io.packet_definitions import EOD_PACKET
+
+
+class PacketTestUtil(object):
+    def data_packet(self, data):
+        packet = ("D%s\n" % str(len(data)).zfill(6)).encode()
+        packet += data
+        return packet
+
+    def invalid_command_packet(self):
+        return self.command_packet("invalid_command")
+
+    def command_packet(self, packet_content):
+        packet_content += "\n"
+        packet_header = "C%s\n" % str(len(packet_content)).zfill(6)
+        return (packet_header + packet_content).encode()
+
+    def eod_packet(self):
+        return EOD_PACKET + b"\n"
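The framing implemented by `PacketTestUtil` can be summarized as follows (an illustrative sketch derived from the methods above, not part of the patch): command packets are `C<6-digit len>\n<payload>\n`, where the length includes the trailing newline, while data packets are `D<6-digit len>\n<payload>` with a raw byte payload and no trailing newline.

```python
# Standalone sketch of the builders above, for illustration only.
def command_packet(content):
    content += "\n"                               # length counts this newline
    header = "C%s\n" % str(len(content)).zfill(6)
    return (header + content).encode()

def data_packet(data):
    header = ("D%s\n" % str(len(data)).zfill(6)).encode()
    return header + data                          # raw bytes, no newline added

assert command_packet("Hello") == b"C000006\nHello\n"
assert data_packet(b"\x00\x01") == b"D000002\n\x00\x01"
```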
diff --git a/bacula/src/plugins/fd/kubernetes-fd.c b/bacula/src/plugins/fd/kubernetes-fd.c
new file mode 100644 (file)
index 0000000..1193777
--- /dev/null
@@ -0,0 +1,57 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2022 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+*/
+/**
+ * @file kubernetes-fd.c
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula Kubernetes Plugin with metaplugin interface.
+ * @version 2.0.5
+ * @date 2021-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved.
+ *            IP transferred to Bacula Systems according to agreement.
+ */
+
+#include "kubernetes-fd.h"
+
+/* Plugin Info definitions */
+const char *PLUGIN_LICENSE       = "Bacula AGPLv3";
+const char *PLUGIN_AUTHOR        = "Radoslaw Korzeniewski";
+const char *PLUGIN_DATE          = "April 2021";
+const char *PLUGIN_VERSION       = "2.0.5"; // TODO: should synchronize with kubernetes-fd.json
+const char *PLUGIN_DESCRIPTION   = "Bacula Enterprise Kubernetes Plugin";
+
+/* Plugin compile time variables */
+const char *PLUGINPREFIX         = "kubernetes:";
+const char *PLUGINNAME           = "kubernetes";
+const char *PLUGINNAMESPACE      = "@kubernetes";
+const bool CUSTOMNAMESPACE       = true;
+const bool CUSTOMPREVJOBNAME     = false;
+const char *PLUGINAPI            = "3";
+const char *BACKEND_CMD          = "k8s_backend";
+const int32_t CUSTOMCANCELSLEEP  = 0;
+
+checkFile_t checkFile = NULL;
+const bool CORELOCALRESTORE = false;
+const bool ACCURATEPLUGINPARAMETER = true;
+
+#ifdef DEVELOPER
+const metadataTypeMap plugin_metadata_map[] = {{"METADATA_STREAM", plugin_meta_blob}};
+#else
+const metadataTypeMap plugin_metadata_map[] = {{NULL, plugin_meta_invalid}};
+#endif
diff --git a/bacula/src/plugins/fd/kubernetes-fd.h b/bacula/src/plugins/fd/kubernetes-fd.h
new file mode 100644 (file)
index 0000000..86ce5af
--- /dev/null
@@ -0,0 +1,91 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2022 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+*/
+/**
+ * @file kubernetes-fd.h
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula Kubernetes Plugin with metaplugin interface.
+ * @version 1.3.0
+ * @date 2021-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved.
+ *            IP transferred to Bacula Systems according to agreement.
+ */
+
+#ifndef KUBERNETES_PLUGIN_FD_H
+#define KUBERNETES_PLUGIN_FD_H
+
+#include "pluginlib/pluginlib.h"
+#include "pluginlib/metaplugin.h"
+
+/*
+ * The list of restore options saved to the RestoreObject.
+ */
+struct ini_items plugin_items_dump[] =
+{
+//   name                       handler             comment                                        required default
+   {"config",                   ini_store_str,      "K8S config file",                                      0, "*None*"},
+   {"host",                     ini_store_str,      "K8S API server URL/Host",                              0, "*None*"},
+   {"token",                    ini_store_str,      "K8S Bearertoken",                                      0, "*None*"},
+//   {"username",                 ini_store_str,      "HTTP Auth username for API",                     0, "*None*"},
+//   {"password",                 ini_store_str,      "HTTP Auth password for API",                     0, "*None*"},
+   {"verify_ssl",               ini_store_bool,     "K8S API server cert verification",                     0, "True"},
+   {"ssl_ca_cert",              ini_store_str,      "Custom CA Certs file to use",                          0, "*None*"},
+   {"outputformat",             ini_store_str,      "Output format when saving to file (JSON, YAML)",       0, "RAW"},
+   {"fdaddress",                ini_store_str,      "The address for listen to incoming backup pod data",   0, "*FDAddress*"},
+   {"fdport",                   ini_store_int32,    "The port for opening socket for listen",               0, "9104"},
+   {"pluginhost",               ini_store_str,      "The endpoint address for backup pod to connect",       0, "*FDAddress*"},
+   {"pluginport",               ini_store_int32,    "The endpoint port to connect",                         0, "9104"},
+   {NULL, NULL, NULL, 0, NULL}
+};
+
+// the list of valid plugin options
+const char * valid_params[] =
+{
+   "listing",
+   "query",
+   "abort_on_error",
+   "config",
+   "incluster",
+   "host",
+   "token",
+   "verify_ssl",
+   "ssl_ca_cert",
+   "timeout",
+   "debug",
+   "namespace",
+   "ns",
+   "persistentvolume",
+   "pv",
+   "pvconfig",
+   "scconfig",
+   "pvcdata",
+   "fdaddress",
+   "fdport",
+   "pluginhost",
+   "pluginport",
+   "fdcertfile",
+   "fdkeyfile",
+   "baculaimage",
+   "imagepullpolicy",
+   "outputformat",
+   "labels",
+   NULL,
+};
+
+#endif   // KUBERNETES_PLUGIN_FD_H
index ed4f8332a411b3abfed493afee9322e06255f2b4..fecef7d5733617e0db202eec76210076b2bd5203 100644 (file)
  * @file commctx.h
  * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
  * @brief This is a Bacula plugin command context switcher template.
- * @version 1.2.0
- * @date 2020-01-05
+ * @version 1.3.0
+ * @date 2020-09-13
  *
- * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ * @copyright Copyright (c) 2021 All rights reserved.
+ *            IP transferred to Bacula Systems according to agreement.
  */
 
 #ifndef PLUGINLIB_COMMCTX_H
@@ -68,6 +69,8 @@ public:
    bool check_command(const char *command);
    void foreach_command(void (*func)(T *, void *), void *param);
    bRC foreach_command_status(bRC (*func)(T *, void *), void *param);
+
+   T * operator->() { return ctx; }
 };
 
 
index f0911fe94b3db67078e19df7ef9c79590dc74b9b..ba25a3b5f28d78c323bd85a13080513742cb7297 100644 (file)
@@ -40,7 +40,8 @@ struct testctx : public SMARTALLOC
 {
    const char * cmd;
    testctx(const char *command) : cmd(command) { referencenumber++; };
-   ~testctx() { referencenumber--; };;
+   ~testctx() { referencenumber--; };
+   bool meth() { return true; }
 };
 
 void do_something(testctx*, void*data)
@@ -66,7 +67,7 @@ bRC do_status(testctx*, void*data)
 
 int main()
 {
-   Unittests pluglib_test("commctx_test");
+   Unittests commctx_test("commctx_test");
 
    // Pmsg0(0, "Initialize tests ...\n");
 
@@ -123,5 +124,17 @@ int main()
       ok(status == bRC_Error, "do_status with NULL");
    }
 
+   {
+      COMMCTX<testctx> ctx;
+      auto testctx1 = ctx.switch_command(TEST1);
+      ok(testctx1 != nullptr, "test switch command1");
+      auto cmd1 = ctx->cmd;
+      ok(strcmp(cmd1, TEST1) == 0, "test arrow operator variable");
+      auto testctx2 = ctx.switch_command(TEST2);
+      ok(testctx2 != nullptr, "test switch command2");
+      auto cmd2 = ctx->meth();
+      ok(cmd2, "test arrow operator method");
+   }
+
    return report();
 }
diff --git a/bacula/src/plugins/fd/pluginlib/execprog.cpp b/bacula/src/plugins/fd/pluginlib/execprog.cpp
new file mode 100644 (file)
index 0000000..9184f68
--- /dev/null
@@ -0,0 +1,313 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file execprog.cpp
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin external command execution context.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#include "execprog.h"
+
+/* Plugin compile time variables */
+#define PLUGINPREFIX                "execprog:"
+
+/*
+ * Terminate the connection represented by the BPIPE object.
+ *    It shows debug and job messages when closing the connection fails,
+ *    and only when ctx is available.
+ *
+ * in:
+ *    bpContext - Bacula Plugin context required to show debug/job messages;
+ *                it may be NULL, in which case no messages are shown
+ * out:
+ *    none
+ */
+void EXECPROG::terminate(bpContext *ctx, bool raise_error)
+{
+   if (is_closed()){
+      return;
+   }
+
+   // after close_bpipe() the bpipe object is no longer available
+   tstatus = close_bpipe(bpipe);
+   if (tstatus && raise_error){
+      /* error during close */
+      berrno be;
+      DMSG(ctx, DERROR, "Error closing command. Err=%s\n", be.bstrerror(tstatus));
+      JMSG(ctx, M_ERROR, "Error closing command. Err=%s\n", be.bstrerror(tstatus));
+   }
+
+   // TODO: is it required to terminate the backend process?
+   // if (bpipe->worker_pid){
+   //    /* terminate the process command */
+   //    kill(bpipe->worker_pid, SIGTERM);
+   // }
+
+   bpipe = NULL;
+};
+
+/*
+ * Run the command with the prepared parameters.
+ */
+bool EXECPROG::execute_command(bpContext *ctx, const POOL_MEM &cmd, const POOL_MEM &args)
+{
+   return execute_command(ctx, cmd.c_str(), args.c_str());
+}
+
+/*
+ * Run the command with the prepared parameters.
+ */
+bool EXECPROG::execute_command(bpContext *ctx, const POOL_MEM &cmd)
+{
+   return execute_command(ctx, cmd.c_str());
+}
+
+/*
+ * Run the command with the prepared parameters.
+ *
+ * in:
+ *    bpContext - for Bacula debug and jobinfo messages
+ *    cmd - the command to execute
+ *    args - the prepared command arguments
+ * out:
+ *    True - when the command was executed successfully
+ *    False - when execution returned an error
+ */
+bool EXECPROG::execute_command(bpContext *ctx, const POOLMEM *cmd, const POOLMEM *args)
+{
+   POOL_MEM exe_cmd(PM_FNAME);
+
+   if (cmd == NULL){
+      /* cannot execute command NULL */
+      DMSG0(ctx, DERROR, "Logic error: Cannot execute NULL command!\n");
+      JMSG0(ctx, M_FATAL, "Logic error: Cannot execute NULL command!\n");
+      return false;
+   }
+
+   /* the format of a command line to execute is: <cmd> [<args>] */
+   Mmsg(exe_cmd, "%s %s", cmd, args);
+   DMSG(ctx, DINFO, "Executing: %s\n", exe_cmd.c_str());
+   bpipe = open_bpipe(exe_cmd.c_str(), 0, "rw");
+   if (bpipe == NULL){
+      berrno be;
+      DMSG(ctx, DERROR, "Unable to run command. Err=%s\n", be.bstrerror());
+      JMSG(ctx, M_FATAL, "Unable to run command. Err=%s\n", be.bstrerror());
+      return false;
+   }
+   DMSG(ctx, DINFO, "Command executed at PID=%d\n", get_cmd_pid());
+   tstatus = 0;
+
+   return true;
+}
+
+/*
+ * Read all output from command - until eod and save it in the out buffer.
+ *
+ * in:
+ *    bpContext - for Bacula debug jobinfo messages
+ *    out - the POOL_MEM buffer we will read data
+ * out:
+ *    -1 - on any error; the function reports it to Bacula when
+ *         ctx is not NULL
+ *    0 - when no more data to read - EOD
+ *    <n> - the size of received message
+ */
+int32_t EXECPROG::read_output(bpContext *ctx, POOL_MEM &out)
+{
+   int status;
+   int rbytes;
+   bool ndone;
+
+   if (is_closed()){
+      DMSG0(ctx, DERROR, "BPIPE to command is closed, cannot get data.\n");
+      JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE to command is closed, cannot get data.\n");
+      return -1;
+   }
+
+   /* set variables */
+   rbytes = 0;
+   ndone = true;
+   /* read all output data */
+   while (ndone){
+      status = read_data(ctx, out.addr() + rbytes, out.size() - rbytes);
+      if (status < 0){
+         /* error */
+         return -1;
+      }
+      rbytes += status;
+      if (is_eod()){
+         /* we read all data available */
+         ndone = false;
+         continue;
+      }
+      /* the out buffer is too small for all the data; grow it */
+      out.check_size(rbytes + 1024);
+   }
+
+   // terminate the output as if it were a standard C string
+   out.check_size(rbytes + 1);
+   out.c_str()[rbytes] = '\0';
+
+   return rbytes;
+}
+
+/*
+ * Reads a single data block from the command.
+ *  It reads as much data as is available on the other side and fits into
+ *  the memory buffer - buf. When EOD is encountered during reading it sets
+ *  the f_eod flag, so checking this flag is mandatory!
+ *
+ * in:
+ *    bpContext - for Bacula debug jobinfo messages
+ *    buf - a memory buffer for data
+ *    len - the length of the memory buffer - buf
+ * out:
+ *    -1 - on any error; the function reports it to Bacula when
+ *         ctx is not NULL
+ *    0 - when no more data to read - EOD
+ *    <n> - the size of received data
+ */
+int32_t EXECPROG::read_data(bpContext *ctx, POOLMEM *buf, int32_t len)
+{
+   int status;
+   int nbytes;
+   int rbytes;
+   int timeout;
+
+   if (buf == NULL || len < 1){
+      /* we have no space to read data */
+      DMSG0(ctx, DERROR, "No space to read data from tool.\n");
+      JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "No space to read data from tool.\n");
+      return -1;
+   }
+
+   if (is_closed()){
+      DMSG0(ctx, DERROR, "BPIPE to command is closed, cannot get data.\n");
+      JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE to command is closed, cannot get data.\n");
+      return -1;
+   }
+
+   /* we will read no more than len bytes, the space available in buf */
+   nbytes = len;
+   rbytes = 0;
+   /* clear flags */
+   f_eod = f_error = f_fatal = false;
+   timeout = 200;          // timeout of 200ms
+   while (nbytes){
+      status = fread(buf + rbytes, 1, nbytes, bpipe->rfd);
+      if (status == 0){
+         berrno be;
+         if (ferror(bpipe->rfd) != 0){
+            // check if it is an interrupted system call then restart
+            if (be.code() == EINTR){
+               clearerr(bpipe->rfd);
+               continue;
+            }
+            f_error = true;
+            DMSG(ctx, DERROR, "BPIPE read error: ERR=%s\n", be.bstrerror());
+            JMSG(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE read error: ERR=%s\n", be.bstrerror());
+            return -1;
+         }
+         if (feof(bpipe->rfd) != 0){
+            f_eod = true;
+            return rbytes;
+         }
+         bmicrosleep(0, 1000);   // sleep 1 ms
+         if (!timeout--){
+            /* reached timeout */
+            f_error = true;
+            DMSG0(ctx, DERROR, "BPIPE read timeout.\n");
+            JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE read timeout.\n");
+            return -1;
+         }
+      } else {
+         timeout = 200;          // reset timeout
+      }
+      nbytes -= status;
+      rbytes += status;
+   }
+   return rbytes;
+}
+
+/*
+ * Sends a raw data block to the command.
+ *
+ * in:
+ *    bpContext - for Bacula debug and jobinfo messages
+ *    buf - a message buffer containing the data to send
+ *    len - the length of the data to send
+ * out:
+ *    -1 - when any error was encountered
+ *    <n> - the number of bytes sent on success
+ */
+int32_t EXECPROG::write_data(bpContext *ctx, POOLMEM *buf, int32_t len)
+{
+   int status;
+   int nbytes;
+   int wbytes;
+   int timeout;
+
+   if (buf == NULL){
+      /* we have no data to write */
+      DMSG0(ctx, DERROR, "No data to send to command.\n");
+      JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "No data to send to command.\n");
+      return -1;
+   }
+
+   if (is_closed()){
+      DMSG0(ctx, DERROR, "BPIPE to command is closed, cannot send data.\n");
+      JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE to command is closed, cannot send data.\n");
+      return -1;
+   }
+
+   /* we will write len bytes available in the buf */
+   nbytes = len;
+   wbytes = 0;
+   /* clear flags */
+   f_eod = f_error = f_fatal = false;
+   timeout = 200;          // timeout of 200ms
+   while (nbytes){
+      status = fwrite(buf + wbytes, 1, nbytes, bpipe->wfd);
+      if (status == 0){
+         berrno be;
+         if (ferror(bpipe->wfd) != 0){
+            f_error = true;
+            DMSG(ctx, DERROR, "BPIPE write error: ERR=%s\n", be.bstrerror());
+            JMSG(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE write error: ERR=%s\n", be.bstrerror());
+            return -1;
+         }
+         bmicrosleep(0, 1000);   // sleep 1 ms
+         if (!timeout--){
+            /* reached timeout */
+            f_error = true;
+            DMSG0(ctx, DERROR, "BPIPE write timeout.\n");
+            JMSG0(ctx, is_fatal() ? M_FATAL : M_ERROR, "BPIPE write timeout.\n");
+            return -1;
+         }
+      } else {
+         timeout = 200;          // reset timeout
+      }
+      nbytes -= status;
+      wbytes += status;
+   }
+   return wbytes;
+}
diff --git a/bacula/src/plugins/fd/pluginlib/execprog.h b/bacula/src/plugins/fd/pluginlib/execprog.h
new file mode 100644 (file)
index 0000000..47d4720
--- /dev/null
@@ -0,0 +1,191 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file execprog.h
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin external command execution context.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#ifndef PLUGINLIB_EXECPROG_H
+#define PLUGINLIB_EXECPROG_H
+
+#include "pluginlib/pluginlib.h"
+
+
+class EXECPROG : public SMARTALLOC
+{
+private:
+   BPIPE *bpipe;              // this is our bpipe to communicate with external tools
+   int rfd;                   // backend `stdout` to plugin file descriptor
+   int wfd;                   // backend `stdin` to plugin file descriptor
+   int efd;                   // backend `stderr` to plugin file descriptor
+   int maxfd;                 // max file descriptors from bpipe channels
+   POOL_MEM errmsg;           // message buffer for error string
+   int extpipe;               // set when data blast is performed using external pipe/file
+   POOL_MEM extpipename;      // name of the external pipe/file for restore
+   bool f_eod;                // the backend signaled EOD
+   bool f_error;              // the backend signaled an error
+   bool f_fatal;              // the backend signaled a fatal error
+   bool f_cont;               // when we are reading next part of data packet
+   bool abort_on_error;       // abort on error flag
+   int32_t remaininglen;      // the number of bytes to read when `f_cont` is true
+   struct timeval _timeout;   // a timeout when waiting for data to read from backend
+   int tstatus;               // termination status returned by close_bpipe()
+
+public:
+   EXECPROG() :
+      bpipe(NULL),
+      rfd(0),
+      wfd(0),
+      efd(0),
+      maxfd(0),
+      errmsg(PM_MESSAGE),
+      extpipe(-1),
+      extpipename(PM_FNAME),
+      f_eod(false),
+      f_error(false),
+      f_fatal(false),
+      f_cont(false),
+      abort_on_error(false),
+      remaininglen(0),
+#if __cplusplus >= 201103L
+      _timeout{0},
+#endif
+      tstatus(0)
+   {
+#if __cplusplus < 201103L
+      _timeout.tv_sec = 0;
+      _timeout.tv_usec = 0;
+#endif
+   }
+#if __cplusplus >= 201103L
+   EXECPROG(EXECPROG &) = delete;
+   EXECPROG(EXECPROG &&) = delete;
+   ~EXECPROG() = default;
+#else
+   ~EXECPROG() {};
+#endif
+
+   /**
+    * @brief Checks if connection is open and we can use a bpipe object for communication.
+    *
+    * @return true if connection is open and the bpipe object can be used
+    * @return false if connection is closed
+    */
+   inline bool is_open() { return bpipe != NULL; };
+
+   /**
+    * @brief Checks if connection is closed and we can't use a bpipe object for communication.
+    *
+    * @return true if connection is closed and we can't use bpipe object
+    * @return false if connection is available
+    */
+   inline bool is_closed() { return bpipe == NULL; };
+
+   /**
+    * @brief Checks if the backend sent us an error; a backend error is flagged on f_error.
+    *
+    * @return true when an error (or fatal error) was flagged
+    * @return false otherwise
+    */
+   inline bool is_error() { return f_error || f_fatal; };
+
+   /**
+    * @brief Checks if the backend sent us a fatal error; flagged on f_fatal.
+    *
+    * @return true when a fatal error (or any error with abort_on_error set) was flagged
+    * @return false otherwise
+    */
+   inline bool is_fatal() { return f_fatal || (f_error && abort_on_error); };
+
+   /**
+    * @brief Set the abort_on_error flag.
+    *
+    */
+   inline void set_abort_on_error() { abort_on_error = true; };
+
+   /**
+    * @brief Clear abort_on_error flag.
+    *
+    */
+   inline void clear_abort_on_error() { abort_on_error = false; };
+
+   /**
+    * @brief Return abort_on_error flag.
+    *
+    * @return true when the abort_on_error flag is set
+    * @return false otherwise
+    */
+   inline bool is_abort_on_error() { return abort_on_error; };
+
+   /**
+    * @brief Checks if backend signaled EOD, eod from backend is flagged on f_eod.
+    *
+    * @return true when backend signaled EOD on last packet
+    * @return false when backend did not signal EOD
+    */
+   inline bool is_eod() { return f_eod; };
+
+   /**
+    * @brief Clears the EOD-from-backend flag, f_eod.
+    *  The eod flag is set when an EOD message is received from the backend and
+    *  is not cleared until the next recvbackend() call.
+    */
+   inline void clear_eod() { f_eod = false; };
+
+   /**
+    * @brief Get the cmd pid.
+    *  Returns a backend PID if available.
+    *
+    * @return int the backend PID
+    */
+   inline int get_cmd_pid()
+   {
+      if (bpipe){
+         return bpipe->worker_pid;
+      }
+      return -1;
+   };
+
+   inline int get_terminate_status() { return tstatus; };
+
+   /* all you need is to simply execute the command first */
+   bool execute_command(bpContext *ctx, const POOLMEM *cmd, const POOLMEM *args = "");
+   bool execute_command(bpContext *ctx, const POOL_MEM &cmd, const POOL_MEM &args);
+   bool execute_command(bpContext *ctx, const POOL_MEM &cmd);
+
+   /* then just simply read or write data to it */
+   int32_t read_data(bpContext *ctx, POOLMEM *buf, int32_t len);
+   int32_t read_output(bpContext *ctx, POOL_MEM &out);
+   int32_t write_data(bpContext *ctx, POOLMEM *buf, int32_t len);
+
+   /* and finally terminate execution when finished */
+   void terminate(bpContext *ctx, bool raise_error = true);
+
+   POOLMEM *get_error(bpContext *ctx);
+
+   /* direct pipe management */
+   inline int close_wpipe() { return ::close_wpipe(bpipe); }
+};
+
+#endif   // PLUGINLIB_EXECPROG_H
diff --git a/bacula/src/plugins/fd/pluginlib/execprog_test.cpp b/bacula/src/plugins/fd/pluginlib/execprog_test.cpp
new file mode 100644 (file)
index 0000000..5bf3984
--- /dev/null
@@ -0,0 +1,58 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file execprog_test.cpp
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin external command execution context - unittest.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#include "bacula.h"
+#include "unittests.h"
+#include "execprog.h"
+
+bFuncs *bfuncs;
+bInfo *binfo;
+
+int main()
+{
+   Unittests execprog_test("execprog_test");
+   EXECPROG execprog;
+
+   nok(execprog.is_open(), "default is_open()");
+   ok(execprog.is_closed(), "default is_closed()");
+   nok(execprog.is_error(), "default is_error()");
+   nok(execprog.is_fatal(), "default is_fatal()");
+   nok(execprog.is_abort_on_error(), "default is_abort_on_error()");
+   nok(execprog.is_eod(), "default is_eod()");
+   nok(execprog.get_cmd_pid() > 0, "default get_cmd_pid()");
+
+   ok(execprog.execute_command(NULL, "ls -l /etc/passwd"), "execprog()");
+   POOL_MEM out(PM_MESSAGE);
+   int rc = execprog.read_output(NULL, out);
+   // Pmsg1(0, "out: %s\n", out.c_str());
+   ok(rc > 0, "read_output()");
+   ok(execprog.get_cmd_pid() > 0, "get_cmd_pid()");
+
+   execprog.terminate(NULL);
+   return report();
+}
diff --git a/bacula/src/plugins/fd/pluginlib/iso8601.cpp b/bacula/src/plugins/fd/pluginlib/iso8601.cpp
new file mode 100644 (file)
index 0000000..335d9dc
--- /dev/null
@@ -0,0 +1,156 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file iso8601.cpp
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin ISO8601 parsing library.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#include "iso8601.h"
+#include "lib/btime.h"
+
+
+#ifdef ISO8601_USE_REGEX
+
+// Adapted from http://delete.me.uk/2005/03/iso8601.html
+#define ISO8601_REGEX_RAW              "(?P<year>[0-9]{4})-(?P<month>[0-9]{1,2})-(?P<day>[0-9]{1,2})" \
+                                       "T(?P<hour>[0-9]{2}):(?P<minute>[0-9]{2})(:(?P<second>[0-9]{2})(\\.(?P<fraction>[0-9]+))?)?" \
+                                       "(?P<timezone>Z|[-+][0-9]{2}(:?[0-9]{2})?)?"
+
+#define ISO8601_TIMEZONE_REGEX_RAW     "(?P<prefix>[+-])(?P<hours>[0-9]{2}):?(?P<minutes>[0-9]{2})?"
+
+
+
+/**
+ * @brief Construct a new ISO8601DateTime::ISO8601DateTime object
+ *
+ */
+ISO8601DateTime::ISO8601DateTime() : status(false)
+{
+   const int options = REG_EXTENDED | REG_ICASE;
+   int rc;
+   char prbuf[500];
+
+   status = true;
+
+   rc = regcomp(&ISO8601_REGEX, ISO8601_REGEX_RAW, options);
+   if (rc != 0){
+      regerror(rc, &ISO8601_REGEX, prbuf, sizeof(prbuf));
+      DMsg0(DERROR, "Cannot initialize Bacula regex library for ISO8601_REGEX!\n");
+      DMsg1(DERROR, "regex Err=%s\n", prbuf);
+      status = false;
+   }
+
+   rc = regcomp(&TIMEZONE_REGEX, ISO8601_TIMEZONE_REGEX_RAW, options);
+   if (rc != 0){
+      regerror(rc, &TIMEZONE_REGEX, prbuf, sizeof(prbuf));
+      DMsg0(DERROR, "Cannot initialize Bacula regex library for TIMEZONE_REGEX!\n");
+      DMsg1(DERROR, "regex Err=%s\n", prbuf);
+      status = false;
+   }
+};
+#endif
+
+// #define ISO8601STRSCAN1
+// #define ISO8601STRSCAN2
+// #define ISO8601STRSCAN3
+static const char *scan_formats[] =
+{
+   "%FT%T%z",
+   "%FT%T%t%z",
+   "%FT%T%Z",
+   "%Y%m%dT%TZ",
+   "%Y%m%dT%T%z",
+   "%Y%m%dT%T%t%z",
+   "%Y-%m-%dT%TZ",
+   "%Y-%m-%dT%T%z",
+   "%Y-%m-%dT%T%t%z",
+   NULL,
+};
+
+/**
+ * @brief Parses ISO 8601 dates into utime_t values
+ *    The timezone is parsed from the date string. However it is quite common to
+ *    have dates without a timezone (not strictly correct). In this case the
+ *    default timezone specified in default_timezone is used. This is UTC by
+ *    default.
+ *
+ * @param datestring
+ * @return utime_t
+ */
+utime_t ISO8601DateTime::parse_data(const char *datestring)
+{
+   utime_t time = 0;
+
+   tzset();
+
+   if (strlen(datestring) > 0){
+      const char *fmt = scan_formats[0];
+      for (int a = 1; fmt != NULL; a++)
+      {
+#if __cplusplus > 201103L
+         struct tm tm {0};
+#else
+         struct tm tm;
+         memset(&tm, 0, sizeof(tm));
+#endif
+         char *rc = strptime(datestring, fmt, &tm);
+         if (rc != NULL && *rc == '\0'){
+            // no error scanning time
+            time = mktime(&tm) - timezone;
+            break;
+         }
+         fmt = scan_formats[a];
+      }
+   }
+
+   return time;
+
+#ifdef ISO8601_USE_REGEX
+   int rc;
+
+   rc = regexec(&ISO8601_REGEX, datestring, 0, NULL, 0);
+   if (rc == 0){
+      /* found */
+
+
+   }
+   //  m = ISO8601_REGEX.match(datestring)
+   //  if not m:
+   //      raise ParseError("Unable to parse date string %r" % datestring)
+   //  groups = m.groupdict()
+   //  tz = parse_timezone(groups["timezone"])
+   //  if groups["fraction"] is None:
+   //      groups["fraction"] = 0
+   //  else:
+   //      groups["fraction"] = int(float("0.%s" % groups["fraction"]) * 1e6)
+
+   //  try:
+   //      return datetime(int(groups["year"]), int(groups["month"]), int(groups["day"]),
+   //                      int(groups["hour"]), int(groups["minute"]), int(groups["second"]),
+   //                      int(groups["fraction"]), tz)
+   //  except Exception as e:
+   //      raise ParseError("Failed to create a valid datetime record due to: %s"
+   //                       % e)
+#endif
+};
diff --git a/bacula/src/plugins/fd/pluginlib/iso8601.h b/bacula/src/plugins/fd/pluginlib/iso8601.h
new file mode 100644 (file)
index 0000000..7905436
--- /dev/null
@@ -0,0 +1,68 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file iso8601.h
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin ISO8601 parsing library.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#ifndef PLUGINLIB_ISO8601_H
+#define PLUGINLIB_ISO8601_H
+
+#include "pluginlib.h"
+#include "lib/bregex.h"
+
+
+// do not use regex for parsing
+#undef ISO8601_USE_REGEX
+
+class ISO8601DateTime : public SMARTALLOC
+{
+private:
+#ifdef ISO8601_USE_REGEX
+   regex_t ISO8601_REGEX;
+   regex_t TIMEZONE_REGEX;
+   bool status;
+#endif
+
+public:
+#ifdef ISO8601_USE_REGEX
+   ISO8601DateTime();
+#else
+#if __cplusplus > 201103L
+   ISO8601DateTime() = default;
+   ~ISO8601DateTime() = default;
+#else
+   ISO8601DateTime() {};
+   ~ISO8601DateTime() {};
+#endif
+#endif
+
+   utime_t parse_data(const char * datestring);
+   inline utime_t parse_data(POOL_MEM &datestring) { return parse_data(datestring.c_str()); }
+#ifdef ISO8601_USE_REGEX
+   inline bool isready() { return status; };
+#endif
+};
+
+#endif   // PLUGINLIB_ISO8601_H
diff --git a/bacula/src/plugins/fd/pluginlib/iso8601_test.cpp b/bacula/src/plugins/fd/pluginlib/iso8601_test.cpp
new file mode 100644 (file)
index 0000000..9b2c375
--- /dev/null
@@ -0,0 +1,72 @@
+/*
+   Bacula(R) - The Network Backup Solution
+
+   Copyright (C) 2000-2020 Kern Sibbald
+
+   The original author of Bacula is Kern Sibbald, with contributions
+   from many others, a complete list can be found in the file AUTHORS.
+
+   You may use this file and others of this release according to the
+   license defined in the LICENSE file, which includes the Affero General
+   Public License, v3.0 ("AGPLv3") and some additional permissions and
+   terms pursuant to its AGPLv3 Section 7.
+
+   This notice must be preserved when any source code is
+   conveyed and/or propagated.
+
+   Bacula(R) is a registered trademark of Kern Sibbald.
+ */
+/**
+ * @file iso8601_test.cpp
+ * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
+ * @brief This is a Bacula plugin ISO8601 parsing library - unittest.
+ * @version 1.2.0
+ * @date 2020-01-05
+ *
+ * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ */
+
+#include "bacula.h"
+#include "unittests.h"
+#include "iso8601.h"
+
+struct testvect
+{
+   const char *teststr;
+   const utime_t test_time;
+};
+
+static testvect tests[] =
+{
+   {"20000101T23:01:02Z", 946767662},
+   {"20200514T08:35:43Z", 1589445343},
+   {"2020-05-14T08:35:43Z", 1589445343},
+   {"20200514T07:27:34Z", 1589441254},
+   {"2020-05-14T07:27:34Z", 1589441254},
+   {"2021-01-13T09:44:38Z", 1610531078},
+   // {"20210113T09:44:38 +400", 1610531078},
+   // {"2021-01-13T09:44:38+400", 1610531078},
+   {NULL, 0},
+};
+
+int main()
+{
+   Unittests iso8601_test("iso8601_test");
+
+   ISO8601DateTime dt;
+
+
+   const char *teststr = tests[0].teststr;
+   utime_t test_time = tests[0].test_time;
+   for (int a = 1; teststr != NULL; a++)
+   {
+      utime_t t = dt.parse_data(teststr);
+      char test_descr[64];
+      snprintf(test_descr, 64, "Test %d", a);
+      ok(t == test_time, test_descr);
+      teststr = tests[a].teststr;
+      test_time = tests[a].test_time;
+   }
+
+   return report();
+}
index 502ae9a6a10570de7777cf5a337bc0c2e34e32d6..499a2e985108875d8d942ca823de15d82dfa2af8 100644 (file)
@@ -23,7 +23,8 @@
  * @version 2.1.0
  * @date 2020-12-23
  *
- * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ * @copyright Copyright (c) 2021 All rights reserved.
+ *            IP transferred to Bacula Systems according to agreement.
  */
 
 #include "metaplugin.h"
@@ -112,15 +113,6 @@ static pInfo pluginInfo = {
    PLUGIN_DESCRIPTION,
 };
 
-#define _STR(x) __STR(x)
-#define __STR(x) #x
-
-#ifdef VERSIONGIT
-   #define VERSIONGIT_STR  _STR(VERSIONGIT)
-#else
-   #define VERSIONGIT_STR  "/unknown"
-#endif
-
 /*
  * Plugin called here when it is first loaded
  */
index 32cb91b366cfb4f9d8c0dce0e656d5187a2eb3a8..d83b9736e4efe303ce9b18deccbef69ea68d481d 100644 (file)
@@ -23,7 +23,8 @@
  * @version 3.0.0
  * @date 2021-08-20
  *
- * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ * @copyright Copyright (c) 2021 All rights reserved.
+ *            IP transferred to Bacula Systems according to agreement.
  */
 
 #include "pluginlib.h"
index ad0b0d5f5b0d0256d65dc8cba00bc0bbb39fdd3b..e9c500d712c940ccb3308cd1a546b3875a402491 100644 (file)
@@ -32,7 +32,7 @@
 #include "unittests.h"
 
 /* Plugin Info definitions */
-const char *PLUGIN_LICENSE       = "AGPLv3";
+const char *PLUGIN_LICENSE       = "Bacula AGPLv3";
 const char *PLUGIN_AUTHOR        = "Radoslaw Korzeniewski";
 const char *PLUGIN_DATE          = "April 2021";
 const char *PLUGIN_VERSION       = "1.0.0";
index 8e366c6ab1abf3e97d4ba1b9335e03deb8b0b2a9..f4d7944d3f638052c2d2fe7d6320ee5c10a0aa51 100644 (file)
@@ -23,7 +23,9 @@
  * @version 2.2.0
  * @date 2021-04-26
  *
- * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ * Common definitions and utility functions for Inteos plugins.
+ * Functions defines a common framework used in our utilities and plugins.
+ * Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
  */
 
 #include "pluginlib.h"
@@ -420,6 +422,33 @@ bool render_param(bool &param, const char *pname, const char *name, const bool v
    return false;
 }
 
+/**
+ * @brief Renders a parameter as a "key=value" string into a prepared buffer.
+ *
+ * @param param the place to render to
+ * @param handler the handler determines the value type
+ * @param key the "key" name
+ * @param val the "value" to render
+ * @return true if parameter rendering ok
+ * @return false on rendering error
+ */
+bool render_param(POOL_MEM &param, INI_ITEM_HANDLER *handler, char *key, item_value val)
+{
+   if (handler == ini_store_str){
+      Mmsg(param, "%s=%s\n", key, val.strval);
+   } else
+   if (handler == ini_store_int64){
+      Mmsg(param, "%s=%lld\n", key, val.int64val);
+   } else
+   if (handler == ini_store_bool){
+      Mmsg(param, "%s=%d\n", key, val.boolval ? 1 : 0);
+   } else {
+      DMsg1(DERROR, "Unsupported parameter handler for: %s\n", key);
+      return false;
+   }
+   return true;
+}
+
 /**
  * @brief Set the up param value
  *
@@ -544,8 +573,9 @@ bool parse_param(int &param, const char *pname, const char *name, const char *va
 
    if (value && bstrcasecmp(name, pname)){
       /* convert str to integer */
-      param = strtol(value, NULL, 10);
-      if (param == LONG_MIN || param == LONG_MAX){
+      long outparam = strtol(value, NULL, 10);
+
+      if (outparam == LONG_MIN || outparam == LONG_MAX){
          // error in conversion?
          if (errno == ERANGE){
             // yes, error
@@ -555,6 +585,7 @@ bool parse_param(int &param, const char *pname, const char *name, const char *va
             return false;
          }
       }
+      param = outparam;
       DMsg2(DINFO, "%s parameter: %d\n", name, param);
 
       return true;
index 6136b3537895ef81d6efbad46cda280f776429c4..45b1afb95cc44085bae93b78c2ed5a123cb4edbc 100644 (file)
@@ -23,7 +23,9 @@
  * @version 2.2.0
  * @date 2021-04-26
  *
- * @copyright Copyright (c) 2021 All rights reserved. IP transferred to Bacula Systems according to agreement.
+ * Common definitions and utility functions for Inteos plugins.
+ * These functions define a common framework used in our utilities and plugins.
+ * Author: Radosław Korzeniewski, radekk@inteos.pl, Inteos Sp. z o.o.
  */
 
 #ifndef _PLUGINLIB_H_
@@ -34,6 +36,7 @@
 #include <ctype.h>
 
 #include "bacula.h"
+#include "lib/ini.h"
 #include "fd_plugins.h"
 
 /* Pointers to Bacula functions used in plugins */
@@ -49,6 +52,15 @@ extern const char *PLUGINNAME;
 #define PLUGMODULE   "PluginLib::"
 #endif
 
+#define _STR(x) __STR(x)
+#define __STR(x) #x
+
+#ifdef VERSIONGIT
+   #define VERSIONGIT_STR  _STR(VERSIONGIT)
+#else
+   #define VERSIONGIT_STR  "/unknown"
+#endif
+
 /* size of different string or query buffers */
 #define BUFLEN       4096
 #define BIGBUFLEN    65536
@@ -210,6 +222,7 @@ inline bool islocalpath(const char *path)
 bool render_param(POOLMEM **param, const char *pname, const char *fmt, const char *name, const char *value);
 bool render_param(POOLMEM **param, const char *pname, const char *fmt, const char *name, const int value);
 bool render_param(bool &param, const char *pname, const char *name, const bool value);
+bool render_param(POOL_MEM &param, INI_ITEM_HANDLER *handler, char *key, item_value val);
 
 bool parse_param(bool &param, const char *pname, const char *name, const char *value);
 bool parse_param(int &param, const char *pname, const char *name, const char *value, bool *err = NULL);
index 1c133fd677c03f348128c2cf949a9f865616c843..b4de2a60462f0aea8e1686817a242e8cb169c738 100644 (file)
@@ -23,7 +23,6 @@
  * @version 1.1.0
  * @date 2020-12-23
  *
- * @copyright Copyright (c) 2020 All rights reserved. IP transferred to Bacula Systems according to agreement.
  */
 
 #include "pluginlib.h"
index f95a91926dfc00612507653fa45541ce808df3d8..04881cee9327bee4da023b19a3f64d4caf70a803 100644 (file)
    conveyed and/or propagated.
 
    Bacula(R) is a registered trademark of Kern Sibbald.
- */
+*/
 /**
  * @file ptcomm.cpp
  * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
- * @brief This is a process communication lowlevel library for Bacula plugin.
+ * @brief This is a Bacula plugin library for interfacing with the Metaplugin backend.
  * @version 2.0.0
  * @date 2020-11-20
  *
@@ -209,7 +209,7 @@ bool PTCOMM::sendbackend_data(bpContext *ctx, const char *buf, int32_t nbytes)
    _timeout.tv_sec = PTCOMM_DEFAULT_TIMEOUT;
    _timeout.tv_usec = 0;
 
-   while (nbytes)
+   while (nbytes > 0)
    {
       fd_set rfds;
       fd_set wfds;
@@ -610,6 +610,7 @@ bool PTCOMM::sendbackend(bpContext *ctx, char cmd, const POOLMEM *buf, int32_t l
       } else {
          // we will send header only
          header = &myheader;
+         _single_senddata = false;
       }
    } else {
       header = &myheader;
@@ -722,11 +723,11 @@ int32_t PTCOMM::read_any(bpContext *ctx, char *cmd, POOL_MEM &buf)
 int32_t PTCOMM::read_data(bpContext *ctx, POOL_MEM &buf)
 {
    int32_t status;
-   char cmd = 'D';
 
    if (extpipe > 0) {
       status = read(extpipe, buf.c_str(), buf.size());
    } else {
+      char cmd = 'D';
       status = recvbackend(ctx, &cmd, buf, false);
    }
 
index b23808465ea94801158cbcdd6ee531ce9e31a4b6..113954190774448424759a34c30d1382581a9b84 100644 (file)
@@ -134,7 +134,7 @@ public:
    bool write_command(bpContext *ctx, const char *buf, bool _single_senddata = false);
 
    bRC send_data(bpContext *ctx, const char *buf, int32_t len, bool _single_senddata = false);
-   bRC send_data(bpContext *ctx, POOL_MEM &buf, int32_t len) { return send_data(ctx, buf.addr(), true); }
+   bRC send_data(bpContext *ctx, POOL_MEM &buf, int32_t len) { return send_data(ctx, buf.addr(), len, true); }
    bRC recv_data(bpContext *ctx, POOL_MEM &buf, int32_t *recv_len=NULL);
 
    /**
index 35bf777d6eceb38eb6ec4852aaf9616dca2cfe96..0b11bc6adb163ae1b5378a89e2dd8234d2b96b75 100644 (file)
@@ -17,7 +17,7 @@
    Bacula(R) is a registered trademark of Kern Sibbald.
 */
 /**
- * @file smartalist.h
+ * @file commctx.h
  * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
  * @brief This is a simple smart array list (alist) resource guard conceptually based on C++11 - RAII.
  * @version 1.1.0
index c487634007fddc8255425334cf1e6f9a470db00f..19d3f20755692b611f8f96a372d2b2cbacfd5075 100644 (file)
@@ -17,7 +17,7 @@
    Bacula(R) is a registered trademark of Kern Sibbald.
 */
 /**
- * @file smartptr.h
+ * @file commctx.h
  * @author Radosław Korzeniewski (radoslaw@korzeniewski.net)
  * @brief This is a simple smart pointer guard conceptually based on C++11 smart pointers - unique_ptr.
  * @version 1.2.0
index 7f4c6105d714c0084c8e14e02eacbd35b4b62bf0..1ecbc387eb7044e061ae771e5c3288f468a09827 100644 (file)
@@ -23,7 +23,6 @@
  * @version 2.1.1
  * @date 2021-03-10
  *
- * @copyright Copyright (c) 2020 All rights reserved. IP transferred to Bacula Systems according to agreement.
  */
 
 #include <stdio.h>
@@ -149,13 +148,13 @@ int read_plugin(char * buf)
       LOG(buflog);
       len = 0;
       while (len < size) {
-         int nbytes;
-         ioctl(STDIN_FILENO, FIONREAD, &nbytes);
-         snprintf(buflog, BUFLEN, ">> FIONREAD:%i", nbytes);
+         int32_t nbytes = 0;
+         int rc = ioctl(STDIN_FILENO, FIONREAD, &nbytes);
+         snprintf(buflog, BUFLEN, ">> FIONREAD:%d:%d", rc, (int)nbytes);
          LOG(buflog);
          if (nbytes < size){
-            ioctl(STDIN_FILENO, FIONREAD, &nbytes);
-            snprintf(buflog, BUFLEN, ">> Second FIONREAD:%i", nbytes);
+            rc = ioctl(STDIN_FILENO, FIONREAD, &nbytes);
+            snprintf(buflog, BUFLEN, ">> Second FIONREAD:%d:%d", rc, (int)nbytes);
             LOG(buflog);
          }
          size_t bufread = size - len > BIGBUFLEN ? BIGBUFLEN : size - len;
@@ -1512,9 +1511,11 @@ int main(int argc, char** argv) {
    }
    //sleep(30);
 
+#ifdef F_GETPIPE_SZ
    int pipesize = fcntl(STDIN_FILENO, F_GETPIPE_SZ);
    snprintf(buflog, BUFLEN, "#> F_GETPIPE_SZ:%i", pipesize);
    LOG(buflog);
+#endif
 
    /* handshake (1) */
    len = read_plugin(buf);
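The final hunk wraps the `F_GETPIPE_SZ` probe in `#ifdef` because that `fcntl` command is Linux-specific; on other platforms the diagnostic simply compiles out. A sketch of the same guard, factored into a hypothetical `pipe_capacity` helper:

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>

// Report the kernel pipe buffer size where the platform supports it.
// F_GETPIPE_SZ is a Linux extension, hence the compile-time guard;
// elsewhere we return -1 rather than failing the build.
long pipe_capacity(int fd)
{
#ifdef F_GETPIPE_SZ
    return fcntl(fd, F_GETPIPE_SZ);
#else
    (void)fd;
    return -1;  // not supported on this platform
#endif
}
```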