-What: /sys/kernel/debug/qat_<device>_<BDF>/qat/fw_counters
+What: /sys/kernel/debug/qat_<device>_<BDF>/fw_counters
Date: November 2023
KernelVersion: 6.6
Contact: qat-linux@intel.com
The driver does not monitor for Heartbeat. It is left for a user
to poll the status periodically.
+
+What: /sys/kernel/debug/qat_<device>_<BDF>/pm_status
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (RO) Read returns power management information specific to the
+ QAT device.
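+
+ Example usage::
+
+ # cat /sys/kernel/debug/qat_<device>_<BDF>/pm_status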
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/kernel/debug/qat_<device>_<BDF>/cnv_errors
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (RO) Read returns, for each Acceleration Engine (AE), the number
+ of errors and the type of the last error detected by the device
+ when performing verified compression.
+ Reported counters::
+
+ <N>: Number of Compress and Verify (CnV) errors and type
+ of the last CnV error detected by Acceleration
+ Engine N.
services
* asym;sym: identical to sym;asym
* dc: the device is configured for running compression services
+ * dcc: identical to dc but enables the dc chaining feature,
+ hash then compression. If this is not required choose dc
* sym: the device is configured for running symmetric crypto
services
* asym: the device is configured for running asymmetric crypto
0
This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat/rp2srv
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) This attribute provides a way for a user to query a
+ specific ring pair for the type of service that it is currently
+ configured for.
+
+ When written to, the value is cached and used to perform the
+ read operation. Allowed values are in the range 0 to N-1, where
+ N is the max number of ring pairs supported by a device. This
+ can be queried using the attribute qat/num_rps.
+
+ A read returns the service associated to the ring pair queried.
+
+ The values are:
+
+ * dc: the ring pair is configured for running compression services
+ * sym: the ring pair is configured for running symmetric crypto
+ services
+ * asym: the ring pair is configured for running asymmetric crypto
+ services
+
+ Example usage::
+
+ # echo 1 > /sys/bus/pci/devices/<BDF>/qat/rp2srv
+ # cat /sys/bus/pci/devices/<BDF>/qat/rp2srv
+ sym
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat/num_rps
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RO) Returns the number of ring pairs that a single device has.
+
+ Example usage::
+
+ # cat /sys/bus/pci/devices/<BDF>/qat/num_rps
+ 64
+
+ This attribute is only available for qat_4xxx devices.
--- /dev/null
+What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (RO) Reports the number of correctable errors detected by the device.
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (RO) Reports the number of non-fatal errors detected by the device.
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (RO) Reports the number of fatal errors detected by the device.
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description: (WO) Writing to this file resets all error counters of a device.
+
+ The following example shows how to reset the counters::
+
+ # echo 1 > /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
+ # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
+ 0
+ # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
+ 0
+ # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
+ 0
+
+ This attribute is only available for qat_4xxx devices.
--- /dev/null
+What: /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (WO) This attribute is used to perform an operation on an SLA.
+ The supported operations are: add, update, rm, rm_all, and get.
+
+ Input values must be filled through the associated attribute in
+ this group before a write to this file.
+ If the operation completes successfully, the associated
+ attributes will be updated.
+ The associated attributes are: cir, pir, srv, rp, and id.
+
+ Supported operations:
+
+ * add: Creates a new SLA with the inputs provided by the user.
+ * Inputs: cir, pir, srv, and rp
+ * Output: id
+
+ * get: Returns the configuration of the SLA specified in the id attribute
+ * Inputs: id
+ * Outputs: cir, pir, srv, and rp
+
+ * update: Updates the SLA with new values set in the following attributes
+ * Inputs: id, cir, and pir
+
+ * rm: Removes the SLA specified in the id attribute.
+ * Inputs: id
+
+ * rm_all: Removes all the configured SLAs.
+ * Inputs: None
+
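+ Example usage (the rate, ring pair and id values are illustrative)::
+
+ ## Add an SLA for the dc service
+ # echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp
+ # echo "dc" > /sys/bus/pci/devices/<BDF>/qat_rl/srv
+ # echo 500 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
+ # echo 750 > /sys/bus/pci/devices/<BDF>/qat_rl/pir
+ # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/id
+ 4
+
+ ## Remove all configured SLAs
+ # echo "rm_all" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+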
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/rp
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) When read, reports the ring pairs currently assigned to the
+ queried SLA.
+ When written to, configures the ring pairs associated with a new
+ SLA.
+
+ The value is a 64-bit bit mask and is written/displayed in hex.
+ Each bit of this mask represents a single ring pair: the least
+ significant bit corresponds to ring pair id 0, the next bit to
+ ring pair id 1, and so on (e.g. the value 0x5 selects ring
+ pairs 0 and 2).
+
+ Selected ring pairs must be assigned to a single service,
+ i.e. the one provided with the srv attribute. The service
+ assigned to a certain ring pair can be checked by querying
+ the attribute qat/rp2srv.
+
+ The maximum number of ring pairs is 4 per SLA.
+
+ Applicability in sla_op:
+
+ * WRITE: add operation
+ * READ: get operation
+
+ Example usage::
+
+ ## Read
+ # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
+ 0x5
+
+ ## Write
+ # echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/id
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) If written to, the value is used to retrieve a particular
+ SLA and operate on it.
+ This is valid only for the following operations: update, rm,
+ and get.
+ A read of this attribute is only guaranteed to have correct data
+ after creation of an SLA.
+
+ Applicability in sla_op:
+
+ * WRITE: rm and update operations
+ * READ: add and get operations
+
+ Example usage::
+
+ ## Read
+ ## Set attributes e.g. cir, pir, srv, etc
+ # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/id
+ 4
+
+ ## Write
+ # echo 7 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+ # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
+ 0x5 ## ring pair ID 0 and ring pair ID 2
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/cir
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) Committed information rate (CIR). Rate guaranteed to be
+ achieved by a particular SLA. The value is expressed in
+ permille scale, i.e. 1000 refers to the maximum device
+ throughput for a selected service.
+
+ After sending a "get" to sla_op, this will be populated with the
+ CIR for that queried SLA.
+ Write to this file before sending an "add/update" sla_op, to set
+ the SLA to the specified value.
+
+ Applicability in sla_op:
+
+ * WRITE: add and update operations
+ * READ: get operation
+
+ Example usage::
+
+ ## Write
+ # echo 500 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
+ # echo "add" /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+
+ ## Read
+ # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+ # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/cir
+ 500
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/pir
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) Peak information rate (PIR). The maximum rate that can be
+ achieved by that particular SLA. An SLA can reach a value
+ between CIR and PIR when the device is not fully utilized by
+ requests from other users (assigned to different SLAs).
+
+ After sending a "get" to sla_op, this will be populated with the
+ PIR for that queried SLA.
+ Write to this file before sending an "add/update" sla_op, to set
+ the SLA to the specified value.
+
+ Applicability in sla_op:
+
+ * WRITE: add and update operations
+ * READ: get operation
+
+ Example usage::
+
+ ## Write
+ # echo 750 > /sys/bus/pci/devices/<BDF>/qat_rl/pir
+ # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+
+ ## Read
+ # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+ # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/pir
+ 750
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/srv
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) Service (SRV). Represents the service (sym, asym, dc)
+ associated to an SLA.
+ Can be written to or queried to set/show the SRV type for an SLA.
+ The SRV attribute is used to specify the SRV type before adding
+ an SLA. After an SLA is configured, reports the service
+ associated to that SLA.
+
+ Applicability in sla_op:
+
+ * WRITE: add and update operations
+ * READ: get operation
+
+ Example usage::
+
+ ## Write
+ # echo "dc" > /sys/bus/pci/devices/<BDF>/qat_rl/srv
+ # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/id
+ 4
+
+ ## Read
+ # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+ # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/srv
+ dc
+
+ This attribute is only available for qat_4xxx devices.
+
+What: /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+Date: January 2024
+KernelVersion: 6.7
+Contact: qat-linux@intel.com
+Description:
+ (RW) Returns the remaining capability for a particular
+ service/SLA. This is the remaining value that a new SLA can be
+ set to, or by which an existing SLA can be increased.
+ The service to query is selected by writing its name (sym, asym
+ or dc) to this attribute before reading.
+
+ Example usage::
+
+ # echo "asym" > /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+ 250
+ # echo 250 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
+ # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+ # cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+ 0
+
+ This attribute is only available for qat_4xxx devices.
This facility uses X.509 ITU-T standard certificates to encode the public keys
involved. The signatures are not themselves encoded in any industrial standard
-type. The facility currently only supports the RSA public key encryption
-standard (though it is pluggable and permits others to be used). The possible
-hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
-SHA-512 (the algorithm is selected by data in the signature).
+type. The built-in facility currently only supports the RSA and NIST P-384 ECDSA
+public key signing standards (though it is pluggable and permits others to be
+used). The possible hash algorithms that can be used are SHA-2 and SHA-3 of
+sizes 256, 384, and 512 (the algorithm is selected by data in the signature).
==========================
sign the modules with:
=============================== ==========================================
- ``CONFIG_MODULE_SIG_SHA1`` :menuselection:`Sign modules with SHA-1`
- ``CONFIG_MODULE_SIG_SHA224`` :menuselection:`Sign modules with SHA-224`
``CONFIG_MODULE_SIG_SHA256`` :menuselection:`Sign modules with SHA-256`
``CONFIG_MODULE_SIG_SHA384`` :menuselection:`Sign modules with SHA-384`
``CONFIG_MODULE_SIG_SHA512`` :menuselection:`Sign modules with SHA-512`
+ ``CONFIG_MODULE_SIG_SHA3_256`` :menuselection:`Sign modules with SHA3-256`
+ ``CONFIG_MODULE_SIG_SHA3_384`` :menuselection:`Sign modules with SHA3-384`
+ ``CONFIG_MODULE_SIG_SHA3_512`` :menuselection:`Sign modules with SHA3-512`
=============================== ==========================================
The algorithm selected here will also be built into the kernel (rather
file (which is also generated if it does not already exist).
+One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``) and ECDSA
+(``MODULE_SIG_KEY_TYPE_ECDSA``) to generate either an RSA 4k or a NIST
+P-384 keypair.
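+
+As an illustration (a sketch, not the only way, and assuming a .config already
+exists), the hash and key type can be selected from the command line with
+``scripts/config`` and the result resolved with ``make olddefconfig``::
+
+    $ scripts/config --enable MODULE_SIG --enable MODULE_SIG_SHA384 \
+          --enable MODULE_SIG_KEY_TYPE_ECDSA
+    $ make olddefconfig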
+
It is strongly recommended that you provide your own x509.genkey file.
Most notably, in the x509.genkey file, the req_distinguished_name section
Some of the drivers will want to use the Generic ScatterWalk in case the
implementation needs to be fed separate chunks of the scatterlist which
-contains the input data. The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.
$id: http://devicetree.org/schemas/crypto/fsl-imx-sahara.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
-title: Freescale SAHARA Cryptographic Accelerator included in some i.MX chips
+title: Freescale SAHARA Cryptographic Accelerator
maintainers:
- Steffen Trumtrar <s.trumtrar@pengutronix.de>
maxItems: 1
interrupts:
- maxItems: 1
+ items:
+ - description: SAHARA Interrupt for Host 0
+ - description: SAHARA Interrupt for Host 1
+ minItems: 1
+
+ clocks:
+ items:
+ - description: Sahara IPG clock
+ - description: Sahara AHB clock
+
+ clock-names:
+ items:
+ - const: ipg
+ - const: ahb
required:
- compatible
- reg
- interrupts
+ - clocks
+ - clock-names
+
+allOf:
+ - if:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - fsl,imx53-sahara
+ then:
+ properties:
+ interrupts:
+ minItems: 2
+ maxItems: 2
+ else:
+ properties:
+ interrupts:
+ maxItems: 1
additionalProperties: false
examples:
- |
+ #include <dt-bindings/clock/imx27-clock.h>
+
crypto@10025000 {
compatible = "fsl,imx27-sahara";
- reg = < 0x10025000 0x800>;
+ reg = <0x10025000 0x800>;
interrupts = <75>;
+ clocks = <&clks IMX27_CLK_SAHARA_IPG_GATE>,
+ <&clks IMX27_CLK_SAHARA_AHB_GATE>;
+ clock-names = "ipg", "ahb";
};
compatible:
items:
- enum:
+ - qcom,sa8775p-inline-crypto-engine
- qcom,sm8450-inline-crypto-engine
- qcom,sm8550-inline-crypto-engine
- const: qcom,inline-crypto-engine
properties:
compatible:
- enum:
- - qcom,prng # 8916 etc.
- - qcom,prng-ee # 8996 and later using EE
+ oneOf:
+ - enum:
+ - qcom,prng # 8916 etc.
+ - qcom,prng-ee # 8996 and later using EE
+ - items:
+ - enum:
+ - qcom,sa8775p-trng
+ - qcom,sc7280-trng
+ - qcom,sm8450-trng
+ - qcom,sm8550-trng
+ - const: qcom,trng
reg:
maxItems: 1
required:
- compatible
- reg
- - clocks
- - clock-names
+
+allOf:
+ - if:
+ not:
+ properties:
+ compatible:
+ contains:
+ const: qcom,trng
+ then:
+ required:
+ - clocks
+ - clock-names
additionalProperties: false
compatible:
enum:
- amlogic,meson-rng
+ - amlogic,meson-s4-rng
reg:
maxItems: 1
properties:
compatible:
- const: st,stm32-rng
+ enum:
+ - st,stm32-rng
+ - st,stm32mp13-rng
reg:
maxItems: 1
type: boolean
description: If set enable the clock detection management
+ st,rng-lock-conf:
+ type: boolean
+ description: If set, the RNG configuration in RNG_CR, RNG_HTCR and
+ RNG_NSCR will be locked.
+
required:
- compatible
- reg
- clocks
+allOf:
+ - if:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - st,stm32-rng
+ then:
+ properties:
+ st,rng-lock-conf: false
+
additionalProperties: false
examples:
F: include/linux/ccp.h
AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER - SEV SUPPORT
-M: Brijesh Singh <brijesh.singh@amd.com>
+M: Ashish Kalra <ashish.kalra@amd.com>
M: Tom Lendacky <thomas.lendacky@amd.com>
L: linux-crypto@vger.kernel.org
S: Supported
return 0;
}
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen, u8 *out)
+{
+ return crypto_nhpoly1305_init(desc) ?:
+ nhpoly1305_neon_update(desc, src, srclen) ?:
+ crypto_nhpoly1305_final(desc, out);
+}
+
static struct shash_alg nhpoly1305_alg = {
.base.cra_name = "nhpoly1305",
.base.cra_driver_name = "nhpoly1305-neon",
.init = crypto_nhpoly1305_init,
.update = nhpoly1305_neon_update,
.final = crypto_nhpoly1305_final,
+ .digest = nhpoly1305_neon_digest,
.setkey = crypto_nhpoly1305_setkey,
.descsize = sizeof(struct nhpoly1305_state),
};
return 0;
}
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen, u8 *out)
+{
+ return crypto_nhpoly1305_init(desc) ?:
+ nhpoly1305_neon_update(desc, src, srclen) ?:
+ crypto_nhpoly1305_final(desc, out);
+}
+
static struct shash_alg nhpoly1305_alg = {
.base.cra_name = "nhpoly1305",
.base.cra_driver_name = "nhpoly1305-neon",
.init = crypto_nhpoly1305_init,
.update = nhpoly1305_neon_update,
.final = crypto_nhpoly1305_final,
+ .digest = nhpoly1305_neon_digest,
.setkey = crypto_nhpoly1305_setkey,
.descsize = sizeof(struct nhpoly1305_state),
};
.endm
/*
- * int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
- * int blocks)
+ * int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+ * int blocks)
*/
-SYM_FUNC_START(sha1_ce_transform)
+SYM_FUNC_START(__sha1_ce_transform)
/* load round constants */
loadrc k0.4s, 0x5a827999, w6
loadrc k1.4s, 0x6ed9eba1, w6
str dgb, [x0, #16]
mov w0, w2
ret
-SYM_FUNC_END(sha1_ce_transform)
+SYM_FUNC_END(__sha1_ce_transform)
extern const u32 sha1_ce_offsetof_count;
extern const u32 sha1_ce_offsetof_finalize;
-asmlinkage int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
- int blocks);
+asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+ int blocks);
-static void __sha1_ce_transform(struct sha1_state *sst, u8 const *src,
- int blocks)
+static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
+ int blocks)
{
while (blocks) {
int rem;
kernel_neon_begin();
- rem = sha1_ce_transform(container_of(sst, struct sha1_ce_state,
- sst), src, blocks);
+ rem = __sha1_ce_transform(container_of(sst,
+ struct sha1_ce_state,
+ sst), src, blocks);
kernel_neon_end();
src += (blocks - rem) * SHA1_BLOCK_SIZE;
blocks = rem;
return crypto_sha1_update(desc, data, len);
sctx->finalize = 0;
- sha1_base_do_update(desc, data, len, __sha1_ce_transform);
+ sha1_base_do_update(desc, data, len, sha1_ce_transform);
return 0;
}
*/
sctx->finalize = finalize;
- sha1_base_do_update(desc, data, len, __sha1_ce_transform);
+ sha1_base_do_update(desc, data, len, sha1_ce_transform);
if (!finalize)
- sha1_base_do_finalize(desc, __sha1_ce_transform);
+ sha1_base_do_finalize(desc, sha1_ce_transform);
return sha1_base_finish(desc, out);
}
return crypto_sha1_finup(desc, NULL, 0, out);
sctx->finalize = 0;
- sha1_base_do_finalize(desc, __sha1_ce_transform);
+ sha1_base_do_finalize(desc, sha1_ce_transform);
return sha1_base_finish(desc, out);
}
.word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
/*
- * void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
- * int blocks)
+ * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+ * int blocks)
*/
.text
-SYM_FUNC_START(sha2_ce_transform)
+SYM_FUNC_START(__sha256_ce_transform)
/* load round constants */
adr_l x8, .Lsha2_rcon
ld1 { v0.4s- v3.4s}, [x8], #64
3: st1 {dgav.4s, dgbv.4s}, [x0]
mov w0, w2
ret
-SYM_FUNC_END(sha2_ce_transform)
+SYM_FUNC_END(__sha256_ce_transform)
extern const u32 sha256_ce_offsetof_count;
extern const u32 sha256_ce_offsetof_finalize;
-asmlinkage int sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
- int blocks);
+asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+ int blocks);
-static void __sha2_ce_transform(struct sha256_state *sst, u8 const *src,
+static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
int blocks)
{
while (blocks) {
int rem;
kernel_neon_begin();
- rem = sha2_ce_transform(container_of(sst, struct sha256_ce_state,
- sst), src, blocks);
+ rem = __sha256_ce_transform(container_of(sst,
+ struct sha256_ce_state,
+ sst), src, blocks);
kernel_neon_end();
src += (blocks - rem) * SHA256_BLOCK_SIZE;
blocks = rem;
asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
-static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
- int blocks)
+static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
+ int blocks)
{
sha256_block_data_order(sst->state, src, blocks);
}
if (!crypto_simd_usable())
return sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
+ sha256_arm64_transform);
sctx->finalize = 0;
- sha256_base_do_update(desc, data, len, __sha2_ce_transform);
+ sha256_base_do_update(desc, data, len, sha256_ce_transform);
return 0;
}
if (!crypto_simd_usable()) {
if (len)
sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
- sha256_base_do_finalize(desc, __sha256_block_data_order);
+ sha256_arm64_transform);
+ sha256_base_do_finalize(desc, sha256_arm64_transform);
return sha256_base_finish(desc, out);
}
*/
sctx->finalize = finalize;
- sha256_base_do_update(desc, data, len, __sha2_ce_transform);
+ sha256_base_do_update(desc, data, len, sha256_ce_transform);
if (!finalize)
- sha256_base_do_finalize(desc, __sha2_ce_transform);
+ sha256_base_do_finalize(desc, sha256_ce_transform);
return sha256_base_finish(desc, out);
}
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
if (!crypto_simd_usable()) {
- sha256_base_do_finalize(desc, __sha256_block_data_order);
+ sha256_base_do_finalize(desc, sha256_arm64_transform);
return sha256_base_finish(desc, out);
}
sctx->finalize = 0;
- sha256_base_do_finalize(desc, __sha2_ce_transform);
+ sha256_base_do_finalize(desc, sha256_ce_transform);
return sha256_base_finish(desc, out);
}
+static int sha256_ce_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ sha256_base_init(desc);
+ return sha256_ce_finup(desc, data, len, out);
+}
+
static int sha256_ce_export(struct shash_desc *desc, void *out)
{
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
.update = sha256_ce_update,
.final = sha256_ce_final,
.finup = sha256_ce_finup,
+ .digest = sha256_ce_digest,
.export = sha256_ce_export,
.import = sha256_ce_import,
.descsize = sizeof(struct sha256_ce_state),
unsigned int num_blks);
EXPORT_SYMBOL(sha256_block_data_order);
-static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
- int blocks)
+static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
+ int blocks)
{
sha256_block_data_order(sst->state, src, blocks);
}
asmlinkage void sha256_block_neon(u32 *digest, const void *data,
unsigned int num_blks);
-static void __sha256_block_neon(struct sha256_state *sst, u8 const *src,
- int blocks)
+static void sha256_neon_transform(struct sha256_state *sst, u8 const *src,
+ int blocks)
{
sha256_block_neon(sst->state, src, blocks);
}
static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
- return sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
+ return sha256_base_do_update(desc, data, len, sha256_arm64_transform);
}
static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
if (len)
- sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
- sha256_base_do_finalize(desc, __sha256_block_data_order);
+ sha256_base_do_update(desc, data, len, sha256_arm64_transform);
+ sha256_base_do_finalize(desc, sha256_arm64_transform);
return sha256_base_finish(desc, out);
}
if (!crypto_simd_usable())
return sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
+ sha256_arm64_transform);
while (len > 0) {
unsigned int chunk = len;
sctx->count % SHA256_BLOCK_SIZE;
kernel_neon_begin();
- sha256_base_do_update(desc, data, chunk, __sha256_block_neon);
+ sha256_base_do_update(desc, data, chunk, sha256_neon_transform);
kernel_neon_end();
data += chunk;
len -= chunk;
if (!crypto_simd_usable()) {
if (len)
sha256_base_do_update(desc, data, len,
- __sha256_block_data_order);
- sha256_base_do_finalize(desc, __sha256_block_data_order);
+ sha256_arm64_transform);
+ sha256_base_do_finalize(desc, sha256_arm64_transform);
} else {
if (len)
sha256_update_neon(desc, data, len);
kernel_neon_begin();
- sha256_base_do_finalize(desc, __sha256_block_neon);
+ sha256_base_do_finalize(desc, sha256_neon_transform);
kernel_neon_end();
}
return sha256_base_finish(desc, out);
.endm
/*
- * void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
- * int blocks)
+ * int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+ * int blocks)
*/
.text
-SYM_FUNC_START(sha512_ce_transform)
+SYM_FUNC_START(__sha512_ce_transform)
/* load state */
ld1 {v8.2d-v11.2d}, [x0]
3: st1 {v8.2d-v11.2d}, [x0]
mov w0, w2
ret
-SYM_FUNC_END(sha512_ce_transform)
+SYM_FUNC_END(__sha512_ce_transform)
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
-asmlinkage int sha512_ce_transform(struct sha512_state *sst, u8 const *src,
- int blocks);
+asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+ int blocks);
asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
-static void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
- int blocks)
+static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+ int blocks)
{
while (blocks) {
int rem;
kernel_neon_begin();
- rem = sha512_ce_transform(sst, src, blocks);
+ rem = __sha512_ce_transform(sst, src, blocks);
kernel_neon_end();
src += (blocks - rem) * SHA512_BLOCK_SIZE;
blocks = rem;
}
}
-static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
- int blocks)
+static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
+ int blocks)
{
sha512_block_data_order(sst->state, src, blocks);
}
static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
- sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
- : __sha512_block_data_order;
+ sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+ : sha512_arm64_transform;
sha512_base_do_update(desc, data, len, fn);
return 0;
static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
- sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
- : __sha512_block_data_order;
+ sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+ : sha512_arm64_transform;
sha512_base_do_update(desc, data, len, fn);
sha512_base_do_finalize(desc, fn);
static int sha512_ce_final(struct shash_desc *desc, u8 *out)
{
- sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
- : __sha512_block_data_order;
+ sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+ : sha512_arm64_transform;
sha512_base_do_finalize(desc, fn);
return sha512_base_finish(desc, out);
unsigned int num_blks);
EXPORT_SYMBOL(sha512_block_data_order);
-static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
- int blocks)
+static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
+ int blocks)
{
sha512_block_data_order(sst->state, src, blocks);
}
static int sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
- return sha512_base_do_update(desc, data, len,
- __sha512_block_data_order);
+ return sha512_base_do_update(desc, data, len, sha512_arm64_transform);
}
static int sha512_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
if (len)
- sha512_base_do_update(desc, data, len,
- __sha512_block_data_order);
- sha512_base_do_finalize(desc, __sha512_block_data_order);
+ sha512_base_do_update(desc, data, len, sha512_arm64_transform);
+ sha512_base_do_finalize(desc, sha512_arm64_transform);
return sha512_base_finish(desc, out);
}
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
.cra_init = chksum_cra_init,
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
.cra_init = chksumc_cra_init,
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
.cra_init = chksum_cra_init,
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
.cra_init = chksum_cra_init,
#include <asm/pstate.h>
#include <asm/elf.h>
+#include <asm/unaligned.h>
#include "opcodes.h"
if (keylen != sizeof(u32))
return -EINVAL;
- *mctx = le32_to_cpup((__le32 *)key);
+ *mctx = get_unaligned_le32(key);
return 0;
}
extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len);
-static void crc32c_compute(u32 *crcp, const u64 *data, unsigned int len)
+static u32 crc32c_compute(u32 crc, const u8 *data, unsigned int len)
{
- unsigned int asm_len;
-
- asm_len = len & ~7U;
- if (asm_len) {
- crc32c_sparc64(crcp, data, asm_len);
- data += asm_len / 8;
- len -= asm_len;
+ unsigned int n = -(uintptr_t)data & 7;
+
+ if (n) {
+ /* Data isn't 8-byte aligned. Align it. */
+ n = min(n, len);
+ crc = __crc32c_le(crc, data, n);
+ data += n;
+ len -= n;
+ }
+ n = len & ~7U;
+ if (n) {
+ crc32c_sparc64(&crc, (const u64 *)data, n);
+ data += n;
+ len -= n;
}
if (len)
- *crcp = __crc32c_le(*crcp, (const unsigned char *) data, len);
+ crc = __crc32c_le(crc, data, len);
+ return crc;
}
static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
{
u32 *crcp = shash_desc_ctx(desc);
- crc32c_compute(crcp, (const u64 *) data, len);
-
+ *crcp = crc32c_compute(*crcp, data, len);
return 0;
}
-static int __crc32c_sparc64_finup(u32 *crcp, const u8 *data, unsigned int len,
- u8 *out)
+static int __crc32c_sparc64_finup(const u32 *crcp, const u8 *data,
+ unsigned int len, u8 *out)
{
- u32 tmp = *crcp;
-
- crc32c_compute(&tmp, (const u64 *) data, len);
-
- *(__le32 *) out = ~cpu_to_le32(tmp);
+ put_unaligned_le32(~crc32c_compute(*crcp, data, len), out);
return 0;
}
{
u32 *crcp = shash_desc_ctx(desc);
- *(__le32 *) out = ~cpu_to_le32p(crcp);
+ put_unaligned_le32(~*crcp, out);
return 0;
}
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
- .cra_alignmask = 7,
.cra_module = THIS_MODULE,
.cra_init = crc32c_sparc64_cra_init,
}
add %r13, %r10
# Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
sub $16, %r10
- # Determine if if partial block is not being filled and
+ # Determine if partial block is not being filled and
# shift mask accordingly
jge .L_no_extra_mask_1_\@
sub %r10, %r12
add %r13, %r10
# Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
sub $16, %r10
- # Determine if if partial block is not being filled and
+ # Determine if partial block is not being filled and
# shift mask accordingly
jge .L_no_extra_mask_2_\@
sub %r10, %r12
add %r13, %r10
# Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
sub $16, %r10
- # Determine if if partial block is not being filled and
+ # Determine if partial block is not being filled and
# shift mask accordingly
jge .L_no_extra_mask_1_\@
sub %r10, %r12
add %r13, %r10
# Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
sub $16, %r10
- # Determine if if partial block is not being filled and
+ # Determine if partial block is not being filled and
# shift mask accordingly
jge .L_no_extra_mask_2_\@
sub %r10, %r12
};
struct aesni_xts_ctx {
- u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
- u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
};
#define GCM_BLOCK_LEN 16
u8 hash_keys[GCM_BLOCK_LEN * 16];
};
+static inline void *aes_align_addr(void *addr)
+{
+ if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+ return addr;
+ return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len);
asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
static inline struct
aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
static inline struct
generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
#endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
{
- unsigned long addr = (unsigned long)raw_ctx;
- unsigned long align = AESNI_ALIGN;
+ return aes_align_addr(raw_ctx);
+}
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return (struct crypto_aes_ctx *)ALIGN(addr, align);
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+ return aes_align_addr(crypto_skcipher_ctx(tfm));
}
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+ err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
- keylen);
+ return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int tail = req->cryptlen % AES_BLOCK_SIZE;
struct skcipher_request subreq;
struct skcipher_walk walk;
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
return 0;
}
+static int nhpoly1305_avx2_digest(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen, u8 *out)
+{
+ return crypto_nhpoly1305_init(desc) ?:
+ nhpoly1305_avx2_update(desc, src, srclen) ?:
+ crypto_nhpoly1305_final(desc, out);
+}
+
static struct shash_alg nhpoly1305_alg = {
.base.cra_name = "nhpoly1305",
.base.cra_driver_name = "nhpoly1305-avx2",
.init = crypto_nhpoly1305_init,
.update = nhpoly1305_avx2_update,
.final = crypto_nhpoly1305_final,
+ .digest = nhpoly1305_avx2_digest,
.setkey = crypto_nhpoly1305_setkey,
.descsize = sizeof(struct nhpoly1305_state),
};
return 0;
}
+static int nhpoly1305_sse2_digest(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen, u8 *out)
+{
+ return crypto_nhpoly1305_init(desc) ?:
+ nhpoly1305_sse2_update(desc, src, srclen) ?:
+ crypto_nhpoly1305_final(desc, out);
+}
+
static struct shash_alg nhpoly1305_alg = {
.base.cra_name = "nhpoly1305",
.base.cra_driver_name = "nhpoly1305-sse2",
.init = crypto_nhpoly1305_init,
.update = nhpoly1305_sse2_update,
.final = crypto_nhpoly1305_final,
+ .digest = nhpoly1305_sse2_digest,
.setkey = crypto_nhpoly1305_setkey,
.descsize = sizeof(struct nhpoly1305_state),
};
#include <linux/types.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
+#include <asm/cpu_device_id.h>
#include <asm/simd.h>
+static const struct x86_cpu_id module_cpu_ids[] = {
+ X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+ X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+ X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
static int sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len, sha1_block_fn *sha1_xform)
{
static int __init sha1_ssse3_mod_init(void)
{
+ if (!x86_match_cpu(module_cpu_ids))
+ return -ENODEV;
+
if (register_sha1_ssse3())
goto fail;
#include <crypto/sha2.h>
#include <crypto/sha256_base.h>
#include <linux/string.h>
+#include <asm/cpu_device_id.h>
#include <asm/simd.h>
asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
const u8 *data, int blocks);
+static const struct x86_cpu_id module_cpu_ids[] = {
+ X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+ X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+ X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
static int _sha256_update(struct shash_desc *desc, const u8 *data,
unsigned int len, sha256_block_fn *sha256_xform)
{
return sha256_ssse3_finup(desc, NULL, 0, out);
}
+static int sha256_ssse3_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ return sha256_base_init(desc) ?:
+ sha256_ssse3_finup(desc, data, len, out);
+}
+
static struct shash_alg sha256_ssse3_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_ssse3_update,
.final = sha256_ssse3_final,
.finup = sha256_ssse3_finup,
+ .digest = sha256_ssse3_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
return sha256_avx_finup(desc, NULL, 0, out);
}
+static int sha256_avx_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ return sha256_base_init(desc) ?:
+ sha256_avx_finup(desc, data, len, out);
+}
+
static struct shash_alg sha256_avx_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_avx_update,
.final = sha256_avx_final,
.finup = sha256_avx_finup,
+ .digest = sha256_avx_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
return sha256_avx2_finup(desc, NULL, 0, out);
}
+static int sha256_avx2_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ return sha256_base_init(desc) ?:
+ sha256_avx2_finup(desc, data, len, out);
+}
+
static struct shash_alg sha256_avx2_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_avx2_update,
.final = sha256_avx2_final,
.finup = sha256_avx2_finup,
+ .digest = sha256_avx2_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
return sha256_ni_finup(desc, NULL, 0, out);
}
+static int sha256_ni_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ return sha256_base_init(desc) ?:
+ sha256_ni_finup(desc, data, len, out);
+}
+
static struct shash_alg sha256_ni_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_ni_update,
.final = sha256_ni_final,
.finup = sha256_ni_finup,
+ .digest = sha256_ni_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
static int __init sha256_ssse3_mod_init(void)
{
+ if (!x86_match_cpu(module_cpu_ids))
+ return -ENODEV;
+
if (register_sha256_ssse3())
goto fail;
config MODULE_SIG_KEY_TYPE_ECDSA
bool "ECDSA"
select CRYPTO_ECDSA
+ depends on !(MODULE_SIG_SHA256 || MODULE_SIG_SHA3_256)
help
- Use an elliptic curve key (NIST P384) for module signing. Consider
- using a strong hash like sha256 or sha384 for hashing modules.
+ Use an elliptic curve key (NIST P384) for module signing. Use
+ a strong hash of the same or higher bit length, i.e. sha384 or
+ sha512, for hashing modules.
Note: Remove all ECDSA signing keys, e.g. certs/signing_key.pem,
when falling back to building Linux 5.14 and older kernels.
tristate
select CRYPTO_SKCIPHER2
select CRYPTO_ALGAPI
+ select CRYPTO_ECB
config CRYPTO_SKCIPHER2
tristate
config CRYPTO_ECB
tristate "ECB (Electronic Codebook)"
- select CRYPTO_SKCIPHER
+ select CRYPTO_SKCIPHER2
select CRYPTO_MANAGER
help
ECB (Electronic Codebook) mode (NIST SP800-38A)
See https://www.chronox.de/jent.html
+choice
+ prompt "CPU Jitter RNG Memory Size"
+ default CRYPTO_JITTERENTROPY_MEMSIZE_2
+ depends on CRYPTO_JITTERENTROPY
+ help
+ The Jitter RNG measures the execution time of memory accesses.
+ Multiple consecutive memory accesses are performed. If the memory
+ size fits into a cache (e.g. L1), only the memory access timing
+ to that cache is measured. The closer the cache is to the CPU,
+ the fewer variations are measured and thus the less entropy is
+ obtained. Thus, if the memory size fits into the L1 cache, the
+ obtained entropy is less than if the memory size fits within
+ L1 + L2, which in turn is less than if the memory fits into
+ L1 + L2 + L3. Thus, by selecting a different memory size,
+ the entropy rate produced by the Jitter RNG can be modified.
+
+ config CRYPTO_JITTERENTROPY_MEMSIZE_2
+ bool "2048 Bytes (default)"
+
+ config CRYPTO_JITTERENTROPY_MEMSIZE_128
+ bool "128 kBytes"
+
+ config CRYPTO_JITTERENTROPY_MEMSIZE_1024
+ bool "1024 kBytes"
+
+ config CRYPTO_JITTERENTROPY_MEMSIZE_8192
+ bool "8192 kBytes"
+endchoice
+
+config CRYPTO_JITTERENTROPY_MEMORY_BLOCKS
+ int
+ default 64 if CRYPTO_JITTERENTROPY_MEMSIZE_2
+ default 512 if CRYPTO_JITTERENTROPY_MEMSIZE_128
+ default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
+ default 4096 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
+
+config CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE
+ int
+ default 32 if CRYPTO_JITTERENTROPY_MEMSIZE_2
+ default 256 if CRYPTO_JITTERENTROPY_MEMSIZE_128
+ default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
+ default 2048 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
+
+config CRYPTO_JITTERENTROPY_OSR
+ int "CPU Jitter RNG Oversampling Rate"
+ range 1 15
+ default 1
+ depends on CRYPTO_JITTERENTROPY
+ help
+ The Jitter RNG allows the specification of an oversampling rate (OSR).
+ The Jitter RNG operation requires a fixed number of timing
+ measurements to produce one output block of random numbers. The
+ OSR value is multiplied by that number of timing measurements to
+ generate one output block. Thus, the timing measurement is oversampled
+ by the OSR factor. The oversampling allows the Jitter RNG to operate
+ on hardware whose timers deliver a limited amount of entropy (e.g.
+ the timer is coarse) by setting the OSR to a higher value. The
+ trade-off, however, is that the Jitter RNG now requires more time
+ to generate random numbers.
+
config CRYPTO_JITTERENTROPY_TESTINTERFACE
bool "CPU Jitter RNG Test Interface"
depends on CRYPTO_JITTERENTROPY
obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
-obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
+crypto_skcipher-y += lskcipher.o
+crypto_skcipher-y += skcipher.o
+
+obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
+
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
/* Hash the left-hand part (the "bulk") of the message using NHPoly1305 */
static int adiantum_hash_message(struct skcipher_request *req,
- struct scatterlist *sgl, le128 *digest)
+ struct scatterlist *sgl, unsigned int nents,
+ le128 *digest)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
struct shash_desc *hash_desc = &rctx->u.hash_desc;
unsigned int i, n;
int err;
- hash_desc->tfm = tctx->hash;
-
err = crypto_shash_init(hash_desc);
if (err)
return err;
- sg_miter_start(&miter, sgl, sg_nents(sgl),
- SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+ sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
for (i = 0; i < bulk_len; i += n) {
sg_miter_next(&miter);
n = min_t(unsigned int, miter.length, bulk_len - i);
const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+ struct scatterlist *dst = req->dst;
+ const unsigned int dst_nents = sg_nents(dst);
le128 digest;
int err;
* enc: C_R = C_M - H_{K_H}(T, C_L)
* dec: P_R = P_M - H_{K_H}(T, P_L)
*/
- err = adiantum_hash_message(req, req->dst, &digest);
- if (err)
- return err;
- le128_add(&digest, &digest, &rctx->header_hash);
- le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
- scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->dst,
- bulk_len, BLOCKCIPHER_BLOCK_SIZE, 1);
+ rctx->u.hash_desc.tfm = tctx->hash;
+ le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
+ if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
+ /* Fast path for single-page destination */
+ struct page *page = sg_page(dst);
+ void *virt = kmap_local_page(page) + dst->offset;
+
+ err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+ (u8 *)&digest);
+ if (err) {
+ kunmap_local(virt);
+ return err;
+ }
+ le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+ memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
+ flush_dcache_page(page);
+ kunmap_local(virt);
+ } else {
+ /* Slow path that works for any destination scatterlist */
+ err = adiantum_hash_message(req, dst, dst_nents, &digest);
+ if (err)
+ return err;
+ le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+ scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst,
+ bulk_len, sizeof(le128), 1);
+ }
return 0;
}
const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+ struct scatterlist *src = req->src;
+ const unsigned int src_nents = sg_nents(src);
unsigned int stream_len;
le128 digest;
int err;
* dec: C_M = C_R + H_{K_H}(T, C_L)
*/
adiantum_hash_header(req);
- err = adiantum_hash_message(req, req->src, &digest);
+ rctx->u.hash_desc.tfm = tctx->hash;
+ if (src_nents == 1 && src->offset + req->cryptlen <= PAGE_SIZE) {
+ /* Fast path for single-page source */
+ void *virt = kmap_local_page(sg_page(src)) + src->offset;
+
+ err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+ (u8 *)&digest);
+ memcpy(&rctx->rbuf.bignum, virt + bulk_len, sizeof(le128));
+ kunmap_local(virt);
+ } else {
+ /* Slow path that works for any source scatterlist */
+ err = adiantum_hash_message(req, src, src_nents, &digest);
+ scatterwalk_map_and_copy(&rctx->rbuf.bignum, src,
+ bulk_len, sizeof(le128), 0);
+ }
if (err)
return err;
- le128_add(&digest, &digest, &rctx->header_hash);
- scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->src,
- bulk_len, BLOCKCIPHER_BLOCK_SIZE, 0);
+ le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
/* If encrypting, encrypt P_M with the block cipher to get C_M */
* Check for a supported set of inner algorithms.
* See the comment at the beginning of this file.
*/
-static bool adiantum_supported_algorithms(struct skcipher_alg *streamcipher_alg,
+static bool adiantum_supported_algorithms(struct skcipher_alg_common *streamcipher_alg,
struct crypto_alg *blockcipher_alg,
struct shash_alg *hash_alg)
{
const char *nhpoly1305_name;
struct skcipher_instance *inst;
struct adiantum_instance_ctx *ictx;
- struct skcipher_alg *streamcipher_alg;
+ struct skcipher_alg_common *streamcipher_alg;
struct crypto_alg *blockcipher_alg;
struct shash_alg *hash_alg;
int err;
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
- streamcipher_alg = crypto_spawn_skcipher_alg(&ictx->streamcipher_spawn);
+ streamcipher_alg = crypto_spawn_skcipher_alg_common(&ictx->streamcipher_spawn);
/* Block cipher, e.g. "aes" */
err = crypto_grab_cipher(&ictx->blockcipher_spawn,
inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx);
- inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask |
- hash_alg->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask;
/*
* The block cipher is only invoked once per message, so for long
* messages (e.g. sectors for disk encryption) its performance doesn't
inst->alg.decrypt = adiantum_decrypt;
inst->alg.init = adiantum_init_tfm;
inst->alg.exit = adiantum_exit_tfm;
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(streamcipher_alg);
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(streamcipher_alg);
+ inst->alg.min_keysize = streamcipher_alg->min_keysize;
+ inst->alg.max_keysize = streamcipher_alg->max_keysize;
inst->alg.ivsize = TWEAK_SIZE;
inst->free = adiantum_free_instance;
}
EXPORT_SYMBOL_GPL(crypto_alloc_aead);
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask)
+{
+ return crypto_type_has_alg(alg_name, &crypto_aead_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_aead);
+
static int aead_prepare_alg(struct aead_alg *alg)
{
struct crypto_istat_aead *istat = aead_get_stat(alg);
/*
* Asynchronous Cryptographic Hash operations.
*
- * This is the asynchronous version of hash.c with notification of
- * completion via a callback.
+ * This is the implementation of the ahash (asynchronous hash) API. It differs
+ * from shash (synchronous hash) in that ahash supports asynchronous operations,
+ * and it hashes data from scatterlists instead of virtually addressed buffers.
+ *
+ * The ahash API provides access to both ahash and shash algorithms. The shash
+ * API only provides access to shash algorithms.
*
* Copyright (c) 2008 Loc Ho <lho@amcc.com>
*/
#include "hash.h"
-static const struct crypto_type crypto_ahash_type;
+#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
-struct ahash_request_priv {
- crypto_completion_t complete;
- void *data;
- u8 *result;
- u32 flags;
- void *ubuf[] CRYPTO_MINALIGN_ATTR;
-};
+static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg)
+{
+ return hash_get_stat(&alg->halg);
+}
+
+static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err)
+{
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;
+
+ if (err && err != -EINPROGRESS && err != -EBUSY)
+ atomic64_inc(&ahash_get_stat(alg)->err_cnt);
+
+ return err;
+}
+
+/*
+ * For an ahash tfm that is using an shash algorithm (instead of an ahash
+ * algorithm), this returns the underlying shash tfm.
+ */
+static inline struct crypto_shash *ahash_to_shash(struct crypto_ahash *tfm)
+{
+ return *(struct crypto_shash **)crypto_ahash_ctx(tfm);
+}
+
+static inline struct shash_desc *prepare_shash_desc(struct ahash_request *req,
+ struct crypto_ahash *tfm)
+{
+ struct shash_desc *desc = ahash_request_ctx(req);
+
+ desc->tfm = ahash_to_shash(tfm);
+ return desc;
+}
+
+int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
+{
+ struct crypto_hash_walk walk;
+ int nbytes;
+
+ for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
+ nbytes = crypto_hash_walk_done(&walk, nbytes))
+ nbytes = crypto_shash_update(desc, walk.data, nbytes);
+
+ return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_update);
+
+int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
+{
+ struct crypto_hash_walk walk;
+ int nbytes;
+
+ nbytes = crypto_hash_walk_first(req, &walk);
+ if (!nbytes)
+ return crypto_shash_final(desc, req->result);
+
+ do {
+ nbytes = crypto_hash_walk_last(&walk) ?
+ crypto_shash_finup(desc, walk.data, nbytes,
+ req->result) :
+ crypto_shash_update(desc, walk.data, nbytes);
+ nbytes = crypto_hash_walk_done(&walk, nbytes);
+ } while (nbytes > 0);
+
+ return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_finup);
+
+int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
+{
+ unsigned int nbytes = req->nbytes;
+ struct scatterlist *sg;
+ unsigned int offset;
+ int err;
+
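+	/*
+	 * Fast path: if the whole request fits within a single page of the
+	 * first scatterlist entry, map it and hash it directly; otherwise
+	 * fall back to a block-by-block hash walk.
+	 */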
+ if (nbytes &&
+ (sg = req->src, offset = sg->offset,
+ nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
+ void *data;
+
+ data = kmap_local_page(sg_page(sg));
+ err = crypto_shash_digest(desc, data + offset, nbytes,
+ req->result);
+ kunmap_local(data);
+ } else
+ err = crypto_shash_init(desc) ?:
+ shash_ahash_finup(req, desc);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_digest);
+
+static void crypto_exit_ahash_using_shash(struct crypto_tfm *tfm)
+{
+ struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_shash(*ctx);
+}
+
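+/*
+ * Set up an ahash transform backed by an shash algorithm: allocate the
+ * underlying shash tfm and size the ahash request context so that it can
+ * hold the shash descriptor.
+ */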
+static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *calg = tfm->__crt_alg;
+ struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
+ struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+ struct crypto_shash *shash;
+
+ if (!crypto_mod_get(calg))
+ return -EAGAIN;
+
+ shash = crypto_create_tfm(calg, &crypto_shash_type);
+ if (IS_ERR(shash)) {
+ crypto_mod_put(calg);
+ return PTR_ERR(shash);
+ }
+
+ crt->using_shash = true;
+ *ctx = shash;
+ tfm->exit = crypto_exit_ahash_using_shash;
+
+ crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
+ crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+
+ return 0;
+}
static int hash_walk_next(struct crypto_hash_walk *walk)
{
- unsigned int alignmask = walk->alignmask;
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);
walk->data = kmap_local_page(walk->pg);
walk->data += offset;
-
- if (offset & alignmask) {
- unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-
- if (nbytes > unaligned)
- nbytes = unaligned;
- }
-
walk->entrylen -= nbytes;
return nbytes;
}
int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
- unsigned int alignmask = walk->alignmask;
-
walk->data -= walk->offset;
- if (walk->entrylen && (walk->offset & alignmask) && !err) {
- unsigned int nbytes;
-
- walk->offset = ALIGN(walk->offset, alignmask + 1);
- nbytes = min(walk->entrylen,
- (unsigned int)(PAGE_SIZE - walk->offset));
- if (nbytes) {
- walk->entrylen -= nbytes;
- walk->data += walk->offset;
- return nbytes;
- }
- }
-
kunmap_local(walk->data);
crypto_yield(walk->flags);
return 0;
}
- walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
walk->sg = req->src;
walk->flags = req->base.flags;
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen)
-{
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int ret;
- u8 *buffer, *alignbuffer;
- unsigned long absize;
-
- absize = keylen + alignmask;
- buffer = kmalloc(absize, GFP_KERNEL);
- if (!buffer)
- return -ENOMEM;
-
- alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
- memcpy(alignbuffer, key, keylen);
- ret = tfm->setkey(tfm, alignbuffer, keylen);
- kfree_sensitive(buffer);
- return ret;
-}
-
static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}
-static void ahash_set_needkey(struct crypto_ahash *tfm)
+static void ahash_set_needkey(struct crypto_ahash *tfm, struct ahash_alg *alg)
{
- const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
- if (tfm->setkey != ahash_nosetkey &&
- !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+ if (alg->setkey != ahash_nosetkey &&
+ !(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
}
int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int err;
+ if (likely(tfm->using_shash)) {
+ struct crypto_shash *shash = ahash_to_shash(tfm);
+ int err;
- if ((unsigned long)key & alignmask)
- err = ahash_setkey_unaligned(tfm, key, keylen);
- else
- err = tfm->setkey(tfm, key, keylen);
-
- if (unlikely(err)) {
- ahash_set_needkey(tfm);
- return err;
+ err = crypto_shash_setkey(shash, key, keylen);
+ if (unlikely(err)) {
+ crypto_ahash_set_flags(tfm,
+ crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
+ return err;
+ }
+ } else {
+ struct ahash_alg *alg = crypto_ahash_alg(tfm);
+ int err;
+
+ err = alg->setkey(tfm, key, keylen);
+ if (unlikely(err)) {
+ ahash_set_needkey(tfm, alg);
+ return err;
+ }
}
-
crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
+int crypto_ahash_init(struct ahash_request *req)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_init(prepare_shash_desc(req, tfm));
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+ return crypto_ahash_alg(tfm)->init(req);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_init);
+
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
bool has_state)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
unsigned int ds = crypto_ahash_digestsize(tfm);
struct ahash_request *subreq;
unsigned int subreq_size;
reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
subreq_size += reqsize;
subreq_size += ds;
- subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
flags = ahash_request_flags(req);
gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
ahash_request_set_callback(subreq, flags, cplt, req);
result = (u8 *)(subreq + 1) + reqsize;
- result = PTR_ALIGN(result, alignmask + 1);
ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
kfree_sensitive(subreq);
}
-static void ahash_op_unaligned_done(void *data, int err)
-{
- struct ahash_request *areq = data;
-
- if (err == -EINPROGRESS)
- goto out;
-
- /* First copy req->result into req->priv.result */
- ahash_restore_req(areq, err);
-
-out:
- /* Complete the ORIGINAL request. */
- ahash_request_complete(areq, err);
-}
-
-static int ahash_op_unaligned(struct ahash_request *req,
- int (*op)(struct ahash_request *),
- bool has_state)
-{
- int err;
-
- err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
- if (err)
- return err;
-
- err = op(req->priv);
- if (err == -EINPROGRESS || err == -EBUSY)
- return err;
-
- ahash_restore_req(req, err);
-
- return err;
-}
-
-static int crypto_ahash_op(struct ahash_request *req,
- int (*op)(struct ahash_request *),
- bool has_state)
+int crypto_ahash_update(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int err;
+ struct ahash_alg *alg;
- if ((unsigned long)req->result & alignmask)
- err = ahash_op_unaligned(req, op, has_state);
- else
- err = op(req);
+ if (likely(tfm->using_shash))
+ return shash_ahash_update(req, ahash_request_ctx(req));
- return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
+ alg = crypto_ahash_alg(tfm);
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+ atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen);
+ return crypto_ahash_errstat(alg, alg->update(req));
}
+EXPORT_SYMBOL_GPL(crypto_ahash_update);
int crypto_ahash_final(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;
- if (IS_ENABLED(CONFIG_CRYPTO_STATS))
- atomic64_inc(&hash_get_stat(alg)->hash_cnt);
+ if (likely(tfm->using_shash))
+ return crypto_shash_final(ahash_request_ctx(req), req->result);
- return crypto_ahash_op(req, tfm->final, true);
+ alg = crypto_ahash_alg(tfm);
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+ atomic64_inc(&ahash_get_stat(alg)->hash_cnt);
+ return crypto_ahash_errstat(alg, alg->final(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_final);
int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;
+
+ if (likely(tfm->using_shash))
+ return shash_ahash_finup(req, ahash_request_ctx(req));
+ alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
- struct crypto_istat_hash *istat = hash_get_stat(alg);
+ struct crypto_istat_hash *istat = ahash_get_stat(alg);
atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}
-
- return crypto_ahash_op(req, tfm->finup, true);
+ return crypto_ahash_errstat(alg, alg->finup(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);
int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;
+ int err;
+
+ if (likely(tfm->using_shash))
+ return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
+ alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
- struct crypto_istat_hash *istat = hash_get_stat(alg);
+ struct crypto_istat_hash *istat = ahash_get_stat(alg);
atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return crypto_hash_errstat(alg, -ENOKEY);
+ err = -ENOKEY;
+ else
+ err = alg->digest(req);
- return crypto_ahash_op(req, tfm->digest, false);
+ return crypto_ahash_errstat(alg, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);
subreq->base.complete = ahash_def_finup_done2;
- err = crypto_ahash_reqtfm(req)->final(subreq);
+ err = crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
if (err)
return err;
- err = tfm->update(req->priv);
+ err = crypto_ahash_alg(tfm)->update(req->priv);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
return ahash_def_finup_finish1(req, err);
}
+int crypto_ahash_export(struct ahash_request *req, void *out)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_export(ahash_request_ctx(req), out);
+ return crypto_ahash_alg(tfm)->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_export);
+
+int crypto_ahash_import(struct ahash_request *req, const void *in)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_import(prepare_shash_desc(req, tfm), in);
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+ return crypto_ahash_alg(tfm)->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_import);
+
static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);
- hash->setkey = ahash_nosetkey;
-
crypto_ahash_set_statesize(hash, alg->halg.statesize);
- if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
- return crypto_init_shash_ops_async(tfm);
+ if (tfm->__crt_alg->cra_type == &crypto_shash_type)
+ return crypto_init_ahash_using_shash(tfm);
- hash->init = alg->init;
- hash->update = alg->update;
- hash->final = alg->final;
- hash->finup = alg->finup ?: ahash_def_finup;
- hash->digest = alg->digest;
- hash->export = alg->export;
- hash->import = alg->import;
-
- if (alg->setkey) {
- hash->setkey = alg->setkey;
- ahash_set_needkey(hash);
- }
+ ahash_set_needkey(hash, alg);
if (alg->exit_tfm)
tfm->exit = crypto_ahash_exit_tfm;
static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)
{
- if (alg->cra_type != &crypto_ahash_type)
+ if (alg->cra_type == &crypto_shash_type)
return sizeof(struct crypto_shash *);
return crypto_alg_extsize(alg);
if (IS_ERR(nhash))
return nhash;
- nhash->init = hash->init;
- nhash->update = hash->update;
- nhash->final = hash->final;
- nhash->finup = hash->finup;
- nhash->digest = hash->digest;
- nhash->export = hash->export;
- nhash->import = hash->import;
- nhash->setkey = hash->setkey;
nhash->reqsize = hash->reqsize;
nhash->statesize = hash->statesize;
- if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
- return crypto_clone_shash_ops_async(nhash, hash);
+ if (likely(hash->using_shash)) {
+ struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
+ struct crypto_shash *shash;
+
+ shash = crypto_clone_shash(ahash_to_shash(hash));
+ if (IS_ERR(shash)) {
+ err = PTR_ERR(shash);
+ goto out_free_nhash;
+ }
+ *nctx = shash;
+ return nhash;
+ }
err = -ENOSYS;
alg = crypto_ahash_alg(hash);
base->cra_type = &crypto_ahash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
+ if (!alg->finup)
+ alg->finup = ahash_def_finup;
+ if (!alg->setkey)
+ alg->setkey = ahash_nosetkey;
+
return 0;
}
{
struct crypto_alg *alg = &halg->base;
- if (alg->cra_type != &crypto_ahash_type)
+ if (alg->cra_type == &crypto_shash_type)
return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));
- return __crypto_ahash_alg(alg)->setkey != NULL;
+ return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}
EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);
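
The ahash rework above routes shash-backed algorithms through the new using_shash fast path while leaving the external ahash API untouched. As an illustrative aside (not part of the patch set), a one-shot digest over such an algorithm still looks like the sketch below; the algorithm name "sha256", the demo function name and the simplified error handling are assumptions for the example:

/* Illustrative sketch only: a one-shot ahash digest.  When "sha256" resolves
 * to an shash implementation, the request is serviced through the
 * using_shash fast path introduced above. */
#include <crypto/hash.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int demo_sha256(const void *buf, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_ahash(tfm);
		return -ENOMEM;
	}

	/* @buf must be linear (non-stack) memory for sg_init_one(). */
	sg_init_one(&sg, buf, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
	crypto_free_ahash(tfm);
	return err;
}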
struct crypto_tfm *__crypto_alloc_tfmgfp(struct crypto_alg *alg, u32 type,
u32 mask, gfp_t gfp)
{
- struct crypto_tfm *tfm = NULL;
+ struct crypto_tfm *tfm;
unsigned int tfm_size;
int err = -ENOMEM;
* Jon Oberheide <jon@oberheide.org>
*/
-#include <crypto/algapi.h>
#include <crypto/arc4.h>
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/sched.h>
-static int crypto_arc4_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
- struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
return arc4_setkey(ctx, in_key, key_len);
}
-static int crypto_arc4_crypt(struct skcipher_request *req)
+static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned nbytes, u8 *iv, bool final)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct skcipher_walk walk;
- int err;
+ struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
- err = skcipher_walk_virt(&walk, req, false);
-
- while (walk.nbytes > 0) {
- arc4_crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
- walk.nbytes);
- err = skcipher_walk_done(&walk, 0);
- }
-
- return err;
+ arc4_crypt(ctx, dst, src, nbytes);
+ return 0;
}
-static int crypto_arc4_init(struct crypto_skcipher *tfm)
+static int crypto_arc4_init(struct crypto_lskcipher *tfm)
{
pr_warn_ratelimited("\"%s\" (%ld) uses obsolete ecb(arc4) skcipher\n",
current->comm, (unsigned long)current->pid);
return 0;
}
-static struct skcipher_alg arc4_alg = {
- /*
- * For legacy reasons, this is named "ecb(arc4)", not "arc4".
- * Nevertheless it's actually a stream cipher, not a block cipher.
- */
- .base.cra_name = "ecb(arc4)",
- .base.cra_driver_name = "ecb(arc4)-generic",
- .base.cra_priority = 100,
- .base.cra_blocksize = ARC4_BLOCK_SIZE,
- .base.cra_ctxsize = sizeof(struct arc4_ctx),
- .base.cra_module = THIS_MODULE,
- .min_keysize = ARC4_MIN_KEY_SIZE,
- .max_keysize = ARC4_MAX_KEY_SIZE,
- .setkey = crypto_arc4_setkey,
- .encrypt = crypto_arc4_crypt,
- .decrypt = crypto_arc4_crypt,
- .init = crypto_arc4_init,
+static struct lskcipher_alg arc4_alg = {
+ .co.base.cra_name = "arc4",
+ .co.base.cra_driver_name = "arc4-generic",
+ .co.base.cra_priority = 100,
+ .co.base.cra_blocksize = ARC4_BLOCK_SIZE,
+ .co.base.cra_ctxsize = sizeof(struct arc4_ctx),
+ .co.base.cra_module = THIS_MODULE,
+ .co.min_keysize = ARC4_MIN_KEY_SIZE,
+ .co.max_keysize = ARC4_MAX_KEY_SIZE,
+ .setkey = crypto_arc4_setkey,
+ .encrypt = crypto_arc4_crypt,
+ .decrypt = crypto_arc4_crypt,
+ .init = crypto_arc4_init,
};
static int __init arc4_init(void)
{
- return crypto_register_skcipher(&arc4_alg);
+ return crypto_register_lskcipher(&arc4_alg);
}
static void __exit arc4_exit(void)
{
- crypto_unregister_skcipher(&arc4_alg);
+ crypto_unregister_lskcipher(&arc4_alg);
}
subsys_initcall(arc4_init);
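
Following the conversion above, the algorithm registers as an lskcipher under the plain name "arc4" rather than as an "ecb(arc4)" skcipher, so a synchronous caller can drive it directly on linear buffers. A minimal sketch, assuming the crypto_lskcipher_* helpers declared in <crypto/skcipher.h>; the demo function name is hypothetical and error handling is abbreviated:

/* Illustrative sketch only: driving the converted arc4 through the
 * lskcipher interface on linear buffers. */
#include <crypto/skcipher.h>
#include <linux/err.h>

static int demo_arc4(const u8 *key, unsigned int keylen,
		     const u8 *src, u8 *dst, unsigned int len)
{
	struct crypto_lskcipher *tfm;
	int err;

	tfm = crypto_alloc_lskcipher("arc4", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, keylen);
	if (!err)
		/* arc4 is a stream cipher with no IV, hence the NULL. */
		err = crypto_lskcipher_encrypt(tfm, src, dst, len, NULL);

	crypto_free_lskcipher(tfm);
	return err;
}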
signed PE binary.
config FIPS_SIGNATURE_SELFTEST
- bool "Run FIPS selftests on the X.509+PKCS7 signature verification"
+ tristate "Run FIPS selftests on the X.509+PKCS7 signature verification"
help
This option causes some selftests to be run on the signature
verification code, using some built in data. This is required
depends on KEYS
depends on ASYMMETRIC_KEY_TYPE
depends on PKCS7_MESSAGE_PARSER=X509_CERTIFICATE_PARSER
+ depends on X509_CERTIFICATE_PARSER
endif # ASYMMETRIC_KEY_TYPE
x509_cert_parser.o \
x509_loader.o \
x509_public_key.o
-x509_key_parser-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += selftest.o
+obj-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += x509_selftest.o
+x509_selftest-y += selftest.o
$(obj)/x509_cert_parser.o: \
$(obj)/x509.asn1.h \
oid = look_up_OID(value, vlen);
switch (oid) {
- case OID_md4:
- ctx->digest_algo = "md4";
- break;
- case OID_md5:
- ctx->digest_algo = "md5";
- break;
- case OID_sha1:
- ctx->digest_algo = "sha1";
- break;
case OID_sha256:
ctx->digest_algo = "sha256";
break;
case OID_sha512:
ctx->digest_algo = "sha512";
break;
- case OID_sha224:
- ctx->digest_algo = "sha224";
+ case OID_sha3_256:
+ ctx->digest_algo = "sha3-256";
+ break;
+ case OID_sha3_384:
+ ctx->digest_algo = "sha3-384";
+ break;
+ case OID_sha3_512:
+ ctx->digest_algo = "sha3-512";
break;
case OID__NR:
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2009 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5652#section-3
+
PKCS7ContentInfo ::= SEQUENCE {
contentType ContentType ({ pkcs7_check_content_type }),
content [0] EXPLICIT SignedData OPTIONAL
struct pkcs7_parse_context *ctx = context;
switch (ctx->last_oid) {
- case OID_md4:
- ctx->sinfo->sig->hash_algo = "md4";
- break;
- case OID_md5:
- ctx->sinfo->sig->hash_algo = "md5";
- break;
- case OID_sha1:
- ctx->sinfo->sig->hash_algo = "sha1";
- break;
case OID_sha256:
ctx->sinfo->sig->hash_algo = "sha256";
break;
case OID_gost2012Digest512:
ctx->sinfo->sig->hash_algo = "streebog512";
break;
+ case OID_sha3_256:
+ ctx->sinfo->sig->hash_algo = "sha3-256";
+ break;
+ case OID_sha3_384:
+ ctx->sinfo->sig->hash_algo = "sha3-384";
+ break;
+ case OID_sha3_512:
+ ctx->sinfo->sig->hash_algo = "sha3-512";
+ break;
default:
printk("Unsupported digest algo: %u\n", ctx->last_oid);
return -ENOPKG;
ctx->sinfo->sig->pkey_algo = "rsa";
ctx->sinfo->sig->encoding = "pkcs1";
break;
- case OID_id_ecdsa_with_sha1:
case OID_id_ecdsa_with_sha224:
case OID_id_ecdsa_with_sha256:
case OID_id_ecdsa_with_sha384:
case OID_id_ecdsa_with_sha512:
+ case OID_id_ecdsa_with_sha3_256:
+ case OID_id_ecdsa_with_sha3_384:
+ case OID_id_ecdsa_with_sha3_512:
ctx->sinfo->sig->pkey_algo = "ecdsa";
ctx->sinfo->sig->encoding = "x962";
break;
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2010 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5958#section-2
--
-- This is the unencrypted variant
--
*/
if (!hash_algo)
return -EINVAL;
- if (strcmp(hash_algo, "sha1") != 0 &&
- strcmp(hash_algo, "sha224") != 0 &&
+ if (strcmp(hash_algo, "sha224") != 0 &&
strcmp(hash_algo, "sha256") != 0 &&
strcmp(hash_algo, "sha384") != 0 &&
- strcmp(hash_algo, "sha512") != 0)
+ strcmp(hash_algo, "sha512") != 0 &&
+ strcmp(hash_algo, "sha3-256") != 0 &&
+ strcmp(hash_algo, "sha3-384") != 0 &&
+ strcmp(hash_algo, "sha3-512") != 0)
return -EINVAL;
} else if (strcmp(pkey->pkey_algo, "sm2") == 0) {
if (strcmp(encoding, "raw") != 0)
* Written by David Howells (dhowells@redhat.com)
*/
-#include <linux/kernel.h>
+#include <crypto/pkcs7.h>
#include <linux/cred.h>
+#include <linux/kernel.h>
#include <linux/key.h>
-#include <crypto/pkcs7.h>
+#include <linux/module.h>
#include "x509_parser.h"
struct certs_test {
TEST(certs_selftest_1_data, certs_selftest_1_pkcs7),
};
-int __init fips_signature_selftest(void)
+static int __init fips_signature_selftest(void)
{
struct key *keyring;
int ret, i;
key_put(keyring);
return 0;
}
+
+late_initcall(fips_signature_selftest);
+
+MODULE_DESCRIPTION("X.509 self tests");
+MODULE_AUTHOR("Red Hat, Inc.");
+MODULE_LICENSE("GPL");
* Sign the specified data blob using the private key specified by params->key.
* The signature is wrapped in an encoding if params->encoding is specified
* (eg. "pkcs1"). If the encoding needs to know the digest type, this can be
- * passed through params->hash_algo (eg. "sha1").
+ * passed through params->hash_algo (eg. "sha512").
*
* Returns the length of the data placed in the signature buffer or an error.
*/
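
With the SHA-3 OIDs and signature support added elsewhere in this series, a signing caller can also pass a SHA-3 digest name through params->hash_algo. A hedged sketch of filling in the parameter block follows; struct kernel_pkey_params and create_signature() are assumed to be the ones declared in <linux/keyctl.h>, and the demo function is hypothetical rather than part of these patches:

/* Hedged sketch only: signing a precomputed digest via the asymmetric key
 * subsystem.  Field layout of struct kernel_pkey_params is assumed from the
 * current kernel headers; "sha3-256" relies on the OID support added above. */
#include <linux/keyctl.h>
#include <linux/key.h>

static int demo_sign(struct key *key, const void *digest, u32 digest_len,
		     void *sig, u32 sig_len)
{
	struct kernel_pkey_params params = {
		.key       = key,
		.encoding  = "pkcs1",		/* signature encoding */
		.hash_algo = "sha3-256",	/* digest used for @digest */
		.in_len    = digest_len,
		.out_len   = sig_len,
	};

	/* Returns the signature length or a negative error code. */
	return create_signature(&params, digest, sig);
}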
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2008 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5280#section-4
+
Certificate ::= SEQUENCE {
tbsCertificate TBSCertificate ({ x509_note_tbs_certificate }),
signatureAlgorithm AlgorithmIdentifier,
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2008 IETF Trust and the persons identified as authors
+-- of the code
+--
-- X.509 AuthorityKeyIdentifier
-- rfc5280 section 4.2.1.1
GeneralNames ::= SEQUENCE OF GeneralName
GeneralName ::= CHOICE {
- otherName [0] ANY,
- rfc822Name [1] IA5String,
- dNSName [2] IA5String,
+ otherName [0] IMPLICIT OtherName,
+ rfc822Name [1] IMPLICIT IA5String,
+ dNSName [2] IMPLICIT IA5String,
x400Address [3] ANY,
directoryName [4] Name ({ x509_akid_note_name }),
- ediPartyName [5] ANY,
- uniformResourceIdentifier [6] IA5String,
- iPAddress [7] OCTET STRING,
- registeredID [8] OBJECT IDENTIFIER
+ ediPartyName [5] IMPLICIT EDIPartyName,
+ uniformResourceIdentifier [6] IMPLICIT IA5String,
+ iPAddress [7] IMPLICIT OCTET STRING,
+ registeredID [8] IMPLICIT OBJECT IDENTIFIER
}
Name ::= SEQUENCE OF RelativeDistinguishedName
attributeType OBJECT IDENTIFIER ({ x509_note_OID }),
attributeValue ANY ({ x509_extract_name_segment })
}
+
+OtherName ::= SEQUENCE {
+ type-id OBJECT IDENTIFIER,
+ value [0] ANY
+ }
+
+EDIPartyName ::= SEQUENCE {
+ nameAssigner [0] ANY OPTIONAL,
+ partyName [1] ANY
+ }
pr_debug("PubKey Algo: %u\n", ctx->last_oid);
switch (ctx->last_oid) {
- case OID_md2WithRSAEncryption:
- case OID_md3WithRSAEncryption:
default:
return -ENOPKG; /* Unsupported combination */
- case OID_md4WithRSAEncryption:
- ctx->cert->sig->hash_algo = "md4";
- goto rsa_pkcs1;
-
- case OID_sha1WithRSAEncryption:
- ctx->cert->sig->hash_algo = "sha1";
- goto rsa_pkcs1;
-
case OID_sha256WithRSAEncryption:
ctx->cert->sig->hash_algo = "sha256";
goto rsa_pkcs1;
ctx->cert->sig->hash_algo = "sha224";
goto rsa_pkcs1;
- case OID_id_ecdsa_with_sha1:
- ctx->cert->sig->hash_algo = "sha1";
- goto ecdsa;
+ case OID_id_rsassa_pkcs1_v1_5_with_sha3_256:
+ ctx->cert->sig->hash_algo = "sha3-256";
+ goto rsa_pkcs1;
+
+ case OID_id_rsassa_pkcs1_v1_5_with_sha3_384:
+ ctx->cert->sig->hash_algo = "sha3-384";
+ goto rsa_pkcs1;
+
+ case OID_id_rsassa_pkcs1_v1_5_with_sha3_512:
+ ctx->cert->sig->hash_algo = "sha3-512";
+ goto rsa_pkcs1;
case OID_id_ecdsa_with_sha224:
ctx->cert->sig->hash_algo = "sha224";
ctx->cert->sig->hash_algo = "sha512";
goto ecdsa;
+ case OID_id_ecdsa_with_sha3_256:
+ ctx->cert->sig->hash_algo = "sha3-256";
+ goto ecdsa;
+
+ case OID_id_ecdsa_with_sha3_384:
+ ctx->cert->sig->hash_algo = "sha3-384";
+ goto ecdsa;
+
+ case OID_id_ecdsa_with_sha3_512:
+ ctx->cert->sig->hash_algo = "sha3-512";
+ goto ecdsa;
+
case OID_gost2012Signature256:
ctx->cert->sig->hash_algo = "streebog256";
goto ecrdsa;
bool blacklisted;
};
-/*
- * selftest.c
- */
-#ifdef CONFIG_FIPS_SIGNATURE_SELFTEST
-extern int __init fips_signature_selftest(void);
-#else
-static inline int fips_signature_selftest(void) { return 0; }
-#endif
-
/*
* x509_cert_parser.c
*/
/*
* Module stuff
*/
-extern int __init certs_selftest(void);
static int __init x509_key_init(void)
{
- int ret;
-
- ret = register_asymmetric_key_parser(&x509_key_parser);
- if (ret < 0)
- return ret;
- return fips_signature_selftest();
+ return register_asymmetric_key_parser(&x509_key_parser);
}
static void __exit x509_key_exit(void)
u8 *hash = areq_ctx->tail;
int err;
- hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
- crypto_ahash_alignmask(auth) + 1);
-
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->dst, hash,
req->assoclen + req->cryptlen);
u8 *hash = areq_ctx->tail;
int err;
- hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
- crypto_ahash_alignmask(auth) + 1);
-
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->src, hash,
req->assoclen + req->cryptlen - authsize);
u32 mask;
struct aead_instance *inst;
struct authenc_instance_ctx *ctx;
+ struct skcipher_alg_common *enc;
struct hash_alg_common *auth;
struct crypto_alg *auth_base;
- struct skcipher_alg *enc;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
crypto_attr_alg_name(tb[2]), 0, mask);
if (err)
goto err_free_inst;
- enc = crypto_spawn_skcipher_alg(&ctx->enc);
+ enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
- ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
- auth_base->cra_alignmask + 1);
+ ctx->reqoff = 2 * auth->digestsize;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
- inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
- enc->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_ctx);
- inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc);
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc);
+ inst->alg.ivsize = enc->ivsize;
+ inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;
inst->alg.init = crypto_authenc_init_tfm;
unsigned int flags)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
- struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
- struct crypto_ahash *auth = ctx->auth;
- u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *hash = areq_ctx->tail;
unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct crypto_ahash *auth = ctx->auth;
- u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *hash = areq_ctx->tail;
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen;
struct skcipher_request *skreq = (void *)(areq_ctx->tail +
ctx->reqoff);
struct crypto_ahash *auth = ctx->auth;
- u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *ohash = areq_ctx->tail;
unsigned int cryptlen = req->cryptlen - authsize;
unsigned int assoclen = req->assoclen;
struct scatterlist *dst = req->dst;
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn);
struct crypto_ahash *auth = ctx->auth;
- u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *ohash = areq_ctx->tail;
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
u8 *ihash = ohash + crypto_ahash_digestsize(auth);
ctx->enc = enc;
ctx->null = null;
- ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth),
- crypto_ahash_alignmask(auth) + 1);
+ ctx->reqoff = 2 * crypto_ahash_digestsize(auth);
crypto_aead_set_reqsize(
tfm,
u32 mask;
struct aead_instance *inst;
struct authenc_esn_instance_ctx *ctx;
+ struct skcipher_alg_common *enc;
struct hash_alg_common *auth;
struct crypto_alg *auth_base;
- struct skcipher_alg *enc;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
crypto_attr_alg_name(tb[2]), 0, mask);
if (err)
goto err_free_inst;
- enc = crypto_spawn_skcipher_alg(&ctx->enc);
+ enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
- inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
- enc->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_esn_ctx);
- inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc);
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc);
+ inst->alg.ivsize = enc->ivsize;
+ inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;
inst->alg.init = crypto_authenc_esn_init_tfm;
* Copyright (c) 2006-2016 Herbert Xu <herbert@gondor.apana.org.au>
*/
-#include <crypto/algapi.h>
-#include <crypto/internal/cipher.h>
#include <crypto/internal/skcipher.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/log2.h>
#include <linux/module.h>
-static int crypto_cbc_encrypt_segment(struct skcipher_walk *walk,
- struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_segment(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst, unsigned nbytes,
+ u8 *iv)
{
- unsigned int bsize = crypto_skcipher_blocksize(skcipher);
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
- unsigned int nbytes = walk->nbytes;
- u8 *src = walk->src.virt.addr;
- u8 *dst = walk->dst.virt.addr;
- struct crypto_cipher *cipher;
- struct crypto_tfm *tfm;
- u8 *iv = walk->iv;
-
- cipher = skcipher_cipher_simple(skcipher);
- tfm = crypto_cipher_tfm(cipher);
- fn = crypto_cipher_alg(cipher)->cia_encrypt;
+ unsigned int bsize = crypto_lskcipher_blocksize(tfm);
- do {
+ for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize) {
crypto_xor(iv, src, bsize);
- fn(tfm, dst, iv);
+ crypto_lskcipher_encrypt(tfm, iv, dst, bsize, NULL);
memcpy(iv, dst, bsize);
-
- src += bsize;
- dst += bsize;
- } while ((nbytes -= bsize) >= bsize);
+ }
return nbytes;
}
-static int crypto_cbc_encrypt_inplace(struct skcipher_walk *walk,
- struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
+ u8 *src, unsigned nbytes, u8 *oiv)
{
- unsigned int bsize = crypto_skcipher_blocksize(skcipher);
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
- unsigned int nbytes = walk->nbytes;
- u8 *src = walk->src.virt.addr;
- struct crypto_cipher *cipher;
- struct crypto_tfm *tfm;
- u8 *iv = walk->iv;
-
- cipher = skcipher_cipher_simple(skcipher);
- tfm = crypto_cipher_tfm(cipher);
- fn = crypto_cipher_alg(cipher)->cia_encrypt;
+ unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+ u8 *iv = oiv;
+
+ if (nbytes < bsize)
+ goto out;
do {
crypto_xor(src, iv, bsize);
- fn(tfm, src, src);
+ crypto_lskcipher_encrypt(tfm, src, src, bsize, NULL);
iv = src;
src += bsize;
} while ((nbytes -= bsize) >= bsize);
- memcpy(walk->iv, iv, bsize);
+ memcpy(oiv, iv, bsize);
+out:
return nbytes;
}
-static int crypto_cbc_encrypt(struct skcipher_request *req)
+static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final)
{
- struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
- struct skcipher_walk walk;
- int err;
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_lskcipher *cipher = *ctx;
+ int rem;
- err = skcipher_walk_virt(&walk, req, false);
+ if (src == dst)
+ rem = crypto_cbc_encrypt_inplace(cipher, dst, len, iv);
+ else
+ rem = crypto_cbc_encrypt_segment(cipher, src, dst, len, iv);
- while (walk.nbytes) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
- err = crypto_cbc_encrypt_inplace(&walk, skcipher);
- else
- err = crypto_cbc_encrypt_segment(&walk, skcipher);
- err = skcipher_walk_done(&walk, err);
- }
-
- return err;
+ return rem && final ? -EINVAL : rem;
}
-static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
- struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_segment(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst, unsigned nbytes,
+ u8 *oiv)
{
- unsigned int bsize = crypto_skcipher_blocksize(skcipher);
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
- unsigned int nbytes = walk->nbytes;
- u8 *src = walk->src.virt.addr;
- u8 *dst = walk->dst.virt.addr;
- struct crypto_cipher *cipher;
- struct crypto_tfm *tfm;
- u8 *iv = walk->iv;
-
- cipher = skcipher_cipher_simple(skcipher);
- tfm = crypto_cipher_tfm(cipher);
- fn = crypto_cipher_alg(cipher)->cia_decrypt;
+ unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+ const u8 *iv = oiv;
+
+ if (nbytes < bsize)
+ goto out;
do {
- fn(tfm, dst, src);
+ crypto_lskcipher_decrypt(tfm, src, dst, bsize, NULL);
crypto_xor(dst, iv, bsize);
iv = src;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
- memcpy(walk->iv, iv, bsize);
+ memcpy(oiv, iv, bsize);
+out:
return nbytes;
}
-static int crypto_cbc_decrypt_inplace(struct skcipher_walk *walk,
- struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
+ u8 *src, unsigned nbytes, u8 *iv)
{
- unsigned int bsize = crypto_skcipher_blocksize(skcipher);
- void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
- unsigned int nbytes = walk->nbytes;
- u8 *src = walk->src.virt.addr;
+ unsigned int bsize = crypto_lskcipher_blocksize(tfm);
u8 last_iv[MAX_CIPHER_BLOCKSIZE];
- struct crypto_cipher *cipher;
- struct crypto_tfm *tfm;
- cipher = skcipher_cipher_simple(skcipher);
- tfm = crypto_cipher_tfm(cipher);
- fn = crypto_cipher_alg(cipher)->cia_decrypt;
+ if (nbytes < bsize)
+ goto out;
/* Start of the last block. */
src += nbytes - (nbytes & (bsize - 1)) - bsize;
memcpy(last_iv, src, bsize);
for (;;) {
- fn(tfm, src, src);
+ crypto_lskcipher_decrypt(tfm, src, src, bsize, NULL);
if ((nbytes -= bsize) < bsize)
break;
crypto_xor(src, src - bsize, bsize);
src -= bsize;
}
- crypto_xor(src, walk->iv, bsize);
- memcpy(walk->iv, last_iv, bsize);
+ crypto_xor(src, iv, bsize);
+ memcpy(iv, last_iv, bsize);
+out:
return nbytes;
}
-static int crypto_cbc_decrypt(struct skcipher_request *req)
+static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final)
{
- struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
- struct skcipher_walk walk;
- int err;
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_lskcipher *cipher = *ctx;
+ int rem;
- err = skcipher_walk_virt(&walk, req, false);
+ if (src == dst)
+ rem = crypto_cbc_decrypt_inplace(cipher, dst, len, iv);
+ else
+ rem = crypto_cbc_decrypt_segment(cipher, src, dst, len, iv);
- while (walk.nbytes) {
- if (walk.src.virt.addr == walk.dst.virt.addr)
- err = crypto_cbc_decrypt_inplace(&walk, skcipher);
- else
- err = crypto_cbc_decrypt_segment(&walk, skcipher);
- err = skcipher_walk_done(&walk, err);
- }
-
- return err;
+ return rem && final ? -EINVAL : rem;
}
static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb)
{
- struct skcipher_instance *inst;
- struct crypto_alg *alg;
+ struct lskcipher_instance *inst;
int err;
- inst = skcipher_alloc_instance_simple(tmpl, tb);
+ inst = lskcipher_alloc_instance_simple(tmpl, tb);
if (IS_ERR(inst))
return PTR_ERR(inst);
- alg = skcipher_ialg_simple(inst);
-
err = -EINVAL;
- if (!is_power_of_2(alg->cra_blocksize))
+ if (!is_power_of_2(inst->alg.co.base.cra_blocksize))
goto out_free_inst;
inst->alg.encrypt = crypto_cbc_encrypt;
inst->alg.decrypt = crypto_cbc_decrypt;
- err = skcipher_register_instance(tmpl, inst);
+ err = lskcipher_register_instance(tmpl, inst);
if (err) {
out_free_inst:
inst->free(inst);
struct cbcmac_desc_ctx {
unsigned int len;
+ u8 dg[];
};
static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
const char *ctr_name,
const char *mac_name)
{
+ struct skcipher_alg_common *ctr;
u32 mask;
struct aead_instance *inst;
struct ccm_instance_ctx *ictx;
- struct skcipher_alg *ctr;
struct hash_alg_common *mac;
int err;
ctr_name, 0, mask);
if (err)
goto err_free_inst;
- ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
+ ctr = crypto_spawn_skcipher_alg_common(&ictx->ctr);
/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
err = -EINVAL;
if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
- crypto_skcipher_alg_ivsize(ctr) != 16 ||
- ctr->base.cra_blocksize != 1)
+ ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
goto err_free_inst;
/* ctr and cbcmac must use the same underlying block cipher. */
inst->alg.base.cra_priority = (mac->base.cra_priority +
ctr->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = mac->base.cra_alignmask |
- ctr->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
inst->alg.ivsize = 16;
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
+ inst->alg.chunksize = ctr->chunksize;
inst->alg.maxauthsize = 16;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
inst->alg.init = crypto_ccm_init_tfm;
{
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_digestsize(pdesc->tfm);
- u8 *dg = (u8 *)ctx + crypto_shash_descsize(pdesc->tfm) - bs;
ctx->len = 0;
- memset(dg, 0, bs);
+ memset(ctx->dg, 0, bs);
return 0;
}
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_digestsize(parent);
- u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
while (len > 0) {
unsigned int l = min(len, bs - ctx->len);
- crypto_xor(dg + ctx->len, p, l);
+ crypto_xor(&ctx->dg[ctx->len], p, l);
ctx->len += l;
len -= l;
p += l;
if (ctx->len == bs) {
- crypto_cipher_encrypt_one(tfm, dg, dg);
+ crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
ctx->len = 0;
}
}
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_digestsize(parent);
- u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
if (ctx->len)
- crypto_cipher_encrypt_one(tfm, dg, dg);
+ crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
- memcpy(out, dg, bs);
+ memcpy(out, ctx->dg, bs);
return 0;
}
inst->alg.base.cra_blocksize = 1;
inst->alg.digestsize = alg->cra_blocksize;
- inst->alg.descsize = ALIGN(sizeof(struct cbcmac_desc_ctx),
- alg->cra_alignmask + 1) +
+ inst->alg.descsize = sizeof(struct cbcmac_desc_ctx) +
alg->cra_blocksize;
inst->alg.base.cra_ctxsize = sizeof(struct cbcmac_tfm_ctx);
u32 mask;
struct aead_instance *inst;
struct chachapoly_instance_ctx *ctx;
- struct skcipher_alg *chacha;
+ struct skcipher_alg_common *chacha;
struct hash_alg_common *poly;
int err;
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
- chacha = crypto_spawn_skcipher_alg(&ctx->chacha);
+ chacha = crypto_spawn_skcipher_alg_common(&ctx->chacha);
err = crypto_grab_ahash(&ctx->poly, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[2]), 0, mask);
if (poly->digestsize != POLY1305_DIGEST_SIZE)
goto err_free_inst;
/* Need 16-byte IV size, including Initial Block Counter value */
- if (crypto_skcipher_alg_ivsize(chacha) != CHACHA_IV_SIZE)
+ if (chacha->ivsize != CHACHA_IV_SIZE)
goto err_free_inst;
/* Not a stream cipher? */
if (chacha->base.cra_blocksize != 1)
inst->alg.base.cra_priority = (chacha->base.cra_priority +
poly->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = chacha->base.cra_alignmask |
- poly->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = chacha->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct chachapoly_ctx) +
ctx->saltlen;
inst->alg.ivsize = ivsize;
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(chacha);
+ inst->alg.chunksize = chacha->chunksize;
inst->alg.maxauthsize = POLY1305_DIGEST_SIZE;
inst->alg.init = chachapoly_init;
inst->alg.exit = chachapoly_exit;
*/
struct cmac_tfm_ctx {
struct crypto_cipher *child;
- u8 ctx[];
+ __be64 consts[];
};
/*
*/
struct cmac_desc_ctx {
unsigned int len;
- u8 ctx[];
+ u8 odds[];
};
static int crypto_cmac_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *ctx = crypto_shash_ctx(parent);
unsigned int bs = crypto_shash_blocksize(parent);
- __be64 *consts = PTR_ALIGN((void *)ctx->ctx,
- (alignmask | (__alignof__(__be64) - 1)) + 1);
+ __be64 *consts = ctx->consts;
u64 _const[2];
int i, err = 0;
u8 msb_mask, gfmask;
static int crypto_cmac_digest_init(struct shash_desc *pdesc)
{
- unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_blocksize(pdesc->tfm);
- u8 *prev = PTR_ALIGN((void *)ctx->ctx, alignmask + 1) + bs;
+ u8 *prev = &ctx->odds[bs];
ctx->len = 0;
memset(prev, 0, bs);
unsigned int len)
{
struct crypto_shash *parent = pdesc->tfm;
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent);
- u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1);
+ u8 *odds = ctx->odds;
u8 *prev = odds + bs;
/* checking the data can fill the block */
static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
{
struct crypto_shash *parent = pdesc->tfm;
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent);
- u8 *consts = PTR_ALIGN((void *)tctx->ctx,
- (alignmask | (__alignof__(__be64) - 1)) + 1);
- u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1);
+ u8 *odds = ctx->odds;
u8 *prev = odds + bs;
unsigned int offset = 0;
}
crypto_xor(prev, odds, bs);
- crypto_xor(prev, consts + offset, bs);
+ crypto_xor(prev, (const u8 *)tctx->consts + offset, bs);
crypto_cipher_encrypt_one(tfm, out, prev);
struct shash_instance *inst;
struct crypto_cipher_spawn *spawn;
struct crypto_alg *alg;
- unsigned long alignmask;
u32 mask;
int err;
if (err)
goto err_free_inst;
- alignmask = alg->cra_alignmask;
- inst->alg.base.cra_alignmask = alignmask;
inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ inst->alg.base.cra_ctxsize = sizeof(struct cmac_tfm_ctx) +
+ alg->cra_blocksize * 2;
inst->alg.digestsize = alg->cra_blocksize;
- inst->alg.descsize =
- ALIGN(sizeof(struct cmac_desc_ctx), crypto_tfm_ctx_alignment())
- + (alignmask & ~(crypto_tfm_ctx_alignment() - 1))
- + alg->cra_blocksize * 2;
-
- inst->alg.base.cra_ctxsize =
- ALIGN(sizeof(struct cmac_tfm_ctx), crypto_tfm_ctx_alignment())
- + ((alignmask | (__alignof__(__be64) - 1)) &
- ~(crypto_tfm_ctx_alignment() - 1))
- + alg->cra_blocksize * 2;
-
+ inst->alg.descsize = sizeof(struct cmac_desc_ctx) +
+ alg->cra_blocksize * 2;
inst->alg.init = crypto_cmac_digest_init;
inst->alg.update = crypto_cmac_digest_update;
inst->alg.final = crypto_cmac_digest_final;
{
struct skcipherd_instance_ctx *ctx;
struct skcipher_instance *inst;
- struct skcipher_alg *alg;
+ struct skcipher_alg_common *alg;
u32 type;
u32 mask;
int err;
if (err)
goto err_free_inst;
- alg = crypto_spawn_skcipher_alg(&ctx->spawn);
+ alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
err = cryptd_init_instance(skcipher_crypto_instance(inst), &alg->base);
if (err)
goto err_free_inst;
inst->alg.base.cra_flags |= CRYPTO_ALG_ASYNC |
(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
- inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+ inst->alg.ivsize = alg->ivsize;
+ inst->alg.chunksize = alg->chunksize;
+ inst->alg.min_keysize = alg->min_keysize;
+ inst->alg.max_keysize = alg->max_keysize;
inst->alg.base.cra_ctxsize = sizeof(struct cryptd_skcipher_ctx);
return PTR_ERR(algt);
switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
- case CRYPTO_ALG_TYPE_SKCIPHER:
+ case CRYPTO_ALG_TYPE_LSKCIPHER:
return cryptd_create_skcipher(tmpl, tb, algt, &queue);
case CRYPTO_ALG_TYPE_HASH:
return cryptd_create_hash(tmpl, tb, algt, &queue);
/**
* crypto_engine_exit - free the resources of hardware engine when exit
* @engine: the hardware engine need to be freed
- *
- * Return 0 for success.
*/
-int crypto_engine_exit(struct crypto_engine *engine)
+void crypto_engine_exit(struct crypto_engine *engine)
{
int ret;
ret = crypto_engine_stop(engine);
if (ret)
- return ret;
+ return;
kthread_destroy_worker(engine->kworker);
-
- return 0;
}
EXPORT_SYMBOL_GPL(crypto_engine_exit);
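
Since crypto_engine_exit() now returns void, callers in driver teardown paths simply stop checking its result. A small sketch under that assumption (the driver structure and function names are hypothetical):

#include <crypto/engine.h>

/* Hypothetical driver state holding the engine allocated at probe time. */
struct example_dev {
	struct crypto_engine *engine;
};

static void example_dev_remove(struct example_dev *dev)
{
	/* No return value to propagate any more; just tear the engine down. */
	crypto_engine_exit(dev->engine);
}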
struct rtattr **tb)
{
struct skcipher_instance *inst;
- struct skcipher_alg *alg;
struct crypto_skcipher_spawn *spawn;
+ struct skcipher_alg_common *alg;
u32 mask;
int err;
if (err)
goto err_free_inst;
- alg = crypto_spawn_skcipher_alg(spawn);
+ alg = crypto_spawn_skcipher_alg_common(spawn);
/* We only support 16-byte blocks. */
err = -EINVAL;
- if (crypto_skcipher_alg_ivsize(alg) != CTR_RFC3686_BLOCK_SIZE)
+ if (alg->ivsize != CTR_RFC3686_BLOCK_SIZE)
goto err_free_inst;
/* Not a stream cipher? */
inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
inst->alg.ivsize = CTR_RFC3686_IV_SIZE;
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
- CTR_RFC3686_NONCE_SIZE;
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
- CTR_RFC3686_NONCE_SIZE;
+ inst->alg.chunksize = alg->chunksize;
+ inst->alg.min_keysize = alg->min_keysize + CTR_RFC3686_NONCE_SIZE;
+ inst->alg.max_keysize = alg->max_keysize + CTR_RFC3686_NONCE_SIZE;
inst->alg.setkey = crypto_rfc3686_setkey;
inst->alg.encrypt = crypto_rfc3686_crypt;
static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
{
struct crypto_skcipher_spawn *spawn;
+ struct skcipher_alg_common *alg;
struct skcipher_instance *inst;
- struct skcipher_alg *alg;
u32 mask;
int err;
if (err)
goto err_free_inst;
- alg = crypto_spawn_skcipher_alg(spawn);
+ alg = crypto_spawn_skcipher_alg_common(spawn);
err = -EINVAL;
- if (crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
+ if (alg->ivsize != alg->base.cra_blocksize)
goto err_free_inst;
if (strncmp(alg->base.cra_name, "cbc(", 4))
inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
inst->alg.ivsize = alg->base.cra_blocksize;
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+ inst->alg.chunksize = alg->chunksize;
+ inst->alg.min_keysize = alg->min_keysize;
+ inst->alg.max_keysize = alg->max_keysize;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_cts_ctx);
struct z_stream_s decomp_stream;
};
-static int deflate_comp_init(struct deflate_ctx *ctx, int format)
+static int deflate_comp_init(struct deflate_ctx *ctx)
{
int ret = 0;
struct z_stream_s *stream = &ctx->comp_stream;
stream->workspace = vzalloc(zlib_deflate_workspacesize(
- MAX_WBITS, MAX_MEM_LEVEL));
+ -DEFLATE_DEF_WINBITS, MAX_MEM_LEVEL));
if (!stream->workspace) {
ret = -ENOMEM;
goto out;
}
- if (format)
- ret = zlib_deflateInit(stream, 3);
- else
- ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
- -DEFLATE_DEF_WINBITS,
- DEFLATE_DEF_MEMLEVEL,
- Z_DEFAULT_STRATEGY);
+ ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
+ -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
+ Z_DEFAULT_STRATEGY);
if (ret != Z_OK) {
ret = -EINVAL;
goto out_free;
goto out;
}
-static int deflate_decomp_init(struct deflate_ctx *ctx, int format)
+static int deflate_decomp_init(struct deflate_ctx *ctx)
{
int ret = 0;
struct z_stream_s *stream = &ctx->decomp_stream;
ret = -ENOMEM;
goto out;
}
- if (format)
- ret = zlib_inflateInit(stream);
- else
- ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
+ ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
if (ret != Z_OK) {
ret = -EINVAL;
goto out_free;
vfree(ctx->decomp_stream.workspace);
}
-static int __deflate_init(void *ctx, int format)
+static int __deflate_init(void *ctx)
{
int ret;
- ret = deflate_comp_init(ctx, format);
+ ret = deflate_comp_init(ctx);
if (ret)
goto out;
- ret = deflate_decomp_init(ctx, format);
+ ret = deflate_decomp_init(ctx);
if (ret)
deflate_comp_exit(ctx);
out:
return ret;
}
-static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
{
struct deflate_ctx *ctx;
int ret;
if (!ctx)
return ERR_PTR(-ENOMEM);
- ret = __deflate_init(ctx, format);
+ ret = __deflate_init(ctx);
if (ret) {
kfree(ctx);
return ERR_PTR(ret);
return ctx;
}
-static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
-{
- return gen_deflate_alloc_ctx(tfm, 0);
-}
-
-static void *zlib_deflate_alloc_ctx(struct crypto_scomp *tfm)
-{
- return gen_deflate_alloc_ctx(tfm, 1);
-}
-
static int deflate_init(struct crypto_tfm *tfm)
{
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
- return __deflate_init(ctx, 0);
+ return __deflate_init(ctx);
}
static void __deflate_exit(void *ctx)
.coa_decompress = deflate_decompress } }
};
-static struct scomp_alg scomp[] = { {
+static struct scomp_alg scomp = {
.alloc_ctx = deflate_alloc_ctx,
.free_ctx = deflate_free_ctx,
.compress = deflate_scompress,
.cra_driver_name = "deflate-scomp",
.cra_module = THIS_MODULE,
}
-}, {
- .alloc_ctx = zlib_deflate_alloc_ctx,
- .free_ctx = deflate_free_ctx,
- .compress = deflate_scompress,
- .decompress = deflate_sdecompress,
- .base = {
- .cra_name = "zlib-deflate",
- .cra_driver_name = "zlib-deflate-scomp",
- .cra_module = THIS_MODULE,
- }
-} };
+};
static int __init deflate_mod_init(void)
{
if (ret)
return ret;
- ret = crypto_register_scomps(scomp, ARRAY_SIZE(scomp));
+ ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg);
return ret;
static void __exit deflate_mod_fini(void)
{
crypto_unregister_alg(&alg);
- crypto_unregister_scomps(scomp, ARRAY_SIZE(scomp));
+ crypto_unregister_scomp(&scomp);
}
subsys_initcall(deflate_mod_init);
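
With the separate "zlib-deflate" scomp dropped above, only the raw "deflate" algorithm remains registered by this module. A minimal sketch of compressing through the acomp front end that wraps it; the demo function name is an assumption and both buffers are taken to be linear memory suitable for sg_init_one():

/* Illustrative sketch only: one-shot compression via the acomp API backed by
 * the remaining "deflate" implementation. */
#include <crypto/acompress.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int demo_deflate(const void *src, unsigned int slen,
			void *dst, unsigned int dlen)
{
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	struct scatterlist in, out;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_acomp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	sg_init_one(&in, src, slen);
	sg_init_one(&out, dst, dlen);
	acomp_request_set_params(req, &in, &out, slen, dlen);
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);

	err = crypto_wait_req(crypto_acomp_compress(req), &wait);

	acomp_request_free(req);
	crypto_free_acomp(tfm);
	return err;
}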
sdesc->shash.tfm = tfm;
drbg->priv_data = sdesc;
- return crypto_shash_alignmask(tfm);
+ return 0;
}
static int drbg_fini_hash_kernel(struct drbg_state *drbg)
* Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
*/
-#include <crypto/algapi.h>
#include <crypto/internal/cipher.h>
#include <crypto/internal/skcipher.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
+#include <linux/slab.h>
-static int crypto_ecb_crypt(struct skcipher_request *req,
- struct crypto_cipher *cipher,
+static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
+ u8 *dst, unsigned nbytes, bool final,
void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
{
const unsigned int bsize = crypto_cipher_blocksize(cipher);
- struct skcipher_walk walk;
- unsigned int nbytes;
- int err;
-
- err = skcipher_walk_virt(&walk, req, false);
- while ((nbytes = walk.nbytes) != 0) {
- const u8 *src = walk.src.virt.addr;
- u8 *dst = walk.dst.virt.addr;
+ while (nbytes >= bsize) {
+ fn(crypto_cipher_tfm(cipher), dst, src);
- do {
- fn(crypto_cipher_tfm(cipher), dst, src);
+ src += bsize;
+ dst += bsize;
- src += bsize;
- dst += bsize;
- } while ((nbytes -= bsize) >= bsize);
-
- err = skcipher_walk_done(&walk, nbytes);
+ nbytes -= bsize;
}
- return err;
+ return nbytes && final ? -EINVAL : nbytes;
}
-static int crypto_ecb_encrypt(struct skcipher_request *req)
+static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+ struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_cipher *cipher = *ctx;
- return crypto_ecb_crypt(req, cipher,
+ return crypto_ecb_crypt(cipher, src, dst, len, final,
crypto_cipher_alg(cipher)->cia_encrypt);
}
-static int crypto_ecb_decrypt(struct skcipher_request *req)
+static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+ struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_cipher *cipher = *ctx;
- return crypto_ecb_crypt(req, cipher,
+ return crypto_ecb_crypt(cipher, src, dst, len, final,
crypto_cipher_alg(cipher)->cia_decrypt);
}
-static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
+static int lskcipher_setkey_simple2(struct crypto_lskcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_cipher *cipher = *ctx;
+
+ crypto_cipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ return crypto_cipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+ struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+ struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_cipher_spawn *spawn;
+ struct crypto_cipher *cipher;
+
+ spawn = lskcipher_instance_ctx(inst);
+ cipher = crypto_spawn_cipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ *ctx = cipher;
+ return 0;
+}
+
+static void lskcipher_exit_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+ struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+
+ crypto_free_cipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple2(struct lskcipher_instance *inst)
+{
+ crypto_drop_cipher(lskcipher_instance_ctx(inst));
+ kfree(inst);
+}
+
+static struct lskcipher_instance *lskcipher_alloc_instance_simple2(
+ struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct crypto_cipher_spawn *spawn;
+ struct lskcipher_instance *inst;
+ struct crypto_alg *cipher_alg;
+ u32 mask;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+ if (err)
+ return ERR_PTR(err);
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+ if (!inst)
+ return ERR_PTR(-ENOMEM);
+ spawn = lskcipher_instance_ctx(inst);
+
+ err = crypto_grab_cipher(spawn, lskcipher_crypto_instance(inst),
+ crypto_attr_alg_name(tb[1]), 0, mask);
+ if (err)
+ goto err_free_inst;
+ cipher_alg = crypto_spawn_cipher_alg(spawn);
+
+ err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+ cipher_alg);
+ if (err)
+ goto err_free_inst;
+
+ inst->free = lskcipher_free_instance_simple2;
+
+ /* Default algorithm properties, can be overridden */
+ inst->alg.co.base.cra_blocksize = cipher_alg->cra_blocksize;
+ inst->alg.co.base.cra_alignmask = cipher_alg->cra_alignmask;
+ inst->alg.co.base.cra_priority = cipher_alg->cra_priority;
+ inst->alg.co.min_keysize = cipher_alg->cra_cipher.cia_min_keysize;
+ inst->alg.co.max_keysize = cipher_alg->cra_cipher.cia_max_keysize;
+ inst->alg.co.ivsize = cipher_alg->cra_blocksize;
+
+ /* Use struct crypto_cipher * by default, can be overridden */
+ inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_cipher *);
+ inst->alg.setkey = lskcipher_setkey_simple2;
+ inst->alg.init = lskcipher_init_tfm_simple2;
+ inst->alg.exit = lskcipher_exit_tfm_simple2;
+
+ return inst;
+
+err_free_inst:
+ lskcipher_free_instance_simple2(inst);
+ return ERR_PTR(err);
+}
+
+static int crypto_ecb_create2(struct crypto_template *tmpl, struct rtattr **tb)
{
- struct skcipher_instance *inst;
+ struct lskcipher_instance *inst;
int err;
- inst = skcipher_alloc_instance_simple(tmpl, tb);
+ inst = lskcipher_alloc_instance_simple2(tmpl, tb);
if (IS_ERR(inst))
return PTR_ERR(inst);
- inst->alg.ivsize = 0; /* ECB mode doesn't take an IV */
+ /* ECB mode doesn't take an IV */
+ inst->alg.co.ivsize = 0;
+
+ inst->alg.encrypt = crypto_ecb_encrypt2;
+ inst->alg.decrypt = crypto_ecb_decrypt2;
+
+ err = lskcipher_register_instance(tmpl, inst);
+ if (err)
+ inst->free(inst);
+
+ return err;
+}
+
+static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct crypto_lskcipher_spawn *spawn;
+ struct lskcipher_alg *cipher_alg;
+ struct lskcipher_instance *inst;
+ int err;
+
+ inst = lskcipher_alloc_instance_simple(tmpl, tb);
+ if (IS_ERR(inst)) {
+ err = crypto_ecb_create2(tmpl, tb);
+ return err;
+ }
+
+ spawn = lskcipher_instance_ctx(inst);
+ cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+ /* ECB mode doesn't take an IV */
+ inst->alg.co.ivsize = 0;
+ if (cipher_alg->co.ivsize)
+ return -EINVAL;
- inst->alg.encrypt = crypto_ecb_encrypt;
- inst->alg.decrypt = crypto_ecb_decrypt;
+ inst->alg.co.base.cra_ctxsize = cipher_alg->co.base.cra_ctxsize;
+ inst->alg.setkey = cipher_alg->setkey;
+ inst->alg.encrypt = cipher_alg->encrypt;
+ inst->alg.decrypt = cipher_alg->decrypt;
+ inst->alg.init = cipher_alg->init;
+ inst->alg.exit = cipher_alg->exit;
- err = skcipher_register_instance(tmpl, inst);
+ err = lskcipher_register_instance(tmpl, inst);
if (err)
inst->free(inst);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("ECB block cipher mode of operation");
MODULE_ALIAS_CRYPTO("ecb");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
{
+ struct skcipher_alg_common *skcipher_alg = NULL;
struct crypto_attr_type *algt;
const char *inner_cipher_name;
const char *shash_name;
struct crypto_instance *inst;
struct crypto_alg *base, *block_base;
struct essiv_instance_ctx *ictx;
- struct skcipher_alg *skcipher_alg = NULL;
struct aead_alg *aead_alg = NULL;
struct crypto_alg *_hash_alg;
struct shash_alg *hash_alg;
mask = crypto_algt_inherited_mask(algt);
switch (type) {
- case CRYPTO_ALG_TYPE_SKCIPHER:
+ case CRYPTO_ALG_TYPE_LSKCIPHER:
skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
sizeof(*ictx), GFP_KERNEL);
if (!skcipher_inst)
inner_cipher_name, 0, mask);
if (err)
goto out_free_inst;
- skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn);
+ skcipher_alg = crypto_spawn_skcipher_alg_common(
+ &ictx->u.skcipher_spawn);
block_base = &skcipher_alg->base;
- ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+ ivsize = skcipher_alg->ivsize;
break;
case CRYPTO_ALG_TYPE_AEAD:
base->cra_alignmask = block_base->cra_alignmask;
base->cra_priority = block_base->cra_priority;
- if (type == CRYPTO_ALG_TYPE_SKCIPHER) {
+ if (type == CRYPTO_ALG_TYPE_LSKCIPHER) {
skcipher_inst->alg.setkey = essiv_skcipher_setkey;
skcipher_inst->alg.encrypt = essiv_skcipher_encrypt;
skcipher_inst->alg.decrypt = essiv_skcipher_decrypt;
skcipher_inst->alg.init = essiv_skcipher_init_tfm;
skcipher_inst->alg.exit = essiv_skcipher_exit_tfm;
- skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
- skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
+ skcipher_inst->alg.min_keysize = skcipher_alg->min_keysize;
+ skcipher_inst->alg.max_keysize = skcipher_alg->max_keysize;
skcipher_inst->alg.ivsize = ivsize;
- skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg);
- skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);
+ skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
skcipher_inst->free = essiv_skcipher_free_instance;
out_free_hash:
crypto_mod_put(_hash_alg);
out_drop_skcipher:
- if (type == CRYPTO_ALG_TYPE_SKCIPHER)
+ if (type == CRYPTO_ALG_TYPE_LSKCIPHER)
crypto_drop_skcipher(&ictx->u.skcipher_spawn);
else
crypto_drop_aead(&ictx->u.aead_spawn);
const char *ctr_name,
const char *ghash_name)
{
+ struct skcipher_alg_common *ctr;
u32 mask;
struct aead_instance *inst;
struct gcm_instance_ctx *ctx;
- struct skcipher_alg *ctr;
struct hash_alg_common *ghash;
int err;
ctr_name, 0, mask);
if (err)
goto err_free_inst;
- ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+ ctr = crypto_spawn_skcipher_alg_common(&ctx->ctr);
/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
err = -EINVAL;
if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
- crypto_skcipher_alg_ivsize(ctr) != 16 ||
- ctr->base.cra_blocksize != 1)
+ ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
goto err_free_inst;
err = -ENAMETOOLONG;
inst->alg.base.cra_priority = (ghash->base.cra_priority +
ctr->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = ghash->base.cra_alignmask |
- ctr->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
inst->alg.ivsize = GCM_AES_IV_SIZE;
- inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
+ inst->alg.chunksize = ctr->chunksize;
inst->alg.maxauthsize = 16;
inst->alg.init = crypto_gcm_init_tfm;
inst->alg.exit = crypto_gcm_exit_tfm;
#include "internal.h"
+static inline struct crypto_istat_hash *hash_get_stat(
+ struct hash_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+ return &alg->stat;
+#else
+ return NULL;
+#endif
+}
+
static inline int crypto_hash_report_stat(struct sk_buff *skb,
struct crypto_alg *alg,
const char *type)
return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash);
}
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm);
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
- struct crypto_ahash *hash);
+extern const struct crypto_type crypto_shash_type;
int hash_prepare_alg(struct hash_alg_common *alg);
[HASH_ALGO_SM3_256] = "sm3",
[HASH_ALGO_STREEBOG_256] = "streebog256",
[HASH_ALGO_STREEBOG_512] = "streebog512",
+ [HASH_ALGO_SHA3_256] = "sha3-256",
+ [HASH_ALGO_SHA3_384] = "sha3-384",
+ [HASH_ALGO_SHA3_512] = "sha3-512",
};
EXPORT_SYMBOL_GPL(hash_algo_name);
[HASH_ALGO_SM3_256] = SM3256_DIGEST_SIZE,
[HASH_ALGO_STREEBOG_256] = STREEBOG256_DIGEST_SIZE,
[HASH_ALGO_STREEBOG_512] = STREEBOG512_DIGEST_SIZE,
+ [HASH_ALGO_SHA3_256] = SHA3_256_DIGEST_SIZE,
+ [HASH_ALGO_SHA3_384] = SHA3_384_DIGEST_SIZE,
+ [HASH_ALGO_SHA3_512] = SHA3_512_DIGEST_SIZE,
};
EXPORT_SYMBOL_GPL(hash_digest_size);
const char *xctr_name,
const char *polyval_name)
{
+ struct skcipher_alg_common *xctr_alg;
u32 mask;
struct skcipher_instance *inst;
struct hctr2_instance_ctx *ictx;
- struct skcipher_alg *xctr_alg;
struct crypto_alg *blockcipher_alg;
struct shash_alg *polyval_alg;
char blockcipher_name[CRYPTO_MAX_ALG_NAME];
xctr_name, 0, mask);
if (err)
goto err_free_inst;
- xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+ xctr_alg = crypto_spawn_skcipher_alg_common(&ictx->xctr_spawn);
err = -EINVAL;
if (strncmp(xctr_alg->base.cra_name, "xctr(", 5))
inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) +
polyval_alg->statesize * 2;
- inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask |
- polyval_alg->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask;
/*
* The hash function is called twice, so it is weighted higher than the
* xctr and blockcipher.
inst->alg.decrypt = hctr2_decrypt;
inst->alg.init = hctr2_init_tfm;
inst->alg.exit = hctr2_exit_tfm;
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg);
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg);
+ inst->alg.min_keysize = xctr_alg->min_keysize;
+ inst->alg.max_keysize = xctr_alg->max_keysize;
inst->alg.ivsize = TWEAK_SIZE;
inst->free = hctr2_free_instance;
struct hmac_ctx {
struct crypto_shash *hash;
+ /* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */
+ u8 pads[];
};
-static inline void *align_ptr(void *p, unsigned int align)
-{
- return (void *)ALIGN((unsigned long)p, align);
-}
-
-static inline struct hmac_ctx *hmac_ctx(struct crypto_shash *tfm)
-{
- return align_ptr(crypto_shash_ctx_aligned(tfm) +
- crypto_shash_statesize(tfm) * 2,
- crypto_tfm_ctx_alignment());
-}
-
static int hmac_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
int bs = crypto_shash_blocksize(parent);
int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent);
- char *ipad = crypto_shash_ctx_aligned(parent);
- char *opad = ipad + ss;
- struct hmac_ctx *ctx = align_ptr(opad + ss,
- crypto_tfm_ctx_alignment());
- struct crypto_shash *hash = ctx->hash;
+ struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+ struct crypto_shash *hash = tctx->hash;
+ u8 *ipad = &tctx->pads[0];
+ u8 *opad = &tctx->pads[ss];
SHASH_DESC_ON_STACK(shash, hash);
unsigned int i;
static int hmac_import(struct shash_desc *pdesc, const void *in)
{
struct shash_desc *desc = shash_desc_ctx(pdesc);
- struct hmac_ctx *ctx = hmac_ctx(pdesc->tfm);
+ const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
- desc->tfm = ctx->hash;
+ desc->tfm = tctx->hash;
return crypto_shash_import(desc, in);
}
static int hmac_init(struct shash_desc *pdesc)
{
- return hmac_import(pdesc, crypto_shash_ctx_aligned(pdesc->tfm));
+ const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
+
+ return hmac_import(pdesc, &tctx->pads[0]);
}
static int hmac_update(struct shash_desc *pdesc,
struct crypto_shash *parent = pdesc->tfm;
int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent);
- char *opad = crypto_shash_ctx_aligned(parent) + ss;
+ const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+ const u8 *opad = &tctx->pads[ss];
struct shash_desc *desc = shash_desc_ctx(pdesc);
return crypto_shash_final(desc, out) ?:
struct crypto_shash *parent = pdesc->tfm;
int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent);
- char *opad = crypto_shash_ctx_aligned(parent) + ss;
+ const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+ const u8 *opad = &tctx->pads[ss];
struct shash_desc *desc = shash_desc_ctx(pdesc);
return crypto_shash_finup(desc, data, nbytes, out) ?:
struct crypto_shash *hash;
struct shash_instance *inst = shash_alg_instance(parent);
struct crypto_shash_spawn *spawn = shash_instance_ctx(inst);
- struct hmac_ctx *ctx = hmac_ctx(parent);
+ struct hmac_ctx *tctx = crypto_shash_ctx(parent);
hash = crypto_spawn_shash(spawn);
if (IS_ERR(hash))
parent->descsize = sizeof(struct shash_desc) +
crypto_shash_descsize(hash);
- ctx->hash = hash;
+ tctx->hash = hash;
return 0;
}
static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
{
- struct hmac_ctx *sctx = hmac_ctx(src);
- struct hmac_ctx *dctx = hmac_ctx(dst);
+ struct hmac_ctx *sctx = crypto_shash_ctx(src);
+ struct hmac_ctx *dctx = crypto_shash_ctx(dst);
struct crypto_shash *hash;
hash = crypto_clone_shash(sctx->hash);
static void hmac_exit_tfm(struct crypto_shash *parent)
{
- struct hmac_ctx *ctx = hmac_ctx(parent);
+ struct hmac_ctx *tctx = crypto_shash_ctx(parent);
- crypto_free_shash(ctx->hash);
+ crypto_free_shash(tctx->hash);
}
static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize;
- inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) + (ss * 2);
- ss = ALIGN(ss, alg->cra_alignmask + 1);
inst->alg.digestsize = ds;
inst->alg.statesize = ss;
-
- inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) +
- ALIGN(ss * 2, crypto_tfm_ctx_alignment());
-
inst->alg.init = hmac_init;
inst->alg.update = hmac_update;
inst->alg.final = hmac_final;
* Helper function
***************************************************************************/
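+/*
+ * The entropy collector's scratch memory is now sized via Kconfig and may be
+ * large, so it is allocated with a vmalloc fallback and wiped before freeing.
+ */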
+void *jent_kvzalloc(unsigned int len)
+{
+ return kvzalloc(len, GFP_KERNEL);
+}
+
+void jent_kvzfree(void *ptr, unsigned int len)
+{
+ memzero_explicit(ptr, len);
+ kvfree(ptr);
+}
+
void *jent_zalloc(unsigned int len)
{
return kzalloc(len, GFP_KERNEL);
crypto_shash_init(sdesc);
rng->sdesc = sdesc;
- rng->entropy_collector = jent_entropy_collector_alloc(1, 0, sdesc);
+ rng->entropy_collector =
+ jent_entropy_collector_alloc(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0,
+ sdesc);
if (!rng->entropy_collector) {
ret = -ENOMEM;
goto err;
desc->tfm = tfm;
crypto_shash_init(desc);
- ret = jent_entropy_init(desc);
+ ret = jent_entropy_init(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0, desc, NULL);
shash_desc_zero(desc);
crypto_free_shash(tfm);
if (ret) {
__u64 prev_time; /* SENSITIVE Previous time stamp */
__u64 last_delta; /* SENSITIVE stuck test */
__s64 last_delta2; /* SENSITIVE stuck test */
+
+ unsigned int flags; /* Flags used to initialize */
unsigned int osr; /* Oversample rate */
-#define JENT_MEMORY_BLOCKS 64
-#define JENT_MEMORY_BLOCKSIZE 32
#define JENT_MEMORY_ACCESSLOOPS 128
-#define JENT_MEMORY_SIZE (JENT_MEMORY_BLOCKS*JENT_MEMORY_BLOCKSIZE)
+#define JENT_MEMORY_SIZE \
+ (CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS * \
+ CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE)
unsigned char *mem; /* Memory access location with size of
* memblocks * memblocksize */
unsigned int memlocation; /* Pointer to byte in *mem */
/* Repetition Count Test */
unsigned int rct_count; /* Number of stuck values */
- /* Intermittent health test failure threshold of 2^-30 */
- /* From an SP800-90B perspective, this RCT cutoff value is equal to 31. */
- /* However, our RCT implementation starts at 1, so we subtract 1 here. */
-#define JENT_RCT_CUTOFF (31 - 1) /* Taken from SP800-90B sec 4.4.1 */
-#define JENT_APT_CUTOFF 325 /* Taken from SP800-90B sec 4.4.2 */
- /* Permanent health test failure threshold of 2^-60 */
- /* From an SP800-90B perspective, this RCT cutoff value is equal to 61. */
- /* However, our RCT implementation starts at 1, so we subtract 1 here. */
-#define JENT_RCT_CUTOFF_PERMANENT (61 - 1)
-#define JENT_APT_CUTOFF_PERMANENT 355
+ /* Adaptive Proportion Test cutoff values */
+ unsigned int apt_cutoff; /* Intermittent health test failure */
+ unsigned int apt_cutoff_permanent; /* Permanent health test failure */
#define JENT_APT_WINDOW_SIZE 512 /* Data window size */
/* LSB of time stamp to process */
#define JENT_APT_LSB 16
unsigned int apt_observations; /* Number of collected observations */
unsigned int apt_count; /* APT counter */
unsigned int apt_base; /* APT base reference */
+ unsigned int health_failure; /* Record health failure */
+
unsigned int apt_base_set:1; /* APT base reference set? */
};
* zero). */
#define JENT_ESTUCK 8 /* Too many stuck results during init. */
#define JENT_EHEALTH 9 /* Health test failed during initialization */
+#define JENT_ERCT 10 /* RCT failed during initialization */
+#define JENT_EHASH 11 /* Hash self test failed */
+#define JENT_EMEM 12 /* Can't allocate memory for initialization */
+
+#define JENT_RCT_FAILURE 1 /* Failure in RCT health test. */
+#define JENT_APT_FAILURE 2 /* Failure in APT health test. */
+#define JENT_PERMANENT_FAILURE_SHIFT 16
+#define JENT_PERMANENT_FAILURE(x) (x << JENT_PERMANENT_FAILURE_SHIFT)
+#define JENT_RCT_FAILURE_PERMANENT JENT_PERMANENT_FAILURE(JENT_RCT_FAILURE)
+#define JENT_APT_FAILURE_PERMANENT JENT_PERMANENT_FAILURE(JENT_APT_FAILURE)
/*
* The output n bits can receive more than n bits of min entropy, of course,
* This test complies with SP800-90B section 4.4.2.
***************************************************************************/
+/*
+ * See the SP 800-90B comment #10b for the corrected cutoff for the SP 800-90B
+ * APT.
+ * http://www.untruth.org/~josh/sp80090b/UL%20SP800-90B-final%20comments%20v1.9%2020191212.pdf
+ * In the syntax of R, this is C = 2 + qbinom(1 − 2^(−30), 511, 2^(-1/osr)).
+ * (The original formula wasn't correct because the first symbol must
+ * necessarily have been observed, so there is no chance of observing 0 of these
+ * symbols.)
+ *
+ * For alpha < 2^-53, R cannot be used as it uses a float data type without
+ * arbitrary precision. A SageMath script is used to calculate those cutoff
+ * values.
+ *
+ * For any value above 14, this yields the maximal allowable value of 512
+ * (by FIPS 140-2 IG 7.19 Resolution # 16, we cannot choose a cutoff value that
+ * renders the test unable to fail).
+ */
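+/*
+ * For example, with an oversampling rate of 1 (the minimum), the tables below
+ * give an intermittent cutoff of 325 and a permanent cutoff of 355 within the
+ * 512-observation APT window.
+ */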
+static const unsigned int jent_apt_cutoff_lookup[15] = {
+ 325, 422, 459, 477, 488, 494, 499, 502,
+ 505, 507, 508, 509, 510, 511, 512 };
+static const unsigned int jent_apt_cutoff_permanent_lookup[15] = {
+ 355, 447, 479, 494, 502, 507, 510, 512,
+ 512, 512, 512, 512, 512, 512, 512 };
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+static void jent_apt_init(struct rand_data *ec, unsigned int osr)
+{
+ /*
+ * Establish the apt_cutoff based on the presumed entropy rate of
+ * 1/osr.
+ */
+ if (osr >= ARRAY_SIZE(jent_apt_cutoff_lookup)) {
+ ec->apt_cutoff = jent_apt_cutoff_lookup[
+ ARRAY_SIZE(jent_apt_cutoff_lookup) - 1];
+ ec->apt_cutoff_permanent = jent_apt_cutoff_permanent_lookup[
+ ARRAY_SIZE(jent_apt_cutoff_permanent_lookup) - 1];
+ } else {
+ ec->apt_cutoff = jent_apt_cutoff_lookup[osr - 1];
+ ec->apt_cutoff_permanent =
+ jent_apt_cutoff_permanent_lookup[osr - 1];
+ }
+}
/*
* Reset the APT counter
*
return;
}
- if (delta_masked == ec->apt_base)
+ if (delta_masked == ec->apt_base) {
ec->apt_count++;
+ /* Note, ec->apt_count starts with one. */
+ if (ec->apt_count >= ec->apt_cutoff_permanent)
+ ec->health_failure |= JENT_APT_FAILURE_PERMANENT;
+ else if (ec->apt_count >= ec->apt_cutoff)
+ ec->health_failure |= JENT_APT_FAILURE;
+ }
+
ec->apt_observations++;
if (ec->apt_observations >= JENT_APT_WINDOW_SIZE)
jent_apt_reset(ec, delta_masked);
}
-/* APT health test failure detection */
-static int jent_apt_permanent_failure(struct rand_data *ec)
-{
- return (ec->apt_count >= JENT_APT_CUTOFF_PERMANENT) ? 1 : 0;
-}
-
-static int jent_apt_failure(struct rand_data *ec)
-{
- return (ec->apt_count >= JENT_APT_CUTOFF) ? 1 : 0;
-}
-
/***************************************************************************
* Stuck Test and its use as Repetition Count Test
*
{
if (stuck) {
ec->rct_count++;
+
+ /*
+ * The cutoff value is based on the following consideration:
+ * alpha = 2^-30 or 2^-60 as recommended in SP800-90B.
+ * In addition, we require an entropy value H of 1/osr as this
+ * is the minimum entropy required to provide full entropy.
+ * Note, we collect (DATA_SIZE_BITS + ENTROPY_SAFETY_FACTOR)*osr
+ * deltas for inserting them into the entropy pool which should
+ * then have (close to) DATA_SIZE_BITS bits of entropy in the
+ * conditioned output.
+ *
+		 * Note, ec->rct_count (which corresponds to value B in the pseudo
+ * code of SP800-90B section 4.4.1) starts with zero. Hence
+ * we need to subtract one from the cutoff value as calculated
+ * following SP800-90B. Thus C = ceil(-log_2(alpha)/H) = 30*osr
+ * or 60*osr.
+ */
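+		/* E.g. osr == 1: intermittent cutoff 30, permanent cutoff 60. */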
+ if ((unsigned int)ec->rct_count >= (60 * ec->osr)) {
+ ec->rct_count = -1;
+ ec->health_failure |= JENT_RCT_FAILURE_PERMANENT;
+ } else if ((unsigned int)ec->rct_count >= (30 * ec->osr)) {
+ ec->rct_count = -1;
+ ec->health_failure |= JENT_RCT_FAILURE;
+ }
} else {
/* Reset RCT */
ec->rct_count = 0;
return 0;
}
-/* RCT health test failure detection */
-static int jent_rct_permanent_failure(struct rand_data *ec)
-{
- return (ec->rct_count >= JENT_RCT_CUTOFF_PERMANENT) ? 1 : 0;
-}
-
-static int jent_rct_failure(struct rand_data *ec)
-{
- return (ec->rct_count >= JENT_RCT_CUTOFF) ? 1 : 0;
-}
-
-/* Report of health test failures */
-static int jent_health_failure(struct rand_data *ec)
+/*
+ * Report any health test failures
+ *
+ * @ec [in] Reference to entropy collector
+ *
+ * @return a bitmask indicating which tests failed
+ * 0 No health test failure
+ * 1 RCT failure
+ * 2 APT failure
+ * 1<<JENT_PERMANENT_FAILURE_SHIFT RCT permanent failure
+ * 2<<JENT_PERMANENT_FAILURE_SHIFT APT permanent failure
+ */
+static unsigned int jent_health_failure(struct rand_data *ec)
{
- return jent_rct_failure(ec) | jent_apt_failure(ec);
-}
+ /* Test is only enabled in FIPS mode */
+ if (!fips_enabled)
+ return 0;
-static int jent_permanent_health_failure(struct rand_data *ec)
-{
- return jent_rct_permanent_failure(ec) | jent_apt_permanent_failure(ec);
+ return ec->health_failure;
}
/***************************************************************************
*
* @return result of stuck test
*/
-static int jent_measure_jitter(struct rand_data *ec)
+static int jent_measure_jitter(struct rand_data *ec, __u64 *ret_current_delta)
{
__u64 time = 0;
__u64 current_delta = 0;
if (jent_condition_data(ec, current_delta, stuck))
stuck = 1;
+ /* return the raw entropy value */
+ if (ret_current_delta)
+ *ret_current_delta = current_delta;
+
return stuck;
}
safety_factor = JENT_ENTROPY_SAFETY_FACTOR;
/* priming of the ->prev_time value */
- jent_measure_jitter(ec);
+ jent_measure_jitter(ec, NULL);
while (!jent_health_failure(ec)) {
/* If a stuck measurement is received, repeat measurement */
- if (jent_measure_jitter(ec))
+ if (jent_measure_jitter(ec, NULL))
continue;
/*
return -1;
while (len > 0) {
- unsigned int tocopy;
+ unsigned int tocopy, health_test_result;
jent_gen_entropy(ec);
- if (jent_permanent_health_failure(ec)) {
+ health_test_result = jent_health_failure(ec);
+ if (health_test_result > JENT_PERMANENT_FAILURE_SHIFT) {
/*
* At this point, the Jitter RNG instance is considered
* as a failed instance. There is no rerun of the
* is assumed to not further use this instance.
*/
return -3;
- } else if (jent_health_failure(ec)) {
+ } else if (health_test_result) {
/*
* Perform startup health tests and return permanent
* error if it fails.
*/
- if (jent_entropy_init(ec->hash_state))
+ if (jent_entropy_init(0, 0, NULL, ec)) {
+				/* Keep only the permanent failure indications */
+ ec->health_failure &=
+ JENT_RCT_FAILURE_PERMANENT |
+ JENT_APT_FAILURE_PERMANENT;
return -3;
+ }
return -2;
}
/* Allocate memory for adding variations based on memory
* access
*/
- entropy_collector->mem = jent_zalloc(JENT_MEMORY_SIZE);
+ entropy_collector->mem = jent_kvzalloc(JENT_MEMORY_SIZE);
if (!entropy_collector->mem) {
jent_zfree(entropy_collector);
return NULL;
}
- entropy_collector->memblocksize = JENT_MEMORY_BLOCKSIZE;
- entropy_collector->memblocks = JENT_MEMORY_BLOCKS;
+ entropy_collector->memblocksize =
+ CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE;
+ entropy_collector->memblocks =
+ CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS;
entropy_collector->memaccessloops = JENT_MEMORY_ACCESSLOOPS;
}
/* verify and set the oversampling rate */
if (osr == 0)
- osr = 1; /* minimum sampling rate is 1 */
+ osr = 1; /* H_submitter = 1 / osr */
entropy_collector->osr = osr;
+ entropy_collector->flags = flags;
entropy_collector->hash_state = hash_state;
+ /* Initialize the APT */
+ jent_apt_init(entropy_collector, osr);
+
/* fill the data pad with non-zero values */
jent_gen_entropy(entropy_collector);
void jent_entropy_collector_free(struct rand_data *entropy_collector)
{
- jent_zfree(entropy_collector->mem);
+ jent_kvzfree(entropy_collector->mem, JENT_MEMORY_SIZE);
entropy_collector->mem = NULL;
jent_zfree(entropy_collector);
}
-int jent_entropy_init(void *hash_state)
+int jent_entropy_init(unsigned int osr, unsigned int flags, void *hash_state,
+ struct rand_data *p_ec)
{
- int i;
- __u64 delta_sum = 0;
- __u64 old_delta = 0;
- unsigned int nonstuck = 0;
- int time_backwards = 0;
- int count_mod = 0;
- int count_stuck = 0;
- struct rand_data ec = { 0 };
-
- /* Required for RCT */
- ec.osr = 1;
- ec.hash_state = hash_state;
+ /*
+	 * If the caller provides an allocated ec, reuse it, which implies that
+	 * the entropy data gathered during the health test is also fed into
+	 * the available entropy pool.
+ */
+ struct rand_data *ec = p_ec;
+ int i, time_backwards = 0, ret = 0, ec_free = 0;
+ unsigned int health_test_result;
+
+ if (!ec) {
+ ec = jent_entropy_collector_alloc(osr, flags, hash_state);
+ if (!ec)
+ return JENT_EMEM;
+ ec_free = 1;
+ } else {
+ /* Reset the APT */
+ jent_apt_reset(ec, 0);
+ /* Ensure that a new APT base is obtained */
+ ec->apt_base_set = 0;
+ /* Reset the RCT */
+ ec->rct_count = 0;
+ /* Reset intermittent, leave permanent health test result */
+ ec->health_failure &= (~JENT_RCT_FAILURE);
+ ec->health_failure &= (~JENT_APT_FAILURE);
+ }
/* We could perform statistical tests here, but the problem is
* that we only have a few loop counts to do testing. These
#define TESTLOOPCOUNT 1024
#define CLEARCACHE 100
for (i = 0; (TESTLOOPCOUNT + CLEARCACHE) > i; i++) {
- __u64 time = 0;
- __u64 time2 = 0;
- __u64 delta = 0;
- unsigned int lowdelta = 0;
- int stuck;
+ __u64 start_time = 0, end_time = 0, delta = 0;
/* Invoke core entropy collection logic */
- jent_get_nstime(&time);
- ec.prev_time = time;
- jent_condition_data(&ec, time, 0);
- jent_get_nstime(&time2);
+ jent_measure_jitter(ec, &delta);
+ end_time = ec->prev_time;
+ start_time = ec->prev_time - delta;
/* test whether timer works */
- if (!time || !time2)
- return JENT_ENOTIME;
- delta = jent_delta(time, time2);
+ if (!start_time || !end_time) {
+ ret = JENT_ENOTIME;
+ goto out;
+ }
+
/*
* test whether timer is fine grained enough to provide
* delta even when called shortly after each other -- this
* implies that we also have a high resolution timer
*/
- if (!delta)
- return JENT_ECOARSETIME;
-
- stuck = jent_stuck(&ec, delta);
+ if (!delta || (end_time == start_time)) {
+ ret = JENT_ECOARSETIME;
+ goto out;
+ }
/*
* up to here we did not modify any variable that will be
if (i < CLEARCACHE)
continue;
- if (stuck)
- count_stuck++;
- else {
- nonstuck++;
-
- /*
- * Ensure that the APT succeeded.
- *
- * With the check below that count_stuck must be less
- * than 10% of the overall generated raw entropy values
- * it is guaranteed that the APT is invoked at
- * floor((TESTLOOPCOUNT * 0.9) / 64) == 14 times.
- */
- if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
- jent_apt_reset(&ec,
- delta & JENT_APT_WORD_MASK);
- }
- }
-
- /* Validate health test result */
- if (jent_health_failure(&ec))
- return JENT_EHEALTH;
-
/* test whether we have an increasing timer */
- if (!(time2 > time))
+ if (!(end_time > start_time))
time_backwards++;
-
- /* use 32 bit value to ensure compilation on 32 bit arches */
- lowdelta = time2 - time;
- if (!(lowdelta % 100))
- count_mod++;
-
- /*
- * ensure that we have a varying delta timer which is necessary
- * for the calculation of entropy -- perform this check
- * only after the first loop is executed as we need to prime
- * the old_data value
- */
- if (delta > old_delta)
- delta_sum += (delta - old_delta);
- else
- delta_sum += (old_delta - delta);
- old_delta = delta;
}
/*
* should not fail. The value of 3 should cover the NTP case being
* performed during our test run.
*/
- if (time_backwards > 3)
- return JENT_ENOMONOTONIC;
-
- /*
- * Variations of deltas of time must on average be larger
- * than 1 to ensure the entropy estimation
- * implied with 1 is preserved
- */
- if ((delta_sum) <= 1)
- return JENT_EVARVAR;
+ if (time_backwards > 3) {
+ ret = JENT_ENOMONOTONIC;
+ goto out;
+ }
- /*
- * Ensure that we have variations in the time stamp below 10 for at
- * least 10% of all checks -- on some platforms, the counter increments
- * in multiples of 100, but not always
- */
- if ((TESTLOOPCOUNT/10 * 9) < count_mod)
- return JENT_ECOARSETIME;
+ /* Did we encounter a health test failure? */
+ health_test_result = jent_health_failure(ec);
+ if (health_test_result) {
+ ret = (health_test_result & JENT_RCT_FAILURE) ? JENT_ERCT :
+ JENT_EHEALTH;
+ goto out;
+ }
- /*
- * If we have more than 90% stuck results, then this Jitter RNG is
- * likely to not work well.
- */
- if ((TESTLOOPCOUNT/10 * 9) < count_stuck)
- return JENT_ESTUCK;
+out:
+ if (ec_free)
+ jent_entropy_collector_free(ec);
- return 0;
+ return ret;
}
// SPDX-License-Identifier: GPL-2.0-or-later
+extern void *jent_kvzalloc(unsigned int len);
+extern void jent_kvzfree(void *ptr, unsigned int len);
extern void *jent_zalloc(unsigned int len);
extern void jent_zfree(void *ptr);
extern void jent_get_nstime(__u64 *out);
int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len);
struct rand_data;
-extern int jent_entropy_init(void *hash_state);
+extern int jent_entropy_init(unsigned int osr, unsigned int flags,
+ void *hash_state, struct rand_data *p_ec);
extern int jent_read_entropy(struct rand_data *ec, unsigned char *data,
unsigned int len);
static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
{
struct crypto_skcipher_spawn *spawn;
+ struct skcipher_alg_common *alg;
struct skcipher_instance *inst;
- struct skcipher_alg *alg;
const char *cipher_name;
char ecb_name[CRYPTO_MAX_ALG_NAME];
u32 mask;
if (err)
goto err_free_inst;
- alg = crypto_skcipher_spawn_alg(spawn);
+ alg = crypto_spawn_skcipher_alg_common(spawn);
err = -EINVAL;
if (alg->base.cra_blocksize != LRW_BLOCK_SIZE)
goto err_free_inst;
- if (crypto_skcipher_alg_ivsize(alg))
+ if (alg->ivsize)
goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "lrw",
(__alignof__(be128) - 1);
inst->alg.ivsize = LRW_BLOCK_SIZE;
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
- LRW_BLOCK_SIZE;
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
- LRW_BLOCK_SIZE;
+ inst->alg.min_keysize = alg->min_keysize + LRW_BLOCK_SIZE;
+ inst->alg.max_keysize = alg->max_keysize + LRW_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct lrw_tfm_ctx);
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linear symmetric key cipher operations.
+ *
+ * Generic encrypt/decrypt wrapper for ciphers.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+
+#include <linux/cryptouser.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <net/netlink.h>
+#include "skcipher.h"
+
+static inline struct crypto_lskcipher *__crypto_lskcipher_cast(
+ struct crypto_tfm *tfm)
+{
+ return container_of(tfm, struct crypto_lskcipher, base);
+}
+
+static inline struct lskcipher_alg *__crypto_lskcipher_alg(
+ struct crypto_alg *alg)
+{
+ return container_of(alg, struct lskcipher_alg, co.base);
+}
+
+static inline struct crypto_istat_cipher *lskcipher_get_stat(
+ struct lskcipher_alg *alg)
+{
+ return skcipher_get_stat_common(&alg->co);
+}
+
+static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
+{
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;
+
+ if (err)
+ atomic64_inc(&istat->err_cnt);
+
+ return err;
+}
+
+static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+ u8 *buffer, *alignbuffer;
+ unsigned long absize;
+ int ret;
+
+ absize = keylen + alignmask;
+ buffer = kmalloc(absize, GFP_ATOMIC);
+ if (!buffer)
+ return -ENOMEM;
+
+ alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ memcpy(alignbuffer, key, keylen);
+ ret = cipher->setkey(tfm, alignbuffer, keylen);
+ kfree_sensitive(buffer);
+ return ret;
+}
+
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+
+ if (keylen < cipher->co.min_keysize || keylen > cipher->co.max_keysize)
+ return -EINVAL;
+
+ if ((unsigned long)key & alignmask)
+ return lskcipher_setkey_unaligned(tfm, key, keylen);
+ else
+ return cipher->setkey(tfm, key, keylen);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
+
+static int crypto_lskcipher_crypt_unaligned(
+ struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
+ u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final))
+{
+ unsigned ivsize = crypto_lskcipher_ivsize(tfm);
+ unsigned bs = crypto_lskcipher_blocksize(tfm);
+ unsigned cs = crypto_lskcipher_chunksize(tfm);
+ int err;
+ u8 *tiv;
+ u8 *p;
+
+ BUILD_BUG_ON(MAX_CIPHER_BLOCKSIZE > PAGE_SIZE ||
+ MAX_CIPHER_ALIGNMASK >= PAGE_SIZE);
+
+ tiv = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+ if (!tiv)
+ return -ENOMEM;
+
+ memcpy(tiv, iv, ivsize);
+
+ p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+ err = -ENOMEM;
+ if (!p)
+ goto out;
+
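+	/*
+	 * Bounce the data through a page-sized aligned buffer, one chunk at a
+	 * time; each chunk is rounded down to a multiple of the chunk size so
+	 * that the underlying algorithm only ever sees aligned input.
+	 */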
+ while (len >= bs) {
+ unsigned chunk = min((unsigned)PAGE_SIZE, len);
+ int err;
+
+ if (chunk > cs)
+ chunk &= ~(cs - 1);
+
+ memcpy(p, src, chunk);
+ err = crypt(tfm, p, p, chunk, tiv, true);
+ if (err)
+ goto out;
+
+ memcpy(dst, p, chunk);
+ src += chunk;
+ dst += chunk;
+ len -= chunk;
+ }
+
+ err = len ? -EINVAL : 0;
+
+out:
+ memcpy(iv, tiv, ivsize);
+ kfree_sensitive(p);
+ kfree_sensitive(tiv);
+ return err;
+}
+
+static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv,
+ int (*crypt)(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst,
+ unsigned len, u8 *iv,
+ bool final))
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+ int ret;
+
+ if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
+ alignmask) {
+ ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
+ crypt);
+ goto out;
+ }
+
+ ret = crypt(tfm, src, dst, len, iv, true);
+
+out:
+ return crypto_lskcipher_errstat(alg, ret);
+}
+
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv)
+{
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ atomic64_inc(&istat->encrypt_cnt);
+ atomic64_add(len, &istat->encrypt_tlen);
+ }
+
+ return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
+
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv)
+{
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ atomic64_inc(&istat->decrypt_cnt);
+ atomic64_add(len, &istat->decrypt_tlen);
+ }
+
+ return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
+
+static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
+ int (*crypt)(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst,
+ unsigned len, u8 *iv,
+ bool final))
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct crypto_lskcipher *tfm = *ctx;
+ struct skcipher_walk walk;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
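+	/*
+	 * Feed each contiguous span produced by the walk to the lskcipher;
+	 * 'final' is set only on the last span so that chaining modes can
+	 * finalize correctly.
+	 */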
+ while (walk.nbytes) {
+ err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
+ walk.nbytes, walk.iv, walk.nbytes == walk.total);
+ err = skcipher_walk_done(&walk, err);
+ }
+
+ return err;
+}
+
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req)
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+ return crypto_lskcipher_crypt_sg(req, alg->encrypt);
+}
+
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req)
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+ return crypto_lskcipher_crypt_sg(req, alg->decrypt);
+}
+
+static void crypto_lskcipher_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+ alg->exit(skcipher);
+}
+
+static int crypto_lskcipher_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+ if (alg->exit)
+ skcipher->base.exit = crypto_lskcipher_exit_tfm;
+
+ if (alg->init)
+ return alg->init(skcipher);
+
+ return 0;
+}
+
+static void crypto_lskcipher_free_instance(struct crypto_instance *inst)
+{
+ struct lskcipher_instance *skcipher =
+ container_of(inst, struct lskcipher_instance, s.base);
+
+ skcipher->free(skcipher);
+}
+
+static void __maybe_unused crypto_lskcipher_show(
+ struct seq_file *m, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+
+ seq_printf(m, "type : lskcipher\n");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "min keysize : %u\n", skcipher->co.min_keysize);
+ seq_printf(m, "max keysize : %u\n", skcipher->co.max_keysize);
+ seq_printf(m, "ivsize : %u\n", skcipher->co.ivsize);
+ seq_printf(m, "chunksize : %u\n", skcipher->co.chunksize);
+}
+
+static int __maybe_unused crypto_lskcipher_report(
+ struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+ struct crypto_report_blkcipher rblkcipher;
+
+ memset(&rblkcipher, 0, sizeof(rblkcipher));
+
+ strscpy(rblkcipher.type, "lskcipher", sizeof(rblkcipher.type));
+ strscpy(rblkcipher.geniv, "<none>", sizeof(rblkcipher.geniv));
+
+ rblkcipher.blocksize = alg->cra_blocksize;
+ rblkcipher.min_keysize = skcipher->co.min_keysize;
+ rblkcipher.max_keysize = skcipher->co.max_keysize;
+ rblkcipher.ivsize = skcipher->co.ivsize;
+
+ return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
+ sizeof(rblkcipher), &rblkcipher);
+}
+
+static int __maybe_unused crypto_lskcipher_report_stat(
+ struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+ struct crypto_istat_cipher *istat;
+ struct crypto_stat_cipher rcipher;
+
+ istat = lskcipher_get_stat(skcipher);
+
+ memset(&rcipher, 0, sizeof(rcipher));
+
+ strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
+
+ rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
+ rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
+ rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
+ rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
+ rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
+
+ return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
+}
+
+static const struct crypto_type crypto_lskcipher_type = {
+ .extsize = crypto_alg_extsize,
+ .init_tfm = crypto_lskcipher_init_tfm,
+ .free = crypto_lskcipher_free_instance,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_lskcipher_show,
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_USER)
+ .report = crypto_lskcipher_report,
+#endif
+#ifdef CONFIG_CRYPTO_STATS
+ .report_stat = crypto_lskcipher_report_stat,
+#endif
+ .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_MASK,
+ .type = CRYPTO_ALG_TYPE_LSKCIPHER,
+ .tfmsize = offsetof(struct crypto_lskcipher, base),
+};
+
+static void crypto_lskcipher_exit_tfm_sg(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_lskcipher(*ctx);
+}
+
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+ struct crypto_alg *calg = tfm->__crt_alg;
+ struct crypto_lskcipher *skcipher;
+
+ if (!crypto_mod_get(calg))
+ return -EAGAIN;
+
+ skcipher = crypto_create_tfm(calg, &crypto_lskcipher_type);
+ if (IS_ERR(skcipher)) {
+ crypto_mod_put(calg);
+ return PTR_ERR(skcipher);
+ }
+
+ *ctx = skcipher;
+ tfm->exit = crypto_lskcipher_exit_tfm_sg;
+
+ return 0;
+}
+
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+ struct crypto_instance *inst,
+ const char *name, u32 type, u32 mask)
+{
+ spawn->base.frontend = &crypto_lskcipher_type;
+ return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_lskcipher);
+
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+ u32 type, u32 mask)
+{
+ return crypto_alloc_tfm(alg_name, &crypto_lskcipher_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_lskcipher);
+
+static int lskcipher_prepare_alg(struct lskcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->co.base;
+ int err;
+
+ err = skcipher_prepare_alg_common(&alg->co);
+ if (err)
+ return err;
+
+ if (alg->co.chunksize & (alg->co.chunksize - 1))
+ return -EINVAL;
+
+ base->cra_type = &crypto_lskcipher_type;
+ base->cra_flags |= CRYPTO_ALG_TYPE_LSKCIPHER;
+
+ return 0;
+}
+
+int crypto_register_lskcipher(struct lskcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->co.base;
+ int err;
+
+ err = lskcipher_prepare_alg(alg);
+ if (err)
+ return err;
+
+ return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskcipher);
+
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg)
+{
+ crypto_unregister_alg(&alg->co.base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskcipher);
+
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count)
+{
+ int i, ret;
+
+ for (i = 0; i < count; i++) {
+ ret = crypto_register_lskcipher(&algs[i]);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+
+err:
+ for (--i; i >= 0; --i)
+ crypto_unregister_lskcipher(&algs[i]);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskciphers);
+
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count)
+{
+ int i;
+
+ for (i = count - 1; i >= 0; --i)
+ crypto_unregister_lskcipher(&algs[i]);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskciphers);
+
+int lskcipher_register_instance(struct crypto_template *tmpl,
+ struct lskcipher_instance *inst)
+{
+ int err;
+
+ if (WARN_ON(!inst->free))
+ return -EINVAL;
+
+ err = lskcipher_prepare_alg(&inst->alg);
+ if (err)
+ return err;
+
+ return crypto_register_instance(tmpl, lskcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(lskcipher_register_instance);
+
+static int lskcipher_setkey_simple(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_lskcipher *cipher = lskcipher_cipher_simple(tfm);
+
+ crypto_lskcipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_lskcipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ return crypto_lskcipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple(struct crypto_lskcipher *tfm)
+{
+ struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_lskcipher_spawn *spawn;
+ struct crypto_lskcipher *cipher;
+
+ spawn = lskcipher_instance_ctx(inst);
+ cipher = crypto_spawn_lskcipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ *ctx = cipher;
+ return 0;
+}
+
+static void lskcipher_exit_tfm_simple(struct crypto_lskcipher *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+ crypto_free_lskcipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple(struct lskcipher_instance *inst)
+{
+ crypto_drop_lskcipher(lskcipher_instance_ctx(inst));
+ kfree(inst);
+}
+
+/**
+ * lskcipher_alloc_instance_simple - allocate instance of simple block cipher
+ *
+ * Allocate an lskcipher_instance for a simple block cipher mode of operation,
+ * e.g. cbc or ecb. The instance context will have just a single crypto_spawn,
+ * that for the underlying cipher. The {min,max}_keysize, ivsize, blocksize,
+ * alignmask, and priority are set from the underlying cipher but can be
+ * overridden if needed. The tfm context defaults to
+ * struct crypto_lskcipher *, and default ->setkey(), ->init(), and
+ * ->exit() methods are installed.
+ *
+ * @tmpl: the template being instantiated
+ * @tb: the template parameters
+ *
+ * Return: a pointer to the new instance, or an ERR_PTR(). The caller still
+ * needs to register the instance.
+ */
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+ struct crypto_template *tmpl, struct rtattr **tb)
+{
+ u32 mask;
+ struct lskcipher_instance *inst;
+ struct crypto_lskcipher_spawn *spawn;
+ char ecb_name[CRYPTO_MAX_ALG_NAME];
+ struct lskcipher_alg *cipher_alg;
+ const char *cipher_name;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+ if (err)
+ return ERR_PTR(err);
+
+ cipher_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(cipher_name))
+ return ERR_CAST(cipher_name);
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+ if (!inst)
+ return ERR_PTR(-ENOMEM);
+
+ spawn = lskcipher_instance_ctx(inst);
+ err = crypto_grab_lskcipher(spawn,
+ lskcipher_crypto_instance(inst),
+ cipher_name, 0, mask);
+
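+	/*
+	 * If the bare cipher is not available, fall back to wrapping the
+	 * corresponding ecb(cipher) instance, unless this template is ecb
+	 * itself.
+	 */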
+ ecb_name[0] = 0;
+ if (err == -ENOENT && !!memcmp(tmpl->name, "ecb", 4)) {
+ err = -ENAMETOOLONG;
+ if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ cipher_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+ err = crypto_grab_lskcipher(spawn,
+ lskcipher_crypto_instance(inst),
+ ecb_name, 0, mask);
+ }
+
+ if (err)
+ goto err_free_inst;
+
+ cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+ err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+ &cipher_alg->co.base);
+ if (err)
+ goto err_free_inst;
+
+ if (ecb_name[0]) {
+ int len;
+
+ err = -EINVAL;
+ len = strscpy(ecb_name, &cipher_alg->co.base.cra_name[4],
+ sizeof(ecb_name));
+ if (len < 2)
+ goto err_free_inst;
+
+ if (ecb_name[len - 1] != ')')
+ goto err_free_inst;
+
+ ecb_name[len - 1] = 0;
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.co.base.cra_name, CRYPTO_MAX_ALG_NAME,
+ "%s(%s)", tmpl->name, ecb_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+ if (strcmp(ecb_name, cipher_name) &&
+ snprintf(inst->alg.co.base.cra_driver_name,
+ CRYPTO_MAX_ALG_NAME,
+ "%s(%s)", tmpl->name, cipher_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+ } else {
+ /* Don't allow nesting. */
+ err = -ELOOP;
+ if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
+ goto err_free_inst;
+ }
+
+ err = -EINVAL;
+ if (cipher_alg->co.ivsize)
+ goto err_free_inst;
+
+ inst->free = lskcipher_free_instance_simple;
+
+ /* Default algorithm properties, can be overridden */
+ inst->alg.co.base.cra_blocksize = cipher_alg->co.base.cra_blocksize;
+ inst->alg.co.base.cra_alignmask = cipher_alg->co.base.cra_alignmask;
+ inst->alg.co.base.cra_priority = cipher_alg->co.base.cra_priority;
+ inst->alg.co.min_keysize = cipher_alg->co.min_keysize;
+ inst->alg.co.max_keysize = cipher_alg->co.max_keysize;
+ inst->alg.co.ivsize = cipher_alg->co.base.cra_blocksize;
+
+ /* Use struct crypto_lskcipher * by default, can be overridden */
+ inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_lskcipher *);
+ inst->alg.setkey = lskcipher_setkey_simple;
+ inst->alg.init = lskcipher_init_tfm_simple;
+ inst->alg.exit = lskcipher_exit_tfm_simple;
+
+ return inst;
+
+err_free_inst:
+ lskcipher_free_instance_simple(inst);
+ return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(lskcipher_alloc_instance_simple);
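+/*
+ * Sketch of how a mode template could build on the helper above (the
+ * "example" template and its encrypt/decrypt handlers are hypothetical and
+ * not part of this patch):
+ *
+ *	static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
+ *	{
+ *		struct lskcipher_instance *inst;
+ *		int err;
+ *
+ *		inst = lskcipher_alloc_instance_simple(tmpl, tb);
+ *		if (IS_ERR(inst))
+ *			return PTR_ERR(inst);
+ *
+ *		inst->alg.encrypt = example_encrypt;
+ *		inst->alg.decrypt = example_decrypt;
+ *
+ *		err = lskcipher_register_instance(tmpl, inst);
+ *		if (err)
+ *			inst->free(inst);
+ *		return err;
+ *	}
+ */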
err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
if (!err)
return -EINPROGRESS;
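+	/* A full padata backlog (-EBUSY) is reported to the caller as -EAGAIN. */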
+ if (err == -EBUSY)
+ return -EAGAIN;
return err;
}
err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
if (!err)
return -EINPROGRESS;
+ if (err == -EBUSY)
+ return -EAGAIN;
return err;
}
0x05, 0x00, 0x04, 0x40
};
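+/*
+ * DER-encoded DigestInfo prefixes for the SHA-3 family; the trailing OID arcs
+ * 0x08, 0x09 and 0x0A correspond to SHA3-256, SHA3-384 and SHA3-512
+ * (2.16.840.1.101.3.4.2.{8,9,10}).
+ */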
+static const u8 rsa_digest_info_sha3_256[] = {
+ 0x30, 0x31, 0x30, 0x0d, 0x06, 0x09,
+ 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x08,
+ 0x05, 0x00, 0x04, 0x20
+};
+
+static const u8 rsa_digest_info_sha3_384[] = {
+ 0x30, 0x41, 0x30, 0x0d, 0x06, 0x09,
+ 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x09,
+ 0x05, 0x00, 0x04, 0x30
+};
+
+static const u8 rsa_digest_info_sha3_512[] = {
+ 0x30, 0x51, 0x30, 0x0d, 0x06, 0x09,
+ 0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x0A,
+ 0x05, 0x00, 0x04, 0x40
+};
+
static const struct rsa_asn1_template {
const char *name;
const u8 *data;
_(sha384),
_(sha512),
_(sha224),
- { NULL }
#undef _
+#define _(X) { "sha3-" #X, rsa_digest_info_sha3_##X, sizeof(rsa_digest_info_sha3_##X) }
+ _(256),
+ _(384),
+ _(512),
+#undef _
+ { NULL }
};
static const struct rsa_asn1_template *rsa_lookup_asn1(const char *name)
.create = pkcs1pad_create,
.module = THIS_MODULE,
};
+
+MODULE_ALIAS_CRYPTO("pkcs1pad");
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2016 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.2
+
RsaPrivKey ::= SEQUENCE {
version INTEGER,
n INTEGER ({ rsa_get_n }),
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2016 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.1
+
RsaPubKey ::= SEQUENCE {
n INTEGER ({ rsa_get_n }),
e INTEGER ({ rsa_get_e })
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>
#include "hash.h"
-#define MAX_SHASH_ALIGNMASK 63
-
-static const struct crypto_type crypto_shash_type;
-
static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
{
return hash_get_stat(&alg->halg);
static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
{
- return crypto_hash_errstat(&alg->halg, err);
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;
+
+ if (err && err != -EINPROGRESS && err != -EBUSY)
+ atomic64_inc(&shash_get_stat(alg)->err_cnt);
+
+ return err;
}
int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
}
EXPORT_SYMBOL_GPL(shash_no_setkey);
-static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
- unsigned int keylen)
-{
- struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
- unsigned long absize;
- u8 *buffer, *alignbuffer;
- int err;
-
- absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
- buffer = kmalloc(absize, GFP_ATOMIC);
- if (!buffer)
- return -ENOMEM;
-
- alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
- memcpy(alignbuffer, key, keylen);
- err = shash->setkey(tfm, alignbuffer, keylen);
- kfree_sensitive(buffer);
- return err;
-}
-
static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
{
if (crypto_shash_alg_needs_key(alg))
unsigned int keylen)
{
struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
int err;
- if ((unsigned long)key & alignmask)
- err = shash_setkey_unaligned(tfm, key, keylen);
- else
- err = shash->setkey(tfm, key, keylen);
-
+ err = shash->setkey(tfm, key, keylen);
if (unlikely(err)) {
shash_set_needkey(tfm, shash);
return err;
}
EXPORT_SYMBOL_GPL(crypto_shash_setkey);
-static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
- unsigned int len)
-{
- struct crypto_shash *tfm = desc->tfm;
- struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
- unsigned int unaligned_len = alignmask + 1 -
- ((unsigned long)data & alignmask);
- /*
- * We cannot count on __aligned() working for large values:
- * https://patchwork.kernel.org/patch/9507697/
- */
- u8 ubuf[MAX_SHASH_ALIGNMASK * 2];
- u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
- int err;
-
- if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
- return -EINVAL;
-
- if (unaligned_len > len)
- unaligned_len = len;
-
- memcpy(buf, data, unaligned_len);
- err = shash->update(desc, buf, unaligned_len);
- memset(buf, 0, unaligned_len);
-
- return err ?:
- shash->update(desc, data + unaligned_len, len - unaligned_len);
-}
-
int crypto_shash_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
- struct crypto_shash *tfm = desc->tfm;
- struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
+ struct shash_alg *shash = crypto_shash_alg(desc->tfm);
int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_add(len, &shash_get_stat(shash)->hash_tlen);
- if ((unsigned long)data & alignmask)
- err = shash_update_unaligned(desc, data, len);
- else
- err = shash->update(desc, data, len);
+ err = shash->update(desc, data, len);
return crypto_shash_errstat(shash, err);
}
EXPORT_SYMBOL_GPL(crypto_shash_update);
-static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
-{
- struct crypto_shash *tfm = desc->tfm;
- unsigned long alignmask = crypto_shash_alignmask(tfm);
- struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned int ds = crypto_shash_digestsize(tfm);
- /*
- * We cannot count on __aligned() working for large values:
- * https://patchwork.kernel.org/patch/9507697/
- */
- u8 ubuf[MAX_SHASH_ALIGNMASK + HASH_MAX_DIGESTSIZE];
- u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
- int err;
-
- if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
- return -EINVAL;
-
- err = shash->final(desc, buf);
- if (err)
- goto out;
-
- memcpy(out, buf, ds);
-
-out:
- memset(buf, 0, ds);
- return err;
-}
-
int crypto_shash_final(struct shash_desc *desc, u8 *out)
{
- struct crypto_shash *tfm = desc->tfm;
- struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
+ struct shash_alg *shash = crypto_shash_alg(desc->tfm);
int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_inc(&shash_get_stat(shash)->hash_cnt);
- if ((unsigned long)out & alignmask)
- err = shash_final_unaligned(desc, out);
- else
- err = shash->final(desc, out);
+ err = shash->final(desc, out);
return crypto_shash_errstat(shash, err);
}
EXPORT_SYMBOL_GPL(crypto_shash_final);
-static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data,
- unsigned int len, u8 *out)
+static int shash_default_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
{
- return shash_update_unaligned(desc, data, len) ?:
- shash_final_unaligned(desc, out);
+ struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+ return shash->update(desc, data, len) ?:
+ shash->final(desc, out);
}
int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
{
struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
atomic64_add(len, &istat->hash_tlen);
}
- if (((unsigned long)data | (unsigned long)out) & alignmask)
- err = shash_finup_unaligned(desc, data, len, out);
- else
- err = shash->finup(desc, data, len, out);
-
+ err = shash->finup(desc, data, len, out);
return crypto_shash_errstat(shash, err);
}
EXPORT_SYMBOL_GPL(crypto_shash_finup);
-static int shash_digest_unaligned(struct shash_desc *desc, const u8 *data,
- unsigned int len, u8 *out)
+static int shash_default_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
{
- return crypto_shash_init(desc) ?:
- shash_update_unaligned(desc, data, len) ?:
- shash_final_unaligned(desc, out);
+ struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+ return shash->init(desc) ?:
+ shash->finup(desc, data, len, out);
}
int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
{
struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm);
- unsigned long alignmask = crypto_shash_alignmask(tfm);
int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
err = -ENOKEY;
- else if (((unsigned long)data | (unsigned long)out) & alignmask)
- err = shash_digest_unaligned(desc, data, len, out);
else
err = shash->digest(desc, data, len, out);
}
EXPORT_SYMBOL_GPL(crypto_shash_tfm_digest);
-static int shash_default_export(struct shash_desc *desc, void *out)
-{
- memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(desc->tfm));
- return 0;
-}
-
-static int shash_default_import(struct shash_desc *desc, const void *in)
-{
- memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(desc->tfm));
- return 0;
-}
-
-static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
-
- return crypto_shash_setkey(*ctx, key, keylen);
-}
-
-static int shash_async_init(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return crypto_shash_init(desc);
-}
-
-int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
-{
- struct crypto_hash_walk walk;
- int nbytes;
-
- for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
- nbytes = crypto_hash_walk_done(&walk, nbytes))
- nbytes = crypto_shash_update(desc, walk.data, nbytes);
-
- return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_update);
-
-static int shash_async_update(struct ahash_request *req)
-{
- return shash_ahash_update(req, ahash_request_ctx(req));
-}
-
-static int shash_async_final(struct ahash_request *req)
-{
- return crypto_shash_final(ahash_request_ctx(req), req->result);
-}
-
-int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
-{
- struct crypto_hash_walk walk;
- int nbytes;
-
- nbytes = crypto_hash_walk_first(req, &walk);
- if (!nbytes)
- return crypto_shash_final(desc, req->result);
-
- do {
- nbytes = crypto_hash_walk_last(&walk) ?
- crypto_shash_finup(desc, walk.data, nbytes,
- req->result) :
- crypto_shash_update(desc, walk.data, nbytes);
- nbytes = crypto_hash_walk_done(&walk, nbytes);
- } while (nbytes > 0);
-
- return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_finup);
-
-static int shash_async_finup(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_finup(req, desc);
-}
-
-int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
-{
- unsigned int nbytes = req->nbytes;
- struct scatterlist *sg;
- unsigned int offset;
- int err;
-
- if (nbytes &&
- (sg = req->src, offset = sg->offset,
- nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
- void *data;
-
- data = kmap_local_page(sg_page(sg));
- err = crypto_shash_digest(desc, data + offset, nbytes,
- req->result);
- kunmap_local(data);
- } else
- err = crypto_shash_init(desc) ?:
- shash_ahash_finup(req, desc);
-
- return err;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_digest);
-
-static int shash_async_digest(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_digest(req, desc);
-}
-
-static int shash_async_export(struct ahash_request *req, void *out)
-{
- return crypto_shash_export(ahash_request_ctx(req), out);
-}
-
-static int shash_async_import(struct ahash_request *req, const void *in)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return crypto_shash_import(desc, in);
-}
-
-static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
+int crypto_shash_export(struct shash_desc *desc, void *out)
{
- struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
-
- crypto_free_shash(*ctx);
-}
-
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
-{
- struct crypto_alg *calg = tfm->__crt_alg;
- struct shash_alg *alg = __crypto_shash_alg(calg);
- struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
- struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
- struct crypto_shash *shash;
-
- if (!crypto_mod_get(calg))
- return -EAGAIN;
-
- shash = crypto_create_tfm(calg, &crypto_shash_type);
- if (IS_ERR(shash)) {
- crypto_mod_put(calg);
- return PTR_ERR(shash);
- }
-
- *ctx = shash;
- tfm->exit = crypto_exit_shash_ops_async;
-
- crt->init = shash_async_init;
- crt->update = shash_async_update;
- crt->final = shash_async_final;
- crt->finup = shash_async_finup;
- crt->digest = shash_async_digest;
- if (crypto_shash_alg_has_setkey(alg))
- crt->setkey = shash_async_setkey;
-
- crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
- CRYPTO_TFM_NEED_KEY);
-
- crt->export = shash_async_export;
- crt->import = shash_async_import;
+ struct crypto_shash *tfm = desc->tfm;
+ struct shash_alg *shash = crypto_shash_alg(tfm);
- crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+ if (shash->export)
+ return shash->export(desc, out);
+ memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
return 0;
}
+EXPORT_SYMBOL_GPL(crypto_shash_export);
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
- struct crypto_ahash *hash)
+int crypto_shash_import(struct shash_desc *desc, const void *in)
{
- struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
- struct crypto_shash **ctx = crypto_ahash_ctx(hash);
- struct crypto_shash *shash;
+ struct crypto_shash *tfm = desc->tfm;
+ struct shash_alg *shash = crypto_shash_alg(tfm);
- shash = crypto_clone_shash(*ctx);
- if (IS_ERR(shash)) {
- crypto_free_ahash(nhash);
- return ERR_CAST(shash);
- }
+ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
- *nctx = shash;
+ if (shash->import)
+ return shash->import(desc, in);
- return nhash;
+ memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(tfm));
+ return 0;
}
+EXPORT_SYMBOL_GPL(crypto_shash_import);
static void crypto_shash_exit_tfm(struct crypto_tfm *tfm)
{
return crypto_hash_report_stat(skb, alg, "shash");
}
-static const struct crypto_type crypto_shash_type = {
+const struct crypto_type crypto_shash_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_shash_init_tfm,
.free = crypto_shash_free_instance,
if (alg->digestsize > HASH_MAX_DIGESTSIZE)
return -EINVAL;
+ /* alignmask is not useful for hashes, so it is not supported. */
+ if (base->cra_alignmask)
+ return -EINVAL;
+
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
if (alg->descsize > HASH_MAX_DESCSIZE)
return -EINVAL;
- if (base->cra_alignmask > MAX_SHASH_ALIGNMASK)
- return -EINVAL;
-
if ((alg->export && !alg->import) || (alg->import && !alg->export))
return -EINVAL;
base->cra_type = &crypto_shash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_SHASH;
+ /*
+ * Handle missing optional functions. For each one we can either
+ * install a default here, or we can leave the pointer as NULL and check
+ * the pointer for NULL in crypto_shash_*(), avoiding an indirect call
+ * when the default behavior is desired. For ->finup and ->digest we
+ * install defaults, since for optimal performance algorithms should
+ * implement these anyway. On the other hand, for ->import and
+ * ->export the common case and best performance comes from the simple
+ * memcpy of the shash_desc_ctx, so when those pointers are NULL we
+ * leave them NULL and provide the memcpy with no indirect call.
+ */
if (!alg->finup)
- alg->finup = shash_finup_unaligned;
+ alg->finup = shash_default_finup;
if (!alg->digest)
- alg->digest = shash_digest_unaligned;
- if (!alg->export) {
- alg->export = shash_default_export;
- alg->import = shash_default_import;
+ alg->digest = shash_default_digest;
+ if (!alg->export)
alg->halg.statesize = alg->descsize;
- }
if (!alg->setkey)
alg->setkey = shash_no_setkey;
#include <linux/slab.h>
#include <linux/string.h>
#include <net/netlink.h>
+#include "skcipher.h"
-#include "internal.h"
+#define CRYPTO_ALG_TYPE_SKCIPHER_MASK 0x0000000e
enum {
SKCIPHER_WALK_PHYS = 1 << 0,
u8 buffer[];
};
+static const struct crypto_type crypto_skcipher_type;
+
static int skcipher_walk_next(struct skcipher_walk *walk);
static inline void skcipher_map_src(struct skcipher_walk *walk)
static inline struct crypto_istat_cipher *skcipher_get_stat(
struct skcipher_alg *alg)
{
-#ifdef CONFIG_CRYPTO_STATS
- return &alg->stat;
-#else
- return NULL;
-#endif
+ return skcipher_get_stat_common(&alg->co);
}
static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
walk->total = req->cryptlen;
walk->nbytes = 0;
SKCIPHER_WALK_SLEEP : 0;
walk->blocksize = crypto_skcipher_blocksize(tfm);
- walk->stride = crypto_skcipher_walksize(tfm);
walk->ivsize = crypto_skcipher_ivsize(tfm);
walk->alignmask = crypto_skcipher_alignmask(tfm);
+ if (alg->co.base.cra_type != &crypto_skcipher_type)
+ walk->stride = alg->co.chunksize;
+ else
+ walk->stride = alg->walksize;
+
return skcipher_walk_first(walk);
}
unsigned long alignmask = crypto_skcipher_alignmask(tfm);
int err;
+ if (cipher->co.base.cra_type != &crypto_skcipher_type) {
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(tfm);
+
+ crypto_lskcipher_clear_flags(*ctx, CRYPTO_TFM_REQ_MASK);
+ crypto_lskcipher_set_flags(*ctx,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_lskcipher_setkey(*ctx, key, keylen);
+ goto out;
+ }
+
if (keylen < cipher->min_keysize || keylen > cipher->max_keysize)
return -EINVAL;
else
err = cipher->setkey(tfm, key, keylen);
+out:
if (unlikely(err)) {
skcipher_set_needkey(tfm);
return err;
if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY;
+ else if (alg->co.base.cra_type != &crypto_skcipher_type)
+ ret = crypto_lskcipher_encrypt_sg(req);
else
ret = alg->encrypt(req);
if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY;
+ else if (alg->co.base.cra_type != &crypto_skcipher_type)
+ ret = crypto_lskcipher_decrypt_sg(req);
else
ret = alg->decrypt(req);
skcipher_set_needkey(skcipher);
+ if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+ return crypto_init_lskcipher_ops_sg(tfm);
+
if (alg->exit)
skcipher->base.exit = crypto_skcipher_exit_tfm;
return 0;
}
+static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+{
+ if (alg->cra_type != &crypto_skcipher_type)
+ return sizeof(struct crypto_lskcipher *);
+
+ return crypto_alg_extsize(alg);
+}
+
static void crypto_skcipher_free_instance(struct crypto_instance *inst)
{
struct skcipher_instance *skcipher =
}
static const struct crypto_type crypto_skcipher_type = {
- .extsize = crypto_alg_extsize,
+ .extsize = crypto_skcipher_extsize,
.init_tfm = crypto_skcipher_init_tfm,
.free = crypto_skcipher_free_instance,
#ifdef CONFIG_PROC_FS
.report_stat = crypto_skcipher_report_stat,
#endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
- .maskset = CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
.type = CRYPTO_ALG_TYPE_SKCIPHER,
.tfmsize = offsetof(struct crypto_skcipher, base),
};
}
EXPORT_SYMBOL_GPL(crypto_has_skcipher);
-static int skcipher_prepare_alg(struct skcipher_alg *alg)
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
{
- struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
+ struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
struct crypto_alg *base = &alg->base;
- if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
- alg->walksize > PAGE_SIZE / 8)
+ if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
return -EINVAL;
if (!alg->chunksize)
alg->chunksize = base->cra_blocksize;
- if (!alg->walksize)
- alg->walksize = alg->chunksize;
- base->cra_type = &crypto_skcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
- base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
memset(istat, 0, sizeof(*istat));
return 0;
}
+static int skcipher_prepare_alg(struct skcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->base;
+ int err;
+
+ err = skcipher_prepare_alg_common(&alg->co);
+ if (err)
+ return err;
+
+ if (alg->walksize > PAGE_SIZE / 8)
+ return -EINVAL;
+
+ if (!alg->walksize)
+ alg->walksize = alg->chunksize;
+
+ base->cra_type = &crypto_skcipher_type;
+ base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
+
+ return 0;
+}
+
int crypto_register_skcipher(struct skcipher_alg *alg)
{
struct crypto_alg *base = &alg->base;
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Cryptographic API.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+#ifndef _LOCAL_CRYPTO_SKCIPHER_H
+#define _LOCAL_CRYPTO_SKCIPHER_H
+
+#include <crypto/internal/skcipher.h>
+#include "internal.h"
+
+static inline struct crypto_istat_cipher *skcipher_get_stat_common(
+ struct skcipher_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+ return &alg->stat;
+#else
+ return NULL;
+#endif
+}
+
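+/*
+ * Bridging helpers: the *_sg() entry points, implemented in lskcipher.c,
+ * let a request issued on an skcipher tfm be serviced by an underlying
+ * lskcipher (i.e. when the algorithm's cra_type is not
+ * crypto_skcipher_type), and skcipher_prepare_alg_common() performs the
+ * registration checks shared by skcipher and lskcipher algorithms.
+ */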
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg);
+
+#endif /* _LOCAL_CRYPTO_SKCIPHER_H */
.finalization_type = FINALIZATION_TYPE_FINAL,
.key_offset = 1,
}, {
- .name = "digest buffer aligned only to alignmask",
+ .name = "digest misaligned buffer",
.src_divs = {
{
.proportion_of_total = 10000,
.offset = 1,
- .offset_relative_to_alignmask = true,
},
},
.finalization_type = FINALIZATION_TYPE_DIGEST,
.key_offset = 1,
- .key_offset_relative_to_alignmask = true,
}, {
.name = "init+update+update+final two even splits",
.src_divs = {
u8 *hashstate)
{
struct crypto_shash *tfm = desc->tfm;
- const unsigned int alignmask = crypto_shash_alignmask(tfm);
const unsigned int digestsize = crypto_shash_digestsize(tfm);
const unsigned int statesize = crypto_shash_statesize(tfm);
const char *driver = crypto_shash_driver_name(tfm);
/* Set the key, if specified */
if (vec->ksize) {
err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize,
- cfg, alignmask);
+ cfg, 0);
if (err) {
if (err == vec->setkey_error)
return 0;
}
/* Build the scatterlist for the source data */
- err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+ err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
if (err) {
pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
driver, vec_name, cfg->name);
u8 *hashstate)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- const unsigned int alignmask = crypto_ahash_alignmask(tfm);
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
const unsigned int statesize = crypto_ahash_statesize(tfm);
const char *driver = crypto_ahash_driver_name(tfm);
/* Set the key, if specified */
if (vec->ksize) {
err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
- cfg, alignmask);
+ cfg, 0);
if (err) {
if (err == vec->setkey_error)
return 0;
}
/* Build the scatterlist for the source data */
- err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+ err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
if (err) {
pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
driver, vec_name, cfg->name);
}
}, {
.alg = "ecb(arc4)",
- .generic_driver = "ecb(arc4)-generic",
+ .generic_driver = "arc4-generic",
.test = alg_test_skcipher,
.suite = {
.cipher = __VECS(arc4_tv_template)
.suite = {
.akcipher = __VECS(pkcs1pad_rsa_tv_template)
}
+ }, {
+ .alg = "pkcs1pad(rsa,sha3-256)",
+ .test = alg_test_null,
+ .fips_allowed = 1,
+ }, {
+ .alg = "pkcs1pad(rsa,sha3-384)",
+ .test = alg_test_null,
+ .fips_allowed = 1,
+ }, {
+ .alg = "pkcs1pad(rsa,sha3-512)",
+ .test = alg_test_null,
+ .fips_allowed = 1,
}, {
.alg = "pkcs1pad(rsa,sha384)",
.test = alg_test_null,
.suite = {
.hash = __VECS(xxhash64_tv_template)
}
- }, {
- .alg = "zlib-deflate",
- .test = alg_test_comp,
- .fips_allowed = 1,
- .suite = {
- .comp = {
- .comp = __VECS(zlib_deflate_comp_tv_template),
- .decomp = __VECS(zlib_deflate_decomp_tv_template)
- }
- }
}, {
.alg = "zstd",
.test = alg_test_comp,
return rc;
notest:
+ if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_LSKCIPHER) {
+ char nalg[CRYPTO_MAX_ALG_NAME];
+
+ if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
+ sizeof(nalg))
+ goto notest2;
+
+ i = alg_find_test(nalg);
+ if (i < 0)
+ goto notest2;
+
+ if (fips_enabled && !alg_test_descs[i].fips_allowed)
+ goto non_fips_alg;
+
+ rc = alg_test_skcipher(alg_test_descs + i, driver, type, mask);
+ goto test_done;
+ }
+
+notest2:
printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);
if (type & CRYPTO_ALG_FIPS_INTERNAL)
static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
{
.key =
- "\x04\xf7\x46\xf8\x2f\x15\xf6\x22\x8e\xd7\x57\x4f\xcc\xe7\xbb\xc1"
- "\xd4\x09\x73\xcf\xea\xd0\x15\x07\x3d\xa5\x8a\x8a\x95\x43\xe4\x68"
- "\xea\xc6\x25\xc1\xc1\x01\x25\x4c\x7e\xc3\x3c\xa6\x04\x0a\xe7\x08"
- "\x98",
- .key_len = 49,
- .params =
- "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
- "\xce\x3d\x03\x01\x01",
- .param_len = 21,
- .m =
- "\xcd\xb9\xd2\x1c\xb7\x6f\xcd\x44\xb3\xfd\x63\xea\xa3\x66\x7f\xae"
- "\x63\x85\xe7\x82",
- .m_size = 20,
- .algo = OID_id_ecdsa_with_sha1,
- .c =
- "\x30\x35\x02\x19\x00\xba\xe5\x93\x83\x6e\xb6\x3b\x63\xa0\x27\x91"
- "\xc6\xf6\x7f\xc3\x09\xad\x59\xad\x88\x27\xd6\x92\x6b\x02\x18\x10"
- "\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
- "\x80\x6f\xa5\x79\x77\xda\xd0",
- .c_size = 55,
- .public_key_vec = true,
- .siggen_sigver_test = true,
- }, {
- .key =
"\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd"
"\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75"
"\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee"
static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
{
.key =
- "\x04\xb9\x7b\xbb\xd7\x17\x64\xd2\x7e\xfc\x81\x5d\x87\x06\x83\x41"
- "\x22\xd6\x9a\xaa\x87\x17\xec\x4f\x63\x55\x2f\x94\xba\xdd\x83\xe9"
- "\x34\x4b\xf3\xe9\x91\x13\x50\xb6\xcb\xca\x62\x08\xe7\x3b\x09\xdc"
- "\xc3\x63\x4b\x2d\xb9\x73\x53\xe4\x45\xe6\x7c\xad\xe7\x6b\xb0\xe8"
- "\xaf",
- .key_len = 65,
- .params =
- "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
- "\xce\x3d\x03\x01\x07",
- .param_len = 21,
- .m =
- "\xc2\x2b\x5f\x91\x78\x34\x26\x09\x42\x8d\x6f\x51\xb2\xc5\xaf\x4c"
- "\x0b\xde\x6a\x42",
- .m_size = 20,
- .algo = OID_id_ecdsa_with_sha1,
- .c =
- "\x30\x46\x02\x21\x00\xf9\x25\xce\x9f\x3a\xa6\x35\x81\xcf\xd4\xe7"
- "\xb7\xf0\x82\x56\x41\xf7\xd4\xad\x8d\x94\x5a\x69\x89\xee\xca\x6a"
- "\x52\x0e\x48\x4d\xcc\x02\x21\x00\xd7\xe4\xef\x52\x66\xd3\x5b\x9d"
- "\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
- "\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
- .c_size = 72,
- .public_key_vec = true,
- .siggen_sigver_test = true,
- }, {
- .key =
"\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9"
"\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0"
"\xc9\x8b\xdb\x95\xf8\xb3\xaf\xac\x00\x2c\x2c\x1f\x7a\xfd\x95\x88"
static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
{
- .key = /* secp384r1(sha1) */
- "\x04\x89\x25\xf3\x97\x88\xcb\xb0\x78\xc5\x72\x9a\x14\x6e\x7a\xb1"
- "\x5a\xa5\x24\xf1\x95\x06\x9e\x28\xfb\xc4\xb9\xbe\x5a\x0d\xd9\x9f"
- "\xf3\xd1\x4d\x2d\x07\x99\xbd\xda\xa7\x66\xec\xbb\xea\xba\x79\x42"
- "\xc9\x34\x89\x6a\xe7\x0b\xc3\xf2\xfe\x32\x30\xbe\xba\xf9\xdf\x7e"
- "\x4b\x6a\x07\x8e\x26\x66\x3f\x1d\xec\xa2\x57\x91\x51\xdd\x17\x0e"
- "\x0b\x25\xd6\x80\x5c\x3b\xe6\x1a\x98\x48\x91\x45\x7a\x73\xb0\xc3"
- "\xf1",
- .key_len = 97,
- .params =
- "\x30\x10\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x05\x2b\x81\x04"
- "\x00\x22",
- .param_len = 18,
- .m =
- "\x12\x55\x28\xf0\x77\xd5\xb6\x21\x71\x32\x48\xcd\x28\xa8\x25\x22"
- "\x3a\x69\xc1\x93",
- .m_size = 20,
- .algo = OID_id_ecdsa_with_sha1,
- .c =
- "\x30\x66\x02\x31\x00\xf5\x0f\x24\x4c\x07\x93\x6f\x21\x57\x55\x07"
- "\x20\x43\x30\xde\xa0\x8d\x26\x8e\xae\x63\x3f\xbc\x20\x3a\xc6\xf1"
- "\x32\x3c\xce\x70\x2b\x78\xf1\x4c\x26\xe6\x5b\x86\xcf\xec\x7c\x7e"
- "\xd0\x87\xd7\xd7\x6e\x02\x31\x00\xcd\xbb\x7e\x81\x5d\x8f\x63\xc0"
- "\x5f\x63\xb1\xbe\x5e\x4c\x0e\xa1\xdf\x28\x8c\x1b\xfa\xf9\x95\x88"
- "\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
- "\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
- .c_size = 104,
- .public_key_vec = true,
- .siggen_sigver_test = true,
- }, {
.key = /* secp384r1(sha224) */
"\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0"
"\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18"
},
};
-static const struct comp_testvec zlib_deflate_comp_tv_template[] = {
- {
- .inlen = 70,
- .outlen = 44,
- .input = "Join us now and share the software "
- "Join us now and share the software ",
- .output = "\x78\x5e\xf3\xca\xcf\xcc\x53\x28"
- "\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
- "\x4b\x51\x28\xce\x48\x2c\x4a\x55"
- "\x28\xc9\x48\x55\x28\xce\x4f\x2b"
- "\x29\x07\x71\xbc\x08\x2b\x01\x00"
- "\x7c\x65\x19\x3d",
- }, {
- .inlen = 191,
- .outlen = 129,
- .input = "This document describes a compression method based on the DEFLATE"
- "compression algorithm. This document defines the application of "
- "the DEFLATE algorithm to the IP Payload Compression Protocol.",
- .output = "\x78\x5e\x5d\xce\x41\x0a\xc3\x30"
- "\x0c\x04\xc0\xaf\xec\x0b\xf2\x87"
- "\xd2\xa6\x50\xe8\xc1\x07\x7f\x40"
- "\xb1\x95\x5a\x60\x5b\xc6\x56\x0f"
- "\xfd\x7d\x93\x1e\x42\xe8\x51\xec"
- "\xee\x20\x9f\x64\x20\x6a\x78\x17"
- "\xae\x86\xc8\x23\x74\x59\x78\x80"
- "\x10\xb4\xb4\xce\x63\x88\x56\x14"
- "\xb6\xa4\x11\x0b\x0d\x8e\xd8\x6e"
- "\x4b\x8c\xdb\x7c\x7f\x5e\xfc\x7c"
- "\xae\x51\x7e\x69\x17\x4b\x65\x02"
- "\xfc\x1f\xbc\x4a\xdd\xd8\x7d\x48"
- "\xad\x65\x09\x64\x3b\xac\xeb\xd9"
- "\xc2\x01\xc0\xf4\x17\x3c\x1c\x1c"
- "\x7d\xb2\x52\xc4\xf5\xf4\x8f\xeb"
- "\x6a\x1a\x34\x4f\x5f\x2e\x32\x45"
- "\x4e",
- },
-};
-
-static const struct comp_testvec zlib_deflate_decomp_tv_template[] = {
- {
- .inlen = 128,
- .outlen = 191,
- .input = "\x78\x9c\x5d\x8d\x31\x0e\xc2\x30"
- "\x10\x04\xbf\xb2\x2f\xc8\x1f\x10"
- "\x04\x09\x89\xc2\x85\x3f\x70\xb1"
- "\x2f\xf8\x24\xdb\x67\xd9\x47\xc1"
- "\xef\x49\x68\x12\x51\xae\x76\x67"
- "\xd6\x27\x19\x88\x1a\xde\x85\xab"
- "\x21\xf2\x08\x5d\x16\x1e\x20\x04"
- "\x2d\xad\xf3\x18\xa2\x15\x85\x2d"
- "\x69\xc4\x42\x83\x23\xb6\x6c\x89"
- "\x71\x9b\xef\xcf\x8b\x9f\xcf\x33"
- "\xca\x2f\xed\x62\xa9\x4c\x80\xff"
- "\x13\xaf\x52\x37\xed\x0e\x52\x6b"
- "\x59\x02\xd9\x4e\xe8\x7a\x76\x1d"
- "\x02\x98\xfe\x8a\x87\x83\xa3\x4f"
- "\x56\x8a\xb8\x9e\x8e\x5c\x57\xd3"
- "\xa0\x79\xfa\x02\x2e\x32\x45\x4e",
- .output = "This document describes a compression method based on the DEFLATE"
- "compression algorithm. This document defines the application of "
- "the DEFLATE algorithm to the IP Payload Compression Protocol.",
- }, {
- .inlen = 44,
- .outlen = 70,
- .input = "\x78\x9c\xf3\xca\xcf\xcc\x53\x28"
- "\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
- "\x4b\x51\x28\xce\x48\x2c\x4a\x55"
- "\x28\xc9\x48\x55\x28\xce\x4f\x2b"
- "\x29\x07\x71\xbc\x08\x2b\x01\x00"
- "\x7c\x65\x19\x3d",
- .output = "Join us now and share the software "
- "Join us now and share the software ",
- },
-};
-
/*
* LZO test vectors (null-terminated strings).
*/
inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize;
- inst->alg.base.cra_alignmask = alg->cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
inst->alg.base.cra_init = vmac_init_tfm;
*/
struct xcbc_tfm_ctx {
struct crypto_cipher *child;
- u8 ctx[];
+ u8 consts[];
};
/*
*/
struct xcbc_desc_ctx {
unsigned int len;
- u8 ctx[];
+ u8 odds[];
};
#define XCBC_BLOCKSIZE 16
static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent);
- u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+ u8 *consts = ctx->consts;
int err = 0;
u8 key1[XCBC_BLOCKSIZE];
int bs = sizeof(key1);
static int crypto_xcbc_digest_init(struct shash_desc *pdesc)
{
- unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_blocksize(pdesc->tfm);
- u8 *prev = PTR_ALIGN(&ctx->ctx[0], alignmask + 1) + bs;
+ u8 *prev = &ctx->odds[bs];
ctx->len = 0;
memset(prev, 0, bs);
unsigned int len)
{
struct crypto_shash *parent = pdesc->tfm;
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent);
- u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+ u8 *odds = ctx->odds;
u8 *prev = odds + bs;
/* checking the data can fill the block */
static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out)
{
struct crypto_shash *parent = pdesc->tfm;
- unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent);
- u8 *consts = PTR_ALIGN(&tctx->ctx[0], alignmask + 1);
- u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+ u8 *odds = ctx->odds;
u8 *prev = odds + bs;
unsigned int offset = 0;
}
crypto_xor(prev, odds, bs);
- crypto_xor(prev, consts + offset, bs);
+ crypto_xor(prev, &tctx->consts[offset], bs);
crypto_cipher_encrypt_one(tfm, out, prev);
struct shash_instance *inst;
struct crypto_cipher_spawn *spawn;
struct crypto_alg *alg;
- unsigned long alignmask;
u32 mask;
int err;
if (err)
goto err_free_inst;
- alignmask = alg->cra_alignmask | 3;
- inst->alg.base.cra_alignmask = alignmask;
inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ inst->alg.base.cra_ctxsize = sizeof(struct xcbc_tfm_ctx) +
+ alg->cra_blocksize * 2;
inst->alg.digestsize = alg->cra_blocksize;
- inst->alg.descsize = ALIGN(sizeof(struct xcbc_desc_ctx),
- crypto_tfm_ctx_alignment()) +
- (alignmask &
- ~(crypto_tfm_ctx_alignment() - 1)) +
+ inst->alg.descsize = sizeof(struct xcbc_desc_ctx) +
alg->cra_blocksize * 2;
- inst->alg.base.cra_ctxsize = ALIGN(sizeof(struct xcbc_tfm_ctx),
- alignmask + 1) +
- alg->cra_blocksize * 2;
inst->alg.base.cra_init = xcbc_init_tfm;
inst->alg.base.cra_exit = xcbc_exit_tfm;
struct xts_instance_ctx {
struct crypto_skcipher_spawn spawn;
- char name[CRYPTO_MAX_ALG_NAME];
+ struct crypto_cipher_spawn tweak_spawn;
};
struct xts_request_ctx {
ctx->child = child;
- tweak = crypto_alloc_cipher(ictx->name, 0, 0);
+ tweak = crypto_spawn_cipher(&ictx->tweak_spawn);
if (IS_ERR(tweak)) {
crypto_free_skcipher(ctx->child);
return PTR_ERR(tweak);
struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst);
crypto_drop_skcipher(&ictx->spawn);
+ crypto_drop_cipher(&ictx->tweak_spawn);
kfree(inst);
}
static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
{
+ struct skcipher_alg_common *alg;
+ char name[CRYPTO_MAX_ALG_NAME];
struct skcipher_instance *inst;
struct xts_instance_ctx *ctx;
- struct skcipher_alg *alg;
const char *cipher_name;
u32 mask;
int err;
cipher_name, 0, mask);
if (err == -ENOENT) {
err = -ENAMETOOLONG;
- if (snprintf(ctx->name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ if (snprintf(name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
cipher_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
err = crypto_grab_skcipher(&ctx->spawn,
skcipher_crypto_instance(inst),
- ctx->name, 0, mask);
+ name, 0, mask);
}
if (err)
goto err_free_inst;
- alg = crypto_skcipher_spawn_alg(&ctx->spawn);
+ alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
err = -EINVAL;
if (alg->base.cra_blocksize != XTS_BLOCK_SIZE)
goto err_free_inst;
- if (crypto_skcipher_alg_ivsize(alg))
+ if (alg->ivsize)
goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts",
if (!strncmp(cipher_name, "ecb(", 4)) {
int len;
- len = strscpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
+ len = strscpy(name, cipher_name + 4, sizeof(name));
if (len < 2)
goto err_free_inst;
- if (ctx->name[len - 1] != ')')
+ if (name[len - 1] != ')')
goto err_free_inst;
- ctx->name[len - 1] = 0;
+ name[len - 1] = 0;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
- "xts(%s)", ctx->name) >= CRYPTO_MAX_ALG_NAME) {
+ "xts(%s)", name) >= CRYPTO_MAX_ALG_NAME) {
err = -ENAMETOOLONG;
goto err_free_inst;
}
} else
goto err_free_inst;
+ err = crypto_grab_cipher(&ctx->tweak_spawn,
+ skcipher_crypto_instance(inst), name, 0, mask);
+ if (err)
+ goto err_free_inst;
+
inst->alg.base.cra_priority = alg->base.cra_priority;
inst->alg.base.cra_blocksize = XTS_BLOCK_SIZE;
inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
(__alignof__(u64) - 1);
inst->alg.ivsize = XTS_BLOCK_SIZE;
- inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) * 2;
- inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) * 2;
+ inst->alg.min_keysize = alg->min_keysize * 2;
+ inst->alg.max_keysize = alg->max_keysize * 2;
inst->alg.base.cra_ctxsize = sizeof(struct xts_tfm_ctx);
while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) {
if (!wait)
return 0;
- hwrng_msleep(rng, 1000);
+ hwrng_yield(rng);
}
num_words = rng_readl(priv, RNG_STATUS) >> 24;
if (!priv)
return -ENOMEM;
- platform_set_drvdata(pdev, priv);
-
/* map peripheral */
priv->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(priv->base))
}
EXPORT_SYMBOL_GPL(hwrng_msleep);
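+/*
+ * Sleep for at most one jiffy, returning early if the hwrng is being
+ * torn down (rng->dying completes) or a signal is pending. Polling
+ * drivers can call this instead of hwrng_msleep() to yield the CPU
+ * without adding a long fixed delay.
+ */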
+long hwrng_yield(struct hwrng *rng)
+{
+ return wait_for_completion_interruptible_timeout(&rng->dying, 1);
+}
+EXPORT_SYMBOL_GPL(hwrng_yield);
+
static int __init hwrng_modinit(void)
{
int ret;
static int geode_rng_data_read(struct hwrng *rng, u32 *data)
{
- void __iomem *mem = (void __iomem *)rng->priv;
+ struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
+ void __iomem *mem = priv->membase;
*data = readl(mem + GEODE_RNG_DATA_REG);
static int geode_rng_data_present(struct hwrng *rng, int wait)
{
- void __iomem *mem = (void __iomem *)rng->priv;
+ struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
+ void __iomem *mem = priv->membase;
int data, i;
for (i = 0; i < 20; i++) {
if (!rng)
return -ENOMEM;
- platform_set_drvdata(pdev, rng);
-
rng->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(rng->base))
return PTR_ERR(rng->base);
#define RNGC_ERROR_STATUS_STAT_ERR 0x00000008
-#define RNGC_TIMEOUT 3000 /* 3 sec */
-
+#define RNGC_SELFTEST_TIMEOUT 2500 /* us */
+#define RNGC_SEED_TIMEOUT 200 /* ms */
static bool self_test = true;
module_param(self_test, bool, 0);
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
- ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+ ret = wait_for_completion_timeout(&rngc->rng_op_done,
+ usecs_to_jiffies(RNGC_SELFTEST_TIMEOUT));
imx_rngc_irq_mask_clear(rngc);
if (!ret)
return -ETIMEDOUT;
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
- ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+ ret = wait_for_completion_timeout(&rngc->rng_op_done,
+ msecs_to_jiffies(RNGC_SEED_TIMEOUT));
if (!ret) {
ret = -ETIMEDOUT;
goto err;
};
struct ks_sa_rng {
- struct device *dev;
struct hwrng rng;
struct clk *clk;
struct regmap *regmap_cfg;
static int ks_sa_rng_init(struct hwrng *rng)
{
u32 value;
- struct device *dev = (struct device *)rng->priv;
- struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+ struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
unsigned long clk_rate = clk_get_rate(ks_sa_rng->clk);
/* Enable RNG module */
static void ks_sa_rng_cleanup(struct hwrng *rng)
{
- struct device *dev = (struct device *)rng->priv;
- struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+ struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
/* Disable RNG */
writel(0, &ks_sa_rng->reg_rng->control);
static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data)
{
- struct device *dev = (struct device *)rng->priv;
- struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+ struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
/* Read random data */
data[0] = readl(&ks_sa_rng->reg_rng->output_l);
static int ks_sa_rng_data_present(struct hwrng *rng, int wait)
{
- struct device *dev = (struct device *)rng->priv;
- struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+ struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
u64 now = ktime_get_ns();
u32 ready;
if (!ks_sa_rng)
return -ENOMEM;
- ks_sa_rng->dev = dev;
ks_sa_rng->rng = (struct hwrng) {
.name = "ks_sa_hwrng",
.init = ks_sa_rng_init,
.data_present = ks_sa_rng_data_present,
.cleanup = ks_sa_rng_cleanup,
};
- ks_sa_rng->rng.priv = (unsigned long)dev;
ks_sa_rng->reg_rng = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ks_sa_rng->reg_rng))
syscon_regmap_lookup_by_phandle(dev->of_node,
"ti,syscon-sa-cfg");
- if (IS_ERR(ks_sa_rng->regmap_cfg)) {
- dev_err(dev, "syscon_node_to_regmap failed\n");
- return -EINVAL;
- }
+ if (IS_ERR(ks_sa_rng->regmap_cfg))
+ return dev_err_probe(dev, -EINVAL, "syscon_node_to_regmap failed\n");
pm_runtime_enable(dev);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0) {
- dev_err(dev, "Failed to enable SA power-domain\n");
pm_runtime_disable(dev);
- return ret;
+ return dev_err_probe(dev, ret, "Failed to enable SA power-domain\n");
}
- platform_set_drvdata(pdev, ks_sa_rng);
-
return devm_hwrng_register(&pdev->dev, &ks_sa_rng->rng);
}
#include <linux/types.h>
#include <linux/of.h>
#include <linux/clk.h>
+#include <linux/iopoll.h>
-#define RNG_DATA 0x00
+#define RNG_DATA 0x00
+#define RNG_S4_DATA 0x08
+#define RNG_S4_CFG 0x00
+
+#define RUN_BIT BIT(0)
+#define SEED_READY_STS_BIT BIT(31)
+
+struct meson_rng_priv {
+ int (*read)(struct hwrng *rng, void *buf, size_t max, bool wait);
+};
struct meson_rng_data {
void __iomem *base;
struct hwrng rng;
+ struct device *dev;
};
static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
return sizeof(u32);
}
+static int meson_rng_wait_status(void __iomem *cfg_addr, int bit)
+{
+ u32 status = 0;
+ int ret;
+
+ ret = readl_relaxed_poll_timeout_atomic(cfg_addr,
+ status, !(status & bit),
+ 10, 10000);
+ if (ret)
+ return -EBUSY;
+
+ return 0;
+}
+
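+/*
+ * S4 read sequence: set SEED_READY_STS_BIT in RNG_S4_CFG, poll until the
+ * controller clears both that bit and RUN_BIT, then read one 32-bit word
+ * from RNG_S4_DATA.
+ */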
+static int meson_s4_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
+{
+ struct meson_rng_data *data =
+ container_of(rng, struct meson_rng_data, rng);
+
+ void __iomem *cfg_addr = data->base + RNG_S4_CFG;
+ int err;
+
+ writel_relaxed(readl_relaxed(cfg_addr) | SEED_READY_STS_BIT, cfg_addr);
+
+ err = meson_rng_wait_status(cfg_addr, SEED_READY_STS_BIT);
+ if (err) {
+ dev_err(data->dev, "Seed isn't ready, try again\n");
+ return err;
+ }
+
+ err = meson_rng_wait_status(cfg_addr, RUN_BIT);
+ if (err) {
+ dev_err(data->dev, "Can't get random number, try again\n");
+ return err;
+ }
+
+ *(u32 *)buf = readl_relaxed(data->base + RNG_S4_DATA);
+
+ return sizeof(u32);
+}
+
static int meson_rng_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct meson_rng_data *data;
struct clk *core_clk;
+ const struct meson_rng_priv *priv;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
+ priv = device_get_match_data(&pdev->dev);
+ if (!priv)
+ return -ENODEV;
+
data->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(data->base))
return PTR_ERR(data->base);
"Failed to get core clock\n");
data->rng.name = pdev->name;
- data->rng.read = meson_rng_read;
+ data->rng.read = priv->read;
+
+ data->dev = &pdev->dev;
return devm_hwrng_register(dev, &data->rng);
}
+static const struct meson_rng_priv meson_rng_priv = {
+ .read = meson_rng_read,
+};
+
+static const struct meson_rng_priv meson_rng_priv_s4 = {
+ .read = meson_s4_rng_read,
+};
+
static const struct of_device_id meson_rng_of_match[] = {
- { .compatible = "amlogic,meson-rng", },
+ {
+ .compatible = "amlogic,meson-rng",
+ .data = (void *)&meson_rng_priv,
+ },
+ {
+ .compatible = "amlogic,meson-s4-rng",
+ .data = (void *)&meson_rng_priv_s4,
+ },
{},
};
MODULE_DEVICE_TABLE(of, meson_rng_of_match);
rng_priv->rng.read = mpfs_rng_read;
rng_priv->rng.name = pdev->name;
- platform_set_drvdata(pdev, rng_priv);
-
ret = devm_hwrng_register(&pdev->dev, &rng_priv->rng);
if (ret)
return dev_err_probe(&pdev->dev, ret, "Failed to register MPFS hwrng\n");
#include <linux/hw_random.h>
#include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
#include <asm/hypervisor.h>
static const struct of_device_id n2rng_match[];
static int n2rng_probe(struct platform_device *op)
{
- const struct of_device_id *match;
int err = -ENOMEM;
struct n2rng *np;
- match = of_match_device(n2rng_match, &op->dev);
- if (!match)
- return -EINVAL;
-
n2rng_driver_version();
np = devm_kzalloc(&op->dev, sizeof(*np), GFP_KERNEL);
if (!np)
goto out;
np->op = op;
- np->data = (struct n2rng_template *)match->data;
+ np->data = (struct n2rng_template *)device_get_match_data(&op->dev);
INIT_DELAYED_WORK(&np->work, n2rng_work);
module_amba_driver(nmk_rng_driver);
+MODULE_DESCRIPTION("ST-Ericsson Nomadik Random Number Generator");
MODULE_LICENSE("GPL");
ctl.u64 = 0;
ctl.s.ent_en = 1; /* Enable the entropy source. */
ctl.s.rng_en = 1; /* Enable the RNG hardware. */
- cvmx_write_csr((__force u64)p->control_status, ctl.u64);
+ cvmx_write_csr((unsigned long)p->control_status, ctl.u64);
return 0;
}
ctl.u64 = 0;
/* Disable everything. */
- cvmx_write_csr((__force u64)p->control_status, ctl.u64);
+ cvmx_write_csr((unsigned long)p->control_status, ctl.u64);
}
static int octeon_rng_data_read(struct hwrng *rng, u32 *data)
{
struct octeon_rng *p = container_of(rng, struct octeon_rng, ops);
- *data = cvmx_read64_uint32((__force u64)p->result);
+ *data = cvmx_read64_uint32((unsigned long)p->result);
return sizeof(u32);
}
module_platform_driver(st_rng_driver);
MODULE_AUTHOR("Pankaj Dev <pankaj.dev@st.com>");
+MODULE_DESCRIPTION("ST Microelectronics HW Random Number Generator");
MODULE_LICENSE("GPL v2");
#include <linux/reset.h>
#include <linux/slab.h>
-#define RNG_CR 0x00
-#define RNG_CR_RNGEN BIT(2)
-#define RNG_CR_CED BIT(5)
-
-#define RNG_SR 0x04
-#define RNG_SR_SEIS BIT(6)
-#define RNG_SR_CEIS BIT(5)
-#define RNG_SR_DRDY BIT(0)
+#define RNG_CR 0x00
+#define RNG_CR_RNGEN BIT(2)
+#define RNG_CR_CED BIT(5)
+#define RNG_CR_CONFIG1 GENMASK(11, 8)
+#define RNG_CR_NISTC BIT(12)
+#define RNG_CR_CONFIG2 GENMASK(15, 13)
+#define RNG_CR_CLKDIV_SHIFT 16
+#define RNG_CR_CLKDIV GENMASK(19, 16)
+#define RNG_CR_CONFIG3 GENMASK(25, 20)
+#define RNG_CR_CONDRST BIT(30)
+#define RNG_CR_CONFLOCK BIT(31)
+#define RNG_CR_ENTROPY_SRC_MASK (RNG_CR_CONFIG1 | RNG_CR_NISTC | RNG_CR_CONFIG2 | RNG_CR_CONFIG3)
+#define RNG_CR_CONFIG_MASK (RNG_CR_ENTROPY_SRC_MASK | RNG_CR_CED | RNG_CR_CLKDIV)
+
+#define RNG_SR 0x04
+#define RNG_SR_DRDY BIT(0)
+#define RNG_SR_CECS BIT(1)
+#define RNG_SR_SECS BIT(2)
+#define RNG_SR_CEIS BIT(5)
+#define RNG_SR_SEIS BIT(6)
+
+#define RNG_DR 0x08
+
+#define RNG_NSCR 0x0C
+#define RNG_NSCR_MASK GENMASK(17, 0)
+
+#define RNG_HTCR 0x10
+
+#define RNG_NB_RECOVER_TRIES 3
+
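+/* Per-compatible match data: max RNG clock rate and default CR/NSCR/HTCR values. */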
+struct stm32_rng_data {
+ uint max_clock_rate;
+ u32 cr;
+ u32 nscr;
+ u32 htcr;
+ bool has_cond_reset;
+};
-#define RNG_DR 0x08
+/**
+ * struct stm32_rng_config - RNG configuration data
+ *
+ * @cr: RNG configuration. 0 means the default hardware RNG configuration.
+ * @nscr: Noise sources control configuration.
+ * @htcr: Health tests configuration.
+ */
+struct stm32_rng_config {
+ u32 cr;
+ u32 nscr;
+ u32 htcr;
+};
struct stm32_rng_private {
struct hwrng rng;
void __iomem *base;
struct clk *clk;
struct reset_control *rst;
+ struct stm32_rng_config pm_conf;
+ const struct stm32_rng_data *data;
bool ced;
+ bool lock_conf;
};
+/*
+ * Extracts from the STM32 RNG specification when RNG supports CONDRST.
+ *
+ * When a noise source (or seed) error occurs, the RNG stops generating
+ * random numbers and sets to “1” both SEIS and SECS bits to indicate
+ * that a seed error occurred. (...)
+ *
+ * 1. Software reset by writing CONDRST at 1 and at 0 (see bitfield
+ * description for details). This step is needed only if SECS is set.
+ * Indeed, when SEIS is set and SECS is cleared it means RNG performed
+ * the reset automatically (auto-reset).
+ * 2. If SECS was set in step 1 (no auto-reset) wait for CONDRST
+ * to be cleared in the RNG_CR register, then confirm that SEIS is
+ * cleared in the RNG_SR register. Otherwise just clear SEIS bit in
+ * the RNG_SR register.
+ * 3. If SECS was set in step 1 (no auto-reset) wait for SECS to be
+ * cleared by RNG. The random number generation is now back to normal.
+ */
+static int stm32_rng_conceal_seed_error_cond_reset(struct stm32_rng_private *priv)
+{
+ struct device *dev = (struct device *)priv->rng.priv;
+ u32 sr = readl_relaxed(priv->base + RNG_SR);
+ u32 cr = readl_relaxed(priv->base + RNG_CR);
+ int err;
+
+ if (sr & RNG_SR_SECS) {
+ /* Conceal by resetting the subsystem (step 1.) */
+ writel_relaxed(cr | RNG_CR_CONDRST, priv->base + RNG_CR);
+ writel_relaxed(cr & ~RNG_CR_CONDRST, priv->base + RNG_CR);
+ } else {
+ /* RNG auto-reset (step 2.) */
+ writel_relaxed(sr & ~RNG_SR_SEIS, priv->base + RNG_SR);
+ goto end;
+ }
+
+ err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, cr, !(cr & RNG_CR_CONDRST), 10,
+ 100000);
+ if (err) {
+ dev_err(dev, "%s: timeout %x\n", __func__, sr);
+ return err;
+ }
+
+ /* Check SEIS is cleared (step 2.) */
+ if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+ return -EINVAL;
+
+ err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_SR, sr, !(sr & RNG_SR_SECS), 10,
+ 100000);
+ if (err) {
+ dev_err(dev, "%s: timeout %x\n", __func__, sr);
+ return err;
+ }
+
+end:
+ return 0;
+}
+
+/*
+ * Extracts from the STM32 RNG specification, when CONDRST is not supported
+ *
+ * When a noise source (or seed) error occurs, the RNG stops generating
+ * random numbers and sets to “1” both SEIS and SECS bits to indicate
+ * that a seed error occurred. (...)
+ *
+ * The following sequence shall be used to fully recover from a seed
+ * error after the RNG initialization:
+ * 1. Clear the SEIS bit by writing it to “0”.
+ * 2. Read out 12 words from the RNG_DR register, and discard each of
+ * them in order to clean the pipeline.
+ * 3. Confirm that SEIS is still cleared. Random number generation is
+ * back to normal.
+ */
+static int stm32_rng_conceal_seed_error_sw_reset(struct stm32_rng_private *priv)
+{
+ unsigned int i = 0;
+ u32 sr = readl_relaxed(priv->base + RNG_SR);
+
+ writel_relaxed(sr & ~RNG_SR_SEIS, priv->base + RNG_SR);
+
+ for (i = 12; i != 0; i--)
+ (void)readl_relaxed(priv->base + RNG_DR);
+
+ if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int stm32_rng_conceal_seed_error(struct hwrng *rng)
+{
+ struct stm32_rng_private *priv = container_of(rng, struct stm32_rng_private, rng);
+
+ dev_dbg((struct device *)priv->rng.priv, "Concealing seed error\n");
+
+ if (priv->data->has_cond_reset)
+ return stm32_rng_conceal_seed_error_cond_reset(priv);
+ else
+ return stm32_rng_conceal_seed_error_sw_reset(priv);
+}
+
static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
- struct stm32_rng_private *priv =
- container_of(rng, struct stm32_rng_private, rng);
+ struct stm32_rng_private *priv = container_of(rng, struct stm32_rng_private, rng);
+ unsigned int i = 0;
+ int retval = 0, err = 0;
u32 sr;
- int retval = 0;
pm_runtime_get_sync((struct device *) priv->rng.priv);
+ if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+ stm32_rng_conceal_seed_error(rng);
+
while (max >= sizeof(u32)) {
sr = readl_relaxed(priv->base + RNG_SR);
- /* Manage timeout which is based on timer and take */
- /* care of initial delay time when enabling rng */
+ /*
+ * Manage the timer-based timeout and take care of the initial
+ * delay when the RNG is first enabled.
+ */
if (!sr && wait) {
- int err;
-
err = readl_relaxed_poll_timeout_atomic(priv->base
+ RNG_SR,
sr, sr,
10, 50000);
- if (err)
+ if (err) {
dev_err((struct device *)priv->rng.priv,
"%s: timeout %x!\n", __func__, sr);
+ break;
+ }
+ } else if (!sr) {
+ /* The FIFO is being filled up */
+ break;
}
- /* If error detected or data not ready... */
if (sr != RNG_SR_DRDY) {
- if (WARN_ONCE(sr & (RNG_SR_SEIS | RNG_SR_CEIS),
- "bad RNG status - %x\n", sr))
+ if (sr & RNG_SR_SEIS) {
+ err = stm32_rng_conceal_seed_error(rng);
+ i++;
+ if (err && i > RNG_NB_RECOVER_TRIES) {
+ dev_err((struct device *)priv->rng.priv,
+ "Couldn't recover from seed error\n");
+ return -ENOTRECOVERABLE;
+ }
+
+ continue;
+ }
+
+ if (WARN_ONCE((sr & RNG_SR_CEIS), "RNG clock too slow - %x\n", sr))
writel_relaxed(0, priv->base + RNG_SR);
- break;
}
+ /* Late seed error case: DR being 0 is an error status */
*(u32 *)data = readl_relaxed(priv->base + RNG_DR);
+ if (!*(u32 *)data) {
+ err = stm32_rng_conceal_seed_error(rng);
+ i++;
+ if (err && i > RNG_NB_RECOVER_TRIES) {
+ dev_err((struct device *)priv->rng.priv,
+ "Couldn't recover from seed error");
+ return -ENOTRECOVERABLE;
+ }
+ continue;
+ }
+
+ i = 0;
retval += sizeof(u32);
data += sizeof(u32);
max -= sizeof(u32);
return retval || !wait ? retval : -EIO;
}
+static uint stm32_rng_clock_freq_restrain(struct hwrng *rng)
+{
+ struct stm32_rng_private *priv =
+ container_of(rng, struct stm32_rng_private, rng);
+ unsigned long clock_rate = 0;
+ uint clock_div = 0;
+
+ clock_rate = clk_get_rate(priv->clk);
+
+ /*
+ * Get the exponent to apply to the CLKDIV field in the RNG_CR register.
+ * There is no need to handle the case where clock_div > 0xF, as it is
+ * physically impossible.
+ */
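+ /*
+ * Illustrative example: a 64 MHz kernel clock against the 48 MHz
+ * stm32mp13 limit yields clock_div = 1, i.e. an effective 32 MHz.
+ */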
+ while ((clock_rate >> clock_div) > priv->data->max_clock_rate)
+ clock_div++;
+
+ pr_debug("RNG clk rate : %lu\n", clk_get_rate(priv->clk) >> clock_div);
+
+ return clock_div;
+}
+
static int stm32_rng_init(struct hwrng *rng)
{
struct stm32_rng_private *priv =
container_of(rng, struct stm32_rng_private, rng);
int err;
+ u32 reg;
err = clk_prepare_enable(priv->clk);
if (err)
return err;
- if (priv->ced)
- writel_relaxed(RNG_CR_RNGEN, priv->base + RNG_CR);
- else
- writel_relaxed(RNG_CR_RNGEN | RNG_CR_CED,
- priv->base + RNG_CR);
-
/* clear error indicators */
writel_relaxed(0, priv->base + RNG_SR);
+ reg = readl_relaxed(priv->base + RNG_CR);
+
+ /*
+ * Keep default RNG configuration if none was specified.
+ * 0 is an invalid value as it disables all entropy sources.
+ */
+ if (priv->data->has_cond_reset && priv->data->cr) {
+ uint clock_div = stm32_rng_clock_freq_restrain(rng);
+
+ reg &= ~RNG_CR_CONFIG_MASK;
+ reg |= RNG_CR_CONDRST | (priv->data->cr & RNG_CR_ENTROPY_SRC_MASK) |
+ (clock_div << RNG_CR_CLKDIV_SHIFT);
+ if (priv->ced)
+ reg &= ~RNG_CR_CED;
+ else
+ reg |= RNG_CR_CED;
+ writel_relaxed(reg, priv->base + RNG_CR);
+
+ /* Health tests and noise control registers */
+ writel_relaxed(priv->data->htcr, priv->base + RNG_HTCR);
+ writel_relaxed(priv->data->nscr & RNG_NSCR_MASK, priv->base + RNG_NSCR);
+
+ reg &= ~RNG_CR_CONDRST;
+ reg |= RNG_CR_RNGEN;
+ if (priv->lock_conf)
+ reg |= RNG_CR_CONFLOCK;
+
+ writel_relaxed(reg, priv->base + RNG_CR);
+
+ err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, reg,
+ (!(reg & RNG_CR_CONDRST)),
+ 10, 50000);
+ if (err) {
+ dev_err((struct device *)priv->rng.priv,
+ "%s: timeout %x!\n", __func__, reg);
+ return -EINVAL;
+ }
+ } else {
+ /* Handle all RNG versions by checking if conditional reset should be set */
+ if (priv->data->has_cond_reset)
+ reg |= RNG_CR_CONDRST;
+
+ if (priv->ced)
+ reg &= ~RNG_CR_CED;
+ else
+ reg |= RNG_CR_CED;
+
+ writel_relaxed(reg, priv->base + RNG_CR);
+
+ if (priv->data->has_cond_reset)
+ reg &= ~RNG_CR_CONDRST;
+
+ reg |= RNG_CR_RNGEN;
+
+ writel_relaxed(reg, priv->base + RNG_CR);
+ }
+
+ err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_SR, reg,
+ reg & RNG_SR_DRDY,
+ 10, 100000);
+ if (err | (reg & ~RNG_SR_DRDY)) {
+ clk_disable_unprepare(priv->clk);
+ dev_err((struct device *)priv->rng.priv,
+ "%s: timeout:%x SR: %x!\n", __func__, err, reg);
+ return -EINVAL;
+ }
+
return 0;
}
-static void stm32_rng_cleanup(struct hwrng *rng)
+static int stm32_rng_remove(struct platform_device *ofdev)
{
- struct stm32_rng_private *priv =
- container_of(rng, struct stm32_rng_private, rng);
+ pm_runtime_disable(&ofdev->dev);
+
+ return 0;
+}
+
+static int __maybe_unused stm32_rng_runtime_suspend(struct device *dev)
+{
+ struct stm32_rng_private *priv = dev_get_drvdata(dev);
+ u32 reg;
- writel_relaxed(0, priv->base + RNG_CR);
+ reg = readl_relaxed(priv->base + RNG_CR);
+ reg &= ~RNG_CR_RNGEN;
+ writel_relaxed(reg, priv->base + RNG_CR);
clk_disable_unprepare(priv->clk);
+
+ return 0;
}
+static int __maybe_unused stm32_rng_suspend(struct device *dev)
+{
+ struct stm32_rng_private *priv = dev_get_drvdata(dev);
+
+ if (priv->data->has_cond_reset) {
+ priv->pm_conf.nscr = readl_relaxed(priv->base + RNG_NSCR);
+ priv->pm_conf.htcr = readl_relaxed(priv->base + RNG_HTCR);
+ }
+
+ /* Do not save the RNG enable bit: re-enabling the RNG is handled at resume */
+ priv->pm_conf.cr = readl_relaxed(priv->base + RNG_CR) & ~RNG_CR_RNGEN;
+
+ writel_relaxed(priv->pm_conf.cr, priv->base + RNG_CR);
+
+ clk_disable_unprepare(priv->clk);
+
+ return 0;
+}
+
+static int __maybe_unused stm32_rng_runtime_resume(struct device *dev)
+{
+ struct stm32_rng_private *priv = dev_get_drvdata(dev);
+ int err;
+ u32 reg;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ return err;
+
+ /* Clean error indications */
+ writel_relaxed(0, priv->base + RNG_SR);
+
+ reg = readl_relaxed(priv->base + RNG_CR);
+ reg |= RNG_CR_RNGEN;
+ writel_relaxed(reg, priv->base + RNG_CR);
+
+ return 0;
+}
+
+static int __maybe_unused stm32_rng_resume(struct device *dev)
+{
+ struct stm32_rng_private *priv = dev_get_drvdata(dev);
+ int err;
+ u32 reg;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ return err;
+
+ /* Clean error indications */
+ writel_relaxed(0, priv->base + RNG_SR);
+
+ if (priv->data->has_cond_reset) {
+ /*
+ * The configuration in bits [29:4] must be written in the same
+ * access that sets the RNG_CR_CONDRST bit, otherwise the new
+ * settings are not taken into account. The CONFIGLOCK bit must
+ * also be clear, but that is not handled at the moment.
+ */
+ writel_relaxed(priv->pm_conf.cr | RNG_CR_CONDRST, priv->base + RNG_CR);
+
+ writel_relaxed(priv->pm_conf.nscr, priv->base + RNG_NSCR);
+ writel_relaxed(priv->pm_conf.htcr, priv->base + RNG_HTCR);
+
+ reg = readl_relaxed(priv->base + RNG_CR);
+ reg |= RNG_CR_RNGEN;
+ reg &= ~RNG_CR_CONDRST;
+ writel_relaxed(reg, priv->base + RNG_CR);
+
+ err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, reg,
+ reg & ~RNG_CR_CONDRST, 10, 100000);
+
+ if (err) {
+ clk_disable_unprepare(priv->clk);
+ dev_err((struct device *)priv->rng.priv,
+ "%s: timeout:%x CR: %x!\n", __func__, err, reg);
+ return -EINVAL;
+ }
+ } else {
+ reg = priv->pm_conf.cr;
+ reg |= RNG_CR_RNGEN;
+ writel_relaxed(reg, priv->base + RNG_CR);
+ }
+
+ return 0;
+}
+
+static const struct dev_pm_ops __maybe_unused stm32_rng_pm_ops = {
+ SET_RUNTIME_PM_OPS(stm32_rng_runtime_suspend,
+ stm32_rng_runtime_resume, NULL)
+ SET_SYSTEM_SLEEP_PM_OPS(stm32_rng_suspend,
+ stm32_rng_resume)
+};
+
+static const struct stm32_rng_data stm32mp13_rng_data = {
+ .has_cond_reset = true,
+ .max_clock_rate = 48000000,
+ .cr = 0x00F00D00,
+ .nscr = 0x2B5BB,
+ .htcr = 0x969D,
+};
+
+static const struct stm32_rng_data stm32_rng_data = {
+ .has_cond_reset = false,
+ .max_clock_rate = 3000000,
+};
+
+static const struct of_device_id stm32_rng_match[] = {
+ {
+ .compatible = "st,stm32mp13-rng",
+ .data = &stm32mp13_rng_data,
+ },
+ {
+ .compatible = "st,stm32-rng",
+ .data = &stm32_rng_data,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, stm32_rng_match);
+
static int stm32_rng_probe(struct platform_device *ofdev)
{
struct device *dev = &ofdev->dev;
struct device_node *np = ofdev->dev.of_node;
struct stm32_rng_private *priv;
- struct resource res;
- int err;
+ struct resource *res;
priv = devm_kzalloc(dev, sizeof(struct stm32_rng_private), GFP_KERNEL);
if (!priv)
return -ENOMEM;
- err = of_address_to_resource(np, 0, &res);
- if (err)
- return err;
-
- priv->base = devm_ioremap_resource(dev, &res);
+ priv->base = devm_platform_get_and_ioremap_resource(ofdev, 0, &res);
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
}
priv->ced = of_property_read_bool(np, "clock-error-detect");
+ priv->lock_conf = of_property_read_bool(np, "st,rng-lock-conf");
+
+ priv->data = of_device_get_match_data(dev);
+ if (!priv->data)
+ return -ENODEV;
dev_set_drvdata(dev, priv);
priv->rng.name = dev_driver_string(dev);
-#ifndef CONFIG_PM
priv->rng.init = stm32_rng_init;
- priv->rng.cleanup = stm32_rng_cleanup;
-#endif
priv->rng.read = stm32_rng_read;
priv->rng.priv = (unsigned long) dev;
priv->rng.quality = 900;
return devm_hwrng_register(dev, &priv->rng);
}
-static int stm32_rng_remove(struct platform_device *ofdev)
-{
- pm_runtime_disable(&ofdev->dev);
-
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int stm32_rng_runtime_suspend(struct device *dev)
-{
- struct stm32_rng_private *priv = dev_get_drvdata(dev);
-
- stm32_rng_cleanup(&priv->rng);
-
- return 0;
-}
-
-static int stm32_rng_runtime_resume(struct device *dev)
-{
- struct stm32_rng_private *priv = dev_get_drvdata(dev);
-
- return stm32_rng_init(&priv->rng);
-}
-#endif
-
-static const struct dev_pm_ops stm32_rng_pm_ops = {
- SET_RUNTIME_PM_OPS(stm32_rng_runtime_suspend,
- stm32_rng_runtime_resume, NULL)
- SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
- pm_runtime_force_resume)
-};
-
-
-static const struct of_device_id stm32_rng_match[] = {
- {
- .compatible = "st,stm32-rng",
- },
- {},
-};
-MODULE_DEVICE_TABLE(of, stm32_rng_match);
-
static struct platform_driver stm32_rng_driver = {
.driver = {
.name = "stm32-rng",
- .pm = &stm32_rng_pm_ops,
+ .pm = pm_ptr(&stm32_rng_pm_ops),
.of_match_table = stm32_rng_match,
},
.probe = stm32_rng_probe,
return -ENOMEM;
ctx->dev = &pdev->dev;
- platform_set_drvdata(pdev, ctx);
ctx->csr_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->csr_base))
return ret;
}
- platform_set_drvdata(pdev, trng);
-
return 0;
}
config CRYPTO_DEV_QCOM_RNG
tristate "Qualcomm Random Number Generator Driver"
depends on ARCH_QCOM || COMPILE_TEST
+ depends on HW_RANDOM
select CRYPTO_RNG
help
This driver provides support for the Random Number
.cra_name = "md5",
.cra_driver_name = "md5-sun4i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun4i_req_ctx),
.cra_module = THIS_MODULE,
.cra_name = "sha1",
.cra_driver_name = "sha1-sun4i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun4i_req_ctx),
.cra_module = THIS_MODULE,
return err;
}
-static int sun4i_ss_remove(struct platform_device *pdev)
+static void sun4i_ss_remove(struct platform_device *pdev)
{
int i;
struct sun4i_ss_ctx *ss = platform_get_drvdata(pdev);
}
sun4i_ss_pm_exit(ss);
- return 0;
}
static const struct of_device_id a20ss_crypto_of_match_table[] = {
static struct platform_driver sun4i_ss_driver = {
.probe = sun4i_ss_probe,
- .remove = sun4i_ss_remove,
+ .remove_new = sun4i_ss_remove,
.driver = {
.name = "sun4i-ss",
.pm = &sun4i_ss_pm_ops,
.cra_name = "md5",
.cra_driver_name = "md5-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha1",
.cra_driver_name = "sha1-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha224",
.cra_driver_name = "sha224-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha256",
.cra_driver_name = "sha256-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha384",
.cra_driver_name = "sha384-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha512",
.cra_driver_name = "sha512-sun8i-ce",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
return err;
}
-static int sun8i_ce_remove(struct platform_device *pdev)
+static void sun8i_ce_remove(struct platform_device *pdev)
{
struct sun8i_ce_dev *ce = platform_get_drvdata(pdev);
sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
sun8i_ce_pm_exit(ce);
- return 0;
}
static const struct of_device_id sun8i_ce_crypto_of_match_table[] = {
static struct platform_driver sun8i_ce_driver = {
.probe = sun8i_ce_probe,
- .remove = sun8i_ce_remove,
+ .remove_new = sun8i_ce_remove,
.driver = {
.name = "sun8i-ce",
.pm = &sun8i_ce_pm_ops,
.cra_name = "md5",
.cra_driver_name = "md5-sun8i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha1",
.cra_driver_name = "sha1-sun8i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha224",
.cra_driver_name = "sha224-sun8i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "sha256",
.cra_driver_name = "sha256-sun8i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_name = "hmac(sha1)",
.cra_driver_name = "hmac-sha1-sun8i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
return err;
}
-static int sun8i_ss_remove(struct platform_device *pdev)
+static void sun8i_ss_remove(struct platform_device *pdev)
{
struct sun8i_ss_dev *ss = platform_get_drvdata(pdev);
sun8i_ss_free_flows(ss, MAXFLOW - 1);
sun8i_ss_pm_exit(ss);
-
- return 0;
}
static const struct of_device_id sun8i_ss_crypto_of_match_table[] = {
static struct platform_driver sun8i_ss_driver = {
.probe = sun8i_ss_probe,
- .remove = sun8i_ss_remove,
+ .remove_new = sun8i_ss_remove,
.driver = {
.name = "sun8i-ss",
.pm = &sun8i_ss_pm_ops,
return rc;
}
-static int crypto4xx_remove(struct platform_device *ofdev)
+static void crypto4xx_remove(struct platform_device *ofdev)
{
struct device *dev = &ofdev->dev;
struct crypto4xx_core_device *core_dev = dev_get_drvdata(dev);
mutex_destroy(&core_dev->rng_lock);
/* Free all allocated memory */
crypto4xx_stop_all(core_dev);
-
- return 0;
}
static const struct of_device_id crypto4xx_match[] = {
.of_match_table = crypto4xx_match,
},
.probe = crypto4xx_probe,
- .remove = crypto4xx_remove,
+ .remove_new = crypto4xx_remove,
};
module_platform_driver(crypto4xx_driver);
return err;
}
-static int meson_crypto_remove(struct platform_device *pdev)
+static void meson_crypto_remove(struct platform_device *pdev)
{
struct meson_dev *mc = platform_get_drvdata(pdev);
meson_free_chanlist(mc, MAXFLOW - 1);
clk_disable_unprepare(mc->busclk);
- return 0;
}
static const struct of_device_id meson_crypto_of_match_table[] = {
static struct platform_driver meson_crypto_driver = {
.probe = meson_crypto_probe,
- .remove = meson_crypto_remove,
+ .remove_new = meson_crypto_remove,
.driver = {
.name = "gxl-crypto",
.of_match_table = meson_crypto_of_match_table,
return rc;
}
-static int aspeed_acry_remove(struct platform_device *pdev)
+static void aspeed_acry_remove(struct platform_device *pdev)
{
struct aspeed_acry_dev *acry_dev = platform_get_drvdata(pdev);
crypto_engine_exit(acry_dev->crypt_engine_rsa);
tasklet_kill(&acry_dev->done_task);
clk_disable_unprepare(acry_dev->clk);
-
- return 0;
}
MODULE_DEVICE_TABLE(of, aspeed_acry_of_matches);
static struct platform_driver aspeed_acry_driver = {
.probe = aspeed_acry_probe,
- .remove = aspeed_acry_remove,
+ .remove_new = aspeed_acry_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = aspeed_acry_of_matches,
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/of_address.h>
-#include <linux/of_device.h>
-#include <linux/of_irq.h>
#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#ifdef CONFIG_CRYPTO_DEV_ASPEED_DEBUG
#define HACE_DBG(d, fmt, ...) \
static int aspeed_hace_probe(struct platform_device *pdev)
{
struct aspeed_engine_crypto *crypto_engine;
- const struct of_device_id *hace_dev_id;
struct aspeed_engine_hash *hash_engine;
struct aspeed_hace_dev *hace_dev;
int rc;
if (!hace_dev)
return -ENOMEM;
- hace_dev_id = of_match_device(aspeed_hace_of_matches, &pdev->dev);
- if (!hace_dev_id) {
+ hace_dev->version = (uintptr_t)device_get_match_data(&pdev->dev);
+ if (!hace_dev->version) {
dev_err(&pdev->dev, "Failed to match hace dev id\n");
return -EINVAL;
}
hace_dev->dev = &pdev->dev;
- hace_dev->version = (unsigned long)hace_dev_id->data;
hash_engine = &hace_dev->hash_engine;
crypto_engine = &hace_dev->crypto_engine;
return rc;
}
-static int aspeed_hace_remove(struct platform_device *pdev)
+static void aspeed_hace_remove(struct platform_device *pdev)
{
struct aspeed_hace_dev *hace_dev = platform_get_drvdata(pdev);
struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
tasklet_kill(&crypto_engine->done_task);
clk_disable_unprepare(hace_dev->clk);
-
- return 0;
}
MODULE_DEVICE_TABLE(of, aspeed_hace_of_matches);
static struct platform_driver aspeed_hace_driver = {
.probe = aspeed_hace_probe,
- .remove = aspeed_hace_remove,
+ .remove_new = aspeed_hace_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = aspeed_hace_of_matches,
return err;
}
-static int atmel_aes_remove(struct platform_device *pdev)
+static void atmel_aes_remove(struct platform_device *pdev)
{
struct atmel_aes_dev *aes_dd;
atmel_aes_buff_cleanup(aes_dd);
clk_unprepare(aes_dd->iclk);
-
- return 0;
}
static struct platform_driver atmel_aes_driver = {
.probe = atmel_aes_probe,
- .remove = atmel_aes_remove,
+ .remove_new = atmel_aes_remove,
.driver = {
.name = "atmel_aes",
.of_match_table = atmel_aes_dt_ids,
.halg.base.cra_name = "sha384",
.halg.base.cra_driver_name = "atmel-sha384",
.halg.base.cra_blocksize = SHA384_BLOCK_SIZE,
- .halg.base.cra_alignmask = 0x3,
.halg.digestsize = SHA384_DIGEST_SIZE,
},
.halg.base.cra_name = "sha512",
.halg.base.cra_driver_name = "atmel-sha512",
.halg.base.cra_blocksize = SHA512_BLOCK_SIZE,
- .halg.base.cra_alignmask = 0x3,
.halg.digestsize = SHA512_DIGEST_SIZE,
},
return err;
}
-static int atmel_sha_remove(struct platform_device *pdev)
+static void atmel_sha_remove(struct platform_device *pdev)
{
struct atmel_sha_dev *sha_dd = platform_get_drvdata(pdev);
atmel_sha_dma_cleanup(sha_dd);
clk_unprepare(sha_dd->iclk);
-
- return 0;
}
static struct platform_driver atmel_sha_driver = {
.probe = atmel_sha_probe,
- .remove = atmel_sha_remove,
+ .remove_new = atmel_sha_remove,
.driver = {
.name = "atmel_sha",
.of_match_table = atmel_sha_dt_ids,
return err;
}
-static int atmel_tdes_remove(struct platform_device *pdev)
+static void atmel_tdes_remove(struct platform_device *pdev)
{
struct atmel_tdes_dev *tdes_dd = platform_get_drvdata(pdev);
atmel_tdes_dma_cleanup(tdes_dd);
atmel_tdes_buff_cleanup(tdes_dd);
-
- return 0;
}
static struct platform_driver atmel_tdes_driver = {
.probe = atmel_tdes_probe,
- .remove = atmel_tdes_remove,
+ .remove_new = atmel_tdes_remove,
.driver = {
.name = "atmel_tdes",
.of_match_table = atmel_tdes_dt_ids,
CRYPTO_ALG_ALLOCATES_MEMORY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct artpec6_hashalg_context),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_init = artpec6_crypto_ahash_init,
.cra_exit = artpec6_crypto_ahash_exit,
CRYPTO_ALG_ALLOCATES_MEMORY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct artpec6_hashalg_context),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_init = artpec6_crypto_ahash_init,
.cra_exit = artpec6_crypto_ahash_exit,
CRYPTO_ALG_ALLOCATES_MEMORY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct artpec6_hashalg_context),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_init = artpec6_crypto_ahash_init_hmac_sha256,
.cra_exit = artpec6_crypto_ahash_exit,
return err;
}
-static int artpec6_crypto_remove(struct platform_device *pdev)
+static void artpec6_crypto_remove(struct platform_device *pdev)
{
struct artpec6_crypto *ac = platform_get_drvdata(pdev);
int irq = platform_get_irq(pdev, 0);
#ifdef CONFIG_DEBUG_FS
artpec6_crypto_free_debugfs();
#endif
- return 0;
}
static struct platform_driver artpec6_crypto_driver = {
.probe = artpec6_crypto_probe,
- .remove = artpec6_crypto_remove,
+ .remove_new = artpec6_crypto_remove,
.driver = {
.name = "artpec6-crypto",
.of_match_table = artpec6_crypto_of_match,
return err;
}
-static int bcm_spu_remove(struct platform_device *pdev)
+static void bcm_spu_remove(struct platform_device *pdev)
{
int i;
struct device *dev = &pdev->dev;
}
spu_free_debugfs();
spu_mb_release(pdev);
- return 0;
}
/* ===== Kernel Module API ===== */
.of_match_table = of_match_ptr(bcm_spu_dt_ids),
},
.probe = bcm_spu_probe,
- .remove = bcm_spu_remove,
+ .remove_new = bcm_spu_remove,
};
module_platform_driver(bcm_spu_pdriver);
if (keylen != CHACHA_KEY_SIZE + saltlen)
return -EINVAL;
- ctx->cdata.key_virt = key;
+ memcpy(ctx->key, key, keylen);
+ ctx->cdata.key_virt = ctx->key;
ctx->cdata.keylen = keylen - saltlen;
return chachapoly_set_sh_desc(aead);
if (keylen != CHACHA_KEY_SIZE + saltlen)
return -EINVAL;
- ctx->cdata.key_virt = key;
+ memcpy(ctx->key, key, keylen);
+ ctx->cdata.key_virt = ctx->key;
ctx->cdata.keylen = keylen - saltlen;
return chachapoly_set_sh_desc(aead);
return ret;
}
-static int caam_jr_remove(struct platform_device *pdev)
+static void caam_jr_remove(struct platform_device *pdev)
{
int ret;
struct device *jrdev;
caam_rng_exit(jrdev->parent);
/*
- * Return EBUSY if job ring already allocated.
+ * If a job ring is still allocated there is trouble ahead. Once
+ * caam_jr_remove() has returned, jrpriv will be freed and the registers
+ * will be unmapped, so any remaining user of the job ring will probably
+ * crash.
*/
if (atomic_read(&jrpriv->tfm_count)) {
- dev_err(jrdev, "Device is busy\n");
- return -EBUSY;
+ dev_alert(jrdev, "Device is busy; consumers might start to crash\n");
+ return;
}
/* Unregister JR-based RNG & crypto algorithms */
ret = caam_jr_shutdown(jrdev);
if (ret)
dev_err(jrdev, "Failed to shut down job ring\n");
-
- return ret;
-}
-
-static void caam_jr_platform_shutdown(struct platform_device *pdev)
-{
- caam_jr_remove(pdev);
}
/* Main per-ring interrupt handler */
.pm = pm_ptr(&caam_jr_pm_ops),
},
.probe = caam_jr_probe,
- .remove = caam_jr_remove,
- .shutdown = caam_jr_platform_shutdown,
+ .remove_new = caam_jr_remove,
+ .shutdown = caam_jr_remove,
};
static int __init jr_driver_init(void)
ndev->hw.revision_id);
/* copy partname */
- strncpy(ndev->hw.partname, name, sizeof(ndev->hw.partname));
+ strscpy(ndev->hw.partname, name, sizeof(ndev->hw.partname));
}
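Unlike strncpy(), strscpy() always NUL-terminates the destination and returns -E2BIG when the source does not fit, so callers that care about truncation can check the result; a short sketch of that check, with an illustrative local buffer::

	char partname[32];	/* illustrative size, not the driver's actual field */
	ssize_t len;

	len = strscpy(partname, name, sizeof(partname));
	if (len == -E2BIG)
		pr_warn("part name \"%s\" was truncated\n", name);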
void enable_pf2vf_mbox_interrupts(struct nitrox_device *ndev)
#include "dbc.h"
+#define DBC_DEFAULT_TIMEOUT (10 * MSEC_PER_SEC)
struct error_map {
u32 psp;
int ret;
{0x0, 0x0},
};
-static int send_dbc_cmd(struct psp_dbc_device *dbc_dev,
- enum psp_platform_access_msg msg)
+static inline int send_dbc_cmd_thru_ext(struct psp_dbc_device *dbc_dev, int msg)
+{
+ dbc_dev->mbox->ext_req.header.sub_cmd_id = msg;
+
+ return psp_extended_mailbox_cmd(dbc_dev->psp,
+ DBC_DEFAULT_TIMEOUT,
+ (struct psp_ext_request *)dbc_dev->mbox);
+}
+
+static inline int send_dbc_cmd_thru_pa(struct psp_dbc_device *dbc_dev, int msg)
+{
+ return psp_send_platform_access_msg(msg,
+ (struct psp_request *)dbc_dev->mbox);
+}
+
+static int send_dbc_cmd(struct psp_dbc_device *dbc_dev, int msg)
{
int ret;
- dbc_dev->mbox->req.header.status = 0;
- ret = psp_send_platform_access_msg(msg, (struct psp_request *)dbc_dev->mbox);
+ *dbc_dev->result = 0;
+ ret = dbc_dev->use_ext ? send_dbc_cmd_thru_ext(dbc_dev, msg) :
+ send_dbc_cmd_thru_pa(dbc_dev, msg);
if (ret == -EIO) {
int i;
dev_dbg(dbc_dev->dev,
"msg 0x%x failed with PSP error: 0x%x\n",
- msg, dbc_dev->mbox->req.header.status);
+ msg, *dbc_dev->result);
for (i = 0; error_codes[i].psp; i++) {
- if (dbc_dev->mbox->req.header.status == error_codes[i].psp)
+ if (*dbc_dev->result == error_codes[i].psp)
return error_codes[i].ret;
}
}
{
int ret;
- dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_nonce);
+ *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_nonce);
ret = send_dbc_cmd(dbc_dev, PSP_DYNAMIC_BOOST_GET_NONCE);
if (ret == -EAGAIN) {
dev_dbg(dbc_dev->dev, "retrying get nonce\n");
static int send_dbc_parameter(struct psp_dbc_device *dbc_dev)
{
- dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_param);
+ struct dbc_user_param *user_param = (struct dbc_user_param *)dbc_dev->payload;
- switch (dbc_dev->mbox->dbc_param.user.msg_index) {
+ switch (user_param->msg_index) {
case PARAM_SET_FMAX_CAP:
case PARAM_SET_PWR_CAP:
case PARAM_SET_GFX_MODE:
switch (cmd) {
case DBCIOCNONCE:
- if (copy_from_user(&dbc_dev->mbox->dbc_nonce.user, argp,
- sizeof(struct dbc_user_nonce))) {
+ if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_nonce))) {
ret = -EFAULT;
goto unlock;
}
if (ret)
goto unlock;
- if (copy_to_user(argp, &dbc_dev->mbox->dbc_nonce.user,
- sizeof(struct dbc_user_nonce))) {
+ if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_nonce))) {
ret = -EFAULT;
goto unlock;
}
break;
case DBCIOCUID:
- dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_set_uid);
- if (copy_from_user(&dbc_dev->mbox->dbc_set_uid.user, argp,
- sizeof(struct dbc_user_setuid))) {
+ if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_setuid))) {
ret = -EFAULT;
goto unlock;
}
+ *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_setuid);
ret = send_dbc_cmd(dbc_dev, PSP_DYNAMIC_BOOST_SET_UID);
if (ret)
goto unlock;
- if (copy_to_user(argp, &dbc_dev->mbox->dbc_set_uid.user,
- sizeof(struct dbc_user_setuid))) {
+ if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_setuid))) {
ret = -EFAULT;
goto unlock;
}
break;
case DBCIOCPARAM:
- if (copy_from_user(&dbc_dev->mbox->dbc_param.user, argp,
- sizeof(struct dbc_user_param))) {
+ if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_param))) {
ret = -EFAULT;
goto unlock;
}
+ *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_param);
ret = send_dbc_parameter(dbc_dev);
if (ret)
goto unlock;
- if (copy_to_user(argp, &dbc_dev->mbox->dbc_param.user,
- sizeof(struct dbc_user_param))) {
+ if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_param))) {
ret = -EFAULT;
goto unlock;
}
struct psp_dbc_device *dbc_dev;
int ret;
- if (!PSP_FEATURE(psp, DBC))
- return 0;
-
dbc_dev = devm_kzalloc(dev, sizeof(*dbc_dev), GFP_KERNEL);
if (!dbc_dev)
return -ENOMEM;
BUILD_BUG_ON(sizeof(union dbc_buffer) > PAGE_SIZE);
- dbc_dev->mbox = (void *)devm_get_free_pages(dev, GFP_KERNEL, 0);
+ dbc_dev->mbox = (void *)devm_get_free_pages(dev, GFP_KERNEL | __GFP_ZERO, 0);
if (!dbc_dev->mbox) {
ret = -ENOMEM;
goto cleanup_dev;
psp->dbc_data = dbc_dev;
dbc_dev->dev = dev;
+ dbc_dev->psp = psp;
+
+ if (PSP_CAPABILITY(psp, DBC_THRU_EXT)) {
+ dbc_dev->use_ext = true;
+ dbc_dev->payload_size = &dbc_dev->mbox->ext_req.header.payload_size;
+ dbc_dev->result = &dbc_dev->mbox->ext_req.header.status;
+ dbc_dev->payload = &dbc_dev->mbox->ext_req.buf;
+ dbc_dev->header_size = sizeof(struct psp_ext_req_buffer_hdr);
+ } else {
+ dbc_dev->payload_size = &dbc_dev->mbox->pa_req.header.payload_size;
+ dbc_dev->result = &dbc_dev->mbox->pa_req.header.status;
+ dbc_dev->payload = &dbc_dev->mbox->pa_req.buf;
+ dbc_dev->header_size = sizeof(struct psp_req_buffer_hdr);
+ }
ret = send_dbc_nonce(dbc_dev);
if (ret == -EACCES) {
struct psp_dbc_device {
struct device *dev;
+ struct psp_device *psp;
union dbc_buffer *mbox;
struct mutex ioctl_mutex;
struct miscdevice char_dev;
-};
-
-struct dbc_nonce {
- struct psp_req_buffer_hdr header;
- struct dbc_user_nonce user;
-} __packed;
-struct dbc_set_uid {
- struct psp_req_buffer_hdr header;
- struct dbc_user_setuid user;
-} __packed;
-
-struct dbc_param {
- struct psp_req_buffer_hdr header;
- struct dbc_user_param user;
-} __packed;
+ /* used to abstract communication path */
+ bool use_ext;
+ u32 header_size;
+ u32 *payload_size;
+ u32 *result;
+ void *payload;
+};
union dbc_buffer {
- struct psp_request req;
- struct dbc_nonce dbc_nonce;
- struct dbc_set_uid dbc_set_uid;
- struct dbc_param dbc_param;
+ struct psp_request pa_req;
+ struct psp_ext_request ext_req;
};
void dbc_dev_destroy(struct psp_device *psp);
#include <linux/kernel.h>
#include <linux/irqreturn.h>
+#include <linux/mutex.h>
+#include <linux/bitfield.h>
+#include <linux/delay.h>
#include "sp-dev.h"
#include "psp-dev.h"
struct psp_device *psp_master;
+#define PSP_C2PMSG_17_CMDRESP_CMD GENMASK(19, 16)
+
+static int psp_mailbox_poll(const void __iomem *cmdresp_reg, unsigned int *cmdresp,
+ unsigned int timeout_msecs)
+{
+ while (true) {
+ *cmdresp = ioread32(cmdresp_reg);
+ if (FIELD_GET(PSP_CMDRESP_RESP, *cmdresp))
+ return 0;
+
+ if (!timeout_msecs--)
+ break;
+
+ usleep_range(1000, 1100);
+ }
+
+ return -ETIMEDOUT;
+}
+
+int psp_mailbox_command(struct psp_device *psp, enum psp_cmd cmd, void *cmdbuff,
+ unsigned int timeout_msecs, unsigned int *cmdresp)
+{
+ void __iomem *cmdresp_reg, *cmdbuff_lo_reg, *cmdbuff_hi_reg;
+ int ret;
+
+ if (!psp || !psp->vdata || !psp->vdata->cmdresp_reg ||
+ !psp->vdata->cmdbuff_addr_lo_reg || !psp->vdata->cmdbuff_addr_hi_reg)
+ return -ENODEV;
+
+ cmdresp_reg = psp->io_regs + psp->vdata->cmdresp_reg;
+ cmdbuff_lo_reg = psp->io_regs + psp->vdata->cmdbuff_addr_lo_reg;
+ cmdbuff_hi_reg = psp->io_regs + psp->vdata->cmdbuff_addr_hi_reg;
+
+ mutex_lock(&psp->mailbox_mutex);
+
+ /* Ensure mailbox is ready for a command */
+ ret = -EBUSY;
+ if (psp_mailbox_poll(cmdresp_reg, cmdresp, 0))
+ goto unlock;
+
+ if (cmdbuff) {
+ iowrite32(lower_32_bits(__psp_pa(cmdbuff)), cmdbuff_lo_reg);
+ iowrite32(upper_32_bits(__psp_pa(cmdbuff)), cmdbuff_hi_reg);
+ }
+
+ *cmdresp = FIELD_PREP(PSP_C2PMSG_17_CMDRESP_CMD, cmd);
+ iowrite32(*cmdresp, cmdresp_reg);
+
+ ret = psp_mailbox_poll(cmdresp_reg, cmdresp, timeout_msecs);
+
+unlock:
+ mutex_unlock(&psp->mailbox_mutex);
+
+ return ret;
+}
+
+int psp_extended_mailbox_cmd(struct psp_device *psp, unsigned int timeout_msecs,
+ struct psp_ext_request *req)
+{
+ unsigned int reg;
+ int ret;
+
+ print_hex_dump_debug("->psp ", DUMP_PREFIX_OFFSET, 16, 2, req,
+ req->header.payload_size, false);
+
+ ret = psp_mailbox_command(psp, PSP_CMD_TEE_EXTENDED_CMD, (void *)req,
+ timeout_msecs, ®);
+ if (ret) {
+ return ret;
+ } else if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
+ req->header.status = FIELD_GET(PSP_CMDRESP_STS, reg);
+ return -EIO;
+ }
+
+ print_hex_dump_debug("<-psp ", DUMP_PREFIX_OFFSET, 16, 2, req,
+ req->header.payload_size, false);
+
+ return 0;
+}
+
static struct psp_device *psp_alloc_struct(struct sp_device *sp)
{
struct device *dev = sp->dev;
psp->capability = val;
/* Detect if TSME and SME are both enabled */
- if (psp->capability & PSP_CAPABILITY_PSP_SECURITY_REPORTING &&
+ if (PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING) &&
psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET) &&
cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n");
static int psp_check_sev_support(struct psp_device *psp)
{
/* Check if device supports SEV feature */
- if (!(psp->capability & PSP_CAPABILITY_SEV)) {
+ if (!PSP_CAPABILITY(psp, SEV)) {
dev_dbg(psp->dev, "psp does not support SEV\n");
return -ENODEV;
}
static int psp_check_tee_support(struct psp_device *psp)
{
/* Check if device supports TEE feature */
- if (!(psp->capability & PSP_CAPABILITY_TEE)) {
+ if (!PSP_CAPABILITY(psp, TEE)) {
dev_dbg(psp->dev, "psp does not support TEE\n");
return -ENODEV;
}
return 0;
}
-static void psp_init_platform_access(struct psp_device *psp)
-{
- int ret;
-
- ret = platform_access_dev_init(psp);
- if (ret) {
- dev_warn(psp->dev, "platform access init failed: %d\n", ret);
- return;
- }
-
- /* dbc must come after platform access as it tests the feature */
- ret = dbc_dev_init(psp);
- if (ret)
- dev_warn(psp->dev, "failed to init dynamic boost control: %d\n",
- ret);
-}
-
static int psp_init(struct psp_device *psp)
{
int ret;
return ret;
}
- if (psp->vdata->platform_access)
- psp_init_platform_access(psp);
+ if (psp->vdata->platform_access) {
+ ret = platform_access_dev_init(psp);
+ if (ret)
+ return ret;
+ }
+
+ /* dbc must come after platform access as it tests the feature */
+ if (PSP_FEATURE(psp, DBC) ||
+ PSP_CAPABILITY(psp, DBC_THRU_EXT)) {
+ ret = dbc_dev_init(psp);
+ if (ret)
+ return ret;
+ }
return 0;
}
}
psp->io_regs = sp->io_map;
+ mutex_init(&psp->mailbox_mutex);
ret = psp_get_capability(psp);
if (ret)
#include <linux/list.h>
#include <linux/bits.h>
#include <linux/interrupt.h>
+#include <linux/mutex.h>
+#include <linux/psp.h>
+#include <linux/psp-platform-access.h>
#include "sp-dev.h"
struct sp_device *sp;
void __iomem *io_regs;
+ struct mutex mailbox_mutex;
psp_irq_handler_t sev_irq_handler;
void *sev_irq_data;
#define PSP_CAPABILITY_SEV BIT(0)
#define PSP_CAPABILITY_TEE BIT(1)
+#define PSP_CAPABILITY_DBC_THRU_EXT BIT(2)
#define PSP_CAPABILITY_PSP_SECURITY_REPORTING BIT(7)
#define PSP_CAPABILITY_PSP_SECURITY_OFFSET 8
#define PSP_SECURITY_HSP_TPM_AVAILABLE BIT(10)
#define PSP_SECURITY_ROM_ARMOR_ENFORCED BIT(11)
+/**
+ * enum psp_cmd - PSP mailbox commands
+ * @PSP_CMD_TEE_RING_INIT: Initialize TEE ring buffer
+ * @PSP_CMD_TEE_RING_DESTROY: Destroy TEE ring buffer
+ * @PSP_CMD_TEE_EXTENDED_CMD: Extended command
+ * @PSP_CMD_MAX: Maximum command id
+ */
+enum psp_cmd {
+ PSP_CMD_TEE_RING_INIT = 1,
+ PSP_CMD_TEE_RING_DESTROY = 2,
+ PSP_CMD_TEE_EXTENDED_CMD = 14,
+ PSP_CMD_MAX = 15,
+};
+
+int psp_mailbox_command(struct psp_device *psp, enum psp_cmd cmd, void *cmdbuff,
+ unsigned int timeout_msecs, unsigned int *cmdresp);
+
+/**
+ * struct psp_ext_req_buffer_hdr - Structure of the extended command header
+ * @payload_size: total payload size
+ * @sub_cmd_id: extended command ID
+ * @status: status of command execution (out)
+ */
+struct psp_ext_req_buffer_hdr {
+ u32 payload_size;
+ u32 sub_cmd_id;
+ u32 status;
+} __packed;
+
+struct psp_ext_request {
+ struct psp_ext_req_buffer_hdr header;
+ void *buf;
+} __packed;
+
+/**
+ * enum psp_sub_cmd - PSP mailbox sub commands
+ * @PSP_SUB_CMD_DBC_GET_NONCE: Get nonce from DBC
+ * @PSP_SUB_CMD_DBC_SET_UID: Set UID for DBC
+ * @PSP_SUB_CMD_DBC_GET_PARAMETER: Get parameter from DBC
+ * @PSP_SUB_CMD_DBC_SET_PARAMETER: Set parameter for DBC
+ */
+enum psp_sub_cmd {
+ PSP_SUB_CMD_DBC_GET_NONCE = PSP_DYNAMIC_BOOST_GET_NONCE,
+ PSP_SUB_CMD_DBC_SET_UID = PSP_DYNAMIC_BOOST_SET_UID,
+ PSP_SUB_CMD_DBC_GET_PARAMETER = PSP_DYNAMIC_BOOST_GET_PARAMETER,
+ PSP_SUB_CMD_DBC_SET_PARAMETER = PSP_DYNAMIC_BOOST_SET_PARAMETER,
+};
+
+int psp_extended_mailbox_cmd(struct psp_device *psp, unsigned int timeout_msecs,
+ struct psp_ext_request *req);
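An extended request is a single buffer whose header carries the sub-command id, the total payload size (header included) and, on return, the PSP's status code. A minimal sketch of issuing one of the DBC sub-commands, assuming req points at a physically contiguous, page-backed buffer like the one the DBC code allocates::

	struct psp_ext_request *req = buf;	/* buf: assumed page-backed allocation */
	int ret;

	req->header.sub_cmd_id = PSP_SUB_CMD_DBC_GET_NONCE;
	req->header.payload_size = sizeof(struct psp_ext_req_buffer_hdr) +
				   sizeof(struct dbc_user_nonce);

	ret = psp_extended_mailbox_cmd(psp, 10 * MSEC_PER_SEC, req);
	if (ret == -EIO)
		pr_debug("psp status: 0x%x\n", req->header.status);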
#endif /* __PSP_DEV_H */
{
struct psp_device *psp = psp_master;
struct sev_device *sev;
+ unsigned int cmdbuff_hi, cmdbuff_lo;
unsigned int phys_lsb, phys_msb;
unsigned int reg, ret = 0;
int buf_len;
if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
dev_dbg(sev->dev, "sev command %#x failed (%#010lx)\n",
cmd, FIELD_GET(PSP_CMDRESP_STS, reg));
+
+ /*
+ * PSP firmware may report additional error information in the
+ * command buffer registers on error. Print contents of command
+ * buffer registers if they changed.
+ */
+ cmdbuff_hi = ioread32(sev->io_regs + sev->vdata->cmdbuff_addr_hi_reg);
+ cmdbuff_lo = ioread32(sev->io_regs + sev->vdata->cmdbuff_addr_lo_reg);
+ if (cmdbuff_hi != phys_msb || cmdbuff_lo != phys_lsb) {
+ dev_dbg(sev->dev, "Additional error information reported in cmdbuff:");
+ dev_dbg(sev->dev, " cmdbuff hi: %#010x\n", cmdbuff_hi);
+ dev_dbg(sev->dev, " cmdbuff lo: %#010x\n", cmdbuff_lo);
+ }
ret = -EIO;
} else {
ret = sev_write_init_ex_file_if_required(cmd);
#define PLATFORM_FEATURE_DBC 0x1
+#define PSP_CAPABILITY(psp, cap) (psp->capability & PSP_CAPABILITY_##cap)
#define PSP_FEATURE(psp, feat) (psp->vdata && psp->vdata->platform_features & PLATFORM_FEATURE_##feat)
/* Structure to hold CCP device data */
const struct sev_vdata *sev;
const struct tee_vdata *tee;
const struct platform_access_vdata *platform_access;
+ const unsigned int cmdresp_reg;
+ const unsigned int cmdbuff_addr_lo_reg;
+ const unsigned int cmdbuff_addr_hi_reg;
const unsigned int feature_reg;
const unsigned int inten_reg;
const unsigned int intsts_reg;
struct sp_device *sp = dev_get_drvdata(dev);
struct psp_device *psp = sp->psp_data;
- if (psp && (psp->capability & PSP_CAPABILITY_PSP_SECURITY_REPORTING))
+ if (psp && PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING))
return 0444;
return 0;
val = ioread32(psp->io_regs + psp->vdata->bootloader_info_reg);
if (attr == &dev_attr_tee_version.attr &&
- psp->capability & PSP_CAPABILITY_TEE &&
+ PSP_CAPABILITY(psp, TEE) &&
psp->vdata->tee->info_reg)
val = ioread32(psp->io_regs + psp->vdata->tee->info_reg);
};
static const struct tee_vdata teev1 = {
- .cmdresp_reg = 0x10544, /* C2PMSG_17 */
- .cmdbuff_addr_lo_reg = 0x10548, /* C2PMSG_18 */
- .cmdbuff_addr_hi_reg = 0x1054c, /* C2PMSG_19 */
.ring_wptr_reg = 0x10550, /* C2PMSG_20 */
.ring_rptr_reg = 0x10554, /* C2PMSG_21 */
.info_reg = 0x109e8, /* C2PMSG_58 */
};
static const struct tee_vdata teev2 = {
- .cmdresp_reg = 0x10944, /* C2PMSG_17 */
- .cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */
- .cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */
.ring_wptr_reg = 0x10950, /* C2PMSG_20 */
.ring_rptr_reg = 0x10954, /* C2PMSG_21 */
};
static const struct psp_vdata pspv3 = {
.tee = &teev1,
.platform_access = &pa_v1,
+ .cmdresp_reg = 0x10544, /* C2PMSG_17 */
+ .cmdbuff_addr_lo_reg = 0x10548, /* C2PMSG_18 */
+ .cmdbuff_addr_hi_reg = 0x1054c, /* C2PMSG_19 */
.bootloader_info_reg = 0x109ec, /* C2PMSG_59 */
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10690, /* P2CMSG_INTEN */
static const struct psp_vdata pspv4 = {
.sev = &sevv2,
.tee = &teev1,
+ .cmdresp_reg = 0x10544, /* C2PMSG_17 */
+ .cmdbuff_addr_lo_reg = 0x10548, /* C2PMSG_18 */
+ .cmdbuff_addr_hi_reg = 0x1054c, /* C2PMSG_19 */
.bootloader_info_reg = 0x109ec, /* C2PMSG_59 */
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10690, /* P2CMSG_INTEN */
static const struct psp_vdata pspv5 = {
.tee = &teev2,
.platform_access = &pa_v2,
+ .cmdresp_reg = 0x10944, /* C2PMSG_17 */
+ .cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */
+ .cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10510, /* P2CMSG_INTEN */
.intsts_reg = 0x10514, /* P2CMSG_INTSTS */
static const struct psp_vdata pspv6 = {
.sev = &sevv2,
.tee = &teev2,
+ .cmdresp_reg = 0x10944, /* C2PMSG_17 */
+ .cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */
+ .cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10510, /* P2CMSG_INTEN */
.intsts_reg = 0x10514, /* P2CMSG_INTSTS */
return ret;
}
-static int sp_platform_remove(struct platform_device *pdev)
+static void sp_platform_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct sp_device *sp = dev_get_drvdata(dev);
sp_destroy(sp);
dev_notice(dev, "disabled\n");
-
- return 0;
}
#ifdef CONFIG_PM
#endif
},
.probe = sp_platform_probe,
- .remove = sp_platform_remove,
+ .remove_new = sp_platform_remove,
#ifdef CONFIG_PM
.suspend = sp_platform_suspend,
.resume = sp_platform_resume,
mutex_destroy(&rb_mgr->mutex);
}
-static int tee_wait_cmd_poll(struct psp_tee_device *tee, unsigned int timeout,
- unsigned int *reg)
-{
- /* ~10ms sleep per loop => nloop = timeout * 100 */
- int nloop = timeout * 100;
-
- while (--nloop) {
- *reg = ioread32(tee->io_regs + tee->vdata->cmdresp_reg);
- if (FIELD_GET(PSP_CMDRESP_RESP, *reg))
- return 0;
-
- usleep_range(10000, 10100);
- }
-
- dev_err(tee->dev, "tee: command timed out, disabling PSP\n");
- psp_dead = true;
-
- return -ETIMEDOUT;
-}
-
static
struct tee_init_ring_cmd *tee_alloc_cmd_buffer(struct psp_tee_device *tee)
{
{
int ring_size = MAX_RING_BUFFER_ENTRIES * sizeof(struct tee_ring_cmd);
struct tee_init_ring_cmd *cmd;
- phys_addr_t cmd_buffer;
unsigned int reg;
int ret;
return -ENOMEM;
}
- cmd_buffer = __psp_pa((void *)cmd);
-
/* Send command buffer details to Trusted OS by writing to
* CPU-PSP message registers
*/
-
- iowrite32(lower_32_bits(cmd_buffer),
- tee->io_regs + tee->vdata->cmdbuff_addr_lo_reg);
- iowrite32(upper_32_bits(cmd_buffer),
- tee->io_regs + tee->vdata->cmdbuff_addr_hi_reg);
- iowrite32(TEE_RING_INIT_CMD,
- tee->io_regs + tee->vdata->cmdresp_reg);
-
- ret = tee_wait_cmd_poll(tee, TEE_DEFAULT_TIMEOUT, ®);
+ ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_INIT, cmd,
+ TEE_DEFAULT_CMD_TIMEOUT, ®);
if (ret) {
- dev_err(tee->dev, "tee: ring init command timed out\n");
+ dev_err(tee->dev, "tee: ring init command timed out, disabling TEE support\n");
tee_free_ring(tee);
+ psp_dead = true;
goto free_buf;
}
if (psp_dead)
goto free_ring;
- iowrite32(TEE_RING_DESTROY_CMD,
- tee->io_regs + tee->vdata->cmdresp_reg);
-
- ret = tee_wait_cmd_poll(tee, TEE_DEFAULT_TIMEOUT, ®);
+ ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_DESTROY, NULL,
+ TEE_DEFAULT_CMD_TIMEOUT, ®);
if (ret) {
- dev_err(tee->dev, "tee: ring destroy command timed out\n");
+ dev_err(tee->dev, "tee: ring destroy command timed out, disabling TEE support\n");
+ psp_dead = true;
} else if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
dev_err(tee->dev, "tee: ring destroy command failed (%#010lx)\n",
FIELD_GET(PSP_CMDRESP_STS, reg));
if (ret)
return ret;
- ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_TIMEOUT);
+ ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_RING_TIMEOUT);
if (ret) {
resp->flag = CMD_RESPONSE_TIMEDOUT;
return ret;
#include <linux/device.h>
#include <linux/mutex.h>
-#define TEE_DEFAULT_TIMEOUT 10
+#define TEE_DEFAULT_CMD_TIMEOUT (10 * MSEC_PER_SEC)
+#define TEE_DEFAULT_RING_TIMEOUT 10
#define MAX_BUFFER_SIZE 988
-/**
- * enum tee_ring_cmd_id - TEE interface commands for ring buffer configuration
- * @TEE_RING_INIT_CMD: Initialize ring buffer
- * @TEE_RING_DESTROY_CMD: Destroy ring buffer
- * @TEE_RING_MAX_CMD: Maximum command id
- */
-enum tee_ring_cmd_id {
- TEE_RING_INIT_CMD = 0x00010000,
- TEE_RING_DESTROY_CMD = 0x00020000,
- TEE_RING_MAX_CMD = 0x000F0000,
-};
-
/**
* struct tee_init_ring_cmd - Command to init TEE ring buffer
* @low_addr: bits [31:0] of the physical address of ring buffer
return 0;
}
-static int ccree_remove(struct platform_device *plat_dev)
+static void ccree_remove(struct platform_device *plat_dev)
{
struct device *dev = &plat_dev->dev;
cleanup_cc_resources(plat_dev);
dev_info(dev, "ARM ccree device terminated\n");
-
- return 0;
}
static struct platform_driver ccree_driver = {
#endif
},
.probe = ccree_probe,
- .remove = ccree_remove,
+ .remove_new = ccree_remove,
};
static int __init ccree_init(void)
return error;
}
+static int chcr_hmac_init(struct ahash_request *areq);
+static int chcr_sha_init(struct ahash_request *areq);
+
static int chcr_ahash_digest(struct ahash_request *req)
{
struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req);
req_ctx->rxqidx = cpu % ctx->nrxq;
put_cpu();
- rtfm->init(req);
+ if (is_hmac(crypto_ahash_tfm(rtfm)))
+ chcr_hmac_init(req);
+ else
+ chcr_sha_init(req);
+
bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm));
error = chcr_inc_wrcount(dev);
if (error)
return ret;
}
-static int exynos_rng_remove(struct platform_device *pdev)
+static void exynos_rng_remove(struct platform_device *pdev)
{
crypto_unregister_rng(&exynos_rng_alg);
exynos_rng_dev = NULL;
-
- return 0;
}
static int __maybe_unused exynos_rng_suspend(struct device *dev)
.of_match_table = exynos_rng_dt_match,
},
.probe = exynos_rng_probe,
- .remove = exynos_rng_remove,
+ .remove_new = exynos_rng_remove,
};
module_platform_driver(exynos_rng_driver);
return err;
}
-static int sl3516_ce_remove(struct platform_device *pdev)
+static void sl3516_ce_remove(struct platform_device *pdev)
{
struct sl3516_ce_dev *ce = platform_get_drvdata(pdev);
#ifdef CONFIG_CRYPTO_DEV_SL3516_DEBUG
debugfs_remove_recursive(ce->dbgfs_dir);
#endif
-
- return 0;
}
static const struct of_device_id sl3516_ce_crypto_of_match_table[] = {
static struct platform_driver sl3516_ce_driver = {
.probe = sl3516_ce_probe,
- .remove = sl3516_ce_remove,
+ .remove_new = sl3516_ce_remove,
.driver = {
.name = "sl3516-crypto",
.pm = &sl3516_ce_pm_ops,
alg->alg = t->skcipher;
alg->alg.init = hifn_init_tfm;
- snprintf(alg->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", t->name);
- snprintf(alg->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-%s",
- t->drv_name, dev->name);
+ err = -EINVAL;
+ if (snprintf(alg->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+ "%s", t->name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_free_alg;
+ if (snprintf(alg->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "%s-%s", t->drv_name, dev->name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_free_alg;
alg->alg.base.cra_priority = 300;
alg->alg.base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC;
err = crypto_register_skcipher(&alg->alg);
if (err) {
list_del(&alg->entry);
+out_free_alg:
kfree(alg);
}
static int qm_sqc_dump(struct hisi_qm *qm, char *s, char *name)
{
struct device *dev = &qm->pdev->dev;
- struct qm_sqc *sqc, *sqc_curr;
- dma_addr_t sqc_dma;
+ struct qm_sqc *sqc_curr;
+ struct qm_sqc sqc;
u32 qp_id;
int ret;
return -EINVAL;
}
- sqc = hisi_qm_ctx_alloc(qm, sizeof(*sqc), &sqc_dma);
- if (IS_ERR(sqc))
- return PTR_ERR(sqc);
+ ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1);
+ if (!ret) {
+ dump_show(qm, &sqc, sizeof(struct qm_sqc), name);
- ret = hisi_qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 1);
- if (ret) {
- down_read(&qm->qps_lock);
- if (qm->sqc) {
- sqc_curr = qm->sqc + qp_id;
+ return 0;
+ }
- dump_show(qm, sqc_curr, sizeof(*sqc), "SOFT SQC");
- }
- up_read(&qm->qps_lock);
+ down_read(&qm->qps_lock);
+ if (qm->sqc) {
+ sqc_curr = qm->sqc + qp_id;
- goto free_ctx;
+ dump_show(qm, sqc_curr, sizeof(*sqc_curr), "SOFT SQC");
}
+ up_read(&qm->qps_lock);
- dump_show(qm, sqc, sizeof(*sqc), name);
-
-free_ctx:
- hisi_qm_ctx_free(qm, sizeof(*sqc), sqc, &sqc_dma);
return 0;
}
static int qm_cqc_dump(struct hisi_qm *qm, char *s, char *name)
{
struct device *dev = &qm->pdev->dev;
- struct qm_cqc *cqc, *cqc_curr;
- dma_addr_t cqc_dma;
+ struct qm_cqc *cqc_curr;
+ struct qm_cqc cqc;
u32 qp_id;
int ret;
return -EINVAL;
}
- cqc = hisi_qm_ctx_alloc(qm, sizeof(*cqc), &cqc_dma);
- if (IS_ERR(cqc))
- return PTR_ERR(cqc);
+ ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 1);
+ if (!ret) {
+ dump_show(qm, &cqc, sizeof(struct qm_cqc), name);
- ret = hisi_qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 1);
- if (ret) {
- down_read(&qm->qps_lock);
- if (qm->cqc) {
- cqc_curr = qm->cqc + qp_id;
+ return 0;
+ }
- dump_show(qm, cqc_curr, sizeof(*cqc), "SOFT CQC");
- }
- up_read(&qm->qps_lock);
+ down_read(&qm->qps_lock);
+ if (qm->cqc) {
+ cqc_curr = qm->cqc + qp_id;
- goto free_ctx;
+ dump_show(qm, cqc_curr, sizeof(*cqc_curr), "SOFT CQC");
}
+ up_read(&qm->qps_lock);
- dump_show(qm, cqc, sizeof(*cqc), name);
-
-free_ctx:
- hisi_qm_ctx_free(qm, sizeof(*cqc), cqc, &cqc_dma);
return 0;
}
static int qm_eqc_aeqc_dump(struct hisi_qm *qm, char *s, char *name)
{
struct device *dev = &qm->pdev->dev;
- dma_addr_t xeqc_dma;
+ struct qm_aeqc aeqc;
+ struct qm_eqc eqc;
size_t size;
void *xeqc;
int ret;
if (!strcmp(name, "EQC")) {
cmd = QM_MB_CMD_EQC;
size = sizeof(struct qm_eqc);
+ xeqc = &eqc;
} else {
cmd = QM_MB_CMD_AEQC;
size = sizeof(struct qm_aeqc);
+ xeqc = &aeqc;
}
- xeqc = hisi_qm_ctx_alloc(qm, size, &xeqc_dma);
- if (IS_ERR(xeqc))
- return PTR_ERR(xeqc);
-
- ret = hisi_qm_mb(qm, cmd, xeqc_dma, 0, 1);
+ ret = qm_set_and_get_xqc(qm, cmd, xeqc, 0, 1);
if (ret)
- goto err_free_ctx;
+ return ret;
dump_show(qm, xeqc, size, name);
-err_free_ctx:
- hisi_qm_ctx_free(qm, size, xeqc, &xeqc_dma);
return ret;
}
#define HPRE_DRV_ECDH_MASK_CAP BIT(2)
#define HPRE_DRV_X25519_MASK_CAP BIT(5)
+static DEFINE_MUTEX(hpre_algs_lock);
+static unsigned int hpre_available_devs;
+
typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe);
struct hpre_rsa_ctx {
int hpre_algs_register(struct hisi_qm *qm)
{
- int ret;
+ int ret = 0;
+
+ mutex_lock(&hpre_algs_lock);
+ if (hpre_available_devs) {
+ hpre_available_devs++;
+ goto unlock;
+ }
ret = hpre_register_rsa(qm);
if (ret)
- return ret;
+ goto unlock;
ret = hpre_register_dh(qm);
if (ret)
if (ret)
goto unreg_ecdh;
+ hpre_available_devs++;
+ mutex_unlock(&hpre_algs_lock);
+
return ret;
unreg_ecdh:
hpre_unregister_dh(qm);
unreg_rsa:
hpre_unregister_rsa(qm);
+unlock:
+ mutex_unlock(&hpre_algs_lock);
return ret;
}
void hpre_algs_unregister(struct hisi_qm *qm)
{
+ mutex_lock(&hpre_algs_lock);
+ if (--hpre_available_devs)
+ goto unlock;
+
hpre_unregister_x25519(qm);
hpre_unregister_ecdh(qm);
hpre_unregister_dh(qm);
hpre_unregister_rsa(qm);
+
+unlock:
+ mutex_unlock(&hpre_algs_lock);
}
#define HPRE_VIA_MSI_DSM 1
#define HPRE_SQE_MASK_OFFSET 8
#define HPRE_SQE_MASK_LEN 24
+#define HPRE_CTX_Q_NUM_DEF 1
#define HPRE_DFX_BASE 0x301000
#define HPRE_DFX_COMMON1 0x301400
module_param_cb(uacce_mode, &hpre_uacce_mode_ops, &uacce_mode, 0444);
MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);
+static bool pf_q_num_flag;
static int pf_q_num_set(const char *val, const struct kernel_param *kp)
{
+ pf_q_num_flag = true;
+
return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_HPRE_PF);
}
for (i = 0; i < clusters_num; i++) {
ret = snprintf(buf, HPRE_DBGFS_VAL_MAX_LEN, "cluster%d", i);
- if (ret < 0)
+ if (ret >= HPRE_DBGFS_VAL_MAX_LEN)
return -EINVAL;
tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
qm->qp_num = pf_q_num;
qm->debug.curr_qm_qp_num = pf_q_num;
qm->qm_list = &hpre_devices;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
}
ret = hisi_qm_init(qm);
if (ret)
dev_warn(&pdev->dev, "init debugfs fail!\n");
- ret = hisi_qm_alg_register(qm, &hpre_devices);
+ hisi_qm_add_list(qm, &hpre_devices);
+ ret = hisi_qm_alg_register(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
if (ret < 0) {
pci_err(pdev, "fail to register algs to crypto!\n");
- goto err_with_qm_start;
+ goto err_qm_del_list;
}
if (qm->uacce) {
return 0;
err_with_alg_register:
- hisi_qm_alg_unregister(qm, &hpre_devices);
+ hisi_qm_alg_unregister(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
-err_with_qm_start:
+err_qm_del_list:
+ hisi_qm_del_list(qm, &hpre_devices);
hpre_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
hisi_qm_pm_uninit(qm);
hisi_qm_wait_task_finish(qm, &hpre_devices);
- hisi_qm_alg_unregister(qm, &hpre_devices);
+ hisi_qm_alg_unregister(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
+ hisi_qm_del_list(qm, &hpre_devices);
if (qm->fun_type == QM_HW_PF && qm->vfs_num)
hisi_qm_sriov_disable(pdev, true);
#define QM_QC_PASID_ENABLE_SHIFT 7
#define QM_SQ_TYPE_MASK GENMASK(3, 0)
-#define QM_SQ_TAIL_IDX(sqc) ((le16_to_cpu((sqc)->w11) >> 6) & 0x1)
+#define QM_SQ_TAIL_IDX(sqc) ((le16_to_cpu((sqc).w11) >> 6) & 0x1)
/* cqc shift */
#define QM_CQ_HOP_NUM_SHIFT 0
#define QM_CQE_PHASE(cqe) (le16_to_cpu((cqe)->w7) & 0x1)
#define QM_QC_CQE_SIZE 4
-#define QM_CQ_TAIL_IDX(cqc) ((le16_to_cpu((cqc)->w11) >> 6) & 0x1)
+#define QM_CQ_TAIL_IDX(cqc) ((le16_to_cpu((cqc).w11) >> 6) & 0x1)
/* eqc shift */
#define QM_EQE_AEQE_SIZE (2UL << 12)
#define QM_AEQE_PHASE(aeqe) ((le32_to_cpu((aeqe)->dw0) >> 16) & 0x1)
#define QM_AEQE_TYPE_SHIFT 17
+#define QM_AEQE_TYPE_MASK 0xf
#define QM_AEQE_CQN_MASK GENMASK(15, 0)
#define QM_CQ_OVERFLOW 0
#define QM_EQ_OVERFLOW 1
#define WAIT_PERIOD 20
#define REMOVE_WAIT_DELAY 10
-#define QM_DRIVER_REMOVING 0
-#define QM_RST_SCHED 1
#define QM_QOS_PARAM_NUM 2
#define QM_QOS_MAX_VAL 1000
#define QM_QOS_RATE 100
#define QM_MK_SQC_DW3_V2(sqe_sz, sq_depth) \
((((u32)sq_depth) - 1) | ((u32)ilog2(sqe_sz) << QM_SQ_SQE_SIZE_SHIFT))
-#define INIT_QC_COMMON(qc, base, pasid) do { \
- (qc)->head = 0; \
- (qc)->tail = 0; \
- (qc)->base_l = cpu_to_le32(lower_32_bits(base)); \
- (qc)->base_h = cpu_to_le32(upper_32_bits(base)); \
- (qc)->dw3 = 0; \
- (qc)->w8 = 0; \
- (qc)->rsvd0 = 0; \
- (qc)->pasid = cpu_to_le16(pasid); \
- (qc)->w11 = 0; \
- (qc)->rsvd1 = 0; \
-} while (0)
-
enum vft_type {
SQC_VFT = 0,
CQC_VFT,
}
EXPORT_SYMBOL_GPL(hisi_qm_mb);
+/* op 0: set xqc information to hardware, 1: get xqc information from hardware. */
+int qm_set_and_get_xqc(struct hisi_qm *qm, u8 cmd, void *xqc, u32 qp_id, bool op)
+{
+ struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(qm->pdev));
+ struct qm_mailbox mailbox;
+ dma_addr_t xqc_dma;
+ void *tmp_xqc;
+ size_t size;
+ int ret;
+
+ switch (cmd) {
+ case QM_MB_CMD_SQC:
+ size = sizeof(struct qm_sqc);
+ tmp_xqc = qm->xqc_buf.sqc;
+ xqc_dma = qm->xqc_buf.sqc_dma;
+ break;
+ case QM_MB_CMD_CQC:
+ size = sizeof(struct qm_cqc);
+ tmp_xqc = qm->xqc_buf.cqc;
+ xqc_dma = qm->xqc_buf.cqc_dma;
+ break;
+ case QM_MB_CMD_EQC:
+ size = sizeof(struct qm_eqc);
+ tmp_xqc = qm->xqc_buf.eqc;
+ xqc_dma = qm->xqc_buf.eqc_dma;
+ break;
+ case QM_MB_CMD_AEQC:
+ size = sizeof(struct qm_aeqc);
+ tmp_xqc = qm->xqc_buf.aeqc;
+ xqc_dma = qm->xqc_buf.aeqc_dma;
+ break;
+ }
+
+ /* Setting xqc will fail if master OOO is blocked. */
+ if (qm_check_dev_error(pf_qm)) {
+ dev_err(&qm->pdev->dev, "failed to send mailbox since qm is stopped!\n");
+ return -EIO;
+ }
+
+ mutex_lock(&qm->mailbox_lock);
+ if (!op)
+ memcpy(tmp_xqc, xqc, size);
+
+ qm_mb_pre_init(&mailbox, cmd, xqc_dma, qp_id, op);
+ ret = qm_mb_nolock(qm, &mailbox);
+ if (!ret && op)
+ memcpy(xqc, tmp_xqc, size);
+
+ mutex_unlock(&qm->mailbox_lock);
+
+ return ret;
+}
+
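Because the DMA-safe bounce buffers are owned by the qm, callers can now work with plain on-stack structures and just pass the direction flag; a minimal sketch of reading back the SQC of one queue pair, mirroring the debugfs dump path::

	struct qm_sqc sqc;
	int ret;

	/* op == 1: copy the SQC for qp_id from the device into &sqc */
	ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1);
	if (!ret)
		pr_debug("qp %u sq tail index: %u\n", qp_id, QM_SQ_TAIL_IDX(sqc));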
static void qm_db_v1(struct hisi_qm *qm, u16 qn, u8 cmd, u16 index, u8 priority)
{
u64 doorbell;
qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ,
qp->qp_status.cq_head, 0);
atomic_dec(&qp->qp_status.used);
+
+ cond_resched();
}
/* set c_flag */
qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ, qp->qp_status.cq_head, 1);
}
-static int qm_get_complete_eqe_num(struct hisi_qm_poll_data *poll_data)
-{
- struct hisi_qm *qm = poll_data->qm;
- struct qm_eqe *eqe = qm->eqe + qm->status.eq_head;
- u16 eq_depth = qm->eq_depth;
- int eqe_num = 0;
- u16 cqn;
-
- while (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
- cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
- poll_data->qp_finish_id[eqe_num] = cqn;
- eqe_num++;
-
- if (qm->status.eq_head == eq_depth - 1) {
- qm->status.eqc_phase = !qm->status.eqc_phase;
- eqe = qm->eqe;
- qm->status.eq_head = 0;
- } else {
- eqe++;
- qm->status.eq_head++;
- }
-
- if (eqe_num == (eq_depth >> 1) - 1)
- break;
- }
-
- qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
-
- return eqe_num;
-}
-
static void qm_work_process(struct work_struct *work)
{
struct hisi_qm_poll_data *poll_data =
container_of(work, struct hisi_qm_poll_data, work);
struct hisi_qm *qm = poll_data->qm;
+ u16 eqe_num = poll_data->eqe_num;
struct hisi_qp *qp;
- int eqe_num, i;
+ int i;
- /* Get qp id of completed tasks and re-enable the interrupt. */
- eqe_num = qm_get_complete_eqe_num(poll_data);
for (i = eqe_num - 1; i >= 0; i--) {
qp = &qm->qp_array[poll_data->qp_finish_id[i]];
if (unlikely(atomic_read(&qp->qp_status.flags) == QP_STOP))
}
}
-static bool do_qm_eq_irq(struct hisi_qm *qm)
+static void qm_get_complete_eqe_num(struct hisi_qm *qm)
{
struct qm_eqe *eqe = qm->eqe + qm->status.eq_head;
- struct hisi_qm_poll_data *poll_data;
- u16 cqn;
+ struct hisi_qm_poll_data *poll_data = NULL;
+ u16 eq_depth = qm->eq_depth;
+ u16 cqn, eqe_num = 0;
- if (!readl(qm->io_base + QM_VF_EQ_INT_SOURCE))
- return false;
+ if (QM_EQE_PHASE(eqe) != qm->status.eqc_phase) {
+ atomic64_inc(&qm->debug.dfx.err_irq_cnt);
+ qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
+ return;
+ }
- if (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
+ cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
+ if (unlikely(cqn >= qm->qp_num))
+ return;
+ poll_data = &qm->poll_data[cqn];
+
+ while (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
- poll_data = &qm->poll_data[cqn];
- queue_work(qm->wq, &poll_data->work);
+ poll_data->qp_finish_id[eqe_num] = cqn;
+ eqe_num++;
+
+ if (qm->status.eq_head == eq_depth - 1) {
+ qm->status.eqc_phase = !qm->status.eqc_phase;
+ eqe = qm->eqe;
+ qm->status.eq_head = 0;
+ } else {
+ eqe++;
+ qm->status.eq_head++;
+ }
- return true;
+ if (eqe_num == (eq_depth >> 1) - 1)
+ break;
}
- return false;
+ poll_data->eqe_num = eqe_num;
+ queue_work(qm->wq, &poll_data->work);
+ qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
}
static irqreturn_t qm_eq_irq(int irq, void *data)
{
struct hisi_qm *qm = data;
- bool ret;
-
- ret = do_qm_eq_irq(qm);
- if (ret)
- return IRQ_HANDLED;
- atomic64_inc(&qm->debug.dfx.err_irq_cnt);
- qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
+ /* Get qp id of completed tasks and re-enable the interrupt */
+ qm_get_complete_eqe_num(qm);
- return IRQ_NONE;
+ return IRQ_HANDLED;
}
static irqreturn_t qm_mb_cmd_irq(int irq, void *data)
u16 aeq_depth = qm->aeq_depth;
u32 type, qp_id;
+ atomic64_inc(&qm->debug.dfx.aeq_irq_cnt);
+
while (QM_AEQE_PHASE(aeqe) == qm->status.aeqc_phase) {
- type = le32_to_cpu(aeqe->dw0) >> QM_AEQE_TYPE_SHIFT;
+ type = (le32_to_cpu(aeqe->dw0) >> QM_AEQE_TYPE_SHIFT) &
+ QM_AEQE_TYPE_MASK;
qp_id = le32_to_cpu(aeqe->dw0) & QM_AEQE_CQN_MASK;
switch (type) {
return IRQ_HANDLED;
}
-static irqreturn_t qm_aeq_irq(int irq, void *data)
-{
- struct hisi_qm *qm = data;
-
- atomic64_inc(&qm->debug.dfx.aeq_irq_cnt);
- if (!readl(qm->io_base + QM_VF_AEQ_INT_SOURCE))
- return IRQ_NONE;
-
- return IRQ_WAKE_THREAD;
-}
-
static void qm_init_qp_status(struct hisi_qp *qp)
{
struct hisi_qp_status *qp_status = &qp->qp_status;
return 0;
}
-void *hisi_qm_ctx_alloc(struct hisi_qm *qm, size_t ctx_size,
- dma_addr_t *dma_addr)
-{
- struct device *dev = &qm->pdev->dev;
- void *ctx_addr;
-
- ctx_addr = kzalloc(ctx_size, GFP_KERNEL);
- if (!ctx_addr)
- return ERR_PTR(-ENOMEM);
-
- *dma_addr = dma_map_single(dev, ctx_addr, ctx_size, DMA_FROM_DEVICE);
- if (dma_mapping_error(dev, *dma_addr)) {
- dev_err(dev, "DMA mapping error!\n");
- kfree(ctx_addr);
- return ERR_PTR(-ENOMEM);
- }
-
- return ctx_addr;
-}
-
-void hisi_qm_ctx_free(struct hisi_qm *qm, size_t ctx_size,
- const void *ctx_addr, dma_addr_t *dma_addr)
-{
- struct device *dev = &qm->pdev->dev;
-
- dma_unmap_single(dev, *dma_addr, ctx_size, DMA_FROM_DEVICE);
- kfree(ctx_addr);
-}
-
-static int qm_dump_sqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
-{
- return hisi_qm_mb(qm, QM_MB_CMD_SQC, dma_addr, qp_id, 1);
-}
-
-static int qm_dump_cqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
-{
- return hisi_qm_mb(qm, QM_MB_CMD_CQC, dma_addr, qp_id, 1);
-}
-
static void qm_hw_error_init_v1(struct hisi_qm *qm)
{
writel(QM_ABNORMAL_INT_MASK_VALUE, qm->io_base + QM_ABNORMAL_INT_MASK);
static int qm_sq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
{
struct hisi_qm *qm = qp->qm;
- struct device *dev = &qm->pdev->dev;
enum qm_hw_ver ver = qm->ver;
- struct qm_sqc *sqc;
- dma_addr_t sqc_dma;
- int ret;
-
- sqc = kzalloc(sizeof(struct qm_sqc), GFP_KERNEL);
- if (!sqc)
- return -ENOMEM;
+ struct qm_sqc sqc = {0};
- INIT_QC_COMMON(sqc, qp->sqe_dma, pasid);
if (ver == QM_HW_V1) {
- sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V1(0, 0, 0, qm->sqe_size));
- sqc->w8 = cpu_to_le16(qp->sq_depth - 1);
+ sqc.dw3 = cpu_to_le32(QM_MK_SQC_DW3_V1(0, 0, 0, qm->sqe_size));
+ sqc.w8 = cpu_to_le16(qp->sq_depth - 1);
} else {
- sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size, qp->sq_depth));
- sqc->w8 = 0; /* rand_qc */
+ sqc.dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size, qp->sq_depth));
+ sqc.w8 = 0; /* rand_qc */
}
- sqc->cq_num = cpu_to_le16(qp_id);
- sqc->w13 = cpu_to_le16(QM_MK_SQC_W13(0, 1, qp->alg_type));
+ sqc.w13 = cpu_to_le16(QM_MK_SQC_W13(0, 1, qp->alg_type));
+ sqc.base_l = cpu_to_le32(lower_32_bits(qp->sqe_dma));
+ sqc.base_h = cpu_to_le32(upper_32_bits(qp->sqe_dma));
+ sqc.cq_num = cpu_to_le16(qp_id);
+ sqc.pasid = cpu_to_le16(pasid);
if (ver >= QM_HW_V3 && qm->use_sva && !qp->is_in_kernel)
- sqc->w11 = cpu_to_le16(QM_QC_PASID_ENABLE <<
- QM_QC_PASID_ENABLE_SHIFT);
-
- sqc_dma = dma_map_single(dev, sqc, sizeof(struct qm_sqc),
- DMA_TO_DEVICE);
- if (dma_mapping_error(dev, sqc_dma)) {
- kfree(sqc);
- return -ENOMEM;
- }
+ sqc.w11 = cpu_to_le16(QM_QC_PASID_ENABLE <<
+ QM_QC_PASID_ENABLE_SHIFT);
- ret = hisi_qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 0);
- dma_unmap_single(dev, sqc_dma, sizeof(struct qm_sqc), DMA_TO_DEVICE);
- kfree(sqc);
-
- return ret;
+ return qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 0);
}
static int qm_cq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
{
struct hisi_qm *qm = qp->qm;
- struct device *dev = &qm->pdev->dev;
enum qm_hw_ver ver = qm->ver;
- struct qm_cqc *cqc;
- dma_addr_t cqc_dma;
- int ret;
-
- cqc = kzalloc(sizeof(struct qm_cqc), GFP_KERNEL);
- if (!cqc)
- return -ENOMEM;
+ struct qm_cqc cqc = {0};
- INIT_QC_COMMON(cqc, qp->cqe_dma, pasid);
if (ver == QM_HW_V1) {
- cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V1(0, 0, 0,
- QM_QC_CQE_SIZE));
- cqc->w8 = cpu_to_le16(qp->cq_depth - 1);
+ cqc.dw3 = cpu_to_le32(QM_MK_CQC_DW3_V1(0, 0, 0, QM_QC_CQE_SIZE));
+ cqc.w8 = cpu_to_le16(qp->cq_depth - 1);
} else {
- cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE, qp->cq_depth));
- cqc->w8 = 0; /* rand_qc */
+ cqc.dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE, qp->cq_depth));
+ cqc.w8 = 0; /* rand_qc */
}
- cqc->dw6 = cpu_to_le32(1 << QM_CQ_PHASE_SHIFT | 1 << QM_CQ_FLAG_SHIFT);
+ cqc.dw6 = cpu_to_le32(1 << QM_CQ_PHASE_SHIFT | 1 << QM_CQ_FLAG_SHIFT);
+ cqc.base_l = cpu_to_le32(lower_32_bits(qp->cqe_dma));
+ cqc.base_h = cpu_to_le32(upper_32_bits(qp->cqe_dma));
+ cqc.pasid = cpu_to_le16(pasid);
if (ver >= QM_HW_V3 && qm->use_sva && !qp->is_in_kernel)
- cqc->w11 = cpu_to_le16(QM_QC_PASID_ENABLE);
+ cqc.w11 = cpu_to_le16(QM_QC_PASID_ENABLE);
- cqc_dma = dma_map_single(dev, cqc, sizeof(struct qm_cqc),
- DMA_TO_DEVICE);
- if (dma_mapping_error(dev, cqc_dma)) {
- kfree(cqc);
- return -ENOMEM;
- }
-
- ret = hisi_qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 0);
- dma_unmap_single(dev, cqc_dma, sizeof(struct qm_cqc), DMA_TO_DEVICE);
- kfree(cqc);
-
- return ret;
+ return qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 0);
}
static int qm_qp_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
*/
static int qm_drain_qp(struct hisi_qp *qp)
{
- size_t size = sizeof(struct qm_sqc) + sizeof(struct qm_cqc);
struct hisi_qm *qm = qp->qm;
struct device *dev = &qm->pdev->dev;
- struct qm_sqc *sqc;
- struct qm_cqc *cqc;
- dma_addr_t dma_addr;
- int ret = 0, i = 0;
- void *addr;
+ struct qm_sqc sqc;
+ struct qm_cqc cqc;
+ int ret, i = 0;
/* No need to judge if master OOO is blocked. */
if (qm_check_dev_error(qm))
return ret;
}
- addr = hisi_qm_ctx_alloc(qm, size, &dma_addr);
- if (IS_ERR(addr)) {
- dev_err(dev, "Failed to alloc ctx for sqc and cqc!\n");
- return -ENOMEM;
- }
-
while (++i) {
- ret = qm_dump_sqc_raw(qm, dma_addr, qp->qp_id);
+ ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp->qp_id, 1);
if (ret) {
dev_err_ratelimited(dev, "Failed to dump sqc!\n");
- break;
+ return ret;
}
- sqc = addr;
- ret = qm_dump_cqc_raw(qm, (dma_addr + sizeof(struct qm_sqc)),
- qp->qp_id);
+ ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp->qp_id, 1);
if (ret) {
dev_err_ratelimited(dev, "Failed to dump cqc!\n");
- break;
+ return ret;
}
- cqc = addr + sizeof(struct qm_sqc);
- if ((sqc->tail == cqc->tail) &&
+ if ((sqc.tail == cqc.tail) &&
(QM_SQ_TAIL_IDX(sqc) == QM_CQ_TAIL_IDX(cqc)))
break;
if (i == MAX_WAIT_COUNTS) {
dev_err(dev, "Fail to empty queue %u!\n", qp->qp_id);
- ret = -EBUSY;
- break;
+ return -EBUSY;
}
usleep_range(WAIT_PERIOD_US_MIN, WAIT_PERIOD_US_MAX);
}
- hisi_qm_ctx_free(qm, size, addr, &dma_addr);
-
- return ret;
+ return 0;
}
static int qm_stop_qp_nolock(struct hisi_qp *qp)
mutex_init(&qm->mailbox_lock);
init_rwsem(&qm->qps_lock);
qm->qp_in_used = 0;
- qm->misc_ctl = false;
if (test_bit(QM_SUPPORT_RPM, &qm->caps)) {
if (!acpi_device_power_manageable(ACPI_COMPANION(&pdev->dev)))
dev_info(&pdev->dev, "_PS0 and _PR0 are not defined");
destroy_workqueue(qm->wq);
}
+static void hisi_qm_free_rsv_buf(struct hisi_qm *qm)
+{
+ struct qm_dma *xqc_dma = &qm->xqc_buf.qcdma;
+ struct device *dev = &qm->pdev->dev;
+
+ dma_free_coherent(dev, xqc_dma->size, xqc_dma->va, xqc_dma->dma);
+}
+
static void hisi_qm_memory_uninit(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
hisi_qp_memory_uninit(qm, qm->qp_num);
+ hisi_qm_free_rsv_buf(qm);
if (qm->qdma.va) {
hisi_qm_cache_wb(qm);
dma_free_coherent(dev, qm->qdma.size,
static int qm_eq_ctx_cfg(struct hisi_qm *qm)
{
- struct device *dev = &qm->pdev->dev;
- struct qm_eqc *eqc;
- dma_addr_t eqc_dma;
- int ret;
-
- eqc = kzalloc(sizeof(struct qm_eqc), GFP_KERNEL);
- if (!eqc)
- return -ENOMEM;
+ struct qm_eqc eqc = {0};
- eqc->base_l = cpu_to_le32(lower_32_bits(qm->eqe_dma));
- eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
+ eqc.base_l = cpu_to_le32(lower_32_bits(qm->eqe_dma));
+ eqc.base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
if (qm->ver == QM_HW_V1)
- eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
- eqc->dw6 = cpu_to_le32(((u32)qm->eq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
-
- eqc_dma = dma_map_single(dev, eqc, sizeof(struct qm_eqc),
- DMA_TO_DEVICE);
- if (dma_mapping_error(dev, eqc_dma)) {
- kfree(eqc);
- return -ENOMEM;
- }
+ eqc.dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
+ eqc.dw6 = cpu_to_le32(((u32)qm->eq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
- ret = hisi_qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
- dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
- kfree(eqc);
-
- return ret;
+ return qm_set_and_get_xqc(qm, QM_MB_CMD_EQC, &eqc, 0, 0);
}
static int qm_aeq_ctx_cfg(struct hisi_qm *qm)
{
- struct device *dev = &qm->pdev->dev;
- struct qm_aeqc *aeqc;
- dma_addr_t aeqc_dma;
- int ret;
-
- aeqc = kzalloc(sizeof(struct qm_aeqc), GFP_KERNEL);
- if (!aeqc)
- return -ENOMEM;
+ struct qm_aeqc aeqc = {0};
- aeqc->base_l = cpu_to_le32(lower_32_bits(qm->aeqe_dma));
- aeqc->base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma));
- aeqc->dw6 = cpu_to_le32(((u32)qm->aeq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
+ aeqc.base_l = cpu_to_le32(lower_32_bits(qm->aeqe_dma));
+ aeqc.base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma));
+ aeqc.dw6 = cpu_to_le32(((u32)qm->aeq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
- aeqc_dma = dma_map_single(dev, aeqc, sizeof(struct qm_aeqc),
- DMA_TO_DEVICE);
- if (dma_mapping_error(dev, aeqc_dma)) {
- kfree(aeqc);
- return -ENOMEM;
- }
-
- ret = hisi_qm_mb(qm, QM_MB_CMD_AEQC, aeqc_dma, 0, 0);
- dma_unmap_single(dev, aeqc_dma, sizeof(struct qm_aeqc), DMA_TO_DEVICE);
- kfree(aeqc);
-
- return ret;
+ return qm_set_and_get_xqc(qm, QM_MB_CMD_AEQC, &aeqc, 0, 0);
}
static int qm_eq_aeq_ctx_cfg(struct hisi_qm *qm)
}
/**
- * hisi_qm_alg_register() - Register alg to crypto and add qm to qm_list.
+ * hisi_qm_alg_register() - Register alg to crypto.
* @qm: The qm needs add.
* @qm_list: The qm list.
+ * @guard: Minimum number of queue pairs required to register the algorithms.
*
- * This function adds qm to qm list, and will register algorithm to
- * crypto when the qm list is empty.
+ * Register the algorithms to crypto only when the function satisfies the guard.
*/
-int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard)
{
struct device *dev = &qm->pdev->dev;
- int flag = 0;
- int ret = 0;
-
- mutex_lock(&qm_list->lock);
- if (list_empty(&qm_list->list))
- flag = 1;
- list_add_tail(&qm->list, &qm_list->list);
- mutex_unlock(&qm_list->lock);
if (qm->ver <= QM_HW_V2 && qm->use_sva) {
dev_info(dev, "HW V2 not both use uacce sva mode and hardware crypto algs.\n");
return 0;
}
- if (flag) {
- ret = qm_list->register_to_crypto(qm);
- if (ret) {
- mutex_lock(&qm_list->lock);
- list_del(&qm->list);
- mutex_unlock(&qm_list->lock);
- }
+ if (qm->qp_num < guard) {
+ dev_info(dev, "qp_num is less than the task needs.\n");
+ return 0;
}
- return ret;
+ return qm_list->register_to_crypto(qm);
}
EXPORT_SYMBOL_GPL(hisi_qm_alg_register);
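With the list handling split out, a driver adds the device to its list unconditionally and lets the guard decide whether the algorithms get registered; a minimal sketch of the probe/remove pairing, where my_devices and MY_CTX_Q_NUM stand in for the driver's own qm_list and queue-pair minimum::

	/* probe */
	hisi_qm_add_list(qm, &my_devices);
	ret = hisi_qm_alg_register(qm, &my_devices, MY_CTX_Q_NUM);
	if (ret)
		goto err_del_list;

	return 0;

	err_del_list:
	hisi_qm_del_list(qm, &my_devices);
	return ret;

	/* remove */
	hisi_qm_alg_unregister(qm, &my_devices, MY_CTX_Q_NUM);
	hisi_qm_del_list(qm, &my_devices);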
/**
- * hisi_qm_alg_unregister() - Unregister alg from crypto and delete qm from
- * qm list.
+ * hisi_qm_alg_unregister() - Unregister alg from crypto.
* @qm: The qm needs delete.
* @qm_list: The qm list.
+ * @guard: Minimum number of queue pairs required to register the algorithms.
*
- * This function deletes qm from qm list, and will unregister algorithm
- * from crypto when the qm list is empty.
+ * Unregister the algorithms from crypto when the last function that satisfies the guard is removed.
*/
-void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard)
{
- mutex_lock(&qm_list->lock);
- list_del(&qm->list);
- mutex_unlock(&qm_list->lock);
-
if (qm->ver <= QM_HW_V2 && qm->use_sva)
return;
- if (list_empty(&qm_list->list))
- qm_list->unregister_from_crypto(qm);
+ if (qm->qp_num < guard)
+ return;
+
+ qm_list->unregister_from_crypto(qm);
}
EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister);
return 0;
irq_vector = val & QM_IRQ_VECTOR_MASK;
- ret = request_threaded_irq(pci_irq_vector(pdev, irq_vector), qm_aeq_irq,
- qm_aeq_thread, 0, qm->dev_name, qm);
+ ret = request_threaded_irq(pci_irq_vector(pdev, irq_vector), NULL,
+ qm_aeq_thread, IRQF_ONESHOT, qm->dev_name, qm);
if (ret)
dev_err(&pdev->dev, "failed to request eq irq, ret = %d", ret);
static int qm_get_qp_num(struct hisi_qm *qm)
{
+ struct device *dev = &qm->pdev->dev;
bool is_db_isolation;
/* VF's qp_num assigned by PF in v2, and VF can get qp_num by vft. */
qm->max_qp_num = hisi_qm_get_hw_info(qm, qm_basic_info,
QM_FUNC_MAX_QP_CAP, is_db_isolation);
- /* check if qp number is valid */
- if (qm->qp_num > qm->max_qp_num) {
- dev_err(&qm->pdev->dev, "qp num(%u) is more than max qp num(%u)!\n",
+ if (qm->qp_num <= qm->max_qp_num)
+ return 0;
+
+ if (test_bit(QM_MODULE_PARAM, &qm->misc_ctl)) {
+ /* Check whether the set qp number is valid */
+ dev_err(dev, "qp num(%u) is more than max qp num(%u)!\n",
qm->qp_num, qm->max_qp_num);
return -EINVAL;
}
+ dev_info(dev, "Default qp num(%u) is too big, resetting it to the function's max qp num(%u)!\n",
+ qm->qp_num, qm->max_qp_num);
+ qm->qp_num = qm->max_qp_num;
+ qm->debug.curr_qm_qp_num = qm->qp_num;
+
return 0;
}
return ret;
}
+static int hisi_qm_alloc_rsv_buf(struct hisi_qm *qm)
+{
+ struct qm_rsv_buf *xqc_buf = &qm->xqc_buf;
+ struct qm_dma *xqc_dma = &xqc_buf->qcdma;
+ struct device *dev = &qm->pdev->dev;
+ size_t off = 0;
+
+#define QM_XQC_BUF_INIT(xqc_buf, type) do { \
+ (xqc_buf)->type = ((xqc_buf)->qcdma.va + (off)); \
+ (xqc_buf)->type##_dma = (xqc_buf)->qcdma.dma + (off); \
+ off += QMC_ALIGN(sizeof(struct qm_##type)); \
+} while (0)
+
+ xqc_dma->size = QMC_ALIGN(sizeof(struct qm_eqc)) +
+ QMC_ALIGN(sizeof(struct qm_aeqc)) +
+ QMC_ALIGN(sizeof(struct qm_sqc)) +
+ QMC_ALIGN(sizeof(struct qm_cqc));
+ xqc_dma->va = dma_alloc_coherent(dev, xqc_dma->size,
+ &xqc_dma->dma, GFP_KERNEL);
+ if (!xqc_dma->va)
+ return -ENOMEM;
+
+ QM_XQC_BUF_INIT(xqc_buf, eqc);
+ QM_XQC_BUF_INIT(xqc_buf, aeqc);
+ QM_XQC_BUF_INIT(xqc_buf, sqc);
+ QM_XQC_BUF_INIT(xqc_buf, cqc);
+
+ return 0;
+}
+
static int hisi_qm_memory_init(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
QM_INIT_BUF(qm, sqc, qm->qp_num);
QM_INIT_BUF(qm, cqc, qm->qp_num);
+ ret = hisi_qm_alloc_rsv_buf(qm);
+ if (ret)
+ goto err_free_qdma;
+
ret = hisi_qp_alloc_memory(qm);
if (ret)
- goto err_alloc_qp_array;
+ goto err_free_reserve_buf;
return 0;
-err_alloc_qp_array:
+err_free_reserve_buf:
+ hisi_qm_free_rsv_buf(qm);
+err_free_qdma:
dma_free_coherent(dev, qm->qdma.size, qm->qdma.va, qm->qdma.dma);
err_destroy_idr:
idr_destroy(&qm->qp_idr);
#define QM_COMMON_H
#define QM_DBG_READ_LEN 256
-#define QM_RESETTING 2
struct qm_cqe {
__le32 rsvd0;
"init", "start", "close", "stop",
};
-void *hisi_qm_ctx_alloc(struct hisi_qm *qm, size_t ctx_size,
- dma_addr_t *dma_addr);
-void hisi_qm_ctx_free(struct hisi_qm *qm, size_t ctx_size,
- const void *ctx_addr, dma_addr_t *dma_addr);
+int qm_set_and_get_xqc(struct hisi_qm *qm, u8 cmd, void *xqc, u32 qp_id, bool op);
void hisi_qm_show_last_dfx_regs(struct hisi_qm *qm);
void hisi_qm_set_algqos_init(struct hisi_qm *qm);
return ret;
}
-static int sec_remove(struct platform_device *pdev)
+static void sec_remove(struct platform_device *pdev)
{
struct sec_dev_info *info = platform_get_drvdata(pdev);
int i;
}
sec_base_exit(info);
-
- return 0;
}
static const __maybe_unused struct of_device_id sec_match[] = {
static struct platform_driver sec_driver = {
.probe = sec_probe,
- .remove = sec_remove,
+ .remove_new = sec_remove,
.driver = {
.name = "hisi_sec_platform_driver",
.of_match_table = sec_match,
#define IV_CTR_INIT 0x1
#define IV_BYTE_OFFSET 0x8
+static DEFINE_MUTEX(sec_algs_lock);
+static unsigned int sec_available_devs;
+
struct sec_skcipher {
u64 alg_msk;
struct skcipher_alg alg;
ret = sec_aead_mac_init(a_req);
if (unlikely(ret)) {
dev_err(dev, "fail to init mac data for ICV!\n");
+ hisi_acc_sg_buf_unmap(dev, src, req->in);
return ret;
}
}
int sec_register_to_crypto(struct hisi_qm *qm)
{
u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
- int ret;
+ int ret = 0;
+
+ mutex_lock(&sec_algs_lock);
+ if (sec_available_devs) {
+ sec_available_devs++;
+ goto unlock;
+ }
ret = sec_register_skcipher(alg_mask);
if (ret)
- return ret;
+ goto unlock;
ret = sec_register_aead(alg_mask);
if (ret)
- sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+ goto unreg_skcipher;
+ sec_available_devs++;
+ mutex_unlock(&sec_algs_lock);
+
+ return 0;
+
+unreg_skcipher:
+ sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+unlock:
+ mutex_unlock(&sec_algs_lock);
return ret;
}
{
u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
+ mutex_lock(&sec_algs_lock);
+ if (--sec_available_devs)
+ goto unlock;
+
sec_unregister_aead(alg_mask, ARRAY_SIZE(sec_aeads));
sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+
+unlock:
+ mutex_unlock(&sec_algs_lock);
}
}
DEFINE_SHOW_ATTRIBUTE(sec_diff_regs);
+static bool pf_q_num_flag;
static int sec_pf_q_num_set(const char *val, const struct kernel_param *kp)
{
+ pf_q_num_flag = true;
+
return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_SEC_PF);
}
qm->qp_num = pf_q_num;
qm->debug.curr_qm_qp_num = pf_q_num;
qm->qm_list = &sec_devices;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
} else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
/*
* have no way to get qm configure in VM in v1 hardware,
if (ret)
pci_warn(pdev, "Failed to init debugfs!\n");
- if (qm->qp_num >= ctx_q_num) {
- ret = hisi_qm_alg_register(qm, &sec_devices);
- if (ret < 0) {
- pr_err("Failed to register driver to crypto.\n");
- goto err_qm_stop;
- }
- } else {
- pci_warn(qm->pdev,
- "Failed to use kernel mode, qp not enough!\n");
+ hisi_qm_add_list(qm, &sec_devices);
+ ret = hisi_qm_alg_register(qm, &sec_devices, ctx_q_num);
+ if (ret < 0) {
+ pr_err("Failed to register driver to crypto.\n");
+ goto err_qm_del_list;
}
if (qm->uacce) {
return 0;
err_alg_unregister:
- if (qm->qp_num >= ctx_q_num)
- hisi_qm_alg_unregister(qm, &sec_devices);
-err_qm_stop:
+ hisi_qm_alg_unregister(qm, &sec_devices, ctx_q_num);
+err_qm_del_list:
+ hisi_qm_del_list(qm, &sec_devices);
sec_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
err_probe_uninit:
hisi_qm_pm_uninit(qm);
hisi_qm_wait_task_finish(qm, &sec_devices);
- if (qm->qp_num >= ctx_q_num)
- hisi_qm_alg_unregister(qm, &sec_devices);
+ hisi_qm_alg_unregister(qm, &sec_devices, ctx_q_num);
+ hisi_qm_del_list(qm, &sec_devices);
if (qm->fun_type == QM_HW_PF && qm->vfs_num)
hisi_qm_sriov_disable(pdev, true);
return ret;
}
-static int hisi_trng_remove(struct platform_device *pdev)
+static void hisi_trng_remove(struct platform_device *pdev)
{
struct hisi_trng *trng = platform_get_drvdata(pdev);
if (trng->ver != HISI_TRNG_VER_V1 &&
atomic_dec_return(&trng_active_devs) == 0)
crypto_unregister_rng(&hisi_trng_alg);
-
- return 0;
}
static const struct acpi_device_id hisi_trng_acpi_match[] = {
static struct platform_driver hisi_trng_driver = {
.probe = hisi_trng_probe,
- .remove = hisi_trng_remove,
+ .remove_new = hisi_trng_remove,
.driver = {
.name = "hisi-trng-v2",
.acpi_match_table = ACPI_PTR(hisi_trng_acpi_match),
#define HZIP_OUT_SGE_DATA_OFFSET_M GENMASK(23, 0)
/* hisi_zip_sqe dw9 */
#define HZIP_REQ_TYPE_M GENMASK(7, 0)
-#define HZIP_ALG_TYPE_ZLIB 0x02
-#define HZIP_ALG_TYPE_GZIP 0x03
+#define HZIP_ALG_TYPE_DEFLATE 0x01
#define HZIP_BUF_TYPE_M GENMASK(11, 8)
-#define HZIP_PBUFFER 0x0
#define HZIP_SGL 0x1
-#define HZIP_ZLIB_HEAD_SIZE 2
-#define HZIP_GZIP_HEAD_SIZE 10
-
-#define GZIP_HEAD_FHCRC_BIT BIT(1)
-#define GZIP_HEAD_FEXTRA_BIT BIT(2)
-#define GZIP_HEAD_FNAME_BIT BIT(3)
-#define GZIP_HEAD_FCOMMENT_BIT BIT(4)
-
-#define GZIP_HEAD_FLG_SHIFT 3
-#define GZIP_HEAD_FEXTRA_SHIFT 10
-#define GZIP_HEAD_FEXTRA_XLEN 2UL
-#define GZIP_HEAD_FHCRC_SIZE 2
-
-#define HZIP_GZIP_HEAD_BUF 256
#define HZIP_ALG_PRIORITY 300
#define HZIP_SGL_SGE_NR 10
-#define HZIP_ALG_ZLIB GENMASK(1, 0)
-#define HZIP_ALG_GZIP GENMASK(3, 2)
+#define HZIP_ALG_DEFLATE GENMASK(5, 4)
-static const u8 zlib_head[HZIP_ZLIB_HEAD_SIZE] = {0x78, 0x9c};
-static const u8 gzip_head[HZIP_GZIP_HEAD_SIZE] = {
- 0x1f, 0x8b, 0x08, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x03
-};
+static DEFINE_MUTEX(zip_algs_lock);
+static unsigned int zip_available_devs;
enum hisi_zip_alg_type {
HZIP_ALG_TYPE_COMP = 0,
};
#define COMP_NAME_TO_TYPE(alg_name) \
- (!strcmp((alg_name), "zlib-deflate") ? HZIP_ALG_TYPE_ZLIB : \
- !strcmp((alg_name), "gzip") ? HZIP_ALG_TYPE_GZIP : 0) \
-
-#define TO_HEAD_SIZE(req_type) \
- (((req_type) == HZIP_ALG_TYPE_ZLIB) ? sizeof(zlib_head) : \
- ((req_type) == HZIP_ALG_TYPE_GZIP) ? sizeof(gzip_head) : 0) \
-
-#define TO_HEAD(req_type) \
- (((req_type) == HZIP_ALG_TYPE_ZLIB) ? zlib_head : \
- ((req_type) == HZIP_ALG_TYPE_GZIP) ? gzip_head : NULL) \
+ (!strcmp((alg_name), "deflate") ? HZIP_ALG_TYPE_DEFLATE : 0)
struct hisi_zip_req {
struct acomp_req *req;
- u32 sskip;
- u32 dskip;
struct hisi_acc_hw_sgl *hw_src;
struct hisi_acc_hw_sgl *hw_dst;
dma_addr_t dma_src;
module_param_cb(sgl_sge_nr, &sgl_sge_nr_ops, &sgl_sge_nr, 0444);
MODULE_PARM_DESC(sgl_sge_nr, "Number of sge in sgl(1-255)");
-static u32 get_extra_field_size(const u8 *start)
-{
- return *((u16 *)start) + GZIP_HEAD_FEXTRA_XLEN;
-}
-
-static u32 get_name_field_size(const u8 *start)
-{
- return strlen(start) + 1;
-}
-
-static u32 get_comment_field_size(const u8 *start)
-{
- return strlen(start) + 1;
-}
-
-static u32 __get_gzip_head_size(const u8 *src)
-{
- u8 head_flg = *(src + GZIP_HEAD_FLG_SHIFT);
- u32 size = GZIP_HEAD_FEXTRA_SHIFT;
-
- if (head_flg & GZIP_HEAD_FEXTRA_BIT)
- size += get_extra_field_size(src + size);
- if (head_flg & GZIP_HEAD_FNAME_BIT)
- size += get_name_field_size(src + size);
- if (head_flg & GZIP_HEAD_FCOMMENT_BIT)
- size += get_comment_field_size(src + size);
- if (head_flg & GZIP_HEAD_FHCRC_BIT)
- size += GZIP_HEAD_FHCRC_SIZE;
-
- return size;
-}
-
-static u32 __maybe_unused get_gzip_head_size(struct scatterlist *sgl)
-{
- char buf[HZIP_GZIP_HEAD_BUF];
-
- sg_copy_to_buffer(sgl, sg_nents(sgl), buf, sizeof(buf));
-
- return __get_gzip_head_size(buf);
-}
-
-static int add_comp_head(struct scatterlist *dst, u8 req_type)
-{
- int head_size = TO_HEAD_SIZE(req_type);
- const u8 *head = TO_HEAD(req_type);
- int ret;
-
- ret = sg_copy_from_buffer(dst, sg_nents(dst), head, head_size);
- if (unlikely(ret != head_size)) {
- pr_err("the head size of buffer is wrong (%d)!\n", ret);
- return -ENOMEM;
- }
-
- return head_size;
-}
-
-static int get_comp_head_size(struct acomp_req *acomp_req, u8 req_type)
-{
- if (unlikely(!acomp_req->src || !acomp_req->slen))
- return -EINVAL;
-
- if (unlikely(req_type == HZIP_ALG_TYPE_GZIP &&
- acomp_req->slen < GZIP_HEAD_FEXTRA_SHIFT))
- return -EINVAL;
-
- switch (req_type) {
- case HZIP_ALG_TYPE_ZLIB:
- return TO_HEAD_SIZE(HZIP_ALG_TYPE_ZLIB);
- case HZIP_ALG_TYPE_GZIP:
- return TO_HEAD_SIZE(HZIP_ALG_TYPE_GZIP);
- default:
- pr_err("request type does not support!\n");
- return -EINVAL;
- }
-}
-
-static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
- struct hisi_zip_qp_ctx *qp_ctx,
- size_t head_size, bool is_comp)
+static struct hisi_zip_req *hisi_zip_create_req(struct hisi_zip_qp_ctx *qp_ctx,
+ struct acomp_req *req)
{
struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
struct hisi_zip_req *q = req_q->q;
req_cache->req_id = req_id;
req_cache->req = req;
- if (is_comp) {
- req_cache->sskip = 0;
- req_cache->dskip = head_size;
- } else {
- req_cache->sskip = head_size;
- req_cache->dskip = 0;
- }
-
return req_cache;
}
{
struct acomp_req *a_req = req->req;
- sqe->input_data_length = a_req->slen - req->sskip;
- sqe->dest_avail_out = a_req->dlen - req->dskip;
- sqe->dw7 = FIELD_PREP(HZIP_IN_SGE_DATA_OFFSET_M, req->sskip);
- sqe->dw8 = FIELD_PREP(HZIP_OUT_SGE_DATA_OFFSET_M, req->dskip);
+ sqe->input_data_length = a_req->slen;
+ sqe->dest_avail_out = a_req->dlen;
}
static void hisi_zip_fill_buf_type(struct hisi_zip_sqe *sqe, u8 buf_type)
sqe->dw9 = val;
}
-static void hisi_zip_fill_tag_v1(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
-{
- sqe->dw13 = req->req_id;
-}
-
-static void hisi_zip_fill_tag_v2(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
+static void hisi_zip_fill_tag(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
{
sqe->dw26 = req->req_id;
}
ops->fill_sqe_type(sqe, ops->sqe_type);
}
-static int hisi_zip_do_work(struct hisi_zip_req *req,
- struct hisi_zip_qp_ctx *qp_ctx)
+static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
+ struct hisi_zip_req *req)
{
struct hisi_acc_sgl_pool *pool = qp_ctx->sgl_pool;
struct hisi_zip_dfx *dfx = &qp_ctx->zip_dev->dfx;
return ret;
}
-static u32 hisi_zip_get_tag_v1(struct hisi_zip_sqe *sqe)
-{
- return sqe->dw13;
-}
-
-static u32 hisi_zip_get_tag_v2(struct hisi_zip_sqe *sqe)
+static u32 hisi_zip_get_tag(struct hisi_zip_sqe *sqe)
{
return sqe->dw26;
}
u32 tag = ops->get_tag(sqe);
struct hisi_zip_req *req = req_q->q + tag;
struct acomp_req *acomp_req = req->req;
- u32 status, dlen, head_size;
int err = 0;
+ u32 status;
atomic64_inc(&dfx->recv_cnt);
status = ops->get_status(sqe);
err = -EIO;
}
- dlen = ops->get_dstlen(sqe);
-
hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src);
hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst);
- head_size = (qp->alg_type == 0) ? TO_HEAD_SIZE(qp->req_type) : 0;
- acomp_req->dlen = dlen + head_size;
+ acomp_req->dlen = ops->get_dstlen(sqe);
if (acomp_req->base.complete)
acomp_request_complete(acomp_req, err);
struct hisi_zip_qp_ctx *qp_ctx = &ctx->qp_ctx[HZIP_QPC_COMP];
struct device *dev = &qp_ctx->qp->qm->pdev->dev;
struct hisi_zip_req *req;
- int head_size;
int ret;
- /* let's output compression head now */
- head_size = add_comp_head(acomp_req->dst, qp_ctx->qp->req_type);
- if (unlikely(head_size < 0)) {
- dev_err_ratelimited(dev, "failed to add comp head (%d)!\n",
- head_size);
- return head_size;
- }
-
- req = hisi_zip_create_req(acomp_req, qp_ctx, head_size, true);
+ req = hisi_zip_create_req(qp_ctx, acomp_req);
if (IS_ERR(req))
return PTR_ERR(req);
- ret = hisi_zip_do_work(req, qp_ctx);
+ ret = hisi_zip_do_work(qp_ctx, req);
if (unlikely(ret != -EINPROGRESS)) {
dev_info_ratelimited(dev, "failed to do compress (%d)!\n", ret);
hisi_zip_remove_req(qp_ctx, req);
struct hisi_zip_qp_ctx *qp_ctx = &ctx->qp_ctx[HZIP_QPC_DECOMP];
struct device *dev = &qp_ctx->qp->qm->pdev->dev;
struct hisi_zip_req *req;
- int head_size, ret;
-
- head_size = get_comp_head_size(acomp_req, qp_ctx->qp->req_type);
- if (unlikely(head_size < 0)) {
- dev_err_ratelimited(dev, "failed to get comp head size (%d)!\n",
- head_size);
- return head_size;
- }
+ int ret;
- req = hisi_zip_create_req(acomp_req, qp_ctx, head_size, false);
+ req = hisi_zip_create_req(qp_ctx, acomp_req);
if (IS_ERR(req))
return PTR_ERR(req);
- ret = hisi_zip_do_work(req, qp_ctx);
+ ret = hisi_zip_do_work(qp_ctx, req);
if (unlikely(ret != -EINPROGRESS)) {
dev_info_ratelimited(dev, "failed to do decompress (%d)!\n",
ret);
hisi_qm_free_qps(&qp_ctx->qp, 1);
}
-static const struct hisi_zip_sqe_ops hisi_zip_ops_v1 = {
- .sqe_type = 0,
- .fill_addr = hisi_zip_fill_addr,
- .fill_buf_size = hisi_zip_fill_buf_size,
- .fill_buf_type = hisi_zip_fill_buf_type,
- .fill_req_type = hisi_zip_fill_req_type,
- .fill_tag = hisi_zip_fill_tag_v1,
- .fill_sqe_type = hisi_zip_fill_sqe_type,
- .get_tag = hisi_zip_get_tag_v1,
- .get_status = hisi_zip_get_status,
- .get_dstlen = hisi_zip_get_dstlen,
-};
-
-static const struct hisi_zip_sqe_ops hisi_zip_ops_v2 = {
+static const struct hisi_zip_sqe_ops hisi_zip_ops = {
.sqe_type = 0x3,
.fill_addr = hisi_zip_fill_addr,
.fill_buf_size = hisi_zip_fill_buf_size,
.fill_buf_type = hisi_zip_fill_buf_type,
.fill_req_type = hisi_zip_fill_req_type,
- .fill_tag = hisi_zip_fill_tag_v2,
+ .fill_tag = hisi_zip_fill_tag,
.fill_sqe_type = hisi_zip_fill_sqe_type,
- .get_tag = hisi_zip_get_tag_v2,
+ .get_tag = hisi_zip_get_tag,
.get_status = hisi_zip_get_status,
.get_dstlen = hisi_zip_get_dstlen,
};
qp_ctx->zip_dev = hisi_zip;
}
- if (hisi_zip->qm.ver < QM_HW_V3)
- hisi_zip_ctx->ops = &hisi_zip_ops_v1;
- else
- hisi_zip_ctx->ops = &hisi_zip_ops_v2;
+ hisi_zip_ctx->ops = &hisi_zip_ops;
return 0;
}
hisi_zip_ctx_exit(ctx);
}
-static struct acomp_alg hisi_zip_acomp_zlib = {
- .init = hisi_zip_acomp_init,
- .exit = hisi_zip_acomp_exit,
- .compress = hisi_zip_acompress,
- .decompress = hisi_zip_adecompress,
- .base = {
- .cra_name = "zlib-deflate",
- .cra_driver_name = "hisi-zlib-acomp",
- .cra_module = THIS_MODULE,
- .cra_priority = HZIP_ALG_PRIORITY,
- .cra_ctxsize = sizeof(struct hisi_zip_ctx),
- }
-};
-
-static int hisi_zip_register_zlib(struct hisi_qm *qm)
-{
- int ret;
-
- if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
- return 0;
-
- ret = crypto_register_acomp(&hisi_zip_acomp_zlib);
- if (ret)
- dev_err(&qm->pdev->dev, "failed to register to zlib (%d)!\n", ret);
-
- return ret;
-}
-
-static void hisi_zip_unregister_zlib(struct hisi_qm *qm)
-{
- if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
- return;
-
- crypto_unregister_acomp(&hisi_zip_acomp_zlib);
-}
-
-static struct acomp_alg hisi_zip_acomp_gzip = {
+static struct acomp_alg hisi_zip_acomp_deflate = {
.init = hisi_zip_acomp_init,
.exit = hisi_zip_acomp_exit,
.compress = hisi_zip_acompress,
.decompress = hisi_zip_adecompress,
.base = {
- .cra_name = "gzip",
- .cra_driver_name = "hisi-gzip-acomp",
+ .cra_name = "deflate",
+ .cra_driver_name = "hisi-deflate-acomp",
.cra_module = THIS_MODULE,
- .cra_priority = HZIP_ALG_PRIORITY,
+ .cra_priority = HZIP_ALG_PRIORITY,
.cra_ctxsize = sizeof(struct hisi_zip_ctx),
}
};
-static int hisi_zip_register_gzip(struct hisi_qm *qm)
+static int hisi_zip_register_deflate(struct hisi_qm *qm)
{
int ret;
- if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
+ if (!hisi_zip_alg_support(qm, HZIP_ALG_DEFLATE))
return 0;
- ret = crypto_register_acomp(&hisi_zip_acomp_gzip);
+ ret = crypto_register_acomp(&hisi_zip_acomp_deflate);
if (ret)
- dev_err(&qm->pdev->dev, "failed to register to gzip (%d)!\n", ret);
+ dev_err(&qm->pdev->dev, "failed to register to deflate (%d)!\n", ret);
return ret;
}
-static void hisi_zip_unregister_gzip(struct hisi_qm *qm)
+static void hisi_zip_unregister_deflate(struct hisi_qm *qm)
{
- if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
+ if (!hisi_zip_alg_support(qm, HZIP_ALG_DEFLATE))
return;
- crypto_unregister_acomp(&hisi_zip_acomp_gzip);
+ crypto_unregister_acomp(&hisi_zip_acomp_deflate);
}
int hisi_zip_register_to_crypto(struct hisi_qm *qm)
{
int ret = 0;
- ret = hisi_zip_register_zlib(qm);
- if (ret)
- return ret;
+ mutex_lock(&zip_algs_lock);
+ if (zip_available_devs++)
+ goto unlock;
- ret = hisi_zip_register_gzip(qm);
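+	/* Only the first zip device registers the deflate algorithm; later devices just take a reference */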
+ ret = hisi_zip_register_deflate(qm);
if (ret)
- hisi_zip_unregister_zlib(qm);
+ zip_available_devs--;
+unlock:
+ mutex_unlock(&zip_algs_lock);
return ret;
}
void hisi_zip_unregister_from_crypto(struct hisi_qm *qm)
{
- hisi_zip_unregister_zlib(qm);
- hisi_zip_unregister_gzip(qm);
+ mutex_lock(&zip_algs_lock);
+ if (--zip_available_devs)
+ goto unlock;
+
+ hisi_zip_unregister_deflate(qm);
+
+unlock:
+ mutex_unlock(&zip_algs_lock);
}
#define HZIP_SQE_SIZE 128
#define HZIP_PF_DEF_Q_NUM 64
#define HZIP_PF_DEF_Q_BASE 0
+#define HZIP_CTX_Q_NUM_DEF 2
#define HZIP_SOFT_CTRL_CNT_CLR_CE 0x301000
#define HZIP_SOFT_CTRL_CNT_CLR_CE_BIT BIT(0)
{ZIP_CLUSTER_DECOMP_NUM_CAP, 0x313C, 0, GENMASK(7, 0), 0x6, 0x6, 0x3},
{ZIP_DECOMP_ENABLE_BITMAP, 0x3140, 16, GENMASK(15, 0), 0xFC, 0xFC, 0x1C},
{ZIP_COMP_ENABLE_BITMAP, 0x3140, 0, GENMASK(15, 0), 0x3, 0x3, 0x3},
- {ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0xF, 0xF, 0xF},
- {ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0xFF},
+ {ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0x0, 0x0, 0x30},
+ {ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0x3F},
{ZIP_CORE1_ALG_BITMAP, 0x314C, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
{ZIP_CORE2_ALG_BITMAP, 0x3150, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
{ZIP_CORE3_ALG_BITMAP, 0x3154, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A},
module_param_cb(uacce_mode, &zip_uacce_mode_ops, &uacce_mode, 0444);
MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);
+static bool pf_q_num_flag;
static int pf_q_num_set(const char *val, const struct kernel_param *kp)
{
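+	/* Flag that pf_q_num was set explicitly so QM_MODULE_PARAM can be set in misc_ctl */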
+ pf_q_num_flag = true;
+
return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_ZIP_PF);
}
qm->qp_num = pf_q_num;
qm->debug.curr_qm_qp_num = pf_q_num;
qm->qm_list = &zip_devices;
+ if (pf_q_num_flag)
+ set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
} else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
/*
* have no way to get qm configure in VM in v1 hardware,
if (ret)
pci_err(pdev, "failed to init debugfs (%d)!\n", ret);
- ret = hisi_qm_alg_register(qm, &zip_devices);
+ hisi_qm_add_list(qm, &zip_devices);
+ ret = hisi_qm_alg_register(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
if (ret < 0) {
pci_err(pdev, "failed to register driver to crypto!\n");
- goto err_qm_stop;
+ goto err_qm_del_list;
}
if (qm->uacce) {
return 0;
err_qm_alg_unregister:
- hisi_qm_alg_unregister(qm, &zip_devices);
+ hisi_qm_alg_unregister(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
-err_qm_stop:
+err_qm_del_list:
+ hisi_qm_del_list(qm, &zip_devices);
hisi_zip_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
hisi_qm_pm_uninit(qm);
hisi_qm_wait_task_finish(qm, &zip_devices);
- hisi_qm_alg_unregister(qm, &zip_devices);
+ hisi_qm_alg_unregister(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
+ hisi_qm_del_list(qm, &zip_devices);
if (qm->fun_type == QM_HW_PF && qm->vfs_num)
hisi_qm_sriov_disable(pdev, true);
return err;
}
-static int img_hash_remove(struct platform_device *pdev)
+static void img_hash_remove(struct platform_device *pdev)
{
struct img_hash_dev *hdev;
clk_disable_unprepare(hdev->hash_clk);
clk_disable_unprepare(hdev->sys_clk);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
static struct platform_driver img_hash_driver = {
.probe = img_hash_probe,
- .remove = img_hash_remove,
+ .remove_new = img_hash_remove,
.driver = {
.name = "img-hash-accelerator",
.pm = &img_hash_pm_ops,
return ret;
}
-static int safexcel_remove(struct platform_device *pdev)
+static void safexcel_remove(struct platform_device *pdev)
{
struct safexcel_crypto_priv *priv = platform_get_drvdata(pdev);
int i;
irq_set_affinity_hint(priv->ring[i].irq, NULL);
destroy_workqueue(priv->ring[i].workqueue);
}
-
- return 0;
}
static const struct safexcel_priv_data eip97ies_mrvl_data = {
static struct platform_driver crypto_safexcel = {
.probe = safexcel_probe,
- .remove = safexcel_remove,
+ .remove_new = safexcel_remove,
.driver = {
.name = "crypto-safexcel",
.of_match_table = safexcel_of_match_table,
return 0;
}
-static int ixp_crypto_remove(struct platform_device *pdev)
+static void ixp_crypto_remove(struct platform_device *pdev)
{
int num = ARRAY_SIZE(ixp4xx_algos);
int i;
crypto_unregister_skcipher(&ixp4xx_algos[i].crypto);
}
release_ixp_crypto(&pdev->dev);
-
- return 0;
}
static const struct of_device_id ixp4xx_crypto_of_match[] = {
{
static struct platform_driver ixp_crypto_driver = {
.probe = ixp_crypto_probe,
- .remove = ixp_crypto_remove,
+ .remove_new = ixp_crypto_remove,
.driver = {
.name = "ixp4xx_crypto",
.of_match_table = ixp4xx_crypto_of_match,
{}
};
-static int kmb_ocs_aes_remove(struct platform_device *pdev)
+static void kmb_ocs_aes_remove(struct platform_device *pdev)
{
struct ocs_aes_dev *aes_dev;
spin_unlock(&ocs_aes.lock);
crypto_engine_exit(aes_dev->engine);
-
- return 0;
}
static int kmb_ocs_aes_probe(struct platform_device *pdev)
/* The OCS driver is a platform device. */
static struct platform_driver kmb_ocs_aes_driver = {
.probe = kmb_ocs_aes_probe,
- .remove = kmb_ocs_aes_remove,
+ .remove_new = kmb_ocs_aes_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = kmb_ocs_aes_of_match,
return rc;
}
-static int kmb_ocs_ecc_remove(struct platform_device *pdev)
+static void kmb_ocs_ecc_remove(struct platform_device *pdev)
{
struct ocs_ecc_dev *ecc_dev;
spin_unlock(&ocs_ecc.lock);
crypto_engine_exit(ecc_dev->engine);
-
- return 0;
}
/* Device tree driver match. */
/* The OCS driver is a platform device. */
static struct platform_driver kmb_ocs_ecc_driver = {
.probe = kmb_ocs_ecc_probe,
- .remove = kmb_ocs_ecc_remove,
+ .remove_new = kmb_ocs_ecc_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = kmb_ocs_ecc_of_match,
{}
};
-static int kmb_ocs_hcu_remove(struct platform_device *pdev)
+static void kmb_ocs_hcu_remove(struct platform_device *pdev)
{
- struct ocs_hcu_dev *hcu_dev;
- int rc;
-
- hcu_dev = platform_get_drvdata(pdev);
- if (!hcu_dev)
- return -ENODEV;
+ struct ocs_hcu_dev *hcu_dev = platform_get_drvdata(pdev);
crypto_engine_unregister_ahashes(ocs_hcu_algs, ARRAY_SIZE(ocs_hcu_algs));
- rc = crypto_engine_exit(hcu_dev->engine);
+ crypto_engine_exit(hcu_dev->engine);
spin_lock_bh(&ocs_hcu.lock);
list_del(&hcu_dev->list);
spin_unlock_bh(&ocs_hcu.lock);
-
- return rc;
}
static int kmb_ocs_hcu_probe(struct platform_device *pdev)
/* The OCS driver is a platform device. */
static struct platform_driver kmb_ocs_hcu_driver = {
.probe = kmb_ocs_hcu_probe,
- .remove = kmb_ocs_hcu_remove,
+ .remove_new = kmb_ocs_hcu_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = kmb_ocs_hcu_of_match,
/* Copyright(c) 2020 - 2021 Intel Corporation */
#include <linux/iopoll.h>
#include <adf_accel_devices.h>
+#include <adf_admin.h>
#include <adf_cfg.h>
+#include <adf_cfg_services.h>
#include <adf_clock.h>
#include <adf_common_drv.h>
#include <adf_gen4_dc.h>
#include <adf_gen4_hw_data.h>
#include <adf_gen4_pfvf.h>
#include <adf_gen4_pm.h>
+#include "adf_gen4_ras.h"
#include <adf_gen4_timer.h>
#include "adf_4xxx_hw_data.h"
#include "icp_qat_hw.h"
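+/* AE groups: groups 0 and 1 each cover four service engines, group 2 is the admin engine (AE 8) */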
+#define ADF_AE_GROUP_0 GENMASK(3, 0)
+#define ADF_AE_GROUP_1 GENMASK(7, 4)
+#define ADF_AE_GROUP_2 BIT(8)
+
enum adf_fw_objs {
ADF_FW_SYM_OBJ,
ADF_FW_ASYM_OBJ,
};
static const struct adf_fw_config adf_fw_cy_config[] = {
- {0xF0, ADF_FW_SYM_OBJ},
- {0xF, ADF_FW_ASYM_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_ASYM_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static const struct adf_fw_config adf_fw_dc_config[] = {
- {0xF0, ADF_FW_DC_OBJ},
- {0xF, ADF_FW_DC_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_DC_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static const struct adf_fw_config adf_fw_sym_config[] = {
- {0xF0, ADF_FW_SYM_OBJ},
- {0xF, ADF_FW_SYM_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_SYM_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static const struct adf_fw_config adf_fw_asym_config[] = {
- {0xF0, ADF_FW_ASYM_OBJ},
- {0xF, ADF_FW_ASYM_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_ASYM_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_ASYM_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static const struct adf_fw_config adf_fw_asym_dc_config[] = {
- {0xF0, ADF_FW_ASYM_OBJ},
- {0xF, ADF_FW_DC_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_ASYM_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static const struct adf_fw_config adf_fw_sym_dc_config[] = {
- {0xF0, ADF_FW_SYM_OBJ},
- {0xF, ADF_FW_DC_OBJ},
- {0x100, ADF_FW_ADMIN_OBJ},
+ {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
+};
+
+static const struct adf_fw_config adf_fw_dcc_config[] = {
+ {ADF_AE_GROUP_1, ADF_FW_DC_OBJ},
+ {ADF_AE_GROUP_0, ADF_FW_SYM_OBJ},
+ {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
};
static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_dc_config));
static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_asym_config));
static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_asym_dc_config));
static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_sym_dc_config));
+static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_dcc_config));
/* Worker thread to service arbiter mappings */
static const u32 default_thrd_to_arb_map[ADF_4XXX_MAX_ACCELENGINES] = {
0x0
};
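+/* dcc: only the AEs loaded with compression firmware are mapped to the arbiter; the sym AEs serve chaining and take no ring traffic */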
+static const u32 thrd_to_arb_map_dcc[ADF_4XXX_MAX_ACCELENGINES] = {
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x0000FFFF, 0x0000FFFF, 0x0000FFFF, 0x0000FFFF,
+ 0x0
+};
+
static struct adf_hw_device_class adf_4xxx_class = {
.name = ADF_4XXX_DEVICE_NAME,
.type = DEV_4XXX,
.instances = 0,
};
-enum dev_services {
- SVC_CY = 0,
- SVC_CY2,
- SVC_DC,
- SVC_SYM,
- SVC_ASYM,
- SVC_DC_ASYM,
- SVC_ASYM_DC,
- SVC_DC_SYM,
- SVC_SYM_DC,
-};
-
-static const char *const dev_cfg_services[] = {
- [SVC_CY] = ADF_CFG_CY,
- [SVC_CY2] = ADF_CFG_ASYM_SYM,
- [SVC_DC] = ADF_CFG_DC,
- [SVC_SYM] = ADF_CFG_SYM,
- [SVC_ASYM] = ADF_CFG_ASYM,
- [SVC_DC_ASYM] = ADF_CFG_DC_ASYM,
- [SVC_ASYM_DC] = ADF_CFG_ASYM_DC,
- [SVC_DC_SYM] = ADF_CFG_DC_SYM,
- [SVC_SYM_DC] = ADF_CFG_SYM_DC,
-};
-
static int get_service_enabled(struct adf_accel_dev *accel_dev)
{
char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
return ret;
}
- ret = match_string(dev_cfg_services, ARRAY_SIZE(dev_cfg_services),
+ ret = match_string(adf_cfg_services, ARRAY_SIZE(adf_cfg_services),
services);
if (ret < 0)
dev_err(&GET_DEV(accel_dev),
{
struct pci_dev *pdev = accel_dev->accel_pci_dev.pci_dev;
u32 capabilities_sym, capabilities_asym, capabilities_dc;
+ u32 capabilities_dcc;
u32 fusectl1;
/* Read accelerator capabilities mask */
return capabilities_sym | capabilities_asym;
case SVC_DC:
return capabilities_dc;
+ case SVC_DCC:
+		/*
+		 * Sym capabilities are required for the chaining operations,
+		 * but sym crypto instances are not exposed in this configuration
+		 */
+ capabilities_dcc = capabilities_dc | capabilities_sym;
+ capabilities_dcc &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC;
+ return capabilities_dcc;
case SVC_SYM:
return capabilities_sym;
case SVC_ASYM:
switch (get_service_enabled(accel_dev)) {
case SVC_DC:
return thrd_to_arb_map_dc;
+ case SVC_DCC:
+ return thrd_to_arb_map_dcc;
default:
return default_thrd_to_arb_map;
}
return ADF_4XXX_KPT_COUNTER_FREQ;
}
+static void adf_init_rl_data(struct adf_rl_hw_data *rl_data)
+{
+ rl_data->pciout_tb_offset = ADF_GEN4_RL_TOKEN_PCIEOUT_BUCKET_OFFSET;
+ rl_data->pciin_tb_offset = ADF_GEN4_RL_TOKEN_PCIEIN_BUCKET_OFFSET;
+ rl_data->r2l_offset = ADF_GEN4_RL_R2L_OFFSET;
+ rl_data->l2c_offset = ADF_GEN4_RL_L2C_OFFSET;
+ rl_data->c2s_offset = ADF_GEN4_RL_C2S_OFFSET;
+
+ rl_data->pcie_scale_div = ADF_4XXX_RL_PCIE_SCALE_FACTOR_DIV;
+ rl_data->pcie_scale_mul = ADF_4XXX_RL_PCIE_SCALE_FACTOR_MUL;
+ rl_data->dcpr_correction = ADF_4XXX_RL_DCPR_CORRECTION;
+ rl_data->max_tp[ADF_SVC_ASYM] = ADF_4XXX_RL_MAX_TP_ASYM;
+ rl_data->max_tp[ADF_SVC_SYM] = ADF_4XXX_RL_MAX_TP_SYM;
+ rl_data->max_tp[ADF_SVC_DC] = ADF_4XXX_RL_MAX_TP_DC;
+ rl_data->scan_interval = ADF_4XXX_RL_SCANS_PER_SEC;
+ rl_data->scale_ref = ADF_4XXX_RL_SLICE_REF;
+}
+
static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
{
struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_4XXX_PMISC_BAR];
return ARRAY_SIZE(adf_fw_cy_config);
}
-static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
- const char * const fw_objs[], int num_objs)
+static const struct adf_fw_config *get_fw_config(struct adf_accel_dev *accel_dev)
{
- int id;
-
switch (get_service_enabled(accel_dev)) {
case SVC_CY:
case SVC_CY2:
- id = adf_fw_cy_config[obj_num].obj;
- break;
+ return adf_fw_cy_config;
case SVC_DC:
- id = adf_fw_dc_config[obj_num].obj;
- break;
+ return adf_fw_dc_config;
+ case SVC_DCC:
+ return adf_fw_dcc_config;
case SVC_SYM:
- id = adf_fw_sym_config[obj_num].obj;
- break;
+ return adf_fw_sym_config;
case SVC_ASYM:
- id = adf_fw_asym_config[obj_num].obj;
- break;
+ return adf_fw_asym_config;
case SVC_ASYM_DC:
case SVC_DC_ASYM:
- id = adf_fw_asym_dc_config[obj_num].obj;
- break;
+ return adf_fw_asym_dc_config;
case SVC_SYM_DC:
case SVC_DC_SYM:
- id = adf_fw_sym_dc_config[obj_num].obj;
- break;
+ return adf_fw_sym_dc_config;
default:
- id = -EINVAL;
- break;
+ return NULL;
+ }
+}
+
+enum adf_rp_groups {
+ RP_GROUP_0 = 0,
+ RP_GROUP_1,
+ RP_GROUP_COUNT
+};
+
+static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
+{
+ enum adf_cfg_service_type rps[RP_GROUP_COUNT];
+ const struct adf_fw_config *fw_config;
+ u16 ring_to_svc_map;
+ int i, j;
+
+ fw_config = get_fw_config(accel_dev);
+ if (!fw_config)
+ return 0;
+
+ for (i = 0; i < RP_GROUP_COUNT; i++) {
+ switch (fw_config[i].ae_mask) {
+ case ADF_AE_GROUP_0:
+ j = RP_GROUP_0;
+ break;
+ case ADF_AE_GROUP_1:
+ j = RP_GROUP_1;
+ break;
+ default:
+ return 0;
+ }
+
+ switch (fw_config[i].obj) {
+ case ADF_FW_SYM_OBJ:
+ rps[j] = SYM;
+ break;
+ case ADF_FW_ASYM_OBJ:
+ rps[j] = ASYM;
+ break;
+ case ADF_FW_DC_OBJ:
+ rps[j] = COMP;
+ break;
+ default:
+ rps[j] = 0;
+ break;
+ }
}
+ ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
+ rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
+ rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
+ rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;
+
+ return ring_to_svc_map;
+}
+
+static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ const char * const fw_objs[], int num_objs)
+{
+ const struct adf_fw_config *fw_config;
+ int id;
+
+ fw_config = get_fw_config(accel_dev);
+ if (fw_config)
+ id = fw_config[obj_num].obj;
+ else
+ id = -EINVAL;
+
if (id < 0 || id > num_objs)
return NULL;
static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
{
- switch (get_service_enabled(accel_dev)) {
- case SVC_CY:
- return adf_fw_cy_config[obj_num].ae_mask;
- case SVC_DC:
- return adf_fw_dc_config[obj_num].ae_mask;
- case SVC_CY2:
- return adf_fw_cy_config[obj_num].ae_mask;
- case SVC_SYM:
- return adf_fw_sym_config[obj_num].ae_mask;
- case SVC_ASYM:
- return adf_fw_asym_config[obj_num].ae_mask;
- case SVC_ASYM_DC:
- case SVC_DC_ASYM:
- return adf_fw_asym_dc_config[obj_num].ae_mask;
- case SVC_SYM_DC:
- case SVC_DC_SYM:
- return adf_fw_sym_dc_config[obj_num].ae_mask;
- default:
+ const struct adf_fw_config *fw_config;
+
+ fw_config = get_fw_config(accel_dev);
+ if (!fw_config)
return 0;
- }
+
+ return fw_config[obj_num].ae_mask;
+}
+
+static void adf_gen4_set_err_mask(struct adf_dev_err_mask *dev_err_mask)
+{
+ dev_err_mask->cppagentcmdpar_mask = ADF_4XXX_HICPPAGENTCMDPARERRLOG_MASK;
+ dev_err_mask->parerr_ath_cph_mask = ADF_4XXX_PARITYERRORMASK_ATH_CPH_MASK;
+ dev_err_mask->parerr_cpr_xlt_mask = ADF_4XXX_PARITYERRORMASK_CPR_XLT_MASK;
+ dev_err_mask->parerr_dcpr_ucs_mask = ADF_4XXX_PARITYERRORMASK_DCPR_UCS_MASK;
+ dev_err_mask->parerr_pke_mask = ADF_4XXX_PARITYERRORMASK_PKE_MASK;
+ dev_err_mask->ssmfeatren_mask = ADF_4XXX_SSMFEATREN_MASK;
}
void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
hw_data->uof_get_ae_mask = uof_get_ae_mask;
hw_data->set_msix_rttable = set_msix_default_rttable;
hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
+ hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
hw_data->disable_iov = adf_disable_sriov;
hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
hw_data->enable_pm = adf_gen4_enable_pm;
hw_data->stop_timer = adf_gen4_timer_stop;
hw_data->get_hb_clock = get_heartbeat_clock;
hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE;
+ hw_data->clock_frequency = ADF_4XXX_AE_FREQ;
+ adf_gen4_set_err_mask(&hw_data->dev_err_mask);
adf_gen4_init_hw_csr_ops(&hw_data->csr_ops);
adf_gen4_init_pf_pfvf_ops(&hw_data->pfvf_ops);
adf_gen4_init_dc_ops(&hw_data->dc_ops);
+ adf_gen4_init_ras_ops(&hw_data->ras_ops);
+ adf_init_rl_data(&hw_data->rl_data);
}
void adf_clean_hw_data_4xxx(struct adf_hw_device_data *hw_data)
#define ADF_4XXX_ACCELENGINES_MASK (0x1FF)
#define ADF_4XXX_ADMIN_AE_MASK (0x100)
+#define ADF_4XXX_HICPPAGENTCMDPARERRLOG_MASK 0x1F
+#define ADF_4XXX_PARITYERRORMASK_ATH_CPH_MASK 0xF000F
+#define ADF_4XXX_PARITYERRORMASK_CPR_XLT_MASK 0x10001
+#define ADF_4XXX_PARITYERRORMASK_DCPR_UCS_MASK 0x30007
+#define ADF_4XXX_PARITYERRORMASK_PKE_MASK 0x3F
+
+/*
+ * SSMFEATREN bit mask
+ * BIT(4) - enables parity detection on CPP
+ * BIT(12) - enables the logging of push/pull data errors
+ * in pperr register
+ * BIT(16) - BIT(23) - enable parity detection on SPPs
+ */
+#define ADF_4XXX_SSMFEATREN_MASK \
+ (BIT(4) | BIT(12) | BIT(16) | BIT(17) | BIT(18) | \
+ BIT(19) | BIT(20) | BIT(21) | BIT(22) | BIT(23))
+
#define ADF_4XXX_ETR_MAX_BANKS 64
/* MSIX interrupt */
#define ADF_402XX_ASYM_OBJ "qat_402xx_asym.bin"
#define ADF_402XX_ADMIN_OBJ "qat_402xx_admin.bin"
+/* RL constants */
+#define ADF_4XXX_RL_PCIE_SCALE_FACTOR_DIV 100
+#define ADF_4XXX_RL_PCIE_SCALE_FACTOR_MUL 102
+#define ADF_4XXX_RL_DCPR_CORRECTION 1
+#define ADF_4XXX_RL_SCANS_PER_SEC 954
+#define ADF_4XXX_RL_MAX_TP_ASYM 173750UL
+#define ADF_4XXX_RL_MAX_TP_SYM 95000UL
+#define ADF_4XXX_RL_MAX_TP_DC 45000UL
+#define ADF_4XXX_RL_SLICE_REF 1000UL
+
/* Clocks frequency */
-#define ADF_4XXX_KPT_COUNTER_FREQ (100 * HZ_PER_MHZ)
+#define ADF_4XXX_KPT_COUNTER_FREQ (100 * HZ_PER_MHZ)
+#define ADF_4XXX_AE_FREQ (1000 * HZ_PER_MHZ)
/* qat_4xxx fuse bits are different from old GENs, redefine them */
enum icp_qat_4xxx_slice_mask {
#include <adf_heartbeat.h>
#include "adf_4xxx_hw_data.h"
+#include "adf_cfg_services.h"
#include "qat_compression.h"
#include "qat_crypto.h"
#include "adf_transport_access_macros.h"
};
MODULE_DEVICE_TABLE(pci, adf_pci_tbl);
-enum configs {
- DEV_CFG_CY = 0,
- DEV_CFG_DC,
- DEV_CFG_SYM,
- DEV_CFG_ASYM,
- DEV_CFG_ASYM_SYM,
- DEV_CFG_ASYM_DC,
- DEV_CFG_DC_ASYM,
- DEV_CFG_SYM_DC,
- DEV_CFG_DC_SYM,
-};
-
-static const char * const services_operations[] = {
- ADF_CFG_CY,
- ADF_CFG_DC,
- ADF_CFG_SYM,
- ADF_CFG_ASYM,
- ADF_CFG_ASYM_SYM,
- ADF_CFG_ASYM_DC,
- ADF_CFG_DC_ASYM,
- ADF_CFG_SYM_DC,
- ADF_CFG_DC_SYM,
-};
-
static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
{
if (accel_dev->hw_device) {
if (ret)
goto err;
- ret = sysfs_match_string(services_operations, services);
+ ret = sysfs_match_string(adf_cfg_services, services);
if (ret < 0)
goto err;
switch (ret) {
- case DEV_CFG_CY:
- case DEV_CFG_ASYM_SYM:
+ case SVC_CY:
+ case SVC_CY2:
ret = adf_crypto_dev_config(accel_dev);
break;
- case DEV_CFG_DC:
+ case SVC_DC:
+ case SVC_DCC:
ret = adf_comp_dev_config(accel_dev);
break;
default:
goto out_err;
}
+ accel_dev->ras_errors.enabled = true;
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, true);
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
MODULE_SOFTDEP("pre: crypto-intel_qat");
+MODULE_IMPORT_NS(CRYPTO_QAT);
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2014 - 2021 Intel Corporation */
#include <adf_accel_devices.h>
+#include <adf_admin.h>
#include <adf_clock.h>
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
MODULE_FIRMWARE(ADF_C3XXX_MMP);
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
MODULE_AUTHOR("Intel");
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2014 - 2021 Intel Corporation */
#include <adf_accel_devices.h>
+#include <adf_admin.h>
#include <adf_clock.h>
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
MODULE_FIRMWARE(ADF_C62X_MMP);
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
MODULE_AUTHOR("Intel");
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_CRYPTO_DEV_QAT) += intel_qat.o
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT
intel_qat-objs := adf_cfg.o \
adf_isr.o \
adf_ctl_drv.o \
+ adf_cfg_services.o \
adf_dev_mgr.o \
adf_init.o \
adf_accel_engine.o \
adf_admin.o \
adf_hw_arbiter.o \
adf_sysfs.o \
+ adf_sysfs_ras_counters.o \
adf_gen2_hw_data.o \
adf_gen2_config.o \
adf_gen4_hw_data.o \
adf_gen4_pm.o \
adf_gen2_dc.o \
adf_gen4_dc.o \
+ adf_gen4_ras.o \
adf_gen4_timer.o \
adf_clock.o \
qat_crypto.o \
qat_algs.o \
qat_asym_algs.o \
qat_algs_send.o \
+ adf_rl.o \
+ adf_rl_admin.o \
+ adf_sysfs_rl.o \
qat_uclo.o \
qat_hal.o \
qat_bl.o
intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o \
adf_fw_counters.o \
+ adf_cnv_dbgfs.o \
+ adf_gen4_pm_debugfs.o \
adf_heartbeat.o \
adf_heartbeat_dbgfs.o \
+ adf_pm_dbgfs.o \
adf_dbgfs.o
intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_vf_isr.o adf_pfvf_utils.o \
#include <linux/list.h>
#include <linux/io.h>
#include <linux/ratelimit.h>
+#include <linux/types.h>
#include "adf_cfg_common.h"
+#include "adf_rl.h"
#include "adf_pfvf_msg.h"
#define ADF_DH895XCC_DEVICE_NAME "dh895xcc"
#define ADF_PCI_MAX_BARS 3
#define ADF_DEVICE_NAME_LENGTH 32
#define ADF_ETR_MAX_RINGS_PER_BANK 16
-#define ADF_MAX_MSIX_VECTOR_NAME 16
+#define ADF_MAX_MSIX_VECTOR_NAME 48
#define ADF_DEVICE_NAME_PREFIX "qat_"
enum adf_accel_capabilities {
DEV_SKU_UNKNOWN,
};
+enum ras_errors {
+ ADF_RAS_CORR,
+ ADF_RAS_UNCORR,
+ ADF_RAS_FATAL,
+ ADF_RAS_ERRORS,
+};
+
+struct adf_error_counters {
+ atomic_t counter[ADF_RAS_ERRORS];
+ bool enabled;
+};
+
static inline const char *get_sku_info(enum dev_sku_info info)
{
switch (info) {
struct adf_etr_data;
struct adf_etr_ring_data;
+struct adf_ras_ops {
+ void (*enable_ras_errors)(struct adf_accel_dev *accel_dev);
+ void (*disable_ras_errors)(struct adf_accel_dev *accel_dev);
+ bool (*handle_interrupt)(struct adf_accel_dev *accel_dev,
+ bool *reset_required);
+};
+
struct adf_pfvf_ops {
int (*enable_comms)(struct adf_accel_dev *accel_dev);
u32 (*get_pf2vf_offset)(u32 i);
void (*build_deflate_ctx)(void *ctx);
};
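+/* Per-device error bit masks used by the GEN4 RAS code (adf_gen4_ras.c) to decode parity error registers */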
+struct adf_dev_err_mask {
+ u32 cppagentcmdpar_mask;
+ u32 parerr_ath_cph_mask;
+ u32 parerr_cpr_xlt_mask;
+ u32 parerr_dcpr_ucs_mask;
+ u32 parerr_pke_mask;
+ u32 parerr_wat_wcp_mask;
+ u32 ssmfeatren_mask;
+};
+
struct adf_hw_device_data {
struct adf_hw_device_class *dev_class;
u32 (*get_accel_mask)(struct adf_hw_device_data *self);
void (*get_arb_info)(struct arb_info *arb_csrs_info);
void (*get_admin_info)(struct admin_info *admin_csrs_info);
enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self);
+ u16 (*get_ring_to_svc_map)(struct adf_accel_dev *accel_dev);
int (*alloc_irq)(struct adf_accel_dev *accel_dev);
void (*free_irq)(struct adf_accel_dev *accel_dev);
void (*enable_error_correction)(struct adf_accel_dev *accel_dev);
struct adf_pfvf_ops pfvf_ops;
struct adf_hw_csr_ops csr_ops;
struct adf_dc_ops dc_ops;
+ struct adf_ras_ops ras_ops;
+ struct adf_dev_err_mask dev_err_mask;
+ struct adf_rl_hw_data rl_data;
const char *fw_name;
const char *fw_mmp_name;
u32 fuses;
u32 straps;
u32 accel_capabilities_mask;
u32 extended_dc_capabilities;
+ u16 fw_capabilities;
u32 clock_frequency;
u32 instance_id;
u16 accel_mask;
#define GET_SRV_TYPE(accel_dev, idx) \
(((GET_HW_DATA(accel_dev)->ring_to_svc_map) >> (ADF_SRV_TYPE_BIT_LEN * (idx))) \
& ADF_SRV_TYPE_MASK)
+#define GET_ERR_MASK(accel_dev) (&GET_HW_DATA(accel_dev)->dev_err_mask)
#define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines)
#define GET_CSR_OPS(accel_dev) (&(accel_dev)->hw_device->csr_ops)
#define GET_PFVF_OPS(accel_dev) (&(accel_dev)->hw_device->pfvf_ops)
dma_addr_t ovf_buff_p;
};
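+/* Power management state and interrupt counters, exposed through debugfs */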
+struct adf_pm {
+ struct dentry *debugfs_pm_status;
+ bool present;
+ int idle_irq_counters;
+ int throttle_irq_counters;
+ int fw_irq_counters;
+ int host_ack_counter;
+ int host_nack_counter;
+ ssize_t (*print_pm_status)(struct adf_accel_dev *accel_dev,
+ char __user *buf, size_t count, loff_t *pos);
+};
+
+struct adf_sysfs {
+ int ring_num;
+ struct rw_semaphore lock; /* protects access to the fields in this struct */
+};
+
struct adf_accel_dev {
struct adf_etr_data *transport;
struct adf_hw_device_data *hw_device;
struct adf_fw_loader_data *fw_loader;
struct adf_admin_comms *admin;
struct adf_dc_data *dc_data;
+ struct adf_pm power_management;
struct list_head crypto_list;
struct list_head compression_list;
unsigned long status;
atomic_t ref_count;
struct dentry *debugfs_dir;
struct dentry *fw_cntr_dbgfile;
+ struct dentry *cnv_dbgfile;
struct list_head list;
struct module *owner;
struct adf_accel_pci accel_pci_dev;
struct adf_timer *timer;
struct adf_heartbeat *heartbeat;
+ struct adf_rl *rate_limiting;
+ struct adf_sysfs sysfs;
union {
struct {
/* protects VF2PF interrupts access */
u8 pf_compat_ver;
} vf;
};
+ struct adf_error_counters ras_errors;
struct mutex state_lock; /* protect state of the device */
bool is_vf;
u32 accel_id;
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include "adf_accel_devices.h"
+#include "adf_admin.h"
#include "adf_common_drv.h"
+#include "adf_cfg.h"
#include "adf_heartbeat.h"
#include "icp_qat_fw_init_admin.h"
return 0;
}
+static int adf_set_chaining(struct adf_accel_dev *accel_dev)
+{
+ u32 ae_mask = GET_HW_DATA(accel_dev)->ae_mask;
+ struct icp_qat_fw_init_admin_resp resp = { };
+ struct icp_qat_fw_init_admin_req req = { };
+
+ req.cmd_id = ICP_QAT_FW_DC_CHAIN_INIT;
+
+ return adf_send_admin(accel_dev, &req, &resp, ae_mask);
+}
+
static int adf_get_dc_capabilities(struct adf_accel_dev *accel_dev,
u32 *capabilities)
{
return adf_send_admin(accel_dev, &req, &resp, ae_mask);
}
+static bool is_dcc_enabled(struct adf_accel_dev *accel_dev)
+{
+ char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+ int ret;
+
+ ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
+ ADF_SERVICES_ENABLED, services);
+ if (ret)
+ return false;
+
+ return !strcmp(services, "dcc");
+}
+
+static int adf_get_fw_capabilities(struct adf_accel_dev *accel_dev, u16 *caps)
+{
+ u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+ struct icp_qat_fw_init_admin_resp resp = { };
+ struct icp_qat_fw_init_admin_req req = { };
+ int ret;
+
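+	/* Without an admin AE the firmware cannot be queried; leave the capabilities untouched */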
+ if (!ae_mask)
+ return 0;
+
+ req.cmd_id = ICP_QAT_FW_CAPABILITIES_GET;
+ ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+ if (ret)
+ return ret;
+
+ *caps = resp.fw_capabilities;
+
+ return 0;
+}
+
+int adf_send_admin_rl_init(struct adf_accel_dev *accel_dev,
+ struct icp_qat_fw_init_admin_slice_cnt *slices)
+{
+ u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+ struct icp_qat_fw_init_admin_resp resp = { };
+ struct icp_qat_fw_init_admin_req req = { };
+ int ret;
+
+ req.cmd_id = ICP_QAT_FW_RL_INIT;
+
+ ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+ if (ret)
+ return ret;
+
+ memcpy(slices, &resp.slices, sizeof(*slices));
+
+ return 0;
+}
+
+int adf_send_admin_rl_add_update(struct adf_accel_dev *accel_dev,
+ struct icp_qat_fw_init_admin_req *req)
+{
+ u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+ struct icp_qat_fw_init_admin_resp resp = { };
+
+	/*
+	 * The req struct is filled in by the rate limiting (rl) implementation.
+	 * Commands used:
+	 *   ICP_QAT_FW_RL_ADD to add a new SLA
+	 *   ICP_QAT_FW_RL_UPDATE to update an existing SLA
+	 */
+ return adf_send_admin(accel_dev, req, &resp, ae_mask);
+}
+
+int adf_send_admin_rl_delete(struct adf_accel_dev *accel_dev, u16 node_id,
+ u8 node_type)
+{
+ u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+ struct icp_qat_fw_init_admin_resp resp = { };
+ struct icp_qat_fw_init_admin_req req = { };
+
+ req.cmd_id = ICP_QAT_FW_RL_REMOVE;
+ req.node_id = node_id;
+ req.node_type = node_type;
+
+ return adf_send_admin(accel_dev, &req, &resp, ae_mask);
+}
+
/**
* adf_send_admin_init() - Function sends init message to FW
* @accel_dev: Pointer to acceleration device.
*/
int adf_send_admin_init(struct adf_accel_dev *accel_dev)
{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
u32 dc_capabilities = 0;
int ret;
+ ret = adf_set_fw_constants(accel_dev);
+ if (ret)
+ return ret;
+
+ if (is_dcc_enabled(accel_dev)) {
+ ret = adf_set_chaining(accel_dev);
+ if (ret)
+ return ret;
+ }
+
ret = adf_get_dc_capabilities(accel_dev, &dc_capabilities);
if (ret) {
dev_err(&GET_DEV(accel_dev), "Cannot get dc capabilities\n");
}
accel_dev->hw_device->extended_dc_capabilities = dc_capabilities;
- ret = adf_set_fw_constants(accel_dev);
- if (ret)
- return ret;
+ adf_get_fw_capabilities(accel_dev, &hw_data->fw_capabilities);
return adf_init_ae(accel_dev);
}
return adf_send_admin(accel_dev, &req, &resp, ae_mask);
}
+int adf_get_pm_info(struct adf_accel_dev *accel_dev, dma_addr_t p_state_addr,
+ size_t buff_size)
+{
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct icp_qat_fw_init_admin_req req = { };
+ struct icp_qat_fw_init_admin_resp resp;
+ u32 ae_mask = hw_data->admin_ae_mask;
+ int ret;
+
+ /* Query pm info via init/admin cmd */
+ if (!accel_dev->admin) {
+ dev_err(&GET_DEV(accel_dev), "adf_admin is not available\n");
+ return -EFAULT;
+ }
+
+ req.cmd_id = ICP_QAT_FW_PM_INFO;
+ req.init_cfg_sz = buff_size;
+ req.init_cfg_ptr = p_state_addr;
+
+ ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+ if (ret)
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to query power-management info\n");
+
+ return ret;
+}
+
+int adf_get_cnv_stats(struct adf_accel_dev *accel_dev, u16 ae, u16 *err_cnt,
+ u16 *latest_err)
+{
+ struct icp_qat_fw_init_admin_req req = { };
+ struct icp_qat_fw_init_admin_resp resp;
+ int ret;
+
+ req.cmd_id = ICP_QAT_FW_CNV_STATS_GET;
+
+ ret = adf_put_admin_msg_sync(accel_dev, ae, &req, &resp);
+ if (ret)
+ return ret;
+ if (resp.status)
+ return -EPROTONOSUPPORT;
+
+ *err_cnt = resp.error_count;
+ *latest_err = resp.latest_error;
+
+ return ret;
+}
+
int adf_init_admin_comms(struct adf_accel_dev *accel_dev)
{
struct adf_admin_comms *admin;
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_ADMIN
+#define ADF_ADMIN
+
+#include "icp_qat_fw_init_admin.h"
+
+struct adf_accel_dev;
+
+int adf_init_admin_comms(struct adf_accel_dev *accel_dev);
+void adf_exit_admin_comms(struct adf_accel_dev *accel_dev);
+int adf_send_admin_init(struct adf_accel_dev *accel_dev);
+int adf_get_ae_fw_counters(struct adf_accel_dev *accel_dev, u16 ae, u64 *reqs, u64 *resps);
+int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay);
+int adf_send_admin_tim_sync(struct adf_accel_dev *accel_dev, u32 cnt);
+int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks);
+int adf_send_admin_rl_init(struct adf_accel_dev *accel_dev,
+ struct icp_qat_fw_init_admin_slice_cnt *slices);
+int adf_send_admin_rl_add_update(struct adf_accel_dev *accel_dev,
+ struct icp_qat_fw_init_admin_req *req);
+int adf_send_admin_rl_delete(struct adf_accel_dev *accel_dev, u16 node_id,
+ u8 node_type);
+int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp);
+int adf_get_pm_info(struct adf_accel_dev *accel_dev, dma_addr_t p_state_addr, size_t buff_size);
+int adf_get_cnv_stats(struct adf_accel_dev *accel_dev, u16 ae, u16 *err_cnt, u16 *latest_err);
+
+#endif
if (adf_dev_restart(accel_dev)) {
/* The device hung and we can't restart it, so stop here */
dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
- kfree(reset_data);
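+		/* For a synchronous reset the caller still owns reset_data and is expected to free it, so only free here in the async case */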
+ if (reset_data->mode == ADF_DEV_RESET_ASYNC)
+ kfree(reset_data);
WARN(1, "QAT: device restart failed. Device is unusable\n");
return;
}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/export.h>
+#include "adf_cfg_services.h"
+#include "adf_cfg_strings.h"
+
+const char *const adf_cfg_services[] = {
+ [SVC_CY] = ADF_CFG_CY,
+ [SVC_CY2] = ADF_CFG_ASYM_SYM,
+ [SVC_DC] = ADF_CFG_DC,
+ [SVC_DCC] = ADF_CFG_DCC,
+ [SVC_SYM] = ADF_CFG_SYM,
+ [SVC_ASYM] = ADF_CFG_ASYM,
+ [SVC_DC_ASYM] = ADF_CFG_DC_ASYM,
+ [SVC_ASYM_DC] = ADF_CFG_ASYM_DC,
+ [SVC_DC_SYM] = ADF_CFG_DC_SYM,
+ [SVC_SYM_DC] = ADF_CFG_SYM_DC,
+};
+EXPORT_SYMBOL_GPL(adf_cfg_services);
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef _ADF_CFG_SERVICES_H_
+#define _ADF_CFG_SERVICES_H_
+
+#include "adf_cfg_strings.h"
+
+enum adf_services {
+ SVC_CY = 0,
+ SVC_CY2,
+ SVC_DC,
+ SVC_DCC,
+ SVC_SYM,
+ SVC_ASYM,
+ SVC_DC_ASYM,
+ SVC_ASYM_DC,
+ SVC_DC_SYM,
+ SVC_SYM_DC,
+ SVC_COUNT
+};
+
+extern const char *const adf_cfg_services[SVC_COUNT];
+
+#endif
#define ADF_CFG_DC_ASYM "dc;asym"
#define ADF_CFG_SYM_DC "sym;dc"
#define ADF_CFG_DC_SYM "dc;sym"
+#define ADF_CFG_DCC "dcc"
#define ADF_SERVICES_ENABLED "ServicesEnabled"
#define ADF_PM_IDLE_SUPPORT "PmIdleSupport"
#define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled"
#include <linux/types.h>
#include <linux/units.h>
#include <asm/errno.h>
+#include "adf_admin.h"
#include "adf_accel_devices.h"
#include "adf_clock.h"
#include "adf_common_drv.h"
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/bitfield.h>
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+
+#include "adf_accel_devices.h"
+#include "adf_admin.h"
+#include "adf_common_drv.h"
+#include "adf_cnv_dbgfs.h"
+#include "qat_compression.h"
+
+#define CNV_DEBUGFS_FILENAME "cnv_errors"
+#define CNV_MIN_PADDING 16
+
+#define CNV_ERR_INFO_MASK GENMASK(11, 0)
+#define CNV_ERR_TYPE_MASK GENMASK(15, 12)
+#define CNV_SLICE_ERR_MASK GENMASK(7, 0)
+#define CNV_SLICE_ERR_SIGN_BIT_INDEX 7
+#define CNV_DELTA_ERR_SIGN_BIT_INDEX 11
+
+enum cnv_error_type {
+ CNV_ERR_TYPE_NONE,
+ CNV_ERR_TYPE_CHECKSUM,
+ CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH,
+ CNV_ERR_TYPE_DECOMPRESSION,
+ CNV_ERR_TYPE_TRANSLATION,
+ CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH,
+ CNV_ERR_TYPE_UNKNOWN,
+ CNV_ERR_TYPES_COUNT
+};
+
+#define CNV_ERROR_TYPE_GET(latest_err) \
+ min_t(u16, u16_get_bits(latest_err, CNV_ERR_TYPE_MASK), CNV_ERR_TYPE_UNKNOWN)
+
+#define CNV_GET_DELTA_ERR_INFO(latest_error) \
+ sign_extend32(latest_error, CNV_DELTA_ERR_SIGN_BIT_INDEX)
+
+#define CNV_GET_SLICE_ERR_INFO(latest_error) \
+ sign_extend32(latest_error, CNV_SLICE_ERR_SIGN_BIT_INDEX)
+
+#define CNV_GET_DEFAULT_ERR_INFO(latest_error) \
+ u16_get_bits(latest_error, CNV_ERR_INFO_MASK)
+
+enum cnv_fields {
+ CNV_ERR_COUNT,
+ CNV_LATEST_ERR,
+ CNV_FIELDS_COUNT
+};
+
+static const char * const cnv_field_names[CNV_FIELDS_COUNT] = {
+ [CNV_ERR_COUNT] = "Total Errors",
+ [CNV_LATEST_ERR] = "Last Error",
+};
+
+static const char * const cnv_error_names[CNV_ERR_TYPES_COUNT] = {
+ [CNV_ERR_TYPE_NONE] = "No Error",
+ [CNV_ERR_TYPE_CHECKSUM] = "Checksum Error",
+ [CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH] = "Length Error-P",
+ [CNV_ERR_TYPE_DECOMPRESSION] = "Decomp Error",
+ [CNV_ERR_TYPE_TRANSLATION] = "Xlat Error",
+ [CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH] = "Length Error-C",
+ [CNV_ERR_TYPE_UNKNOWN] = "Unknown Error",
+};
+
+struct ae_cnv_errors {
+ u16 ae;
+ u16 err_cnt;
+ u16 latest_err;
+ bool is_comp_ae;
+};
+
+struct cnv_err_stats {
+ u16 ae_count;
+ struct ae_cnv_errors ae_cnv_errors[];
+};
+
+static s16 get_err_info(u8 error_type, u16 latest)
+{
+ switch (error_type) {
+ case CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH:
+ case CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH:
+ return CNV_GET_DELTA_ERR_INFO(latest);
+ case CNV_ERR_TYPE_DECOMPRESSION:
+ case CNV_ERR_TYPE_TRANSLATION:
+ return CNV_GET_SLICE_ERR_INFO(latest);
+ default:
+ return CNV_GET_DEFAULT_ERR_INFO(latest);
+ }
+}
+
+static void *qat_cnv_errors_seq_start(struct seq_file *sfile, loff_t *pos)
+{
+ struct cnv_err_stats *err_stats = sfile->private;
+
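+	/* Position 0 emits the table header; positions 1..ae_count map to the per-AE entries */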
+ if (*pos == 0)
+ return SEQ_START_TOKEN;
+
+ if (*pos > err_stats->ae_count)
+ return NULL;
+
+ return &err_stats->ae_cnv_errors[*pos - 1];
+}
+
+static void *qat_cnv_errors_seq_next(struct seq_file *sfile, void *v,
+ loff_t *pos)
+{
+ struct cnv_err_stats *err_stats = sfile->private;
+
+ (*pos)++;
+
+ if (*pos > err_stats->ae_count)
+ return NULL;
+
+ return &err_stats->ae_cnv_errors[*pos - 1];
+}
+
+static void qat_cnv_errors_seq_stop(struct seq_file *sfile, void *v)
+{
+}
+
+static int qat_cnv_errors_seq_show(struct seq_file *sfile, void *v)
+{
+ struct ae_cnv_errors *ae_errors;
+ unsigned int i;
+ s16 err_info;
+ u8 err_type;
+
+ if (v == SEQ_START_TOKEN) {
+ seq_puts(sfile, "AE ");
+ for (i = 0; i < CNV_FIELDS_COUNT; ++i)
+ seq_printf(sfile, " %*s", CNV_MIN_PADDING,
+ cnv_field_names[i]);
+ } else {
+ ae_errors = v;
+
+ if (!ae_errors->is_comp_ae)
+ return 0;
+
+ err_type = CNV_ERROR_TYPE_GET(ae_errors->latest_err);
+ err_info = get_err_info(err_type, ae_errors->latest_err);
+
+ seq_printf(sfile, "%d:", ae_errors->ae);
+ seq_printf(sfile, " %*d", CNV_MIN_PADDING, ae_errors->err_cnt);
+ seq_printf(sfile, "%*s [%d]", CNV_MIN_PADDING,
+ cnv_error_names[err_type], err_info);
+ }
+ seq_putc(sfile, '\n');
+
+ return 0;
+}
+
+static const struct seq_operations qat_cnv_errors_sops = {
+ .start = qat_cnv_errors_seq_start,
+ .next = qat_cnv_errors_seq_next,
+ .stop = qat_cnv_errors_seq_stop,
+ .show = qat_cnv_errors_seq_show,
+};
+
+/**
+ * cnv_err_stats_alloc() - Get CNV stats for the provided device.
+ * @accel_dev: Pointer to a QAT acceleration device
+ *
+ * Allocates and populates a table of CNV error statistics for each non-admin AE
+ * available through the supplied acceleration device. The caller owns the
+ * returned memory and is responsible for freeing it with kfree().
+ *
+ * Returns: a pointer to a dynamically allocated struct cnv_err_stats on success,
+ * or an ERR_PTR() encoded error value on failure.
+ */
+static struct cnv_err_stats *cnv_err_stats_alloc(struct adf_accel_dev *accel_dev)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ struct cnv_err_stats *err_stats;
+ unsigned long ae_count;
+ unsigned long ae_mask;
+ size_t err_stats_size;
+ unsigned long ae;
+ unsigned int i;
+ u16 latest_err;
+ u16 err_cnt;
+ int ret;
+
+ if (!adf_dev_started(accel_dev)) {
+ dev_err(&GET_DEV(accel_dev), "QAT Device not started\n");
+ return ERR_PTR(-EBUSY);
+ }
+
+ /* Ignore the admin AEs */
+ ae_mask = hw_data->ae_mask & ~hw_data->admin_ae_mask;
+ ae_count = hweight_long(ae_mask);
+ if (unlikely(!ae_count))
+ return ERR_PTR(-EINVAL);
+
+ err_stats_size = struct_size(err_stats, ae_cnv_errors, ae_count);
+ err_stats = kmalloc(err_stats_size, GFP_KERNEL);
+ if (!err_stats)
+ return ERR_PTR(-ENOMEM);
+
+ err_stats->ae_count = ae_count;
+
+ i = 0;
+ for_each_set_bit(ae, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) {
+ ret = adf_get_cnv_stats(accel_dev, ae, &err_cnt, &latest_err);
+ if (ret) {
+ dev_dbg(&GET_DEV(accel_dev),
+ "Failed to get CNV stats for ae %ld, [%d].\n",
+ ae, ret);
+ err_stats->ae_cnv_errors[i++].is_comp_ae = false;
+ continue;
+ }
+ err_stats->ae_cnv_errors[i].is_comp_ae = true;
+ err_stats->ae_cnv_errors[i].latest_err = latest_err;
+ err_stats->ae_cnv_errors[i].err_cnt = err_cnt;
+ err_stats->ae_cnv_errors[i].ae = ae;
+ i++;
+ }
+
+ return err_stats;
+}
+
+static int qat_cnv_errors_file_open(struct inode *inode, struct file *file)
+{
+ struct adf_accel_dev *accel_dev = inode->i_private;
+ struct seq_file *cnv_errors_seq_file;
+ struct cnv_err_stats *cnv_err_stats;
+ int ret;
+
+ cnv_err_stats = cnv_err_stats_alloc(accel_dev);
+ if (IS_ERR(cnv_err_stats))
+ return PTR_ERR(cnv_err_stats);
+
+ ret = seq_open(file, &qat_cnv_errors_sops);
+ if (unlikely(ret)) {
+ kfree(cnv_err_stats);
+ return ret;
+ }
+
+ cnv_errors_seq_file = file->private_data;
+ cnv_errors_seq_file->private = cnv_err_stats;
+ return ret;
+}
+
+static int qat_cnv_errors_file_release(struct inode *inode, struct file *file)
+{
+ struct seq_file *cnv_errors_seq_file = file->private_data;
+
+ kfree(cnv_errors_seq_file->private);
+ cnv_errors_seq_file->private = NULL;
+
+ return seq_release(inode, file);
+}
+
+static const struct file_operations qat_cnv_fops = {
+ .owner = THIS_MODULE,
+ .open = qat_cnv_errors_file_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = qat_cnv_errors_file_release,
+};
+
+static ssize_t no_comp_file_read(struct file *f, char __user *buf, size_t count,
+ loff_t *pos)
+{
+ char *file_msg = "No engine configured for comp\n";
+
+ return simple_read_from_buffer(buf, count, pos, file_msg,
+ strlen(file_msg));
+}
+
+static const struct file_operations qat_cnv_no_comp_fops = {
+ .owner = THIS_MODULE,
+ .read = no_comp_file_read,
+};
+
+void adf_cnv_dbgfs_add(struct adf_accel_dev *accel_dev)
+{
+ const struct file_operations *fops;
+ void *data;
+
+ if (adf_hw_dev_has_compression(accel_dev)) {
+ fops = &qat_cnv_fops;
+ data = accel_dev;
+ } else {
+ fops = &qat_cnv_no_comp_fops;
+ data = NULL;
+ }
+
+ accel_dev->cnv_dbgfile = debugfs_create_file(CNV_DEBUGFS_FILENAME, 0400,
+ accel_dev->debugfs_dir,
+ data, fops);
+}
+
+void adf_cnv_dbgfs_rm(struct adf_accel_dev *accel_dev)
+{
+ debugfs_remove(accel_dev->cnv_dbgfile);
+ accel_dev->cnv_dbgfile = NULL;
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_CNV_DBG_H
+#define ADF_CNV_DBG_H
+
+struct adf_accel_dev;
+
+void adf_cnv_dbgfs_add(struct adf_accel_dev *accel_dev);
+void adf_cnv_dbgfs_rm(struct adf_accel_dev *accel_dev);
+
+#endif
#define ADF_STATUS_AE_STARTED 6
#define ADF_STATUS_PF_RUNNING 7
#define ADF_STATUS_IRQ_ALLOCATED 8
+#define ADF_STATUS_CRYPTO_ALGS_REGISTERED 9
+#define ADF_STATUS_COMP_ALGS_REGISTERED 10
enum adf_dev_reset_mode {
ADF_DEV_RESET_ASYNC = 0,
void adf_dev_restore(struct adf_accel_dev *accel_dev);
int adf_init_aer(void);
void adf_exit_aer(void);
-int adf_init_admin_comms(struct adf_accel_dev *accel_dev);
-void adf_exit_admin_comms(struct adf_accel_dev *accel_dev);
-int adf_send_admin_init(struct adf_accel_dev *accel_dev);
-int adf_get_ae_fw_counters(struct adf_accel_dev *accel_dev, u16 ae, u64 *reqs, u64 *resps);
-int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay);
-int adf_send_admin_tim_sync(struct adf_accel_dev *accel_dev, u32 cnt);
-int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks);
-int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp);
int adf_init_arb(struct adf_accel_dev *accel_dev);
void adf_exit_arb(struct adf_accel_dev *accel_dev);
void adf_update_ring_arb(struct adf_etr_ring_data *ring);
return pmisc->virt_addr;
}
+static inline void __iomem *adf_get_aram_base(struct adf_accel_dev *accel_dev)
+{
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_bar *param;
+
+ param = &GET_BARS(accel_dev)[hw_data->get_sram_bar_id(hw_data)];
+
+ return param->virt_addr;
+}
+
#endif
#include "adf_accel_devices.h"
#include "adf_cfg.h"
#include "adf_common_drv.h"
+#include "adf_cnv_dbgfs.h"
#include "adf_dbgfs.h"
#include "adf_fw_counters.h"
#include "adf_heartbeat_dbgfs.h"
+#include "adf_pm_dbgfs.h"
/**
* adf_dbgfs_init() - add persistent debugfs entries
if (!accel_dev->is_vf) {
adf_fw_counters_dbgfs_add(accel_dev);
adf_heartbeat_dbgfs_add(accel_dev);
+ adf_pm_dbgfs_add(accel_dev);
+ adf_cnv_dbgfs_add(accel_dev);
}
}
return;
if (!accel_dev->is_vf) {
+ adf_cnv_dbgfs_rm(accel_dev);
+ adf_pm_dbgfs_rm(accel_dev);
adf_heartbeat_dbgfs_rm(accel_dev);
adf_fw_counters_dbgfs_rm(accel_dev);
}
#include <linux/types.h>
#include "adf_accel_devices.h"
+#include "adf_admin.h"
#include "adf_common_drv.h"
#include "adf_fw_counters.h"
struct adf_fw_counters {
u16 ae_count;
- struct adf_ae_counters ae_counters[];
+ struct adf_ae_counters ae_counters[] __counted_by(ae_count);
};
static void adf_fw_counters_parse_ae_values(struct adf_ae_counters *ae_counters, u32 ae,
/* Number of heartbeat counter pairs */
#define ADF_NUM_HB_CNT_PER_AE ADF_NUM_THREADS_PER_AE
+/* Rate Limiting */
+#define ADF_GEN4_RL_R2L_OFFSET 0x508000
+#define ADF_GEN4_RL_L2C_OFFSET 0x509000
+#define ADF_GEN4_RL_C2S_OFFSET 0x508818
+#define ADF_GEN4_RL_TOKEN_PCIEIN_BUCKET_OFFSET 0x508800
+#define ADF_GEN4_RL_TOKEN_PCIEOUT_BUCKET_OFFSET 0x508804
+
void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);
void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
/* Copyright(c) 2022 Intel Corporation */
#include <linux/bitfield.h>
#include <linux/iopoll.h>
+#include <linux/kernel.h>
+
#include "adf_accel_devices.h"
+#include "adf_admin.h"
#include "adf_common_drv.h"
#include "adf_gen4_pm.h"
#include "adf_cfg_strings.h"
#include "adf_gen4_hw_data.h"
#include "adf_cfg.h"
-enum qat_pm_host_msg {
- PM_NO_CHANGE = 0,
- PM_SET_MIN,
-};
-
struct adf_gen4_pm_data {
struct work_struct pm_irq_work;
struct adf_accel_dev *accel_dev;
{
char pm_idle_support_cfg[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {};
void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+ struct adf_pm *pm = &accel_dev->power_management;
bool pm_idle_support;
u32 msg;
int ret;
if (ret)
pm_idle_support = true;
+ if (pm_idle_support)
+ pm->host_ack_counter++;
+ else
+ pm->host_nack_counter++;
+
/* Send HOST_MSG */
msg = FIELD_PREP(ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK,
pm_idle_support ? PM_SET_MIN : PM_NO_CHANGE);
container_of(work, struct adf_gen4_pm_data, pm_irq_work);
struct adf_accel_dev *accel_dev = pm_data->accel_dev;
void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+ struct adf_pm *pm = &accel_dev->power_management;
u32 pm_int_sts = pm_data->pm_int_sts;
u32 val;
/* PM Idle interrupt */
if (pm_int_sts & ADF_GEN4_PM_IDLE_STS) {
+ pm->idle_irq_counters++;
/* Issue host message to FW */
if (send_host_msg(accel_dev))
dev_warn_ratelimited(&GET_DEV(accel_dev),
"Failed to send host msg to FW\n");
}
+ /* PM throttle interrupt */
+ if (pm_int_sts & ADF_GEN4_PM_THR_STS)
+ pm->throttle_irq_counters++;
+
+ /* PM fw interrupt */
+ if (pm_int_sts & ADF_GEN4_PM_FW_INT_STS)
+ pm->fw_irq_counters++;
+
/* Clear interrupt status */
ADF_CSR_WR(pmisc, ADF_GEN4_PM_INTERRUPT, pm_int_sts);
if (ret)
return ret;
+ /* Initialize PM internal data */
+ adf_gen4_init_dev_pm_data(accel_dev);
+
/* Enable default PM interrupts: IDLE, THROTTLE */
val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT);
val |= ADF_GEN4_PM_INT_EN_DEFAULT;
#ifndef ADF_GEN4_PM_H
#define ADF_GEN4_PM_H
-#include "adf_accel_devices.h"
+#include <linux/bits.h>
+
+struct adf_accel_dev;
+
+enum qat_pm_host_msg {
+ PM_NO_CHANGE = 0,
+ PM_SET_MIN,
+};
/* Power management registers */
#define ADF_GEN4_PM_HOST_MSG (0x50A01C)
#define ADF_GEN4_PM_MAX_IDLE_FILTER (0x7)
#define ADF_GEN4_PM_DEFAULT_IDLE_SUPPORT (0x1)
+/* PM CSRs fields masks */
+#define ADF_GEN4_PM_DOMAIN_POWER_GATED_MASK GENMASK(15, 0)
+#define ADF_GEN4_PM_SSM_PM_ENABLE_MASK GENMASK(15, 0)
+#define ADF_GEN4_PM_IDLE_FILTER_MASK GENMASK(5, 3)
+#define ADF_GEN4_PM_IDLE_ENABLE_MASK BIT(2)
+#define ADF_GEN4_PM_ENABLE_PM_MASK BIT(21)
+#define ADF_GEN4_PM_ENABLE_PM_IDLE_MASK BIT(22)
+#define ADF_GEN4_PM_ENABLE_DEEP_PM_IDLE_MASK BIT(23)
+#define ADF_GEN4_PM_CURRENT_WP_MASK GENMASK(19, 11)
+#define ADF_GEN4_PM_CPM_PM_STATE_MASK GENMASK(22, 20)
+#define ADF_GEN4_PM_PENDING_WP_MASK GENMASK(31, 23)
+#define ADF_GEN4_PM_THR_VALUE_MASK GENMASK(6, 4)
+#define ADF_GEN4_PM_MIN_PWR_ACK_MASK BIT(7)
+#define ADF_GEN4_PM_MIN_PWR_ACK_PENDING_MASK BIT(17)
+#define ADF_GEN4_PM_CPR_ACTIVE_COUNT_MASK BIT(0)
+#define ADF_GEN4_PM_CPR_MANAGED_COUNT_MASK BIT(0)
+#define ADF_GEN4_PM_XLT_ACTIVE_COUNT_MASK BIT(1)
+#define ADF_GEN4_PM_XLT_MANAGED_COUNT_MASK BIT(1)
+#define ADF_GEN4_PM_DCPR_ACTIVE_COUNT_MASK GENMASK(3, 2)
+#define ADF_GEN4_PM_DCPR_MANAGED_COUNT_MASK GENMASK(3, 2)
+#define ADF_GEN4_PM_PKE_ACTIVE_COUNT_MASK GENMASK(8, 4)
+#define ADF_GEN4_PM_PKE_MANAGED_COUNT_MASK GENMASK(8, 4)
+#define ADF_GEN4_PM_WAT_ACTIVE_COUNT_MASK GENMASK(13, 9)
+#define ADF_GEN4_PM_WAT_MANAGED_COUNT_MASK GENMASK(13, 9)
+#define ADF_GEN4_PM_WCP_ACTIVE_COUNT_MASK GENMASK(18, 14)
+#define ADF_GEN4_PM_WCP_MANAGED_COUNT_MASK GENMASK(18, 14)
+#define ADF_GEN4_PM_UCS_ACTIVE_COUNT_MASK GENMASK(20, 19)
+#define ADF_GEN4_PM_UCS_MANAGED_COUNT_MASK GENMASK(20, 19)
+#define ADF_GEN4_PM_CPH_ACTIVE_COUNT_MASK GENMASK(24, 21)
+#define ADF_GEN4_PM_CPH_MANAGED_COUNT_MASK GENMASK(24, 21)
+#define ADF_GEN4_PM_ATH_ACTIVE_COUNT_MASK GENMASK(28, 25)
+#define ADF_GEN4_PM_ATH_MANAGED_COUNT_MASK GENMASK(28, 25)
+
int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev);
bool adf_gen4_handle_pm_interrupt(struct adf_accel_dev *accel_dev);
+#ifdef CONFIG_DEBUG_FS
+void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev);
+#else
+static inline void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev)
+{
+}
+#endif /* CONFIG_DEBUG_FS */
+
#endif
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/string_helpers.h>
+#include <linux/stringify.h>
+
+#include "adf_accel_devices.h"
+#include "adf_admin.h"
+#include "adf_common_drv.h"
+#include "adf_gen4_pm.h"
+#include "icp_qat_fw_init_admin.h"
+
+/*
+ * This is needed because pm_scnprint_table() indexes the mask with a
+ * variable, so the mask is not a compile-time constant and the compile-time
+ * asserts in FIELD_GET() and u32_get_bits() cannot be satisfied.
+ */
+#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
+
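+/* u32 word index of a member within struct icp_qat_fw_init_admin_pm_info */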
+#define PM_INFO_MEMBER_OFF(member) \
+ (offsetof(struct icp_qat_fw_init_admin_pm_info, member) / sizeof(u32))
+
+#define PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, _mask_) \
+{ \
+ .reg_offset = PM_INFO_MEMBER_OFF(_reg_), \
+ .key = __stringify(_field_), \
+ .field_mask = _mask_, \
+}
+
+#define PM_INFO_REGSET_ENTRY32(_reg_, _field_) \
+ PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, GENMASK(31, 0))
+
+#define PM_INFO_REGSET_ENTRY(_reg_, _field_) \
+ PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, ADF_GEN4_PM_##_field_##_MASK)
+
+#define PM_INFO_MAX_KEY_LEN 21
+
+struct pm_status_row {
+ int reg_offset;
+ u32 field_mask;
+ const char *key;
+};
+
+static struct pm_status_row pm_fuse_rows[] = {
+ PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_PM),
+ PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_PM_IDLE),
+ PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_DEEP_PM_IDLE),
+};
+
+static struct pm_status_row pm_info_rows[] = {
+ PM_INFO_REGSET_ENTRY(pm.status, CPM_PM_STATE),
+ PM_INFO_REGSET_ENTRY(pm.status, PENDING_WP),
+ PM_INFO_REGSET_ENTRY(pm.status, CURRENT_WP),
+ PM_INFO_REGSET_ENTRY(pm.fw_init, IDLE_ENABLE),
+ PM_INFO_REGSET_ENTRY(pm.fw_init, IDLE_FILTER),
+ PM_INFO_REGSET_ENTRY(pm.main, MIN_PWR_ACK),
+ PM_INFO_REGSET_ENTRY(pm.thread, MIN_PWR_ACK_PENDING),
+ PM_INFO_REGSET_ENTRY(pm.main, THR_VALUE),
+};
+
+static struct pm_status_row pm_ssm_rows[] = {
+ PM_INFO_REGSET_ENTRY(ssm.pm_enable, SSM_PM_ENABLE),
+ PM_INFO_REGSET_ENTRY32(ssm.active_constraint, ACTIVE_CONSTRAINT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_domain_status, DOMAIN_POWER_GATED),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, ATH_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, CPH_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, PKE_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, CPR_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, DCPR_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, UCS_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, XLT_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, WAT_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_active_status, WCP_ACTIVE_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, ATH_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, CPH_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, PKE_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, CPR_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, DCPR_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, UCS_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, XLT_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, WAT_MANAGED_COUNT),
+ PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, WCP_MANAGED_COUNT),
+};
+
+static struct pm_status_row pm_log_rows[] = {
+ PM_INFO_REGSET_ENTRY32(event_counters.host_msg, HOST_MSG_EVENT_COUNT),
+ PM_INFO_REGSET_ENTRY32(event_counters.sys_pm, SYS_PM_EVENT_COUNT),
+ PM_INFO_REGSET_ENTRY32(event_counters.local_ssm, SSM_EVENT_COUNT),
+ PM_INFO_REGSET_ENTRY32(event_counters.timer, TIMER_EVENT_COUNT),
+ PM_INFO_REGSET_ENTRY32(event_counters.unknown, UNKNOWN_EVENT_COUNT),
+};
+
+static struct pm_status_row pm_event_rows[ICP_QAT_NUMBER_OF_PM_EVENTS] = {
+ PM_INFO_REGSET_ENTRY32(event_log[0], EVENT0),
+ PM_INFO_REGSET_ENTRY32(event_log[1], EVENT1),
+ PM_INFO_REGSET_ENTRY32(event_log[2], EVENT2),
+ PM_INFO_REGSET_ENTRY32(event_log[3], EVENT3),
+ PM_INFO_REGSET_ENTRY32(event_log[4], EVENT4),
+ PM_INFO_REGSET_ENTRY32(event_log[5], EVENT5),
+ PM_INFO_REGSET_ENTRY32(event_log[6], EVENT6),
+ PM_INFO_REGSET_ENTRY32(event_log[7], EVENT7),
+};
+
+static struct pm_status_row pm_csrs_rows[] = {
+ PM_INFO_REGSET_ENTRY32(pm.fw_init, CPM_PM_FW_INIT),
+ PM_INFO_REGSET_ENTRY32(pm.status, CPM_PM_STATUS),
+ PM_INFO_REGSET_ENTRY32(pm.main, CPM_PM_MASTER_FW),
+ PM_INFO_REGSET_ENTRY32(pm.pwrreq, CPM_PM_PWRREQ),
+};
+
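+/*
+ * Print one "key: value" line per table row, extracting each field from the
+ * corresponding register in the PM info dump returned by firmware.
+ */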
+static int pm_scnprint_table(char *buff, struct pm_status_row *table,
+ u32 *pm_info_regs, size_t buff_size, int table_len,
+ bool lowercase)
+{
+ char key[PM_INFO_MAX_KEY_LEN];
+ int wr = 0;
+ int i;
+
+ for (i = 0; i < table_len; i++) {
+ if (lowercase)
+ string_lower(key, table[i].key);
+ else
+ string_upper(key, table[i].key);
+
+ wr += scnprintf(&buff[wr], buff_size - wr, "%s: %#x\n", key,
+ field_get(table[i].field_mask,
+ pm_info_regs[table[i].reg_offset]));
+ }
+
+ return wr;
+}
+
+static int pm_scnprint_table_upper_keys(char *buff, struct pm_status_row *table,
+ u32 *pm_info_regs, size_t buff_size,
+ int table_len)
+{
+ return pm_scnprint_table(buff, table, pm_info_regs, buff_size,
+ table_len, false);
+}
+
+static int pm_scnprint_table_lower_keys(char *buff, struct pm_status_row *table,
+ u32 *pm_info_regs, size_t buff_size,
+ int table_len)
+{
+ return pm_scnprint_table(buff, table, pm_info_regs, buff_size,
+ table_len, true);
+}
+
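+/* The firmware PM info structure must fit in the page buffer used below */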
+static_assert(sizeof(struct icp_qat_fw_init_admin_pm_info) < PAGE_SIZE);
+
+static ssize_t adf_gen4_print_pm_status(struct adf_accel_dev *accel_dev,
+ char __user *buf, size_t count,
+ loff_t *pos)
+{
+ void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+ struct adf_pm *pm = &accel_dev->power_management;
+ struct icp_qat_fw_init_admin_pm_info *pm_info;
+ dma_addr_t p_state_addr;
+ u32 *pm_info_regs;
+ char *pm_kv;
+ int len = 0;
+ u32 val;
+ int ret;
+
+ pm_info = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!pm_info)
+ return -ENOMEM;
+
+ pm_kv = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!pm_kv) {
+ ret = -ENOMEM;
+ goto out_free;
+ }
+
+ p_state_addr = dma_map_single(&GET_DEV(accel_dev), pm_info, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ ret = dma_mapping_error(&GET_DEV(accel_dev), p_state_addr);
+ if (ret)
+ goto out_free;
+
+ /* Query PM info from QAT FW */
+ ret = adf_get_pm_info(accel_dev, p_state_addr, PAGE_SIZE);
+ dma_unmap_single(&GET_DEV(accel_dev), p_state_addr, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ if (ret)
+ goto out_free;
+
+ pm_info_regs = (u32 *)pm_info;
+
+ /* Fusectl related */
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "----------- PM Fuse info ---------\n");
+ len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_fuse_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_fuse_rows));
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "max_pwrreq: %#x\n",
+ pm_info->max_pwrreq);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "min_pwrreq: %#x\n",
+ pm_info->min_pwrreq);
+
+ /* PM related */
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "------------ PM Info ------------\n");
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "power_level: %s\n",
+ pm_info->pwr_state == PM_SET_MIN ? "min" : "max");
+ len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_info_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_info_rows));
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "pm_mode: STATIC\n");
+
+ /* SSM related */
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "----------- SSM_PM Info ----------\n");
+ len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_ssm_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_ssm_rows));
+
+ /* Log related */
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "------------- PM Log -------------\n");
+ len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_log_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_log_rows));
+
+ len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_event_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_event_rows));
+
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "idle_irq_count: %#x\n",
+ pm->idle_irq_counters);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "fw_irq_count: %#x\n",
+ pm->fw_irq_counters);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "throttle_irq_count: %#x\n", pm->throttle_irq_counters);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "host_ack_count: %#x\n",
+ pm->host_ack_counter);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "host_nack_count: %#x\n",
+ pm->host_nack_counter);
+
+ /* CSRs content */
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "----------- HW PM CSRs -----------\n");
+ len += pm_scnprint_table_upper_keys(&pm_kv[len], pm_csrs_rows,
+ pm_info_regs, PAGE_SIZE - len,
+ ARRAY_SIZE(pm_csrs_rows));
+
+ val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_HOST_MSG);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "CPM_PM_HOST_MSG: %#x\n", val);
+ val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT);
+ len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+ "CPM_PM_INTERRUPT: %#x\n", val);
+ ret = simple_read_from_buffer(buf, count, pos, pm_kv, len);
+
+out_free:
+ kfree(pm_info);
+ kfree(pm_kv);
+ return ret;
+}
+
+void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev)
+{
+ accel_dev->power_management.print_pm_status = adf_gen4_print_pm_status;
+ accel_dev->power_management.present = true;
+}
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include "adf_common_drv.h"
+#include "adf_gen4_hw_data.h"
+#include "adf_gen4_ras.h"
+#include "adf_sysfs_ras_counters.h"
+
+#define BITS_PER_REG(_n_) (sizeof(_n_) * BITS_PER_BYTE)
+
+static void enable_errsou_reporting(void __iomem *csr)
+{
+ /* Enable correctable error reporting in ERRSOU0 */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK0, 0);
+
+ /* Enable uncorrectable error reporting in ERRSOU1 */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK1, 0);
+
+ /*
+ * Enable uncorrectable error reporting in ERRSOU2
+ * but disable PM interrupt and CFC attention interrupt by default
+ */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK2,
+ ADF_GEN4_ERRSOU2_PM_INT_BIT |
+ ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK);
+
+ /*
+ * Enable uncorrectable error reporting in ERRSOU3
+ * but disable RLT error interrupt and VFLR notify interrupt by default
+ */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK3,
+ ADF_GEN4_ERRSOU3_RLTERROR_BIT |
+ ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT);
+}
+
+static void disable_errsou_reporting(void __iomem *csr)
+{
+ u32 val = 0;
+
+ /* Disable correctable error reporting in ERRSOU0 */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK0, ADF_GEN4_ERRSOU0_BIT);
+
+ /* Disable uncorrectable error reporting in ERRSOU1 */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK1, ADF_GEN4_ERRSOU1_BITMASK);
+
+ /* Disable uncorrectable error reporting in ERRSOU2 */
+ val = ADF_CSR_RD(csr, ADF_GEN4_ERRMSK2);
+ val |= ADF_GEN4_ERRSOU2_DIS_BITMASK;
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK2, val);
+
+ /* Disable uncorrectable error reporting in ERRSOU3 */
+ ADF_CSR_WR(csr, ADF_GEN4_ERRMSK3, ADF_GEN4_ERRSOU3_BITMASK);
+}
+
+static void enable_ae_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 ae_mask = GET_HW_DATA(accel_dev)->ae_mask;
+
+ /* Enable Acceleration Engine correctable error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOGENABLE_CPP0, ae_mask);
+
+ /* Enable Acceleration Engine uncorrectable error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0, ae_mask);
+}
+
+static void disable_ae_error_reporting(void __iomem *csr)
+{
+ /* Disable Acceleration Engine correctable error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOGENABLE_CPP0, 0);
+
+ /* Disable Acceleration Engine uncorrectable error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0, 0);
+}
+
+static void enable_cpp_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+ /* Enable HI CPP Agents Command Parity Error Reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE,
+ err_mask->cppagentcmdpar_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_CTRL,
+ ADF_GEN4_CPP_CFC_ERR_CTRL_BITMASK);
+}
+
+static void disable_cpp_error_reporting(void __iomem *csr)
+{
+ /* Disable HI CPP Agents Command Parity Error Reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE, 0);
+
+ ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_CTRL,
+ ADF_GEN4_CPP_CFC_ERR_CTRL_DIS_BITMASK);
+}
+
+static void enable_ti_ri_error_reporting(void __iomem *csr)
+{
+ u32 reg;
+
+ /* Enable RI Memory error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_RI_MEM_PAR_ERR_EN0,
+ ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK |
+ ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK);
+
+ /* Enable IOSF Primary Command Parity error Reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_RIMISCCTL, ADF_GEN4_RIMISCSTS_BIT);
+
+ /* Enable TI Internal Memory Parity Error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_ERR_MASK, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_ERR_MASK, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_ERR_MASK, 0);
+
+ /* Enable error handling in RI, TI CPP interface control registers */
+ ADF_CSR_WR(csr, ADF_GEN4_RICPPINTCTL, ADF_GEN4_RICPPINTCTL_BITMASK);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TICPPINTCTL, ADF_GEN4_TICPPINTCTL_BITMASK);
+
+ /*
+ * Enable error detection and reporting in TIMISCSTS
+ * with bits 1, 2 and 30 value preserved
+ */
+ reg = ADF_CSR_RD(csr, ADF_GEN4_TIMISCCTL);
+ reg &= ADF_GEN4_TIMSCCTL_RELAY_BITMASK;
+ reg |= ADF_GEN4_TIMISCCTL_BIT;
+ ADF_CSR_WR(csr, ADF_GEN4_TIMISCCTL, reg);
+}
+
+static void disable_ti_ri_error_reporting(void __iomem *csr)
+{
+ u32 reg;
+
+ /* Disable RI Memory error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_RI_MEM_PAR_ERR_EN0, 0);
+
+ /* Disable IOSF Primary Command Parity error Reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_RIMISCCTL, 0);
+
+ /* Disable TI Internal Memory Parity Error reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_ERR_MASK,
+ ADF_GEN4_TI_CI_PAR_STS_BITMASK);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK,
+ ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK,
+ ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_ERR_MASK,
+ ADF_GEN4_TI_CD_PAR_STS_BITMASK);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_ERR_MASK,
+ ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK);
+
+ /* Disable error handling in RI, TI CPP interface control registers */
+ ADF_CSR_WR(csr, ADF_GEN4_RICPPINTCTL, 0);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TICPPINTCTL, 0);
+
+ /*
+ * Disable error detection and reporting in TIMISCSTS
+ * with bits 1, 2 and 30 value preserved
+ */
+ reg = ADF_CSR_RD(csr, ADF_GEN4_TIMISCCTL);
+ reg &= ADF_GEN4_TIMSCCTL_RELAY_BITMASK;
+ ADF_CSR_WR(csr, ADF_GEN4_TIMISCCTL, reg);
+}
+
+static void enable_rf_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+ /* Enable RF parity error in Shared RAM */
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE, 0);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP, 0);
+}
+
+static void disable_rf_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+ /* Disable RF Parity Error reporting in Shared RAM */
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC,
+ ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH,
+ err_mask->parerr_ath_cph_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT,
+ err_mask->parerr_cpr_xlt_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS,
+ err_mask->parerr_dcpr_ucs_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE,
+ err_mask->parerr_pke_mask);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP,
+ err_mask->parerr_wat_wcp_mask);
+}
+
+static void enable_ssm_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 val = 0;
+
+ /* Enable SSM interrupts */
+ ADF_CSR_WR(csr, ADF_GEN4_INTMASKSSM, 0);
+
+ /* Enable shared memory error detection & correction */
+ val = ADF_CSR_RD(csr, ADF_GEN4_SSMFEATREN);
+ val |= err_mask->ssmfeatren_mask;
+ ADF_CSR_WR(csr, ADF_GEN4_SSMFEATREN, val);
+
+ /* Enable SER detection in SER_err_ssmsh register */
+ ADF_CSR_WR(csr, ADF_GEN4_SER_EN_SSMSH,
+ ADF_GEN4_SER_EN_SSMSH_BITMASK);
+
+ /* Enable SSM soft parity error */
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_ATH_CPH, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_CPR_XLT, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_DCPR_UCS, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_PKE, 0);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_WAT_WCP, 0);
+
+ /* Enable slice hang interrupt reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_ATH_CPH, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_CPR_XLT, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_DCPR_UCS, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_PKE, 0);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_WAT_WCP, 0);
+}
+
+static void disable_ssm_error_reporting(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 val = 0;
+
+ /* Disable SSM interrupts */
+ ADF_CSR_WR(csr, ADF_GEN4_INTMASKSSM,
+ ADF_GEN4_INTMASKSSM_BITMASK);
+
+ /* Disable shared memory error detection & correction */
+ val = ADF_CSR_RD(csr, ADF_GEN4_SSMFEATREN);
+ val &= ADF_GEN4_SSMFEATREN_DIS_BITMASK;
+ ADF_CSR_WR(csr, ADF_GEN4_SSMFEATREN, val);
+
+ /* Disable SER detection in SER_err_ssmsh register */
+ ADF_CSR_WR(csr, ADF_GEN4_SER_EN_SSMSH, 0);
+
+ /* Disable SSM soft parity error */
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_ATH_CPH,
+ err_mask->parerr_ath_cph_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_CPR_XLT,
+ err_mask->parerr_cpr_xlt_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_DCPR_UCS,
+ err_mask->parerr_dcpr_ucs_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_PKE,
+ err_mask->parerr_pke_mask);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_WAT_WCP,
+ err_mask->parerr_wat_wcp_mask);
+
+ /* Disable slice hang interrupt reporting */
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_ATH_CPH,
+ err_mask->parerr_ath_cph_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_CPR_XLT,
+ err_mask->parerr_cpr_xlt_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_DCPR_UCS,
+ err_mask->parerr_dcpr_ucs_mask);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_PKE,
+ err_mask->parerr_pke_mask);
+
+ if (err_mask->parerr_wat_wcp_mask)
+ ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_WAT_WCP,
+ err_mask->parerr_wat_wcp_mask);
+}
+
+static void enable_aram_error_reporting(void __iomem *csr)
+{
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERRUERR_EN,
+ ADF_GEN4_REG_ARAMCERRUERR_EN_BITMASK);
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR,
+ ADF_GEN4_REG_ARAMCERR_EN_BITMASK);
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR,
+ ADF_GEN4_REG_ARAMUERR_EN_BITMASK);
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR,
+ ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK);
+}
+
+static void disable_aram_error_reporting(void __iomem *csr)
+{
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERRUERR_EN, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR, 0);
+ ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR, 0);
+}
+
+static void adf_gen4_enable_ras(struct adf_accel_dev *accel_dev)
+{
+ void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+ void __iomem *csr = adf_get_pmisc_base(accel_dev);
+
+ enable_errsou_reporting(csr);
+ enable_ae_error_reporting(accel_dev, csr);
+ enable_cpp_error_reporting(accel_dev, csr);
+ enable_ti_ri_error_reporting(csr);
+ enable_rf_error_reporting(accel_dev, csr);
+ enable_ssm_error_reporting(accel_dev, csr);
+ enable_aram_error_reporting(aram_csr);
+}
+
+static void adf_gen4_disable_ras(struct adf_accel_dev *accel_dev)
+{
+ void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+ void __iomem *csr = adf_get_pmisc_base(accel_dev);
+
+ disable_errsou_reporting(csr);
+ disable_ae_error_reporting(csr);
+ disable_cpp_error_reporting(csr);
+ disable_ti_ri_error_reporting(csr);
+ disable_rf_error_reporting(accel_dev, csr);
+ disable_ssm_error_reporting(accel_dev, csr);
+ disable_aram_error_reporting(aram_csr);
+}
+
+static void adf_gen4_process_errsou0(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 aecorrerr = ADF_CSR_RD(csr, ADF_GEN4_HIAECORERRLOG_CPP0);
+
+ aecorrerr &= GET_HW_DATA(accel_dev)->ae_mask;
+
+ dev_warn(&GET_DEV(accel_dev),
+ "Correctable error detected in AE: 0x%x\n",
+ aecorrerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+ /* Clear interrupt from ERRSOU0 */
+ ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOG_CPP0, aecorrerr);
+}
+
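+/*
+ * Each ERRSOU1 handler below checks its source bit, reads and masks the
+ * related error log register, reports the error, updates the RAS counters
+ * and clears the logged status. The return value indicates whether a
+ * device reset is required.
+ */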
+static bool adf_handle_cpp_aeunc(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 aeuncorerr;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT))
+ return false;
+
+ aeuncorerr = ADF_CSR_RD(csr, ADF_GEN4_HIAEUNCERRLOG_CPP0);
+ aeuncorerr &= GET_HW_DATA(accel_dev)->ae_mask;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error detected in AE: 0x%x\n",
+ aeuncorerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOG_CPP0, aeuncorerr);
+
+ return false;
+}
+
+static bool adf_handle_cppcmdparerr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 cmdparerr;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT))
+ return false;
+
+ cmdparerr = ADF_CSR_RD(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOG);
+ cmdparerr &= err_mask->cppagentcmdpar_mask;
+
+ dev_err(&GET_DEV(accel_dev),
+ "HI CPP agent command parity error: 0x%x\n",
+ cmdparerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOG, cmdparerr);
+
+ return true;
+}
+
+static bool adf_handle_ri_mem_par_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ bool reset_required = false;
+ u32 rimem_parerr_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT))
+ return false;
+
+ rimem_parerr_sts = ADF_CSR_RD(csr, ADF_GEN4_RIMEM_PARERR_STS);
+ rimem_parerr_sts &= ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK |
+ ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK;
+
+ if (rimem_parerr_sts & ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "RI Memory Parity uncorrectable error: 0x%x\n",
+ rimem_parerr_sts);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ if (rimem_parerr_sts & ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "RI Memory Parity fatal error: 0x%x\n",
+ rimem_parerr_sts);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+ reset_required = true;
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_RIMEM_PARERR_STS, rimem_parerr_sts);
+
+ return reset_required;
+}
+
+static bool adf_handle_ti_ci_par_sts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ti_ci_par_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+ return false;
+
+ ti_ci_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_CI_PAR_STS);
+ ti_ci_par_sts &= ADF_GEN4_TI_CI_PAR_STS_BITMASK;
+
+ if (ti_ci_par_sts) {
+ dev_err(&GET_DEV(accel_dev),
+ "TI Memory Parity Error: 0x%x\n", ti_ci_par_sts);
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_STS, ti_ci_par_sts);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ return false;
+}
+
+static bool adf_handle_ti_pullfub_par_sts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ti_pullfub_par_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+ return false;
+
+ ti_pullfub_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_PULL0FUB_PAR_STS);
+ ti_pullfub_par_sts &= ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK;
+
+ if (ti_pullfub_par_sts) {
+ dev_err(&GET_DEV(accel_dev),
+ "TI Pull Parity Error: 0x%x\n", ti_pullfub_par_sts);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_STS,
+ ti_pullfub_par_sts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ return false;
+}
+
+static bool adf_handle_ti_pushfub_par_sts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ti_pushfub_par_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+ return false;
+
+ ti_pushfub_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_PUSHFUB_PAR_STS);
+ ti_pushfub_par_sts &= ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK;
+
+ if (ti_pushfub_par_sts) {
+ dev_err(&GET_DEV(accel_dev),
+ "TI Push Parity Error: 0x%x\n", ti_pushfub_par_sts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_STS,
+ ti_pushfub_par_sts);
+ }
+
+ return false;
+}
+
+static bool adf_handle_ti_cd_par_sts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ti_cd_par_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+ return false;
+
+ ti_cd_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_CD_PAR_STS);
+ ti_cd_par_sts &= ADF_GEN4_TI_CD_PAR_STS_BITMASK;
+
+ if (ti_cd_par_sts) {
+ dev_err(&GET_DEV(accel_dev),
+ "TI CD Parity Error: 0x%x\n", ti_cd_par_sts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_STS, ti_cd_par_sts);
+ }
+
+ return false;
+}
+
+static bool adf_handle_ti_trnsb_par_sts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ti_trnsb_par_sts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+ return false;
+
+ ti_trnsb_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_TRNSB_PAR_STS);
+ ti_trnsb_par_sts &= ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK;
+
+ if (ti_trnsb_par_sts) {
+ dev_err(&GET_DEV(accel_dev),
+ "TI TRNSB Parity Error: 0x%x\n", ti_trnsb_par_sts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_STS, ti_trnsb_par_sts);
+ }
+
+ return false;
+}
+
+static bool adf_handle_iosfp_cmd_parerr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 rimiscsts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU1_RIMISCSTS_BIT))
+ return false;
+
+ rimiscsts = ADF_CSR_RD(csr, ADF_GEN4_RIMISCSTS);
+ rimiscsts &= ADF_GEN4_RIMISCSTS_BIT;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Command Parity error detected on IOSFP: 0x%x\n",
+ rimiscsts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_RIMISCSTS, rimiscsts);
+
+ return true;
+}
+
+static void adf_gen4_process_errsou1(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou,
+ bool *reset_required)
+{
+ *reset_required |= adf_handle_cpp_aeunc(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_cppcmdparerr(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ri_mem_par_err(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ti_ci_par_sts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ti_pullfub_par_sts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ti_pushfub_par_sts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ti_cd_par_sts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ti_trnsb_par_sts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_iosfp_cmd_parerr(accel_dev, csr, errsou);
+}
+
+static bool adf_handle_uerrssmsh(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ u32 reg;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_UERRSSMSH);
+ reg &= ADF_GEN4_UERRSSMSH_BITMASK;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error on ssm shared memory: 0x%x\n",
+ reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_UERRSSMSH, reg);
+
+ return false;
+}
+
+static bool adf_handle_cerrssmsh(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ u32 reg;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_CERRSSMSH);
+ reg &= ADF_GEN4_CERRSSMSH_ERROR_BIT;
+
+ dev_warn(&GET_DEV(accel_dev),
+ "Correctable error on ssm shared memory: 0x%x\n",
+ reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_CERRSSMSH, reg);
+
+ return false;
+}
+
+static bool adf_handle_pperr_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ u32 reg;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_PPERR_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_PPERR);
+ reg &= ADF_GEN4_PPERR_BITMASK;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error CPP transaction on memory target: 0x%x\n",
+ reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_PPERR, reg);
+
+ return false;
+}
+
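+/* Report a hang if the given slice hang status register is non zero */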
+static void adf_poll_slicehang_csr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 slice_hang_offset,
+ char *slice_name)
+{
+ u32 slice_hang_reg = ADF_CSR_RD(csr, slice_hang_offset);
+
+ if (!slice_hang_reg)
+ return;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Slice %s hang error encountered\n", slice_name);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+}
+
+static bool adf_handle_slice_hang_error(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT))
+ return false;
+
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_ATH_CPH, "ath_cph");
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_CPR_XLT, "cpr_xlt");
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_DCPR_UCS, "dcpr_ucs");
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_PKE, "pke");
+
+ if (err_mask->parerr_wat_wcp_mask)
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_WAT_WCP,
+ "ath_cph");
+
+ return false;
+}
+
+static bool adf_handle_spp_pullcmd_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ bool reset_required = false;
+ u32 reg;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH);
+ reg &= err_mask->parerr_ath_cph_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull command fatal error ATH_CPH: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT);
+ reg &= err_mask->parerr_cpr_xlt_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull command fatal error CPR_XLT: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS);
+ reg &= err_mask->parerr_dcpr_ucs_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull command fatal error DCPR_UCS: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_PKE);
+ reg &= err_mask->parerr_pke_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull command fatal error PKE: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_PKE, reg);
+
+ reset_required = true;
+ }
+
+ if (err_mask->parerr_wat_wcp_mask) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP);
+ reg &= err_mask->parerr_wat_wcp_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull command fatal error WAT_WCP: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP, reg);
+
+ reset_required = true;
+ }
+ }
+
+ return reset_required;
+}
+
+static bool adf_handle_spp_pulldata_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 reg;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH);
+ reg &= err_mask->parerr_ath_cph_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull data err ATH_CPH: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT);
+ reg &= err_mask->parerr_cpr_xlt_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull data err CPR_XLT: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS);
+ reg &= err_mask->parerr_dcpr_ucs_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull data err DCPR_UCS: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_PKE);
+ reg &= err_mask->parerr_pke_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull data err PKE: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_PKE, reg);
+ }
+
+ if (err_mask->parerr_wat_wcp_mask) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP);
+ reg &= err_mask->parerr_wat_wcp_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP pull data err WAT_WCP: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP, reg);
+ }
+ }
+
+ return false;
+}
+
+static bool adf_handle_spp_pushcmd_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ bool reset_required = false;
+ u32 reg;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH);
+ reg &= err_mask->parerr_ath_cph_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push command fatal error ATH_CPH: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT);
+ reg &= err_mask->parerr_cpr_xlt_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push command fatal error CPR_XLT: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS);
+ reg &= err_mask->parerr_dcpr_ucs_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push command fatal error DCPR_UCS: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS, reg);
+
+ reset_required = true;
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_PKE);
+ reg &= err_mask->parerr_pke_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push command fatal error PKE: 0x%x\n",
+ reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_PKE, reg);
+
+ reset_required = true;
+ }
+
+ if (err_mask->parerr_wat_wcp_mask) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP);
+ reg &= err_mask->parerr_wat_wcp_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push command fatal error WAT_WCP: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP, reg);
+
+ reset_required = true;
+ }
+ }
+
+ return reset_required;
+}
+
+static bool adf_handle_spp_pushdata_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 reg;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH);
+ reg &= err_mask->parerr_ath_cph_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push data err ATH_CPH: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT);
+ reg &= err_mask->parerr_cpr_xlt_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push data err CPR_XLT: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS);
+ reg &= err_mask->parerr_dcpr_ucs_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push data err DCPR_UCS: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_PKE);
+ reg &= err_mask->parerr_pke_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push data err PKE: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_PKE, reg);
+ }
+
+ if (err_mask->parerr_wat_wcp_mask) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP);
+ reg &= err_mask->parerr_wat_wcp_mask;
+ if (reg) {
+ dev_err(&GET_DEV(accel_dev),
+ "SPP push data err WAT_WCP: 0x%x\n", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP,
+ reg);
+ }
+ }
+
+ return false;
+}
+
+static bool adf_handle_spppar_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ bool reset_required;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT))
+ return false;
+
+ reset_required = adf_handle_spp_pullcmd_err(accel_dev, csr);
+ reset_required |= adf_handle_spp_pulldata_err(accel_dev, csr);
+ reset_required |= adf_handle_spp_pushcmd_err(accel_dev, csr);
+ reset_required |= adf_handle_spp_pushdata_err(accel_dev, csr);
+
+ return reset_required;
+}
+
+static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR);
+ u32 bits_num = BITS_PER_REG(reg);
+ bool reset_required = false;
+ unsigned long errs_bits;
+ u32 bit_iterator;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR);
+ reg &= ADF_GEN4_SSMCPPERR_FATAL_BITMASK | ADF_GEN4_SSMCPPERR_UNCERR_BITMASK;
+ if (reg & ADF_GEN4_SSMCPPERR_FATAL_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "Fatal SSM CPP parity error: 0x%x\n", reg);
+
+ errs_bits = reg & ADF_GEN4_SSMCPPERR_FATAL_BITMASK;
+ for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+ }
+ reset_required = true;
+ }
+
+ if (reg & ADF_GEN4_SSMCPPERR_UNCERR_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "non-Fatal SSM CPP parity error: 0x%x\n", reg);
+ errs_bits = reg & ADF_GEN4_SSMCPPERR_UNCERR_BITMASK;
+
+ for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_SSMCPPERR, reg);
+
+ return reset_required;
+}
+
+static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+ u32 reg;
+
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC);
+ reg &= ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH);
+ reg &= err_mask->parerr_ath_cph_mask;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT);
+ reg &= err_mask->parerr_cpr_xlt_mask;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS);
+ reg &= err_mask->parerr_dcpr_ucs_mask;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS, reg);
+ }
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE);
+ reg &= err_mask->parerr_pke_mask;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE, reg);
+ }
+
+ if (err_mask->parerr_wat_wcp_mask) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP);
+ reg &= err_mask->parerr_wat_wcp_mask;
+ if (reg) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP,
+ reg);
+ }
+ }
+
+ dev_err(&GET_DEV(accel_dev), "Slice ssm soft parity error reported");
+
+ return false;
+}
+
+static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+{
+ u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH);
+ u32 bits_num = BITS_PER_REG(reg);
+ bool reset_required = false;
+ unsigned long errs_bits;
+ u32 bit_iterator;
+
+ if (!(iastatssm & (ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT |
+ ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT)))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH);
+ reg &= ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK |
+ ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK |
+ ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK;
+ if (reg & ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "Fatal SER_SSMSH_ERR: 0x%x\n", reg);
+
+ errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK;
+ for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+ }
+
+ reset_required = true;
+ }
+
+ if (reg & ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "non-fatal SER_SSMSH_ERR: 0x%x\n", reg);
+
+ errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK;
+ for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+ }
+
+ if (reg & ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK) {
+ dev_warn(&GET_DEV(accel_dev),
+ "Correctable SER_SSMSH_ERR: 0x%x\n", reg);
+
+ errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK;
+ for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+ }
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_SER_ERR_SSMSH, reg);
+
+ return reset_required;
+}
+
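+/* Handle and clear all SSM error sources latched in IAINTSTATSSM */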
+static bool adf_handle_iaintstatssm(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 iastatssm = ADF_CSR_RD(csr, ADF_GEN4_IAINTSTATSSM);
+ bool reset_required;
+
+ iastatssm &= ADF_GEN4_IAINTSTATSSM_BITMASK;
+ if (!iastatssm)
+ return false;
+
+ reset_required = adf_handle_uerrssmsh(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_cerrssmsh(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_pperr_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_slice_hang_error(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_spppar_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ssmcpppar_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ser_err_ssmsh(accel_dev, csr, iastatssm);
+
+ ADF_CSR_WR(csr, ADF_GEN4_IAINTSTATSSM, iastatssm);
+
+ return reset_required;
+}
+
+static bool adf_handle_exprpssmcmpr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMCPR);
+
+ reg &= ADF_GEN4_EXPRPSSMCPR_UNCERR_BITMASK;
+ if (!reg)
+ return false;
+
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error exception in SSM CMP: 0x%x", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMCPR, reg);
+
+ return false;
+}
+
+static bool adf_handle_exprpssmxlt(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMXLT);
+
+ reg &= ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK |
+ ADF_GEN4_EXPRPSSMXLT_CERR_BIT;
+ if (!reg)
+ return false;
+
+ if (reg & ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error exception in SSM XLT: 0x%x", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ if (reg & ADF_GEN4_EXPRPSSMXLT_CERR_BIT) {
+ dev_warn(&GET_DEV(accel_dev),
+ "Correctable error exception in SSM XLT: 0x%x", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMXLT, reg);
+
+ return false;
+}
+
+static bool adf_handle_exprpssmdcpr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr)
+{
+ u32 reg;
+ int i;
+
+ for (i = 0; i < ADF_GEN4_DCPR_SLICES_NUM; i++) {
+ reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMDCPR(i));
+ reg &= ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK |
+ ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK;
+ if (!reg)
+ continue;
+
+ if (reg & ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK) {
+ dev_err(&GET_DEV(accel_dev),
+ "Uncorrectable error exception in SSM DCMP: 0x%x", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ if (reg & ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK) {
+ dev_warn(&GET_DEV(accel_dev),
+ "Correctable error exception in SSM DCMP: 0x%x", reg);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMDCPR(i), reg);
+ }
+
+ return false;
+}
+
+static bool adf_handle_ssm(struct adf_accel_dev *accel_dev, void __iomem *csr,
+ u32 errsou)
+{
+ bool reset_required;
+
+ if (!(errsou & ADF_GEN4_ERRSOU2_SSM_ERR_BIT))
+ return false;
+
+ reset_required = adf_handle_iaintstatssm(accel_dev, csr);
+ reset_required |= adf_handle_exprpssmcmpr(accel_dev, csr);
+ reset_required |= adf_handle_exprpssmxlt(accel_dev, csr);
+ reset_required |= adf_handle_exprpssmdcpr(accel_dev, csr);
+
+ return reset_required;
+}
+
+static bool adf_handle_cpp_cfc_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ bool reset_required = false;
+ u32 reg;
+
+ if (!(errsou & ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT))
+ return false;
+
+ reg = ADF_CSR_RD(csr, ADF_GEN4_CPP_CFC_ERR_STATUS);
+ if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_DATAPAR_BIT) {
+ dev_err(&GET_DEV(accel_dev),
+ "CPP_CFC_ERR: data parity: 0x%x", reg);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_CMDPAR_BIT) {
+ dev_err(&GET_DEV(accel_dev),
+ "CPP_CFC_ERR: command parity: 0x%x", reg);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ reset_required = true;
+ }
+
+ if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_MERR_BIT) {
+ dev_err(&GET_DEV(accel_dev),
+ "CPP_CFC_ERR: multiple errors: 0x%x", reg);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ reset_required = true;
+ }
+
+ ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_STATUS_CLR,
+ ADF_GEN4_CPP_CFC_ERR_STATUS_CLR_BITMASK);
+
+ return reset_required;
+}
+
+static void adf_gen4_process_errsou2(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou,
+ bool *reset_required)
+{
+ *reset_required |= adf_handle_ssm(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_cpp_cfc_err(accel_dev, csr, errsou);
+}
+
+static bool adf_handle_timiscsts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 timiscsts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_TIMISCSTS_BIT))
+ return false;
+
+ timiscsts = ADF_CSR_RD(csr, ADF_GEN4_TIMISCSTS);
+
+ dev_err(&GET_DEV(accel_dev),
+ "Fatal error in Transmit Interface: 0x%x\n", timiscsts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ return true;
+}
+
+static bool adf_handle_ricppintsts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ricppintsts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK))
+ return false;
+
+ ricppintsts = ADF_CSR_RD(csr, ADF_GEN4_RICPPINTSTS);
+ ricppintsts &= ADF_GEN4_RICPPINTSTS_BITMASK;
+
+ dev_err(&GET_DEV(accel_dev),
+ "RI CPP Uncorrectable Error: 0x%x\n", ricppintsts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_RICPPINTSTS, ricppintsts);
+
+ return false;
+}
+
+static bool adf_handle_ticppintsts(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 ticppintsts;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK))
+ return false;
+
+ ticppintsts = ADF_CSR_RD(csr, ADF_GEN4_TICPPINTSTS);
+ ticppintsts &= ADF_GEN4_TICPPINTSTS_BITMASK;
+
+ dev_err(&GET_DEV(accel_dev),
+ "TI CPP Uncorrectable Error: 0x%x\n", ticppintsts);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_TICPPINTSTS, ticppintsts);
+
+ return false;
+}
+
+static bool adf_handle_aramcerr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 aram_cerr;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT))
+ return false;
+
+ aram_cerr = ADF_CSR_RD(csr, ADF_GEN4_REG_ARAMCERR);
+ aram_cerr &= ADF_GEN4_REG_ARAMCERR_BIT;
+
+ dev_warn(&GET_DEV(accel_dev),
+ "ARAM correctable error : 0x%x\n", aram_cerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+ aram_cerr |= ADF_GEN4_REG_ARAMCERR_EN_BITMASK;
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR, aram_cerr);
+
+ return false;
+}
+
+static bool adf_handle_aramuerr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ bool reset_required = false;
+ u32 aramuerr;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT))
+ return false;
+
+ aramuerr = ADF_CSR_RD(csr, ADF_GEN4_REG_ARAMUERR);
+ aramuerr &= ADF_GEN4_REG_ARAMUERR_ERROR_BIT |
+ ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT;
+
+ if (!aramuerr)
+ return false;
+
+ if (aramuerr & ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT) {
+ dev_err(&GET_DEV(accel_dev),
+ "ARAM multiple uncorrectable errors: 0x%x\n", aramuerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ reset_required = true;
+ } else {
+ dev_err(&GET_DEV(accel_dev),
+ "ARAM uncorrectable error: 0x%x\n", aramuerr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ aramuerr |= ADF_GEN4_REG_ARAMUERR_EN_BITMASK;
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR, aramuerr);
+
+ return reset_required;
+}
+
+static bool adf_handle_reg_cppmemtgterr(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ bool reset_required = false;
+ u32 cppmemtgterr;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT))
+ return false;
+
+ cppmemtgterr = ADF_CSR_RD(csr, ADF_GEN4_REG_CPPMEMTGTERR);
+ cppmemtgterr &= ADF_GEN4_REG_CPPMEMTGTERR_BITMASK |
+ ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT;
+ if (!cppmemtgterr)
+ return false;
+
+ if (cppmemtgterr & ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT) {
+ dev_err(&GET_DEV(accel_dev),
+ "Misc memory target multiple uncorrectable errors: 0x%x\n",
+ cppmemtgterr);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+ reset_required = true;
+ } else {
+ dev_err(&GET_DEV(accel_dev),
+ "Misc memory target uncorrectable error: 0x%x\n", cppmemtgterr);
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ }
+
+ cppmemtgterr |= ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK;
+
+ ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR, cppmemtgterr);
+
+ return reset_required;
+}
+
+static bool adf_handle_atufaultstatus(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 errsou)
+{
+ u32 i;
+ u32 max_rp_num = GET_HW_DATA(accel_dev)->num_banks;
+
+ if (!(errsou & ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT))
+ return false;
+
+ for (i = 0; i < max_rp_num; i++) {
+ u32 atufaultstatus = ADF_CSR_RD(csr, ADF_GEN4_ATUFAULTSTATUS(i));
+
+ atufaultstatus &= ADF_GEN4_ATUFAULTSTATUS_BIT;
+
+ if (atufaultstatus) {
+ dev_err(&GET_DEV(accel_dev),
+ "Ring Pair (%u) ATU detected fault: 0x%x\n", i,
+ atufaultstatus);
+
+ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+ ADF_CSR_WR(csr, ADF_GEN4_ATUFAULTSTATUS(i), atufaultstatus);
+ }
+ }
+
+ return false;
+}
+
+static void adf_gen4_process_errsou3(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, void __iomem *aram_csr,
+ u32 errsou, bool *reset_required)
+{
+ *reset_required |= adf_handle_timiscsts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ricppintsts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_ticppintsts(accel_dev, csr, errsou);
+ *reset_required |= adf_handle_aramcerr(accel_dev, aram_csr, errsou);
+ *reset_required |= adf_handle_aramuerr(accel_dev, aram_csr, errsou);
+ *reset_required |= adf_handle_reg_cppmemtgterr(accel_dev, aram_csr, errsou);
+ *reset_required |= adf_handle_atufaultstatus(accel_dev, csr, errsou);
+}
+
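+/*
+ * Top level RAS interrupt handler: read each ERRSOU register, dispatch to
+ * the per source handlers and report whether any error was handled and
+ * whether a device reset is required.
+ */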
+static bool adf_gen4_handle_interrupt(struct adf_accel_dev *accel_dev,
+ bool *reset_required)
+{
+ void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+ void __iomem *csr = adf_get_pmisc_base(accel_dev);
+ u32 errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU0);
+ bool handled = false;
+
+ *reset_required = false;
+
+ if (errsou & ADF_GEN4_ERRSOU0_BIT) {
+ adf_gen4_process_errsou0(accel_dev, csr);
+ handled = true;
+ }
+
+ errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU1);
+ if (errsou & ADF_GEN4_ERRSOU1_BITMASK) {
+ adf_gen4_process_errsou1(accel_dev, csr, errsou, reset_required);
+ handled = true;
+ }
+
+ errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU2);
+ if (errsou & ADF_GEN4_ERRSOU2_BITMASK) {
+ adf_gen4_process_errsou2(accel_dev, csr, errsou, reset_required);
+ handled = true;
+ }
+
+ errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU3);
+ if (errsou & ADF_GEN4_ERRSOU3_BITMASK) {
+ adf_gen4_process_errsou3(accel_dev, csr, aram_csr, errsou, reset_required);
+ handled = true;
+ }
+
+ return handled;
+}
+
+void adf_gen4_init_ras_ops(struct adf_ras_ops *ras_ops)
+{
+ ras_ops->enable_ras_errors = adf_gen4_enable_ras;
+ ras_ops->disable_ras_errors = adf_gen4_disable_ras;
+ ras_ops->handle_interrupt = adf_gen4_handle_interrupt;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_init_ras_ops);
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_GEN4_RAS_H_
+#define ADF_GEN4_RAS_H_
+
+#include <linux/bits.h>
+
+struct adf_ras_ops;
+
+/* ERRSOU0 Correctable error mask */
+#define ADF_GEN4_ERRSOU0_BIT BIT(0)
+
+/* HI AE Correctable error log */
+#define ADF_GEN4_HIAECORERRLOG_CPP0 0x41A308
+
+/* HI AE Correctable error log enable */
+#define ADF_GEN4_HIAECORERRLOGENABLE_CPP0 0x41A318
+#define ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT BIT(0)
+#define ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT BIT(1)
+#define ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT BIT(2)
+#define ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT BIT(3)
+#define ADF_GEN4_ERRSOU1_RIMISCSTS_BIT BIT(4)
+
+#define ADF_GEN4_ERRSOU1_BITMASK ( \
+ (ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT) | \
+ (ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT) | \
+ (ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT) | \
+ (ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT) | \
+ (ADF_GEN4_ERRSOU1_RIMISCSTS_BIT))
+
+/* HI AE Uncorrectable error log */
+#define ADF_GEN4_HIAEUNCERRLOG_CPP0 0x41A300
+
+/* HI AE Uncorrectable error log enable */
+#define ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0 0x41A320
+
+/* HI CPP Agent Command parity error log */
+#define ADF_GEN4_HICPPAGENTCMDPARERRLOG 0x41A310
+
+/* HI CPP Agent Command parity error logging enable */
+#define ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE 0x41A314
+
+/* RI Memory parity error status register */
+#define ADF_GEN4_RIMEM_PARERR_STS 0x41B128
+
+/* RI Memory parity error reporting enable */
+#define ADF_GEN4_RI_MEM_PAR_ERR_EN0 0x41B12C
+
+/*
+ * RI Memory parity error mask
+ * BIT(0) - BIT(3) - ri_iosf_pdata_rxq[0:3] parity error
+ * BIT(4) - ri_tlq_phdr parity error
+ * BIT(5) - ri_tlq_pdata parity error
+ * BIT(6) - ri_tlq_nphdr parity error
+ * BIT(7) - ri_tlq_npdata parity error
+ * BIT(8) - BIT(9) - ri_tlq_cplhdr[0:1] parity error
+ * BIT(10) - BIT(17) - ri_tlq_cpldata[0:7] parity error
+ * BIT(18) - set this bit to 1 to enable logging status to ri_mem_par_err_sts0
+ * BIT(19) - ri_cds_cmd_fifo parity error
+ * BIT(20) - ri_obc_ricpl_fifo parity error
+ * BIT(21) - ri_obc_tiricpl_fifo parity error
+ * BIT(22) - ri_obc_cppcpl_fifo parity error
+ * BIT(23) - ri_obc_pendcpl_fifo parity error
+ * BIT(24) - ri_cpp_cmd_fifo parity error
+ * BIT(25) - ri_cds_ticmd_fifo parity error
+ * BIT(26) - riti_cmd_fifo parity error
+ * BIT(27) - ri_int_msixtbl parity error
+ * BIT(28) - ri_int_imstbl parity error
+ * BIT(30) - ri_kpt_fuses parity error
+ */
+#define ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(5) | \
+ BIT(7) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | \
+ BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | \
+ BIT(20) | BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25) | \
+ BIT(26) | BIT(27) | BIT(28) | BIT(30))
+
+#define ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK \
+ (BIT(4) | BIT(6) | BIT(8) | BIT(9))
+
+/* TI CI parity status */
+#define ADF_GEN4_TI_CI_PAR_STS 0x50060C
+
+/* TI CI parity reporting mask */
+#define ADF_GEN4_TI_CI_PAR_ERR_MASK 0x500608
+
+/*
+ * TI CI parity status mask
+ * BIT(0) - CdCmdQ_sts parity error status
+ * BIT(1) - CdDataQ_sts parity error status
+ * BIT(3) - CPP_SkidQ_sts parity error status
+ * BIT(7) - CPP_SkidQ_sc_sts parity error status
+ */
+#define ADF_GEN4_TI_CI_PAR_STS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(3) | BIT(7))
+
+/* TI PULLFUB parity status */
+#define ADF_GEN4_TI_PULL0FUB_PAR_STS 0x500618
+
+/* TI PULLFUB parity error reporting mask */
+#define ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK 0x500614
+
+/*
+ * TI PULLFUB parity status mask
+ * BIT(0) - TrnPullReqQ_sts parity status
+ * BIT(1) - TrnSharedDataQ_sts parity status
+ * BIT(2) - TrnPullReqDataQ_sts parity status
+ * BIT(4) - CPP_CiPullReqQ_sts parity status
+ * BIT(5) - CPP_TrnPullReqQ_sts parity status
+ * BIT(6) - CPP_PullidQ_sts parity status
+ * BIT(7) - CPP_WaitDataQ_sts parity status
+ * BIT(8) - CPP_CdDataQ_sts parity status
+ * BIT(9) - CPP_TrnDataQP0_sts parity status
+ * BIT(10) - BIT(11) - CPP_TrnDataQRF[00:01]_sts parity status
+ * BIT(12) - CPP_TrnDataQP1_sts parity status
+ * BIT(13) - BIT(14) - CPP_TrnDataQRF[10:11]_sts parity status
+ */
+#define ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(4) | BIT(5) | BIT(6) | BIT(7) | \
+ BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | BIT(14))
+
+/* TI PUSHFUB parity status */
+#define ADF_GEN4_TI_PUSHFUB_PAR_STS 0x500630
+
+/* TI PUSHFUB parity error reporting mask */
+#define ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK 0x50062C
+
+/*
+ * TI PUSHFUB parity status mask
+ * BIT(0) - SbPushReqQ_sts parity status
+ * BIT(1) - BIT(2) - SbPushDataQ[0:1]_sts parity status
+ * BIT(4) - CPP_CdPushReqQ_sts parity status
+ * BIT(5) - BIT(6) - CPP_CdPushDataQ[0:1]_sts parity status
+ * BIT(7) - CPP_SbPushReqQ_sts parity status
+ * BIT(8) - CPP_SbPushDataQP_sts parity status
+ * BIT(9) - BIT(10) - CPP_SbPushDataQRF[0:1]_sts parity status
+ */
+#define ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(4) | BIT(5) | \
+ BIT(6) | BIT(7) | BIT(8) | BIT(9) | BIT(10))
+
+/* TI CD parity status */
+#define ADF_GEN4_TI_CD_PAR_STS 0x50063C
+
+/* TI CD parity error mask */
+#define ADF_GEN4_TI_CD_PAR_ERR_MASK 0x500638
+
+/*
+ * TI CD parity status mask
+ * BIT(0) - BIT(15) - CtxMdRam[0:15]_sts parity status
+ * BIT(16) - Leaf2ClusterRam_sts parity status
+ * BIT(17) - BIT(18) - Ring2LeafRam[0:1]_sts parity status
+ * BIT(19) - VirtualQ_sts parity status
+ * BIT(20) - DtRdQ_sts parity status
+ * BIT(21) - DtWrQ_sts parity status
+ * BIT(22) - RiCmdQ_sts parity status
+ * BIT(23) - BypassQ_sts parity status
+ * BIT(24) - DtRdQ_sc_sts parity status
+ * BIT(25) - DtWrQ_sc_sts parity status
+ */
+#define ADF_GEN4_TI_CD_PAR_STS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6) | \
+ BIT(7) | BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | \
+ BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | \
+ BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25))
+
+/* TI TRNSB parity status */
+#define ADF_GEN4_TI_TRNSB_PAR_STS 0x500648
+
+/* TI TRNSB Parity error reporting mask */
+#define ADF_GEN4_TI_TRNSB_PAR_ERR_MASK 0x500644
+
+/*
+ * TI TRNSB parity status mask
+ * BIT(0) - TrnPHdrQP_sts parity status
+ * BIT(1) - TrnPHdrQRF_sts parity status
+ * BIT(2) - TrnPDataQP_sts parity status
+ * BIT(3) - BIT(6) - TrnPDataQRF[0:3]_sts parity status
+ * BIT(7) - TrnNpHdrQP_sts parity status
+ * BIT(8) - BIT(9) - TrnNpHdrQRF[0:1]_sts parity status
+ * BIT(10) - TrnCplHdrQ_sts parity status
+ * BIT(11) - TrnPutObsReqQ_sts parity status
+ * BIT(12) - TrnPushReqQ_sts parity status
+ * BIT(13) - SbSplitIdRam_sts parity status
+ * BIT(14) - SbReqCountQ_sts parity status
+ * BIT(15) - SbCplTrkRam_sts parity status
+ * BIT(16) - SbGetObsReqQ_sts parity status
+ * BIT(17) - SbEpochIdQ_sts parity status
+ * BIT(18) - SbAtCplHdrQ_sts parity status
+ * BIT(19) - SbAtCplDataQ_sts parity status
+ * BIT(20) - SbReqCountRam_sts parity status
+ * BIT(21) - SbAtCplHdrQ_sc_sts parity status
+ */
+#define ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6) | \
+ BIT(7) | BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | \
+ BIT(13) | BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | \
+ BIT(19) | BIT(20) | BIT(21))
+
+/* Status register to log misc error on RI */
+#define ADF_GEN4_RIMISCSTS 0x41B1B8
+
+/* Status control register to log misc RI error */
+#define ADF_GEN4_RIMISCCTL 0x41B1BC
+
+/*
+ * ERRSOU2 bit mask
+ * BIT(0) - SSM Interrupt Mask
+ * BIT(1) - CFC on CPP. ORed of CFC Push error and Pull error
+ * BIT(2) - BIT(4) - CPP attention interrupts, deprecated on gen4 devices
+ * BIT(18) - PM interrupt
+ */
+#define ADF_GEN4_ERRSOU2_SSM_ERR_BIT BIT(0)
+#define ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT BIT(1)
+#define ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK \
+ (BIT(2) | BIT(3) | BIT(4))
+
+#define ADF_GEN4_ERRSOU2_PM_INT_BIT BIT(18)
+
+#define ADF_GEN4_ERRSOU2_BITMASK \
+ (ADF_GEN4_ERRSOU2_SSM_ERR_BIT | \
+ ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT)
+
+#define ADF_GEN4_ERRSOU2_DIS_BITMASK \
+ (ADF_GEN4_ERRSOU2_SSM_ERR_BIT | \
+ ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT | \
+ ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK)
+
+#define ADF_GEN4_IAINTSTATSSM 0x28
+
+/* IAINTSTATSSM error bit mask definitions */
+#define ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT BIT(0)
+#define ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT BIT(1)
+#define ADF_GEN4_IAINTSTATSSM_PPERR_BIT BIT(2)
+#define ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT BIT(3)
+#define ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT BIT(4)
+#define ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT BIT(5)
+#define ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT BIT(6)
+#define ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT BIT(7)
+#define ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT BIT(8)
+
+#define ADF_GEN4_IAINTSTATSSM_BITMASK \
+ (ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT | \
+ ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT | \
+ ADF_GEN4_IAINTSTATSSM_PPERR_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT | \
+ ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT)
+
+#define ADF_GEN4_UERRSSMSH 0x18
+
+/*
+ * UERRSSMSH error bit masks definitions
+ *
+ * BIT(0) - Indicates one uncorrectable error
+ * BIT(15) - Indicates multiple uncorrectable errors
+ * in device shared memory
+ */
+#define ADF_GEN4_UERRSSMSH_BITMASK (BIT(0) | BIT(15))
+
+#define ADF_GEN4_UERRSSMSHAD 0x1C
+
+#define ADF_GEN4_CERRSSMSH 0x10
+
+/*
+ * CERRSSMSH error bit
+ * BIT(0) - Indicates one correctable error
+ */
+#define ADF_GEN4_CERRSSMSH_ERROR_BIT BIT(0)
+
+#define ADF_GEN4_CERRSSMSHAD 0x14
+
+/* SSM error handling features enable register */
+#define ADF_GEN4_SSMFEATREN 0x198
+
+/*
+ * Disable SSM error detection and reporting features
+ * enabled by the device driver on RAS initialization
+ *
+ * The following bits should be cleared:
+ * BIT(4) - Disable CPP parity
+ * BIT(12) - Disable logging of push/pull data errors in the pperr register
+ * BIT(16) - BIT(23) - Disable parity for SPPs
+ * BIT(24) - BIT(27) - Disable parity for SPPs, if supported on the device
+ */
+#define ADF_GEN4_SSMFEATREN_DIS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(5) | BIT(6) | BIT(7) | \
+ BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(13) | BIT(14) | BIT(15))
+
+#define ADF_GEN4_INTMASKSSM 0x0
+
+/*
+ * Error reporting mask in INTMASKSSM
+ * BIT(0) - Shared memory uncorrectable interrupt mask
+ * BIT(1) - Shared memory correctable interrupt mask
+ * BIT(2) - PPERR interrupt mask
+ * BIT(3) - CPP parity error Interrupt mask
+ * BIT(4) - SSM interrupt generated by SER correctable error mask
+ * BIT(5) - SSM interrupt generated by SER uncorrectable error
+ * - not stop and scream - mask
+ */
+#define ADF_GEN4_INTMASKSSM_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5))
+
+/* CPP push or pull error */
+#define ADF_GEN4_PPERR 0x8
+
+#define ADF_GEN4_PPERR_BITMASK (BIT(0) | BIT(1))
+
+#define ADF_GEN4_PPERRID 0xC
+
+/* Slice hang handling related registers */
+#define ADF_GEN4_SLICEHANGSTATUS_ATH_CPH 0x84
+#define ADF_GEN4_SLICEHANGSTATUS_CPR_XLT 0x88
+#define ADF_GEN4_SLICEHANGSTATUS_DCPR_UCS 0x90
+#define ADF_GEN4_SLICEHANGSTATUS_WAT_WCP 0x8C
+#define ADF_GEN4_SLICEHANGSTATUS_PKE 0x94
+
+#define ADF_GEN4_SHINTMASKSSM_ATH_CPH 0xF0
+#define ADF_GEN4_SHINTMASKSSM_CPR_XLT 0xF4
+#define ADF_GEN4_SHINTMASKSSM_DCPR_UCS 0xFC
+#define ADF_GEN4_SHINTMASKSSM_WAT_WCP 0xF8
+#define ADF_GEN4_SHINTMASKSSM_PKE 0x100
+
+/* SPP pull cmd parity err_*slice* CSR */
+#define ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH 0x1A4
+#define ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT 0x1A8
+#define ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS 0x1B0
+#define ADF_GEN4_SPPPULLCMDPARERR_PKE 0x1B4
+#define ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP 0x1AC
+
+/* SPP pull data parity err_*slice* CSR */
+#define ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH 0x1BC
+#define ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT 0x1C0
+#define ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS 0x1C8
+#define ADF_GEN4_SPPPULLDATAPARERR_PKE 0x1CC
+#define ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP 0x1C4
+
+/* SPP push cmd parity err_*slice* CSR */
+#define ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH 0x1D4
+#define ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT 0x1D8
+#define ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS 0x1E0
+#define ADF_GEN4_SPPPUSHCMDPARERR_PKE 0x1E4
+#define ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP 0x1DC
+
+/* SPP push data parity err_*slice* CSR */
+#define ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH 0x1EC
+#define ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT 0x1F0
+#define ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS 0x1F8
+#define ADF_GEN4_SPPPUSHDATAPARERR_PKE 0x1FC
+#define ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP 0x1F4
+
+/* Accelerator SPP parity error mask registers */
+#define ADF_GEN4_SPPPARERRMSK_ATH_CPH 0x204
+#define ADF_GEN4_SPPPARERRMSK_CPR_XLT 0x208
+#define ADF_GEN4_SPPPARERRMSK_DCPR_UCS 0x210
+#define ADF_GEN4_SPPPARERRMSK_PKE 0x214
+#define ADF_GEN4_SPPPARERRMSK_WAT_WCP 0x20C
+
+#define ADF_GEN4_SSMCPPERR 0x224
+
+/*
+ * Uncorrectable error mask in SSMCPPERR
+ * BIT(0) - indicates CPP command parity error
+ * BIT(1) - indicates CPP Main Push PPID parity error
+ * BIT(2) - indicates CPP Main ePPID parity error
+ * BIT(3) - indicates CPP Main push data parity error
+ * BIT(4) - indicates CPP Main Pull PPID parity error
+ * BIT(5) - indicates CPP target pull data parity error
+ */
+#define ADF_GEN4_SSMCPPERR_FATAL_BITMASK \
+ (BIT(0) | BIT(1) | BIT(4))
+
+#define ADF_GEN4_SSMCPPERR_UNCERR_BITMASK \
+ (BIT(2) | BIT(3) | BIT(5))
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_SRC 0x9C
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC 0xB8
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH 0xA0
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH 0xBC
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT 0xA4
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT 0xC0
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS 0xAC
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS 0xC8
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_PKE 0xB0
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE 0xCC
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP 0xA8
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP 0xC4
+
+/* RF parity error detected in SharedRAM */
+#define ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT BIT(0)
+
+#define ADF_GEN4_SER_ERR_SSMSH 0x44C
+
+/*
+ * Fatal error mask in SER_ERR_SSMSH
+ * BIT(0) - Indicates an uncorrectable error has occurred in the
+ * accelerator controller command RFs
+ * BIT(2) - Parity error occurred in the bank SPP fifos
+ * BIT(3) - Indicates Parity error occurred in following fifos in
+ * the design
+ * BIT(4) - Parity error occurred in flops in the design
+ * BIT(5) - Uncorrectable error has occurred in the
+ * target push and pull data register flop
+ * BIT(7) - Indicates Parity error occurred in the Resource Manager
+ * pending lock request fifos
+ * BIT(8) - Indicates Parity error occurred in the Resource Manager
+ * MECTX command queues logic
+ * BIT(9) - Indicates Parity error occurred in the Resource Manager
+ * MECTX sigdone fifo flops
+ * BIT(10) - Indicates an uncorrectable error has occurred in the
+ * Resource Manager MECTX command RFs
+ * BIT(14) - Parity error occurred in Buffer Manager sigdone FIFO
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK \
+ (BIT(0) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(7) | \
+ BIT(8) | BIT(9) | BIT(10) | BIT(14))
+
+/*
+ * Uncorrectable error mask in SER_ERR_SSMSH
+ * BIT(12) Parity error occurred in Buffer Manager pool 0
+ * BIT(13) Parity error occurred in Buffer Manager pool 1
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK \
+ (BIT(12) | BIT(13))
+
+/*
+ * Correctable error mask in SER_ERR_SSMSH
+ * BIT(1) - Indicates a correctable Error has occurred
+ * in the slice controller command RFs
+ * BIT(6) - Indicates a correctable Error has occurred in
+ * the target push and pull data RFs
+ * BIT(11) - Indicates a correctable Error has occurred in
+ * the Resource Manager MECTX command RFs
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK \
+ (BIT(1) | BIT(6) | BIT(11))
+
+/* SSM shared memory SER error reporting mask */
+#define ADF_GEN4_SER_EN_SSMSH 0x450
+
+/*
+ * SSM SER error reporting mask in SER_en_err_ssmsh
+ * BIT(0) - Enables uncorrectable Error detection in :
+ * 1) slice controller command RFs.
+ * 2) target push/pull data registers
+ * BIT(1) - Enables correctable Error detection in :
+ * 1) slice controller command RFs
+ * 2) target push/pull data registers
+ * BIT(2) - Enables Parity error detection in
+ * 1) bank SPP fifos
+ * 2) gen4_pull_id_queue
+ * 3) gen4_push_id_queue
+ * 4) AE_pull_sigdn_fifo
+ * 5) DT_push_sigdn_fifo
+ * 6) slx_push_sigdn_fifo
+ * 7) secure_push_cmd_fifo
+ * 8) secure_pull_cmd_fifo
+ * 9) Head register in FIFO wrapper
+ * 10) current_cmd in individual push queue
+ * 11) current_cmd in individual pull queue
+ * 12) push_command_rxp arbitrated in ssm_push_cmd_queues
+ * 13) pull_command_rxp arbitrated in ssm_pull_cmd_queues
+ * BIT(3) - Enables uncorrectable Error detection in
+ * the resource manager mectx cmd RFs.
+ * BIT(4) - Enables correctable error detection in the Resource Manager
+ * mectx command RFs
+ * BIT(5) - Enables Parity error detection in
+ * 1) resource manager lock request fifo
+ * 2) mectx cmdqueues logic
+ * 3) mectx sigdone fifo
+ * BIT(6) - Enables Parity error detection in Buffer Manager pools
+ * and sigdone fifo
+ */
+#define ADF_GEN4_SER_EN_SSMSH_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6))
+
+#define ADF_GEN4_CPP_CFC_ERR_STATUS 0x640C04
+
+/*
+ * BIT(1) - Indicates multiple CPP CFC errors
+ * BIT(7) - Indicates CPP CFC command parity error type
+ * BIT(8) - Indicates CPP CFC data parity error type
+ */
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_MERR_BIT BIT(1)
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CMDPAR_BIT BIT(7)
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_DATAPAR_BIT BIT(8)
+
+/*
+ * BIT(0) - Enables CFC to detect and log push/pull data error
+ * BIT(1) - Enables CFC to generate interrupt to PCIEP for CPP error
+ * BIT(4) - When 1 Parity detection is disabled
+ * BIT(5) - When 1 Parity detection is disabled on CPP command bus
+ * BIT(6) - When 1 Parity detection is disabled on CPP push/pull bus
+ * BIT(9) - When 1 RF parity error detection is disabled
+ */
+#define ADF_GEN4_CPP_CFC_ERR_CTRL_BITMASK (BIT(0) | BIT(1))
+
+#define ADF_GEN4_CPP_CFC_ERR_CTRL_DIS_BITMASK \
+ (BIT(4) | BIT(5) | BIT(6) | BIT(9) | BIT(10))
+
+#define ADF_GEN4_CPP_CFC_ERR_CTRL 0x640C00
+
+/*
+ * BIT(0) - Clears bit(0) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ * when an error is reported on CPP
+ * BIT(1) - Clears bit(1) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ * when multiple errors are reported on CPP
+ * BIT(2) - Clears bit(2) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ * when attention interrupt is reported
+ */
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CLR_BITMASK (BIT(0) | BIT(1) | BIT(2))
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CLR 0x640C08
+
+#define ADF_GEN4_CPP_CFC_ERR_PPID_LO 0x640C0C
+#define ADF_GEN4_CPP_CFC_ERR_PPID_HI 0x640C10
+
+/* Exception reporting in QAT SSM CMP */
+#define ADF_GEN4_EXPRPSSMCPR 0x2000
+
+/*
+ * Uncorrectable error mask in EXPRPSSMCPR
+ * BIT(2) - Hard fatal error
+ * BIT(16) - Parity error detected in CPR Push FIFO
+ * BIT(17) - Parity error detected in CPR Pull FIFO
+ * BIT(18) - Parity error detected in CPR Hash Table
+ * BIT(19) - Parity error detected in CPR History Buffer Copy 0
+ * BIT(20) - Parity error detected in CPR History Buffer Copy 1
+ * BIT(21) - Parity error detected in CPR History Buffer Copy 2
+ * BIT(22) - Parity error detected in CPR History Buffer Copy 3
+ * BIT(23) - Parity error detected in CPR History Buffer Copy 4
+ * BIT(24) - Parity error detected in CPR History Buffer Copy 5
+ * BIT(25) - Parity error detected in CPR History Buffer Copy 6
+ * BIT(26) - Parity error detected in CPR History Buffer Copy 7
+ */
+#define ADF_GEN4_EXPRPSSMCPR_UNCERR_BITMASK \
+ (BIT(2) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | \
+ BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25) | BIT(26))
+
+/* Exception reporting in QAT SSM XLT */
+#define ADF_GEN4_EXPRPSSMXLT 0xA000
+
+/*
+ * Uncorrectable error mask in EXPRPSSMXLT
+ * BIT(2) - If set, an Uncorrectable Error event occurred
+ * BIT(16) - Parity error detected in XLT Push FIFO
+ * BIT(17) - Parity error detected in XLT Pull FIFO
+ * BIT(18) - Parity error detected in XLT HCTB0
+ * BIT(19) - Parity error detected in XLT HCTB1
+ * BIT(20) - Parity error detected in XLT HCTB2
+ * BIT(21) - Parity error detected in XLT HCTB3
+ * BIT(22) - Parity error detected in XLT CBCL
+ * BIT(23) - Parity error detected in XLT LITPTR
+ */
+#define ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK \
+ (BIT(2) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | BIT(21) | \
+ BIT(22) | BIT(23))
+
+/*
+ * Correctable error mask in EXPRPSSMXLT
+ * BIT(3) - Correctable error event occurred.
+ */
+#define ADF_GEN4_EXPRPSSMXLT_CERR_BIT BIT(3)
+
+/* Exception reporting in QAT SSM DCMP */
+#define ADF_GEN4_EXPRPSSMDCPR(_n_) (0x12000 + (_n_) * 0x80)
+
+/*
+ * Uncorrectable error mask in EXPRPSSMDCPR
+ * BIT(2) - Even hard fatal error
+ * BIT(4) - Odd hard fatal error
+ * BIT(6) - decode soft error
+ * BIT(16) - Parity error detected in CPR Push FIFO
+ * BIT(17) - Parity error detected in CPR Pull FIFO
+ * BIT(18) - Parity error detected in the Input Buffer
+ * BIT(19) - symbuf0parerr
+ * Parity error detected in CPR Push FIFO
+ * BIT(20) - symbuf1parerr
+ * Parity error detected in CPR Push FIFO
+ */
+#define ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK \
+ (BIT(2) | BIT(4) | BIT(6) | BIT(16) | BIT(17) | \
+ BIT(18) | BIT(19) | BIT(20))
+
+/*
+ * Correctable error mask in EXPRPSSMDCPR
+ * BIT(3) - Even ecc correctable error
+ * BIT(5) - Odd ecc correctable error
+ */
+#define ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK (BIT(3) | BIT(5))
+
+#define ADF_GEN4_DCPR_SLICES_NUM 3
+
+/*
+ * ERRSOU3 bit masks
+ * BIT(0) - indicates error Response Order Overflow and/or BME error
+ * BIT(1) - indicates RI push/pull error
+ * BIT(2) - indicates TI push/pull error
+ * BIT(3) - indicates ARAM correctable error
+ * BIT(4) - indicates ARAM uncorrectable error
+ * BIT(5) - indicates TI pull parity error
+ * BIT(6) - indicates RI push parity error
+ * BIT(7) - indicates VFLR interrupt
+ * BIT(8) - indicates ring pair interrupts for ATU detected fault
+ * BIT(9) - indicates error when accessing RLT block
+ */
+#define ADF_GEN4_ERRSOU3_TIMISCSTS_BIT BIT(0)
+#define ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK (BIT(1) | BIT(6))
+#define ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK (BIT(2) | BIT(5))
+#define ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT BIT(3)
+#define ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT BIT(4)
+#define ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT BIT(7)
+#define ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT BIT(8)
+#define ADF_GEN4_ERRSOU3_RLTERROR_BIT BIT(9)
+
+#define ADF_GEN4_ERRSOU3_BITMASK ( \
+ (ADF_GEN4_ERRSOU3_TIMISCSTS_BIT) | \
+ (ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK) | \
+ (ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK) | \
+ (ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT) | \
+ (ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT) | \
+ (ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT) | \
+ (ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT) | \
+ (ADF_GEN4_ERRSOU3_RLTERROR_BIT))
+
+/* TI Misc status register */
+#define ADF_GEN4_TIMISCSTS 0x50054C
+
+/* TI Misc error reporting mask */
+#define ADF_GEN4_TIMISCCTL 0x500548
+
+/*
+ * TI Misc error reporting control mask
+ * BIT(0) - Enables error detection and logging in TIMISCSTS register
+ * BIT(1) - Has effect only when SRIOV is enabled; this bit is 0 by default
+ * BIT(2) - Enables the D-F-x counter within the dispatch arbiter
+ * to start based on the triggered command
+ * BIT(30) - Disables VFLR functionality;
+ * setting this bit reverts to CPM1.x functionality
+ * The values of bits 1, 2 and 30 should be preserved and are not meant to
+ * be changed within RAS.
+ */
+#define ADF_GEN4_TIMISCCTL_BIT BIT(0)
+#define ADF_GEN4_TIMSCCTL_RELAY_BITMASK (BIT(1) | BIT(2) | BIT(30))
+
+/* RI CPP interface status register */
+#define ADF_GEN4_RICPPINTSTS 0x41A330
+
+/*
+ * Uncorrectable error mask in RICPPINTSTS register
+ * BIT(0) - RI asserted the CPP error signal during a push
+ * BIT(1) - RI detected the CPP error signal asserted during a pull
+ * BIT(2) - RI detected a push data parity error
+ * BIT(3) - RI detected a push valid parity error
+ */
+#define ADF_GEN4_RICPPINTSTS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3))
+
+/* RI CPP interface status register control */
+#define ADF_GEN4_RICPPINTCTL 0x41A32C
+
+/*
+ * Control bit mask for RICPPINTCTL register
+ * BIT(0) - value of 1 enables error detection and reporting
+ * on the RI CPP Push interface
+ * BIT(1) - value of 1 enables error detection and reporting
+ * on the RI CPP Pull interface
+ * BIT(2) - value of 1 enables error detection and reporting
+ * on the RI Parity
+ * BIT(3) - value of 1 enable checking parity on CPP
+ * BIT(4) - value of 1 enables the stop feature of the stop and stream
+ * for all RI CPP Command RFs
+ */
+#define ADF_GEN4_RICPPINTCTL_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4))
+
+/* Push ID of the command which triggered the transaction error on RI */
+#define ADF_GEN4_RIERRPUSHID 0x41A334
+
+/* Pull ID of the command which triggered the transaction error on RI */
+#define ADF_GEN4_RIERRPULLID 0x41A338
+
+/* TI CPP interface status register */
+#define ADF_GEN4_TICPPINTSTS 0x50053C
+
+/*
+ * Uncorrectable error mask in TICPPINTSTS register
+ * BIT(0) - value of 1 indicates that the TI asserted
+ * the CPP error signal during a push
+ * BIT(1) - value of 1 indicates that the TI detected
+ * the CPP error signal asserted during a pull
+ * BIT(2) - value of 1 indicates that the TI detected
+ * a pull data parity error
+ */
+#define ADF_GEN4_TICPPINTSTS_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2))
+
+/* TI CPP interface status register control */
+#define ADF_GEN4_TICPPINTCTL 0x500538
+
+/*
+ * Control bit mask for TICPPINTCTL register
+ * BIT(0) - value of 1 enables error detection and reporting on
+ * the TI CPP Push interface
+ * BIT(1) - value of 1 enables error detection and reporting on
+ * the TI CPP Pull interface
+ * BIT(2) - value of 1 enables parity error detection and logging on
+ * the TI CPP Pull interface
+ * BIT(3) - value of 1 enables CPP CMD and Pull Data parity checking
+ * BIT(4) - value of 1 enables TI stop part of stop and scream mode on
+ * CPP/RF Parity error
+ */
+#define ADF_GEN4_TICPPINTCTL_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4))
+
+/* Push ID of the command which triggered the transaction error on TI */
+#define ADF_GEN4_TIERRPUSHID 0x500540
+
+/* Pull ID of the command which triggered the transaction error on TI */
+#define ADF_GEN4_TIERRPULLID 0x500544
+
+/* Correctable error in ARAM agent register */
+#define ADF_GEN4_REG_ARAMCERR 0x1700
+
+#define ADF_GEN4_REG_ARAMCERR_BIT BIT(0)
+
+/*
+ * Correctable error enablement in ARAM bit mask
+ * BIT(3) - enable ARAM RAM to fix and log correctable error
+ * BIT(26) - enables ARAM agent to generate interrupt for correctable error
+ */
+#define ADF_GEN4_REG_ARAMCERR_EN_BITMASK (BIT(3) | BIT(26))
+
+/* Correctable error address in ARAM agent register */
+#define ADF_GEN4_REG_ARAMCERRAD 0x1708
+
+/* Uncorrectable error in ARAM agent register */
+#define ADF_GEN4_REG_ARAMUERR 0x1704
+
+/*
+ * ARAM error bit mask
+ * BIT(0) - indicates an error logged in ARAMCERR or ARAMUERR
+ * BIT(18) - indicates uncorrectable multiple errors in ARAM agent
+ */
+#define ADF_GEN4_REG_ARAMUERR_ERROR_BIT BIT(0)
+#define ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT BIT(18)
+
+/*
+ * Uncorrectable error enablement in ARAM bit mask
+ * BIT(3) - enable ARAM RAM to fix and log uncorrectable error
+ * BIT(19) - enables ARAM agent to generate interrupt for uncorrectable error
+ */
+#define ADF_GEN4_REG_ARAMUERR_EN_BITMASK (BIT(3) | BIT(19))
+
+/* Uncorrectable error address in ARAM agent register */
+#define ADF_GEN4_REG_ARAMUERRAD 0x170C
+
+/* Uncorrectable error transaction push/pull ID registers */
+#define ADF_GEN4_REG_ERRPPID_LO 0x1714
+#define ADF_GEN4_REG_ERRPPID_HI 0x1718
+
+/* ARAM ECC block error enablement */
+#define ADF_GEN4_REG_ARAMCERRUERR_EN 0x1808
+
+/*
+ * ARAM ECC block error control bit masks
+ * BIT(0) - enable ARAM CD ECC block error detecting
+ * BIT(1) - enable ARAM pull request ECC error detecting
+ * BIT(2) - enable ARAM command dispatch ECC error detecting
+ * BIT(3) - enable ARAM read datapath push ECC error detecting
+ * BIT(4) - enable ARAM read datapath pull ECC error detecting
+ * BIT(5) - enable ARAM RMW ECC error detecting
+ * BIT(6) - enable ARAM write datapath RMW ECC error detecting
+ * BIT(7) - enable ARAM write datapath ECC error detecting
+ */
+#define ADF_GEN4_REG_ARAMCERRUERR_EN_BITMASK \
+ (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | \
+ BIT(5) | BIT(6) | BIT(7))
+
+/* ARAM misc memory target error registers */
+#define ADF_GEN4_REG_CPPMEMTGTERR 0x1710
+
+/*
+ * ARAM misc memory target error bit masks
+ * BIT(0) - indicates an error in ARAM target memory
+ * BIT(1) - indicates multiple errors in ARAM target memory
+ * BIT(4) - indicates pull error in ARAM target memory
+ * BIT(5) - indicates parity pull error in ARAM target memory
+ * BIT(6) - indicates push error in ARAM target memory
+ */
+#define ADF_GEN4_REG_CPPMEMTGTERR_BITMASK \
+ (BIT(0) | BIT(4) | BIT(5) | BIT(6))
+
+#define ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT BIT(1)
+
+/*
+ * ARAM misc memory target error enablement mask
+ * BIT(2) - enables CPP memory to detect and log push/pull data error
+ * BIT(7) - enables push/pull error to generate interrupts to RI
+ * BIT(8) - enables ARAM to check parity on pull data and CPP command buses
+ * BIT(9) - enables ARAM to autopush to AE when push/parity error is detected
+ * on lookaside DT
+ */
+#define ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK \
+ (BIT(2) | BIT(7) | BIT(8) | BIT(9))
+
+/* ATU fault status register */
+#define ADF_GEN4_ATUFAULTSTATUS(i) (0x506000 + ((i) * 0x4))
+
+#define ADF_GEN4_ATUFAULTSTATUS_BIT BIT(0)
+
+/* Command Parity error detected on IOSFP Command to QAT */
+#define ADF_GEN4_RIMISCSTS_BIT BIT(0)
+
+void adf_gen4_init_ras_ops(struct adf_ras_ops *ras_ops);
+
+#endif /* ADF_GEN4_RAS_H_ */
#include <linux/slab.h>
#include <linux/workqueue.h>
+#include "adf_admin.h"
#include "adf_accel_devices.h"
#include "adf_common_drv.h"
#include "adf_gen4_timer.h"
#include <linux/types.h>
#include <asm/errno.h>
#include "adf_accel_devices.h"
+#include "adf_admin.h"
#include "adf_cfg.h"
#include "adf_cfg_strings.h"
#include "adf_clock.h"
#include <linux/kernel.h>
#include <linux/kstrtox.h>
#include <linux/types.h>
+#include "adf_admin.h"
#include "adf_cfg.h"
#include "adf_common_drv.h"
#include "adf_heartbeat.h"
#include "adf_common_drv.h"
#include "adf_dbgfs.h"
#include "adf_heartbeat.h"
+#include "adf_rl.h"
+#include "adf_sysfs_ras_counters.h"
static LIST_HEAD(service_table);
static DEFINE_MUTEX(service_lock);
static int adf_dev_init(struct adf_accel_dev *accel_dev)
{
struct service_hndl *service;
- struct list_head *list_itr;
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
int ret;
return -EFAULT;
}
+ if (hw_data->get_ring_to_svc_map)
+ hw_data->ring_to_svc_map = hw_data->get_ring_to_svc_map(accel_dev);
+
if (adf_ae_init(accel_dev)) {
dev_err(&GET_DEV(accel_dev),
"Failed to initialise Acceleration Engine\n");
}
set_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status);
+ if (hw_data->ras_ops.enable_ras_errors)
+ hw_data->ras_ops.enable_ras_errors(accel_dev);
+
hw_data->enable_ints(accel_dev);
hw_data->enable_error_correction(accel_dev);
}
adf_heartbeat_init(accel_dev);
+ ret = adf_rl_init(accel_dev);
+ if (ret && ret != -EOPNOTSUPP)
+ return ret;
/*
* Subservice initialisation is divided into two stages: init and start.
* This is to facilitate any ordering dependencies between services
* prior to starting any of the accelerators.
*/
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (service->event_hld(accel_dev, ADF_EVENT_INIT)) {
dev_err(&GET_DEV(accel_dev),
"Failed to initialise service %s\n",
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
struct service_hndl *service;
- struct list_head *list_itr;
int ret;
set_bit(ADF_STATUS_STARTING, &accel_dev->status);
}
adf_heartbeat_start(accel_dev);
+ ret = adf_rl_start(accel_dev);
+ if (ret && ret != -EOPNOTSUPP)
+ return ret;
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (service->event_hld(accel_dev, ADF_EVENT_START)) {
dev_err(&GET_DEV(accel_dev),
"Failed to start service %s\n",
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
return -EFAULT;
}
+ set_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status);
if (!list_empty(&accel_dev->compression_list) && qat_comp_algs_register()) {
dev_err(&GET_DEV(accel_dev),
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
return -EFAULT;
}
+ set_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status);
adf_dbgfs_add(accel_dev);
+ adf_sysfs_start_ras(accel_dev);
return 0;
}
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
struct service_hndl *service;
- struct list_head *list_itr;
bool wait = false;
int ret;
!test_bit(ADF_STATUS_STARTING, &accel_dev->status))
return;
+ adf_rl_stop(accel_dev);
adf_dbgfs_rm(accel_dev);
+ adf_sysfs_stop_ras(accel_dev);
clear_bit(ADF_STATUS_STARTING, &accel_dev->status);
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
- if (!list_empty(&accel_dev->crypto_list)) {
+ if (!list_empty(&accel_dev->crypto_list) &&
+ test_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status)) {
qat_algs_unregister();
qat_asym_algs_unregister();
}
+ clear_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status);
- if (!list_empty(&accel_dev->compression_list))
+ if (!list_empty(&accel_dev->compression_list) &&
+ test_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status))
qat_comp_algs_unregister();
+ clear_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status);
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (!test_bit(accel_dev->accel_id, service->start_status))
continue;
ret = service->event_hld(accel_dev, ADF_EVENT_STOP);
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
struct service_hndl *service;
- struct list_head *list_itr;
if (!hw_data) {
dev_err(&GET_DEV(accel_dev),
&accel_dev->status);
}
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (!test_bit(accel_dev->accel_id, service->init_status))
continue;
if (service->event_hld(accel_dev, ADF_EVENT_SHUTDOWN))
clear_bit(accel_dev->accel_id, service->init_status);
}
+ adf_rl_exit(accel_dev);
+
+ if (hw_data->ras_ops.disable_ras_errors)
+ hw_data->ras_ops.disable_ras_errors(accel_dev);
+
adf_heartbeat_shutdown(accel_dev);
hw_data->disable_iov(accel_dev);
int adf_dev_restarting_notify(struct adf_accel_dev *accel_dev)
{
struct service_hndl *service;
- struct list_head *list_itr;
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (service->event_hld(accel_dev, ADF_EVENT_RESTARTING))
dev_err(&GET_DEV(accel_dev),
"Failed to restart service %s.\n",
int adf_dev_restarted_notify(struct adf_accel_dev *accel_dev)
{
struct service_hndl *service;
- struct list_head *list_itr;
- list_for_each(list_itr, &service_table) {
- service = list_entry(list_itr, struct service_hndl, list);
+ list_for_each_entry(service, &service_table, list) {
if (service->event_hld(accel_dev, ADF_EVENT_RESTARTED))
dev_err(&GET_DEV(accel_dev),
"Failed to restart service %s.\n",
mutex_lock(&accel_dev->state_lock);
- if (!adf_dev_started(accel_dev)) {
- dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already down\n",
- accel_dev->accel_id);
- ret = -EINVAL;
- goto out;
- }
-
if (reconfig) {
ret = adf_dev_shutdown_cache_cfg(accel_dev);
goto out;
return false;
}
+static bool adf_handle_ras_int(struct adf_accel_dev *accel_dev)
+{
+ struct adf_ras_ops *ras_ops = &accel_dev->hw_device->ras_ops;
+ bool reset_required;
+
+ if (ras_ops->handle_interrupt &&
+ ras_ops->handle_interrupt(accel_dev, &reset_required)) {
+ if (reset_required)
+ dev_err(&GET_DEV(accel_dev), "Fatal error, reset required\n");
+ return true;
+ }
+
+ return false;
+}
+
static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
{
struct adf_accel_dev *accel_dev = dev_ptr;
if (adf_handle_pm_int(accel_dev))
return IRQ_HANDLED;
+ if (adf_handle_ras_int(accel_dev))
+ return IRQ_HANDLED;
+
dev_dbg(&GET_DEV(accel_dev), "qat_dev%d spurious AE interrupt\n",
accel_dev->accel_id);
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include <linux/debugfs.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+
+#include "adf_accel_devices.h"
+#include "adf_pm_dbgfs.h"
+
+static ssize_t pm_status_read(struct file *f, char __user *buf, size_t count,
+ loff_t *pos)
+{
+ struct adf_accel_dev *accel_dev = file_inode(f)->i_private;
+ struct adf_pm pm = accel_dev->power_management;
+
+ if (pm.print_pm_status)
+ return pm.print_pm_status(accel_dev, buf, count, pos);
+
+ return count;
+}
+
+static const struct file_operations pm_status_fops = {
+ .owner = THIS_MODULE,
+ .read = pm_status_read,
+};
+
+void adf_pm_dbgfs_add(struct adf_accel_dev *accel_dev)
+{
+ struct adf_pm *pm = &accel_dev->power_management;
+
+ if (!pm->present || !pm->print_pm_status)
+ return;
+
+ pm->debugfs_pm_status = debugfs_create_file("pm_status", 0400,
+ accel_dev->debugfs_dir,
+ accel_dev, &pm_status_fops);
+}
+
+void adf_pm_dbgfs_rm(struct adf_accel_dev *accel_dev)
+{
+ struct adf_pm *pm = &accel_dev->power_management;
+
+ if (!pm->present)
+ return;
+
+ debugfs_remove(pm->debugfs_pm_status);
+ pm->debugfs_pm_status = NULL;
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_PM_DBGFS_H_
+#define ADF_PM_DBGFS_H_
+
+struct adf_accel_dev;
+
+void adf_pm_dbgfs_rm(struct adf_accel_dev *accel_dev);
+void adf_pm_dbgfs_add(struct adf_accel_dev *accel_dev);
+
+#endif /* ADF_PM_DBGFS_H_ */
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#define dev_fmt(fmt) "RateLimiting: " fmt
+
+#include <asm/errno.h>
+#include <asm/div64.h>
+
+#include <linux/dev_printk.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/units.h>
+
+#include "adf_accel_devices.h"
+#include "adf_common_drv.h"
+#include "adf_rl_admin.h"
+#include "adf_rl.h"
+#include "adf_sysfs_rl.h"
+
+#define RL_TOKEN_GRANULARITY_PCIEIN_BUCKET 0U
+#define RL_TOKEN_GRANULARITY_PCIEOUT_BUCKET 0U
+#define RL_TOKEN_PCIE_SIZE 64
+#define RL_TOKEN_ASYM_SIZE 1024
+#define RL_CSR_SIZE 4U
+#define RL_CAPABILITY_MASK GENMASK(6, 4)
+#define RL_CAPABILITY_VALUE 0x70
+#define RL_VALIDATE_NON_ZERO(input) ((input) == 0)
+#define ROOT_MASK GENMASK(1, 0)
+#define CLUSTER_MASK GENMASK(3, 0)
+#define LEAF_MASK GENMASK(5, 0)
+
+static int validate_user_input(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in,
+ bool is_update)
+{
+ const unsigned long rp_mask = sla_in->rp_mask;
+ size_t rp_mask_size;
+ int i, cnt;
+
+ if (sla_in->pir < sla_in->cir) {
+ dev_notice(&GET_DEV(accel_dev),
+ "PIR must be >= CIR, setting PIR to CIR\n");
+ sla_in->pir = sla_in->cir;
+ }
+
+ if (!is_update) {
+ cnt = 0;
+ rp_mask_size = sizeof(sla_in->rp_mask) * BITS_PER_BYTE;
+ for_each_set_bit(i, &rp_mask, rp_mask_size) {
+ if (++cnt > RL_RP_CNT_PER_LEAF_MAX) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Too many ring pairs selected for this SLA\n");
+ return -EINVAL;
+ }
+ }
+
+ if (sla_in->srv >= ADF_SVC_NONE) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Wrong service type\n");
+ return -EINVAL;
+ }
+
+ if (sla_in->type > RL_LEAF) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Wrong node type\n");
+ return -EINVAL;
+ }
+
+ if (sla_in->parent_id < RL_PARENT_DEFAULT_ID ||
+ sla_in->parent_id >= RL_NODES_CNT_MAX) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Wrong parent ID\n");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static int validate_sla_id(struct adf_accel_dev *accel_dev, int sla_id)
+{
+ struct rl_sla *sla;
+
+ if (sla_id <= RL_SLA_EMPTY_ID || sla_id >= RL_NODES_CNT_MAX) {
+ dev_notice(&GET_DEV(accel_dev), "Provided ID is out of bounds\n");
+ return -EINVAL;
+ }
+
+ sla = accel_dev->rate_limiting->sla[sla_id];
+
+ if (!sla) {
+ dev_notice(&GET_DEV(accel_dev), "SLA with provided ID does not exist\n");
+ return -EINVAL;
+ }
+
+ if (sla->type != RL_LEAF) {
+ dev_notice(&GET_DEV(accel_dev), "This ID is reserved for internal use\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * find_parent() - Find the parent for a new SLA
+ * @rl_data: pointer to ratelimiting data
+ * @sla_in: pointer to user input data for a new SLA
+ *
+ * The function returns a pointer to the parent SLA. If the parent ID is
+ * provided as input in the user data, then that ID is validated and the
+ * parent SLA is returned.
+ * Otherwise, it returns the default parent SLA (root or cluster) for
+ * the new object.
+ *
+ * Return:
+ * * Pointer to the parent SLA object
+ * * NULL - when parent cannot be found
+ */
+static struct rl_sla *find_parent(struct adf_rl *rl_data,
+ struct adf_rl_sla_input_data *sla_in)
+{
+ int input_parent_id = sla_in->parent_id;
+ struct rl_sla *root = NULL;
+ struct rl_sla *parent_sla;
+ int i;
+
+ if (sla_in->type == RL_ROOT)
+ return NULL;
+
+ if (input_parent_id > RL_PARENT_DEFAULT_ID) {
+ parent_sla = rl_data->sla[input_parent_id];
+ /*
+ * SLA can be a parent if it has the same service as the child
+ * and its type is higher in the hierarchy,
+ * for example the parent type of a LEAF must be a CLUSTER.
+ */
+ if (parent_sla && parent_sla->srv == sla_in->srv &&
+ parent_sla->type == sla_in->type - 1)
+ return parent_sla;
+
+ return NULL;
+ }
+
+ /* If input_parent_id is not valid, get root for this service type. */
+ for (i = 0; i < RL_ROOT_MAX; i++) {
+ if (rl_data->root[i] && rl_data->root[i]->srv == sla_in->srv) {
+ root = rl_data->root[i];
+ break;
+ }
+ }
+
+ if (!root)
+ return NULL;
+
+ /*
+ * If the type of this SLA is cluster, then return the root.
+ * Otherwise, find the default (i.e. first) cluster for this service.
+ */
+ if (sla_in->type == RL_CLUSTER)
+ return root;
+
+ for (i = 0; i < RL_CLUSTER_MAX; i++) {
+ if (rl_data->cluster[i] && rl_data->cluster[i]->parent == root)
+ return rl_data->cluster[i];
+ }
+
+ return NULL;
+}
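+
+/*
+ * Illustrative note for find_parent() above (not part of the original
+ * change): with sla_in->parent_id == RL_PARENT_DEFAULT_ID and
+ * sla_in->type == RL_LEAF for a sym SLA, the function falls through to the
+ * default path and returns the first cluster whose parent is the sym root;
+ * for sla_in->type == RL_CLUSTER it returns the sym root directly.
+ */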
+
+static enum adf_cfg_service_type srv_to_cfg_svc_type(enum adf_base_services rl_srv)
+{
+ switch (rl_srv) {
+ case ADF_SVC_ASYM:
+ return ASYM;
+ case ADF_SVC_SYM:
+ return SYM;
+ case ADF_SVC_DC:
+ return COMP;
+ default:
+ return UNUSED;
+ }
+}
+
+/**
+ * get_sla_arr_of_type() - Returns a pointer to SLA type specific array
+ * @rl_data: pointer to ratelimiting data
+ * @type: SLA type
+ * @sla_arr: pointer to variable where requested pointer will be stored
+ *
+ * Return: Max number of elements allowed for the returned array
+ */
+static u32 get_sla_arr_of_type(struct adf_rl *rl_data, enum rl_node_type type,
+ struct rl_sla ***sla_arr)
+{
+ switch (type) {
+ case RL_LEAF:
+ *sla_arr = rl_data->leaf;
+ return RL_LEAF_MAX;
+ case RL_CLUSTER:
+ *sla_arr = rl_data->cluster;
+ return RL_CLUSTER_MAX;
+ case RL_ROOT:
+ *sla_arr = rl_data->root;
+ return RL_ROOT_MAX;
+ default:
+ *sla_arr = NULL;
+ return 0;
+ }
+}
+
+static bool is_service_enabled(struct adf_accel_dev *accel_dev,
+ enum adf_base_services rl_srv)
+{
+ enum adf_cfg_service_type arb_srv = srv_to_cfg_svc_type(rl_srv);
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ u8 rps_per_bundle = hw_data->num_banks_per_vf;
+ int i;
+
+ for (i = 0; i < rps_per_bundle; i++) {
+ if (GET_SRV_TYPE(accel_dev, i) == arb_srv)
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * prepare_rp_ids() - Creates an array of ring pair IDs from bitmask
+ * @accel_dev: pointer to acceleration device structure
+ * @sla: SLA object data where result will be written
+ * @rp_mask: bitmask of ring pair IDs
+ *
+ * The function converts the provided bitmask into an array of ring pair IDs.
+ * It checks that the RPs are not already in use, that they are mapped to the
+ * SLA service and that the number of provided IDs is not too big. If
+ * successful, it writes the IDs into sla->ring_pairs_ids and the count into
+ * sla->ring_pairs_cnt.
+ *
+ * Return:
+ * * 0 - ok
+ * * -EINVAL - ring pairs array cannot be created from provided mask
+ */
+static int prepare_rp_ids(struct adf_accel_dev *accel_dev, struct rl_sla *sla,
+ const unsigned long rp_mask)
+{
+ enum adf_cfg_service_type arb_srv = srv_to_cfg_svc_type(sla->srv);
+ u16 rps_per_bundle = GET_HW_DATA(accel_dev)->num_banks_per_vf;
+ bool *rp_in_use = accel_dev->rate_limiting->rp_in_use;
+ size_t rp_cnt_max = ARRAY_SIZE(sla->ring_pairs_ids);
+ u16 rp_id_max = GET_HW_DATA(accel_dev)->num_banks;
+ u16 cnt = 0;
+ u16 rp_id;
+
+ for_each_set_bit(rp_id, &rp_mask, rp_id_max) {
+ if (cnt >= rp_cnt_max) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Assigned more ring pairs than supported");
+ return -EINVAL;
+ }
+
+ if (rp_in_use[rp_id]) {
+ dev_notice(&GET_DEV(accel_dev),
+ "RP %u already assigned to other SLA", rp_id);
+ return -EINVAL;
+ }
+
+ if (GET_SRV_TYPE(accel_dev, rp_id % rps_per_bundle) != arb_srv) {
+ dev_notice(&GET_DEV(accel_dev),
+ "RP %u does not support SLA service", rp_id);
+ return -EINVAL;
+ }
+
+ sla->ring_pairs_ids[cnt++] = rp_id;
+ }
+
+ sla->ring_pairs_cnt = cnt;
+
+ return 0;
+}
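+
+/*
+ * Sketch of prepare_rp_ids() above with hypothetical input: rp_mask = 0x5
+ * selects ring pairs 0 and 2; if both are free and mapped to the SLA
+ * service, the result is ring_pairs_ids = { 0, 2 } and ring_pairs_cnt = 2.
+ */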
+
+static void mark_rps_usage(struct rl_sla *sla, bool *rp_in_use, bool used)
+{
+ u16 rp_id;
+ int i;
+
+ for (i = 0; i < sla->ring_pairs_cnt; i++) {
+ rp_id = sla->ring_pairs_ids[i];
+ rp_in_use[rp_id] = used;
+ }
+}
+
+static void assign_rps_to_leaf(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool clear)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+ u32 base_offset = hw_data->rl_data.r2l_offset;
+ u32 node_id = clear ? 0U : (sla->node_id & LEAF_MASK);
+ u32 offset;
+ int i;
+
+ for (i = 0; i < sla->ring_pairs_cnt; i++) {
+ offset = base_offset + (RL_CSR_SIZE * sla->ring_pairs_ids[i]);
+ ADF_CSR_WR(pmisc_addr, offset, node_id);
+ }
+}
+
+static void assign_leaf_to_cluster(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool clear)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+ u32 base_offset = hw_data->rl_data.l2c_offset;
+ u32 node_id = sla->node_id & LEAF_MASK;
+ u32 parent_id = clear ? 0U : (sla->parent->node_id & CLUSTER_MASK);
+ u32 offset;
+
+ offset = base_offset + (RL_CSR_SIZE * node_id);
+ ADF_CSR_WR(pmisc_addr, offset, parent_id);
+}
+
+static void assign_cluster_to_root(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool clear)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+ u32 base_offset = hw_data->rl_data.c2s_offset;
+ u32 node_id = sla->node_id & CLUSTER_MASK;
+ u32 parent_id = clear ? 0U : (sla->parent->node_id & ROOT_MASK);
+ u32 offset;
+
+ offset = base_offset + (RL_CSR_SIZE * node_id);
+ ADF_CSR_WR(pmisc_addr, offset, parent_id);
+}
+
+static void assign_node_to_parent(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool clear_assignment)
+{
+ switch (sla->type) {
+ case RL_LEAF:
+ assign_rps_to_leaf(accel_dev, sla, clear_assignment);
+ assign_leaf_to_cluster(accel_dev, sla, clear_assignment);
+ break;
+ case RL_CLUSTER:
+ assign_cluster_to_root(accel_dev, sla, clear_assignment);
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * can_parent_afford_sla() - Verifies if the parent allows creating an SLA
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla_parent: pointer to parent SLA object
+ * @sla_cir: current child CIR value (only for update)
+ * @is_update: true if the request is an update
+ *
+ * The algorithm verifies if the parent has enough remaining budget to take
+ * the assignment of a child with the provided parameters. In the update case
+ * the current CIR value must be returned to the budget first.
+ * The PIR value cannot exceed the PIR assigned to the parent.
+ *
+ * Return:
+ * * true - SLA can be created
+ * * false - SLA cannot be created
+ */
+static bool can_parent_afford_sla(struct adf_rl_sla_input_data *sla_in,
+ struct rl_sla *sla_parent, u32 sla_cir,
+ bool is_update)
+{
+ u32 rem_cir = sla_parent->rem_cir;
+
+ if (is_update)
+ rem_cir += sla_cir;
+
+ if (sla_in->cir > rem_cir || sla_in->pir > sla_parent->pir)
+ return false;
+
+ return true;
+}
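+
+/*
+ * Worked example for can_parent_afford_sla() above (hypothetical numbers):
+ * a cluster with rem_cir = 300 and a leaf being updated from cir = 100 to
+ * cir = 350. On update the old CIR is returned first (300 + 100 = 400), so
+ * the request fits as long as the new PIR does not exceed the cluster PIR.
+ */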
+
+/**
+ * can_node_afford_update() - Verifies if SLA can be updated with input data
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla: pointer to SLA object selected for update
+ *
+ * The algorithm verifies if the new CIR value is big enough to satisfy the
+ * currently assigned child SLAs and if the PIR can be updated.
+ *
+ * Return:
+ * * true - SLA can be updated
+ * * false - SLA cannot be updated
+ */
+static bool can_node_afford_update(struct adf_rl_sla_input_data *sla_in,
+ struct rl_sla *sla)
+{
+ u32 cir_in_use = sla->cir - sla->rem_cir;
+
+ /* new CIR cannot be smaller than the currently consumed value */
+ if (cir_in_use > sla_in->cir)
+ return false;
+
+ /* PIR of root/cluster cannot be reduced in node with assigned children */
+ if (sla_in->pir < sla->pir && sla->type != RL_LEAF && cir_in_use > 0)
+ return false;
+
+ return true;
+}
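+
+/*
+ * Worked example for can_node_afford_update() above (hypothetical numbers):
+ * a cluster with cir = 500 and rem_cir = 200 has 300 already consumed by
+ * its children, so any update must request cir >= 300; lowering the PIR is
+ * rejected as long as any child CIR is still consumed.
+ */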
+
+static bool is_enough_budget(struct adf_rl *rl_data, struct rl_sla *sla,
+ struct adf_rl_sla_input_data *sla_in,
+ bool is_update)
+{
+ u32 max_val = rl_data->device_data->scale_ref;
+ struct rl_sla *parent = sla->parent;
+ bool ret = true;
+
+ if (sla_in->cir > max_val || sla_in->pir > max_val)
+ ret = false;
+
+ switch (sla->type) {
+ case RL_LEAF:
+ ret &= can_parent_afford_sla(sla_in, parent, sla->cir,
+ is_update);
+ break;
+ case RL_CLUSTER:
+ ret &= can_parent_afford_sla(sla_in, parent, sla->cir,
+ is_update);
+
+ if (is_update)
+ ret &= can_node_afford_update(sla_in, sla);
+
+ break;
+ case RL_ROOT:
+ if (is_update)
+ ret &= can_node_afford_update(sla_in, sla);
+
+ break;
+ default:
+ ret = false;
+ break;
+ }
+
+ return ret;
+}
+
+static void update_budget(struct rl_sla *sla, u32 old_cir, bool is_update)
+{
+ switch (sla->type) {
+ case RL_LEAF:
+ if (is_update)
+ sla->parent->rem_cir += old_cir;
+
+ sla->parent->rem_cir -= sla->cir;
+ sla->rem_cir = 0;
+ break;
+ case RL_CLUSTER:
+ if (is_update) {
+ sla->parent->rem_cir += old_cir;
+ sla->rem_cir = sla->cir - (old_cir - sla->rem_cir);
+ } else {
+ sla->rem_cir = sla->cir;
+ }
+
+ sla->parent->rem_cir -= sla->cir;
+ break;
+ case RL_ROOT:
+ if (is_update)
+ sla->rem_cir = sla->cir - (old_cir - sla->rem_cir);
+ else
+ sla->rem_cir = sla->cir;
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * get_next_free_sla_id() - finds next free ID in the SLA array
+ * @rl_data: Pointer to ratelimiting data structure
+ *
+ * Return:
+ * * 0 : RL_NODES_CNT_MAX - correct ID
+ * * -ENOSPC - all SLA slots are in use
+ */
+static int get_next_free_sla_id(struct adf_rl *rl_data)
+{
+ int i = 0;
+
+ while (i < RL_NODES_CNT_MAX && rl_data->sla[i++])
+ ;
+
+ if (i == RL_NODES_CNT_MAX)
+ return -ENOSPC;
+
+ return i - 1;
+}
+
+/**
+ * get_next_free_node_id() - finds next free ID in the array of that node type
+ * @rl_data: Pointer to ratelimiting data structure
+ * @sla: Pointer to SLA object for which the ID is searched
+ *
+ * Return:
+ * * 0 : RL_[NODE_TYPE]_MAX - correct ID
+ * * -ENOSPC - all slots of that type are in use
+ */
+static int get_next_free_node_id(struct adf_rl *rl_data, struct rl_sla *sla)
+{
+ struct adf_hw_device_data *hw_device = GET_HW_DATA(rl_data->accel_dev);
+ int max_id, i, step, rp_per_leaf;
+ struct rl_sla **sla_list;
+
+ rp_per_leaf = hw_device->num_banks / hw_device->num_banks_per_vf;
+
+ /*
+ * Static nodes mapping:
+ * root0 - cluster[0,4,8,12] - leaf[0-15]
+ * root1 - cluster[1,5,9,13] - leaf[16-31]
+ * root2 - cluster[2,6,10,14] - leaf[32-47]
+ */
+ switch (sla->type) {
+ case RL_LEAF:
+ i = sla->srv * rp_per_leaf;
+ step = 1;
+ max_id = i + rp_per_leaf;
+ sla_list = rl_data->leaf;
+ break;
+ case RL_CLUSTER:
+ i = sla->srv;
+ step = 4;
+ max_id = RL_CLUSTER_MAX;
+ sla_list = rl_data->cluster;
+ break;
+ case RL_ROOT:
+ return sla->srv;
+ default:
+ return -EINVAL;
+ }
+
+ while (i < max_id && sla_list[i])
+ i += step;
+
+ if (i >= max_id)
+ return -ENOSPC;
+
+ return i;
+}
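+
+/*
+ * Example of the static mapping above, assuming (purely for illustration)
+ * ADF_SVC_SYM == 1, num_banks = 64 and num_banks_per_vf = 4, so
+ * rp_per_leaf = 16: a sym leaf is placed in leaf[16..31], a sym cluster in
+ * cluster[1, 5, 9, 13] and the sym root is root[1].
+ */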
+
+u32 adf_rl_calculate_slice_tokens(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type)
+{
+ struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ u64 avail_slice_cycles, allocated_tokens;
+
+ if (!sla_val)
+ return 0;
+
+ avail_slice_cycles = hw_data->clock_frequency;
+
+ switch (svc_type) {
+ case ADF_SVC_ASYM:
+ avail_slice_cycles *= device_data->slices.pke_cnt;
+ break;
+ case ADF_SVC_SYM:
+ avail_slice_cycles *= device_data->slices.cph_cnt;
+ break;
+ case ADF_SVC_DC:
+ avail_slice_cycles *= device_data->slices.dcpr_cnt;
+ break;
+ default:
+ break;
+ }
+
+ do_div(avail_slice_cycles, device_data->scan_interval);
+ allocated_tokens = avail_slice_cycles * sla_val;
+ do_div(allocated_tokens, device_data->scale_ref);
+
+ return allocated_tokens;
+}
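+
+/*
+ * Worked example for the token calculation above (all values hypothetical):
+ * clock_frequency = 500 MHz, 6 cipher slices, scan_interval = 1000 and
+ * scale_ref = 100. A sym sla_val of 50 gives
+ * avail_slice_cycles = 500e6 * 6 / 1000 = 3e6 and
+ * allocated_tokens = 3e6 * 50 / 100 = 1.5e6 tokens per scan interval.
+ */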
+
+u32 adf_rl_calculate_ae_cycles(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type)
+{
+ struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ u64 allocated_ae_cycles, avail_ae_cycles;
+
+ if (!sla_val)
+ return 0;
+
+ avail_ae_cycles = hw_data->clock_frequency;
+ avail_ae_cycles *= hw_data->get_num_aes(hw_data) - 1;
+ do_div(avail_ae_cycles, device_data->scan_interval);
+
+ sla_val *= device_data->max_tp[svc_type];
+ sla_val /= device_data->scale_ref;
+
+ allocated_ae_cycles = (sla_val * avail_ae_cycles);
+ do_div(allocated_ae_cycles, device_data->max_tp[svc_type]);
+
+ return allocated_ae_cycles;
+}
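+
+/*
+ * Worked example for the AE cycle calculation above (hypothetical values):
+ * clock_frequency = 500 MHz, 9 AEs (8 usable), scan_interval = 1000,
+ * max_tp = 1000 and scale_ref = 100. For sla_val = 50:
+ * avail_ae_cycles = 500e6 * 8 / 1000 = 4e6, sla_val scales to 500 and
+ * allocated_ae_cycles = 500 * 4e6 / 1000 = 2e6 cycles per scan interval.
+ */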
+
+u32 adf_rl_calculate_pci_bw(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type, bool is_bw_out)
+{
+ struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+ u64 sla_to_bytes, allocated_bw, sla_scaled;
+
+ if (!sla_val)
+ return 0;
+
+ sla_to_bytes = sla_val;
+ sla_to_bytes *= device_data->max_tp[svc_type];
+ do_div(sla_to_bytes, device_data->scale_ref);
+
+ sla_to_bytes *= (svc_type == ADF_SVC_ASYM) ? RL_TOKEN_ASYM_SIZE :
+ BYTES_PER_MBIT;
+ if (svc_type == ADF_SVC_DC && is_bw_out)
+ sla_to_bytes *= device_data->slices.dcpr_cnt -
+ device_data->dcpr_correction;
+
+ sla_scaled = sla_to_bytes * device_data->pcie_scale_mul;
+ do_div(sla_scaled, device_data->pcie_scale_div);
+ allocated_bw = sla_scaled;
+ do_div(allocated_bw, RL_TOKEN_PCIE_SIZE);
+ do_div(allocated_bw, device_data->scan_interval);
+
+ return allocated_bw;
+}
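+
+/*
+ * Worked example for the PCIe bandwidth calculation above (hypothetical
+ * device data): sla_val = 10, max_tp = 1000, scale_ref = 100,
+ * pcie_scale_mul/div = 10/1, scan_interval = 1000, asym service:
+ * 10 * 1000 / 100 = 100, * 1024 (RL_TOKEN_ASYM_SIZE) = 102400,
+ * * 10 = 1024000, / 64 (RL_TOKEN_PCIE_SIZE) = 16000, / 1000 = 16 tokens
+ * per scan interval.
+ */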
+
+/**
+ * add_new_sla_entry() - creates a new SLA object and fills it with user data
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla_out: Pointer to variable that will contain the address of a new
+ * SLA object if the operation succeeds
+ *
+ * Return:
+ * * 0 - ok
+ * * -ENOMEM - memory allocation failed
+ * * -EINVAL - invalid user input
+ * * -ENOSPC - all available SLAs are in use
+ */
+static int add_new_sla_entry(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in,
+ struct rl_sla **sla_out)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct rl_sla *sla;
+ int ret = 0;
+
+ sla = kzalloc(sizeof(*sla), GFP_KERNEL);
+ if (!sla) {
+ ret = -ENOMEM;
+ goto ret_err;
+ }
+ *sla_out = sla;
+
+ if (!is_service_enabled(accel_dev, sla_in->srv)) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Provided service is not enabled\n");
+ ret = -EINVAL;
+ goto ret_err;
+ }
+
+ sla->srv = sla_in->srv;
+ sla->type = sla_in->type;
+ ret = get_next_free_node_id(rl_data, sla);
+ if (ret < 0) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Exceeded number of available nodes for that service\n");
+ goto ret_err;
+ }
+ sla->node_id = ret;
+
+ ret = get_next_free_sla_id(rl_data);
+ if (ret < 0) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Allocated maximum SLAs number\n");
+ goto ret_err;
+ }
+ sla->sla_id = ret;
+
+ sla->parent = find_parent(rl_data, sla_in);
+ if (!sla->parent && sla->type != RL_ROOT) {
+ if (sla_in->parent_id != RL_PARENT_DEFAULT_ID)
+ dev_notice(&GET_DEV(accel_dev),
+ "Provided parent ID does not exist or cannot be parent for this SLA.");
+ else
+ dev_notice(&GET_DEV(accel_dev),
+ "Unable to find parent node for this service. Is service enabled?");
+ ret = -EINVAL;
+ goto ret_err;
+ }
+
+ if (sla->type == RL_LEAF) {
+ ret = prepare_rp_ids(accel_dev, sla, sla_in->rp_mask);
+ if (!sla->ring_pairs_cnt || ret) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Unable to find ring pairs to assign to the leaf");
+ if (!ret)
+ ret = -EINVAL;
+
+ goto ret_err;
+ }
+ }
+
+ return 0;
+
+ret_err:
+ kfree(sla);
+ *sla_out = NULL;
+
+ return ret;
+}
+
+static int initialize_default_nodes(struct adf_accel_dev *accel_dev)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct adf_rl_hw_data *device_data = rl_data->device_data;
+ struct adf_rl_sla_input_data sla_in = { };
+ int ret = 0;
+ int i;
+
+ /* Init root for each enabled service */
+ sla_in.type = RL_ROOT;
+ sla_in.parent_id = RL_PARENT_DEFAULT_ID;
+
+ for (i = 0; i < ADF_SVC_NONE; i++) {
+ if (!is_service_enabled(accel_dev, i))
+ continue;
+
+ sla_in.cir = device_data->scale_ref;
+ sla_in.pir = sla_in.cir;
+ sla_in.srv = i;
+
+ ret = adf_rl_add_sla(accel_dev, &sla_in);
+ if (ret)
+ return ret;
+ }
+
+ /* Init default cluster for each root */
+ sla_in.type = RL_CLUSTER;
+ for (i = 0; i < ADF_SVC_NONE; i++) {
+ if (!rl_data->root[i])
+ continue;
+
+ sla_in.cir = rl_data->root[i]->cir;
+ sla_in.pir = sla_in.cir;
+ sla_in.srv = rl_data->root[i]->srv;
+
+ ret = adf_rl_add_sla(accel_dev, &sla_in);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void clear_sla(struct adf_rl *rl_data, struct rl_sla *sla)
+{
+ bool *rp_in_use = rl_data->rp_in_use;
+ struct rl_sla **sla_type_arr = NULL;
+ int i, sla_id, node_id;
+ u32 old_cir;
+
+ sla_id = sla->sla_id;
+ node_id = sla->node_id;
+ old_cir = sla->cir;
+ sla->cir = 0;
+ sla->pir = 0;
+
+ for (i = 0; i < sla->ring_pairs_cnt; i++)
+ rp_in_use[sla->ring_pairs_ids[i]] = false;
+
+ update_budget(sla, old_cir, true);
+ get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+ assign_node_to_parent(rl_data->accel_dev, sla, true);
+ adf_rl_send_admin_delete_msg(rl_data->accel_dev, node_id, sla->type);
+ mark_rps_usage(sla, rl_data->rp_in_use, false);
+
+ kfree(sla);
+ rl_data->sla[sla_id] = NULL;
+ sla_type_arr[node_id] = NULL;
+}
+
+/**
+ * add_update_sla() - handles the creation and the update of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data for a new/updated SLA
+ * @is_update: flag to indicate if this is an update or an add operation
+ *
+ * Return:
+ * * 0 - ok
+ * * -ENOMEM - memory allocation failed
+ * * -EINVAL - user input data cannot be used to create SLA
+ * * -ENOSPC - all available SLAs are in use
+ */
+static int add_update_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in, bool is_update)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct rl_sla **sla_type_arr = NULL;
+ struct rl_sla *sla = NULL;
+ u32 old_cir = 0;
+ int ret;
+
+ if (!sla_in) {
+ dev_warn(&GET_DEV(accel_dev),
+ "SLA input data pointer is missing\n");
+ ret = -EFAULT;
+ goto ret_err;
+ }
+
+ /* Input validation */
+ ret = validate_user_input(accel_dev, sla_in, is_update);
+ if (ret)
+ goto ret_err;
+
+ mutex_lock(&rl_data->rl_lock);
+
+ if (is_update) {
+ ret = validate_sla_id(accel_dev, sla_in->sla_id);
+ if (ret)
+ goto ret_err;
+
+ sla = rl_data->sla[sla_in->sla_id];
+ old_cir = sla->cir;
+ } else {
+ ret = add_new_sla_entry(accel_dev, sla_in, &sla);
+ if (ret)
+ goto ret_err;
+ }
+
+ if (!is_enough_budget(rl_data, sla, sla_in, is_update)) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Input value exceeds the remaining budget%s\n",
+ is_update ? " or more budget is already in use" : "");
+ ret = -EINVAL;
+ goto ret_err;
+ }
+ sla->cir = sla_in->cir;
+ sla->pir = sla_in->pir;
+
+ /* Apply SLA */
+ assign_node_to_parent(accel_dev, sla, false);
+ ret = adf_rl_send_admin_add_update_msg(accel_dev, sla, is_update);
+ if (ret) {
+ dev_notice(&GET_DEV(accel_dev),
+ "Failed to apply an SLA\n");
+ goto ret_err;
+ }
+ update_budget(sla, old_cir, is_update);
+
+ if (!is_update) {
+ mark_rps_usage(sla, rl_data->rp_in_use, true);
+ get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+ sla_type_arr[sla->node_id] = sla;
+ rl_data->sla[sla->sla_id] = sla;
+ }
+
+ sla_in->sla_id = sla->sla_id;
+ goto ret_ok;
+
+ret_err:
+ if (!is_update) {
+ sla_in->sla_id = -1;
+ kfree(sla);
+ }
+ret_ok:
+ mutex_unlock(&rl_data->rl_lock);
+ return ret;
+}
+
+/**
+ * adf_rl_add_sla() - handles the creation of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data required to add an SLA
+ *
+ * Return:
+ * * 0 - ok
+ * * -ENOMEM - memory allocation failed
+ * * -EINVAL - invalid user input
+ * * -ENOSPC - all available SLAs are in use
+ */
+int adf_rl_add_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in)
+{
+ return add_update_sla(accel_dev, sla_in, false);
+}
+
+/**
+ * adf_rl_update_sla() - handles the update of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data required to update an SLA
+ *
+ * Return:
+ * * 0 - ok
+ * * -EINVAL - user input data cannot be used to update SLA
+ */
+int adf_rl_update_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in)
+{
+ return add_update_sla(accel_dev, sla_in, true);
+}
+
+/**
+ * adf_rl_get_sla() - returns an existing SLA data
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user data where SLA info will be stored
+ *
+ * The sla_id for which data is requested should be set in the sla_in structure
+ *
+ * Return:
+ * * 0 - ok
+ * * -EINVAL - provided sla_id does not exist
+ */
+int adf_rl_get_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in)
+{
+ struct rl_sla *sla;
+ int ret, i;
+
+ ret = validate_sla_id(accel_dev, sla_in->sla_id);
+ if (ret)
+ return ret;
+
+ sla = accel_dev->rate_limiting->sla[sla_in->sla_id];
+ sla_in->type = sla->type;
+ sla_in->srv = sla->srv;
+ sla_in->cir = sla->cir;
+ sla_in->pir = sla->pir;
+ sla_in->rp_mask = 0U;
+ if (sla->parent)
+ sla_in->parent_id = sla->parent->sla_id;
+ else
+ sla_in->parent_id = RL_PARENT_DEFAULT_ID;
+
+ for (i = 0; i < sla->ring_pairs_cnt; i++)
+ sla_in->rp_mask |= BIT(sla->ring_pairs_ids[i]);
+
+ return 0;
+}
+
+/**
+ * adf_rl_get_capability_remaining() - returns the remaining SLA value (CIR) for
+ * selected service or provided sla_id
+ * @accel_dev: pointer to acceleration device structure
+ * @srv: service ID for which capability is requested
+ * @sla_id: ID of the cluster or root to which we want to assign a new SLA
+ *
+ * Check if the provided SLA id is valid. If it is and the service matches
+ * the requested service and the type is cluster or root, return the remaining
+ * capability.
+ * If the provided ID does not match the service or type, return the remaining
+ * capacity of the default cluster for that service.
+ *
+ * Return:
+ * * Positive value - correct remaining value
+ * * -EINVAL - algorithm cannot find a remaining value for provided data
+ */
+int adf_rl_get_capability_remaining(struct adf_accel_dev *accel_dev,
+ enum adf_base_services srv, int sla_id)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct rl_sla *sla = NULL;
+ int i;
+
+ if (srv >= ADF_SVC_NONE)
+ return -EINVAL;
+
+ if (sla_id > RL_SLA_EMPTY_ID && !validate_sla_id(accel_dev, sla_id)) {
+ sla = rl_data->sla[sla_id];
+
+ if (sla->srv == srv && sla->type <= RL_CLUSTER)
+ goto ret_ok;
+ }
+
+ for (i = 0; i < RL_CLUSTER_MAX; i++) {
+ if (!rl_data->cluster[i])
+ continue;
+
+ if (rl_data->cluster[i]->srv == srv) {
+ sla = rl_data->cluster[i];
+ goto ret_ok;
+ }
+ }
+
+ return -EINVAL;
+ret_ok:
+ return sla->rem_cir;
+}
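For context, a caller would typically query the remaining budget of a service
before attempting to add a new SLA under the default cluster. A minimal
sketch, assuming rate limiting has already been started on the device; the
250-permille figure is purely illustrative::

	/* Hypothetical pre-check before adding a 250-permille sym leaf. */
	int rem;

	rem = adf_rl_get_capability_remaining(accel_dev, ADF_SVC_SYM,
					      RL_SLA_EMPTY_ID);
	if (rem < 0)
		return rem;
	if (rem < 250)
		return -ENOSPC;	/* default sym cluster cannot fit this CIR */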
+
+/**
+ * adf_rl_remove_sla() - removes provided sla_id
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_id: ID of the SLA to be removed
+ *
+ * Return:
+ * * 0 - ok
+ * * -EINVAL - wrong sla_id or the SLA still has assigned children
+ */
+int adf_rl_remove_sla(struct adf_accel_dev *accel_dev, u32 sla_id)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct rl_sla *sla;
+ int ret = 0;
+
+ mutex_lock(&rl_data->rl_lock);
+ ret = validate_sla_id(accel_dev, sla_id);
+ if (ret)
+ goto err_ret;
+
+ sla = rl_data->sla[sla_id];
+
+ if (sla->type < RL_LEAF && sla->rem_cir != sla->cir) {
+ dev_notice(&GET_DEV(accel_dev),
+ "To remove parent SLA all its children must be removed first");
+ ret = -EINVAL;
+ goto err_ret;
+ }
+
+ clear_sla(rl_data, sla);
+
+err_ret:
+ mutex_unlock(&rl_data->rl_lock);
+ return ret;
+}
+
+/**
+ * adf_rl_remove_sla_all() - removes all SLAs from device
+ * @accel_dev: pointer to acceleration device structure
+ * @incl_default: set to true if default SLAs should also be removed
+ */
+void adf_rl_remove_sla_all(struct adf_accel_dev *accel_dev, bool incl_default)
+{
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ int end_type = incl_default ? RL_ROOT : RL_LEAF;
+ struct rl_sla **sla_type_arr = NULL;
+ u32 max_id;
+ int i, j;
+
+ mutex_lock(&rl_data->rl_lock);
+
+ /* Unregister and remove all SLAs */
+ for (j = RL_LEAF; j >= end_type; j--) {
+ max_id = get_sla_arr_of_type(rl_data, j, &sla_type_arr);
+
+ for (i = 0; i < max_id; i++) {
+ if (!sla_type_arr[i])
+ continue;
+
+ clear_sla(rl_data, sla_type_arr[i]);
+ }
+ }
+
+ mutex_unlock(&rl_data->rl_lock);
+}
+
+int adf_rl_init(struct adf_accel_dev *accel_dev)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ struct adf_rl_hw_data *rl_hw_data = &hw_data->rl_data;
+ struct adf_rl *rl;
+ int ret = 0;
+
+ /* Validate device parameters */
+ if (RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_ASYM]) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_SYM]) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_DC]) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->scan_interval) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->pcie_scale_div) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->pcie_scale_mul) ||
+ RL_VALIDATE_NON_ZERO(rl_hw_data->scale_ref)) {
+ ret = -EOPNOTSUPP;
+ goto err_ret;
+ }
+
+ rl = kzalloc(sizeof(*rl), GFP_KERNEL);
+ if (!rl) {
+ ret = -ENOMEM;
+ goto err_ret;
+ }
+
+ mutex_init(&rl->rl_lock);
+ rl->device_data = &accel_dev->hw_device->rl_data;
+ rl->accel_dev = accel_dev;
+ accel_dev->rate_limiting = rl;
+
+err_ret:
+ return ret;
+}
+
+int adf_rl_start(struct adf_accel_dev *accel_dev)
+{
+ struct adf_rl_hw_data *rl_hw_data = &GET_HW_DATA(accel_dev)->rl_data;
+ void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+ u16 fw_caps = GET_HW_DATA(accel_dev)->fw_capabilities;
+ int ret;
+
+ if (!accel_dev->rate_limiting) {
+ ret = -EOPNOTSUPP;
+ goto ret_err;
+ }
+
+ if ((fw_caps & RL_CAPABILITY_MASK) != RL_CAPABILITY_VALUE) {
+ dev_info(&GET_DEV(accel_dev), "not supported\n");
+ ret = -EOPNOTSUPP;
+ goto ret_free;
+ }
+
+ ADF_CSR_WR(pmisc_addr, rl_hw_data->pciin_tb_offset,
+ RL_TOKEN_GRANULARITY_PCIEIN_BUCKET);
+ ADF_CSR_WR(pmisc_addr, rl_hw_data->pciout_tb_offset,
+ RL_TOKEN_GRANULARITY_PCIEOUT_BUCKET);
+
+ ret = adf_rl_send_admin_init_msg(accel_dev, &rl_hw_data->slices);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev), "initialization failed\n");
+ goto ret_free;
+ }
+
+ ret = initialize_default_nodes(accel_dev);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "failed to initialize default SLAs\n");
+ goto ret_sla_rm;
+ }
+
+ ret = adf_sysfs_rl_add(accel_dev);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev), "failed to add sysfs interface\n");
+ goto ret_sysfs_rm;
+ }
+
+ return 0;
+
+ret_sysfs_rm:
+ adf_sysfs_rl_rm(accel_dev);
+ret_sla_rm:
+ adf_rl_remove_sla_all(accel_dev, true);
+ret_free:
+ kfree(accel_dev->rate_limiting);
+ accel_dev->rate_limiting = NULL;
+ret_err:
+ return ret;
+}
+
+void adf_rl_stop(struct adf_accel_dev *accel_dev)
+{
+ if (!accel_dev->rate_limiting)
+ return;
+
+ adf_sysfs_rl_rm(accel_dev);
+ adf_rl_remove_sla_all(accel_dev, true);
+}
+
+void adf_rl_exit(struct adf_accel_dev *accel_dev)
+{
+ if (!accel_dev->rate_limiting)
+ return;
+
+ kfree(accel_dev->rate_limiting);
+ accel_dev->rate_limiting = NULL;
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RL_H_
+#define ADF_RL_H_
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+struct adf_accel_dev;
+
+#define RL_ROOT_MAX 4
+#define RL_CLUSTER_MAX 16
+#define RL_LEAF_MAX 64
+#define RL_NODES_CNT_MAX (RL_ROOT_MAX + RL_CLUSTER_MAX + RL_LEAF_MAX)
+#define RL_RP_CNT_PER_LEAF_MAX 4U
+#define RL_RP_CNT_MAX 64
+#define RL_SLA_EMPTY_ID -1
+#define RL_PARENT_DEFAULT_ID -1
+
+enum rl_node_type {
+ RL_ROOT,
+ RL_CLUSTER,
+ RL_LEAF,
+};
+
+enum adf_base_services {
+ ADF_SVC_ASYM = 0,
+ ADF_SVC_SYM,
+ ADF_SVC_DC,
+ ADF_SVC_NONE,
+};
+
+/**
+ * struct adf_rl_sla_input_data - ratelimiting user input data structure
+ * @rp_mask: 64 bit bitmask of ring pair IDs which will be assigned to SLA.
+ * Eg. 0x5 -> RP0 and RP2 assigned; 0xA005 -> RP0,2,13,15 assigned.
+ * @sla_id: ID of current SLA for operations update, rm, get. For the add
+ * operation, this field will be updated with the ID of the newly
+ * added SLA
+ * @parent_id: ID of the SLA to which the current one should be assigned.
+ * Set to -1 to refer to the default parent.
+ * @cir: Committed information rate. Rate guaranteed to be achieved. Input value
+ * is expressed in permille scale, i.e. 1000 refers to the maximum
+ * device throughput for a selected service.
+ * @pir: Peak information rate. Maximum rate available that the SLA can achieve.
+ * Input value is expressed in permille scale, i.e. 1000 refers to
+ * the maximum device throughput for a selected service.
+ * @type: SLA type: root, cluster, leaf
+ * @srv: Service associated with the SLA: asym, sym, dc.
+ *
+ * This structure is used to perform operations on an SLA.
+ * Depending on the operation, some of the parameters are ignored.
+ * The following list reports which parameters should be set for each operation.
+ * - add: all except sla_id
+ * - update: cir, pir, sla_id
+ * - rm: sla_id
+ * - rm_all: -
+ * - get: sla_id
+ * - get_capability_rem: srv, sla_id
+ */
+struct adf_rl_sla_input_data {
+ u64 rp_mask;
+ int sla_id;
+ int parent_id;
+ unsigned int cir;
+ unsigned int pir;
+ enum rl_node_type type;
+ enum adf_base_services srv;
+};
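A minimal in-kernel sketch of the add operation described above (values are
illustrative and assume rate limiting has been started on the device; BIT()
comes from <linux/bits.h>)::

	struct adf_rl_sla_input_data in = {
		.rp_mask = BIT(0) | BIT(1),		/* assign ring pairs 0 and 1 */
		.parent_id = RL_PARENT_DEFAULT_ID,	/* attach to the default cluster */
		.type = RL_LEAF,
		.srv = ADF_SVC_SYM,
		.cir = 250,				/* guaranteed 25% of sym capacity */
		.pir = 500,				/* may burst up to 50% */
	};
	int ret;

	ret = adf_rl_add_sla(accel_dev, &in);
	if (!ret)
		pr_debug("created SLA with id %d\n", in.sla_id);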
+
+struct rl_slice_cnt {
+ u8 dcpr_cnt;
+ u8 pke_cnt;
+ u8 cph_cnt;
+};
+
+struct adf_rl_interface_data {
+ struct adf_rl_sla_input_data input;
+ enum adf_base_services cap_rem_srv;
+ struct rw_semaphore lock;
+};
+
+struct adf_rl_hw_data {
+ u32 scale_ref;
+ u32 scan_interval;
+ u32 r2l_offset;
+ u32 l2c_offset;
+ u32 c2s_offset;
+ u32 pciin_tb_offset;
+ u32 pciout_tb_offset;
+ u32 pcie_scale_mul;
+ u32 pcie_scale_div;
+ u32 dcpr_correction;
+ u32 max_tp[RL_ROOT_MAX];
+ struct rl_slice_cnt slices;
+};
+
+/**
+ * struct adf_rl - ratelimiting data structure
+ * @accel_dev: pointer to acceleration device data
+ * @device_data: pointer to rate limiting data specific to a device type (or revision)
+ * @sla: array of pointers to SLA objects
+ * @root: array of pointers to root type SLAs, element number reflects node_id
+ * @cluster: array of pointers to cluster type SLAs, element number reflects node_id
+ * @leaf: array of pointers to leaf type SLAs, element number reflects node_id
+ * @rp_in_use: array of ring pair IDs already used in one of SLAs
+ * @rl_lock: mutex object which is protecting data in this structure
+ * @user_input: structure holding the data received from the user
+ */
+struct adf_rl {
+ struct adf_accel_dev *accel_dev;
+ struct adf_rl_hw_data *device_data;
+ /* mapping sla_id to SLA objects */
+ struct rl_sla *sla[RL_NODES_CNT_MAX];
+ struct rl_sla *root[RL_ROOT_MAX];
+ struct rl_sla *cluster[RL_CLUSTER_MAX];
+ struct rl_sla *leaf[RL_LEAF_MAX];
+ bool rp_in_use[RL_RP_CNT_MAX];
+ /* Mutex protecting writing to SLAs lists */
+ struct mutex rl_lock;
+ struct adf_rl_interface_data user_input;
+};
+
+/**
+ * struct rl_sla - SLA object data structure
+ * @parent: pointer to the parent SLA (root/cluster)
+ * @type: SLA type
+ * @srv: service associated with this SLA
+ * @sla_id: ID of the SLA, used as element number in SLA array and as identifier
+ * shared with the user
+ * @node_id: ID of the node; each SLA type has a separate ID list
+ * @cir: committed information rate
+ * @pir: peak information rate (PIR >= CIR)
+ * @rem_cir: if this SLA is a parent then this field represents a remaining
+ * value to be used by child SLAs.
+ * @ring_pairs_ids: array with numeric ring pairs IDs assigned to this SLA
+ * @ring_pairs_cnt: number of assigned ring pairs listed in the array above
+ */
+struct rl_sla {
+ struct rl_sla *parent;
+ enum rl_node_type type;
+ enum adf_base_services srv;
+ u32 sla_id;
+ u32 node_id;
+ u32 cir;
+ u32 pir;
+ u32 rem_cir;
+ u16 ring_pairs_ids[RL_RP_CNT_PER_LEAF_MAX];
+ u16 ring_pairs_cnt;
+};
+
+int adf_rl_add_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in);
+int adf_rl_update_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in);
+int adf_rl_get_sla(struct adf_accel_dev *accel_dev,
+ struct adf_rl_sla_input_data *sla_in);
+int adf_rl_get_capability_remaining(struct adf_accel_dev *accel_dev,
+ enum adf_base_services srv, int sla_id);
+int adf_rl_remove_sla(struct adf_accel_dev *accel_dev, u32 sla_id);
+void adf_rl_remove_sla_all(struct adf_accel_dev *accel_dev, bool incl_default);
+
+int adf_rl_init(struct adf_accel_dev *accel_dev);
+int adf_rl_start(struct adf_accel_dev *accel_dev);
+void adf_rl_stop(struct adf_accel_dev *accel_dev);
+void adf_rl_exit(struct adf_accel_dev *accel_dev);
+
+u32 adf_rl_calculate_pci_bw(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type, bool is_bw_out);
+u32 adf_rl_calculate_ae_cycles(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type);
+u32 adf_rl_calculate_slice_tokens(struct adf_accel_dev *accel_dev, u32 sla_val,
+ enum adf_base_services svc_type);
+
+#endif /* ADF_RL_H_ */
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/dma-mapping.h>
+#include <linux/pci.h>
+
+#include "adf_admin.h"
+#include "adf_accel_devices.h"
+#include "adf_rl_admin.h"
+
+static void
+prep_admin_req_msg(struct rl_sla *sla, dma_addr_t dma_addr,
+ struct icp_qat_fw_init_admin_sla_config_params *fw_params,
+ struct icp_qat_fw_init_admin_req *req, bool is_update)
+{
+ req->cmd_id = is_update ? ICP_QAT_FW_RL_UPDATE : ICP_QAT_FW_RL_ADD;
+ req->init_cfg_ptr = dma_addr;
+ req->init_cfg_sz = sizeof(*fw_params);
+ req->node_id = sla->node_id;
+ req->node_type = sla->type;
+ req->rp_count = sla->ring_pairs_cnt;
+ req->svc_type = sla->srv;
+}
+
+static void
+prep_admin_req_params(struct adf_accel_dev *accel_dev, struct rl_sla *sla,
+ struct icp_qat_fw_init_admin_sla_config_params *fw_params)
+{
+ fw_params->pcie_in_cir =
+ adf_rl_calculate_pci_bw(accel_dev, sla->cir, sla->srv, false);
+ fw_params->pcie_in_pir =
+ adf_rl_calculate_pci_bw(accel_dev, sla->pir, sla->srv, false);
+ fw_params->pcie_out_cir =
+ adf_rl_calculate_pci_bw(accel_dev, sla->cir, sla->srv, true);
+ fw_params->pcie_out_pir =
+ adf_rl_calculate_pci_bw(accel_dev, sla->pir, sla->srv, true);
+
+ fw_params->slice_util_cir =
+ adf_rl_calculate_slice_tokens(accel_dev, sla->cir, sla->srv);
+ fw_params->slice_util_pir =
+ adf_rl_calculate_slice_tokens(accel_dev, sla->pir, sla->srv);
+
+ fw_params->ae_util_cir =
+ adf_rl_calculate_ae_cycles(accel_dev, sla->cir, sla->srv);
+ fw_params->ae_util_pir =
+ adf_rl_calculate_ae_cycles(accel_dev, sla->pir, sla->srv);
+
+ memcpy(fw_params->rp_ids, sla->ring_pairs_ids,
+ sizeof(sla->ring_pairs_ids));
+}
+
+int adf_rl_send_admin_init_msg(struct adf_accel_dev *accel_dev,
+ struct rl_slice_cnt *slices_int)
+{
+ struct icp_qat_fw_init_admin_slice_cnt slices_resp = { };
+ int ret;
+
+ ret = adf_send_admin_rl_init(accel_dev, &slices_resp);
+ if (ret)
+ return ret;
+
+ slices_int->dcpr_cnt = slices_resp.dcpr_cnt;
+ slices_int->pke_cnt = slices_resp.pke_cnt;
+ /* For symmetric crypto, slice tokens are relative to the UCS slice */
+ slices_int->cph_cnt = slices_resp.ucs_cnt;
+
+ return 0;
+}
+
+int adf_rl_send_admin_add_update_msg(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool is_update)
+{
+ struct icp_qat_fw_init_admin_sla_config_params *fw_params;
+ struct icp_qat_fw_init_admin_req req = { };
+ dma_addr_t dma_addr;
+ int ret;
+
+ fw_params = dma_alloc_coherent(&GET_DEV(accel_dev), sizeof(*fw_params),
+ &dma_addr, GFP_KERNEL);
+ if (!fw_params)
+ return -ENOMEM;
+
+ prep_admin_req_params(accel_dev, sla, fw_params);
+ prep_admin_req_msg(sla, dma_addr, fw_params, &req, is_update);
+ ret = adf_send_admin_rl_add_update(accel_dev, &req);
+
+ dma_free_coherent(&GET_DEV(accel_dev), sizeof(*fw_params), fw_params,
+ dma_addr);
+
+ return ret;
+}
+
+int adf_rl_send_admin_delete_msg(struct adf_accel_dev *accel_dev, u16 node_id,
+ u8 node_type)
+{
+ return adf_send_admin_rl_delete(accel_dev, node_id, node_type);
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RL_ADMIN_H_
+#define ADF_RL_ADMIN_H_
+
+#include <linux/types.h>
+
+#include "adf_rl.h"
+
+int adf_rl_send_admin_init_msg(struct adf_accel_dev *accel_dev,
+ struct rl_slice_cnt *slices_int);
+int adf_rl_send_admin_add_update_msg(struct adf_accel_dev *accel_dev,
+ struct rl_sla *sla, bool is_update);
+int adf_rl_send_admin_delete_msg(struct adf_accel_dev *accel_dev, u16 node_id,
+ u8 node_type);
+
+#endif /* ADF_RL_ADMIN_H_ */
#include <linux/pci.h>
#include "adf_accel_devices.h"
#include "adf_cfg.h"
+#include "adf_cfg_services.h"
#include "adf_common_drv.h"
+#define UNSET_RING_NUM -1
+
static const char * const state_operations[] = {
[DEV_DOWN] = "down",
[DEV_UP] = "up",
case DEV_DOWN:
dev_info(dev, "Stopping device qat_dev%d\n", accel_id);
+ if (!adf_dev_started(accel_dev)) {
+ dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already down\n",
+ accel_id);
+
+ break;
+ }
+
ret = adf_dev_down(accel_dev, true);
- if (ret < 0)
- return -EINVAL;
+ if (ret)
+ return ret;
break;
case DEV_UP:
dev_info(dev, "Starting device qat_dev%d\n", accel_id);
ret = adf_dev_up(accel_dev, true);
- if (ret < 0) {
+ if (ret == -EALREADY) {
+ break;
+ } else if (ret) {
dev_err(dev, "Failed to start device qat_dev%d\n",
accel_id);
adf_dev_down(accel_dev, true);
return count;
}
-static const char * const services_operations[] = {
- ADF_CFG_CY,
- ADF_CFG_DC,
- ADF_CFG_SYM,
- ADF_CFG_ASYM,
- ADF_CFG_ASYM_SYM,
- ADF_CFG_ASYM_DC,
- ADF_CFG_DC_ASYM,
- ADF_CFG_SYM_DC,
- ADF_CFG_DC_SYM,
-};
-
static ssize_t cfg_services_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct adf_accel_dev *accel_dev;
int ret;
- ret = sysfs_match_string(services_operations, buf);
+ ret = sysfs_match_string(adf_cfg_services, buf);
if (ret < 0)
return ret;
return -EINVAL;
}
- ret = adf_sysfs_update_dev_config(accel_dev, services_operations[ret]);
+ ret = adf_sysfs_update_dev_config(accel_dev, adf_cfg_services[ret]);
if (ret < 0)
return ret;
static DEVICE_ATTR_RW(state);
static DEVICE_ATTR_RW(cfg_services);
+static ssize_t rp2srv_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct adf_hw_device_data *hw_data;
+ struct adf_accel_dev *accel_dev;
+ enum adf_cfg_service_type svc;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ hw_data = GET_HW_DATA(accel_dev);
+
+ if (accel_dev->sysfs.ring_num == UNSET_RING_NUM)
+ return -EINVAL;
+
+ down_read(&accel_dev->sysfs.lock);
+ svc = GET_SRV_TYPE(accel_dev, accel_dev->sysfs.ring_num %
+ hw_data->num_banks_per_vf);
+ up_read(&accel_dev->sysfs.lock);
+
+ switch (svc) {
+ case COMP:
+ return sysfs_emit(buf, "%s\n", ADF_CFG_DC);
+ case SYM:
+ return sysfs_emit(buf, "%s\n", ADF_CFG_SYM);
+ case ASYM:
+ return sysfs_emit(buf, "%s\n", ADF_CFG_ASYM);
+ default:
+ break;
+ }
+ return -EINVAL;
+}
+
+static ssize_t rp2srv_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct adf_accel_dev *accel_dev;
+ unsigned int ring;
+ int num_rings, ret;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ ret = kstrtouint(buf, 10, &ring);
+ if (ret)
+ return ret;
+
+ num_rings = GET_MAX_BANKS(accel_dev);
+ if (ring >= num_rings) {
+ dev_err(&GET_DEV(accel_dev),
+ "Device does not support more than %u ring pairs\n",
+ num_rings);
+ return -EINVAL;
+ }
+
+ down_write(&accel_dev->sysfs.lock);
+ accel_dev->sysfs.ring_num = ring;
+ up_write(&accel_dev->sysfs.lock);
+
+ return count;
+}
+static DEVICE_ATTR_RW(rp2srv);
+
+static ssize_t num_rps_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct adf_accel_dev *accel_dev;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ return sysfs_emit(buf, "%u\n", GET_MAX_BANKS(accel_dev));
+}
+static DEVICE_ATTR_RO(num_rps);
+
static struct attribute *qat_attrs[] = {
&dev_attr_state.attr,
&dev_attr_cfg_services.attr,
&dev_attr_pm_idle_enabled.attr,
+ &dev_attr_rp2srv.attr,
+ &dev_attr_num_rps.attr,
NULL,
};
"Failed to create qat attribute group: %d\n", ret);
}
+ accel_dev->sysfs.ring_num = UNSET_RING_NUM;
+
return ret;
}
EXPORT_SYMBOL_GPL(adf_sysfs_init);
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/sysfs.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+
+#include "adf_common_drv.h"
+#include "adf_sysfs_ras_counters.h"
+
+static ssize_t errors_correctable_show(struct device *dev,
+ struct device_attribute *dev_attr,
+ char *buf)
+{
+ struct adf_accel_dev *accel_dev;
+ unsigned long counter;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_CORR);
+ return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t errors_nonfatal_show(struct device *dev,
+ struct device_attribute *dev_attr,
+ char *buf)
+{
+ struct adf_accel_dev *accel_dev;
+ unsigned long counter;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t errors_fatal_show(struct device *dev,
+ struct device_attribute *dev_attr,
+ char *buf)
+{
+ struct adf_accel_dev *accel_dev;
+ unsigned long counter;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_FATAL);
+ return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t reset_error_counters_store(struct device *dev,
+ struct device_attribute *dev_attr,
+ const char *buf, size_t count)
+{
+ struct adf_accel_dev *accel_dev;
+
+ if (buf[0] != '1' || count != 2)
+ return -EINVAL;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+
+ return count;
+}
+
+static DEVICE_ATTR_RO(errors_correctable);
+static DEVICE_ATTR_RO(errors_nonfatal);
+static DEVICE_ATTR_RO(errors_fatal);
+static DEVICE_ATTR_WO(reset_error_counters);
+
+static struct attribute *qat_ras_attrs[] = {
+ &dev_attr_errors_correctable.attr,
+ &dev_attr_errors_nonfatal.attr,
+ &dev_attr_errors_fatal.attr,
+ &dev_attr_reset_error_counters.attr,
+ NULL,
+};
+
+static struct attribute_group qat_ras_group = {
+ .attrs = qat_ras_attrs,
+ .name = "qat_ras",
+};
+
+void adf_sysfs_start_ras(struct adf_accel_dev *accel_dev)
+{
+ if (!accel_dev->ras_errors.enabled)
+ return;
+
+ ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+
+ if (device_add_group(&GET_DEV(accel_dev), &qat_ras_group))
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to create qat_ras attribute group.\n");
+}
+
+void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev)
+{
+ if (!accel_dev->ras_errors.enabled)
+ return;
+
+ device_remove_group(&GET_DEV(accel_dev), &qat_ras_group);
+
+ ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+}
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RAS_H
+#define ADF_RAS_H
+
+#include <linux/bitops.h>
+#include <linux/atomic.h>
+
+struct adf_accel_dev;
+
+void adf_sysfs_start_ras(struct adf_accel_dev *accel_dev);
+void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev);
+
+#define ADF_RAS_ERR_CTR_READ(ras_errors, ERR) \
+ atomic_read(&(ras_errors).counter[ERR])
+
+#define ADF_RAS_ERR_CTR_CLEAR(ras_errors) \
+ do { \
+ for (int err = 0; err < ADF_RAS_ERRORS; ++err) \
+ atomic_set(&(ras_errors).counter[err], 0); \
+ } while (0)
+
+#define ADF_RAS_ERR_CTR_INC(ras_errors, ERR) \
+ atomic_inc(&(ras_errors).counter[ERR])
+
+#endif /* ADF_RAS_H */
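These helpers are intended to be shared between the device error handlers,
which increment the counters, and the qat_ras sysfs group above, which reads
and clears them. A minimal sketch of a hypothetical error-path user (the
function name and its arguments are illustrative, not part of the driver)::

	static void account_ras_error(struct adf_accel_dev *accel_dev,
				      bool uncorrectable, bool fatal)
	{
		if (!uncorrectable)
			ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
		else if (fatal)
			ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
		else
			ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
	}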
--- /dev/null
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#define dev_fmt(fmt) "RateLimiting: " fmt
+
+#include <linux/dev_printk.h>
+#include <linux/pci.h>
+#include <linux/sysfs.h>
+#include <linux/types.h>
+
+#include "adf_common_drv.h"
+#include "adf_rl.h"
+#include "adf_sysfs_rl.h"
+
+#define GET_RL_STRUCT(accel_dev) ((accel_dev)->rate_limiting->user_input)
+
+enum rl_ops {
+ ADD,
+ UPDATE,
+ RM,
+ RM_ALL,
+ GET,
+};
+
+enum rl_params {
+ RP_MASK,
+ ID,
+ CIR,
+ PIR,
+ SRV,
+ CAP_REM_SRV,
+};
+
+static const char *const rl_services[] = {
+ [ADF_SVC_ASYM] = "asym",
+ [ADF_SVC_SYM] = "sym",
+ [ADF_SVC_DC] = "dc",
+};
+
+static const char *const rl_operations[] = {
+ [ADD] = "add",
+ [UPDATE] = "update",
+ [RM] = "rm",
+ [RM_ALL] = "rm_all",
+ [GET] = "get",
+};
+
+static int set_param_u(struct device *dev, enum rl_params param, u64 set)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+ int ret = 0;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ down_write(&data->lock);
+ switch (param) {
+ case RP_MASK:
+ data->input.rp_mask = set;
+ break;
+ case CIR:
+ data->input.cir = set;
+ break;
+ case PIR:
+ data->input.pir = set;
+ break;
+ case SRV:
+ data->input.srv = set;
+ break;
+ case CAP_REM_SRV:
+ data->cap_rem_srv = set;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ up_write(&data->lock);
+
+ return ret;
+}
+
+static int set_param_s(struct device *dev, enum rl_params param, int set)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev || param != ID)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ down_write(&data->lock);
+ data->input.sla_id = set;
+ up_write(&data->lock);
+
+ return 0;
+}
+
+static int get_param_u(struct device *dev, enum rl_params param, u64 *get)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+ int ret = 0;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ down_read(&data->lock);
+ switch (param) {
+ case RP_MASK:
+ *get = data->input.rp_mask;
+ break;
+ case CIR:
+ *get = data->input.cir;
+ break;
+ case PIR:
+ *get = data->input.pir;
+ break;
+ case SRV:
+ *get = data->input.srv;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ up_read(&data->lock);
+
+ return ret;
+}
+
+static int get_param_s(struct device *dev, enum rl_params param)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+ int ret = 0;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ down_read(&data->lock);
+ if (param == ID)
+ ret = data->input.sla_id;
+ up_read(&data->lock);
+
+ return ret;
+}
+
+static ssize_t rp_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ int ret;
+ u64 get;
+
+ ret = get_param_u(dev, RP_MASK, &get);
+ if (ret)
+ return ret;
+
+ return sysfs_emit(buf, "%#llx\n", get);
+}
+
+static ssize_t rp_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ u64 val;
+
+ err = kstrtou64(buf, 16, &val);
+ if (err)
+ return err;
+
+ err = set_param_u(dev, RP_MASK, val);
+ if (err)
+ return err;
+
+ return count;
+}
+static DEVICE_ATTR_RW(rp);
+
+static ssize_t id_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "%d\n", get_param_s(dev, ID));
+}
+
+static ssize_t id_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ int val;
+
+ err = kstrtoint(buf, 10, &val);
+ if (err)
+ return err;
+
+ err = set_param_s(dev, ID, val);
+ if (err)
+ return err;
+
+ return count;
+}
+static DEVICE_ATTR_RW(id);
+
+static ssize_t cir_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ int ret;
+ u64 get;
+
+ ret = get_param_u(dev, CIR, &get);
+ if (ret)
+ return ret;
+
+ return sysfs_emit(buf, "%llu\n", get);
+}
+
+static ssize_t cir_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned int val;
+ int err;
+
+ err = kstrtouint(buf, 10, &val);
+ if (err)
+ return err;
+
+ err = set_param_u(dev, CIR, val);
+ if (err)
+ return err;
+
+ return count;
+}
+static DEVICE_ATTR_RW(cir);
+
+static ssize_t pir_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ int ret;
+ u64 get;
+
+ ret = get_param_u(dev, PIR, &get);
+ if (ret)
+ return ret;
+
+ return sysfs_emit(buf, "%llu\n", get);
+}
+
+static ssize_t pir_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned int val;
+ int err;
+
+ err = kstrtouint(buf, 10, &val);
+ if (err)
+ return err;
+
+ err = set_param_u(dev, PIR, val);
+ if (err)
+ return err;
+
+ return count;
+}
+static DEVICE_ATTR_RW(pir);
+
+static ssize_t srv_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ int ret;
+ u64 get;
+
+ ret = get_param_u(dev, SRV, &get);
+ if (ret)
+ return ret;
+
+ if (get == ADF_SVC_NONE)
+ return -EINVAL;
+
+ return sysfs_emit(buf, "%s\n", rl_services[get]);
+}
+
+static ssize_t srv_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned int val;
+ int ret;
+
+ ret = sysfs_match_string(rl_services, buf);
+ if (ret < 0)
+ return ret;
+
+ val = ret;
+ ret = set_param_u(dev, SRV, val);
+ if (ret)
+ return ret;
+
+ return count;
+}
+static DEVICE_ATTR_RW(srv);
+
+static ssize_t cap_rem_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+ int ret, rem_cap;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ down_read(&data->lock);
+ rem_cap = adf_rl_get_capability_remaining(accel_dev, data->cap_rem_srv,
+ RL_SLA_EMPTY_ID);
+ up_read(&data->lock);
+ if (rem_cap < 0)
+ return rem_cap;
+
+ ret = sysfs_emit(buf, "%u\n", rem_cap);
+
+ return ret;
+}
+
+static ssize_t cap_rem_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned int val;
+ int ret;
+
+ ret = sysfs_match_string(rl_services, buf);
+ if (ret < 0)
+ return ret;
+
+ val = ret;
+ ret = set_param_u(dev, CAP_REM_SRV, val);
+ if (ret)
+ return ret;
+
+ return count;
+}
+static DEVICE_ATTR_RW(cap_rem);
+
+static ssize_t sla_op_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct adf_rl_interface_data *data;
+ struct adf_accel_dev *accel_dev;
+ int ret;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ if (!accel_dev)
+ return -EINVAL;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ ret = sysfs_match_string(rl_operations, buf);
+ if (ret < 0)
+ return ret;
+
+ down_write(&data->lock);
+ switch (ret) {
+ case ADD:
+ data->input.parent_id = RL_PARENT_DEFAULT_ID;
+ data->input.type = RL_LEAF;
+ data->input.sla_id = 0;
+ ret = adf_rl_add_sla(accel_dev, &data->input);
+ if (ret)
+ goto err_free_lock;
+ break;
+ case UPDATE:
+ ret = adf_rl_update_sla(accel_dev, &data->input);
+ if (ret)
+ goto err_free_lock;
+ break;
+ case RM:
+ ret = adf_rl_remove_sla(accel_dev, data->input.sla_id);
+ if (ret)
+ goto err_free_lock;
+ break;
+ case RM_ALL:
+ adf_rl_remove_sla_all(accel_dev, false);
+ break;
+ case GET:
+ ret = adf_rl_get_sla(accel_dev, &data->input);
+ if (ret)
+ goto err_free_lock;
+ break;
+ default:
+ ret = -EINVAL;
+ goto err_free_lock;
+ }
+ up_write(&data->lock);
+
+ return count;
+
+err_free_lock:
+ up_write(&data->lock);
+
+ return ret;
+}
+static DEVICE_ATTR_WO(sla_op);
+
+static struct attribute *qat_rl_attrs[] = {
+ &dev_attr_rp.attr,
+ &dev_attr_id.attr,
+ &dev_attr_cir.attr,
+ &dev_attr_pir.attr,
+ &dev_attr_srv.attr,
+ &dev_attr_cap_rem.attr,
+ &dev_attr_sla_op.attr,
+ NULL,
+};
+
+static struct attribute_group qat_rl_group = {
+ .attrs = qat_rl_attrs,
+ .name = "qat_rl",
+};
+
+int adf_sysfs_rl_add(struct adf_accel_dev *accel_dev)
+{
+ struct adf_rl_interface_data *data;
+ int ret;
+
+ data = &GET_RL_STRUCT(accel_dev);
+
+ ret = device_add_group(&GET_DEV(accel_dev), &qat_rl_group);
+ if (ret)
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to create qat_rl attribute group\n");
+
+ data->cap_rem_srv = ADF_SVC_NONE;
+ data->input.srv = ADF_SVC_NONE;
+
+ return ret;
+}
+
+void adf_sysfs_rl_rm(struct adf_accel_dev *accel_dev)
+{
+ device_remove_group(&GET_DEV(accel_dev), &qat_rl_group);
+}
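The qat_rl attributes form a small command interface: the parameter
attributes (rp, srv, cir, pir, id) stage an SLA description and writing to
sla_op applies it. A minimal user-space sketch of the add sequence (not part
of the driver; the device path and the values are placeholders)::

	#include <stdio.h>

	#define RL_DIR "/sys/bus/pci/devices/0000:6b:00.0/qat_rl/"

	static int wr(const char *attr, const char *val)
	{
		char path[128];
		FILE *f;

		snprintf(path, sizeof(path), RL_DIR "%s", attr);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%s", val);
		return fclose(f);
	}

	int main(void)
	{
		char id[16];
		FILE *f;

		wr("rp", "0x3");	/* ring pairs 0 and 1 */
		wr("srv", "sym");
		wr("cir", "250");	/* permille of device sym capacity */
		wr("pir", "500");
		wr("sla_op", "add");	/* creates a leaf under the default parent */

		f = fopen(RL_DIR "id", "r");
		if (f && fgets(id, sizeof(id), f))
			printf("new sla_id: %s", id);
		if (f)
			fclose(f);

		return 0;
	}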
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_SYSFS_RL_H_
+#define ADF_SYSFS_RL_H_
+
+struct adf_accel_dev;
+
+int adf_sysfs_rl_add(struct adf_accel_dev *accel_dev);
+void adf_sysfs_rl_rm(struct adf_accel_dev *accel_dev);
+
+#endif /* ADF_SYSFS_RL_H_ */
int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name)
{
struct adf_etr_ring_debug_entry *ring_debug;
- char entry_name[8];
+ char entry_name[16];
ring_debug = kzalloc(sizeof(*ring_debug), GFP_KERNEL);
if (!ring_debug)
{
struct adf_accel_dev *accel_dev = bank->accel_dev;
struct dentry *parent = accel_dev->transport->debug;
- char name[8];
+ char name[16];
snprintf(name, sizeof(name), "bank_%02d", bank->bank_number);
bank->bank_debug_dir = debugfs_create_dir(name, parent);
#include "icp_qat_fw.h"
+#define RL_MAX_RP_IDS 16
+
enum icp_qat_fw_init_admin_cmd_id {
ICP_QAT_FW_INIT_AE = 0,
ICP_QAT_FW_TRNG_ENABLE = 1,
ICP_QAT_FW_HEARTBEAT_SYNC = 7,
ICP_QAT_FW_HEARTBEAT_GET = 8,
ICP_QAT_FW_COMP_CAPABILITY_GET = 9,
+ ICP_QAT_FW_CRYPTO_CAPABILITY_GET = 10,
+ ICP_QAT_FW_DC_CHAIN_INIT = 11,
ICP_QAT_FW_HEARTBEAT_TIMER_SET = 13,
+ ICP_QAT_FW_RL_INIT = 15,
ICP_QAT_FW_TIMER_GET = 19,
+ ICP_QAT_FW_CNV_STATS_GET = 20,
ICP_QAT_FW_PM_STATE_CONFIG = 128,
+ ICP_QAT_FW_PM_INFO = 129,
+ ICP_QAT_FW_RL_ADD = 134,
+ ICP_QAT_FW_RL_UPDATE = 135,
+ ICP_QAT_FW_RL_REMOVE = 136,
};
enum icp_qat_fw_init_admin_resp_status {
ICP_QAT_FW_INIT_RESP_STATUS_FAIL
};
+struct icp_qat_fw_init_admin_slice_cnt {
+ __u8 cpr_cnt;
+ __u8 xlt_cnt;
+ __u8 dcpr_cnt;
+ __u8 pke_cnt;
+ __u8 wat_cnt;
+ __u8 wcp_cnt;
+ __u8 ucs_cnt;
+ __u8 cph_cnt;
+ __u8 ath_cnt;
+};
+
+struct icp_qat_fw_init_admin_sla_config_params {
+ __u32 pcie_in_cir;
+ __u32 pcie_in_pir;
+ __u32 pcie_out_cir;
+ __u32 pcie_out_pir;
+ __u32 slice_util_cir;
+ __u32 slice_util_pir;
+ __u32 ae_util_cir;
+ __u32 ae_util_pir;
+ __u16 rp_ids[RL_MAX_RP_IDS];
+};
+
struct icp_qat_fw_init_admin_req {
__u16 init_cfg_sz;
__u8 resrvd1;
struct {
__u32 heartbeat_ticks;
};
+ struct {
+ __u16 node_id;
+ __u8 node_type;
+ __u8 svc_type;
+ __u8 resrvd5[3];
+ __u8 rp_count;
+ };
__u32 idle_filter;
};
__u16 version_major_num;
};
__u32 extended_features;
+ struct {
+ __u16 error_count;
+ __u16 latest_error;
+ };
};
__u64 opaque_data;
union {
__u32 unsuccessful_count;
__u64 resrvd8;
};
+ struct icp_qat_fw_init_admin_slice_cnt slices;
+ __u16 fw_capabilities;
};
} __packed;
#define ICP_QAT_FW_SYNC ICP_QAT_FW_HEARTBEAT_SYNC
+#define ICP_QAT_FW_CAPABILITIES_GET ICP_QAT_FW_CRYPTO_CAPABILITY_GET
+
+#define ICP_QAT_NUMBER_OF_PM_EVENTS 8
+
+struct icp_qat_fw_init_admin_pm_info {
+ __u16 max_pwrreq;
+ __u16 min_pwrreq;
+ __u16 resvrd1;
+ __u8 pwr_state;
+ __u8 resvrd2;
+ __u32 fusectl0;
+ struct_group(event_counters,
+ __u32 sys_pm;
+ __u32 host_msg;
+ __u32 unknown;
+ __u32 local_ssm;
+ __u32 timer;
+ );
+ __u32 event_log[ICP_QAT_NUMBER_OF_PM_EVENTS];
+ struct_group(pm,
+ __u32 fw_init;
+ __u32 pwrreq;
+ __u32 status;
+ __u32 main;
+ __u32 thread;
+ );
+ struct_group(ssm,
+ __u32 pm_enable;
+ __u32 pm_active_status;
+ __u32 pm_managed_status;
+ __u32 pm_domain_status;
+ __u32 active_constraint;
+ );
+ __u32 resvrd3[6];
+};
#endif
#ifndef _ICP_QAT_HW_H_
#define _ICP_QAT_HW_H_
+#include <linux/bits.h>
+
enum icp_qat_hw_ae_id {
ICP_QAT_HW_AE_0 = 0,
ICP_QAT_HW_AE_1 = 1,
spin_unlock_bh(&backlog->lock);
}
-static void qat_alg_backlog_req(struct qat_alg_req *req,
- struct qat_instance_backlog *backlog)
-{
- INIT_LIST_HEAD(&req->list);
-
- spin_lock_bh(&backlog->lock);
- list_add_tail(&req->list, &backlog->list);
- spin_unlock_bh(&backlog->lock);
-}
-
-static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
+static bool qat_alg_try_enqueue(struct qat_alg_req *req)
{
struct qat_instance_backlog *backlog = req->backlog;
struct adf_etr_ring_data *tx_ring = req->tx_ring;
u32 *fw_req = req->fw_req;
- /* If any request is already backlogged, then add to backlog list */
+ /* Check if any request is already backlogged */
if (!list_empty(&backlog->list))
- goto enqueue;
+ return false;
- /* If ring is nearly full, then add to backlog list */
+ /* Check if ring is nearly full */
if (adf_ring_nearly_full(tx_ring))
- goto enqueue;
+ return false;
- /* If adding request to HW ring fails, then add to backlog list */
+ /* Try to enqueue to HW ring */
if (adf_send_message(tx_ring, fw_req))
- goto enqueue;
+ return false;
- return -EINPROGRESS;
+ return true;
+}
-enqueue:
- qat_alg_backlog_req(req, backlog);
- return -EBUSY;
+static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
+{
+ struct qat_instance_backlog *backlog = req->backlog;
+ int ret = -EINPROGRESS;
+
+ if (qat_alg_try_enqueue(req))
+ return ret;
+
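+	/*
+	 * Retry under the backlog lock so that the enqueue-or-backlog
+	 * decision cannot race with concurrent backlog processing; only
+	 * if the second attempt also fails is the request appended to
+	 * the backlog list.
+	 */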
+ spin_lock_bh(&backlog->lock);
+ if (!qat_alg_try_enqueue(req)) {
+ list_add_tail(&req->list, &backlog->list);
+ ret = -EBUSY;
+ }
+ spin_unlock_bh(&backlog->lock);
+
+ return ret;
}
int qat_alg_send_message(struct qat_alg_req *req)
acomp_request_complete(areq, ret);
}
-static int parse_zlib_header(u16 zlib_h)
-{
- int ret = -EINVAL;
- __be16 header;
- u8 *header_p;
- u8 cmf, flg;
-
- header = cpu_to_be16(zlib_h);
- header_p = (u8 *)&header;
-
- flg = header_p[0];
- cmf = header_p[1];
-
- if (cmf >> QAT_RFC_1950_CM_OFFSET > QAT_RFC_1950_CM_DEFLATE_CINFO_32K)
- return ret;
-
- if ((cmf & QAT_RFC_1950_CM_MASK) != QAT_RFC_1950_CM_DEFLATE)
- return ret;
-
- if (flg & QAT_RFC_1950_DICT_MASK)
- return ret;
-
- return 0;
-}
-
-static int qat_comp_rfc1950_callback(struct qat_compression_req *qat_req,
- void *resp)
-{
- struct acomp_req *areq = qat_req->acompress_req;
- enum direction dir = qat_req->dir;
- __be32 qat_produced_adler;
-
- qat_produced_adler = cpu_to_be32(qat_comp_get_produced_adler32(resp));
-
- if (dir == COMPRESSION) {
- __be16 zlib_header;
-
- zlib_header = cpu_to_be16(QAT_RFC_1950_COMP_HDR);
- scatterwalk_map_and_copy(&zlib_header, areq->dst, 0, QAT_RFC_1950_HDR_SIZE, 1);
- areq->dlen += QAT_RFC_1950_HDR_SIZE;
-
- scatterwalk_map_and_copy(&qat_produced_adler, areq->dst, areq->dlen,
- QAT_RFC_1950_FOOTER_SIZE, 1);
- areq->dlen += QAT_RFC_1950_FOOTER_SIZE;
- } else {
- __be32 decomp_adler;
- int footer_offset;
- int consumed;
-
- consumed = qat_comp_get_consumed_ctr(resp);
- footer_offset = consumed + QAT_RFC_1950_HDR_SIZE;
- if (footer_offset + QAT_RFC_1950_FOOTER_SIZE > areq->slen)
- return -EBADMSG;
-
- scatterwalk_map_and_copy(&decomp_adler, areq->src, footer_offset,
- QAT_RFC_1950_FOOTER_SIZE, 0);
-
- if (qat_produced_adler != decomp_adler)
- return -EBADMSG;
- }
- return 0;
-}
-
static void qat_comp_generic_callback(struct qat_compression_req *qat_req,
void *resp)
{
memset(ctx, 0, sizeof(*ctx));
}
-static int qat_comp_alg_rfc1950_init_tfm(struct crypto_acomp *acomp_tfm)
-{
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
- struct qat_compression_ctx *ctx = crypto_tfm_ctx(tfm);
- int ret;
-
- ret = qat_comp_alg_init_tfm(acomp_tfm);
- ctx->qat_comp_callback = &qat_comp_rfc1950_callback;
-
- return ret;
-}
-
static int qat_comp_alg_compress_decompress(struct acomp_req *areq, enum direction dir,
unsigned int shdr, unsigned int sftr,
unsigned int dhdr, unsigned int dftr)
return qat_comp_alg_compress_decompress(req, DECOMPRESSION, 0, 0, 0, 0);
}
-static int qat_comp_alg_rfc1950_compress(struct acomp_req *req)
-{
- if (!req->dst && req->dlen != 0)
- return -EINVAL;
-
- if (req->dst && req->dlen <= QAT_RFC_1950_HDR_SIZE + QAT_RFC_1950_FOOTER_SIZE)
- return -EINVAL;
-
- return qat_comp_alg_compress_decompress(req, COMPRESSION, 0, 0,
- QAT_RFC_1950_HDR_SIZE,
- QAT_RFC_1950_FOOTER_SIZE);
-}
-
-static int qat_comp_alg_rfc1950_decompress(struct acomp_req *req)
-{
- struct crypto_acomp *acomp_tfm = crypto_acomp_reqtfm(req);
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
- struct qat_compression_ctx *ctx = crypto_tfm_ctx(tfm);
- struct adf_accel_dev *accel_dev = ctx->inst->accel_dev;
- u16 zlib_header;
- int ret;
-
- if (req->slen <= QAT_RFC_1950_HDR_SIZE + QAT_RFC_1950_FOOTER_SIZE)
- return -EBADMSG;
-
- scatterwalk_map_and_copy(&zlib_header, req->src, 0, QAT_RFC_1950_HDR_SIZE, 0);
-
- ret = parse_zlib_header(zlib_header);
- if (ret) {
- dev_dbg(&GET_DEV(accel_dev), "Error parsing zlib header\n");
- return ret;
- }
-
- return qat_comp_alg_compress_decompress(req, DECOMPRESSION, QAT_RFC_1950_HDR_SIZE,
- QAT_RFC_1950_FOOTER_SIZE, 0, 0);
-}
-
static struct acomp_alg qat_acomp[] = { {
.base = {
.cra_name = "deflate",
.decompress = qat_comp_alg_decompress,
.dst_free = sgl_free,
.reqsize = sizeof(struct qat_compression_req),
-}, {
- .base = {
- .cra_name = "zlib-deflate",
- .cra_driver_name = "qat_zlib_deflate",
- .cra_priority = 4001,
- .cra_flags = CRYPTO_ALG_ASYNC,
- .cra_ctxsize = sizeof(struct qat_compression_ctx),
- .cra_module = THIS_MODULE,
- },
- .init = qat_comp_alg_rfc1950_init_tfm,
- .exit = qat_comp_alg_exit_tfm,
- .compress = qat_comp_alg_rfc1950_compress,
- .decompress = qat_comp_alg_rfc1950_decompress,
- .dst_free = sgl_free,
- .reqsize = sizeof(struct qat_compression_req),
-} };
+}};
int qat_comp_algs_register(void)
{
unsigned long ae = 0;
int i;
- strncpy(buf, str, 15);
+ strscpy(buf, str, sizeof(buf));
for (i = 0; i < 16; i++) {
if (!isdigit(buf[i])) {
buf[i] = '\0';
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2014 - 2021 Intel Corporation */
#include <adf_accel_devices.h>
+#include <adf_admin.h>
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
MODULE_FIRMWARE(ADF_DH895XCC_MMP);
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
MODULE_AUTHOR("Intel");
MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
return ret;
}
-static int mv_cesa_remove(struct platform_device *pdev)
+static void mv_cesa_remove(struct platform_device *pdev)
{
struct mv_cesa_dev *cesa = platform_get_drvdata(pdev);
int i;
mv_cesa_put_sram(pdev, i);
irq_set_affinity_hint(cesa->engines[i].irq, NULL);
}
-
- return 0;
}
static const struct platform_device_id mv_cesa_plat_id_table[] = {
static struct platform_driver marvell_cesa = {
.probe = mv_cesa_probe,
- .remove = mv_cesa_remove,
+ .remove_new = mv_cesa_remove,
.id_table = mv_cesa_plat_id_table,
.driver = {
.name = "marvell-cesa",
.cra_name = "sha1",
.cra_driver_name = "sha1-dcp",
.cra_priority = 400,
- .cra_alignmask = 63,
.cra_flags = CRYPTO_ALG_ASYNC,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct dcp_async_ctx),
.cra_name = "sha256",
.cra_driver_name = "sha256-dcp",
.cra_priority = 400,
- .cra_alignmask = 63,
.cra_flags = CRYPTO_ALG_ASYNC,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct dcp_async_ctx),
return ret;
}
-static int mxs_dcp_remove(struct platform_device *pdev)
+static void mxs_dcp_remove(struct platform_device *pdev)
{
struct dcp *sdcp = platform_get_drvdata(pdev);
platform_set_drvdata(pdev, NULL);
global_sdcp = NULL;
-
- return 0;
}
static const struct of_device_id mxs_dcp_dt_ids[] = {
static struct platform_driver mxs_dcp_driver = {
.probe = mxs_dcp_probe,
- .remove = mxs_dcp_remove,
+ .remove_new = mxs_dcp_remove,
.driver = {
.name = "mxs-dcp",
.of_match_table = mxs_dcp_dt_ids,
return err;
}
-static int n2_crypto_remove(struct platform_device *dev)
+static void n2_crypto_remove(struct platform_device *dev)
{
struct n2_crypto *np = dev_get_drvdata(&dev->dev);
release_global_resources();
free_n2cp(np);
-
- return 0;
}
static struct n2_mau *alloc_ncp(void)
return err;
}
-static int n2_mau_remove(struct platform_device *dev)
+static void n2_mau_remove(struct platform_device *dev)
{
struct n2_mau *mp = dev_get_drvdata(&dev->dev);
release_global_resources();
free_ncp(mp);
-
- return 0;
}
static const struct of_device_id n2_crypto_match[] = {
.of_match_table = n2_crypto_match,
},
.probe = n2_crypto_probe,
- .remove = n2_crypto_remove,
+ .remove_new = n2_crypto_remove,
};
static const struct of_device_id n2_mau_match[] = {
.of_match_table = n2_mau_match,
},
.probe = n2_mau_probe,
- .remove = n2_mau_remove,
+ .remove_new = n2_mau_remove,
};
static struct platform_driver * const drivers[] = {
return err;
}
-static int omap_aes_remove(struct platform_device *pdev)
+static void omap_aes_remove(struct platform_device *pdev)
{
struct omap_aes_dev *dd = platform_get_drvdata(pdev);
struct aead_engine_alg *aalg;
pm_runtime_disable(dd->dev);
sysfs_remove_group(&dd->dev->kobj, &omap_aes_attr_group);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
static struct platform_driver omap_aes_driver = {
.probe = omap_aes_probe,
- .remove = omap_aes_remove,
+ .remove_new = omap_aes_remove,
.driver = {
.name = "omap-aes",
.pm = &omap_aes_pm_ops,
return err;
}
-static int omap_des_remove(struct platform_device *pdev)
+static void omap_des_remove(struct platform_device *pdev)
{
struct omap_des_dev *dd = platform_get_drvdata(pdev);
int i, j;
tasklet_kill(&dd->done_task);
omap_des_dma_cleanup(dd);
pm_runtime_disable(dd->dev);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
static struct platform_driver omap_des_driver = {
.probe = omap_des_probe,
- .remove = omap_des_remove,
+ .remove_new = omap_des_remove,
.driver = {
.name = "omap-des",
.pm = &omap_des_pm_ops,
if (big_endian)
for (i = 0; i < d; i++)
- hash[i] = be32_to_cpup((__be32 *)in + i);
+ put_unaligned(be32_to_cpup((__be32 *)in + i), &hash[i]);
else
for (i = 0; i < d; i++)
- hash[i] = le32_to_cpup((__le32 *)in + i);
+ put_unaligned(le32_to_cpup((__le32 *)in + i), &hash[i]);
}
static void omap_sham_write_ctrl_omap2(struct omap_sham_dev *dd, size_t length,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha1_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_md5_init,
.cra_exit = omap_sham_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha224_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha256_init,
.cra_exit = omap_sham_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha384_init,
.cra_exit = omap_sham_cra_exit,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha512_init,
.cra_exit = omap_sham_cra_exit,
return err;
}
-static int omap_sham_remove(struct platform_device *pdev)
+static void omap_sham_remove(struct platform_device *pdev)
{
struct omap_sham_dev *dd;
int i, j;
dma_release_channel(dd->dma_lch);
sysfs_remove_group(&dd->dev->kobj, &omap_sham_attr_group);
-
- return 0;
}
static struct platform_driver omap_sham_driver = {
.probe = omap_sham_probe,
- .remove = omap_sham_remove,
+ .remove_new = omap_sham_remove,
.driver = {
.name = "omap-sham",
.of_match_table = omap_sham_of_match,
return ret;
}
-static int qce_crypto_remove(struct platform_device *pdev)
+static void qce_crypto_remove(struct platform_device *pdev)
{
struct qce_device *qce = platform_get_drvdata(pdev);
clk_disable_unprepare(qce->bus);
clk_disable_unprepare(qce->iface);
clk_disable_unprepare(qce->core);
- return 0;
}
static const struct of_device_id qce_crypto_of_match[] = {
static struct platform_driver qce_crypto_driver = {
.probe = qce_crypto_probe,
- .remove = qce_crypto_remove,
+ .remove_new = qce_crypto_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = qce_crypto_of_match,
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/crypto.h>
+#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#define WORD_SZ 4
+#define QCOM_TRNG_QUALITY 1024
+
struct qcom_rng {
struct mutex lock;
void __iomem *base;
struct clk *clk;
- unsigned int skip_init;
+ struct hwrng hwrng;
+ struct qcom_rng_of_data *of_data;
};
struct qcom_rng_ctx {
struct qcom_rng *rng;
};
+struct qcom_rng_of_data {
+ bool skip_init;
+ bool hwrng_support;
+};
+
static struct qcom_rng *qcom_rng_dev;
static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
} else {
/* copy only remaining bytes */
memcpy(data, &val, max - currsize);
- break;
+ currsize = max;
}
} while (currsize < max);
- return 0;
+ return currsize;
}
static int qcom_rng_generate(struct crypto_rng *tfm,
mutex_unlock(&rng->lock);
clk_disable_unprepare(rng->clk);
+ if (ret >= 0)
+ ret = 0;
+
return ret;
}
return 0;
}
+static int qcom_hwrng_read(struct hwrng *hwrng, void *data, size_t max, bool wait)
+{
+ struct qcom_rng *qrng = container_of(hwrng, struct qcom_rng, hwrng);
+
+ return qcom_rng_read(qrng, data, max);
+}
+
static int qcom_rng_enable(struct qcom_rng *rng)
{
u32 val;
ctx->rng = qcom_rng_dev;
- if (!ctx->rng->skip_init)
+ if (!ctx->rng->of_data->skip_init)
return qcom_rng_enable(ctx->rng);
return 0;
if (IS_ERR(rng->clk))
return PTR_ERR(rng->clk);
- rng->skip_init = (unsigned long)device_get_match_data(&pdev->dev);
+ rng->of_data = (struct qcom_rng_of_data *)of_device_get_match_data(&pdev->dev);
qcom_rng_dev = rng;
ret = crypto_register_rng(&qcom_rng_alg);
if (ret) {
dev_err(&pdev->dev, "Register crypto rng failed: %d\n", ret);
qcom_rng_dev = NULL;
+ return ret;
}
+ if (rng->of_data->hwrng_support) {
+ rng->hwrng.name = "qcom_hwrng";
+ rng->hwrng.read = qcom_hwrng_read;
+ rng->hwrng.quality = QCOM_TRNG_QUALITY;
+ ret = devm_hwrng_register(&pdev->dev, &rng->hwrng);
+ if (ret) {
+ dev_err(&pdev->dev, "Register hwrng failed: %d\n", ret);
+ qcom_rng_dev = NULL;
+ goto fail;
+ }
+ }
+
+ return ret;
+fail:
+ crypto_unregister_rng(&qcom_rng_alg);
return ret;
}
-static int qcom_rng_remove(struct platform_device *pdev)
+static void qcom_rng_remove(struct platform_device *pdev)
{
crypto_unregister_rng(&qcom_rng_alg);
qcom_rng_dev = NULL;
-
- return 0;
}
+static struct qcom_rng_of_data qcom_prng_of_data = {
+ .skip_init = false,
+ .hwrng_support = false,
+};
+
+static struct qcom_rng_of_data qcom_prng_ee_of_data = {
+ .skip_init = true,
+ .hwrng_support = false,
+};
+
+static struct qcom_rng_of_data qcom_trng_of_data = {
+ .skip_init = true,
+ .hwrng_support = true,
+};
+
static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
{ .id = "QCOM8160", .driver_data = 1 },
{}
MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);
static const struct of_device_id __maybe_unused qcom_rng_of_match[] = {
- { .compatible = "qcom,prng", .data = (void *)0},
- { .compatible = "qcom,prng-ee", .data = (void *)1},
+ { .compatible = "qcom,prng", .data = &qcom_prng_of_data },
+ { .compatible = "qcom,prng-ee", .data = &qcom_prng_ee_of_data },
+ { .compatible = "qcom,trng", .data = &qcom_trng_of_data },
{}
};
MODULE_DEVICE_TABLE(of, qcom_rng_of_match);
static struct platform_driver qcom_rng_driver = {
.probe = qcom_rng_probe,
- .remove = qcom_rng_remove,
+ .remove_new = qcom_rng_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = of_match_ptr(qcom_rng_of_match),
return err;
}
-static int rk_crypto_remove(struct platform_device *pdev)
+static void rk_crypto_remove(struct platform_device *pdev)
{
struct rk_crypto_info *crypto_tmp = platform_get_drvdata(pdev);
struct rk_crypto_info *first;
}
rk_crypto_pm_exit(crypto_tmp);
crypto_engine_exit(crypto_tmp->engine);
- return 0;
}
static struct platform_driver crypto_driver = {
.probe = rk_crypto_probe,
- .remove = rk_crypto_remove,
+ .remove_new = rk_crypto_remove,
.driver = {
.name = "rk3288-crypto",
.pm = &rk_crypto_pm_ops,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
/* HASH HW constants */
#define BUFLEN HASH_BLOCK_SIZE
-#define SSS_HASH_DMA_LEN_ALIGN 8
-#define SSS_HASH_DMA_ALIGN_MASK (SSS_HASH_DMA_LEN_ALIGN - 1)
-
#define SSS_HASH_QUEUE_LENGTH 10
/**
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
return err;
}
-static int s5p_aes_remove(struct platform_device *pdev)
+static void s5p_aes_remove(struct platform_device *pdev)
{
struct s5p_aes_dev *pdata = platform_get_drvdata(pdev);
int i;
clk_disable_unprepare(pdata->clk);
s5p_dev = NULL;
-
- return 0;
}
static struct platform_driver s5p_aes_crypto = {
.probe = s5p_aes_probe,
- .remove = s5p_aes_remove,
+ .remove_new = s5p_aes_remove,
.driver = {
.name = "s5p-secss",
.of_match_table = s5p_sss_dt_match,
return ret;
}
-static int sa_ul_remove(struct platform_device *pdev)
+static void sa_ul_remove(struct platform_device *pdev)
{
struct sa_crypto_data *dev_data = platform_get_drvdata(pdev);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
-
- return 0;
}
static struct platform_driver sa_ul_driver = {
.probe = sa_ul_probe,
- .remove = sa_ul_remove,
+ .remove_new = sa_ul_remove,
.driver = {
.name = "saul-crypto",
.of_match_table = of_match,
return err;
}
-static int sahara_remove(struct platform_device *pdev)
+static void sahara_remove(struct platform_device *pdev)
{
struct sahara_dev *dev = platform_get_drvdata(pdev);
clk_disable_unprepare(dev->clk_ahb);
dev_ptr = NULL;
-
- return 0;
}
static struct platform_driver sahara_driver = {
.probe = sahara_probe,
- .remove = sahara_remove,
+ .remove_new = sahara_remove,
.driver = {
.name = SAHARA_NAME,
.of_match_table = sahara_dt_ids,
data = (u32 *)req->result;
for (count = 0; count < mlen; count++)
- data[count] = readl(ctx->cryp->base + STARFIVE_HASH_SHARDR);
+ put_unaligned(readl(ctx->cryp->base + STARFIVE_HASH_SHARDR),
+ &data[count]);
return 0;
}
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SM3_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SM3_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 3,
.cra_ctxsize = sizeof(struct stm32_crc_ctx),
.cra_module = THIS_MODULE,
.cra_init = stm32_crc32_cra_init,
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
- .cra_alignmask = 3,
.cra_ctxsize = sizeof(struct stm32_crc_ctx),
.cra_module = THIS_MODULE,
.cra_init = stm32_crc32c_cra_init,
return 0;
}
-static int stm32_crc_remove(struct platform_device *pdev)
+static void stm32_crc_remove(struct platform_device *pdev)
{
struct stm32_crc *crc = platform_get_drvdata(pdev);
int ret = pm_runtime_get_sync(crc->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(crc->dev);
- return ret;
- }
-
spin_lock(&crc_list.lock);
list_del(&crc->list);
spin_unlock(&crc_list.lock);
pm_runtime_disable(crc->dev);
pm_runtime_put_noidle(crc->dev);
- clk_disable_unprepare(crc->clk);
-
- return 0;
+ if (ret >= 0)
+ clk_disable(crc->clk);
+ clk_unprepare(crc->clk);
}
static int __maybe_unused stm32_crc_suspend(struct device *dev)
static struct platform_driver stm32_crc_driver = {
.probe = stm32_crc_probe,
- .remove = stm32_crc_remove,
+ .remove_new = stm32_crc_remove,
.driver = {
.name = DRIVER_NAME,
.pm = &stm32_crc_pm_ops,
return ret;
}
-static int stm32_cryp_remove(struct platform_device *pdev)
+static void stm32_cryp_remove(struct platform_device *pdev)
{
struct stm32_cryp *cryp = platform_get_drvdata(pdev);
int ret;
- if (!cryp)
- return -ENODEV;
-
- ret = pm_runtime_resume_and_get(cryp->dev);
- if (ret < 0)
- return ret;
+ ret = pm_runtime_get_sync(cryp->dev);
if (cryp->caps->aeads_support)
crypto_engine_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
pm_runtime_disable(cryp->dev);
pm_runtime_put_noidle(cryp->dev);
- clk_disable_unprepare(cryp->clk);
-
- return 0;
+ if (ret >= 0)
+ clk_disable_unprepare(cryp->clk);
}
#ifdef CONFIG_PM
static struct platform_driver stm32_cryp_driver = {
.probe = stm32_cryp_probe,
- .remove = stm32_cryp_remove,
+ .remove_new = stm32_cryp_remove,
.driver = {
.name = DRIVER_NAME,
.pm = &stm32_cryp_pm_ops,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
static int ahash_digest(struct ahash_request *areq)
{
- struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
- struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
-
- ahash->init(areq);
- req_ctx->last = 1;
+ ahash_init(areq);
+ return ahash_finup(areq);
+}
- return ahash_process_req(areq, areq->nbytes);
+static int ahash_digest_sha224_swinit(struct ahash_request *areq)
+{
+ ahash_init_sha224_swinit(areq);
+ return ahash_finup(areq);
}
static int ahash_export(struct ahash_request *areq, void *out)
return ret;
}
-static int talitos_remove(struct platform_device *ofdev)
+static void talitos_remove(struct platform_device *ofdev)
{
struct device *dev = &ofdev->dev;
struct talitos_private *priv = dev_get_drvdata(dev);
tasklet_kill(&priv->done_task[0]);
if (priv->irq[1])
tasklet_kill(&priv->done_task[1]);
-
- return 0;
}
static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
(!strcmp(alg->cra_name, "sha224") ||
!strcmp(alg->cra_name, "hmac(sha224)"))) {
t_alg->algt.alg.hash.init = ahash_init_sha224_swinit;
+ t_alg->algt.alg.hash.digest =
+ ahash_digest_sha224_swinit;
t_alg->algt.desc_hdr_template =
DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU |
DESC_HDR_SEL0_MDEUA |
alg->cra_priority = t_alg->algt.priority;
else
alg->cra_priority = TALITOS_CRA_PRIORITY;
- if (has_ftr_sec1(priv))
+ if (has_ftr_sec1(priv) && t_alg->algt.type != CRYPTO_ALG_TYPE_AHASH)
alg->cra_alignmask = 3;
else
alg->cra_alignmask = 0;
.of_match_table = talitos_match,
},
.probe = talitos_probe,
- .remove = talitos_remove,
+ .remove_new = talitos_remove,
};
module_platform_driver(talitos_driver);
.long 0x1b000000, 0x1b000000, 0x1b000000, 0x1b000000 ?rev
.long 0x0d0e0f0c, 0x0d0e0f0c, 0x0d0e0f0c, 0x0d0e0f0c ?rev
.long 0,0,0,0 ?asis
+.long 0x0f102132, 0x43546576, 0x8798a9ba, 0xcbdcedfe
Lconsts:
mflr r0
bcl 20,31,\$+4
mflr $ptr #vvvvv "distance between . and rcon
- addi $ptr,$ptr,-0x48
+ addi $ptr,$ptr,-0x58
mtlr r0
blr
.long 0
li $x70,0x70
mtspr 256,r0
+ xxlor 2, 32+$eighty7, 32+$eighty7
+ vsldoi $eighty7,$tmp,$eighty7,1 # 0x010101..87
+ xxlor 1, 32+$eighty7, 32+$eighty7
+
+ # Load XOR Lconsts.
+ mr $x70, r6
+ bl Lconsts
+ lxvw4x 0, $x40, r6 # load XOR contents
+ mr r6, $x70
+ li $x70,0x70
+
subi $rounds,$rounds,3 # -4 in total
lvx $rndkey0,$x00,$key1 # load key schedule
?vperm v31,v31,$twk5,$keyperm
lvx v25,$x10,$key_ # pre-load round[2]
+ # Switch to use the following codes with 0x010101..87 to generate tweak.
+ # eighty7 = 0x010101..87
+ # vsrab tmp, tweak, seven # next tweak value, right shift 7 bits
+ # vand tmp, tmp, eighty7 # last byte with carry
+ # vaddubm tweak, tweak, tweak # left shift 1 bit (x2)
+ # xxlor vsx, 0, 0
+ # vpermxor tweak, tweak, tmp, vsx
+
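For reference, the scalar equivalent of the tweak update computed by the vsrab/vand/vaddubm/vpermxor sequence, assuming the standard XTS little-endian tweak convention (editorial sketch, not part of the patch)::

	/* Multiply the 128-bit tweak by x in GF(2^128), reducing with
	 * x^128 + x^7 + x^2 + x + 1 (the 0x87 constant held in eighty7). */
	static void xts_tweak_double(unsigned char t[16])
	{
		unsigned char carry = t[15] >> 7;	/* bit shifted out of the top */
		int i;

		for (i = 15; i > 0; i--)		/* shift the 128-bit value left by 1 */
			t[i] = (t[i] << 1) | (t[i - 1] >> 7);
		t[0] = (t[0] << 1) ^ (carry ? 0x87 : 0);	/* fold the carry back in */
	}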
vperm $in0,$inout,$inptail,$inpperm
subi $inp,$inp,31 # undo "caller"
vxor $twk0,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vand $tmp,$tmp,$eighty7
vxor $out0,$in0,$twk0
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in1, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in1
lvx_u $in1,$x10,$inp
vxor $twk1,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in1,$in1,$in1,$leperm
vand $tmp,$tmp,$eighty7
vxor $out1,$in1,$twk1
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in2, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in2
lvx_u $in2,$x20,$inp
andi. $taillen,$len,15
vxor $twk2,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in2,$in2,$in2,$leperm
vand $tmp,$tmp,$eighty7
vxor $out2,$in2,$twk2
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in3, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in3
lvx_u $in3,$x30,$inp
sub $len,$len,$taillen
vxor $twk3,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in3,$in3,$in3,$leperm
vand $tmp,$tmp,$eighty7
vxor $out3,$in3,$twk3
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in4, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in4
lvx_u $in4,$x40,$inp
subi $len,$len,0x60
vxor $twk4,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in4,$in4,$in4,$leperm
vand $tmp,$tmp,$eighty7
vxor $out4,$in4,$twk4
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in5, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in5
lvx_u $in5,$x50,$inp
addi $inp,$inp,0x60
vxor $twk5,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in5,$in5,$in5,$leperm
vand $tmp,$tmp,$eighty7
vxor $out5,$in5,$twk5
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in0, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in0
vxor v31,v31,$rndkey0
mtctr $rounds
lvx v25,$x10,$key_ # round[4]
bdnz Loop_xts_enc6x
+ xxlor 32+$eighty7, 1, 1 # 0x010101..87
+
subic $len,$len,96 # $len-=96
vxor $in0,$twk0,v31 # xor with last round key
vcipher $out0,$out0,v24
vaddubm $tweak,$tweak,$tweak
vcipher $out2,$out2,v24
vcipher $out3,$out3,v24
- vsldoi $tmp,$tmp,$tmp,15
vcipher $out4,$out4,v24
vcipher $out5,$out5,v24
vand $tmp,$tmp,$eighty7
vcipher $out0,$out0,v25
vcipher $out1,$out1,v25
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in1, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in1
vcipher $out2,$out2,v25
vcipher $out3,$out3,v25
vxor $in1,$twk1,v31
and r0,r0,$len
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vcipher $out0,$out0,v26
vcipher $out1,$out1,v26
vand $tmp,$tmp,$eighty7
vcipher $out2,$out2,v26
vcipher $out3,$out3,v26
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in2, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in2
vcipher $out4,$out4,v26
vcipher $out5,$out5,v26
vaddubm $tweak,$tweak,$tweak
vcipher $out0,$out0,v27
vcipher $out1,$out1,v27
- vsldoi $tmp,$tmp,$tmp,15
vcipher $out2,$out2,v27
vcipher $out3,$out3,v27
vand $tmp,$tmp,$eighty7
vcipher $out5,$out5,v27
addi $key_,$sp,$FRAME+15 # rewind $key_
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in3, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in3
vcipher $out0,$out0,v28
vcipher $out1,$out1,v28
vxor $in3,$twk3,v31
vcipher $out2,$out2,v28
vcipher $out3,$out3,v28
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vcipher $out4,$out4,v28
vcipher $out5,$out5,v28
lvx v24,$x00,$key_ # re-pre-load round[1]
vcipher $out0,$out0,v29
vcipher $out1,$out1,v29
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in4, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in4
vcipher $out2,$out2,v29
vcipher $out3,$out3,v29
vxor $in4,$twk4,v31
vcipher $out5,$out5,v29
lvx v25,$x10,$key_ # re-pre-load round[2]
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vcipher $out0,$out0,v30
vcipher $out1,$out1,v30
vand $tmp,$tmp,$eighty7
vcipher $out2,$out2,v30
vcipher $out3,$out3,v30
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in5, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in5
vcipher $out4,$out4,v30
vcipher $out5,$out5,v30
vxor $in5,$twk5,v31
vcipherlast $out0,$out0,$in0
lvx_u $in0,$x00,$inp # load next input block
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vcipherlast $out1,$out1,$in1
lvx_u $in1,$x10,$inp
vcipherlast $out2,$out2,$in2
vcipherlast $out4,$out4,$in4
le?vperm $in2,$in2,$in2,$leperm
lvx_u $in4,$x40,$inp
- vxor $tweak,$tweak,$tmp
+ xxlor 10, 32+$in0, 32+$in0
+ xxlor 32+$in0, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in0
+ xxlor 32+$in0, 10, 10
vcipherlast $tmp,$out5,$in5 # last block might be needed
# in stealing mode
le?vperm $in3,$in3,$in3,$leperm
mtctr $rounds
beq Loop_xts_enc6x # did $len-=96 borrow?
+ xxlor 32+$eighty7, 2, 2 # 0x010101..87
+
addic. $len,$len,0x60
beq Lxts_enc6x_zero
cmpwi $len,0x20
li $x70,0x70
mtspr 256,r0
+ xxlor 2, 32+$eighty7, 32+$eighty7
+ vsldoi $eighty7,$tmp,$eighty7,1 # 0x010101..87
+ xxlor 1, 32+$eighty7, 32+$eighty7
+
+ # Load XOR Lconsts.
+ mr $x70, r6
+ bl Lconsts
+ lxvw4x 0, $x40, r6 # load XOR contents
+ mr r6, $x70
+ li $x70,0x70
+
subi $rounds,$rounds,3 # -4 in total
lvx $rndkey0,$x00,$key1 # load key schedule
vxor $twk0,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vand $tmp,$tmp,$eighty7
vxor $out0,$in0,$twk0
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in1, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in1
lvx_u $in1,$x10,$inp
vxor $twk1,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in1,$in1,$in1,$leperm
vand $tmp,$tmp,$eighty7
vxor $out1,$in1,$twk1
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in2, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in2
lvx_u $in2,$x20,$inp
andi. $taillen,$len,15
vxor $twk2,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in2,$in2,$in2,$leperm
vand $tmp,$tmp,$eighty7
vxor $out2,$in2,$twk2
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in3, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in3
lvx_u $in3,$x30,$inp
sub $len,$len,$taillen
vxor $twk3,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in3,$in3,$in3,$leperm
vand $tmp,$tmp,$eighty7
vxor $out3,$in3,$twk3
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in4, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in4
lvx_u $in4,$x40,$inp
subi $len,$len,0x60
vxor $twk4,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in4,$in4,$in4,$leperm
vand $tmp,$tmp,$eighty7
vxor $out4,$in4,$twk4
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in5, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in5
lvx_u $in5,$x50,$inp
addi $inp,$inp,0x60
vxor $twk5,$tweak,$rndkey0
vsrab $tmp,$tweak,$seven # next tweak value
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
le?vperm $in5,$in5,$in5,$leperm
vand $tmp,$tmp,$eighty7
vxor $out5,$in5,$twk5
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in0, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in0
vxor v31,v31,$rndkey0
mtctr $rounds
lvx v25,$x10,$key_ # round[4]
bdnz Loop_xts_dec6x
+ xxlor 32+$eighty7, 1, 1 # 0x010101..87
+
subic $len,$len,96 # $len-=96
vxor $in0,$twk0,v31 # xor with last round key
vncipher $out0,$out0,v24
vaddubm $tweak,$tweak,$tweak
vncipher $out2,$out2,v24
vncipher $out3,$out3,v24
- vsldoi $tmp,$tmp,$tmp,15
vncipher $out4,$out4,v24
vncipher $out5,$out5,v24
vand $tmp,$tmp,$eighty7
vncipher $out0,$out0,v25
vncipher $out1,$out1,v25
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in1, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in1
vncipher $out2,$out2,v25
vncipher $out3,$out3,v25
vxor $in1,$twk1,v31
and r0,r0,$len
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vncipher $out0,$out0,v26
vncipher $out1,$out1,v26
vand $tmp,$tmp,$eighty7
vncipher $out2,$out2,v26
vncipher $out3,$out3,v26
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in2, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in2
vncipher $out4,$out4,v26
vncipher $out5,$out5,v26
vaddubm $tweak,$tweak,$tweak
vncipher $out0,$out0,v27
vncipher $out1,$out1,v27
- vsldoi $tmp,$tmp,$tmp,15
vncipher $out2,$out2,v27
vncipher $out3,$out3,v27
vand $tmp,$tmp,$eighty7
vncipher $out5,$out5,v27
addi $key_,$sp,$FRAME+15 # rewind $key_
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in3, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in3
vncipher $out0,$out0,v28
vncipher $out1,$out1,v28
vxor $in3,$twk3,v31
vncipher $out2,$out2,v28
vncipher $out3,$out3,v28
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vncipher $out4,$out4,v28
vncipher $out5,$out5,v28
lvx v24,$x00,$key_ # re-pre-load round[1]
vncipher $out0,$out0,v29
vncipher $out1,$out1,v29
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in4, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in4
vncipher $out2,$out2,v29
vncipher $out3,$out3,v29
vxor $in4,$twk4,v31
vncipher $out5,$out5,v29
lvx v25,$x10,$key_ # re-pre-load round[2]
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vncipher $out0,$out0,v30
vncipher $out1,$out1,v30
vand $tmp,$tmp,$eighty7
vncipher $out2,$out2,v30
vncipher $out3,$out3,v30
- vxor $tweak,$tweak,$tmp
+ xxlor 32+$in5, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in5
vncipher $out4,$out4,v30
vncipher $out5,$out5,v30
vxor $in5,$twk5,v31
vncipherlast $out0,$out0,$in0
lvx_u $in0,$x00,$inp # load next input block
vaddubm $tweak,$tweak,$tweak
- vsldoi $tmp,$tmp,$tmp,15
vncipherlast $out1,$out1,$in1
lvx_u $in1,$x10,$inp
vncipherlast $out2,$out2,$in2
vncipherlast $out4,$out4,$in4
le?vperm $in2,$in2,$in2,$leperm
lvx_u $in4,$x40,$inp
- vxor $tweak,$tweak,$tmp
+ xxlor 10, 32+$in0, 32+$in0
+ xxlor 32+$in0, 0, 0
+ vpermxor $tweak, $tweak, $tmp, $in0
+ xxlor 32+$in0, 10, 10
vncipherlast $out5,$out5,$in5
le?vperm $in3,$in3,$in3,$leperm
lvx_u $in5,$x50,$inp
mtctr $rounds
beq Loop_xts_dec6x # did $len-=96 borrow?
+ xxlor 32+$eighty7, 2, 2 # 0x010101..87
+
addic. $len,$len,0x60
beq Lxts_dec6x_zero
cmpwi $len,0x20
return err;
}
-static int zynqmp_aes_aead_remove(struct platform_device *pdev)
+static void zynqmp_aes_aead_remove(struct platform_device *pdev)
{
crypto_engine_exit(aes_drv_ctx.engine);
crypto_engine_unregister_aead(&aes_drv_ctx.alg.aead);
-
- return 0;
}
static const struct of_device_id zynqmp_aes_dt_ids[] = {
static struct platform_driver zynqmp_aes_driver = {
.probe = zynqmp_aes_aead_probe,
- .remove = zynqmp_aes_aead_remove,
+ .remove_new = zynqmp_aes_aead_remove,
.driver = {
.name = "zynqmp-aes",
.of_match_table = zynqmp_aes_dt_ids,
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct zynqmp_sha_tfm_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
return err;
}
-static int zynqmp_sha_remove(struct platform_device *pdev)
+static void zynqmp_sha_remove(struct platform_device *pdev)
{
sha3_drv_ctx.dev = platform_get_drvdata(pdev);
dma_free_coherent(sha3_drv_ctx.dev, ZYNQMP_DMA_ALLOC_FIXED_SIZE, ubuf, update_dma_addr);
dma_free_coherent(sha3_drv_ctx.dev, SHA3_384_DIGEST_SIZE, fbuf, final_dma_addr);
crypto_unregister_shash(&sha3_drv_ctx.sha3_384);
-
- return 0;
}
static struct platform_driver zynqmp_sha_driver = {
.probe = zynqmp_sha_probe,
- .remove = zynqmp_sha_remove,
+ .remove_new = zynqmp_sha_remove,
.driver = {
.name = "zynqmp-sha3-384",
},
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/crypto.h>
#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
#include <linux/highmem.h>
#include <net/esp.h>
#include <net/xfrm.h>
#include <crypto/aes.h>
-#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha2.h>
#ifndef __CHCR_IPSEC_H__
#define __CHCR_IPSEC_H__
-#include <crypto/algapi.h>
#include "t4_hw.h"
#include "cxgb4.h"
#include "t4_msg.h"
#define __CHTLS_H__
#include <crypto/aes.h>
-#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha2.h>
#include <crypto/blake2s.h>
#include <crypto/chacha20poly1305.h>
+#include <crypto/utils.h>
#include <net/ipv6.h>
-#include <crypto/algapi.h>
void wg_cookie_checker_init(struct cookie_checker *checker,
struct wg_device *wg)
#include <linux/if.h>
#include <net/genetlink.h>
#include <net/sock.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
static struct genl_family genl_family;
#include <linux/bitmap.h>
#include <linux/scatterlist.h>
#include <linux/highmem.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
/* This implements Noise_IKpsk2:
*
* managed alongside the master keys in the filesystem-level keyring)
*/
-#include <crypto/algapi.h>
#include <crypto/skcipher.h>
+#include <crypto/utils.h>
#include <keys/user-type.h>
#include <linux/hashtable.h>
#include <linux/scatterlist.h>
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1998, 2000 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc2478#section-3.2.1
+-- https://www.rfc-editor.org/rfc/rfc2743#section-3.1
+
GSSAPI ::=
[APPLICATION 0] IMPLICIT SEQUENCE {
thisMech
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1998 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc2478#section-3.2.1
+
GSSAPI ::=
CHOICE {
negTokenInit
* This file implements various helper functions for UBIFS authentication support
*/
-#include <linux/crypto.h>
#include <linux/verification.h>
#include <crypto/hash.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
#include <keys/user-type.h>
#include <keys/asymmetric-type.h>
#include "ubifs.h"
#include <linux/list_sort.h>
#include <crypto/hash.h>
-#include <crypto/algapi.h>
/**
* struct replay_entry - replay list entry.
#include <linux/completion.h>
#include <crypto/hash_info.h>
#include <crypto/hash.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
#include <linux/fscrypt.h>
crypto_destroy_tfm(tfm, crypto_aead_tfm(tfm));
}
+/**
+ * crypto_has_aead() - Search for the availability of an aead.
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ * aead
+ * @type: specifies the type of the aead
+ * @mask: specifies the mask for the aead
+ *
+ * Return: true when the aead is known to the kernel crypto API; false
+ * otherwise
+ */
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask);
+
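A minimal usage sketch of the new helper (the algorithm name is only an example)::

	#include <crypto/aead.h>

	static bool gcm_aes_available(void)
	{
		/* Non-zero if any "gcm(aes)" AEAD is known to the crypto API. */
		return crypto_has_aead("gcm(aes)", 0, 0);
	}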
static inline const char *crypto_aead_driver_name(struct crypto_aead *tfm)
{
return crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm));
* @tfm: AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
* @src: source buffer
* @slen: source length
- * @dst: destinatino obuffer
+ * @dst: destination buffer
* @dlen: destination length
*
* Return: zero on success; error code in case of error
* @tfm: AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
* @src: source buffer
* @slen: source length
- * @dst: destinatino obuffer
+ * @dst: destination buffer
* @dlen: destination length
*
* Return: Output length on success; error code in case of error
return PTR_ALIGN(crypto_tfm_ctx(tfm), align);
}
-static inline void *crypto_tfm_ctx_aligned(struct crypto_tfm *tfm)
-{
- return crypto_tfm_ctx_align(tfm, crypto_tfm_alg_alignmask(tfm) + 1);
-}
-
static inline unsigned int crypto_dma_align(void)
{
return CRYPTO_DMA_ALIGN;
bool retry_support,
int (*cbk_do_batch)(struct crypto_engine *engine),
bool rt, int qlen);
-int crypto_engine_exit(struct crypto_engine *engine);
+void crypto_engine_exit(struct crypto_engine *engine);
int crypto_engine_register_aead(struct aead_engine_alg *alg);
void crypto_engine_unregister_aead(struct aead_engine_alg *alg);
#undef HASH_ALG_COMMON_STAT
struct crypto_ahash {
- int (*init)(struct ahash_request *req);
- int (*update)(struct ahash_request *req);
- int (*final)(struct ahash_request *req);
- int (*finup)(struct ahash_request *req);
- int (*digest)(struct ahash_request *req);
- int (*export)(struct ahash_request *req, void *out);
- int (*import)(struct ahash_request *req, const void *in);
- int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen);
-
+ bool using_shash; /* Underlying algorithm is shash, not ahash */
unsigned int statesize;
unsigned int reqsize;
struct crypto_tfm base;
return crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
}
-static inline unsigned int crypto_ahash_alignmask(
- struct crypto_ahash *tfm)
-{
- return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm));
-}
-
/**
* crypto_ahash_blocksize() - obtain block size for cipher
* @tfm: cipher handle
*
* Return: 0 if the export was successful; < 0 if an error occurred
*/
-static inline int crypto_ahash_export(struct ahash_request *req, void *out)
-{
- return crypto_ahash_reqtfm(req)->export(req, out);
-}
+int crypto_ahash_export(struct ahash_request *req, void *out);
/**
* crypto_ahash_import() - import message digest state
*
* Return: 0 if the import was successful; < 0 if an error occurred
*/
-static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
- if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return -ENOKEY;
-
- return tfm->import(req, in);
-}
+int crypto_ahash_import(struct ahash_request *req, const void *in);
/**
* crypto_ahash_init() - (re)initialize message digest handle
*
* Return: see crypto_ahash_final()
*/
-static inline int crypto_ahash_init(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
- if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return -ENOKEY;
-
- return tfm->init(req);
-}
-
-static inline struct crypto_istat_hash *hash_get_stat(
- struct hash_alg_common *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
- return &alg->stat;
-#else
- return NULL;
-#endif
-}
-
-static inline int crypto_hash_errstat(struct hash_alg_common *alg, int err)
-{
- if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
- return err;
-
- if (err && err != -EINPROGRESS && err != -EBUSY)
- atomic64_inc(&hash_get_stat(alg)->err_cnt);
-
- return err;
-}
+int crypto_ahash_init(struct ahash_request *req);
/**
* crypto_ahash_update() - add data to message digest for processing
*
* Return: see crypto_ahash_final()
*/
-static inline int crypto_ahash_update(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
- if (IS_ENABLED(CONFIG_CRYPTO_STATS))
- atomic64_add(req->nbytes, &hash_get_stat(alg)->hash_tlen);
-
- return crypto_hash_errstat(alg, tfm->update(req));
-}
+int crypto_ahash_update(struct ahash_request *req);
/**
* DOC: Asynchronous Hash Request Handle
return crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm));
}
-static inline unsigned int crypto_shash_alignmask(
- struct crypto_shash *tfm)
-{
- return crypto_tfm_alg_alignmask(crypto_shash_tfm(tfm));
-}
-
/**
* crypto_shash_blocksize() - obtain block size for cipher
* @tfm: cipher handle
* Context: Any context.
* Return: 0 if the export creation was successful; < 0 if an error occurred
*/
-static inline int crypto_shash_export(struct shash_desc *desc, void *out)
-{
- return crypto_shash_alg(desc->tfm)->export(desc, out);
-}
+int crypto_shash_export(struct shash_desc *desc, void *out);
/**
* crypto_shash_import() - import operational state
* Context: Any context.
* Return: 0 if the import was successful; < 0 if an error occurred
*/
-static inline int crypto_shash_import(struct shash_desc *desc, const void *in)
-{
- struct crypto_shash *tfm = desc->tfm;
-
- if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return -ENOKEY;
-
- return crypto_shash_alg(tfm)->import(desc, in);
-}
+int crypto_shash_import(struct shash_desc *desc, const void *in);
/**
* crypto_shash_init() - (re)initialize message digest
#include <crypto/sha1.h>
#include <crypto/sha2.h>
+#include <crypto/sha3.h>
#include <crypto/md5.h>
#include <crypto/streebog.h>
char *data;
unsigned int offset;
- unsigned int alignmask;
+ unsigned int flags;
struct page *pg;
unsigned int entrylen;
unsigned int total;
struct scatterlist *sg;
-
- unsigned int flags;
};
struct ahash_instance {
return crypto_spawn_tfm2(&spawn->base);
}
-static inline void *crypto_shash_ctx_aligned(struct crypto_shash *tfm)
-{
- return crypto_tfm_ctx_aligned(&tfm->base);
-}
-
static inline struct crypto_shash *__crypto_shash_cast(struct crypto_tfm *tfm)
{
return container_of(tfm, struct crypto_shash, base);
};
};
+struct lskcipher_instance {
+ void (*free)(struct lskcipher_instance *inst);
+ union {
+ struct {
+ char head[offsetof(struct lskcipher_alg, co.base)];
+ struct crypto_instance base;
+ } s;
+ struct lskcipher_alg alg;
+ };
+};
+
struct crypto_skcipher_spawn {
struct crypto_spawn base;
};
+struct crypto_lskcipher_spawn {
+ struct crypto_spawn base;
+};
+
struct skcipher_walk {
union {
struct {
return &inst->s.base;
}
+static inline struct crypto_instance *lskcipher_crypto_instance(
+ struct lskcipher_instance *inst)
+{
+ return &inst->s.base;
+}
+
static inline struct skcipher_instance *skcipher_alg_instance(
struct crypto_skcipher *skcipher)
{
struct skcipher_instance, alg);
}
+static inline struct lskcipher_instance *lskcipher_alg_instance(
+ struct crypto_lskcipher *lskcipher)
+{
+ return container_of(crypto_lskcipher_alg(lskcipher),
+ struct lskcipher_instance, alg);
+}
+
static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
{
return crypto_instance_ctx(skcipher_crypto_instance(inst));
}
+static inline void *lskcipher_instance_ctx(struct lskcipher_instance *inst)
+{
+ return crypto_instance_ctx(lskcipher_crypto_instance(inst));
+}
+
static inline void skcipher_request_complete(struct skcipher_request *req, int err)
{
crypto_request_complete(&req->base, err);
struct crypto_instance *inst,
const char *name, u32 type, u32 mask);
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+ struct crypto_instance *inst,
+ const char *name, u32 type, u32 mask);
+
static inline void crypto_drop_skcipher(struct crypto_skcipher_spawn *spawn)
{
crypto_drop_spawn(&spawn->base);
}
-static inline struct skcipher_alg *crypto_skcipher_spawn_alg(
- struct crypto_skcipher_spawn *spawn)
+static inline void crypto_drop_lskcipher(struct crypto_lskcipher_spawn *spawn)
+{
+ crypto_drop_spawn(&spawn->base);
+}
+
+static inline struct lskcipher_alg *crypto_lskcipher_spawn_alg(
+ struct crypto_lskcipher_spawn *spawn)
{
- return container_of(spawn->base.alg, struct skcipher_alg, base);
+ return container_of(spawn->base.alg, struct lskcipher_alg, co.base);
}
-static inline struct skcipher_alg *crypto_spawn_skcipher_alg(
+static inline struct skcipher_alg_common *crypto_spawn_skcipher_alg_common(
struct crypto_skcipher_spawn *spawn)
{
- return crypto_skcipher_spawn_alg(spawn);
+ return container_of(spawn->base.alg, struct skcipher_alg_common, base);
+}
+
+static inline struct lskcipher_alg *crypto_spawn_lskcipher_alg(
+ struct crypto_lskcipher_spawn *spawn)
+{
+ return crypto_lskcipher_spawn_alg(spawn);
}
static inline struct crypto_skcipher *crypto_spawn_skcipher(
return crypto_spawn_tfm2(&spawn->base);
}
+static inline struct crypto_lskcipher *crypto_spawn_lskcipher(
+ struct crypto_lskcipher_spawn *spawn)
+{
+ return crypto_spawn_tfm2(&spawn->base);
+}
+
static inline void crypto_skcipher_set_reqsize(
struct crypto_skcipher *skcipher, unsigned int reqsize)
{
int skcipher_register_instance(struct crypto_template *tmpl,
struct skcipher_instance *inst);
+int crypto_register_lskcipher(struct lskcipher_alg *alg);
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
+int lskcipher_register_instance(struct crypto_template *tmpl,
+ struct lskcipher_instance *inst);
+
int skcipher_walk_done(struct skcipher_walk *walk, int err);
int skcipher_walk_virt(struct skcipher_walk *walk,
struct skcipher_request *req,
return crypto_tfm_ctx(&tfm->base);
}
+static inline void *crypto_lskcipher_ctx(struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_ctx(&tfm->base);
+}
+
static inline void *crypto_skcipher_ctx_dma(struct crypto_skcipher *tfm)
{
return crypto_tfm_ctx_dma(&tfm->base);
return req->base.flags;
}
-static inline unsigned int crypto_skcipher_alg_min_keysize(
- struct skcipher_alg *alg)
-{
- return alg->min_keysize;
-}
-
-static inline unsigned int crypto_skcipher_alg_max_keysize(
- struct skcipher_alg *alg)
-{
- return alg->max_keysize;
-}
-
-static inline unsigned int crypto_skcipher_alg_walksize(
- struct skcipher_alg *alg)
-{
- return alg->walksize;
-}
-
-/**
- * crypto_skcipher_walksize() - obtain walk size
- * @tfm: cipher handle
- *
- * In some cases, algorithms can only perform optimally when operating on
- * multiple blocks in parallel. This is reflected by the walksize, which
- * must be a multiple of the chunksize (or equal if the concern does not
- * apply)
- *
- * Return: walk size in bytes
- */
-static inline unsigned int crypto_skcipher_walksize(
- struct crypto_skcipher *tfm)
-{
- return crypto_skcipher_alg_walksize(crypto_skcipher_alg(tfm));
-}
-
/* Helpers for simple block cipher modes of operation */
struct skcipher_ctx_simple {
struct crypto_cipher *cipher; /* underlying block cipher */
return crypto_spawn_cipher_alg(spawn);
}
+static inline struct crypto_lskcipher *lskcipher_cipher_simple(
+ struct crypto_lskcipher *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+ return *ctx;
+}
+
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+ struct crypto_template *tmpl, struct rtattr **tb);
+
+static inline struct lskcipher_alg *lskcipher_ialg_simple(
+ struct lskcipher_instance *inst)
+{
+ struct crypto_lskcipher_spawn *spawn = lskcipher_instance_ctx(inst);
+
+ return crypto_lskcipher_spawn_alg(spawn);
+}
+
#endif /* _CRYPTO_INTERNAL_SKCIPHER_H */
* @tfm: signature tfm handle allocated with crypto_alloc_sig()
* @src: source buffer
* @slen: source length
- * @dst: destinatino obuffer
+ * @dst: destination buffer
* @dlen: destination length
*
* Return: zero on success; error code in case of error
struct crypto_skcipher base;
};
+struct crypto_lskcipher {
+ struct crypto_tfm base;
+};
+
/*
* struct crypto_istat_cipher - statistics for cipher algorithm
* @encrypt_cnt: number of encrypt requests
atomic64_t err_cnt;
};
+#ifdef CONFIG_CRYPTO_STATS
+#define SKCIPHER_ALG_COMMON_STAT struct crypto_istat_cipher stat;
+#else
+#define SKCIPHER_ALG_COMMON_STAT
+#endif
+
+/*
+ * struct skcipher_alg_common - common properties of skcipher_alg
+ * @min_keysize: Minimum key size supported by the transformation. This is the
+ * smallest key length supported by this transformation algorithm.
+ * This must be set to one of the pre-defined values as this is
+ * not hardware specific. Possible values for this field can be
+ * found via git grep "_MIN_KEY_SIZE" include/crypto/
+ * @max_keysize: Maximum key size supported by the transformation. This is the
+ * largest key length supported by this transformation algorithm.
+ * This must be set to one of the pre-defined values as this is
+ * not hardware specific. Possible values for this field can be
+ * found via git grep "_MAX_KEY_SIZE" include/crypto/
+ * @ivsize: IV size applicable for transformation. The consumer must provide an
+ * IV of exactly that size to perform the encrypt or decrypt operation.
+ * @chunksize: Equal to the block size except for stream ciphers such as
+ * CTR where it is set to the underlying block size.
+ * @stat: Statistics for cipher algorithm
+ * @base: Definition of a generic crypto algorithm.
+ */
+#define SKCIPHER_ALG_COMMON { \
+ unsigned int min_keysize; \
+ unsigned int max_keysize; \
+ unsigned int ivsize; \
+ unsigned int chunksize; \
+ \
+ SKCIPHER_ALG_COMMON_STAT \
+ \
+ struct crypto_alg base; \
+}
+struct skcipher_alg_common SKCIPHER_ALG_COMMON;
+
/**
* struct skcipher_alg - symmetric key cipher definition
* @min_keysize: Minimum key size supported by the transformation. This is the
* in parallel. Should be a multiple of chunksize.
* @stat: Statistics for cipher algorithm
* @base: Definition of a generic crypto algorithm.
+ * @co: see struct skcipher_alg_common
*
* All fields except @ivsize are mandatory and must be filled.
*/
int (*init)(struct crypto_skcipher *tfm);
void (*exit)(struct crypto_skcipher *tfm);
- unsigned int min_keysize;
- unsigned int max_keysize;
- unsigned int ivsize;
- unsigned int chunksize;
unsigned int walksize;
-#ifdef CONFIG_CRYPTO_STATS
- struct crypto_istat_cipher stat;
-#endif
+ union {
+ struct SKCIPHER_ALG_COMMON;
+ struct skcipher_alg_common co;
+ };
+};
- struct crypto_alg base;
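
Because struct skcipher_alg now embeds the common fields both as an anonymous struct and as the named member co inside a union, the two spellings alias the same storage; a quick illustration (not part of the patch)::

	static_assert(offsetof(struct skcipher_alg, ivsize) ==
		      offsetof(struct skcipher_alg, co.ivsize));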
+/**
+ * struct lskcipher_alg - linear symmetric key cipher definition
+ * @setkey: Set key for the transformation. This function is used to either
+ * program a supplied key into the hardware or store the key in the
+ * transformation context for programming it later. Note that this
+ * function does modify the transformation context. This function can
+ * be called multiple times during the existence of the transformation
+ * object, so one must make sure the key is properly reprogrammed into
+ * the hardware. This function is also responsible for checking the key
+ * length for validity. In case a software fallback was put in place in
+ * the @cra_init call, this function might need to use the fallback if
+ * the algorithm doesn't support all of the key sizes.
+ * @encrypt: Encrypt a number of bytes. This function is used to encrypt
+ * the supplied data. This function shall not modify
+ * the transformation context, as this function may be called
+ * in parallel with the same transformation object. Data
+ * may be left over if length is not a multiple of blocks
+ * and there is more to come (final == false). The number of
+ * left-over bytes should be returned in case of success.
+ * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
+ * @encrypt and the conditions are exactly the same.
+ * @init: Initialize the cryptographic transformation object. This function
+ * is used to initialize the cryptographic transformation object.
+ * This function is called only once at the instantiation time, right
+ * after the transformation context was allocated.
+ * @exit: Deinitialize the cryptographic transformation object. This is a
+ * counterpart to @init, used to remove various changes set in
+ * @init.
+ * @co: see struct skcipher_alg_common
+ */
+struct lskcipher_alg {
+ int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen);
+ int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final);
+ int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final);
+ int (*init)(struct crypto_lskcipher *tfm);
+ void (*exit)(struct crypto_lskcipher *tfm);
+
+ struct skcipher_alg_common co;
};
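
For illustration only, a hypothetical lskcipher backend would be declared and registered roughly as follows (all foo_* symbols are placeholders, not real kernel code)::

	static struct lskcipher_alg foo_lskcipher = {
		.setkey		= foo_setkey,
		.encrypt	= foo_encrypt,	/* returns bytes left over, or < 0 */
		.decrypt	= foo_decrypt,
		.co = {
			.min_keysize	= 16,
			.max_keysize	= 32,
			.ivsize		= 16,
			.chunksize	= 16,
			.base = {
				.cra_name	= "foo",
				.cra_driver_name = "foo-generic",
				.cra_blocksize	= 16,
				.cra_module	= THIS_MODULE,
			},
		},
	};

	err = crypto_register_lskcipher(&foo_lskcipher);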
#define MAX_SYNC_SKCIPHER_REQSIZE 384
struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
u32 type, u32 mask);
+
+/**
+ * crypto_alloc_lskcipher() - allocate linear symmetric key cipher handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ * lskcipher
+ * @type: specifies the type of the cipher
+ * @mask: specifies the mask for the cipher
+ *
+ * Allocate a cipher handle for an lskcipher. The returned struct
+ * crypto_lskcipher is the cipher handle that is required for any subsequent
+ * API invocation for that lskcipher.
+ *
+ * Return: allocated cipher handle in case of success; IS_ERR() is true in case
+ * of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+ u32 type, u32 mask);
+
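A hedged usage sketch (the algorithm name is illustrative; it only succeeds if an lskcipher with that name is actually registered)::

	struct crypto_lskcipher *tfm;

	tfm = crypto_alloc_lskcipher("ecb(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* ... set a key and encrypt/decrypt, see the example further below ... */

	crypto_free_lskcipher(tfm);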
static inline struct crypto_tfm *crypto_skcipher_tfm(
struct crypto_skcipher *tfm)
{
return &tfm->base;
}
+static inline struct crypto_tfm *crypto_lskcipher_tfm(
+ struct crypto_lskcipher *tfm)
+{
+ return &tfm->base;
+}
+
/**
* crypto_free_skcipher() - zeroize and free cipher handle
* @tfm: cipher handle to be freed
crypto_free_skcipher(&tfm->base);
}
+/**
+ * crypto_free_lskcipher() - zeroize and free cipher handle
+ * @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
+ */
+static inline void crypto_free_lskcipher(struct crypto_lskcipher *tfm)
+{
+ crypto_destroy_tfm(tfm, crypto_lskcipher_tfm(tfm));
+}
+
/**
* crypto_has_skcipher() - Search for the availability of an skcipher.
* @alg_name: is the cra_name / name or cra_driver_name / driver name of the
return crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm));
}
+static inline const char *crypto_lskcipher_driver_name(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_alg_driver_name(crypto_lskcipher_tfm(tfm));
+}
+
+static inline struct skcipher_alg_common *crypto_skcipher_alg_common(
+ struct crypto_skcipher *tfm)
+{
+ return container_of(crypto_skcipher_tfm(tfm)->__crt_alg,
+ struct skcipher_alg_common, base);
+}
+
static inline struct skcipher_alg *crypto_skcipher_alg(
struct crypto_skcipher *tfm)
{
struct skcipher_alg, base);
}
-static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
+static inline struct lskcipher_alg *crypto_lskcipher_alg(
+ struct crypto_lskcipher *tfm)
{
- return alg->ivsize;
+ return container_of(crypto_lskcipher_tfm(tfm)->__crt_alg,
+ struct lskcipher_alg, co.base);
}
/**
*/
static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->ivsize;
+ return crypto_skcipher_alg_common(tfm)->ivsize;
}
static inline unsigned int crypto_sync_skcipher_ivsize(
return crypto_skcipher_ivsize(&tfm->base);
}
+/**
+ * crypto_lskcipher_ivsize() - obtain IV size
+ * @tfm: cipher handle
+ *
+ * The size of the IV for the lskcipher referenced by the cipher handle is
+ * returned. This IV size may be zero if the cipher does not need an IV.
+ *
+ * Return: IV size in bytes
+ */
+static inline unsigned int crypto_lskcipher_ivsize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.ivsize;
+}
+
/**
* crypto_skcipher_blocksize() - obtain block size of cipher
* @tfm: cipher handle
return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
}
-static inline unsigned int crypto_skcipher_alg_chunksize(
- struct skcipher_alg *alg)
+/**
+ * crypto_lskcipher_blocksize() - obtain block size of cipher
+ * @tfm: cipher handle
+ *
+ * The block size for the lskcipher referenced with the cipher handle is
+ * returned. The caller may use that information to allocate appropriate
+ * memory for the data returned by the encryption or decryption operation
+ *
+ * Return: block size of cipher
+ */
+static inline unsigned int crypto_lskcipher_blocksize(
+ struct crypto_lskcipher *tfm)
{
- return alg->chunksize;
+ return crypto_tfm_alg_blocksize(crypto_lskcipher_tfm(tfm));
}
/**
static inline unsigned int crypto_skcipher_chunksize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm));
+ return crypto_skcipher_alg_common(tfm)->chunksize;
+}
+
+/**
+ * crypto_lskcipher_chunksize() - obtain chunk size
+ * @tfm: cipher handle
+ *
+ * The block size is set to one for ciphers such as CTR. However,
+ * you still need to provide incremental updates in multiples of
+ * the underlying block size as the IV does not have sub-block
+ * granularity. This is known in this API as the chunk size.
+ *
+ * Return: chunk size in bytes
+ */
+static inline unsigned int crypto_lskcipher_chunksize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.chunksize;
}
static inline unsigned int crypto_sync_skcipher_blocksize(
return crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm));
}
+static inline unsigned int crypto_lskcipher_alignmask(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_alg_alignmask(crypto_lskcipher_tfm(tfm));
+}
+
static inline u32 crypto_skcipher_get_flags(struct crypto_skcipher *tfm)
{
return crypto_tfm_get_flags(crypto_skcipher_tfm(tfm));
crypto_skcipher_clear_flags(&tfm->base, flags);
}
+static inline u32 crypto_lskcipher_get_flags(struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_get_flags(crypto_lskcipher_tfm(tfm));
+}
+
+static inline void crypto_lskcipher_set_flags(struct crypto_lskcipher *tfm,
+ u32 flags)
+{
+ crypto_tfm_set_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
+static inline void crypto_lskcipher_clear_flags(struct crypto_lskcipher *tfm,
+ u32 flags)
+{
+ crypto_tfm_clear_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
/**
* crypto_skcipher_setkey() - set key for cipher
* @tfm: cipher handle
return crypto_skcipher_setkey(&tfm->base, key, keylen);
}
+/**
+ * crypto_lskcipher_setkey() - set key for cipher
+ * @tfm: cipher handle
+ * @key: buffer holding the key
+ * @keylen: length of the key in bytes
+ *
+ * The caller provided key is set for the lskcipher referenced by the cipher
+ * handle.
+ *
+ * Note, the key length determines the cipher type. Many block ciphers implement
+ * different cipher modes depending on the key size, such as AES-128 vs AES-192
+ * vs. AES-256. When providing a 16 byte key for an AES cipher handle, AES-128
+ * is performed.
+ *
+ * Return: 0 if the setting of the key was successful; < 0 if an error occurred
+ */
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm,
+ const u8 *key, unsigned int keylen);
+
static inline unsigned int crypto_skcipher_min_keysize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->min_keysize;
+ return crypto_skcipher_alg_common(tfm)->min_keysize;
}
static inline unsigned int crypto_skcipher_max_keysize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->max_keysize;
+ return crypto_skcipher_alg_common(tfm)->max_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_min_keysize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_max_keysize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.max_keysize;
}
/**
*/
int crypto_skcipher_decrypt(struct skcipher_request *req);
+/**
+ * crypto_lskcipher_encrypt() - encrypt plaintext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ * by crypto_lskcipher_ivsize
+ *
+ * Encrypt plaintext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful, if positive
+ * then this many bytes have been left unprocessed;
+ * < 0 if an error occurred
+ */
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv);
+
+/**
+ * crypto_lskcipher_decrypt() - decrypt ciphertext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ * by crypto_lskcipher_ivsize
+ *
+ * Decrypt ciphertext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful, if positive
+ * then this many bytes have been left unprocessed;
+ * < 0 if an error occurred
+ */
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv);
+
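Putting the pieces together, a minimal single-buffer encryption with the new linear API might look like this (sketch only; tfm, key and buffers are placeholders)::

	u8 iv[16] = {};
	int err, rem;

	err = crypto_lskcipher_setkey(tfm, key, keylen);
	if (err)
		return err;

	rem = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);
	if (rem < 0)
		return rem;			/* hard failure */
	if (rem > 0)				/* trailing partial block still pending */
		pr_debug("lskcipher: %d bytes left over\n", rem);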
/**
* DOC: Symmetric Key Cipher Request Handle
*
#define CRYPTO_ALG_TYPE_CIPHER 0x00000001
#define CRYPTO_ALG_TYPE_COMPRESS 0x00000002
#define CRYPTO_ALG_TYPE_AEAD 0x00000003
+#define CRYPTO_ALG_TYPE_LSKCIPHER 0x00000004
#define CRYPTO_ALG_TYPE_SKCIPHER 0x00000005
#define CRYPTO_ALG_TYPE_AKCIPHER 0x00000006
#define CRYPTO_ALG_TYPE_SIG 0x00000007
#define CRYPTO_ALG_TYPE_SHASH 0x0000000e
#define CRYPTO_ALG_TYPE_AHASH 0x0000000f
-#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
-#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_ACOMPRESS_MASK 0x0000000e
#define CRYPTO_ALG_LARVAL 0x00000010
* crypto_aead_walksize() (with the remainder going at the end), no chunk
* can cross a page boundary or a scatterlist element boundary.
* ahash:
- * - The result buffer must be aligned to the algorithm's alignmask.
* - crypto_ahash_finup() must not be used unless the algorithm implements
* ->finup() natively.
*/
* @cra_ctxsize: Size of the operational context of the transformation. This
* value informs the kernel crypto API about the memory size
* needed to be allocated for the transformation context.
- * @cra_alignmask: Alignment mask for the input and output data buffer. The data
- * buffer containing the input data for the algorithm must be
- * aligned to this alignment mask. The data buffer for the
- * output data must be aligned to this alignment mask. Note that
- * the Crypto API will do the re-alignment in software, but
- * only under special conditions and there is a performance hit.
- * The re-alignment happens at these occasions for different
- * @cra_u types: cipher -- For both input data and output data
- * buffer; ahash -- For output hash destination buf; shash --
- * For output hash destination buf.
- * This is needed on hardware which is flawed by design and
- * cannot pick data from arbitrary addresses.
+ * @cra_alignmask: For cipher, skcipher, lskcipher, and aead algorithms this is
+ * 1 less than the alignment, in bytes, that the algorithm
+ * implementation requires for input and output buffers. When
+ * the crypto API is invoked with buffers that are not aligned
+ * to this alignment, the crypto API automatically utilizes
+ * appropriately aligned temporary buffers to comply with what
+ * the algorithm needs. (For scatterlists this happens only if
+ * the algorithm uses the skcipher_walk helper functions.) This
+ * misalignment handling carries a performance penalty, so it is
+ * preferred that algorithms do not set a nonzero alignmask.
+ * Also, crypto API users may wish to allocate buffers aligned
+ * to the alignmask of the algorithm being used, in order to
+ * avoid the API having to realign them. Note: the alignmask is
+ * not supported for hash algorithms and is always 0 for them.
* @cra_priority: Priority of this transformation implementation. In case
* multiple transformations with same @cra_name are available to
* the Crypto API, the kernel will use the one with highest
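As the @cra_alignmask text above notes, a caller can sidestep the API's internal
realignment by over-aligning its own buffers. A rough sketch of that idea,
assuming an already-allocated skcipher handle named tfm and a length buflen
(both names are illustrative, not part of this patch)::

	unsigned int mask = crypto_skcipher_alignmask(tfm);
	u8 *raw, *buf;

	/* Over-allocate by the mask, then round the pointer up to (mask + 1). */
	raw = kmalloc(buflen + mask, GFP_KERNEL);
	if (!raw)
		return -ENOMEM;
	buf = PTR_ALIGN(raw, mask + 1);
	/* ... use buf; release with kfree(raw), not buf. */

In practice kmalloc() memory already satisfies most alignmasks, so the pattern
matters mainly when pointing into the middle of a larger buffer.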
QM_NOT_READY,
};
+enum qm_misc_ctl_bits {
+ QM_DRIVER_REMOVING = 0x0,
+ QM_RST_SCHED,
+ QM_RESETTING,
+ QM_MODULE_PARAM,
+};
+
enum qm_cap_bits {
QM_SUPPORT_DB_ISOLATION = 0x0,
QM_SUPPORT_FUNC_QOS,
struct hisi_qm *qm;
struct work_struct work;
u16 *qp_finish_id;
+ u16 eqe_num;
};
/**
struct list_head qm_hw_errs;
};
+struct qm_rsv_buf {
+ struct qm_sqc *sqc;
+ struct qm_cqc *cqc;
+ struct qm_eqc *eqc;
+ struct qm_aeqc *aeqc;
+ dma_addr_t sqc_dma;
+ dma_addr_t cqc_dma;
+ dma_addr_t eqc_dma;
+ dma_addr_t aeqc_dma;
+ struct qm_dma qcdma;
+};
+
struct hisi_qm {
enum qm_hw_ver ver;
enum qm_fun_type fun_type;
dma_addr_t cqc_dma;
dma_addr_t eqe_dma;
dma_addr_t aeqe_dma;
+ struct qm_rsv_buf xqc_buf;
struct hisi_qm_status status;
const struct hisi_qm_err_ini *err_ini;
mutex_init(&qm_list->lock);
}
+static inline void hisi_qm_add_list(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+{
+ mutex_lock(&qm_list->lock);
+ list_add_tail(&qm->list, &qm_list->list);
+ mutex_unlock(&qm_list->lock);
+}
+
+static inline void hisi_qm_del_list(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+{
+ mutex_lock(&qm_list->lock);
+ list_del(&qm->list);
+ mutex_unlock(&qm_list->lock);
+}
+
int hisi_qm_init(struct hisi_qm *qm);
void hisi_qm_uninit(struct hisi_qm *qm);
int hisi_qm_start(struct hisi_qm *qm);
void hisi_qm_free_qps(struct hisi_qp **qps, int qp_num);
void hisi_qm_dev_shutdown(struct pci_dev *pdev);
void hisi_qm_wait_task_finish(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
-int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
-void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
+int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard);
+void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard);
int hisi_qm_resume(struct device *dev);
int hisi_qm_suspend(struct device *dev);
void hisi_qm_pm_uninit(struct hisi_qm *qm);
extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng);
extern long hwrng_msleep(struct hwrng *rng, unsigned int msecs);
+extern long hwrng_yield(struct hwrng *rng);
#endif /* LINUX_HWRANDOM_H_ */
* build_OID_registry.pl to generate the data for look_up_OID().
*/
enum OID {
- OID_id_dsa_with_sha1, /* 1.2.840.10030.4.3 */
OID_id_dsa, /* 1.2.840.10040.4.1 */
OID_id_ecPublicKey, /* 1.2.840.10045.2.1 */
OID_id_prime192v1, /* 1.2.840.10045.3.1.1 */
OID_id_prime256v1, /* 1.2.840.10045.3.1.7 */
- OID_id_ecdsa_with_sha1, /* 1.2.840.10045.4.1 */
OID_id_ecdsa_with_sha224, /* 1.2.840.10045.4.3.1 */
OID_id_ecdsa_with_sha256, /* 1.2.840.10045.4.3.2 */
OID_id_ecdsa_with_sha384, /* 1.2.840.10045.4.3.3 */
/* PKCS#1 {iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1)} */
OID_rsaEncryption, /* 1.2.840.113549.1.1.1 */
- OID_md2WithRSAEncryption, /* 1.2.840.113549.1.1.2 */
- OID_md3WithRSAEncryption, /* 1.2.840.113549.1.1.3 */
- OID_md4WithRSAEncryption, /* 1.2.840.113549.1.1.4 */
- OID_sha1WithRSAEncryption, /* 1.2.840.113549.1.1.5 */
OID_sha256WithRSAEncryption, /* 1.2.840.113549.1.1.11 */
OID_sha384WithRSAEncryption, /* 1.2.840.113549.1.1.12 */
OID_sha512WithRSAEncryption, /* 1.2.840.113549.1.1.13 */
OID_smimeCapabilites, /* 1.2.840.113549.1.9.15 */
OID_smimeAuthenticatedAttrs, /* 1.2.840.113549.1.9.16.2.11 */
- /* {iso(1) member-body(2) us(840) rsadsi(113549) digestAlgorithm(2)} */
- OID_md2, /* 1.2.840.113549.2.2 */
- OID_md4, /* 1.2.840.113549.2.4 */
- OID_md5, /* 1.2.840.113549.2.5 */
-
OID_mskrb5, /* 1.2.840.48018.1.2.2 */
OID_krb5, /* 1.2.840.113554.1.2.2 */
OID_krb5u2u, /* 1.2.840.113554.1.2.2.3 */
OID_PKU2U, /* 1.3.5.1.5.2.7 */
OID_Scram, /* 1.3.6.1.5.5.14 */
OID_certAuthInfoAccess, /* 1.3.6.1.5.5.7.1.1 */
- OID_sha1, /* 1.3.14.3.2.26 */
OID_id_ansip384r1, /* 1.3.132.0.34 */
OID_sha256, /* 2.16.840.1.101.3.4.2.1 */
OID_sha384, /* 2.16.840.1.101.3.4.2.2 */
OID_TPMImportableKey, /* 2.23.133.10.1.4 */
OID_TPMSealedData, /* 2.23.133.10.1.5 */
+ /* CSOR FIPS-202 SHA-3 */
+ OID_sha3_256, /* 2.16.840.1.101.3.4.2.8 */
+ OID_sha3_384, /* 2.16.840.1.101.3.4.2.9 */
+ OID_sha3_512, /* 2.16.840.1.101.3.4.2.10 */
+ OID_id_ecdsa_with_sha3_256, /* 2.16.840.1.101.3.4.3.10 */
+ OID_id_ecdsa_with_sha3_384, /* 2.16.840.1.101.3.4.3.11 */
+ OID_id_ecdsa_with_sha3_512, /* 2.16.840.1.101.3.4.3.12 */
+ OID_id_rsassa_pkcs1_v1_5_with_sha3_256, /* 2.16.840.1.101.3.4.3.14 */
+ OID_id_rsassa_pkcs1_v1_5_with_sha3_384, /* 2.16.840.1.101.3.4.3.15 */
+ OID_id_rsassa_pkcs1_v1_5_with_sha3_512, /* 2.16.840.1.101.3.4.3.16 */
+
OID__NR
};
#define MICROWATT_PER_MILLIWATT 1000UL
#define MICROWATT_PER_WATT 1000000UL
+#define BYTES_PER_KBIT (KILO / BITS_PER_BYTE)
+#define BYTES_PER_MBIT (MEGA / BITS_PER_BYTE)
+#define BYTES_PER_GBIT (GIGA / BITS_PER_BYTE)
+
#define ABSOLUTE_ZERO_MILLICELSIUS -273150
static inline long milli_kelvin_to_millicelsius(long t)
#ifndef _LINUX_VERIFICATION_H
#define _LINUX_VERIFICATION_H
+#include <linux/errno.h>
#include <linux/types.h>
/*
HASH_ALGO_SM3_256,
HASH_ALGO_STREEBOG_256,
HASH_ALGO_STREEBOG_512,
+ HASH_ALGO_SHA3_256,
+ HASH_ALGO_SHA3_384,
+ HASH_ALGO_SHA3_512,
HASH_ALGO__LAST
};
possible to load a signed module containing the algorithm to check
the signature on that module.
-config MODULE_SIG_SHA1
- bool "Sign modules with SHA-1"
- select CRYPTO_SHA1
-
-config MODULE_SIG_SHA224
- bool "Sign modules with SHA-224"
- select CRYPTO_SHA256
-
config MODULE_SIG_SHA256
bool "Sign modules with SHA-256"
select CRYPTO_SHA256
bool "Sign modules with SHA-512"
select CRYPTO_SHA512
+config MODULE_SIG_SHA3_256
+ bool "Sign modules with SHA3-256"
+ select CRYPTO_SHA3
+
+config MODULE_SIG_SHA3_384
+ bool "Sign modules with SHA3-384"
+ select CRYPTO_SHA3
+
+config MODULE_SIG_SHA3_512
+ bool "Sign modules with SHA3-512"
+ select CRYPTO_SHA3
+
endchoice
config MODULE_SIG_HASH
string
depends on MODULE_SIG || IMA_APPRAISE_MODSIG
- default "sha1" if MODULE_SIG_SHA1
- default "sha224" if MODULE_SIG_SHA224
default "sha256" if MODULE_SIG_SHA256
default "sha384" if MODULE_SIG_SHA384
default "sha512" if MODULE_SIG_SHA512
+ default "sha3-256" if MODULE_SIG_SHA3_256
+ default "sha3-384" if MODULE_SIG_SHA3_384
+ default "sha3-512" if MODULE_SIG_SHA3_512
choice
prompt "Module compression mode"
*cb_cpu = cpu;
}
- err = -EBUSY;
+ err = -EBUSY;
if ((pinst->flags & PADATA_RESET))
goto out;
*/
void padata_free_shell(struct padata_shell *ps)
{
+ struct parallel_data *pd;
+
if (!ps)
return;
mutex_lock(&ps->pinst->lock);
list_del(&ps->list);
- padata_free_pd(rcu_dereference_protected(ps->pd, 1));
+ pd = rcu_dereference_protected(ps->pd, 1);
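+ /* Another path may still hold a reference to pd; free it only on the final put. */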
+ if (refcount_dec_and_test(&pd->refcnt))
+ padata_free_pd(pd);
mutex_unlock(&ps->pinst->lock);
kfree(ps);
#include <linux/debugfs.h>
#include <linux/scatterlist.h>
-#include <linux/crypto.h>
#include <crypto/aes.h>
-#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <crypto/kpp.h>
+#include <crypto/utils.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
return ret;
}
- WARN_ON((unsigned long)session_key &
- crypto_shash_alignmask(con->v2.hmac_tfm));
ret = crypto_shash_setkey(con->v2.hmac_tfm, session_key,
session_key_len);
if (ret) {
goto out;
for (i = 0; i < kvec_cnt; i++) {
- WARN_ON((unsigned long)kvecs[i].iov_base &
- crypto_shash_alignmask(con->v2.hmac_tfm));
ret = crypto_shash_update(desc, kvecs[i].iov_base,
kvecs[i].iov_len);
if (ret)
// SPDX-License-Identifier: GPL-2.0-only
#define pr_fmt(fmt) "IPsec: " fmt
-#include <crypto/algapi.h>
#include <crypto/hash.h>
+#include <crypto/utils.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/slab.h>
{
unsigned int len;
- len = size + crypto_ahash_digestsize(ahash) +
- (crypto_ahash_alignmask(ahash) &
- ~(crypto_tfm_ctx_alignment() - 1));
+ len = size + crypto_ahash_digestsize(ahash);
len = ALIGN(len, crypto_tfm_ctx_alignment());
return tmp + offset;
}
-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
- unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
{
- return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+ return tmp + offset;
}
static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
int ihl = ip_hdrlen(skb);
iph = AH_SKB_CB(skb)->tmp;
- icv = ah_tmp_icv(ahp->ahash, iph, ihl);
+ icv = ah_tmp_icv(iph, ihl);
memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
top_iph->tos = iph->tos;
if (!iph)
goto out;
seqhi = (__be32 *)((char *)iph + ihl);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;
work_iph = AH_SKB_CB(skb)->tmp;
auth_data = ah_tmp_auth(work_iph, ihl);
- icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
seqhi = (__be32 *)((char *)work_iph + ihl);
auth_data = ah_tmp_auth(seqhi, seqhi_len);
- icv = ah_tmp_icv(ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1990, 2002 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc1157#section-4
+-- https://www.rfc-editor.org/rfc/rfc3416#section-3
+
Message ::=
SEQUENCE {
version
#define pr_fmt(fmt) "IPv6: " fmt
-#include <crypto/algapi.h>
#include <crypto/hash.h>
+#include <crypto/utils.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <net/ip.h>
{
unsigned int len;
- len = size + crypto_ahash_digestsize(ahash) +
- (crypto_ahash_alignmask(ahash) &
- ~(crypto_tfm_ctx_alignment() - 1));
+ len = size + crypto_ahash_digestsize(ahash);
len = ALIGN(len, crypto_tfm_ctx_alignment());
return tmp + offset;
}
-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
- unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
{
- return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+ return tmp + offset;
}
static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
iph_base = AH_SKB_CB(skb)->tmp;
iph_ext = ah_tmp_ext(iph_base);
- icv = ah_tmp_icv(ahp->ahash, iph_ext, extlen);
+ icv = ah_tmp_icv(iph_ext, extlen);
memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
memcpy(top_iph, iph_base, IPV6HDR_BASELEN);
iph_ext = ah_tmp_ext(iph_base);
seqhi = (__be32 *)((char *)iph_ext + extlen);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;
work_iph = AH_SKB_CB(skb)->tmp;
auth_data = ah_tmp_auth(work_iph, hdr_len);
- icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
auth_data = ah_tmp_auth((u8 *)work_iph, hdr_len);
seqhi = (__be32 *)(auth_data + ahp->icv_trunc_len);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
-#include <crypto/algapi.h>
#include <crypto/sha2.h>
+#include <crypto/utils.h>
#include <net/sock.h>
#include <net/inet_common.h>
#include <net/inet_hashtables.h>
* WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*/
-#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
+#include <crypto/utils.h>
#include <linux/err.h>
#include <linux/types.h>
#include <linux/mm.h>
* WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*/
-#include <crypto/algapi.h>
#include <linux/types.h>
#include <linux/jiffies.h>
#include <linux/sunrpc/gss_krb5.h>
-#include <linux/crypto.h>
#include "gss_krb5_internal.h"
tristate
select XFRM
select CRYPTO
+ select CRYPTO_AEAD
select CRYPTO_HASH
select CRYPTO_SKCIPHER
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*/
+#include <crypto/aead.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
#include <linux/module.h>
}
struct xfrm_algo_list {
+ int (*find)(const char *name, u32 type, u32 mask);
struct xfrm_algo_desc *algs;
int entries;
- u32 type;
- u32 mask;
};
static const struct xfrm_algo_list xfrm_aead_list = {
+ .find = crypto_has_aead,
.algs = aead_list,
.entries = ARRAY_SIZE(aead_list),
- .type = CRYPTO_ALG_TYPE_AEAD,
- .mask = CRYPTO_ALG_TYPE_MASK,
};
static const struct xfrm_algo_list xfrm_aalg_list = {
+ .find = crypto_has_ahash,
.algs = aalg_list,
.entries = ARRAY_SIZE(aalg_list),
- .type = CRYPTO_ALG_TYPE_HASH,
- .mask = CRYPTO_ALG_TYPE_HASH_MASK,
};
static const struct xfrm_algo_list xfrm_ealg_list = {
+ .find = crypto_has_skcipher,
.algs = ealg_list,
.entries = ARRAY_SIZE(ealg_list),
- .type = CRYPTO_ALG_TYPE_SKCIPHER,
- .mask = CRYPTO_ALG_TYPE_MASK,
};
static const struct xfrm_algo_list xfrm_calg_list = {
+ .find = crypto_has_comp,
.algs = calg_list,
.entries = ARRAY_SIZE(calg_list),
- .type = CRYPTO_ALG_TYPE_COMPRESS,
- .mask = CRYPTO_ALG_TYPE_MASK,
};
static struct xfrm_algo_desc *xfrm_find_algo(
if (!probe)
break;
- status = crypto_has_alg(list[i].name, algo_list->type,
- algo_list->mask);
+ status = algo_list->find(list[i].name, 0, 0);
if (!status)
break;
#define pr_fmt(fmt) "EVM: "fmt
#include <linux/init.h>
-#include <linux/crypto.h>
#include <linux/audit.h>
#include <linux/xattr.h>
#include <linux/integrity.h>
#include <crypto/hash.h>
#include <crypto/hash_info.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
#include "evm.h"
int evm_initialized;
#include <linux/scatterlist.h>
#include <linux/ctype.h>
#include <crypto/aes.h>
-#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <crypto/sha2.h>
#include <crypto/skcipher.h>
+#include <crypto/utils.h>
#include "encrypted.h"
#include "ecryptfs_format.h"
*/
#include <assert.h>
+#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
struct dbc_user_nonce tmp = {
.auth_needed = !!signature,
};
- int ret;
assert(nonce_out);
if (signature)
memcpy(tmp.signature, signature, sizeof(tmp.signature));
- ret = ioctl(fd, DBCIOCNONCE, &tmp);
- if (ret)
- return ret;
+ if (ioctl(fd, DBCIOCNONCE, &tmp))
+ return errno;
memcpy(nonce_out, tmp.nonce, sizeof(tmp.nonce));
return 0;
memcpy(tmp.uid, uid, sizeof(tmp.uid));
memcpy(tmp.signature, signature, sizeof(tmp.signature));
- return ioctl(fd, DBCIOCUID, &tmp);
+ if (ioctl(fd, DBCIOCUID, &tmp))
+ return errno;
+ return 0;
}
int process_param(int fd, int msg_index, __u8 *signature, int *data)
memcpy(tmp.signature, signature, sizeof(tmp.signature));
- ret = ioctl(fd, DBCIOCPARAM, &tmp);
- if (ret)
- return ret;
+ if (ioctl(fd, DBCIOCPARAM, &tmp))
+ return errno;
*data = tmp.param;
+ memcpy(signature, tmp.signature, sizeof(tmp.signature));
return 0;
}
def handle_error(code):
- val = code * -1
- raise OSError(val, os.strerror(val))
+ raise OSError(code, os.strerror(code))
def get_nonce(device, signature):
if type(message) != tuple:
raise ValueError("Expected message tuple")
arg = ctypes.c_int(data if data else 0)
- ret = lib.process_param(device.fileno(), message[0], signature, ctypes.pointer(arg))
+ sig = ctypes.create_string_buffer(signature, len(signature))
+ ret = lib.process_param(device.fileno(), message[0], ctypes.pointer(sig), ctypes.pointer(arg))
if ret:
handle_error(ret)
- return arg, signature
+ return arg.value, sig.value
import os
import time
import glob
+import fcntl
+try:
+ import ioctl_opt as ioctl
+except ImportError:
+ ioctl = None
from dbc import *
# Artificial delay between set commands
class DynamicBoostControlTest(unittest.TestCase):
def __init__(self, data) -> None:
self.d = None
- self.signature = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
- self.uid = "1111111111111111"
+ self.signature = b"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
+ self.uid = b"1111111111111111"
super().__init__(data)
def setUp(self) -> None:
def setUp(self) -> None:
if not os.path.exists(DEVICE_NODE):
self.skipTest("system is unsupported")
+ if not ioctl:
+ self.skipTest("unable to test IOCTLs without ioctl_opt")
+
return super().setUp()
def test_invalid_nonce_ioctl(self) -> None:
"""tries to call get_nonce ioctl with invalid data structures"""
# 0x1 (get nonce), and invalid data
- INVALID1 = IOWR(ord("D"), 0x01, invalid_param)
+ INVALID1 = ioctl.IOWR(ord("D"), 0x01, invalid_param)
with self.assertRaises(OSError) as error:
fcntl.ioctl(self.d, INVALID1, self.data, True)
self.assertEqual(error.exception.errno, 22)
"""tries to call set_uid ioctl with invalid data structures"""
# 0x2 (set uid), and invalid data
- INVALID2 = IOW(ord("D"), 0x02, invalid_param)
+ INVALID2 = ioctl.IOW(ord("D"), 0x02, invalid_param)
with self.assertRaises(OSError) as error:
fcntl.ioctl(self.d, INVALID2, self.data, True)
self.assertEqual(error.exception.errno, 22)
"""tries to call set_uid ioctl with invalid data structures"""
# 0x2 as RW (set uid), and invalid data
- INVALID3 = IOWR(ord("D"), 0x02, invalid_param)
+ INVALID3 = ioctl.IOWR(ord("D"), 0x02, invalid_param)
with self.assertRaises(OSError) as error:
fcntl.ioctl(self.d, INVALID3, self.data, True)
self.assertEqual(error.exception.errno, 22)
def test_invalid_param_ioctl(self) -> None:
"""tries to call param ioctl with invalid data structures"""
# 0x3 (param), and invalid data
- INVALID4 = IOWR(ord("D"), 0x03, invalid_param)
+ INVALID4 = ioctl.IOWR(ord("D"), 0x03, invalid_param)
with self.assertRaises(OSError) as error:
fcntl.ioctl(self.d, INVALID4, self.data, True)
self.assertEqual(error.exception.errno, 22)
def test_invalid_call_ioctl(self) -> None:
"""tries to call the DBC ioctl with invalid data structures"""
# 0x4, and invalid data
- INVALID5 = IOWR(ord("D"), 0x04, invalid_param)
+ INVALID5 = ioctl.IOWR(ord("D"), 0x04, invalid_param)
with self.assertRaises(OSError) as error:
fcntl.ioctl(self.d, INVALID5, self.data, True)
self.assertEqual(error.exception.errno, 22)
# SOC power
soc_power_max = process_param(self.d, PARAM_GET_SOC_PWR_MAX, self.signature)
soc_power_min = process_param(self.d, PARAM_GET_SOC_PWR_MIN, self.signature)
- self.assertGreater(soc_power_max.parameter, soc_power_min.parameter)
+ self.assertGreater(soc_power_max[0], soc_power_min[0])
# fmax
fmax_max = process_param(self.d, PARAM_GET_FMAX_MAX, self.signature)
fmax_min = process_param(self.d, PARAM_GET_FMAX_MIN, self.signature)
- self.assertGreater(fmax_max.parameter, fmax_min.parameter)
+ self.assertGreater(fmax_max[0], fmax_min[0])
# cap values
keys = {
}
for k in keys:
result = process_param(self.d, keys[k], self.signature)
- self.assertGreater(result.parameter, 0)
+ self.assertGreater(result[0], 0)
def test_get_invalid_param(self) -> None:
"""fetch an invalid parameter"""
original = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
# set the fmax
- target = original.parameter - 100
+ target = original[0] - 100
process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, target)
time.sleep(SET_DELAY)
new = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
- self.assertEqual(new.parameter, target)
+ self.assertEqual(new[0], target)
# revert back to current
- process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, original.parameter)
+ process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, original[0])
time.sleep(SET_DELAY)
cur = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
- self.assertEqual(cur.parameter, original.parameter)
+ self.assertEqual(cur[0], original[0])
def test_set_power_cap(self) -> None:
"""get/set power cap limit"""
original = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
# set the fmax
- target = original.parameter - 10
+ target = original[0] - 10
process_param(self.d, PARAM_SET_PWR_CAP, self.signature, target)
time.sleep(SET_DELAY)
new = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
- self.assertEqual(new.parameter, target)
+ self.assertEqual(new[0], target)
# revert back to current
- process_param(self.d, PARAM_SET_PWR_CAP, self.signature, original.parameter)
+ process_param(self.d, PARAM_SET_PWR_CAP, self.signature, original[0])
time.sleep(SET_DELAY)
cur = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
- self.assertEqual(cur.parameter, original.parameter)
+ self.assertEqual(cur[0], original[0])
def test_set_3d_graphics_mode(self) -> None:
"""set/get 3d graphics mode"""