
Merge tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Remove legacy compression interface
   - Improve scatterwalk API
   - Add request chaining to ahash and acomp
   - Add virtual address support to ahash and acomp
   - Add folio support to acomp
   - Remove NULL dst support from acomp

  Algorithms:
   - Library options are fully hidden (selected by kernel users only)
   - Add Kerberos5 algorithms
   - Add VAES-based ctr(aes) on x86
   - Ensure LZO respects output buffer length on compression
   - Remove obsolete SIMD fallback code path from arm/ghash-ce

  Drivers:
   - Add support for PCI device 0x1134 in ccp
   - Add support for rk3588's standalone TRNG in rockchip
   - Add Inside Secure SafeXcel EIP-93 crypto engine support in eip93
   - Fix bugs in tegra uncovered by multi-threaded self-test
   - Fix corner cases in hisilicon/sec2

  Others:
   - Add SG_MITER_LOCAL to sg miter
   - Convert ubifs, hibernate and xfrm_ipcomp from legacy API to acomp"

* tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (187 commits)
  crypto: testmgr - Add multibuffer acomp testing
  crypto: acomp - Fix synchronous acomp chaining fallback
  crypto: testmgr - Add multibuffer hash testing
  crypto: hash - Fix synchronous ahash chaining fallback
  crypto: arm/ghash-ce - Remove SIMD fallback code path
  crypto: essiv - Replace memcpy() + NUL-termination with strscpy()
  crypto: api - Call crypto_alg_put in crypto_unregister_alg
  crypto: scompress - Fix incorrect stream freeing
  crypto: lib/chacha - remove unused arch-specific init support
  crypto: remove obsolete 'comp' compression API
  crypto: compress_null - drop obsolete 'comp' implementation
  crypto: cavium/zip - drop obsolete 'comp' implementation
  crypto: zstd - drop obsolete 'comp' implementation
  crypto: lzo - drop obsolete 'comp' implementation
  crypto: lzo-rle - drop obsolete 'comp' implementation
  crypto: lz4hc - drop obsolete 'comp' implementation
  crypto: lz4 - drop obsolete 'comp' implementation
  crypto: deflate - drop obsolete 'comp' implementation
  crypto: 842 - drop obsolete 'comp' implementation
  crypto: nx - Migrate to scomp API
  ...
This commit is contained in:
Linus Torvalds 2025-03-29 10:01:55 -07:00
commit e5e0e6bebe
233 changed files with 14511 additions and 4726 deletions


@ -196,8 +196,6 @@ the aforementioned cipher types:
- CRYPTO_ALG_TYPE_CIPHER Single block cipher
- CRYPTO_ALG_TYPE_COMPRESS Compression
- CRYPTO_ALG_TYPE_AEAD Authenticated Encryption with Associated Data
(MAC)


@ -26,3 +26,4 @@ for cryptographic use cases, as well as programming examples.
api-samples
descore-readme
device_drivers/index
krb5


@ -0,0 +1,262 @@
.. SPDX-License-Identifier: GPL-2.0
===========================
Kerberos V Cryptography API
===========================
.. Contents:
- Overview.
- Small Buffer.
- Encoding Type.
- Key Derivation.
- PRF+ Calculation.
- Kc, Ke And Ki Derivation.
- Crypto Functions.
- Preparation Functions.
- Encryption Mode.
- Checksum Mode.
- The krb5enc AEAD algorithm
Overview
========
This API provides Kerberos 5-style cryptography for key derivation, encryption
and checksumming for use in network filesystems and can be used to implement
the low-level crypto that's needed for GSSAPI.
The following crypto types are supported::
KRB5_ENCTYPE_AES128_CTS_HMAC_SHA1_96
KRB5_ENCTYPE_AES256_CTS_HMAC_SHA1_96
KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128
KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192
KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC
KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC
KRB5_CKSUMTYPE_HMAC_SHA1_96_AES128
KRB5_CKSUMTYPE_HMAC_SHA1_96_AES256
KRB5_CKSUMTYPE_CMAC_CAMELLIA128
KRB5_CKSUMTYPE_CMAC_CAMELLIA256
KRB5_CKSUMTYPE_HMAC_SHA256_128_AES128
KRB5_CKSUMTYPE_HMAC_SHA384_192_AES256
The API can be included by::
#include <crypto/krb5.h>
Small Buffer
------------
To pass small pieces of data about, such as keys, a buffer structure is
defined, giving a pointer to the data and the size of that data::
struct krb5_buffer {
unsigned int len;
void *data;
};
Encoding Type
=============
The encoding type is defined by the following structure::
struct krb5_enctype {
int etype;
int ctype;
const char *name;
u16 key_bytes;
u16 key_len;
u16 Kc_len;
u16 Ke_len;
u16 Ki_len;
u16 prf_len;
u16 block_len;
u16 conf_len;
u16 cksum_len;
...
};
The fields of interest to the user of the API are as follows:
* ``etype`` and ``ctype`` indicate the protocol number for this encoding
type for encryption and checksumming respectively. They hold
``KRB5_ENCTYPE_*`` and ``KRB5_CKSUMTYPE_*`` constants.
* ``name`` is the formal name of the encoding.
* ``key_len`` and ``key_bytes`` are the input key length and the derived key
length. (I think they only differ for DES, which isn't supported here).
* ``Kc_len``, ``Ke_len`` and ``Ki_len`` are the sizes of the derived Kc, Ke
and Ki keys. Kc is used in checksum mode; Ke and Ki are used in
encryption mode.
* ``prf_len`` is the size of the result from the PRF+ function calculation.
* ``block_len``, ``conf_len`` and ``cksum_len`` are the encryption block
length, confounder length and checksum length respectively. All three are
used in encryption mode, but only the checksum length is used in checksum
mode.
The encoding type is looked up by number using the following function::
const struct krb5_enctype *crypto_krb5_find_enctype(u32 enctype);
Key Derivation
==============
Once the application has selected an encryption type, the keys that will be
used to do the actual crypto can be derived from the transport key.
PRF+ Calculation
----------------
To aid in key derivation, a function to calculate the Kerberos GSSAPI
mechanism's PRF+ is provided::
int crypto_krb5_calc_PRFplus(const struct krb5_enctype *krb5,
const struct krb5_buffer *K,
unsigned int L,
const struct krb5_buffer *S,
struct krb5_buffer *result,
gfp_t gfp);
This can be used to derive the transport key from a source key plus additional
data to limit its use.
Crypto Functions
================
Once the keys have been derived, crypto can be performed on the data. The
caller must leave gaps in the buffer for the storage of the confounder (if
needed) and the checksum when preparing a message for transmission. An enum
and a pair of functions are provided to aid in this::
enum krb5_crypto_mode {
KRB5_CHECKSUM_MODE,
KRB5_ENCRYPT_MODE,
};
size_t crypto_krb5_how_much_buffer(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t data_size, size_t *_offset);
size_t crypto_krb5_how_much_data(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t *_buffer_size, size_t *_offset);
All these functions take the encoding type and an indication of the mode of
crypto (checksum-only or full encryption).
The first function returns how big the buffer will need to be to house a given
amount of data; the second function returns how much data will fit in a buffer
of a particular size, and adjusts down the size of the required buffer
accordingly. In both cases, the offset of the data within the buffer is also
returned.
When a message has been received, the location and size of the data within the
message can be determined by calling::
void crypto_krb5_where_is_the_data(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t *_offset, size_t *_len);
The caller provides the offset and length of the message to the function, which
then alters those values to indicate the region containing the data (plus any
padding). It is up to the caller to determine how much padding there is.
Preparation Functions
---------------------
Two functions are provided to allocate and prepare a crypto object for use by
the action functions::
struct crypto_aead *
crypto_krb5_prepare_encryption(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
u32 usage, gfp_t gfp);
struct crypto_shash *
crypto_krb5_prepare_checksum(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
u32 usage, gfp_t gfp);
Both of these functions take the encoding type, the transport key and the usage
value used to derive the appropriate subkey(s). They create an appropriate
crypto object, an AEAD template for encryption and a synchronous hash for
checksumming, set the key(s) on it and configure it. The caller is expected to
pass these handles to the action functions below.
Encryption Mode
---------------
A pair of functions are provided to encrypt and decrypt a message::
ssize_t crypto_krb5_encrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded);
int crypto_krb5_decrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
In both cases, the input and output buffers are indicated by the same
scatterlist.
For the encryption function, the output buffer may be larger than is needed
(the amount of output generated is returned) and the location and size of the
data are indicated (which must match the encoding). If no confounder is set,
the function will insert one.
For the decryption function, the offset and length of the message in the buffer
supplied and these are shrunk to fit the data. The decryption function will
verify any checksums within the message and give an error if they don't match.
Checksum Mode
-------------
A pair of functions are provided to generate the checksum on a message and to
verify that checksum::
ssize_t crypto_krb5_get_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len);
int crypto_krb5_verify_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
In both cases, the input and output buffers are indicated by the same
scatterlist. Additional metadata can be passed in which will get added to the
hash before the data.
For the get_mic function, the output buffer may be larger than is needed (the
amount of output generated is returned) and the location and size of the data
are indicated (which must match the encoding).
For the verification function, the offset and length of the message in the
buffer are supplied and these are shrunk to fit the data. An error will be
returned
if the checksums don't match.
The krb5enc AEAD algorithm
==========================
A template AEAD crypto algorithm, called "krb5enc", is provided that hashes the
plaintext before encrypting it (the reverse of authenc). The handle returned
by ``crypto_krb5_prepare_encryption()`` may be one of these, but there's no
requirement for the user of this API to interact with it directly.
For reference, its key format begins with a BE32 of the format number. Only
format 1 is provided and that continues with a BE32 of the Ke key length
followed by a BE32 of the Ki key length, followed by the bytes from the Ke key
and then the Ki key.
Using specifically ordered words means that the static test data doesn't
require byteswapping.


@ -0,0 +1,144 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/fsl,sec2.0.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale SoC SEC Security Engines versions 1.x-2.x-3.x
maintainers:
- J. Neuschäfer <j.ne@posteo.net>
properties:
compatible:
description:
Should contain entries for this and backward compatible SEC versions,
high to low. Warning - SEC1 and SEC2 are mutually exclusive.
oneOf:
- items:
- const: fsl,sec3.3
- const: fsl,sec3.1
- const: fsl,sec3.0
- const: fsl,sec2.4
- const: fsl,sec2.2
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec3.1
- const: fsl,sec3.0
- const: fsl,sec2.4
- const: fsl,sec2.2
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec3.0
- const: fsl,sec2.4
- const: fsl,sec2.2
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec2.4
- const: fsl,sec2.2
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec2.2
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec2.1
- const: fsl,sec2.0
- items:
- const: fsl,sec2.0
- items:
- const: fsl,sec1.2
- const: fsl,sec1.0
- items:
- const: fsl,sec1.0
reg:
maxItems: 1
interrupts:
maxItems: 1
fsl,num-channels:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [ 1, 4 ]
description: An integer representing the number of channels available.
fsl,channel-fifo-len:
$ref: /schemas/types.yaml#/definitions/uint32
maximum: 100
description:
An integer representing the number of descriptor pointers each channel
fetch fifo can hold.
fsl,exec-units-mask:
$ref: /schemas/types.yaml#/definitions/uint32
maximum: 0xfff
description: |
The bitmask representing what execution units (EUs) are available.
EU information should be encoded following the SEC's Descriptor Header
Dword EU_SEL0 field documentation, i.e. as follows:
bit 0 = reserved - should be 0
bit 1 = set if SEC has the ARC4 EU (AFEU)
bit 2 = set if SEC has the DES/3DES EU (DEU)
bit 3 = set if SEC has the message digest EU (MDEU/MDEU-A)
bit 4 = set if SEC has the random number generator EU (RNG)
bit 5 = set if SEC has the public key EU (PKEU)
bit 6 = set if SEC has the AES EU (AESU)
bit 7 = set if SEC has the Kasumi EU (KEU)
bit 8 = set if SEC has the CRC EU (CRCU)
bit 11 = set if SEC has the message digest EU extended alg set (MDEU-B)
remaining bits are reserved for future SEC EUs.
fsl,descriptor-types-mask:
$ref: /schemas/types.yaml#/definitions/uint32
description: |
The bitmask representing what descriptors are available. Descriptor type
information should be encoded following the SEC's Descriptor Header Dword
DESC_TYPE field documentation, i.e. as follows:
bit 0 = SEC supports descriptor type aesu_ctr_nonsnoop
bit 1 = SEC supports descriptor type ipsec_esp
bit 2 = SEC supports descriptor type common_nonsnoop
bit 3 = SEC supports descriptor type 802.11i AES ccmp
bit 4 = SEC supports descriptor type hmac_snoop_no_afeu
bit 5 = SEC supports descriptor type srtp
bit 6 = SEC supports descriptor type non_hmac_snoop_no_afeu
bit 7 = SEC supports descriptor type pkeu_assemble
bit 8 = SEC supports descriptor type aesu_key_expand_output
bit 9 = SEC supports descriptor type pkeu_ptmul
bit 10 = SEC supports descriptor type common_nonsnoop_afeu
bit 11 = SEC supports descriptor type pkeu_ptadd_dbl
..and so on and so forth.
required:
- compatible
- reg
- fsl,num-channels
- fsl,channel-fifo-len
- fsl,exec-units-mask
- fsl,descriptor-types-mask
unevaluatedProperties: false
examples:
- |
/* MPC8548E */
crypto@30000 {
compatible = "fsl,sec2.1", "fsl,sec2.0";
reg = <0x30000 0x10000>;
interrupts = <29 2>;
interrupt-parent = <&mpic>;
fsl,num-channels = <4>;
fsl,channel-fifo-len = <24>;
fsl,exec-units-mask = <0xfe>;
fsl,descriptor-types-mask = <0x12b0ebf>;
};
...


@ -1,65 +0,0 @@
Freescale SoC SEC Security Engines versions 1.x-2.x-3.x
Required properties:
- compatible : Should contain entries for this and backward compatible
SEC versions, high to low, e.g., "fsl,sec2.1", "fsl,sec2.0" (SEC2/3)
e.g., "fsl,sec1.2", "fsl,sec1.0" (SEC1)
warning: SEC1 and SEC2 are mutually exclusive
- reg : Offset and length of the register set for the device
- interrupts : the SEC's interrupt number
- fsl,num-channels : An integer representing the number of channels
available.
- fsl,channel-fifo-len : An integer representing the number of
descriptor pointers each channel fetch fifo can hold.
- fsl,exec-units-mask : The bitmask representing what execution units
(EUs) are available. It's a single 32-bit cell. EU information
should be encoded following the SEC's Descriptor Header Dword
EU_SEL0 field documentation, i.e. as follows:
bit 0 = reserved - should be 0
bit 1 = set if SEC has the ARC4 EU (AFEU)
bit 2 = set if SEC has the DES/3DES EU (DEU)
bit 3 = set if SEC has the message digest EU (MDEU/MDEU-A)
bit 4 = set if SEC has the random number generator EU (RNG)
bit 5 = set if SEC has the public key EU (PKEU)
bit 6 = set if SEC has the AES EU (AESU)
bit 7 = set if SEC has the Kasumi EU (KEU)
bit 8 = set if SEC has the CRC EU (CRCU)
bit 11 = set if SEC has the message digest EU extended alg set (MDEU-B)
remaining bits are reserved for future SEC EUs.
- fsl,descriptor-types-mask : The bitmask representing what descriptors
are available. It's a single 32-bit cell. Descriptor type information
should be encoded following the SEC's Descriptor Header Dword DESC_TYPE
field documentation, i.e. as follows:
bit 0 = set if SEC supports the aesu_ctr_nonsnoop desc. type
bit 1 = set if SEC supports the ipsec_esp descriptor type
bit 2 = set if SEC supports the common_nonsnoop desc. type
bit 3 = set if SEC supports the 802.11i AES ccmp desc. type
bit 4 = set if SEC supports the hmac_snoop_no_afeu desc. type
bit 5 = set if SEC supports the srtp descriptor type
bit 6 = set if SEC supports the non_hmac_snoop_no_afeu desc.type
bit 7 = set if SEC supports the pkeu_assemble descriptor type
bit 8 = set if SEC supports the aesu_key_expand_output desc.type
bit 9 = set if SEC supports the pkeu_ptmul descriptor type
bit 10 = set if SEC supports the common_nonsnoop_afeu desc. type
bit 11 = set if SEC supports the pkeu_ptadd_dbl descriptor type
..and so on and so forth.
Example:
/* MPC8548E */
crypto@30000 {
compatible = "fsl,sec2.1", "fsl,sec2.0";
reg = <0x30000 0x10000>;
interrupts = <29 2>;
interrupt-parent = <&mpic>;
fsl,num-channels = <4>;
fsl,channel-fifo-len = <24>;
fsl,exec-units-mask = <0xfe>;
fsl,descriptor-types-mask = <0x12b0ebf>;
};


@ -0,0 +1,67 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/inside-secure,safexcel-eip93.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Inside Secure SafeXcel EIP-93 cryptographic engine
maintainers:
- Christian Marangi <ansuelsmth@gmail.com>
description: |
The Inside Secure SafeXcel EIP-93 is a cryptographic engine IP block
integrated in various devices under very different and generic names, from
PKTE to simply vendor+EIP93. The actual IP under the hood is developed by
Inside Secure and licensed to vendors.
The IP block is sold in different models based on which features are
needed, identified by the final letter(s). Each letter corresponds to a
specific feature set, and multiple letters reflect the sum of those
feature sets.
EIP-93 models:
- EIP-93i: (basic) DES/Triple DES, AES, PRNG, IPsec ESP, SRTP, SHA1
- EIP-93ie: i + SHA224/256, AES-192/256
- EIP-93is: i + SSL/TLS/DTLS, MD5, ARC4
- EIP-93ies: i + e + s
- EIP-93iw: i + AES-XCBC-MAC, AES-CCM
properties:
compatible:
oneOf:
- items:
- const: airoha,en7581-eip93
- const: inside-secure,safexcel-eip93ies
- items:
- not: {}
description: Need a SoC specific compatible
- enum:
- inside-secure,safexcel-eip93i
- inside-secure,safexcel-eip93ie
- inside-secure,safexcel-eip93is
- inside-secure,safexcel-eip93iw
reg:
maxItems: 1
interrupts:
maxItems: 1
required:
- compatible
- reg
- interrupts
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
crypto@1e004000 {
compatible = "airoha,en7581-eip93", "inside-secure,safexcel-eip93ies";
reg = <0x1fb70000 0x1000>;
interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
};


@ -47,6 +47,8 @@ properties:
- const: core
- const: reg
dma-coherent: true
required:
- reg
- interrupts


@ -20,6 +20,7 @@ properties:
- qcom,ipq5332-trng
- qcom,ipq5424-trng
- qcom,ipq9574-trng
- qcom,qcs615-trng
- qcom,qcs8300-trng
- qcom,sa8255p-trng
- qcom,sa8775p-trng


@ -55,6 +55,7 @@ properties:
- qcom,sm8550-qce
- qcom,sm8650-qce
- qcom,sm8750-qce
- qcom,x1e80100-qce
- const: qcom,sm8150-qce
- const: qcom,qce


@ -0,0 +1,59 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/rng/rockchip,rk3588-rng.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip RK3588 TRNG
description: True Random Number Generator on Rockchip RK3588 SoC
maintainers:
- Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
properties:
compatible:
enum:
- rockchip,rk3588-rng
reg:
maxItems: 1
clocks:
items:
- description: TRNG AHB clock
interrupts:
maxItems: 1
resets:
maxItems: 1
required:
- compatible
- reg
- clocks
- interrupts
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/rockchip,rk3588-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/reset/rockchip,rk3588-cru.h>
bus {
#address-cells = <2>;
#size-cells = <2>;
rng@fe378000 {
compatible = "rockchip,rk3588-rng";
reg = <0x0 0xfe378000 0x0 0x200>;
interrupts = <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH 0>;
clocks = <&scmi_clk SCMI_HCLK_SECURE_NS>;
resets = <&scmi_reset SCMI_SRST_H_TRNG_NS>;
};
};
...


@ -3610,14 +3610,42 @@ F: drivers/hwmon/asus_wmi_sensors.c
ASYMMETRIC KEYS
M: David Howells <dhowells@redhat.com>
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
L: keyrings@vger.kernel.org
L: linux-crypto@vger.kernel.org
S: Maintained
F: Documentation/crypto/asymmetric-keys.rst
F: crypto/asymmetric_keys/
F: include/crypto/pkcs7.h
F: include/crypto/public_key.h
F: include/keys/asymmetric-*.h
F: include/linux/verification.h
ASYMMETRIC KEYS - ECDSA
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
R: Stefan Berger <stefanb@linux.ibm.com>
L: linux-crypto@vger.kernel.org
S: Maintained
F: crypto/ecc*
F: crypto/ecdsa*
F: include/crypto/ecc*
ASYMMETRIC KEYS - GOST
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
L: linux-crypto@vger.kernel.org
S: Odd fixes
F: crypto/ecrdsa*
ASYMMETRIC KEYS - RSA
M: Lukas Wunner <lukas@wunner.de>
M: Ignat Korchagin <ignat@cloudflare.com>
L: linux-crypto@vger.kernel.org
S: Maintained
F: crypto/rsa*
ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
R: Dan Williams <dan.j.williams@intel.com>
S: Odd fixes
@ -11599,6 +11627,13 @@ L: linux-crypto@vger.kernel.org
S: Maintained
F: drivers/crypto/inside-secure/
INSIDE SECURE EIP93 CRYPTO DRIVER
M: Christian Marangi <ansuelsmth@gmail.com>
L: linux-crypto@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/crypto/inside-secure,safexcel-eip93.yaml
F: drivers/crypto/inside-secure/eip93/
INTEGRITY MEASUREMENT ARCHITECTURE (IMA)
M: Mimi Zohar <zohar@linux.ibm.com>
M: Roberto Sassu <roberto.sassu@huawei.com>
@ -11802,6 +11837,7 @@ F: drivers/dma/ioat*
INTEL IAA CRYPTO DRIVER
M: Kristen Accardi <kristen.c.accardi@intel.com>
M: Vinicius Costa Gomes <vinicius.gomes@intel.com>
L: linux-crypto@vger.kernel.org
S: Supported
F: Documentation/driver-api/crypto/iaa/iaa-crypto.rst
@ -20675,8 +20711,10 @@ F: include/uapi/linux/rkisp1-config.h
ROCKCHIP RK3568 RANDOM NUMBER GENERATOR SUPPORT
M: Daniel Golle <daniel@makrotopia.org>
M: Aurelien Jarno <aurelien@aurel32.net>
M: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
S: Maintained
F: Documentation/devicetree/bindings/rng/rockchip,rk3568-rng.yaml
F: Documentation/devicetree/bindings/rng/rockchip,rk3588-rng.yaml
F: drivers/char/hw_random/rockchip-rng.c
ROCKCHIP RASTER 2D GRAPHIC ACCELERATION UNIT DRIVER
@ -26493,6 +26531,7 @@ F: mm/zsmalloc.c
ZSTD
M: Nick Terrell <terrelln@fb.com>
M: David Sterba <dsterba@suse.com>
S: Maintained
B: https://github.com/facebook/zstd/issues
T: git https://github.com/terrelln/linux.git


@ -3,10 +3,12 @@
menu "Accelerated Cryptographic Algorithms for CPU (arm)"
config CRYPTO_CURVE25519_NEON
tristate "Public key crypto: Curve25519 (NEON)"
tristate
depends on KERNEL_MODE_NEON
select CRYPTO_KPP
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
default CRYPTO_LIB_CURVE25519_INTERNAL
help
Curve25519 algorithm
@ -45,9 +47,10 @@ config CRYPTO_NHPOLY1305_NEON
- NEON (Advanced SIMD) extensions
config CRYPTO_POLY1305_ARM
tristate "Hash functions: Poly1305 (NEON)"
tristate
select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305
default CRYPTO_LIB_POLY1305_INTERNAL
help
Poly1305 authenticator algorithm (RFC7539)
@ -212,9 +215,10 @@ config CRYPTO_AES_ARM_CE
- ARMv8 Crypto Extensions
config CRYPTO_CHACHA20_NEON
tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (NEON)"
tristate
select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms


@ -399,9 +399,9 @@ static int ctr_encrypt(struct skcipher_request *req)
}
if (walk.nbytes) {
u8 __aligned(8) tail[AES_BLOCK_SIZE];
const u8 *tsrc = walk.src.virt.addr;
unsigned int nbytes = walk.nbytes;
u8 *tdst = walk.dst.virt.addr;
u8 *tsrc = walk.src.virt.addr;
/*
* Tell aes_ctr_encrypt() to process a tail block.


@ -76,12 +76,6 @@ void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
}
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
int nrounds)
{
@ -116,7 +110,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
err = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@ -166,7 +160,7 @@ static int do_xchacha(struct skcipher_request *req, bool neon)
u32 state[16];
u8 real_iv[16];
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
if (!IS_ENABLED(CONFIG_KERNEL_MODE_NEON) || !neon) {
hchacha_block_arm(state, subctx.key, ctx->nrounds);


@ -55,10 +55,6 @@ struct ghash_desc_ctx {
u32 count;
};
struct ghash_async_ctx {
struct cryptd_ahash *cryptd_tfm;
};
asmlinkage void pmull_ghash_update_p64(int blocks, u64 dg[], const char *src,
u64 const h[][2], const char *head);
@ -78,34 +74,12 @@ static int ghash_init(struct shash_desc *desc)
static void ghash_do_update(int blocks, u64 dg[], const char *src,
struct ghash_key *key, const char *head)
{
if (likely(crypto_simd_usable())) {
kernel_neon_begin();
if (static_branch_likely(&use_p64))
pmull_ghash_update_p64(blocks, dg, src, key->h, head);
else
pmull_ghash_update_p8(blocks, dg, src, key->h, head);
kernel_neon_end();
} else {
be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
do {
const u8 *in = src;
if (head) {
in = head;
blocks++;
head = NULL;
} else {
src += GHASH_BLOCK_SIZE;
}
crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
gf128mul_lle(&dst, &key->k);
} while (--blocks);
dg[0] = be64_to_cpu(dst.b);
dg[1] = be64_to_cpu(dst.a);
}
kernel_neon_begin();
if (static_branch_likely(&use_p64))
pmull_ghash_update_p64(blocks, dg, src, key->h, head);
else
pmull_ghash_update_p8(blocks, dg, src, key->h, head);
kernel_neon_end();
}
static int ghash_update(struct shash_desc *desc, const u8 *src,
@ -206,162 +180,13 @@ static struct shash_alg ghash_alg = {
.descsize = sizeof(struct ghash_desc_ctx),
.base.cra_name = "ghash",
.base.cra_driver_name = "ghash-ce-sync",
.base.cra_priority = 300 - 1,
.base.cra_driver_name = "ghash-ce",
.base.cra_priority = 300,
.base.cra_blocksize = GHASH_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct ghash_key) + sizeof(u64[2]),
.base.cra_module = THIS_MODULE,
};
static int ghash_async_init(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
desc->tfm = child;
return crypto_shash_init(desc);
}
static int ghash_async_update(struct ahash_request *req)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
return crypto_ahash_update(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
return shash_ahash_update(req, desc);
}
}
static int ghash_async_final(struct ahash_request *req)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
return crypto_ahash_final(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
return crypto_shash_final(desc, req->result);
}
}
static int ghash_async_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
return crypto_ahash_digest(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
desc->tfm = child;
return shash_ahash_digest(req, desc);
}
}
static int ghash_async_import(struct ahash_request *req, const void *in)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
desc->tfm = cryptd_ahash_child(ctx->cryptd_tfm);
return crypto_shash_import(desc, in);
}
static int ghash_async_export(struct ahash_request *req, void *out)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
return crypto_shash_export(desc, out);
}
static int ghash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct crypto_ahash *child = &ctx->cryptd_tfm->base;
crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm)
& CRYPTO_TFM_REQ_MASK);
return crypto_ahash_setkey(child, key, keylen);
}
static int ghash_async_init_tfm(struct crypto_tfm *tfm)
{
struct cryptd_ahash *cryptd_tfm;
struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
cryptd_tfm = cryptd_alloc_ahash("ghash-ce-sync", 0, 0);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
ctx->cryptd_tfm = cryptd_tfm;
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct ahash_request) +
crypto_ahash_reqsize(&cryptd_tfm->base));
return 0;
}
static void ghash_async_exit_tfm(struct crypto_tfm *tfm)
{
struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
cryptd_free_ahash(ctx->cryptd_tfm);
}
static struct ahash_alg ghash_async_alg = {
.init = ghash_async_init,
.update = ghash_async_update,
.final = ghash_async_final,
.setkey = ghash_async_setkey,
.digest = ghash_async_digest,
.import = ghash_async_import,
.export = ghash_async_export,
.halg.digestsize = GHASH_DIGEST_SIZE,
.halg.statesize = sizeof(struct ghash_desc_ctx),
.halg.base = {
.cra_name = "ghash",
.cra_driver_name = "ghash-ce",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_ASYNC,
.cra_blocksize = GHASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct ghash_async_ctx),
.cra_module = THIS_MODULE,
.cra_init = ghash_async_init_tfm,
.cra_exit = ghash_async_exit_tfm,
},
};
void pmull_gcm_encrypt(int blocks, u64 dg[], const char *src,
struct gcm_key const *k, char *dst,
const char *iv, int rounds, u32 counter);
@@ -459,17 +284,11 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
scatterwalk_start(&walk, req->src);
do {
u32 n = scatterwalk_clamp(&walk, len);
u8 *p;
unsigned int n;
if (!n) {
scatterwalk_start(&walk, sg_next(walk.sg));
n = scatterwalk_clamp(&walk, len);
}
p = scatterwalk_map(&walk);
gcm_update_mac(dg, p, n, buf, &buf_count, ctx);
scatterwalk_unmap(p);
n = scatterwalk_next(&walk, len);
gcm_update_mac(dg, walk.addr, n, buf, &buf_count, ctx);
scatterwalk_done_src(&walk, n);
if (unlikely(len / SZ_4K > (len - n) / SZ_4K)) {
kernel_neon_end();
@@ -477,8 +296,6 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
}
len -= n;
scatterwalk_advance(&walk, n);
scatterwalk_done(&walk, 0, len);
} while (len);
if (buf_count) {
@@ -767,14 +584,9 @@ static int __init ghash_ce_mod_init(void)
err = crypto_register_shash(&ghash_alg);
if (err)
goto err_aead;
err = crypto_register_ahash(&ghash_async_alg);
if (err)
goto err_shash;
return 0;
err_shash:
crypto_unregister_shash(&ghash_alg);
err_aead:
if (elf_hwcap2 & HWCAP2_PMULL)
crypto_unregister_aeads(gcm_aes_algs,
@@ -784,7 +596,6 @@ err_aead:
static void __exit ghash_ce_mod_exit(void)
{
crypto_unregister_ahash(&ghash_async_alg);
crypto_unregister_shash(&ghash_alg);
if (elf_hwcap2 & HWCAP2_PMULL)
crypto_unregister_aeads(gcm_aes_algs,


@@ -26,10 +26,11 @@ config CRYPTO_NHPOLY1305_NEON
- NEON (Advanced SIMD) extensions
config CRYPTO_POLY1305_NEON
tristate "Hash functions: Poly1305 (NEON)"
tristate
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305
default CRYPTO_LIB_POLY1305_INTERNAL
help
Poly1305 authenticator algorithm (RFC7539)
@@ -186,11 +187,12 @@ config CRYPTO_AES_ARM64_NEON_BLK
- NEON (Advanced SIMD) extensions
config CRYPTO_CHACHA20_NEON
tristate "Ciphers: ChaCha (NEON)"
tristate
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms


@@ -156,23 +156,13 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
scatterwalk_start(&walk, req->src);
do {
u32 n = scatterwalk_clamp(&walk, len);
u8 *p;
if (!n) {
scatterwalk_start(&walk, sg_next(walk.sg));
n = scatterwalk_clamp(&walk, len);
}
p = scatterwalk_map(&walk);
macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc,
num_rounds(ctx));
unsigned int n;
n = scatterwalk_next(&walk, len);
macp = ce_aes_ccm_auth_data(mac, walk.addr, n, macp,
ctx->key_enc, num_rounds(ctx));
scatterwalk_done_src(&walk, n);
len -= n;
scatterwalk_unmap(p);
scatterwalk_advance(&walk, n);
scatterwalk_done(&walk, 0, len);
} while (len);
}
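The hunks above repeat one conversion: the old scatterwalk_clamp()/scatterwalk_map()/scatterwalk_unmap()/scatterwalk_advance()/scatterwalk_done() sequence collapses into scatterwalk_next(), which exposes the current chunk at walk.addr, followed by scatterwalk_done_src(). As a rough illustration only, here is a userspace toy model of the new walking pattern — `toy_sg`, `toy_next()`, and `toy_done_src()` are made-up stand-ins, not the real `<crypto/scatterwalk.h>` API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for struct scatterlist / struct scatter_walk. */
struct toy_sg { const unsigned char *buf; size_t len; };

struct toy_walk {
	const struct toy_sg *sg;      /* current list entry */
	size_t offset;                /* offset into sg->buf */
	const unsigned char *addr;    /* like walk.addr after _next() */
};

static void toy_start(struct toy_walk *w, const struct toy_sg *sgl)
{
	w->sg = sgl;
	w->offset = 0;
}

/* Like scatterwalk_next(): expose the current chunk, return its length. */
static size_t toy_next(struct toy_walk *w, size_t remaining)
{
	size_t avail = w->sg->len - w->offset;

	w->addr = w->sg->buf + w->offset;
	return remaining < avail ? remaining : avail;
}

/* Like scatterwalk_done_src(): retire n consumed source bytes. */
static void toy_done_src(struct toy_walk *w, size_t n)
{
	w->offset += n;
	if (w->offset == w->sg->len) {
		w->sg++;              /* step to the next list entry */
		w->offset = 0;
	}
}

/* Walk 'len' bytes and sum them (a stand-in for feeding a MAC). */
static unsigned int toy_consume(const struct toy_sg *sgl, size_t len)
{
	struct toy_walk walk;
	unsigned int sum = 0;

	toy_start(&walk, sgl);
	do {
		size_t n = toy_next(&walk, len);

		for (size_t i = 0; i < n; i++)
			sum += walk.addr[i];
		toy_done_src(&walk, n);
		len -= n;
	} while (len);
	return sum;
}
```

Note how the "empty entry" handling (`if (!n) { scatterwalk_start(...); ... }`) disappears from the callers: chunk advancement lives entirely inside the two helpers.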


@@ -287,7 +287,8 @@ static int __xts_crypt(struct skcipher_request *req, bool encrypt,
struct skcipher_walk walk;
int nbytes, err;
int first = 1;
u8 *out, *in;
const u8 *in;
u8 *out;
if (req->cryptlen < AES_BLOCK_SIZE)
return -EINVAL;


@@ -74,12 +74,6 @@ void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
}
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
int nrounds)
{
@@ -110,7 +104,7 @@ static int chacha_neon_stream_xor(struct skcipher_request *req,
err = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@@ -151,7 +145,7 @@ static int xchacha_neon(struct skcipher_request *req)
u32 state[16];
u8 real_iv[16];
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
hchacha_block_arch(state, subctx.key, ctx->nrounds);
subctx.nrounds = ctx->nrounds;


@@ -308,21 +308,12 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[], u32 len)
scatterwalk_start(&walk, req->src);
do {
u32 n = scatterwalk_clamp(&walk, len);
u8 *p;
unsigned int n;
if (!n) {
scatterwalk_start(&walk, sg_next(walk.sg));
n = scatterwalk_clamp(&walk, len);
}
p = scatterwalk_map(&walk);
gcm_update_mac(dg, p, n, buf, &buf_count, ctx);
n = scatterwalk_next(&walk, len);
gcm_update_mac(dg, walk.addr, n, buf, &buf_count, ctx);
scatterwalk_done_src(&walk, n);
len -= n;
scatterwalk_unmap(p);
scatterwalk_advance(&walk, n);
scatterwalk_done(&walk, 0, len);
} while (len);
if (buf_count) {


@@ -112,17 +112,12 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
scatterwalk_start(&walk, req->src);
do {
u32 n = scatterwalk_clamp(&walk, assoclen);
u8 *p, *ptr;
unsigned int n, orig_n;
const u8 *p;
if (!n) {
scatterwalk_start(&walk, sg_next(walk.sg));
n = scatterwalk_clamp(&walk, assoclen);
}
p = ptr = scatterwalk_map(&walk);
assoclen -= n;
scatterwalk_advance(&walk, n);
orig_n = scatterwalk_next(&walk, assoclen);
p = walk.addr;
n = orig_n;
while (n > 0) {
unsigned int l, nblocks;
@@ -136,9 +131,9 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
} else {
nblocks = n / SM4_BLOCK_SIZE;
sm4_ce_cbcmac_update(ctx->rkey_enc,
mac, ptr, nblocks);
mac, p, nblocks);
ptr += nblocks * SM4_BLOCK_SIZE;
p += nblocks * SM4_BLOCK_SIZE;
n %= SM4_BLOCK_SIZE;
continue;
@@ -147,15 +142,15 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
l = min(n, SM4_BLOCK_SIZE - len);
if (l) {
crypto_xor(mac + len, ptr, l);
crypto_xor(mac + len, p, l);
len += l;
ptr += l;
p += l;
n -= l;
}
}
scatterwalk_unmap(p);
scatterwalk_done(&walk, 0, assoclen);
scatterwalk_done_src(&walk, orig_n);
assoclen -= orig_n;
} while (assoclen);
}


@@ -82,20 +82,15 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
scatterwalk_start(&walk, req->src);
do {
u32 n = scatterwalk_clamp(&walk, assoclen);
u8 *p, *ptr;
unsigned int n, orig_n;
const u8 *p;
if (!n) {
scatterwalk_start(&walk, sg_next(walk.sg));
n = scatterwalk_clamp(&walk, assoclen);
}
p = ptr = scatterwalk_map(&walk);
assoclen -= n;
scatterwalk_advance(&walk, n);
orig_n = scatterwalk_next(&walk, assoclen);
p = walk.addr;
n = orig_n;
if (n + buflen < GHASH_BLOCK_SIZE) {
memcpy(&buffer[buflen], ptr, n);
memcpy(&buffer[buflen], p, n);
buflen += n;
} else {
unsigned int nblocks;
@@ -103,8 +98,8 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
if (buflen) {
unsigned int l = GHASH_BLOCK_SIZE - buflen;
memcpy(&buffer[buflen], ptr, l);
ptr += l;
memcpy(&buffer[buflen], p, l);
p += l;
n -= l;
pmull_ghash_update(ctx->ghash_table, ghash,
@@ -114,17 +109,17 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
nblocks = n / GHASH_BLOCK_SIZE;
if (nblocks) {
pmull_ghash_update(ctx->ghash_table, ghash,
ptr, nblocks);
ptr += nblocks * GHASH_BLOCK_SIZE;
p, nblocks);
p += nblocks * GHASH_BLOCK_SIZE;
}
buflen = n % GHASH_BLOCK_SIZE;
if (buflen)
memcpy(&buffer[0], ptr, buflen);
memcpy(&buffer[0], p, buflen);
}
scatterwalk_unmap(p);
scatterwalk_done(&walk, 0, assoclen);
scatterwalk_done_src(&walk, orig_n);
assoclen -= orig_n;
} while (assoclen);
/* padding with '0' */


@@ -3,9 +3,11 @@
menu "Accelerated Cryptographic Algorithms for CPU (mips)"
config CRYPTO_POLY1305_MIPS
tristate "Hash functions: Poly1305"
tristate
depends on MIPS
select CRYPTO_HASH
select CRYPTO_ARCH_HAVE_LIB_POLY1305
default CRYPTO_LIB_POLY1305_INTERNAL
help
Poly1305 authenticator algorithm (RFC7539)
@@ -52,10 +54,11 @@ config CRYPTO_SHA512_OCTEON
Architecture: mips OCTEON using crypto instructions, when available
config CRYPTO_CHACHA_MIPS
tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (MIPS32r2)"
tristate
depends on CPU_MIPS32_R2
select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms


@@ -20,12 +20,6 @@ EXPORT_SYMBOL(chacha_crypt_arch);
asmlinkage void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds);
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
static int chacha_mips_stream_xor(struct skcipher_request *req,
const struct chacha_ctx *ctx, const u8 *iv)
{
@@ -35,7 +29,7 @@ static int chacha_mips_stream_xor(struct skcipher_request *req,
err = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@@ -67,7 +61,7 @@ static int xchacha_mips(struct skcipher_request *req)
u32 state[16];
u8 real_iv[16];
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
hchacha_block(state, subctx.key, ctx->nrounds);
subctx.nrounds = ctx->nrounds;
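Across these hunks, calls to chacha_init_generic() become plain chacha_init(), with the arch-specific chacha_init_arch() wrappers deleted. For orientation, the 16-word state such an initializer produces is the standard RFC 8439 layout; a hedged sketch follows, where `chacha_init_sketch()` is a hypothetical stand-in rather than the kernel helper (the real one lives in `<crypto/chacha.h>` and takes the key as u32 words and a 16-byte IV holding counter and nonce):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

/* Build the RFC 8439 ChaCha state: constants, key, counter, nonce. */
static void chacha_init_sketch(uint32_t state[16], const uint32_t key[8],
			       const uint8_t iv[16])
{
	/* The constant words spell "expand 32-byte k". */
	state[0] = 0x61707865;
	state[1] = 0x3320646e;
	state[2] = 0x79622d32;
	state[3] = 0x6b206574;
	for (int i = 0; i < 8; i++)
		state[4 + i] = key[i];
	/* Block counter and nonce occupy the last four words. */
	state[12] = get_le32(iv + 0);
	state[13] = get_le32(iv + 4);
	state[14] = get_le32(iv + 8);
	state[15] = get_le32(iv + 12);
}
```

Since every architecture built the identical state, folding the per-arch wrappers into one shared chacha_init() removes duplicated code without changing behavior.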


@@ -3,10 +3,12 @@
menu "Accelerated Cryptographic Algorithms for CPU (powerpc)"
config CRYPTO_CURVE25519_PPC64
tristate "Public key crypto: Curve25519 (PowerPC64)"
tristate
depends on PPC64 && CPU_LITTLE_ENDIAN
select CRYPTO_KPP
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
default CRYPTO_LIB_CURVE25519_INTERNAL
help
Curve25519 algorithm
@@ -91,11 +93,12 @@ config CRYPTO_AES_GCM_P10
later CPU. This module supports stitched acceleration for AES/GCM.
config CRYPTO_CHACHA20_P10
tristate "Ciphers: ChaCha20, XChacha20, XChacha12 (P10 or later)"
tristate
depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms


@@ -35,9 +35,9 @@ MODULE_ALIAS_CRYPTO("aes");
asmlinkage int aes_p10_set_encrypt_key(const u8 *userKey, const int bits,
void *key);
asmlinkage void aes_p10_encrypt(const u8 *in, u8 *out, const void *key);
asmlinkage void aes_p10_gcm_encrypt(u8 *in, u8 *out, size_t len,
asmlinkage void aes_p10_gcm_encrypt(const u8 *in, u8 *out, size_t len,
void *rkey, u8 *iv, void *Xi);
asmlinkage void aes_p10_gcm_decrypt(u8 *in, u8 *out, size_t len,
asmlinkage void aes_p10_gcm_decrypt(const u8 *in, u8 *out, size_t len,
void *rkey, u8 *iv, void *Xi);
asmlinkage void gcm_init_htable(unsigned char htable[], unsigned char Xi[]);
asmlinkage void gcm_ghash_p10(unsigned char *Xi, unsigned char *Htable,
@@ -261,7 +261,7 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
return ret;
while ((nbytes = walk.nbytes) > 0 && ret == 0) {
u8 *src = walk.src.virt.addr;
const u8 *src = walk.src.virt.addr;
u8 *dst = walk.dst.virt.addr;
u8 buf[AES_BLOCK_SIZE];


@@ -69,9 +69,9 @@ static int p8_aes_ctr_setkey(struct crypto_skcipher *tfm, const u8 *key,
static void p8_aes_ctr_final(const struct p8_aes_ctr_ctx *ctx,
struct skcipher_walk *walk)
{
const u8 *src = walk->src.virt.addr;
u8 *ctrblk = walk->iv;
u8 keystream[AES_BLOCK_SIZE];
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;


@@ -57,12 +57,6 @@ void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
}
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
int nrounds)
{
@@ -95,7 +89,7 @@ static int chacha_p10_stream_xor(struct skcipher_request *req,
if (err)
return err;
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@@ -137,7 +131,7 @@ static int xchacha_p10(struct skcipher_request *req)
u32 state[16];
u8 real_iv[16];
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
hchacha_block_arch(state, subctx.key, ctx->nrounds);
subctx.nrounds = ctx->nrounds;


@@ -22,7 +22,6 @@ config CRYPTO_CHACHA_RISCV64
tristate "Ciphers: ChaCha"
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
help
Length-preserving ciphers: ChaCha20 stream cipher algorithm


@@ -108,11 +108,12 @@ config CRYPTO_DES_S390
As of z196 the CTR mode is hardware accelerated.
config CRYPTO_CHACHA_S390
tristate "Ciphers: ChaCha20"
tristate
depends on S390
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving cipher: ChaCha20 stream cipher (RFC 7539)


@@ -66,7 +66,6 @@ struct s390_xts_ctx {
struct gcm_sg_walk {
struct scatter_walk walk;
unsigned int walk_bytes;
u8 *walk_ptr;
unsigned int walk_bytes_remain;
u8 buf[AES_BLOCK_SIZE];
unsigned int buf_bytes;
@@ -787,29 +786,20 @@ static void gcm_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg,
static inline unsigned int _gcm_sg_clamp_and_map(struct gcm_sg_walk *gw)
{
struct scatterlist *nextsg;
gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain);
while (!gw->walk_bytes) {
nextsg = sg_next(gw->walk.sg);
if (!nextsg)
return 0;
scatterwalk_start(&gw->walk, nextsg);
gw->walk_bytes = scatterwalk_clamp(&gw->walk,
gw->walk_bytes_remain);
}
gw->walk_ptr = scatterwalk_map(&gw->walk);
if (gw->walk_bytes_remain == 0)
return 0;
gw->walk_bytes = scatterwalk_next(&gw->walk, gw->walk_bytes_remain);
return gw->walk_bytes;
}
static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw,
unsigned int nbytes)
unsigned int nbytes, bool out)
{
gw->walk_bytes_remain -= nbytes;
scatterwalk_unmap(gw->walk_ptr);
scatterwalk_advance(&gw->walk, nbytes);
scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
gw->walk_ptr = NULL;
if (out)
scatterwalk_done_dst(&gw->walk, nbytes);
else
scatterwalk_done_src(&gw->walk, nbytes);
}
static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
@@ -835,16 +825,16 @@ static int gcm_in_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
}
if (!gw->buf_bytes && gw->walk_bytes >= minbytesneeded) {
gw->ptr = gw->walk_ptr;
gw->ptr = gw->walk.addr;
gw->nbytes = gw->walk_bytes;
goto out;
}
while (1) {
n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes);
memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n);
memcpy(gw->buf + gw->buf_bytes, gw->walk.addr, n);
gw->buf_bytes += n;
_gcm_sg_unmap_and_advance(gw, n);
_gcm_sg_unmap_and_advance(gw, n, false);
if (gw->buf_bytes >= minbytesneeded) {
gw->ptr = gw->buf;
gw->nbytes = gw->buf_bytes;
@@ -876,13 +866,12 @@ static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
}
if (gw->walk_bytes >= minbytesneeded) {
gw->ptr = gw->walk_ptr;
gw->ptr = gw->walk.addr;
gw->nbytes = gw->walk_bytes;
goto out;
}
scatterwalk_unmap(gw->walk_ptr);
gw->walk_ptr = NULL;
scatterwalk_unmap(&gw->walk);
gw->ptr = gw->buf;
gw->nbytes = sizeof(gw->buf);
@@ -904,7 +893,7 @@ static int gcm_in_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
} else
gw->buf_bytes = 0;
} else
_gcm_sg_unmap_and_advance(gw, bytesdone);
_gcm_sg_unmap_and_advance(gw, bytesdone, false);
return bytesdone;
}
@@ -921,11 +910,11 @@ static int gcm_out_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
if (!_gcm_sg_clamp_and_map(gw))
return i;
n = min(gw->walk_bytes, bytesdone - i);
memcpy(gw->walk_ptr, gw->buf + i, n);
_gcm_sg_unmap_and_advance(gw, n);
memcpy(gw->walk.addr, gw->buf + i, n);
_gcm_sg_unmap_and_advance(gw, n, true);
}
} else
_gcm_sg_unmap_and_advance(gw, bytesdone);
_gcm_sg_unmap_and_advance(gw, bytesdone, true);
return bytesdone;
}


@@ -41,7 +41,7 @@ static int chacha20_s390(struct skcipher_request *req)
int rc;
rc = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
while (walk.nbytes > 0) {
nbytes = walk.nbytes;
@@ -69,12 +69,6 @@ void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
}
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes, int nrounds)
{


@@ -321,7 +321,7 @@ static void ctr_crypt_final(const struct crypto_sparc64_aes_ctx *ctx,
{
u8 *ctrblk = walk->iv;
u64 keystream[AES_BLOCK_SIZE / sizeof(u64)];
u8 *src = walk->src.virt.addr;
const u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;


@@ -3,10 +3,12 @@
menu "Accelerated Cryptographic Algorithms for CPU (x86)"
config CRYPTO_CURVE25519_X86
tristate "Public key crypto: Curve25519 (ADX)"
tristate
depends on X86 && 64BIT
select CRYPTO_KPP
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CURVE25519
default CRYPTO_LIB_CURVE25519_INTERNAL
help
Curve25519 algorithm
@@ -348,11 +350,12 @@ config CRYPTO_ARIA_GFNI_AVX512_X86_64
Processes 64 blocks in parallel.
config CRYPTO_CHACHA20_X86_64
tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (SSSE3/AVX2/AVX-512VL)"
tristate
depends on X86 && 64BIT
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
default CRYPTO_LIB_CHACHA_INTERNAL
help
Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
stream cipher algorithms
@@ -417,10 +420,12 @@ config CRYPTO_POLYVAL_CLMUL_NI
- CLMUL-NI (carry-less multiplication new instructions)
config CRYPTO_POLY1305_X86_64
tristate "Hash functions: Poly1305 (SSE2/AVX2)"
tristate
depends on X86 && 64BIT
select CRYPTO_HASH
select CRYPTO_LIB_POLY1305_GENERIC
select CRYPTO_ARCH_HAVE_LIB_POLY1305
default CRYPTO_LIB_POLY1305_INTERNAL
help
Poly1305 authenticator algorithm (RFC7539)


@@ -48,7 +48,7 @@ chacha-x86_64-$(CONFIG_AS_AVX512) += chacha-avx512vl-x86_64.o
obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o
aesni-intel-$(CONFIG_64BIT) += aes_ctrby8_avx-x86_64.o \
aesni-intel-$(CONFIG_64BIT) += aes-ctr-avx-x86_64.o \
aes-gcm-aesni-x86_64.o \
aes-xts-avx-x86_64.o
ifeq ($(CONFIG_AS_VAES)$(CONFIG_AS_VPCLMULQDQ),yy)


@@ -71,10 +71,9 @@ static void crypto_aegis128_aesni_process_ad(
scatterwalk_start(&walk, sg_src);
while (assoclen != 0) {
unsigned int size = scatterwalk_clamp(&walk, assoclen);
unsigned int size = scatterwalk_next(&walk, assoclen);
const u8 *src = walk.addr;
unsigned int left = size;
void *mapped = scatterwalk_map(&walk);
const u8 *src = (const u8 *)mapped;
if (pos + size >= AEGIS128_BLOCK_SIZE) {
if (pos > 0) {
@@ -97,9 +96,7 @@ static void crypto_aegis128_aesni_process_ad(
pos += left;
assoclen -= size;
scatterwalk_unmap(mapped);
scatterwalk_advance(&walk, size);
scatterwalk_done(&walk, 0, assoclen);
scatterwalk_done_src(&walk, size);
}
if (pos > 0) {


@@ -0,0 +1,592 @@
/* SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause */
//
// Copyright 2025 Google LLC
//
// Author: Eric Biggers <ebiggers@google.com>
//
// This file is dual-licensed, meaning that you can use it under your choice of
// either of the following two licenses:
//
// Licensed under the Apache License 2.0 (the "License"). You may obtain a copy
// of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// or
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
//------------------------------------------------------------------------------
//
// This file contains x86_64 assembly implementations of AES-CTR and AES-XCTR
// using the following sets of CPU features:
// - AES-NI && AVX
// - VAES && AVX2
// - VAES && (AVX10/256 || (AVX512BW && AVX512VL)) && BMI2
// - VAES && (AVX10/512 || (AVX512BW && AVX512VL)) && BMI2
//
// See the function definitions at the bottom of the file for more information.
#include <linux/linkage.h>
#include <linux/cfi_types.h>
.section .rodata
.p2align 4
.Lbswap_mask:
.octa 0x000102030405060708090a0b0c0d0e0f
.Lctr_pattern:
.quad 0, 0
.Lone:
.quad 1, 0
.Ltwo:
.quad 2, 0
.quad 3, 0
.Lfour:
.quad 4, 0
.text
// Move a vector between memory and a register.
// The register operand must be in the first 16 vector registers.
.macro _vmovdqu src, dst
.if VL < 64
vmovdqu \src, \dst
.else
vmovdqu8 \src, \dst
.endif
.endm
// Move a vector between registers.
// The registers must be in the first 16 vector registers.
.macro _vmovdqa src, dst
.if VL < 64
vmovdqa \src, \dst
.else
vmovdqa64 \src, \dst
.endif
.endm
// Broadcast a 128-bit value from memory to all 128-bit lanes of a vector
// register. The register operand must be in the first 16 vector registers.
.macro _vbroadcast128 src, dst
.if VL == 16
vmovdqu \src, \dst
.elseif VL == 32
vbroadcasti128 \src, \dst
.else
vbroadcasti32x4 \src, \dst
.endif
.endm
// XOR two vectors together.
// Any register operands must be in the first 16 vector registers.
.macro _vpxor src1, src2, dst
.if VL < 64
vpxor \src1, \src2, \dst
.else
vpxord \src1, \src2, \dst
.endif
.endm
// Load 1 <= %ecx <= 15 bytes from the pointer \src into the xmm register \dst
// and zeroize any remaining bytes. Clobbers %rax, %rcx, and \tmp{64,32}.
.macro _load_partial_block src, dst, tmp64, tmp32
sub $8, %ecx // LEN - 8
jle .Lle8\@
// Load 9 <= LEN <= 15 bytes.
vmovq (\src), \dst // Load first 8 bytes
mov (\src, %rcx), %rax // Load last 8 bytes
neg %ecx
shl $3, %ecx
shr %cl, %rax // Discard overlapping bytes
vpinsrq $1, %rax, \dst, \dst
jmp .Ldone\@
.Lle8\@:
add $4, %ecx // LEN - 4
jl .Llt4\@
// Load 4 <= LEN <= 8 bytes.
mov (\src), %eax // Load first 4 bytes
mov (\src, %rcx), \tmp32 // Load last 4 bytes
jmp .Lcombine\@
.Llt4\@:
// Load 1 <= LEN <= 3 bytes.
add $2, %ecx // LEN - 2
movzbl (\src), %eax // Load first byte
jl .Lmovq\@
movzwl (\src, %rcx), \tmp32 // Load last 2 bytes
.Lcombine\@:
shl $3, %ecx
shl %cl, \tmp64
or \tmp64, %rax // Combine the two parts
.Lmovq\@:
vmovq %rax, \dst
.Ldone\@:
.endm
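The _load_partial_block macro above avoids a byte-at-a-time loop by doing two overlapping word loads (the first and last 8 or 4 bytes) and discarding the overlap with shifts, which is necessary because the destination is an xmm register. In plain C, where the destination is ordinary memory, the same overlapping-load idea can be sketched as follows — `load_partial_block()` here is an illustrative helper, not kernel code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read 1 <= len <= 15 bytes from src into a zero-padded 16-byte block
 * using at most two overlapping copies per size class, never touching
 * bytes past src[len - 1]. */
static void load_partial_block(uint8_t dst[16], const uint8_t *src,
			       unsigned int len)
{
	memset(dst, 0, 16);
	if (len >= 8) {
		/* 8 <= len <= 15: first 8 bytes, then last 8 (overlapping). */
		memcpy(dst, src, 8);
		memcpy(dst + len - 8, src + len - 8, 8);
	} else if (len >= 4) {
		/* 4 <= len <= 7: first 4 bytes, then last 4 (overlapping). */
		memcpy(dst, src, 4);
		memcpy(dst + len - 4, src + len - 4, 4);
	} else {
		/* 1 <= len <= 3: first, last, and (for len == 3) middle byte. */
		dst[0] = src[0];
		dst[len - 1] = src[len - 1];
		dst[len / 2] = src[len / 2];
	}
}
```

The assembly cannot memcpy into the middle of a register, so it instead shifts the second load right to drop the overlapping bytes before merging with vpinsrq — the structure (branch on len vs 8, then vs 4) is the same.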
// Store 1 <= %ecx <= 15 bytes from the xmm register \src to the pointer \dst.
// Clobbers %rax, %rcx, and \tmp{64,32}.
.macro _store_partial_block src, dst, tmp64, tmp32
sub $8, %ecx // LEN - 8
jl .Llt8\@
// Store 8 <= LEN <= 15 bytes.
vpextrq $1, \src, %rax
mov %ecx, \tmp32
shl $3, %ecx
ror %cl, %rax
mov %rax, (\dst, \tmp64) // Store last LEN - 8 bytes
vmovq \src, (\dst) // Store first 8 bytes
jmp .Ldone\@
.Llt8\@:
add $4, %ecx // LEN - 4
jl .Llt4\@
// Store 4 <= LEN <= 7 bytes.
vpextrd $1, \src, %eax
mov %ecx, \tmp32
shl $3, %ecx
ror %cl, %eax
mov %eax, (\dst, \tmp64) // Store last LEN - 4 bytes
vmovd \src, (\dst) // Store first 4 bytes
jmp .Ldone\@
.Llt4\@:
// Store 1 <= LEN <= 3 bytes.
vpextrb $0, \src, 0(\dst)
cmp $-2, %ecx // LEN - 4 == -2, i.e. LEN == 2?
jl .Ldone\@
vpextrb $1, \src, 1(\dst)
je .Ldone\@
vpextrb $2, \src, 2(\dst)
.Ldone\@:
.endm
// Prepare the next two vectors of AES inputs in AESDATA\i0 and AESDATA\i1, and
// XOR each with the zero-th round key. Also update LE_CTR if !\final.
.macro _prepare_2_ctr_vecs is_xctr, i0, i1, final=0
.if \is_xctr
.if USE_AVX10
_vmovdqa LE_CTR, AESDATA\i0
vpternlogd $0x96, XCTR_IV, RNDKEY0, AESDATA\i0
.else
vpxor XCTR_IV, LE_CTR, AESDATA\i0
vpxor RNDKEY0, AESDATA\i0, AESDATA\i0
.endif
vpaddq LE_CTR_INC1, LE_CTR, AESDATA\i1
.if USE_AVX10
vpternlogd $0x96, XCTR_IV, RNDKEY0, AESDATA\i1
.else
vpxor XCTR_IV, AESDATA\i1, AESDATA\i1
vpxor RNDKEY0, AESDATA\i1, AESDATA\i1
.endif
.else
vpshufb BSWAP_MASK, LE_CTR, AESDATA\i0
_vpxor RNDKEY0, AESDATA\i0, AESDATA\i0
vpaddq LE_CTR_INC1, LE_CTR, AESDATA\i1
vpshufb BSWAP_MASK, AESDATA\i1, AESDATA\i1
_vpxor RNDKEY0, AESDATA\i1, AESDATA\i1
.endif
.if !\final
vpaddq LE_CTR_INC2, LE_CTR, LE_CTR
.endif
.endm
// Do all AES rounds on the data in the given AESDATA vectors, excluding the
// zero-th and last rounds.
.macro _aesenc_loop vecs:vararg
mov KEY, %rax
1:
_vbroadcast128 (%rax), RNDKEY
.irp i, \vecs
vaesenc RNDKEY, AESDATA\i, AESDATA\i
.endr
add $16, %rax
cmp %rax, RNDKEYLAST_PTR
jne 1b
.endm
// Finalize the keystream blocks in the given AESDATA vectors by doing the last
// AES round, then XOR those keystream blocks with the corresponding data.
// Reduce latency by doing the XOR before the vaesenclast, utilizing the
// property vaesenclast(key, a) ^ b == vaesenclast(key ^ b, a).
.macro _aesenclast_and_xor vecs:vararg
.irp i, \vecs
_vpxor \i*VL(SRC), RNDKEYLAST, RNDKEY
vaesenclast RNDKEY, AESDATA\i, AESDATA\i
.endr
.irp i, \vecs
_vmovdqu AESDATA\i, \i*VL(DST)
.endr
.endm
// XOR the keystream blocks in the specified AESDATA vectors with the
// corresponding data.
.macro _xor_data vecs:vararg
.irp i, \vecs
_vpxor \i*VL(SRC), AESDATA\i, AESDATA\i
.endr
.irp i, \vecs
_vmovdqu AESDATA\i, \i*VL(DST)
.endr
.endm
.macro _aes_ctr_crypt is_xctr
// Define register aliases V0-V15 that map to the xmm, ymm, or zmm
// registers according to the selected Vector Length (VL).
.irp i, 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
.if VL == 16
.set V\i, %xmm\i
.elseif VL == 32
.set V\i, %ymm\i
.elseif VL == 64
.set V\i, %zmm\i
.else
.error "Unsupported Vector Length (VL)"
.endif
.endr
// Function arguments
.set KEY, %rdi // Initially points to the start of the
// crypto_aes_ctx, then is advanced to
// point to the index 1 round key
.set KEY32, %edi // Available as temp register after all
// keystream blocks have been generated
.set SRC, %rsi // Pointer to next source data
.set DST, %rdx // Pointer to next destination data
.set LEN, %ecx // Remaining length in bytes.
// Note: _load_partial_block relies on
// this being in %ecx.
.set LEN64, %rcx // Zero-extend LEN before using!
.set LEN8, %cl
.if \is_xctr
.set XCTR_IV_PTR, %r8 // const u8 iv[AES_BLOCK_SIZE];
.set XCTR_CTR, %r9 // u64 ctr;
.else
.set LE_CTR_PTR, %r8 // const u64 le_ctr[2];
.endif
// Additional local variables
.set RNDKEYLAST_PTR, %r10
.set AESDATA0, V0
.set AESDATA0_XMM, %xmm0
.set AESDATA1, V1
.set AESDATA1_XMM, %xmm1
.set AESDATA2, V2
.set AESDATA3, V3
.set AESDATA4, V4
.set AESDATA5, V5
.set AESDATA6, V6
.set AESDATA7, V7
.if \is_xctr
.set XCTR_IV, V8
.else
.set BSWAP_MASK, V8
.endif
.set LE_CTR, V9
.set LE_CTR_XMM, %xmm9
.set LE_CTR_INC1, V10
.set LE_CTR_INC2, V11
.set RNDKEY0, V12
.set RNDKEYLAST, V13
.set RNDKEY, V14
// Create the first vector of counters.
.if \is_xctr
.if VL == 16
vmovq XCTR_CTR, LE_CTR
.elseif VL == 32
vmovq XCTR_CTR, LE_CTR_XMM
inc XCTR_CTR
vmovq XCTR_CTR, AESDATA0_XMM
vinserti128 $1, AESDATA0_XMM, LE_CTR, LE_CTR
.else
vpbroadcastq XCTR_CTR, LE_CTR
vpsrldq $8, LE_CTR, LE_CTR
vpaddq .Lctr_pattern(%rip), LE_CTR, LE_CTR
.endif
_vbroadcast128 (XCTR_IV_PTR), XCTR_IV
.else
_vbroadcast128 (LE_CTR_PTR), LE_CTR
.if VL > 16
vpaddq .Lctr_pattern(%rip), LE_CTR, LE_CTR
.endif
_vbroadcast128 .Lbswap_mask(%rip), BSWAP_MASK
.endif
.if VL == 16
_vbroadcast128 .Lone(%rip), LE_CTR_INC1
.elseif VL == 32
_vbroadcast128 .Ltwo(%rip), LE_CTR_INC1
.else
_vbroadcast128 .Lfour(%rip), LE_CTR_INC1
.endif
vpsllq $1, LE_CTR_INC1, LE_CTR_INC2
// Load the AES key length: 16 (AES-128), 24 (AES-192), or 32 (AES-256).
movl 480(KEY), %eax
// Compute the pointer to the last round key.
lea 6*16(KEY, %rax, 4), RNDKEYLAST_PTR
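The `lea 6*16(KEY, %rax, 4)` above turns the key length in %rax (16, 24, or 32) directly into the byte offset of the last round key: AES-128/192/256 use 10/12/14 rounds, round keys are 16 bytes each, and 16*rounds happens to equal 6*16 + 4*keylen. A small check of that identity (helper names are illustrative, not from the kernel):

```c
#include <assert.h>

/* Byte offset of the last round key, as computed by the lea. */
static int last_rndkey_offset(int keylen)
{
	return 6 * 16 + 4 * keylen;
}

/* AES round count from key length: 16 -> 10, 24 -> 12, 32 -> 14. */
static int aes_rounds(int keylen)
{
	return keylen / 4 + 6;
}
```

Encoding the round count into one address computation lets the AES loop below compare a running key pointer against RNDKEYLAST_PTR instead of maintaining a separate counter.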
// Load the zero-th and last round keys.
_vbroadcast128 (KEY), RNDKEY0
_vbroadcast128 (RNDKEYLAST_PTR), RNDKEYLAST
// Make KEY point to the first round key.
add $16, KEY
// This is the main loop, which encrypts 8 vectors of data at a time.
add $-8*VL, LEN
jl .Lloop_8x_done\@
.Lloop_8x\@:
_prepare_2_ctr_vecs \is_xctr, 0, 1
_prepare_2_ctr_vecs \is_xctr, 2, 3
_prepare_2_ctr_vecs \is_xctr, 4, 5
_prepare_2_ctr_vecs \is_xctr, 6, 7
_aesenc_loop 0,1,2,3,4,5,6,7
_aesenclast_and_xor 0,1,2,3,4,5,6,7
sub $-8*VL, SRC
sub $-8*VL, DST
add $-8*VL, LEN
jge .Lloop_8x\@
.Lloop_8x_done\@:
sub $-8*VL, LEN
jz .Ldone\@
// 1 <= LEN < 8*VL. Generate 2, 4, or 8 more vectors of keystream
// blocks, depending on the remaining LEN.
_prepare_2_ctr_vecs \is_xctr, 0, 1
_prepare_2_ctr_vecs \is_xctr, 2, 3
cmp $4*VL, LEN
jle .Lenc_tail_atmost4vecs\@
// 4*VL < LEN < 8*VL. Generate 8 vectors of keystream blocks. Use the
// first 4 to XOR 4 full vectors of data. Then XOR the remaining data.
_prepare_2_ctr_vecs \is_xctr, 4, 5
_prepare_2_ctr_vecs \is_xctr, 6, 7, final=1
_aesenc_loop 0,1,2,3,4,5,6,7
_aesenclast_and_xor 0,1,2,3
vaesenclast RNDKEYLAST, AESDATA4, AESDATA0
vaesenclast RNDKEYLAST, AESDATA5, AESDATA1
vaesenclast RNDKEYLAST, AESDATA6, AESDATA2
vaesenclast RNDKEYLAST, AESDATA7, AESDATA3
sub $-4*VL, SRC
sub $-4*VL, DST
add $-4*VL, LEN
cmp $1*VL-1, LEN
jle .Lxor_tail_partial_vec_0\@
_xor_data 0
cmp $2*VL-1, LEN
jle .Lxor_tail_partial_vec_1\@
_xor_data 1
cmp $3*VL-1, LEN
jle .Lxor_tail_partial_vec_2\@
_xor_data 2
cmp $4*VL-1, LEN
jle .Lxor_tail_partial_vec_3\@
_xor_data 3
jmp .Ldone\@
.Lenc_tail_atmost4vecs\@:
cmp $2*VL, LEN
jle .Lenc_tail_atmost2vecs\@
// 2*VL < LEN <= 4*VL. Generate 4 vectors of keystream blocks. Use the
// first 2 to XOR 2 full vectors of data. Then XOR the remaining data.
_aesenc_loop 0,1,2,3
_aesenclast_and_xor 0,1
vaesenclast RNDKEYLAST, AESDATA2, AESDATA0
vaesenclast RNDKEYLAST, AESDATA3, AESDATA1
sub $-2*VL, SRC
sub $-2*VL, DST
add $-2*VL, LEN
jmp .Lxor_tail_upto2vecs\@
.Lenc_tail_atmost2vecs\@:
// 1 <= LEN <= 2*VL. Generate 2 vectors of keystream blocks. Then XOR
// the remaining data.
_aesenc_loop 0,1
vaesenclast RNDKEYLAST, AESDATA0, AESDATA0
vaesenclast RNDKEYLAST, AESDATA1, AESDATA1
.Lxor_tail_upto2vecs\@:
cmp $1*VL-1, LEN
jle .Lxor_tail_partial_vec_0\@
_xor_data 0
cmp $2*VL-1, LEN
jle .Lxor_tail_partial_vec_1\@
_xor_data 1
jmp .Ldone\@
.Lxor_tail_partial_vec_1\@:
add $-1*VL, LEN
jz .Ldone\@
sub $-1*VL, SRC
sub $-1*VL, DST
_vmovdqa AESDATA1, AESDATA0
jmp .Lxor_tail_partial_vec_0\@
.Lxor_tail_partial_vec_2\@:
add $-2*VL, LEN
jz .Ldone\@
sub $-2*VL, SRC
sub $-2*VL, DST
_vmovdqa AESDATA2, AESDATA0
jmp .Lxor_tail_partial_vec_0\@
.Lxor_tail_partial_vec_3\@:
add $-3*VL, LEN
jz .Ldone\@
sub $-3*VL, SRC
sub $-3*VL, DST
_vmovdqa AESDATA3, AESDATA0
.Lxor_tail_partial_vec_0\@:
// XOR the remaining 1 <= LEN < VL bytes. It's easy if masked
// loads/stores are available; otherwise it's a bit harder...
.if USE_AVX10
.if VL <= 32
mov $-1, %eax
bzhi LEN, %eax, %eax
kmovd %eax, %k1
.else
mov $-1, %rax
bzhi LEN64, %rax, %rax
kmovq %rax, %k1
.endif
vmovdqu8 (SRC), AESDATA1{%k1}{z}
_vpxor AESDATA1, AESDATA0, AESDATA0
vmovdqu8 AESDATA0, (DST){%k1}
.else
.if VL == 32
cmp $16, LEN
jl 1f
vpxor (SRC), AESDATA0_XMM, AESDATA1_XMM
vmovdqu AESDATA1_XMM, (DST)
add $16, SRC
add $16, DST
sub $16, LEN
jz .Ldone\@
vextracti128 $1, AESDATA0, AESDATA0_XMM
1:
.endif
mov LEN, %r10d
_load_partial_block SRC, AESDATA1_XMM, KEY, KEY32
vpxor AESDATA1_XMM, AESDATA0_XMM, AESDATA0_XMM
mov %r10d, %ecx
_store_partial_block AESDATA0_XMM, DST, KEY, KEY32
.endif
.Ldone\@:
.if VL > 16
vzeroupper
.endif
RET
.endm
// Below are the definitions of the functions generated by the above macro.
// They have the following prototypes:
//
// void aes_ctr64_crypt_##suffix(const struct crypto_aes_ctx *key,
// const u8 *src, u8 *dst, int len,
// const u64 le_ctr[2]);
//
// void aes_xctr_crypt_##suffix(const struct crypto_aes_ctx *key,
// const u8 *src, u8 *dst, int len,
// const u8 iv[AES_BLOCK_SIZE], u64 ctr);
//
// Both functions generate |len| bytes of keystream, XOR it with the data from
// |src|, and write the result to |dst|. On non-final calls, |len| must be a
// multiple of 16. On the final call, |len| can be any value.
//
// aes_ctr64_crypt_* implement "regular" CTR, where the keystream is generated
// from a 128-bit big endian counter that increments by 1 for each AES block.
// HOWEVER, to keep the assembly code simple, some of the counter management is
// left to the caller. aes_ctr64_crypt_* take the counter in little endian
// form, only increment the low 64 bits internally, do the conversion to big
// endian internally, and don't write the updated counter back to memory. The
// caller is responsible for converting the starting IV to the little endian
// le_ctr, detecting the (very rare) case of a carry out of the low 64 bits
// being needed and splitting at that point with a carry done in between, and
// updating le_ctr after each part if the message is multi-part.
//
// aes_xctr_crypt_* implement XCTR as specified in "Length-preserving encryption
// with HCTR2" (https://eprint.iacr.org/2021/1441.pdf). XCTR is an
// easier-to-implement variant of CTR that uses little endian byte order and
// eliminates carries. |ctr| is the per-message block counter starting at 1.
.set VL, 16
.set USE_AVX10, 0
SYM_TYPED_FUNC_START(aes_ctr64_crypt_aesni_avx)
_aes_ctr_crypt 0
SYM_FUNC_END(aes_ctr64_crypt_aesni_avx)
SYM_TYPED_FUNC_START(aes_xctr_crypt_aesni_avx)
_aes_ctr_crypt 1
SYM_FUNC_END(aes_xctr_crypt_aesni_avx)
#if defined(CONFIG_AS_VAES) && defined(CONFIG_AS_VPCLMULQDQ)
.set VL, 32
.set USE_AVX10, 0
SYM_TYPED_FUNC_START(aes_ctr64_crypt_vaes_avx2)
_aes_ctr_crypt 0
SYM_FUNC_END(aes_ctr64_crypt_vaes_avx2)
SYM_TYPED_FUNC_START(aes_xctr_crypt_vaes_avx2)
_aes_ctr_crypt 1
SYM_FUNC_END(aes_xctr_crypt_vaes_avx2)
.set VL, 32
.set USE_AVX10, 1
SYM_TYPED_FUNC_START(aes_ctr64_crypt_vaes_avx10_256)
_aes_ctr_crypt 0
SYM_FUNC_END(aes_ctr64_crypt_vaes_avx10_256)
SYM_TYPED_FUNC_START(aes_xctr_crypt_vaes_avx10_256)
_aes_ctr_crypt 1
SYM_FUNC_END(aes_xctr_crypt_vaes_avx10_256)
.set VL, 64
.set USE_AVX10, 1
SYM_TYPED_FUNC_START(aes_ctr64_crypt_vaes_avx10_512)
_aes_ctr_crypt 0
SYM_FUNC_END(aes_ctr64_crypt_vaes_avx10_512)
SYM_TYPED_FUNC_START(aes_xctr_crypt_vaes_avx10_512)
_aes_ctr_crypt 1
SYM_FUNC_END(aes_xctr_crypt_vaes_avx10_512)
#endif // CONFIG_AS_VAES && CONFIG_AS_VPCLMULQDQ


@@ -1,11 +1,50 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* AES-XTS for modern x86_64 CPUs
*
* Copyright 2024 Google LLC
*
* Author: Eric Biggers <ebiggers@google.com>
*/
/* SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause */
//
// AES-XTS for modern x86_64 CPUs
//
// Copyright 2024 Google LLC
//
// Author: Eric Biggers <ebiggers@google.com>
//
//------------------------------------------------------------------------------
//
// This file is dual-licensed, meaning that you can use it under your choice of
// either of the following two licenses:
//
// Licensed under the Apache License 2.0 (the "License"). You may obtain a copy
// of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// or
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
/*
* This file implements AES-XTS for modern x86_64 CPUs. To handle the


@@ -1,597 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
/*
* AES CTR mode by8 optimization with AVX instructions. (x86_64)
*
* Copyright(c) 2014 Intel Corporation.
*
* Contact Information:
* James Guilford <james.guilford@intel.com>
* Sean Gulley <sean.m.gulley@intel.com>
* Chandramouli Narayanan <mouli@linux.intel.com>
*/
/*
* This is AES128/192/256 CTR mode optimization implementation. It requires
* the support of Intel(R) AESNI and AVX instructions.
*
* This work was inspired by the AES CTR mode optimization published
* in Intel Optimized IPSEC Cryptographic library.
* Additional information on it can be found at:
* https://github.com/intel/intel-ipsec-mb
*/
#include <linux/linkage.h>
#define VMOVDQ vmovdqu
/*
* Note: the "x" prefix in these aliases means "this is an xmm register". The
* alias prefixes have no relation to XCTR where the "X" prefix means "XOR
* counter".
*/
#define xdata0 %xmm0
#define xdata1 %xmm1
#define xdata2 %xmm2
#define xdata3 %xmm3
#define xdata4 %xmm4
#define xdata5 %xmm5
#define xdata6 %xmm6
#define xdata7 %xmm7
#define xcounter %xmm8 // CTR mode only
#define xiv %xmm8 // XCTR mode only
#define xbyteswap %xmm9 // CTR mode only
#define xtmp %xmm9 // XCTR mode only
#define xkey0 %xmm10
#define xkey4 %xmm11
#define xkey8 %xmm12
#define xkey12 %xmm13
#define xkeyA %xmm14
#define xkeyB %xmm15
#define p_in %rdi
#define p_iv %rsi
#define p_keys %rdx
#define p_out %rcx
#define num_bytes %r8
#define counter %r9 // XCTR mode only
#define tmp %r10
#define DDQ_DATA 0
#define XDATA 1
#define KEY_128 1
#define KEY_192 2
#define KEY_256 3
.section .rodata
.align 16
byteswap_const:
.octa 0x000102030405060708090A0B0C0D0E0F
ddq_low_msk:
.octa 0x0000000000000000FFFFFFFFFFFFFFFF
ddq_high_add_1:
.octa 0x00000000000000010000000000000000
ddq_add_1:
.octa 0x00000000000000000000000000000001
ddq_add_2:
.octa 0x00000000000000000000000000000002
ddq_add_3:
.octa 0x00000000000000000000000000000003
ddq_add_4:
.octa 0x00000000000000000000000000000004
ddq_add_5:
.octa 0x00000000000000000000000000000005
ddq_add_6:
.octa 0x00000000000000000000000000000006
ddq_add_7:
.octa 0x00000000000000000000000000000007
ddq_add_8:
.octa 0x00000000000000000000000000000008
.text
/* generate a unique variable for ddq_add_x */
/* generate a unique variable for xmm register */
.macro setxdata n
var_xdata = %xmm\n
.endm
/* club the numeric 'id' to the symbol 'name' */
.macro club name, id
.altmacro
.if \name == XDATA
setxdata %\id
.endif
.noaltmacro
.endm
/*
* do_aes num_in_par load_keys key_len
* This increments p_in, but not p_out
*/
.macro do_aes b, k, key_len, xctr
.set by, \b
.set load_keys, \k
.set klen, \key_len
.if (load_keys)
vmovdqa 0*16(p_keys), xkey0
.endif
.if \xctr
movq counter, xtmp
.set i, 0
.rept (by)
club XDATA, i
vpaddq (ddq_add_1 + 16 * i)(%rip), xtmp, var_xdata
.set i, (i +1)
.endr
.set i, 0
.rept (by)
club XDATA, i
vpxor xiv, var_xdata, var_xdata
.set i, (i +1)
.endr
.else
vpshufb xbyteswap, xcounter, xdata0
.set i, 1
.rept (by - 1)
club XDATA, i
vpaddq (ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
vptest ddq_low_msk(%rip), var_xdata
jnz 1f
vpaddq ddq_high_add_1(%rip), var_xdata, var_xdata
vpaddq ddq_high_add_1(%rip), xcounter, xcounter
1:
vpshufb xbyteswap, var_xdata, var_xdata
.set i, (i +1)
.endr
.endif
vmovdqa 1*16(p_keys), xkeyA
vpxor xkey0, xdata0, xdata0
.if \xctr
add $by, counter
.else
vpaddq (ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
vptest ddq_low_msk(%rip), xcounter
jnz 1f
vpaddq ddq_high_add_1(%rip), xcounter, xcounter
1:
.endif
.set i, 1
.rept (by - 1)
club XDATA, i
vpxor xkey0, var_xdata, var_xdata
.set i, (i +1)
.endr
vmovdqa 2*16(p_keys), xkeyB
.set i, 0
.rept by
club XDATA, i
vaesenc xkeyA, var_xdata, var_xdata /* key 1 */
.set i, (i +1)
.endr
.if (klen == KEY_128)
.if (load_keys)
vmovdqa 3*16(p_keys), xkey4
.endif
.else
vmovdqa 3*16(p_keys), xkeyA
.endif
.set i, 0
.rept by
club XDATA, i
vaesenc xkeyB, var_xdata, var_xdata /* key 2 */
.set i, (i +1)
.endr
add $(16*by), p_in
.if (klen == KEY_128)
vmovdqa 4*16(p_keys), xkeyB
.else
.if (load_keys)
vmovdqa 4*16(p_keys), xkey4
.endif
.endif
.set i, 0
.rept by
club XDATA, i
/* key 3 */
.if (klen == KEY_128)
vaesenc xkey4, var_xdata, var_xdata
.else
vaesenc xkeyA, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
vmovdqa 5*16(p_keys), xkeyA
.set i, 0
.rept by
club XDATA, i
/* key 4 */
.if (klen == KEY_128)
vaesenc xkeyB, var_xdata, var_xdata
.else
vaesenc xkey4, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
.if (klen == KEY_128)
.if (load_keys)
vmovdqa 6*16(p_keys), xkey8
.endif
.else
vmovdqa 6*16(p_keys), xkeyB
.endif
.set i, 0
.rept by
club XDATA, i
vaesenc xkeyA, var_xdata, var_xdata /* key 5 */
.set i, (i +1)
.endr
vmovdqa 7*16(p_keys), xkeyA
.set i, 0
.rept by
club XDATA, i
/* key 6 */
.if (klen == KEY_128)
vaesenc xkey8, var_xdata, var_xdata
.else
vaesenc xkeyB, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
.if (klen == KEY_128)
vmovdqa 8*16(p_keys), xkeyB
.else
.if (load_keys)
vmovdqa 8*16(p_keys), xkey8
.endif
.endif
.set i, 0
.rept by
club XDATA, i
vaesenc xkeyA, var_xdata, var_xdata /* key 7 */
.set i, (i +1)
.endr
.if (klen == KEY_128)
.if (load_keys)
vmovdqa 9*16(p_keys), xkey12
.endif
.else
vmovdqa 9*16(p_keys), xkeyA
.endif
.set i, 0
.rept by
club XDATA, i
/* key 8 */
.if (klen == KEY_128)
vaesenc xkeyB, var_xdata, var_xdata
.else
vaesenc xkey8, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
vmovdqa 10*16(p_keys), xkeyB
.set i, 0
.rept by
club XDATA, i
/* key 9 */
.if (klen == KEY_128)
vaesenc xkey12, var_xdata, var_xdata
.else
vaesenc xkeyA, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
.if (klen != KEY_128)
vmovdqa 11*16(p_keys), xkeyA
.endif
.set i, 0
.rept by
club XDATA, i
/* key 10 */
.if (klen == KEY_128)
vaesenclast xkeyB, var_xdata, var_xdata
.else
vaesenc xkeyB, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
.if (klen != KEY_128)
.if (load_keys)
vmovdqa 12*16(p_keys), xkey12
.endif
.set i, 0
.rept by
club XDATA, i
vaesenc xkeyA, var_xdata, var_xdata /* key 11 */
.set i, (i +1)
.endr
.if (klen == KEY_256)
vmovdqa 13*16(p_keys), xkeyA
.endif
.set i, 0
.rept by
club XDATA, i
.if (klen == KEY_256)
/* key 12 */
vaesenc xkey12, var_xdata, var_xdata
.else
vaesenclast xkey12, var_xdata, var_xdata
.endif
.set i, (i +1)
.endr
.if (klen == KEY_256)
vmovdqa 14*16(p_keys), xkeyB
.set i, 0
.rept by
club XDATA, i
/* key 13 */
vaesenc xkeyA, var_xdata, var_xdata
.set i, (i +1)
.endr
.set i, 0
.rept by
club XDATA, i
/* key 14 */
vaesenclast xkeyB, var_xdata, var_xdata
.set i, (i +1)
.endr
.endif
.endif
.set i, 0
.rept (by / 2)
.set j, (i+1)
VMOVDQ (i*16 - 16*by)(p_in), xkeyA
VMOVDQ (j*16 - 16*by)(p_in), xkeyB
club XDATA, i
vpxor xkeyA, var_xdata, var_xdata
club XDATA, j
vpxor xkeyB, var_xdata, var_xdata
.set i, (i+2)
.endr
.if (i < by)
VMOVDQ (i*16 - 16*by)(p_in), xkeyA
club XDATA, i
vpxor xkeyA, var_xdata, var_xdata
.endif
.set i, 0
.rept by
club XDATA, i
VMOVDQ var_xdata, i*16(p_out)
.set i, (i+1)
.endr
.endm
.macro do_aes_load val, key_len, xctr
do_aes \val, 1, \key_len, \xctr
.endm
.macro do_aes_noload val, key_len, xctr
do_aes \val, 0, \key_len, \xctr
.endm
/* main body of aes ctr load */
.macro do_aes_ctrmain key_len, xctr
cmp $16, num_bytes
jb .Ldo_return2\xctr\key_len
.if \xctr
shr $4, counter
vmovdqu (p_iv), xiv
.else
vmovdqa byteswap_const(%rip), xbyteswap
vmovdqu (p_iv), xcounter
vpshufb xbyteswap, xcounter, xcounter
.endif
mov num_bytes, tmp
and $(7*16), tmp
jz .Lmult_of_8_blks\xctr\key_len
/* 1 <= tmp <= 7 */
cmp $(4*16), tmp
jg .Lgt4\xctr\key_len
je .Leq4\xctr\key_len
.Llt4\xctr\key_len:
cmp $(2*16), tmp
jg .Leq3\xctr\key_len
je .Leq2\xctr\key_len
.Leq1\xctr\key_len:
do_aes_load 1, \key_len, \xctr
add $(1*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Leq2\xctr\key_len:
do_aes_load 2, \key_len, \xctr
add $(2*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Leq3\xctr\key_len:
do_aes_load 3, \key_len, \xctr
add $(3*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Leq4\xctr\key_len:
do_aes_load 4, \key_len, \xctr
add $(4*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Lgt4\xctr\key_len:
cmp $(6*16), tmp
jg .Leq7\xctr\key_len
je .Leq6\xctr\key_len
.Leq5\xctr\key_len:
do_aes_load 5, \key_len, \xctr
add $(5*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Leq6\xctr\key_len:
do_aes_load 6, \key_len, \xctr
add $(6*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Leq7\xctr\key_len:
do_aes_load 7, \key_len, \xctr
add $(7*16), p_out
and $(~7*16), num_bytes
jz .Ldo_return2\xctr\key_len
jmp .Lmain_loop2\xctr\key_len
.Lmult_of_8_blks\xctr\key_len:
.if (\key_len != KEY_128)
vmovdqa 0*16(p_keys), xkey0
vmovdqa 4*16(p_keys), xkey4
vmovdqa 8*16(p_keys), xkey8
vmovdqa 12*16(p_keys), xkey12
.else
vmovdqa 0*16(p_keys), xkey0
vmovdqa 3*16(p_keys), xkey4
vmovdqa 6*16(p_keys), xkey8
vmovdqa 9*16(p_keys), xkey12
.endif
.align 16
.Lmain_loop2\xctr\key_len:
/* num_bytes is a multiple of 8 and >0 */
do_aes_noload 8, \key_len, \xctr
add $(8*16), p_out
sub $(8*16), num_bytes
jne .Lmain_loop2\xctr\key_len
.Ldo_return2\xctr\key_len:
.if !\xctr
/* return updated IV */
vpshufb xbyteswap, xcounter, xcounter
vmovdqu xcounter, (p_iv)
.endif
RET
.endm
/*
* routine to do AES128 CTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_ctr_enc_128_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
SYM_FUNC_START(aes_ctr_enc_128_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_128 0
SYM_FUNC_END(aes_ctr_enc_128_avx_by8)
/*
* routine to do AES192 CTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_ctr_enc_192_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
SYM_FUNC_START(aes_ctr_enc_192_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_192 0
SYM_FUNC_END(aes_ctr_enc_192_avx_by8)
/*
* routine to do AES256 CTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_ctr_enc_256_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
SYM_FUNC_START(aes_ctr_enc_256_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_256 0
SYM_FUNC_END(aes_ctr_enc_256_avx_by8)
/*
* routine to do AES128 XCTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_xctr_enc_128_avx_by8(const u8 *in, const u8 *iv, const void *keys,
* u8* out, unsigned int num_bytes, unsigned int byte_ctr)
*/
SYM_FUNC_START(aes_xctr_enc_128_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_128 1
SYM_FUNC_END(aes_xctr_enc_128_avx_by8)
/*
* routine to do AES192 XCTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_xctr_enc_192_avx_by8(const u8 *in, const u8 *iv, const void *keys,
* u8* out, unsigned int num_bytes, unsigned int byte_ctr)
*/
SYM_FUNC_START(aes_xctr_enc_192_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_192 1
SYM_FUNC_END(aes_xctr_enc_192_avx_by8)
/*
* routine to do AES256 XCTR enc/decrypt "by8"
* XMM registers are clobbered.
* Saving/restoring must be done at a higher level
* aes_xctr_enc_256_avx_by8(const u8 *in, const u8 *iv, const void *keys,
* u8* out, unsigned int num_bytes, unsigned int byte_ctr)
*/
SYM_FUNC_START(aes_xctr_enc_256_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_256 1
SYM_FUNC_END(aes_xctr_enc_256_avx_by8)


@@ -23,7 +23,6 @@
#include <linux/err.h>
#include <crypto/algapi.h>
#include <crypto/aes.h>
#include <crypto/ctr.h>
#include <crypto/b128ops.h>
#include <crypto/gcm.h>
#include <crypto/xts.h>
@@ -82,30 +81,8 @@ asmlinkage void aesni_xts_dec(const struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv);
#ifdef CONFIG_X86_64
asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv);
DEFINE_STATIC_CALL(aesni_ctr_enc_tfm, aesni_ctr_enc);
asmlinkage void aes_ctr_enc_128_avx_by8(const u8 *in, u8 *iv,
void *keys, u8 *out, unsigned int num_bytes);
asmlinkage void aes_ctr_enc_192_avx_by8(const u8 *in, u8 *iv,
void *keys, u8 *out, unsigned int num_bytes);
asmlinkage void aes_ctr_enc_256_avx_by8(const u8 *in, u8 *iv,
void *keys, u8 *out, unsigned int num_bytes);
asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, const u8 *iv,
const void *keys, u8 *out, unsigned int num_bytes,
unsigned int byte_ctr);
asmlinkage void aes_xctr_enc_192_avx_by8(const u8 *in, const u8 *iv,
const void *keys, u8 *out, unsigned int num_bytes,
unsigned int byte_ctr);
asmlinkage void aes_xctr_enc_256_avx_by8(const u8 *in, const u8 *iv,
const void *keys, u8 *out, unsigned int num_bytes,
unsigned int byte_ctr);
#endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
@@ -376,24 +353,8 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
}
#ifdef CONFIG_X86_64
static void aesni_ctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv)
{
/*
* based on key length, override with the by8 version
* of ctr mode encryption/decryption for improved performance
* aes_set_key_common() ensures that key length is one of
* {128,192,256}
*/
if (ctx->key_length == AES_KEYSIZE_128)
aes_ctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len);
else if (ctx->key_length == AES_KEYSIZE_192)
aes_ctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len);
else
aes_ctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len);
}
static int ctr_crypt(struct skcipher_request *req)
/* This is the non-AVX version. */
static int ctr_crypt_aesni(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
@@ -407,10 +368,9 @@ static int ctr_crypt(struct skcipher_request *req)
while ((nbytes = walk.nbytes) > 0) {
kernel_fpu_begin();
if (nbytes & AES_BLOCK_MASK)
static_call(aesni_ctr_enc_tfm)(ctx, walk.dst.virt.addr,
walk.src.virt.addr,
nbytes & AES_BLOCK_MASK,
walk.iv);
aesni_ctr_enc(ctx, walk.dst.virt.addr,
walk.src.virt.addr,
nbytes & AES_BLOCK_MASK, walk.iv);
nbytes &= ~AES_BLOCK_MASK;
if (walk.nbytes == walk.total && nbytes > 0) {
@@ -426,59 +386,6 @@ static int ctr_crypt(struct skcipher_request *req)
}
return err;
}
static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
const u8 *in, unsigned int len, u8 *iv,
unsigned int byte_ctr)
{
if (ctx->key_length == AES_KEYSIZE_128)
aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len,
byte_ctr);
else if (ctx->key_length == AES_KEYSIZE_192)
aes_xctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len,
byte_ctr);
else
aes_xctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len,
byte_ctr);
}
static int xctr_crypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
u8 keystream[AES_BLOCK_SIZE];
struct skcipher_walk walk;
unsigned int nbytes;
unsigned int byte_ctr = 0;
int err;
__le32 block[AES_BLOCK_SIZE / sizeof(__le32)];
err = skcipher_walk_virt(&walk, req, false);
while ((nbytes = walk.nbytes) > 0) {
kernel_fpu_begin();
if (nbytes & AES_BLOCK_MASK)
aesni_xctr_enc_avx_tfm(ctx, walk.dst.virt.addr,
walk.src.virt.addr, nbytes & AES_BLOCK_MASK,
walk.iv, byte_ctr);
nbytes &= ~AES_BLOCK_MASK;
byte_ctr += walk.nbytes - nbytes;
if (walk.nbytes == walk.total && nbytes > 0) {
memcpy(block, walk.iv, AES_BLOCK_SIZE);
block[0] ^= cpu_to_le32(1 + byte_ctr / AES_BLOCK_SIZE);
aesni_enc(ctx, keystream, (u8 *)block);
crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes -
nbytes, walk.src.virt.addr + walk.nbytes
- nbytes, keystream, nbytes);
byte_ctr += nbytes;
nbytes = 0;
}
kernel_fpu_end();
err = skcipher_walk_done(&walk, nbytes);
}
return err;
}
#endif
static int xts_setkey_aesni(struct crypto_skcipher *tfm, const u8 *key,
@@ -581,11 +488,8 @@ xts_crypt(struct skcipher_request *req, xts_encrypt_iv_func encrypt_iv,
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
const unsigned int cryptlen = req->cryptlen;
struct scatterlist *src = req->src;
struct scatterlist *dst = req->dst;
if (unlikely(cryptlen < AES_BLOCK_SIZE))
if (unlikely(req->cryptlen < AES_BLOCK_SIZE))
return -EINVAL;
kernel_fpu_begin();
@@ -593,23 +497,16 @@ xts_crypt(struct skcipher_request *req, xts_encrypt_iv_func encrypt_iv,
/*
* In practice, virtually all XTS plaintexts and ciphertexts are either
* 512 or 4096 bytes, aligned such that they don't span page boundaries.
* To optimize the performance of these cases, and also any other case
* where no page boundary is spanned, the below fast-path handles
* single-page sources and destinations as efficiently as possible.
* 512 or 4096 bytes and do not use multiple scatterlist elements. To
* optimize the performance of these cases, the below fast-path handles
* single-scatterlist-element messages as efficiently as possible. The
* code is 64-bit specific, as it assumes no page mapping is needed.
*/
if (likely(src->length >= cryptlen && dst->length >= cryptlen &&
src->offset + cryptlen <= PAGE_SIZE &&
dst->offset + cryptlen <= PAGE_SIZE)) {
struct page *src_page = sg_page(src);
struct page *dst_page = sg_page(dst);
void *src_virt = kmap_local_page(src_page) + src->offset;
void *dst_virt = kmap_local_page(dst_page) + dst->offset;
(*crypt_func)(&ctx->crypt_ctx, src_virt, dst_virt, cryptlen,
req->iv);
kunmap_local(dst_virt);
kunmap_local(src_virt);
if (IS_ENABLED(CONFIG_X86_64) &&
likely(req->src->length >= req->cryptlen &&
req->dst->length >= req->cryptlen)) {
(*crypt_func)(&ctx->crypt_ctx, sg_virt(req->src),
sg_virt(req->dst), req->cryptlen, req->iv);
kernel_fpu_end();
return 0;
}
@@ -731,8 +628,8 @@ static struct skcipher_alg aesni_skciphers[] = {
.ivsize = AES_BLOCK_SIZE,
.chunksize = AES_BLOCK_SIZE,
.setkey = aesni_skcipher_setkey,
.encrypt = ctr_crypt,
.decrypt = ctr_crypt,
.encrypt = ctr_crypt_aesni,
.decrypt = ctr_crypt_aesni,
#endif
}, {
.base = {
@@ -758,35 +655,105 @@ static
struct simd_skcipher_alg *aesni_simd_skciphers[ARRAY_SIZE(aesni_skciphers)];
#ifdef CONFIG_X86_64
/*
* XCTR does not have a non-AVX implementation, so it must be enabled
* conditionally.
*/
static struct skcipher_alg aesni_xctr = {
.base = {
.cra_name = "__xctr(aes)",
.cra_driver_name = "__xctr-aes-aesni",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = 1,
.cra_ctxsize = CRYPTO_AES_CTX_SIZE,
.cra_module = THIS_MODULE,
},
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.chunksize = AES_BLOCK_SIZE,
.setkey = aesni_skcipher_setkey,
.encrypt = xctr_crypt,
.decrypt = xctr_crypt,
};
static struct simd_skcipher_alg *aesni_simd_xctr;
asmlinkage void aes_xts_encrypt_iv(const struct crypto_aes_ctx *tweak_key,
u8 iv[AES_BLOCK_SIZE]);
#define DEFINE_XTS_ALG(suffix, driver_name, priority) \
/* __always_inline to avoid indirect call */
static __always_inline int
ctr_crypt(struct skcipher_request *req,
void (*ctr64_func)(const struct crypto_aes_ctx *key,
const u8 *src, u8 *dst, int len,
const u64 le_ctr[2]))
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct crypto_aes_ctx *key = aes_ctx(crypto_skcipher_ctx(tfm));
unsigned int nbytes, p1_nbytes, nblocks;
struct skcipher_walk walk;
u64 le_ctr[2];
u64 ctr64;
int err;
ctr64 = le_ctr[0] = get_unaligned_be64(&req->iv[8]);
le_ctr[1] = get_unaligned_be64(&req->iv[0]);
err = skcipher_walk_virt(&walk, req, false);
while ((nbytes = walk.nbytes) != 0) {
if (nbytes < walk.total) {
/* Not the end yet, so keep the length block-aligned. */
nbytes = round_down(nbytes, AES_BLOCK_SIZE);
nblocks = nbytes / AES_BLOCK_SIZE;
} else {
/* It's the end, so include any final partial block. */
nblocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE);
}
ctr64 += nblocks;
kernel_fpu_begin();
if (likely(ctr64 >= nblocks)) {
/* The low 64 bits of the counter won't overflow. */
(*ctr64_func)(key, walk.src.virt.addr,
walk.dst.virt.addr, nbytes, le_ctr);
} else {
/*
* The low 64 bits of the counter will overflow. The
* assembly doesn't handle this case, so split the
* operation into two at the point where the overflow
* will occur. After the first part, add the carry bit.
*/
p1_nbytes = min_t(unsigned int, nbytes,
(nblocks - ctr64) * AES_BLOCK_SIZE);
(*ctr64_func)(key, walk.src.virt.addr,
walk.dst.virt.addr, p1_nbytes, le_ctr);
le_ctr[0] = 0;
le_ctr[1]++;
(*ctr64_func)(key, walk.src.virt.addr + p1_nbytes,
walk.dst.virt.addr + p1_nbytes,
nbytes - p1_nbytes, le_ctr);
}
kernel_fpu_end();
le_ctr[0] = ctr64;
err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
}
put_unaligned_be64(ctr64, &req->iv[8]);
put_unaligned_be64(le_ctr[1], &req->iv[0]);
return err;
}
/* __always_inline to avoid indirect call */
static __always_inline int
xctr_crypt(struct skcipher_request *req,
void (*xctr_func)(const struct crypto_aes_ctx *key,
const u8 *src, u8 *dst, int len,
const u8 iv[AES_BLOCK_SIZE], u64 ctr))
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct crypto_aes_ctx *key = aes_ctx(crypto_skcipher_ctx(tfm));
struct skcipher_walk walk;
unsigned int nbytes;
u64 ctr = 1;
int err;
err = skcipher_walk_virt(&walk, req, false);
while ((nbytes = walk.nbytes) != 0) {
if (nbytes < walk.total)
nbytes = round_down(nbytes, AES_BLOCK_SIZE);
kernel_fpu_begin();
(*xctr_func)(key, walk.src.virt.addr, walk.dst.virt.addr,
nbytes, req->iv, ctr);
kernel_fpu_end();
ctr += DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE);
err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
}
return err;
}
#define DEFINE_AVX_SKCIPHER_ALGS(suffix, driver_name_suffix, priority) \
\
asmlinkage void \
aes_xts_encrypt_##suffix(const struct crypto_aes_ctx *key, const u8 *src, \
@@ -805,32 +772,80 @@ static int xts_decrypt_##suffix(struct skcipher_request *req) \
return xts_crypt(req, aes_xts_encrypt_iv, aes_xts_decrypt_##suffix); \
} \
\
static struct skcipher_alg aes_xts_alg_##suffix = { \
.base = { \
.cra_name = "__xts(aes)", \
.cra_driver_name = "__" driver_name, \
.cra_priority = priority, \
.cra_flags = CRYPTO_ALG_INTERNAL, \
.cra_blocksize = AES_BLOCK_SIZE, \
.cra_ctxsize = XTS_AES_CTX_SIZE, \
.cra_module = THIS_MODULE, \
}, \
.min_keysize = 2 * AES_MIN_KEY_SIZE, \
.max_keysize = 2 * AES_MAX_KEY_SIZE, \
.ivsize = AES_BLOCK_SIZE, \
.walksize = 2 * AES_BLOCK_SIZE, \
.setkey = xts_setkey_aesni, \
.encrypt = xts_encrypt_##suffix, \
.decrypt = xts_decrypt_##suffix, \
}; \
asmlinkage void \
aes_ctr64_crypt_##suffix(const struct crypto_aes_ctx *key, \
const u8 *src, u8 *dst, int len, const u64 le_ctr[2]);\
\
static struct simd_skcipher_alg *aes_xts_simdalg_##suffix
static int ctr_crypt_##suffix(struct skcipher_request *req) \
{ \
return ctr_crypt(req, aes_ctr64_crypt_##suffix); \
} \
\
asmlinkage void \
aes_xctr_crypt_##suffix(const struct crypto_aes_ctx *key, \
const u8 *src, u8 *dst, int len, \
const u8 iv[AES_BLOCK_SIZE], u64 ctr); \
\
static int xctr_crypt_##suffix(struct skcipher_request *req) \
{ \
return xctr_crypt(req, aes_xctr_crypt_##suffix); \
} \
\
static struct skcipher_alg skcipher_algs_##suffix[] = {{ \
.base.cra_name = "__xts(aes)", \
.base.cra_driver_name = "__xts-aes-" driver_name_suffix, \
.base.cra_priority = priority, \
.base.cra_flags = CRYPTO_ALG_INTERNAL, \
.base.cra_blocksize = AES_BLOCK_SIZE, \
.base.cra_ctxsize = XTS_AES_CTX_SIZE, \
.base.cra_module = THIS_MODULE, \
.min_keysize = 2 * AES_MIN_KEY_SIZE, \
.max_keysize = 2 * AES_MAX_KEY_SIZE, \
.ivsize = AES_BLOCK_SIZE, \
.walksize = 2 * AES_BLOCK_SIZE, \
.setkey = xts_setkey_aesni, \
.encrypt = xts_encrypt_##suffix, \
.decrypt = xts_decrypt_##suffix, \
}, { \
.base.cra_name = "__ctr(aes)", \
.base.cra_driver_name = "__ctr-aes-" driver_name_suffix, \
.base.cra_priority = priority, \
.base.cra_flags = CRYPTO_ALG_INTERNAL, \
.base.cra_blocksize = 1, \
.base.cra_ctxsize = CRYPTO_AES_CTX_SIZE, \
.base.cra_module = THIS_MODULE, \
.min_keysize = AES_MIN_KEY_SIZE, \
.max_keysize = AES_MAX_KEY_SIZE, \
.ivsize = AES_BLOCK_SIZE, \
.chunksize = AES_BLOCK_SIZE, \
.setkey = aesni_skcipher_setkey, \
.encrypt = ctr_crypt_##suffix, \
.decrypt = ctr_crypt_##suffix, \
}, { \
.base.cra_name = "__xctr(aes)", \
.base.cra_driver_name = "__xctr-aes-" driver_name_suffix, \
.base.cra_priority = priority, \
.base.cra_flags = CRYPTO_ALG_INTERNAL, \
.base.cra_blocksize = 1, \
.base.cra_ctxsize = CRYPTO_AES_CTX_SIZE, \
.base.cra_module = THIS_MODULE, \
.min_keysize = AES_MIN_KEY_SIZE, \
.max_keysize = AES_MAX_KEY_SIZE, \
.ivsize = AES_BLOCK_SIZE, \
.chunksize = AES_BLOCK_SIZE, \
.setkey = aesni_skcipher_setkey, \
.encrypt = xctr_crypt_##suffix, \
.decrypt = xctr_crypt_##suffix, \
}}; \
\
static struct simd_skcipher_alg * \
simd_skcipher_algs_##suffix[ARRAY_SIZE(skcipher_algs_##suffix)]
DEFINE_XTS_ALG(aesni_avx, "xts-aes-aesni-avx", 500);
DEFINE_AVX_SKCIPHER_ALGS(aesni_avx, "aesni-avx", 500);
#if defined(CONFIG_AS_VAES) && defined(CONFIG_AS_VPCLMULQDQ)
DEFINE_XTS_ALG(vaes_avx2, "xts-aes-vaes-avx2", 600);
DEFINE_XTS_ALG(vaes_avx10_256, "xts-aes-vaes-avx10_256", 700);
DEFINE_XTS_ALG(vaes_avx10_512, "xts-aes-vaes-avx10_512", 800);
DEFINE_AVX_SKCIPHER_ALGS(vaes_avx2, "vaes-avx2", 600);
DEFINE_AVX_SKCIPHER_ALGS(vaes_avx10_256, "vaes-avx10_256", 700);
DEFINE_AVX_SKCIPHER_ALGS(vaes_avx10_512, "vaes-avx10_512", 800);
#endif
/* The common part of the x86_64 AES-GCM key struct */
@@ -1291,41 +1306,40 @@ static void gcm_process_assoc(const struct aes_gcm_key *key, u8 ghash_acc[16],
scatterwalk_start(&walk, sg_src);
while (assoclen) {
unsigned int len_this_page = scatterwalk_clamp(&walk, assoclen);
void *mapped = scatterwalk_map(&walk);
const void *src = mapped;
unsigned int orig_len_this_step = scatterwalk_next(
&walk, assoclen);
unsigned int len_this_step = orig_len_this_step;
unsigned int len;
const u8 *src = walk.addr;
assoclen -= len_this_page;
scatterwalk_advance(&walk, len_this_page);
if (unlikely(pos)) {
len = min(len_this_page, 16 - pos);
len = min(len_this_step, 16 - pos);
memcpy(&buf[pos], src, len);
pos += len;
src += len;
len_this_page -= len;
len_this_step -= len;
if (pos < 16)
goto next;
aes_gcm_aad_update(key, ghash_acc, buf, 16, flags);
pos = 0;
}
len = len_this_page;
len = len_this_step;
if (unlikely(assoclen)) /* Not the last segment yet? */
len = round_down(len, 16);
aes_gcm_aad_update(key, ghash_acc, src, len, flags);
src += len;
len_this_page -= len;
if (unlikely(len_this_page)) {
memcpy(buf, src, len_this_page);
pos = len_this_page;
len_this_step -= len;
if (unlikely(len_this_step)) {
memcpy(buf, src, len_this_step);
pos = len_this_step;
}
next:
scatterwalk_unmap(mapped);
scatterwalk_pagedone(&walk, 0, assoclen);
scatterwalk_done_src(&walk, orig_len_this_step);
if (need_resched()) {
kernel_fpu_end();
kernel_fpu_begin();
}
assoclen -= orig_len_this_step;
}
if (unlikely(pos))
aes_gcm_aad_update(key, ghash_acc, buf, pos, flags);
@@ -1542,8 +1556,9 @@ static int __init register_avx_algs(void)
if (!boot_cpu_has(X86_FEATURE_AVX))
return 0;
err = simd_register_skciphers_compat(&aes_xts_alg_aesni_avx, 1,
&aes_xts_simdalg_aesni_avx);
err = simd_register_skciphers_compat(skcipher_algs_aesni_avx,
ARRAY_SIZE(skcipher_algs_aesni_avx),
simd_skcipher_algs_aesni_avx);
if (err)
return err;
err = simd_register_aeads_compat(aes_gcm_algs_aesni_avx,
@@ -1551,6 +1566,12 @@ static int __init register_avx_algs(void)
aes_gcm_simdalgs_aesni_avx);
if (err)
return err;
/*
* Note: not all the algorithms registered below actually require
* VPCLMULQDQ. But in practice every CPU with VAES also has VPCLMULQDQ.
* Similarly, the assembler support was added at about the same time.
* For simplicity, just always check for VAES and VPCLMULQDQ together.
*/
#if defined(CONFIG_AS_VAES) && defined(CONFIG_AS_VPCLMULQDQ)
if (!boot_cpu_has(X86_FEATURE_AVX2) ||
!boot_cpu_has(X86_FEATURE_VAES) ||
@@ -1558,8 +1579,9 @@ static int __init register_avx_algs(void)
!boot_cpu_has(X86_FEATURE_PCLMULQDQ) ||
!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL))
return 0;
err = simd_register_skciphers_compat(&aes_xts_alg_vaes_avx2, 1,
&aes_xts_simdalg_vaes_avx2);
err = simd_register_skciphers_compat(skcipher_algs_vaes_avx2,
ARRAY_SIZE(skcipher_algs_vaes_avx2),
simd_skcipher_algs_vaes_avx2);
if (err)
return err;
@@ -1570,8 +1592,9 @@ static int __init register_avx_algs(void)
XFEATURE_MASK_AVX512, NULL))
return 0;
err = simd_register_skciphers_compat(&aes_xts_alg_vaes_avx10_256, 1,
&aes_xts_simdalg_vaes_avx10_256);
err = simd_register_skciphers_compat(skcipher_algs_vaes_avx10_256,
ARRAY_SIZE(skcipher_algs_vaes_avx10_256),
simd_skcipher_algs_vaes_avx10_256);
if (err)
return err;
err = simd_register_aeads_compat(aes_gcm_algs_vaes_avx10_256,
@@ -1583,13 +1606,15 @@ static int __init register_avx_algs(void)
if (boot_cpu_has(X86_FEATURE_PREFER_YMM)) {
int i;
aes_xts_alg_vaes_avx10_512.base.cra_priority = 1;
for (i = 0; i < ARRAY_SIZE(skcipher_algs_vaes_avx10_512); i++)
skcipher_algs_vaes_avx10_512[i].base.cra_priority = 1;
for (i = 0; i < ARRAY_SIZE(aes_gcm_algs_vaes_avx10_512); i++)
aes_gcm_algs_vaes_avx10_512[i].base.cra_priority = 1;
}
err = simd_register_skciphers_compat(&aes_xts_alg_vaes_avx10_512, 1,
&aes_xts_simdalg_vaes_avx10_512);
err = simd_register_skciphers_compat(skcipher_algs_vaes_avx10_512,
ARRAY_SIZE(skcipher_algs_vaes_avx10_512),
simd_skcipher_algs_vaes_avx10_512);
if (err)
return err;
err = simd_register_aeads_compat(aes_gcm_algs_vaes_avx10_512,
@@ -1603,27 +1628,31 @@ static int __init register_avx_algs(void)
static void unregister_avx_algs(void)
{
if (aes_xts_simdalg_aesni_avx)
simd_unregister_skciphers(&aes_xts_alg_aesni_avx, 1,
&aes_xts_simdalg_aesni_avx);
if (simd_skcipher_algs_aesni_avx[0])
simd_unregister_skciphers(skcipher_algs_aesni_avx,
ARRAY_SIZE(skcipher_algs_aesni_avx),
simd_skcipher_algs_aesni_avx);
if (aes_gcm_simdalgs_aesni_avx[0])
simd_unregister_aeads(aes_gcm_algs_aesni_avx,
ARRAY_SIZE(aes_gcm_algs_aesni_avx),
aes_gcm_simdalgs_aesni_avx);
#if defined(CONFIG_AS_VAES) && defined(CONFIG_AS_VPCLMULQDQ)
if (aes_xts_simdalg_vaes_avx2)
simd_unregister_skciphers(&aes_xts_alg_vaes_avx2, 1,
&aes_xts_simdalg_vaes_avx2);
if (aes_xts_simdalg_vaes_avx10_256)
simd_unregister_skciphers(&aes_xts_alg_vaes_avx10_256, 1,
&aes_xts_simdalg_vaes_avx10_256);
if (simd_skcipher_algs_vaes_avx2[0])
simd_unregister_skciphers(skcipher_algs_vaes_avx2,
ARRAY_SIZE(skcipher_algs_vaes_avx2),
simd_skcipher_algs_vaes_avx2);
if (simd_skcipher_algs_vaes_avx10_256[0])
simd_unregister_skciphers(skcipher_algs_vaes_avx10_256,
ARRAY_SIZE(skcipher_algs_vaes_avx10_256),
simd_skcipher_algs_vaes_avx10_256);
if (aes_gcm_simdalgs_vaes_avx10_256[0])
simd_unregister_aeads(aes_gcm_algs_vaes_avx10_256,
ARRAY_SIZE(aes_gcm_algs_vaes_avx10_256),
aes_gcm_simdalgs_vaes_avx10_256);
if (aes_xts_simdalg_vaes_avx10_512)
simd_unregister_skciphers(&aes_xts_alg_vaes_avx10_512, 1,
&aes_xts_simdalg_vaes_avx10_512);
if (simd_skcipher_algs_vaes_avx10_512[0])
simd_unregister_skciphers(skcipher_algs_vaes_avx10_512,
ARRAY_SIZE(skcipher_algs_vaes_avx10_512),
simd_skcipher_algs_vaes_avx10_512);
if (aes_gcm_simdalgs_vaes_avx10_512[0])
simd_unregister_aeads(aes_gcm_algs_vaes_avx10_512,
ARRAY_SIZE(aes_gcm_algs_vaes_avx10_512),
@@ -1656,13 +1685,6 @@ static int __init aesni_init(void)
if (!x86_match_cpu(aesni_cpu_id))
return -ENODEV;
#ifdef CONFIG_X86_64
if (boot_cpu_has(X86_FEATURE_AVX)) {
/* optimize performance of ctr mode encryption transform */
static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);
pr_info("AES CTR mode by8 optimization enabled\n");
}
#endif /* CONFIG_X86_64 */
err = crypto_register_alg(&aesni_cipher_alg);
if (err)
@@ -1680,14 +1702,6 @@ static int __init aesni_init(void)
if (err)
goto unregister_skciphers;
#ifdef CONFIG_X86_64
if (boot_cpu_has(X86_FEATURE_AVX))
err = simd_register_skciphers_compat(&aesni_xctr, 1,
&aesni_simd_xctr);
if (err)
goto unregister_aeads;
#endif /* CONFIG_X86_64 */
err = register_avx_algs();
if (err)
goto unregister_avx;
@@ -1696,11 +1710,6 @@ static int __init aesni_init(void)
unregister_avx:
unregister_avx_algs();
#ifdef CONFIG_X86_64
if (aesni_simd_xctr)
simd_unregister_skciphers(&aesni_xctr, 1, &aesni_simd_xctr);
unregister_aeads:
#endif /* CONFIG_X86_64 */
simd_unregister_aeads(aes_gcm_algs_aesni,
ARRAY_SIZE(aes_gcm_algs_aesni),
aes_gcm_simdalgs_aesni);
@@ -1720,10 +1729,6 @@ static void __exit aesni_exit(void)
simd_unregister_skciphers(aesni_skciphers, ARRAY_SIZE(aesni_skciphers),
aesni_simd_skciphers);
crypto_unregister_alg(&aesni_cipher_alg);
#ifdef CONFIG_X86_64
if (boot_cpu_has(X86_FEATURE_AVX))
simd_unregister_skciphers(&aesni_xctr, 1, &aesni_simd_xctr);
#endif /* CONFIG_X86_64 */
unregister_avx_algs();
}


@@ -133,12 +133,6 @@ void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
}
EXPORT_SYMBOL(hchacha_block_arch);
void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
{
chacha_init_generic(state, key, iv);
}
EXPORT_SYMBOL(chacha_init_arch);
void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
int nrounds)
{
@@ -169,7 +163,7 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
err = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@@ -211,7 +205,7 @@ static int xchacha_simd(struct skcipher_request *req)
struct chacha_ctx subctx;
u8 real_iv[16];
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
if (req->cryptlen > CHACHA_BLOCK_SIZE && crypto_simd_usable()) {
kernel_fpu_begin();


@@ -73,7 +73,7 @@ static int ecb_crypt(struct skcipher_request *req, const u32 *expkey)
err = skcipher_walk_virt(&walk, req, false);
while ((nbytes = walk.nbytes)) {
u8 *wsrc = walk.src.virt.addr;
const u8 *wsrc = walk.src.virt.addr;
u8 *wdst = walk.dst.virt.addr;
/* Process four block batch */


@@ -189,6 +189,20 @@ static int ghash_async_init(struct ahash_request *req)
return crypto_shash_init(desc);
}
static void ghash_init_cryptd_req(struct ahash_request *req)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
ahash_request_set_callback(cryptd_req, req->base.flags,
req->base.complete, req->base.data);
ahash_request_set_crypt(cryptd_req, req->src, req->result,
req->nbytes);
}
static int ghash_async_update(struct ahash_request *req)
{
struct ahash_request *cryptd_req = ahash_request_ctx(req);
@@ -198,8 +212,7 @@ static int ghash_async_update(struct ahash_request *req)
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
ghash_init_cryptd_req(req);
return crypto_ahash_update(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
@@ -216,8 +229,7 @@ static int ghash_async_final(struct ahash_request *req)
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
ghash_init_cryptd_req(req);
return crypto_ahash_final(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
@@ -257,8 +269,7 @@ static int ghash_async_digest(struct ahash_request *req)
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
memcpy(cryptd_req, req, sizeof(*req));
ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
ghash_init_cryptd_req(req);
return crypto_ahash_digest(cryptd_req);
} else {
struct shash_desc *desc = cryptd_shash_desc(cryptd_req);


@@ -18,17 +18,16 @@
* drivers/crypto/nx/nx-842-crypto.c
*/
#include <crypto/internal/scompress.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/sw842.h>
#include <crypto/internal/scompress.h>
struct crypto842_ctx {
void *wmem; /* working memory for compress */
};
static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
static void *crypto842_alloc_ctx(void)
{
void *ctx;
@@ -39,38 +38,11 @@ static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int crypto842_init(struct crypto_tfm *tfm)
{
struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->wmem = crypto842_alloc_ctx(NULL);
if (IS_ERR(ctx->wmem))
return -ENOMEM;
return 0;
}
static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void crypto842_free_ctx(void *ctx)
{
kfree(ctx);
}
static void crypto842_exit(struct crypto_tfm *tfm)
{
struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
crypto842_free_ctx(NULL, ctx->wmem);
}
static int crypto842_compress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
{
struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
return sw842_compress(src, slen, dst, dlen, ctx->wmem);
}
static int crypto842_scompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
@@ -78,13 +50,6 @@ static int crypto842_scompress(struct crypto_scomp *tfm,
return sw842_compress(src, slen, dst, dlen, ctx);
}
static int crypto842_decompress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
{
return sw842_decompress(src, slen, dst, dlen);
}
static int crypto842_sdecompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
@@ -92,20 +57,6 @@ static int crypto842_sdecompress(struct crypto_scomp *tfm,
return sw842_decompress(src, slen, dst, dlen);
}
static struct crypto_alg alg = {
.cra_name = "842",
.cra_driver_name = "842-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct crypto842_ctx),
.cra_module = THIS_MODULE,
.cra_init = crypto842_init,
.cra_exit = crypto842_exit,
.cra_u = { .compress = {
.coa_compress = crypto842_compress,
.coa_decompress = crypto842_decompress } }
};
static struct scomp_alg scomp = {
.alloc_ctx = crypto842_alloc_ctx,
.free_ctx = crypto842_free_ctx,
@@ -121,25 +72,12 @@ static struct scomp_alg scomp = {
static int __init crypto842_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
subsys_initcall(crypto842_mod_init);
static void __exit crypto842_mod_exit(void)
{
crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}
module_exit(crypto842_mod_exit);


@@ -234,6 +234,18 @@ config CRYPTO_AUTHENC
This is required for IPSec ESP (XFRM_ESP).
config CRYPTO_KRB5ENC
tristate "Kerberos 5 combined hash+cipher support"
select CRYPTO_AEAD
select CRYPTO_SKCIPHER
select CRYPTO_MANAGER
select CRYPTO_HASH
select CRYPTO_NULL
help
Combined hash and cipher support for Kerberos 5 RFC3961 simplified
profile. This is required for Kerberos 5-style encryption, used by
sunrpc/NFS and rxrpc/AFS.
config CRYPTO_TEST
tristate "Testing module"
depends on m || EXPERT
@@ -324,6 +336,7 @@ config CRYPTO_CURVE25519
tristate "Curve25519"
select CRYPTO_KPP
select CRYPTO_LIB_CURVE25519_GENERIC
select CRYPTO_LIB_CURVE25519_INTERNAL
help
Curve25519 elliptic curve (RFC7748)
@@ -622,6 +635,7 @@ config CRYPTO_ARC4
config CRYPTO_CHACHA20
tristate "ChaCha"
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_LIB_CHACHA_INTERNAL
select CRYPTO_SKCIPHER
help
The ChaCha20, XChaCha20, and XChaCha12 stream cipher algorithms
@@ -943,6 +957,7 @@ config CRYPTO_POLY1305
tristate "Poly1305"
select CRYPTO_HASH
select CRYPTO_LIB_POLY1305_GENERIC
select CRYPTO_LIB_POLY1305_INTERNAL
help
Poly1305 authenticator algorithm (RFC7539)
@@ -1446,5 +1461,6 @@ endif
source "drivers/crypto/Kconfig"
source "crypto/asymmetric_keys/Kconfig"
source "certs/Kconfig"
source "crypto/krb5/Kconfig"
endif # if CRYPTO


@@ -4,7 +4,7 @@
#
obj-$(CONFIG_CRYPTO) += crypto.o
crypto-y := api.o cipher.o compress.o
crypto-y := api.o cipher.o
obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
obj-$(CONFIG_CRYPTO_FIPS) += fips.o
@@ -157,6 +157,7 @@ obj-$(CONFIG_CRYPTO_CRC32) += crc32_generic.o
CFLAGS_crc32c_generic.o += -DARCH=$(ARCH)
CFLAGS_crc32_generic.o += -DARCH=$(ARCH)
obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o
obj-$(CONFIG_CRYPTO_KRB5ENC) += krb5enc.o
obj-$(CONFIG_CRYPTO_LZO) += lzo.o lzo-rle.o
obj-$(CONFIG_CRYPTO_LZ4) += lz4.o
obj-$(CONFIG_CRYPTO_LZ4HC) += lz4hc.o
@@ -210,3 +211,5 @@ obj-$(CONFIG_CRYPTO_SIMD) += crypto_simd.o
# Key derivation function
#
obj-$(CONFIG_CRYPTO_KDF800108_CTR) += kdf_sp800108.o
obj-$(CONFIG_CRYPTO_KRB5) += krb5/


@@ -12,6 +12,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/page-flags.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
@@ -23,6 +24,8 @@ struct crypto_scomp;
static const struct crypto_type crypto_acomp_type;
static void acomp_reqchain_done(void *data, int err);
static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
{
return container_of(alg, struct acomp_alg, calg.base);
@@ -58,29 +61,56 @@ static void crypto_acomp_exit_tfm(struct crypto_tfm *tfm)
struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
struct acomp_alg *alg = crypto_acomp_alg(acomp);
alg->exit(acomp);
if (alg->exit)
alg->exit(acomp);
if (acomp_is_async(acomp))
crypto_free_acomp(acomp->fb);
}
static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
struct acomp_alg *alg = crypto_acomp_alg(acomp);
struct crypto_acomp *fb = NULL;
int err;
acomp->fb = acomp;
if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
return crypto_init_scomp_ops_async(tfm);
if (acomp_is_async(acomp)) {
fb = crypto_alloc_acomp(crypto_acomp_alg_name(acomp), 0,
CRYPTO_ALG_ASYNC);
if (IS_ERR(fb))
return PTR_ERR(fb);
err = -EINVAL;
if (crypto_acomp_reqsize(fb) > MAX_SYNC_COMP_REQSIZE)
goto out_free_fb;
acomp->fb = fb;
}
acomp->compress = alg->compress;
acomp->decompress = alg->decompress;
acomp->dst_free = alg->dst_free;
acomp->reqsize = alg->reqsize;
if (alg->exit)
acomp->base.exit = crypto_acomp_exit_tfm;
acomp->base.exit = crypto_acomp_exit_tfm;
if (alg->init)
return alg->init(acomp);
if (!alg->init)
return 0;
err = alg->init(acomp);
if (err)
goto out_free_fb;
return 0;
out_free_fb:
crypto_free_acomp(fb);
return err;
}
static unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
@@ -123,35 +153,231 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
static void acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
{
struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
struct acomp_req *req;
struct acomp_req_chain *state = &req->chain;
req = __acomp_request_alloc(acomp);
if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
return crypto_acomp_scomp_alloc_ctx(req);
return req;
state->compl = req->base.complete;
state->data = req->base.data;
req->base.complete = cplt;
req->base.data = state;
state->req0 = req;
}
EXPORT_SYMBOL_GPL(acomp_request_alloc);
void acomp_request_free(struct acomp_req *req)
static void acomp_restore_req(struct acomp_req *req)
{
struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
struct acomp_req_chain *state = req->base.data;
if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
crypto_acomp_scomp_free_ctx(req);
req->base.complete = state->compl;
req->base.data = state->data;
}
if (req->flags & CRYPTO_ACOMP_ALLOC_OUTPUT) {
acomp->dst_free(req->dst);
req->dst = NULL;
static void acomp_reqchain_virt(struct acomp_req_chain *state, int err)
{
struct acomp_req *req = state->cur;
unsigned int slen = req->slen;
unsigned int dlen = req->dlen;
req->base.err = err;
state = &req->chain;
if (state->flags & CRYPTO_ACOMP_REQ_SRC_VIRT)
acomp_request_set_src_dma(req, state->src, slen);
else if (state->flags & CRYPTO_ACOMP_REQ_SRC_FOLIO)
acomp_request_set_src_folio(req, state->sfolio, state->soff, slen);
if (state->flags & CRYPTO_ACOMP_REQ_DST_VIRT)
acomp_request_set_dst_dma(req, state->dst, dlen);
else if (state->flags & CRYPTO_ACOMP_REQ_DST_FOLIO)
acomp_request_set_dst_folio(req, state->dfolio, state->doff, dlen);
}
static void acomp_virt_to_sg(struct acomp_req *req)
{
struct acomp_req_chain *state = &req->chain;
state->flags = req->base.flags & (CRYPTO_ACOMP_REQ_SRC_VIRT |
CRYPTO_ACOMP_REQ_DST_VIRT |
CRYPTO_ACOMP_REQ_SRC_FOLIO |
CRYPTO_ACOMP_REQ_DST_FOLIO);
if (acomp_request_src_isvirt(req)) {
unsigned int slen = req->slen;
const u8 *svirt = req->svirt;
state->src = svirt;
sg_init_one(&state->ssg, svirt, slen);
acomp_request_set_src_sg(req, &state->ssg, slen);
} else if (acomp_request_src_isfolio(req)) {
struct folio *folio = req->sfolio;
unsigned int slen = req->slen;
size_t off = req->soff;
state->sfolio = folio;
state->soff = off;
sg_init_table(&state->ssg, 1);
sg_set_page(&state->ssg, folio_page(folio, off / PAGE_SIZE),
slen, off % PAGE_SIZE);
acomp_request_set_src_sg(req, &state->ssg, slen);
}
__acomp_request_free(req);
if (acomp_request_dst_isvirt(req)) {
unsigned int dlen = req->dlen;
u8 *dvirt = req->dvirt;
state->dst = dvirt;
sg_init_one(&state->dsg, dvirt, dlen);
acomp_request_set_dst_sg(req, &state->dsg, dlen);
} else if (acomp_request_dst_isfolio(req)) {
struct folio *folio = req->dfolio;
unsigned int dlen = req->dlen;
size_t off = req->doff;
state->dfolio = folio;
state->doff = off;
sg_init_table(&state->dsg, 1);
sg_set_page(&state->dsg, folio_page(folio, off / PAGE_SIZE),
dlen, off % PAGE_SIZE);
acomp_request_set_src_sg(req, &state->dsg, dlen);
}
}
EXPORT_SYMBOL_GPL(acomp_request_free);
static int acomp_do_nondma(struct acomp_req_chain *state,
struct acomp_req *req)
{
u32 keep = CRYPTO_ACOMP_REQ_SRC_VIRT |
CRYPTO_ACOMP_REQ_SRC_NONDMA |
CRYPTO_ACOMP_REQ_DST_VIRT |
CRYPTO_ACOMP_REQ_DST_NONDMA;
ACOMP_REQUEST_ON_STACK(fbreq, crypto_acomp_reqtfm(req));
int err;
acomp_request_set_callback(fbreq, req->base.flags, NULL, NULL);
fbreq->base.flags &= ~keep;
fbreq->base.flags |= req->base.flags & keep;
fbreq->src = req->src;
fbreq->dst = req->dst;
fbreq->slen = req->slen;
fbreq->dlen = req->dlen;
if (state->op == crypto_acomp_reqtfm(req)->compress)
err = crypto_acomp_compress(fbreq);
else
err = crypto_acomp_decompress(fbreq);
req->dlen = fbreq->dlen;
return err;
}
static int acomp_do_one_req(struct acomp_req_chain *state,
struct acomp_req *req)
{
state->cur = req;
if (acomp_request_isnondma(req))
return acomp_do_nondma(state, req);
acomp_virt_to_sg(req);
return state->op(req);
}
static int acomp_reqchain_finish(struct acomp_req *req0, int err, u32 mask)
{
struct acomp_req_chain *state = req0->base.data;
struct acomp_req *req = state->cur;
struct acomp_req *n;
acomp_reqchain_virt(state, err);
if (req != req0)
list_add_tail(&req->base.list, &req0->base.list);
list_for_each_entry_safe(req, n, &state->head, base.list) {
list_del_init(&req->base.list);
req->base.flags &= mask;
req->base.complete = acomp_reqchain_done;
req->base.data = state;
err = acomp_do_one_req(state, req);
if (err == -EINPROGRESS) {
if (!list_empty(&state->head))
err = -EBUSY;
goto out;
}
if (err == -EBUSY)
goto out;
acomp_reqchain_virt(state, err);
list_add_tail(&req->base.list, &req0->base.list);
}
acomp_restore_req(req0);
out:
return err;
}
static void acomp_reqchain_done(void *data, int err)
{
struct acomp_req_chain *state = data;
crypto_completion_t compl = state->compl;
data = state->data;
if (err == -EINPROGRESS) {
if (!list_empty(&state->head))
return;
goto notify;
}
err = acomp_reqchain_finish(state->req0, err,
CRYPTO_TFM_REQ_MAY_BACKLOG);
if (err == -EBUSY)
return;
notify:
compl(data, err);
}
static int acomp_do_req_chain(struct acomp_req *req,
int (*op)(struct acomp_req *req))
{
struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
struct acomp_req_chain *state;
int err;
if (crypto_acomp_req_chain(tfm) ||
(!acomp_request_chained(req) && acomp_request_issg(req)))
return op(req);
acomp_save_req(req, acomp_reqchain_done);
state = req->base.data;
state->op = op;
state->src = NULL;
INIT_LIST_HEAD(&state->head);
list_splice_init(&req->base.list, &state->head);
err = acomp_do_one_req(state, req);
if (err == -EBUSY || err == -EINPROGRESS)
return -EBUSY;
return acomp_reqchain_finish(req, err, ~0);
}
int crypto_acomp_compress(struct acomp_req *req)
{
return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->compress);
}
EXPORT_SYMBOL_GPL(crypto_acomp_compress);
int crypto_acomp_decompress(struct acomp_req *req)
{
return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->decompress);
}
EXPORT_SYMBOL_GPL(crypto_acomp_decompress);
void comp_prepare_alg(struct comp_alg_common *alg)
{


@@ -16,6 +16,7 @@
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <net/netlink.h>
#include "internal.h"
@@ -156,8 +157,8 @@ static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
struct aead_alg *aead = container_of(alg, struct aead_alg, base);
seq_printf(m, "type : aead\n");
seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
"yes" : "no");
seq_printf(m, "async : %s\n",
str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "ivsize : %u\n", aead->ivsize);
seq_printf(m, "maxauthsize : %u\n", aead->maxauthsize);


@@ -284,10 +284,9 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
scatterwalk_start(&walk, sg_src);
while (assoclen != 0) {
unsigned int size = scatterwalk_clamp(&walk, assoclen);
unsigned int size = scatterwalk_next(&walk, assoclen);
const u8 *src = walk.addr;
unsigned int left = size;
void *mapped = scatterwalk_map(&walk);
const u8 *src = (const u8 *)mapped;
if (pos + size >= AEGIS_BLOCK_SIZE) {
if (pos > 0) {
@@ -308,9 +307,7 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
pos += left;
assoclen -= size;
scatterwalk_unmap(mapped);
scatterwalk_advance(&walk, size);
scatterwalk_done(&walk, 0, assoclen);
scatterwalk_done_src(&walk, size);
}
if (pos > 0) {


@@ -16,11 +16,13 @@
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <net/netlink.h>
#include "hash.h"
@@ -28,7 +30,7 @@
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
struct crypto_hash_walk {
char *data;
const char *data;
unsigned int offset;
unsigned int flags;
@@ -40,6 +42,27 @@ struct crypto_hash_walk {
struct scatterlist *sg;
};
struct ahash_save_req_state {
struct list_head head;
struct ahash_request *req0;
struct ahash_request *cur;
int (*op)(struct ahash_request *req);
crypto_completion_t compl;
void *data;
struct scatterlist sg;
const u8 *src;
u8 *page;
unsigned int offset;
unsigned int nbytes;
};
static void ahash_reqchain_done(void *data, int err);
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt);
static void ahash_restore_req(struct ahash_request *req);
static void ahash_def_finup_done1(void *data, int err);
static int ahash_def_finup_finish1(struct ahash_request *req, int err);
static int ahash_def_finup(struct ahash_request *req);
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int offset = walk->offset;
@@ -58,7 +81,7 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
sg = walk->sg;
walk->offset = sg->offset;
walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
walk->pg = nth_page(sg_page(walk->sg), (walk->offset >> PAGE_SHIFT));
walk->offset = offset_in_page(walk->offset);
walk->entrylen = sg->length;
@@ -73,20 +96,29 @@ static int crypto_hash_walk_first(struct ahash_request *req,
struct crypto_hash_walk *walk)
{
walk->total = req->nbytes;
walk->entrylen = 0;
if (!walk->total) {
walk->entrylen = 0;
if (!walk->total)
return 0;
walk->flags = req->base.flags;
if (ahash_request_isvirt(req)) {
walk->data = req->svirt;
walk->total = 0;
return req->nbytes;
}
walk->sg = req->src;
walk->flags = req->base.flags;
return hash_walk_new_entry(walk);
}
static int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
if ((walk->flags & CRYPTO_AHASH_REQ_VIRT))
return err;
walk->data -= walk->offset;
kunmap_local(walk->data);
@@ -171,21 +203,36 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
unsigned int nbytes = req->nbytes;
struct scatterlist *sg;
unsigned int offset;
struct page *page;
const u8 *data;
int err;
if (nbytes &&
(sg = req->src, offset = sg->offset,
nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
void *data;
data = req->svirt;
if (!nbytes || ahash_request_isvirt(req))
return crypto_shash_digest(desc, data, nbytes, req->result);
data = kmap_local_page(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
req->result);
kunmap_local(data);
} else
err = crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);
sg = req->src;
if (nbytes > sg->length)
return crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);
page = sg_page(sg);
offset = sg->offset;
data = lowmem_page_address(page) + offset;
if (!IS_ENABLED(CONFIG_HIGHMEM))
return crypto_shash_digest(desc, data, nbytes, req->result);
page = nth_page(page, offset >> PAGE_SHIFT);
offset = offset_in_page(offset);
if (nbytes > (unsigned int)PAGE_SIZE - offset)
return crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);
data = kmap_local_page(page);
err = crypto_shash_digest(desc, data + offset, nbytes,
req->result);
kunmap_local(data);
return err;
}
EXPORT_SYMBOL_GPL(shash_ahash_digest);
@@ -266,89 +313,330 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
static bool ahash_request_hasvirt(struct ahash_request *req)
{
struct ahash_request *r2;
if (ahash_request_isvirt(req))
return true;
list_for_each_entry(r2, &req->base.list, base.list)
if (ahash_request_isvirt(r2))
return true;
return false;
}
static int ahash_reqchain_virt(struct ahash_save_req_state *state,
int err, u32 mask)
{
struct ahash_request *req = state->cur;
for (;;) {
unsigned len = state->nbytes;
req->base.err = err;
if (!state->offset)
break;
if (state->offset == len || err) {
u8 *result = req->result;
ahash_request_set_virt(req, state->src, result, len);
state->offset = 0;
break;
}
len -= state->offset;
len = min(PAGE_SIZE, len);
memcpy(state->page, state->src + state->offset, len);
state->offset += len;
req->nbytes = len;
err = state->op(req);
if (err == -EINPROGRESS) {
if (!list_empty(&state->head) ||
state->offset < state->nbytes)
err = -EBUSY;
break;
}
if (err == -EBUSY)
break;
}
return err;
}
static int ahash_reqchain_finish(struct ahash_request *req0,
struct ahash_save_req_state *state,
int err, u32 mask)
{
struct ahash_request *req = state->cur;
struct crypto_ahash *tfm;
struct ahash_request *n;
bool update;
u8 *page;
err = ahash_reqchain_virt(state, err, mask);
if (err == -EINPROGRESS || err == -EBUSY)
goto out;
if (req != req0)
list_add_tail(&req->base.list, &req0->base.list);
tfm = crypto_ahash_reqtfm(req);
update = state->op == crypto_ahash_alg(tfm)->update;
list_for_each_entry_safe(req, n, &state->head, base.list) {
list_del_init(&req->base.list);
req->base.flags &= mask;
req->base.complete = ahash_reqchain_done;
req->base.data = state;
state->cur = req;
if (update && ahash_request_isvirt(req) && req->nbytes) {
unsigned len = req->nbytes;
u8 *result = req->result;
state->src = req->svirt;
state->nbytes = len;
len = min(PAGE_SIZE, len);
memcpy(state->page, req->svirt, len);
state->offset = len;
ahash_request_set_crypt(req, &state->sg, result, len);
}
err = state->op(req);
if (err == -EINPROGRESS) {
if (!list_empty(&state->head) ||
state->offset < state->nbytes)
err = -EBUSY;
goto out;
}
if (err == -EBUSY)
goto out;
err = ahash_reqchain_virt(state, err, mask);
if (err == -EINPROGRESS || err == -EBUSY)
goto out;
list_add_tail(&req->base.list, &req0->base.list);
}
page = state->page;
if (page) {
memset(page, 0, PAGE_SIZE);
free_page((unsigned long)page);
}
ahash_restore_req(req0);
out:
return err;
}
static void ahash_reqchain_done(void *data, int err)
{
struct ahash_save_req_state *state = data;
crypto_completion_t compl = state->compl;
data = state->data;
if (err == -EINPROGRESS) {
if (!list_empty(&state->head) || state->offset < state->nbytes)
return;
goto notify;
}
err = ahash_reqchain_finish(state->req0, state, err,
CRYPTO_TFM_REQ_MAY_BACKLOG);
if (err == -EBUSY)
return;
notify:
compl(data, err);
}
static int ahash_do_req_chain(struct ahash_request *req,
int (*op)(struct ahash_request *req))
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
bool update = op == crypto_ahash_alg(tfm)->update;
struct ahash_save_req_state *state;
struct ahash_save_req_state state0;
struct ahash_request *r2;
u8 *page = NULL;
int err;
if (crypto_ahash_req_chain(tfm) ||
(!ahash_request_chained(req) &&
(!update || !ahash_request_isvirt(req))))
return op(req);
if (update && ahash_request_hasvirt(req)) {
gfp_t gfp;
u32 flags;
flags = ahash_request_flags(req);
gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
GFP_KERNEL : GFP_ATOMIC;
page = (void *)__get_free_page(gfp);
err = -ENOMEM;
if (!page)
goto out_set_chain;
}
state = &state0;
if (ahash_is_async(tfm)) {
err = ahash_save_req(req, ahash_reqchain_done);
if (err)
goto out_free_page;
state = req->base.data;
}
state->op = op;
state->cur = req;
state->page = page;
state->offset = 0;
state->nbytes = 0;
INIT_LIST_HEAD(&state->head);
list_splice_init(&req->base.list, &state->head);
if (page)
sg_init_one(&state->sg, page, PAGE_SIZE);
if (update && ahash_request_isvirt(req) && req->nbytes) {
unsigned len = req->nbytes;
u8 *result = req->result;
state->src = req->svirt;
state->nbytes = len;
len = min(PAGE_SIZE, len);
memcpy(page, req->svirt, len);
state->offset = len;
ahash_request_set_crypt(req, &state->sg, result, len);
}
err = op(req);
if (err == -EBUSY || err == -EINPROGRESS)
return -EBUSY;
return ahash_reqchain_finish(req, state, err, ~0);
out_free_page:
free_page((unsigned long)page);
out_set_chain:
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list)
r2->base.err = err;
return err;
}
int crypto_ahash_init(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return crypto_shash_init(prepare_shash_desc(req, tfm));
if (likely(tfm->using_shash)) {
struct ahash_request *r2;
int err;
err = crypto_shash_init(prepare_shash_desc(req, tfm));
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list) {
struct shash_desc *desc;
desc = prepare_shash_desc(r2, tfm);
r2->base.err = crypto_shash_init(desc);
}
return err;
}
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
return crypto_ahash_alg(tfm)->init(req);
return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->init);
}
EXPORT_SYMBOL_GPL(crypto_ahash_init);
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
bool has_state)
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
unsigned int ds = crypto_ahash_digestsize(tfm);
struct ahash_request *subreq;
unsigned int subreq_size;
unsigned int reqsize;
u8 *result;
struct ahash_save_req_state *state;
gfp_t gfp;
u32 flags;
subreq_size = sizeof(*subreq);
reqsize = crypto_ahash_reqsize(tfm);
reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
subreq_size += reqsize;
subreq_size += ds;
if (!ahash_is_async(tfm))
return 0;
flags = ahash_request_flags(req);
gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
subreq = kmalloc(subreq_size, gfp);
if (!subreq)
state = kmalloc(sizeof(*state), gfp);
if (!state)
return -ENOMEM;
ahash_request_set_tfm(subreq, tfm);
ahash_request_set_callback(subreq, flags, cplt, req);
result = (u8 *)(subreq + 1) + reqsize;
ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
if (has_state) {
void *state;
state = kmalloc(crypto_ahash_statesize(tfm), gfp);
if (!state) {
kfree(subreq);
return -ENOMEM;
}
crypto_ahash_export(req, state);
crypto_ahash_import(subreq, state);
kfree_sensitive(state);
}
req->priv = subreq;
state->compl = req->base.complete;
state->data = req->base.data;
req->base.complete = cplt;
req->base.data = state;
state->req0 = req;
return 0;
}
static void ahash_restore_req(struct ahash_request *req, int err)
static void ahash_restore_req(struct ahash_request *req)
{
struct ahash_request *subreq = req->priv;
struct ahash_save_req_state *state;
struct crypto_ahash *tfm;
if (!err)
memcpy(req->result, subreq->result,
crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
tfm = crypto_ahash_reqtfm(req);
if (!ahash_is_async(tfm))
return;
req->priv = NULL;
state = req->base.data;
kfree_sensitive(subreq);
req->base.complete = state->compl;
req->base.data = state->data;
kfree(state);
}
int crypto_ahash_update(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return shash_ahash_update(req, ahash_request_ctx(req));
if (likely(tfm->using_shash)) {
struct ahash_request *r2;
int err;
return crypto_ahash_alg(tfm)->update(req);
err = shash_ahash_update(req, ahash_request_ctx(req));
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list) {
struct shash_desc *desc;
desc = ahash_request_ctx(r2);
r2->base.err = shash_ahash_update(r2, desc);
}
return err;
}
return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->update);
}
EXPORT_SYMBOL_GPL(crypto_ahash_update);
@@ -356,10 +644,24 @@ int crypto_ahash_final(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return crypto_shash_final(ahash_request_ctx(req), req->result);
if (likely(tfm->using_shash)) {
struct ahash_request *r2;
int err;
return crypto_ahash_alg(tfm)->final(req);
err = crypto_shash_final(ahash_request_ctx(req), req->result);
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list) {
struct shash_desc *desc;
desc = ahash_request_ctx(r2);
r2->base.err = crypto_shash_final(desc, r2->result);
}
return err;
}
return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->final);
}
EXPORT_SYMBOL_GPL(crypto_ahash_final);
@@ -367,86 +669,182 @@ int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return shash_ahash_finup(req, ahash_request_ctx(req));
if (likely(tfm->using_shash)) {
struct ahash_request *r2;
int err;
return crypto_ahash_alg(tfm)->finup(req);
err = shash_ahash_finup(req, ahash_request_ctx(req));
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list) {
struct shash_desc *desc;
desc = ahash_request_ctx(r2);
r2->base.err = shash_ahash_finup(r2, desc);
}
return err;
}
if (!crypto_ahash_alg(tfm)->finup ||
(!crypto_ahash_req_chain(tfm) && ahash_request_hasvirt(req)))
return ahash_def_finup(req);
return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->finup);
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);
static int ahash_def_digest_finish(struct ahash_request *req, int err)
{
struct crypto_ahash *tfm;
if (err)
goto out;
tfm = crypto_ahash_reqtfm(req);
if (ahash_is_async(tfm))
req->base.complete = ahash_def_finup_done1;
err = crypto_ahash_update(req);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
return ahash_def_finup_finish1(req, err);
out:
ahash_restore_req(req);
return err;
}
static void ahash_def_digest_done(void *data, int err)
{
struct ahash_save_req_state *state0 = data;
struct ahash_save_req_state state;
struct ahash_request *areq;
state = *state0;
areq = state.req0;
if (err == -EINPROGRESS)
goto out;
areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
err = ahash_def_digest_finish(areq, err);
if (err == -EINPROGRESS || err == -EBUSY)
return;
out:
state.compl(state.data, err);
}
static int ahash_def_digest(struct ahash_request *req)
{
int err;
err = ahash_save_req(req, ahash_def_digest_done);
if (err)
return err;
err = crypto_ahash_init(req);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
return ahash_def_digest_finish(req, err);
}
int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
if (likely(tfm->using_shash)) {
struct ahash_request *r2;
int err;
err = shash_ahash_digest(req, prepare_shash_desc(req, tfm));
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list) {
struct shash_desc *desc;
desc = prepare_shash_desc(r2, tfm);
r2->base.err = shash_ahash_digest(r2, desc);
}
return err;
}
if (!crypto_ahash_req_chain(tfm) && ahash_request_hasvirt(req))
return ahash_def_digest(req);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
return crypto_ahash_alg(tfm)->digest(req);
return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->digest);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);
static void ahash_def_finup_done2(void *data, int err)
{
struct ahash_request *areq = data;
struct ahash_save_req_state *state = data;
struct ahash_request *areq = state->req0;
if (err == -EINPROGRESS)
return;
ahash_restore_req(areq, err);
ahash_restore_req(areq);
ahash_request_complete(areq, err);
}
static int ahash_def_finup_finish1(struct ahash_request *req, int err)
{
struct ahash_request *subreq = req->priv;
struct crypto_ahash *tfm;
if (err)
goto out;
subreq->base.complete = ahash_def_finup_done2;
tfm = crypto_ahash_reqtfm(req);
if (ahash_is_async(tfm))
req->base.complete = ahash_def_finup_done2;
err = crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq);
err = crypto_ahash_final(req);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
out:
ahash_restore_req(req, err);
ahash_restore_req(req);
return err;
}
static void ahash_def_finup_done1(void *data, int err)
{
struct ahash_request *areq = data;
struct ahash_request *subreq;
struct ahash_save_req_state *state0 = data;
struct ahash_save_req_state state;
struct ahash_request *areq;
state = *state0;
areq = state.req0;
if (err == -EINPROGRESS)
goto out;
subreq = areq->priv;
subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
err = ahash_def_finup_finish1(areq, err);
if (err == -EINPROGRESS || err == -EBUSY)
return;
out:
ahash_request_complete(areq, err);
state.compl(state.data, err);
}
static int ahash_def_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
int err;
err = ahash_save_req(req, ahash_def_finup_done1, true);
err = ahash_save_req(req, ahash_def_finup_done1);
if (err)
return err;
err = crypto_ahash_alg(tfm)->update(req->priv);
err = crypto_ahash_update(req);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
@@ -489,6 +887,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
struct ahash_alg *alg = crypto_ahash_alg(hash);
crypto_ahash_set_statesize(hash, alg->halg.statesize);
crypto_ahash_set_reqsize(hash, alg->reqsize);
if (tfm->__crt_alg->cra_type == &crypto_shash_type)
return crypto_init_ahash_using_shash(tfm);
@@ -536,8 +935,8 @@ static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
{
seq_printf(m, "type : ahash\n");
seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
"yes" : "no");
seq_printf(m, "async : %s\n",
str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "digestsize : %u\n",
__crypto_hash_alg_common(alg)->digestsize);
@@ -654,6 +1053,9 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
if (alg->halg.statesize == 0)
return -EINVAL;
if (alg->reqsize && alg->reqsize < alg->halg.statesize)
return -EINVAL;
err = hash_prepare_alg(&alg->halg);
if (err)
return err;
@@ -661,8 +1063,6 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
base->cra_type = &crypto_ahash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
if (!alg->finup)
alg->finup = ahash_def_finup;
if (!alg->setkey)
alg->setkey = ahash_nosetkey;
@@ -733,5 +1133,20 @@ int ahash_register_instance(struct crypto_template *tmpl,
}
EXPORT_SYMBOL_GPL(ahash_register_instance);
void ahash_request_free(struct ahash_request *req)
{
struct ahash_request *tmp;
struct ahash_request *r2;
if (unlikely(!req))
return;
list_for_each_entry_safe(r2, tmp, &req->base.list, base.list)
kfree_sensitive(r2);
kfree_sensitive(req);
}
EXPORT_SYMBOL_GPL(ahash_request_free);
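The new ahash_request_free() above walks the chain with list_for_each_entry_safe() because every element is destroyed during the walk: the "safe" variant caches the next pointer before the current node is freed, whereas a plain iteration would read the next pointer out of freed memory. A minimal userspace sketch of that pattern (the list here is a hypothetical stand-in, not the kernel's `<linux/list.h>`):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal singly-linked stand-in for a chained request list. */
struct node {
	struct node *next;
};

/* Free every element of the chain; returns how many were freed.
 * The next pointer is cached before free(), mirroring what
 * list_for_each_entry_safe() does for ahash_request_free(). */
static int free_chain(struct node *head)
{
	struct node *cur = head, *next;
	int n = 0;

	while (cur) {
		next = cur->next;	/* cache before freeing cur */
		free(cur);
		n++;
		cur = next;
	}
	return n;
}
```

Reading `cur->next` after `free(cur)` would be use-after-free; caching it first is the whole point of the `_safe` iterator family.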
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");


@@ -464,8 +464,7 @@ void crypto_unregister_alg(struct crypto_alg *alg)
if (WARN_ON(refcount_read(&alg->cra_refcnt) != 1))
return;
if (alg->cra_destroy)
alg->cra_destroy(alg);
crypto_alg_put(alg);
crypto_remove_final(&list);
}
@@ -955,7 +954,7 @@ struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue)
queue->backlog = queue->backlog->next;
request = queue->list.next;
list_del(request);
list_del_init(request);
return list_entry(request, struct crypto_async_request, list);
}
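The switch from list_del() to list_del_init() in crypto_dequeue_request() leaves the dequeued entry self-linked instead of carrying poisoned pointers, so a caller can later ask list_empty() about the node itself. A userspace sketch of the difference (kernel list primitives re-implemented here purely for illustration):

```c
#include <assert.h>

/* Circular doubly-linked list, as in <linux/list.h> (simplified). */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

/* Unlink and re-initialize: the node points back at itself afterwards,
 * so list_empty(node) is true and the node may be safely re-added.
 * Plain list_del() would leave the pointers unusable (poisoned in the
 * kernel), making such a later check invalid. */
static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}
```

After list_del_init(), both the queue head and the removed node report empty, which is exactly the property request chaining relies on when it re-inspects dequeued entries.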


@@ -36,7 +36,8 @@ EXPORT_SYMBOL_GPL(crypto_chain);
DEFINE_STATIC_KEY_FALSE(__crypto_boot_test_finished);
#endif
static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg);
static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
u32 type, u32 mask);
static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
u32 mask);
@@ -145,7 +146,7 @@ static struct crypto_alg *crypto_larval_add(const char *name, u32 type,
if (alg != &larval->alg) {
kfree(larval);
if (crypto_is_larval(alg))
alg = crypto_larval_wait(alg);
alg = crypto_larval_wait(alg, type, mask);
}
return alg;
@@ -197,7 +198,8 @@ static void crypto_start_test(struct crypto_larval *larval)
crypto_schedule_test(larval);
}
static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
u32 type, u32 mask)
{
struct crypto_larval *larval;
long time_left;
@@ -219,12 +221,7 @@ again:
crypto_larval_kill(larval);
alg = ERR_PTR(-ETIMEDOUT);
} else if (!alg) {
u32 type;
u32 mask;
alg = &larval->alg;
type = alg->cra_flags & ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
mask = larval->mask;
alg = crypto_alg_lookup(alg->cra_name, type, mask) ?:
ERR_PTR(-EAGAIN);
} else if (IS_ERR(alg))
@@ -304,7 +301,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
}
if (!IS_ERR_OR_NULL(alg) && crypto_is_larval(alg))
alg = crypto_larval_wait(alg);
alg = crypto_larval_wait(alg, type, mask);
else if (alg)
;
else if (!(mask & CRYPTO_ALG_TESTED))
@@ -352,7 +349,7 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
ok = crypto_probing_notify(CRYPTO_MSG_ALG_REQUEST, larval);
if (ok == NOTIFY_STOP)
alg = crypto_larval_wait(larval);
alg = crypto_larval_wait(larval, type, mask);
else {
crypto_mod_put(larval);
alg = ERR_PTR(-ENOENT);
@@ -386,10 +383,6 @@ static unsigned int crypto_ctxsize(struct crypto_alg *alg, u32 type, u32 mask)
case CRYPTO_ALG_TYPE_CIPHER:
len += crypto_cipher_ctxsize(alg);
break;
case CRYPTO_ALG_TYPE_COMPRESS:
len += crypto_compress_ctxsize(alg);
break;
}
return len;
@@ -710,5 +703,15 @@ void crypto_req_done(void *data, int err)
}
EXPORT_SYMBOL_GPL(crypto_req_done);
void crypto_destroy_alg(struct crypto_alg *alg)
{
if (alg->cra_type && alg->cra_type->destroy)
alg->cra_type->destroy(alg);
if (alg->cra_destroy)
alg->cra_destroy(alg);
}
EXPORT_SYMBOL_GPL(crypto_destroy_alg);
MODULE_DESCRIPTION("Cryptographic core API");
MODULE_LICENSE("GPL");


@@ -267,7 +267,6 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
struct crypto_sig *sig;
char *key, *ptr;
bool issig;
int ksz;
int ret;
pr_devel("==>%s()\n", __func__);
@@ -302,8 +301,6 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
ret = crypto_sig_set_pubkey(sig, key, pkey->keylen);
if (ret)
goto error_free_tfm;
ksz = crypto_sig_keysize(sig);
} else {
tfm = crypto_alloc_akcipher(alg_name, 0, 0);
if (IS_ERR(tfm)) {
@@ -317,8 +314,6 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
ret = crypto_akcipher_set_pub_key(tfm, key, pkey->keylen);
if (ret)
goto error_free_tfm;
ksz = crypto_akcipher_maxsize(tfm);
}
ret = -EINVAL;
@@ -347,8 +342,8 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
BUG();
}
if (ret == 0)
ret = ksz;
if (!issig && ret == 0)
ret = crypto_akcipher_maxsize(tfm);
error_free_tfm:
if (issig)


@@ -389,32 +389,6 @@ async_xor_val_offs(struct page *dest, unsigned int offset,
}
EXPORT_SYMBOL_GPL(async_xor_val_offs);
/**
* async_xor_val - attempt a xor parity check with a dma engine.
* @dest: destination page used if the xor is performed synchronously
* @src_list: array of source pages
* @offset: offset in pages to start transaction
* @src_cnt: number of source pages
* @len: length in bytes
* @result: 0 if sum == 0 else non-zero
* @submit: submission / completion modifiers
*
* honored flags: ASYNC_TX_ACK
*
* src_list note: if the dest is also a source it must be at index zero.
* The contents of this array will be overwritten if a scribble region
* is not specified.
*/
struct dma_async_tx_descriptor *
async_xor_val(struct page *dest, struct page **src_list, unsigned int offset,
int src_cnt, size_t len, enum sum_check_flags *result,
struct async_submit_ctl *submit)
{
return async_xor_val_offs(dest, offset, src_list, NULL, src_cnt,
len, result, submit);
}
EXPORT_SYMBOL_GPL(async_xor_val);
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("asynchronous xor/xor-zero-sum api");
MODULE_LICENSE("GPL");


@@ -80,3 +80,4 @@ static void __exit bpf_crypto_skcipher_exit(void)
module_init(bpf_crypto_skcipher_init);
module_exit(bpf_crypto_skcipher_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Symmetric key cipher support for BPF");


@@ -21,7 +21,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
err = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, iv);
chacha_init(state, ctx->key, iv);
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
@@ -54,7 +54,7 @@ static int crypto_xchacha_crypt(struct skcipher_request *req)
u8 real_iv[16];
/* Compute the subkey given the original key and first 128 nonce bits */
chacha_init_generic(state, ctx->key, req->iv);
chacha_init(state, ctx->key, req->iv);
hchacha_block_generic(state, subctx.key, ctx->nrounds);
subctx.nrounds = ctx->nrounds;


@@ -1,32 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* Compression operations.
*
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
*/
#include <linux/crypto.h>
#include "internal.h"
int crypto_comp_compress(struct crypto_comp *comp,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
{
struct crypto_tfm *tfm = crypto_comp_tfm(comp);
return tfm->__crt_alg->cra_compress.coa_compress(tfm, src, slen, dst,
dlen);
}
EXPORT_SYMBOL_GPL(crypto_comp_compress);
int crypto_comp_decompress(struct crypto_comp *comp,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
{
struct crypto_tfm *tfm = crypto_comp_tfm(comp);
return tfm->__crt_alg->cra_compress.coa_decompress(tfm, src, slen, dst,
dlen);
}
EXPORT_SYMBOL_GPL(crypto_comp_decompress);


@@ -15,8 +15,6 @@ struct acomp_req;
struct comp_alg_common;
int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
void comp_prepare_alg(struct comp_alg_common *alg);


@@ -17,23 +17,13 @@
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/string.h>
static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
static DEFINE_SPINLOCK(crypto_default_null_skcipher_lock);
static struct crypto_sync_skcipher *crypto_default_null_skcipher;
static int crypto_default_null_skcipher_refcnt;
static int null_compress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
if (slen > *dlen)
return -EINVAL;
memcpy(dst, src, slen);
*dlen = slen;
return 0;
}
static int null_init(struct shash_desc *desc)
{
return 0;
@@ -121,7 +111,7 @@ static struct skcipher_alg skcipher_null = {
.decrypt = null_skcipher_crypt,
};
static struct crypto_alg null_algs[] = { {
static struct crypto_alg cipher_null = {
.cra_name = "cipher_null",
.cra_driver_name = "cipher_null-generic",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
@@ -134,41 +124,39 @@ static struct crypto_alg null_algs[] = { {
.cia_setkey = null_setkey,
.cia_encrypt = null_crypt,
.cia_decrypt = null_crypt } }
}, {
.cra_name = "compress_null",
.cra_driver_name = "compress_null-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_blocksize = NULL_BLOCK_SIZE,
.cra_ctxsize = 0,
.cra_module = THIS_MODULE,
.cra_u = { .compress = {
.coa_compress = null_compress,
.coa_decompress = null_compress } }
} };
};
MODULE_ALIAS_CRYPTO("compress_null");
MODULE_ALIAS_CRYPTO("digest_null");
MODULE_ALIAS_CRYPTO("cipher_null");
struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void)
{
struct crypto_sync_skcipher *ntfm = NULL;
struct crypto_sync_skcipher *tfm;
mutex_lock(&crypto_default_null_skcipher_lock);
spin_lock_bh(&crypto_default_null_skcipher_lock);
tfm = crypto_default_null_skcipher;
if (!tfm) {
tfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
if (IS_ERR(tfm))
goto unlock;
spin_unlock_bh(&crypto_default_null_skcipher_lock);
crypto_default_null_skcipher = tfm;
ntfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
if (IS_ERR(ntfm))
return ntfm;
spin_lock_bh(&crypto_default_null_skcipher_lock);
tfm = crypto_default_null_skcipher;
if (!tfm) {
tfm = ntfm;
ntfm = NULL;
crypto_default_null_skcipher = tfm;
}
}
crypto_default_null_skcipher_refcnt++;
spin_unlock_bh(&crypto_default_null_skcipher_lock);
unlock:
mutex_unlock(&crypto_default_null_skcipher_lock);
crypto_free_sync_skcipher(ntfm);
return tfm;
}
@@ -176,12 +164,16 @@ EXPORT_SYMBOL_GPL(crypto_get_default_null_skcipher);
void crypto_put_default_null_skcipher(void)
{
mutex_lock(&crypto_default_null_skcipher_lock);
struct crypto_sync_skcipher *tfm = NULL;
spin_lock_bh(&crypto_default_null_skcipher_lock);
if (!--crypto_default_null_skcipher_refcnt) {
crypto_free_sync_skcipher(crypto_default_null_skcipher);
tfm = crypto_default_null_skcipher;
crypto_default_null_skcipher = NULL;
}
mutex_unlock(&crypto_default_null_skcipher_lock);
spin_unlock_bh(&crypto_default_null_skcipher_lock);
crypto_free_sync_skcipher(tfm);
}
EXPORT_SYMBOL_GPL(crypto_put_default_null_skcipher);
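The mutex-to-spinlock conversion above forces the classic shape of lazy initialization under a spinlock: the allocation (which may sleep) happens outside the lock, the result is installed only after rechecking under the lock, and the loser of the race discards its spare. A single-threaded userspace sketch of that control flow (the tfm allocation is stubbed with malloc() and the lock itself is elided, so this only illustrates the structure):

```c
#include <assert.h>
#include <stdlib.h>

static void *default_tfm;	/* lazily created singleton */
static int default_refcnt;

static void *get_default(void)
{
	void *ntfm = NULL, *tfm;

	/* lock */
	tfm = default_tfm;
	if (!tfm) {
		/* unlock: allocation may sleep, so it cannot be done
		 * while holding a spinlock */
		ntfm = malloc(1);
		if (!ntfm)
			return NULL;
		/* relock and recheck: another caller may have won */
		tfm = default_tfm;
		if (!tfm) {
			tfm = ntfm;
			ntfm = NULL;
			default_tfm = tfm;
		}
	}
	default_refcnt++;
	/* unlock */
	free(ntfm);	/* lost the race (or NULL): discard the spare */
	return tfm;
}

static void put_default(void)
{
	void *tfm = NULL;

	/* lock */
	if (!--default_refcnt) {
		tfm = default_tfm;
		default_tfm = NULL;
	}
	/* unlock */
	free(tfm);	/* freed outside the lock, as in the patch */
}
```

Note that both the discard of the losing allocation and the final teardown happen after dropping the lock, matching the patched crypto_get/put_default_null_skcipher().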
@@ -189,7 +181,7 @@ static int __init crypto_null_mod_init(void)
{
int ret = 0;
ret = crypto_register_algs(null_algs, ARRAY_SIZE(null_algs));
ret = crypto_register_alg(&cipher_null);
if (ret < 0)
goto out;
@@ -206,14 +198,14 @@ static int __init crypto_null_mod_init(void)
out_unregister_shash:
crypto_unregister_shash(&digest_null);
out_unregister_algs:
crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
crypto_unregister_alg(&cipher_null);
out:
return ret;
}
static void __exit crypto_null_mod_fini(void)
{
crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
crypto_unregister_alg(&cipher_null);
crypto_unregister_shash(&digest_null);
crypto_unregister_skcipher(&skcipher_null);
}


@@ -84,17 +84,6 @@ static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
sizeof(rcipher), &rcipher);
}
static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_comp rcomp;
memset(&rcomp, 0, sizeof(rcomp));
strscpy(rcomp.type, "compression", sizeof(rcomp.type));
return nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, sizeof(rcomp), &rcomp);
}
static int crypto_report_one(struct crypto_alg *alg,
struct crypto_user_alg *ualg, struct sk_buff *skb)
{
@@ -135,11 +124,6 @@ static int crypto_report_one(struct crypto_alg *alg,
if (crypto_report_cipher(skb, alg))
goto nla_put_failure;
break;
case CRYPTO_ALG_TYPE_COMPRESS:
if (crypto_report_comp(skb, alg))
goto nla_put_failure;
break;
}


@@ -33,7 +33,7 @@ static void crypto_ctr_crypt_final(struct skcipher_walk *walk,
u8 *ctrblk = walk->iv;
u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
u8 *src = walk->src.virt.addr;
const u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;
@@ -50,7 +50,7 @@ static int crypto_ctr_crypt_segment(struct skcipher_walk *walk,
crypto_cipher_alg(tfm)->cia_encrypt;
unsigned int bsize = crypto_cipher_blocksize(tfm);
u8 *ctrblk = walk->iv;
u8 *src = walk->src.virt.addr;
const u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
unsigned int nbytes = walk->nbytes;
@@ -77,20 +77,20 @@ static int crypto_ctr_crypt_inplace(struct skcipher_walk *walk,
unsigned int bsize = crypto_cipher_blocksize(tfm);
unsigned long alignmask = crypto_cipher_alignmask(tfm);
unsigned int nbytes = walk->nbytes;
u8 *dst = walk->dst.virt.addr;
u8 *ctrblk = walk->iv;
u8 *src = walk->src.virt.addr;
u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
do {
/* create keystream */
fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
crypto_xor(src, keystream, bsize);
crypto_xor(dst, keystream, bsize);
/* increment counter in counterblock */
crypto_inc(ctrblk, bsize);
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
return nbytes;
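The crypto_ctr_crypt_inplace() hunk above switches the keystream XOR to target the destination pointer, consistent with the const-qualified source in the sibling functions. CTR itself is symmetric: encryption and decryption are the same keystream-XOR, so applying the operation twice with the same counter recovers the plaintext. A toy sketch of full-block CTR (the "block cipher" here is a made-up mixing function, not AES; illustration only):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BSIZE 4

/* Stand-in for the block cipher: any fixed function of the counter
 * works to illustrate the mode. Not cryptographically meaningful. */
static void toy_block_encrypt(uint8_t out[BSIZE], const uint8_t ctr[BSIZE])
{
	for (int i = 0; i < BSIZE; i++)
		out[i] = ctr[i] ^ (uint8_t)(0xA5 + i);
}

/* Big-endian counter increment, as crypto_inc() does. */
static void ctr_inc(uint8_t ctr[BSIZE])
{
	for (int i = BSIZE - 1; i >= 0; i--)
		if (++ctr[i])
			break;
}

/* CTR keystream XOR over full blocks only; the keystream is XORed
 * into dst, as in the patched crypto_ctr_crypt_inplace(). The same
 * function both encrypts and decrypts. */
static void toy_ctr_crypt(uint8_t *dst, const uint8_t *src, size_t len,
			  uint8_t ctr[BSIZE])
{
	uint8_t ks[BSIZE];

	while (len >= BSIZE) {
		toy_block_encrypt(ks, ctr);
		for (int i = 0; i < BSIZE; i++)
			dst[i] = src[i] ^ ks[i];
		ctr_inc(ctr);
		src += BSIZE;
		dst += BSIZE;
		len -= BSIZE;
	}
}
```

Running the transform twice from the same initial counter round-trips the data, which is why the kernel registers a single crypt routine for both directions of ctr(aes).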


@@ -112,7 +112,7 @@ out:
return ret;
}
static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
static void *deflate_alloc_ctx(void)
{
struct deflate_ctx *ctx;
int ret;
@@ -130,32 +130,18 @@ static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int deflate_init(struct crypto_tfm *tfm)
{
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
return __deflate_init(ctx);
}
static void __deflate_exit(void *ctx)
{
deflate_comp_exit(ctx);
deflate_decomp_exit(ctx);
}
static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void deflate_free_ctx(void *ctx)
{
__deflate_exit(ctx);
kfree_sensitive(ctx);
}
static void deflate_exit(struct crypto_tfm *tfm)
{
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
__deflate_exit(ctx);
}
static int __deflate_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -185,14 +171,6 @@ out:
return ret;
}
static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
return __deflate_compress(src, slen, dst, dlen, dctx);
}
static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -241,14 +219,6 @@ out:
return ret;
}
static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
return __deflate_decompress(src, slen, dst, dlen, dctx);
}
static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -256,19 +226,6 @@ static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __deflate_decompress(src, slen, dst, dlen, ctx);
}
static struct crypto_alg alg = {
.cra_name = "deflate",
.cra_driver_name = "deflate-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct deflate_ctx),
.cra_module = THIS_MODULE,
.cra_init = deflate_init,
.cra_exit = deflate_exit,
.cra_u = { .compress = {
.coa_compress = deflate_compress,
.coa_decompress = deflate_decompress } }
};
static struct scomp_alg scomp = {
.alloc_ctx = deflate_alloc_ctx,
.free_ctx = deflate_free_ctx,
@@ -283,24 +240,11 @@ static struct scomp_alg scomp = {
static int __init deflate_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit deflate_mod_fini(void)
{
crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}


@@ -71,7 +71,7 @@ EXPORT_SYMBOL(ecc_get_curve);
void ecc_digits_from_bytes(const u8 *in, unsigned int nbytes,
u64 *out, unsigned int ndigits)
{
int diff = ndigits - DIV_ROUND_UP(nbytes, sizeof(u64));
int diff = ndigits - DIV_ROUND_UP_POW2(nbytes, sizeof(u64));
unsigned int o = nbytes & 7;
__be64 msd = 0;
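The DIV_ROUND_UP to DIV_ROUND_UP_POW2 substitutions in this and the following ecdsa hunks exploit the divisor being a power of two (sizeof(u64) == 8). The exact kernel macro definition may differ from the sketch below; this is an illustrative equivalent that rounds up without forming the `n + d - 1` intermediate:

```c
#include <assert.h>

/* Generic round-up division: correct for any divisor, but computes
 * n + d - 1, which can overflow near the top of the type's range. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Power-of-two variant (hypothetical spelling): the remainder test
 * uses a mask, so no overflowing intermediate is formed, and the
 * compiler lowers the division to a shift. Valid only when d is a
 * power of two. */
#define DIV_ROUND_UP_POW2(n, d)	((n) / (d) + !!((n) & ((d) - 1)))
```

For byte counts that are not a multiple of 8, both macros yield the number of u64 digits needed to hold the value, e.g. 17 bytes round up to 3 digits.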


@@ -22,7 +22,7 @@ static int ecdsa_p1363_verify(struct crypto_sig *tfm,
{
struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
unsigned int keylen = crypto_sig_keysize(ctx->child);
unsigned int ndigits = DIV_ROUND_UP(keylen, sizeof(u64));
unsigned int ndigits = DIV_ROUND_UP_POW2(keylen, sizeof(u64));
struct ecdsa_raw_sig sig;
if (slen != 2 * keylen)


@@ -81,8 +81,8 @@ static int ecdsa_x962_verify(struct crypto_sig *tfm,
struct ecdsa_x962_signature_ctx sig_ctx;
int err;
sig_ctx.ndigits = DIV_ROUND_UP(crypto_sig_keysize(ctx->child),
sizeof(u64));
sig_ctx.ndigits = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
sizeof(u64));
err = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx, src, slen);
if (err < 0)


@@ -405,8 +405,7 @@ static bool parse_cipher_name(char *essiv_cipher_name, const char *cra_name)
if (len >= CRYPTO_MAX_ALG_NAME)
return false;
memcpy(essiv_cipher_name, p, len);
essiv_cipher_name[len] = '\0';
strscpy(essiv_cipher_name, p, len + 1);
return true;
}


@@ -33,6 +33,21 @@ struct crypto_larval {
bool test_started;
};
struct crypto_type {
unsigned int (*ctxsize)(struct crypto_alg *alg, u32 type, u32 mask);
unsigned int (*extsize)(struct crypto_alg *alg);
int (*init_tfm)(struct crypto_tfm *tfm);
void (*show)(struct seq_file *m, struct crypto_alg *alg);
int (*report)(struct sk_buff *skb, struct crypto_alg *alg);
void (*free)(struct crypto_instance *inst);
void (*destroy)(struct crypto_alg *alg);
unsigned int type;
unsigned int maskclear;
unsigned int maskset;
unsigned int tfmsize;
};
enum {
CRYPTOA_UNSPEC,
CRYPTOA_ALG,
@@ -113,6 +128,7 @@ void *crypto_create_tfm_node(struct crypto_alg *alg,
const struct crypto_type *frontend, int node);
void *crypto_clone_tfm(const struct crypto_type *frontend,
struct crypto_tfm *otfm);
void crypto_destroy_alg(struct crypto_alg *alg);
static inline void *crypto_create_tfm(struct crypto_alg *alg,
const struct crypto_type *frontend)
@@ -149,8 +165,8 @@ static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg)
static inline void crypto_alg_put(struct crypto_alg *alg)
{
if (refcount_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy)
alg->cra_destroy(alg);
if (refcount_dec_and_test(&alg->cra_refcnt))
crypto_destroy_alg(alg);
}
static inline int crypto_tmpl_get(struct crypto_template *tmpl)

crypto/krb5/Kconfig (new file, 26 lines)

@@ -0,0 +1,26 @@
config CRYPTO_KRB5
tristate "Kerberos 5 crypto"
select CRYPTO_MANAGER
select CRYPTO_KRB5ENC
select CRYPTO_AUTHENC
select CRYPTO_SKCIPHER
select CRYPTO_HASH_INFO
select CRYPTO_HMAC
select CRYPTO_CMAC
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_SHA512
select CRYPTO_CBC
select CRYPTO_CTS
select CRYPTO_AES
select CRYPTO_CAMELLIA
help
Provide a library for provision of Kerberos-5-based crypto. This is
intended for network filesystems to use.
config CRYPTO_KRB5_SELFTESTS
bool "Kerberos 5 crypto selftests"
depends on CRYPTO_KRB5
help
Turn on some self-testing for the kerberos 5 crypto functions. These
will be performed on module load or boot, if compiled in.

crypto/krb5/Makefile (new file, 18 lines)

@@ -0,0 +1,18 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the Kerberos 5 crypto library
#
krb5-y += \
krb5_kdf.o \
krb5_api.o \
rfc3961_simplified.o \
rfc3962_aes.o \
rfc6803_camellia.o \
rfc8009_aes2.o
krb5-$(CONFIG_CRYPTO_KRB5_SELFTESTS) += \
selftest.o \
selftest_data.o
obj-$(CONFIG_CRYPTO_KRB5) += krb5.o

crypto/krb5/internal.h (new file, 247 lines)

@@ -0,0 +1,247 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* Kerberos5 crypto internals
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/scatterlist.h>
#include <crypto/krb5.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
/*
* Profile used for key derivation and encryption.
*/
struct krb5_crypto_profile {
/* Pseudo-random function */
int (*calc_PRF)(const struct krb5_enctype *krb5,
const struct krb5_buffer *protocol_key,
const struct krb5_buffer *octet_string,
struct krb5_buffer *result,
gfp_t gfp);
/* Checksum key derivation */
int (*calc_Kc)(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
const struct krb5_buffer *usage_constant,
struct krb5_buffer *Kc,
gfp_t gfp);
/* Encryption key derivation */
int (*calc_Ke)(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
const struct krb5_buffer *usage_constant,
struct krb5_buffer *Ke,
gfp_t gfp);
/* Integrity key derivation */
int (*calc_Ki)(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
const struct krb5_buffer *usage_constant,
struct krb5_buffer *Ki,
gfp_t gfp);
/* Derive the keys needed for an encryption AEAD object. */
int (*derive_encrypt_keys)(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp);
/* Directly load the keys needed for an encryption AEAD object. */
int (*load_encrypt_keys)(const struct krb5_enctype *krb5,
const struct krb5_buffer *Ke,
const struct krb5_buffer *Ki,
struct krb5_buffer *setkey,
gfp_t gfp);
/* Derive the key needed for a checksum hash object. */
int (*derive_checksum_key)(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp);
/* Directly load the keys needed for a checksum hash object. */
int (*load_checksum_key)(const struct krb5_enctype *krb5,
const struct krb5_buffer *Kc,
struct krb5_buffer *setkey,
gfp_t gfp);
/* Encrypt data in-place, inserting confounder and checksum. */
ssize_t (*encrypt)(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded);
/* Decrypt data in-place, removing confounder and checksum */
int (*decrypt)(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
/* Generate a MIC on part of a packet, inserting the checksum */
ssize_t (*get_mic)(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len);
/* Verify the MIC on a piece of data, removing the checksum */
int (*verify_mic)(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
};
/*
* Crypto size/alignment rounding convenience macros.
*/
#define crypto_roundup(X) ((unsigned int)round_up((X), CRYPTO_MINALIGN))
#define krb5_aead_size(TFM) \
crypto_roundup(sizeof(struct aead_request) + crypto_aead_reqsize(TFM))
#define krb5_aead_ivsize(TFM) \
crypto_roundup(crypto_aead_ivsize(TFM))
#define krb5_shash_size(TFM) \
crypto_roundup(sizeof(struct shash_desc) + crypto_shash_descsize(TFM))
#define krb5_digest_size(TFM) \
crypto_roundup(crypto_shash_digestsize(TFM))
#define round16(x) (((x) + 15) & ~15)
/*
* Self-testing data.
*/
struct krb5_prf_test {
u32 etype;
const char *name, *key, *octet, *prf;
};
struct krb5_key_test_one {
u32 use;
const char *key;
};
struct krb5_key_test {
u32 etype;
const char *name, *key;
struct krb5_key_test_one Kc, Ke, Ki;
};
struct krb5_enc_test {
u32 etype;
u32 usage;
const char *name, *plain, *conf, *K0, *Ke, *Ki, *ct;
};
struct krb5_mic_test {
u32 etype;
u32 usage;
const char *name, *plain, *K0, *Kc, *mic;
};
/*
* krb5_api.c
*/
struct crypto_aead *krb5_prepare_encryption(const struct krb5_enctype *krb5,
const struct krb5_buffer *keys,
gfp_t gfp);
struct crypto_shash *krb5_prepare_checksum(const struct krb5_enctype *krb5,
const struct krb5_buffer *Kc,
gfp_t gfp);
/*
* krb5_kdf.c
*/
int krb5_derive_Kc(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp);
int krb5_derive_Ke(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp);
int krb5_derive_Ki(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp);
/*
* rfc3961_simplified.c
*/
extern const struct krb5_crypto_profile rfc3961_simplified_profile;
int crypto_shash_update_sg(struct shash_desc *desc, struct scatterlist *sg,
size_t offset, size_t len);
int authenc_derive_encrypt_keys(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp);
int authenc_load_encrypt_keys(const struct krb5_enctype *krb5,
const struct krb5_buffer *Ke,
const struct krb5_buffer *Ki,
struct krb5_buffer *setkey,
gfp_t gfp);
int rfc3961_derive_checksum_key(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp);
int rfc3961_load_checksum_key(const struct krb5_enctype *krb5,
const struct krb5_buffer *Kc,
struct krb5_buffer *setkey,
gfp_t gfp);
ssize_t krb5_aead_encrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg, size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded);
int krb5_aead_decrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
ssize_t rfc3961_get_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg, size_t sg_len,
size_t data_offset, size_t data_len);
int rfc3961_verify_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len);
/*
* rfc3962_aes.c
*/
extern const struct krb5_enctype krb5_aes128_cts_hmac_sha1_96;
extern const struct krb5_enctype krb5_aes256_cts_hmac_sha1_96;
/*
* rfc6803_camellia.c
*/
extern const struct krb5_enctype krb5_camellia128_cts_cmac;
extern const struct krb5_enctype krb5_camellia256_cts_cmac;
/*
* rfc8009_aes2.c
*/
extern const struct krb5_enctype krb5_aes128_cts_hmac_sha256_128;
extern const struct krb5_enctype krb5_aes256_cts_hmac_sha384_192;
/*
* selftest.c
*/
#ifdef CONFIG_CRYPTO_KRB5_SELFTESTS
int krb5_selftest(void);
#else
static inline int krb5_selftest(void) { return 0; }
#endif
/*
* selftest_data.c
*/
extern const struct krb5_prf_test krb5_prf_tests[];
extern const struct krb5_key_test krb5_key_tests[];
extern const struct krb5_enc_test krb5_enc_tests[];
extern const struct krb5_mic_test krb5_mic_tests[];

crypto/krb5/krb5_api.c (new file)
@@ -0,0 +1,452 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Kerberos 5 crypto library.
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include "internal.h"
MODULE_DESCRIPTION("Kerberos 5 crypto");
MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL");
static const struct krb5_enctype *const krb5_supported_enctypes[] = {
&krb5_aes128_cts_hmac_sha1_96,
&krb5_aes256_cts_hmac_sha1_96,
&krb5_aes128_cts_hmac_sha256_128,
&krb5_aes256_cts_hmac_sha384_192,
&krb5_camellia128_cts_cmac,
&krb5_camellia256_cts_cmac,
};
/**
* crypto_krb5_find_enctype - Find the handler for a Kerberos5 encryption type
* @enctype: The standard Kerberos encryption type number
*
* Look up a Kerberos encryption type by number. If successful, returns a
* pointer to the type tables; returns NULL otherwise.
*/
const struct krb5_enctype *crypto_krb5_find_enctype(u32 enctype)
{
const struct krb5_enctype *krb5;
size_t i;
for (i = 0; i < ARRAY_SIZE(krb5_supported_enctypes); i++) {
krb5 = krb5_supported_enctypes[i];
if (krb5->etype == enctype)
return krb5;
}
return NULL;
}
EXPORT_SYMBOL(crypto_krb5_find_enctype);
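The lookup above is a plain linear scan of a static table keyed by the standard enctype number. A minimal userspace sketch of the same pattern (the etype numbers 17, 18, 19, 20, 25 and 26 are the IANA-assigned values for these six encryption types; the struct and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct enctype { uint32_t etype; const char *name; };

/* Table mirroring krb5_supported_enctypes; numbers per RFC 3961/6803/8009. */
static const struct enctype enctypes[] = {
	{ 17, "aes128-cts-hmac-sha1-96"    },
	{ 18, "aes256-cts-hmac-sha1-96"    },
	{ 19, "aes128-cts-hmac-sha256-128" },
	{ 20, "aes256-cts-hmac-sha384-192" },
	{ 25, "camellia128-cts-cmac"       },
	{ 26, "camellia256-cts-cmac"       },
};

/* Linear scan, returning NULL for an unsupported type (as the kernel API does). */
static const struct enctype *find_enctype(uint32_t etype)
{
	for (size_t i = 0; i < sizeof(enctypes) / sizeof(enctypes[0]); i++)
		if (enctypes[i].etype == etype)
			return &enctypes[i];
	return NULL;
}
```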
/**
* crypto_krb5_how_much_buffer - Work out how much buffer is required for an amount of data
* @krb5: The encoding to use.
* @mode: The mode in which to operate (checksum/encrypt)
* @data_size: How much data we want to allow for
* @_offset: Where to place the offset into the buffer
*
* Calculate how much buffer space is required to wrap a given amount of data.
* This allows for a confounder, padding and checksum as appropriate. The
* amount of buffer required is returned and the offset into the buffer at
* which the data will start is placed in *_offset.
*/
size_t crypto_krb5_how_much_buffer(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t data_size, size_t *_offset)
{
switch (mode) {
case KRB5_CHECKSUM_MODE:
*_offset = krb5->cksum_len;
return krb5->cksum_len + data_size;
case KRB5_ENCRYPT_MODE:
*_offset = krb5->conf_len;
return krb5->conf_len + data_size + krb5->cksum_len;
default:
WARN_ON(1);
*_offset = 0;
return 0;
}
}
EXPORT_SYMBOL(crypto_krb5_how_much_buffer);
/**
* crypto_krb5_how_much_data - Work out how much data can fit in an amount of buffer
* @krb5: The encoding to use.
* @mode: The mode in which to operate (checksum/encrypt)
* @_buffer_size: How much buffer we want to allow for (may be reduced)
* @_offset: Where to place the offset into the buffer
*
* Calculate how much data can fit into a given amount of buffer. This
* allows for a confounder, padding and checksum as appropriate. The amount of
* data that will fit is returned, the amount of buffer required is shrunk to
* allow for alignment and the offset into the buffer at which the data will
* start is placed in *_offset.
*/
size_t crypto_krb5_how_much_data(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t *_buffer_size, size_t *_offset)
{
size_t buffer_size = *_buffer_size, data_size;
switch (mode) {
case KRB5_CHECKSUM_MODE:
if (WARN_ON(buffer_size < krb5->cksum_len + 1))
goto bad;
*_offset = krb5->cksum_len;
return buffer_size - krb5->cksum_len;
case KRB5_ENCRYPT_MODE:
if (WARN_ON(buffer_size < krb5->conf_len + 1 + krb5->cksum_len))
goto bad;
data_size = buffer_size - krb5->cksum_len;
*_offset = krb5->conf_len;
return data_size - krb5->conf_len;
default:
WARN_ON(1);
goto bad;
}
bad:
*_offset = 0;
return 0;
}
EXPORT_SYMBOL(crypto_krb5_how_much_data);
/**
* crypto_krb5_where_is_the_data - Find the data in a decrypted message
* @krb5: The encoding to use.
* @mode: Mode of operation
* @_offset: Offset of the secure blob in the buffer; updated to data offset.
* @_len: The length of the secure blob; updated to data length.
*
* Find the offset and size of the data in a secure message so that this
* information can be used in the metadata buffer which will get added to the
* digest by crypto_krb5_verify_mic().
*/
void crypto_krb5_where_is_the_data(const struct krb5_enctype *krb5,
enum krb5_crypto_mode mode,
size_t *_offset, size_t *_len)
{
switch (mode) {
case KRB5_CHECKSUM_MODE:
*_offset += krb5->cksum_len;
*_len -= krb5->cksum_len;
return;
case KRB5_ENCRYPT_MODE:
*_offset += krb5->conf_len;
*_len -= krb5->conf_len + krb5->cksum_len;
return;
default:
WARN_ON_ONCE(1);
return;
}
}
EXPORT_SYMBOL(crypto_krb5_where_is_the_data);
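Taken together, the three helpers above are simple offset arithmetic over the confounder and checksum lengths. A userspace sketch of the encrypt-mode case, using the aes128-cts-hmac-sha1-96 geometry as an assumed example (16-byte confounder, 96-bit = 12-byte checksum; function names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Assumed geometry for aes128-cts-hmac-sha1-96: 16-byte confounder,
 * 96-bit (12-byte) checksum. */
#define CONF_LEN  16
#define CKSUM_LEN 12

/* Mirror of crypto_krb5_how_much_buffer() for KRB5_ENCRYPT_MODE. */
static size_t how_much_buffer(size_t data_size, size_t *offset)
{
	*offset = CONF_LEN;
	return CONF_LEN + data_size + CKSUM_LEN;
}

/* Mirror of crypto_krb5_where_is_the_data() for KRB5_ENCRYPT_MODE. */
static void where_is_the_data(size_t *offset, size_t *len)
{
	*offset += CONF_LEN;
	*len -= CONF_LEN + CKSUM_LEN;
}
```

The round trip holds by construction: wrapping N data bytes needs CONF_LEN + N + CKSUM_LEN bytes of buffer, and locating the data in a secure blob of that size gives back offset CONF_LEN and length N.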
/*
* Prepare the encryption with derived key data.
*/
struct crypto_aead *krb5_prepare_encryption(const struct krb5_enctype *krb5,
const struct krb5_buffer *keys,
gfp_t gfp)
{
struct crypto_aead *ci = NULL;
int ret = -ENOMEM;
ci = crypto_alloc_aead(krb5->encrypt_name, 0, 0);
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
if (ret == -ENOENT)
ret = -ENOPKG;
goto err;
}
ret = crypto_aead_setkey(ci, keys->data, keys->len);
if (ret < 0) {
pr_err("Couldn't set AEAD key %s: %d\n", krb5->encrypt_name, ret);
goto err_ci;
}
ret = crypto_aead_setauthsize(ci, krb5->cksum_len);
if (ret < 0) {
pr_err("Couldn't set AEAD authsize %s: %d\n", krb5->encrypt_name, ret);
goto err_ci;
}
return ci;
err_ci:
crypto_free_aead(ci);
err:
return ERR_PTR(ret);
}
/**
* crypto_krb5_prepare_encryption - Prepare AEAD crypto object for encryption-mode
* @krb5: The encoding to use.
* @TK: The transport key to use.
* @usage: The usage constant for key derivation.
* @gfp: Allocation flags.
*
* Allocate a crypto object that does all the necessary crypto, key it and set
* its parameters and return the crypto handle to it. This can then be used to
* dispatch encrypt and decrypt operations.
*/
struct crypto_aead *crypto_krb5_prepare_encryption(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
u32 usage, gfp_t gfp)
{
struct crypto_aead *ci = NULL;
struct krb5_buffer keys = {};
int ret;
ret = krb5->profile->derive_encrypt_keys(krb5, TK, usage, &keys, gfp);
if (ret < 0)
goto err;
ci = krb5_prepare_encryption(krb5, &keys, gfp);
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
goto err;
}
kfree(keys.data);
return ci;
err:
kfree(keys.data);
return ERR_PTR(ret);
}
EXPORT_SYMBOL(crypto_krb5_prepare_encryption);
/*
* Prepare the checksum with derived key data.
*/
struct crypto_shash *krb5_prepare_checksum(const struct krb5_enctype *krb5,
const struct krb5_buffer *Kc,
gfp_t gfp)
{
struct crypto_shash *ci = NULL;
int ret = -ENOMEM;
ci = crypto_alloc_shash(krb5->cksum_name, 0, 0);
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
if (ret == -ENOENT)
ret = -ENOPKG;
goto err;
}
ret = crypto_shash_setkey(ci, Kc->data, Kc->len);
if (ret < 0) {
pr_err("Couldn't set shash key %s: %d\n", krb5->cksum_name, ret);
goto err_ci;
}
return ci;
err_ci:
crypto_free_shash(ci);
err:
return ERR_PTR(ret);
}
/**
* crypto_krb5_prepare_checksum - Prepare hash crypto object for checksum-mode
* @krb5: The encoding to use.
* @TK: The transport key to use.
* @usage: The usage constant for key derivation.
* @gfp: Allocation flags.
*
* Allocate a crypto object that does all the necessary crypto, key it and set
* its parameters and return the crypto handle to it. This can then be used to
* dispatch get_mic and verify_mic operations.
*/
struct crypto_shash *crypto_krb5_prepare_checksum(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
u32 usage, gfp_t gfp)
{
struct crypto_shash *ci = NULL;
struct krb5_buffer keys = {};
int ret;
ret = krb5->profile->derive_checksum_key(krb5, TK, usage, &keys, gfp);
if (ret < 0) {
pr_err("get_Kc failed %d\n", ret);
goto err;
}
ci = krb5_prepare_checksum(krb5, &keys, gfp);
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
goto err;
}
kfree(keys.data);
return ci;
err:
kfree(keys.data);
return ERR_PTR(ret);
}
EXPORT_SYMBOL(crypto_krb5_prepare_checksum);
/**
* crypto_krb5_encrypt - Apply Kerberos encryption and integrity.
* @krb5: The encoding to use.
* @aead: The keyed crypto object to use.
* @sg: Scatterlist defining the crypto buffer.
* @nr_sg: The number of elements in @sg.
* @sg_len: The size of the buffer.
* @data_offset: The offset of the data in the @sg buffer.
* @data_len: The length of the data.
* @preconfounded: True if the confounder is already inserted.
*
* Using the specified Kerberos encoding, insert a confounder and padding as
* needed, encrypt this and the data in place and insert an integrity checksum
* into the buffer.
*
* The buffer must include space for the confounder, the checksum and any
* padding required. The caller can preinsert the confounder into the buffer
* (for testing, for example).
*
* The resulting secured blob may be less than the size of the buffer.
*
* Returns the size of the secure blob if successful, -ENOMEM on an allocation
* failure, -EFAULT if there is insufficient space, -EMSGSIZE if the confounder
* is too short or the data is misaligned. Other errors may also be returned
* from the crypto layer.
*/
ssize_t crypto_krb5_encrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded)
{
if (WARN_ON(data_offset > sg_len ||
data_len > sg_len ||
data_offset > sg_len - data_len))
return -EMSGSIZE;
return krb5->profile->encrypt(krb5, aead, sg, nr_sg, sg_len,
data_offset, data_len, preconfounded);
}
EXPORT_SYMBOL(crypto_krb5_encrypt);
/**
* crypto_krb5_decrypt - Validate and remove Kerberos encryption and integrity.
* @krb5: The encoding to use.
* @aead: The keyed crypto object to use.
* @sg: Scatterlist defining the crypto buffer.
* @nr_sg: The number of elements in @sg.
* @_offset: Offset of the secure blob in the buffer; updated to data offset.
* @_len: The length of the secure blob; updated to data length.
*
* Using the specified Kerberos encoding, check and remove the integrity
* checksum and decrypt the secure region, stripping off the confounder.
*
* If successful, @_offset and @_len are updated to outline the region in which
* the data plus the trailing padding are stored. The caller is responsible
* for working out how much padding there is and removing it.
*
* Returns 0 if successful or -ENOMEM on an allocation failure; returns -EPROTO
* if the data cannot be parsed or -EBADMSG if the integrity checksum doesn't
* match.  Other errors may also be returned from the crypto layer.
*/
int crypto_krb5_decrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len)
{
return krb5->profile->decrypt(krb5, aead, sg, nr_sg, _offset, _len);
}
EXPORT_SYMBOL(crypto_krb5_decrypt);
/**
* crypto_krb5_get_mic - Apply Kerberos integrity checksum.
* @krb5: The encoding to use.
* @shash: The keyed hash to use.
* @metadata: Metadata to add into the hash before adding the data.
* @sg: Scatterlist defining the crypto buffer.
* @nr_sg: The number of elements in @sg.
* @sg_len: The size of the buffer.
* @data_offset: The offset of the data in the @sg buffer.
* @data_len: The length of the data.
*
* Using the specified Kerberos encoding, calculate and insert an integrity
* checksum into the buffer.
*
* The buffer must include space for the checksum at the front.
*
* Returns the size of the secure blob if successful, -ENOMEM on an allocation
* failure, -EFAULT if there is insufficient space, -EMSGSIZE if the gap for
* the checksum is too short. Other errors may also be returned from the
* crypto layer.
*/
ssize_t crypto_krb5_get_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t sg_len,
size_t data_offset, size_t data_len)
{
if (WARN_ON(data_offset > sg_len ||
data_len > sg_len ||
data_offset > sg_len - data_len))
return -EMSGSIZE;
return krb5->profile->get_mic(krb5, shash, metadata, sg, nr_sg, sg_len,
data_offset, data_len);
}
EXPORT_SYMBOL(crypto_krb5_get_mic);
/**
* crypto_krb5_verify_mic - Validate and remove Kerberos integrity checksum.
* @krb5: The encoding to use.
* @shash: The keyed hash to use.
* @metadata: Metadata to add into the hash before adding the data.
* @sg: Scatterlist defining the crypto buffer.
* @nr_sg: The number of elements in @sg.
* @_offset: Offset of the secure blob in the buffer; updated to data offset.
* @_len: The length of the secure blob; updated to data length.
*
* Using the specified Kerberos encoding, check and remove the integrity
* checksum.
*
* If successful, @_offset and @_len are updated to outline the region in which
* the data is stored.
*
* Returns 0 if successful or -ENOMEM on an allocation failure; returns -EPROTO
* if the data cannot be parsed or -EBADMSG if the checksum doesn't match.
* Other errors may also be returned from the crypto layer.
*/
int crypto_krb5_verify_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len)
{
return krb5->profile->verify_mic(krb5, shash, metadata, sg, nr_sg,
_offset, _len);
}
EXPORT_SYMBOL(crypto_krb5_verify_mic);
static int __init crypto_krb5_init(void)
{
return krb5_selftest();
}
module_init(crypto_krb5_init);
static void __exit crypto_krb5_exit(void)
{
}
module_exit(crypto_krb5_exit);

crypto/krb5/krb5_kdf.c (new file)
@@ -0,0 +1,145 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Kerberos key derivation.
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/export.h>
#include <linux/slab.h>
#include <crypto/skcipher.h>
#include <crypto/hash.h>
#include "internal.h"
/**
* crypto_krb5_calc_PRFplus - Calculate PRF+ [RFC4402]
* @krb5: The encryption type to use
* @K: The protocol key for the pseudo-random function
* @L: The length of the output
* @S: The input octet string
* @result: Result buffer, sized to krb5->prf_len
* @gfp: Allocation restrictions
*
* Calculate the Kerberos pseudo-random function, PRF+(), by the following
* method:
*
* PRF+(K, L, S) = truncate(L, T1 || T2 || .. || Tn)
* Tn = PRF(K, n || S)
* [rfc4402 sec 2]
*/
int crypto_krb5_calc_PRFplus(const struct krb5_enctype *krb5,
const struct krb5_buffer *K,
unsigned int L,
const struct krb5_buffer *S,
struct krb5_buffer *result,
gfp_t gfp)
{
struct krb5_buffer T_series, Tn, n_S;
void *buffer;
int ret, n = 1;
Tn.len = krb5->prf_len;
T_series.len = 0;
n_S.len = 4 + S->len;
buffer = kzalloc(round16(L + Tn.len) + round16(n_S.len), gfp);
if (!buffer)
return -ENOMEM;
T_series.data = buffer;
n_S.data = buffer + round16(L + Tn.len);
memcpy(n_S.data + 4, S->data, S->len);
while (T_series.len < L) {
*(__be32 *)(n_S.data) = htonl(n);
Tn.data = T_series.data + Tn.len * (n - 1);
ret = krb5->profile->calc_PRF(krb5, K, &n_S, &Tn, gfp);
if (ret < 0)
goto err;
T_series.len += Tn.len;
n++;
}
/* Truncate to L */
memcpy(result->data, T_series.data, L);
ret = 0;
err:
kfree_sensitive(buffer);
return ret;
}
EXPORT_SYMBOL(crypto_krb5_calc_PRFplus);
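The loop above just concatenates PRF(K, 1||S), PRF(K, 2||S), ... and truncates the result to L octets. A userspace sketch of that control flow with a stand-in PRF (each 16-byte block filled with the counter value, purely for illustration; the kernel version buffers whole Tn blocks and truncates once at the end, this sketch truncates inline):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PRF_LEN 16	/* assumed prf_len, as for the AES enctypes */

/* Stand-in for krb5->profile->calc_PRF(): fills the block with counter n. */
static void stub_prf(uint32_t n, uint8_t out[PRF_LEN])
{
	memset(out, (int)n, PRF_LEN);
}

/* PRF+(K, L, S) = truncate(L, T1 || T2 || ...), Tn = PRF(K, n || S). */
static void prf_plus(size_t L, uint8_t *result)
{
	uint8_t T[PRF_LEN];
	size_t have = 0;
	uint32_t n = 1;

	while (have < L) {
		stub_prf(n++, T);
		size_t take = (L - have < PRF_LEN) ? L - have : PRF_LEN;
		memcpy(result + have, T, take);
		have += take;
	}
}
```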
/**
* krb5_derive_Kc - Derive key Kc and install into a hash
* @krb5: The encryption type to use
* @TK: The base key
* @usage: The key usage number
* @key: Prepped buffer to store the key into
* @gfp: Allocation restrictions
*
* Derive the Kerberos Kc checksumming key. The key is stored into the
* prepared buffer.
*/
int krb5_derive_Kc(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp)
{
u8 buf[5] __aligned(CRYPTO_MINALIGN);
struct krb5_buffer usage_constant = { .len = 5, .data = buf };
*(__be32 *)buf = cpu_to_be32(usage);
buf[4] = KEY_USAGE_SEED_CHECKSUM;
key->len = krb5->Kc_len;
return krb5->profile->calc_Kc(krb5, TK, &usage_constant, key, gfp);
}
/**
* krb5_derive_Ke - Derive key Ke and install into an skcipher
* @krb5: The encryption type to use
* @TK: The base key
* @usage: The key usage number
* @key: Prepped buffer to store the key into
* @gfp: Allocation restrictions
*
* Derive the Kerberos Ke encryption key. The key is stored into the prepared
* buffer.
*/
int krb5_derive_Ke(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp)
{
u8 buf[5] __aligned(CRYPTO_MINALIGN);
struct krb5_buffer usage_constant = { .len = 5, .data = buf };
*(__be32 *)buf = cpu_to_be32(usage);
buf[4] = KEY_USAGE_SEED_ENCRYPTION;
key->len = krb5->Ke_len;
return krb5->profile->calc_Ke(krb5, TK, &usage_constant, key, gfp);
}
/**
* krb5_derive_Ki - Derive key Ki and install into a hash
* @krb5: The encryption type to use
* @TK: The base key
* @usage: The key usage number
* @key: Prepped buffer to store the key into
* @gfp: Allocation restrictions
*
* Derive the Kerberos Ki integrity checksum key. The key is stored into the
* prepared buffer.
*/
int krb5_derive_Ki(const struct krb5_enctype *krb5, const struct krb5_buffer *TK,
u32 usage, struct krb5_buffer *key, gfp_t gfp)
{
u8 buf[5] __aligned(CRYPTO_MINALIGN);
struct krb5_buffer usage_constant = { .len = 5, .data = buf };
*(__be32 *)buf = cpu_to_be32(usage);
buf[4] = KEY_USAGE_SEED_INTEGRITY;
key->len = krb5->Ki_len;
return krb5->profile->calc_Ki(krb5, TK, &usage_constant, key, gfp);
}
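The three derivations above differ only in the final seed byte of the five-octet usage constant: a big-endian 32-bit usage number followed by the seed. A userspace sketch of that layout (0x99, 0xAA and 0x55 are the standard rfc3961 checksum/encryption/integrity seed bytes; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Standard rfc3961 seed bytes (KEY_USAGE_SEED_*). */
#define SEED_CHECKSUM   0x99
#define SEED_ENCRYPTION 0xAA
#define SEED_INTEGRITY  0x55

/* Build the 5-octet usage constant: big-endian usage number, then seed. */
static void make_usage_constant(uint32_t usage, uint8_t seed, uint8_t buf[5])
{
	buf[0] = usage >> 24;
	buf[1] = usage >> 16;
	buf[2] = usage >> 8;
	buf[3] = usage;
	buf[4] = seed;
}
```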

crypto/krb5/rfc3961_simplified.c (new file)
@@ -0,0 +1,792 @@
// SPDX-License-Identifier: BSD-3-Clause
/* rfc3961 Kerberos 5 simplified crypto profile.
*
* Parts borrowed from net/sunrpc/auth_gss/.
*/
/*
* COPYRIGHT (c) 2008
* The Regents of the University of Michigan
* ALL RIGHTS RESERVED
*
* Permission is granted to use, copy, create derivative works
* and redistribute this software and such derivative works
* for any purpose, so long as the name of The University of
* Michigan is not used in any advertising or publicity
* pertaining to the use of distribution of this software
* without specific, written prior authorization. If the
* above copyright notice or any other identification of the
* University of Michigan is included in any copy of any
* portion of this software, then the disclaimer below must
* also be included.
*
* THIS SOFTWARE IS PROVIDED AS IS, WITHOUT REPRESENTATION
* FROM THE UNIVERSITY OF MICHIGAN AS TO ITS FITNESS FOR ANY
* PURPOSE, AND WITHOUT WARRANTY BY THE UNIVERSITY OF
* MICHIGAN OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING
* WITHOUT LIMITATION THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
* REGENTS OF THE UNIVERSITY OF MICHIGAN SHALL NOT BE LIABLE
* FOR ANY DAMAGES, INCLUDING SPECIAL, INDIRECT, INCIDENTAL, OR
* CONSEQUENTIAL DAMAGES, WITH RESPECT TO ANY CLAIM ARISING
* OUT OF OR IN CONNECTION WITH THE USE OF THE SOFTWARE, EVEN
* IF IT HAS BEEN OR IS HEREAFTER ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGES.
*/
/*
* Copyright (C) 1998 by the FundsXpress, INC.
*
* All rights reserved.
*
* Export of this software from the United States of America may require
* a specific license from the United States Government. It is the
* responsibility of any person or organization contemplating export to
* obtain such a license before exporting.
*
* WITHIN THAT CONSTRAINT, permission to use, copy, modify, and
* distribute this software and its documentation for any purpose and
* without fee is hereby granted, provided that the above copyright
* notice appear in all copies and that both that copyright notice and
* this permission notice appear in supporting documentation, and that
* the name of FundsXpress. not be used in advertising or publicity pertaining
* to distribution of the software without specific, written prior
* permission. FundsXpress makes no representations about the suitability of
* this software for any purpose. It is provided "as is" without express
* or implied warranty.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
* WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*/
/*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/random.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/lcm.h>
#include <linux/rtnetlink.h>
#include <crypto/authenc.h>
#include <crypto/skcipher.h>
#include <crypto/hash.h>
#include "internal.h"
/* Maximum blocksize for the supported crypto algorithms */
#define KRB5_MAX_BLOCKSIZE (16)
int crypto_shash_update_sg(struct shash_desc *desc, struct scatterlist *sg,
size_t offset, size_t len)
{
struct sg_mapping_iter miter;
size_t i, n;
int ret = 0;
sg_miter_start(&miter, sg, sg_nents(sg),
SG_MITER_FROM_SG | SG_MITER_LOCAL);
for (i = 0; i < len; i += n) {
sg_miter_next(&miter);
n = min(miter.length, len - i);
ret = crypto_shash_update(desc, miter.addr, n);
if (ret < 0)
break;
}
sg_miter_stop(&miter);
return ret;
}
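crypto_shash_update_sg() above feeds a scatterlist to the hash one mapped segment at a time, consuming min(segment length, bytes remaining) on each step. The same chunking logic in userspace, with a byte sum standing in for the hash update (illustrative names, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct segment { const uint8_t *addr; size_t length; };

/* Walk segments like the sg_miter loop, summing bytes in place of
 * crypto_shash_update(); stops once len bytes have been consumed. */
static unsigned int sum_segments(const struct segment *seg, size_t len)
{
	unsigned int sum = 0;
	size_t i = 0, n;

	for (; i < len; i += n, seg++) {
		n = (seg->length < len - i) ? seg->length : len - i;
		for (size_t j = 0; j < n; j++)
			sum += seg->addr[j];
	}
	return sum;
}
```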
static int rfc3961_do_encrypt(struct crypto_sync_skcipher *tfm, void *iv,
const struct krb5_buffer *in, struct krb5_buffer *out)
{
struct scatterlist sg[1];
u8 local_iv[KRB5_MAX_BLOCKSIZE] __aligned(KRB5_MAX_BLOCKSIZE) = {0};
SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
int ret;
if (WARN_ON(in->len != out->len))
return -EINVAL;
if (out->len % crypto_sync_skcipher_blocksize(tfm) != 0)
return -EINVAL;
if (crypto_sync_skcipher_ivsize(tfm) > KRB5_MAX_BLOCKSIZE)
return -EINVAL;
if (iv)
memcpy(local_iv, iv, crypto_sync_skcipher_ivsize(tfm));
memcpy(out->data, in->data, out->len);
sg_init_one(sg, out->data, out->len);
skcipher_request_set_sync_tfm(req, tfm);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, sg, sg, out->len, local_iv);
ret = crypto_skcipher_encrypt(req);
skcipher_request_zero(req);
return ret;
}
/*
* Calculate an unkeyed basic hash.
*/
static int rfc3961_calc_H(const struct krb5_enctype *krb5,
const struct krb5_buffer *data,
struct krb5_buffer *digest,
gfp_t gfp)
{
struct crypto_shash *tfm;
struct shash_desc *desc;
size_t desc_size;
int ret = -ENOMEM;
tfm = crypto_alloc_shash(krb5->hash_name, 0, 0);
if (IS_ERR(tfm))
return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
desc = kzalloc(desc_size, gfp);
if (!desc)
goto error_tfm;
digest->len = crypto_shash_digestsize(tfm);
digest->data = kzalloc(digest->len, gfp);
if (!digest->data)
goto error_desc;
desc->tfm = tfm;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error_digest;
ret = crypto_shash_finup(desc, data->data, data->len, digest->data);
if (ret < 0)
goto error_digest;
goto error_desc;
error_digest:
kfree_sensitive(digest->data);
error_desc:
kfree_sensitive(desc);
error_tfm:
crypto_free_shash(tfm);
return ret;
}
/*
* This is the n-fold function as described in rfc3961, sec 5.1
* Taken from MIT Kerberos and modified.
*/
static void rfc3961_nfold(const struct krb5_buffer *source, struct krb5_buffer *result)
{
const u8 *in = source->data;
u8 *out = result->data;
unsigned long ulcm;
unsigned int inbits, outbits;
int byte, i, msbit;
/* the code below is more readable if I make these bytes instead of bits */
inbits = source->len;
outbits = result->len;
/* first compute lcm(n,k) */
ulcm = lcm(inbits, outbits);
/* now do the real work */
memset(out, 0, outbits);
byte = 0;
/* this will end up cycling through k lcm(k,n)/k times, which
* is correct.
*/
for (i = ulcm-1; i >= 0; i--) {
/* compute the msbit in k which gets added into this byte */
msbit = (
/* first, start with the msbit in the first,
* unrotated byte
*/
((inbits << 3) - 1) +
/* then, for each byte, shift to the right
* for each repetition
*/
(((inbits << 3) + 13) * (i/inbits)) +
/* last, pick out the correct byte within
* that shifted repetition
*/
((inbits - (i % inbits)) << 3)
) % (inbits << 3);
/* pull out the byte value itself */
byte += (((in[((inbits - 1) - (msbit >> 3)) % inbits] << 8) |
(in[((inbits) - (msbit >> 3)) % inbits]))
>> ((msbit & 7) + 1)) & 0xff;
/* do the addition */
byte += out[i % outbits];
out[i % outbits] = byte & 0xff;
/* keep around the carry bit, if any */
byte >>= 8;
}
/* if there's a carry bit left over, add it back in */
if (byte) {
for (i = outbits - 1; i >= 0; i--) {
/* do the addition */
byte += out[i];
out[i] = byte & 0xff;
/* keep around the carry bit, if any */
byte >>= 8;
}
}
}
/*
* Calculate a derived key, DK(Base Key, Well-Known Constant)
*
* DK(Key, Constant) = random-to-key(DR(Key, Constant))
* DR(Key, Constant) = k-truncate(E(Key, Constant, initial-cipher-state))
* K1 = E(Key, n-fold(Constant), initial-cipher-state)
* K2 = E(Key, K1, initial-cipher-state)
* K3 = E(Key, K2, initial-cipher-state)
* K4 = ...
* DR(Key, Constant) = k-truncate(K1 | K2 | K3 | K4 ...)
* [rfc3961 sec 5.1]
*/
static int rfc3961_calc_DK(const struct krb5_enctype *krb5,
const struct krb5_buffer *inkey,
const struct krb5_buffer *in_constant,
struct krb5_buffer *result,
gfp_t gfp)
{
unsigned int blocksize, keybytes, keylength, n;
struct krb5_buffer inblock, outblock, rawkey;
struct crypto_sync_skcipher *cipher;
int ret = -EINVAL;
blocksize = krb5->block_len;
keybytes = krb5->key_bytes;
keylength = krb5->key_len;
if (inkey->len != keylength || result->len != keylength)
return -EINVAL;
if (!krb5->random_to_key && result->len != keybytes)
return -EINVAL;
cipher = crypto_alloc_sync_skcipher(krb5->derivation_enc, 0, 0);
if (IS_ERR(cipher)) {
ret = (PTR_ERR(cipher) == -ENOENT) ? -ENOPKG : PTR_ERR(cipher);
goto err_return;
}
ret = crypto_sync_skcipher_setkey(cipher, inkey->data, inkey->len);
if (ret < 0)
goto err_free_cipher;
ret = -ENOMEM;
inblock.data = kzalloc(blocksize * 2 + keybytes, gfp);
if (!inblock.data)
goto err_free_cipher;
inblock.len = blocksize;
outblock.data = inblock.data + blocksize;
outblock.len = blocksize;
rawkey.data = outblock.data + blocksize;
rawkey.len = keybytes;
/* initialize the input block */
if (in_constant->len == inblock.len)
memcpy(inblock.data, in_constant->data, inblock.len);
else
rfc3961_nfold(in_constant, &inblock);
/* loop encrypting the blocks until enough key bytes are generated */
n = 0;
while (n < rawkey.len) {
rfc3961_do_encrypt(cipher, NULL, &inblock, &outblock);
if (keybytes - n <= outblock.len) {
memcpy(rawkey.data + n, outblock.data, keybytes - n);
break;
}
memcpy(rawkey.data + n, outblock.data, outblock.len);
memcpy(inblock.data, outblock.data, outblock.len);
n += outblock.len;
}
/* postprocess the key */
if (!krb5->random_to_key) {
/* Identity random-to-key function. */
memcpy(result->data, rawkey.data, rawkey.len);
ret = 0;
} else {
ret = krb5->random_to_key(krb5, &rawkey, result);
}
kfree_sensitive(inblock.data);
err_free_cipher:
crypto_free_sync_skcipher(cipher);
err_return:
return ret;
}
/*
* Calculate single encryption, E()
*
* E(Key, octets)
*/
static int rfc3961_calc_E(const struct krb5_enctype *krb5,
const struct krb5_buffer *key,
const struct krb5_buffer *in_data,
struct krb5_buffer *result,
gfp_t gfp)
{
struct crypto_sync_skcipher *cipher;
int ret;
cipher = crypto_alloc_sync_skcipher(krb5->derivation_enc, 0, 0);
if (IS_ERR(cipher)) {
ret = (PTR_ERR(cipher) == -ENOENT) ? -ENOPKG : PTR_ERR(cipher);
goto err;
}
ret = crypto_sync_skcipher_setkey(cipher, key->data, key->len);
if (ret < 0)
goto err_free;
ret = rfc3961_do_encrypt(cipher, NULL, in_data, result);
err_free:
crypto_free_sync_skcipher(cipher);
err:
return ret;
}
/*
* Calculate the pseudo-random function, PRF().
*
* tmp1 = H(octet-string)
* tmp2 = truncate tmp1 to multiple of m
* PRF = E(DK(protocol-key, prfconstant), tmp2, initial-cipher-state)
*
* The "prfconstant" used in the PRF operation is the three-octet string
* "prf".
* [rfc3961 sec 5.3]
*/
static int rfc3961_calc_PRF(const struct krb5_enctype *krb5,
const struct krb5_buffer *protocol_key,
const struct krb5_buffer *octet_string,
struct krb5_buffer *result,
gfp_t gfp)
{
static const struct krb5_buffer prfconstant = { 3, "prf" };
struct krb5_buffer derived_key;
struct krb5_buffer tmp1, tmp2;
unsigned int m = krb5->block_len;
void *buffer;
int ret;
if (result->len != krb5->prf_len)
return -EINVAL;
tmp1.len = krb5->hash_len;
derived_key.len = krb5->key_bytes;
buffer = kzalloc(round16(tmp1.len) + round16(derived_key.len), gfp);
if (!buffer)
return -ENOMEM;
tmp1.data = buffer;
derived_key.data = buffer + round16(tmp1.len);
ret = rfc3961_calc_H(krb5, octet_string, &tmp1, gfp);
if (ret < 0)
goto err;
tmp2.len = tmp1.len & ~(m - 1);
tmp2.data = tmp1.data;
ret = rfc3961_calc_DK(krb5, protocol_key, &prfconstant, &derived_key, gfp);
if (ret < 0)
goto err;
ret = rfc3961_calc_E(krb5, &derived_key, &tmp2, result, gfp);
err:
kfree_sensitive(buffer);
return ret;
}
/*
* Derive the Ke and Ki keys and package them into a key parameter that can be
* given to the setkey of an authenc AEAD crypto object.
*/
int authenc_derive_encrypt_keys(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp)
{
struct crypto_authenc_key_param *param;
struct krb5_buffer Ke, Ki;
struct rtattr *rta;
int ret;
Ke.len = krb5->Ke_len;
Ki.len = krb5->Ki_len;
setkey->len = RTA_LENGTH(sizeof(*param)) + Ke.len + Ki.len;
setkey->data = kzalloc(setkey->len, GFP_KERNEL);
if (!setkey->data)
return -ENOMEM;
rta = setkey->data;
rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
rta->rta_len = RTA_LENGTH(sizeof(*param));
param = RTA_DATA(rta);
param->enckeylen = htonl(Ke.len);
Ki.data = (void *)(param + 1);
Ke.data = Ki.data + Ki.len;
ret = krb5_derive_Ke(krb5, TK, usage, &Ke, gfp);
if (ret < 0) {
pr_err("get_Ke failed %d\n", ret);
return ret;
}
ret = krb5_derive_Ki(krb5, TK, usage, &Ki, gfp);
if (ret < 0)
pr_err("get_Ki failed %d\n", ret);
return ret;
}
/*
* Package predefined Ke and Ki keys into a key parameter that can be given
* to the setkey of an authenc AEAD crypto object.
*/
int authenc_load_encrypt_keys(const struct krb5_enctype *krb5,
const struct krb5_buffer *Ke,
const struct krb5_buffer *Ki,
struct krb5_buffer *setkey,
gfp_t gfp)
{
struct crypto_authenc_key_param *param;
struct rtattr *rta;
setkey->len = RTA_LENGTH(sizeof(*param)) + Ke->len + Ki->len;
setkey->data = kzalloc(setkey->len, GFP_KERNEL);
if (!setkey->data)
return -ENOMEM;
rta = setkey->data;
rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
rta->rta_len = RTA_LENGTH(sizeof(*param));
param = RTA_DATA(rta);
param->enckeylen = htonl(Ke->len);
memcpy((void *)(param + 1), Ki->data, Ki->len);
memcpy((void *)(param + 1) + Ki->len, Ke->data, Ke->len);
return 0;
}
/*
* Derive the Kc key for checksum-only mode and package it into a key parameter
* that can be given to the setkey of a hash crypto object.
*/
int rfc3961_derive_checksum_key(const struct krb5_enctype *krb5,
const struct krb5_buffer *TK,
unsigned int usage,
struct krb5_buffer *setkey,
gfp_t gfp)
{
int ret;
setkey->len = krb5->Kc_len;
setkey->data = kzalloc(setkey->len, GFP_KERNEL);
if (!setkey->data)
return -ENOMEM;
ret = krb5_derive_Kc(krb5, TK, usage, setkey, gfp);
if (ret < 0)
pr_err("get_Kc failed %d\n", ret);
return ret;
}
/*
* Package a predefined Kc key for checksum-only mode into a key parameter that
* can be given to the setkey of a hash crypto object.
*/
int rfc3961_load_checksum_key(const struct krb5_enctype *krb5,
const struct krb5_buffer *Kc,
struct krb5_buffer *setkey,
gfp_t gfp)
{
setkey->len = krb5->Kc_len;
setkey->data = kmemdup(Kc->data, Kc->len, GFP_KERNEL);
if (!setkey->data)
return -ENOMEM;
return 0;
}
/*
* Apply encryption and checksumming functions to part of a scatterlist.
*/
ssize_t krb5_aead_encrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg, size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded)
{
struct aead_request *req;
ssize_t ret, done;
size_t bsize, base_len, secure_offset, secure_len, pad_len, cksum_offset;
void *buffer;
u8 *iv;
if (WARN_ON(data_offset != krb5->conf_len))
return -EINVAL; /* Data is in wrong place */
secure_offset = 0;
base_len = krb5->conf_len + data_len;
pad_len = 0;
secure_len = base_len + pad_len;
cksum_offset = secure_len;
if (WARN_ON(cksum_offset + krb5->cksum_len > sg_len))
return -EFAULT;
bsize = krb5_aead_size(aead) +
krb5_aead_ivsize(aead);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
/* Insert the confounder into the buffer */
ret = -EFAULT;
if (!preconfounded) {
get_random_bytes(buffer, krb5->conf_len);
done = sg_pcopy_from_buffer(sg, nr_sg, buffer, krb5->conf_len,
secure_offset);
if (done != krb5->conf_len)
goto error;
}
/* We may need to pad out to the crypto blocksize. */
if (pad_len) {
done = sg_zero_buffer(sg, nr_sg, pad_len, data_offset + data_len);
if (done != pad_len)
goto error;
}
/* Hash and encrypt the message. */
req = buffer;
iv = buffer + krb5_aead_size(aead);
aead_request_set_tfm(req, aead);
aead_request_set_callback(req, 0, NULL, NULL);
aead_request_set_crypt(req, sg, sg, secure_len, iv);
ret = crypto_aead_encrypt(req);
if (ret < 0)
goto error;
ret = secure_len + krb5->cksum_len;
error:
kfree_sensitive(buffer);
return ret;
}
/*
* Apply decryption and checksumming functions to a message. The offset and
* length are updated to reflect the actual content of the encrypted region.
*/
int krb5_aead_decrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len)
{
struct aead_request *req;
size_t bsize;
void *buffer;
int ret;
u8 *iv;
if (WARN_ON(*_offset != 0))
return -EINVAL; /* Can't set offset on aead */
if (*_len < krb5->conf_len + krb5->cksum_len)
return -EPROTO;
bsize = krb5_aead_size(aead) +
krb5_aead_ivsize(aead);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
/* Decrypt the message and verify its checksum. */
req = buffer;
iv = buffer + krb5_aead_size(aead);
aead_request_set_tfm(req, aead);
aead_request_set_callback(req, 0, NULL, NULL);
aead_request_set_crypt(req, sg, sg, *_len, iv);
ret = crypto_aead_decrypt(req);
if (ret < 0)
goto error;
/* Adjust the boundaries of the data. */
*_offset += krb5->conf_len;
*_len -= krb5->conf_len + krb5->cksum_len;
ret = 0;
error:
kfree_sensitive(buffer);
return ret;
}
/*
* Generate a checksum over some metadata and part of an skbuff and insert the
* MIC into the skbuff immediately prior to the data.
*/
ssize_t rfc3961_get_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg, size_t sg_len,
size_t data_offset, size_t data_len)
{
struct shash_desc *desc;
ssize_t ret, done;
size_t bsize;
void *buffer, *digest;
if (WARN_ON(data_offset != krb5->cksum_len))
return -EMSGSIZE;
bsize = krb5_shash_size(shash) +
krb5_digest_size(shash);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
/* Calculate the MIC with key Kc and store it into the skb */
desc = buffer;
desc->tfm = shash;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
if (metadata) {
ret = crypto_shash_update(desc, metadata->data, metadata->len);
if (ret < 0)
goto error;
}
ret = crypto_shash_update_sg(desc, sg, data_offset, data_len);
if (ret < 0)
goto error;
digest = buffer + krb5_shash_size(shash);
ret = crypto_shash_final(desc, digest);
if (ret < 0)
goto error;
ret = -EFAULT;
done = sg_pcopy_from_buffer(sg, nr_sg, digest, krb5->cksum_len,
data_offset - krb5->cksum_len);
if (done != krb5->cksum_len)
goto error;
ret = krb5->cksum_len + data_len;
error:
kfree_sensitive(buffer);
return ret;
}
/*
* Check the MIC on a region of an skbuff. The offset and length are updated
* to reflect the actual content of the secure region.
*/
int rfc3961_verify_mic(const struct krb5_enctype *krb5,
struct crypto_shash *shash,
const struct krb5_buffer *metadata,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len)
{
struct shash_desc *desc;
ssize_t done;
size_t bsize, data_offset, data_len, offset = *_offset, len = *_len;
void *buffer = NULL;
int ret;
u8 *cksum, *cksum2;
if (len < krb5->cksum_len)
return -EPROTO;
data_offset = offset + krb5->cksum_len;
data_len = len - krb5->cksum_len;
bsize = krb5_shash_size(shash) +
krb5_digest_size(shash) * 2;
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
cksum = buffer +
krb5_shash_size(shash);
cksum2 = buffer +
krb5_shash_size(shash) +
krb5_digest_size(shash);
/* Calculate the MIC */
desc = buffer;
desc->tfm = shash;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
if (metadata) {
ret = crypto_shash_update(desc, metadata->data, metadata->len);
if (ret < 0)
goto error;
}
ret = crypto_shash_update_sg(desc, sg, data_offset, data_len);
if (ret < 0)
goto error;
ret = crypto_shash_final(desc, cksum);
if (ret < 0)
goto error;
ret = -EFAULT;
done = sg_pcopy_to_buffer(sg, nr_sg, cksum2, krb5->cksum_len, offset);
if (done != krb5->cksum_len)
goto error;
if (memcmp(cksum, cksum2, krb5->cksum_len) != 0) {
ret = -EBADMSG;
goto error;
}
*_offset += krb5->cksum_len;
*_len -= krb5->cksum_len;
ret = 0;
error:
kfree_sensitive(buffer);
return ret;
}
const struct krb5_crypto_profile rfc3961_simplified_profile = {
.calc_PRF = rfc3961_calc_PRF,
.calc_Kc = rfc3961_calc_DK,
.calc_Ke = rfc3961_calc_DK,
.calc_Ki = rfc3961_calc_DK,
.derive_encrypt_keys = authenc_derive_encrypt_keys,
.load_encrypt_keys = authenc_load_encrypt_keys,
.derive_checksum_key = rfc3961_derive_checksum_key,
.load_checksum_key = rfc3961_load_checksum_key,
.encrypt = krb5_aead_encrypt,
.decrypt = krb5_aead_decrypt,
.get_mic = rfc3961_get_mic,
.verify_mic = rfc3961_verify_mic,
};

crypto/krb5/rfc3962_aes.c:
// SPDX-License-Identifier: BSD-3-Clause
/* rfc3962 Advanced Encryption Standard (AES) Encryption for Kerberos 5
*
* Parts borrowed from net/sunrpc/auth_gss/.
*/
/*
* COPYRIGHT (c) 2008
* The Regents of the University of Michigan
* ALL RIGHTS RESERVED
*
* Permission is granted to use, copy, create derivative works
* and redistribute this software and such derivative works
* for any purpose, so long as the name of The University of
* Michigan is not used in any advertising or publicity
* pertaining to the use of distribution of this software
* without specific, written prior authorization. If the
* above copyright notice or any other identification of the
* University of Michigan is included in any copy of any
* portion of this software, then the disclaimer below must
* also be included.
*
* THIS SOFTWARE IS PROVIDED AS IS, WITHOUT REPRESENTATION
* FROM THE UNIVERSITY OF MICHIGAN AS TO ITS FITNESS FOR ANY
* PURPOSE, AND WITHOUT WARRANTY BY THE UNIVERSITY OF
* MICHIGAN OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING
* WITHOUT LIMITATION THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
* REGENTS OF THE UNIVERSITY OF MICHIGAN SHALL NOT BE LIABLE
* FOR ANY DAMAGES, INCLUDING SPECIAL, INDIRECT, INCIDENTAL, OR
* CONSEQUENTIAL DAMAGES, WITH RESPECT TO ANY CLAIM ARISING
* OUT OF OR IN CONNECTION WITH THE USE OF THE SOFTWARE, EVEN
* IF IT HAS BEEN OR IS HEREAFTER ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGES.
*/
/*
* Copyright (C) 1998 by the FundsXpress, INC.
*
* All rights reserved.
*
* Export of this software from the United States of America may require
* a specific license from the United States Government. It is the
* responsibility of any person or organization contemplating export to
* obtain such a license before exporting.
*
* WITHIN THAT CONSTRAINT, permission to use, copy, modify, and
* distribute this software and its documentation for any purpose and
* without fee is hereby granted, provided that the above copyright
* notice appear in all copies and that both that copyright notice and
* this permission notice appear in supporting documentation, and that
* the name of FundsXpress. not be used in advertising or publicity pertaining
* to distribution of the software without specific, written prior
* permission. FundsXpress makes no representations about the suitability of
* this software for any purpose. It is provided "as is" without express
* or implied warranty.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
* WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*/
/*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "internal.h"
const struct krb5_enctype krb5_aes128_cts_hmac_sha1_96 = {
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA1_96,
.ctype = KRB5_CKSUMTYPE_HMAC_SHA1_96_AES128,
.name = "aes128-cts-hmac-sha1-96",
.encrypt_name = "krb5enc(hmac(sha1),cts(cbc(aes)))",
.cksum_name = "hmac(sha1)",
.hash_name = "sha1",
.derivation_enc = "cts(cbc(aes))",
.key_bytes = 16,
.key_len = 16,
.Kc_len = 16,
.Ke_len = 16,
.Ki_len = 16,
.block_len = 16,
.conf_len = 16,
.cksum_len = 12,
.hash_len = 20,
.prf_len = 16,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc3961_simplified_profile,
};
const struct krb5_enctype krb5_aes256_cts_hmac_sha1_96 = {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA1_96,
.ctype = KRB5_CKSUMTYPE_HMAC_SHA1_96_AES256,
.name = "aes256-cts-hmac-sha1-96",
.encrypt_name = "krb5enc(hmac(sha1),cts(cbc(aes)))",
.cksum_name = "hmac(sha1)",
.hash_name = "sha1",
.derivation_enc = "cts(cbc(aes))",
.key_bytes = 32,
.key_len = 32,
.Kc_len = 32,
.Ke_len = 32,
.Ki_len = 32,
.block_len = 16,
.conf_len = 16,
.cksum_len = 12,
.hash_len = 20,
.prf_len = 16,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc3961_simplified_profile,
};

// SPDX-License-Identifier: GPL-2.0-or-later
/* rfc6803 Camellia Encryption for Kerberos 5
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/slab.h>
#include "internal.h"
/*
* Calculate the key derivation function KDF-FEEDBACK_CMAC(key, constant)
*
* n = ceiling(k / 128)
* K(0) = zeros
* K(i) = CMAC(key, K(i-1) | i | constant | 0x00 | k)
* DR(key, constant) = k-truncate(K(1) | K(2) | ... | K(n))
* KDF-FEEDBACK-CMAC(key, constant) = random-to-key(DR(key, constant))
*
* [rfc6803 sec 3]
*/
static int rfc6803_calc_KDF_FEEDBACK_CMAC(const struct krb5_enctype *krb5,
const struct krb5_buffer *key,
const struct krb5_buffer *constant,
struct krb5_buffer *result,
gfp_t gfp)
{
struct crypto_shash *shash;
struct krb5_buffer K, data;
struct shash_desc *desc;
__be32 tmp;
size_t bsize, offset, seg;
void *buffer;
u32 i = 0, k = result->len * 8;
u8 *p;
int ret = -ENOMEM;
shash = crypto_alloc_shash(krb5->cksum_name, 0, 0);
if (IS_ERR(shash))
return (PTR_ERR(shash) == -ENOENT) ? -ENOPKG : PTR_ERR(shash);
ret = crypto_shash_setkey(shash, key->data, key->len);
if (ret < 0)
goto error_shash;
ret = -ENOMEM;
K.len = crypto_shash_digestsize(shash);
data.len = K.len + 4 + constant->len + 1 + 4;
bsize = krb5_shash_size(shash) +
krb5_digest_size(shash) +
crypto_roundup(K.len) +
crypto_roundup(data.len);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
goto error_shash;
desc = buffer;
desc->tfm = shash;
K.data = buffer +
krb5_shash_size(shash) +
krb5_digest_size(shash);
data.data = buffer +
krb5_shash_size(shash) +
krb5_digest_size(shash) +
crypto_roundup(K.len);
p = data.data + K.len + 4;
memcpy(p, constant->data, constant->len);
p += constant->len;
*p++ = 0x00;
tmp = htonl(k);
memcpy(p, &tmp, 4);
p += 4;
ret = -EINVAL;
if (WARN_ON(p - (u8 *)data.data != data.len))
goto error;
offset = 0;
do {
i++;
p = data.data;
memcpy(p, K.data, K.len);
p += K.len;
*(__be32 *)p = htonl(i);
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
ret = crypto_shash_finup(desc, data.data, data.len, K.data);
if (ret < 0)
goto error;
seg = min_t(size_t, result->len - offset, K.len);
memcpy(result->data + offset, K.data, seg);
offset += seg;
} while (offset < result->len);
error:
kfree_sensitive(buffer);
error_shash:
crypto_free_shash(shash);
return ret;
}
/*
* Calculate the pseudo-random function, PRF().
*
* Kp = KDF-FEEDBACK-CMAC(protocol-key, "prf")
* PRF = CMAC(Kp, octet-string)
* [rfc6803 sec 6]
*/
static int rfc6803_calc_PRF(const struct krb5_enctype *krb5,
const struct krb5_buffer *protocol_key,
const struct krb5_buffer *octet_string,
struct krb5_buffer *result,
gfp_t gfp)
{
static const struct krb5_buffer prfconstant = { 3, "prf" };
struct crypto_shash *shash;
struct krb5_buffer Kp;
struct shash_desc *desc;
size_t bsize;
void *buffer;
int ret;
Kp.len = krb5->prf_len;
shash = crypto_alloc_shash(krb5->cksum_name, 0, 0);
if (IS_ERR(shash))
return (PTR_ERR(shash) == -ENOENT) ? -ENOPKG : PTR_ERR(shash);
ret = -EINVAL;
if (result->len != crypto_shash_digestsize(shash))
goto out_shash;
ret = -ENOMEM;
bsize = krb5_shash_size(shash) +
krb5_digest_size(shash) +
crypto_roundup(Kp.len);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
goto out_shash;
Kp.data = buffer +
krb5_shash_size(shash) +
krb5_digest_size(shash);
ret = rfc6803_calc_KDF_FEEDBACK_CMAC(krb5, protocol_key, &prfconstant,
&Kp, gfp);
if (ret < 0)
goto out;
ret = crypto_shash_setkey(shash, Kp.data, Kp.len);
if (ret < 0)
goto out;
desc = buffer;
desc->tfm = shash;
ret = crypto_shash_init(desc);
if (ret < 0)
goto out;
ret = crypto_shash_finup(desc, octet_string->data, octet_string->len,
result->data);
out:
kfree_sensitive(buffer);
out_shash:
crypto_free_shash(shash);
return ret;
}
static const struct krb5_crypto_profile rfc6803_crypto_profile = {
.calc_PRF = rfc6803_calc_PRF,
.calc_Kc = rfc6803_calc_KDF_FEEDBACK_CMAC,
.calc_Ke = rfc6803_calc_KDF_FEEDBACK_CMAC,
.calc_Ki = rfc6803_calc_KDF_FEEDBACK_CMAC,
.derive_encrypt_keys = authenc_derive_encrypt_keys,
.load_encrypt_keys = authenc_load_encrypt_keys,
.derive_checksum_key = rfc3961_derive_checksum_key,
.load_checksum_key = rfc3961_load_checksum_key,
.encrypt = krb5_aead_encrypt,
.decrypt = krb5_aead_decrypt,
.get_mic = rfc3961_get_mic,
.verify_mic = rfc3961_verify_mic,
};
const struct krb5_enctype krb5_camellia128_cts_cmac = {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.ctype = KRB5_CKSUMTYPE_CMAC_CAMELLIA128,
.name = "camellia128-cts-cmac",
.encrypt_name = "krb5enc(cmac(camellia),cts(cbc(camellia)))",
.cksum_name = "cmac(camellia)",
.hash_name = NULL,
.derivation_enc = "cts(cbc(camellia))",
.key_bytes = 16,
.key_len = 16,
.Kc_len = 16,
.Ke_len = 16,
.Ki_len = 16,
.block_len = 16,
.conf_len = 16,
.cksum_len = 16,
.hash_len = 16,
.prf_len = 16,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc6803_crypto_profile,
};
const struct krb5_enctype krb5_camellia256_cts_cmac = {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.ctype = KRB5_CKSUMTYPE_CMAC_CAMELLIA256,
.name = "camellia256-cts-cmac",
.encrypt_name = "krb5enc(cmac(camellia),cts(cbc(camellia)))",
.cksum_name = "cmac(camellia)",
.hash_name = NULL,
.derivation_enc = "cts(cbc(camellia))",
.key_bytes = 32,
.key_len = 32,
.Kc_len = 32,
.Ke_len = 32,
.Ki_len = 32,
.block_len = 16,
.conf_len = 16,
.cksum_len = 16,
.hash_len = 16,
.prf_len = 16,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc6803_crypto_profile,
};

crypto/krb5/rfc8009_aes2.c:
// SPDX-License-Identifier: GPL-2.0-or-later
/* rfc8009 AES Encryption with HMAC-SHA2 for Kerberos 5
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/slab.h>
#include <crypto/authenc.h>
#include "internal.h"
static const struct krb5_buffer rfc8009_no_context = { .len = 0, .data = "" };
/*
* Calculate the key derivation function KDF-HMAC-SHA2(key, label, [context,] k)
*
* KDF-HMAC-SHA2(key, label, [context,] k) = k-truncate(K1)
*
* Using the appropriate one of:
* K1 = HMAC-SHA-256(key, 0x00000001 | label | 0x00 | k)
* K1 = HMAC-SHA-384(key, 0x00000001 | label | 0x00 | k)
* K1 = HMAC-SHA-256(key, 0x00000001 | label | 0x00 | context | k)
* K1 = HMAC-SHA-384(key, 0x00000001 | label | 0x00 | context | k)
* [rfc8009 sec 3]
*/
static int rfc8009_calc_KDF_HMAC_SHA2(const struct krb5_enctype *krb5,
const struct krb5_buffer *key,
const struct krb5_buffer *label,
const struct krb5_buffer *context,
unsigned int k,
struct krb5_buffer *result,
gfp_t gfp)
{
struct crypto_shash *shash;
struct krb5_buffer K1, data;
struct shash_desc *desc;
__be32 tmp;
size_t bsize;
void *buffer;
u8 *p;
int ret = -ENOMEM;
if (WARN_ON(result->len != k / 8))
return -EINVAL;
shash = crypto_alloc_shash(krb5->cksum_name, 0, 0);
if (IS_ERR(shash))
return (PTR_ERR(shash) == -ENOENT) ? -ENOPKG : PTR_ERR(shash);
ret = crypto_shash_setkey(shash, key->data, key->len);
if (ret < 0)
goto error_shash;
ret = -EINVAL;
if (WARN_ON(crypto_shash_digestsize(shash) * 8 < k))
goto error_shash;
ret = -ENOMEM;
data.len = 4 + label->len + 1 + context->len + 4;
bsize = krb5_shash_size(shash) +
krb5_digest_size(shash) +
crypto_roundup(data.len);
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
goto error_shash;
desc = buffer;
desc->tfm = shash;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
p = data.data = buffer +
krb5_shash_size(shash) +
krb5_digest_size(shash);
*(__be32 *)p = htonl(0x00000001);
p += 4;
memcpy(p, label->data, label->len);
p += label->len;
*p++ = 0;
memcpy(p, context->data, context->len);
p += context->len;
tmp = htonl(k);
memcpy(p, &tmp, 4);
p += 4;
ret = -EINVAL;
if (WARN_ON(p - (u8 *)data.data != data.len))
goto error;
K1.len = crypto_shash_digestsize(shash);
K1.data = buffer +
krb5_shash_size(shash);
ret = crypto_shash_finup(desc, data.data, data.len, K1.data);
if (ret < 0)
goto error;
memcpy(result->data, K1.data, result->len);
error:
kfree_sensitive(buffer);
error_shash:
crypto_free_shash(shash);
return ret;
}
/*
* Calculate the pseudo-random function, PRF().
*
* PRF = KDF-HMAC-SHA2(input-key, "prf", octet-string, 256)
* PRF = KDF-HMAC-SHA2(input-key, "prf", octet-string, 384)
*
* The "prfconstant" used in the PRF operation is the three-octet string
* "prf".
* [rfc8009 sec 5]
*/
static int rfc8009_calc_PRF(const struct krb5_enctype *krb5,
const struct krb5_buffer *input_key,
const struct krb5_buffer *octet_string,
struct krb5_buffer *result,
gfp_t gfp)
{
static const struct krb5_buffer prfconstant = { 3, "prf" };
return rfc8009_calc_KDF_HMAC_SHA2(krb5, input_key, &prfconstant,
octet_string, krb5->prf_len * 8,
result, gfp);
}
/*
* Derive Ke.
* Ke = KDF-HMAC-SHA2(base-key, usage | 0xAA, 128)
* Ke = KDF-HMAC-SHA2(base-key, usage | 0xAA, 256)
* [rfc8009 sec 5]
*/
static int rfc8009_calc_Ke(const struct krb5_enctype *krb5,
const struct krb5_buffer *base_key,
const struct krb5_buffer *usage_constant,
struct krb5_buffer *result,
gfp_t gfp)
{
return rfc8009_calc_KDF_HMAC_SHA2(krb5, base_key, usage_constant,
&rfc8009_no_context, krb5->key_bytes * 8,
result, gfp);
}
/*
* Derive Kc/Ki
* Kc = KDF-HMAC-SHA2(base-key, usage | 0x99, 128)
* Ki = KDF-HMAC-SHA2(base-key, usage | 0x55, 128)
* Kc = KDF-HMAC-SHA2(base-key, usage | 0x99, 192)
* Ki = KDF-HMAC-SHA2(base-key, usage | 0x55, 192)
* [rfc8009 sec 5]
*/
static int rfc8009_calc_Ki(const struct krb5_enctype *krb5,
const struct krb5_buffer *base_key,
const struct krb5_buffer *usage_constant,
struct krb5_buffer *result,
gfp_t gfp)
{
return rfc8009_calc_KDF_HMAC_SHA2(krb5, base_key, usage_constant,
&rfc8009_no_context, krb5->cksum_len * 8,
result, gfp);
}
/*
* Apply encryption and checksumming functions to a message. Unlike for
* RFC3961, for RFC8009, we have to chuck the starting IV into the hash first.
*/
static ssize_t rfc8009_encrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg, size_t sg_len,
size_t data_offset, size_t data_len,
bool preconfounded)
{
struct aead_request *req;
struct scatterlist bsg[2];
ssize_t ret, done;
size_t bsize, base_len, secure_offset, secure_len, pad_len, cksum_offset;
void *buffer;
u8 *iv, *ad;
if (WARN_ON(data_offset != krb5->conf_len))
return -EINVAL; /* Data is in wrong place */
secure_offset = 0;
base_len = krb5->conf_len + data_len;
pad_len = 0;
secure_len = base_len + pad_len;
cksum_offset = secure_len;
if (WARN_ON(cksum_offset + krb5->cksum_len > sg_len))
return -EFAULT;
bsize = krb5_aead_size(aead) +
krb5_aead_ivsize(aead) * 2;
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
req = buffer;
iv = buffer + krb5_aead_size(aead);
ad = buffer + krb5_aead_size(aead) + krb5_aead_ivsize(aead);
/* Insert the confounder into the buffer */
ret = -EFAULT;
if (!preconfounded) {
get_random_bytes(buffer, krb5->conf_len);
done = sg_pcopy_from_buffer(sg, nr_sg, buffer, krb5->conf_len,
secure_offset);
if (done != krb5->conf_len)
goto error;
}
/* We may need to pad out to the crypto blocksize. */
if (pad_len) {
done = sg_zero_buffer(sg, nr_sg, pad_len, data_offset + data_len);
if (done != pad_len)
goto error;
}
/* We need to include the starting IV in the hash. */
sg_init_table(bsg, 2);
sg_set_buf(&bsg[0], ad, krb5_aead_ivsize(aead));
sg_chain(bsg, 2, sg);
/* Hash and encrypt the message. */
aead_request_set_tfm(req, aead);
aead_request_set_callback(req, 0, NULL, NULL);
aead_request_set_ad(req, krb5_aead_ivsize(aead));
aead_request_set_crypt(req, bsg, bsg, secure_len, iv);
ret = crypto_aead_encrypt(req);
if (ret < 0)
goto error;
ret = secure_len + krb5->cksum_len;
error:
kfree_sensitive(buffer);
return ret;
}
/*
* Apply decryption and checksumming functions to a message. Unlike for
* RFC3961, for RFC8009, we have to chuck the starting IV into the hash first.
*
* The offset and length are updated to reflect the actual content of the
* encrypted region.
*/
static int rfc8009_decrypt(const struct krb5_enctype *krb5,
struct crypto_aead *aead,
struct scatterlist *sg, unsigned int nr_sg,
size_t *_offset, size_t *_len)
{
struct aead_request *req;
struct scatterlist bsg[2];
size_t bsize;
void *buffer;
int ret;
u8 *iv, *ad;
if (WARN_ON(*_offset != 0))
return -EINVAL; /* Can't set offset on aead */
if (*_len < krb5->conf_len + krb5->cksum_len)
return -EPROTO;
bsize = krb5_aead_size(aead) +
krb5_aead_ivsize(aead) * 2;
buffer = kzalloc(bsize, GFP_NOFS);
if (!buffer)
return -ENOMEM;
req = buffer;
iv = buffer + krb5_aead_size(aead);
ad = buffer + krb5_aead_size(aead) + krb5_aead_ivsize(aead);
/* We need to include the starting IV in the hash. */
sg_init_table(bsg, 2);
sg_set_buf(&bsg[0], ad, krb5_aead_ivsize(aead));
sg_chain(bsg, 2, sg);
/* Decrypt the message and verify its checksum. */
aead_request_set_tfm(req, aead);
aead_request_set_callback(req, 0, NULL, NULL);
aead_request_set_ad(req, krb5_aead_ivsize(aead));
aead_request_set_crypt(req, bsg, bsg, *_len, iv);
ret = crypto_aead_decrypt(req);
if (ret < 0)
goto error;
/* Adjust the boundaries of the data. */
*_offset += krb5->conf_len;
*_len -= krb5->conf_len + krb5->cksum_len;
ret = 0;
error:
kfree_sensitive(buffer);
return ret;
}
static const struct krb5_crypto_profile rfc8009_crypto_profile = {
.calc_PRF = rfc8009_calc_PRF,
.calc_Kc = rfc8009_calc_Ki,
.calc_Ke = rfc8009_calc_Ke,
.calc_Ki = rfc8009_calc_Ki,
.derive_encrypt_keys = authenc_derive_encrypt_keys,
.load_encrypt_keys = authenc_load_encrypt_keys,
.derive_checksum_key = rfc3961_derive_checksum_key,
.load_checksum_key = rfc3961_load_checksum_key,
.encrypt = rfc8009_encrypt,
.decrypt = rfc8009_decrypt,
.get_mic = rfc3961_get_mic,
.verify_mic = rfc3961_verify_mic,
};
const struct krb5_enctype krb5_aes128_cts_hmac_sha256_128 = {
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.ctype = KRB5_CKSUMTYPE_HMAC_SHA256_128_AES128,
.name = "aes128-cts-hmac-sha256-128",
.encrypt_name = "authenc(hmac(sha256),cts(cbc(aes)))",
.cksum_name = "hmac(sha256)",
.hash_name = "sha256",
.derivation_enc = "cts(cbc(aes))",
.key_bytes = 16,
.key_len = 16,
.Kc_len = 16,
.Ke_len = 16,
.Ki_len = 16,
.block_len = 16,
.conf_len = 16,
.cksum_len = 16,
.hash_len = 20,
.prf_len = 32,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc8009_crypto_profile,
};
const struct krb5_enctype krb5_aes256_cts_hmac_sha384_192 = {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.ctype = KRB5_CKSUMTYPE_HMAC_SHA384_192_AES256,
.name = "aes256-cts-hmac-sha384-192",
.encrypt_name = "authenc(hmac(sha384),cts(cbc(aes)))",
.cksum_name = "hmac(sha384)",
.hash_name = "sha384",
.derivation_enc = "cts(cbc(aes))",
.key_bytes = 32,
.key_len = 32,
.Kc_len = 24,
.Ke_len = 32,
.Ki_len = 24,
.block_len = 16,
.conf_len = 16,
.cksum_len = 24,
.hash_len = 20,
.prf_len = 48,
.keyed_cksum = true,
.random_to_key = NULL, /* Identity */
.profile = &rfc8009_crypto_profile,
};

crypto/krb5/selftest.c:
// SPDX-License-Identifier: GPL-2.0-or-later
/* Kerberos library self-testing
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/slab.h>
#include <crypto/skcipher.h>
#include <crypto/hash.h>
#include "internal.h"
#define VALID(X) \
({ \
bool __x = (X); \
if (__x) { \
pr_warn("!!! TESTINVAL %s:%u\n", __FILE__, __LINE__); \
ret = -EBADMSG; \
} \
__x; \
})
#define CHECK(X) \
({ \
bool __x = (X); \
if (__x) { \
pr_warn("!!! TESTFAIL %s:%u\n", __FILE__, __LINE__); \
ret = -EBADMSG; \
} \
__x; \
})
enum which_key {
TEST_KC, TEST_KE, TEST_KI,
};
#if 0
static void dump_sg(struct scatterlist *sg, unsigned int limit)
{
unsigned int index = 0, n = 0;
for (; sg && limit > 0; sg = sg_next(sg)) {
unsigned int off = sg->offset, len = umin(sg->length, limit);
const void *p = kmap_local_page(sg_page(sg));
limit -= len;
while (len > 0) {
unsigned int part = umin(len, 32);
pr_notice("[%x] %04x: %*phN\n", n, index, part, p + off);
index += part;
off += part;
len -= part;
}
kunmap_local(p);
n++;
}
}
#endif
static int prep_buf(struct krb5_buffer *buf)
{
buf->data = kmalloc(buf->len, GFP_KERNEL);
if (!buf->data)
return -ENOMEM;
return 0;
}
#define PREP_BUF(BUF, LEN) \
do { \
(BUF)->len = (LEN); \
ret = prep_buf((BUF)); \
if (ret < 0) \
goto out; \
} while (0)
static int load_buf(struct krb5_buffer *buf, const char *from)
{
size_t len = strlen(from);
int ret;
if (len > 1 && from[0] == '\'') {
PREP_BUF(buf, len - 1);
memcpy(buf->data, from + 1, len - 1);
ret = 0;
goto out;
}
if (VALID(len & 1))
return -EINVAL;
PREP_BUF(buf, len / 2);
ret = hex2bin(buf->data, from, buf->len);
if (ret < 0) {
VALID(1);
goto out;
}
out:
return ret;
}
#define LOAD_BUF(BUF, FROM) do { ret = load_buf(BUF, FROM); if (ret < 0) goto out; } while (0)
static void clear_buf(struct krb5_buffer *buf)
{
kfree(buf->data);
buf->len = 0;
buf->data = NULL;
}
/*
* Perform a pseudo-random function check.
*/
static int krb5_test_one_prf(const struct krb5_prf_test *test)
{
const struct krb5_enctype *krb5 = crypto_krb5_find_enctype(test->etype);
struct krb5_buffer key = {}, octet = {}, result = {}, prf = {};
int ret;
if (!krb5)
return -EOPNOTSUPP;
pr_notice("Running %s %s\n", krb5->name, test->name);
LOAD_BUF(&key, test->key);
LOAD_BUF(&octet, test->octet);
LOAD_BUF(&prf, test->prf);
PREP_BUF(&result, krb5->prf_len);
if (VALID(result.len != prf.len)) {
ret = -EINVAL;
goto out;
}
ret = krb5->profile->calc_PRF(krb5, &key, &octet, &result, GFP_KERNEL);
if (ret < 0) {
CHECK(1);
pr_warn("PRF calculation failed %d\n", ret);
goto out;
}
if (memcmp(result.data, prf.data, result.len) != 0) {
CHECK(1);
ret = -EKEYREJECTED;
goto out;
}
ret = 0;
out:
clear_buf(&result);
clear_buf(&octet);
clear_buf(&key);
return ret;
}
/*
* Perform a key derivation check.
*/
static int krb5_test_key(const struct krb5_enctype *krb5,
const struct krb5_buffer *base_key,
const struct krb5_key_test_one *test,
enum which_key which)
{
struct krb5_buffer key = {}, result = {};
int ret;
LOAD_BUF(&key, test->key);
PREP_BUF(&result, key.len);
switch (which) {
case TEST_KC:
ret = krb5_derive_Kc(krb5, base_key, test->use, &result, GFP_KERNEL);
break;
case TEST_KE:
ret = krb5_derive_Ke(krb5, base_key, test->use, &result, GFP_KERNEL);
break;
case TEST_KI:
ret = krb5_derive_Ki(krb5, base_key, test->use, &result, GFP_KERNEL);
break;
default:
VALID(1);
ret = -EINVAL;
goto out;
}
if (ret < 0) {
CHECK(1);
pr_warn("Key derivation failed %d\n", ret);
goto out;
}
if (memcmp(result.data, key.data, result.len) != 0) {
CHECK(1);
ret = -EKEYREJECTED;
goto out;
}
out:
clear_buf(&key);
clear_buf(&result);
return ret;
}
static int krb5_test_one_key(const struct krb5_key_test *test)
{
const struct krb5_enctype *krb5 = crypto_krb5_find_enctype(test->etype);
struct krb5_buffer base_key = {};
int ret;
if (!krb5)
return -EOPNOTSUPP;
pr_notice("Running %s %s\n", krb5->name, test->name);
LOAD_BUF(&base_key, test->key);
ret = krb5_test_key(krb5, &base_key, &test->Kc, TEST_KC);
if (ret < 0)
goto out;
ret = krb5_test_key(krb5, &base_key, &test->Ke, TEST_KE);
if (ret < 0)
goto out;
ret = krb5_test_key(krb5, &base_key, &test->Ki, TEST_KI);
if (ret < 0)
goto out;
out:
clear_buf(&base_key);
return ret;
}
/*
* Perform an encryption test.
*/
static int krb5_test_one_enc(const struct krb5_enc_test *test, void *buf)
{
const struct krb5_enctype *krb5 = crypto_krb5_find_enctype(test->etype);
struct crypto_aead *ci = NULL;
struct krb5_buffer K0 = {}, Ke = {}, Ki = {}, keys = {};
struct krb5_buffer conf = {}, plain = {}, ct = {};
struct scatterlist sg[1];
size_t data_len, data_offset, message_len;
int ret;
if (!krb5)
return -EOPNOTSUPP;
pr_notice("Running %s %s\n", krb5->name, test->name);
/* Load the test data into binary buffers. */
LOAD_BUF(&conf, test->conf);
LOAD_BUF(&plain, test->plain);
LOAD_BUF(&ct, test->ct);
if (test->K0) {
LOAD_BUF(&K0, test->K0);
} else {
LOAD_BUF(&Ke, test->Ke);
LOAD_BUF(&Ki, test->Ki);
ret = krb5->profile->load_encrypt_keys(krb5, &Ke, &Ki, &keys, GFP_KERNEL);
if (ret < 0)
goto out;
}
if (VALID(conf.len != krb5->conf_len) ||
VALID(ct.len != krb5->conf_len + plain.len + krb5->cksum_len))
goto out;
data_len = plain.len;
message_len = crypto_krb5_how_much_buffer(krb5, KRB5_ENCRYPT_MODE,
data_len, &data_offset);
if (CHECK(message_len != ct.len)) {
pr_warn("Encrypted length mismatch %zu != %u\n", message_len, ct.len);
goto out;
}
if (CHECK(data_offset != conf.len)) {
pr_warn("Data offset mismatch %zu != %u\n", data_offset, conf.len);
goto out;
}
memcpy(buf, conf.data, conf.len);
memcpy(buf + data_offset, plain.data, plain.len);
/* Allocate a crypto object and set its key. */
if (test->K0)
ci = crypto_krb5_prepare_encryption(krb5, &K0, test->usage, GFP_KERNEL);
else
ci = krb5_prepare_encryption(krb5, &keys, GFP_KERNEL);
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
ci = NULL;
pr_err("Couldn't alloc AEAD %s: %d\n", krb5->encrypt_name, ret);
goto out;
}
/* Encrypt the message. */
sg_init_one(sg, buf, message_len);
ret = crypto_krb5_encrypt(krb5, ci, sg, 1, message_len,
data_offset, data_len, true);
if (ret < 0) {
CHECK(1);
pr_warn("Encryption failed %d\n", ret);
goto out;
}
if (ret != message_len) {
CHECK(1);
pr_warn("Encrypted message wrong size %x != %zx\n", ret, message_len);
goto out;
}
if (memcmp(buf, ct.data, ct.len) != 0) {
CHECK(1);
pr_warn("Ciphertext mismatch\n");
pr_warn("BUF %*phN\n", ct.len, buf);
pr_warn("CT %*phN\n", ct.len, ct.data);
pr_warn("PT %*phN%*phN\n", conf.len, conf.data, plain.len, plain.data);
ret = -EKEYREJECTED;
goto out;
}
/* Decrypt the encrypted message. */
data_offset = 0;
data_len = message_len;
ret = crypto_krb5_decrypt(krb5, ci, sg, 1, &data_offset, &data_len);
if (ret < 0) {
CHECK(1);
pr_warn("Decryption failed %d\n", ret);
goto out;
}
if (CHECK(data_offset != conf.len) ||
CHECK(data_len != plain.len))
goto out;
if (memcmp(buf, conf.data, conf.len) != 0) {
CHECK(1);
pr_warn("Confounder mismatch\n");
pr_warn("ENC %*phN\n", conf.len, buf);
pr_warn("DEC %*phN\n", conf.len, conf.data);
ret = -EKEYREJECTED;
goto out;
}
if (memcmp(buf + conf.len, plain.data, plain.len) != 0) {
CHECK(1);
pr_warn("Plaintext mismatch\n");
pr_warn("BUF %*phN\n", plain.len, buf + conf.len);
pr_warn("PT %*phN\n", plain.len, plain.data);
ret = -EKEYREJECTED;
goto out;
}
ret = 0;
out:
clear_buf(&ct);
clear_buf(&plain);
clear_buf(&conf);
clear_buf(&keys);
clear_buf(&Ki);
clear_buf(&Ke);
clear_buf(&K0);
if (ci)
crypto_free_aead(ci);
return ret;
}
/*
* Perform a checksum test.
*/
static int krb5_test_one_mic(const struct krb5_mic_test *test, void *buf)
{
const struct krb5_enctype *krb5 = crypto_krb5_find_enctype(test->etype);
struct crypto_shash *ci = NULL;
struct scatterlist sg[1];
struct krb5_buffer K0 = {}, Kc = {}, keys = {}, plain = {}, mic = {};
size_t offset, len, message_len;
int ret;
if (!krb5)
return -EOPNOTSUPP;
pr_notice("Running %s %s\n", krb5->name, test->name);
/* Allocate a crypto object and set its key. */
if (test->K0) {
LOAD_BUF(&K0, test->K0);
ci = crypto_krb5_prepare_checksum(krb5, &K0, test->usage, GFP_KERNEL);
} else {
LOAD_BUF(&Kc, test->Kc);
ret = krb5->profile->load_checksum_key(krb5, &Kc, &keys, GFP_KERNEL);
if (ret < 0)
goto out;
ci = krb5_prepare_checksum(krb5, &Kc, GFP_KERNEL);
}
if (IS_ERR(ci)) {
ret = PTR_ERR(ci);
ci = NULL;
pr_err("Couldn't alloc shash %s: %d\n", krb5->cksum_name, ret);
goto out;
}
/* Load the test data into binary buffers. */
LOAD_BUF(&plain, test->plain);
LOAD_BUF(&mic, test->mic);
len = plain.len;
message_len = crypto_krb5_how_much_buffer(krb5, KRB5_CHECKSUM_MODE,
len, &offset);
if (CHECK(message_len != mic.len + plain.len)) {
pr_warn("MIC length mismatch %zu != %u\n",
message_len, mic.len + plain.len);
goto out;
}
memcpy(buf + offset, plain.data, plain.len);
/* Generate a MIC generation request. */
sg_init_one(sg, buf, 1024);
ret = crypto_krb5_get_mic(krb5, ci, NULL, sg, 1, 1024,
krb5->cksum_len, plain.len);
if (ret < 0) {
CHECK(1);
pr_warn("Get MIC failed %d\n", ret);
goto out;
}
len = ret;
if (CHECK(len != plain.len + mic.len)) {
pr_warn("MIC length mismatch %zu != %u\n", len, plain.len + mic.len);
goto out;
}
if (memcmp(buf, mic.data, mic.len) != 0) {
CHECK(1);
pr_warn("MIC mismatch\n");
pr_warn("BUF %*phN\n", mic.len, buf);
pr_warn("MIC %*phN\n", mic.len, mic.data);
ret = -EKEYREJECTED;
goto out;
}
/* Generate a verification request. */
offset = 0;
ret = crypto_krb5_verify_mic(krb5, ci, NULL, sg, 1, &offset, &len);
if (ret < 0) {
CHECK(1);
pr_warn("Verify MIC failed %d\n", ret);
goto out;
}
if (CHECK(offset != mic.len) ||
CHECK(len != plain.len))
goto out;
if (memcmp(buf + offset, plain.data, plain.len) != 0) {
CHECK(1);
pr_warn("Plaintext mismatch\n");
pr_warn("BUF %*phN\n", plain.len, buf + offset);
pr_warn("PT %*phN\n", plain.len, plain.data);
ret = -EKEYREJECTED;
goto out;
}
ret = 0;
out:
clear_buf(&mic);
clear_buf(&plain);
clear_buf(&keys);
clear_buf(&K0);
clear_buf(&Kc);
if (ci)
crypto_free_shash(ci);
return ret;
}
int krb5_selftest(void)
{
void *buf;
int ret = 0, i;
buf = kmalloc(4096, GFP_KERNEL);
if (!buf)
return -ENOMEM;
pr_notice("\n");
pr_notice("Running selftests\n");
for (i = 0; krb5_prf_tests[i].name; i++) {
ret = krb5_test_one_prf(&krb5_prf_tests[i]);
if (ret < 0) {
if (ret != -EOPNOTSUPP)
goto out;
pr_notice("Skipping %s\n", krb5_prf_tests[i].name);
}
}
for (i = 0; krb5_key_tests[i].name; i++) {
ret = krb5_test_one_key(&krb5_key_tests[i]);
if (ret < 0) {
if (ret != -EOPNOTSUPP)
goto out;
pr_notice("Skipping %s\n", krb5_key_tests[i].name);
}
}
for (i = 0; krb5_enc_tests[i].name; i++) {
memset(buf, 0x5a, 4096);
ret = krb5_test_one_enc(&krb5_enc_tests[i], buf);
if (ret < 0) {
if (ret != -EOPNOTSUPP)
goto out;
pr_notice("Skipping %s\n", krb5_enc_tests[i].name);
}
}
for (i = 0; krb5_mic_tests[i].name; i++) {
memset(buf, 0x5a, 4096);
ret = krb5_test_one_mic(&krb5_mic_tests[i], buf);
if (ret < 0) {
if (ret != -EOPNOTSUPP)
goto out;
pr_notice("Skipping %s\n", krb5_mic_tests[i].name);
}
}
ret = 0;
out:
pr_notice("Selftests %s\n", ret == 0 ? "succeeded" : "failed");
kfree(buf);
return ret;
}

crypto/krb5/selftest_data.c (new file, 291 lines)
@@ -0,0 +1,291 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Data for Kerberos library self-testing
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "internal.h"
/*
* Pseudo-random function tests.
*/
const struct krb5_prf_test krb5_prf_tests[] = {
/* rfc8009 Appendix A */
{
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "prf",
.key = "3705D96080C17728A0E800EAB6E0D23C",
.octet = "74657374",
.prf = "9D188616F63852FE86915BB840B4A886FF3E6BB0F819B49B893393D393854295",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "prf",
.key = "6D404D37FAF79F9DF0D33568D320669800EB4836472EA8A026D16B7182460C52",
.octet = "74657374",
.prf =
"9801F69A368C2BF675E59521E177D9A07F67EFE1CFDE8D3C8D6F6A0256E3B17D"
"B3C1B62AD1B8553360D17367EB1514D2",
},
{/* END */}
};
/*
* Key derivation tests.
*/
const struct krb5_key_test krb5_key_tests[] = {
/* rfc8009 Appendix A */
{
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "key",
.key = "3705D96080C17728A0E800EAB6E0D23C",
.Kc.use = 0x00000002,
.Kc.key = "B31A018A48F54776F403E9A396325DC3",
.Ke.use = 0x00000002,
.Ke.key = "9B197DD1E8C5609D6E67C3E37C62C72E",
.Ki.use = 0x00000002,
.Ki.key = "9FDA0E56AB2D85E1569A688696C26A6C",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "key",
.key = "6D404D37FAF79F9DF0D33568D320669800EB4836472EA8A026D16B7182460C52",
.Kc.use = 0x00000002,
.Kc.key = "EF5718BE86CC84963D8BBB5031E9F5C4BA41F28FAF69E73D",
.Ke.use = 0x00000002,
.Ke.key = "56AB22BEE63D82D7BC5227F6773F8EA7A5EB1C825160C38312980C442E5C7E49",
.Ki.use = 0x00000002,
.Ki.key = "69B16514E3CD8E56B82010D5C73012B622C4D00FFC23ED1F",
},
/* rfc6803 sec 10 */
{
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "key",
.key = "57D0297298FFD9D35DE5A47FB4BDE24B",
.Kc.use = 0x00000002,
.Kc.key = "D155775A209D05F02B38D42A389E5A56",
.Ke.use = 0x00000002,
.Ke.key = "64DF83F85A532F17577D8C37035796AB",
.Ki.use = 0x00000002,
.Ki.key = "3E4FBDF30FB8259C425CB6C96F1F4635",
},
{
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "key",
.key = "B9D6828B2056B7BE656D88A123B1FAC68214AC2B727ECF5F69AFE0C4DF2A6D2C",
.Kc.use = 0x00000002,
.Kc.key = "E467F9A9552BC7D3155A6220AF9C19220EEED4FF78B0D1E6A1544991461A9E50",
.Ke.use = 0x00000002,
.Ke.key = "412AEFC362A7285FC3966C6A5181E7605AE675235B6D549FBFC9AB6630A4C604",
.Ki.use = 0x00000002,
.Ki.key = "FA624FA0E523993FA388AEFDC67E67EBCD8C08E8A0246B1D73B0D1DD9FC582B0",
},
{/* END */}
};
/*
* Encryption tests.
*/
const struct krb5_enc_test krb5_enc_tests[] = {
/* rfc8009 Appendix A */
{
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "enc no plain",
.plain = "",
.conf = "7E5895EAF2672435BAD817F545A37148",
.Ke = "9B197DD1E8C5609D6E67C3E37C62C72E",
.Ki = "9FDA0E56AB2D85E1569A688696C26A6C",
.ct = "EF85FB890BB8472F4DAB20394DCA781DAD877EDA39D50C870C0D5A0A8E48C718",
}, {
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "enc plain<block",
.plain = "000102030405",
.conf = "7BCA285E2FD4130FB55B1A5C83BC5B24",
.Ke = "9B197DD1E8C5609D6E67C3E37C62C72E",
.Ki = "9FDA0E56AB2D85E1569A688696C26A6C",
.ct = "84D7F30754ED987BAB0BF3506BEB09CFB55402CEF7E6877CE99E247E52D16ED4421DFDF8976C",
}, {
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "enc plain==block",
.plain = "000102030405060708090A0B0C0D0E0F",
.conf = "56AB21713FF62C0A1457200F6FA9948F",
.Ke = "9B197DD1E8C5609D6E67C3E37C62C72E",
.Ki = "9FDA0E56AB2D85E1569A688696C26A6C",
.ct = "3517D640F50DDC8AD3628722B3569D2AE07493FA8263254080EA65C1008E8FC295FB4852E7D83E1E7C48C37EEBE6B0D3",
}, {
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "enc plain>block",
.plain = "000102030405060708090A0B0C0D0E0F1011121314",
.conf = "A7A4E29A4728CE10664FB64E49AD3FAC",
.Ke = "9B197DD1E8C5609D6E67C3E37C62C72E",
.Ki = "9FDA0E56AB2D85E1569A688696C26A6C",
.ct = "720F73B18D9859CD6CCB4346115CD336C70F58EDC0C4437C5573544C31C813BCE1E6D072C186B39A413C2F92CA9B8334A287FFCBFC",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "enc no plain",
.plain = "",
.conf = "F764E9FA15C276478B2C7D0C4E5F58E4",
.Ke = "56AB22BEE63D82D7BC5227F6773F8EA7A5EB1C825160C38312980C442E5C7E49",
.Ki = "69B16514E3CD8E56B82010D5C73012B622C4D00FFC23ED1F",
.ct = "41F53FA5BFE7026D91FAF9BE959195A058707273A96A40F0A01960621AC612748B9BBFBE7EB4CE3C",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "enc plain<block",
.plain = "000102030405",
.conf = "B80D3251C1F6471494256FFE712D0B9A",
.Ke = "56AB22BEE63D82D7BC5227F6773F8EA7A5EB1C825160C38312980C442E5C7E49",
.Ki = "69B16514E3CD8E56B82010D5C73012B622C4D00FFC23ED1F",
.ct = "4ED7B37C2BCAC8F74F23C1CF07E62BC7B75FB3F637B9F559C7F664F69EAB7B6092237526EA0D1F61CB20D69D10F2",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "enc plain==block",
.plain = "000102030405060708090A0B0C0D0E0F",
.conf = "53BF8A0D105265D4E276428624CE5E63",
.Ke = "56AB22BEE63D82D7BC5227F6773F8EA7A5EB1C825160C38312980C442E5C7E49",
.Ki = "69B16514E3CD8E56B82010D5C73012B622C4D00FFC23ED1F",
.ct = "BC47FFEC7998EB91E8115CF8D19DAC4BBBE2E163E87DD37F49BECA92027764F68CF51F14D798C2273F35DF574D1F932E40C4FF255B36A266",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "enc plain>block",
.plain = "000102030405060708090A0B0C0D0E0F1011121314",
.conf = "763E65367E864F02F55153C7E3B58AF1",
.Ke = "56AB22BEE63D82D7BC5227F6773F8EA7A5EB1C825160C38312980C442E5C7E49",
.Ki = "69B16514E3CD8E56B82010D5C73012B622C4D00FFC23ED1F",
.ct = "40013E2DF58E8751957D2878BCD2D6FE101CCFD556CB1EAE79DB3C3EE86429F2B2A602AC86FEF6ECB647D6295FAE077A1FEB517508D2C16B4192E01F62",
},
/* rfc6803 sec 10 */
{
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "enc no plain",
.plain = "",
.conf = "B69822A19A6B09C0EBC8557D1F1B6C0A",
.K0 = "1DC46A8D763F4F93742BCBA3387576C3",
.usage = 0,
.ct = "C466F1871069921EDB7C6FDE244A52DB0BA10EDC197BDB8006658CA3CCCE6EB8",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "enc 1 plain",
.plain = "'1",
.conf = "6F2FC3C2A166FD8898967A83DE9596D9",
.K0 = "5027BC231D0F3A9D23333F1CA6FDBE7C",
.usage = 1,
.ct = "842D21FD950311C0DD464A3F4BE8D6DA88A56D559C9B47D3F9A85067AF661559B8",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "enc 9 plain",
.plain = "'9 bytesss",
.conf = "A5B4A71E077AEEF93C8763C18FDB1F10",
.K0 = "A1BB61E805F9BA6DDE8FDBDDC05CDEA0",
.usage = 2,
.ct = "619FF072E36286FF0A28DEB3A352EC0D0EDF5C5160D663C901758CCF9D1ED33D71DB8F23AABF8348A0",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "enc 13 plain",
.plain = "'13 bytes byte",
.conf = "19FEE40D810C524B5B22F01874C693DA",
.K0 = "2CA27A5FAF5532244506434E1CEF6676",
.usage = 3,
.ct = "B8ECA3167AE6315512E59F98A7C500205E5F63FF3BB389AF1C41A21D640D8615C9ED3FBEB05AB6ACB67689B5EA",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "enc 30 plain",
.plain = "'30 bytes bytes bytes bytes byt",
.conf = "CA7A7AB4BE192DABD603506DB19C39E2",
.K0 = "7824F8C16F83FF354C6BF7515B973F43",
.usage = 4,
.ct = "A26A3905A4FFD5816B7B1E27380D08090C8EC1F304496E1ABDCD2BDCD1DFFC660989E117A713DDBB57A4146C1587CBA4356665591D2240282F5842B105A5",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "enc no plain",
.plain = "",
.conf = "3CBBD2B45917941067F96599BB98926C",
.K0 = "B61C86CC4E5D2757545AD423399FB7031ECAB913CBB900BD7A3C6DD8BF92015B",
.usage = 0,
.ct = "03886D03310B47A6D8F06D7B94D1DD837ECCE315EF652AFF620859D94A259266",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "enc 1 plain",
.plain = "'1",
.conf = "DEF487FCEBE6DE6346D4DA4521BBA2D2",
.K0 = "1B97FE0A190E2021EB30753E1B6E1E77B0754B1D684610355864104963463833",
.usage = 1,
.ct = "2C9C1570133C99BF6A34BC1B0212002FD194338749DB4135497A347CFCD9D18A12",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "enc 9 plain",
.plain = "'9 bytesss",
.conf = "AD4FF904D34E555384B14100FC465F88",
.K0 = "32164C5B434D1D1538E4CFD9BE8040FE8C4AC7ACC4B93D3314D2133668147A05",
.usage = 2,
.ct = "9C6DE75F812DE7ED0D28B2963557A115640998275B0AF5152709913FF52A2A9C8E63B872F92E64C839",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "enc 13 plain",
.plain = "'13 bytes byte",
.conf = "CF9BCA6DF1144E0C0AF9B8F34C90D514",
.K0 = "B038B132CD8E06612267FAB7170066D88AECCBA0B744BFC60DC89BCA182D0715",
.usage = 3,
.ct = "EEEC85A9813CDC536772AB9B42DEFC5706F726E975DDE05A87EB5406EA324CA185C9986B42AABE794B84821BEE",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "enc 30 plain",
.plain = "'30 bytes bytes bytes bytes byt",
.conf = "644DEF38DA35007275878D216855E228",
.K0 = "CCFCD349BF4C6677E86E4B02B8EAB924A546AC731CF9BF6989B996E7D6BFBBA7",
.usage = 4,
.ct = "0E44680985855F2D1F1812529CA83BFD8E349DE6FD9ADA0BAAA048D68E265FEBF34AD1255A344999AD37146887A6C6845731AC7F46376A0504CD06571474",
},
{/* END */}
};
/*
* Checksum generation tests.
*/
const struct krb5_mic_test krb5_mic_tests[] = {
/* rfc8009 Appendix A */
{
.etype = KRB5_ENCTYPE_AES128_CTS_HMAC_SHA256_128,
.name = "mic",
.plain = "000102030405060708090A0B0C0D0E0F1011121314",
.Kc = "B31A018A48F54776F403E9A396325DC3",
.mic = "D78367186643D67B411CBA9139FC1DEE",
}, {
.etype = KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192,
.name = "mic",
.plain = "000102030405060708090A0B0C0D0E0F1011121314",
.Kc = "EF5718BE86CC84963D8BBB5031E9F5C4BA41F28FAF69E73D",
.mic = "45EE791567EEFCA37F4AC1E0222DE80D43C3BFA06699672A",
},
/* rfc6803 sec 10 */
{
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "mic abc",
.plain = "'abcdefghijk",
.K0 = "1DC46A8D763F4F93742BCBA3387576C3",
.usage = 7,
.mic = "1178E6C5C47A8C1AE0C4B9C7D4EB7B6B",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA128_CTS_CMAC,
.name = "mic ABC",
.plain = "'ABCDEFGHIJKLMNOPQRSTUVWXYZ",
.K0 = "5027BC231D0F3A9D23333F1CA6FDBE7C",
.usage = 8,
.mic = "D1B34F7004A731F23A0C00BF6C3F753A",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "mic 123",
.plain = "'123456789",
.K0 = "B61C86CC4E5D2757545AD423399FB7031ECAB913CBB900BD7A3C6DD8BF92015B",
.usage = 9,
.mic = "87A12CFD2B96214810F01C826E7744B1",
}, {
.etype = KRB5_ENCTYPE_CAMELLIA256_CTS_CMAC,
.name = "mic !@#",
.plain = "'!@#$%^&*()!@#$%^&*()!@#$%^&*()",
.K0 = "32164C5B434D1D1538E4CFD9BE8040FE8C4AC7ACC4B93D3314D2133668147A05",
.usage = 10,
.mic = "3FA0B42355E52B189187294AA252AB64",
},
{/* END */}
};

crypto/krb5enc.c (new file, 504 lines)
@@ -0,0 +1,504 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* AEAD wrapper for Kerberos 5 RFC3961 simplified profile.
*
* Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* Derived from authenc:
* Copyright (c) 2007-2015 Herbert Xu <herbert@gondor.apana.org.au>
*/
#include <crypto/internal/aead.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <crypto/authenc.h>
#include <crypto/scatterwalk.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
struct krb5enc_instance_ctx {
struct crypto_ahash_spawn auth;
struct crypto_skcipher_spawn enc;
unsigned int reqoff;
};
struct krb5enc_ctx {
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
};
struct krb5enc_request_ctx {
struct scatterlist src[2];
struct scatterlist dst[2];
char tail[];
};
static void krb5enc_request_complete(struct aead_request *req, int err)
{
if (err != -EINPROGRESS)
aead_request_complete(req, err);
}
/**
* crypto_krb5enc_extractkeys - Extract Ke and Ki keys from the key blob.
* @keys: Where to put the key sizes and pointers
* @key: Encoded key material
* @keylen: Amount of key material
*
* Decode the key blob we're given. It starts with an rtattr that indicates
* the format and the length. Format CRYPTO_AUTHENC_KEYA_PARAM is:
*
* rtattr || __be32 enckeylen || authkey || enckey
*
* Note that the rtattr is in cpu-endian form, unlike enckeylen. This must be
* handled correctly in static testmgr data.
*/
int crypto_krb5enc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key,
unsigned int keylen)
{
struct rtattr *rta = (struct rtattr *)key;
struct crypto_authenc_key_param *param;
if (!RTA_OK(rta, keylen))
return -EINVAL;
if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
return -EINVAL;
/*
* RTA_OK() didn't align the rtattr's payload when validating that it
* fits in the buffer. Yet, the keys should start on the next 4-byte
* aligned boundary. To avoid confusion, require that the rtattr
* payload be exactly the param struct, which has a 4-byte aligned size.
*/
if (RTA_PAYLOAD(rta) != sizeof(*param))
return -EINVAL;
BUILD_BUG_ON(sizeof(*param) % RTA_ALIGNTO);
param = RTA_DATA(rta);
keys->enckeylen = be32_to_cpu(param->enckeylen);
key += rta->rta_len;
keylen -= rta->rta_len;
if (keylen < keys->enckeylen)
return -EINVAL;
keys->authkeylen = keylen - keys->enckeylen;
keys->authkey = key;
keys->enckey = key + keys->authkeylen;
return 0;
}
EXPORT_SYMBOL(crypto_krb5enc_extractkeys);
static int krb5enc_setkey(struct crypto_aead *krb5enc, const u8 *key,
unsigned int keylen)
{
struct crypto_authenc_keys keys;
struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
struct crypto_skcipher *enc = ctx->enc;
struct crypto_ahash *auth = ctx->auth;
unsigned int flags = crypto_aead_get_flags(krb5enc);
int err = -EINVAL;
if (crypto_krb5enc_extractkeys(&keys, key, keylen) != 0)
goto out;
crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
crypto_ahash_set_flags(auth, flags & CRYPTO_TFM_REQ_MASK);
err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen);
if (err)
goto out;
crypto_skcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK);
crypto_skcipher_set_flags(enc, flags & CRYPTO_TFM_REQ_MASK);
err = crypto_skcipher_setkey(enc, keys.enckey, keys.enckeylen);
out:
memzero_explicit(&keys, sizeof(keys));
return err;
}
static void krb5enc_encrypt_done(void *data, int err)
{
struct aead_request *req = data;
krb5enc_request_complete(req, err);
}
/*
* Start the encryption of the plaintext. We skip over the associated data as
* that only gets included in the hash.
*/
static int krb5enc_dispatch_encrypt(struct aead_request *req,
unsigned int flags)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_skcipher *enc = ctx->enc;
struct skcipher_request *skreq = (void *)(areq_ctx->tail +
ictx->reqoff);
struct scatterlist *src, *dst;
src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
if (req->src == req->dst)
dst = src;
else
dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
skcipher_request_set_tfm(skreq, enc);
skcipher_request_set_callback(skreq, aead_request_flags(req),
krb5enc_encrypt_done, req);
skcipher_request_set_crypt(skreq, src, dst, req->cryptlen, req->iv);
return crypto_skcipher_encrypt(skreq);
}
/*
* Insert the hash into the checksum field in the destination buffer directly
* after the encrypted region.
*/
static void krb5enc_insert_checksum(struct aead_request *req, u8 *hash)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
scatterwalk_map_and_copy(hash, req->dst,
req->assoclen + req->cryptlen,
crypto_aead_authsize(krb5enc), 1);
}
/*
* Upon completion of an asynchronous digest, transfer the hash to the checksum
* field.
*/
static void krb5enc_encrypt_ahash_done(void *data, int err)
{
struct aead_request *req = data;
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
if (err)
return krb5enc_request_complete(req, err);
krb5enc_insert_checksum(req, ahreq->result);
err = krb5enc_dispatch_encrypt(req, 0);
if (err != -EINPROGRESS)
aead_request_complete(req, err);
}
/*
* Start the digest of the plaintext for encryption. In theory, this could be
* run in parallel with the encryption, provided the src and dst buffers don't
* overlap.
*/
static int krb5enc_dispatch_encrypt_hash(struct aead_request *req)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct crypto_ahash *auth = ctx->auth;
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
u8 *hash = areq_ctx->tail;
int err;
ahash_request_set_callback(ahreq, aead_request_flags(req),
krb5enc_encrypt_ahash_done, req);
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->src, hash, req->assoclen + req->cryptlen);
err = crypto_ahash_digest(ahreq);
if (err)
return err;
krb5enc_insert_checksum(req, hash);
return 0;
}
/*
* Process an encryption operation. We can perform the cipher and the hash in
* parallel, provided the src and dst buffers are separate.
*/
static int krb5enc_encrypt(struct aead_request *req)
{
int err;
err = krb5enc_dispatch_encrypt_hash(req);
if (err < 0)
return err;
return krb5enc_dispatch_encrypt(req, aead_request_flags(req));
}
static int krb5enc_verify_hash(struct aead_request *req)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
unsigned int authsize = crypto_aead_authsize(krb5enc);
u8 *calc_hash = areq_ctx->tail;
u8 *msg_hash = areq_ctx->tail + authsize;
scatterwalk_map_and_copy(msg_hash, req->src, ahreq->nbytes, authsize, 0);
if (crypto_memneq(msg_hash, calc_hash, authsize))
return -EBADMSG;
return 0;
}
static void krb5enc_decrypt_hash_done(void *data, int err)
{
struct aead_request *req = data;
if (err)
return krb5enc_request_complete(req, err);
err = krb5enc_verify_hash(req);
krb5enc_request_complete(req, err);
}
/*
* Dispatch the hashing of the plaintext after we've done the decryption.
*/
static int krb5enc_dispatch_decrypt_hash(struct aead_request *req)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
struct crypto_ahash *auth = ctx->auth;
unsigned int authsize = crypto_aead_authsize(krb5enc);
u8 *hash = areq_ctx->tail;
int err;
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->dst, hash,
req->assoclen + req->cryptlen - authsize);
ahash_request_set_callback(ahreq, aead_request_flags(req),
krb5enc_decrypt_hash_done, req);
err = crypto_ahash_digest(ahreq);
if (err < 0)
return err;
return krb5enc_verify_hash(req);
}
/*
* Dispatch the decryption of the ciphertext.
*/
static int krb5enc_dispatch_decrypt(struct aead_request *req)
{
struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(krb5enc);
struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
struct skcipher_request *skreq = (void *)(areq_ctx->tail +
ictx->reqoff);
unsigned int authsize = crypto_aead_authsize(krb5enc);
struct scatterlist *src, *dst;
src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
dst = src;
if (req->src != req->dst)
dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
skcipher_request_set_tfm(skreq, ctx->enc);
skcipher_request_set_callback(skreq, aead_request_flags(req),
req->base.complete, req->base.data);
skcipher_request_set_crypt(skreq, src, dst,
req->cryptlen - authsize, req->iv);
return crypto_skcipher_decrypt(skreq);
}
static int krb5enc_decrypt(struct aead_request *req)
{
int err;
err = krb5enc_dispatch_decrypt(req);
if (err < 0)
return err;
return krb5enc_dispatch_decrypt_hash(req);
}
static int krb5enc_init_tfm(struct crypto_aead *tfm)
{
struct aead_instance *inst = aead_alg_instance(tfm);
struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
struct krb5enc_ctx *ctx = crypto_aead_ctx(tfm);
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
int err;
auth = crypto_spawn_ahash(&ictx->auth);
if (IS_ERR(auth))
return PTR_ERR(auth);
enc = crypto_spawn_skcipher(&ictx->enc);
err = PTR_ERR(enc);
if (IS_ERR(enc))
goto err_free_ahash;
ctx->auth = auth;
ctx->enc = enc;
crypto_aead_set_reqsize(
tfm,
sizeof(struct krb5enc_request_ctx) +
ictx->reqoff + /* Space for two checksums */
umax(sizeof(struct ahash_request) + crypto_ahash_reqsize(auth),
sizeof(struct skcipher_request) + crypto_skcipher_reqsize(enc)));
return 0;
err_free_ahash:
crypto_free_ahash(auth);
return err;
}
static void krb5enc_exit_tfm(struct crypto_aead *tfm)
{
struct krb5enc_ctx *ctx = crypto_aead_ctx(tfm);
crypto_free_ahash(ctx->auth);
crypto_free_skcipher(ctx->enc);
}
static void krb5enc_free(struct aead_instance *inst)
{
struct krb5enc_instance_ctx *ctx = aead_instance_ctx(inst);
crypto_drop_skcipher(&ctx->enc);
crypto_drop_ahash(&ctx->auth);
kfree(inst);
}
/*
* Create an instance of a template for a specific hash and cipher pair.
*/
static int krb5enc_create(struct crypto_template *tmpl, struct rtattr **tb)
{
struct krb5enc_instance_ctx *ictx;
struct skcipher_alg_common *enc;
struct hash_alg_common *auth;
struct aead_instance *inst;
struct crypto_alg *auth_base;
u32 mask;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
if (err) {
pr_err("attr_type failed\n");
return err;
}
inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
if (!inst)
return -ENOMEM;
ictx = aead_instance_ctx(inst);
err = crypto_grab_ahash(&ictx->auth, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[1]), 0, mask);
if (err) {
pr_err("grab ahash failed\n");
goto err_free_inst;
}
auth = crypto_spawn_ahash_alg(&ictx->auth);
auth_base = &auth->base;
err = crypto_grab_skcipher(&ictx->enc, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[2]), 0, mask);
if (err) {
pr_err("grab skcipher failed\n");
goto err_free_inst;
}
enc = crypto_spawn_skcipher_alg_common(&ictx->enc);
ictx->reqoff = 2 * auth->digestsize;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"krb5enc(%s,%s)", auth_base->cra_name,
enc->base.cra_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"krb5enc(%s,%s)", auth_base->cra_driver_name,
enc->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct krb5enc_ctx);
inst->alg.ivsize = enc->ivsize;
inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;
inst->alg.init = krb5enc_init_tfm;
inst->alg.exit = krb5enc_exit_tfm;
inst->alg.setkey = krb5enc_setkey;
inst->alg.encrypt = krb5enc_encrypt;
inst->alg.decrypt = krb5enc_decrypt;
inst->free = krb5enc_free;
err = aead_register_instance(tmpl, inst);
if (err) {
pr_err("ref failed\n");
goto err_free_inst;
}
return 0;
err_free_inst:
krb5enc_free(inst);
return err;
}
static struct crypto_template crypto_krb5enc_tmpl = {
.name = "krb5enc",
.create = krb5enc_create,
.module = THIS_MODULE,
};
static int __init crypto_krb5enc_module_init(void)
{
return crypto_register_template(&crypto_krb5enc_tmpl);
}
static void __exit crypto_krb5enc_module_exit(void)
{
crypto_unregister_template(&crypto_krb5enc_tmpl);
}
subsys_initcall(crypto_krb5enc_module_init);
module_exit(crypto_krb5enc_module_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Simple AEAD wrapper for Kerberos 5 RFC3961");
MODULE_ALIAS_CRYPTO("krb5enc");


@@ -167,7 +167,7 @@ static int lrw_xor_tweak(struct skcipher_request *req, bool second_pass)
while (w.nbytes) {
unsigned int avail = w.nbytes;
be128 *wsrc;
const be128 *wsrc;
be128 *wdst;
wsrc = w.src.virt.addr;


@@ -16,7 +16,7 @@ struct lz4_ctx {
void *lz4_comp_mem;
};
static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
static void *lz4_alloc_ctx(void)
{
void *ctx;
@@ -27,29 +27,11 @@ static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int lz4_init(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;
return 0;
}
static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void lz4_free_ctx(void *ctx)
{
vfree(ctx);
}
static void lz4_exit(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
lz4_free_ctx(NULL, ctx->lz4_comp_mem);
}
static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -70,14 +52,6 @@ static int lz4_scompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4_compress_crypto(src, slen, dst, dlen, ctx);
}
static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
return __lz4_compress_crypto(src, slen, dst, dlen, ctx->lz4_comp_mem);
}
static int __lz4_decompress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -97,26 +71,6 @@ static int lz4_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
}
static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst,
unsigned int *dlen)
{
return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
}
static struct crypto_alg alg_lz4 = {
.cra_name = "lz4",
.cra_driver_name = "lz4-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct lz4_ctx),
.cra_module = THIS_MODULE,
.cra_init = lz4_init,
.cra_exit = lz4_exit,
.cra_u = { .compress = {
.coa_compress = lz4_compress_crypto,
.coa_decompress = lz4_decompress_crypto } }
};
static struct scomp_alg scomp = {
.alloc_ctx = lz4_alloc_ctx,
.free_ctx = lz4_free_ctx,
@@ -131,24 +85,11 @@ static struct scomp_alg scomp = {
static int __init lz4_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg_lz4);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg_lz4);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit lz4_mod_fini(void)
{
crypto_unregister_alg(&alg_lz4);
crypto_unregister_scomp(&scomp);
}


@@ -4,18 +4,17 @@
*
* Copyright (c) 2013 Chanho Min <chanho.min@lge.com>
*/
#include <crypto/internal/scompress.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/vmalloc.h>
#include <linux/lz4.h>
#include <crypto/internal/scompress.h>
struct lz4hc_ctx {
void *lz4hc_comp_mem;
};
static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
static void *lz4hc_alloc_ctx(void)
{
void *ctx;
@@ -26,29 +25,11 @@ static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int lz4hc_init(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;
return 0;
}
static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void lz4hc_free_ctx(void *ctx)
{
vfree(ctx);
}
static void lz4hc_exit(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
}
static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -69,16 +50,6 @@ static int lz4hc_scompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4hc_compress_crypto(src, slen, dst, dlen, ctx);
}
static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst,
unsigned int *dlen)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
return __lz4hc_compress_crypto(src, slen, dst, dlen,
ctx->lz4hc_comp_mem);
}
static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -98,26 +69,6 @@ static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
}
static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst,
unsigned int *dlen)
{
return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
}
static struct crypto_alg alg_lz4hc = {
.cra_name = "lz4hc",
.cra_driver_name = "lz4hc-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct lz4hc_ctx),
.cra_module = THIS_MODULE,
.cra_init = lz4hc_init,
.cra_exit = lz4hc_exit,
.cra_u = { .compress = {
.coa_compress = lz4hc_compress_crypto,
.coa_decompress = lz4hc_decompress_crypto } }
};
static struct scomp_alg scomp = {
.alloc_ctx = lz4hc_alloc_ctx,
.free_ctx = lz4hc_free_ctx,
@@ -132,24 +83,11 @@ static struct scomp_alg scomp = {
static int __init lz4hc_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg_lz4hc);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg_lz4hc);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit lz4hc_mod_fini(void)
{
crypto_unregister_alg(&alg_lz4hc);
crypto_unregister_scomp(&scomp);
}


@@ -3,19 +3,17 @@
* Cryptographic API.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <linux/lzo.h>
#include <crypto/internal/scompress.h>
#include <linux/init.h>
#include <linux/lzo.h>
#include <linux/module.h>
#include <linux/slab.h>
struct lzorle_ctx {
void *lzorle_comp_mem;
};
static void *lzorle_alloc_ctx(struct crypto_scomp *tfm)
static void *lzorle_alloc_ctx(void)
{
void *ctx;
@@ -26,36 +24,18 @@ static void *lzorle_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int lzorle_init(struct crypto_tfm *tfm)
{
struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->lzorle_comp_mem = lzorle_alloc_ctx(NULL);
if (IS_ERR(ctx->lzorle_comp_mem))
return -ENOMEM;
return 0;
}
static void lzorle_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void lzorle_free_ctx(void *ctx)
{
kvfree(ctx);
}
static void lzorle_exit(struct crypto_tfm *tfm)
{
struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
lzorle_free_ctx(NULL, ctx->lzorle_comp_mem);
}
static int __lzorle_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;
err = lzorle1x_1_compress(src, slen, dst, &tmp_len, ctx);
err = lzorle1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
if (err != LZO_E_OK)
return -EINVAL;
@@ -64,14 +44,6 @@ static int __lzorle_compress(const u8 *src, unsigned int slen,
return 0;
}
static int lzorle_compress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
return __lzorle_compress(src, slen, dst, dlen, ctx->lzorle_comp_mem);
}
static int lzorle_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -94,12 +66,6 @@ static int __lzorle_decompress(const u8 *src, unsigned int slen,
return 0;
}
static int lzorle_decompress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
return __lzorle_decompress(src, slen, dst, dlen);
}
static int lzorle_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -107,19 +73,6 @@ static int lzorle_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lzorle_decompress(src, slen, dst, dlen);
}
static struct crypto_alg alg = {
.cra_name = "lzo-rle",
.cra_driver_name = "lzo-rle-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct lzorle_ctx),
.cra_module = THIS_MODULE,
.cra_init = lzorle_init,
.cra_exit = lzorle_exit,
.cra_u = { .compress = {
.coa_compress = lzorle_compress,
.coa_decompress = lzorle_decompress } }
};
static struct scomp_alg scomp = {
.alloc_ctx = lzorle_alloc_ctx,
.free_ctx = lzorle_free_ctx,
@@ -134,24 +87,11 @@ static struct scomp_alg scomp = {
static int __init lzorle_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit lzorle_mod_fini(void)
{
crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}


@@ -3,19 +3,17 @@
* Cryptographic API.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <linux/lzo.h>
#include <crypto/internal/scompress.h>
#include <linux/init.h>
#include <linux/lzo.h>
#include <linux/module.h>
#include <linux/slab.h>
struct lzo_ctx {
void *lzo_comp_mem;
};
static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
static void *lzo_alloc_ctx(void)
{
void *ctx;
@@ -26,36 +24,18 @@ static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int lzo_init(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;
return 0;
}
static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void lzo_free_ctx(void *ctx)
{
kvfree(ctx);
}
static void lzo_exit(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
lzo_free_ctx(NULL, ctx->lzo_comp_mem);
}
static int __lzo_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;
err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);
err = lzo1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
if (err != LZO_E_OK)
return -EINVAL;
@@ -64,14 +44,6 @@ static int __lzo_compress(const u8 *src, unsigned int slen,
return 0;
}
static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
}
static int lzo_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -94,12 +66,6 @@ static int __lzo_decompress(const u8 *src, unsigned int slen,
return 0;
}
static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
return __lzo_decompress(src, slen, dst, dlen);
}
static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -107,19 +73,6 @@ static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lzo_decompress(src, slen, dst, dlen);
}
static struct crypto_alg alg = {
.cra_name = "lzo",
.cra_driver_name = "lzo-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct lzo_ctx),
.cra_module = THIS_MODULE,
.cra_init = lzo_init,
.cra_exit = lzo_exit,
.cra_u = { .compress = {
.coa_compress = lzo_compress,
.coa_decompress = lzo_decompress } }
};
static struct scomp_alg scomp = {
.alloc_ctx = lzo_alloc_ctx,
.free_ctx = lzo_free_ctx,
@@ -134,24 +87,11 @@ static struct scomp_alg scomp = {
static int __init lzo_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret) {
crypto_unregister_alg(&alg);
return ret;
}
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit lzo_mod_fini(void)
{
crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}


@@ -22,8 +22,8 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
struct crypto_cipher *tfm)
{
int bsize = crypto_cipher_blocksize(tfm);
const u8 *src = walk->src.virt.addr;
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
u8 * const iv = walk->iv;
@@ -45,17 +45,17 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
{
int bsize = crypto_cipher_blocksize(tfm);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
u8 * const iv = walk->iv;
u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
do {
memcpy(tmpbuf, src, bsize);
crypto_xor(iv, src, bsize);
crypto_cipher_encrypt_one(tfm, src, iv);
crypto_xor_cpy(iv, tmpbuf, src, bsize);
memcpy(tmpbuf, dst, bsize);
crypto_xor(iv, dst, bsize);
crypto_cipher_encrypt_one(tfm, dst, iv);
crypto_xor_cpy(iv, tmpbuf, dst, bsize);
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
return nbytes;
@@ -89,8 +89,8 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
struct crypto_cipher *tfm)
{
int bsize = crypto_cipher_blocksize(tfm);
const u8 *src = walk->src.virt.addr;
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
u8 * const iv = walk->iv;
@@ -112,17 +112,17 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
{
int bsize = crypto_cipher_blocksize(tfm);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
u8 * const iv = walk->iv;
u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
do {
memcpy(tmpbuf, src, bsize);
crypto_cipher_decrypt_one(tfm, src, src);
crypto_xor(src, iv, bsize);
crypto_xor_cpy(iv, src, tmpbuf, bsize);
memcpy(tmpbuf, dst, bsize);
crypto_cipher_decrypt_one(tfm, dst, dst);
crypto_xor(dst, iv, bsize);
crypto_xor_cpy(iv, dst, tmpbuf, bsize);
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
return nbytes;


@@ -72,9 +72,6 @@ static int c_show(struct seq_file *m, void *p)
seq_printf(m, "max keysize : %u\n",
alg->cra_cipher.cia_max_keysize);
break;
case CRYPTO_ALG_TYPE_COMPRESS:
seq_printf(m, "type : compression\n");
break;
default:
seq_printf(m, "type : unknown\n");
break;


@@ -210,7 +210,7 @@ static int rsassa_pkcs1_sign(struct crypto_sig *tfm,
memset(dst, 0, pad_len);
}
return 0;
return ctx->key_size;
}
static int rsassa_pkcs1_verify(struct crypto_sig *tfm,


@@ -15,59 +15,103 @@
#include <linux/module.h>
#include <linux/scatterlist.h>
static inline void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
void scatterwalk_skip(struct scatter_walk *walk, unsigned int nbytes)
{
void *src = out ? buf : sgdata;
void *dst = out ? sgdata : buf;
struct scatterlist *sg = walk->sg;
memcpy(dst, src, nbytes);
}
nbytes += walk->offset - sg->offset;
void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
size_t nbytes, int out)
{
for (;;) {
unsigned int len_this_page = scatterwalk_pagelen(walk);
u8 *vaddr;
if (len_this_page > nbytes)
len_this_page = nbytes;
if (out != 2) {
vaddr = scatterwalk_map(walk);
memcpy_dir(buf, vaddr, len_this_page, out);
scatterwalk_unmap(vaddr);
}
scatterwalk_advance(walk, len_this_page);
if (nbytes == len_this_page)
break;
buf += len_this_page;
nbytes -= len_this_page;
scatterwalk_pagedone(walk, out & 1, 1);
while (nbytes > sg->length) {
nbytes -= sg->length;
sg = sg_next(sg);
}
walk->sg = sg;
walk->offset = sg->offset + nbytes;
}
EXPORT_SYMBOL_GPL(scatterwalk_copychunks);
EXPORT_SYMBOL_GPL(scatterwalk_skip);
void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
unsigned int start, unsigned int nbytes, int out)
inline void memcpy_from_scatterwalk(void *buf, struct scatter_walk *walk,
unsigned int nbytes)
{
do {
unsigned int to_copy;
to_copy = scatterwalk_next(walk, nbytes);
memcpy(buf, walk->addr, to_copy);
scatterwalk_done_src(walk, to_copy);
buf += to_copy;
nbytes -= to_copy;
} while (nbytes);
}
EXPORT_SYMBOL_GPL(memcpy_from_scatterwalk);
inline void memcpy_to_scatterwalk(struct scatter_walk *walk, const void *buf,
unsigned int nbytes)
{
do {
unsigned int to_copy;
to_copy = scatterwalk_next(walk, nbytes);
memcpy(walk->addr, buf, to_copy);
scatterwalk_done_dst(walk, to_copy);
buf += to_copy;
nbytes -= to_copy;
} while (nbytes);
}
EXPORT_SYMBOL_GPL(memcpy_to_scatterwalk);
void memcpy_from_sglist(void *buf, struct scatterlist *sg,
unsigned int start, unsigned int nbytes)
{
struct scatter_walk walk;
struct scatterlist tmp[2];
if (!nbytes)
if (unlikely(nbytes == 0)) /* in case sg == NULL */
return;
sg = scatterwalk_ffwd(tmp, sg, start);
scatterwalk_start(&walk, sg);
scatterwalk_copychunks(buf, &walk, nbytes, out);
scatterwalk_done(&walk, out, 0);
scatterwalk_start_at_pos(&walk, sg, start);
memcpy_from_scatterwalk(buf, &walk, nbytes);
}
EXPORT_SYMBOL_GPL(scatterwalk_map_and_copy);
EXPORT_SYMBOL_GPL(memcpy_from_sglist);
void memcpy_to_sglist(struct scatterlist *sg, unsigned int start,
const void *buf, unsigned int nbytes)
{
struct scatter_walk walk;
if (unlikely(nbytes == 0)) /* in case sg == NULL */
return;
scatterwalk_start_at_pos(&walk, sg, start);
memcpy_to_scatterwalk(&walk, buf, nbytes);
}
EXPORT_SYMBOL_GPL(memcpy_to_sglist);
void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
struct scatter_walk swalk;
struct scatter_walk dwalk;
if (unlikely(nbytes == 0)) /* in case sg == NULL */
return;
scatterwalk_start(&swalk, src);
scatterwalk_start(&dwalk, dst);
do {
unsigned int slen, dlen;
unsigned int len;
slen = scatterwalk_next(&swalk, nbytes);
dlen = scatterwalk_next(&dwalk, nbytes);
len = min(slen, dlen);
memcpy(dwalk.addr, swalk.addr, len);
scatterwalk_done_dst(&dwalk, len);
scatterwalk_done_src(&swalk, len);
nbytes -= len;
} while (nbytes);
}
EXPORT_SYMBOL_GPL(memcpy_sglist);
struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
struct scatterlist *src,


@@ -12,8 +12,10 @@
#include <crypto/scatterwalk.h>
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/overflow.h>
#include <linux/scatterlist.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
@@ -23,9 +25,14 @@
#include "compress.h"
#define SCOMP_SCRATCH_SIZE 65400
struct scomp_scratch {
spinlock_t lock;
void *src;
union {
void *src;
unsigned long saddr;
};
void *dst;
};
@@ -66,7 +73,7 @@ static void crypto_scomp_free_scratches(void)
for_each_possible_cpu(i) {
scratch = per_cpu_ptr(&scomp_scratch, i);
vfree(scratch->src);
free_page(scratch->saddr);
vfree(scratch->dst);
scratch->src = NULL;
scratch->dst = NULL;
@@ -79,14 +86,15 @@ static int crypto_scomp_alloc_scratches(void)
int i;
for_each_possible_cpu(i) {
struct page *page;
void *mem;
scratch = per_cpu_ptr(&scomp_scratch, i);
mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
if (!mem)
page = alloc_pages_node(cpu_to_node(i), GFP_KERNEL, 0);
if (!page)
goto error;
scratch->src = mem;
scratch->src = page_address(page);
mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
if (!mem)
goto error;
@@ -98,13 +106,66 @@ error:
return -ENOMEM;
}
static void scomp_free_streams(struct scomp_alg *alg)
{
struct crypto_acomp_stream __percpu *stream = alg->stream;
int i;
for_each_possible_cpu(i) {
struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
if (!ps->ctx)
break;
alg->free_ctx(ps->ctx);
}
free_percpu(stream);
}
static int scomp_alloc_streams(struct scomp_alg *alg)
{
struct crypto_acomp_stream __percpu *stream;
int i;
stream = alloc_percpu(struct crypto_acomp_stream);
if (!stream)
return -ENOMEM;
for_each_possible_cpu(i) {
struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
ps->ctx = alg->alloc_ctx();
if (IS_ERR(ps->ctx)) {
scomp_free_streams(alg);
return PTR_ERR(ps->ctx);
}
spin_lock_init(&ps->lock);
}
alg->stream = stream;
return 0;
}
static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
{
struct scomp_alg *alg = crypto_scomp_alg(__crypto_scomp_tfm(tfm));
int ret = 0;
mutex_lock(&scomp_lock);
if (!scomp_scratch_users++)
if (!alg->stream) {
ret = scomp_alloc_streams(alg);
if (ret)
goto unlock;
}
if (!scomp_scratch_users) {
ret = crypto_scomp_alloc_scratches();
if (ret)
goto unlock;
scomp_scratch_users++;
}
unlock:
mutex_unlock(&scomp_lock);
return ret;
@@ -112,84 +173,144 @@ static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
{
struct scomp_scratch *scratch = raw_cpu_ptr(&scomp_scratch);
struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
void **tfm_ctx = acomp_tfm_ctx(tfm);
struct crypto_scomp **tfm_ctx = acomp_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
void **ctx = acomp_request_ctx(req);
struct scomp_scratch *scratch;
void *src, *dst;
unsigned int dlen;
struct crypto_acomp_stream *stream;
unsigned int slen = req->slen;
unsigned int dlen = req->dlen;
struct page *spage, *dpage;
unsigned int n;
const u8 *src;
size_t soff;
size_t doff;
u8 *dst;
int ret;
if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
if (!req->src || !slen)
return -EINVAL;
if (req->dst && !req->dlen)
if (!req->dst || !dlen)
return -EINVAL;
if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
req->dlen = SCOMP_SCRATCH_SIZE;
dlen = req->dlen;
scratch = raw_cpu_ptr(&scomp_scratch);
spin_lock(&scratch->lock);
if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
src = page_to_virt(sg_page(req->src)) + req->src->offset;
} else {
scatterwalk_map_and_copy(scratch->src, req->src, 0,
req->slen, 0);
if (acomp_request_src_isvirt(req))
src = req->svirt;
else {
src = scratch->src;
do {
if (acomp_request_src_isfolio(req)) {
spage = folio_page(req->sfolio, 0);
soff = req->soff;
} else if (slen <= req->src->length) {
spage = sg_page(req->src);
soff = req->src->offset;
} else
break;
spage = nth_page(spage, soff / PAGE_SIZE);
soff = offset_in_page(soff);
n = slen / PAGE_SIZE;
n += (offset_in_page(slen) + soff - 1) / PAGE_SIZE;
if (PageHighMem(nth_page(spage, n)) &&
size_add(soff, slen) > PAGE_SIZE)
break;
src = kmap_local_page(spage) + soff;
} while (0);
}
if (req->dst && sg_nents(req->dst) == 1 && !PageHighMem(sg_page(req->dst)))
dst = page_to_virt(sg_page(req->dst)) + req->dst->offset;
else
if (acomp_request_dst_isvirt(req))
dst = req->dvirt;
else {
unsigned int max = SCOMP_SCRATCH_SIZE;
dst = scratch->dst;
do {
if (acomp_request_dst_isfolio(req)) {
dpage = folio_page(req->dfolio, 0);
doff = req->doff;
} else if (dlen <= req->dst->length) {
dpage = sg_page(req->dst);
doff = req->dst->offset;
} else
break;
dpage = nth_page(dpage, doff / PAGE_SIZE);
doff = offset_in_page(doff);
n = dlen / PAGE_SIZE;
n += (offset_in_page(dlen) + doff - 1) / PAGE_SIZE;
if (PageHighMem(nth_page(dpage, n)) &&
size_add(doff, dlen) > PAGE_SIZE)
break;
dst = kmap_local_page(dpage) + doff;
max = dlen;
} while (0);
dlen = min(dlen, max);
}
spin_lock_bh(&scratch->lock);
if (src == scratch->src)
memcpy_from_sglist(scratch->src, req->src, 0, slen);
stream = raw_cpu_ptr(crypto_scomp_alg(scomp)->stream);
spin_lock(&stream->lock);
if (dir)
ret = crypto_scomp_compress(scomp, src, req->slen,
dst, &req->dlen, *ctx);
ret = crypto_scomp_compress(scomp, src, slen,
dst, &dlen, stream->ctx);
else
ret = crypto_scomp_decompress(scomp, src, req->slen,
dst, &req->dlen, *ctx);
if (!ret) {
if (!req->dst) {
req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
if (!req->dst) {
ret = -ENOMEM;
goto out;
}
} else if (req->dlen > dlen) {
ret = -ENOSPC;
goto out;
}
if (dst == scratch->dst) {
scatterwalk_map_and_copy(scratch->dst, req->dst, 0,
req->dlen, 1);
} else {
int nr_pages = DIV_ROUND_UP(req->dst->offset + req->dlen, PAGE_SIZE);
int i;
struct page *dst_page = sg_page(req->dst);
ret = crypto_scomp_decompress(scomp, src, slen,
dst, &dlen, stream->ctx);
for (i = 0; i < nr_pages; i++)
flush_dcache_page(dst_page + i);
if (dst == scratch->dst)
memcpy_to_sglist(req->dst, 0, dst, dlen);
spin_unlock(&stream->lock);
spin_unlock_bh(&scratch->lock);
req->dlen = dlen;
if (!acomp_request_dst_isvirt(req) && dst != scratch->dst) {
kunmap_local(dst);
dlen += doff;
for (;;) {
flush_dcache_page(dpage);
if (dlen <= PAGE_SIZE)
break;
dlen -= PAGE_SIZE;
dpage = nth_page(dpage, 1);
}
}
out:
spin_unlock(&scratch->lock);
if (!acomp_request_src_isvirt(req) && src != scratch->src)
kunmap_local(src);
return ret;
}
static int scomp_acomp_chain(struct acomp_req *req, int dir)
{
struct acomp_req *r2;
int err;
err = scomp_acomp_comp_decomp(req, dir);
req->base.err = err;
list_for_each_entry(r2, &req->base.list, base.list)
r2->base.err = scomp_acomp_comp_decomp(r2, dir);
return err;
}
static int scomp_acomp_compress(struct acomp_req *req)
{
return scomp_acomp_comp_decomp(req, 1);
return scomp_acomp_chain(req, 1);
}
static int scomp_acomp_decompress(struct acomp_req *req)
{
return scomp_acomp_comp_decomp(req, 0);
return scomp_acomp_chain(req, 0);
}
static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
@@ -225,46 +346,19 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
crt->compress = scomp_acomp_compress;
crt->decompress = scomp_acomp_decompress;
crt->dst_free = sgl_free;
crt->reqsize = sizeof(void *);
return 0;
}
struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req)
static void crypto_scomp_destroy(struct crypto_alg *alg)
{
struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
void *ctx;
ctx = crypto_scomp_alloc_ctx(scomp);
if (IS_ERR(ctx)) {
kfree(req);
return NULL;
}
*req->__ctx = ctx;
return req;
}
void crypto_acomp_scomp_free_ctx(struct acomp_req *req)
{
struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
void *ctx = *req->__ctx;
if (ctx)
crypto_scomp_free_ctx(scomp, ctx);
scomp_free_streams(__crypto_scomp_alg(alg));
}
static const struct crypto_type crypto_scomp_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_scomp_init_tfm,
.destroy = crypto_scomp_destroy,
#ifdef CONFIG_PROC_FS
.show = crypto_scomp_show,
#endif
@@ -277,12 +371,21 @@ static const struct crypto_type crypto_scomp_type = {
.tfmsize = offsetof(struct crypto_scomp, base),
};
int crypto_register_scomp(struct scomp_alg *alg)
static void scomp_prepare_alg(struct scomp_alg *alg)
{
struct crypto_alg *base = &alg->calg.base;
comp_prepare_alg(&alg->calg);
base->cra_flags |= CRYPTO_ALG_REQ_CHAIN;
}
int crypto_register_scomp(struct scomp_alg *alg)
{
struct crypto_alg *base = &alg->calg.base;
scomp_prepare_alg(alg);
base->cra_type = &crypto_scomp_type;
base->cra_flags |= CRYPTO_ALG_TYPE_SCOMPRESS;


@@ -22,6 +22,7 @@
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <net/netlink.h>
#include "skcipher.h"
@@ -38,26 +39,6 @@ static const struct crypto_type crypto_skcipher_type;
static int skcipher_walk_next(struct skcipher_walk *walk);
static inline void skcipher_map_src(struct skcipher_walk *walk)
{
walk->src.virt.addr = scatterwalk_map(&walk->in);
}
static inline void skcipher_map_dst(struct skcipher_walk *walk)
{
walk->dst.virt.addr = scatterwalk_map(&walk->out);
}
static inline void skcipher_unmap_src(struct skcipher_walk *walk)
{
scatterwalk_unmap(walk->src.virt.addr);
}
static inline void skcipher_unmap_dst(struct skcipher_walk *walk)
{
scatterwalk_unmap(walk->dst.virt.addr);
}
static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
{
return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
@@ -69,14 +50,6 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
return container_of(alg, struct skcipher_alg, base);
}
static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
{
u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1);
scatterwalk_copychunks(addr, &walk->out, bsize, 1);
return 0;
}
/**
* skcipher_walk_done() - finish one step of a skcipher_walk
* @walk: the skcipher_walk
@@ -111,15 +84,13 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
if (likely(!(walk->flags & (SKCIPHER_WALK_SLOW |
SKCIPHER_WALK_COPY |
SKCIPHER_WALK_DIFF)))) {
unmap_src:
skcipher_unmap_src(walk);
scatterwalk_advance(&walk->in, n);
} else if (walk->flags & SKCIPHER_WALK_DIFF) {
skcipher_unmap_dst(walk);
goto unmap_src;
scatterwalk_done_src(&walk->in, n);
} else if (walk->flags & SKCIPHER_WALK_COPY) {
skcipher_map_dst(walk);
memcpy(walk->dst.virt.addr, walk->page, n);
skcipher_unmap_dst(walk);
scatterwalk_advance(&walk->in, n);
scatterwalk_map(&walk->out);
memcpy(walk->out.addr, walk->page, n);
} else { /* SKCIPHER_WALK_SLOW */
if (res > 0) {
/*
@@ -131,20 +102,19 @@ unmap_src:
res = -EINVAL;
total = 0;
} else
n = skcipher_done_slow(walk, n);
memcpy_to_scatterwalk(&walk->out, walk->out.addr, n);
goto dst_done;
}
scatterwalk_done_dst(&walk->out, n);
dst_done:
if (res > 0)
res = 0;
walk->total = total;
walk->nbytes = 0;
scatterwalk_advance(&walk->in, n);
scatterwalk_advance(&walk->out, n);
scatterwalk_done(&walk->in, 0, total);
scatterwalk_done(&walk->out, 1, total);
if (total) {
if (walk->flags & SKCIPHER_WALK_SLEEP)
cond_resched();
@@ -174,7 +144,7 @@ static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
{
unsigned alignmask = walk->alignmask;
unsigned n;
u8 *buffer;
void *buffer;
if (!walk->buffer)
walk->buffer = walk->page;
@@ -188,10 +158,11 @@ static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
return skcipher_walk_done(walk, -ENOMEM);
walk->buffer = buffer;
}
walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1);
walk->src.virt.addr = walk->dst.virt.addr;
scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0);
buffer = PTR_ALIGN(buffer, alignmask + 1);
memcpy_from_scatterwalk(buffer, &walk->in, bsize);
walk->out.__addr = buffer;
walk->in.__addr = walk->out.addr;
walk->nbytes = bsize;
walk->flags |= SKCIPHER_WALK_SLOW;
@@ -201,14 +172,18 @@ static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
static int skcipher_next_copy(struct skcipher_walk *walk)
{
u8 *tmp = walk->page;
void *tmp = walk->page;
skcipher_map_src(walk);
memcpy(tmp, walk->src.virt.addr, walk->nbytes);
skcipher_unmap_src(walk);
scatterwalk_map(&walk->in);
memcpy(tmp, walk->in.addr, walk->nbytes);
scatterwalk_unmap(&walk->in);
/*
* walk->in is advanced later when the number of bytes actually
* processed (which might be less than walk->nbytes) is known.
*/
walk->src.virt.addr = tmp;
walk->dst.virt.addr = tmp;
walk->in.__addr = tmp;
walk->out.__addr = tmp;
return 0;
}
@@ -218,15 +193,15 @@ static int skcipher_next_fast(struct skcipher_walk *walk)
diff = offset_in_page(walk->in.offset) -
offset_in_page(walk->out.offset);
diff |= (u8 *)scatterwalk_page(&walk->in) -
(u8 *)scatterwalk_page(&walk->out);
diff |= (u8 *)(sg_page(walk->in.sg) + (walk->in.offset >> PAGE_SHIFT)) -
(u8 *)(sg_page(walk->out.sg) + (walk->out.offset >> PAGE_SHIFT));
skcipher_map_src(walk);
walk->dst.virt.addr = walk->src.virt.addr;
scatterwalk_map(&walk->out);
walk->in.__addr = walk->out.__addr;
if (diff) {
walk->flags |= SKCIPHER_WALK_DIFF;
skcipher_map_dst(walk);
scatterwalk_map(&walk->in);
}
return 0;
@@ -305,14 +280,16 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
return skcipher_walk_next(walk);
}
int skcipher_walk_virt(struct skcipher_walk *walk,
struct skcipher_request *req, bool atomic)
int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
struct skcipher_request *__restrict req, bool atomic)
{
const struct skcipher_alg *alg =
crypto_skcipher_alg(crypto_skcipher_reqtfm(req));
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_alg *alg;
might_sleep_if(req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
alg = crypto_skcipher_alg(tfm);
walk->total = req->cryptlen;
walk->nbytes = 0;
walk->iv = req->iv;
@@ -328,14 +305,9 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
scatterwalk_start(&walk->in, req->src);
scatterwalk_start(&walk->out, req->dst);
/*
* Accessing 'alg' directly generates better code than using the
* crypto_skcipher_blocksize() and similar helper functions here, as it
* prevents the algorithm pointer from being repeatedly reloaded.
*/
walk->blocksize = alg->base.cra_blocksize;
walk->ivsize = alg->co.ivsize;
walk->alignmask = alg->base.cra_alignmask;
walk->blocksize = crypto_skcipher_blocksize(tfm);
walk->ivsize = crypto_skcipher_ivsize(tfm);
walk->alignmask = crypto_skcipher_alignmask(tfm);
if (alg->co.base.cra_type != &crypto_skcipher_type)
walk->stride = alg->co.chunksize;
@@ -346,10 +318,11 @@ int skcipher_walk_virt(struct skcipher_walk *walk,
}
EXPORT_SYMBOL_GPL(skcipher_walk_virt);
static int skcipher_walk_aead_common(struct skcipher_walk *walk,
struct aead_request *req, bool atomic)
static int skcipher_walk_aead_common(struct skcipher_walk *__restrict walk,
struct aead_request *__restrict req,
bool atomic)
{
const struct aead_alg *alg = crypto_aead_alg(crypto_aead_reqtfm(req));
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
walk->nbytes = 0;
walk->iv = req->iv;
@@ -362,30 +335,20 @@ static int skcipher_walk_aead_common(struct skcipher_walk *walk,
if (unlikely(!walk->total))
return 0;
scatterwalk_start(&walk->in, req->src);
scatterwalk_start(&walk->out, req->dst);
scatterwalk_start_at_pos(&walk->in, req->src, req->assoclen);
scatterwalk_start_at_pos(&walk->out, req->dst, req->assoclen);
scatterwalk_copychunks(NULL, &walk->in, req->assoclen, 2);
scatterwalk_copychunks(NULL, &walk->out, req->assoclen, 2);
scatterwalk_done(&walk->in, 0, walk->total);
scatterwalk_done(&walk->out, 0, walk->total);
/*
* Accessing 'alg' directly generates better code than using the
* crypto_aead_blocksize() and similar helper functions here, as it
* prevents the algorithm pointer from being repeatedly reloaded.
*/
walk->blocksize = alg->base.cra_blocksize;
walk->stride = alg->chunksize;
walk->ivsize = alg->ivsize;
walk->alignmask = alg->base.cra_alignmask;
walk->blocksize = crypto_aead_blocksize(tfm);
walk->stride = crypto_aead_chunksize(tfm);
walk->ivsize = crypto_aead_ivsize(tfm);
walk->alignmask = crypto_aead_alignmask(tfm);
return skcipher_walk_first(walk);
}
int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
struct aead_request *req, bool atomic)
int skcipher_walk_aead_encrypt(struct skcipher_walk *__restrict walk,
struct aead_request *__restrict req,
bool atomic)
{
walk->total = req->cryptlen;
@@ -393,8 +356,9 @@ int skcipher_walk_aead_encrypt(struct skcipher_walk *walk,
}
EXPORT_SYMBOL_GPL(skcipher_walk_aead_encrypt);
int skcipher_walk_aead_decrypt(struct skcipher_walk *walk,
struct aead_request *req, bool atomic)
int skcipher_walk_aead_decrypt(struct skcipher_walk *__restrict walk,
struct aead_request *__restrict req,
bool atomic)
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
@@ -612,7 +576,7 @@ static void crypto_skcipher_show(struct seq_file *m, struct crypto_alg *alg)
seq_printf(m, "type : skcipher\n");
seq_printf(m, "async : %s\n",
alg->cra_flags & CRYPTO_ALG_ASYNC ? "yes" : "no");
str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "min keysize : %u\n", skcipher->min_keysize);
seq_printf(m, "max keysize : %u\n", skcipher->max_keysize);
@@ -681,6 +645,7 @@ struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
/* Only sync algorithms allowed. */
mask |= CRYPTO_ALG_ASYNC | CRYPTO_ALG_SKCIPHER_REQSIZE_LARGE;
type &= ~(CRYPTO_ALG_ASYNC | CRYPTO_ALG_SKCIPHER_REQSIZE_LARGE);
tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type, type, mask);


@@ -716,6 +716,207 @@ static inline int do_one_ahash_op(struct ahash_request *req, int ret)
return crypto_wait_req(ret, wait);
}
struct test_mb_ahash_data {
struct scatterlist sg[XBUFSIZE];
char result[64];
struct ahash_request *req;
struct crypto_wait wait;
char *xbuf[XBUFSIZE];
};
static inline int do_mult_ahash_op(struct test_mb_ahash_data *data, u32 num_mb,
int *rc)
{
int i, err;
/* Fire up a bunch of concurrent requests */
err = crypto_ahash_digest(data[0].req);
/* Wait for all requests to finish */
err = crypto_wait_req(err, &data[0].wait);
if (num_mb < 2)
return err;
for (i = 0; i < num_mb; i++) {
rc[i] = ahash_request_err(data[i].req);
if (rc[i]) {
pr_info("concurrent request %d error %d\n", i, rc[i]);
err = rc[i];
}
}
return err;
}
static int test_mb_ahash_jiffies(struct test_mb_ahash_data *data, int blen,
int secs, u32 num_mb)
{
unsigned long start, end;
int bcount;
int ret = 0;
int *rc;
rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
if (!rc)
return -ENOMEM;
for (start = jiffies, end = start + secs * HZ, bcount = 0;
time_before(jiffies, end); bcount++) {
ret = do_mult_ahash_op(data, num_mb, rc);
if (ret)
goto out;
}
pr_cont("%d operations in %d seconds (%llu bytes)\n",
bcount * num_mb, secs, (u64)bcount * blen * num_mb);
out:
kfree(rc);
return ret;
}
static int test_mb_ahash_cycles(struct test_mb_ahash_data *data, int blen,
u32 num_mb)
{
unsigned long cycles = 0;
int ret = 0;
int i;
int *rc;
rc = kcalloc(num_mb, sizeof(*rc), GFP_KERNEL);
if (!rc)
return -ENOMEM;
/* Warm-up run. */
for (i = 0; i < 4; i++) {
ret = do_mult_ahash_op(data, num_mb, rc);
if (ret)
goto out;
}
/* The real thing. */
for (i = 0; i < 8; i++) {
cycles_t start, end;
start = get_cycles();
ret = do_mult_ahash_op(data, num_mb, rc);
end = get_cycles();
if (ret)
goto out;
cycles += end - start;
}
pr_cont("1 operation in %lu cycles (%d bytes)\n",
(cycles + 4) / (8 * num_mb), blen);
out:
kfree(rc);
return ret;
}
static void test_mb_ahash_speed(const char *algo, unsigned int secs,
struct hash_speed *speed, u32 num_mb)
{
struct test_mb_ahash_data *data;
struct crypto_ahash *tfm;
unsigned int i, j, k;
int ret;
data = kcalloc(num_mb, sizeof(*data), GFP_KERNEL);
if (!data)
return;
tfm = crypto_alloc_ahash(algo, 0, 0);
if (IS_ERR(tfm)) {
pr_err("failed to load transform for %s: %ld\n",
algo, PTR_ERR(tfm));
goto free_data;
}
for (i = 0; i < num_mb; ++i) {
if (testmgr_alloc_buf(data[i].xbuf))
goto out;
crypto_init_wait(&data[i].wait);
data[i].req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!data[i].req) {
pr_err("alg: hash: Failed to allocate request for %s\n",
algo);
goto out;
}
if (i) {
ahash_request_set_callback(data[i].req, 0, NULL, NULL);
ahash_request_chain(data[i].req, data[0].req);
} else
ahash_request_set_callback(data[0].req, 0,
crypto_req_done,
&data[0].wait);
sg_init_table(data[i].sg, XBUFSIZE);
for (j = 0; j < XBUFSIZE; j++) {
sg_set_buf(data[i].sg + j, data[i].xbuf[j], PAGE_SIZE);
memset(data[i].xbuf[j], 0xff, PAGE_SIZE);
}
}
pr_info("\ntesting speed of multibuffer %s (%s)\n", algo,
get_driver_name(crypto_ahash, tfm));
for (i = 0; speed[i].blen != 0; i++) {
/* For some reason this only tests digests. */
if (speed[i].blen != speed[i].plen)
continue;
if (speed[i].blen > XBUFSIZE * PAGE_SIZE) {
pr_err("template (%u) too big for tvmem (%lu)\n",
speed[i].blen, XBUFSIZE * PAGE_SIZE);
goto out;
}
if (klen)
crypto_ahash_setkey(tfm, tvmem[0], klen);
for (k = 0; k < num_mb; k++)
ahash_request_set_crypt(data[k].req, data[k].sg,
data[k].result, speed[i].blen);
pr_info("test%3u "
"(%5u byte blocks,%5u bytes per update,%4u updates): ",
i, speed[i].blen, speed[i].plen,
speed[i].blen / speed[i].plen);
if (secs) {
ret = test_mb_ahash_jiffies(data, speed[i].blen, secs,
num_mb);
cond_resched();
} else {
ret = test_mb_ahash_cycles(data, speed[i].blen, num_mb);
}
if (ret) {
pr_err("At least one hashing failed ret=%d\n", ret);
break;
}
}
out:
ahash_request_free(data[0].req);
for (k = 0; k < num_mb; ++k)
testmgr_free_buf(data[k].xbuf);
crypto_free_ahash(tfm);
free_data:
kfree(data);
}
static int test_ahash_jiffies_digest(struct ahash_request *req, int blen,
char *out, int secs)
{
@@ -2383,6 +2584,36 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
test_ahash_speed("sm3", sec, generic_hash_speed_template);
if (mode > 400 && mode < 500) break;
fallthrough;
case 450:
test_mb_ahash_speed("sha1", sec, generic_hash_speed_template,
num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 451:
test_mb_ahash_speed("sha256", sec, generic_hash_speed_template,
num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 452:
test_mb_ahash_speed("sha512", sec, generic_hash_speed_template,
num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 453:
test_mb_ahash_speed("sm3", sec, generic_hash_speed_template,
num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 454:
test_mb_ahash_speed("streebog256", sec,
generic_hash_speed_template, num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 455:
test_mb_ahash_speed("streebog512", sec,
generic_hash_speed_template, num_mb);
if (mode > 400 && mode < 500) break;
fallthrough;
case 499:
break;


@@ -58,6 +58,9 @@ module_param(fuzz_iterations, uint, 0644);
MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");
#endif
/* Multibuffer is unlimited. Set arbitrary limit for testing. */
#define MAX_MB_MSGS 16
#ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
/* a perfect nop */
@@ -299,6 +302,13 @@ struct test_sg_division {
* @key_offset_relative_to_alignmask: if true, add the algorithm's alignmask to
* the @key_offset
* @finalization_type: what finalization function to use for hashes
* @multibuffer: test with multibuffer
* @multibuffer_index: random number used to generate the message index to use
* for multibuffer.
* @multibuffer_uneven: test with multibuffer using uneven lengths
* @multibuffer_lens: random lengths to make chained request uneven
* @multibuffer_count: random number used to generate the num_msgs parameter
* for multibuffer
* @nosimd: execute with SIMD disabled? Requires !CRYPTO_TFM_REQ_MAY_SLEEP.
* This applies to the parts of the operation that aren't controlled
* individually by @nosimd_setkey or @src_divs[].nosimd.
@@ -318,6 +328,11 @@ struct testvec_config {
enum finalization_type finalization_type;
bool nosimd;
bool nosimd_setkey;
bool multibuffer;
unsigned int multibuffer_index;
unsigned int multibuffer_count;
bool multibuffer_uneven;
unsigned int multibuffer_lens[MAX_MB_MSGS];
};
#define TESTVEC_CONFIG_NAMELEN 192
@@ -557,6 +572,7 @@ struct test_sglist {
char *bufs[XBUFSIZE];
struct scatterlist sgl[XBUFSIZE];
struct scatterlist sgl_saved[XBUFSIZE];
struct scatterlist full_sgl[XBUFSIZE];
struct scatterlist *sgl_ptr;
unsigned int nents;
};
@@ -670,6 +686,11 @@ static int build_test_sglist(struct test_sglist *tsgl,
sg_mark_end(&tsgl->sgl[tsgl->nents - 1]);
tsgl->sgl_ptr = tsgl->sgl;
memcpy(tsgl->sgl_saved, tsgl->sgl, tsgl->nents * sizeof(tsgl->sgl[0]));
sg_init_table(tsgl->full_sgl, XBUFSIZE);
for (i = 0; i < XBUFSIZE; i++)
sg_set_buf(tsgl->full_sgl + i, tsgl->bufs[i], PAGE_SIZE * 2);
return 0;
}
@@ -1146,6 +1167,27 @@ static void generate_random_testvec_config(struct rnd_state *rng,
break;
}
if (prandom_bool(rng)) {
int i;
cfg->multibuffer = true;
cfg->multibuffer_count = prandom_u32_state(rng);
cfg->multibuffer_count %= MAX_MB_MSGS;
if (cfg->multibuffer_count++) {
cfg->multibuffer_index = prandom_u32_state(rng);
cfg->multibuffer_index %= cfg->multibuffer_count;
}
cfg->multibuffer_uneven = prandom_bool(rng);
for (i = 0; i < MAX_MB_MSGS; i++)
cfg->multibuffer_lens[i] =
generate_random_length(rng, PAGE_SIZE * 2 * XBUFSIZE);
p += scnprintf(p, end - p, " multibuffer(%d/%d%s)",
cfg->multibuffer_index, cfg->multibuffer_count,
cfg->multibuffer_uneven ? "/uneven" : "");
}
if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP)) {
if (prandom_bool(rng)) {
cfg->nosimd = true;
@@ -1450,6 +1492,7 @@ static int do_ahash_op(int (*op)(struct ahash_request *req),
struct ahash_request *req,
struct crypto_wait *wait, bool nosimd)
{
struct ahash_request *r2;
int err;
if (nosimd)
@@ -1460,7 +1503,15 @@ static int do_ahash_op(int (*op)(struct ahash_request *req),
if (nosimd)
crypto_reenable_simd_for_test();
return crypto_wait_req(err, wait);
err = crypto_wait_req(err, wait);
if (err)
return err;
list_for_each_entry(r2, &req->base.list, base.list)
if (r2->base.err)
return r2->base.err;
return 0;
}
static int check_nonfinal_ahash_op(const char *op, int err,
@@ -1481,20 +1532,65 @@ static int check_nonfinal_ahash_op(const char *op, int err,
return 0;
}
static void setup_ahash_multibuffer(
struct ahash_request *reqs[MAX_MB_MSGS],
const struct testvec_config *cfg,
struct test_sglist *tsgl)
{
struct scatterlist *sg = tsgl->full_sgl;
static u8 trash[HASH_MAX_DIGESTSIZE];
struct ahash_request *req = reqs[0];
unsigned int num_msgs;
unsigned int msg_idx;
int i;
if (!cfg->multibuffer)
return;
num_msgs = cfg->multibuffer_count;
if (num_msgs == 1)
return;
msg_idx = cfg->multibuffer_index;
for (i = 1; i < num_msgs; i++) {
struct ahash_request *r2 = reqs[i];
unsigned int nbytes = req->nbytes;
if (cfg->multibuffer_uneven)
nbytes = cfg->multibuffer_lens[i];
ahash_request_set_callback(r2, req->base.flags, NULL, NULL);
ahash_request_set_crypt(r2, sg, trash, nbytes);
ahash_request_chain(r2, req);
}
if (msg_idx) {
reqs[msg_idx]->src = req->src;
reqs[msg_idx]->nbytes = req->nbytes;
reqs[msg_idx]->result = req->result;
req->src = sg;
if (cfg->multibuffer_uneven)
req->nbytes = cfg->multibuffer_lens[0];
req->result = trash;
}
}
/* Test one hash test vector in one configuration, using the ahash API */
static int test_ahash_vec_cfg(const struct hash_testvec *vec,
const char *vec_name,
const struct testvec_config *cfg,
struct ahash_request *req,
struct ahash_request *reqs[MAX_MB_MSGS],
struct test_sglist *tsgl,
u8 *hashstate)
{
struct ahash_request *req = reqs[0];
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
const unsigned int statesize = crypto_ahash_statesize(tfm);
const char *driver = crypto_ahash_driver_name(tfm);
const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
const struct test_sg_division *divs[XBUFSIZE];
struct ahash_request *reqi = req;
DECLARE_CRYPTO_WAIT(wait);
unsigned int i;
struct scatterlist *pending_sgl;
@@ -1502,6 +1598,9 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
int err;
if (cfg->multibuffer)
reqi = reqs[cfg->multibuffer_index];
/* Set the key, if specified */
if (vec->ksize) {
err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
@@ -1531,7 +1630,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
/* Do the actual hashing */
testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
testmgr_poison(reqi->__ctx, crypto_ahash_reqsize(tfm));
testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);
if (cfg->finalization_type == FINALIZATION_TYPE_DIGEST ||
@@ -1540,6 +1639,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
ahash_request_set_callback(req, req_flags, crypto_req_done,
&wait);
ahash_request_set_crypt(req, tsgl->sgl, result, vec->psize);
setup_ahash_multibuffer(reqs, cfg, tsgl);
err = do_ahash_op(crypto_ahash_digest, req, &wait, cfg->nosimd);
if (err) {
if (err == vec->digest_error)
@@ -1561,6 +1661,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
ahash_request_set_crypt(req, NULL, result, 0);
setup_ahash_multibuffer(reqs, cfg, tsgl);
err = do_ahash_op(crypto_ahash_init, req, &wait, cfg->nosimd);
err = check_nonfinal_ahash_op("init", err, result, digestsize,
driver, vec_name, cfg);
@@ -1577,6 +1678,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
crypto_req_done, &wait);
ahash_request_set_crypt(req, pending_sgl, result,
pending_len);
setup_ahash_multibuffer(reqs, cfg, tsgl);
err = do_ahash_op(crypto_ahash_update, req, &wait,
divs[i]->nosimd);
err = check_nonfinal_ahash_op("update", err,
@@ -1591,7 +1693,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
/* Test ->export() and ->import() */
testmgr_poison(hashstate + statesize,
TESTMGR_POISON_LEN);
err = crypto_ahash_export(req, hashstate);
err = crypto_ahash_export(reqi, hashstate);
err = check_nonfinal_ahash_op("export", err,
result, digestsize,
driver, vec_name, cfg);
@@ -1604,8 +1706,8 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
return -EOVERFLOW;
}
testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
err = crypto_ahash_import(req, hashstate);
testmgr_poison(reqi->__ctx, crypto_ahash_reqsize(tfm));
err = crypto_ahash_import(reqi, hashstate);
err = check_nonfinal_ahash_op("import", err,
result, digestsize,
driver, vec_name, cfg);
@@ -1619,6 +1721,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
ahash_request_set_crypt(req, pending_sgl, result, pending_len);
setup_ahash_multibuffer(reqs, cfg, tsgl);
if (cfg->finalization_type == FINALIZATION_TYPE_FINAL) {
/* finish with update() and final() */
err = do_ahash_op(crypto_ahash_update, req, &wait, cfg->nosimd);
@@ -1650,7 +1753,7 @@ result_ready:
static int test_hash_vec_cfg(const struct hash_testvec *vec,
const char *vec_name,
const struct testvec_config *cfg,
struct ahash_request *req,
struct ahash_request *reqs[MAX_MB_MSGS],
struct shash_desc *desc,
struct test_sglist *tsgl,
u8 *hashstate)
@@ -1670,11 +1773,12 @@ static int test_hash_vec_cfg(const struct hash_testvec *vec,
return err;
}
return test_ahash_vec_cfg(vec, vec_name, cfg, req, tsgl, hashstate);
return test_ahash_vec_cfg(vec, vec_name, cfg, reqs, tsgl, hashstate);
}
static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
struct ahash_request *req, struct shash_desc *desc,
struct ahash_request *reqs[MAX_MB_MSGS],
struct shash_desc *desc,
struct test_sglist *tsgl, u8 *hashstate)
{
char vec_name[16];
@@ -1686,7 +1790,7 @@ static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
for (i = 0; i < ARRAY_SIZE(default_hash_testvec_configs); i++) {
err = test_hash_vec_cfg(vec, vec_name,
&default_hash_testvec_configs[i],
req, desc, tsgl, hashstate);
reqs, desc, tsgl, hashstate);
if (err)
return err;
}
@@ -1703,7 +1807,7 @@ static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
generate_random_testvec_config(&rng, &cfg, cfgname,
sizeof(cfgname));
err = test_hash_vec_cfg(vec, vec_name, &cfg,
req, desc, tsgl, hashstate);
reqs, desc, tsgl, hashstate);
if (err)
return err;
cond_resched();
@@ -1762,11 +1866,12 @@ done:
*/
static int test_hash_vs_generic_impl(const char *generic_driver,
unsigned int maxkeysize,
struct ahash_request *req,
struct ahash_request *reqs[MAX_MB_MSGS],
struct shash_desc *desc,
struct test_sglist *tsgl,
u8 *hashstate)
{
struct ahash_request *req = reqs[0];
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
const unsigned int blocksize = crypto_ahash_blocksize(tfm);
@@ -1864,7 +1969,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
sizeof(cfgname));
err = test_hash_vec_cfg(&vec, vec_name, cfg,
req, desc, tsgl, hashstate);
reqs, desc, tsgl, hashstate);
if (err)
goto out;
cond_resched();
@@ -1882,7 +1987,7 @@ out:
#else /* !CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
static int test_hash_vs_generic_impl(const char *generic_driver,
unsigned int maxkeysize,
struct ahash_request *req,
struct ahash_request *reqs[MAX_MB_MSGS],
struct shash_desc *desc,
struct test_sglist *tsgl,
u8 *hashstate)
@@ -1929,8 +2034,8 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
u32 type, u32 mask,
const char *generic_driver, unsigned int maxkeysize)
{
struct ahash_request *reqs[MAX_MB_MSGS] = {};
struct crypto_ahash *atfm = NULL;
struct ahash_request *req = NULL;
struct crypto_shash *stfm = NULL;
struct shash_desc *desc = NULL;
struct test_sglist *tsgl = NULL;
@@ -1954,12 +2059,14 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
}
driver = crypto_ahash_driver_name(atfm);
req = ahash_request_alloc(atfm, GFP_KERNEL);
if (!req) {
pr_err("alg: hash: failed to allocate request for %s\n",
driver);
err = -ENOMEM;
goto out;
for (i = 0; i < MAX_MB_MSGS; i++) {
reqs[i] = ahash_request_alloc(atfm, GFP_KERNEL);
if (!reqs[i]) {
pr_err("alg: hash: failed to allocate request for %s\n",
driver);
err = -ENOMEM;
goto out;
}
}
/*
@@ -1995,12 +2102,12 @@ static int __alg_test_hash(const struct hash_testvec *vecs,
if (fips_enabled && vecs[i].fips_skip)
continue;
err = test_hash_vec(&vecs[i], i, req, desc, tsgl, hashstate);
err = test_hash_vec(&vecs[i], i, reqs, desc, tsgl, hashstate);
if (err)
goto out;
cond_resched();
}
err = test_hash_vs_generic_impl(generic_driver, maxkeysize, req,
err = test_hash_vs_generic_impl(generic_driver, maxkeysize, reqs,
desc, tsgl, hashstate);
out:
kfree(hashstate);
@@ -2010,7 +2117,12 @@ out:
}
kfree(desc);
crypto_free_shash(stfm);
ahash_request_free(req);
if (reqs[0]) {
ahash_request_set_callback(reqs[0], 0, NULL, NULL);
for (i = 1; i < MAX_MB_MSGS && reqs[i]; i++)
ahash_request_chain(reqs[i], reqs[0]);
ahash_request_free(reqs[0]);
}
crypto_free_ahash(atfm);
return err;
}
@@ -3320,139 +3432,54 @@ out:
return err;
}
static int test_comp(struct crypto_comp *tfm,
const struct comp_testvec *ctemplate,
const struct comp_testvec *dtemplate,
int ctcount, int dtcount)
{
const char *algo = crypto_tfm_alg_driver_name(crypto_comp_tfm(tfm));
char *output, *decomp_output;
unsigned int i;
int ret;
output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!output)
return -ENOMEM;
decomp_output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!decomp_output) {
kfree(output);
return -ENOMEM;
}
for (i = 0; i < ctcount; i++) {
int ilen;
unsigned int dlen = COMP_BUF_SIZE;
memset(output, 0, COMP_BUF_SIZE);
memset(decomp_output, 0, COMP_BUF_SIZE);
ilen = ctemplate[i].inlen;
ret = crypto_comp_compress(tfm, ctemplate[i].input,
ilen, output, &dlen);
if (ret) {
printk(KERN_ERR "alg: comp: compression failed "
"on test %d for %s: ret=%d\n", i + 1, algo,
-ret);
goto out;
}
ilen = dlen;
dlen = COMP_BUF_SIZE;
ret = crypto_comp_decompress(tfm, output,
ilen, decomp_output, &dlen);
if (ret) {
pr_err("alg: comp: compression failed: decompress: on test %d for %s failed: ret=%d\n",
i + 1, algo, -ret);
goto out;
}
if (dlen != ctemplate[i].inlen) {
printk(KERN_ERR "alg: comp: Compression test %d "
"failed for %s: output len = %d\n", i + 1, algo,
dlen);
ret = -EINVAL;
goto out;
}
if (memcmp(decomp_output, ctemplate[i].input,
ctemplate[i].inlen)) {
pr_err("alg: comp: compression failed: output differs: on test %d for %s\n",
i + 1, algo);
hexdump(decomp_output, dlen);
ret = -EINVAL;
goto out;
}
}
for (i = 0; i < dtcount; i++) {
int ilen;
unsigned int dlen = COMP_BUF_SIZE;
memset(decomp_output, 0, COMP_BUF_SIZE);
ilen = dtemplate[i].inlen;
ret = crypto_comp_decompress(tfm, dtemplate[i].input,
ilen, decomp_output, &dlen);
if (ret) {
printk(KERN_ERR "alg: comp: decompression failed "
"on test %d for %s: ret=%d\n", i + 1, algo,
-ret);
goto out;
}
if (dlen != dtemplate[i].outlen) {
printk(KERN_ERR "alg: comp: Decompression test %d "
"failed for %s: output len = %d\n", i + 1, algo,
dlen);
ret = -EINVAL;
goto out;
}
if (memcmp(decomp_output, dtemplate[i].output, dlen)) {
printk(KERN_ERR "alg: comp: Decompression test %d "
"failed for %s\n", i + 1, algo);
hexdump(decomp_output, dlen);
ret = -EINVAL;
goto out;
}
}
ret = 0;
out:
kfree(decomp_output);
kfree(output);
return ret;
}
static int test_acomp(struct crypto_acomp *tfm,
const struct comp_testvec *ctemplate,
const struct comp_testvec *dtemplate,
int ctcount, int dtcount)
{
const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm));
unsigned int i;
char *output, *decomp_out;
int ret;
struct scatterlist src, dst;
struct acomp_req *req;
struct scatterlist *src = NULL, *dst = NULL;
struct acomp_req *reqs[MAX_MB_MSGS] = {};
char *decomp_out[MAX_MB_MSGS] = {};
char *output[MAX_MB_MSGS] = {};
struct crypto_wait wait;
struct acomp_req *req;
int ret = -ENOMEM;
unsigned int i;
output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!output)
return -ENOMEM;
src = kmalloc_array(MAX_MB_MSGS, sizeof(*src), GFP_KERNEL);
if (!src)
goto out;
dst = kmalloc_array(MAX_MB_MSGS, sizeof(*dst), GFP_KERNEL);
if (!dst)
goto out;
decomp_out = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!decomp_out) {
kfree(output);
return -ENOMEM;
for (i = 0; i < MAX_MB_MSGS; i++) {
reqs[i] = acomp_request_alloc(tfm);
if (!reqs[i])
goto out;
acomp_request_set_callback(reqs[i],
CRYPTO_TFM_REQ_MAY_SLEEP |
CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &wait);
if (i)
acomp_request_chain(reqs[i], reqs[0]);
output[i] = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!output[i])
goto out;
decomp_out[i] = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
if (!decomp_out[i])
goto out;
}
for (i = 0; i < ctcount; i++) {
unsigned int dlen = COMP_BUF_SIZE;
int ilen = ctemplate[i].inlen;
void *input_vec;
int j;
input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL);
if (!input_vec) {
@@ -3460,85 +3487,61 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
memset(output, 0, dlen);
crypto_init_wait(&wait);
sg_init_one(&src, input_vec, ilen);
sg_init_one(&dst, output, dlen);
sg_init_one(src, input_vec, ilen);
req = acomp_request_alloc(tfm);
if (!req) {
pr_err("alg: acomp: request alloc failed for %s\n",
algo);
kfree(input_vec);
ret = -ENOMEM;
goto out;
for (j = 0; j < MAX_MB_MSGS; j++) {
sg_init_one(dst + j, output[j], dlen);
acomp_request_set_params(reqs[j], src, dst + j, ilen, dlen);
}
acomp_request_set_params(req, &src, &dst, ilen, dlen);
acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &wait);
req = reqs[0];
ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
if (ret) {
pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
}
ilen = req->dlen;
dlen = COMP_BUF_SIZE;
sg_init_one(&src, output, ilen);
sg_init_one(&dst, decomp_out, dlen);
crypto_init_wait(&wait);
acomp_request_set_params(req, &src, &dst, ilen, dlen);
ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
if (ret) {
pr_err("alg: acomp: compression failed on test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
for (j = 0; j < MAX_MB_MSGS; j++) {
sg_init_one(src + j, output[j], ilen);
sg_init_one(dst + j, decomp_out[j], dlen);
acomp_request_set_params(reqs[j], src + j, dst + j, ilen, dlen);
}
if (req->dlen != ctemplate[i].inlen) {
pr_err("alg: acomp: Compression test %d failed for %s: output len = %d\n",
i + 1, algo, req->dlen);
ret = -EINVAL;
kfree(input_vec);
acomp_request_free(req);
goto out;
}
crypto_wait_req(crypto_acomp_decompress(req), &wait);
for (j = 0; j < MAX_MB_MSGS; j++) {
ret = reqs[j]->base.err;
if (ret) {
pr_err("alg: acomp: compression failed on test %d (%d) for %s: ret=%d\n",
i + 1, j, algo, -ret);
kfree(input_vec);
goto out;
}
if (memcmp(input_vec, decomp_out, req->dlen)) {
pr_err("alg: acomp: Compression test %d failed for %s\n",
i + 1, algo);
hexdump(output, req->dlen);
ret = -EINVAL;
kfree(input_vec);
acomp_request_free(req);
goto out;
}
if (reqs[j]->dlen != ctemplate[i].inlen) {
pr_err("alg: acomp: Compression test %d (%d) failed for %s: output len = %d\n",
i + 1, j, algo, reqs[j]->dlen);
ret = -EINVAL;
kfree(input_vec);
goto out;
}
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
crypto_init_wait(&wait);
sg_init_one(&src, input_vec, ilen);
acomp_request_set_params(req, &src, NULL, ilen, 0);
ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
if (ret) {
pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
if (memcmp(input_vec, decomp_out[j], reqs[j]->dlen)) {
pr_err("alg: acomp: Compression test %d (%d) failed for %s\n",
i + 1, j, algo);
hexdump(output[j], reqs[j]->dlen);
ret = -EINVAL;
kfree(input_vec);
goto out;
}
}
#endif
kfree(input_vec);
acomp_request_free(req);
}
for (i = 0; i < dtcount; i++) {
@@ -3552,10 +3555,9 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
memset(output, 0, dlen);
crypto_init_wait(&wait);
sg_init_one(&src, input_vec, ilen);
sg_init_one(&dst, output, dlen);
sg_init_one(src, input_vec, ilen);
sg_init_one(dst, output[0], dlen);
req = acomp_request_alloc(tfm);
if (!req) {
@@ -3566,7 +3568,7 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
acomp_request_set_params(req, &src, &dst, ilen, dlen);
acomp_request_set_params(req, src, dst, ilen, dlen);
acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &wait);
@@ -3588,30 +3590,16 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
if (memcmp(output, dtemplate[i].output, req->dlen)) {
if (memcmp(output[0], dtemplate[i].output, req->dlen)) {
pr_err("alg: acomp: Decompression test %d failed for %s\n",
i + 1, algo);
hexdump(output, req->dlen);
hexdump(output[0], req->dlen);
ret = -EINVAL;
kfree(input_vec);
acomp_request_free(req);
goto out;
}
#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
crypto_init_wait(&wait);
acomp_request_set_params(req, &src, NULL, ilen, 0);
ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
if (ret) {
pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
i + 1, algo, -ret);
kfree(input_vec);
acomp_request_free(req);
goto out;
}
#endif
kfree(input_vec);
acomp_request_free(req);
}
@@ -3619,8 +3607,13 @@ static int test_acomp(struct crypto_acomp *tfm,
ret = 0;
out:
kfree(decomp_out);
kfree(output);
acomp_request_free(reqs[0]);
for (i = 0; i < MAX_MB_MSGS; i++) {
kfree(output[i]);
kfree(decomp_out[i]);
}
kfree(dst);
kfree(src);
return ret;
}
@@ -3713,42 +3706,22 @@ static int alg_test_cipher(const struct alg_test_desc *desc,
static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
struct crypto_comp *comp;
struct crypto_acomp *acomp;
int err;
u32 algo_type = type & CRYPTO_ALG_TYPE_ACOMPRESS_MASK;
if (algo_type == CRYPTO_ALG_TYPE_ACOMPRESS) {
acomp = crypto_alloc_acomp(driver, type, mask);
if (IS_ERR(acomp)) {
if (PTR_ERR(acomp) == -ENOENT)
return 0;
pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(acomp));
return PTR_ERR(acomp);
}
err = test_acomp(acomp, desc->suite.comp.comp.vecs,
desc->suite.comp.decomp.vecs,
desc->suite.comp.comp.count,
desc->suite.comp.decomp.count);
crypto_free_acomp(acomp);
} else {
comp = crypto_alloc_comp(driver, type, mask);
if (IS_ERR(comp)) {
if (PTR_ERR(comp) == -ENOENT)
return 0;
pr_err("alg: comp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(comp));
return PTR_ERR(comp);
}
err = test_comp(comp, desc->suite.comp.comp.vecs,
desc->suite.comp.decomp.vecs,
desc->suite.comp.comp.count,
desc->suite.comp.decomp.count);
crypto_free_comp(comp);
acomp = crypto_alloc_acomp(driver, type, mask);
if (IS_ERR(acomp)) {
if (PTR_ERR(acomp) == -ENOENT)
return 0;
pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(acomp));
return PTR_ERR(acomp);
}
err = test_acomp(acomp, desc->suite.comp.comp.vecs,
desc->suite.comp.decomp.vecs,
desc->suite.comp.comp.count,
desc->suite.comp.decomp.count);
crypto_free_acomp(acomp);
return err;
}
@@ -4328,7 +4301,7 @@ static int test_sig_one(struct crypto_sig *tfm, const struct sig_testvec *vecs)
if (vecs->public_key_vec)
return 0;
sig_size = crypto_sig_keysize(tfm);
sig_size = crypto_sig_maxsize(tfm);
if (sig_size < vecs->c_size) {
pr_err("alg: sig: invalid maxsize %u\n", sig_size);
return -EINVAL;
@@ -4340,13 +4313,14 @@ static int test_sig_one(struct crypto_sig *tfm, const struct sig_testvec *vecs)
/* Run asymmetric signature generation */
err = crypto_sig_sign(tfm, vecs->m, vecs->m_size, sig, sig_size);
if (err) {
if (err < 0) {
pr_err("alg: sig: sign test failed: err %d\n", err);
return err;
}
/* Verify that generated signature equals cooked signature */
if (memcmp(sig, vecs->c, vecs->c_size) ||
if (err != vecs->c_size ||
memcmp(sig, vecs->c, vecs->c_size) ||
memchr_inv(sig + vecs->c_size, 0, sig_size - vecs->c_size)) {
pr_err("alg: sig: sign test failed: invalid output\n");
hexdump(sig, sig_size);
@@ -4504,6 +4478,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "authenc(hmac(sha256),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha256),cts(cbc(aes)))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(krb5_test_aes128_cts_hmac_sha256_128)
}
}, {
.alg = "authenc(hmac(sha256),rfc3686(ctr(aes)))",
.test = alg_test_null,
@@ -4524,6 +4504,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "authenc(hmac(sha384),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),cts(cbc(aes)))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(krb5_test_aes256_cts_hmac_sha384_192)
}
}, {
.alg = "authenc(hmac(sha384),rfc3686(ctr(aes)))",
.test = alg_test_null,
@@ -4742,9 +4728,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(sm4_cmac128_tv_template)
}
}, {
.alg = "compress_null",
.test = alg_test_null,
}, {
.alg = "crc32",
.test = alg_test_hash,
@@ -5383,6 +5366,10 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "jitterentropy_rng",
.fips_allowed = 1,
.test = alg_test_null,
}, {
.alg = "krb5enc(cmac(camellia),cts(cbc(camellia)))",
.test = alg_test_aead,
.suite.aead = __VECS(krb5_test_camellia_cts_cmac)
}, {
.alg = "lrw(aes)",
.generic_driver = "lrw(ecb(aes-generic))",


@@ -38591,4 +38591,355 @@ static const struct cipher_testvec aes_hctr2_tv_template[] = {
};
#ifdef __LITTLE_ENDIAN
#define AUTHENC_KEY_HEADER(enckeylen) \
"\x08\x00\x01\x00" /* LE rtattr */ \
enckeylen /* crypto_authenc_key_param */
#else
#define AUTHENC_KEY_HEADER(enckeylen) \
"\x00\x08\x00\x01" /* BE rtattr */ \
enckeylen /* crypto_authenc_key_param */
#endif
static const struct aead_testvec krb5_test_aes128_cts_hmac_sha256_128[] = {
/* rfc8009 Appendix A */
{
/* "enc no plain" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x9F\xDA\x0E\x56\xAB\x2D\x85\xE1\x56\x9A\x68\x86\x96\xC2\x6A\x6C" // Ki
"\x9B\x19\x7D\xD1\xE8\xC5\x60\x9D\x6E\x67\xC3\xE3\x7C\x62\xC7\x2E", // Ke
.klen = 4 + 4 + 16 + 16,
.ptext =
"\x7E\x58\x95\xEA\xF2\x67\x24\x35\xBA\xD8\x17\xF5\x45\xA3\x71\x48" // Confounder
"", // Plain
.plen = 16 + 0,
.ctext =
"\xEF\x85\xFB\x89\x0B\xB8\x47\x2F\x4D\xAB\x20\x39\x4D\xCA\x78\x1D"
"\xAD\x87\x7E\xDA\x39\xD5\x0C\x87\x0C\x0D\x5A\x0A\x8E\x48\xC7\x18",
.clen = 16 + 0 + 16,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain<block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x9F\xDA\x0E\x56\xAB\x2D\x85\xE1\x56\x9A\x68\x86\x96\xC2\x6A\x6C" // Ki
"\x9B\x19\x7D\xD1\xE8\xC5\x60\x9D\x6E\x67\xC3\xE3\x7C\x62\xC7\x2E", // Ke
.klen = 4 + 4 + 16 + 16,
.ptext =
"\x7B\xCA\x28\x5E\x2F\xD4\x13\x0F\xB5\x5B\x1A\x5C\x83\xBC\x5B\x24" // Confounder
"\x00\x01\x02\x03\x04\x05", // Plain
.plen = 16 + 6,
.ctext =
"\x84\xD7\xF3\x07\x54\xED\x98\x7B\xAB\x0B\xF3\x50\x6B\xEB\x09\xCF"
"\xB5\x54\x02\xCE\xF7\xE6\x87\x7C\xE9\x9E\x24\x7E\x52\xD1\x6E\xD4"
"\x42\x1D\xFD\xF8\x97\x6C",
.clen = 16 + 6 + 16,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain==block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x9F\xDA\x0E\x56\xAB\x2D\x85\xE1\x56\x9A\x68\x86\x96\xC2\x6A\x6C" // Ki
"\x9B\x19\x7D\xD1\xE8\xC5\x60\x9D\x6E\x67\xC3\xE3\x7C\x62\xC7\x2E", // Ke
.klen = 4 + 4 + 16 + 16,
.ptext =
"\x56\xAB\x21\x71\x3F\xF6\x2C\x0A\x14\x57\x20\x0F\x6F\xA9\x94\x8F" // Confounder
"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F", // Plain
.plen = 16 + 16,
.ctext =
"\x35\x17\xD6\x40\xF5\x0D\xDC\x8A\xD3\x62\x87\x22\xB3\x56\x9D\x2A"
"\xE0\x74\x93\xFA\x82\x63\x25\x40\x80\xEA\x65\xC1\x00\x8E\x8F\xC2"
"\x95\xFB\x48\x52\xE7\xD8\x3E\x1E\x7C\x48\xC3\x7E\xEB\xE6\xB0\xD3",
.clen = 16 + 16 + 16,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain>block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x9F\xDA\x0E\x56\xAB\x2D\x85\xE1\x56\x9A\x68\x86\x96\xC2\x6A\x6C" // Ki
"\x9B\x19\x7D\xD1\xE8\xC5\x60\x9D\x6E\x67\xC3\xE3\x7C\x62\xC7\x2E", // Ke
.klen = 4 + 4 + 16 + 16,
.ptext =
"\xA7\xA4\xE2\x9A\x47\x28\xCE\x10\x66\x4F\xB6\x4E\x49\xAD\x3F\xAC" // Confounder
"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
"\x10\x11\x12\x13\x14", // Plain
.plen = 16 + 21,
.ctext =
"\x72\x0F\x73\xB1\x8D\x98\x59\xCD\x6C\xCB\x43\x46\x11\x5C\xD3\x36"
"\xC7\x0F\x58\xED\xC0\xC4\x43\x7C\x55\x73\x54\x4C\x31\xC8\x13\xBC"
"\xE1\xE6\xD0\x72\xC1\x86\xB3\x9A\x41\x3C\x2F\x92\xCA\x9B\x83\x34"
"\xA2\x87\xFF\xCB\xFC",
.clen = 16 + 21 + 16,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
},
};
static const struct aead_testvec krb5_test_aes256_cts_hmac_sha384_192[] = {
/* rfc8009 Appendix A */
{
/* "enc no plain" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x69\xB1\x65\x14\xE3\xCD\x8E\x56\xB8\x20\x10\xD5\xC7\x30\x12\xB6"
"\x22\xC4\xD0\x0F\xFC\x23\xED\x1F" // Ki
"\x56\xAB\x22\xBE\xE6\x3D\x82\xD7\xBC\x52\x27\xF6\x77\x3F\x8E\xA7"
"\xA5\xEB\x1C\x82\x51\x60\xC3\x83\x12\x98\x0C\x44\x2E\x5C\x7E\x49", // Ke
.klen = 4 + 4 + 32 + 24,
.ptext =
"\xF7\x64\xE9\xFA\x15\xC2\x76\x47\x8B\x2C\x7D\x0C\x4E\x5F\x58\xE4" // Confounder
"", // Plain
.plen = 16 + 0,
.ctext =
"\x41\xF5\x3F\xA5\xBF\xE7\x02\x6D\x91\xFA\xF9\xBE\x95\x91\x95\xA0"
"\x58\x70\x72\x73\xA9\x6A\x40\xF0\xA0\x19\x60\x62\x1A\xC6\x12\x74"
"\x8B\x9B\xBF\xBE\x7E\xB4\xCE\x3C",
.clen = 16 + 0 + 24,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain<block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x69\xB1\x65\x14\xE3\xCD\x8E\x56\xB8\x20\x10\xD5\xC7\x30\x12\xB6"
"\x22\xC4\xD0\x0F\xFC\x23\xED\x1F" // Ki
"\x56\xAB\x22\xBE\xE6\x3D\x82\xD7\xBC\x52\x27\xF6\x77\x3F\x8E\xA7"
"\xA5\xEB\x1C\x82\x51\x60\xC3\x83\x12\x98\x0C\x44\x2E\x5C\x7E\x49", // Ke
.klen = 4 + 4 + 32 + 24,
.ptext =
"\xB8\x0D\x32\x51\xC1\xF6\x47\x14\x94\x25\x6F\xFE\x71\x2D\x0B\x9A" // Confounder
"\x00\x01\x02\x03\x04\x05", // Plain
.plen = 16 + 6,
.ctext =
"\x4E\xD7\xB3\x7C\x2B\xCA\xC8\xF7\x4F\x23\xC1\xCF\x07\xE6\x2B\xC7"
"\xB7\x5F\xB3\xF6\x37\xB9\xF5\x59\xC7\xF6\x64\xF6\x9E\xAB\x7B\x60"
"\x92\x23\x75\x26\xEA\x0D\x1F\x61\xCB\x20\xD6\x9D\x10\xF2",
.clen = 16 + 6 + 24,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain==block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x69\xB1\x65\x14\xE3\xCD\x8E\x56\xB8\x20\x10\xD5\xC7\x30\x12\xB6"
"\x22\xC4\xD0\x0F\xFC\x23\xED\x1F" // Ki
"\x56\xAB\x22\xBE\xE6\x3D\x82\xD7\xBC\x52\x27\xF6\x77\x3F\x8E\xA7"
"\xA5\xEB\x1C\x82\x51\x60\xC3\x83\x12\x98\x0C\x44\x2E\x5C\x7E\x49", // Ke
.klen = 4 + 4 + 32 + 24,
.ptext =
"\x53\xBF\x8A\x0D\x10\x52\x65\xD4\xE2\x76\x42\x86\x24\xCE\x5E\x63" // Confounder
"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F", // Plain
.plen = 16 + 16,
.ctext =
"\xBC\x47\xFF\xEC\x79\x98\xEB\x91\xE8\x11\x5C\xF8\xD1\x9D\xAC\x4B"
"\xBB\xE2\xE1\x63\xE8\x7D\xD3\x7F\x49\xBE\xCA\x92\x02\x77\x64\xF6"
"\x8C\xF5\x1F\x14\xD7\x98\xC2\x27\x3F\x35\xDF\x57\x4D\x1F\x93\x2E"
"\x40\xC4\xFF\x25\x5B\x36\xA2\x66",
.clen = 16 + 16 + 24,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
}, {
/* "enc plain>block" */
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x69\xB1\x65\x14\xE3\xCD\x8E\x56\xB8\x20\x10\xD5\xC7\x30\x12\xB6"
"\x22\xC4\xD0\x0F\xFC\x23\xED\x1F" // Ki
"\x56\xAB\x22\xBE\xE6\x3D\x82\xD7\xBC\x52\x27\xF6\x77\x3F\x8E\xA7"
"\xA5\xEB\x1C\x82\x51\x60\xC3\x83\x12\x98\x0C\x44\x2E\x5C\x7E\x49", // Ke
.klen = 4 + 4 + 32 + 24,
.ptext =
"\x76\x3E\x65\x36\x7E\x86\x4F\x02\xF5\x51\x53\xC7\xE3\xB5\x8A\xF1" // Confounder
"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
"\x10\x11\x12\x13\x14", // Plain
.plen = 16 + 21,
.ctext =
"\x40\x01\x3E\x2D\xF5\x8E\x87\x51\x95\x7D\x28\x78\xBC\xD2\xD6\xFE"
"\x10\x1C\xCF\xD5\x56\xCB\x1E\xAE\x79\xDB\x3C\x3E\xE8\x64\x29\xF2"
"\xB2\xA6\x02\xAC\x86\xFE\xF6\xEC\xB6\x47\xD6\x29\x5F\xAE\x07\x7A"
"\x1F\xEB\x51\x75\x08\xD2\xC1\x6B\x41\x92\xE0\x1F\x62",
.clen = 16 + 21 + 24,
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00", // IV
.alen = 16,
},
};
static const struct aead_testvec krb5_test_camellia_cts_cmac[] = {
/* rfc6803 sec 10 */
{
// "enc no plain"
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x45\xeb\x66\xe2\xef\xa8\x77\x8f\x7d\xf1\x46\x54\x53\x05\x98\x06" // Ki
"\xe9\x9b\x82\xb3\x6c\x4a\xe8\xea\x19\xe9\x5d\xfa\x9e\xde\x88\x2c", // Ke
.klen = 4 + 4 + 16 * 2,
.ptext =
"\xB6\x98\x22\xA1\x9A\x6B\x09\xC0\xEB\xC8\x55\x7D\x1F\x1B\x6C\x0A" // Confounder
"", // Plain
.plen = 16 + 0,
.ctext =
"\xC4\x66\xF1\x87\x10\x69\x92\x1E\xDB\x7C\x6F\xDE\x24\x4A\x52\xDB"
"\x0B\xA1\x0E\xDC\x19\x7B\xDB\x80\x06\x65\x8C\xA3\xCC\xCE\x6E\xB8",
.clen = 16 + 0 + 16,
}, {
// "enc 1 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x13\x5f\xe7\x11\x6f\x53\xc2\xaa\x36\x12\xb7\xea\xe0\xf2\x84\xaa" // Ki
"\xa7\xed\xcd\x53\x97\xea\x6d\x12\xb0\xaf\xf4\xcb\x8d\xaa\x57\xad", // Ke
.klen = 4 + 4 + 16 * 2,
.ptext =
"\x6F\x2F\xC3\xC2\xA1\x66\xFD\x88\x98\x96\x7A\x83\xDE\x95\x96\xD9" // Confounder
"1", // Plain
.plen = 16 + 1,
.ctext =
"\x84\x2D\x21\xFD\x95\x03\x11\xC0\xDD\x46\x4A\x3F\x4B\xE8\xD6\xDA"
"\x88\xA5\x6D\x55\x9C\x9B\x47\xD3\xF9\xA8\x50\x67\xAF\x66\x15\x59"
"\xB8",
.clen = 16 + 1 + 16,
}, {
// "enc 9 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x10\x2c\x34\xd0\x75\x74\x9f\x77\x8a\x15\xca\xd1\xe9\x7d\xa9\x86" // Ki
"\xdd\xe4\x2e\xca\x7c\xd9\x86\x3f\xc3\xce\x89\xcb\xc9\x43\x62\xd7", // Ke
.klen = 4 + 4 + 16 * 2,
.ptext =
"\xA5\xB4\xA7\x1E\x07\x7A\xEE\xF9\x3C\x87\x63\xC1\x8F\xDB\x1F\x10" // Confounder
"9 bytesss", // Plain
.plen = 16 + 9,
.ctext =
"\x61\x9F\xF0\x72\xE3\x62\x86\xFF\x0A\x28\xDE\xB3\xA3\x52\xEC\x0D"
"\x0E\xDF\x5C\x51\x60\xD6\x63\xC9\x01\x75\x8C\xCF\x9D\x1E\xD3\x3D"
"\x71\xDB\x8F\x23\xAA\xBF\x83\x48\xA0",
.clen = 16 + 9 + 16,
}, {
// "enc 13 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\xb8\xc4\x38\xcc\x1a\x00\x60\xfc\x91\x3a\x8e\x07\x16\x96\xbd\x08" // Ki
"\xc3\x11\x3a\x25\x85\x90\xb9\xae\xbf\x72\x1b\x1a\xf6\xb0\xcb\xf8", // Ke
.klen = 4 + 4 + 16 * 2,
.ptext =
"\x19\xFE\xE4\x0D\x81\x0C\x52\x4B\x5B\x22\xF0\x18\x74\xC6\x93\xDA" // Confounder
"13 bytes byte", // Plain
.plen = 16 + 13,
.ctext =
"\xB8\xEC\xA3\x16\x7A\xE6\x31\x55\x12\xE5\x9F\x98\xA7\xC5\x00\x20"
"\x5E\x5F\x63\xFF\x3B\xB3\x89\xAF\x1C\x41\xA2\x1D\x64\x0D\x86\x15"
"\xC9\xED\x3F\xBE\xB0\x5A\xB6\xAC\xB6\x76\x89\xB5\xEA",
.clen = 16 + 13 + 16,
}, {
// "enc 30 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x10")
"\x18\xaf\x19\xb0\x23\x74\x44\xfd\x75\x04\xad\x7d\xbd\x48\xad\xd3" // Ki
"\x8b\x07\xee\xd3\x01\x49\x91\x6a\xa2\x0d\xb3\xf5\xce\xd8\xaf\xad", // Ke
.klen = 4 + 4 + 16 * 2,
.ptext =
"\xCA\x7A\x7A\xB4\xBE\x19\x2D\xAB\xD6\x03\x50\x6D\xB1\x9C\x39\xE2" // Confounder
"30 bytes bytes bytes bytes byt", // Plain
.plen = 16 + 30,
.ctext =
"\xA2\x6A\x39\x05\xA4\xFF\xD5\x81\x6B\x7B\x1E\x27\x38\x0D\x08\x09"
"\x0C\x8E\xC1\xF3\x04\x49\x6E\x1A\xBD\xCD\x2B\xDC\xD1\xDF\xFC\x66"
"\x09\x89\xE1\x17\xA7\x13\xDD\xBB\x57\xA4\x14\x6C\x15\x87\xCB\xA4"
"\x35\x66\x65\x59\x1D\x22\x40\x28\x2F\x58\x42\xB1\x05\xA5",
.clen = 16 + 30 + 16,
}, {
// "enc no plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\xa2\xb8\x33\xe9\x43\xbb\x10\xee\x53\xb4\xa1\x9b\xc2\xbb\xc7\xe1"
"\x9b\x87\xad\x5d\xe9\x21\x22\xa4\x33\x8b\xe6\xf7\x32\xfd\x8a\x0e" // Ki
"\x6c\xcb\x3f\x25\xd8\xae\x57\xf4\xe8\xf6\xca\x47\x4b\xdd\xef\xf1"
"\x16\xce\x13\x1b\x3f\x71\x01\x2e\x75\x6d\x6b\x1e\x3f\x70\xa7\xf1", // Ke
.klen = 4 + 4 + 32 * 2,
.ptext =
"\x3C\xBB\xD2\xB4\x59\x17\x94\x10\x67\xF9\x65\x99\xBB\x98\x92\x6C" // Confounder
"", // Plain
.plen = 16 + 0,
.ctext =
"\x03\x88\x6D\x03\x31\x0B\x47\xA6\xD8\xF0\x6D\x7B\x94\xD1\xDD\x83"
"\x7E\xCC\xE3\x15\xEF\x65\x2A\xFF\x62\x08\x59\xD9\x4A\x25\x92\x66",
.clen = 16 + 0 + 16,
}, {
// "enc 1 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x84\x61\x4b\xfa\x98\xf1\x74\x8a\xa4\xaf\x99\x2b\x8c\x26\x28\x0d"
"\xc8\x98\x73\x29\xdf\x77\x5c\x1d\xb0\x4a\x43\xf1\x21\xaa\x86\x65" // Ki
"\xe9\x31\x73\xaa\x01\xeb\x3c\x24\x62\x31\xda\xfc\x78\x02\xee\x32"
"\xaf\x24\x85\x1d\x8c\x73\x87\xd1\x8c\xb9\xb2\xc5\xb7\xf5\x70\xb8", // Ke
.klen = 4 + 4 + 32 * 2,
.ptext =
"\xDE\xF4\x87\xFC\xEB\xE6\xDE\x63\x46\xD4\xDA\x45\x21\xBB\xA2\xD2" // Confounder
"1", // Plain
.plen = 16 + 1,
.ctext =
"\x2C\x9C\x15\x70\x13\x3C\x99\xBF\x6A\x34\xBC\x1B\x02\x12\x00\x2F"
"\xD1\x94\x33\x87\x49\xDB\x41\x35\x49\x7A\x34\x7C\xFC\xD9\xD1\x8A"
"\x12",
.clen = 16 + 1 + 16,
}, {
// "enc 9 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x47\xb9\xf5\xba\xd7\x63\x00\x58\x2a\x54\x45\xfa\x0c\x1b\x29\xc3"
"\xaa\x83\xec\x63\xb9\x0b\x4a\xb0\x08\x48\xc1\x85\x67\x4f\x44\xa7" // Ki
"\xcd\xa2\xd3\x9a\x9b\x24\x3f\xfe\xb5\x6e\x8d\x5f\x4b\xd5\x28\x74"
"\x1e\xcb\x52\x0c\x62\x12\x3f\xb0\x40\xb8\x41\x8b\x15\xc7\xd7\x0c", // Ke
.klen = 4 + 4 + 32 * 2,
.ptext =
"\xAD\x4F\xF9\x04\xD3\x4E\x55\x53\x84\xB1\x41\x00\xFC\x46\x5F\x88" // Confounder
"9 bytesss", // Plain
.plen = 16 + 9,
.ctext =
"\x9C\x6D\xE7\x5F\x81\x2D\xE7\xED\x0D\x28\xB2\x96\x35\x57\xA1\x15"
"\x64\x09\x98\x27\x5B\x0A\xF5\x15\x27\x09\x91\x3F\xF5\x2A\x2A\x9C"
"\x8E\x63\xB8\x72\xF9\x2E\x64\xC8\x39",
.clen = 16 + 9 + 16,
}, {
// "enc 13 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x15\x2f\x8c\x9d\xc9\x85\x79\x6e\xb1\x94\xed\x14\xc5\x9e\xac\xdd"
"\x41\x8a\x33\x32\x36\xb7\x8f\xaf\xa7\xc7\x9b\x04\xe0\xac\xe7\xbf" // Ki
"\xcd\x8a\x10\xe2\x79\xda\xdd\xb6\x90\x1e\xc3\x0b\xdf\x98\x73\x25"
"\x0f\x6e\xfc\x6a\x77\x36\x7d\x74\xdc\x3e\xe7\xf7\x4b\xc7\x77\x4e", // Ke
.klen = 4 + 4 + 32 * 2,
.ptext =
"\xCF\x9B\xCA\x6D\xF1\x14\x4E\x0C\x0A\xF9\xB8\xF3\x4C\x90\xD5\x14" // Confounder
"13 bytes byte",
.plen = 16 + 13,
.ctext =
"\xEE\xEC\x85\xA9\x81\x3C\xDC\x53\x67\x72\xAB\x9B\x42\xDE\xFC\x57"
"\x06\xF7\x26\xE9\x75\xDD\xE0\x5A\x87\xEB\x54\x06\xEA\x32\x4C\xA1"
"\x85\xC9\x98\x6B\x42\xAA\xBE\x79\x4B\x84\x82\x1B\xEE",
.clen = 16 + 13 + 16,
}, {
// "enc 30 plain",
.key =
AUTHENC_KEY_HEADER("\x00\x00\x00\x20")
"\x04\x8d\xeb\xf7\xb1\x2c\x09\x32\xe8\xb2\x96\x99\x6c\x23\xf8\xb7"
"\x9d\x59\xb9\x7e\xa1\x19\xfc\x0c\x15\x6b\xf7\x88\xdc\x8c\x85\xe8" // Ki
"\x1d\x51\x47\xf3\x4b\xb0\x01\xa0\x4a\x68\xa7\x13\x46\xe7\x65\x4e"
"\x02\x23\xa6\x0d\x90\xbc\x2b\x79\xb4\xd8\x79\x56\xd4\x7c\xd4\x2a", // Ke
.klen = 4 + 4 + 32 * 2,
.ptext =
"\x64\x4D\xEF\x38\xDA\x35\x00\x72\x75\x87\x8D\x21\x68\x55\xE2\x28" // Confounder
"30 bytes bytes bytes bytes byt", // Plain
.plen = 16 + 30,
.ctext =
"\x0E\x44\x68\x09\x85\x85\x5F\x2D\x1F\x18\x12\x52\x9C\xA8\x3B\xFD"
"\x8E\x34\x9D\xE6\xFD\x9A\xDA\x0B\xAA\xA0\x48\xD6\x8E\x26\x5F\xEB"
"\xF3\x4A\xD1\x25\x5A\x34\x49\x99\xAD\x37\x14\x68\x87\xA6\xC6\x84"
"\x57\x31\xAC\x7F\x46\x37\x6A\x05\x04\xCD\x06\x57\x14\x74",
.clen = 16 + 30 + 16,
},
};
#endif /* _CRYPTO_TESTMGR_H */


@@ -78,7 +78,7 @@ static int crypto_xctr_crypt_inplace(struct skcipher_walk *walk,
crypto_cipher_alg(tfm)->cia_encrypt;
unsigned long alignmask = crypto_cipher_alignmask(tfm);
unsigned int nbytes = walk->nbytes;
u8 *data = walk->src.virt.addr;
u8 *data = walk->dst.virt.addr;
u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);


@@ -99,7 +99,7 @@ static int xts_xor_tweak(struct skcipher_request *req, bool second_pass,
while (w.nbytes) {
unsigned int avail = w.nbytes;
le128 *wsrc;
const le128 *wsrc;
le128 *wdst;
wsrc = w.src.virt.addr;


@@ -103,7 +103,7 @@ static int __zstd_init(void *ctx)
return ret;
}
static void *zstd_alloc_ctx(struct crypto_scomp *tfm)
static void *zstd_alloc_ctx(void)
{
int ret;
struct zstd_ctx *ctx;
@@ -121,32 +121,18 @@ static void *zstd_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}
static int zstd_init(struct crypto_tfm *tfm)
{
struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
return __zstd_init(ctx);
}
static void __zstd_exit(void *ctx)
{
zstd_comp_exit(ctx);
zstd_decomp_exit(ctx);
}
static void zstd_free_ctx(struct crypto_scomp *tfm, void *ctx)
static void zstd_free_ctx(void *ctx)
{
__zstd_exit(ctx);
kfree_sensitive(ctx);
}
static void zstd_exit(struct crypto_tfm *tfm)
{
struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
__zstd_exit(ctx);
}
static int __zstd_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -161,14 +147,6 @@ static int __zstd_compress(const u8 *src, unsigned int slen,
return 0;
}
static int zstd_compress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
return __zstd_compress(src, slen, dst, dlen, ctx);
}
static int zstd_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -189,14 +167,6 @@ static int __zstd_decompress(const u8 *src, unsigned int slen,
return 0;
}
static int zstd_decompress(struct crypto_tfm *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen)
{
struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
return __zstd_decompress(src, slen, dst, dlen, ctx);
}
static int zstd_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -204,19 +174,6 @@ static int zstd_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __zstd_decompress(src, slen, dst, dlen, ctx);
}
static struct crypto_alg alg = {
.cra_name = "zstd",
.cra_driver_name = "zstd-generic",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
.cra_ctxsize = sizeof(struct zstd_ctx),
.cra_module = THIS_MODULE,
.cra_init = zstd_init,
.cra_exit = zstd_exit,
.cra_u = { .compress = {
.coa_compress = zstd_compress,
.coa_decompress = zstd_decompress } }
};
static struct scomp_alg scomp = {
.alloc_ctx = zstd_alloc_ctx,
.free_ctx = zstd_free_ctx,
@@ -231,22 +188,11 @@ static struct scomp_alg scomp = {
static int __init zstd_mod_init(void)
{
int ret;
ret = crypto_register_alg(&alg);
if (ret)
return ret;
ret = crypto_register_scomp(&scomp);
if (ret)
crypto_unregister_alg(&alg);
return ret;
return crypto_register_scomp(&scomp);
}
static void __exit zstd_mod_fini(void)
{
crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}


@@ -534,10 +534,10 @@ config HW_RANDOM_NPCM
If unsure, say Y.
config HW_RANDOM_KEYSTONE
tristate "TI Keystone NETCP SA Hardware random number generator"
depends on ARCH_KEYSTONE || COMPILE_TEST
depends on HAS_IOMEM && OF
default HW_RANDOM
tristate "TI Keystone NETCP SA Hardware random number generator"
help
This option enables Keystone's hardware random generator.
@@ -579,15 +579,15 @@ config HW_RANDOM_ARM_SMCCC_TRNG
module will be called arm_smccc_trng.
config HW_RANDOM_CN10K
tristate "Marvell CN10K Random Number Generator support"
depends on HW_RANDOM && PCI && (ARM64 || (64BIT && COMPILE_TEST))
default HW_RANDOM if ARCH_THUNDER
help
This driver provides support for the True Random Number
generator available in Marvell CN10K SoCs.
tristate "Marvell CN10K Random Number Generator support"
depends on HW_RANDOM && PCI && (ARM64 || (64BIT && COMPILE_TEST))
default HW_RANDOM if ARCH_THUNDER
help
This driver provides support for the True Random Number
generator available in Marvell CN10K SoCs.
To compile this driver as a module, choose M here.
The module will be called cn10k_rng. If unsure, say Y.
To compile this driver as a module, choose M here.
The module will be called cn10k_rng. If unsure, say Y.
config HW_RANDOM_JH7110
tristate "StarFive JH7110 Random Number Generator support"
@@ -606,7 +606,8 @@ config HW_RANDOM_ROCKCHIP
default HW_RANDOM
help
This driver provides kernel-side support for the True Random Number
Generator hardware found on some Rockchip SoC like RK3566 or RK3568.
Generator hardware found on some Rockchip SoCs like RK3566, RK3568
or RK3588.
To compile this driver as a module, choose M here: the
module will be called rockchip-rng.


@@ -13,6 +13,8 @@
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/interrupt.h>
#include <linux/hw_random.h>
#include <linux/completion.h>
@@ -53,6 +55,7 @@
#define RNGC_SELFTEST_TIMEOUT 2500 /* us */
#define RNGC_SEED_TIMEOUT 200 /* ms */
#define RNGC_PM_TIMEOUT 500 /* ms */
static bool self_test = true;
module_param(self_test, bool, 0);
@@ -123,7 +126,11 @@ static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
unsigned int status;
int retval = 0;
int err, retval = 0;
err = pm_runtime_resume_and_get(rngc->dev);
if (err)
return err;
while (max >= sizeof(u32)) {
status = readl(rngc->base + RNGC_STATUS);
@@ -141,6 +148,8 @@ static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
max -= sizeof(u32);
}
}
pm_runtime_mark_last_busy(rngc->dev);
pm_runtime_put(rngc->dev);
return retval ? retval : -EIO;
}
@@ -169,7 +178,11 @@ static int imx_rngc_init(struct hwrng *rng)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
u32 cmd, ctrl;
int ret;
int ret, err;
err = pm_runtime_resume_and_get(rngc->dev);
if (err)
return err;
/* clear error */
cmd = readl(rngc->base + RNGC_COMMAND);
@@ -186,15 +199,15 @@ static int imx_rngc_init(struct hwrng *rng)
ret = wait_for_completion_timeout(&rngc->rng_op_done,
msecs_to_jiffies(RNGC_SEED_TIMEOUT));
if (!ret) {
ret = -ETIMEDOUT;
goto err;
err = -ETIMEDOUT;
goto out;
}
} while (rngc->err_reg == RNGC_ERROR_STATUS_STAT_ERR);
if (rngc->err_reg) {
ret = -EIO;
goto err;
err = -EIO;
goto out;
}
/*
@@ -205,23 +218,29 @@ static int imx_rngc_init(struct hwrng *rng)
ctrl |= RNGC_CTRL_AUTO_SEED;
writel(ctrl, rngc->base + RNGC_CONTROL);
out:
/*
* if initialisation was successful, we keep the interrupt
* unmasked until imx_rngc_cleanup is called
* we mask the interrupt ourselves if we return an error
*/
return 0;
if (err)
imx_rngc_irq_mask_clear(rngc);
err:
imx_rngc_irq_mask_clear(rngc);
return ret;
pm_runtime_put(rngc->dev);
return err;
}
static void imx_rngc_cleanup(struct hwrng *rng)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
int err;
imx_rngc_irq_mask_clear(rngc);
err = pm_runtime_resume_and_get(rngc->dev);
if (!err) {
imx_rngc_irq_mask_clear(rngc);
pm_runtime_put(rngc->dev);
}
}
static int __init imx_rngc_probe(struct platform_device *pdev)
@@ -240,7 +259,7 @@ static int __init imx_rngc_probe(struct platform_device *pdev)
if (IS_ERR(rngc->base))
return PTR_ERR(rngc->base);
rngc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
rngc->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(rngc->clk))
return dev_err_probe(&pdev->dev, PTR_ERR(rngc->clk), "Cannot get rng_clk\n");
@@ -248,14 +267,18 @@ static int __init imx_rngc_probe(struct platform_device *pdev)
if (irq < 0)
return irq;
clk_prepare_enable(rngc->clk);
ver_id = readl(rngc->base + RNGC_VER_ID);
rng_type = FIELD_GET(RNG_TYPE, ver_id);
/*
* This driver supports only RNGC and RNGB. (There's a different
* driver for RNGA.)
*/
if (rng_type != RNGC_TYPE_RNGC && rng_type != RNGC_TYPE_RNGB)
if (rng_type != RNGC_TYPE_RNGC && rng_type != RNGC_TYPE_RNGB) {
clk_disable_unprepare(rngc->clk);
return -ENODEV;
}
init_completion(&rngc->rng_op_done);
@@ -272,15 +295,24 @@ static int __init imx_rngc_probe(struct platform_device *pdev)
ret = devm_request_irq(&pdev->dev,
irq, imx_rngc_irq, 0, pdev->name, (void *)rngc);
if (ret)
if (ret) {
clk_disable_unprepare(rngc->clk);
return dev_err_probe(&pdev->dev, ret, "Can't get interrupt working.\n");
}
if (self_test) {
ret = imx_rngc_self_test(rngc);
if (ret)
if (ret) {
clk_disable_unprepare(rngc->clk);
return dev_err_probe(&pdev->dev, ret, "self test failed\n");
}
}
pm_runtime_set_autosuspend_delay(&pdev->dev, RNGC_PM_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
devm_pm_runtime_enable(&pdev->dev);
ret = devm_hwrng_register(&pdev->dev, &rngc->rng);
if (ret)
return dev_err_probe(&pdev->dev, ret, "hwrng registration failed\n");
@@ -310,7 +342,10 @@ static int imx_rngc_resume(struct device *dev)
return 0;
}
static DEFINE_SIMPLE_DEV_PM_OPS(imx_rngc_pm_ops, imx_rngc_suspend, imx_rngc_resume);
static const struct dev_pm_ops imx_rngc_pm_ops = {
SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
RUNTIME_PM_OPS(imx_rngc_suspend, imx_rngc_resume, NULL)
};
static const struct of_device_id imx_rngc_dt_ids[] = {
{ .compatible = "fsl,imx25-rngb" },
@@ -321,7 +356,7 @@ MODULE_DEVICE_TABLE(of, imx_rngc_dt_ids);
static struct platform_driver imx_rngc_driver = {
.driver = {
.name = KBUILD_MODNAME,
.pm = pm_sleep_ptr(&imx_rngc_pm_ops),
.pm = pm_ptr(&imx_rngc_pm_ops),
.of_match_table = imx_rngc_dt_ids,
},
};


@@ -1,12 +1,14 @@
// SPDX-License-Identifier: GPL-2.0
/*
* rockchip-rng.c True Random Number Generator driver for Rockchip RK3568 SoC
* rockchip-rng.c True Random Number Generator driver for Rockchip SoCs
*
* Copyright (c) 2018, Fuzhou Rockchip Electronics Co., Ltd.
* Copyright (c) 2022, Aurelien Jarno
* Copyright (c) 2025, Collabora Ltd.
* Authors:
* Lin Jinhan <troy.lin@rock-chips.com>
* Aurelien Jarno <aurelien@aurel32.net>
* Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
*/
#include <linux/clk.h>
#include <linux/hw_random.h>
@@ -32,6 +34,9 @@
*/
#define RK_RNG_SAMPLE_CNT 1000
/* after how many bytes of output TRNGv1 implementations should be reseeded */
#define RK_TRNG_V1_AUTO_RESEED_CNT 16000
/* TRNG registers from RK3568 TRM-Part2, section 5.4.1 */
#define TRNG_RST_CTL 0x0004
#define TRNG_RNG_CTL 0x0400
@@ -49,11 +54,64 @@
#define TRNG_RNG_SAMPLE_CNT 0x0404
#define TRNG_RNG_DOUT 0x0410
/*
* TRNG V1 register definitions
* The TRNG V1 IP is a stand-alone TRNG implementation (not part of a crypto IP)
* and can be found in the Rockchip RK3588 SoC
*/
#define TRNG_V1_CTRL 0x0000
#define TRNG_V1_CTRL_NOP 0x00
#define TRNG_V1_CTRL_RAND 0x01
#define TRNG_V1_CTRL_SEED 0x02
#define TRNG_V1_STAT 0x0004
#define TRNG_V1_STAT_SEEDED BIT(9)
#define TRNG_V1_STAT_GENERATING BIT(30)
#define TRNG_V1_STAT_RESEEDING BIT(31)
#define TRNG_V1_MODE 0x0008
#define TRNG_V1_MODE_128_BIT (0x00 << 3)
#define TRNG_V1_MODE_256_BIT (0x01 << 3)
/* Interrupt Enable register; unused because polling is faster */
#define TRNG_V1_IE 0x0010
#define TRNG_V1_IE_GLBL_EN BIT(31)
#define TRNG_V1_IE_SEED_DONE_EN BIT(1)
#define TRNG_V1_IE_RAND_RDY_EN BIT(0)
#define TRNG_V1_ISTAT 0x0014
#define TRNG_V1_ISTAT_RAND_RDY BIT(0)
/* RAND0 ~ RAND7 */
#define TRNG_V1_RAND0 0x0020
#define TRNG_V1_RAND7 0x003C
/* Auto Reseed Register */
#define TRNG_V1_AUTO_RQSTS 0x0060
#define TRNG_V1_VERSION 0x00F0
#define TRNG_v1_VERSION_CODE 0x46bc
/* end of TRNG_V1 register definitions */
/* Before removing this assert, give rk3588_rng_read an upper bound of 32 */
static_assert(RK_RNG_MAX_BYTE <= (TRNG_V1_RAND7 + 4 - TRNG_V1_RAND0),
"You raised RK_RNG_MAX_BYTE and broke rk3588-rng, congrats.");
struct rk_rng {
struct hwrng rng;
void __iomem *base;
int clk_num;
struct clk_bulk_data *clk_bulks;
const struct rk_rng_soc_data *soc_data;
struct device *dev;
};
struct rk_rng_soc_data {
int (*rk_rng_init)(struct hwrng *rng);
int (*rk_rng_read)(struct hwrng *rng, void *buf, size_t max, bool wait);
void (*rk_rng_cleanup)(struct hwrng *rng);
unsigned short quality;
bool reset_optional;
};
/* The mask in the upper 16 bits determines the bits that are updated */
@@ -62,18 +120,37 @@ static void rk_rng_write_ctl(struct rk_rng *rng, u32 val, u32 mask)
writel((mask << 16) | val, rng->base + TRNG_RNG_CTL);
}
static int rk_rng_init(struct hwrng *rng)
static inline void rk_rng_writel(struct rk_rng *rng, u32 val, u32 offset)
{
writel(val, rng->base + offset);
}
static inline u32 rk_rng_readl(struct rk_rng *rng, u32 offset)
{
return readl(rng->base + offset);
}
static int rk_rng_enable_clks(struct rk_rng *rk_rng)
{
int ret;
/* start clocks */
ret = clk_bulk_prepare_enable(rk_rng->clk_num, rk_rng->clk_bulks);
if (ret < 0) {
dev_err(rk_rng->dev, "Failed to enable clocks: %d\n", ret);
return ret;
}
return 0;
}
static int rk3568_rng_init(struct hwrng *rng)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
int ret;
/* start clocks */
ret = clk_bulk_prepare_enable(rk_rng->clk_num, rk_rng->clk_bulks);
if (ret < 0) {
dev_err((struct device *) rk_rng->rng.priv,
"Failed to enable clks %d\n", ret);
ret = rk_rng_enable_clks(rk_rng);
if (ret < 0)
return ret;
}
/* set the sample period */
writel(RK_RNG_SAMPLE_CNT, rk_rng->base + TRNG_RNG_SAMPLE_CNT);
@@ -87,7 +164,7 @@ static int rk_rng_init(struct hwrng *rng)
return 0;
}
static void rk_rng_cleanup(struct hwrng *rng)
static void rk3568_rng_cleanup(struct hwrng *rng)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
@@ -98,14 +175,14 @@ static void rk_rng_cleanup(struct hwrng *rng)
clk_bulk_disable_unprepare(rk_rng->clk_num, rk_rng->clk_bulks);
}
static int rk_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
static int rk3568_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
size_t to_read = min_t(size_t, max, RK_RNG_MAX_BYTE);
u32 reg;
int ret = 0;
ret = pm_runtime_resume_and_get((struct device *) rk_rng->rng.priv);
ret = pm_runtime_resume_and_get(rk_rng->dev);
if (ret < 0)
return ret;
@@ -122,12 +199,120 @@ static int rk_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
/* Read random data stored in the registers */
memcpy_fromio(buf, rk_rng->base + TRNG_RNG_DOUT, to_read);
out:
pm_runtime_mark_last_busy((struct device *) rk_rng->rng.priv);
pm_runtime_put_sync_autosuspend((struct device *) rk_rng->rng.priv);
pm_runtime_mark_last_busy(rk_rng->dev);
pm_runtime_put_sync_autosuspend(rk_rng->dev);
return (ret < 0) ? ret : to_read;
}
static int rk3588_rng_init(struct hwrng *rng)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
u32 version, status, mask, istat;
int ret;
ret = rk_rng_enable_clks(rk_rng);
if (ret < 0)
return ret;
version = rk_rng_readl(rk_rng, TRNG_V1_VERSION);
if (version != TRNG_v1_VERSION_CODE) {
dev_err(rk_rng->dev,
"wrong trng version, expected = %08x, actual = %08x\n",
TRNG_v1_VERSION_CODE, version);
ret = -EFAULT;
goto err_disable_clk;
}
mask = TRNG_V1_STAT_SEEDED | TRNG_V1_STAT_GENERATING |
TRNG_V1_STAT_RESEEDING;
if (readl_poll_timeout(rk_rng->base + TRNG_V1_STAT, status,
(status & mask) == TRNG_V1_STAT_SEEDED,
RK_RNG_POLL_PERIOD_US, RK_RNG_POLL_TIMEOUT_US) < 0) {
dev_err(rk_rng->dev, "timed out waiting for hwrng to reseed\n");
ret = -ETIMEDOUT;
goto err_disable_clk;
}
/*
* clear ISTAT flag, downstream advises to do this to avoid
* auto-reseeding "on power on"
*/
istat = rk_rng_readl(rk_rng, TRNG_V1_ISTAT);
rk_rng_writel(rk_rng, istat, TRNG_V1_ISTAT);
/* auto reseed after RK_TRNG_V1_AUTO_RESEED_CNT bytes */
rk_rng_writel(rk_rng, RK_TRNG_V1_AUTO_RESEED_CNT / 16, TRNG_V1_AUTO_RQSTS);
return 0;
err_disable_clk:
clk_bulk_disable_unprepare(rk_rng->clk_num, rk_rng->clk_bulks);
return ret;
}
static void rk3588_rng_cleanup(struct hwrng *rng)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
clk_bulk_disable_unprepare(rk_rng->clk_num, rk_rng->clk_bulks);
}
static int rk3588_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);
size_t to_read = min_t(size_t, max, RK_RNG_MAX_BYTE);
int ret = 0;
u32 reg;
ret = pm_runtime_resume_and_get(rk_rng->dev);
if (ret < 0)
return ret;
/* Clear ISTAT, even without interrupts enabled, this will be updated */
reg = rk_rng_readl(rk_rng, TRNG_V1_ISTAT);
rk_rng_writel(rk_rng, reg, TRNG_V1_ISTAT);
/* generate 256 bits of random data */
rk_rng_writel(rk_rng, TRNG_V1_MODE_256_BIT, TRNG_V1_MODE);
rk_rng_writel(rk_rng, TRNG_V1_CTRL_RAND, TRNG_V1_CTRL);
ret = readl_poll_timeout_atomic(rk_rng->base + TRNG_V1_ISTAT, reg,
(reg & TRNG_V1_ISTAT_RAND_RDY), 0,
RK_RNG_POLL_TIMEOUT_US);
if (ret < 0)
goto out;
/* Read random data that's in registers TRNG_V1_RAND0 through RAND7 */
memcpy_fromio(buf, rk_rng->base + TRNG_V1_RAND0, to_read);
out:
/* Clear ISTAT */
rk_rng_writel(rk_rng, reg, TRNG_V1_ISTAT);
/* close the TRNG */
rk_rng_writel(rk_rng, TRNG_V1_CTRL_NOP, TRNG_V1_CTRL);
pm_runtime_mark_last_busy(rk_rng->dev);
pm_runtime_put_sync_autosuspend(rk_rng->dev);
return (ret < 0) ? ret : to_read;
}
static const struct rk_rng_soc_data rk3568_soc_data = {
.rk_rng_init = rk3568_rng_init,
.rk_rng_read = rk3568_rng_read,
.rk_rng_cleanup = rk3568_rng_cleanup,
.quality = 900,
.reset_optional = false,
};
static const struct rk_rng_soc_data rk3588_soc_data = {
.rk_rng_init = rk3588_rng_init,
.rk_rng_read = rk3588_rng_read,
.rk_rng_cleanup = rk3588_rng_cleanup,
.quality = 999, /* as determined by actual testing */
.reset_optional = true,
};
static int rk_rng_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -139,6 +324,7 @@ static int rk_rng_probe(struct platform_device *pdev)
 	if (!rk_rng)
 		return -ENOMEM;
 
+	rk_rng->soc_data = of_device_get_match_data(dev);
 	rk_rng->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(rk_rng->base))
 		return PTR_ERR(rk_rng->base);
@@ -148,34 +334,40 @@ static int rk_rng_probe(struct platform_device *pdev)
 		return dev_err_probe(dev, rk_rng->clk_num,
 				     "Failed to get clks property\n");
 
-	rst = devm_reset_control_array_get_exclusive(&pdev->dev);
-	if (IS_ERR(rst))
-		return dev_err_probe(dev, PTR_ERR(rst), "Failed to get reset property\n");
+	if (rk_rng->soc_data->reset_optional)
+		rst = devm_reset_control_array_get_optional_exclusive(dev);
+	else
+		rst = devm_reset_control_array_get_exclusive(dev);
 
-	reset_control_assert(rst);
-	udelay(2);
-	reset_control_deassert(rst);
+	if (rst) {
+		if (IS_ERR(rst))
+			return dev_err_probe(dev, PTR_ERR(rst), "Failed to get reset property\n");
+
+		reset_control_assert(rst);
+		udelay(2);
+		reset_control_deassert(rst);
+	}
 
 	platform_set_drvdata(pdev, rk_rng);
 
 	rk_rng->rng.name = dev_driver_string(dev);
 	if (!IS_ENABLED(CONFIG_PM)) {
-		rk_rng->rng.init = rk_rng_init;
-		rk_rng->rng.cleanup = rk_rng_cleanup;
+		rk_rng->rng.init = rk_rng->soc_data->rk_rng_init;
+		rk_rng->rng.cleanup = rk_rng->soc_data->rk_rng_cleanup;
 	}
-	rk_rng->rng.read = rk_rng_read;
-	rk_rng->rng.priv = (unsigned long) dev;
-	rk_rng->rng.quality = 900;
+	rk_rng->rng.read = rk_rng->soc_data->rk_rng_read;
+	rk_rng->dev = dev;
+	rk_rng->rng.quality = rk_rng->soc_data->quality;
 
 	pm_runtime_set_autosuspend_delay(dev, RK_RNG_AUTOSUSPEND_DELAY);
 	pm_runtime_use_autosuspend(dev);
 	ret = devm_pm_runtime_enable(dev);
 	if (ret)
-		return dev_err_probe(&pdev->dev, ret, "Runtime pm activation failed.\n");
+		return dev_err_probe(dev, ret, "Runtime pm activation failed.\n");
 
 	ret = devm_hwrng_register(dev, &rk_rng->rng);
 	if (ret)
-		return dev_err_probe(&pdev->dev, ret, "Failed to register Rockchip hwrng\n");
+		return dev_err_probe(dev, ret, "Failed to register Rockchip hwrng\n");
 
 	return 0;
 }
@@ -184,7 +376,7 @@ static int __maybe_unused rk_rng_runtime_suspend(struct device *dev)
 {
 	struct rk_rng *rk_rng = dev_get_drvdata(dev);
 
-	rk_rng_cleanup(&rk_rng->rng);
+	rk_rng->soc_data->rk_rng_cleanup(&rk_rng->rng);
 
 	return 0;
 }
@@ -193,7 +385,7 @@ static int __maybe_unused rk_rng_runtime_resume(struct device *dev)
 {
 	struct rk_rng *rk_rng = dev_get_drvdata(dev);
 
-	return rk_rng_init(&rk_rng->rng);
+	return rk_rng->soc_data->rk_rng_init(&rk_rng->rng);
 }
 
 static const struct dev_pm_ops rk_rng_pm_ops = {
@@ -204,7 +396,8 @@ static const struct dev_pm_ops rk_rng_pm_ops = {
 };
 
 static const struct of_device_id rk_rng_dt_match[] = {
-	{ .compatible = "rockchip,rk3568-rng", },
+	{ .compatible = "rockchip,rk3568-rng", .data = (void *)&rk3568_soc_data },
+	{ .compatible = "rockchip,rk3588-rng", .data = (void *)&rk3588_soc_data },
 	{ /* sentinel */ },
 };
@@ -221,8 +414,9 @@ static struct platform_driver rk_rng_driver = {
 module_platform_driver(rk_rng_driver);
 
-MODULE_DESCRIPTION("Rockchip RK3568 True Random Number Generator driver");
+MODULE_DESCRIPTION("Rockchip True Random Number Generator driver");
 MODULE_AUTHOR("Lin Jinhan <troy.lin@rock-chips.com>");
+MODULE_AUTHOR("Aurelien Jarno <aurelien@aurel32.net>");
+MODULE_AUTHOR("Daniel Golle <daniel@makrotopia.org>");
+MODULE_AUTHOR("Nicolas Frattaroli <nicolas.frattaroli@collabora.com>");
 MODULE_LICENSE("GPL");


@@ -855,5 +855,6 @@ config CRYPTO_DEV_SA2UL
 source "drivers/crypto/aspeed/Kconfig"
 source "drivers/crypto/starfive/Kconfig"
+source "drivers/crypto/inside-secure/eip93/Kconfig"
 
 endif # CRYPTO_HW
