lib/crypto: powerpc/aes: Fix rndkey_from_vsx() on big endian CPUs

I finally got a big endian PPC64 kernel to boot in QEMU.  The PPC64 VSX
optimized AES library code does work in that case, with the exception of
rndkey_from_vsx(), which doesn't take into account that the order in
which the VSX code stores the round key words depends on the endianness.
So fix rndkey_from_vsx() to do the right thing on big endian CPUs.

Fixes: 7cf2082e74 ("lib/crypto: powerpc/aes: Migrate POWER8 optimized code into library")
Link: https://lore.kernel.org/r/20260216022104.332991-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Eric Biggers 2026-02-15 18:21:04 -08:00
parent 23b0f90ba8
commit beeebffc80

@@ -95,7 +95,8 @@ static inline bool is_vsx_format(const struct p8_aes_key *key)
 }
 
 /*
- * Convert a round key from VSX to generic format by reflecting the 16 bytes,
+ * Convert a round key from VSX to generic format by reflecting all 16 bytes (if
+ * little endian) or reflecting the bytes in each 4-byte word (if big endian),
  * and (if apply_inv_mix=true) applying InvMixColumn to each column.
  *
  * It would be nice if the VSX and generic key formats would be compatible. But
@@ -107,6 +108,7 @@ static inline bool is_vsx_format(const struct p8_aes_key *key)
  */
 static void rndkey_from_vsx(u32 out[4], const u32 in[4], bool apply_inv_mix)
 {
+	const bool be = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
 	u32 k0 = swab32(in[0]);
 	u32 k1 = swab32(in[1]);
 	u32 k2 = swab32(in[2]);
@@ -118,10 +120,10 @@ static void rndkey_from_vsx(u32 out[4], const u32 in[4], bool apply_inv_mix)
 		k2 = inv_mix_columns(k2);
 		k3 = inv_mix_columns(k3);
 	}
-	out[0] = k3;
-	out[1] = k2;
-	out[2] = k1;
-	out[3] = k0;
+	out[0] = be ? k0 : k3;
+	out[1] = be ? k1 : k2;
+	out[2] = be ? k2 : k1;
+	out[3] = be ? k3 : k0;
 }
 
 static void aes_preparekey_arch(union aes_enckey_arch *k,
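The endianness-dependent reordering in the diff above can be sketched as a self-contained C function. This is an illustrative sketch, not the kernel code: swab32() is reimplemented locally, the `be` parameter stands in for IS_ENABLED(CONFIG_CPU_BIG_ENDIAN), and the optional InvMixColumns step is omitted.

```c
#include <stdbool.h>
#include <stdint.h>

/* Local stand-in for the kernel's swab32(): reverse the bytes of a 32-bit word. */
static uint32_t swab32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
	       ((x << 8) & 0x00ff0000u) | (x << 24);
}

/*
 * Sketch of the fixed conversion.  Each 4-byte word is always byte-reflected.
 * On little endian the VSX code stores the whole 16-byte round key reflected,
 * so the word order must be reversed too; on big endian only the bytes within
 * each word are reflected, so the word order is preserved.
 */
static void rndkey_from_vsx_sketch(uint32_t out[4], const uint32_t in[4], bool be)
{
	uint32_t k0 = swab32(in[0]);
	uint32_t k1 = swab32(in[1]);
	uint32_t k2 = swab32(in[2]);
	uint32_t k3 = swab32(in[3]);

	out[0] = be ? k0 : k3;
	out[1] = be ? k1 : k2;
	out[2] = be ? k2 : k1;
	out[3] = be ? k3 : k0;
}
```

Before the fix, the little-endian reordering (`out[0] = k3; ... out[3] = k0;`) was applied unconditionally, which scrambled the key words on big endian CPUs.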