Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 "The bulk is in-kernel pointer authentication, activity monitors and
  lots of asm symbol annotations. I also queued the sys_mremap() patch
  commenting the asymmetry in the address untagging.

  Summary:

   - In-kernel Pointer Authentication support (previously only offered
     to user space).

   - ARM Activity Monitors (AMU) extension support allowing better CPU
     utilisation numbers for the scheduler (frequency invariance).

   - Memory hot-remove support for arm64.

   - Lots of asm annotations (SYM_*) in preparation for the in-kernel
     Branch Target Identification (BTI) support.

   - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
     PMU init callbacks, support for new DT compatibles.

   - IPv6 header checksum optimisation.

   - Fixes: SDEI (software delegated exception interface) double-lock on
     hibernate with shared events.

   - Minor clean-ups and refactoring: cpu_ops accessor,
     cpu_do_switch_mm() converted to C, cpufeature finalisation helper.

   - sys_mremap() comment explaining the asymmetric address untagging
     behaviour"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  mm/mremap: Add comment explaining the untagging behaviour of mremap()
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  ...
torvalds committed Mar 31, 2020
2 parents a8222fd + b2a84de commit 3cd86a5
Showing 88 changed files with 2,192 additions and 700 deletions.
112 changes: 112 additions & 0 deletions Documentation/arm64/amu.rst
@@ -0,0 +1,112 @@
=======================================================
Activity Monitors Unit (AMU) extension in AArch64 Linux
=======================================================

Author: Ionela Voinescu <[email protected]>

Date: 2019-09-10

This document briefly describes the provision of Activity Monitors Unit
support in AArch64 Linux.


Architecture overview
---------------------

The activity monitors extension is an optional extension introduced by the
ARMv8.4 CPU architecture.

The activity monitors unit, implemented in each CPU, provides performance
counters intended for system management use. The AMU extension provides a
system register interface to the counter registers and also supports an
optional external memory-mapped interface.

Version 1 of the Activity Monitors architecture implements a counter group
of four fixed and architecturally defined 64-bit event counters.
- CPU cycle counter: increments at the frequency of the CPU.
- Constant counter: increments at the fixed frequency of the system
clock.
- Instructions retired: increments with every architecturally executed
instruction.
- Memory stall cycles: counts instruction dispatch stall cycles caused by
misses in the last level cache within the clock domain.

When in WFI or WFE these counters do not increment.

The Activity Monitors architecture provides space for up to 16 architected
event counters. Future versions of the architecture may use this space to
implement additional architected event counters.

Additionally, version 1 implements a counter group of up to 16 auxiliary
64-bit event counters.

On cold reset all counters reset to 0.


Basic support
-------------

The kernel can safely run a mix of CPUs with and without support for the
activity monitors extension. Therefore, when CONFIG_ARM64_AMU_EXTN is
selected we unconditionally enable the capability to allow any late CPU
(secondary or hotplugged) to detect and use the feature.

When the feature is detected on a CPU, we flag the availability of the
feature but this does not guarantee the correct functionality of the
counters, only the presence of the extension.

Firmware (code running at higher exception levels, e.g. arm-tf) support is
needed to:
- Enable access for lower exception levels (EL2 and EL1) to the AMU
registers.
- Enable the counters. If not enabled these will read as 0.
- Save/restore the counters before the CPU is put into, and after it is
  brought back up from, the 'off' power state.

When using kernels that have this feature enabled but boot with broken
firmware the user may experience panics or lockups when accessing the
counter registers. Even if these symptoms are not observed, the values
returned by the register reads might not correctly reflect reality. Most
commonly, the counters will read as 0, indicating that they are not
enabled.

If proper support is not provided in firmware, it's best to disable
CONFIG_ARM64_AMU_EXTN. Note that, for security reasons, this does not
bypass the setting of AMUSERENR_EL0 to trap accesses from EL0 (userspace) to
EL1 (kernel). Therefore, firmware should still ensure accesses to AMU registers
are not trapped in EL2/EL3.

The fixed counters of AMUv1 are accessible through the following system
register definitions:
- SYS_AMEVCNTR0_CORE_EL0
- SYS_AMEVCNTR0_CONST_EL0
- SYS_AMEVCNTR0_INST_RET_EL0
- SYS_AMEVCNTR0_MEM_STALL_EL0

Auxiliary platform specific counters can be accessed using
SYS_AMEVCNTR1_EL0(n), where n is a value between 0 and 15.

Details can be found in: arch/arm64/include/asm/sysreg.h.
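
As a minimal sketch of how kernel code might use these definitions (assuming
CONFIG_ARM64_AMU_EXTN=y, firmware that has enabled the counters, and the
per-CPU cpu_has_amu_feat() check introduced by this series), the fixed
counters can be read on the local CPU with the standard sysreg accessors::

  /*
   * Sketch only: read the fixed AMUv1 counters from EL1 on the local CPU.
   * Call with preemption disabled so smp_processor_id() is stable.
   */
  #include <linux/printk.h>
  #include <linux/smp.h>
  #include <asm/cpufeature.h>
  #include <asm/sysreg.h>

  static void dump_fixed_amu_counters(void)
  {
          u64 core, cnst, inst, stall;

          if (!cpu_has_amu_feat(smp_processor_id()))
                  return;

          core  = read_sysreg_s(SYS_AMEVCNTR0_CORE_EL0);      /* CPU cycles      */
          cnst  = read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);     /* constant cycles */
          inst  = read_sysreg_s(SYS_AMEVCNTR0_INST_RET_EL0);  /* insns retired   */
          stall = read_sysreg_s(SYS_AMEVCNTR0_MEM_STALL_EL0); /* memory stalls   */

          pr_info("AMU: core=%llu const=%llu inst=%llu stall=%llu\n",
                  core, cnst, inst, stall);
  }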


Userspace access
----------------

Currently, access from userspace to the AMU registers is disabled due to:
- Security reasons: they might expose information about code executed in
secure mode.
- Purpose: AMU counters are intended for system management use.

Also, the presence of the feature is not visible to userspace.


Virtualization
--------------

Currently, access from userspace (EL0) and kernelspace (EL1) on the KVM
guest side is disabled due to:
- Security reasons: they might expose information about code executed
by other guests or the host.

Any attempt to access the AMU registers will result in an UNDEFINED
exception being injected into the guest.
14 changes: 14 additions & 0 deletions Documentation/arm64/booting.rst
@@ -248,6 +248,20 @@ Before jumping into the kernel, the following conditions must be met:
- HCR_EL2.APK (bit 40) must be initialised to 0b1
- HCR_EL2.API (bit 41) must be initialised to 0b1

For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:
- If EL3 is present:
    CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.
- If the kernel is entered at EL1:
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.
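
The enable values above can be summarised with a purely illustrative helper
(the constant and helper names below are invented for this sketch and are not
kernel or firmware symbols)::

  /* Illustrative constants/helper mirroring the AMUv1 entry requirements. */
  #include <stdint.h>

  #define CPTR_ELx_TAM        (UINT64_C(1) << 30)  /* trap bit to clear in CPTR_EL3/CPTR_EL2  */
  #define AMCNTENSET0_FIXED   UINT64_C(0xf)        /* enable bits for the four fixed counters */

  /* AMCNTENSET1_EL0 value: one enable bit per implemented auxiliary counter. */
  static inline uint64_t amcntenset1_value(unsigned int nr_aux_counters)
  {
          if (nr_aux_counters > 16)       /* AMUv1 implements at most 16 */
                  nr_aux_counters = 16;
          return nr_aux_counters ? (UINT64_C(1) << nr_aux_counters) - 1 : 0;
  }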

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
1 change: 1 addition & 0 deletions Documentation/arm64/index.rst
@@ -6,6 +6,7 @@ ARM64 Architecture
:maxdepth: 1

acpi_object_usage
amu
arm-acpi
booting
cpu-feature-registers
69 changes: 66 additions & 3 deletions arch/arm64/Kconfig
@@ -117,6 +117,7 @@ config ARM64
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_BITREVERSE
select HAVE_ARCH_COMPILER_H
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE
@@ -280,6 +281,9 @@ config ZONE_DMA32
config ARCH_ENABLE_MEMORY_HOTPLUG
def_bool y

config ARCH_ENABLE_MEMORY_HOTREMOVE
def_bool y

config SMP
def_bool y

@@ -951,11 +955,11 @@ config HOTPLUG_CPU

# Common NUMA Features
config NUMA
bool "Numa Memory Allocation and Scheduler Support"
bool "NUMA Memory Allocation and Scheduler Support"
select ACPI_NUMA if ACPI
select OF_NUMA
help
Enable NUMA (Non Uniform Memory Access) support.
Enable NUMA (Non-Uniform Memory Access) support.

The kernel will try to allocate memory used by a CPU on the
local memory of the CPU and add some more
@@ -1497,23 +1501,82 @@ config ARM64_PTR_AUTH
bool "Enable support for pointer authentication"
default y
depends on !KVM || ARM64_VHE
depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
depends on CC_IS_GCC || (CC_IS_CLANG && AS_HAS_CFI_NEGATE_RA_STATE)
depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
help
Pointer authentication (part of the ARMv8.3 Extensions) provides
instructions for signing and authenticating pointers against secret
keys, which can be used to mitigate Return Oriented Programming (ROP)
and other attacks.

This option enables these instructions at EL0 (i.e. for userspace).

Choosing this option will cause the kernel to initialise secret keys
for each process at exec() time, with these keys being
context-switched along with the process.

If the compiler supports the -mbranch-protection or
-msign-return-address flag (e.g. GCC 7 or later), then this option
will also cause the kernel itself to be compiled with return address
protection. In this case, and if the target hardware is known to
support pointer authentication, then CONFIG_STACKPROTECTOR can be
disabled with minimal loss of protection.

The feature is detected at runtime. If the feature is not present in
hardware it will not be advertised to userspace/KVM guests, nor will it
be enabled. However, KVM guests also require VHE mode and hence the
CONFIG_ARM64_VHE=y option to use this feature.

If the feature is present on the boot CPU but not on a late CPU, then
the late CPU will be parked. Also, if the boot CPU does not have
address auth and the late CPU does, then the late CPU will still boot
but with the feature disabled. On such a system, this option should
not be selected.

This feature works with FUNCTION_GRAPH_TRACER option only if
DYNAMIC_FTRACE_WITH_REGS is enabled.
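
If the kernel itself is built with return address signing, saved return
addresses carry a Pointer Authentication Code in their upper bits, which is
why this series also teaches the unwinder, __show_regs() and users of
__builtin_return_address() to strip it. A simplified sketch of that masking,
assuming 48-bit virtual addresses (the kernel derives the real masks from
vabits_actual; the names below are illustrative only):

  /*
   * Simplified sketch of PAC stripping, assuming 48-bit virtual addresses.
   * Bit 55 is preserved by PAC insertion and selects the kernel (TTBR1)
   * vs user (TTBR0) address range.
   */
  #include <stdint.h>

  #define VA_BITS          48
  #define PAC_KERNEL_MASK  (~UINT64_C(0) << VA_BITS)                     /* bits 63:48 */
  #define PAC_USER_MASK    ((~UINT64_C(0) << VA_BITS) & \
                            ((UINT64_C(1) << 55) - 1))                   /* bits 54:48 */

  static inline uint64_t strip_pac(uint64_t ptr)
  {
          if (ptr & (UINT64_C(1) << 55))
                  return ptr | PAC_KERNEL_MASK;   /* kernel pointer: top bits back to 1 */
          return ptr & ~PAC_USER_MASK;            /* user pointer: clear the PAC field  */
  }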

config CC_HAS_BRANCH_PROT_PAC_RET
# GCC 9 or later, clang 8 or later
def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)

config CC_HAS_SIGN_RETURN_ADDRESS
# GCC 7, 8
def_bool $(cc-option,-msign-return-address=all)

config AS_HAS_PAC
def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)

config AS_HAS_CFI_NEGATE_RA_STATE
def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n)

endmenu

menu "ARMv8.4 architectural features"

config ARM64_AMU_EXTN
bool "Enable support for the Activity Monitors Unit CPU extension"
default y
help
The activity monitors extension is an optional extension introduced
by the ARMv8.4 CPU architecture. This enables support for version 1
of the activity monitors architecture, AMUv1.

To enable the use of this extension on CPUs that implement it, say Y.

Note that for architectural reasons, firmware _must_ implement AMU
support when running on CPUs that present the activity monitors
extension. The required support is present in:
* Version 1.5 and later of the ARM Trusted Firmware

For kernels that have this configuration enabled but boot with broken
firmware, you may need to say N here until the firmware is fixed.
Otherwise you may experience firmware panics or lockups when
accessing the counter registers. Even if you are not observing these
symptoms, the values returned by the register reads might not
correctly reflect reality. Most commonly, the value read will be 0,
indicating that the counter is not enabled.

endmenu

menu "ARMv8.5 architectural features"
11 changes: 11 additions & 0 deletions arch/arm64/Makefile
@@ -65,6 +65,17 @@ stack_protector_prepare: prepare0
include/generated/asm-offsets.h))
endif

ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
# -march=armv8.3-a enables the non-NOP-space PAC instructions. To prevent the
# compiler from generating them (and consequently breaking the single image
# contract), we pass the option only to the assembler. It is only needed for
# non-integrated assemblers.
branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a
KBUILD_CFLAGS += $(branch-prot-flags-y)
endif

ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__
4 changes: 2 additions & 2 deletions arch/arm64/crypto/aes-ce.S
@@ -9,8 +9,8 @@
#include <linux/linkage.h>
#include <asm/assembler.h>

#define AES_ENTRY(func) SYM_FUNC_START(ce_ ## func)
#define AES_ENDPROC(func) SYM_FUNC_END(ce_ ## func)
#define AES_FUNC_START(func) SYM_FUNC_START(ce_ ## func)
#define AES_FUNC_END(func) SYM_FUNC_END(ce_ ## func)

.arch armv8-a+crypto

...
