reinhardkeil [Thu, 24 Oct 2019 11:49:23 +0000 (13:49 +0200)]
CMSIS-Pack:
- API documentation improved
- added bash_script to support pack generation on Linux or Windows
- added custom attribute to component element
- added TrustZone-disabled value to software model selection
Kevin Bracey [Wed, 9 Jan 2019 13:51:08 +0000 (15:51 +0200)]
IAR: RRX doesn't modify flags, but has flags as input
The IAR assembler version of __RRX had a "cc" clobber, but this is
unneeded: the instruction does not modify the condition codes (it is not
"RRXS"). Instead the asm statement should be volatile, as already done in
the GCC and Clang versions, to reflect the condition-code input to the instruction.
CMSIS intrinsics use "volatile" attributes to force ordering of
instructions that work on the PSR, either input or output.
A "cc" clobber does not have that effect, at least in GCC; it only
indicates "changes condition codes", not "reads condition codes", so
CC-clobbering instructions can still be reordered. More generally, the
compilers make no guarantees about preserving flag state between
assembler sequences, so __RRX will always be prone to unreliability,
but the volatile marker increases the chances of the instructions
coming out in the right order.
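As a point of reference for the flag-input behaviour described above, RRX rotates right by one bit through the carry flag. A plain-C model of just the data operation (the function name and carry parameter are illustrative, not part of CMSIS) might look like:

```c
#include <stdint.h>

/* Plain-C model of the RRX data operation: rotate right one bit through
   the carry flag. Note the carry is an *input*; only the S-suffixed RRXS
   would also write the flags back, which is why __RRX needs no "cc"
   clobber but does depend on the current flag state. */
static uint32_t rrx_model(uint32_t value, unsigned carry_in)
{
    return ((uint32_t)(carry_in & 1u) << 31) | (value >> 1);
}
```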
Kevin Bracey [Wed, 9 Jan 2019 12:22:32 +0000 (14:22 +0200)]
GCC: add WFI/WFE compiler barriers
Add a "memory" clobber to __WFI and __WFE. Architecturally these should
always be immediately preceded by a __DSB (e.g. to ensure the write buffer
is drained). Without a barrier on WFI, the compiler would be permitted to
reorder memory accesses below the WFI; this could cause power issues,
with the external bus not being idle during the sleep.
The added barrier should have no impact on code size, assuming these
instructions are always accompanied by DSB, as DSB does have its own
memory clobber already.
SEV not modified as there are no issues with the equivalent reordering;
we only need the SEV to not be reordered before the DSB, which is
ensured by volatile.
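A minimal sketch of the mechanism (the names are illustrative, and an empty asm stands in for the real "wfi" so the sketch compiles on any target): the "memory" clobber stops the compiler from sinking pending stores past the wait.

```c
#include <stdint.h>

static volatile uint32_t bus_write_count;

/* Stand-in for __WFI: the "memory" clobber makes the statement a
   compiler barrier, so stores above it cannot be moved below it.
   The real CMSIS __WFI emits "wfi" with the same clobber. */
static inline void wfi_like(void)
{
    __asm__ volatile ("" ::: "memory");
}

uint32_t drain_then_wait(void)
{
    bus_write_count += 1u;  /* must reach memory before the wait */
    wfi_like();
    return bus_write_count;
}
```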
Kevin Bracey [Wed, 9 Jan 2019 13:23:03 +0000 (15:23 +0200)]
IAR: LDRT et al must be asm volatile
As these functions take volatile pointers, the API is promising that the
loads and stores will happen, so the assembler statements need volatile
qualifiers too.
If the functions took non-volatile pointers, or had a separate
non-volatile overload for C++, then the volatile could be omitted - the
instructions are normal loads and stores with no side-effects.
The GCC and Clang assembler statements are already "asm volatile", and
armcc uses intrinsics.
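The contract can be sketched in plain C (ldrt_model is an illustrative name, and a plain volatile access stands in for the actual "ldrt" instruction): because the parameter is a volatile pointer, the load must not be elided, which is exactly what "asm volatile" guarantees for an inline-assembly implementation.

```c
#include <stdint.h>

static volatile uint32_t sample_reg = 42u;

/* Because the parameter is volatile-qualified, the API promises the
   load really happens; an inline-asm implementation must therefore be
   "asm volatile" so the compiler cannot elide or reorder it. A plain
   volatile access models that guarantee portably here. */
static uint32_t ldrt_model(volatile uint32_t *addr)
{
    return *addr;
}
```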
Kevin Bracey [Thu, 17 Jan 2019 13:53:51 +0000 (15:53 +0200)]
ARMCC: remove explicit DSB/DMB/ISB barriers
ARMCC documentation states that the __dsb etc intrinsics act as
optimisation barriers. Even though that's a bit woolly about the exact
equivalent barrier intrinsic, take its word that it's doing the right
thing.
It seems safe to assume that it is, because the __schedule_barrier()
intrinsics here are not actually sufficient for DSB and DMB. They need a
__memory_barrier(). If no-one has seen any problems, then presumably
they already include one.
Kevin Bracey [Thu, 17 Jan 2019 13:39:17 +0000 (15:39 +0200)]
Core(A)/armclang: Remove ISB/DSB/DMB barriers
Core(M) versions of files already do not have explicit barriers, so this
makes Core(A) consistent - the built-ins are specified as having barriers
anyway.
Christophe Favergeon [Wed, 12 Jun 2019 11:29:14 +0000 (13:29 +0200)]
CMSIS-DSP: New testing framework
(For internal use; in the short term we will not provide support for it.)
CMSIS-DSP: Update to cmake build for the testing framework
CMSIS-NN: Implementation of arm_fully_connected_s8
Uses an API and quantization compatible with TF Lite.
Jonatan Antoni [Tue, 9 Jul 2019 11:44:41 +0000 (13:44 +0200)]
Doxygen:
- Aligned version of CMSIS-Zone documentation with upcoming release.
- Relaxed condition on CMSIS-Drivers, i.e. applicable to all Cortex processors.
The change breaks compatibility with released software components.
Anonymous structs/unions are not strict C99 but are available as an extension. It's a reasonable feature to use; the code becomes cleaner and more readable.
Jonatan Antoni [Wed, 15 May 2019 12:38:01 +0000 (14:38 +0200)]
Core(A): Fixed __FPU_Enable function so it no longer corrupts registers. (#589)
- Enhanced function to use only two temporary registers.
- Added used registers to clobber list.
Kevin Bracey [Wed, 16 Jan 2019 13:34:55 +0000 (15:34 +0200)]
__NVIC_EnableIRQ compiler barriers
__NVIC_DisableIRQ and __NVIC_EnableIRQ can be used to function as a
mutex-style protection lock against a particular interrupt handler,
similar to __disable_irq and __enable_irq for all interrupts.
However, unlike a mutex unlock or __enable_irq, __NVIC_EnableIRQ had no
compiler barriers; it was just a volatile write. In the following code
sequence:
NVIC_DisableIRQ(devx);
// modify some RAM accessed by devx IRQ handler
NVIC_EnableIRQ(devx);
there would be nothing preventing the RAM accesses from being moved below
the NVIC_EnableIRQ.
Add barriers to NVIC_EnableIRQ, so that the above code works the same as
a mutex or __disable_irq, without any added need to mark the shared RAM
as volatile.
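A sketch of the pattern under discussion (my_disable_irq/my_enable_irq are illustrative stand-ins, with a plain volatile write in place of the real NVIC register access, so it compiles on any target): the "memory" clobbers pin the shared-RAM update inside the critical section.

```c
#include <stdint.h>

static uint32_t shared_counter;          /* RAM shared with the IRQ handler */
static volatile uint32_t fake_nvic_reg;  /* stand-in for the NVIC ISER/ICER */

/* Illustrative stand-ins for NVIC_DisableIRQ/NVIC_EnableIRQ with the
   added compiler barriers: the "memory" clobbers prevent the
   shared_counter access from being moved outside the disabled region. */
static inline void my_disable_irq(void)
{
    fake_nvic_reg = 0u;                   /* disable the interrupt */
    __asm__ volatile ("" ::: "memory");   /* barrier after disabling */
}

static inline void my_enable_irq(void)
{
    __asm__ volatile ("" ::: "memory");   /* barrier before re-enabling */
    fake_nvic_reg = 1u;                   /* enable the interrupt */
}

uint32_t update_shared(void)
{
    my_disable_irq();
    shared_counter += 1u;   /* cannot be reordered past the enable */
    my_enable_irq();
    return shared_counter;
}
```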