The following are the updates compared to the previous release, CMSIS-NN v6.0.0.
New operators and features
- New int8 Pad operator (see the sketch after this list)
- New int8 Transpose operator
- New int8 Minimum/Maximum operator
- New int8/int16 Batch Matmul operator
- Per-channel quantization support for the Fully Connected operator
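A minimal sketch of driving the new int8 Pad operator is shown below. The function name arm_pad_s8 and its argument list (input/output pointers, pad value, input dimensions, and per-dimension pre/post padding), as well as all tensor shapes, are assumptions made for illustration; consult arm_nnfunctions.h for the actual declaration.

```c
#include "arm_nnfunctions.h"

/* Hypothetical 1x4x4x2 int8 tensor, padded by one row/column of zeros on each
   side of H and W to give a 1x6x6x2 output. Signature assumed; check
   arm_nnfunctions.h for the actual declaration. */
static int8_t pad_input[1 * 4 * 4 * 2];
static int8_t pad_output[1 * 6 * 6 * 2];

arm_cmsis_nn_status run_pad(void)
{
    const cmsis_nn_dims input_dims = {1, 4, 4, 2}; /* N, H, W, C */
    const cmsis_nn_dims pre_pad = {0, 1, 1, 0};    /* padding before each dimension */
    const cmsis_nn_dims post_pad = {0, 1, 1, 0};   /* padding after each dimension */

    return arm_pad_s8(pad_input, pad_output, 0 /* pad value */, &input_dims, &pre_pad, &post_pad);
}
```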
Optimizations
- Improved performance and reduced memory usage for Transposed convolution.
- Interleaved im2col for the int4 MVE convolution kernel
- Aligned kernel_sum/effective_bias usage
- Changed SVDF MVE memmove to the faster arm_memcpy_s8
- Added optional restrict keyword to the convolution core loop output pointers
- Depthwise convolution with a single input channel is now treated as a regular convolution
- The fast 1x1 convolution DSP case now uses unordered im2col
API changes
- arm_convolve_s8: new argument upscale_dims to support the new transposed convolution implementation. May be set to NULL with no behavioural change (see the sketch after this list).
- arm_transpose_conv_s8_get_buffer_size: new argument transposed_conv_params and updated behaviour to support the new transposed convolution implementation.
- arm_vector_sum_s8: new argument rhs_offset to support pre-computation of kernel_sum and bias.
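The sketch below illustrates the arm_convolve_s8 change for a regular (non-transposed) convolution: existing callers keep the previous behaviour by passing NULL for upscale_dims. All dimensions, quantization parameters and buffers are hypothetical placeholders, and the exact position of upscale_dims in the argument list is an assumption to be verified against arm_nnfunctions.h.

```c
#include "arm_nnfunctions.h"

/* Hypothetical tensors and quantization parameters, for illustration only. */
static int8_t input_data[1 * 8 * 8 * 4];
static const int8_t filter_data[8 * 3 * 3 * 4];
static const int32_t bias_data[8];
static int8_t output_data[1 * 8 * 8 * 8];
static int32_t multipliers[8];
static int32_t shifts[8];

arm_cmsis_nn_status run_conv(cmsis_nn_context *ctx)
{
    /* ctx->buf is expected to point to a scratch buffer of at least
       arm_convolve_s8_get_buffer_size(&input_dims, &filter_dims) bytes. */
    const cmsis_nn_dims input_dims = {1, 8, 8, 4};  /* N, H, W, C_IN */
    const cmsis_nn_dims filter_dims = {8, 3, 3, 4}; /* C_OUT, HK, WK, C_IN */
    const cmsis_nn_dims bias_dims = {1, 1, 1, 8};
    const cmsis_nn_dims output_dims = {1, 8, 8, 8};

    cmsis_nn_conv_params conv_params = {0};
    conv_params.padding.h = 1;
    conv_params.padding.w = 1;
    conv_params.stride.h = 1;
    conv_params.stride.w = 1;
    conv_params.dilation.h = 1;
    conv_params.dilation.w = 1;
    conv_params.activation.min = -128;
    conv_params.activation.max = 127;

    cmsis_nn_per_channel_quant_params quant_params = {multipliers, shifts};

    /* Regular convolutions are unaffected by the new argument: passing NULL
       for upscale_dims keeps the pre-v7.0.0 behaviour. Its assumed position
       in the argument list should be checked against arm_nnfunctions.h. */
    return arm_convolve_s8(ctx, &conv_params, &quant_params,
                           &input_dims, input_data,
                           &filter_dims, filter_data,
                           &bias_dims, bias_data,
                           NULL, /* upscale_dims: new in v7.0.0 */
                           &output_dims, output_data);
}
```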
Full Changelog: v6.0.0...v7.0.0