CMSIS NN Software Library (CMSIS-NN), Version 1.1.0
Functions
void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
    Q15 softmax function.
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
    Q7 softmax function.
EXP(2)-based softmax functions
void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
Parameters
    [in]  vec_in   pointer to input vector
    [in]  dim_vec  input vector dimension
    [out] p_out    pointer to output vector
Here, instead of the typical e-based softmax, a 2-based softmax is used, i.e.:

    y_i = 2^(x_i) / sum(2^(x_j))

The relative output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.
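A minimal usage sketch is shown below. It assumes the CMSIS-NN prototypes are available through the standard arm_nnfunctions.h header and that the CMSIS include paths are configured in the build; the input values and vector length are illustrative only.

```c
#include <stdio.h>
#include "arm_nnfunctions.h"   /* CMSIS-NN prototypes (q15_t, arm_softmax_q15) */

int main(void)
{
    /* Illustrative Q15 input logits; not taken from any real model. */
    const q15_t vec_in[4] = { 1000, 2000, 3000, 4000 };
    q15_t p_out[4];

    /* Compute the 2-based softmax over the 4-element vector. */
    arm_softmax_q15(vec_in, 4, p_out);

    /* Print the raw fixed-point output values. */
    for (int i = 0; i < 4; i++) {
        printf("p_out[%d] = %d\n", i, p_out[i]);
    }
    return 0;
}
```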
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
Parameters
    [in]  vec_in   pointer to input vector
    [in]  dim_vec  input vector dimension
    [out] p_out    pointer to output vector
Here, instead of the typical natural-exponent (e-based) softmax, a 2-based softmax is used, i.e.:

    y_i = 2^(x_i) / sum(2^(x_j))

The relative output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.
Referenced by main().
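For reference, the base-2 formula above can be written out in floating point as in the sketch below. This is only an illustration of the mathematics, not the fixed-point algorithm used inside arm_softmax_q7(); the helper name softmax_base2_ref and the sample inputs are hypothetical.

```c
#include <math.h>
#include <stdio.h>

/* Floating-point reference for y_i = 2^(x_i) / sum_j 2^(x_j).
 * Illustration of the base-2 softmax formula only; the library's
 * Q7/Q15 routines use fixed-point arithmetic internally. */
static void softmax_base2_ref(const float *x, int n, float *y)
{
    float sum = 0.0f;
    for (int j = 0; j < n; j++) {
        sum += exp2f(x[j]);          /* 2^(x_j) */
    }
    for (int i = 0; i < n; i++) {
        y[i] = exp2f(x[i]) / sum;    /* normalize so outputs sum to 1 */
    }
}

int main(void)
{
    const float x[3] = { 0.5f, 1.0f, 2.0f };  /* illustrative inputs */
    float y[3];

    softmax_base2_ref(x, 3, y);
    for (int i = 0; i < 3; i++) {
        printf("y[%d] = %f\n", i, y[i]);
    }
    return 0;
}
```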