The class implements a layer that calculates the softmax
function on a vector set.
The following formula is applied to each of the vectors:
softmax(x[0], ... , x[n-1])[i] = exp(x[i]) / (exp(x[0]) + ... + exp(x[n-1]))
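The formula can be sketched in a few lines of C++. The subtraction of the vector's maximum before exponentiation is a standard numerical-stability step (it cancels out in the ratio and does not change the result); the function name `Softmax` here is illustrative, not part of the library API.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Softmax over a single vector, per the formula above:
// result[i] = exp(x[i]) / sum_j exp(x[j]).
// The max is subtracted before exponentiation for numerical stability.
std::vector<double> Softmax( const std::vector<double>& x )
{
    const double maxVal = *std::max_element( x.begin(), x.end() );
    std::vector<double> result( x.size() );
    double sum = 0;
    for( size_t i = 0; i < x.size(); ++i ) {
        result[i] = std::exp( x[i] - maxVal );
        sum += result[i];
    }
    for( double& v : result ) {
        v /= sum; // normalize so the entries sum to 1
    }
    return result;
}
```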
```c++
// The dimensions over which the vectors should be normalized
enum TNormalizationArea {
    NA_ObjectSize = 0,
    NA_BatchLength,
    NA_ListSize,
    NA_Channel,

    NA_Count
};
```
```c++
void SetNormalizationArea( TNormalizationArea newArea )
```
Specifies which dimensions of the input blob constitute the vector length:
- `NA_ObjectSize` - [Default] the input blob will be considered to contain `BatchLength * BatchWidth * ListSize` vectors, each of `Height * Width * Depth * Channels` length.
- `NA_BatchLength` - the input blob will be considered to contain `BatchWidth * ListSize * Height * Width * Depth * Channels` vectors, each of `BatchLength` length.
- `NA_ListSize` - the input blob will be considered to contain `BatchLength * BatchWidth * Height * Width * Depth * Channels` vectors, each of `ListSize` length.
- `NA_Channel` - the input blob will be considered to contain `BatchLength * BatchWidth * ListSize * Height * Width * Depth` vectors, each of `Channels` length.
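The mapping from a normalization area to the number and length of the normalized vectors can be sketched as below. `GetVectorLayout` and `CVectorLayout` are hypothetical names for illustration only, not part of the library API; only the enum values and the dimension products come from the description above.

```cpp
// The dimensions over which the vectors should be normalized.
enum TNormalizationArea { NA_ObjectSize = 0, NA_BatchLength, NA_ListSize, NA_Channel, NA_Count };

// Hypothetical helper result: how many vectors the blob is split into,
// and the length of each.
struct CVectorLayout {
    int VectorCount;
    int VectorLength;
};

// Hypothetical helper: derive the vector layout from the seven blob dimensions.
CVectorLayout GetVectorLayout( TNormalizationArea area, int batchLength, int batchWidth,
    int listSize, int height, int width, int depth, int channels )
{
    const int geom = height * width * depth; // Height * Width * Depth
    switch( area ) {
        case NA_BatchLength:
            return { batchWidth * listSize * geom * channels, batchLength };
        case NA_ListSize:
            return { batchLength * batchWidth * geom * channels, listSize };
        case NA_Channel:
            return { batchLength * batchWidth * listSize * geom, channels };
        case NA_ObjectSize:
        default:
            return { batchLength * batchWidth * listSize, geom * channels };
    }
}
```

In every case `VectorCount * VectorLength` equals the total blob size, so the split only regroups the existing elements.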
The layer has no trainable parameters.
The single input accepts a data blob of any size. The GetNormalizationArea()
setting determines which dimensions will be considered to constitute the vector length.
The single output contains a blob of the same size, with the softmax
function applied to each vector.
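The forward pass can be sketched for the `NA_Channel` case, assuming `Channels` is the innermost (fastest-varying) dimension of the blob's memory layout, so each normalized vector is contiguous; `SoftmaxByChannel` is a hypothetical name, not the library API. The output buffer has the same size as the input, and each `Channels`-length vector sums to 1.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the layer's forward pass for NA_Channel, assuming Channels is
// the innermost dimension so every normalized vector is contiguous.
std::vector<float> SoftmaxByChannel( const std::vector<float>& blob, int channels )
{
    std::vector<float> out( blob.size() ); // output is the same size as the input
    for( size_t start = 0; start < blob.size(); start += channels ) {
        // Softmax over one contiguous vector of `channels` elements.
        const float maxVal = *std::max_element( blob.begin() + start,
            blob.begin() + start + channels );
        float sum = 0;
        for( int i = 0; i < channels; ++i ) {
            out[start + i] = std::exp( blob[start + i] - maxVal );
            sum += out[start + i];
        }
        for( int i = 0; i < channels; ++i ) {
            out[start + i] /= sum;
        }
    }
    return out;
}
```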