This class implements a layer that calculates a cross-entropy loss function for binary classification.
The function is calculated according to the formula:
```
loss = y * -log(sigmoid(x)) + (1 - y) * -log(1 - sigmoid(x))
```
where:
- `x` is the network response
- `y` is the correct class label (can be `1` or `-1`)
Please note that this function first calculates a `sigmoid` on the network response, so it is best not to connect this layer's input to the output of another `sigmoid`-calculating layer.
```c++
void SetPositiveWeight( float value );
```

Sets the multiplier for the term that corresponds to the objects for which the class has been detected correctly. You can tune this value to prioritize precision (set `value < 1`) or recall (set `value > 1`) during training. The default value is `1`.
```c++
void SetLossWeight( float lossWeight );
```

Sets the multiplier for this function's gradient during training. The default value is `1`. You may wish to change the default if you are using several loss functions in your network.
```c++
void SetMaxGradientValue( float maxValue );
```

Sets the upper limit for the absolute value of the function gradient. Whenever the gradient exceeds this limit, its absolute value will be reduced to `GetMaxGradientValue()`.
This layer has no trainable parameters.
The layer may have 2 to 3 inputs:

- The network output for which you are calculating the loss function. The `Height`, `Width`, `Depth`, and `Channels` dimensions of this blob should be equal to `1`.
- A blob of the same size as the first input, containing the class labels (may be `-1` or `1`).
- [Optional] The objects' weights. This blob should have the same dimensions as the first input.
This layer has no output.
```c++
float GetLastLoss() const;
```
Use this method to get the value of the loss function calculated on the network's last run.