This class implements a layer that performs transposed convolution (sometimes also called deconvolution or up-convolution) on a set of three-dimensional multi-channel images. Padding is supported.
void SetFilterHeight( int filterHeight );
void SetFilterWidth( int filterWidth );
void SetFilterDepth( int filterDepth );
void SetFilterCount( int filterCount );
Sets the filters' size and number.
void SetStrideHeight( int strideHeight );
void SetStrideWidth( int strideWidth );
void SetStrideDepth( int strideDepth );
Sets the convolution stride. By default, the stride is `1`.
void SetPaddingHeight( int paddingHeight );
void SetPaddingWidth( int paddingWidth );
void SetPaddingDepth( int paddingDepth );
Sets the width, height, and depth of the padding that should be removed from the convolution result. For example, if you call `SetPaddingWidth( 1 );`, two rectangular sheets - one on the left and one on the right - will be cut off of the resulting image. By default, these values are set to `0`.
void SetZeroFreeTerm(bool isZeroFreeTerm);
Specifies if the free terms should be used. If you set this value to `true`, the free terms vector will be set to all zeros and won't be trained. By default, this value is set to `false`.
CPtr<CDnnBlob> GetFilterData() const;
The filters are represented by a blob of the following dimensions:
- `BatchLength` is equal to `1`
- `BatchWidth` is equal to the inputs' `Channels`
- `ListSize` is equal to `1`
- `Height` is equal to `GetFilterHeight()`
- `Width` is equal to `GetFilterWidth()`
- `Depth` is equal to `GetFilterDepth()`
- `Channels` is equal to `GetFilterCount()`
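Continuing the sketch above, the filter blob dimensions could be checked like this (`conv` is the hypothetical layer pointer from the previous example):

```c++
// Sketch: inspecting the filters; the blob may be null before the weights are initialized.
CPtr<CDnnBlob> filters = conv->GetFilterData();
if( filters != nullptr ) {
	const int filterHeight = filters->GetHeight();        // GetFilterHeight()
	const int filterWidth = filters->GetWidth();          // GetFilterWidth()
	const int filterDepth = filters->GetDepth();          // GetFilterDepth()
	const int filterCount = filters->GetChannelsCount();  // GetFilterCount()
	const int inputChannels = filters->GetBatchWidth();   // the inputs' Channels
}
```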
CPtr<CDnnBlob> GetFreeTermData() const;
The free terms are represented by a blob of the total size equal to the number of filters used (`GetFilterCount()`).
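Similarly, the free term blob size could be checked, again as a sketch based on the hypothetical `conv` above:

```c++
// Sketch: the free term blob holds one value per filter.
CPtr<CDnnBlob> freeTerms = conv->GetFreeTermData();
if( freeTerms != nullptr ) {
	NeoAssert( freeTerms->GetDataSize() == conv->GetFilterCount() );
}
```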
Each input accepts a blob with several images. The dimensions of all inputs should be the same:
- `BatchLength * BatchWidth * ListSize` - the number of images in the set
- `Height` - the images' height
- `Width` - the images' width
- `Depth` - the images' depth
- `Channels` - the number of channels the image format uses
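For example, an input blob of this shape could be created as follows; this is a sketch that assumes the `CDnnBlob::Create3DImageBlob` helper and uses arbitrary sizes:

```c++
// Sketch: a batch of 2 three-dimensional images, 8 x 8 x 8 with 4 channels each.
CPtr<CDnnBlob> input = CDnnBlob::Create3DImageBlob( mathEngine, CT_Float,
	/*batchLength*/ 1, /*batchWidth*/ 2, /*height*/ 8, /*width*/ 8,
	/*depth*/ 8, /*channels*/ 4 );
```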
For each input the layer has one output. It contains a blob with the result of convolution. The output blob dimensions are:
- `BatchLength` is equal to the input `BatchLength`
- `BatchWidth` is equal to the input `BatchWidth`
- `ListSize` is equal to the input `ListSize`
- `Height` can be calculated from the input `Height` as `StrideHeight * (Height - 1) + FilterHeight - 2 * PaddingHeight`
- `Width` can be calculated from the input `Width` as `StrideWidth * (Width - 1) + FilterWidth - 2 * PaddingWidth`
- `Depth` can be calculated from the input `Depth` as `StrideDepth * (Depth - 1) + FilterDepth - 2 * PaddingDepth`
- `Channels` is equal to `GetFilterCount()`
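As a worked example under the hypothetical settings from the configuration sketch (3 x 3 x 3 filters, stride 2, padding 1), the output size along each spatial axis can be computed as below; `TransposedConvOutputSize` is an illustrative helper, not part of the layer's API:

```c++
// Sketch: mirrors the formula above for one spatial dimension.
inline int TransposedConvOutputSize( int inputSize, int stride, int filterSize, int padding )
{
	return stride * ( inputSize - 1 ) + filterSize - 2 * padding;
}

// For an input Height of 8 with stride 2, filter 3, padding 1:
// 2 * (8 - 1) + 3 - 2 * 1 = 15.
```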