MPSCNNConvolutionDescriptor(3) MetalPerformanceShaders.framework MPSCNNConvolutionDescriptor(3)

MPSCNNConvolutionDescriptor

#import <MPSCNNConvolution.h>

Inherits NSObject, <NSSecureCoding>, and <NSCopying>.

Inherited by MPSCNNDepthWiseConvolutionDescriptor, and MPSCNNSubPixelConvolutionDescriptor.


(void) - encodeWithCoder:
(nullable instancetype) - initWithCoder:
(void) - setBatchNormalizationParametersForInferenceWithMean:variance:gamma:beta:epsilon:
(void) - setNeuronType:parameterA:parameterB:
(MPSCNNNeuronType) - neuronType
(float) - neuronParameterA
(float) - neuronParameterB
(void) - setNeuronToPReLUWithParametersA:


(nonnull instancetype) + cnnConvolutionDescriptorWithKernelWidth:kernelHeight:inputFeatureChannels:outputFeatureChannels:neuronFilter:
(nonnull instancetype) + cnnConvolutionDescriptorWithKernelWidth:kernelHeight:inputFeatureChannels:outputFeatureChannels:


NSUInteger kernelWidth
NSUInteger kernelHeight
NSUInteger inputFeatureChannels
NSUInteger outputFeatureChannels
NSUInteger strideInPixelsX
NSUInteger strideInPixelsY
NSUInteger groups
NSUInteger dilationRateX
NSUInteger dilationRateY
MPSNNNeuronDescriptor *__nonnull fusedNeuronDescriptor
const MPSCNNNeuron *__nullable neuron
BOOL supportsSecureCoding

This depends on Metal.framework. The MPSCNNConvolutionDescriptor specifies a convolution descriptor.

Creates a convolution descriptor.

Parameters:

kernelWidth The width of the filter window. Must be > 0. Large values will take a long time.
kernelHeight The height of the filter window. Must be > 0. Large values will take a long time.
inputFeatureChannels The number of feature channels in the input image. Must be >= 1.
outputFeatureChannels The number of feature channels in the output image. Must be >= 1.

Returns:

A valid MPSCNNConvolutionDescriptor object, or nil on failure.

This method is deprecated. Please use the neuronType, neuronParameterA and neuronParameterB properties to fuse a neuron with the convolution.

Parameters:

kernelWidth The width of the filter window. Must be > 0. Large values will take a long time.
kernelHeight The height of the filter window. Must be > 0. Large values will take a long time.
inputFeatureChannels The number of feature channels in the input image. Must be >= 1.
outputFeatureChannels The number of feature channels in the output image. Must be >= 1.
neuronFilter An optional neuron filter that can be applied to the output of convolution.

Returns:

A valid MPSCNNConvolutionDescriptor object, or nil on failure.

- (void) encodeWithCoder: (NSCoder *__nonnull) aCoder

<NSSecureCoding> support

- (nullable instancetype) initWithCoder: (NSCoder *__nonnull) aDecoder

<NSSecureCoding> support

- (float) neuronParameterA

Getter for the neuron parameter A value set using the setNeuronType:parameterA:parameterB: method.

- (float) neuronParameterB

Getter for the neuron parameter B value set using the setNeuronType:parameterA:parameterB: method.

- (MPSCNNNeuronType) neuronType

Getter for the neuron type set using the setNeuronType:parameterA:parameterB: method.

- (void) setBatchNormalizationParametersForInferenceWithMean: (const float *__nullable) mean variance: (const float *__nullable) variance gamma: (const float *__nullable) gamma beta: (const float *__nullable) beta epsilon: (const float) epsilon

Adds batch normalization for inference. It copies all the float arrays provided, expecting outputFeatureChannels elements in each.

This method is used to pass batch normalization parameters to the convolution during the init call. For inference, the weights and bias going into the convolution or fully connected layer are modified to combine and optimize the layers.


w: weights for a corresponding output feature channel
b: bias for a corresponding output feature channel
W: batch normalized weights for a corresponding output feature channel
B: batch normalized bias for a corresponding output feature channel
I = gamma / sqrt(variance + epsilon), J = beta - ( I * mean )
W = w * I
B = b * I + J
Every convolution has (outputFeatureChannels * kernelWidth * kernelHeight * inputFeatureChannels) weights.
I and J are calculated separately for every output feature channel to get the corresponding weights and bias.
Thus, I and J are calculated and then applied to every (kernelWidth * kernelHeight * inputFeatureChannels)
weights, and this is done once for each of the outputFeatureChannels output channels.
Thus, internally, batch normalized weights are computed as:
W[no][i][j][ni] = w[no][i][j][ni] * I[no]
no: index into outputFeatureChannels
i : index into kernel height
j : index into kernel width
ni: index into inputFeatureChannels
One usually doesn't see a bias term and batch normalization together, as batch normalization potentially
cancels out the bias term after training. However, if the user provides a bias, MPS will use the above
formula to incorporate it; if the user does not have bias terms, pass a float array of zeroes in the
convolution init for the bias terms of each output feature channel.
This comes from:
https://arxiv.org/pdf/1502.03167v3.pdf
Note: in certain cases the batch normalization parameters will be cached by the MPSNNGraph
or the MPSCNNConvolution. If the batch normalization parameters change after either is made,
behavior is undefined.

Parameters:

mean Pointer to an array of floats of mean for each output feature channel
variance Pointer to an array of floats of variance for each output feature channel
gamma Pointer to an array of floats of gamma for each output feature channel
beta Pointer to an array of floats of beta for each output feature channel
epsilon A small float value used for numerical stability
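
The folding described above can be sketched numerically. The following is a minimal Python sketch of the W = w * I, B = b * I + J formula (the function and argument names are illustrative, not part of the MPS API):

```python
import math

def fold_batch_norm(weights, bias, mean, variance, gamma, beta, epsilon=1e-5):
    """Fold inference-time batch normalization into per-channel convolution
    weights and bias, per the formula above:
      I = gamma / sqrt(variance + epsilon), J = beta - I * mean
      W = w * I,  B = b * I + J
    weights is a list of per-output-channel weight lists; bias, mean,
    variance, gamma and beta hold one float per output feature channel."""
    folded_w, folded_b = [], []
    for no in range(len(bias)):
        I = gamma[no] / math.sqrt(variance[no] + epsilon)
        J = beta[no] - I * mean[no]
        # Every (kernelWidth * kernelHeight * inputFeatureChannels) weight
        # of channel no is scaled by the same I[no].
        folded_w.append([w * I for w in weights[no]])
        folded_b.append(bias[no] * I + J)
    return folded_w, folded_b

# Identity normalization (mean=0, variance=1, gamma=1, beta=0, epsilon=0)
# leaves the weights and bias unchanged.
W, B = fold_batch_norm([[1.0, 2.0]], [0.5],
                       mean=[0.0], variance=[1.0],
                       gamma=[1.0], beta=[0.0], epsilon=0.0)
# → W == [[1.0, 2.0]], B == [0.5]
```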

- (void) setNeuronToPReLUWithParametersA: (NSData *__nonnull) A

Adds per-channel neuron parameters A for the PReLU neuron activation function.

This method sets the neuron to PReLU, zeros parameters A and B and sets the per-channel neuron parameters A to an array containing a unique value of A for each output feature channel.

If the neuron function is f(v,a,b), it will apply


OutputImage(x,y,i) = f( ConvolutionResult(x,y,i), A[i], B[i] ) where i in [0,outputFeatureChannels-1]

See https://arxiv.org/pdf/1502.01852.pdf for details.

All other neuron types, where parameter A and parameter B are shared across channels, must be set using -setNeuronType:parameterA:parameterB:

If batch normalization parameters are set, batch normalization will occur before neuron application, i.e. the output of convolution is first batch normalized and then the neuron activation is applied. This function automatically sets neuronType to MPSCNNNeuronTypePReLU.

Note: in certain cases the neuron descriptor will be cached by the MPSNNGraph or the MPSCNNConvolution. If the neuron type changes after either is made, behavior is undefined.

Parameters:

A An array containing per-channel float values for neuron parameter A. Number of entries must be equal to outputFeatureChannels.
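
As a sketch of how the per-channel parameters apply, the following illustrative Python (not the MPS API) implements PReLU, f(v, a) = v for v >= 0 and a*v for v < 0, with a distinct slope A[i] per output channel:

```python
def prelu(conv_result, A):
    """Apply per-channel PReLU:
    OutputImage(x,y,i) = f(ConvolutionResult(x,y,i), A[i]),
    where f(v, a) = v for v >= 0 and a*v for v < 0.
    conv_result is a list of channels, each a flat list of pixel values."""
    return [[v if v >= 0 else A[i] * v for v in channel]
            for i, channel in enumerate(conv_result)]

# Two output channels with different negative slopes A[i].
out = prelu([[1.0, -2.0], [-4.0, 3.0]], A=[0.5, 0.25])
# → [[1.0, -1.0], [-1.0, 3.0]]
```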

- (void) setNeuronType: (MPSCNNNeuronType) neuronType parameterA: (float) parameterA parameterB: (float) parameterB

Adds a neuron activation function to the convolution descriptor.

This method can be used to add a neuron activation function of a given type with associated scalar parameters A and B that are shared across all output channels. The neuron activation function is applied to the output of convolution. This is a per-pixel operation that is fused with the convolution kernel itself for best performance. Note that this method can only be used to fuse a neuron of a kind for which parameters A and B are shared across all channels of the convolution output. It is an error to call this method for neuron activation functions like MPSCNNNeuronTypePReLU, which require per-channel parameter values. For those kinds of neuron activation functions, use the appropriate setter functions.

Note: in certain cases, the neuron descriptor will be cached by the MPSNNGraph or the MPSCNNConvolution. If the neuron type changes after either is made, behavior is undefined.

Parameters:

neuronType type of neuron activation function. For full list see MPSCNNNeuronType.h
parameterA parameterA of neuron activation that is shared across all channels of convolution output.
parameterB parameterB of neuron activation that is shared across all channels of convolution output.

- dilationRateX [read], [write], [nonatomic], [assign]

The dilationRateX property can be used to implement dilated convolution as described in https://arxiv.org/pdf/1511.07122v3.pdf to aggregate global information in dense prediction problems. The default value is 1. When set to a value > 1, the original kernel width kW is dilated to


kW_Dilated = (kW-1)*dilationRateX + 1

by inserting dilationRateX-1 zeros between consecutive entries in each row of the original kernel. The kernel is centered based on kW_Dilated.

- dilationRateY [read], [write], [nonatomic], [assign]

The dilationRateY property can be used to implement dilated convolution as described in https://arxiv.org/pdf/1511.07122v3.pdf to aggregate global information in dense prediction problems. The default value is 1. When set to a value > 1, the original kernel height kH is dilated to


kH_Dilated = (kH-1)*dilationRateY + 1

by inserting dilationRateY-1 rows of zeros between consecutive rows of the original kernel. The kernel is centered based on kH_Dilated.
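
The dilation formulas above can be sketched as follows (illustrative Python, not the MPS API):

```python
def dilated_size(k, rate):
    """Effective kernel extent after dilation: (k-1)*rate + 1."""
    return (k - 1) * rate + 1

def dilate_row(row, rate):
    """Insert rate-1 zeros between consecutive entries of one kernel row,
    matching the dilationRateX behavior described above."""
    out = []
    for i, v in enumerate(row):
        out.append(v)
        if i < len(row) - 1:
            out.extend([0.0] * (rate - 1))
    return out

# A 3-tap kernel row dilated with rate 2 spans (3-1)*2 + 1 = 5 taps.
row = dilate_row([1.0, 2.0, 3.0], rate=2)
# → [1.0, 0.0, 2.0, 0.0, 3.0]
```

The same construction applies along the Y axis with dilationRateY, inserting full rows of zeros instead of individual entries.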

- fusedNeuronDescriptor [read], [write], [nonatomic], [retain]

This property can be used to add a neuron activation function of a given type with associated scalar parameters A and B that are shared across all output channels. The neuron activation function is applied to the output of convolution. This is a per-pixel operation that is fused with the convolution kernel itself for best performance. Note that this property can only be used to fuse a neuron of a kind for which parameters A and B are shared across all channels of the convolution output. It is an error to use it for neuron activation functions like MPSCNNNeuronTypePReLU, which require per-channel parameter values. For those kinds of neuron activation functions, use the appropriate setter functions. The default is a descriptor with neuronType MPSCNNNeuronTypeNone.

Note: in certain cases the neuron descriptor will be cached by the MPSNNGraph or the MPSCNNConvolution. If the neuron type changes after either is made, behavior is undefined.

- groups [read], [write], [nonatomic], [assign]

Number of groups the input and output channels are divided into. The default value is 1. Groups let you reduce the parameterization. If groups is set to n, the input is divided into n groups with inputFeatureChannels/n channels in each group. Similarly, the output is divided into n groups with outputFeatureChannels/n channels in each group. The ith group of the input is only connected to the ith group of the output, so the number of weights (parameters) needed is reduced by a factor of n. Both inputFeatureChannels and outputFeatureChannels must be divisible by n, and the number of channels in each group must be a multiple of 4.
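
The factor-of-n weight reduction can be sketched as follows (illustrative Python; the function name is not part of the MPS API):

```python
def conv_weight_count(in_ch, out_ch, kw, kh, groups=1):
    """Number of weights in a (possibly grouped) convolution.
    Each of the `groups` groups connects in_ch/groups input channels to
    out_ch/groups output channels, so the total weight count is the dense
    count divided by `groups`."""
    assert in_ch % groups == 0 and out_ch % groups == 0
    return groups * (in_ch // groups) * (out_ch // groups) * kw * kh

dense = conv_weight_count(64, 64, 3, 3)              # groups = 1: 36864 weights
grouped = conv_weight_count(64, 64, 3, 3, groups=4)  # 9216 weights, dense / 4
```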

- inputFeatureChannels [read], [write], [nonatomic], [assign]

The number of feature channels per pixel in the input image.

- kernelHeight [read], [write], [nonatomic], [assign]

The height of the filter window. The default value is 3. Any positive non-zero value is valid, including even values. The position of the top edge of the filter window is given by offset.y - (kernelHeight>>1)

- kernelWidth [read], [write], [nonatomic], [assign]

The width of the filter window. The default value is 3. Any positive non-zero value is valid, including even values. The position of the left edge of the filter window is given by offset.x - (kernelWidth>>1)
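
The window-edge formulas for kernelWidth and kernelHeight above can be checked with a small sketch (illustrative Python, not the MPS API):

```python
def window_top_left(offset_x, offset_y, kernel_width, kernel_height):
    """Top-left corner of the filter window, per the formulas above:
    left = offset.x - (kernelWidth >> 1), top = offset.y - (kernelHeight >> 1)."""
    return (offset_x - (kernel_width >> 1), offset_y - (kernel_height >> 1))

# With the default 3x3 kernel centered at offset (5, 5), the window starts at (4, 4).
left, top = window_top_left(5, 5, 3, 3)
# An even 4x4 kernel at the same offset starts at (3, 3).
left_even, top_even = window_top_left(5, 5, 4, 4)
```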

- neuron [read], [write], [nonatomic], [retain]

MPSCNNNeuron filter to be applied as part of convolution. This is applied at the end, after batch normalization. The default is nil. This is deprecated; you don't need to create an MPSCNNNeuron object to fuse with the convolution. Use the neuron properties in this descriptor instead.

- outputFeatureChannels [read], [write], [nonatomic], [assign]

The number of feature channels per pixel in the output image.

- strideInPixelsX [read], [write], [nonatomic], [assign]

The output stride (downsampling factor) in the x dimension. The default value is 1.

- strideInPixelsY [read], [write], [nonatomic], [assign]

The output stride (downsampling factor) in the y dimension. The default value is 1.

- supportsSecureCoding [read], [nonatomic], [assign]

<NSSecureCoding> support

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3