MPSCNNBatchNormalizationGradient(3) | MetalPerformanceShaders.framework | MPSCNNBatchNormalizationGradient(3) |
MPSCNNBatchNormalizationGradient
#import <MPSCNNBatchNormalization.h>
Inherits MPSCNNGradientKernel.
(nonnull instancetype) - initWithDevice:fusedNeuronDescriptor:
(nullable instancetype) - initWithCoder:device:
(void) - encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:destinationGradient:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:destinationGradients:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:
(void) - encodeToCommandBuffer:primaryImage:secondaryImage:destinationImage:
(void) - encodeBatchToCommandBuffer:primaryImages:secondaryImages:destinationImages:
(MPSImage *__nonnull) - encodeToCommandBuffer:primaryImage:secondaryImage:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:primaryImages:secondaryImages:
This depends on Metal.framework
MPSCNNBatchNormalizationGradient computes the gradients of a loss function resulting from a network containing a corresponding MPSCNNBatchNormalization kernel.
Two sets of values are computed: the gradient of the loss function with respect to the batch normalization source images, and the gradient of the loss function with respect to the scale and bias terms used to compute the batch normalization.
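For reference, these two sets of values correspond to the standard batch normalization backward pass (a summary of the usual derivation, not taken from this header). For a channel with mini-batch mean mu, variance sigma^2, normalized values x̂_i, and scale/bias gamma, beta:

```latex
\frac{\partial L}{\partial \gamma} = \sum_i \frac{\partial L}{\partial y_i}\,\hat{x}_i,
\qquad
\frac{\partial L}{\partial \beta} = \sum_i \frac{\partial L}{\partial y_i},
\qquad
\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \varepsilon}}
```

```latex
\frac{\partial L}{\partial x_i} =
\frac{\gamma}{\sqrt{\sigma^2 + \varepsilon}}
\left(
\frac{\partial L}{\partial y_i}
- \frac{1}{m}\sum_j \frac{\partial L}{\partial y_j}
- \frac{\hat{x}_i}{m}\sum_j \frac{\partial L}{\partial y_j}\,\hat{x}_j
\right)
```

Here m is the number of elements reduced over per channel; the gradients with respect to gamma and beta are the "scale and bias" gradients mentioned above, and dL/dx_i is the gradient with respect to the source images.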
Encode a MPSCNNKernel into a command buffer. Create textures to hold the results and return them. In the first iteration of this method, encodeBatchToCommandBuffer:sourceImages:destinationImages:, some work was left to the developer: correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property), the filter can do this work itself. If you would like some say over what sort of MPSImage is used (e.g. temporary vs. regular), how large it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
Reimplemented from MPSCNNBinaryKernel.
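A minimal usage sketch of the batch convenience encode. All variable names (cmdBuf, bnGradient, gradients, images, bnState, commandQueue) are placeholders for objects you must have created earlier; they are not part of the documented API:

```objc
// Hypothetical sketch: encode the batch normalization gradient pass and let
// MPS allocate the destination images according to the padding policy.
id<MTLCommandBuffer> cmdBuf = [commandQueue commandBuffer];

MPSImageBatch *dLdX =
    [bnGradient encodeBatchToCommandBuffer:cmdBuf
                           sourceGradients:gradients   // dL/dy from the layer above
                              sourceImages:images      // inputs to the forward BN kernel
                   batchNormalizationState:bnState];   // statistics saved by the forward pass

[cmdBuf commit];
// dLdX now holds the gradients with respect to the source images; the gradients
// for scale and bias accumulate into the batch normalization state.
```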
Encode a MPSCNNKernel into a command buffer. The operation proceeds out-of-place. This is the older style of encode, which reads the offset, does not change it, and ignores the padding method. Multiple images are processed concurrently. All images must have MPSImage.numberOfImages = 1.
Reimplemented from MPSCNNBinaryKernel.
Encode this operation to a command buffer. Create an MPSImageBatch to contain the result and return it. See encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:destinationGradients: for further details.
Encode this operation to a command buffer.
Encode a MPSCNNKernel into a command buffer. Create a texture to hold the result and return it. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer: correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property), the filter can do this work itself. If you would like some say over what sort of MPSImage is used (e.g. temporary vs. regular), how large it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h.
Reimplemented from MPSCNNBinaryKernel.
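The single-image returning variant follows the same pattern. As above, the variable names in this sketch (cmdBuf, bnGradient, gradient, image, bnState) are illustrative placeholders:

```objc
// Hypothetical sketch: single-image encode; MPS sizes and allocates the
// destination image via the padding policy and destinationImageAllocator.
MPSImage *dLdX =
    [bnGradient encodeToCommandBuffer:cmdBuf
                       sourceGradient:gradient   // dL/dy for this image
                          sourceImage:image      // input to the forward BN kernel
              batchNormalizationState:bnState];
```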
Encode a MPSCNNKernel into a command buffer. The operation proceeds out-of-place. This is the older style of encode, which reads the offset, does not change it, and ignores the padding method.
Reimplemented from MPSCNNBinaryKernel.
Encode this operation to a command buffer. Create an MPSImage to contain the result and return it. See encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:destinationGradient: for further details.
Encode this operation to a command buffer for a single image.
NSSecureCoding compatibility: While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file cannot know which device your data is allocated on, so we have to guess and may guess incorrectly. To avoid that problem, use a subclass of NSCoder that implements the <MPSDeviceProvider> protocol to tell MPS which MTLDevice to use.
Reimplemented from MPSCNNGradientKernel.
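One way to satisfy the <MPSDeviceProvider> requirement is a small NSKeyedUnarchiver subclass. The class name MPSDeviceCoder below is illustrative, not part of the framework; only the mpsMTLDevice method comes from the protocol:

```objc
// Hypothetical sketch: an unarchiver that tells MPS which device to use
// when decoding kernels via -initWithCoder:.
@interface MPSDeviceCoder : NSKeyedUnarchiver <MPSDeviceProvider>
@property (nonatomic, strong) id<MTLDevice> targetDevice;
@end

@implementation MPSDeviceCoder
// Required by <MPSDeviceProvider>: return the MTLDevice MPS should decode onto.
- (id<MTLDevice>)mpsMTLDevice { return self.targetDevice; }
@end
```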
Initializes a batch normalization gradient kernel using a device and neuron descriptor.
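A sketch of this initializer, fusing a ReLU neuron into the gradient kernel. The variable names (device, neuronDesc, bnGrad) are placeholders:

```objc
// Hypothetical sketch: build a neuron descriptor matching the one fused into
// the corresponding forward MPSCNNBatchNormalization kernel, then create
// the gradient kernel.
MPSNNNeuronDescriptor *neuronDesc =
    [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeReLU
                                                     a:0.0f];
MPSCNNBatchNormalizationGradient *bnGrad =
    [[MPSCNNBatchNormalizationGradient alloc] initWithDevice:device
                                       fusedNeuronDescriptor:neuronDesc];
```

Pass nil for the descriptor if the forward kernel had no fused neuron.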
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |