MPSCNNKernel(3) | MetalPerformanceShaders.framework | MPSCNNKernel(3) |
MPSCNNKernel
#import <MPSCNNKernel.h>
Inherits MPSKernel.
Inherited by MPSCNNBatchNormalization, MPSCNNBatchNormalizationStatistics, MPSCNNBinaryConvolution, MPSCNNConvolution, MPSCNNConvolutionTranspose, MPSCNNCrossChannelNormalization, MPSCNNDropout, MPSCNNInstanceNormalization, MPSCNNLocalContrastNormalization, MPSCNNLogSoftMax, MPSCNNLoss, MPSCNNNeuron, MPSCNNPooling, MPSCNNSoftMax, MPSCNNSpatialNormalization, MPSCNNUpsampling, MPSCNNYOLOLoss, MPSNNCropAndResizeBilinear, MPSNNReduceUnary, MPSNNReshape, MPSNNResizeBilinear, MPSNNSlice, and MPSRNNImageInferenceLayer.
(nonnull instancetype) - initWithDevice:
(nullable instancetype) - initWithCoder:device:
(void) - encodeToCommandBuffer:sourceImage:destinationImage:
(void) - encodeToCommandBuffer:sourceImage:destinationState:destinationImage:
(void) - encodeBatchToCommandBuffer:sourceImages:destinationImages:
(void) - encodeBatchToCommandBuffer:sourceImages:destinationStates:destinationImages:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceImage:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceImage:destinationState:destinationStateIsTemporary:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceImages:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceImages:destinationStates:destinationStateIsTemporary:
(MPSState *__nullable) - resultStateForSourceImage:sourceStates:destinationImage:
(MPSStateBatch *__nullable) - resultStateBatchForSourceImage:sourceStates:destinationImage:
(MPSState *__nullable) - temporaryResultStateForCommandBuffer:sourceImage:sourceStates:destinationImage:
(MPSStateBatch *__nullable) - temporaryResultStateBatchForCommandBuffer:sourceImage:sourceStates:destinationImage:
(BOOL) - isResultStateReusedAcrossBatch
(BOOL) - appendBatchBarrier
(MPSImageDescriptor *__nonnull) - destinationImageDescriptorForSourceImages:sourceStates:
MPSOffset offset
MTLRegion clipRect
NSUInteger destinationFeatureChannelOffset
NSUInteger sourceFeatureChannelOffset
NSUInteger sourceFeatureChannelMaxCount
MPSImageEdgeMode edgeMode
NSUInteger kernelWidth
NSUInteger kernelHeight
NSUInteger strideInPixelsX
NSUInteger strideInPixelsY
NSUInteger dilationRateX
NSUInteger dilationRateY
BOOL isBackwards
BOOL isStateModified
id< MPSNNPadding > padding
id< MPSImageAllocator > destinationImageAllocator
This depends on Metal.framework. Describes a convolutional neural network kernel. A MPSCNNKernel consumes one MPSImage and produces one MPSImage.
The region overwritten in the destination MPSImage is described
by the clipRect. The top left corner of the region consumed (ignoring
adjustments for filter size -- e.g. convolution filter size) is given
by the offset. The size of the region consumed is a function of the
clipRect size and any subsampling caused by pixel strides at work,
e.g. MPSCNNPooling.strideInPixelsX/Y. Where the offset + clipRect
would cause a {x,y} pixel address not in the image to be read, the
edgeMode is used to determine what value to read there.
The Z/depth component of the offset, clipRect.origin and clipRect.size
indexes which images to use. If the MPSImage contains only a single image
then these should be offset.z = 0, clipRect.origin.z = 0
and clipRect.size.depth = 1. If the MPSImage contains multiple images,
clipRect.size.depth refers to number of images to process. Both source
and destination MPSImages must have at least this many images. offset.z
refers to starting source image index. Thus offset.z + clipRect.size.depth must
be <= source.numberOfImages. Similarly, clipRect.origin.z refers to starting
image index in destination. So clipRect.origin.z + clipRect.size.depth must be
<= destination.numberOfImages.
The destinationFeatureChannelOffset property can be used to control where the MPSKernel will
start writing in the feature channel dimension. For example, if the destination image has
64 channels and the MPSKernel outputs 32 channels, by default channels 0-31 of the destination
will be populated by the MPSKernel. If we instead want this MPSKernel to populate channels 32-63
of the destination, we can set destinationFeatureChannelOffset = 32.
A good example of this is the concat (concatenation) operation in TensorFlow. Suppose
we have a source src = w x h x Ni which goes through CNNConvolution_0, which produces
output O0 = w x h x N0, and CNNConvolution_1, which produces output O1 = w x h x N1, followed
by a concatenation which produces O = w x h x (N0 + N1). We can achieve this by creating
an MPSImage with dimensions O = w x h x (N0 + N1) and using it as the destination of
both convolutions as follows:
CNNConvolution0: destinationFeatureChannelOffset = 0, this will output N0 channels starting at
channel 0 of destination thus populating [0,N0-1] channels.
CNNConvolution1: destinationFeatureChannelOffset = N0, this will output N1 channels starting at
channel N0 of destination thus populating [N0,N0+N1-1] channels.
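The concatenation pattern above can be sketched as follows. This is a hedged sketch, not code from the framework documentation: conv0, conv1, cmdBuf, src, and the channel counts N0 = 32 and N1 = 64 are hypothetical placeholders.

```objc
// Sketch: concatenate two convolution outputs into one MPSImage.
// conv0, conv1: MPSCNNConvolution kernels producing 32 and 64 channels.
// cmdBuf: id<MTLCommandBuffer>; src: the shared w x h x Ni input MPSImage.
MPSImageDescriptor *d =
    [MPSImageDescriptor imageDescriptorWithChannelFormat: MPSImageFeatureChannelFormatFloat16
                                                   width: src.width
                                                  height: src.height
                                         featureChannels: 32 + 64];
MPSImage *concat = [[MPSImage alloc] initWithDevice: cmdBuf.device
                                    imageDescriptor: d];

conv0.destinationFeatureChannelOffset = 0;    // writes channels [0, 31]
[conv0 encodeToCommandBuffer: cmdBuf sourceImage: src destinationImage: concat];

conv1.destinationFeatureChannelOffset = 32;   // writes channels [32, 95]
[conv1 encodeToCommandBuffer: cmdBuf sourceImage: src destinationImage: concat];
```

Because both kernels write into disjoint channel ranges of the same destination, no separate copy or concatenation pass is needed.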
A MPSCNNKernel can be saved to disk / network using NSCoders such as NSKeyedArchiver.
When decoding, the system default MTLDevice will be chosen unless the NSCoder adopts
the <MPSDeviceProvider> protocol. To accomplish this you will likely need to subclass your
unarchiver to add this method.
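A sketch of such a subclass follows; the class name MyMPSUnarchiver and its device property are assumptions, while <MPSDeviceProvider> and -mpsMTLDevice are the framework protocol and its method:

```objc
// Sketch: an NSKeyedUnarchiver subclass adopting <MPSDeviceProvider>
// so decoded MPS kernels land on a caller-chosen MTLDevice.
@interface MyMPSUnarchiver : NSKeyedUnarchiver <MPSDeviceProvider>
@property (nonatomic, strong) id<MTLDevice> device;
@end

@implementation MyMPSUnarchiver
// MPS consults this during decoding, so the device need not be guessed.
- (id<MTLDevice>)mpsMTLDevice
{
    return self.device;
}
@end
```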
Returns YES if the filter must be run over the entire batch before its results may be used. Nearly all filters do not need to see the entire batch all at once and can operate correctly with partial batches. This allows the graph to strip-mine the problem, processing the graph top to bottom on a subset of the batch at a time, dramatically reducing memory usage. As the full nominal working set for a graph is often so large that it may not fit in memory, sub-batching may be required for forward progress.
Batch normalization statistics, on the other hand, must complete the batch before the statistics may be used to normalize the images in the batch in the ensuing normalization filter. Consequently, batch normalization statistics requests that the graph insert a batch barrier following it by returning YES from -appendBatchBarrier. This tells the graph to complete the batch before any dependent filters can start. Note that the filter itself may still be subject to sub-batching in its operation. All filters must be able to function without seeing the entire batch in a single -encode call. Carry-over state that is accumulated across sub-batches is commonly carried in a shared MPSState containing a MTLBuffer. See -isResultStateReusedAcrossBatch.
Caution: on most supported devices, the working set may be so large that the graph may be forced to throw away and recalculate most intermediate images in cases where strip-mining cannot occur because -appendBatchBarrier returns YES. A single batch barrier can commonly cause a many-fold memory size increase and/or performance reduction over the entire graph. Filters of this variety should be avoided.
Default: NO
Get a suggested destination image descriptor for a source image. Your application is certainly free to pass in any destinationImage it likes to encodeToCommandBuffer:sourceImage:destinationImage:, within reason. This is the basic design for iOS 10. This method is therefore not required.
However, calculating the MPSImage size and MPSCNNKernel properties for each filter can be tedious and complicated work, so this method is made available to automate the process. The application may modify the properties of the descriptor before a MPSImage is made from it, so long as the choice is sensible for the kernel in question. Please see individual kernel descriptions for restrictions.
The expected timeline for use is as follows:
1) This method is called: a) The default MPS padding calculation is applied. It uses the MPSNNPaddingMethod of the .padding property to provide a consistent addressing scheme over the graph. It creates the MPSImageDescriptor and adjusts the .offset property of the MPSNNKernel. When using a MPSNNGraph, the padding is set using the MPSNNFilterNode as a proxy.
b) This method may be overridden by MPSCNNKernel subclass to achieve any customization appropriate to the object type.
c) Source states are then applied in order. These may modify the descriptor and may update other object properties. See: -destinationImageDescriptorForSourceImages:sourceStates:forKernel:suggestedDescriptor:. This is the typical way in which MPS may attempt to influence the operation of its kernels.
d) If the .padding property has a custom padding policy method of the same name, it is called. Similarly, it may also adjust the descriptor and any MPSCNNKernel properties. This is the typical way in which your application may attempt to influence the operation of the MPS kernels.
2) A result is returned from this method and the caller may further adjust the descriptor and kernel properties directly.
3) The caller uses the descriptor to make a new MPSImage to use as the destination image for the -encode call in step 5.
4) The caller calls -resultStateForSourceImage:sourceStates:destinationImage: to make any result states needed for the kernel. If there isn't one, it will return nil. A variant is available to return a temporary state instead.
5) a -encode method is called to encode the kernel.
The entire process 1-5 is more simply achieved by just calling an -encode... method that returns a MPSImage out the left hand side of the method. Simpler still, use the MPSNNGraph to coordinate the entire process from end to end. Opportunities to influence the process are of course reduced, as (2) is no longer possible with either method. Your application may opt to use the five step method if it requires greater customization as described, or if it would like to estimate storage in advance based on the sum of MPSImageDescriptors before processing a graph. Storage estimation is done by using the MPSImageDescriptor to create a MPSImage (without passing it a texture), and then calling -resourceSize. As long as the MPSImage is not used in an encode call and the .texture property is not invoked, the underlying MTLTexture is not created.
No destination state or destination image is provided as an argument to this function because it is expected they will be made / configured after this is called. This method is expected to auto-configure important object properties that may be needed in the ensuing destination image and state creation steps.
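The storage-estimation use of this method might look like the following sketch; kernel, source, and device are placeholder names:

```objc
// Sketch: estimate a destination image's footprint without allocating
// its MTLTexture. The texture is created lazily, so querying the size
// of an unused MPSImage does not allocate GPU memory.
MPSImageDescriptor *d = [kernel destinationImageDescriptorForSourceImages: @[source]
                                                             sourceStates: nil];
MPSImage *probe = [[MPSImage alloc] initWithDevice: device
                                   imageDescriptor: d];
NSUInteger bytes = [probe resourceSize];   // estimated footprint in bytes
```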
Parameters:
Returns:
Encode a MPSCNNKernel into a command Buffer. Create a texture to hold the result and return it. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
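A minimal call of this padded-encode style might look like the following sketch; kernel, cmdBuf, and source are placeholder names:

```objc
// Sketch: the padding policy sizes the result image and sets kernel.offset.
// The returned image comes from kernel.destinationImageAllocator, which by
// default produces a MPSTemporaryImage.
MPSImage *result = [kernel encodeToCommandBuffer: cmdBuf
                                     sourceImage: source];
```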
Parameters:
Returns:
Reimplemented in MPSCNNBatchNormalizationStatistics.
Encode a MPSCNNKernel into a command Buffer. The operation shall proceed out-of-place. This is the older style of encode which reads the offset, doesn't change it, and ignores the padding method.
Parameters:
Encode a MPSCNNKernel with a destination state into a command Buffer. This is typically used during training. The state is commonly a MPSNNGradientState. Please see -resultStateForSourceImage:sourceStates:destinationImage: and batch+temporary variants.
Parameters:
Reimplemented in MPSCNNBatchNormalization.
Encode a MPSCNNKernel into a command Buffer. Create a MPSImageBatch and MPSStateBatch to hold the results and return them. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
Usage:
    MPSStateBatch *outStates = nil;   // autoreleased
    MPSImageBatch *result = [k encodeBatchToCommandBuffer: cmdBuf
                                             sourceImages: sourceImages
                                        destinationStates: &outStates];
Parameters:
Returns:
Reimplemented in MPSCNNBatchNormalization.
Encode a MPSCNNKernel into a command Buffer. Create a texture to hold the result and return it. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
Parameters:
Returns:
Encode a MPSCNNKernel into a command Buffer. The operation shall proceed out-of-place. This is the older style of encode which reads the offset, doesn't change it, and ignores the padding method.
Parameters:
Encode a MPSCNNKernel with a destination state into a command Buffer. This is typically used during training. The state is commonly a MPSNNGradientState. Please see -resultStateForSourceImage:sourceStates:destinationImage: and batch+temporary variants.
Parameters:
Reimplemented in MPSCNNBatchNormalization.
Encode a MPSCNNKernel into a command Buffer. Create a texture and state to hold the results and return them. In the first iteration of this method, encodeToCommandBuffer:sourceImage:destinationState:destinationImage:, some work was left to the developer in the form of correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.
Parameters:
Returns:
Reimplemented in MPSCNNBatchNormalization.
NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, since the file can't know which device your data is allocated on, we have to guess and may guess incorrectly. To avoid that problem, use -initWithCoder:device: instead.
Parameters:
Returns:
Reimplemented from MPSKernel.
Reimplemented in MPSCNNBinaryConvolution, MPSCNNBinaryFullyConnected, MPSCNNConvolutionTranspose, MPSCNNFullyConnected, MPSCNNConvolution, MPSCNNYOLOLoss, MPSRNNImageInferenceLayer, MPSCNNLoss, MPSCNNCrossChannelNormalization, MPSCNNDilatedPoolingMax, MPSCNNBatchNormalization, MPSCNNBatchNormalizationStatistics, MPSCNNPoolingAverage, MPSCNNPoolingL2Norm, MPSCNNLocalContrastNormalization, MPSCNNInstanceNormalization, MPSCNNNeuron, MPSNNCropAndResizeBilinear, MPSCNNDropout, MPSCNNSpatialNormalization, MPSNNResizeBilinear, MPSCNNPooling, and MPSCNNPoolingMax.
Standard init with default properties per filter type
Parameters:
Returns:
Reimplemented from MPSKernel.
Reimplemented in MPSCNNBinaryConvolution, MPSCNNBinaryFullyConnected, MPSCNNConvolutionTranspose, MPSCNNFullyConnected, MPSCNNConvolution, MPSCNNYOLOLoss, MPSRNNImageInferenceLayer, MPSCNNLoss, MPSCNNCrossChannelNormalization, MPSNNReshape, MPSCNNBatchNormalization, MPSCNNBatchNormalizationStatistics, MPSNNReduceFeatureChannelsSum, MPSCNNNeuronLinear, MPSCNNNeuronReLU, MPSCNNNeuronPReLU, MPSCNNNeuronSigmoid, MPSCNNNeuronHardSigmoid, MPSCNNNeuronTanH, MPSCNNNeuronAbsolute, MPSCNNNeuronSoftPlus, MPSCNNNeuronSoftSign, MPSCNNNeuronELU, MPSCNNNeuronReLUN, MPSCNNNeuronPower, MPSCNNNeuronExponential, MPSCNNNeuronLogarithm, MPSCNNLocalContrastNormalization, MPSCNNInstanceNormalization, MPSCNNNeuron, MPSNNCropAndResizeBilinear, MPSNNSlice, MPSCNNDropout, MPSCNNUpsampling, MPSCNNSpatialNormalization, MPSNNReduceUnary, MPSNNReduceRowMin, MPSNNReduceColumnMin, MPSNNReduceFeatureChannelsMin, MPSNNReduceFeatureChannelsArgumentMin, MPSNNReduceRowMax, MPSNNReduceColumnMax, MPSNNReduceFeatureChannelsMax, MPSNNReduceFeatureChannelsArgumentMax, MPSNNReduceRowMean, MPSNNReduceColumnMean, MPSNNReduceFeatureChannelsMean, MPSNNReduceRowSum, MPSNNReduceColumnSum, MPSNNResizeBilinear, and MPSCNNPooling.
Returns YES if the same state is used for every operation in a batch. If NO, then each image in a MPSImageBatch will need a corresponding (and different) state to go with it. Set to YES to avoid allocating redundant state when the same state is used all the time. Default: NO
Allocate a MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation. A graph may need to allocate storage up front before executing. This may be necessary to avoid using too much memory and to manage large batches. The function should allocate any MPSState objects that will be produced by an -encode call with the indicated sourceImages and sourceStates inputs. Though the states can be further adjusted in the ensuing -encode call, the states should be initialized with all important data and all MTLResource storage allocated. The data stored in the MTLResource need not be initialized, unless the ensuing -encode call expects it to be.
The MTLDevice used by the result is derived from the source image. The padding policy will be applied to the filter before this is called to give it the chance to configure any properties like MPSCNNKernel.offset.
CAUTION: The kernel must have all properties set to values that will ultimately be passed to the -encode call that writes to the state, before -resultStateForSourceImage:sourceStates:destinationImage: is called or behavior is undefined. Please note that -destinationImageDescriptorForSourceImages:sourceStates: will alter some of these properties automatically based on the padding policy. If you intend to call that to make the destination image, then you should call that before -resultStateForSourceImage:sourceStates:destinationImage:. This will ensure the properties used in the encode call and in the destination image creation match those used to configure the state.
The following order is recommended:
    // Configure MPSCNNKernel properties first
    kernel.edgeMode = MPSImageEdgeModeZero;
    kernel.destinationFeatureChannelOffset = 128;  // concatenation without the copy

    // ALERT: will change MPSCNNKernel properties
    MPSImageDescriptor * d = [kernel destinationImageDescriptorForSourceImage: source
                                                                 sourceStates: states];
    MPSTemporaryImage * dest = [MPSTemporaryImage temporaryImageWithCommandBuffer: cmdBuf
                                                                  imageDescriptor: d];

    // Now that all properties are configured properly,
    // we can make the result state and call encode.
    MPSState * __nullable destState = [kernel resultStateForSourceImage: source
                                                           sourceStates: states
                                                       destinationImage: dest];

    // This form of -encode will be declared by the MPSCNNKernel subclass
    [kernel encodeToCommandBuffer: cmdBuf
                      sourceImage: source
                 destinationState: destState
                 destinationImage: dest];
Default: returns nil
Parameters:
Returns:
Reimplemented in MPSCNNConvolution, MPSCNNBatchNormalization, and MPSCNNInstanceNormalization.
Reimplemented in MPSCNNConvolution.
Allocate a temporary MPSState (subclass) to hold the results from a -encodeBatchToCommandBuffer... operation. A graph may need to allocate storage up front before executing. This may be necessary to avoid using too much memory and to manage large batches. The function should allocate any MPSState objects that will be produced by an -encode call with the indicated sourceImages and sourceStates inputs. Though the states can be further adjusted in the ensuing -encode call, the states should be initialized with all important data and all MTLResource storage allocated. The data stored in the MTLResource need not be initialized, unless the ensuing -encode call expects it to be.
The MTLDevice used by the result is derived from the command buffer. The padding policy will be applied to the filter before this is called to give it the chance to configure any properties like MPSCNNKernel.offset.
CAUTION: The kernel must have all properties set to values that will ultimately be passed to the -encode call that writes to the state, before -resultStateForSourceImage:sourceStates:destinationImage: is called or behavior is undefined. Please note that -destinationImageDescriptorForSourceImages:sourceStates: will alter some of these properties automatically based on the padding policy. If you intend to call that to make the destination image, then you should call that before -resultStateForSourceImage:sourceStates:destinationImage:. This will ensure the properties used in the encode call and in the destination image creation match those used to configure the state.
The following order is recommended:
    // Configure MPSCNNKernel properties first
    kernel.edgeMode = MPSImageEdgeModeZero;
    kernel.destinationFeatureChannelOffset = 128;  // concatenation without the copy

    // ALERT: will change MPSCNNKernel properties
    MPSImageDescriptor * d = [kernel destinationImageDescriptorForSourceImage: source
                                                                 sourceStates: states];
    MPSTemporaryImage * dest = [MPSTemporaryImage temporaryImageWithCommandBuffer: cmdBuf
                                                                  imageDescriptor: d];

    // Now that all properties are configured properly,
    // we can make the result state and call encode.
    MPSState * __nullable destState = [kernel temporaryResultStateForCommandBuffer: cmdBuf
                                                                       sourceImage: source
                                                                      sourceStates: states];

    // This form of -encode will be declared by the MPSCNNKernel subclass
    [kernel encodeToCommandBuffer: cmdBuf
                      sourceImage: source
                 destinationState: destState
                 destinationImage: dest];
Default: returns nil
Parameters:
Returns:
Reimplemented in MPSCNNConvolution, MPSCNNBatchNormalization, and MPSCNNInstanceNormalization.
An optional clip rectangle to use when writing data. Only the pixels in the rectangle will be overwritten. A MTLRegion that indicates which part of the destination to overwrite. If the clipRect does not lie completely within the destination image, the intersection between clip rectangle and destination bounds is used. Default: MPSRectNoClip (MPSKernel::MPSRectNoClip) indicating the entire image. clipRect.origin.z is the index of starting destination image in batch processing mode. clipRect.size.depth is the number of images to process in batch processing mode.
See Also: MetalPerformanceShaders.h subsubsection_clipRect
The number of channels in the destination MPSImage to skip before writing output. This is the starting offset into the destination image in the feature channel dimension at which destination data is written. This allows an application to pass a subset of all the channels in a MPSImage as output of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel outputs 8 channels. If we want channels 8 to 15 of this MPSImage to be used as output, we can set destinationFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0 and any value specified shall be a multiple of 4. If the MPSKernel outputs N channels, the destination image MUST have at least destinationFeatureChannelOffset + N channels. Using a destination image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution outputs 32 channels, and the destination has 64 channels, then it is an error to set destinationFeatureChannelOffset > 32.
Method to allocate the result image for -encodeToCommandBuffer:sourceImage: Default: defaultAllocator (MPSTemporaryImage)
Stride in source coordinates from one kernel tap to the next in the X dimension.
The MPSImageEdgeMode to use when texture reads stray off the edge of an image. Most MPSKernel objects can read off the edge of the source image. This can happen because of a negative offset property, because the offset + clipRect.size is larger than the source image, or because the filter looks at neighboring pixels, such as a Convolution filter. Default: MPSImageEdgeModeZero.
See Also: MetalPerformanceShaders.h subsubsection_edgemode Note: For MPSCNNPoolingAverage specifying edge mode MPSImageEdgeModeClamp is interpreted as a 'shrink-to-edge' operation, which shrinks the effective filtering window to remain within the source image borders.
YES if the filter operates backwards. This influences how strideInPixelsX/Y should be interpreted. Most filters either have stride 1 or are reducing, meaning that the result image is smaller than the original by roughly a factor of the stride. A few 'backward' filters (e.g convolution transpose) are intended to 'undo' the effects of an earlier forward filter, and so enlarge the image. The stride is in the destination coordinate frame rather than the source coordinate frame.
Returns YES if the -encode call modifies the state object it accepts.
The height of the MPSCNNKernel filter window. This is the vertical diameter of the region read by the filter for each result pixel. If the MPSCNNKernel does not have a filter window, then 1 will be returned.
Warning: This property was lowered to this class in iOS/tvOS 11. The property may not be available on iOS/tvOS 10 for all subclasses of MPSCNNKernel.
The width of the MPSCNNKernel filter window. This is the horizontal diameter of the region read by the filter for each result pixel. If the MPSCNNKernel does not have a filter window, then 1 will be returned.
Warning: This property was lowered to this class in iOS/tvOS 11. The property may not be available on iOS/tvOS 10 for all subclasses of MPSCNNKernel.
The position of the destination clip rectangle origin relative to the source buffer. The offset is defined to be the position of clipRect.origin in source coordinates. Default: {0,0,0}, indicating that the top left corners of the clipRect and source image align. offset.z is the index of starting source image in batch processing mode.
See Also: MetalPerformanceShaders.h subsubsection_mpsoffset
The padding method used by the filter. This influences how the destination image is sized and how the offset into the source image is set. It is used by the -encode methods that return a MPSImage from the left hand side.
The maximum number of channels in the source MPSImage to use. Most filters can insert a slice operation into the filter for free. Use this to limit the size of the feature channel slice taken from the input image. If the value is too large, it is truncated to be the remaining size in the image after the sourceFeatureChannelOffset is taken into account. Default: ULONG_MAX
The number of channels in the source MPSImage to skip before reading the input. This is the starting offset into the source image in the feature channel dimension at which source data is read. Unit: feature channels. This allows an application to read a subset of all the channels in a MPSImage as input of a MPSKernel. E.g. suppose a MPSImage has 24 channels and a MPSKernel needs to read 8 channels. If we want channels 8 to 15 of this MPSImage to be used as input, we can set sourceFeatureChannelOffset = 8. Note that this offset applies independently to each image when the MPSImage is a container for multiple images and the MPSCNNKernel is processing multiple images (clipRect.size.depth > 1). The default value is 0 and any value specified shall be a multiple of 4. If the MPSKernel inputs N channels, the source image MUST have at least sourceFeatureChannelOffset + N channels. Using a source image with an insufficient number of feature channels will result in an error. E.g. if the MPSCNNConvolution inputs 32 channels, and the source has 64 channels, then it is an error to set sourceFeatureChannelOffset > 32.
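For example, reading only channels 8-15 of a 24-channel source could be sketched as follows; kernel, cmdBuf, src, and dst are placeholder names:

```objc
// Sketch: take an 8-channel slice starting at feature channel 8,
// letting the kernel perform the slice for free during the encode.
kernel.sourceFeatureChannelOffset   = 8;
kernel.sourceFeatureChannelMaxCount = 8;   // read at most 8 channels
[kernel encodeToCommandBuffer: cmdBuf sourceImage: src destinationImage: dst];
```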
The downsampling (or upsampling, if a backwards filter) factor in the horizontal dimension. If the filter does not do up or downsampling, 1 is returned.
Warning: This property was lowered to this class in iOS/tvOS 11. The property may not be available on iOS/tvOS 10 for all subclasses of MPSCNNKernel.
The downsampling (or upsampling, if a backwards filter) factor in the vertical dimension. If the filter does not do up or downsampling, 1 is returned.
Warning: This property was lowered to this class in iOS/tvOS 11. The property may not be available on iOS/tvOS 10 for all subclasses of MPSCNNKernel.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |