MPSNNFilterNode(3) MetalPerformanceShaders.framework MPSNNFilterNode(3)

MPSNNFilterNode

#import <MPSNNGraphNodes.h>

Inherits NSObject.

Inherited by MPSCNNBatchNormalizationNode, MPSCNNConvolutionNode, MPSCNNDilatedPoolingMaxNode, MPSCNNDropoutNode, MPSCNNInstanceNormalizationNode, MPSCNNLogSoftMaxNode, MPSCNNLossNode, MPSCNNNeuronNode, MPSCNNNormalizationNode, MPSCNNPoolingNode, MPSCNNSoftMaxNode, MPSCNNUpsamplingBilinearNode, MPSCNNUpsamplingNearestNode, MPSCNNYOLOLossNode, MPSNNBinaryArithmeticNode, MPSNNConcatenationNode, MPSNNGradientFilterNode, and MPSNNScaleNode.


(nonnull instancetype) - init
(MPSNNGradientFilterNode *__nonnull) - gradientFilterWithSource:
(MPSNNGradientFilterNode *__nonnull) - gradientFilterWithSources:
(NSArray< MPSNNGradientFilterNode * > *__nonnull) - gradientFiltersWithSources:
(NSArray< MPSNNGradientFilterNode * > *__nonnull) - gradientFiltersWithSource:
(NSArray< MPSNNFilterNode * > *__nullable) - trainingGraphWithSourceGradient:nodeHandler:


MPSNNImageNode * resultImage
MPSNNStateNode * resultState
NSArray< MPSNNStateNode * > * resultStates
id< MPSNNPadding > paddingPolicy
NSString * label

A placeholder node denoting a neural network filter stage. There are as many MPSNNFilterNode subclasses as there are MPS neural network filter objects. Make one of those. This class defines a polymorphic interface for them.
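For illustration, a minimal sketch of chaining concrete MPSNNFilterNode subclasses into a small inference graph. The filter choices, sizes, and the nil handle below are placeholders, not part of this class's interface:

#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// A placeholder node for the network input image.
MPSNNImageNode *input = [MPSNNImageNode nodeWithHandle: nil];

// Each filter node consumes the resultImage of the preceding node.
MPSCNNNeuronReLUNode *relu = [[MPSCNNNeuronReLUNode alloc] initWithSource: input];
MPSCNNPoolingMaxNode *pool = [[MPSCNNPoolingMaxNode alloc] initWithSource: relu.resultImage
                                                               filterSize: 2];

// The terminal node's resultImage is used to compile the graph.
MPSNNGraph *graph = [[MPSNNGraph alloc] initWithDevice: MTLCreateSystemDefaultDevice()
                                           resultImage: pool.resultImage
                                   resultImageIsNeeded: YES];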

- (NSArray <MPSNNGradientFilterNode*> * __nonnull) gradientFiltersWithSource: (MPSNNImageNode *__nonnull) gradientImage

Return multiple gradient versions of the filter. MPSNNFilters that consume multiple inputs generally result in multiple conjugate filters for the gradient computation at the end of training. For example, a single concatenation operation that concatenates multiple images will result in an array of slice operators that carve out subsections of the input gradient image.

Reimplemented in MPSNNGradientFilterNode.
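For example, a concatenation node fans a single incoming gradient image out into one conjugate (slice) filter per input. A hedged sketch, where a, b, and incomingGradient are assumed to be nodes defined elsewhere:

MPSNNConcatenationNode *concat =
    [MPSNNConcatenationNode nodeWithSources: @[a.resultImage, b.resultImage]];

// One gradient filter is returned per input of the concatenation.
NSArray<MPSNNGradientFilterNode *> *grads =
    [concat gradientFiltersWithSource: incomingGradient];
// grads[i].resultImage carries the gradient with respect to concat's i-th input.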

- (NSArray <MPSNNGradientFilterNode*> * __nonnull) gradientFiltersWithSources: (NSArray< MPSNNImageNode * > *__nonnull) gradientImages

Return multiple gradient versions of the filter. MPSNNFilters that consume multiple inputs generally result in multiple conjugate filters for the gradient computation at the end of training. For example, a single concatenation operation that concatenates multiple images will result in an array of slice operators that carve out subsections of the input gradient image.

Reimplemented in MPSNNBinaryArithmeticNode, and MPSNNGradientFilterNode.

- (MPSNNGradientFilterNode*__nonnull) gradientFilterWithSource: (MPSNNImageNode *__nonnull) gradientImage

Return the gradient (backwards) version of this filter. The backwards training version of the filter is returned; its non-gradient image and state arguments are obtained automatically from the target.

Parameters:

gradientImage The gradient image corresponding to the resultImage of the target

Reimplemented in MPSNNGradientFilterNode.
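A hedged sketch of constructing a backward chain by hand, assuming relu and pool are inference nodes defined elsewhere and lossGradient is the gradient image arriving from a loss computation:

// The last inference filter's gradient node consumes the incoming gradient image;
// each earlier gradient node consumes the resultImage of the one after it.
MPSNNGradientFilterNode *poolGrad = [pool gradientFilterWithSource: lossGradient];
MPSNNGradientFilterNode *reluGrad = [relu gradientFilterWithSource: poolGrad.resultImage];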

- (MPSNNGradientFilterNode*__nonnull) gradientFilterWithSources: (NSArray< MPSNNImageNode * > *__nonnull) gradientImages

Return the gradient (backwards) version of this filter. The backwards training version of the filter is returned; its non-gradient image and state arguments are obtained automatically from the target.

Parameters:

gradientImages The gradient images corresponding to the resultImage of the target

Reimplemented in MPSCNNYOLOLossNode, MPSNNConcatenationNode, MPSCNNLossNode, MPSNNBinaryArithmeticNode, and MPSNNGradientFilterNode.

- (nonnull instancetype) init

Reimplemented in MPSCNNNeuronGradientNode, and MPSCNNNeuronNode.

- (NSArray <MPSNNFilterNode*> * __nullable) trainingGraphWithSourceGradient: (MPSNNImageNode *__nullable) gradientImage nodeHandler: (__nullable MPSGradientNodeBlock) nodeHandler

Build a training graph from an inference graph. This method will iteratively build the training portion of a graph based on an inference graph. Self should be the last node in the inference graph. It is typically a loss layer, but can be anything. Typically, the 'inference graph' used here is the desired inference graph with a dropout node and a loss layer node appended.

BUG: This method cannot follow links to regions of the graph that are connected to the rest of the graph solely via MPSNNStateNodes. A gradient image input is required to construct an MPSNNGradientFilterNode from an inference filter node.

Parameters:

gradientImage The input gradient image for the first gradient node in the training section of the graph. If nil, self.resultImage is used. This results in a standard monolithic training graph. If the graph is instead divided into multiple subgraphs (potentially to allow your custom code to appear in between MPSNNGraph segments), a new MPSNNImageNode* may be substituted.
nodeHandler An optional block to allow for customization of gradient nodes and intermediate images as the graph is constructed. It may also be used to prune branches of the developing training graph. If nil, the default handler is used. It builds the full graph and assigns any inferenceNodeSources[i].handle to their gradient counterparts.

Returns:

The list of new MPSNNFilterNode training graph termini. These MPSNNFilterNodes are not necessarily all MPSNNGradientFilterNodes. To build a full list of nodes created, use a custom nodeHandler. If no nodes are created, nil is returned.
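A hedged sketch of typical usage: append a loss node to the inference result and let this method derive the training portion. lastInferenceImage and lossDescriptor are assumed to be defined elsewhere, and the block parameters follow the MPSGradientNodeBlock typedef in MPSNeuralNetworkTypes.h:

MPSCNNLossNode *loss = [MPSCNNLossNode nodeWithSource: lastInferenceImage
                                       lossDescriptor: lossDescriptor];

NSArray<MPSNNFilterNode *> *termini =
    [loss trainingGraphWithSourceGradient: nil   // nil: use loss.resultImage as the source gradient
                              nodeHandler: ^(MPSNNFilterNode * __nonnull gradientNode,
                                             MPSNNFilterNode * __nonnull inferenceNode,
                                             MPSNNImageNode  * __nonnull inferenceSource,
                                             MPSNNImageNode  * __nonnull gradientSource){
        // Optionally customize each gradient node or intermediate image here,
        // e.g. copy handles or record the nodes that were created.
    }];

// termini contains the final nodes of the training graph; their resultImages
// can be handed to MPSNNGraph to compile the trainable graph.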

- (NSString*) label [read], [write], [atomic], [copy]

A string to help identify this object.
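For example, assuming conv is a filter node defined elsewhere:

conv.label = @"conv1";   // a human-readable name to help identify this node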

- (id<MPSNNPadding>) paddingPolicy [read], [write], [nonatomic], [retain]

The padding method used for the filter node. The padding policy configures how the filter centers the region of interest in the source image. It is principally responsible for setting the MPSCNNKernel.offset and the size of the image produced, and sometimes will also configure .sourceFeatureChannelOffset, .sourceFeatureChannelMaxCount, and .edgeMode. It is permitted to set any other filter properties as needed using a custom padding policy.

The default padding policy varies per filter to conform to consensus expectation for the behavior of that filter. In some cases, pre-made padding policies are provided to match the behavior of common neural networking frameworks with particularly complex or unexpected behavior for specific nodes. See the MPSNNDefaultPadding class methods in MPSNeuralNetworkTypes.h for more.

BUG: MPS doesn't provide a good way to reset the MPSKernel properties in the context of a MPSNNGraph after the kernel is finished encoding. These values carry on to the next time the graph is used. Consequently, if your custom padding policy modifies the property as a function of the previous value, e.g.:

kernel.someProperty += 2;

then the second time the graph runs, the property may have an inconsistent value, leading to unexpected behavior. The default padding computation runs before the custom padding method to provide it with a sense of what is expected for the default configuration, and it will reinitialize the value in the case of .offset. However, that computation usually doesn't reset other properties. In such cases, the custom padding policy may need to keep a record of the original value to enable consistent behavior.
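A usage sketch for the property itself, assuming conv is a convolution node defined elsewhere: one of the pre-made MPSNNDefaultPadding policies is substituted for the node's default.

// Produce an output image the same size as the input ("same" padding).
conv.paddingPolicy = [MPSNNDefaultPadding paddingWithMethod: MPSNNPaddingMethodSizeSame];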

- (MPSNNImageNode*) resultImage [read], [nonatomic], [assign]

Get the node representing the image result of the filter. Except where otherwise noted, the precision used for the result image (see the format property) is copied from the precision of the first input image node.
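For example, the storage precision of an individual intermediate image can be overridden through its result image node, assuming conv is a filter node defined elsewhere:

// Request full float32 precision for this intermediate image.
conv.resultImage.format = MPSImageFeatureChannelFormatFloat32;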

- (MPSNNStateNode*) resultState [read], [nonatomic], [assign]

Convenience method for resultStates[0]. If resultStates is nil, returns nil.

- (NSArray<MPSNNStateNode*>*) resultStates [read], [nonatomic], [assign]

Get the node representing the state result of the filter. If there is more than one, see the description of the subclass for ordering.

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3