MPSGRUDescriptor(3) MetalPerformanceShaders.framework MPSGRUDescriptor(3)

MPSGRUDescriptor

#import <MPSRNNLayer.h>

Inherits MPSRNNDescriptor.


(nonnull instancetype) + createGRUDescriptorWithInputFeatureChannels:outputFeatureChannels:


id< MPSCNNConvolutionDataSource > inputGateInputWeights
id< MPSCNNConvolutionDataSource > inputGateRecurrentWeights
id< MPSCNNConvolutionDataSource > recurrentGateInputWeights
id< MPSCNNConvolutionDataSource > recurrentGateRecurrentWeights
id< MPSCNNConvolutionDataSource > outputGateInputWeights
id< MPSCNNConvolutionDataSource > outputGateRecurrentWeights
id< MPSCNNConvolutionDataSource > outputGateInputGateWeights
float gatePnormValue
BOOL flipOutputGates

This depends on Metal.framework. The MPSGRUDescriptor specifies a GRU (Gated Recurrent Unit) block/layer descriptor. The RNN layer initialized with an MPSGRUDescriptor transforms the input data (image or matrix) and the previous output with a set of filters, each producing one feature map in the output data, according to the gated-unit formulae detailed below. The user may provide the GRU unit a single input or a sequence of inputs. The layer also supports p-norm gating (detailed in https://arxiv.org/abs/1608.03639).

Description of operation:

Let x_j be the input data (at time index t of the sequence; the index j contains the quadruplet: batch index, x, y and feature index, with x = y = 0 for matrices). Let h0_j be the recurrent input (previous output) data from the previous time step (time index t-1 of the sequence). Let h_i be the proposed new output. Let h1_i be the output data produced at this time step.

Let Wz_ij, Uz_ij be the input gate weights for input and recurrent input data, respectively. Let bz_i be the bias for the input gate.

Let Wr_ij, Ur_ij be the recurrent gate weights for input and recurrent input data, respectively. Let br_i be the bias for the recurrent gate.

Let Wh_ij, Uh_ij, Vh_ij be the output gate weights for input, recurrent gate and input gate, respectively. Let bh_i be the bias for the output gate.

Let gz(x), gr(x), gh(x) be the neuron activation functions for the input, recurrent and output gates. Let p > 0 be a scalar (typically p >= 1.0) that defines the p-norm gating norm value.

Then the output of the Gated Recurrent Unit layer is computed as follows:


z_i = gz( Wz_ij * x_j + Uz_ij * h0_j + bz_i )
r_i = gr( Wr_ij * x_j + Ur_ij * h0_j + br_i )
c_i = Uh_ij * (r_j h0_j) + Vh_ij * (z_j h0_j)
h_i = gh( Wh_ij * x_j + c_i + bh_i )
h1_i = ( 1 - z_i ^ p )^(1/p) h_i + z_i h0_i

The '*' stands for convolution (see MPSRNNImageInferenceLayer) or matrix-vector/matrix multiplication (see MPSRNNMatrixInferenceLayer), and juxtaposition (for example r_j h0_j) denotes elementwise multiplication. Summation is over the index j (except for the batch index); there is no summation over the repeated output index i. Note that for validity all intermediate images have to be of the same size, and all U and V matrices have to be square (i.e. outputFeatureChannels == inputFeatureChannels in those). Also, the bias terms are scalars with respect to the spatial dimensions. The conventional GRU block is achieved by setting Vh = 0 (nil), and the so-called Minimal Gated Unit is achieved with Uh = 0. (The Minimal Gated Unit is detailed in https://arxiv.org/abs/1603.09420, where z_i is called the value of the forget gate.)
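The formulae above can be sketched numerically in the matrix case. The following Python sketch is purely illustrative and makes assumptions the man page leaves to the weight data sources: gz and gr are taken to be the logistic sigmoid and gh to be tanh, and Vh is nil (the conventional GRU, so the Vh term vanishes). The function names are hypothetical, not part of the framework.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h0, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh, p=1.0, flip=False):
    """One GRU time step over plain Python lists, following the man-page
    formulae, with the assumed activations gz = gr = sigmoid and gh = tanh
    and the conventional choice Vh = 0 (nil)."""
    n = len(h0)
    def matvec(M, v):  # summation over index j, as in the formulae
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    # z_i = gz( Wz_ij * x_j + Uz_ij * h0_j + bz_i )
    z = [sigmoid(a + b + bz[i]) for i, (a, b) in enumerate(zip(matvec(Wz, x), matvec(Uz, h0)))]
    # r_i = gr( Wr_ij * x_j + Ur_ij * h0_j + br_i )
    r = [sigmoid(a + b + br[i]) for i, (a, b) in enumerate(zip(matvec(Wr, x), matvec(Ur, h0)))]
    # c_i = Uh_ij * (r_j h0_j)   (the Vh term is zero here)
    c = matvec(Uh, [r[j] * h0[j] for j in range(n)])
    # h_i = gh( Wh_ij * x_j + c_i + bh_i )
    h = [math.tanh(a + c[i] + bh[i]) for i, a in enumerate(matvec(Wh, x))]
    # h1_i = (1 - z_i^p)^(1/p) h_i + z_i h0_i; flipOutputGates swaps h and h0
    keep, mix = (h, h0) if flip else (h0, h)
    return [(1.0 - z[i] ** p) ** (1.0 / p) * mix[i] + z[i] * keep[i] for i in range(n)]
```

With p = 1.0 the output reduces to the familiar convex combination (1 - z_i) h_i + z_i h0_i; the flip flag reproduces the flipOutputGates variant described below.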

Creates a GRU descriptor.

Parameters:

inputFeatureChannels The number of feature channels in the input image/matrix. Must be >= 1.
outputFeatureChannels The number of feature channels in the output image/matrix. Must be >= 1.

Returns:

A valid MPSGRUDescriptor object, or nil on failure.

- flipOutputGates [read], [write], [nonatomic], [assign]

If YES then the GRU-block output formula is changed to: h1_i = ( 1 - z_i ^ p)^(1/p) h0_i + z_i h_i. Defaults to NO.

- gatePnormValue [read], [write], [nonatomic], [assign]

The p-norm gating norm value as specified by the GRU formulae. Defaults to 1.0f.
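As a small hedged illustration of this property, the weight applied to the proposed output is (1 - z^p)^(1/p); at p = 1 this is the standard GRU factor (1 - z), while a larger p passes more of the proposed output through for the same gate value z. The helper name below is hypothetical.

```python
# Illustrative only: the p-norm gate weight from the h1_i formula.
def pnorm_gate(z, p):
    return (1.0 - z ** p) ** (1.0 / p)
```

For example, at z = 0.5 the weight grows from 0.5 (p = 1) toward sqrt(0.75) ≈ 0.866 (p = 2).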

- inputGateInputWeights [read], [write], [nonatomic], [retain]

Contains weights 'Wz_ij', bias 'bz_i' and neuron 'gz' from the GRU formula. If nil then assumed zero weights, bias and no neuron (identity mapping). Defaults to nil.

- inputGateRecurrentWeights [read], [write], [nonatomic], [retain]

Contains weights 'Uz_ij' from the GRU formula. If nil then assumed zero weights. Defaults to nil.

- outputGateInputGateWeights [read], [write], [nonatomic], [retain]

Contains weights 'Vh_ij' - can be used to implement the 'Minimal Gated Unit'. If nil then assumed zero weights. Defaults to nil.

- outputGateInputWeights [read], [write], [nonatomic], [retain]

Contains weights 'Wh_ij', bias 'bh_i' and neuron 'gh' from the GRU formula. If nil then assumed zero weights, bias and no neuron (identity mapping). Defaults to nil.

- outputGateRecurrentWeights [read], [write], [nonatomic], [retain]

Contains weights 'Uh_ij' from the GRU formula. If nil then assumed zero weights. Defaults to nil.

- recurrentGateInputWeights [read], [write], [nonatomic], [retain]

Contains weights 'Wr_ij', bias 'br_i' and neuron 'gr' from the GRU formula. If nil then assumed zero weights, bias and no neuron (identity mapping). Defaults to nil.

- recurrentGateRecurrentWeights [read], [write], [nonatomic], [retain]

Contains weights 'Ur_ij' from the GRU formula. If nil then assumed zero weights. Defaults to nil.

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3