MPSNNOptimizerStochasticGradientDescent(3) | MetalPerformanceShaders.framework | MPSNNOptimizerStochasticGradientDescent(3) |
MPSNNOptimizerStochasticGradientDescent
#import <MPSNNOptimizers.h>
Inherits MPSNNOptimizer.
(nonnull instancetype) - initWithDevice:
(nonnull instancetype) - initWithDevice:learningRate:
(nonnull instancetype) - initWithDevice:momentumScale:useNestrovMomentum:optimizerDescriptor:
(void) - encodeToCommandBuffer:inputGradientVector:inputValuesVector:inputMomentumVector:resultValuesVector:
(void) - encodeToCommandBuffer:convolutionGradientState:convolutionSourceState:inputMomentumVectors:resultState:
(void) - encodeToCommandBuffer:batchNormalizationState:inputMomentumVectors:resultState:
(void) - encodeToCommandBuffer:batchNormalizationGradientState:batchNormalizationSourceState:inputMomentumVectors:resultState:
float momentumScale
BOOL useNestrovMomentum
The MPSNNOptimizerStochasticGradientDescent performs a gradient descent update with an optional momentum term:
useNestrov == NO:
m[t] = momentumScale * m[t-1] + learningRate * g
variable = variable - m[t]

useNestrov == YES:
m[t] = momentumScale * m[t-1] + g
variable = variable - (learningRate * (g + m[t] * momentumScale))
where
g is the gradient of the error with respect to the variable
m[t] is the momentum of the gradients; it is a state that is updated on every iteration.
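As a minimal illustration of the update rules above, the following C-style sketch applies one step to a single scalar weight. The struct and function names are hypothetical and are not part of the framework; they only mirror the momentumScale, learningRate, and useNestrovMomentum parameters of the optimizer.

    #include <stdbool.h>

    // Hypothetical scalar sketch of the SGD update rules above; not framework API.
    typedef struct {
        float momentumScale;       // mirrors the momentumScale property
        float learningRate;        // mirrors the optimizer's learning rate
        bool  useNestrovMomentum;  // mirrors the useNestrovMomentum property
        bool  hasMomentumState;    // false corresponds to passing a nil momentum vector
    } SGDParams;

    // variable and momentum are updated in place; g is the gradient dE/dvariable.
    static void sgd_step(float *variable, float *momentum, float g, SGDParams p)
    {
        if (!p.hasMomentumState) {                    // inputMomentumVector == nil
            *variable -= p.learningRate * g;
            return;
        }
        if (!p.useNestrovMomentum) {                  // classic momentum
            *momentum  = p.momentumScale * (*momentum) + p.learningRate * g;
            *variable -= *momentum;
        } else {                                      // Nesterov momentum
            *momentum  = p.momentumScale * (*momentum) + g;
            *variable -= p.learningRate * (g + (*momentum) * p.momentumScale);
        }
    }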
Encodes an MPSNNOptimizerStochasticGradientDescent object to a command buffer to perform an out-of-place update.
Parameters:
The following operations are applied:
useNestrov == NO:
m[t] = momentumScale * m[t-1] + learningRate * g
variable = variable - m[t]
useNestrov == YES:
m[t] = momentumScale * m[t-1] + g
variable = variable - (learningRate * (g + m[t] * momentumScale))
inputMomentumVector == nil:
variable = variable - (learningRate * g)
where
g is the gradient of the error with respect to the variable
m[t] is the momentum of the gradients; it is a state that is updated on every iteration.
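A minimal sketch of how the vector variant above might be encoded. It assumes an existing id<MTLDevice> device, an MTLCommandQueue commandQueue, and four MTLBuffers (weightsBuf, gradientBuf, momentumBuf, updatedBuf) each holding `length` float values; those names are placeholders, not framework symbols.

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    MPSVectorDescriptor *vDesc =
        [MPSVectorDescriptor vectorDescriptorWithLength:length
                                               dataType:MPSDataTypeFloat32];
    MPSVector *gradients = [[MPSVector alloc] initWithBuffer:gradientBuf descriptor:vDesc];
    MPSVector *weights   = [[MPSVector alloc] initWithBuffer:weightsBuf  descriptor:vDesc];
    MPSVector *momentum  = [[MPSVector alloc] initWithBuffer:momentumBuf descriptor:vDesc];
    MPSVector *updated   = [[MPSVector alloc] initWithBuffer:updatedBuf  descriptor:vDesc];

    MPSNNOptimizerStochasticGradientDescent *sgd =
        [[MPSNNOptimizerStochasticGradientDescent alloc] initWithDevice:device
                                                            learningRate:0.01f];

    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
    // Passing nil for inputMomentumVector falls back to variable = variable - learningRate * g.
    [sgd encodeToCommandBuffer:commandBuffer
           inputGradientVector:gradients
             inputValuesVector:weights
           inputMomentumVector:momentum
            resultValuesVector:updated];
    [commandBuffer commit];

Note that initWithDevice:learningRate: leaves momentumScale at its default of 0.0, so the momentum vector contributes nothing here; use the full initializer shown later to enable a momentum term.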
Encodes an MPSNNOptimizerStochasticGradientDescent object to a command buffer to perform an out-of-place update.
Parameters:
The following operations are applied:
useNestrov == NO:
m[t] = momentumScale * m[t-1] + learningRate * g
variable = variable - m[t]
useNestrov == YES:
m[t] = momentumScale * m[t-1] + g
variable = variable - (learningRate * (g + m[t] * momentumScale))
inputMomentumVector == nil:
variable = variable - (learningRate * g)
where
g is the gradient of the error with respect to the variable
m[t] is the momentum of the gradients; it is a state that is updated on every iteration.
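A hedged sketch of the convolution-weights variant. It assumes sgd and commandBuffer from the previous sketch, that gradState is an MPSCNNConvolutionGradientState produced by a preceding gradient pass, that conv is the corresponding MPSCNNConvolution kernel, and that updatedState is a pre-allocated MPSCNNConvolutionWeightsAndBiasesState receiving the out-of-place result; all of these names are assumptions for illustration.

    // Export the convolution's current weights and biases as the source state.
    MPSCNNConvolutionWeightsAndBiasesState *sourceState =
        [conv exportWeightsAndBiasesWithCommandBuffer:commandBuffer
                            resultStateCanBeTemporary:YES];

    // nil momentum vectors gives the plain variable = variable - learningRate * g update.
    [sgd encodeToCommandBuffer:commandBuffer
      convolutionGradientState:gradState
        convolutionSourceState:sourceState
          inputMomentumVectors:nil
                   resultState:updatedState];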
Encodes an MPSNNOptimizerStochasticGradientDescent object to a command buffer to perform an out-of-place update.
Parameters:
The following operations are applied:
useNestrov == NO:
m[t] = momentumScale * m[t-1] + learningRate * g
variable = variable - m[t]
useNestrov == YES:
m[t] = momentumScale * m[t-1] + g
variable = variable - (learningRate * (g + m[t] * momentumScale))
inputMomentumVector == nil:
variable = variable - (learningRate * g)
where
g is the gradient of the error with respect to the variable
m[t] is the momentum of the gradients; it is a state that is updated on every iteration.
Encodes an MPSNNOptimizerStochasticGradientDescent object to a command buffer to perform an out-of-place update.
Parameters:
The following operations are applied:
useNestrov == NO:
m[t] = momentumScale * m[t-1] + learningRate * g
variable = variable - m[t]
useNestrov == YES:
m[t] = momentumScale * m[t-1] + g
variable = variable - (learningRate * (g + m[t] * momentumScale))
inputMomentumVector == nil:
variable = variable - (learningRate * g)
where
g is the gradient of the error with respect to the variable
m[t] is the momentum of the gradients; it is a state that is updated on every iteration.
Standard init with default properties per filter type
Parameters:
Returns:
Reimplemented from MPSNNOptimizer.
Convenience initialization for the momentum update
Parameters:
Returns:
Full initialization for the momentum update
Parameters:
Returns:
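A minimal sketch of the full initializer, assuming device is an existing id<MTLDevice> and that the MPSNNOptimizerDescriptor convenience constructor optimizerDescriptorWithLearningRate:gradientRescale:regularizationType:regularizationScale: is used to describe the common optimizer options; the chosen constant values are illustrative only.

    MPSNNOptimizerDescriptor *desc =
        [MPSNNOptimizerDescriptor optimizerDescriptorWithLearningRate:0.01f
                                                      gradientRescale:1.0f
                                                   regularizationType:MPSNNRegularizationTypeNone
                                                  regularizationScale:0.0f];

    // Enables a Nesterov momentum term with scale 0.9 (the defaults are 0.0 and NO).
    MPSNNOptimizerStochasticGradientDescent *sgd =
        [[MPSNNOptimizerStochasticGradientDescent alloc] initWithDevice:device
                                                           momentumScale:0.9f
                                                      useNestrovMomentum:YES
                                                     optimizerDescriptor:desc];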
The scale at which the momentum for the values array is updated. Default value is 0.0.
Nesterov momentum is considered an improvement over the usual momentum update. Default value is NO.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |