MPSRNNMatrixInferenceLayer(3)




NAME

       MPSRNNMatrixInferenceLayer


SYNOPSIS

       #import <MPSRNNLayer.h>

       Inherits MPSKernel.

   Instance Methods
       (nonnull instancetype) - initWithDevice:rnnDescriptor:
       (nonnull instancetype) - initWithDevice:rnnDescriptors:
       (nonnull instancetype) - initWithDevice:
       (void) -
           encodeSequenceToCommandBuffer:sourceMatrices:destinationMatrices:recurrentInputState:recurrentOutputStates:
       (void) -
           encodeBidirectionalSequenceToCommandBuffer:sourceSequence:destinationForwardMatrices:destinationBackwardMatrices:
       (nullable instancetype) - initWithCoder:device:
       (nonnull instancetype) - copyWithZone:device:

   Properties
       NSUInteger inputFeatureChannels
       NSUInteger outputFeatureChannels
       NSUInteger numberOfLayers
       BOOL recurrentOutputIsTemporary
       BOOL storeAllIntermediateStates
       MPSRNNBidirectionalCombineMode bidirectionalCombineMode

   Additional Inherited Members

Detailed Description

        This depends on Metal.framework.

        The MPSRNNMatrixInferenceLayer specifies a recurrent neural network
        layer for inference on MPSMatrices. Two types of recurrent layers are
        currently supported: MPSRNNImageInferenceLayer, which operates with
        convolutions on images, and MPSRNNMatrixInferenceLayer, which operates
        on matrices. The former can often be used to implement the latter by
        using 1x1 matrices, but due to image size restrictions and performance
        it is advisable to use MPSRNNMatrixInferenceLayer for linear recurrent
        layers.

        A MPSRNNMatrixInferenceLayer is initialized using a single
        MPSRNNLayerDescriptor, which specifies one recurrent network layer, or
        an array of MPSRNNLayerDescriptors, which specifies a stack of
        recurrent layers that can operate in parallel on a subset of the
        inputs in a sequence of inputs and recurrent outputs. Note that stacks
        whose encode functions traverse the sequence bidirectionally currently
        do not support starting from a previous set of recurrent states, but
        this can be achieved quite easily by defining two separate
        unidirectional stacks of layers, running the same input sequence
        through them separately (one forwards and one backwards), and finally
        combining the two result sequences as desired with auxiliary
        functions.

        The input and output vectors in encode calls are stored as rows of the
        input and output matrices, and MPSRNNMatrixInferenceLayer currently
        supports only matrices whose number of rows equals one. The
        mathematical operation in the linear transformations of
        MPSRNNSingleGateDescriptor, MPSLSTMDescriptor and MPSGRUDescriptor is
        then, strictly speaking, y^T = W x^T, which is equivalent to
        y = x W^T.
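
        For illustration, the following sketch (not part of this interface; it
        assumes a valid MTLDevice and a hypothetical feature-channel count)
        wraps one input vector in a single-row MPSMatrix of the kind these
        encode calls expect:

            #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

            id<MTLDevice> device = MTLCreateSystemDefaultDevice();
            NSUInteger featureChannels = 128;   /* example width, not mandated by the API */

            /* MPSRNNMatrixInferenceLayer currently expects one row per vector. */
            size_t rowBytes = featureChannels * sizeof(float);
            MPSMatrixDescriptor *desc =
                [MPSMatrixDescriptor matrixDescriptorWithRows: 1
                                                      columns: featureChannels
                                                     rowBytes: rowBytes
                                                     dataType: MPSDataTypeFloat32];
            id<MTLBuffer> buf = [device newBufferWithLength: rowBytes
                                                    options: MTLResourceStorageModeShared];
            MPSMatrix *x0 = [[MPSMatrix alloc] initWithBuffer: buf descriptor: desc];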


Method Documentation

   - (nonnull instancetype) copyWithZone: (nullable NSZone *) zone(nullable
       id< MTLDevice >) device
        Make a copy of this kernel for a new device.

       See also:
           MPSKernel

       Parameters:
           zone The NSZone in which to allocate the object
           device The device for the new MPSKernel. If nil, then use
           self.device.

       Returns:
           a pointer to a copy of this MPSKernel. This will fail, returning
           nil if the device is not supported. Devices must be
           MTLFeatureSet_iOS_GPUFamily2_v1 or later.



       Reimplemented from MPSKernel.

   - (void) encodeBidirectionalSequenceToCommandBuffer: (nonnull id<
       MTLCommandBuffer >) commandBuffer(NSArray< MPSMatrix * > *__nonnull)
       sourceSequence(NSArray< MPSMatrix * > *__nonnull)
       destinationForwardMatrices(NSArray< MPSMatrix * > *__nullable)
       destinationBackwardMatrices
        Encode an MPSRNNMatrixInferenceLayer kernel stack for an input matrix
        sequence into a command buffer bidirectionally. The operation proceeds
        as follows: The first source matrix x0 is passed through all forward
        traversing layers in the stack, i.e. those that were initialized with
        MPSRNNSequenceDirectionForward; the recurrent input is assumed zero.
        This produces forward output yf0 and recurrent states hf00, hf01,
        hf02, ... hf0n, one for each forward layer in the stack. Then x1 is
        passed to the forward layers together with recurrent states hf00,
        hf01, ..., hf0n, which produces yf1 and hf10, ... This procedure is
        iterated up to the last matrix in the input sequence, x_(N-1), which
        produces forward output yf(N-1). The backwards layers iterate the same
        sequence in reverse, starting from input x_(N-1) (recurrent state
        zero), which produces yb(N-1) and recurrent outputs hb(N-1)0,
        hb(N-1)1, ... hb(N-1)m, one for each backwards traversing layer. Then
        the backwards layers handle input x_(N-2) using recurrent states
        hb(N-1)0, ..., et cetera, until the first matrix of the sequence is
        computed, producing output yb0. The result of the operation is either
        a pair of sequences ({yf0, yf1, ... , yf(N-1)}, {yb0, yb1, ... ,
        yb(N-1)}) or a combined sequence {(yf0 + yb0), ... , (yf(N-1) +
        yb(N-1))}, where '+' stands either for a sum or for concatenation
        along feature channels, as specified by bidirectionalCombineMode.

       Parameters:
           commandBuffer A valid MTLCommandBuffer to receive the encoded
           filter
            sourceSequence An array of valid MPSMatrix objects containing the
            source matrix sequence (x0, x1, ... x_(N-1)).
            destinationForwardMatrices An array of valid MPSMatrices to be
            overwritten by the results from the forward input matrices. If
            bidirectionalCombineMode is either
            MPSRNNBidirectionalCombineModeAdd or
            MPSRNNBidirectionalCombineModeConcatenate, then it will contain
            the combined results. destinationForwardMatrices may not alias any
            of the source matrices.
            destinationBackwardMatrices If bidirectionalCombineMode is
            MPSRNNBidirectionalCombineModeNone, then this must be an array of
            valid MPSMatrices that will be overwritten by the results from the
            backward input matrices. Otherwise this parameter is ignored and
            can be nil. destinationBackwardMatrices may not alias any of the
            source matrices.
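
        For illustration, a minimal sketch of a bidirectional encode that sums
        the two traversal directions (it assumes 'filter' is a suitably
        initialized MPSRNNMatrixInferenceLayer, 'cmdBuf' is a valid
        MTLCommandBuffer, and 'sources' and 'forwardResults' are previously
        created arrays of single-row MPSMatrix objects):

            filter.bidirectionalCombineMode = MPSRNNBidirectionalCombineModeAdd;

            /* With a combining mode set, the backward results are folded into
               the forward destination matrices, so the last argument may be nil. */
            [filter encodeBidirectionalSequenceToCommandBuffer: cmdBuf
                                                sourceSequence: sources
                                    destinationForwardMatrices: forwardResults
                                   destinationBackwardMatrices: nil];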



   - (void) encodeSequenceToCommandBuffer: (nonnull id< MTLCommandBuffer >)
       commandBuffer(NSArray< MPSMatrix * > *__nonnull)
       sourceMatrices(NSArray< MPSMatrix * > *__nonnull)
       destinationMatrices(MPSRNNRecurrentMatrixState *__nullable)
       recurrentInputState(NSMutableArray< MPSRNNRecurrentMatrixState * >
       *__nullable) recurrentOutputStates
        Encode an MPSRNNMatrixInferenceLayer kernel (stack) for a sequence of
        inputs into a command buffer. Note that when encoding using this
        function the layerSequenceDirection is ignored and the layer stack
        operates as if all layers were forward feeding layers. In order to run
        bidirectional sequences use
        encodeBidirectionalSequenceToCommandBuffer:sourceSequence: or
        alternatively run two layer stacks and combine the results at the end
        using utility functions.

       Parameters:
           commandBuffer A valid MTLCommandBuffer to receive the encoded
           filter
           sourceMatrices An array of valid MPSMatrix objects containing the
           sequence of source matrices.
            destinationMatrices An array of valid MPSMatrices to be
            overwritten by the result matrix sequence. destinationMatrices may
            not alias sourceMatrices.
           recurrentInputState An optional state containing the output
           matrices and memory cells (for LSTMs) of the layer obtained from
           the previous input matrices in a sequence of inputs. Has to be the
           output of a previous call to this function or nil (assumed zero).
           Note: can be one of the states returned in
           intermediateRecurrentStates.
           recurrentOutputStates An optional array that will contain the
           recurrent output states. If nil then the recurrent output state is
           discarded. If storeAllIntermediateStates is YES, then all
           intermediate states of the sequence are returned in the array, the
           first one corresponding to the first input in the sequence,
            otherwise only the last recurrent output state is returned. If
            recurrentOutputIsTemporary is YES, then all returned recurrent
            states will be temporary.

       See also:
           MPSState:isTemporary. Example: In order to get a new state one can
           do the following:

           MPSRNNRecurrentMatrixState* recurrent0 = nil;
           [filter encodeToCommandBuffer: cmdBuf
                            sourceMatrix: source0
                       destinationMatrix: destination0
                     recurrentInputState: nil
                    recurrentOutputState: &recurrent0];


            Then use it for the next input in sequence:

           [filter encodeToCommandBuffer: cmdBuf
                            sourceMatrix: source1
                       destinationMatrix: destination1
                     recurrentInputState: recurrent0
                    recurrentOutputState: &recurrent0];


            And discard recurrent output of the third input:

           [filter encodeToCommandBuffer: cmdBuf
                            sourceMatrix: source2
                       destinationMatrix: destination2
                     recurrentInputState: recurrent0
                    recurrentOutputState: nil];
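
             The whole sequence can also be encoded with a single call to this
             function; a sketch (again assuming 'filter', 'cmdBuf' and the
             single-row source and destination matrices from above):

            NSArray<MPSMatrix *> *sources      = @[ source0, source1, source2 ];
            NSArray<MPSMatrix *> *destinations = @[ destination0, destination1, destination2 ];
            NSMutableArray<MPSRNNRecurrentMatrixState *> *states = [NSMutableArray array];

            /* With storeAllIntermediateStates == NO only the final recurrent
               state is appended to 'states'; set it to YES to receive one
               state per input in the sequence. */
            [filter encodeSequenceToCommandBuffer: cmdBuf
                                   sourceMatrices: sources
                              destinationMatrices: destinations
                              recurrentInputState: nil
                            recurrentOutputStates: states];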






   - (nullable instancetype) initWithCoder: (NSCoder *__nonnull)
       aDecoder(nonnull id< MTLDevice >) device
        NSSecureCoding compatibility.  See MPSKernel::initWithCoder.

       Parameters:
           aDecoder The NSCoder subclass with your serialized
           MPSRNNMatrixInferenceLayer
           device The MTLDevice on which to make the
           MPSRNNMatrixInferenceLayer

       Returns:
            A new MPSRNNMatrixInferenceLayer object, or nil on failure.



       Reimplemented from MPSKernel.

   - (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device
       Standard init with default properties per filter type

       Parameters:
           device The device that the filter will be used on. May not be NULL.

       Returns:
           a pointer to the newly initialized object. This will fail,
           returning nil if the device is not supported. Devices must be
           MTLFeatureSet_iOS_GPUFamily2_v1 or later.



       Reimplemented from MPSKernel.

   - (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >)
       device(nonnull const MPSRNNDescriptor *) rnnDescriptor
       Initializes a linear (fully connected) RNN kernel

       Parameters:
            device The MTLDevice on which this MPSRNNMatrixInferenceLayer
            filter will be used
           rnnDescriptor The descriptor that defines the RNN layer

       Returns:
            A valid MPSRNNMatrixInferenceLayer object, or nil on failure.
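
        For illustration, a sketch of creating such a layer from a single-gate
        descriptor ('MyWeights' below is a hypothetical class standing in for
        an application-supplied object that conforms to
        MPSCNNConvolutionDataSource and provides the trained weights):

            MPSRNNSingleGateDescriptor *d =
                [MPSRNNSingleGateDescriptor
                    createRNNSingleGateDescriptorWithInputFeatureChannels: 128
                                                     outputFeatureChannels: 256];
            d.inputWeights     = [[MyWeights alloc] init];   /* hypothetical data source */
            d.recurrentWeights = [[MyWeights alloc] init];   /* hypothetical data source */

            MPSRNNMatrixInferenceLayer *filter =
                [[MPSRNNMatrixInferenceLayer alloc] initWithDevice: device
                                                     rnnDescriptor: d];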



   - (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >)
       device(NSArray< const MPSRNNDescriptor * > *__nonnull) rnnDescriptors
       Initializes a kernel that implements a stack of linear (fully
       connected) RNN layers

       Parameters:
            device The MTLDevice on which this MPSRNNMatrixInferenceLayer
            filter will be used
           rnnDescriptors An array of RNN descriptors that defines a stack of
           RNN layers, starting at index zero. The number of layers in stack
           is the number of entries in the array. All entries in the array
           must be valid MPSRNNDescriptors.

       Returns:
            A valid MPSRNNMatrixInferenceLayer object, or nil on failure.
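
        For illustration, a sketch of a two-layer LSTM stack (gate weight data
        sources must still be set on each descriptor, as in the single-layer
        example above; that step is elided here):

            MPSLSTMDescriptor *layer0 =
                [MPSLSTMDescriptor createLSTMDescriptorWithInputFeatureChannels: 128
                                                          outputFeatureChannels: 256];
            MPSLSTMDescriptor *layer1 =
                [MPSLSTMDescriptor createLSTMDescriptorWithInputFeatureChannels: 256
                                                          outputFeatureChannels: 256];
            /* ... assign the gate weight data sources on layer0 and layer1 ... */

            MPSRNNMatrixInferenceLayer *stack =
                [[MPSRNNMatrixInferenceLayer alloc] initWithDevice: device
                                                    rnnDescriptors: @[ layer0, layer1 ]];
            /* stack.numberOfLayers == 2 */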




Property Documentation

   - bidirectionalCombineMode [read],  [write],  [nonatomic],  [assign]
       Defines how to combine the output-results, when encoding bidirectional
       layers using encodeBidirectionalSequenceToCommandBuffer. Defaults to
       MPSRNNBidirectionalCombineModeNone.

   - inputFeatureChannels [read],  [nonatomic],  [assign]
        The number of feature channels in the input vector/matrix.

   - numberOfLayers [read],  [nonatomic],  [assign]
       Number of layers in the filter-stack. This will be one when using
       initWithDevice:rnnDescriptor to initialize this filter and the number
       of entries in the array 'rnnDescriptors' when initializing this filter
       with initWithDevice:rnnDescriptors.

   - outputFeatureChannels [read],  [nonatomic],  [assign]
       The number of feature channels in the output vector/matrix.

   - recurrentOutputIsTemporary [read],  [write],  [nonatomic],  [assign]
        How output states from encodeSequenceToCommandBuffer are constructed.
        Defaults to NO. For reference see MPSState.



   - storeAllIntermediateStates [read],  [write],  [nonatomic],  [assign]
       If YES then calls to encodeSequenceToCommandBuffer return every
       recurrent state in the array: recurrentOutputStates. Defaults to NO.



Author

       Generated automatically by Doxygen for
       MetalPerformanceShaders.framework from the source code.





Version MetalPerformanceShaders          Thu Jul 13 2017  MPSRNNMatrixInferenceLayer(3)

