MPSCNNKernel(3)        MetalPerformanceShaders.framework       MPSCNNKernel(3)




NAME

       MPSCNNKernel


SYNOPSIS

       #import <MPSCNNKernel.h>

       Inherits MPSKernel.

       Inherited by MPSCNNBinaryConvolution, MPSCNNConvolution,
       MPSCNNConvolutionTranspose, MPSCNNCrossChannelNormalization,
       MPSCNNLocalContrastNormalization, MPSCNNLogSoftMax, MPSCNNNeuron,
       MPSCNNPooling, MPSCNNSoftMax, MPSCNNSpatialNormalization,
       MPSCNNUpsampling, and MPSRNNImageInferenceLayer.

   Instance Methods
       (nonnull instancetype) - initWithDevice:
       (nullable instancetype) - initWithCoder:device:
       (void) - encodeToCommandBuffer:sourceImage:destinationImage:
       (MPSImage *__nonnull) - encodeToCommandBuffer:sourceImage:

   Properties
       MPSOffset offset
       MTLRegion clipRect
       NSUInteger destinationFeatureChannelOffset
       MPSImageEdgeMode edgeMode
       NSUInteger kernelWidth
       NSUInteger kernelHeight
       NSUInteger strideInPixelsX
       NSUInteger strideInPixelsY
       BOOL isBackwards
       id< MPSNNPadding > padding
        id< MPSImageAllocator > destinationImageAllocator

   Additional Inherited Members

Detailed Description

       This depends on Metal.framework.  Describes a convolutional neural
       network kernel.  A MPSCNNKernel consumes one MPSImage and produces one
       MPSImage.

               The region overwritten in the destination MPSImage is described
               by the clipRect.  The top left corner of the region consumed (ignoring
               adjustments for filter size -- e.g. convolution filter size) is given
               by the offset. The size of the region consumed is a function of the
               clipRect size and any subsampling caused by pixel strides at work,
               e.g. MPSCNNPooling.strideInPixelsX/Y.  Where the offset + clipRect
               would cause a {x,y} pixel address not in the image to be read, the
               edgeMode is used to determine what value to read there.

               The Z/depth component of the offset, clipRect.origin and clipRect.size
               indexes which images to use. If the MPSImage contains only a single image
               then these should be offset.z = 0, clipRect.origin.z = 0
               and clipRect.size.depth = 1. If the MPSImage contains multiple images,
               clipRect.size.depth refers to number of images to process. Both source
               and destination MPSImages must have at least this many images. offset.z
               refers to starting source image index. Thus offset.z + clipRect.size.depth must
               be <= source.numberOfImages. Similarly, clipRect.origin.z refers to starting
               image index in destination. So clipRect.origin.z + clipRect.size.depth must be
                <= destination.numberOfImages.

                The destinationFeatureChannelOffset property can be used to control where the
                MPSKernel will start writing in the feature channel dimension. For example,
                if the destination image has 64 channels, and the MPSKernel outputs 32
                channels, by default channels 0-31 of the destination will be populated by
                the MPSKernel. But if we want this MPSKernel to populate channels 32-63
                of the destination, we can set destinationFeatureChannelOffset = 32.
                A good example of this is the concat (concatenation) operation in TensorFlow. Suppose
               we have a src = w x h x Ni which goes through CNNConvolution_0 which produces
               output O0 = w x h x N0 and CNNConvolution_1 which produces output O1 = w x h x N1 followed
               by concatenation which produces O = w x h x (N0 + N1). We can achieve this by creating
               an MPSImage with dimensions O = w x h x (N0 + N1) and using this as destination of
               both convolutions as follows
                   CNNConvolution0: destinationFeatureChannelOffset = 0, this will output N0 channels starting at
                                    channel 0 of destination thus populating [0,N0-1] channels.
                   CNNConvolution1: destinationFeatureChannelOffset = N0, this will output N1 channels starting at
                                    channel N0 of destination thus populating [N0,N0+N1-1] channels.
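
                The concatenation pattern above might be encoded roughly as
                follows. This is a sketch only: w, h, N0, N1, device,
                commandBuffer, src, conv0 and conv1 are assumed to have been
                set up elsewhere, with conv0 and conv1 configured to emit N0
                and N1 feature channels respectively.

```objc
// Destination large enough to hold both outputs side by side in the
// feature-channel dimension.
MPSImageDescriptor *desc =
    [MPSImageDescriptor imageDescriptorWithChannelFormat: MPSImageFeatureChannelFormatFloat16
                                                   width: w
                                                  height: h
                                         featureChannels: N0 + N1];
MPSImage *concatenated = [[MPSImage alloc] initWithDevice: device
                                          imageDescriptor: desc];

// Both convolutions write into the same destination; the offsets
// interleave their outputs along the feature-channel dimension.
conv0.destinationFeatureChannelOffset = 0;   // populates channels [0, N0-1]
[conv0 encodeToCommandBuffer: commandBuffer
                 sourceImage: src
            destinationImage: concatenated];

conv1.destinationFeatureChannelOffset = N0;  // populates channels [N0, N0+N1-1]
[conv1 encodeToCommandBuffer: commandBuffer
                 sourceImage: src
            destinationImage: concatenated];
```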


                A MPSCNNKernel can be saved to disk or sent over a network using NSCoders
                such as NSKeyedArchiver.  When decoding, the system default MTLDevice
                will be chosen unless the NSCoder adopts the <MPSDeviceProvider>
                protocol.  To accomplish this, you will likely need to subclass your
                unarchiver to add this method.
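
                A minimal subclass might look like the following sketch. The
                property name mpsDevice is an arbitrary choice for this
                illustration; only the mpsMTLDevice method is required by the
                protocol.

```objc
// Sketch: an NSKeyedUnarchiver subclass adopting <MPSDeviceProvider> so
// that decoded kernels land on a device of your choosing rather than the
// system default.
@interface MyUnarchiver : NSKeyedUnarchiver <MPSDeviceProvider>
@property (nonatomic, retain) id<MTLDevice> mpsDevice;  // hypothetical name
@end

@implementation MyUnarchiver
// The one method <MPSDeviceProvider> requires: report the MTLDevice on
// which decoded MPSKernel objects should be created.
- (id<MTLDevice>) mpsMTLDevice { return self.mpsDevice; }
@end
```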




Method Documentation

    - (MPSImage * __nonnull) encodeToCommandBuffer: (nonnull id< MTLCommandBuffer >)
        commandBuffer sourceImage: (MPSImage * __nonnull) sourceImage
        Encode a MPSCNNKernel into a command buffer. Create a texture to hold
        the result and return it.  In the first iteration on this method,
       encodeToCommandBuffer:sourceImage:destinationImage: some work was left
       for the developer to do in the form of correctly setting the offset
       property and sizing the result buffer. With the introduction of the
       padding policy (see padding property) the filter can do this work
       itself. If you would like to have some input into what sort of MPSImage
       (e.g. temporary vs. regular) or what size it is or where it is
       allocated, you may set the destinationImageAllocator to allocate the
       image yourself.

       This method uses the MPSNNPadding padding property to figure out how to
       size the result image and to set the offset property. See discussion in
       MPSNeuralNetworkTypes.h.

       Parameters:
           commandBuffer The command buffer
            sourceImage A MPSImage to use as the source image for the filter.

       Returns:
           A MPSImage or MPSTemporaryImage allocated per the
           destinationImageAllocator containing the output of the graph. The
           offset property will be adjusted to reflect the offset used during
           the encode. The returned image will be automatically released when
           the command buffer completes. If you want to keep it around for
           longer, retain the image. (ARC will do this for you if you use it
           later.)
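
        A sketch of a typical call, assuming kernel, commandBuffer and src
        were created elsewhere:

```objc
// The padding policy sizes the result and destinationImageAllocator
// allocates it (a MPSTemporaryImage by default).
MPSImage *result = [kernel encodeToCommandBuffer: commandBuffer
                                     sourceImage: src];
// The returned image is released when the command buffer completes;
// retain it (or install a non-temporary allocator) to keep it longer.
```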



    - (void) encodeToCommandBuffer: (nonnull id< MTLCommandBuffer >) commandBuffer
        sourceImage: (MPSImage * __nonnull) sourceImage
        destinationImage: (MPSImage * __nonnull) destinationImage
        Encode a MPSCNNKernel into a command buffer. The operation shall
       proceed out-of-place.  This is the older style of encode which reads
       the offset, doesn't change it, and ignores the padding method.

       Parameters:
           commandBuffer A valid MTLCommandBuffer to receive the encoded
           filter
           sourceImage A valid MPSImage object containing the source image.
           destinationImage A valid MPSImage to be overwritten by result
           image. destinationImage may not alias sourceImage.
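
        With this older style, the caller sizes and allocates the destination
        itself. A sketch for a stride-2 pooling kernel, where pool, device,
        commandBuffer and src are assumed to exist already:

```objc
// A stride-2 pooling result is roughly half the source size in each
// spatial dimension; the caller is responsible for getting this right.
MPSImageDescriptor *desc =
    [MPSImageDescriptor imageDescriptorWithChannelFormat: MPSImageFeatureChannelFormatFloat16
                                                   width: src.width  / 2
                                                  height: src.height / 2
                                         featureChannels: src.featureChannels];
MPSImage *dst = [[MPSImage alloc] initWithDevice: device
                                 imageDescriptor: desc];
[pool encodeToCommandBuffer: commandBuffer
                sourceImage: src
           destinationImage: dst];
```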



    - (nullable instancetype) initWithCoder: (NSCoder * __nonnull) aDecoder
        device: (nonnull id< MTLDevice >) device
        NSSecureCoding compatibility.  While the standard
        NSSecureCoding/NSCoding method -initWithCoder: should work, since the
        file can't know which device your data is allocated on, we have to
        guess and may guess incorrectly. To avoid that problem, use
        initWithCoder:device: instead.

       Parameters:
           aDecoder The NSCoder subclass with your serialized MPSKernel
           device The MTLDevice on which to make the MPSKernel

       Returns:
           A new MPSKernel object, or nil if failure.



       Reimplemented from MPSKernel.

       Reimplemented in MPSCNNBinaryConvolution, MPSCNNBinaryFullyConnected,
       MPSCNNConvolutionTranspose, MPSCNNConvolution, MPSCNNFullyConnected,
       MPSRNNImageInferenceLayer, MPSCNNNeuron, MPSCNNDilatedPoolingMax,
       MPSCNNPoolingAverage, MPSCNNPoolingL2Norm,
       MPSCNNCrossChannelNormalization, MPSCNNPooling, MPSCNNPoolingMax,
       MPSCNNLocalContrastNormalization, and MPSCNNSpatialNormalization.

   - (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device
        Standard init with default properties per filter type.

       Parameters:
           device The device that the filter will be used on. May not be NULL.

       Returns:
           A pointer to the newly initialized object. This will fail,
           returning nil if the device is not supported. Devices must be
           MTLFeatureSet_iOS_GPUFamily2_v1 or later.



       Reimplemented from MPSKernel.

       Reimplemented in MPSCNNBinaryConvolution, MPSCNNBinaryFullyConnected,
       MPSCNNConvolutionTranspose, MPSCNNConvolution, MPSCNNFullyConnected,
       MPSRNNImageInferenceLayer, MPSCNNNeuronReLUN, MPSCNNNeuronELU,
       MPSCNNCrossChannelNormalization, MPSCNNPooling, MPSCNNNeuronSoftPlus,
       MPSCNNNeuronSoftSign, MPSCNNNeuronTanH, MPSCNNNeuronAbsolute,
       MPSCNNNeuronHardSigmoid, MPSCNNLocalContrastNormalization,
       MPSCNNNeuronReLU, MPSCNNNeuronPReLU, MPSCNNNeuronSigmoid,
       MPSCNNNeuronLinear, MPSCNNSpatialNormalization, and MPSCNNUpsampling.


Property Documentation

   - clipRect [read],  [write],  [nonatomic],  [assign]
       An optional clip rectangle to use when writing data. Only the pixels in
       the rectangle will be overwritten.  A MTLRegion that indicates which
       part of the destination to overwrite. If the clipRect does not lie
       completely within the destination image, the intersection between clip
       rectangle and destination bounds is used. Default: MPSRectNoClip
       (MPSKernel::MPSRectNoClip) indicating the entire image.
       clipRect.origin.z is the index of starting destination image in batch
       processing mode. clipRect.size.depth is the number of images to process
       in batch processing mode.

       See Also: MetalPerformanceShaders.h subsubsection_clipRect
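
        For example, the batch-processing fields might be set as in this
        sketch, where kernel and a destination image dst are assumed to
        exist and the source holds at least 6 images:

```objc
// Process 4 images of the source batch, starting at source image 2,
// writing them to destination images 0..3.
kernel.offset = (MPSOffset){ .x = 0, .y = 0, .z = 2 };   // starting source image index
kernel.clipRect = MTLRegionMake3D(0, 0, 0,               // origin; origin.z = 0
                                  dst.width, dst.height, // full spatial extent
                                  4);                    // size.depth = 4 images
```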

   - destinationFeatureChannelOffset [read],  [write],  [nonatomic],  [assign]
       The number of channels in the destination MPSImage to skip before
       writing output.  This is the starting offset into the destination image
       in the feature channel dimension at which destination data is written.
       This allows an application to pass a subset of all the channels in
       MPSImage as output of MPSKernel. E.g. Suppose MPSImage has 24 channels
       and a MPSKernel outputs 8 channels. If we want channels 8 to 15 of this
       MPSImage to be used as output, we can set
       destinationFeatureChannelOffset = 8. Note that this offset applies
       independently to each image when the MPSImage is a container for
       multiple images and the MPSCNNKernel is processing multiple images
       (clipRect.size.depth > 1). The default value is 0 and any value
        specified must be a multiple of 4. If MPSKernel outputs N channels,
       destination image MUST have at least destinationFeatureChannelOffset +
        N channels. Using a destination image with an insufficient number of
        feature channels results in an error. E.g. if the MPSCNNConvolution
       outputs 32 channels, and destination has 64 channels, then it is an
       error to set destinationFeatureChannelOffset > 32.

    - (id< MPSImageAllocator >) destinationImageAllocator
        [read],  [write],  [nonatomic],  [retain]
       Method to allocate the result image for
       -encodeToCommandBuffer:sourceImage:  Default: defaultAllocator
       (MPSTemporaryImage)

   - edgeMode [read],  [write],  [nonatomic],  [assign]
        The MPSImageEdgeMode to use when texture reads stray off the edge of an
        image.  Most MPSKernel objects can read off the edge of the source
       image. This can happen because of a negative offset property, because
       the offset + clipRect.size is larger than the source image or because
       the filter looks at neighboring pixels, such as a Convolution filter.
       Default: MPSImageEdgeModeZero.

       See Also: MetalPerformanceShaders.h subsubsection_edgemode Note: For
       MPSCNNPoolingAverage specifying edge mode MPSImageEdgeModeClamp is
       interpreted as a 'shrink-to-edge' operation, which shrinks the
       effective filtering window to remain within the source image borders.

   - isBackwards [read],  [nonatomic],  [assign]
       YES if the filter operates backwards.  This influences how
       strideInPixelsX/Y should be interpreted. Most filters either have
       stride 1 or are reducing, meaning that the result image is smaller than
       the original by roughly a factor of the stride. A few 'backward'
        filters (e.g. unpooling) are intended to 'undo' the effects of an
       earlier forward filter, and so enlarge the image. The stride is in the
       destination coordinate frame rather than the source coordinate frame.

   - kernelHeight [read],  [nonatomic],  [assign]
        The height of the MPSCNNKernel filter window.  This is the vertical
       diameter of the region read by the filter for each result pixel. If the
       MPSCNNKernel does not have a filter window, then 1 will be returned.

        Warning: This property was lowered to this class in iOS/tvOS 11. The
        property may not be available on iOS/tvOS 10 for all subclasses of
        MPSCNNKernel.

   - kernelWidth [read],  [nonatomic],  [assign]
        The width of the MPSCNNKernel filter window.  This is the horizontal
       diameter of the region read by the filter for each result pixel. If the
       MPSCNNKernel does not have a filter window, then 1 will be returned.

        Warning: This property was lowered to this class in iOS/tvOS 11. The
        property may not be available on iOS/tvOS 10 for all subclasses of
        MPSCNNKernel.

   - offset [read],  [write],  [nonatomic],  [assign]
       The position of the destination clip rectangle origin relative to the
       source buffer.  The offset is defined to be the position of
       clipRect.origin in source coordinates. Default: {0,0,0}, indicating
       that the top left corners of the clipRect and source image align.
       offset.z is the index of starting source image in batch processing
       mode.

       See Also: MetalPerformanceShaders.h subsubsection_mpsoffset
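
        For instance, to read only the fully covered ("valid") region of the
        source with a convolution kernel, the offset might be pushed in by
        half the filter window, as in this sketch (conv is an assumed
        MPSCNNConvolution):

```objc
// Shift the read origin so the filter window never hangs off the
// top-left edge of the source image.
conv.offset = (MPSOffset){ .x = (NSInteger)(conv.kernelWidth  / 2),
                           .y = (NSInteger)(conv.kernelHeight / 2),
                           .z = 0 };
```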

   - padding [read],  [write],  [nonatomic],  [assign]
       The padding method used by the filter  This influences how the
       destination image is sized and how the offset into the source image is
       set. It is used by the -encode methods that return a MPSImage from the
       left hand side.

   - strideInPixelsX [read],  [nonatomic],  [assign]
        The downsampling (or upsampling, if a backwards filter) factor in the
        horizontal dimension.  If the filter does not do up or downsampling, 1
        is returned.

                Warning: This property was lowered to this class in iOS/tvOS 11.
                         The property may not be available on iOS/tvOS 10 for
                         all subclasses of MPSCNNKernel.



   - strideInPixelsY [read],  [nonatomic],  [assign]
        The downsampling (or upsampling, if a backwards filter) factor in the
        vertical dimension.  If the filter does not do up or downsampling, 1 is
        returned.

                Warning: This property was lowered to this class in iOS/tvOS 11.
                         The property may not be available on iOS/tvOS 10 for
                         all subclasses of MPSCNNKernel.





Author

       Generated automatically by Doxygen for
       MetalPerformanceShaders.framework from the source code.





Version MetalPerformanceShaders        Thu Jul 13 2017                MPSCNNKernel(3)

