- NGraph operation: Building block of neural networks, such as convolution or pooling.
- (clDNN) Primitive: Basic NN operation defined in clDNN. One primitive usually maps to one ngraph operation, but graph compilation may make the mapping non-1-to-1.
- Kernel: Actual body of execution on the GPU. It also refers to a specific implementation of a primitive for the GPU, such as `convolution_gpu_winograd_2x3_s1.cl`. Usually a single kernel fulfills the operation of a single primitive, but several kernels may be used to support one primitive.
- Unittest: Single-layer test within cldnn.
- Functional test: Single-layer test in IE.
- Understand the new operation.
  - Review the ngraph operation spec.
  - IE operations (a.k.a. primitives or NN layers) are defined by ngraph.
  - You can also check the ngraph reference implementation of the operation.
- Try to find an existing primitive that fully or partially covers this operation.
  - It is also possible to transform the network so that the missing primitive is covered by an existing primitive, e.g. replacing reduce with pooling.
- Add a new cldnn primitive, or extend an existing one, according to the operation spec.
  - This phase enables the primitive within the cldnn library, without exposing it to IE.
- Implement a reference parallel kernel that supports all parameters of the operation and all input/output data types and layouts. A sketch of the kernel's host-side "supported key" follows the table.

| File | Description |
|------|-------------|
| scatter_elements_update_ref.cl | OpenCL kernel body. For more detail, please see the How to write OCL kernel section |
| scatter_elements_update_kernel_ref.(cpp,h) | Counterpart of the kernel body for the host |
| scatter_elements_update_kernel_selector.(cpp,h) | Kernel selector for the primitive |
| register_gpu.(cpp,hpp) | Primitive registration |
| scatter_elements_update_gpu.cpp | Primitive registration, input spec |
| scatter_elements_update_inst.h | Node type declaration for the cldnn program |
| clDNN/src/scatter_elements_update.cpp | Code for scatter_elements_update_inst.h |
| clDNN/api/cldnn/primitives/scatter_elements_update.hpp | clDNN primitive definition |
| common_types.h | Enum declaration for KernelType and arguments |
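Below is a minimal sketch of what the kernel's host-side counterpart typically declares. The class and method names follow the kernel_selector conventions implied by the file names above; the exact set of enabled data types and layouts is illustrative, not prescriptive.

```cpp
// scatter_elements_update_kernel_ref.cpp (illustrative sketch).
// GetSupportedKey() tells the kernel selector which data types and
// layouts this implementation handles; a reference kernel should
// enable everything the operation spec requires.
ParamsKey ScatterElementsUpdateKernelRef::GetSupportedKey() const {
    ParamsKey k;
    k.EnableInputDataType(Datatype::F16);
    k.EnableInputDataType(Datatype::F32);
    k.EnableOutputDataType(Datatype::F16);
    k.EnableOutputDataType(Datatype::F32);
    k.EnableAllInputLayout();
    k.EnableAllOutputLayout();
    k.EnableTensorOffset();
    k.EnableTensorPitches();
    k.EnableBatching();
    return k;
}
```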
- Add unit tests for the new operation.

| File | Description |
|------|-------------|
| scatter_elements_update_gpu_test.cpp | Unittest for the layer |
- You need to add reference code or expected results to validate the kernel output.
- You can also specify the kernel with `force_implementations` in case the primitive contains multiple kernels:

```cpp
...
build_options options;
implementation_desc conv_impl = { format::fs_b_yx_fsv32, "" };
options.set_option(build_option::force_implementations({ {"conv_fsv", conv_impl} }));
network network(engine, topology, options);
...
```
- This unit test is built into `clDNN_unit_tests`. It is a gtest application; a sketch of such a test follows this list.

```bash
# Show list of test cases
openvino/bin/intel64/Debug$ ./clDNN_unit_tests64 --gtest_list_tests
# Run test
openvino/bin/intel64/Debug$ ./clDNN_unit_tests64 --gtest_filter=scatter_elements_update_gpu_fp16.*
```
- Test scope needs to be comprehensive, but not wasteful. These tests run for every PR in CI. Let's save the planet.
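As a rough illustration, a clDNN unit test follows the usual gtest pattern: build a topology, execute a network, and compare the output against reference values. The skeleton below assumes the standard test utilities (such as `get_test_engine()`) are available; the test name is a placeholder, so copy the structure from an existing test in the same file rather than from this sketch.

```cpp
// Skeleton of a clDNN unit test (illustrative; names are placeholders).
TEST(scatter_elements_update_gpu_fp16, simple_case) {
    const auto& engine = get_test_engine();  // assumed test utility

    topology topology;
    // Add input_layout/data primitives and the scatter_elements_update
    // primitive under test to the topology here.

    build_options options;
    network network(engine, topology, options);
    // Set the input data, call network.execute(), then compare the output
    // buffer element-wise against precomputed reference values.
}
```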
- Support layer fusion, if applicable.
  - It is usually easy to fuse some layers, such as scale, activation, quantize, and eltwise, into the previous layer. Such a fusing rule can be added to `prepare_primitive_fusing::fuse_simple_primitives`, which is called during the graph compilation phase.
  - You can see a general description of layer fusion here.
  - Unit tests for layer fusion are placed in a single file: fusings_gpu_test.cpp. It is also compiled into `clDNN_unit_tests`.
  - Code for fused layers is generated with `jitter`. It is created as the `FUSED_OPS..` macro in OCL code. This generation logic is in `KernelBase::MakeFusedOpsJitConstants`.
- Add / update the factory for this operation in the GPU plugin to use the new primitive in inference-engine. A hedged sketch of the factory pattern follows the table.

| File | Description |
|------|-------------|
| cldnn_engine/ops/scatter_elements_update.cpp | Instantiation from the cldnn plugin for IE |
| cldnn_primitives_list.hpp | Registration for primitives |
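As a rough, pattern-based sketch of such a factory (the helper names below mirror what neighboring `ops/*.cpp` files use; the primitive construction itself is elided because its exact signature lives in `scatter_elements_update.hpp`):

```cpp
// cldnn_engine/ops/scatter_elements_update.cpp -- illustrative shape only.
static void CreateScatterElementsUpdateOp(Program& p,
        const std::shared_ptr<ngraph::op::v3::ScatterElementsUpdate>& op) {
    p.ValidateInputs(op, {4});  // data, indices, updates, axis
    auto inputPrimitives = p.GetInputPrimitiveIDs(op);
    std::string layerName = layer_type_name_ID(op);

    // Construct the cldnn::scatter_elements_update primitive here from
    // layerName, the input primitive ids, and the converted axis, then
    // add it to the program with p.AddPrimitive(...).

    p.AddPrimitiveToProfiler(op);
}

// Dispatches ngraph's v3::ScatterElementsUpdate to the factory above.
REGISTER_FACTORY_IMPL(v3, ScatterElementsUpdate);
```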
- Add functional single-layer tests for the operation and try to cover most of the different use cases of this operation.

| File | Description |
|------|-------------|
| single_layer_tests/scatter_elements_update.cpp | Single layer test |

  - It is possible to use ngraph reference code for result validation.
  - This is compiled into `gpuFuncTests`. It is also a gtest application, so the usual gtest options apply (see the example after this list).
  - Please also review the general guideline of the test infrastructure.
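For instance, you can run only the new layer's tests with a gtest filter (the filter pattern is illustrative; match it to the actual test suite name):

```bash
openvino/bin/intel64/Debug$ ./gpuFuncTests --gtest_filter=*ScatterElementsUpdate*
```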
- [Optional] If there are existing IRs with this operation, try to run the full model(s) to be sure that the operation is processed correctly in context.
- [Optional] If there are existing IRs with this operation, try to run the full model(s) and estimate the performance impact of this operation on total model execution time.
- Create a PR with your changes.
  - If you are a member of the OpenVINO group on GitHub, CI will be triggered.
  - Please review the OpenVINO contribution guide.
- The process is quite similar to the previous one. You can skip the steps that are already done.
- The main work is adding the new kernel and registering it from the kernel selector.
- You may need to add a unit test for that new kernel. A specific kernel can be chosen with `build_option::force_implementations`.
- It is not possible to specify the kernel from a functional test (IE).
In GPU OCL kernels, many conditional statements are processed with `#ifdef` so that they can be resolved at compile time. The definitions are created by `jitter.cpp` and set during graph compilation. You can see the generated macros by following the steps in source dumps. Jitter also injects run-time parameters such as input and output sizes. Additional macros can be defined from the host code of the kernel itself. For example, the code snippet below passes `SUB_GROUP_SIZE` to the kernel as a macro definition through jitter:
```cpp
// GetJitConstants method of the kernel
const size_t sub_group_size = 16;
JitConstants jit = MakeBaseParamsJitConstants(params);
jit.AddConstant(MakeJitConstant("SUB_GROUP_SIZE", sub_group_size));
```
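On the device side, the constant then behaves like any other preprocessor definition. A minimal illustration (the kernel itself is made up; only the macro mechanism matters here):

```c
// SUB_GROUP_SIZE arrives as an ordinary preprocessor definition
// in the generated .cl source, so it can drive #if blocks and attributes.
#if SUB_GROUP_SIZE == 16
__attribute__((intel_reqd_sub_group_size(16)))
#endif
__kernel void example_kernel(__global const float* input,
                             __global float* output) {
    const uint gid = get_global_id(0);
    output[gid] = input[gid];
}
```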
Jitter also generates macros for index calculations. With these macros, you can program an OCL kernel in a layout-agnostic way. If you use the macro `${TENSOR_NAME}_GET_INDEX`, you get the 1d-index from a tensor coordinate whether the format is planar (such as `bfyx` or `byxf`) or blocked (such as `b_fs_yx_fsv16`). You can check the source code of the GET_INDEX macro.
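For instance, a kernel built on `MakeBaseParamsJitConstants` can index its tensors as below (a minimal sketch; the kernel name and work-item mapping are illustrative):

```c
// Layout-agnostic element copy. INPUT0_GET_INDEX and OUTPUT_GET_INDEX
// expand to the correct offset math for whichever layout was selected
// at graph compilation time.
KERNEL(copy_example)(const __global INPUT0_TYPE* input,
                     __global OUTPUT_TYPE* output)
{
    const uint b = get_global_id(0);
    const uint f = get_global_id(1);
    const uint y = get_global_id(2) / OUTPUT_SIZE_X;
    const uint x = get_global_id(2) % OUTPUT_SIZE_X;

    output[OUTPUT_GET_INDEX(b, f, y, x)] = input[INPUT0_GET_INDEX(b, f, y, x)];
}
```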
If a kernel is not performance-critical, you can support only the `bfyx`, `bfzyx`, and `bfwzyx` layouts. Those are the default layouts. As optimized formats, `b_fs_yx_fsv16`, `b_fs_yx_fsv4`, or `byxf` can be used as well.
A general description of layouts can be found here, and the header file is here.
When layers are fused, `jitter` creates macros that generate code for the fused layers. This is realized as `FUSED_OPS..` in the OCL kernel. You can understand the usage from other kernels, as in the sketch below. There is a comment in the source that describes layer fusion.
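Stripped down to the mechanism, the typical epilogue looks like this (variable names are illustrative; the real `FUSED_OPS` expansion additionally depends on the index variables the kernel declares):

```c
// When fusions are attached, jitter defines HAS_FUSED_OPS and expands
// FUSED_OPS into the fused layers' code; FUSED_OPS_RESULT names the
// final value to store.
OUTPUT_TYPE result = TO_OUTPUT_TYPE(acc);
#if HAS_FUSED_OPS
    FUSED_OPS;
    result = FUSED_OPS_RESULT;
#endif
output[OUTPUT_GET_INDEX(b, f, y, x)] = result;
```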