[Good First Issue] [Snippets] [ARM]: Enable Logical ops tokenization in CPU Plugin #28160
Comments
"Hi @a-sidorova and @dmitry-gorokhov, I'm excited to contribute to this project and look forward to your guidance on issue #28160!" |
@BhargavMah Hi! I assigned it to you, good luck! 🚀 I believe that all needed information about the task is described in the issue description. Please take a look at this. If you have any specific questions, please feel free to ask them :) |
Hey there, I would like to take this issue. |
Thanks for being interested in this issue. It looks like this ticket is already assigned to a contributor. Please communicate with the assigned contributor to confirm the status of the issue. |
Hi @a-sidorova and @dmitry-gorokhov, as per the issue description, it is mentioned that all JIT emitters are already implemented, and the task is to only modify the CPU generator and transformation files. However, I couldn't find the emitter for logical operations. If such an emitter file does not exist, could you please confirm? Otherwise, kindly guide me to its location. Thank you! |
Hi @BhargavMah! Thank you for taking the ticket and question! If you have some questions, please, don't hesitate and ask them 😊 |
Context
Snippets is a highly specialized JIT (just-in-time) compiler for computational graphs. It provides a scalable approach to operation fusion and enablement. Like a typical compiler, Snippets has a frontend (tokenizer), an optimizer, and a backend (generator).

The first stage of the Snippets pipeline, tokenization, identifies parts of the initial `ov::Model` that Snippets can lower efficiently and tokenizes them into one whole node: a `Subgraph`. The second stage (the optimizer) applies common and device-specific optimizations to the `Subgraph` and produces a lowered representation of it. Finally, the last stage is code emission: the target generator maps every operation in the IR to a binary code emitter (JIT emitter), which is then used to produce a piece of executable code. As a result, we obtain an executable that performs the calculations described by the initial input `ov::Model`.

The purpose of this issue is to enable tokenization of the logical elementwise operations (LogicalAnd, LogicalOr, LogicalXor, LogicalNot) in Snippets for ARM devices. All JIT emitters are already implemented in the CPU plugin, so there is no need to write new JIT emitters.
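For illustration, here is a minimal, hypothetical model containing a chain of logical elementwise ops. The op classes and API calls are standard OpenVINO; the model itself is made up for this example. Once this issue is done, such a chain should be tokenized into a single Snippets `Subgraph` on ARM during plugin compilation:

```cpp
#include <memory>

#include <openvino/op/logical_and.hpp>
#include <openvino/op/logical_not.hpp>
#include <openvino/op/parameter.hpp>
#include <openvino/openvino.hpp>

int main() {
    // Two boolean inputs feeding a small chain of logical elementwise ops.
    auto a = std::make_shared<ov::op::v0::Parameter>(ov::element::boolean, ov::Shape{1, 16});
    auto b = std::make_shared<ov::op::v0::Parameter>(ov::element::boolean, ov::Shape{1, 16});
    auto logical_and = std::make_shared<ov::op::v1::LogicalAnd>(a, b);
    auto logical_not = std::make_shared<ov::op::v1::LogicalNot>(logical_and);
    auto model = std::make_shared<ov::Model>(ov::OutputVector{logical_not},
                                             ov::ParameterVector{a, b});

    // Tokenization runs inside plugin compilation; with this issue resolved,
    // the two logical ops should be fused into one snippets Subgraph on ARM.
    ov::Core core;
    auto compiled = core.compile_model(model, "CPU");
    return 0;
}
```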
Prerequisites
It is recommended to use an ARM CPU based platform for development (e.g. Mac, Raspberry Pi, etc.). Cross-compilation with an emulator (e.g. QEMU) is also an option: `cmake -DCMAKE_TOOLCHAIN_FILE=../cmake/arm64.toolchain.cmake ..`.
What needs to be done?
Enable tokenization of the logical ops by adding them to `is_supported_op` in the tokenization pass callback, as in the sketch below.
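A minimal sketch of the kind of change, assuming the callback lists supported op types via `ov::is_type` checks; this is not the exact plugin source, and the real callback covers many more op types:

```cpp
#include <memory>

#include <openvino/core/node.hpp>
#include <openvino/core/type.hpp>
#include <openvino/op/logical_and.hpp>
#include <openvino/op/logical_not.hpp>
#include <openvino/op/logical_or.hpp>
#include <openvino/op/logical_xor.hpp>

// Sketch: extend the supported-op check so the logical elementwise ops are
// allowed into a snippets Subgraph during tokenization on ARM.
static bool is_supported_op(const std::shared_ptr<const ov::Node>& n) {
    // ... existing checks for the already-enabled elementwise ops ...
    return ov::is_type<ov::op::v1::LogicalAnd>(n) ||
           ov::is_type<ov::op::v1::LogicalOr>(n) ||
           ov::is_type<ov::op::v1::LogicalXor>(n) ||
           ov::is_type<ov::op::v1::LogicalNot>(n);
}
```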
Tests
Tests are disabled in the default build, so make sure to add `-DENABLE_TESTS=ON` to the cmake command. GoogleTest is used for testing. The CPU functional test target is `ov_cpu_func_tests`. You can use a GoogleTest filter:
`./bin/[platform]/[build_type]/ov_cpu_func_tests --gtest_filter="*smoke*LogicalLayerTest*"`
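For reference, a hypothetical, self-contained illustration of the test naming convention that the filter above matches; the real suites are the shared functional tests instantiated in the CPU plugin:

```cpp
#include <gtest/gtest.h>

// Hypothetical stand-in for the shared functional test suite.
class LogicalLayerTest : public ::testing::TestWithParam<int> {};

TEST_P(LogicalLayerTest, Run) {
    SUCCEED();  // real tests compare plugin output against a reference
}

// The "smoke_" instantiation prefix yields full test names such as
// "smoke_Basic/LogicalLayerTest.Run/0", which "*smoke*LogicalLayerTest*" matches.
INSTANTIATE_TEST_SUITE_P(smoke_Basic, LogicalLayerTest, ::testing::Values(0, 1));
```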
Examples
Resources
Contact points
@a-sidorova, @dmitry-gorokhov