Feature: Lightweight AI Sub-System
We would like OpenHarmony (OHOS) to support the Lightweight AI Sub-System: a robust framework for AI applications in OHOS that can cater to different application and use-case requirements. The Lightweight AI framework fits into OHOS as an optional module that can be used on both high-capacity and lightweight devices.
Work in progress
Need PMC approval: Lightweight AI run-time interfaces kit
Need PMC approval: Lightweight AI run-time framework
Need PMC approval: Third-party dlpack used in the Lightweight AI run-time framework
PR sample app: Sample applications using the Lightweight AI run-time
Efficient RAM reduction:
Our solution is customizable; based on the memory requirement, users can choose between:
Efficient ROM reduction:
Depending on the needs of the application, users can choose the optimization level that meets their requirements.
/**
 * @brief Constructor; initializes the runtime and allocates necessary memory.
 * @param graph_byte Pointer to the graph data of the model.
 * @param param_byte Pointer to the param data of the model.
 * @param kernel_funcs The list of kernel functions used for this model.
 */
Runtime(const char* graph_byte, const char* param_byte, std::vector<KernelFunc>* kernel_funcs);
/**
 * @brief Sets an input tensor of the model.
 * @param name The name of the input tensor to be set.
 * @param tensor The input tensor value to be set for execution of the model.
 */
void SetInput(const char* name, DLTensor* tensor);
/**
 * @brief Executes the model.
 */
/**
 * @brief Returns the tensor for a given output index after execution.
 * @param index The output index.
 * @param tensor The output pointer to which the data is copied.
 */
void GetOutput(int32_t index, DLTensor* tensor);
A sample application has been implemented to use the above service; it is maintained with the sample camera apps. The sample application binaries generated in the bin directory can be invoked to see the results.
The Lightweight AI framework is currently a work in progress, as per the above-mentioned PRs, and is awaiting approval to create the repository.
Apart from the current optimized framework capabilities, the following enhancements are planned:
-- Memory planner (30-Oct)
-- CMSIS-NN (30-Oct)
-- Camera integrated demo (30-Oct)
-- TVM Model Conversion (30-Nov)
-- Enhanced Tiny kernels (30-Nov)
-- 4-bit quantized kernels (30-Dec)
-- AMP (30-Dec)
-- Person detection
-- Image recognition
-- Speech detection
-- Magic wand
-- Currently the Hi3516 and Hi3518 boards are supported
-- Additional hardware support is planned for the future (Q1 2021)
-- NXP IMXULL