9.3. Basic Sample Package Usage
The basic sample package provides three kinds of samples:

- dnn API tutorial samples.
- Special-feature samples such as custom operators (custom OP).
- Miscellaneous samples for models whose input is not NV12.

Developers can try these samples out and use them as a starting point for their own applications, lowering the barrier to development.
9.3.1. Deliverables
The deliverables are as follows:

| Name | Content |
| --- | --- |
| horizon_runtime_sample | Sample source code and run scripts. |
Note

The on-board models must first be obtained by running the resolve_runtime_sample.sh script in the samples/ai_toolchain/model_zoo/runtime/horizon_runtime_sample directory of the OE package.
The sample package is organized as follows:
+---horizon_runtime_sample
├── code # sample source code
│ ├── 00_quick_start # quick-start sample: single-image inference with mobilenetv1
│ │ ├── CMakeLists.txt
│ │ ├── CMakeLists_x86.txt
│ │ └── src
│ ├── 01_api_tutorial # BPU SDK API usage sample code
│ │ ├── CMakeLists.txt
│ │ ├── mem
│ │ ├── model
│ │ ├── roi_infer
│ │ └── tensor
│ ├── 02_advanced_samples # special-feature samples
│ │ ├── CMakeLists.txt
│ │ ├── custom_identity
│ │ ├── multi_input
│ │ ├── multi_model_batch
│ │ └── nv12_batch
│ ├── 03_misc # miscellaneous samples
│ │ ├── CMakeLists.txt
│ │ ├── lenet_gray
│ │ └── resnet_feature
│ ├── build_x5.sh # build script for the board
│ ├── build_x86.sh # build script for the x86 simulation platform
│ ├── CMakeLists.txt
│ ├── CMakeLists_x86.txt
│ └── deps_gcc11.3 # build dependencies
│   ├── aarch64
│   └── x86
├── x5
│ ├── data # preset data files
│ │ ├── cls_images
│ │ ├── det_images
│ │ ├── misc_data
│ │ └── custom_identity_data
│ ├── model
│ │ ├── README.md
│ │ └── runtime -> ../../../model_zoo/runtime/horizon_runtime_sample # symlink to the models in the OE package; on the board, specify the model path yourself
│ ├── script # aarch64 sample run scripts
│ │ ├── 00_quick_start
│ │ ├── 01_api_tutorial
│ │ ├── 02_advanced_samples
│ │ ├── 03_misc
│ │ ├── aarch64 # compiled aarch64 executables and dependent libraries
│ │ └── README.md
│ └── script_x86 # x86 sample run scripts
│   ├── 00_quick_start
│   ├── x86 # compiled x86 executables and dependent libraries
│   └── README.md
└── README.md
- code: sample source code.
- code/00_quick_start: quick-start sample based on the dnn API; runs single-image inference with mobilenetv1 and parses the result.
- code/01_api_tutorial: dnn API tutorial code, covering four parts: mem, model, roi_infer and tensor.
- code/02_advanced_samples: special-feature samples, covering custom_identity, multi_input, multi_model_batch and nv12_batch.
- code/03_misc: miscellaneous samples for models whose input is not NV12.
- code/build_x5.sh: one-step build script for the board.
- code/build_x86.sh: one-step build script for the x86 simulation environment.
- code/deps_gcc11.3: third-party dependencies required by the sample code; replace or trim them as needed when developing your own programs.
- x5: sample run scripts, together with preset data and the corresponding models.
9.3.2. Environment Setup

9.3.2.1. Board Preparation

1. After receiving the board, upgrade the system image to the version recommended by the sample package.
2. Make sure the development machine can connect to the board remotely.
9.3.2.2. Build

The build involves the following steps:

1. Install the cross-compilation toolchain arm-gnu-toolchain-11.3.rel1-x86_64-aarch64-none-linux-gnu in the current environment.
2. Run the build_x5.sh script in the horizon_runtime_sample/code directory to build the executables for the board in one step. The executables and their dependent libraries are automatically copied into the aarch64 directory under x5/script.

For the x86 simulation environment, run the build_x86.sh script in the horizon_runtime_sample/code directory. The executables and their dependent libraries are automatically copied into the x86 directory under x5/script_x86.

Note

build_x5.sh assumes the cross-compilation toolchain is installed under /opt. If you installed it elsewhere, edit build_x5.sh accordingly:
export CC=/opt/arm-gnu-toolchain-11.3.rel1-x86_64-aarch64-none-linux-gnu/bin/aarch64-none-linux-gnu-gcc
export CXX=/opt/arm-gnu-toolchain-11.3.rel1-x86_64-aarch64-none-linux-gnu/bin/aarch64-none-linux-gnu-g++
9.3.3. Using the Samples (basic_samples)

The sample scripts live in the x5/script and x5/script_x86 directories. After building, the directory structure is as follows:
script:
├── 00_quick_start
│ ├── README.md
│ └── run_mobilenetV1.sh
├── 01_api_tutorial
│ ├── model.sh
│ ├── README.md
│ ├── roi_infer.sh
│ ├── sys_mem.sh
│ └── tensor.sh
├── 02_advanced_samples
│ ├── plugin
│ │ └── custom_arm_op_custom_identity.sh
│ ├── README.md
│ ├── run_multi_input.sh
│ ├── run_multi_model_batch.sh
│ └── run_nv12_batch.sh
├── 03_misc
│ ├── README.md
│ ├── run_lenet.sh
│ └── run_resnet50_feature.sh
├── aarch64 # compiled executables and dependent libraries
│ ├── bin
│ │ ├── model_example
│ │ ├── roi_infer
│ │ ├── run_custom_op
│ │ ├── run_lenet_gray
│ │ ├── run_mobileNetV1_224x224
│ │ ├── run_multi_input
│ │ ├── run_multi_model_batch
│ │ ├── run_nv12_batch
│ │ ├── run_resnet_feature
│ │ ├── sys_mem_example
│ │ └── tensor_example
│ └── lib
│ ├── libdnn.so
│ ├── libhbrt_bayes_aarch64.so
│ └── libopencv_world.so.3.4
└── README.md
script_x86:
├── 00_quick_start
│ ├── README.md
│ └── run_mobilenetV1.sh
├── x86 # compiled executables and dependent libraries
│ ├── bin
│ │ └── run_mobileNetV1_224x224
│ └── lib
│ ├── libdnn.so
│ ├── libhbdk_sim_x86.so
│ └── libopencv_world.so.3.4
└── README.md
Note

The model deliverables for the horizon_runtime_sample package must be obtained by running the resolve_runtime_sample.sh script in the samples/ai_toolchain/model_zoo/runtime/horizon_runtime_sample directory of the OE package. The model folder holds the model paths. In the x86 environment the runtime folder is a symlink pointing to ../../../model_zoo/runtime/horizon_runtime_sample, so the samples can be run directly in the OE environment; on the board, place the model deliverables in the model folder.
9.3.3.1. quick_start

The quick-start sample in the 00_quick_start directory:
00_quick_start/
├── README.md
└── run_mobilenetV1.sh
run_mobilenetV1.sh: uses mobilenetv1 to read a single image and run inference. To use it, enter the 00_quick_start directory and run sh run_mobilenetV1.sh, as shown below:

root@x5dvb:/userdata/app/horizon/basic_samples/x5/script/00_quick_start# sh run_mobilenetV1.sh
../aarch64/bin/run_mobileNetV1_224x224 --model_file=../../model/runtime/mobilenetv1/mobilenetv1_224x224_nv12.bin --image_file=../../data/cls_images/zebra_cls.jpg --top_k=5
I0000 00:00:00.000000 10765 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:51:17.206.804) [HorizonRT] The model builder version = 1.15.0
I0411 17:51:17.244180 10765 run_mobileNetV1_224x224.cc:135] DNN runtime version: 1.17.2_(3.15.18 HBRT)
I0411 17:51:17.244376 10765 run_mobileNetV1_224x224.cc:252] input[0] name is data
I0411 17:51:17.244508 10765 run_mobileNetV1_224x224.cc:268] output[0] name is prob
I0411 17:51:17.260176 10765 run_mobileNetV1_224x224.cc:159] read image to tensor as nv12 success
I0411 17:51:17.262075 10765 run_mobileNetV1_224x224.cc:194] TOP 0 result id: 340
I0411 17:51:17.262118 10765 run_mobileNetV1_224x224.cc:194] TOP 1 result id: 292
I0411 17:51:17.262148 10765 run_mobileNetV1_224x224.cc:194] TOP 2 result id: 282
I0411 17:51:17.262177 10765 run_mobileNetV1_224x224.cc:194] TOP 3 result id: 83
I0411 17:51:17.262205 10765 run_mobileNetV1_224x224.cc:194] TOP 4 result id: 290
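The flow this sample implements (load the model, prepare an NV12 input tensor, submit the inference task, parse the top-k results) can be summarized with the dnn C API roughly as below. This is a minimal sketch, assuming the standard hbDNN interfaces from dnn/hb_dnn.h; error handling, the JPEG-to-NV12 conversion and the top-k parsing done in run_mobileNetV1_224x224.cc are omitted.

```cpp
// Minimal single-model, single-image inference flow (sketch; assumes the
// standard hbDNN C API from "dnn/hb_dnn.h"; error checks, NV12 conversion
// and top-k parsing omitted).
#include <cstdint>
#include <cstring>
#include "dnn/hb_dnn.h"

int infer_once(const char *model_file, const uint8_t *nv12, size_t nv12_size) {
  hbPackedDNNHandle_t packed_dnn_handle;
  hbDNNHandle_t dnn_handle;
  const char **model_name_list;
  int32_t model_count = 0;

  // 1. Load the .bin model and get a handle by model name.
  hbDNNInitializeFromFiles(&packed_dnn_handle, &model_file, 1);
  hbDNNGetModelNameList(&model_name_list, &model_count, packed_dnn_handle);
  hbDNNGetModelHandle(&dnn_handle, packed_dnn_handle, model_name_list[0]);

  // 2. Prepare the input tensor: allocate aligned memory and copy the NV12 data in.
  hbDNNTensor input;
  hbDNNGetInputTensorProperties(&input.properties, dnn_handle, 0);
  hbSysAllocCachedMem(&input.sysMem[0], input.properties.alignedByteSize);
  std::memcpy(input.sysMem[0].virAddr, nv12, nv12_size);
  hbSysFlushMem(&input.sysMem[0], HB_SYS_MEM_CACHE_CLEAN);

  // 3. Prepare the output tensor.
  hbDNNTensor output;
  hbDNNGetOutputTensorProperties(&output.properties, dnn_handle, 0);
  hbSysAllocCachedMem(&output.sysMem[0], output.properties.alignedByteSize);

  // 4. Submit the task and wait for it to finish.
  hbDNNTaskHandle_t task_handle = nullptr;
  hbDNNInferCtrlParam infer_ctrl_param;
  HB_DNN_INITIALIZE_INFER_CTRL_PARAM(&infer_ctrl_param);
  hbDNNTensor *output_ptr = &output;
  hbDNNInfer(&task_handle, &output_ptr, &input, dnn_handle, &infer_ctrl_param);
  hbDNNWaitTaskDone(task_handle, 0);

  // 5. Invalidate the cache so the CPU sees the BPU's results, then parse
  //    top-k from output.sysMem[0].virAddr.
  hbSysFlushMem(&output.sysMem[0], HB_SYS_MEM_CACHE_INVALIDATE);

  // 6. Release resources.
  hbDNNReleaseTask(task_handle);
  hbSysFreeMem(&input.sysMem[0]);
  hbSysFreeMem(&output.sysMem[0]);
  hbDNNRelease(packed_dnn_handle);
  return 0;
}
```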
9.3.3.2. api_tutorial

These are the samples in the 01_api_tutorial directory, intended to walk users through the embedded API. The directory contains the following scripts:
├── model.sh
├── roi_infer.sh
├── sys_mem.sh
└── tensor.sh
model.sh: reads and prints model information. To use it, enter the 01_api_tutorial directory and run sh model.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/01_api_tutorial# sh model.sh
../aarch64/bin/model_example --model_file_list=../../model/runtime/mobilenetv1/mobilenetv1_224x224_nv12.bin
I0000 00:00:00.000000 10810 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:53:28.970.396) [HorizonRT] The model builder version = 1.15.0
I0411 17:53:29.007853 10810 model_example.cc:104] model count:1, model[0]: mobilenetv1_224x224_nv12
I0411 17:53:29.007939 10810 model_example.cc:112] hbDNNGetModelHandle [mobilenetv1_224x224_nv12] success!
I0411 17:53:29.008011 10810 model_example.cc:186] [mobilenetv1_224x224_nv12] Model Info: input num: 1, input[0] validShape: ( 1, 3, 224, 224 ), alignedShape: ( 1, 3, 224, 224 ), tensorType: 1, output num: 1, output[0] validShape: ( 1, 1000, 1, 1 ), alignedShape: ( 1, 1000, 1, 1 ), tensorType: 13
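A minimal sketch of this kind of model introspection with the dnn API is shown below; the function name print_model_info is illustrative, and the real implementation lives in code/01_api_tutorial/model.

```cpp
// Query model, input and output properties (sketch; assumes "dnn/hb_dnn.h";
// printing four dimensions assumes 4-D tensors for brevity).
#include <cstdio>
#include "dnn/hb_dnn.h"

void print_model_info(const char *model_file) {
  hbPackedDNNHandle_t packed_handle;
  const char **names;
  int32_t model_count = 0;
  hbDNNInitializeFromFiles(&packed_handle, &model_file, 1);
  hbDNNGetModelNameList(&names, &model_count, packed_handle);

  for (int32_t m = 0; m < model_count; ++m) {
    hbDNNHandle_t handle;
    hbDNNGetModelHandle(&handle, packed_handle, names[m]);

    int32_t input_count = 0, output_count = 0;
    hbDNNGetInputCount(&input_count, handle);
    hbDNNGetOutputCount(&output_count, handle);
    std::printf("model %s: %d input(s), %d output(s)\n", names[m], input_count, output_count);

    // Print each input's type and valid shape.
    for (int32_t i = 0; i < input_count; ++i) {
      hbDNNTensorProperties prop;
      hbDNNGetInputTensorProperties(&prop, handle, i);
      std::printf("  input[%d] tensorType=%d validShape=(%d, %d, %d, %d)\n", i,
                  prop.tensorType,
                  prop.validShape.dimensionSize[0], prop.validShape.dimensionSize[1],
                  prop.validShape.dimensionSize[2], prop.validShape.dimensionSize[3]);
    }
  }
  hbDNNRelease(packed_handle);
}
```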
roi_infer.sh: shows how to use the hbDNNRoiInfer API. The sample code converts an image to NV12 data and runs inference on given ROI boxes (see the ROI sketch after this section). To use it, enter the 01_api_tutorial directory and run sh roi_infer.sh.

sys_mem.sh: shows how to use the hbSysAllocMem, hbSysFlushMem and hbSysFreeMem APIs (see the memory-management sketch after this section). To use it, enter the 01_api_tutorial directory and run sh sys_mem.sh.

tensor.sh: shows how to prepare model input and output tensors, print tensor properties and data layout, and dequantize with the quantizeAxis parameter. To use it, enter the 01_api_tutorial directory and run sh tensor.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/01_api_tutorial# sh tensor.sh
*****************************test_prepare_free_fn*************************************************
Tensor data type:0, Tensor layout: 2, shape:1x1x721x1836, aligned shape:1x1x721x1840
Tensor data type:1, Tensor layout: 2, shape:1x3x773x329, aligned shape:1x3x773x336
Tensor data type:2, Tensor layout: 2, shape:1x3x108x1297, aligned shape:1x3x108x1312
Tensor data type:5, Tensor layout: 2, shape:1x3x858x477, aligned shape:1x3x858x477
Tensor data type:5, Tensor layout: 0, shape:1x920x102x3, aligned shape:1x920x102x3
Tensor data type:4, Tensor layout: 2, shape:1x3x723x1486, aligned shape:1x3x723x1486
Tensor data type:4, Tensor layout: 0, shape:1x372x366x3, aligned shape:1x372x366x3
Tensor data type:3, Tensor layout: 2, shape:1x3x886x291, aligned shape:1x3x886x291
Tensor data type:3, Tensor layout: 0, shape:1x613x507x3, aligned shape:1x613x507x3
*****************************test_prepare_free_fn************************************************
*****************************test_info_fn********************************************************
Tensor data type:14, shape:1x1x1x3x2, stride:24x24x24x8x4, ndim: 5, data: [[[[[0, 1], [2, 3], [4, 5]]]]]
Tensor data type:9, shape:3x3x1x2x1, stride:6x2x2x1x1, ndim: 5, data: [[[[[0], [1]]], [[[2], [3]]], [[[4], [5]]]], [[[[6], [7]]], [[[8], [9]]], [[[10], [11]]]], [[[[12], [13]]], [[[14], [15]]], [[[16], [17]]]]]
*****************************test_info_fn********************************************************
********************test_dequantize_fn***********************************************************
Tensor data type:8, shape:1x1x2x4, ndim: 4, quantiType: 2, quantizeAxis: 1, quantizeValue: (0.1,), data: [[[[0, 1, 2, 3], [4, 5, 6, 7]]]], dequantize data: [[[[0, 0.1, 0.2, 0.3], [0.4, 0.5, 0.6, 0.7]]]]
Tensor data type:8, shape:2x4x1x1, ndim: 4, quantiType: 2, quantizeAxis: 3, quantizeValue: (0.1,), data: [[[[0]], [[1]], [[2]], [[3]]], [[[4]], [[5]], [[6]], [[7]]]], dequantize data: [[[[0]], [[0.1]], [[0.2]], [[0.3]]], [[[0.4]], [[0.5]], [[0.6]], [[0.7]]]]
********************test_dequantize_fn***********************************************************
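For roi_infer.sh, the following is a minimal sketch of submitting an ROI inference task, assuming the hbDNNRoiInfer signature and the hbDNNRoi box structure described in the BPU SDK API manual; preparing the NV12 input tensor and the per-ROI output tensors is omitted, and the ROI coordinates are made up for illustration.

```cpp
// Run ROI inference on an NV12 input (sketch; assumes the hbDNNRoiInfer
// signature from "dnn/hb_dnn.h"; input/output tensor preparation omitted).
#include <vector>
#include "dnn/hb_dnn.h"

void roi_infer_demo(hbDNNHandle_t handle, hbDNNTensor *input, hbDNNTensor *outputs) {
  // Two example ROI boxes inside the input image (left, top, right, bottom).
  std::vector<hbDNNRoi> rois = {{0, 0, 223, 223}, {100, 100, 323, 323}};

  hbDNNTaskHandle_t task = nullptr;
  hbDNNInferCtrlParam param;
  HB_DNN_INITIALIZE_INFER_CTRL_PARAM(&param);

  // outputs must provide one set of output tensors per ROI.
  hbDNNRoiInfer(&task, &outputs, input, rois.data(),
                static_cast<int32_t>(rois.size()), handle, &param);
  hbDNNWaitTaskDone(task, 0);
  hbDNNReleaseTask(task);
}
```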
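For sys_mem.sh, the sketch below illustrates the memory APIs: allocate a cacheable block, write to it from the CPU, flush the cache, and free it. It assumes the hb_sys interfaces from dnn/hb_sys.h; hbSysAllocMem allocates a non-cacheable block for which the flush calls are unnecessary, as the "memory is noncachable, ignore flush operation" warnings elsewhere in this chapter indicate.

```cpp
// Cached BPU memory: allocate, write, flush, free (sketch; error checks omitted).
#include <cstring>
#include "dnn/hb_sys.h"

void sys_mem_demo() {
  hbSysMem mem;

  // Allocate 1 MiB of cacheable memory visible to both CPU and BPU.
  hbSysAllocCachedMem(&mem, 1024 * 1024);

  // Write from the CPU side through the virtual address.
  std::memset(mem.virAddr, 0, mem.memSize);

  // Clean the CPU cache so the BPU sees the data just written...
  hbSysFlushMem(&mem, HB_SYS_MEM_CACHE_CLEAN);

  // ...and after the BPU writes results, invalidate before the CPU reads them.
  hbSysFlushMem(&mem, HB_SYS_MEM_CACHE_INVALIDATE);

  // Release the block when done.
  hbSysFreeMem(&mem);
}
```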
9.3.3.3. advanced_samples

These are the samples in the 02_advanced_samples directory, which introduce special features such as custom operators. The directory contains the following scripts:
├── plugin
│ └── custom_arm_op_custom_identity.sh
├── README.md
├── run_multi_input.sh
├── run_multi_model_batch.sh
└── run_nv12_batch.sh
custom_arm_op_custom_identity.sh: runs inference on a model containing a custom operator. To use it, enter the 02_advanced_samples directory and run sh custom_arm_op_custom_identity.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/02_advanced_samples# sh custom_arm_op_custom_identity.sh
../../aarch64/bin/run_custom_op --model_file=../../../model/runtime/custom_op/custom_op_featuremap.bin --input_file=../../../data/custom_identity_data/input0.bin,../../../data/custom_identity_data/input1.bin
I0000 00:00:00.000000 10841 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
I0411 17:55:59.928918 10841 main.cpp:212] hbDNNRegisterLayerCreator success
I0411 17:55:59.929064 10841 main.cpp:217] hbDNNRegisterLayerCreator success
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:56:00.667.991) [HorizonRT] The model builder version = 1.15.0
I0411 17:56:00.676071 10841 main.cpp:232] hbDNNGetModelNameList success
I0411 17:56:00.676204 10841 main.cpp:239] hbDNNGetModelHandle success
I0411 17:56:00.676276 10841 main.cpp:245] hbDNNGetInputCount success
file length: 602112
file length: 602112
I0411 17:56:00.687402 10841 main.cpp:268] hbDNNGetOutputCount success
I0411 17:56:00.687788 10841 main.cpp:297] hbDNNInfer success
I0411 17:56:00.695663 10841 main.cpp:302] task done
I0411 17:56:03.145243 10841 main.cpp:306] write output tensor

The first output of the model is saved to the output0.txt file.

run_multi_input.sh: runs inference on a multi-input model. To use it, enter the 02_advanced_samples directory and run sh run_multi_input.sh, as shown below:

root@x5dvb:/userdata/horizon/x5/script/02_advanced_samples# sh run_multi_input.sh
../aarch64/bin/run_multi_input --model_file=../../model/runtime/mobilenetv2/mobilenetv2_multi_224x224_gray.bin --image_file=../../data/cls_images/zebra_cls.jpg --top_k=5
I0000 00:00:00.000000 10893 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:57:03.277.375) [HorizonRT] The model builder version = 1.15.0
I0411 17:57:03.327527 10893 multi_input.cc:148] read image to tensor as bgr success
I0411 17:57:03.329546 10893 multi_input.cc:183] TOP 0 result id: 340
I0411 17:57:03.329598 10893 multi_input.cc:183] TOP 1 result id: 292
I0411 17:57:03.329628 10893 multi_input.cc:183] TOP 2 result id: 352
I0411 17:57:03.329656 10893 multi_input.cc:183] TOP 3 result id: 351
I0411 17:57:03.329684 10893 multi_input.cc:183] TOP 4 result id: 282
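For a multi-input model such as the one used here, the main difference from the single-input case is that hbDNNInfer receives an array with one hbDNNTensor per model input. A minimal sketch under that assumption, leaving the actual data filling out:

```cpp
// Run a model with several inputs: pass an array of hbDNNTensor, one per input
// (sketch; assumes "dnn/hb_dnn.h"; filling the input data is omitted).
#include <vector>
#include "dnn/hb_dnn.h"

void infer_multi_input(hbDNNHandle_t handle, hbDNNTensor *outputs) {
  int32_t input_count = 0;
  hbDNNGetInputCount(&input_count, handle);

  // Allocate one tensor per model input, in model input order.
  std::vector<hbDNNTensor> inputs(input_count);
  for (int32_t i = 0; i < input_count; ++i) {
    hbDNNGetInputTensorProperties(&inputs[i].properties, handle, i);
    hbSysAllocCachedMem(&inputs[i].sysMem[0], inputs[i].properties.alignedByteSize);
    // ... fill inputs[i].sysMem[0].virAddr with the i-th input's data ...
    hbSysFlushMem(&inputs[i].sysMem[0], HB_SYS_MEM_CACHE_CLEAN);
  }

  hbDNNTaskHandle_t task = nullptr;
  hbDNNInferCtrlParam param;
  HB_DNN_INITIALIZE_INFER_CTRL_PARAM(&param);
  // inputs.data() points to input_count tensors.
  hbDNNInfer(&task, &outputs, inputs.data(), handle, &param);
  hbDNNWaitTaskDone(task, 0);
  hbDNNReleaseTask(task);

  for (auto &t : inputs) hbSysFreeMem(&t.sysMem[0]);
}
```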
run_multi_model_batch.sh: runs several small models as a batch. To use it, enter the 02_advanced_samples directory and run sh run_multi_model_batch.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/02_advanced_samples# sh run_multi_model_batch.sh
../aarch64/bin/run_multi_model_batch --model_file=../../model/runtime/googlenet/googlenet_224x224_nv12.bin,../../model/runtime/mobilenetv2/mobilenetv2_224x224_nv12.bin --input_file=../../data/cls_images/zebra_cls.jpg,../../data/cls_images/zebra_cls.jpg
I0000 00:00:00.000000 10916 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:57:43.547.52) [HorizonRT] The model builder version = 1.15.0
[A][DNN][packed_model.cpp:225][Model](2023-04-11,17:57:51.811.477) [HorizonRT] The model builder version = 1.15.0
I0411 17:57:51.844280 10916 main.cpp:117] hbDNNInitializeFromFiles success
I0411 17:57:51.844388 10916 main.cpp:125] hbDNNGetModelNameList success
I0411 17:57:51.844424 10916 main.cpp:139] hbDNNGetModelHandle success
I0411 17:57:51.875140 10916 main.cpp:153] read image to nv12 success
I0411 17:57:51.875686 10916 main.cpp:170] prepare input tensor success
I0411 17:57:51.875875 10916 main.cpp:182] prepare output tensor success
I0411 17:57:51.876082 10916 main.cpp:216] infer success
I0411 17:57:51.878844 10916 main.cpp:221] task done
I0411 17:57:51.878948 10916 main.cpp:226] googlenet class result id: 340
I0411 17:57:51.879084 10916 main.cpp:230] mobilenetv2 class result id: 340
I0411 17:57:51.879177 10916 main.cpp:234] release task success
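The pattern behind this sample is that several model files can be loaded into a single packed handle and their inference tasks submitted one after another. A minimal sketch under that assumption is shown below; prepare_tensors is a hypothetical helper, not part of the dnn API.

```cpp
// Load two models from separate .bin files into one packed handle and run
// them back to back (sketch; assumes "dnn/hb_dnn.h").
#include "dnn/hb_dnn.h"

// Hypothetical helper: allocates and fills input/output tensors for a model.
void prepare_tensors(hbDNNTensor *input, hbDNNTensor *output, hbDNNHandle_t handle);

void run_two_models(const char *model_a_bin, const char *model_b_bin) {
  const char *files[2] = {model_a_bin, model_b_bin};
  hbPackedDNNHandle_t packed_handle;
  const char **names;
  int32_t count = 0;

  // One packed handle can hold several models; each is addressed by name.
  hbDNNInitializeFromFiles(&packed_handle, files, 2);
  hbDNNGetModelNameList(&names, &count, packed_handle);

  for (int32_t i = 0; i < count; ++i) {
    hbDNNHandle_t handle;
    hbDNNGetModelHandle(&handle, packed_handle, names[i]);

    hbDNNTensor input, output;
    prepare_tensors(&input, &output, handle);  // hypothetical helper

    hbDNNTaskHandle_t task = nullptr;
    hbDNNInferCtrlParam param;
    HB_DNN_INITIALIZE_INFER_CTRL_PARAM(&param);
    hbDNNTensor *out_ptr = &output;
    hbDNNInfer(&task, &out_ptr, &input, handle, &param);
    hbDNNWaitTaskDone(task, 0);
    hbDNNReleaseTask(task);
    // ... parse out_ptr->sysMem[0].virAddr to get the class id for names[i] ...
  }
  hbDNNRelease(packed_handle);
}
```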
run_nv12_batch.sh: runs inference on a batch model. Infer1 sets the input tensor address for each batch separately, while Infer2 sets a single address that holds all batches. To use it, enter the 02_advanced_samples directory and run sh run_nv12_batch.sh, as shown below:

root@x5dvb:/userdata/horizon/x5/script/02_advanced_samples# sh run_nv12_batch.sh
../aarch64/bin/run_nv12_batch --model_file=../../model/runtime/googlenet/googlenet_4x224x224_nv12.bin --image_file=../../data/cls_images/zebra_cls.jpg,../../data/cls_images/cat_cls.jpg,../../data/cls_images/zebra_cls.jpg,../../data/cls_images/cat_cls.jpg --top_k=5
I0000 00:00:00.000000 21511 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
I0705 11:39:43.429180 21511 nv12_batch.cc:151] Infer1 start
I0705 11:39:43.488143 21511 nv12_batch.cc:166] read image to tensor as nv12 success
I0705 11:39:43.491156 21511 nv12_batch.cc:201] Batch[0]:
I0705 11:39:43.491211 21511 nv12_batch.cc:203] TOP 0 result id: 340
I0705 11:39:43.491240 21511 nv12_batch.cc:203] TOP 1 result id: 83
I0705 11:39:43.491266 21511 nv12_batch.cc:203] TOP 2 result id: 41
I0705 11:39:43.491298 21511 nv12_batch.cc:203] TOP 3 result id: 912
I0705 11:39:43.491324 21511 nv12_batch.cc:203] TOP 4 result id: 292
I0705 11:39:43.491348 21511 nv12_batch.cc:201] Batch[1]:
I0705 11:39:43.491374 21511 nv12_batch.cc:203] TOP 0 result id: 282
I0705 11:39:43.491398 21511 nv12_batch.cc:203] TOP 1 result id: 281
I0705 11:39:43.491422 21511 nv12_batch.cc:203] TOP 2 result id: 285
I0705 11:39:43.491447 21511 nv12_batch.cc:203] TOP 3 result id: 287
I0705 11:39:43.491472 21511 nv12_batch.cc:203] TOP 4 result id: 283
I0705 11:39:43.491497 21511 nv12_batch.cc:201] Batch[2]:
I0705 11:39:43.491514 21511 nv12_batch.cc:203] TOP 0 result id: 340
I0705 11:39:43.491539 21511 nv12_batch.cc:203] TOP 1 result id: 83
I0705 11:39:43.491564 21511 nv12_batch.cc:203] TOP 2 result id: 41
I0705 11:39:43.491587 21511 nv12_batch.cc:203] TOP 3 result id: 912
I0705 11:39:43.491612 21511 nv12_batch.cc:203] TOP 4 result id: 292
I0705 11:39:43.491637 21511 nv12_batch.cc:201] Batch[3]:
I0705 11:39:43.491662 21511 nv12_batch.cc:203] TOP 0 result id: 282
I0705 11:39:43.491685 21511 nv12_batch.cc:203] TOP 1 result id: 281
I0705 11:39:43.491710 21511 nv12_batch.cc:203] TOP 2 result id: 285
I0705 11:39:43.491734 21511 nv12_batch.cc:203] TOP 3 result id: 287
I0705 11:39:43.491760 21511 nv12_batch.cc:203] TOP 4 result id: 283
I0705 11:39:43.492235 21511 nv12_batch.cc:223] Infer1 end
I0705 11:39:43.492276 21511 nv12_batch.cc:228] Infer2 start
I0705 11:39:43.549713 21511 nv12_batch.cc:243] read image to tensor as nv12 success
I0705 11:39:43.552248 21511 nv12_batch.cc:278] Batch[0]:
I0705 11:39:43.552292 21511 nv12_batch.cc:280] TOP 0 result id: 340
I0705 11:39:43.552320 21511 nv12_batch.cc:280] TOP 1 result id: 83
I0705 11:39:43.552345 21511 nv12_batch.cc:280] TOP 2 result id: 41
I0705 11:39:43.552371 21511 nv12_batch.cc:280] TOP 3 result id: 912
I0705 11:39:43.552397 21511 nv12_batch.cc:280] TOP 4 result id: 292
I0705 11:39:43.552421 21511 nv12_batch.cc:278] Batch[1]:
I0705 11:39:43.552445 21511 nv12_batch.cc:280] TOP 0 result id: 282
I0705 11:39:43.552469 21511 nv12_batch.cc:280] TOP 1 result id: 281
I0705 11:39:43.552495 21511 nv12_batch.cc:280] TOP 2 result id: 285
I0705 11:39:43.552520 21511 nv12_batch.cc:280] TOP 3 result id: 287
I0705 11:39:43.552567 21511 nv12_batch.cc:280] TOP 4 result id: 283
I0705 11:39:43.552592 21511 nv12_batch.cc:278] Batch[2]:
I0705 11:39:43.552616 21511 nv12_batch.cc:280] TOP 0 result id: 340
I0705 11:39:43.552641 21511 nv12_batch.cc:280] TOP 1 result id: 83
I0705 11:39:43.552665 21511 nv12_batch.cc:280] TOP 2 result id: 41
I0705 11:39:43.552690 21511 nv12_batch.cc:280] TOP 3 result id: 912
I0705 11:39:43.552716 21511 nv12_batch.cc:280] TOP 4 result id: 292
I0705 11:39:43.552739 21511 nv12_batch.cc:278] Batch[3]:
I0705 11:39:43.552763 21511 nv12_batch.cc:280] TOP 0 result id: 282
I0705 11:39:43.552788 21511 nv12_batch.cc:280] TOP 1 result id: 281
I0705 11:39:43.552812 21511 nv12_batch.cc:280] TOP 2 result id: 285
I0705 11:39:43.552837 21511 nv12_batch.cc:280] TOP 3 result id: 287
I0705 11:39:43.552861 21511 nv12_batch.cc:280] TOP 4 result id: 283
I0705 11:39:43.553154 21511 nv12_batch.cc:300] Infer2 end
9.3.3.4. misc

These are the samples in the 03_misc directory, which show how to use models whose input is not NV12. The directory contains the following scripts:
├── run_lenet.sh
└── run_resnet50_feature.sh
run_lenet.sh: runs inference on a lenet model that takes Y-plane (gray) data as input. To use it, enter the 03_misc directory and run sh run_lenet.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/03_misc# sh run_lenet.sh
../aarch64/bin/run_lenet_gray --model_file=../../model/runtime/lenet_gray/lenet_28x28_gray.bin --data_file=../../data/misc_data/7.bin --image_height=28 --image_width=28 --top_k=5
I0000 00:00:00.000000 10979 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,18:02:12.605.436) [HorizonRT] The model builder version = 1.15.0
I0411 18:02:12.613317 10979 run_lenet_gray.cc:128] hbDNNInitializeFromFiles success
I0411 18:02:12.613404 10979 run_lenet_gray.cc:136] hbDNNGetModelNameList success
I0411 18:02:12.613440 10979 run_lenet_gray.cc:143] hbDNNGetModelHandle success
I0411 18:02:12.614181 10979 run_lenet_gray.cc:159] prepare y tensor success
I0411 18:02:12.614310 10979 run_lenet_gray.cc:172] prepare tensor success
I0411 18:02:12.614503 10979 run_lenet_gray.cc:182] infer success
I0411 18:02:12.615538 10979 run_lenet_gray.cc:187] task done
[W][DNN][hb_sys.cpp:108][Mem](2023-04-11,18:02:12.615.583) memory is noncachable, ignore flush operation
I0411 18:02:12.615624 10979 run_lenet_gray.cc:192] task post process finished
I0411 18:02:12.615667 10979 run_lenet_gray.cc:198] TOP 0 result id: 7
I0411 18:02:12.615698 10979 run_lenet_gray.cc:198] TOP 1 result id: 9
I0411 18:02:12.615727 10979 run_lenet_gray.cc:198] TOP 2 result id: 3
I0411 18:02:12.615754 10979 run_lenet_gray.cc:198] TOP 3 result id: 4
I0411 18:02:12.615782 10979 run_lenet_gray.cc:198] TOP 4 result id: 2
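Preparing a Y-only (gray) input essentially means copying height × width bytes into the tensor's aligned buffer row by row. A minimal sketch follows; reading the aligned width from the last dimension of alignedShape is an assumption for illustration, not taken from the sample.

```cpp
// Fill a Y-only (gray) input tensor from a height x width byte buffer
// (sketch; assumes "dnn/hb_dnn.h"; the alignedShape index used below assumes
// an NCHW-style 4-D shape and is an illustrative assumption).
#include <cstdint>
#include <cstring>
#include "dnn/hb_dnn.h"

void prepare_y_tensor(hbDNNHandle_t handle, const uint8_t *y_data,
                      int height, int width, hbDNNTensor *input) {
  hbDNNGetInputTensorProperties(&input->properties, handle, 0);
  hbSysAllocCachedMem(&input->sysMem[0], input->properties.alignedByteSize);

  // Copy row by row in case the aligned width is larger than the valid width.
  int aligned_width = input->properties.alignedShape.dimensionSize[3];
  auto *dst = static_cast<uint8_t *>(input->sysMem[0].virAddr);
  for (int h = 0; h < height; ++h) {
    std::memcpy(dst + h * aligned_width, y_data + h * width, width);
  }

  // Make the data visible to the BPU.
  hbSysFlushMem(&input->sysMem[0], HB_SYS_MEM_CACHE_CLEAN);
}
```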
run_resnet50_feature.sh: runs inference on a resnet50 model that takes feature data as input. The sample code quantizes and pads the feature data to meet the model's input requirements before feeding it to the model. To use it, enter the 03_misc directory and run sh run_resnet50_feature.sh, as shown below:

root@x5dvb-hynix8G:/userdata/horizon/x5/script/03_misc# sh run_resnet50_feature.sh
../aarch64/bin/run_resnet_feature --model_file=./resnet50_64x56x56_featuremap_modified.bin --data_file=../../data/misc_data/np_0 --top_k=5
I0000 00:00:00.000000 11024 vlog_is_on.cc:197] RAW: Set VLOG level for "*" to 3
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.18.0
[DNN] Runtime version = 1.17.2_(3.15.18 HBRT)
[A][DNN][packed_model.cpp:225][Model](2023-04-11,18:03:30.317.594) [HorizonRT] The model builder version = 1.15.1
I0411 18:03:30.523054 11024 run_resnet_feature.cc:160] hbDNNInitializeFromFiles success
I0411 18:03:30.523152 11024 run_resnet_feature.cc:168] hbDNNGetModelNameList success
I0411 18:03:30.523188 11024 run_resnet_feature.cc:175] hbDNNGetModelHandle success
I0411 18:03:30.529860 11024 run_resnet_feature.cc:346] input data size: 802816; input valid size: 200704; input aligned size: 229376
I0411 18:03:30.536860 11024 run_resnet_feature.cc:357] tensor padding featuremap success
I0411 18:03:30.536912 11024 run_resnet_feature.cc:190] prepare feature tensor success
I0411 18:03:30.537052 11024 run_resnet_feature.cc:200] prepare tensor success
I0411 18:03:30.537197 11024 run_resnet_feature.cc:210] infer success
I0411 18:03:30.541096 11024 run_resnet_feature.cc:215] task done
[W][DNN][hb_sys.cpp:108][Mem](2023-04-11,18:03:30.541.149) memory is noncachable, ignore flush operation
I0411 18:03:30.541409 11024 run_resnet_feature.cc:220] task post process finished
I0411 18:03:30.541453 11024 run_resnet_feature.cc:226] TOP 0 result id: 74
I0411 18:03:30.541483 11024 run_resnet_feature.cc:226] TOP 1 result id: 815
I0411 18:03:30.541512 11024 run_resnet_feature.cc:226] TOP 2 result id: 73
I0411 18:03:30.541538 11024 run_resnet_feature.cc:226] TOP 3 result id: 78
I0411 18:03:30.541565 11024 run_resnet_feature.cc:226] TOP 4 result id: 72
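The quantize step this sample performs is, conceptually, mapping each float feature value to int8 with the model's scale and clamping to the int8 range (the inverse of the dequantize = value × scale relation shown in the tensor.sh output above). A minimal sketch with an illustrative per-tensor scale; the real sample reads the scale from the model's input tensor properties:

```cpp
// Quantize a float feature map to int8 with a single scale (illustrative
// sketch; q = clamp(round(x / scale), -128, 127)).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<int8_t> quantize_feature(const std::vector<float> &feature, float scale) {
  std::vector<int8_t> quantized(feature.size());
  for (size_t i = 0; i < feature.size(); ++i) {
    float q = std::round(feature[i] / scale);
    quantized[i] = static_cast<int8_t>(std::max(-128.0f, std::min(127.0f, q)));
  }
  return quantized;
}
```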
9.3.4. Auxiliary Tools (Logging)

Logging consists of two parts: the sample logs and the dnn logs. The sample logs are produced by the sample code in the delivered package; the dnn logs are produced by the embedded dnn library. Each can be configured separately according to your needs.

9.3.4.1. Sample Logs

The sample logs use vlog from glog; in the samples covered by basic_samples, all log content is output.

9.3.4.2. dnn Logs

For how to configure the dnn logs, see the "Configuration Information" section of the BPU SDK API manual chapter.