Intel Development Board Review

Intel IoT | Source: Intel IoT | 2025-01-24 09:37 | 707 reads

Author: Sui Xiaojin (隋晓金)

I received Intel's dev board, the Nezha, and I happen to have an OAK camera on hand; since both belong to the same OpenVINO ecosystem, I decided to review them together. To my surprise the board ships with Windows by default, and flashing it to Linux makes testing more convenient.


Let's first flash the board to Linux, which makes testing easier. Windows + Python development works too, but I want something more challenging: Ubuntu + C++ inference, and I also want to test ncnn. So, reluctantly, I wiped the system. For the OS, just pick the Intel-adapted Ubuntu 22.04 image and make sure it matches the board's CPU model:


Downloading with Motrix is fairly fast. Then use Rufus to burn the image to a USB drive/SD card and boot the board into Linux, after which you can set up SSH and work on it remotely.

The system needs the official OpenVINO components installed so that we can run OpenVINO model inference on the Intel side. You could also use ncnn/MNN/ONNX Runtime, but the native components are friendlier.


First, configure the OAK environment so the depth camera can run inference and distance measurement, then run detection inference on the dev board itself to exercise its performance. Conveniently, the chip inside the camera is also Intel silicon and uses the OpenVINO framework. The following commands, executed on the dev board, set up the library environment for the camera:

ubuntu@ubuntu:~$ wget https://gitee.com/oakchina/depthai-core/releases/download/v2.28.0/depthai_2.28.0_amd64.deb
ubuntu@ubuntu:~$ sudo apt install -f
ubuntu@ubuntu:~$ sudo dpkg -i depthai_2.28.0_amd64.deb
(Reading database ... 164136 files and directories currently installed.)
Preparing to unpack depthai_2.28.0_amd64.deb ...
Unpacking depthai (2.28.0) over (2.28.0) ...
Setting up depthai (2.28.0) ...
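As a quick sanity check that depthai-core is installed and the OAK camera is visible from the board, a small device-enumeration program can be built against the same library (an illustrative sketch; link it against depthai just like the CMakeLists.txt examples later in this article):

list_devices.cpp

// Minimal check: enumerate the OAK devices that depthai-core can see
#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    auto devices = dai::Device::getAllAvailableDevices();
    std::cout << "Found " << devices.size() << " device(s)" << std::endl;
    for(const auto &info : devices) {
        // print each device's MX ID and XLink connection state
        std::cout << "  " << info.getMxId() << " state: " << static_cast<int>(info.state) << std::endl;
    }
    return 0;
}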

Next, configure OpenVINO following the official documentation. This is mainly for writing code and converting models later on. I'll write the code in C++ to make things a bit harder.

https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2022_3_2&ENVIRONMENT=DEV_TOOLS&OP_SYSTEM=LINUX&DISTRIBUTION=PIP

From the link above, the following operations are still executed on the dev board:

pip install openvino-dev==2022.3.2
storage.openvinotoolkit.org


ubuntu@ubuntu:~$ wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ mv l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64 openvino_2023
ubuntu@ubuntu:~$ sudo mv openvino_2023/ /opt/intel/
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/
ubuntu@ubuntu:/opt/intel/openvino_2023$ vim ~/.bashrc    # append the following line at the end
source /opt/intel/openvino_2023/setupvars.sh
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2023/install_dependencies$ sudo -E ./install_openvino_dependencies.sh
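After sourcing setupvars.sh, a quick way to confirm the runtime works on the board is to list the devices OpenVINO can see (an illustrative sketch against the OpenVINO 2.0 C++ API; point the compiler at the headers and libraries under /opt/intel/openvino_2023/runtime, as in the YOLOv8 CMakeLists.txt later in this article):

check_openvino.cpp

// Print the OpenVINO version and the inference devices available on the board (e.g. CPU, GPU)
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::cout << "OpenVINO: " << ov::get_openvino_version() << std::endl;
    for(const auto &device : core.get_available_devices()) {
        std::cout << "  " << device << " : "
                  << core.get_property(device, ov::device::full_name) << std::endl;
    }
    return 0;
}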

The following operations are executed on my own host machine. The main reason is that the OpenVINO installed on the dev board cannot convert the blob model for the camera, while the older OpenVINO release that can will not install on the board: 2021.4 only supports Ubuntu 20.04 and below, and the board runs 22.04, which is too new.

Let's start with YOLOv5-Lite. The OAK team provides a method and example for this, which I'll summarize briefly. I did this step on my own Ubuntu 20.04 host, because the dev board's system is too new and I was worried that a blob converted in its OpenVINO environment might not run on the OAK camera.

ubuntu@ubuntu:~/Downloads$ axel -n 100 https://registrationcenter-download.intel.com/akdlm/IRC_NAS/18096/l_openvino_toolkit_p_2021.4.689.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_p_2021.4.689.tgz 
ubuntu@ubuntu:~/Downloads$ cd l_openvino_toolkit_p_2021.4.689/
ubuntu@ubuntu:~/Downloads/l_openvino_toolkit_p_2021.4.689$ sudo ./install_GUI.sh 
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2021/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2021/install_dependencies$ sudo -E ./install_openvino_dependencies.sh 
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ sudo vim ~/.bashrc 

Append the following at the end:

source /opt/intel/openvino_2021/bin/setupvars.sh
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ source ~/.bashrc 
[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites$ sudo ./install_prerequisites.sh

Download the model and convert it:

ubuntu@ubuntu:~$ git clone https://github.com/ppogg/YOLOv5-Lite

For the model export code, refer to the official OAK code:


Convert to ONNX and then to an OpenVINO model; export_onnx.py comes from the official OAK reference:

ubuntu@ubuntu:~/YOLOv5-Lite$ pip3 install -r requirements.txt
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 export_onnx.py -w v5lite-e.pt -imgsz 640
Namespace(blob=False, convert_tool='blobconverter', img_size=[640, 640], 
input_model=PosixPath('/home/ubuntu/YOLOv5-Lite/v5lite-e.pt'), name='v5lite-e', 
opset=12, output_dir=PosixPath('/home/ubuntu/YOLOv5-Lite'), shaves=6, 
spatial_detection=False)
[18:12:38] INFO   YOLOv5  v1.5-16-g9d649a6 torch 2.4.1+cu121 CPU      
                                        
Fusing layers... 
[18:12:41] INFO   Model Summary: 167 layers, 781205 parameters, 0 gradients, 
          2.9 GFLOPS                         
 
      INFO   Starting ONNX export with onnx 1.16.1...          
      INFO   Starting to simplify ONNX...                
      INFO   ONNX export success, saved as:               
              /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx       
      INFO   anchors:                          
              [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0,  
          62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 
          326.0]                           
      INFO   anchor_masks:                        
              {'side80': [0, 1, 2], 'side40': [3, 4, 5], 'side20':
          [6, 7, 8]}                         
      INFO   Anchors data export success, saved as:           
              /home/ubuntu/YOLOv5-Lite/v5lite-e.json       
      INFO   Export complete (3.61s).                  
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx
  - Path for generated IR:   /home/ubuntu/YOLOv5-Lite/saved/FP16
  - IR output name:   v5lite-e
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   Not specified, inherited from the model
  - Output layers:   Not specified, inherited from the model
  - Input shapes:   [1,3,640,640]
  - Mean values:   [0,0,0]
  - Scale values:   [255.0,255.0,255.0]
  - Scale factor:   Not specified
  - Precision of IR:   FP16
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   None
  - Reverse input channels:   False
ONNX specific parameters:
  - Inference Engine found in:   /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:   2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:   2021.4.1-3926-14e67d86634-releases/2021/4
[ WARNING ] 
Detected not satisfied dependencies:
  networkx: installed: 3.1, required: ~= 2.5
  numpy: installed: 1.23.5, required: < 1.20
 
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_onnx.sh
Note that install_prerequisites scripts may install additional components.
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/onnx/parameter_ext.py:20: DeprecationWarning: `mapping.TENSOR_TYPE_TO_NP_TYPE` is now deprecated and will be removed in a future release.To silence this warning, please use `helper.tensor_dtype_to_np_dtype` instead.
  'data_type': TENSOR_TYPE_TO_NP_TYPE[t_type.elem_type]
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/analysis/boolean_input.py:13: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  nodes = graph.get_op_nodes(op='Parameter', data_type=np.bool)
/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/front/common/partial_infer/concat.py:36: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  mask = np.zeros_like(shape, dtype=np.bool)
[ WARNING ]  Const node '/model.8/Resize/Add_input_port_1/value338417277' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Const node '/model.12/Resize/Add_input_port_1/value341817280' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Changing Const node '/model.8/Resize/Add_input_port_1/value338418006' data type from float16 to  for Elementwise operation
[ WARNING ] Changing Const node '/model.12/Resize/Add_input_port_1/value341817580' data type from float16 to  for Elementwise operation
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
[ SUCCESS ] Total execution time: 10.69 seconds. 
[ SUCCESS ] Memory consumed: 104 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*
ubuntu@ubuntu:~/YOLOv5-Lite$ 

Converting the model (with the pip-installed Model Optimizer):

ubuntu@ubuntu:~$ find . -name "mo_onnx.py"
./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py
ubuntu@ubuntu:~$ python3 ./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
[ WARNING ] Use of deprecated cli option --data_type detected. Option use in the following releases will be fatal.
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
ubuntu@ubuntu:~$ pip3 install blobconverter
Then convert it to a blob:


[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:~/YOLOv5-Lite$ cd /opt/intel/openvino_2021/deployment_tools/tools
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ sudo chmod 777 compile_tool/
[sudo] password for ubuntu: 
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ cd compile_tool/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ./compile_tool -m /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4
Inference Engine: 
  IE version ......... 2021.4.1
  Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
 
Network inputs:
  images : U8 / NCHW
Network outputs:
  output1_yolov5 : FP16 / NCHW
  output2_yolov5 : FP16 / NCHW
  output3_yolov5 : FP16 / NCHW
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
Done. LoadNetwork time elapsed: 6529 ms
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ls
compile_tool README.md v5lite-e.blob

With the model exported, let's try it on the OAK camera first. In this setup the whole model runs on the camera side, both inference and distance measurement, which at least shows that the dev board works fine with a depth camera like the OAK.


Next, let's modify the code so the model runs on the dev board with OpenVINO inference while distance measurement stays on the camera side. Below is the code, compiled on the board remotely from CLion: plug the OAK depth camera into the Nezha board's USB port, connect the board to a monitor, and then run the camera. The code will be uploaded to GitHub later.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV and depthai libraries
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv )
 

main.cpp

#include <atomic>
#include <chrono>
#include <iomanip>
#include <sstream>
#include <opencv2/opencv.hpp>
// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
 
/*
The code is the same as for Tiny-yolo-V3, the only difference is the blob file.
The blob was compiled following this tutorial: https://github.com/TNTWEN/OpenVINO-YOLOV4
*/
 
 
static const std::vector<std::string> labelMap = {
        "person",        "bicycle",      "car",           "motorbike",     "aeroplane",   "bus",         "train",       "truck",        "boat",
        "traffic light", "fire hydrant", "stop sign",     "parking meter", "bench",       "bird",        "cat",         "dog",          "horse",
        "sheep",         "cow",          "elephant",      "bear",          "zebra",       "giraffe",     "backpack",    "umbrella",     "handbag",
        "tie",           "suitcase",     "frisbee",       "skis",          "snowboard",   "sports ball", "kite",        "baseball bat", "baseball glove",
        "skateboard",    "surfboard",    "tennis racket", "bottle",        "wine glass",  "cup",         "fork",        "knife",        "spoon",
        "bowl",          "banana",       "apple",         "sandwich",      "orange",      "broccoli",    "carrot",      "hot dog",      "pizza",
        "donut",         "cake",         "chair",         "sofa",          "pottedplant", "bed",         "diningtable", "toilet",       "tvmonitor",
        "laptop",        "mouse",        "remote",        "keyboard",      "cell phone",  "microwave",   "oven",        "toaster",      "sink",
        "refrigerator",  "book",         "clock",         "vase",          "scissors",    "teddy bear",  "hair drier",  "toothbrush"};
 
static std::atomic<bool> syncNN{true};
 
 
int main() {
    // Create pipeline
    dai::Pipeline pipeline;
 
    // Define sources
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto spatialDataCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();
 
 
    // Properties
    camRgb->setPreviewSize(640, 640);
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
    camRgb->setPreviewKeepAspectRatio(false); // resize the video to fit the preview size, so detection and display stay aligned
 
    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
 
 
    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
    stereo->setLeftRightCheck(true);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
    stereo->setExtendedDisparity(true);
 
    dai::Point2f topLeft(0.4f, 0.4f);
    dai::Point2f bottomRight(0.6f, 0.6f);
 
    dai::SpatialLocationCalculatorConfigData config;
    config.depthThresholds.lowerThreshold = 100;
    config.depthThresholds.upperThreshold = 10000;
    config.roi = dai::Rect(topLeft, bottomRight);
 
    spatialDataCalculator->initialConfig.addROI(config);
    spatialDataCalculator->inputConfig.setWaitForMessage(false);
 
 
    // Network specific settings
    auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
    detectionNetwork->setBlob("../v5lite-e.blob");
    detectionNetwork->setConfidenceThreshold(0.5);
    //Yolo specific parameters
    detectionNetwork->setNumClasses(80);
    detectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326});
    detectionNetwork->setAnchorMasks({{"side80", {0, 1, 2}}, {"side40", {3, 4, 5}}, {"side20", {6, 7, 8}}});
    detectionNetwork->setIouThreshold(0.5);
 
    // RGB output
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    xoutRgb->setStreamName("rgb");
 
    // depth output
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
    xoutDepth->setStreamName("depth");
 
    // spatial-calculator data output
    auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
    xoutSpatialData->setStreamName("spatialData");
 
    // spatial-calculator config input
    auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
    xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
 
 
    // Linking: preview is the square canvas fed to the NN, video is the full-resolution stream
    camRgb->video.link(xoutRgb->input); // video stream used for display
    camRgb->preview.link(detectionNetwork->input); // preview stream used for inference
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
 
    spatialDataCalculator->passthroughDepth.link(xoutDepth->input);
    stereo->depth.link(spatialDataCalculator->inputDepth);
 
    spatialDataCalculator->out.link(xoutSpatialData->input);
    xinSpatialCalcConfig->out.link(spatialDataCalculator->inputConfig);
 
 
    // output
    auto xlinkParseOut = pipeline.create<dai::node::XLinkOut>();
    xlinkParseOut->setStreamName("parseOut");
 
    auto xlinkoutOut = pipeline.create<dai::node::XLinkOut>();
    xlinkoutOut->setStreamName("out");
 
    auto xlinkPassthroughOut = pipeline.create<dai::node::XLinkOut>();
    xlinkPassthroughOut->setStreamName("passthrough");
 
 
    detectionNetwork->out.link(xlinkParseOut->input);
    detectionNetwork->passthrough.link(xlinkPassthroughOut->input);
 
 
    // Connect to device and start pipeline
    dai::Device device;
 
    device.setIrLaserDotProjectorBrightness(1000);
    device.setIrFloodLightBrightness(0);
    device.startPipeline(pipeline);
 
    // Output queues will be used to get the rgb frames and nn data from the outputs defined above
    auto detectQueue = device.getOutputQueue("parseOut",8,false);
    auto passthQueue = device.getOutputQueue("passthrough", 8, false);
    auto depthQueue = device.getOutputQueue("depth", 8, false);
    auto spatialCalcQueue = device.getOutputQueue("spatialData", 8, false);
    auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig", 8, false);
    auto rgbQueue = device.getOutputQueue("rgb", 8, false);
 
    bool printOutputLayersOnce = true;
    auto color = cv::Scalar(0,255,0);
 
 
    std::vector<dai::ImgDetection> detections;
    auto startTime = std::chrono::steady_clock::now();
    int counter = 0;
    float fps = 0;
    auto color2 = cv::Scalar(255, 255, 255);
    cv::Scalar color1 = cv::Scalar(0, 0, 255);
 
    while (true) {
        counter++;
        auto currentTime = std::chrono::steady_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - startTime);
        if(elapsed > std::chrono::seconds(1)) {
            fps = counter / elapsed.count();
            counter = 0;
            startTime = currentTime;
        }
 
        std::shared_ptr<dai::ImgFrame> inRgb = rgbQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgFrame> inDepth = depthQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgDetections> inDet = detectQueue->get<dai::ImgDetections>();
        std::shared_ptr<dai::ImgFrame> ImgFrame = passthQueue->get<dai::ImgFrame>();
 
        cv::Mat frame = inRgb->getCvFrame();
        cv::Mat src = ImgFrame->getCvFrame();
 
        cv::Mat depthFrameColor;
        cv::Mat depthFrame = inDepth->getFrame();
        cv::normalize(depthFrame, depthFrameColor, 255, 0, cv::NORM_INF, CV_8UC1);
        cv::equalizeHist(depthFrameColor, depthFrameColor);
        cv::applyColorMap(depthFrameColor, depthFrameColor, cv::COLORMAP_HOT);
 
        if(inDet) {
            detections = inDet->detections;
            for(auto& detection : detections) {
                int x1 = detection.xmin * src.cols;
                int y1 = detection.ymin * src.rows;
                int x2 = detection.xmax * src.cols;
                int y2 = detection.ymax * src.rows;
 
                uint32_t labelIndex = detection.label;
                std::string labelStr = std::to_string(labelIndex);
                if(labelIndex < labelMap.size()) {
                    labelStr = labelMap[labelIndex];
                }
                cv::putText(src, labelStr, cv::Point(x1 + 10, y1 + 20), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                std::stringstream confStr;
                confStr << std::fixed << std::setprecision(2) << detection.confidence * 100;
                cv::putText(src, confStr.str(), cv::Point(x1 + 10, y1 + 40), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                cv::rectangle(src, cv::Point(x1, y1), cv::Point(x2, y2), color, cv::FONT_HERSHEY_SIMPLEX);
 
                // 1920*1080
                //cv::rectangle(depthFrameColor, cv::Point(x1, y1), cv::Point(x2, y2)), color, cv::FONT_HERSHEY_SIMPLEX);
                int top_left_x = detection.xmin * frame.cols;
                int top_left_y = detection.ymin * frame.rows;
                int bottom_right_x = detection.xmax * frame.cols;
                int bottom_right_y = detection.ymax * frame.rows;
 
                // clamp to the frame bounds
                top_left_x = top_left_x < 0 ? 0 : top_left_x;
                bottom_right_x = bottom_right_x > frame.cols - 1 ? frame.cols - 1 : bottom_right_x;
                top_left_y = top_left_y < 0 ? 0 : top_left_y;
                bottom_right_y = bottom_right_y > frame.rows - 1 ? frame.rows - 1 : bottom_right_y;
 
                topLeft.x = top_left_x;
                topLeft.y = top_left_y;
                bottomRight.x = bottom_right_x;
                bottomRight.y = bottom_right_y;
 
                // push an ROI in actual pixel coordinates to the spatial location calculator
                config.roi = dai::Rect(topLeft, bottomRight);
                dai::SpatialLocationCalculatorConfig cfg;
                cfg.addROI(config);
                spatialCalcConfigInQueue->send(cfg);
                std::vector<dai::SpatialLocations> spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
 
                for (auto &depthData : spatialData) {
                    auto roi = depthData.config.roi;
                    roi = roi.denormalize(depthFrameColor.cols, depthFrameColor.rows);
                    auto xmin = (int) roi.topLeft().x;
                    auto ymin = (int) roi.topLeft().y;
                    auto xmax = (int) roi.bottomRight().x;
                    auto ymax = (int) roi.bottomRight().y;
 
                    // clamp to the frame bounds (left commented out in the original)
//                    xmin = xmin < 0 ? 0 : xmin;
//                    xmax = xmax > frame.cols - 1 ? frame.cols - 1 : xmax;
//                    ymin = ymin < 0 ? 0 : ymin;
//                    ymax = ymax > frame.rows - 1 ? frame.rows - 1 : ymax;
 
                    auto coords = depthData.spatialCoordinates;
                    auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
                    auto fontType = cv::FONT_HERSHEY_TRIPLEX;
 
                    std::stringstream rgb_depthX, depthX, rgb_depthX_;
                    rgb_depthX << "X: " << (int) coords.x << " mm";
                    rgb_depthX_.precision(2);
                    rgb_depthX_ << "dis: " << std::fixed << static_cast(distance) << " mm";
 
                    cv::rectangle(frame,
                                  cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                                  color,
                                  fontType);
 
                    cv::putText(frame, rgb_depthX_.str(), cv::Point(xmin + 10, ymin - 20),
                                fontType,
                                0.5, color1);
 
                    cv::putText(frame, rgb_depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthY, depthY;
                    rgb_depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(frame, rgb_depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthZ, depthZ;
                    rgb_depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(frame, rgb_depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType,
                                0.5, color1);
 
 
                    cv::rectangle(depthFrameColor,
                            cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                            color,
                            fontType);
                    depthX << "X: " << (int) coords.x << " mm";
                    cv::putText(depthFrameColor, depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType, 0.5, color1);
                    depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(depthFrameColor, depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType, 0.5, color1);
                    depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(depthFrameColor, depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType, 0.5, color1);
                }
            }
 
            std::stringstream fpsStr;
            fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
//            printf("fps %f
",fps);
            cv::putText(src, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
            cv::putText(frame, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
 
            // Show the frame
//            cv::imshow("src", src);
            cv::imshow("frame", frame);
            cv::imwrite("frame.jpg", frame);
//            cv::imshow("depth", depthFrameColor);
            int key = cv::waitKey(1);
            if(key == 'q' || key == 'Q' || key == 27) {
                return 0;
            }
        }
    }
}


Next, drop the camera-side inference code and run inference locally on the Nezha dev board, replacing the whole inference path with OpenVINO:

(1) First, switch to grabbing the camera image through the encoded-stream path while keeping the distance measurement. The board's CPU does pure software H.265 decoding at roughly 30 fps, which looks acceptable; software decoding on this little board's CPU is passable.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV, depthai and FFmpeg libraries
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv -lavformat -lavcodec -lswscale -lavutil -lz)

main.cpp

#include <chrono>
#include <cstdio>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <sstream>
#include <opencv2/opencv.hpp>
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}
 
 
#include "utility.hpp"
 
#include "depthai/depthai.hpp"
 
using namespace std::chrono;
 
int main(int argc, char **argv) {
  dai::Pipeline pipeline;
  // define the camera and the H.265 encoder
  auto cam = pipeline.create<dai::node::ColorCamera>();
  cam->setBoardSocket(dai::CameraBoardSocket::RGB);
  cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
  cam->setVideoSize(1920, 1080);
  cam->setFps(30);
  auto Encoder = pipeline.create<dai::node::VideoEncoder>();
  Encoder->setDefaultProfilePreset(cam->getVideoSize(), cam->getFps(),
                   dai::VideoEncoderProperties::Profile::H265_MAIN);
 
 
  cam->video.link(Encoder->input);
 
  auto monoLeft = pipeline.create<dai::node::MonoCamera>();
  auto monoRight = pipeline.create<dai::node::MonoCamera>();
  auto stereo = pipeline.create<dai::node::StereoDepth>();
  auto spatialLocationCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();
 
  auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
  auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
  auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
  auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
  xoutDepth->setStreamName("depth");
  xoutSpatialData->setStreamName("spatialData");
  xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
  xoutRgb->setStreamName("rgb");
 
  monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
  monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
 
  stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
  stereo->setLeftRightCheck(true);
  stereo->setExtendedDisparity(true);
  spatialLocationCalculator->inputConfig.setWaitForMessage(false);
 
 
  dai::SpatialLocationCalculatorConfigData config;
  config.depthThresholds.lowerThreshold = 200;
  config.depthThresholds.upperThreshold = 10000;
  config.roi = dai::Rect(dai::Point2f(0.1f, 0.45f), dai::Point2f(0.2f, 0.55f));
  spatialLocationCalculator->initialConfig.addROI(config);
 
  // Linking
  monoLeft->out.link(stereo->left);
  monoRight->out.link(stereo->right);
 
  spatialLocationCalculator->passthroughDepth.link(xoutDepth->input);
  stereo->depth.link(spatialLocationCalculator->inputDepth);
 
  spatialLocationCalculator->out.link(xoutSpatialData->input);
  xinSpatialCalcConfig->out.link(spatialLocationCalculator->inputConfig);
 
 
  // define the output for the encoded stream
  auto xlinkoutpreviewOut = pipeline.create<dai::node::XLinkOut>();
  xlinkoutpreviewOut->setStreamName("out");
 
  Encoder->bitstream.link(xlinkoutpreviewOut->input);
 
 
  // build the pipeline and connect to the device
  dai::Device device(pipeline);
  device.setIrLaserDotProjectorBrightness(1000);
 
  // output queues for fetching data
  auto outqueue = device.getOutputQueue("out", cam->getFps(), false); // maxSize: number of buffered messages
  auto depthQueue = device.getOutputQueue("depth", 4, false);
  auto spatialCalcQueue = device.getOutputQueue("spatialData", 4, false);
 
  //auto videoFile = std::ofstream("video.h265", std::ios::binary);
 
 
  int width = 1920;
  int height = 1080;
  AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_H265);
  AVCodecContext *pCodecCtx = avcodec_alloc_context3(pCodec);
  int ret = avcodec_open2(pCodecCtx, pCodec, NULL);
  if (ret < 0) { // failed to open the decoder
        printf("Could not open codec.\n");
        return -1;
    }
    AVFrame *picture = av_frame_alloc();
    picture->width = width;
  picture->height = height;
  picture->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(picture, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error
");
        return -1;
    }
    AVFrame *pFrame = av_frame_alloc();
    pFrame->width = width;
  pFrame->height = height;
  pFrame->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(pFrame, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error
");
        return -1;
    }
    AVFrame *pFrameRGB = av_frame_alloc();
    pFrameRGB->width = width;
  pFrameRGB->height = height;
  pFrameRGB->format = AV_PIX_FMT_RGB24;
  ret = av_frame_get_buffer(pFrameRGB, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error
");
        return -1;
    }
 
 
    int picture_size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, width, height,
                                                1); // bytes needed to store one frame in this format
    uint8_t *out_buff = (uint8_t *) av_malloc(picture_size * sizeof(uint8_t));
    av_image_fill_arrays(picture->data, picture->linesize, out_buff, AV_PIX_FMT_YUV420P, width,
             height, 1);
  // conversion context: decoder output (YUV420P) -> RGB24 for OpenCV
  SwsContext *img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                         width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC,
                         NULL, NULL, NULL);
  AVPacket *packet = av_packet_alloc();
 
  auto startTime = steady_clock::now();
  int counter = 0;
  float fps = 0;
  auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig");
  while (true) {
    counter++;
    auto currentTime = steady_clock::now();
    auto elapsed = duration_cast<duration<float>>(currentTime - startTime);
    if (elapsed > seconds(1)) {
      fps = counter / elapsed.count();
      counter = 0;
      startTime = currentTime;
    }
 
 
 
 
    auto h265Packet = outqueue->get<dai::ImgFrame>();
 
 
    //videoFile.write((char *) (h265Packet->getData().data()), h265Packet->getData().size());
 
    packet->data = (uint8_t *) h265Packet->getData().data();  // pointer to a complete H.265 encoded frame
    packet->size = h265Packet->getData().size();    // size of the encoded frame in bytes
    packet->stream_index = 0;
    ret = avcodec_send_packet(pCodecCtx, packet);
    if (ret < 0) {
            printf("avcodec_send_packet 
");
            continue;
        }
        av_packet_unref(packet);
        int got_picture = avcodec_receive_frame(pCodecCtx, pFrame);
        av_frame_is_writable(pFrame);
        if (got_picture < 0) {
            printf("avcodec_receive_frame 
");
            continue;
        }
 
        sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
         height,
         pFrameRGB->data, pFrameRGB->linesize);
 
 
    cv::Mat mRGB(cv::Size(width, height), CV_8UC3);
    mRGB.data = (unsigned char *) pFrameRGB->data[0];
    cv::Mat mBGR;
    cv::cvtColor(mRGB, mBGR, cv::COLOR_RGB2BGR);
    std::stringstream fpsStr;
    fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
        printf("fps %f
",fps);
        cv::putText(mBGR, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 0.4,
                    cv::Scalar(0, 255, 0));
 
 
        config.roi = dai::Rect(dai::Point2f(3 * 0.1f, 0.45f), dai::Point2f((3 + 1) * 0.1f, 0.55f));
        dai::SpatialLocationCalculatorConfig cfg;
        cfg.addROI(config);
        spatialCalcConfigInQueue->send(cfg);
 
    // auto inDepth = depthQueue->get();
    //cv::Mat depthFrame = inDepth->getFrame(); // depthFrame values are in millimeters
 
 
    auto spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
    for(auto depthData : spatialData) {
      auto roi = depthData.config.roi;
      roi = roi.denormalize(mBGR.cols, mBGR.rows);
 
      auto xmin = static_cast<int>(roi.topLeft().x);
      auto ymin = static_cast<int>(roi.topLeft().y);
      auto xmax = static_cast<int>(roi.bottomRight().x);
      auto ymax = static_cast<int>(roi.bottomRight().y);
 
      auto coords = depthData.spatialCoordinates;
      auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
      auto color = cv::Scalar(0, 200, 40);
      auto fontType = cv::FONT_HERSHEY_TRIPLEX;
      cv::rectangle(mBGR, cv::Point(xmin, ymin), cv::Point(xmax, ymax), color);
      std::stringstream depthDistance;
      depthDistance.precision(2);
      depthDistance << std::fixed << static_cast<float>(distance / 1000.0f) << "m";
            cv::putText(mBGR, depthDistance.str(), cv::Point(xmin + 10, ymin + 20), fontType, 0.5, color);
        }
 
 
 
        cv::imshow("demo", mBGR);
        cv::imwrite("demo.jpg",mBGR);
 
        cv::waitKey(1);
 
 
    }
 
 
    return 0;
}

The whole thing decodes on the Nezha dev board at around 30 fps, which is decent. I won't upload the screenshots; you can benchmark it yourself. The prerequisite is that the FFmpeg libraries are installed.

(2) YOLOv8 model conversion and inference on the dev board. Here you must make sure opset=11; opset 14 does not work. The model conversion can be done directly on the dev board.

ubuntu@ubuntu:~$ pip install ultralytics

Conversion code:

ubuntu@ubuntu:~$ cat convert_yolov8.py
from ultralytics import YOLO
 
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 
# Use the model
# model.train(data="coco8.yaml", epochs=3) # train the model
# metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
path = model.export(format="openvino",opset=11) # export the model to ONNX format
cmkelists.txt


cmake_minimum_required(VERSION 3.12)
project(yolov8_openvino_example)
 
set(CMAKE_CXX_STANDARD 14)
 
find_package(OpenCV REQUIRED)
 
include_directories(
  ${OpenCV_INCLUDE_DIRS}
  /opt/intel/openvino_2023/runtime/include
)
 
add_executable(detect 
  main.cc
  inference.cc
)
 
target_link_libraries(detect
  ${OpenCV_LIBS}
   /opt/intel/openvino_2023/runtime/lib/intel64/libopenvino.so
)

For the test code, the official example works as-is: ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference at main · ultralytics/ultralytics · GitHub
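If you only want to sanity-check the exported IR on the board before building the full example, a single forward pass with the OpenVINO 2.0 C++ API looks roughly like this (an illustrative sketch, not the official example; the model path, the 640x640 input size, and bus.jpg are assumptions based on the export step above):

yolov8_check.cpp

// Minimal sketch: load the exported YOLOv8 IR, run one forward pass on an image, print the output shape
#include <iostream>
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("yolov8n_openvino_model/yolov8n.xml"); // assumed export path
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // prepare a 640x640 RGB float input in NCHW layout, scaled to [0, 1]
    cv::Mat img = cv::imread("bus.jpg"); // assumed test image
    cv::Mat resized, blob;
    cv::resize(img, resized, cv::Size(640, 640));
    cv::cvtColor(resized, resized, cv::COLOR_BGR2RGB);
    resized.convertTo(blob, CV_32F, 1.0 / 255.0);

    ov::Tensor input = request.get_input_tensor();
    float *data = input.data<float>();
    for(int c = 0; c < 3; ++c)          // HWC (OpenCV) -> CHW (model input)
        for(int y = 0; y < 640; ++y)
            for(int x = 0; x < 640; ++x)
                data[c * 640 * 640 + y * 640 + x] = blob.at<cv::Vec3f>(y, x)[c];

    request.infer();
    ov::Tensor output = request.get_output_tensor();
    std::cout << "output shape: " << output.get_shape() << std::endl; // e.g. [1, 84, 8400] for yolov8n
    return 0;
}

The official example adds the part this sketch leaves out: decoding the output tensor into boxes, confidence filtering, and NMS.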


(3) Add OpenVINO inference on the board, plus CPU/FFmpeg decoding on the board, plus streaming; the OAK camera distance-measurement code is not added here.
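To give an idea of how the frames decoded in part (1) could be handed to OpenVINO without stalling the decode loop, here is a small sketch using an asynchronous infer request (illustrative only; it uses cv::VideoCapture as a stand-in for the FFmpeg decode loop, "model.xml" is a hypothetical path to the exported IR, and a 640x640 FP32 NCHW input expecting values in [0, 1] is assumed, so adjust the preprocessing to match how your IR was exported):

async_infer.cpp

// Sketch: overlap frame grabbing/decoding with OpenVINO inference via start_async()/wait()
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>

// Copy a BGR frame into the request's input tensor as 640x640 RGB FP32, NCHW, scaled to [0, 1]
static void fill_input(ov::InferRequest &req, const cv::Mat &bgr) {
    cv::Mat resized, blob;
    cv::resize(bgr, resized, cv::Size(640, 640));
    cv::cvtColor(resized, resized, cv::COLOR_BGR2RGB);
    resized.convertTo(blob, CV_32F, 1.0 / 255.0);
    float *data = req.get_input_tensor().data<float>();
    for(int c = 0; c < 3; ++c)
        for(int y = 0; y < 640; ++y)
            for(int x = 0; x < 640; ++x)
                data[c * 640 * 640 + y * 640 + x] = blob.at<cv::Vec3f>(y, x)[c];
}

int main() {
    ov::Core core;
    ov::CompiledModel compiled = core.compile_model("model.xml", "CPU"); // hypothetical IR path
    ov::InferRequest request = compiled.create_infer_request();

    cv::VideoCapture cap(0); // stand-in for the decoded H.265 frames from part (1)
    cv::Mat frame;
    bool in_flight = false;
    while(cap.read(frame)) {
        if(in_flight) {
            request.wait(); // collect the previous frame's result; draw boxes / push to the stream here
        }
        fill_input(request, frame);
        request.start_async(); // inference overlaps with decoding the next frame
        in_flight = true;
    }
    return 0;
}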


It turns out the model is still fairly heavy, and adding it to the inference side makes things a bit laggy, so I'll leave it out for now and just do CPU encoding/decoding plus streaming. The test setup and GitHub address are given below:


Pull-stream setup command:

github:https://github.com/sxj731533730/OAK_Rtserver.git

References:

[1] How to convert a YOLOv5-Lite model to blob format for the OAK camera (oakchina, CSDN blog)

https://blog.csdn.net/oakchina/article/details/129403986

[2]https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/pose-estimation-webcam


Original title: Developer Practice | Intel Dev Board Trial: A Review with the OAK Depth Camera

Source: WeChat official account 英特尔物联网 (Intel IoT). Please credit the source when reposting.
