CN114612825A - Target detection method based on edge equipment - Google Patents

Target detection method based on edge equipment

Info

Publication number
CN114612825A
CN114612825A
Authority
CN
China
Prior art keywords
deep learning
detection
target detection
model
learning frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210230959.9A
Other languages
Chinese (zh)
Other versions
CN114612825B (en)
Inventor
何臻力
索珈顺
王汝欣
何婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN202210230959.9A priority Critical patent/CN114612825B/en
Publication of CN114612825A publication Critical patent/CN114612825A/en
Application granted granted Critical
Publication of CN114612825B publication Critical patent/CN114612825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection method based on edge equipment. A target detection model is selected according to actual needs, and priorities for detection time, detection power consumption and detection accuracy are set together with an upper limit on detection time. For the edge device on which target detection is to be performed, the deep learning frameworks supporting different model input resolutions are obtained to form a deep learning framework set, and the inference time and average power consumption of each framework are measured. The frameworks are screened based on the detection time upper limit and the priorities of the three performance indices, the target detection model is configured according to the framework obtained by screening, and after training the model is deployed to the edge device for target detection. The invention selects the deep learning framework for the target detection model on the edge device and jointly considers detection time, detection power consumption and detection accuracy, thereby improving the performance of target detection.

Description

Target detection method based on edge equipment
Technical Field
The invention belongs to the technical field of edge computing, and particularly relates to a target detection method based on edge devices.
Background
The scheme belongs to the field of edge computing. Currently, many AI algorithms are deployed on edge devices to reduce latency, save bandwidth, and protect privacy. Object detection algorithms are widely used in industrial production, urban surveillance, pedestrian detection and other fields. A target detection method that works well on low-cost edge devices is therefore of great value.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an edge device-based target detection method, which improves the performance of target detection by optimizing a deep learning framework of a target detection model on an edge device.
In order to achieve the above object, the target detection method based on edge devices of the present invention comprises the following steps:
S1: selecting a target detection model according to actual needs, and then setting priorities for the three detection performance indices of the target detection model, namely detection time, detection power consumption and detection accuracy, as well as an upper limit T on the detection time of the target detection model;
S2: for the edge device on which target detection is to be performed, first determining the deep learning frameworks supported by the edge device and recording their number as N; obtaining the model input resolutions supported by the target detection model when it runs in each deep learning framework on the edge device, recording the number of model input resolutions supported by the n-th framework as M_n, n = 1, 2, …, N, and denoting the n-th framework with the m-th model input resolution as f_{n,m}, m = 1, 2, …, M_n; all the framework-resolution pairs f_{n,m} form the deep learning framework set F; then obtaining, for each f_{n,m}, its inference time t_{n,m} and average power consumption w_{n,m} on the edge device;
S3: for the deep learning framework set F, deleting the frameworks whose inference time t_{n,m} exceeds the detection time upper limit T, to obtain a preliminarily screened framework set F′;
if detection time has the highest priority among the detection performance indices, taking the framework with the smallest inference time in F′ as the selected framework f* and its corresponding model input resolution as the input resolution r* of the target detection model; if more than one framework remains after this screening, continuing to screen according to the priorities of detection accuracy and detection power consumption to obtain the optimal framework;
if detection power consumption has the highest priority, taking the framework with the lowest average power consumption in F′ as f* and its corresponding model input resolution as r*; if more than one framework remains, continuing to screen according to the priorities of detection accuracy and detection time to obtain the optimal framework;
if detection accuracy has the highest priority, taking the framework with the largest model input resolution in F′ as f* and its corresponding model input resolution as r*; if more than one framework remains, continuing to screen according to the priorities of detection time and detection power consumption to obtain the optimal framework;
S4: configuring the target detection model according to the framework f* obtained by screening in step S3 and the input resolution r*, and then collecting training samples to train the target detection model;
S5: deploying the target detection model trained in step S4 to the edge device and performing target detection on the video images captured by the camera.
In the target detection method based on edge devices of the present invention, a target detection model is selected according to actual needs, and priorities for detection time, detection power consumption and detection accuracy are set together with an upper limit on detection time. For the edge device on which target detection is to be performed, the deep learning frameworks supporting different model input resolutions are obtained to form a deep learning framework set, and the inference time and average power consumption of each framework are measured. The frameworks are screened based on the detection time upper limit and the priorities of the three performance indices, the target detection model is configured according to the framework obtained by screening, and after training the model is deployed to the edge device for target detection.
The invention optimizes the deep learning framework of the target detection model on the edge device, and comprehensively considers the detection time, the detection power consumption and the detection precision of the target detection model, thereby improving the performance of target detection.
Drawings
FIG. 1 is a flowchart of an embodiment of a target detection method based on an edge device according to the present invention;
FIG. 2 illustrates the screening of the model input resolutions supported by a deep learning framework.
Detailed Description
The following description of embodiments of the present invention is provided with reference to the accompanying drawings so that those skilled in the art may better understand the invention. It is expressly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the invention.
Examples
FIG. 1 is a flowchart of an embodiment of the target detection method based on edge devices according to the present invention. As shown in FIG. 1, the method comprises the following specific steps:
S101: setting the target detection model:
A target detection model is selected according to actual needs. Priorities are then set for the three detection performance indices of the model, namely detection time, detection power consumption and detection accuracy, together with an upper limit T on the detection time of the model.
S102: determining the deep learning frameworks supported by the edge device:
For the edge device on which target detection is to be performed, first determine the deep learning frameworks supported by the edge device, and record their number as N. Obtain the model input resolutions supported by the target detection model when it runs in each framework on the edge device, recording the number of model input resolutions supported by the n-th framework as M_n, n = 1, 2, …, N, and denoting the n-th framework with the m-th model input resolution as f_{n,m}, m = 1, 2, …, M_n. All the framework-resolution pairs f_{n,m} form the deep learning framework set F. Then obtain, for each f_{n,m}, its inference time t_{n,m} and average power consumption w_{n,m} on the edge device.
The invention aims at deep-learning-based target detection, and before a deep learning project is started, choosing a suitable framework is very important: a well-chosen framework saves considerable effort later. Therefore, before starting target detection on the edge device, the invention first determines the frameworks it supports. Currently popular deep learning frameworks include PaddlePaddle, TensorFlow, Caffe, Theano, MXNet, Torch, PyTorch, and TensorRT in 16-bit and 32-bit precision.
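As an illustration of how the per-pair measurements t_{n,m} and w_{n,m} of step S102 might be collected, the following Python sketch times repeated forward passes and samples a power readout. Here run_inference and read_power_watts are hypothetical placeholders for the framework-specific inference call and the device's power sensor (on a Jetson, for example, on-board power telemetry is available); they are not APIs prescribed by the patent.

```python
import time

def benchmark(run_inference, read_power_watts, warmup=10, runs=100):
    """Estimate inference time t and average power w of one
    framework/resolution pair f_{n,m} on the edge device (step S102)."""
    for _ in range(warmup):                # warm-up passes exclude one-off
        run_inference()                    # JIT/engine-build costs
    times, powers = [], []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()                    # one forward pass
        times.append(time.perf_counter() - start)
        powers.append(read_power_watts())  # sample the power readout
    t = sum(times) / len(times)            # inference time t_{n,m} (s)
    w = sum(powers) / len(powers)          # average power w_{n,m} (W)
    return t, w

# Building the framework set F as (n, m) -> (t, w), assuming make_runner(n, m)
# wraps the model in the n-th framework at its m-th input resolution:
# F = {(n, m): benchmark(make_runner(n, m), read_power_watts)
#      for n in range(N) for m in range(M[n])}
```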
When determining the model input resolutions supported by a deep learning framework, note that the resolution of the image captured by the camera is usually larger than the model input resolution, so the captured image must be scaled. To make the scaled image retain as much effective information as possible, the model input resolutions supported by the framework can be pre-screened according to the resolution of the captured image: the resolutions whose aspect ratio is closest to that of the captured image, i.e., those whose aspect ratio differs from it by less than a preset threshold, are selected as the model input resolutions supported by the framework. FIG. 2 illustrates this screening. As shown in FIG. 2, assume the camera captures images with a 4:3 aspect ratio and the framework supports model input resolutions of 416×416 (1:1), 416×320 (13:10) and 416×256 (13:8). When the model input resolution is 416×416, the scaled camera image is as in FIG. 2(a): the top and bottom are padded with solid-color bars, and the actual effective resolution of the image is 416×312. When the model input resolution is 416×320, the scaled image is as in FIG. 2(b), and the actual effective resolution is likewise 416×312. When the model input resolution is 416×256, the scaled image is as in FIG. 2(c), and the effective resolution drops to 341×256. Assuming the target detection model is the YOLOv3 model, the VisDrone dataset is used to measure the accuracy at the different input resolutions. Table 1 lists the detection accuracy after scaling to each model input resolution in this embodiment.
Model input resolution | AP (%) | AP50 (%) | AP75 (%)
416×416 | 8    | 18.39 | 6.02
416×320 | 8.05 | 18.49 | 5.99
416×256 | 7.65 | 17.59 | 5.63
TABLE 1
As shown in Table 1, the 1:1 resolution 416×416 and the resolution 416×320, whose aspect ratio is closest to 4:3, differ very little in accuracy, while accuracy drops when inference is performed at 416×256, whose aspect ratio is closest to 16:9. At the same time, 416×320 requires less computation than 416×416. Therefore, scaling the camera image to the model input resolution whose aspect ratio is closest to that of the captured image reduces the model's computation, and hence speeds up inference, without loss of accuracy. A minimal sketch of this arithmetic follows.
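The sketch below reproduces the letterbox effective-resolution calculation and the aspect-ratio pre-screening described above. The 0.1 threshold and the 1280×960 camera are assumptions chosen for the example, not values taken from the patent.

```python
def effective_resolution(cam_w, cam_h, in_w, in_h):
    """Effective pixels of a letterboxed camera image inside a model input.
    The image is scaled to fit (in_w, in_h) while keeping its aspect ratio;
    the remainder is padded with solid-color bars (cf. FIG. 2)."""
    scale = min(in_w / cam_w, in_h / cam_h)
    return round(cam_w * scale), round(cam_h * scale)

def prescreen(cam_w, cam_h, resolutions, threshold=0.1):
    """Keep resolutions whose aspect ratio is within `threshold`
    of the camera's aspect ratio (the step S102 pre-screening)."""
    cam_ratio = cam_w / cam_h
    return [(w, h) for (w, h) in resolutions
            if abs(w / h - cam_ratio) < threshold]

# Example from FIG. 2, assuming a hypothetical 1280×960 (4:3) camera:
for w, h in [(416, 416), (416, 320), (416, 256)]:
    print((w, h), "->", effective_resolution(1280, 960, w, h))
# (416, 416) -> (416, 312); (416, 320) -> (416, 312); (416, 256) -> (341, 256)
```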
In this embodiment, the edge device is the GPU-based Jetson Nano, and the deep learning frameworks it supports are TensorFlow, PyTorch, TensorRT 16-bit and TensorRT 32-bit. Table 2 lists the evaluation metrics of the TensorFlow framework on the Jetson Nano in this embodiment.
TABLE 2 (rendered as an image in the original publication; not reproduced here)
Table 3 lists the evaluation metrics of the PyTorch framework on the Jetson Nano in this embodiment.
TABLE 3 (rendered as an image in the original publication; not reproduced here)
Table 4 lists the evaluation metrics of the TensorRT 16-bit framework on the Jetson Nano in this embodiment.
TABLE 4 (rendered as an image in the original publication; not reproduced here)
Table 5 lists the evaluation metrics of the TensorRT 32-bit framework on the Jetson Nano in this embodiment.
TABLE 5 (rendered as an image in the original publication; not reproduced here)
S103: determining the optimization scheme for the target detection model:
In the invention, the target detection model is optimized in two respects, the inference framework and the input image resolution, so that the performance of the model meets expectations as far as possible. The specific method is as follows:
For the deep learning framework set F, delete the frameworks whose inference time t_{n,m} exceeds the detection time upper limit T, obtaining the preliminarily screened framework set F′.
If detection time has the highest priority among the detection performance indices, take the framework with the smallest inference time in F′ as the selected framework f*, and take its corresponding model input resolution as the input resolution r* of the target detection model. If more than one framework remains after this screening, continue screening according to the priorities of detection accuracy and detection power consumption to obtain the optimal framework.
If detection power consumption has the highest priority, take the framework with the lowest average power consumption in F′ as f*, and take its corresponding model input resolution as r*. If more than one framework remains, continue screening according to the priorities of detection accuracy and detection time to obtain the optimal framework.
If detection accuracy has the highest priority, take the framework with the largest model input resolution in F′ as f*, and take its corresponding model input resolution as r*.
This is because, in a neural network model, a larger input resolution generally yields higher detection accuracy. If more than one framework remains, continue screening according to the priorities of detection time and detection power consumption to obtain the optimal framework.
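A minimal sketch of this screening procedure follows, assuming each candidate carries its measured values. Reading "continue screening by the lower-priority indices" as a lexicographic tie-break is one straightforward interpretation, not the only possible one, and the numbers in the usage example are made up for illustration.

```python
def select_framework(candidates, T, priority):
    """Screen framework/resolution candidates as in step S103.

    candidates -- dicts with keys 'name', 'resolution' (w, h),
                  'time' (t_{n,m}) and 'power' (w_{n,m})
    T          -- upper limit on detection time
    priority   -- criteria ordered highest priority first,
                  e.g. ['time', 'accuracy', 'power']
    """
    # Preliminary screening: drop candidates whose inference time exceeds T.
    shortlist = [c for c in candidates if c['time'] <= T]
    if not shortlist:
        raise ValueError("no framework meets the detection time limit T")

    def score(c, criterion):
        if criterion == 'time':
            return c['time']     # smaller is better
        if criterion == 'power':
            return c['power']    # smaller is better
        # 'accuracy' is proxied by input resolution: larger is better,
        # so negate to keep min() semantics.
        return -(c['resolution'][0] * c['resolution'][1])

    # The lexicographic key applies lower-priority criteria only to
    # break ties among candidates equal on the higher-priority ones.
    return min(shortlist, key=lambda c: tuple(score(c, p) for p in priority))

# Hypothetical measurements, for illustration only:
best = select_framework(
    [{'name': 'TensorRT-16bit', 'resolution': (416, 320), 'time': 0.05, 'power': 4.1},
     {'name': 'PyTorch',        'resolution': (416, 416), 'time': 0.21, 'power': 5.3}],
    T=0.10, priority=['time', 'accuracy', 'power'])
print(best['name'])  # -> TensorRT-16bit
```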
S104: optimizing the target detection model:
The target detection model is configured according to the optimization scheme determined in step S103, i.e., with the selected framework f* and input resolution r*; training samples are then collected to train the model.
S105: target detection:
The target detection model trained in step S104 is deployed to the edge device and performs target detection on the video images captured by the camera.
In addition to the Jetson Nano of this embodiment, experiments were also performed on the edge device Jetson Xavier NX; they show that the invention runs well on both types of edge device.
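To make step S105 concrete, a deployment-loop sketch follows. cv2.VideoCapture, cv2.resize and cv2.copyMakeBorder are standard OpenCV calls, while detect is a hypothetical handle to the trained model as exported for the selected framework f*; the patent does not prescribe a specific inference API.

```python
import cv2  # OpenCV: camera capture and preprocessing

def letterbox(frame, in_w, in_h):
    """Scale a frame into the selected input resolution r* = (in_w, in_h),
    padding with solid bars to preserve the aspect ratio (cf. FIG. 2)."""
    h, w = frame.shape[:2]
    scale = min(in_w / w, in_h / h)
    new_w, new_h = round(w * scale), round(h * scale)
    resized = cv2.resize(frame, (new_w, new_h))
    top, left = (in_h - new_h) // 2, (in_w - new_w) // 2
    return cv2.copyMakeBorder(resized, top, in_h - new_h - top,
                              left, in_w - new_w - left,
                              cv2.BORDER_CONSTANT, value=(114, 114, 114))

def run(detect, in_w=416, in_h=320):
    """Continuous detection on camera frames (step S105); `detect`
    wraps the model deployed in the selected framework f*."""
    cap = cv2.VideoCapture(0)          # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break                      # camera disconnected or stream ended
        boxes = detect(letterbox(frame, in_w, in_h))
        # ...use `boxes` downstream (display, alerting, logging, etc.)...
    cap.release()
```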
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and everything that makes use of the inventive concept falls under protection.

Claims (2)

1. An object detection method based on edge equipment, characterized by comprising the following steps:
S1: selecting a target detection model according to actual needs, and then setting priorities for the three detection performance indices of the target detection model, namely detection time, detection power consumption and detection accuracy, as well as an upper limit T on the detection time of the target detection model;
S2: for the edge device on which target detection is to be performed, first determining the deep learning frameworks supported by the edge device and recording their number as N; obtaining the model input resolutions supported by the target detection model when it runs in each deep learning framework on the edge device, recording the number of model input resolutions supported by the n-th framework as M_n, n = 1, 2, …, N, and denoting the n-th framework with the m-th model input resolution as f_{n,m}, m = 1, 2, …, M_n; all the framework-resolution pairs f_{n,m} forming the deep learning framework set F; then obtaining, for each f_{n,m}, its inference time t_{n,m} and average power consumption w_{n,m} on the edge device;
S3: for the deep learning framework set F, deleting the frameworks whose inference time t_{n,m} exceeds the detection time upper limit T, to obtain a preliminarily screened framework set F′;
if detection time has the highest priority among the detection performance indices, taking the framework with the smallest inference time in F′ as the selected framework f* and its corresponding model input resolution as the input resolution r* of the target detection model; if more than one framework remains after this screening, continuing to screen according to the priorities of detection accuracy and detection power consumption to obtain the optimal framework;
if detection power consumption has the highest priority, taking the framework with the lowest average power consumption in F′ as f* and its corresponding model input resolution as r*; if more than one framework remains, continuing to screen according to the priorities of detection accuracy and detection time to obtain the optimal framework;
if detection accuracy has the highest priority, taking the framework with the largest model input resolution in F′ as f* and its corresponding model input resolution as r*; if more than one framework remains, continuing to screen according to the priorities of detection time and detection power consumption to obtain the optimal framework;
S4: configuring the target detection model according to the framework f* obtained by screening in step S3 and the input resolution r*, and then collecting training samples to train the target detection model;
S5: deploying the target detection model trained in step S4 to the edge device and performing target detection on the video images captured by the camera.
2. The object detection method of claim 1, wherein the model input resolutions supported by a deep learning framework in step S1 are the resolutions whose aspect ratio differs from the aspect ratio of the image captured by the camera by less than a preset threshold.
CN202210230959.9A 2022-03-09 2022-03-09 Target detection method based on edge equipment Active CN114612825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210230959.9A CN114612825B (en) 2022-03-09 2022-03-09 Target detection method based on edge equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210230959.9A CN114612825B (en) 2022-03-09 2022-03-09 Target detection method based on edge equipment

Publications (2)

Publication Number Publication Date
CN114612825A true CN114612825A (en) 2022-06-10
CN114612825B CN114612825B (en) 2024-03-19

Family

ID=81860641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210230959.9A Active CN114612825B (en) 2022-03-09 2022-03-09 Target detection method based on edge equipment

Country Status (1)

Country Link
CN (1) CN114612825B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055778A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Video data processing method, electronic device and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257500A (en) * 2020-09-16 2021-01-22 江苏方天电力技术有限公司 Intelligent image recognition system and method for power equipment based on cloud edge cooperation technology
AU2020103494A4 (en) * 2020-11-17 2021-01-28 China University Of Mining And Technology Handheld call detection method based on lightweight target detection network
CN113158794A (en) * 2021-03-16 2021-07-23 西安天和防务技术股份有限公司 Object detection method, edge device, and computer-readable storage medium
CN113159166A (en) * 2021-04-19 2021-07-23 国网山东省电力公司威海供电公司 Embedded image identification detection method, system, medium and equipment based on edge calculation
WO2021189507A1 (en) * 2020-03-24 2021-09-30 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
CN113610024A (en) * 2021-08-13 2021-11-05 天津大学 Multi-strategy deep learning remote sensing image small target detection method
WO2021238826A1 (en) * 2020-05-26 2021-12-02 苏宁易购集团股份有限公司 Method and apparatus for training instance segmentation model, and instance segmentation method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021189507A1 (en) * 2020-03-24 2021-09-30 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
WO2021238826A1 (en) * 2020-05-26 2021-12-02 苏宁易购集团股份有限公司 Method and apparatus for training instance segmentation model, and instance segmentation method
CN112257500A (en) * 2020-09-16 2021-01-22 江苏方天电力技术有限公司 Intelligent image recognition system and method for power equipment based on cloud edge cooperation technology
AU2020103494A4 (en) * 2020-11-17 2021-01-28 China University Of Mining And Technology Handheld call detection method based on lightweight target detection network
CN113158794A (en) * 2021-03-16 2021-07-23 西安天和防务技术股份有限公司 Object detection method, edge device, and computer-readable storage medium
CN113159166A (en) * 2021-04-19 2021-07-23 国网山东省电力公司威海供电公司 Embedded image identification detection method, system, medium and equipment based on edge calculation
CN113610024A (en) * 2021-08-13 2021-11-05 天津大学 Multi-strategy deep learning remote sensing image small target detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈佳林; 和青; 李云波; 潘志松: "基于边缘计算和深度学习的园区目标检测方法" (Campus object detection method based on edge computing and deep learning), 电子技术与软件工程 (Electronic Technology & Software Engineering), no. 16, 15 August 2020 (2020-08-15), pages 152-154 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055778A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Video data processing method, electronic device and readable storage medium
CN116055778B (en) * 2022-05-30 2023-11-21 荣耀终端有限公司 Video data processing method, electronic device and readable storage medium

Also Published As

Publication number Publication date
CN114612825B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
WO2021052025A1 (en) Electric fan vibration fault detecting method and apparatus
WO2021073418A1 (en) Face recognition method and apparatus, device, and storage medium
TWI399703B (en) Forward and backward resizing method
DE102011078662B4 (en) Acquiring and generating images with a high dynamic range
CN107679465A (en) A kind of pedestrian's weight identification data generation and extending method based on generation network
JP2008234654A (en) Object image detection method and image detection device
CN102903085B (en) Based on the fast image splicing method of corners Matching
CN109711401B (en) Text detection method in natural scene image based on Faster Rcnn
CN109492596B (en) Pedestrian detection method and system based on K-means clustering and regional recommendation network
CN101465000B (en) Image processing apparatus and method, and program
JP2006524394A5 (en)
CN102170527A (en) Image processing apparatus
CN114612825A (en) Target detection method based on edge equipment
CN113822830A (en) Multi-exposure image fusion method based on depth perception enhancement
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
CN112560701B (en) Face image extraction method and device and computer storage medium
CN106530361A (en) Color correction method for color face image
WO2023284236A1 (en) Blind image denoising method and apparatus, electronic device, and storage medium
CN112883983B (en) Feature extraction method, device and electronic system
US20170185863A1 (en) System and method for adaptive pixel filtering
WO2019228450A1 (en) Image processing method, device, and equipment, and readable medium
JP5364264B2 (en) Location detection of block defect using neural network
WO2021047492A1 (en) Target tracking method, device, and computer system
CN103067671A (en) Method and device of image display
Wang et al. Distortion recognition for image quality assessment with convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant