CN112966653B - Line inspection model training method, line inspection method and line inspection system - Google Patents


Info

Publication number
CN112966653B
CN112966653B (application CN202110333069.6A)
Authority
CN
China
Prior art keywords
training image
line inspection
training
value
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110333069.6A
Other languages
Chinese (zh)
Other versions
CN112966653A (en)
Inventor
陈鹏
杨若鹄
邝嘉隆
黄德斌
古蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202110333069.6A
Publication of CN112966653A
Application granted
Publication of CN112966653B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of mobile device line inspection, and provides a training method for a line inspection model, a line inspection method and a line inspection system. The training method includes: inputting training images from a preset training image set into the line inspection model to obtain a prediction result, where each training image in the set is obtained by photographing a runway and has corresponding annotation information, the annotation information includes an annotation steering value and an annotation speed value, and the prediction result includes a predicted steering value and a predicted speed value; calculating a steering error value and a speed error value according to the annotation information and the prediction result corresponding to the training image; and adjusting parameters in the line inspection model according to the steering error value and the speed error value until a convergence condition is met, thereby obtaining the target line inspection model. With this method, the mobile device can move at different speeds according to its position on the runway during line inspection, which accelerates the line inspection speed of the mobile device and improves its line inspection efficiency.

Description

Line inspection model training method, line inspection method and line inspection system
Technical Field
The application belongs to the technical field of line inspection of mobile equipment, and particularly relates to a training method of a line inspection model, a line inspection method, a training device of the line inspection model, a line inspection device and a line inspection system.
Background
In the field of mobile devices (e.g., educational carts and mobile robots), line inspection is a classical application scenario. Traditional line inspection techniques collect runway information at a fixed position, calculate the current runway position of the mobile device from that information, and then calculate a steering coefficient.
However, traditional solutions require the mobile device to use a single speed throughout the inspection. When the runway contains complex curves, that single speed must be set low enough to negotiate them, which makes the mobile device very slow.
Disclosure of Invention
In view of the above, the present application provides a line inspection model training method, a line inspection method, a line inspection model training device, a line inspection device and a line inspection system, which enable a mobile device to move at different speeds according to its current runway position during line inspection, thereby accelerating the line inspection speed of the mobile device and improving its line inspection efficiency.
In a first aspect, the present application provides a training method of a line inspection model, including:
inputting training images in a preset training image set into the line inspection model to obtain a prediction result output by the line inspection model, wherein each training image in the training image set is obtained by shooting a runway, and each training image has corresponding annotation information, the annotation information comprises an annotation steering value and an annotation speed value, and the prediction result comprises a prediction steering value and a prediction speed value;
calculating a steering error value and a speed error value according to the annotation information corresponding to the training image and the prediction result;
and adjusting parameters in the line inspection model according to the steering error value and the speed error value until the line inspection model meets convergence conditions, so as to obtain a target line inspection model.
In a second aspect, the present application provides a line inspection method, including:
acquiring runway images obtained by shooting the runway in the process of moving the mobile equipment on the runway;
inputting the runway image into a target line inspection model to obtain a prediction result output by the target line inspection model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
controlling the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line inspection model is obtained through training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
In a third aspect, the present application provides a training device for a line inspection model, including:
the model prediction unit is used for inputting training images in a preset training image set into the line inspection model to obtain a prediction result output by the line inspection model, wherein each training image in the training image set is obtained by shooting a runway, and each training image has corresponding annotation information, the annotation information comprises an annotation steering value and an annotation speed value, and the prediction result comprises a prediction steering value and a prediction speed value;
the error calculation unit is used for calculating a steering error value and a speed error value according to the annotation information corresponding to the training image and the prediction result;
and the parameter adjustment unit is used for adjusting parameters in the line inspection model according to the steering error value and the speed error value until the line inspection model meets the convergence condition, so as to obtain the target line inspection model.
In a fourth aspect, the present application provides a line inspection device, including:
the image acquisition unit is used for acquiring runway images obtained by shooting the runway in the process of moving the mobile equipment on the runway;
the target model prediction unit is used for inputting the runway image into the target line inspection model to obtain a prediction result output by the target line inspection model, where the prediction result includes a predicted steering value and a predicted speed value;
a motion control unit for controlling the motion of the mobile device based on the predicted steering value and the predicted speed value;
the target line inspection model is obtained through training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
In a fifth aspect, the present application provides a line inspection system, comprising:
a terminal device for implementing the steps of the method as provided in the first aspect above;
a mobile device for implementing the steps of the method as provided in the second aspect above.
From the above, in the present application, a training image set obtained by photographing a runway is first acquired. The training image set includes at least one training image, and each training image has corresponding annotation information, which includes an annotation steering value and an annotation speed value. The training images in the set are input into the line inspection model to obtain a prediction result output by the model, the prediction result including a predicted steering value and a predicted speed value. A steering error value and a speed error value are then calculated from the annotation information and the prediction result corresponding to the training image. Finally, parameters in the line inspection model are adjusted according to the steering error value and the speed error value until the model meets a convergence condition, yielding the target line inspection model. Because the target line inspection model is trained on the training image set together with the annotation steering values and annotation speed values corresponding to its training images, it can match different predicted steering values and predicted speed values to the runway type (straight line, curve, right-angle curve, etc.) at the mobile device's current position, thereby accelerating the line inspection speed of the mobile device and improving its line inspection efficiency. It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flowchart of a training method of a line inspection model according to an embodiment of the present application;
Fig. 2 is a diagram of an example scenario for obtaining a training image set according to an embodiment of the present application;
Fig. 3 is an example diagram of a designated runway according to an embodiment of the present application;
Fig. 4 is an example diagram of the visual interface of a marking tool according to an embodiment of the present application;
Fig. 5 is a diagram of an example network structure of a line inspection model according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a line inspection method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a training device for a line inspection model according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a line inspection device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 shows a flowchart of a training method of a line inspection model, where the training method is applied to a terminal device, and the details are as follows:
Step 101, inputting training images in a preset training image set into a line inspection model to obtain a prediction result output by the line inspection model.
In the embodiment of the application, each training image in the preset training image set is obtained by photographing a runway, and each training image has corresponding annotation information, where the annotation information includes an annotation steering value and an annotation speed value. Specifically, the training image set may be obtained as follows. Referring to fig. 2, the mobile device may be a cart and the terminal device may be a computer, where the mobile device is connected to a Bluetooth handle through Bluetooth and the terminal device is connected to the mobile device through wireless fidelity (Wi-Fi), so that the Bluetooth handle and the mobile device can communicate with each other, as can the mobile device and the terminal device. First, a user places the mobile device on a designated runway and then sends control steering values and control speed values to the mobile device by manipulating the Bluetooth handle, instructing the mobile device to move on the designated runway based on those values; as one example, the designated runway may be as shown in fig. 3. The mobile device carries a vision module, such as a camera whose field of view faces the ground. Through the camera, the mobile device photographs the designated runway at each moment during the movement to obtain training images, and at the moment each training image is shot, the mobile device records the control steering value and control speed value sent by the Bluetooth handle at that moment, taking the control steering value as the annotation steering value and the control speed value as the annotation speed value corresponding to that training image. In this way, the mobile device obtains the training image set and the annotation information corresponding to each training image without any manual labeling by the user. Because the computing performance of the mobile device is generally too poor to complete training, the mobile device sends the training image set and the corresponding annotation information to the terminal device through the wireless communication technology, and the terminal device trains the line inspection model.
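As an illustration of this collection pipeline, the following Python sketch pairs each captured frame with the control values in effect at capture time; the handle interface and the capture interval are hypothetical stand-ins, not part of the patent:

```python
import json
import time

import cv2  # OpenCV, assumed available on the cart


def collect_samples(camera, handle, out_dir, num_frames):
    """Record runway frames plus the Bluetooth-handle control values as annotations."""
    annotations = {}
    for i in range(num_frames):
        ok, frame = camera.read()            # one runway image from the ground-facing camera
        if not ok:
            continue
        steer, speed = handle.current()      # hypothetical: control values active right now
        name = f"{i:06d}.jpg"
        cv2.imwrite(f"{out_dir}/{name}", frame)
        annotations[name] = {"steer": steer,   # annotation steering value
                             "speed": speed}   # annotation speed value
        time.sleep(0.05)                     # capture interval is an assumption
    with open(f"{out_dir}/annotations.json", "w") as f:
        json.dump(annotations, f)
```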
It should be noted that, for the line inspection model to recognize the characteristics of the runway, each training image needs to contain at least one of the two sides of the runway, so in the embodiment of the present application the field of view of the camera on the mobile device and the width of the runway need to be constrained. For example, the field of view of the camera may be between 80 and 120 degrees. This range reflects two constraints: when the single-chip microcomputer on the mobile device collects data, the resolution must not be too high because the frame rate needs to be raised, and the size of the feature map the single-chip microcomputer can process is limited, so however large the captured image is, it must be uniformly scaled to a resolution below 300x300 to meet the real-time requirement. Accordingly, when the runway is 30 cm in front of the camera, the width of the runway may be set to no more than 40 cm.
After the terminal device obtains the training image set and the annotation information corresponding to each training image in the set, the training images can be input into the line inspection model, and the line inspection model outputs a corresponding prediction result for each input training image, the prediction result including a predicted steering value and a predicted speed value.
Step 102, calculating a steering error value and a speed error value according to the annotation information and the prediction result corresponding to the training image.
In the embodiment of the application, the steering error value and the speed error value can be calculated from the annotation information corresponding to the input training image and the prediction result output by the line inspection model. Specifically, the loss function used in the iterative training of the line inspection model may be the mean square error function, and the steering error value and the speed error value are calculated from the mean square error function, the annotation information and the prediction result.
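For concreteness, a minimal sketch of this computation, assuming the line inspection model is implemented in PyTorch (the framework choice is an assumption; the patent only names the mean square error function):

```python
import torch.nn.functional as F


def compute_errors(pred_steer, pred_speed, label_steer, label_speed):
    """Mean-square steering and speed errors between prediction and annotation."""
    steer_error = F.mse_loss(pred_steer, label_steer)   # steering error value
    speed_error = F.mse_loss(pred_speed, label_speed)   # speed error value
    return steer_error, speed_error
```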
Step 103, adjusting parameters in the line inspection model according to the steering error value and the speed error value until the line inspection model meets the convergence condition, to obtain the target line inspection model.
In the embodiment of the application, after the steering error value and the speed error value are calculated, parameters in the line inspection model can be adjusted according to them. The line inspection model with adjusted parameters then continues training by repeating steps 101, 102 and 103 until it meets the convergence condition. The convergence condition may be that, when the line inspection model is verified on the verification set, the error rate of the prediction results output by the model has not decreased for 10 consecutive verifications.
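Steps 101 to 103 together amount to an ordinary supervised training loop with early stopping. A sketch under the assumptions above, reusing compute_errors from the previous snippet; the optimizer, the equal loss weighting, and the validate helper are assumptions not specified by the text:

```python
import torch


def train(model, train_loader, val_loader, max_epochs=200):
    """Steps 101-103 as a loop, stopping after 10 validations without improvement."""
    opt = torch.optim.Adam(model.parameters())     # optimizer choice is an assumption
    best_err, stale = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for images, label_steer, label_speed in train_loader:
            pred_steer, pred_speed = model(images)               # step 101
            steer_err, speed_err = compute_errors(               # step 102
                pred_steer, pred_speed, label_steer, label_speed)
            loss = steer_err + speed_err     # equal weighting is an assumption
            opt.zero_grad()
            loss.backward()                  # step 103: adjust parameters
            opt.step()
        val_err = validate(model, val_loader)  # hypothetical helper: error rate on the verification set
        if val_err < best_err:
            best_err, stale = val_err, 0
        else:
            stale += 1                       # error rate did not decrease this time
        if stale >= 10:                      # convergence condition from the text
            break
    return model
```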
Optionally, before the step 101, the method further includes:
performing enhancement processing on the training images in the training image set to obtain enhanced training images;
correspondingly, the step 101 specifically includes:
and inputting the training image after the enhancement treatment to a line inspection model to obtain a prediction result output by the line inspection model.
In the embodiment of the application, in order to improve the generalization of the line inspection model, enhancement processing may be performed on the training images in the training image set. The enhancement processing includes at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur and Gaussian noise, and it increases the diversity of the training images. In the training process, both the training images before enhancement and the training images after enhancement are used as training samples for the line inspection model, which improves the accuracy of the trained target line inspection model. Random brightness means the brightness of the training image is adjusted randomly, random contrast means the contrast is adjusted randomly, random saturation means the saturation is adjusted randomly, and motion blur means the scene in the training image is given a motion-blur effect.
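A sketch of such an enhancement pipeline with OpenCV and NumPy; all probabilities and parameter ranges below are assumptions, since the text names the operations but not their magnitudes:

```python
import random

import cv2
import numpy as np


def augment(img):
    """Apply a random subset of the enhancement operations to one BGR training image."""
    img = img.astype(np.float32)
    if random.random() < 0.5:                              # random brightness
        img += random.uniform(-30.0, 30.0)
    if random.random() < 0.5:                              # random contrast
        img = (img - 127.5) * random.uniform(0.7, 1.3) + 127.5
    if random.random() < 0.5:                              # random saturation, via HSV
        hsv = cv2.cvtColor(np.clip(img, 0, 255).astype(np.uint8),
                           cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] *= random.uniform(0.7, 1.3)
        img = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8),
                           cv2.COLOR_HSV2BGR).astype(np.float32)
    if random.random() < 0.3:                              # motion blur (horizontal kernel)
        k = np.zeros((9, 9), np.float32)
        k[4, :] = 1.0 / 9.0
        img = cv2.filter2D(img, -1, k)
    if random.random() < 0.3:                              # Gaussian blur
        img = cv2.GaussianBlur(img, (5, 5), 0)
    if random.random() < 0.3:                              # Gaussian noise
        img += np.random.normal(0.0, 5.0, img.shape).astype(np.float32)
    return np.clip(img, 0, 255).astype(np.uint8)
```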
Optionally, before the step 101, the method further includes:
normalizing pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, the step 101 specifically includes:
and inputting the training image after normalization processing into a line inspection model to obtain a prediction result output by the line inspection model.
In the embodiment of the application, the training image consists of pixels, and the value of each pixel is an integer in the range 0-255. In practice, although a training image that has not been normalized can be input directly into the line inspection model, training would then be very slow. Therefore, to improve the training speed of the line inspection model, the pixel values of all pixels of the training images in the training image set can be normalized, scaling the range of pixel values from 0-255 to 0-1. Illustratively, the normalization may be performed by dividing each pixel value of the training image by 255; for example, the pixel value 255 becomes 1 after normalization. In the training process, the normalized training images are input into the line inspection model, which improves its training speed.
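A minimal sketch of this preprocessing; the exact target resolution is an assumption chosen to satisfy the "below 300x300" constraint mentioned earlier:

```python
import cv2
import numpy as np


def preprocess(img):
    """Scale to a single-chip-friendly resolution and normalize pixel values to [0, 1]."""
    img = cv2.resize(img, (224, 224))      # "below 300x300" per the text; 224x224 is an assumption
    return img.astype(np.float32) / 255.0  # e.g. a pixel value of 255 becomes 1.0
```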
Optionally, before the step 101, the method further includes:
for each training image in the training image set, taking the sum of half of the width of the training image and the annotation steering value corresponding to the training image as a moving coordinate value;
displaying a training image and a moving point corresponding to the moving coordinate value in the training image to instruct a user to mark the training image;
removing the training images marked as unqualified from the training image set to obtain a culled training image set;
correspondingly, the step 101 specifically includes:
and inputting the training images in the culled training image set into the line inspection model to obtain a prediction result output by the line inspection model.
In this embodiment of the present application, for each training image in the training image set, the sum of half of the width of the training image and the annotation steering value corresponding to that training image may be used as the movement coordinate value. For example, if the width of training image 1 is a and its annotation steering value is b, then the movement coordinate value corresponding to training image 1 is a/2+b; this movement coordinate value is the coordinate of the moving point in training image 1. The terminal device can display the training image, together with the moving point at the movement coordinate value, on its display screen. The user then judges whether the position of the moving point deviates from the manual expectation, and marks the training image accordingly: if the moving point deviates from the manual expectation, the user can mark the training image as unqualified through a marking tool. Referring to fig. 4, which shows the visual interface of the marking tool, the user may select a training image set via the file option and set an automatic playing time, so that the marking tool plays the training images in the order in which they were shot while displaying the moving point in each one. During playback, when the user finds that the moving point of a certain training image deviates from the manual expectation, that training image can be marked as unqualified through the pause-and-mark option. Finally, the training images marked as unqualified can be removed to obtain the culled training image set.
In one embodiment, since the motion of the mobile device is continuous, the user need not mark the training images frame by frame: during playback it suffices to mark the first frame in which the moving point deviates (denoted the first-frame marked image) and the last frame in which the moving point deviates (denoted the last-frame marked image). The first-frame marked image, the last-frame marked image, and every training image played between them are then removed directly.
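The move-point computation and the range-based culling just described might be sketched as follows; the vertical placement of the point and the key handling of the marking tool are assumptions:

```python
import cv2


def move_point_x(image_width, steer_label):
    """Movement coordinate value: half the image width plus the annotation steering value."""
    return int(image_width / 2 + steer_label)        # a/2 + b in the example above


def show_with_move_point(img, steer_label):
    """Draw the moving point on a training image for the user to inspect."""
    h, w = img.shape[:2]
    cv2.circle(img, (move_point_x(w, steer_label), h // 2),  # vertical placement is an assumption
               5, (0, 0, 255), -1)
    cv2.imshow("marking tool", img)
    return cv2.waitKey(30)                           # pause/mark key handling omitted


def cull(frame_names, first_bad, last_bad):
    """Drop the first marked frame, the last marked frame and everything between them."""
    return frame_names[:first_bad] + frame_names[last_bad + 1:]
```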
Optionally, considering that the line inspection model ultimately has to run on the mobile device, whose single-chip microcomputer has limited computing capability, the network structure of the line inspection model should be as simple as possible so that the single-chip microcomputer can run it. Based on this, the embodiment of the application proposes the network structure shown in fig. 5: the line inspection model includes five convolution layers, a pooling layer and two fully-connected layers; the convolution kernels in the five convolution layers are no larger than 3×3 in size; the two fully-connected layers are used to perform general matrix multiplication (General Matrix Multiplication, GEMM); and the activation function used by the line inspection model is the rectified linear unit (Rectified Linear Unit, ReLU).
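The text fixes the layer types but not the channel widths or the position of the pooling layer, so the following PyTorch sketch is only one plausible instantiation of the stated structure (five convolutions with kernels no larger than 3x3, a single pooling layer, two fully-connected layers, ReLU activations, and two regression outputs):

```python
import torch
import torch.nn as nn


class LineInspectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # five 3x3 convolutions, channel widths assumed
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # the single pooling layer
        )
        self.fc1 = nn.Linear(64, 32)             # fully-connected layers, executed as GEMM
        self.fc2 = nn.Linear(32, 2)              # outputs: steering value and speed value

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = torch.relu(self.fc1(x))
        steer, speed = self.fc2(x).unbind(dim=1)
        return steer, speed
```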
Optionally, because the single-chip microcomputer on the mobile device has limited computing capability and can only process 8-bit data, and the mobile device has high real-time requirements on the target line inspection model, the target line inspection model trained on the terminal device cannot be deployed into the mobile device directly. Based on this, after the above step 103, the method further includes:
and carrying out quantization treatment on the target line inspection model to obtain a quantized model.
In the embodiment of the application, after the target line inspection model is obtained through training, quantization processing can be performed on it to convert its precision from 32 bits to 8 bits and obtain the quantized model, thereby reducing the computation of the model and achieving acceleration. The quantized model can be deployed to the mobile device and run by the single-chip microcomputer on the mobile device. The parameters in the target line inspection model are floating-point data, such as 6.0, while the parameters in the quantized model are fixed-point data, such as 127. The quantization formula for converting floating-point data into fixed-point data is:
Q = R ÷ S + Z (rounded to the nearest integer)
where Q represents the quantized fixed-point value, R represents the real floating-point value, Z represents the quantized fixed-point value corresponding to the floating-point value 0, and S represents the smallest scale that can be represented after fixed-point quantization.
Meanwhile, S and Z are evaluated as follows:
S = (R_max - R_min) ÷ (Q_max - Q_min)
Z = Q_max - R_max ÷ S
where R_max represents the maximum floating-point value, R_min the minimum floating-point value, Q_max the maximum fixed-point value, and Q_min the minimum fixed-point value.
For example, assuming the weight activation values range between [-2.0, 6.0] and quantization to 8-bit data is required, the fixed-point quantization can represent the range [-128, 127], and the calculation process is as follows:
S = (6.0 - (-2.0)) ÷ (127 - (-128)) = 8 ÷ 255 ≈ 0.0313725
Z = 127 - 6.0 ÷ 0.0313725 ≈ 127 - 191.25 = -64.25 ≈ -64
This yields the following correspondence:
Fixed-point value    Floating-point value
-128                 -2.0
-64                  0.0
127                  6.0
Given the values obtained above, if there is a true weight value of 0.48, i.e. R = 0.48, the corresponding Q is evaluated as:
Q = 0.48 ÷ 0.0313725 + (-64) ≈ 15.30 - 64 = -48.70 ≈ -49.
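Putting the formulas together, a small pure-Python sketch that reproduces the worked example; the clipping and rounding conventions at the range boundaries are assumptions:

```python
def quant_params(r_min, r_max, q_min=-128, q_max=127):
    """Scale S and zero point Z for quantizing floats in [r_min, r_max] to 8-bit."""
    s = (r_max - r_min) / (q_max - q_min)   # S = (R_max - R_min) / (Q_max - Q_min)
    z = round(q_max - r_max / s)            # Z = Q_max - R_max / S
    return s, z


def quantize(r, s, z, q_min=-128, q_max=127):
    """Q = R / S + Z, rounded and clipped to the fixed-point range."""
    return max(q_min, min(q_max, round(r / s + z)))


s, z = quant_params(-2.0, 6.0)                    # s = 8/255 (about 0.0313725), z = -64
print(quantize(0.48, s, z))                       # round(15.30 - 64) = -49
print(quantize(6.0, s, z), quantize(-2.0, s, z))  # 127, -128, matching the table above
```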
from the above, in the scheme of the application, a training image set obtained by shooting a runway is firstly obtained, the training image set comprises at least one training image, each training image has corresponding labeling information, the labeling information comprises a labeling steering value and a labeling speed value, the training images in the training image set are input into the line inspection model to obtain a prediction result output by the line inspection model, the prediction result comprises a prediction steering value and a prediction speed value, then a steering error value and a speed error value are obtained through calculation according to the labeling information corresponding to the training images and the prediction result, and finally parameters in the line inspection model are adjusted according to the steering error value and the speed error value until convergence conditions of the line inspection model are met, and the target line inspection model is obtained. According to the method and the device, the target line inspection model is obtained through training of the training image set and the labeling steering values and the labeling speed values corresponding to the training images in the training image set, and different prediction steering values and prediction speed values can be matched for the runway types (straight lines, curves, right-angle curves and the like) corresponding to the runway positions where the mobile equipment is located, so that line inspection speed of the mobile equipment is accelerated, and line inspection efficiency of the mobile equipment is improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 6 shows a flowchart of a line inspection method provided in an embodiment of the present application, where the line inspection method is applied to a mobile device, and is described in detail below:
step 601, obtaining a runway image obtained by shooting the runway in the process of moving the mobile device on the runway.
Step 602, inputting the runway image to a target line inspection model to obtain a prediction result output by the target line inspection model, wherein the prediction result comprises a predicted steering value and a predicted speed value.
Step 603, controlling the movement of the mobile device based on the predicted steering value and the predicted speed value.
In the embodiment of the application, after the target line inspection model trained on the terminal device has been quantized, the quantized model can be deployed on the mobile device. While the mobile device moves on the runway, it can photograph the runway at a preset shooting frequency to obtain runway images; for example, the mobile device may shoot the runway every second during the movement. Each time the mobile device shoots a frame of runway image, it immediately inputs that runway image into the target line inspection model, and the target line inspection model outputs a corresponding prediction result including a predicted steering value and a predicted speed value. Finally, the mobile device moves based on the predicted steering value and the predicted speed value. The target line inspection model is obtained by training on a training image set and the annotation information corresponding to each training image in the set, the annotation information including an annotation steering value and an annotation speed value; specifically, it is obtained by performing the training method of the line inspection model described above.
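The on-device loop might be sketched as follows (Python for readability, although the text states that the quantized model actually runs on the single-chip microcomputer; quantized_model and the motors interface are hypothetical, and preprocess is the resize-and-normalize helper sketched earlier):

```python
def patrol(camera, quantized_model, motors):
    """Shoot the runway at the preset frequency, predict, and drive accordingly."""
    while True:
        ok, frame = camera.read()                    # step 601: one frame of runway image
        if not ok:
            continue
        x = preprocess(frame)                        # resize + normalize, as in training
        steer, speed = quantized_model(x)            # step 602: prediction result
        motors.drive(steering=steer, speed=speed)    # step 603: hypothetical motor interface
```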
In the scheme of the application, a runway image obtained by photographing the runway while the mobile device moves on it is first acquired. The runway image is then input into the target line inspection model to obtain the prediction result output by the model, the prediction result including a predicted steering value and a predicted speed value. Finally, the movement of the mobile device is controlled based on the predicted steering value and the predicted speed value. The target line inspection model is trained on a training image set and the annotation information corresponding to each training image in the set, the annotation information including an annotation steering value and an annotation speed value. Because the target line inspection model is trained in this way, it can match different predicted steering values and predicted speed values to the runway type (straight line, curve, right-angle curve, etc.) at the mobile device's current position, thereby accelerating the line inspection speed of the mobile device and improving its line inspection efficiency.
Fig. 7 is a schematic structural diagram of a training device for a line inspection model, where the training device is applied to a terminal device, and for convenience of explanation, only a portion relevant to an embodiment of the present application is shown.
The training device 700 of the line inspection model includes:
the model prediction unit 701 is configured to input training images in a preset training image set into the line inspection model to obtain a prediction result output by the line inspection model, where each training image in the training image set is obtained by shooting a runway, and each training image has corresponding annotation information, where the annotation information includes an annotation steering value and an annotation speed value, and the prediction result includes a prediction steering value and a prediction speed value;
an error calculation unit 702, configured to calculate a steering error value and a speed error value according to the annotation information corresponding to the training image and the prediction result;
and a parameter adjustment unit 703, configured to adjust parameters in the inspection model according to the steering error value and the speed error value until the inspection model meets a convergence condition, so as to obtain a target inspection model.
Optionally, the training device 700 for the line inspection model further includes:
and the enhancement processing unit is used for carrying out enhancement processing on the training images in the training image set to obtain the training images after the enhancement processing, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, gaussian blur and Gaussian noise.
The model prediction unit 701 is specifically configured to input the training image after the enhancement processing into the line inspection model.
Optionally, the training device 700 for the line inspection model further includes:
the normalization processing unit is used for carrying out normalization processing on the pixel values of the training images in the training image set to obtain normalized training images;
the model prediction unit 701 is specifically configured to input the training image after normalization processing to the line inspection model.
Optionally, the training device 700 for the line inspection model further includes:
a coordinate calculation unit configured to set, for each training image in the training image set, a sum of half of a width of the training image and a label steering value corresponding to the training image as a movement coordinate value;
the image display unit is used for displaying the training image and a moving point corresponding to the moving coordinate value in the training image so as to instruct a user to mark the training image;
the image rejection unit is used for rejecting the training images marked as unqualified in the training image set to obtain a rejected training image set;
the model prediction unit 701 is specifically configured to input the training images in the training image set after the culling to the line inspection model.
Optionally, the line inspection model includes five convolution layers and a pooling layer, and the convolution kernels in the five convolution layers are less than or equal to 3×3.
Optionally, the training device 700 for the line inspection model further includes:
and the quantization processing unit is used for carrying out quantization processing on the target line inspection model to obtain a quantized model, wherein parameters in the target line inspection model are floating point data, and parameters in the quantized model are fixed point data.
From the above, in the scheme of the application, a training image set obtained by photographing a runway is first acquired; the set includes at least one training image, and each training image has corresponding annotation information including an annotation steering value and an annotation speed value. The training images in the set are input into the line inspection model to obtain a prediction result output by the model, the prediction result including a predicted steering value and a predicted speed value. A steering error value and a speed error value are then calculated from the annotation information and the prediction result corresponding to the training image. Finally, parameters in the line inspection model are adjusted according to the steering error value and the speed error value until the convergence condition is met, yielding the target line inspection model. Because the target line inspection model is trained on the training image set together with the annotation steering values and annotation speed values corresponding to its training images, it can match different predicted steering values and predicted speed values to the runway type (straight line, curve, right-angle curve, etc.) at the mobile device's current position, thereby accelerating the line inspection speed of the mobile device and improving its line inspection efficiency.
Fig. 8 shows a schematic structural diagram of a line inspection device provided in an embodiment of the present application, where the line inspection device is applied to a mobile device, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
The line inspection device 800 includes:
an image obtaining unit 801, configured to obtain a runway image obtained by photographing the runway while the mobile device moves on the runway;
a target model prediction unit 802, configured to input the runway image to a target line inspection model, to obtain a predicted result output by the target line inspection model, where the predicted result includes a predicted steering value and a predicted speed value;
a motion control unit 803 for controlling the motion of the mobile device based on the predicted steering value and the predicted speed value;
the target line inspection model is obtained through training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
In the scheme of the application, a runway image obtained by photographing the runway while the mobile device moves on it is first acquired. The runway image is then input into the target line inspection model to obtain the prediction result output by the model, the prediction result including a predicted steering value and a predicted speed value. Finally, the movement of the mobile device is controlled based on the predicted steering value and the predicted speed value. The target line inspection model is trained on a training image set and the annotation information corresponding to each training image in the set, the annotation information including an annotation steering value and an annotation speed value. Because the target line inspection model is trained in this way, it can match different predicted steering values and predicted speed values to the runway type (straight line, curve, right-angle curve, etc.) at the mobile device's current position, thereby accelerating the line inspection speed of the mobile device and improving its line inspection efficiency.
Fig. 9 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 9, the terminal device 9 of this embodiment includes: at least one first processor 90 (only one is shown in fig. 9), a first memory 91, and a first computer program 92 stored in the first memory 91 and executable on the at least one first processor 90, the first processor 90 implementing the following steps when executing the first computer program 92:
inputting training images in a preset training image set into the line inspection model to obtain a prediction result output by the line inspection model, wherein each training image in the training image set is obtained by shooting a runway, and each training image has corresponding annotation information, the annotation information comprises an annotation steering value and an annotation speed value, and the prediction result comprises a prediction steering value and a prediction speed value;
calculating to obtain a steering error value and a speed error value according to the marking information corresponding to the training image and the prediction result;
and adjusting parameters in the line inspection model according to the steering error value and the speed error value until the line inspection model meets convergence conditions, so as to obtain a target line inspection model.
In a second possible implementation based on the first possible implementation, before the training images in the preset training image set are input into the line inspection model, the following steps are further implemented when the first processor 90 executes the first computer program 92:
performing enhancement processing on the training images in the training image set to obtain an enhanced training image, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, gaussian blur and Gaussian noise;
correspondingly, the inputting the training images in the preset training image set into the line inspection model includes:
and inputting the training image after the enhancement treatment into the line inspection model.
In a third possible implementation based on the first possible implementation, before the training images in the preset training image set are input into the line inspection model, the following steps are further implemented when the first processor 90 executes the first computer program 92:
normalizing the pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, the inputting the training images in the preset training image set into the line inspection model includes:
and inputting the training image after normalization processing into the line inspection model.
In a fourth possible implementation based on the first possible implementation, before the training images in the preset training image set are input into the line inspection model, the following steps are further implemented when the first processor 90 executes the first computer program 92:
for each training image in the training image set, taking the sum of half of the width of the training image and the annotation steering value corresponding to the training image as a moving coordinate value;
displaying the training image and a moving point corresponding to the moving coordinate value in the training image to instruct a user to mark the training image;
removing the training images marked as unqualified from the training image set to obtain a culled training image set;
correspondingly, the inputting the training images in the preset training image set into the line inspection model includes:
and inputting the training images in the culled training image set into the line inspection model.
In a fifth possible implementation based on the first possible implementation, the line inspection model includes five convolution layers and a pooling layer, and the convolution kernels in the five convolution layers are less than or equal to 3×3.
The terminal device 9 may include, but is not limited to, a first processor 90, a first memory 91. It will be appreciated by those skilled in the art that fig. 9 is merely an example of the terminal device 9 and is not meant to be limiting as to the terminal device 9, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The first processor 90 may be a central processing unit (Central Processing Unit, CPU), the first processor 90 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The first memory 91 may in some embodiments be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9. The first memory 91 may also be an external storage device of the terminal device 9 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 9. Further, the first memory 91 may also include both an internal storage unit and an external storage device of the terminal device 9. The first memory 91 is used for storing an operating system, an application program, a boot loader (BootLoader), data, other programs, and the like, for example, program codes of the first computer program. The above-described first memory 91 may also be used to temporarily store data that has been output or is to be output.
Fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present application. As shown in fig. 10, the mobile device 10 of this embodiment includes: at least one second processor 100 (only one is shown in fig. 10), a second memory 101, and a second computer program 102 stored in the second memory 101 and executable on the at least one second processor 100, the second processor 100 implementing the following steps when executing the second computer program 102:
acquiring runway images obtained by shooting the runway in the process of moving the mobile equipment on the runway;
inputting the runway image into a target line inspection model to obtain a prediction result output by the target line inspection model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
controlling the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line inspection model is obtained through training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
The mobile device may include, but is not limited to, a second processor 100, a second memory 101. It will be appreciated by those skilled in the art that fig. 10 is merely an example of mobile device 10 and is not intended to limit mobile device 10, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The second processor 100 may be a central processing unit (Central Processing Unit, CPU), and the second processor 100 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The second storage 101 may be an internal storage unit of the mobile device 10, such as a hard disk or a memory of the mobile device 10, in some embodiments. The second memory 101 may also be an external storage device of the mobile device 10 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the mobile device 10. Further, the second memory 101 may also include both an internal storage unit and an external storage device of the mobile device 10. The second memory 101 is used for storing an operating system, an application program, a boot loader (BootLoader), data, other programs, and the like, for example, program codes of the second computer program. The above-described second memory 101 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of each of the method embodiments described above.
The embodiments of the present application also provide a computer program product which, when run on a server, causes the server to perform the steps of each of the method embodiments described above.
An embodiment of the present application also provides a line inspection system comprising the above mobile device and terminal device.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program instructing the related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a server, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative: the division of the modules or units described above is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the protection scope of the present application.

Claims (9)

1. A training method for a line inspection model, characterized by comprising:
for each training image in a training image set, taking the sum of half of the width of the training image and the annotation steering value corresponding to the training image as a moving coordinate value;
displaying the training image together with a moving point located at the moving coordinate value in the training image, so as to prompt a user to check the annotation of the training image;
if the position of the moving point deviates from what is manually expected, marking the training image as an unqualified image;
removing the training images marked as unqualified from the training image set to obtain a culled training image set;
inputting the training images in the culled training image set into the line inspection model to obtain a prediction result output by the line inspection model, wherein each training image in the training image set is obtained by photographing a runway and has corresponding annotation information, the annotation information comprising an annotation steering value and an annotation speed value, and the prediction result comprising a predicted steering value and a predicted speed value;
calculating a steering error value and a speed error value according to the annotation information corresponding to the training image and the prediction result;
and adjusting the parameters of the line inspection model according to the steering error value and the speed error value until the line inspection model meets a convergence condition, thereby obtaining a target line inspection model.
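As an editorial sketch of claim 1 (not the patented implementation itself), the review-and-cull step and the error-driven parameter update might look as follows in PyTorch. The UI callbacks `show_fn` and `is_acceptable_fn`, the mean-squared-error loss, the two-output model interface, and the fixed epoch budget standing in for the convergence condition are all assumptions.

```python
import torch
import torch.nn as nn

def review_and_cull(samples, show_fn, is_acceptable_fn):
    """Visual check of claim 1: draw a moving point at
    x = width / 2 + annotation steering value, let a user judge it,
    and keep only the images whose point matches expectation."""
    kept = []
    for image, steer, speed in samples:        # image: CxHxW tensor
        x = image.shape[-1] / 2 + steer        # the moving coordinate value
        show_fn(image, x)                      # display image with the moving point
        if is_acceptable_fn():                 # user marks qualified / unqualified
            kept.append((image, steer, speed))
    return kept                                # the culled training image set

def train(model, loader, epochs=10, lr=1e-3):
    """Compute steering/speed error values and adjust parameters; a fixed
    epoch budget stands in for the (unspecified) convergence condition."""
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for images, steer, speed in loader:
            pred_steer, pred_speed = model(images)
            loss = mse(pred_steer, steer) + mse(pred_speed, speed)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model  # the target line inspection model
```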
2. The training method of claim 1, further comprising, before inputting the training images of the training image set into the line inspection model:
performing enhancement processing on the training images in the training image set to obtain enhanced training images, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur, and Gaussian noise;
correspondingly, inputting the training images of the training image set into the line inspection model comprises:
inputting the enhanced training images into the line inspection model.
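One plausible reading of this enhancement step, sketched with OpenCV and NumPy; the perturbation ranges, probabilities, and kernel sizes are assumptions rather than values given in the patent.

```python
import cv2
import numpy as np

def augment(img, rng=None):
    """Apply a random subset of the listed enhancements to a uint8 BGR image."""
    if rng is None:
        rng = np.random.default_rng()
    img = img.astype(np.float32)
    if rng.random() < 0.5:                              # random brightness
        img += rng.uniform(-30, 30)
    if rng.random() < 0.5:                              # random contrast
        img = (img - 128.0) * rng.uniform(0.8, 1.2) + 128.0
    if rng.random() < 0.5:                              # random saturation
        hsv = cv2.cvtColor(np.clip(img, 0, 255).astype(np.uint8),
                           cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] *= rng.uniform(0.8, 1.2)
        img = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8),
                           cv2.COLOR_HSV2BGR).astype(np.float32)
    if rng.random() < 0.3:                              # motion blur (horizontal kernel)
        k = np.zeros((7, 7), np.float32)
        k[3, :] = 1.0 / 7.0
        img = cv2.filter2D(img, -1, k)
    if rng.random() < 0.3:                              # Gaussian blur
        img = cv2.GaussianBlur(img, (5, 5), 0)
    if rng.random() < 0.3:                              # Gaussian noise
        img += rng.normal(0.0, 5.0, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)
```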
3. The training method of claim 1, further comprising, before inputting the training images of the training image set into the line inspection model:
normalizing the pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, inputting the training images of the training image set into the line inspection model comprises:
inputting the normalized training images into the line inspection model.
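A minimal sketch of the normalization step, assuming simple scaling of 8-bit pixel values to [0, 1]; the claim does not fix the exact scheme (mean/standard-deviation standardisation would satisfy it equally).

```python
import numpy as np

def normalize(img):
    """Scale uint8 pixel values to the range [0, 1] as float32."""
    return img.astype(np.float32) / 255.0
```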
4. The training method of claim 1, wherein the line inspection model comprises five convolution layers and one pooling layer, the convolution kernels of the five convolution layers each having a size less than or equal to 3 × 3.
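A network with the shape described in claim 4 could be written in PyTorch as below; the channel widths, strides, position of the single pooling layer, and the two-value regression head are assumptions, since the claim only fixes the layer counts and the kernel-size bound.

```python
import torch.nn as nn

class LinePatrolNet(nn.Module):
    """Sketch matching claim 4: five convolution layers with kernels no
    larger than 3x3 and a single pooling layer; outputs (steering, speed)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # the single pooling layer
        )
        self.head = nn.Linear(64, 2)          # -> (steering value, speed value)

    def forward(self, x):
        out = self.head(self.features(x).flatten(1))
        return out[:, 0], out[:, 1]
```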
5. The training method of claim 1, further comprising, after obtaining the target line inspection model:
performing quantization processing on the target line inspection model to obtain a quantized model, wherein the parameters of the target line inspection model are floating-point data and the parameters of the quantized model are fixed-point data.
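Quantization from floating point to fixed point can take many forms; the sketch below shows one common choice, symmetric per-tensor int8 quantization, purely as an illustration of the claimed float-to-fixed-point conversion.

```python
import torch

def quantize_int8(w):
    """Map a float32 weight tensor onto int8 fixed-point values plus a
    per-tensor scale (symmetric quantisation; one of many valid schemes)."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale

# Dequantise to inspect the rounding error: w_hat = q.float() * scale
```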
6. A line inspection method, comprising:
acquiring a runway image obtained by photographing a runway while a mobile device moves on the runway;
inputting the runway image into a target line inspection model to obtain a prediction result output by the target line inspection model, the prediction result comprising a predicted steering value and a predicted speed value;
controlling the motion of the mobile device based on the predicted steering value and the predicted speed value;
wherein the target line inspection model is trained based on the training method according to any one of claims 1 to 5.
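The claimed inspection loop, sketched with OpenCV capture and a trained PyTorch model; `send_motion_command` is a hypothetical stand-in for the mobile device's actuator interface, which the claim leaves unspecified.

```python
import cv2
import torch

def patrol(model, send_motion_command, camera_index=0):
    """Grab runway frames, predict (steering, speed), and forward the
    prediction to the motion controller until the camera stream ends."""
    cap = cv2.VideoCapture(camera_index)
    model.eval()
    with torch.no_grad():
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            x = (torch.from_numpy(frame).permute(2, 0, 1)
                 .float().div(255.0).unsqueeze(0))   # HxWxC uint8 -> 1xCxHxW float
            steer, speed = model(x)
            send_motion_command(steer.item(), speed.item())
    cap.release()
```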
7. A training device for a line inspection model, characterized by comprising:
a model prediction unit configured to: for each training image in a training image set, take the sum of half of the width of the training image and the annotation steering value corresponding to the training image as a moving coordinate value; display the training image together with a moving point located at the moving coordinate value in the training image, so as to prompt a user to check the annotation of the training image; if the position of the moving point deviates from what is manually expected, mark the training image as an unqualified image; remove the training images marked as unqualified from the training image set to obtain a culled training image set; and input the training images in the culled training image set into the line inspection model to obtain a prediction result output by the line inspection model, wherein each training image in the training image set is obtained by photographing a runway and has corresponding annotation information, the annotation information comprising an annotation steering value and an annotation speed value, and the prediction result comprising a predicted steering value and a predicted speed value;
an error calculation unit configured to calculate a steering error value and a speed error value according to the annotation information corresponding to the training image and the prediction result;
and a parameter adjustment unit configured to adjust the parameters of the line inspection model according to the steering error value and the speed error value until the line inspection model meets a convergence condition, thereby obtaining a target line inspection model.
8. A line inspection device, comprising:
an image acquisition unit configured to acquire a runway image obtained by photographing a runway while a mobile device moves on the runway;
a target model prediction unit configured to input the runway image into a target line inspection model to obtain a prediction result output by the target line inspection model, the prediction result comprising a predicted steering value and a predicted speed value;
a motion control unit configured to control the motion of the mobile device based on the predicted steering value and the predicted speed value;
wherein the target line inspection model is trained based on the training method according to any one of claims 1 to 5.
9. A line inspection system, comprising:
a terminal device configured to implement the steps of the training method according to any one of claims 1 to 5; and
a mobile device configured to implement the steps of the line inspection method of claim 6.
CN202110333069.6A 2021-03-29 2021-03-29 Line inspection model training method, line inspection method and line inspection system Active CN112966653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110333069.6A CN112966653B (en) 2021-03-29 2021-03-29 Line inspection model training method, line inspection method and line inspection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110333069.6A CN112966653B (en) 2021-03-29 2021-03-29 Line inspection model training method, line inspection method and line inspection system

Publications (2)

Publication Number Publication Date
CN112966653A (en) 2021-06-15
CN112966653B (en) 2023-12-19

Family

ID=76278741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110333069.6A Active CN112966653B (en) 2021-03-29 2021-03-29 Line inspection model training method, line inspection method and line inspection system

Country Status (1)

Country Link
CN (1) CN112966653B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034078A (en) * 2018-08-01 2018-12-18 腾讯科技(深圳)有限公司 Training method, age recognition methods and the relevant device of age identification model
CN109901595A (en) * 2019-04-16 2019-06-18 山东大学 A kind of automated driving system and method based on monocular cam and raspberry pie
CN111046752A (en) * 2019-11-26 2020-04-21 上海兴容信息技术有限公司 Indoor positioning method and device, computer equipment and storage medium
CN112329873A (en) * 2020-11-12 2021-02-05 苏州挚途科技有限公司 Training method of target detection model, target detection method and device

Also Published As

Publication number Publication date
CN112966653A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN110032954B (en) Intelligent identification and counting method and system for reinforcing steel bars
CN111950723B (en) Neural network model training method, image processing method, device and terminal equipment
US20200302276A1 (en) Artificial intelligence semiconductor chip having weights of variable compression ratio
CN108549892B (en) License plate image sharpening method based on convolutional neural network
US9600744B2 (en) Adaptive interest rate control for visual search
CN111476709B (en) Face image processing method and device and electronic equipment
US20190172193A1 (en) Method and apparatus for evaluating image definition, computer device and storage medium
CN104169943B (en) Method and apparatus for improved face recognition
CN109120854B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110148117B (en) Power equipment defect identification method and device based on power image and storage medium
CN111950570B (en) Target image extraction method, neural network training method and device
CN112085701A (en) Face ambiguity detection method and device, terminal equipment and storage medium
CN111104830A (en) Deep learning model for image recognition, training device and method of deep learning model
CN113421242B (en) Welding spot appearance quality detection method and device based on deep learning and terminal
CN114612987A (en) Expression recognition method and device
CN112507897A (en) Cross-modal face recognition method, device, equipment and storage medium
CN110648284A (en) Image processing method and device for uneven illumination
CN114330565A (en) Face recognition method and device
CN114359641A (en) Target object detection method, related device and equipment
CN112966653B (en) Line inspection model training method, line inspection method and line inspection system
CN108921792B (en) Method and device for processing pictures
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN111967529A (en) Identification method, device, equipment and system
CN112070181A (en) Image stream-based cooperative detection method and device and storage medium
CN111104965A (en) Vehicle target identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant