CN112966653A - Line patrol model training method, line patrol method and line patrol system

Info

Publication number
CN112966653A
Authority
CN
China
Prior art keywords
line patrol
training
training image
value
model
Prior art date
Legal status
Granted
Application number
CN202110333069.6A
Other languages
Chinese (zh)
Other versions
CN112966653B (en)
Inventor
陈鹏
杨若鹄
邝嘉隆
黄德斌
古蕊
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN202110333069.6A
Publication of CN112966653A
Application granted
Publication of CN112966653B
Status: Active

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road (under G06V 20/56, context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle)
    • G05D 1/0231: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (under G06F 18/21, design or setup of recognition systems or techniques)

Abstract

The application is applicable to the technical field of line patrol for mobile devices, and provides a line patrol model training method, a line patrol method and a line patrol system. The training method includes: inputting training images from a preset training image set into a line patrol model to obtain a prediction result, where each training image in the training image set is obtained by shooting a runway and has corresponding labeling information, the labeling information includes a labeled steering value and a labeled speed value, and the prediction result includes a predicted steering value and a predicted speed value; calculating a steering error value and a speed error value according to the labeling information and the prediction result corresponding to the training image; and adjusting parameters in the line patrol model according to the steering error value and the speed error value until a convergence condition is met, so as to obtain a target line patrol model. With this method, the mobile device can move at different speeds according to its position on the runway during line patrol, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.

Description

Line patrol model training method, line patrol method and line patrol system
Technical Field
The application belongs to the technical field of line patrol for mobile devices, and particularly relates to a line patrol model training method, a line patrol model training device, a line patrol method, a line patrol device and a line patrol system.
Background
In the field of mobile devices (such as educational carts and mobile robots), line patrol has always been a very classic application scenario. Conventional line patrol technology collects runway information at a fixed position and calculates the mobile device's current position on the runway from that information, from which a steering coefficient is then computed.
However, the conventional solution requires the mobile device to use only one speed during the whole line patrol, and when the runway contains complicated curves, the mobile device must move at a speed slow enough to pass those curves, so the speed of the mobile device is very low.
Disclosure of Invention
In view of the above, the present application provides a line patrol model training method and training device, a line patrol method, a line patrol device and a line patrol system, which enable a mobile device to move at different speeds according to its current position on the runway during line patrol, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.
In a first aspect, the present application provides a method for training a line patrol model, including:
inputting training images from a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model, wherein each training image in the training image set is obtained by shooting a runway, each training image has corresponding labeling information, the labeling information comprises a labeled steering value and a labeled speed value, and the prediction result comprises a predicted steering value and a predicted speed value;
calculating a steering error value and a speed error value according to the labeling information corresponding to the training image and the prediction result;
and adjusting parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition to obtain a target line patrol model.
In a second aspect, the present application provides a line patrol method, including:
acquiring a runway image obtained by shooting the runway while the mobile device moves on the runway;
inputting the runway image into a target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
controlling the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeled steering value and a labeled speed value.
In a third aspect, the present application provides a training device for a line patrol model, including:
the model prediction unit is used for inputting training images from a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model, wherein each training image in the training image set is obtained by shooting a runway, each training image has corresponding labeling information, the labeling information comprises a labeled steering value and a labeled speed value, and the prediction result comprises a predicted steering value and a predicted speed value;
an error calculation unit, configured to calculate a steering error value and a speed error value according to the labeling information corresponding to the training image and the prediction result;
and the parameter adjusting unit is used for adjusting the parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition to obtain a target line patrol model.
In a fourth aspect, the present application provides a line patrol apparatus, including:
the image acquisition unit is used for acquiring a runway image obtained by shooting the runway while the mobile device moves on the runway;
the target model prediction unit is used for inputting the runway image into a target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
a motion control unit for controlling the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeled steering value and a labeled speed value.
In a fifth aspect, the present application provides a line patrol system, including:
a terminal device for implementing the steps of the method as provided in the first aspect above;
a mobile device for carrying out the steps of the method as provided in the second aspect above.
As can be seen from the above, the present application first obtains a training image set by shooting a runway, where the training image set includes at least one training image and each training image has corresponding labeling information, the labeling information including a labeled steering value and a labeled speed value. The training images in the training image set are input into the line patrol model to obtain a prediction result output by the line patrol model, the prediction result including a predicted steering value and a predicted speed value; a steering error value and a speed error value are then calculated according to the labeling information and the prediction result corresponding to the training images; finally, the parameters in the line patrol model are adjusted according to the steering error value and the speed error value until the line patrol model meets a convergence condition, so as to obtain a target line patrol model. In this scheme, because the target line patrol model is trained using the training images together with their labeled steering values and labeled speed values, it can output different predicted steering values and predicted speed values according to the type of track (straight line, curve, right-angle bend, and so on) at the position where the mobile device is located, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency. It is understood that for the beneficial effects of the second to fifth aspects, reference may be made to the related description of the first aspect, which is not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic flowchart of a method for training a line patrol model according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating an example of a scenario for obtaining a training image set according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of a designated runway provided by an embodiment of the present application;
FIG. 4 is an exemplary diagram of a visualization interface of a marking tool provided by an embodiment of the present application;
fig. 5 is a diagram illustrating a network structure of a line patrol model according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a line patrol method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a training device for a line patrol model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a line patrol device provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 shows a flowchart of a training method for a line patrol model provided in an embodiment of the present application, where the training method is applied to a terminal device, and is detailed as follows:
Step 101, inputting a training image from a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model.
In this embodiment of the application, each training image in the preset training image set is obtained by shooting a runway, and each training image has corresponding labeling information, where the labeling information includes a labeled steering value and a labeled speed value. Specifically, the training image set may be obtained as follows. Referring to fig. 2, the mobile device may be a cart and the terminal device may be a computer, where the mobile device is connected to a Bluetooth handle via Bluetooth and the terminal device is connected to the mobile device via Wireless Fidelity (Wi-Fi), so that the Bluetooth handle can communicate with the mobile device and the mobile device can communicate with the terminal device. First, the user places the mobile device on a designated runway (one example is shown in fig. 3) and then sends a control steering value and a control speed value to the mobile device by manipulating the Bluetooth handle, instructing the mobile device to move on the designated runway based on the control steering value and the control speed value. The mobile device carries a vision module, such as a camera whose field of view faces the ground; through the camera, the mobile device photographs the designated runway at each moment of its motion to obtain training images. At the moment each training image is shot, the mobile device records the control steering value and the control speed value being sent by the Bluetooth handle at that moment, taking the control steering value as the labeled steering value corresponding to the training image and the control speed value as the labeled speed value corresponding to the training image. In this way, the mobile device obtains the training image set and the labeling information corresponding to each training image in the set without any manual labeling by the user. Because the computing performance of the mobile device is generally too limited to complete training, the mobile device sends the training image set and the labeling information corresponding to each training image to the terminal device through wireless communication, and the terminal device trains the line patrol model.
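As an illustration of this collection loop, the following Python sketch pairs each captured frame with the control values that were active at the moment of capture. All object names (camera, gamepad and their methods) are hypothetical stand-ins; the patent does not specify a software interface.

```python
import time

def collect_training_set(camera, gamepad, duration_s=120.0, period_s=0.1):
    """Record (image, labeled steering, labeled speed) triples.

    camera.capture() and gamepad.current_command() are assumed helpers
    standing in for the on-board camera and the Bluetooth handle.
    """
    samples = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        frame = camera.capture()                     # photo of the designated runway
        steering, speed = gamepad.current_command()  # control values from the handle
        # The control values at capture time become the labels, so no
        # manual annotation is needed.
        samples.append((frame, steering, speed))
        time.sleep(period_s)
    return samples
```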
In order for the line patrol model to recognize the features of the runway, each training image needs to contain at least one of the two sides of the runway; therefore, this embodiment constrains the field angle of the camera on the mobile device and the width of the runway. For example, the field angle of the camera may be between 80 and 120 degrees. This takes into account that the resolution cannot be too high when the single-chip microcomputer on the mobile device acquires data, so as to keep the frame rate up, and that the size of the feature map the single-chip microcomputer can process is limited: whatever the input size, the resolution is uniformly scaled to less than 300x300 so that the real-time requirement can be met. Accordingly, when the runway is 30 cm in front of the camera, the width of the runway may be set not to exceed 40 cm.
After the terminal device obtains the training image set and the labeling information corresponding to each training image in the set, the training images can be input into the line patrol model, and the line patrol model outputs a corresponding prediction result for each input training image, where the prediction result includes a predicted steering value and a predicted speed value.
Step 102, calculating a steering error value and a speed error value according to the labeling information and the prediction result corresponding to the training image.
In this embodiment of the application, the steering error value and the speed error value can be calculated from the labeling information corresponding to the input training image and the prediction result output by the line patrol model. Specifically, the loss function used in the iterative training of the line patrol model may be a mean square error function, and the steering error value and the speed error value are calculated from the mean square error function, the labeling information and the prediction result.
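A minimal sketch of this error calculation, assuming the model's outputs and labels are PyTorch tensors (using torch here is an implementation choice, not something the patent prescribes):

```python
import torch.nn.functional as F

def patrol_loss(pred_steering, pred_speed, label_steering, label_speed):
    # Mean square error for each output, matching the patent's loss choice.
    steering_error = F.mse_loss(pred_steering, label_steering)
    speed_error = F.mse_loss(pred_speed, label_speed)
    return steering_error, speed_error
```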
Step 103, adjusting parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets the convergence condition, to obtain a target line patrol model.
In this embodiment of the application, after the steering error value and the speed error value are obtained by calculation, the parameters in the line patrol model can be adjusted according to them. Then, steps 101, 102 and 103 are repeated to continue training the line patrol model with the adjusted parameters until it meets the convergence condition. The convergence condition may be, for example, that the error of the prediction results output by the line patrol model on a validation set does not decrease for 10 consecutive validations.
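The sketch below shows one way such a training loop with this convergence test could look. Summing the two error values into a single loss and the evaluate() helper are assumptions not fixed by the patent; patrol_loss is the function sketched above.

```python
def train(model, train_loader, val_loader, optimizer, patience=10):
    best_val, stale = float("inf"), 0
    while stale < patience:  # stop once validation error has not improved 10 times
        for images, labels in train_loader:
            pred_steer, pred_speed = model(images)
            steer_err, speed_err = patrol_loss(
                pred_steer, pred_speed, labels[:, 0:1], labels[:, 1:2])
            loss = steer_err + speed_err  # combining the errors is an assumption
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        val_err = evaluate(model, val_loader)  # hypothetical validation helper
        if val_err < best_val:
            best_val, stale = val_err, 0
        else:
            stale += 1
    return model
```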
Optionally, before the step 101, the method further includes:
enhancing the training images in the training image set to obtain enhanced training images;
correspondingly, the step 101 specifically includes:
and inputting the enhanced training image into the line patrol model to obtain a prediction result output by the line patrol model.
In this embodiment of the application, in order to improve the generalization of the line patrol model, enhancement processing may be performed on the training images in the training image set, where the enhancement processing includes at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur and Gaussian noise. The enhancement processing increases the diversity of the training images. During training, both the training images before enhancement and the training images after enhancement are used as training samples for the line patrol model, which improves the accuracy of the trained target line patrol model. Random brightness refers to randomly adjusting the brightness of a training image, random contrast to randomly adjusting its contrast, and random saturation to randomly adjusting its saturation, while motion blur refers to the blur effect produced by moving scenery in the training image.
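A sketch of such an enhancement pipeline using the albumentations library (an implementation choice; the patent names only the operations, and all parameter values below are illustrative):

```python
import albumentations as A
import numpy as np

augment = A.Compose([
    A.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.0,
                  p=0.5),                      # random brightness/contrast/saturation
    A.MotionBlur(blur_limit=7, p=0.2),         # motion blur
    A.GaussianBlur(blur_limit=(3, 7), p=0.2),  # Gaussian blur
    A.GaussNoise(p=0.2),                       # Gaussian noise
])

def enhance(image_u8: np.ndarray) -> np.ndarray:
    # image_u8: an HxWx3 uint8 training image
    return augment(image=image_u8)["image"]
```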
Optionally, before the step 101, the method further includes:
normalizing the pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, the step 101 specifically includes:
and inputting the training image after the normalization processing into the line patrol model to obtain a prediction result output by the line patrol model.
In this embodiment of the application, a training image is composed of pixels whose values are integers in the range 0-255. In practice, although training images without normalization can be input directly into the line patrol model, this makes training very slow; therefore, to improve the training speed of the line patrol model, the pixel values of the training images in the training image set can be normalized so that their range is scaled from 0-255 to 0-1. Illustratively, the normalization can be implemented by dividing each pixel value of the training image by 255; for example, normalizing the pixel value 255 means dividing it by 255, so the normalized pixel value becomes 1. During training, the normalized training images are input into the line patrol model, which improves its training speed.
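A minimal sketch of this normalization step:

```python
import numpy as np

def normalize(image_u8: np.ndarray) -> np.ndarray:
    # Scale integer pixel values in 0-255 to floats in 0-1.
    return image_u8.astype(np.float32) / 255.0
```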
Optionally, before the step 101, the method further includes:
for each training image in the training image set, taking the sum of half of the width of the training image and the labeled steering value corresponding to the training image as a moving coordinate value;
displaying the training image and a moving point corresponding to the moving coordinate value in the training image, so as to instruct the user to mark the training image;
eliminating the training images marked as unqualified from the training image set to obtain an eliminated training image set;
correspondingly, the step 101 specifically includes:
and inputting the training images in the eliminated training image set into the line patrol model to obtain a prediction result output by the line patrol model.
In this embodiment, for each training image in the training image set, the sum of half of the width of the training image and the labeled steering value corresponding to the training image may be used as a moving coordinate value. For example, if the width of training image 1 is a and the labeled steering value corresponding to training image 1 is b, then the moving coordinate value corresponding to training image 1 is a/2 + b. The moving coordinate value is the coordinate of the moving point in training image 1. The terminal device can display the training image, with the moving point at the moving coordinate value, on its screen; the user then judges whether the position of the moving point deviates from the manual expectation, and marks the training image accordingly. If the position of the moving point deviates from the manual expectation, the user can mark the training image as unqualified through the marking tool. Referring to fig. 4, which shows the visual interface of the marking tool, the user can select a training image set through the file option and set an automatic playing time, so that the marking tool plays the training images in the visual interface in the order in which they were shot and displays the moving point in each training image. During playback, when the user finds that the moving point of a certain training image deviates from the manual expectation, that training image can be marked as unqualified through the 'pause playing marking' option. Finally, the training images marked as unqualified are eliminated from the training image set to obtain the eliminated training image set.
In one embodiment, since the motion of the mobile device is continuous, the user does not need to mark the training images frame by frame; during playback it is enough to mark the first training image whose moving point deviates (denoted the leading-frame marked image) and the last training image whose moving point deviates (denoted the trailing-frame marked image). The leading-frame marked image, the trailing-frame marked image and all training images between them are then removed directly.
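The OpenCV sketch below illustrates how the moving point could be computed and displayed for review. The vertical placement of the point and the keyboard-based reject action are assumptions; the patent fixes only the x-coordinate (a/2 + b) and the visual check.

```python
import cv2
import numpy as np

def review_frame(image: np.ndarray, steering_label: float) -> bool:
    """Show the moving point; return True if the user rejects the frame."""
    h, w = image.shape[:2]
    x = int(w / 2 + steering_label)  # moving coordinate value: a/2 + b
    y = h // 2                       # assumed vertical placement
    vis = image.copy()
    cv2.circle(vis, (x, y), 5, (0, 0, 255), -1)  # draw the moving point in red
    cv2.imshow("mark check", vis)
    return cv2.waitKey(0) == ord("r")  # hypothetical: press 'r' to mark unqualified
```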
Optionally, considering that the line patrol model ultimately has to run on the mobile device, and the computing capability of the single-chip microcomputer on the mobile device is limited, the network structure of the line patrol model should be as simple as possible so that the single-chip microcomputer can run it. Based on this, the present application provides a network structure for the line patrol model, shown in fig. 5: the model consists of five convolutional layers, one pooling layer and two fully connected layers, where the convolution kernels in the five convolutional layers are no larger than 3x3, the two fully connected layers perform General Matrix Multiplication (GEMM), and the activation function used throughout the model is the Rectified Linear Unit (ReLU).
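A PyTorch sketch of a network matching this description follows. The channel widths, strides and the position of the pooling layer are assumptions; the patent fixes only the layer counts, the 3x3 kernel bound and the ReLU activation.

```python
import torch
import torch.nn as nn

class PatrolNet(nn.Module):
    """Five conv layers, one pooling layer, two fully connected layers."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 24, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(24, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 48, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # the single pooling layer
        )
        self.fc1 = nn.Linear(64, 32)          # fully connected layers: GEMM
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = torch.relu(self.fc1(x))
        out = self.fc2(x)
        return out[:, 0:1], out[:, 1:2]       # predicted steering, predicted speed
```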
Optionally, because the computing power of the single-chip microcomputer on the mobile device is limited (it can only process 8-bit data) and the mobile device has a high real-time requirement on the target line patrol model, the target line patrol model trained on the terminal device cannot be deployed to the mobile device directly. Based on this, the method further includes, after step 103:
and carrying out quantization processing on the target line patrol model to obtain a quantized model.
In this embodiment of the application, after the target line patrol model is obtained by training, quantization processing can be performed on it so that its precision is converted from 32 bits to 8 bits, yielding the quantized model; this reduces the amount of computation of the model and thereby achieves acceleration. The quantized model can be deployed to the mobile device and run by the single-chip microcomputer on the mobile device. The parameters in the target line patrol model are floating-point data, such as 6.0, while the parameters in the quantized model are fixed-point data, such as 127. The quantization formula for converting floating-point data to fixed-point data is:

Q = R ÷ S + Z,

where Q represents the quantized fixed-point value, R represents the real floating-point value, Z represents the fixed-point value corresponding to the floating-point value 0, and S represents the smallest scale that can be represented after fixed-point quantization.
Meanwhile, S and Z are evaluated by the formulas:

S = (Rmax - Rmin) ÷ (Qmax - Qmin),
Z = Qmax - Rmax ÷ S,

where Rmax represents the maximum floating-point value, Rmin represents the minimum floating-point value, Qmax represents the maximum fixed-point value, and Qmin represents the minimum fixed-point value.
For example, assume a weight activation value lies in the range [-2.0, 6.0] and needs to be quantized with 8-bit data precision, so that fixed-point quantization can represent the range [-128, 127]. The calculation process is as follows:

S = (6.0 - (-2.0)) ÷ (127 - (-128)) = 8.0 ÷ 255 ≈ 0.0313725,

Z = 127 - 6.0 ÷ 0.0313725 ≈ -64.25 ≈ -64.
Then the following correspondence exists:

Quantized fixed-point value    Floating-point value
-128                           -2.0
-64                            0.0
127                            6.0
From the above values, if there is a true weight value of 0.28, i.e. R = 0.28, then the corresponding Q evaluates to:

Q = 0.28 ÷ 0.0313725 + (-64) ≈ -55.07 ≈ -55.
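The quantization arithmetic above can be reproduced with the following sketch, which implements the patent's formulas directly (clamping out-of-range inputs to the 8-bit range is an added assumption):

```python
def quantize_params(r_min, r_max, q_min=-128, q_max=127):
    # S = (Rmax - Rmin) / (Qmax - Qmin); Z = Qmax - Rmax / S
    s = (r_max - r_min) / (q_max - q_min)
    z = round(q_max - r_max / s)
    return s, z

def quantize(r, s, z, q_min=-128, q_max=127):
    # Q = R / S + Z, rounded and clamped to the fixed-point range
    return int(max(q_min, min(q_max, round(r / s + z))))

s, z = quantize_params(-2.0, 6.0)  # s ~ 0.0313725, z == -64
print(quantize(0.28, s, z))        # prints -55, matching the worked example
```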
As can be seen from the above, the present application first obtains a training image set by shooting a runway, where the training image set includes at least one training image and each training image has corresponding labeling information, the labeling information including a labeled steering value and a labeled speed value. The training images in the training image set are input into the line patrol model to obtain a prediction result output by the line patrol model, the prediction result including a predicted steering value and a predicted speed value; a steering error value and a speed error value are then calculated according to the labeling information and the prediction result corresponding to the training images; finally, the parameters in the line patrol model are adjusted according to the steering error value and the speed error value until the line patrol model meets a convergence condition, so as to obtain a target line patrol model. Because the target line patrol model is trained using the training images together with their labeled steering values and labeled speed values, it can output different predicted steering values and predicted speed values according to the type of track (straight line, curve, right-angle bend, and so on) at the position where the mobile device is located, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a flowchart of a line patrol method provided in an embodiment of the present application, where the line patrol method is applied to a mobile device, and is detailed as follows:
Step 601, obtaining a runway image by shooting the runway while the mobile device moves on the runway.
Step 602, inputting the runway image into the target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value.
Step 603, controlling the movement of the mobile device based on the predicted steering value and the predicted speed value.
In this embodiment of the application, after the target line patrol model trained on the terminal device has been quantized, the quantized target line patrol model can be deployed on the mobile device. While the mobile device moves on the runway, the runway is photographed at a preset frequency to obtain runway images; for example, the mobile device may photograph the runway once every second during its motion. Each time a frame of runway image is shot, the mobile device immediately inputs it into the target line patrol model, and the target line patrol model outputs a corresponding prediction result, where the prediction result includes a predicted steering value and a predicted speed value. Finally, the mobile device moves based on the predicted steering value and the predicted speed value. The target line patrol model is trained based on a training image set and the labeling information corresponding to each training image in the training image set, where the labeling information includes a labeled steering value and a labeled speed value; specifically, the target line patrol model is obtained through the steps of the line patrol model training method provided by the embodiments of the present application.
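A sketch of this control loop is shown below; camera, drive and the model call are hypothetical stand-ins for the device-specific interfaces the patent leaves open, and normalize() is the helper sketched earlier.

```python
import time

def patrol(camera, model, drive, period_s=1.0):
    while True:
        frame = normalize(camera.capture())   # runway image, scaled to 0-1
        steering, speed = model(frame)        # quantized target line patrol model
        drive.apply(steering=steering, speed=speed)
        time.sleep(period_s)                  # e.g. one frame per second
```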
As can be seen from the above, in the present application, a runway image obtained by shooting the runway while the mobile device moves on the runway is first acquired; the runway image is then input into the target line patrol model to obtain a prediction result output by the target line patrol model, where the prediction result includes a predicted steering value and a predicted speed value; finally, the movement of the mobile device is controlled based on the predicted steering value and the predicted speed value. The target line patrol model is trained based on a training image set and the labeling information corresponding to each training image in the training image set, where the labeling information includes a labeled steering value and a labeled speed value. Because the target line patrol model is trained using the training images together with their labeled steering values and labeled speed values, it can output different predicted steering values and predicted speed values according to the type of track (straight line, curve, right-angle bend, and so on) at the position where the mobile device is located, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.
Fig. 7 is a schematic structural diagram of a training device for a line patrol model according to an embodiment of the present application, where the training device is applied to a terminal device, and for convenience of description, only a part related to the embodiment of the present application is shown.
The training apparatus 700 for the line patrol model includes:
a model prediction unit 701, configured to input training images in a preset training image set to the line patrol model, and obtain a prediction result output by the line patrol model, where each training image in the training image set is obtained by shooting a runway, and each training image has corresponding labeling information, where the labeling information includes a labeled steering value and a labeled speed value, and the prediction result includes a predicted steering value and a predicted speed value;
an error calculation unit 702, configured to calculate a steering error value and a speed error value according to the label information corresponding to the training image and the prediction result;
a parameter adjusting unit 703, configured to adjust a parameter in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition, so as to obtain a target line patrol model.
Optionally, the training apparatus 700 for the line patrol model further includes:
and the enhancement processing unit is used for enhancing the training images in the training image set to obtain the enhanced training images, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur and Gaussian noise.
The model prediction unit 701 is specifically configured to input the enhanced training image into the line patrol model.
Optionally, the training apparatus 700 for the line patrol model further includes:
a normalization processing unit, configured to perform normalization processing on pixel values of training images in the training image set to obtain a training image after normalization processing;
the model prediction unit 701 is specifically configured to input the training image after the normalization processing to the patrol model.
Optionally, the training apparatus 700 for the line patrol model further includes:
a coordinate calculation unit, configured to take, for each training image in the training image set, the sum of half of the width of the training image and the labeled steering value corresponding to the training image as a moving coordinate value;
an image display unit, configured to display the training image and a moving point corresponding to the moving coordinate value in the training image, so as to instruct a user to mark the training image;
the image removing unit is used for removing the training images marked as unqualified in the training image set to obtain a removed training image set;
the model prediction unit 701 is specifically configured to input the training images in the eliminated training image set to the patrol model.
Optionally, the line patrol model includes five convolutional layers and one pooling layer, and the size of each convolution kernel in the five convolutional layers is smaller than or equal to 3 × 3.
Optionally, the training apparatus 700 for the line patrol model further includes:
and the quantization processing unit is used for performing quantization processing on the target line patrol model to obtain a quantized model, wherein parameters in the target line patrol model are floating point data, and parameters in the quantized model are fixed point data.
As can be seen from the above, the present application first obtains a training image set by shooting a runway, where the training image set includes at least one training image and each training image has corresponding labeling information, the labeling information including a labeled steering value and a labeled speed value. The training images in the training image set are input into the line patrol model to obtain a prediction result output by the line patrol model, the prediction result including a predicted steering value and a predicted speed value; a steering error value and a speed error value are then calculated according to the labeling information and the prediction result corresponding to the training images; finally, the parameters in the line patrol model are adjusted according to the steering error value and the speed error value until the line patrol model meets a convergence condition, so as to obtain a target line patrol model. Because the target line patrol model is trained using the training images together with their labeled steering values and labeled speed values, it can output different predicted steering values and predicted speed values according to the type of track (straight line, curve, right-angle bend, and so on) at the position where the mobile device is located, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.
Fig. 8 shows a schematic structural diagram of a line patrol apparatus provided in an embodiment of the present application, where the line patrol apparatus is applied to a mobile device, and for convenience of description, only a part related to the embodiment of the present application is shown.
This line patrol device 800 includes:
an image acquisition unit 801, configured to acquire a runway image obtained by shooting the runway while the mobile device moves on the runway;
a target model prediction unit 802, configured to input the runway image into a target line patrol model and obtain a prediction result output by the target line patrol model, where the prediction result includes a predicted steering value and a predicted speed value;
a motion control unit 803, configured to control the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, where the labeling information includes a labeled steering value and a labeled speed value.
As can be seen from the above, in the present application, a runway image obtained by shooting the runway while the mobile device moves on the runway is first acquired; the runway image is then input into the target line patrol model to obtain a prediction result output by the target line patrol model, where the prediction result includes a predicted steering value and a predicted speed value; finally, the movement of the mobile device is controlled based on the predicted steering value and the predicted speed value. The target line patrol model is trained based on a training image set and the labeling information corresponding to each training image in the training image set, where the labeling information includes a labeled steering value and a labeled speed value. Because the target line patrol model is trained using the training images together with their labeled steering values and labeled speed values, it can output different predicted steering values and predicted speed values according to the type of track (straight line, curve, right-angle bend, and so on) at the position where the mobile device is located, thereby increasing the line patrol speed of the mobile device and improving its line patrol efficiency.
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 9, the terminal device 9 of this embodiment includes: at least one first processor 90 (only one is shown in fig. 9), a first memory 91, and a first computer program 92 stored in the first memory 91 and executable on the at least one first processor 90, wherein the first processor 90 executes the first computer program 92 to perform the steps of:
inputting training images from a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model, wherein each training image in the training image set is obtained by shooting a runway, each training image has corresponding labeling information, the labeling information comprises a labeled steering value and a labeled speed value, and the prediction result comprises a predicted steering value and a predicted speed value;
calculating a steering error value and a speed error value according to the labeling information corresponding to the training image and the prediction result;
and adjusting parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition to obtain a target line patrol model.
Assuming that the above is a first possible embodiment, in a second possible embodiment provided on the basis of the first possible embodiment, before the training images in the preset training image set are input into the line patrol model, the first processor 90, when executing the first computer program 92, further implements the following steps:
enhancing the training images in the training image set to obtain enhanced training images, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur and Gaussian noise;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the training image after the enhancement processing to the line patrol model.
In a third possible embodiment based on the first possible embodiment, before the training images in the preset training image set are input into the line patrol model, the first processor 90, when executing the first computer program 92, further implements the following steps:
normalizing the pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the training image after the normalization processing to the line patrol model.
In a fourth possible embodiment based on the first possible embodiment, before the training images in the preset training image set are input into the line patrol model, the first processor 90, when executing the first computer program 92, further implements the following steps:
for each training image in the training image set, taking the sum of half of the width of the training image and the labeled steering value corresponding to the training image as a moving coordinate value;
displaying the training image and a moving point corresponding to the moving coordinate value in the training image so as to instruct a user to mark the training image;
eliminating the training images marked as unqualified in the training image set to obtain an eliminated training image set;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the training images in the training image set after being eliminated to the line patrol model.
In a fifth possible embodiment provided based on the first possible embodiment, the patrol model includes five convolutional layers and one pooling layer, and the size of a convolutional kernel in the five convolutional layers is less than or equal to 3 × 3.
The terminal device 9 may include, but is not limited to, the first processor 90 and the first memory 91. Those skilled in the art will appreciate that fig. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9, which may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, and the like.
The first processor 90 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The first memory 91 may in some embodiments be an internal storage unit of the terminal device 9, for example a hard disk or memory of the terminal device 9. In other embodiments, the first memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 9. Further, the first memory 91 may include both an internal storage unit and an external storage device of the terminal device 9. The first memory 91 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the first computer program, and may also be used to temporarily store data that has been output or is to be output.
Fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present application. As shown in fig. 10, the mobile device 10 of this embodiment includes: at least one second processor 100 (only one is shown in fig. 10), a second memory 101, and a second computer program 102 stored in the second memory 101 and executable on the at least one second processor 100, wherein the second processor 100 implements the following steps when executing the second computer program 102:
acquiring a runway image obtained by shooting the runway while the mobile device moves on the runway;
inputting the runway image into a target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
controlling the movement of the mobile device based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeled steering value and a labeled speed value.
The mobile device 10 may include, but is not limited to, the second processor 100 and the second memory 101. Those skilled in the art will appreciate that fig. 10 is only an example of the mobile device 10 and does not constitute a limitation on the mobile device 10, which may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, and the like.
The second processor 100 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The second memory 101 may in some embodiments be an internal storage unit of the mobile device 10, for example a hard disk or memory of the mobile device 10. In other embodiments, the second memory 101 may also be an external storage device of the mobile device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the mobile device 10. Further, the second memory 101 may include both an internal storage unit and an external storage device of the mobile device 10. The second memory 101 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the second computer program, and may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a server, enables the server to implement the steps in the above method embodiments.
The embodiment of the application further provides a line patrol system which comprises the mobile device and the terminal device.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a server, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative: the division of the modules or units is only one kind of logical function division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for training a line patrol model, characterized by comprising the following steps:
inputting training images in a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model, wherein each training image in the training image set is obtained by shooting a runway, each training image has corresponding labeling information, the labeling information comprises a labeling steering value and a labeling speed value, and the prediction result comprises a predicted steering value and a predicted speed value;
calculating a steering error value and a speed error value according to the labeling information corresponding to the training image and the prediction result;
and adjusting parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition to obtain a target line patrol model.
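
By way of illustration only, and not as part of the claim language, one training iteration of the kind recited in claim 1 might be sketched in PyTorch as follows; the model interface (two scalar outputs per image), the mean-squared-error losses, and the equal weighting of the steering and speed errors are all assumptions:

    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, images, label_steer, label_speed):
        # Forward pass: the line patrol model outputs a predicted
        # steering value and a predicted speed value per image.
        pred_steer, pred_speed = model(images)

        # Steering error value and speed error value against the labels.
        steer_loss = F.mse_loss(pred_steer, label_steer)
        speed_loss = F.mse_loss(pred_speed, label_speed)
        loss = steer_loss + speed_loss  # equal weighting is an assumption

        # Adjust the parameters in the line patrol model from the errors.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return steer_loss.item(), speed_loss.item()

Repeating this step until a convergence condition is met (for example, the loss no longer decreasing on held-out data) would yield the target line patrol model.
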
2. The training method according to claim 1, wherein before the inputting of the training images in the preset training image set into the line patrol model, the method further comprises:
enhancing the training images in the training image set to obtain enhanced training images, wherein the enhancement processing comprises at least one of random brightness, random contrast, random saturation, motion blur, Gaussian blur and Gaussian noise;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the enhanced training image to the line patrol model.
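
A minimal sketch of the enhancement processing enumerated in claim 2, assuming OpenCV and NumPy; the probabilities, kernel sizes, and parameter ranges below are illustrative choices, not values from the patent:

    import random
    import numpy as np
    import cv2

    def augment(img):
        # img: HxWx3 uint8 training image; each listed enhancement
        # is applied independently at random.
        img = img.astype(np.float32)
        if random.random() < 0.5:  # random brightness
            img += random.uniform(-30, 30)
        if random.random() < 0.5:  # random contrast
            img = (img - 128.0) * random.uniform(0.7, 1.3) + 128.0
        if random.random() < 0.5:  # random saturation
            u8 = np.clip(img, 0, 255).astype(np.uint8)
            hsv = cv2.cvtColor(u8, cv2.COLOR_BGR2HSV).astype(np.float32)
            hsv[..., 1] = np.clip(hsv[..., 1] * random.uniform(0.7, 1.3), 0, 255)
            img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR).astype(np.float32)
        if random.random() < 0.3:  # motion blur (horizontal kernel)
            k = np.zeros((5, 5), np.float32)
            k[2, :] = 1.0 / 5.0
            img = cv2.filter2D(img, -1, k)
        if random.random() < 0.3:  # Gaussian blur
            img = cv2.GaussianBlur(img, (5, 5), 0)
        if random.random() < 0.3:  # Gaussian noise
            img = img + np.random.normal(0, 8, img.shape)
        return np.clip(img, 0, 255).astype(np.uint8)
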
3. The training method according to claim 1, wherein before the inputting of the training images in the preset training image set into the line patrol model, the method further comprises:
normalizing the pixel values of the training images in the training image set to obtain normalized training images;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the training image after the normalization processing into the line patrol model.
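
A minimal sketch of the normalization in claim 3; the claim does not fix a target range, so the scaling of the 8-bit pixel values into [0, 1] below is an assumption (zero-mean standardization would serve equally):

    import numpy as np

    def normalize(img):
        # Map uint8 pixel values in [0, 255] to float32 in [0, 1].
        return img.astype(np.float32) / 255.0
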
4. The training method according to claim 1, wherein before the inputting of the training images in the preset training image set into the line patrol model, the method further comprises:
for each training image in the training image set, taking the sum of half the width of the training image and the labeled steering value corresponding to the training image as a moving coordinate value;
displaying the training image and a moving point corresponding to the moving coordinate value in the training image, so as to prompt a user to mark the training image;
eliminating the training images marked as unqualified from the training image set to obtain a screened training image set;
correspondingly, the inputting of the training images in the preset training image set to the line patrol model includes:
and inputting the training images in the screened training image set into the line patrol model.
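
A minimal sketch of the annotation check in claim 4: the moving point's horizontal coordinate is half the image width plus the labeled steering value. The vertical placement of the point and the key binding used to mark an image as unqualified are assumptions made for illustration:

    import cv2

    def review_annotation(img, label_steer):
        h, w = img.shape[:2]
        # Moving coordinate value = half the image width + labeled steering value.
        x = int(w / 2 + label_steer)
        vis = img.copy()
        cv2.circle(vis, (x, h // 2), 5, (0, 0, 255), -1)  # y-position assumed
        cv2.imshow("annotation check", vis)
        key = cv2.waitKey(0)
        # Pressing 'd' (an assumed binding) marks the image as unqualified,
        # so it can be eliminated from the training image set.
        return key != ord('d')
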
5. The training method according to claim 1, wherein the line patrol model includes five convolutional layers and one pooling layer, and the size of the convolution kernels in the five convolutional layers is less than or equal to 3 x 3.
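
By way of illustration, a network meeting the constraints of claim 5 (five convolutional layers with kernels no larger than 3 x 3 and a single pooling layer) might be sketched in PyTorch as follows; the channel widths, strides, position of the pooling layer, and the regression head are assumptions:

    import torch.nn as nn

    class LinePatrolNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # the single pooling layer
            )
            self.head = nn.Linear(64, 2)  # steering value, speed value

        def forward(self, x):
            out = self.head(self.features(x).flatten(1))
            return out[:, 0], out[:, 1]

Small 3 x 3 kernels keep the parameter count and computation low, which matters when the model runs on the mobile device itself.
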
6. The training method according to claim 1, further comprising, after the obtaining the target patrol model:
and quantizing the target line patrol model to obtain a quantized model, wherein parameters in the target line patrol model are floating point data, and parameters in the quantized model are fixed point data.
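
A minimal sketch of the idea behind claim 6, converting floating-point parameters to fixed-point data; the 8-bit affine scheme below is one common choice and is an assumption, as the claim fixes neither the bit width nor the quantization scheme:

    import numpy as np

    def quantize_tensor(w, num_bits=8):
        # Affine quantization of a float weight tensor to signed
        # fixed-point integers; dequantize as scale * (q - zero_point).
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        scale = (w.max() - w.min()) / (qmax - qmin)
        scale = float(max(scale, 1e-12))  # guard against constant tensors
        zero_point = int(round(qmin - w.min() / scale))
        q = np.clip(np.round(w / scale) + zero_point, qmin, qmax)
        return q.astype(np.int8), scale, zero_point
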
7. A line patrol method, characterized by comprising the following steps:
acquiring a runway image obtained by shooting a runway while a mobile device moves on the runway;
inputting the runway image into a target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
controlling the mobile device to move based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
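
By way of illustration, the line patrol loop of claim 7 might be sketched as follows; drive() is a hypothetical placeholder for the mobile device's motion interface, and the image preprocessing must match whatever was applied during training:

    import cv2
    import torch

    def patrol_loop(model, drive, camera_index=0):
        # Shoot the runway while the mobile device moves, feed each
        # frame to the target line patrol model, and act on the
        # predicted steering value and predicted speed value.
        cap = cv2.VideoCapture(camera_index)
        model.eval()
        with torch.no_grad():
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                x = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
                steer, speed = model(x.unsqueeze(0))
                drive(steer.item(), speed.item())  # hypothetical motion call
        cap.release()
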
8. A training device for a line patrol model, characterized by comprising:
a model prediction unit, configured to input training images in a preset training image set into the line patrol model to obtain a prediction result output by the line patrol model, wherein each training image in the training image set is obtained by shooting a runway, each training image has corresponding labeling information, the labeling information comprises a labeling steering value and a labeling speed value, and the prediction result comprises a predicted steering value and a predicted speed value;
an error calculation unit, configured to calculate a steering error value and a speed error value according to the labeling information corresponding to the training image and the prediction result;
and a parameter adjustment unit, configured to adjust the parameters in the line patrol model according to the steering error value and the speed error value until the line patrol model meets a convergence condition, to obtain a target line patrol model.
9. A line patrol device, comprising:
an image acquisition unit, configured to acquire a runway image obtained by shooting a runway while a mobile device moves on the runway;
a target model prediction unit, configured to input the runway image into a target line patrol model to obtain a prediction result output by the target line patrol model, wherein the prediction result comprises a predicted steering value and a predicted speed value;
and a motion control unit, configured to control the mobile device to move based on the predicted steering value and the predicted speed value;
the target line patrol model is obtained by training based on a training image set and labeling information corresponding to each training image in the training image set, wherein the labeling information comprises a labeling steering value and a labeling speed value.
10. A line patrol system, comprising:
a terminal device, configured to implement the steps of the training method according to any one of claims 1 to 6; and
a mobile device, configured to implement the steps of the line patrol method according to claim 7.
Application CN202110333069.6A, priority date 2021-03-29, filing date 2021-03-29: Line inspection model training method, line inspection method and line inspection system. Granted as CN112966653B; status: Active.

Priority Applications (1)

Application CN202110333069.6A (granted as CN112966653B), priority date 2021-03-29, filing date 2021-03-29: Line inspection model training method, line inspection method and line inspection system


Publications (2)

CN112966653A, published 2021-06-15
CN112966653B (grant), published 2023-12-19

Family ID: 76278741

Family Applications (1)

Application CN202110333069.6A (granted as CN112966653B), priority date 2021-03-29, filing date 2021-03-29: Line inspection model training method, line inspection method and line inspection system

Country Status (1)

CN: CN112966653B (granted)

Citations (4)

* Cited by examiner, † Cited by third party

CN109034078A *, priority 2018-08-01, published 2018-12-18, Tencent Technology (Shenzhen) Co., Ltd.: Training method of an age identification model, age identification method, and related device
CN109901595A *, priority 2019-04-16, published 2019-06-18, Shandong University: Automatic driving system and method based on a monocular camera and a Raspberry Pi
CN111046752A *, priority 2019-11-26, published 2020-04-21, Shanghai Xingrong Information Technology Co., Ltd.: Indoor positioning method and device, computer equipment and storage medium
CN112329873A *, priority 2020-11-12, published 2021-02-05, Suzhou Zhitu Technology Co., Ltd.: Training method of target detection model, target detection method and device


Also Published As

CN112966653B, published 2023-12-19

Similar Documents

Publication number and title:
CN108898086B (en) Video image processing method and device, computer readable medium and electronic equipment
CN111179339B (en) Coordinate positioning method, device, equipment and storage medium based on triangulation
US11181624B2 (en) Method and apparatus for calibration between laser radar and camera, device and storage medium
CN111950723B (en) Neural network model training method, image processing method, device and terminal equipment
CN110060276B (en) Object tracking method, tracking processing method, corresponding device and electronic equipment
EP2903256B1 (en) Image processing device, image processing method and program
CN112561978B (en) Training method of depth estimation network, depth estimation method of image and equipment
CN111950570B (en) Target image extraction method, neural network training method and device
CN111208783A (en) Action simulation method, device, terminal and computer storage medium
JP5500400B1 (en) Image processing apparatus, image processing method, and image processing program
CN115880435A (en) Image reconstruction method, model training method, device, electronic device and medium
CN115457364A (en) Target detection knowledge distillation method and device, terminal equipment and storage medium
CN112966653A (en) Line patrol model training method, line patrol method and line patrol system
CN110765926B (en) Picture book identification method, device, electronic equipment and storage medium
CN111104965A (en) Vehicle target identification method and device
CN112215036A (en) Cross-mirror tracking method, device, equipment and storage medium
CN111126101A (en) Method and device for determining key point position, electronic equipment and storage medium
CN113298098B (en) Fundamental matrix estimation method and related product
CN114758076A (en) Training method and device for deep learning model for building three-dimensional model
CN113313010A (en) Face key point detection model training method, device and equipment
CN113724176A (en) Multi-camera motion capture seamless connection method, device, terminal and medium
CN114120423A (en) Face image detection method and device, electronic equipment and computer readable medium
CN114245102A (en) Vehicle-mounted camera shake identification method and device and computer readable storage medium
CN109711363B (en) Vehicle positioning method, device, equipment and storage medium
CN106604041B (en) Panoramic video distribution method and system based on visual continuity

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant