CN111783623B - Algorithm adjustment method, device, equipment and medium for identifying positioning element - Google Patents

Algorithm adjustment method, device, equipment and medium for identifying positioning element

Info

Publication number
CN111783623B
Authority
CN
China
Prior art keywords
positioning element
information
identification
parameter
determining
Prior art date
Legal status
Active
Application number
CN202010605391.5A
Other languages
Chinese (zh)
Other versions
CN111783623A (en)
Inventor
赵晓健
向旭东
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010605391.5A
Publication of CN111783623A
Application granted
Publication of CN111783623B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an algorithm adjustment method, device, equipment and medium for identifying positioning elements, relating to autonomous parking and automatic driving. The specific implementation scheme is as follows: acquiring labeling information of the positioning element, and acquiring identification information of the positioning element output by an identification algorithm; comparing and analyzing the element identifier and/or angular point position information in the labeling information against the element identifier and/or angular point position information in the identification information to obtain an analysis result, the analysis result comprising at least one accuracy evaluation parameter; and adjusting the identification algorithm according to the at least one accuracy evaluation parameter, the adjusted identification algorithm being used to identify the positioning element while the vehicle is driving. The accuracy of the identification algorithm is thereby evaluated and analyzed automatically and quantified precisely, so that the adjusted identification algorithm identifies the positioning element accurately.

Description

Algorithm adjustment method, device, equipment and medium for identifying positioning element
Technical Field
Embodiments of the present application relate to autonomous parking and autopilot in data/image processing, and more particularly, to an algorithm adjustment method, apparatus, device, and medium for identifying positioning elements.
Background
When a vehicle parks, a number of positioning elements may be provided in the automatic parking scene; for example, the positioning elements are pillars, wall stickers, or the like. The vehicle may identify the positioning elements using an identification algorithm and then park based on the positioning elements. The accuracy with which the identification algorithm identifies the positioning elements therefore needs to be evaluated and verified, so that the positioning elements can be accurately identified after the identification algorithm is applied to the vehicle.
In the prior art, the identification result output by the identification algorithm and the positioning element in the real scene can be compared manually, so as to determine whether the identification algorithm is accurate.
However, in the prior art, manually assessing the accuracy of the recognition algorithm depends on human experience and subjective judgment, which affects the reliability and accuracy of the evaluation of the recognition algorithm; as a result, the recognition algorithm cannot be adjusted correctly when it is adjusted according to the manual evaluation result, and the adjusted recognition algorithm cannot accurately recognize the positioning element.
Disclosure of Invention
The application provides an algorithm adjustment method, device, equipment and medium for identifying a positioning element, which can be used to solve the problem in the prior art that the positioning element cannot be accurately identified and located.
According to a first aspect of the present application, there is provided an algorithm adjustment method for identifying a positioning element, comprising:
acquiring marking information of the positioning element, and acquiring identification information of the positioning element output by an identification algorithm, wherein the marking information comprises element identification and/or angular point position information of the positioning element, and the identification information comprises element identification and/or angular point position information of the positioning element;
comparing and analyzing element identification and/or angular point position information in the labeling information and element identification and/or angular point position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the running process of the vehicle.
According to a second aspect of the present application, there is provided an algorithm adjustment device for identifying a positioning element, comprising:
the first acquisition unit is used for acquiring the labeling information of the positioning element, wherein the labeling information comprises element identification and/or angular point position information of the positioning element;
The second acquisition unit is used for acquiring the identification information of the positioning element output by the identification algorithm, wherein the identification information comprises element identification and/or angular point position information of the positioning element;
the comparison unit is used for comparing and analyzing the element identification and/or angular point position information in the labeling information and the element identification and/or angular point position information in the identification information to obtain an analysis result, and the analysis result comprises at least one accuracy evaluation parameter;
and the adjusting unit is used for adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the running process of the vehicle.
According to a third aspect of the present application, there is provided an algorithm adjustment method for identifying a positioning element, comprising:
comparing and analyzing element identification and/or angular point position information in labeling information of the positioning element and element identification and/or angular point position information in identification information of the positioning element output by an identification algorithm to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the running process of the vehicle.
According to a fourth aspect of the present application, there is provided an electronic device comprising: a processor and a memory; the memory stores executable instructions of the processor; wherein the processor is configured to perform the algorithm adjustment method for identifying a positioning element as claimed in any one of the first aspects or the algorithm adjustment method for identifying a positioning element as claimed in the third aspect via execution of the executable instructions.
According to a fifth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the algorithm-adjustment method for identifying a positioning element of any one of the first aspects, or perform the algorithm-adjustment method for identifying a positioning element as described in the third aspect.
According to a sixth aspect of the present application, there is provided a program product comprising: a computer program stored in a readable storage medium, from which the computer program can be read by at least one processor of a server, the at least one processor executing the computer program causing the server to perform the algorithm adjustment method for identifying a positioning element as described in any one of the first aspects, or to perform the algorithm adjustment method for identifying a positioning element as described in the third aspect.
According to the technical scheme, at least one accuracy evaluation parameter is obtained by comparing and analyzing element identifications and/or angular point position information in the labeling information and element identifications and/or angular point position information in the identification information; and adjusting the recognition algorithm in multiple dimensions according to the obtained accuracy evaluation parameters. Further, the accuracy of the identification algorithm is automatically evaluated and analyzed, and objective analysis results are obtained; based on each accuracy evaluation parameter, the recognition algorithm is adjusted, so that the recognition algorithm can be accurately adjusted; the positioning elements are accurately identified by the adjusted identification algorithm. The labor cost is reduced, the accuracy of the adjustment algorithm is improved, and the identification accuracy and the identification precision of the adjusted identification algorithm are improved. And the identification algorithm is analyzed based on the marked element identification and/or angular point position information and the identified element identification and/or angular point position information, so that the identification algorithm can be accurately quantitatively evaluated.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2 is a schematic diagram according to a first embodiment of the present application;
fig. 3 is a schematic view of corner point location information of a positioning element provided according to the present application;
FIG. 4 is a schematic diagram according to a second embodiment of the present application;
FIG. 5 is a schematic diagram according to a third embodiment of the present application;
FIG. 6 is a schematic diagram according to a fourth embodiment of the present application;
FIG. 7 is a schematic diagram according to a fifth embodiment of the present application;
FIG. 8 is a schematic diagram according to a sixth embodiment of the present application;
fig. 9 is a schematic diagram according to a seventh embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Vehicles have become an essential tool for people's travel; for example, the vehicle may be an autonomous driving vehicle. In a scene where a vehicle is parked, some positioning elements may be provided in the automatic parking scene, for example pillars, wall stickers, or the like. In one example, pillars on the cross-floor ramps of an underground garage, or wall stickers on walls, may serve as positioning elements. Because GPS signals are weak in the automatic parking scene, the positioning elements are set and recorded in a high-precision map; the vehicle acquires images through an on-board acquisition device (for example, a camera at the front of the vehicle) and then obtains the positioning elements based on the high-precision map and an identification algorithm; the vehicle then obtains the vehicle position according to the positioning elements and completes parking based on the positioning elements and the vehicle position.
Because the vehicle position needs to be known to finish parking, the parking needs to be finished based on the positioning elements, and further strict requirements are placed on the accuracy and precision of the recognition algorithm for recognizing the positioning elements. Therefore, the accuracy of the recognition algorithm for recognizing the positioning element needs to be evaluated and verified, and the positioning element can be accurately recognized after the recognition algorithm is applied to the vehicle.
In one example, after an image acquired by a vehicle is identified by an identification algorithm to obtain a positioning element, the identified positioning element is visualized on the image; then, the identification result output by the identification algorithm and the positioning elements in the real scene are compared manually, so that whether the identification algorithm is accurate or not is determined.
However, in the above manner, manually counting the accuracy of the recognition algorithm depends on human experience and subjective judgment, which affects the reliability and accuracy of the evaluation of the recognition algorithm; as a result, the recognition algorithm cannot be adjusted correctly when it is adjusted according to the manual evaluation result, so that the adjusted recognition algorithm cannot accurately recognize the positioning element.
The application provides an algorithm adjustment method, device, equipment and medium for identifying positioning elements, which are applied to autonomous parking and automatic driving in data/image processing so as to accurately and reasonably evaluate an identification algorithm for identifying the positioning elements and adjust the identification algorithm; the positioning elements are accurately identified by the adjusted identification algorithm.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application, as shown in fig. 1, in a parking area (automatic parking scenario), a plurality of positioning elements are set, and when a vehicle parks, the positioning elements need to be identified, and then the position of the vehicle is determined, so as to complete parking. For example, an autonomous vehicle may need to identify a locating element while parking.
Fig. 2 is a schematic diagram according to a first embodiment of the present application, and as shown in fig. 2, an algorithm adjustment method for identifying a positioning element provided in the present embodiment includes:
101. the method comprises the steps of obtaining labeling information of a positioning element, and obtaining identification information of the positioning element, which is output by an identification algorithm, wherein the labeling information comprises element identification and/or angular point position information of the positioning element, and the identification information comprises element identification and/or angular point position information of the positioning element.
In one example, step 101 specifically includes the steps of: acquiring an image through acquisition equipment on a vehicle; and receiving a labeling instruction of a user, and determining a positioning element in the image according to the labeling instruction, wherein the positioning element has labeling information. Acquiring an image through acquisition equipment on a vehicle; and identifying the positioning elements in the image by adopting an identification algorithm to obtain identification information.
The execution subject of the present embodiment may be a vehicle, or a controller of a vehicle, or an electronic device, or an intelligent terminal, or a server, or an algorithm adjustment device for identifying a positioning element, or other apparatus or device that may execute the method of the present embodiment. The present embodiment describes an electronic device as an execution body.
The electronic device obtains an image including a positioning element. In one example, the vehicle is provided with a collection device, for example a camera; the collection device captures images of the environment in which the vehicle is located, and the electronic device can acquire the images captured by the collection device.
The electronic equipment displays the image, for example, the image is displayed in an annotation tool, and the annotation tool is software capable of displaying the image and receiving a user instruction; the user sends out a labeling instruction to the electronic equipment in a mode of touching a screen of the electronic equipment, touching a keyboard of the electronic equipment, sending out voice and the like; the labeling instruction indicates the positioning element selected by the user; and the electronic equipment further determines the positioning elements in the image according to the labeling instruction, and labeling information of each positioning element labeled by the user is obtained. The labeling information comprises element identification of the positioning element and/or angular point position information. And further obtaining the information of the marked positioning element.
The electronic device is provided with a recognition algorithm, for example, an algorithm capable of recognizing a positioning element such as a machine learning algorithm; the electronic equipment runs a recognition algorithm, recognizes the image acquired by the acquisition equipment, recognizes the positioning elements in the image, and obtains the recognition information of each positioning element recognized by the recognition algorithm. The identification information includes element identification of the positioning element and/or angular point position information. And further obtains information of the identified positioning element.
The element identifier is, for example, an ID. Fig. 3 is a schematic diagram of the angular point position information of a positioning element according to the present application. As shown in fig. 3, the positioning element has four angular points, namely angular point A, angular point B, angular point C, and angular point D, and the angular point position information of these four angular points defines the position of the positioning element in four directions.
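For illustration only, the labeling information and the identification information can each be thought of as a collection of per-element records carrying an element identifier and the pixel positions of the four angular points. The sketch below is a minimal, hypothetical representation in Python; the field names and the coordinate convention are assumptions and are not prescribed by the application.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical record for one positioning element: an element identifier plus
# the pixel positions (x, y) of its four angular points A, B, C and D.
@dataclass
class PositioningElement:
    element_id: str
    corners: Dict[str, Tuple[float, float]]  # keys: "A", "B", "C", "D"

# Example: the labeled and the recognized record for the same (hypothetical) pillar.
labeled = PositioningElement("pillar_07", {"A": (412.0, 120.0), "B": (498.0, 121.0),
                                           "C": (497.0, 305.0), "D": (411.0, 303.0)})
recognized = PositioningElement("pillar_07", {"A": (414.5, 118.9), "B": (500.1, 122.3),
                                              "C": (495.8, 306.2), "D": (413.0, 301.7)})
```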
102. And comparing and analyzing the element identification and/or the angular point position information in the labeling information and the element identification and/or the angular point position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter.
In one example, the at least one accuracy evaluation parameter is one or more of: a classification F1 parameter, a detection F1 parameter, a single-frame transverse average error, and a single-frame longitudinal average error.
The classification F1 parameter characterizes the relation between the classification recall rate and the classification accuracy rate; the detection F1 parameter characterizes the relation between the detection recall rate and the detection accuracy rate; the single-frame transverse average error characterizes the position error of the positioning element in the x direction; and the single-frame longitudinal average error characterizes the position error of the positioning element in the y direction.
The electronic device performs comparison analysis on the element identifier and/or the angular point position information of each positioning element obtained by labeling, and the element identifier and/or the angular point position information of each positioning element obtained by recognition, so as to obtain at least one accuracy evaluation parameter, namely, obtain an analysis result.
In one example, based on the element identifier of each positioning element obtained by labeling and the element identifier of each positioning element obtained by recognition, whether the two are consistent is analyzed, and whether the number detection of the characterization positioning elements is accurate or not can be obtained as an accuracy evaluation parameter.
In one example, based on the corner position information of each positioning element obtained by labeling and the corner position information of each positioning element obtained by recognition, whether the two are consistent is analyzed, and an accuracy evaluation parameter indicating whether the position detection of the positioning element is accurate can be obtained.
In one example, based on the element identifier and the angular point position information of each positioning element obtained by labeling, and the element identifier and the angular point position information of each positioning element obtained by recognition, whether the identifiers are consistent or not and whether the angular point position information are consistent or not in the result obtained by recognition of the same positioning element are analyzed, and an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not and an accuracy evaluation parameter indicating whether the position detection of the positioning elements is accurate or not can be obtained.
In one example, based on the element identifier of each positioning element obtained by labeling, and the element identifier and the corner position information of each positioning element obtained by recognition, an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not can be obtained, and the recognized position of the positioning element can be obtained.
In one example, based on the element identifier and the angular point position information of each positioning element obtained by labeling, and the element identifier of each positioning element obtained by recognition, an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not can be obtained, and the labeled position of the positioning element can be obtained.
For example, the accuracy evaluation parameters are the classification F1 parameter, the detection F1 parameter, the single-frame transverse average error, and the single-frame longitudinal average error. Providing multiple accuracy evaluation parameters makes it convenient to adjust the recognition algorithm in multiple dimensions.
By comparing the element identifiers in the labeling information with the element identifiers in the identification information, the element identifiers of the positioning elements are analyzed and it can be determined whether the positioning elements identified by the identification algorithm are accurate, from which the classification recall rate and the classification accuracy rate are obtained; the classification F1 parameter is then determined according to the relation between the classification recall rate and the classification accuracy rate, for example as their harmonic mean 2 · Precision · Recall / (Precision + Recall), consistent with the formula given in the second embodiment.
Likewise, by comparing the element identifiers in the labeling information with the element identifiers in the identification information, it can be determined whether the positioning elements identified by the identification algorithm are accurate; for the same positioning element, the angular point position information in the labeling information and the angular point position information in the identification information are compared and the error between them is calculated, from which the detection recall rate and the detection accuracy rate are obtained; the detection F1 parameter is then determined according to the relation between the detection recall rate and the detection accuracy rate.
Because the positioning element has four corner points, the direction from the corner point D to the corner point C can be taken as the x direction, and the direction from the corner point D to the corner point A can be taken as the y direction; comparing the angular point position information in the labeling information and the angular point position information in the identification information aiming at the same positioning element, and calculating the error between the two to obtain the position error of the positioning element in the x direction; and calculating an average value based on the position errors of each positioning element in the same frame of image in the x direction to obtain a single-frame transverse average error.
Comparing the angular point position information in the labeling information and the angular point position information in the identification information aiming at the same positioning element, and calculating the error between the two to obtain the position error of the positioning element in the y direction; and calculating an average value based on the position errors of each positioning element in the y direction in the same frame of image to obtain a single-frame longitudinal average error.
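As a minimal sketch of how the single-frame transverse and longitudinal average errors described above could be computed, the following assumes the hypothetical PositioningElement structure from the earlier sketch, treats the per-element x and y errors as the mean absolute angular point offsets, and averages them over the elements of one frame; the exact per-element error definition is an assumption, not a requirement of the application.

```python
def single_frame_average_errors(labeled_elements, recognized_elements):
    """Single-frame transverse (x) and longitudinal (y) average errors (sketch).

    labeled_elements / recognized_elements: dicts mapping element_id to
    PositioningElement (the hypothetical structure sketched earlier). Only
    elements present in both the labeling and the identification information
    contribute to the averages.
    """
    x_errors, y_errors = [], []
    for element_id, lab in labeled_elements.items():
        rec = recognized_elements.get(element_id)
        if rec is None:
            continue  # missed element: covered by the detection metrics, not here
        # Per-element error: mean absolute angular point offset in x and in y.
        dx = sum(abs(rec.corners[c][0] - lab.corners[c][0]) for c in "ABCD") / 4.0
        dy = sum(abs(rec.corners[c][1] - lab.corners[c][1]) for c in "ABCD") / 4.0
        x_errors.append(dx)
        y_errors.append(dy)
    if not x_errors:
        return None, None  # no matched elements in this frame
    return sum(x_errors) / len(x_errors), sum(y_errors) / len(y_errors)
```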
103. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the driving process of the vehicle.
Illustratively, the electronic device adjusts the recognition algorithm based on the respective accuracy assessment parameters obtained. And then the recognition algorithm is adjusted in each dimension.
In one example, the accuracy evaluation parameter is a classification F1 parameter, where the classification F1 parameter characterizes whether the identification algorithm identifies the number of positioning elements accurately. If the classification F1 parameter does not meet the requirement, the parameters for identifying the number of positioning elements in the identification algorithm need to be adjusted.
In one example, the accuracy assessment parameter is a detection F1 parameter, where the detection F1 parameter characterizes whether the recognition algorithm is accurate for identifying the location of the positioning element. If the detected F1 parameter does not meet the requirement, the parameter used for identifying the position of the positioning element in the identification algorithm needs to be adjusted.
In one example, the accuracy assessment parameter is a single frame lateral average error that characterizes whether the recognition algorithm is accurate for recognizing the position of the positioning element in the x-direction. If the single frame transverse average error does not meet the requirement, parameters for identifying the position of the positioning element in the x direction in the identification algorithm need to be adjusted.
In one example, the accuracy assessment parameter is a single-frame longitudinal average error that characterizes whether the recognition algorithm is accurate for recognizing the position of the positioning element in the y-direction. If the single-frame longitudinal average error does not meet the requirement, parameters for identifying the position of the positioning element in the y direction in the identification algorithm need to be adjusted.
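Purely as an illustration of the adjustment examples above, the sketch below checks each accuracy evaluation parameter against a requirement and reports which group of recognition-algorithm parameters should be revisited; the threshold values and group names are hypothetical and would be chosen per project.

```python
# Hypothetical requirement thresholds; actual values would be set per project.
REQUIREMENTS = {
    "classification_f1": 0.95,    # minimum acceptable classification F1 parameter
    "detection_f1": 0.95,         # minimum acceptable detection F1 parameter
    "transverse_error_px": 3.0,   # maximum acceptable single-frame transverse average error
    "longitudinal_error_px": 3.0, # maximum acceptable single-frame longitudinal average error
}

def parameter_groups_to_adjust(evaluation):
    """Return the recognition-algorithm parameter groups that need adjustment (sketch)."""
    groups = []
    if evaluation["classification_f1"] < REQUIREMENTS["classification_f1"]:
        groups.append("parameters for identifying the number of positioning elements")
    if evaluation["detection_f1"] < REQUIREMENTS["detection_f1"]:
        groups.append("parameters for identifying the position of positioning elements")
    if evaluation["transverse_error_px"] > REQUIREMENTS["transverse_error_px"]:
        groups.append("parameters for the position in the x direction")
    if evaluation["longitudinal_error_px"] > REQUIREMENTS["longitudinal_error_px"]:
        groups.append("parameters for the position in the y direction")
    return groups
```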
Then, setting the adjusted recognition algorithm in a controller of the vehicle; when the vehicle needs to park, the controller of the vehicle identifies the acquired image based on an identification algorithm, and then positioning elements in the environment where the vehicle is located are obtained; the controller of the vehicle determines the position of the vehicle based on the position information of the positioning element, and then completes the parking action.
In the embodiment, at least one accuracy evaluation parameter is obtained by comparing and analyzing element identifications and/or angular point position information in the labeling information and element identifications and/or angular point position information in the identification information; and adjusting the recognition algorithm in multiple dimensions according to the obtained accuracy evaluation parameters. Further, the accuracy of the identification algorithm is automatically evaluated and analyzed, and objective analysis results are obtained; based on each accuracy evaluation parameter, the recognition algorithm is adjusted, so that the recognition algorithm can be accurately adjusted; the positioning elements are accurately identified by the adjusted identification algorithm. The labor cost is reduced, the accuracy of the adjustment algorithm is improved, and the identification accuracy and the identification precision of the adjusted identification algorithm are improved. And the identification algorithm is analyzed based on the marked element identification and/or angular point position information and the identified element identification and/or angular point position information, so that the identification algorithm can be accurately quantitatively evaluated.
Fig. 4 is a schematic diagram according to a second embodiment of the present application, and as shown in fig. 4, an algorithm adjustment method for identifying a positioning element provided in the present embodiment includes:
201. the method comprises the steps of obtaining labeling information of a positioning element, and obtaining identification information of the positioning element, which is output by an identification algorithm, wherein the labeling information comprises element identification and/or angular point position information of the positioning element, and the identification information comprises element identification and/or angular point position information of the positioning element.
The execution subject of the present embodiment may be a vehicle, or a controller of a vehicle, or an electronic device, or an intelligent terminal, or a server, or an algorithm adjustment device for identifying a positioning element, or other apparatus or device that may execute the method of the present embodiment. The present embodiment describes an electronic device as an execution body.
This step may refer to step 101 shown in fig. 2, and will not be described in detail.
202. The accuracy evaluation parameter is the classification F1 parameter; the classification recall parameter and the classification accuracy parameter are determined according to whether the element identifier of the positioning element exists in the labeling information and whether the element identifier of the positioning element exists in the identification information.
In one example, step 202 specifically includes the steps of:
if the element identification of the positioning element exists in the identification information and the element identification of the positioning element does not exist in the labeling information, determining that the element false detection number is accumulated by 1.
If the labeling information has the element identification of the positioning element and the identification information does not have the element identification of the positioning element, determining that the element missing detection number is accumulated by 1.
If the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, determining that the accurate number of the elements is accumulated by 1.
And determining classified recall parameters according to the accurate number of elements and the missed detection number of elements, and determining classified accurate parameters according to the accurate number of elements and the false detection number of elements.
Illustratively, after step 201, labeling information of each frame image is obtained, where the labeling information includes element identification and/or angular point position information of each positioning element in the frame image; and obtaining identification information of each frame of image, wherein the identification information comprises element identification and/or angular point position information of each positioning element in the frame of image.
Comparing the labeling information and the identification information of each frame of image; judging whether the element identification of the positioning element exists in the identification information and whether the element identification of the positioning element exists in the labeling information aiming at the same positioning element, so as to obtain a classified recall parameter and a classified accurate parameter; the classified recall parameters represent the relation between the accuracy of the number detection of the positioning elements and the omission factor of the number detection, and the classified accuracy parameters represent the relation between the accuracy of the number detection of the positioning elements and the false detection rate of the number detection. The accuracy of the number detection is characterized in that the positioning element is identified and marked. The false detection rate of the number detection is characterized in that a positioning element is identified, but the positioning element is not marked. The omission ratio of the number detection is characterized in that a positioning element is marked, but the positioning element is not recognized.
And further obtaining accurate classified recall parameters and accurate classified parameters according to whether element identifiers of the positioning elements exist in the identification information and whether element identifiers of the positioning elements exist in the labeling information.
In one example, first, the element false detection number N_fp(id) is initialized to 0, the element missed detection number N_fn(id) is initialized to 0, and the element accurate number N_tp(id) is initialized to 0.
For each frame of image, the labeling information and the identification information of the image are compared. For the same positioning element, it is judged whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information. When the element identifier exists in the identification information but does not exist in the labeling information, the element false detection number N_fp(id) is incremented by 1. When the element identifier exists in the labeling information but does not exist in the identification information, the element missed detection number N_fn(id) is incremented by 1. When the element identifier exists in both the identification information and the labeling information, the element accurate number N_tp(id) is incremented by 1.
The number of false element detection of the multi-frame image can be accumulated, the number of missing element detection of the multi-frame image can be accumulated, and the accurate number of element detection of the multi-frame image can be accumulated.
Then, according to the element false detection number, the element missed detection number and the element accurate number of one or more frames of images: from the element accurate number N_tp(id) and the element missed detection number N_fn(id), the classification recall parameter is determined as Recall_id = N_tp(id) / (N_tp(id) + N_fn(id)); and from the element accurate number N_tp(id) and the element false detection number N_fp(id), the classification accuracy parameter is determined as Precision_id = N_tp(id) / (N_tp(id) + N_fp(id)).
Based on the detailed calculation process, according to whether the element identification of the positioning element exists in the identification information and whether the element identification of the positioning element exists in the labeling information, the element false detection number, the element omission detection number and the element accurate number of one or more frames of images are obtained, and further, based on the detection result of one or more frames of images, the accurate classification recall parameters and the accurate classification parameters are obtained.
203. And determining a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
Illustratively, after step 202, a classification F1 parameter is determined according to the obtained classification recall parameter and classification accuracy parameter, where the classification F1 parameter characterizes whether the identification algorithm identifies the number of positioning elements accurately.
In one example, according to the classification recall parameter Recall_id and the classification accuracy parameter Precision_id, the classification F1 parameter is determined as F1_id = (2 · Precision_id · Recall_id) / (Precision_id + Recall_id).
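A compact sketch of the classification F1 computation described above: the element false detection number, element missed detection number and element accurate number are accumulated over frames by comparing element identifiers, and Recall_id, Precision_id and F1_id are then computed with the formulas given above. Representing each frame as a pair of identifier sets is an assumed input format for illustration.

```python
def classification_f1(frames):
    """Classification F1 parameter (sketch).

    frames: iterable of (labeled_ids, recognized_ids) pairs, one per image frame,
    each a set of element identifiers (assumed input format).
    """
    n_tp = n_fp = n_fn = 0  # element accurate / false detection / missed detection numbers
    for labeled_ids, recognized_ids in frames:
        n_tp += len(labeled_ids & recognized_ids)  # identified and labeled
        n_fp += len(recognized_ids - labeled_ids)  # identified but not labeled
        n_fn += len(labeled_ids - recognized_ids)  # labeled but not identified
    recall_id = n_tp / (n_tp + n_fn) if (n_tp + n_fn) else 0.0
    precision_id = n_tp / (n_tp + n_fp) if (n_tp + n_fp) else 0.0
    if precision_id + recall_id == 0:
        return 0.0
    return 2 * precision_id * recall_id / (precision_id + recall_id)
```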
204. The accuracy evaluation parameter is the detection F1 parameter; the detection recall parameter and the detection accuracy parameter are determined according to whether the element identifier of the positioning element exists in the labeling information, whether the element identifier of the positioning element exists in the identification information, the angular point position information in the labeling information, and the angular point position information in the identification information.
In one example, step 204 specifically includes the steps of:
the method comprises the first step of determining angular point position information of a positioning element in identification information and angular point position information of the positioning element in labeling information if the element identification of the positioning element in the identification information is unique when the element identification of the positioning element in the identification information exists and the element identification of the labeling information exists, and determining a first angular point position pixel error between the angular point position information of the positioning element in the identification information and the angular point position information of the positioning element in the labeling information.
If the pixel error of the first corner point is smaller than or equal to a preset value, determining the accurate number accumulation of the positions to be 1; if the pixel error of the first corner point is larger than a preset value, determining that the number of false position detection is increased by 1, and determining that the number of missed position detection is accumulated by 1; and determining the detection recall parameters according to the accurate position number and the position missing detection number, and determining the detection accurate parameters according to the accurate position number and the position false detection number.
And a second step of determining each angular point position information corresponding to the positioning element in the identification information and the angular point position information of the positioning element in the labeling information when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, wherein the total number of the element identifications of the positioning element in the identification information is n.
If the pixel errors of the m second angular point positions are smaller than or equal to a preset value, determining that the accurate position number is accumulated by 1, and determining that the false position number is accumulated by m, wherein m is a positive integer which is larger than or equal to 1 and smaller than or equal to n; if the pixel error of any second angular point position is larger than a preset value, determining that the position missed detection quantity is accumulated 1, and determining that the position false detection quantity is accumulated n.
Illustratively, after step 201, labeling information of each frame image is obtained, where the labeling information includes element identification and/or angular point position information of each positioning element in the frame image; and obtaining identification information of each frame of image, wherein the identification information comprises element identification and/or angular point position information of each positioning element in the frame of image.
Comparing the labeling information and the identification information of each frame of image; for the same positioning element, it is judged whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information, and at the same time the detection recall parameter and the detection accuracy parameter are determined based on the angular point position information in the labeling information and the angular point position information in the identification information. The detection recall parameter characterizes the relation between the accuracy of the position detection of the positioning element and the miss rate of the position detection, and the detection accuracy parameter characterizes the relation between the accuracy of the position detection of the positioning element and the false detection rate of the position detection. Accurate position detection means that the positioning element is identified, the positioning element is labeled, and the absolute value of the difference between the identified angular point position and the labeled angular point position is smaller than or equal to a preset value. False position detection means that the positioning element is identified and labeled, but the absolute value of the difference between the identified angular point position and the labeled angular point position is greater than the preset value. Missed position detection likewise refers to the case where the positioning element is identified and labeled but the absolute value of the difference between the identified angular point position and the labeled angular point position is greater than the preset value.
And further, according to whether the element identification of the positioning element exists in the identification information and whether the element identification of the positioning element exists in the labeling information, the angular point position information in the labeling information and the angular point position information in the identification information, accurate detection recall parameters and accurate detection parameters are obtained.
In one example, first, the position accurate number N_tp(pos) is initialized to 0, the position false detection number N_fp(pos) is initialized to 0, and the position missed detection number N_fn(pos) is initialized to 0.
For each frame of image, the labeling information and the identification information of the image are compared. For the same positioning element, it is judged whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information. When it is determined that the element identifier exists in both the identification information and the labeling information, it is further determined whether the element identifier of the positioning element in the identification information is unique.
Then, when the element identifier of the positioning element in the identification information is unique, the absolute value of the difference between the angular point position information of the positioning element in the identification information and the angular point position information of the positioning element in the labeling information is calculated, giving the first angular point position pixel error. When the first angular point position pixel error is smaller than or equal to the preset value, the position accurate number N_tp(pos) is incremented by 1; when the first angular point position pixel error is larger than the preset value, the position false detection number N_fp(pos) is incremented by 1 and the position missed detection number N_fn(pos) is incremented by 1.
For example, for a positioning element, the element identifier of the positioning element exists in the identification information and also exists in the labeling information, and it is determined that the element identifier of the positioning element in the identification information is unique. For this positioning element, the difference between the angular point position information of the positioning element in the identification information and the angular point position information of the positioning element in the labeling information is calculated, and the absolute value of the difference is then taken to obtain the first angular point position pixel error.
The angular point position information of the positioning element comprises four angular points, so that the difference value of the angular point position information of each angular point on the labeling information and the identification information can be calculated. For example, subtracting the angular point position information of the angular point A of the positioning element 1 in the identification information from the angular point position information of the angular point A of the positioning element 1 in the labeling information to obtain a first angular point position pixel error of the angular point A; subtracting the angular point position information of the angular point B of the positioning element 1 in the identification information from the angular point position information of the angular point B of the positioning element 1 in the labeling information to obtain a first angular point position pixel error of the angular point B; subtracting the angular point position information of the angular point C of the positioning element 1 in the identification information from the angular point position information of the angular point C of the positioning element 1 in the labeling information to obtain a first angular point position pixel error of the angular point C; and subtracting the angular point position information of the angular point D of the positioning element 1 in the identification information from the angular point position information of the angular point D of the positioning element 1 in the labeling information to obtain a first angular point position pixel error of the angular point D.
Then, when the first angular point position pixel errors of all 4 angular points are smaller than or equal to the preset value, the position accurate number N_tp(pos) is incremented by 1; when the first angular point position pixel error of any one of the 4 angular points is larger than the preset value, the position false detection number N_fp(pos) is incremented by 1 and the position missed detection number N_fn(pos) is incremented by 1. The preset value may be, for example, 3 pixels.
Therefore, when the element identification of the positioning element exists in the identification information and the element identification of the positioning element also exists in the labeling information, if the element identification of the positioning element in the identification information is unique, the accurate position accuracy quantity, the position omission quantity and the position false detection quantity can be obtained directly based on the angular point position information in the identification information and the angular point position information in the labeling information.
When the element identifier of the positioning element in the identification information is not unique, that is, the total number of element identifiers of the positioning element in the identification information is n, the difference between each piece of angular point position information corresponding to the positioning element in the identification information and the angular point position information of the positioning element in the labeling information is calculated, and the absolute value of each difference is taken as a second angular point position pixel error. Thus, for the same positioning element, n second angular point position pixel errors are obtained. When m of the n second angular point position pixel errors are determined to be smaller than or equal to the preset value, the position accurate number N_tp(pos) is incremented by 1 and the position false detection number N_fp(pos) is incremented by m. When any one of the n second angular point position pixel errors is determined to be larger than the preset value, the position missed detection number N_fn(pos) is incremented by 1 and the position false detection number N_fp(pos) is incremented by n.
For example, for a positioning element, the element identifier of the positioning element exists in the identification information and also exists in the labeling information, and it is determined that the element identifier of the positioning element in the identification information is not unique, the total number of element identifiers of the positioning element in the identification information being n. For the angular point position information of these n element identifiers, the difference between each piece of angular point position information of the positioning element in the identification information and the angular point position information of the positioning element in the labeling information is calculated, and the absolute value of each difference is then taken to obtain n second angular point position pixel errors.
When calculating each second angular point position pixel error of the same positioning element, since the angular point position information of the positioning element comprises four angular points, the difference of the angular point position information of each angular point between the labeling information and the identification information can be calculated. For example, subtracting the angular point position information of angular point A of positioning element 1 in the identification information from the angular point position information of angular point A of positioning element 1 in the labeling information gives the error of angular point A; subtracting the angular point position information of angular point B in the identification information from that in the labeling information gives the error of angular point B; subtracting the angular point position information of angular point C in the identification information from that in the labeling information gives the error of angular point C; and subtracting the angular point position information of angular point D in the identification information from that in the labeling information gives the error of angular point D.
Further, for each second angular point position pixel error in the n second angular point position pixel errors, if the errors of the 4 angular points are all smaller than or equal to a preset value, determining that the second angular point position pixel error is smaller than or equal to the preset value; if the error of any one of the 4 corner points is larger than a preset value, determining that the pixel error of the second corner point position is larger than the preset value.
Then, when it is determined that m of the n second corner position pixel errors are smaller than or equal to the preset value, the position accurate number N_tp(pos) is accumulated by 1 and the position false detection number N_fp(pos) is accumulated by m. When each of the n second corner position pixel errors is determined to be greater than the preset value, the position missed detection number N_fn(pos) is accumulated by 1 and the position false detection number N_fp(pos) is accumulated by n.
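As an illustrative sketch only (not part of the original disclosure), the accumulation rule described above for a positioning element whose element identification appears n times in the identification information may be expressed in Python as follows; the function names, the dictionary keys, and the reading that a recognition result matches the label only when all four corner errors are within the preset value are assumptions introduced here for clarity.

def error_within(corner_errors, preset_value):
    # A second corner position pixel error counts as "<= preset value" only if
    # the errors of all four corner points (A, B, C, D) are <= the preset value.
    return all(e <= preset_value for e in corner_errors)

def accumulate_position_counts(second_errors, preset_value, counts):
    # second_errors: n entries, one per identification result of this positioning
    # element; each entry holds the four per-corner pixel errors.
    # counts: dict with keys "N_tp_pos", "N_fp_pos", "N_fn_pos".
    n = len(second_errors)
    m = sum(1 for err in second_errors if error_within(err, preset_value))
    if m >= 1:
        counts["N_tp_pos"] += 1   # position accurate number +1
        counts["N_fp_pos"] += m   # position false detection number +m
    else:
        counts["N_fn_pos"] += 1   # position missed detection number +1
        counts["N_fp_pos"] += n   # position false detection number +n
    return counts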
Therefore, when the element identification of the positioning element exists in both the identification information and the labeling information, and the element identification of the positioning element in the identification information is not unique, the corner position information of the positioning element in the identification information and in the labeling information is analyzed to obtain an accurate position accurate number, position missed detection number and position false detection number.
Then, the detection recall parameter can be determined from the position accurate number N_tp(pos) and the position missed detection number N_fn(pos) as Recall_pos = N_tp(pos) / (N_tp(pos) + N_fn(pos)); and the detection accuracy parameter can be determined from the position accurate number N_tp(pos) and the position false detection number N_fp(pos) as Precision_pos = N_tp(pos) / (N_tp(pos) + N_fp(pos)).
205. And determining the detection F1 parameter according to the detection recall parameter and the detection accuracy parameter.
Illustratively, after step 204, the detection F1 parameter is determined based on the obtained detection recall parameter and detection accuracy parameter; the detection F1 parameter characterizes whether the recognition algorithm recognizes the position of the positioning element accurately.
In one example, the detection F1 parameter is determined from the detection recall parameter Recall_pos and the detection accuracy parameter Precision_pos as F1_pos = (2 · Precision_pos · Recall_pos) / (Precision_pos + Recall_pos).
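As a brief illustrative sketch (assuming, in addition to the text above, that the denominators are non-zero), the detection recall, accuracy and F1 parameters might be computed from the accumulated position counts as follows; the function and variable names are introduced here and are not part of the original disclosure.

def detection_metrics(n_tp_pos, n_fn_pos, n_fp_pos):
    # Detection recall, detection accuracy and detection F1 computed from the
    # accumulated position counts, following the formulas above.
    recall_pos = n_tp_pos / (n_tp_pos + n_fn_pos)
    precision_pos = n_tp_pos / (n_tp_pos + n_fp_pos)
    f1_pos = 2 * precision_pos * recall_pos / (precision_pos + recall_pos)
    return recall_pos, precision_pos, f1_pos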
The execution order between steps 202-203 and steps 204-205 is not limited. Steps 202-203 may be performed first, followed by steps 204-205; alternatively, steps 204-205 are performed first, followed by steps 202-203; alternatively, steps 202-203, 204-205 are performed simultaneously.
206. The accuracy evaluation parameters are a single-frame transverse average error and a single-frame longitudinal average error; when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, determining a third angular point position pixel error of each positioning element in the x direction and a fourth angular point position pixel error of each positioning element in the y direction according to the angular point position information in the labeling information and the angular point position information in the identification information.
Illustratively, after step 202, if, for the same positioning element in the single frame image, the element identifier of the positioning element exists in both the identification information and the labeling information, a third corner position pixel error of the positioning element in the x direction and a fourth corner position pixel error of the positioning element in the y direction may be calculated.
In one example, a positioning element has 4 corner points, as shown in fig. 3. The direction from corner D to corner C may be taken as the x direction, and the direction from corner D to corner A as the y direction. A position 1 in the x direction is obtained from the corner position information of corner D and corner C, for example by summing the corner position information of corner D and corner C, or by subtracting the corner position information of corner D from that of corner C; meanwhile, a position 2 in the x direction is obtained from the corner position information of corner A and corner B, for example by summing them, or by subtracting the corner position information of corner A from that of corner B. Position 1 and position 2 are then both corner position information of the positioning element in the x direction. Then, for the positioning element, the absolute value 1 of the difference between position 1 in the identification information and position 1 in the labeling information is obtained; likewise, the absolute value 2 of the difference between position 2 in the identification information and position 2 in the labeling information is obtained; the sum of absolute value 1 and absolute value 2 is taken as the third corner position pixel error of the positioning element in the x direction.
Also, a position 3 in the y direction is obtained from the corner position information of corner D and corner A, for example by summing the corner position information of corner D and corner A, or by subtracting the corner position information of corner D from that of corner A; meanwhile, a position 4 in the y direction is obtained from the corner position information of corner C and corner B, for example by summing them, or by subtracting the corner position information of corner C from that of corner B. Position 3 and position 4 are then both corner position information of the positioning element in the y direction. Then, for the positioning element, the absolute value 3 of the difference between position 3 in the identification information and position 3 in the labeling information is obtained; likewise, the absolute value 4 of the difference between position 4 in the identification information and position 4 in the labeling information is obtained; the sum of absolute value 3 and absolute value 4 is taken as the fourth corner position pixel error of the positioning element in the y direction.
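A minimal sketch of this step, assuming corner coordinates given as (x, y) pixel pairs and using the subtraction variant of the position definition described above, could look as follows; the function names and the axis indexing are assumptions introduced here and are not part of the original disclosure.

def directional_errors(label_corners, ident_corners):
    # label_corners / ident_corners: dicts mapping "A", "B", "C", "D" to (x, y)
    # pixel coordinates taken from the labeling / identification information.
    def pair_pos(corners, p, q, axis):
        # One possible realisation of a "position" for a corner pair: the signed
        # difference of the two corners along the given axis (0 = x, 1 = y).
        return corners[q][axis] - corners[p][axis]

    # Third corner position pixel error (x direction): corner pairs (D, C) and (A, B).
    err_x = (abs(pair_pos(ident_corners, "D", "C", 0) - pair_pos(label_corners, "D", "C", 0))
             + abs(pair_pos(ident_corners, "A", "B", 0) - pair_pos(label_corners, "A", "B", 0)))
    # Fourth corner position pixel error (y direction): corner pairs (D, A) and (C, B).
    err_y = (abs(pair_pos(ident_corners, "D", "A", 1) - pair_pos(label_corners, "D", "A", 1))
             + abs(pair_pos(ident_corners, "C", "B", 1) - pair_pos(label_corners, "C", "B", 1)))
    return err_x, err_y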
207. And determining a single-frame transverse average error of one frame of image according to the pixel error of the third corner position of each positioning element in the x direction and the number of the positioning elements in the one frame of image. And determining a single-frame longitudinal average error of one frame of image according to the fourth angle point position pixel error of each positioning element in the y direction and the number of the positioning elements in the one frame of image.
For a single frame image, the third corner position pixel errors of the positioning elements in the same frame image are accumulated to obtain a first accumulated value S_x = Σ_{i=1}^{R} err_x_i, where err_x_i is the third corner position pixel error of the i-th positioning element, i is a positive integer greater than or equal to 1 and less than or equal to R, and R is the number of positioning elements in the frame image (the number of positioning elements identified in the single frame image), R being a positive integer greater than or equal to 1. Then, from the first accumulated value S_x and the number R of positioning elements in the frame image, the single-frame transverse average error of the frame image is obtained as avg_err_x = S_x / R.
For a single frame image, the fourth corner position pixel errors of the positioning elements in the same frame image are accumulated to obtain a second accumulated value S_y = Σ_{i=1}^{R} err_y_i, where err_y_i is the fourth corner position pixel error of the i-th positioning element and R is the number of positioning elements in the frame image (the number of positioning elements identified in the single frame image). Then, from the second accumulated value S_y and the number R of positioning elements in the frame image, the single-frame longitudinal average error of the frame image is obtained as avg_err_y = S_y / R.
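A minimal sketch of the single-frame averaging described above, assuming the per-element errors of one frame are already available, might be (names introduced here for illustration only):

def single_frame_average_errors(per_element_errors):
    # per_element_errors: list of (err_x_i, err_y_i) tuples for the R positioning
    # elements identified in one frame image.
    R = len(per_element_errors)
    avg_err_x = sum(e[0] for e in per_element_errors) / R  # single-frame transverse average error
    avg_err_y = sum(e[1] for e in per_element_errors) / R  # single-frame longitudinal average error
    return avg_err_x, avg_err_y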
Through the steps 206-207, the pixel errors of the positions of the positioning elements of the single-frame image are analyzed, and accurate single-frame horizontal average errors and single-frame vertical average errors are obtained.
208. And determining the overall average error according to the single-frame transverse average error, the single-frame longitudinal average error and the number of positioning elements in one frame of image.
Illustratively, after step 207, a single-frame lateral average error and a single-frame longitudinal average error for each frame of image may be obtained for the multiple frames of images.
Then, for the multi-frame images, the single-frame transverse average errors of the frames are summed to obtain a sum Q_x of the single-frame transverse average errors, and the sum T of the numbers of positioning elements of the frames is calculated; the sum Q_x of the single-frame transverse average errors is divided by T to obtain the overall error in the transverse direction.
For the multi-frame images, the single-frame longitudinal average errors of the frames are summed to obtain a sum Q_y of the single-frame longitudinal average errors, and the sum T of the numbers of positioning elements of the frames is calculated; the sum Q_y of the single-frame longitudinal average errors is divided by T to obtain the overall error in the longitudinal direction.
Alternatively, the sum Q_x of the single-frame transverse average errors and the sum Q_y of the single-frame longitudinal average errors may be added to obtain an overall error value Q, and Q is then divided by T to obtain the overall average error.
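The multi-frame aggregation described above might be sketched as follows, assuming the single-frame averages and element counts of each frame are already available; the names are illustrative only and not part of the original disclosure.

def overall_average_errors(frames):
    # frames: list of (avg_err_x, avg_err_y, R) per frame, where R is the number
    # of positioning elements in that frame.
    Q_x = sum(f[0] for f in frames)   # sum of single-frame transverse average errors
    Q_y = sum(f[1] for f in frames)   # sum of single-frame longitudinal average errors
    T = sum(f[2] for f in frames)     # total number of positioning elements
    overall_x = Q_x / T               # overall error in the transverse direction
    overall_y = Q_y / T               # overall error in the longitudinal direction
    overall = (Q_x + Q_y) / T         # overall average error
    return overall_x, overall_y, overall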
For the same positioning element, the third corner position pixel error of the positioning element in each frame of the multi-frame images can be analyzed, so as to obtain the transverse error distribution of that positioning element across the frames. Likewise, for the same positioning element, the fourth corner position pixel error of the positioning element in each frame of the multi-frame images can be analyzed, so as to obtain the longitudinal error distribution of that positioning element across the frames.
In addition, for each single frame image, the positioning element with the largest third corner position pixel error and the positioning element with the largest fourth corner position pixel error can be determined among the positioning elements in that frame.
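As a hypothetical sketch combining the per-element error distribution and the per-frame worst-element determination described above (the record layout and names are assumptions introduced here), one could write:

from collections import defaultdict

def error_distribution_and_worst(records):
    # records: list of (element_id, frame_index, err_x, err_y) over multiple frames.
    dist_x, dist_y = defaultdict(list), defaultdict(list)
    worst_x, worst_y = {}, {}
    for element_id, frame_index, err_x, err_y in records:
        # Per-element transverse / longitudinal error series across frames.
        dist_x[element_id].append((frame_index, err_x))
        dist_y[element_id].append((frame_index, err_y))
        # Per-frame positioning element with the largest error in each direction.
        if frame_index not in worst_x or err_x > worst_x[frame_index][1]:
            worst_x[frame_index] = (element_id, err_x)
        if frame_index not in worst_y or err_y > worst_y[frame_index][1]:
            worst_y[frame_index] = (element_id, err_y)
    return dist_x, dist_y, worst_x, worst_y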
The recognition result of the positioning element in the image is analyzed from multiple dimensions, so that data in the multiple dimensions can be obtained, and the data are used for analyzing whether the position detection of the recognition algorithm is accurate or not.
The execution order among steps 202-203, steps 204-205, and steps 206-208 is not limited.
209. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the driving process of the vehicle.
Illustratively, this step may refer to step 203 shown in fig. 2, which is not described herein.
In this embodiment, on the basis of the above embodiment, accuracy evaluation parameters in multiple dimensions are obtained through steps 202-203, steps 204-205, and steps 206-208, and these accuracy evaluation parameters are used to evaluate and adjust the recognition algorithm. The indices used to adjust the recognition algorithm are thereby refined: multiple indices are counted in the classification and detection dimensions, and the optimization of the recognition algorithm is guided more specifically from parameters in multiple dimensions, so that a recognition algorithm that recognizes positioning elements more effectively is obtained. Moreover, a single labeling by the user can be reused for a plurality of different recognition algorithms, each of which can be adjusted as described in this embodiment; the indices for evaluating a recognition algorithm can be counted automatically, which helps iterate and optimize the recognition algorithm. A plurality of indices for evaluating the recognition algorithm are provided, and the indices of one frame image or of multiple frame images are presented intuitively, so that the frame or positioning element that does not meet the classification or detection requirements can be found efficiently and quickly, which facilitates adjusting the recognition algorithm. The positioning element is then accurately recognized by the adjusted recognition algorithm. In addition, besides adjusting an algorithm for recognizing positioning elements, the solution provided by this embodiment can also adjust detection algorithms in fields such as robot vision positioning.
Fig. 5 is a schematic diagram according to a third embodiment of the present application, and as shown in fig. 5, an algorithm adjustment method for identifying a positioning element provided in the present embodiment includes:
301. and comparing and analyzing element identification and/or angular point position information in the labeling information of the positioning element and element identification and/or angular point position information in the identification information of the positioning element output by an identification algorithm to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter.
The execution subject of the present embodiment may be a vehicle, or a controller of a vehicle, or an electronic device, or an intelligent terminal, or a server, or an algorithm adjustment device for identifying a positioning element, or other apparatus or device that may execute the method of the present embodiment. The present embodiment describes an electronic device as an execution body.
The electronic device stores in advance the labeling information of the positioning element, that is, the information obtained by labeling the positioning element; the electronic device also stores in advance the identification information of the positioning element, that is, the information obtained by recognizing the positioning element.
Then, the electronic device performs comparison and analysis based on the element identification and/or angular point position information of each positioning element obtained by labeling and the element identification and/or angular point position information of each positioning element obtained by recognition, so as to obtain at least one accuracy evaluation parameter, namely, an analysis result.
This step may refer to step 102 shown in fig. 2, and will not be described in detail.
302. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the driving process of the vehicle.
Illustratively, this step may refer to step 103 shown in fig. 2, which is not described herein.
In the embodiment, at least one accuracy evaluation parameter is obtained by comparing and analyzing element identifications and/or angular point position information in the labeling information and element identifications and/or angular point position information in the identification information; and adjusting the recognition algorithm in multiple dimensions according to the obtained accuracy evaluation parameters. Further, the accuracy of the identification algorithm is automatically evaluated and analyzed, and objective analysis results are obtained; based on each accuracy evaluation parameter, the recognition algorithm is adjusted, so that the recognition algorithm can be accurately adjusted; the positioning elements are accurately identified by the adjusted identification algorithm. The labor cost is reduced, the accuracy of the adjustment algorithm is improved, and the identification accuracy and the identification precision of the adjusted identification algorithm are improved. And the identification algorithm is analyzed based on the marked element identification and/or angular point position information and the identified element identification and/or angular point position information, so that the identification algorithm can be accurately quantitatively evaluated.
Fig. 6 is a schematic diagram of a fourth embodiment of the present application, and as shown in fig. 6, an algorithm adjustment device for identifying a positioning element provided in the present embodiment includes:
the first obtaining unit 31 is configured to obtain labeling information of the positioning element, where the labeling information includes an element identifier and/or corner position information of the positioning element.
The second obtaining unit 32 is configured to obtain identification information of the positioning element output by the identification algorithm, where the identification information includes an element identifier and/or corner position information of the positioning element.
The comparing unit 33 is configured to compare and analyze the element identifier and/or the angular point position information in the labeling information and the element identifier and/or the angular point position information in the identification information to obtain an analysis result, where the analysis result includes at least one accuracy evaluation parameter.
The adjusting unit 34 is configured to adjust the recognition algorithm according to at least one accuracy evaluation parameter, where the adjusted recognition algorithm is used to recognize the positioning element during the driving process of the vehicle.
The device of the embodiment may execute the technical scheme in the above method, and the specific implementation process and the technical principle are the same and are not described herein again.
Fig. 7 is a schematic diagram of a fifth embodiment of the present application, and, based on the embodiment shown in fig. 6, as shown in fig. 7, in the algorithm adjustment device for identifying a positioning element provided in this embodiment, at least one accuracy evaluation parameter is one or more of the following: classifying F1 parameters, detecting F1 parameters, single-frame transverse average errors and single-frame longitudinal average errors.
Wherein, the classification F1 parameter characterizes the relation between the classification recall rate and the classification accuracy rate. The detection F1 parameter represents the relation between the detection recall rate and the detection accuracy rate, the single-frame horizontal average error represents the position error of the positioning element in the x direction, and the single-frame vertical average error represents the position error of the positioning element in the y direction.
In one example, the accuracy assessment parameter is a classification F1 parameter; a contrast unit 33 comprising:
the first determining module 331 is configured to determine a classification recall parameter and a classification accuracy parameter according to whether an element identifier of a positioning element exists in the labeling information and whether an element identifier of the positioning element exists in the identification information.
A second determining module 332, configured to determine a category F1 parameter according to the category recall parameter and the category accuracy parameter.
In one example, the first determining module 331 includes:
The first determining submodule 3311 is configured to determine that the element false detection number is accumulated by 1 if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element does not exist in the labeling information.
The second determining submodule 3312 is configured to determine that the element omission factor is accumulated by 1 if the element identifier of the positioning element exists in the labeling information and the element identifier of the positioning element does not exist in the identifying information.
The third determining submodule 3313 is configured to determine that the accurate number of elements is accumulated by 1 if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information.
A fourth determining submodule 3314, configured to determine a classified recall parameter according to the accurate number of elements and the element missed detection number, and determine a classified accurate parameter according to the accurate number of elements and the element false detection number.
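As an illustrative, set-based sketch of the counting rules implemented by the first to fourth determining submodules (the set formulation and the assumption of non-zero denominators are introduced here and are not part of the original disclosure):

def classification_counts(labeled_ids, identified_ids):
    # labeled_ids / identified_ids: sets of element identifiers found in the
    # labeling information and in the identification information, respectively.
    n_tp = len(labeled_ids & identified_ids)     # element accurate number
    n_fp = len(identified_ids - labeled_ids)     # element false detection number
    n_fn = len(labeled_ids - identified_ids)     # element missed detection number
    recall_id = n_tp / (n_tp + n_fn)             # classified recall parameter
    precision_id = n_tp / (n_tp + n_fp)          # classified accurate parameter
    return n_tp, n_fp, n_fn, recall_id, precision_id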
In one example, the accuracy assessment parameter is a detection F1 parameter; a contrast unit 33 comprising:
the third determining module 333 is configured to determine a detection recall parameter and a detection accuracy parameter according to whether there is an element identifier of a positioning element in the labeling information, whether there is an element identifier of the positioning element in the identification information, and angular point position information in the labeling information and angular point position information in the identification information.
A fourth determining module 334, configured to determine a detection F1 parameter according to the detection recall parameter and the detection accuracy parameter.
In one example, the third determination module 333 includes:
A fifth determining submodule 3331, configured to, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information, and it is determined that the element identifier of the positioning element in the identification information is unique, determine a first corner position pixel error between the corner position information of the positioning element in the identification information and the corner position information of the positioning element in the labeling information.
The sixth determining submodule 3332 is configured to determine that the accurate number of positions is accumulated by 1 if the pixel error of the first corner point is less than or equal to the preset value.
The seventh determining submodule 3333 is configured to determine that the number of false position detections is increased by 1 and determine that the number of missed position detections is increased by 1 if the pixel error of the first corner point is greater than a preset value.
An eighth determination submodule 3334 is configured to determine a detection recall parameter according to the accurate number of positions and the number of missed positions, and determine a detection accuracy parameter according to the accurate number of positions and the number of false positions.
In one example, the third determination module 333 further includes:
And a ninth determining submodule 3335, configured to determine, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information, if it is determined that the element identifier of the positioning element in the identification information is not unique, each corner position information corresponding to the positioning element in the identification information and the corner position information of the positioning element in the labeling information, and a second corner position pixel error between the two, where the total number of the element identifiers of the positioning element in the identification information is n.
And a tenth determination submodule 3336, configured to determine that the accurate number of positions is accumulated by 1 and determine that the false detection number of positions is accumulated by m if the pixel errors of the m second corner positions are smaller than or equal to a preset value, where m is a positive integer greater than or equal to 1 and less than or equal to n.
The eleventh determining submodule 3337 is configured to determine that the number of missed detection is accumulated by 1 and determine that the number of false detection is accumulated by n if the pixel error at any one of the second corner positions is greater than the preset value.
In one example, the accuracy assessment parameters are a single frame lateral average error and a single frame longitudinal average error; a contrast unit 33 comprising:
and a fifth determining module 335, configured to determine, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information, a third corner position pixel error in the x direction of each positioning element and a fourth corner position pixel error in the y direction of each positioning element according to the corner position information in the labeling information and the corner position information in the identification information.
A sixth determining module 336 is configured to determine a single-frame transverse average error of one frame image according to the pixel error of the third corner position of each positioning element in the x-direction and the number of positioning elements in the one frame image.
The seventh determining module 337 is configured to determine a single-frame vertical average error of one frame image according to the fourth corner pixel error of each positioning element in the y direction and the number of positioning elements in the one frame image.
In one example, the comparing unit 33 further includes:
the eighth determining module 338 is configured to determine an overall average error according to a single-frame horizontal average error, a single-frame vertical average error, and the number of positioning elements in one frame of image.
In one example, the first obtaining unit 31 is specifically configured to: acquiring an image through acquisition equipment on a vehicle; and receiving a labeling instruction of a user, and determining a positioning element in the image according to the labeling instruction, wherein the positioning element has labeling information.
In one example, the second obtaining unit 32 is specifically configured to: acquiring an image through acquisition equipment on a vehicle; and identifying the positioning elements in the image by adopting an identification algorithm to obtain identification information.
The device of the embodiment may execute the technical scheme in the above method, and the specific implementation process and the technical principle are the same and are not described herein again.
Fig. 8 is a schematic diagram according to a sixth embodiment of the present application, and as shown in fig. 8, an electronic device 70 in the present embodiment may include: a processor 71 and a memory 72.
A memory 72 for storing a program; the memory 72 may include a volatile memory, such as a random-access memory (RAM), a static random-access memory (SRAM), or a double data rate synchronous dynamic random access memory (DDR SDRAM); the memory may also include a non-volatile memory, such as a flash memory. The memory 72 is used to store computer programs (e.g., application programs or functional modules implementing the methods described above), computer instructions, and the like, which may be stored in one or more of the memories 72 in a partitioned manner. And the above-described computer programs, computer instructions, data, etc. may be called by the processor 71.
A processor 71 for executing a computer program stored in a memory 72 for carrying out the steps of the method according to the above-described embodiment.
Reference may be made in particular to the description of the embodiments of the method described above.
The processor 71 and the memory 72 may be separate structures or may be integrated structures integrated together. When the processor 71 and the memory 72 are separate structures, the memory 72 and the processor 71 may be coupled by a bus 73.
The electronic device in this embodiment may execute the technical scheme in the above method, and the specific implementation process and the technical principle are the same, which are not described herein again.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a schematic diagram according to a seventh embodiment of the present application; as shown in fig. 9, it is a block diagram of an electronic device for implementing the algorithm adjustment method for identifying a positioning element according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 801, memory 802, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 801 is illustrated in fig. 9.
Memory 802 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the algorithm adjustment method for identifying a positioning element provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the algorithm adjustment method for identifying a positioning element provided by the present application.
The memory 802 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules (e.g., the first acquiring unit 31, the second acquiring unit 32, the comparing unit 33, and the adjusting unit 34 shown in fig. 6) corresponding to the algorithm adjustment method for identifying a positioning element in the embodiments of the present application. The processor 801 executes various functional applications of the server and data processing, i.e., implements the algorithm adjustment method for identifying the positioning element in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 802.
Memory 802 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device for identifying the algorithm adjustment method of the positioning element, and the like. In addition, memory 802 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 802 may optionally include memory remotely located relative to processor 801, which may be connected via a network to an electronic device for identifying algorithmic adjustment methods of positioning elements. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for identifying the algorithm adjustment method of the positioning element may further include: an input device 803 and an output device 804. The processor 801, memory 802, input device 803, and output device 804 may be connected by a bus or other means, for example in fig. 9.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the algorithm adjustment method for identifying a positioning element, and may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or other input devices. The output device 804 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (22)

1. An algorithm adjustment method for identifying a positioning element, comprising:
acquiring marking information of the positioning element, and acquiring identification information of the positioning element output by an identification algorithm, wherein the marking information comprises element identification and/or angular point position information of the positioning element, and the identification information comprises element identification and/or angular point position information of the positioning element;
comparing and analyzing element identification and/or angular point position information in the labeling information and element identification and/or angular point position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the running process of the vehicle;
The at least one accuracy assessment parameter is one or more of: a classification F1 parameter, a detection F1 parameter, a single-frame transverse average error and a single-frame longitudinal average error; wherein the classification F1 parameter represents the relation between a classification recall rate and a classification accuracy rate, the detection F1 parameter represents the relation between a detection recall rate and a detection accuracy rate, the single-frame transverse average error represents the position error of the positioning element in the x direction, and the single-frame longitudinal average error represents the position error of the positioning element in the y direction;
if the accuracy evaluation parameter is a classification F1 parameter, comparing and analyzing the element identifier in the labeling information and the element identifier in the identification information to obtain an analysis result, wherein the method comprises the following steps:
determining a classified recall parameter and a classified accurate parameter according to whether the element identification of the positioning element exists in the labeling information and whether the element identification of the positioning element exists in the identification information;
and determining a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
2. The method of claim 1, wherein determining the category recall parameter and the category accuracy parameter based on whether an element identification of a locating element exists in the annotation information and whether an element identification of the locating element exists in the identification information comprises:
If the identification information contains the element identification of the positioning element and the labeling information does not contain the element identification of the positioning element, determining that the element false detection quantity is accumulated by 1;
if the labeling information contains the element identification of the positioning element and the identification information does not contain the element identification of the positioning element, determining that the element missing detection quantity is accumulated by 1;
if the identification information contains the element identification of the positioning element and the labeling information contains the element identification of the positioning element, determining that the accurate number of the elements is accumulated by 1;
and determining a classification recall parameter according to the accurate number of the elements and the missed detection number of the elements, and determining a classification accurate parameter according to the accurate number of the elements and the false detection number of the elements.
3. The method according to claim 1, wherein if the accuracy evaluation parameter is a detection F1 parameter, comparing and analyzing element identification and/or corner position information in the labeling information and element identification and/or corner position information in the identification information to obtain an analysis result, including:
determining a detection recall parameter and a detection accuracy parameter according to whether element identification of a positioning element exists in the labeling information, whether element identification of the positioning element exists in the identification information, angular point position information in the labeling information and angular point position information in the identification information;
And determining the detection F1 parameter according to the detection recall parameter and the detection accuracy parameter.
4. A method according to claim 3, wherein determining the detection recall parameter and the detection accuracy parameter based on whether there is an element identification of a positioning element in the annotation information, whether there is an element identification of the positioning element in the identification information, and corner position information in the annotation information, corner position information in the identification information, comprises:
when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, if the element identification of the positioning element in the identification information is determined to be unique, the angular point position information of the positioning element in the identification information and the angular point position information of the positioning element in the labeling information are determined, and a first angular point position pixel error is formed between the angular point position information and the angular point position information;
if the pixel error of the first corner point is smaller than or equal to a preset value, determining the accurate number accumulation of the positions to be 1;
if the pixel error of the first corner point is larger than a preset value, determining that the number of false position detection is increased by 1, and determining that the number of missed position detection is increased by 1;
and determining a detection recall parameter according to the accurate position number and the position missed detection number, and determining a detection accurate parameter according to the accurate position number and the position false detection number.
5. The method of claim 4, the method further comprising:
when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, if the element identification of the positioning element in the identification information is determined to be non-unique, determining each angular point position information corresponding to the positioning element in the identification information and the angular point position information of the positioning element in the labeling information, and a second angular point position pixel error between the angular point position information and the angular point position information, wherein the total number of the element identifications of the positioning element in the identification information is n;
if the pixel errors of the m second angular point positions are smaller than or equal to a preset value, determining the accurate position number accumulation 1, and determining the false position number accumulation m, wherein m is a positive integer which is larger than or equal to 1 and smaller than or equal to n;
if the pixel error of any second angular point position is larger than a preset value, determining that the position missed detection quantity is accumulated 1, and determining that the position false detection quantity is accumulated n.
6. The method according to claim 1, wherein if the accuracy evaluation parameter is a single-frame horizontal average error and a single-frame vertical average error, comparing and analyzing element identification and/or corner position information in the labeling information and element identification and/or corner position information in the identification information to obtain an analysis result, including:
When the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the marking information, determining a third angular point position pixel error of each positioning element in the x direction and a fourth angular point position pixel error of each positioning element in the y direction according to angular point position information in the marking information and angular point position information in the identification information;
determining a single-frame transverse average error of one frame of image according to the pixel error of the third angular point position of each positioning element in the x direction and the number of the positioning elements in the one frame of image;
and determining a single-frame longitudinal average error of one frame of image according to the fourth angle point position pixel error of each positioning element in the y direction and the number of the positioning elements in the one frame of image.
7. The method of claim 6, the method further comprising:
and determining the overall average error according to the single-frame transverse average error, the single-frame longitudinal average error and the number of positioning elements in one frame of image.
8. The method according to any one of claims 1-7, wherein the obtaining labeling information of the positioning element includes:
acquiring an image through acquisition equipment on a vehicle;
And receiving a labeling instruction of a user, and determining a positioning element in the image according to the labeling instruction, wherein the positioning element has the labeling information.
9. The method according to any one of claims 1-7, wherein the obtaining the identification information of the positioning element output by the identification algorithm comprises:
acquiring an image through acquisition equipment on a vehicle;
and identifying the positioning elements in the image by adopting the identification algorithm to obtain the identification information.
10. An algorithm adjustment device for identifying a positioning element, comprising:
the first acquisition unit is used for acquiring the labeling information of the positioning element, wherein the labeling information comprises element identification and/or angular point position information of the positioning element;
the second acquisition unit is used for acquiring the identification information of the positioning element output by the identification algorithm, wherein the identification information comprises element identification and/or angular point position information of the positioning element;
the comparison unit is used for comparing and analyzing the element identification and/or angular point position information in the labeling information and the element identification and/or angular point position information in the identification information to obtain an analysis result, and the analysis result comprises at least one accuracy evaluation parameter;
The adjusting unit is used for adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element in the running process of the vehicle;
the at least one accuracy assessment parameter is one or more of: classifying F1 parameters, detecting F1 parameters, single-frame transverse average errors and single-frame longitudinal average errors;
wherein the classification F1 parameter represents the relation between a classification recall rate and a classification accuracy rate, the detection F1 parameter represents the relation between a detection recall rate and a detection accuracy rate, the single-frame transverse average error represents the position error of the positioning element in the x direction, and the single-frame longitudinal average error represents the position error of the positioning element in the y direction;
wherein if the accuracy evaluation parameter is a classification F1 parameter, the comparing unit includes:
the first determining module is used for determining a classified recall parameter and a classified accurate parameter according to whether the element identifier of the positioning element exists in the labeling information and whether the element identifier of the positioning element exists in the identification information;
and the second determining module is used for determining the classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
11. The apparatus of claim 10, wherein the first determination module comprises:
the first determining submodule is used for determining that the element false detection quantity is accumulated by 1 if the element identification of the positioning element exists in the identification information and the element identification of the positioning element does not exist in the marking information;
the second determining submodule is used for determining that the element missing detection quantity is accumulated by 1 if the element identification of the positioning element exists in the marking information and the element identification of the positioning element does not exist in the identifying information;
a third determining submodule, configured to determine that the accurate number of elements is accumulated by 1 if the identification information includes an element identifier of a positioning element and the labeling information includes an element identifier of the positioning element;
and the fourth determining submodule is used for determining the classified recall parameters according to the accurate number of the elements and the missed detection number of the elements and determining the classified accurate parameters according to the accurate number of the elements and the false detection number of the elements.
12. The apparatus of claim 10, wherein if the accuracy assessment parameter is a detection F1 parameter, the comparison unit comprises:
the third determining module is used for determining a detection recall parameter and a detection accuracy parameter according to whether the element identifier of the positioning element exists in the labeling information, whether the element identifier of the positioning element exists in the identification information, angular point position information in the labeling information and angular point position information in the identification information;
And the fourth determining module is used for determining the detection F1 parameter according to the detection recall parameter and the detection accuracy parameter.
13. The apparatus of claim 12, wherein the third determination module comprises:
a fifth determining submodule, configured to determine, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information, if it is determined that the element identifier of the positioning element in the identification information is unique, a first corner pixel error between corner position information of the positioning element in the identification information and corner position information of the positioning element in the labeling information;
a sixth determining submodule, configured to determine that the accurate number of positions is accumulated by 1 if the pixel error of the first corner point is less than or equal to a preset value;
a seventh determining submodule, configured to determine that the number of false position detections is increased by 1 and determine that the number of missed position detections is increased by 1 if the pixel error of the first corner point is greater than a preset value;
and the eighth determining submodule is used for determining a detection recall parameter according to the accurate position number and the position missed detection number and determining a detection accurate parameter according to the accurate position number and the position false detection number.
14. The apparatus of claim 13, wherein the third determining module further comprises:
a ninth determining submodule, configured to, when the element identifier of the positioning element exists in both the identification information and the labeling information, and the element identifier of the positioning element in the identification information is determined not to be unique, determine a second corner position pixel error between each piece of corner position information corresponding to the positioning element in the identification information and the corner position information of the positioning element in the labeling information, wherein the total number of element identifiers of the positioning element in the identification information is n;
a tenth determining submodule, configured to, if m of the second corner position pixel errors are less than or equal to a preset value, increment the position accurate count by 1 and increment the position false detection count by m, where m is a positive integer greater than or equal to 1 and less than or equal to n;
and an eleventh determining submodule, configured to, if any second corner position pixel error is greater than the preset value, increment the position missed detection count by 1 and increment the position false detection count by n.
15. The apparatus of claim 10, wherein if the accuracy evaluation parameters are a single-frame lateral average error and a single-frame longitudinal average error, the comparing unit comprises:
a fifth determining module, configured to, when the element identifier of the positioning element exists in both the identification information and the labeling information, determine a third corner position pixel error of each positioning element in the x direction and a fourth corner position pixel error of each positioning element in the y direction according to the corner position information in the labeling information and the corner position information in the identification information;
a sixth determining module, configured to determine the single-frame lateral average error of one frame of image according to the third corner position pixel error of each positioning element in the x direction and the number of positioning elements in the frame of image;
and a seventh determining module, configured to determine the single-frame longitudinal average error of one frame of image according to the fourth corner position pixel error of each positioning element in the y direction and the number of positioning elements in the frame of image.
16. The apparatus of claim 15, wherein the comparing unit further comprises:
an eighth determining module, configured to determine an overall average error according to the single-frame lateral average error, the single-frame longitudinal average error, and the number of positioning elements in the frame of image.
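A hedged sketch of claims 15-16 is given below: per-element corner pixel errors in the x and y directions are averaged over one frame to give the single-frame lateral and longitudinal average errors, which are then combined into an overall average error. The claims do not state the exact combination rule, so the averaging here (mean absolute error per direction, overall error as the mean of all per-direction errors) is one plausible reading, and all names are illustrative.

```python
def frame_average_errors(labeled, recognized):
    """labeled / recognized: dict mapping element identifier -> (x, y) corner position
    for the positioning elements of a single frame of image."""
    dx, dy = [], []
    for element_id, (lx, ly) in labeled.items():
        if element_id not in recognized:
            continue
        rx, ry = recognized[element_id]
        dx.append(abs(rx - lx))  # third corner position pixel error, x direction
        dy.append(abs(ry - ly))  # fourth corner position pixel error, y direction

    n = len(dx)  # number of positioning elements matched in this frame
    if n == 0:
        return 0.0, 0.0, 0.0
    lateral_avg = sum(dx) / n
    longitudinal_avg = sum(dy) / n
    overall_avg = (sum(dx) + sum(dy)) / (2 * n)  # assumed combination rule
    return lateral_avg, longitudinal_avg, overall_avg
```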
17. The apparatus according to any one of claims 10-16, wherein the first acquisition unit is specifically configured to:
acquire an image through an acquisition device on a vehicle;
and receive a labeling instruction from a user and determine the positioning element in the image according to the labeling instruction, wherein the positioning element carries the labeling information.
18. The apparatus according to any one of claims 10-16, wherein the second acquisition unit is specifically configured to:
acquire an image through an acquisition device on a vehicle;
and identify the positioning element in the image by using the identification algorithm to obtain the identification information.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-9.
21. An algorithm adjustment method for identifying a positioning element, comprising:
comparing and analyzing the element identifier and/or corner position information in labeling information of a positioning element with the element identifier and/or corner position information in identification information of the positioning element output by an identification algorithm, to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
adjusting the identification algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted identification algorithm is used for identifying the positioning element during running of the vehicle;
wherein the at least one accuracy evaluation parameter is one or more of: a classification F1 parameter, a detection F1 parameter, a single-frame lateral average error, and a single-frame longitudinal average error; the classification F1 parameter represents a relation between a classification recall rate and a classification accuracy rate, the detection F1 parameter represents a relation between a detection recall rate and a detection accuracy rate, the single-frame lateral average error represents a position error of the positioning element in the x direction, and the single-frame longitudinal average error represents a position error of the positioning element in the y direction;
if the accuracy evaluation parameter is the classification F1 parameter, comparing and analyzing the element identifier in the labeling information with the element identifier in the identification information to obtain the analysis result comprises:
determining a classification recall parameter and a classification accuracy parameter according to whether the element identifier of the positioning element exists in the labeling information and whether the element identifier of the positioning element exists in the identification information;
and determining the classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
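Claim 21, like claims 10 and 12, only states that the F1 parameters "represent a relation" between the recall and accuracy parameters. If that relation is the conventional F1 score, it would be the harmonic mean below, with P the accuracy (precision) parameter and R the recall parameter; this formula is an assumption, not something the claims spell out.

$$ F_1 = \frac{2 \, P \, R}{P + R} $$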
22. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-9.
CN202010605391.5A 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element Active CN111783623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605391.5A CN111783623B (en) 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element

Publications (2)

Publication Number Publication Date
CN111783623A (en) 2020-10-16
CN111783623B true CN111783623B (en) 2024-04-12

Family

ID=72760315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605391.5A Active CN111783623B (en) 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element

Country Status (1)

Country Link
CN (1) CN111783623B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306353B (en) * 2020-10-27 2022-06-24 北京京东方光电科技有限公司 Augmented reality device and interaction method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120209697A1 (en) * 2010-10-14 2012-08-16 Joe Agresti Bias Reduction in Internet Measurement of Ad Noting and Recognition
US10789728B2 (en) * 2018-11-15 2020-09-29 Denso International America, Inc. Machine learning framework for visual tracking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011203921A (en) * 2010-03-25 2011-10-13 Denso It Laboratory Inc Driving evaluation apparatus, method and program
CN109034078A (en) * 2018-08-01 2018-12-18 腾讯科技(深圳)有限公司 Training method, age recognition methods and the relevant device of age identification model
CN109389030A (en) * 2018-08-23 2019-02-26 平安科技(深圳)有限公司 Facial feature points detection method, apparatus, computer equipment and storage medium
CN110148148A (en) * 2019-03-01 2019-08-20 北京纵目安驰智能科技有限公司 A kind of training method, model and the storage medium of the lower edge detection model based on target detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Performance evaluation mechanism of pavement crack recognition algorithms based on tolerance PR curves; Peng Bo; Ji Ran; Journal of Chongqing Jiaotong University (Natural Science Edition); 2017-07-15 (07); full text *

Also Published As

Publication number Publication date
CN111783623A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
US10445887B2 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN112507949A (en) Target tracking method and device, road side equipment and cloud control platform
US11288887B2 (en) Object tracking method and apparatus
CN111553282A (en) Method and device for detecting vehicle
EP3876150A2 (en) Vehicle tracking method and apparatus
CN109711427A (en) Object detection method and Related product
CN111310840B (en) Data fusion processing method, device, equipment and storage medium
CN110335313B (en) Audio acquisition equipment positioning method and device and speaker identification method and system
EP3842995A1 (en) Method and apparatus for generating map
US20190290493A1 (en) Intelligent blind guide method and apparatus
CN111339877B (en) Method and device for detecting length of blind area, electronic equipment and storage medium
CN110910445A (en) Object size detection method and device, detection equipment and storage medium
CN111507204A (en) Method and device for detecting countdown signal lamp, electronic equipment and storage medium
CN111783623B (en) Algorithm adjustment method, device, equipment and medium for identifying positioning element
US11830242B2 (en) Method for generating a license plate defacement classification model, license plate defacement classification method, electronic device and storage medium
US20220044559A1 (en) Method and apparatus for outputing vehicle flow direction, roadside device, and cloud control platform
CN116778458B (en) Parking space detection model construction method, parking space detection method, equipment and storage medium
CN111612851B (en) Method, apparatus, device and storage medium for calibrating camera
CN111640301B (en) Fault vehicle detection method and fault vehicle detection system comprising road side unit
CN110798681B (en) Monitoring method and device of imaging equipment and computer equipment
CN111951328A (en) Object position detection method, device, equipment and storage medium
CN111597940B (en) Rendering model evaluation method and device, electronic equipment and readable storage medium
CN112581526A (en) Evaluation method, device, equipment and storage medium for obstacle detection
CN111523452B (en) Method and device for detecting human body position in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant