CN111783623A - Algorithm adjustment method, apparatus, device, and medium for recognizing positioning element - Google Patents

Algorithm adjustment method, apparatus, device, and medium for recognizing positioning element

Info

Publication number
CN111783623A
Authority
CN
China
Prior art keywords
information
positioning element
identification
positioning
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010605391.5A
Other languages
Chinese (zh)
Other versions
CN111783623B (en)
Inventor
赵晓健
向旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010605391.5A priority Critical patent/CN111783623B/en
Publication of CN111783623A publication Critical patent/CN111783623A/en
Application granted granted Critical
Publication of CN111783623B publication Critical patent/CN111783623B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an algorithm adjustment method, apparatus, device, and medium for identifying positioning elements, and relates to autonomous parking and automatic driving. The specific implementation scheme is as follows: acquiring the marking information of a positioning element, and acquiring the identification information of the positioning element output by a recognition algorithm; comparing and analyzing the element identifier and/or corner position information in the marking information with the element identifier and/or corner position information in the identification information to obtain an analysis result, where the analysis result includes at least one accuracy evaluation parameter; and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, where the adjusted recognition algorithm is used to recognize positioning elements while the vehicle is driving. The accuracy of the recognition algorithm is thus automatically evaluated and analyzed, and the recognition algorithm is quantitatively evaluated with precision, so that the adjusted recognition algorithm can accurately recognize the positioning elements.

Description

Algorithm adjustment method, apparatus, device, and medium for recognizing positioning element
Technical Field
The embodiments of the present application relate to autonomous parking and automatic driving in data/image processing, and more particularly to an algorithm adjustment method, apparatus, device, and medium for identifying a positioning element.
Background
When a vehicle parks, positioning elements, such as pillars or wall stickers, may be arranged in the automatic parking scene. The vehicle may identify the positioning elements using a recognition algorithm and then park based on the positioning elements. The accuracy with which the recognition algorithm identifies the positioning elements needs to be evaluated and verified, so that the positioning elements can be accurately identified after the recognition algorithm is applied to the vehicle.
In the prior art, the identification result output by the identification algorithm and the positioning elements in the real scene can be compared manually, so as to determine whether the identification algorithm is accurate.
However, in the prior art, manually assessing the accuracy of the recognition algorithm relies on human experience and subjective judgment, which affects the authenticity and accuracy of the evaluation of the recognition algorithm. Furthermore, when the recognition algorithm is adjusted according to the result of such a manual evaluation, the recognition algorithm cannot be adjusted correctly, and the adjusted recognition algorithm cannot accurately recognize the positioning elements.
Disclosure of Invention
An algorithm adjustment method, apparatus, device, and medium for identifying a localization element are provided.
According to a first aspect of the present application, there is provided an algorithm adjustment method for identifying a localization element, comprising:
acquiring marking information of a positioning element and acquiring identification information of the positioning element output by an identification algorithm, wherein the marking information comprises an element identifier and/or angular point position information of the positioning element, and the identification information comprises the element identifier and/or angular point position information of the positioning element;
comparing and analyzing the element identification and/or the angular point position information in the labeling information and the element identification and/or the angular point position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements in the driving process of the vehicle.
According to a second aspect of the present application, there is provided an algorithm adjusting apparatus for identifying a localization element, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring marking information of a positioning element, and the marking information comprises an element identifier and/or corner position information of the positioning element;
the second acquisition unit is used for acquiring the identification information of the positioning element output by the identification algorithm, wherein the identification information comprises the element identifier and/or the angular point position information of the positioning element;
the comparison unit is used for comparing and analyzing the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information to obtain an analysis result, and the analysis result comprises at least one accuracy evaluation parameter;
and the adjusting unit is used for adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements in the driving process of the vehicle.
According to a third aspect of the present application, there is provided an algorithm adjustment method for identifying a localization element, comprising:
comparing and analyzing element identification and/or corner position information in the labeling information of the positioning elements and element identification and/or corner position information in the identification information of the positioning elements output by an identification algorithm to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements in the driving process of the vehicle.
According to a fourth aspect of the present application, there is provided an electronic device comprising: a processor and a memory; the memory stores executable instructions of the processor; wherein the processor is configured to execute the algorithm adjustment method for identifying a localization element according to any one of the first aspect or the third aspect via execution of the executable instructions.
According to a fifth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the algorithm adjustment method for identifying a localization element of any one of the first aspect or the algorithm adjustment method for identifying a localization element as described in the third aspect.
According to a sixth aspect of the present application, there is provided a program product comprising: a computer program stored in a readable storage medium, from which at least one processor of the server can read the computer program, the at least one processor executing the computer program causing the server to perform the algorithm adjustment method for identifying a localization element according to any one of the first aspect, or to perform the algorithm adjustment method for identifying a localization element according to the third aspect.
According to the technical solution of the application, at least one accuracy evaluation parameter is obtained by comparing and analyzing the element identifier and/or corner position information in the labeling information with the element identifier and/or corner position information in the identification information, and the recognition algorithm is adjusted in multiple dimensions according to the obtained accuracy evaluation parameters. In this way, the accuracy of the recognition algorithm is automatically evaluated and analyzed to obtain an objective analysis result, and the recognition algorithm is adjusted based on each accuracy evaluation parameter, so that it can be adjusted accurately and the adjusted recognition algorithm can accurately recognize the positioning elements. Labor cost is reduced, the accuracy of the adjustment is improved, and the recognition accuracy and precision of the adjusted recognition algorithm are improved. Moreover, because the recognition algorithm is analyzed based on the labeled element identifier and/or corner position information and the recognized element identifier and/or corner position information, the recognition algorithm can be evaluated accurately and quantitatively.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 2 is a schematic diagram according to a first embodiment of the present application;
FIG. 3 is a schematic illustration of corner point position information for a localization element provided in accordance with the present application;
FIG. 4 is a schematic illustration according to a second embodiment of the present application;
FIG. 5 is a schematic illustration according to a third embodiment of the present application;
FIG. 6 is a schematic illustration according to a fourth embodiment of the present application;
FIG. 7 is a schematic illustration according to a fifth embodiment of the present application;
FIG. 8 is a schematic illustration according to a sixth embodiment of the present application;
fig. 9 is a schematic diagram of a seventh embodiment according to the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Vehicles have become an essential tool for people to travel; for example, the vehicle may be an autonomous vehicle. In a parking scenario, positioning elements, such as pillars or wall stickers, may be arranged in the automatic parking scene. In one example, pillars may be arranged at an uphill/downhill crossing or in an underground garage, or wall stickers may be attached to walls as positioning elements. In an automatic parking scene, GPS signals are weak, so the positioning elements are set and recorded in a high-precision map; the vehicle captures an image through a camera at the front of the vehicle and obtains the positioning elements based on the high-precision map and an identification algorithm. The vehicle then determines its own position according to the positioning elements and completes parking based on the positioning elements and its position.
Because the position of the vehicle needs to be known to finish parking, and the parking needs to be finished based on the positioning elements, the accuracy and precision of the identification algorithm for identifying the positioning elements are strictly required. Therefore, the accuracy of the identification algorithm for identifying the positioning elements needs to be evaluated and verified, so that the positioning elements can be accurately identified after the identification algorithm is applied to the vehicle.
In one example, after the image collected by the vehicle is identified by adopting an identification algorithm to obtain the positioning element, the identified positioning element is visualized on the image; then, the identification result output by the identification algorithm and the positioning elements in the real scene are compared manually, and whether the identification algorithm is accurate or not is further determined.
However, in the above manner, manually assessing the accuracy of the recognition algorithm relies on human experience and subjective judgment, which may affect the authenticity and accuracy of the evaluation of the recognition algorithm. Furthermore, when the recognition algorithm is adjusted according to the result of such a manual evaluation, the recognition algorithm cannot be adjusted correctly, and thus the adjusted recognition algorithm cannot accurately recognize the positioning elements.
The application provides an algorithm adjusting method, device, equipment and medium for identifying positioning elements, which are applied to autonomous parking and automatic driving in data/image processing so as to accurately and reasonably evaluate the identification algorithm for identifying the positioning elements and further adjust the identification algorithm; so that the adjusted recognition algorithm can accurately recognize the positioning elements.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application, and as shown in fig. 1, a plurality of positioning elements are arranged in a parking area (an automatic parking scenario), and when a vehicle parks, the positioning elements need to be identified to determine a position of the vehicle, so as to complete parking. For example, when an autonomous vehicle is parked, a localization element needs to be recognized.
Fig. 2 is a schematic diagram according to a first embodiment of the present application, and as shown in fig. 2, the algorithm adjusting method for identifying a positioning element provided in this embodiment includes:
101. and acquiring marking information of the positioning elements, and acquiring identification information of the positioning elements output by an identification algorithm, wherein the marking information comprises element identifications and/or corner position information of the positioning elements, and the identification information comprises the element identifications and/or the corner position information of the positioning elements.
In one example, step 101 specifically includes the following steps: acquiring an image through acquisition equipment on a vehicle; receiving a marking instruction of a user, and determining a positioning element in the image according to the marking instruction, wherein the positioning element has marking information. Acquiring an image through acquisition equipment on a vehicle; and identifying the positioning elements in the image by adopting an identification algorithm to obtain identification information.
For example, the execution subject of this embodiment may be a vehicle, a controller of the vehicle, an electronic device, an intelligent terminal, a server, an algorithm adjusting apparatus for identifying a positioning element, or another apparatus or device that can perform the method of this embodiment. This embodiment is described with the electronic device as the execution subject.
The electronic device obtains an image, and the image includes a positioning element. In one example, the vehicle is provided with an acquisition device, for example a camera; the acquisition device captures an image of the environment where the vehicle is located, and the electronic device obtains the image captured by the acquisition device.
The electronic device displays the image, for example in an annotation tool, which is software that can display the image and receive user instructions. A user sends a marking instruction to the electronic device by touching the screen of the electronic device, pressing a key on its keyboard, issuing a voice command, or the like; the marking instruction indicates the positioning element selected by the user. The electronic device determines the positioning elements in the image according to the marking instruction and obtains the marking information of each positioning element labeled by the user. The marking information includes the element identifier and/or corner position information of the positioning element. Information about the labeled positioning elements is thus obtained.
In addition, the electronic device is provided with an identification algorithm, for example, an algorithm capable of identifying the positioning elements, such as a machine learning algorithm; the electronic equipment runs the recognition algorithm, recognizes the image collected by the collecting equipment, recognizes the positioning elements in the image, and obtains the recognition information of each positioning element recognized by the recognition algorithm. The identification information comprises an element identification of the positioning element and/or corner position information. And then information of the identified positioning element is obtained.
Wherein the element identification is an ID. Fig. 3 is a schematic diagram of angular point position information of a positioning element according to the present application, and as shown in fig. 3, the positioning element has four angular points, namely an angular point a, an angular point B, an angular point C, and an angular point D, and the angular point position information of the four angular points defines positions of the positioning element in four directions, respectively.
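To make the compared data concrete, the following Python sketch shows one possible way to represent the labeling information and the identification information described above. The class name, field names, and the four-corner layout are illustrative assumptions, not structures defined by the application.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical representation of one positioning element: an element ID plus the
# pixel positions of its four corner points A, B, C, D (see FIG. 3).
@dataclass
class PositioningElement:
    element_id: str                           # element identification (ID)
    corners: Dict[str, Tuple[float, float]]   # corner name -> (x, y) pixel position

# Labeling information: elements marked by a user in the annotation tool
# (coordinate values are made up for illustration).
labeling_info = {
    "pillar_01": PositioningElement(
        element_id="pillar_01",
        corners={"A": (120.0, 80.0), "B": (180.0, 80.0),
                 "C": (180.0, 200.0), "D": (120.0, 200.0)},
    ),
}

# Identification information: elements output by the recognition algorithm
# for the same image frame (values likewise made up).
identification_info = {
    "pillar_01": PositioningElement(
        element_id="pillar_01",
        corners={"A": (121.5, 79.0), "B": (181.0, 81.0),
                 "C": (179.0, 201.5), "D": (119.5, 199.0)},
    ),
}
```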
102. And comparing and analyzing the element identification and/or the corner position information in the labeling information and the element identification and/or the corner position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter.
In one example, the at least one accuracy assessment parameter is one or more of: classifying F1 parameters, detecting F1 parameters, single-frame horizontal average errors and single-frame vertical average errors.
Wherein, the classification F1 parameter characterizes the relationship between the classification recall ratio and the classification accuracy. The detection F1 parameter represents the relation between the detection recall rate and the detection accuracy rate, the single-frame horizontal average error represents the position error of the positioning element in the x direction, and the single-frame vertical average error represents the position error of the positioning element in the y direction.
Exemplarily, the electronic device performs comparison analysis based on the element identifier and/or the corner position information of each positioning element obtained by labeling and the element identifier and/or the corner position information of each positioning element obtained by recognition to obtain at least one accuracy evaluation parameter, that is, obtain an analysis result.
In one example, based on the element identifier of each positioning element obtained by labeling and the element identifier of each positioning element obtained by recognition, whether the two identifiers are consistent or not is analyzed, and an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not can be obtained.
In one example, based on the corner position information of each positioning element obtained by labeling and the corner position information of each positioning element obtained by identifying, whether the two are consistent or not is analyzed, and an accuracy evaluation parameter representing whether the position detection of the positioning element is accurate or not can be obtained.
In one example, based on the element identifier and the corner position information of each positioning element obtained by labeling and the element identifier and the corner position information of each positioning element obtained by identification, whether the identifiers are consistent and the corner position information is consistent in the result obtained by labeling and the result obtained by identification of the same positioning element are analyzed, and an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate and an accuracy evaluation parameter indicating whether the position detection of the positioning elements is accurate can be obtained.
In one example, based on the element identifier of each positioning element obtained by labeling, and the element identifier and the corner position information of each positioning element obtained by recognition, an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not can be obtained, and the recognized position of the positioning element can be obtained.
In one example, based on the element identifier and the corner position information of each positioning element obtained by labeling and the element identifier of each positioning element obtained by recognition, an accuracy evaluation parameter indicating whether the number detection of the positioning elements is accurate or not can be obtained, and the labeled position of the positioning element can be obtained.
For example, the accuracy assessment parameters are: classifying F1 parameters, detecting F1 parameters, single-frame horizontal average errors and single-frame vertical average errors. And various accuracy evaluation parameters are provided, so that the subsequent adjustment of the recognition algorithm on multiple dimensions is facilitated.
By comparing the element identifier in the labeling information with the element identifier in the identification information, the element identification of each positioning element is analyzed, so it can be determined whether the positioning elements identified by the identification algorithm are correct, and the classification recall rate and the classification accuracy rate are obtained; the classification F1 parameter is then determined according to the relationship between the classification recall rate and the classification accuracy rate. For example, the classification F1 parameter is computed from the product of the classification recall rate and the classification accuracy rate normalized by their sum (their harmonic mean).
By comparing the element identifier in the labeling information with the element identifier in the identification information, the element identification of each positioning element is analyzed, and it can be determined whether the positioning elements identified by the identification algorithm are correct. For the same positioning element, the corner position information in the labeling information is compared with the corner position information in the identification information and the error between the two is calculated, yielding the detection recall rate and the detection accuracy rate; the detection F1 parameter is then determined according to the relationship between the detection recall rate and the detection accuracy rate. For example, the detection F1 parameter is obtained from the detection recall rate and the detection accuracy rate.
Because a positioning element has four corner points, the direction from corner D to corner C can be taken as the x direction, and the direction from corner D to corner A as the y direction. For the same positioning element, the corner position information in the labeling information is compared with the corner position information in the identification information and the error between the two is calculated, giving the position error of the positioning element in the x direction; the single-frame horizontal average error is obtained by averaging the position errors in the x direction of all positioning elements in the same frame image.
Likewise, for the same positioning element, the corner position information in the labeling information is compared with the corner position information in the identification information and the error between the two is calculated, giving the position error of the positioning element in the y direction; the single-frame vertical average error is obtained by averaging the position errors in the y direction of all positioning elements in the same frame image.
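As a rough illustration of the single-frame horizontal and vertical average errors just described, the sketch below computes a per-element error in the x and y directions from matched corner points and averages it over one frame. It assumes each element's corners are given as a mapping from corner name to an (x, y) pixel position, as in the earlier sketch, and that the per-element error is the mean absolute corner difference along each axis; the application does not fix these implementation details.

```python
from typing import Dict, Tuple

Corners = Dict[str, Tuple[float, float]]  # corner name -> (x, y) pixel position

def element_error(labeled: Corners, recognized: Corners, axis: int) -> float:
    """Mean absolute error of one element along one axis (0 = x, 1 = y)."""
    diffs = [abs(labeled[c][axis] - recognized[c][axis]) for c in labeled]
    return sum(diffs) / len(diffs)

def single_frame_average_errors(labeled_frame: Dict[str, Corners],
                                recognized_frame: Dict[str, Corners]) -> Tuple[float, float]:
    """Single-frame horizontal (x) and vertical (y) average errors over all
    positioning elements present in both the labeling and identification info.
    Assumes at least one element is matched between the two."""
    common = [eid for eid in labeled_frame if eid in recognized_frame]
    x_errors = [element_error(labeled_frame[e], recognized_frame[e], 0) for e in common]
    y_errors = [element_error(labeled_frame[e], recognized_frame[e], 1) for e in common]
    return sum(x_errors) / len(x_errors), sum(y_errors) / len(y_errors)
```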
103. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements during the driving process of the vehicle.
Illustratively, the electronic device adjusts the recognition algorithm based on the respective accuracy assessment parameters obtained. And then adjusting the recognition algorithm in each dimension.
In one example, the accuracy assessment parameter is a classification F1 parameter, and the classification F1 parameter characterizes whether the recognition algorithm is accurate for identifying the number of positioning elements. If the classification F1 parameter does not meet the requirement, the parameter for identifying the number of positioning elements in the identification algorithm needs to be adjusted.
In one example, the accuracy evaluation parameter is a detection F1 parameter, and the detection F1 parameter characterizes whether the recognition algorithm is accurate for identifying the position of the positioning element. If the detected F1 parameter is not satisfactory, the parameters for identifying the position of the localization element in the identification algorithm need to be adjusted.
In one example, the accuracy evaluation parameter is a single-frame horizontal average error, and the single-frame horizontal average error characterizes whether the position of the positioning element in the x direction is accurately identified by the identification algorithm. If the single frame horizontal average error does not meet the requirement, the parameters for identifying the position of the positioning element in the x direction in the identification algorithm need to be adjusted.
In one example, the accuracy evaluation parameter is a single-frame vertical average error, and the single-frame vertical average error characterizes whether the position of the positioning element in the y direction is accurately identified by the identification algorithm. If the single-frame vertical average error does not meet the requirement, the parameters for identifying the position of the positioning element in the y direction in the identification algorithm need to be adjusted.
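The per-parameter adjustment decisions in the examples above can be summarized as simple threshold checks. The sketch below is only a schematic of that decision logic; the threshold values and the described adjustment actions are invented placeholders, not requirements or parameters specified by the application.

```python
from typing import List

def adjustment_plan(classification_f1: float, detection_f1: float,
                    frame_x_error: float, frame_y_error: float) -> List[str]:
    """Return a list of hypothetical adjustment actions for the accuracy
    evaluation parameters that fail their (assumed) requirements."""
    actions = []
    if classification_f1 < 0.95:   # assumed requirement on the classification F1 parameter
        actions.append("adjust parameters that affect how many positioning elements are identified")
    if detection_f1 < 0.95:        # assumed requirement on the detection F1 parameter
        actions.append("adjust parameters that affect position identification of positioning elements")
    if frame_x_error > 3.0:        # assumed pixel budget for the single-frame horizontal average error
        actions.append("adjust parameters for identifying positions in the x direction")
    if frame_y_error > 3.0:        # assumed pixel budget for the single-frame vertical average error
        actions.append("adjust parameters for identifying positions in the y direction")
    return actions
```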
Then, setting the adjusted recognition algorithm in a controller of the vehicle; when the vehicle needs to be parked, a controller of the vehicle identifies the acquired image based on an identification algorithm, and then positioning elements in the environment where the vehicle is located are obtained; and the controller of the vehicle determines the position of the vehicle based on the position information of the positioning element, and then completes the parking action.
In this embodiment, at least one accuracy evaluation parameter is obtained by comparing and analyzing the element identifier and/or corner position information in the labeling information with the element identifier and/or corner position information in the identification information, and the recognition algorithm is adjusted in multiple dimensions according to the obtained accuracy evaluation parameters. In this way, the accuracy of the recognition algorithm is automatically evaluated and analyzed to obtain an objective analysis result, and the recognition algorithm is adjusted based on each accuracy evaluation parameter, so that it can be adjusted accurately and the adjusted recognition algorithm can accurately recognize the positioning elements. Labor cost is reduced, the accuracy of the adjustment is improved, and the recognition accuracy and precision of the adjusted recognition algorithm are improved. Moreover, because the recognition algorithm is analyzed based on the labeled element identifier and/or corner position information and the recognized element identifier and/or corner position information, the recognition algorithm can be evaluated accurately and quantitatively.
Fig. 4 is a schematic diagram of a second embodiment of the present application, and as shown in fig. 4, the algorithm adjusting method for identifying a positioning element provided in this embodiment includes:
201. and acquiring marking information of the positioning elements, and acquiring identification information of the positioning elements output by an identification algorithm, wherein the marking information comprises element identifications and/or corner position information of the positioning elements, and the identification information comprises the element identifications and/or the corner position information of the positioning elements.
For example, the execution subject of this embodiment may be a vehicle, a controller of the vehicle, an electronic device, an intelligent terminal, a server, an algorithm adjusting apparatus for identifying a positioning element, or another apparatus or device that can perform the method of this embodiment. This embodiment is described with the electronic device as the execution subject.
For this step, reference may be made to step 101 shown in fig. 2, which is not repeated here.
202. The accuracy evaluation parameter is a classification F1 parameter; and determining a classification recall parameter and a classification accurate parameter according to whether the element identifier of the positioning element exists in the labeling information and whether the element identifier of the positioning element exists in the identification information.
In one example, step 202 specifically includes the following steps:
and if the element identification of the positioning element exists in the identification information and the element identification of the positioning element does not exist in the marking information, determining that the number of the false detections of the element is accumulated to 1.
And if the element identification of the positioning element exists in the labeling information and the element identification of the positioning element does not exist in the identification information, determining that the missing element detection quantity is accumulated to be 1.
And if the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the marking information, determining that the accurate quantity of the elements is accumulated by 1.
And determining a classification recall parameter according to the accurate element quantity and the missing element quantity, and determining a classification accurate parameter according to the accurate element quantity and the false element quantity.
Illustratively, after step 201, obtaining annotation information of each frame of image, where the annotation information includes an element identifier and/or corner position information of each positioning element in the frame of image; and obtaining the identification information of each frame of image, wherein the identification information comprises the element identification and/or the corner position information of each positioning element in the frame of image.
For each frame of image, the labeling information and the identification information of the image are compared. For the same positioning element, whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information is judged, so as to obtain the classification recall parameter and the classification accuracy parameter. The classification recall parameter characterizes the relationship between the accuracy of the number detection of the positioning elements and the missed detection rate of the number detection, and the classification accuracy parameter characterizes the relationship between the accuracy of the number detection of the positioning elements and the false detection rate of the number detection. The accuracy of the number detection means that a positioning element is both identified and labeled. The false detection rate of the number detection means that a positioning element is identified but not labeled. The missed detection rate of the number detection means that a positioning element is labeled but not identified.
The classification recall parameter and the classification accuracy parameter are then obtained according to whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information.
In one example, first, the number of element false detections N_fp(id) is set to 0, the number of element missed detections N_fn(id) is set to 0, and the accurate number of elements N_tp(id) is set to 0.
The labeling information and the identification information of each frame of image are compared, and for the same positioning element it is judged whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information. For the same positioning element, when it is determined that the element identifier of the positioning element exists in the identification information but does not exist in the labeling information, the number of element false detections N_fp(id) is accumulated by 1. When it is determined that the element identifier of the positioning element exists in the labeling information but does not exist in the identification information, the number of element missed detections N_fn(id) is accumulated by 1. When it is determined that the element identifier of the positioning element exists in the identification information and also exists in the labeling information, the accurate number of elements N_tp(id) is accumulated by 1.
Moreover, the number of element false detections, the number of element missed detections and the accurate number of elements can each be accumulated over multiple frames of images.
Then, based on the number of element false detections, the number of element missed detections and the accurate number of elements of one or more frames of images, the classification recall parameter is determined from the accurate number of elements N_tp(id) and the number of element missed detections N_fn(id) as Recall_id = N_tp(id)/(N_tp(id)+N_fn(id)), and the classification accuracy parameter is determined from the accurate number of elements N_tp(id) and the number of element false detections N_fp(id) as Precision_id = N_tp(id)/(N_tp(id)+N_fp(id)).
With the above calculation process, the number of element false detections, the number of element missed detections and the accurate number of elements of one or more frames of images are obtained according to whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information, and then the classification recall parameter and the classification accuracy parameter are obtained based on the detection results of the one or more frames of images.
203. And determining a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
Illustratively, after the step 202, a classification F1 parameter is determined according to the obtained classification recall parameter and the classification accuracy parameter, and the classification F1 parameter characterizes whether the recognition algorithm is accurate for the number of the positioning elements.
In one example, Recall parameter Recall according to classificationidAnd classification accuracy parameter PrecisionidDetermining the classification F1 parameter as F1id=(2·Precisionid·Recallid)/(Precisionid+Recallid)。
204. The accuracy evaluation parameter is a detection F1 parameter; and determining a detection recall parameter and a detection accurate parameter according to whether the element identification of the positioning element exists in the labeling information, whether the element identification of the positioning element exists in the identification information, and angular point position information in the labeling information and angular point position information in the identification information.
In one example, step 204 specifically includes the following steps:
the method comprises the following steps that when element identification of a positioning element exists in identification information and element identification of the positioning element exists in marking information, if the element identification of the positioning element in the identification information is determined to be unique, corner position information of the positioning element in the identification information and corner position information of the positioning element in the marking information are determined, and a first corner position pixel error is formed between the corner position information of the positioning element in the identification information and the corner position information of the positioning element in the marking information.
If the pixel error of the first corner point position is less than or equal to a preset value, determining the accurate quantity accumulation 1 of the positions; if the pixel error of the first corner position is larger than the preset value, determining that the number of position false detections is added by 1, and determining that the number of position missed detections is added by 1; and determining detection recall parameters according to the accurate position number and the position missing detection number, and determining detection accurate parameters according to the accurate position number and the position false detection number.
And a second step of determining, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the labeling information, if it is determined that the element identifier of the positioning element in the identification information is not unique, the position information of each corner point corresponding to the positioning element in the identification information, the position information of the corner point of the positioning element in the labeling information, and a second corner point position pixel error between the two, wherein the total number of the element identifiers of the positioning element in the identification information is n.
If the pixel errors of the m second corner positions are smaller than or equal to a preset value, determining the accurate position quantity accumulation 1, and determining the position false detection quantity accumulation m, wherein m is a positive integer which is larger than or equal to 1 and smaller than or equal to n; and if the pixel error of any second corner position is larger than a preset value, determining the position missing detection quantity accumulation 1, and determining the position false detection quantity accumulation n.
Illustratively, after step 201, obtaining annotation information of each frame of image, where the annotation information includes an element identifier and/or corner position information of each positioning element in the frame of image; and obtaining the identification information of each frame of image, wherein the identification information comprises the element identification and/or the corner position information of each positioning element in the frame of image.
For each frame of image, the labeling information and the identification information of the image are compared. For the same positioning element, whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information is judged, and at the same time the detection recall parameter and the detection accuracy parameter are determined based on the corner position information in the labeling information and the corner position information in the identification information. The detection recall parameter characterizes the relationship between the accuracy of the position detection of the positioning elements and the missed detection rate of the position detection, and the detection accuracy parameter characterizes the relationship between the accuracy of the position detection of the positioning elements and the false detection rate of the position detection. The accuracy of the position detection means that a positioning element is identified and labeled, and the absolute value of the difference between the identified corner position and the labeled corner position is less than or equal to a preset value. The false detection rate of the position detection means that a positioning element is identified and labeled, but the absolute value of the difference between the identified corner position and the labeled corner position is greater than the preset value. The missed detection rate of the position detection means that a positioning element is labeled, but the absolute value of the difference between the identified corner position and the labeled corner position is greater than the preset value.
The detection recall parameter and the detection accuracy parameter are then obtained according to whether the element identifier of the positioning element exists in the identification information, whether it exists in the labeling information, and the corner position information in the labeling information and the corner position information in the identification information.
In one example, first, the accurate number of positions N_tp(pos) is set to 0, the number of position false detections N_fp(pos) is set to 0, and the number of position missed detections N_fn(pos) is set to 0.
The labeling information and the identification information of each frame of image are compared, and for the same positioning element it is judged whether the element identifier of the positioning element exists in the identification information and whether it exists in the labeling information. For the same positioning element, when it is determined that the element identifier of the positioning element exists in the identification information and also exists in the labeling information, it is further necessary to determine whether the element identifier of the positioning element in the identification information is unique.
Then, when the element identifier of the positioning element in the identification information is determined to be unique, the absolute value of the difference between the corner position information of the positioning element in the identification information and the corner position information of the positioning element in the labeling information is calculated, yielding the first corner position pixel error. When the first corner position pixel error is determined to be less than or equal to the preset value, the accurate number of positions N_tp(pos) is accumulated by 1; when the first corner position pixel error is determined to be greater than the preset value, the number of position false detections N_fp(pos) is accumulated by 1 and the number of position missed detections N_fn(pos) is accumulated by 1.
For example, for one positioning element, the element identifier of the positioning element exists in the identification information and also exists in the labeling information, and it is determined that the element identifier of the positioning element in the identification information is unique. For this positioning element, the difference between the corner position information of the positioning element in the identification information and the corner position information of the positioning element in the labeling information is calculated, and the absolute value of the difference is taken to obtain the first corner position pixel error.
The position information of the corner points of the positioning element comprises four corner points, so that the difference value of the position information of the corner points of each corner point on the marking information and the identification information can be calculated. For example, the corner position information of the corner a of the positioning element 1 in the labeling information is subtracted from the corner position information of the corner a of the positioning element 1 in the identification information to obtain a first corner position pixel error of the corner a; subtracting the corner position information of the corner B of the positioning element 1 in the identification information from the corner position information of the corner B of the positioning element 1 in the labeling information to obtain a first corner position pixel error of the corner B; subtracting the angular point position information of the angular point C of the positioning element 1 in the identification information from the angular point position information of the angular point C of the positioning element 1 in the marking information to obtain a first angular point position pixel error of the angular point C; and subtracting the corner position information of the corner D of the positioning element 1 in the identification information from the corner position information of the corner D of the positioning element 1 in the labeling information to obtain a first corner position pixel error of the corner D.
Then, when the first corner position pixel errors of all 4 corners are determined to be less than or equal to the preset value, the accurate number of positions N_tp(pos) is accumulated by 1; when the first corner position pixel error of any one of the 4 corners is determined to be greater than the preset value, the number of position false detections N_fp(pos) is accumulated by 1 and the number of position missed detections N_fn(pos) is accumulated by 1. The preset value may be 3 pixels.
Therefore, when it is determined for the same positioning element that the element identifier of the positioning element exists in the identification information and also exists in the labeling information, if the element identifier of the positioning element in the identification information is unique, the accurate number of positions, the number of position missed detections and the number of position false detections can be obtained directly based on the corner position information in the identification information and the corner position information in the labeling information.
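For the unique-ID case just described, the per-element check can be sketched as follows in Python: compute the first corner position pixel error for each of the four corners and compare it with the preset value (3 pixels here, as in the example above). The sketch assumes the per-corner error is checked separately in the x and y pixel coordinates; the text does not specify exactly how the per-corner difference is reduced to a single pixel error.

```python
from typing import Dict, Tuple

Corners = Dict[str, Tuple[float, float]]  # corner name -> (x, y) pixel position
PRESET_VALUE = 3.0                        # preset pixel error threshold from the example above

def position_accurate(labeled: Corners, recognized: Corners,
                      preset: float = PRESET_VALUE) -> bool:
    """True if, for every corner (A, B, C, D), the first corner position pixel
    error |labeled - recognized| is within the preset value in both x and y."""
    for corner, (lx, ly) in labeled.items():
        rx, ry = recognized[corner]
        if abs(lx - rx) > preset or abs(ly - ry) > preset:
            return False
    return True

# Counting for one element whose ID appears uniquely in both sets, per the rules above:
# if position_accurate(labeled, recognized): N_tp(pos) += 1
# else:                                      N_fp(pos) += 1; N_fn(pos) += 1
```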
When it is determined that the element identifier of the positioning element in the identification information is not unique, that is, the total number of element identifiers of the positioning element in the identification information is n, the difference between each piece of corner position information corresponding to the positioning element in the identification information and the corner position information of the positioning element in the labeling information is calculated, and the absolute value of the difference is taken as a second corner position pixel error. Thus, for the same positioning element, n second corner position pixel errors are obtained. When m of the n second corner position pixel errors are determined to be less than or equal to the preset value, the accurate number of positions N_tp(pos) is accumulated by 1 and the number of position false detections N_fp(pos) is accumulated by m. When any one of the n second corner position pixel errors is determined to be greater than the preset value, the number of position missed detections N_fn(pos) is accumulated by 1 and the number of position false detections N_fp(pos) is accumulated by n.
For example, for one positioning element, the element identifier of the positioning element exists in the identification information and also exists in the labeling information, and it is determined that the element identifier of the positioning element in the identification information is not unique; in this case, the total number of element identifiers of the positioning element in the identification information is n. For the corner position information of these n recognized element identifiers, the difference between each piece of corner position information of the positioning element in the identification information and the corner position information of the positioning element in the labeling information is calculated, and the absolute value of the difference is taken, obtaining n second corner position pixel errors.
When calculating the pixel error of the position of each second corner point of the same positioning element, the difference between the marking information and the identification information of the corner point position information of each corner point can be calculated because the corner point position information of the positioning element comprises four corner points. For example, the error of the corner point a is obtained by subtracting the corner point position information of the corner point a of the positioning element 1 in the identification information from the corner point position information of the corner point a of the positioning element 1 in the labeling information; subtracting the angular point position information of the angular point B of the positioning element 1 in the identification information from the angular point position information of the angular point B of the positioning element 1 in the marking information to obtain the error of the angular point B; subtracting the angular point position information of the angular point C of the positioning element 1 in the identification information from the angular point position information of the angular point C of the positioning element 1 in the marking information to obtain the error of the angular point C; and subtracting the angular point position information of the angular point D of the positioning element 1 in the identification information from the angular point position information of the angular point D of the positioning element 1 in the marking information to obtain the error of the angular point D.
Then, for each of the n second corner position pixel errors: if the errors of all 4 corner points are less than or equal to the preset value, that second corner position pixel error is determined to be less than or equal to the preset value; if the error of any one of the 4 corner points is greater than the preset value, that second corner position pixel error is determined to be greater than the preset value.
Then, when m of the n second corner position pixel errors are determined to be less than or equal to the preset value, the position accurate number N_tp(pos) is incremented by 1, and the position false detection number N_fp(pos) is accumulated by m. When each of the n second corner position pixel errors is determined to be greater than the preset value, the position missed detection number N_fn(pos) is incremented by 1, and the position false detection number N_fp(pos) is accumulated by n.
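To make the counting rule above concrete, the following is a minimal illustrative sketch in Python; it is not part of the patent text. Corner coordinates are assumed to be (x, y) pixel values, the per-corner error is assumed to be |Δx| + |Δy| (the patent text only speaks of subtracting the corner position information, without fixing the norm), counts is assumed to be a dictionary holding the three counters, and all function and variable names are hypothetical:

```python
# Illustrative sketch only. Counting for one positioning element whose element
# identifier appears n times in the identification information.
# Corners are given in a fixed order (A, B, C, D) as (x, y) pixel coordinates.

def corner_errors(detected, annotated):
    # Per-corner absolute pixel error between one detected instance and the
    # annotation; the |dx| + |dy| form is an assumption.
    return [abs(dx - ax) + abs(dy - ay)
            for (dx, dy), (ax, ay) in zip(detected, annotated)]

def update_position_counts(detections, annotation, preset, counts):
    # detections: list of n corner lists for this element from the identification
    # information; annotation: corner list from the labeling information.
    n = len(detections)
    # An instance's "second corner position pixel error" counts as <= preset
    # only if the errors of all 4 corners are <= preset.
    m = sum(1 for det in detections
            if all(err <= preset for err in corner_errors(det, annotation)))
    if m >= 1:
        counts["N_tp_pos"] += 1   # position accurate number incremented by 1
        counts["N_fp_pos"] += m   # position false detection number accumulated by m
    else:
        counts["N_fn_pos"] += 1   # position missed detection number incremented by 1
        counts["N_fp_pos"] += n   # all n instances counted as position false detections
    return counts
```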
Therefore, for the same positioning element, when the element identifier of the positioning element exists in both the identification information and the labeling information but is not unique in the identification information, the corner position information of the multiple identified instances of the positioning element and the corner position information of the positioning element in the labeling information are analyzed to obtain accurate values of the position accurate number, the position missed detection number, and the position false detection number.
Then, according to the position accurate number N_tp(pos) and the position missed detection number N_fn(pos), the detection recall parameter is determined as Recall_pos = N_tp(pos) / (N_tp(pos) + N_fn(pos)); and according to the position accurate number N_tp(pos) and the position false detection number N_fp(pos), the detection accuracy parameter is determined as Precision_pos = N_tp(pos) / (N_tp(pos) + N_fp(pos)).
205. And determining a detection F1 parameter according to the detection recall parameter and the detection accurate parameter.
Illustratively, after step 204, the detection F1 parameter is determined according to the obtained detection recall parameter and detection accuracy parameter, and the detection F1 parameter characterizes whether the position identification of the positioning element by the identification algorithm is accurate.
In one example, according to the detection recall parameter Recall_pos and the detection accuracy parameter Precision_pos, the detection F1 parameter is determined as F1_id = (2 · Precision_pos · Recall_pos) / (Precision_pos + Recall_pos).
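As a small illustrative sketch of how the three quantities relate, continuing the counter dictionary from the sketch above (the zero-division guards are an added assumption, not stated in the patent text):

```python
# Illustrative sketch only: detection recall, precision and F1 from the counters.
def detection_metrics(counts):
    tp = counts["N_tp_pos"]
    fp = counts["N_fp_pos"]
    fn = counts["N_fn_pos"]
    recall_pos = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    precision_pos = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    denom = precision_pos + recall_pos
    f1_pos = 2 * precision_pos * recall_pos / denom if denom > 0 else 0.0
    return recall_pos, precision_pos, f1_pos
```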
The execution order between steps 202-203 and steps 204-205 is not limited. Steps 202-203 may be performed first and then steps 204-205; or steps 204-205 may be performed first and then steps 202-203; or steps 202-203 and steps 204-205 may be performed simultaneously.
206. The accuracy evaluation parameters are a single-frame horizontal average error and a single-frame vertical average error; and when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the marking information, determining a third corner position pixel error of each positioning element in the x direction and a fourth corner position pixel error of each positioning element in the y direction according to the corner position information in the marking information and the corner position information in the identification information.
For example, after step 202, for the same positioning element in the single-frame image, if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the annotation information, a third corner point position pixel error in the x direction of the positioning element and a fourth corner point position pixel error in the y direction of the positioning element may be calculated.
In one example, a positioning element has 4 corner points, as shown in fig. 3. The direction from corner D to corner C may be taken as the x direction, and the direction from corner D to corner A as the y direction. A position 1 in the x direction is obtained from the corner position information of corners D and C, for example by summing the corner position information of corners D and C, or by subtracting the corner position information of corner D from that of corner C; meanwhile, a position 2 in the x direction is obtained from the corner position information of corners A and B, for example by summing them, or by subtracting the corner position information of corner A from that of corner B; position 1 and position 2 are then used as the corner position information of the positioning element in the x direction. Then, for the positioning element, the absolute value 1 of the difference between position 1 in the identification information and position 1 in the labeling information is obtained; likewise, the absolute value 2 of the difference between position 2 in the identification information and position 2 in the labeling information is obtained; the sum of absolute value 1 and absolute value 2 is taken as the third corner position pixel error of the positioning element in the x direction.
A position 3 in the y direction is obtained from the corner position information of corners D and A, for example by summing them, or by subtracting the corner position information of corner D from that of corner A; meanwhile, a position 4 in the y direction is obtained from the corner position information of corners C and B, for example by summing them, or by subtracting the corner position information of corner C from that of corner B; position 3 and position 4 are then used as the corner position information of the positioning element in the y direction. Then, for the positioning element, the absolute value 3 of the difference between position 3 in the identification information and position 3 in the labeling information is obtained; likewise, the absolute value 4 of the difference between position 4 in the identification information and position 4 in the labeling information is obtained; the sum of absolute value 3 and absolute value 4 is taken as the fourth corner position pixel error of the positioning element in the y direction.
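A minimal sketch of this computation follows, assuming (x, y) pixel coordinates, the "summing" variant for combining two corners (the subtraction variant is analogous), and an |Δx| + |Δy| absolute difference between two combined positions; all function names are hypothetical and not part of the patent text:

```python
# Illustrative sketch only: third (x-direction) and fourth (y-direction) corner
# position pixel errors of one positioning element. Corners are given in the
# order A, B, C, D as (x, y) pixel coordinates.

def xy_direction_errors(detected, annotated):
    def pos_sum(c1, c2):
        # combine two corners into one "position" (summing variant)
        return (c1[0] + c2[0], c1[1] + c2[1])

    def abs_diff(p, q):
        # absolute difference between a detected and an annotated position;
        # the |dx| + |dy| form is an assumption
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    A_d, B_d, C_d, D_d = detected
    A_a, B_a, C_a, D_a = annotated

    # x direction: position 1 from corners D and C, position 2 from corners A and B
    err_x = (abs_diff(pos_sum(D_d, C_d), pos_sum(D_a, C_a))
             + abs_diff(pos_sum(A_d, B_d), pos_sum(A_a, B_a)))
    # y direction: position 3 from corners D and A, position 4 from corners C and B
    err_y = (abs_diff(pos_sum(D_d, A_d), pos_sum(D_a, A_a))
             + abs_diff(pos_sum(C_d, B_d), pos_sum(C_a, B_a)))
    return err_x, err_y
```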
207. And determining the single-frame transverse average error of one frame of image according to the pixel error of the third corner position of each positioning element in the x direction and the number of the positioning elements in one frame of image. And determining the single-frame longitudinal average error of one frame of image according to the pixel error of the fourth corner point of each positioning element in the y direction and the number of the positioning elements in one frame of image.
Illustratively, for a single-frame image, the third corner position pixel errors of the positioning elements in the frame are accumulated to obtain a first accumulated value err_x_1 + err_x_2 + ... + err_x_R, wherein err_x_i is the third corner position pixel error of the i-th positioning element, R is the number of positioning elements in the frame of image (the number of identified positioning elements in the single-frame image), i is a positive integer greater than or equal to 1 and less than or equal to R, and R is a positive integer greater than or equal to 1. Then, according to the first accumulated value and the number R of positioning elements in the frame of image, the single-frame transverse average error of the frame of image is obtained as avg_err_x = (err_x_1 + err_x_2 + ... + err_x_R) / R.
Likewise, for the single-frame image, the fourth corner position pixel errors of the positioning elements in the frame are accumulated to obtain a second accumulated value err_y_1 + err_y_2 + ... + err_y_R, wherein err_y_i is the fourth corner position pixel error of the i-th positioning element and R is the number of positioning elements in the frame of image (the number of identified positioning elements in the single-frame image). Then, according to the second accumulated value and the number R of positioning elements in the frame of image, the single-frame longitudinal average error of the frame of image is obtained as avg_err_y = (err_y_1 + err_y_2 + ... + err_y_R) / R.
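The averaging itself is straightforward; the following is a minimal illustrative sketch, assuming the per-element errors of one frame have already been collected (function and variable names are illustrative, not part of the patent text):

```python
# Illustrative sketch only: single-frame transverse (x) and longitudinal (y)
# average errors from the per-element errors of one frame.
def single_frame_average_errors(per_element_errors):
    # per_element_errors: list of (err_x_i, err_y_i), one entry per positioning
    # element identified in the frame; assumed to be non-empty (R >= 1)
    R = len(per_element_errors)
    avg_err_x = sum(e[0] for e in per_element_errors) / R
    avg_err_y = sum(e[1] for e in per_element_errors) / R
    return avg_err_x, avg_err_y
```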
Through step 206 and step 207, the corner position pixel errors of the positioning elements in the single-frame image are analyzed to obtain an accurate single-frame transverse average error and an accurate single-frame longitudinal average error.
208. And determining the overall average error according to the single-frame horizontal average error, the single-frame longitudinal average error and the number of positioning elements in one frame of image.
Illustratively, after step 207, for a plurality of frame images, a single frame horizontal average error and a single frame vertical average error of each frame image may be obtained.
Then, for the multiple frames of images, the single-frame transverse average errors of the frames are summed to obtain a sum Q_x, and the total number T of positioning elements over the frames is calculated; the sum Q_x of the single-frame transverse average errors is divided by T to obtain the overall transverse error.
For the multiple frames of images, the single-frame longitudinal average errors of the frames are likewise summed to obtain a sum Q_y, and the total number T of positioning elements over the frames is calculated; the sum Q_y of the single-frame longitudinal average errors is divided by T to obtain the overall longitudinal error.
The sum Q_x of the single-frame transverse average errors and the sum Q_y of the single-frame longitudinal average errors may also be added to obtain an overall error value Q, which is then divided by T to obtain the overall average error.
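A minimal sketch of the three overall quantities described above (the frame tuple layout and names are illustrative assumptions):

```python
# Illustrative sketch only: overall errors across multiple frames.
# frames: list of (avg_err_x, avg_err_y, element_count) tuples, one per frame.
def overall_errors(frames):
    Q_x = sum(f[0] for f in frames)  # sum of single-frame transverse average errors
    Q_y = sum(f[1] for f in frames)  # sum of single-frame longitudinal average errors
    T = sum(f[2] for f in frames)    # total number of positioning elements over frames
    # overall transverse error, overall longitudinal error, overall average error
    return Q_x / T, Q_y / T, (Q_x + Q_y) / T
```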
For the same positioning element, the third corner position pixel error of the positioning element in each of the multiple frames of images can be analyzed, and the transverse error distribution of that positioning element is then obtained from these per-frame errors. Likewise, the fourth corner position pixel error of the positioning element in each of the multiple frames of images can be analyzed, and the longitudinal error distribution of that positioning element is then obtained from these per-frame errors.
In addition, for the positioning elements in a single-frame image, the positioning element with the maximum third corner position pixel error and the positioning element with the maximum fourth corner position pixel error can be determined.
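A minimal sketch of these two analyses, assuming the per-frame, per-element errors have already been computed; the container layouts and all names are assumptions, not part of the patent text:

```python
# Illustrative sketch only.
# per_frame_errors: dict mapping frame_id -> {element_id: (err_x, err_y)}.

def error_distribution(per_frame_errors, element_id):
    # transverse and longitudinal error series of one positioning element over frames
    xs = [errs[element_id][0] for errs in per_frame_errors.values() if element_id in errs]
    ys = [errs[element_id][1] for errs in per_frame_errors.values() if element_id in errs]
    return xs, ys

def max_error_elements(frame_errors):
    # element with the largest third (x) and fourth (y) corner position pixel error
    # within one frame; frame_errors: {element_id: (err_x, err_y)}
    worst_x = max(frame_errors, key=lambda eid: frame_errors[eid][0])
    worst_y = max(frame_errors, key=lambda eid: frame_errors[eid][1])
    return worst_x, worst_y
```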
The recognition results of the localization elements in the image are thus analyzed from multiple dimensions in order to obtain data in multiple dimensions, which are used to analyze whether the position detection by the recognition algorithm is accurate.
The execution order among steps 202-203, steps 204-205, and steps 206-208 is not limited.
209. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements during the driving process of the vehicle.
For example, this step may refer to step 203 shown in fig. 2, which is not described again.
In this embodiment, on the basis of the above embodiment, accuracy evaluation parameters in multiple dimensions are obtained through steps 202-208, and these accuracy evaluation parameters are used to evaluate and adjust the recognition algorithm. The indexes used to adjust the recognition algorithm are refined, so that multiple indexes are counted over the classification and detection dimensions, the optimization of the recognition algorithm is guided more specifically from parameters of multiple dimensions, and a recognition algorithm with a better effect in recognizing positioning elements is obtained. Moreover, with a single labeling by the user, the method can be applied to various different recognition algorithms, and each recognition algorithm can be adjusted with reference to this embodiment; the statistics of the indexes for evaluating the recognition algorithm can be completed automatically, and these indexes help the recognition algorithm to be iterated and optimized. Multiple indexes for evaluating the recognition algorithm are provided, the indexes of one frame or multiple frames of images are presented intuitively, and the frame or the positioning element that does not meet the classification or detection requirement can be found efficiently and quickly, which facilitates adjusting the recognition algorithm so that the adjusted recognition algorithm recognizes positioning elements accurately. In addition, besides the algorithm for recognizing positioning elements, the scheme provided by this embodiment can also be used to adjust detection algorithms in fields such as robot visual positioning.
Fig. 5 is a schematic diagram of a third embodiment of the present application, and as shown in fig. 5, the algorithm adjusting method for identifying a positioning element provided in this embodiment includes:
301. and comparing and analyzing the element identification and/or the angular point position information in the labeling information of the positioning element and the element identification and/or the angular point position information in the identification information of the positioning element output by the identification algorithm to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter.
For example, the execution main body of this embodiment may be a vehicle, or a controller of the vehicle, or an electronic device, or an intelligent terminal, or a server, or an algorithm adjusting apparatus for identifying a positioning element, or another apparatus or device that can perform the method of this embodiment. This embodiment is described with the electronic device as the execution main body.
The electronic equipment stores the information of the marked positioning elements in advance, namely the marking information of the positioning elements is stored; the electronic device stores information of the identified positioning element in advance, that is, identification information of the positioning element is already stored.
Then, the electronic device performs comparison analysis based on the element identifier and/or the corner position information of each positioning element obtained by labeling and the element identifier and/or the corner position information of each positioning element obtained by identification to obtain at least one accuracy evaluation parameter, that is, an analysis result.
This step can be referred to as step 102 shown in fig. 2, and is not described again.
302. And adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements during the driving process of the vehicle.
For example, this step may refer to step 103 shown in fig. 2, and is not described again.
In this embodiment, at least one accuracy evaluation parameter is obtained by comparing and analyzing the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information; and adjusting the recognition algorithm on multiple dimensions according to the obtained accuracy evaluation parameters. Then, the accuracy of the recognition algorithm is automatically evaluated and analyzed to obtain an objective analysis result; the recognition algorithm is adjusted based on each accuracy evaluation parameter, so that the recognition algorithm can be accurately adjusted; so that the adjusted recognition algorithm can accurately recognize the positioning elements. The labor cost is reduced, the accuracy of the adjustment algorithm is improved, and the identification accuracy and the identification precision of the adjusted identification algorithm are improved. And the identification algorithm is analyzed based on the marked element identification and/or the corner position information and the identified element identification and/or the corner position information, so that the identification algorithm can be accurately and quantitatively evaluated.
Fig. 6 is a schematic diagram of a fourth embodiment of the present application, and as shown in fig. 6, the algorithm adjusting device for identifying a positioning element provided in this embodiment includes:
the first obtaining unit 31 is configured to obtain labeling information of the positioning element, where the labeling information includes an element identifier and/or corner position information of the positioning element.
The second obtaining unit 32 is configured to obtain identification information of the positioning element output by the identification algorithm, where the identification information includes an element identifier of the positioning element and/or corner position information.
The comparison unit 33 is configured to perform comparison analysis on the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information to obtain an analysis result, where the analysis result includes at least one accuracy evaluation parameter.
An adjusting unit 34 for adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning element during the driving of the vehicle.
The apparatus of this embodiment may execute the technical solution in the method, and the specific implementation process and the technical principle are the same, which are not described herein again.
Fig. 7 is a schematic diagram of a fifth embodiment of the present application, based on the embodiment shown in fig. 6. As shown in fig. 7, in the algorithm adjusting apparatus for identifying a positioning element provided in this embodiment, the at least one accuracy evaluation parameter is one or more of the following: a classification F1 parameter, a detection F1 parameter, a single-frame horizontal average error, and a single-frame vertical average error.
Wherein, the classification F1 parameter characterizes the relationship between the classification recall ratio and the classification accuracy. The detection F1 parameter represents the relation between the detection recall rate and the detection accuracy rate, the single-frame horizontal average error represents the position error of the positioning element in the x direction, and the single-frame vertical average error represents the position error of the positioning element in the y direction.
In one example, the accuracy assessment parameter is a classification F1 parameter; a comparison unit 33, comprising:
the first determining module 331 is configured to determine the classification recall parameter and the classification accuracy parameter according to whether the element identifier of the positioning element exists in the tagging information and whether the element identifier of the positioning element exists in the identification information.
And a second determining module 332, configured to determine a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
In one example, the first determining module 331 includes:
the first determining sub-module 3311 is configured to determine that the false detection number of the element is accumulated to 1 if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element does not exist in the tagging information.
The second determining sub-module 3312 is configured to determine that the element missed detection number is accumulated by 1 if the element identifier of the positioning element exists in the tagging information and the element identifier of the positioning element does not exist in the identification information.
The third determining sub-module 3313 is configured to determine that the element accurate number is accumulated by 1 if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the tagging information.
The fourth determining sub-module 3314 is configured to determine a classification recall parameter according to the accurate number of elements and the number of missed detections of elements, and determine a classification accuracy parameter according to the accurate number of elements and the number of false detections of elements.
In one example, the accuracy assessment parameter is the detection F1 parameter; a comparison unit 33, comprising:
a third determining module 333, configured to determine the recall detection parameter and the accurate detection parameter according to whether the element identifier of the positioning element exists in the annotation information, whether the element identifier of the positioning element exists in the identification information, the corner position information in the annotation information, and the corner position information in the identification information.
And a fourth determining module 334, configured to determine a detection F1 parameter according to the detection recall parameter and the detection accuracy parameter.
In one example, the third determining module 333 includes:
the fifth determining submodule 3331 is configured to, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the annotation information, determine corner position information of the positioning element in the identification information and corner position information of the positioning element in the annotation information, and a first corner position pixel error between the two, if it is determined that the element identifier of the positioning element in the identification information is unique.
A sixth determining submodule 3332, configured to determine that the position accurate number is accumulated by 1 if the first corner position pixel error is smaller than or equal to the preset value.
And the seventh determining submodule 3333 is configured to, if the pixel error at the first corner position is greater than the preset value, determine that the number of false detections of the position is increased by 1, and determine that the number of missed detections of the position is increased by 1.
The eighth determining submodule 3334 is configured to determine the detection recall parameter according to the position accurate number and the position missed detection number, and determine the detection accuracy parameter according to the position accurate number and the position false detection number.
In an example, the third determining module 333 further includes:
a ninth determining sub-module 3335, configured to, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the annotation information, determine, if it is determined that the element identifier of the positioning element in the identification information is not unique, the corner position information of each corner corresponding to the positioning element in the identification information and the corner position information of the positioning element in the annotation information, and a second corner position pixel error between the two corner position pixel information, where a total number of the element identifiers of the positioning element in the identification information is n.
And a tenth determining submodule 3336, configured to determine, if the pixel errors of the m second corner positions are smaller than or equal to a preset value, that the number of accurate positions is accumulated by 1, and that the number of false position detections is accumulated by m, where m is a positive integer greater than or equal to 1 and less than or equal to n.
An eleventh determining sub-module 3337, configured to determine that the number of missed position detections is accumulated by 1, and determine that the number of false position detections is accumulated by n, if the error of any second corner position pixel is greater than the preset value.
In one example, the accuracy evaluation parameters are a single-frame horizontal average error and a single-frame vertical average error; a comparison unit 33, comprising:
a fifth determining module 335, configured to determine, when the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element exists in the tag information, a third corner position pixel error of each positioning element in the x direction and a fourth corner position pixel error of each positioning element in the y direction according to corner position information in the tag information and corner position information in the tag information.
A sixth determining module 336, configured to determine a single-frame horizontal average error of one frame of image according to the third corner pixel error of each positioning element in the x direction and the number of positioning elements in one frame of image.
The seventh determining module 337 is configured to determine a single-frame longitudinal average error of a frame image according to the pixel error of the fourth corner point of each positioning element in the y direction and the number of the positioning elements in the frame image.
In one example, the comparison unit 33 further includes:
an eighth determining module 338, configured to determine the total average error according to the single-frame horizontal average error, the single-frame vertical average error, and the number of positioning elements in one frame of image.
In an example, the first obtaining unit 31 is specifically configured to: acquiring an image through acquisition equipment on a vehicle; receiving a marking instruction of a user, and determining a positioning element in the image according to the marking instruction, wherein the positioning element has marking information.
In an example, the second obtaining unit 32 is specifically configured to: acquiring an image through acquisition equipment on a vehicle; and identifying the positioning elements in the image by adopting an identification algorithm to obtain identification information.
The apparatus of this embodiment may execute the technical solution in the method, and the specific implementation process and the technical principle are the same, which are not described herein again.
Fig. 8 is a schematic diagram of a sixth embodiment according to the present application, and as shown in fig. 8, an electronic device 70 in the present embodiment may include: a processor 71 and a memory 72.
A memory 72 for storing programs; the Memory 72 may include a volatile Memory (RAM), such as a Static Random Access Memory (SRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), and the like; the memory may also comprise a non-volatile memory, such as a flash memory. The memory 72 is used to store computer programs (e.g., applications, functional modules, etc. that implement the above-described methods), computer instructions, etc., which may be stored in one or more of the memories 72 in a partitioned manner. And the above-mentioned computer program, computer instructions, data, etc. can be called by the processor 71.
A processor 71 for executing the computer program stored in the memory 72 to implement the steps of the method according to the above embodiments.
Reference may be made in particular to the description relating to the preceding method embodiment.
The processor 71 and the memory 72 may be separate structures or may be an integrated structure integrated together. When the processor 71 and the memory 72 are separate structures, the memory 72 and the processor 71 may be coupled by a bus 73.
The electronic device of this embodiment may execute the technical solution in the method, and the specific implementation process and the technical principle are the same, which are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a schematic diagram of a seventh embodiment of the present application. As shown in fig. 9, fig. 9 is a block diagram of an electronic device for implementing the algorithm adjustment method for identifying a positioning element according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 801, memory 802, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 9 illustrates an example of a processor 801.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the algorithm adjustment method for identifying a localization element provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the algorithm adjustment method for identifying a localization element provided by the present application.
The memory 802 is a non-transitory computer readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the algorithm adjustment method for identifying a positioning element in the embodiment of the present application (for example, the first obtaining unit 31, the second obtaining unit 32, the comparing unit 33, and the adjusting unit 34 shown in fig. 6). The processor 801 executes various functional applications of the server and data processing, namely, implements the algorithm adjustment method for identifying the positioning element in the above method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 802.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for the algorithm adjustment method for identifying the positioning element, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, which may be connected via a network to an electronic device for identifying algorithmic adjustment methods of the positional elements. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the algorithm adjustment method for recognizing the localization element may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 9.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device for identifying the algorithmic adjustment method of the positional element, such as a touch screen, keypad, mouse, track pad, touch pad, pointer, one or more mouse buttons, track ball, joystick, or like input device. The output devices 804 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (25)

1. An algorithmic adjustment method for identifying a localization element, comprising:
acquiring marking information of a positioning element and acquiring identification information of the positioning element output by an identification algorithm, wherein the marking information comprises an element identifier and/or angular point position information of the positioning element, and the identification information comprises the element identifier and/or angular point position information of the positioning element;
comparing and analyzing the element identification and/or the angular point position information in the labeling information and the element identification and/or the angular point position information in the identification information to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements in the driving process of the vehicle.
2. The method of claim 1, the at least one accuracy evaluation parameter being one or more of: a classification F1 parameter, a detection F1 parameter, a single-frame horizontal average error, and a single-frame vertical average error;
wherein the classification F1 parameter characterizes a relationship between classification recall and classification accuracy, the detection F1 parameter characterizes a relationship between detection recall and detection accuracy, the single-frame transverse average error characterizes a position error of the localization element in the x-direction, and the single-frame longitudinal average error characterizes a position error of the localization element in the y-direction.
3. The method of claim 2, wherein the accuracy evaluation parameter is a classification F1 parameter; and the comparing and analyzing of the element identifier in the labeling information and the element identifier in the identification information to obtain an analysis result comprises:
determining a classification recall parameter and a classification accurate parameter according to whether the element identifier of the positioning element exists in the labeling information and whether the element identifier of the positioning element exists in the identification information;
and determining a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
4. The method of claim 3, wherein determining the classification recall parameter and the classification accuracy parameter according to whether the element identifier of the positioning element exists in the annotation information and whether the element identifier of the positioning element exists in the identification information comprises:
if the element identification of the positioning element exists in the identification information and the element identification of the positioning element does not exist in the labeling information, determining that the number of false detections of the element is accumulated to 1;
if the element identifier of the positioning element exists in the labeling information and the element identifier of the positioning element does not exist in the identification information, determining that the missing element detection quantity is accumulated to 1;
if the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the marking information, determining that the accurate quantity of the elements is accumulated by 1;
and determining a classification recall parameter according to the accurate element quantity and the missed element quantity, and determining a classification accurate parameter according to the accurate element quantity and the false element quantity.
5. The method of claim 2, wherein the accuracy evaluation parameter is a detection F1 parameter; and the comparing and analyzing of the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information to obtain an analysis result comprises:
determining a detection recall parameter and a detection accurate parameter according to whether the element identifier of the positioning element exists in the labeling information, whether the element identifier of the positioning element exists in the identification information, the angular point position information in the labeling information and the angular point position information in the identification information;
and determining a detection F1 parameter according to the detection recall parameter and the detection accurate parameter.
6. The method of claim 5, wherein determining the recall detection parameter and the accurate detection parameter according to whether the element identifier of the positioning element exists in the annotation information, whether the element identifier of the positioning element exists in the identification information, and the corner position information in the annotation information and the corner position information in the identification information comprises:
when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, if the element identification of the positioning element in the identification information is determined to be unique, the corner position information of the positioning element in the identification information and the corner position information of the positioning element in the labeling information, and a first corner position pixel error between the two are determined;
if the pixel error of the first corner point position is less than or equal to a preset value, determining the accurate quantity accumulation 1 of the positions;
if the pixel error of the first corner position is larger than a preset value, determining that the number of position false detections is added by 1, and determining that the number of position missed detections is added by 1;
and determining a detection recall parameter according to the accurate position number and the position missing detection number, and determining a detection accurate parameter according to the accurate position number and the position false detection number.
7. The method of claim 6, further comprising:
when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the labeling information, if the element identification of the positioning element in the identification information is determined to be not unique, determining the position information of each corner point corresponding to the positioning element in the identification information, the position information of the corner point of the positioning element in the labeling information and a second corner point position pixel error between the two, wherein the total number of the element identifications of the positioning element in the identification information is n;
if the pixel errors of the m second corner positions are smaller than or equal to a preset value, determining the accurate position quantity accumulation 1, and determining the position false detection quantity accumulation m, wherein m is a positive integer which is larger than or equal to 1 and smaller than or equal to n;
and if the pixel error of any second corner position is larger than a preset value, determining the position missing detection quantity accumulation 1, and determining the position false detection quantity accumulation n.
8. The method of claim 2, wherein the accuracy evaluation parameters are a single-frame horizontal average error and a single-frame vertical average error; and the comparing and analyzing of the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information to obtain an analysis result comprises:
when the element identification of the positioning element exists in the identification information and the element identification of the positioning element exists in the label information, determining a third corner position pixel error of each positioning element in the x direction and a fourth corner position pixel error of each positioning element in the y direction according to corner position information in the label information and corner position information in the identification information;
determining a single-frame transverse average error of a frame of image according to the pixel error of the third corner position of each positioning element in the x direction and the number of the positioning elements in the frame of image;
and determining the single-frame longitudinal average error of one frame of image according to the pixel error of the fourth corner point of each positioning element in the y direction and the number of the positioning elements in one frame of image.
9. The method of claim 8, further comprising:
and determining the overall average error according to the single-frame horizontal average error, the single-frame longitudinal average error and the number of positioning elements in one frame of image.
10. The method according to any one of claims 1-9, wherein the obtaining of the annotation information of the positioning element comprises:
acquiring an image through acquisition equipment on a vehicle;
receiving an annotation instruction of a user, and determining a positioning element in the image according to the annotation instruction, wherein the positioning element has the annotation information.
11. The method according to any one of claims 1-9, wherein said obtaining identification information of the localization element output by the identification algorithm comprises:
acquiring an image through acquisition equipment on a vehicle;
and identifying the positioning elements in the image by adopting the identification algorithm to obtain the identification information.
12. An algorithmic adjustment means for identifying positional elements, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring marking information of a positioning element, and the marking information comprises an element identifier and/or corner position information of the positioning element;
the second acquisition unit is used for acquiring the identification information of the positioning element output by the identification algorithm, wherein the identification information comprises the element identifier and/or the angular point position information of the positioning element;
the comparison unit is used for comparing and analyzing the element identifier and/or the corner position information in the labeling information and the element identifier and/or the corner position information in the identification information to obtain an analysis result, and the analysis result comprises at least one accuracy evaluation parameter;
and the adjusting unit is used for adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing the positioning elements in the driving process of the vehicle.
13. The apparatus of claim 12, the at least one accuracy evaluation parameter being one or more of: a classification F1 parameter, a detection F1 parameter, a single-frame horizontal average error, and a single-frame vertical average error;
wherein the classification F1 parameter characterizes a relationship between classification recall and classification accuracy, the detection F1 parameter characterizes a relationship between detection recall and detection accuracy, the single-frame transverse average error characterizes a position error of the localization element in the x-direction, and the single-frame longitudinal average error characterizes a position error of the localization element in the y-direction.
14. The apparatus of claim 13, wherein the accuracy-assessment parameter is a classification F1 parameter; the comparison unit includes:
a first determining module, configured to determine a classification recall parameter and a classification accuracy parameter according to whether an element identifier of a positioning element exists in the tagging information and whether an element identifier of the positioning element exists in the identification information;
and the second determination module is used for determining a classification F1 parameter according to the classification recall parameter and the classification accuracy parameter.
15. The apparatus of claim 14, wherein the first determining means comprises:
the first determining submodule is used for determining that the false detection quantity of the element is accumulated to be 1 if the element identifier of the positioning element exists in the identification information and the element identifier of the positioning element does not exist in the labeling information;
a second determining submodule, configured to determine that the missing inspection quantity of the element is accumulated to 1 if the tagging information has the element identifier of the positioning element and the identification information does not have the element identifier of the positioning element;
a third determining submodule, configured to determine that the accurate quantity of the elements is accumulated by 1 if the identification information includes an element identifier of the positioning element and the labeling information includes an element identifier of the positioning element;
and the fourth determining submodule is used for determining a classification recall parameter according to the accurate element quantity and the missing element quantity and determining a classification accurate parameter according to the accurate element quantity and the false element quantity.
16. The apparatus of claim 13, wherein the accuracy evaluation parameter is a detection F1 parameter; the comparison unit includes:
a third determining module, configured to determine a detection recall parameter and a detection accurate parameter according to whether an element identifier of a positioning element exists in the tag information, whether an element identifier of the positioning element exists in the identification information, corner position information in the tag information, and corner position information in the identification information;
and the fourth determination module is used for determining the detection F1 parameter according to the detection recall parameter and the detection accurate parameter.
17. The apparatus of claim 16, wherein the third determining module comprises:
a fifth determining submodule, configured to, when the element identifier of the positioning element exists in both the identification information and the labeling information and the element identifier of the positioning element in the identification information is unique, determine the corner position information of the positioning element in the identification information, the corner position information of the positioning element in the labeling information, and a first corner position pixel error between the two;
a sixth determining submodule, configured to increment a position correct detection count by 1 if the first corner position pixel error is smaller than or equal to a preset value;
a seventh determining submodule, configured to increment a position false detection count by 1 and increment a position missed detection count by 1 if the first corner position pixel error is greater than the preset value;
and an eighth determining submodule, configured to determine the detection recall parameter according to the position correct detection count and the position missed detection count, and determine the detection precision parameter according to the position correct detection count and the position false detection count.
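A hedged sketch of the thresholding in claim 17 for the unique-identifier case; the Euclidean pixel error and the names preset_value and counts are assumptions made for illustration:

    import math

    def update_position_counts(label_corner, recog_corner, preset_value, counts):
        """label_corner / recog_corner: (x, y) corner positions of the same positioning
        element in the labeling information and the identification information.
        counts: dict with 'correct', 'false' and 'missed' accumulators."""
        error = math.hypot(recog_corner[0] - label_corner[0],
                           recog_corner[1] - label_corner[1])  # first corner position pixel error
        if error <= preset_value:
            counts['correct'] += 1   # within tolerance: position correct detection count +1
        else:
            counts['false'] += 1     # outside tolerance: counted as a false detection
            counts['missed'] += 1    # and as a missed detection
        return counts

The detection recall and precision parameters then follow as counts['correct'] / (counts['correct'] + counts['missed']) and counts['correct'] / (counts['correct'] + counts['false']), with the detection F1 parameter combining the two in the same way as the classification F1 parameter.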
18. The apparatus of claim 17, wherein the third determining module further comprises:
a ninth determining submodule, configured to, when the element identifier of the positioning element exists in both the identification information and the labeling information and the element identifier of the positioning element in the identification information is not unique, determine the position information of each corner point corresponding to the positioning element in the identification information, the corner position information of the positioning element in the labeling information, and a second corner position pixel error between the two for each corner point, wherein the total number of element identifiers of the positioning element in the identification information is n;
a tenth determining submodule, configured to increment the position correct detection count by 1 and increase the position false detection count by m if m of the second corner position pixel errors are smaller than or equal to the preset value, where m is a positive integer greater than or equal to 1 and less than or equal to n;
and an eleventh determining submodule, configured to increment the position missed detection count by 1 and increase the position false detection count by n if every second corner position pixel error is greater than the preset value.
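Following claim 18 literally, a sketch for the case where the same element identifier appears n times in the identification information; the error metric and names remain assumptions:

    import math

    def update_counts_non_unique(label_corner, recog_corners, preset_value, counts):
        """recog_corners: list of n detected (x, y) corner positions that share one
        element identifier; label_corner: the single labeled (x, y) corner position."""
        errors = [math.hypot(x - label_corner[0], y - label_corner[1])
                  for x, y in recog_corners]
        m = sum(1 for e in errors if e <= preset_value)
        if m >= 1:
            counts['correct'] += 1                 # at least one detection within tolerance
            counts['false'] += m                   # claim 18: false detection count increased by m
        else:
            counts['missed'] += 1                  # no detection within tolerance
            counts['false'] += len(recog_corners)  # false detection count increased by n
        return counts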
19. The apparatus of claim 13, wherein the accuracy evaluation parameters are a single-frame horizontal average error and a single-frame vertical average error; and the comparison unit comprises:
a fifth determining module, configured to determine, when an element identifier of a positioning element exists in both the identification information and the labeling information, a third corner position pixel error of each positioning element in the x direction and a fourth corner position pixel error of each positioning element in the y direction according to the corner position information in the labeling information and the corner position information in the identification information;
a sixth determining module, configured to determine the single-frame horizontal average error of a frame of image according to the third corner position pixel error of each positioning element in the x direction and the number of positioning elements in the frame of image;
and a seventh determining module, configured to determine the single-frame vertical average error of the frame of image according to the fourth corner position pixel error of each positioning element in the y direction and the number of positioning elements in the frame of image.
20. The apparatus of claim 19, wherein the comparison unit further comprises:
an eighth determining module, configured to determine an overall average error according to the single-frame horizontal average error, the single-frame vertical average error, and the number of positioning elements in the frame of image.
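A sketch of the per-frame averages in claim 19 and one plausible element-count weighting for the overall average error of claim 20; the pairing of labeled and recognized corners and the exact weighting are assumptions:

    def frame_average_errors(matched_corners):
        """matched_corners: list of ((lx, ly), (rx, ry)) pairs, one per positioning element
        matched between labeling and identification information in a single frame."""
        n = len(matched_corners)
        if n == 0:
            return 0.0, 0.0
        horizontal = sum(abs(rx - lx) for (lx, _), (rx, _) in matched_corners) / n  # x direction
        vertical = sum(abs(ry - ly) for (_, ly), (_, ry) in matched_corners) / n    # y direction
        return horizontal, vertical

    def overall_average_error(frames):
        """frames: list of (horizontal_avg, vertical_avg, num_elements) tuples, one per frame;
        the weighting by element count is an assumed reading of claim 20."""
        total = sum((h + v) * k for h, v, k in frames)
        count = sum(2 * k for _, _, k in frames)
        return total / count if count else 0.0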
21. The apparatus according to any one of claims 12 to 20, wherein the first obtaining unit is specifically configured to:
acquire an image through acquisition equipment on a vehicle;
and receive a labeling instruction from a user, and determine a positioning element in the image according to the labeling instruction, wherein the positioning element carries the labeling information.
22. The apparatus according to any one of claims 12 to 20, wherein the second obtaining unit is specifically configured to:
acquire an image through acquisition equipment on a vehicle;
and identify the positioning elements in the image using the recognition algorithm to obtain the identification information.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-11.
25. An algorithm adjustment method for recognizing a positioning element, comprising:
comparing and analyzing the element identifier and/or corner position information in the labeling information of positioning elements with the element identifier and/or corner position information in the identification information of the positioning elements output by a recognition algorithm, to obtain an analysis result, wherein the analysis result comprises at least one accuracy evaluation parameter;
and adjusting the recognition algorithm according to the at least one accuracy evaluation parameter, wherein the adjusted recognition algorithm is used for recognizing positioning elements during the driving process of the vehicle.
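As a usage-level illustration of claim 25, the sketch below averages a per-frame classification F1 over an evaluation set and flags when the recognition algorithm should be adjusted; the 0.9 target and all names are assumptions, not part of the claimed method:

    def needs_adjustment(per_frame_labels, per_frame_results, f1_target=0.9):
        """per_frame_labels / per_frame_results: lists of sets of element identifiers,
        one set per frame, from the labeling information and the recognition output.
        Returns True when the averaged classification F1 falls below the target."""
        f1_values = []
        for labeled_ids, recognized_ids in zip(per_frame_labels, per_frame_results):
            correct = len(labeled_ids & recognized_ids)
            false_det = len(recognized_ids - labeled_ids)
            missed = len(labeled_ids - recognized_ids)
            precision = correct / (correct + false_det) if (correct + false_det) else 0.0
            recall = correct / (correct + missed) if (correct + missed) else 0.0
            f1_values.append(2 * precision * recall / (precision + recall)
                             if (precision + recall) else 0.0)
        mean_f1 = sum(f1_values) / len(f1_values) if f1_values else 0.0
        return mean_f1 < f1_target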
CN202010605391.5A 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element Active CN111783623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605391.5A CN111783623B (en) 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605391.5A CN111783623B (en) 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element

Publications (2)

Publication Number Publication Date
CN111783623A true CN111783623A (en) 2020-10-16
CN111783623B CN111783623B (en) 2024-04-12

Family

ID=72760315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605391.5A Active CN111783623B (en) 2020-06-29 2020-06-29 Algorithm adjustment method, device, equipment and medium for identifying positioning element

Country Status (1)

Country Link
CN (1) CN111783623B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011203921A (en) * 2010-03-25 2011-10-13 Denso It Laboratory Inc Driving evaluation apparatus, method and program
US20120209697A1 (en) * 2010-10-14 2012-08-16 Joe Agresti Bias Reduction in Internet Measurement of Ad Noting and Recognition
CN109034078A (en) * 2018-08-01 2018-12-18 腾讯科技(深圳)有限公司 Training method, age recognition methods and the relevant device of age identification model
CN109389030A (en) * 2018-08-23 2019-02-26 平安科技(深圳)有限公司 Facial feature points detection method, apparatus, computer equipment and storage medium
US20200160550A1 (en) * 2018-11-15 2020-05-21 Denso International America, Inc. Machine learning framework for visual tracking
CN110148148A (en) * 2019-03-01 2019-08-20 北京纵目安驰智能科技有限公司 A kind of training method, model and the storage medium of the lower edge detection model based on target detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG Bo; JI Ran: "Performance Evaluation Mechanism of Pavement Crack Recognition Algorithms Based on Tolerance PR Curves", Journal of Chongqing Jiaotong University (Natural Science Edition), no. 07, 15 July 2017 (2017-07-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306353A (en) * 2020-10-27 2021-02-02 北京京东方光电科技有限公司 Augmented reality device and interaction method thereof
CN112306353B (en) * 2020-10-27 2022-06-24 北京京东方光电科技有限公司 Augmented reality device and interaction method thereof

Also Published As

Publication number Publication date
CN111783623B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
EP4044146A1 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN112507949A (en) Target tracking method and device, road side equipment and cloud control platform
CN113221677B (en) Track abnormality detection method and device, road side equipment and cloud control platform
CN110675644B (en) Method and device for identifying road traffic lights, electronic equipment and storage medium
CN111860319A (en) Method for determining lane line, method, device and equipment for evaluating positioning accuracy
CN111540023B (en) Monitoring method and device of image acquisition equipment, electronic equipment and storage medium
CN110703732B (en) Correlation detection method, device, equipment and computer readable storage medium
CN113091757B (en) Map generation method and device
CN109711427A (en) Object detection method and Related product
CN112101223B (en) Detection method, detection device, detection equipment and computer storage medium
CN111292531A (en) Tracking method, device and equipment of traffic signal lamp and storage medium
CN111310840A (en) Data fusion processing method, device, equipment and storage medium
CN113012200B (en) Method and device for positioning moving object, electronic equipment and storage medium
CN111862199A (en) Positioning method, positioning device, electronic equipment and storage medium
CN111339877B (en) Method and device for detecting length of blind area, electronic equipment and storage medium
CN110688873A (en) Multi-target tracking method and face recognition method
EP4020312B1 (en) Traffic light recognition method, apparatus, storage medium and program product
CN111783623B (en) Algorithm adjustment method, device, equipment and medium for identifying positioning element
CN110866504A (en) Method, device and equipment for acquiring marked data
CN111640301B (en) Fault vehicle detection method and fault vehicle detection system comprising road side unit
CN113569912A (en) Vehicle identification method and device, electronic equipment and storage medium
CN113989929A (en) Human body action recognition method and device, electronic equipment and computer readable medium
CN111445499B (en) Method and device for identifying target information
CN110798681B (en) Monitoring method and device of imaging equipment and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant