CN115471804A - Marked data quality inspection method and device, storage medium and electronic equipment - Google Patents

Marked data quality inspection method and device, storage medium and electronic equipment

Info

Publication number
CN115471804A
Authority
CN
China
Prior art keywords
label
target
tag
lane line
road image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211158727.3A
Other languages
Chinese (zh)
Inventor
刘若愚
闫泽杭
柴亚捷
张亚森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd, Xiaomi Automobile Technology Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202211158727.3A priority Critical patent/CN115471804A/en
Publication of CN115471804A publication Critical patent/CN115471804A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/764 Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Recognition using pattern recognition or machine learning, using neural networks
    • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; evaluation of the quality of the acquired patterns


Abstract

The disclosure relates to a labeled-data quality inspection method and device, a storage medium, and an electronic device. The method includes: acquiring road image labeling data to be quality-inspected, the data including a road image and a labeling label for each lane line in the road image; determining, from the labeling labels corresponding to all the lane lines, target labels meeting a preset label constraint condition, and determining the target lane lines corresponding to the target labels; inputting the road image into a trained classification model to obtain the predicted label of the target lane line output by the classification model; and, when the predicted label is consistent with the target label, determining the target label to be a valid labeling label of the target lane line in the road image. The method improves both the efficiency and the accuracy of quality inspection.

Description

Labeling data quality inspection method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of lane line labeling, and in particular to a labeled data quality inspection method, a labeled data quality inspection device, a storage medium, and an electronic device.
Background
Lane line detection is a basic perception task in the field of automated driving; its goal is to identify and locate the lane lines of interest in the visual signal acquired by an onboard camera. Lane line detection plays a significant role in basic applications of an automated driving system such as positioning, planning and control, map building, and simulated display of the automated driving environment. Compared with lane line detection based on manual lane line labeling or on recognition by traditional algorithms, detection schemes based on deep neural networks are the industry mainstream.
Lane line detection based on a deep neural network requires massive manually labeled data to train the deep neural network model. Because lane lines are of many types, densely distributed, and in some categories ambiguously defined, annotators working long hours at high attention inevitably make mistakes, and the finished annotation data contains a large number of wrong labels. A deep neural network model for detecting lane lines, however, is extremely sensitive to the accuracy of its labeled data, and noisy data can do unpredictable damage to the model's accuracy. The labeled lane marking data therefore needs quality inspection to clean the manual labeling results and remove as many wrong labels as possible.
In the related art, quality inspection of lane marking data is performed manually. Specifically, the lane marking data is visualized to obtain lane images with the marking information displayed on them; quality inspectors then check, one by one, whether the position and marking information of each lane line in a lane image are accurate, and flag wrong marking information together with the lane image containing it. However, a lane image usually contains a large number of lane lines that criss-cross and overlap, distant lane lines are unclear, and lane lines are easily occluded, so quality inspection requires a large amount of manpower.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method and an apparatus for quality inspection of labeled data, a storage medium, and an electronic device.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for quality inspection of labeled data, the method including:
acquiring road image labeling data to be subjected to quality inspection, wherein the road image labeling data comprise road images and labeling labels of each lane line in the road images;
determining target labels meeting preset label constraint conditions from the labeling labels corresponding to all the lane lines, and determining target lane lines corresponding to the target labels;
inputting the road image into a trained classification model to obtain a prediction label of the target lane line output by the classification model;
and under the condition that the predicted label is consistent with the target label, determining the target label as a valid labeling label of the target lane line in the road image.
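The four steps above can be sketched end to end; this is a minimal illustration assuming dictionary-based annotations, a caller-supplied constraint predicate, and a `classify` stand-in for the trained classification model (all names and data shapes are hypothetical, not from the disclosure):

```python
def quality_check(road_image, annotations, constraint, classify):
    """Sketch of the claimed method: keep a label only if it satisfies the
    preset constraint AND the classification model predicts the same label.

    annotations: {lane_id: label} mapping (hypothetical representation)
    constraint:  predicate(label, annotations) -> bool
    classify:    stand-in for the trained classification model
    """
    valid = {}
    for lane_id, label in annotations.items():
        if not constraint(label, annotations):
            continue  # label fails the preset label constraint condition
        predicted = classify(road_image, lane_id)
        if predicted == label:
            valid[lane_id] = label  # predicted and target labels agree
    return valid
```

A labeling whose predicted and target labels disagree is simply not marked valid, and can be routed to manual review as described later in the disclosure.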
Optionally, the labeling label of each lane line includes a position label representing the position of the lane line relative to the host vehicle and a category label of the lane line, and the position label and the category label correspond to different label types;
the determining, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition includes:
determining the target tags meeting the preset tag constraint condition from the position tags and the category tags corresponding to all the lane lines;
correspondingly, the inputting the road image into the trained classification model to obtain the prediction label of the target lane line output by the classification model includes:
and inputting the road image into a trained classification model to obtain the predicted label of the target lane line under the label type corresponding to the target label.
Optionally, the number of the lane lines is multiple and, correspondingly, the number of the position labels is multiple; the determining, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition includes:
and determining the position label which is different from any other position label in the plurality of position labels as the target label.
Optionally, the determining, from the position tags and the category tags corresponding to all the lane lines, the target tag meeting the preset tag constraint condition includes:
aiming at the position label, determining a preset category label set corresponding to the position label;
and under the condition that the category label corresponding to the same lane line as the position label exists in the preset category label set, determining the position label and the category label corresponding to the same lane line as the position label as the target label.
Optionally, the number of the lane lines is multiple and, correspondingly, the number of the position labels is multiple; the determining, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition includes:
for each position label, in a case where it is determined that the position label exists in an associated position-label set, determining whether a target position label associated with the position label exists among the plurality of position labels;
and if the target position label exists, determining the position label as the target label.
Optionally, the road image labeling data further includes a corresponding position labeling area of each lane line in the road image;
the classification model comprises a characteristic image extraction module, a lane line characteristic extraction module and a classification module;
the characteristic image extraction module is used for converting the road image into a characteristic map;
the lane line feature extraction module is used for determining a plurality of interest points from a target position marking area corresponding to the target lane line in the road image and determining interest features corresponding to the interest points in the feature map;
the classification module is used for classifying according to the interesting features to obtain the prediction label.
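The point-sampling step of the lane line feature extraction module described above can be illustrated with a minimal sketch: interest points chosen in image space are mapped down to the feature map and their per-channel features gathered. The `stride`, the point format, and the function name are assumptions for illustration, not details from the disclosure:

```python
import numpy as np

def gather_interest_features(feature_map, points, stride=4):
    """Sample per-point features from a (C, H, W) feature map at image-space
    interest points, assuming the feature map downsamples the image by `stride`."""
    c, h, w = feature_map.shape
    feats = []
    for x, y in points:
        fx = min(x // stride, w - 1)  # clamp to feature-map bounds
        fy = min(y // stride, h - 1)
        feats.append(feature_map[:, fy, fx])
    return np.stack(feats)  # shape (num_points, C)
```

The classification module would then consume these stacked interest features (e.g., pooled and fed to a classifier head) to produce the predicted label.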
Optionally, the method further comprises:
and displaying, to a user, the labeling labels in the road image labeling data other than the valid labeling labels, together with the road image, so that the user can perform manual quality inspection on, or correct, those labels.
According to a second aspect of the embodiments of the present disclosure, there is provided an annotated data quality inspection device, the device comprising:
the system comprises an acquisition module, a quality inspection module and a quality inspection module, wherein the acquisition module is configured to acquire road image marking data to be subjected to quality inspection, and the road image marking data comprises a road image and a marking label of each lane line in the road image;
the determining module is configured to determine a target label meeting a preset label constraint condition from the labeling labels corresponding to all the lane lines, and determine a target lane line corresponding to the target label;
the input module is configured to input the road image into a trained classification model, and obtain a prediction label of the target lane line output by the classification model;
an execution module configured to determine the target label as a valid annotation label of the target lane line in the road image if the predicted label and the target label are consistent.
Optionally, the label tag of each lane line includes a position tag representing a position of the lane line relative to the host vehicle and a category tag of the lane line, and the position tag and the category tag correspond to different tag types;
the determining module includes:
the first determining submodule is configured to determine the target tags meeting the preset tag constraint condition from the position tags and the category tags corresponding to all the lane lines;
correspondingly, the input module is configured to input the road image into the trained classification model, and obtain the predicted label of the target lane line under the label type corresponding to the target label.
Optionally, the number of the lane lines is multiple and, correspondingly, the number of the position labels is multiple, and the determining module includes:
a second determining submodule configured to determine, as the target tag, the location tag that is different from any of the other location tags in the plurality of location tags.
Optionally, the first determining sub-module includes:
a third determining sub-module configured to determine, for the location tag, a preset category tag set corresponding to the location tag;
a fourth determining sub-module configured to determine, when the category tag corresponding to the same lane line as the position tag exists in the preset category tag set, both the position tag and the category tag corresponding to the same lane line as the position tag as the target tag.
Optionally, the number of the lane lines is multiple and, correspondingly, the number of the position labels is multiple, and the determining module includes:
a fifth determining sub-module configured to determine, for each of the location tags, whether a target location tag associated with the location tag exists from the plurality of location tags in a case where it is determined that the location tag exists in the associated set of location tags;
a sixth determining sub-module configured to determine the location tag as the target tag if the target location tag exists.
Optionally, the road image labeling data further includes a corresponding position labeling area of each lane line in the road image;
the classification model comprises a characteristic image extraction module, a lane line characteristic extraction module and a classification module;
the characteristic image extraction module is used for converting the road image into a characteristic map;
the lane line feature extraction module is used for determining a plurality of interest points from a target position marking area corresponding to the target lane line in the road image and determining interest features corresponding to the interest points in the feature map;
the classification module is used for classifying according to the interesting features to obtain the prediction label.
Optionally, the apparatus further comprises:
and the manual processing module is configured to show other labeling labels except the effective labeling label in the road image labeling data and the road image to a user so as to allow the user to perform manual quality inspection or correct the other labeling labels.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, on which computer program instructions are stored, which when executed by a processor, implement the steps of the annotated data quality inspection method provided in the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the quality inspection method for the annotated data provided by the first aspect of the disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
and acquiring road image marking data to be subjected to quality inspection, wherein the road image marking data comprises a road image and a marking label of each lane line in the road image. And determining a target label meeting the preset label constraint condition from the labeling labels corresponding to all the lane lines, and determining a target lane line corresponding to the target label. And inputting the road image into the trained classification model to obtain a prediction label of the target lane line output by the classification model. And under the condition that the predicted label is consistent with the target label, determining the target label as an effective labeling label of the target lane line in the road image. Compared with the manual quality inspection method in the related art, the automatic quality inspection method disclosed by the invention has higher efficiency. Compared with a manual sampling inspection mode, the mode disclosed by the invention has the advantages that the detection is more comprehensive, and the detection result is more accurate.
In addition, in this way, target labels that are accurate at the labeling-label level are first screened out based on the preset label constraint condition; the classification model then predicts, at the image level, the predicted label of the target lane line corresponding to each target label, and the target label is further verified against the predicted label. A valid labeling label screened and verified at multiple levels in this way is more reliable than a result determined at a single level.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of annotating data quality testing according to an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating a classification model according to an exemplary embodiment.
FIG. 3 is a block diagram illustrating an annotated data quality testing apparatus according to an exemplary embodiment.
FIG. 4 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
In the related art, quality inspection of lane marking data is usually performed by the development team of the deep neural network model. However, because a development team cannot process data manually at the scale a labeling team can, full quality inspection is time-consuming and labor-intensive and delays project progress. Development teams therefore usually complete the quality inspection task by randomly sampling only a small portion of the lane marking data. Such sampling, however, leaves most of the wrong labels in the quality-inspected lane marking data, which degrades the accuracy of the deep neural network model to be trained. Low accuracy in the lane line detection model is fatal to the downstream automatic driving model and seriously harms the accuracy and safety of downstream tasks. In view of this, the present disclosure provides a labeled-data quality inspection method and device, a storage medium, and an electronic device. The labeled data is automatically quality-inspected by an intelligent quality inspection method that assists or even replaces manual quality inspection, improving the quality of the inspected data and accelerating the quality inspection process.
Fig. 1 is a flowchart illustrating an annotated data quality inspection method according to an exemplary embodiment, where the annotated data quality inspection method is used in a terminal device, as shown in fig. 1, and includes the following steps.
In step S11, road image labeling data to be quality inspected is obtained, where the road image labeling data includes a road image and a label of each lane line in the road image.
It should be noted that one road image corresponds to one piece of road image labeling data, and the lane marking data used for training the deep neural network model includes road image labeling data corresponding to a plurality of road images. A road image is an image captured by a camera on a vehicle on the road and contains the lane lines drawn on the road.
Each lane line in the road image corresponds to a labeling label, which may be manually labeled. The label can be a position label, a category label of the lane line, and the like. The category of the lane line may refer to a category into which image features of the lane line are classified. For example, the category of the lane line may be a white dotted line, a white solid line, a yellow dotted line, a yellow solid line, a double white dotted line, a double white solid line, a white dotted solid line, a double yellow dotted line, a double yellow solid line, a yellow dotted solid line, an orange dotted/solid line, a blue dotted/solid line, or the like.
In some embodiments, the category of a lane line may also be a category classified by the function of the lane line. For example, the category may be the indication marking category, whose function is to indicate lanes, driving direction, road edges, sidewalks, parking spaces, bus stops, speed bumps, and the like. Lane lines of the indication marking category may include: boundary lines of opposite lanes that may be crossed, boundary lines of same-direction lanes that may be crossed, tidal lane lines, lane edge lines, left-turn waiting area lines, intersection guide lines, guide lane lines, pedestrian crossing lines, vehicle distance confirmation lines, lane entrance and exit markings, parking space markings, bus stop markings, speed bump markings, guide arrows, road surface text markings, road surface graphic markings, and the like.
Further, for example, the category of a lane line may be the prohibition marking category, whose function is to announce special road traffic regulations such as compliance requirements, prohibitions, and restrictions. Lane lines of the prohibition marking category may include: boundary lines of opposite lanes that must not be crossed, boundary lines of same-direction lanes that must not be crossed, stop lines, stop-and-yield lines, slow-down-and-yield lines, non-motor-vehicle prohibited zone markings, channelizing lines, grid lines, dedicated lane lines, no-turn (no-U-turn) markings, and the like.
For another example, the category of a lane line may be the warning marking category, whose function is to prompt road users to be aware of particular conditions on the road and to stay alert and prepared to respond. Lane lines of the warning marking category may include: road (lane) width transition markings, markings approaching an obstacle, markings approaching a railway level crossing, deceleration markings, and the like.
In step S12, a target label meeting a preset label constraint condition is determined from the labeling labels corresponding to all the lane lines, and a target lane line corresponding to the target label is determined.
Optionally, the number of the lane lines is multiple and, correspondingly, the number of the position labels is multiple; the determining, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition includes: determining, as the target label, a position label that is different from every other position label among the plurality of position labels.
The position labels of the lane lines are used to characterize the positions of the lane lines relative to the host vehicle. The host vehicle is the vehicle on which the camera that captures the lane image is mounted.
Since the lane image is an image taken by a camera on a vehicle on the road, each lane line in the road image has a unique relative position with respect to the vehicle. For example, the location tag may be a "right lane line" tag for characterizing that the lane line is located on the right side of the vehicle and closest to the vehicle, a "next right lane line" tag for characterizing that the lane line is next to the lane line corresponding to the "right lane line" tag, a "right road edge line" tag for characterizing that the lane line is located at the edge of the right road, and so on. Therefore, in some embodiments, the preset tag constraint condition includes that the position tags corresponding to one lane image have uniqueness.
For example, if the preset label constraint condition includes the condition that the position labels corresponding to one lane image must be unique, a position label that differs from every other position label corresponding to that lane image may be determined as a target label. For example, assume that the position labels corresponding to five lane lines in a lane image are A, B, C, D, and B, respectively. The position labels A, C, and D can serve as target labels, while position label B cannot.
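The uniqueness screening in this example can be sketched as a one-pass filter (the function name is hypothetical):

```python
from collections import Counter

def unique_position_tags(position_tags):
    """Keep only position labels that occur exactly once in the image,
    per the uniqueness constraint on position labels."""
    counts = Counter(position_tags)
    return [t for t in position_tags if counts[t] == 1]
```

Applied to the example above, labels A, C, and D survive while both occurrences of B are screened out.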
In step S13, the road image is input into the trained classification model, and the predicted label of the target lane line output by the classification model is obtained.
The classification model is obtained by training on a small amount of sample data; the training is the supervised learning of the related art, and its details are not described in the present disclosure.
In some embodiments, before inputting the road image into the trained classification model, the other lane lines except the target lane line in the road image may be masked to obtain a processed road image, and then the processed road image may be input into the trained classification model to obtain the predicted label of the target lane line output by the classification model.
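The masking step can be sketched as follows, assuming each lane line's position annotation area is given as arrays of pixel indices (a hypothetical representation; the disclosure does not specify one):

```python
import numpy as np

def mask_other_lanes(image, lane_pixels, target_index):
    """Zero out the annotated pixels of every lane line except the target,
    so the classification model sees only the target lane line's region.

    lane_pixels: list of (rows, cols) index arrays, one per lane line.
    """
    out = image.copy()  # keep the original road image intact
    for i, (rows, cols) in enumerate(lane_pixels):
        if i != target_index:
            out[rows, cols] = 0
    return out
```

The processed image, rather than the raw road image, would then be fed to the trained classification model.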
In step S14, if the predicted tag and the target tag are consistent, the target tag is determined as a valid annotation tag of the target lane line in the road image.
In the embodiment of the present disclosure, the predicted label and the target label being consistent means that both the label type and the label content/name are the same. For example, the predicted label "right lane line" is consistent with the target label "right lane line". The predicted label "right lane line" and the target label "next right lane line" have the same label type but different label content/names, and are therefore not consistent.
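The consistency check can be sketched as a comparison of label type and label content (the dictionary keys are assumed for illustration, not specified in the disclosure):

```python
def labels_consistent(predicted, target):
    """Labels are consistent only when both the label type and the
    label content/name agree."""
    return (predicted["type"] == target["type"]
            and predicted["name"] == target["name"])
```

Under this check, "right lane line" vs. "next right lane line" fails even though both are position-type labels.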
By adopting the method, road image labeling data to be quality-inspected is acquired, the data including a road image and a labeling label for each lane line in the road image. Target labels meeting the preset label constraint condition are determined from the labeling labels corresponding to all the lane lines, and the target lane lines corresponding to the target labels are determined. The road image is input into the trained classification model to obtain the predicted label of the target lane line output by the classification model. When the predicted label is consistent with the target label, the target label is determined to be a valid labeling label of the target lane line in the road image. Compared with the manual quality inspection of the related art, this automatic quality inspection is more efficient; compared with manual sampling inspection, it is more comprehensive and its results are more accurate.
In addition, according to the method, target labels that are accurate at the labeling-label level are first screened out based on the preset label constraint condition; the classification model then predicts, at the image level, the predicted label of the target lane line corresponding to each target label, and the target label is verified against the predicted label. A valid labeling label screened and verified at multiple levels in this way is more reliable than a result determined at a single level.
The road images, the target lane lines, and the valid labeling labels of the target lane lines determined in the manner of the present disclosure are used for training a deep neural network model for detecting lane lines, which can improve the accuracy of the deep neural network model.
In some embodiments, after the valid labeling labels in the road image labeling data are determined, the remaining labeling labels in the road image labeling data, the lane lines corresponding to those labels, and the road image may be jointly displayed to a user in a visual manner, so that the user may perform manual quality inspection on, or correct, the remaining labels. Compared with manually inspecting all road image labeling data, this greatly reduces the manual workload and improves quality inspection efficiency.
Optionally, the label tag of each lane line includes a position tag representing a position of the lane line relative to the host vehicle, and a category tag of the lane line, and the position tag and the category tag correspond to different tag types.
For example, a location tag is a location type tag and a category tag is a category type tag.
Optionally, the determining, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition includes:
and determining the target label meeting the preset label constraint condition from the position labels and the category labels corresponding to all the lane lines.
Under the condition that each lane line corresponds to a position tag and a category tag, a target tag meeting a preset tag constraint condition can be determined from the position tags and the category tags corresponding to all the lane lines, and the target tag can be a position tag or a category tag.
Correspondingly, inputting the road image into the trained classification model to obtain the prediction label of the target lane line output by the classification model, including:
and inputting the road image into a trained classification model to obtain the predicted label of the target lane line under the label type corresponding to the target label.
If the target tag is a location type tag, then the predicted tag should also be a location type tag. If the target tag is a class type tag, then the predicted tag should also be a class type tag.
In one embodiment, the road image is input into the trained classification model, so as to obtain a classification result of the target lane line under each label type, and then the classification result under the label type corresponding to the target label is used as the prediction label.
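This selection step can be sketched as a lookup: the model yields one classification result per label type, and the result under the target label's type is taken as the prediction label. The type names below are assumptions for illustration:

```python
def select_prediction(classification_results: dict, target_tag_type: str) -> str:
    """Pick the model's classification result under the label type
    that matches the target label's type."""
    return classification_results[target_tag_type]

# hypothetical model output: one classification result per label type
results = {"position": "right lane line", "category": "white solid line"}

# a position-type target label is compared against the position-type
# prediction; a category-type target label against the category-type one
print(select_prediction(results, "position"))  # right lane line
print(select_prediction(results, "category"))  # white solid line
```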
Optionally, the determining, from the position tags and the category tags corresponding to all the lane lines, the target tag meeting the preset tag constraint condition includes:
for the position label, determining a preset category label set corresponding to the position label; and when the category label corresponding to the same lane line as the position label exists in the preset category label set, determining both the position label and that category label as target labels.
The correspondence between a position label and its preset category label set may be determined in advance based on lane line setting rules. For example, if the position label is "right road edge line", the preset category label set corresponding to it may be set to include categories such as "white solid line" and "fence". When the category label corresponding to the same lane line as the position label exists in the preset category label set (for instance, the position label is "right road edge line" and the category label is "white solid line"), both the position label and the category label may be determined as target labels. Conversely, when no such category label exists in the preset category label set (for instance, the position label is "right road edge line" but the category label is "white dotted line"), neither the position label nor the category label is a target label.
In this way, target labels can be screened out according to the correspondence between position labels and category labels, eliminating mislabeling cases in which the position label and the category label assigned to the same lane line are mutually exclusive.
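A minimal sketch of this position/category constraint follows, with a hypothetical preset mapping; the actual sets would come from the lane line setting rules:

```python
# Hypothetical preset mapping from a position label to the category
# labels allowed for that position (an assumption for illustration).
PRESET_CATEGORY_SETS = {
    "right road edge line": {"white solid line", "fence"},
}

def position_category_valid(position_tag: str, category_tag: str) -> bool:
    """Both tags qualify as target tags only if the category tag falls
    in the preset category set of the position tag."""
    return category_tag in PRESET_CATEGORY_SETS.get(position_tag, set())

print(position_category_valid("right road edge line", "white solid line"))   # allowed pair
print(position_category_valid("right road edge line", "white dotted line"))  # mutually exclusive pair
```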
Optionally, the determining a target tag meeting a preset tag constraint condition from the labeling tags corresponding to all of the lane lines includes:
for each of the location tags, determining from a plurality of the location tags whether a target location tag associated with the location tag exists if it is determined that the location tag exists in an associated set of location tags; and if the target position label exists, determining the position label as the target label.
It should be noted that the position labels of the lane lines in the same road image should not "skip". For example, when a road image contains a lane line a whose position label is "second right lane line", there should also be a lane line b whose position label is "right lane line". If no lane line b with the position label "right lane line" exists, the position label of lane line a may be a "skipped" label, which indicates that the position label of that lane line may be mislabeled.
To detect such cases, a set of associated position labels may be preset, and each position label in the set should appear in a pair with the other position label associated with it. The position label y associated with a position label x may be preset or determined based on image features in the road image. For example, if the position label x is "second right lane line", then there should be a lane line on the left side of the lane line corresponding to x, and the position label y of that lane line should be "right lane line".
In some embodiments, when it is determined that a position label exists in the associated position label set, it may be determined whether a target position label associated with that position label exists among the plurality of position labels corresponding to the road image. For example, for the position label "second right lane line", it is determined whether the associated target position label "right lane line" exists among the plurality of position labels. If the target position label exists, the position label may be determined as a target label. Conversely, if no target position label exists, the position label is not taken as a target label.
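The pairing check can be sketched as follows; the association map is a hypothetical stand-in for the preset associated position label set:

```python
# Hypothetical association map: each key should appear together with
# its associated position label in the same road image.
ASSOCIATED_TAGS = {"second right lane line": "right lane line"}

def position_tag_is_target(tag: str, all_tags: set) -> bool:
    """A position tag in the association set is a target tag only when
    its associated tag also appears among the image's position tags.
    Tags outside the association set carry no pairing constraint here."""
    associated = ASSOCIATED_TAGS.get(tag)
    return associated is None or associated in all_tags

tags = {"second right lane line", "right lane line"}
print(position_tag_is_target("second right lane line", tags))                          # pair present
print(position_tag_is_target("second right lane line", {"second right lane line"}))   # pair missing
```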
The present disclosure also provides another embodiment of determining a target tag. The number of lane lines in the road image is plural, and accordingly, the number of position tags is plural. The embodiment of determining the target tag meeting the preset tag constraint condition from the position tag and the category tag may be:
A first candidate position label that differs from every other position label is determined from the plurality of position labels. For the first candidate position label, a preset category label set corresponding to it is determined. When the category label corresponding to the same lane line as the first candidate position label exists in the preset category label set, the first candidate position label is determined as a second candidate position label, and the category label corresponding to the same lane line is determined as a target label. For the second candidate position label, when it is determined that the second candidate position label exists in the preset associated position label set, it is determined whether a target position label associated with the second candidate position label exists among the plurality of position labels. If the target position label exists, the second candidate position label is determined as a target label; if not, the second candidate position label is not a target label.
In this way, target labels screened through multiple screening conditions are more accurate.
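The three screening conditions of this embodiment (position-label uniqueness, preset category set, association pairing) can be combined into a single pass, sketched below with hypothetical inputs:

```python
def screen_target_tags(lane_lines, preset_sets, assoc_map):
    """lane_lines: list of {'position': ..., 'category': ...} dicts.
    Returns (position, category) pairs surviving all three checks."""
    positions = [line["position"] for line in lane_lines]
    targets = []
    for line in lane_lines:
        pos, cat = line["position"], line["category"]
        if positions.count(pos) != 1:                 # 1) position label must be unique
            continue
        if cat not in preset_sets.get(pos, set()):    # 2) category must be in the preset set
            continue
        assoc = assoc_map.get(pos)
        if assoc is not None and assoc not in positions:  # 3) associated label must co-occur
            continue
        targets.append((pos, cat))
    return targets

lane_lines = [
    {"position": "right road edge line", "category": "white solid line"},
    {"position": "second right lane line", "category": "white dashed line"},
]
preset_sets = {
    "right road edge line": {"white solid line"},
    "second right lane line": {"white dashed line"},
}
assoc_map = {"second right lane line": "right lane line"}

print(screen_target_tags(lane_lines, preset_sets, assoc_map))
# only the road edge line survives: the "right lane line" paired with the
# second candidate is absent from the image
```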
Optionally, the road image labeling data further includes a corresponding position labeling area of each lane line in the road image; the classification model comprises a feature image extraction module, a lane line feature extraction module and a classification module; the characteristic image extraction module is used for converting the road image into a characteristic map; the lane line feature extraction module is used for determining a plurality of interest points from a target position marking area corresponding to the target lane line in the road image and determining interest features corresponding to the interest points in the feature map; the classification module is used for classifying according to the interesting features to obtain the prediction label.
The position labeling area corresponding to each lane line in the road image may be an area represented by a plurality of points, lines, or boxes drawn by a user for that lane line in the road image. The image area corresponding to the position labeling area in the road image is the image area where the lane line is located.
Illustratively, as shown in fig. 2, the classification model includes a feature image extraction module, a lane line feature extraction module, and a classification module. The feature image extraction module is used for converting the road image into a feature map. Specifically, the road image passes through a backbone network to generate features at three scales (L0, L1, and L2): the road image is converted into a first feature map of the same size as the image, the first feature map is down-sampled to obtain a second feature map, and the second feature map is down-sampled to obtain a third feature map; L0 is determined from the third feature map, L1 from the second feature map and L0, and L2 from L1 and the first feature map. For more detailed principles, reference is made to the related art. The three features (L0, L1, and L2) are each up-sampled (Upsample) into feature subgraphs of the same size, which are then concatenated into one feature map.
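A toy numerical sketch of this multi-scale flow (two 2x down-samplings, top-down fusion, then up-sampling everything back to one size and stacking); it uses plain NumPy average pooling and nearest-neighbour up-sampling, not the disclosure's actual backbone:

```python
import numpy as np

def downsample(x):
    """2x average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbour up-sampling."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

img = np.random.default_rng(0).random((16, 16))  # stand-in for the road image
f1 = img                 # first feature map, same size as the image
f2 = downsample(f1)      # second feature map (8x8)
f3 = downsample(f2)      # third feature map (4x4)

L0 = f3                          # coarsest scale
L1 = f2 + upsample(L0, 2)        # fuse L0 into the second scale
L2 = f1 + upsample(L1, 2)        # fuse L1 into the first scale

# up-sample all three to a common size and stack into one feature map
feature_map = np.stack([upsample(L0, 4), upsample(L1, 2), L2])
print(feature_map.shape)   # (3, 16, 16)
```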
The lane line feature extraction module is configured to determine a plurality of points of interest (ROIs) from the target position labeling area corresponding to the target lane line in the road image, where each point of interest includes one or more pixels, and to determine the interest features corresponding to those points in the feature map (e.g., the features at the positions corresponding to the points of interest represented by a dotted line in the feature map in fig. 2). The interest features represent the target lane line. In this way, the classification module classifies according to the interest features to obtain the prediction label of the target lane line.
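Gathering the interest features then amounts to indexing the feature map at the points of interest. The sketch below uses exact pixel indexing; a real implementation might instead use bilinear ROI sampling:

```python
import numpy as np

def gather_roi_features(feature_map, points):
    """feature_map: (C, H, W); points: (row, col) pixel coordinates.
    Returns one C-dimensional feature vector per point of interest."""
    return np.stack([feature_map[:, r, c] for r, c in points])

fm = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # toy 2-channel map
feats = gather_roi_features(fm, [(0, 0), (1, 2), (3, 3)])
print(feats.shape)   # (3, 2): three points, two channels each
```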
The classification module comprises a plurality of fully connected (FC) layers and is used for classifying the interest features (target lane line features) to obtain a label classification result of the position type and a label classification result of the category type, from which the prediction label under the label type corresponding to the target label is determined.
In some embodiments, when determining a plurality of interest points from within a target position labeling area corresponding to a target lane line in the road image, the plurality of interest points may be determined at equal intervals along an extending direction of the lane line from within the target position labeling area in the road image.
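Sampling points at equal intervals along the lane line's extension can be done by interpolating over cumulative arc length, sketched here under the assumption that the labeling area is given as a polyline of (x, y) vertices:

```python
import numpy as np

def sample_points_along_line(polyline, n_points):
    """polyline: (N, 2) array of (x, y) vertices along the lane line.
    Returns n_points equally spaced along the cumulative arc length."""
    polyline = np.asarray(polyline, dtype=float)
    seg = np.diff(polyline, axis=0)                 # per-segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])        # per-segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], n_points)   # equal arc-length ticks
    return np.column_stack([
        np.interp(targets, cum, polyline[:, 0]),
        np.interp(targets, cum, polyline[:, 1]),
    ])

pts = sample_points_along_line([[0, 0], [10, 0]], 3)
print(pts)   # three points at x = 0, 5, 10 along the straight line
```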
FIG. 3 is a block diagram illustrating an annotated data quality inspection apparatus according to an exemplary embodiment. Referring to fig. 3, the annotated data quality inspection apparatus 300 includes:
the obtaining module 310 is configured to obtain road image labeling data to be subjected to quality inspection, where the road image labeling data includes a road image and a labeling label of each lane line in the road image;
a determining module 320, configured to determine, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition, and determine a target lane line corresponding to the target label;
an input module 330, configured to input the road image into a trained classification model, so as to obtain a predicted label of the target lane line output by the classification model;
an executing module 340, configured to determine the target label as a valid annotation label of the target lane line in the road image if the predicted label and the target label are consistent.
By adopting the device, road image labeling data to be subjected to quality inspection is obtained, the data including the road image and the labeling label of each lane line in the road image. Target labels meeting the preset label constraint condition are determined from the labeling labels corresponding to all the lane lines, and the target lane lines corresponding to the target labels are determined. The road image is input into the trained classification model to obtain the prediction label of the target lane line output by the classification model. When the predicted label is consistent with the target label, the target label is determined as a valid labeling label of the target lane line in the road image. Compared with the manual quality inspection of the related art, the automated quality inspection of the present disclosure is more efficient. Compared with manual spot checking, the inspection of the present disclosure is more comprehensive and its results more accurate.
In addition, in this manner, accurate target labels are first screened out at the labeling-label level based on the preset label constraint condition; then the prediction label of the target lane line corresponding to each target label is predicted at the image level through the classification model, and whether the target label is accurate is verified against the prediction label. Valid labeling labels screened and verified at multiple levels are more accurate than results determined at a single level.
Optionally, the label tag of each lane line includes a position tag representing a position of the lane line relative to the subject vehicle and a category tag of the lane line, and the position tag and the category tag correspond to different tag types;
the determining module 320 includes:
the first determining submodule is configured to determine the target tags meeting the preset tag constraint condition from the position tags and the category tags corresponding to all the lane lines;
correspondingly, the input module 330 is configured to input the road image into the trained classification model, so as to obtain the predicted label of the target lane line under the label type corresponding to the target label.
Optionally, the target tag includes a plurality of location tags, the number of lane lines is multiple, and accordingly, the number of location tags is multiple, and the determining module 320 includes:
a second determining sub-module configured to determine the location tag different from any other location tag in the plurality of location tags as the target tag.
Optionally, the first determining sub-module includes:
a third determining submodule configured to determine, for the location tag, a preset category tag set corresponding to the location tag;
a fourth determining sub-module configured to determine, when the category label corresponding to the same lane line as the position label exists in the preset category label set, both the position label and the category label corresponding to the same lane line as the position label as the target label.
Optionally, the target tag includes a plurality of location tags, the number of lane lines is multiple, and accordingly, the number of location tags is multiple, and the determining module 320 includes:
a fifth determining sub-module configured to determine, for each of the location tags, whether a target location tag associated with the location tag exists from the plurality of location tags in a case where it is determined that the location tag exists in the associated set of location tags;
a sixth determining sub-module configured to determine the location tag as the target tag if the target location tag exists.
Optionally, the road image labeling data further includes a corresponding position labeling area of each lane line in the road image;
the classification model comprises a feature image extraction module, a lane line feature extraction module and a classification module;
the characteristic image extraction module is used for converting the road image into a characteristic map;
the lane line feature extraction module is used for determining a plurality of interest points from a target position marking area corresponding to the target lane line in the road image and determining interest features corresponding to the interest points in the feature map;
the classification module is used for classifying according to the interest features to obtain the prediction label.
Optionally, the annotated data quality inspection apparatus 300 further includes:
and the manual processing module is configured to show other labeling labels except the effective labeling label in the road image labeling data and the road image to a user so as to allow the user to perform manual quality inspection or correct the other labeling labels.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the annotated data quality inspection method provided by the present disclosure.
FIG. 4 is a block diagram illustrating an electronic device in accordance with an example embodiment. Referring to fig. 4, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
With continued reference to fig. 4, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the annotated data quality inspection method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The input/output interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described labeled data quality inspection method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described annotation data quality inspection method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above mentioned annotation data quality testing method when executed by the programmable apparatus.
The embodiment of the present disclosure further provides a vehicle, which includes the annotation data quality inspection device 300 or includes the electronic device 800.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A quality inspection method for labeled data is characterized by comprising the following steps:
acquiring road image labeling data to be subjected to quality inspection, wherein the road image labeling data comprise road images and labeling labels of each lane line in the road images;
determining a target label meeting a preset label constraint condition from the labeling labels corresponding to all the lane lines, and determining a target lane line corresponding to the target label;
inputting the road image into a trained classification model to obtain a prediction label of the target lane line output by the classification model;
and under the condition that the predicted label is consistent with the target label, determining the target label as a valid labeling label of the target lane line in the road image.
2. The method of claim 1, wherein the label tag of each lane line comprises a location tag characterizing the location of the lane line relative to the subject vehicle and a category tag of the lane line, the location tag and the category tag corresponding to different tag types;
the step of determining a target label meeting a preset label constraint condition from the labeling labels corresponding to all the lane lines includes:
determining the target tags meeting the preset tag constraint condition from the position tags and the category tags corresponding to all the lane lines;
correspondingly, the inputting the road image into the trained classification model to obtain the prediction label of the target lane line output by the classification model includes:
and inputting the road image into a trained classification model to obtain the predicted label of the target lane line under the label type corresponding to the target label.
3. The method according to claim 1, wherein the target tags include a plurality of position tags, the number of lane lines is multiple, and accordingly, the determining the target tags meeting preset tag constraints from the labeling tags corresponding to all the lane lines comprises:
and determining the position label which is different from any other position label in the plurality of position labels as the target label.
4. The method according to claim 2, wherein the determining the target tag meeting the preset tag constraint condition from the location tags and the category tags corresponding to all the lane lines comprises:
for the position label, determining a preset category label set corresponding to the position label;
and under the condition that the category label corresponding to the same lane line as the position label exists in the preset category label set, determining the position label and the category label corresponding to the same lane line as the position label as the target label.
5. The method according to claim 1, wherein the target tag includes a plurality of position tags, the number of lane lines is plural, and accordingly, the number of position tags is plural, and the determining the target tag meeting a preset tag constraint condition from the labeling tags corresponding to all the lane lines includes:
for each of the location tags, determining whether a target location tag associated with the location tag exists from the plurality of location tags if the location tag is determined to exist in the set of associated location tags;
and if the target position label exists, determining the position label as the target label.
6. The method according to claim 1, wherein the road image labeling data further comprises a corresponding position labeling area of each of the lane lines in the road image;
the classification model comprises a characteristic image extraction module, a lane line characteristic extraction module and a classification module;
the characteristic image extraction module is used for converting the road image into a characteristic map;
the lane line feature extraction module is used for determining a plurality of interest points from a target position marking area corresponding to the target lane line in the road image and determining interest features corresponding to the interest points in the feature map;
the classification module is used for classifying according to the interest features to obtain the prediction label.
7. The method of claim 1, further comprising:
and displaying other labeling labels except the effective labeling label in the road image labeling data and the road image to a user so that the user can perform manual quality inspection or correct the other labeling labels.
8. An apparatus for quality inspection of labeled data, the apparatus comprising:
an acquisition module configured to acquire road image labeling data to be subjected to quality inspection, wherein the road image labeling data comprises a road image and a labeling label of each lane line in the road image;
a determining module configured to determine, from the labeling labels corresponding to all the lane lines, a target label meeting a preset label constraint condition, and determine a target lane line corresponding to the target label;
an input module configured to input the road image into a trained classification model and obtain a prediction label of the target lane line output by the classification model;
an execution module configured to determine the target label as a valid labeling label of the target lane line in the road image in a case where the prediction label is consistent with the target label.
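Not part of the claims; an end-to-end sketch of the quality-inspection flow that claims 1 and 8 describe, with the constraint check and the trained classifier injected as callables so the control flow stands alone. All names (`quality_inspect`, the lane identifiers) are hypothetical.

```python
def quality_inspect(road_image, annotations, meets_constraint, classifier):
    """annotations: mapping lane_id -> labeling label.
    Returns (valid, for_review): labels the classifier confirms, and
    labels routed to manual quality inspection or correction."""
    valid, for_review = {}, {}
    for lane_id, label in annotations.items():
        if meets_constraint(label):
            predicted = classifier(road_image, lane_id)
            if predicted == label:
                valid[lane_id] = label      # model agrees: accept the label
                continue
        for_review[lane_id] = label         # constraint failed or mismatch
    return valid, for_review
```

Labels that either fail the preset constraint or disagree with the model's prediction end up in `for_review`, mirroring claim 7's manual-inspection path.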
9. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-7.
CN202211158727.3A 2022-09-22 2022-09-22 Marked data quality inspection method and device, storage medium and electronic equipment Pending CN115471804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211158727.3A CN115471804A (en) 2022-09-22 2022-09-22 Marked data quality inspection method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211158727.3A CN115471804A (en) 2022-09-22 2022-09-22 Marked data quality inspection method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115471804A true CN115471804A (en) 2022-12-13

Family

ID=84335231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211158727.3A Pending CN115471804A (en) 2022-09-22 2022-09-22 Marked data quality inspection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115471804A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824275A (en) * 2023-08-29 2023-09-29 青岛美迪康数字工程有限公司 Method, device and computer equipment for realizing intelligent model optimization
CN116824275B (en) * 2023-08-29 2023-11-17 青岛美迪康数字工程有限公司 Method, device and computer equipment for realizing intelligent model optimization

Similar Documents

Publication Publication Date Title
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
US9275547B2 (en) Prediction of free parking spaces in a parking area
CN107430815A (en) Method and system for automatic identification parking area
US11734783B2 (en) System and method for detecting on-street parking violations
CN106815574B (en) Method and device for establishing detection model and detecting behavior of connecting and calling mobile phone
CN107316006A (en) A kind of method and system of road barricade analyte detection
CN110723432A (en) Garbage classification method and augmented reality equipment
CN113192109B (en) Method and device for identifying motion state of object in continuous frames
CN109785637A (en) The assay method and device of rule-breaking vehicle
CN109711427A (en) Object detection method and Related product
CN106682648B (en) A kind of user takes mobile phone behavioral value method and apparatus
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
Cao et al. Amateur: Augmented reality based vehicle navigation system
JP2017163374A (en) Traffic situation analyzer, traffic situation analyzing method, and traffic situation analysis program
CN115471804A (en) Marked data quality inspection method and device, storage medium and electronic equipment
CN115294022A (en) Fastener defect detection method and system, computer equipment and storage medium
CN111768630A (en) Violation waste image detection method and device and electronic equipment
US20080211689A1 (en) Illegal-parking-management portable terminal, illegal-parking management method and computer program product
KR101573243B1 (en) Apparatus for providing vehicle location, and system and method for guiding parking location employing the same
CN113901946A (en) Abnormal behavior detection method and device, electronic equipment and storage medium
CN113192016A (en) Method, device and equipment for detecting abnormal deformation of conveyor belt and storage medium
CN103761345A (en) Video retrieval method based on OCR character recognition technology
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
KR20210066081A (en) Parking management system capable of recognizing thai car number and the method there of
TWI451990B (en) System and method for lane localization and markings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination