CN113706552A - Method and device for generating semantic segmentation marking data of laser reflectivity base map - Google Patents


Info

Publication number
CN113706552A
Authority
CN
China
Prior art keywords
base map
laser reflectivity
marked
traffic marking
marking
Prior art date
Legal status
Withdrawn
Application number
CN202110849345.4A
Other languages
Chinese (zh)
Inventor
杨立荣
任海兵
申浩
夏华夏
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202110849345.4A
Publication of CN113706552A
Legal status: Withdrawn (current)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30204: Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A service platform acquires a laser reflectivity base map to be labeled and a preset electronic map, determines the image coverage area of each traffic marking in the laser reflectivity base map according to the electronic map to obtain a target road image, and determines the traffic marking category corresponding to each traffic marking in the target road image. The platform then labels each traffic marking in the laser reflectivity base map according to the determined categories to generate semantic segmentation labeling data, and trains a marking recognition model to be trained with this data. The trained model is used to recognize each traffic marking in a laser reflectivity base map and its category, which reduces labor cost and improves the efficiency of obtaining training samples.

Description

Method and device for generating semantic segmentation marking data of laser reflectivity base map
Technical Field
The specification relates to the field of unmanned driving, in particular to a method and a device for generating semantic segmentation annotation data of a laser reflectivity base map.
Background
With the rapid development of information technology, unmanned driving has begun to enter everyday life. A high-precision map is an indispensable element of unmanned driving: it provides the positions and categories of the various traffic markings on a road.
At present, drawing a high-precision map requires acquiring a road image that shows the road surface and using a recognition model to identify the position and category of each traffic marking in that image. The recognition model therefore has to be trained in advance on labeled training samples.
In the prior art, the position and category information of the traffic markings in a road image must be labeled manually to obtain training samples, which is costly and inefficient. Alternatively, a machine learning model can first label the approximate positions of the traffic markings and the labels are then refined manually; however, because manual work is still required, the labor cost remains high and the efficiency remains low.
Therefore, how to improve the efficiency of obtaining training samples and reduce the labor cost is an urgent problem to be solved.
Disclosure of Invention
The present specification provides a method and an apparatus for generating semantic segmentation labeling data of a laser reflectivity base map, which partially solve the above problems in the prior art.
The technical scheme adopted by the specification is as follows:
the specification provides a method for generating semantic segmentation labeling data of a laser reflectivity base map, which comprises the following steps:
acquiring a laser reflectivity base map to be marked and a preset electronic map;
determining the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determining the traffic marking category corresponding to each traffic marking in the target road image;
marking each traffic marking in the laser reflectivity base map to be marked according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to obtain semantic segmentation marking data corresponding to the generated laser reflectivity base map to be marked;
and training a marking line recognition model to be trained according to the semantic segmentation marking data, wherein the marking line recognition model is used for recognizing each traffic marking line in the laser reflectivity base map and recognizing the category of each traffic marking line in the laser reflectivity base map.
Optionally, determining an image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map, specifically including:
determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map;
obtaining a processed road image corresponding to the laser reflectivity base map to be marked according to the initial coverage area;
and carrying out image segmentation on the processed road image so as to determine the image coverage area of each traffic marking in the laser reflectivity base map to be marked and obtain a target road image corresponding to the laser reflectivity base map to be marked.
Optionally, obtaining an initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map specifically includes:
determining a geographical area corresponding to the laser reflectivity base map to be marked;
determining the geographic coordinates corresponding to all traffic marked lines in the geographic area in the electronic map;
determining image coordinates corresponding to each traffic marking line in the laser reflectivity base map to be marked according to the geographic coordinates corresponding to each traffic marking line in the geographic area in the electronic map;
and determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the image coordinate corresponding to each traffic marking in the laser reflectivity base map to be marked.
Optionally, determining an initial coverage area of each traffic marking in the to-be-marked laser reflectivity base map according to the image coordinate corresponding to each traffic marking in the to-be-marked laser reflectivity base map, specifically including:
for each traffic marking line in the laser reflectivity base map to be marked, determining the basic range of the traffic marking line in the laser reflectivity base map to be marked according to the image coordinate corresponding to the traffic marking line in the laser reflectivity base map to be marked;
and performing width compensation on the basic range according to a preset width compensation value to obtain an initial coverage area of the traffic marking in the laser reflectivity base map to be marked.
Optionally, the image segmentation is performed on the processed road image to determine an image coverage area of each traffic marking in the to-be-labeled laser reflectivity base map, and obtain a target road image corresponding to the to-be-labeled laser reflectivity base map, and specifically includes:
inputting the processed road image into a preset image recognition model to perform image semantic segmentation on the processed road image to obtain an image coverage area of each traffic marking in the laser reflectivity base map to be labeled;
and redrawing the laser reflectivity base map to be marked or the processed road image according to the image coverage area of each traffic marking in the laser reflectivity base map to be marked to obtain the target road image.
Optionally, obtaining a laser reflectivity base map to be marked specifically includes:
acquiring an initial road image;
carrying out image segmentation on the initial road image to obtain each laser reflectivity base map to be labeled;
training a marking line recognition model to be trained according to the semantic segmentation labeling data specifically comprises the following steps:
according to the position of each laser reflectivity base map to be marked in the initial road image, splicing the semantic segmentation marking data corresponding to each laser reflectivity base map to be marked to obtain a marking image corresponding to the initial road image;
and training the marking line recognition model according to the marked image corresponding to the initial road image.
Optionally, the method further comprises:
acquiring a collected road image to be identified, wherein the road image to be identified is a laser reflectivity base map;
inputting the road image to be recognized into the trained marking recognition model, and determining the coverage area of the traffic marking in the road image to be recognized and the traffic marking category of each traffic marking in the road image to be recognized so as to obtain the recognition result corresponding to the road image to be recognized;
according to the acquisition position based on the road image to be identified, determining a map area corresponding to the acquisition position from the electronic map to be drawn as a map area to be drawn;
and drawing traffic marking lines in the map area to be drawn in the electronic map to be drawn according to the identification result.
The present specification provides a device for generating semantic segmentation labeling data of a laser reflectivity base map, comprising:
the acquisition module is used for acquiring a laser reflectivity base map to be marked and a preset electronic map;
the determining module is used for determining the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determining the traffic marking category corresponding to each traffic marking in the target road image;
the marking module is used for marking each traffic marking in the laser reflectivity base map to be marked according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to obtain semantic segmentation marking data corresponding to the generated laser reflectivity base map to be marked;
and the training module is used for training a marking line recognition model to be trained according to the semantic segmentation marking data, and the marking line recognition model is used for recognizing each traffic marking line in the laser reflectivity base map and recognizing the category of each traffic marking line in the laser reflectivity base map.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for generating the laser reflectivity base map semantic segmentation annotation data described above.
The present specification provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the method for generating the laser reflectivity base map semantic segmentation labeling data.
The technical scheme adopted by the specification can achieve the following beneficial effects:
in the method and apparatus for generating semantic segmentation labeling data of a laser reflectivity base map provided in this specification, a service platform may obtain a laser reflectivity base map to be labeled and a preset electronic map, and determine an image coverage area of each traffic marking in the laser reflectivity base map to be labeled according to the electronic map, so as to obtain a target road image corresponding to the laser reflectivity base map to be labeled, and determine a traffic marking category corresponding to each traffic marking in the target road image. And then labeling each traffic marking in the laser reflectivity base map to be labeled according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map to obtain semantic segmentation labeling data corresponding to the generated laser reflectivity base map to be labeled, and training a marking recognition model to be trained according to the semantic segmentation labeling data, wherein the marking recognition model is used for recognizing each traffic marking in the laser reflectivity base map and recognizing the category of each traffic marking in the laser reflectivity base map.
At present, in the process of drawing a high-precision map, the traffic markings and their categories have already been recorded for some areas but not yet for others. The method can therefore generate training samples for the marking recognition model from the areas of the electronic map in which the traffic markings and their categories have already been recorded, together with the laser reflectivity base maps to be labeled that correspond to those areas. That is to say, when labeling a laser reflectivity base map, the service platform can refer to the traffic marking information in the electronic map to label the positions and categories of the traffic markings in the base map. Compared with the prior art, this reduces labor cost and improves efficiency, and it also makes it easier to automatically label the laser reflectivity base maps of other map areas whose traffic markings and categories have not yet been recorded.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the specification; they are not intended to limit it. In the drawings:
fig. 1 is a schematic flow chart of a method for generating semantic segmentation labeling data of a laser reflectivity base map in this specification;
FIG. 2 is a schematic diagram of a laser reflectivity base map provided herein;
FIG. 3 is a schematic illustration of a processed road image provided herein;
FIG. 4 is a schematic view of a target road image provided herein;
FIG. 5 is a diagram of semantic segmentation annotation data provided herein;
fig. 6 is a schematic diagram of a device for generating semantic segmentation labeling data of a laser reflectivity base map provided in the present specification;
fig. 7 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for generating semantic segmentation labeling data of a laser reflectivity base map in this specification, including the following steps:
s101: and acquiring a laser reflectivity base map to be marked and a preset electronic map.
S102: and determining the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determining the traffic marking category corresponding to each traffic marking in the target road image.
In practical applications, a high-precision map is indispensable for an unmanned device to control its own driving on a road. To draw such a map, a service platform needs to determine the traffic markings in a laser reflectivity base map and their traffic marking categories. In this specification a marking recognition model is used for this purpose, so the service platform first needs to obtain training samples for training that model.
Based on the method, the service platform can obtain the laser reflectivity base map to be marked and a preset electronic map, and determines the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determines the traffic marking category corresponding to each traffic marking in the target road image.
The electronic map mentioned here may refer to a high-precision map, a general navigation map, and the like. The laser reflectivity base map mentioned here may be a grayscale image obtained by converting the laser reflectivity corresponding to the point cloud data, as shown in fig. 2.
Fig. 2 is a schematic diagram of a laser reflectivity base map provided in the present specification.
As can be seen from fig. 2, the laser reflectivity base map is a grayscale image in which the road appears gray and the traffic markings (such as lane lines and zebra crossings) appear whiter. Because the laser reflectivity of a traffic marking differs from that of the road surface in the point cloud data, such a grayscale image can be obtained by mapping the laser reflectivity of each point in the point cloud to a grayscale value: the traffic markings have higher grayscale values and thus look whiter than the road surface. The base map shown in fig. 2 is an ideal case; in practice, some areas of a laser reflectivity base map are uneven, and traffic markings such as lane lines may be less clear.
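As a rough illustration of how such a grayscale base map could be produced from point cloud reflectivity, the sketch below rasterizes reflectivity values onto a pixel grid; the resolution, origin, and normalization are assumptions, not details given by the patent.

```python
import numpy as np

def reflectivity_base_map(points_xy, reflectivity, resolution=0.1):
    """Rasterize point cloud laser reflectivity into a grayscale base map.

    points_xy    -- (N, 2) planar coordinates in meters
    reflectivity -- (N,) laser reflectivity values
    resolution   -- pixel size in meters (assumed value)
    """
    origin = points_xy.min(axis=0)                      # lower-left corner of the mapped area
    extent = np.ceil((points_xy.max(axis=0) - origin) / resolution).astype(int) + 1
    base_map = np.zeros((extent[1], extent[0]), dtype=np.uint8)   # rows x cols, black background

    # Higher reflectivity (traffic markings) maps to a higher grayscale value, i.e. whiter pixels.
    gray = np.clip(reflectivity / reflectivity.max() * 255.0, 0, 255).astype(np.uint8)
    px = ((points_xy - origin) / resolution).astype(int)
    # Later points overwrite earlier ones at the same pixel; a real pipeline would aggregate them.
    base_map[px[:, 1], px[:, 0]] = gray
    return base_map, origin, resolution
```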
The service platform determines the image coverage area of each traffic marking in the laser reflectivity base map to be marked, and the image coverage area can be determined in various ways. Specifically, the service platform may determine an initial coverage area of each traffic marking in the to-be-labeled laser reflectivity base map according to the electronic map, obtain a processed road image corresponding to the to-be-labeled laser reflectivity base map according to the initial coverage area, and perform image segmentation on the processed road image to determine an image coverage area of each traffic marking in the to-be-labeled laser reflectivity base map and obtain a target road image corresponding to the to-be-labeled laser reflectivity base map.
The initial coverage area is the image region in the laser reflectivity base map to be labeled where each traffic marking is located, as determined by the service platform from the marking positions recorded in the electronic map. The processed road image is an image containing the traffic markings and the road that is obtained by regenerating or processing the laser reflectivity base map to be labeled; in it, the traffic markings can be clearly distinguished from the surrounding road. The service platform can therefore segment the processed road image accurately, that is, accurately separate the traffic markings from the road area, so as to determine the actual image coverage area of each traffic marking in the laser reflectivity base map to be labeled and obtain the corresponding target road image.
When determining the initial coverage area, the service platform may determine a geographic area corresponding to the laser reflectivity base map to be marked, determine geographic coordinates corresponding to each traffic marking in the geographic area in the electronic map, and determine image coordinates corresponding to each traffic marking in the laser reflectivity base map to be marked according to the geographic coordinates corresponding to each traffic marking in the geographic area in the electronic map, thereby determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the image coordinates corresponding to each traffic marking in the laser reflectivity base map to be marked.
That is to say, the service platform first determines the geographic area covered by the laser reflectivity base map to be labeled. For example, if the base map was captured by an image collector, the geographic area can be determined from the capture location and the collector's image acquisition range. If the base map is a grayscale image obtained from point cloud data, the geographic area can be determined from the acquisition start point and end point of the corresponding point cloud data and the acquisition range of the device that collected the point cloud.
After the geographic area corresponding to the laser reflectivity base map to be labeled has been determined, the traffic markings in that area and their geographic coordinates can be read from the electronic map. Because each image coordinate in the base map has a known correspondence with the geographic coordinates in the electronic map, the geographic coordinates of each traffic marking can be converted into image coordinates in the base map. The initial coverage area of each traffic marking in the laser reflectivity base map to be labeled can then be determined from these image coordinates.
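A minimal sketch of this coordinate conversion, assuming a north-up base map with a known origin and resolution; the function name, parameters, and example coordinates are illustrative, not taken from the patent.

```python
def geo_to_pixel(geo_xy, map_origin, resolution, map_height):
    """Convert a geographic coordinate (planar frame, meters) to pixel (col, row)
    of the laser reflectivity base map. Assumes the base map is axis-aligned with
    the geographic frame and map_origin is the map's lower-left corner."""
    col = int((geo_xy[0] - map_origin[0]) / resolution)
    # image rows grow downward while geographic y grows upward
    row = int(map_height - 1 - (geo_xy[1] - map_origin[1]) / resolution)
    return col, row

# Convert the polyline of one traffic marking taken from the electronic map (example values).
marking_geo = [(425010.2, 4417320.5), (425012.8, 4417345.1)]
marking_px = [geo_to_pixel(p, map_origin=(425000.0, 4417300.0),
                           resolution=0.1, map_height=2000) for p in marking_geo]
```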
The initial coverage area of each traffic marking could in principle be taken directly from the image coordinates converted from the electronic map. However, the geographic coordinates recorded in the electronic map may not accurately represent the actual extent of a marking: for a lane line, usually only its center line is recorded, and for a sidewalk or a turn arrow, the recorded outline may be smaller than the real extent. As a result, the area given by the converted image coordinates may also be smaller than the real area the marking occupies in the laser reflectivity base map to be labeled.
Therefore, for each traffic marking in the laser reflectivity base map to be labeled, the service platform first determines the marking's basic range from its image coordinates in the base map, and then applies width compensation to that range according to a preset width compensation value to obtain the marking's initial coverage area. The width compensation value can be set relatively large, so that the resulting initial coverage area fully covers the traffic marking.
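One way to realize this width compensation is to rasterize the converted marking geometry into a mask and dilate it by the compensation value; the sketch below uses OpenCV, and the compensation distance and resolution are assumed values.

```python
import cv2
import numpy as np

def initial_coverage_mask(marking_px, map_shape, compensation_m=0.6, resolution=0.1):
    """Draw the basic range of one marking (a polyline in pixel coordinates) and
    widen it by a preset compensation value to get its initial coverage area."""
    mask = np.zeros(map_shape, dtype=np.uint8)
    pts = np.array(marking_px, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(mask, [pts], isClosed=False, color=255, thickness=1)

    # Width compensation: dilate by the compensation value converted to pixels.
    radius = max(1, int(round(compensation_m / resolution)))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1))
    return cv2.dilate(mask, kernel)
```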
The service platform then uses the initial coverage areas to produce the processed road image. Because the initial coverage area of each traffic marking has been determined, every pixel outside those areas corresponds to the road itself (such as an asphalt surface). The service platform can therefore set all pixels of the laser reflectivity base map to be labeled that lie outside the initial coverage areas to road attribute pixels, yielding the processed road image. A road attribute pixel is a pixel that expresses the appearance of the road: in a grayscale image (with values from 0 for black to 255 for white), the road has a lower grayscale value than the traffic markings, so the road tends toward gray or black while the markings tend toward white. For such a base map, the road attribute pixels can therefore be set to a fixed grayscale value, for example black (grayscale value 0), as shown in fig. 3.
Fig. 3 is a schematic diagram of a processed road image provided in this specification.
As can be seen from fig. 3, most of the processed road image has been set to black, and objects that do not belong to any traffic marking (such as the green belt in the middle of the road and no-driving areas) have been removed. However, the surroundings of some indication arrows and of the zebra crossing area are not completely black, because the initial coverage area determined from the electronic map may be larger than the actual area of a marking (the whole zebra crossing region, for example, is recorded in the electronic map). The actual image coverage area of each traffic marking in the processed road image is therefore determined by other means in a later step, so that the real extent of each marking in the base map to be labeled can be determined accurately.
Of course, the processed road image can also be produced in other ways. For example, the service platform can generate a new image of the same size as the laser reflectivity base map to be labeled, drawing traffic markings inside the initial coverage areas and road everywhere else. The processed road image is then essentially consistent with the content of the base map to be labeled, but is an image regenerated by the service platform.
In short, the initial coverage area is only used to build the processed road image, in which the traffic markings are highlighted and everything in the original base map that could interfere with marking recognition is removed. Since the pixels of the processed road image outside the markings are essentially road pixels, performing image recognition on it allows the positions of the traffic markings, that is, the image coverage area of each traffic marking in the laser reflectivity base map to be labeled, to be determined accurately.
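A compact sketch of the masking step described above, building the processed road image from the base map and the union of the initial coverage areas; the array names are assumptions.

```python
import numpy as np

def processed_road_image(base_map, coverage_masks, road_value=0):
    """Keep the base map only inside the initial coverage areas of the markings;
    every pixel outside them is set to a road attribute pixel (black by default)."""
    union = np.zeros_like(base_map, dtype=bool)
    for mask in coverage_masks:          # one binary mask per traffic marking
        union |= mask.astype(bool)
    processed = np.full_like(base_map, road_value)
    processed[union] = base_map[union]   # original reflectivity kept only on/near markings
    return processed
```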
After the processed road image has been obtained, the service platform can use it to determine the target road image corresponding to the laser reflectivity base map to be labeled. For example, the platform can input the processed road image into a preset image recognition model that performs image semantic segmentation, which yields the image coverage area of each traffic marking in the base map to be labeled. The platform then redraws the base map to be labeled or the processed road image according to those image coverage areas to obtain the target road image (redrawing here may mean drawing the target road image from scratch, or modifying some of the pixels of the base map to be labeled or of the processed road image).
In other words, the image recognition model separates the road from the traffic markings contained in the processed road image, so the resulting target road image shows, for each pixel of the base map to be labeled, whether that pixel belongs to a traffic marking or to the road, as shown in fig. 4.
Fig. 4 is a schematic diagram of a target road image provided in the present specification.
As can be seen from fig. 4, black in the target road image represents a road in the laser reflectivity base map to be labeled, and white represents a traffic marking line in the laser reflectivity base map to be labeled, that is, which pixels in the laser reflectivity base map to be labeled are roads and which pixels are traffic marking lines can be represented in the target road image.
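A hedged sketch of this segmentation step, assuming a generic PyTorch segmentation network with two output classes (road and marking); the model architecture, weights, and class convention are not specified by the patent.

```python
import numpy as np
import torch

def target_road_image(processed, model):
    """Run a semantic segmentation model on the processed road image and return a
    binary target road image: 255 where a pixel is a traffic marking, 0 where it is road."""
    x = torch.from_numpy(processed.astype(np.float32) / 255.0)[None, None]  # (1, 1, H, W)
    with torch.no_grad():
        logits = model(x)                 # assumed output shape (1, 2, H, W)
    marking = logits.argmax(dim=1)[0]     # class 1 = traffic marking (assumption)
    return marking.numpy().astype(np.uint8) * 255
```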
After the target road image has been determined, the service platform can determine, based on the electronic map, the traffic marking category corresponding to each traffic marking in the target road image. The target road image by itself only distinguishes which pixels are traffic markings and which are road, but traffic markings also have categories: they include lane lines, turn arrows, crosswalk lines, and so on, and the turn arrows themselves come in several kinds.
S103: and marking each traffic marking in the laser reflectivity base map to be marked according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to obtain semantic segmentation marking data corresponding to the generated laser reflectivity base map to be marked.
After the service platform has determined, via the electronic map, the traffic marking category of each traffic marking in the target road image, it labels each traffic marking in the laser reflectivity base map to be labeled according to those categories, thereby generating the semantic segmentation labeling data corresponding to the base map. The semantic segmentation labeling data indicate the traffic marking category of every traffic marking in the base map, and also indicate the road itself; labeling each traffic marking therefore amounts to labeling every pixel of the base map, so that the resulting data record, for each pixel, whether it belongs to the road or to a particular kind of traffic marking, as shown in fig. 5.
Fig. 5 is a schematic diagram of semantic segmentation labeling data provided in this specification.
As can be seen from fig. 5, traffic markings of different categories are shown in different shades in the semantic segmentation labeling data (for example, the zebra crossings are darkest and the lane lines are lightest), and each of them differs from the shade used for the road; the dashed line in the figure represents the stop line. What fig. 5 shows is that the semantic segmentation labeling data record the category of every pixel of the laser reflectivity base map to be labeled: the labeled road image indicates, for each pixel, whether it is road, stop line, lane line, or zebra crossing.
It should be noted that the labeled road image can be stored in the shapefile format; because files in this format can be opened in editing software, the labeled image is easy to modify, so even if the semantic segmentation labeling data contain some errors they can be corrected conveniently.
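As an illustration of the per-pixel labeling described in this step (the class-ID scheme below is an assumption, not defined by the patent), each marking's image coverage area is painted with the ID of its category taken from the electronic map.

```python
import numpy as np

CLASS_IDS = {"road": 0, "lane_line": 1, "zebra_crossing": 2, "stop_line": 3}  # assumed scheme

def build_label_image(map_shape, markings):
    """markings: list of (coverage_mask, category_name) pairs, where the mask is the image
    coverage area of one traffic marking and the category comes from the electronic map.
    Returns a per-pixel class-ID image, i.e. the semantic segmentation labeling data."""
    label = np.full(map_shape, CLASS_IDS["road"], dtype=np.uint8)
    for mask, category in markings:
        label[mask.astype(bool)] = CLASS_IDS[category]
    return label
```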
S104: and training a marking line recognition model to be trained according to the semantic segmentation marking data, wherein the marking line recognition model is used for recognizing each traffic marking line in the laser reflectivity base map and recognizing the category of each traffic marking line in the laser reflectivity base map.
Because the marking recognition model must be able to recognize every traffic marking in a laser reflectivity base map together with its category, the service platform trains the marking recognition model to be trained using the semantic segmentation labeling data and the laser reflectivity base map to be labeled; the trained model is used to recognize each traffic marking in a laser reflectivity base map and the category of each of those markings. Specifically, a training sample generated by the service platform contains the base map to be labeled and its semantic segmentation labeling data; the labeling data give the category of every pixel of the base map, that is, which pixels are road, which are traffic markings, and to which kind of traffic marking each marking pixel belongs, so the pixel categories cover the road and the various kinds of traffic markings.
During training, the laser reflectivity base map to be labeled is fed into the marking recognition model to obtain a recognition result, and the model is trained with the optimization objective of minimizing the deviation between the recognition result and the semantic segmentation labeling data. The trained model can then determine the position of each traffic marking and its category in any laser reflectivity base map, which improves the efficiency of drawing the high-precision map.
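A minimal PyTorch-style training loop for this objective, using cross-entropy as one possible measure of the deviation between the recognition result and the labeling data; the loss choice, model, and data loader are assumptions.

```python
import torch
import torch.nn.functional as F

def train_marking_model(model, loader, epochs=10, lr=1e-3):
    """loader yields batches (base_map, label): base_map is a (B, 1, H, W) float tensor of
    laser reflectivity base maps, label is a (B, H, W) long tensor of per-pixel class IDs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for base_map, label in loader:
            logits = model(base_map)               # (B, num_classes, H, W)
            loss = F.cross_entropy(logits, label)  # deviation from the labeling data
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```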
It should be noted that the point cloud collected in one run may cover many areas, so the initial laser reflectivity base map can be very large and needs to be split before labeling. When obtaining the laser reflectivity base maps to be labeled, the platform can therefore acquire an initial road image (that is, the initial laser reflectivity base map), segment it into the individual base maps to be labeled, and label each of them with the method described above. After labeling is finished, the labeled road images of the individual base maps can be stitched together, according to the position of each base map within the initial road image, to obtain the labeled image corresponding to the initial road image, and the marking recognition model can then be trained with that labeled image. During stitching, because the image coordinates of each base map to be labeled differ from those of the initial road image, the labeled road image of each base map is coordinate-converted with its position in the initial road image as the reference, so as to obtain the labeled image corresponding to the initial road image.
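A simple sketch of this stitching step; the tile offsets within the initial road image are assumed to be known pixel coordinates.

```python
import numpy as np

def stitch_labels(initial_shape, tiles):
    """tiles: list of (label_tile, (row_offset, col_offset)) pairs, where the offset is the
    tile's position in the initial road image. Returns the stitched labeled image."""
    stitched = np.zeros(initial_shape, dtype=np.uint8)
    for label_tile, (r0, c0) in tiles:
        h, w = label_tile.shape
        stitched[r0:r0 + h, c0:c0 + w] = label_tile  # coordinate conversion: tile -> initial image
    return stitched
```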
It should be further noted that the purpose of the marking recognition model, namely recognizing the traffic markings in a laser reflectivity base map and their categories, is ultimately to support the drawing of an electronic map (such as a high-precision map). The service platform can therefore acquire a collected road image to be recognized, input it into the trained marking recognition model, and determine the coverage area of the traffic markings in it together with the category of each of those markings, thereby obtaining the recognition result corresponding to the road image to be recognized. The road image to be recognized is itself a laser reflectivity base map, and its acquisition position is a geographic location for which the high-precision map has not yet been built.
In other words, the recognition result produced by the marking recognition model indicates where the traffic markings are in the road image to be recognized and which category each marking belongs to. The recognition result can take the form of a recognized image corresponding to the road image to be recognized, which records the category of every pixel, that is, whether a pixel belongs to the road or to a particular traffic marking.
The service platform can then determine, according to the acquisition position of the road image to be recognized, the map area corresponding to that position in the electronic map to be drawn, take it as the map area to be drawn, and draw the traffic markings in that map area according to the recognition result.
That is to say, after identifying the positions and categories of the traffic markings in the road image to be recognized, the service platform can draw those markings, and annotate their categories, in the corresponding map area of the electronic map to be drawn. The electronic map to be drawn here may be the electronic map mentioned above or another electronic map.
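For illustration, the sketch below converts the pixels of the recognized markings back to geographic coordinates so they can be written into the map area to be drawn; the inverse transform mirrors the geo-to-pixel conversion assumed earlier and is not taken from the patent.

```python
import numpy as np

def pixel_to_geo(rows, cols, map_origin, resolution, map_height):
    """Inverse of the earlier geo_to_pixel sketch: pixel (row, col) -> planar coordinates."""
    x = map_origin[0] + cols * resolution
    y = map_origin[1] + (map_height - 1 - rows) * resolution
    return np.stack([x, y], axis=-1)

def markings_to_map(recognized, class_ids, map_origin, resolution):
    """recognized: per-pixel class-ID image produced by the trained model.
    Returns {category: (N, 2) geographic points} ready to be drawn into the map area."""
    h, _ = recognized.shape
    out = {}
    for name, cid in class_ids.items():
        if name == "road":
            continue
        rows, cols = np.nonzero(recognized == cid)
        out[name] = pixel_to_geo(rows, cols, map_origin, resolution, map_height=h)
    return out
```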
The unmanned devices mentioned above are devices capable of automatic driving, such as unmanned vehicles, unmanned aerial vehicles, and automatic delivery devices. The method for generating semantic segmentation labeling data of a laser reflectivity base map provided in this specification yields training samples for the recognition algorithm used to draw the high-precision maps that such devices need, and those devices can in particular be applied in the delivery field, for example in delivery scenarios such as express delivery, logistics, and takeaway that use unmanned devices.
It can be seen from the above method that, while a high-precision map is being drawn, there are areas for which the traffic markings and their categories have not yet been recorded and areas for which they have. The method can therefore generate training samples for the marking recognition model from the areas of an electronic map (such as a high-precision map) in which the traffic markings and their categories are already recorded, together with the laser reflectivity base maps to be labeled that correspond to those areas. That is, when labeling a laser reflectivity base map, the service platform can refer to the traffic marking information in the electronic map and label the positions and categories of the markings in the base map, which reduces labor cost and improves efficiency compared with the prior art. In addition, the laser reflectivity base maps of the other areas, whose traffic markings and categories have not yet been recorded, can then be labeled automatically, which makes it easier to draw the maps of those areas.
In this specification, the road image to be identified may be understood as a laser reflectivity base map, and of course, in practical applications, the method may also be applied to extracting traffic markings from a road image captured by a camera.
Based on the same idea, the present specification further provides a corresponding apparatus for generating semantic segmentation labeling data of a laser reflectivity base map, as shown in fig. 6.
Fig. 6 is a schematic diagram of an apparatus for generating semantic segmentation labeling data of a laser reflectivity base map provided in this specification, including:
the acquisition module 601 is used for acquiring a laser reflectivity base map to be marked and a preset electronic map;
a determining module 602, configured to determine, according to the electronic map, an image coverage area of each traffic marking in the to-be-labeled laser reflectivity base map, so as to obtain a target road image corresponding to the to-be-labeled laser reflectivity base map, and determine, in the target road image, a traffic marking category corresponding to each traffic marking;
the labeling module 603 is configured to label each traffic marking in the laser reflectivity base map to be labeled according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to obtain semantic segmentation labeling data corresponding to the generated laser reflectivity base map to be labeled;
the training module 604 is configured to train a to-be-trained marking line recognition model according to the semantic segmentation labeling data, where the marking line recognition model is configured to recognize each traffic marking line in the laser reflectivity base map and recognize a category of each traffic marking line in the laser reflectivity base map.
Optionally, the determining module 602 is specifically configured to determine, according to the electronic map, an initial coverage area of each traffic marking in the to-be-marked laser reflectivity base map; obtaining a processed road image corresponding to the laser reflectivity base map to be marked according to the initial coverage area; and carrying out image segmentation on the processed road image so as to determine the image coverage area of each traffic marking in the laser reflectivity base map to be marked and obtain a target road image corresponding to the laser reflectivity base map to be marked.
Optionally, the determining module 602 is specifically configured to determine a geographic area corresponding to the laser reflectivity base map to be labeled; determining the geographic coordinates corresponding to all traffic marked lines in the geographic area in the electronic map; determining image coordinates corresponding to each traffic marking line in the laser reflectivity base map to be marked according to the geographic coordinates corresponding to each traffic marking line in the geographic area in the electronic map; and determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the image coordinate corresponding to each traffic marking in the laser reflectivity base map to be marked.
Optionally, the determining module 602 is specifically configured to, for each traffic marking in the to-be-labeled laser reflectivity base map, determine, according to the image coordinate corresponding to the traffic marking in the to-be-labeled laser reflectivity base map, a basic range of the traffic marking in the to-be-labeled laser reflectivity base map; and performing width compensation on the basic range according to a preset width compensation value to obtain an initial coverage area of the traffic marking in the laser reflectivity base map to be marked.
Optionally, the determining module 602 is specifically configured to input the processed road image into a preset image recognition model, so as to perform image semantic segmentation on the processed road image, and obtain an image coverage area of each traffic marking in the to-be-labeled laser reflectivity base map; and redrawing the laser reflectivity base map to be marked or the processed road image according to the image coverage area of each traffic marking in the laser reflectivity base map to be marked to obtain the target road image.
Optionally, the obtaining module 601 is specifically configured to obtain an initial road image and carry out image segmentation on it to obtain each laser reflectivity base map to be labeled; the training module 604 is specifically configured to splice the semantic segmentation labeling data corresponding to each laser reflectivity base map to be labeled, according to the position of each base map in the initial road image, to obtain a labeled image corresponding to the initial road image, and to train the marking line recognition model according to that labeled image.
Optionally, the apparatus further comprises:
the drawing module 605 is configured to obtain a collected road image to be identified, where the road image to be identified is a laser reflectivity base map; inputting the road image to be recognized into the trained marking recognition model, and determining the coverage area of the traffic marking in the road image to be recognized and the traffic marking category of each traffic marking in the road image to be recognized so as to obtain the recognition result corresponding to the road image to be recognized; according to the acquisition position based on the road image to be identified, determining a map area corresponding to the acquisition position from the electronic map to be drawn as a map area to be drawn; and drawing traffic marking lines in the map area to be drawn in the electronic map to be drawn according to the identification result.
The present specification also provides a computer-readable storage medium, which stores a computer program, wherein the computer program can be used to execute the method for generating the laser reflectivity base map semantic division labeling data provided in fig. 1.
This description also provides a schematic block diagram of an electronic device corresponding to that of fig. 1, shown in fig. 7. As shown in fig. 7, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, but may also include hardware required for other services. The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to realize the generation method of the laser reflectivity base map semantic segmentation marking data described in the above fig. 1. Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, this programming is nowadays mostly carried out with "logic compiler" software rather than by manually fabricating integrated circuit chips; such software is similar to the compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by writing the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing the various functions may also be regarded as structures within the hardware component, or even as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment, and for relevant points reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method for generating semantic segmentation labeling data for a laser reflectivity base map, relating to the field of unmanned driving, the method comprising:
acquiring a laser reflectivity base map to be marked and a preset electronic map;
determining the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determining the traffic marking category corresponding to each traffic marking in the target road image;
labeling each traffic marking in the laser reflectivity base map to be marked according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to generate semantic segmentation labeling data corresponding to the laser reflectivity base map to be marked;
and training a marking recognition model to be trained according to the semantic segmentation labeling data, wherein the marking recognition model is used for recognizing each traffic marking in the laser reflectivity base map and the category of each traffic marking in the laser reflectivity base map.
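By way of illustration only, the following Python sketch shows one way the labeling step of claim 1 could be realized once the traffic markings have been expressed in base-map pixel coordinates and assigned a category: each marking polygon is rasterized into a per-pixel label image that serves as the semantic segmentation annotation. The category ids, helper names, and use of OpenCV are assumptions made for this sketch, not the applicant's implementation.

    import numpy as np
    import cv2

    # Assumed category ids for illustration only.
    CATEGORY_IDS = {"lane_line": 1, "stop_line": 2, "crosswalk": 3}

    def make_label_image(base_map, markings):
        """markings: list of {'polygon': Nx2 pixel coords, 'category': str}."""
        labels = np.zeros(base_map.shape[:2], dtype=np.uint8)   # 0 = background
        for m in markings:
            poly = np.asarray(m["polygon"], dtype=np.int32).reshape(-1, 1, 2)
            cv2.fillPoly(labels, [poly], CATEGORY_IDS[m["category"]])
        return labels

    if __name__ == "__main__":
        base_map = np.zeros((200, 200), dtype=np.uint8)          # dummy reflectivity base map
        markings = [{"polygon": [(20, 50), (180, 50), (180, 60), (20, 60)],
                     "category": "lane_line"}]
        print(np.unique(make_label_image(base_map, markings)))   # -> [0 1]

A base map paired with a label image produced this way is the kind of training sample the marking recognition model would consume.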
2. The method according to claim 1, wherein determining an image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map specifically comprises:
determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map;
obtaining a processed road image corresponding to the laser reflectivity base map to be marked according to the initial coverage area;
and carrying out image segmentation on the processed road image so as to determine the image coverage area of each traffic marking in the laser reflectivity base map to be marked and obtain a target road image corresponding to the laser reflectivity base map to be marked.
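A minimal sketch, under assumptions, of how the processed road image of claim 2 could be formed: the base map is masked to the union of the initial coverage areas, so that the later segmentation step only has to refine marking boundaries inside those regions. The function and variable names here are hypothetical.

    import numpy as np

    def mask_to_coverage(base_map, coverage_masks):
        """Keep only the reflectivity pixels inside the initial coverage areas."""
        union = np.zeros(base_map.shape[:2], dtype=bool)
        for mask in coverage_masks:
            union |= mask
        return np.where(union, base_map, 0)   # zero out everything outside the coverage areas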
3. The method of claim 2, wherein determining an initial coverage area of each traffic marking in the laser reflectivity base map to be marked based on the electronic map comprises:
determining a geographical area corresponding to the laser reflectivity base map to be marked;
determining the geographic coordinates corresponding to all traffic marked lines in the geographic area in the electronic map;
determining image coordinates corresponding to each traffic marking line in the laser reflectivity base map to be marked according to the geographic coordinates corresponding to each traffic marking line in the geographic area in the electronic map;
and determining the initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the image coordinate corresponding to each traffic marking in the laser reflectivity base map to be marked.
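The geographic-to-image conversion in claim 3 amounts to mapping map-frame coordinates onto raster indices. The sketch below assumes an axis-aligned base map with a known bottom-left origin, a uniform resolution in metres per pixel, and a local east/north coordinate frame; all of these are illustrative assumptions rather than details from the application.

    import numpy as np

    def geo_to_pixel(points_xy, origin_xy, resolution, height_px):
        """points_xy: Nx2 (east, north) metres; origin_xy: bottom-left corner of the base map."""
        col = (points_xy[:, 0] - origin_xy[0]) / resolution
        north = (points_xy[:, 1] - origin_xy[1]) / resolution
        row = (height_px - 1) - north                  # image rows grow downwards
        return np.stack([col, row], axis=1).round().astype(np.int32)

    # A vertex 12.5 m east and 3.0 m north of the origin on a 0.1 m/px,
    # 1000-pixel-high base map lands at column 125, row 969.
    print(geo_to_pixel(np.array([[12.5, 3.0]]), (0.0, 0.0), 0.1, 1000))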
4. The method according to claim 3, wherein determining an initial coverage area of each traffic marking in the laser reflectivity base map to be marked according to the image coordinates corresponding to each traffic marking in the laser reflectivity base map to be marked specifically comprises:
for each traffic marking line in the laser reflectivity base map to be marked, determining the basic range of the traffic marking line in the laser reflectivity base map to be marked according to the image coordinate corresponding to the traffic marking line in the laser reflectivity base map to be marked;
and performing width compensation on the basic range according to a preset width compensation value to obtain an initial coverage area of the traffic marking in the laser reflectivity base map to be marked.
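One plausible reading of the width compensation in claim 4, sketched with OpenCV morphology: the marking's centreline is rasterized as the basic range and then dilated by a preset compensation value so that the initial coverage area approximates the painted width on the ground. The 0.15 m default and 0.1 m/px resolution are assumed values for illustration.

    import numpy as np
    import cv2

    def compensated_coverage(shape, centerline_px, compensation_m=0.15, resolution=0.1):
        """Rasterize a marking centreline and widen it by the compensation value."""
        base_range = np.zeros(shape, dtype=np.uint8)
        pts = centerline_px.reshape(-1, 1, 2).astype(np.int32)
        cv2.polylines(base_range, [pts], isClosed=False, color=255, thickness=1)
        radius_px = max(1, int(round(compensation_m / resolution)))
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (2 * radius_px + 1, 2 * radius_px + 1))
        return cv2.dilate(base_range, kernel) > 0      # boolean initial coverage mask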
5. The method according to claim 2, wherein the image segmentation is performed on the processed road image to determine an image coverage area of each traffic marking in the to-be-labeled laser reflectivity base map, and obtain a target road image corresponding to the to-be-labeled laser reflectivity base map, specifically comprising:
inputting the processed road image into a preset image recognition model to perform image semantic segmentation on the processed road image to obtain an image coverage area of each traffic marking in the laser reflectivity base map to be labeled;
and redrawing the laser reflectivity base map to be marked or the processed road image according to the image coverage area of each traffic marking in the laser reflectivity base map to be marked to obtain the target road image.
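A hedged sketch of claim 5: any callable that maps the processed road image to a per-pixel mask can stand in for the preset image recognition model here; the reflectivity threshold used as the default below is only a placeholder, not the model the application refers to. The refined mask is then redrawn onto a copy of the base map to give the target road image.

    import numpy as np

    def refine_and_redraw(base_map, processed, segment=lambda img: img > 128):
        """Run the stand-in segmentation and redraw the markings on the base map."""
        refined_mask = segment(processed)          # image coverage area of the markings
        target = base_map.copy()
        target[refined_mask] = 255                 # redraw markings at full intensity
        return refined_mask, target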
6. The method according to claim 1, wherein acquiring a laser reflectivity base map to be marked specifically comprises:
acquiring an initial road image;
carrying out image segmentation on the initial road image to obtain each laser reflectivity base map to be marked;
and wherein training a marking recognition model to be trained according to the semantic segmentation labeling data specifically comprises:
stitching, according to the position of each laser reflectivity base map to be marked in the initial road image, the semantic segmentation labeling data corresponding to each laser reflectivity base map to be marked, so as to obtain a labeled image corresponding to the initial road image;
and training the marking recognition model according to the labeled image corresponding to the initial road image.
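A minimal sketch of the tiling-and-stitching flow in claim 6: the large initial road image is cut into fixed-size base-map tiles, each tile is labeled on its own, and the per-tile label images are pasted back at their original offsets to form one annotation image for the whole road. The 512-pixel tile size is an assumed value.

    import numpy as np

    TILE = 512   # assumed tile size in pixels

    def split(image):
        """Yield ((row, col) offset, tile) pairs covering the initial road image."""
        h, w = image.shape[:2]
        for r in range(0, h, TILE):
            for c in range(0, w, TILE):
                yield (r, c), image[r:r + TILE, c:c + TILE]

    def stitch(label_tiles, full_shape):
        """Paste per-tile label images back at their offsets in the full image."""
        stitched = np.zeros(full_shape, dtype=np.uint8)
        for (r, c), tile_labels in label_tiles.items():
            th, tw = tile_labels.shape[:2]
            stitched[r:r + th, c:c + tw] = tile_labels
        return stitched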
7. The method of claim 1, wherein the method further comprises:
acquiring a collected road image to be identified, wherein the road image to be identified is a laser reflectivity base map;
inputting the road image to be recognized into the trained marking recognition model, and determining the coverage area of the traffic marking in the road image to be recognized and the traffic marking category of each traffic marking in the road image to be recognized so as to obtain the recognition result corresponding to the road image to be recognized;
determining, according to the acquisition position of the road image to be identified, a map area corresponding to the acquisition position from an electronic map to be drawn, as a map area to be drawn;
and drawing traffic marking lines in the map area to be drawn in the electronic map to be drawn according to the identification result.
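A sketch, under assumptions, of the map-drawing step of claim 7: the trained model's per-pixel predictions are converted back to geographic coordinates (the inverse of the earlier geographic-to-image mapping) and attached, per traffic marking category, to the map area selected by the collection position. Representing the map area as a plain dictionary is a simplification for this sketch.

    import numpy as np

    def pixels_to_geo(rows_cols, origin_xy, resolution, height_px):
        """Inverse of geo_to_pixel: (row, col) indices back to (east, north) metres."""
        rows, cols = rows_cols[:, 0], rows_cols[:, 1]
        east = origin_xy[0] + cols * resolution
        north = origin_xy[1] + ((height_px - 1) - rows) * resolution
        return np.stack([east, north], axis=1)

    def draw_markings(prediction, origin_xy, resolution, map_area):
        """Add each predicted marking category to the map area as geographic points."""
        for category in np.unique(prediction):
            if category == 0:                       # background
                continue
            px = np.argwhere(prediction == category)
            geo = pixels_to_geo(px, origin_xy, resolution, prediction.shape[0])
            map_area.setdefault(int(category), []).append(geo)
        return map_area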
8. An apparatus for generating laser reflectivity base map semantic segmentation labeling data, comprising:
the acquisition module is used for acquiring a laser reflectivity base map to be marked and a preset electronic map;
the determining module is used for determining the image coverage area of each traffic marking in the laser reflectivity base map to be marked according to the electronic map so as to obtain a target road image corresponding to the laser reflectivity base map to be marked, and determining the traffic marking category corresponding to each traffic marking in the target road image;
the labeling module is used for labeling each traffic marking in the laser reflectivity base map to be marked according to the traffic marking category corresponding to each traffic marking determined in the target road image based on the electronic map, so as to generate semantic segmentation labeling data corresponding to the laser reflectivity base map to be marked;
and the training module is used for training a marking recognition model to be trained according to the semantic segmentation labeling data, wherein the marking recognition model is used for recognizing each traffic marking in the laser reflectivity base map and the category of each traffic marking in the laser reflectivity base map.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the program.
CN202110849345.4A 2021-07-27 2021-07-27 Method and device for generating semantic segmentation marking data of laser reflectivity base map Withdrawn CN113706552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110849345.4A CN113706552A (en) 2021-07-27 2021-07-27 Method and device for generating semantic segmentation marking data of laser reflectivity base map

Publications (1)

Publication Number Publication Date
CN113706552A true CN113706552A (en) 2021-11-26

Family

ID=78650577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110849345.4A Withdrawn CN113706552A (en) 2021-07-27 2021-07-27 Method and device for generating semantic segmentation marking data of laser reflectivity base map

Country Status (1)

Country Link
CN (1) CN113706552A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
WO2020088076A1 (en) * 2018-10-31 2020-05-07 阿里巴巴集团控股有限公司 Image labeling method, device, and system
CN109740554A (en) * 2019-01-09 2019-05-10 宽凳(北京)科技有限公司 A kind of road edge line recognition methods and system
CN111797698A (en) * 2020-06-10 2020-10-20 北京三快在线科技有限公司 Target object identification method and identification device
CN112734775A (en) * 2021-01-19 2021-04-30 腾讯科技(深圳)有限公司 Image annotation, image semantic segmentation and model training method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245931A (en) * 2023-03-16 2023-06-09 如你所视(北京)科技有限公司 Method, device, equipment, medium and product for determining object attribute parameters
CN116245931B (en) * 2023-03-16 2024-04-19 如你所视(北京)科技有限公司 Method, device, equipment, medium and product for determining object attribute parameters

Similar Documents

Publication Publication Date Title
CN112801229B (en) Training method and device for recognition model
CN111311709B (en) Method and device for generating high-precision map
CN108334892B (en) Vehicle type identification method, device and equipment based on convolutional neural network
CN111639682A (en) Ground segmentation method and device based on point cloud data
CN111797698A (en) Target object identification method and identification device
CN112766241B (en) Target object identification method and device
CN111508258A (en) Positioning method and device
CN112327864A (en) Control method and control device of unmanned equipment
CN110929664B (en) Image recognition method and device
CN113642620B (en) Obstacle detection model training and obstacle detection method and device
WO2022116704A1 (en) High-precision map updating method and device
CN112309233A (en) Road boundary determining and road segmenting method and device
CN111797722A (en) Method and device for drawing lane line
CN111426299B (en) Method and device for ranging based on depth of field of target object
CN113706552A (en) Method and device for generating semantic segmentation marking data of laser reflectivity base map
CN114332808A (en) Method and device for predicting steering intention
CN112990099A (en) Method and device for detecting lane line
CN112902987B (en) Pose correction method and device
CN112861831A (en) Target object identification method and device, storage medium and electronic equipment
CN112699043A (en) Method and device for generating test case
CN113344198B (en) Model training method and device
CN114332201A (en) Model training and target detection method and device
CN111899264A (en) Target image segmentation method, device and medium
CN114332189A (en) High-precision map construction method and device, storage medium and electronic equipment
CN115017905A (en) Model training and information recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211126