CN116342745A - Editing method and device for lane line data, electronic equipment and storage medium

Editing method and device for lane line data, electronic equipment and storage medium

Info

Publication number
CN116342745A
Authority
CN
China
Prior art keywords
track
screen
track segment
segment
lane line
Prior art date
Legal status
Pending
Application number
CN202310318565.3A
Other languages
Chinese (zh)
Inventor
欧阳宇彤
Current Assignee
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Autonavi Software Co Ltd filed Critical Autonavi Software Co Ltd
Priority to CN202310318565.3A
Publication of CN116342745A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/203 Drawing of straight lines or curves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a lane line data editing method and device, an electronic device, and a storage medium. According to embodiments of the application, a target driving track can be displayed on a display screen; a first-screen track segment is determined according to display parameters submitted for the target driving track, and an editing interface of the lane line data corresponding to the first-screen track segment is displayed. The remaining part of the target driving track is split into multi-screen track segments according to the first-screen track segment, so as to obtain a track segment splitting result that conforms to the viewing angle determined by the user and improve the user experience. In response to the user completing the editing of the lane line data in the editing interface of the previous screen track segment, the editing interface of the lane line data corresponding to the next screen track segment is displayed on the display screen, so as to obtain the lane line data edited for each screen track segment. By displaying the editing interfaces of the split screen track segments in sequence, the difficulty of editing lane line data can be reduced and editing efficiency improved.

Description

Editing method and device for lane line data, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of high-precision maps, and in particular, to a lane line data editing method, device, electronic apparatus, and storage medium.
Background
In scenarios where driving-assistance information is provided based on an electronic map, real-world road traffic information needs to be reproduced when producing the electronic map. A core element of the electronic map is the lane line. When the electronic map is produced, the lane line data needs to be adjusted through manual editing to ensure the accuracy and completeness of the lane line data.
However, lane line data is characterized by a large data scale and complex data relationships. When performing an editing operation, a lane line operator needs to drag the screen repeatedly to find the position of the lane line to be edited before the lane line data can be edited. This makes editing lane line data difficult and time-consuming. A new lane line data editing method is therefore needed to reduce the difficulty and time consumption of editing lane line data and improve editing efficiency.
Disclosure of Invention
The embodiment of the application provides a lane line data editing method, a lane line data editing device, electronic equipment and a storage medium, so as to solve one or more of the technical problems.
In a first aspect, an embodiment of the present application provides a method for editing lane line data, the method including: displaying a target driving track on a display screen; determining a first-screen track segment according to display parameters submitted for the target driving track, and displaying an editing interface of lane line data corresponding to the first-screen track segment; splitting the remaining part of the target driving track into multi-screen track segments according to the first-screen track segment; and in response to a user completing the editing of the lane line data in the editing interface of the previous screen track segment, displaying the editing interface of the lane line data corresponding to the next screen track segment on the display screen, so as to obtain the lane line data edited for each screen track segment.
In a second aspect, an embodiment of the present application provides a method for manufacturing a high-precision map, the method including: determining a target driving track to be edited in the high-precision map; acquiring lane line data edited for the target driving track, the lane line data being generated based on any of the methods above; and manufacturing the high-precision map according to the acquired lane line data.
In a third aspect, an embodiment of the present application provides an apparatus for editing lane line data, including: a track display module, configured to display a target driving track on a display screen; a track segment determining module, configured to determine a first-screen track segment according to display parameters submitted for the target driving track and to display an editing interface of lane line data corresponding to the first-screen track segment; a track segment splitting module, configured to split the remaining part of the target driving track into multi-screen track segments according to the first-screen track segment; and a data editing interface display module, configured to, in response to a user completing the editing of the lane line data in the editing interface of the previous screen track segment, display the editing interface of the lane line data corresponding to the next screen track segment on the display screen, so as to obtain the lane line data edited for each screen track segment.
In a fourth aspect, an embodiment of the present application provides a device for manufacturing a high-precision map, where the device includes: the track determining module is used for determining a target driving track to be edited in the high-precision map; the data acquisition module is used for acquiring lane line data edited for the target driving track, and the lane line data is generated based on the method provided in the embodiment; and the map making module is used for making a high-precision map according to the acquired lane line data.
In a fifth aspect, embodiments of the present application provide an electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing any of the above methods when executing the computer program.
In a sixth aspect, embodiments of the present application provide a computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement any of the above methods.
In a seventh aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when executed by a processor, implements the method of any one of the above.
Compared with the related art, the method has the following advantages:
according to the embodiments of the present application, the target driving track is displayed on a display screen; a first-screen track segment is determined according to the display parameters submitted for the target driving track, and an editing interface of the lane line data corresponding to the first-screen track segment is displayed. The remaining part of the target driving track is split into multi-screen track segments according to the first-screen track segment, so as to obtain a track segment splitting result that conforms to the display parameters determined by the user (for example, a lane line operator). Then, in response to the user completing the editing of the lane line data in the editing interface of the previous screen track segment, the editing interface of the lane line data corresponding to the next screen track segment is displayed on the display screen, so as to obtain the lane line data edited for each screen track segment. By displaying the editing interface of the next screen track segment obtained by splitting, or of a specified screen track segment, the display can jump directly to the lane line position to be edited, avoiding the time consumed by repeatedly dragging the screen to search for that position, reducing the difficulty of editing lane line data and improving editing efficiency.
Further, for multiple driving tracks corresponding to the same lane line, the overlap between the range of a split screen track segment and the range of a driving track whose editing has already been completed can be determined, and the screen track segments that have already been edited can be skipped, further reducing the time consumed in editing lane line data and improving editing efficiency.
The foregoing is merely an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of this specification, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, the detailed description of the present application is given below.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the application and are not to be considered limiting of its scope.
Fig. 1 shows a flowchart of a method for editing lane line data provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an edit page of lane line data provided in an embodiment of the present application;
FIG. 3 shows the first schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 4 shows the second schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 5 shows the third schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 6 shows the fourth schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 7 shows the fifth schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 8 shows the sixth schematic diagram of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application;
FIG. 9 illustrates one of the schematic diagrams of the implementation of deleting a one-screen track segment provided in the embodiments of the present application;
FIG. 10 shows a second schematic diagram of an embodiment of deleting a one-screen track segment provided in an embodiment of the present application;
FIG. 11 illustrates a third schematic diagram of an implementation of deleting a one-screen track segment provided in an embodiment of the present application;
fig. 12 shows a schematic view of a scenario of an editing scheme of lane line data provided in an embodiment of the present application;
fig. 13 shows a flowchart of a method for manufacturing a high-precision map provided in an embodiment of the present application;
Fig. 14 is a block diagram showing the configuration of an editing apparatus for lane line data provided in the embodiment of the present application;
fig. 15 is a block diagram showing the construction of the high-precision map making apparatus provided in the embodiment of the present application; and
fig. 16 shows a block diagram of an electronic device used to implement an embodiment of the present application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the following describes related technologies of the embodiments of the present application. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application.
Related concepts referred to in this application are first described. The lane line referred to in the embodiments of the present application is a marking that conveys traffic information and road information to traffic participants by means of lines, elevation marks, raised road markers, outline marks and the like on the road surface. In an editing scenario for lane line data, the shape data and attribute data of a lane line may be edited, where the shape data and attribute data describe the characteristics of the lane line. The shape data of a lane line describes the position of the lane line and its geometric shape. In the electronic map, a lane line is represented by a line connecting lane shape points. The lane line operator can edit the shape data of a lane line by adding, deleting or modifying the positions of the lane shape points, so that the shape data more accurately expresses the actual coordinate position and geometric connection shape of the lane line.
The attribute data of a lane line includes the type of the lane line (e.g., single dashed line, single solid line, double dashed line, double solid line, left-dashed right-solid or left-solid right-dashed line, etc.), its color (e.g., white or yellow), its material (e.g., paint, metal, cement, etc.), a longitudinal deceleration identification (e.g., whether the identification is located on the left or right side), a movable guardrail identification (e.g., whether the identification is located on the left or right side), a credibility identification, etc. The credibility identification refers to a description of the reliability of the road collection photos that the operator refers to when editing the lane line data, such as whether the photos are clear and whether the lane lines in the photos are occluded.
In one related art prior to the present application, a lane line operator needs to search for the position of the lane line to be edited in the editing interface through drag operations before the lane line data can be edited. Because the same road section often has multiple lane lines and the geometric connection relationships between lane lines are complex, the lane line operator needs to drag repeatedly to determine the position of the lane line that needs editing. For lane lines whose geometric connection shape changes (such as curved lane lines), the display viewing angle also needs to be adjusted repeatedly so that the extension direction of the lane lines on the editing page is generally consistent with the extension direction of the lane lines in the road collection photos, thereby ensuring that the lane line data is consistent with the characteristics of the actual lane lines and that the edited lane line data is accurate. Editing lane line data with this related art is therefore difficult, time-consuming and inefficient.
In view of the foregoing, embodiments of the present application provide a new lane line data editing method to solve the above technical problems in whole or in part.
An embodiment of the present application provides a method for editing lane line data, as shown in fig. 1, which is a flowchart of a method 100 for editing lane line data according to an embodiment of the present application, where the method 100 may include:
In step S101, the target driving track is displayed on the display screen.
In order to distinguish the different driving tracks involved in the embodiments of the present application, the driving track presented to the user (for example, a lane line operator) through the display screen is denoted as the target driving track, that is, the driving track for which the user edits the lane line data.
The driving track involved is a track that reproduces the travel of the data acquisition vehicle along the actual road. While driving, the data acquisition vehicle uses one or more image/point cloud acquisition devices to collect images/point clouds of the road and its surroundings. An image collected by the acquisition vehicle is denoted as a road collection photo; information such as the acquisition point identifier, the acquisition point coordinates and the acquisition time is recorded correspondingly with the road collection photo and stored together with the photo in a storage location associated with the driving track. In the embodiments of the present application, the acquisition points can be used as the track points of the driving track, and the acquisition points (track points) are connected in sequence according to the acquisition time to form the driving track. In the embodiments of the present application, the lane line operator edits, based on the driving track, the lane line data of the road corresponding to the driving track. The lane line operator may edit the lane line data based on the driving track and the road collection photos corresponding to the driving track, as well as other related data (for example, the point cloud data corresponding to the driving track).
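As an illustrative sketch of this assembly step, acquisition points can be ordered by acquisition time and connected into a driving track. The Python structure below is an assumption for illustration (field names such as point_id, coords and capture_time are hypothetical), not the application's actual data model.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AcquisitionPoint:
    point_id: str                 # hypothetical identifier of the acquisition point
    coords: Tuple[float, float]   # coordinates under the track coordinate system
    capture_time: float           # acquisition timestamp
    photos: List[str]             # road collection photos recorded with this point

def build_driving_track(points: List[AcquisitionPoint]) -> List[AcquisitionPoint]:
    """Connect acquisition points in acquisition-time order to form the driving track."""
    return sorted(points, key=lambda p: p.capture_time)
```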
The method for editing lane line data provided by the embodiments of the present application may be integrated into an application (such as a program, application software or a web application) for editing lane lines. When editing lane line data, the lane line operator may use the application through a terminal device such as a mobile phone, a computer, a tablet computer (Pad), a Virtual Reality (VR) device or an Augmented Reality (AR) device, and perform the editing operation based on the display screen of the terminal device. The content displayed on the display screen and the editing result of the lane line data can be determined by acquiring and responding to the editing operations of the lane line operator.
It can be understood that, since the data acquisition vehicle travels along the road driving direction, the direction of the driving track in the embodiments of the present application is consistent with the driving direction of the road on which it was recorded. The following description mainly takes as an example a scenario in which the data acquisition vehicle drives on the right-hand side of the road and lane line data is edited with the target driving track displayed on the display screen oriented approximately parallel to the vertical direction, and develops the lane line data editing method provided by the embodiments of the present application on that basis.
In one possible implementation manner, before the target driving track is displayed on the display screen, the road collection photos corresponding to at least one driving track of the lane line making task may also be displayed as a reference for selecting a driving track. The lane line making task refers to an editing task of lane line data to be processed by a user, and it comprises subtasks of editing the lane line data corresponding to one or more driving tracks. In combination with the foregoing description, in the editing scenario of lane line data, the user edits the lane line data of the lane lines along the driving track. Displaying the road collection photos corresponding to the driving tracks provides the user with a reference basis for editing the lane line data.
In the embodiments of the present application, the target driving track to be displayed may be determined by acquiring a driving track selected from the at least one driving track. Specifically, the driving track selected by the lane line operator may be determined as the target driving track in response to the user's selection; alternatively, the driving track recommended by a pre-trained machine learning model may be acquired as the target driving track in response to the lane line operator's confirmation of the driving track selected by the model.
The machine learning model is used to recommend the target driving track. Based on the characteristics of the road collection photos corresponding to the plurality of driving tracks, the model can recommend, as the target driving track, the driving track whose road collection photos are clear and whose lane lines are not occluded. Furthermore, the acquisition time of the road collection photos can be taken into account, and the driving track whose acquisition time is closer to the time at which the user carries out the lane line making task can be recommended as the target driving track. The embodiments of the present application do not limit the specific algorithm or structure of the machine learning model.
In step S102, a first screen track segment is determined according to the display parameters submitted for the target track, and an editing interface of lane line data corresponding to the first screen track segment is displayed.
After the target driving track is determined, it may be displayed on the display screen of the lane line operator based on default display parameters. In the embodiments of the present application, the display parameters involved may include the track range of the driving track in the display screen, a zoom parameter, and the distance between a track segment and the boundary of the display area. The content of the target driving track displayed on the display screen changes in response to the user's adjustment of the display parameters. The user can adjust the track range and the zoom parameter of the track in the display screen by dragging the displayed target driving track or by zooming in or out, and can also adjust the display parameters by entering parameter values. Among the display parameters submitted for the target driving track, the track range can be determined through the coordinates, in the display screen, of the track segment of the driving track. A track segment refers to the part of the target driving track displayed on the display screen.
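A minimal sketch of how these display parameters might be grouped follows; the field names are illustrative assumptions rather than the application's data model.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DisplayParameters:
    # Track range: screen coordinates of the displayed track segment's bounding area.
    track_range: Tuple[Tuple[float, float], Tuple[float, float]]
    zoom_level: float             # zoom parameter set by the operator
    boundary_distance_px: float   # distance between the track segment and the display-area boundary
    display_width_px: float       # width of the display area
    display_height_px: float      # height of the display area
```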
In one possible implementation, the editing interface of the first-screen track segment also displays initial marking data of the lane lines for the first-screen track segment. When the editing interface of the first-screen track segment is displayed, the initial marking data of the lane lines for the first-screen track segment may first be acquired. The initial marking data is generated by invoking a machine learning model to make predictions on the point cloud data of the road and the road collection photos. For example, the initial marking data may include shape points of lane lines generated by a machine learning model identifying the road collection photos. The lane line operator can verify the positions of the shape points of the lane lines, adjust shape points with larger errors in the initial marking data, and correct erroneous marks, thereby improving the accuracy of the lane line data.
In addition, a road collection photo corresponding to the first-screen track segment can be displayed at a position associated with the editing interface of the first-screen track segment, as a reference for editing the lane line data.
Fig. 2 shows a schematic diagram of an editing page of lane line data provided in an embodiment of the present application. As shown in fig. 2, the editing page may include the displayed first-screen track segment and the initial marking data generated by prediction for the lane lines of the first-screen track segment. Elements associated with the scene, such as the road collection photos corresponding to the displayed first-screen track segment and editing controls (e.g., buttons, input fields and drop-down fields), may also be displayed in the editing page. The road collection photos can be obtained based on the track points on the first-screen track segment; one track point may correspond to multiple road collection photos, and the editing interface can respond to the lane line operator's selection and display the road collection photos selected for viewing by the lane line operator. It will be appreciated that fig. 2 only schematically illustrates one possible editing page, and the embodiments of the present application do not limit the specific display design of the editing interface.
In step S103, the remaining part of the target driving track is split into multi-screen track segments according to the first-screen track segment.
In one application example, the split result of the first screen track segment and the remaining portion of the multi-screen track segments (i.e., at least one-screen track segment) may be stored in a stack form, and the stack storing the split result may be denoted as a job stack.
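The application does not specify the storage structure beyond calling it a job stack; as a rough sketch, the splitting result could be held as an ordered list with a cursor pointing to the screen currently being edited.

```python
from typing import List, Optional

class JobStack:
    """Holds the splitting result (first screen plus remaining multi-screen segments) of a driving track."""

    def __init__(self) -> None:
        self._screens: List[dict] = []   # each entry: one screen track segment with its display-area coordinates
        self._cursor: int = 0            # index of the screen currently being edited

    def store_split_result(self, screens: List[dict]) -> None:
        """Store (or replace) the splitting result for the track."""
        self._screens = list(screens)
        self._cursor = 0

    def next_screen(self) -> Optional[dict]:
        """Return the next screen track segment to edit, or None when the track is finished."""
        if self._cursor >= len(self._screens):
            return None
        screen = self._screens[self._cursor]
        self._cursor += 1
        return screen
```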
In the embodiments of the present application, the display area, in the display screen, of each of the multi-screen track segments involved corresponds to area coordinates under the track coordinate system. The area coordinate range of a display area may be determined according to the vertex coordinates of the display area. The track coordinate system refers to the coordinate system used to record the coordinates of the track points on the driving track, and may be, for example, the Mars coordinate system (GCJ-02) or the WGS-84 (World Geodetic System 1984, an internationally adopted geodetic coordinate system) coordinate system.
In one possible implementation manner, when the remaining part of the target track is split into the multi-screen track segments according to the first-screen track segment, the remaining part of the target track can be split sequentially by utilizing the sliding window according to the direction of the target track, so that a track segment splitting result taking the sliding window as a unit can be obtained. The boundary range of the sliding window may be determined according to the display area of the track segment (for example, may be consistent with the area size of the display area), and the sliding direction of the sliding window may be determined according to the line segment direction of the end portion of the track segment of the previous screen. In the embodiment of the application, the next screen track segment can be determined according to the previous screen track segment and the sliding window in a mode of carrying out iterative processing on the target track. The next track segment refers to a track segment to be determined, and the previous track segment refers to a track segment which is adjacent to the next track segment and is already determined.
When the target driving track is split, the area coordinates of the display area of the multi-screen track segment corresponding to the track coordinate system can be determined according to the starting point coordinates and the end point coordinates of the multi-screen track segment. That is, when determining the next screen track segment, it may be determined that the display area of the next screen track segment corresponds to the area coordinates under the track coordinate system according to the start point coordinates and the end point coordinates of the next screen track segment.
In one possible implementation manner, when the remaining part of the target driving track is sequentially split using the sliding window according to the direction of the target driving track, for the next-screen track segment to be determined, a target line segment formed by the last track point of the previous screen track segment and the next track point can first be determined. Then, the starting point coordinates of the next-screen track segment are determined according to the points included in the target line segment, the sliding window is slid along the direction of the target line segment, and the end point coordinates of the next-screen track segment are determined according to the boundary range of the sliding window. In one application example, the boundary range of the sliding window may be determined by acquiring the area size of the display area corresponding to a screen track segment. When the boundary range of the sliding window coincides with the area size of the display area, the height and width of the display area can be taken as the height and width of the sliding window. When determining the end point coordinates of the next-screen track segment, the end point coordinates can be determined on the target driving track according to the starting point coordinates of the next-screen track segment and the determined height of the sliding window.
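A simplified geometric sketch of one splitting step follows. It assumes the exit point of the previous display area (point i in the later figures) is already known and, for brevity, places the end point along the target-segment direction rather than projecting it back onto the track; it is an illustration of the idea rather than the application's exact procedure.

```python
import math
from typing import Optional, Tuple

Point = Tuple[float, float]

def next_screen_endpoints(prev_last_point: Point,
                          next_point: Optional[Point],
                          exit_point: Point,
                          overlap_m: float,
                          window_height: float) -> Optional[Tuple[Point, Point]]:
    """One splitting step: from the target line segment (prev_last_point -> next_point),
    the point where the track leaves the previous display area (exit_point), an overlap
    distance and the sliding-window height, return the start and end points of the
    next-screen track segment; return None when no further track point exists."""
    if next_point is None:
        return None  # no target line segment: splitting is complete
    dx, dy = next_point[0] - prev_last_point[0], next_point[1] - prev_last_point[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm                      # unit vector along the target line segment
    # Start point b: retract the exit point by the overlap distance (2 m in the later example).
    start = (exit_point[0] - overlap_m * ux, exit_point[1] - overlap_m * uy)
    # End point: slide the window along the target-segment direction by the window height.
    end = (start[0] + window_height * ux, start[1] + window_height * uy)
    return start, end
```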
In one possible implementation manner, the display parameters include a track range of the target track in the display screen and a distance between a track segment and a boundary of a display area of a track segment of a screen. In this embodiment of the present application, when determining that the display area of one screen track segment corresponds to the area coordinate under the track coordinate system according to the start point coordinate and the end point coordinate of the one screen track segment, the vertex coordinate of the display area of the next screen track segment in the direction of the joint position of the two screen track segments may be determined according to the start point coordinate of the next screen track segment and the distance between the track segment in the previous screen track segment and the boundary of the display area. Then, according to the height and width of the display area and the determined vertex coordinates, the remaining three vertex coordinates of the display area of the next screen track segment are determined, and the obtained four vertex coordinates are used as the area coordinates of the display area of the next screen track segment corresponding to the area coordinates under the track coordinate system.
The concepts and calculations involved in determining the end point coordinates of the next-screen track segment are described below with reference to figs. 3 to 8. Figs. 3 to 8 show several schematic diagrams, in one application example, of the process of calculating the area coordinates, under the track coordinate system, of the display area of a screen track segment provided in an embodiment of the present application. In this application example, the boundary range of the sliding window coincides with the area size of the display area. In figs. 3 to 8, the directed polyline formed by T0, T1, T2, T3, T4 and T5 shows the target driving track involved in the embodiments of the present application, that is, the track recorded while the acquisition vehicle was driving and collecting photos in a right-hand driving scenario.
The target driving track takes T0 as its starting point (i.e., the first track point) and T5 as its end point (i.e., the last track point), and T1, T2, T3 and T4 are the other four track points on the target driving track. The area framed by P0, P1, P2 and P3 shown in fig. 3 is the display area corresponding to the first-screen track segment determined according to the display parameters submitted by the lane line operator for the target driving track. The height and width of the display area may be determined according to the style file used by the editing interface, such as a CSS (Cascading Style Sheets) file, in combination with the resolution of the display screen currently in use.
In the right-hand driving scenario, the opposite lane is to the left of the current lane. Where it is desired that the driving track extends in a substantially vertical upward direction in the display area of the display screen (the direction from point T0 to point T1 in the display area shown in fig. 3), the lane line operator edits, along the direction of the target driving track, the lane lines on the road in the forward direction of the target driving track. When the target driving track determined by the lane line operator is displayed on the editing interface, the operator can be guided to adjust the track segment towards the left of the display area on the editing page. In this scenario, by adjusting the track segment to a position further to the left in the display area, the road in the opposite direction can be kept out of the display area, thereby avoiding erroneously editing the lane line data of the opposite lane when editing the lane line data corresponding to the target driving track.
The intersection point of the target line segment formed by points T1 and T2 of the target driving track with the boundary of the display area is denoted as intersection point i. The boundary distance refers to the pixel distance between the first track point and a boundary (for example, the left boundary) of the display area. In fig. 3, the boundary formed by the end points P0 and P3 is denoted as the left boundary, and the boundary distance is the distance from point T0 to the left boundary, that is, the left distance shown in the figure. In this scenario, the boundary distance involved may also be the distance between the first track point and the right boundary of the display area. In a scenario where lane line data is edited with the target driving track displayed on the display screen approximately parallel to the horizontal direction, the boundary distance involved may be the distance between the first track point and the upper or lower boundary of the display area.
In figs. 3 to 6, the previous screen corresponds to the display area of the first-screen track segment, and the next screen corresponds to the display area of the second-screen track segment. When the area range of the next screen is calculated, the intersection point i can be used directly as the starting point of the next-screen track segment, or the starting point can be determined by retracting an appropriate distance from point i, so that the area range of the next screen partially overlaps that of the previous screen; the connection relationship of the lane lines across the two screens is then shown in the next screen, and omission of the lane line data corresponding to the track at the joint of the two screens is avoided. As shown in fig. 4, on the target line segment, the point determined by retracting point i by 2 meters along the target line segment is denoted as the starting point b of the next-screen track segment, the retracting direction being the direction from point i towards point T1.
Then, on the target line segment, with the starting point of the next-screen track segment as the foot of a perpendicular, the vertex coordinates of the display area of the next-screen track segment in the direction of the joint position of the two screen track segments are determined in combination with the distance between the previous screen track segment and the boundary of the display area. As shown in fig. 5, with point b as the foot of the perpendicular, a point N0 at the boundary distance (left distance) from point b is determined to the left along the direction perpendicular to the target line segment, as a vertex of the next screen. The remaining three vertex coordinates of the display area of the next-screen track segment may be determined based on point N0 and the height and width of the display area. As shown in fig. 6, point N1 may first be determined along the direction from N0 towards b in combination with the width W of the display area; then points N2 and N3 may be determined in combination with the height H of the display area along the direction perpendicular to the line segment formed by points N0 and N1, and the coordinates of points N0, N1, N2 and N3 are taken as the area coordinates, under the track coordinate system, of the display area of the next-screen track segment, that is, the area coordinates of the second-screen track segment display area under the track coordinate system. As shown in fig. 7, the track segment in the second-screen track segment display area is displayed in the display area in a generally vertically upward extending direction.
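The vertex calculation of figs. 5 and 6 can be sketched as the following plane-geometry routine. It assumes a conventional x/y plane; depending on the orientation of the track coordinate system, the sign of the perpendicular ("left") vector may need to be flipped.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def display_area_vertices(b: Point,
                          segment_direction: Point,
                          left_distance: float,
                          width: float,
                          height: float) -> Tuple[Point, Point, Point, Point]:
    """From the starting point b of the next-screen track segment, the direction of the
    target line segment, the left boundary distance and the display-area width/height,
    compute the four vertices N0..N3 of the next screen's display area."""
    norm = math.hypot(segment_direction[0], segment_direction[1])
    fx, fy = segment_direction[0] / norm, segment_direction[1] / norm   # forward, along the target segment
    lx, ly = -fy, fx                                                    # perpendicular, to the left of the track
    n0 = (b[0] + left_distance * lx, b[1] + left_distance * ly)         # vertex at the joint position (fig. 5)
    n1 = (n0[0] - width * lx, n0[1] - width * ly)                       # width W along the direction from N0 towards b
    n2 = (n1[0] + height * fx, n1[1] + height * fy)                     # height H perpendicular to segment N0-N1
    n3 = (n0[0] + height * fx, n0[1] + height * fy)
    return n0, n1, n2, n3
```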
The remaining target driving track continues to be split in the same way through iterative calculation. In figs. 7 and 8, the previous screen corresponds to the display area of the second-screen track segment. As shown in fig. 8, the area coordinates of the third-screen track segment display area can be calculated according to the area coordinates of the second-screen track segment display area and the display parameters (for example, the left distance). In the iterative calculation, the splitting is determined to be complete when no intersection point or no target line segment exists.
It will be appreciated that in other examples of applications, the direction of travel of the track in the display area in the display screen may be vertically downward, horizontally leftward, horizontally rightward, or other possible directions. In the embodiment of the present application, the extending direction may be predetermined, and the content displayed in the display area may be determined according to the determined extending direction. The specific extending direction may be adjusted according to practical applications, which is not limited in the embodiments of the present application.
In one possible implementation manner, after the remaining part of the target driving track is split into multi-screen track segments according to the first-screen track segment, for any screen track segment, the overlapping area between the display area corresponding to that screen track segment and the overall area formed, under the track coordinate system, by the display areas of the multi-screen track segments of a historical driving track can be determined. When there are multiple historical driving tracks, the overall areas corresponding to the multiple historical driving tracks can be determined sequentially, or the overall areas of some or all historical driving tracks can be determined in batches. Then, when the area of the overlapping area exceeds a set proportion (for example, 75%) of the area of the display area corresponding to the screen track segment, and the included angle between the track direction of the historical driving track in the overlapping area and the track direction of the screen track segment is lower than a set angle threshold (for example, 45 degrees), the screen track segment is deleted.
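As a hedged sketch of this deletion condition, the overlap ratio and the included angle could be checked as follows; this assumes the shapely library for the polygon overlap and takes the two track directions as simple 2D vectors.

```python
import math
from typing import Sequence, Tuple
from shapely.geometry import Polygon
from shapely.ops import unary_union

Point = Tuple[float, float]

def should_skip_screen(screen_vertices: Sequence[Point],
                       history_areas: Sequence[Sequence[Point]],
                       screen_direction: Point,
                       history_direction: Point,
                       ratio_threshold: float = 0.75,
                       angle_threshold_deg: float = 45.0) -> bool:
    """Return True when a screen track segment can be deleted: its display area overlaps the
    overall area of an already-edited historical driving track by more than the set proportion
    AND the included angle between the two track directions is below the set angle threshold."""
    screen_poly = Polygon(screen_vertices)
    overall_area = unary_union([Polygon(v) for v in history_areas])   # overall area of the historical track
    overlap_ratio = screen_poly.intersection(overall_area).area / screen_poly.area
    dot = screen_direction[0] * history_direction[0] + screen_direction[1] * history_direction[1]
    norms = math.hypot(*screen_direction) * math.hypot(*history_direction)
    included_angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    return overlap_ratio > ratio_threshold and included_angle < angle_threshold_deg
```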
The historical driving track refers to a driving track whose lane line data was already edited before the lane line data of the target driving track is edited. The overall area involved can be determined by the following steps. First, the display areas corresponding to the generated first-screen track segment of the historical driving track and the multi-screen track segments of the remaining part other than the first screen are obtained. Then, the area boundary of the overall area formed by all display areas corresponding to the historical driving track is determined according to the area coordinates, under the track coordinate system, of each display area corresponding to the historical driving track.
The manner in which the overlapping area and the overall area are determined is described below with reference to figs. 9 to 11. As shown in fig. 9, when the target driving track is split into screen track segments, for a certain screen track segment of the target driving track, after the area coordinates of its display area A are determined, the overlap between display area A and the historical driving track is determined. In fig. 9, the generated first-screen track segment and the remaining multi-screen track segments corresponding to the historical driving track consist of a first-screen track segment, a second-screen track segment and a third-screen track segment. The area boundary of the overall area formed by all display areas corresponding to the historical driving track can be determined according to the area coordinates of these three screen track segment display areas under the track coordinate system; the determined area boundary of the overall area is the outer edge of the union of the three screen track segment display areas.
Then, after the area coordinates of display area A of the target driving track and the area boundary of the overall area of the historical driving track are determined, the coordinates of the overlapping area where display area A overlaps the overall area of the historical driving track can be obtained through differential calculation, and the area of the overlapping area can then be determined. When the area of the overlapping area exceeds the set proportion of the area of the display area corresponding to the screen track segment, and the included angle between the track direction of the historical driving track in the overlapping area and the track direction of the screen track segment is lower than the set angle threshold, the lane line data corresponding to that screen track segment can be considered to have been edited already. When the area of the overlapping area exceeds the set proportion of the area of the display area corresponding to the screen track segment, the lane lines corresponding to the target driving track and the historical driving track overlap geometrically; when the included angle between the two track directions is lower than the set angle threshold, the lane line directions corresponding to the target driving track and the historical driving track are consistent. Therefore, when the area exceeds the set proportion and the included angle is lower than the set angle threshold, it can be determined that the lane line data corresponding to that screen track segment of the target driving track was already edited in the lane line making task corresponding to the historical driving track. By deleting such a screen track segment, lane lines whose editing is complete are not shown to the lane line operator again, which improves the efficiency of editing lane line data.
In one application example, when the area of the overlapping area exceeds 75% of the area of the display area corresponding to the screen track segment, and the included angle between the track direction of the historical driving track in the overlapping area and the track direction of the screen track segment is lower than 45 degrees, the screen track segment is deleted. Figs. 10 and 11 respectively show the overlapping area of display area A of the target driving track and the historical driving track in two application scenarios. In figs. 10 and 11, the hatched portion is the overlapping area of display area A and the overall area of the historical driving track. As shown in fig. 10, the ratio of the area of the overlapping portion to the area of display area A is significantly greater than 75%, and the included angle between the direction of the track segment corresponding to display area A and the direction of the historical driving track in the overlapping portion is lower than 45 degrees. In this case, the screen track segment corresponding to display area A may be deleted, so that when the editing interfaces of the split screen track segments are displayed in sequence on the display screen, the editing interface corresponding to this track segment is skipped. As shown in fig. 11, the ratio of the area of the overlapping portion to the area of display area A is also significantly greater than 75%; however, the included angle between the direction of the track segment corresponding to display area A and the direction of the historical driving track in the overlapping portion is greater than 45 degrees, which indicates that the directions of the lane lines corresponding to the target driving track and the historical driving track in the overlapping portion are inconsistent. It therefore cannot be determined that editing of the lane line data corresponding to this track segment is complete, and the screen track segment corresponding to display area A is not deleted.
In one possible implementation manner, after the remaining part of the target track is split into a multi-screen track segment according to the first-screen track segment and the display parameter, if an adjustment operation on the display parameter is detected, the current remaining part of the target track is split again according to the adjusted display parameter and the current first-screen track segment, and each screen track segment is updated according to the re-splitting result. The specific splitting process may be referred to the foregoing embodiments, and will not be described herein.
In one possible implementation manner, the lane line data corresponding to the target driving track can be generated according to the lane line data respectively edited for the first-screen track segment and the split screen track segments. That is, after the lane line operator completes, in a segmented manner along the target driving track, the editing of the lane line data corresponding to the target driving track, the lane line data corresponding to the target driving track can be uniformly updated and generated, thereby completing the lane line making task for the target driving track.
Fig. 12 shows a schematic view of a scenario of an editing scheme of lane line data provided by an embodiment of the present application. In this scenario, when the lane line operator edits the lane line data, the application for editing the lane line may be used by a terminal device such as a mobile phone, a computer, a tablet computer, virtual reality, or augmented reality, and the editing operation may be performed based on a display screen of the terminal device. After the display parameters of the operator for the first screen track segment are obtained, the rest of the target track can be split according to the first screen track segment and the determined display parameters, and the splitting result shown in fig. 12 is obtained.
In one possible application example, the lane line making task of the lane line operator includes editing the lane line data corresponding to driving track A and driving track B, where the lane line operator completed editing the lane line data corresponding to driving track A before starting to edit the lane line data corresponding to driving track B, and the track segment splitting result A0 corresponding to driving track A is stored in the job stack.
When the lane lines corresponding to driving track B are edited, driving track B is the target driving track. After the display parameters of the first-screen track segment of driving track B are determined, the remaining part of the target driving track is split according to the first-screen track segment and the determined display parameters, and the resulting splitting result is B0: screen 0 - screen 1 - screen 2 - screen 3 - screen 4 - screen 5 - screen 6. The track segments meeting a preset condition are then determined by differentially comparing B0 with the splitting result of driving track A stored in the job stack, so that repeated track segments can be deleted. The preset condition may be that the area of the overlapping area between a screen track segment and the tracks stored in the job stack exceeds 75% of the area of the display area corresponding to that screen track segment, and that the included angle between the track direction in the overlapping area and the track direction of the screen track segment is lower than 45 degrees. For example, if screen 2 and screen 3 are found to meet the preset condition, screen 2 and screen 3 in B0 can be removed. The splitting result after removal (screen 0 - screen 1 - screen 4 - screen 5 - screen 6) is denoted as B1 and stored in the job stack.
When the lane line operator continues the subsequent lane line data editing based on driving track B and adjusts the display parameters while working on screen 4, a splitting result B2 from screen 4 to the end point of the track can be calculated based on the adjusted display parameters: screen 0 - screen 1 - screen 4 - screen 5 - screen 6. B2 is then differentially compared with the splitting result of driving track A stored in the job stack; for example, if screen 5 is found to meet the above preset condition, screen 5 in B2 can be deleted to obtain a new splitting result B3: screen 0 - screen 1 - screen 4 - screen 6, and B2 stored in the job stack is updated to B3.
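A sketch of this differential comparison, reusing the overlap/angle check sketched earlier as the preset condition (the structures are assumptions for illustration):

```python
from typing import Callable, List

ScreenSegment = dict  # assumed structure: display-area vertices, track direction, etc.

def remove_edited_screens(new_split: List[ScreenSegment],
                          stored_splits: List[List[ScreenSegment]],
                          meets_preset_condition: Callable[[ScreenSegment, ScreenSegment], bool]
                          ) -> List[ScreenSegment]:
    """Drop every screen in a newly computed splitting result (e.g. B0 or B2) that meets the
    preset condition against a screen already stored in the job stack (e.g. A0), giving B1 or B3."""
    kept = []
    for screen in new_split:
        already_edited = any(meets_preset_condition(screen, stored_screen)
                             for stored in stored_splits
                             for stored_screen in stored)
        if not already_edited:
            kept.append(screen)
    return kept
```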
In this way, the splitting results stored in the job stack are displayed with display parameters that conform to the viewing angle determined while the lane line operator was working. When the lane line operator later reviews the lane line data corresponding to a driving track, the splitting result stored in the job stack can be retrieved and the track segments conforming to the operator's viewing angle can be displayed, avoiding the time otherwise consumed by repeatedly adjusting the working viewing angle when editing the lane line data corresponding to the same track segments.
In step S104, in response to the user completing the editing of the lane line data in the editing interface of the previous screen track segment, the editing interface of the lane line data corresponding to the next screen track segment is displayed on the display screen, so as to obtain the lane line data edited for each screen track segment.
In one application example, in response to an interactive operation performed by the user on the display screen, for example a double-click on the screen or a click on an editing control for displaying the next screen's display area, it may be determined that the user has completed editing the lane line data corresponding to one screen track segment, and the editing interface of the next screen track segment obtained by splitting is then displayed. For example, after receiving the operator's confirmation that editing of a certain screen track segment is complete, the lane line data corresponding to that screen track segment may be acquired and updated into the corresponding database. Then, the related data of the next screen track segment is read from the job stack to show the editing interface of the next screen track segment. In addition, in response to a request to display a certain screen track segment, the related data of the requested track segment can be read from the job stack and the track segment displayed in the editing interface.
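As a usage-level sketch of step S104 (the persistence call and the JobStack from the earlier sketch are assumptions, not the application's API):

```python
from typing import Optional

def on_edit_complete(job_stack, edited_lane_line_data: dict, database) -> Optional[dict]:
    """When the operator confirms that editing of the current screen is complete, persist the
    edited lane line data and return the next screen track segment to display (None when all
    screens have been handled)."""
    database.update_lane_line_data(edited_lane_line_data)   # hypothetical persistence call
    return job_stack.next_screen()                          # JobStack.next_screen from the earlier sketch
```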
The embodiment of the present application further provides a method for manufacturing a high-precision map, as shown in fig. 13, which is a flowchart of a method 1300 for manufacturing a high-precision map according to an embodiment of the present application, where the method 1300 may include:
In step S1301, a target track to be edited in the high-precision map is determined.
In step S1302, lane line data edited for the target track is acquired, the lane line data being generated based on the method 100 provided in the embodiments of the present application.
In step S1303, a high-precision map is created from the acquired lane line data.
The lane line data is an important part of the high-precision map data, and the production of the high-precision map includes editing the lane line data. The specific implementation manner of editing the lane line data may be referred to the above embodiments, and will not be described herein.
Corresponding to the application scenarios and methods provided by the embodiments of the present application, an embodiment of the present application further provides an apparatus for editing lane line data. Fig. 14 is a block diagram showing the configuration of an apparatus 1400 for editing lane line data provided in an embodiment of the present application, and the apparatus 1400 includes:
a track display module 1401 for displaying a target track on a display screen;
the track segment determining module 1402 is configured to determine a first screen track segment according to a display parameter submitted for a target driving track, and display an editing interface of lane line data corresponding to the first screen track segment;
the track segment splitting module 1403 is configured to split the remaining part of the target driving track into multi-screen track segments according to the first-screen track segment;
the data editing interface display module 1404 is configured to, in response to the user completing editing of the lane line data in the editing interface of the previous screen track segment, display the editing interface of the lane line data corresponding to the next screen track segment on the display screen, so as to obtain the lane line data respectively edited for each screen track segment.
In one possible implementation, the track segment splitting module may include:
the track splitting submodule, configured to sequentially split the remaining part of the target driving track by using a sliding window according to the direction of the target driving track, where the sliding direction of the sliding window is determined according to the line segment direction of the end part of the previous screen track segment;
and the region coordinate determination submodule, configured to determine, according to the start point coordinates and end point coordinates of a screen track segment, the region coordinates of the display region of that screen track segment in the track coordinate system.
In one possible implementation, the track splitting submodule may include:
the line segment determining unit, configured to determine, for the next screen track segment to be determined, a target line segment formed by the last track point of the previous screen track segment and the track point following it;
the start point coordinate determining unit, configured to determine the start point coordinates of the next screen track segment according to the points included in the target line segment;
and the end point coordinate determining unit, configured to slide the sliding window along the direction of the target line segment and determine the end point coordinates of the next screen track segment according to the boundary range of the sliding window. A sketch of this sliding-window splitting is given below.
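In the sketch, the target line segment is formed by the last point of the previous screen and the following track point; the start point of the next screen is taken from that segment, and the end point is found by sliding a window along the segment direction. The function name split_next_screen, the representation of the window boundary as a single length along the sliding direction, and the choice of the start point are assumptions made purely for illustration.

import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in the track coordinate system

def split_next_screen(track: List[Point], prev_end_index: int,
                      window_length: float) -> Tuple[int, int]:
    """Return (start_index, end_index) of the next screen track segment."""
    # Target line segment: last track point of the previous screen segment
    # and the track point that follows it.
    p_last = track[prev_end_index]
    p_next = track[prev_end_index + 1]
    # Start point of the next screen segment, here simply the last point of
    # the previous screen so adjacent screens join without a gap.
    start_index = prev_end_index
    # Sliding direction of the window follows the target line segment.
    dx, dy = p_next[0] - p_last[0], p_next[1] - p_last[1]
    norm = math.hypot(dx, dy) or 1.0
    ux, uy = dx / norm, dy / norm
    # Slide the window along that direction and keep the track points whose
    # projection still falls within the window boundary.
    end_index = start_index
    for i in range(start_index + 1, len(track)):
        proj = (track[i][0] - p_last[0]) * ux + (track[i][1] - p_last[1]) * uy
        if proj > window_length:
            break
        end_index = i
    return start_index, end_index

# Example: a straight track sampled every 10 m, with a 50 m window per screen.
track = [(float(i * 10), 0.0) for i in range(20)]
print(split_next_screen(track, prev_end_index=5, window_length=50.0))  # (5, 10)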
In one possible implementation, the display parameters include the track range of the target driving track in the display screen and the distance between a track segment and the boundary of the display area of a screen track segment;
the region coordinate determination submodule may be specifically configured to: determine, according to the start point coordinates of the next screen track segment and the distance between the track segment in the previous screen track segment and the boundary of its display area, the vertex coordinates of the display area of the next screen track segment in the direction of the joint position of the two screen track segments; and determine the remaining three vertex coordinates of the display area of the next screen track segment according to the height and width of the display area and the determined vertex coordinates, taking the four vertex coordinates thus obtained as the region coordinates of the display area of the next screen track segment in the track coordinate system.
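Assuming an axis-aligned display region in the track coordinate system, the vertex computation just described can be sketched as follows; the anchoring of the first vertex at the joint side and the vertical centring on the start point are illustrative assumptions rather than details taken from the original description.

from typing import List, Tuple

Point = Tuple[float, float]

def next_screen_region(start_point: Point, boundary_distance: float,
                       width: float, height: float) -> List[Point]:
    """Return the four vertex coordinates, in the track coordinate system, of
    the display region of the next screen track segment."""
    # First vertex on the joint side: keep the same margin between the track
    # and the region boundary as in the previous screen.
    x0 = start_point[0] - boundary_distance
    y0 = start_point[1] - height / 2.0
    # The remaining three vertices follow from the region's width and height.
    return [
        (x0, y0),                    # joint-side bottom vertex
        (x0 + width, y0),            # bottom vertex away from the joint
        (x0 + width, y0 + height),   # top vertex away from the joint
        (x0, y0 + height),           # joint-side top vertex
    ]

# Example: next screen starts at (120, 40), with a 15 m margin to the region
# boundary and a 200 m x 100 m display region.
print(next_screen_region((120.0, 40.0), 15.0, 200.0, 100.0))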
In one possible implementation, the display area of each screen track segment in the display screen corresponds to an area coordinate under the track coordinate system; the apparatus 1400 may further include:
the overlapping region determining submodule, configured to, after the remaining part of the target driving track has been split into multi-screen track segments according to the first-screen track segment, determine, for any screen track segment, the overlapping region in the track coordinate system between the display region corresponding to that screen track segment and the overall region formed by the display regions of the multi-screen track segments of the historical driving track;
and the track segment deleting submodule, configured to delete the screen track segment when the area of the overlapping region exceeds a set proportion of the area of the display region corresponding to that screen track segment and the included angle between the overlapping region and the track direction of the corresponding track segment is below a set angle threshold.
In one possible implementation, the overlapping region determining submodule may further include an overall region determining unit, configured to obtain the display regions respectively corresponding to the generated first-screen track segment of the historical driving track and the multi-screen track segments of the remaining part other than the first screen, and to determine, according to the region coordinates of each display region corresponding to the historical driving track in the track coordinate system, the region boundary of the overall region formed by all the display regions corresponding to the historical driving track. A sketch of this overlap-based deletion check is given below.
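This sketch simplifies every display region to an axis-aligned rectangle in the track coordinate system and takes the longer side of the overlap as its direction; the thresholds and these geometric simplifications are assumptions made for illustration only.

from typing import Tuple

Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def rect_overlap(a: Rect, b: Rect) -> Rect:
    """Axis-aligned intersection of two rectangles (degenerate if disjoint)."""
    x_min, y_min = max(a[0], b[0]), max(a[1], b[1])
    x_max, y_max = min(a[2], b[2]), min(a[3], b[3])
    return (x_min, y_min, max(x_min, x_max), max(y_min, y_max))

def area(r: Rect) -> float:
    return (r[2] - r[0]) * (r[3] - r[1])

def should_delete_segment(segment_region: Rect, overall_region: Rect,
                          track_direction_deg: float,
                          ratio_threshold: float = 0.6,
                          angle_threshold_deg: float = 30.0) -> bool:
    """Delete a screen track segment when the overlap with the overall region
    of the historical track exceeds a set proportion of the segment's display
    region and the angle between the overlap and the track direction is below
    a set angle threshold."""
    overlap = rect_overlap(segment_region, overall_region)
    if area(segment_region) == 0.0 or area(overlap) == 0.0:
        return False
    ratio = area(overlap) / area(segment_region)
    # Direction of the overlap region: 0 degrees if wider than tall, else 90.
    overlap_dir = 0.0 if (overlap[2] - overlap[0]) >= (overlap[3] - overlap[1]) else 90.0
    # Fold the angle difference into [0, 90] degrees before comparing.
    angle = abs((track_direction_deg - overlap_dir + 90.0) % 180.0 - 90.0)
    return ratio > ratio_threshold and angle < angle_threshold_deg

# Example: the screen region lies almost entirely inside the historical overall
# region and runs nearly parallel to it, so it would be deleted.
print(should_delete_segment((0, 0, 100, 40), (-10, -5, 120, 50),
                            track_direction_deg=5.0))  # True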
In one possible implementation, the apparatus 1400 may further include a track selection submodule, configured to, before the target driving track is displayed on the display screen, display road collection photographs collected for at least one driving track corresponding to the lane line making task, as a reference for selecting a driving track, and to acquire the target driving track selected from the at least one driving track.
In a possible implementation manner, the apparatus 1400 may further include a track segment update sub-module, configured to, after splitting the remaining portion of the target track into multi-screen track segments according to the first-screen track segment, if an adjustment operation on the display parameter is detected, re-split the current remaining portion of the target track according to the adjusted display parameter and the current first-screen track segment, and update each screen track segment according to a re-splitting result.
In one possible implementation, the apparatus 1400 may further include a data generating submodule, configured to generate the lane line data corresponding to the target driving track according to the lane line data respectively edited for the first-screen track segment and the split screen track segments.
Corresponding to the application scenarios and methods provided by the embodiments of the present application, an embodiment of the present application further provides an apparatus for manufacturing a high-precision map. Fig. 15 is a block diagram of a high-precision map making apparatus 1500 according to an embodiment of the present application; the apparatus 1500 includes:
the track determining module 1501 is configured to determine a target driving track to be edited in the high-precision map;
a data acquisition module 1502, configured to acquire lane line data edited for the target track, where the lane line data is generated based on a method provided in an embodiment of the present application;
the map making module 1503 is configured to make a high-precision map according to the obtained lane line data.
For the functions of the modules in the apparatuses of the embodiments of the present application, reference may be made to the corresponding descriptions in the above methods; they have corresponding beneficial effects and will not be repeated here.
Fig. 16 is a block diagram of an electronic device used to implement an embodiment of the present application. As shown in fig. 16, the electronic device includes: a memory 1601 and a processor 1602, the memory 1601 stores a computer program executable on the processor 1602. The processor 1602, when executing the computer program, implements the methods of the embodiments described above. The number of memories 1601 and processors 1602 may be one or more.
The electronic device further includes:
the communication interface 1603 is used for communicating with external devices for data interactive transmission.
If the memory 1601, the processor 1602, and the communication interface 1603 are implemented independently, the memory 1601, the processor 1602, and the communication interface 1603 may be interconnected and communicate with each other via a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 16, but this does not mean there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 1601, the processor 1602 and the communication interface 1603 are integrated on a single chip, the memory 1601, the processor 1602 and the communication interface 1603 may communicate with each other through internal interfaces.
The present embodiments provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the methods provided in the embodiments of the present application.
The embodiment of the application also provides a chip, which includes a processor and is configured to call and run instructions stored in a memory, so that a communication device provided with the chip executes the method provided by the embodiments of the present application.
The embodiment of the application also provides a chip, which comprises: the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the application embodiment.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be a processor supporting an advanced reduced instruction set machine (Advanced RISC Machines, ARM) architecture.
Further alternatively, the memory may include a read-only memory and a random access memory. The memory may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory, among others. Volatile memory can include Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, for example, Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate Synchronous DRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method described in the flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Moreover, the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved.
The logic and/or steps described in the flowcharts or otherwise described herein, which may for example be considered an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processor, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be completed by a program instructing the associated hardware; the program, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely exemplary embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various changes or substitutions within the technical scope of the present application, which should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. An editing method of lane line data, comprising:
displaying the target driving track on a display screen;
determining a first screen track segment according to display parameters submitted for a target driving track, and displaying an editing interface of lane line data corresponding to the first screen track segment;
splitting the remaining portion of the target driving track into multi-screen track segments according to the first-screen track segment;
and responding to the completion of editing the lane line data in the editing interface of the previous screen track segment by the user, and displaying the editing interface of the lane line data corresponding to the next screen track segment on the display screen so as to obtain the lane line data respectively edited for each screen track segment.
2. The method of claim 1, wherein the splitting the remaining portion of the target track into multi-screen track segments according to the first-screen track segment comprises:
sequentially splitting, according to the direction of the target driving track, the remaining portion of the target driving track by using a sliding window, wherein the sliding direction of the sliding window is determined according to the line segment direction of the end part of the previous screen track segment;
and determining the region coordinates of the display region of the one-screen track segment corresponding to the track coordinate system according to the start point coordinates and the end point coordinates of the one-screen track segment.
3. The method of claim 2, wherein the sequentially splitting the remaining portion of the target track with the sliding window according to the target track direction comprises:
for the next screen track segment to be determined, determining a target line segment formed by the last track point of the previous screen track segment and the track point following it;
determining starting point coordinates of a track segment of the next screen according to a plurality of points included in the target line segment;
and sliding the sliding window according to the direction of the target line segment, and determining the endpoint coordinates of the track segment of the next screen according to the boundary range of the sliding window.
4. A method according to claim 3, wherein the display parameters include a track range of the target track in the display screen and a distance between a track segment and a boundary of a display area of a screen track segment;
The determining the region coordinates of the display region of the one-screen track segment corresponding to the track coordinate system according to the start point coordinates and the end point coordinates of the one-screen track segment comprises:
determining vertex coordinates of a display area of the next screen track segment in the direction of the joint position of the two screen track segments according to starting point coordinates of the next screen track segment and the distance between the track segment in the previous screen track segment and the boundary of the display area;
and determining the remaining three vertex coordinates of the display area of the next screen track segment according to the height and the width of the display area and the determined vertex coordinates, and taking the obtained four vertex coordinates as the area coordinates of the display area of the next screen track segment corresponding to the track coordinate system.
5. The method of claim 1, wherein the display area of each screen track segment in the display screen corresponds to an area coordinate under a track coordinate system;
after the splitting the rest part of the target track into multi-screen track segments according to the first-screen track segments, the method further comprises:
for any screen track segment, respectively determining an overlapping area of a display area corresponding to the screen track segment and an integral area formed by a display area of a multi-screen track segment of the historical driving track under a track coordinate system;
And deleting the one-screen track segment if the area of the overlapped area exceeds the set proportion of the area of the display area corresponding to the one-screen track segment and the included angle between the overlapped area and the track direction of the track segment corresponding to the one-screen track segment is lower than a set angle threshold.
6. The method of claim 5, wherein the overall area is determined by:
the generated first screen track segment corresponding to the historical driving track and the display area corresponding to the multi-screen track segments of the rest part except the first screen are obtained;
and determining the regional boundary of the whole region formed by all the display regions corresponding to the historical driving tracks according to the region coordinates of each display region corresponding to the historical driving tracks under a track coordinate system.
7. The method of claim 1, wherein prior to the displaying the target track on the display screen, the method further comprises:
displaying a road acquisition photo which is acquired corresponding to at least one driving track corresponding to the lane line manufacturing task and is used as a reference for selecting the driving track;
and acquiring a target driving track selected from the at least one driving track.
8. The method of claim 1, wherein after splitting the remaining portion of the target track into multi-screen track segments according to the first-screen track segments, the method further comprises:
and if an adjustment operation on the display parameters is detected, re-splitting the current remaining portion of the target track according to the adjusted display parameters and the current first-screen track segment, and updating each screen track segment according to the re-splitting result.
9. The method of claim 1, wherein the method further comprises:
and generating lane line data corresponding to the target driving track according to the lane line data respectively edited for the first screen track segment and the split screen track segments.
10. A method for manufacturing a high-precision map comprises the following steps:
determining a target driving track to be edited in the high-precision map;
acquiring lane line data edited for the target driving track, the lane line data being generated based on the method of any one of claims 1-9;
and manufacturing a high-precision map according to the acquired lane line data.
11. An editing apparatus for lane line data, comprising:
the track display module is used for displaying the target driving track on the display screen;
The track segment determining module is used for determining a first screen track segment according to display parameters submitted for the target driving track and displaying an editing interface of lane line data corresponding to the first screen track segment;
the track segment splitting module is used for splitting the rest part of the target travelling track into multi-screen track segments according to the first-screen track segment;
the data editing interface display module is used for data responding to the completion of editing lane line data in the editing interface of the previous screen track segment by a user, and displaying the editing interface of the lane line data corresponding to the next screen track segment on the display screen so as to obtain the lane line data respectively edited for each screen track segment.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing the method of any one of claims 1-10 when the computer program is executed.
13. A computer program product comprising computer instructions which, when executed by a processor, implement the method of any one of claims 1-10.
14. A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-10.
CN202310318565.3A 2023-03-28 2023-03-28 Editing method and device for lane line data, electronic equipment and storage medium Pending CN116342745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310318565.3A CN116342745A (en) 2023-03-28 2023-03-28 Editing method and device for lane line data, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310318565.3A CN116342745A (en) 2023-03-28 2023-03-28 Editing method and device for lane line data, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116342745A true CN116342745A (en) 2023-06-27

Family

ID=86885418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310318565.3A Pending CN116342745A (en) 2023-03-28 2023-03-28 Editing method and device for lane line data, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116342745A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117724647A (en) * 2024-02-07 2024-03-19 杭州海康威视数字技术股份有限公司 Information configuration display method and device, electronic equipment and machine-readable storage medium
CN117724647B (en) * 2024-02-07 2024-06-04 杭州海康威视数字技术股份有限公司 Information configuration display method and device, electronic equipment and machine-readable storage medium

Similar Documents

Publication Publication Date Title
CN108460815B (en) Method and device for editing map road elements
KR102675523B1 (en) Method and apparatus of determining lane
KR100967448B1 (en) Image processing system for the geospatial data by uniting the numerical value map data and picture image
JP4964801B2 (en) Method and apparatus for generating 3D model from 2D live-action video
US9816826B2 (en) Technique for correcting digitized map data
KR100946250B1 (en) Image processing system linking the information on the picture image for edit processing
KR100995107B1 (en) Image editing system improving the understanding about the photoreconnaissance
KR20130040773A (en) Three-dimensional map drawing system
CN113495940B (en) Road area correction device, road area correction method, and computer-readable recording medium
JP2022541977A (en) Image labeling method, device, electronic device and storage medium
CN116342745A (en) Editing method and device for lane line data, electronic equipment and storage medium
JP5725908B2 (en) Map data generation system
CN114461740A (en) Map updating method, map updating device, computer device, and storage medium
CN114818065A (en) Three-dimensional roadway model building method and device, electronic equipment and storage medium
JP6110780B2 (en) Additional information display system
JP7109822B2 (en) Road network data generation method, apparatus and computer program for autonomous vehicles
CN115129291B (en) Three-dimensional oblique photography measurement model visualization optimization method, device and equipment
KR102121088B1 (en) Stereoscopic image output system
CN113641284B (en) Regional plotting method and device of electronic map, electronic equipment and storage medium
CN113932825A (en) Robot navigation path width acquisition system, method, robot and storage medium
CN116134488A (en) Point cloud labeling method, point cloud labeling device, computer equipment and storage medium
CN114924680B (en) Port unmanned map generation method based on WEB front-end visual interface
CN117708261B (en) Map data processing method, apparatus, device, storage medium, and program product
CN117724647B (en) Information configuration display method and device, electronic equipment and machine-readable storage medium
CN117315034B (en) Method and device for determining transverse slope parking space coordinates, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination