WO2023060386A1 - Map data processing and map data construction methods, devices, vehicle, and computer-readable storage medium - Google Patents


Info

Publication number
WO2023060386A1
WO2023060386A1 (application PCT/CN2021/123041; CN2021123041W)
Authority
WO
WIPO (PCT)
Prior art keywords
map data
data
information
observation data
vehicle
Prior art date
Application number
PCT/CN2021/123041
Other languages
English (en)
French (fr)
Inventor
江灿森
黄晓鹏
衡量
沈劭劼
施亮
Original Assignee
深圳市大疆创新科技有限公司
上汽大众汽车有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司, 上汽大众汽车有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/123041 priority Critical patent/WO2023060386A1/zh
Priority to CN202180101632.5A priority patent/CN118019958A/zh
Publication of WO2023060386A1 publication Critical patent/WO2023060386A1/zh

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Definitions

  • The embodiments of the present application relate to the technical field of automatic driving, and in particular to map data processing and map data construction methods, devices, a vehicle, and a computer-readable storage medium.
  • self-driving vehicles can collect information about the surrounding environment through sensors, and navigate the vehicle through map data of the current environment in which the vehicle is located, thereby realizing automatic driving.
  • The embodiments of the present application provide map data processing and map data construction methods, devices, a vehicle, and a computer-readable storage medium to solve the problems in the related art of cumbersome user operations caused by inefficient management of map data and of vehicle driving safety being affected as a result.
  • In a first aspect, a method for processing map data is provided, wherein:
  • the map data of the traffic scene includes second description information generated based on historical observation data of the global scene of the traffic scene; and
  • the map data is updated according to the matching result and the currently collected observation data of the traffic scene.
  • In a second aspect, a method for constructing map data is provided, wherein:
  • the description information is used to determine the position information of the vehicle in the traffic scene and whether the map data needs to be updated when the vehicle enters the traffic scene again.
  • In a third aspect, a map data processing device is provided, which includes a processor, a memory, and a computer program stored in the memory and executable by the processor; when executing the computer program, the processor implements the steps of the map data processing method described in the first aspect.
  • In a fourth aspect, a map data construction device is provided, which includes a processor, a memory, and a computer program stored in the memory and executable by the processor; when executing the computer program, the processor implements the steps of the map data construction method described in the second aspect.
  • In a fifth aspect, a vehicle is provided, which includes the map data processing device described in the third aspect and/or the map data construction device described in the fourth aspect.
  • A computer-readable storage medium is provided, on which several computer instructions are stored;
  • when the computer instructions are executed, the steps of the map data processing method described in the first aspect are implemented.
  • A computer-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the steps of the map data construction method described in the second aspect are implemented.
  • The vehicle can obtain the observation data of the local scene of the traffic scene currently collected by the sensor and determine the first description information of the local scene based on the observation data; the map data of the traffic scene includes the second description information generated from the historical observation data of the global scene of the traffic scene; therefore, based on whether the first description information matches the second description information, it can be determined whether to update the map data. The solution of this application can thus efficiently manage the map data of the traffic scene where the vehicle is located and can automatically and timely update the existing map data using the observation data currently collected by the sensor, which reduces the user's manual operations to update the map data and ensures the safe driving of the vehicle based on the updated map data.
  • Fig. 1 is a schematic diagram of a method for processing map data according to an embodiment of the present application.
  • Fig. 2A is a schematic diagram of a map data processing method according to another embodiment of the present application.
  • Fig. 2B is a schematic diagram of a method for constructing map data according to an embodiment of the present application.
  • FIG. 2C is a schematic diagram of a map data processing method according to another embodiment of the present application.
  • Fig. 3 is a schematic diagram of a method for constructing map data according to another embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a map data processing device for implementing the map data processing method of this embodiment in the present application.
  • FIG. 5 is a schematic structural diagram of a map data construction device for implementing the map data construction method of this embodiment in the present application.
  • Figure 6 is a block diagram of a vehicle according to one embodiment of the present application.
  • self-driving vehicles can perceive the information of the surrounding environment in real time through sensors to make autonomous driving decisions.
  • self-driving vehicles can collect information about the surrounding environment through sensors, and combine the map data of the vehicle's current environment to achieve automatic driving.
  • the map data in the embodiment of this application refers to high-precision map data suitable for self-driving vehicles. It is different from ordinary electronic maps for user-oriented navigation in daily life.
  • the present application provides an embodiment of a map data processing method, which can efficiently manage the map data of the traffic scene where the vehicle is located, and can automatically update the existing map data in a timely manner using the observation data currently collected by the sensor. Therefore, the operation of manually updating the map data by the user is reduced, and the safe driving of the vehicle can be guaranteed based on the updated map data.
  • FIG. 1 is a flow chart of a map data processing method in the embodiment of the present application.
  • the method in this embodiment can be executed by the map data processing device of the vehicle.
  • The device is implemented by software and/or hardware, and can be configured in electronic devices with certain data computing capabilities.
  • the electronic device may specifically be configured in the vehicle, or may be independent of the vehicle.
  • the embodiment of the map data processing method may include the following steps:
  • In step 102, the observation data of the local scene of the traffic scene currently collected by the sensor of the vehicle is acquired.
  • In step 104, first description information of the local scene is determined based on the observation data.
  • In step 106, the map data of the traffic scene is acquired, wherein the map data includes second description information generated based on historical observation data of the global scene of the traffic scene.
  • In step 108, the first description information is matched with the second description information to obtain a matching result.
  • In step 110, the map data is updated according to the matching result and the currently collected observation data of the traffic scene.
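  • The five steps above can be sketched in code. The following Python sketch is illustrative only: the helper names (extract_description, match_descriptions, apply_update) and the Jaccard-overlap matching with a 0.5 threshold are assumptions for illustration, not details from this application.

```python
# Hypothetical sketch of the five-step flow (steps 102-110).

def process_map_data(sensor_obs, map_data, match_threshold=0.5):
    """Decide whether the stored map needs updating from live observations."""
    # Step 104: first description information from current local observations.
    first_desc = extract_description(sensor_obs)
    # Step 106: map data carries second description information.
    second_desc = map_data["description"]
    # Step 108: match the two descriptions.
    score = match_descriptions(first_desc, second_desc)
    # Step 110: update the map data when the match is poor.
    if score < match_threshold:
        map_data = apply_update(map_data, sensor_obs)
    return map_data, score

# Minimal stand-ins so the sketch runs end to end (all assumptions).
def extract_description(obs):
    return set(obs)                            # e.g. a set of feature tokens

def match_descriptions(a, b):
    return len(a & b) / max(len(a | b), 1)     # Jaccard overlap as a proxy

def apply_update(map_data, obs):
    updated = dict(map_data)
    updated["description"] = updated["description"] | set(obs)
    return updated
```

A real system would extract sensor features rather than string tokens; the point is only the control flow of matching then conditionally updating.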
  • the solution of this embodiment is applicable to the scene where the vehicle needs to use the map data to drive in the traffic scene.
  • the map data can be updated when the vehicle is in the traffic scene.
  • This embodiment is applicable to a variety of traffic scenarios; that is, the traffic scene can have various embodiments. For example, it can be a parking lot, such as an underground parking lot or an outdoor parking lot; it can also be another traffic scenario in which map data is required to control driving, such as an industrial park, a freight station, or a dock.
  • The vehicle can obtain the map data of the traffic scene, and the map data can be used by the automatic driving software, so that the automatic driving software can make driving decisions based on the comparison between the real-time observation data of the vehicle and the map data.
  • The scene in this embodiment refers to any target in the traffic scene that can be sensed by the sensor of the vehicle, i.e., a target for which the vehicle's sensor needs to collect observation data when the vehicle is controlled.
  • These scenes are located in the traffic scene and have detectable position information there. When the vehicle's sensor perceives them, they can indicate to the vehicle their position information in the traffic scene, so that the vehicle can determine the position information of the scene from the observation data, thereby determining its own position in the traffic scene, and then executing automatic driving decisions.
  • The scene may include lanes, parking spaces, vehicles, walls, columns, signs, lights, pedestrians, or parking lot gates, etc.; it may also be monitoring equipment, wall markers, and the like.
  • the map data of the traffic scene in this embodiment can be constructed in various ways.
  • For example, the user may drive a vehicle in the traffic scene, and the vehicle's sensors collect observation data of the various scenes in the traffic scene while the vehicle is driving; in other scenarios, the vehicle may automatically drive at a low speed, and while it is driving, its sensors collect observation data of the various scenes in the traffic scene.
  • Map data can be constructed from the collected observation data.
  • For example, image sensors collect image data, lidar sensors collect point cloud data, millimeter-wave radar sensors collect millimeter-wave data, and ultrasonic sensors collect ultrasonic data.
  • The traffic scene has a certain range, and the vehicle can drive within the traffic scene so that the vehicle's sensor can cover the entire traffic scene and collect observation data of the entire traffic scene.
  • In this embodiment, all the targets in the entire traffic scene for which observation data need to be collected are called the global scene of the traffic scene; based on the observation data of the global scene, map data covering the entire traffic scene can be constructed.
  • the map data of the traffic scene can also record the location information of all the scenes in the traffic scene.
  • The location information in this embodiment can include the geographic location information of the scene. For example, the geographic location information of the vehicle can be determined based on the information collected by the vehicle's positioning sensor; when the vehicle is at that geographic location, other sensors can perceive the scene, thereby determining the geographic location information of the scene.
  • the position information of the scene in the traffic scene may also include the relative position information of the scene in the traffic scene.
  • For example, a position with known geographic location information in the traffic scene may be determined as a base point, and the position information of the scene in the traffic scene may also include the relative location information of the scene with respect to the base point.
  • the map data of the traffic scene may include feature information of each scene in the traffic scene determined based on the observation data.
  • the feature information of the scene may include visual feature information of the scene and the like.
  • different processing methods can be used to process each type of observation data, so as to determine the characteristic information of the global scene of the traffic scene from the observation data.
  • the map data in the traffic scene constructed by the vehicle can be stored locally, and can be used by the vehicle when it enters the traffic scene again later.
  • The vehicle can also send the map data to other vehicles capable of communication, for example, by means of short-range wireless communication, or by means of P2P (peer-to-peer) communication and other means.
  • the vehicle can also send the map data of the traffic scene to the server.
  • the server can send the map data to other vehicles that can communicate with the server.
  • When the map data is used, the vehicle's sensor can collect observation data of the local scene of the traffic scene in real time; based on the comparison between the currently collected observation data and the observation data recorded in the map data, the position information of the vehicle's current location in the traffic scene can be determined. For example, the location information of each scene in the traffic scene is recorded in the map data; after the currently collected observation data is matched against the map data, the position information of the matched scenes can be used to determine the position information of the vehicle's current location in the traffic scene.
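  • The localization step just described can be sketched under strong simplifying assumptions: each matched scene contributes an estimate of the vehicle position as its map position minus the vehicle-relative offset at which it is currently observed, and the estimates are averaged. A real system would use a pose solver; this function and its 2-D representation are only an illustration.

```python
# Illustrative only: estimate the vehicle's 2-D position from matched map
# landmarks. "matches" pairs a landmark's map position with the offset at
# which the sensor currently observes it relative to the vehicle.

def estimate_vehicle_position(matches):
    """matches: list of ((map_x, map_y), (rel_x, rel_y)) pairs."""
    if not matches:
        return None                     # no matched scenes: cannot localize
    xs = [mx - rx for (mx, my), (rx, ry) in matches]
    ys = [my - ry for (mx, my), (rx, ry) in matches]
    # Average the per-landmark estimates (a deliberate simplification).
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```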
  • the map data of the traffic scene may face partial failure or complete failure;
  • Because the map data in use does not record new changes in the scene, the vehicle may be unable to locate its own position in the traffic scene and may make wrong decisions, putting the vehicle in a risky state.
  • the collected data for some positions of the traffic scene is relatively sparse, so the amount of information in the map data at this position is relatively small, resulting in poor positioning accuracy of the vehicle at this position when the map data is subsequently used.
  • the environmental state when the observation data is collected is quite different from the environmental state when the map data is used to control the movement of the vehicle.
  • These situations may cause the vehicle to be unable to efficiently and accurately locate its own position in the traffic scene; or, the vehicle locates itself at a certain position in the traffic scene at one moment, and the position it determines for itself at the next moment differs greatly from the position at the previous moment.
  • The difference between the vehicle's positions at two close moments should not be too large, and a large difference may be caused by the failure of the map data in use. These situations all indicate that the map data is unreliable at the current moment; in order to ensure the safety of the vehicle, it is necessary to find out in time whether the map data needs to be updated.
  • The vehicle can acquire the observation data of the local scene of the traffic scene currently collected by the sensor and determine the first description information of the local scene based on the observation data; the map data of the traffic scene includes the second description information generated from the historical observation data of the global scene of the traffic scene; therefore, based on whether the first description information matches the second description information, it can be determined whether to update the map data.
  • the vehicle can automatically and timely update the map data without user intervention to perform operations, and since the map data can be updated in time, the driving safety of the vehicle can also be guaranteed.
  • Whether the map data needs to be updated is determined from the first description information of the currently collected local scene and the second description information in the map data. The first description information can represent the various states at the time the scene is currently collected, such as the time state, the state of one or more scenes, or the weather state; the second description information can represent the various states of the traffic scene when the map data was constructed, such as the time state, the state of one or more scenes, or the weather state. Therefore, based on the comparison of the first description information and the second description information, it can be determined whether the map data needs to be updated.
  • The vehicle moves in the traffic scene and collects the observation data of the local scene in real time;
  • the first description information of the local scene can be determined based on the observation data; that is, the first description information is determined from the observation data the vehicle is currently collecting.
  • the acquired map data of the traffic scene includes second descriptive information generated based on historical observation data of the global scene of the traffic scene.
  • The second descriptive information is generated before the current moment: it is generated from observation data of the scenery in the traffic scene collected earlier by the vehicle and/or other vehicles, and belongs to historical description information. It can be understood that the first description information and the second description information may be generated in the same manner.
  • The description information can include features extracted from the observation data. For example, vehicle sensors can include image sensors: vision devices can provide appearance information of the scene, and binocular vision can also provide depth information of the scene within a certain range, for example dense point cloud information within a certain range around the vehicle. Vehicle sensors can also include lidar, whose high measurement accuracy can provide point cloud information within a 360-degree range. Based on this, the features of the scene in the traffic scene can be extracted from the image data or point cloud data.
  • the feature extraction method is not limited in this embodiment, and may be realized by using a neural network or the like.
  • ORB features can be extracted and matched; where ORB features can be extracted in real time, and optical flow tracking and descriptor matching can be performed on the extracted features.
  • the point cloud data collected by the lidar can also be combined to determine the depth feature information of the pixels in the image, and the point cloud features can also be extracted from the point cloud data. These features can be used as description information and stored as part of the map data, and can be matched with the observation data collected by the vehicle in real time when the map data is used for navigation.
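  • As a toy illustration of the descriptor matching mentioned above: ORB produces 256-bit binary descriptors that are typically compared by Hamming distance. In the sketch below the descriptors are packed into Python integers, and the greedy nearest-neighbour strategy and the max_dist threshold are illustrative assumptions, not this application's implementation.

```python
# Toy ORB-style binary descriptor matching by Hamming distance.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(current, stored, max_dist=64):
    """For each live descriptor, find the closest map descriptor.

    Returns (current_index, stored_index, distance) triples for matches
    whose distance is within max_dist.
    """
    matches = []
    for i, d in enumerate(current):
        best_dist, best_j = min((hamming(d, s), j) for j, s in enumerate(stored))
        if best_dist <= max_dist:          # accept only sufficiently close pairs
            matches.append((i, best_j, best_dist))
    return matches
```

In practice a library matcher (e.g. a brute-force Hamming matcher) would be used on full 256-bit descriptors; the 8-bit values here are just for readability.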
  • The description information may also include key observation data among all the collected observation data, referred to in this embodiment as key observation data; one or more pieces of key observation data can be configured as required. Optionally, the key observation data can be data that has an impact on vehicle movement: scenes such as lanes and parking spaces greatly affect the driving or positioning of the vehicle, and the observation data of these scenes can be used as key observation data. Alternatively, some scenes in the traffic scene contribute more to the decision-making for vehicle movement, and the observation data of these scenes can also be used as key observation data.
  • The visual information of a certain scene in the traffic scene may be relatively rich, and this rich visual information can benefit vehicle control, for example through easier identification and more accurate positioning information; therefore, the observation data of these scenes can be used as key observation data.
  • The key observation data can also be determined in combination with the vehicle's motion attitude. For example, if the vehicle is in a state where its trajectory is changing, such as turning, the current scene is changing greatly and needs attention; therefore, the observation data collected when the vehicle's motion attitude undergoes a preset change can be used as the key observation data.
  • the key observation data includes: the observation data collected by the sensor within a target sampling period when the attitude of the vehicle undergoes a preset change.
  • The preset change of the vehicle attitude in this embodiment can be implemented in various ways according to the needs. For example, it may include a preset change in the driving direction of the vehicle, where the preset change may include the angle of the driving direction changing by more than a set angle threshold, such as a vehicle traveling forward and then turning left or right, or a vehicle traveling forward and then reversing.
  • it may also include a preset change in the vehicle's driving trajectory, for example, the driving trajectory changes from a straight line to a curved line.
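  • One way the preset change in driving direction could be detected is sketched below: a sampling period is flagged as a target period when the heading changes by more than a set angle threshold. The 30-degree default and the function names are illustrative assumptions, not values from this application.

```python
# Illustrative detection of a "preset change" in driving direction.

def heading_change_deg(h_prev: float, h_curr: float) -> float:
    """Smallest absolute heading difference in degrees, wrap-safe at 360."""
    diff = (h_curr - h_prev + 180.0) % 360.0 - 180.0
    return abs(diff)

def target_periods(headings, angle_threshold_deg=30.0):
    """Indices of sampling periods whose heading change exceeds the threshold."""
    return [
        i for i in range(1, len(headings))
        if heading_change_deg(headings[i - 1], headings[i]) > angle_threshold_deg
    ]
```

The wrap-safe difference matters near the 0/360 boundary, e.g. a change from 350 to 10 degrees is 20 degrees, not 340.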
  • the sensor of the vehicle collects the observation data according to a preset sampling period; the description information includes key observation data, and the key observation data characterizes the observation data collected by the sensor within a target sampling period.
  • The vehicle's sensors may include one or more of image sensors, lidar sensors, millimeter-wave radar sensors, or ultrasonic sensors, and the corresponding key observation data includes one or more of the following data collected during the target sampling period: the image data collected by the image sensor; the point cloud data collected by the lidar sensor; the millimeter-wave data collected by the millimeter-wave radar sensor; the ultrasonic data collected by the ultrasonic sensor.
  • The key observation data can also be determined according to changes in the traffic scene while the vehicle is running; for example, the target sampling period is determined according to the observation data of multiple sampling periods. The vehicle's sensor collects observation data according to the sampling period, and the observation data of the target sampling period may be determined as the key observation data based on an analysis of the observation data of multiple sampling periods as required. As an example, whether there is key observation data can be determined by comparing the observation data of multiple sampling periods; based on the changes in the observation data across multiple sampling periods, the data that can be used as key observation data can be found.
  • The key observation data may be the observation data of a target sampling period in which the data fluctuates greatly among the observation data of multiple sampling periods. For example, if the scene changes greatly from one position to another in the traffic scene, the characteristics of the scene in the observation data collected by the vehicle's sensor also change greatly. For instance, if there is a column between two adjacent parking spaces, then moving from the observation data of one parking space, to that of the column, to that of the other parking space, the data characteristics change significantly; attention can be paid to these changes to control the vehicle's movement more stably and safely.
  • the key observation data of the target sampling period may be determined based on the feature change of the scene in the observation data of multiple sampling periods.
  • The data change in this embodiment may also refer to the change between the currently collected observation data of a local scene in the traffic scene and the historical observation data of that local scene in the map data. For example, for a certain scene in the traffic scene, if the data characteristics of the currently collected observation data differ greatly from the data characteristics of the historical observation data of the scene in the map data, this indicates that the current state of the scene differs greatly from its historical state; therefore, the currently collected observation data of the scene can be taken as the key observation data.
  • The key observation data can also be the observation data of the target sampling period in which the number of data features is greater than a set threshold among the observation data of multiple sampling periods, wherein the set threshold can be configured as required. When the number of scene features determined from the observation data of multiple sampling periods is relatively large, the features of the scene are relatively rich and can provide rich information in subsequent processing, so the data can be used as key observation data.
  • The key observation data may also be the observation data of the target sampling period in which the number of data features is less than a set threshold among the observation data of multiple sampling periods, wherein the set threshold can be configured as required. When the number of scene features determined from the observation data of multiple sampling periods is small, the observation data of the scene at this position may not allow the automatic driving system to position and identify accurately; therefore, the observation data of these feature-sparse scenes can be paid attention to as the key observation data.
  • the data characteristics of the observation data collected within the target sampling period meet a preset data characteristic condition.
  • The key observation data can be determined based on data characteristics: from the observation data of multiple sampling periods, the observation data whose data characteristics meet the preset data characteristic condition can be selected as the key observation data;
  • that is, the data characteristics of the key observation data meet the preset data characteristic condition.
  • the preset data characteristic condition may be implemented in multiple manners as required.
  • For example, the image data collected by the image sensor within the target sampling period can meet preset image texture conditions; that is, the image data used as key observation data has an image texture that meets the preset image texture conditions.
  • The preset image texture conditions can be implemented in a variety of ways; for example, the image texture conditions can include the feature quantity of the image texture matching a set quantity threshold, the feature density of the image texture matching a set density threshold, or the feature arrangement of the image texture matching a specified arrangement rule, and so on.
  • Similarly, the point cloud data collected by the lidar sensor within the target sampling period can meet a preset point cloud distribution condition; that is, the point cloud data used as key observation data has a point cloud distribution that meets the preset point cloud distribution condition.
  • The preset point cloud distribution condition can be implemented in many ways; for example, the point cloud distribution condition can include the number of points in the point cloud matching a set number threshold, the point density of the point cloud matching a set density threshold, or the point cloud distribution matching a set distribution law, and so on.
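  • The texture and point cloud conditions above can be sketched as simple predicates. The threshold values (200 features, 50 points per cubic metre) and the dict-based frame representation are illustrative assumptions, not values from this application.

```python
# Illustrative predicates for "preset data characteristic conditions".

def meets_texture_condition(num_features: int, min_features: int = 200) -> bool:
    """Image texture condition: enough texture features in the frame."""
    return num_features >= min_features

def meets_point_cloud_condition(num_points: int, volume_m3: float,
                                min_density: float = 50.0) -> bool:
    """Point cloud condition: point density above a set threshold."""
    if volume_m3 <= 0:
        return False
    return num_points / volume_m3 >= min_density

def is_key_observation(frame: dict) -> bool:
    """A frame qualifies as key observation data if either condition holds."""
    return (meets_texture_condition(frame.get("num_features", 0))
            or meets_point_cloud_condition(frame.get("num_points", 0),
                                           frame.get("volume_m3", 0.0)))
```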
  • the description information may also include tag information, that is, the scene in the traffic scene corresponds to the tag information.
  • Tag information can be configured for the scenes in the traffic scene as needed; based on the tag information, map data matching can be performed quickly.
  • the tag information may include: tag information used to characterize the category to which a specific scene in the traffic scene belongs.
  • The specific scene can be the global scene in the traffic scene, that is, all the scenes have label information representing their category; in other examples, the specific scene can be a part of the scenes in the traffic scene, that is, some scenes have label information indicating their category while other scenes do not.
  • The specific scenes that carry category tag information can be configured as needed: for example, key scenes in the traffic scene, such as scenes that help vehicle positioning (taking a parking scene as an example, including but not limited to parking spaces, markers or lights on the wall, etc.), and scenes that affect the driving safety of the vehicle, such as pedestrians, lanes, etc.
  • multiple categories can be set according to needs.
  • Specific scenes in the traffic scene can be classified based on a variety of classification methods, for example into fixed and non-fixed scenes: walls, parking spaces, or columns are fixed-position scenes; pedestrians, vehicles, or lights, etc., are non-fixed scenes.
  • The classification method may also include the category of the location of the scene, such as whether the scene is on the ground or in the space: a lane or a parking space marker belongs to the ground category, while lights or monitoring equipment belong to the scenes in the space.
  • it can also include other categories.
  • different categories can be set based on different traffic scenes.
  • The categories can include walls, columns, vehicles, parking spaces, lanes, wall markers, column markers, or pedestrians, etc.
  • each scene may have one or more categories, that is, each scene may have one or more tag information representing the category it belongs to.
  • The tag information may also include tag information used to characterize the collection time of the observation data of the traffic scene. The collection time can accurately indicate the timeliness of the map data and the state of the environment when the map data was constructed; in practical applications, a variety of tag information characterizing the collection time can be set as needed.
  • label information representing the season of the collection time may be included, for example, label information of different seasons in spring, summer, autumn and winter.
  • tag information representing the time period of the collection time may be included, and the time period may be set as required.
  • the time period may include morning, afternoon, or evening.
• the description information may include key observation data, which, as key observation data in the map data, plays a key role in the control of the vehicle.
• the label information in this embodiment also includes label information used to characterize the features of the key observation data; that is, this embodiment also configures labels for the features of the key observation data, so that, based on this label information, it can be determined more quickly and accurately whether the map data needs to be updated.
• the features of the key observation data include visual features of the key observation data; through these visual features, the automatic driving system can be provided with highly reliable appearance information of the scenes in the traffic scene.
• the visual features include at least one of the following: SIFT features (Scale-Invariant Feature Transform), ORB features (Oriented FAST and Rotated BRIEF), or MSER features (Maximally Stable Extremal Regions), etc.
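The ORB features named above are binary descriptors that are conventionally compared by Hamming distance. As a non-authoritative illustration (the function names, the packing of descriptors as integers, and the distance threshold are assumptions for this sketch, not part of the embodiment), nearest-neighbour matching of such descriptors could look like:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(current, stored, max_dist=20):
    """Greedy nearest-neighbour matching of ORB-like binary descriptors.

    current, stored: lists of descriptors packed as Python ints.
    Returns (i, j) index pairs whose Hamming distance is within max_dist.
    """
    matches = []
    for i, d in enumerate(current):
        best_j, best_dist = -1, max_dist + 1
        for j, e in enumerate(stored):
            dist = hamming(d, e)
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```

In practice a library implementation (e.g. a brute-force or LSH matcher) would be used; the sketch only shows the comparison principle.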
  • the matching process of the first description information and the second description information may also have multiple implementation manners.
  • there are multiple types of description information which can be matched with corresponding types of description information.
• the description information can include features extracted from the observation data; therefore, features of the scene can be extracted from the observation data currently collected by the vehicle's sensors, and the currently extracted features can be matched against the features of the scene extracted from the historical observation data in the map data.
• the description information may include label information, and the label information determined from the currently collected observation data may be matched with the label information in the map data; since there may be many kinds of label information, label information of the corresponding kind is matched.
• for example, the label information used to represent the collection time is determined from the currently collected observation data and matched with the label information used to represent the collection time in the map data; the label information used to represent the illumination information is determined from the currently collected observation data and matched with the label information used to represent the illumination information in the map data, and so on.
• when the description information includes key observation data, the key observation data determined from the currently collected observation data can be matched with the key observation data in the map data; for example, the two can be matched directly, or the matching can be based on the features of the key observation data or the label information of those features.
  • a matching result of the two can be obtained, and based on the matching result, it can be determined whether to update the map data.
• information matching conditions can be set as required, which this embodiment refers to as preset information matching conditions.
• the preset information matching conditions are used to indicate the degree of matching between the first description information and the second description information, that is, whether the map data needs to be updated; in practical applications, the preset information matching conditions can be flexibly configured as needed. For example, since there are multiple types of first and second description information, the preset information matching conditions can include any combination of matching conditions for one or more kinds of description information.
  • the preset information matching condition may be a condition related to label information.
• for example, the preset information matching condition may include: the label information of the collection time of the currently collected observation data matches the label information of the collection time of the map data.
  • the description information may include features extracted from the observation data, and the preset information matching conditions may be configured based on the features.
• when the description information includes label information used to characterize the category of a specific scene in the traffic scene, the preset information matching condition may include: the category label information of the specific scene determined from the currently collected observation data matches the category label information of the scene in the map data, and so on.
• when the description information includes key observation data, the preset information matching conditions can be configured based on the key observation data; for example, they can include: the key observation data in the currently collected observation data matches the key observation data in the map data, and so on.
• the preset information matching conditions may also include conditions formed by a combination of one or more of tag information, features and/or key observation data, which can be set as needed in practical applications; this embodiment does not limit this.
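A preset information matching condition combining label agreement and feature overlap, as described above, can be sketched as follows. This is a hedged illustration only: the dictionary field names (`labels`, `features`), the required label keys, and the overlap threshold are assumptions, not the embodiment's data model.

```python
def matches_preset_condition(first, second,
                             required_labels=("season", "time_period"),
                             min_feature_overlap=0.5):
    """Return True when the first (currently collected) and second
    (map-data) description information satisfy a preset matching
    condition: every required label field agrees, and enough of the
    map's stored features reappear in the current observation."""
    for key in required_labels:
        if first["labels"].get(key) != second["labels"].get(key):
            return False
    common = set(first["features"]) & set(second["features"])
    ratio = len(common) / max(len(second["features"]), 1)
    return ratio >= min_feature_overlap
```

Any combination of conditions (time labels only, features only, key observation data) could be substituted in the same shape.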
• if the matching result of the first description information and the second description information satisfies the preset information matching condition, it can be determined that the map data does not need to be updated; the real-time position information of the vehicle in the traffic scene can be determined based on the matching result, and the movement of the vehicle in the traffic scene can be controlled based on the real-time position information. If the first description information does not match the second description information, it may be determined that the map data needs to be updated, and the map data may be updated according to the matching result and the currently collected observation data of the traffic scene.
• the update method of the preset map data includes any of the following: replacing the second description information with the first description information; or fusing the first description information with the second description information.
• in some cases the current traffic scene differs greatly from the traffic scene at the time of map construction; for example, some scenes in the current traffic scene have changed greatly, or the state of the traffic scene at the current time differs greatly from its state at construction time, so that the map data cannot be applied to the current traffic scene and is invalid. If the vehicle uses such map data for navigation, the vehicle cannot be accurately positioned. Based on this, the second description information may be replaced with the first description information; since the second description information is replaced, the map data becomes invalid, and new map data needs to be constructed using the currently collected observation data.
• the replacement of the second description information with the first description information, indicating that the map data is invalid, may be performed under the following conditions: the matching result does not meet the preset information matching condition, and the real-time position information of the vehicle in the traffic scene cannot be determined according to the matching result.
  • the description information may include features extracted from observation data, and the preset information matching conditions may be configured based on the features.
• if the map data is reliable, the features of the observation data collected by the current vehicle should match the features of the historical observation data in the map data. If the map data is unreliable, it is difficult to match the two, with the result that the vehicle cannot use the map data to achieve stable positioning in the current traffic scene and cannot stably and accurately obtain its current position information.
• whether the real-time position information of the vehicle in the traffic scene can be determined may be judged based on the features of the observation data collected by the current vehicle and the features of the historical observation data in the map data. For example, positioning cannot be stabilized when, after the vehicle has continued to drive for a distance M (M can be set as required), the number of feature matches is less than a set value, or the average number of feature matches is less than a set value, etc.; or when the vehicle has continued to drive for a distance L (L can be set as required) without any matching features, and the like.
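The positioning checks described above can be sketched as a small classifier over the logged feature-match counts. All thresholds here (the distance window standing in for M/L, the average and minimum match counts) are illustrative assumptions and would be tuned per system:

```python
def positioning_state(match_counts, distance_m,
                      window_m=50.0, min_avg_matches=30, min_matches=10):
    """Classify positioning quality from feature-match counts logged
    while the vehicle drove `distance_m` metres.

    Returns "failure" when no features matched at all over the window,
    "unstable" when the average or minimum match count is too low after
    driving the set distance, and "normal" otherwise."""
    if not match_counts or max(match_counts) == 0:
        return "failure"
    avg = sum(match_counts) / len(match_counts)
    if distance_m >= window_m and (avg < min_avg_matches
                                   or min(match_counts) < min_matches):
        return "unstable"
    return "normal"
```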
• the method of this embodiment may further include: controlling the vehicle to move in the traffic scene so as to continuously collect observation data of the local scenes of the traffic scene, and using the continuously collected observation data to determine and store description information. Thus, after judging that the map data is invalid, the scheme of this embodiment can control the vehicle to execute the map data construction process, so as to generate map data usable for the current traffic scene.
• another optional embodiment is to fuse the first description information with the second description information; in practical applications, only some data in the map data may be invalid, that is, the map data as a whole is still usable but part of it needs to be updated. For example, some scenes in the traffic scene have changed, while other scenes in the map data have not, so the data of the other scenes is still usable.
• in this case, when the vehicle's movement is controlled based on the map data, that is, when the position information of the vehicle in the current traffic scene is determined based on the map data, the positioning may be unstable: the vehicle can be positioned stably at certain times, while at other times the positioning is unstable, inaccurate, or drifting.
• or, the observation data collected for the traffic scene may not be dense enough and has some gaps; in this case, the currently collected observation data can be used as a supplement, and the first description information can be added to the map data. Therefore, this embodiment may fuse the first description information with the second description information, that is, use the first description information of the currently collected observation data to update the invalid part of the second description information in the map data.
• the fusion method includes any of the following: merging the first description information into the preset map data; or replacing part of the second description information in the preset map data according to the first description information.
• the fusion of the first description information and the second description information is carried out under the following conditions: if the matching result does not meet the preset information matching condition, and the real-time position information of the vehicle in the traffic scene determined according to the matching result does not meet a preset stable state, the first description information is fused with the second description information.
• the preset stable state can be realized in many ways as needed. For example, the number of matching features in the matching result is greater than a set threshold N, that is, the features of the currently collected local scene can match the features of the scene in the map data; N can be set as needed, for example 50. With a large number of feature matches, the vehicle can use the position information of these matching features in the map data to determine its current position information in the traffic scene.
• during stable positioning, the vehicle can continuously determine accurate location information; as the vehicle keeps moving, its location information will not change greatly between two adjacent time periods. If the real-time position information of the vehicle in the traffic scene, determined based on the map data, changes greatly within two adjacent time periods, it can be determined that the real-time position information does not meet the preset stable state. Based on this, the preset stable state may include that, within a plurality of set time periods, the difference in the vehicle's real-time position information in the traffic scene is less than a set threshold; if the difference is greater than the set threshold, that is, the real-time position of the vehicle changes greatly, the preset stable state is not satisfied.
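The adjacent-period stability check just described can be sketched in a few lines. The jump threshold (in metres) and the 2-D position representation are illustrative assumptions:

```python
def is_stable(positions, max_jump=1.5):
    """Preset stable state over a sequence of real-time estimates.

    positions: list of (x, y) position estimates from adjacent time
    periods. The state holds when no adjacent pair of estimates differs
    by more than max_jump metres; a larger jump indicates drift."""
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_jump:
            return False
    return True
```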
  • the real-time position information of the vehicle in the traffic scene does not satisfy a preset stable state, including: the real-time position of the vehicle displayed on the user interface drifts.
  • the real-time position of the vehicle can be displayed on the user interface. If the real-time position information of the vehicle in the traffic scene does not meet the preset stable state, the real-time position of the vehicle displayed on the user interface will drift.
• the description information may include features extracted from the observation data, and whether the map data needs to be updated may be determined based on whether the features match. For example, if the map data is reliable, the features of the observation data collected by the current vehicle should match the features of the historical observation data in the map data; if the map data is unreliable, it is difficult to match the two, with the result that the vehicle cannot use the map data to achieve stable positioning in the current traffic scene and cannot stably and accurately obtain its current position information. Feature matching and the decision on whether the map needs to be updated can therefore be configured on this basis.
• a traffic scene can have multiple pieces of map data, such as map data constructed from observation data collected at different times; map data constructed at different times can have different characteristics. For example, the light brightness differs between morning and evening, so the illumination characteristics in the map data differ, and different light brightness leads to different scene features; likewise, map data constructed under different weather conditions also has different characteristics.
• each piece of map data may have usage attribute information, and the usage attribute information is determined based on the tag information; acquiring the map data of the traffic scene includes: displaying the usage attribute information of the plurality of pieces of map data on the user interface, obtaining the usage attribute information selected by the user through the user interface, and determining the map data from the plurality of pieces of map data according to the usage attribute information selected by the user.
• the user interface in this embodiment may be the user interface of the display component of the vehicle, or the user interface of other equipment that is independent of the vehicle and can communicate with it.
  • the user interface can provide the user with a selection function of multiple map data of the traffic scene.
• the usage attribute information may include information such as the time, scene, or weather of the map data. Based on this, after the user interface displays the usage attribute information of each piece of map data, the user can select the appropriate map data accordingly.
  • the traffic scene has multiple pieces of map data, and each piece of map data has usage attribute information, and the usage attribute information is determined based on the tag information; the acquiring the preset map data of the traffic scene , including: matching the tag information in the first description information with the usage attribute information of the multiple pieces of map data, and determining the map data from the multiple pieces of map data according to the matching result.
  • the vehicle may automatically select from multiple pieces of map data in the traffic scene, and based on the usage attribute information, select the map data that best matches the state of the current vehicle's traffic scene from the multiple pieces of map data.
• the usage attribute information can include information such as the time, scene, or weather of each piece of map data; when selecting map data, the current time information, current weather information, and scene information of the current vehicle location can be obtained and then matched against the usage attribute information of each piece of map data.
• in this way, the most suitable map data can be obtained automatically, which reduces user operations and, because map data with better reliability can be obtained, helps ensure safe driving of the vehicle.
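Automatic selection among several map-data records by their usage attribute information can be sketched as a simple label-overlap score. The record structure and field names (`usage_attributes`, the label keys) are assumptions for illustration only:

```python
def select_map(candidates, current_labels):
    """Pick, from several map-data records of the same traffic scene,
    the one whose usage attribute information (time period, season,
    weather, ...) agrees with the most of the current labels."""
    def score(record):
        attrs = record["usage_attributes"]
        return sum(1 for k, v in current_labels.items()
                   if attrs.get(k) == v)
    return max(candidates, key=score)
```

Ties would need a tie-breaking rule (e.g. the most recently built record); that is omitted here.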
  • the method may further include: sending the updated map data to the server and/or other vehicles.
• after the vehicle updates the map data, it can send the updated map data to the server for storage, and the server can also send it to other vehicles that can communicate with the server as needed; or, the vehicle can also send the updated map data to other vehicles with which it can communicate, for use by those vehicles.
  • FIG. 2A it is a schematic diagram of another map data processing method in the embodiment of the present application.
  • FIG. 2A includes the following steps:
• first, the map is constructed; the map construction process is used to construct the map data of the traffic scene. In some examples, the user may drive the vehicle in the traffic scene; in other scenarios, the vehicle can also drive automatically at low speed in the traffic scene.
  • the vehicle's sensors collect observation data of various scenes in the traffic scene. Map data can be constructed from the collected observation data.
• FIG. 2B shows a schematic flowchart of map construction in this embodiment; in step 211, the map can be constructed using the observation data collected by the sensors of the vehicle.
  • vehicle sensors can include image sensors, etc., vision devices can provide the appearance information of the scene, and binocular vision can also provide depth information of the scene within a certain range, for example, it can provide dense point cloud information within a certain range around the vehicle.
  • Vehicle sensors can also include lidar, etc., and its high measurement accuracy can provide point cloud information within a 360-degree range.
  • image data as an example, in FIG. 2B , in step 212 , the image data collected by the image sensor can be obtained, and in step 213 , feature extraction and matching can be performed on the image data.
  • ORB features can be extracted and matched; where ORB features can be extracted in real time, and optical flow tracking and descriptor matching can be performed on the extracted features.
  • the point cloud data collected by the lidar can also be combined to determine the depth information of the pixels in the image.
• three-dimensional feature map reconstruction can then be carried out; for example, based on the extracted ORB features, linear triangulation can be used, and the reprojection error can be used to remove mismatched or low-quality three-dimensional features.
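Pruning triangulated features by reprojection error, as mentioned above, can be illustrated with a minimal pinhole model. This is a sketch under stated assumptions: the intrinsics (fx, fy, cx, cy), the pixel threshold, and the convention that points are expressed in the camera frame are all illustrative, not the embodiment's values.

```python
def prune_by_reprojection(points3d, observations, fx=800.0, fy=800.0,
                          cx=640.0, cy=360.0, max_err_px=2.0):
    """Keep only triangulated 3D features whose pinhole reprojection
    error against the observed pixel is within max_err_px.

    points3d: list of (X, Y, Z) in the camera frame.
    observations: matching list of observed (u, v) pixel coordinates."""
    kept = []
    for (X, Y, Z), (u, v) in zip(points3d, observations):
        if Z <= 0:
            continue                      # behind the camera: discard
        u_hat = fx * X / Z + cx           # pinhole projection
        v_hat = fy * Y / Z + cy
        err = ((u_hat - u) ** 2 + (v_hat - v) ** 2) ** 0.5
        if err <= max_err_px:
            kept.append((X, Y, Z))
    return kept
```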
  • the characteristics of each scene in the scene can be extracted from the observation data. These characteristics can be used as description information and stored as map data. When using the map data for navigation, it can be matched with the observation data collected in real time by the vehicle.
  • key frame extraction may be performed, that is, key observation data may be obtained, for example, a series of representative images may be extracted as key frames for more complete expression of map environment information.
• key frames can be extracted in many ways. For example, during the continuous motion and acquisition process of the vehicle, within the target sampling period, the number of tracked and matched feature points drops to a set threshold, such as to 30%; or, within the target sampling period, the number of image feature points falls below 300; or, within the target sampling period, the attitude of the vehicle changes, for example the driving distance exceeds 2 meters or the turning angle of the vehicle exceeds 20 degrees, etc.
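The key-frame criteria just listed reduce to a disjunction of threshold tests. The sketch below mirrors them with the example values from the text (30% tracking ratio, 300 features, 2 m, 20 degrees); all are configurable, and the function signature is an assumption of this illustration:

```python
def is_keyframe(tracked_ratio, feature_count, moved_m, rotated_deg,
                min_ratio=0.3, min_features=300,
                max_move_m=2.0, max_rot_deg=20.0):
    """Decide whether the current frame should become a key frame:
    the feature-tracking ratio dropped to the threshold, the feature
    count fell too low, or the pose changed by more than the set
    distance or angle within the target sampling period."""
    return (tracked_ratio <= min_ratio
            or feature_count < min_features
            or moved_m > max_move_m
            or rotated_deg > max_rot_deg)
```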
  • Keyframe labeling can be performed in step 216.
• various visual features can be extracted; the visual features can include SIFT (capturing complex geometric texture structure), ORB (fast feature extraction), MSER (locally stable region features), etc. These features can be used to configure label information, and a dictionary can also be built.
  • the labels of key observation data may include bag-of-words feature labels, overall brightness feature labels, or feature space distribution labels.
  • the user scene labeling process is performed, that is, the observation data is used to configure more other description information.
• the description information can include time tags; for example, the time information at map construction can be obtained, and time tags can be added to the map data, such as season, month, date, time period (morning, noon, afternoon, evening, night, etc.), and so on.
  • the usage attribute information of the map data can be configured based on the time tag, and the usage attribute information represents the time information of the map data.
  • the description information may also include scene tags, for example, the environmental status of the traffic scene can be determined based on observation data, for example, indoor parking lot, outdoor parking lot, semi-enclosed parking lot, office parking lot, residential parking lot, etc.
  • the use attribute information of the map data can also be configured based on the scene tag, and the use attribute information represents the environmental state of the traffic scene to which the map data belongs.
• then map storage can be performed; in the map storage stage, if the original observation data were stored directly, a large amount of space would be consumed and there would be much redundant information.
• in the foregoing processing, description information is generated based on the observation data, including a large number of scene features, key observation data determined from the observation data, and a large amount of tag information. Therefore, in this embodiment, the description information can be stored as the map data, and the map data can be maintained for a long time.
• the same traffic scene can store multiple pieces of map data with different label combinations, such as different time periods, different seasons, and different lighting conditions, so as to compensate for map failure caused by different scene changes; that is to say, one traffic scene may correspond to multiple pieces of map data.
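Keeping several map-data records per scene, keyed by their label combination, can be sketched as a small store. The class, its field names, and the keying scheme are illustrative assumptions; the point is that a record is overwritten only when its label combination is identical, otherwise both are retained:

```python
class MapStore:
    """Keep several map-data records per traffic scene, keyed by a
    label combination (time period, season, lighting, ...)."""

    def __init__(self):
        self._maps = {}                   # scene_id -> {label_key: record}

    def store(self, scene_id, labels, record):
        # An identical label combination replaces the old record;
        # a new combination adds a record alongside the existing ones.
        key = tuple(sorted(labels.items()))
        self._maps.setdefault(scene_id, {})[key] = record

    def records(self, scene_id):
        return list(self._maps.get(scene_id, {}).values())
```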
• in step 230, the vehicle can use the map to perform map positioning; in step 240, the positioning status can be judged; in step 250, the map can be updated. This will be described in conjunction with the flowchart of map data processing in this embodiment shown in FIG. 2C.
  • one or more pieces of map data can be constructed for one traffic scene.
  • the vehicle enters the traffic scene again, it can navigate based on the constructed map data, and it can also be determined whether the map data needs to be updated.
  • the map data of the traffic scene may be acquired.
  • the geographic location of the vehicle can be determined based on the positioning sensor of the vehicle, and the matching map data can be queried locally or from the server based on the geographic location.
  • the traffic scene has multiple copies of map data, it may be selected automatically by the vehicle or selected by the user based on the use attribute information of the map data.
• for example, the observation data of the current traffic scene can be collected by the vehicle's sensors, and the description information of the currently collected observation data can be matched with the usage attribute information of the multiple pieces of map data to find the map data that matches the current traffic scene.
  • the user may specify map data; for example, the usage attribute information of multiple pieces of map data may be displayed in the user interface, and the user may select one piece of map data.
• a function for setting the current position and direction of the vehicle can also be provided on the user interface, so that the current position and direction of the vehicle can be set by the user. Since the system stores maps of multiple scenes, having the user specify the map data can effectively improve the correctness of the map data selection and save the time otherwise spent on automatic matching.
  • the vehicle can move in the traffic scene based on the map data, and in step 262 the vehicle can perform map positioning.
• specifically, the observation data of the local scene of the traffic scene currently collected by the vehicle's sensor can be obtained; the first description information of the local scene is determined based on the observation data, the first description information is matched with the second description information in the map data, and a matching result is obtained. If the matching result satisfies the preset information matching condition, the real-time position information of the vehicle in the traffic scene is determined according to the matching result, and the movement of the vehicle in the traffic scene is controlled based on the real-time position information.
  • the map data may be completely invalid or partially invalid and need to be updated.
  • positioning quality monitoring may be performed to determine the quality of the vehicle's positioning based on map data.
  • a map status may be determined.
• the vehicle starts driving based on the map data and obtains the observation data of the local scene of the traffic scene currently collected by its sensors; the first description information of the local scene is determined based on the observation data. For example, feature extraction can be performed on the images and point cloud data collected in real time, and the extracted features can be used as the first description information for matching with the features in the map data and for pose solving.
  • normal automatic driving control can be performed, and the observation data of the local scene of the traffic scene currently collected by the vehicle can also provide corresponding environmental prior information.
  • monitoring of the positioning quality can be performed, that is, after the vehicle is positioned using the map, the state of the map can be monitored.
  • the map positioning state can be set to various states as required, and as an example, it can include normal positioning, that is, the map data is valid and reliable, and does not need to be updated.
• the positioning status can also include unstable positioning, that is, the description information of some scenes in the map data may be invalid, so that positioning of the vehicle using the map data is sometimes reliable and sometimes unreliable; in this case, the map data needs to be updated.
  • the positioning status may also include positioning failure, that is, the map data cannot be reliably positioned by the vehicle, the map data is basically invalid, and the map data of the traffic scene needs to be reconstructed.
• the judgment conditions for normal positioning may include: the number of matching features is greater than a set threshold, that is, the features of the currently collected local scene can be matched with the features of the scene in the map data, and the number of matches is ample, for example there are continuously more than N matches, where N can be set as needed, for example 50.
• the unstable state and positioning failure, that is, the states in which the vehicle is not positioned normally, can be judged as follows: the number of feature matches is small, for example after the vehicle has driven continuously for a distance M (M can be set as required) the number of feature matches is less than a set value, or the average number of feature matches is less than a set value, etc.; or the vehicle has driven continuously for a distance L (L can be set as required) without any matching features, and the like.
  • the map state judgment of step 208 may be further performed.
  • map status determination may be performed.
• multiple types of label information can be combined for the determination; for example, the judgment can be made using the time and scene tags. The map may need rebuilding when the time tag difference is too large, for example the time change is greater than a set number of days; or when the time period does not match, for example the label information determined from the currently collected observation data is dusk while the time-period label in the map data differs, so the time period has changed greatly; or when the weather does not match, for example the weather label determined from the current observation data is cloudy while that in the map data differs. If the time label difference is too large, map reconstruction is triggered, that is, the map construction in step 210 can be performed. It can be understood that this embodiment triggers map construction again, and the newly constructed map data is an update of the existing map data.
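The label-based map-status judgment above amounts to a few comparisons. In this sketch the field names (`day` as an absolute day count, `time_period`, `weather`) and the day-gap threshold are assumptions introduced for illustration:

```python
def needs_rebuild(map_labels, current_labels, max_day_gap=180):
    """Judge from label information whether map reconstruction should
    be triggered: the time-tag gap is too large, the time period does
    not match, or the weather does not match."""
    if abs(current_labels["day"] - map_labels["day"]) > max_day_gap:
        return True                       # time change exceeds set days
    if current_labels["time_period"] != map_labels["time_period"]:
        return True                       # e.g. map vs current: dusk mismatch
    if current_labels["weather"] != map_labels["weather"]:
        return True                       # e.g. current label is cloudy
    return False
```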
• in other examples, the judgment can be made based on the key observation data and/or the tag information of the key observation data; for example, key observation data can be determined from the currently collected observation data, and description information, such as the label information of the key observation data, can be determined based on it.
• the label information can include visual features; features such as SIFT (complex geometric texture structure), ORB (fast feature extraction), and MSER (locally stable region features) can be extracted and matched against the key observation data and/or the label information in the map data for the judgment.
• map data reconstruction can be triggered, that is, the map construction in step 210 can be performed.
  • map storage can be performed.
• in some examples, map data of the same traffic scene can be automatically merged and updated according to the time tags of the map data.
• if the time label of the newly constructed map data is the same as that of the existing old map data, the old map data can be replaced by the new map data.
• if the time label of the newly constructed map data is summer while the time label of the old map data is autumn, both the old and new map data can be kept.
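The merge rule above (same time label: replace; different time label: keep both) can be sketched directly. The `time_label` field name is an illustrative assumption:

```python
def merge_maps(existing, new_map):
    """Merge a newly built map record into the stored records for the
    same traffic scene: a record with the same time label is replaced
    by the new one; records with other time labels are kept."""
    out = [m for m in existing
           if m["time_label"] != new_map["time_label"]]
    out.append(new_map)
    return out
```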
  • the solution of this embodiment can be applied to a variety of practical scenarios.
  • The method of this embodiment can be used to solve the problem of automatic parking in the vehicle's assisted-driving functions, such as automatic parking in commonly used industrial parks, residential areas, campuses, and other scenarios.
  • In the parking system, it is used to solve the problem of map failure.
  • the solution of this embodiment can be used as an automatic driving solution for the last 100 meters of real-life scenarios in the vehicle field, and can be applied to the assisted parking system of the vehicle.
  • Map data management methods such as map updating and map switching can improve the long-term operating stability of the automatic driving system.
  • The assisted parking system usually requires the user to perform a single training run for a scene, but the feature map from a single training run has limited timeliness: as time, weather, light, and season change, the features in the map data gradually become invalid.
  • With an effective map-failure management method, the effective service time of the parking system can be extended, and the cumbersome operation of frequent map creation by users can be reduced.
  • the scheme of this embodiment can realize the optimized management of the parking system map.
  • The time characteristics of the map data can be marked by automatically capturing the complete timeliness information of the map data; according to set conditions, such as distance and angle thresholds, different key observation data can be extracted to record the characteristics of the traffic scene, thereby generating a series of unique labels.
  • the label attributes of the current traffic scene can be accurately expressed.
  • Automatic parking is currently one of the technologies in which the industry has invested heavily and which is expected to reach L4 mass production the fastest; promoting automatic parking technology will bring significant product benefits.
  • This embodiment can realize effective management of automatic parking map failure, and can effectively improve the stability of the map positioning function during parking and the user experience of map failure.
  • A map data construction method is provided, which may include the following steps:
  • In step 302, the vehicle is controlled to move in the traffic scene, and the observation data of the global scene of the traffic scene collected by the vehicle's sensors is acquired.
  • In step 304, the description information of the global scene of the traffic scene is determined based on the observation data.
  • In step 306, the description information of the global scene is stored as the map data of the traffic scene; the description information is used, when the vehicle enters the traffic scene again, to determine the position information of the vehicle in the traffic scene and to determine whether the map data needs to be updated.
  • Since the map data constructed in this embodiment includes the description information of the global scenery of the traffic scene, the description information can be used, when the vehicle enters the traffic scene again, to determine the vehicle's position information in the traffic scene and whether the map data needs to be updated, so that the vehicle can efficiently manage the map data of the traffic scene it is in and can promptly and automatically update the existing map data with the observation data currently collected by the sensors; this both reduces the user's manual map-update operations and ensures safe driving of the vehicle based on the updated map data.
  • the sensor collects the observation data according to a preset sampling period
  • the description information includes key observation data, and the key observation data characterizes the observation data collected by the sensor within a target sampling period.
  • the key observation data includes: the observation data collected by the sensor within a target sampling period when the attitude of the vehicle undergoes a preset change.
  • the sensor includes one or more of the following sensors: an image sensor, a lidar sensor, a millimeter-wave radar sensor or an ultrasonic sensor.
  • the key observation data includes one or more of the following:
  • image data collected by the image sensor, point cloud data collected by the lidar sensor, millimeter-wave data collected by the millimeter-wave radar sensor, or ultrasonic data collected by the ultrasonic sensor within the target sampling period.
  • the data characteristics of the observation data collected within the target sampling period meet a preset data characteristic condition.
  • the image data collected by the image sensor within the target sampling period satisfies a preset image texture condition.
  • the point cloud data collected by the lidar sensor within the target sampling period satisfies a preset point cloud distribution condition.
  • the target sampling period is determined according to the observation data of a plurality of the sampling periods.
  • the description information includes label information
  • Label information used to characterize the category to which a specific scene in the traffic scene belongs
  • Tag information used to characterize the collection time of the observed data of the traffic scene.
  • the label information also includes: label information used to characterize the features of the key observation data;
  • the features of the key observation data include visual features of the key observation data
  • the visual features include at least one of the following: SIFT features, ORB features or MSER features.
  • the foregoing method embodiments may be implemented by software, by hardware, or by a combination of software and hardware.
  • Taking software implementation as an example, the device in a logical sense is formed by the processor of the device in which it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • FIG. 4 is a hardware structure diagram for implementing the map data processing device 400 of this embodiment.
  • the map data processing device implementing this embodiment of the map data processing method may generally include other hardware according to the actual functions of the map data processing device, which will not be repeated here.
  • the processor 401 implements the following steps when executing the computer program:
  • map data of the traffic scene includes second description information generated based on historical observation data of the global scene of the traffic scene;
  • the map data is updated according to the matching result and the observation data of the currently collected traffic scene.
  • the processor 401 also implements the following steps when executing the computer program:
  • the matching result satisfies the preset information matching condition, determine the real-time position information of the vehicle in the traffic scene according to the matching result, and control the movement of the vehicle in the traffic scene based on the real-time position information .
  • the processor 401 executes the step of updating the map data according to the matching result and the observation data of the currently collected traffic scene, including:
  • the map data is updated according to the first description information.
  • the update method of the map data includes any of the following:
  • the replacing of the second description information with the first description information is performed under the following conditions:
  • the matching result does not meet the preset information matching condition, and the real-time position information of the vehicle in the traffic scene is not determined according to the matching result.
  • after the processor 401 executes the step in which the matching result does not meet the preset information matching condition and the real-time position information of the vehicle in the traffic scene is not determined according to the matching result, the processor 401 also executes:
  • Controlling the vehicle to move in the traffic scene to continuously collect observation data of local scenes of the traffic scene, using the continuously collected observation data to determine and store description information.
  • the merging of the first description information and the second description information is performed under the following conditions:
  • the matching result does not meet the preset information matching condition, and it is determined according to the matching result that the real-time position information of the vehicle in the traffic scene does not meet the preset stable state; the first description information is then fused with the second description information.
  • the real-time position information of the vehicle in the traffic scene does not satisfy a preset stable state, including: the real-time position of the vehicle displayed on the user interface drifts.
  • the manner of fusion includes any of the following:
  • the sensor collects the observation data according to a preset sampling period
  • the description information includes key observation data, and the key observation data characterizes the observation data collected by the sensor within a target sampling period.
  • the key observation data includes: the observation data collected by the sensor within a target sampling period when the attitude of the vehicle undergoes a preset change.
  • the sensor includes one or more of the following sensors: an image sensor, a lidar sensor, a millimeter-wave radar sensor or an ultrasonic sensor.
  • the key observation data includes one or more of the following:
  • image data collected by the image sensor, point cloud data collected by the lidar sensor, millimeter-wave data collected by the millimeter-wave radar sensor, or ultrasonic data collected by the ultrasonic sensor within the target sampling period.
  • the data characteristics of the observation data collected within the target sampling period meet a preset data characteristic condition.
  • the image data collected by the image sensor within the target sampling period satisfies a preset image texture condition.
  • the point cloud data collected by the lidar sensor within the target sampling period satisfies a preset point cloud distribution condition.
  • the target sampling period is determined according to the observation data of a plurality of the sampling periods.
  • the description information includes label information
  • Label information used to characterize the category to which a specific scene in the traffic scene belongs
  • Tag information used to characterize the collection time of the observed data of the traffic scene.
  • the label information also includes: label information used to characterize the features of the key observation data;
  • the features of the key observation data include visual features of the key observation data
  • the visual features include at least one of the following: SIFT features, ORB features or MSER features.
  • the description information includes label information and/or key observation data
  • the processor 401 executes matching the first description information and the second description information to obtain a matching result, including:
  • the traffic scene has multiple pieces of map data, and each piece of map data has usage attribute information, and the usage attribute information is determined based on the tag information;
  • the processor 401 performing the acquiring the map data of the traffic scene includes:
  • the map data is determined from the multiple pieces of map data according to the usage attribute information selected by the user.
  • the traffic scene has multiple pieces of map data, and each piece of map data has usage attribute information, and the usage attribute information is determined based on the tag information;
  • the processor 401 performing the acquiring the map data of the traffic scene includes:
  • the processor 401 also executes:
  • FIG. 5 it is a hardware structural diagram of a map data construction device 500 provided in this embodiment.
  • the map data construction device includes a processor 501, a memory 502, and a computer program stored on the memory 502 and executable by the processor 501; the processor 501 implements the following steps when executing the computer program:
  • the description information is used, when the vehicle enters the traffic scene again, to determine the position information of the vehicle in the traffic scene and to determine whether the map data needs to be updated.
  • the sensor collects the observation data according to a preset sampling period
  • the description information includes key observation data, and the key observation data characterizes the observation data collected by the sensor within a target sampling period.
  • the key observation data includes: the observation data collected by the sensor within a target sampling period when the attitude of the vehicle undergoes a preset change.
  • the sensor includes one or more of the following sensors: an image sensor, a lidar sensor, a millimeter-wave radar sensor or an ultrasonic sensor.
  • the key observation data includes one or more of the following:
  • image data collected by the image sensor, point cloud data collected by the lidar sensor, millimeter-wave data collected by the millimeter-wave radar sensor, or ultrasonic data collected by the ultrasonic sensor within the target sampling period.
  • the data characteristics of the observation data collected within the target sampling period meet a preset data characteristic condition.
  • the image data collected by the image sensor within the target sampling period satisfies a preset image texture condition.
  • the point cloud data collected by the lidar sensor within the target sampling period satisfies a preset point cloud distribution condition.
  • the target sampling period is determined according to the observation data of a plurality of the sampling periods.
  • the description information includes label information
  • Label information used to characterize the category to which a specific scene in the traffic scene belongs
  • Tag information used to characterize the collection time of the observed data of the traffic scene.
  • the label information also includes: label information used to characterize the features of the key observation data;
  • the features of the key observation data include visual features of the key observation data
  • the visual features include at least one of the following: SIFT features, ORB features or MSER features.
  • the processor 501 further executes: sending the map data to a server and/or other vehicles.
  • the embodiment of the present application also provides a vehicle 600 , including: one or more sensors 610 ; and a map data processing device 400 and/or a map data construction device 500 .
  • the embodiment of the present application also provides a computer-readable storage medium, on which several computer instructions are stored, and when the computer instructions are executed, the steps of the map data processing method in any embodiment are implemented.
  • the embodiment of the present application also provides a computer-readable storage medium, on which several computer instructions are stored, and when the computer instructions are executed, the steps of the method for constructing map data in any embodiment are implemented.
  • Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage, etc.) having program code embodied therein.
  • Computer usable storage media includes both volatile and non-permanent, removable and non-removable media, and may be implemented by any method or technology for information storage.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of storage media for computers include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • As for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts.
  • The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A map data processing method includes: acquiring observation data of local scenery of a traffic scene currently collected by a vehicle's sensors (102); determining first description information of the local scenery based on the observation data (104); acquiring map data of the traffic scene (106), wherein the map data includes second description information generated based on historical observation data of the global scenery of the traffic scene; matching the first description information against the second description information to obtain a matching result (108); and updating the map data according to the matching result and the currently collected observation data of the traffic scene (110). Also provided are a map data construction method, a map data processing device (400), a map data construction device (500), a vehicle (600), and a computer-readable storage medium.

Description

Map data processing and map data construction methods, devices, vehicle, and computer-readable storage medium

Technical Field

The embodiments of this application relate to the technical field of automatic driving, and in particular to a map data processing method, a map data construction method, corresponding devices, a vehicle, and a computer-readable storage medium.
Background
As a new technology, automatic driving is a current hotspot in the automotive industry, and more and more autonomous vehicles are being developed and put into use. In some scenarios, an autonomous vehicle can collect information about its surroundings through sensors and navigate using map data of the environment in which the vehicle is currently located, thereby realizing automatic driving.
Summary of the Invention
In view of this, the embodiments of this application provide a map data processing method, a map data construction method, devices, a vehicle, and a computer-readable storage medium, to solve the problems in the related art of cumbersome user operations and compromised driving safety caused by inefficient management of map data.
In a first aspect, a map data processing method is provided, the method comprising:
acquiring observation data of local scenery of a traffic scene currently collected by a vehicle's sensors;
determining first description information of the local scenery based on the observation data;
acquiring map data of the traffic scene, wherein the map data comprises second description information generated based on historical observation data of the global scenery of the traffic scene;
matching the first description information against the second description information to obtain a matching result;
updating the map data according to the matching result and the currently collected observation data of the traffic scene.
In a second aspect, a map data construction method is provided, the method comprising:
controlling a vehicle to move in a traffic scene, and acquiring observation data of the global scenery of the traffic scene collected by the vehicle's sensors;
determining description information of the global scenery of the traffic scene based on the observation data;
storing the description information of the global scenery as map data of the traffic scene, wherein the description information is used, when the vehicle enters the traffic scene again, to determine the position information of the vehicle in the traffic scene and to determine whether the map data needs to be updated.
In a third aspect, a map data processing device is provided, the device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor, when executing the computer program, implements the map data processing method embodiment of the first aspect.
In a fourth aspect, a map data construction device is provided, the device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor, when executing the computer program, implements the map data construction method embodiment of the second aspect.
In a fifth aspect, a vehicle is provided, the vehicle comprising the map data processing device of the third aspect and/or the map data construction device of the fourth aspect.
In a sixth aspect, a computer-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the steps of the map data processing method of the first aspect are implemented.
In a seventh aspect, a computer-readable storage medium is provided, on which several computer instructions are stored; when the computer instructions are executed, the steps of the map data construction method of the second aspect are implemented.
By applying the solution provided by this application, a vehicle can acquire observation data of local scenery of a traffic scene currently collected by its sensors and determine first description information of the local scenery based on the observation data, while the map data of the traffic scene comprises second description information generated based on historical observation data of the global scenery of the traffic scene; therefore, based on whether the first description information matches the second description information, it can be determined whether to update the map data. The solution of this application can thus manage the map data of the traffic scene in which the vehicle is located efficiently and can promptly and automatically update the existing map data with the observation data currently collected by the sensors, which both reduces the user's manual map-update operations and ensures safe driving of the vehicle based on the updated map data.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a map data processing method according to an embodiment of this application.
FIG. 2A is a schematic diagram of a map data processing method according to another embodiment of this application.
FIG. 2B is a schematic diagram of a map data construction method according to an embodiment of this application.
FIG. 2C is a schematic diagram of a map data processing method according to another embodiment of this application.
FIG. 3 is a schematic diagram of a map data construction method according to another embodiment of this application.
FIG. 4 is a structural schematic diagram of a map data processing device for implementing the map data processing method of this embodiment.
FIG. 5 is a structural schematic diagram of a map data construction device for implementing the map data construction method of this embodiment.
FIG. 6 is a block diagram of a vehicle according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only part of the embodiments of this application, not all of them.
In some scenarios, an autonomous vehicle can make automatic-driving decisions by sensing information about its surroundings in real time through sensors. In other scenarios, to reduce computing load and improve decision efficiency, an autonomous vehicle can collect information about its surroundings through sensors and realize automatic driving in combination with map data of the environment in which the vehicle is currently located.
The map data in the embodiments of this application refers to high-precision map data suitable for autonomous vehicles. Unlike ordinary electronic maps used for everyday user navigation, such map data contains richer and more detailed information; it is map data used by the vehicle's automatic driving system, enabling the system to perform environment perception, high-precision positioning, planning, and decision-making. Map data therefore has an important influence on the safety of automatic driving.
On this basis, this application provides a map data processing method embodiment that can efficiently manage the map data of the traffic scene in which the vehicle is located and can promptly and automatically update the existing map data with the observation data currently collected by the sensors, thereby both reducing the user's manual map-update operations and ensuring safe driving of the vehicle based on the updated map data. A detailed description is given through the following embodiments.
As shown in FIG. 1, which is a flowchart of a map data processing method in an embodiment of this application, the method of this embodiment may be executed by a map data processing device of a vehicle, which is implemented in software and/or hardware and can be configured in an electronic device with certain data computing capability. Optionally, the electronic device may be configured in the vehicle, or may be independent of the vehicle. This map data processing method embodiment may include the following steps:
In step 102, observation data of local scenery of a traffic scene currently collected by the vehicle's sensors is acquired.
In step 104, first description information of the local scenery is determined based on the observation data.
In step 106, map data of the traffic scene is acquired, wherein the map data comprises second description information generated based on historical observation data of the global scenery of the traffic scene.
In step 108, the first description information is matched against the second description information to obtain a matching result.
In step 110, the map data is updated according to the matching result and the currently collected observation data of the traffic scene.
The solution of this embodiment is applicable to scenarios where a vehicle needs to drive in a traffic scene using map data; with the above embodiment, the vehicle can update the map data while it is in the traffic scene. This embodiment is applicable to a variety of traffic scenes, i.e., the traffic scene may have multiple embodiments, for example a parking lot, such as an underground or outdoor parking lot, or other traffic scenes where map data is needed to control vehicle driving, such as industrial parks, freight stations, or docks. In such traffic scenes, the vehicle can acquire the map data of the scene, which can be used by automatic-driving software so that it can make driving decisions based on a comparison between the vehicle's real-time observation data and the map data.
The scenery in this embodiment refers to any target in a traffic scene that can be perceived by the vehicle's sensors, i.e., targets for which the vehicle's sensors need to collect observation data when controlling driving. These sceneries are located in the traffic scene and can have position information within it; when the vehicle's sensors perceive their information, the sceneries can indicate to the vehicle its position information in the traffic scene, so that the vehicle can determine the position information of the sceneries in the traffic scene based on the observation data, thereby determining its own position in the traffic scene and then making automatic-driving decisions. In practice, the scenery has many embodiments; one or more kinds of scenery can be identified as needed from the observation data collected by the vehicle's sensors. For example, the scenery may include lanes, parking spaces, vehicles, walls, pillars, signs, lighting, pedestrians, or parking-lot barriers; it may also be monitoring equipment arranged in the traffic scene, or markers on walls, pillars, or parking spaces; it may further include the light distribution in the traffic scene, and so on.
The map data of the traffic scene in this embodiment can be constructed in various ways. In some scenarios, a user drives the vehicle through the traffic scene, and while the vehicle is driving, its sensors collect observation data of various sceneries in the scene; in other scenarios, the vehicle may drive automatically at low speed through the traffic scene while its sensors collect observation data of various sceneries in the scene. The map data can be constructed from the collected observation data.
In practice, the vehicle may have various sensors, for example an image sensor, a lidar sensor, a millimeter-wave radar sensor, or an ultrasonic sensor. Based on the different sensors, there are also various kinds of observation data: the image sensor collects image data, the lidar sensor can collect point cloud data, the millimeter-wave radar sensor can collect millimeter-wave data, and the ultrasonic sensor can collect ultrasonic data.
In some examples, the traffic scene has a certain extent, and the vehicle can drive through it so that the vehicle's sensors can cover the entire traffic scene and collect observation data of the whole scene, which this embodiment calls observation data of the global scenery of the traffic scene. The global scenery here may include all targets in the entire traffic scene for which observation data needs to be collected; based on the observation data of the global scenery, map data covering the entire traffic scene can be constructed.
In some examples, the map data of the traffic scene may also record position information of all sceneries in the traffic scene. The position information in this embodiment may include geographic position information of a scenery; for example, the vehicle's geographic position can be determined based on information collected by its positioning sensor, and while the vehicle is at that position its other sensors can perceive the scenery, thereby determining the scenery's geographic position. In some traffic scenes with poor satellite or communication signals, such as underground parking lots, the vehicle may not be able to acquire geographic position information accurately. Optionally, the position information of a scenery in the traffic scene may also include relative position information; for example, a location with known geographic position can be chosen in the traffic scene as a base point, and the position information of the scenery may include its relative position with respect to that base point.
In other examples, the map data of the traffic scene may include feature information of each scenery in the traffic scene determined based on the observation data; for example, the feature information of a scenery may include its visual feature information. In practice, the various kinds of observation data collected by the various sensors can be processed in different ways to determine the feature information of the global scenery of the traffic scene from the observation data.
The map data of the traffic scene constructed by the vehicle can be stored locally and used when the vehicle subsequently enters the traffic scene again. In other examples, the vehicle can also send the map data to other vehicles with which it can communicate, for example via short-range wireless communication or via P2P (peer-to-peer) transmission. In still other examples, the vehicle can send the map data of the traffic scene to a server; optionally, the server can send the map data to other vehicles that can communicate with the server.
In some examples, when the vehicle drives automatically using the map data of the traffic scene, its sensors can collect observation data of local scenery of the traffic scene in real time; based on a comparison between the currently collected observation data and the observation data recorded in the map data, the position information of the vehicle's current location in the traffic scene can be determined. For example, the map data records the position information of each scenery in the traffic scene; by finding which scenery in the map data matches the currently observed scenery, the vehicle's current position in the traffic scene can be determined from the position information of the matched scenery in the map data.
In practice, the map data of a traffic scene may become partially or wholly invalid. For example, the map data is constructed in advance, and after construction, some sceneries in the traffic scene may change; when the vehicle drives in the changed traffic scene, the map data being used does not record the newly changed sceneries, which may well prevent the vehicle from locating its own position in the traffic scene and may lead to wrong decisions, putting the vehicle's driving at risk. Or the pre-collected observation data may be sparse at certain locations of the traffic scene, so the map data contains little information at those locations, resulting in poor positioning accuracy there when the map data is later used. Or the environmental state when the observation data was collected may differ significantly from the state when the map data is later used to control the vehicle's motion; for example, map data constructed from observation data collected in good daytime light may, if used at night, prevent the vehicle from efficiently and accurately locating itself in the traffic scene. Or the vehicle locates itself at a certain position in the traffic scene at one moment, and at the next moment the position it determines differs greatly from the previous one; in theory, under normal conditions the vehicle's positions at two nearby moments should not differ much, and a large difference is quite likely caused by invalid map data. All these situations indicate that the map data used at the current moment is unreliable; to ensure driving safety, it is necessary to promptly detect whether the map data needs to be updated.
In this embodiment, the vehicle can acquire observation data of local scenery of the traffic scene currently collected by the sensors and determine first description information of the local scenery based on the observation data, while the map data of the traffic scene comprises second description information generated based on historical observation data of the global scenery of the traffic scene; therefore, based on whether the first description information matches the second description information, it can be determined whether to update the map data. With the solution of this embodiment, the vehicle can update the map data automatically and promptly without requiring user intervention, and since the map data can be kept up to date, the vehicle's driving safety is also ensured.
In this embodiment, whether the map data needs to be updated is determined from the first description information of the currently collected local scenery and the second description information in the map data. The first description information of the currently collected local scenery can characterize the current state at collection time, such as the time, the state of one or more sceneries, or the weather, while the second description information in the map data can characterize various states of the traffic scene when the map data was constructed, such as the time, the state of one or more sceneries, or the weather; therefore, by comparing the first description information with the second description information, it can be determined whether the map data needs to be updated.
In some examples, the vehicle moves in the traffic scene and collects observation data of local scenery in real time, and the first description information of the local scenery can be determined based on the observation data; that is, the first description information is determined by the vehicle as it collects. The acquired map data of the traffic scene includes second description information generated based on historical observation data of the global scenery of the traffic scene; relative to the first description information, the second description information was generated before the current moment, from observation data of sceneries in the traffic scene previously collected by this vehicle and/or other vehicles, and is historical description information. It can be understood that the first description information and the second description information may be generated in the same way. Various implementations of the description information are illustrated below.
In some examples, the description information may include features extracted from the observation data. For example, the vehicle's sensors may include an image sensor; a vision device can provide appearance information of the scene, and binocular vision can further provide depth information of the scene within a certain range, for example dense point cloud information within a certain range around the vehicle. The vehicle's sensors may also include a lidar, whose high measurement accuracy can provide point cloud information over a 360-degree range. On this basis, features of the sceneries in the traffic scene can be extracted from the image data or point cloud data. This embodiment does not limit the feature extraction method; it can be implemented with neural networks or other approaches. For example, ORB features can be extracted and matched, where the ORB features can be extracted in real time, with optical-flow tracking and descriptor matching performed on the extracted features. In addition, the point cloud data collected by the lidar can be combined to determine depth feature information of pixels in the image, and point cloud features can also be extracted from the point cloud data. These features can serve as description information and be stored as part of the map data; when the map data is later used for navigation, they can be feature-matched against the observation data collected by the vehicle in real time.
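Descriptor matching of the kind described here can be sketched with binary (ORB-style) descriptors compared by Hamming distance. The 8-bit toy descriptors and the distance threshold below are illustrative assumptions; real ORB descriptors are 256-bit.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, reference, max_dist=2):
    """Greedy nearest-neighbour matching of binary descriptors.

    Returns (query_index, reference_index) pairs whose best
    Hamming distance is within max_dist.
    """
    matches = []
    for qi, q in enumerate(query):
        ri, d = min(
            ((i, hamming(q, r)) for i, r in enumerate(reference)),
            key=lambda t: t[1],
        )
        if d <= max_dist:
            matches.append((qi, ri))
    return matches

# Toy 8-bit descriptors: current frame vs. features stored in the map.
current = [0b10110010, 0b01101100]
mapped = [0b10110011, 0b11110000]  # first differs from current[0] by 1 bit
pairs = match_descriptors(current, mapped)
```

Only the first current descriptor finds a close enough map descriptor here; the second stays unmatched, which is how unreliable regions of the map manifest during matching.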
In some examples, the description information may also include key data among all collected observation data, which this embodiment calls key observation data; one or more kinds of key observation data can be configured as needed. Optionally, key observation data may be data that influences the vehicle's motion; for example, sceneries in the traffic scene such as lanes and parking spaces greatly affect the vehicle's driving or positioning, so their observation data can serve as key observation data. Alternatively, some sceneries in the traffic scene can contribute significantly to the vehicle's motion decisions, and their observation data can also serve as key observation data; for example, a scenery in the traffic scene with rich visual information can positively aid the vehicle's control, as it is easier to recognize and thus provides more accurate positioning information, so such sceneries can serve as key observation data. Alternatively, key observation data can be determined in combination with the vehicle's motion attitude; for example, if the vehicle is turning or otherwise changing its driving trajectory, the current scene is changing significantly and deserves attention, so the observation data collected when the vehicle's motion attitude undergoes a preset change can serve as key observation data.
In some examples, the key observation data includes: the observation data collected by the sensors within a target sampling period when the vehicle's attitude undergoes a preset change. Optionally, the preset change of the vehicle's attitude in this embodiment can be implemented in various ways as needed; for example, it may include a preset change in the driving direction, where the preset change may include a change in driving-direction angle greater than a set angle threshold, such as the vehicle turning left or right after driving forward, or reversing after driving forward. Optionally, it may also include a preset change in the driving trajectory, for example the trajectory changing from a straight line to a curve.
In some examples, the vehicle's sensors collect the observation data according to a preset sampling period; the description information includes key observation data, which characterizes the observation data collected by the sensors within a target sampling period. The vehicle's sensors may include one or more of an image sensor, a lidar sensor, a millimeter-wave radar sensor, or an ultrasonic sensor, and the corresponding key observation data includes one or more of the following kinds of data collected within the target sampling period: image data collected by the image sensor within the target sampling period; point cloud data collected by the lidar sensor within the target sampling period; millimeter-wave data collected by the millimeter-wave radar sensor within the target sampling period; and ultrasonic data collected by the ultrasonic sensor within the target sampling period.
In some examples, the key observation data can be determined according to changes in the traffic scene while the vehicle is driving; for example, the target sampling period is determined according to the observation data of multiple sampling periods. The vehicle's sensors collect observation data per sampling period, and, as needed, the observation data of a target sampling period can be determined as key observation data based on the observation data of multiple sampling periods. As an example, whether there is key observation data can be determined by comparing the observation data of multiple sampling periods; on this basis, the changes across the observation data of multiple sampling periods can reveal whether any of it can serve as key observation data.
As an example, the key observation data may be the observation data of the target sampling period in which the data changes significantly among the observation data of multiple sampling periods. For example, if the scenery changes considerably from one location to another in the traffic scene, this large scene change manifests in the observation data collected by the vehicle's sensors as a large change in scenery features. For example, if there is a pillar between two adjacent parking spaces, then in the data the vehicle's sensors collect for these two adjacent spaces, the data features change considerably from the observation data of one parking space to that of the pillar and then to that of the other parking space; these changes can be attended to so as to control the vehicle's motion more stably and safely. On this basis, the key observation data of the target sampling period can be determined based on feature changes of the scenery across the observation data of multiple sampling periods. In other examples, the data change in this embodiment may also refer to the change between the currently collected observation data of local scenery of the traffic scene and the historical observation data of that local scenery in the map data; for example, for a certain scenery in the traffic scene, if the data features of its currently collected observation data differ considerably from the data features of its historical observation data in the map data, the scenery's current state differs considerably from its historical state, so the currently collected observation data of that scenery can serve as key observation data.
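Selecting the target sampling period by feature change, as described above, can be sketched as follows. Representing each period's observation by a set of feature identifiers and using a symmetric-difference ratio threshold are illustrative assumptions.

```python
def select_key_periods(period_features, change_ratio=0.5):
    """Mark a sampling period as key when its feature set changes
    strongly relative to the previous period.

    period_features: list of sets of feature ids, one set per period.
    Returns indices of the periods judged to be key observations.
    """
    key = []
    for i in range(1, len(period_features)):
        prev, cur = period_features[i - 1], period_features[i]
        union = prev | cur
        if not union:
            continue
        # fraction of features that appeared or disappeared
        changed = len(prev ^ cur) / len(union)
        if changed >= change_ratio:
            key.append(i)
    return key

periods = [
    {"slot_a", "slot_b"},          # parking space A
    {"slot_a", "slot_b", "line"},  # small change
    {"pillar", "line"},            # large change: a pillar appears
]
key_idx = select_key_periods(periods)
```

Only the third period is flagged: the jump from parking-space features to pillar features is exactly the kind of transition the text says deserves attention.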
In other examples, the key observation data may also be the observation data of the target sampling period in which the number of data features is greater than a set threshold among the observation data of multiple sampling periods, where the set threshold can be configured as needed. If the scenery is determined from the observation data of multiple sampling periods to have a large number of features, its rich features can provide abundant information for subsequent processing, so it can serve as key observation data.
In still other examples, the key observation data may also be the observation data of the target sampling period in which the number of data features is less than a set threshold among the observation data of multiple sampling periods, where the set threshold can be configured as needed. If the scenery is determined from the observation data of multiple sampling periods to have few features, the observation data of the scenery at that location may not allow the automatic driving system to position and recognize accurately, so the observation data of such feature-poor sceneries can be attended to as key observation data.
In some examples, the data features of the observation data collected within the target sampling period meet a preset data feature condition. In this embodiment, key observation data can be determined based on data features: from the observation data of multiple sampling periods, the observation data whose data features meet the preset data feature condition is found and used as key observation data; that is, the data features of the key observation data meet the preset data feature condition. In practice, the preset data feature condition can be implemented in various ways as needed. As an example, it may be that the image data collected by the image sensor within the target sampling period satisfies a preset image texture condition, i.e., the image texture of the image data serving as key observation data satisfies the preset image texture condition. The preset image texture condition can be implemented in various ways; for example, it may include the number of image texture features matching a set number threshold, the density of image texture features matching a set density threshold, or the arrangement of image texture features matching a set arrangement pattern, and so on. In other examples, it may be that the point cloud data collected by the lidar sensor within the target sampling period satisfies a preset point cloud distribution condition, i.e., the point cloud distribution of the point cloud data serving as key observation data satisfies the preset point cloud distribution condition. The preset point cloud distribution condition can be implemented in various ways; for example, it may include the number of point cloud points matching a set number threshold, the point cloud density matching a set density threshold, or the point cloud distribution matching a set distribution pattern, and so on.
In some examples, the description information may also include label information, i.e., sceneries in the traffic scene correspond to label information; in practice, one or more kinds of label information can be configured for the sceneries in the traffic scene as needed, and map data can be matched quickly based on the label information.
In some examples, the label information may include: label information used to characterize the category to which a specific scenery in the traffic scene belongs. The specific scenery may be the global scenery of the traffic scene, i.e., all sceneries have label information characterizing their categories; in other examples, the specific scenery may be part of the sceneries in the traffic scene, i.e., some sceneries have label information characterizing their categories while others do not. In practice, the specific sceneries with category label information can be configured as needed; for example, they may be some key sceneries of the traffic scene, such as sceneries that aid vehicle positioning. Taking a parking scenario as an example, these include but are not limited to parking spaces, wall markers, or lighting; they may also include sceneries that affect driving safety, such as pedestrians and lanes. In practice, multiple categories can be set as needed; for example, the specific sceneries of the traffic scene can be categorized using different classification schemes. For example, by whether the scenery has a fixed position, the categories may include fixed and non-fixed sceneries: walls, parking spaces, and pillars are position-fixed sceneries, while pedestrians, vehicles, and light are non-fixed sceneries. Alternatively, the classification may be based on the location of the scenery, such as on the ground or in space: lanes and parking-space markers belong to the ground category, while lighting and monitoring equipment belong to the spatial category. In other examples, further categories are possible; for example, different categories can be set for different traffic scenes. In a parking-lot scene, the categories may include walls, pillars, vehicles, parking spaces, lanes, wall markers, pillar markers, pedestrians, and so on. Based on the above category settings, each scenery may have one or more categories, i.e., one or more kinds of label information characterizing the category to which it belongs.
In some examples, the label information may also include: label information used to characterize the collection time of the observation data of the traffic scene; the collection time can accurately indicate the timeliness of the map data and the environmental state at map construction. In practice, various kinds of label information characterizing the collection time can be set as needed. As an example, it may include label information characterizing the season of the collection time, such as labels for spring, summer, autumn, and winter. In other examples, it may include label information characterizing the time slot of the collection time, which can be set as needed; as an example, the time slots may include morning, afternoon, or evening.
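Deriving season and time-slot labels from the collection timestamp can be sketched as below. The month-to-season mapping and the hour boundaries are illustrative assumptions; they would differ by hemisphere and by product requirements.

```python
from datetime import datetime

def time_labels(ts: datetime) -> dict:
    """Derive collection-time label information from a timestamp."""
    season = {12: "winter", 1: "winter", 2: "winter",
              3: "spring", 4: "spring", 5: "spring",
              6: "summer", 7: "summer", 8: "summer",
              9: "autumn", 10: "autumn", 11: "autumn"}[ts.month]
    if 5 <= ts.hour < 12:
        slot = "morning"
    elif 12 <= ts.hour < 18:
        slot = "afternoon"
    else:
        slot = "evening"
    return {"season": season, "time_slot": slot}

# An observation collected on a July evening.
labels = time_labels(datetime(2021, 7, 15, 19, 30))
```

The returned dictionary can be stored alongside the map data as its collection-time label information.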
As can be seen from the foregoing embodiments, in some examples the description information may include key observation data, which, as the key data in the map data, plays a key role in controlling the vehicle. On this basis, the label information of this embodiment further includes: label information used to characterize the features of the key observation data; that is, this embodiment also configures labels for the features of the key observation data, so that, based on the label information of those features, whether the map data needs to be updated can be determined more quickly and accurately from the key observation data.
Considering that the automatic driving system needs accurate environment perception to control the vehicle's motion correctly, one important kind of information for accurate environment perception is the appearance information of sceneries in the traffic scene. On this basis, in some examples the features of the key observation data include visual features of the key observation data; through the visual features of the key observation data, highly reliable appearance information of sceneries in the traffic scene can be provided to the automatic driving system. Optionally, the visual features include at least one of the following: SIFT (Scale-Invariant Feature Transform) features, ORB (Oriented FAST and Rotated BRIEF) features, or MSER (Maximally Stable Extremal Regions) features.
In this embodiment, the matching between the first description information and the second description information can also be implemented in various ways. As an example, since there are several kinds of description information, each kind can be matched against the corresponding kind. For instance, the description information may include features extracted from the observation data, so scenery features can be extracted from the observation data currently collected by the vehicle's sensors, and the currently extracted scenery features can be matched against the scenery features of the historical observation data in the map data. In other examples, the description information may include label information, and the label information determined from the currently collected observation data can be matched against the label information in the map data; since there may be several kinds of label information, each kind can be matched against the corresponding kind. For example, the label information characterizing the collection time determined from the currently collected observation data is matched against the collection-time label information in the map data, and the label information characterizing illumination determined from the currently collected observation data is matched against the illumination label information in the map data, and so on. In still other examples, the description information includes key observation data, and the key observation data determined from the currently collected observation data can be matched against the key observation data in the map data; for example, the two can be matched directly, or the matching can be based on the features of the key observation data or the label information of those features.
Based on the matching between the first description information and the second description information, a matching result of the two can be obtained, and whether to update the map data can be determined from the matching result. Optionally, an information matching condition, called the preset information matching condition in this embodiment, can be set as needed; the preset information matching condition indicates the degree of matching between the first and second description information, i.e., it is the condition indicating whether the map data needs to be updated. In practice, the preset information matching condition can be configured flexibly as needed; for example, since there are several kinds of first and second description information, the preset information matching condition may include any combination of matching conditions for one or more kinds of description information.
As an example, the preset information matching condition may be a condition related to label information; for example, taking the label information characterizing the collection time of the observation data of the traffic scene, the preset information matching condition may include: the collection-time label information of the currently collected observation data matches the collection-time label information of the map data. Alternatively, the description information may include features extracted from the observation data, and the preset information matching condition can be configured based on the features. Alternatively, the description information includes label information characterizing the category to which a specific scenery in the traffic scene belongs, and the preset information matching condition may include: the category label information of the specific scenery determined from the currently collected observation data matches the category label information of the scenery in the map data, and so on. Alternatively, the description information includes key observation data, and the preset information matching condition can be configured based on the key observation data; for example, the preset information matching condition may include: the key observation data in the currently collected observation data matches the key observation data in the map data, and so on. In other examples, the preset information matching condition may also be a condition formed from any combination of one or more of label information, features, and/or key observation data; in practice it can be set as needed, and this embodiment does not limit it.
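A composite preset information matching condition of the kind just described can be sketched as a simple predicate. Which labels participate (time slot and weather here) and the feature-match threshold are illustrative assumptions.

```python
def info_match(first, second, feature_matches, min_feature_matches=50):
    """Composite preset information matching condition: the time-slot
    and weather labels agree AND enough features were matched between
    the current observation data and the map data."""
    return (first["time_slot"] == second["time_slot"]
            and first["weather"] == second["weather"]
            and feature_matches >= min_feature_matches)

# Current observation labels vs. the map's labels.
current = {"time_slot": "dusk", "weather": "cloudy"}
map_info = {"time_slot": "morning", "weather": "sunny"}
needs_update = not info_match(current, map_info, feature_matches=12)
```

Here the labels disagree and only 12 features matched, so the predicate fails and an update of the map data would be triggered.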
In some examples, if the matching result of the first and second description information satisfies the preset information matching condition, it can be determined that the map data does not need to be updated; the real-time position information of the vehicle in the traffic scene can be determined based on the matching result of the first and second description information, and the vehicle's motion in the traffic scene can be controlled based on the real-time position information. If the first description information and the second description information do not match, it can be determined that the map data needs to be updated, and the map data can be updated according to the matching result and the currently collected observation data of the traffic scene.
In practice, the map can be updated in various ways. In some examples, the update method of the preset map data includes either of the following: replacing the second description information with the first description information; or fusing the first description information with the second description information.
As an example, in practice the current traffic scene may differ considerably from the traffic scene at map construction; for example, some sceneries in the current traffic scene have changed considerably, or the state of the traffic scene at map construction differs considerably from the current state, so that the map data no longer applies to the current traffic scene and becomes invalid; if the vehicle navigates with such map data, it cannot position itself accurately. On this basis, the second description information can be replaced with the first description information; since the second description information is replaced, i.e., the map data is invalid, new map data needs to be constructed from the currently collected observation data.
Optionally, replacing the second description information with the first description information, which represents map data invalidation, may be performed under the following condition: the matching result does not satisfy the preset information matching condition, and the real-time position information of the vehicle in the traffic scene cannot be determined from the matching result.
In practice, whether the real-time position information of the vehicle in the traffic scene can be determined from the matching result can be implemented in various ways as needed. For example, the description information may include features extracted from the observation data, and the preset information matching condition can be configured based on the features. As an example, if the map data is reliable, the features of the observation data currently collected by the vehicle should match the features of the historical observation data in the map data. If the map data is unreliable, the features of the currently collected observation data can hardly be matched well against the features of the historical observation data in the map data, so the vehicle cannot achieve stable positioning in the current traffic scene using the map data and cannot stably and accurately acquire its current position information in the traffic scene. Therefore, whether the real-time position information of the vehicle in the traffic scene can be determined may depend on the matching between the features of the currently collected observation data and the features of the historical observation data in the map data; for example, under unstable positioning, the vehicle drives continuously for a distance M (M can be set as needed) while the number of feature matches is less than a set value, or the average number of feature matches is less than a set value, and so on. Or it may be that the vehicle drives continuously for a distance L (L can be set as needed) without any matched features, and so on.
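The failure test just described, too few feature matches on average while driving a continuous distance M, can be written as follows. The per-sample window representation and the numeric thresholds are illustrative assumptions.

```python
def map_invalid(match_counts, distances, m=20.0, min_avg=10.0):
    """Decide the map data is invalid when, over at least m metres of
    continuous driving, the average number of feature matches per
    sample stays below min_avg.

    match_counts: feature matches per sample along the drive
    distances:    metres driven per sample (same length)
    """
    driven = matched = 0.0
    for c, d in zip(match_counts, distances):
        driven += d
        matched += c
    if driven < m:
        return False  # not enough driving to judge
    return matched / len(match_counts) < min_avg

# 25 m driven, averaging 3 matches per sample: positioning unreliable.
invalid = map_invalid([2, 4, 3, 3, 3], [5.0] * 5)
```

The same structure covers the "no matches at all over distance L" variant by setting the average threshold accordingly.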
In some examples, when the map data is invalid, the map data of the current traffic scene needs to be reconstructed. On this basis, after the step in which the matching result does not satisfy the preset information matching condition and the real-time position information of the vehicle in the traffic scene cannot be determined from the matching result, the method of this embodiment may further include: controlling the vehicle to move in the traffic scene so as to continuously collect observation data of local scenery of the traffic scene, and determining and storing description information using the continuously collected observation data. The solution of this embodiment can therefore, after determining that the map data is invalid, control the vehicle to execute the map data construction process, thereby generating usable map data for the current traffic scene.
As for the map update method, another optional embodiment is to fuse the first description information with the second description information. In practice, part of the data in the map data may be invalid: the map data is usable overall, but part of it needs updating; for example, some sceneries in the traffic scene have changed while other sceneries in the map data have not, i.e., the data of the other sceneries is still usable. From the perspective of vehicle positioning, when the vehicle's motion is controlled based on the map data and the vehicle's position information in the current traffic scene is determined based on the map data, positioning becomes unstable: at some times the vehicle can position itself continuously and stably, and at other times the positioning is unstable, inaccurate, or the position drifts. Alternatively, in practice it may also be that, when the map data was constructed, the observation data collected over the traffic scene was not dense enough and has certain gaps; on this basis, the currently collected observation data can serve as a supplement, adding the first description information to the map data. Accordingly, this embodiment can fuse the first description information with the second description information, i.e., use the first description information of the currently collected observation data to update certain invalid second description information in the map data. As an example, the fusion includes either of the following: merging the first description information into the preset map data; or modifying part of the second description information in the preset map data according to the first description information. For example, merging the first description information into the preset map data may add first description information of new sceneries, or add new description information to the description information of existing sceneries, and so on; modifying part of the second description information in the preset map data according to the first description information may modify the description information of some sceneries, including all or part of the description information of the modified sceneries, and so on.
In some examples, fusing the first description information with the second description information is performed under the following condition: the matching result does not satisfy the preset information matching condition, and it is determined from the matching result that the real-time position information of the vehicle in the traffic scene does not satisfy a preset stable state; the first description information is then fused with the second description information. In practice, the preset stable state can be implemented in various ways as needed; for example, the number of feature matches in the matching result is greater than a set threshold, i.e., the features of the currently collected local scenery can be matched against the scenery features in the map data, and the matches are plentiful, for example N features matched continuously, where N can be set as needed, such as 50. Since there are many feature matches, the vehicle can use the position information of these matched features in the map data to determine its current position information in the traffic scene. Optionally, if the map data is reliable, the vehicle can continuously determine accurate position information, and, as the vehicle moves continuously, its position information should not change greatly between two adjacent time periods; if, between two adjacent time periods, the real-time position information of the vehicle in the traffic scene determined based on the map data changes greatly, it can be determined that the real-time position information of the vehicle in the traffic scene does not satisfy the preset stable state. On this basis, the preset stable state may include that, within multiple set time periods, the difference in the vehicle's real-time position information in the traffic scene is less than a set threshold; if within multiple set time periods the difference in the vehicle's real-time position information in the traffic scene is greater than the set threshold, i.e., the vehicle's real-time position changes greatly, the preset stable state is not satisfied.
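The preset stable state described here can be sketched as a check on position jumps between adjacent time periods. The 2-D position representation and the jump threshold are illustrative assumptions.

```python
import math

def position_stable(positions, max_jump=1.5):
    """Return False when any two positions from adjacent time periods
    differ by more than max_jump metres, i.e. the position drifts."""
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if math.hypot(x1 - x0, y1 - y0) > max_jump:
            return False
    return True

# Smooth forward motion vs. a sudden 9-metre jump in localisation.
smooth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
drifting = [(0.0, 0.0), (1.0, 0.0), (9.0, 4.0)]
```

A failed stability check on `drifting` would, per the text, trigger fusion of the first description information into the map rather than full replacement.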
As an example, the real-time position information of the vehicle in the traffic scene not satisfying the preset stable state includes: the vehicle's real-time position displayed on the user interface drifting. In this embodiment, the user interface can display the vehicle's real-time position; if the real-time position information of the vehicle in the traffic scene does not satisfy the preset stable state, the real-time position of the vehicle displayed on the user interface drifts.
As an example, the description information may include features extracted from the observation data, and whether the map data needs to be updated can be determined based on whether the features match. For example, if the map data is reliable, the features of the observation data currently collected by the vehicle should match the features of the historical observation data in the map data. If the map data is unreliable, the features of the currently collected observation data can hardly be matched well against the features of the historical observation data in the map data, so the vehicle cannot achieve stable positioning in the current traffic scene using the map data and cannot stably and accurately acquire its current position information in the traffic scene; on this basis, the relationship between feature matching and whether the map needs updating can be configured.
在一些例子中,一个交通场景可以具有多份地图数据,例如在不同时间采集的观测数据所构建的不同地图数据,不同时间下所构建的地图数据可以具有不同的特点,例如早上和晚上的光线亮度不同,地图数据中的光线特征不同,不同光线亮度会引起不同的景物特征;或者,不同天气状态下所构建的地图数据也具有不同特点。基于此,在交通场景具有多份地图数据的情况下,各份地图数据可以具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;所述获取所述交通场景的地图数据,包括:在用户界面上显示所述多份地图数据的使用属性信息,通过所述用户界面获取用户选取的使用属性信息;根据用户选取的使用属性信息,从所述多份地图数据中确定出所述地图数据。本实施例中的用户界面可以是车辆的显示部件的用户界面,也可以是可与车辆通信的其他设备的用户界面,该其他设备独立于车辆,并且可与车辆通信,例如可以包括便携式设备,例如智能手机、个人计算机或PDA等,还可以包括可穿戴设备,如智能手表、智能手环、智能眼镜或智能头盔等等,本实施例对此不进行限定。基于此,车辆可以在进入交通场景之前或进入交通场景后,用户界面中可以向用户提供对该交通场景的多份地图数据的选择功能,其中,使用属性信息可以有多种实现方式,例如使用属性信息可以包括该份地图数据的时间、场景或天气等信息,基于此,提供用户界面显示出各份地图数据的使用属性信息后,用户可以基于使用属性信息选取出合适的地图数据。
在一些例子中,所述交通场景具有多份地图数据,各份地图数据具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;所述获取所述交通场景的预设地图数据,包括:将所述第一描述信息中的标签信息,与所述多份地图数据的使用属性信息进行匹配,根据所述匹配结果从所述多份地图数据中确定出所述地图数据。本实施例中,可以是由车辆自动从交通场景的多份地图数据中选取,基于使用属性信息,从多份地图数据中选取出与当前车辆所处交通场景的状态最匹配的地图数据。例如,使用属性信息可以包括各份地图数据的时间、场景或天气等信息,在选取时可以获取当前时间信息、当前天气信息、当前车辆所处位置的场景信息等,然后与各份地图数据的使用属性信息进行匹配。通过本实施例,在当前交通场景有多份地图数据的情况下,可以自动获取到最合适的地图数据,减少了用户操作,由于可以获取到可靠性较好的地图数据而达到保证车辆安全驾驶的效果。
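上述"将当前标签信息与多份地图数据的使用属性信息匹配"的自动选图过程,可以用如下假设性的Python示意代码表达。其中按匹配项计数的打分方式、字段名("时段""天气")及数据结构均为本示例的假设:

```python
def select_map(current_tags, candidate_maps):
    """current_tags: 当前观测数据确定出的标签信息,如 {"时段": "早上", "天气": "晴"};
    candidate_maps: [(地图标识, 使用属性信息字典), ...] 的列表。
    返回与当前标签匹配项最多的一份地图的标识。"""
    def score(attrs):
        # 逐项比对标签,统计与使用属性信息一致的项数
        return sum(1 for k, v in current_tags.items() if attrs.get(k) == v)
    best_map, _ = max(candidate_maps, key=lambda m: score(m[1]))
    return best_map

# 示例:同一交通场景的两份地图,按当前时段与天气自动选取
maps = [
    ("map_morning", {"时段": "早上", "天气": "晴"}),
    ("map_night", {"时段": "夜间", "天气": "晴"}),
]
chosen = select_map({"时段": "早上", "天气": "晴"}, maps)
```

实际***中还可对不同标签赋予不同权重(例如时段权重高于天气),此处仅示意最简单的等权匹配。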
在一些例子中,所述方法还可包括:将更新后的地图数据发送给服务端和/或其他车辆。本实施例中,车辆更新地图数据后,可以将更新后的地图数据发送给服务端,由服务端存储,服务端也可以根据需要发送给其他可与服务端通信的车辆;或者,车辆还可以发送给可通信的其他车辆,以供其他车辆使用。
接下来再通过一实施例进行说明。如图2A所示,是本申请实施例另一种地图数据处理方法的示意图,图2A中包括如下步骤:
在步骤210中,地图构建;地图构建流程用于构建交通场景的地图数据;一些例子中,可以是用户驾驶车辆在交通场景中行驶,车辆在行驶的过程中,车辆的传感器采集交通场景中各类景物的观测数据;在其他场景中,也可以是车辆在交通场景中自动低速地行驶,车辆在行驶的过程中,车辆的传感器采集交通场景中各类景物的观测数据。通过采集到的观测数据可以构建地图数据。
如图2B所示,示出了本实施例的一种地图构建的流程示意图;在步骤211中,可以利用车辆的传感器采集到的观测数据进行地图的构建;
例如,车辆传感器可以包括图像传感器等,视觉设备可以提供场景的外观信息,而双目视觉还能提供一定范围内的场景的深度信息,例如可以提供车辆周围一定范围内稠密的点云信息。车辆传感器还可以包括激光雷达等,其测量精度高,可以提供360度范围内的点云信息。以图像数据为例,图2B中,在步骤212中,可以获取图像传感器采集的图像数据,在步骤213中,可以对图像数据进行特征提取和匹配。例如,可以对ORB特征进行提取和匹配;其中ORB特征可以实时地进行提取,并可对所提取的特征进行光流跟踪和描述子匹配。其中,还可结合激光雷达采集的点云数据,确定图像中像素点的深度信息。在步骤214中,可以进行三维特征地图重构,例如,可以基于所提取的ORB特征,采用线性三角化,并利用重投影误差法对错误匹配或低质量三维特征进行去除。基于此,可以从观测数据提取出场景中各景物的特征,这些特征可以作为描述信息存储为地图数据,在后续利用地图数据进行导航时,与车辆实时采集的观测数据进行特征匹配。
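ORB等二进制描述子的匹配通常基于汉明距离。以下是一段假设性的纯Python示意代码,用小整数模拟二进制描述子(实际ORB描述子为256位),演示"对每个查询描述子取距离最近的候选并按阈值过滤"的暴力匹配思路;函数名与阈值max_dist均为本示例的假设:

```python
def hamming(d0, d1):
    """两个以整数表示的二进制描述子之间的汉明距离(异或后数1的个数)。"""
    return bin(d0 ^ d1).count("1")

def match_descriptors(query, train, max_dist=40):
    """query/train: 以整数表示的二进制描述子列表。
    对每个 query 描述子取汉明距离最近的 train 描述子,
    距离不超过 max_dist 的记为一次匹配,返回 (查询索引, 候选索引, 距离)。"""
    matches = []
    for qi, qd in enumerate(query):
        ti, dist = min(((i, hamming(qd, td)) for i, td in enumerate(train)),
                       key=lambda x: x[1])
        if dist <= max_dist:
            matches.append((qi, ti, dist))
    return matches

# 示例:8 位"描述子"的匹配
q = [0b10110010, 0b01011101]
t = [0b10110011, 0b11110000]
ms = match_descriptors(q, t)
```

工程实现中一般使用OpenCV等库的暴力匹配器(汉明范数)或近似最近邻索引,此处仅示意匹配准则本身。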
在步骤215中可以进行关键帧提取,即获取关键观测数据,例如,提取一系列有代表性的图像作为关键帧,用于更完整的表达地图环境信息。关键帧的提取可以采用多种方式,例如车辆在持续运动及采集过程中,在目标采样周期内,特征点跟踪和匹配数量下降到设定阈值,例如下降至30%等;或者,在目标采样周期内,还可以是图像特征点数量小于300个;或者,还可以是在目标采样周期内,车辆的姿态发生变化,例如行驶距离超过2米或者车辆的角度超过20度等等。
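步骤215所述的几类关键帧提取条件,可以合并为一个判断函数。以下为假设性的Python示意代码,其中参数名与各默认阈值(30%、300个特征点、2米、20度)取自正文举例,其余命名为本示例的假设:

```python
def is_keyframe(track_ratio, n_features, dist_moved, angle_turned,
                ratio_th=0.3, feat_th=300, dist_th=2.0, angle_th=20.0):
    """按正文描述的条件判断目标采样周期是否提取为关键帧(关键观测数据):
    - track_ratio: 特征点跟踪和匹配数量相对此前的比例,下降到阈值(如 30%)时提取;
    - n_features: 图像特征点数量,小于 300 个时提取;
    - dist_moved / angle_turned: 车辆姿态变化,行驶距离超过 2 米或转角超过 20 度时提取。"""
    if track_ratio <= ratio_th:
        return True   # 特征点跟踪和匹配数量下降到设定阈值
    if n_features < feat_th:
        return True   # 图像特征点数量不足
    if dist_moved > dist_th or angle_turned > angle_th:
        return True   # 车辆姿态发生预设变化
    return False
```

任一条件满足即提取关键帧,以保证关键观测数据更完整地表达地图环境信息。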
在步骤216中可以执行关键帧标签化,例如,对于所有的关键观测数据,可以进行多种视觉特征的提取,视觉特征可以包括SiFT(复杂几何纹理结构)、ORB(快速提取)、MSER(局部稳定区域特征)等,还可以利用这些特征配置标签信息,并进行词典构建。作为例子,关键观测数据的标签可以包括词袋特征标签、整体亮度特征标签或特征空间分布标签等。
在步骤217中执行用户场景标签化处理,即利用观测数据配置更多的其他描述信息,例如描述信息可以包括时间标签,例如获取地图构建时的时间信息,可以为地图数据添加时间标签,例如季节、月份、日期、时段(上午、中午、下午、傍晚、夜间等)等等。其中,可以基于时间标签配置地图数据的使用属性信息,该使用属性信息表征地图数据的时间信息。在其他例子中,描述信息还可以包括场景标签,例如可以基于观测数据确定出交通场景的环境状态,例如,室内停车场、室外停车场、半封闭停车场、办公室停车场、住宅停车场等等。其中,还可以基于场景标签配置地图数据的使用属性信息,该使用属性信息表征地图数据所属的交通场景的环境状态。
在步骤218中,可以执行地图存储;在地图存储阶段,若使用原始的观测数据进行存储会耗费大量的空间资源,且存在较多的冗余信息。而本实施例在前述处理中基于观测数据生成描述信息,其中,描述信息包括了大量的景物的特征,包括了基于观测数据确定出的关键观测数据,还基于观测数据生成大量的标签信息等,因此,本实施例可以存储描述信息作为地图数据,并对该地图数据进行长期维护。
本实施例中,同一个交通场景,可以存储多个不同时间段、不同季节、不同光照条件等不同标签组合的地图数据,用于补充不同场景变化导致的地图失效。也即是,一个交通场景,可以对应有多份地图数据。
在获取到地图数据后,在步骤230中,车辆可以利用地图进行地图定位;在步骤240中,可以执行定位状态的判断;在步骤250中,可以进行地图更新。下面结合图2C所示的本实施例的一种地图数据处理流程图进行说明。
由前述实施例可知,可以对一个交通场景构建出一份或多份地图数据。当车辆再次进入该交通场景时,可以基于已构建的地图数据进行导航,还可以确定是否需要更新地图数据。作为例子,当车辆进入交通场景之前或者在进入交通场景时,可以获取该交通场景的地图数据。例如,可以基于车辆的定位传感器确定车辆所处的地理位置,基于地理位置从本地或服务端查询匹配的地图数据。可选的,在交通场景具有多份地图数据的情况下,可以基于地图数据的使用属性信息,可以是车辆自动选取或者是由用户选择。在自动选取的情况下,可选的,可以由车辆的传感器采集当前交通场景的观测数据,并基于当前采集的观测数据的描述信息,与多份地图数据的使用属性信息进行匹配,查找出与当前交通场景匹配的地图数据。
或者,如图2C所示,在步骤261中,可以是用户指定地图数据;例如,可以在用户界面中展示多份地图数据的使用属性信息,由用户选择其中一份地图数据。可选的,还可以在用户界面上提供车辆当前位置和方向的设置功能,由用户设置车辆当前位置和方向。由于***存储多个场景的地图,通过用户指定地图数据,可以有效提高地图数据的正确性以及节省地图数据自动化匹配的时间。
车辆可以基于地图数据在交通场景中运动,在步骤262中车辆可以进行地图定位。例如,可以获取车辆的传感器当前采集的交通场景的局部景物的观测数据;基于所述观测数据确定所述局部景物的第一描述信息,将所述第一描述信息和地图数据中的第二描述信息进行匹配,得到匹配结果。若所述匹配结果满足预设信息匹配条件,根据 所述匹配结果确定所述车辆在所述交通场景中的实时位置信息,并基于所述实时位置信息控制所述车辆在所述交通场景中运动。
其中,地图数据可能存在完全失效或部分失效的情况而需要更新。在步骤263中,可以执行定位质量监控,以确定车辆基于地图数据定位的质量情况,基于此,在步骤208中,可以进行地图状态的判断。作为例子,在车辆选定地图数据后,车辆会基于地图数据开始行驶,并获取车辆的传感器当前采集的交通场景的局部景物的观测数据;基于所述观测数据确定所述局部景物的第一描述信息;例如,可以对实时采集的图像和点云数据等进行特征提取,提取出的特征作为第一描述信息,可以与地图数据中的特征进行匹配和位姿求解。当定位程序定位成功,则可以进行正常的自动驾驶控制,并且,车辆当前采集的交通场景的局部景物的观测数据也可以提供相应的环境先验信息。
在步骤263中可以执行定位质量监控,即车辆利用地图定位后,可以对地图状态进行监控。作为例子,地图定位状态可以根据需要设置多种状态,作为例子,可以包括正常定位,即地图数据有效可靠,无需更新。定位状态还可以包括不稳定定位,即地图数据中可能某些景物的描述信息失效,导致车辆利用地图数据定位时有时候定位可靠,有时候定位不可靠,此种情况下需要对地图数据进行更新。定位状态还可以包括定位失效,即地图数据无法供车辆可靠定位,该地图数据基本失效,需要重新构建该交通场景的地图数据。
可选的,可以采用多种实现方式表征定位状态。作为例子,正常定位的判断条件可以包括:特征的匹配数量大于设定阈值,即当前采集的局部景物的特征与地图数据中景物的特征能匹配上,而且匹配的数量丰富,例如持续有N个特征匹配上,N可以根据需要设置,例如50个等。
而不稳定状态和定位失效,即车辆未处于正常定位的状态,可选的,也可以有多种判断方式。例如,特征匹配的数量较少,例如车辆连续行驶距离M(M可以根据需要设置)内,特征匹配的数量小于设定值,或者平均特征匹配的数量小于设定值等;或者,还可以是连续行驶距离L(L可以根据需要设置)内未有匹配的特征。当未处于正常定位状态下,可以进一步执行步骤208的地图状态判断。
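正常定位、不稳定定位与定位失效三种状态的划分,可以用如下假设性的Python示意代码表达。其中"每周期匹配数均不低于阈值"判正常、"连续若干周期无匹配"判失效的具体判据,以及各阈值取值,均为本示例对正文条件的一种假设性组合:

```python
def localization_state(match_counts, normal_th=50, fail_streak=10):
    """match_counts: 最近若干采样周期的特征匹配数量序列。
    - 每个周期匹配数均不低于 normal_th(例如持续有 50 个特征匹配上)→ 正常定位;
    - 连续 fail_streak 个周期匹配数为 0 → 定位失效;
    - 其余情况 → 不稳定定位。"""
    if match_counts and all(c >= normal_th for c in match_counts):
        return "正常定位"
    streak = 0
    for c in match_counts:
        streak = streak + 1 if c == 0 else 0
        if streak >= fail_streak:
            return "定位失效"  # 连续多个周期未有匹配的特征
    return "不稳定定位"
```

实际***中也可按行驶距离窗口统计平均匹配数量,此处以周期序列示意判定层次。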
在步骤208中,可以执行地图状态判断。作为例子,可以结合多种标签信息进行判断。例如,可以通过时间和场景标签判断,若发现时间标签差异过大,例如时间变化大于设定天数;或者是时段不匹配,例如,地图数据的时间标签指示构建时间段为早上,而当前采集的观测数据确定出的标签信息为黄昏,两者时间段变化较大;或者,还可以是天气不匹配,例如,地图数据的时间标签指示构建时天气状态为晴天,而当前采集的观测数据确定出的标签信息为阴天等;若时间标签差异过大,则会触发地图重构,即可以执行步骤210的地图构建。可以理解,本实施例再次触发地图构建,所构建的地图数据也即是对已有地图数据的更新。
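步骤208中基于时间与场景标签触发地图重构的判断,可以用如下假设性的Python示意代码表达。其中字段名("天数""时段""天气")、天数差阈值max_days均为本示例的假设,分别对应正文中时间变化过大、时段不匹配、天气不匹配三种触发条件:

```python
def need_rebuild(map_tags, current_tags, max_days=180):
    """map_tags: 地图数据的标签信息;current_tags: 当前观测数据确定出的标签信息。
    "天数"为自某一统一基准日起算的天数,用于比较时间差异。"""
    if abs(current_tags["天数"] - map_tags["天数"]) > max_days:
        return True   # 时间标签差异过大,时间变化大于设定天数
    if current_tags["时段"] != map_tags["时段"]:
        return True   # 时段不匹配,如地图为"早上"而当前为"黄昏"
    if current_tags["天气"] != map_tags["天气"]:
        return True   # 天气不匹配,如地图为"晴天"而当前为"阴天"
    return False
```

返回True即触发步骤210的地图构建,对已有地图数据进行更新。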
可选的,若时间标签信息和场景标签信息无异常,可以基于关键观测数据和/或关键观测数据的标签信息进行判断;例如,可以对当前采集的观测数据确定出关键观测数据,基于关键观测数据确定出描述信息,如关键观测数据的标签信息,标签信息可以包括视觉特征,可以提取包括SiFT(复杂几何纹理结构)、ORB(快速特征提取)、MSER(局部稳定区域特征)等特征,并和地图数据中的和/或关键观测数据的标签信 息进行匹配判断。如果当前采集的关键观测数据与地图数据中的关键观测数据差异过大,或者是当前采集的关键观测数据的标签信息与地图数据中关键观测数据的标签信息差异过大,可以触发地图数据重新构建,即可以执行步骤210的地图构建。
本实施例的步骤204中可以执行地图存储,可选的,为了防止地图数据过多问题,可以根据地图数据的时间标签,自动合并和更新同一交通场景的地图数据。例如,新构建的地图数据的时间标签与已有旧地图数据的时间标签相同,可以利用新建的地图数据替换旧地图数据。或者,若新构建的地图数据的时间标签为夏天,而旧地图数据的时间标签为秋天,可以将新旧两份地图数据都保留。
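"同标签替换、异标签并存"的地图存储策略,可以用如下假设性的Python示意代码表达,其中以嵌套字典表示地图库的数据结构为本示例的假设:

```python
def store_map(map_library, scene_id, time_tag, new_map):
    """map_library: {场景标识: {时间标签: 地图数据}} 的地图库。
    同一交通场景下,以时间标签为键写入:
    标签相同则新地图覆盖旧地图,标签不同则两份地图并存。"""
    scene_maps = map_library.setdefault(scene_id, {})
    scene_maps[time_tag] = new_map  # 同标签覆盖,异标签保留
    return map_library

lib = {}
store_map(lib, "园区A", "夏天", "map_v1")
store_map(lib, "园区A", "秋天", "map_v2")   # 时间标签不同,两份地图都保留
store_map(lib, "园区A", "夏天", "map_v3")   # 时间标签相同,替换旧地图
```

这样既避免了地图数据无限增多,又保留了不同季节、时段等标签组合下的多份地图。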
本实施例方案可以适用于多种实际场景,例如,本实施例方法可用于解决车辆的辅助驾驶功能中的自动泊车问题,例如在常用的工业园区、居民生活小区、校园等不同场景的自动泊车***中,用于解决地图失效的问题。
例如,本实施例方案可以作为车辆领域中,实际生活场景的最后100米的自动驾驶解决方案,可应用于车辆的辅助泊车***中,通过上述地图失效判断、触发用户进行地图更新、地图重建和地图切换等地图数据管理方式,提高自动驾驶***的长时间运作稳定性。辅助泊车***通常需要用户对一个场景进行单次训练,但单次训练的特征地图存在一定的时效性:随着时间、天气、光照、季节等变化,地图数据中的特征会逐步失效。通过有效的地图失效管理方法,可以提升泊车***的有效使用时间,降低用户频繁建图的繁琐操作。
本实施例方案能实现泊车***地图的优化管理,例如,在地图构建阶段,在提取观测数据的特征时,可以通过自动捕捉地图数据的完整时效信息,标记地图数据的时间特性;地图构建时,按照设定条件,例如距离和角度阈值等考虑,可以提取不同的关键观测数据,用于记录交通场景的特性,从而生成一系列的独特的标签。通过对地图数据的特征的时效性、关键观测数据的特性描述,可以准确表达当前交通场景的标签属性。当再次来到相同的交通位置,只有少量的特征匹配,但观测数据与关键观测数据相似度较高,则可判定当前交通场景发生变化,触发用户进行地图数据更新策略;当只有极少量特征匹配,且关键观测数据和实时采集的观测数据差异明显,则可以判断当前地图处于失效状态;需要进行地图数据重新采集,并加入到地图库中;通过以上的地图更策略,能有效提升泊车地图的使用效率,以及应对场景变化的鲁棒性。
自动泊车技术是当前业界大量投入且预计最快实现L4级别量产的技术之一。通过推广自动泊车技术,将会带来巨大的产品收益。本实施例能够实现自动泊车地图失效的有效管理,能有效提高泊车过程地图定位功能的稳定性以及地图失效的用户使用体验。
如图3所示,本申请还提供一种地图数据构建方法实施例,可包括如下步骤:
在步骤302中,控制车辆在交通场景中运动,获取车辆的传感器采集的所述交通场景的全局景物的观测数据。
在步骤304中,基于所述观测数据确定出所述交通场景的全局景物的描述信息。
在步骤306中,将所述全局景物的描述信息存储为所述交通场景的地图数据;所述描述信息用于车辆再次进入所述交通场景时,确定车辆在所述交通场景的位置信息以及确定所述地图数据是否需要更新。
由上述实施例可见,本实施例构建出的地图数据中,由于包括有所述交通场景的全局景物的描述信息,因此利用该描述信息,可以供车辆再次进入所述交通场景时,确定车辆在所述交通场景的位置信息以及确定所述地图数据是否需要更新,使得车辆能够高效地管理车辆所在交通场景的地图数据,能及时自动地利用传感器当前采集的观测数据对已有的地图数据进行更新,既减少了用户手动更新地图数据的操作,也能够基于更新后的地图数据保障车辆安全行驶。
在一些例子中,所述传感器按照预设的采样周期采集所述观测数据;
所述描述信息包括关键观测数据,所述关键观测数据表征所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述关键观测数据包括:车辆姿态发生预设变化时,所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述传感器包括以下一种或者多种传感器:图像传感器、激光雷达传感器、毫米波雷达传感器或超声波传感器。
在一些例子中,所述关键观测数据包括以下一种或者多种:
由图像传感器在目标采样周期内采集的图像数据;
由激光雷达传感器在目标采样周期内采集的点云数据;
由毫米波雷达传感器在目标采样周期内采集的毫米波数据;
由超声波传感器在目标采样周期内采集的超声波数据。
在一些例子中,所述目标采样周期内采集的所述观测数据的数据特征符合预设的数据特征条件。
在一些例子中,由图像传感器在目标采样周期内采集的图像数据,满足预设的图像纹理条件。
在一些例子中,由激光雷达传感器在目标采样周期内采集的点云数据,满足预设的点云分布条件。
在一些例子中,所述目标采样周期是根据多个所述采样周期的所述观测数据确定的。
在一些例子中,所述描述信息包括标签信息;
所述标签信息包括以下一种或者多种:
用于表征所述交通场景中特定景物所属的类别的标签信息;
用于表征所述交通场景的观测数据的采集时间的标签信息。
在一些例子中,所述标签信息还包括:
用于表征所述关键观测数据的特征的标签信息。
在一些例子中,所述关键观测数据的特征包括所述关键观测数据的视觉特征;
所述视觉特征包括以下至少一种:SiFT特征、ORB特征或MSER特征。
上述方法实施例可以通过软件实现,也可以通过硬件或者软硬件结合的方式实现。以软件实现为例,作为一个逻辑意义上的装置,是通过其所在装置的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言,如图4所示,为实施本实施例地图数据处理装置400的一种硬件结构图,除了图4所示的处理器401以及存储器402之外,实施例中用于实施本地图数据处理方法实施例的地图数据处理装置,通常根据该地图数据处理装置的实际功能,还可以包括其他硬件,对此不再赘述。
本实施例中,所述处理器401执行所述计算机程序时实现以下步骤:
获取车辆的传感器当前采集的交通场景的局部景物的观测数据;
基于所述观测数据确定所述局部景物的第一描述信息;
获取所述交通场景的地图数据,其中,所述地图数据包括基于对所述交通场景的全局景物的历史观测数据生成的第二描述信息;
将所述第一描述信息和所述第二描述信息进行匹配,得到匹配结果;
根据所述匹配结果和当前采集的交通场景的所述观测数据,更新所述地图数据。
在一些例子中,所述处理器401执行所述计算机程序时还实现以下步骤:
若所述匹配结果满足预设信息匹配条件,根据所述匹配结果确定所述车辆在所述交通场景中的实时位置信息,并基于所述实时位置信息控制所述车辆在所述交通场景中运动。
在一些例子中,所述处理器401执行所述根据所述匹配结果和当前采集的交通场景的所述观测数据,更新所述地图数据的步骤,包括:
若所述匹配结果不满足预设信息匹配条件,根据所述第一描述信息,更新所述地图数据。
在一些例子中,所述地图数据的更新方式,包括以下任一:
将所述第一描述信息替换所述第二描述信息;或,
将所述第一描述信息与所述第二描述信息融合。
在一些例子中,所述将所述第一描述信息替换所述第二描述信息,是在如下条件下执行的:
所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果未确定出所述车辆在所述交通场景中的实时位置信息。
在一些例子中,在所述处理器401执行所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果未确定出所述车辆在所述交通场景中的实时位置信息的步骤之后,所述处理器401还执行:
控制所述车辆在所述交通场景内运动以持续采集所述交通场景的局部景物的观测数据,利用持续采集的所述观测数据确定出描述信息并存储。
在一些例子中,所述将所述第一描述信息与所述第二描述信息融合,是在如下条件下执行的:
所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果确定出所述车辆在所述交通场景中的实时位置信息不满足预设稳定状态,将所述第一描述信息与所述第二描述信息融合。
在一些例子中,所述车辆在所述交通场景中的实时位置信息不满足预设稳定状态,包括:用户界面上显示的车辆实时位置出现漂移。
在一些例子中,所述融合的方式,包括以下任一:
将所述第一描述信息合并至所述地图数据中;或,
根据所述第一描述信息更改所述地图数据中的部分第二描述信息。
在一些例子中,所述传感器按照预设的采样周期采集所述观测数据;
所述描述信息包括关键观测数据,所述关键观测数据表征所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述关键观测数据包括:车辆姿态发生预设变化时,所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述传感器包括以下一种或者多种传感器:图像传感器、激光雷达传感器、毫米波雷达传感器或超声波传感器。
在一些例子中,所述关键观测数据包括以下一种或者多种:
由图像传感器在目标采样周期内采集的图像数据;
由激光雷达传感器在目标采样周期内采集的点云数据;
由毫米波雷达传感器在目标采样周期内采集的毫米波数据;
由超声波传感器在目标采样周期内采集的超声波数据。
在一些例子中,所述目标采样周期内采集的所述观测数据的数据特征符合预设的数据特征条件。
在一些例子中,由图像传感器在目标采样周期内采集的图像数据,满足预设的图像纹理条件。
在一些例子中,由激光雷达传感器在目标采样周期内采集的点云数据,满足预设的点云分布条件。
在一些例子中,所述目标采样周期是根据多个所述采样周期的所述观测数据确定的。
在一些例子中,所述描述信息包括标签信息;
所述标签信息包括以下一种或者多种:
用于表征所述交通场景中特定景物所属的类别的标签信息;
用于表征所述交通场景的观测数据的采集时间的标签信息。
在一些例子中,所述标签信息还包括:
用于表征所述关键观测数据的特征的标签信息。
在一些例子中,所述关键观测数据的特征包括所述关键观测数据的视觉特征;
所述视觉特征包括以下至少一种:SiFT特征、ORB特征或MSER特征。
在一些例子中,所述描述信息包括标签信息和/或关键观测数据,所述处理器401执行将所述第一描述信息和所述第二描述信息进行匹配,得到匹配结果,包括:
将所述当前采集的所述观测数据确定的所述标签信息和所述地图数据中的所述标签信息进行匹配;和/或,
将所述当前采集的所述观测数据确定的所述关键观测数据和所述地图数据中的所述关键观测数据进行匹配。
在一些例子中,所述交通场景具有多份地图数据,各份地图数据具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;
所述处理器401执行所述获取所述交通场景的地图数据,包括:
在用户界面上显示所述多份地图数据的使用属性信息,通过所述用户界面获取用户选取的使用属性信息;
根据用户选取的使用属性信息,从所述多份地图数据中确定出所述地图数据。
在一些例子中,所述交通场景具有多份地图数据,各份地图数据具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;
所述处理器401执行所述获取所述交通场景的地图数据,包括:
将所述第一描述信息中的标签信息,与所述多份地图数据的使用属性信息进行匹配,根据所述匹配结果从所述多份地图数据中确定出所述地图数据。
在一些例子中,所述处理器401还执行:
将更新后的地图数据发送给服务端和/或其他车辆。
如图5所示,是本实施例提供的一种地图数据构建装置500的硬件结构图,该地图数据构建装置包括处理器501、存储器502,以及存储在所述存储器上可被所述处理器执行的计算机程序,所述处理器501执行所述计算机程序时实现如下步骤:
控制车辆在交通场景中运动,获取车辆的传感器采集的所述交通场景的全局景物的观测数据;
基于所述观测数据确定出所述交通场景的全局景物的描述信息;
将所述全局景物的描述信息存储为所述交通场景的地图数据;所述描述信息用于车辆再次进入所述交通场景时,确定车辆在所述交通场景的位置信息以及确定所述地图数据是否需要更新。
在一些例子中,所述传感器按照预设的采样周期采集所述观测数据;
所述描述信息包括关键观测数据,所述关键观测数据表征所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述关键观测数据包括:车辆姿态发生预设变化时,所述传感器在目标采样周期内采集的所述观测数据。
在一些例子中,所述传感器包括以下一种或者多种传感器:图像传感器、激光雷达传感器、毫米波雷达传感器或超声波传感器。
在一些例子中,所述关键观测数据包括以下一种或者多种:
由图像传感器在目标采样周期内采集的图像数据;
由激光雷达传感器在目标采样周期内采集的点云数据;
由毫米波雷达传感器在目标采样周期内采集的毫米波数据;
由超声波传感器在目标采样周期内采集的超声波数据。
在一些例子中,所述目标采样周期内采集的所述观测数据的数据特征符合预设的数据特征条件。
在一些例子中,由图像传感器在目标采样周期内采集的图像数据,满足预设的图像纹理条件。
在一些例子中,由激光雷达传感器在目标采样周期内采集的点云数据,满足预设的点云分布条件。
在一些例子中,所述目标采样周期是根据多个所述采样周期的所述观测数据确定的。
在一些例子中,所述描述信息包括标签信息;
所述标签信息包括以下一种或者多种:
用于表征所述交通场景中特定景物所属的类别的标签信息;
用于表征所述交通场景的观测数据的采集时间的标签信息。
在一些例子中,所述标签信息还包括:
用于表征所述关键观测数据的特征的标签信息。
在一些例子中,所述关键观测数据的特征包括所述关键观测数据的视觉特征;
所述视觉特征包括以下至少一种:SiFT特征、ORB特征或MSER特征。
在一些例子中,所述处理器501还执行:将所述地图数据发送给服务端和/或其他车辆。
如图6所示,本申请实施例还提供一种车辆600,包括:一个或多个传感器610;以及地图数据处理装置400和/或地图数据构建装置500。
本申请实施例还提供一种计算机可读存储介质,所述可读存储介质上存储有若干计算机指令,所述计算机指令被执行时实现任一实施例所述地图数据处理方法的步骤。
本申请实施例还提供一种计算机可读存储介质,所述可读存储介质上存储有若干计算机指令,所述计算机指令被执行时实现任一实施例所述地图数据构建方法的步骤。
本说明书实施例可采用在一个或多个其中包含有程序代码的存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机可用存储介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括但不限于:相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本发明实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (42)

  1. 一种地图数据处理方法,其特征在于,所述方法包括:
    获取车辆的传感器当前采集的交通场景的局部景物的观测数据;
    基于所述观测数据确定所述局部景物的第一描述信息;
    获取所述交通场景的地图数据,其中,所述地图数据包括基于对所述交通场景的全局景物的历史观测数据生成的第二描述信息;
    将所述第一描述信息和所述第二描述信息进行匹配,得到匹配结果;
    根据所述匹配结果和当前采集的交通场景的所述观测数据,更新所述地图数据。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述匹配结果满足预设信息匹配条件,根据所述匹配结果确定所述车辆在所述交通场景中的实时位置信息,并基于所述实时位置信息控制所述车辆在所述交通场景中运动。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述匹配结果和当前采集的交通场景的所述观测数据,更新所述地图数据,包括:
    若所述匹配结果不满足预设信息匹配条件,根据所述第一描述信息,更新所述地图数据。
  4. 根据权利要求1或3所述的方法,其特征在于,所述地图数据的更新方式,包括以下任一:
    将所述第一描述信息替换所述第二描述信息;或,
    将所述第一描述信息与所述第二描述信息融合。
  5. 根据权利要求4所述的方法,其特征在于,所述将所述第一描述信息替换所述第二描述信息,是在如下条件下执行的:
    所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果未确定出所述车辆在所述交通场景中的实时位置信息。
  6. 根据权利要求5所述的方法,其特征在于,在所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果未确定出所述车辆在所述交通场景中的实时位置信息的步骤之后,所述方法还包括:
    控制所述车辆在所述交通场景内运动以持续采集所述交通场景的局部景物的观测数据,利用持续采集的所述观测数据确定出描述信息并存储。
  7. 根据权利要求4所述的方法,其特征在于,所述将所述第一描述信息与所述第二描述信息融合,是在如下条件下执行的:
    所述匹配结果不满足预设信息匹配条件,且根据所述匹配结果确定出所述车辆在所述交通场景中的实时位置信息不满足预设稳定状态,将所述第一描述信息与所述第二描述信息融合。
  8. 根据权利要求7所述的方法,其特征在于,所述车辆在所述交通场景中的实时位置信息不满足预设稳定状态,包括:用户界面上显示的车辆实时位置出现漂移。
  9. 根据权利要求4所述的方法,其特征在于,所述融合的方式,包括以下任一:
    将所述第一描述信息合并至所述地图数据中;或,
    根据所述第一描述信息更改所述地图数据中的部分第二描述信息。
  10. 根据权利要求1所述的方法,其特征在于,所述传感器按照预设的采样周期采集所述观测数据;
    所述描述信息包括关键观测数据,所述关键观测数据表征所述传感器在目标采样周期内采集的所述观测数据。
  11. 根据权利要求10所述的方法,其特征在于,所述关键观测数据包括:车辆姿态发生预设变化时,所述传感器在目标采样周期内采集的所述观测数据。
  12. 根据权利要求10所述的方法,其特征在于,所述传感器包括以下一种或者多种传感器:图像传感器、激光雷达传感器、毫米波雷达传感器或超声波传感器。
  13. 根据权利要求12所述的方法,其特征在于,所述关键观测数据包括以下一种或者多种:
    由图像传感器在目标采样周期内采集的图像数据;
    由激光雷达传感器在目标采样周期内采集的点云数据;
    由毫米波雷达传感器在目标采样周期内采集的毫米波数据;
    由超声波传感器在目标采样周期内采集的超声波数据。
  14. 根据权利要求13所述的方法,其特征在于,所述目标采样周期内采集的所述观测数据的数据特征符合预设的数据特征条件。
  15. 根据权利要求14所述的方法,其特征在于,由图像传感器在目标采样周期内采集的图像数据,满足预设的图像纹理条件。
  16. 根据权利要求14所述的方法,其特征在于,由激光雷达传感器在目标采样周期内采集的点云数据,满足预设的点云分布条件。
  17. 根据权利要求14所述的方法,其特征在于,所述目标采样周期是根据多个所述采样周期的所述观测数据确定的。
  18. 根据权利要求1或10所述的方法,其特征在于,所述描述信息包括标签信息;
    所述标签信息包括以下一种或者多种:
    用于表征所述交通场景中特定景物所属的类别的标签信息;
    用于表征所述交通场景的观测数据的采集时间的标签信息。
  19. 根据权利要求18所述的方法,其特征在于,所述标签信息还包括:
    用于表征所述关键观测数据的特征的标签信息。
  20. 根据权利要求19所述的方法,其特征在于,所述关键观测数据的特征包括所述关键观测数据的视觉特征;
    所述视觉特征包括以下至少一种:SiFT特征、ORB特征或MSER特征。
  21. 根据权利要求1或18所述的方法,其特征在于,所述描述信息包括标签信息和/或关键观测数据,所述将所述第一描述信息和所述第二描述信息进行匹配,得到匹配结果,包括:
    将所述当前采集的所述观测数据确定的所述标签信息和所述地图数据中的所述标签信息进行匹配;和/或,
    将所述当前采集的所述观测数据确定的所述关键观测数据和所述地图数据中的所述关键观测数据进行匹配。
  22. 根据权利要求18所述的方法,其特征在于,所述交通场景具有多份地图数据,各份地图数据具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;
    所述获取所述交通场景的地图数据,包括:
    在用户界面上显示所述多份地图数据的使用属性信息,通过所述用户界面获取用户选取的使用属性信息;
    根据用户选取的使用属性信息,从所述多份地图数据中确定出所述地图数据。
  23. 根据权利要求18所述的方法,其特征在于,所述交通场景具有多份地图数据,各份地图数据具有使用属性信息,所述使用属性信息是基于所述标签信息确定的;
    所述获取所述交通场景的地图数据,包括:
    将所述第一描述信息中的标签信息,与所述多份地图数据的使用属性信息进行匹配,根据所述匹配结果从所述多份地图数据中确定出所述地图数据。
  24. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    将更新后的地图数据发送给服务端和/或其他车辆。
  25. 一种地图数据构建方法,其特征在于,所述方法包括:
    控制车辆在交通场景中运动,获取车辆的传感器采集的所述交通场景的全局景物的观测数据;
    基于所述观测数据确定出所述交通场景的全局景物的描述信息;
    将所述全局景物的描述信息存储为所述交通场景的地图数据;所述描述信息用于车辆再次进入所述交通场景时,确定车辆在所述交通场景的位置信息以及确定所述地图数据是否需要更新。
  26. 根据权利要求25所述的方法,其特征在于,所述传感器按照预设的采样周期采集所述观测数据;
    所述描述信息包括关键观测数据,所述关键观测数据表征所述传感器在目标采样周期内采集的所述观测数据。
  27. 根据权利要求26所述的方法,其特征在于,所述关键观测数据包括:车辆姿态发生预设变化时,所述传感器在目标采样周期内采集的所述观测数据。
  28. 根据权利要求26所述的方法,其特征在于,所述传感器包括以下一种或者多种传感器:图像传感器、激光雷达传感器、毫米波雷达传感器或超声波传感器。
  29. 根据权利要求28所述的方法,其特征在于,所述关键观测数据包括以下一种或者多种:
    由图像传感器在目标采样周期内采集的图像数据;
    由激光雷达传感器在目标采样周期内采集的点云数据;
    由毫米波雷达传感器在目标采样周期内采集的毫米波数据;
    由超声波传感器在目标采样周期内采集的超声波数据。
  30. 根据权利要求25所述的方法,其特征在于,所述目标采样周期内采集的所述观测数据的数据特征符合预设的数据特征条件。
  31. 根据权利要求30所述的方法,其特征在于,由图像传感器在目标采样周期内采集的图像数据,满足预设的图像纹理条件。
  32. 根据权利要求30所述的方法,其特征在于,由激光雷达传感器在目标采样周期内采集的点云数据,满足预设的点云分布条件。
  33. 根据权利要求30所述的方法,其特征在于,所述目标采样周期是根据多个所述采样周期的所述观测数据确定的。
  34. 根据权利要求25或26所述的方法,其特征在于,所述描述信息包括标签信息;
    所述标签信息包括以下一种或者多种:
    用于表征所述交通场景中特定景物所属的类别的标签信息;
    用于表征所述交通场景的观测数据的采集时间的标签信息。
  35. 根据权利要求34所述的方法,其特征在于,所述标签信息还包括:
    用于表征所述关键观测数据的特征的标签信息。
  36. 根据权利要求35所述的方法,其特征在于,所述关键观测数据的特征包括所述关键观测数据的视觉特征;
    所述视觉特征包括以下至少一种:SiFT特征、ORB特征或MSER特征。
  37. 根据权利要求25所述的方法,其特征在于,所述方法还包括:
    将所述地图数据发送给服务端和/或其他车辆。
  38. 一种地图数据处理装置,其特征在于,所述装置包括处理器、存储器、存储在所述存储器上可被所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现权利要求1至24任一所述的方法。
  39. 一种地图数据构建装置,其特征在于,所述装置包括处理器、存储器、存储在所述存储器上可被所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现权利要求25至37任一所述的方法。
  40. 一种车辆,其特征在于,所述车辆包括:
    一个或多个传感器;以及
    权利要求38所述的地图数据处理装置,和/或权利要求39所述的地图数据构建装置。
  41. 一种计算机可读存储介质,其特征在于,所述可读存储介质上存储有若干计算机指令,所述计算机指令被执行时实现权利要求1至24任一项所述地图数据处理方法的步骤。
  42. 一种计算机可读存储介质,其特征在于,所述可读存储介质上存储有若干计算机指令,所述计算机指令被执行时实现权利要求25至37任一项所述地图数据构建方法的步骤。
PCT/CN2021/123041 2021-10-11 2021-10-11 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质 WO2023060386A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/123041 WO2023060386A1 (zh) 2021-10-11 2021-10-11 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质
CN202180101632.5A CN118019958A (zh) 2021-10-11 2021-10-11 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/123041 WO2023060386A1 (zh) 2021-10-11 2021-10-11 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023060386A1 true WO2023060386A1 (zh) 2023-04-20

Family

ID=85987171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123041 WO2023060386A1 (zh) 2021-10-11 2021-10-11 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN118019958A (zh)
WO (1) WO2023060386A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110000786A (zh) * 2019-04-12 2019-07-12 珠海市一微半导体有限公司 一种基于视觉机器人的历史地图利用方法
CN110146099A (zh) * 2019-05-31 2019-08-20 西安工程大学 一种基于深度学习的同步定位与地图构建方法
US20190331499A1 (en) * 2017-02-02 2019-10-31 Robert Bosch Gmbh Method and device for updating a digital map
CN112650220A (zh) * 2020-12-04 2021-04-13 东风汽车集团有限公司 一种车辆自动驾驶方法、车载控制器及***
CN112960000A (zh) * 2021-03-15 2021-06-15 新石器慧义知行智驰(北京)科技有限公司 高精地图更新方法、装置、电子设备和存储介质
CN113190564A (zh) * 2020-01-14 2021-07-30 阿里巴巴集团控股有限公司 地图更新***、方法及设备
CN113468941A (zh) * 2021-03-11 2021-10-01 长沙智能驾驶研究院有限公司 障碍物检测方法、装置、设备及计算机存储介质

Also Published As

Publication number Publication date
CN118019958A (zh) 2024-05-10

Similar Documents

Publication Publication Date Title
Mühlfellner et al. Summary maps for lifelong visual localization
CN111928862B (zh) 利用激光雷达和视觉传感器融合在线构建语义地图的方法
CN107229690B (zh) 基于路侧传感器的高精度动态地图数据处理***及方法
US20200309541A1 (en) Localization and mapping methods using vast imagery and sensory data collected from land and air vehicles
CN108965687B (zh) 拍摄方向识别方法、服务器及监控方法、***及摄像设备
Cummins et al. Highly scalable appearance-only SLAM-FAB-MAP 2.0.
Ma et al. Find your way by observing the sun and other semantic cues
Hong et al. TextPlace: Visual place recognition and topological localization through reading scene texts
US20210004021A1 (en) Generating training data for deep learning models for building high definition maps
CN103366250A (zh) 基于三维实景数据的市容环境检测方法及***
CN112712138A (zh) 一种图像处理方法、装置、设备及存储介质
WO2021202784A1 (en) Systems and methods for augmenting perception data with supplemental information
Panphattarasap et al. Automated map reading: image based localisation in 2-D maps using binary semantic descriptors
Cheng et al. Modeling weather and illuminations in driving views based on big-video mining
Tao et al. SeqPolar: Sequence matching of polarized LiDAR map with HMM for intelligent vehicle localization
CN105930381A (zh) 基于混合数据库架构的全球Argo数据存储与更新方法
CN115203352A (zh) 车道级定位方法、装置、计算机设备和存储介质
Han et al. A novel loop closure detection method with the combination of points and lines based on information entropy
Vallone et al. Danish airs and grounds: A dataset for aerial-to-street-level place recognition and localization
WO2023060386A1 (zh) 地图数据处理、地图数据构建方法、装置、车辆及计算机可读存储介质
Li et al. An efficient point cloud place recognition approach based on transformer in dynamic environment
CN104700384A (zh) 基于增强现实技术的展示***及展示方法
CN116258820B (zh) 大规模城市点云数据集与建筑单体化构建方法及相关装置
CN117011413A (zh) 道路图像重建方法、装置、计算机设备和存储介质
CN115311867B (zh) 隧道场景的定位方法、装置、计算机设备、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21960140

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE