CN112099481A - Method and system for constructing road model

Method and system for constructing road model

Info

Publication number
CN112099481A
Authority
CN
China
Prior art keywords
data
statistical model
time
sensor
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910525727.4A
Other languages
Chinese (zh)
Inventor
M·多梅林
李千山
田文鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Priority to CN201910525727.4A
Publication of CN112099481A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0257 - Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

Methods and apparatus are provided for constructing a real-time road model. The method may comprise, and the apparatus may be configured to perform, the following: acquiring, at a first time instance, a first set of sensor data output via a plurality of different types of sensors onboard a vehicle; feeding the first set of sensor data to a generic statistical model to form generic statistical model format data for the first time instance; extracting, from a map, map data within a threshold range around the position of the vehicle at the first time instance; correcting each set of sensor data in the generic statistical model format data for the first time instance based on the map data; fusing the generic statistical model format data for the first time instance with historical generic statistical model format data to update the generic statistical model format data for the first time instance; comparing the updated generic statistical model format data for the first time instance with an implicit model to identify objects; and combining the identified objects with the map data to form the real-time road model.

Description

Method and system for constructing road model
Technical Field
The present invention relates to constructing road models, and more particularly, to constructing road models using real-time sensor data and offline maps.
Background
An autonomous vehicle (also called a driverless car, self-driving car, or robotic car) is a vehicle that is able to sense its surroundings and navigate without human input. Autonomous driving vehicles (hereinafter "ADVs") use various technologies to detect their surroundings, such as radar, laser, GPS, odometry, and computer vision. Advanced control systems interpret the sensed information to identify an appropriate navigation path, as well as obstacles and relevant landmarks.
More specifically, ADVs collect sensor data from various onboard sensors, such as vision-type sensors (e.g., cameras), radar-type ranging sensors (such as lidar, millimeter-wave radar, ultrasonic radar), and so forth. Based on the sensor data, the ADV may build a real-time road model around it. The road model may include a variety of information including, but not limited to, lane information (such as the location, type, width, etc. of the lane lines), traffic lights, traffic signs, road boundaries, etc. By comparing the constructed road model with a previously obtained road model, such as a road model included in a High Definition (HD) map provided by an HD map provider, the ADV may more accurately determine its location in the road. At the same time, ADVs may also identify objects around them, such as vehicles, pedestrians, and buildings, based on sensor data. The ADV may make appropriate driving decisions, such as lane changes, acceleration, braking, etc., based on the determined road model and the identified surrounding objects.
As is known in the art, different types of sensors produce data in different forms or formats. In processing sensor data from different sensors, each type of sensor data must be processed separately. Thus, for each type of sensor data, one or more models for storing that type of sensor data must be built for object identification. Currently, there is no model that can simultaneously support multiple different types of sensor data.
Furthermore, a single set of sensor data obtained at a single moment is unstable and unreliable. For example, at some point in time an object on the road (such as another vehicle) may occlude a sensor, an object of interest (such as a lane marker) may be occluded by other vehicles on the road, or a sensor may jitter because the vehicle itself is jittering; in any of these cases, the incorrect sensor data obtained by the ADV's sensors may result in an incorrect road model. Thus, it is desirable to construct a real-time road model by comparing the sensor data for that time instant with a priori information (e.g., maps) to correct apparently incorrect sensor data, and by fusing the sensor data for that time instant with sets of sensor data from a plurality of previous time instants to fit the data.
It is therefore desirable to provide a solution that can support multiple types of models of sensor data simultaneously and can combine historical data for real-time road model construction to overcome the above-mentioned drawbacks.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to an embodiment of the invention, there is provided a method for constructing a real-time road model, including: acquiring, at a first time, a first set of sensor data output via a plurality of different types of sensors onboard a vehicle; feeding the first set of sensor data to a generic statistical model to form generic statistical model format data for the first time instance, the generic statistical model format comprising a plurality of sensor data sets, wherein each sensor data set of the plurality of sensor data sets is output by one sensor of the different types of sensors; extracting map data from a map within a threshold range around a location at which the vehicle is located at the first time; correcting one or more of a plurality of sensor data sets in generic statistical model format data for the first time instance based on the map data; fusing the general statistical model format data aiming at the first moment with historical general statistical model format data to update the general statistical model format data aiming at the first moment; comparing the updated generic statistical model format data for the first time instance to an implicit model to identify an object, the implicit model comprising a plurality of sets of sensor data samples describing a predefined object, wherein each set of sensor data samples in the plurality of sets of sensor data samples comprises a pre-acquired set of sensor data samples describing the predefined object by one of the different types of sensors; and combining the identified objects with the map data to form the real-time road model.
According to an embodiment of the present invention, there is provided an apparatus for constructing a real-time road model, including: a sensor data acquisition module configured to acquire a first set of sensor data output by different types of sensors onboard a vehicle at a first time; a data feed module configured to feed the first set of sensor data to a generic statistical model to form generic statistical model format data for the first time instance, the generic statistical model format comprising a plurality of sensor data sets, wherein each sensor data set of the plurality of sensor data sets is output by one sensor of the different types of sensors; a map data extraction module configured to extract map data from a map within a threshold range around a location at which the vehicle is located at the first time; a data correction module configured to correct one or more of a plurality of sets of sensor data in generic statistical model format data for the first moment in time based on the map data; a data fusion module configured to fuse the generic statistical model format data for the first time with historical generic statistical model format data to update the generic statistical model format data for the first time; an object identification module configured to compare the updated generic statistical model format data for the first time instance to an implicit model to identify an object, the implicit model comprising a plurality of sets of sensor data samples describing a predefined object, wherein each set of sensor data samples in the plurality of sets of sensor data samples comprises a pre-acquired set of sensor data samples describing the predefined object by one of the different types of sensors; and a real-time road model formation module configured to combine the identified objects with the map data to form the real-time road model.
According to an embodiment of the present invention, there is provided a vehicle including: a plurality of different types of sensors; and the above apparatus for constructing the real-time road model. The plurality of different types of sensors includes a vision-type sensor and a radar-type ranging sensor, the vision-type sensor including a camera and the radar-type ranging sensor including one or more of a lidar, an ultrasonic radar, and a millimeter-wave radar.
With the method, the apparatus, and the vehicle disclosed herein, multiple different types of sensor data can be supported in one model, improving the accuracy with which the vehicle is positioned within the road model. Furthermore, by taking historical data into account, the accuracy of the resulting sensor data can be greatly improved. In addition, using map data as a priori information to eliminate obvious errors in the real-time sensor data makes the constructed real-time road model more reliable.
These and other features and advantages will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only some typical aspects of this invention and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
FIG. 1 shows a schematic representation of an autonomous vehicle 100 having different types of sensors traveling on a road according to one embodiment of the invention.
FIG. 2 shows a flow diagram of a method 200 for constructing a real-time road model according to one embodiment of the invention.
FIG. 3 shows schematic diagrams 301 and 302 for generic statistical model format data fusion according to one embodiment of the present invention.
FIG. 4 shows a flow diagram of a method 400 for fusing generic statistical model format data for a first time instant with historical generic statistical model format data according to the embodiment of FIG. 3.
Fig. 5 is a block diagram of an apparatus 500 for constructing a real-time road model according to an embodiment of the present invention.
FIG. 6 shows a block diagram of an exemplary computing device 600 according to one embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the attached drawings, and the features of the present invention will be further apparent from the following detailed description.
The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments of the invention. The scope of the invention is not, however, limited to these embodiments, but is defined by the appended claims. Accordingly, embodiments other than those shown in the drawings, such as modified versions of the illustrated embodiments, are encompassed by the present invention.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
For convenience of explanation, only embodiments in which the technical solution of the present invention is applied to a "vehicle" or an "autonomous vehicle" (the two terms are used interchangeably hereinafter) are described in detail herein, but a person skilled in the art will fully understand that the technical solution of the present invention can be applied to any craft capable of unmanned autonomous driving, such as an airplane, a helicopter, a train, a subway, a ship, etc. The term "A or B" as used throughout this specification, unless otherwise specified, refers to "A and B" as well as "A or B", rather than meaning that A and B are mutually exclusive.
General statistical model
When an autonomous vehicle travels on a road, it needs to know the road conditions in real time. To this end, various types of sensors are mounted on the vehicle to act as its "eyes". Sensors in wide use today include vision-type sensors (e.g., cameras) and radar-type ranging sensors (such as lidar, millimeter-wave radar, and ultrasonic radar). Each sensor has its own strengths and weaknesses. A camera is low in cost, can distinguish different objects, and has advantages in measuring object height and width, recognizing lane lines, and recognizing pedestrians; it is indispensable for functions such as lane departure warning and traffic sign recognition, but it falls short of radar in operating distance and ranging accuracy, and it is easily affected by illumination and weather. Millimeter-wave radar works in all weather conditions and is not affected by light, haze, or sand and dust storms, and it can detect both moving and stationary obstacles; however, in a driving environment where multiple frequency bands coexist, interference can degrade the accuracy of its data. Lidar offers a wide detection range and high detection precision, but it performs poorly in extreme weather such as rain, snow, and fog, and it is expensive. It is therefore desirable to use two or more overlapping sensors that verify one another during driving. To achieve safety redundancy, current autonomous vehicles widely adopt one of the following sensor combinations: (1) a camera and a millimeter-wave radar; (2) a camera and a lidar; (3) a camera, a millimeter-wave radar, and a lidar.
Referring to FIG. 1, a schematic diagram of an autonomous vehicle 100 having different types of sensors traveling on a road is shown. Although the vehicle 100 of FIG. 1 employs a camera 101, a millimeter-wave radar 102, and a lidar 103 to identify objects for purposes of illustration, those skilled in the art will appreciate that aspects of the present invention may include one or more other in-vehicle sensors 104. Moreover, it is within the scope of the present invention to employ more or fewer sensors, such as only the camera 101 and the millimeter-wave radar 102, or only the camera 101 and the lidar 103.
In the case of a vehicle having multiple types of sensors, each sensor records its own sensor data and provides it to the central processing unit of the vehicle. The format of the sensor data typically differs across sensor types and sensor manufacturers. In general, a sensor outputs raw sensor data, or outputs data after preprocessing the raw sensor data (e.g., feature extraction, object extraction, etc.), according to the settings of the vehicle manufacturer or the sensor manufacturer. For example, for the same object, such as the lane line 105 in FIG. 1, the camera 101 outputs camera data representing the lane line 105, such as raw image data or image features extracted from the raw image data. The millimeter-wave radar 102 outputs millimeter-wave radar data representing the lane line 105, such as raw millimeter-wave radar data or polygon sequence data constructed from the raw millimeter-wave radar data. And the lidar 103 outputs lidar data representing the lane line 105, such as raw lidar data or three-dimensional point cloud data constructed from the raw lidar data. Of course, the data formats listed above are merely illustrative, and one skilled in the art will fully appreciate that any data format output by a sensor is within the scope of the present invention.
Typically, for each type of sensor data, one or more models are employed to record that type of sensor data. However, this approach results in multiple sensor data records. For example, at a certain time t, a plurality of data records are generated for the same object, separately recording the camera data output by the camera 101, the millimeter-wave radar data output by the millimeter-wave radar 102, and the lidar data output by the lidar 103. Not only does this approach consume excessive storage space, it can also slow down data processing.
The present invention defines a generic statistical model that is capable of supporting multiple types of sensor data simultaneously. The generic statistical model format is as follows: {t, d_s1, d_s2, …, d_sn}, which represents the set of sensor data sets output by sensor 1 through sensor n at time t. Here s1 denotes a sensor 1 mounted on the vehicle, s2 denotes a sensor 2 mounted on the vehicle, and sn denotes a sensor n (n being any integer greater than 1) mounted on the vehicle; these are different types of sensors used for identifying objects, such as a camera, a millimeter-wave radar, a lidar, and the like. Those skilled in the art will appreciate that other types of sensors, such as ultrasonic sensors, are also within the scope of the present invention, depending on the particular configuration and requirements of the autonomous vehicle manufacturer.
d_s1 represents the data set output by s1 at time t, d_s2 represents the data set output by s2 at time t, and d_sn represents the data set output by sn at time t (n being any integer greater than 1). In general, each data set among d_s1 … d_sn has a different data format, such as camera data, millimeter-wave radar data, lidar data, and the like. Those skilled in the art will appreciate that, depending on the sensors employed, the above data sets may equally include sensor data output by other types of sensors.
By employing the generic statistical model, data sets output by a plurality of different types of sensors at a single time can be integrated in one data model, thereby relieving storage pressure and enabling more efficient data processing.
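By way of illustration only, such a record might be sketched in Python as follows; this publication contains no reference implementation, so the class name SensorFrame, the field names, and the example payloads are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SensorFrame:
    """One generic statistical model record: {t, d_s1, d_s2, ..., d_sn}."""
    t: float                                                     # timestamp of the acquisition
    sensor_data: Dict[str, Any] = field(default_factory=dict)   # sensor id -> data set d_si

# A single record holds heterogeneous per-sensor formats side by side:
frame = SensorFrame(
    t=0.0,
    sensor_data={
        "camera":    {"image_features": []},   # e.g. features extracted from raw images
        "mmw_radar": {"polygons": []},         # e.g. polygon sequence data
        "lidar":     {"point_cloud": []},      # e.g. 3-D point cloud data
    },
)
```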
Implicit model
In the present invention, a plurality of implicit models are predefined, each implicit model including a plurality of different types of sensor data describing an object. To name a few examples, an object may be: a road sign, a lane line, a building, a pedestrian, a road boundary, a bridge, a utility pole, an elevated structure, a traffic light, a traffic sign, etc.
The format of the implicit model is as follows: {pd_s1, pd_s2, …, pd_sn; object}, which represents a set of different types of sensor data sample sets that describe an object. Here s1 … sn correspond, respectively, to the sensors s1 … sn in the generic statistical model above. pd_s1 represents a pre-acquired sample set output by s1 describing the object, pd_s2 represents a pre-acquired sample set output by s2 describing the object, and pd_sn represents a pre-acquired sample set output by sn describing the object. For example, if the object is a building, and s1 is a camera, s2 is a millimeter-wave radar, and s3 is a lidar, then the implicit model is instantiated as: {pd_camera, pd_mmw_radar, pd_lidar; building}, where pd_camera may include a sample set of camera data describing the building, pd_mmw_radar may include a sample set of millimeter-wave radar data describing the building, and pd_lidar may include a sample set of lidar data describing the building.
The sensor data sample set may be obtained in advance by an autonomous vehicle manufacturer, a sensor manufacturer, or a user. For example, as known to those skilled in the art, a vehicle manufacturer, a sensor manufacturer, etc. may collect a large number of sensor data samples when training a target model or a road model, and mark the collected sensor data samples through various algorithms or feature extraction methods. Thus, sample data output by a sensor that is tagged to describe the same type of object may be grouped together to form a sensor data sample set for that object by the sensor.
For further explanation, by way of example only, {pd_camera, pd_mmw_radar, pd_lidar; building} can be constructed in the following manner. During training, a vehicle manufacturer may drive a vehicle from point A to point B, during which sensors such as the camera, the millimeter-wave radar, and the lidar acquire a large number of sensor data samples. After processing, each sensor data sample may be labeled as identifying an object (e.g., a road sign, a lane line, a building, a pedestrian, a road edge, a bridge, a utility pole, an elevated structure, a traffic sign, etc.). Next, multiple camera data samples identifying the same object (e.g., a building) are aggregated into pd_camera for that object, multiple millimeter-wave radar data samples identifying the same type of object are aggregated into pd_mmw_radar for that object, and multiple lidar data samples identifying the same object are aggregated into pd_lidar for that object. Finally, the three sensor data sample sets are fed into the implicit model to instantiate it as the implicit model for the building, i.e., {pd_camera, pd_mmw_radar, pd_lidar; building}.
Further, the number of samples included in a sensor data sample set may vary depending on specific hardware limitations, usage scenarios, and user needs. According to one embodiment of the invention, the sensor data sample set may be pre-stored in a storage device of the autonomous vehicle or may be obtained in real-time over a network from a server of the autonomous vehicle manufacturer, a server of the sensor manufacturer, or various cloud services.
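Again purely as an illustration (all names hypothetical), an implicit model {pd_s1, pd_s2, …, pd_sn; object} might be represented, and populated from labeled training samples as described above, like this:

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class ImplicitModel:
    """One implicit model: {pd_s1, pd_s2, ..., pd_sn; object}."""
    obj_label: str                       # the predefined object, e.g. "building"
    sample_sets: Dict[str, List[Any]]    # sensor id -> pre-acquired sample set pd_si

def build_implicit_model(labeled_samples, obj_label):
    """Group labeled training samples (sensor_id, label, sample) by sensor."""
    sample_sets: Dict[str, List[Any]] = {}
    for sensor_id, label, sample in labeled_samples:
        if label == obj_label:
            sample_sets.setdefault(sensor_id, []).append(sample)
    return ImplicitModel(obj_label, sample_sets)

# e.g. instantiating {pd_camera, pd_mmw_radar, pd_lidar; building}:
building = build_implicit_model(
    [("camera", "building", "img_0"), ("lidar", "building", "pc_0"),
     ("mmw_radar", "building", "poly_0"), ("camera", "lane_line", "img_1")],
    "building",
)
```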
According to one embodiment of the invention, after the real-time sensor data is obtained, an object may be identified by comparing the sensor data with the sensor data sample sets. The specific manner is described in detail below.
Implementations
FIG. 2 depicts a flow diagram of a method 200 for constructing a real-time road model in accordance with one embodiment of the present invention. For example, the method 200 may be implemented within at least one processor (e.g., the processor 604 of fig. 6), which may be located in an on-board computer system, a remote server, or a combination thereof. Of course, in various aspects of the invention, the method 200 may be implemented by any suitable apparatus capable of performing the relevant operations.
The method 200 begins at step 210. At step 210, at a first time (hereinafter, the "first time" is understood to be "real time"), a first set of sensor data output via a plurality of different types of sensors onboard a vehicle is acquired. The vehicle may employ two or more different types of sensors. According to one embodiment of the invention, the plurality of different types of sensors may include a camera and one or more of a millimeter-wave radar and a lidar. For example, the vehicle may employ a camera and a millimeter-wave radar, a camera and a lidar, or a camera, a millimeter-wave radar, and a lidar. The first set of sensor data includes a plurality of sensor data sets corresponding to the real-time sensor data output by the camera, the millimeter-wave radar, and/or the lidar, respectively. Of course, as will be appreciated by those skilled in the art, other numbers and types of sensors are also within the scope of the present invention.
At step 220, the acquired first set of sensor data is fed to the generic statistical model to form generic statistical model format data for the first time; that is, time information such as a timestamp and the plurality of sensor data sets included in the first set of sensor data are fed into the generic statistical model {t, d_s1, d_s2, …, d_sn} to instantiate the model. For example, in the case of a vehicle using a camera, a millimeter-wave radar, and a lidar, the generic statistical model format data for the first time is {t_first, d_camera, d_mmw_radar, d_lidar}. As described above, the output formats of d_camera, d_mmw_radar, and d_lidar may be set by the vehicle manufacturer or the sensor manufacturer; in general, d_camera, d_mmw_radar, and d_lidar have different data formats.
At step 230, map data is extracted from the map within a threshold range around the location of the vehicle at the first time. At step 240, one or more of the sensor data sets in the generic statistical model format data for the first time are corrected based on the extracted map data. As described above, sensors such as cameras or lidar are susceptible to weather, and the accuracy of sensor data collected when the detection environment is not ideal (e.g., rain, snow, haze, fog) is low. Also, in practice, ground markings or lane lines are often worn, so that sensor data for such objects does not capture their original characteristics well. Furthermore, when an onboard sensor is occluded by other obstacles, the collected sensor data is also inaccurate, or even erroneous. For these reasons, the map data can serve as a priori information to back up the sensors when sensor performance is poor or fails.
According to one embodiment of the invention, the map may be pre-loaded in the memory of the autonomous vehicle or may be obtained from a map provider over a network. According to an embodiment of the present invention, the Map as the prior information may include an offline Map such as an Open Street Map (OSM) offline Map. Alternatively, to obtain a further more accurate centimeter-level localization, high-precision maps (such as those provided by map vendors like Google, HERE, etc.) may be used. Those skilled in the art will appreciate that other types of maps are within the scope of the present invention.
The GPS information of the current vehicle can be obtained through a GPS module loaded on the vehicle. Based on the GPS information, map data within a threshold range around the vehicle is extracted from the map. The selection of the threshold range may be set by the vehicle manufacturer or by the user, such as the area covered by a circle centered on the vehicle with a radius of 20 meters, 50 meters, 100 meters, etc. Of course, those skilled in the art will appreciate that other threshold ranges of values or shapes are within the scope of the present invention.
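A minimal sketch of this extraction step, assuming the map elements already carry planar (x, y) coordinates in the same frame as the GPS-derived vehicle position (the function and field names are assumptions):

```python
import math

def extract_map_data(map_elements, vehicle_xy, radius_m=50.0):
    """Keep the static map elements lying within `radius_m` of the vehicle.

    `map_elements` is assumed to be an iterable of dicts with "x"/"y" keys;
    the circular threshold range mirrors the 20/50/100-meter examples above.
    """
    vx, vy = vehicle_xy
    return [elem for elem in map_elements
            if math.hypot(elem["x"] - vx, elem["y"] - vy) <= radius_m]
```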
By comparing the real-time sensor data with the extracted map data, obviously erroneous or abnormal real-time sensor data can be corrected, so that no sensor data set in the generic statistical model format data deviates significantly because of sensor performance problems or environmental factors. As those skilled in the art will appreciate, to obtain an accurate comparison, the real-time sensor data and the map data are generally converted into the same coordinate system (for example, all types of coordinates are uniformly converted into a world coordinate system) before the real-time sensor data is compared with the corresponding map data.
In general, the map data used as a priori information describes static objects, such as road signs, lane lines, buildings, road edges, bridges, utility poles, elevated structures, traffic lights, traffic signs, and the like. Thus, if real-time sensor data indicates pedestrians or other vehicles around the vehicle (i.e., dynamic objects), that real-time sensor data need not be corrected; retaining it as-is helps effectively avoid collisions and accidents. In other words, if the map data contains no record of a corresponding static object at the coordinates of the object indicated by the real-time sensor data, the real-time sensor data need not be corrected. However, if the map data does contain a record of a corresponding static object at the same location, and the real-time sensor data is incomplete or inaccurate, the data representing the corresponding static object is extracted from the map data to correct the real-time sensor data.
In accordance with one or more embodiments of the present invention, there may be multiple ways to correct the real-time sensor data. Assume the generic statistical model format data for the first time (i.e., in real time) is {t_first, d_camera, d_mmw_radar, d_lidar}. Depending on the extracted map data, the three sensor data sets can be corrected separately, so that d_camera, d_mmw_radar, and d_lidar each contain a sensor data set describing the correct static object. Alternatively, in view of the data volume, only one or two of d_camera, d_mmw_radar, and d_lidar may be corrected while the data of the remaining two or one is left empty to save storage space.
For example, at the first time (i.e., in real time), the extracted map data within the threshold range around the vehicle may indicate that there should be a right-turn traffic sign on the road 10 meters ahead; but because the traffic sign is largely covered by sand scattered from a truck ahead, the sensors cannot fully perceive it, so each sensor data set in the generic statistical model format data for the first time records only incomplete data for the traffic sign. Thus, by comparison with the extracted map data, one or more of the sensor data sets in the generic statistical model format data for the first time may be updated such that at least one type of sensor data describing the complete traffic sign is included in the generic statistical model format data for the first time.
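The correction rule of steps 230 and 240 might be sketched as follows; find_static_record is a hypothetical lookup into the extracted map data, and the "complete" flag stands in for whatever consistency test an implementation would actually apply:

```python
import math

def find_static_record(map_elements, x, y, tol_m=1.0):
    """Hypothetical lookup: the static map element recorded near (x, y), if any."""
    for elem in map_elements:
        if math.hypot(elem["x"] - x, elem["y"] - y) <= tol_m:
            return elem
    return None

def correct_detection(detection, map_elements):
    ref = find_static_record(map_elements, detection["x"], detection["y"])
    if ref is None:
        # No static record at this location: presumed dynamic object
        # (pedestrian, other vehicle) -- leave the real-time data untouched.
        return detection
    if detection.get("complete", True):
        return detection                 # consistent with the map, keep as-is
    return {**detection, **ref}          # patch missing attributes from the map
```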
Returning to FIG. 2, at step 250, the corrected generic statistical model format data for the first time instance is fused with the historical generic statistical model format data to update the generic statistical model format data for the first time instance. In practice, real-time sensor data acquired at a time alone does not render an object very accurately. In particular, for objects with continuity, such as lane lines and the like, fusion of sensor data at a plurality of consecutive times is required to describe the lane line.
According to one embodiment of the present invention, it is assumed that a plurality of types of sensors mounted on a vehicle are synchronized in time and sensor data is output at the same time interval. The time interval may be different according to different practical requirements, such as 0.1 second, 0.2 second, 0.5 second, 1 second, etc. Of course, other time periods are also within the scope of the present invention. According to one or more embodiments of the invention, historical generic statistical model format data may be formed at several consecutive times before the first time and stored in the memory of the vehicle or cached for quick reading. As should be appreciated, the historical generic statistical model format data has the same data format as the generic statistical model format data for the first instance and is formed at one or more instances prior to the first instance in the same manner as the generic statistical model format data for the first instance is formed.
FIG. 3 shows schematic diagrams 301 and 302 for generic statistical model format data fusion in accordance with one or more embodiments of the invention. Briefly, diagram 301 shows a single fusion, while diagram 302 shows a plurality of fusions that are iterative.
Diagram 301 is a schematic diagram illustrating fusing the generic statistical model format data for the first time with historical generic statistical model format data comprising a plurality of generic statistical model format data for a plurality of previous times within a threshold time period before the first time. That is, {t_first, d_s1, d_s2, …, d_sn} is fused with {t_first-1, d_s1, d_s2, …, d_sn}, {t_first-2, d_s1, d_s2, …, d_sn}, …, {t_first-tn, d_s1, d_s2, …, d_sn} for a plurality of previous successive times in order to update {t_first, d_s1, d_s2, …, d_sn}. Between two adjacent times, e.g., between t_first and t_first-1, or between t_first-1 and t_first-2, the predetermined time interval described above elapses, and the threshold time period spanned from t_first-tn to t_first can likewise be chosen according to actual requirements. For example, where the predetermined time interval is 0.1 second, the threshold time period may be chosen as 1 second, so that the 10 historical generic statistical model format data within the 1 second before the first time (i.e., tn = 10 in this case) are selected for fusion. {t_first, d_s1, d_s2, …, d_sn} may thus be fused with the 10 historical generic statistical model format data of the previous 1 second, updating it with the fused sensor data to obtain {t_first, d_s1', d_s2', …, d_sn'}, where each of the 10 historical generic statistical model format data corresponds to generic statistical model format data obtained at 0.1-second intervals within the 1 second before the first time. As can be seen, the approach of diagram 301 updates {t_first, d_s1, d_s2, …, d_sn} by fusing it once with the historical generic statistical model format data.
Diagram 302 is a schematic diagram showing iteratively fusing the plurality of generic statistical model format data for the plurality of previous times within the threshold time period before the first time. Continuing the example above, assume the threshold time period is 1 second and the predetermined time interval between two adjacent times is 0.1 second. The generic statistical model format data for each earlier time is iteratively fused with the generic statistical model format data for the next time to update the latter, until the generic statistical model format data for the first time is updated, yielding {t_first, d_s1', d_s2', …, d_sn'}.
For example, first, {t_first-tn, d_s1, d_s2, …, d_sn} is fused with {t_first-tn+1, d_s1, d_s2, …, d_sn} to update the latter, yielding the updated {t_first-tn+1, d_s1', d_s2', …, d_sn'}. Then {t_first-tn+1, d_s1', d_s2', …, d_sn'} is fused with {t_first-tn+2, d_s1, d_s2, …, d_sn} to yield the updated {t_first-tn+2, d_s1', d_s2', …, d_sn'}. Next, {t_first-tn+2, d_s1', d_s2', …, d_sn'} is fused with {t_first-tn+3, d_s1, d_s2, …, d_sn} to yield the updated {t_first-tn+3, d_s1', d_s2', …, d_sn'}. And so on, until {t_first-1, d_s1', d_s2', …, d_sn'} is fused with {t_first, d_s1, d_s2, …, d_sn} to update it, yielding the updated {t_first, d_s1', d_s2', …, d_sn'}.
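The two fusion orders might be sketched as follows; fuse_once pools per-sensor observations and then de-duplicates, a toy stand-in for the aggregation and denoising detailed with method 400 below, and frames are plain dicts here for brevity (all names assumed):

```python
def fuse_once(history, current):
    """Diagram 301: fuse the current frame with the whole history in one pass."""
    pooled = {}
    for frame in history + [current]:
        for sensor, obs in frame["sensor_data"].items():
            pooled.setdefault(sensor, []).extend(obs)
    # Toy stand-in for clustering/denoising: drop exact duplicates.
    current["sensor_data"] = {s: sorted(set(obs)) for s, obs in pooled.items()}
    return current

def fuse_iteratively(frames_oldest_first):
    """Diagram 302: fold each frame into the next, ending at the first time."""
    acc = frames_oldest_first[0]
    for nxt in frames_oldest_first[1:]:
        acc = fuse_once([acc], nxt)      # pairwise fusion updates the later frame
    return acc
```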
In order to make the fused data more accurate, the following mathematical methods may be employed in the fusion process.
FIG. 4 shows a flow diagram of a method 400 for fusing the generic statistical model format data for the first time with historical generic statistical model format data, in accordance with the embodiment of FIG. 3. At step 410, the historical generic statistical model format data is obtained, comprising a plurality of generic statistical model format data for a plurality of previous times within a threshold time period before the first time.
At step 420, the generic statistical model format data for the first time and the historical generic statistical model format data are transformed into the same coordinate system. For example, let the vehicle be the origin of a local coordinate system, the traveling direction of the vehicle be the x-axis, and the direction perpendicular to the traveling direction be the y-axis. If the vehicle has traveled a distance L in the traveling direction from time t_first-1 to t_first, then the origin of the local coordinate system at t_first has moved by (L_x, L_y) relative to the origin of the local coordinate system at t_first-1. The sensor data sets in the collected historical generic statistical model format data are converted into the local coordinate system at t_first by coordinate transformation, so that all sensor data sets used for fusion are in the same coordinate system. According to another embodiment of the invention, all coordinates used by the collected historical generic statistical model format data and by the generic statistical model format data for the first time can instead be uniformly converted into a world coordinate system, again placing all sensor data sets used for fusion in the same coordinate system. Applicable coordinate transformation methods include, but are not limited to, translation and rotation of coordinates in two-dimensional space, translation and rotation of coordinates in three-dimensional space, and the like.
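Under one common convention (ego displacement (L_x, L_y) and heading change dtheta between the two times), the 2-D conversion of step 420 might look like this sketch; the names and the sign convention are assumptions:

```python
import math

def to_current_frame(points, lx, ly, dtheta):
    """Map (x, y) observations from the earlier ego frame into the frame at
    t_first, assuming the ego moved by (lx, ly) and turned by dtheta."""
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    out = []
    for x, y in points:
        tx, ty = x - lx, y - ly              # translate by the ego displacement
        out.append((c * tx - s * ty,         # then rotate into the new heading
                    s * tx + c * ty))
    return out
```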
At step 430, with the sensor data sets in the same coordinate system, the generic statistical model format data for the first time and the historical generic statistical model format data are fused according to either of the two fusion manners 301 and 302 shown in FIG. 3, so that the generic statistical model format data for the first time is updated to include the fused sensor data. As known to those skilled in the art, the fusion process includes clustering and denoising the data sets in order to obtain smooth and coherent data. For example, with the fusion manner 301, assume the threshold time period is 1 second and the predetermined time interval between two adjacent times is 0.1 second. The sensor data sets included in d_s1, d_s2, …, d_sn of {t_first, d_s1, d_s2, …, d_sn} and the sensor data sets included in d_s1, d_s2, …, d_sn of the previous 10 historical generic statistical model format data are aggregated, and then duplicated or abnormal data in the aggregated sensor data sets are removed or filtered, yielding the fused, updated {t_first, d_s1', d_s2', …, d_sn'}. As another example, with the fusion manner 302, the sensor data sets included in the generic statistical model format data at each pair of consecutive times are likewise aggregated and denoised, updating the generic statistical model format data for the later time, until the generic statistical model format data for the first time is updated.
In one embodiment, a weighted average algorithm may also be employed in the fusion. For example, during aggregation, historical generic statistical model format data recorded closer in time to the first time is given a higher weight, while data recorded further from the first time is given a lower weight. Of course, other weighting approaches are also contemplated.
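One possible weighting scheme, assumed here only for illustration: weights decay exponentially with the age of a frame and are normalized to sum to one.

```python
import math

def time_decay_weights(timestamps, t_first, tau=0.5):
    """Exponential decay: newer frames (closer to t_first) weigh more."""
    raw = [math.exp(-(t_first - t) / tau) for t in timestamps]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_average(values, timestamps, t_first):
    """Fuse one scalar attribute (e.g. a lane line's lateral offset)."""
    weights = time_decay_weights(timestamps, t_first)
    return sum(w * v for w, v in zip(weights, values))
```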
Returning to FIG. 2, at step 260, the updated generic statistical model format data for the first time is compared with the implicit models to identify an object. As described above, the implicit model format is {pd_s1, pd_s2, …, pd_sn; object}, which represents a set of different types of sensor data sample sets describing an object. By comparing the {t_first, d_s1', d_s2', …, d_sn'} obtained in step 250 with one or more {pd_s1, pd_s2, …, pd_sn; object} models, the specific object described by {t_first, d_s1', d_s2', …, d_sn'} can be obtained. According to one embodiment of the invention, assume the vehicle employs three sensors and {t_first, d_s1', d_s2', d_s3'} is compared with {pd_s1, pd_s2, pd_s3; object_1}. In this example, d_s1' is compared with pd_s1, d_s2' with pd_s2, and d_s3' with pd_s3, in order to determine whether {t_first, d_s1', d_s2', d_s3'} describes object_1. In practice, it is likely that not all three of the three comparisons for the three sensors come out true. For this, the vehicle manufacturer may define a predetermined criterion, for example that the overall comparison result is deemed true if two of the three sensor data set comparisons are true, or that the overall comparison result is deemed true only if all three comparisons are true.
Alternatively, the vehicle may automatically select the decision criterion based on the current climate, road environment, or the object being identified, according to preset settings. For example, as mentioned above, different kinds of sensors have different strengths and weaknesses. Depending on a sensor's adaptability to environmental conditions, the confidence of the sensor data set output by the millimeter-wave radar may be set high in environments with poor visibility such as haze or fog, while the confidences of the sensor data sets output by the lidar and the camera may be set high when environmental conditions are good. Furthermore, the confidences of different sensor data sets may be set per object type, according to how the different kinds of sensors obtain data. For example, for objects with three-dimensional characteristics, such as buildings, the confidence of the sensor data set output by the camera may be set lower than the confidences of the sensor data sets output by the lidar and the millimeter-wave radar, whereas for planar objects, such as lane lines and ground traffic markings, the confidence of the sensor data set output by the camera may be set higher than the confidences of the sensor data sets output by the lidar and the millimeter-wave radar. In this way, in general, the overall decision result can be calculated by the following equation:
overall confidence = confidence_s1 × S1 + confidence_s2 × S2 + … + confidence_sn × Sn,
where confidence_s1 + confidence_s2 + … + confidence_sn = 1, S1 is the comparison result of d_s1' and pd_s1, S2 is the comparison result of d_s2' and pd_s2, and Sn is the comparison result of d_sn' and pd_sn. Each of S1, S2, …, Sn is 1 or 0. For example, S1 = 1 indicates that comparing d_s1' with pd_s1 found that d_s1' identifies the object described by pd_s1, while S1 = 0 indicates that the comparison found d_s1' does not identify that object; the same holds for S2 … Sn. Thus, the vehicle manufacturer or user can specify that if the overall confidence exceeds a predetermined value (e.g., 50%), it may be determined that {t_first, d_s1', d_s2', …, d_sn'} represents the object described by {pd_s1, pd_s2, …, pd_sn; object}. Of course, various other decision criteria are also contemplated.
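Written out as a sketch (the confidence values and the 50% threshold follow the examples in the text; everything else is assumed):

```python
def overall_decision(confidences, comparison_results, threshold=0.5):
    """overall confidence = sum(confidence_si * Si), with the confidences
    summing to 1 and each Si being 1 (match) or 0 (no match)."""
    assert abs(sum(confidences) - 1.0) < 1e-9
    score = sum(c * s for c, s in zip(confidences, comparison_results))
    return score > threshold

# e.g. in fog the radar is trusted most; two of three sensors matched:
overall_decision([0.2, 0.5, 0.3], [0, 1, 1])   # 0.8 > 0.5 -> True
```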
At step 270, the identified objects are combined with the extracted map data to form the real-time road model. As described above, the real-time sensor data collected by the sensors may not fully reflect the surrounding road conditions, depending on environmental factors and the like. Thus, the objects identified from the updated generic statistical model format data for the first time may be stitched together with the map data extracted in step 230 to form the real-time road model. In some embodiments, at step 260 the updated generic statistical model format data for the first time may be compared only with implicit models describing dynamic objects (e.g., pedestrians, other vehicles, etc.), such that the real-time sensor data is used only to identify dynamic objects. The real-time dynamic objects identified by the sensors are then combined with the static objects in the map data (e.g., road signs, lane lines, buildings, road boundaries, bridges, utility poles, elevated structures, traffic lights, traffic signs, etc.) to form the real-time road model. This reduces the computational burden on the onboard processor.
Also, in step 260, after the updated generic statistical model format data for the first time has been compared with the implicit models, it may turn out that no object is successfully identified. In this case, the extracted map data may itself be used as the real-time road model.
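Step 270, including this fallback, then reduces to a splice of the two sources; a minimal sketch with an assumed structure:

```python
def build_road_model(extracted_map_data, identified_objects):
    """Combine identified (e.g. dynamic) objects with static map data;
    if identification failed, the map extract alone is the road model."""
    model = {"static": list(extracted_map_data)}
    if identified_objects:
        model["dynamic"] = list(identified_objects)
    return model
```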
Thus, by using the method of the present invention, sensor data sets can be processed more quickly by placing different types of sensor data sets into a unified data model than by obtaining sensor data from each different type of sensor and processing separately. Meanwhile, the map data is used as prior information, so that the real-time data of the sensor and the existing information of the map data can be combined, and the real-time road model can be constructed more accurately and rapidly.
Fig. 5 is a block diagram of an apparatus 500 for constructing a real-time road model according to an embodiment of the present invention. All of the functional blocks of the apparatus 500 (including the respective units in the apparatus 500) may be implemented by hardware, software, or a combination of hardware and software. Those skilled in the art will appreciate that the functional blocks depicted in fig. 5 may be combined into a single functional block or divided into multiple sub-functional blocks.
The apparatus 500 may include a sensor data acquisition module 510, the sensor data acquisition module 510 configured to acquire a first set of sensor data output by different types of sensors onboard the vehicle at a first time. The apparatus 500 may further include a data feed module 520, the data feed module 520 configured to feed the first set of sensor data to the generic statistical model to form generic statistical model format data for the first time. The apparatus 500 may further comprise a map data extraction module 530, the map data extraction module 530 configured to extract, from a map, map data within a threshold range around the location of the vehicle at the first time. The apparatus 500 may further include a data correction module 540, the data correction module 540 configured to correct one or more of the plurality of sensor data sets in the generic statistical model format data for the first time based on the map data. The apparatus 500 may further include a data fusion module 550 configured to fuse the generic statistical model format data for the first time with historical generic statistical model format data to update the generic statistical model format data for the first time. The apparatus 500 may further include an object identification module 560, the object identification module 560 configured to compare the updated generic statistical model format data for the first time with the implicit models to identify objects. The apparatus 500 may further include a real-time road model formation module 570, the real-time road model formation module 570 configured to combine the identified objects with the map data to form the real-time road model.
FIG. 6 shows a block diagram of an exemplary computing device, which is one example of a hardware device that may be applied to aspects of the present invention, according to one embodiment of the present invention.
With reference to FIG. 6, a computing device 600, which is one example of a hardware device that may be employed in connection with aspects of the present invention, will now be described. Computing device 600 may be any machine that can be configured to implement processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a smart phone, an in-vehicle computer, or any combination thereof. The various methods/apparatus/servers/client devices described above may be implemented in whole or at least in part by computing device 600 or a similar device or system.
Computing device 600 may include components that are connected to or communicate with one another via one or more interfaces and a bus 602. For example, computing device 600 may include a bus 602, one or more processors 604, one or more input devices 606, and one or more output devices 608. The one or more processors 604 may be any type of processor and may include, but are not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., dedicated processing chips). Input device 606 may be any type of device capable of inputting information to a computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote controller. Output device 608 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Computing device 600 may also include or be connected to a non-transitory storage device 610, which may be any storage device that is non-transitory and enables data storage, and which may include, but is not limited to, a disk drive, an optical storage device, a solid-state memory, a floppy disk, a flexible disk, a hard disk, a tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any memory chip or cartridge, and/or any other medium from which a computer can read data, instructions, and/or code. The non-transitory storage device 610 may be detachable from the interface. The non-transitory storage device 610 may have data/instructions/code for implementing the above-described methods and steps. Computing device 600 may also include a communication device 612. The communication device 612 may be any type of device or system capable of communicating with internal apparatus and/or with a network and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth device, an IEEE 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
When the computing device 600 is used as an in-vehicle device, it may also be connected to external devices, such as a GPS receiver and sensors for sensing different environmental data, such as acceleration sensors, wheel speed sensors, gyroscopes, and so on. In this manner, the computing device 600 may receive, for example, positioning data and sensor data indicative of the vehicle's driving condition. When the computing device 600 is used as an in-vehicle device, it may also be connected to other devices for controlling the travel and operation of the vehicle (e.g., engine system, wipers, anti-lock brake system, etc.).
Further, non-transitory storage device 610 may have map information and software components so that processor 604 may implement route guidance processing. Further, the output device 608 may include a display for displaying a map, displaying a location marker of the vehicle, and displaying an image indicating the running condition of the vehicle. Output device 608 may also include a speaker or headphone interface for audio guidance.
The bus 602 may include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus. In particular, for an in-vehicle device, bus 602 may also include a Controller Area Network (CAN) bus or other structures designed for automotive applications.
Computing device 600 may also include a working memory 614, which may be any type of working memory capable of storing instructions and/or data useful to the operation of processor 604, and which may include, but is not limited to, a random access memory and/or read only memory device.
Software components may be located in the working memory 614, including, but not limited to, an operating system 616, one or more application programs 618, drivers, and/or other data and code. Instructions for implementing the above-described methods and steps may be included in the one or more application programs 618, and the aforementioned modules/units/components of the various apparatus/server/client devices may be implemented by processor 604 reading and executing the instructions of the one or more application programs 618.
It should also be appreciated that variations may be made according to particular needs. For example, customized hardware might also be used, and/or particular components might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. In addition, connections to other computing devices, such as network input/output devices, may be employed. For example, some or all of the disclosed methods and apparatus may be implemented, with logic and algorithms in accordance with the present invention, by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming languages (e.g., VERILOG, VHDL, C++).
Although the various aspects of the present invention have been described thus far with reference to the accompanying drawings, the above-described methods, systems, and apparatus are merely examples, and the scope of the present invention is not limited to these aspects but only by the appended claims and equivalents thereof. Various components may be omitted or replaced with equivalent components. In addition, the steps may be performed in a different order than described herein. Further, the various components may be combined in various ways. It is also important to note that, as technology develops, many of the described components may be replaced by equivalent components that appear later.

Claims (15)

1. A method for constructing a real-time road model, comprising:
acquiring, at a first time instant, a first set of sensor data output by a plurality of different types of sensors onboard a vehicle;
feeding the first set of sensor data to a generic statistical model to form generic statistical model format data for the first time instant, the generic statistical model format comprising a plurality of sensor data sets, wherein each sensor data set of the plurality of sensor data sets is output by one sensor of the different types of sensors;
extracting map data from a map within a threshold range around a location at which the vehicle is located at the first time instant;
correcting, based on the map data, one or more of the plurality of sensor data sets in the generic statistical model format data for the first time instant;
fusing the generic statistical model format data for the first time instant with historical generic statistical model format data to update the generic statistical model format data for the first time instant;
comparing the updated generic statistical model format data for the first time instant to an implicit model to identify an object, the implicit model comprising a plurality of sets of sensor data samples describing a predefined object, wherein each set of sensor data samples in the plurality of sets of sensor data samples comprises a pre-acquired set of sensor data samples describing the predefined object by one of the different types of sensors; and
combining the identified objects with the map data to form the real-time road model.
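(Editorial illustration, not part of the claimed subject matter.) The pipeline of claim 1 can be sketched in Python as below; the GsmData container and every function name are hypothetical stand-ins chosen for this sketch, not names used in the application.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Hypothetical container for "generic statistical model format data":
# one sensor data set per onboard sensor, keyed by a sensor identifier.
@dataclass
class GsmData:
    time_instant: float
    sensor_data_sets: Dict[str, List[Any]] = field(default_factory=dict)

def build_real_time_road_model(
    read_sensors: Callable[[float], Dict[str, List[Any]]],
    extract_map: Callable[[float], Any],
    correct: Callable[[GsmData, Any], None],
    fuse: Callable[[GsmData, List[GsmData]], GsmData],
    identify: Callable[[GsmData, Any], List[Any]],
    history: List[GsmData],
    implicit_model: Any,
    t1: float,
) -> Dict[str, Any]:
    # Step 1: acquire the first set of sensor data at the first time instant.
    data = GsmData(time_instant=t1, sensor_data_sets=read_sensors(t1))
    # Step 2: extract map data within a threshold range around the vehicle.
    map_data = extract_map(t1)
    # Step 3: correct one or more sensor data sets based on the map data.
    correct(data, map_data)
    # Step 4: fuse with historical generic statistical model format data.
    data = fuse(data, history)
    # Step 5: compare against the implicit model to identify objects.
    objects = identify(data, implicit_model)
    # Step 6: combine the identified objects with the map data.
    return {"objects": objects, "map": map_data}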
2. The method of claim 1, wherein correcting, based on the map data, one or more of the plurality of sensor data sets in the generic statistical model format data for the first time instant further comprises:
converting the plurality of sensor data sets in the generic statistical model format data for the first time instant, and the map data, into the same coordinate system;
if the map data has no record of a corresponding object at the same location as the coordinates of the object indicated by the plurality of sensor data sets, not correcting the one or more of the plurality of sensor data sets in the generic statistical model format data for the first time instant; and
if the map data has a record of a corresponding object at the same location as the coordinates of the object indicated by the plurality of sensor data sets, and the data of one or more of the plurality of sensor data sets is incomplete or inaccurate, extracting data representing the corresponding object from the map data to correct the one or more of the plurality of sensor data sets.
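A minimal sketch of the correction branch of claim 2, assuming detections and map records can be brought into the same coordinate system; to_common_frame, is_incomplete, and the map_data.lookup / record.as_sensor_datum helpers are invented for this illustration.

def correct_with_map(data, map_data, to_common_frame, is_incomplete):
    # For each sensor data set, check whether the map records a corresponding
    # object at the same location; if so and the detection is incomplete or
    # inaccurate, substitute data extracted from the map.
    for sensor_id, detections in data.sensor_data_sets.items():
        corrected = []
        for det in detections:
            coords = to_common_frame(det)      # same coordinate system as the map
            record = map_data.lookup(coords)   # hypothetical map query
            if record is not None and is_incomplete(det):
                corrected.append(record.as_sensor_datum(sensor_id))
            else:
                corrected.append(det)          # no map record: leave uncorrected
        data.sensor_data_sets[sensor_id] = corrected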
3. The method of claim 1, wherein fusing the generic statistical model format data for the first time instant with historical generic statistical model format data comprises: fusing the generic statistical model format data for the first time instant with historical generic statistical model format data comprising a plurality of generic statistical model format data for a plurality of previous time instants within a threshold time period prior to the first time instant.
4. The method of claim 1, wherein fusing the generic statistical model format data for the first time instant with historical generic statistical model format data comprises:
iteratively performing, for a plurality of generic statistical model format data for a plurality of previous time instants within a threshold time period prior to the first time instant: fusing the generic statistical model format data for each time instant with the generic statistical model format data for a later time instant separated by a predetermined time interval, so as to update the generic statistical model format data for the later time instant, until the generic statistical model format data for the first time instant is updated.
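The iteration of claim 4 amounts to folding the time-ordered history forward until the data for the first time instant is updated. A sketch under the assumption of a pairwise fuse_pair helper (hypothetical), reusing the GsmData container from the sketch after claim 1:

def iterative_fuse(history, current, fuse_pair, threshold):
    # history: GsmData objects for previous time instants; current: GsmData
    # for the first time instant. fuse_pair(earlier, later) is assumed to
    # return the updated data for the later instant.
    window = [h for h in history
              if 0 < current.time_instant - h.time_instant <= threshold]
    window.sort(key=lambda h: h.time_instant)   # oldest first
    acc = None
    for h in window:
        acc = h if acc is None else fuse_pair(acc, h)  # update the later instant
    return current if acc is None else fuse_pair(acc, current)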
5. The method of claim 1, wherein the fusing further comprises: causing the historical generic statistical model format data and the generic statistical model format data for the first time instant to be represented in the same coordinate system by either of the following: converting the historical generic statistical model format data into a local coordinate system whose origin is the location of the vehicle at the first time instant; or uniformly converting the various types of coordinates used in the historical generic statistical model format data and in the generic statistical model format data for the first time instant into coordinates in a world coordinate system.
6. The method of claim 5, wherein the fusing further comprises: correspondingly aggregating the sensor data sets output by the same sensor in the historical generic statistical model format data and in the generic statistical model format data for the first time instant, and removing duplicate data from each aggregated sensor data set.
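Claims 5 and 6 describe the bookkeeping inside one fuse step: bring both inputs into a single coordinate system, aggregate per sensor, and drop duplicates. The sketch below reuses the hypothetical GsmData container, assumes each converted datum is hashable, and invents a to_local coordinate-conversion helper.

def fuse_pair(earlier, later, to_local):
    merged = GsmData(time_instant=later.time_instant)
    for src in (earlier, later):
        for sensor_id, detections in src.sensor_data_sets.items():
            bucket = merged.sensor_data_sets.setdefault(sensor_id, [])
            # Represent everything in the local frame whose origin is the
            # vehicle location at the later (here: first) time instant.
            bucket.extend(to_local(d, later.time_instant) for d in detections)
    for sensor_id, bucket in merged.sensor_data_sets.items():
        # Remove repeated data within each aggregated sensor data set,
        # preserving order.
        merged.sensor_data_sets[sensor_id] = list(dict.fromkeys(bucket))
    return merged

In the fold sketched after claim 4, this three-argument version would first be bound to a conversion helper, e.g. functools.partial(fuse_pair, to_local=to_local).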
7. The method of claim 1, wherein the object comprises at least one of: a road sign, a lane line, a building, a pedestrian, another vehicle, a road edge, a bridge, a utility pole, an elevated structure, or a traffic sign.
8. The method of claim 1, wherein the plurality of different types of sensors comprises a vision-type sensor and a radar-type ranging sensor, the vision-type sensor comprising a camera, and the radar comprising one or more of a lidar, an ultrasonic radar, and a millimeter-wave radar.
9. The method of claim 1, wherein the map comprises a high-precision map, and wherein comparing the updated generic statistical model format data for the first time instant to an implicit model to identify objects further comprises: comparing the updated generic statistical model format data for the first time instant only to an implicit model describing dynamic objects, so as to identify only dynamic objects, the dynamic objects including pedestrians and/or other vehicles;
and wherein combining the identified objects with the map data to form the real-time road model further comprises: combining the identified dynamic objects with the map data to form the real-time road model.
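Where a high-precision map already supplies the static objects, claim 9 narrows the implicit-model comparison to dynamic objects only. A hypothetical filter illustrating that restriction (object_class, samples_for, and the match scorer are invented for this sketch):

DYNAMIC_CLASSES = {"pedestrian", "vehicle"}

def identify_dynamic_objects(data, implicit_model, match):
    # Compare the updated data only against implicit-model sample sets that
    # describe dynamic objects; static objects come from the map instead.
    identified = []
    for sample_set in implicit_model:
        if sample_set.object_class not in DYNAMIC_CLASSES:
            continue
        for sensor_id, detections in data.sensor_data_sets.items():
            if match(detections, sample_set.samples_for(sensor_id)):
                identified.append(sample_set.object_class)
                break
    return identified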
10. An apparatus for constructing a real-time road model, comprising:
a sensor data acquisition module configured to acquire, at a first time instant, a first set of sensor data output by a plurality of different types of sensors onboard a vehicle;
a data feed module configured to feed the first set of sensor data to a generic statistical model to form generic statistical model format data for the first time instant, the generic statistical model format comprising a plurality of sensor data sets, wherein each sensor data set of the plurality of sensor data sets is output by one sensor of the different types of sensors;
a map data extraction module configured to extract map data from a map within a threshold range around a location at which the vehicle is located at the first time instant;
a data correction module configured to correct, based on the map data, one or more of the plurality of sensor data sets in the generic statistical model format data for the first time instant;
a data fusion module configured to fuse the generic statistical model format data for the first time instant with historical generic statistical model format data to update the generic statistical model format data for the first time instant;
an object identification module configured to compare the updated generic statistical model format data for the first time instant to an implicit model to identify an object, the implicit model comprising a plurality of sets of sensor data samples describing a predefined object, wherein each set of sensor data samples in the plurality of sets of sensor data samples comprises a pre-acquired set of sensor data samples describing the predefined object by one of the different types of sensors; and
a real-time road model formation module configured to combine the identified objects with the map data to form the real-time road model.
11. The apparatus of claim 10, wherein the data correction module is further configured to:
convert the plurality of sensor data sets in the generic statistical model format data for the first time instant, and the map data, into the same coordinate system;
if the map data has no record of a corresponding object at the same location as the coordinates of the object indicated by the plurality of sensor data sets, not correct the one or more of the plurality of sensor data sets in the generic statistical model format data for the first time instant; and
if the map data has a record of a corresponding object at the same location as the coordinates of the object indicated by the plurality of sensor data sets, and the data of one or more of the plurality of sensor data sets is incomplete or inaccurate, extract data representing the corresponding object from the map data to correct the one or more of the plurality of sensor data sets.
12. The apparatus of claim 10, wherein the map comprises a high-precision map, and wherein the object identification module is further configured to: compare the updated generic statistical model format data for the first time instant only to an implicit model describing dynamic objects, so as to identify only dynamic objects, the dynamic objects including pedestrians and/or other vehicles;
and wherein the real-time road model formation module is further configured to: combine the identified dynamic objects with the map data to form the real-time road model.
13. The apparatus of claim 10, wherein the plurality of different types of sensors comprises a vision-type sensor and a radar-type ranging sensor, the vision-type sensor comprising a camera, the radar comprising one or more of a lidar, an ultrasonic radar, and a millimeter-wave radar.
14. A vehicle, comprising:
a plurality of different types of sensors; and
the apparatus of claim 10.
15. The vehicle of claim 14, wherein the plurality of different types of sensors comprises a vision-type sensor and a radar-type ranging sensor, the vision-type sensor comprising a camera, and the radar comprising one or more of a lidar, an ultrasonic radar, and a millimeter-wave radar.
CN201910525727.4A 2019-06-18 2019-06-18 Method and system for constructing road model Pending CN112099481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910525727.4A CN112099481A (en) 2019-06-18 2019-06-18 Method and system for constructing road model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910525727.4A CN112099481A (en) 2019-06-18 2019-06-18 Method and system for constructing road model

Publications (1)

Publication Number Publication Date
CN112099481A 2020-12-18

Family

ID=73748683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910525727.4A Pending CN112099481A (en) 2019-06-18 2019-06-18 Method and system for constructing road model

Country Status (1)

Country Link
CN (1) CN112099481A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160454A (en) * 2021-05-31 2021-07-23 重庆长安汽车股份有限公司 Method and system for recharging historical sensor data of automatic driving vehicle

Similar Documents

Publication Publication Date Title
US10540554B2 (en) Real-time detection of traffic situation
US10671068B1 (en) Shared sensor data across sensor processing pipelines
CN109949439B (en) Driving live-action information labeling method and device, electronic equipment and medium
CN111507130B (en) Lane-level positioning method and system, computer equipment, vehicle and storage medium
EP3745376A1 (en) Method and system for determining driving assisting data
EP3644013B1 (en) Method, apparatus, and system for location correction based on feature point correspondence
WO2023179028A1 (en) Image processing method and apparatus, device, and storage medium
CN117576652A (en) Road object identification method and device, storage medium and electronic equipment
US20190293444A1 (en) Lane level accuracy using vision of roadway lights and particle filter
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
KR102596297B1 (en) Apparatus and method for improving cognitive performance of sensor fusion using precise map
CN112099481A (en) Method and system for constructing road model
CN111028544A (en) Pedestrian early warning system with V2V technology and vehicle-mounted multi-sensor integration
CN116045964A (en) High-precision map updating method and device
JP7429246B2 (en) Methods and systems for identifying objects
CN112101392A (en) Method and system for identifying objects
CN111060114A (en) Method and device for generating feature map of high-precision map
CN112113593A (en) Method and system for testing sensor configuration of vehicle
CN113390422B (en) Automobile positioning method and device and computer storage medium
CN114216469B (en) Method for updating high-precision map, intelligent base station and storage medium
WO2020073272A1 (en) Snapshot image to train an event detector
WO2020073270A1 (en) Snapshot image of traffic scenario
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073271A1 (en) Snapshot image of traffic scenario
CN109964132A (en) Method, apparatus and system for the sensors configured in moving object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination