CN117275214A - Geofence decision-making method and system based on laser radar

Geofence decision-making method and system based on laser radar

Info

Publication number
CN117275214A
Authority
CN
China
Prior art keywords
lane
road
vehicle
information
road model
Prior art date
Legal status
Pending
Application number
CN202210666635.XA
Other languages
Chinese (zh)
Inventor
刘玉磊
Current Assignee
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Priority to CN202210666635.XA
Publication of CN117275214A
Legal status: Pending (current)


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 - Traffic data processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a geofence decision-making method and system for a vehicle. The method comprises: acquiring, in real time, vehicle positioning data and map data for the current position of the vehicle, together with point cloud data of its surroundings; determining a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes; determining a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane; performing a fusion calculation on the first road model and the second road model to determine, among the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and deciding whether the vehicle is in a geofenced area based on the determined lane. The invention also provides a vehicle supporting the geofence decision-making function.

Description

Geofence decision-making method and system based on laser radar
Technical Field
The invention relates to the field of autonomous driving, and in particular to a lidar-based geofence decision-making method and system.
Background
Autonomous driving is an important direction for future traffic systems, and one of its key problems is to quickly and accurately find a drivable area for the vehicle. This gives rise to an important concept in autonomous driving: the geofence. A geofence constructs an environment with complete infrastructure that is better suited to autonomous driving by pre-defining the range within which an autonomous vehicle may drive (e.g., expressways, urban expressways, etc.), thereby helping the vehicle decide when to activate autonomous-driving-related functions.
Existing geofence decision-making technology is generally realized using data collected by vision sensors, millimeter-wave radar, and the like. Image-based detection methods, however, have an obvious drawback: rain, fog, low-light environments, and similar conditions seriously degrade image quality and hence detection accuracy. In addition, cluttered external environments easily interfere with millimeter-wave radar perception.
Thus, it is desirable to provide a geofence decision-making method and system for a vehicle that improve decision accuracy while being little affected by the environment.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to a first aspect of the present invention, there is provided a geofence decision-making method for a vehicle, the method comprising: acquiring, in real time, vehicle positioning data, map data, and point cloud data about the surroundings of the vehicle; determining a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes; determining a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane; performing a fusion calculation on the first road model and the second road model to determine, from the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and deciding whether the vehicle is in a geofenced area based on the determined lane.
The invention thus obtains a lane model (comprising road edge information and lane information) from a lidar and fuses it with the lane model obtained from high-definition map data and vehicle positioning data to determine the lane in which the vehicle is currently located, and hence whether the vehicle has triggered a geofence event (i.e., whether it is traveling on a structured road) and, further, whether the autonomous driving function may be activated. Compared with current schemes that rely only on radar and vision sensors, this approach is more accurate and less affected by the environment, thereby improving autonomous driving safety.
According to one embodiment of the invention, determining the second road model based on the acquired point cloud data further comprises: preprocessing the acquired point cloud data to convert the three-dimensional point cloud image into a two-dimensional image; and extracting lane information and road edge information about the detected lane from the two-dimensional image to determine the second road model, wherein the lane information comprises lane boundaries and the number of lanes, and the road edge information comprises elevation information of the road boundaries.
According to a further embodiment of the invention, the lane information is detected based on lidar reflection intensity values.
According to a further embodiment of the invention, the road edge information is detected using a grid-based clustering algorithm.
According to a further embodiment of the present invention, determining the first road model based on the acquired vehicle positioning data and map data further comprises: determining lane information and road edge information about one or more candidate lanes within a predetermined search range in the map data according to the vehicle positioning data and the predetermined search range, wherein the predetermined search range is increased if no lane information and road edge information about one or more candidate lanes are obtained within it.
According to a further embodiment of the present invention, performing the fusion calculation on the first road model and the second road model further comprises: matching the lane information about the one or more candidate lanes from the first road model with the lane information about the detected lane from the second road model to filter the candidate lanes; and matching the road edge information from the first road model with the road edge information from the second road model to determine, from the filtered candidate lanes, the lane in which the vehicle is currently located.
According to a further embodiment of the invention, the road attribute of the lane in which the vehicle is currently located is determined from the acquired map data, and whether the vehicle is in a geofenced area is decided based on the determined road attribute.
According to a second aspect of the present invention, there is provided a geofence decision-making system for a vehicle, the system comprising: a data acquisition module configured to acquire, in real time, vehicle positioning data and map data for the current location of the vehicle, together with point cloud data about the vehicle's surroundings; a first road model determination module configured to determine a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes; a second road model determination module configured to determine a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane; a model fusion module configured to perform a fusion calculation on the first road model and the second road model to determine, from the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and a decision module configured to decide whether the vehicle is in a geofenced area based on the determined lane.
According to one embodiment of the invention, the second road model determination module is further configured to: preprocess the acquired point cloud data to convert the three-dimensional point cloud image into a two-dimensional image; and extract lane information and road edge information about the detected lane from the two-dimensional image to determine the second road model, wherein the lane information comprises lane boundaries and the number of lanes, and the road edge information comprises elevation information of the road boundaries.
According to a further embodiment of the invention, the lane information is detected based on lidar reflection intensity values, and the road edge information is detected using a grid-based clustering algorithm.
According to a further embodiment of the invention, the first road model determination module is further configured to determine lane information and road edge information about one or more candidate lanes within a predetermined search range in the map data according to the vehicle positioning data and the predetermined search range, wherein the predetermined search range is increased if no lane information and road edge information about one or more candidate lanes are obtained within it.
According to a further embodiment of the invention, the model fusion module is further configured to: match the lane information about the one or more candidate lanes from the first road model with the lane information about the detected lane from the second road model to filter the candidate lanes; and match the road edge information from the first road model with the road edge information from the second road model to determine, from the filtered candidate lanes, the lane in which the vehicle is currently located.
According to a further embodiment of the invention, the decision module is further configured to: determine the road attribute of the lane in which the vehicle is currently located from the acquired map data; and decide whether the vehicle is in a geofenced area based on the determined road attribute.
According to a third aspect of the present invention, there is provided a vehicle having a geofence decision-making function, the vehicle comprising: a positioning device configured to acquire, in real time, vehicle positioning data about the current position of the vehicle; a networking device configured to obtain map data in real time; at least one sensor configured to acquire, in real time, point cloud data about the vehicle's surroundings; a control unit configured to perform the method of any one of the preceding aspects; and an autonomous driving control unit configured to receive, from the control unit, a decision as to whether the vehicle is in a geofenced area.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium storing instructions that, when executed, cause a machine to perform the method of any of the preceding aspects.
These and other features and advantages will become apparent upon reading the following detailed description and upon reference to the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this invention and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
FIG. 1 illustrates an architectural diagram of a geofence decision system in accordance with one embodiment of the present invention.
Fig. 2 shows a schematic diagram of preprocessing a lidar signal according to an embodiment of the invention.
Fig. 3 shows a schematic diagram of lidar-based lane information extraction according to one embodiment of the invention.
Fig. 4 shows a schematic diagram of lidar-based road edge information extraction according to one embodiment of the invention.
FIG. 5 illustrates an example of a scenario of lidar-based geofence decision-making in accordance with one embodiment of the present invention.
FIG. 6 illustrates another example of a scenario of lidar-based geofence decision-making in accordance with one embodiment of the present invention.
FIG. 7 shows a schematic flow chart diagram of a geofence decision method in accordance with one embodiment of the present invention.
FIG. 8 illustrates an exemplary vehicle supporting geofence decisions according to one embodiment of the invention.
FIG. 9 illustrates an architectural diagram of a geofence decision system in accordance with one embodiment of the present invention.
Detailed Description
The features of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. The term "vehicle" as used throughout the specification refers to any type of automobile, including but not limited to cars, vans, trucks, buses, and the like. For simplicity, the invention is described with respect to "automobiles". The term "A or B" as used in the specification means "A and B" and "A or B", and does not mean that A and B are exclusive unless otherwise indicated.
FIG. 1 is a schematic architecture diagram of a geofence decision system 100 in accordance with one embodiment of the present invention. As shown in fig. 1, the geofence decision system 100 can include at least a data acquisition module 101, a first road model determination module 102, a second road model determination module 103, a model fusion module 104, and a decision module 105.
The data acquisition module 101 may acquire, in real time, vehicle positioning data and map data for the current location of the vehicle, together with point cloud data about the vehicle's surroundings. In one embodiment, the data acquisition module 101 may acquire vehicle positioning data about the current location of the vehicle in real time, for example, via a global navigation satellite system (GNSS) providing global positioning services, such as GPS, BeiDou, Galileo, or GLONASS. In one embodiment, the map data may be a high-definition map (also referred to as an HD map), and the data acquisition module 101 may acquire the high-definition map from the cloud in real time, for example, via a network. Compared with a traditional navigation map, a high-definition map provides navigation information at both the road level and the lane level, and additionally contains richer driving-assistance and semantic information (such as an accurate three-dimensional representation of the road network); it is thus far superior to a traditional map in the richness and accuracy of its information. In one embodiment, the data acquisition module 101 may acquire point cloud data about the vehicle's surroundings in real time from a lidar mounted on the vehicle (e.g., at its front end). Laser light has good monochromaticity, coherence, and directivity and travels at the speed of light, so lidar is little affected by the environment and can effectively overcome the strong environmental sensitivity of schemes based on vision sensors and millimeter-wave radar. In some cases, the lidar may be a single-line or multi-line lidar. The point cloud data may include, for each laser point, the distance to the reflection point, the reflection intensity, and the deflection angles fed back by the beam, from which the distance from the reflection point to the lidar center and its angles in the vertical and horizontal planes can be obtained. Thus, by parsing the point cloud data received via the lidar, a three-dimensional point cloud image can be reconstructed.
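As a concrete illustration of how such a return can be parsed, the following is a minimal sketch (not from the patent; the field names and frame conventions are assumptions) that converts one polar lidar return into a Cartesian point in the sensor frame:

```python
import math
from dataclasses import dataclass

@dataclass
class LidarReturn:
    distance: float   # range from the lidar center to the reflection point, meters
    azimuth: float    # angle in the horizontal plane, radians
    elevation: float  # angle in the vertical plane, radians
    intensity: float  # reflection intensity value fed back by the laser point

def to_cartesian(ret: LidarReturn) -> tuple[float, float, float]:
    """Project one polar return into sensor-frame x, y, z coordinates."""
    ground_range = ret.distance * math.cos(ret.elevation)
    x = ground_range * math.cos(ret.azimuth)
    y = ground_range * math.sin(ret.azimuth)
    z = ret.distance * math.sin(ret.elevation)
    return (x, y, z)
```

Accumulating such points over a full sweep yields the three-dimensional point cloud image mentioned above.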
The first road model determination module 102 may determine a first road model based on the acquired vehicle positioning data and map data, wherein the first road model includes lane information and road edge information about one or more candidate lanes. In one embodiment, the first road model determination module 102 may determine the lane information and road edge information about one or more candidate lanes within a predetermined search range (e.g., 10 meters) of the current vehicle position in the map data acquired in real time; if no candidate lanes are found within the predetermined search range, the range may be increased and the search repeated. In one embodiment, the lane information may include lane boundaries (e.g., boundary shape or material), the number of lanes, and the like. In one embodiment, the road edge information may include elevation information of the road boundary (e.g., guard rails, sound barriers, green belts, curbs, etc.).
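A minimal sketch of this widening map search, assuming a hypothetical map-SDK method `lanes_within(position, radius)`; the radii and the doubling policy are illustrative, not from the patent:

```python
def candidate_lanes(map_db, position, search_m: float = 10.0, max_search_m: float = 80.0):
    """Return candidate lane records (lane info + road edge info) near `position`.

    If nothing is found within the predetermined search range, the range
    is enlarged and the search repeated, as described above.
    """
    radius = search_m
    while radius <= max_search_m:
        lanes = map_db.lanes_within(position, radius)  # hypothetical HD-map query
        if lanes:
            return lanes
        radius *= 2  # no candidates yet: increase the predetermined search range
    return []
```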
The second road model determination module 103 may determine a second road model based on the acquired point cloud data, wherein the second road model includes lane information and road edge information about the detected lane. In one embodiment, the second road model determination module 103 may preprocess the point cloud data acquired in real time to convert the three-dimensional point cloud image reconstructed from it into a two-dimensional depth image (i.e., project the three-dimensional point cloud onto a two-dimensional plane), as shown in further detail in fig. 2, where the left side shows the actual scene of a vehicle traveling on an elevated road and the right side shows the converted 2D image. In one embodiment, lane information about the detected lane may be detected based on lidar reflection intensity values, as shown in further detail in fig. 3 (four lane boundaries and lane lines). As shown in fig. 3, different media reflect laser light with different intensities: the characteristic coating of lane markings returns echo intensity values of about 12-30, asphalt and concrete (roads and buildings) about 5-8, and vegetation and metal (bushes, vehicles, etc.) about 45-150. An algorithm that detects lane lines from lidar reflection intensity values can therefore easily distinguish the road surface from the lane lines in the point cloud data of the road environment obtained by the vehicle-mounted lidar. In another embodiment, the lane information about the detected lane may also be detected based on the lidar echo width, or estimated from road edge detection and a known road width. In one embodiment, the road edge information about the detected lane may be detected using a grid-based clustering algorithm. Fig. 4 shows a schematic drawing of road edge information extraction, where the left side shows the camera view of a vehicle traveling on an elevated road and the right side shows the corresponding 3D point cloud (the white solid line represents the road edge, the yellow box is a vehicle traveling ahead, and the green points are elevation information on both sides of the road). In some cases, directly processing the 3D point cloud data acquired by the lidar imposes a huge computational load and poor real-time performance, so a gridding method may be employed to reduce the load: the point cloud is divided into grid cells, height statistics are computed per cell, cells are screened according to road-edge height features, and neighboring cells are clustered. Road edges (e.g., guard rails, green belts, etc.) can thus be detected, and lane lines can be further identified based on the detected road edges.
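The two extraction steps above (intensity-based lane-marking classification and grid-based road-edge screening) might be sketched as follows. The intensity ranges are the illustrative figures quoted above, while the grid cell size and height threshold are assumptions, not values from the patent:

```python
import numpy as np

def classify_by_intensity(intensity: float) -> str:
    """Rough material class from a lidar echo intensity value."""
    if 12 <= intensity <= 30:
        return "lane_marking"         # characteristic coating
    if 5 <= intensity <= 8:
        return "road_surface"         # asphalt or concrete
    if 45 <= intensity <= 150:
        return "vegetation_or_metal"  # bushes, vehicles, guard rails
    return "unknown"

def road_edge_cells(points: np.ndarray, cell: float = 0.5, min_rise: float = 0.12):
    """Grid-based road-edge screening over an (N, 3) array of x, y, z points.

    Points are rasterized into 2D grid cells; a cell whose height span
    matches a curb/guard-rail profile is kept. Clustering the surviving
    neighboring cells (omitted here) then yields continuous road edges.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    spans: dict[tuple[int, int], tuple[float, float]] = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        lo, hi = spans.get(key, (z, z))
        spans[key] = (min(lo, z), max(hi, z))
    return [key for key, (lo, hi) in spans.items() if hi - lo >= min_rise]
```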
The model fusion module 104 may perform a fusion calculation on the first road model and the second road model to determine, among the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located. In one embodiment, the model fusion module 104 may first match the lane information about the one or more candidate lanes from the first road model against the lane information about the detected lane from the second road model to filter the candidate lanes (i.e., filter out candidate lanes whose lane boundaries and lane counts do not match), and then match the road edge information from the first road model against that from the second road model (i.e., compare how well the elevation information of the road boundaries, such as guard rails, agrees) to determine, from the filtered candidate lanes, the lane in which the vehicle is currently located.
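A minimal sketch of this two-stage fusion, with illustrative field names (`lane_count`, `boundary_types`, `edge_elevation`) that are assumptions rather than structures from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class LaneModel:
    lane_id: str
    lane_count: int        # number of lanes on the road
    boundary_types: tuple  # e.g. ("solid", "dashed")
    edge_elevation: float  # height of the adjacent road boundary, meters

def match_current_lane(candidates: Sequence[LaneModel], detected: LaneModel) -> Optional[LaneModel]:
    """Stage 1: filter candidates whose lane info matches the detected lane.
    Stage 2: among survivors, pick the best road-edge elevation match."""
    filtered = [
        c for c in candidates
        if c.lane_count == detected.lane_count
        and c.boundary_types == detected.boundary_types
    ]
    if not filtered:
        return None
    return min(filtered, key=lambda c: abs(c.edge_elevation - detected.edge_elevation))
```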
The decision module 105 may decide whether the vehicle is in a geofenced area based on the determined lane. A "geofence" may refer to a virtual boundary around the drivable area of an autonomous vehicle, within which autonomous-driving-related functions may be activated. In one embodiment, the decision module 105 may determine the road attribute of the lane in which the vehicle is currently located from the map data acquired in real time and decide, based on that attribute, whether the vehicle is in a geofenced area; the road attribute may include, for example, the road type, the road class, and the like. In one embodiment, the decision module 105 may also decide, based on the determined lane, whether the vehicle is about to enter or leave the geofenced area: when the vehicle is about to leave the geofenced area, the driver may be notified to take over the vehicle in time, and when the vehicle is about to enter the geofenced area, the vehicle may be notified to prepare to switch to the autonomous driving mode.
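The final decision might then reduce to an attribute lookup, sketched below; the set of geofenced road classes and the `road_attribute` accessor are assumptions for illustration:

```python
# Road classes assumed to lie inside the geofence; the text names
# expressways and urban expressways as examples.
GEOFENCED_ROAD_CLASSES = {"expressway", "urban_expressway", "elevated_road"}

def in_geofence(map_db, lane_id: str) -> bool:
    """Decide from the matched lane's road attribute whether the
    vehicle is in a geofenced area."""
    attr = map_db.road_attribute(lane_id)  # hypothetical HD-map accessor
    return attr.road_class in GEOFENCED_ROAD_CLASSES
```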
Those skilled in the art will appreciate that the geofence decision-making system of the present invention can be implemented in either hardware or software, and that its modules can be combined or split in any suitable manner.
FIG. 5 illustrates an example of a scenario 500 of lidar-based geofence decision-making in accordance with one embodiment of the present invention.
Fig. 5 shows a scene in which a vehicle is traveling on a bidirectional four-lane road, where the road edges are provided with curbs and a guard rail runs along the middle of the road. Positioning based on vehicle navigation data alone may make it difficult to distinguish which of several parallel lanes the vehicle is in, while the geofenced area in which the autonomous driving function may be activated may contain only one of those lanes rather than all of them. Using the geofence decision-making system of the present invention, a first road model may first be determined based on the acquired vehicle positioning data and map data; it includes four candidate parallel lanes L1, L2, L3, and L4 and the guard rail between L2 and L3. A second road model may then be determined based on the acquired point cloud data; it includes the detected lane L5 and the guard rail to its left. The first road model and the second road model may then be fused to find the candidate lane matching the detected lane. Specifically, the lane information about the candidate lanes L1, L2, L3, and L4 may be compared with the lane information about the detected lane L5, filtering the candidates down to L3 because its lane boundaries L3-1 and L3-2 match the boundaries L5-1 and L5-2 of L5; the road edge information about candidate lane L3 may then be compared with that about the detected lane L5, and since the guard rail on the left of L3 matches the guard rail on the left of L5, the lane in which the vehicle is currently located is determined to be L3. It may thus be determined from the road attribute of L3 in the map data (e.g., urban expressway) that the vehicle is currently in a geofenced area. In this way, the lane model obtained from the lidar is fused with the lane model obtained from the high-definition map data and vehicle positioning data to determine the vehicle's current lane and hence whether a geofence event is triggered; the vehicle's current lane can be located more accurately with little environmental influence, significantly improving autonomous driving safety.
FIG. 6 illustrates another example of a scenario 600 of lidar-based geofence decision-making in accordance with one embodiment of the present invention.
Fig. 6 shows a scenario in which a vehicle has just entered an elevated road with guard rails on both sides. Positioning based on vehicle navigation data alone may make it difficult to distinguish whether the vehicle is on the elevated road or on the lane below it, while the geofenced area in which the autonomous driving function may be activated may contain only the elevated lane. Using the geofence decision-making system of the present invention, a first road model may first be determined based on the acquired vehicle positioning data and map data; it includes the elevated lane L1, the lane L2 below it, the guard rails on both sides of L1, and the guard rail on the left side of L2. A second road model may then be determined based on the acquired point cloud data; it includes the detected lane L3 and the guard rails on both sides of that lane. The first road model may then be fused with the second road model to find the candidate lane matching the detected lane. Specifically, the lane information about the candidate lanes L1 and L2 may be compared with the lane information about the detected lane L3, filtering the candidates down to L1 because its lane boundaries L1-1 and L1-2 match the boundaries L3-1 and L3-2 of L3; the road edge information about candidate lane L1 may then be compared with that about the detected lane L3, and since the guard rails on both sides of L1 match those on both sides of L3, the lane in which the vehicle is currently located is determined to be L1. It may thus be determined from the road attribute of L1 in the map data (e.g., elevated road) that the vehicle is currently in a geofenced area, so that the autonomous driving function can subsequently be activated.
It will be appreciated that the geofence decision system 100 can likewise be used to make geofence decisions in various other scenarios.
FIG. 7 shows a schematic flow chart of a geofence decision-making method 700 in accordance with one embodiment of the present invention. The method 700 begins at step 701, in which the data acquisition module 101 may acquire, in real time, vehicle positioning data and map data for the current location of the vehicle, together with point cloud data about the vehicle's surroundings.
In step 702, the first road model determination module 102 may determine a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes, where the lane information may include lane boundaries and the number of lanes, and the road edge information may include elevation information of the road boundaries (e.g., guard rails, curbs, etc.). In one embodiment, the first road model determination module 102 may match against the map data within a predetermined search range based on the acquired vehicle positioning data to generate the lane model (including lane information and road edge information about the candidate lanes), and may appropriately enlarge the predetermined search range when no lane is matched within it.
In step 703, the second road model determination module 103 may determine a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane. In one embodiment, the second road model determination module 103 may preprocess the acquired point cloud data to project the parsed 3D point cloud image onto a 2D plane, and then extract the lane model (including lane information and road edge information about the detected lane) from the generated 2D image, where the lane information may be detected based on lidar reflection intensity values and the road edge information may be detected using a grid-based clustering algorithm.
In step 704, the model fusion module 104 may perform a fusion calculation on the first road model and the second road model to determine, among the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located. In one embodiment, the model fusion module 104 may first perform a match based on the lane information (e.g., lane boundaries) and then a match based on the road edge information.
In step 705, the decision module may decide whether the vehicle is in a geofenced area based on the determined lane. In one embodiment, the decision module may make this determination based on the determined lane and auxiliary information in the map data acquired in real time. The auxiliary information may include one or more of the road class (e.g., expressway / express road / urban expressway / national road / provincial road / county road), the road type (e.g., main road / auxiliary road), and the lane type (e.g., normal lane / emergency lane).
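Tying steps 701-705 together, one cycle of method 700 might look like the sketch below, reusing the illustrative helpers defined earlier; `build_detected_lane` stands in for the full second-road-model extraction and is likewise hypothetical:

```python
def geofence_decision_step(gnss, map_db, lidar) -> bool:
    """One decision cycle of method 700 (steps 701-705)."""
    position = gnss.current_position()               # step 701: positioning + map data
    points = lidar.point_cloud()                     # step 701: point cloud data
    candidates = candidate_lanes(map_db, position)   # step 702: first road model
    detected = build_detected_lane(points)           # step 703: second road model
    lane = match_current_lane(candidates, detected)  # step 704: fusion calculation
    return lane is not None and in_geofence(map_db, lane.lane_id)  # step 705
```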
FIG. 8 illustrates an exemplary vehicle 800 supporting geofence decisions according to one embodiment of the invention. The vehicle 800 may include a positioning device 802 configured to acquire, in real time, vehicle positioning data about the current location of the vehicle. In one embodiment, the positioning device 802 may be implemented as an onboard GPS receiver. The vehicle 800 may also include a networking device 804 that can download map data (e.g., high-definition maps) from the cloud in real time via a network. In addition, the vehicle 800 may include at least one sensor 806 configured to acquire, in real time, point cloud data about the vehicle's surroundings; the at least one sensor 806 may be implemented as a lidar. The vehicle 800 may further include an electronic control unit (ECU) 808. An electronic control unit generally controls the running state of the vehicle and implements its various functions: it uses various sensors and the data acquired and exchanged over the buses to judge the vehicle state and the driver's intention, and controls the vehicle through actuators. The electronic control unit 808 may support the various functions described herein, including: determining a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes; determining a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane; performing a fusion calculation on the first road model and the second road model to determine, among the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and deciding whether the vehicle is in a geofenced area based on the determined lane. Additionally, the vehicle 800 may include an autonomous driving control unit 810 communicatively coupled with the electronic control unit and configured to receive from it the decision as to whether the vehicle is currently in a geofenced area, for use in determining whether to activate autonomous-driving-related functions.
FIG. 9 illustrates an architectural diagram of a geofence decision system 900 in accordance with one embodiment of the present invention. As shown in fig. 9, the system 900 may include a memory 901 and at least one processor 902. The memory 901 may include RAM, ROM, or a combination thereof. The memory 901 may store computer-executable instructions that, when executed by the at least one processor 902, cause the at least one processor 902 to perform the various functions described herein, including: determining a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes; determining a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane; performing a fusion calculation on the first road model and the second road model to determine, among the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and deciding whether the vehicle is in a geofenced area based on the determined lane. In some cases, the memory 901 may contain, among other things, a BIOS that may control basic hardware or software operations, such as interactions with peripheral components or devices. The processor 902 may include an intelligent hardware device (e.g., a general-purpose processor, DSP, CPU, microcontroller, ASIC, FPGA, programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof).
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software for execution by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and the appended claims. For example, due to the nature of software, the functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwired or any combination thereof. Features that implement the functions may also be physically located in various places including being distributed such that parts of the functions are implemented at different physical locations.
What has been described above includes examples of aspects of the claimed subject matter. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the claimed subject matter are possible. Accordingly, the disclosed subject matter is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.

Claims (15)

1. A geofence decision-making method for a vehicle, the method comprising:
acquiring, in real time, vehicle positioning data, map data, and point cloud data about the surroundings of a vehicle;
determining a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes;
determining a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane;
performing a fusion calculation on the first road model and the second road model to determine, from the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and
deciding whether the vehicle is in a geofenced area based on the determined lane.
2. The method of claim 1, wherein determining the second road model based on the acquired point cloud data further comprises:
preprocessing the acquired point cloud data to convert the three-dimensional point cloud image into a two-dimensional image; and
extracting lane information and road edge information about the detected lane from the two-dimensional image to determine the second road model, wherein the lane information comprises lane boundaries and the number of lanes, and the road edge information comprises elevation information of the road boundaries.
3. The method of claim 2, wherein the lane information is detected based on lidar reflection intensity values.
4. The method of claim 2, wherein the road edge information is detected using a grid-based clustering algorithm.
5. The method of claim 1, wherein determining a first road model based on the acquired vehicle positioning data and map data further comprises:
determining lane information and road edge information about one or more candidate lanes within a predetermined search range in the map data based on the vehicle positioning data and the predetermined search range,
wherein the predetermined search range is increased in a case where lane information and road edge information about one or more candidate lanes are not obtained within the predetermined search range.
6. The method of claim 1, wherein performing a fusion calculation of the first road model and the second road model further comprises:
matching lane information about the one or more candidate lanes from the first road model with lane information about the detected lane from the second road model to filter the candidate lanes; and
matching the road edge information from the first road model with the road edge information from the second road model to determine, from the filtered candidate lanes, the lane in which the vehicle is currently located.
7. The method of claim 1, wherein deciding whether the vehicle is in a geofenced area based on the determined lane further comprises:
determining the road attribute of the lane where the vehicle is currently located according to the acquired map data; and
deciding whether the vehicle is in a geofenced area based on the determined road attribute.
8. A geofence decision-making system for a vehicle, the system comprising:
a data acquisition module configured to acquire, in real time, vehicle positioning data and map data for the current location of the vehicle, together with point cloud data about the vehicle's surroundings;
a first road model determination module configured to determine a first road model based on the acquired vehicle positioning data and map data, the first road model including lane information and road edge information about one or more candidate lanes;
a second road model determination module configured to determine a second road model based on the acquired point cloud data, the second road model including lane information and road edge information about the detected lane;
a model fusion module configured to perform a fusion calculation on the first road model and the second road model to determine, from the one or more candidate lanes, the lane matching the detected lane as the lane in which the vehicle is currently located; and
a decision module configured to decide whether the vehicle is in a geofenced area based on the determined lane.
9. The system of claim 8, wherein the second road model determination module is further configured to:
preprocess the acquired point cloud data to convert the three-dimensional point cloud image into a two-dimensional image; and
extract lane information and road edge information about the detected lane from the two-dimensional image to determine the second road model, wherein the lane information comprises lane boundaries and the number of lanes, and the road edge information comprises elevation information of the road boundaries.
10. The system of claim 9, wherein the lane information is detected based on lidar reflection intensity values and the road edge information is detected using a grid-based clustering algorithm.
11. The system of claim 8, wherein the first road model determination module is further configured to:
determine lane information and road edge information about one or more candidate lanes within a predetermined search range in the map data based on the vehicle positioning data and the predetermined search range,
wherein the predetermined search range is increased in a case where lane information and road edge information about one or more candidate lanes are not obtained within the predetermined search range.
12. The system of claim 8, wherein the model fusion module is further configured to:
match lane information about the one or more candidate lanes from the first road model with lane information about the detected lane from the second road model to filter the candidate lanes; and
match the road edge information from the first road model with the road edge information from the second road model to determine, from the filtered candidate lanes, the lane in which the vehicle is currently located.
13. The system of claim 8, wherein the decision module is further configured to:
determine the road attribute of the lane in which the vehicle is currently located from the acquired map data; and
decide whether the vehicle is in a geofenced area based on the determined road attribute.
14. A vehicle having a geofence decision-making function, the vehicle comprising:
a positioning device configured to acquire, in real time, vehicle positioning data about the current position of the vehicle;
a networking device configured to obtain map data in real time;
at least one sensor configured to acquire, in real time, point cloud data about the vehicle's surroundings;
an electronic control unit configured to perform the method of any one of claims 1-7; and
an autonomous driving control unit configured to receive, from the electronic control unit, a decision as to whether the vehicle is in a geofenced area.
15. A computer readable storage medium storing instructions that, when executed, cause a machine to perform the method of any of claims 1-7.
Application CN202210666635.XA · Priority date: 2022-06-13 · Filing date: 2022-06-13 · Geofence decision-making method and system based on laser radar · Status: Pending · Published as CN117275214A (en)

Priority Applications (1)

Application number: CN202210666635.XA · Publication: CN117275214A (en) · Priority date: 2022-06-13 · Filing date: 2022-06-13 · Title: Geofence decision-making method and system based on laser radar

Applications Claiming Priority (1)

Application number: CN202210666635.XA · Publication: CN117275214A (en) · Priority date: 2022-06-13 · Filing date: 2022-06-13 · Title: Geofence decision-making method and system based on laser radar

Publications (1)

Publication number: CN117275214A · Publication date: 2023-12-22

Family

Family ID: 89220183

Family Applications (1)

Application number: CN202210666635.XA · Status: Pending · Publication: CN117275214A (en) · Priority date: 2022-06-13 · Filing date: 2022-06-13 · Title: Geofence decision-making method and system based on laser radar

Country Status (1)

Country: CN · Publication: CN117275214A (en)


Legal Events

PB01: Publication