CN115953748A - Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles - Google Patents

Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles

Info

Publication number
CN115953748A
Authority
CN
China
Prior art keywords
target object
moving
moving target
route
moving route
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211553879.3A
Other languages
Chinese (zh)
Inventor
陆科杰
王恒达
徐子清
洪涛
马小燕
马铤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Chuangyuan Industrial Investment Co ltd
Original Assignee
Suzhou Chuangyuan Industrial Investment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Chuangyuan Industrial Investment Co ltd
Priority to CN202211553879.3A
Publication of CN115953748A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to a multi-sensor fusion perception method, system, device and medium for the Internet of Vehicles. The method comprises the following steps: acquiring data collected by a laser radar and a high-definition camera, performing joint calibration on the collected data to divide the image field of view into regions, and identifying the category of each region; generating a pseudo image and a basic network from the acquired data using a V2X-based laser point cloud SLAM positioning mode, and constructing a spatial rectangular coordinate system; identifying moving target objects from the acquired data, acquiring each object's relative coordinates in the spatial rectangular coordinate system, and deriving its moving route from those coordinates; and tracking the moving route of each moving target object in real time and judging whether it is reasonable. With this method, the sensors are integrated to jointly calibrate the vehicle position, and by combining the strengths of each sensor an accurate vehicle position can be obtained around the clock in all weather conditions.

Description

Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles
Technical Field
The application relates to the technical field of the Internet of Vehicles and vehicle-road cooperation, and in particular to a multi-sensor fusion perception method, a multi-sensor fusion perception system, computer equipment, and a storage medium for the Internet of Vehicles.
Background
Roadside devices and advanced driving-assistance technology use vehicle-mounted sensors to perceive road conditions in real time for path planning or decision support, an effective means of reducing the incidence of traffic accidents and improving driving safety. However, the technology is still immature, and the sensors involved are very expensive. Furthermore, in many special situations it remains difficult to take overall control of the driving environment away from the driver. Because each sensor manufacturer uses its own proprietary protocol, the advantages of individual sensors are hard to exploit effectively; in addition, differing network signal strengths across areas introduce communication latency between vehicles, causing the measured distance to deviate from the actual distance and posing a safety risk to vehicle operation.
Disclosure of Invention
Based on this, there is a need for a multi-sensor fusion perception method, system, computer device, and storage medium for the Internet of Vehicles that integrates all sensors to jointly calibrate the vehicle position and obtains an accurate vehicle position around the clock, solving the technical problems that individual sensors struggle to exploit their own advantages effectively and that differing network signal strengths across areas cause communication latency between vehicles, making the measured distance deviate from the actual distance and posing a safety risk to vehicle operation.
In one aspect, a multi-sensor fusion perception method for the Internet of Vehicles is provided, the method comprising the following steps:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
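The four steps above can be sketched end to end as follows. This is an illustrative outline only; all function names, data shapes, and stub return values are assumptions, since the claims do not specify concrete interfaces:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    obj_id: int
    category: str   # "motor_vehicle", "non_motor_vehicle", or "pedestrian"
    xyz: tuple      # relative coordinates in the rectangular coordinate system

def calibrate_and_segment(lidar_points, image):
    """Step 1 (stub): joint calibration, region division, region classification."""
    return {"lane_1": "motor_vehicle_lane", "crossing_1": "sidewalk"}

def build_coordinate_frame(lidar_points):
    """Step 2 (stub): pseudo image + basic network via V2X-assisted point-cloud
    SLAM; here it simply returns the origin of the rectangular coordinate system."""
    return (0.0, 0.0, 0.0)

def detect_and_locate(lidar_points, image, origin):
    """Step 3 (stub): identify moving objects and place them in the shared frame."""
    return [Detection(1, "motor_vehicle", (12.0, 3.5, 0.0))]

def track_and_judge(detections, regions):
    """Step 4 (stub): track each object's route and flag it reasonable or not."""
    return {d.obj_id: "reasonable" for d in detections}

def run_pipeline(lidar_points, image):
    regions = calibrate_and_segment(lidar_points, image)
    origin = build_coordinate_frame(lidar_points)
    detections = detect_and_locate(lidar_points, image, origin)
    return track_and_judge(detections, regions)
```

Each stub corresponds to one claimed step; a real implementation would replace the stub bodies while keeping the same data flow.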
In one embodiment, in the step of performing joint calibration on the collected data to divide the image field of view into regions and identifying the category of each region, the identified regions comprise motor-vehicle lanes, lane lines, sidewalks, and intersections; the lane lines comprise yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turn markings, and prohibition markings; the category of each region comprises the normative behaviors and violation behaviors defined for that region; the violation behaviors comprise wrong-way driving, traffic congestion, road debris, illegal parking, road construction, speeding, slow driving, abnormal lane changes, consecutive lane changes, emergency-lane occupation, intrusion events, queue overrun, queue overflow, and pedestrian crossing events.
In one embodiment, in the step of acquiring the moving route of the moving object according to the relative coordinates of the moving object, the moving route of the moving object is identified by combining with the high-precision map data.
In one embodiment, in the step of tracking the moving route of each moving target object in real time, the moving route of each moving target object is tracked in real time by adopting a method combining manual calibration and machine calibration; the moving object includes a motor vehicle, a non-motor vehicle, and a pedestrian.
In one embodiment, the method combining manual calibration and machine calibration includes:
simulating the scene of the moving target object moving in the image visual field in a manual calibration mode;
positioning through a multi-sensor fusion algorithm;
calibrating the moving distance and the spatial position;
and fitting the positioning result to obtain a moving route and outputting the moving route.
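The last step fits the sequence of positioning results into a moving route. As one simple illustration (the patent does not specify the fitting method), a least-squares polynomial fit over timestamped position fixes yields a route that can also be extrapolated for prediction:

```python
import numpy as np

def fit_route(timestamps, xs, ys, degree=1):
    """Fit x(t) and y(t) polynomials to noisy positioning results and
    return a callable route: t -> (x, y). A degree-1 (straight-line)
    fit is an assumption made here purely for illustration."""
    px = np.polyfit(timestamps, xs, degree)
    py = np.polyfit(timestamps, ys, degree)
    return lambda t: (float(np.polyval(px, t)), float(np.polyval(py, t)))

# Example: a target moving at roughly 2 m/s along x, stationary in y.
route = fit_route([0, 1, 2, 3], [0.0, 2.1, 3.9, 6.0], [1.0, 1.0, 1.0, 1.0])
x4, y4 = route(4.0)  # extrapolated position one second ahead
```

Evaluating the fitted route at a future timestamp gives a simple route prediction, which the later embodiments use for tracking.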
In one embodiment, the positioning by the multi-sensor fusion algorithm includes: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time and learning the moving route of the moving target object, so that detection, tracking, and moving-route prediction of the moving target object can be achieved even under low illumination.
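The embodiment does not disclose how the per-sensor measurements are combined. One common fusion scheme, shown here purely as an assumed illustration, is inverse-variance weighting, which lets the more precise sensor (typically the laser radar) dominate the fused estimate:

```python
def fuse_estimates(estimates):
    """estimates: list of (position, variance) pairs, one per sensor.
    Returns the fused position and its fused variance, which is always
    smaller than any individual sensor's variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total

# Hypothetical range estimates: lidar (variance 0.04 m^2) vs camera (0.25 m^2).
pos, var = fuse_estimates([(10.0, 0.04), (10.5, 0.25)])
```

The fused result lands close to the lidar estimate while still benefiting from the camera, illustrating why combining sensors can outperform any single one.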
In one embodiment, the step of determining whether the moving route of each moving object is reasonable includes:
acquiring all areas passed by each moving target object according to the moving route of the moving target object;
the method comprises the steps of obtaining the category of a moving target object, obtaining the category information of each area through which the moving target object passes, and judging the normative behavior and the illegal behavior of the moving target object in each area according to the category of the moving target object;
if the moving target object is in the standard behavior in all the areas, judging that the moving route of the moving target object is reasonable;
and if the moving target object has at least one illegal behavior in all the areas, judging that the moving route of the moving target object is unreasonable.
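The decision rule above can be transcribed directly: a moving route is reasonable if and only if the object exhibits only normative behavior in every region it passes through. The behavior labels below are illustrative assumptions:

```python
# Assumed encoding of the violation behaviors listed in the embodiment.
VIOLATIONS = {"wrong_way", "illegal_parking", "speeding", "abnormal_lane_change"}

def route_is_reasonable(behaviors_by_region):
    """behaviors_by_region: region name -> set of observed behavior labels.
    A single violation in any region makes the whole route unreasonable."""
    return all(not (behaviors & VIOLATIONS)
               for behaviors in behaviors_by_region.values())

ok = route_is_reasonable({"lane_1": {"normal_driving"}})
bad = route_is_reasonable({"lane_1": {"normal_driving"}, "ramp": {"speeding"}})
```

This mirrors the two branches of the claim: `ok` corresponds to normative behavior in all regions, `bad` to at least one violation.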
In another aspect, a multi-sensor fusion perception system for internet of vehicles is provided, the system comprising:
the region identification module is used for acquiring the collected data of the laser radar and the high-definition camera, carrying out combined calibration and division on regions in the image visual field according to the collected data, and identifying the category of each region;
the system comprises a space rectangular coordinate system building module, a virtual image generating module and a basic network generating module, wherein the space rectangular coordinate system building module is used for generating a pseudo image and a basic network from collected data by using a laser point cloud SLAM positioning mode based on V2X and building a space rectangular coordinate system;
the target tracking module is used for identifying the moving target object according to the acquired data, acquiring the relative coordinates of the moving target object in the space rectangular coordinate system and acquiring the moving route of the moving target object according to the relative coordinates of the moving target object;
and the traffic event perception module is used for tracking the moving route of each moving target object in real time and judging whether the moving route of each moving target object is reasonable or not.
In another aspect, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the following steps when executing the computer program:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
In yet another aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
According to the above multi-sensor fusion perception method, system, computer device, and storage medium for the Internet of Vehicles, the vehicle position is jointly calibrated by integrating all the sensors, so that the strengths of each sensor are combined and an accurate vehicle position can be obtained around the clock.
The invention addresses the application and development needs of Internet of Vehicles and vehicle-road cooperation technology, building on the "vehicle-road-cloud integrated" digital infrastructure for intelligent connected vehicles. It solves the problems of information silos, poor interconnection and cooperation, and ineffective management that intelligent connected vehicles face, using multi-sensor fusion perception to interconnect data among vehicles, roads, and the cloud platform. It promotes open sharing of traffic-light status information, traffic events, and control information, advancing urban intelligence; it actively pushes real-time information such as suggested speeds and road-condition updates to passing vehicles, helping travelers grasp traffic conditions in time and choose travel routes reasonably, providing correct and effective road information to public vehicles and improving road throughput; and by applying active-guidance vehicle-road cooperation technology to special service vehicles such as buses, ambulances, fire trucks, and emergency rescue vehicles, it senses their positions and routes in real time, strengthens vehicle-road information interaction, and guarantees priority or convenient passage.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The following drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating an embodiment of processing acquired data acquired by a laser radar and a high definition camera to form a spatial rectangular coordinate system;
FIG. 2 is a schematic flow chart diagram of a multi-sensor fusion sensing method for Internet of vehicles according to one embodiment;
FIG. 3 is a flow diagram illustrating the method steps for using manual calibration in combination with machine calibration in one embodiment;
FIG. 4 is a flowchart illustrating the steps of determining whether the movement path of each moving object is reasonable in one embodiment;
FIG. 5 is a block diagram of a multi-sensor fusion perception system for Internet of vehicles in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Example 1
As described in the Background, because each sensor manufacturer uses its own proprietary protocol, it is difficult for individual sensors to exploit their advantages effectively; moreover, differing network signal strengths across areas cause communication latency between vehicles, making the measured distance deviate from the actual distance and posing a safety risk to vehicle operation.
To solve these problems, the embodiment of the invention provides a multi-sensor fusion perception method for the Internet of Vehicles covering three aspects: target detection and classification, target tracking and prediction, and traffic event perception.
Target detection and classification: a laser radar and a high-definition camera are jointly calibrated; lane lines are automatically identified from the image field of view, drivable areas are automatically segmented, vehicle parking behavior is automatically detected, and whether a violation has occurred is output according to the lane in which the vehicle is allowed to drive.
Target tracking and prediction: an improved V2X-based laser point cloud SLAM technique generates pseudo images, a basic network, a detection head, loss-function computation, and data augmentation; reinforcement learning is applied to pedestrians so that targets can be tracked under low illumination; vehicles are automatically detected, license plates are recognized, and beyond-line-of-sight tracking is achieved by combining vehicle features.
Traffic event perception: millimeter-wave radar, laser radar, traffic-light signals, and high-definition cameras are combined with high-precision map data, exploiting the strengths of each sensor to judge around the clock whether a vehicle commits traffic violations such as driving on lane lines, overtaking on ramps, or truck lane occupation.
It should be noted that realizing the above functions requires intelligently retrofitting or replacing the roadside devices and intelligently constructing the cloud platform.
Specifically, referring to fig. 1, fig. 1 is a flowchart of acquiring data collected by a laser radar and a high-definition camera and processing the data to form a spatial rectangular coordinate system (WGS-84 BLH). To realize multi-sensor fusion perception for the Internet of Vehicles and to detect and classify pedestrian, vehicle, and non-motor-vehicle targets, equipment capable of detecting motor vehicles, non-motor vehicles, and pedestrians is first installed on a roadside police pole, and its angle is calibrated against the actual lanes and positions. Calibration combines manual calibration with machine calibration: the movement of motor vehicles, non-motor vehicles, and pedestrians within the equipment's field of view is simulated manually, and a multi-sensor fusion algorithm performs positioning, distance calibration, and spatial calibration. The positioning results are then fitted into a moving route and output. Data from multiple sensors such as cameras and millimeter-wave radar are fused in real time so that targets and events can be detected more accurately under adverse conditions such as low illumination or rain, fog, and snow; reinforcement learning applied to pedestrians enables target detection and tracking under low illumination. Combined with high-precision map data, the system judges whether a vehicle commits violations such as driving on lane lines, overtaking on ramps, or truck lane occupation; it automatically detects vehicles, recognizes license plate contents, and achieves beyond-line-of-sight tracking by combining vehicle features; and it automatically detects vehicles from the video image and outputs the vehicle classification and the lane in which each vehicle is located.
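The coordinate frame above is tied to WGS-84 BLH (latitude B, longitude L, height H). A standard BLH-to-ECEF rectangular conversion is sketched below; whether the embodiment uses exactly this transform is an assumption:

```python
import math

WGS84_A = 6378137.0                    # semi-major axis (m)
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def blh_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic BLH to Earth-centered rectangular XYZ (meters)."""
    b, l = math.radians(lat_deg), math.radians(lon_deg)
    # Prime-vertical radius of curvature at latitude b.
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(b) ** 2)
    x = (n + h) * math.cos(b) * math.cos(l)
    y = (n + h) * math.cos(b) * math.sin(l)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(b)
    return x, y, z

# Sanity check: a point on the equator at the prime meridian lies on the
# +X axis at one semi-major-axis distance from the Earth's center.
x0, y0, z0 = blh_to_ecef(0.0, 0.0, 0.0)
```

Relative coordinates between targets can then be taken as differences of such rectangular positions.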
The specific technical indexes are as follows:
    • 0-50 m (motor vehicles, non-motor vehicles, pedestrians): average target detection rate > 99.5%, average classification accuracy > 99%;
    • 50-100 m (motor vehicles, non-motor vehicles, pedestrians): average target detection rate > 99.5%, average classification accuracy > 99%;
    • 100-150 m, motor vehicles: average target detection rate > 98%, average classification accuracy > 98%; pedestrian detection rate and accuracy > 90%; non-motor-vehicle detection rate and accuracy > 95%; output frequency 10 Hz.
High-precision positioning output for traffic participants:
    • 0-100 m: laser-radar ranging; tracked-vehicle accuracy 0.4 m at worst, non-tracked-vehicle accuracy 0.3 m; speed accuracy < 1 km/h;
    • 100-150 m: fused vision and laser-radar ranging; tracked-vehicle accuracy < 1 m longitudinally and 0.4 m at worst laterally; non-tracked-vehicle accuracy 0.75 m at worst longitudinally and 0.35 m at worst laterally; speed accuracy < 1 km/h (lateral figures are lane-dependent); output frequency 10 Hz.
Traffic event detection, supported event types: wrong-way driving / traffic congestion / road debris / illegal parking / road construction / speeding / slow driving / abnormal lane change / consecutive lane change / emergency-lane occupation / intrusion event / queue overrun / queue overflow / pedestrian crossing event; detection rate > 98%, average accuracy > 95%.
Example 2
Embodiment 2 is based on the same inventive concept as Embodiment 1, includes all of the technical features of Embodiment 1, and achieves the same technical effects.
Specifically, as shown in fig. 2, the multi-sensor fusion sensing method for the internet of vehicles provided in embodiment 2 of the present application includes the following steps:
the method comprises the following steps of S1, acquiring data of a laser radar and a high-definition camera, carrying out combined calibration and division on areas in an image visual field according to the acquired data, and identifying the category of each area;
s2, generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
s3, identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and S4, tracking the moving route of each moving target object in real time and judging whether the moving route of each moving target object is reasonable or not.
In this embodiment, in the step of performing joint calibration on the collected data to divide the image field of view into regions and identifying the category of each region, the identified regions include motor-vehicle lanes, lane lines, sidewalks, and intersections; the lane lines comprise yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turn markings, and prohibition markings; the category of each region comprises the normative behaviors and violation behaviors defined for that region; the violation behaviors comprise wrong-way driving, traffic congestion, road debris, illegal parking, road construction, speeding, slow driving, abnormal lane changes, consecutive lane changes, emergency-lane occupation, intrusion events, queue overrun, queue overflow, and pedestrian crossing events.
In this embodiment, in the step of acquiring the moving route of the moving object according to the relative coordinates of the moving object, the moving route of the moving object is identified by combining with the high-precision map data.
In this embodiment, in the step of tracking the moving route of each moving target object in real time, the moving route of each moving target object is tracked in real time by a method combining manual calibration and machine calibration; the moving object includes a motor vehicle, a non-motor vehicle, and a pedestrian.
As shown in fig. 3, in this embodiment, the method of combining manual calibration and machine calibration includes:
s11, simulating the scene of the moving target object moving in the image visual field in a manual calibration mode;
s12, positioning through a multi-sensor fusion algorithm;
s13, calibrating the moving distance and the spatial position;
and S14, fitting the positioning result to obtain a moving route and outputting the moving route.
In this embodiment, the positioning by the multi-sensor fusion algorithm includes: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time and learning the moving route of the moving target object, so that detection, tracking, and moving-route prediction of the moving target object can be achieved even under low illumination.
As shown in fig. 4, in this embodiment, the step of determining whether the moving route of each moving object is reasonable includes:
step S21, acquiring all areas passed by each moving target object according to the moving route of the moving target object;
step S22, obtaining the category of the moving target object, obtaining the category information of each area through which the moving target object passes, and judging the normative behavior and the illegal behavior of the moving target object in each area according to the category of the moving target object;
step S23, if the moving target object is in a standard behavior in all areas, judging that the moving route of the moving target object is reasonable;
and step S24, if the moving target object has at least one illegal behavior in all the areas, judging that the moving route of the moving target object is unreasonable.
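Step S21 requires the set of regions a moving route passes through. If regions are modeled as polygons in the rectangular coordinate system (an assumption; the embodiment does not say how regions are represented), a ray-casting point-in-polygon test suffices:

```python
def point_in_polygon(pt, polygon):
    """Even-odd rule: cast a ray toward +x and count edge crossings."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def regions_passed(route_points, regions):
    """regions: name -> polygon vertex list. Returns names the route touches."""
    return {name for name, poly in regions.items()
            for pt in route_points if point_in_polygon(pt, poly)}

# Hypothetical rectangular lane region and a two-point route sample.
lane = [(0, 0), (10, 0), (10, 3), (0, 3)]
passed = regions_passed([(5, 1), (20, 1)], {"lane_1": lane})
```

Steps S22-S24 then only need to look up the behaviors observed in each returned region.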
According to the above multi-sensor fusion perception method for the Internet of Vehicles, field testing and verification show that by integrating all the sensors to jointly calibrate the vehicle position, an accurate vehicle position can be obtained around the clock.
It should be understood that although the various steps in the flowcharts of fig. 2-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a multi-sensor fusion perception system 10 for internet of vehicles, comprising: the system comprises a region identification module 1, a construction space rectangular coordinate system module 2, a target tracking module 3 and a traffic incident perception module 4.
The region identification module 1 is used for acquiring the collected data of the laser radar and the high-definition camera, performing combined calibration according to the collected data to divide regions in an image visual field, and identifying the category to which each region belongs.
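As a rough illustration of how joint lidar-camera calibration lets regions in the image field of view be divided, the sketch below projects lidar points into pixel coordinates using assumed extrinsics (R, t) and camera intrinsics K. The function name and parameters are hypothetical, not part of the patent:

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 lidar points into pixel coordinates using the
    jointly calibrated extrinsics (R, t) and camera intrinsics K."""
    cam = points @ R.T + t           # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]         # keep points in front of the camera
    uv = cam @ K.T                   # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]      # perspective divide
    return uv
```

Points that land inside a polygon drawn for, say, a sidewalk could then inherit that region's category, which is one plausible way of realizing the region division described above.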
The space rectangular coordinate system building module 2 is used for generating a pseudo image and a basic network from the acquired data by using a V2X-based laser point cloud SLAM positioning mode, and building a space rectangular coordinate system.
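The "pseudo image" generated from the laser point cloud can be pictured as a bird's-eye-view rasterization that a 2D backbone network then consumes. The patent does not specify the exact encoding; the following is a minimal sketch assuming a pillar-style grid with per-cell point count and maximum height:

```python
import numpy as np

def point_cloud_to_pseudo_image(points, x_range=(0, 100), y_range=(-50, 50), cell=0.5):
    """Rasterize an Nx3 lidar point cloud into a 2-channel BEV pseudo
    image: channel 0 is the point count per cell, channel 1 the max height."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((2, nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)   # drop out-of-range points
    for x, y, z in zip(ix[ok], iy[ok], points[ok][:, 2]):
        img[0, x, y] += 1.0
        img[1, x, y] = max(img[1, x, y], z)
    return img
```

A backbone network operating on this tensor could then regress object positions in the constructed rectangular coordinate system.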
The target tracking module 3 is used for identifying the moving target object according to the collected data, acquiring the relative coordinates of the moving target object in the space rectangular coordinate system, and acquiring the moving route of the moving target object according to the relative coordinates of the moving target object.
The traffic incident perception module 4 is used for tracking the moving route of each moving target object in real time and judging whether the moving route of each moving target object is reasonable or not.
In this embodiment, in the step of performing combined calibration according to the collected data to divide the regions in the image field of view and identifying the category to which each region belongs, the identified regions include motor vehicle lanes, lane lines, sidewalks and intersections; the lane lines include yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turning marks and prohibition marks; the category of each region includes the normative behaviors and violation behaviors in that region; the violation behaviors include wrong-way driving, traffic jam, road debris, illegal parking, road construction, vehicle speeding, slow driving, abnormal lane change, continuous lane change, emergency-lane occupation, break-in events, queue overrun, queue overflow and pedestrian crossing events.
In this embodiment, in the step of acquiring the moving route of the moving object according to the relative coordinates of the moving object, the moving route of the moving object is identified by combining with the high-precision map data.
In this embodiment, in the step of tracking the moving route of each moving target object in real time, the moving route of each moving target object is tracked in real time by a method combining manual calibration and machine calibration; the moving object includes a motor vehicle, a non-motor vehicle, and a pedestrian.
In this embodiment, the method of combining manual calibration and machine calibration includes: simulating the scene of the moving target object moving in the image visual field in a manual calibration mode; positioning through a multi-sensor fusion algorithm; calibrating the moving distance and the spatial position; and fitting the positioning result to obtain a moving route and outputting the moving route.
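The final step above, fitting the positioning results to obtain a moving route, could be sketched as a per-axis polynomial fit over the fused position samples. This is an illustrative assumption, not the patent's actual implementation:

```python
import numpy as np

def fit_moving_route(timestamps, coords, degree=2):
    """Fit a sequence of fused relative coordinates (Nx2) to a smooth
    moving route (one polynomial per axis) and return a sampler that
    maps a timestamp to an interpolated (x, y) position."""
    px = np.polyfit(timestamps, coords[:, 0], degree)
    py = np.polyfit(timestamps, coords[:, 1], degree)
    return lambda t: np.stack([np.polyval(px, t), np.polyval(py, t)], axis=-1)
```

Sampling the fitted route at future timestamps would also give a simple form of the moving-route prediction mentioned below.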
In this embodiment, the positioning by the multi-sensor fusion algorithm includes: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time, and learning the moving route of the moving target object, so that detection, tracking and moving-route prediction of the moving target object can be achieved even under low illumination.
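One simple way to fuse per-sensor position estimates, assuming each lidar or camera measurement comes with an uncertainty, is inverse-variance weighting. This is a sketch of the general idea only; the patent does not disclose its specific fusion algorithm:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted fusion of per-sensor position estimates.
    `estimates` is a list of (x, y, variance) tuples, one per sensor;
    lower-variance sensors contribute more to the fused position."""
    wsum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / wsum
    y = sum(py / var for _, py, var in estimates) / wsum
    return x, y
```

Under low illumination the camera variance would grow, so the fused position would lean on the lidar, which matches the motivation stated above.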
In this embodiment, the step of determining whether the moving route of each moving target object is reasonable includes: acquiring all regions passed through by each moving target object according to its moving route; acquiring the category of the moving target object and the category information of each region it passes through, and judging the normative behaviors and violation behaviors of the moving target object in each region according to its category; if the moving target object exhibits only normative behavior in all regions, judging that its moving route is reasonable; and if the moving target object exhibits at least one violation behavior in any region, judging that its moving route is unreasonable.
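The reasonableness judgment above is a pure rule check over the regions a route passes through. It can be sketched as follows, where the region names, category names and rule table are illustrative placeholders:

```python
def route_is_reasonable(route_regions, object_category, violation_rules):
    """Judge whether a moving route is reasonable: the route is
    unreasonable as soon as the object commits at least one violation
    in any region it passes through.  `violation_rules` maps
    (object_category, region_category) to True for violations."""
    for region in route_regions:
        if violation_rules.get((object_category, region), False):
            return False  # at least one violation -> unreasonable
    return True           # normative behavior everywhere -> reasonable
```

For example, a pedestrian whose route crosses a motor vehicle lane outside a crosswalk would be flagged, while the same route for a motor vehicle would not.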
In the above multi-sensor fusion perception system for the Internet of vehicles, the vehicle position is jointly calibrated by fusing multiple sensors, so that the complementary advantages of the sensors are exploited and an accurate vehicle position can be obtained around the clock.
For specific limitations of the multi-sensor fusion perception system for the Internet of vehicles, reference may be made to the above limitations of the multi-sensor fusion perception method for the Internet of vehicles, which are not repeated here. The modules in the multi-sensor fusion perception system for the Internet of vehicles can be wholly or partially implemented by software, hardware or a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing multi-sensor fusion perception data used for the Internet of vehicles. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a multi-sensor fusion awareness method for vehicle networking.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
in the step of performing combined calibration according to the collected data to divide the regions in the image field of view and identifying the category to which each region belongs, the identified regions include motor vehicle lanes, lane lines, sidewalks and intersections; the lane lines include yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turning marks and prohibition marks; the category of each region includes the normative behaviors and violation behaviors in that region; the violation behaviors include wrong-way driving, traffic jam, road debris, illegal parking, road construction, vehicle speeding, slow driving, abnormal lane change, continuous lane change, emergency-lane occupation, break-in events, queue overrun, queue overflow and pedestrian crossing events.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and in the step of acquiring the moving route of the moving target object according to the relative coordinates of the moving target object, the moving route of the moving target object is identified by combining with high-precision map data.
In one embodiment, the processor when executing the computer program further performs the steps of:
in the step of tracking the moving route of each moving target object in real time, the moving route of each moving target object is tracked in real time by adopting a method of combining manual calibration and machine calibration; the moving object includes a motor vehicle, a non-motor vehicle, and a pedestrian.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the method for combining manual calibration and machine calibration comprises the following steps:
simulating the scene of the moving target object moving in the image visual field in a manual calibration mode;
positioning through a multi-sensor fusion algorithm;
calibrating the moving distance and the spatial position;
and fitting the positioning result to obtain a moving route and outputting the moving route.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the positioning by the multi-sensor fusion algorithm includes: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time, and learning the moving route of the moving target object, so that detection, tracking and moving-route prediction of the moving target object can be achieved under low illumination.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the step of judging whether the moving route of each moving target object is reasonable comprises the following steps:
acquiring all areas passed by each moving target object according to the moving route of the moving target object;
acquiring the category of the moving target object and the category information of each region through which the moving target object passes, and judging the normative behaviors and violation behaviors of the moving target object in each region according to the category of the moving target object;
if the moving target object is in the standard behavior in all the areas, judging that the moving route of the moving target object is reasonable;
and if the moving target object has at least one illegal behavior in all the areas, judging that the moving route of the moving target object is unreasonable.
Specific limitations regarding the implementation of the steps when the processor executes the computer program may be referred to the above limitations on the method for multi-sensor fusion sensing for vehicle networking, and will not be described herein again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
In one embodiment, the computer program when executed by the processor further performs the steps of:
in the step of performing combined calibration according to the collected data to divide the regions in the image field of view and identifying the category to which each region belongs, the identified regions include motor vehicle lanes, lane lines, sidewalks and intersections; the lane lines include yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turning marks and prohibition marks; the category of each region includes the normative behaviors and violation behaviors in that region; the violation behaviors include wrong-way driving, traffic jam, road debris, illegal parking, road construction, vehicle speeding, slow driving, abnormal lane change, continuous lane change, emergency-lane occupation, break-in events, queue overrun, queue overflow and pedestrian crossing events.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and in the step of acquiring the moving route of the moving target object according to the relative coordinates of the moving target object, the moving route of the moving target object is identified by combining with high-precision map data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
in the step of tracking the moving route of each moving target object in real time, a method of combining manual calibration and machine calibration is adopted to track the moving route of each moving target object in real time; the moving target object includes a motor vehicle, a non-motor vehicle and a pedestrian.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the method for combining manual calibration and machine calibration comprises the following steps:
simulating the scene of the moving target object moving in the image visual field in a manual calibration mode;
positioning through a multi-sensor fusion algorithm;
calibrating the moving distance and the spatial position;
and fitting the positioning result to obtain a moving route and outputting the moving route.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the positioning by the multi-sensor fusion algorithm includes: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time, and learning the moving route of the moving target object, so that detection, tracking and moving-route prediction of the moving target object can be achieved under low illumination.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the step of judging whether the moving route of each moving target object is reasonable comprises the following steps:
acquiring all areas passed by each moving target object according to the moving route of the moving target object;
acquiring the category of the moving target object and the category information of each region through which the moving target object passes, and judging the normative behaviors and violation behaviors of the moving target object in each region according to the category of the moving target object;
if the moving target object is in the standard behavior in all the areas, judging that the moving route of the moving target object is reasonable;
and if the moving target object has at least one illegal behavior in all the areas, judging that the moving route of the moving target object is unreasonable.
Specific limitations regarding the implementation steps of the computer program when being executed by the processor can be found in the above limitations on the method for multi-sensor fusion sensing for internet of vehicles, which are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as no contradiction exists.
The above-mentioned embodiments only express several embodiments of the present application, and their description is relatively specific and detailed, but should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A multi-sensor fusion perception method for Internet of vehicles is characterized by comprising the following steps:
acquiring data collected by a laser radar and a high-definition camera, carrying out combined calibration according to the collected data to divide areas in an image visual field, and identifying the category of each area;
generating a pseudo image and a basic network by using the acquired data in a V2X-based laser point cloud SLAM positioning mode, and constructing a space rectangular coordinate system;
identifying a moving target object according to the acquired data, acquiring relative coordinates of the moving target object in a space rectangular coordinate system, and acquiring a moving route of the moving target object according to the relative coordinates of the moving target object;
and tracking the moving route of each moving target object in real time, and judging whether the moving route of each moving target object is reasonable or not.
2. The multi-sensor fusion perception method for the Internet of vehicles according to claim 1, wherein in the step of performing combined calibration according to the collected data to divide the regions in the image field of view and identifying the category to which each region belongs, the identified regions comprise motor vehicle lanes, lane lines, sidewalks and intersections; the lane lines comprise yellow solid lines, white solid lines, yellow dotted lines, white dotted lines, turning marks and prohibition marks; the category of each region comprises the normative behaviors and violation behaviors in that region; the violation behaviors comprise wrong-way driving, traffic jam, road debris, illegal parking, road construction, vehicle speeding, slow driving, abnormal lane change, continuous lane change, emergency-lane occupation, break-in events, queue overrun, queue overflow and pedestrian crossing events.
3. The multi-sensor fusion perception method for the internet of vehicles according to claim 2, wherein in the step of obtaining the moving route of the moving object according to the relative coordinates of the moving object, the moving route of the moving object is identified by combining with high-precision map data.
4. The multi-sensor fusion perception method for the internet of vehicles according to claim 1, wherein in the step of tracking the moving route of each moving target object in real time, the moving route of each moving target object is tracked in real time by a method combining manual calibration and machine calibration; the moving object includes a motor vehicle, a non-motor vehicle, and a pedestrian.
5. The multi-sensor fusion perception method for the internet of vehicles according to claim 4, wherein the method combining manual calibration and machine calibration comprises:
simulating the moving scene of the moving target object in the image visual field in a manual calibration mode;
positioning through a multi-sensor fusion algorithm;
calibrating the moving distance and the spatial position;
and fitting the positioning result to obtain a moving route and outputting the moving route.
6. The multi-sensor fusion perception method for the Internet of vehicles according to claim 5, wherein the positioning by the multi-sensor fusion algorithm comprises: fusing the data collected by the plurality of laser radars and the plurality of high-definition cameras in real time, and learning the moving route of the moving target object, so that detection, tracking and moving-route prediction of the moving target object are achieved under low illumination.
7. The multi-sensor fusion perception method for the internet of vehicles according to claim 2, wherein the step of judging whether the moving route of each moving target object is reasonable includes:
acquiring all areas passed by each moving target object according to the moving route of the moving target object;
acquiring the category of the moving target object and the category information of each region through which the moving target object passes, and judging the normative behaviors and violation behaviors of the moving target object in each region according to the category of the moving target object;
if the moving target object is in the standard behavior in all the areas, judging that the moving route of the moving target object is reasonable;
and if the moving target object has at least one illegal behavior in all the areas, judging that the moving route of the moving target object is unreasonable.
8. A multi-sensor fusion awareness system for use in a vehicle networking, the system comprising:
the region identification module is used for acquiring the collected data of the laser radar and the high-definition camera, carrying out combined calibration and division on regions in the image visual field according to the collected data, and identifying the category of each region;
the system comprises a space rectangular coordinate system building module, a virtual image generating module and a basic network generating module, wherein the space rectangular coordinate system building module is used for generating a pseudo image and a basic network from collected data by using a laser point cloud SLAM positioning mode based on V2X and building a space rectangular coordinate system;
the target tracking module is used for identifying the moving target object according to the acquired data, acquiring the relative coordinates of the moving target object in the space rectangular coordinate system and acquiring the moving route of the moving target object according to the relative coordinates of the moving target object;
and the traffic event perception module is used for tracking the moving route of each moving target object in real time and judging whether the moving route of each moving target object is reasonable or not.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202211553879.3A 2022-12-06 2022-12-06 Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles Pending CN115953748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553879.3A CN115953748A (en) 2022-12-06 2022-12-06 Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles

Publications (1)

Publication Number Publication Date
CN115953748A true CN115953748A (en) 2023-04-11

Family

ID=87286722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553879.3A Pending CN115953748A (en) 2022-12-06 2022-12-06 Multi-sensor fusion sensing method, system, device and medium for Internet of vehicles

Country Status (1)

Country Link
CN (1) CN115953748A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117012032A (en) * 2023-09-28 2023-11-07 深圳市新城市规划建筑设计股份有限公司 Intelligent traffic management system and method based on big data
CN117012032B (en) * 2023-09-28 2023-12-19 深圳市新城市规划建筑设计股份有限公司 Intelligent traffic management system and method based on big data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination