CN116524311A - Road side perception data processing method and system, storage medium and electronic equipment thereof - Google Patents

Road side perception data processing method and system, storage medium and electronic equipment thereof

Info

Publication number
CN116524311A
CN116524311A (application CN202310283139.0A)
Authority
CN
China
Prior art keywords
data
vehicle
vector
fused
vector image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310283139.0A
Other languages
Chinese (zh)
Inventor
陈宁
远斯
樊迪
史骏
裴晓栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING BOYOTOD TECHNOLOGY CO LTD
Original Assignee
BEIJING BOYOTOD TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING BOYOTOD TECHNOLOGY CO LTD filed Critical BEIJING BOYOTOD TECHNOLOGY CO LTD
Priority to CN202310283139.0A priority Critical patent/CN116524311A/en
Publication of CN116524311A publication Critical patent/CN116524311A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 — Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 — Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W 40/02 — Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W 40/06 — Road conditions
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/40 — Scenes; Scene-specific elements in video content
    • G06V 20/49 — Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00 — Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40 — Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W 2420/403 — Image sensing, e.g. optical camera
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00 — Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40 — Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W 2420/408 — Radar; Laser, e.g. lidar
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2556/00 — Input parameters relating to data
    • B60W 2556/35 — Data fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road side perception data processing method and system, a storage medium and electronic equipment. The method comprises the following steps: acquiring video data and millimeter wave radar data of a vehicle; vectorizing the video data of the vehicle to obtain vector image data; fusing the millimeter wave radar data with the vector image data to form fused perception data; and sending the fused perception data to the vehicle. Vectorizing the video data yields vector image data, which retains the vivid, comprehensive real-scene information expressed by a raster (dot-matrix) image while occupying relatively little storage space. The fusion produces a vector video that serves as the main body of the fused perception data, so the data load can be reduced while the richness of the information is preserved, information loss is avoided, and latency is lowered. This improves the efficiency of data processing and transmission and raises the enabling level of roadside equipment in future assisted-driving and high-level autonomous-driving scenarios.

Description

Road side perception data processing method and system, storage medium and electronic equipment thereof
Technical Field
The invention relates to the technical field of automobile driving, in particular to a road side perception data processing method and system, a storage medium and electronic equipment.
Background
In recent years, autonomous driving technology has become both a focal point and a challenge in technology development, and is regarded as a future solution to a series of traffic problems such as congestion. A purely single-vehicle-intelligence route requires stacking sensors on the vehicle, which drives up the manufacturing cost; sensor maintenance after an accident is also a direct or indirect burden on the owner, making autonomous driving uneconomical. The vehicle-road cooperation route, represented by vehicle-road-cloud integration, has therefore gradually become the better choice for the market.
However, the perception information currently provided by vehicle-road-cloud integration is relatively low-throughput, heavily processed data, such as structured and compressed video or event prediction judgments. The information loss is too large and the latency too high, which constrains the enabling level of roadside equipment in future assisted-driving and high-level autonomous-driving scenarios.
Disclosure of Invention
Object of the invention
The invention aims to provide a road side perception data processing method and system, a storage medium and electronic equipment.
(II) Technical solution
The first aspect of the present invention provides a road side perception data processing method, including: acquiring video data and millimeter wave radar data of a vehicle; vectorizing the video data of the vehicle to obtain vector image data; the millimeter wave radar data and the vector image data are fused to form fusion perception data; and sending the fusion perception data to the vehicle.
Further, the vectorizing the video data of the vehicle to obtain vector image data includes: segmenting the video data of the vehicle and mapping it to single-frame images; identifying an object in a single-frame image and extracting the contour of the object; discretizing the set of points on the contour curve of the object; and connecting each pair of adjacent points into a vector to form, in the counterclockwise direction, a vector group that closely approximates the contour, thereby vectorizing the contour.
Further, the acquiring millimeter wave radar data includes acquiring point cloud data generated by the millimeter wave radar, vehicle position information, vehicle speed, vehicle pitch angle and vehicle horizontal angle.
Further, the millimeter wave radar data and the vector image data are fused to form fused perception data, which comprises: and fusing the point cloud data with the vector image data to form a fused vector image.
Further, the point cloud data and the vector image data are fused to form a fused vector image, which comprises: and carrying out vector coordinate conversion on the point cloud data so as to enable the point cloud data to be displayed in vector image data to form the fused vector image.
Further, the millimeter wave radar data and the vector image data are fused to form fused perception data, and the method further comprises the following steps: combining the fused vector images according to time sequence to obtain a target vector video; and sending the target vector video to the vehicle.
Further, before the acquiring the video data of the vehicle, the method further includes: acquiring coordinates of a signal transmitting unit and road side sensing equipment coordinates; and when the vehicle enters the area covered by the signal transmitting unit and the road side sensing equipment, obtaining the position information of the vehicle.
Further, the road side perception data processing method further comprises the following steps: the vehicle obtains the running information of the target vehicle according to the position information of the vehicle, the position information of each target vehicle in the fusion perception data and the vector video; the vehicle determines a vehicle speed control strategy of the own vehicle according to the running information of the target vehicle.
A second aspect of the present invention provides a roadside awareness data processing system comprising: an acquisition unit for acquiring video data and millimeter wave radar data of a vehicle; the vectorization unit is used for vectorizing the video data of the vehicle to obtain vector image data; the fusion unit is used for fusing the millimeter wave radar data and the vector image data to form fused perception data; and the transmitting unit is used for transmitting the fusion perception data to the vehicle.
A third aspect of the present invention provides a storage medium that is a computer-readable storage medium having a computer program stored thereon; the computer program is executed by a processor to implement the steps of the method described above.
A fourth aspect of the invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing the steps of the method described above when executing the computer program.
(III) beneficial effects
The technical scheme of the invention has the following beneficial technical effects:
The video data can be vectorized to obtain vector image data, which retains the vivid, comprehensive real-scene information expressed by a raster (dot-matrix) image while occupying relatively little storage space. The vector image data is fused with the millimeter wave radar data to form fused perception data: through feature-level fusion, feature information is extracted from the original video data and then fused to obtain a vector video, so the vector image becomes the feature information of the original image. With the vector video serving as the main body of the fused perception data, the data load can be reduced while the richness of the information is preserved, avoiding the large loss of information after processing that is common in the vehicle-road-cloud-integration technical route, and lowering latency. The efficiency of data processing and transmission is thereby improved, raising the enabling level of roadside equipment in future assisted-driving and high-level autonomous-driving scenarios.
Drawings
FIG. 1 is a flow chart of a roadside awareness data processing method according to a first embodiment of the present invention;
fig. 2 is a flowchart of vectorizing video data of a vehicle according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a deployment architecture of a road side aware device according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a blind spot warning according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a roadside awareness data processing system according to a fifth embodiment of the present invention.
Detailed Description
The objects, technical solutions and advantages of the present invention will become more apparent by the following detailed description of the present invention with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
The first aspect of the present invention provides a road side perception data processing method, as shown in fig. 1, specifically including the following steps:
step S110, acquiring video data and millimeter wave radar data of a vehicle;
step S120, vectorizing the video data of the vehicle to obtain vector image data;
step S130, the millimeter wave radar data and the vector image data are fused to form fusion perception data;
and step S140, the fusion perception data is sent to the vehicle.
The roadside sensing equipment may include camera equipment, radar, a thermal imager and the like, where the camera equipment collects the video data of the vehicle; the video data of the vehicle can thus be acquired through the roadside camera equipment. The video data can be vectorized to obtain vector image data, which retains the vivid, comprehensive real-scene information expressed by a raster (dot-matrix) image while occupying relatively little storage space. The vector image data is fused with the millimeter wave radar data to form fused perception data: through feature-level fusion, feature information is extracted from the original video data and then fused to obtain a vector video, so the vector image becomes the feature information of the original image. With the vector video serving as the main body of the fused perception data, the data load can be reduced while the richness of the information is preserved, avoiding the large loss of information after processing that is common in the vehicle-road-cloud-integration technical route, and lowering latency; the efficiency of data processing and transmission is thereby improved, raising the enabling level of roadside equipment in future assisted-driving and high-level autonomous-driving scenarios.
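As a rough illustration of the storage claim above, the following back-of-the-envelope comparison contrasts one raw raster frame with contours stored as small vector groups. The frame resolution, object count and 32-point contours are assumptions introduced here for illustration, not figures from the invention:

```python
# Hypothetical back-of-the-envelope comparison of raster vs. vector storage.
# The frame resolution, channel count, object count and 32-point contours are
# illustrative assumptions, not figures taken from the invention.
RASTER_W, RASTER_H, CHANNELS = 1920, 1080, 3          # one uncompressed 1080p frame
raster_bytes = RASTER_W * RASTER_H * CHANNELS         # 1 byte per sample

# One object contour discretized into 32 feature points (e.g. one ray every
# 11.25 degrees), each point stored as two 4-byte floats.
POINTS_PER_CONTOUR, BYTES_PER_COORD = 32, 4
OBJECTS_PER_FRAME = 20                                # assumed busy road section
vector_bytes = OBJECTS_PER_FRAME * POINTS_PER_CONTOUR * 2 * BYTES_PER_COORD

print(raster_bytes, vector_bytes, raster_bytes // vector_bytes)  # 6220800 5120 1215
```

Real compressed video would narrow this gap, but the contour representation also drops license plates and in-cabin content, which is the desensitization effect described below.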
In some embodiments, when the video data is difficult to use for recognition because of low visibility, for example at night, video data of unsatisfactory quality is filtered out and other types of perception data, such as millimeter wave radar data, are used to serve vehicles, with a message sent to the vehicles indicating the problem with the video data; alternatively, the thermal imager is used to identify pedestrians on the road surface and send a prompt to the intelligent connected vehicle.
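A minimal sketch of this fallback logic; the visibility score, threshold and message text are hypothetical names introduced for illustration, not part of the invention:

```python
def select_perception_sources(visibility, has_thermal, threshold=0.4):
    """Pick which roadside data to fuse when video quality degrades.

    `visibility` is an assumed 0..1 quality score for the video feed;
    the threshold and the returned notice string are illustrative only.
    """
    sources = ["millimeter_wave_radar"]          # radar works regardless of light
    notice = None
    if visibility >= threshold:
        sources.append("video")                  # video quality acceptable
    else:
        notice = "video degraded; serving radar-based perception"
        if has_thermal:
            sources.append("thermal_imager")     # e.g. pedestrians at night
    return sources, notice

day, _ = select_perception_sources(0.9, has_thermal=True)
night, msg = select_perception_sources(0.1, has_thermal=True)
print(day, night, msg)
```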
In some embodiments, step S120, the vectorizing the video data of the vehicle to obtain vector image data, as shown in fig. 2, includes:
step S121, dividing and mapping the video data of the vehicle to a single frame image;
step S122, identifying an object in the single-frame image, and extracting the outline of the object;
step S123, discretizing a contour curve point set of the object;
in step S124, two adjacent points are connected into a vector to form a vector group which approaches the contour strictly in the counterclockwise direction, so as to realize the vectorization of the contour. In the vectorization process of the video data, the extracted object contour information does not contain license plate information and in-car scene information, so that the effect of quick desensitization is achieved.
In some embodiments, in the process of vectorizing the profile curve, a suitable discrete point sampling interval is first determined, and discretization is completed on a profile curve point set.
In the exemplary embodiment, taking an irregular closed object contour curve as an example, first the center point of the region A enclosed by the contour is determined; the center of gravity O of region A is taken as the center point, and its coordinates are computed with the polygon centroid formula:

O(x) = (1 / (6·S_A)) · Σ_{i=1..n} [q_i(x) + q_{i+1}(x)] · [q_i(x)·q_{i+1}(y) − q_{i+1}(x)·q_i(y)]
O(y) = (1 / (6·S_A)) · Σ_{i=1..n} [q_i(y) + q_{i+1}(y)] · [q_i(x)·q_{i+1}(y) − q_{i+1}(x)·q_i(y)]

wherein i = 1, 2, 3, …, n; q_i(x) and q_i(y) denote the coordinates of point q_i on the X and Y axes, respectively; and S_A denotes the area of region A. Starting from the center point O of region A, with a starting angle of 0 and proceeding in the counterclockwise direction, a ray is emitted outward every fixed angle γ_discrete; the intersection points of the rays with the contour are taken as the feature point set {p_1, p_2, p_3, …}. The similarity of the contour before and after vectorization is computed experimentally for different fixed angles γ_discrete; in the embodiment of the application the fixed angle γ_discrete is determined to be 11.25°.
After sampling, each pair of adjacent points is connected into a vector, forming a vector group that closely approximates the contour in the counterclockwise direction, i.e. {p_1p_2, p_2p_3, p_3p_4, …}, at which point the vectorization of the contour is complete.
In some embodiments, the acquiring millimeter wave radar data includes acquiring point cloud data generated by a millimeter wave radar, vehicle position information, vehicle speed, vehicle pitch angle, and vehicle horizontal angle. The desensitized vector video is fused with the other sensing data, including data fed back by the millimeter wave radar such as vehicle coordinates, vehicle speed, vehicle pitch angle and vehicle horizontal angle, as well as feedback data from the thermal imager, to provide the vehicle with beyond-visual-range fused perception capability.
In some embodiments, step S130, the millimeter wave radar data and the vector image data are fused to form fused sensing data, which includes:
step S131, fusing the point cloud data and the vector image data to form a fused vector image.
In some embodiments, step S131, the fusing the point cloud data with the vector image data to form a fused vector image includes:
and carrying out vector coordinate conversion on the point cloud data so as to enable the point cloud data to be displayed in vector image data to form the fused vector image.
In an exemplary embodiment, the vector coordinate conversion of the point cloud data converts the point cloud coordinates into coordinates in the vector image. Assuming the world coordinate of a point cloud point is P_cloud, its coordinate in the vector image is:

P_vec = R_nc · R_m · (P_cloud − P_G − e_c)

In the above formula, R_m is the verified rotation matrix of the camera installation angles; P_G is the north-east coordinate of the GNSS antenna phase center at the moment the camera captures the image; e_c is the verified camera eccentricity; and R_nc is the rotation matrix from the world coordinate system to the camera coordinate system. R_m is computed as the product of the elementary rotations by the calibrated installation angles (roll ω, pitch φ, heading κ):

R_m = R_z(κ) · R_y(φ) · R_x(ω)
after the vector coordinate conversion is completed, the point cloud data can be represented in the vector image to form a fused vector image. And combining the vector images according to time sequence to form multi-source fused collaborative perception data such as video, radar data, thermal imaging data and the like.
In some embodiments, step S130, the millimeter wave radar data and the vector image data are fused to form fused sensing data, and further includes:
and step S132, combining the fused vector images according to time sequence to obtain a target vector video. The target vector video is sent to the vehicle, so that the efficiency of data processing and sending is improved, and the enabling level of road side equipment is improved in the future scenes of auxiliary driving and high-order automatic driving; and the vector video is used as a main body for fusing the perception data, so that the data load can be reduced on the premise of ensuring the information richness, the situation that the information quantity is greatly reduced after the common information processing in the vehicle-road-cloud integrated technology route is avoided, and the time delay is reduced.
In some embodiments, before the capturing the video data of the vehicle, the roadside awareness data processing method further includes:
step S101, acquiring coordinates of a signal transmitting unit and road side sensing equipment coordinates;
step S102, when the vehicle enters an area covered by the signal transmitting unit and the road side sensing equipment, position information of the vehicle is obtained.
The roadside sensing equipment sends the collected data to the roadside computing equipment, which processes it and then sends it to each vehicle; the vehicles may be autonomous vehicles and communicate with the roadside computing equipment through an On Board Unit (OBU), while the signal transmitting unit may use an LTE-V2X/5G antenna to realize broadcast and point-to-point communication. As shown in fig. 3, when an authorized intelligent connected vehicle enters the area covered by the signal transmitting unit and the roadside computing equipment, the vehicle obtains its own accurate position information. In a specific application scenario, a corresponding number of signal transmitting units, roadside computing equipment, roadside sensing equipment and the like are deployed at corresponding intervals to process the perception data of the corresponding road sections and lanes. The autonomous vehicle can receive, through the OBU, the fused perception data processed and transmitted by the roadside computing equipment, and fuse it with its own on-board perception data to form collaborative perception data. In an exemplary embodiment, the intelligent connected vehicle obtains the operation information of target vehicles from its own position information, the position information of each target vehicle in the fused perception data, and the vector video; the vehicle then determines a speed control strategy for itself according to the operation information of the target vehicles.
For example, as shown in fig. 4, when a large vehicle in lane 2 blocks a vehicle in lane 3, the self-perception system of the lane-1 vehicle may fail to recognize the blocked vehicle, i.e. a blind zone (zone b) exists; here the zone-b perception data collected from another viewing angle by the roadside sensing equipment comes into play. From the acquired zone-b perception data, the possible activity range of the lane-3 vehicle in the next period, covering the whole of zone b, can be predicted, and an early-warning prompt is sent to the lane-1 vehicle, so that the lane-1 vehicle can reduce the probability of collision by monitoring the vehicles in zone b in advance. In this way, the intelligent connected vehicle confirms its surrounding environment and the operation information of nearby vehicles by comparing its own longitude and latitude with the coordinates and longitude/latitude of each vehicle in the collaborative perception data, supporting autonomous decision-making. The vehicle can thus acquire the operation information of an occluded vehicle in another lane from the fused perception data, enabling early warning and prompting to keep the vehicle safe while driving, and providing more efficient, ready-to-use data for downstream products (on-board units) that consume the fused perception data.
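The blind-zone scenario above can be sketched as a one-dimensional reachability check along the road: predict the interval the occluded lane-3 vehicle could occupy in the next period and warn the lane-1 vehicle if that interval overlaps its conflict region. All distances, speeds and names here are illustrative assumptions:

```python
def predicted_range(pos, speed, dt, margin=2.0):
    # Interval (along the road, meters) the occluded vehicle may occupy
    # within the next dt seconds; margin covers sensing/acceleration
    # uncertainty. All values are illustrative assumptions.
    return pos - margin, pos + speed * dt + margin

def blind_zone_warning(occluded_pos, occluded_speed, conflict_zone, dt=1.0):
    """Warn if the occluded vehicle's predicted range overlaps the conflict zone."""
    lo, hi = predicted_range(occluded_pos, occluded_speed, dt)
    z_lo, z_hi = conflict_zone
    return hi >= z_lo and lo <= z_hi

# Lane-3 vehicle 10 m before a conflict region at [15, 20] m, moving 12 m/s:
warn = blind_zone_warning(10.0, 12.0, (15.0, 20.0))
# The same vehicle stationary cannot reach the zone within the next second:
safe = blind_zone_warning(10.0, 0.0, (15.0, 20.0))
print(warn, safe)  # True False
```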
A second aspect of the present invention provides a roadside awareness data processing system, as shown in fig. 5, comprising:
an acquisition unit 510 for acquiring video data and millimeter wave radar data of a vehicle;
a vectorizing unit 520, configured to vectorize the video data of the vehicle to obtain vector image data;
a fusion unit 530, configured to fuse the millimeter wave radar data with vector image data to form fused sensing data;
and a transmitting unit 540 for transmitting the fusion awareness data to the vehicle.
In some embodiments, the vectoring unit 520 comprises:
a segmentation module for segmenting and mapping video data of the vehicle to a single frame image;
an extraction module for identifying an object in the single frame image and extracting a contour of the object;
a discretizing module for discretizing a set of contour curve points of the object;
and the vectorization module is used for connecting two adjacent points into vectors to form a vector group which strictly approximates to the contour along the anticlockwise direction so as to realize vectorization of the contour.
In some embodiments, the fusion unit 530 includes:
the fusion module is used for fusing the point cloud data and the vector image data to form a fused vector image;
and the combination module is used for combining the fused vector images according to time sequence to obtain a target vector video.
The specific structures of the acquiring unit 510, the vectorizing unit 520, the fusing unit 530 and the transmitting unit 540 are not limited and may be set freely by a person skilled in the art according to the functions they implement, which is not described further here. In addition, the specific implementation process and effect of the operation steps performed by the above modules in the embodiment of the present invention are the same as those of step S110 to step S140 in the embodiment of the present invention, so reference may be made to the statements above and no further description is given here.
A third aspect of the present invention provides a storage medium that is a computer-readable storage medium having a computer program stored thereon; the computer program is executed by a processor to implement the method described above.
A fourth aspect of the invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing the steps of the method described above when executing the computer program. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. In the embodiment of the invention, the electronic device may be, for example, a road side computing device.
It is to be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the invention and in no way limit it. Accordingly, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention shall be included in the scope of the present invention. Furthermore, the appended claims are intended to cover all such changes and modifications that fall within the scope and boundary of the appended claims, or equivalents thereof.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the related hardware, where the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The modules in the system of the embodiment of the invention can be combined, divided and deleted according to actual needs.

Claims (10)

1. A method for processing road side perception data, comprising the steps of:
acquiring video data and millimeter wave radar data of a vehicle;
vectorizing the video data of the vehicle to obtain vector image data;
the millimeter wave radar data and the vector image data are fused to form fusion perception data;
and sending the fusion perception data to the vehicle.
2. The method of claim 1, wherein vectorizing the video data of the vehicle to obtain vector image data comprises:
splitting the video data of the vehicle into single-frame images;
identifying an object in each single-frame image and extracting the contour of the object;
discretizing the point set of the contour curve of the object;
and connecting each pair of adjacent points into a vector, forming, in the counterclockwise direction, a vector group that closely approximates the contour, thereby vectorizing the contour.
3. The method of claim 1, wherein acquiring the millimeter wave radar data comprises acquiring point cloud data generated by the millimeter wave radar, vehicle position information, vehicle speed, vehicle pitch angle, and vehicle horizontal angle.
4. The method according to claim 3, wherein fusing the millimeter wave radar data with the vector image data to form fused perception data comprises:
fusing the point cloud data with the vector image data to form a fused vector image.
5. The method of claim 4, wherein fusing the point cloud data with the vector image data to form a fused vector image comprises:
performing vector coordinate conversion on the point cloud data so that the point cloud data is displayed in the vector image data, thereby forming the fused vector image.
6. The method of claim 5, wherein fusing the millimeter wave radar data with the vector image data to form fused perception data further comprises:
combining the fused vector images in time order to obtain a target vector video.
7. The method according to any one of claims 3-6, further comprising, prior to said acquiring video data of the vehicle:
acquiring coordinates of a signal transmitting unit and coordinates of the roadside sensing equipment;
and obtaining the position information of the vehicle when the vehicle enters the area covered by the signal transmitting unit and the roadside sensing equipment.
8. A roadside perception data processing system, comprising:
an acquisition unit for acquiring video data and millimeter wave radar data of a vehicle;
the vectorization unit is used for vectorizing the video data of the vehicle to obtain vector image data;
the fusion unit is used for fusing the millimeter wave radar data and the vector image data to form fused perception data;
and the transmitting unit is used for transmitting the fused perception data to the vehicle.
9. A storage medium, characterized in that the storage medium is a computer-readable storage medium, on which a computer program is stored;
the computer program being executed by a processor to carry out the steps of the method according to any one of claims 1-7.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor implements the steps of the method of any one of claims 1-7 when executing the computer program.
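The contour vectorization of claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the contour points have already been extracted and discretized (e.g. by an off-the-shelf routine such as OpenCV's findContours), and the function name contour_to_vectors is hypothetical.

```python
import numpy as np

def contour_to_vectors(points):
    """Connect each pair of adjacent contour points into a vector,
    walking the discretized point set counterclockwise, so that the
    resulting vector group closely approximates the original contour."""
    pts = np.asarray(points, dtype=float)
    nxt = np.roll(pts, -1, axis=0)  # shift by one: the last point closes back to the first
    return nxt - pts                # one vector per adjacent point pair

# A unit square sampled counterclockwise.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
vectors = contour_to_vectors(square)
```

A quick sanity check on the result: the vectors of a closed contour sum to zero, confirming that the vector group actually closes on itself.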
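The vector coordinate conversion of claim 5 amounts to projecting radar points into the image plane. The sketch below assumes a pre-calibrated 4x4 radar-to-camera extrinsic matrix and a 3x3 camera intrinsic matrix with a pinhole projection model; these names and the projection model are illustrative assumptions, since the patent does not specify the conversion.

```python
import numpy as np

def project_radar_to_image(points_xyz, extrinsic, intrinsic):
    """Project millimeter-wave radar points (radar frame, metres) to
    pixel coordinates in the vector image, via a 4x4 radar-to-camera
    extrinsic transform and a 3x3 pinhole intrinsic matrix."""
    pts = np.asarray(points_xyz, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4 homogeneous points
    cam = (extrinsic @ homo.T)[:3]                   # 3 x N points in camera frame
    pix = intrinsic @ cam                            # 3 x N homogeneous pixels
    return (pix[:2] / pix[2]).T                      # N x 2 pixel coordinates

# Radar and camera co-located (identity extrinsic), toy intrinsics.
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
uv = project_radar_to_image([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0]], np.eye(4), K)
```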
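Combining the fused vector images in time order (claim 6) is, at its simplest, a sort by timestamp. The (timestamp, image) pairing below is an assumed frame structure for illustration, not one defined in the patent.

```python
def frames_to_vector_video(fused_frames):
    """Order fused vector images by capture timestamp to form the
    target vector video; each frame is a (timestamp, image) pair."""
    return [image for _, image in sorted(fused_frames, key=lambda f: f[0])]

# Frames arriving out of order are reordered by their timestamps.
frames = [(0.2, "frame-b"), (0.1, "frame-a"), (0.3, "frame-c")]
video = frames_to_vector_video(frames)
```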
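The coverage condition of claim 7 can be modeled as the vehicle lying inside both the signal transmitting unit's and the roadside sensing equipment's coverage areas. The circular coverage model and the range values below are assumptions for illustration; the patent only requires that both sets of coordinates be known.

```python
import math

def in_coverage(vehicle_xy, emitter_xy, sensor_xy,
                emitter_range_m=300.0, sensor_range_m=200.0):
    """True when the vehicle lies inside BOTH coverage circles, i.e.
    the condition under which claim 7 obtains the vehicle position."""
    return (math.dist(vehicle_xy, emitter_xy) <= emitter_range_m
            and math.dist(vehicle_xy, sensor_xy) <= sensor_range_m)
```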
CN202310283139.0A 2023-03-22 2023-03-22 Road side perception data processing method and system, storage medium and electronic equipment thereof Pending CN116524311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310283139.0A CN116524311A (en) 2023-03-22 2023-03-22 Road side perception data processing method and system, storage medium and electronic equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310283139.0A CN116524311A (en) 2023-03-22 2023-03-22 Road side perception data processing method and system, storage medium and electronic equipment thereof

Publications (1)

Publication Number Publication Date
CN116524311A true CN116524311A (en) 2023-08-01

Family

ID=87405448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310283139.0A Pending CN116524311A (en) 2023-03-22 2023-03-22 Road side perception data processing method and system, storage medium and electronic equipment thereof

Country Status (1)

Country Link
CN (1) CN116524311A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117452392A (en) * 2023-12-26 2024-01-26 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system
CN117452392B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system

Similar Documents

Publication Publication Date Title
EP3700198B1 (en) Imaging device, image processing apparatus, and image processing method
JP2021099793A (en) Intelligent traffic control system and control method for the same
US20200309538A1 (en) System for producing and/or updating a digital model of a digital map
JP2022024741A (en) Vehicle control device and vehicle control method
US11370420B2 (en) Vehicle control device, vehicle control method, and storage medium
US20230260266A1 (en) Camera-radar data fusion for efficient object detection
US11107227B1 (en) Object detection based on three-dimensional distance measurement sensor point cloud data
JPWO2020100922A1 (en) Data distribution systems, sensor devices and servers
CN111508276A (en) High-precision map-based V2X reverse overtaking early warning method, system and medium
CN114492679B (en) Vehicle data processing method and device, electronic equipment and medium
WO2024012211A1 (en) Autonomous-driving environmental perception method, medium and vehicle
CN116524311A (en) Road side perception data processing method and system, storage medium and electronic equipment thereof
JP2017207920A (en) Reverse travelling vehicle detection device and reverse travelling vehicle detection method
CN114387533A (en) Method and device for identifying road violation, electronic equipment and storage medium
US20210323577A1 (en) Methods and systems for managing an automated driving system of a vehicle
CN116572995A (en) Automatic driving method and device of vehicle and vehicle
WO2023158642A1 (en) Camera-radar data fusion for efficient object detection
WO2023158706A1 (en) End-to-end processing in automated driving systems
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
US20230126957A1 (en) Systems and methods for determining fault for a vehicle accident
CN115205311A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
JP2019095875A (en) Vehicle control device, vehicle control method, and program
CN113614782A (en) Information processing apparatus, information processing method, and program
JP7449497B2 (en) Obstacle information acquisition system
JP7367709B2 (en) Information processing device, information processing system, and information processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination