CN115909728A - Road side sensing method and device, electronic equipment and storage medium - Google Patents

Road side sensing method and device, electronic equipment and storage medium

Info

Publication number
CN115909728A
Authority
CN
China
Prior art keywords
perception, roadside, scene, information, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211366855.7A
Other languages
Chinese (zh)
Inventor
刘文健
芦嘉鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202211366855.7A
Publication of CN115909728A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application discloses a roadside sensing method and device, an electronic device, and a storage medium. The method includes: receiving first perception information from a roadside sensing device; determining a first roadside perception scene according to the first perception information; and calling the corresponding first perception fusion strategy in a roadside configuration file according to the first roadside perception scene, so as to perform perception fusion on the scene of the target area. By selecting different perception fusion strategies according to the perceived environment in typical scenes, the method and device improve the accuracy and robustness of roadside perception.

Description

Road side sensing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the technical field of roadside sensing, and in particular, to a roadside sensing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of intelligent transportation, roadside monitoring systems have been steadily improved. However, current roadside perception fusion methods are mainly adapted from vehicle-end schemes, and no strategy has yet been formulated, based on roadside characteristics, to address for roadside perception the problems encountered in vehicle-end perception.
Compared with vehicle-end sensing devices, the environment in which a roadside sensing device is located is relatively stable, which makes it easier to model and analyze the perception problems caused by certain environments and to formulate solutions for them.
Disclosure of Invention
The embodiment of the application provides a roadside sensing method and device, electronic equipment and a storage medium, so as to realize calling and switching of sensing strategies under different scenes.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a roadside sensing method, where the method includes:
receiving first sensing information of roadside sensing equipment;
determining a first road side perception scene according to the first perception information;
and calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and performing perception fusion on the scene of the target area.
In some embodiments, the method further comprises:
determining a second road side perception scene according to second perception information, wherein the second perception information and the first perception information are perception results in different environments;
and dynamically switching from the first perception fusion strategy, called in the roadside configuration file for the first roadside perception scene, to a second perception fusion strategy corresponding to the second roadside perception scene, wherein the second perception fusion strategy and the first perception fusion strategy are different algorithm strategies in different environments or in the same environment.
In some embodiments, the perceptual fusion policy comprises a plurality of roadside perceptual fusion models stored in a roadside configuration file.
In some embodiments, the perceptual fusion policy comprises different parameter configurations for the same algorithm policy held in a roadside configuration file.
In some embodiments, the invoking a corresponding first perception fusion policy in a roadside configuration file according to the first roadside perception scene to perform perception fusion on the scene of the target region includes:
obtaining a scene detection result through a scene recognition engine according to the local weather condition and/or the roadside image recognition result in the first roadside perception scene;
according to the scene detection result, searching a perception fusion strategy matched with the scene detection result in a road side configuration file;
and sensing the target area according to the parameters of the sensing fusion model or the algorithm strategy in the search result.
In some embodiments, the determining a first road side perception scene according to the first perception information includes:
taking roadside weather information as redundant information of the first perception information;
determining feature information of the first road side perception scene according to the redundant information of the first perception information, wherein the feature information at least comprises one of the following information: rain, night, fog, snow.
In some embodiments, after the invoking a corresponding first perceptual fusion policy in a roadside configuration file according to the first roadside perceptual scene and performing perceptual fusion on a scene of a target region, the method further includes:
and communicating with the vehicle end, and synchronizing the perception fusion result of the target area with the vehicle end.
In a second aspect, an embodiment of the present application further provides a roadside sensing device, where the device includes:
the receiving module is used for receiving first sensing information of the road side sensing equipment;
the determining module is used for determining a first road side perception scene according to the first perception information;
and the calling module is used for calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene so as to perform perception fusion on the scene of the target area.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the above method.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device that includes a plurality of application programs, cause the electronic device to perform the above-described method.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects: a first road side perception scene can be determined by receiving first perception information of road side perception equipment; and then calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene so as to perceive the target area. Different perception fusion strategies are selected according to the perception environment in a typical scene, and accuracy and robustness of roadside perception are improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart of a roadside sensing method in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a roadside sensing device in an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating an implementation of a roadside sensing method in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiment of the present application provides a roadside sensing method, as shown in fig. 1, which provides a flowchart of the roadside sensing method in the embodiment of the present application, and the method at least includes the following steps S110 to S130:
step S110, receiving first sensing information of the roadside sensing device.
The roadside sensing method in the embodiment of the application is based on the built roadside system.
The roadside system includes a roadside sensing device, a roadside server, and the like, and is usually deployed on a plurality of roadside units, with the roadside units as carriers. The road side unit is arranged at a target intersection or a target road section, and synchronizes the sensing fusion result of the road side system to the vehicles in the coverage area of the road side unit, so as to provide sensing information for the vehicles. Roadside sensing devices include, but are not limited to, lidar, cameras, and may also include temperature and humidity sensing modules, and the like.
A roadside unit is described herein as the smallest implementation unit. A roadside server on the roadside unit receives the first perception information of the roadside sensing device. Since the position of the roadside unit is relatively fixed, the background (environment) information in the perception data of the roadside sensing device is also relatively static, and the moving objects are mainly vehicles, pedestrians, and the like.
The roadside sensing equipment can sense the environment by adopting a camera and a laser radar to obtain image information and corresponding laser point cloud information. Meanwhile, because the longitude and latitude positioning information of the road side unit is fixed, the environmental information, such as weather and meteorological information, in the area where the road side unit is located can be determined as well.
It should be noted that algorithm programs, such as an image vision algorithm and a lidar algorithm, can be deployed on the roadside sensing device to improve its sensing capability. The image vision algorithm includes, but is not limited to, YOLOv4; the lidar algorithms include, but are not limited to, 2D/3D object detection algorithms. The above shows a specific implementation of step S110; of course, it should be understood that other implementations may be adopted, and the embodiments of the present application are not limited thereto.
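As an illustration of this receiving step, the sketch below models the "first perception information" that a roadside server might accept from a roadside sensing device. All names here (`PerceptionInfo`, `receive_perception_info`, `rsu_id`, and the individual fields) are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PerceptionInfo:
    """Hypothetical container for the first perception information."""
    rsu_id: str                          # roadside unit identifier (position is fixed)
    timestamp: float                     # acquisition time, in seconds
    image_frame: Optional[bytes] = None  # camera image data
    point_cloud: Optional[List[Tuple[float, float, float]]] = None  # lidar points
    weather: Optional[str] = None        # local weather report, used as redundant info

def receive_perception_info(raw: dict) -> PerceptionInfo:
    """Validate a raw sensor message on the roadside server and wrap it."""
    if "rsu_id" not in raw or "timestamp" not in raw:
        raise ValueError("perception message missing mandatory fields")
    return PerceptionInfo(
        rsu_id=raw["rsu_id"],
        timestamp=raw["timestamp"],
        image_frame=raw.get("image_frame"),
        point_cloud=raw.get("point_cloud"),
        weather=raw.get("weather"),
    )
```

Because the roadside unit's position is fixed, the `rsu_id` alone suffices to associate the message with a known location and its static background.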
Step S120, determining a first road side perception scene according to the first perception information.
Since the first sensing information includes the environmental sensing result in the current scene, and the position of the roadside device is fixed, the variation in the environmental sensing result may affect the accuracy of sensing fusion in the roadside unit. It is understood that the variation in the environmental perception result includes, but is not limited to, perception information in typical scenes such as day, night, rainy day, snowy day, etc.
After the first roadside sensing scene is determined, the environmental characteristics of the region where the roadside unit is currently located are known, and the environmental characteristics generally affect the stability and accuracy of the result of roadside sensing fusion.
Step S130, calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and carrying out perception fusion on the scene of the target area.
For the first road side perception scene, the previous perception fusion strategy can be adjusted (or not adjusted) by calling the corresponding first perception fusion strategy in the road side configuration file, so that the target area can be better perceived. The road side configuration file is typically written in advance in the road side server.
The roadside configuration file holds a mutual mapping between perception fusion strategies and roadside perception scenes.
It should be noted that the background environment of the target area is relatively fixed for the roadside unit, and the change factors of the foreground environment (such as temperature and illumination intensity) may affect the perceptual fusion result, so that the corresponding perceptual fusion policy needs to be switched to perceive the environmental information in the target area.
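The scene-to-strategy mapping that the roadside configuration file maintains can be sketched as a simple lookup table. The scene labels, model names, and parameter values below are invented for illustration only:

```python
# Illustrative roadside configuration file: each roadside perception scene
# maps to a fusion strategy (a model plus its parameters). All values are
# assumptions for demonstration, not taken from the patent.
ROADSIDE_CONFIG = {
    "day_clear": {"model": "fusion_v1",   "params": {"conf_thresh": 0.50}},
    "night":     {"model": "fusion_v1",   "params": {"conf_thresh": 0.35}},
    "rain":      {"model": "fusion_rain", "params": {"conf_thresh": 0.40}},
    "snow":      {"model": "fusion_snow", "params": {"conf_thresh": 0.40}},
}

def call_fusion_strategy(scene: str, config=ROADSIDE_CONFIG) -> dict:
    """Look up the fusion strategy matching a roadside perception scene.

    Falls back to the clear-day strategy when the scene is not present
    in the configuration file.
    """
    return config.get(scene, config["day_clear"])
```

Storing this table locally on the roadside server keeps the lookup fast enough for real-time use, while an asynchronous copy could be refreshed from the cloud.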
Preferably, to ensure real-time performance, the called roadside configuration file is saved locally, for example, for an intersection with heavy traffic at peak times.
However, if the real-time requirement is not high, asynchronous processing can be performed and the roadside configuration file can be acquired from the cloud, for example, during periods of low traffic.
Further, for the vehicle end, when the vehicle end enters a complex and unfamiliar environment of a target area, a sensing result of the vehicle end is often affected, and at the moment, the sensed environment information in the target area can be synchronized to the vehicle end through the road side unit to perform laser or visual SLAM mapping and the like, so that the vehicle end is helped to perform path planning and obstacle avoidance better.
The roadside sensing method in the embodiment of the present application realizes dynamic switching of the roadside perception fusion algorithm based on environment recognition, identifying the perceived environment from the relatively fixed characteristics of the roadside perception information, and thereby improves the accuracy and robustness of roadside perception.
Based on the above roadside sensing method, the embodiment of the present application provides a roadside sensing system, which includes a roadside information collection module, a roadside sensing device module, a scene detection engine, and an algorithm selection module.
The roadside sensing device module mainly provides image data for scene detection via a camera. The roadside information collection module provides information such as the current day's weather, mainly serving as redundant information for scene recognition. The algorithm selection module selects different perception algorithms according to the scene detection results of the scene detection engine.
Specifically, the roadside sensing system acquires local weather information and then identifies the environment through the camera. And then configuring and/or adjusting a perception fusion algorithm according to a specific scene, and dynamically selecting a corresponding algorithm according to a scene recognition output result of a scene detection engine.
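The upstream-downstream flow just described — weather collection, camera-based scene recognition, then algorithm selection — can be sketched as below. The scene labels, image tags, and algorithm names are assumptions for demonstration:

```python
def detect_scene(weather: str, image_tags: list) -> str:
    """Toy scene-detection engine: combine the local weather report
    (redundant information) with image-content tags to output a scene label."""
    if weather in ("rain", "snow", "fog"):
        return weather
    if "low_light" in image_tags:
        return "night"
    return "day_clear"

def select_algorithm(scene: str) -> str:
    """Toy algorithm-selection module: map a scene label to an algorithm name."""
    table = {"day_clear": "algo_1", "night": "algo_2",
             "rain": "algo_3", "snow": "algo_3", "fog": "algo_3"}
    return table[scene]

def roadside_pipeline(weather: str, image_tags: list) -> str:
    # Upstream scene recognition feeds downstream algorithm selection.
    return select_algorithm(detect_scene(weather, image_tags))
```

The upstream engine never chooses an algorithm itself; it only emits a scene label, which keeps the mapping to algorithms entirely in the downstream configuration.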
In one embodiment of the present application, the method further includes: determining a second roadside perception scene according to second perception information, where the second perception information and the first perception information are perception results in different environments; and dynamically switching from the first perception fusion strategy, called in the roadside configuration file for the first roadside perception scene, to a second perception fusion strategy corresponding to the second roadside perception scene, where the second perception fusion strategy and the first perception fusion strategy are different algorithm strategies in different environments or in the same environment.
In a specific implementation, if the environmental information changes, such as the illumination intensity, the roadside sensing device generates the second perception information, and the second perception information and the first perception information are perception results in different environments. It should be noted that the perception information here may include the current weather and meteorological information, as well as the image information captured by a camera in the current target area. The current weather and meteorological information can be acquired by a sensor of the roadside unit, or obtained by the roadside server of the roadside unit through a network connection.
It should be noted that the frequency of acquiring the second perception information may be periodic or customized according to a natural time.
Further, since the second perception fusion strategy and the first perception fusion strategy are different algorithm strategies in different environments or in the same environment, when a different algorithm strategy is selected according to the environment, the system dynamically switches from the first perception fusion strategy, called in the roadside configuration file for the first roadside perception scene, to the second perception fusion strategy corresponding to the second roadside perception scene.
It should be noted that different algorithm strategies in different environments may include, but are not limited to, different environments such as rainy days, foggy days, snowy days, and the like. Different algorithm strategies in the same environment may include, but are not limited to, day, night, same environment but different illumination intensities.
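Dynamic switching between the first and second fusion strategies can be sketched as a small stateful component that swaps strategies only when the newly determined scene differs from the current one. All class, method, and scene names here are illustrative:

```python
class FusionStrategySwitcher:
    """Sketch of dynamic strategy switching: keep the current fusion
    strategy until a newly determined roadside perception scene differs
    from the current scene, then switch to the strategy that the
    configuration maps to the new scene."""

    def __init__(self, config: dict, initial_scene: str):
        self.config = config
        self.scene = initial_scene
        self.strategy = config[initial_scene]

    def update(self, new_scene: str) -> bool:
        """Apply a newly determined scene; return True if a switch occurred."""
        if new_scene == self.scene:
            return False  # same environment: keep the current strategy
        self.scene = new_scene
        self.strategy = self.config[new_scene]
        return True
```

Because `update` is a no-op when the scene is unchanged, the switching frequency naturally follows the acquisition period of the second perception information.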
In an embodiment of the present application, the perceptual fusion policy includes a plurality of roadside perceptual fusion models stored in a roadside configuration file.
In specific implementation, the corresponding first perception fusion strategy or second perception fusion strategy in the road side configuration file is called according to the first road side perception scene, and the first perception fusion strategy or the second perception fusion strategy is a fusion model which is stored in the road side configuration file and obtained through pre-training.
It is understood that the fusion model is obtained by training the network to converge through machine learning using a plurality of sets of data, each of the plurality of sets of data including: the image data and/or the weather information of the target area and the type label corresponding to the image data and/or the weather information. And establishing a corresponding perception fusion strategy according to the confidence degrees of the perception fusion results of each roadside perception fusion model in different scenes.
Illustratively, the multiple roadside perception fusion models may be stored in different algorithm tables (LIST).
In an embodiment of the present application, the perceptual fusion policy includes different parameter configurations for the same algorithm policy, which are saved in a roadside configuration file.
In specific implementation, the first perception fusion strategy or the second perception fusion strategy corresponding to the road side configuration file is called according to the first road side perception scene, and the first perception fusion strategy or the second perception fusion strategy is different parameter configurations aiming at the same algorithm strategy and stored in the road side configuration file.
It should be noted that, since different parameter configurations all affect the perception fusion result, if the same parameter configuration is adopted for the same algorithm strategy, the change of the environmental perception result cannot be well adapted, and the accuracy and timeliness of roadside perception fusion are reduced.
For example, different parameter configurations for the same algorithm policy may be saved by the algorithm table LIST.
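Storing different parameter configurations of the same algorithm in an algorithm table (LIST) might look like the following sketch; the algorithm name, parameter names, and values are invented for illustration:

```python
# Illustrative algorithm table (LIST): one algorithm, several per-scene
# parameter configurations. All names and values are assumptions.
ALGORITHM_LIST = [
    {"scene": "day_clear", "algorithm": "cluster_fusion",
     "params": {"nms_iou": 0.5, "score_thresh": 0.50}},
    {"scene": "night", "algorithm": "cluster_fusion",
     "params": {"nms_iou": 0.5, "score_thresh": 0.30}},  # lower threshold in low light
    {"scene": "snow", "algorithm": "cluster_fusion",
     "params": {"nms_iou": 0.6, "score_thresh": 0.35}},
]

def params_for_scene(scene: str) -> dict:
    """Fetch the parameter configuration matching the detected scene."""
    for entry in ALGORITHM_LIST:
        if entry["scene"] == scene:
            return entry["params"]
    raise KeyError(f"no parameter configuration for scene {scene!r}")
```

Keeping one algorithm with per-scene parameters, rather than one model per scene, trades some adaptability for a much smaller configuration file.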
In an embodiment of the present application, the invoking a corresponding first perception fusion policy in a roadside configuration file according to the first roadside perception scene to perform perception fusion on a scene of a target area includes: obtaining a scene detection result through a scene recognition engine according to the local weather condition and/or the roadside image recognition result in the first roadside perception scene; according to the scene detection result, searching a perception fusion strategy matched with the scene detection result in a road side configuration file; and sensing the target area according to the parameters of the sensing fusion model or the algorithm strategy in the search result.
In a specific implementation, as shown in fig. 3, the embodiment of the present application includes a scene recognition engine and an algorithm engine (i.e., an algorithm LIST, which contains algorithm 1, algorithm 2, and algorithm 3; algorithms 1, 2, and 3 are only an example and are configured according to the actual use situation). The scene recognition engine and the perception fusion strategy (algorithm) selection module are in an upstream-downstream relationship: the upstream scene recognition engine processes the information collected in the current scene, including local weather conditions and image content recognition results, and outputs a scene detection result; the algorithm LIST then selects an appropriate algorithm model or algorithm parameters according to that scene detection result.
It should be noted that multiple algorithm models, or different parameter configurations of the same algorithm model, may be saved in the roadside configuration file as required. For example, when the scene recognition engine identifies that the day is snowy, the algorithm selection module may select configuration parameters or an inference model suitable for snowy days, so as to guarantee vehicle detection performance in special weather.
Preferably, if either the local meteorological conditions or the image content recognition result has a sufficiently obvious identifiable feature, the scene recognition result can be output according to that feature alone. If the features are not obvious, more information needs to be collected for the judgment. For example, the weather features remaining after background removal in the image content recognition result can be examined to distinguish snow from rain; for another example, additional temperature and humidity information can be collected for the local weather conditions.
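The decision logic just described — output a scene from whichever source shows an obvious feature, otherwise fall back to additionally collected measurements — can be sketched as below. The scene labels and the humidity threshold are assumptions for illustration:

```python
from typing import Optional

def recognize_scene(weather: Optional[str], image_feature: Optional[str],
                    humidity: Optional[float] = None) -> str:
    """Output a scene label from whichever input shows an obvious feature;
    when neither does, use extra measurements (here, humidity) to decide.
    Labels and the 0.95 threshold are illustrative assumptions."""
    obvious = {"rain", "snow", "fog", "night"}
    if weather in obvious:
        return weather
    if image_feature in obvious:
        return image_feature
    # Neither feature is obvious: judge from additionally collected information.
    if humidity is not None and humidity > 0.95:
        return "fog"
    return "day_clear"
```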
In an embodiment of the present application, the determining, according to the first sensing information, a first road side sensing scene includes: taking roadside weather information as redundant information of the first perception information; determining feature information of the first road side perception scene according to the redundant information of the first perception information, wherein the feature information at least comprises one of the following: rain, night, fog, snow.
In specific implementation, for the first sensing information (mainly image information or laser point cloud information), the roadside weather information may be used as redundant information thereof, so as to determine characteristic information of the first roadside sensing scene according to the redundant information, where the characteristic information includes but is not limited to environmental characteristics such as rain, night, fog, snow, and the like. In addition, it should be noted that different environmental characteristics in the characteristic information cover the influence of different lighting conditions, visibility, and other factors.
In an embodiment of the application, after the invoking a corresponding first perception fusion policy in a roadside configuration file according to the first roadside perception scene and performing perception fusion on a scene of a target area, the method further includes: and communicating with the vehicle end, and synchronizing the perception fusion result of the target area with the vehicle end.
Communication with the vehicle end is performed through a V2X protocol, and the perception fusion result of the target area is synchronized to vehicle ends entering the target area; the vehicle end may or may not have autonomous driving capability. While improving roadside perception accuracy, this also allows the perceived environmental information of the area to be sent to the vehicle end, further compensating for the vehicle end's disadvantage when perceiving in a complex, unfamiliar environment and helping vehicle ends with autonomous driving capability to better plan paths and avoid obstacles.
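A minimal sketch of packaging the fusion result for synchronization to vehicle ends is shown below. The JSON schema and field names are assumptions for illustration; a real deployment would use a standardized V2X message set rather than this ad-hoc format:

```python
import json

def build_v2x_sync_message(rsu_id: str, fusion_result: list) -> str:
    """Serialize a perception-fusion result for broadcast to vehicles
    entering the target area (schema is illustrative, not a V2X standard)."""
    message = {
        "type": "roadside_perception_sync",
        "rsu_id": rsu_id,
        "objects": fusion_result,  # e.g. fused obstacles for SLAM mapping / planning
    }
    return json.dumps(message)
```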
The embodiment of the present application further provides a roadside sensing device 200, as shown in fig. 2, a schematic structural diagram of the roadside sensing device in the embodiment of the present application is provided, the roadside sensing device 200 at least includes: a receiving module 210, a determining module 220, and a calling module 230, wherein:
in an embodiment of the present application, the receiving module 210 is specifically configured to: receiving first perception information of the roadside perception device.
The roadside system includes a roadside sensing device, a roadside server, and the like, and is usually deployed on a plurality of roadside units, with the roadside units as carriers. The road side unit is arranged at a target intersection or a target road section, and synchronizes the sensing fusion result of the road side system to the vehicles in the coverage area of the road side unit, so as to provide sensing information for the vehicles.
A roadside unit is described herein as the smallest implementation unit. A roadside server on the roadside unit receives the first perception information of the roadside sensing device. Since the position of the roadside unit is relatively fixed, the perception information of the roadside sensing device also contains relatively fixed background (environment) information.
The roadside sensing equipment can sense the environment by adopting a camera and a laser radar to obtain image information and corresponding laser point cloud information. Meanwhile, the longitude and latitude positioning information of the road side unit is fixed, so that the environment in the region where the road side unit is located, such as weather and meteorological information, can also be determined.
It should be noted that algorithm programs, such as an image vision algorithm and a lidar algorithm, can be deployed on the roadside sensing device to improve its sensing capability. The image vision algorithm includes, but is not limited to, YOLOv4; the lidar algorithms include, but are not limited to, 2D/3D object detection algorithms. The above shows a specific implementation of the receiving module 210; of course, it should be understood that other implementations may be adopted, and the embodiments of the present application are not limited thereto.
In an embodiment of the present application, the determining module 220 is specifically configured to: and determining a first road side perception scene according to the first perception information.
Since the first sensing information includes the environmental sensing result in the current scene, and the position of the roadside device is fixed, the variation in the environmental sensing result may affect the accuracy of sensing fusion in the roadside unit. It is understood that the variation in the environmental perception result includes, but is not limited to, perception information in typical scenes such as day, night, rainy day, snowy day, etc.
After the first roadside sensing scene is determined, the environmental characteristics of the region where the roadside unit is currently located are known, and the environmental characteristics generally affect the stability and accuracy of the result of roadside sensing fusion.
In an embodiment of the present application, the invoking module 230 is specifically configured to: and calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and performing perception fusion on the scene of the target area.
For the first road side perception scene, the previous perception fusion strategy can be adjusted (or not adjusted) by calling the corresponding first perception fusion strategy in the road side configuration file, so that the target area can be better perceived. The road side configuration file is typically written in advance in the road side server. And the road side configuration file has a mutual mapping relation between the perception fusion strategy and the road side perception scene.
It should be noted that the background environment of the target area is relatively fixed for the roadside unit, and the change factors of the foreground environment (such as temperature and illumination intensity) affect the perceptual fusion result, so that the corresponding perceptual fusion policy needs to be switched to perceive the environmental information in the target area.
Preferably, to ensure real-time, the called road-side configuration file is saved locally. For example, an intersection with a large traffic flow at peak time.
However, if the real-time requirement is not high, asynchronous processing can be performed, and the road side configuration file can be acquired from the cloud. For example, in periods of low traffic.
Further, for the vehicle end, when the vehicle end enters a complex unfamiliar target area environment, a sensing result of the vehicle end is often influenced, and at the moment, environmental information sensed in the target area can be synchronized to the vehicle end through the road side unit to be subjected to laser or visual SLAM mapping and the like, so that the vehicle end is helped to perform path planning and obstacle avoidance better.
It can be understood that the roadside sensing device can implement each step of the roadside sensing method provided in the foregoing embodiments, and the relevant explanations about the roadside sensing method are applicable to the roadside sensing device, and are not described herein again.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 4, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include random-access memory (RAM) and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code comprising computer operating instructions. The memory may include both RAM and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the RAM and then runs it, forming the roadside sensing device at the logic level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
receiving first sensing information of roadside sensing equipment;
determining a first road side perception scene according to the first perception information;
and calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and performing perception fusion on the scene of the target area.
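The three operations executed by the processor can be sketched as one pipeline function. The `classify_scene` and `fuse` callables below stand in for the unspecified scene-recognition and perception-fusion algorithms, and the configuration is assumed to be a scene-to-strategy mapping as described earlier.

```python
def roadside_perceive(first_perception_info, config, classify_scene, fuse):
    """Sketch of the processor's three operations: the perception
    information has been received, the scene is determined from it, and
    the mapped fusion strategy is invoked on the target area."""
    scene = classify_scene(first_perception_info)   # determine roadside scene
    strategy = config[scene]                        # look up strategy in config file
    return fuse(first_perception_info, strategy)    # perform perception fusion
```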
The method executed by the roadside sensing device disclosed in the embodiment of fig. 1 of the present application may be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The electronic device may further execute the method executed by the roadside sensing device in fig. 1, and implement the functions of the roadside sensing device in the embodiment shown in fig. 1, which are not described herein again in this embodiment of the application.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores one or more programs, where the one or more programs include instructions, which, when executed by an electronic device that includes multiple application programs, enable the electronic device to perform the method performed by the roadside sensing device in the embodiment shown in fig. 1, and are specifically configured to perform:
receiving first sensing information of roadside sensing equipment;
determining a first road side perception scene according to the first perception information;
and calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and performing perception fusion on the scene of the target area.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. A roadside sensing method, wherein the method comprises:
receiving first sensing information of roadside sensing equipment;
determining a first road side perception scene according to the first perception information;
and calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene, and performing perception fusion on the scene of the target area.
2. The method of claim 1, wherein the method further comprises:
determining a second road side perception scene according to second perception information, wherein the second perception information and the first perception information are perception results in different environments;
and dynamically switching a corresponding first perception fusion strategy in a roadside configuration file called according to the first roadside perception scene to a corresponding second perception fusion strategy of the second roadside perception scene, wherein the second perception fusion strategy and the first perception fusion strategy are different algorithm strategies in different or same environments.
3. The method of claim 1, wherein the perceptual fusion policy comprises a plurality of roadside perceptual fusion models stored in the roadside configuration file.
4. The method of claim 1, wherein the perceptual fusion policy comprises different parameter configurations for the same algorithm policy held in the roadside configuration file.
5. The method of claim 1, wherein the determining a first road side perceptual scene according to the first perceptual information comprises:
the roadside weather information is used as redundant information of the first perception information;
determining feature information of the first road side perception scene according to the redundant information of the first perception information, wherein the feature information at least comprises one of the following: rain, night, fog, snow.
6. The method according to claim 1, wherein the invoking a corresponding first perceptual fusion policy in a roadside configuration file according to the first roadside perceptual scene to perform perceptual fusion on the scene of the target area includes:
obtaining a scene detection result through a scene recognition engine according to roadside weather information and/or a roadside image recognition result in the first roadside perception scene;
according to the scene detection result, searching a perception fusion strategy matched with the scene detection result in the road side configuration file;
and sensing the target area according to the parameters of the sensing fusion model or the algorithm strategy in the search result.
7. The method according to any one of claims 1 to 6, wherein, after the invoking of the corresponding first perceptual fusion policy in the roadside configuration file according to the first roadside perceptual scene and the perceptual fusion of the scene of the target area, the method further comprises:
and communicating with the vehicle end, and synchronizing the perception fusion result of the target area with the vehicle end.
8. A roadside sensing device, wherein the device comprises:
the receiving module is used for receiving first sensing information of the road side sensing equipment;
the determining module is used for determining a first road side perception scene according to the first perception information;
and the calling module is used for calling a corresponding first perception fusion strategy in the road side configuration file according to the first road side perception scene so as to perform perception fusion on the scene of the target area.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any one of claims 1 to 7.
10. A computer readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1-7.
CN202211366855.7A 2022-11-02 2022-11-02 Road side sensing method and device, electronic equipment and storage medium Pending CN115909728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366855.7A CN115909728A (en) 2022-11-02 2022-11-02 Road side sensing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115909728A true CN115909728A (en) 2023-04-04

Family

ID=86490534


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140054755A (en) * 2012-10-29 2014-05-09 한국전자통신연구원 Sensor fusion system reflecting environment information and method thereof
DE102018008442A1 (en) * 2018-10-26 2019-03-28 Daimler Ag Method for weather and / or visibility detection
DE102018101913A1 (en) * 2018-01-29 2019-08-01 Valeo Schalter Und Sensoren Gmbh Improved environmental sensor fusion
CN113537362A (en) * 2021-07-20 2021-10-22 中国第一汽车股份有限公司 Perception fusion method, device, equipment and medium based on vehicle-road cooperation
CN113566889A (en) * 2021-08-02 2021-10-29 榆林学院 Internet of things environment monitoring system
CN114670852A (en) * 2022-02-28 2022-06-28 高新兴科技集团股份有限公司 Method, device, equipment and medium for identifying abnormal driving behaviors
CN114756032A (en) * 2022-05-17 2022-07-15 桂林电子科技大学 Multi-sensing information efficient fusion method for intelligent agent autonomous navigation
CN114862901A (en) * 2022-04-26 2022-08-05 青岛慧拓智能机器有限公司 Road-end multi-source sensor fusion target sensing method and system for surface mine
CN115240405A (en) * 2021-04-25 2022-10-25 中兴通讯股份有限公司 Traffic information management method, system, network equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination