CN115240148A - Vehicle behavior detection method and device, storage medium and electronic device - Google Patents

Publication number
CN115240148A
Authority
CN
China
Prior art keywords
target
information
vehicle
image
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210977137.7A
Other languages
Chinese (zh)
Inventor
林亦宁
陈庆
倪华健
赵之健
彭垚
Current Assignee
Shanghai Supremind Intelligent Technology Co Ltd
Original Assignee
Shanghai Supremind Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Supremind Intelligent Technology Co Ltd
Priority to CN202210977137.7A
Publication of CN115240148A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the invention provides a vehicle behavior detection method and device, a storage medium, and an electronic device. The method comprises: acquiring a first image obtained by a camera device photographing a target area, and acquiring a second image, collected by a radar device, that comprises radar data of the target area; detecting the first image and the second image to obtain first data information of vehicles included in the first image and second data information of vehicles included in the second image, and determining, based on the first data information and the second data information, travel information of a target vehicle of a target category in the target area; and determining, based on the travel information, whether the target vehicle has performed a target behavior. The invention solves the problem of low vehicle-behavior detection accuracy in the related art, improving detection accuracy and greatly reducing the missed-report rate of vehicle behaviors.

Description

Vehicle behavior detection method, device, storage medium and electronic device
Technical Field
The embodiments of the invention relate to the field of computers, and in particular to a vehicle behavior detection method and device, a storage medium, and an electronic device.
Background
In the related art, the driving behavior of a vehicle is generally determined only from images captured by a camera or only from data detected by a radar, and some relatively special behaviors are even judged by manual monitoring. The right-turn behavior of large vehicles, for example, is generally judged manually, and is used as an example below:
When a large vehicle turns right, an "inner wheel difference" occurs, and this phenomenon easily causes traffic accidents. In a right turn the rear wheels do not travel along the tracks of the front wheels; the deviation produced by the turn is called the "wheel difference", and the longer the vehicle body, the larger the wheel difference and the wider the inner-wheel-difference zone. To reduce casualty accidents caused by the driver's blind spots when a large vehicle turns right, the traffic rule "stop before turning right" is now widely enforced: when a large vehicle enters an intersection and needs to turn right, the driver must bring the vehicle to a complete stop, look outward and backward, confirm that the surroundings of the vehicle are safe, and only then drive through the right turn.
At present, whether large vehicles obey the "stop before turning right" rule is mainly monitored manually: one or more people are stationed at every important intersection that large vehicles pass through to supervise and guide them, and violations are then spotted by human eyes.
For other vehicle behaviors, the detection means in the related art are likewise single-source, so detection efficiency and determination accuracy are both relatively low.
For the problem of low vehicle-behavior detection accuracy in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a vehicle behavior detection method and device, a storage medium, and an electronic device, to at least solve the problem of low vehicle-behavior detection accuracy in the related art.
According to an embodiment of the present invention, a vehicle behavior detection method is provided, including: acquiring a first image obtained by a camera device photographing a target area, and acquiring a second image, collected by a radar device, that comprises radar data of the target area, wherein the first image is an image continuously captured by the camera device during a target period in which the environment of the camera device satisfies a preset brightness threshold, and the second image is an image comprising the radar data continuously collected by the radar device during the target period; detecting the first image and the second image to obtain first data information of vehicles included in the first image and second data information of vehicles included in the second image, and determining, based on the first data information and the second data information, travel information of a target vehicle of a target category in the target area; and determining, based on the travel information, whether the target vehicle has performed a target behavior.
In one exemplary embodiment, determining the travel information of the target vehicle of the target category in the target area based on the first data information and the second data information includes: determining a first classification statistic result of each vehicle included in the first image based on the first data information, wherein the first classification statistic result is indicative of size information, position information, and category information of each vehicle included in the first image, and determining a second classification statistic result of each vehicle included in the second image based on the second data information, wherein the second classification statistic result is indicative of size information, position information, and category information of each vehicle included in the second image; determining a first vehicle of the target category included in the first image based on the size information and the category information indicated in the first classification statistical result, and determining a second vehicle of the target category included in the second image based on the size information and the category information indicated in the second classification statistical result; determining target position information of the target vehicle corresponding to the position in the first image and the second image based on the position information of the first vehicle indicated in the first classification statistical result and the position information of the second vehicle indicated in the second classification statistical result; determining the travel information of the target vehicle in the target area based on the target position information.
In one exemplary embodiment, after determining the first classification statistic for each vehicle included in the first image based on the first data information and determining the second classification statistic for each vehicle included in the second image based on the second data information, the method further comprises: generating a first mask table based on the second classification statistical result, wherein the first mask table is used for recording size information, position information and category information of each vehicle included in the second image; correspondingly correcting the information recorded in the first mask table by using the size information, the position information and the category information of each vehicle included in the first image recorded in the first classification statistical result; obtaining a target mask table based on the correction result; determining the travel information of the target vehicle in the target area based on the target mask table.
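The patent gives no code for the mask-table step, so the following is only a hedged sketch of the idea: the radar result seeds the table, and camera entries at approximately the same position overwrite the category and size fields. The record layout, distance test, and all names are assumptions.

```python
# Sketch of mask-table generation and correction (not the patent's
# actual implementation). Each entry records position, size, category.

def build_mask_table(radar_stats):
    """First mask table: one record per vehicle in the second image."""
    return [dict(e) for e in radar_stats]

def correct_mask_table(mask_table, camera_stats, max_dist=2.0):
    """Correct radar records with camera records at a nearby position."""
    for cam in camera_stats:
        for entry in mask_table:
            dx = cam["pos"][0] - entry["pos"][0]
            dy = cam["pos"][1] - entry["pos"][1]
            if (dx * dx + dy * dy) ** 0.5 <= max_dist:
                # the camera classification is trusted for category/size
                entry["category"] = cam["category"]
                entry["size"] = cam["size"]
    return mask_table  # the corrected result is the target mask table

radar = [{"pos": (10.0, 3.0), "size": (12.0, 2.5), "category": "unknown"}]
camera = [{"pos": (10.5, 3.2), "size": (11.8, 2.5), "category": "large"}]
print(correct_mask_table(build_mask_table(radar), camera)[0]["category"])  # large
```

The travel information of the target vehicle would then be read from the corrected (target) mask table.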
In one exemplary embodiment, correspondingly correcting the information recorded in the first mask table using the size information, position information, and category information of each vehicle included in the first image recorded in the first classification statistical result includes: when a classification confidence of each vehicle included in the first image is also recorded in the first classification statistical result, determining a target classification result from the first classification statistical result, wherein the target classification result indicates the size information, position information, and category information of those vehicles in the first image whose classification confidence is greater than a preset confidence threshold; and correspondingly correcting the information recorded in the first mask table using the target classification result.
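The confidence filter above is simple enough to sketch directly; this is an illustration only, and the record layout and threshold value are assumptions, not the patent's implementation:

```python
# Keep only first-image records whose classification confidence exceeds
# the preset threshold, before they are used to correct the mask table.

def target_classification(first_result, conf_threshold=0.8):
    return [r for r in first_result if r["confidence"] > conf_threshold]

result = [
    {"category": "large", "confidence": 0.95},
    {"category": "small", "confidence": 0.40},
]
kept = target_classification(result)
print(len(kept), kept[0]["category"])  # 1 large
```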
In one exemplary embodiment, determining whether the target vehicle has performed the target behavior based on the travel information includes: determining trajectory information of the target vehicle included in the travel information; determining speed information of the target vehicle based on the trajectory information; determining whether the target vehicle has performed a target behavior based on the speed information.
In one exemplary embodiment, determining whether the target vehicle performed the target behavior based on the speed information includes: determining that the target vehicle performed the target behavior when it is determined, based on the speed information, that the target vehicle's speed reached zero in a right-turn zone included in the target area; and determining that the target vehicle did not perform the target behavior when it is determined, based on the speed information, that the target vehicle's speed never reached zero in the right-turn zone included in the target area.
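As a hedged sketch of this zero-speed test (the patent does not specify the computation): speed is estimated from consecutive track points, and the behavior counts as performed only if the speed drops to near zero while the vehicle is inside the right-turn zone. The rectangular zone model, sampling interval, and tolerance are assumptions.

```python
# Check for a stop (near-zero speed) inside the right-turn zone,
# using a trajectory given as (x, y) points sampled every dt seconds.

def in_right_turn_zone(pos, zone):
    (x0, y0), (x1, y1) = zone  # axis-aligned rectangle, toy model
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def performed_right_turn_stop(track, zone, dt=0.1, eps=0.2):
    for p0, p1 in zip(track, track[1:]):
        speed = ((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2) ** 0.5 / dt
        if in_right_turn_zone(p1, zone) and speed <= eps:
            return True  # vehicle stopped inside the zone
    return False

zone = ((0.0, 0.0), (10.0, 10.0))
stops = [(12.0, 5.0), (11.0, 5.0), (9.9, 5.0), (9.9, 5.0)]  # halts in zone
rolls = [(12.0, 5.0), (10.0, 5.0), (8.0, 5.0), (6.0, 5.0)]  # never halts
print(performed_right_turn_stop(stops, zone), performed_right_turn_stop(rolls, zone))  # True False
```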
In one exemplary embodiment, the method further comprises: acquiring a third image including radar data of the target area acquired by the radar device in a period other than the target period; detecting the third image to obtain third data information of the vehicle included in the third image, and determining target driving information of the target vehicle in the target area based on the third data information; determining whether the target vehicle has performed a target behavior based on the target travel information.
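The fallback to radar-only data outside the target period can be illustrated with a toy source selector; this is an assumption-laden sketch, not the patent's design, and the data layout is invented:

```python
# Outside the bright-enough target period, detections come from the
# radar frames alone (the "third image"); inside it, both sources
# contribute. The same behavior check then runs on either result.

def select_detections(period_is_bright, camera_dets, radar_dets):
    if period_is_bright:
        return camera_dets + radar_dets  # fused sources
    return radar_dets                    # radar-only fallback

cam = [("large", 4.8)]                   # (category, speed) toy records
rad = [("large", 5.0), ("small", 7.5)]
day = select_detections(True, cam, rad)
night = select_detections(False, cam, rad)
print(len(day), len(night))  # 3 2
```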
According to still another embodiment of the present invention, there is also provided a vehicle behavior detection device, including: a first acquisition module, configured to acquire a first image obtained by a camera device photographing a target area and a second image, collected by a radar device, comprising radar data of the target area, wherein the first image is an image continuously captured by the camera device during a target period in which the environment of the camera device satisfies a preset brightness threshold, and the second image is an image comprising the radar data continuously collected by the radar device during the target period; a detection module, configured to detect the first image and the second image to obtain first data information of vehicles included in the first image and second data information of vehicles included in the second image, and to determine, based on the first data information and the second data information, travel information of a target vehicle of a target category in the target area; and a first determination module, configured to determine, based on the travel information, whether the target vehicle performed a target behavior.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to, when executed, perform the steps of any of the method embodiments described above.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the above method, a first image, obtained by the camera device continuously photographing the target area during a target period in which its environment satisfies a preset brightness threshold, is acquired, together with a second image comprising radar data of the target area continuously collected by the radar device during the target period; the first image and the second image are detected to obtain first data information of vehicles included in the first image and second data information of vehicles included in the second image, and travel information of a target vehicle of a target category in the target area is determined based on the first and second data information; whether the target vehicle performed the target behavior is then determined based on the travel information. Because both the camera image and the radar image are detected and the target behavior is judged from the two detection results together, the method solves the problem of low vehicle-behavior detection accuracy in the related art, improving detection accuracy and greatly reducing the missed-report rate of vehicle behaviors.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a detection method of a vehicle behavior according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of detecting vehicle behavior according to an embodiment of the present invention;
FIG. 3 is a general flowchart of detecting a large vehicle turning right without stopping, according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an intersection right turn zone according to an embodiment of the invention;
fig. 5 is a block diagram of the structure of a vehicle behavior detection apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The embodiment of the detection method of the vehicle behavior provided in the embodiment of the application can be executed in a mobile terminal, a computer terminal or a similar operation device. Taking an example of the method performed by a mobile terminal, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to the method for detecting a vehicle behavior in the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the detection method of the vehicle behavior in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, a method for detecting a vehicle behavior is provided, and fig. 2 is a flowchart of a method for detecting a vehicle behavior according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
s202, acquiring a first image obtained by shooting a target area by a camera device, and acquiring a second image which is acquired by a radar device and comprises radar data of the target area, wherein the first image is an image which is continuously shot by the camera device in a target time period when the environment where the camera device is located meets a preset brightness threshold, and the second image is an image which is continuously acquired by the radar device in the target time period and comprises the radar data;
s204, detecting the first image and the second image to obtain first data information of the vehicle included in the first image and second data information of the vehicle included in the second image, and determining running information of a target vehicle of a target category in the target area based on the first data information and the second data information;
s206, determining whether the target vehicle executes the target behavior based on the running information.
The above operations may be performed by a controller, a processor, a device or system with image recognition capability, or another processing unit with similar processing capability. The camera device may be any device capable of capturing and analyzing images, such as a dome camera, a bullet camera, or a thermal imaging camera; the radar device may be a laser radar, microwave radar, millimeter-wave radar, or ultrasonic radar.
In the above embodiment, the preset brightness threshold is a value that can be set in advance, for example 50 lx, 100 lx, or 200 lx; the specific value can be chosen based on the shooting accuracy of the camera device or the requirements of the scene. When the outline of a vehicle cannot be identified in a first image captured during a period that does not satisfy the preset brightness threshold, that image is not used for vehicle detection and recognition, avoiding increased latency and reduced efficiency in the image detection and recognition process.
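As an illustrative sketch only, the brightness gate described above might be implemented as a simple filter; the frame structure and field names are assumptions, and the 50 lx value is one of the example thresholds from the text:

```python
# Skip frames captured while ambient illuminance is below the preset
# threshold, so they never enter the detection/recognition pipeline.

BRIGHTNESS_THRESHOLD_LX = 50.0  # preset; 50/100/200 lx per the text

def usable_frames(frames):
    return [f for f in frames if f["lux"] >= BRIGHTNESS_THRESHOLD_LX]

frames = [{"id": 1, "lux": 120.0}, {"id": 2, "lux": 8.0}, {"id": 3, "lux": 60.0}]
print([f["id"] for f in usable_frames(frames)])  # [1, 3]
```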
In the above embodiment, the first data information includes the number of vehicles captured by the camera device, vehicle categories, vehicle positions, vehicle sizes, classification confidences, and the like. Vehicles passing the camera device are counted so that the behaviors of different vehicle types can later be detected and analyzed in a targeted way. Vehicle categories include large vehicles (e.g., large trucks and large work vehicles), small vehicles (e.g., small vans, cars, motorcycles, and electric vehicles), and other vehicles (e.g., bicycles, electric scooters, and two-wheeled self-balancing vehicles). Vehicle size includes the length and width of the vehicle. The classification confidence is the degree of reliability with which a vehicle is recognized and classified as a large vehicle, small vehicle, or other vehicle; the higher the confidence, the more reliable the classification result. The second data information includes the vehicle distances, speeds, and sizes collected by the radar device. The distance between vehicles can be compared with a safe distance, and when vehicles in the target area are closer than the safe distance, an early warning can be reported and the camera linked to save video; the speed of a vehicle in the right-turn area is further determined to judge whether its behavior is illegal. For example, when a large vehicle and an electric bicycle travel side by side in a right-turn lane and turn right at the same time, a radar device (or another device with ranging capability) may detect the distance between the two vehicles as they enter the right-turn area; if the distance is not within the safe distance, an early warning is reported and the camera is linked to store video, and the speed of the large vehicle is then acquired to determine whether it performed the stop-before-right-turn behavior.
In the above embodiment, the travel information includes the travel tracks of target vehicles entering and leaving the target area, their travel directions, and their speed information within the target area. There may be multiple target vehicles, and their travel directions may be the same or different. Camera devices and/or radar devices on different roads can be linked to track vehicles in different travel directions separately, determining the travel track and speed information of each vehicle in each direction; camera devices and/or radar devices on the same road track vehicles traveling in the same direction in a unified way, determining the travel track and speed information of each vehicle in that direction.
In the above embodiment, a first image obtained by continuously shooting a target area by an image pickup device in a target period in which an environment where the image pickup device is located satisfies a preset brightness threshold value and a second image including radar data of the target area continuously collected by the radar device in a target period are obtained, the first image and the second image are detected to obtain first data information of a vehicle included in the first image and second data information of a vehicle included in the second image, and travel information of a target vehicle of a target category in the target area is determined based on the first data information and the second data information; and then determines whether the target vehicle has performed the target behavior based on the travel information.
By adopting the method, a first image obtained by continuously shooting the target area by the camera device in a target time period when the environment meets the preset brightness threshold and a second image which is continuously collected by the radar device in the target time period and comprises radar data of the target area are detected, and whether the target vehicle executes the target behavior is determined based on two types of detection results.
Meanwhile, the target vehicle is tracked through the two kinds of detection results to determine its travel information, so the vehicle does not need to be tracked manually, saving considerable manpower and material resources; whether the target behavior was performed is then determined from the travel information. Using the camera to assist the radar in detecting vehicles addresses the problems that vehicle speed is hard to judge accurately from camera images alone and that vehicles cannot be recognized at night or in fog, improving the efficiency of real-time vehicle monitoring, solving the problem of low vehicle-behavior detection accuracy in the related art, improving the detection accuracy of right-turn behavior, and greatly reducing the missed-report rate of vehicle behaviors.
In one exemplary embodiment, determining the travel information of the target vehicle of the target category in the target area based on the first data information and the second data information includes: determining a first classification statistic result for each vehicle included in the first image based on the first data information, wherein the first classification statistic result is indicative of size information, location information, and category information of each vehicle included in the first image, and determining a second classification statistic result for each vehicle included in the second image based on the second data information, wherein the second classification statistic result is indicative of size information, location information, and category information of each vehicle included in the second image; determining a first vehicle of the target category included in the first image based on the size information and category information indicated in the first classification statistical result, and determining a second vehicle of the target category included in the second image based on the size information and category information indicated in the second classification statistical result; determining target position information of the target vehicle corresponding to the position in the first image and the second image based on the position information of the first vehicle indicated in the first classification statistical result and the position information of the second vehicle indicated in the second classification statistical result; determining the travel information of the target vehicle in the target area based on the target position information.
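The patent does not specify how camera and radar detections of the target category are put in correspondence, so the following is only a hedged sketch using a simple greedy nearest-position association in a shared ground coordinate frame; the matching rule, distance gate, and fusion-by-averaging are all assumptions:

```python
# Associate target-category camera detections with radar detections by
# nearest position, and fuse each matched pair into one target position.

def match_by_position(camera_vehicles, radar_vehicles, max_dist=3.0):
    matches, used = [], set()
    for c in camera_vehicles:
        best, best_d = None, max_dist
        for j, r in enumerate(radar_vehicles):
            if j in used:
                continue
            d = ((c[0] - r[0]) ** 2 + (c[1] - r[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            # fused target position: mean of the two sensor positions
            matches.append(((c[0] + radar_vehicles[best][0]) / 2,
                            (c[1] + radar_vehicles[best][1]) / 2))
    return matches

cam = [(10.0, 4.0)]                 # one large vehicle seen by the camera
rad = [(10.6, 4.2), (30.0, 1.0)]    # two radar returns; only one is nearby
print([(round(x, 3), round(y, 3)) for x, y in match_by_position(cam, rad)])  # [(10.3, 4.1)]
```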
In the above embodiment, the first image may include a plurality of vehicles whose positions in the target area, sizes, and categories differ from one another. Therefore, first classification statistics are performed on the vehicle information under the view angle of the camera device based on the first data information acquired from the first image, and second classification statistics are performed on the vehicle information under the view angle of the radar device based on the second data information acquired from the second image. The vehicle information of the target category included in the first classification statistical result is placed in one-to-one correspondence with the vehicle information of the target category included in the second classification statistical result to determine the target position information of the target vehicle. For example, the first image captured by the camera device in the target area includes a large vehicle, a small vehicle, other vehicles, and the like, and the second image collected by the radar device in the target area likewise includes a large vehicle, a small vehicle, other vehicles, and the like. Classification statistics are then performed separately on the vehicle categories included in the first image (e.g., large vehicle, small vehicle, other vehicle) and on those included in the second image, and the two classification statistical results place the position of each category of vehicle in the first image in one-to-one correspondence with the position of the same category of vehicle in the second image, thereby achieving the effect of improving the accuracy of determining the target position information of the target vehicle. Here the target category refers to the large-vehicle category, but it may certainly be another category (for example, the category of vehicles other than large vehicles, or a specific sub-category of large vehicles such as a trailer or a tank truck). The target category may be set according to the actual application, or adjusted again according to the actual application after being set.
In the above embodiment, the first classification statistical result includes the position of each type of vehicle in the road, the size of each type of vehicle, the category of each type of vehicle, and the like captured by the image capture device, and the second classification statistical result includes the position of each type of vehicle in the road, the size of each type of vehicle, the category of each type of vehicle, and the like captured by the radar device.
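The per-category bookkeeping and camera-to-radar pairing described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the `Detection` record, the nearest-position pairing rule, and all field names are assumptions introduced here.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One detected vehicle; field names are illustrative, not from the patent."""
    category: str   # e.g. "large", "small"
    position: tuple  # (x, y) in a shared road-plane coordinate system
    size: tuple      # (length, width)


def classify_by_category(detections):
    """Group detections into per-category lists (a 'classification statistical result')."""
    stats = {}
    for d in detections:
        stats.setdefault(d.category, []).append(d)
    return stats


def match_target_category(cam_stats, radar_stats, target_category="large"):
    """Pair camera and radar detections of the target category by nearest position,
    yielding (camera_detection, radar_detection) pairs for the target vehicles."""
    cams = cam_stats.get(target_category, [])
    radars = radar_stats.get(target_category, [])
    pairs, used = [], set()
    for c in cams:
        best, best_d2 = None, float("inf")
        for i, r in enumerate(radars):
            if i in used:
                continue
            d2 = (c.position[0] - r.position[0]) ** 2 + (c.position[1] - r.position[1]) ** 2
            if d2 < best_d2:
                best, best_d2 = i, d2
        if best is not None:
            used.add(best)
            pairs.append((c, radars[best]))
    return pairs
```

A greedy nearest-neighbour pairing is enough for the one-to-one correspondence the text describes when the two sensors observe the same scene with small registration error.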
In one exemplary embodiment, after determining the first classification statistic for each vehicle included in the first image based on the first data information and determining the second classification statistic for each vehicle included in the second image based on the second data information, the method further comprises: generating a first mask table based on the second classification statistical result, wherein the first mask table is used for recording the size information, the position information and the category information of each vehicle included in the second image; correspondingly correcting the information recorded in the first mask table by using the size information, the position information and the category information of each vehicle included in the first image recorded in the first classification statistical result; obtaining a target mask table based on the correction result; determining the travel information of the target vehicle in the target area based on the target mask table.
In the above embodiment, the first mask table is generated based on the second classification statistical result of the vehicle under the view angle of the radar device, and the vehicle information is recorded in the form of the mask table, so as to facilitate the subsequent management of different types of vehicles, and then the vehicle information in the first mask table is correspondingly modified by the vehicle information recorded in the first classification statistical result of the vehicle under the view angle of the camera device, so as to obtain the target mask table.
In an exemplary embodiment, correspondingly correcting the information recorded in the first mask table using the size information, the position information, and the category information of each vehicle included in the first image recorded in the first classification statistical result includes: in a case where a category confidence of each vehicle included in the first image is further recorded in the first classification statistical result, determining a target classification result from the first classification statistical result, wherein the target classification result is used to indicate the size information, position information, and category information of the vehicles included in the first image whose category confidence is greater than a predetermined confidence threshold; and correspondingly correcting the information recorded in the first mask table using the target classification result.
In the above-described embodiment, the information in the first mask table is corrected using the information of those vehicles in the first classification statistical result (obtained under the view angle of the camera device) whose category confidence is greater than the confidence threshold; that is, the classification result of a vehicle whose category confidence is less than or equal to the confidence threshold is considered inaccurate, and such vehicle information is not used to correct the first mask table. For example, when the size information (and/or position information, etc.) of some vehicles in the first mask table does not coincide with the size information (and/or position information, etc.) of the corresponding vehicles recorded in the first classification statistical result, and some of the plurality of vehicles recorded in the first classification statistical result have a confidence less than or equal to the confidence threshold, only those vehicles whose confidence is greater than the confidence threshold are used to correct the size information (and/or position information, etc.) of the corresponding vehicles in the first mask table, thereby achieving the effect of improving the accuracy of the vehicle size, vehicle position, and vehicle category.
In the above embodiment, the predetermined confidence threshold may be a preset value, for example 75%, 80%, or 90%. When the predetermined confidence threshold is 90% and a category confidence of each vehicle included in the first image is further recorded in the first classification statistical result, the size information, position information, and category information of the vehicles included in the first image whose category confidence is greater than 90% are determined from the first classification statistical result. The accuracy of the vehicle size is then improved by taking a weighted average of the size of each such vehicle and the size of the corresponding vehicle in the first mask table; the accuracy of the vehicle position is improved by taking a weighted average of the position of each such vehicle and the position of the corresponding vehicle in the first mask table; and the accuracy of the vehicle category is improved by correspondingly updating the category information of the corresponding vehicle in the first mask table with the category information of each such vehicle.
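The confidence-gated weighted average described above might look like the following sketch. The 0.9 threshold matches the 90% example in the text, while the 0.6 camera weight is an arbitrary assumption.

```python
def correct_mask_entry(mask_size, cam_size, confidence,
                       conf_threshold=0.9, cam_weight=0.6):
    """Correct one size entry of the first mask table with a camera observation.

    The correction is applied only when the camera detection's category
    confidence clears the threshold; otherwise the radar-derived entry is
    kept unchanged. The weights are illustrative assumptions.
    """
    if confidence <= conf_threshold:
        return mask_size  # low-confidence classification: do not correct
    return tuple(cam_weight * c + (1.0 - cam_weight) * m
                 for c, m in zip(cam_size, mask_size))
```

The same gating would apply verbatim to position entries; only the tuple contents change.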
In one exemplary embodiment, determining whether the target vehicle has performed the target behavior based on the travel information includes: determining track information of the target vehicle included in the travel information; determining speed information of the target vehicle based on the trajectory information; determining whether the target vehicle has performed a target behavior based on the speed information.
In the above-mentioned embodiment, the track information of the target vehicle is determined from the travel information of the target vehicle, and the travel speed of the target vehicle in the target area is then accurately determined from the track information. For example, the travel distance of the target vehicle in the target area may be determined from the track information, the travel time of the target vehicle in the target area may be determined from the interval between the frames captured by the camera device, and the travel speed of the target vehicle is obtained by dividing the travel distance by the travel time. Whether the target vehicle executes the target behavior is then determined based on the travel speed, so that missed detection and false detection of whether the target vehicle executes the target behavior are avoided, and the effect of improving the efficiency of detecting whether the target vehicle executes the target behavior is achieved.
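The distance-over-time computation in this paragraph can be written directly; the fixed frame interval and metre-denominated positions are assumptions.

```python
def travel_speed(track, frame_interval_s):
    """Estimate average speed (m/s) from consecutive track points.

    `track` is a list of (x, y) positions in metres, one per frame,
    sampled at a fixed interval of `frame_interval_s` seconds.
    """
    if len(track) < 2:
        return 0.0
    dist = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # travel time = number of inter-frame gaps * frame interval
    return dist / (frame_interval_s * (len(track) - 1))
```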
In one exemplary embodiment, determining whether the target vehicle performed the target behavior based on the speed information includes: determining that the target vehicle has performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle reaches zero in a right-turn zone included in the target zone; and determining that the target vehicle has not performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle does not reach zero in the right-turn zone included in the target zone.
In the above embodiment, when the speed of the target vehicle in the right-turn area included in the target area never reaches zero, the monitoring device in the right-turn area may report an early warning and link the camera to shoot and store a video of the target vehicle in the right-turn area, or may report an early warning and link the camera to store a video including the process of the target vehicle traveling through the right-turn area. For example, when a large vehicle travels into the right-turn area, the travel speed of the large vehicle in the right-turn area is obtained; when it is determined that the travel speed never reaches zero, the monitoring device immediately reports an early warning and links the camera to shoot and store a video of the large vehicle in the right-turn area. Alternatively, when a large vehicle travels out of the right-turn area, the travel speed that the large vehicle had in the right-turn area is obtained; when it is determined that the travel speed never reached zero, the monitoring device immediately reports an early warning and links the camera to store a video including the process of the large vehicle traveling through the right-turn area. In this way, follow-up handling of the violation is supported and drivers are urged to comply with traffic regulations, guaranteeing the safety of pedestrians.
In one exemplary embodiment, the method further comprises: acquiring a third image which is acquired by the radar equipment in other time periods except the target time period and comprises radar data of the target area; detecting the third image to obtain third data information of the vehicle included in the third image, and determining target driving information of the target vehicle in the target area based on the third data information; determining whether the target vehicle has performed a target behavior based on the target travel information.
In the above embodiment, in a poor environment such as night, rainstorm, or heavy fog, the vehicle information obtained from the first image captured by the camera device is inaccurate. Therefore, in such an environment, only the vehicles included in the third image collected by the radar device are detected and analyzed to determine whether the target vehicle performs the target behavior. For example, at night, when there is no street lamp on the road or the street-lamp illumination is dim, only the radar device is used to collect images including radar data of the target area; the target driving information of the target vehicle in the target area is determined from the third data information obtained by detecting and analyzing the vehicle information included in these images, and whether the target vehicle performs the target behavior is then determined based on the target driving information. By using the corresponding detection method for periods of different brightness, the purpose of detecting the right-turn violation behavior of vehicles around the clock is achieved.
It is to be understood that the above-described embodiments are only a few, and not all, embodiments of the present invention.
The present invention will be described in detail with reference to the following specific examples:
fig. 3 is a general flowchart of detecting that a large vehicle is not stopped in a right turn according to an embodiment of the present invention, as shown in fig. 3, the flowchart includes the steps of:
s302, acquiring real-time camera data (corresponding to the first image) and radar data (corresponding to the second image);
s304, carrying out coding and decoding processing on the camera data, and sending the coded and decoded camera data into an algorithm;
s306, carrying out brightness analysis on the camera data, and selecting a time period with higher brightness (corresponding to the target time period) for analysis;
s308, performing a first determination to determine the quality of the image captured by the camera, and if the first determination result is negative, performing detection processing only on radar data (corresponding to the third image) to obtain distance, speed, and size information (corresponding to the third data information) of the vehicle;
s310, under the condition that the first judgment result is yes, namely when the brightness is high, carrying out full-image detection on each whole frame sent to the algorithm by using a deep learning technology, and acquiring information (corresponding to the first data information) such as the number, type, position, size, and confidence of the motor-vehicle targets;
s312, when the brightness is high, tracking the detection result under the data of the camera;
s314, encoding and decoding radar data (corresponding to the second image), and sending the encoded and decoded radar data into an algorithm;
s316, analyzing the radar data at all times, and performing segmentation detection on the radar data by using a deep learning technology to obtain the distance, speed and size information (corresponding to the second data information) of the motor vehicle;
s318, tracking the detection result under the radar data;
s320, performing a second determination to determine whether the detection result under the camera data is a large vehicle (corresponding to the target vehicle of the target category), and performing no processing if the second determination result is negative;
s322, if the second determination result is yes, mapping the tracking results of the high-confidence detection frames in the camera data (corresponding to the target classification result) onto the radar data, fusing the position and trajectory information with the tracking results in the radar data, transmitting the target category information in the camera image to the corresponding targets in the radar image, and caching a vehicle size table for common positions in the radar data (corresponding to the target mask table);
s324, tracking the large vehicle under the radar data; a right-turn area is set at the place where vehicles turn right, and when a large vehicle enters the right-turn area, whether its speed reaches 0 is recorded; if the speed reaches 0, a parking flag is set, and when the large vehicle passes through the right-turn area in the right-turn direction, the judgment of whether the large vehicle stopped before turning right is completed by checking whether the parking flag is set;
and S326, if the large vehicle did not stop in the right-turn area and the size of the target is determined, based on the vehicle size table, to be close to that of a large vehicle, reporting an early warning and linking the camera to store a video.
The specific implementation mode is as follows:
This application places no specific requirement on the camera and the radar: the camera may be a visible-light bullet camera, a dome camera, a thermal-imaging camera, a binocular camera, and the like, and the radar may be a laser radar, a millimeter-wave radar, and the like. The present embodiment is described below by taking a common bullet camera device and a laser-radar device as examples:
1. obtaining real-time camera data and radar data
The acquired image only needs to satisfy the algorithm requirements and make the contour of the target (corresponding to the vehicle) visible.
2. Time-phased analysis
The present application uses camera visual data to assist the radar data in judging the violation behavior of large vehicles. Since camera imaging is greatly affected by weather and illumination, a time period of normal brightness (corresponding to the target time period) is selected for deep-learning analysis by judging the brightness of the image; in such a period the camera image is clear, targets are distinct, and the accuracy and recall rate of the algorithm are very high. The purpose of the camera-image algorithm is to detect and track vehicles, correct the vehicle categories included in the radar data through radar-vision fusion, and generate a large/small-vehicle size mask (corresponding to the first mask table) for the right-turn zone; the size mask is used to improve the accuracy of type classification when the service judgment is made with the radar data. The size mask is generated as a lookup table of size 8 x 8 (this size is only an example, and other sizes such as 4 x 4 or 16 x 16 may be set in practical applications; note that the larger the table, the more accurate the data, but the more memory required). Each block of the table corresponds, after proportional scaling, to an area on the laser-radar image, and each block records the average length, width, and height information of the two categories, namely large vehicle and small vehicle. The size mask (corresponding to the target mask table) is updated by mapping the large and small vehicles detected with high confidence under visible light onto the radar image, and weighting the data into the size mask according to the length, width, and height of the detected target on the radar image, the position of the target, and the category detected for the target under visible light.
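An 8 x 8 size-mask lookup table with weighted updates, as described above, could be sketched like this. The grid resolution follows the text's example; the image dimensions, the exponential-average weight `alpha`, and the class keys are assumptions.

```python
class SizeMask:
    """Grid lookup table over the radar image.

    Each cell keeps a running weighted average of (length, width, height)
    per vehicle class, updated from high-confidence camera detections
    mapped onto radar coordinates.
    """

    def __init__(self, grid=8, img_w=640, img_h=640, alpha=0.2):
        self.grid, self.img_w, self.img_h, self.alpha = grid, img_w, img_h, alpha
        self.cells = [[{} for _ in range(grid)] for _ in range(grid)]

    def _cell(self, x, y):
        # Scale image coordinates proportionally into the grid.
        gx = min(int(x * self.grid / self.img_w), self.grid - 1)
        gy = min(int(y * self.grid / self.img_h), self.grid - 1)
        return self.cells[gy][gx]

    def update(self, x, y, category, lwh):
        """Blend a new (length, width, height) observation into the cell."""
        cell = self._cell(x, y)
        if category not in cell:
            cell[category] = list(lwh)
        else:
            old = cell[category]
            cell[category] = [(1 - self.alpha) * o + self.alpha * n
                              for o, n in zip(old, lwh)]

    def lookup(self, x, y, category):
        """Return the averaged size for a class at a position, or None."""
        return self._cell(x, y).get(category)
```

A larger grid trades memory for spatial resolution, exactly the trade-off the text notes for 4 x 4 versus 16 x 16 tables.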
When a target (corresponding to the target vehicle) fails to stop, the size mask is used in triggering the early warning: the size of the target is compared with the average large-vehicle and small-vehicle sizes recorded in the corresponding block of the size mask, and if it is closer to the large-vehicle size, an early warning can be given. The mapping from a camera position to a radar position is obtained by calibration; there are many spatial calibration methods for a camera and a radar, for example calculating a homography matrix or a mapping function from the coordinates of multiple point pairs in the two images, and the present application is not limited in this respect.
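As one of the calibration methods mentioned (computing a homography matrix from coordinates of multiple point pairs), a direct-linear-transform fit can be sketched as follows; this is a generic textbook method, not necessarily the one used in the patent.

```python
import numpy as np


def fit_homography(cam_pts, radar_pts):
    """Estimate the 3x3 homography mapping camera-image points to
    radar-image points from >= 4 correspondences, via the DLT.

    Each pair contributes two rows to the linear system A h = 0,
    solved as the right singular vector of the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(cam_pts, radar_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1


def map_point(H, pt):
    """Apply the homography to one camera point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

In practice one would use more than four correspondences and a robust estimator; this sketch only shows the algebra of the calibration step.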
3. Description of algorithms
The algorithms used under the camera view angle are multi-target detection and multi-target tracking. With good image quality, implementing the detection and tracking algorithms is not difficult: an existing open-source algorithm (such as YOLOv3) can be fine-tuned on traffic data. The multi-target detection algorithm takes the image information of each frame as input and outputs the number, position, size, confidence, sub-category, and other information of the vehicle targets in the picture, where the sub-categories include large vehicle and small vehicle. Since the detection of large vehicles at urban intersections is stable and the tracking effect is good, the multi-target tracking algorithm can likewise directly use an existing open-source algorithm (such as the DeepSort algorithm): it takes the image and the detection output of each frame as input, and outputs the cached track (corresponding to the driving information) of each target. This detect-and-track approach avoids the size mask being affected by the occasional missed detection, false detection, or category error that occurs during detection;
because the situation of false detection or missed detection inevitably occurs in a simple detection algorithm, a complete track is given to each target by using a scheme of target detection and tracking, occasional false detection cannot form the complete track, occasional missed detection or category errors can be compensated by the tracking algorithm, and finally the target track is mapped to a radar image coordinate to generate a size mask based on a category statistical result of the target track in the whole right-turn area;
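The per-track category statistics described above — letting the whole track outvote an occasional per-frame class error — reduce, in the simplest case, to a majority vote; this sketch assumes plain string labels.

```python
from collections import Counter


def track_category(frame_categories):
    """Smooth per-frame class labels over a whole track.

    The majority class wins, so an occasional misclassification in a
    single frame does not flip the vehicle type used for the size mask.
    """
    if not frame_categories:
        return None
    return Counter(frame_categories).most_common(1)[0][0]
```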
Multi-target detection, multi-target tracking, and rule judgment are performed on the images under the radar view angle. There are many detection algorithms under the laser-radar view angle, and vehicle detection is a relatively easy task, so an existing algorithm (for example, the VoxelNet algorithm) can be used directly; the tracking algorithm can add 3D input variables on the basis of 2D tracking. Fig. 4 is a schematic diagram of a right-turn area of an intersection according to an embodiment of the present invention. As shown in fig. 4, a right-turn area (namely the filled area shown in the drawing) is drawn over all areas where a right turn is possible, and the entering and exiting directions (the entering and exiting arrows of the filled area shown in the drawing) are set. The track of each entering and exiting vehicle is recorded, and the track contains speed information; if the speed never reaches 0, it is determined that the vehicle did not stop. Further, whether the vehicle is a large vehicle is judged based on the category statistics in the track and the size mask, and if a large vehicle did not stop, an early warning is reported and the camera is linked to store a video.
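The rule judgment for the right-turn zone (latching a parking flag when the speed reaches 0, as in S324) can be sketched as a pure function over a track of (position, speed) samples; the predicate-based zone test and the small speed epsilon are assumptions.

```python
def right_turn_violation(track, zone_contains, speed_eps=1e-3):
    """Check whether a vehicle crossed the right-turn zone without stopping.

    `track` is a sequence of ((x, y), speed) samples; `zone_contains` is
    any predicate deciding zone membership for a position. While inside
    the zone, a stop flag is latched if the speed reaches (near) zero.
    Returns True when the vehicle entered the zone but never stopped.
    """
    entered = stopped = False
    for pos, speed in track:
        if zone_contains(pos):
            entered = True
            if speed <= speed_eps:
                stopped = True  # parking flag latched
    return entered and not stopped
```

Combined with the size-mask class judgment, a True result here is what triggers the early warning and the linked video capture.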
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a vehicle behavior detection device is further provided, and the device is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram showing the configuration of a vehicle behavior detection apparatus according to an embodiment of the present invention, which includes, as shown in fig. 5:
a first obtaining module 52, configured to obtain a first image obtained by shooting a target area with a camera device, and obtain a second image which includes radar data of the target area and is collected by a radar device, where the first image is an image continuously shot by the camera device in a target time period in which the environment where the camera device is located meets a preset brightness threshold, and the second image is an image including the radar data continuously collected by the radar device in the target time period;
a detection module 54, configured to detect the first image and the second image to obtain first data information of a vehicle included in the first image and second data information of the vehicle included in the second image, and determine traveling information of a target vehicle of a target category in the target area based on the first data information and the second data information;
a first determination module 56 for determining whether the target vehicle has performed a target action based on the travel information.
In an exemplary embodiment, the detection module 54 includes:
a first determination sub-module configured to determine a first classification statistic result for each vehicle included in the first image based on the first data information, wherein the first classification statistic result is indicative of size information, position information, and category information of each vehicle included in the first image, and determine a second classification statistic result for each vehicle included in the second image based on the second data information, wherein the second classification statistic result is indicative of size information, position information, and category information of each vehicle included in the second image;
a second determination sub-module configured to determine a first vehicle of the target category included in the first image based on the size information and category information indicated in the first classification statistical result, and determine a second vehicle of the target category included in the second image based on the size information and category information indicated in the second classification statistical result;
a third determination sub-module configured to determine target position information of the target vehicle corresponding to a position in the first image and the second image based on the position information of the first vehicle indicated in the first classification statistical result and the position information of the second vehicle indicated in the second classification statistical result;
a fourth determination submodule configured to determine the travel information of the target vehicle in the target area based on the target position information.
In an exemplary embodiment, the apparatus further includes:
a generating module, configured to generate a first mask table based on a second classification statistical result after determining a first classification statistical result of each vehicle included in the first image based on the first data information and determining the second classification statistical result of each vehicle included in the second image based on the second data information, wherein the first mask table is used for recording size information, position information, and category information of each vehicle included in the second image;
a correction module, configured to correspondingly correct information recorded in the first mask table by using the size information, the position information, and the category information of each vehicle included in the first image recorded in the first classification statistical result;
the processing module is used for obtaining a target mask table based on the correction result;
a second determination module to determine the travel information of the target vehicle in the target area based on the target mask table.
In an exemplary embodiment, the modification module includes:
a fifth determining sub-module, configured to determine a target classification result from the first classification result when a category confidence of each vehicle included in the first image is further recorded in the first classification result, where the target classification result is used to indicate size information, position information, and category information of a vehicle whose category confidence included in the first image is greater than a predetermined confidence threshold;
and the correction submodule is used for correspondingly correcting the information recorded in the first mask table by using the target classification result.
In an exemplary embodiment, the first determining module 56 includes:
a sixth determining submodule configured to determine trajectory information of the target vehicle included in the travel information;
a seventh determining sub-module for determining speed information of the target vehicle based on the trajectory information;
an eighth determination sub-module configured to determine whether the target vehicle has performed a target behavior based on the speed information.
In an exemplary embodiment, the eighth determining sub-module includes:
a first determination unit configured to determine that the target vehicle has performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle reaches zero in a right-turn zone included in the target zone;
a second determination unit configured to determine that the target vehicle has not performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle does not reach zero in the right-turn zone included in the target zone.
In an exemplary embodiment, the apparatus further includes:
a second acquisition module, configured to acquire a third image that includes radar data of the target area and is acquired by the radar device in a period other than the target period;
a second determining module, configured to detect the third image to obtain third data information of the vehicle included in the third image, and determine target driving information of the target vehicle in the target area based on the third data information;
a third determination module to determine whether the target vehicle has performed a target behavior based on the target travel information.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices; and they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device. In some cases, the steps shown or described may be executed in an order different from that described herein; alternatively, they may be separately fabricated as individual integrated-circuit modules, or multiple modules or steps among them may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of detecting a behavior of a vehicle, characterized by comprising:
the method comprises the steps of obtaining a first image obtained by shooting a target area by a camera device, and obtaining a second image which is collected by a radar device and comprises radar data of the target area, wherein the first image is an image which is continuously shot by the camera device in a target time period when the environment where the camera device is located meets a preset brightness threshold, and the second image is an image which is continuously collected by the radar device in the target time period and comprises the radar data;
detecting the first image and the second image to obtain first data information of a vehicle included in the first image and second data information of the vehicle included in the second image, and determining running information of a target vehicle of a target category in the target area based on the first data information and the second data information;
determining whether the target vehicle has performed a target behavior based on the travel information.
2. The method of claim 1, wherein determining travel information for a target vehicle of a target category in the target area based on the first data information and the second data information comprises:
determining a first classification statistical result for each vehicle included in the first image based on the first data information, wherein the first classification statistical result indicates size information, position information, and category information of each vehicle included in the first image, and determining a second classification statistical result for each vehicle included in the second image based on the second data information, wherein the second classification statistical result indicates size information, position information, and category information of each vehicle included in the second image;
determining a first vehicle of the target category included in the first image based on the size information and category information indicated in the first classification statistical result, and determining a second vehicle of the target category included in the second image based on the size information and category information indicated in the second classification statistical result;
determining target position information of the target vehicle corresponding to the position in the first image and the second image based on the position information of the first vehicle indicated in the first classification statistical result and the position information of the second vehicle indicated in the second classification statistical result;
determining the travel information of the target vehicle in the target area based on the target position information.
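For a concrete reading of claim 2's matching step, camera and radar detections of the target category can be associated by position and their coordinates combined. The sketch below is an illustrative assumption about how "corresponding positions" might be matched (nearest-neighbour within a distance gate); the dictionary keys, category name, and 2.0-metre gate are hypothetical:

```python
def fuse_target_positions(camera_dets, radar_dets, target_category="truck",
                          max_dist=2.0):
    """Match camera and radar detections of the target category by
    nearest position and average the matched coordinates.

    Each detection is a dict with 'category' and 'pos' (x, y) keys;
    the structure and the distance gate are illustrative.
    """
    cams = [d for d in camera_dets if d["category"] == target_category]
    rads = [d for d in radar_dets if d["category"] == target_category]
    fused = []
    for c in cams:
        # Nearest radar detection of the same category.
        best = min(rads,
                   key=lambda r: (r["pos"][0] - c["pos"][0]) ** 2
                                 + (r["pos"][1] - c["pos"][1]) ** 2,
                   default=None)
        if best is None:
            continue
        dist2 = ((best["pos"][0] - c["pos"][0]) ** 2
                 + (best["pos"][1] - c["pos"][1]) ** 2)
        if dist2 <= max_dist ** 2:
            # Target position: midpoint of the camera and radar estimates.
            fused.append(((c["pos"][0] + best["pos"][0]) / 2,
                          (c["pos"][1] + best["pos"][1]) / 2))
    return fused
```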
3. The method of claim 2, wherein after determining a first classification statistical result for each vehicle included in the first image based on the first data information and a second classification statistical result for each vehicle included in the second image based on the second data information, the method further comprises:
generating a first mask table based on the second classification statistical result, wherein the first mask table is used for recording the size information, the position information and the category information of each vehicle included in the second image;
correspondingly correcting the information recorded in the first mask table by using the size information, the position information and the category information of each vehicle included in the first image recorded in the first classification statistical result;
obtaining a target mask table based on the correction result;
determining the travel information of the target vehicle in the target area based on the target mask table.
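Claim 3's mask-table step can be read as: seed a table from the radar result, then overwrite its fields with the camera's values where both sensors saw the same vehicle. The sketch below assumes per-vehicle IDs as the join key, which the claim does not specify; all names and record shapes are hypothetical:

```python
def build_mask_table(radar_stats):
    """First mask table: one record per vehicle from the second
    (radar) classification statistical result."""
    return {vid: dict(rec) for vid, rec in radar_stats.items()}


def correct_mask_table(mask, camera_stats):
    """Correct radar-derived size/position/category with the camera's
    values where the camera also recorded the vehicle (a sketch of
    the 'corresponding correction'; matching by shared ID is an
    assumption).  Returns the target mask table."""
    for vid, rec in camera_stats.items():
        if vid in mask:
            mask[vid].update(rec)
    return mask
```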
5. The method according to claim 3, wherein the correspondingly correcting the information recorded in the first mask table using the size information, the position information, and the category information of each vehicle included in the first image recorded in the first classification statistical result comprises:
in a case where the first classification statistical result further records a classification confidence of each vehicle included in the first image, determining a target classification result from the first classification statistical result, wherein the target classification result indicates the size information, position information, and category information of vehicles in the first image whose classification confidence is greater than a preset confidence threshold;
and correspondingly correcting the information recorded in the first mask table by using the target classification result.
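The confidence gate in claim 4 is a simple filter applied to the camera result before it corrects the mask table. A minimal sketch, with the record shape and the 0.6 threshold as illustrative assumptions:

```python
def filter_by_confidence(camera_stats, threshold=0.6):
    """Keep only camera classification records whose confidence is
    greater than the preset threshold (the 'target classification
    result').  The 0.6 default is illustrative."""
    return {vid: rec for vid, rec in camera_stats.items()
            if rec.get("confidence", 0.0) > threshold}
```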
5. The method of claim 1, wherein determining whether the target vehicle performed the target action based on the travel information comprises:
determining trajectory information of the target vehicle included in the travel information;
determining speed information of the target vehicle based on the trajectory information;
determining whether the target vehicle has performed a target behavior based on the speed information.
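Claim 5 derives speed from the trajectory; with a track of timestamped positions this is a finite-difference computation. A sketch, assuming the trajectory is a list of `(t, x, y)` tuples (a representation the patent does not prescribe):

```python
def speeds_from_trajectory(track):
    """Finite-difference speeds from a trajectory [(t, x, y), ...].

    Returns one speed (distance / elapsed time) per consecutive
    pair of trajectory points.
    """
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        out.append(dist / (t1 - t0))
    return out
```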
6. The method of claim 5, wherein determining whether the target vehicle performed the target action based on the speed information comprises:
determining that the target vehicle has performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle dropped to zero in a right-turn area included in the target area;
determining that the target vehicle has not performed the target behavior in a case where it is determined based on the speed information that the speed of the target vehicle did not drop to zero in the right-turn area included in the target area.
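Claim 6's test (did the vehicle come to a stop inside the right-turn area?) can be sketched as a per-segment check over the trajectory. The zone representation (an axis-aligned box), the near-zero speed tolerance, and all names are illustrative assumptions:

```python
def performed_right_turn_stop(track, zone, stop_speed=0.5):
    """Return True if the vehicle's speed drops to (near) zero at some
    point inside the right-turn area.

    `track` is [(t, x, y), ...]; `zone` is an axis-aligned box
    (xmin, ymin, xmax, ymax).  Both representations and the speed
    tolerance are illustrative, not the patent's.
    """
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
        in_zone = zone[0] <= x1 <= zone[2] and zone[1] <= y1 <= zone[3]
        if in_zone and speed <= stop_speed:
            return True  # target behavior performed: stopped in the zone
    return False
```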
7. The method of claim 1, further comprising:
acquiring a third image which is acquired by the radar equipment in other time periods except the target time period and comprises radar data of the target area;
detecting the third image to obtain third data information of the vehicle included in the third image, and determining target driving information of the target vehicle in the target area based on the third data information;
determining whether the target vehicle has performed a target behavior based on the target travel information.
8. A detection device of a vehicle behavior, characterized by comprising:
the device comprises a first acquisition module and a second acquisition module, wherein the first acquisition module is used for acquiring a first image obtained by shooting a target area by a camera device and acquiring a second image which is acquired by a radar device and comprises radar data of the target area, the first image is an image which is continuously shot by the camera device in a target time period when the environment where the camera device is located meets a preset brightness threshold, and the second image is an image which is continuously acquired by the radar device in the target time period and comprises the radar data;
a detection module, configured to detect the first image and the second image to obtain first data information of a vehicle included in the first image and second data information of the vehicle included in the second image, and determine, based on the first data information and the second data information, traveling information of a target vehicle of a target category in the target area;
a first determination module to determine whether the target vehicle performed a target behavior based on the travel information.
9. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method as claimed in any one of claims 1 to 7 when executing the computer program.
CN202210977137.7A 2022-08-15 2022-08-15 Vehicle behavior detection method and device, storage medium and electronic device Pending CN115240148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210977137.7A CN115240148A (en) 2022-08-15 2022-08-15 Vehicle behavior detection method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210977137.7A CN115240148A (en) 2022-08-15 2022-08-15 Vehicle behavior detection method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN115240148A true CN115240148A (en) 2022-10-25

Family

ID=83679064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210977137.7A Pending CN115240148A (en) 2022-08-15 2022-08-15 Vehicle behavior detection method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115240148A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908838A (en) * 2022-12-12 2023-04-04 南京慧尔视智能科技有限公司 Vehicle existence detection method, device, equipment and medium based on radar vision fusion
CN115908838B (en) * 2022-12-12 2023-11-07 南京慧尔视智能科技有限公司 Vehicle presence detection method, device, equipment and medium based on radar fusion
CN117636297A (en) * 2023-11-20 2024-03-01 苏州大学 Front car driving intention identification system and method based on visible light communication
CN117789486A (en) * 2024-02-28 2024-03-29 南京莱斯信息技术股份有限公司 Monitoring system and method for right turn stop of intersection of large-sized vehicle
CN117789486B (en) * 2024-02-28 2024-05-10 南京莱斯信息技术股份有限公司 Monitoring system and method for right turn stop of intersection of large-sized vehicle

Similar Documents

Publication Publication Date Title
CN108513674B (en) Detection and alarm method for accumulated snow and icing in front of vehicle, storage medium and server
CN115240148A (en) Vehicle behavior detection method and device, storage medium and electronic device
CN106571046B (en) Vehicle-road cooperative driving assisting method based on road surface grid system
CN103927878B (en) A kind of automatic shooting device for parking offense and automatically grasp shoot method
CN112424793A (en) Object identification method, object identification device and electronic equipment
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN102792314A (en) Cross traffic collision alert system
CN105679043A (en) 3D radar intelligent bayonet system and processing method thereof
CN113255439B (en) Obstacle identification method, device, system, terminal and cloud
CN114463372A (en) Vehicle identification method and device, terminal equipment and computer readable storage medium
US11281916B2 (en) Method of tracking objects in a scene
CN114648748A (en) Motor vehicle illegal parking intelligent identification method and system based on deep learning
CN117372979A (en) Road inspection method, device, electronic equipment and storage medium
CN113221724B (en) Vehicle spray detection method and system
CN112447060A (en) Method and device for recognizing lane and computing equipment
CN111862621B (en) Intelligent snapshot system of multi-type adaptive black cigarette vehicle
CN113420714A (en) Collected image reporting method and device and electronic equipment
CN105679090A (en) Smartphone-based driver nighttime driving assistance system and smartphone-based driver nighttime driving assistance method
CN115953764B (en) Vehicle sentinel method, device, equipment and storage medium based on aerial view
CN115631420B (en) Tunnel accumulated water identification method and device, storage medium and electronic device
CN113468911A (en) Vehicle-mounted red light running detection method and device, electronic equipment and storage medium
CN116152753A (en) Vehicle information identification method and system, storage medium and electronic device
CN115797880A (en) Method and device for determining driving behavior, storage medium and electronic device
CN113112814B (en) Snapshot method and device without stopping right turn and computer storage medium
CN105303825A (en) Violating inclined side parking evidence obtaining device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination