CN114913695B - Vehicle reverse running detection method, system, equipment and storage medium based on AI vision - Google Patents

Vehicle reverse running detection method, system, equipment and storage medium based on AI vision

Info

Publication number
CN114913695B
CN114913695B (application CN202210704832.6A)
Authority
CN
China
Prior art keywords
vehicle
lane
video
vision
image area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210704832.6A
Other languages
Chinese (zh)
Other versions
CN114913695A (en)
Inventor
谭黎敏
赵钊
余磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xijing Technology Co ltd
Original Assignee
Shanghai Xijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xijing Technology Co ltd filed Critical Shanghai Xijing Technology Co ltd
Priority to CN202210704832.6A
Publication of CN114913695A
Application granted
Publication of CN114913695B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/056 Detecting movement of traffic to be counted or controlled with provision for distinguishing direction of travel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an AI vision-based vehicle reverse running detection method, system, equipment and storage medium, wherein the method comprises the following steps: collecting a video stream of a road surface and converting it into a number of time-ordered video frame sequences; identifying vehicles in the video frame sequences, and obtaining the image area occupied by each vehicle and the lane in which it is located; obtaining a first driving direction of the vehicle according to the change in position of the vehicle's image area in the video stream; and judging whether the first driving direction is opposite to a second driving direction preset for the lane, and if so, intervening in the state of the vehicle or the lane. The application can automatically judge whether a vehicle is currently running in reverse and intervene with or brake the reverse-running vehicle in time, thereby improving traffic safety in large-scale operation sites.

Description

Vehicle reverse running detection method, system, equipment and storage medium based on AI vision
Technical Field
The application relates to the technical field of AI vision, in particular to a vehicle reverse detection method, system, equipment and storage medium based on AI vision.
Background
In the field of port safety detection, the vehicle reverse running detection function occupies an important position. In particular, during the current development of unmanned ports, unmanned container trucks and manned container trucks travel together in the terminal area of the port: the unmanned container trucks mainly navigate by following preset route rules, while the manned container trucks are driven according to the road-surface signs observed by their drivers. When the road network of the terminal area is optimized, many signs are not updated in time, or the road signs are updated but the related driving rules are not yet uploaded to the electronic map of the unmanned container trucks; in such cases a vehicle easily runs in reverse and collides, causing a traffic accident.
At present, AI vision, i.e. machine vision technology, is an interdisciplinary subject spanning many fields such as artificial intelligence, neurobiology, psychophysics, computer science, image processing and pattern recognition. Machine vision mainly uses a computer to simulate human visual functions: it extracts information from images of objective objects, processes and understands that information, and finally applies it to actual detection, measurement and control. The greatest characteristics of machine vision technology are its high speed, large information volume and multiple functions.
In view of the above, the application provides a vehicle reverse running detection method, a system, a device and a storage medium based on AI vision.
It should be noted that the information disclosed in the foregoing background section is only for enhancing understanding of the background of the application, and therefore may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
Aiming at the problems in the prior art, the application aims to provide an AI vision-based vehicle reverse running detection method, system, equipment and storage medium, which overcome the difficulties in the prior art: they can automatically judge whether a vehicle is currently running in reverse and intervene with or brake the reverse-running vehicle in time, thereby improving vehicle traffic safety in large-scale operation sites.
The embodiment of the application provides a vehicle reverse detection method based on AI vision, which comprises the following steps:
collecting video streams of a road surface, and converting the video streams into a plurality of video frame sequences based on time sequences;
identifying vehicles in the video frame sequence, and obtaining image areas occupied by the vehicles and lanes where the vehicles are located respectively;
obtaining a first driving direction of the vehicle according to the position change of the image area of the vehicle in the video stream;
and judging whether the first running direction is opposite to a second running direction preset by the lane, and if so, intervening in the state of the vehicle or the lane.
Preferably, the collecting a video stream of a road surface and converting it into a plurality of time-ordered video frame sequences comprises:
collecting video streams of a road surface;
extracting video frames from the video stream according to a preset frame extraction interval;
a sequence of video frames is established based on the timing of the video frames.
Preferably, the identifying a vehicle in the video frame sequence, and obtaining the image areas occupied by the vehicle and the lane where the vehicle is located respectively, includes:
carrying out vehicle identification on video frames in the video frame sequence through a vehicle image identification neural network to obtain a first image area occupied by a vehicle;
obtaining a second image area, corresponding to a lane with a preset second driving direction, in which the first image area is located;
and establishing a mapping relation between the vehicle and the second driving direction.
Preferably, the obtaining the first driving direction of the vehicle according to the position change of the image area of the vehicle in the video stream includes:
obtaining center coordinates of the first image region in video frames in each of the video frame sequences;
a first direction of travel of the vehicle is obtained based on a movement trajectory of a center coordinate in a sequence of video frames.
Preferably, when the center coordinate of the first image area is obtained from the video frames in each video frame sequence, the center coordinate is the point whose sum of distances to each pixel in the first image area is smallest;
and when the first driving direction of the vehicle is obtained based on the movement track of the center coordinates in the video frame sequence, the first driving direction is the emission direction of the ray whose sum of distances to the respective center coordinates in the video stream is smallest.
Preferably, the determining whether the first driving direction is opposite to the second driving direction preset by the lane, if yes, intervening in the state of the vehicle or the lane includes:
when the first driving direction is continuously opposite to the second driving direction preset by the lane within the preset duration, identifying license plate information of the vehicle;
and obtaining the control authority of the vehicle corresponding to the license plate information through a vehicle networking system based on the license plate information, and remotely braking the vehicle.
Preferably, the determining whether the first driving direction is opposite to the second driving direction preset by the lane, if yes, intervening in the state of the vehicle or the lane, further includes:
and switching traffic lights of the nearest intersection of the current advancing direction of the vehicle into red lights based on road network information.
Preferably, the determining whether the first driving direction is opposite to the second driving direction preset by the lane, if yes, intervening in the state of the vehicle or the lane, further includes:
and acquiring license plate information of at least one vehicle of the same lane in the current advancing direction of the vehicle based on road network information, acquiring control authority of at least one vehicle corresponding to the license plate information through a vehicle networking system based on the license plate information, and remotely braking the vehicle.
The embodiment of the application also provides an AI vision-based vehicle reverse running detection system, which is used for realizing the above AI vision-based vehicle reverse running detection method, the system comprising:
the video acquisition module acquires a video stream of a road surface and converts the video stream into a plurality of video frame sequences based on time sequence;
the vehicle identification module is used for identifying vehicles in the video frame sequence and obtaining image areas occupied by the vehicles and lanes where the vehicles are located respectively;
a direction obtaining module that obtains a first traveling direction of the vehicle according to a change in a position of an image area of the vehicle in the video stream;
and the reverse intervention module is used for judging whether the first running direction is opposite to the second running direction preset by the lane, and if so, intervening the state of the vehicle or the lane.
The embodiment of the application also provides a vehicle reverse running detection device based on AI vision, which comprises:
a processor;
a memory having stored therein executable instructions of a processor;
wherein the processor is configured to perform the steps of the AI vision-based vehicle reverse detection method described above via execution of the executable instructions.
The embodiment of the application also provides a computer-readable storage medium for storing a program which, when executed, implements the steps of the above AI vision-based vehicle reverse running detection method.
The AI vision-based vehicle reverse running detection method, system, equipment and storage medium of the application can automatically judge whether a vehicle is currently running in reverse and intervene with or brake the reverse-running vehicle in time, thereby improving traffic safety in large-scale operation sites.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings.
Fig. 1 is a flowchart of the AI vision-based vehicle reverse running detection method of the present application.
Fig. 2 to 6 are schematic diagrams of an implementation process scenario of the AI vision-based vehicle reverse detection method of the present application.
Fig. 7 is a schematic structural diagram of the AI vision-based vehicle reverse running detection system of the present application.
Fig. 8 is a schematic structural view of the AI vision-based vehicle reverse running detection apparatus of the present application; and
fig. 9 is a schematic structural view of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will be readily apparent to those skilled in the art from the following disclosure, which describes the embodiments of the present application by way of specific examples. The application may be practiced or carried out in other embodiments and with various details, and various modifications and alterations may be made to the details of the application from various points of view and applications without departing from the spirit of the application. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
The embodiments of the present application will be described in detail below with reference to the attached drawings so that those skilled in the art to which the present application pertains can easily implement the present application. This application may be embodied in many different forms and is not limited to the embodiments described herein.
In the context of the present description, reference to the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples, as well as features of various embodiments or examples, presented herein may be combined and combined by those skilled in the art without conflict.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the context of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
For the purpose of clarity of explanation of the present application, components that are not related to the explanation are omitted, and the same or similar components are given the same reference numerals throughout the description.
Throughout the specification, when a device is said to be "connected" to another device, this includes not only the case of "direct connection" but also the case of "indirect connection" with other elements interposed therebetween. In addition, when a certain component is said to be "included" in a certain device, unless otherwise stated, other components are not excluded, but it means that other components may be included.
When a device is said to be "on" another device, this may be directly on the other device, but may also be accompanied by other devices therebetween. When a device is said to be "directly on" another device in contrast, there is no other device in between.
Although the terms first, second, etc. may be used herein to describe various elements in some examples, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element, for example, a first interface and a second interface. Furthermore, as used in this application, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the language clearly indicates the contrary. The meaning of "comprising" in the specification is to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Terms defined in commonly used dictionaries are to be interpreted as having meanings consistent with the relevant technical literature and the present context, and, unless expressly so defined herein, are not to be interpreted in an idealized or overly formal sense.
Fig. 1 is a flowchart of the AI vision-based vehicle reverse running detection method of the present application. As shown in fig. 1, an embodiment of the present application provides a vehicle reverse detection method based on AI vision, including the following steps:
s110, acquiring a video stream of a road surface, and converting the video stream into a plurality of video frame sequences based on time sequence;
s120, identifying vehicles in the video frame sequence, and obtaining image areas occupied by the vehicles and lanes where the vehicles are located respectively;
s130, obtaining a first driving direction of the vehicle according to the position change of the image area of the vehicle in the video stream;
and S140, judging whether the first driving direction is opposite to the second driving direction preset by the lane, and if so, intervening in the state of the vehicle or the lane.
The vehicle reverse running detection function in the application works as follows: a preset camera acquires real-time pictures of the driving area of a port, the driving direction of each vehicle in the picture is identified through the algorithm logic, and it is then judged whether that direction is the same as the direction specified for the road. If they differ, the method issues alarm information and retains the alarm picture. Detecting the direction of vehicles on a port road requires setting the algorithm detection area and the driving direction specified for the road. The vehicle reverse running detection algorithm runs on a GPU and uses a model based on a deep learning network, which significantly reduces the CPU utilization of the computer server, makes full use of the computing power of the GPU, improves the running speed and accuracy of the model, and thus improves the detection efficiency of the whole system.
In a preferred embodiment, step S110 comprises the steps of:
s111, collecting video streams of a road surface;
s112, extracting video frames from the video stream according to a preset frame extraction interval;
s113, establishing a video frame sequence based on the time sequence of the video frames.
In a preferred embodiment, step S120 comprises the steps of:
s121, carrying out vehicle identification on video frames in a video frame sequence through a vehicle image identification neural network to obtain a first image area occupied by a vehicle;
s122, obtaining a second image area with a lane in a preset second driving direction, where the first image area is located;
s123, establishing a mapping relation between the vehicle and the second driving direction.
In a preferred embodiment, step S130 comprises the steps of:
s131, obtaining the center coordinates of a first image area in video frames in each video frame sequence;
s132, obtaining a first driving direction of the vehicle based on the moving track of the center coordinates in the video frame sequence.
In a preferred embodiment, in step S131, the sum of the distances of the center coordinates to each pixel in the first image area is minimized;
in step S132, the first traveling direction is the emission direction of a ray in the video stream, in which the sum of distances to the respective center coordinates is smallest.
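The following sketch illustrates one possible reading of steps S131 and S132 under stated assumptions: for a rectangular detection box, the point minimizing the summed distance to all pixels is taken as the box centre, and the ray minimizing the summed distance to the centre coordinates is approximated by a principal-axis fit to the trajectory, oriented by the overall displacement over time.

```python
# Sketch of steps S131-S132; the principal-axis (SVD) fit is an approximation
# chosen for this illustration, not a method prescribed by the application.
import numpy as np

def box_center(bbox):
    x1, y1, x2, y2 = bbox
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def first_driving_direction(centers):
    """centers: time-ordered (N, 2) array of a vehicle's centre coordinates.
    Returns a unit direction vector, or None if the track is too short."""
    pts = np.asarray(centers, dtype=float)
    if len(pts) < 2:
        return None
    # Principal axis of the trajectory = direction of the best-fit line.
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Orient the axis so it points the way the vehicle actually moved over time.
    if np.dot(axis, pts[-1] - pts[0]) < 0:
        axis = -axis
    norm = np.linalg.norm(axis)
    return axis / norm if norm > 0 else None
```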
In a preferred embodiment, step S140 comprises the steps of:
s141, identifying license plate information of the vehicle when the first driving direction is continuously opposite to the second driving direction preset by the lane within the preset duration;
s142, obtaining control authority of the vehicle corresponding to the license plate information through the Internet of vehicles based on the license plate information, and remotely braking the vehicle.
In a preferred embodiment, step S140 further comprises the steps of:
s143, switching traffic lights of the latest intersection of the current advancing direction of the vehicle to red lights based on road network information.
In a preferred embodiment, step S140 further comprises the steps of:
s144, acquiring license plate information of at least one vehicle of the same lane in the current advancing direction of the vehicle based on road network information, acquiring control authority of at least one vehicle corresponding to the license plate information through a vehicle networking system based on the license plate information, and remotely braking the vehicle.
The application uses a GPU-based deep neural network model, which can obviously improve the accuracy of identifying vehicles, and the running direction of a vehicle can be accurately identified by judging the front-and-rear positional relationship of the same vehicle. Putting the main computational work on the GPU makes it possible to process video information in real time. The picture processing speed of the application reaches 19 fps, which meets the port video detection requirement, and the accuracy of vehicle direction identification reaches 95 percent.
As shown in fig. 2 to 6, an implementation procedure of the present application mainly includes:
the vehicle direction detection system of the port transmits the digital picture signal to the server 10 through the preset camera, and the server 10 calculates the running direction of the vehicle through a built-in algorithm to judge whether the vehicle is in reverse running or not. The application can be deployed in ports, and the main components of the complete system are as follows: a camera, a computer processor CPU, a computer processor GPU, a computer program stored in a memory that is executable in the above-mentioned processor, etc. The method is characterized in that the camera, the processor CPU, the computer processor GPU and the memory can execute steps in all computer programs.
Referring to fig. 2, the flow of implementing the present application is as follows:
the network camera 1 is used for shooting the road 2 of the port, referring to fig. 3, the automatic range calibration is carried out on different lanes of the road 2 on the basis of the picture obtained by the network camera 1, and the running direction A, B of each lane 21, 22 is preset.
Referring to fig. 4 and 5, the network camera 1 is used to shoot the road 2 and a vehicle 31 of the port, and a video stream of the road surface is collected; video frames are extracted from the video stream according to a preset frame extraction interval; and a sequence of video frames is established based on the timing of the video frames. The video is split into individual pictures at a rate of about 15 fps, the vehicle in each picture is detected using a deep learning model, and the position coordinates (x, y) of the vehicle center are recorded; the position of the vehicle in the pictures is detected in a loop. For example: vehicle identification is carried out on the video frames in the video frame sequence through a vehicle image identification neural network to obtain a first image area occupied by the vehicle; a second image area, corresponding to a lane with a preset second driving direction, in which the first image area is located, is obtained; and a mapping relation between the vehicle and the second driving direction is established. The center coordinates of the first image region are obtained for the video frames in each video frame sequence, and a first travel direction of the vehicle is obtained based on the movement trajectory of the center coordinates in the sequence of video frames. The center coordinate is the point whose sum of distances to each pixel in the first image area is smallest, and the first travel direction is the emission direction V of the ray whose sum of distances to the respective center coordinates in the video stream is smallest.
Finally, the positional relationship of the vehicle between the previous picture and the next picture is judged, and the detections are considered to be the same vehicle when their overlap (IoU) is greater than 0.2. In this way, a plurality of pieces of position information of the same vehicle are accumulated; the earlier and later positions are compared, and the direction in which the vehicle travels is then calculated.
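As an illustration of the association rule just described, the sketch below computes the bounding-box overlap (IoU) between consecutive pictures and greedily extends an existing track when the overlap exceeds 0.2, otherwise starting a new track; the greedy matching strategy is an assumption of this sketch.

```python
# Sketch of frame-to-frame association: IoU > 0.2 means "same vehicle".
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2); returns intersection-over-union in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, threshold=0.2):
    """tracks: {track_id: [boxes...]}; detections: boxes from the next picture.
    Each detection joins the track it overlaps most with (IoU > threshold),
    otherwise it starts a new track, so position information accumulates per vehicle."""
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best_id, best_iou = None, threshold
        for track_id, boxes in tracks.items():
            score = iou(boxes[-1], det)
            if score > best_iou:
                best_id, best_iou = track_id, score
        if best_id is None:
            tracks[next_id] = [det]
            next_id += 1
        else:
            tracks[best_id].append(det)
    return tracks
```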
Whether the vehicle is currently running in reverse is judged from the preset traveling direction B of the lane 22 and the traveling direction V of the vehicle 31. In a preferred embodiment, the criterion for this judgment is that the included angle between the two directions is larger than 145 degrees; after a reverse-running vehicle is identified, the reverse-running picture information is stored and alarm information is sent.
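A minimal check of the 145-degree criterion described above could look as follows; the direction vectors are assumed to be 2-D image-plane vectors.

```python
# Included-angle test between the vehicle's fitted direction V and the lane's preset direction B.
import math

def is_reverse(direction_v, direction_b, threshold_deg=145.0):
    """Returns True when the included angle between the two directions exceeds
    the threshold, i.e. the vehicle runs against the lane's preset direction."""
    vx, vy = direction_v
    bx, by = direction_b
    dot = vx * bx + vy * by
    norm = math.hypot(vx, vy) * math.hypot(bx, by)
    if norm == 0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle > threshold_deg

# Example: lane direction B points "up" the image, the vehicle moves "down".
print(is_reverse((0.0, 1.0), (0.0, -1.0)))  # True (included angle = 180 degrees)
```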
The license plate information ABCDEF of the vehicle can be identified through AI vision, and the control authority of the vehicle corresponding to the license plate information ABCDEF is obtained through the Internet of Vehicles system, so that the vehicle is braked remotely. Alternatively, referring to fig. 6, the traffic lights 41, 42 at the nearest intersection in the current advancing direction of the vehicle are switched to red based on road network information, causing the related vehicles on the lane to brake and avoiding a collision. The license plate information of at least one vehicle 32 in the same lane in the current advancing direction of the vehicle 31 can also be acquired through AI vision based on road network information, the control authority of the at least one vehicle 32 corresponding to the license plate information is obtained through the Internet of Vehicles system based on the license plate information, and the vehicle 32 is braked remotely, further avoiding a collision and enhancing safety.
Fig. 7 is a schematic structural diagram of the AI vision-based vehicle reverse running detection system of the present application. As shown in fig. 7, the AI vision-based vehicle reverse travel detection system 5 of the present application includes:
the video acquisition module 51 acquires a video stream of a road surface and converts the video stream into a plurality of video frame sequences based on time sequence.
The vehicle identification module 52 identifies the vehicle in the sequence of video frames and obtains the image areas occupied by the vehicle and the lane in which it is located, respectively.
The direction obtaining module 53 obtains a first traveling direction of the vehicle according to a change in the position of the image area of the vehicle in the video stream.
The reverse intervention module 54 determines whether the first driving direction is opposite to a second driving direction preset in the lane, and if so, intervenes in the state of the vehicle or the lane.
In a preferred embodiment, the video acquisition module 51 is configured to acquire a video stream of the road surface; extracting video frames from the video stream according to a preset frame extraction interval; a sequence of video frames is established based on the timing of the video frames.
In a preferred embodiment, the vehicle identification module 52 is configured to perform vehicle identification on video frames in the sequence of video frames through a vehicle image identification neural network to obtain a first image area occupied by the vehicle; obtaining a second image area with a lane in a preset second driving direction, where the first image area is located; and establishing a mapping relation between the vehicle and the second driving direction.
In a preferred embodiment, the direction obtaining module 53 is configured to obtain the center coordinates of the first image area in the video frames in each video frame sequence; a first travel direction of the vehicle is obtained based on a movement trajectory of a center coordinate in the sequence of video frames. The sum of the distances from the center coordinates to each pixel in the first image area is the smallest; the first direction of travel is the direction of emission of a ray in the video stream that has the smallest sum of distances to the respective center coordinates.
In a preferred embodiment, the reverse intervention module 54 is configured to identify the license plate information of the vehicle when the first driving direction is continuously opposite to the second driving direction preset for the lane within a preset duration; obtain the control authority of the vehicle corresponding to the license plate information through the Internet of Vehicles system based on the license plate information, and remotely brake the vehicle; and switch the traffic lights at the nearest intersection in the current advancing direction of the vehicle to red based on road network information.
The AI vision-based vehicle reverse running detection system of the application can automatically judge whether a vehicle is currently running in reverse and intervene with or brake the reverse-running vehicle in time, thereby improving traffic safety in large-scale operation sites.
The embodiment of the application also provides an AI vision-based vehicle reverse running detection device, which comprises a processor and a memory having stored therein executable instructions of the processor, wherein the processor is configured to execute the steps of the above AI vision-based vehicle reverse running detection method via execution of the executable instructions.
As described above, the AI vision-based vehicle reverse running detection device can automatically judge whether a vehicle is currently running in reverse and intervene with or brake the reverse-running vehicle in time, thereby improving traffic safety in large-scale operation sites.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module" or "platform."
Fig. 8 is a schematic structural view of the AI vision-based vehicle reverse running detection apparatus of the present application. An electronic device 600 according to this embodiment of the application is described below with reference to fig. 8. The electronic device 600 shown in fig. 8 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 8, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including memory unit 620 and processing unit 610), a display unit 640, etc.
Wherein the storage unit stores program code executable by the processing unit 610, such that the processing unit 610 performs the steps according to various exemplary embodiments of the present application described in the method section of this specification above. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may be a local bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
The embodiment of the application also provides a computer-readable storage medium for storing a program, and the steps of the AI vision-based vehicle reverse running detection method are implemented when the program is executed. In some possible embodiments, aspects of the present application may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the application described in the method section of this specification above, when the program product is run on the terminal device.
As described above, when the program of the computer-readable storage medium according to this embodiment is executed, it can automatically determine whether a vehicle is currently running in reverse, and intervene with or brake the reverse-running vehicle in time, thereby improving the traffic safety of vehicles in large-scale operation sites.
Fig. 9 is a schematic structural view of a computer-readable storage medium of the present application. Referring to fig. 9, a program product 800 for implementing the above-described method according to an embodiment of the present application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In summary, the vehicle reverse running detection method, system, equipment and storage medium based on AI vision can automatically judge whether the vehicle is currently reverse running or not, and timely intervene or brake the reverse running vehicle, so that the traffic safety of a large-scale operation place is improved.
The foregoing is a further detailed description of the application in connection with the preferred embodiments, and it is not intended that the application be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the application, and these should be considered to be within the scope of the application.

Claims (7)

1. A vehicle reverse running detection method based on AI vision, characterized by comprising the following steps:
s110, acquiring a video stream of a road surface, and converting the video stream into a plurality of video frame sequences based on time sequence;
s120, carrying out vehicle recognition on video frames in the video frame sequence through a vehicle image recognition neural network to obtain a first image area occupied by a vehicle, obtaining a second image area with a lane in a preset second driving direction where the first image area is located, and establishing a mapping relation between the vehicle and the second driving direction;
s130, when the overlapping degree between the vehicle position of the previous picture and the vehicle position of the next picture is larger than 0.2, the vehicle is considered as the same vehicle, a plurality of pieces of position information of the same vehicle are accumulated, the center coordinates of the first image area are obtained from video frames in each video frame sequence, and the sum of the distances from the center coordinates to each pixel in the first image area is minimum; obtaining a first driving direction of the vehicle based on central coordinates in the video frame sequence, wherein the first driving direction is the emission direction of a ray with the smallest sum of distances from the central coordinates in the video stream;
and S140, judging whether the first running direction is opposite to a second running direction preset by the lane, and if so, intervening in the state of the vehicle or the lane.
2. The AI-vision-based vehicle reverse detection method of claim 1, wherein the capturing the video stream of the road surface, converting into a number of video frame sequences based on timing, comprises:
collecting video streams of a road surface;
extracting video frames from the video stream according to a preset frame extraction interval;
a sequence of video frames is established based on the timing of the video frames.
3. The AI-vision-based vehicle reverse travel detection method according to claim 1, wherein the determining whether the first travel direction is opposite to a second travel direction preset for the lane, if so, intervening in a state of the vehicle or the lane, includes:
when the first driving direction is continuously opposite to the second driving direction preset by the lane within the preset duration, identifying license plate information of the vehicle;
and obtaining the control authority of the vehicle corresponding to the license plate information through a vehicle networking system based on the license plate information, and remotely braking the vehicle.
4. The AI-vision-based vehicle reverse travel detection method according to claim 1, wherein the determining whether the first travel direction is opposite to a second travel direction preset for the lane, if so, intervenes in a state of the vehicle or the lane, further comprising:
and switching traffic lights of the nearest intersection of the current advancing direction of the vehicle into red lights based on road network information.
5. A vehicle reverse detection system based on AI vision, the system comprising:
the video acquisition module acquires a video stream of a road surface and converts the video stream into a plurality of video frame sequences based on time sequence;
the vehicle identification module is used for carrying out vehicle identification on video frames in the video frame sequence through a vehicle image identification neural network, obtaining a first image area occupied by a vehicle, obtaining a second image area with a lane in a preset second driving direction where the first image area is located, and establishing a mapping relation between the vehicle and the second driving direction;
a direction obtaining module, configured to consider the same vehicle when the overlapping degree between the vehicle position of the previous picture and the vehicle position of the next picture is greater than 0.2, accumulate a plurality of position information of the same vehicle, obtain a center coordinate of the first image area in each video frame sequence, and sum up the distances from the center coordinate to each pixel in the first image area to be minimum; the first driving direction of the vehicle is obtained based on the central coordinates in the video frame sequence, and the first driving direction is the emission direction of a ray with the smallest sum of distances from the central coordinates in the video stream;
and the reverse intervention module is used for judging whether the first running direction is opposite to the second running direction preset by the lane, and if so, intervening the state of the vehicle or the lane.
6. A vehicle reverse travel detection apparatus based on AI vision, characterized by comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the AI vision-based vehicle reverse detection method of any one of claims 1-4 via execution of the executable instructions.
7. A computer-readable storage medium storing a program, characterized in that the program when executed implements the steps of the AI vision-based vehicle reverse running detection method according to any one of claims 1 to 4.
CN202210704832.6A 2022-06-21 2022-06-21 Vehicle reverse running detection method, system, equipment and storage medium based on AI vision Active CN114913695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210704832.6A CN114913695B (en) 2022-06-21 2022-06-21 Vehicle reverse running detection method, system, equipment and storage medium based on AI vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210704832.6A CN114913695B (en) 2022-06-21 2022-06-21 Vehicle reverse running detection method, system, equipment and storage medium based on AI vision

Publications (2)

Publication Number Publication Date
CN114913695A CN114913695A (en) 2022-08-16
CN114913695B (en) 2023-10-31

Family

ID=82772712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210704832.6A Active CN114913695B (en) 2022-06-21 2022-06-21 Vehicle reverse running detection method, system, equipment and storage medium based on AI vision

Country Status (1)

Country Link
CN (1) CN114913695B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2658867Y (en) * 2003-11-13 2004-11-24 欧阳永军 Multifunction vehicle traffic control device
CN106228140A (en) * 2016-07-28 2016-12-14 国网湖南省电力公司 A transmission line forest fire smoke identification method combining the weather environment
CN109919074A (en) * 2019-03-01 2019-06-21 中国科学院合肥物质科学研究院 A vehicle cognition method and device based on visual cognition technology
CN111339994A (en) * 2019-12-31 2020-06-26 智慧互通科技有限公司 Method and device for judging temporary illegal parking
CN112329722A (en) * 2020-11-26 2021-02-05 上海西井信息科技有限公司 Driving direction detection method, system, equipment and storage medium
CN112750317A (en) * 2020-12-21 2021-05-04 深圳市商汤科技有限公司 Vehicle reverse running detection method, device, equipment and computer readable storage medium
CN113744519A (en) * 2020-12-20 2021-12-03 洪其波 System device for intelligently correcting non-motor vehicle retrograde motion and implementation method
CN114360261A (en) * 2021-12-30 2022-04-15 北京软通智慧科技有限公司 Vehicle reverse driving identification method and device, big data analysis platform and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11127288B2 (en) * 2019-12-05 2021-09-21 Geotoll, Inc. Wrong way driving detection

Also Published As

Publication number Publication date
CN114913695A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
Datondji et al. A survey of vision-based traffic monitoring of road intersections
CN109492507B (en) Traffic light state identification method and device, computer equipment and readable medium
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
CN112750150B (en) Vehicle flow statistical method based on vehicle detection and multi-target tracking
US11727668B2 (en) Using captured video data to identify pose of a vehicle
CN112740268B (en) Target detection method and device
US11308357B2 (en) Training data generation apparatus
CN116958935A (en) Multi-view-based target positioning method, device, equipment and medium
CN118038409A (en) Vehicle drivable region detection method, device, electronic equipment and storage medium
CN112735163B (en) Method for determining static state of target object, road side equipment and cloud control platform
JP7454685B2 (en) Detection of debris in vehicle travel paths
CN114913695B (en) Vehicle reverse running detection method, system, equipment and storage medium based on AI vision
CN117372991A (en) Automatic driving method and system based on multi-view multi-mode fusion
CN112639822A (en) Data processing method and device
CN113298044B (en) Obstacle detection method, system, device and storage medium based on positioning compensation
CN113096427B (en) Information display method and device
Beresnev et al. Automated Driving System based on Roadway and Traffic Conditions Monitoring.
Grigioni et al. Safe road-crossing by autonomous wheelchairs: a novel dataset and its experimental evaluation
KR102523637B1 (en) A traffic data processor, traffic controlling system and method of processing traffic data
CN118042433A (en) Inter-vehicle communication method, device, electronic equipment and readable storage medium
Kenk et al. Driving Perception in Challenging Road Scenarios: An Empirical Study
Reyes-Cocoletzi et al. Obstacle Detection and Trajectory Estimation in Vehicular Displacements based on Computational Vision
Cai et al. Traffic flow diversion control system on scheduling algorithm
Yu Tackling Limited Sensing Capabilities for Autonomous Driving
KR20230083067A (en) Method and system for tracking vehicles based on multi-camera

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050

Applicant after: Shanghai Xijing Technology Co.,Ltd.

Address before: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050

Applicant before: SHANGHAI WESTWELL INFORMATION AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant