CN115384541A - Method and system for driving risk detection

Method and system for driving risk detection

Info

Publication number
CN115384541A
Authority
CN
China
Prior art keywords
driving
risk
driving risk
target vehicle
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210961264.8A
Other languages
Chinese (zh)
Inventor
郭炯光
王海燕
李辉
李廷温
卢声晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210961264.8A priority Critical patent/CN115384541A/en
Publication of CN115384541A publication Critical patent/CN115384541A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
        • B60 VEHICLES IN GENERAL
            • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
                • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
                    • B60W40/02 related to ambient conditions
                    • B60W40/08 related to drivers or passengers
                • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
                    • B60W50/08 Interaction between the driver and the control system
                        • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
                            • B60W2050/146 Display means
                • B60W2756/00 Output or target parameters relating to data
                    • B60W2756/10 Involving external transmission of data to or from the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

According to the method and system for driving risk detection provided herein, after the current vehicle video of the target vehicle is acquired through the vehicle-mounted equipment of the target vehicle, the driving risk detection strategy of the target vehicle is updated, so that different driving risk detection strategies can be adopted for different vehicles or different driving objects, which increases the accuracy of driving risk detection. In addition, based on the updated driving risk detection strategy, the driving risk corresponding to the target vehicle is identified directly in the current vehicle video without acquiring other data, so the application range of driving risk detection can be enlarged and the driving risk can be output directly, which reduces the lag of traffic safety early warning and improves the detection efficiency of driving risk detection.

Description

Method and system for driving risk detection
Technical Field
The present disclosure relates to the field of risk detection, and more particularly, to a method and system for driving risk detection.
Background
In recent years, with the rapid development of technology, the number of vehicles has grown rapidly and driving risks have increased accordingly. To ensure safe driving, driving risks often need to be detected. Existing driving risk detection approaches identify driving risks by having users snap photos of violations, by collecting driving information with a driving recorder, or by employing an Advanced Driving Assistance System (ADAS).
In the research and practice of the prior art, the inventors found that having a user snap photos of violations can itself create driving risks for that user; a driving recorder only records images around the front of the vehicle and cannot directly identify driving risks, so traffic safety reminders lag behind the actual situation; and ADAS requires additional sensing devices to be added to the vehicle to collect various data, which is costly and limits the scenarios in which driving risks can be detected. As a result, the detection efficiency of driving risk detection is low.
Therefore, it is desirable to provide a method and system for driving risk detection with higher detection efficiency.
Disclosure of Invention
The present specification provides a method and system for detecting a driving risk with higher detection efficiency.
In a first aspect, the present description provides a method of driving risk detection, comprising, by an on-board device of a target vehicle: acquiring a current vehicle video of the target vehicle; updating a driving risk detection strategy of the target vehicle; identifying a driving risk corresponding to the target vehicle in the current vehicle video based on the updated driving risk detection strategy; and outputting the driving risk.
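By way of a non-limiting illustration only, the four steps above could be orchestrated on an on-board device roughly as in the following Python sketch; every class, method and field name here is hypothetical and is not taken from this specification.

```python
# Hypothetical sketch of the claimed flow; names are illustrative only.
from dataclasses import dataclass


@dataclass
class DrivingRisk:
    risk_type: str     # e.g. "fatigue_driving" or "running_a_red_light"
    confidence: float  # score that triggered the detection
    frame_index: int   # video frame in which the risk was identified


class OnBoardRiskDetector:
    def __init__(self, camera, policy_client, reporter):
        self.camera = camera                # on-board image acquisition device
        self.policy_client = policy_client  # fetches policy updates from the server
        self.reporter = reporter            # outputs / reports detected risks

    def run_once(self) -> list:
        video = self.camera.capture_current_video()   # acquire current vehicle video
        policy = self.policy_client.update_policy()   # update the detection strategy
        risks = policy.identify_risks(video)          # identify driving risks in the video
        for risk in risks:
            self.reporter.output(risk)                # output the driving risk
        return risks
```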
In some embodiments, the current vehicle video includes at least one of a vehicle interior video, a vehicle exterior video, and a road condition video of the target vehicle.
In some embodiments, the updating the driving risk detection strategy of the target vehicle includes: receiving a current driving risk detection strategy issued by a server; and updating the driving risk detection strategy of the target vehicle based on the current driving risk detection strategy to obtain the updated driving risk detection strategy.
In some embodiments, before receiving the current driving risk detection policy issued by the server, the method further includes: and sending safety strategy configuration information to the server so that the server can generate the current driving risk detection strategy based on the safety strategy configuration information.
In some embodiments, before sending the security policy configuration information to the server, the method further includes: acquiring historical driving information corresponding to the target vehicle; and generating the safety strategy configuration information based on the historical driving information.
In some embodiments, before receiving the current driving risk detection policy issued by the server, the method further includes: and sending a historical vehicle video of a preset time period and a target driving risk corresponding to the historical vehicle video to the server, so that the server extracts a target driving feature corresponding to the target driving risk from the historical vehicle video, and generating the current driving risk detection strategy based on the target driving feature and the target driving risk.
In some embodiments, the driving risk includes at least one of a vehicle risk and a driver risk, the driver risk including at least one of a risk associated with a person inside the target vehicle and a risk associated with another person outside the target vehicle, and the driver risk including at least one of a driving state anomaly and an illegal driving behavior.
In some embodiments, the identifying, in the current vehicle video, the driving risk corresponding to the target vehicle based on the updated driving risk detection policy includes: determining a driving risk detection mode of the target vehicle; and identifying the driving risk corresponding to the target vehicle in the current vehicle video based on the driving risk detection mode and the updated driving risk detection strategy.
In some embodiments, the driving risk detection manner includes at least one of local detection and server-side detection.
In some embodiments, the determining a driving risk detection manner of the target vehicle includes: acquiring detection configuration information for the target vehicle; and determining a driving risk detection mode of the target vehicle based on the detection configuration information.
In some embodiments, the determining a driving risk detection manner of the target vehicle includes: acquiring vehicle attribute information of the target vehicle; identifying a driving scene of the target vehicle based on the vehicle attribute information and the current vehicle video; and determining a driving risk detection mode of the target vehicle based on the driving scene.
In some embodiments, the identifying, in the current vehicle video, the driving risk corresponding to the target vehicle based on the driving risk detection manner and the updated driving risk detection policy includes: determining that the driving risk detection mode is local detection, and extracting current driving characteristics from the current vehicle video; and determining the driving risk corresponding to the target vehicle based on the updated driving risk detection strategy and the current driving characteristics.
In some embodiments, the extracting the current driving feature from the current vehicle video includes: framing the current vehicle video to obtain a video frame set; performing feature extraction on each video frame in the video frame set to obtain video features corresponding to each video frame; and identifying the current driving feature in the video features.
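As a minimal sketch of the framing and per-frame feature extraction described above, assuming OpenCV is available and using a colour histogram merely as a stand-in for whatever feature extractor an implementation would actually use:

```python
# Split a vehicle video into frames and compute a simple per-frame feature vector.
import cv2
import numpy as np


def extract_frame_features(video_path: str, frame_step: int = 10) -> list:
    """Return one feature vector per sampled frame of the video."""
    cap = cv2.VideoCapture(video_path)
    features = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:  # framing: keep every Nth frame
            hist = cv2.calcHist([frame], [0, 1, 2], None,
                                [8, 8, 8], [0, 256, 0, 256, 0, 256])
            features.append(cv2.normalize(hist, hist).flatten())
        index += 1
    cap.release()
    return features
```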
In some embodiments, the determining a driving risk corresponding to the target vehicle based on the updated driving risk detection strategy and the current driving characteristics includes: comparing a preset driving characteristic corresponding to the updated driving risk detection strategy with the current driving characteristic, wherein the preset driving characteristic is a driving characteristic corresponding to a preset driving risk; and determining the driving risk corresponding to the target vehicle based on the comparison result.
In some embodiments, the comparing the current driving characteristic with a preset driving characteristic corresponding to the updated driving risk detection policy includes: calculating the feature similarity between the preset driving feature corresponding to the updated driving risk detection strategy and the current driving feature; and determining the driving risk corresponding to the target vehicle based on the comparison result, wherein the determining comprises the following steps: and when the feature similarity exceeds a preset similarity threshold, taking a preset driving risk corresponding to the preset driving feature as a driving risk corresponding to the target vehicle.
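A minimal sketch of this comparison step, assuming the preset driving features are stored as vectors keyed by their preset driving risk; the use of cosine similarity and the 0.85 threshold are assumptions, not values given in this specification:

```python
# Compare the current driving feature against each preset driving feature and
# return the preset risks whose feature similarity exceeds the threshold.
import numpy as np


def match_risks(current_feature: np.ndarray,
                preset_features: dict,
                threshold: float = 0.85) -> list:
    detected = []
    for risk_name, preset in preset_features.items():
        denom = np.linalg.norm(current_feature) * np.linalg.norm(preset)
        similarity = float(np.dot(current_feature, preset) / denom) if denom else 0.0
        if similarity > threshold:
            detected.append(risk_name)
    return detected
```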
In some embodiments, the determining, based on the comparison result, a driving risk corresponding to the target vehicle includes: and when the comparison result is abnormal, determining the driving risk corresponding to the target vehicle.
In some embodiments, the identifying, in the current vehicle video, the driving risk corresponding to the target vehicle based on the driving risk detection manner and the updated driving risk detection policy includes: determining that the driving risk detection mode is server-side detection, and sending the current vehicle video to a server so that the server identifies the driving risk corresponding to the target vehicle in the current vehicle video based on a current driving risk detection strategy and reports the risk based on the driving risk; and receiving the driving risk corresponding to the target vehicle returned by the server.
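For the server-side detection branch, a rough sketch of the upload-and-receive exchange is given below; the endpoint URL, request fields and response schema are all hypothetical.

```python
# Upload the current vehicle video to the server and receive the detected risks.
import requests


def detect_on_server(video_path: str, vehicle_id: str,
                     server_url: str = "https://example.com/api/risk/detect") -> list:
    with open(video_path, "rb") as f:
        response = requests.post(
            server_url,
            data={"vehicle_id": vehicle_id},
            files={"video": f},
            timeout=60,
        )
    response.raise_for_status()
    return response.json().get("driving_risks", [])
```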
In some embodiments, said outputting said driving risk comprises: visually displaying the driving risk; and reporting the driving risk.
In some embodiments, the visual presentation comprises at least one of a voice broadcast, an audible and visual display, a vibratory alert, or a display of the driving risk.
In some embodiments, the risk reporting on the driving risk includes: determining a reporting address corresponding to the driving risk based on the risk type of the driving risk; and carrying out risk reporting on the driving risk and the target video frame corresponding to the driving risk based on the reporting address.
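A minimal sketch of this reporting step, in which the reporting address is looked up by risk type and the target video frame is attached; the address table, endpoint URLs and payload fields are assumptions for illustration only.

```python
# Report a detected driving risk, together with its target video frame, to an
# address selected according to the risk type.
import requests

REPORT_ADDRESSES = {
    "illegal_driving_behavior": "https://example.com/report/traffic",
    "driving_state_anomaly": "https://example.com/report/safety",
}


def report_risk(risk_type: str, frame_jpeg: bytes) -> None:
    address = REPORT_ADDRESSES.get(risk_type, "https://example.com/report/default")
    requests.post(address,
                  data={"risk_type": risk_type},
                  files={"frame": ("frame.jpg", frame_jpeg, "image/jpeg")},
                  timeout=30)
```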
In some embodiments, further comprising: sending a driving information inquiry request to a server so that the server can screen out target historical driving information based on the driving information inquiry request; and receiving the target historical driving information returned by the server.
In a second aspect, the present specification also provides a driving risk detection system comprising: at least one storage medium storing at least one instruction set for performing driving risk detection; and at least one processor communicatively coupled to the at least one storage medium, wherein when the driving risk detection system is operating, the at least one processor reads the at least one instruction set and performs the method of driving risk detection according to the first aspect of the specification, according to an instruction of the at least one instruction set.
According to the technical scheme, the method and system for driving risk detection provided by this specification update the driving risk detection strategy of the target vehicle after the current vehicle video of the target vehicle is acquired through the vehicle-mounted equipment of the target vehicle, identify the driving risk corresponding to the target vehicle in the current vehicle video based on the updated driving risk detection strategy, and output the driving risk. Because this scheme can identify the driving risk directly in the vehicle video without collecting other data, the application range of driving risk detection can be enlarged, and the driving risk can be output directly so that traffic safety reminders are given in real time. In addition, the driving risk detection strategy of the target vehicle can be updated, so that different driving risk detection strategies are adopted for different vehicles or different driving objects, which increases the accuracy of driving risk detection. Therefore, the detection efficiency of driving risk detection can be improved.
Other functions of the method and system for driving risk detection provided by this specification will be set forth in part in the description that follows. The remaining aspects will be readily apparent to those of ordinary skill in the art from the following description and examples. The inventive aspects of the method and system for driving risk detection provided herein may be fully explained by the practice or use of the methods, apparatus and combinations described in the detailed examples below.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present specification, the drawings required to be used in the description of the embodiments will be briefly described below, and it is apparent that the drawings in the description below are only some embodiments of the present specification, and it is obvious for those skilled in the art that other drawings may be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic view illustrating an application scenario of a driving risk detection system provided in an embodiment of the present specification;
FIG. 2 illustrates a hardware block diagram of a computing device provided in accordance with an embodiment of the present description;
FIG. 3 illustrates a flow chart of a method of driving risk detection provided in accordance with an embodiment of the present description; and
fig. 4 is a schematic diagram illustrating an intelligent service scenario for driving safety provided in accordance with an embodiment of the present disclosure.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various localized modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present description. Thus, the present description is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," and/or "including," when used in this specification, are intended to specify the presence of stated integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features of the present specification, as well as the operation and function of the elements of the structure related thereto, and the combination of parts and economies of manufacture, may be particularly improved upon in view of the following description. Reference is made to the accompanying drawings, all of which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the specification. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this specification illustrate operations implemented by the system according to some embodiments in this specification. It should be clearly understood that the operations of the flow diagrams may be performed out of order. Rather, the operations may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
For convenience of description, terms that appear in the following description are explained as follows:
Violation: generally refers to a traffic violation, i.e., behavior by a motor vehicle driver, a non-motor-vehicle driver, or a pedestrian that violates road safety regulations or traffic management rules, or that affects traffic conditions. In addition, a violation may also refer to a breach of conventional regulatory practice.
Driving recorder: an automobile driving recorder, commonly known as an automobile "black box", is a digital electronic recorder that records and stores the speed, time, mileage and other running-state information of the automobile and outputs the data via an interface.
Snapshot: capturing an image of a target scene on the spot, at the moment it occurs. For example, the scene may be a news event, a social event, a human expression, a highlight in a football match, and so on. In an intelligent safety scenario, snapshots can be used to capture violations by road traffic participants. When a violation is captured, the consent of the photographed party may be obtained; in certain specific scenarios, such consent is not required.
ADAS: an Advanced Driving Assistance System senses the surrounding environment at all times while the automobile is driving, using various sensors mounted on the vehicle (millimeter-wave radar, lidar, monocular/binocular cameras and satellite navigation). It collects data; identifies, detects and tracks static and dynamic objects; and performs systematic computation and analysis in combination with navigation map data, enabling the driver to perceive possible dangers in advance and effectively increasing the comfort and safety of driving.
Vehicle-mounted equipment: a device integrated on the target vehicle that can perform driving risk detection in the target vehicle. The vehicle-mounted device may be of various types and may include, for example, a vehicle-mounted image acquisition device, a vehicle-mounted computer, or any device integrated with the target vehicle for data acquisition, data computation, or driving risk detection. The vehicle-mounted device may be a device that comes with the target vehicle when it leaves the factory, or may be a device installed after the target vehicle leaves the factory.
Fig. 1 is a schematic diagram illustrating an application scenario of a system 001 for driving risk detection according to an embodiment of the present disclosure. The system 001 for driving risk detection (hereinafter, referred to as system 001) may be applied to driving risk detection in any scene, for example, driving risk detection in an intelligent safe driving scene, driving risk detection in an unmanned driving scene, driving risk detection in an assisted driving scene, and the like, and as shown in fig. 1, the system 001 may include a target user 100 inside or outside a target vehicle, a client 200, a server 300, and a network 400.
The target user 100 may be a user who triggers driving risk detection on the target vehicle, and the target user 100 may perform driving risk detection operation on the client 200.
The client 200 may be a device that performs travel risk detection in response to a travel risk detection operation of the target user 100. In some embodiments, the method of driving risk detection may be performed on the client 200. At this time, the client 200 may store data or instructions for performing the method of driving risk detection described in the present specification, and may execute or be used to execute the data or instructions. In some embodiments, the client 200 may include a hardware device having a data information processing function and a program necessary for driving the hardware device to operate. As shown in fig. 1, the client 200 may be communicatively coupled to a server 300. In some embodiments, the server 300 may be communicatively coupled to a plurality of clients 200. In some embodiments, the client 200 may interact with the server 300 over the network 400 to receive or send messages or the like, such as receiving or sending current vehicle video or detected driving risks. In some embodiments, the client 200 may include a mobile device, a tablet, a laptop, a built-in device of a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, a navigation device, and the like, or any combination thereof. In some embodiments, the virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device or the augmented reality device may include *** glass, a head mounted display, a VR, and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the client 200 may include an image capture device for capturing video or image information of the interior, exterior, or road on which the target vehicle is traveling, thereby obtaining the current vehicle video. In some embodiments, the image capture device may be a two-dimensional image capture device (such as an RGB camera), and a depth image capture device (such as a 3D structured light camera, a laser detector, etc.). In some embodiments, the client 200 may be a device with location technology for locating the location of the client 200.
In some embodiments, the client 200 may have one or more applications (APPs) installed. The APP can provide the target user 100 with the ability to interact with the outside world through the network 400 and a user interface. The APP includes, but is not limited to, a web browser APP, a search APP, a chat APP, a shopping APP, a video APP, a financial APP, an instant messaging tool, a mailbox client, social platform software, and the like. In some embodiments, a target APP may be installed on the client 200. The target APP can acquire video or image information of the interior, the exterior, or the road on which the target vehicle travels for the client 200, so as to obtain the current vehicle video. In some embodiments, the target user 100 may also trigger a driving risk detection request through the target APP. The target APP may perform the method of driving risk detection described herein in response to the driving risk detection request. The method of driving risk detection will be described in detail later.
The server 300 may be a server that provides various services, such as a background server that provides support for current vehicle video captured on the client 200. In some embodiments, the method of driving risk detection may be performed on the server 300. At this time, the server 300 may store data or instructions to perform the method of driving risk detection described herein, and may execute or be used to execute the data or instructions. In some embodiments, the server 300 may include a hardware device having a data information processing function and a program necessary for driving the hardware device to operate. The server 300 may be communicatively coupled to a plurality of clients 200 and receive data transmitted by the clients 200.
Network 400 is the medium used to provide communication links between clients 200 and server 300. The network 400 may facilitate the exchange of information or data. As shown in fig. 1, the client 200 and the server 300 may be connected to a network 400 and transmit information or data to each other through the network 400. In some embodiments, the network 400 may be any type of wired or wireless network, as well as combinations thereof. For example, network 400 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), the Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like. In some embodiments, network 400 may include one or more network access points. For example, network 400 may include a wired or wireless network access point, such as a base station or an internet exchange point, through which one or more components of client 200 and server 300 may connect to network 400 to exchange data or information.
It should be understood that the number of clients 200, servers 300, and networks 400 in fig. 1 is merely illustrative. There may be any number of clients 200, servers 300, and networks 400, as desired for an implementation.
The method of driving risk detection may be executed entirely on the client 200, entirely on the server 300, or partially on the client 200 and partially on the server 300.
FIG. 2 illustrates a hardware block diagram of a computing device 600 provided in accordance with an embodiment of the present description. Computing device 600 may perform the method of driving risk detection described herein. The method of driving risk detection is described elsewhere in this specification. When the method of driving risk detection is performed on a client 200, the computing device 600 may be the client 200. When the method of driving risk detection is performed on server 300, computing device 600 may be server 300. When the method of driving risk detection is performed partly on client 200 and partly on server 300, computing device 600 may be both client 200 and server 300.
As shown in fig. 2, computing device 600 may include at least one storage medium 630 and at least one processor 620. In some embodiments, computing device 600 may also include a communication port 650 and an internal communication bus 610. Computing device 600 may also include I/O component 660.
Internal communication bus 610 may connect various system components including storage medium 630, processor 620 and communication port 650.
I/O components 660 support input/output between computing device 600 and other components.
Communication port 650 provides for data communication between computing device 600 and the outside world, for example, communication port 650 may provide for data communication between computing device 600 and network 400. The communication port 650 may be a wired communication port or a wireless communication port.
The storage medium 630 may include a data storage device. The data storage device may be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may include one or more of a magnetic disk 632, a read-only memory medium (ROM) 634, or a random access memory medium (RAM) 636. The storage medium 630 also includes at least one set of instructions stored in the data storage device. The instructions are computer program code that may include programs, routines, objects, components, data structures, procedures, modules, etc. that perform the method of driving risk detection provided herein.
The at least one processor 620 may be communicatively coupled to at least one storage medium 630 and a communication port 650 via an internal communication bus 610. The at least one processor 620 is configured to execute the at least one instruction set. When the computing device 600 is running, the at least one processor 620 reads the at least one instruction set and, as directed by the at least one instruction set, performs the method of driving risk detection provided herein. Processor 620 may perform all the steps involved in the method of driving risk detection. The processor 620 may be in the form of one or more processors, and in some embodiments, the processor 620 may include one or more hardware processors, such as microcontrollers, microprocessors, reduced instruction set computers (RISC), application-specific integrated circuits (ASICs), application-specific instruction-set processors (ASIPs), central processing units (CPUs), graphics processing units (GPUs), physics processing units (PPUs), microcontroller units, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), advanced RISC machines (ARMs), programmable logic devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustrative purposes only, only one processor 620 is depicted in the computing device 600 in this description. It should be noted, however, that the computing device 600 may also include multiple processors, and thus, the operations and/or method steps disclosed in this specification may be performed by one processor, as described herein, or by a combination of multiple processors. For example, if in this description the processor 620 of the computing device 600 performs steps A and B, it should be understood that steps A and B may also be performed jointly or separately by two different processors 620 (e.g., a first processor performing step A and a second processor performing step B, or the first and second processors jointly performing steps A and B).
Fig. 3 shows a flowchart of a method P100 of driving risk detection provided according to an embodiment of the present description. As before, the computing device 600 may perform the method P100 of driving risk detection of the present description. Specifically, processor 620 may read a set of instructions stored in its local storage medium and then execute the method P100 of driving risk detection of the present description, as specified by the set of instructions. As shown in fig. 3, method P100 may include, performed by an on-board device of the target vehicle:
s110: and acquiring a current vehicle video of the target vehicle.
The target vehicle may be a vehicle that triggers driving risk detection, and the target vehicle may include at least one vehicle occupant therein, where the vehicle occupant may be a driver of the target vehicle or a passenger of the target vehicle.
The current vehicle video can be a vehicle video collected by an image collecting device integrated in the target vehicle, or a video inside or outside the target vehicle collected by an image collecting device outside the target vehicle. The type of the current vehicle video may be various, and may include at least one of a vehicle interior video, a vehicle exterior video, and a road condition video of the target vehicle, for example. The vehicle interior video may be video information of the interior of the target vehicle, and the video information may include video information of an occupant in the target vehicle and video information of the interior of the target vehicle. The vehicle exterior video may be video information of other vehicles, pedestrians, environments, and the like within a preset range around the target vehicle. The road information may be road surface information of a road on which the target vehicle travels, and the type of the road information may be various, and may include at least one of a road surface type, a road surface traffic sign, and road surface environment information, for example. The road surface environment information here may include road surface obstacle information, road surface damage information, road surface coverage information such as surface water/snow, and the like. The current vehicle video can be a real-time video, namely a continuous stable video stream, and can also be a non-real-time video. When the current vehicle video is a non-real-time video, the current vehicle video can also be set as video information in one or more specific periods or a video stream with a specific delay.
The method for acquiring the current vehicle video of the target vehicle may be various, and specifically may be as follows:
For example, the processor 620 may directly acquire video information of the inside, the outside, and/or the road of the target vehicle through an on-board device of the target vehicle, so as to obtain the current vehicle video of the target vehicle. Alternatively, the processor 620 may receive video information of the inside, the outside, and/or the road of the target vehicle acquired by an image acquisition device integrated in the target vehicle, or receive such video information collected, through an image acquisition device, by an occupant in the target vehicle or by a person outside the target vehicle. The processor 620 may also receive road video information sent by a third-party traffic platform or video platform and identify, in that road video information, the video information of the inside, the outside, and/or the road of the target vehicle, so as to obtain the current vehicle video of the target vehicle, or may combine the video information with other detected information, such as radar information, to obtain the current vehicle video of the target vehicle.
The image acquisition device is a device capable of acquiring video information, and the type of the image acquisition device may be various, for example, the image acquisition device may include various types of cameras, thermal imaging devices, or other devices capable of acquiring images or videos.
In some embodiments, the manner of triggering the processor 620 to obtain the current vehicle video of the target vehicle may be various, for example, the target user 100 performs a driving risk detection operation so as to trigger the processor 620 to obtain the current vehicle video of the target vehicle, or the processor 620 obtains the current vehicle video of the target vehicle when detecting that the image capture device captures the vehicle video, or the processor 620 obtains the current vehicle video of the target vehicle when detecting that the target vehicle is in a preset state, and so on.
The preset state may be a state preset in the target vehicle, and the preset state may be of various types; for example, it may include at least one of a start-up state, an early-warning state, a specific person entering the target vehicle, and the like. The start-up state may be understood as the state corresponding to the target vehicle being started, or the state corresponding to one or more vehicle-mounted devices in the target vehicle being started. The early-warning state may be understood as a state in which the target vehicle travels under a specific condition and a warning is required. The specific condition may be of various types and may include, for example, at least one of driving at a specific speed, driving on a specific road or under specific road conditions, driving with a specific person, specific cargo or specific equipment on board, or driving within a specific region. The specific speed may be set to a speed that exceeds the road speed limit or to another speed. A specific road may be set as a road requiring early warning, such as a one-way road, a congested road, a road with a low speed limit, a road prone to natural disasters, or a road prone to traffic accidents. Specific road conditions may be set as road conditions requiring early warning, such as a waterlogged section, a crosswind section, a sleet-covered road, a road under construction, or a road with a damaged surface. A specific person may be set as a person requiring early warning, such as a patient, an injured person, an elderly person, a criminal, a woman in labor, or a child. Specific cargo may be set as cargo requiring early warning, such as banknotes, precision instruments, high-value goods or fragile goods. Specific equipment may be equipment requiring early warning; the equipment may be vehicle-mounted or non-vehicle-mounted, such as testing equipment, scientific research equipment, or safety equipment.
S120: updating the driving risk detection strategy of the target vehicle.
The driving risk detection strategy may be strategy information for performing risk detection on driving risks, and the driving risk detection strategy may include one or more detection rules for driving risk detection. The detection rule can also be understood as a safety strategy, which serves as a configuration factor and characterizes a traffic violation scenario or driving risk scenario. Basic elements are defined in the characteristics, and the basic elements can comprise necessary elements of road traffic such as wheels, vehicle heads, human figures, human face side figures, human face front faces, roads, traffic marking lines and the like. These elements are combined to form an violation scenario or a driving risk scenario. Therefore, the driving risk detection strategy may include one or more preset driving characteristics corresponding to the preset driving risk. The current driving characteristics are extracted from the current vehicle video, and then the current driving characteristics are compared with the preset driving characteristics, so that the driving risk detection can be realized through the driving risk detection strategy.
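One possible, purely illustrative representation of such a policy is sketched below: a versioned set of rules, each pairing a preset driving risk with the scene elements that make up its scenario and the preset driving feature it is compared against. The field names are assumptions.

```python
# Hypothetical data structure for a driving risk detection policy.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class RiskRule:
    risk_name: str              # e.g. "running_a_red_light"
    scene_elements: list        # e.g. ["wheel", "vehicle_head", "traffic_marking"]
    preset_feature: np.ndarray  # feature vector of the preset driving risk


@dataclass
class DrivingRiskDetectionPolicy:
    version: str
    rules: list = field(default_factory=list)
```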
The mode of updating the driving risk detection strategy of the target vehicle may be various, and specifically may be as follows:
for example, processor 620 may receive the current driving risk detection policy issued by server 300, and update the driving risk detection policy of the target vehicle based on the current driving risk detection policy to obtain an updated driving risk detection policy, which may specifically be as follows:
s121: and receiving a current driving risk detection strategy issued by the server 300.
The manner of receiving the current driving risk detection policy issued by the server 300 may be various, and specifically may be as follows:
for example, when the current vehicle video is acquired, the processor 620 triggers generation of a driving risk detection policy update request, sends the driving risk detection policy update request to the server 300, and then receives the current driving risk detection policy returned by the server 300.
The current driving risk detection policy returned by server 300 may be a driving risk detection policy set by target user 100 or a manager in a unified manner, may be a driving risk detection policy set in advance for each driving risk detection, or may be a driving risk detection policy set in advance for a specific target vehicle, an occupant in the target vehicle, or a specific driving risk. Therefore, before receiving the current driving risk detection policy issued by server 300, the driving risk detection policy may also be configured, so that server 300 generates the current driving risk detection policy. For example, the processor 620 may send security policy configuration information to the server 300 so that the server 300 generates a current driving risk detection policy based on the security policy configuration information, or may also send historical vehicle videos of a preset time period and a target driving risk corresponding to the historical vehicle videos to the server so that the server 300 extracts a target driving feature corresponding to the target driving risk from the historical vehicle videos and generates the current driving risk detection policy based on the target driving feature and the target driving risk, which may be specifically as follows:
(1) Sending the security policy configuration information to the server.
The safety policy configuration information may be configuration information of a driving risk detection policy set for the target vehicle by the target user 100 or an administrator of the target vehicle, and the configuration information may include a characteristic value corresponding to a driving risk preset by the target user 100 or the administrator of the target vehicle, and may further include selection information of the driving risk policy selected by the target user 100 or the administrator of the target vehicle from a preset safety policy set. Therefore, before transmitting the security policy configuration information to the server 300, the security policy configuration information may be generated in various ways, for example, the processor 620 may acquire historical driving information corresponding to the target vehicle and generate the security policy configuration information based on the historical driving information, or may display a policy list of candidate driving risk detection policies, generate the security policy configuration information based on the candidate driving risk detection policy corresponding to the selection operation in response to the selection operation for the policy list, or may receive the security policy configuration information transmitted or uploaded by the target user 100 or the target vehicle administrator through the target terminal.
The historical driving information may be driving information of the target vehicle before the current time, and the historical driving information may include at least one of historical vehicle information, historical driving information of an occupant in the target vehicle, and historical driving risks corresponding to the target vehicle. The manner of generating the safety policy configuration information may be various based on the historical driving information, for example, the processor 620 may extract historical driving characteristics from the historical driving information and determine the safety policy configuration information based on the historical driving characteristics. For example, taking the historical driving characteristics as an example that the target vehicle is mostly driven by the elderly people in the historical stage, the safety policy configuration information may be the pupil dilation of the driving object as the driving risk, the characteristic value corresponding to the pupil dilation is identified in the historical driving characteristics, the driving risk and the characteristic value corresponding to the pupil dilation may be used as the safety policy configuration information, the safety policy configuration information is sent to the server 300, and the server 300 may generate the driving risk detection policy for detecting the pupil of the driving object based on the safety policy configuration information, so that the driving risk detection policy may be beneficial to further ensuring the safety factor of driving by the elderly people. For another example, taking the historical driving characteristics as the frequency of overspeed of the target vehicle in the historical stage as an example, the speed limit with higher overspeed frequency can be identified, then the speed slightly lower than the speed limit is set as the overspeed detection speed, and the driving risk corresponding to the overspeed and the overspeed detection speed are sent to the server 300 as the safety policy configuration information, so that the server 300 generates the driving risk detection policy corresponding to the overspeed risk based on the safety policy configuration information, so that when the target vehicle exceeds the overspeed detection speed, the driving risk is output, and the driving risk of overspeed of the target vehicle can be effectively reduced.
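Following the overspeed example above, safety policy configuration information could be derived from historical driving data roughly as sketched below; the 10% overspeed ratio, the 5 km/h margin and the field names are assumptions, not values from this specification.

```python
# Derive overspeed-related safety policy configuration from historical speeds.
def build_overspeed_config(historical_speeds_kmh: list,
                           speed_limit_kmh: float,
                           margin_kmh: float = 5.0) -> dict:
    overspeed_count = sum(1 for s in historical_speeds_kmh if s > speed_limit_kmh)
    overspeed_ratio = overspeed_count / max(len(historical_speeds_kmh), 1)
    # If overspeeding was frequent, detect at a speed slightly below the limit.
    detection_speed = speed_limit_kmh - margin_kmh if overspeed_ratio > 0.1 else speed_limit_kmh
    return {"risk": "overspeed", "detection_speed_kmh": detection_speed}
```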
The policy list of the candidate driving risk detection policies may be a list corresponding to a preset driving risk detection policy set, and through the policy list, the target user 100 or an administrator of the target vehicle may select at least one candidate driving risk detection policy from the preset driving risk detection policy set as the current driving risk detection policy. The driving risks corresponding to the candidate driving risk policies may be various. For example, from the perspective of the vehicle, the driving risk may include illegal driving such as reversing, driving against the direction of traffic, running a red light, driving on lane markings, and the like; from the perspective of a vehicle occupant (a driver or a passenger), the driving risk may include making or answering a mobile phone call, not fastening the seat belt, turning or shaking the head for more than n seconds (n may be set to an arbitrary value, for example, 60 seconds or another duration), smoking, and the like.
(2) Sending the historical vehicle video of a preset time period and the target driving risk corresponding to the historical vehicle video to the server.
The historical vehicle video may be a vehicle video acquired by the target vehicle before the current moment, and the historical vehicle video may also include at least one of a historical vehicle interior video, a historical vehicle exterior video, and a historical road condition video of the target vehicle.
The target driving risk is a driving risk contained in the historical vehicle video, and the driving risk may include at least one of a vehicle risk and a driver risk. The vehicle risk may include at least one of a risk of the target vehicle and a risk of a vehicle other than the target vehicle in the current vehicle video. The driver risk includes at least one of a risk associated with a person inside the target vehicle and a risk associated with another person outside the target vehicle, and may include at least one of an abnormal driving state and an illegal driving behavior. An abnormal driving state refers to an abnormal state of a vehicle occupant in the target vehicle and may include, for example, fatigue driving, sudden illness, the driver taking their eyes off the road ahead, or the driver leaving the seat. Illegal driving behavior is understood as behavior of persons inside or outside the target vehicle that violates traffic regulations, which may include, for example, the driver making a phone call while driving, not wearing a seat belt, smoking, throwing objects from the vehicle, or running a red light (as a pedestrian or a driver), and the like. It can thus be seen that the driving risks here may include risks of the vehicle itself, such as an unsafe following distance, abnormal vehicle temperature, an abnormal driving trajectory, or various vehicle faults; they may also include risks associated with occupants or pedestrians, as described above.
The method for sending the historical vehicle video of the preset time period and the target vehicle driving risk corresponding to the historical vehicle video to the server may be various, and specifically may be as follows:
for example, the processor 620 may receive a historical vehicle video of a preset time period uploaded by the target user 100 or an administrator of the target vehicle and a target driving risk corresponding to the historical vehicle video, transmit the historical vehicle video and the target driving risk to the server 300, so that the server 300 extracts a target driving feature corresponding to the target driving risk from the historical vehicle video, and generates a current driving risk detection policy based on the target driving feature and the target driving risk, or may further receive a driving risk identification request uploaded by the target user 100 or the administrator of the target vehicle, the driving risk identification request including time information corresponding to the driving risk, screen a video clip corresponding to the time information from an original vehicle video of the target vehicle, so as to obtain a historical vehicle video of the preset time period, and identify the target driving risk from the historical vehicle video, transmit the historical vehicle video and the target driving risk to the server 300, so that the server 300 extracts a target driving feature corresponding to the target driving risk from the historical vehicle video, and generates a current driving risk detection policy based on the target driving feature and the target driving risk.
After the historical vehicle videos of the preset time period and the target driving risks corresponding to the historical vehicle videos are sent to the server 300, the server 300 mainly extracts the features of the historical vehicle videos and performs model training based on the extracted key features and the target driving risks, so that a current driving risk detection strategy can be generated. Taking the target risk as fatigue driving as an example, the server 300 may extract a target driving feature (key feature value) corresponding to fatigue driving from the historical vehicle video, use the target driving feature as a comparison feature value, and use the target driving feature and the driving risk corresponding to fatigue driving as a current driving risk detection policy. In the running risk detection process, when the characteristic value of one or more frames of video in the current vehicle video is detected to be similar to the target running characteristic in the current running risk detection strategy or meets a preset matching condition, the running risk of fatigue driving existing in the driver in the target vehicle at present can be determined.
It should be noted that, in this scheme, the driving risk detection strategy may be configured in multiple ways, so that the driving risk detection strategy may be more accurate and flexible, and the accuracy and detection efficiency of driving risk detection may be improved.
S122: updating the driving risk detection strategy of the target vehicle based on the current driving risk detection strategy to obtain the updated driving risk detection strategy.
For example, the processor 620 may compare the current driving risk detection policy with the driving risk detection policy of the target vehicle, and when the current driving risk detection policy is different from the driving risk detection policy, replace the driving risk detection policy with the current driving risk detection policy, thereby obtaining an updated driving risk detection policy; and when the current driving risk detection strategy is the same as the driving risk detection strategy, taking the driving risk detection strategy as an updated driving risk detection strategy.
For example, the processor 620 may obtain a current version number of the current driving risk detection policy, and compare the current version number with a local version number of the driving risk detection policy of the target vehicle, or may also compare driving characteristics corresponding to the current driving risk detection policy with local driving characteristics corresponding to the driving risk detection policy of the target vehicle.
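A minimal sketch of this update step, assuming each policy carries a version field as described above:

```python
# Adopt the newly issued policy only when its version differs from the local one.
def update_policy(local_policy, current_policy):
    if current_policy.version != local_policy.version:
        return current_policy  # replace the local policy with the issued one
    return local_policy        # already up to date; keep the local policy
```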
S130: identifying the driving risk corresponding to the target vehicle in the current vehicle video based on the updated driving risk detection strategy.
Here, the driving risk may be understood as the risk associated with all objects contained in the current vehicle video of the target vehicle, and those objects may include vehicles and persons. Thus, the overall driving risk may include at least one of a vehicle risk and a driving risk of a person. The vehicle risk may include at least one of a risk of the target vehicle and a risk of a vehicle other than the target vehicle in the current vehicle video. The driving risk of a person includes at least one of a driving risk of a person inside the target vehicle and a driving risk of another person outside the target vehicle, and it may include at least one of an abnormal driving state and an illegal driving behavior. The specific driving risks are described above and are not repeated here.
Based on the updated driving risk detection strategy, the mode of identifying the driving risk corresponding to the target vehicle in the current vehicle video may be various, and specifically may be as follows:
For example, the processor 620 may determine the driving risk detection manner of the target vehicle and, based on the driving risk detection manner and the updated driving risk detection strategy, identify the driving risk corresponding to the target vehicle in the current vehicle video, which may specifically be as follows:
S131: determining a running risk detection mode of the target vehicle.
The driving risk detection mode can indicate a detection position for detecting the driving risk of the current vehicle video. The detection position may include a local device or a server, and thus, the driving risk detection manner may include at least one of a local detection and a server detection. The local detection can be understood as performing risk detection on a current vehicle video at a local equipment terminal, and the server-side detection can be understood as transmitting the current vehicle video to a server by the local equipment terminal, performing driving risk detection on the current vehicle video through the server, and then returning the driving risk of a target vehicle to the local equipment by the server.
There may be multiple ways of determining the driving risk detection mode of the target vehicle, which may specifically be as follows:
for example, the processor 620 may acquire detection configuration information of the target vehicle and determine a driving risk detection manner of the target vehicle based on the detection configuration information, or may further acquire vehicle attribute information of the target vehicle, identify a driving scene of the target vehicle based on the vehicle attribute information and the current vehicle video, and determine the driving risk detection manner of the target vehicle based on the driving scene.
In an embodiment, the detection configuration information may be understood as information with which the target user 100 or the administrator of the target vehicle configures the driving risk detection manner, and it may indicate the driving risk detection manner of the target vehicle. For example, the processor 620 may directly receive the detection configuration information uploaded by the target user 100 or the administrator of the target vehicle through the target terminal, or may display a candidate detection condition list corresponding to each driving risk detection manner, screen out, in response to a selection operation on the list, the target detection conditions corresponding to that selection operation, and generate the detection configuration information based on the target detection conditions and the driving risk detection manners corresponding to them.
A detection condition may be understood as a condition required for selecting a driving risk detection manner. Taking local detection as an example, the candidate detection conditions may include that the driver is an elderly person, the target vehicle is in an underground garage, the target vehicle has no network connection, the speed of the target vehicle exceeds a preset speed threshold, the risk level of the passengers of the target vehicle is high, or the local device has strong computing performance, and the like. The target user 100 or the administrator of the target vehicle may select, from the candidate conditions, those under which the locally detected driving risk detection manner should be used. When the target vehicle satisfies a detection condition, the corresponding driving risk detection manner of the target vehicle can be determined. Therefore, the detection configuration information may include, for each driving risk detection manner, at least one detection condition and the driving risk detection manner itself.
After the detection configuration information of the target vehicle is obtained, the driving risk detection manner of the target vehicle may be determined based on it in multiple ways. For example, the processor 620 may obtain the current vehicle information of the target vehicle, identify the current detection conditions corresponding to the target vehicle from that information, match the current detection conditions against the detection configuration information, and take the driving risk detection manner corresponding to the matched detection conditions as the driving risk detection manner of the target vehicle.
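As an illustrative sketch only, the matching of current detection conditions against the detection configuration information could look like the following; the data layout and the condition names are assumptions, not part of the scheme.

```python
def choose_detection_manner(detection_config, current_conditions):
    """detection_config: list of {"conditions": set of condition names,
    "manner": "local" or "server"} entries (assumed layout); the first entry
    whose conditions are all satisfied decides the detection manner."""
    current = set(current_conditions)
    for entry in detection_config:
        if entry["conditions"] <= current:          # subset test
            return entry["manner"]
    return "server"   # assumed default when no configured condition matches

# Example usage (hypothetical condition names):
config = [{"conditions": {"no_network"}, "manner": "local"},
          {"conditions": {"long_trip", "network_ok"}, "manner": "server"}]
choose_detection_manner(config, {"no_network", "elderly_driver"})   # -> "local"
```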
In one embodiment, the vehicle attribute information may be understood as parameter information characterizing the traveling of the target vehicle, and may include, for example, the traveling time, traveling speed, mileage, vehicle model, engine model and on-board equipment information of the vehicle. The driving scene may be understood as a scene characterizing the target vehicle during driving; it may be represented by driving information of the target vehicle, and the types of driving information may be various, for example network information of the target vehicle, the driving position of the target vehicle, the computing performance of the on-board device of the target vehicle, driver and passenger information, and environment information of the target vehicle. Taking the driver and passenger information as an example, the corresponding driving scenes may include an elderly-person driving scene, a teenager driving scene, and, by driving experience, an experienced-driver driving scene and a novice-driver driving scene, and the like. Taking the environment information of the target vehicle as an example, the corresponding driving scenes may include a night driving scene, a bad-weather driving scene, a normal-weather driving scene, a congested-road driving scene, and the like. Based on the vehicle attribute information and the current vehicle video, the driving scene of the target vehicle may be identified in various ways; for example, the processor 620 may extract the driving information from the vehicle attribute information and the current vehicle video and determine the driving scene of the target vehicle based on that driving information.
It should be noted that the driving scene may be a single driving scene or a composite driving scene. The single driving scene may be only one type of driving scene, for example, a night driving scene, or an elderly driving scene, etc. The composite driving scene may be a driving scene including two or more types, for example, a night-old people driving scene, or a night-bad weather-old people driving scene, and the like.
After the driving scene of the target vehicle is identified, the driving risk detection manner of the target vehicle may be determined based on the driving scene in multiple ways. For example, the processor 620 may obtain mapping information between driving scenes and driving risk detection manners and look up the driving risk detection manner corresponding to the driving scene in that mapping information. For a single or composite driving scene that exists in the mapping information, the processor 620 may directly read out the corresponding driving risk detection manner. For a composite driving scene that does not exist in the mapping information, the processor 620 may look up, in the mapping information, the driving risk detection manner of each single driving scene contained in the composite scene. If these are all the same driving risk detection manner, that manner may be used for the target vehicle. If they are not the same, the processor 620 may count the number of occurrences of each driving risk detection manner and use the most frequent one as the driving risk detection manner of the target vehicle, or use the different driving risk detection manners together as the driving risk detection manners of the target vehicle, or obtain a weighting coefficient for each driving scene, weight the counted numbers with these coefficients, and compare the weighted numbers to determine the driving risk detection manner of the target vehicle.
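For illustration only, the fallback vote over the components of an unseen composite scene could be sketched as follows; the scene names, mapping layout, and weights are assumptions of this sketch.

```python
from collections import Counter

def manner_for_scene(scene, mapping, weights=None):
    """scene: tuple of single driving scenes, e.g. ("night", "elderly_driver");
    mapping: {scene_tuple: "local" or "server"} for known single/composite scenes;
    weights: optional per-scene weighting coefficients."""
    key = tuple(scene)
    if key in mapping:                     # the composite scene is known directly
        return mapping[key]
    votes = Counter()
    for single in scene:                   # weighted vote over the single scenes
        manner = mapping[(single,)]
        votes[manner] += (weights or {}).get(single, 1.0)
    return votes.most_common(1)[0][0]
```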
In some embodiments, the processor 620 may further input the driving scenario into a recognition model of the driving risk detection manner, and determine the driving risk detection manner of the target vehicle through the recognition model.
The method and the system can detect the driving risk of the target vehicle through a dual-cache mechanism. The dual-cache mechanism can be understood as setting the driving risk detection manner of the target vehicle to local detection, to server-side detection, or to both detection manners at the same time. The driving risk detection manner can be dynamically adjusted through the dual-cache mechanism, so that driving risk detection can be performed on the target vehicle flexibly and accurately. For example, detection of dangerous behaviors of drivers and passengers that directly affect the target vehicle can be preset at the device end, so that the local device or the on-board terminal of the target vehicle performs driving risk detection and alarming directly through local detection, which greatly improves the safety of the drivers and passengers. As another example, the target vehicle can switch to local detection in time when no network is available, avoiding a failure of server-side detection, or the local device or on-board terminal of the target vehicle can switch to server-side detection in time when its computing capacity is insufficient, which greatly improves detection efficiency. In addition, local detection and server-side detection can be set to run simultaneously during long-distance driving, so that, on the premise of warning the target vehicle in time, the accuracy of driving risk detection can be further improved through the server side.
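A rough sketch of such a dynamic selection is shown below; the decision rules and flag names are one plausible policy assumed for illustration, not the scheme's prescribed logic.

```python
def dual_cache_manners(has_network, local_compute_ok, long_trip=False):
    """Return the set of detection manners to run in the next detection cycle
    under the dual-cache idea."""
    manners = set()
    if local_compute_ok:
        manners.add("local")                   # immediate in-cabin warnings
    if has_network and (long_trip or not local_compute_ok):
        manners.add("server")                  # offload or double-check remotely
    return manners or {"local"}                # never leave the vehicle undetected
```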
S132: identifying the running risk corresponding to the target vehicle in the current vehicle video based on the running risk detection mode and the updated running risk detection strategy.
For example, the processor 620 may determine that the driving risk detection manner is local detection, extract the current driving features from the current vehicle video, and determine the driving risk corresponding to the target vehicle based on the updated driving risk detection strategy and the current driving features; or the processor 620 may determine that the driving risk detection manner is server-side detection, send the current vehicle video to the server 300 so that the server 300 identifies the driving risk corresponding to the target vehicle in the current vehicle video based on the current driving risk detection strategy and performs risk reporting based on that driving risk, and receive the driving risk corresponding to the target vehicle returned by the server. This may specifically be as follows:
(1) Determining that the driving risk detection manner is local detection.
For example, the processor 620 may extract the current driving features from the current vehicle video and determine the driving risk corresponding to the target vehicle based on the updated driving risk detection strategy and the current driving features.
The current driving features can be understood as feature information representing the elements in a driving risk scene, and the elements may include traffic elements such as a wheel, a vehicle front, the side of a person's face, the front of a person's face, a road, or a zebra crossing. The current driving features may be extracted from the current vehicle video in various ways. For example, the processor 620 may split the current vehicle video into frames to obtain a video frame set, perform feature extraction on each video frame in the set to obtain the video feature corresponding to each video frame, and identify the current driving features in the video features; or the processor 620 may split the current vehicle video into frames to obtain a video frame set, directly extract an image feature value from each video frame in the set, and use the image feature values as the current driving features.
For example, the processor 620 may extract the feature value of each pixel from the video features and determine the features of road elements based on these feature values, thereby obtaining the current driving features; for example, infrastructure or person features such as vehicles, buildings, roads and traffic lights are analyzed from the image feature values in the video features, and the analyzed features are used as the current driving features. Alternatively, the processor 620 may input the video features into a driving feature extraction model and identify the current driving features in the video features through that model.
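As an illustration only, the framing and per-frame feature extraction could be sketched with OpenCV as follows; the color histogram here is a stand-in for the richer driving features (wheels, faces, zebra crossings) described above, and the sampling stride is an assumption.

```python
import cv2
import numpy as np

def frame_features(video_path, stride=10):
    """Split the current vehicle video into frames and compute one feature
    vector per sampled frame."""
    cap = cv2.VideoCapture(video_path)
    features, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256]).flatten()
            features.append(hist / max(float(hist.sum()), 1.0))
        index += 1
    cap.release()
    return np.array(features)
```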
After the current driving feature is extracted, the driving risk corresponding to the target vehicle may be determined based on the updated driving risk detection policy and the current driving feature, and the manner of determining the driving risk may be multiple, for example, the processor 620 may compare the preset driving feature corresponding to the updated driving risk detection policy with the current driving feature, and determine the driving risk corresponding to the target vehicle based on the comparison result.
The preset driving characteristics may be driving characteristics corresponding to the preset driving risk. For example, the processor 620 may calculate a feature similarity between the preset driving feature corresponding to the updated driving risk detection policy and the current driving feature, and use the feature similarity as a comparison result, or may calculate a difference between a feature value of the preset driving feature corresponding to the updated driving risk detection policy and a feature value of the current driving feature, and use the difference between the features as a comparison result, or may calculate a feature distance between the preset driving feature corresponding to the updated driving risk detection policy and the current driving feature, and use the feature distance as a comparison result.
After the preset driving features are compared with the current driving features, the driving risk corresponding to the target vehicle can be determined based on the comparison result in multiple ways. For example, taking the comparison result as the feature similarity: when the feature similarity exceeds the preset similarity threshold, the processor 620 can take the preset driving risk corresponding to the preset driving features as the driving risk corresponding to the target vehicle. For instance, if the preset driving risk corresponding to the preset driving features is a pedestrian running a red light, the driving risk corresponding to the current driving features can also be a pedestrian running a red light, and it can then be determined that a pedestrian running a red light has been captured in the current vehicle video of the target vehicle. Alternatively, taking the comparison result as the difference between the features or the feature distance, the processor 620 may determine the driving risk corresponding to the target vehicle when there is an abnormality in the comparison result.
The abnormality in the comparison result may take multiple forms; for example, it may include at least one of the difference between the features being less than or equal to a preset difference threshold and the feature distance being less than or equal to a preset distance threshold. When the comparison result is abnormal, the driving risk corresponding to the target vehicle may be determined in multiple ways; for example, the preset driving risk corresponding to the matching preset driving features is used as the driving risk corresponding to the target vehicle.
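For illustration only, the three comparison results and their thresholds could be evaluated as in the sketch below; the threshold values and the risk label are assumptions of this sketch.

```python
import numpy as np

def evaluate_comparison(current_feature, preset_feature, preset_risk,
                        sim_thresh=0.9, diff_thresh=0.1, dist_thresh=0.5):
    """Compute the similarity, per-element difference, and feature distance, and
    return the preset driving risk when any of them indicates an abnormality."""
    cur = np.asarray(current_feature, dtype=float)
    ref = np.asarray(preset_feature, dtype=float)
    similarity = float(cur @ ref / (np.linalg.norm(cur) * np.linalg.norm(ref) + 1e-9))
    difference = float(np.abs(cur - ref).mean())
    distance = float(np.linalg.norm(cur - ref))
    if similarity >= sim_thresh or difference <= diff_thresh or distance <= dist_thresh:
        return preset_risk            # e.g. "pedestrian_running_red_light"
    return None                       # no driving risk detected for this feature
```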
It should be noted that the driving risk detection process may consist of identifying the current driving features in the video features and comparing them with the preset driving features, so as to identify the driving risk. In essence, each basic element (an element participating in traffic) is extracted from the current video frame, and after the basic elements are extracted, a judgment is made according to their feature values; for example, if a wheel is located on a zebra crossing, or the side of a driver's face persists for more than n seconds, it may be judged that a violation exists in the current video frame, and the type of the violation may be judged, thereby obtaining the driving risk. The judgment is mainly carried out by comparing the feature values of the basic elements with the preset feature values of the elements in the violation behaviors.
In some embodiments, the processor 620 may further screen out a target video frame corresponding to the driving risk in the current vehicle video, and send the driving risk and the target video frame corresponding to the driving risk as the current driving information to the server 300, so that the server 300 updates the driving information set. The travel information set is used to implement a historical travel information query function.
With local detection, the driving risk detection is carried out directly on the local device (terminal or on-board device) and early warning is issued directly based on the detected driving risk. This detection manner does not depend on network conditions: detection and early warning of safety behaviors can be carried out directly at the device end, and the driving state of the drivers and passengers of the target vehicle or of other vehicles can be captured and prompted in real time, so that the driving safety of the drivers and passengers can be effectively guaranteed.
(2) Determining that the driving risk detection manner is server-side detection.
For example, the processor 620 may send the current vehicle video to the server 300, so that the server 300 identifies the driving risk corresponding to the target vehicle in the current vehicle video based on the current driving risk detection strategy and performs risk reporting based on that driving risk, and then receive the driving risk corresponding to the target vehicle returned by the server.
The way the server 300 identifies the driving risk corresponding to the target vehicle in the current vehicle video based on the current driving risk detection strategy is the same as the risk identification described for local detection; the only difference is the execution subject, that is, the location at which the driving risk detection is performed. The details can therefore be found above and are not repeated here.
After identifying the driving risk corresponding to the target vehicle, the server 300 may also report the risk based on the driving risk, and the risk reporting manner may be multiple, for example, the processor 620 may determine a reporting address corresponding to the driving risk based on the risk type of the driving risk, and report the driving risk and the target video frame corresponding to the driving risk based on the reporting address.
The risk type may include an internal risk and an external risk of the target vehicle. The internal risk may include driving risks of the target vehicle and of passengers in the target vehicle, and the external risk may include at least one of a driving risk of other vehicles in the current vehicle video besides the target vehicle, a driving risk of members of those other vehicles, and a risk of pedestrians participating in traffic on the road on which the target vehicle is driving. Based on the driving risk, the reporting address corresponding to it may be determined in multiple ways. For example, when the driving risk is an internal risk, the processor 620 may determine that the type of the reporting address is an internal reporting address, obtain an internal reporting address set, and screen out the reporting address corresponding to the target vehicle from that set as the reporting address corresponding to the driving risk; when the driving risk is an external risk, the processor 620 may determine that the type of the reporting address is an external reporting address, obtain an external reporting address set, screen out the reporting address corresponding to the risk type from that set, and use it as the reporting address corresponding to the driving risk.
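A minimal sketch of this address selection is given below; the two address dictionaries, their keys, and the "default" entry are assumptions introduced only for illustration.

```python
def resolve_report_address(risk_type, risk_name, internal_addresses, external_addresses):
    """Pick the reporting address for a detected driving risk based on whether
    it is an internal or an external risk."""
    if risk_type == "internal":
        # e.g. the fleet operator's management platform or a guardian's contact
        return internal_addresses[risk_name]
    # external risks go to a platform with traffic-management authority
    return external_addresses.get(risk_name, external_addresses["default"])
```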
The internal risk can be understood as a driving risk that does not need to be reported to a third-party management platform or a traffic administration platform, and the external risk, correspondingly, as a driving risk that does need to be reported to such a platform. The third-party management platform or traffic administration platform may be understood as a platform that performs traffic administration and may include, for example, a traffic police platform, a road administration platform, an alarm platform, or another platform with traffic administration authority. Therefore, the external reporting address may be the network address corresponding to the third-party management platform or the traffic administration platform, while the internal reporting address may be an address used for internal management of the target vehicle or of passengers in the target vehicle. For example, if the target vehicle is a taxi, the internal reporting address may be the network address of the internal management platform of the taxi company; if the target vehicle is a bus, it may be the network address of the internal management platform of the bus company; if a passenger in the target vehicle is a teenager, it may be a network address of the passenger's legal guardian (for example, a mobile phone number, a mailbox address, or an account on an instant messaging platform); and if a passenger in the target vehicle is an elderly person, it may be a network address of the passenger's emergency contact, and so on.
In some embodiments, the internal risk may also be understood as a driving risk that does not violate traffic regulations, such as the target vehicle being too close to a preceding vehicle, or a sign of fatigue driving by a driver of the target vehicle, or the tire pressure of the target vehicle being too low, or a failure of the target vehicle or another vehicle, etc. The corresponding external risk can be understood as a driving risk which violates the traffic regulations, and at this time, the external risk must be reported to a platform with traffic management authority, and vehicles or people corresponding to the driving risk are managed through the platform, and the like.
The target video frame may be a video frame with a driving risk, and the number of video frames corresponding to one driving risk may be one frame or a plurality of consecutive frames.
It should be noted that, in the process of risk reporting, this scheme can report specific risks according to their risk types: it can not only accurately report captured violations, but also report risks of the target vehicle or of passengers in the target vehicle to the internal management platform. By reporting risks in this way, multi-dimensional early warning of driving risks can be carried out in time, thereby improving traffic safety.
After the server 300 identifies the driving risk corresponding to the target vehicle, the driving risk returned by the server 300 may be received in multiple ways. For example, the processor 620 may receive the driving risk corresponding to the target vehicle returned by the server 300 in real time; that is, during identification of the current vehicle video, the server 300 may send each driving risk to the client 200 as soon as it is identified, so that the processor 620 obtains it. Alternatively, the processor 620 may receive the driving risks detected in one or more detection periods returned by the server. For example, taking a detection period of 10 minutes, the server 300 may segment the current vehicle video into a plurality of 10-minute video segments, identify the driving risks in one segment, send the identified driving risks to the client 200, then continue with the next 10-minute segment, and so on until all segments have been processed, so that the processor 620 obtains the one or more driving risks identified by the server 300.
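For illustration only, the per-period processing on the server side could be sketched as follows; the period length, the feature list input, and the pluggable detector are assumptions of this sketch.

```python
def detect_by_period(frame_feature_list, strategy, detect_fn, period_frames=6000):
    """Process the received video in fixed detection periods (roughly 10 minutes
    at 10 sampled frames per second) and yield the risks found in each period,
    mimicking the incremental server-to-client reporting described above;
    detect_fn is any per-feature detector such as the sketches given earlier."""
    for start in range(0, len(frame_feature_list), period_frames):
        segment = frame_feature_list[start:start + period_frames]
        found = (detect_fn(feature, strategy) for feature in segment)
        yield [risk for risk in found if risk is not None]
```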
It should be noted that, in this scheme, the two driving risk detection manners may be used separately or simultaneously, forming a dual-cache mechanism. This dual-cache mechanism improves the flexibility and accuracy of driving risk detection, and therefore the detection efficiency of driving risk detection can be greatly improved.
S140: outputting the driving risk.
For example, the processor 620 may visually display the driving risk and report the driving risk.
The type of the visual display may be various, and for example, may include at least one of voice broadcast, sound and light display, vibration prompt, or display of driving risk.
The voice broadcast can be understood as announcing the driving risk in the form of speech. The scope of the voice broadcast may cover the entire driving risk, or the driving risk of the target vehicle, or the driving risk of an occupant in the target vehicle, and so on. The content of the voice broadcast may be various; for example, it may include the voice content corresponding to the driving risk, or a preset voice content corresponding to the risk type of the driving risk, and the like. The executor of the voice broadcast may include the target vehicle, the on-board device of the target vehicle, the terminal of the target user 100, the client 200, or a terminal of an occupant in the target vehicle, and the like.
The sound and light display can be understood as performing early warning in the form of sound and light, for example, emitting a preset alarm sound, emitting a specific reminding light, and the like. The preset warning sound may be a sound set by the target user 100 or the administrator of the target vehicle, or may be a warning sound provided by the target vehicle. The reminding light may be a light of a specific color that is lit according to a specific rule, for example, a red light may be used for flashing, or the lighting mode of the light may be determined according to the risk type of the driving risk, and then the light display is performed based on the lighting mode, for example, lights of the same or different colors are respectively flashed at certain intervals, and so on.
For example, the processor 620 may determine the reporting address corresponding to the driving risk based on the risk type of the driving risk, and report the driving risk and the target video frame corresponding to the driving risk based on that reporting address. The risk reporting here may be performed in the same manner as the risk reporting performed by the server 300. Therefore, when the driving risk detection manner is server-side detection, the processor 620 does not need to execute the risk reporting step for the driving risk it receives; when the driving risk detection manner is local detection, the processor 620 executes the risk reporting step to complete reporting of the driving risk. When local detection and server-side detection are performed simultaneously, the processor 620 may report the locally detected driving risks and skip reporting the driving risks detected by the server.
In some embodiments, the processor 620 may also query the server 300 for target historical travel information. The target historical driving information may include driving risks detected by a local and/or a server within a preset historical time interval and video information corresponding to the driving risks, and may also include vehicle driving information within the preset historical time interval. The manner of inquiring the target historical travel information may be various, and for example, the processor 620 may transmit a travel information inquiry request to the server 300 so that the server 300 filters out the target historical travel information based on the travel information inquiry request, and receive the target historical travel information returned by the server 300.
The driving information query request may include a preset historical time interval, and the server 300 may screen the target historical driving information based on the driving information query request in various ways, for example, the server 300 may screen the historical driving information in the preset historical time interval from the driving information set, so as to obtain the target historical driving information.
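Purely as an illustration, and assuming each record in the driving information set carries a timestamp field (an assumption, not stated in the scheme), the server-side screening by a preset historical time interval could look like:

```python
def screen_history(driving_info_set, start_ts, end_ts):
    """Screen the target historical driving information out of the driving
    information set; each record is assumed to carry a 'timestamp' plus the
    detected risk and a reference to the corresponding video frames."""
    return [record for record in driving_info_set
            if start_ts <= record["timestamp"] <= end_ts]
```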
The driving risk detection method in this scheme may perform driving risk detection and early warning at the local device or at the server. Taking voice broadcast and sound and light display as the early warning manners, for example, the system architecture of a driving risk detection system that executes the method may include modules and units such as a sensing module, a driving risk detection strategy setting module, a driving risk detection strategy storage module, an image processing module, a voice playing module, a sound and light display module, an intelligent image processing unit, a violation reporting unit, a driving risk detection strategy configuration unit, a history query module, a data communication module, and a data receiving module. The system architecture covers the main functions of hardware control, strategy control, model setting, model training, model identification, safety modeling, and so on, and specifically may include the following:
(1) A sensing module: a core module of the local device end (on-board device or local terminal), mainly responsible for acquiring the current vehicle video; it may take the form of various cameras or image acquisition devices;
(2) A driving risk detection strategy setting module: used for setting the execution subject of the driving risk detection strategy, that is, for setting whether driving risk detection is performed locally or at the server side, which is the dual-cache mechanism provided in this scheme. This module can be operated by the target user 100 or the administrator of the target vehicle, and the driving risk detection manner can be set to local detection, to server-side detection, or to both.
(3) A history query module: through the query function provided by the server, the target user 100 or another user can query the travel history, snapshot history (external driving risks) and road violation history (internal driving risks) of the target vehicle.
(4) A data communication module: a core module of the local device end, mainly responsible for data communication between the local device and the server end.
(5) An image processing module: a core module of the local device end, mainly responsible for processing the current vehicle video locally.
(6) A data receiving module: a core module of the server, mainly responsible for receiving the various data reported by the local device.
(7) A voice playing module and a sound and light display module: output devices of the local device end, used for early warning or risk prompting based on the detected driving risk.
(8) An intelligent image processing unit: a core unit of the server, mainly used for performing driving risk detection on the current vehicle video received by the server according to the driving risk detection strategy, recording and storing the time and picture of the current video frame when a preset driving risk is detected in it, and returning the detected driving risk to the local device so that the voice playing module and the sound and light display module of the local device can issue an early warning or risk prompt. In addition, the intelligent image processing unit may receive a video clip selected by the target user 100 or the administrator of the target vehicle together with the driving risk determined from that clip, and extract the key feature value in the video clip as the comparison feature value, thereby generating a new driving risk detection strategy.
(9) A driving risk detection strategy configuration unit: a core unit of the server, mainly used for configuring the preset feature values of driving risks; it allows an operator to plan a strategy on their own or to select a driving risk detection strategy from a pre-planned strategy library, thereby generating the current driving risk detection strategy. The current driving risk detection strategy can be set to be executed at the local device end or at the server end.
It should be noted that where the current driving risk detection strategy is executed may be set by the driving risk detection strategy setting module. When the current driving risk detection strategy is set to the local device end, the driving risk detection manner is local detection. In local detection, the driving risk detection strategy configuration unit issues the current driving risk detection strategy, through the data communication module, to the driving risk detection strategy storage module of the local device end for storage. The sensing module receives the video stream, the image processing module preprocesses it and directly extracts the image feature value (current driving feature) of each frame, and that feature value is compared with the current driving risk detection strategy stored in the driving risk detection strategy storage module. When the comparison is abnormal or the similarity reaches a certain threshold, the local device end can directly issue an early warning or risk prompt through the voice playing module and the sound and light display module.
When the current driving risk detection strategy is set to the server, the driving risk detection manner is server-side detection. In server-side detection, the intelligent image processing unit performs frame-by-frame detection, according to the preset current driving risk detection strategy, on the video stream of the current vehicle video received by the data receiving module, extracts the image feature value (current driving feature) of each frame, analyzes infrastructure or person features such as vehicles, buildings and roads from the image feature values, and compares them according to the preset driving risk detection strategy. When a frame is detected to carry a driving risk, the frame image can be output directly to the violation reporting unit for subsequent risk reporting, and at the same time the driving risk is notified, through the data communication module and the data receiving module, to the voice playing module and the sound and light display module of the local device end, so that the local device end issues an early warning or risk prompt.
(10) A violation reporting unit: it may be configured at the local device end, at the server end, or at both. For external risks, risk reporting by the violation reporting unit can be regarded as reporting a suspected violation; for internal risks, it can be regarded as internal management and control of the driving risk.
The main application scenario of this scheme is an intelligent driving-safety service. In this scenario, the driving state of the drivers and passengers in the target vehicle and dangerous behaviors around the target vehicle are identified through locally deployed on-board devices, so as to realize violation snapshots, risk early warning and the like. Specifically, as shown in fig. 4, videos inside and outside the target vehicle are collected through a camera or similar component of the on-board device to obtain video streams, and the on-board device performs algorithmic analysis on the video streams to realize fatigue driving detection (persons), driving safety monitoring (the target vehicle) and identification of surrounding dangerous behaviors. The driving state of the driver of the target vehicle is identified through fatigue driving detection and driving safety monitoring, and when the driving state is abnormal, a voice warning is issued on the on-board device or another device in the target vehicle. For identification of surrounding dangerous driving, the image corresponding to the dangerous behavior can be uploaded to the cloud for storage; the driver of the target vehicle can view the dangerous behavior through a vehicle-owner applet, and the image of the dangerous behavior can also be uploaded to a traffic management platform.
To sum up, according to the method P100 and the system 001 for driving risk detection provided by this specification, after the current vehicle video of the target vehicle is obtained through the on-board device of the target vehicle, the driving risk detection strategy of the target vehicle is updated; then, based on the updated driving risk detection strategy, the driving risk corresponding to the target vehicle is identified in the current vehicle video and output. Because this scheme can identify the driving risk directly from the vehicle video, it does not need to collect other data, which broadens the range of application of driving risk detection; the driving risk can also be output directly for real-time traffic safety reminders. In addition, the driving risk detection strategy of the target vehicle can be updated so that different driving risk detection strategies are adopted for different vehicles or different driving objects, which increases the accuracy of driving risk detection and therefore improves its detection efficiency.
Another aspect of the present description provides a non-transitory storage medium storing at least one set of executable instructions for performing driving risk detection. When executed by a processor, the executable instructions direct the processor to perform the steps of the method of driving risk detection P100 described herein. In some possible implementations, various aspects of the present description may also be implemented in the form of a program product including program code. When the program product is run on a computing device 600, the program code is adapted to cause the computing device 600 to perform the steps of the method of driving risk detection P100 described herein. A program product for implementing the methods described above may employ a portable compact disc read only memory (CD-ROM) including program code and may be run on the computing device 600. However, the program product of this description is not limited in this respect, as the readable storage medium can be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for this specification may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on computing device 600, partly on computing device 600, as a stand-alone software package, partly on computing device 600 and partly on a remote computing device, or entirely on the remote computing device.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In conclusion, after reading this detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure may be presented by way of example only, and may not be limiting. Those skilled in the art will appreciate that the present specification is susceptible to various reasonable variations, improvements and modifications of the embodiments, even if not explicitly described herein. Such alterations, improvements, and modifications are intended to be suggested by this specification, and are within the spirit and scope of the exemplary embodiments of this specification.
Furthermore, certain terminology has been used in this specification to describe embodiments of the specification. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
It should be appreciated that in the foregoing description of embodiments of the specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more features. This is not to be taken as implying that the combination of all of these features is required; on reading this description, a person skilled in the art may well extract some of them as separate embodiments. That is, the embodiments in this specification may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment may be valid with fewer than all features of a single foregoing disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents and the like, cited herein is hereby incorporated by reference, except for any prosecution history associated with it, any such material that is inconsistent with or conflicts with this document, and any such material that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated materials and that associated with this document, the description, definition, and/or use of the term in this document shall prevail.
Finally, it should be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the present specification. Other modified embodiments are also within the scope of this description. Accordingly, the disclosed embodiments are to be considered in all respects as illustrative and not restrictive. Those skilled in the art can implement the application in this specification in alternative configurations according to the embodiments in this specification. Therefore, the embodiments of the present description are not limited to the embodiments described precisely in the application.

Claims (22)

1. A running risk detection method includes, by an in-vehicle device of a target vehicle:
acquiring a current vehicle video of the target vehicle;
updating a driving risk detection strategy of the target vehicle;
identifying a driving risk corresponding to the target vehicle in the current vehicle video based on the updated driving risk detection strategy; and
and outputting the driving risk.
2. The driving risk detection method according to claim 1, wherein the current vehicle video includes at least one of a vehicle interior video, a vehicle exterior video, and a road condition video of the target vehicle.
3. The running risk detection method according to claim 1, wherein the updating of the running risk detection strategy of the target vehicle includes:
receiving a current driving risk detection strategy issued by a server; and
and updating the driving risk detection strategy of the target vehicle based on the current driving risk detection strategy to obtain the updated driving risk detection strategy.
4. The driving risk detection method according to claim 3, wherein before receiving the current driving risk detection policy issued by the server, the method further comprises:
and sending security policy configuration information to the server so that the server can generate the current driving risk detection policy based on the security policy configuration information.
5. The driving risk detection method according to claim 4, wherein before the sending of the security policy configuration information to the server, further comprising:
acquiring historical driving information corresponding to the target vehicle; and
and generating the safety strategy configuration information based on the historical driving information.
6. The driving risk detection method according to claim 3, wherein before receiving the current driving risk detection policy issued by the server, the method further comprises:
and sending a historical vehicle video of a preset time period and a target driving risk corresponding to the historical vehicle video to the server, so that the server extracts a target driving feature corresponding to the target driving risk from the historical vehicle video, and generating the current driving risk detection strategy based on the target driving feature and the target driving risk.
7. The running risk detection method according to claim 1, wherein the running risk includes at least one of a vehicle risk and a driving risk, the driving risk includes at least one of a driving risk of a person inside the target vehicle and a driving risk of another person outside the target vehicle, and the driving risk includes at least one of a driving state abnormality and an illegal driving behavior.
8. The driving risk detection method according to claim 1, wherein the identifying of the driving risk corresponding to the target vehicle in the current vehicle video based on the updated driving risk detection strategy comprises:
determining a running risk detection mode of the target vehicle; and
and identifying the running risk corresponding to the target vehicle in the current vehicle video based on the running risk detection mode and the updated running risk detection strategy.
9. The running risk detection method according to claim 8, wherein the running risk detection manner includes at least one of local detection and server-side detection.
10. The running risk detection method according to claim 8, wherein the determining of the running risk detection manner of the target vehicle includes:
acquiring detection configuration information for the target vehicle; and
and determining a running risk detection mode of the target vehicle based on the detection configuration information.
11. The running risk detection method according to claim 8, wherein the determining of the running risk detection manner of the target vehicle includes:
acquiring vehicle attribute information of the target vehicle;
identifying a driving scene of the target vehicle based on the vehicle attribute information and the current vehicle video; and
and determining a running risk detection mode of the target vehicle based on the running scene.
12. The driving risk detection method according to claim 9, wherein the identifying a driving risk corresponding to the target vehicle in the current vehicle video based on the driving risk detection manner and the updated driving risk detection strategy includes:
determining that the driving risk detection mode is local detection, and extracting current driving characteristics from the current vehicle video; and
and determining the running risk corresponding to the target vehicle based on the updated running risk detection strategy and the current running characteristic.
13. The driving risk detection method according to claim 12, wherein the extracting of the current driving feature in the current vehicle video includes:
framing the current vehicle video to obtain a video frame set;
extracting the characteristics of each video frame in the video frame set to obtain the video characteristics corresponding to each video frame; and
identifying the current driving feature in the video features.
14. The driving risk detection method according to claim 12, wherein the determining of the driving risk corresponding to the target vehicle based on the updated driving risk detection strategy and the current driving characteristics includes:
comparing a preset driving characteristic corresponding to the updated driving risk detection strategy with the current driving characteristic, wherein the preset driving characteristic is a driving characteristic corresponding to a preset driving risk; and
and determining the running risk corresponding to the target vehicle based on the comparison result.
15. The driving risk detection method according to claim 14, wherein comparing the current driving characteristic with a preset driving characteristic corresponding to the updated driving risk detection strategy includes:
calculating the feature similarity between the preset driving feature corresponding to the updated driving risk detection strategy and the current driving feature; and
the determining of the driving risk corresponding to the target vehicle based on the comparison result comprises: and when the characteristic similarity exceeds a preset similarity threshold value, taking a preset driving risk corresponding to the preset driving characteristic as a driving risk corresponding to the target vehicle.
16. The driving risk detection method according to claim 14, wherein the determining the driving risk corresponding to the target vehicle based on the comparison result includes:
and when the comparison result is abnormal, determining the driving risk corresponding to the target vehicle.
17. The driving risk detection method according to claim 9, wherein the identifying, in the current vehicle video, the driving risk corresponding to the target vehicle based on the driving risk detection manner and the updated driving risk detection strategy includes:
determining that the running risk detection mode is server side detection, sending the current vehicle video to a server so that the server can identify a running risk corresponding to the target vehicle in the current vehicle video based on a current running risk detection strategy, and reporting the risk based on the running risk; and
and receiving the driving risk corresponding to the target vehicle returned by the server.
18. The running risk detection method according to claim 1, wherein the outputting the running risk includes:
visually displaying the driving risk; and
and reporting the running risk.
19. The driving risk detection method according to claim 18, wherein the visual presentation comprises at least one of a voice broadcast, an acousto-optic display, a vibratory cue or a display of the driving risk.
20. The driving risk detection method according to claim 19, wherein the risk reporting of the driving risk comprises:
determining a reporting address corresponding to the driving risk based on the risk type of the driving risk; and
reporting the driving risk and the target video frame corresponding to the driving risk based on the reporting address.
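A minimal sketch of the risk reporting outlined in claim 20; the risk-type-to-address mapping, URLs, and payload fields are assumptions for illustration only:

```python
# Sketch only: pick a reporting address from the risk type and upload the
# driving risk with its target video frame. All addresses are hypothetical.
import requests

REPORT_ADDRESSES = {
    "fatigue_driving": "https://example.com/report/fatigue",
    "traffic_violation": "https://example.com/report/violation",
}

def report_risk(risk_type: str, frame_path: str, vehicle_id: str) -> int:
    """Determine the reporting address from the risk type and report the
    driving risk together with the corresponding target video frame."""
    address = REPORT_ADDRESSES.get(risk_type)
    if address is None:
        raise ValueError(f"no reporting address configured for {risk_type!r}")
    with open(frame_path, "rb") as f:
        response = requests.post(
            address,
            data={"vehicle_id": vehicle_id, "risk_type": risk_type},
            files={"target_frame": f},
            timeout=30,
        )
    return response.status_code
```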
21. The driving risk detection method according to claim 1, further comprising:
sending a driving information query request to a server, so that the server screens out target historical driving information based on the driving information query request; and
receiving the target historical driving information returned by the server.
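A minimal sketch of the history query in claim 21; the endpoint and the query parameters used for screening are hypothetical assumptions:

```python
# Sketch only: ask the server to screen out target historical driving
# information. Endpoint and parameter names are hypothetical.
import requests

def query_history(vehicle_id: str, start: str, end: str,
                  endpoint: str = "https://example.com/api/driving-history") -> list[dict]:
    """Send a driving information query request and return the target
    historical driving information screened out by the server."""
    response = requests.get(
        endpoint,
        params={"vehicle_id": vehicle_id, "start": start, "end": end},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```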
22. A driving risk detection system, comprising:
at least one storage medium storing at least one instruction set for performing driving risk detection; and
at least one processor communicatively coupled to the at least one storage medium,
wherein, when the driving risk detection system is running, the at least one processor reads the at least one instruction set and, as directed by the at least one instruction set, performs the driving risk detection method according to any one of claims 1-21.
CN202210961264.8A 2022-08-11 2022-08-11 Method and system for driving risk detection Pending CN115384541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210961264.8A CN115384541A (en) 2022-08-11 2022-08-11 Method and system for driving risk detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210961264.8A CN115384541A (en) 2022-08-11 2022-08-11 Method and system for driving risk detection

Publications (1)

Publication Number Publication Date
CN115384541A true CN115384541A (en) 2022-11-25

Family

ID=84118995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210961264.8A Pending CN115384541A (en) 2022-08-11 2022-08-11 Method and system for driving risk detection

Country Status (1)

Country Link
CN (1) CN115384541A (en)

Similar Documents

Publication Publication Date Title
US11060882B2 (en) Travel data collection and publication
US20220292956A1 (en) Method and system for vehicular-related communications
US20210357670A1 (en) Driver Attention Detection Method
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CA2848995C (en) A computing platform for development and deployment of sensor-driven vehicle telemetry applications and services
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
CN114446056B (en) Vehicle information code generation and vehicle passing control method, device and equipment
CN108860166A (en) Processing system and processing method occur for pilotless automobile accident
MX2014015331A (en) Affective user interface in an autonomous vehicle.
Saiprasert et al. Driver behaviour profiling using smartphone sensory data in a V2I environment
CN109191829B (en) road safety monitoring method and system, and computer readable storage medium
CN110889351A (en) Video detection method and device, terminal equipment and readable storage medium
CN110728218A (en) Dangerous driving behavior early warning method and device, electronic equipment and storage medium
WO2020100922A1 (en) Data distribution system, sensor device, and server
US20220139090A1 (en) Systems and methods for object monitoring
CN112990069A (en) Abnormal driving behavior detection method, device, terminal and medium
CN111582239A (en) Violation monitoring method and device
CN110386088A (en) System and method for executing vehicle variance analysis
KR102319383B1 (en) Method and apparatus for automatically reporting traffic rule violation vehicles using black box images
CN114677848B (en) Perception early warning system, method, device and computer program product
CN115384541A (en) Method and system for driving risk detection
KR20220122832A (en) Apparatus and method for riding notification of mobility on demand
CN211087269U (en) Driver identity monitoring device, vehicle and system
JP2019079203A (en) Image generation device and image generation method
JP7301715B2 (en) State Prediction Server and Alert Device Applied to Vehicle System Using Surveillance Camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination