CN115720253B - Video processing method, device, vehicle and storage medium - Google Patents

Video processing method, device, vehicle and storage medium

Info

Publication number
CN115720253B
CN115720253B CN202211395823.XA
Authority
CN
China
Prior art keywords
video
emergency
target
videos
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211395823.XA
Other languages
Chinese (zh)
Other versions
CN115720253A (en)
Inventor
侯旭光
姚昂
吴祥
李世龙
权伍明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN202211395823.XA priority Critical patent/CN115720253B/en
Publication of CN115720253A publication Critical patent/CN115720253A/en
Application granted granted Critical
Publication of CN115720253B publication Critical patent/CN115720253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a video processing method, a video processing device, a vehicle and a storage medium. The method comprises the following steps: continuously acquiring N videos through an image acquisition device and storing the N videos in a first designated folder; acquiring target videos from the N videos based on time information of the vehicle entering an emergency state; and splicing the plurality of target videos to obtain an emergency video, which is stored in a second designated folder. According to the technical solution provided by the embodiments of the application, after the vehicle enters an emergency state, the videos already captured by the image acquisition device in the cyclic recording mode are reused to obtain an emergency video recording the environmental information before and after the vehicle entered the emergency state. This process consumes only a small amount of computing power, which reduces the consumption of the electronic controller's hardware resources, so the electronic controller has enough hardware resources left to process other services in the vehicle and stuttering is reduced.

Description

Video processing method, device, vehicle and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a video processing method, a video processing device, a vehicle, and a storage medium.
Background
An automobile data recorder can record the environment around a vehicle in real time while the vehicle is running, and is therefore widely used in the automotive field.
In the related art, the electronic controller has to process other services in the vehicle in addition to controlling the automobile data recorder. When the vehicle enters an emergency state, emergency capture of the vehicle's surroundings by the recorder consumes a large amount of the electronic controller's computing power, so the electronic controller stutters while processing other services.
Disclosure of Invention
The application provides a video processing method, a video processing device, a vehicle and a storage medium.
In a first aspect, an embodiment of the present application provides a video processing method, including: continuously acquiring N videos through an image acquisition device and storing the N videos in a first designated folder, where N is a positive integer greater than 2 and a video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration; after it is monitored that the vehicle has entered an emergency state, acquiring target videos from the N videos based on time information of the vehicle entering the emergency state; and, in the case that there are a plurality of target videos, splicing the plurality of target videos to obtain an emergency video and storing the emergency video in a second designated folder, where files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than a user-triggered deletion instruction, and the emergency video is used for recording environmental information of the vehicle before and after it entered the emergency state.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: a video collection module, configured to continuously collect N videos through the image acquisition device, where N is a positive integer greater than 2; a first storage module, configured to store the N videos in a first designated folder, where a video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration; a video acquisition module, configured to acquire target videos from the plurality of videos based on time information of the vehicle entering an emergency state after it is monitored that the vehicle is in the emergency state; a video processing module, configured to splice the plurality of target videos, in the case that there are a plurality of target videos, to obtain an emergency video used for recording environmental information of the vehicle before and after it entered the emergency state; and a second storage module, configured to store the emergency video in a second designated folder, where files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than a user-triggered deletion instruction.
In a third aspect, an embodiment of the present application provides a vehicle including: one or more processors; a memory; an image acquisition device; one or more applications, wherein the one or more applications are stored in memory and configured to be executed by one or more processors, the one or more applications configured to perform the video processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored therein computer program instructions that are callable by a processor to perform a video processing method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product for implementing the video processing method according to the first aspect when the computer program product is executed.
Compared with the prior art, in the video processing method provided by the embodiments of the application, after the vehicle enters an emergency state, three target videos are acquired from the plurality of videos captured by the image acquisition device in the cyclic recording mode and are spliced into an emergency video that records the environmental information before and after the vehicle entered the emergency state.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by one embodiment of the present application.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of video processing provided by one embodiment of the present application.
Fig. 4 is a flowchart of a video processing method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an interface for playing emergency video according to an embodiment of the present application.
Fig. 6 is a block diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of a vehicle according to an embodiment of the present application.
Fig. 8 is a block diagram of a computer-readable storage medium provided by an embodiment of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present application and are not to be construed as limiting the present application.
In order to enable those skilled in the art to better understand the solution of the present application, the following description will make clear and complete descriptions of the technical solution of the present application in the embodiments of the present application with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown. The implementation environment includes a vehicle 100. The vehicle 100 refers to a vehicle that is driven or towed by a power plant and used to carry passengers or transport goods, including but not limited to a car, a sport utility vehicle (SUV), a multi-purpose vehicle (MPV), and the like. The vehicle 100 is provided with an image acquisition device and an electronic controller, between which a communication connection is established.
The image acquisition device is used for recording environmental information while the vehicle 100 is running. Optionally, the image acquisition device is an automobile data recorder, which can be mounted on the front windshield of the vehicle close to the back of the interior rearview mirror. In some embodiments, the recorder continuously captures images while the vehicle 100 is running to obtain a plurality of videos of the same duration, and sends them to the electronic controller in sequence. The electronic controller stores these videos, but deletes a video once its storage duration reaches a preset duration in order to make room for videos newly captured by the recorder. This mode of operation is referred to as the cyclic recording mode.
The electronic controller is used to process various services in the vehicle 100, including but not limited to: control of the drive system, control of the brake system, control of the entertainment system, and control of in-vehicle environmental components (such as air conditioning, ambient lights, etc.). In the embodiments of the application, when the electronic controller monitors that the vehicle has entered an emergency state (such as a collision accident, emergency braking, etc.), it reuses the videos already captured by the image acquisition device in the cyclic recording mode and splices them into an emergency video recording the environmental information of the vehicle 100 before and after it entered the emergency state. This process consumes only a small amount of the electronic controller's computing power, which reduces the consumption of its hardware resources, so the electronic controller has enough hardware resources to process other services in the vehicle 100 and stuttering is reduced.
Referring to fig. 2, a flowchart of a video processing method according to an embodiment of the application is shown. The main execution body of each step in the method can be an electronic controller, and the method comprises the following processes.
Step S201, N videos are continuously collected by the image collecting device, and the N videos are stored in the first designated folder.
N is a positive integer greater than 2. In the embodiment of the application, the image acquisition device continuously captures video while the vehicle is running, obtaining N videos of the same duration.
A video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration, in order to make room for videos newly captured by the image acquisition device. In other possible embodiments, when the data amount of the videos in the first designated folder is greater than a preset data amount, the electronic controller deletes the one or more videos that were stored first to make room for videos newly captured by the image acquisition device.
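For illustration only, the following Python sketch shows one way the two deletion policies for the first designated folder could be implemented; the folder path, the preset duration and the preset data amount are assumptions, since the embodiments do not specify concrete values.

```python
import os
import time

# Hypothetical parameters; the embodiments do not specify concrete values.
FIRST_FOLDER = "/data/recorder/loop"   # the "first designated folder"
MAX_AGE_SECONDS = 30 * 60              # preset storage duration
MAX_TOTAL_BYTES = 8 * 1024 ** 3        # preset data amount (alternative policy)

def prune_loop_folder(folder: str = FIRST_FOLDER) -> None:
    """Delete loop-recorded clips whose age or total size exceeds the preset limits."""
    clips = sorted(
        (os.path.join(folder, name) for name in os.listdir(folder) if name.endswith(".mp4")),
        key=os.path.getmtime,          # oldest clip first
    )
    now = time.time()
    # Policy 1: delete a clip once its storage duration reaches the preset duration.
    for path in list(clips):
        if now - os.path.getmtime(path) >= MAX_AGE_SECONDS:
            os.remove(path)
            clips.remove(path)
    # Policy 2 (other possible embodiments): delete the oldest clips while the
    # total data amount still exceeds the preset data amount.
    while clips and sum(os.path.getsize(p) for p in clips) > MAX_TOTAL_BYTES:
        os.remove(clips.pop(0))
```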
Step S202, after the vehicle is monitored to be in an emergency state, acquiring target videos from N videos based on time information of the vehicle entering the emergency state.
Emergency states include, but are not limited to: sudden braking or sudden acceleration, emergency steering, and a collision event. An emergency state indicates that the vehicle has a high probability of being in an accident, and the current environmental information of the vehicle needs to be accurately recorded at that moment to facilitate subsequent rescue, liability determination, and the like.
In some embodiments, the vehicle acquires the vehicle acceleration in real time through an acceleration sensor, and determines that the vehicle has entered an emergency state when the absolute value of the vehicle acceleration is greater than a first preset value. The first preset value is set according to experiments or experience, which is not limited in the embodiments of the present application. An absolute value of the vehicle acceleration greater than the first preset value indicates that the vehicle has braked or accelerated suddenly. It should be noted that a large acceleration can also occur during normal driving without any accident, for example when the vehicle starts off. Therefore, when judging whether the vehicle is in an emergency state by means of the vehicle acceleration, the running speed of the vehicle before the acceleration exceeded the first preset value also needs to be acquired: if that running speed is lower than a preset running speed, the vehicle is most likely in a start-up acceleration stage rather than in an emergency state; if that running speed is higher than the preset running speed, the vehicle is determined to be in an emergency state.
In other embodiments, the vehicle monitors the change of the steering wheel angle and determines that the vehicle is in an emergency state if the rate of change of the steering wheel angle is greater than a second preset value. The second preset value is set according to experiments or experience, which is not limited in the embodiments of the present application. A rate of change of the steering wheel angle greater than the second preset value indicates that the steering wheel angle has changed greatly in a short time, that is, the vehicle has made an emergency turn.
In other embodiments, the vehicle is determined to be in an emergency state when a collision event is detected. Optionally, the vehicle may analyze the video frames captured by the image acquisition device to determine whether a collision event has occurred.
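The three example criteria above can be combined into a single check. The following Python sketch is a minimal illustration; the thresholds and the sample fields are assumptions, since the embodiments only state that the preset values are set by experiment or experience.

```python
from dataclasses import dataclass

# Hypothetical thresholds ("first preset value", "preset running speed",
# "second preset value"); the embodiments only say they are set by experiment or experience.
ACCEL_THRESHOLD = 8.0            # m/s^2
MIN_SPEED_KMH = 20.0             # km/h
STEERING_RATE_THRESHOLD = 360.0  # deg/s

@dataclass
class VehicleSample:
    accel: float          # current longitudinal acceleration, m/s^2
    speed_before: float   # running speed before the acceleration spike, km/h
    steering_rate: float  # rate of change of the steering wheel angle, deg/s
    collision: bool       # collision event detected (e.g. from video analysis)

def is_emergency(sample: VehicleSample) -> bool:
    """Combine the three example criteria described in the embodiments above."""
    hard_maneuver = (abs(sample.accel) > ACCEL_THRESHOLD
                     and sample.speed_before >= MIN_SPEED_KMH)  # exclude start-up acceleration
    sharp_turn = sample.steering_rate > STEERING_RATE_THRESHOLD
    return hard_maneuver or sharp_turn or sample.collision
```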
In some embodiments, the time information of the vehicle entering the emergency state includes a target moment at which the vehicle entered the emergency state, and the target videos include a first target video, a second target video and a third target video. In this embodiment, step S202 may be implemented as the following sub-steps: after it is monitored that the vehicle is in an emergency state, determining, as the first target video, the video among the N videos whose acquisition time includes the target moment; determining, as the second target video, a video among the N videos whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is less than or equal to a first time interval; and determining, as the third target video, a video among the N videos whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is less than or equal to a second time interval.
The first time interval and the second time interval may be determined according to the duration of the videos captured by the image acquisition device, and they may be the same or different. Specifically, the first time interval may be a positive integer multiple of the duration of a captured video, and the second time interval may also be a positive integer multiple of that duration, so that when the emergency video is generated later, the captured videos can be used directly without being split.
Further, when the first time interval and the second time interval are both equal to the duration of a captured video, determining the second target video as described above amounts to: determining, as the second target video, the video among the N videos whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is smallest. Likewise, determining the third target video amounts to: determining, as the third target video, the video among the N videos whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is smallest.
Referring to fig. 3, a schematic diagram of video processing provided by one embodiment of the present application is shown. The image acquisition device captures N videos in the cyclic recording mode. At time t0 the vehicle monitors that it has entered an emergency state; it then searches the N videos for the video whose acquisition time includes time t0 and takes it as the first target video 31, determines the video whose acquisition time is before and closest to that of the first target video 31 as the second target video 32, and determines the video whose acquisition time is after and closest to that of the first target video 31 as the third target video 33.
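The selection of the three target videos around the target moment t0 can be sketched as follows in Python; the Clip structure and its field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Clip:
    path: str     # file path of one loop-recorded video
    start: float  # acquisition start time (seconds since epoch)
    end: float    # acquisition end time

def select_target_clips(clips: List[Clip], t0: float
                        ) -> Tuple[Optional[Clip], Optional[Clip], Optional[Clip]]:
    """Return (second, first, third) target videos around the target moment t0."""
    ordered = sorted(clips, key=lambda c: c.start)
    # First target video: the clip whose acquisition time includes t0.
    first = next((c for c in ordered if c.start <= t0 <= c.end), None)
    if first is None:
        return None, None, None
    idx = ordered.index(first)
    # Second / third target videos: the clips immediately before / after the first
    # target video, i.e. those with the smallest time interval to it.
    second = ordered[idx - 1] if idx > 0 else None
    third = ordered[idx + 1] if idx + 1 < len(ordered) else None
    return second, first, third
```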
It should be noted that the times at which the electronic controller obtains the first target video, the second target video and the third target video may be the same or different. In some embodiments, the vehicle obtains the second target video when it enters the emergency state, obtains the first target video when the capture of the first target video is completed, and obtains the third target video when the capture of the third target video is completed. In other embodiments, the vehicle obtains the first target video, the second target video and the third target video together after the capture of the third target video is completed.
In other possible embodiments, if the image acquisition device captured no video before the vehicle entered the emergency state (for example, the image acquisition device has just been installed), the electronic controller may obtain only a first target video whose acquisition time includes the target moment and a third target video whose acquisition time is after the target moment; if the image acquisition device stops working after the vehicle enters the emergency state, the electronic controller may obtain only a first target video whose acquisition time includes the target moment and a second target video whose acquisition time is before the target moment; if both of the above situations occur, the electronic controller may obtain only the first target video whose acquisition time includes the target moment.
In step S203, if there are multiple target videos, the multiple target videos are spliced to obtain an emergency video, and the emergency video is stored in a second designated folder.
The emergency video is used for recording the environmental information of the vehicle before and after it entered the emergency state. Videos in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than a user-triggered deletion instruction; in other words, files under this storage path can only be deleted manually by the user and cannot be deleted automatically, which prevents the emergency video from being deleted by mistake. The first designated folder and the second designated folder are different.
In the case where the target videos include the first target video, the second target video and the third target video, step S203 is implemented as: splicing the second target video, the first target video and the third target video end to end in order of acquisition time to obtain the emergency video.
In the case that the target videos include the first target video and the third target video, step S203 is implemented by splicing the first target video and the third target video end to end in order of acquisition time to obtain the emergency video. In the case that the target videos include the first target video and the second target video, step S203 is implemented by splicing the second target video and the first target video end to end in order of acquisition time to obtain the emergency video. In the case that only the first target video exists, the electronic controller determines the first target video as the emergency video. Referring again to fig. 3, the electronic controller splices the second target video 32, the first target video 31 and the third target video 33 in sequence to obtain the emergency video 34.
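One low-cost way to splice the target videos end to end in order of acquisition time is stream-copy concatenation, which avoids re-encoding and therefore matches the goal of consuming little computing power. The sketch below assumes an ffmpeg binary is available on the controller; the patent itself does not name a specific splicing tool.

```python
import subprocess
import tempfile
from typing import List

def splice_clips(clip_paths: List[str], output_path: str) -> None:
    """Concatenate clips end to end in acquisition-time order without re-encoding."""
    # Write the ffmpeg concat-demuxer list file; clip_paths is already ordered.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as list_file:
        for path in clip_paths:
            list_file.write(f"file '{path}'\n")
        list_name = list_file.name
    # "-c copy" stream-copies the input, keeping CPU cost low.
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_name,
         "-c", "copy", output_path],
        check=True,
    )
```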
In summary, according to the technical solution provided by the embodiments of the application, after the vehicle enters an emergency state, three target videos are obtained from the plurality of videos captured by the image acquisition device in the cyclic recording mode and spliced into an emergency video that records the environmental information before and after the vehicle entered the emergency state. Because the videos captured in the cyclic recording mode are reused when obtaining the emergency video, the video splicing process consumes only a small amount of the electronic controller's computing power, which reduces the consumption of its hardware resources, so the electronic controller has enough hardware resources to process other services in the vehicle and stuttering is reduced.
In some embodiments, after the vehicle obtains the emergency video, it may also set the name of the emergency video to facilitate later searching. Optionally, the vehicle sets the name of the emergency video based on the time information of the vehicle entering the emergency state, so that the name contains that time information. For example, if the target moment at which the vehicle entered the emergency state is 15:23 on October 19, 2022, the name of the emergency video may be "2210191523". In this way, the user can quickly find the emergency video according to the time at which the vehicle entered the emergency state, which improves search efficiency.
Further, the vehicle may set the name of the emergency video based on both the time information and the position information of the vehicle entering the emergency state. The vehicle can obtain its position in the emergency state through a positioning module, which may be a GPS module in the vehicle. In this embodiment, the name of the emergency video contains, in addition to the above time information, the position at which the vehicle entered the emergency state. Continuing the above example, if the vehicle had a rear-end collision at the xx expressway toll station at 15:23 on October 19, 2022, the name of the emergency video may be set to "xx expressway toll station 2210191523". In this way, the user can quickly find the emergency video according to both the time and the position at which the vehicle entered the emergency state, which improves search efficiency.
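A minimal sketch of the naming rule described above, assuming the name is simply the optional location string followed by a yyMMddHHmm timestamp (the exact format is not fixed by the embodiments):

```python
from datetime import datetime
from typing import Optional

def emergency_video_name(event_time: datetime, location: Optional[str] = None) -> str:
    """Build the emergency-video name from the event time and an optional location."""
    stamp = event_time.strftime("%y%m%d%H%M")  # 15:23 on 2022-10-19 -> "2210191523"
    return f"{location} {stamp}" if location else stamp

# Example matching the text above:
# emergency_video_name(datetime(2022, 10, 19, 15, 23), "xx expressway toll station")
# -> "xx expressway toll station 2210191523"
```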
In some embodiments, the vehicle may also obtain a target video frame from the emergency video and set it as the video cover of the emergency video. The target video frame records the environmental information at the moment the vehicle entered the emergency state, that is, it is the video frame captured by the image acquisition device at the target moment. In this way, a user who views the video cover of the emergency video can quickly understand its content.
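Extracting the cover frame amounts to seeking to the target moment inside the emergency video and saving that frame. The sketch below uses OpenCV as one possible implementation; the embodiments do not prescribe a specific library.

```python
import cv2  # OpenCV, assumed available; the embodiments do not name a library

def extract_cover_frame(video_path: str, offset_seconds: float, cover_path: str) -> bool:
    """Save the frame captured at the target moment as the cover of the emergency video."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, offset_seconds * 1000.0)  # seek to the target moment
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(cover_path, frame)
    cap.release()
    return ok
```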
Referring to fig. 4, a flowchart of a video processing method according to an embodiment of the application is shown. The method comprises the following procedures.
In step S401, N videos are continuously collected by the image collecting device, and the N videos are stored in the first designated folder.
The video in the first designated folder is deleted if the storage time period is greater than or equal to a preset time period.
In step S402, after the vehicle is monitored to be in an emergency state, a target video is acquired from the N videos based on time information of the vehicle entering the emergency state.
Step S403, in the case that a plurality of target videos exist, performing splicing processing on the plurality of target videos to obtain an emergency video, and storing the emergency video in a second designated folder.
The emergency video is used for recording the environmental information of the vehicle before and after it entered the emergency state. Files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than a user-triggered deletion instruction.
Step S404, after receiving the playing instruction for the emergency video, playing the emergency video.
In some embodiments, the vehicle is provided with a touch-controllable central control screen, the central control screen displays the name of the emergency video, and when a trigger signal for the name of the emergency video is acquired, the electronic controller receives the playing instruction. The trigger signal may be any one of a single click trigger signal, a double click trigger signal, and a long press trigger signal. In some embodiments, the vehicle is provided with a non-touch center control screen and an operable control, and a user can trigger a play instruction for the emergency video through triggering operation of the operable control. In other embodiments, the play command is a voice signal, the electronic controller collects the voice signal through the sound collection device, and receives the play command when detecting that the voice signal includes a specified keyword.
Step S405, in the process of playing the emergency video, displays a playing progress bar of the emergency video.
The playing progress bar is used for indicating the playing progress of the video in the emergency state. The play progress bar includes a target mark. The target mark is used for marking the video frame acquired by the image acquisition device when the vehicle enters an emergency state.
Referring to fig. 5 in combination, a schematic view of playing an emergency video according to an embodiment of the present application is shown, where a central control screen displays a playing progress bar 51, and the playing progress bar 51 includes a target mark 52 during playing the emergency video.
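The position of the target mark 52 on the playing progress bar 51 can be computed as the offset of the target moment within the spliced emergency video, divided by its total duration. A minimal sketch, with parameter names chosen for illustration:

```python
def target_mark_fraction(second_duration: float,
                         offset_in_first: float,
                         total_duration: float) -> float:
    """Fraction of the progress bar at which the target mark should be drawn.

    second_duration: length of the second target video placed before the first one
    offset_in_first: offset of the target moment inside the first target video
    total_duration:  total length of the spliced emergency video
    """
    return (second_duration + offset_in_first) / total_duration

# Example: a 60 s clip before the event and the target moment 25 s into the first
# target video of a 180 s emergency video -> mark at about 47% of the bar.
# target_mark_fraction(60.0, 25.0, 180.0)  # ~0.472
```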
Step S406, after receiving the trigger signal for the target mark, displaying the video frame acquired by the image acquisition device when the vehicle enters the emergency state.
The trigger signal may be any one of a single-click trigger signal, a double-click trigger signal, or a long-press trigger signal from the user for the target mark; in the embodiment of the present application, the single-click trigger signal is used as an example for illustration. In addition, the trigger signal may also be a voice signal.
In the embodiment of the application, after the user triggers the target mark through the trigger signal, the central control screen jumps to display the video frame captured by the image acquisition device when the vehicle entered the emergency state. In this way, the picture at the moment the vehicle entered the emergency state can be located quickly, which improves search efficiency. Referring again to fig. 5, after the user triggers the target mark 52, the central control screen displays the video frame 53 captured by the image acquisition device when the vehicle entered the emergency state.
In summary, according to the technical solution provided by the embodiments of the application, the target mark is displayed on the playing progress bar while the emergency video is being played, and after the user triggers the target mark, the central control screen jumps to display the video frame captured by the image acquisition device when the vehicle entered the emergency state, so that the picture at that moment can be located quickly and search efficiency is improved.
In some embodiments, the vehicle further monitors whether the ratio of the electronic controller's available hardware resources to its total hardware resources is less than a preset ratio, and only then performs the subsequent video processing steps to obtain the emergency video. The preset ratio is set experimentally or empirically. A ratio of available to total hardware resources below the preset ratio indicates that the electronic controller's available hardware resources are insufficient; in that case, the videos captured by the image acquisition device in the cyclic recording mode can be reused to reduce the consumption of the electronic controller's computing power and hardware resources, so that it has enough hardware resources to process other services and stuttering is reduced.
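A minimal sketch of such a resource check, assuming the controller runs an environment where the third-party psutil package is available and treating both free memory and idle CPU as the "available hardware resources"; the preset ratio is an assumed value:

```python
import psutil  # third-party package, assumed available on the electronic controller

PRESET_RATIO = 0.30  # assumed "preset ratio"; set by experiment or experience

def resources_low() -> bool:
    """Return True when available hardware resources fall below the preset ratio."""
    mem = psutil.virtual_memory()
    mem_available_ratio = mem.available / mem.total
    cpu_idle_ratio = 1.0 - psutil.cpu_percent(interval=0.1) / 100.0
    return min(mem_available_ratio, cpu_idle_ratio) < PRESET_RATIO
```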
Referring to fig. 6, a block diagram of a video processing apparatus according to an embodiment of the application is shown. The apparatus comprises: a video collection module 610, a first storage module 620, a video acquisition module 630, a video processing module 640 and a second storage module 650.
The video collection module 610 is configured to continuously collect N videos through the image acquisition device, where N is a positive integer greater than 2.
The first storage module 620 is configured to store N videos in a first designated folder, where the videos in the first designated folder are deleted when the storage time period is greater than or equal to a preset time period.
The video acquisition module 630 is configured to acquire, after detecting that the vehicle is in an emergency state, a target video from a plurality of videos based on time information of the vehicle entering the emergency state.
The video processing module 640 is configured to, when there are a plurality of target videos, splice the target videos to obtain an emergency video, where the emergency video is used to record environmental information of the vehicle before and after entering the emergency state.
And a second storage module 650 for storing the emergency video in a second designated folder, wherein the files in the second designated folder do not respond to the designated deletion instruction, and the designated deletion instruction is other deletion instructions than the user-triggered deletion instruction.
In summary, according to the technical solution provided by the embodiments of the application, after the vehicle enters an emergency state, three target videos are obtained from the plurality of videos captured by the image acquisition device in the cyclic recording mode and spliced into an emergency video that records the environmental information before and after the vehicle entered the emergency state. Because the videos captured in the cyclic recording mode are reused when obtaining the emergency video, the video splicing process consumes only a small amount of the electronic controller's computing power, which reduces the consumption of its hardware resources, so the electronic controller has enough hardware resources to process other services in the vehicle and stuttering is reduced.
In some embodiments, the time information of the vehicle entering the emergency state includes a target moment, and the target videos include a first target video, a second target video and a third target video. The video acquisition module 630 is configured to: after it is monitored that the vehicle is in an emergency state, determine, as the first target video, the video among the N videos whose acquisition time includes the target moment; determine, as the second target video, a video among the N videos whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is less than the first time interval; and determine, as the third target video, a video among the N videos whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is less than the second time interval.
In some embodiments, the video acquisition module 630 is configured to determine, as the second target video, the video among the N videos whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is smallest; and determine, as the third target video, the video among the N videos whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is smallest.
In some embodiments, the apparatus further comprises a playing module (not shown). The playing module is configured to: play the emergency video after receiving a playing instruction for the emergency video; display, while the emergency video is being played, a playing progress bar of the emergency video, where the playing progress bar includes a target mark used for indicating the position, in the emergency video, of the video frame captured by the image acquisition device when the vehicle entered the emergency state; and display, after receiving a trigger signal for the target mark, the video frame captured by the image acquisition device when the vehicle entered the emergency state.
In some embodiments, the video processing module 640 is configured to splice the second target video, the first target video and the third target video end to end in order of acquisition time to obtain the emergency video.
In some embodiments, the apparatus further comprises a naming module (not shown in the figures). The naming module is configured to set the name of the emergency video based on the time information of the vehicle entering the emergency state.
In some embodiments, the naming module is configured to: obtain the position information of the vehicle in the emergency state through a positioning module; and set the name of the emergency video based on the time information and the position information of the vehicle entering the emergency state.
In some embodiments, the apparatus further comprises a cover setting module (not shown). The cover setting module is configured to: obtain a target video frame from the emergency video, where the target video frame records the environmental information at the moment the vehicle entered the emergency state; and set the target video frame as the cover of the emergency video.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In several embodiments provided by the present application, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 7, there is shown a vehicle 700 according to an embodiment of the present application, the vehicle 700 including: one or more processors 710, memory 720, and one or more application programs. Wherein one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the methods described in the above embodiments.
The processor 710 may include one or more processing cores. The processor 710 uses various interfaces and lines to connect the various parts of the vehicle 700, and performs various functions and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 720 and by invoking data stored in the memory 720. Optionally, the processor 710 may be implemented in hardware as at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA) and a Programmable Logic Array (PLA). The processor 710 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 710 and may instead be implemented by a separate communication chip.
The memory 720 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 720 may be used to store instructions, programs, code, code sets or instruction sets. The memory 720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like. The data storage area may store data created by the device during use (such as a phonebook, audio and video data, chat log data), and the like.
Referring to fig. 8, an embodiment of the present application is further provided with a computer readable storage medium 800, where the computer readable storage medium 800 stores computer program instructions 810, and the computer program instructions 810 may be invoked by a processor to perform the method described in the above embodiment.
The computer readable storage medium 800 may be, for example, a flash memory, an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a hard disk, or a Read-Only Memory (ROM). Optionally, the computer readable storage medium 800 comprises a non-transitory computer readable storage medium. The computer readable storage medium 800 has storage space for computer program instructions 810 that perform any of the method steps described above. These computer program instructions 810 may be read from or written into one or more computer program products.
Although the present application has been described in terms of the preferred embodiments, it should be understood that the present application is not limited to the specific embodiments, but is capable of numerous modifications and equivalents, and alternative embodiments and modifications of the embodiments described above, without departing from the spirit and scope of the present application.

Claims (9)

1. A method of video processing, the method comprising:
continuously acquiring N videos through an image acquisition device, and storing the N videos in a first designated folder, wherein N is a positive integer greater than 2, and the videos in the first designated folder are deleted when the storage time length is greater than or equal to a preset time length;
after it is monitored that the vehicle enters an emergency state, determining, as a first target video, a video among the N videos whose acquisition time includes a target moment, the target moment being the moment at which the vehicle enters the emergency state;
determining, as a second target video, a video among the N videos whose acquisition time is before the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is less than a first time interval;
determining, as a third target video, a video among the N videos whose acquisition time is after the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is less than a second time interval; and
splicing the first target video, the second target video and the third target video to obtain an emergency video, and storing the emergency video in a second designated folder, wherein files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than a user-triggered deletion instruction, and the emergency video is used for recording environmental information of the vehicle before and after entering the emergency state.
2. The method of claim 1, wherein determining, as the second target video, a video among the N videos whose acquisition time is before the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is less than the first time interval comprises:
determining, as the second target video, the video among the N videos whose acquisition time is before the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is smallest;
and wherein determining, as the third target video, a video among the N videos whose acquisition time is after the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is less than the second time interval comprises:
determining, as the third target video, the video among the N videos whose acquisition time is after the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is smallest.
3. The method according to claim 1 or 2, wherein after performing the splicing process on the first target video, the second target video, and the third target video to obtain an emergency video, the method further comprises:
playing the emergency video after receiving a playing instruction aiming at the emergency video;
displaying a playing progress bar of the emergency video in the process of playing the emergency video, wherein the playing progress bar comprises a target mark, and the target mark is used for indicating the position, in the emergency video, of a video frame acquired by the image acquisition device when the vehicle enters the emergency state;
and after receiving the trigger signal aiming at the target mark, playing the video frame acquired by the image acquisition device when the vehicle enters the emergency state.
4. The method according to claim 1 or 2, wherein after performing the splicing process on the first target video, the second target video, and the third target video to obtain an emergency video, the method further comprises:
setting the name of the emergency video based on the time information of the vehicle entering the emergency state.
5. The method according to claim 4, wherein the method further comprises:
acquiring position information of the vehicle in the emergency state through a positioning module;
The setting of the name of the emergency video based on the time information of the vehicle entering the emergency state includes:
And setting the name of the emergency video based on the time information of the vehicle entering the emergency state and the position information of the vehicle entering the emergency state.
6. The method according to claim 1 or 2, wherein after performing the splicing process on the first target video, the second target video, and the third target video to obtain an emergency video, the method further comprises:
Acquiring a target video frame from the emergency state video, wherein the target video frame is used for recording environment information when the vehicle enters the emergency state;
And setting the target video frame as a cover of the emergency video.
7. A video processing apparatus, the apparatus comprising:
The video collection module is used for continuously collecting N videos through the image acquisition device, wherein N is a positive integer greater than 2;
The first storage module is used for storing N videos in a first designated folder, and the videos in the first designated folder are deleted when the storage time length is greater than or equal to a preset time length;
The video acquisition module is used for determining videos with the acquisition time including target moments in N videos as first target videos after the vehicle is monitored to enter an emergency state; the target time is the time when the vehicle enters the emergency state; determining a video with the acquisition time of N videos being before the acquisition time of the first target video and the time interval between the N videos and the acquisition time of the first target video being smaller than the first time interval as a second target video; determining a video with the acquisition time of N videos being after the acquisition time of the first target video and the time interval between the N videos and the acquisition time of the first target video being smaller than a second time interval as a third target video;
The video processing module is used for performing splicing processing on the first target video, the second target video and the third target video to obtain an emergency state video, wherein the emergency state video is used for recording environment information of the vehicle before and after entering the emergency state;
And the second storage module is used for storing the emergency video in a second designated folder, wherein the files in the second designated folder do not respond to designated deleting instructions, and the designated deleting instructions are other deleting instructions except the deleting instructions triggered by a user.
8. A vehicle, characterized by comprising:
one or more processors;
a memory;
An image acquisition device;
One or more applications, wherein one or more of the applications are stored in the memory and configured to be executed by one or more of the processors, the one or more applications configured to perform the video processing method of any of claims 1-6.
9. A computer readable storage medium having stored therein computer program instructions that are callable by a processor to perform the video processing method according to any one of claims 1-6.
CN202211395823.XA 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium Active CN115720253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211395823.XA CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211395823.XA CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN115720253A CN115720253A (en) 2023-02-28
CN115720253B true CN115720253B (en) 2024-05-03

Family

ID=85255065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211395823.XA Active CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115720253B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028435B (en) * 2023-03-30 2023-07-21 深圳市深航华创汽车科技有限公司 Data processing method, device and equipment of automobile data recorder and storage medium
CN116798144A (en) * 2023-04-18 2023-09-22 润芯微科技(江苏)有限公司 Collision video storage method, system, device and computer readable storage medium
CN116740837A (en) * 2023-06-25 2023-09-12 广东省安全生产技术中心有限公司 Black box for whole process tracing of limited space operation

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012003607A (en) * 2010-06-18 2012-01-05 Yazaki Corp Drive recorder for vehicle and recorded information management method
CN102722574A (en) * 2012-06-05 2012-10-10 深圳市中兴移动通信有限公司 Device and method for naming photo/video file on basis of shooting position and time
CN106027934A (en) * 2016-07-13 2016-10-12 深圳市爱培科技术股份有限公司 Vehicle driving video storing method and system based on rearview mirror
CN107564130A (en) * 2016-07-02 2018-01-09 上海卓易科技股份有限公司 Driving recording method and drive recorder, mobile terminal
CN110381357A (en) * 2019-08-15 2019-10-25 杭州鸿晶自动化科技有限公司 A kind of processing method of driving recording video
CN110570542A (en) * 2019-08-08 2019-12-13 北京汽车股份有限公司 Video recording method, device, vehicle and machine readable storage medium
CN114640823A (en) * 2022-02-22 2022-06-17 东风汽车集团股份有限公司 Emergency video recording method based on cockpit domain controller
CN114783180A (en) * 2022-04-07 2022-07-22 合众新能源汽车有限公司 Vehicle collision accident recording method and system and vehicle

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012003607A (en) * 2010-06-18 2012-01-05 Yazaki Corp Drive recorder for vehicle and recorded information management method
CN102722574A (en) * 2012-06-05 2012-10-10 深圳市中兴移动通信有限公司 Device and method for naming photo/video file on basis of shooting position and time
CN107564130A (en) * 2016-07-02 2018-01-09 上海卓易科技股份有限公司 Driving recording method and drive recorder, mobile terminal
CN106027934A (en) * 2016-07-13 2016-10-12 深圳市爱培科技术股份有限公司 Vehicle driving video storing method and system based on rearview mirror
CN110570542A (en) * 2019-08-08 2019-12-13 北京汽车股份有限公司 Video recording method, device, vehicle and machine readable storage medium
CN110381357A (en) * 2019-08-15 2019-10-25 杭州鸿晶自动化科技有限公司 A kind of processing method of driving recording video
CN114640823A (en) * 2022-02-22 2022-06-17 东风汽车集团股份有限公司 Emergency video recording method based on cockpit domain controller
CN114783180A (en) * 2022-04-07 2022-07-22 合众新能源汽车有限公司 Vehicle collision accident recording method and system and vehicle

Also Published As

Publication number Publication date
CN115720253A (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN115720253B (en) Video processing method, device, vehicle and storage medium
US20110077819A1 (en) Data management device, data reading method and computer-readable medium
EP4036873A1 (en) Recording control device, recording control method, and recording control program
CN108202696A (en) Vehicle glazing defogging control method, device and electronic equipment
CN111445599A (en) Automatic short video generation method and device for automobile data recorder
JP4445493B2 (en) Driving assistance device
CN113071511A (en) Method and device for displaying reverse image, electronic equipment and storage medium
JP2011091667A (en) Drive recorder, recording method, data processor, data processing method, and program
US10279793B2 (en) Understanding driver awareness through brake behavior analysis
US11734967B2 (en) Information processing device, information processing method and program
CN108429817A (en) One bulb vehicle vehicle-mounted terminal system
CN112543937A (en) Data processing method, device and equipment
JP2007141212A (en) Driving assisting method and driving assisting device
JP5085693B2 (en) Driving support device and driving support method
CN113791841A (en) Execution instruction determining method, device, equipment and storage medium
CN110356346B (en) Vehicle-mounted function pushing system and method for vehicle
CN114636568B (en) Test method and device for automatic emergency braking system, vehicle and storage medium
CN110562262B (en) Vehicle motion state determination method and device, storage medium and vehicle
CN114743290A (en) Driving record control method and device and automobile
CN116028435B (en) Data processing method, device and equipment of automobile data recorder and storage medium
CN114103944B (en) Workshop time interval adjusting method, device and equipment
JP2011028748A (en) Driving support device, driving support system, driving support software, and driving support method
KR102121283B1 (en) Setting method for a car black box standby screen information
CN115147810A (en) Vehicle attribute tracking method, system, equipment and medium in panoramic environment
JP2009295189A (en) Drive support device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant