CN113689707A - Video data processing method, device and computer readable storage medium - Google Patents


Info

Publication number
CN113689707A
CN113689707A
Authority
CN
China
Prior art keywords
image
type
current frame
frame image
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110821437.1A
Other languages
Chinese (zh)
Other versions
CN113689707B (en)
Inventor
郝李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110821437.1A priority Critical patent/CN113689707B/en
Publication of CN113689707A publication Critical patent/CN113689707A/en
Application granted granted Critical
Publication of CN113689707B publication Critical patent/CN113689707B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video data processing method, a device, and a computer-readable storage medium, wherein the method includes: a coprocessor obtains the sub-image data of one channel in the current frame image copied by a signal duplicator; it then determines the type of the current frame image according to the sub-image data and sends the type to a central processor, so that the central processor classifies and processes the received current frame image according to the type. This design retains the advantages of multi-shutter capture, greatly reduces the performance requirements on the coprocessor, and keeps a strong timing correlation between the synchronization information output to the image sensor and the video data acquired by the central processor.

Description

Video data processing method, device and computer readable storage medium
Technical Field
The present application relates to the field of intelligent transportation technologies, and in particular, to a video data processing method, apparatus, and computer-readable storage medium.
Background
In the field of intelligent transportation, the commonly adopted capture scheme is the double-shutter or triple-shutter snapshot scheme: different shutter modes are used to acquire image data of different qualities according to actual needs. For example, video shutter mode data is used for ordinary video surveillance; snapshot shutter mode data is used for violation forensics; picture shutter mode data is used for vehicle recognition and tracking. The image data collected by the image sensor therefore has to be classified. The coprocessor acquires the image data output by the image sensor, adds the current shutter mode and snapshot information at an agreed position, and outputs the result to the central processor; after acquiring the image with this additional information, the central processor processes it according to the classification.
At present, low-cost intelligent traffic capture schemes mainly rely on the coprocessor transmitting pseudo video data information to the central processor, and the central processor distinguishes the shutter mode of the received image data according to that pseudo video data information. However, as the resolution and frame rate of the image output by the image sensor keep increasing, the number of image sensor output signals grows as well, which places high requirements on core parameters of the selected coprocessor such as operating frequency, number of logic units, and pin count, and means that the choice of coprocessor under the current scheme is heavily constrained. A new video data processing method is therefore needed to solve these problems.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a video data processing method, apparatus, and computer-readable storage medium, which enable a coprocessor to control the driving and exposure of the image sensor and the gain control and processing of peripheral components in real time, while only needing to parse and forward the data signal of one channel.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided a video data processing method including: the coprocessor obtains sub-image data of one channel in the current frame image copied by the signal copier; and determining the type of the current frame image according to the sub-image data, and sending the type to a central processing unit so that the central processing unit classifies and processes the received current frame image according to the type.
Wherein the step of sending the type to a central processor includes: sending the type of the current frame image to the central processor in binary form, wherein the type is a video frame, a picture frame, or a snapshot frame.
Wherein a first transmission line and a second transmission line arranged in parallel are provided between the coprocessor and the central processor; the step of sending the type of the current frame image to the central processor in binary form includes: in response to the type of the current frame image being a video frame, causing the first transmission line to transmit a low-level signal and the second transmission line to transmit a low-level signal; in response to the type of the current frame image being a picture frame, causing the first transmission line to transmit a low-level signal and the second transmission line to transmit a high-level signal; and in response to the type of the current frame image being a snapshot frame, causing the first transmission line to transmit a high-level signal and the second transmission line to transmit a low-level signal.
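The two-line level encoding described above can be sketched as follows. This is an illustrative model only: the application specifies the level combinations, while the function names, constants, and lookup-table structure are assumptions.

```python
# Frame types named in the application; the string values are illustrative.
VIDEO_FRAME, PICTURE_FRAME, SNAPSHOT_FRAME = "video", "picture", "snapshot"

# (first transmission line level, second transmission line level),
# per the level combinations given in the description.
ENCODING = {
    VIDEO_FRAME:    (0, 0),  # both lines low
    PICTURE_FRAME:  (0, 1),  # first low, second high
    SNAPSHOT_FRAME: (1, 0),  # first high, second low
}

DECODING = {levels: t for t, levels in ENCODING.items()}

def encode_type(frame_type):
    """Levels the coprocessor drives on the two lines for a frame type."""
    return ENCODING[frame_type]

def decode_type(line1, line2):
    """Frame type the central processor reads from the two line levels."""
    return DECODING[(line1, line2)]
```

Note that the fourth combination (both lines high) is left unassigned by the description, so `decode_type(1, 1)` deliberately has no mapping here.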
Wherein the step of determining the type of the current frame image according to the sub-image data comprises: matching the image parameters issued to the image sensor with the sub-image data to determine the type of the sub-image data; and the type of the sub-image data is the type of the current frame image.
Wherein, before the step of determining the type of the current frame image according to the sub-image data, the method comprises: generating a horizontal/field synchronizing signal according to a preset condition, and simultaneously issuing the image parameter to the image sensor at the moment of generating the horizontal/field synchronizing signal; the preset condition is related to the image resolution and the image frame rate.
Wherein, after the step of generating the line/field sync signal according to the preset condition, the method comprises: sending the line/field synchronizing signal to the image sensor, and controlling the image sensor to output a frame starting signal of the current frame image after delay time; wherein the delay time is determined by a register value of the image sensor.
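As a rough illustration of how the preset condition ties the line/field synchronization signal to image resolution and frame rate, the following sketch derives field and line periods. The helper name is invented, and the example's total line count is a placeholder, since real totals include sensor-specific blanking lines:

```python
def sync_periods(total_lines, frame_rate_hz):
    """Return (field period, line period) in seconds for a given total line
    count per frame and frame rate. Hypothetical helper: the application only
    states that the preset condition relates to resolution and frame rate."""
    field_period = 1.0 / frame_rate_hz       # one field sync per frame
    line_period = field_period / total_lines  # one line sync per line
    return field_period, line_period
```

For example, a hypothetical 1125-line timing at 25 fps gives a 40 ms field period.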
After the step of determining the type of the current frame image according to the sub-image data, the method further includes: marking the type of the current frame image at the falling edge of the frame synchronization signal of the previous frame image to obtain the type of the current frame image.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a video data processing method including: a signal duplicator obtains the current frame image acquired by the image sensor, copies the sub-image data of all channels in the current frame image, and copies the sub-image data of one channel into two copies; sends one of the two copies of sub-image data to a coprocessor, so that the coprocessor determines the type of the current frame image according to the sub-image data; and sends the copied sub-image data of all channels to a central processor, so that the central processor classifies and processes the current frame image after receiving its type.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a video data processing method including: the central processing unit obtains sub-image data of all channels in the current frame image copied by the signal copier and the type of the current frame image transmitted by the coprocessor; and classifying and processing the received current frame image according to the type.
Wherein the step of classifying and processing the received current frame image according to the type includes: in response to the type being a video frame, inputting the current frame image to an encoding and display module for processing; in response to the type being a picture frame, inputting the current frame image to an intelligent algorithm module for processing; and in response to the type being a snapshot frame, inputting the current frame image to a forensics and image overlay module for processing.
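The classification step above can be sketched as a dispatch table. The dictionary keys are hypothetical handles for the three modules named in the text (encoding/display, intelligent algorithm, forensics/overlay), not an actual API:

```python
def dispatch(frame_type, frame, modules):
    """Route a received frame to the module matching its type.

    `modules` maps hypothetical module names to callables; the application
    names the modules but does not define a programming interface."""
    if frame_type == "video":
        modules["encode_display"](frame)        # ordinary video surveillance
    elif frame_type == "picture":
        modules["intelligent_algorithm"](frame)  # recognition and tracking
    elif frame_type == "snapshot":
        modules["forensics_overlay"](frame)      # violation forensics
    else:
        raise ValueError(f"unknown frame type: {frame_type}")
```

A caller would register one callable per module and invoke `dispatch` once per received frame.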
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a video data processing apparatus including: the system comprises a coprocessor, an image sensor, a signal duplicator and a central processing unit; the coprocessor is respectively connected with the image sensor, the signal duplicator and the central processing unit, and the signal duplicator is respectively connected with the image sensor and the central processing unit; wherein, the coprocessor, the image sensor, the signal duplicator and the central processing unit are mutually matched to realize the video data processing method of any one of the above embodiments.
The coprocessor comprises a first transmission line and a second transmission line which are arranged in parallel, and the type of the current frame image is transmitted to the central processing unit by the coprocessor through the first transmission line and the second transmission line.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a computer-readable storage medium storing a computer program for implementing the video data processing method according to any one of the above embodiments.
Different from the prior art, the beneficial effects of the application are as follows: the coprocessor only needs to obtain the sub-image data of one channel in the current frame image copied by the signal duplicator, can determine the type of the current frame image according to the sub-image data, and sends the type to the central processor, so that the central processor can classify and process the received current frame image according to the type. With this design, the coprocessor no longer acquires the complete data transmitted by the image sensor and no longer needs a frame type marking module to add the type and snapshot information of the current frame image at an agreed position; it only needs to parse the image data of one channel output by the image sensor. The advantages of multi-shutter capture are thus retained, the performance requirements on the coprocessor are greatly reduced, and the timing correlation between the synchronization information output to the image sensor and the video data acquired by the central processor is strong.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic flow chart diagram of a video data processing method in the prior art;
FIG. 2 is a schematic diagram of an image data channel in the related art;
FIG. 3 is a diagram illustrating a line/frame synchronization signal of a whole frame in the related art;
FIG. 4 is a schematic structural diagram of an embodiment of a video data processing apparatus according to the present application;
FIG. 5 is a schematic flow chart diagram illustrating an embodiment of a video data processing method according to the present application;
fig. 6 is a schematic diagram of an output pattern of an image sensor in the related art;
FIG. 7 is a diagram showing a representation of a sync code in the related art;
FIG. 8 is a schematic flowchart of an embodiment of a video data processing method according to the present application;
FIG. 9 is a timing diagram of marking the image type;
FIG. 10 is a schematic flow chart diagram illustrating a video data processing method according to another embodiment of the present application;
FIG. 11 is a schematic flow chart diagram illustrating a video data processing method according to another embodiment of the present application;
fig. 12 is a block diagram showing the structure of a video data processing apparatus according to the present application;
FIG. 13 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of a video data processing method in the prior art. The central processor sends a master control signal to the coprocessor through a low-speed communication interface; the coprocessor is connected to auxiliary equipment through an auxiliary equipment control interface, generates a control signal after receiving the master control signal, and sends the control signal to the image sensor. In fig. 1, VF represents an image frame configured by the image sensor corresponding to the output interface mode. S represents video shutter mode data, mainly used for video surveillance; T represents picture shutter mode data, provided to the intelligent algorithm module for target tracking and recognition; TM represents snapshot shutter mode data, provided to the capture module for picture forensics. S_i represents image sensor data carrying video shutter mode indication information; T_i represents image sensor data carrying picture shutter mode indication information; TM_i represents image sensor data carrying snapshot shutter mode indication information. The coprocessor acquires the image data output by the image sensor, uses a frame type marking module to add the current shutter mode and snapshot information at an agreed position, and outputs the result to the central processor; after acquiring the image with this additional information, the central processor processes it according to the classification. Specifically, the Data Analyze and Distribution Center, abbreviated DADC, is a data analysis and distribution center implemented as a functional module inside the central processor.
The DADC classifies the acquired video data, performs image and video processing according to the matched multi-shutter embedded frame information, and then sends the processed video data to the next-stage module of the central processor. However, in this scheme, the coprocessor needs to complete the driving of the image sensor, real-time exposure and gain control, acquisition of the image sensor data, superposition of the shutter mode information onto the image, and output of the superposed image data. As the resolution and frame rate of the image output by the image sensor keep increasing, the selection of the coprocessor is heavily constrained by the high requirements on core parameters such as operating frequency, number of logic units, and pin count.
Referring to fig. 2 and 3, fig. 2 is a diagram illustrating the image data channels in the related art, and fig. 3 is a diagram illustrating the line/frame synchronization signal of a whole frame in the related art. Specifically, the coprocessor acquires all the high-speed video data signals and corresponding clock signals output by the image sensor, parses the data on the different data paths according to the image sensor's frame format coding protocol, and combines them into complete frame video data. Taking 1 clock and 8 data channels as an example, the coprocessor processes the image data of the 8 channels simultaneously to complete the parsing of each channel's synchronization codes and the marking of line/frame synchronization signals. Because the data of the different channels CH1-CH8 output by the image sensor are basically synchronous, in practice the line synchronization signals parsed from different channels may deviate from one another in time, generally within 2 pixels, so the line/frame synchronization signals and video data of the different channels need to be synchronized and combined into the line and frame synchronization signals of the complete picture. From another perspective, the whole-frame line/frame synchronization signal is temporally synchronized with the line/frame synchronization signal parsed from a single channel.
Under the conventional scheme, the coprocessor analyzes a line/frame synchronization signal of the whole picture, can finish the accurate classification of images of different shutter modes output by an image sensor, determines whether the type of the currently received image is a video frame, a picture frame or a capture frame, and embeds the type information of the image into a fixed position of image video data (determined by negotiation with a central processing unit). After the central processing unit collects the data with the type indication of the image, the video data are classified, and the image data of different types are sent to different processing modules for processing.
Referring to fig. 4 and fig. 5, fig. 4 is a schematic structural diagram of an embodiment of a video data processing apparatus according to the present application, and fig. 5 is a schematic flow structural diagram of an embodiment of a video data processing method according to the present application. The video data processing apparatus includes a coprocessor 10, an image sensor 12, a signal duplicator 14, and a central processor 16. Specifically, in the present embodiment, the coprocessor 10 is connected to the image sensor 12, the signal duplicator 14 and the central processing unit 16, respectively, and the signal duplicator 14 is connected to the image sensor 12 and the central processing unit 16, respectively. Specifically, the coprocessor 10, the image sensor 12, the signal duplicator 14, and the central processor 16 cooperate to implement the video data processing method of the present application of fig. 5. Specifically, the coprocessor 10 may be an FPGA, a CPLD, or the like, and the present application is not limited thereto.
Specifically, in the present embodiment, a first transmission line 100 and a second transmission line 102 are disposed in parallel between the coprocessor 10 and the central processor 16, and the coprocessor 10 transmits the type of the current frame image to the central processor 16 through the first transmission line 100 and the second transmission line 102.
In addition, in the present embodiment, the coprocessor 10 configures the exposure and gain parameters of each frame of image data of the image sensor 12 in real time, and the image sensor 12 outputs video data signals and clock signals on a plurality of channels. Specifically, referring to fig. 6 and 7, fig. 6 is a schematic diagram of an output mode of an image sensor in the related art, and fig. 7 is a schematic diagram of the representation of synchronization codes in the related art. As shown in fig. 6, CH1-CH8 are 8 channels output by the image sensor 12, where CH1-CH8 share one set of clock signals, and SAV1-SAV4 and EAV1-EAV4 in fig. 6 represent synchronization codes (sync codes) indicating the line and field synchronization signals of a video image, i.e., the start and end of a line or the start and end of a frame. As shown in fig. 7, the synchronization codes of different sensors may differ, but they can be distinguished from the actual data. SAV (valid line) indicates the start position of the active-line pixels in the video frame, and EAV (valid line) indicates their end position; SAV (invalid line) indicates the start position of the inactive-line (blanking-line) pixels in the video frame, and EAV (invalid line) indicates their end position. Specifically, the 8 sensor output channels CH1-CH8 output different columns of the same sensor row at the same time; for example, CH1 outputs the first, ninth, and seventeenth pixels ... of each row; CH2 outputs the second, tenth, and eighteenth pixels ... of each row; ...; CH8 outputs the eighth, sixteenth, and twenty-fourth pixels ... of each row.
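The column-interleaved channel layout described for CH1-CH8 can be modeled as follows. The function names are illustrative, and the code uses 0-based pixel indices while the text counts from one:

```python
def split_row_into_channels(row, num_channels=8):
    """Model of the sensor output: channel k carries pixels k, k+8, k+16, ...
    of each row (0-based here; the description counts pixels from one)."""
    return [row[k::num_channels] for k in range(num_channels)]

def merge_channels(channels):
    """Inverse operation, as performed when combining the channels back into
    complete frame video data: interleave the channel streams into one row."""
    row = []
    for group in zip(*channels):  # one pixel from each channel per step
        row.extend(group)
    return row
```

Round-tripping a row through both functions recovers the original pixel order, which is why the whole frame can be reassembled from the per-channel streams.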
A video data processing method by the above-described video data processing apparatus will be described below from the viewpoint of the coprocessor 10.
Referring to fig. 8, fig. 8 is a flowchart illustrating a video data processing method according to an embodiment of the present application. The video data processing method comprises the following steps:
S10: the coprocessor obtains the sub-image data of one channel in the current frame image copied by the signal duplicator.
Specifically, referring to fig. 2 and 5, the coprocessor only acquires the sub-image data of one channel in the current frame image copied by the signal copier at port 4, taking channel 1 as an example, and acquires and analyzes the line/frame synchronization signal of the current channel.
S11: and determining the type of the current frame image according to the sub-image data, and sending the type to a central processing unit so that the central processing unit classifies and processes the received current frame image according to the type.
Through the design mode, the coprocessor does not acquire complete data transmitted by the image sensor any more, does not need to set a position to add the type and the snapshot information of the current frame image by using the frame type marking module, and only needs to analyze the image data of one channel output by the image sensor, so that the advantages of multi-shutter snapshot can be maintained, and the performance requirement on the coprocessor can be greatly reduced.
Specifically, in the present embodiment, the step of sending the type to the central processor in step S11 includes: sending the type of the current frame image to the central processor in binary form. In particular, the type is a video frame, a picture frame, or a snapshot frame. Because the line/frame synchronization signal of the whole picture is temporally synchronized with the line/frame synchronization signal parsed from a single channel, the classification of the current frame image's type can be completed, determining whether the current frame image is a video frame, a picture frame, or a snapshot frame.
Specifically, in the present embodiment, please continue to refer to fig. 5, a first transmission line and a second transmission line are included between the coprocessor and the central processing unit, and the type of the current frame image is transmitted to the central processing unit through the first transmission line and the second transmission line by the conversion of the IO signal. Specifically, the step of sending the type of the current frame image to the central processor in a binary manner includes: when the type of the current frame image is a video frame, enabling the first transmission line to transmit low-level signals and the second transmission line to transmit low-level signals; when the type of the current frame image is a picture frame, enabling the first transmission line to transmit low-level signals and the second transmission line to transmit high-level signals; and when the type of the current frame image is a capture frame, enabling the first transmission line to transmit high-level signals and the second transmission line to transmit low-level signals.
Specifically, in the present embodiment, the step of determining the type of the current frame image from the sub-image data in step S11 includes: and matching the image parameters sent to the image sensor with the sub-image data to determine the type of the sub-image data. Specifically, since the line/frame synchronization signal of the entire frame is temporally synchronized with the line/frame synchronization signal analyzed by the single channel, the type of the sub-image data is the same as that of the current frame image, that is, the type of the sub-image data is the type of the current frame image. Through the design, the time sequence correlation between the synchronous information output to the image sensor and the video data collected by the central processing unit is strong.
Specifically, in this embodiment, the step of determining the type of the current frame image from the sub-image data in step S11 is preceded by: and generating a line/field synchronizing signal according to a preset condition, and simultaneously issuing the image parameters to the image sensor at the time of generating the line/field synchronizing signal. The preset condition is related to the image resolution and the image frame rate. In addition, in this embodiment, after the step of generating the line/field sync signal according to the preset condition, the method includes: and sending the line/field synchronizing signal to the image sensor, and controlling the image sensor to output a frame starting signal of the current frame image after delay time. Specifically, the delay time is determined by a register value of the image sensor.
Specifically, the coprocessor generates the line/field synchronization signal itself according to the required image resolution and frame rate and sends it to the image sensor; at the moment of generating the line/field synchronization signal, the coprocessor typically issues image-related parameters such as exposure and gain to the image sensor. After receiving the coprocessor's line/field synchronization signal, the image sensor begins to output the frame start signal of the video image after a delay time. Different image sensors have different delays between receiving the line/field synchronization signal and outputting the frame start signal; that is, after receiving the exposure and gain parameters at the coprocessor's field synchronization signal, a sensor outputs the image corresponding to those parameters one or more frames later, the specific delay being determined by the sensor's register values. Suppose the current sensor reliably outputs the image corresponding to a set of exposure and gain parameters at the second frame after receiving them at the coprocessor's field synchronization signal, and suppose the parameters issued at a given moment are video frame parameters; then the frame whose start signal is the second one collected by the coprocessor after that issuing moment is a video frame. The coprocessor identifies the type of the sampled current frame image according to this principle.
Specifically, in this embodiment, after the step of determining the type of the current frame image from the sub-image data in step S11, the method further includes: marking the type of the current frame image at the falling edge of the frame synchronization signal of the previous frame image, thereby obtaining the type of the current frame image. Specifically, referring to fig. 9, fig. 9 is a timing diagram of marking the image type. Because the central processor and the coprocessor acquire the image sensor channel data simultaneously, by the time the central processor side parses the frame start mark of the current frame image, that is, the rising edge of the frame synchronization signal, the coprocessor has already marked the type of the current frame image. In general, depending on the contents of the sensor's configuration-effective register, the coprocessor can know in advance the types of at least the current frame image and the next frame image.
A video data processing method performed by the above-described video data processing apparatus will be described below from the perspective of the signal duplicator 14.
Referring to fig. 10, fig. 10 is a schematic flow chart illustrating a video data processing method according to another embodiment of the present application. The video data processing method comprises the following steps:
S20: the signal duplicator obtains a current frame image acquired by the image sensor, duplicates the sub-image data of all channels in the current frame image, and duplicates the sub-image data of one channel into two copies.
S21: sending one of the two copies of the sub-image data to a coprocessor, so that the coprocessor determines the type of the current frame image from the sub-image data; and sending the duplicated sub-image data of all channels to a central processor, so that the central processor classifies and processes the current frame image according to the type after receiving the type.
Specifically, referring to fig. 5, the signal duplicator duplicates the sub-image data signals of all channels and the data-channel clock signal in the current frame image, and duplicates the sub-image data signal of one channel together with its data-channel clock signal into at least two copies: one copy is sent to port 1 of the central processing unit together with the other clock and video data signals of the image sensor, and the other copy is sent to port 4 of the coprocessor. As can be seen from the description of the channel synchronization code embedding of the image sensor in figs. 6 and 7, every channel of the image sensor uses the same synchronization-code encoding, so the start/stop position and line-start position of the current frame of image data can be obtained by parsing any single channel of the image sensor. With this design, the timing correlation between the synchronization information output to the image sensor and the video data collected by the central processing unit is strong.
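Because every channel carries the same synchronization-code encoding, parsing a single channel suffices to locate frame and line boundaries. A minimal sketch, with hypothetical sync-code values standing in for the sensor's actual codes:

```python
# Hypothetical sync-code words; a real sensor defines its own codes
# (e.g. SAV/EAV-style markers embedded in each channel's data stream).
FRAME_START, FRAME_END, LINE_START = 0xAB00, 0xAB01, 0xAB02

def scan_channel(words):
    """Scan one channel's word stream and report where frames and lines
    begin or end. Any single channel suffices because all channels use
    the same synchronization-code encoding."""
    events = []
    for pos, w in enumerate(words):
        if w == FRAME_START:
            events.append(("frame_start", pos))
        elif w == FRAME_END:
            events.append(("frame_end", pos))
        elif w == LINE_START:
            events.append(("line_start", pos))
    return events
```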
With this design, while driving and controlling the image sensor and peripheral components in real time, the coprocessor only needs to parse the image data of one channel output by the image sensor, which reduces the requirements on the coprocessor's performance, pin count, resources, and other indexes.
A video data processing method performed by the above-described video data processing apparatus will be described below from the perspective of the central processor 16.
Referring to fig. 11, fig. 11 is a flowchart illustrating a video data processing method according to another embodiment of the present application. The video data processing method comprises the following steps:
S30: the central processor obtains the sub-image data of all channels in the current frame image duplicated by the signal duplicator, and the type of the current frame image transmitted by the coprocessor.
Specifically, please continue to refer to fig. 5: the central processing unit collects, at port 1, the VF_x signals generated by the image sensor (where the parameters corresponding to S\T\TM act on the image sensor, in particular parameters such as exposure and gain) and recovers the line/frame synchronization signals of the whole frame. The central processor cannot identify frame-type marking information from the collected raw image data, so the coprocessor is needed to mark the type of the current frame image, whereby the central processor obtains the type of the current frame image.
S31: and classifying and processing the received current frame image according to the type.
Specifically, in the present embodiment, please continue to refer to fig. 5; step S31 specifically includes: when the type of the current frame image is a video frame, inputting the current frame image to the encoding and display module for processing; when the type of the current frame image is a picture frame, inputting the current frame image to the intelligent algorithm module for processing; and when the type of the current frame image is a snapshot frame, inputting the current frame image to the forensics image upload module for processing.
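Step S31 amounts to a routing table keyed on the frame type. A minimal sketch, with illustrative type strings and handler names standing in for the actual modules:

```python
def dispatch(frame_type, frame, modules):
    """Route the received frame to the handler matching its type,
    mirroring step S31 (handler names here are illustrative)."""
    route = {
        "video":    modules["encode_display"],  # video frame -> encode/display
        "picture":  modules["algorithm"],       # picture frame -> algorithm
        "snapshot": modules["forensics"],       # snapshot frame -> forensics
    }
    return route[frame_type](frame)
```

A caller would register one callable per module and invoke `dispatch` once per received frame.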
With this design, the coprocessor no longer acquires the complete data transmitted by the image sensor and no longer needs a frame type marking module to insert the type and snapshot information of the current frame image at a set position; it only needs to parse the image data of one channel output by the image sensor. The advantages of multi-shutter snapshot are thus retained while the performance requirements on the coprocessor are greatly reduced.
Referring to fig. 12, fig. 12 is a block diagram of a video data processing apparatus according to the present application. Specifically, in the present embodiment, the video data processing apparatus includes a coprocessor 20, an image sensor 22, a signal duplicator 24, a central processor 26, and an auxiliary device 28. The coprocessor 20 includes a sensor driving and parameter issuing module 200, an image acquisition module 202, an image frame and sensor control parameter matching module 204, a CPU protocol parsing module 206, and an auxiliary device control and sensor parameter synchronization module 208. In addition, in the present embodiment, the central processing unit 26 includes a frame shutter mode synchronization module 260, an image processing module 262, a video encoding and display module 264, an intelligent algorithm module 266, a forensics image upload module 268, an image parameter control module 261, and a snapshot control and parameter issuing module 263.
Specifically, the sensor driving and parameter issuing module 200 may drive the image sensor 22 through a low-speed communication interface and issue image parameters through that interface. The image sensor 22 acquires image data based on the image parameters. The signal duplicator 24 obtains the current frame image acquired by the image sensor 22, duplicates the sub-image data of all channels in the current frame image, and duplicates the sub-image data of one channel into two copies. The signal duplicator 24 sends one of the two copies of the sub-image data to the image acquisition module 202 and sends the other copy to the image frame and sensor control parameter matching module 204. The image frame and sensor control parameter matching module 204 also receives the parameters sent by the sensor driving and parameter issuing module 200, matches the obtained image parameters against the obtained sub-image data to determine the type of the current frame image, and sends the type to the frame shutter mode synchronization module 260. The signal duplicator 24 also transmits the duplicated sub-image data of all channels to the frame shutter mode synchronization module 260 so that the image processing module 262 can classify and process them.
The image processing module 262 inputs video frames to the video encoding and display module 264, which encodes and displays them. The image processing module 262 inputs picture frames to the intelligent algorithm module 266; it processes each picture frame, determines whether the environmental parameters at the time of shooting are appropriate, and sends the exposure and gain image parameters to the image parameter control module 261, which forwards them to the CPU protocol parsing module 206. The CPU protocol parsing module 206 parses the exposure and gain parameters, sends them to the sensor driving and parameter issuing module 200 through the internal bus interface, and issues the related parameters to the auxiliary device control and sensor parameter synchronization module 208. The sensor driving and parameter issuing module 200 sends the exposure and gain parameters to the image sensor 22, and the image sensor 22 adjusts parameters such as the exposure time accordingly. The auxiliary device control and sensor parameter synchronization module 208 sends the sensor parameters and related parameters to the auxiliary device 28, and the auxiliary device 28 adjusts parameters such as the fill-light time accordingly, for example a relationship parameter between the fill-light start time and the exposure start time.
The image processing module 262 inputs snapshot frames to the forensics image upload module 268; it processes each snapshot frame and sends the snapshot instruction and snapshot parameters to the snapshot control and parameter issuing module 263, which forwards them to the CPU protocol parsing module 206. The CPU protocol parsing module 206 parses the snapshot instruction and snapshot parameters and sends them through the internal bus interface to the sensor driving and parameter issuing module 200 and the auxiliary device control and sensor parameter synchronization module 208, which send the snapshot instruction and snapshot parameters to the image sensor 22 and the auxiliary device 28, respectively. The image sensor 22 captures an image according to the snapshot instruction and parameters and sends the captured image to the image acquisition module 202. The auxiliary device 28 adjusts parameters such as the fill-light time and illumination brightness according to the snapshot instruction and parameters.
Referring to fig. 13, fig. 13 is a block diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 30 stores a computer program 300, and the computer program 300 can be executed by a processor to implement the video data processing method mentioned in any of the above embodiments. The computer program 300 may be stored in the computer-readable storage medium 30 in the form of a software product, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The computer-readable storage medium 30 may be any medium capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or may be a terminal device such as a computer, server, mobile phone, or tablet.
In summary, unlike the prior art, in the present application the coprocessor only needs to obtain the sub-image data of one channel in the current frame image duplicated by the signal duplicator, determine the type of the current frame image from that sub-image data, and send the type to the central processor, so that the central processor can classify and process the received current frame image according to the type. With this design, the coprocessor no longer acquires the complete data transmitted by the image sensor, no longer needs a frame type marking module to insert the type and snapshot information of the current frame image at a set position, and only needs to parse the image data of one channel output by the image sensor. The advantages of multi-shutter snapshot are thus retained, the performance requirements on the coprocessor are greatly reduced, and the timing correlation between the synchronization information output to the image sensor and the video data collected by the central processing unit remains strong.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (13)

1. A method of processing video data, comprising:
the coprocessor obtains sub-image data of one channel in the current frame image copied by the signal copier;
and determining the type of the current frame image according to the sub-image data, and sending the type to a central processing unit so that the central processing unit classifies and processes the received current frame image according to the type.
2. The method of claim 1, wherein said step of sending said type to a central processor comprises:
and sending the type of the current frame image to a central processing unit in a binary mode, wherein the type comprises a video frame, a picture frame or a snapshot frame.
3. The video data processing method according to claim 2, wherein a first transmission line and a second transmission line are arranged in parallel between the coprocessor and the central processor; the step of sending the type of the current frame image to a central processing unit in a binary manner comprises the following steps:
in response to the type of the current frame image being a video frame, causing the first transmission line to transmit a low-level signal and the second transmission line to transmit a low-level signal;
in response to the type of the current frame image being a picture frame, causing the first transmission line to transmit a low-level signal and the second transmission line to transmit a high-level signal;
and in response to the type of the current frame image being a snapshot frame, causing the first transmission line to transmit a high-level signal and the second transmission line to transmit a low-level signal.
4. The video data processing method according to claim 1,
the step of determining the type of the current frame image according to the sub-image data includes:
matching the image parameters issued to the image sensor with the sub-image data to determine the type of the sub-image data, wherein the type of the sub-image data is the type of the current frame image.
5. The method of claim 4, wherein the step of determining the type of the current frame image from the sub-image data is preceded by:
generating a horizontal/field synchronizing signal according to a preset condition, and simultaneously issuing the image parameter to the image sensor at the moment of generating the horizontal/field synchronizing signal;
the preset condition is related to the image resolution and the image frame rate.
6. The method of claim 5, wherein the step of generating the line/field sync signal according to the preset condition is followed by:
sending the line/field synchronizing signal to the image sensor, and controlling the image sensor to output a frame starting signal of the current frame image after delay time; wherein the delay time is determined by a register value of the image sensor.
7. The method of claim 1, wherein the step of determining the type of the current frame image according to the sub-image data is followed by:
marking the type of the current frame image at the falling edge of the frame synchronization signal of the previous frame image to obtain the type of the current frame image.
8. A method of processing video data, comprising:
the signal duplicator obtains a current frame image acquired by the image sensor, duplicates sub-image data of all channels in the current frame image, and duplicates the sub-image data of one channel into two parts;
sending one of the two copies of the sub-image data to a coprocessor, so that the coprocessor determines the type of the current frame image from the sub-image data; and sending the duplicated sub-image data of all channels to a central processor, so that the central processor classifies and processes the current frame image according to the type after receiving the type.
9. A method of processing video data, comprising:
the central processing unit obtains sub-image data of all channels in the current frame image copied by the signal copier and the type of the current frame image transmitted by the coprocessor;
and classifying and processing the received current frame image according to the type.
10. The method of claim 9, wherein the step of classifying and processing the received current frame image according to the type comprises:
in response to the type being a video frame, inputting the current frame image to the encoding and display module for processing;
in response to the type being a picture frame, inputting the current frame image to the intelligent algorithm module for processing;
and in response to the type being a snapshot frame, inputting the current frame image to the forensics image upload module for processing.
11. A video data processing apparatus, comprising:
the system comprises a coprocessor, an image sensor, a signal duplicator and a central processing unit;
the coprocessor is respectively connected with the image sensor, the signal duplicator and the central processing unit, and the signal duplicator is respectively connected with the image sensor and the central processing unit;
wherein the co-processor, the image sensor, the signal duplicator, and the central processor cooperate to implement the video data processing method of any one of claims 1 to 7, or the video data processing method of claim 8, or the video data processing method of any one of claims 9 to 10.
12. The video data processing apparatus according to claim 11,
the coprocessor comprises a first transmission line and a second transmission line which are arranged in parallel, and the type of the current frame image is transmitted to the central processor by the coprocessor through the first transmission line and the second transmission line.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for implementing the video data processing method of any one of claims 1 to 7, or implementing the video data processing method of claim 8, or implementing the video data processing method of any one of claims 9 to 10.
CN202110821437.1A 2021-07-20 2021-07-20 Video data processing method, device and computer readable storage medium Active CN113689707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110821437.1A CN113689707B (en) 2021-07-20 2021-07-20 Video data processing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113689707A true CN113689707A (en) 2021-11-23
CN113689707B CN113689707B (en) 2022-09-06

Family

ID=78577485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110821437.1A Active CN113689707B (en) 2021-07-20 2021-07-20 Video data processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113689707B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4918526A (en) * 1987-03-20 1990-04-17 Digital Equipment Corporation Apparatus and method for video signal image processing under control of a data processing system
CN1159957A (en) * 1995-11-22 1997-09-24 任天堂株式会社 High performance/low cost video game system with multi-functional peripheral processing subsystem
US6275239B1 (en) * 1998-08-20 2001-08-14 Silicon Graphics, Inc. Media coprocessor with graphics video and audio tasks partitioned by time division multiplexing
CN1501259A (en) * 2002-10-10 2004-06-02 英特尔公司 An apparatus and method for facilitating memory data access with generic read/write patterns
US7184059B1 (en) * 2000-08-23 2007-02-27 Nintendo Co., Ltd. Graphics system with copy out conversions between embedded frame buffer and main memory
CN103856764A (en) * 2012-11-30 2014-06-11 浙江大华技术股份有限公司 Device for performing monitoring through double shutters
CN104427218A (en) * 2013-09-02 2015-03-18 北京计算机技术及应用研究所 Ultra high definition CCD (charge coupled device) multichannel acquisition and real-time transmission system and method
WO2017059577A1 (en) * 2015-10-09 2017-04-13 华为技术有限公司 Eyeball tracking device and auxiliary light source control method and related device thereof
CN106688015A (en) * 2014-09-25 2017-05-17 微软技术许可有限责任公司 Processing parameters for operations on blocks while decoding images
CN107292808A (en) * 2016-03-31 2017-10-24 阿里巴巴集团控股有限公司 Image processing method, device and image coprocessor
CN111093077A (en) * 2019-12-31 2020-05-01 深圳云天励飞技术有限公司 Video coding method and device, electronic equipment and storage medium
CN112270639A (en) * 2020-09-21 2021-01-26 浙江大华技术股份有限公司 Image processing method, image processing device and storage medium
CN112291477A (en) * 2020-11-03 2021-01-29 浙江大华技术股份有限公司 Multimedia information processing method, device, storage medium and electronic device
CN112735141A (en) * 2020-12-09 2021-04-30 浙江大华技术股份有限公司 Video data processing method and device
CN112995515A (en) * 2021-03-05 2021-06-18 浙江大华技术股份有限公司 Data processing method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN113689707B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN112291477B (en) Multimedia information processing method, device, storage medium and electronic device
CN101500128B (en) Method and apparatus for loading additional information on display image of network camera device terminal
CN112950951B (en) Intelligent information display method, electronic device and storage medium
CN113329174B (en) Control method, device and system of multi-view camera and electronic device
CN112788329A (en) Video static frame detection method and device, television and storage medium
CN106339194A (en) Method and system for dynamically adjusting multi-device display effect
CN110677601B (en) Intelligent video source switching system and intelligent video source switching method
CN113689707B (en) Video data processing method, device and computer readable storage medium
CN102348127B (en) Television picture quality detection system and method
CN112995515A (en) Data processing method and device, storage medium and electronic device
CN219204583U (en) Image acquisition device and electronic equipment
CN116107902A (en) Recharging method and device for test data and recharging system for test data
CN112585957A (en) Station monitoring system and station monitoring method
CN113343857B (en) Labeling method, labeling device, storage medium and electronic device
CN112270639B (en) Image processing method, image processing device and storage medium
CN115484369A (en) Video frame delay time determination method, device, medium, and remote driving system
CN113596395A (en) Image acquisition method and monitoring equipment
JP2010061411A (en) Image projector, image synthesizer, image projection method and image projection program
CN113408475A (en) Indication signal recognition method, indication signal recognition apparatus, and computer storage medium
CN104010221B (en) Digital billboard playing system, instant monitoring system and instant monitoring method thereof
CN117156300B (en) Video stream synthesis method and device based on image sensor, equipment and medium
CN117176979B (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
CN110855930B (en) Intelligent identification method and system for network equipment
CN102694977A (en) Method for automatically converting between high definition mode and standard definition mode of high-definition camera
CN113947721B (en) Image verification method and system for entertainment system test

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant