CN112188261B - Video processing method and video processing device


Info

Publication number
CN112188261B
Authority
CN
China
Prior art keywords
image
output
video
image processor
processor
Prior art date
Legal status
Active
Application number
CN201910586773.5A
Other languages
Chinese (zh)
Other versions
CN112188261A (en)
Inventor
尹前澄
周晶晶
Current Assignee
Xian Novastar Electronic Technology Co Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Novastar Electronic Technology Co Ltd
Priority to CN201910586773.5A
Publication of CN112188261A
Application granted
Publication of CN112188261B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video processing method and a video processing device. The video processing method comprises the following steps: performing an interception operation on a target image by a first image processor according to a first control parameter to obtain a first intercepted image portion and a remaining image portion; generating, by the first image processor, a first output image signal group based on the first intercepted image portion for output to a first video output interface group connected to the first image processor; sharing, by the first image processor, the remaining image portion with a second image processor through a serial transceiver; intercepting, by the second image processor, the remaining image portion according to a second control parameter to obtain a second intercepted image portion; and generating, by the second image processor, a second output image signal group based on the second intercepted image portion for output to a second video output interface group connected to the second image processor. The invention has the beneficial effects of reducing bandwidth pressure and improving processing capacity.

Description

Video processing method and video processing device
Technical Field
The invention belongs to the technical field of video processing, and particularly relates to a video processing method and a video processing device.
Background
With the increase in the number of input sources, the video processing functions of a video processor keep growing, specification requirements become higher, and video processing systems become more and more complex. A single chip can no longer provide the required processing capacity, so systems in which multiple FPGAs cooperate have gradually become the mainstream architecture for high-end video processing products. In this architecture the video processing functions are generally distributed across multiple processing chips (mainly FPGAs), and video content is exchanged over high-speed SerDes interconnections. The high-speed SerDes resources in the system therefore become the critical data exchange channel, and their transmission bandwidth directly affects the efficiency of the system and the timeliness of data exchange.
In the prior art, constrained by the I/O pins and internal processing resources of each FPGA, the input video sources and outputs are distributed among the FPGAs in a balanced way, and every FPGA then shares all input sources with the others through SerDes exchange. The transmission bandwidth demanded of the GTx transceivers is therefore very high and the bandwidth consumption is large, because each FPGA must send its local video sources in full to every other FPGA while simultaneously receiving the video sources sent by the others. The data bandwidth of a 4K2K video source is about 12 Gbit/s, whereas a SerDes lane currently supports only 10 Gbit/s, so transmitting one complete video source occupies two SerDes lanes; the data bandwidth of one 1080P video source is about 3 Gbit/s, so one SerDes lane can carry two 1080P sources. However, the number of SerDes lanes on an FPGA is limited, and the transmission efficiency of the SerDes in a multi-FPGA system becomes the critical factor determining the system interconnection, which limits any further increase in the number of FPGAs in the system.
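The bandwidth figures quoted above can be checked with a short back-of-the-envelope calculation. The Python sketch below assumes a 60 Hz refresh rate, 24 bits per pixel and no blanking or line-coding overhead (none of which the text states explicitly); under those assumptions it reproduces the approximate 12 Gbit/s and 3 Gbit/s figures and the resulting SerDes lane counts.

```python
import math

# Back-of-the-envelope check of the bandwidth figures above.
# Assumptions (not stated in the text): 60 Hz refresh, 24 bits per pixel,
# no blanking intervals and no 8b/10b line-coding overhead.
def video_bandwidth_gbps(width, height, fps=60, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

def serdes_lanes_needed(bandwidth_gbps, lane_rate_gbps=10.0):
    return math.ceil(bandwidth_gbps / lane_rate_gbps)

bw_4k = video_bandwidth_gbps(3840, 2160)       # about 11.9 Gbit/s -> the "12G" above
bw_1080p = video_bandwidth_gbps(1920, 1080)    # about 3.0 Gbit/s  -> the "3G" above
print(bw_4k, serdes_lanes_needed(bw_4k))       # two 10G lanes for one complete 4K source
print(bw_1080p, serdes_lanes_needed(bw_1080p)) # one lane can carry two 1080P sources
```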
Disclosure of Invention
The embodiment of the invention provides a video processing method and a video processing device, which can realize the technical effects of saving channel resources, reducing channel bandwidth pressure and improving processing capacity.
In one aspect, a video processing method provided by an embodiment of the present invention includes: performing, by a first image processor, an interception operation on a target image according to a first control parameter to obtain a first intercepted image portion and a remaining image portion; generating, by the first image processor, a first output image signal group according to the first intercepted image portion for output to a first video output interface group connected to the first image processor; sharing, by the first image processor, the remaining image portion with a second image processor through a serial transceiver; performing, by the second image processor, an interception operation on the remaining image portion according to a second control parameter to obtain a second intercepted image portion; and generating, by the second image processor, a second output image signal group according to the second intercepted image portion for output to a second video output interface group connected to the second image processor.
In the embodiments of the invention the image is intercepted and only the remaining image portion obtained by the interception is shared with the other image processors such as FPGA chips, so that channel resources such as SerDes channel resources are saved, the amount of data transmitted is reduced, the bandwidth pressure is relieved, the system can integrate a larger number of FPGA chips, and the processing capacity is improved.
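As a minimal illustration of steps S11 to S15 (described in detail below), the following Python sketch models an image simply as a coordinate rectangle and shows that only the remainder of the first interception crosses the inter-processor link; the function names, the tuple representation and the split positions are assumptions made for illustration, not part of the claimed method.

```python
# Illustrative model of steps S11-S15: a "region" is (x0, y0, x1, y1) in 1-based
# inclusive pixel coordinates, and interception splits it at a column boundary.
def intercept_at_x(region, x_split):
    """Split a region at column x_split: (kept part, remaining part)."""
    x0, y0, x1, y1 = region
    return (x0, y0, x_split, y1), (x_split + 1, y0, x1, y1)

def first_image_processor(target, first_param):
    kept, remaining = intercept_at_x(target, first_param)   # S11
    local_outputs = {"DVI-1/DVI-2": kept}                    # S12: drive the local interfaces
    return local_outputs, remaining                          # S13: only 'remaining' is shared

def second_image_processor(remaining, second_param):
    kept, _rest = intercept_at_x(remaining, second_param)    # S14 (here: the whole remainder)
    return {"DVI-3/DVI-4": kept}                             # S15

target = (1, 1, 3840, 2160)                                  # a 4K target image
out1, remaining = first_image_processor(target, 1920)
out2 = second_image_processor(remaining, 3840)
print(out1, remaining, out2)
```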
In one embodiment of the present invention, the video processing method further includes: an input video source image is received as the target image by the first image processor.
In one embodiment of the present invention, generating, by the first image processor, a first output image signal group according to the first intercepted image portion for output to a first video output interface group connected to the first image processor comprises: scaling the first intercepted image portion to obtain a scaled image; performing superposition processing on the scaled image to obtain a superimposed image; and generating the first output image signal group according to the superimposed image for output to the first video output interface group.
In one embodiment of the present invention, the video processing method further includes: scaling the input video source image by the first image processor to obtain a scaled image; and performing superposition processing on the scaled image by the first image processor to obtain the target image.
In one embodiment of the present invention, the first image processor and the second image processor are each programmable logic devices, and the serial transceiver is a gigabit transceiver.
In another aspect, an embodiment of the present invention provides a video processing apparatus, including: a microcontroller used for outputting a first control parameter and a second control parameter; a first video output interface group; a first image processor connected to the microcontroller and the first video output interface group and comprising a first serial transceiver; a second video output interface group; and a second image processor connected to the microcontroller and the second video output interface group and comprising a second serial transceiver, the second serial transceiver being connected to the first serial transceiver. The first image processor is used for performing an interception operation on a target image according to the first control parameter to obtain a first intercepted image portion and a remaining image portion, generating a first output image signal group according to the first intercepted image portion for output to the first video output interface group, and sharing the remaining image portion with the second image processor through the interaction of the first serial transceiver and the second serial transceiver; the second image processor is used for performing an interception operation on the remaining image portion according to the second control parameter to obtain a second intercepted image portion, and generating a second output image signal group according to the second intercepted image portion for output to the second video output interface group.
In one embodiment of the present invention, the video processing apparatus further includes: a video source input interface connected to the first image processor; the first image processor is further configured to receive a video source image input by the video source input interface as the target image.
In one embodiment of the present invention, the first image processor includes: an intercepting module used for intercepting the target image according to the first control parameter to obtain the first intercepted image portion and the remaining image portion; a scaling module used for scaling the first intercepted image portion to obtain a scaled image; a superposition module used for performing superposition processing on the scaled image to obtain a superimposed image; and an output module used for generating the first output image signal group according to the superimposed image and outputting it to the first video output interface group.
In another embodiment of the present invention, the first image processor includes: a scaling module used for scaling the input video source image to obtain a scaled image; a superposition module used for performing superposition processing on the scaled image to obtain the target image; an intercepting module used for intercepting the target image according to the first control parameter to obtain the first intercepted image portion and the remaining image portion; and an output module used for generating the first output image signal group according to the first intercepted image portion and outputting it to the first video output interface group.
In one embodiment of the present invention, the first image processor and the second image processor are each programmable logic devices, and the first serial transceiver and the second serial transceiver are each gigabit transceivers.
Embodiments of the invention save channel resources, reduce the amount of data transmitted, relieve the bandwidth pressure, and allow more FPGAs to be integrated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video processing method according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of the image processing performed on the first intercepted image portion in the video processing method according to the first embodiment of the present invention.
Fig. 3 is a schematic diagram of a correspondence between an input video source image and an output video source image in a video processing method according to a first embodiment of the present invention.
Fig. 4 is a partial flowchart of a video processing method according to a second embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a video processing apparatus according to a third embodiment of the present invention.
Fig. 6 is a schematic structural diagram of another video processing apparatus according to a third embodiment of the present invention.
Fig. 7 is a schematic diagram of an image processing procedure of a video processing apparatus according to a third embodiment of the present invention.
Fig. 8 is a schematic diagram of a correspondence between an input video source image and an output video source image in a video processing apparatus according to a third embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
[ first embodiment ]
As shown in fig. 1, a video processing method according to a first embodiment of the present invention includes the following steps S11, S12, S13, S14 and S15, for example.
S11, performing an interception operation on the target image by a first image processor according to a first control parameter to obtain a first intercepted image portion and a remaining image portion;
S12, generating, by the first image processor, a first output image signal group according to the first intercepted image portion for output to a first video output interface group connected to the first image processor;
the first image processor may be a programmable logic device, such as an FPGA, and the first control parameter is specifically configured to control a capturing size of the target image, for example, may control a capturing manner of the target image according to an image content and a data amount and a current transmission state, and may perform equal capturing or unequal capturing. In a specific implementation, the first control parameter may be a coordinate position parameter of the image, by which the size of the truncated target image is located. After the target image is intercepted, the intercepted image part is grouped by a first image processor and then transmitted to a first video output interface group (such as a DVI interface) for output, and the image part is the first intercepted image part.
The remaining image portion is then processed by the following steps S13, S14 and S15:
S13, sharing, by the first image processor, the remaining image portion with a second image processor through a serial transceiver;
S14, intercepting, by the second image processor, the remaining image portion according to a second control parameter to obtain a second intercepted image portion; and
S15, generating, by the second image processor, a second output image signal group according to the second intercepted image portion for output to a second video output interface group connected to the second image processor.
Similarly, the second image processor may be a programmable logic device such as an FPGA. The second control parameter is used to control the interception size, and the interception manner may likewise be chosen according to the image content, the data amount and the current transmission state: the interception may be equal, unequal, or total (i.e. no portion remains). In a specific implementation, the first control parameter and the second control parameter may each be a coordinate position parameter of the image, by which the size of the intercepted image is located.
It should be noted that in a common scenario the two image processors can complete the segmentation: the second image processor only needs to group and output the remaining image portion directly, that is, it selects the whole remaining image when performing the interception operation, and this whole remaining image is the second intercepted image portion. In some application scenarios, however, the number of image blocks to be produced is large and more than two image processors may be required. In that case the second image processor performs only a partial interception, again according to the image content, the data amount and the current transmission state, thereby obtaining a second intercepted image portion and a further remaining image portion, and this remaining image portion is passed to yet another image processor, which continues in the same manner.
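The cascading behaviour described above can be pictured with the following small sketch, in which each processor keeps its own slice of columns and forwards only what is left to the next processor; the per-processor widths are arbitrary example values.

```python
# Sketch of the cascade: each processor keeps a slice of columns and forwards
# only the rest; the per-processor widths below are arbitrary example values.
def cascade(total_width, widths_per_processor):
    start = 1
    for i, width in enumerate(widths_per_processor, start=1):
        kept = (start, min(start + width - 1, total_width))
        start = kept[1] + 1
        remaining = (start, total_width) if start <= total_width else None
        yield i, kept, remaining

for proc, kept, remaining in cascade(3840, [1280, 1280, 1280]):
    print(f"processor {proc}: outputs columns {kept}, forwards {remaining}")
# processor 1: outputs columns (1, 1280), forwards (1281, 3840)
# processor 2: outputs columns (1281, 2560), forwards (2561, 3840)
# processor 3: outputs columns (2561, 3840), forwards None
```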
By intercepting the image in this way, only the remaining image portion obtained by the interception is shared with the other image processors such as FPGA chips, and complete image data need not be transmitted. This saves the channel resources between the image processors, such as SerDes channel resources (typically implemented with GTx transceivers), reduces the amount of data transmitted and relieves the bandwidth pressure; moreover, more image processors can be integrated in this way, which improves image processing efficiency.
In one specific implementation of the present embodiment, the input video source image received by the first image processor is taken as the target image. In this case, referring to FIG. 2, which shows the processing flow for the input video source image in this embodiment,
step S12 may further include:
s121, performing scaling processing on the first intercepted image part to obtain a scaled image;
s122, performing superposition processing on the zoomed image to obtain a superposed image; and
s123, generating the first output image signal group according to the superimposed image so as to output the first output image signal group to the first video output interface group.
That is, for the first intercepted image portion, the scaling and superposition are performed after the interception operation; for the remaining image portion, the second image processor still needs to perform scaling and superposition after receiving it. The specific scaling and superposition methods are known in the prior art and are not described here.
As shown in fig. 3, to better illustrate the object of the present invention, an example implementation is as follows: the first video output interface group comprises a first output port DVI-1 and a second output port DVI-2, the second video output interface group comprises a third output port DVI-3 and a fourth output port DVI-4, and the target image is a 4K input image of 3840 x 2160 whose four vertices are defined as: first vertex coordinates (1, 1), second vertex coordinates (3840, 1), third vertex coordinates (1, 2160), fourth vertex coordinates (3840, 2160). The image content required by each of DVI-1 to DVI-4 is calculated from the display state of the PIP (Picture-In-Picture) and the topological relation of the 4 DVI interfaces.
Before the target image is intercepted, the first image processor (for example an FPGA) and the second image processor (for example an FPGA) obtain horizontal and vertical start coordinates and horizontal and vertical end coordinates. In this example the target image is intercepted equally through the center coordinate (1920, 1080) along the horizontal coordinate x = 1920: the vertex coordinates of the first intercepted image portion are (1, 1), (1920, 1), (1920, 2160) and (1, 2160), and the vertex coordinates of the second intercepted image portion are (1921, 1), (3840, 1), (3840, 2160) and (1921, 2160). Considering that each FPGA carries only two video output ports (DVI-1 and DVI-2, or DVI-3 and DVI-4), the first intercepted image portion is intercepted again along y = 1080 and output in parallel through the two output ports DVI-1 and DVI-2: the vertex coordinates of the image output by the first output port DVI-1 are (1, 1), (1920, 1), (1920, 1080) and (1, 1080), and the vertex coordinates of the image output by the second output port DVI-2 are (1, 1081), (1920, 1081), (1920, 2160) and (1, 2160). The second intercepted image portion is likewise intercepted equally along y = 1080: the vertex coordinates of the image output by the third output port DVI-3 are (1921, 1), (3840, 1), (3840, 1080) and (1921, 1080), and the vertex coordinates of the image output by the fourth output port DVI-4 are (1921, 1081), (3840, 1081), (3840, 2160) and (1921, 2160). In other embodiments, unequal interception may be performed by setting different horizontal and vertical end-point coordinates, and the intercepted images are then scaled and superimposed by the image processors.
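The coordinates of this example can be reproduced programmatically; the sketch below simply splits the 3840 x 2160 frame at x = 1920 and then each half at y = 1080, yielding the four DVI output regions listed above (the helper names are illustrative).

```python
# Recomputing the example: split 3840 x 2160 at x = 1920, then each half at y = 1080.
def vertices(x0, y0, x1, y1):
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

def split_for_four_dvi(width=3840, height=2160, x_split=1920, y_split=1080):
    return {
        "DVI-1": vertices(1, 1, x_split, y_split),
        "DVI-2": vertices(1, y_split + 1, x_split, height),
        "DVI-3": vertices(x_split + 1, 1, width, y_split),
        "DVI-4": vertices(x_split + 1, y_split + 1, width, height),
    }

for port, verts in split_for_four_dvi().items():
    print(port, verts)
# DVI-1 [(1, 1), (1920, 1), (1920, 1080), (1, 1080)] ... DVI-4 [(1921, 1081), (3840, 1081), ...]
```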
[ second embodiment ]
The present embodiment provides another processing flow for the input video source image. Referring to fig. 4, this implementation further includes, before step S11 of the first embodiment:
s01, scaling the input video source image by the first image processor to obtain a scaled image; and
s02, the first image processor performs superposition processing on the scaled image to obtain the target image.
That is, for the first intercepted image portion, the scaling and superposition are performed before the interception operation. The remaining image portion is therefore an image that has already been scaled and superimposed, so the remaining image portion received by the second image processor does not need to be scaled or superimposed again; after being intercepted it can be output directly to the video output interface group, or passed on to a next-stage image processor through the serial transceiver.
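The ordering of this embodiment, with scaling and superposition done once before the interception, can be sketched as follows; the placeholder scale and overlay functions are stand-ins for the real modules and only preserve the region geometry.

```python
# Minimal sketch of this ordering: scaling and superposition run once on the
# first image processor, so the shared remainder needs no further processing.
def scale(region):
    # placeholder for the scaling module; a real module resamples the pixels
    return region

def overlay(region):
    # placeholder for the superposition module (e.g. PIP compositing)
    return region

def intercept_at_x(region, x_split):
    x0, y0, x1, y1 = region
    return (x0, y0, x_split, y1), (x_split + 1, y0, x1, y1)

source = (1, 1, 3840, 2160)
target = overlay(scale(source))                  # S01 + S02: performed only on processor 1
kept, remaining = intercept_at_x(target, 1920)   # S11: 'remaining' is shared unchanged
print(kept, remaining)
```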
[ third embodiment ]
As shown in fig. 5, a video processing apparatus according to a third embodiment of the present invention includes, for example, the following components:
the microcontroller is used for outputting a first control parameter and a second control parameter;
when the video source is processed by adopting only two image processors, the microcontroller only needs to output a first control parameter and a second control parameter to control the first image processor and the second image processor; in some application scenarios, the number of image blocks to be segmented is large, and more than two image processors are required to process the image blocks, and at this time, the microcontroller is required to output more than two control parameters to control the image processors respectively.
In addition to the microcontroller, the video processing device of the embodiment of the invention further comprises:
a first video output interface group;
a first image processor connecting the microcontroller and the first set of video output interfaces and including a first serial transceiver, such as a gigabit transceiver (GTx transceiver);
a second video output interface group; and
a second image processor connecting the microcontroller and the second set of video output interfaces and comprising a second serial transceiver, such as a gigabit transceiver (GTx transceiver), wherein the second serial transceiver is connected to the first serial transceiver;
the first image processor is used for performing a clipping operation on a target image according to a first control parameter to obtain a first clipping image part and a residual image part, generating a first output image signal group according to the first clipping image part to be output to the first video output interface group, and sharing the residual image part to a second image processor through interaction of the first serial transceiver and the second serial transceiver;
the second image processor is used for performing a clipping operation on the residual image part according to a second control parameter to obtain a second clipping image part, and generating a second output image signal group according to the second clipping image part to output to the second video output interface group.
Specifically, the microcontroller sends a control parameter to each image processor to control the interception of the target image and of the remaining image, and the parameters are transmitted between the microcontroller and the image processors through an independent communication interface. In this embodiment the transmission may use an FSMC (Flexible Static Memory Controller) data bus according to an agreed private protocol. The first image processor intercepts the target image according to the first control parameter sent by the microcontroller to obtain a first intercepted image portion and a remaining image portion; the first intercepted image portion is converted by the first image processor into the first output image signal group and sent to the first video output interface group. The remaining image portion is sent to the second image processor through the first serial transceiver and the second serial transceiver, and the second image processor intercepts the remaining image portion according to the second control parameter to obtain a second intercepted image portion, which is converted by the second image processor into the second output image signal group and sent to the second video output interface group. When a large number of intercepted images have to be produced and processed, more image processors are used to process the target image and the remaining image portions in the same way. This processing mode saves the transmission channel resources of the video processing device and relieves the bandwidth pressure.
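The text only states that the parameters travel over an FSMC data bus under an agreed private protocol; purely as a hypothetical illustration, a control-parameter message for one processor could be packed as a few 16-bit fields, for example:

```python
import struct

# Hypothetical control-parameter message; the patent only says parameters are
# sent over an FSMC bus under a private protocol, so this layout is illustrative.
def pack_crop_params(processor_id, h_start, h_end, v_start, v_end):
    """Pack one interception command as five little-endian 16-bit fields."""
    return struct.pack("<5H", processor_id, h_start, h_end, v_start, v_end)

def unpack_crop_params(payload):
    return struct.unpack("<5H", payload)

msg = pack_crop_params(1, 1, 1920, 1, 2160)   # a possible "first control parameter"
print(len(msg), unpack_crop_params(msg))      # 10 (1, 1, 1920, 1, 2160)
```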
In addition, one embodiment of the present invention further includes a video source input interface, such as a 4K video source input interface and a 2K video source input interface, connecting the first image processor and the second image processor; the first image processor is further configured to receive a video source image input by the video source input interface as the target image.
In most application scenarios, the input video source images are transmitted to the video source input interfaces by a capture device such as a camera. The input video sources are divided into 4K video source images and 2K video source images: the 4K video source images are transmitted to the first image processor and to the second image processor through the respective 4K video source input interfaces, and the 2K video source images are transmitted to the first image processor and to the second image processor through the respective 2K video source input interfaces, to take part in the image processing operations.
As shown in fig. 6, in one embodiment of the present invention the first image processor and the second image processor may be programmable logic devices, and the first serial transceiver and the second serial transceiver may be gigabit transceivers (GTx). The first image processor comprises: an intercepting module used for intercepting the target image according to the first control parameter to obtain the first intercepted image portion and the remaining image portion; a scaling module used for scaling the first intercepted image portion to obtain a scaled image; a superposition module used for performing superposition processing on the scaled image to obtain a superimposed image; and an output module used for generating the first output image signal group according to the superimposed image and outputting it to the first video output interface group.
Specifically, in one embodiment of the present invention the source of the target image is an input video source image, and each image processor performs the scaling and superposition on the intercepted image portion after the interception of the target image or of the remaining image portion. The operation is as follows: the video source input interface of the first image processor receives an input video source image, i.e. the target image; the intercepting module of the first image processor intercepts the target image to obtain a first intercepted image portion and a remaining image portion; the scaling module and the superposition module of the first image processor scale and superimpose the first intercepted image portion in turn, and the output module finally produces the first output image signal group, which is sent to the first video output interface group. The remaining image portion is sent to the second image processor through the first serial transceiver and the second serial transceiver, and in the second image processor it passes in turn through the intercepting module, the scaling module, the superposition module and the output module, which finally outputs the resulting second output image signal group to the second video output interface group. In addition, when a large number of image processors need to be integrated to divide the image, the second image processor intercepts the image according to the second control parameter to obtain a second intercepted image portion and a remaining image portion, and the latter is sent to another image processor through the serial transceiver so that the operation continues.
Still further, as shown in fig. 7, in an embodiment of the present invention each image processor includes two image processing units of identical structure, each comprising a scaling module, a superposition module and an output module, and the first video output interface group and the second video output interface group consist respectively of the two output ports DVI-1 and DVI-2, and DVI-3 and DVI-4. Referring to fig. 8, take as an example two image processors that successively intercept, scale and superimpose the target image. The target image is a 4K input video source image of 3840 x 2160 whose four vertex coordinates are the first vertex (1, 1), the second vertex (3840, 1), the third vertex (1, 2160) and the fourth vertex (3840, 2160). The intercepting module in the first image processor performs an asymmetric interception of the target image based on the coordinate point (2880, 1600): the vertex coordinates of the first intercepted image portion are (1, 1), (2880, 1), (2880, 2160) and (1, 2160), and the vertex coordinates of the second intercepted image portion are (2881, 1), (3840, 1), (3840, 2160) and (2881, 2160). The intercepting module of the first image processor then intercepts the first intercepted image portion again asymmetrically, obtaining an intercepted image with vertex coordinates (1, 1), (2880, 1), (2880, 1600) and (1, 1600) and an intercepted image with vertex coordinates (1, 1601), (2880, 1601), (2880, 2160) and (1, 2160). These two intercepted images pass in turn through the scaling module and the superposition module of the first and second image processing units respectively, giving a superimposed image with vertex coordinates (1, 1), (1440, 1), (1440, 800) and (1, 800) output through the first output port DVI-1, and a superimposed image with vertex coordinates (1, 801), (1440, 801), (1440, 1080) and (1, 1080) output through the second output port DVI-2. The intercepting module of the second image processor likewise intercepts the second intercepted image portion again asymmetrically, obtaining an intercepted image with vertex coordinates (2881, 1), (3840, 1), (3840, 1600) and (2881, 1600) and an intercepted image with vertex coordinates (2881, 1601), (3840, 1601), (3840, 2160) and (2881, 2160); similarly, the vertex coordinates of the superimposed image output by the third output port DVI-3 after the third image processing unit are (1441, 1), (1920, 1), (1920, 800) and (1441, 800), and the vertex coordinates of the superimposed image output by the fourth output port DVI-4 after the fourth image processing unit are (1441, 801), (1920, 801), (1920, 1080) and (1441, 1080).
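The output regions of this example follow from a uniform 0.5x scaling in both directions, which is implied by the coordinates above rather than stated explicitly; the sketch below recomputes the four DVI regions from the crop rectangles and that assumed factor.

```python
# Recomputing the example's output regions from the crop rectangles, assuming a
# uniform 0.5x scale in both directions (implied by the coordinates above).
def scale_region(x0, y0, x1, y1, fx=0.5, fy=0.5, ox=0, oy=0):
    """Scale a 1-based inclusive region and place it at output offset (ox, oy)."""
    w = round((x1 - x0 + 1) * fx)
    h = round((y1 - y0 + 1) * fy)
    return (ox + 1, oy + 1, ox + w, oy + h)

print("DVI-1:", scale_region(1,    1,    2880, 1600))                    # (1, 1, 1440, 800)
print("DVI-2:", scale_region(1,    1601, 2880, 2160, oy=800))            # (1, 801, 1440, 1080)
print("DVI-3:", scale_region(2881, 1,    3840, 1600, ox=1440))           # (1441, 1, 1920, 800)
print("DVI-4:", scale_region(2881, 1601, 3840, 2160, ox=1440, oy=800))   # (1441, 801, 1920, 1080)
```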
By intercepting the image in this way, only the remaining image portion obtained by the interception is shared with the other image processors such as FPGA chips, and complete image data need not be transmitted; channel resources such as SerDes channel resources are saved, the amount of data transmitted is reduced, the bandwidth pressure is relieved, and more image processors can be integrated in this way to improve image processing efficiency.
[ fourth embodiment ]
The present embodiment provides another video processing apparatus for the input video source image; please refer to fig. 9, which is a schematic structural diagram of the video processing apparatus according to this embodiment. In this embodiment the first image processor includes: a scaling module used for scaling the input video source image to obtain a scaled image; a superposition module used for superimposing the scaled image to obtain the target image; an intercepting module used for intercepting the target image according to the first control parameter to obtain the first intercepted image portion and the remaining image portion; and an output module used for generating the first output image signal group according to the first intercepted image portion and outputting it to the first video output interface group.
That is, before the interception operation, the first image processor first scales and superimposes the input video source image through the scaling module and the superposition module to obtain the target image; for the second intercepted image portion, the second image processor only needs to intercept the remaining image portion with its intercepting module. The specific scaling and superposition methods are known in the prior art and are not described here.
In one embodiment of the invention, the scaling module of the first image processor scales the input video source to obtain a scaled image; the superposition module of the first image processor superimposes the scaled image to obtain the target image; and the intercepting module intercepts the target image according to the first control parameter to obtain a first intercepted image, a second intercepted image and a remaining image portion. The first intercepted image and the second intercepted image are passed respectively to the first image processing unit and the second image processing unit to obtain a first output signal and a second output signal, which are sent respectively to the first output port DVI-1 and the second output port DVI-2. The intercepting module of the second image processor intercepts the remaining image portion according to the second control parameter to obtain a third intercepted image and a fourth intercepted image, which are passed respectively to the third image processing unit and the fourth image processing unit to obtain a third output signal and a fourth output signal, output respectively through the third output port DVI-3 and the fourth output port DVI-4.
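Putting this embodiment's dataflow together, the following sketch (with illustrative split positions and region tuples) shows the first processor producing its two local pieces plus a remainder after a single scale-and-overlay pass, and the second processor cropping only that remainder into its own two pieces.

```python
# Illustrative end-to-end dataflow for this embodiment: one scale/overlay pass,
# then interception into two local pieces plus a remainder; the second processor
# intercepts only the remainder. Split positions are example values.
def intercept_into(region, x_split, y_split):
    x0, y0, x1, y1 = region
    local = [(x0, y0, x_split, y_split), (x0, y_split + 1, x_split, y1)]
    remaining = (x_split + 1, y0, x1, y1)
    return local, remaining

target = (1, 1, 3840, 2160)              # already scaled and superimposed on processor 1
(first, second), remaining = intercept_into(target, 1920, 1080)
(third, fourth), _ = intercept_into(remaining, 3840, 1080)
outputs = {"DVI-1": first, "DVI-2": second, "DVI-3": third, "DVI-4": fourth}
print(outputs)
```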
In this way, in the embodiment of the invention the image is intercepted and only the remaining image portion is shared with the other image processors such as FPGA chips, so that channel resources such as SerDes channel resources are saved, the amount of data transmitted is reduced, the bandwidth pressure is relieved, the video processing device can integrate a larger number of FPGA chips, and the processing capacity is improved.
In addition, it should be understood that the foregoing embodiments are merely exemplary illustrations of the present invention, and the technical solutions of the embodiments may be arbitrarily combined and matched without conflict in technical features, contradiction in structure, and departure from the purpose of the present invention.
In the several embodiments provided herein, it should be understood that the disclosed systems, devices, and/or methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and the division of the units/modules is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or modules may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units/modules described as separate units may or may not be physically separate, and units/modules may or may not be physically units, may be located in one place, or may be distributed on multiple network units. Some or all of the units/modules may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit/module in the embodiments of the present invention may be integrated in one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules may be integrated in one unit/module. The integrated units/modules may be implemented in hardware or in hardware plus software functional units/modules.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A video processing method, comprising:
performing, by a first image processor, an interception operation on a target image according to a first control parameter to obtain a first intercepted image portion and a remaining image portion;
generating, by the first image processor, a first output image signal group according to the first intercepted image portion for output to a first video output interface group connected to the first image processor;
sharing, by the first image processor, the remaining image portion with a second image processor through a serial transceiver;
performing, by the second image processor, an interception operation on the remaining image portion according to a second control parameter to obtain a second intercepted image portion; and
generating, by the second image processor, a second output image signal group according to the second intercepted image portion for output to a second video output interface group connected to the second image processor.
2. The video processing method of claim 1, further comprising:
an input video source image is received as the target image by the first image processor.
3. The video processing method of claim 1 or 2, wherein generating, by the first image processor, a first output image signal group according to the first intercepted image portion for output to a first video output interface group connected to the first image processor comprises:
scaling the first intercepted image portion to obtain a scaled image;
performing superposition processing on the scaled image to obtain a superimposed image; and
generating the first output image signal group according to the superimposed image for output to the first video output interface group.
4. The video processing method of claim 1, further comprising:
scaling the input video source image by the first image processor to obtain a scaled image; and
performing superposition processing on the scaled image by the first image processor to obtain the target image.
5. The video processing method of claim 1, wherein the first image processor and the second image processor are each programmable logic devices, and the serial transceiver is a gigabit transceiver.
6. A video processing apparatus, comprising:
the microcontroller is used for outputting a first control parameter and a second control parameter;
a first video output interface group;
a first image processor connected to the microcontroller and the first video output interface group and comprising a first serial transceiver;
a second video output interface group;
a second image processor connected to the microcontroller and the second video output interface group and comprising a second serial transceiver, wherein the second serial transceiver is connected to the first serial transceiver; the first image processor is used for performing an interception operation on a target image according to the first control parameter to obtain a first intercepted image portion and a remaining image portion, generating a first output image signal group according to the first intercepted image portion for output to the first video output interface group, and sharing the remaining image portion with the second image processor through the interaction of the first serial transceiver and the second serial transceiver;
the second image processor is used for performing an interception operation on the remaining image portion according to the second control parameter to obtain a second intercepted image portion, and generating a second output image signal group according to the second intercepted image portion for output to the second video output interface group.
7. The video processing apparatus of claim 6, further comprising:
a video source input interface connected to the first image processor;
the first image processor is further configured to receive a video source image input by the video source input interface as the target image.
8. The video processing apparatus according to claim 6 or 7, wherein the first image processor includes:
the intercepting module is used for intercepting the target image according to the first control parameter so as to obtain the first intercepted image portion and the remaining image portion;
the scaling module is used for scaling the first intercepted image portion to obtain a scaled image;
the superposition module is used for performing superposition processing on the scaled image to obtain a superimposed image; and
the output module is used for generating the first output image signal group according to the superimposed image so as to output the first output image signal group to the first video output interface group.
9. The video processing apparatus of claim 6, wherein the first image processor comprises:
the scaling module is used for scaling the input video source image to obtain a scaled image;
the superposition module is used for performing superposition processing on the scaled image so as to obtain the target image;
the intercepting module is used for intercepting the target image according to the first control parameter so as to obtain the first intercepted image portion and the remaining image portion; and
the output module is used for generating the first output image signal group according to the first intercepted image portion so as to output the first output image signal group to the first video output interface group.
10. The video processing apparatus of claim 6, wherein the first image processor and the second image processor are each programmable logic devices, and the first serial transceiver and the second serial transceiver are each gigabit transceivers.
CN201910586773.5A 2019-07-01 2019-07-01 Video processing method and video processing device Active CN112188261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910586773.5A CN112188261B (en) 2019-07-01 2019-07-01 Video processing method and video processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910586773.5A CN112188261B (en) 2019-07-01 2019-07-01 Video processing method and video processing device

Publications (2)

Publication Number Publication Date
CN112188261A CN112188261A (en) 2021-01-05
CN112188261B true CN112188261B (en) 2023-05-09

Family

ID=73914911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910586773.5A Active CN112188261B (en) 2019-07-01 2019-07-01 Video processing method and video processing device

Country Status (1)

Country Link
CN (1) CN112188261B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110317034A1 (en) * 2010-06-28 2011-12-29 Athreya Madhu S Image signal processor multiplexing
CN104104888A (en) * 2014-07-01 2014-10-15 大连民族学院 Parallel multi-core FPGA digital image real-time zooming processing method and device
CN107172365A (en) * 2017-04-25 2017-09-15 西安诺瓦电子科技有限公司 Video source premonitoring device and method and video display processor

Also Published As

Publication number Publication date
CN112188261A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
EP2638523B1 (en) Parallel image processing using multiple processors
US10114786B2 (en) Back channel support for systems with split lane swap
EP3726814A1 (en) Network interface device
US9232177B2 (en) Video chat data processing
CN113194269B (en) Image output system and method
US20200267363A1 (en) Data processing method, data sending end, data receiving end, and communication system
CN110581963B (en) V-BY-ONE signal conversion method and device and electronic equipment
US20160255315A1 (en) Digital movie projection system and method
EP3226552A1 (en) Multi-screen processing method, multi control unit and video system
CN112559074A (en) Dynamic configuration method of machine vision software and computer
CN111988552B (en) Image output control method and device and video processing equipment
CN112188261B (en) Video processing method and video processing device
Ibraheem et al. A resource-efficient multi-camera gige vision ip core for embedded vision processing platforms
CN116644010A (en) Data processing method, device, equipment and medium
WO2020063171A1 (en) Data transmission method, terminal, server and storage medium
US7590090B1 (en) Time segmentation sampling for high-efficiency channelizer networks
CN107682587A (en) Video processor
CN114445260A (en) Distributed GPU communication method and device based on FPGA
JP2021090127A (en) Control unit, control method, and program
US10681761B2 (en) Apparatus for distributing short-range wireless signals using an interconnection protocol for electronic devices
US20100045668A1 (en) Apparatus and Method for 3D Packet Scale Down with Proxy Server in Mobile Environment
JP2005027193A (en) Image transfer device, method, and program
EP4184913A1 (en) Fusion apparatus for multiple data transmission channels, and electronic device
CN118175390A (en) Video data conversion method, system, device, medium and computer program product
WO2019240806A1 (en) Conferencing with error state hid notification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant