CN112188262B - Image processing method, device and system and computer readable medium - Google Patents

Info

Publication number
CN112188262B
CN112188262B (application CN201910589202.7A)
Authority
CN
China
Prior art keywords
point
data
target
transparency
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910589202.7A
Other languages
Chinese (zh)
Other versions
CN112188262A (en)
Inventor
葛敏峰
周晶晶
Current Assignee
Xian Novastar Electronic Technology Co Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Novastar Electronic Technology Co Ltd
Priority to CN201910589202.7A
Publication of CN112188262A
Application granted
Publication of CN112188262B
Legal status: Active

Classifications

    • H04N21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4438 — Window management, e.g. event handling following interaction with the user interface
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
    • H04N5/265 — Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose an image processing method, an image processing apparatus, an image processing system, and a computer readable medium. The image processing method includes: caching image data of a plurality of input layers; caching point-by-point transparency data of the plurality of input layers; and, driven by an output timing, synchronously reading and merging the target pixel data in the image data of each target input layer among the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layer, then superposing the merged image data in combination with the global transparency coefficient of the target input layer to obtain the image data at the corresponding position of the output image. Embodiments of the invention achieve transparent superposition and arbitrarily shaped window display, improving the display effect.

Description

Image processing method, device and system and computer readable medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing system, and a computer readable medium.
Background
Generally, a video processor has multiple layers for displaying different contents and creating different display effects. However, the layers of a conventional video processor are generally rectangular windows without transparency coefficients; windows are combined by overlay, and in a region where two layers overlap, the layer with the lower priority is covered by the layer with the higher priority. This overlay mode cannot perform transparent blending, so effects such as fade-in and fade-out are difficult to achieve. In addition, since all windows are rectangular, specially shaped windows such as heart-shaped or butterfly-shaped windows cannot be realized, and the display effect is very limited.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an image processing system, and a computer readable medium, which can implement transparent superposition and arbitrarily shaped window display, thereby improving the display effect.
In one aspect, an image processing method provided in an embodiment of the present invention includes: caching image data of a plurality of input layers; caching point-by-point transparency data of the plurality of input layers; and, driven by an output timing, synchronously reading and merging the target pixel data in the image data of each target input layer among the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layer, and superposing the merged image data in combination with the global transparency coefficient of the target input layer to obtain the image data at the corresponding position of the output image.
This technical solution can have the following advantages or beneficial effects: by applying the global transparency coefficient and the point-by-point transparency data of the input layers to the layer superposition processing, transparent display between layers and windows of arbitrary shape are realized; the display styles are rich and varied, and the display effect of the output image is greatly improved.
In an embodiment of the present invention, the superposition processing performed on the merged image data in combination with the global transparency coefficient of the target input layer uses the following formulas:
NewData_n = α_n * InData_n + (1 - α_n) * NewData_(n-1)
α_n = PerPixelAlpha_n * GlblAlpha_n
where n denotes the index of the target input layer in the superposition order and is a positive integer greater than 0; InData_n is the target pixel data of the n-th target input layer; PerPixelAlpha_n is the normalized value of the target point-by-point transparency data of the n-th target input layer; GlblAlpha_n is the global transparency coefficient of the n-th target input layer; when n = 1, NewData_0 is the target pixel data of the background layer; when n > 1, NewData_(n-1) is the target pixel data after superposition of the (n-1)-th target input layer; and NewData_n is the target pixel data after superposition of the n-th target input layer.
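The recursion above can be sketched in a few lines of NumPy. This is an illustrative model only, not the patent's FPGA implementation; the function and variable names (blend_layer, in_data, and so on) are chosen for readability and do not come from the patent.

```python
import numpy as np

def blend_layer(new_data_prev, in_data, per_pixel_alpha, glbl_alpha):
    # alpha_n = PerPixelAlpha_n * GlblAlpha_n
    alpha = per_pixel_alpha * glbl_alpha
    # NewData_n = alpha_n * InData_n + (1 - alpha_n) * NewData_(n-1)
    return alpha * in_data + (1.0 - alpha) * new_data_prev

# One pixel over a black background layer (NewData_0), normalized RGB:
bg = np.array([0.0, 0.0, 0.0])
fg = np.array([1.0, 0.5, 0.25])
out = blend_layer(bg, fg, per_pixel_alpha=1.0, glbl_alpha=0.5)
# With a fully opaque mask and a global coefficient of 0.5, the
# result lies halfway between foreground and background.
```

Setting glbl_alpha below 1 makes the whole layer translucent, while varying per_pixel_alpha across pixels shapes the window — exactly the division of labour the two formulas express.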
In an embodiment of the present invention, before the step of caching the point-by-point transparency data of the plurality of input layers, the method further includes: performing scaling processing on a plurality of point-by-point transparency materials respectively corresponding to the plurality of input layers to obtain the point-by-point transparency data of the plurality of input layers.
In an embodiment of the present invention, each point-by-point transparency material is a picture, and the brightness value of each pixel in the picture, or the gray value of one target color channel among a plurality of color channels, represents the point-by-point transparency data of the corresponding pixel in the corresponding input layer.
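As a rough illustration of this encoding, the sketch below derives a normalized transparency coefficient from either the brightness of an RGB picture or a single chosen channel. The Rec. 601 luma weights used for brightness are an assumption; the patent does not specify how brightness is computed.

```python
import numpy as np

def alpha_from_material(rgb, channel=None):
    """Normalized per-pixel transparency from a material picture.

    channel=None uses pixel brightness (Rec. 601 luma, an assumption);
    channel=0/1/2 uses the gray value of the R/G/B channel instead.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    if channel is not None:
        data = rgb[..., channel]
    else:
        data = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return data / 255.0  # 255 -> 1.0 (opaque), 0 -> 0.0 (transparent)

# A two-pixel material: fully opaque white, then fully transparent black.
material = np.array([[[255, 255, 255], [0, 0, 0]]])
coeff = alpha_from_material(material)
```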
In an embodiment of the present invention, merging the target pixel data in the image data of each target input layer among the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain the merged image data of the target input layer specifically includes: performing row-by-row merging of the target pixel data and the target point-by-point transparency data of the target input layer according to the priority order of the plurality of input layers to obtain the merged image data of the target input layer.
In another aspect, an image processing apparatus provided in an embodiment of the present invention includes: a plurality of image data caches for caching the image data of a plurality of input layers; a plurality of transparency data caches for caching the point-by-point transparency data of the plurality of input layers; a superposition processing module; and a plurality of data generation modules for, driven by an output timing, synchronously reading and merging the target pixel data in the image data of each target input layer among the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layer and sending the merged image data to the superposition processing module, wherein the superposition processing module superposes the merged image data in combination with the global transparency coefficient of the target input layer to obtain the image data at the corresponding position of the output image.
In an embodiment of the present invention, the superposition processing module, in combination with the global transparency coefficient of the input layer, superposes the merged image data using the following formulas:
NewData_n = α_n * InData_n + (1 - α_n) * NewData_(n-1)
α_n = PerPixelAlpha_n * GlblAlpha_n
where n denotes the index of the target input layer in the superposition order and is a positive integer greater than 0; InData_n is the target pixel data of the n-th target input layer; PerPixelAlpha_n is the normalized value of the target point-by-point transparency data of the n-th target input layer; GlblAlpha_n is the global transparency coefficient of the n-th target input layer; when n = 1, NewData_0 is the target pixel data of the background layer; when n > 1, NewData_(n-1) is the target pixel data after superposition of the (n-1)-th target input layer; and NewData_n is the target pixel data after superposition of the n-th target input layer.
In an embodiment of the present invention, the image processing apparatus further includes: a plurality of scaling modules for respectively scaling a plurality of point-by-point transparency materials corresponding to the plurality of input layers to obtain the point-by-point transparency data of the plurality of input layers.
In one embodiment of the present invention, the plurality of image data caches, the plurality of transparency data caches, the superposition processing module, and the plurality of data generating modules are integrated in a programmable logic device.
In an embodiment of the present invention, the data generation module merges the target pixel data in the image data of each target input layer among the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain the merged image data of the target input layer, which specifically includes: performing row-by-row merging of the target pixel data and the target point-by-point transparency data of the target input layer according to the priority order of the plurality of input layers to obtain the merged image data of the target input layer.
In another aspect, an embodiment of the present invention provides an image processing system, including: a memory storing a computer program, and a processor connected to the memory, the processor implementing the image processing method described above when executing the computer program.
In another aspect, an embodiment of the present invention provides a computer-readable medium, which is a non-volatile memory and stores computer-executable instructions, where the computer-executable instructions are used to execute the image processing method as described above.
One or more of the above technical solutions may have the following advantages or beneficial effects. By introducing the global transparency coefficient and the point-by-point transparency data of the input layers into the layer superposition processing, different global transparency coefficients can produce effects such as translucency, semi-blending, and fade-in/fade-out. By loading different point-by-point transparency data templates as needed, a user can realize windows of arbitrary shape, such as locally transparent, fixed-position transparent, heart-shaped, or butterfly-shaped windows. The window styles are rich, varied, and highly extensible, which greatly improves the display effect, enriches the application occasions, increases the appeal of the output display, and improves product competitiveness. In addition, since the point-by-point transparency data is represented by the brightness of the pixels of a picture material or by the gray value of one color channel, and is merged and superposed accordingly, the user can set the point-by-point transparency data freely to obtain the desired layer superposition effect, improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of a point-by-point transparency material according to a first embodiment of the present invention.
Fig. 3 is a diagram illustrating the effect of the semitransparent overlapping according to the first embodiment of the present invention.
Fig. 4 is an effect diagram of the overlapping of the special-shaped windows according to the first embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an image processing system according to a third embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a computer-readable medium according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
[First Embodiment]
As shown in fig. 1, a first embodiment of the present invention provides an image processing method, including the steps of:
s11: caching image data of a plurality of input image layers;
s13: caching point-by-point transparency data of the plurality of input layers; and
s15: under the drive of an output time sequence, synchronously reading and merging target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layers, and combining the global transparency coefficient of the target input layers to perform superposition processing on the merged image data to obtain image data of corresponding positions of output images.
In order to facilitate understanding of the present invention, each step of the image processing method of the present embodiment will be described in detail below with reference to fig. 2 to 4.
A video processor is generally configured to process the image data of input layers to obtain an output image. The video processor includes, for example, a microprocessor and a programmable logic device, such as an FPGA (Field-Programmable Gate Array), connected to the microprocessor. Under the control of the microprocessor, the programmable logic device processes the image data (or video data) of the input layers, for example scaling, caching, merging, and superposing the image data.
First, the image data of the plurality of input layers is cached. The image data is, for example, the RGB data of an input layer. The FPGA includes a plurality of caches, and the image data of the plurality of input layers is stored in different caches. The sizes of the caches can be set according to actual conditions, as long as they neither overflow nor run empty. Before this, in order to make the resolution (size) of the plurality of input layers consistent with the resolution of the output image, the FPGA may scale the original image data of the plurality of input layers to obtain the image data of the plurality of input layers. The original image data of the input layers is stored in an external DDR memory connected to the FPGA, and the scaling may be implemented by the FPGA or by a video scaling chip. It should be noted that when the resolution of the original image data of the input layers is already consistent with the resolution of the output image, no scaling is needed.
Then, the point-by-point transparency data of the plurality of input layers is cached, stored in different caches of the FPGA. The point-by-point transparency data can be loaded from a plurality of point-by-point transparency materials, generated by the user as needed and corresponding to the plurality of input layers respectively, or can be generated by calculation. A point-by-point transparency material may be, for example, a picture, and may have various shapes such as a heart or a butterfly. The point-by-point transparency data is represented, for example, by the brightness value of each pixel in the picture, or by the gray value of one of a plurality of color channels, such as the R, G, or B channel. The range of the point-by-point transparency data is, for example, 0 to 255. When the point-by-point transparency data takes the maximum value 255 (0xFF), the pixel is completely opaque; when it takes the minimum value 0 (0x0), the pixel is completely transparent; values between 0 and 255 indicate partial transparency. The point-by-point transparency data corresponds to a point-by-point transparency coefficient (the normalized value of the data): data of 255 (0xFF) corresponds to a coefficient of 1, and data of 0 (0x0) corresponds to a coefficient of 0. Fig. 2 shows a heart-shaped point-by-point transparency material.
The point-by-point transparency data of the central heart-shaped area is 255 (coefficient 1), so the center is completely opaque; the data outside the heart shape is 0 (coefficient 0), so that region is completely transparent; the data at the edge of the heart shape decreases gradually from 255 to 0 from inside to outside (coefficient from 1 to 0), forming a gradually transparent heart-shaped image. In addition, the size (resolution) of a point-by-point transparency material does not always match the size of the corresponding input layer, and in some special applications the size of the input layer may change dynamically, so the size of the point-by-point transparency material may need to be adjusted to match the output image. The FPGA may therefore scale the point-by-point transparency materials of the plurality of input layers in real time, so that their scaled sizes fully match the sizes of the plurality of input layers and of the output image.
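A minimal sketch of such a resize is shown below. The real scaler in the FPGA is not described in the patent; nearest-neighbour sampling is used here purely as an assumption, to show the transparency material being matched to the layer size.

```python
import numpy as np

def scale_material(alpha_data, out_h, out_w):
    """Nearest-neighbour resize of a 2-D transparency material (the
    interpolation used by the FPGA scaler is an assumption here)."""
    in_h, in_w = alpha_data.shape
    ys = (np.arange(out_h) * in_h) // out_h  # source row per output row
    xs = (np.arange(out_w) * in_w) // out_w  # source column per output column
    return alpha_data[np.ix_(ys, xs)]

# A 2x2 material stretched to match a hypothetical 4x4 input layer.
mask = np.array([[255, 0],
                 [0, 255]], dtype=np.uint8)
scaled = scale_material(mask, 4, 4)
```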
And finally, under the drive of an output time sequence, synchronously reading and merging target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layers, and superposing the merged image data by combining the global transparency coefficient of the target input layers to obtain image data of corresponding positions of output images. The output timing is, for example, a control signal generated by the FPGA for controlling the output of the output image, and for example, the output timing may control the output image to output image data line by line. Under the driving of the output time sequence, the FPGA, for example, sequentially and synchronously reads target pixel data in image data of a target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data according to priorities of the plurality of input layers, and then merges the target pixel data and the target point-by-point transparency data to obtain merged image data. Of course, in other embodiments of the present invention, the FPGA may also synchronously read target pixel data in the image data of the multiple input image layers and target point-by-point transparency data in the point-by-point transparency data, which is not limited in the present invention. 
The target pixel data may be, for example, the row pixel data in the image data, and the target point-by-point transparency data may be, for example, the row point-by-point transparency data in the point-by-point transparency data. In that case, merging the target pixel data in the image data of each of the plurality of input layers and the target point-by-point transparency data in the point-by-point transparency data to obtain the merged image data of the target input layer specifically includes: performing row-by-row merging of the target pixel data and the target point-by-point transparency data of the target input layer according to the priority order of the plurality of input layers. The target data may, of course, also be the image data and point-by-point transparency data of another portion of the image, or of the whole image. The target pixel data is, for example, RGB data, and the merged image data is, for example, in ARGB format, where A denotes the point-by-point transparency data. The priorities of the plurality of input layers represent their relative hierarchical relationship: an input layer with a higher priority is located above an input layer with a lower priority. The user can set the priorities of the plurality of input layers through the microprocessor as needed.
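As a rough model of this merge, the sketch below packs an RGB row and its per-pixel transparency row into 32-bit ARGB words. The bit layout (A in the most significant byte) is an assumption; the patent only says the merged data is in ARGB format.

```python
import numpy as np

def merge_row_argb(rgb_row, alpha_row):
    """Pack one row of RGB pixels and one row of per-pixel transparency
    data into 32-bit ARGB words (assumed layout: A in the high byte)."""
    r = rgb_row[:, 0].astype(np.uint32)
    g = rgb_row[:, 1].astype(np.uint32)
    b = rgb_row[:, 2].astype(np.uint32)
    a = alpha_row.astype(np.uint32)
    return (a << 24) | (r << 16) | (g << 8) | b

rgb = np.array([[0x12, 0x34, 0x56]], dtype=np.uint8)  # one RGB pixel
alpha = np.array([0xFF], dtype=np.uint8)              # fully opaque
argb = merge_row_argb(rgb, alpha)
```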
Specifically, when the output timing reaches the first pixel row of the output image, for example, the FPGA synchronously reads, according to the priorities of the plurality of input layers, the row pixel data and the row point-by-point transparency data corresponding to the first pixel row of the output image from the first target input layer, i.e., the one with the lowest priority. It is possible that no data is read, that is, the first target input layer has no image data at the position corresponding to the first pixel row of the output image; in that case no superposition with the background layer is needed, and the FPGA continues reading the data of the other layers according to the priorities of the plurality of input layers.
When the first target input layer does have image data at the position corresponding to the first pixel row of the output image, the FPGA merges the read row pixel data and row point-by-point transparency data to obtain the row-merged image data of the first target input layer. The row-merged image data of the first target input layer corresponding to the first pixel row of the output image is then superposed with the image data of the background layer corresponding to the first pixel row of the output image, yielding the processed row pixel data of the first target input layer. Here the image data of all pixels of the background layer is RGB(0, 0, 0), i.e., the background layer is black. The superposition uses the following formulas:
NewData_1 = α_1 * InData_1 + (1 - α_1) * NewData_0 (Formula 1)
α_1 = PerPixelAlpha_1 * GlblAlpha_1 (Formula 2)
where InData_1 is the row pixel data of the first target input layer, PerPixelAlpha_1 is the row point-by-point transparency coefficient of the first target input layer, GlblAlpha_1 is the global transparency coefficient of the first target input layer, NewData_0 is the row pixel data of the background layer, and NewData_1 is the processed row pixel data of the first target input layer.
Next, the FPGA synchronously reads, according to the priorities of the plurality of input layers, the row pixel data and the row point-by-point transparency data corresponding to the first pixel row of the output image from the second target input layer, i.e., the one with the next-lowest priority. Similarly, when no data is read — that is, the second target input layer has no image data at the position corresponding to the first pixel row of the output image — no superposition is needed, and the FPGA continues reading the data of the other layers according to the priorities of the plurality of input layers.
When the second target input layer does have image data at the position corresponding to the first pixel row of the output image, the FPGA merges the read row pixel data and row point-by-point transparency data to obtain the row-merged image data of the second target input layer. The row-merged image data of the second target input layer corresponding to the first pixel row of the output image is then superposed with the processed row pixel data of the first target input layer, yielding the processed row pixel data of the second target input layer.
The above steps are repeated until the FPGA has synchronously read, merged, and superposed the row pixel data and row point-by-point transparency data corresponding to the first pixel row of the output image for every target input layer among the plurality of input layers, in priority order. The superposition uses the following formulas:
NewData_n = α_n * InData_n + (1 - α_n) * NewData_(n-1) (Formula 3)
α_n = PerPixelAlpha_n * GlblAlpha_n (Formula 4)
where n denotes the index of the target input layer in the superposition order and is an integer greater than 1; InData_n is the row pixel data of the n-th target input layer; PerPixelAlpha_n is the normalized value of the row point-by-point transparency data (i.e., the row point-by-point transparency coefficient) of the n-th target input layer; GlblAlpha_n is the global transparency coefficient of the n-th target input layer; NewData_(n-1) is the row pixel data after superposition of the (n-1)-th target input layer; and NewData_n is the row pixel data after superposition of the n-th target input layer.
Finally, the processed row pixel data of the last target input layer is taken as the image data of the first pixel row of the output image.
Then, driven by the output timing, the FPGA performs the same reading, merging, and superposition processing for the remaining pixel rows of the output image as it did for the first pixel row, thereby obtaining the image data of all pixel rows of the output image for output.
In this way, a transparent overlay of multiple input layers is achieved. When an entire input layer needs to be semi-transparent, it suffices to set the global transparency coefficient of that layer to a value less than 1 (see fig. 3). When partial images of variously shaped windows need to be displayed while the remaining parts are fully transparent, the point-by-point transparency coefficient of the parts to be displayed is set to 1, that of the fully transparent parts is set to 0, and that of the transition region is set between 0 and 1 (see fig. 4). In this way, a wide variety of image shapes can be realized, greatly enriching the display effect.
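As an illustration of such a point-by-point transparency template, the sketch below builds a map for a circular window with a soft edge: coefficient 1 where the image is displayed, 0 where it is fully transparent, and a linear ramp between 0 and 1 in the transition region. The circular shape, the radii, and the linear ramp are illustrative assumptions, not details taken from the patent.

```python
def circular_alpha_template(width, height, r_inner, r_outer):
    """Point-by-point transparency map: 1.0 inside r_inner, 0.0 outside
    r_outer, and a linear 0..1 ramp in the transition ring between them."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    template = []
    for y in range(height):
        row = []
        for x in range(width):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d <= r_inner:
                row.append(1.0)          # fully displayed
            elif d >= r_outer:
                row.append(0.0)          # fully transparent
            else:                        # transition region
                row.append((r_outer - d) / (r_outer - r_inner))
        template.append(row)
    return template

tpl = circular_alpha_template(9, 9, 2.0, 4.0)
```

A heart-shaped or butterfly-shaped window would be produced the same way, only with a different inside/outside test; in practice the template could equally come from the luminance of a picture material, as the description notes.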
In summary, in the embodiment of the present invention, the global transparency coefficient and the point-by-point transparency data of each input layer are introduced into the layer overlay process. Setting different global transparency coefficients achieves effects such as translucency, semi-fusion, and fade-in/fade-out, and the user can load different point-by-point transparency data templates as required to obtain windows of arbitrary shape, such as locally transparent, fixed-position transparent, heart-shaped, or butterfly-shaped windows. The window styles are rich and varied and highly extensible, which greatly improves the display effect, creates a distinctive atmosphere, greatly enriches the application occasions, increases the appeal of the output display, and improves product competitiveness. In addition, because the point-by-point transparency data is represented by the pixel luminance of a picture material or the gray value of one of its color channels and is merged and superimposed with the image data, the user can freely set the point-by-point transparency data to obtain the desired layer overlay effect, improving the user experience.
[Second Embodiment]
As shown in fig. 5, a second embodiment of the present invention provides an image processing apparatus 100. The image processing apparatus 100 includes, for example: a plurality of image data buffers 110, a plurality of transparency data buffers 130, a plurality of data generation modules 150, and an overlay processing module 170.
A plurality of image data buffers 110 for buffering image data of a plurality of input image layers.
And a plurality of transparency data buffers 130, configured to buffer the point-by-point transparency data of the plurality of input layers.
The data generating modules 150 are configured to, under driving of an output timing sequence, synchronously read and combine target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain combined image data of the target input layers, and send the combined image data to the superposition processing module 170.
And the superposition processing module 170 is configured to perform superposition processing on the merged image data in combination with the global transparency coefficient of the target input layer to obtain image data of a corresponding position of an output image.
Further, the plurality of image data buffers 110, the plurality of transparency data buffers 130, the plurality of data generating modules 150, and the overlay processing module 170 are integrated in a programmable logic device.
In addition, the image processing apparatus 100 further includes a plurality of scaling modules 190, configured to respectively scale a plurality of point-by-point transparency materials corresponding to the plurality of input layers, so as to obtain the point-by-point transparency data of the plurality of input layers. Further, the scaling modules 190 are also integrated into the programmable logic device.
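The role of a scaling module can be modelled in a few lines. The sketch below is an illustrative assumption, not the hardware realization: it uses nearest-neighbour sampling, while the patent does not specify the interpolation method.

```python
def scale_alpha_material(material, out_w, out_h):
    """Scale a 2-D point-by-point transparency material to the input
    layer's size using nearest-neighbour sampling (assumed method)."""
    in_h, in_w = len(material), len(material[0])
    return [
        [material[(y * in_h) // out_h][(x * in_w) // out_w]
         for x in range(out_w)]
        for y in range(out_h)
    ]

# Stretch a 1x2 material to a 2x4 layer: each source coefficient
# is repeated to cover the enlarged area.
scaled = scale_alpha_material([[0.0, 1.0]], 4, 2)
```

Scaling the transparency material to the layer's resolution before caching is what lets one stored template serve input layers of any size.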
The specific operation and technical effects between the modules in the image processing apparatus 100 in the present embodiment are described with reference to the foregoing first embodiment.
[Third Embodiment]
As shown in fig. 6, a third embodiment of the present invention provides an image processing system 300. Image processing system 300 includes a memory 310 and a processor 330 coupled to memory 310. The memory 310 may be, for example, a non-volatile memory, on which a computer program 311 is stored. The processor 330 may, for example, comprise an embedded processor. The processor 330 executes the computer program 311 to execute the image processing method provided by the foregoing first embodiment.
[Fourth Embodiment]
As shown in FIG. 7, a fourth embodiment of the invention provides a computer-readable medium 500 having stored thereon computer-executable instructions 510. The computer-executable instructions 510 are for performing the image processing method as described in the first embodiment above. The computer-readable medium 500 is a non-volatile memory, including, for example: magnetic media (e.g., hard disks, floppy disks, and magnetic tape), optical media (e.g., CD-ROM disks and DVDs), magneto-optical media (e.g., magneto-optical disks), and hardware devices specially constructed for storing and executing computer-executable instructions (e.g., read-only memories (ROMs), random access memories (RAMs), flash memories, etc.). The computer-executable instructions 510 may be executed by one or more processors or processing devices.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of one logic function, and an actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method, comprising:
caching image data of a plurality of input image layers;
caching point-by-point transparency data of the plurality of input layers; and
under the drive of an output time sequence, synchronously reading and merging target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layers, and superposing the merged image data by combining a global transparency coefficient of the target input layers to obtain image data of corresponding positions of output images;
wherein, the superposition processing of the merged image data by combining the global transparency coefficient of the target input layer adopts the following formula:
NewData_n = α_n * InData_n + (1 − α_n) * NewData_(n−1)
α_n = PerPixelAlpha_n * GlblAlpha_n
wherein n denotes the layer index of the target input layer being superimposed and is a positive integer greater than 0; InData_n is the target pixel data of the n-th target input layer; PerPixelAlpha_n is the normalized value of the target point-by-point transparency data of the n-th target input layer; GlblAlpha_n is the global transparency coefficient of the n-th target input layer; when n = 1, NewData_0 is the target pixel data of a background layer; when n > 1, NewData_(n−1) is the target pixel data after superposition of the (n−1)-th target input layer; and NewData_n is the target pixel data after superposition of the n-th target input layer.
2. The image processing method of claim 1, wherein prior to the step of buffering pointwise transparency data for the plurality of input image layers, further comprising: and carrying out scaling processing on a plurality of point-by-point transparency materials respectively corresponding to the plurality of input layers to obtain the point-by-point transparency data of the plurality of input layers.
3. The image processing method according to claim 2, wherein each of the point-by-point transparency materials is a picture, and the point-by-point transparency data of the corresponding pixel in the input layer is represented by a luminance value of each pixel in the picture or a gray value of a target color channel of a plurality of color channels.
4. The image processing method according to claim 1, wherein the merging processing is performed on target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layers, specifically: and performing line-by-line merging processing on the target pixel data and the target point-by-point transparency data of the target input layer according to the priority order of the plurality of input layers to obtain the merged image data of the target input layer.
5. An image processing apparatus characterized by comprising:
the image data caches are used for caching the image data of the input image layers;
the transparency data caches are used for caching point-by-point transparency data of the input layers;
a superposition processing module; and
the data generating modules are used for synchronously reading and merging target pixel data in the image data of each target input layer in the input layers and target point-by-point transparency data in the point-by-point transparency data under the drive of an output time sequence to obtain merged image data of the target input layers, and sending the merged image data to the superposition processing module, and the superposition processing module combines the global transparency coefficient of the target input layer to perform superposition processing on the merged image data to obtain image data of a corresponding position of an output image;
the superposition processing module combines the global transparency coefficient of the target input image layer to carry out superposition processing on the merged image data by adopting the following formula:
NewData_n = α_n * InData_n + (1 − α_n) * NewData_(n−1)
α_n = PerPixelAlpha_n * GlblAlpha_n
wherein n denotes the layer index of the target input layer being superimposed and is a positive integer greater than 0; InData_n is the target pixel data of the n-th target input layer; PerPixelAlpha_n is the normalized value of the target point-by-point transparency data of the n-th target input layer; GlblAlpha_n is the global transparency coefficient of the n-th target input layer; when n = 1, NewData_0 is the target pixel data of a background layer; when n > 1, NewData_(n−1) is the target pixel data after superposition of the (n−1)-th target input layer; and NewData_n is the target pixel data after superposition of the n-th target input layer.
6. The image processing apparatus according to claim 5, further comprising: and the zooming modules are used for respectively zooming a plurality of point-by-point transparency materials respectively corresponding to the plurality of input layers so as to obtain the point-by-point transparency data of the plurality of input layers.
7. The image processing apparatus of claim 5, wherein the plurality of image data caches, the plurality of transparency data caches, the overlay processing module, and the plurality of data generation modules are integrated into a programmable logic device.
8. The image processing apparatus according to claim 5, wherein the data generation module performs merging processing on target pixel data in the image data of each target input layer in the plurality of input layers and target point-by-point transparency data in the point-by-point transparency data to obtain merged image data of the target input layers, specifically: and performing line-by-line merging processing on the target pixel data and the target point-by-point transparency data of the target input layer according to the priority order of the plurality of input layers to obtain the merged image data of the target input layer.
9. An image processing system, comprising: a memory storing a computer program and a processor connected to the memory, the processor executing the computer program when executing the image processing method according to any one of claims 1 to 4.
10. A computer-readable medium being a non-volatile memory and storing computer-executable instructions for performing the image processing method of any one of claims 1 to 4.
CN201910589202.7A 2019-07-02 2019-07-02 Image processing method, device and system and computer readable medium Active CN112188262B (en)

Publications (2)

Publication Number Publication Date
CN112188262A (en) 2021-01-05
CN112188262B (en) 2023-02-17




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant