CN115187763A - Method, apparatus, medium, and device for processing a plurality of image sequences


Info

Publication number: CN115187763A
Application number: CN202210803798.8A
Authority: CN (China)
Prior art keywords: image, synthesized, region, channel, current
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 朱俊炜
Current Assignee: Shanghai Bilibili Technology Co Ltd
Original Assignee: Shanghai Bilibili Technology Co Ltd
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202210803798.8A
Publication of CN115187763A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/16: Image acquisition using multiple overlapping images; image stitching
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007: Scaling based on interpolation, e.g. bilinear interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method, an apparatus, a computer program product, a non-transitory computer-readable storage medium, and an electronic device for processing a plurality of image sequences. The method comprises performing the following merging processing on the images with the same sequence number in a plurality of image sequences: decoding an image in the current image sequence to obtain a current image to be synthesized; and merging the current image to be synthesized with the previous synthesized image to obtain a current synthesized image, wherein the first synthesized image is obtained as follows: decoding an image in the first image sequence to obtain a first image to be synthesized; decoding an image in the second image sequence to obtain a second image to be synthesized; and merging the second image to be synthesized with the first image to be synthesized to obtain the first synthesized image. According to the embodiments provided by the disclosure, video gifts can be synthesized automatically, avoiding the labor cost of manually compositing multiple video layers.

Description

Method, apparatus, medium, and device for processing a plurality of image sequences
Technical Field
The present disclosure relates generally to the field of image and video processing technology, and more particularly, to a method, an apparatus, a computer program product, a non-transitory computer-readable storage medium, and an electronic device for processing a plurality of image sequences.
Background
This section is intended to introduce a selection of aspects of the art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This section is believed to be helpful in providing background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these descriptions should be read in this light, and not as admissions of prior art.
Gifts on live streaming platforms may be implemented as vector animations. However, vector animation occupies computing resources during playback, and its special effects are not as rich as those of video, so gifts on live platforms have gradually come to be implemented as videos. Since a gift special effect on a live platform needs to be transparent, while video is usually opaque, transparent video gift effects have to be custom-made with professional video editing software (e.g., Adobe).
In such a scheme, however, the gift effect is fixed, and all users on the live platform see the same gift effects.
Disclosure of Invention
An object of the present disclosure is to provide a method, an apparatus, a computer program product, a non-transitory computer-readable storage medium, and an electronic device for processing a plurality of image sequences, so as to avoid the labor cost of manually compositing multiple video layers.
According to a first aspect of the present disclosure, there is provided a method of processing a plurality of image sequences, the method comprising: performing the following merging processing on the images with the same sequence number in the plurality of image sequences: decoding the image in the current image sequence to obtain a current image to be synthesized; and merging the current image to be synthesized with the previous synthesized image to obtain a current synthesized image, wherein the first synthesized image is obtained as follows: decoding the image in the first image sequence to obtain a first image to be synthesized; decoding the image in the second image sequence to obtain a second image to be synthesized; and merging the second image to be synthesized with the first image to be synthesized to obtain the first synthesized image.
According to a second aspect of the present disclosure, there is provided an apparatus for processing a plurality of image sequences, comprising: a processing module configured to perform the following merging processing on the images with the same sequence number in the plurality of image sequences: decoding the image in the current image sequence to obtain a current image to be synthesized; and merging the current image to be synthesized with the previous synthesized image to obtain a current synthesized image, wherein the first synthesized image is obtained as follows: decoding the image in the first image sequence to obtain a first image to be synthesized; decoding the image in the second image sequence to obtain a second image to be synthesized; and merging the second image to be synthesized with the first image to be synthesized to obtain the first synthesized image.
According to a third aspect of the present disclosure, there is provided a computer program product comprising program code instructions which, when executed by a computer, cause the computer to perform the method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: a processor, a memory in electronic communication with the processor; and instructions stored in the memory and executable by the processor to cause the electronic device to perform the method according to the first aspect of the disclosure.
According to the embodiment provided by the disclosure, the video gifts can be automatically synthesized, and the labor cost of the synthesis processing of the multiple video layers is avoided.
It should be understood that the statements herein are not intended to identify key or essential features of the claimed subject matter, nor are they intended to be used alone as an aid in determining the scope of the claimed subject matter.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without creative effort. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates one example of splitting RGB and Alpha channels of an image according to the present disclosure.
Fig. 2 shows a flowchart of a method of merging the first frames of n image sequences (n being an integer greater than 1) according to an embodiment of the present disclosure.
Fig. 3 shows one example of a position parameter associated with the position of the first and second regions in the image according to an embodiment of the disclosure.
Fig. 4 shows an example of a normalized first region image and second region image according to an embodiment of the disclosure.
FIG. 5 illustrates one example of a plurality of layers in accordance with an embodiment of the disclosure.
Fig. 6 illustrates one example of an output image according to an embodiment of the present disclosure.
FIG. 7 illustrates a flow diagram of a method of processing a plurality of image sequences according to some embodiments of the present disclosure.
Fig. 8 illustrates an exemplary block diagram of an apparatus for processing a plurality of image sequences according to an embodiment of the present disclosure.
FIG. 9 illustrates a schematic block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.
Detailed description of the invention
The present disclosure will be described more fully hereinafter with reference to the accompanying drawings. The present disclosure may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein. Accordingly, while the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the claims.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the teachings of the present disclosure.
Some examples are described herein in connection with block diagrams and/or flowchart illustrations, where each block represents a circuit element, module, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in other implementations, the functions noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Reference herein to "according to an example" or "in an example" means that a particular feature, structure, or characteristic described in connection with the example can be included in at least one implementation of the present disclosure. The appearances of the phrases "according to an example" or "in an example" in various places herein are not necessarily all referring to the same example, nor are separate or alternative examples necessarily mutually exclusive of other examples.
The RGB color scheme is an industry color standard: R, G, and B represent the red, green, and blue channels, and colors are obtained by mixing and superimposing them. In RGB coding, each color is represented by three variables giving the intensities of red, green, and blue. YUV (also known as YCbCr) is a color coding method adopted by modern television systems, in which a three-tube color camera or a color CCD camera typically captures the image; the color image signal is then color-separated, amplified, and corrected to obtain RGB, after which a matrix conversion circuit produces a luminance signal Y and two color-difference signals R-Y (i.e., U) and B-Y (i.e., V). In YUV coding, "Y" represents brightness (Luma), i.e., the grayscale value, while "U" and "V" represent chrominance (Chroma), which describes the color and saturation of the image and specifies the color of a pixel.
A video frame may be composed of a number of different channels: for example, an RGB video frame is composed of a red channel (R), a green channel (G), and a blue channel (B), while a YUV video frame is composed of a luminance channel (Y), a first chrominance channel (U), and a second chrominance channel (V). As used herein, the terms "image," "picture," "frame," and "video frame" are used interchangeably, as are "layer," "image sequence," and "video."
The Alpha channel refers to the transparency or translucency of an image. For example, a bitmap stored using 16 bits per pixel may devote 5 bits to red, 5 bits to green, 5 bits to blue, and the last bit to Alpha for each pixel. In that case the Alpha bit can only indicate transparent or opaque, because a single bit has only two possible values, 0 or 1. As another example, in a bitmap stored using 32 bits per pixel, the RGB channels and the Alpha channel are each represented by 8 bits. In that case the Alpha channel can represent not only fully transparent or opaque but also 256 levels of translucency, since an 8-bit Alpha channel has 256 possible values.
In the present disclosure, the effect of image transparency can be achieved by splitting the data of the RGB channels and the data of the Alpha channel of an image into different positions of a non-transparent video. FIG. 1 illustrates one example of splitting the RGB and Alpha channels of an image according to the present disclosure. As shown in fig. 1, a non-transparent video frame 10 includes a rectangular region 20 and a rectangular region 30, where the data of the RGB channels of the image is placed at the rectangular region 20 and the data of the Alpha channel of the image is placed at the region 30. In the example of FIG. 1, the RGB value of a pure black pixel in the rectangular region 20 is [0,0,0], and the RGB value of a pure white pixel is [255,255,255]. In this example, the data of the Alpha channel of the image is mapped to black-and-white data. For example, if the value of the Alpha channel at a certain pixel is 0, the pixel can be mapped to the RGB value [0,0,0], i.e., pure black; if the value of the Alpha channel at a pixel is 255, the pixel can be mapped to the RGB value [255,255,255], i.e., pure white.
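For illustration only, the following is a minimal NumPy sketch of this packing step; the function name, region parameters, and array shapes are assumptions and not part of the disclosed embodiments.

```python
import numpy as np

def pack_rgba_into_opaque_frame(rgba, frame_h, frame_w, rgb_rect, alpha_rect):
    # rgba: (H, W, 4) uint8 array; rgb_rect / alpha_rect: (x, y, w, h) tuples
    # locating the two regions (assumed to fit the image and the frame).
    frame = np.zeros((frame_h, frame_w, 3), dtype=np.uint8)

    x, y, w, h = rgb_rect
    # Place the RGB channels of the image at the first (color) region.
    frame[y:y + h, x:x + w] = rgba[:h, :w, :3]

    x, y, w, h = alpha_rect
    # Map the Alpha channel to black-and-white data at the second region:
    # Alpha 0 -> [0,0,0] (pure black), Alpha 255 -> [255,255,255] (pure white).
    frame[y:y + h, x:x + w] = rgba[:h, :w, 3:4].repeat(3, axis=2)
    return frame
```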
The following describes a method for processing a plurality of image sequences according to the embodiment of the present disclosure based on RGB channel data and Alpha channel data of the split image.
Fig. 2 shows a flowchart of a method of merging the first frames of n image sequences (n being an integer greater than 1) according to an embodiment of the present disclosure. As shown in fig. 2, the method 200 includes:
decoding a first frame in a first image sequence to obtain a first image to be synthesized;
decoding a first frame in the second image sequence to obtain a second image to be synthesized;
combining the second image to be synthesized and the first image to be synthesized to obtain a first synthesized image;
decoding a first frame in the third image sequence to obtain a third image to be synthesized;
combining the third image to be synthesized and the first synthesized image to obtain a second synthesized image;
and so on, until decoding the first frame in the nth image sequence to obtain an nth image to be synthesized;
and merging the nth image to be synthesized with the (n-2)th synthesized image to obtain the (n-1)th synthesized image.
In this example, after the first frames in the n image sequences are merged using the above method, the (n-1)th synthesized image is the merged image.
In this example, the second frame, the third frame, …, and the mth frame (m being an integer greater than 1) of the n image sequences are merged in the same way as the first frame.
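As an illustrative sketch of this loop for one frame index, where decode_frame and merge are hypothetical placeholders for the decoding and combining steps of method 200:

```python
def merge_frame(sequences, index, decode_frame, merge):
    # sequences: the n image sequences, ordered first to nth.
    first = decode_frame(sequences[0], index)   # first image to be synthesized
    second = decode_frame(sequences[1], index)  # second image to be synthesized
    composite = merge(second, first)            # first synthesized image
    # Each remaining sequence is merged onto the previous synthesized image;
    # n sequences therefore yield the (n-1)th synthesized image.
    for seq in sequences[2:]:
        composite = merge(decode_frame(seq, index), composite)
    return composite
```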
According to the method for processing a plurality of image sequences provided by this embodiment of the present disclosure, video gifts can be synthesized automatically, avoiding the labor cost of manually compositing multiple video layers.
In the example of fig. 2, decoding a frame in the image sequence to obtain an image to be synthesized includes:
converting the image from the first format to a second format having the color channel and a transparency channel based on the color information and the transparency information of the image.
In this example, the image or frame may include a first region and a second region, the first region may contain color information of the image, the second region may contain transparency information of the image, and the image may be in a first format having color channels. In this example, the color information of the image may be data of a non-Alpha channel, such as data of an RGB channel or data of a YUV channel, and the transparency information of the image may be information corresponding to the data of the Alpha channel, such as data mapping the data of the Alpha channel to the RGB channel. As one example, the information contained at the rectangular area 20 in fig. 1 is color information of an image, and the information contained at the rectangular area 30 is transparency information of an image. In an application scenario of the live broadcast platform, the color information in this example may be color information of a gift, and the transparency information may be transparency information of the gift.
In this example, the first region and the second region may be located at any region in the image. As an example, the rectangular region 20 in fig. 1 may be an example of a first region, and the rectangular region 30 may be an example of a second region.
In this example, the color channel may be any image channel other than Alpha channel, such as RGB channel or YUV channel, and the transparency channel is the Alpha channel described above. In this example, the format of the image may be embodied by a representation of the pixels in the image. The image in the first format may represent pixels in the image in a color channel and the image in the second format may represent pixels in the image in a color channel and a transparency channel. For example, if the value of a pixel in the image is [0,0,0] (the values representing the three channels of RGB are 0), the image is in the first format; if the value of a pixel in the image is [0,0,0,1] (representing 0 for the three channels of RGB and 1 for the alpha channel), the image is in the second format.
In this example, the image may be converted from the first format to the second format using color information and transparency information of the image. The format conversion step of the image will be described with reference to fig. 1. As shown in fig. 1, if the RGB value of a certain pixel of the image in the rectangular area 20 is [0,0,0] (i.e., pure black) and the RGB value of the transparency information corresponding to the pixel is [0,0,0] (i.e., pure black), the pixel value can be converted into [0,0,0,0] (representing that the RGB three channels have values of 0 and the alpha channel has a value of 0), so that the pixel format is converted from the first format into the second format.
In this example, a plurality of images in the first format may be converted into images in the second format in a preset order. The preset order may be an order arbitrarily designated by the user; for example, three images A, B, and C in the first format may be converted in the order A → B → C or in the order A → C → B. Format conversion may thus be performed in a variety of different orders for the plurality of images in the first format.
In this example, a preset order (e.g., a first image sequence, a second image sequence, etc.) of the n image sequences may be an order randomly designated by a user, and the plurality of images of the second format may be sequentially merged according to the preset order. In this example, the merging between images may be a merging between channels of multiple images, e.g., alpha channels of multiple images may be merged.
According to the method for processing a plurality of image sequences provided by this embodiment, multiple video layers can be composited into different videos through a user-defined compositing order, so that a different gift effect can be composited for each user.
In the example of fig. 2, the converting the image from the first format to the second format having the color channel and the transparency channel based on the color information and the transparency information of the image may include:
step S2022: the image is segmented into a first region image and a second region image based on location parameters associated with the locations of the first region and the second region in the image.
Fig. 3 shows one example of position parameters associated with the positions of the first and second regions in the image according to an embodiment of the disclosure. As shown in fig. 3, the left half of the image is the first region in the present disclosure, and the upper right part of the image is the second region in the present disclosure. The coordinates of the top-left pixel of the image are (X=0, Y=0); the position parameter of the first region specifies a rectangular region starting at (X=0, Y=0) with a height (H) of 1280 and a width (W) of 720, and the position parameter of the second region specifies a rectangular region starting at (X=724, Y=0) with a height (H) of 640 and a width (W) of 360. In this example, the image may be sliced according to the position parameters of the first and second regions, for example by cutting the first and second regions out of the image separately: cutting out the first region yields the first region image, and cutting out the second region yields the second region image.
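A minimal sketch of this slicing step, assuming the frame is a NumPy array; the default position parameters reproduce the example of Fig. 3 and are illustrative only:

```python
def crop_regions(frame, first_rect=(0, 0, 720, 1280), second_rect=(724, 0, 360, 640)):
    # Position parameters are (X, Y, W, H) tuples as in Fig. 3.
    def crop(rect):
        x, y, w, h = rect
        return frame[y:y + h, x:x + w]
    # Returns the first region image and the second region image.
    return crop(first_rect), crop(second_rect)
```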
Step S2024: the sizes of the first area image and the second area image are normalized.
In this example, the data sizes of the first region image and the second region image may be normalized by resampling. In some optional examples, the sizes of the first region image and the second region image may be normalized according to an image scaling algorithm, which may include a linear interpolation algorithm, a bicubic interpolation algorithm, the Lanczos algorithm, a new edge-directed interpolation algorithm, and the like. The linear interpolation algorithm (bilinear interpolation) interpolates once in each of two directions, obtaining the target pixel from its four adjacent pixels. The bicubic interpolation algorithm considers not only the gray values of the surrounding adjacent pixels but also the rate of change of those gray values, so it produces smoother edges than linear interpolation. The New Edge-Directed Interpolation algorithm uses local covariance properties to derive prediction coefficients for optimal linear MMSE prediction.
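As an illustrative sketch of bilinear resampling, the simplest of these algorithms, assuming NumPy arrays; the function name and signature are assumptions:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    # Resize an (H, W, C) image with bilinear interpolation: each output
    # pixel is interpolated from its four nearest source pixels.
    in_h, in_w = img.shape[:2]
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return (top * (1 - wy) + bottom * wy).astype(img.dtype)
```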
Step S2026: map the color information contained in the normalized first region image and the transparency information contained in the normalized second region image into the pixels of the color channel and the pixels of the transparency channel of the image to be synthesized, respectively.
Fig. 4 shows an example of a normalized first region image and second region image according to an embodiment of the disclosure. As shown in fig. 4, the image on the left is the normalized first region image, the image on the right is the normalized second region image, and the two images have the same size. In this example, the color information in the first region image and the transparency information in the second region image may be mapped into pixels of a color channel and pixels of a transparency channel. For example, if the pixel value at the coordinates (X=6, Y=6) in the first region image is [0,0,0] (the three RGB channels all being 0), and the pixel value at the same coordinates in the second region image is [0,0,0] (the three RGB channels all being 0), then the pixel value at those coordinates in the mapped image is [0,0,0,0] (the three RGB channels being 0 and the Alpha channel being 0): the RGB channels at the mapped coordinates take the pixel value at those coordinates in the first region image, and the Alpha channel at the mapped coordinates takes the mapped value at those coordinates in the second region image.
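A minimal sketch of this mapping, under the same NumPy assumptions as the sketches above:

```python
import numpy as np

def regions_to_rgba(first_region, second_region):
    # Both inputs are (H, W, 3) uint8 arrays of the same, normalized size.
    # The black-and-white second region has three equal channels, any of
    # which carries the Alpha value; take the first one.
    alpha = second_region[:, :, :1]
    # Concatenate RGB and Alpha into a second-format (RGBA) image.
    return np.concatenate([first_region, alpha], axis=2)
```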
According to the method for processing a plurality of image sequences provided by this embodiment of the disclosure, performing segmentation and normalization operations on the images simplifies the image format conversion process and improves processing efficiency.
In some embodiments, the plurality of image sequences include an image sequence belonging to a background layer and image sequences belonging to foreground layers, and the current image to be synthesized in fig. 2 is a foreground layer while the previous synthesized image is the background layer. In this example, the image sequence belonging to the background layer may be the bottommost of the plurality of layers, and the image sequences belonging to the foreground layers may be the layers above it. FIG. 5 illustrates one example of a plurality of layers in accordance with an embodiment of the disclosure. As shown in fig. 5, the background-layer video includes the sequence frames A1, A2, A3, …, An; the second-layer video includes the sequence frames B1, B2, B3, …, Bn; and the third-layer video includes C1, C2, C3, …, Cn. The second-layer video and the third-layer video belong to the foreground layers and are the middle-layer video and the upper-layer video, respectively. In this example, the frame of each layer's video at time t (that is, frame A1, frame B1, and frame C1) may be obtained, and then frame A1, frame B1, and frame C1 are converted in turn from the first format to the second format. After the video frames of each layer at time t have been processed, the frames of each layer at the next time t' (namely frame A2, frame B2, and frame C2) can be obtained and converted in turn from the first format to the second format, thereby realizing pipeline processing and reducing memory usage during processing.
In other embodiments, the current image to be synthesized and the previous synthesized image may be merged according to the following equations:

α_0 = α_a + α_b(1 - α_a)    (Equation 1)

C_0 = (C_a·α_a + C_b·α_b(1 - α_a)) / α_0    (Equation 2)

where C_a is the pixel of the color channel of the current image to be synthesized, C_b is the pixel of the color channel of the previous synthesized image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous synthesized image, α_0 is the pixel of the transparency channel of the current synthesized image, and C_0 is the pixel of the color channel of the current synthesized image.
Merging images using the above equations improves the portability of the method for processing a plurality of image sequences provided by the embodiments of the present disclosure within a tool chain. For example, the merging of multiple images may be accomplished using the FFmpeg tool. FFmpeg is free, open-source software that can record, convert, and stream audio and video in a variety of formats. When multiple images are merged using the above equations, the merging can be implemented on top of FFmpeg.
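As an illustration, a minimal NumPy sketch of merging per Equations 1 and 2; the function name, the uint8 value range, and the guard against division by zero are assumptions:

```python
import numpy as np

def merge_straight(current, previous):
    # current: RGBA image to be synthesized; previous: previous synthesized
    # image; both (H, W, 4) uint8. Colors are NOT pre-multiplied here.
    c_a, a_a = current[..., :3] / 255.0, current[..., 3:] / 255.0
    c_b, a_b = previous[..., :3] / 255.0, previous[..., 3:] / 255.0

    a_0 = a_a + a_b * (1.0 - a_a)                       # Equation 1
    # Equation 2; where the result is fully transparent, emit zeros.
    c_0 = np.where(a_0 > 0,
                   (c_a * a_a + c_b * a_b * (1.0 - a_a)) / np.maximum(a_0, 1e-8),
                   0.0)
    return (np.concatenate([c_0, a_0], axis=-1) * 255.0 + 0.5).astype(np.uint8)
```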
In other embodiments, the pixels of the color channels of the plurality of images are pre-multiplied pixels, and the current image to be synthesized and the previous synthesized image may be merged according to the following equations:

α_0 = α_a + α_b(1 - α_a)    (Equation 3)

c_0 = c_a + c_b(1 - α_a)    (Equation 4)

where c_a is the pixel of the color channel of the current image to be synthesized, c_b is the pixel of the color channel of the previous synthesized image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous synthesized image, α_0 is the pixel of the transparency channel of the current synthesized image, and c_0 is the pixel of the color channel of the current synthesized image. Pre-multiplication, also called Alpha pre-multiplication, means that the pixels of the color channels of an image are pre-computed for Alpha compositing according to the following equation:

c_i = α_i·C_i    (Equation 5)

where C_i is the pixel of a color channel of the image before pre-multiplication, α_i is the pixel of the transparency channel of the image, c_i is the pixel of the color channel of the image after pre-multiplication, and i = 1, 2, …, N (N being the total number of pixels of a color channel of the image). During merging, if the pixels of the color channels are pre-multiplied but the images are merged according to Equations 1 and 2 (i.e., processed as non-pre-multiplied pixels), the merged image will come out too dark, which degrades the processing result. Treating pre-multiplied and non-pre-multiplied images differently therefore improves image quality.
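A matching sketch for the pre-multiplied case of Equations 3 and 4, under the same assumptions; note that no division by α_0 is needed:

```python
import numpy as np

def merge_premultiplied(current, previous):
    # Color channels hold pre-multiplied pixels (c_i = α_i · C_i, Equation 5).
    c_a, a_a = current[..., :3] / 255.0, current[..., 3:] / 255.0
    c_b, a_b = previous[..., :3] / 255.0, previous[..., 3:] / 255.0

    a_0 = a_a + a_b * (1.0 - a_a)   # Equation 3
    c_0 = c_a + c_b * (1.0 - a_a)   # Equation 4
    return (np.concatenate([c_0, a_0], axis=-1) * 255.0 + 0.5).astype(np.uint8)
```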
In some embodiments, the plurality of image sequences may include an image sequence belonging to a background layer and image sequences belonging to foreground layers, and the current image to be synthesized in fig. 2 is the background layer while the previous synthesized image is a foreground layer. Continuing with fig. 5, in this example the frame of each layer's video at time t (that is, frame C1, frame B1, and frame A1) may be obtained, and then frame C1, frame B1, and frame A1 are converted in turn from the first format to the second format. After the video frames of each layer at time t have been processed, the frames of each layer at the next time t' (namely frame C2, frame B2, and frame A2) can be obtained and converted in turn from the first format to the second format, thereby realizing pipeline processing and reducing memory usage during processing.
In some examples, the method of fig. 2 further comprises: converting the merged image into an output image comprising the first region and the second region, based on the color channel and the transparency channel of the merged image.
In this example, the first region may contain the color information of the output image, the second region may contain the transparency information of the output image, and the output image is in the first format. The pixels of the color channel of the merged image may be mapped to the color information of the output image, and the pixels of the transparency channel of the merged image may be mapped to the transparency information of the output image. For example, if the pixel value of the merged image at a certain coordinate is [0,0,0,0] (the three RGB channels being 0 and the Alpha channel being 0), the RGB value at that coordinate may be mapped to the pixel value [0,0,0] at the corresponding coordinate in the first region of the output image, and the Alpha value at that coordinate may be mapped to the pixel value [0,0,0] at the corresponding coordinate in the second region of the output image. In this example, the position parameters of the first and second regions in the output image may be the same as those of the first and second regions of the first-format image in step S602. After the output image is obtained, it may be subjected to processing such as prediction, transform, quantization, entropy coding, inverse quantization, inverse transform, reconstruction, and filtering, thereby outputting a coded stream. In the method for processing a plurality of image sequences provided by this embodiment, the merged image is inversely converted, so that an output image with the same format as the input image is obtained, improving the degree of automation of the processing.
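An illustrative sketch of this inverse conversion, reusing the bilinear_resize sketch above; the function name and region parameters are assumptions:

```python
import numpy as np

def rgba_to_output_frame(rgba, frame_h, frame_w, first_rect, second_rect):
    # Write the RGB channels of the merged image into the first region and
    # its Alpha channel (as black-and-white data) into the second region
    # of an opaque, first-format output frame.
    frame = np.zeros((frame_h, frame_w, 3), dtype=np.uint8)

    x, y, w, h = first_rect
    frame[y:y + h, x:x + w] = bilinear_resize(rgba[..., :3], h, w)

    x, y, w, h = second_rect
    gray = rgba[..., 3:].repeat(3, axis=2)   # Alpha -> black/white RGB
    frame[y:y + h, x:x + w] = bilinear_resize(gray, h, w)
    return frame
```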
In some embodiments, the converting the merged image into the output image including the first region and the second region based on the color channel and the transparency channel of the merged image may include:
step S6062: and mapping the pixels of the color channel and the pixels of the transparency channel of the image obtained after the merging processing into a first image containing color information and a second image containing transparency information respectively.
In this example, the first image and the second image are both in the first format, and the first image and the second image have the same size. For example, if the pixel value of the merged image at (X=6, Y=6) is [0,0,0,0] (the three RGB channels being 0 and the Alpha channel being 0), the RGB value at that coordinate may be mapped to the pixel value [0,0,0] of the first image at (X=6, Y=6), and the Alpha value at that coordinate may be mapped to the pixel value [0,0,0] of the second image at (X=6, Y=6).
Step S6064: the first and second images are converted into first and second regions of an output image, respectively, based on position parameters associated with the positions of the first and second regions in the image.
In this example, the position parameters of the first and second regions in the output image may be the same as the position parameters associated with the positions of the first and second regions in the image (input image).
In other embodiments, step S6064 may include:
step S60642: and resampling the first image and/or the second image based on the position parameters associated with the positions of the first area and the second area in the image to obtain a first area image and a second area image.
In this example, the first region image may contain color information of the output image, and the second region image may contain transparency information of the output image. In some optional examples, the first image and/or the second image may be resampled according to an image scaling algorithm. In this example, the image scaling algorithm may include a linear interpolation algorithm, a quadratic cubic interpolation algorithm, a Lanczos algorithm, a new edge guided interpolation algorithm, and the like.
Step S60644: draw the first region image and the second region image onto the same canvas to obtain the output image, where the first region image corresponds to the first region and the second region image corresponds to the second region.
Fig. 6 illustrates one example of an output image according to an embodiment of the present disclosure. As shown in fig. 6, on the black canvas, the first region image 70 may be drawn to a designated region, and the second region image 80 may be drawn to a designated region.
In other embodiments, the first format in the above embodiments of the present disclosure is a red-green-blue (RGB) pixel format, and the second format is a red-green-blue-Alpha (RGBA) pixel format. In still other embodiments, the second region in the above-described embodiments is formed by mapping the transparency information of the image to the luminance channel (Y) in a luminance-chrominance (YUV) pixel format. Mapping the transparency information of the image to the luminance channel in YUV saves space when the video frame data is compressed and improves data compression efficiency.
To illustrate the embodiments, fig. 7 shows a flowchart of a method of processing a plurality of image sequences according to the embodiments. As shown in fig. 7, the images may be input in order from background to foreground; the image data in each region is then cropped out according to the position parameters of the first region and the second region, yielding an RGB region and an Alpha region. Next, the RGB region and the Alpha region are normalized by resampling, and the data of the Alpha region is mapped to an Alpha channel and combined with the RGB channels mapped from the data of the RGB region, yielding RGBA data. It is then judged whether the input image is the background layer: if it is, it is further judged whether it is the last layer; if it is not the background layer, it is merged with the previous RGBA result (i.e., the image merging in this disclosure). If the input image is the last layer, the merged RGBA data (i.e., the merged image in this disclosure) is obtained; otherwise, the RGBA data of the current layer is output and processing continues with the input image of the next layer (i.e., returning to the initial "input" step). The merged RGBA data is then taken as input and channel-separated to obtain an RGB-channel picture (i.e., the first image in this disclosure) and an Alpha-channel picture (i.e., the second image in this disclosure). The Alpha-channel picture is resampled, and the Alpha-channel data is mapped into black-and-white data. The canvas is then drawn in the specified format (which may be determined from the position parameters in this disclosure). Finally, the resulting output image is encoded and processing continues with the plurality of images of the next frame.
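Putting the sketches above together, a hypothetical driver for one timestamp of this pipeline might look as follows; all helper names are the illustrative ones defined earlier, not part of the disclosed embodiments:

```python
def process_frame(layer_frames, first_rect, second_rect, frame_h, frame_w):
    # layer_frames: the frames of all layers at one timestamp, background first.
    composite = None
    for frame in layer_frames:
        first, second = crop_regions(frame, first_rect, second_rect)
        # Normalize the second-region image to the size of the first.
        second = bilinear_resize(second, first.shape[0], first.shape[1])
        rgba = regions_to_rgba(first, second)
        # The background layer starts the composite; other layers merge onto it.
        composite = rgba if composite is None else merge_straight(rgba, composite)
    # Inverse conversion back to the two-region, first-format output frame.
    return rgba_to_output_frame(composite, frame_h, frame_w, first_rect, second_rect)
```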
According to the example of the method for processing a plurality of image sequences shown in fig. 7, pipeline processing can be implemented: intermediate files need not be stored locally, memory usage during processing is low, and the process is easy to migrate into an existing tool chain.
The method for processing a plurality of image sequences has the advantage of strong portability and can, for example, be implemented within the FFmpeg tool set.
Fig. 8 illustrates an exemplary block diagram of an apparatus that processes a plurality of image sequences according to an embodiment of the disclosure. As shown in fig. 8, the apparatus 800 includes: a processing module 801 configured to perform the following merging processing on a plurality of images with the same sequence number in the plurality of image sequences: decoding the images in the current image sequence to obtain a current image to be synthesized; combining the current image to be synthesized and the previous synthesized image to obtain a current synthesized image, wherein the first synthesized image is obtained by the following method: decoding the images in the first image sequence to obtain a first image to be synthesized; decoding the images in the second image sequence to obtain a second image to be synthesized; and combining the second image to be synthesized and the first image to be synthesized to obtain a first synthesized image.
According to the apparatus for processing a plurality of image sequences provided by this embodiment, multiple video layers can be composited into different videos through a user-defined compositing order, so that a different gift effect can be composited for each user.
The operations, features and advantages described above with respect to the method 200 are equally applicable to the apparatus 800 and the modules included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
In some examples, the image includes a first region and a second region, the first region contains color information of the image, the second region contains transparency information of the image, the image is in a first format having color channels, the processing module 801 further includes: a first conversion module configured to convert the image from the first format to a second format having the color channel and a transparency channel based on the color information and the transparency information of the image.
In some examples, the first conversion module includes: a segmentation module configured to segment the image into a first region image and a second region image based on a position parameter associated with positions of the first region and the second region in the image, wherein the first region image contains the color information of the image, the second region image contains the transparency information of the image, and the first region image and the second region image are both in the first format; a normalization module configured to normalize sizes of the first region image and the second region image; and a first mapping module configured to map color information included in the normalized first region image and transparency information included in the normalized second region image to pixels of the color channel and pixels of the transparency channel of the image to be synthesized, respectively.
In some examples, the plurality of image sequences includes an image sequence belonging to a background layer and an image sequence belonging to a foreground layer, and the current image to be synthesized is the foreground layer and the previous synthesized image is the background layer.
In some examples, the plurality of image sequences includes an image sequence belonging to a background layer and an image sequence belonging to a foreground layer, and the current image to be synthesized is the background layer and the previous synthesized image is the foreground layer.
In some examples, merging the current image to be synthesized and the previous synthesized image to obtain the current synthesized image includes merging them according to the following equations:

α_0 = α_a + α_b(1 - α_a), C_0 = (C_a·α_a + C_b·α_b(1 - α_a)) / α_0

where C_a is the pixel of the color channel of the current image to be synthesized, C_b is the pixel of the color channel of the previous synthesized image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous synthesized image, α_0 is the pixel of the transparency channel of the current synthesized image, and C_0 is the pixel of the color channel of the current synthesized image.
In some examples, the pixels of the color channels of the plurality of images are pre-multiplied pixels, and merging the current image to be synthesized and a previous synthesized image to obtain a current synthesized image includes:
merging the current image to be synthesized and the previous synthesized image according to the following equations:

α_0 = α_a + α_b(1 - α_a), c_0 = c_a + c_b(1 - α_a)

where c_a is the pixel of the color channel of the current image to be synthesized, c_b is the pixel of the color channel of the previous synthesized image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous synthesized image, α_0 is the pixel of the transparency channel of the current synthesized image, and c_0 is the pixel of the color channel of the current synthesized image.
In some examples, the normalization module is further configured to normalize the sizes of the first region image and the second region image according to an image scaling algorithm, wherein the image scaling algorithm comprises at least one of: a linear interpolation algorithm; a bicubic interpolation algorithm; the Lanczos algorithm; and a New Edge-Directed Interpolation algorithm.
In some examples, the apparatus 800 further comprises: a second conversion module, configured to convert the merged image into an output image including the first region and the second region based on the color channel and the transparency channel of the merged image, where the first region includes the color information of the output image, the second region includes the transparency information of the output image, and the output image is in the first format.
In some examples, the second conversion module includes: a second mapping module configured to map pixels of the color channel and pixels of the transparency channel of the merged image into a first image containing color information and a second image containing transparency information, respectively, wherein the first image and the second image are both in the first format; and a third conversion module configured to convert the first and second images into the first and second regions of the output image, respectively, based on position parameters associated with the positions of the first and second regions in the image.
In some examples, the third conversion module includes: a resampling module configured to resample the first image and/or the second image based on a position parameter associated with positions of the first region and the second region in the image to obtain a first region image and a second region image, wherein the first region image contains the color information of the output image, and the second region image contains the transparency information of the output image; and a drawing module configured to draw the first region image and the second region image onto a same canvas, resulting in the output image, wherein the first region image corresponds to the first region and the second region image corresponds to the second region.
In some examples, the first format is a red-green-blue (RGB) pixel format and the second format is a red-green-blue Alpha (RGBA) pixel format.
In some examples, the second region is formed by mapping transparency information of the image to a target channel in a luminance-chrominance (YUV) pixel format.
In some examples, the target channel is a luminance channel.
According to another aspect of the present disclosure, there is provided a computer program product comprising program code instructions which, when executed by a computer, cause the computer to perform a method according to the above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the above.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor, a memory in electronic communication with the processor; and instructions stored in the memory and executable by the processor to cause the electronic device to perform a method according to the above.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Referring to fig. 9, the electronic device 900, which may be a server or a client of the present disclosure, is an example of a hardware device to which aspects of the present disclosure may be applied. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein. As shown in fig. 9, the electronic device 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random-access memory (RAM) 903. The RAM 903 can also store the various programs and data required for the operation of the device 900. The computing unit 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904, to which an input/output (I/O) interface 905 is also connected. A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard or a mouse; an output unit 907 such as various types of displays and speakers; a storage unit 908 such as a magnetic disk or an optical disk; and a communication unit 909 such as a network card, a modem, or a wireless communication transceiver. The communication unit 909 allows the device 900 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 901 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 901 performs the methods and processes described above, for example the method of processing a plurality of image sequences. In some embodiments, the method of processing a plurality of image sequences may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the method of processing a plurality of image sequences described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured by any other suitable means (e.g., by means of firmware) to perform the method of processing a plurality of image sequences.
The various illustrative logics, logical blocks, modules, circuits, and algorithm processes described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Interchangeability of hardware and software has been described generally in terms of their functionality, and illustrated in the various illustrative components, blocks, modules, circuits, and processes described above. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single-or multi-chip processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some aspects, certain processes and methods may be performed by circuitry that is specific to a given function.
In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware (including the structures disclosed in this specification and their equivalents), or any combination thereof. Aspects of the subject matter described in this specification can also be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus.
If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The processes of a method or algorithm disclosed herein may be implemented in a software module executable by a processor, which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can transfer a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may be provided as one or any combination or set of codes and instructions on a machine-readable medium and a computer-readable medium, which may be incorporated into a computer program product.
All the embodiments in this disclosure are described in an interrelated manner; the same or similar parts among the embodiments can be cross-referenced, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus, device, computer-readable storage medium, and computer program product embodiments are described briefly because they are substantially similar to the method embodiments; where relevant, reference may be made to the corresponding descriptions of the method embodiments.

Claims (18)

1. A method of processing a plurality of image sequences, comprising:
performing the following merging processing on a plurality of images having the same sequence number in the plurality of image sequences:
decoding the images in the current image sequence to obtain a current image to be synthesized;
combining the current image to be synthesized and the previous composite image to obtain a current composite image,
wherein the first composite image is obtained by:
decoding the images in the first image sequence to obtain a first image to be synthesized;
decoding the images in the second image sequence to obtain a second image to be synthesized;
and combining the second image to be synthesized and the first image to be synthesized to obtain the first composite image.
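As an illustration only, and not part of the claims: the Python sketch below shows one way the claimed per-frame merging loop could look. The names merge_same_index and blend are hypothetical; blend stands for whichever compositing operator is used (for example, the equations of claims 6 and 7).

    def merge_same_index(frames, blend):
        # frames: decoded images sharing one sequence number, in layer order
        # (the first and second image sequences first);
        # blend(fg, bg): merges a foreground frame onto a background frame.
        composite = blend(frames[1], frames[0])   # the first composite image
        for current in frames[2:]:
            # each later frame is the "current image to be synthesized" and is
            # merged with the previous composite image
            composite = blend(current, composite)
        return composite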
2. The method of processing a plurality of image sequences according to claim 1, wherein the image comprises a first region containing color information of the image and a second region containing transparency information of the image, the image being in a first format having color channels,
decoding the image to obtain an image to be synthesized comprises:
converting the image from the first format to a second format having the color channel and a transparency channel based on the color information and the transparency information of the image.
3. The method of claim 2, wherein the converting the image from the first format to a second format having the color channel and a transparency channel based on the color information and the transparency information of the image comprises:
segmenting the image into a first region image and a second region image based on location parameters associated with locations of the first region and the second region in the image, wherein,
the first region image contains the color information of the image, the second region image contains the transparency information of the image, and both the first region image and the second region image are in the first format;
normalizing the sizes of the first region image and the second region image; and
respectively mapping the color information contained in the normalized first region image and the transparency information contained in the normalized second region image into pixels of the color channel and pixels of the transparency channel of the image to be synthesized.
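A minimal decode-side sketch of claim 3, assuming a top/bottom layout in which the color region occupies the upper part of the decoded RGB frame and a grey-coded transparency region the lower part; the layout, the split_row parameter, and the function name are illustrative assumptions, since the claim refers only to abstract location parameters.

    import cv2
    import numpy as np

    def rgb_frame_to_rgba(frame, split_row):
        # segment the decoded RGB frame into the two region images
        color = frame[:split_row]    # first region image: color information
        alpha = frame[split_row:]    # second region image: transparency information
        # normalize the transparency region to the color region's size
        h, w = color.shape[:2]
        alpha = cv2.resize(alpha, (w, h), interpolation=cv2.INTER_LINEAR)
        # map color into the RGB channels and the grey intensity into the
        # transparency channel of the image to be synthesized (RGBA)
        return np.dstack([color, alpha[..., 0]])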
4. The method according to claim 1 or 3, wherein the plurality of image sequences comprises an image sequence belonging to a background layer and an image sequence belonging to a foreground layer, and
the current image to be synthesized is a foreground-layer image, and the previous composite image is a background-layer image.
5. The method according to claim 1 or 3, wherein the plurality of image sequences comprises an image sequence belonging to a background layer and an image sequence belonging to a foreground layer, and
the current image to be synthesized is a background-layer image, and the previous composite image is a foreground-layer image.
6. The method according to claim 4, wherein the merging the current image to be synthesized and the previous composite image to obtain the current composite image comprises:
merging the current image to be synthesized and the previous composite image according to the following equations:
α_0 = α_a + α_b·(1 - α_a),
C_0 = (C_a·α_a + C_b·α_b·(1 - α_a)) / α_0
wherein C_a is the pixel of the color channel of the current image to be synthesized, C_b is the pixel of the color channel of the previous composite image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous composite image, α_0 is the pixel of the transparency channel of the current composite image, and C_0 is the pixel of the color channel of the current composite image.
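A NumPy sketch of these equations, assuming float RGBA arrays with values in [0, 1]; the small eps guard against division by a zero α_0 is an implementation detail added here, not part of the claim.

    import numpy as np

    def over_straight(fg, bg, eps=1e-6):
        # straight-alpha "over": fg composited onto bg
        a_a, a_b = fg[..., 3:4], bg[..., 3:4]
        a_0 = a_a + a_b * (1.0 - a_a)                                  # alpha_0
        c_0 = (fg[..., :3] * a_a + bg[..., :3] * a_b * (1.0 - a_a)) / np.maximum(a_0, eps)
        return np.concatenate([c_0, a_0], axis=-1)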
7. The method of claim 4, wherein the pixels of the color channels of the plurality of images are premultiplied pixels, and
the merging the current image to be synthesized and the previous composite image to obtain the current composite image comprises:
merging the current image to be synthesized and the previous composite image according to the following equations:
α_0 = α_a + α_b·(1 - α_a), c_0 = c_a + c_b·(1 - α_a)
wherein c_a is the pixel of the color channel of the current image to be synthesized, c_b is the pixel of the color channel of the previous composite image, α_a is the pixel of the transparency channel of the current image to be synthesized, α_b is the pixel of the transparency channel of the previous composite image, α_0 is the pixel of the transparency channel of the current composite image, and c_0 is the pixel of the color channel of the current composite image.
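With premultiplied color the same "over" operation collapses to a single multiply-add and needs no division, which is one reason premultiplication is often preferred. A sketch, again assuming float RGBA arrays in [0, 1]:

    import numpy as np

    def over_premultiplied(fg, bg):
        a_a = fg[..., 3:4]
        # one expression covers both equations: the color channels give
        # c_0 = c_a + c_b(1 - α_a) and the alpha channel gives
        # α_0 = α_a + α_b(1 - α_a)
        return fg + bg * (1.0 - a_a)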
8. The method of claim 3, wherein the normalizing the sizes of the first region image and the second region image comprises:
normalizing the sizes of the first region image and the second region image according to an image scaling algorithm, wherein the image scaling algorithm comprises at least one of:
a linear interpolation algorithm;
a bicubic interpolation algorithm;
a Lanczos algorithm; and
a new edge-directed interpolation (NEDI) algorithm.
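For three of the four listed algorithms, OpenCV ships ready-made interpolation flags (there is no built-in NEDI flag, so it is omitted here); the image and target size below are placeholders:

    import cv2
    import numpy as np

    img = np.zeros((120, 160, 3), dtype=np.uint8)   # any decoded region image
    size = (320, 240)                                # target (width, height)
    linear  = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
    cubic   = cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)
    lanczos = cv2.resize(img, size, interpolation=cv2.INTER_LANCZOS4)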
9. The method of claim 2, further comprising:
converting the merged image into an output image including the first region and the second region based on the color channel and the transparency channel of the merged image,
wherein the first region includes color information of the output image, the second region includes transparency information of the output image, and the output image is in the first format.
10. The method according to claim 9, wherein the converting the merged image into an output image including the first region and the second region based on the color channel and the transparency channel of the merged image comprises:
mapping the pixels of the color channel and the pixels of the transparency channel of the merged image into a first image containing color information and a second image containing transparency information, respectively, wherein the first image and the second image are both in the first format; and
converting the first and second images into the first and second regions of the output image, respectively, based on location parameters associated with locations of the first and second regions in the image.
11. The method of claim 10, wherein the converting the first and second images into the first and second regions of the output image, respectively, based on location parameters associated with locations of the first and second regions in the image comprises:
resampling the first image and/or the second image based on the location parameters associated with the positions of the first region and the second region in the image, to obtain a first region image and a second region image,
wherein the first region image contains the color information of the output image, and the second region image contains the transparency information of the output image; and
drawing the first region image and the second region image on the same canvas to obtain the output image, wherein the first region image corresponds to the first region and the second region image corresponds to the second region.
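An encode-side counterpart to the decode sketch under claim 3, again assuming a top/bottom layout (an assumption; the claim leaves the positions to the location parameters): both region images are drawn onto one RGB canvas.

    import numpy as np

    def draw_output(color_region, alpha_region):
        # both region images are assumed RGB and already resampled so their
        # widths match, per the location parameters
        h, w = color_region.shape[:2]
        canvas = np.zeros((h + alpha_region.shape[0], w, 3),
                          dtype=color_region.dtype)
        canvas[:h] = color_region    # first region: color information
        canvas[h:] = alpha_region    # second region: transparency information
        return canvas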
12. The method of claim 2, wherein the first format is a Red Green Blue (RGB) pixel format and the second format is a Red Green Blue Alpha (RGBA) pixel format.
13. The method of claim 2, wherein the second region is formed by mapping transparency information of the image to a target channel in a luminance-chrominance (YUV) pixel format.
14. The method of claim 13, wherein the target channel is a luminance channel.
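A sketch of claims 13 and 14: the 8-bit transparency plane is written into the luminance (Y) plane of a YUV 4:4:4 image. Holding the chroma planes at the neutral value 128 so the region decodes as grey is an assumption made here; the claims specify only the target channel.

    import numpy as np

    def alpha_to_yuv(alpha_plane):
        h, w = alpha_plane.shape
        yuv = np.empty((h, w, 3), dtype=np.uint8)
        yuv[..., 0] = alpha_plane    # Y carries the transparency information
        yuv[..., 1:] = 128           # U and V held at the neutral grey value
        return yuv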
15. An apparatus for processing a plurality of image sequences, comprising:
a processing module configured to perform the following merging processing on a plurality of images having the same sequence number in the plurality of image sequences:
decoding the images in the current image sequence to obtain a current image to be synthesized;
combining the current image to be synthesized and the previous composite image to obtain a current composite image,
wherein the first composite image is obtained by:
decoding the images in the first image sequence to obtain a first image to be synthesized;
decoding the images in the second image sequence to obtain a second image to be synthesized;
and combining the second image to be synthesized and the first image to be synthesized to obtain the first composite image.
16. A computer program product comprising program code instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 14.
17. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 14.
18. An electronic device, comprising:
a processor;
a memory in electronic communication with the processor; and
instructions stored in the memory and executable by the processor to cause the electronic device to perform the method of any one of claims 1 to 14.
CN202210803798.8A 2022-07-07 2022-07-07 Method, apparatus, medium, and device for processing a plurality of image sequences Pending CN115187763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210803798.8A CN115187763A (en) 2022-07-07 2022-07-07 Method, apparatus, medium, and device for processing a plurality of image sequences

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210803798.8A CN115187763A (en) 2022-07-07 2022-07-07 Method, apparatus, medium, and device for processing a plurality of image sequences

Publications (1)

Publication Number Publication Date
CN115187763A true CN115187763A (en) 2022-10-14

Family

ID=83517087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210803798.8A Pending CN115187763A (en) 2022-07-07 2022-07-07 Method, apparatus, medium, and device for processing a plurality of image sequences

Country Status (1)

Country Link
CN (1) CN115187763A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination