CN112862905B - Image processing method, device, storage medium and computer equipment - Google Patents


Info

Publication number
CN112862905B
Authority
CN
China
Prior art keywords
target
vector
color attribute
pixel point
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911180391.9A
Other languages
Chinese (zh)
Other versions
CN112862905A (en)
Inventor
Zhou Lei (周雷)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN201911180391.9A priority Critical patent/CN112862905B/en
Publication of CN112862905A publication Critical patent/CN112862905A/en
Application granted granted Critical
Publication of CN112862905B publication Critical patent/CN112862905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, a device, a storage medium and computer equipment. After an image is acquired, color attribute reconstruction is performed on a preset number of target pixel points in the image. The reconstruction is computed from the initial color attribute values of each target pixel point and of the related pixel points adjacent to it, and the new color attribute value obtained by the reconstruction then replaces the color attribute value of the target pixel point. Because the reconstruction combines the color attribute values of the related pixel points around each target pixel point, it helps to improve the contrast of the image. In addition, the method is implemented on a computer digital signal processor and can process the preset number of target pixel points simultaneously, so it is both fast and low in power consumption.

Description

Image processing method, device, storage medium and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, a storage medium, and a computer device.
Background
The visual effect of an image is related to various image attributes, among which contrast is an important parameter. Contrast is a measure of the difference in brightness between the brightest white and the darkest black in the light and dark areas of an image. In general, the larger the contrast, the clearer and more striking the image and the more vivid its colors; when the contrast is small, the whole picture appears gray. High contrast contributes to the sharpness, detail rendition and gray-level rendition of an image.
In real life, the images people capture often suffer from low contrast, which degrades their visual effect.
Disclosure of Invention
In view of the problems of the related art, it is desirable to provide an image processing method, apparatus, storage medium and computer device that help improve image contrast.
An image processing method applied to a computer digital signal processor, the image processing method comprising:
acquiring a target image to be processed;
selecting a target vector from the target image, wherein the target vector comprises a preset number of target pixel points;
determining relevant pixel points corresponding to all target pixel points in the target vector, wherein the relevant pixel points are neighborhood pixel points of the target pixel points;
performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point;
and after obtaining new color attribute values of all pixel points in the target image, replacing corresponding initial color attribute values by using the new color attribute values to obtain the processed image.
An image processing apparatus comprising:
the target image acquisition module is used for acquiring a target image to be processed;
the target vector selection module is used for selecting a target vector from the target image, wherein the target vector comprises a preset number of target pixel points;
the related pixel point determining module is used for determining related pixel points corresponding to all target pixel points in the target vector, wherein the related pixel points are neighborhood pixel points of the target pixel points;
the color attribute reconstruction module is used for carrying out color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point;
and the color attribute replacement module is used for replacing the corresponding initial color attribute value by using the new color attribute value after obtaining the new color attribute values of all the pixel points in the target image, so as to obtain the processed image.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
After the image is acquired, color attribute reconstruction is performed on a preset number of target pixel points in the image. The reconstruction is computed from the initial color attribute values of each target pixel point and of the related pixel points adjacent to it, and the new color attribute value obtained by the reconstruction then replaces the color attribute value of the target pixel point. Because the reconstruction combines the color attribute values of the related pixel points around each target pixel point, the contrast of the image can be improved. In addition, the method is implemented on a computer digital signal processor and can process the preset number of target pixel points simultaneously, so it is both fast and low in power consumption.
Drawings
FIG. 1 is a flow chart of an image processing method in one embodiment;
FIG. 2 is a schematic diagram illustrating a partial pixel arrangement in a target image according to an embodiment;
FIG. 3 is a schematic flow chart of reconstructing color attributes of each target pixel point in a target vector according to an initial color attribute value of the target pixel point and an initial color attribute value of a corresponding related pixel point to obtain new color attribute values of each target pixel point in an embodiment;
FIG. 4 is a schematic diagram of an image processing apparatus according to an embodiment;
FIG. 5 is a flow chart of a video processing method in one embodiment;
FIG. 6 is a schematic flow chart of performing color enhancement processing on a second YUV image to obtain a third YUV image in a linear format in one embodiment;
FIG. 7 is a flow chart of a method including each buffer space in one embodiment;
FIG. 8 is a schematic diagram of a video processing apparatus according to an embodiment;
fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application provides an image processing method that is mainly used to compute the Laplacian operator of an image. In the prior art, the Laplacian is usually computed on a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). However, the CPU can only process a single target object (pixel point) at a time, which makes it slow, while the GPU can process multiple target objects simultaneously but consumes a large amount of power; neither implementation is well suited to mobile terminals such as users' mobile phones. The image processing method of the application computes the Laplacian operator on a computer digital signal processor (Computer Digital Signal Processor, CDSP), which is fast and low in power consumption, and can be used to process images or videos on mobile terminals such as mobile phones.
In one embodiment, as shown in fig. 1, there is provided an image processing method applied to a computer digital signal processor (hereinafter referred to as CDSP), the image processing method including the steps of:
step S110, acquiring a target image to be processed.
The target image acquired by the CDSP may be a single image or one frame of a video; that is, the image processing method of the present application may process either a single image or a video, which is not limited herein.
Step S120, selecting a target vector from the target image.
The target vector includes a preset number of target pixel points. The CDSP can process multiple target objects at the same time; in this embodiment the target objects are pixel points, and the target pixel points are the pixel points to be processed. Specifically, the CDSP selects the pixel points to be processed by taking a target vector, which is composed of a preset number of consecutive pixel points; the preset number may be determined according to the working characteristics of the CDSP itself. After the target vector is selected, each pixel point in the target vector is taken as a target pixel point.
Step S130, determining relevant pixel points corresponding to each target pixel point in the target vector.
The related pixel points are neighborhood pixel points of the target pixel point. A neighborhood pixel point can be understood as one of the other pixel points in the three-by-three grid whose central pixel point is the target pixel point. The related pixel points may specifically be the 4-neighborhood or 8-neighborhood pixel points of the target pixel point: the 4-neighborhood pixel points are those in the four directions up, down, left and right of the grid, while the 8-neighborhood pixel points are those in all eight directions up, down, left, right, upper-left, upper-right, lower-left and lower-right.
As shown in fig. 2, taking the preset number 3 as an example, the CDSP may select the vector composed of the pixel points D4, D5 and D6 as the target vector. Taking the target pixel point D4 as an example when determining the related pixel points, the 4-neighborhood pixel points of D4, namely C4, D3, D5 and E4, may be taken as the related pixel points of D4; alternatively, the 8-neighborhood pixel points of D4, namely C3, C4, C5, D3, D5, E3, E4 and E5, may be taken as the related pixel points of D4. It is understood that 3 is only an example of a preset number; in actual processing it may be set according to the actual situation.
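As a concrete (non-patent) sketch, the 4- and 8-neighborhood positions of a pixel can be enumerated from coordinate offsets; the function name and interface here are illustrative only:

```python
# Illustrative sketch (names are not from the patent): enumerate the
# 4-neighborhood or 8-neighborhood coordinates of the pixel at (row, col).
def neighborhood(row, col, eight=False):
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]               # up, down, left, right
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # plus the diagonals
    return [(row + dr, col + dc) for dr, dc in (offsets8 if eight else offsets4)]
```

For the pixel D4 at grid position (3, 3), `neighborhood(3, 3)` yields the coordinates of C4, E4, D3 and D5.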
Step S140, performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point, and obtaining a new color attribute value of each target pixel point.
The color attribute value refers to an attribute value of a pixel under different color coding formats, and the color coding formats may specifically be RGB (R represents a red channel, G represents a green channel, and B represents a blue channel), HSV (H represents a hue, S represents saturation, V represents brightness), YUV (Y represents brightness, and U and V represent chromaticity), and the like. After determining the target pixel point and the corresponding related pixel points, the CDSP performs Laplacian calculation according to the initial color attribute values of the pixel points, and obtains a new color attribute value of the target pixel point according to a calculation result.
Step S150, after obtaining new color attribute values of all pixel points in the target image, replacing corresponding initial color attribute values with the new color attribute values to obtain the processed image.
The CDSP performs Laplacian operator calculation on all the pixel points in the target image through the processing flow from step S120 to step S140, obtains new color attribute values of all the pixel points in the target image according to the calculation result, and then performs replacement of the color attribute values on all the pixel points to obtain the processed image.
It should be noted that, after obtaining the new color attribute values of some of the pixel points, the CDSP does not immediately replace the color attribute values of those pixel points, but continues to select target pixel points for processing until the new color attribute values of all pixel points have been obtained. This is because the CDSP processes multiple target pixel points in parallel. Taking the target vector in fig. 2 as an example, the target pixel points D5 and D6 are processed at the same time as D4, and processing D4 requires the initial color attribute value of D5; therefore the Laplacian operator calculation is arranged so that it always uses the initial color attribute values of all pixel points.
The embodiment provides an image processing method, after an image is acquired, color attribute reconstruction processing is performed on a preset number of target pixel points in the image, the processing is implemented according to initial color attribute values of the target pixel points and related pixel points close to the target pixel points, then a new color attribute value obtained by the reconstruction processing is taken as a color attribute value of the target pixel points, and the processing combines the color attribute values of the related pixel points around the target pixel points, so that the contrast of the image can be improved; in addition, the processing method is realized in a computer digital signal processor, and can process the target pixel points with preset quantity at the same time, so that the method has the characteristics of high speed and low power consumption.
In one embodiment, selecting the target vector from the target image comprises: taking the first pixel point in the target image as the starting pixel point, selecting a preset number of pixel points as target pixel points along the pixel arrangement direction to obtain the target vector.
The pixel arrangement direction refers to the horizontal direction in the image. Specifically, taking fig. 2 as an example, starting from the first pixel point A1 in the target image, a preset number of pixel points are taken as target pixel points to form the target vector; for example, when the preset number is 3, the target pixel points A1, A2 and A3 may be selected to form the target vector. The selection of the target vector may be implemented with the HVX_Vector instruction.
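Outside the CDSP, the aligned selection of consecutive fixed-length vectors can be sketched in plain code; this is a simplified stand-in for the HVX_Vector mechanism, and the helper name is hypothetical:

```python
# Simplified stand-in for vector selection: split a row-major pixel list
# into consecutive vectors of length n (the last one may be shorter).
def take_vectors(pixels, n):
    return [pixels[i:i + n] for i in range(0, len(pixels), n)]
```

With a preset number of 3, the first vector holds A1, A2 and A3, the next holds A4, A5 and A6, and so on along the arrangement direction.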
In one embodiment, selecting the target vector from the target image comprises: after processing of the current target vector is completed, using an aligned point-taking manner, taking the pixel point immediately after the last pixel point of the current target vector as the starting pixel point, and selecting a preset number of pixel points as target pixel points along the pixel arrangement direction to obtain the next target vector to be processed.
Specifically, taking fig. 2 as an example, the current target vector is composed of the target pixel points D4, D5 and D6; when the next target vector to be processed is selected, the aligned point-taking manner selects D7, D8 and D9 as the new target pixel points to form it.
In one embodiment, when the relevant pixel is a 4-neighborhood pixel of the target pixel, step S130 determines relevant pixels corresponding to each target pixel in the target vector, including steps 132A to 136A.
Step 132A: according to the pixel positions of the target vector, in an unaligned point-taking manner, take the vector obtained by shifting the target vector forward by one pixel point as the first correlation vector, and the vector obtained by shifting it backward by one pixel point as the second correlation vector;
Step 134A: according to the pixel positions of the target vector, determine the vector obtained by shifting the target vector forward by a first number of pixel points as the third correlation vector, and the vector obtained by shifting it backward by the first number of pixel points as the fourth correlation vector, where the first number is the pixel width of the target image;
Step 136A: from the first, second, third and fourth correlation vectors, select the 4-neighborhood pixel points of each target pixel point as the related pixel points corresponding to that target pixel point.
Because the CDSP selects multiple pixel points at a time, when determining the related pixel points of a target pixel point it does not select single pixel points directly; instead it determines the correlation vectors of the target vector in an unaligned point-taking manner and then selects the related pixel points corresponding to each target pixel point from those correlation vectors. Optionally, the correlation vectors may be cached in a buffer space and read from it when the related pixel points need to be selected.
Specifically, taking fig. 2 as an example, the target vector includes the target pixel points D4, D5 and D6; the first correlation vector includes the pixel points D3, D4 and D5, the second includes D5, D6 and D7, the third includes C4, C5 and C6, and the fourth includes E4, E5 and E6. The forward and backward offsets can be implemented with the vmemu instruction, and the upward and downward offsets can be implemented with a stride. When the related pixel points of the target pixel point D4 are selected, the pixel point D3 in the first correlation vector, D5 in the second, C4 in the third and E4 in the fourth are determined as the related pixel points of D4.
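On a row-major (flattened) image, the four correlation vectors of steps 132A to 134A are simply slices of the pixel array offset by one pixel and by the image width; the sketch below uses hypothetical names and assumes the target vector is not on the image boundary:

```python
# Sketch of steps 132A-134A on a flat, row-major pixel list: the target
# vector starts at index `start` and has length n; `width` is the pixel
# width of the image.
def correlation_vectors_4(flat, width, start, n):
    first  = flat[start - 1     : start - 1 + n]      # shifted forward one pixel
    second = flat[start + 1     : start + 1 + n]      # shifted backward one pixel
    third  = flat[start - width : start - width + n]  # row above (forward by width)
    fourth = flat[start + width : start + width + n]  # row below (backward by width)
    return first, second, third, fourth
```

With a 9-pixel-wide grid labelled A1 to E9 as in fig. 2 and the target vector D4 to D6, the four slices reproduce D3-D5, D5-D7, C4-C6 and E4-E6.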
In one embodiment, when the relevant pixel is an 8-neighborhood pixel of the target pixel, step S130 determines relevant pixels corresponding to each target pixel in the target vector, including steps 132B to 136B.
Step 132B: according to the pixel positions of the target vector, in an unaligned point-taking manner, take the vector obtained by shifting the target vector forward by one pixel point as the fifth correlation vector, and the vector obtained by shifting it backward by one pixel point as the sixth correlation vector;
Step 134B: according to the pixel positions of the target vector, determine the vector obtained by shifting the target vector forward by a second number of pixel points as the seventh correlation vector, the vector obtained by shifting it forward by a third number of pixel points as the eighth correlation vector, the vector obtained by shifting it backward by the third number of pixel points as the ninth correlation vector, and the vector obtained by shifting it backward by the second number of pixel points as the tenth correlation vector, where the second number is the pixel width of the target image plus one and the third number is the pixel width of the target image minus one;
Step 136B: from the fifth, sixth, seventh, eighth, ninth and tenth correlation vectors, select the 8-neighborhood pixel points of each target pixel point as the related pixel points corresponding to that target pixel point.
Specifically, taking fig. 2 as an example, the target vector includes target pixels D4, D5, and D6, the fifth correlation vector includes pixels D3, D4, and D5, the sixth correlation vector includes pixels D5, D6, and D7, the seventh correlation vector includes pixels C3, C4, and C5, the eighth correlation vector includes pixels C5, C6, and C7, the ninth correlation vector includes pixels E3, E4, and E5, and the tenth correlation vector includes pixels E5, E6, and E7. When the relevant pixel point of the target pixel point D4 is selected, the pixel point D3 in the fifth relevant vector, the pixel point D5 in the sixth relevant vector, the pixel points C3 and C4 in the seventh relevant vector, the pixel point C5 in the eighth relevant vector, the pixel points E3 and E4 in the ninth relevant vector, and the pixel point E5 in the tenth relevant vector are determined as relevant pixel points of the target pixel point D4.
In one embodiment, when determining the related pixel points of a target pixel point, if the target pixel point is a boundary pixel point of the target image, any empty neighborhood position is filled with the target pixel point itself before the related pixel points are taken.
Specifically, taking fig. 2 as an example, when the target pixel point is A2, the position above A2 is empty among its 4-neighborhood positions; that position is filled with A2 itself, and the related pixel points of the target pixel point A2 are then selected.
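The boundary-filling rule can be sketched as follows (hypothetical helper; positions outside the image take the target pixel's own value, as described for A2):

```python
# Sketch of the boundary rule: if a neighborhood position (nr, nc) falls
# outside the image, reuse the target pixel's own value instead.
def neighbor_value(img, row, col, nr, nc):
    h, w = len(img), len(img[0])
    if 0 <= nr < h and 0 <= nc < w:
        return img[nr][nc]
    return img[row][col]   # empty position filled with the target pixel
```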
In one embodiment, as shown in fig. 3, step S140 performs color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point, to obtain a new color attribute value of each target pixel point, including steps S141 to S149.
Step S141, caching initial color attribute values of all pixel points in the target image into a first cache space;
step S143, reading the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding relevant pixel point from the first buffer space;
step S145, calculating the sum of the initial color attribute values of the related pixel points to obtain a first calculation result;
Step S146, calculating the product of the initial color attribute value of the target pixel point and the number of the related pixel points to obtain a second calculation result;
in step S149, a difference operation is performed on the first calculation result and the second calculation result, and a new color attribute value of the target pixel is obtained according to the difference operation result.
Specifically, taking the related pixel points as 4-neighborhood pixel points as an example, the Laplacian operator can be calculated by the following formula when performing color attribute reconstruction:

∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4·f(x, y)

where ∇²f(x, y) is the calculation result of the Laplacian of the pixel point with coordinates (x, y), x is the abscissa of the pixel point, y is its ordinate, and f(x, y) is the initial color attribute value of the pixel point at (x, y).
After the laplace operator is obtained through calculation, a new color attribute value can be further obtained according to the laplace operator and a calculation formula corresponding to each color attribute value.
For example, taking the color brightness (V) as an example, the new color brightness can be obtained by the following formula:

V' = Y, where Y = V − ∇²f(x, y)

where V' is the new color brightness and Y is the difference between the initial color brightness and the Laplacian.
Taking the color saturation (S) as an example, the new color saturation is likewise obtained from a corresponding calculation formula, where S' is the new color saturation, H is the hue in HSV, and S is the initial color saturation.
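Putting steps S145 to S149 together, a plain-code sketch of the 4-neighborhood reconstruction might look as follows. The Laplacian follows the steps above exactly; the brightness update V' = V − ∇²f is an assumption consistent with the description of Y (the saturation formula is not detailed here), and the clipping to 0-255 is added only for illustration:

```python
# Steps S145-S149 for one pixel: (sum of 4-neighborhood values) minus
# (initial value times the number of related pixel points). Out-of-range
# neighbors reuse the center value, per the boundary-filling embodiment.
def laplacian_4(img, r, c):
    h, w = len(img), len(img[0])
    center = img[r][c]
    total = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        total += img[nr][nc] if 0 <= nr < h and 0 <= nc < w else center
    return total - 4 * center        # first result minus second result

# Assumed brightness update V' = V - laplacian (consistent with Y being the
# difference between the initial brightness and the Laplacian), clipped to 8 bits.
def reconstruct_brightness(img):
    h, w = len(img), len(img[0])
    return [[max(0, min(255, img[r][c] - laplacian_4(img, r, c)))
             for c in range(w)] for r in range(h)]
```

A bright pixel surrounded by darker neighbors gets a negative Laplacian, so subtracting it raises the pixel's brightness and increases local contrast.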
In one embodiment, after obtaining the new color attribute value of each target pixel point, the method further includes: and caching the new color attribute values of the target pixel points corresponding to the initial color attribute values into a second cache space according to the arrangement sequence of the initial color attribute values in the first cache space.
In this embodiment, caching the new color attribute values in the second cache space preserves the calculation results without interfering with the parallel processing of the target pixel points (which must be calculated from the initial color attribute values). In addition, caching them in order guarantees the correspondence between each new color attribute value and its initial color attribute value when the values are replaced, avoiding mismatched replacements.
In one embodiment, after obtaining new color attribute values of all pixel points in the target image, replacing corresponding initial color attribute values with the new color attribute values to obtain a processed image, including: sequentially reading new color attribute values of the target pixel points from the second buffer space, and replacing the initial color attribute values of the target pixel points; and when the initial color attribute values of all the target pixel points are replaced, obtaining a processed image.
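The two-buffer replacement scheme can be sketched generically (hypothetical names; `new_value` stands for the reconstruction of steps S141 to S149):

```python
# All new values are written to a second buffer so that every computation
# reads only initial values; the image is replaced once the buffer is full.
def reconstruct_all(img, new_value):
    h, w = len(img), len(img[0])
    second_buffer = [[new_value(img, r, c) for c in range(w)] for r in range(h)]
    return second_buffer   # replaces the target image only at the end
```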
In one embodiment, as shown in fig. 4, there is provided an image processing apparatus including the following structure:
a target image acquisition module 110, configured to acquire a target image to be processed;
the target vector selection module 120 is configured to select a target vector from the target images, where the target vector includes a preset number of target pixel points;
the related pixel point determining module 130 is configured to determine related pixel points corresponding to each target pixel point in the target vector, where the related pixel points are neighboring pixel points of the target pixel point;
the color attribute reconstruction module 140 is configured to reconstruct a color attribute of each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point, so as to obtain a new color attribute value of each target pixel point;
and the color attribute replacing module 150 is configured to replace the corresponding initial color attribute value with the new color attribute value after obtaining the new color attribute values of all the pixel points in the target image, so as to obtain the processed image.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, and no further description is given here. The respective modules in the above-described image processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, as shown in fig. 5, a video processing method is provided, and the video processing method can be applied to a mobile terminal such as a mobile phone, and the video processing method includes the following steps:
step S210, obtaining an original video to be processed.
The original video consists of first YUV images in a universal bandwidth compression (UBWC) format; that is, each original frame of the video can be regarded as a first YUV image. YUV is a color encoding format commonly used in video processing components; Y′UV, YUV, YCbCr and YPbPr are related terms for this family of color spaces and are often used interchangeably. "Y" represents luminance (Luminance or Luma), i.e. the gray-scale value, while "U" and "V" represent chrominance (Chroma), describing the color and saturation of a given pixel. Taking human perception into account, YUV allows the bandwidth of the chrominance channels to be reduced when encoding video.
Step S220, converting the first YUV image from the general bandwidth compression format to the linear format, and obtaining a second YUV image in the linear format.
On the Qualcomm 845 platform, the video data are YUV images in a bandwidth compression format, which saves bandwidth; however, YUV images in this compressed format cannot be processed directly by an image algorithm. Therefore each first YUV image in the original video first undergoes compression-format conversion, i.e. from the bandwidth compression format to a linear format, yielding a second YUV image in the Linear format. The linear format can be understood as an ordinary linear encoding format.
It can be understood that the processing in this step is performed for each frame of image in the original video, that is, the first YUV image of each frame is subjected to compression format conversion, so as to obtain a corresponding second YUV image.
Step S230, performing color enhancement processing on the second YUV image to obtain a third YUV image in a linear format.
After the compression format conversion, image processing, specifically color enhancement processing, can be performed directly on the second YUV image in the linear format to improve the visual effect of the image, and a processed third YUV image is obtained. Since color enhancement does not change the compression format of the original image, the third YUV image is still in the linear format.
Step S240, converting the third YUV image from the linear format to the general bandwidth compression format, and obtaining a fourth YUV image in the general bandwidth compression format.
After the processed third YUV image in the linear format is obtained, the third YUV image is converted back from the linear format to the bandwidth compression format to keep the platform video format consistent, and a fourth YUV image in the bandwidth compression format is obtained.
Step S250, replacing the corresponding first YUV image in the original video with the fourth YUV image to obtain the processed video.
Each first YUV image in the original video is processed according to the flow of steps S220 to S240. After the corresponding fourth YUV image is obtained, it is used to replace that first YUV image; when all first YUV images have been replaced, a processed video corresponding to the original video and having a better visual effect is obtained.
In addition, the conversion of an image from the general bandwidth compression format to the linear format, or vice versa, may be implemented by calling an associated API (Application Programming Interface), for example a UBWCDMA interface, and is not limited thereto.
This embodiment provides a video processing method. After an original video in the general bandwidth compression format is obtained, compression format conversion is performed on the images in the original video so that the converted images can be processed directly, which facilitates color enhancement processing. Compression format conversion is then performed again so that the processed images are back in the general bandwidth compression format, which facilitates video playback. By performing color enhancement processing on the images in the video, the visual effect of the video is improved.
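The per-frame flow of steps S220 to S240 can be sketched as follows; the three helper functions are hypothetical placeholders (the real conversions would call a platform API, and the enhancement is detailed in later embodiments):

```python
# Hedged sketch of steps S220-S240 for one frame; the three helpers are
# stand-ins for the platform format converters and the color enhancement.
def ubwc_to_linear(frame):       # step S220: UBWC -> linear (placeholder)
    return dict(frame, fmt="linear")

def enhance_color(frame):        # step S230: color enhancement (placeholder)
    return dict(frame, enhanced=True)

def linear_to_ubwc(frame):       # step S240: linear -> UBWC (placeholder)
    return dict(frame, fmt="ubwc")

def process_frame(first_yuv):
    second_yuv = ubwc_to_linear(first_yuv)
    third_yuv = enhance_color(second_yuv)
    fourth_yuv = linear_to_ubwc(third_yuv)
    return fourth_yuv            # replaces the first YUV image (step S250)

out = process_frame({"fmt": "ubwc", "enhanced": False})
```

The round trip leaves the frame in the same compression format it started in, which is what keeps the platform video format consistent.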
In one embodiment, converting the first YUV image from the universal bandwidth compressed format to the linear format includes: and converting the first YUV images from the general bandwidth compression format to the linear format one by one according to the image frame sequence of the first YUV images in the original video.
Specifically, when compression format conversion is performed on the first YUV images in the original video, the conversion may proceed frame by frame, that is, in the order of the first frame, the second frame, ..., the i-th frame, the (i+1)-th frame, ..., up to the last frame. This processing order is consistent with the display order of the frames during video playback, so the video can be played directly after the subsequent processing.
In one embodiment, converting the first YUV image from the universal bandwidth compressed format to the linear format includes: simultaneously converting multiple frames of first YUV images from the general bandwidth compression format to the linear format according to the image frame sequence of the first YUV images in the original video.
Specifically, when compression format conversion is performed on the first YUV images in the original video, multiple frames may be converted simultaneously; that is, the first pass processes the first frame through the i-th frame at the same time, the second pass processes the (i+1)-th frame through the 2i-th frame, and so on. By processing multiple frames simultaneously, the conversion runs in parallel, which effectively improves the image processing efficiency.
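The batched, parallel conversion described above can be sketched with a thread pool; the batch size i and the per-frame conversion stub are hypothetical:

```python
# Hedged sketch: convert frames i at a time, each batch in parallel.
from concurrent.futures import ThreadPoolExecutor

def convert_frame(frame):
    # Stand-in for the real UBWC -> linear conversion of one frame.
    return ("linear", frame)

def convert_in_batches(frames, batch_size):
    converted = []
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        for start in range(0, len(frames), batch_size):
            batch = frames[start:start + batch_size]
            converted.extend(pool.map(convert_frame, batch))  # keeps order
    return converted

result = convert_in_batches(list(range(10)), batch_size=4)
```

`pool.map` preserves the input order, so the converted frames remain in display order even though each batch is processed concurrently.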
In one embodiment, after obtaining the second YUV image in linear format, further comprising: and buffering the second YUV image to the first buffering space.
After the first YUV images are converted frame by frame, or several frames at a time, to obtain second YUV images in the linear format, each second YUV image is cached in a first cache space. During subsequent processing, the image to be processed can be read from the first cache space. Caching the image serves as temporary storage; moreover, compared with storing it in memory, the cache is faster to access, which improves the image processing efficiency.
In one embodiment, as shown in fig. 6, step S230 performs color enhancement processing on the second YUV image to obtain a third YUV image in a linear format, including steps S231 to S239.
Step S231, reading a second YUV image from the first buffer space, converting the second YUV image from YUV color coding format to RGB color coding format, obtaining RGB image, and buffering the RGB image to the second buffer space;
Step S233, reading an RGB image from the second cache space, converting the RGB image from an RGB color coding format to an HSV color coding format, obtaining an HSV image, and caching the HSV image in a third cache space;
Step S235, reading an HSV image from the third cache space, performing color enhancement processing on the HSV image to obtain an enhanced HSV image, and replacing the HSV image in the third cache space with the enhanced HSV image;
Step S237, reading the enhanced HSV image from the third buffer space, converting the enhanced HSV image from the HSV color coding format to the RGB color coding format, obtaining an enhanced RGB image, and replacing the RGB image in the second buffer space with the enhanced RGB image;
Step S239, reading the enhanced RGB image from the second buffer space, converting the enhanced RGB image from the RGB color coding format to the YUV color coding format, obtaining a third YUV image, and replacing the second YUV image in the first buffer space with the third YUV image.
Specifically, as shown in fig. 7, three different cache spaces are mainly used in this embodiment: the first cache space is mainly used for caching the second YUV image and the third YUV image, the second cache space for the RGB image and the enhanced RGB image, and the third cache space for the HSV image and the enhanced HSV image. In addition, when performing color coding format conversion on an image, any conversion method in the related art may be used, which is not limited here.
It can be appreciated that in practical applications, a different number of cache spaces may be used to cache the images; for example, six different cache spaces may be used to cache the six different images respectively, so that the image data originally stored in a cache space does not need to be replaced.
In this embodiment, when image processing is performed, the image is buffered, so that temporary storage can be performed, and in addition, compared with a memory storage mode, the buffering speed is faster, so that the image processing efficiency can be improved.
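For a single pixel, the color-space round trip of steps S231 to S239 (the YUV conversion aside) can be illustrated with the standard library's `colorsys` conversions; the saturation boost here is a hypothetical stand-in for the patent's enhancement step:

```python
# Hedged sketch of the RGB -> HSV -> enhance -> RGB portion of the chain
# for one pixel; boosting saturation stands in for the enhancement step.
import colorsys

def enhance_pixel(r, g, b, sat_gain=1.2):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # RGB -> HSV
    s = min(1.0, s * sat_gain)                                # enhance saturation
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)                 # HSV -> RGB
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# A dull red becomes a more saturated red; hue and brightness are kept.
enhanced = enhance_pixel(200, 100, 100)
```

Working in HSV makes the enhancement a simple per-channel adjustment of S and V, which is why the chain converts through HSV rather than enhancing RGB directly.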
In one embodiment, the performing color enhancement processing on the HSV image in step S235 includes: adjusting the saturation and brightness of the HSV image by using a color adjustment curve formula corresponding to the HSV image.
Specifically, a color adjustment curve formula is a formula used to apply color adjustment (or color cast adjustment) to an image; the specific form of the formula differs for different processing procedures and is not limited here. Adjusting the saturation and brightness of the HSV image with a color adjustment curve formula corresponding to the HSV image achieves the effect of color enhancement and improves the visual effect.
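The patent does not give the curve itself, so the following is only a hypothetical example of such a formula: a gamma-style curve on a normalized saturation or brightness channel that fixes the endpoints 0 and 1 while lifting mid-range values:

```python
# Hypothetical color adjustment curve (not from the patent): a gamma
# curve with gamma < 1 lifts mid-range values and keeps 0 and 1 fixed.
def adjust_curve(x, gamma=0.8):
    # x is a saturation or brightness value normalized to [0, 1].
    return x ** gamma

lo, mid, hi = adjust_curve(0.0), adjust_curve(0.5), adjust_curve(1.0)
```

Applying such a curve to the S and V channels raises the perceived colorfulness and brightness of mid-tones without clipping the extremes.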
In one embodiment, the performing color enhancement processing on the HSV image in step S235 includes: determining a new color attribute value of each target pixel point in the HSV image according to the initial color attribute value of each target pixel point and the initial color attribute values of the neighborhood pixel points corresponding to each target pixel point, where the color attribute values include saturation and brightness; and replacing the initial color attribute value of the target pixel point with the new color attribute value.
The method for determining the new color attribute value of the target pixel point according to the initial color attribute value of each target pixel point in the HSV image and the initial color attribute value of the neighborhood pixel point corresponding to each target pixel point comprises the following steps: caching initial color attribute values of all pixel points in the HSV image into a fourth cache space; reading an initial color attribute value of the target pixel point and an initial color attribute value of a corresponding neighborhood pixel point from a fourth cache space; calculating the sum of the initial color attribute values of the neighborhood pixel points to obtain a first calculation result; calculating the product of the initial color attribute value of the target pixel point and the number of the neighborhood pixel points to obtain a second calculation result; and performing a difference operation on the first calculation result and the second calculation result, and obtaining a new color attribute value of the target pixel point according to the difference operation result.
In this embodiment, the color enhancement processing of the HSV image adopts the image processing method described in the previous embodiments, and the processing here can be regarded as the computation of the Laplace operator for the pixel points of the image described earlier. For the limitations of the method in this embodiment, reference is therefore made to the limitations of the image processing method above, and details are not repeated here.
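The first and second calculation results and their difference described above amount, for a 4-neighborhood, to a discrete Laplacian. A minimal sketch for one target pixel follows; how the difference result is then mapped to the new attribute value is left open by the text, so only the difference is computed here:

```python
# Hedged sketch: the two calculation results and their difference for one
# target pixel, which for a 4-neighborhood is the discrete Laplacian.
def laplacian_term(target_value, neighbor_values):
    first = sum(neighbor_values)                  # first calculation result
    second = target_value * len(neighbor_values)  # second calculation result
    return first - second                         # difference operation result

flat = laplacian_term(10, [10, 10, 10, 10])  # uniform region: zero response
edge = laplacian_term(10, [20, 20, 20, 20])  # brighter neighborhood: strong response
```

A zero difference means the pixel matches its surroundings, while a large magnitude marks an edge or detail, which is why this quantity is useful for enhancement.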
In one embodiment, as shown in fig. 8, there is provided a video processing apparatus including:
the video acquisition module 210 is configured to acquire an original video to be processed, where the original video includes first YUV images in a general bandwidth compression format;
a first conversion module 220, configured to convert the first YUV image from a general bandwidth compression format to a linear format, and obtain a second YUV image in the linear format;
an image processing module 230, configured to perform color enhancement processing on the second YUV image to obtain a third YUV image in a linear format;
a second conversion module 240, configured to convert the third YUV image from a linear format to a universal bandwidth compression format, and obtain a fourth YUV image in the universal bandwidth compression format;
the image replacing module 250 is configured to replace the corresponding first YUV image in the original video with the fourth YUV image, so as to obtain a processed video.
For specific limitations of the video processing apparatus, reference may be made to the limitations of the video processing method above, and details are not repeated here. Each module in the video processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
It should be understood that although the steps in the flowcharts referred to in the foregoing embodiments are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, a computer device is provided, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the image processing method or the video processing method in the above embodiments when the computer program is executed.
FIG. 9 illustrates an internal block diagram of a computer device in one embodiment. The computer device may in particular be a terminal (or a server). As shown in fig. 9, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement the image processing method or the video processing method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the image processing method or the video processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display. The input device may be a touch layer covering the display screen, a key, a trackball or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse or the like.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the image processing method or the video processing method in the above embodiments.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-transitory computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the invention, which are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the invention, all of which fall within the protection scope of the invention. Accordingly, the scope of protection of the invention shall be subject to the appended claims.

Claims (13)

1. An image processing method, the image processing method being applied to a computer digital signal processor, the image processing method comprising:
acquiring a target image to be processed;
selecting a preset number of pixels as target pixels according to the arrangement direction of the pixels by taking the first pixel in the target image as a starting pixel, so as to obtain a target vector; or after finishing the processing of the current target vector, adopting an alignment point taking mode, taking the next pixel point of the last pixel point in the current target vector as a starting pixel point, and selecting a preset number of pixel points as target pixel points according to the arrangement direction of the pixel points to obtain the next target vector to be processed; the target vector comprises a preset number of target pixel points;
Determining relevant pixel points corresponding to all target pixel points in the target vector, wherein the relevant pixel points are neighborhood pixel points of the target pixel points; the related pixel points comprise 4 neighborhood pixel points of the target pixel point;
when the relevant pixel point is a 4-neighborhood pixel point of the target pixel point, the determining relevant pixel points corresponding to each target pixel point in the target vector includes:
according to the pixel point position of the target vector, adopting a non-alignment point taking mode, taking a vector obtained by shifting the target vector forward by one pixel point as a first correlation vector, and taking a vector obtained by shifting the target vector backward by one pixel point as a second correlation vector;
according to the pixel point positions of the target vector, determining a vector obtained by shifting the target vector forward by a first number of pixel points as a third correlation vector, and determining a vector obtained by shifting the target vector backward by the first number of pixel points as a fourth correlation vector, wherein the first number is the pixel width of the target image;
selecting the 4-neighborhood pixel points of each target pixel point from the first correlation vector, the second correlation vector, the third correlation vector and the fourth correlation vector as the relevant pixel points corresponding to each target pixel point;
Performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point;
and after obtaining new color attribute values of all pixel points in the target image, replacing corresponding initial color attribute values by using the new color attribute values to obtain the processed image.
2. The method according to claim 1, wherein the method further comprises:
the first and second correlation vectors are obtained by vmmu instruction Guan Xiangliang and the third and fourth correlation vectors are obtained by stride instruction.
3. The method of claim 1, wherein performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point, comprises:
caching initial color attribute values of all pixel points in the target image into a first cache space;
Reading the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point from the first cache space;
calculating the sum of the initial color attribute values of the related pixel points to obtain a first calculation result;
calculating the product of the initial color attribute value of the target pixel point and the number of the related pixel points to obtain a second calculation result;
and performing a difference operation on the first calculation result and the second calculation result, and obtaining a new color attribute value of the target pixel point according to the difference operation result.
4. A method according to claim 3, further comprising, after obtaining the new color attribute value for each target pixel point:
and caching new color attribute values of the target pixel points corresponding to the initial color attribute values into a second cache space according to the arrangement sequence of the initial color attribute values in the first cache space.
5. The method of claim 4, wherein after obtaining new color attribute values for all pixels in the target image, replacing corresponding initial color attribute values with the new color attribute values to obtain a processed image, comprising:
Sequentially reading new color attribute values of the target pixel points from the second buffer space, and replacing initial color attribute values of the target pixel points;
and when the initial color attribute values of all the target pixel points are replaced, obtaining the processed image.
6. An image processing method, the image processing method being applied to a computer digital signal processor, the image processing method comprising:
acquiring a target image to be processed;
selecting a preset number of pixels as target pixels according to the arrangement direction of the pixels by taking the first pixel in the target image as a starting pixel, so as to obtain a target vector; or after finishing the processing of the current target vector, adopting an alignment point taking mode, taking the next pixel point of the last pixel point in the current target vector as a starting pixel point, and selecting a preset number of pixel points as target pixel points according to the arrangement direction of the pixel points to obtain the next target vector to be processed; the target vector comprises a preset number of target pixel points;
determining relevant pixel points corresponding to all target pixel points in the target vector, wherein the relevant pixel points are neighborhood pixel points of the target pixel points; the related pixel points comprise 8 neighborhood pixel points of the target pixel point;
When the relevant pixel point is an 8-neighborhood pixel point of the target pixel point, the determining relevant pixel points corresponding to each target pixel point in the target vector includes:
according to the pixel point position of the target vector, adopting a non-alignment point taking mode, taking a vector obtained by shifting the target vector forward by one pixel point as a fifth correlation vector, and taking a vector obtained by shifting the target vector backward by one pixel point as a sixth correlation vector;
according to the pixel point positions of the target vector, determining a vector obtained by shifting the target vector forward by a second number of pixel points as a seventh correlation vector, determining a vector obtained by shifting the target vector forward by a third number of pixel points as an eighth correlation vector, determining a vector obtained by shifting the target vector backward by the second number of pixel points as a ninth correlation vector, and determining a vector obtained by shifting the target vector backward by the third number of pixel points as a tenth correlation vector, wherein the second number is the pixel width of the target image plus one, and the third number is the pixel width of the target image minus one;
selecting the 8-neighborhood pixel points of each target pixel point from the fifth correlation vector, the sixth correlation vector, the seventh correlation vector, the eighth correlation vector, the ninth correlation vector and the tenth correlation vector as the relevant pixel points corresponding to each target pixel point;
Performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point;
and after obtaining new color attribute values of all pixel points in the target image, replacing corresponding initial color attribute values by using the new color attribute values to obtain the processed image.
7. The method of claim 6, wherein performing color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point, comprises:
caching initial color attribute values of all pixel points in the target image into a first cache space;
reading the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point from the first cache space;
calculating the sum of the initial color attribute values of the related pixel points to obtain a first calculation result;
Calculating the product of the initial color attribute value of the target pixel point and the number of the related pixel points to obtain a second calculation result;
and performing a difference operation on the first calculation result and the second calculation result, and obtaining a new color attribute value of the target pixel point according to the difference operation result.
8. The method of claim 7, further comprising, after obtaining the new color attribute value for each target pixel point:
and caching new color attribute values of the target pixel points corresponding to the initial color attribute values into a second cache space according to the arrangement sequence of the initial color attribute values in the first cache space.
9. The method of claim 8, wherein after obtaining new color attribute values for all pixels in the target image, replacing corresponding initial color attribute values with the new color attribute values to obtain a processed image, comprising:
sequentially reading new color attribute values of the target pixel points from the second buffer space, and replacing initial color attribute values of the target pixel points;
and when the initial color attribute values of all the target pixel points are replaced, obtaining the processed image.
10. An image processing apparatus, comprising:
the target image acquisition module is used for acquiring a target image to be processed;
the target vector selection module is used for selecting a preset number of pixels as target pixels according to the pixel arrangement direction by taking the first pixel in the target image as a starting pixel, so as to obtain a target vector; or after finishing the processing of the current target vector, adopting an alignment point taking mode, taking the next pixel point of the last pixel point in the current target vector as a starting pixel point, and selecting a preset number of pixel points as target pixel points according to the arrangement direction of the pixel points to obtain the next target vector to be processed; the target vector comprises a preset number of target pixel points;
the related pixel point determining module is used for determining relevant pixel points corresponding to all target pixel points in the target vector, wherein the relevant pixel points are neighborhood pixel points of the target pixel points; the related pixel points comprise 4 neighborhood pixel points of the target pixel point; when the relevant pixel point is a 4-neighborhood pixel point of the target pixel point, the determining relevant pixel points corresponding to each target pixel point in the target vector includes: according to the pixel point position of the target vector, adopting a non-alignment point taking mode, taking a vector obtained by shifting the target vector forward by one pixel point as a first correlation vector, and taking a vector obtained by shifting the target vector backward by one pixel point as a second correlation vector; according to the pixel point positions of the target vector, determining a vector obtained by shifting the target vector forward by a first number of pixel points as a third correlation vector, and determining a vector obtained by shifting the target vector backward by the first number of pixel points as a fourth correlation vector, wherein the first number is the pixel width of the target image; and selecting the 4-neighborhood pixel points of each target pixel point from the first correlation vector, the second correlation vector, the third correlation vector and the fourth correlation vector as the relevant pixel points corresponding to each target pixel point;
The color attribute reconstruction module is used for carrying out color attribute reconstruction on each target pixel point in the target vector according to the initial color attribute value of the target pixel point and the initial color attribute value of the corresponding related pixel point to obtain a new color attribute value of each target pixel point;
and the color attribute replacement module is used for replacing the corresponding initial color attribute value by using the new color attribute value after obtaining the new color attribute values of all the pixel points in the target image, so as to obtain the processed image.
11. The apparatus of claim 10, wherein the apparatus further comprises:
and the acquisition module is used for obtaining the first correlation vector and the second correlation vector through a vmmu instruction, and obtaining the third correlation vector and the fourth correlation vector through a stride instruction.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when the computer program is executed.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.
CN201911180391.9A 2019-11-27 2019-11-27 Image processing method, device, storage medium and computer equipment Active CN112862905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911180391.9A CN112862905B (en) 2019-11-27 2019-11-27 Image processing method, device, storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN112862905A CN112862905A (en) 2021-05-28
CN112862905B CN112862905B (en) 2023-08-11

Family

ID=75985520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911180391.9A Active CN112862905B (en) 2019-11-27 2019-11-27 Image processing method, device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112862905B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680484A (en) * 2013-11-26 2015-06-03 展讯通信(上海)有限公司 Image enhancement method and device
CN108932706A (en) * 2018-08-14 2018-12-04 长沙全度影像科技有限公司 A kind of contrast and saturation degree Enhancement Method of color image
CN109523564A (en) * 2018-10-19 2019-03-26 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN110233734A (en) * 2019-06-13 2019-09-13 Oppo广东移动通信有限公司 Signature check method and Related product


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Yang. Section 2.3.2, "The Laplacian Operator." In: Detailed Theory and Practice of Digital Image Object Recognition. Beijing University of Posts and Telecommunications Press, 2018. *

Also Published As

Publication number Publication date
CN112862905A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112866802B (en) Video processing method, video processing device, storage medium and computer equipment
US9311696B2 (en) Color enhancement for graphic images
US8094230B2 (en) Image processing apparatus, image processing method, and program
JP4612845B2 (en) Image processing apparatus and method
KR20060109451A (en) Smart clipper for mobile displays
CN112184877B (en) Method and system for rendering optimization of glow effect
CN114040246A (en) Image format conversion method, device, equipment and storage medium of graphic processor
CN108846871B (en) Image processing method and device
CN113538271A (en) Image display method, image display device, electronic equipment and computer readable storage medium
CN107220934B (en) Image reconstruction method and device
US11263805B2 (en) Method of real-time image processing based on rendering engine and a display apparatus
Zhang et al. Multi-scale-based joint super-resolution and inverse tone-mapping with data synthesis for UHD HDR video
CN112954355B (en) Image frame processing method and device
CN112862905B (en) Image processing method, device, storage medium and computer equipment
CN113409196B (en) High-speed global chromatic aberration correction method for real-time video splicing
US10475164B2 (en) Artifact detection in a contrast enhanced output image
CN103810671B (en) The color drawing process and system of RGB mode images
CA2815609C (en) Transparency information in image or video format not natively supporting transparency
CN114999363A (en) Color shift correction method, device, equipment, storage medium and program product
CN111338627B (en) Front-end webpage theme color adjustment method and device
CN114155137A (en) Format conversion method, controller and computer-readable storage medium
WO2018092715A1 (en) Image processing device, image processing method and program
CN111583104B (en) Light spot blurring method and device, storage medium and computer equipment
WO2023185706A1 (en) Image processing method, image processing apparatus and storage medium
CN117036209B (en) Image contrast enhancement method, image contrast enhancement device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant