CN118200580A - Image coding method, device, electronic equipment, chip and storage medium - Google Patents


Info

Publication number: CN118200580A
Application number: CN202211599018.9A
Authority: CN (China)
Prior art keywords: sub-blocks, block, code length, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 马昊辰
Current and original assignee: Shanghai Xuanjie Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Shanghai Xuanjie Technology Co., Ltd.
Classification: Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides an image encoding method, an image encoding apparatus, an electronic device, a chip, and a storage medium. Image texture analysis is performed on a bar block of an image to obtain image texture information; the bar block is divided into a plurality of coding units based on the image texture information, and each coding unit is divided into a plurality of sub-blocks; an adjustment factor is determined for each sub-block, and a target code length for each sub-block is determined based on the adjustment factor; each sub-block is then encoded based on its target code length. Because texture characteristics are fully considered and the code length is dynamically adjusted during encoding, smooth transitions can be achieved, obvious image quality damage in local regions is avoided, and the method is more flexible and stable.

Description

Image coding method, device, electronic equipment, chip and storage medium
Technical Field
The present disclosure relates to the field of image compression, and in particular, to an image encoding method and apparatus, an electronic device, a chip, and a storage medium.
Background
Frame buffer compression is mainly applied to compressing image-processing buffers inside a chip. Its purpose is to reduce hardware storage and bandwidth resources without causing obvious image damage, so the compression ratio is usually modest, typically about 2-3x. In addition, the minimum compression unit must be aligned to an integer multiple of the memory granule, and random access must be supported. In CBR (fixed code length coding) mode, the encoded data can be located directly through an address offset, whereas in VBR (variable code length coding) mode, additional address location information must be stored to realize random access.
However, existing compression coding schemes are not flexible enough overall, and at high compression ratios flat areas are prone to artifacts such as color banding (gradation layering).
Disclosure of Invention
The present disclosure provides an image encoding method, apparatus, electronic device, chip, and storage medium to solve the problems in the related art.
An embodiment of a first aspect of the present disclosure proposes an image encoding method, the method including: performing image texture analysis on a bar block of an image to obtain image texture information; dividing the bar block into a plurality of coding units based on the image texture information, and dividing each coding unit into a plurality of sub-blocks; determining an adjustment factor for each sub-block, and determining a target code length for each sub-block based on the adjustment factor; and encoding each sub-block based on its target code length.
In some embodiments of the present disclosure, performing image texture analysis on a bar of an image, obtaining image texture information includes: extracting gradient information of pixels in the bar block; determining a minimum directional gradient for each pixel in the bar based on the gradient information; the bit width corresponding to the minimum directional gradient of each pixel is determined.
In some embodiments of the present disclosure, determining the bit width corresponding to the minimum directional gradient for each pixel includes: when the minimum direction gradient is greater than zero, determining the bit width according to a preset formula; when the minimum directional gradient is equal to zero, the bit width is determined to be 1.
In some embodiments of the present disclosure, the image texture information includes at least a minimum directional gradient of each pixel in the bar, dividing the bar into a plurality of coding units based on the image texture information, and dividing each coding unit into a plurality of sub-blocks includes: equally dividing the bar block into a plurality of coding units; determining the gradient mean square error of each coding unit according to the image texture information of each coding unit; the coding unit is divided into a plurality of sub-blocks based on the gradient mean square error.
In some embodiments of the present disclosure, dividing the coding unit into a plurality of sub-blocks based on the gradient mean square error comprises: determining a plurality of sub-block division modes of the coding units according to the image texture information of each coding unit; determining the gradient mean square error of the coding unit in each sub-block division mode; selecting a sub-block dividing mode corresponding to the minimum gradient mean square error, and dividing the coding unit into a plurality of sub-blocks, wherein the shapes and the numbers of the sub-blocks obtained by dividing different coding units are different.
In some embodiments of the present disclosure, determining the adjustment factor for each sub-block includes: adding the minimum directional gradients of the pixels in each sub-block to determine a gradient sum of the sub-blocks; adding the minimum directional gradients of each pixel in the bar block and dividing the minimum directional gradients by the number of the plurality of sub-blocks to determine an average gradient; determining the ratio of the gradient sum to the average gradient as an initial factor of the sub-block; and carrying out logarithmic mapping processing on the initial factors to obtain adjustment factors of the sub-blocks.
In some embodiments of the present disclosure, determining the target code length for each sub-block based on the adjustment factor includes: dividing the plurality of sub-blocks into a first group of sub-blocks and a second group of sub-blocks according to the coding order and the number of the plurality of sub-blocks; determining the target code length of each sub-block in the first group on the principle of ensuring image quality; and determining the target code length of each sub-block in the second group on the principle of ensuring coding convergence, wherein the first group is coded before the second group, and the number of sub-blocks in the second group is at least 2 and at most half of the total number of sub-blocks.
In some embodiments of the present disclosure, determining the target code length of each sub-block in the first set of sub-blocks based on the principle of ensuring image quality includes: obtaining a preset code length of the bar block; dividing the preset code length by the number of the plurality of sub-blocks to obtain an initial average code length serving as a basic code length of a first sub-block; multiplying the basic code length by the adjustment factor of the first sub-block to obtain a target code length of the first sub-block; and determining the target code length of the remaining sub-blocks based on the remaining code length and the number of the remaining sub-blocks until the last sub-block in the first group of sub-blocks.
In some embodiments of the present disclosure, determining the target code length of the remaining sub-blocks based on the remaining code length and the number of remaining sub-blocks comprises: subtracting the target code length of the coded sub-block from the preset code length to obtain a residual code length; subtracting the number of the coded sub-blocks from the number of the plurality of sub-blocks to obtain the number of the residual sub-blocks; taking the ratio of the residual code length to the number of residual sub-blocks as the basic code length of the next coded sub-block; and taking the product of the basic code length of the next coded sub-block and the adjustment factor as the target code length of the next coded sub-block.
In some embodiments of the present disclosure, the method further includes: traversing different coding modes and different quantization parameters, and dynamically adjusting the target code length of each sub-block according to a preset rule, the preset rule being: the difference between the quantization parameter corresponding to the target code length used to encode the current sub-block and the quantization parameter corresponding to the target code length used to encode the previous sub-block falls within the interval [-1, 2].
In some embodiments of the present disclosure, determining the target code length of each sub-block in the second group of sub-blocks on the principle of ensuring coding convergence includes: subtracting the sum of the target code lengths of all sub-blocks in the first group from the preset code length to obtain the total code length of the second group; and taking the ratio of this total code length to the number of sub-blocks in the second group as the target code length of each sub-block in the second group.
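The two-stage allocation described in the preceding paragraphs can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of floating-point code lengths, and the signature are our assumptions.

```python
def allocate_code_lengths(preset_len, factors, second_group_size):
    """Allocate a target code length to each sub-block of a bar block.

    `preset_len` is the preset code length budget of the bar block;
    `factors` holds one adjustment factor per sub-block, in coding order.
    The first len(factors) - second_group_size sub-blocks form the
    "image quality" group; the tail forms the "coding convergence" group.
    """
    n = len(factors)
    first_n = n - second_group_size
    targets = []
    remaining = preset_len
    for i in range(first_n):
        base = remaining / (n - i)      # remaining length / remaining sub-blocks
        target = base * factors[i]      # scale by the adjustment factor
        targets.append(target)
        remaining -= target
    # Second group: split whatever is left equally, so the total converges
    # exactly to the preset budget.
    targets.extend([remaining / second_group_size] * second_group_size)
    return targets
```

With uniform adjustment factors the allocation degenerates to an equal split, and in every case the per-sub-block lengths sum to the preset bar block budget.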
An embodiment of a second aspect of the present disclosure proposes an image encoding apparatus, including: an acquisition module, configured to perform image texture analysis on a bar block of an image to obtain image texture information; a dividing module, configured to divide the bar block into a plurality of coding units based on the image texture information and to divide each coding unit into a plurality of sub-blocks; a determining module, configured to determine an adjustment factor for each sub-block and to determine a target code length of each sub-block based on the adjustment factor; and an encoding module, configured to encode each sub-block based on the target code length.
An embodiment of a third aspect of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described in the embodiments of the first aspect of the present disclosure.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method described in the embodiment of the first aspect of the present disclosure.
A fifth aspect embodiment of the present disclosure proposes a computer program product comprising a computer program which, when executed by a processor, performs the method described in the first aspect embodiment of the present disclosure.
A sixth aspect of the present disclosure provides a chip comprising one or more interface circuits and one or more processors; the interface circuit is for receiving a signal from a memory of the electronic device and sending the signal to the processor, the signal comprising computer instructions stored in the memory, which when executed by the processor, cause the electronic device to perform the method described in the embodiments of the first aspect of the disclosure.
In summary, the image encoding method provided by the present disclosure performs image texture analysis on a bar block of an image to obtain image texture information; divides the bar block into a plurality of coding units based on the image texture information, and divides each coding unit into a plurality of sub-blocks; determines an adjustment factor for each sub-block, and determines a target code length for each sub-block based on the adjustment factor; and encodes each sub-block based on its target code length. Texture characteristics are fully considered, and smooth transitions can be achieved by dynamically adjusting the code length during encoding, avoiding obvious image quality damage in local regions; the method is thus more flexible and stable.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of an image encoding method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of an image encoding method according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of an image encoding method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a logarithmic coordinate system provided by embodiments of the disclosure;
Fig. 6 is a schematic structural diagram of an image encoding device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
Unlike image compression standards such as JPEG, frame buffer compression is mainly applied to compressing image-processing buffers inside a chip. The purpose of the compression is to reduce hardware storage and bandwidth resources, but it must not cause obvious image damage, so the compression ratio is usually modest, typically about 2-3x. In addition, the minimum compression unit must be aligned to an integer multiple of the memory granule, and random access must be supported. In CBR mode, encoded data can be located directly through an address offset, while in VBR mode additional address location information must be stored to achieve random access.
Existing schemes for visually lossless compression are used to process YUV data and Bayer data. A typical compression-coding pipeline includes modules for prediction, quantization, entropy coding, and code stream (rate) control. To achieve higher compression quality and compression ratio, the rate control algorithm is the key module: it reasonably distributes bits across image regions of different texture, balancing subjective quality against compression ratio. However, in existing image compression schemes the rate control is not flexible enough overall, and at high compression ratios banding artifacts easily appear in flat areas.
In order to solve the problems in the related art, the present disclosure proposes an image encoding method, which fully considers texture encoding characteristics, and can realize smooth transition by dynamically adjusting code length in the encoding process, thereby avoiding obvious image quality damage in a partial region, and being more flexible and stable.
Before introducing the detailed scheme of the present disclosure, the scenario to which the scheme applies is described. Fig. 1 is an application scenario diagram of an image encoding method in one embodiment. As shown in Fig. 1, the application scenario includes an electronic device 104, in which a camera module and a plurality of application programs may be installed. An application program may initiate an image acquisition instruction, and the camera module acquires the image 102. The camera module may include a front camera module and/or a rear camera module. Finally, the target image is sent to the target application program. The electronic device 104 may be a smart phone, a tablet computer, a personal digital assistant, a wearable device, etc.
In some alternative embodiments, the electronic device may also be a vehicle-mounted device or a vehicle networking device, such as a smart car, and a smart phone is only taken as an example in the present disclosure, but it does not represent that it limits the scope of the present disclosure.
The electronic device may be provided with one or more cameras through which images are acquired. Cameras can be classified by the kind of image they acquire: a laser camera acquires an image formed by laser light irradiating an object, while a visible light camera acquires an image formed by visible light irradiating an object. Several cameras may be installed, and the installation position is not limited. For example, one camera may be mounted on the front panel of the electronic device and two on the rear panel; cameras may also be embedded inside the electronic device and opened by rotating or sliding. Specifically, a front camera and a rear camera may both be mounted on the electronic device and acquire images from different viewing angles: in general, the front camera acquires images from the front of the electronic device, and the rear camera acquires images from the back.
It should be understood that in the present disclosure, the front camera or the rear camera is only used as an example to distinguish the shooting angles of different cameras, and not to limit the functions of multiple cameras, and multiple cameras in the present disclosure may be rear cameras or front cameras at the same time, which is not limited in the present disclosure.
A plurality of application programs may be installed on the electronic device. An application program is software written for a particular purpose, through which the electronic device provides services to the user. When an application program needs to collect an image, it initiates an image acquisition instruction, and the electronic device invokes the camera module to collect the image accordingly. An image acquisition instruction is an instruction that triggers an image acquisition operation.
The electronic equipment is also provided with a processor, and the image coding module in the processor can process the image acquired by the camera module, for example, the image coding method provided by the disclosure is executed.
Fig. 2 is a flowchart of an image encoding method according to an embodiment of the present disclosure. As shown in fig. 2, the image encoding method includes steps 201-204.
In step 201, image texture analysis is performed on the bar blocks of the image to obtain image texture information.
It should be noted that this scheme is mainly used to reduce bandwidth and storage when hardware processes images, which imposes low-latency, low-complexity, and random-access storage requirements on the scheme.
In the present disclosure, the image needs to be divided into several small random access strips (tiles) according to the random access requirements. Specifically, the image may be divided into rectangular areas in the horizontal and vertical directions, which are called bars, and one image may be divided into at least one bar.
In the present disclosure, image texture analysis is performed on the bar block to be encoded, and the resulting image texture information is used to divide the bar block and to determine the adjustment factors. For example, the pixel gradient information of each pixel in the bar block to be encoded can be extracted by analyzing the image texture within each bar block, and a bit width is derived from the pixel gradient information; the pixel gradient information and the bit width then serve as the basis for dividing the bar block and determining the code length adjustment factors.
Step 202, dividing the bar block into a plurality of coding units based on the image texture information, and dividing each coding unit into a plurality of sub-blocks.
In the present disclosure, the bar block is divided into equal coding units, and each coding unit may adopt a different sub-block division manner; for example, some coding units are divided into 4x4 blocks, while others are divided into 8x2 strips.
In the present disclosure, several partitioning methods are preset for the sub-block partitioning method of each coding unit, and a partitioning method that makes the inside of each coding unit flatter is selected as a final partitioning method.
In the present disclosure, a sub-block division manner of each coding unit is selected based on image texture information, for example, based on pixel gradient information of each pixel, among several preset division manners, a mean-square error (MSE) in each divided coding unit is calculated, and a division manner with the minimum mean-square error is used as a final division basis.
In step 203, an adjustment factor is determined for each sub-block, and a target code length for each sub-block is determined based on the adjustment factor.
In the present disclosure, the adjustment factor is the coefficient that scales the code length allocated to each sub-block. It can be calculated from the texture information obtained in step 201: for example, based on the bit width of each pixel, the ratio of the sum of the minimum directional gradient bit widths within a sub-block to the average such sum per sub-block is taken as the basic adjustment coefficient; then, to avoid extreme situations, a logarithmic mapping is used to constrain this coefficient, yielding the final adjustment factor.
In the present disclosure, each bar block has a preset target code length, the target code length of the bar block is divided by the number of sub-blocks to obtain an initialized average code length, and the target code length of each sub-block can be determined according to the average code length and an adjustment factor of each sub-block.
Step 204, each sub-block is encoded based on the target code length.
In embodiments of the present disclosure, after the target code length is determined, a proper code length is matched by traversing different coding modes and different quantization parameters (QPs).
In embodiments of the present disclosure, each sub-block is encoded based on its target code length. In the early encoding stage, the target code length may be exceeded to some extent, but a constraint on the quantization parameter must be satisfied to keep the image quality stable: for example, the QP may be lowered by at most one step, and its difference from the QP of the previous block must not be excessive. In the later stages of encoding, the allowed deviation gradually shrinks, approaching strict encoding at the target code length.
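The quantization parameter constraint described above, a step of at most -1 or +2 between consecutive sub-blocks per the preset rule, can be expressed as a simple clamp. This is an illustrative sketch; the function name and the clamp-before-encode usage are our assumptions.

```python
def clamp_qp(candidate_qp: int, prev_qp: int) -> int:
    """Keep the QP step between consecutive sub-blocks within [-1, 2].

    Per the rule above, the QP may drop by at most 1 (finer quantization,
    better quality) or rise by at most 2 (coarser quantization) from one
    sub-block to the next, which keeps the image quality stable.
    """
    return max(prev_qp - 1, min(candidate_qp, prev_qp + 2))
```

For example, if the rate controller proposes a much lower QP than the previous block used, the clamp limits the drop to a single step.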
In summary, according to the image encoding method provided by the present disclosure, image texture analysis is performed on a bar block of an image to obtain image texture information; the bar block is divided into a plurality of coding units based on the image texture information, and each coding unit is divided into a plurality of sub-blocks; an adjustment factor is determined for each sub-block, and a target code length for each sub-block is determined based on the adjustment factor; each sub-block is then encoded based on its target code length. Texture characteristics are fully considered, and smooth transitions can be achieved by dynamically adjusting the code length during encoding, avoiding obvious image quality damage in local regions; the method is thus more flexible and stable.
Based on the embodiment shown in Fig. 2, Fig. 3 further shows a flowchart of the image encoding method proposed by the present disclosure, which includes the following steps. For ease of understanding, Fig. 4 shows a flow diagram of the image encoding method provided by the present disclosure.
In step 301, image texture analysis is performed on the image bar to obtain image texture information.
In this disclosure, due to the random access requirement, the image needs to be divided into a plurality of small random access blocks, which are stored in designated address units after being compressed. Considering the image-processing buffer size, bar blocks typically do not exceed 4x128 pixels.
In an embodiment of the present disclosure, performing image texture analysis on a bar of an image, obtaining image texture information includes: extracting gradient information of pixels in the bar block; determining a minimum directional gradient for each pixel in the bar based on the gradient information; the bit width corresponding to the minimum directional gradient of each pixel is determined.
It should be noted that, in accordance with the order and nature of pixel encoding, the above, above-left, above-right, and left neighboring pixels of each pixel in the bar block are used to determine its gradient and bit width.
Specifically, by analyzing the texture information of the image in the bar block, the gradient information (grad) of the current pixel with respect to its above, above-left, above-right, and left neighbors is extracted; the minimum directional gradient of each pixel in the bar block is determined from this gradient information and stored; the corresponding bit width is then obtained for each pixel's minimum directional gradient.
In some embodiments of the present disclosure, determining the bit width corresponding to the minimum directional gradient for each pixel includes: when the minimum direction gradient is greater than zero, determining the bit width according to a preset formula; when the minimum directional gradient is equal to zero, the bit width is determined to be 1.
Specifically, when grad is greater than 0 the bit width may be calculated as floor(log2(grad)) + 1, and when grad equals 0 the bit width is 1.
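As a minimal sketch of this rule (the function name is ours, not the patent's):

```python
import math

def gradient_bit_width(grad: int) -> int:
    """Bit width of a minimum directional gradient value.

    Per the rule above: grad == 0 maps to a width of 1; for grad > 0 the
    width is floor(log2(grad)) + 1, i.e. the number of bits needed to
    represent grad in binary.
    """
    if grad == 0:
        return 1
    return math.floor(math.log2(grad)) + 1
```

For instance, gradients 4 through 7 all yield a width of 3, which is exactly why the bit width, rather than the raw magnitude, is the quantity that matters for encoding cost.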
It should be noted that the bit width is used because the bit width of a value, rather than the magnitude of the value itself, is what mainly affects encoding. The gradient information and corresponding bit width of each pixel serve as the basis for dividing the sub-blocks and determining the code length adjustment factor, as described below.
In an embodiment of the present disclosure, the image texture information includes at least a minimum directional gradient for each pixel in the bar, dividing the bar into a plurality of coding units based on the image texture information, and dividing each coding unit into a plurality of sub-blocks includes steps 302-304.
It should be noted that, when the sub-blocks are divided by the methods of steps 302-304, the image texture information obtained by step 301 includes at least a minimum directional gradient for each pixel in the bar.
Step 302, dividing the bar block equally into a plurality of coding units.
In the present disclosure, the bar block is divided into a plurality of coding units in an equal division manner, and the size of each coding unit is uniform.
Step 303, determining the gradient mean square error of each coding unit according to the image texture information of each coding unit.
Step 304, dividing the coding unit into a plurality of sub-blocks based on the gradient mean square error.
In an embodiment of the present disclosure, dividing the coding unit into a plurality of sub-blocks based on the gradient mean square error includes: determining a plurality of sub-block division modes of the coding units according to the image texture information of each coding unit; determining the gradient mean square error of the coding unit in each sub-block division mode; selecting a sub-block dividing mode corresponding to the minimum gradient mean square error, and dividing the coding unit into a plurality of sub-blocks, wherein the shapes and the numbers of the sub-blocks obtained by dividing different coding units are different.
It should be noted that the purpose of dividing into coding units is to allow each coding unit to be split into sub-blocks by its own division method; the shapes and numbers of the sub-blocks obtained from different coding units may therefore differ.
In the present disclosure, several division modes are preset for the sub-block division mode of each coding unit, and the division mode for dividing the coding unit into sub-blocks is selected according to the image texture information.
Specifically, several modes such as block (4x4) or strip (8x2) division are preset as candidate division modes. According to the pixel gradient information obtained in step 301, the gradient mean square error within each coding unit after division is calculated, the division mode with the minimum gradient mean square error is taken as the final sub-block division mode, and the chosen mode is written into the code stream. In this way, each coding unit may obtain different quantization coefficients, and the interior of each coding unit is relatively flat, which reduces the code stream length.
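The mode selection above can be sketched as follows, assuming the per-pixel minimum directional gradients (or their bit widths) of a coding unit are available as a 2-D array. The candidate shapes, function names, and the use of population variance as the "gradient mean square error" are our assumptions, not the patent's exact definitions.

```python
from statistics import pvariance

# Candidate sub-block shapes as (height, width); these echo the 4x4 block
# and 8x2 strip modes mentioned in the text, but are illustrative values.
CANDIDATE_SHAPES = [(4, 4), (2, 8)]

def sub_blocks(unit, shape):
    """Split a 2-D coding unit (list of rows) into sub-blocks of `shape`."""
    h, w = shape
    for r in range(0, len(unit), h):
        for c in range(0, len(unit[0]), w):
            yield [unit[r + dr][c + dc] for dr in range(h) for dc in range(w)]

def gradient_mse(unit, shape):
    """Mean of per-sub-block gradient variances for a given partition."""
    blocks = list(sub_blocks(unit, shape))
    return sum(pvariance(b) for b in blocks) / len(blocks)

def best_partition(unit):
    """Pick the partition whose sub-blocks are internally flattest."""
    return min(CANDIDATE_SHAPES, key=lambda s: gradient_mse(unit, s))
```

On a unit whose gradients vary row-pair by row-pair, the 8x2 strip mode yields constant sub-blocks (zero variance) and is selected over the 4x4 mode.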
Step 305 determines an adjustment factor for each sub-block and determines a target code length for each sub-block based on the adjustment factors.
Step 306: each sub-block is encoded based on the target code length.
In an embodiment of the present disclosure, determining the adjustment factor for each sub-block includes: adding the minimum directional gradients of the pixels in each sub-block to determine a gradient sum of the sub-blocks; adding the minimum directional gradients of each pixel in the bar block and dividing the minimum directional gradients by the number of the plurality of sub-blocks to determine an average gradient; determining the ratio of the gradient sum to the average gradient as an initial factor of the sub-block; and carrying out logarithmic mapping processing on the initial factors to obtain adjustment factors of the sub-blocks.
Specifically, the ratio of the gradient sum to the average gradient is taken as the initial factor of the sub-block. The minimum directional gradient bit width of each pixel in the bar block can be obtained through step 301; the minimum directional gradients of the pixels in each sub-block are added to obtain the gradient sum of the sub-block, and the minimum directional gradients of all pixels in the bar block are added and divided by the number of sub-blocks to obtain the average gradient. To avoid extreme situations (for example, an adjustment factor that is too high or too low during code control is likely to cause severe fluctuation of the image quality), a constraint needs to be applied to the initial factor. The constraint range is [1/2, 2], and a logarithmic mapping is adopted, as shown in fig. 5 in a logarithmic coordinate system: f(x) = 1/2 for 0 < x ≤ 1/4, and f(x) = 2 for x > 4, where x is the initial factor and f(x) is the adjustment factor obtained by constraining the initial factor.
It should be noted that, because the logarithmic mapping is unfavorable for hardware implementation, a piecewise lookup table can be used in place of direct calculation, which is more convenient; the size of the table is determined according to actual requirements.
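The adjustment-factor computation can be sketched as follows. The clamping endpoints come from the text; the middle segment of the logarithmic mapping is only shown in fig. 5 and is not spelled out, so `sqrt(x)` is an assumption here (it is linear in log-log coordinates and meets the stated endpoints 1/2 and 2).

```python
import math

# Hypothetical sketch of the adjustment factor described above.
# sqrt(x) between the clamps is an ASSUMPTION standing in for the
# logarithmic mapping shown in fig. 5.
def adjustment_factor(sub_gradients, strip_gradients, num_sub_blocks):
    grad_sum = sum(sub_gradients)                      # gradient sum of the sub-block
    avg_grad = sum(strip_gradients) / num_sub_blocks   # average gradient of the strip
    x = grad_sum / avg_grad                            # initial factor
    if x <= 0.25:
        return 0.5                # lower clamp of the constraint range [1/2, 2]
    if x > 4:
        return 2.0                # upper clamp
    return math.sqrt(x)           # assumed mapping between the clamps
```

In a hardware-oriented variant, the branch plus `sqrt` would be replaced by the piecewise lookup table mentioned above.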
In an embodiment of the present disclosure, determining the target code length for each sub-block based on the adjustment factor includes: dividing the plurality of sub-blocks into a first group of sub-blocks and a second group of sub-blocks according to the coding sequence of the plurality of sub-blocks and the number of the plurality of sub-blocks; determining target code length of each sub-block in the first group of sub-blocks by taking the principle of ensuring the image quality; and determining the target code length of each sub-block in the second group of sub-blocks by taking the principle of ensuring coding convergence, wherein the coding sequence of the first group of sub-blocks is prior to the coding sequence of the second group of sub-blocks, and the number of the sub-blocks in the second group of sub-blocks is more than or equal to 2 and less than or equal to half of the number of the plurality of sub-blocks.
It should be noted that, in the early stage of encoding, the target code length is determined based on the principle of ensuring the image quality. In the late stage of encoding, since the preset code length of the bar block is fixed, in order to avoid the situation that no code length remains available, when the number of remaining sub-blocks is between 2 and half of the total number of sub-blocks, the target code length is determined based on the principle of ensuring coding convergence according to the actual situation, so that the remaining sub-blocks can complete encoding.
In some embodiments of the present disclosure, determining the target code length of each sub-block in the first set of sub-blocks based on the principle of ensuring image quality includes: obtaining a preset code length of the bar block; dividing the preset code length by the number of the plurality of sub-blocks to obtain an initial average code length serving as a basic code length of a first sub-block; multiplying the basic code length by the adjustment factor of the first sub-block to obtain a target code length of the first sub-block; and determining the target code length of the remaining sub-blocks based on the remaining code length and the number of the remaining sub-blocks until the last sub-block in the first group of sub-blocks.
In an embodiment of the present disclosure, determining the target code length of the remaining sub-blocks based on the remaining code length and the number of remaining sub-blocks includes: subtracting the target code length of the coded sub-block from the preset code length to obtain a residual code length; subtracting the number of the coded sub-blocks from the number of the plurality of sub-blocks to obtain the number of the residual sub-blocks; taking the ratio of the residual code length to the number of residual sub-blocks as the basic code length of the next coded sub-block; and taking the product of the basic code length of the next coded sub-block and the adjustment factor as the target code length of the next coded sub-block.
Specifically, each bar block has a preset code length. The preset code length is divided by the number of sub-blocks in the bar block to obtain an initial average code length, which is used as the basic code length of the first sub-block, and the basic code length is multiplied by the adjustment factor of the first sub-block to obtain the target code length of the first sub-block. After the first sub-block is encoded, the target code length of the first sub-block is subtracted from the preset code length to obtain the remaining code length, and the number of sub-blocks is reduced by one to obtain the number of remaining sub-blocks. The remaining code length is divided by the number of remaining sub-blocks to obtain the basic code length of the second sub-block, and the product of this basic code length and the adjustment factor of the second sub-block is taken as the target code length of the second sub-block. The target code lengths of the third sub-block, the fourth sub-block, and so on are obtained similarly, until all sub-blocks to be encoded are encoded.
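The first-group allocation described above can be sketched as a loop: each sub-block's basic code length is the remaining budget spread over the remaining sub-blocks, then scaled by that sub-block's adjustment factor. The function name and signature are assumptions for illustration.

```python
# Hypothetical sketch of the first-group target-code-length allocation.
def first_group_targets(preset_len: float, factors: list[float]) -> list[float]:
    n = len(factors)            # total number of sub-blocks in the bar block
    remaining = preset_len      # unspent preset code length of the bar block
    targets = []
    for i, f in enumerate(factors):
        base = remaining / (n - i)      # remaining length / remaining sub-blocks
        target = base * f               # scale by this sub-block's adjustment factor
        targets.append(target)
        remaining -= target             # spent budget leaves the pool
    return targets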
In some embodiments of the present disclosure, the method further comprises: traversing under different coding modes and different quantization parameters, and dynamically adjusting the target code length of each sub-block according to a preset rule, wherein the preset rule is as follows: the difference between the quantization parameter corresponding to the target code length used to encode the current sub-block and the quantization parameter corresponding to the target code length used to encode the previous sub-block falls within the interval [ -1,2].
It should be noted that different quantization parameters correspond to different code lengths. When encoding, the quantization parameter whose code length is closest to the target code length is selected among the different coding modes and quantization parameters; most of the time the resulting code length cannot be exactly equal to the target code length. A preset rule is therefore set for the quantization parameters used for the sub-blocks to ensure the stability of the image quality, and the target code length of each sub-block is dynamically adjusted according to the preset rule.
In an embodiment of the present disclosure, the preset rule includes a falling constraint and a rising constraint on the quantization parameter used for a sub-block: compared with the quantization parameter of the previous sub-block, the quantization parameter of the current sub-block can be reduced by at most one and raised by at most two.
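The falling and rising constraints above amount to confining the step between consecutive quantization parameters to the interval [-1, 2], which can be sketched in one line (the function name is an assumption):

```python
# Hypothetical sketch: relative to the previous sub-block's quantization
# parameter, the current one may drop by at most 1 and rise by at most 2.
def constrain_qp(candidate_qp: int, prev_qp: int) -> int:
    return max(prev_qp - 1, min(prev_qp + 2, candidate_qp))
```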
In an embodiment of the present disclosure, determining the target code length of each sub-block in the second set of sub-blocks based on a principle of ensuring coding convergence includes: subtracting the sum of target code lengths of all sub-blocks in the first group of sub-blocks from the preset code length to obtain the total code length of the second group of sub-blocks; the ratio of the total code length to the number of sub-blocks of the second set is taken as the target code length of each sub-block of the second set.
It can be understood that, in the earlier stage of encoding, the target code length is determined for the first group of sub-blocks based on the principle of ensuring the image quality: after each sub-block is encoded, the average code length is recalculated and multiplied by the adjustment factor of the currently encoded sub-block to determine its target code length, which better ensures the image quality. In the later stage of encoding, in order to prevent the preset code length of the bar block from being used up, the target code length is determined for the second group of sub-blocks based on the principle of ensuring coding convergence: the average code length over the remaining sub-blocks of the second group is directly used as the target code length of each sub-block, so that the second group of sub-blocks can complete encoding.
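The second-group allocation is simply the leftover budget spread evenly, which can be sketched as (function name assumed):

```python
# Hypothetical sketch of the second-group allocation: the code length left
# after the first group is divided evenly over the remaining sub-blocks.
def second_group_targets(preset_len, first_targets, num_second):
    total = preset_len - sum(first_targets)   # code length left for group two
    return [total / num_second] * num_second
```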
In summary, the image encoding method provided by the present disclosure performs image texture analysis on a bar block of an image to obtain image texture information; divides the bar block into a plurality of coding units based on the image texture information, and divides each coding unit into a plurality of sub-blocks; determines an adjustment factor for each sub-block, and determines a target code length for each sub-block based on the adjustment factor; and encodes each sub-block based on the target code length. The texture characteristics are fully considered, smooth transitions can be realized by dynamically adjusting the code length during encoding, obvious image quality damage in partial areas is avoided, and the method is more flexible and stable.
Fig. 6 is a schematic structural diagram of an image encoding apparatus 400 according to an embodiment of the disclosure. As shown in fig. 6, the image encoding apparatus includes:
An obtaining module 410, configured to perform image texture analysis on the bar block of the image to obtain image texture information; a dividing module 420 for dividing the bar block into a plurality of encoding units based on the image texture information, and dividing each encoding unit into a plurality of sub-blocks; a determining module 430, configured to determine an adjustment factor for each sub-block, and determine a target code length of each sub-block based on the adjustment factor; the encoding module 440 is configured to encode each sub-block based on the target code length.
In some embodiments, the obtaining module 410 is specifically configured to: extracting gradient information of pixels in the bar block; determining a minimum directional gradient for each pixel in the bar based on the gradient information; the bit width corresponding to the minimum directional gradient of each pixel is determined.
In some embodiments, determining the bit width corresponding to the minimum directional gradient for each pixel includes: when the minimum direction gradient is greater than zero, determining the bit width according to a preset formula; when the minimum directional gradient is equal to zero, the bit width is determined to be 1.
In some embodiments, the image texture information includes at least a minimum directional gradient for each pixel in the bar, and the partitioning module 420 is specifically configured to: equally dividing the bar block into a plurality of coding units; determining the gradient mean square error of each coding unit according to the image texture information of each coding unit; the coding unit is divided into a plurality of sub-blocks based on the gradient mean square error.
In some embodiments, the partitioning module 420 is specifically configured to: determining a plurality of sub-block division modes of the coding units according to the image texture information of each coding unit; determining the gradient mean square error of the coding unit in each sub-block division mode; selecting a sub-block dividing mode corresponding to the minimum gradient mean square error, and dividing the coding unit into a plurality of sub-blocks, wherein the shapes and the numbers of the sub-blocks obtained by dividing different coding units are different.
In some embodiments, the determining module 430 is specifically configured to: adding the minimum directional gradients of the pixels in each sub-block to determine a gradient sum of the sub-blocks; adding the minimum directional gradients of each pixel in the bar block and dividing the minimum directional gradients by the number of the plurality of sub-blocks to determine an average gradient; determining the ratio of the gradient sum to the average gradient as an initial factor of the sub-block; and carrying out logarithmic mapping processing on the initial factors to obtain adjustment factors of the sub-blocks.
In some embodiments, the determining module 430 is specifically configured to: dividing the plurality of sub-blocks into a first group of sub-blocks and a second group of sub-blocks according to the coding sequence of the plurality of sub-blocks and the number of the plurality of sub-blocks; determining target code length of each sub-block in the first group of sub-blocks by taking the principle of ensuring the image quality; and determining the target code length of each sub-block in the second group of sub-blocks by taking the principle of ensuring coding convergence, wherein the coding sequence of the first group of sub-blocks is prior to the coding sequence of the second group of sub-blocks, and the number of the sub-blocks in the second group of sub-blocks is more than or equal to 2 and less than or equal to half of the number of the plurality of sub-blocks.
In some embodiments, determining the target code length for each sub-block in the first set of sub-blocks based on the principle of ensuring image quality includes: obtaining a preset code length of the bar block; dividing the preset code length by the number of the plurality of sub-blocks to obtain an initial average code length serving as a basic code length of a first sub-block; multiplying the basic code length by the adjustment factor of the first sub-block to obtain a target code length of the first sub-block; and determining the target code length of the remaining sub-blocks based on the remaining code length and the number of the remaining sub-blocks until the last sub-block in the first group of sub-blocks.
In some embodiments, determining the target code length for the remaining sub-blocks based on the remaining code length and the number of remaining sub-blocks comprises: subtracting the target code length of the coded sub-block from the preset code length to obtain a residual code length; subtracting the number of the coded sub-blocks from the number of the plurality of sub-blocks to obtain the number of the residual sub-blocks; taking the ratio of the residual code length to the number of residual sub-blocks as the basic code length of the next coded sub-block; and taking the product of the basic code length of the next coded sub-block and the adjustment factor as the target code length of the next coded sub-block.
In some embodiments, the determination module 430 is further to: traversing under different coding modes and different quantization parameters, and dynamically adjusting the target code length of each sub-block according to a preset rule, wherein the preset rule is as follows: the difference between the quantization parameter corresponding to the target code length used to encode the current sub-block and the quantization parameter corresponding to the target code length used to encode the previous sub-block falls within the interval [ -1,2].
In some embodiments, determining the target code length for each sub-block in the second set of sub-blocks based on the principle of ensuring coding convergence comprises: subtracting the sum of target code lengths of all sub-blocks in the first group of sub-blocks from the preset code length to obtain the total code length of the second group of sub-blocks; the ratio of the total code length to the number of sub-blocks of the second set is taken as the target code length of each sub-block of the second set.
Since the apparatus provided by the embodiments of the present disclosure corresponds to the methods provided by the above-described several embodiments, implementation manners of the methods are also applicable to the apparatus provided by the present embodiment, and will not be described in detail in the present embodiment.
In summary, the image encoding device provided by the present disclosure performs image texture analysis on a bar block of an image to obtain image texture information; divides the bar block into a plurality of coding units based on the image texture information, and divides each coding unit into a plurality of sub-blocks; determines an adjustment factor for each sub-block, and determines a target code length for each sub-block based on the adjustment factor; and encodes each sub-block based on the target code length. The texture characteristics are fully considered, smooth transitions can be realized by dynamically adjusting the code length during encoding, obvious image quality damage in partial areas is avoided, and the device is more flexible and stable.
The foregoing embodiments introduce the method and apparatus provided by the embodiments of the present application. To implement the functions of the method provided by the embodiments of the present application, the electronic device may include a hardware structure and/or software modules; each of the functions described above may be implemented as a hardware structure, a software module, or a combination of a hardware structure and a software module.
Fig. 7 is a block diagram illustrating an electronic device 500 for implementing the above-described image encoding method according to an exemplary embodiment. For example, electronic device 500 may be a mobile phone, computer, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. When the electronic device 500 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the electronic device 500. For example, the sensor assembly 514 may detect an on/off state of the electronic device 500 and the relative positioning of components, such as the display and keypad of the electronic device 500. The sensor assembly 514 may also detect a change in position of the electronic device 500 or of a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and a change in the temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the electronic device 500 and other devices, either wired or wireless. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi,2G or 3G,4G LTE, 5G NR (New Radio), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 504, including instructions executable by processor 520 of electronic device 500 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Embodiments of the present disclosure also propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image encoding method described in the above embodiments of the present disclosure.
Embodiments of the present disclosure also propose a computer program product comprising a computer program which, when executed by a processor, performs the image encoding method described in the above embodiments of the present disclosure.
Embodiments of the present disclosure also provide a chip including one or more interface circuits and one or more processors; the interface circuit is used for receiving the code instruction and transmitting the code instruction to the processor; the processor is configured to execute the code instructions to perform the image encoding method described in the above embodiments of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In the description of the present specification, reference is made to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., meaning that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, system that includes a processing module, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (control method) with one or more wires, a portable computer cartridge (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
Furthermore, functional units in various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented as software functional modules and sold or used as a stand-alone product. The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives, and variations of the above embodiments may be made by those of ordinary skill in the art within the scope of the invention.

Claims (16)

1. An image encoding method, the method comprising:
Performing image texture analysis on the strip blocks of the image to obtain image texture information;
dividing the bar block into a plurality of coding units based on the image texture information, and dividing each coding unit into a plurality of sub-blocks;
determining an adjustment factor for each sub-block, and determining a target code length for each sub-block based on the adjustment factor;
And encoding each sub-block based on the target code length.
2. The method of claim 1, wherein performing image texture analysis on the bars of the image, obtaining image texture information comprises:
extracting gradient information of pixels in the bar block;
Determining a minimum directional gradient for each pixel in the bar based on the gradient information;
And determining the bit width corresponding to the minimum directional gradient of each pixel.
3. The method of claim 2, wherein determining the bit width corresponding to the minimum directional gradient of each pixel comprises:
when the minimum directional gradient is greater than zero, determining the bit width according to a preset formula; and
when the minimum directional gradient is equal to zero, determining the bit width to be 1.
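Claim 3 leaves the "preset formula" unspecified, so any concrete choice is an assumption. One natural reading, used purely for illustration here, is the number of binary digits needed to represent the gradient magnitude:

```python
import math

def bit_width(min_grad: int) -> int:
    # Claim 3's "preset formula" is not given in the text; as an
    # illustrative assumption we use the number of binary digits
    # needed to represent the gradient magnitude.
    if min_grad == 0:
        return 1  # claim 3: a zero gradient maps to a bit width of 1
    return math.floor(math.log2(min_grad)) + 1
```

Under this assumption a flat pixel (gradient 0) costs 1 bit of description, while stronger gradients grow logarithmically.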
4. The method of claim 1, wherein the image texture information includes at least the minimum directional gradient of each pixel in the bar block, and wherein dividing the bar block into a plurality of coding units and dividing each coding unit into a plurality of sub-blocks based on the image texture information comprises:
dividing the bar block equally into a plurality of coding units;
determining a gradient mean square error of each coding unit according to the image texture information of the coding unit; and
dividing the coding unit into a plurality of sub-blocks based on the gradient mean square error.
5. The method of claim 4, wherein dividing the coding unit into a plurality of sub-blocks based on the gradient mean square error comprises:
determining a plurality of sub-block division modes of each coding unit according to the image texture information of the coding unit;
determining the gradient mean square error of the coding unit under each sub-block division mode; and
selecting the sub-block division mode corresponding to the minimum gradient mean square error, and dividing the coding unit into a plurality of sub-blocks,
wherein the shapes and numbers of the sub-blocks obtained by dividing different coding units differ.
6. The method of any one of claims 1 to 5, wherein determining an adjustment factor for each sub-block comprises:
adding the minimum directional gradients of the pixels in each sub-block to determine a gradient sum of the sub-block;
adding the minimum directional gradients of the pixels in the bar block and dividing by the number of the plurality of sub-blocks to determine an average gradient;
determining the ratio of the gradient sum to the average gradient as an initial factor of the sub-block; and
applying a logarithmic mapping to the initial factor to obtain the adjustment factor of the sub-block.
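Read as pseudocode, the four steps of claim 6 can be sketched as follows. The base of the logarithmic mapping, the +1 offset, and the guard against a zero initial factor are illustrative assumptions not given in the claim:

```python
import math

def adjustment_factors(sub_block_grad_sums):
    """Per-sub-block adjustment factors, following claim 6.

    sub_block_grad_sums: one gradient sum per sub-block, each the sum
    of the minimum directional gradients of that sub-block's pixels.
    """
    num_sub_blocks = len(sub_block_grad_sums)
    # Average gradient: the bar-block-wide gradient sum divided by the
    # number of sub-blocks (the sub-blocks together cover the bar block).
    avg = sum(sub_block_grad_sums) / num_sub_blocks
    factors = []
    for g in sub_block_grad_sums:
        initial = g / avg  # ratio of gradient sum to average gradient
        # Logarithmic mapping; base 2, the +1 offset, and the epsilon
        # guard are assumptions, not specified in the claim.
        factors.append(1.0 + math.log2(max(initial, 1e-9)))
    return factors
```

With this mapping, a sub-block whose texture is exactly average gets a factor of 1.0, busier sub-blocks get more than their even share of the budget, and smoother ones get less — the smooth code-length transition the abstract describes.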
7. The method of claim 1, wherein determining the target code length of each sub-block based on the adjustment factor comprises:
dividing the plurality of sub-blocks into a first group of sub-blocks and a second group of sub-blocks according to the coding order of the plurality of sub-blocks and the number of the plurality of sub-blocks;
determining the target code length of each sub-block in the first group of sub-blocks on the principle of ensuring image quality; and
determining the target code length of each sub-block in the second group of sub-blocks on the principle of ensuring coding convergence,
wherein the coding order of the first group of sub-blocks precedes that of the second group of sub-blocks, and the number of sub-blocks in the second group is greater than or equal to 2 and less than or equal to half the number of the plurality of sub-blocks.
8. The method of claim 7, wherein determining the target code length of each sub-block in the first group of sub-blocks on the principle of ensuring image quality comprises:
acquiring a preset code length of the bar block;
dividing the preset code length by the number of the plurality of sub-blocks to obtain an initial average code length, which serves as a base code length of the first sub-block;
multiplying the base code length by the adjustment factor of the first sub-block to obtain the target code length of the first sub-block; and
determining the target code lengths of the remaining sub-blocks based on the remaining code length and the number of remaining sub-blocks, up to the last sub-block in the first group of sub-blocks.
9. The method of claim 8, wherein determining the target code lengths of the remaining sub-blocks based on the remaining code length and the number of remaining sub-blocks comprises:
subtracting the target code lengths of the coded sub-blocks from the preset code length to obtain the remaining code length;
subtracting the number of coded sub-blocks from the number of the plurality of sub-blocks to obtain the number of remaining sub-blocks;
taking the ratio of the remaining code length to the number of remaining sub-blocks as the base code length of the next sub-block to be coded; and
taking the product of the base code length of the next sub-block to be coded and its adjustment factor as the target code length of that sub-block.
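Claims 8 and 9 together describe a greedy allocation over the first group: each sub-block's base code length is the budget still remaining divided by the sub-blocks still to be coded, scaled by that sub-block's adjustment factor. A minimal sketch, with fractional (un-rounded) code lengths as a simplifying assumption:

```python
def allocate_first_group(preset_len, num_sub_blocks, factors, first_group_size):
    remaining = preset_len
    targets = []
    for i in range(first_group_size):
        # Base code length: remaining budget over remaining sub-blocks
        # (for i == 0 this is the initial average code length of claim 8).
        base = remaining / (num_sub_blocks - i)
        target = base * factors[i]  # scale by the sub-block's adjustment factor
        targets.append(target)
        remaining -= target  # claim 9: deduct the coded length from the budget
    return targets, remaining
```

Because each step re-divides what is actually left, overspending on an early, texture-heavy sub-block automatically shrinks the base code length offered to later ones.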
10. The method of claim 7, further comprising:
traversing different coding modes and different quantization parameters, and dynamically adjusting the target code length of each sub-block according to a preset rule,
wherein the preset rule is that the difference between the quantization parameter corresponding to the target code length used to encode the current sub-block and the quantization parameter corresponding to the target code length used to encode the previous sub-block falls within the interval [-1, 2].
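The preset rule of claim 10 is a bound on the quantization-parameter step between consecutive sub-blocks, which is what produces the smooth quality transition described in the abstract. A one-line check:

```python
def qp_step_allowed(prev_qp: int, cur_qp: int) -> bool:
    # Claim 10: the QP difference between the current sub-block and the
    # previous one must fall within the interval [-1, 2].
    return -1 <= cur_qp - prev_qp <= 2
```

The asymmetry of the interval means quality may drop (QP rise) by at most 2 steps at a time but improve (QP fall) by at most 1, so quality degradations are spread gradually across sub-blocks.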
11. The method of any one of claims 7 to 10, wherein determining the target code length of each sub-block in the second group of sub-blocks on the principle of ensuring coding convergence comprises:
subtracting the sum of the target code lengths of all sub-blocks in the first group of sub-blocks from the preset code length to obtain the total code length of the second group of sub-blocks; and
taking the ratio of the total code length to the number of sub-blocks in the second group as the target code length of each sub-block in the second group of sub-blocks.
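Claim 11's convergence step simply spreads whatever budget the first group left over evenly across the second group, which guarantees the bar block's total spend equals the preset code length. A sketch under the same fractional-length assumption as above:

```python
def allocate_second_group(preset_len, first_group_targets, second_group_size):
    # Budget left after the first group of sub-blocks has been allocated.
    total = preset_len - sum(first_group_targets)
    # Even split: every sub-block in the second group gets the same share,
    # so the bar block's total exactly meets the preset code length.
    return [total / second_group_size] * second_group_size
```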
12. An image encoding apparatus, the apparatus comprising:
an acquisition module, configured to perform image texture analysis on a bar block of an image to obtain image texture information;
a dividing module, configured to divide the bar block into a plurality of coding units based on the image texture information, and to divide each coding unit into a plurality of sub-blocks;
a determining module, configured to determine an adjustment factor for each sub-block, and to determine a target code length of each sub-block based on the adjustment factor; and
an encoding module, configured to encode each sub-block based on the target code length.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor,
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-11.
16. A chip, comprising one or more interface circuits and one or more processors, wherein the interface circuit is configured to receive a signal from a memory of an electronic device and to send the signal to the processor, the signal comprising computer instructions stored in the memory; when the computer instructions are executed by the processor, the electronic device performs the method of any one of claims 1-11.
CN202211599018.9A 2022-12-12 2022-12-12 Image coding method, device, electronic equipment, chip and storage medium Pending CN118200580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211599018.9A CN118200580A (en) 2022-12-12 2022-12-12 Image coding method, device, electronic equipment, chip and storage medium


Publications (1)

Publication Number Publication Date
CN118200580A true CN118200580A (en) 2024-06-14

Family

ID=91412699




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination