CN117764819A - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN117764819A
Authority
CN
China
Prior art keywords
target
original
determining
coordinate value
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410116929.4A
Other languages
Chinese (zh)
Inventor
张钇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202410116929.4A priority Critical patent/CN117764819A/en
Publication of CN117764819A publication Critical patent/CN117764819A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)

Abstract

The present application relates to an image processing method, an apparatus, an electronic device, a storage medium, and a computer program product. The method comprises the following steps: for each target pixel point, determining a plurality of adjacent pixel points of the target pixel point in the original image; the target pixel points are pixel points in the target image obtained by scaling the original image; determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel points in the original image; determining a target pixel value of a target pixel point in a target image based on position coordinates of a plurality of adjacent pixel points in an original image, a first weight coefficient and a second weight coefficient; and obtaining a target image based on the target pixel values corresponding to the target pixel points. The image processing efficiency can be improved by adopting the method.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Image scaling techniques are an image processing method for changing the size or resolution of an image. Image scaling may include both image magnification and image reduction operations, and is widely used in the fields of computer vision, graphics processing, computer graphics, and digital image processing.
In conventional approaches, an image is scaled with a traditional image scaling technique, for example an edge cropping method, a proportional scaling method or a nonlinear scaling method. As image resolution and image size keep increasing, such processing takes longer and longer, so the efficiency of image processing is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, apparatus, electronic device, and computer-readable storage medium capable of improving image processing efficiency.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
for each target pixel point, determining a plurality of adjacent pixel points of the target pixel point in an original image; the target pixel points are pixel points in a target image obtained by scaling the original image;
Determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image;
determining a target pixel value of the target pixel point in the target image based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient;
and obtaining the target image based on the target pixel values corresponding to the target pixel points.
In a second aspect, the present application also provides an image processing apparatus. The device comprises:
the pixel point determining module is used for determining a plurality of adjacent pixel points of each target pixel point in an original image; the target pixel points are pixel points in a target image obtained by scaling the original image;
the weight coefficient determining module is used for determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image;
a pixel value determining module, configured to determine a target pixel value of the target pixel point in the target image based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient, and the second weight coefficient;
and the scaling module is used for obtaining the target image based on the target pixel value corresponding to each target pixel point.
In a third aspect, the present application also provides an electronic device comprising a memory storing a computer program and a processor, wherein the processor implements the steps of the method of any one of the first aspects when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the first aspects.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of the first aspects.
According to the image processing method, apparatus, electronic device, storage medium and computer program product, a plurality of adjacent pixel points of each target pixel point are determined in the original image; a first weight coefficient and a second weight coefficient, which are used to determine the weights corresponding to the adjacent pixel points, are determined based on the position coordinates of the adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image; and the target pixel value of the target pixel point in the target image is then determined based on the position coordinates of the adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient. Compared with a bilinear interpolation algorithm, in which the pixel values of two interpolation pixel points must first be determined and the target pixel value of the target pixel point is then determined from those two pixel values, the step of determining the pixel values of interpolation pixel points is removed, so the amount of computation in determining the target pixel value is reduced and the efficiency of determining the target pixel value is improved. Since the target image is obtained based on the target pixel values corresponding to the target pixel points, the efficiency of image processing is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a flow chart of an image processing method in one embodiment;
FIG. 2 is a flowchart illustrating a target pixel value determining step in one embodiment;
FIG. 3 is a flow chart illustrating the interpolation vector storage step in one embodiment;
FIG. 4 is a schematic diagram of an image processing process in one embodiment;
FIG. 5 is a flowchart illustrating a neighboring pixel determination step in one embodiment;
FIG. 6 is a flow chart of the original coordinate determination step in one embodiment;
FIG. 7 is a flow chart of the original coordinate constraint step in one embodiment;
FIG. 8 is a flow chart of an image processing method in one embodiment;
FIG. 9 is a flow chart illustrating the determination of interpolation vectors in one embodiment;
FIG. 10 is a diagram showing comparison of image processing results in one embodiment;
FIG. 11 is a block diagram showing the structure of an image processing apparatus in one embodiment;
fig. 12 is an internal structural diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an image processing method is provided. The method is applied to an electronic device, which may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, an internet of things device or a portable wearable device; the internet of things device may be a smart speaker, a smart television, a smart air conditioner, a smart vehicle-mounted device, a smart automobile, or the like, and the portable wearable device may be a smart watch, a smart bracelet, or the like. It is understood that the method may also be applied to a system including an electronic device and a server, and implemented through interaction between the electronic device and the server. In this embodiment, the method includes steps 102 to 108, wherein:
Step 102, for each target pixel point, determining a plurality of adjacent pixel points of the target pixel point in an original image; the target pixel points are pixel points in the target image obtained by scaling the original image.
The target pixel points refer to pixel points in a target image, the target image refers to an image obtained by scaling an original image, the original image refers to an image needing scaling, the target image can be an image obtained by amplifying the original image, and the target image can also be an image obtained by shrinking the original image. The original image may be an RGBA (Red-Green-Blue-Alpha, red-Green-Blue-transparency) image, a U8gray (8-bit gray) image, a Raw16 (16-bit Raw, 16-bit original data) image, or the like. The adjacent pixel points refer to the pixel points adjacent to the original pixel points corresponding to the target pixel points in the original image, and it can be understood that the target pixel points are mapped into the original image to obtain the original pixel points corresponding to the target pixel points in the original image, and the pixel points adjacent to the original pixel points in the original image are the adjacent pixel points. The number of adjacent pixels corresponding to the target pixel in the original image may be determined according to practical situations, for example, the number of adjacent pixels corresponding to one target pixel in the original image is 4.
The electronic device performs scaling processing on the original image to obtain target position coordinates of each target pixel point in the target image, and determines, for each target pixel point in the target image, a plurality of adjacent pixel points of the target pixel point in the original image based on the target position coordinates of the target pixel point.
Step 104, determining a first weight coefficient and a second weight coefficient based on the position coordinates of the plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image.
The position coordinates refer to two-dimensional coordinates representing positions of adjacent pixel points in an original image, the position coordinates comprise a first position coordinate value and a second position coordinate value, the first position coordinate value represents coordinate values in the X-axis direction, the second position coordinate value represents coordinate values in the Y-axis direction, and a coordinate system where the position coordinates are located takes the center of the original image as an origin, and can be understood that the coordinate system where the position coordinates are located is a coordinate system established by performing geometric center alignment on the original image and a target image. The original coordinates refer to two-dimensional coordinates representing the position of the original pixel point corresponding to the target pixel point in the original image, and the original coordinates and the position coordinates are located in the same coordinate system. The first weight coefficient refers to one coefficient for determining the weight of each neighboring pixel point, the second weight coefficient refers to another coefficient for determining the weight of each neighboring pixel point, and it is understood that the first weight coefficient and the second weight coefficient refer to two coefficients for determining the weight of each neighboring pixel point.
The electronic device may determine the first weight coefficient and the second weight coefficient based on the position coordinates of the target adjacent pixel point and the original coordinates of the target pixel point in the original image.
And 106, determining a target pixel value of the target pixel point in the target image based on the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient.
The target pixel value refers to a pixel value of the target pixel point in the target image.
For each adjacent pixel point, the electronic device determines an original pixel value of the adjacent pixel point in the original image based on the position coordinates of the adjacent pixel point in the original image, and determines a target pixel value of the target pixel point in the target image based on the original pixel values of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient.
In one embodiment, determining a target pixel value for a target pixel in a target image based on original pixel values of a plurality of neighboring pixel points in an original image, a first weight coefficient, and a second weight coefficient, comprises: and respectively determining the weight corresponding to each adjacent pixel point based on the first weight coefficient and the second weight coefficient, and determining the target pixel value of the target pixel point in the target image based on the original pixel values and the weights corresponding to the adjacent pixel points.
Based on the first weight coefficient and the second weight coefficient, determining the weight corresponding to each adjacent pixel point respectively comprises the following steps: for each adjacent pixel point, determining the relative position of the adjacent pixel point relative to the target pixel point based on the position coordinates of the adjacent pixel point in the original image and the original coordinates of the target pixel point in the original image, acquiring a weight calculation mode corresponding to the relative position, and determining the weight corresponding to the adjacent pixel point based on the weight calculation mode, the first weight coefficient and the second weight coefficient. The relative position refers to the position of the adjacent pixel point in the original image relative to the target pixel point, and the relative position may be one of upper left, lower left, upper right and lower right. The weight calculation mode refers to a mode of calculating the weight of an adjacent pixel point, and the weight calculation modes correspond to the relative positions one by one. For example, when the target adjacent pixel point is the adjacent pixel point at the lower left, the first weight coefficient is α and the second weight coefficient is β, the weight calculation mode corresponding to the upper left is (1-α)×β, the weight calculation mode corresponding to the lower left is (1-α)×(1-β), the weight calculation mode corresponding to the upper right is α×β, and the weight calculation mode corresponding to the lower right is α×(1-β).
Determining a target pixel value of a target pixel point in a target image based on original pixel values and weights corresponding to a plurality of adjacent pixel points, including: for each adjacent pixel point, multiplying the original pixel value of the adjacent pixel point in the original image by the corresponding weight to obtain a reference pixel value corresponding to the adjacent pixel point, and summing the reference pixel values corresponding to the plurality of adjacent pixel points to obtain the target pixel value of the target pixel point in the target image. For example, if the original pixel values of the four adjacent pixel points are g(u, v+1), g(u, v), g(u+1, v+1) and g(u+1, v), the target pixel value of the target pixel point in the target image is g(u, v+1)×(1-α)×β + g(u, v)×(1-α)×(1-β) + g(u+1, v+1)×α×β + g(u+1, v)×α×(1-β).
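As a concrete illustration of the weighted combination described above, the following C++ sketch computes a target pixel value from the four neighboring pixel values and the two weight coefficients; the function and parameter names are illustrative and not taken from the application.
```cpp
// Hypothetical helper: blends the four adjacent pixel values of one target
// pixel point. gLL, gUL, gLR and gUR are the original pixel values at
// (u, v), (u, v+1), (u+1, v) and (u+1, v+1); alpha and beta are the first
// and second weight coefficients (alpha = u0 - u, beta = v0 - v).
float blendNeighbors(float gLL, float gUL, float gLR, float gUR,
                     float alpha, float beta) {
    // Weights follow the relative positions given in the text:
    // upper left (1-alpha)*beta, lower left (1-alpha)*(1-beta),
    // upper right alpha*beta, lower right alpha*(1-beta).
    float wUL = (1.0f - alpha) * beta;
    float wLL = (1.0f - alpha) * (1.0f - beta);
    float wUR = alpha * beta;
    float wLR = alpha * (1.0f - beta);
    return gUL * wUL + gLL * wLL + gUR * wUR + gLR * wLR;
}
```
For instance, with g(u, v) = 10, g(u+1, v) = 20, g(u, v+1) = 30, g(u+1, v+1) = 40, α = 0.25 and β = 0.5, the sketch returns 30×0.375 + 10×0.375 + 40×0.125 + 20×0.125 = 22.5.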
In one embodiment, determining a target pixel value for a target pixel in a target image based on original pixel values of a plurality of neighboring pixel points in an original image, a first weight coefficient, and a second weight coefficient, comprises: combining original pixel values of a plurality of adjacent pixel points in an original image, a first weight coefficient and a second weight coefficient to obtain an interpolation vector corresponding to a target pixel point, and determining the target pixel value of the target pixel point in the target image based on the interpolation vector corresponding to the target pixel point.
Step 108, obtaining a target image based on the target pixel values corresponding to the target pixel points.
The electronic device obtains the target image according to the target pixel values corresponding to the target pixel points.
In the image processing method, a plurality of adjacent pixel points of each target pixel point are determined in the original image, and the first weight coefficient and the second weight coefficient, which are used to determine the weights corresponding to the adjacent pixel points, are determined based on the position coordinates of the adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image; the target pixel value of the target pixel point in the target image is then determined based on the position coordinates of the adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient. Compared with the bilinear interpolation algorithm, which first determines the pixel values of two interpolation pixel points and then determines the target pixel value of the target pixel point from those pixel values, the step of determining the pixel values of interpolation pixel points is removed, so the amount of computation in determining the target pixel value is reduced and the efficiency of determining the target pixel value is improved. The target image is obtained based on the target pixel values corresponding to the target pixel points, and the efficiency of image processing is therefore improved.
In one embodiment, determining the first weight coefficient and the second weight coefficient based on the position coordinates of the plurality of neighboring pixel points in the original image and the original coordinates of the target pixel point in the original image includes:
determining a target adjacent pixel point based on the position coordinates of a plurality of adjacent pixel points in the original image; determining a first weight coefficient based on a first target coordinate value in the position coordinates of the target adjacent pixel points and a first original coordinate value in the original coordinates; and determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel point and a second original coordinate value in the original coordinates.
The target adjacent pixel point refers to the one of the plurality of adjacent pixel points that is used for determining the first weight coefficient and the second weight coefficient, and the target adjacent pixel point may be any one of the plurality of adjacent pixel points; for example, the adjacent pixel point located at the lower left in the original image relative to the target pixel point among the four adjacent pixel points may be determined as the target adjacent pixel point. The first target coordinate value refers to the coordinate value representing the X direction in the position coordinates of the target adjacent pixel point in the original image, and the second target coordinate value refers to the coordinate value representing the Y direction in the position coordinates of the target adjacent pixel point in the original image; for example, if the position coordinates of the target adjacent pixel point in the original image are (u, v), then u is the first target coordinate value and v is the second target coordinate value. The first original coordinate value refers to the coordinate value representing the X direction in the original coordinates of the original pixel point corresponding to the target pixel point in the original image, and the second original coordinate value refers to the coordinate value representing the Y direction in those original coordinates; for example, if the original coordinates of the target pixel point in the original image are (u0, v0), then u0 is the first original coordinate value and v0 is the second original coordinate value.
The electronic device obtains a preset relative position, determines adjacent pixel points located at the preset relative position among the plurality of adjacent pixel points as target adjacent pixel points, subtracts a first target coordinate value in the position coordinates of the target adjacent pixel points from a first original coordinate value in the original coordinates to obtain a first weight coefficient, subtracts a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates to obtain a second weight coefficient. The preset relative position refers to a preset relative position of the target adjacent pixel point relative to an original pixel point of the target pixel point in the original image, the preset relative position can be set according to actual requirements, and the preset relative position can be one of upper left, lower left, upper right and lower right.
In this embodiment, the electronic device determines a target adjacent pixel point from a plurality of adjacent pixel points, then determines a first weight coefficient according to a first target coordinate value in a position coordinate of the target adjacent pixel point and a first original coordinate value in an original coordinate, and determines a second weight coefficient according to a second target coordinate value in the position coordinate of the target adjacent pixel point and a second original coordinate value in the original coordinate.
In one embodiment, the position coordinates are in a coordinate system with the center of the original image as the origin; determining a target adjacent pixel point based on position coordinates of a plurality of adjacent pixel points in an original image, including:
comparing first position coordinate values in the position coordinates of a plurality of adjacent pixel points to obtain a minimum first position coordinate value; comparing the second position coordinate values in the position coordinates of the plurality of adjacent pixel points to obtain the minimum second position coordinate value; and determining the adjacent pixel point corresponding to the position coordinate comprising the minimum first position coordinate value and the minimum second position coordinate value as a target adjacent pixel point.
The electronic device obtains first position coordinate values in the position coordinates of each adjacent pixel point respectively, compares the plurality of first position coordinate values to obtain minimum first position coordinate values, obtains second position coordinate values in the position coordinates of each adjacent pixel point respectively, compares the plurality of second position coordinate values to obtain minimum second position coordinate values, and then determines adjacent pixel points corresponding to the position coordinates including the minimum first position coordinate values and the minimum second position coordinate values as target adjacent pixel points.
It can be understood that, for example, the original coordinates of the target pixel point in the original image are (u0, v0), the target pixel value of the target pixel point in the target image is g(u0, v0), the position coordinates of the four adjacent pixel points of the target pixel point in the original image are (u, v), (u+1, v), (u, v+1) and (u+1, v+1), and the pixel values of the four adjacent pixel points are g(u, v), g(u+1, v), g(u, v+1) and g(u+1, v+1). The process of determining the target pixel value g(u0, v0) of the target pixel point in the target image by using the bilinear interpolation algorithm is as follows:
Interpolation is carried out on the straight line y=v, and the pixel value of the first interpolation pixel point is:
g(u0, v) = g(u+1, v) × (u0-u) + g(u, v) × [1-(u0-u)]   formula (1)
Interpolation is carried out on the straight line y=v+1, and the pixel value of the second interpolation pixel point is:
g(u0, v+1) = g(u+1, v+1) × (u0-u) + g(u, v+1) × [1-(u0-u)]   formula (2)
Interpolation is then carried out on the straight line x=u0, and the target pixel value of the target pixel point is:
g(u0, v0) = g(u+1, v+1) × (u0-u) × (v0-v) + g(u, v+1) × [1-(u0-u)] × (v0-v) + g(u+1, v) × (u0-u) × [1-(v0-v)] + g(u, v) × [1-(u0-u)] × [1-(v0-v)]   formula (3)
Let α = u0-u and β = v0-v; the target pixel value of the target pixel point is then:
g(u0, v0) = g(u+1, v+1) × α × β + g(u, v+1) × (1-α) × β + g(u+1, v) × α × (1-β) + g(u, v) × (1-α) × (1-β)   formula (4)
If the adjacent pixel point whose position coordinates include the minimum first position coordinate value and the minimum second position coordinate value is determined as the target adjacent pixel point, the adjacent pixel point with position coordinates (u, v) among the four adjacent pixel points is the target adjacent pixel point, the first weight coefficient is α = u0-u, and the second weight coefficient is β = v0-v. After the first weight coefficient and the second weight coefficient are determined, the target pixel value of the target pixel point in the target image can be determined by substituting the first weight coefficient, the second weight coefficient and the pixel values of the four adjacent pixel points in the original image into formula (4).
In this embodiment, the position coordinates of a plurality of adjacent pixel points in the original image are used to determine the target adjacent pixel point, so as to provide basic data for subsequently determining the first weight coefficient and the second weight coefficient.
In one embodiment, a first weight coefficient is determined based on a first target coordinate value in the position coordinates of the target adjacent pixel point and a first original coordinate value in the original coordinates; determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel point and a second original coordinate value in the original coordinates, including:
Subtracting a first target coordinate value in the position coordinates of the target adjacent pixel points from a first original coordinate value in the original coordinates through a first thread to obtain a first weight coefficient; and subtracting a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates by a second thread to obtain a second weight coefficient.
The threads refer to execution units running independently in the process, and it can be understood that one process can comprise a plurality of threads, and the plurality of threads share the same resource and context, but have independent execution paths, so that the threads play a role in cooperative work in multitasking, and the program can execute concurrent operation more effectively. The first thread refers to a thread for determining a first weight coefficient, and the second thread refers to a thread for determining a second weight coefficient.
The electronic device, after determining the target adjacent pixel point, subtracts a first target coordinate value in the position coordinates of the target adjacent pixel point from a first original coordinate value in the original coordinates by a first thread to obtain a first weight coefficient; and subtracting a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates by a second thread to obtain a second weight coefficient.
In this embodiment, the first weight coefficient and the second weight coefficient do not affect each other in the determining process, so that the first weight coefficient and the second weight coefficient are determined in parallel through two threads, thereby improving the determining efficiency of the first weight coefficient and the second weight coefficient.
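A minimal sketch of the two-thread arrangement in this embodiment, assuming std::async is an acceptable stand-in for the first thread and the second thread; all names and signatures below are hypothetical.
```cpp
#include <future>

// Sketch: the first thread derives the first weight coefficient and the
// second thread derives the second weight coefficient. u and v are the
// coordinate values of the target adjacent pixel point; u0 and v0 are the
// original coordinate values of the target pixel point.
void computeWeights(float u, float v, float u0, float v0,
                    float& alpha, float& beta) {
    auto first  = std::async(std::launch::async, [&] { alpha = u0 - u; });  // first thread
    auto second = std::async(std::launch::async, [&] { beta  = v0 - v; });  // second thread
    first.get();
    second.get();
}
```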
In one embodiment, determining a target pixel value for a target pixel in a target image based on position coordinates of a plurality of neighboring pixels in an original image, a first weight coefficient, and a second weight coefficient, includes:
determining an interpolation vector of the target pixel point based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient; and determining a target pixel value of the target pixel point in the target image based on the interpolation vector of the target pixel point.
The interpolation vector is a vector for determining a target pixel value of a target pixel point in a target image. The interpolation vector may include the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient, and the positions of these elements within the interpolation vector may be set according to actual requirements; for example, the interpolation vector of the target pixel point is (u, v, u+1, v+1, α, β) or (α, β, u, v, u+1, v+1). The interpolation vector may also include the original pixel values of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient; for example, the interpolation vector is (g(u, v), g(u+1, v), g(u, v+1), g(u+1, v+1), α, β).
The electronic device combines the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient based on a preset combination sequence to obtain an interpolation vector of the target pixel point, and determines a target pixel value of the target pixel point in the target image by using the interpolation vector of the target pixel point. The preset combination sequence is preset, and is an order of combining the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient, which can be understood that the preset combination sequence specifies that the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient are respectively arranged in the interpolation vector.
In one embodiment, the electronic device obtains an original pixel value of each adjacent pixel point in the original image based on a position coordinate of each adjacent pixel point in the original image, combines the original pixel values of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient based on a preset combination sequence to obtain an interpolation vector of the target pixel point, and then determines the target pixel value of the target pixel point in the target image by using the interpolation vector of the target pixel point.
In one embodiment, determining a target pixel value for a target pixel in a target image based on an interpolation vector for the target pixel comprises: and storing the interpolation vector of the target pixel point in a target register, and sending a vector instruction to target hardware equipment under the condition that the interpolation vector stored in the target register reaches the maximum storage capacity of the target register, wherein the vector instruction is used for indicating the target hardware equipment to respectively operate the interpolation vectors in the target register at the same time so as to obtain a target pixel value of the target pixel point corresponding to each interpolation vector.
In this embodiment, an interpolation vector of the target pixel point is determined based on the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient, and the interpolation vector includes the parameters needed to determine the target pixel value of the target pixel point in the target image. Compared with the bilinear interpolation algorithm, in which the weight corresponding to each adjacent pixel point needs to be determined, only the first weight coefficient and the second weight coefficient need to be stored in the interpolation vector, which reduces the number of parameters in the interpolation vector, facilitates storage and saves the storage resources of the electronic device.
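One possible in-memory layout for such an interpolation vector, assuming the variant that stores the four original pixel values together with the two weight coefficients; the field names are hypothetical.
```cpp
// Hypothetical layout of one interpolation vector.
struct InterpVector {
    float g00;   // original pixel value at (u, v)
    float g10;   // original pixel value at (u+1, v)
    float g01;   // original pixel value at (u, v+1)
    float g11;   // original pixel value at (u+1, v+1)
    float alpha; // first weight coefficient, u0 - u
    float beta;  // second weight coefficient, v0 - v
};
```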
In one embodiment, as shown in fig. 2, determining a target pixel value of a target pixel point in a target image based on an interpolation vector of the target pixel point includes:
Step 202, storing the interpolation vector of the target pixel point in a target register, and counting the interpolation vectors stored in the target register to obtain a statistical quantity.
The target register is a register in the target hardware device and is used for storing the interpolation vector of the target pixel point. The target registers may be NEON (Advanced SIMD, advanced single instruction multiple data) registers, a special set of registers on ARM (Advanced RISC Machine, advanced reduced instruction set computer) processors that support NEON technology, an SIMD (Single Instruction Multiple Data) extension of the ARM architecture that provides the ARM processor with a vector and signal processing instruction set. A NEON register is a hardware component embedded within the ARM processor for efficiently processing vectorized data. The statistical quantity refers to the number of interpolation vectors held in the target register.
Illustratively, the electronic device stores the interpolation vector of the target pixel point in the target register, and performs statistics on the interpolation vector stored in the target register to obtain the statistical quantity.
Step 204, sending a vector instruction to the target hardware device when the statistical quantity reaches the statistical threshold; the vector instruction is used for indicating that the interpolation vectors stored in the target register are respectively operated at the same time, and a target pixel value corresponding to each interpolation vector is obtained.
The statistical threshold refers to the set minimum statistical quantity for sending a vector instruction to the target hardware device. The target hardware device refers to the hardware device where the target register is located, and the target hardware device may be an ARM processor. A vector instruction refers to an instruction that instructs the target hardware device to perform the same operation on multiple interpolation vectors in the target register, that is, an instruction that instructs the target hardware device to perform the target pixel value calculation on multiple interpolation vectors in the target register in parallel. Different relative positions of the target adjacent pixel point correspond to different vector instructions.
The electronic device sends a vector instruction to the target hardware device when the statistical quantity corresponding to the target register reaches the statistical threshold, and the target hardware device performs a parallel operation on the interpolation vectors stored in the target register based on the vector instruction and obtains a target pixel value corresponding to each interpolation vector. For example, if 4 interpolation vectors are stored in the target register, a parallel operation is performed on the 4 interpolation vectors, and the target pixel values corresponding to the 4 interpolation vectors are obtained.
Step 206, obtaining, through the target hardware device, target pixel values of the target pixel points corresponding to the interpolation vectors.
The electronic device obtains, through the target hardware device, target pixel values of target pixel points corresponding to the interpolation vectors.
In this embodiment, by storing the interpolation vector of the target pixel point in the target register and sending the vector instruction to the target hardware device when the statistical quantity corresponding to the target register reaches the statistical threshold, the target hardware device performs a parallel operation on the plurality of interpolation vectors stored in the target register based on the vector instruction, so that the efficiency of determining the target pixel values is improved compared with determining the target pixel value of each target pixel point sequentially.
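The sketch below shows one way a batch of four interpolation vectors could be evaluated at once with ARM NEON intrinsics, assuming the batch has been rearranged into a structure-of-arrays layout; it is an illustrative arrangement, not the application's literal vector instruction.
```cpp
#include <arm_neon.h>

// Evaluate four interpolation vectors in parallel. Each input array holds
// the corresponding field of four interpolation vectors; out receives the
// four target pixel values.
void blendFour(const float g00[4], const float g10[4],
               const float g01[4], const float g11[4],
               const float alpha[4], const float beta[4],
               float out[4]) {
    float32x4_t a   = vld1q_f32(alpha);
    float32x4_t b   = vld1q_f32(beta);
    float32x4_t one = vdupq_n_f32(1.0f);
    float32x4_t ia  = vsubq_f32(one, a);   // 1 - alpha
    float32x4_t ib  = vsubq_f32(one, b);   // 1 - beta

    // g(u,v)*(1-a)*(1-b) + g(u+1,v)*a*(1-b) + g(u,v+1)*(1-a)*b + g(u+1,v+1)*a*b
    float32x4_t acc = vmulq_f32(vld1q_f32(g00), vmulq_f32(ia, ib));
    acc = vmlaq_f32(acc, vld1q_f32(g10), vmulq_f32(a, ib));
    acc = vmlaq_f32(acc, vld1q_f32(g01), vmulq_f32(ia, b));
    acc = vmlaq_f32(acc, vld1q_f32(g11), vmulq_f32(a, b));
    vst1q_f32(out, acc);
}
```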
In one embodiment, the number of target registers is at least two, and sending a vector instruction to the target hardware device when the statistical quantity reaches the statistical threshold includes:
under the condition that the statistical quantity of each target register respectively reaches a corresponding statistical threshold value, a vector instruction is sent to target hardware equipment; the vector instruction is used for indicating that a plurality of interpolation vectors stored in at least two target registers are respectively operated at the same time, so as to obtain target pixel values respectively corresponding to each interpolation vector in each target register.
The statistical thresholds corresponding to different target registers may be equal or unequal, and the statistical thresholds corresponding to the target registers may be set according to actual situations, which are not limited herein, for example, the statistical thresholds corresponding to the target registers are determined according to the storage capacity of the target registers.
In an exemplary embodiment, when the number of the target registers is at least two, the electronic device sends a vector instruction to the target hardware device under the condition that it is determined that the statistical number of each target register reaches the corresponding statistical threshold value, and the target hardware device performs parallel operation on a plurality of interpolation vectors stored in at least two target registers based on the vector instruction, and obtains a target pixel value corresponding to each interpolation vector in each target register. For example, there are 16 target registers, and each target register stores 4 interpolation vectors, and then the 16×4 interpolation vectors are operated in parallel, and target pixel values corresponding to the 64 interpolation vectors are obtained.
In this embodiment, by setting a plurality of target registers, the number of interpolation vectors for performing parallel operation is increased, and the determination efficiency of the target pixel value is further improved compared to setting one target register.
In one embodiment, as shown in fig. 3, before the interpolation vector of the target pixel point is stored in the target register, the method further includes:
step 302, a scaling scene is acquired, and the number of threads is determined based on the scaling scene.
The scaling scene refers to a scene for scaling an original image, and the scaling scene includes but is not limited to a preview scene, an image processing scene and the like. The number of threads refers to the number of threads used for scaling the original image, and it can be understood that the time-consuming requirements of scaling the original image by different scaling scenes are different, and the number of threads determined by the scaling scenes with higher time-consuming requirements is more, so that the different numbers of threads are determined according to the different scaling scenes.
The electronic device obtains a scaling scene, determines the number of threads according to the scaling scene and a target mapping table, and the target mapping table comprises corresponding relations between a plurality of candidate scaling scenes and the number of candidate threads.
Step 304, creating a plurality of target threads with the same number as threads; the plurality of target threads are used for simultaneously and respectively determining interpolation vectors of one target pixel point.
The target thread is a thread for determining an interpolation vector of a target pixel point, that is, the target thread executes all steps in determining the interpolation vector of the target pixel point in the steps, including determining a plurality of adjacent pixel points of the target pixel point in an original image; determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel points in the original image; and determining an interpolation vector of the target pixel point based on the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient.
Illustratively, the electronic device creates a number of target threads equal to the number of threads, using the number of target threads for determining interpolation vectors for different target pixels in parallel.
Step 306, determining target registers with the same number as threads, and establishing a corresponding relationship between the target threads and the target registers.
The corresponding relation refers to a one-to-one corresponding relation between the target thread and the target register.
Illustratively, the electronic device determines a number of target registers equal to the number of threads, and establishes a correspondence between the target threads and the target registers.
In step 308, the interpolation vector of the target pixel point determined by the target thread is stored in the target register corresponding to the target thread.
The electronic device stores the interpolation vector of the target pixel point determined by the target thread in a target register corresponding to the target thread.
In one embodiment, if the number of target threads determined according to the scaling scene is 4, the number of target registers is 16 and the statistical threshold corresponding to each target register is 4, then: the bilinear interpolation algorithm determines the target pixel value of 1 target pixel point at a time, as shown in fig. 4a; using the target hardware device to operate on the 4 interpolation vectors in one target register in parallel determines the target pixel values of 1×4 target pixel points at the same time, as shown in fig. 4b; using 16 target registers determines the target pixel values of 16×4 target pixel points at the same time, as shown in fig. 4c; and using 4 target threads and 16 target registers determines the target pixel values of 4×16×4 target pixel points at the same time, as shown in fig. 4d. The efficiency of determining the target pixel values is thereby improved, which further improves the efficiency of image processing.
In this embodiment, the number of threads is determined from the scaling scene, target threads equal in number to the thread count are created, interpolation vectors of different target pixel points are determined in parallel by the plurality of target threads, and the interpolation vectors are stored in the corresponding target registers, so that the computation and storage of the interpolation vectors are parallelized.
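A rough sketch of how the thread arrangement of this embodiment could be organized, reusing the InterpVector layout sketched earlier; the scene names, the mapping from scene to thread count, and the batch size are assumptions made for illustration only.
```cpp
#include <functional>
#include <thread>
#include <vector>

// Hypothetical scaling scenes and mapping table.
enum class ScalingScene { Preview, ImageProcessing };

int threadCountFor(ScalingScene scene) {
    return scene == ScalingScene::Preview ? 4 : 2;  // illustrative values
}

// Each worker thread owns its own batch buffer (standing in for the target
// registers bound to that thread) and hands rows of the target image to a
// caller-supplied routine that fills and flushes the batch.
void scaleRows(ScalingScene scene, int targetHeight,
               const std::function<void(int, std::vector<InterpVector>&)>& processRow) {
    int threads = threadCountFor(scene);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([t, threads, targetHeight, &processRow] {
            std::vector<InterpVector> batch;  // per-thread buffer
            batch.reserve(64);                // e.g. 16 registers x 4 vectors
            for (int row = t; row < targetHeight; row += threads) {
                processRow(row, batch);       // fills the batch and flushes when full
            }
        });
    }
    for (auto& w : workers) w.join();
}
```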
In one embodiment, as shown in fig. 5, for each target pixel, determining a plurality of adjacent pixels of the target pixel in the original image includes:
step 502, obtaining target coordinates of a target pixel point in a target image, target height and target width of the target image, and original height and original width of an original image.
The target coordinates refer to coordinates representing the position of the target pixel point in the target image. The target height refers to the height of the target image, and the target height may be represented by the number of pixels of each column in the target image. The target width refers to the width of the target image, and the target width may be expressed by the number of pixels of each line in the target image. The original height refers to the height of the original image, and the original width refers to the width of the original image.
Illustratively, the electronic device obtains target coordinates of a target pixel in a target image, target height and target width of the target image, and original height and original width of an original image.
Step 504, determining a first scaling based on the target width and the original width.
Wherein the first scaling means a width scaling.
Illustratively, the electronic device divides the original width by the target width to obtain a first scaling. For example, where Wsrc is the original width and Wdst is the target width, the first scale is Wsrc/Wdst.
Step 506, determining a second scaling based on the target height and the original height.
Wherein the second scaling means a height scaling.
Illustratively, the electronic device divides the original height by the target height to obtain a second scaling. For example, where Hsrc is the original height and Hdst is the target height, the second scale is Hsrc/Hdst.
In step 508, the original coordinates of the target pixel point in the original image are determined based on the target coordinates, the first scale, and the second scale.
Illustratively, the electronic device determines the original coordinates of the target pixel point in the original image according to the first scale, the second scale, and the target coordinate.
Step 510, determining a plurality of adjacent pixel points of the target pixel point in the original image based on the original coordinates.
The electronic device determines a plurality of pixels adjacent to the original coordinates in the original image, and determines the plurality of pixels adjacent to the original coordinates as a plurality of adjacent pixels of the target pixel in the original image.
In this embodiment, the original coordinates of the target pixel point corresponding to the original image are determined through the first scaling and the second scaling, and then a plurality of adjacent pixel points of the target pixel point in the original image are determined according to the original image and the original coordinates, so as to provide basic data for subsequently determining the target pixel value of the target pixel point in the target image.
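One common way to realize step 510 is to take the integer part of each original coordinate value and treat the four surrounding pixels as the adjacent pixel points; the application does not spell this out, so the floor-based sketch below is an assumption.
```cpp
#include <cmath>

// Hypothetical helper: given the original coordinates (u0, v0) of a target
// pixel point, return the lower-left adjacent pixel point (u, v); the other
// adjacent pixel points are then (u+1, v), (u, v+1) and (u+1, v+1).
struct LowerLeftNeighbor {
    int u;
    int v;
};

LowerLeftNeighbor neighborsOf(float u0, float v0) {
    LowerLeftNeighbor n;
    n.u = static_cast<int>(std::floor(u0));
    n.v = static_cast<int>(std::floor(v0));
    return n;
}
```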
In one embodiment, as shown in fig. 6, determining the original coordinates of the target pixel point in the original image based on the target coordinates, the first scale, and the second scale includes:
In step 602, a first original coordinate value is determined based on a first target coordinate value in the target coordinates and the first scaling.
The electronic device multiplies the difference between the first target coordinate value and a preset offset value by the first scaling to obtain the first original coordinate value. The preset offset value is an offset value set in advance; it can be understood that after the original image and the target image are aligned by their geometric centers, the first target coordinate value and the second target coordinate value in the target coordinates are fractional and are inconvenient to operate on in the electronic device, so a preset offset value of 0.5 is set to convert the first target coordinate value and the second target coordinate value in the target coordinates into integers. For example, the target coordinate is (i, j), the first scaling is Wsrc/Wdst and the first target coordinate value is i, so the first original coordinate value is dsti = Wsrc/Wdst × (i - 0.5).
Step 604, determining a second original coordinate value based on a second target coordinate value in the target coordinates and the second scaling.
The electronic device multiplies the difference between the second target coordinate value and the preset offset value by the second scaling to obtain the second original coordinate value. For example, the target coordinate is (i, j), the second scaling is Hsrc/Hdst and the second target coordinate value is j, so the second original coordinate value is dstj = Hsrc/Hdst × (j - 0.5).
Step 606, obtaining the original coordinates of the target pixel point in the original image based on the first original coordinate value and the second original coordinate value.
The electronic device uses the first original coordinate value as an X-axis coordinate value and uses the second original coordinate value as a Y-axis coordinate value to obtain the original coordinate of the target pixel point in the original image.
In this embodiment, the original coordinates of the target pixel point in the original image are determined through the target coordinates, the first scaling and the second scaling, so as to provide basic data for subsequently determining a plurality of adjacent pixel points of the target pixel point in the original image.
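A small C++ sketch of the coordinate mapping in this embodiment, assuming floating-point target coordinates and the preset offset value of 0.5 described above; the names are illustrative.
```cpp
// Map a target coordinate (i, j) to the original coordinates (dsti, dstj).
// Wsrc/Hsrc are the original width/height, Wdst/Hdst the target width/height.
void mapToOriginal(float i, float j,
                   float Wsrc, float Hsrc, float Wdst, float Hdst,
                   float& dsti, float& dstj) {
    float firstScaling  = Wsrc / Wdst;
    float secondScaling = Hsrc / Hdst;
    dsti = firstScaling  * (i - 0.5f);  // first original coordinate value
    dstj = secondScaling * (j - 0.5f);  // second original coordinate value
}
```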
In one embodiment, as shown in fig. 7, obtaining the original coordinates of the target pixel point in the original image based on the first original coordinate value and the second original coordinate value includes:
In step 702, in the case where the first original coordinate value is larger than the original width of the original image, the original width is determined as the first corrected coordinate value.
The first corrected coordinate value refers to an X-axis coordinate value after correcting a first original coordinate value larger than an original width, and it can be understood that the first original coordinate value is larger than the original width of the original image, and the corresponding original pixel point of the target pixel point in the original image is necessarily out of the original image, so that the first original coordinate value needs to be constrained, so that the first original coordinate value is within the original image.
The electronic device may compare the first original coordinate value with the original width after determining the first original coordinate value, and determine the original width as the first corrected coordinate value if the first original coordinate value is greater than the original width; in the case where the first original coordinate value is less than or equal to the original width, the first original coordinate value is determined as a first corrected coordinate value.
In step 704, in the case where the second original coordinate value is greater than the original height of the original image, the original height is determined as the second corrected coordinate value.
The second corrected coordinate value refers to a Y-axis coordinate value after correcting the second original coordinate value larger than the original height.
The electronic device may compare the second original coordinate value with the original height after determining the second original coordinate value, and determine the original height as the second corrected coordinate value if the second original coordinate value is greater than the original height; and determine the second original coordinate value as the second corrected coordinate value in the case where the second original coordinate value is less than or equal to the original height.
Step 706, obtaining the original coordinates of the target pixel point in the original image based on the first corrected coordinate value and the second corrected coordinate value.
The electronic device obtains the original coordinates of the target pixel point in the original image by taking the first corrected coordinate value as an X-axis coordinate value and taking the second corrected coordinate value as a Y-axis coordinate value.
In this embodiment, by determining the original width as the first corrected coordinate value when the first original coordinate value is greater than the original width and/or determining the original height as the second corrected coordinate value when the second original coordinate value is greater than the original height, the original coordinates are ensured to be located within the original image while a complicated correction process is avoided, which improves both the accuracy of the original coordinates and the efficiency of determining them.
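A minimal sketch of the constraint step in this embodiment, assuming the coordinate values and image dimensions are floats; the names are illustrative.
```cpp
#include <algorithm>

// Clamp original coordinate values that fall outside the original image to
// the original width or height, as described above.
void constrainOriginal(float& dsti, float& dstj, float Wsrc, float Hsrc) {
    dsti = std::min(dsti, Wsrc);  // first corrected coordinate value
    dstj = std::min(dstj, Hsrc);  // second corrected coordinate value
}
```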
In an exemplary embodiment, there is provided an image processing method for performing a scaling process on an original image, a flowchart of the image processing method being shown in fig. 8, including:
(1) The electronic device acquires the original image.
(2) The electronic equipment acquires a scaling scene corresponding to the original image, determines the number of threads according to the scaling scene and the target mapping table, creates a plurality of target threads equal to the number of threads, and the plurality of target threads are used for determining interpolation vectors of different target pixel points in parallel.
(3) And determining an interpolation vector corresponding to each target pixel point in the target image.
The interpolation vector determination flow is shown in fig. 9 and includes: for each target pixel point, the electronic equipment acquires the target coordinates of the target pixel point in the target image, the target height and target width of the target image, and the original height and original width of the original image; it divides the original width by the target width to obtain a first scaling factor, and divides the original height by the target height to obtain a second scaling factor. The electronic equipment multiplies the difference between the first target coordinate value and the preset offset value by the first scaling factor to obtain a first original coordinate value, compares the first original coordinate value with the original width, determines the original width as the first corrected coordinate value in the case where the first original coordinate value is greater than the original width, and determines the first original coordinate value as the first corrected coordinate value in the case where the first original coordinate value is less than or equal to the original width. It multiplies the difference between the second target coordinate value and the preset offset value by the second scaling factor to obtain a second original coordinate value, compares the second original coordinate value with the original height, determines the original height as the second corrected coordinate value in the case where the second original coordinate value is greater than the original height, and determines the second original coordinate value as the second corrected coordinate value in the case where the second original coordinate value is less than or equal to the original height. Taking the first corrected coordinate value as the X-axis coordinate value and the second corrected coordinate value as the Y-axis coordinate value yields the original coordinates of the target pixel point in the original image. Based on the original coordinates of the target pixel point, a plurality of adjacent pixel points of the target pixel point in the original image are determined.
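The coordinate mapping just described can be sketched as follows; the 0.5 offset and the floor-based choice of the four adjacent pixel points are assumptions, since the embodiment only refers to a "preset offset value" and does not state how the adjacent pixel points are selected:

```python
import numpy as np

def map_to_original(x_t, y_t, orig_w, orig_h, tgt_w, tgt_h, offset=0.5):
    """Map one target pixel back into the original image and list its
    four neighbouring pixel positions (sketch of the flow in fig. 9)."""
    scale_x = orig_w / tgt_w              # first scaling factor
    scale_y = orig_h / tgt_h              # second scaling factor

    # back-map the target coordinate, then clamp to the original image
    x_o = min((x_t - offset) * scale_x, orig_w)
    y_o = min((y_t - offset) * scale_y, orig_h)

    # four neighbouring integer pixel positions around (x_o, y_o)
    x0, y0 = int(np.floor(x_o)), int(np.floor(y_o))
    neighbours = [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]
    return (x_o, y_o), neighbours
```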
The electronic equipment respectively acquires first position coordinate values in the position coordinates of each adjacent pixel point, compares the plurality of first position coordinate values, and acquires the minimum first position coordinate value; respectively acquiring second position coordinate values in the position coordinates of each adjacent pixel point, and comparing a plurality of second position coordinate values to obtain a minimum second position coordinate value; and then determining the adjacent pixel point corresponding to the position coordinate comprising the minimum first position coordinate value and the minimum second position coordinate value as a target adjacent pixel point.
The electronic equipment subtracts a first target coordinate value in the position coordinates of the target adjacent pixel points from a first original coordinate value in the original coordinates through a first thread to obtain a first weight coefficient; and subtracting a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates by a second thread to obtain a second weight coefficient. The electronic equipment combines the position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient based on a preset combination sequence to obtain an interpolation vector of the target pixel point.
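A minimal sketch of assembling the interpolation vector, assuming the combination order places the neighbour position coordinates before the two weight coefficients (the embodiment leaves the preset combination order unspecified) and performing the two subtractions sequentially rather than on two threads:

```python
def interpolation_vector(x_o, y_o, neighbours):
    """Assemble the interpolation vector for one target pixel (illustrative).

    The target adjacent pixel point is the neighbour with the smallest X and
    smallest Y position coordinates; the two weight coefficients are the
    offsets of the back-mapped coordinate from that neighbour.
    """
    x_min = min(p[0] for p in neighbours)   # minimum first position coordinate value
    y_min = min(p[1] for p in neighbours)   # minimum second position coordinate value

    w1 = x_o - x_min                        # first weight coefficient
    w2 = y_o - y_min                        # second weight coefficient

    # assumed combination order: neighbour position coordinates first, weights last
    return [c for p in neighbours for c in p] + [w1, w2]
```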
(4) And determining the target pixel value of the target pixel point in the target image based on the interpolation vector corresponding to the target pixel point.
In the case where there are a plurality of target registers, the electronic equipment stores the interpolation vectors of the target pixel points in the target registers. When the statistical quantity of each target register is determined to have reached its corresponding statistical threshold value, a vector instruction is sent to the target hardware device; based on the vector instruction, the target hardware device performs a parallel operation on the plurality of interpolation vectors stored in the plurality of target registers and simultaneously obtains the target pixel value corresponding to each interpolation vector in each target register. The electronic equipment then obtains, through the target hardware device, the target pixel values of the target pixel points corresponding to the interpolation vectors.
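The register-and-vector-instruction step targets dedicated hardware, so as a software stand-in the following NumPy sketch evaluates one register's batch of interpolation vectors in a single vectorised operation. The bilinear weighted sum is an assumed concrete form of the per-vector operation and is not prescribed by the embodiment; the vectors are assumed to hold eight neighbour coordinates followed by the two weight coefficients, and the image is single-channel with all neighbour coordinates already inside its bounds.

```python
import numpy as np

def flush_register(vectors, image):
    """Evaluate a batch of interpolation vectors in one vectorised step,
    standing in for the vector instruction sent to the target hardware device."""
    v = np.asarray(vectors, dtype=np.float32)            # shape (N, 10)
    coords = v[:, :8].astype(int).reshape(-1, 4, 2)      # four neighbours (x, y)
    w1, w2 = v[:, 8:9], v[:, 9:10]                       # weight coefficients

    # gather the four neighbouring pixel values for every vector in the batch
    vals = image[coords[..., 1], coords[..., 0]]         # shape (N, 4)

    # combine the four neighbours with the two weight coefficients (bilinear form)
    weights = np.concatenate([(1 - w1) * (1 - w2), w1 * (1 - w2),
                              (1 - w1) * w2,       w1 * w2], axis=1)
    return (vals * weights).sum(axis=1)                  # one target pixel value per vector
```

In the embodiment, one such flush would be triggered per target register once its statistical quantity reaches the corresponding statistical threshold value.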
(5) And the electronic equipment obtains a target image according to the target pixel values corresponding to the target pixel points.
Taking an RGBA image as the original image, with 4 target threads and 16 target registers, each target register storing 4 interpolation vectors, the traditional bilinear interpolation method and the above image processing method were respectively applied to original images of the single-precision floating point type (FP32, Floating Point 32) and the half-precision floating point type (FP16, Floating Point 16); the results are shown in fig. 10. For the single-precision floating point original image, image processing with the traditional bilinear interpolation method takes 189.799 milliseconds, while image processing with the above image processing method takes 15.027 milliseconds; for the half-precision floating point original image, image processing with the traditional bilinear interpolation method takes 257.499 milliseconds, while image processing with the above image processing method takes 23.267 milliseconds, so the image processing efficiency is greatly improved by processing images with the above image processing method. Here, cv.resize refers to resizing the image with the resize function of the OpenCV library (a computer vision library), and Torch refers to resizing the image with the PyTorch deep learning framework; the differences in pixel values between the above image processing method and image scaling with cv.resize or Torch are small, so the accuracy of image processing with the above image processing method is also high.
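For reference, the two baselines mentioned above can be invoked with their standard APIs as shown below; the image size and contents are placeholders, and the timing figures quoted above come from the patent's own test rather than from this snippet:

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

original = np.random.randint(0, 256, (480, 640, 4), dtype=np.uint8)  # sample RGBA image
target_h, target_w = 1080, 1920

# baseline 1: OpenCV bilinear resize (cv2.resize takes (width, height))
resized_cv = cv2.resize(original, (target_w, target_h), interpolation=cv2.INTER_LINEAR)

# baseline 2: PyTorch bilinear resize (expects an NCHW float tensor)
t = torch.from_numpy(original).permute(2, 0, 1).unsqueeze(0).float()
resized_torch = F.interpolate(t, size=(target_h, target_w),
                              mode='bilinear', align_corners=False)
```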
In the above image processing method, for each target pixel point a plurality of adjacent pixel points of the target pixel point in the original image are determined; the first weight coefficient and the second weight coefficient, which are used to determine the weights respectively corresponding to the plurality of adjacent pixel points, are determined based on the position coordinates of the plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image; and the target pixel value of the target pixel point in the target image is then determined based on the position coordinates of the plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient. Compared with the bilinear interpolation algorithm, in which the pixel values of two interpolation pixel points must first be determined and the target pixel value of the target pixel point is then determined from those two pixel values, the step of determining the pixel values of the interpolation pixel points is eliminated, so the amount of calculation in determining the target pixel value is reduced and the determination efficiency of the target pixel value is improved; the target image is obtained based on the target pixel values corresponding to the target pixel points, so the image processing efficiency is improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide an image processing apparatus for implementing the above-mentioned image processing method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the image processing apparatus provided below may refer to the limitation of the image processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 11, there is provided an image processing apparatus including: pixel point determination module 1102, weight coefficient determination module 1104, pixel value determination module 1106, and scaling module 1108, wherein:
a pixel point determining module 1102, configured to determine, for each target pixel point, a plurality of adjacent pixel points of the target pixel point in the original image; the target pixel points are pixel points in the target image obtained by scaling the original image;
a weight coefficient determining module 1104, configured to determine a first weight coefficient and a second weight coefficient based on the position coordinates of the plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image;
a pixel value determining module 1106, configured to determine a target pixel value of the target pixel point in the target image based on the position coordinates of the plurality of neighboring pixel points in the original image, the first weight coefficient, and the second weight coefficient;
The scaling module 1108 is configured to obtain a target image based on the target pixel values corresponding to the target pixel points.
In one embodiment, the weight coefficient determination module 1104 is further configured to: determining a target adjacent pixel point based on the position coordinates of a plurality of adjacent pixel points in the original image; determining a first weight coefficient based on a first target coordinate value in the position coordinates of the target adjacent pixel points and a first original coordinate value in the original coordinates; and determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel point and a second original coordinate value in the original coordinates.
In one embodiment, the weight coefficient determination module 1104 is further configured to: comparing first position coordinate values in the position coordinates of a plurality of adjacent pixel points to obtain a minimum first position coordinate value; comparing the second position coordinate values in the position coordinates of the plurality of adjacent pixel points to obtain the minimum second position coordinate value; and determining the adjacent pixel point corresponding to the position coordinate comprising the minimum first position coordinate value and the minimum second position coordinate value as a target adjacent pixel point.
In one embodiment, the weight coefficient determination module 1104 is further configured to: subtracting a first target coordinate value in the position coordinates of the target adjacent pixel points from a first original coordinate value in the original coordinates through a first thread to obtain a first weight coefficient; determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel point and a second original coordinate value in the original coordinates, including: and subtracting a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates by a second thread to obtain a second weight coefficient.
In one embodiment, the pixel value determination module 1106 is further configured to: determining an interpolation vector of the target pixel point based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient; and determining a target pixel value of the target pixel point in the target image based on the interpolation vector of the target pixel point.
In one embodiment, the pixel value determination module 1106 is further configured to: store the interpolation vector of the target pixel point in a target register, and count the interpolation vectors stored in the target register to obtain a statistical quantity; send a vector instruction to the target hardware device in the case where the statistical quantity reaches a statistical threshold value, where the vector instruction is used for indicating that the interpolation vectors stored in the target register are operated on simultaneously to obtain the target pixel value corresponding to each interpolation vector; and obtain, through the target hardware device, the target pixel values of the target pixel points corresponding to the interpolation vectors.
In one embodiment, the pixel value determination module 1106 is further configured to: under the condition that the statistical quantity of each target register respectively reaches a corresponding statistical threshold value, a vector instruction is sent to target hardware equipment; the vector instruction is used for indicating that a plurality of interpolation vectors stored in at least two target registers are respectively operated at the same time, so as to obtain target pixel values respectively corresponding to each interpolation vector in each target register.
In one embodiment, the pixel value determination module 1106 is further configured to: obtain a scaling scene and determine the number of threads based on the scaling scene; create a plurality of target threads equal in number to the number of threads, where each target thread is used to determine the interpolation vector of a respective target pixel point at the same time; determine target registers equal in number to the threads, and establish a correspondence between the target threads and the target registers; and, when storing the interpolation vector of the target pixel point in a target register, store the interpolation vector of the target pixel point determined by a target thread into the target register corresponding to that target thread.
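A rough sketch of the scene-dependent thread setup, using a Python thread pool in place of the patent's target threads and per-thread registers; the scene-to-thread-count mapping is invented for illustration, since the target mapping table is not disclosed:

```python
from concurrent.futures import ThreadPoolExecutor

# illustrative mapping from scaling scene to thread count; the patent's
# target mapping table is not disclosed, so these values are assumptions
SCENE_THREADS = {"preview": 2, "thumbnail": 2, "full_resolution": 4}

def build_vectors_in_parallel(scene, target_pixels, compute_vector):
    """Create as many worker threads as the scene calls for and let each one
    build interpolation vectors for its share of target pixels.
    (In CPython the GIL limits true parallelism for pure-Python work; the
    patent targets native threads and hardware registers.)"""
    n_threads = SCENE_THREADS.get(scene, 4)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(compute_vector, target_pixels))
```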
In one embodiment, the pixel point determination module 1102 is further configured to: acquiring target coordinates of a target pixel point in a target image, target height and target width of the target image, and original height and original width of an original image; determining a first scaling based on the target width and the original width; determining a second scaling based on the target height and the original height; determining original coordinates of the target pixel point in the original image based on the target coordinates, the first scaling and the second scaling; based on the original coordinates, a plurality of adjacent pixel points of the target pixel point in the original image are determined.
In one embodiment, the pixel point determination module 1102 is further configured to: determining a first original coordinate value based on a first target coordinate value and a first scaling in the target coordinates; determining a second original coordinate value based on a second target coordinate value and a second scaling in the target coordinates; and obtaining the original coordinates of the target pixel point in the original image based on the first original coordinate value and the second original coordinate value.
In one embodiment, the pixel point determination module 1102 is further configured to: determining the original width as a first corrected coordinate value in the case where the first original coordinate value is greater than the original width of the original image; determining the original height as a second corrected coordinate value in the case where the second original coordinate value is greater than the original height of the original image; and obtaining the original coordinates of the target pixel point in the original image based on the first corrected coordinate value and the second corrected coordinate value.
The respective modules in the above-described image processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or independent of a processor in the electronic device, or may be stored in software in a memory in the electronic device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 12. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the electronic device is used to exchange information between the processor and the external device. The communication interface of the electronic device is used for conducting wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display unit of the electronic device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 12 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided that includes a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that all or part of the flows of the methods described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; the non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to fall within the scope of this specification.
The above embodiments merely represent several implementations of the present application; their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (15)

1. An image processing method, the method comprising:
for each target pixel point, determining a plurality of adjacent pixel points of the target pixel point in an original image; the target pixel points are pixel points in a target image obtained by scaling the original image;
determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image;
Determining a target pixel value of the target pixel point in the target image based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient;
and obtaining the target image based on the target pixel values corresponding to the target pixel points.
2. The method of claim 1, wherein the determining the first weight coefficient and the second weight coefficient based on the position coordinates of the plurality of the neighboring pixels in the original image and the original coordinates of the target pixel in the original image comprises:
determining a target adjacent pixel point based on position coordinates of a plurality of adjacent pixel points in the original image;
determining a first weight coefficient based on a first target coordinate value in the position coordinates of the target adjacent pixel points and a first original coordinate value in the original coordinates;
and determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel points and a second original coordinate value in the original coordinates.
3. The method of claim 2, wherein the position coordinates are in a coordinate system having the center of the original image as its origin; the determining a target adjacent pixel point based on the position coordinates of the plurality of adjacent pixel points in the original image comprises:
Comparing the first position coordinate values in the position coordinates of the plurality of adjacent pixel points to obtain the minimum first position coordinate value;
comparing the second position coordinate values in the position coordinates of the plurality of adjacent pixel points to obtain the minimum second position coordinate value;
and determining the adjacent pixel point corresponding to the position coordinate comprising the minimum first position coordinate value and the minimum second position coordinate value as a target adjacent pixel point.
4. The method of claim 2, wherein the determining a first weight coefficient based on a first target coordinate value in the position coordinates of the target adjacent pixel point and a first original coordinate value in the original coordinates comprises:
subtracting a first target coordinate value in the position coordinates of the target adjacent pixel points from a first original coordinate value in the original coordinates through a first thread to obtain a first weight coefficient;
the determining a second weight coefficient based on a second target coordinate value in the position coordinates of the target adjacent pixel point and a second original coordinate value in the original coordinates includes:
and subtracting a second target coordinate value in the position coordinates of the target adjacent pixel points from a second original coordinate value in the original coordinates through a second thread to obtain a second weight coefficient.
5. The method of claim 1, wherein the determining the target pixel value for the target pixel in the target image based on the position coordinates of the plurality of neighboring pixels in the original image, the first weight coefficient, and the second weight coefficient comprises:
determining an interpolation vector of the target pixel point based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient and the second weight coefficient;
and determining a target pixel value of the target pixel point in the target image based on the interpolation vector of the target pixel point.
6. The method of claim 5, wherein determining a target pixel value for the target pixel in the target image based on the interpolated vector for the target pixel comprises:
storing the interpolation vector of the target pixel point in a target register, and counting the interpolation vectors stored in the target register to obtain a statistical quantity;
under the condition that the statistical quantity reaches a statistical threshold value, a vector instruction is sent to target hardware equipment; the vector instruction is used for indicating that interpolation vectors stored in the target register are respectively operated at the same time to obtain target pixel values corresponding to the interpolation vectors respectively;
And acquiring target pixel values of target pixel points corresponding to the interpolation vectors through the target hardware equipment.
7. The method of claim 6, wherein the number of the target registers is at least two; and the sending a vector instruction to target hardware equipment in the case where the statistical quantity reaches a statistical threshold value comprises:
under the condition that the statistical quantity of each target register respectively reaches a corresponding statistical threshold value, a vector instruction is sent to target hardware equipment; the vector instruction is used for indicating that a plurality of interpolation vectors stored in at least two target registers are respectively operated at the same time, so as to obtain target pixel values respectively corresponding to the interpolation vectors in each target register.
8. The method of claim 6, wherein before the storing the interpolation vector of the target pixel point in a target register, the method further comprises:
acquiring a scaling scene, and determining the number of threads based on the scaling scene;
creating a plurality of target threads equal to the number of threads; the target threads are used for simultaneously determining interpolation vectors of one target pixel point respectively;
Determining target registers with the same number as the threads, and establishing a corresponding relation between the target threads and the target registers;
the storing the interpolation vector of the target pixel point in a target register includes:
and storing the interpolation vector of the target pixel point determined by the target thread into a target register corresponding to the target thread through the target thread.
9. The method of claim 1, wherein the determining, for each target pixel, a plurality of neighboring pixels of the target pixel in the original image comprises:
acquiring target coordinates of the target pixel points in the target image, target height and target width of the target image, and original height and original width of the original image;
determining a first scaling based on the target width and the original width;
determining a second scaling based on the target height and the original height;
determining original coordinates of the target pixel point in the original image based on the target coordinates, the first scaling and the second scaling;
and determining a plurality of adjacent pixel points of the target pixel point in the original image based on the original coordinates.
10. The method of claim 9, wherein the determining the original coordinates of the target pixel point in the original image based on the target coordinates, the first scale, and the second scale comprises:
determining a first original coordinate value based on a first target coordinate value in the target coordinates and the first scaling;
determining a second original coordinate value based on a second target coordinate value in the target coordinates and the second scaling;
and obtaining the original coordinates of the target pixel point in the original image based on the first original coordinate value and the second original coordinate value.
11. The method of claim 10, wherein the deriving the original coordinates of the target pixel point in the original image based on the first original coordinate value and the second original coordinate value comprises:
determining the original width as a first corrected coordinate value in the case that the first original coordinate value is greater than the original width of the original image;
determining the original height as a second corrected coordinate value in the case where the second original coordinate value is greater than the original height of the original image;
And obtaining the original coordinates of the target pixel point in the original image based on the first corrected coordinate value and the second corrected coordinate value.
12. An image processing apparatus, characterized in that the apparatus comprises:
the pixel point determining module is used for determining a plurality of adjacent pixel points of each target pixel point in an original image; the target pixel points are pixel points in a target image obtained by scaling the original image;
the weight coefficient determining module is used for determining a first weight coefficient and a second weight coefficient based on the position coordinates of a plurality of adjacent pixel points in the original image and the original coordinates of the target pixel point in the original image;
a pixel value determining module, configured to determine a target pixel value of the target pixel point in the target image based on position coordinates of a plurality of adjacent pixel points in the original image, the first weight coefficient, and the second weight coefficient;
and the scaling module is used for obtaining the target image based on the target pixel value corresponding to each target pixel point.
13. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 11 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 11.
CN202410116929.4A 2024-01-26 2024-01-26 Image processing method, device, electronic equipment and storage medium Pending CN117764819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410116929.4A CN117764819A (en) 2024-01-26 2024-01-26 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410116929.4A CN117764819A (en) 2024-01-26 2024-01-26 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117764819A true CN117764819A (en) 2024-03-26

Family

ID=90314622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410116929.4A Pending CN117764819A (en) 2024-01-26 2024-01-26 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117764819A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination