WO2012106850A1 - Method for interpolating an image based on a matrix, and image processing system - Google Patents

Method for interpolating an image based on a matrix, and image processing system

Info

Publication number
WO2012106850A1
WO2012106850A1 PCT/CN2011/071966 CN2011071966W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
diagonal
point
pixels
Prior art date
Application number
PCT/CN2011/071966
Other languages
English (en)
French (fr)
Inventor
黄晓东
Original Assignee
澜起科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 澜起科技(上海)有限公司 filed Critical 澜起科技(上海)有限公司
Priority to US13/376,995 priority Critical patent/US8818136B2/en
Publication of WO2012106850A1 publication Critical patent/WO2012106850A1/zh

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object
    • H04N 19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • The present invention relates to the field of image processing, and in particular to a method for interpolating an image based on a matrix and to an image processing system. Background Art
  • The purpose of the invention is to provide a method and an image processing system for interpolating an image based on a matrix.
  • The invention provides a method for interpolating an image based on a matrix in an image processing system, comprising the steps of: determining the gradient direction of the image region formed by a pixel array containing the interpolation point; determining, based on the gradient direction and the position of the interpolation point, the triangle to be interpolated within the image region; and calculating the pixel value of the interpolation point from the pixel values of the pixels corresponding to the three vertices of that triangle and the distance between the interpolation point and one of its vertices.
  • The invention also provides a system for interpolating an image based on a matrix, comprising:
  • a gradient determination module, configured to determine the gradient direction of the image region formed by the pixel array containing the interpolation point; a triangle determination module, configured to determine, based on the gradient direction and the position of the interpolation point, the triangle to be interpolated within that image region; and
  • a calculation module, configured to calculate the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated and the distance between the interpolation point and one vertex of the triangle.
  • By analysing the pixel array containing the interpolation point, the method and image processing system of the invention obtain the gradient direction of the pixel array, determine the triangle to be interpolated from that direction, and compute the pixel value of the interpolation point from it.
  • This effectively removes the edge burrs or jagged artifacts that appear along diagonal detail when an image is scaled, yields a higher-quality image, and keeps the method simple to compute with a small amount of calculation.
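The bullets above describe the flow only in words; a minimal Python sketch of the three steps (gradient direction, triangle selection, triangular interpolation) is given below. It is an illustration under simplifying assumptions: the function name, the restriction to a 2*2 neighbourhood and the simplified gradient test are not taken from the patent, and only one triangle case is written out in full.

```python
import numpy as np

def interpolate_point(img, y, x):
    """Minimal sketch of the claimed flow for one interpolation point at source
    coordinates (y, x). Only the triangle d22-d23-d32 case is written out in
    full; the other three triangles are handled symmetrically."""
    iy, ix = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - iy, x - ix          # vertical / horizontal offsets inside the 2*2 cell

    # 2*2 cell of source pixels around the point (named as in Fig. 2):
    #   d22 d23
    #   d32 d33
    d22, d23 = float(img[iy, ix]),     float(img[iy, ix + 1])
    d32, d33 = float(img[iy + 1, ix]), float(img[iy + 1, ix + 1])

    # Step A (simplified here): take as gradient direction the diagonal along
    # which the pixel values change least.
    anti_diagonal = abs(d23 - d32) <= abs(d22 - d33)

    # Step B: the triangle to be interpolated has that diagonal as an edge and
    # contains the interpolation point.
    if anti_diagonal and fx + fy <= 1.0:
        p0, p1, p2 = d22, d23, d32   # p1 is p0's horizontal neighbour, p2 its vertical neighbour
        # Step C: zout = p0 - (-p1 + p0)*x - (p0 - p2)*y  (formula given in the description)
        return p0 - (-p1 + p0) * fx - (p0 - p2) * fy
    # The remaining cases follow the same pattern with their own vertices; a
    # plane through d22, d23, d32 is used here only to keep the sketch short.
    return d22 + (d23 - d22) * fx + (d32 - d22) * fy
```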
  • Fig. 1 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to one aspect of the invention.
  • Fig. 2 is a schematic diagram of a pixel array containing the interpolation point.
  • Fig. 3 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to another aspect of the invention.
  • Fig. 4 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to still another aspect of the invention.
  • Fig. 5 is a schematic diagram showing how the method calculates the pixel value of the interpolation point.
  • Figs. 6 to 9 are flowcharts of methods for interpolating an image based on a matrix in an image processing system according to further aspects of the invention.
  • Fig. 10 is a schematic diagram of an image processing system that interpolates an image based on a matrix according to one aspect of the invention.
  • Fig. 11 is a schematic diagram of an image processing system that interpolates an image based on a matrix according to another aspect of the invention.
  • Referring to Fig. 1, which is a flowchart of the method for interpolating an image based on a matrix in an image processing system according to one aspect of the invention:
  • In step S1, the image processing system acquires the pixel values of the pixels of a pixel array that contains the interpolation point.
  • For example, the image processing system may acquire the 16 pixel values of a 4*4 pixel array.
  • As shown in Fig. 2, the system acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array containing the interpolation point P.
  • Those skilled in the art should understand that the pixel array acquired by the image processing system is not limited to the one shown; 5*5 or 8*8 pixel arrays and so on may also be acquired.
  • In step S2, the image processing system determines the gradient direction of the image region formed by the pixel array containing the interpolation point.
  • For example, the gradient direction of the image region formed by pixels d11 to d44 containing the interpolation point P may be the diagonal direction formed by pixels d11, d22, d33 and d44, or the diagonal direction formed by pixels d14, d23, d32 and d41.
  • In step S3, the image processing system determines the triangle to be interpolated within the image region formed by the pixel array, based on the gradient direction and the position of the interpolation point.
  • For example, if the gradient direction is the diagonal direction of pixels d11, d22, d33 and d44 and the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, the system takes that triangle as the triangle to be interpolated; if the gradient direction is the diagonal direction formed by pixels d14, d23, d32 and d41 and P lies in the triangle formed by pixels d22, d23 and d32, the system selects that triangle as the triangle to be interpolated.
  • In step S4, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated and the distance between the interpolation point and one vertex of the triangle.
  • For example, if the triangle formed by d22, d23 and d32 is determined as the triangle to be interpolated, the system calculates the pixel value of the interpolation point P from the distance between P and pixel d22; it may equally use the distance between P and pixel d23 or d32.
  • Fig. 3 shows a flowchart of a method for interpolating an image based on a matrix in an image processing system according to another aspect of the invention.
  • Step S1 has been described in detail in the embodiment of Fig. 1 and is incorporated here by reference without repetition.
  • In step S2', the image processing system compares the pixel values of the pixels on one diagonal of the pixel array containing the interpolation point with the pixel values of the pixels on the other diagonal, to judge whether the image region formed by the pixel array is a flat region and thereby determine the gradient direction.
  • For example, the system compares the two pixels d22 and d33 on one diagonal with the pixel values of the two pixels d32 and d23 on the other diagonal, respectively, to judge whether the image region formed by the pixel array is a flat region and so determine the gradient direction.
  • Steps S3 and S4 have been described in detail in the embodiment of Fig. 1 and are incorporated here by reference without repetition.
  • Fig. 4 shows a flowchart of a method for interpolating an image based on a matrix in an image processing system according to still another aspect of the present invention.
  • In step S1', the image processing system acquires the pixel values of the pixels of a 2*2 pixel array that contains the interpolation point.
  • For example, as shown in Fig. 2, the 2*2 pixel array consists of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P.
  • In step S21, the image processing system compares the pixel value of each of the two pixels lying on the same diagonal, among the four pixels adjacent to the interpolation point, with the pixel values of its two adjacent pixels; when one of the two pixels on that diagonal has a pixel value not smaller than (that is, greater than or equal to) the pixel values of its two adjacent pixels while the other has a pixel value smaller than the pixel values of its two adjacent pixels, the image region formed by the four pixels is judged to be a flat region, and the gradient direction is that diagonal direction.
  • For example, let a be the pixel value of pixel d22, c the pixel value of pixel d33, b the pixel value of pixel d23 and d the pixel value of pixel d32.
  • The image processing system compares the pixel values a and c of the two pixels d22 and d33 on one diagonal with the pixel values b and d of the two pixels d23 and d32 on the other diagonal, respectively.
  • If a is not smaller than both b and d while c is not larger than both b and d, or the other way round, the image processing system judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d22 and d33.
  • Likewise, if b is not smaller than both a and c while d is not larger than both a and c, or the other way round, the image processing system judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d23 and d32.
  • In step S31, the image processing system selects, according to the position of the interpolation point and the diagonal determined as the gradient direction, a triangle having that diagonal as an edge as the triangle to be interpolated.
  • For example, if the diagonal direction formed by the two pixels d22 and d33 is determined to be the gradient direction, the system chooses between the triangle formed by pixels d32, d22 and d33 and the triangle formed by pixels d23, d22 and d33; if the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, that triangle is selected as the triangle to be interpolated.
  • Similarly, if the diagonal direction formed by the two pixels d23 and d32 is determined to be the gradient direction, the system chooses between the triangle formed by pixels d22, d23 and d32 and the triangle formed by pixels d33, d23 and d32; if the interpolation point P lies in the triangle formed by pixels d22, d23 and d32, that triangle is selected as the triangle to be interpolated.
  • In step S41, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle; as a preferred mode (see Fig. 5), it can use the formula zout = p0 - (-p1 + p0)*x - (p0 - p2)*y,
  • where p0, p1 and p2 are the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated, and x and y are the horizontal and vertical distances between the interpolation point and the pixel whose value is p0.
  • Fig. 6 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.
  • Step S1 has been described in detail in the embodiment of Fig. 1 and is incorporated here by reference without repetition.
  • In step S221, the image processing system determines that the condition "one of the two pixels on one diagonal has a pixel value greater than the pixel values of the two pixels on the other diagonal while the other has a pixel value smaller than them" does not hold for either diagonal, and therefore judges that the image region formed by the four pixels is a non-flat region.
  • In step S222, the image processing system determines the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels.
  • For example, the image processing system compares the pixel values of pixels d22, d23, d32 and d33 with the pixel values of their respective horizontally and vertically adjacent pixels to determine the gradient direction.
  • Steps S3 and S4 have been described in detail in the embodiment of Fig. 1 and are incorporated here by reference without repetition.
  • Fig. 7 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.
  • In step S1", the image processing system acquires the pixel values of the 16 pixels of a 4*4 pixel array containing the interpolation point; for example, as shown in Fig. 2, it acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array associated with the interpolation point P.
  • Step S221 has been described in detail in the embodiment of Fig. 6 and is incorporated here by reference without repetition.
  • In step S222', the image processing system compares the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels; if the sums of the horizontal comparison result value and the vertical comparison result value obtained for the two pixels are both greater than a predetermined value, that diagonal direction is determined to be the gradient direction.
  • For example, an extremum feature T is defined as T(Ta, Tb, Tc), where Tb is the pixel value of a pixel adjacent to the interpolation point and Ta and Tc are the pixel values of its two horizontally (or vertically) adjacent pixels.
  • If Ta > Tb && Tb < Tc, the extremum feature T is set to a constant (a comparison result value), for example T = 1; if Ta < Tb && Tb > Tc, T is set to the negative of that constant, for example T = -1; in all other cases T = 0.
  • Based on this definition, the image processing system obtains the horizontal and vertical extremum feature values, i.e. the comparison result values, of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P.
  • For example, for pixel d22, the system determines the horizontal extremum feature value of d22 (1, -1 or 0) by comparing the pixel value of d22 with the pixel values of pixels d21 and d23, and determines its vertical extremum feature value by comparing the pixel value of d22 with the pixel values of pixels d12 and d32; the horizontal and vertical extremum feature values of pixels d23, d32 and d33 are determined by the same comparison.
  • The image processing system then checks whether the sums of the horizontal and vertical extremum feature values of the two pixels on the same diagonal are both greater than a predetermined value, for example 0, and if so determines that diagonal direction to be the gradient direction.
  • For example, denote the sums of the horizontal and vertical extremum feature values of pixels d22, d23, d33 and d32 as infl_a, infl_b, infl_c and infl_d, where infl_a = infl_a_x + infl_a_y and so on: if infl_a > 0 && infl_c > 0, the diagonal direction formed by pixels d22 and d33 is the gradient direction, and if infl_b > 0 && infl_d > 0, the diagonal direction formed by pixels d23 and d32 is the gradient direction.
  • Those skilled in the art should understand that the comparison result values shown are only examples and do not limit the invention; the predetermined value is determined from the comparison result values, and if the comparison result values are not expressed as 1, -1 and 0, the predetermined value may correspondingly differ from 0.
  • Steps S31 and S41 have been described in detail in the embodiment of Fig. 4, are incorporated here by reference, and are not described again.
  • Fig. 8 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to still another aspect of the present invention.
  • Specifically, step S1" and step S221 have been described in detail in the embodiment shown in Fig. 7, are incorporated here by reference, and are not repeated.
  • In step S222", the image processing system compares the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels; if it is not the case that the sums of the horizontal and vertical comparison result values obtained for the two pixels are both greater than the predetermined value, the current gradient direction is determined from the gradient direction determined by the previous interpolation.
  • If this interpolation is the first interpolation of a row, as a preferred mode the image processing system directly takes the diagonal direction formed by pixels d23 and d32 as the current gradient direction.
  • For example, if the gradient direction determined by the previous interpolation is the diagonal direction on which the starting pixel of the matrix lies, the image processing system may take the diagonal direction of pixel d22 (the starting pixel of the matrix), i.e. the diagonal formed by pixels d22 and d33, as the current gradient direction; if the gradient direction determined by the previous interpolation is the other diagonal direction, the system takes the other diagonal, i.e. the diagonal direction formed by pixels d23 and d32, as the current gradient direction.
  • In step S31', the image processing system determines the triangle to be interpolated based on the gradient direction determined from the previous interpolation and on the position of the interpolation point.
  • For example, if the current gradient direction determined from the previous interpolation is the diagonal direction formed by pixels d23 and d32, the image processing system uses the position of the interpolation point P to select the triangle containing P, i.e. the triangle formed by pixels d22, d23 and d32, as the triangle to be interpolated.
  • Step S41 has been described in detail in the embodiment of Fig. 4 and is incorporated here by reference without repetition.
  • Fig. 9 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.
  • In step S51, the image processing system acquires the pixel values of the four pixels of a 2*2 pixel array, for example the pixel values of pixels d22, d23, d32 and d33.
  • In step S52, the image processing system compares the pixel values of the two pixels on one diagonal with the pixel values of the pixels on the other diagonal to judge whether the image region formed by the 2*2 pixel array is a flat region; the specific comparison is as described in the embodiments of Figs. 3, 4 and 6 and is not repeated here.
  • In step S53, if the image processing system judges that the image region formed by the 2*2 pixel array is a flat region, it determines a diagonal direction as the gradient direction based on the pixel values of the four pixels: if one of the two pixels on a diagonal has a pixel value greater than those of its two adjacent pixels while the other has a pixel value smaller than those of its two adjacent pixels, the system takes that diagonal direction as the gradient direction.
  • In step S54, if the image processing system judges that the image region formed by the 2*2 pixel array is a non-flat region, it additionally acquires the pixel values of the pixels adjacent to each pixel of the 2*2 array, for example the pixel values of pixels d12, d13, d21, d24, d31, d34, d43 and d44.
  • In step S55, the image processing system determines the horizontal and vertical extremum feature values of each pixel of the 2*2 pixel array and obtains, for each pixel, the sum of its horizontal and vertical extremum feature values; see the description of the extremum feature values in the embodiments of Figs. 6 to 8, which is not repeated here.
  • In step S56, the image processing system judges whether there is a diagonal whose two pixels each have a sum of horizontal and vertical extremum feature values greater than the predetermined value.
  • In step S57, if the sums of the horizontal and vertical extremum feature values of the two pixels on a diagonal are both greater than the predetermined value, the image processing system determines that diagonal direction as the gradient direction.
  • In step S58, if no diagonal has two pixels whose sums of horizontal and vertical extremum feature values are both greater than the predetermined value, the image processing system takes the gradient direction determined by the previous interpolation as the current gradient direction; see the detailed description in the embodiment of Fig. 8, which is not repeated here.
  • In step S59, the image processing system determines the triangle to be interpolated within the image region formed by the pixels adjacent to the interpolation point, based on the determined gradient direction and the position of the interpolation point; see the detailed descriptions in the embodiments of Figs. 4, 7 and 8, which are not repeated here.
  • In step S60, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle; see the detailed descriptions in the embodiments of Figs. 4, 7 and 8, which are not repeated here.
  • Referring to Fig. 10, a schematic diagram of an image processing system that interpolates an image based on a matrix according to one aspect of the invention, the image processing system includes a gradient determination module 11, a triangle determination module 12 and a calculation module 13.
  • First, the image processing system acquires the pixel values of the pixels of a pixel array containing the interpolation point; for example, it may acquire the 16 pixel values of a 4*4 pixel array and, as shown in Fig. 2, acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array containing the interpolation point P.
  • Those skilled in the art should understand that the pixel array acquired by the image processing system is not limited to the one shown; 2*2 or 8*8 pixel arrays and so on may also be acquired.
  • The gradient determination module 11 determines the gradient direction of the image region formed by the pixel array associated with the interpolation point; it may be, for example, the diagonal direction formed by pixels d14, d23, d32 and d41, or the diagonal direction formed by pixels d11, d22, d33 and d44.
  • The triangle determination module 12 determines the triangle to be interpolated within the image region formed by the pixel array, based on the obtained gradient direction and the position of the interpolation point.
  • For example, if the gradient determination module 11 determines that the gradient direction is the diagonal direction formed by pixels d11, d22, d33 and d44 and the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, the triangle determination module 12 determines that triangle as the triangle to be interpolated.
  • If the gradient determination module 11 determines that the gradient direction is the diagonal direction formed by pixels d14, d23, d32 and d41 and the interpolation point P lies in the triangle formed by pixels d22, d23 and d32, the triangle determination module 12 selects that triangle as the triangle to be interpolated.
  • Finally, the calculation module 13 calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle; for example, if the triangle formed by d22, d23 and d32 is the triangle to be interpolated, the calculation module 13 calculates the pixel value of P from the distance between P and pixel d22, although it may also use the distance between P and pixel d23 or d32.
  • Fig. 11 is a schematic diagram of an image processing system that interpolates an image based on a matrix according to another aspect of the invention; the system includes a gradient determination module 11, a triangle determination module 12 and a calculation module 13, and the gradient determination module 11 further includes a comparison module 111.
  • The comparison module 111 compares the pixel values of the pixels on one diagonal of the pixel array containing the interpolation point with the pixel values of the pixels on the other diagonal, to judge whether the image region formed by the pixel array is a flat region and thereby determine the gradient direction.
  • For example, the comparison module 111 compares the two pixels d22 and d33 on one diagonal with the pixel values of the two pixels d32 and d23 on the other diagonal, respectively, to judge whether the image region formed by the pixel array is a flat region and so determine the gradient direction.
  • In another aspect, the image processing system acquires the pixel values of the pixels of a 2*2 pixel array containing the interpolation point; for example, the 2*2 pixel array consists of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P.
  • The comparison module 111 compares the pixel value of each of the two pixels lying on the same diagonal, among the four pixels adjacent to the interpolation point, with the pixel values of its two adjacent pixels; when one of the two pixels on that diagonal has a pixel value not smaller than (that is, greater than or equal to) the pixel values of its two adjacent pixels while the other has a pixel value smaller than the pixel values of its two adjacent pixels, the image region formed by the four pixels is judged to be a flat region, and the gradient direction is that diagonal direction.
  • For example, let a be the pixel value of pixel d22, c the pixel value of pixel d33, b the pixel value of pixel d23 and d the pixel value of pixel d32; the comparison module 111 compares the pixel values a and c of the two pixels d22 and d33 on one diagonal with the pixel values b and d of the two pixels d23 and d32 on the other diagonal, respectively.
  • If a is not smaller than both b and d while c is not larger than both b and d, or the other way round, the comparison module 111 judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d22 and d33.
  • Likewise, if b is not smaller than both a and c while d is not larger than both a and c, or the other way round, the comparison module 111 also judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d23 and d32.
  • The triangle determination module 12 selects, according to the position of the interpolation point and the diagonal determined as the gradient direction, a triangle having that diagonal as an edge as the triangle to be interpolated; for example, if the comparison module 111 determines that the diagonal direction formed by the two pixels d22 and d33 is the gradient direction, the triangle determination module 12 chooses between the triangle formed by pixels d32, d22 and d33 and the triangle formed by pixels d23, d22 and d33, and if the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, it selects that triangle as the triangle to be interpolated.
  • Similarly, if the comparison module 111 determines that the diagonal direction formed by the two pixels d23 and d32 is the gradient direction, the triangle determination module 12 chooses between the triangle formed by pixels d22, d23 and d32 and the triangle formed by pixels d33, d23 and d32, and if the interpolation point P lies in the triangle formed by pixels d22, d23 and d32, it selects that triangle as the triangle to be interpolated.
  • The calculation module 13 can calculate the pixel value zout of the interpolation point according to the formula zout = p0 - (-p1 + p0)*x - (p0 - p2)*y, where p0, p1 and p2 are the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated, and x and y are the horizontal and vertical distances between the interpolation point and the pixel whose value is p0.
  • In another aspect, when the condition that one of the two pixels on one diagonal has a pixel value greater than the pixel values of the two pixels on the other diagonal while the other has a pixel value smaller than them does not hold, the comparison module 111 judges that the image region formed by the four pixels is a non-flat region.
  • The comparison module 111 then determines the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels.
  • The triangle determination module 12 and the calculation module 13 have been described in detail in the embodiment of Fig. 10 and are incorporated here by reference without repetition.
  • In another aspect, the image processing system acquires the pixel values of the 16 pixels of a 4*4 pixel array containing the interpolation point; for example, as shown in Fig. 2, it acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array associated with the interpolation point P.
  • The comparison module 111 judges, as described in the preceding embodiment, that the image region formed by the four pixels adjacent to the interpolation point is a non-flat region.
  • The comparison module 111 then compares the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels; if the sums of the horizontal and vertical comparison result values obtained for the two pixels are both greater than a predetermined value, that diagonal direction is determined to be the gradient direction.
  • For example, an extremum feature T is defined as T(Ta, Tb, Tc), where Tb is the pixel value of a pixel adjacent to the interpolation point and Ta and Tc are the pixel values of its two horizontally or vertically adjacent pixels.
  • If Ta > Tb && Tb < Tc, the extremum feature T is set to a constant (a comparison result value), for example T = 1; if Ta < Tb && Tb > Tc, T is set to the negative of that constant, for example T = -1; in all other cases T = 0.
  • Based on this definition, the comparison module 111 obtains the horizontal and vertical extremum feature values, i.e. the comparison result values, of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P.
  • For example, for pixel d22, the comparison module 111 determines the horizontal extremum feature value of d22 (1, -1 or 0) by comparing the pixel value of d22 with the pixel values of pixels d21 and d23, and determines its vertical extremum feature value by comparing the pixel value of d22 with the pixel values of pixels d12 and d32; the extremum feature values of pixels d23, d32 and d33 are determined by the same comparison.
  • The comparison module 111 then checks whether the sums of the horizontal and vertical extremum feature values of the two pixels on the same diagonal are both greater than a predetermined value, for example 0, and if so determines that diagonal direction to be the gradient direction.
  • For example, denoting the sums of the horizontal and vertical extremum feature values of pixels d22, d23, d33 and d32 as infl_a, infl_b, infl_c and infl_d (with infl_a = infl_a_x + infl_a_y, and so on): if infl_a > 0 && infl_c > 0, the comparison module 111 determines that the diagonal direction formed by pixels d22 and d33 is the gradient direction, and if infl_b > 0 && infl_d > 0, that the diagonal direction formed by pixels d23 and d32 is the gradient direction.
  • Those skilled in the art should understand that the comparison result values shown are only examples and do not limit the invention; the predetermined value is determined from the comparison result values, and if the comparison result values are not expressed as 1, -1 and 0, the predetermined value may correspondingly differ from 0.
  • In another aspect, after comparing the pixel values of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of their horizontally and vertically adjacent pixels, if it is not the case that the sums of the horizontal and vertical comparison result values of the two pixels are both greater than the predetermined value, the comparison module 111 determines the current gradient direction from the gradient direction determined by the previous interpolation; if this interpolation is the first interpolation of a row, as a preferred mode the comparison module 111 directly takes the diagonal direction formed by pixels d23 and d32 as the current gradient direction.
  • For example, if the gradient direction determined by the previous interpolation is the diagonal direction on which the starting pixel of the matrix lies, the comparison module 111 may take the diagonal direction of pixel d22 (the starting pixel of the matrix), i.e. the diagonal formed by pixels d22 and d33, as the current gradient direction; if the gradient direction determined by the previous interpolation is the other diagonal direction, the comparison module 111 takes the other diagonal, i.e. the diagonal direction formed by pixels d23 and d32, as the current gradient direction.
  • The triangle determination module 12 then determines the triangle to be interpolated based on the gradient direction determined from the previous interpolation and on the position of the interpolation point; for example, if the current gradient direction is the diagonal direction formed by pixels d23 and d32, the triangle determination module 12 uses the position of the interpolation point P to select the triangle containing P, i.e. the triangle formed by pixels d22, d23 and d32, as the triangle to be interpolated.
  • In a further aspect, the image processing system acquires the pixel values of the four pixels of a 2*2 pixel array, for example the pixel values of pixels d22, d23, d32 and d33.
  • The comparison module 111 compares the pixel values of the two pixels on one diagonal with the pixel values of the pixels on the other diagonal to judge whether the image region formed by the 2*2 pixel array is a flat region; the specific comparison is as described above and is not repeated here.
  • If the comparison module 111 judges that the image region formed by the 2*2 pixel array is a flat region, it determines a diagonal direction as the gradient direction based on the pixel values of the four pixels: if one of the two pixels on a diagonal has a pixel value greater than those of its two adjacent pixels while the other has a pixel value smaller than those of its two adjacent pixels, the comparison module 111 takes that diagonal direction as the gradient direction.
  • If the image region is judged to be a non-flat region, the image processing system additionally acquires the pixel values of the pixels adjacent to each pixel of the 2*2 array, for example the pixel values of pixels d12, d13, d21, d24, d31, d34, d43 and d44.
  • The comparison module 111 then determines the horizontal and vertical extremum feature values of each pixel of the 2*2 pixel array and obtains, for each pixel, the sum of its horizontal and vertical extremum feature values; this process is as described in the preceding embodiments and is not repeated here.
  • The comparison module 111 judges whether there is a diagonal whose two pixels each have a sum of horizontal and vertical extremum feature values greater than the predetermined value.
  • If the comparison module 111 determines that such a diagonal exists, it determines that diagonal direction as the gradient direction.
  • If the comparison module 111 judges that no diagonal has two pixels whose sums of horizontal and vertical extremum feature values are both greater than the predetermined value, it takes the gradient direction determined by the previous interpolation as the current gradient direction.
  • The triangle determination module 12 determines the triangle to be interpolated within the image region formed by the four pixels adjacent to the interpolation point, based on the determined gradient direction and the position of the interpolation point, as described in detail in the foregoing embodiments.
  • The calculation module 13 calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle, as described in detail in the foregoing embodiments.
  • After an image is processed by the method of the invention, the edge burrs or jagged artifacts along diagonal detail of the scaled image are effectively removed and a higher-quality image is obtained; moreover, the method of the invention requires only a small number of multiplications.
  • The input may be processed row first or column first, and each input row or column needs to be operated on only once to obtain the pixel value of an interpolation point, so the invention is also relatively simple in its computational structure.
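The last bullet notes that the method can walk the input row by row (or column by column) and produce one interpolation point per visit. A hypothetical driver loop along those lines is sketched below; the function name scale_image, the output-to-source coordinate mapping and the stand-in plane formula are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def scale_image(src, out_h, out_w):
    """Walk the output grid row by row, map each output pixel back to source
    coordinates and interpolate it there. The per-point interpolation is stood
    in for by a plane through part of the 2*2 cell; the patent's gradient test
    and triangle selection would slot in at that line."""
    src = np.asarray(src, dtype=np.float64)
    h, w = src.shape
    out = np.empty((out_h, out_w), dtype=np.float64)
    for oy in range(out_h):
        for ox in range(out_w):
            y = oy * (h - 1) / max(out_h - 1, 1)   # source row of the interpolation point
            x = ox * (w - 1) / max(out_w - 1, 1)   # source column of the interpolation point
            iy, ix = min(int(y), h - 2), min(int(x), w - 2)
            fy, fx = y - iy, x - ix
            d22, d23, d32 = src[iy, ix], src[iy, ix + 1], src[iy + 1, ix]
            # stand-in for steps A-C described above
            out[oy, ox] = d22 - (-d23 + d22) * fx - (d22 - d32) * fy
    return out
```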

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The present invention provides a method for interpolating an image based on a matrix, and an image processing system. The image processing system first determines the gradient direction of the image region formed by a pixel array containing the interpolation point, then determines the triangle to be interpolated within that image region based on the gradient direction and the position of the interpolation point, and finally calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle. This effectively removes the edge burrs or jagged artifacts in the diagonal detail of a scaled image and yields a higher-quality image, while the method of the invention is simple to compute and requires little calculation.

Description

Method for interpolating an image based on a matrix, and image processing system

Technical Field

The present invention relates to the field of image processing, and in particular to a method for interpolating an image based on a matrix and to an image processing system.

Background Art

Video/image scaling is widely used in the television and multimedia industries. In this field, most video/image scaling algorithms are based on polyphase interpolation filtering, which performs polyphase filtering between pixels, usually interpolating first in the horizontal direction and then in the vertical direction, or first vertically and then horizontally. This causes burrs or jagged artifacts to form along the edges of diagonal detail in the image, and such polyphase interpolation has to be realised with many multiplications, so its computational complexity is high.

Although Dan Su and other researchers have proposed a triangle interpolation theory (see the paper "Image interpolation using pixel-level triangulation", Computer Graphics Forum, Volume 23, Issue 2, pages 189-201), how to apply that theory in practice remains a problem to be solved by those skilled in the art.

Summary of the Invention

The purpose of the present invention is to provide a method and an image processing system for interpolating an image based on a matrix.

To achieve the above and other purposes, the present invention provides a method for interpolating an image based on a matrix in an image processing system, comprising the steps of:

A. determining the gradient direction of the image region formed by a pixel array containing the interpolation point;

B. determining, based on the gradient direction and the position of the interpolation point, the triangle to be interpolated within the image region; and

C. calculating the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated and the distance between the interpolation point and one vertex of the triangle.

In addition, the present invention provides a system for interpolating an image based on a matrix, comprising:

a gradient determination module, configured to determine the gradient direction of the image region formed by a pixel array containing the interpolation point; a triangle determination module, configured to determine, based on the gradient direction and the position of the interpolation point, the triangle to be interpolated within the image region; and

a calculation module, configured to calculate the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated and the distance between the interpolation point and one vertex of the triangle.

In summary, by analysing the pixel array containing the interpolation point, the method and image processing system of the invention obtain the gradient direction of the pixel array, determine the triangle to be interpolated from that gradient direction and thereby compute the pixel value of the interpolation point. This effectively removes the edge burrs or jagged artifacts in the diagonal detail of a scaled image and yields a higher-quality image, while the method of the invention is simple to compute and requires little calculation.

Brief Description of the Drawings

Fig. 1 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to one aspect of the invention. Fig. 2 is a schematic diagram of a pixel array containing the interpolation point.

Fig. 3 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to another aspect of the invention. Fig. 4 is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to still another aspect of the invention. Fig. 5 is a schematic diagram showing how the method calculates the pixel value of the interpolation point.

Figs. 6 to 9 are flowcharts of methods for interpolating an image based on a matrix in an image processing system according to further aspects of the invention. Fig. 10 is a schematic diagram of an image processing system that interpolates an image based on a matrix according to one aspect of the invention.

Fig. 11 is a schematic diagram of an image processing system that interpolates an image based on a matrix according to another aspect of the invention.

Detailed Description of the Embodiments

Referring to Fig. 1, which is a flowchart of a method for interpolating an image based on a matrix in an image processing system according to one aspect of the invention:

First, in step S1, the image processing system acquires the pixel values of the pixels of a pixel array containing the interpolation point. For example, the image processing system may acquire the 16 pixel values of a 4*4 pixel array; as shown in Fig. 2, it acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array containing the interpolation point P.

Those skilled in the art should understand that the pixel array acquired by the image processing system is not limited to the one shown; 5*5 or 8*8 pixel arrays and so on may also be acquired.
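As an illustration of step S1, the sketch below gathers the 4*4 neighbourhood d11..d44 around an interpolation position. The function name, the convention that d22 is the source pixel at (floor(y), floor(x)) and the border clamping policy are assumptions made for this example; the patent only requires that the array contain the interpolation point.

```python
import numpy as np

def acquire_4x4(img, y, x):
    """Gather the 4*4 block d11..d44 of a single-channel image around the
    source position (y, x), clamping indices at the image border."""
    img = np.asarray(img)
    h, w = img.shape
    iy, ix = int(np.floor(y)), int(np.floor(x))          # d22 sits at (iy, ix)
    rows = [min(max(iy + r, 0), h - 1) for r in (-1, 0, 1, 2)]
    cols = [min(max(ix + c, 0), w - 1) for c in (-1, 0, 1, 2)]
    return img[np.ix_(rows, cols)]                        # 4*4 array: d11..d44
```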
Next, in step S2, the image processing system determines the gradient direction of the image region formed by the pixel array containing the interpolation point. For example, it determines the gradient direction of the image region formed by the pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 containing the interpolation point P; the gradient direction may be, for example, the diagonal direction formed by pixels d11, d22, d33 and d44, or the diagonal direction formed by pixels d14, d23, d32 and d41.

Next, in step S3, the image processing system determines the triangle to be interpolated within the image region formed by the pixel array, based on the gradient direction and the position of the interpolation point. For example, if the gradient direction is the diagonal direction formed by pixels d11, d22, d33 and d44 and the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, the system determines that triangle as the triangle to be interpolated. As another example, if the gradient direction is the diagonal direction formed by pixels d14, d23, d32 and d41 and P lies in the triangle formed by pixels d22, d23 and d32, the system selects that triangle as the triangle to be interpolated.

Finally, in step S4, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle. For example, if the triangle formed by d22, d23 and d32 is the triangle to be interpolated, the system calculates the pixel value of P from the distance between P and pixel d22; those skilled in the art should understand that it may also calculate the pixel value of P from the distance between P and pixel d23 or d32.

Fig. 3 shows a flowchart of a method for interpolating an image based on a matrix in an image processing system according to another aspect of the invention.

Specifically, step S1 has been described in detail in the embodiment of Fig. 1 and is incorporated here by reference without repetition. Next, in step S2', the image processing system compares the pixel values of the pixels on one diagonal of the pixel array containing the interpolation point with the pixel values of the pixels on the other diagonal, to judge whether the image region formed by the pixel array is a flat region and thereby determine the gradient direction. For example, the system compares the two pixels d22 and d33 on one diagonal with the pixel values of the two pixels d32 and d23 on the other diagonal, respectively, to judge whether the image region formed by the pixel array is a flat region and so determine the gradient direction.

Next, steps S3 and S4 have been described in detail in the embodiment of Fig. 1 and are incorporated here by reference without repetition.

Fig. 4 shows a flowchart of a method for interpolating an image based on a matrix in an image processing system according to still another aspect of the invention.

Specifically, in step S1', the image processing system acquires the pixel values of the pixels of a 2*2 pixel array containing the interpolation point. For example, as shown in Fig. 2, the 2*2 pixel array consists of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P. Next, in step S21, the image processing system compares the pixel value of each of the two pixels lying on the same diagonal, among the four pixels adjacent to the interpolation point, with the pixel values of its two adjacent pixels; when one of the two pixels on that diagonal has a pixel value not smaller than (that is, greater than or equal to) the pixel values of its two adjacent pixels while the other has a pixel value smaller than the pixel values of its two adjacent pixels, the image region formed by the four pixels is judged to be a flat region, and the gradient direction is that diagonal direction.

For example, let: a = the pixel value of pixel d22, c = the pixel value of pixel d33,

b = the pixel value of pixel d23, d = the pixel value of pixel d32.

The image processing system compares the pixel values a and c of the two pixels d22 and d33 on one diagonal with the pixel values b and d of the two pixels d23 and d32 on the other diagonal, that is:

if: a >= b && a >= d && c <= b && c <= d, or:

a <= b && a <= d && c >= b && c >= d,

the image processing system judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d22 and d33.

And if: b >= a && b >= c && d <= a && d <= c, or:

b <= a && b <= c && d >= a && d >= c,

the image processing system likewise judges that the image region formed by the four pixels is a flat region, and the gradient direction is the diagonal direction formed by the two pixels d23 and d32.
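A compact rendering of this diagonal comparison is sketched below; the function name and the None-for-non-flat return convention are illustrative assumptions.

```python
def flat_region_gradient(a, b, c, d):
    """Diagonal comparison of step S21: a = d22, c = d33 (one diagonal),
    b = d23, d = d32 (the other diagonal). Returns which diagonal is the
    gradient direction of a flat region, or None if the region is not flat."""
    if (a >= b and a >= d and c <= b and c <= d) or \
       (a <= b and a <= d and c >= b and c >= d):
        return "d22-d33"          # main diagonal is the gradient direction
    if (b >= a and b >= c and d <= a and d <= c) or \
       (b <= a and b <= c and d >= a and d >= c):
        return "d23-d32"          # anti-diagonal is the gradient direction
    return None                   # neither condition holds: non-flat region (see step S221)
```

For instance, flat_region_gradient(10, 10, 10, 10) returns "d22-d33", since for a uniform cell both conditions hold and the first diagonal is tested first.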
Next, in step S31, the image processing system selects, according to the position of the interpolation point and the diagonal determined as the gradient direction, a triangle having that diagonal as an edge as the triangle to be interpolated. For example, if the system determines that the diagonal direction formed by the two pixels d22 and d33 is the gradient direction, it may choose between the triangle formed by pixels d32, d22 and d33 and the triangle formed by pixels d23, d22 and d33; if the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, the system selects that triangle as the triangle to be interpolated. As another example, if the system determines that the diagonal direction formed by the two pixels d23 and d32 is the gradient direction, it may choose between the triangle formed by pixels d22, d23 and d32 and the triangle formed by pixels d33, d23 and d32; if the interpolation point P lies in the triangle formed by pixels d22, d23 and d32, the system selects that triangle as the triangle to be interpolated.
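To illustrate the selection just described, the hypothetical helper below picks the triangle containing the interpolation point once a diagonal has been chosen; the coordinate convention (offsets fx, fy measured from d22 within the 2*2 cell) and the string labels are assumptions of this sketch.

```python
def select_triangle(gradient_diagonal, fx, fy):
    """Step S31 sketch: return the vertices (named as in Fig. 2) of the
    triangle that has the gradient diagonal as an edge and contains the point
    at horizontal/vertical offsets (fx, fy) from d22, both in [0, 1]."""
    if gradient_diagonal == "d22-d33":
        # the main diagonal splits the cell into an upper-right and a lower-left triangle
        return ("d22", "d23", "d33") if fx >= fy else ("d22", "d32", "d33")
    # "d23-d32": the anti-diagonal splits the cell into an upper-left and a lower-right triangle
    return ("d22", "d23", "d32") if fx + fy <= 1.0 else ("d33", "d23", "d32")
```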
Next, in step S41, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle. As a preferred mode, as shown in Fig. 5, the image processing system can calculate the pixel value zout of the interpolation point according to the formula: zout = p0 - (-p1 + p0)*x - (p0 - p2)*y,

where p0, p1 and p2 are the pixel values of the pixels corresponding to the three vertices of the triangle to be interpolated, and x and y are the horizontal and vertical distances between the interpolation point and the pixel whose value is p0.
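Rearranged, the formula reads zout = p0 + (p1 - p0)*x + (p2 - p0)*y, i.e. the plane through the three vertices when p1 is read as the horizontal neighbour of p0 and p2 as its vertical neighbour. A short, hypothetical numeric check (the values are made up for illustration):

```python
def zout(p0, p1, p2, x, y):
    """Formula of step S41: p0, p1, p2 are the vertex pixel values and (x, y)
    the horizontal/vertical distances of the interpolation point from the
    pixel whose value is p0."""
    return p0 - (-p1 + p0) * x - (p0 - p2) * y

# Hypothetical check: triangle d22=100, d23=120, d32=80, point 0.25 right of
# and 0.5 below d22 -> 100 + (120-100)*0.25 + (80-100)*0.5 = 95.0
print(zout(100, 120, 80, 0.25, 0.5))   # 95.0
```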
Fig. 6 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.

Specifically, step S1 has been described in detail in the embodiment of Fig. 1 and is incorporated here by reference without repetition. Next, in step S221, the image processing system determines that the condition "one of the two pixels on one diagonal has a pixel value greater than the pixel values of the two pixels on the other diagonal while the other has a pixel value smaller than them" does not hold, and therefore judges that the image region formed by the four pixels is a non-flat region.

For example, again let: a = the pixel value of pixel d22, c = the pixel value of pixel d33,

b = the pixel value of pixel d23, d = the pixel value of pixel d32.

The image processing system judges that none of:

a >= b && a >= d && c <= b && c <= d, or:

a <= b && a <= d && c >= b && c >= d, or:

b >= a && b >= c && d <= a && d <= c, or:

b <= a && b <= c && d >= a && d >= c

holds, and therefore judges that the image region formed by the four pixels is a non-flat region.

Next, in step S222, the image processing system determines the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels. For example, the image processing system compares the pixel values of pixels d22, d23, d32 and d33 with the pixel values of their respective horizontally and vertically adjacent pixels to determine the gradient direction.

Next, steps S3 and S4 have been described in detail in the embodiment of Fig. 1 and are incorporated here by reference without repetition.

Fig. 7 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.

Specifically, in step S1", the image processing system acquires the pixel values of the 16 pixels of a 4*4 pixel array containing the interpolation point; for example, as shown in Fig. 2, it acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array associated with the interpolation point P.

Next, step S221 has been described in detail in the embodiment of Fig. 6 and is incorporated here by reference without repetition. Next, in step S222', the image processing system compares the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels; if the sums of the horizontal comparison result value and the vertical comparison result value obtained for the two pixels are both greater than a predetermined value, that diagonal direction is determined to be the gradient direction.

For example, an extremum feature T is defined as T(Ta, Tb, Tc), where Tb is the pixel value of a pixel adjacent to the interpolation point, and Ta and Tc are the pixel values of the two pixels horizontally or vertically adjacent to that pixel.

If Ta > Tb && Tb < Tc, the extremum feature T is set to a constant (a comparison result value), for example T = 1; if Ta < Tb && Tb > Tc, T is set to the negative of that constant, for example T = -1; in other cases, T = 0.

From this definition the image processing system obtains the horizontal and vertical extremum feature values, i.e. the comparison result values, of the four pixels d22, d23, d32 and d33 adjacent to the interpolation point P. For example, for pixel d22, the image processing system determines the horizontal extremum feature value of d22 (1, -1 or 0) by comparing the pixel value of d22 with the pixel values of pixels d21 and d23; likewise, by comparing the pixel value of d22 with the pixel values of pixels d12 and d32, it determines the vertical extremum feature value of d22. For pixels d23, d32 and d33, the same comparison is used to determine the horizontal and vertical extremum feature values of each pixel.

The image processing system then checks whether the sums of the horizontal and vertical extremum feature values of the two pixels on the same diagonal are both greater than a predetermined value, for example 0, and if so determines that diagonal direction to be the gradient direction. For example, the horizontal and vertical extremum feature values of pixels d22, d23, d33 and d32 are written infl_a_x, infl_a_y, infl_b_x, infl_b_y, infl_c_x, infl_c_y, infl_d_x and infl_d_y respectively.

If infl_a > 0 && infl_c > 0, the image processing system determines that the diagonal direction formed by pixels d22 and d33 is the gradient direction, where infl_a = infl_a_x + infl_a_y and infl_c = infl_c_x + infl_c_y.

If infl_b > 0 && infl_d > 0, the image processing system determines that the diagonal direction formed by pixels d23 and d32 is the gradient direction, where infl_b = infl_b_x + infl_b_y and infl_d = infl_d_x + infl_d_y.

Those skilled in the art should understand that the comparison result values shown are only examples and are not intended to limit the invention; the predetermined value is determined from the comparison result values, and if the comparison result values are not expressed as 1, -1 and 0, the predetermined value may correspondingly differ from 0.
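Under the example assumptions that the constant is 1 and the predetermined value is 0, the extremum features and the diagonal decision of step S222' can be sketched as follows. The function names, the string return values and the requirement that the 2*2 cell be at least one pixel away from the image border (so that the 4*4 array of step S1" is available) are assumptions of this sketch.

```python
def extremum_feature(ta, tb, tc):
    """Extremum feature T(Ta, Tb, Tc): +1 if the centre pixel is a local
    minimum of the three, -1 if it is a local maximum, otherwise 0."""
    if ta > tb and tb < tc:
        return 1
    if ta < tb and tb > tc:
        return -1
    return 0

def gradient_from_extrema(img, iy, ix):
    """For the 2*2 cell whose top-left pixel d22 sits at (iy, ix) of a 2-D
    array img, sum each cell pixel's horizontal and vertical extremum features
    (the infl_* values) and pick the diagonal whose two sums are both above
    the predetermined value 0. Returns "d22-d33", "d23-d32" or None."""
    def infl(y, x):
        h = extremum_feature(img[y, x - 1], img[y, x], img[y, x + 1])
        v = extremum_feature(img[y - 1, x], img[y, x], img[y + 1, x])
        return h + v
    infl_a, infl_c = infl(iy, ix),     infl(iy + 1, ix + 1)   # d22, d33
    infl_b, infl_d = infl(iy, ix + 1), infl(iy + 1, ix)       # d23, d32
    if infl_a > 0 and infl_c > 0:
        return "d22-d33"
    if infl_b > 0 and infl_d > 0:
        return "d23-d32"
    return None   # no diagonal qualifies (handled by step S222" below)
```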
Next, steps S31 and S41 have been described in detail in the embodiment of Fig. 4, are incorporated here by reference and are not described again.

Fig. 8 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention. Specifically, step S1" and step S221 have been described in detail in the embodiment shown in Fig. 7, are incorporated here by reference and are not repeated.

Next, in step S222", the image processing system compares the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels; if it is not the case that the sums of the horizontal and vertical comparison result values obtained for the two pixels are both greater than the predetermined value, the current gradient direction is determined from the gradient direction determined by the previous interpolation. If this interpolation is the first interpolation of a row, as a preferred mode the image processing system directly takes the diagonal direction formed by pixels d23 and d32 as the current gradient direction. For example, if the gradient direction determined by the previous interpolation is the diagonal direction on which the starting pixel of the four-pixel matrix lies, the image processing system may take the diagonal direction of pixel d22 (the starting pixel of the matrix), i.e. the diagonal direction formed by pixels d22 and d33, as the current gradient direction; if the gradient direction determined by the previous interpolation is the other diagonal direction, the image processing system takes the other diagonal, i.e. the diagonal direction formed by pixels d23 and d32, as the current gradient direction.
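In other words, when neither diagonal qualifies, the previously chosen direction is carried forward, with a fixed default at the start of a row. A minimal sketch of that fallback, with illustrative argument names, is given below.

```python
def fallback_gradient(previous, first_in_row):
    """Step S222" sketch: reuse the direction of the previous interpolation
    when neither diagonal qualifies; at the start of a row default (as the
    preferred mode) to the d23-d32 diagonal. 'previous' is "d22-d33",
    "d23-d32" or None."""
    if first_in_row or previous is None:
        return "d23-d32"
    return previous          # carry the previously determined direction forward
```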
Next, in step S31', the image processing system determines the triangle to be interpolated based on the gradient direction determined from the previous interpolation and on the position of the interpolation point. For example, if the image processing system determines from the previous interpolation that the current gradient direction is the diagonal direction formed by pixels d23 and d32, it then uses the position of the interpolation point P to select the triangle containing P, i.e. the triangle formed by pixels d22, d23 and d32, as the triangle to be interpolated.

Next, step S41 has been described in detail in the embodiment of Fig. 4, is incorporated here by reference, and is not repeated.

Fig. 9 shows a flowchart of another method for interpolating an image based on a matrix in an image processing system according to the invention.

Specifically, in step S51, the image processing system acquires the pixel values of the four pixels of a 2*2 pixel array; for example, it acquires the pixel values of pixels d22, d23, d32 and d33.

Next, in step S52, the image processing system compares the pixel values of the two pixels on one diagonal with the pixel values of the pixels on the other diagonal to judge whether the image region formed by the 2*2 pixel array is a flat region; the specific comparison is as described in the embodiments of Figs. 3, 4 and 6 and is not repeated here.

Next, in step S53, if the image processing system judges that the image region formed by the 2*2 pixel array is a flat region, it determines a diagonal direction as the gradient direction based on the pixel values of the four pixels; that is, if one of the two pixels on a diagonal has a pixel value greater than the pixel values of its two adjacent pixels while the other has a pixel value smaller than the pixel values of its two adjacent pixels, the image processing system takes that diagonal direction as the gradient direction.

In step S54, if the image processing system judges that the image region formed by the 2*2 pixel array is a non-flat region, it additionally acquires the pixel values of the pixels adjacent to each pixel of the 2*2 pixel array, for example the pixel values of pixels d12, d13, d21, d24, d31, d34, d43 and d44.

Next, in step S55, the image processing system determines the horizontal and vertical extremum feature values of each pixel of the 2*2 pixel array and obtains, for each pixel, the sum of its horizontal and vertical extremum feature values; see the description of determining the extremum feature values of a pixel in the embodiments of Figs. 6 to 8, which is not repeated here.

Next, in step S56, the image processing system judges whether there is a diagonal whose two pixels each have a sum of horizontal and vertical extremum feature values greater than the predetermined value.

Next, in step S57, if the sums of the horizontal and vertical extremum feature values of the two pixels on a diagonal are both greater than the predetermined value, the image processing system determines that diagonal direction as the gradient direction.

Next, in step S58, if no diagonal has two pixels whose sums of horizontal and vertical extremum feature values are both greater than the predetermined value, the image processing system takes the gradient direction determined by the previous interpolation as the current gradient direction; see the detailed description in the embodiment of Fig. 8, which is not repeated here.

Next, in step S59, the image processing system determines the triangle to be interpolated within the image region formed by the pixels adjacent to the interpolation point, based on the determined gradient direction and the position of the interpolation point; see the detailed descriptions in the embodiments of Figs. 4, 7 and 8, which are not repeated here.

Finally, in step S60, the image processing system calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle; see the detailed descriptions in the embodiments of Figs. 4, 7 and 8, which are not repeated here.
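Taken together, the branch structure of steps S52 to S58 can be pictured as one decision function. The sketch below simply stitches together the earlier sketches (flat-region test, extremum-feature sums, previous-direction fallback) under the same assumptions; it is not the patent's implementation, and it assumes the 2*2 cell is at least one pixel away from the border of the 2-D array img.

```python
def decide_gradient(img, iy, ix, previous):
    """Decision cascade of Fig. 9 for the 2*2 cell whose top-left pixel d22
    sits at (iy, ix): flat test, then extremum-feature sums, then fallback."""
    a, b = img[iy, ix], img[iy, ix + 1]          # d22, d23
    d, c = img[iy + 1, ix], img[iy + 1, ix + 1]  # d32, d33

    # 1) flat-region test (steps S52/S53, using the S21-style comparison)
    if (a >= b and a >= d and c <= b and c <= d) or (a <= b and a <= d and c >= b and c >= d):
        return "d22-d33"
    if (b >= a and b >= c and d <= a and d <= c) or (b <= a and b <= c and d >= a and d >= c):
        return "d23-d32"

    # 2) non-flat region: extremum-feature sums over the surrounding pixels (steps S54-S57)
    def t(ta, tb, tc):
        return 1 if (ta > tb and tb < tc) else -1 if (ta < tb and tb > tc) else 0
    def infl(y, x):
        return (t(img[y, x - 1], img[y, x], img[y, x + 1]) +
                t(img[y - 1, x], img[y, x], img[y + 1, x]))
    if infl(iy, ix) > 0 and infl(iy + 1, ix + 1) > 0:
        return "d22-d33"
    if infl(iy, ix + 1) > 0 and infl(iy + 1, ix) > 0:
        return "d23-d32"

    # 3) neither diagonal qualifies: reuse the previous direction (step S58)
    return previous if previous is not None else "d23-d32"
```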
Referring to Fig. 10, a schematic diagram of an image processing system that interpolates an image based on a matrix according to one aspect of the invention, the image processing system includes a gradient determination module 11, a triangle determination module 12 and a calculation module 13.

First, the image processing system acquires the pixel values of the pixels of a pixel array containing the interpolation point. For example, it may acquire the 16 pixel values of a 4*4 pixel array; as shown in Fig. 2, it acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 array containing the interpolation point P.

Those skilled in the art should understand that the pixel array acquired by the image processing system is not limited to the one shown; 2*2 or 8*8 pixel arrays and so on may also be acquired.

Next, the gradient determination module 11 determines the gradient direction of the image region formed by the pixel array associated with the interpolation point. For example, it determines the gradient direction of the image region formed by the pixels d11 to d44 containing the interpolation point P, which may be the diagonal direction formed by pixels d14, d23, d32 and d41, or the diagonal direction formed by pixels d11, d22, d33 and d44.

Next, the triangle determination module 12 determines the triangle to be interpolated within the image region formed by the pixel array, based on the obtained gradient direction and the position of the interpolation point. For example, if the gradient determination module 11 determines that the gradient direction is the diagonal direction formed by pixels d11, d22, d33 and d44 and the interpolation point P lies in the triangle formed by pixels d32, d22 and d33, the triangle determination module 12 determines that triangle as the triangle to be interpolated. As another example, if the gradient determination module 11 determines that the gradient direction is the diagonal direction formed by pixels d14, d23, d32 and d41 and the interpolation point P lies in the triangle formed by pixels d22, d23 and d32, the triangle determination module 12 selects that triangle as the triangle to be interpolated.

Finally, the calculation module 13 calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle. For example, if the triangle determination module 12 determines the triangle formed by d22, d23 and d32 as the triangle to be interpolated, the calculation module 13 calculates the pixel value of the interpolation point P from the distance between P and pixel d22; those skilled in the art should understand that the image processing system may also calculate the pixel value of P from the distance between P and pixel d23 or d32.
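The three modules named above can be pictured as a thin wiring layer around the per-point steps. The class name and the callable-injection style below are assumptions made purely for illustration of that decomposition.

```python
class MatrixInterpolationSystem:
    """Illustrative decomposition into the three claimed modules; the concrete
    strategies are injected, so this sketch only shows the wiring."""
    def __init__(self, gradient_module, triangle_module, calculation_module):
        self.gradient_module = gradient_module          # module 11
        self.triangle_module = triangle_module          # module 12
        self.calculation_module = calculation_module    # module 13

    def interpolate(self, pixel_array, point):
        direction = self.gradient_module(pixel_array, point)            # step A
        triangle = self.triangle_module(direction, pixel_array, point)  # step B
        return self.calculation_module(triangle, point)                 # step C
```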
M小出了木发明 j 面的基 矩阵对图像进行插值的图像处观***小意图。其屮, 所述图像处观*** 1ϋ括: 梯度确定模块 1 1、 二角形确定校块 12、 及讣算校块 13; 所述梯度确 定模块 11还 lii括: 比较模块 1 1 1。
ft体的, 图像处观***^取各像素点的像素忸的工作过程 d在图 10所小的实施例中详细 描述, 在此以引用的 A式 ϋ含, 不 重述。
接着, 比较模块 i l l 将 to含所述插值点的像素点阵列屮一对角线 I:的各像素点各 £J的像 素值分别与 _i一对角线 I:的各像素点的像素忸进行比较, 以判断所述像素点阵列构成的图像 区域 否为平坦区域来确定所述梯度方向。 例如, 比较校块 il l 将处 对角线 I:的两像素点 d22和 (133分别与另 ·对角线 k的两像素点 d32和 d23的像素忸进行比较,以判断所述像素点 阵列构成的图像区域 否为 T-坦 域来确定所述梯度 —向。
接着, 二角形确定模块 12、 及计算校块 13巳在图 10所示的 ¾施例屮详细描述, 在此以 引用的方式 1ϋ含, 不 重述。
以下将基于图 11所小的图像处理***描述木发明又 ·个方面的基 矩阵对图像进行插值 的工作过程。 ft体的, 图像处观*** ^取 2*2像素点阵列 ϋ含的各像素点的像素值, 中, 所述 2*2 像素点阵列 lii含插忸点。 例如, 如图 2所示, 所述 2*2像素点阵列由与插值点 P相邻的 4个 像素点 d22、 d23、 d32和 d33所构成。
接着, 比较模块 111 将与所述插值点相邻的 4个像素点中处 同 ·对角线 I:的两个像素 点各自的像素值分别与各 £J相邻的两像素点的像素忸进行比较, 1处 同一对角线 I:的两个 像素点中 -者的像素忸不小 ί (即人 ί·或等于 ) :相邻的两像素点的像素值、 [flJ ·者小 于其相邻的两像素点的像素忸, 则判断所述 4 个像素点构成的图像区域为平 域, 所述梯 度方向即为¾对角线 A向。
例如, 设定: a二像素点 d22的像素值, ^像素点 d33的像素忸,
b=像素点 d23的像素值, 像素点 d32的像素忸,
比较模块 111将对角线 I:的两像素点 d22和 d33的像素忸 a和 c分别与 j一对角线 I二的 两像素点 (123和 d32的像素值 b和 d进行比较, G|J:
如果: a>=b&&a>=d&&c<=b&&c<=d、 或 -:
a <= b && a <= d && c >= b && c >=d,
则所述比较模块 111 判断所述 4个像素点构成的图像 域为平坦 域, 所述梯度 Α向即 为两像素点 (122和 d33所形成的对角线 A向。
liij如果: b>=a&&b>=c&&d<=a&&d<=c、 或 -:
b <= a && b <= c && d >= a && d >=c,
则比较校块 in 也判断所述 4个像素点构成的图像 域为平 区域, 所述梯度方向即为 两像素点 d23和 d32所形成的对角线 A向。
接着,所述二角形确定模块 12根据所述插值点的位置及所确定的作为梯度方向的对角线, 选择以该对角线为边的三角形作为待插忸的三角形。 例如, 比较模块 111 确定两像素点 d22 和 (133所形成的对角线 A向为梯度 A向, 故所述二角形确定校块 12 nj在像素点 d32、 (122和 d33构成的三角形或者像素点 d23、 d22和 d33构成的二角形屮选择 · 作为待插值的三角形, 而如果插值点 Ρ处 i'rtl像素点 (132、 d22和 d33构成的三角形中, 则所述二角形确定模块 12 选择像素点 (132、 d22和 d33构成的二角形作为待插忸的三角形。 再例如, 比较校块 111确定 确定两像素点 d23和 d32所形成的对角线方向为梯度 —向, 故所述三角形确定校块 12 nj'在像 素点 d22、 (123和 (132构成的三角形或 像素点 d33、 d23禾 tl d32构成的二角形屮选择一 作 为待插值的二角形, ^如果插忸点 P处 rtl像素点 (122、 d23和 d32构成的二角形屮, 则所述 三角形确定校块 12选择像素点 d22、 d23和 d32构成的三角形作为待插忸的三角形。 接着, 讣算模块 13基 所确定的三角形的二个願点所对应的像素点的像素忸及插忸点距 离所述三角形中一个願点的距离, 讣算所述插忸点的像素值。 作为 ·种优选 A式, 如图 5所 示, 讣算模块 13可按照如卜公式讣算所述插忸点的像素忸 zout:
zout = p0 - (-p1 + p0)*x - (p0 - p2)*y,
where p0, p1 and p2 are the pixel values of the pixels corresponding to the 3 vertices of the triangle to be used for interpolation, and x and y are respectively the horizontal distance and the vertical distance from the interpolation point to the pixel whose pixel value is p0.
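For reference, the formula amounts to a planar fit over the triangle and can be written as a one-line helper; treating x and y as normalized distances (0 to 1) from the vertex whose value is p0 is an assumption made here for illustration.

def zout(p0, p1, p2, x, y):
    # zout = p0 - (p0 - p1)*x - (p0 - p2)*y
    return p0 - (p0 - p1) * x - (p0 - p2) * y

# e.g. zout(100, 120, 80, 0.25, 0.5) evaluates to 95.0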
The following describes in detail, based on the image processing system shown in Fig. 11, the working process of matrix-based image interpolation according to yet another aspect of the present invention.
Specifically, the process by which the image processing system acquires the pixel value of each pixel has already been described in detail in the foregoing embodiment and is incorporated here by reference, not repeated.
Next, when the condition that one of the two pixels on one diagonal has a pixel value not less than the pixel values of the two pixels on the other diagonal while the other has a pixel value less than the pixel values of the two pixels on the other diagonal does not hold, the comparison module 111 determines that the image region formed by the 4 pixels is a non-flat region.
For example, again let: a = the pixel value of pixel d22, c = the pixel value of pixel d33,
b = the pixel value of pixel d23, d = the pixel value of pixel d32.
The comparison module 111 determines that:
a >= b && a >= d && c <= b && c <= d, or:
a <= b && a <= d && c >= b && c >= d, or:
b >= a && b >= c && d <= a && d <= c, or:
b <= a && b <= c && d >= a && d >= c,
none of these holds, and the comparison module 111 therefore determines that the image region formed by the 4 pixels is a non-flat region.
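Continuing the flatness sketch given earlier, the non-flat branch is simply the case in which none of the four comparisons holds; the pixel values below are made up for illustration.

# Hypothetical values for d22, d23, d33, d32 that satisfy none of the four cases.
a, b, c, d = 100, 140, 160, 30
is_flat = ((a >= b and a >= d and c <= b and c <= d) or
           (a <= b and a <= d and c >= b and c >= d) or
           (b >= a and b >= c and d <= a and d <= c) or
           (b <= a and b <= c and d >= a and d >= c))
print("non-flat region" if not is_flat else "flat region")   # prints "non-flat region"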
Next, the comparison module 111 determines the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels.
Next, the triangle determination module 12 and the calculation module 13 have already been described in detail in the embodiment shown in Fig. 10 and are incorporated here by reference, not repeated.
The following describes in detail, based on the image processing system shown in Fig. 11, the working process of matrix-based image interpolation according to yet another aspect of the present invention.
Specifically, the image processing system acquires the pixel values of the 16 pixels contained in a 4*4 pixel array, the 4*4 pixel array containing the interpolation point. For example, as shown in Fig. 2, the image processing system acquires the pixel values of pixels d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44 of the 4*4 pixel array associated with the interpolation point P.
Next, the process by which the comparison module 111 determines that the image region formed by the 4 pixels adjacent to the interpolation point is a non-flat region has already been described in detail in the previous embodiment and is incorporated here by reference, not repeated.
Next, after comparing the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels, when the comparison module 111 determines that the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than the predetermined value, that diagonal direction is determined to be the gradient direction.
For example, define an extremum feature T as T(Ta, Tb, Tc), where Tb is the pixel value of a pixel adjacent to the interpolation point, and Ta and Tc are the pixel values of the two pixels adjacent to that pixel in the horizontal or vertical direction.
If Ta > Tb && Tb < Tc, the extremum feature T is set to a constant (i.e. the comparison result value), e.g. T = 1; if Ta < Tb && Tb > Tc, the extremum feature T is set to the negative of that constant, e.g. T = -1; in all other cases, the extremum feature T is set to T = 0.
Accordingly, based on the above definition, the comparison module 111 obtains the horizontal and vertical extremum feature values, i.e. the comparison result values, of each of the 4 pixels d22, d23, d32 and d33 adjacent to the interpolation point P. For example, for pixel d22, by comparing the pixel value of d22 with the pixel values of pixels d21 and d23, the comparison module 111 can determine the horizontal extremum feature value of d22 to be 1, -1 or 0; likewise, by comparing the pixel value of d22 with the pixel values of pixels d12 and d32, it can determine the vertical extremum feature value of d22. For pixels d23, d32 and d33, the same comparisons determine each pixel's horizontal and vertical extremum feature values.
Further, the comparison module 111 checks whether the sums of the horizontal and vertical extremum feature values of the two pixels on the same diagonal are both greater than a predetermined value, for example 0; if so, that diagonal direction is determined to be the gradient direction. For example, let the horizontal and vertical extremum feature values of pixel d22 be denoted infl_a_x and infl_a_y, those of pixel d23 infl_b_x and infl_b_y, those of pixel d33 infl_c_x and infl_c_y, and those of pixel d32 infl_d_x and infl_d_y.
If infl_a > 0 && infl_c > 0, the comparison module 111 determines that the diagonal direction formed by pixels d22 and d33 is the gradient direction, where infl_a = infl_a_x + infl_a_y and infl_c = infl_c_x + infl_c_y.
If infl_b > 0 && infl_d > 0, the comparison module 111 determines that the diagonal direction formed by pixels d23 and d32 is the gradient direction, where infl_b = infl_b_x + infl_b_y and infl_d = infl_d_x + infl_d_y.
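A hedged sketch of the extremum feature T and of the per-pixel sums described above; the helper names are assumptions, and the result values 1 / -1 / 0 simply follow the example in the text.

def extremum_feature(ta, tb, tc):
    # T(Ta, Tb, Tc): 1 at a local minimum of Tb, -1 at a local maximum, 0 otherwise.
    if ta > tb and tb < tc:
        return 1
    if ta < tb and tb > tc:
        return -1
    return 0

def infl(left, right, up, down, center):
    # Sum of the horizontal and vertical extremum features of one pixel.
    return (extremum_feature(left, center, right) +
            extremum_feature(up, center, down))

# e.g. for pixel d22: infl_a = infl(d21, d23, d12, d32, d22); the d22/d33 diagonal is
# chosen as the gradient direction when infl_a > 0 and infl_c > 0 (threshold 0 as above).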
Those skilled in the art should understand that the comparison result values shown above are merely examples and do not limit the present invention; the predetermined value is determined based on the comparison result values, and when the comparison result values are not represented by 1, -1 and 0, the predetermined value may correspondingly differ from 0.
Next, the triangle determination module 12 and the calculation module 13 have already been described in detail in the previous embodiment and are incorporated here by reference, not repeated.
The following describes in detail, based on the image processing system shown in Fig. 11, the working process of matrix-based image interpolation according to yet another aspect of the present invention.
Specifically, the process by which the image processing system acquires the pixel values of the pixels, and the process by which the comparison module 111 determines the horizontal and vertical extremum feature values of the pixels, have already been described in detail in the foregoing embodiments and are incorporated here by reference, not repeated.
Next, after comparing the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal with the pixel values of its horizontally and vertically adjacent pixels, when the condition that the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than the predetermined value does not hold, the comparison module 111 determines the current gradient direction based on the gradient direction determined in the previous interpolation. If the current interpolation is the first interpolation performed, as a preferred approach, the comparison module 111 may directly take the diagonal direction formed by pixels d23 and d32 as the current gradient direction. For example, if the gradient direction determined in the previous interpolation is the diagonal direction through the starting pixel of the matrix formed by the 4 pixels, the comparison module 111 may take the diagonal direction through pixel d22 (i.e. the starting pixel of the matrix), namely the diagonal direction formed by pixels d22 and d33, as the current gradient direction; if the gradient direction determined in the previous interpolation is the other diagonal direction, the comparison module 111 takes the other diagonal, namely the diagonal direction formed by pixels d23 and d32, as the current gradient direction.
Next, the triangle determination module 12 determines the triangle to be used for interpolation based on the gradient direction determined in the previous interpolation and the position of the interpolation point. For example, if the comparison module 111 determines, based on the gradient direction of the previous interpolation, that the current gradient direction is the diagonal direction formed by pixels d23 and d32, the triangle determination module 12 then, based on the position of the interpolation point P, selects the triangle containing P, namely the triangle formed by pixels d22, d23 and d32, as the triangle to be used for interpolation.
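As an illustration of how the triangle containing P can be picked once the gradient direction is known, the sketch below assumes d22 at (0, 0), d23 at (1, 0), d32 at (0, 1), d33 at (1, 1) and P at (x, y); the handling of points exactly on the diagonal is an arbitrary choice, not specified in the text.

def select_triangle(direction, x, y):
    if direction == 'd22-d33':
        # the d22/d33 diagonal splits the block into (d22, d32, d33) and (d22, d23, d33)
        return ('d22', 'd32', 'd33') if y >= x else ('d22', 'd23', 'd33')
    # the d23/d32 diagonal splits the block into (d22, d23, d32) and (d23, d33, d32)
    return ('d22', 'd23', 'd32') if x + y <= 1 else ('d23', 'd33', 'd32')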
Next, the working process of the calculation module has already been described in detail in the foregoing embodiments and is incorporated here by reference, not repeated.
The following describes in detail, based on the image processing system shown in Fig. 11, the process of matrix-based image interpolation according to a further aspect of the present invention.
Specifically, the image processing system acquires the pixel values of the 4 pixels contained in a 2*2 pixel array; for example, the image processing system acquires the pixel values of pixels d22, d23, d32 and d33.
Next, the comparison module 111 compares the pixel value of each of the two pixels on one diagonal with the pixel values of the pixels on the other diagonal, so as to determine whether the image region formed by the 2*2 pixel array is a flat region; the specific comparison process is as described in the foregoing embodiments and is not repeated here.
Next, when the comparison module 111 determines that the image region formed by the 2*2 pixel array is a flat region, the comparison module 111 determines one diagonal direction as the gradient direction based on the pixel values of the 4 pixels: when one of the two pixels on a diagonal has a pixel value greater than the pixel values of its two adjacent pixels while the other has a pixel value less than the pixel values of its two adjacent pixels, the comparison module 111 takes that diagonal direction as the gradient direction.
When the comparison module 111 determines that the image region formed by the 2*2 pixel array is not a flat region, the image processing system further acquires the pixel values of the pixels adjacent to each pixel of the 2*2 pixel array, for example, the pixel values of pixels d12, d13, d21, d24, d31, d34, d43 and d44.
Next, the comparison module 111 determines the horizontal and vertical extremum feature values of each pixel of the 2*2 pixel array, and then obtains, for each pixel, the sum of its horizontal and vertical extremum feature values. For the specific process, refer to the foregoing embodiments, which are not repeated here.
Next, the comparison module 111 determines whether there is a diagonal on which the sums of the horizontal and vertical extremum feature values of both pixels are greater than the predetermined value.
Next, when the comparison module 111 determines that there is a diagonal on which the sums of the horizontal and vertical extremum feature values of both pixels are greater than the predetermined value, that diagonal direction is determined as the gradient direction.
When the comparison module 111 determines that on neither diagonal are the sums of the horizontal and vertical extremum feature values of both pixels greater than the predetermined value, the gradient direction determined in the previous interpolation is taken as the current gradient direction. For details, refer to the detailed descriptions in the foregoing embodiments, which are not repeated here.
Next, the triangle determination module 12 determines, based on the determined gradient direction and the position of the interpolation point, the triangle to be used for interpolation within the image region formed by the 4 pixels adjacent to the interpolation point. For details, refer to the detailed descriptions in the foregoing embodiments, which are not repeated here.
Finally, the calculation module calculates the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of that triangle. For details, refer to the detailed descriptions in the foregoing embodiments, which are not repeated here.
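Putting the pieces of this workflow together, a consolidated sketch might look as follows. The coordinate convention (d22 at (0, 0), d23 at (1, 0), d32 at (0, 1), d33 at (1, 1)), the dictionary keys used for the outer neighbours, the tie-breaking on the diagonals and the threshold of 0 are all assumptions made for illustration, not the mandated implementation.

def interpolate_2x2(d22, d23, d32, d33, x, y, neighbours, prev_direction='d23-d32'):
    a, b, c, d = d22, d23, d33, d32

    # 1. flat-region test on the 2*2 block (the four comparison cases above)
    if (a >= b and a >= d and c <= b and c <= d) or (a <= b and a <= d and c >= b and c >= d):
        direction = 'd22-d33'
    elif (b >= a and b >= c and d <= a and d <= c) or (b <= a and b <= c and d >= a and d >= c):
        direction = 'd23-d32'
    else:
        # 2. non-flat: extremum features from the horizontal/vertical neighbours
        def feat(t1, t2, t3):
            return 1 if (t1 > t2 and t2 < t3) else (-1 if (t1 < t2 and t2 > t3) else 0)
        n = neighbours  # e.g. {'d21': ..., 'd12': ..., 'd24': ..., ...} (hypothetical keys)
        infl_a = feat(n['d21'], d22, d23) + feat(n['d12'], d22, d32)   # pixel d22
        infl_b = feat(d22, d23, n['d24']) + feat(n['d13'], d23, d33)   # pixel d23
        infl_c = feat(d32, d33, n['d34']) + feat(d23, d33, n['d43'])   # pixel d33
        infl_d = feat(n['d31'], d32, d33) + feat(d22, d32, n['d42'])   # pixel d32
        if infl_a > 0 and infl_c > 0:
            direction = 'd22-d33'
        elif infl_b > 0 and infl_d > 0:
            direction = 'd23-d32'
        else:
            direction = prev_direction    # fall back to the previous interpolation

    # 3. pick the triangle containing P, then 4. zout = p0 - (p0 - p1)*hx - (p0 - p2)*vy,
    #    where hx, vy are the horizontal/vertical distances from the vertex holding p0
    if direction == 'd22-d33':
        p0, p1, p2, hx, vy = (d32, d33, d22, x, 1 - y) if y >= x else (d23, d22, d33, 1 - x, y)
    else:
        p0, p1, p2, hx, vy = (d22, d23, d32, x, y) if x + y <= 1 else (d33, d32, d23, 1 - x, 1 - y)
    return p0 - (p0 - p1) * hx - (p0 - p2) * vy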
Compared with existing image processing methods, images processed by the method of the present invention effectively avoid edge burrs or jagged artifacts on obliquely oriented details of a scaled image, yielding higher-quality images. Moreover, the method of the present invention requires only a small number of multiplications. In terms of implementation, compared with existing polyphase-filter-based interpolation, which must first operate on input rows (or columns) and then on input columns (or rows), the present invention only needs a single input of one pixel array to obtain the pixel value of an interpolation point, so the present invention has a relatively simple computational structure.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify the above embodiments without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be as set forth in the claims.

Claims

Claims
1. A method for matrix-based image interpolation in an image processing system, characterized by comprising the steps of:
A. determining a gradient direction of an image region formed by a pixel array containing an interpolation point;
B. determining, based on the acquired gradient direction and the position of the interpolation point, a triangle to be used for interpolation within the image region;
C. calculating the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the determined triangle and the distance between the interpolation point and one vertex of the triangle.
2. The method for matrix-based image interpolation in an image processing system according to claim 1, wherein step A comprises:
A1. comparing the pixel value of each pixel on one diagonal of the pixel array containing the interpolation point with the pixel values of the pixels on the other diagonal, so as to determine whether the image region formed by the pixel array is a flat region and thereby determine the gradient direction.
3. The method for matrix-based image interpolation in an image processing system according to claim 2, wherein step A1 comprises:
when one of the two pixels adjacent to the interpolation point on one diagonal has a pixel value not less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal, while the other has a pixel value less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal, determining that the image region formed by the pixel array is a flat region, the gradient direction being that diagonal direction;
and step B comprises:
selecting, according to the position of the interpolation point and that diagonal direction, the triangle having that diagonal as one side as the triangle to be used for interpolation.
4. The method for matrix-based image interpolation in an image processing system according to claim 2, wherein step A1 comprises:
A11. when the condition that one of the two pixels adjacent to the interpolation point on one diagonal has a pixel value not less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal while the other has a pixel value less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal does not hold, determining that the image region formed by the pixel array is a non-flat region;
A12. determining the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels.
5. The method for matrix-based image interpolation in an image processing system according to claim 4, wherein step A12 comprises:
when, after the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal is compared with the pixel values of its horizontally and vertically adjacent pixels, the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than a predetermined value, determining that diagonal direction to be the gradient direction;
and step B comprises:
selecting, according to that diagonal direction and the position of the interpolation point, the triangle having that diagonal as one side as the triangle to be used for interpolation.
6. The method for matrix-based image interpolation in an image processing system according to claim 4, wherein step A12 comprises:
when, after the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal is compared with the pixel values of its horizontally and vertically adjacent pixels, the condition that the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than a predetermined value does not hold, determining the current gradient direction based on the gradient direction determined in the previous interpolation;
and step B comprises:
selecting the triangle to be used for interpolation according to the gradient direction determined in the previous interpolation.
7. The method for matrix-based image interpolation in an image processing system according to any one of claims 1 to 6, wherein step C comprises:
calculating the pixel value zout of the interpolation point according to the following formula:
zout = p0 - (-p1 + p0)*x - (p0 - p2)*y,
where p0, p1 and p2 are the pixel values of the pixels corresponding to the 3 vertices of the triangle to be used for interpolation, and x and y are respectively the horizontal distance and the vertical distance from the interpolation point to the pixel whose pixel value is p0.
8. A system for matrix-based image interpolation, characterized by comprising:
a gradient determination module, configured to determine a gradient direction of an image region formed by a pixel array containing an interpolation point;
a triangle determination module, configured to determine, based on the gradient direction and the position of the interpolation point, a triangle to be used for interpolation within the image region; and
a calculation module, configured to calculate the pixel value of the interpolation point based on the pixel values of the pixels corresponding to the three vertices of the triangle to be used for interpolation and the distance between the interpolation point and one vertex of the triangle.
9. The image processing system for matrix-based image interpolation according to claim 8, wherein the gradient determination module comprises:
a comparison module, configured to compare the pixel value of each pixel on one diagonal of the pixel array containing the interpolation point with the pixel values of the pixels on the other diagonal, so as to determine whether the image region formed by the pixel array is a flat region and thereby determine the gradient direction.
10. The image processing system for matrix-based image interpolation according to claim 9, wherein the comparison module is further configured to:
when one of the two pixels adjacent to the interpolation point on one diagonal has a pixel value not less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal, while the other has a pixel value less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal, determine that the image region formed by the pixel array is a flat region, the gradient direction being that diagonal direction;
and the triangle determination module is further configured to:
select, according to the position of the interpolation point and that diagonal direction, the triangle having that diagonal as one side as the triangle to be used for interpolation.
11. The image processing system for matrix-based image interpolation according to claim 9, wherein the comparison module is further configured to:
when the condition that one of the two pixels adjacent to the interpolation point on one diagonal has a pixel value not less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal while the other has a pixel value less than the pixel values of the two pixels adjacent to the interpolation point on the other diagonal does not hold, determine that the image region formed by the pixel array is a non-flat region, and determine the gradient direction by comparing the pixel value of each pixel adjacent to the interpolation point with the pixel values of its horizontally and vertically adjacent pixels.
12. The image processing system for matrix-based image interpolation according to claim 11, wherein the comparison module is further configured to:
when, after the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal is compared with the pixel values of its horizontally and vertically adjacent pixels, the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than a predetermined value, determine that diagonal direction to be the gradient direction;
and the triangle determination module is further configured to:
select, according to that diagonal direction and the position of the interpolation point, the triangle having that diagonal as one side as the triangle to be used for interpolation.
13. The image processing system for matrix-based image interpolation according to claim 11, wherein the comparison module is further configured to: when, after the pixel value of each of the two pixels adjacent to the interpolation point on the same diagonal is compared with the pixel values of its horizontally and vertically adjacent pixels, the condition that the sum of the horizontal comparison result value and the vertical comparison result value obtained for each of the two pixels is greater than the predetermined value does not hold, determine the current gradient direction based on the gradient direction determined in the previous interpolation.
14. The image processing system for matrix-based image interpolation according to any one of claims 8 to 13, wherein the calculation module is further configured to:
calculate the pixel value zout of the interpolation point according to the following formula:
zout = p0 - (-p1 + p0)*x - (p0 - p2)*y,
where p0, p1 and p2 are the pixel values of the pixels corresponding to the 3 vertices of the triangle to be used for interpolation, and x and y are respectively the horizontal distance and the vertical distance from the interpolation point to the pixel whose pixel value is p0.
PCT/CN2011/071966 2011-02-12 2011-03-18 基于矩阵对图像进行插值的方法及图像处理*** WO2012106850A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/376,995 US8818136B2 (en) 2011-02-12 2011-03-18 Image interpolation method based on matrix and image processing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110037074.9 2011-02-12
CN201110037074.9A CN102638679B (zh) 2011-02-12 2011-02-12 基于矩阵对图像进行插值的方法及图像处理***

Publications (1)

Publication Number Publication Date
WO2012106850A1 true WO2012106850A1 (zh) 2012-08-16

Family

ID=46622902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/071966 WO2012106850A1 (zh) 2011-02-12 2011-03-18 基于矩阵对图像进行插值的方法及图像处理***

Country Status (3)

Country Link
US (1) US8818136B2 (zh)
CN (1) CN102638679B (zh)
WO (1) WO2012106850A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363708A (zh) * 2019-07-16 2019-10-22 安健科技(广东)有限公司 改进斜线方向插值效果的图像处理方法及装置

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6009903B2 (ja) * 2012-10-24 2016-10-19 シャープ株式会社 画像処理装置
CN105096245B (zh) * 2014-04-21 2018-05-04 上海澜至半导体有限公司 一种处理2d图像的方法和装置
CN103996170B (zh) * 2014-04-28 2017-01-18 深圳市华星光电技术有限公司 一种具有超高解析度的图像边缘锯齿消除方法
CN105894450A (zh) * 2015-12-07 2016-08-24 乐视云计算有限公司 一种图像处理方法及装置
CN106341670B (zh) 2016-11-29 2017-09-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106504218B (zh) 2016-11-29 2019-03-12 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106454054B (zh) 2016-11-29 2019-03-19 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106454288B (zh) 2016-11-29 2018-01-19 广东欧珀移动通信有限公司 控制方法、控制装置、成像装置及电子装置
CN109325909B (zh) * 2017-07-31 2023-03-31 深圳市中兴微电子技术有限公司 一种图像放大方法和图像放大装置
CN109993693B (zh) * 2017-12-29 2023-04-25 澜至电子科技(成都)有限公司 用于对图像进行插值的方法和装置
CN108171657B (zh) * 2018-01-26 2021-03-26 上海富瀚微电子股份有限公司 图像插值方法及装置
CN116324689A (zh) 2020-10-30 2023-06-23 海信视像科技股份有限公司 显示设备、几何图形识别方法及多图层叠加显示方法
CN115243094A (zh) * 2020-12-22 2022-10-25 海信视像科技股份有限公司 一种显示设备及多图层叠加方法

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1667650A (zh) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 基于边缘检测的图像缩放的方法
CN1946137A (zh) * 2005-10-08 2007-04-11 中华映管股份有限公司 图像数据内插方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3013713B2 (ja) * 1994-08-16 2000-02-28 日本ビクター株式会社 情報信号処理方法
US6072505A (en) * 1998-04-01 2000-06-06 Real 3D, Inc. Method and apparatus to efficiently interpolate polygon attributes in two dimensions at a prescribed clock rate
US6377265B1 (en) * 1999-02-12 2002-04-23 Creative Technology, Ltd. Digital differential analyzer
JP4581261B2 (ja) * 2001-02-14 2010-11-17 ソニー株式会社 演算装置、演算処理方法及び画像処理装置
JP2003132347A (ja) * 2001-10-26 2003-05-09 Sony Corp 画像処理装置
US7714855B2 (en) * 2004-05-17 2010-05-11 Siemens Medical Solutions Usa, Inc. Volume rendering processing distribution in a graphics processing unit
WO2008091583A2 (en) * 2007-01-23 2008-07-31 Dtherapeutics, Llc Image-based extraction for vascular trees
DE102009057583A1 (de) * 2009-09-04 2011-03-10 Siemens Aktiengesellschaft Vorrichtung und Verfahren zur Erzeugung einer zielgerichteten realitätsnahen Bewegung von Teilchen entlang kürzester Wege bezüglich beliebiger Abstandsgewichtungen für Personen- und Objektstromsimulationen

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1667650A (zh) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 基于边缘检测的图像缩放的方法
CN1946137A (zh) * 2005-10-08 2007-04-11 中华映管股份有限公司 图像数据内插方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAN SU ET AL.: "Image Interpolation by Pixel-Level Data-Dependent Triangulation", COMPUTER GRAPHICS FORUM, vol. 23, no. 2, 31 December 2004 (2004-12-31), pages 189 - 201 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363708A (zh) * 2019-07-16 2019-10-22 安健科技(广东)有限公司 改进斜线方向插值效果的图像处理方法及装置
CN110363708B (zh) * 2019-07-16 2023-03-24 安健科技(广东)有限公司 改进斜线方向插值效果的图像处理方法及装置

Also Published As

Publication number Publication date
US8818136B2 (en) 2014-08-26
US20130322780A1 (en) 2013-12-05
CN102638679B (zh) 2014-07-02
CN102638679A (zh) 2012-08-15

Similar Documents

Publication Publication Date Title
WO2012106850A1 (zh) 基于矩阵对图像进行插值的方法及图像处理***
WO2018157568A1 (zh) 全景图像映射方法
WO2010034202A1 (zh) 图像缩放方法及装置
JP6007602B2 (ja) 画像処理方法、画像処理装置、スキャナ及びコンピュータプログラム
JP4018601B2 (ja) 内蔵形システムのデジタル画像スケーリング方法
JP2009536499A5 (zh)
JP2010171976A (ja) 歪み文書画像を補正する方法及びシステム
TW201208387A (en) Method and apparatus for encoding and decoding image through intra prediction
WO2019165863A1 (zh) 用于经纬图的编码块级拉格朗日乘子的优化方法
CN107633539A (zh) 一种基于四边面片分割的三维点云模型数据压缩方法
TW201143358A (en) Method for distinguishing a 3D image from a 2D image and for identifying the presence of a 3D image format by feature correspondence determination
KR20190080388A (ko) Cnn을 이용한 영상 수평 보정 방법 및 레지듀얼 네트워크 구조
CN110793441B (zh) 一种高精度物体几何尺寸测量方法及装置
CN107046640B (zh) 一种基于帧间运动光滑性的无参考视频稳定质量评价方法
TWI476730B (zh) 數位影像的反扭曲處理方法
CN101662695A (zh) 一种获取虚拟视图的方法和装置
US11475629B2 (en) Method for 3D reconstruction of an object
TW201118793A (en) System and method for establishing relationship between plural images and recording media thereof
CN101547323B (zh) 图像转换方法、转换装置及显示***
US8675975B2 (en) Method for encoding image using estimation of color space
CN114219845B (zh) 一种基于深度学习的居住单元面积评判方法和装置
US20230326128A1 (en) Techniques for processing multiplane images
US8503823B2 (en) Method, device and display system for converting an image according to detected word areas
WO2013067942A1 (zh) 一种帧内预测方法和装置
TWI272852B (en) Motion estimation method for adaptive dynamic searching range

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 13376995

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11858268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11858268

Country of ref document: EP

Kind code of ref document: A1