CN104700361B - Image interpolation method and system based on edge detection - Google Patents

Image interpolation method and system based on edge detection

Info

Publication number
CN104700361B
Authority
CN
China
Prior art keywords
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510152962.3A
Other languages
Chinese (zh)
Other versions
CN104700361A (en
Inventor
韩睿
汤仁君
郭若杉
罗杨
颜奉丽
汤晓莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jilang Semiconductor Technology Co Ltd
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201510152962.3A priority Critical patent/CN104700361B/en
Publication of CN104700361A publication Critical patent/CN104700361A/en
Application granted granted Critical
Publication of CN104700361B publication Critical patent/CN104700361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention provides an image interpolation method and system based on edge detection. The image interpolation method includes: determining the position of an interpolated pixel in the original image according to the sizes of the original image and the interpolated image; determining the edge direction of the interpolated pixel in the original image; and, if the absolute value of the slope of the edge direction is not less than a first threshold, performing interpolation according to a row-intersection method and/or a column-intersection method. The interpolation method of the present invention yields an interpolated image with clear edges and no jagged artifacts.

Description

Image interpolation method and system based on edge detection
Technical field
The invention belongs to the field of image processing, and more particularly relates to an image interpolation method and system based on edge detection.
Background art
Image interpolation is used for adjusting image resolution, for example enlarging a high-definition image (1920×1080) into an ultra-high-definition image (3840×2160).
Traditional image interpolation methods, such as bilinear interpolation, bicubic interpolation, and polyphase interpolation, essentially interpolate with a low-pass filter. While they produce a smooth interpolated image, they lose high-frequency information, so edges in the image become blurred and jagged. A more advanced approach is image interpolation based on edge detection: the edge direction at each interpolated pixel is computed by edge detection, and the pixel is interpolated along that direction, which produces smooth edges and avoids jagged artifacts. However, existing edge-detection-based interpolation methods suffer from at least one of the following shortcomings: they support only integer-multiple enlargement; the edge directions used for interpolation are not arbitrary but restricted to a few fixed directions; or the number of original pixels used for interpolation is small, so the edges of the result are not sharp enough.
Therefore, an image interpolation method that solves the above problems is needed.
Summary of the invention
The present invention provides an image interpolation method and system based on edge detection, so as to obtain an interpolated image with clear edges and no jagged artifacts.
A first aspect of the invention provides an image interpolation method based on edge detection, including:
determining the position of an interpolated pixel in the original image according to the sizes of the original image and the interpolated image;
determining the edge direction of the interpolated pixel in the original image;
if the absolute value of the slope of the edge direction is not less than a first threshold, performing interpolation according to a row-intersection method and/or a column-intersection method, where the row-intersection method and/or column-intersection method includes:
calculating the positions of the row intersections and/or column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and/or columns in the neighborhood of the interpolated pixel in the original image;
determining the pixel values of the row intersections and/or column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation;
applying a one-dimensional filter to the pixel values of the row intersections and/or column intersections in the neighborhood of the interpolated pixel to obtain the value of the interpolated pixel, and interpolating the original image accordingly.
A second aspect of the invention provides an image interpolation system based on edge detection, including:
a coordinate calculation unit for determining the position of an interpolated pixel in the original image according to the sizes of the original image and the interpolated image;
a direction calculation unit for determining the edge direction of the interpolated pixel in the original image;
an intersection calculation unit for calculating, when the absolute value of the slope of the edge direction is not less than a first threshold, the positions of the row intersections and/or column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and/or columns in the neighborhood of the interpolated pixel in the original image;
an edge interpolation filter unit for interpolating according to the row-intersection method and/or column-intersection method, specifically for determining the pixel values of the row intersections and/or column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation, applying a one-dimensional filter to those pixel values, obtaining the value of the interpolated pixel, and interpolating the original image.
The beneficial effects of the present invention are:
The edge-detection-based image interpolation method of the invention can use a relatively large number of original pixels, scale by an arbitrary integer or non-integer factor, and interpolate along any edge direction, so the interpolated image has clear edges and no jagged artifacts.
Brief description of the drawings
Fig. 1 is the flow chart of embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 2 is a schematic diagram of the Sobel gradient method in embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 3 is a schematic diagram of the gradient covariance matrix method in embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 4 is a schematic diagram of the row-intersection method in embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 5 is a schematic diagram of the column-intersection method in embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 6 shows the weighting function used when the row-intersection and column-intersection methods are applied jointly in embodiment one of the edge-detection-based image interpolation method of the invention;
Fig. 7 is the structural block diagram of embodiment one of the edge-detection-based image interpolation system of the invention.
Detailed description of the embodiments
Fig. 1 is the flow chart of embodiment one of the edge-detection-based image interpolation method of the invention. As shown in Fig. 1, the method includes:
S11: determine the position of the interpolated pixel in the original image according to the sizes, i.e., resolutions, of the original image and the interpolated image. Preferably, this includes calculating the position of the interpolated pixel in the original image according to formula (1),
where i_L and j_L denote the row and column coordinates of the interpolated pixel's position in the original (low-resolution) image, i_H and j_H denote the row and column coordinates of the interpolated pixel's position in the interpolated (high-resolution) image, H_L and W_L denote the height and width of the original image, and H_H and W_H denote the height and width of the interpolated image.
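Formula (1) itself is not reproduced in this text. The sketch below assumes the plain ratio mapping i_L = i_H·H_L/H_H, the simplest convention consistent with the surrounding definitions; the patent's exact formula may differ (e.g., in half-pixel offsets):

```python
def map_to_source(i_h, j_h, h_l, w_l, h_h, w_h):
    """Map an interpolated (high-resolution) pixel position (i_h, j_h)
    to fractional coordinates (i_l, j_l) in the original image.

    Assumption: plain ratio mapping; the patent's formula (1) is not
    shown in this text and may use a slightly different convention.
    """
    i_l = i_h * h_l / h_h
    j_l = j_h * w_l / w_h
    return i_l, j_l
```

For a 1920×1080 source enlarged to 3840×2160, every output coordinate maps to half its value, so odd output positions fall between original pixels and must be interpolated.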
S12: determine the edge direction of the interpolated pixel in the original image. Preferably, a first method of determining the edge direction of the interpolated pixel in the original image can include:
Fig. 2 is a schematic diagram of the Sobel gradient method in embodiment one of the edge-detection-based image interpolation method of the invention. As shown in Fig. 2, Sobel gradient operators are used to calculate, according to formulas (2) and (3), the horizontal gradient g_H(i, j) and vertical gradient g_V(i, j) of each pixel in the neighborhood of the interpolated pixel in the original image.
Then, from the position of the interpolated pixel in the original image and the horizontal and vertical gradients of the pixels in that neighborhood, the horizontal gradient g_H(i_L, j_L) and vertical gradient g_V(i_L, j_L) of the interpolated pixel are determined by bilinear interpolation according to formulas (4) and (5). The edge direction of the interpolated pixel is then the direction perpendicular to its gradient, (g_V(i_L, j_L), -g_H(i_L, j_L)),
where I_L(i-1, j+1), I_L(i, j+1), I_L(i+1, j+1), I_L(i-1, j-1), I_L(i, j-1), I_L(i+1, j-1), I_L(i-1, j), and I_L(i+1, j) denote the pixel values of the eight pixels in the neighborhood of the interpolated pixel in the original image.
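Formulas (2)–(5) are not reproduced in this text. A minimal sketch of the step above, assuming the Sobel operators are the usual 3×3 kernels, with the edge direction taken perpendicular to the gradient as stated:

```python
import numpy as np

SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
SOBEL_V = SOBEL_H.T                            # vertical-gradient kernel

def gradients_at(img, i, j):
    """Horizontal and vertical Sobel gradients at integer pixel (i, j),
    computed from its 3x3 neighborhood of eight surrounding pixels."""
    patch = img[i - 1:i + 2, j - 1:j + 2]
    return float(np.sum(patch * SOBEL_H)), float(np.sum(patch * SOBEL_V))

def edge_direction(g_h, g_v):
    """Edge direction perpendicular to the gradient: (g_V, -g_H)."""
    return g_v, -g_h
```

For a pixel at fractional position (i_L, j_L), the text bilinearly interpolates the gradients of the four surrounding integer pixels before taking the perpendicular; that bilinear step is omitted from this sketch.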
Next, Fig. 3 is a schematic diagram of the gradient covariance matrix method in embodiment one of the edge-detection-based image interpolation method of the invention. As shown in Fig. 3, a second method of determining the edge direction of the interpolated pixel in the original image can include:
choosing an H×W window Ω of arbitrary size in the neighborhood of the interpolated pixel (in this example H = 4, W = 6); determining the horizontal gradient g_H(i, j) and vertical gradient g_V(i, j) of all pixels in the window, and thereby the covariance matrix M of all pixels in the window in the neighborhood of the interpolated pixel;
calculating the eigenvalues and eigenvectors of the covariance matrix, and taking the eigenvector v corresponding to the smaller eigenvalue as the edge direction, namely:
where v denotes the eigenvector corresponding to the smaller eigenvalue of the covariance matrix, v_x denotes the horizontal component of the edge direction, and v_y denotes its vertical component.
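Formulas (6) and (7) are not shown in this text; the sketch below uses the standard gradient-covariance (structure-tensor) construction the description implies, taking the eigenvector of the smaller eigenvalue as the edge direction:

```python
import numpy as np

def edge_direction_from_window(g_h, g_v):
    """Edge direction (v_x, v_y) from the gradients of all pixels in an
    HxW window: build the 2x2 gradient covariance matrix M and return
    the eigenvector belonging to its smaller eigenvalue."""
    m = np.array([[np.sum(g_h * g_h), np.sum(g_h * g_v)],
                  [np.sum(g_h * g_v), np.sum(g_v * g_v)]])
    _, vecs = np.linalg.eigh(m)  # eigenvalues in ascending order
    return vecs[:, 0]            # eigenvector of the smaller eigenvalue
```

For a window over a vertical edge (large horizontal gradients, zero vertical gradients) this returns a vertical unit vector, i.e., the direction along the edge rather than across it.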
In addition, a third method of determining the edge direction of the interpolated pixel in the original image can include the steps of the second method and further include:
improving the covariance matrix into an improved covariance matrix M' according to formula (8),
where the values w(i, j), obtained by bilinear interpolation, are:
w(i-1, j-2) = (1-dx)*(1-dy), w(i-1, j-1) = (1-dy), w(i-1, j) = (1-dy), w(i-1, j+1) = (1-dy), w(i-1, j+2) = (1-dy), w(i-1, j+3) = dx*(1-dy);
w(i, j-2) = (1-dx), w(i, j-1) = 1, w(i, j) = 1, w(i, j+1) = 1, w(i, j+2) = 1, w(i, j+3) = dx;
w(i+1, j-2) = (1-dx), w(i+1, j-1) = 1, w(i+1, j) = 1, w(i+1, j+1) = 1, w(i+1, j+2) = 1, w(i+1, j+3) = dx;
w(i+2, j-2) = (1-dx)*dy, w(i+2, j-1) = dy, w(i+2, j) = dy, w(i+2, j+1) = dy, w(i+2, j+2) = dy, w(i+2, j+3) = dx*dy.
The values w(i, j) can also be represented as in Table 1:
(1-dx)*(1-dy) (1-dy) (1-dy) (1-dy) (1-dy) dx*(1-dy)
(1-dx) 1 1 1 1 dx
(1-dx) 1 1 1 1 dx
(1-dx)*dy dy dy dy dy dx*dy
Table 1
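Table 1 is separable: each entry is the product of a horizontal weight depending on dx and a vertical weight depending on dy. A short sketch that rebuilds the 4×6 window from that observation:

```python
def weight_window(dx, dy):
    """Rebuild the 4x6 weight table w(i, j) of Table 1 as the outer
    product of per-column weights (in dx) and per-row weights (in dy)."""
    cols = [1 - dx, 1, 1, 1, 1, dx]   # columns j-2 .. j+3
    rows = [1 - dy, 1, 1, dy]         # rows i-1 .. i+2
    return [[r * c for c in cols] for r in rows]
```

The corner entries come out as products, e.g. w(i-1, j-2) = (1-dx)*(1-dy), matching the listing above.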
S13: if the absolute value of the slope of the edge direction is not less than the first threshold, interpolation is performed according to the row-intersection method and/or the column-intersection method, i.e., the edge interpolation method. Preferably, the row-intersection method and/or column-intersection method includes:
S131: judging the absolute value of the slope of the edge direction; if it is less than a second threshold T1, interpolating according to the column-intersection method;
S132: if it is not less than a third threshold T2, interpolating according to the row-intersection method;
S133: if it is not less than the second threshold T1 and less than the third threshold T2, interpolating according to both the row-intersection method and the column-intersection method, including:
S1331: calculating the positions of the row intersections and/or column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and/or columns in the neighborhood of the interpolated pixel in the original image, that is:
calculating the positions of the row intersections and the column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and several columns in the neighborhood of the interpolated pixel in the original image.
Preferably, Fig. 4 is a schematic diagram of the row-intersection method in embodiment one of the edge-detection-based image interpolation method of the invention; as shown in Fig. 4, the row-intersection method uses the intersections with the four rows above and below the interpolated pixel.
Accordingly, calculating the positions of the row intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows in the neighborhood of the interpolated pixel in the original image includes calculating the positions of the four row intersections according to formulas (9), (10), (11), and (12).
Similarly, Fig. 5 is a schematic diagram of the column-intersection method in embodiment one of the edge-detection-based image interpolation method of the invention; as shown in Fig. 5, the column-intersection method uses the intersections with the four columns to the left and right of the interpolated pixel. Accordingly, calculating the positions of the column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several columns in the neighborhood of the interpolated pixel in the original image includes calculating the positions of the four column intersections according to formulas (13), (14), (15), and (16).
S1332: determining the pixel values of the row intersections and/or column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation, including:
determining the pixel values of the row intersections and column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation; preferably, this includes determining the pixel values of the four row intersections according to formulas (17), (18), (19), and (20).
Similarly, the column-intersection method uses the intersections with the four columns to the left and right of the interpolated pixel. Taking the pixel value of the first column intersection as an example, determining the pixel value of the column intersection from the values of the original-image pixels in its neighborhood using one-dimensional interpolation includes determining it according to formula (21);
the pixel values of the other three column intersections are calculated in the same way and are not repeated here.
Applying a one-dimensional filter to the pixel values of the row intersections and/or column intersections in the neighborhood of the interpolated pixel to obtain the value of the interpolated pixel includes performing one-dimensional filtering according to formula (22):
I_H(i_H, j_H) = f_0*I_P0 + f_1*I_P1 + f_2*I_P2 + f_3*I_P3 (22)
where [·] denotes rounding down, (i_L, j_L) denotes the coordinates of the interpolated pixel's position in the original image, i and j denote the row and column numbers, (v_x, v_y) denotes the edge direction, P_0, P_1, P_2, and P_3 denote the four row intersections, I_P0, I_P1, I_P2, and I_P3 denote their pixel values, and [f_0, f_1, f_2, f_3] are the coefficients of the one-dimensional filter, e.g., [1, 3, 3, 1].
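Formulas (9)–(12) and (22) can be sketched together. The intersection columns below are reconstructed from the versions of formulas (9)–(11) reproduced in claim 4 (the line through (i_L, j_L) with direction (v_x, v_y) crosses row r at column j_L + (i_L - r)·v_x/v_y); the fourth row i+2 follows the same pattern by assumption, and the [1, 3, 3, 1] coefficients are normalized here, which the text does not specify:

```python
import math

def row_intersections(i_l, j_l, v_x, v_y):
    """Columns where the edge line through (i_L, j_L) with direction
    (v_x, v_y) crosses the four rows i-1, i, i+1, i+2 around it
    (reconstruction of formulas (9)-(12); requires v_y != 0)."""
    i = math.floor(i_l)
    return [(r, j_l + (i_l - r) * v_x / v_y) for r in (i - 1, i, i + 1, i + 2)]

def filter_intersections(values, coeffs=(1.0, 3.0, 3.0, 1.0)):
    """One-dimensional filtering of the four intersection pixel values
    (formula (22)); normalization of the coefficients is assumed."""
    return sum(c * v for c, v in zip(coeffs, values)) / sum(coeffs)
```

With a 45-degree edge direction (v_x = v_y) the four intersections step one column per row, as expected for a diagonal line.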
It should be noted that, when calculating the intersections of the edge line with the rows in the neighborhood of the interpolated pixel, if the edge direction is horizontal, the line has no intersection with those rows; and when the absolute value of the slope of the edge direction is small, below the set threshold, the intersections with the rows are far from the interpolated pixel and only weakly correlated with it. In these two cases the non-edge image interpolation method is used instead: the two-dimensional interpolation is decomposed into two successive one-dimensional interpolations, horizontal and vertical, whose order is interchangeable. Similarly, when calculating the intersections of the edge line with the columns in the neighborhood, if the edge direction is vertical, the line has no intersection with those columns; and when the absolute value of the slope is large, above the set threshold, the intersections with the columns are far from the interpolated pixel and only weakly correlated with it. In these two cases the non-edge image interpolation method is likewise used, decomposing the two-dimensional interpolation into two successive one-dimensional interpolations whose order is interchangeable.
S1333: applying a one-dimensional filter to the pixel values of the row intersections and/or column intersections in the neighborhood of the interpolated pixel, obtaining the value of the interpolated pixel, and interpolating the original image, including:
filtering the row-intersection and column-intersection pixel values in the neighborhood of the interpolated pixel separately with a one-dimensional filter to obtain the row-intersection interpolation result I_HR(i_H, j_H) and the column-intersection interpolation result I_HC(i_H, j_H). Fig. 6 shows the weighting function used when the row-intersection and column-intersection methods are applied jointly in embodiment one of the edge-detection-based image interpolation method of the invention; the weight is generated by the curve shown in Fig. 6, and the value I_H(i_H, j_H) of the interpolated pixel is determined by weighting according to formula (23):
I_H(i_H, j_H) = w*I_HR(i_H, j_H) + (1-w)*I_HC(i_H, j_H) (23)
The original image is then interpolated according to the value of the interpolated pixel;
where (i_H, j_H) denotes the coordinates of the interpolated pixel's position and w denotes the weight of the row-intersection interpolation result.
It should be noted that at low angles, i.e., when the absolute value of the slope of the edge direction is less than T1, interpolation uses the column-intersection method; for other directions, when the absolute value of the slope is not less than T1 and less than T2, both the row-intersection and column-intersection methods are used; and when it is not less than T2, the row-intersection method is used. T1 and T2 are preset thresholds, and the way of combining the row-intersection and column-intersection methods is not limited to the form described above.
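The weighting curve of Fig. 6 is not reproduced in this text. A sketch of formula (23) with an assumed linear ramp of the row-intersection weight w between the thresholds T1 and T2 (any monotone curve with the same endpoints would fit the description):

```python
def blend_edge_results(slope_abs, i_hr, i_hc, t1, t2):
    """Weighted combination of the row-intersection result I_HR and the
    column-intersection result I_HC (formula (23)).

    Assumption: w ramps linearly from 0 at T1 to 1 at T2; the actual
    curve of Fig. 6 is not shown in this text.
    """
    if slope_abs < t1:
        w = 0.0                       # shallow edge: column-intersection result
    elif slope_abs >= t2:
        w = 1.0                       # steep edge: row-intersection result
    else:
        w = (slope_abs - t1) / (t2 - t1)
    return w * i_hr + (1.0 - w) * i_hc
```

The ramp makes the output vary continuously as the edge angle crosses the thresholds, avoiding visible switching artifacts between the two methods.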
Preferably, the edge-detection-based image interpolation method also includes:
S14: if the absolute value of the slope of the edge direction is less than the set threshold, interpolating according to the non-edge interpolation method. Likewise, if both the horizontal component v_x and the vertical component v_y of the edge direction are 0, the interpolated pixel has no well-defined direction, and the non-edge image interpolation method is used: the two-dimensional interpolation is decomposed into two successive one-dimensional interpolations, horizontal and vertical, whose order is interchangeable.
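The non-edge fallback described above can be sketched as two successive one-dimensional passes; bilinear weights are assumed here as the simplest instance of the decomposition (the patent does not fix the 1-D kernel):

```python
import numpy as np

def separable_interp(img, i_l, j_l):
    """Non-edge fallback: two-dimensional interpolation decomposed into
    two one-dimensional passes (horizontal, then vertical; the order is
    interchangeable, as the text notes). Bilinear weights are assumed;
    (i_l, j_l) must lie strictly inside the image."""
    i0, j0 = int(np.floor(i_l)), int(np.floor(j_l))
    dy, dx = i_l - i0, j_l - j0
    # horizontal 1-D pass on the two bracketing rows
    top = (1 - dx) * img[i0, j0] + dx * img[i0, j0 + 1]
    bot = (1 - dx) * img[i0 + 1, j0] + dx * img[i0 + 1, j0 + 1]
    # vertical 1-D pass on the two intermediate values
    return (1 - dy) * top + dy * bot
```

Swapping the two passes gives the same result for linear kernels, which is why the text calls the order interchangeable.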
S15: the result obtained by the row-intersection and/or column-intersection interpolation and the result obtained by the non-edge interpolation method are merged to obtain the interpolated image.
The edge-detection-based image interpolation method of the invention can use a relatively large number of original pixels, scale by an arbitrary integer or non-integer factor, and interpolate along any edge direction, so the interpolated image has clear edges and no jagged artifacts.
Fig. 7 is the structural block diagram of embodiment one of the edge-detection-based image interpolation system of the invention. As shown in Fig. 7, the system includes:
a coordinate calculation unit for determining the position of an interpolated pixel in the original image according to the sizes of the original image and the interpolated image;
a direction calculation unit for determining the edge direction of the interpolated pixel in the original image;
an intersection calculation unit for calculating, when the absolute value of the slope of the edge direction is not less than the first threshold, the positions of the row intersections and/or column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and/or columns in the neighborhood of the interpolated pixel in the original image;
an edge interpolation filter unit for interpolating according to the row-intersection method and/or column-intersection method, specifically for determining the pixel values of the row intersections and/or column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation, applying a one-dimensional filter to those pixel values, obtaining the value of the interpolated pixel, and interpolating the original image.
Preferably, the edge-detection-based image interpolation system also includes:
a non-edge interpolation unit for interpolating according to the non-edge interpolation method when the absolute value of the slope of the edge direction is less than the set threshold;
a fusion unit for merging the results obtained by the row-intersection and/or column-intersection interpolation with the result obtained by the non-edge interpolation method to obtain the interpolated image.
Preferably, the direction calculation unit is specifically configured to: calculate, according to formulas (2) and (3), the horizontal gradient g_H(i, j) and vertical gradient g_V(i, j) of each pixel in the neighborhood of the interpolated pixel in the original image;
and determine, from the position of the interpolated pixel in the original image and the horizontal and vertical gradients of the pixels in that neighborhood, the horizontal gradient g_H(i_L, j_L) and vertical gradient g_V(i_L, j_L) of the interpolated pixel by bilinear interpolation according to formulas (4) and (5); the edge direction of the interpolated pixel is then the direction perpendicular to its gradient, (g_V(i_L, j_L), -g_H(i_L, j_L)),
where I_L(i-1, j+1), I_L(i, j+1), I_L(i+1, j+1), I_L(i-1, j-1), I_L(i, j-1), I_L(i+1, j-1), I_L(i-1, j), and I_L(i+1, j) denote the pixel values of the eight pixels in the neighborhood of the interpolated pixel in the original image.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An image interpolation method based on edge detection, characterized by including:
    determining the position of an interpolated pixel in the original image according to the sizes of the original image and the interpolated image;
    determining the edge direction of the interpolated pixel in the original image;
    if the absolute value of the slope of the edge direction is not less than a first threshold, interpolating according to a row-intersection method and/or a column-intersection method, the row-intersection method and/or column-intersection method including:
    calculating the positions of the row intersections and/or column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and/or columns in the neighborhood of the interpolated pixel in the original image;
    determining the pixel values of the row intersections and/or column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation;
    applying a one-dimensional filter to the pixel values of the row intersections and/or column intersections in the neighborhood of the interpolated pixel to obtain the value of the interpolated pixel, and interpolating the original image accordingly;
    wherein
    the selection condition for the row-intersection method and/or column-intersection method is:
    when the absolute value of the slope of the edge direction is less than a second threshold, interpolation is performed according to the column-intersection method; when it is not less than a third threshold, interpolation is performed according to the row-intersection method; when it is not less than the second threshold and less than the third threshold, interpolation is performed according to both the row-intersection method and the column-intersection method;
    the row intersections are the m intersections at which the straight line determined by the interpolated pixel and the edge direction crosses m rows in the neighborhood of the interpolated pixel in the original image, with m/2 rows above and m/2 rows below the interpolated pixel, m being even;
    the column intersections are the n intersections at which the straight line determined by the interpolated pixel and the edge direction crosses n columns in the neighborhood of the interpolated pixel in the original image, with n/2 columns on the left and n/2 columns on the right of the interpolated pixel, n being even;
    the magnitudes of the three thresholds satisfy: first threshold < second threshold < third threshold.
  2. The image interpolation method based on edge detection according to claim 1, characterized in that interpolating according to both the row-intersection method and the column-intersection method specifically includes:
    calculating the positions of the row intersections and column intersections at which the straight line determined by the interpolated pixel and the edge direction crosses several rows and columns in the neighborhood of the interpolated pixel in the original image;
    determining the pixel values of the row intersections and column intersections from the values of the original-image pixels in their neighborhoods using one-dimensional interpolation;
    filtering the row-intersection and column-intersection pixel values in the neighborhood of the interpolated pixel separately with a one-dimensional filter to obtain the row-intersection interpolation result I_HR(i_H, j_H) and the column-intersection interpolation result I_HC(i_H, j_H), and determining the value I_H(i_H, j_H) of the interpolated pixel by weighting according to formula (23):
    I_H(i_H, j_H) = w*I_HR(i_H, j_H) + (1-w)*I_HC(i_H, j_H) (23)
    then interpolating the original image according to the value of the interpolated pixel;
    where (i_H, j_H) denotes the coordinates of the interpolated pixel's position and w denotes the weight of the row-intersection interpolation result.
  3. The image interpolation method based on edge detection according to claim 1, characterized by further including:
    if the absolute value of the slope of the edge direction is less than the first threshold, interpolating according to a non-edge interpolation method;
    correspondingly, after the one-dimensional filtering of the pixel values of the row intersections and/or column intersections in the neighborhood of the interpolated pixel obtains the value of the interpolated pixel and the original image is interpolated, and after the interpolation according to the non-edge interpolation method, the method further includes:
    merging the result obtained by the row-intersection and/or column-intersection interpolation with the result obtained by the non-edge interpolation method to obtain the interpolated image.
  4. The image interpolation method based on edge detection according to claim 1, characterized in that the row-intersection method is implemented using the intersection points with the four rows above and below the interpolation pixel;
    Correspondingly, said calculating, in the original image, the positions of the several row intersection points at which the straight line determined by the interpolation pixel and the edge direction cuts several rows in the interpolation pixel neighborhood comprises calculating the positions of the four row intersection points according to formulas (9), (10), (11) and (12), respectively:
    P0: (i-1, jL + (1+dy)*vx/vy)   (9)
    P1: (i,   jL + dy*vx/vy)       (10)
    P2: (i+1, jL - (1-dy)*vx/vy)   (11)
    P3: (i+2, jL - (2-dy)*vx/vy)   (12)
    Said determining the pixel values of the row intersection points by one-dimensional interpolation from the values of the pixels in the neighborhoods of the row intersection points in the original image comprises determining the pixel values of the four row intersection points according to formulas (17), (18), (19) and (20):
    IP0 = (1-(jL+(1+dy)*vx/vy-[jL+(1+dy)*vx/vy]))*IL(i-1,[jL+(1+dy)*vx/vy])
        + (jL+(1+dy)*vx/vy-[jL+(1+dy)*vx/vy])*IL(i-1,[jL+(1+dy)*vx/vy]+1)   (17)
    IP1 = (1-(jL+dy*vx/vy-[jL+dy*vx/vy]))*IL(i,[jL+dy*vx/vy])
        + (jL+dy*vx/vy-[jL+dy*vx/vy])*IL(i,[jL+dy*vx/vy]+1)                 (18)
    IP2 = (1-(jL-(1-dy)*vx/vy-[jL-(1-dy)*vx/vy]))*IL(i+1,[jL-(1-dy)*vx/vy])
        + (jL-(1-dy)*vx/vy-[jL-(1-dy)*vx/vy])*IL(i+1,[jL-(1-dy)*vx/vy]+1)   (19)
    IP3 = (1-(jL-(2-dy)*vx/vy-[jL-(2-dy)*vx/vy]))*IL(i+2,[jL-(2-dy)*vx/vy])
        + (jL-(2-dy)*vx/vy-[jL-(2-dy)*vx/vy])*IL(i+2,[jL-(2-dy)*vx/vy]+1)   (20)
    Said performing one-dimensional filtering on the pixel values of the row intersection points in the determined interpolation pixel neighborhood to obtain the value of the interpolation pixel comprises performing the one-dimensional filtering according to formula (22):
    IH(iH,jH)=f0*IP0+f1*IP1+f2*IP2+f3*IP3 (22)
    Wherein, [] denotes rounding down, (iL,jL) denotes the coordinate of the position of the interpolation pixel in the original image, i and j denote the row number and column number respectively, (vx,vy) denotes the edge direction, vx the horizontal component and vy the vertical component of the edge direction, P0, P1, P2, P3 denote the four row intersection points, IP0, IP1, IP2, IP3 denote the pixel values of the four row intersection points, and [f0,f1,f2,f3] are the coefficients of the one-dimensional filter.
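Formulas (9)-(12), (17)-(20) and (22) together describe one pass of the four-row intersection method. A minimal Python sketch under stated assumptions (the function name and the use of a plain nested-list image are illustrative, not from the patent):

```python
import math

def row_intersection_interp(img, iL, jL, vx, vy, f):
    """Four-row intersection interpolation: locate where the edge line
    through (iL, jL) with direction (vx, vy) crosses rows i-1..i+2
    (formulas (9)-(12)), linearly interpolate each crossing from its two
    horizontal neighbours (formulas (17)-(20)), then apply the 1-D
    filter f = [f0, f1, f2, f3] (formula (22)). Requires vy != 0."""
    i = int(math.floor(iL))
    dy = iL - i  # fractional vertical offset of the interpolation pixel
    # Horizontal coordinates of the four row intersection points
    xs = [jL + (1 + dy) * vx / vy,   # P0 on row i-1
          jL + dy * vx / vy,         # P1 on row i
          jL - (1 - dy) * vx / vy,   # P2 on row i+1
          jL - (2 - dy) * vx / vy]   # P3 on row i+2
    vals = []
    for r, x in zip([i - 1, i, i + 1, i + 2], xs):
        c = int(math.floor(x))       # [.] in the claims: round down
        frac = x - c
        vals.append((1 - frac) * img[r][c] + frac * img[r][c + 1])
    # Formula (22): one-dimensional filtering of the four intersections
    return sum(fk * vk for fk, vk in zip(f, vals))
```

For a constant image any filter whose coefficients sum to 1 reproduces the constant, which is a quick sanity check of the weights.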
  5. The image interpolation method based on edge detection according to claim 1, characterized in that said determining the edge direction of the interpolation pixel in the original image comprises:
    Calculating the horizontal gradient gH(i,j) and the vertical gradient gV(i,j) of several pixels in the interpolation pixel neighborhood in the original image according to formulas (2) and (3), respectively:
    gH(i,j) = IL(i-1,j+1) + 2*IL(i,j+1) + IL(i+1,j+1) - IL(i-1,j-1) - 2*IL(i,j-1) - IL(i+1,j-1)   (2)
    gV(i,j) = IL(i-1,j-1) + 2*IL(i-1,j) + IL(i-1,j+1) - IL(i+1,j-1) - 2*IL(i+1,j) - IL(i+1,j+1)   (3)
    Determining the horizontal gradient gH(iL,jL) and the vertical gradient gV(iL,jL) of the interpolation pixel by bilinear interpolation, namely according to formulas (4) and (5), from the position of the interpolation pixel in the original image and the horizontal and vertical gradients of the pixels in the neighborhood; the edge direction of the interpolation pixel is then the direction perpendicular to the gradient of the interpolation pixel, (gV(iL,jL),-gH(iL,jL)):
    gH(iL,jL) = (1-dx)*(1-dy)*gH(i,j) + dx*(1-dy)*gH(i,j+1) + (1-dx)*dy*gH(i+1,j) + dx*dy*gH(i+1,j+1)   (4)
    gV(iL,jL) = (1-dx)*(1-dy)*gV(i,j) + dx*(1-dy)*gV(i,j+1) + (1-dx)*dy*gV(i+1,j) + dx*dy*gV(i+1,j+1)   (5)
    Wherein, IL(i-1,j+1), IL(i,j+1), IL(i+1,j+1), IL(i-1,j-1), IL(i,j-1), IL(i+1,j-1), IL(i-1,j), IL(i+1,j) denote the pixel values of the eight pixels in the interpolation pixel neighborhood in the original image, respectively.
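Formulas (2)-(5) of claim 5 amount to 3x3 Sobel gradients followed by bilinear resampling of the gradient field. A sketch, assuming a plain 2-D list image (function names are illustrative):

```python
import math

def sobel_gradients(img, i, j):
    """Formulas (2)-(3): Sobel horizontal and vertical gradients."""
    gH = (img[i-1][j+1] + 2*img[i][j+1] + img[i+1][j+1]
          - img[i-1][j-1] - 2*img[i][j-1] - img[i+1][j-1])
    gV = (img[i-1][j-1] + 2*img[i-1][j] + img[i-1][j+1]
          - img[i+1][j-1] - 2*img[i+1][j] - img[i+1][j+1])
    return gH, gV

def gradient_at(img, iL, jL):
    """Formulas (4)-(5): bilinearly interpolate the gradients of the
    four surrounding pixels; the edge direction is (gV, -gH)."""
    i, j = int(math.floor(iL)), int(math.floor(jL))
    dy, dx = iL - i, jL - j
    g00, g01 = sobel_gradients(img, i, j), sobel_gradients(img, i, j + 1)
    g10, g11 = sobel_gradients(img, i + 1, j), sobel_gradients(img, i + 1, j + 1)
    gH = ((1-dx)*(1-dy)*g00[0] + dx*(1-dy)*g01[0]
          + (1-dx)*dy*g10[0] + dx*dy*g11[0])
    gV = ((1-dx)*(1-dy)*g00[1] + dx*(1-dy)*g01[1]
          + (1-dx)*dy*g10[1] + dx*dy*g11[1])
    return gH, gV, (gV, -gH)
```

On a vertical step edge the vertical gradient vanishes and the returned edge direction is purely vertical, as expected.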
  6. The image interpolation method based on edge detection according to claim 1, characterized in that said determining the edge direction of the interpolation pixel in the original image comprises:
    Choosing a window of arbitrary size in the interpolation pixel neighborhood, determining the horizontal gradient gH(i,j) and the vertical gradient gV(i,j) of all pixels in the window, and thereby determining the covariance matrix M of all pixels in the window of the interpolation pixel neighborhood:
    M = [A, B; B, C] = [Σ_{(i,j)∈Ω} (gH(i,j))^2, Σ_{(i,j)∈Ω} gH(i,j)*gV(i,j); Σ_{(i,j)∈Ω} gH(i,j)*gV(i,j), Σ_{(i,j)∈Ω} (gV(i,j))^2]   (6)
    Calculating the eigenvalues and eigenvectors of the covariance matrix, and determining the eigenvector v corresponding to the smaller eigenvalue as the edge direction, namely:
    v = [vx; vy] = [2B; C - A - sqrt((C-A)^2 + 4B^2)]   (7)
    Wherein, v = [vx; vy] denotes the eigenvector corresponding to the smaller eigenvalue of the covariance matrix, vx denotes the horizontal component of the edge direction, and vy denotes the vertical component of the edge direction.
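The eigenvector of formula (7) follows in closed form from A, B and C, so no general eigensolver is needed. A sketch, assuming the gradients over the window Ω are supplied as a list of (gH, gV) pairs (an input format chosen for illustration):

```python
import math

def edge_direction_from_gradients(grads):
    """Formulas (6)-(7): accumulate the covariance matrix entries over
    the window and return the eigenvector of the smaller eigenvalue,
    which points along (not across) the edge."""
    grads = list(grads)
    A = sum(gh * gh for gh, gv in grads)
    B = sum(gh * gv for gh, gv in grads)
    C = sum(gv * gv for gh, gv in grads)
    vx = 2.0 * B
    vy = C - A - math.sqrt((C - A) ** 2 + 4.0 * B * B)
    return vx, vy
```

For purely horizontal gradients (a vertical edge) this yields vx = 0 and a negative vy, i.e. a vertical edge direction, which matches the smaller-eigenvalue interpretation.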
  7. The image interpolation method based on edge detection according to claim 6, characterized in that said determining the edge direction of the interpolation pixel in the original image further comprises:
    Improving the covariance matrix of formula (6) to obtain the improved covariance matrix M' according to formula (8):
    M' = [A, B; B, C] = [Σ_{(i,j)∈Ω} w(i,j)*(gH(i,j))^2, Σ_{(i,j)∈Ω} w(i,j)*gH(i,j)*gV(i,j); Σ_{(i,j)∈Ω} w(i,j)*gH(i,j)*gV(i,j), Σ_{(i,j)∈Ω} w(i,j)*(gV(i,j))^2]   (8)
    Wherein, the values of w(i,j) are taken according to bilinear interpolation as:
    w(i-1,j-2)=(1-dx)*(1-dy), w(i-1,j-1)=(1-dy), w(i-1,j)=(1-dy), w(i-1,j+1)=(1-dy), w(i-1,j+2)=(1-dy), w(i-1,j+3)=dx*(1-dy);
    w(i,j-2)=(1-dx), w(i,j-1)=1, w(i,j)=1, w(i,j+1)=1, w(i,j+2)=1, w(i,j+3)=dx;
    w(i+1,j-2)=(1-dx), w(i+1,j-1)=1, w(i+1,j)=1, w(i+1,j+1)=1, w(i+1,j+2)=1, w(i+1,j+3)=dx;
    w(i+2,j-2)=(1-dx)*dy, w(i+2,j-1)=dy, w(i+2,j)=dy, w(i+2,j+1)=dy, w(i+2,j+2)=dy, w(i+2,j+3)=dx*dy.
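The w(i,j) table is separable: every entry is the product of a row factor from (1-dy, 1, 1, dy) and a column factor from (1-dx, 1, 1, 1, 1, dx). A sketch that builds the 4x6 grid (rows i-1..i+2, columns j-2..j+3; the function name is illustrative):

```python
def bilinear_window_weights(dx, dy):
    """Formula (8) weights: rows i-1..i+2 carry factors (1-dy),1,1,dy
    and columns j-2..j+3 carry factors (1-dx),1,1,1,1,dx; each w(i,j)
    is the product of its row and column factor."""
    row_f = [1 - dy, 1.0, 1.0, dy]
    col_f = [1 - dx, 1.0, 1.0, 1.0, 1.0, dx]
    return [[rf * cf for cf in col_f] for rf in row_f]
```

Spot-checking corners against the listed values (e.g. w(i-1,j-2)=(1-dx)*(1-dy) and w(i+2,j+3)=dx*dy) confirms the separable form.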
  8. An image interpolation system based on edge detection, characterized by comprising:
    A coordinate calculating unit, for determining the position of the interpolation pixel in the original image according to the sizes of the original image and the interpolated image;
    A direction calculating unit, for determining the edge direction of the interpolation pixel in the original image;
    An intersection calculating unit, configured to: when the absolute value of the slope of the edge direction is not less than the first threshold, calculate, in the original image, the positions of the several row intersection points and/or several column intersection points at which the straight line determined by the interpolation pixel and the edge direction cuts several rows and/or several columns in the interpolation pixel neighborhood;
    An edge interpolation filtering unit, configured to perform interpolation according to the row-intersection method and/or the column-intersection method, specifically configured to: determine the pixel values of the row intersection points and/or column intersection points by one-dimensional interpolation from the values of the pixels in the neighborhoods of the row intersection points and/or column intersection points in the original image; perform one-dimensional filtering on the pixel values of the row intersection points and/or column intersection points in the determined interpolation pixel neighborhood to obtain the value of the interpolation pixel; and interpolate the original image;
    Wherein,
    The selection condition of the row-intersection method and/or the column-intersection method is:
    When the absolute value of the slope of the edge direction is less than the second threshold, interpolation is performed according to the row-intersection method; when the absolute value of the slope of the edge direction is not less than the third threshold, interpolation is performed according to the column-intersection method; when the absolute value of the slope of the edge direction is not less than the second threshold and less than the third threshold, interpolation is performed according to both the row-intersection method and the column-intersection method;
    The row intersection points are the m row intersection points at which the straight line determined by the interpolation pixel and the edge direction cuts m rows in the interpolation pixel neighborhood in the original image, with m/2 row intersection points above and m/2 below the interpolation pixel, m being even;
    The column intersection points are the n column intersection points at which the straight line determined by the interpolation pixel and the edge direction cuts n columns in the interpolation pixel neighborhood in the original image, with n/2 column intersection points on the left and n/2 on the right of the interpolation pixel, n being even;
    The magnitude relationship of the three thresholds is: first threshold < second threshold < third threshold.
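The threshold logic of claim 8 (together with the non-edge case of claim 9) can be sketched as a small dispatcher. The threshold values, the returned labels, and the reading of the slope as |vy/vx| are illustrative assumptions, not part of the claims:

```python
def select_methods(vx, vy, t1, t2, t3):
    """Choose the interpolation path from the edge-direction slope
    |vy/vx| and the thresholds t1 < t2 < t3 of claims 8 and 9."""
    slope = abs(vy / vx) if vx != 0 else float("inf")
    if slope < t1:
        return "non-edge"       # claim 9: non-edge interpolation method
    if slope < t2:
        return "row"            # row-intersection method only
    if slope >= t3:
        return "column"         # column-intersection method only
    return "row+column"         # both methods, fused afterwards
```

A purely vertical edge direction (vx = 0) falls in the column-only branch under this reading.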
  9. The image interpolation system based on edge detection according to claim 8, characterized by further comprising:
    A non-edge interpolating unit, for performing interpolation according to a non-edge interpolation method when the absolute value of the slope of the edge direction is less than the first threshold;
    A fusing unit, for fusing the result obtained by the row-intersection method and/or column-intersection method with the result obtained by the non-edge interpolation method to obtain the interpolated image.
  10. The image interpolation system based on edge detection according to claim 8, characterized in that the direction calculating unit is specifically used for: calculating the horizontal gradient gH(i,j) and the vertical gradient gV(i,j) of several pixels in the interpolation pixel neighborhood in the original image according to formulas (2) and (3), respectively:
    gH(i,j) = IL(i-1,j+1) + 2*IL(i,j+1) + IL(i+1,j+1) - IL(i-1,j-1) - 2*IL(i,j-1) - IL(i+1,j-1)   (2)
    gV(i,j) = IL(i-1,j-1) + 2*IL(i-1,j) + IL(i-1,j+1) - IL(i+1,j-1) - 2*IL(i+1,j) - IL(i+1,j+1)   (3)
    The horizontal gradient g_H(i_L, j_L) and the vertical gradient g_V(i_L, j_L) of the interpolation pixel are then determined by bilinear interpolation according to formulas (4) and (5), from the position of the interpolation pixel in the original image and the horizontal and vertical gradients of the pixels in its neighborhood. The edge direction of the interpolation pixel is then the direction perpendicular to its gradient, namely (g_V(i_L, j_L), -g_H(i_L, j_L)):
    g_H(i_L, j_L) = (1-dx)*(1-dy)*g_H(i, j) + dx*(1-dy)*g_H(i, j+1) + (1-dx)*dy*g_H(i+1, j) + dx*dy*g_H(i+1, j+1)    (4)
    g_V(i_L, j_L) = (1-dx)*(1-dy)*g_V(i, j) + dx*(1-dy)*g_V(i, j+1) + (1-dx)*dy*g_V(i+1, j) + dx*dy*g_V(i+1, j+1)    (5)
    Wherein, I_L(i-1, j+1), I_L(i, j+1), I_L(i+1, j+1), I_L(i-1, j-1), I_L(i, j-1), I_L(i+1, j-1), I_L(i-1, j), and I_L(i+1, j) respectively denote the pixel values of the eight pixels in the neighborhood of the interpolation pixel in the original image.
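Assuming that i = floor(i_L), j = floor(j_L), dy = i_L - i and dx = j_L - j (the usual mapping from a sub-pixel position to its four integer neighbors, consistent with the weights in (4) and (5)), the interpolated gradient and the edge-direction rule can be sketched as follows; the function name is illustrative:

```python
def edge_direction(gH, gV, iL, jL):
    """Bilinearly interpolate the gradient fields at the sub-pixel position
    (iL, jL) per equations (4) and (5), then return the edge direction
    (g_V, -g_H): the vector perpendicular to the interpolated gradient."""
    i, j = int(iL), int(jL)          # integer neighbor (assumes iL, jL >= 0)
    dy, dx = iL - i, jL - j          # fractional offsets
    w = ((1-dx)*(1-dy), dx*(1-dy), (1-dx)*dy, dx*dy)
    gH_p = (w[0]*gH[i, j] + w[1]*gH[i, j+1]
            + w[2]*gH[i+1, j] + w[3]*gH[i+1, j+1])   # Eq. (4)
    gV_p = (w[0]*gV[i, j] + w[1]*gV[i, j+1]
            + w[2]*gV[i+1, j] + w[3]*gV[i+1, j+1])   # Eq. (5)
    return (gV_p, -gH_p)
```

Because the four weights sum to 1, a constant gradient field is reproduced exactly, and the returned vector is orthogonal to the gradient (g_H, g_V), i.e. it points along the edge rather than across it.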
CN201510152962.3A 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection Active CN104700361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510152962.3A CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510152962.3A CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection

Publications (2)

Publication Number Publication Date
CN104700361A CN104700361A (en) 2015-06-10
CN104700361B true CN104700361B (en) 2017-12-05

Family

ID=53347450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510152962.3A Active CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection

Country Status (1)

Country Link
CN (1) CN104700361B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678700B (en) * 2016-01-11 2018-10-09 苏州大学 Image interpolation method and system based on prediction gradient
CN106886981B (en) * 2016-12-30 2020-02-14 中国科学院自动化研究所 Image edge enhancement method and system based on edge detection
CN108062821B (en) * 2017-12-12 2020-04-28 深圳怡化电脑股份有限公司 Edge detection method and currency detection equipment
CN109993693B (en) * 2017-12-29 2023-04-25 澜至电子科技(成都)有限公司 Method and apparatus for interpolating an image
CN108495118A (en) * 2018-02-27 2018-09-04 吉林省行氏动漫科技有限公司 A kind of 3 D displaying method and system of Glassless

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1667650A (en) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 Image zooming method based on edge detection
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286721B2 (en) * 2003-09-11 2007-10-23 Leadtek Research Inc. Fast edge-oriented image interpolation algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1667650A (en) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 Image zooming method based on edge detection
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Interpolation Algorithms for Mobile Phones; Wu Weisen; China Master's Theses Full-text Database, Information Science and Technology; 2011-12-15; pp. 11-12 *

Also Published As

Publication number Publication date
CN104700361A (en) 2015-06-10

Similar Documents

Publication Publication Date Title
CN104700361B (en) Image interpolation method and system based on edge detection
CN103996170B (en) Image edge saw-tooth eliminating method with super resolution
US8494308B2 (en) Image upscaling based upon directional interpolation
CN106204441B (en) Image local amplification method and device
CN1319375C (en) Image zooming method based on edge detection
CN101755286B (en) Image upscaling based upon directional interpolation
US9105106B2 (en) Two-dimensional super resolution scaling
US8340472B2 (en) Pixel interpolation apparatus and method
CN104700360A (en) Image zooming method and system based on edge self-adaptation
CN109191377B (en) Image amplification method based on interpolation
CN107330885A (en) A kind of multi-operator image reorientation method of holding important content region the ratio of width to height
US8045053B2 (en) Video image deinterlacing apparatus and methods of performing video image deinterlacing
US20090226097A1 (en) Image processing apparatus
CN102682424A (en) Image amplification processing method based on edge direction difference
CN106169173A (en) A kind of image interpolation method
CN101790069A (en) Scale transformation method based on image edge direction
US8830395B2 (en) Systems and methods for adaptive scaling of digital images
CN103456031B (en) A kind of new method of area image interpolation
CN103646379B (en) A kind of image magnification method and device
CN110349090A (en) A kind of image-scaling method based on newton second order interpolation
CN102800047B (en) Method for reconstructing super resolution of single-frame image
US9076232B2 (en) Apparatus and method for interpolating image, and apparatus for processing image using the same
CN109325909A (en) A kind of image magnification method and image amplifying device
CN102831589B (en) A kind of method utilizing convolutional filtering and antialiasing analysis to strengthen image resolution ratio
JP5616014B2 (en) Method for generating a distance representing the direction of an edge in a video picture, corresponding device, and use of the method for deinterlacing or format conversion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171211

Address after: 102412 Room 402, 4th floor, Building 11, No. 1, Yanfu Road, Yancun Town, Fangshan District, Beijing

Patentee after: Beijing Si Lang science and Technology Co.,Ltd.

Address before: 100080 No. 95, Zhongguancun East Road, Beijing

Patentee before: Institute of Automation, Chinese Academy of Sciences

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220119

Address after: 519031 room 532, building 18, No. 1889, Huandao East Road, Hengqin District, Zhuhai City, Guangdong Province

Patentee after: Zhuhai Jilang Semiconductor Technology Co.,Ltd.

Address before: 102412 room 402, 4th floor, building 11, No. 1, Yanfu Road, Yancun Town, Fangshan District, Beijing

Patentee before: Beijing Si Lang science and Technology Co.,Ltd.

TR01 Transfer of patent right
CP03 Change of name, title or address

Address after: Room 701, 7th Floor, Building 56, No. 2, Jingyuan North Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176 (Beijing Pilot Free Trade Zone High-end Industry Zone Yizhuang Group)

Patentee after: Beijing Jilang Semiconductor Technology Co., Ltd.

Address before: 519031 room 532, building 18, No. 1889, Huandao East Road, Hengqin District, Zhuhai City, Guangdong Province

Patentee before: Zhuhai Jilang Semiconductor Technology Co.,Ltd.

CP03 Change of name, title or address