CN104202554A - Intra-field anti-aliasing and deinterlacing method - Google Patents

Intra-field anti-aliasing and deinterlacing method

Info

Publication number
CN104202554A
CN104202554A (application CN201410466905.8A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410466905.8A
Other languages
Chinese (zh)
Inventor
张宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU ZHIMINGDA DIGITAL EQUIPMENT Co Ltd
Original Assignee
CHENGDU ZHIMINGDA DIGITAL EQUIPMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU ZHIMINGDA DIGITAL EQUIPMENT Co Ltd filed Critical CHENGDU ZHIMINGDA DIGITAL EQUIPMENT Co Ltd
Priority to CN201410466905.8A priority Critical patent/CN104202554A/en
Publication of CN104202554A publication Critical patent/CN104202554A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an intra-field anti-aliasing and deinterlacing method. An interpolation point P(x, y) is selected, with x and y being the row and column of the point to be interpolated in the image. First, the related pixels in the row above and the row below the point to be interpolated are selected and smoothed to form the vectors UPPER and UNDER. Second, the vectors UPPER and UNDER are used as input to the direction models, and the direction correlation values DR(i), i ∈ [-5, 5], of the eleven direction models are calculated; the maximum Dmax is found among the DR(i) and a screening is performed. Third, the direction estimate D'(x, y) of the point to be interpolated is calculated from the direction values of the two points on each side of it. Fourth, a search range is chosen according to the direction estimate D'(x, y). Fifth, within the search range, the direction correlation DR(k), k within the search range, of each direction model is calculated; the maximum is found among the DR(k), and the direction model corresponding to it is the interpolation direction DI of the point to be interpolated.

Description

Intra-field anti-aliasing and deinterlacing method
Technical field
The present invention relates to a computer image processing method, and in particular to an intra-field anti-aliasing and deinterlacing method.
Background technology
The prior-art scheme closest to the present invention is the edge-adaptive interpolation method: 1. several detection directions are selected at the position of the point to be interpolated; 2. in each detection direction, the difference between the two corresponding pixel values is computed and its absolute value is taken; 3. the absolute values are compared, and the direction with the smallest absolute value is taken as the interpolation direction; 4. the related pixels in the interpolation direction are selected, weighted, and combined to produce the interpolation result. The existing approaches have drawbacks: low-end methods such as line duplication and inter-line averaging produce severe edge aliasing and poor vertical definition; the edge-adaptive interpolation method easily misjudges the edge direction and thereby introduces noise; inter-frame motion detection is computationally heavy and resource-intensive, and once a detection error occurs it easily introduces noise.
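For illustration only, a minimal Python sketch of the edge-adaptive interpolation scheme summarized above, assuming just three detection directions and an equal-weight two-pixel average along the winning direction; the direction set, the weights, and the name `edge_adaptive_interpolate` are illustrative choices, not taken from any cited document.

```python
def edge_adaptive_interpolate(field, x, y):
    """Prior-art style edge-adaptive interpolation of the missing pixel (x, y).

    `field` is a 2-D array of pixel values in which lines x-1 and x+1 exist.
    Three candidate directions are tested; the one whose upper/lower pixels
    differ the least wins (steps 1-3), and an equal-weight average along that
    direction gives the result (step 4).
    """
    candidates = {
        -1: (field[x - 1][y - 1], field[x + 1][y + 1]),  # left diagonal
         0: (field[x - 1][y],     field[x + 1][y]),      # vertical
        +1: (field[x - 1][y + 1], field[x + 1][y - 1]),  # right diagonal
    }
    best_dir = min(candidates, key=lambda d: abs(candidates[d][0] - candidates[d][1]))
    a, b = candidates[best_dir]
    return (a + b) / 2
```

A misjudged direction here directly injects a wrong pixel pair into the average, which is the noise problem the present method aims to avoid.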
Summary of the invention
In view of the above deficiencies of the prior art, the present invention provides an intra-field anti-aliasing and deinterlacing method that effectively solves the problems of the prior art described above.
To achieve the above object, the technical solution adopted by the present invention is an intra-field anti-aliasing and deinterlacing method in which the point to be interpolated is P(x, y), where x and y are respectively the row and column of the point to be interpolated in the image.
Step 1: select the related pixels in the row above the point to be interpolated: P(x-1, j-4), P(x-1, j-3), P(x-1, j-2), P(x-1, j-1), P(x-1, j), P(x-1, j+1), P(x-1, j+2), P(x-1, j+3), P(x-1, j+4); apply smoothing filtering to them to obtain the vector UPPER. Select the related pixels in the row below the point to be interpolated: P(x+1, j-4), P(x+1, j-3), P(x+1, j-2), P(x+1, j-1), P(x+1, j), P(x+1, j+1), P(x+1, j+2), P(x+1, j+3), P(x+1, j+4); apply smoothing filtering to them to obtain the vector UNDER.
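The patent does not specify the smoothing filter, so the following Python sketch assumes a simple 3-tap moving average with edge replication when forming the nine-element UPPER and UNDER vectors; the names `smooth_3tap` and `build_vectors` are assumptions.

```python
def smooth_3tap(values):
    """Assumed smoothing filter: 3-tap moving average with edge replication."""
    padded = [values[0]] + list(values) + [values[-1]]
    return [(padded[k] + padded[k + 1] + padded[k + 2]) / 3.0
            for k in range(len(values))]


def build_vectors(field, x, j):
    """Form the smoothed vectors UPPER and UNDER for the point at row x, column j.

    Nine related pixels are taken from the line above (x-1) and the line below
    (x+1); boundary handling (j < 4 or j > width-5) is omitted in this sketch.
    """
    upper_raw = [field[x - 1][j + d] for d in range(-4, 5)]
    under_raw = [field[x + 1][j + d] for d in range(-4, 5)]
    return smooth_3tap(upper_raw), smooth_3tap(under_raw)
```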
Step 2: take the vectors UPPER and UNDER as the input of the direction models and compute the direction correlation values DR(i), i ∈ [-5, 5], of the 11 direction models. Find the maximum Dmax among the DR(i), then perform the following screening:
If DR(0) = Dmax, then D(x, y) = 0;
Else if DR(1) = Dmax and DR(1) > DR(-1), then D(x, y) = 1;
Else if DR(-1) = Dmax and DR(-1) > DR(1), then D(x, y) = -1;
Else if DR(2) = Dmax and DR(2) > DR(-2), then D(x, y) = 2;
Else if DR(-2) = Dmax and DR(-2) > DR(2), then D(x, y) = -2;
Else if DR(3) = Dmax and DR(3) > DR(-3), then D(x, y) = 3;
Else if DR(-3) = Dmax and DR(-3) > DR(3), then D(x, y) = -3;
Else if DR(4) = Dmax and DR(4) > DR(-4), then D(x, y) = 4;
Else if DR(-4) = Dmax and DR(-4) > DR(4), then D(x, y) = -4;
Else if DR(5) = Dmax and DR(5) > DR(-5), then D(x, y) = 5;
Else if DR(-5) = Dmax and DR(-5) > DR(5), then D(x, y) = -5;
Otherwise, D(x, y) = 0;
The direction value D(x, y) is computed in this way for every point to be interpolated, where x and y are respectively the row and column of the point in the image (a code sketch of this screening follows).
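A Python sketch of the screening rules listed above, assuming the eleven correlations DR(-5)..DR(5) have already been computed and are supplied as a dictionary keyed by direction number; the function name `direction_value` is an assumption.

```python
def direction_value(dr):
    """D(x, y): apply the Dmax screening to the eleven direction correlations.

    `dr` maps each direction number i in [-5, 5] to its correlation DR(i).
    """
    dmax = max(dr.values())
    if dr[0] == dmax:
        return 0
    # Test the pairs in the order the patent lists them: +1/-1, +2/-2, ...
    for m in (1, 2, 3, 4, 5):
        if dr[m] == dmax and dr[m] > dr[-m]:
            return m
        if dr[-m] == dmax and dr[-m] > dr[m]:
            return -m
    return 0  # "otherwise" branch
```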
Step 3: using the direction values of the two points on each side of the point to be interpolated, compute its direction estimate D'(x, y), specifically:
D'(x, y) = D(x, y-2) + D(x, y-1) + D(x, y+1) + D(x, y+2);
Step 4: according to the direction estimate D'(x, y) of the point to be interpolated, choose the search range, specifically:
If D'(x, y) ≤ -16, choose direction models -5, -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≤ -12, choose direction models -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≥ 12, choose direction models 4, 3, 2, 1, 0 as the search range;
Else if D'(x, y) ≥ 16, choose direction models 5, 4, 3, 2, 1, 0 as the search range;
Otherwise, choose direction models -3, -2, -1, 0, 1, 2, 3 as the search range (steps 3 and 4 are sketched in code below).
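A Python sketch of steps 3 and 4 under the same assumptions; `direction_map` is a hypothetical dictionary holding the already-computed direction values D(x, y) of neighbouring points (the patent does not name such a container), and the branches of `search_range` are tested in the order the patent states them.

```python
def direction_estimate(direction_map, x, y):
    """Step 3: D'(x, y) = D(x, y-2) + D(x, y-1) + D(x, y+1) + D(x, y+2).

    Missing neighbours (e.g. near the image border) are assumed to contribute 0.
    """
    return sum(direction_map.get((x, y + d), 0) for d in (-2, -1, 1, 2))


def search_range(d_est):
    """Step 4: choose the set of direction models to search from D'(x, y).

    The branches are tested in the order the patent states them, so the
    d_est >= 12 test is reached before the d_est >= 16 test.
    """
    if d_est <= -16:
        return [-5, -4, -3, -2, -1, 0]
    if d_est <= -12:
        return [-4, -3, -2, -1, 0]
    if d_est >= 12:
        return [4, 3, 2, 1, 0]
    if d_est >= 16:
        return [5, 4, 3, 2, 1, 0]
    return [-3, -2, -1, 0, 1, 2, 3]
```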
Step 5: within the search range, compute the direction correlation DR(k), k ∈ (search range), of each direction model; find the maximum among the DR(k), and the direction model corresponding to it is the interpolation direction DI of the point to be interpolated. Preferably, the interpolation result is then computed as follows (a code transcription of this table follows the list):
If DI = 0, P(x, y) = (UPPER(5) + UNDER(5))/2;
If DI = -1 or -2, P(x, y) = (UPPER(4) + UPPER(5) + UNDER(5) + UNDER(6))/4;
If DI = 1 or 2, P(x, y) = (UPPER(5) + UPPER(6) + UNDER(4) + UNDER(5))/4;
If DI = 3, P(x, y) = (UPPER(6) + UNDER(4))/2;
If DI = -3, P(x, y) = (UPPER(4) + UNDER(6))/2;
If DI = 4, P(x, y) = (UPPER(7) + UNDER(3))/2;
If DI = -4, P(x, y) = (UPPER(3) + UNDER(7))/2;
If DI = 5, P(x, y) = (UPPER(8) + UNDER(2))/2;
If DI = -5, P(x, y) = (UPPER(2) + UNDER(8))/2.
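A direct Python transcription of the interpolation table above, mapping the patent's 1-based UPPER(n)/UNDER(n) notation onto 0-based list indices; only the function name and the two helper lambdas are additions.

```python
def interpolate(upper, under, di):
    """P(x, y) from the interpolation direction DI, per the table above.

    UPPER(n)/UNDER(n) in the patent are 1-based; here upper[n-1]/under[n-1].
    """
    U = lambda n: upper[n - 1]   # UPPER(n)
    V = lambda n: under[n - 1]   # UNDER(n)
    if di == 0:
        return (U(5) + V(5)) / 2
    if di in (-1, -2):
        return (U(4) + U(5) + V(5) + V(6)) / 4
    if di in (1, 2):
        return (U(5) + U(6) + V(4) + V(5)) / 4
    if di == 3:
        return (U(6) + V(4)) / 2
    if di == -3:
        return (U(4) + V(6)) / 2
    if di == 4:
        return (U(7) + V(3)) / 2
    if di == -4:
        return (U(3) + V(7)) / 2
    if di == 5:
        return (U(8) + V(2)) / 2
    return (U(2) + V(8)) / 2     # di == -5
```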
Compared with the prior art, the present invention has the following beneficial effects: the improvements in the method allow accurate, adaptive direction detection within a field and avoid the large amount of logic resources consumed by inter-frame motion detection. The noise introduced by edge aliasing and direction misjudgment is effectively suppressed, yielding fine, smooth image edges.
Brief description of the drawings
Fig. 1 is a schematic diagram of the direction models.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and the specific embodiments.
Referring to Fig. 1, an intra-field anti-aliasing and deinterlacing method is provided in which the point to be interpolated is P(x, y), where x and y are respectively the row and column of the point to be interpolated in the image.
Step 1: select the related pixels in the row above the point to be interpolated: P(x-1, j-4), P(x-1, j-3), P(x-1, j-2), P(x-1, j-1), P(x-1, j), P(x-1, j+1), P(x-1, j+2), P(x-1, j+3), P(x-1, j+4); apply smoothing filtering to them to obtain the vector UPPER. Select the related pixels in the row below the point to be interpolated: P(x+1, j-4), P(x+1, j-3), P(x+1, j-2), P(x+1, j-1), P(x+1, j), P(x+1, j+1), P(x+1, j+2), P(x+1, j+3), P(x+1, j+4); apply smoothing filtering to them to obtain the vector UNDER. Step 2: take the vectors UPPER and UNDER as the input of the direction models. (A direction model means that 11 directions are defined and numbered from -5 to 5; when a direction is tested, a pixel-extraction template is set and three specific pixel pairs in that direction are extracted for pixel-correlation computation, with A0 paired with B0, A1 with B1, and A2 with B2.) Compute the direction correlations of the 11 direction models. (The direction correlation is the sum of the three pixel-pair correlations in a direction model, denoted DR, specifically DR = R(A0, B0) + R(A1, B1) + R(A2, B2); the 11 direction correlations are denoted DR(i), i ∈ [-5, 5]; a code sketch of this computation, under assumed template details, follows the screening list below.) Find the maximum Dmax among the DR(i), then perform the following screening:
If DR(0) = Dmax, then D(x, y) = 0;
Else if DR(1) = Dmax and DR(1) > DR(-1), then D(x, y) = 1;
Else if DR(-1) = Dmax and DR(-1) > DR(1), then D(x, y) = -1;
Else if DR(2) = Dmax and DR(2) > DR(-2), then D(x, y) = 2;
Else if DR(-2) = Dmax and DR(-2) > DR(2), then D(x, y) = -2;
Else if DR(3) = Dmax and DR(3) > DR(-3), then D(x, y) = 3;
Else if DR(-3) = Dmax and DR(-3) > DR(3), then D(x, y) = -3;
Else if DR(4) = Dmax and DR(4) > DR(-4), then D(x, y) = 4;
Else if DR(-4) = Dmax and DR(-4) > DR(4), then D(x, y) = -4;
Else if DR(5) = Dmax and DR(5) > DR(-5), then D(x, y) = 5;
Else if DR(-5) = Dmax and DR(-5) > DR(5), then D(x, y) = -5;
Otherwise, D(x, y) = 0;
The direction value is computed in this way for every point to be interpolated. (The direction value means that the direction correlations of the 11 direction models are computed at the position of the point to be interpolated and the maximum is found; the direction number of the corresponding direction model is the direction value of that point, denoted D(x, y), where x and y are respectively the row and column of the point.)
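The exact pixel-extraction template behind the pairs A0/B0, A1/B1 and A2/B2 is defined only by Fig. 1, which is not reproduced here. The Python sketch below therefore assumes that direction model i pairs UPPER(5+i+d) with UNDER(5-i+d) for d ∈ {-1, 0, 1} (a choice consistent with the interpolation table in this embodiment) and assumes R(a, b) = -|a - b| as the pixel-correlation measure, so that larger DR means stronger correlation.

```python
def R(a, b):
    """Assumed pixel-correlation measure: larger when the pixels are more alike."""
    return -abs(a - b)


def DR(upper, under, i):
    """DR(i) = R(A0, B0) + R(A1, B1) + R(A2, B2) for direction model i.

    Assumed template: direction i pairs UPPER(5+i+d) with UNDER(5-i+d) for
    d in (-1, 0, 1), using the patent's 1-based indexing clamped to 1..9.
    The true template is given by Fig. 1 of the patent.
    """
    total = 0.0
    for d in (-1, 0, 1):
        a = upper[min(max(5 + i + d, 1), 9) - 1]   # A0, A1, A2 from UPPER
        b = under[min(max(5 - i + d, 1), 9) - 1]   # B0, B1, B2 from UNDER
        total += R(a, b)
    return total
```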
Step 3: using the direction values of the two points on each side of the point to be interpolated, compute its direction estimate D'(x, y) (the sum of the direction values of the two points on each side of the position to be interpolated, where x and y are respectively the row and column of the point), specifically: D'(x, y) = D(x, y-2) + D(x, y-1) + D(x, y+1) + D(x, y+2);
Step 4: according to the direction estimate D'(x, y) of the point to be interpolated, choose the search range, specifically:
If D'(x, y) ≤ -16, choose direction models -5, -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≤ -12, choose direction models -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≥ 12, choose direction models 4, 3, 2, 1, 0 as the search range;
Else if D'(x, y) ≥ 16, choose direction models 5, 4, 3, 2, 1, 0 as the search range;
Otherwise, choose direction models -3, -2, -1, 0, 1, 2, 3 as the search range.
Step 5: within the search range, compute the direction correlation DR(k), k ∈ (search range), of each direction model; find the maximum among the DR(k), and the direction model corresponding to it is the interpolation direction DI of the point to be interpolated. In the present embodiment, the interpolation result is then computed as follows:
If DI = 0, P(x, y) = (UPPER(5) + UNDER(5))/2;
If DI = -1 or -2, P(x, y) = (UPPER(4) + UPPER(5) + UNDER(5) + UNDER(6))/4;
If DI = 1 or 2, P(x, y) = (UPPER(5) + UPPER(6) + UNDER(4) + UNDER(5))/4;
If DI = 3, P(x, y) = (UPPER(6) + UNDER(4))/2;
If DI = -3, P(x, y) = (UPPER(4) + UNDER(6))/2;
If DI = 4, P(x, y) = (UPPER(7) + UNDER(3))/2;
If DI = -4, P(x, y) = (UPPER(3) + UNDER(7))/2;
If DI = 5, P(x, y) = (UPPER(8) + UNDER(2))/2;
If DI = -5, P(x, y) = (UPPER(2) + UNDER(8))/2.
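Putting the five steps together, a sketch of one full interpolation for a single missing pixel; it assumes the helper functions from the earlier sketches (`build_vectors`, `DR`, `direction_value`, `direction_estimate`, `search_range`, `interpolate`) are in scope, and that `direction_map` already holds the direction values of the neighbouring points of the same missing line.

```python
def deinterlace_point(field, direction_map, x, j):
    """Interpolate the missing pixel at row x, column j of an interlaced field."""
    # Step 1: smoothed nine-pixel vectors from the lines above and below.
    upper, under = build_vectors(field, x, j)
    # Step 2: direction correlations and screened direction value.
    dr = {i: DR(upper, under, i) for i in range(-5, 6)}
    direction_map[(x, j)] = direction_value(dr)
    # Step 3: direction estimate from the two points on each side.
    d_est = direction_estimate(direction_map, x, j)
    # Step 4: restrict the candidate direction models.
    rng = search_range(d_est)
    # Step 5: the best-correlated model in the range is the interpolation direction.
    di = max(rng, key=lambda k: dr[k])
    # Interpolate along DI as in the table above.
    return interpolate(upper, under, di)
```

In a complete deinterlacer this would run for every missing pixel of every missing line, most naturally as two passes: a first pass that fills `direction_map` with D(x, y) for the whole line, and a second pass that forms D'(x, y) and interpolates.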

Claims (2)

1. An intra-field anti-aliasing and deinterlacing method, characterized in that the point to be interpolated is P(x, y), where x and y are respectively the row and column of the point to be interpolated in the image;
Step 1: select the related pixels in the row above the point to be interpolated: P(x-1, j-4), P(x-1, j-3), P(x-1, j-2), P(x-1, j-1), P(x-1, j), P(x-1, j+1), P(x-1, j+2), P(x-1, j+3), P(x-1, j+4); apply smoothing filtering to them to obtain the vector UPPER; select the related pixels in the row below the point to be interpolated: P(x+1, j-4), P(x+1, j-3), P(x+1, j-2), P(x+1, j-1), P(x+1, j), P(x+1, j+1), P(x+1, j+2), P(x+1, j+3), P(x+1, j+4); apply smoothing filtering to them to obtain the vector UNDER;
Step 2: take the vectors UPPER and UNDER as the input of the direction models and compute the direction correlation values DR(i), i ∈ [-5, 5], of the 11 direction models; find the maximum Dmax among the DR(i), then perform the following screening:
If DR(0) = Dmax, then D(x, y) = 0;
Else if DR(1) = Dmax and DR(1) > DR(-1), then D(x, y) = 1;
Else if DR(-1) = Dmax and DR(-1) > DR(1), then D(x, y) = -1;
Else if DR(2) = Dmax and DR(2) > DR(-2), then D(x, y) = 2;
Else if DR(-2) = Dmax and DR(-2) > DR(2), then D(x, y) = -2;
Else if DR(3) = Dmax and DR(3) > DR(-3), then D(x, y) = 3;
Else if DR(-3) = Dmax and DR(-3) > DR(3), then D(x, y) = -3;
Else if DR(4) = Dmax and DR(4) > DR(-4), then D(x, y) = 4;
Else if DR(-4) = Dmax and DR(-4) > DR(4), then D(x, y) = -4;
Else if DR(5) = Dmax and DR(5) > DR(-5), then D(x, y) = 5;
Else if DR(-5) = Dmax and DR(-5) > DR(5), then D(x, y) = -5;
Otherwise, D(x, y) = 0;
the direction value D(x, y) is computed in this way for every point to be interpolated, where x and y are respectively the row and column of the point in the image;
Step 3: using the direction values of the two points on each side of the point to be interpolated, compute its direction estimate D'(x, y), specifically:
D'(x, y) = D(x, y-2) + D(x, y-1) + D(x, y+1) + D(x, y+2);
Step 4: according to the direction estimate D'(x, y) of the point to be interpolated, choose the search range, specifically:
If D'(x, y) ≤ -16, choose direction models -5, -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≤ -12, choose direction models -4, -3, -2, -1, 0 as the search range;
Else if D'(x, y) ≥ 12, choose direction models 4, 3, 2, 1, 0 as the search range;
Else if D'(x, y) ≥ 16, choose direction models 5, 4, 3, 2, 1, 0 as the search range;
Otherwise, choose direction models -3, -2, -1, 0, 1, 2, 3 as the search range;
Step 5: within the search range, compute the direction correlation DR(k), k ∈ (search range), of each direction model; find the maximum among the DR(k), and the direction model corresponding to it is the interpolation direction DI of the point to be interpolated.
2. The intra-field anti-aliasing and deinterlacing method according to claim 1, characterized in that the interpolation result is computed as follows:
If DI = 0, P(x, y) = (UPPER(5) + UNDER(5))/2;
If DI = -1 or -2, P(x, y) = (UPPER(4) + UPPER(5) + UNDER(5) + UNDER(6))/4;
If DI = 1 or 2, P(x, y) = (UPPER(5) + UPPER(6) + UNDER(4) + UNDER(5))/4;
If DI = 3, P(x, y) = (UPPER(6) + UNDER(4))/2;
If DI = -3, P(x, y) = (UPPER(4) + UNDER(6))/2;
If DI = 4, P(x, y) = (UPPER(7) + UNDER(3))/2;
If DI = -4, P(x, y) = (UPPER(3) + UNDER(7))/2;
If DI = 5, P(x, y) = (UPPER(8) + UNDER(2))/2;
If DI = -5, P(x, y) = (UPPER(2) + UNDER(8))/2.
CN201410466905.8A 2014-09-15 2014-09-15 Intra-field anti-aliasing and deinterlacing method Pending CN104202554A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410466905.8A CN104202554A (en) 2014-09-15 2014-09-15 Intra-field anti-aliasing and deinterlacing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410466905.8A CN104202554A (en) 2014-09-15 2014-09-15 Intra-field anti-aliasing and deinterlacing method

Publications (1)

Publication Number Publication Date
CN104202554A true CN104202554A (en) 2014-12-10

Family

ID=52087774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410466905.8A Pending CN104202554A (en) 2014-09-15 2014-09-15 Intra-field anti-aliasing and deinterlacing method

Country Status (1)

Country Link
CN (1) CN104202554A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104506791A (en) * 2014-12-25 2015-04-08 珠海全志科技股份有限公司 Deinterlacing method and deinterlacing device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method
CN101442648A (en) * 2008-12-19 2009-05-27 四川虹微技术有限公司 Field interpolation method
US7944503B1 (en) * 2006-01-27 2011-05-17 Texas Instruments Incorporated Interlaced-to-progressive video processing
CN102868870A (en) * 2012-09-28 2013-01-09 许丹 Deinterlacing processing method
CN103475838A (en) * 2013-06-21 2013-12-25 青岛海信信芯科技有限公司 Deinterlacing method based on edge self adaption

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7944503B1 (en) * 2006-01-27 2011-05-17 Texas Instruments Incorporated Interlaced-to-progressive video processing
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method
CN101442648A (en) * 2008-12-19 2009-05-27 四川虹微技术有限公司 Field interpolation method
CN102868870A (en) * 2012-09-28 2013-01-09 许丹 Deinterlacing processing method
CN103475838A (en) * 2013-06-21 2013-12-25 青岛海信信芯科技有限公司 Deinterlacing method based on edge self adaption

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU RAN et al.: "A deinterlacing algorithm for DIBR", Application Research of Computers *
MA BIN et al.: "Weighted edge-adaptive intra-field interpolation deinterlacing method", Application Research of Computers *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104506791A (en) * 2014-12-25 2015-04-08 珠海全志科技股份有限公司 Deinterlacing method and deinterlacing device
CN104506791B (en) * 2014-12-25 2017-09-22 珠海全志科技股份有限公司 Interlace-removing method and device

Similar Documents

Publication Publication Date Title
CN104376319B (en) A kind of method based on anisotropic Gaussian core extraction closed edge image outline
US20150131728A1 (en) Method for motion vector estimation
CN104680483B (en) The noise estimation method of image, video image denoising method and device
JP2012022654A5 (en)
CN104011771A (en) Method of and apparatus for scalable frame rate up-conversion
TW201342916A (en) Complexity scalable frame rate up-conversion
TWI460681B (en) Method for processing edges in an image and image processing apparatus
US20130342444A1 (en) Method and Apparatus for Hand Gesture Trajectory Recognition
JP2013048375A (en) Device and method for generating motion compensation frame
CN104202554A (en) Intra-field anti-aliasing and deinterlacing method
US20150254815A1 (en) Image downsampling apparatus and method
JP2013020605A (en) Movement image area determination device or method for the same
CN103297659A (en) Edge processing method of image and image processing device
WO2013031418A1 (en) Device for detecting line segment and arc
GB2545649A (en) Artefact detection
Silalahi et al. Comparison of sensitivity analysis on linear optimization using optimal partition and optimal basis (in the simplex method) at some cases
CN109215046A (en) A kind of Laplace operator edge detection method based on image interpolation arithmetic
JP2007221602A5 (en)
JP2013228995A5 (en)
CN110537202A (en) Correlation arithmetic unit
CN111383183B (en) Image edge enhancement method and device and computer storage medium
CN107124611B (en) Method and device for converting video frame rate
CN108475339A (en) For the method and system to the object classification in image
JP2008271492A (en) Noise reducer and reducing method for television video signal
JP2017116985A5 (en)

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20141210