CN103426144B - Method and apparatus for deblurring an image with perspective distortion - Google Patents

Method and apparatus for deblurring an image with perspective distortion

Info

Publication number
CN103426144B
CN103426144B · Application CN201210154166.XA
Authority
CN
China
Prior art keywords
image
sub-region
sub-image
blur degree
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210154166.XA
Other languages
Chinese (zh)
Other versions
CN103426144A (en)
Inventor
李建杰
李献
罗兆海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN201210154166.XA priority Critical patent/CN103426144B/en
Publication of CN103426144A publication Critical patent/CN103426144A/en
Application granted
Publication of CN103426144B publication Critical patent/CN103426144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention discloses a method and an apparatus for deblurring an image with perspective distortion. The method comprises: a perspective information detecting step of detecting perspective information of the image; a sub-region segmentation step of dividing the image into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same; and a deblurring step of deblurring each sub-region based on its blur information. Compared with the prior art, the present invention can improve the deblurring effect on images with perspective distortion.

Description

Method and apparatus for deblurring an image with perspective distortion
Technical field
The present invention relates to a method and apparatus for deblurring an image with perspective distortion.
Background Art
A digital camera can be used to capture images in order to record and extract text information, and the extracted text information can be recognized by an optical character recognition (OCR) module for further processing such as retrieval. However, if the surface of the target document is out of focus, the captured image will be blurred, and the degraded image quality will affect the recognition accuracy of the text information.
Moreover, when the shooting direction is not perpendicular to the surface of the target document, the captured image usually exhibits perspective distortion. If the image has perspective distortion, the distances from different parts of the imaged surface to the camera lens differ, and therefore the blur degrees of different parts of the image differ as well.
Many methods have been proposed for deblurring (i.e., restoring) images.
In U.S. Patent No. 6,154,574, a blurred image is divided into multiple sub-images with a predetermined window size, an average step response is calculated based on the edge directions of the sub-images, blur information such as a point spread function (PSF) is estimated from the average step response, and the blurred image is restored based on the PSF.
In U.S. Patent Publication No. 2009/0067742, a blurred image is divided into multiple overlapping blocks, a unit step response is calculated to estimate the PSF of each overlapping block, the validity of the PSF parameters of a block is determined based on the PSF parameters of its neighboring blocks, and invalid PSF parameters of the block are further replaced with the PSF parameters of neighboring blocks.
In U.S. Patent Publication No. 2011/0044554, a blurred image is divided into multiple overlapping sub-images of fixed size, a PSF is estimated for each sub-image mainly based on its edge information, each sub-image is deblurred based on its own PSF, and the deblurred sub-images are finally composited by image stitching.
In U.S. Patent Publication No. 2011/0032575, a defocus modulus is estimated from two different images that contain the same non-planar part and are scanned in two different focal planes. The defocus modulus can be estimated from the distance between the two focal planes and the Fourier transform parameters of the two images.
However, for a blurred image with perspective distortion (i.e., one whose different parts have different blur degrees), the prior-art deblurring methods all have drawbacks, because they consider only local information of the blurred image and do not consider the distortion information.
More specifically, U.S. Patent No. 6,154,574 estimates a single PSF from the average step response of all sub-images in order to restore the entire blurred image. However, if the sub-images differ in blur degree, the average step response is unsuitable for the whole image.
In U.S. Patent Publication No. 2009/0067742, the PSF parameters of a block are influenced by the PSF parameters of its neighboring blocks. Therefore, if one or more neighboring blocks fail to produce accurate PSF parameters, the PSF parameters of the central block will be affected even if those parameters are themselves accurate. Like the method of U.S. Patent No. 6,154,574, the method of U.S. Patent Publication No. 2009/0067742 is better suited to blurred images with substantially uniform blur.
In U.S. Patent Publication No. 2011/0044554, each sub-image is deblurred based on its own PSF, and that PSF depends mainly on the sub-image's own edge information. When the edge information is unevenly distributed, for example when a sub-image contains little edge information (sub-images near the image margins often do), the accuracy of its PSF estimate suffers.
U.S. Patent Publication No. 2011/0032575 requires knowing the distance between the two focal planes in advance. For a digital camera, however, it is difficult to obtain two images with different, predetermined lens distances.
Therefore, there is a need for a method and an apparatus for deblurring an image with perspective distortion.
In addition, there is a need for a method and an apparatus for deblurring an image with perspective distortion that achieve a better deblurring effect than the prior art.
Summary of the invention
In view of at least one of the above problems, the present invention is proposed.
One object of the present invention is to provide a method and an apparatus capable of deblurring an image with perspective distortion.
Another object of the present invention is to provide a method and an apparatus for deblurring an image with perspective distortion that achieve a better deblurring effect than the prior art.
According to a first aspect of the present invention, there is provided a method for deblurring an image with perspective distortion, comprising: a perspective information detecting step of detecting perspective information of the image; a sub-region segmentation step of dividing the image into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same; and a deblurring step of deblurring each sub-region based on its blur information.
According to a second aspect of the present invention, there is provided an apparatus for deblurring an image with perspective distortion, comprising: a perspective information detecting unit for detecting perspective information of the image; a sub-region segmentation unit for dividing the image into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same; and a deblurring unit for deblurring each sub-region based on its blur information.
By means of the above features, the present invention can provide a method and an apparatus capable of deblurring an image with perspective distortion, and can further provide a method and an apparatus with a better deblurring effect than the prior art.
Further objects, features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram schematically illustrating the hardware configuration of a computing device capable of implementing the deblurring method according to the present invention.
Fig. 2 schematically illustrates a general flowchart of the deblurring method according to the present invention.
Fig. 3 and Fig. 4 schematically illustrate two alternative flowcharts of the perspective information detecting step of the deblurring method according to the present invention.
Fig. 5 and Fig. 6 schematically illustrate two alternative flowcharts of the sub-region segmentation step of the deblurring method according to the present invention.
Fig. 7 schematically illustrates a flowchart of the refining step in the sub-region segmentation step of the deblurring method according to the present invention.
Fig. 8 schematically illustrates a flowchart of the segmentation step in the sub-region segmentation step of the deblurring method according to the present invention.
Fig. 9 schematically illustrates a flowchart of the deblurring step of the deblurring method according to the present invention.
Fig. 10 schematically illustrates a flowchart of the typical sub-image selection step in the deblurring step of the deblurring method according to the present invention.
Fig. 11 schematically illustrates a general block diagram of the deblurring apparatus according to the present invention.
Fig. 12 schematically illustrates the sub-image segmentation in the refining step in the case of a positive split angle.
Fig. 13 schematically illustrates the sub-image segmentation in the refining step in the case of a negative split angle.
Fig. 14 schematically illustrates the sub-region segmentation in the case where the split angle is greater than 45°.
Fig. 15 schematically illustrates the sub-region segmentation in the case where the split angle is less than 45°.
Fig. 16 illustrates part of an exemplary blurred image with perspective distortion.
Fig. 17 illustrates an exemplary deblurring result according to the prior art.
Fig. 18 illustrates an exemplary deblurring result according to the present invention.
Detailed description of the invention
Exemplary embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the following description is merely illustrative and exemplary in nature and is in no way intended to limit the invention, its application, or its uses. Unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in the embodiments do not limit the scope of the invention. In addition, techniques, methods, and devices known to those skilled in the art may not be discussed in detail, but are intended to be part of the specification where appropriate.
Incidentally, in this specification, sizes (such as length, height, width, etc.) are generally expressed in units of pixels.
First, an exemplary hardware configuration of a computing device 1000 capable of implementing the deblurring processing according to the present invention will be described with reference to Fig. 1. For simplicity, only one computing device is shown; however, multiple computing devices may also be used as needed.
As shown in Fig. 1, the computing device 1000 may comprise a CPU 1110, a chipset 1120, a RAM 1130, a storage controller 1140, a display controller 1150, a hard disk drive 1160, a CD-ROM drive 1170, and a display 1180. The computing device 1000 may also comprise a signal line 1210 connected between the CPU 1110 and the chipset 1120, a signal line 1220 connected between the chipset 1120 and the RAM 1130, a peripheral bus 1230 connected between the chipset 1120 and various peripheral devices, a signal line 1240 connected between the storage controller 1140 and the hard disk drive 1160, a signal line 1250 connected between the storage controller 1140 and the CD-ROM drive 1170, and a signal line 1260 connected between the display controller 1150 and the display 1180.
A client machine 1300 may be connected to the computing device 1000 directly or via a network 1400. The client machine 1300 may send a deblurring task to the computing device 1000, and the computing device 1000 may return the deblurring result to the client machine 1300.
Next, the method according to the present invention for deblurring an image with perspective distortion (such as a document image) will be described.
Fig. 2 schematically illustrates a general flowchart of the deblurring method according to the present invention.
As shown in Fig. 2, first, at step 210 (the perspective information detecting step), the perspective information of a blurred image with perspective distortion is detected.
Then, at step 220 (the sub-region segmentation step), the image is divided into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same (in this specification, this means that the blur degree within each sub-region is substantially the same).
Finally, at step 230 (the deblurring step), each sub-region is deblurred based on its blur information, so as to obtain a deblurred image.
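The three steps above can be pictured with the following minimal sketch. All function bodies here are hypothetical placeholders, not the patent's actual implementations: detection is stubbed to return a split angle, segmentation cuts the image into two strips whose orientation follows the angle, and per-region deblurring is a no-op.

```python
def detect_perspective_info(image):
    """Step 210 (stub): return a split angle in degrees derived from
    perspective information (vanishing points in the real method)."""
    return 0.0  # placeholder: no distortion detected


def segment_sub_regions(image, angle):
    """Step 220 (simplified): cut windows advance horizontally when
    |angle| > 45 (vertical strips), vertically otherwise (horizontal
    strips), so that blur is near-uniform within each strip."""
    if abs(angle) > 45:
        mid = len(image[0]) // 2
        return [[row[:mid] for row in image], [row[mid:] for row in image]]
    mid = len(image) // 2
    return [image[:mid], image[mid:]]


def deblur_region(region):
    """Step 230 (stub): deblur one sub-region from its own blur info."""
    return region


def deblur_with_perspective(image):
    angle = detect_perspective_info(image)       # step 210
    regions = segment_sub_regions(image, angle)  # step 220
    return [deblur_region(r) for r in regions]   # step 230


img = [[0] * 8 for _ in range(4)]
parts = deblur_with_perspective(img)
print(len(parts), len(parts[0]), len(parts[0][0]))  # 2 2 8
```

The point of the structure is that segmentation happens between detection and deblurring, so each region handed to step 230 already has near-uniform blur.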
As mentioned above, a blurred image with perspective distortion has non-uniform blur, i.e., its different parts have different blur degrees. In the prior-art deblurring methods, the blurred image is divided into multiple sub-images and deblurred without considering this non-uniform blur, so the deblurring effect suffers to some extent. In contrast, in the above deblurring method of the present invention, the blurred image is divided into multiple sub-regions by taking the perspective information into account, so that the blur degree within each sub-region is substantially the same, and each sub-region with substantially uniform blur is deblurred individually based on its own blur information. Thus, a better deblurring effect can be obtained compared with the prior art.
The flowchart of Fig. 2 briefly shows the basic steps of the deblurring method according to the present invention. Below, exemplary processing of each of the above steps will be described in more detail.
The perspective information detecting step 210 is implemented first in the present invention, so that the blurred image can then be divided, based on the detected perspective information, into multiple sub-regions each having substantially uniform blur; Fig. 3 and Fig. 4 schematically illustrate two alternative flowcharts. Incidentally, the horizontal vanishing point and the vertical vanishing point will be described here as an example, but the perspective information is obviously not necessarily limited thereto.
As shown in Fig. 3, first, at step 212, the edge image of the image is detected. Any suitable method known in the art can be used to obtain the edge image; for example, the Canny method. The parameters of the Canny method include a minimum edge threshold, a maximum edge threshold, σ, etc. In one example of the present invention, for 256 gray levels, the minimum edge threshold is set to 8, the maximum edge threshold is set to 35, and σ is set to 1.3. However, it should be noted that all these values are merely exemplary, and those skilled in the art can select any appropriate values as required.
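The role of the two thresholds quoted above (low 8, high 35, for 256 gray levels) can be pictured with this toy double-threshold step over a gradient-magnitude map. This is only the thresholding stage of Canny — the Gaussian smoothing (σ = 1.3), non-maximum suppression, and full hysteresis are omitted, and the simplified promotion rule below is an assumption for illustration.

```python
LOW, HIGH = 8, 35  # exemplary Canny thresholds from the text

def classify_gradients(grad_mag):
    """Label each pixel of a gradient-magnitude map: 2 = strong edge
    (>= HIGH), 1 = weak candidate (>= LOW), 0 = suppressed; then keep a
    weak pixel only if it is 4-adjacent to a strong pixel."""
    h, w = len(grad_mag), len(grad_mag[0])
    labels = [[2 if g >= HIGH else (1 if g >= LOW else 0) for g in row]
              for row in grad_mag]
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            if labels[y][x] == 1:
                near_strong = any(
                    0 <= y + dy < h and 0 <= x + dx < w
                    and labels[y + dy][x + dx] == 2
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)))
                out[y][x] = 2 if near_strong else 0
    return out

mag = [[0,  5, 40],
       [0, 20, 50],
       [0,  3,  6]]
print(classify_gradients(mag))  # [[0, 0, 2], [0, 2, 2], [0, 0, 0]]
```

Note how the weak pixel (magnitude 20) survives only because it touches a strong one; an isolated weak response (magnitude 5) is suppressed.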
Next, at step 214, horizontal page boundaries and vertical page boundaries are detected in the edge image. Any suitable method known in the art can be used to obtain them. As an example, the strategy for detecting a horizontal page boundary can comprise: selecting an edge pixel in the upper-left corner of the edge image; determining whether the pixels to the right of the current edge pixel contain at least one edge pixel; and, if so, setting the detected edge pixel as the current edge pixel and continuing to detect edge pixels until the end of the row is reached or no edge pixel can be found. In this way, a horizontal page boundary can be obtained. Of course, the detection direction can also be from right to left. Vertical page boundaries are obtained similarly, with a top-to-bottom or bottom-to-top detection direction.
In one example of the present invention, the length threshold of a horizontal page boundary is set to 0.3 × image width, and the length threshold of a vertical page boundary is set to 0.3 × image height, but these values are not necessarily limited thereto.
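The tracing strategy above can be sketched as follows. The patent text does not pin down exactly which pixels count as "to the right", so this sketch assumes the three pixels in the next column (same row, one row up, one row down); the length-threshold check uses the 0.3 × image width value quoted above.

```python
def trace_horizontal_boundary(edge, start_y, start_x):
    """Follow edge pixels rightward from (start_y, start_x): step to the
    first edge pixel among the three next-column neighbors, and stop at
    the row end or when none is an edge pixel. Returns the traced path."""
    h, w = len(edge), len(edge[0])
    path = [(start_y, start_x)]
    y, x = start_y, start_x
    while x + 1 < w:
        for dy in (0, -1, 1):          # same row first, then up, then down
            ny = y + dy
            if 0 <= ny < h and edge[ny][x + 1]:
                y, x = ny, x + 1
                path.append((y, x))
                break
        else:
            break                      # no edge pixel in the next column
    return path

edge = [[0, 0, 0, 0, 0, 0],
        [1, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 1, 1]]
path = trace_horizontal_boundary(edge, 1, 0)
print(len(path))                       # 6
print(len(path) >= 0.3 * len(edge[0]))  # True: long enough to keep
```

A traced run shorter than the length threshold would simply be discarded as noise rather than kept as a page boundary.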
Then, at step 216, the horizontal vanishing point and the vertical vanishing point of the image are calculated based on the horizontal and vertical page boundaries.
If page boundaries comprising horizontal and vertical page boundaries can be detected and the blurred image has perspective distortion, the horizontal page boundaries will intersect in the x direction (i.e., the horizontal direction), and the intersection points in the x direction will be regarded as candidate horizontal vanishing points. Similarly, the vertical page boundaries will intersect in the y direction (i.e., the vertical direction), and the intersection points in the y direction will be regarded as candidate vertical vanishing points.
In the present invention, the final horizontal vanishing point Pt_HorVanish is selected from the candidate intersection points P_intersect in the x direction by the following formula (1):
$$Pt_{HorVanish}=\arg\min_{P_{intersect}}\left(\sum_{i=1}^{n}\left|slope_{HORi}-slope_{CONi\_intersect}\right|\right)\qquad(1)$$
Here, n denotes the number of horizontal page boundaries; slope_HORi denotes the slope of the i-th horizontal page boundary; slope_CONi_intersect denotes the slope of the i-th connecting line formed by an edge point of the i-th horizontal page boundary and the candidate intersection point in the x direction; and the right side of formula (1) denotes the candidate intersection point in the x direction that minimizes the value in the parentheses.
Similarly, the final vertical vanishing point Pt_VertVanish is selected from the candidate intersection points P_intersect in the y direction by the following formula (2):
$$Pt_{VertVanish}=\arg\min_{P_{intersect}}\left(\sum_{j=1}^{m}\left|slope_{VERTj}-slope_{CONj\_intersect}\right|\right)\qquad(2)$$
Here, m denotes the number of vertical page boundaries; slope_VERTj denotes the slope of the j-th vertical page boundary; slope_CONj_intersect denotes the slope of the j-th connecting line formed by an edge point of the j-th vertical page boundary and the candidate intersection point in the y direction; and the right side of formula (2) denotes the candidate intersection point in the y direction that minimizes the value in the parentheses.
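The selection in formulas (1) and (2) can be sketched directly as an argmin over candidate intersection points. The input representation below (each boundary as one edge point plus a slope, candidates as (x, y) pairs) is an assumption for illustration.

```python
def pick_vanishing_point(boundaries, candidates):
    """boundaries: list of ((px, py), slope) per detected page boundary.
    candidates: list of candidate intersection points (x, y).
    Returns the candidate minimizing the summed |boundary slope minus
    slope of the line joining the boundary's edge point to the candidate|,
    in the spirit of formulas (1)/(2). Assumes no vertical connecting line
    (candidate x differs from every edge-point x)."""
    def cost(cand):
        cx, cy = cand
        total = 0.0
        for (px, py), slope in boundaries:
            con_slope = (cy - py) / (cx - px)  # slope of connecting line
            total += abs(slope - con_slope)
        return total
    return min(candidates, key=cost)

# two boundaries whose extensions meet at (10, 0)
boundaries = [((0, 0), 0.0), ((0, 2), -0.2)]
candidates = [(10, 0), (8, 1)]
print(pick_vanishing_point(boundaries, candidates))  # (10, 0)
```

The true intersection wins because every connecting-line slope matches its boundary's slope there, driving the sum of absolute differences to zero.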
In some images, there may not be enough page boundaries; for example, the number of horizontal page boundaries may be less than 2. In this case, the horizontal and vertical vanishing points of the image can alternatively be calculated based on text lines and vertical character strokes (see Fig. 4).
As shown in Fig. 4, first, at step 212, the edge image of the image is detected. This step is the same as step 212 in Fig. 3, and its detailed description will be omitted.
Next, at step 214', text lines and vertical character strokes are detected in the edge image. Any suitable method known in the art can be used to obtain them.
For example, to obtain text lines, the Run Length Smoothing Algorithm (RLSA) can be applied in the horizontal and vertical directions. In one example of the present invention, the connecting length in the horizontal direction is set to 0.05 × image width, and the connecting length in the vertical direction is set to 0.01 × image height. Then, the center points of the black runs in the y direction can be extracted, and horizontal curves can be detected as text lines by using the same strategy as in the page boundary detection (see step 214).
On the other hand, to obtain vertical character strokes, for example, horizontal edges can first be removed as follows. The gradient slope slopeGrad of an edge pixel is calculated by the following formula (3) to determine its edge direction:
$$slopeGrad=\arctan\left(Grad_y/Grad_x\right)\qquad(3)$$
Here, Grad_y and Grad_x denote the y gradient and the x gradient of the edge pixel, respectively. If the angle between slopeGrad and the text line is less than a predetermined angle (for example, 30°), the edge will be regarded as a horizontal edge and removed. After the horizontal edges are removed, connected components (CCs) are detected in the edge image. If the edge pixels in a CC have good linearity, these edge pixels will be regarded as a vertical stroke. In one example of the present invention, the Principal Component Analysis (PCA) method is adopted to calculate the linearity of a CC.
Then, at step 216', the horizontal and vertical vanishing points of the image are calculated based on the text lines and vertical character strokes. For this purpose, formulas similar to formulas (1) and (2) can be used, and their detailed description will be omitted.
Up to this point, the perspective information (for example, the horizontal and vertical vanishing points) has been obtained. It will be used in the subsequent sub-region segmentation step 220 to divide the blurred image into multiple sub-regions each having substantially uniform blur. Next, the sub-region segmentation step 220 of the deblurring method according to the present invention will be described in more detail with reference to Figs. 5 to 8 and Figs. 12 to 15.
Fig. 5 and Fig. 6 schematically illustrate two alternative flowcharts of the sub-region segmentation step 220.
As shown in Fig. 5, first, at step 222, the split direction used to segment the image is calculated based on the perspective information.
Then, at step 224, the image is divided into multiple sub-regions along the split direction. As a result, the blur degree within each sub-region is substantially the same.
As can be seen from a comparison of Fig. 5 and Fig. 6, the only difference between them is the additional refining step 223 of Fig. 6. More specifically, at step 223 between step 222 and step 224, the split direction obtained based on the perspective information is further refined based on the blur degree, and then, at step 224, the image is divided along the refined split direction into multiple sub-regions each having substantially uniform blur. That is, step 223 is not mandatory, but can be implemented or omitted as desired. Below, Fig. 6 will be taken as the example for a more detailed description.
At step 222 of Fig. 6, the split direction can be calculated, for example, from the slope of a line based on the horizontal vanishing point of the image and the slope of a line based on the vertical vanishing point of the image. More particularly, in one example of the present invention, the split direction is calculated by the following formula (4):
$$Direction_{split}=\arctan\left(slope_y/slope_x\right)\qquad(4)$$
Here, Direction_split denotes the split angle of the split direction, slope_y denotes the slope of the line connecting the center point of the image and the vertical vanishing point of the image, and slope_x denotes the slope of the line connecting the center point of the image and the horizontal vanishing point of the image.
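Formula (4) can be sketched as below; the point coordinates in the example are made up for illustration, and the sketch assumes neither connecting line is vertical.

```python
import math

def split_angle(center, horiz_vp, vert_vp):
    """Formula (4): split angle (degrees) from the slopes of the lines
    joining the image center to the vertical and horizontal vanishing
    points."""
    cx, cy = center
    slope_y = (vert_vp[1] - cy) / (vert_vp[0] - cx)    # center -> vertical VP
    slope_x = (horiz_vp[1] - cy) / (horiz_vp[0] - cx)  # center -> horizontal VP
    return math.degrees(math.atan(slope_y / slope_x))

angle = split_angle((0, 0), (10, 20), (10, 10))
print(f"{angle:.2f}")  # 26.57
```

With slope_x = 2 and slope_y = 1, the ratio is 0.5 and arctan gives about 26.6°, i.e., a split direction closer to horizontal than to the diagonal.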
In order to reduce the influence of locally severe defocus on the accuracy of the perspective information and thus obtain a more accurate split direction, the split direction obtained at step 222 based on the perspective information can optionally be further refined at step 223 based on the blur degree. An exemplary refining process is shown in Fig. 7.
As shown in Fig. 7, first, at step 223a, a part of the image is selected, and multiple sub-images are segmented from that part with a predetermined window size, starting from a corner of the part along the split direction.
As an example, the central part p'q'r's' of the input blurred image pqrs can be selected (see Figs. 12 and 13). In one example of the present invention, the left boundary, right boundary, top boundary, and bottom boundary of the central part p'q'r's' can be located at 0.2 × image width, 0.8 × image width, 0.2 × image height, and 0.8 × image height, respectively, but the present invention is not necessarily limited thereto.
Subsequently, multiple sub-images are segmented from the central part p'q'r's' with a predetermined window size, starting from a corner of p'q'r's' along the split direction SS' obtained at step 222. The height and width of the window can, for example, be identical, and in one example of the present invention each of them is set to 256 pixels. However, the present invention is not necessarily limited thereto, and those skilled in the art can select any suitable values as required.
The x direction and the y direction are defined as shown in Figs. 12 and 13. In this case, if the split direction SS' is positive (see Fig. 12), the central part p'q'r's' is segmented along SS' from the upper left to the lower right: the first sub-image is located at the upper-left corner of p'q'r's'; the intermediate sub-images lie between the first sub-image and the last sub-image, with their center points located on the line connecting the center points of the first and last sub-images (i.e., the center points of all sub-images lie on the same line along the split direction); and the positions of the intermediate sub-images are calculated based on the position of the first sub-image and the split direction (or its split angle θ). On the other hand, if the split direction is negative (see Fig. 13), the central part p'q'r's' is similarly segmented along SS', but from the lower left to the upper right: the first sub-image is located at the lower-left corner of p'q'r's'. Figs. 12 and 13 schematically illustrate the positions of the first and last sub-images in p'q'r's', and the specific positions of the sub-images can be obtained as follows.
If the absolute value of the split angle of the split direction is less than 45° (see Fig. 15, described later) or equal to 45°, the center point coordinates x_center and y_center of the sub-images between the first sub-image and the last sub-image can be obtained by the following formula (5):
$$\begin{cases}x_{center}=x_{first}+radius\cdot i\\[2pt] y_{center}=y_{first}+radius\cdot i\cdot\tan(\theta)\end{cases}\qquad(5)$$
Here, radius denotes the side length of the window (i.e., the side length of a sub-image; it is assumed here that the height and width of the window, or of a sub-image, are identical) and can, for example, be set to 256 pixels; x_first and y_first denote the center point coordinates of the first sub-image; i denotes the index of the sub-image (i = 0 corresponds to the first sub-image); and θ denotes the split angle of the split direction.
In addition, the left boundary, right boundary, top boundary, and bottom boundary coordinates x_left, x_right, y_top, and y_bottom of the sub-images between the first sub-image and the last sub-image can be obtained by the following formulas (6) and (7):
$$\begin{cases}x_{left}=x_{center}-radius\cdot 0.5\\[2pt] x_{right}=x_{center}+radius\cdot 0.5\end{cases}\qquad(6)$$
$$\begin{cases}y_{top}=y_{center}-radius\cdot 0.5\\[2pt] y_{bottom}=y_{center}+radius\cdot 0.5\end{cases}\qquad(7)$$
On the other hand, if the absolute value of the split angle of the split direction is greater than 45° (see Fig. 14, described later), the center point coordinates x_center and y_center of the sub-images between the first sub-image and the last sub-image can be obtained by the following formula (8):
$$\begin{cases}y_{center}=y_{first}+radius\cdot \mathrm{sign}(\theta)\cdot i\\[2pt] x_{center}=x_{first}+radius\cdot\left|\cot(\theta)\right|\cdot i\end{cases}\qquad(8)$$
In this case, the left, right, top, and bottom boundary coordinates x_left, x_right, y_top, and y_bottom of the sub-images between the first sub-image and the last sub-image can also be obtained by the above formulas (6) and (7).
By the above formulas (5) to (8), multiple sub-images are segmented from the central part p'q'r's' with a predetermined window size, starting from a corner of p'q'r's' along the split direction SS'.
Next, at step 223b of Fig. 7, the blur degrees BD of the segmented sub-images (comprising the first sub-image, the last sub-image, and the intermediate sub-images) are calculated. In one example of the present invention, BD can be calculated for each sub-image, for example, by the following formula (9):
$$BD=\frac{1}{n}\sum_{q\in Edge}\frac{\max_{k\in neib(q)}f(k)-\min_{k\in neib(q)}f(k)}{\left(q_{max}-q_{min}\right)\cdot 255}\qquad(9)$$
Here, q denotes an edge pixel; k denotes a neighboring pixel of pixel q; f(k) denotes the gray value of pixel k; q_max denotes the position of the pixel with the maximum gray level around pixel q along the maximum gradient direction; q_min denotes the position of the pixel with the minimum gray level around pixel q along the maximum gradient direction; and n denotes the total number of edge pixels in the sub-image. The neighborhood size can, for example, be set to 13 × 13 pixels. The obtained blur degrees can then be used to calculate a blur degree difference (such as a blur degree variance, etc.).
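A hedged one-dimensional sketch of formula (9): per edge pixel, take the gray-value contrast in its neighborhood and normalize by the distance between the positions of the extreme values (an edge-width term) times 255. Reading (q_max − q_min) as a positional distance is an interpretation of the patent text, and the 1-D intensity profile below stands in for the 13×13 2-D neighborhood.

```python
def blur_degree_1d(profile, edge_pixels, half_width):
    """Simplified formula (9) on a 1-D gray-value profile: average over
    edge pixels of (local contrast) / (edge width * 255). Under this
    reading, a sharp edge scores higher than a smeared one."""
    total = 0.0
    for q in edge_pixels:
        lo = max(0, q - half_width)
        hi = min(len(profile), q + half_width + 1)
        neib = range(lo, hi)
        k_max = max(neib, key=lambda k: profile[k])  # position of max gray
        k_min = min(neib, key=lambda k: profile[k])  # position of min gray
        width = abs(k_max - k_min) or 1              # edge width in pixels
        total += (profile[k_max] - profile[k_min]) / (width * 255)
    return total / len(edge_pixels)

sharp = [0, 0, 0, 255, 255, 255]       # one-pixel transition
smooth = [0, 50, 100, 150, 200, 255]   # gradual ramp
print(blur_degree_1d(sharp, [2, 3], 2) > blur_degree_1d(smooth, [2, 3], 2))
# True
```

Whatever the exact normalization, only the relative values matter for step 223c, since the refinement compares the spread of BD across sub-images rather than its absolute scale.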
Then, at step 223c, from a predetermined direction range containing the split direction obtained at step 222, the direction along which the difference of the blur degrees of the segmented sub-images is minimal is selected as the refined split direction. More specifically, this can be achieved by changing the split direction obtained at step 222 by a δ angle (for example, 10°) at a time, re-segmenting the sub-images, and recalculating the blur degree difference to find the minimum. As an example, the difference of the blur degrees of the segmented sub-images can be the variance of the blur degrees of the segmented sub-images (but it is not necessarily limited thereto), and in this case the refined split direction can be found with the following formula (10):
$$\theta_{split}=\arg\min_{\theta_{init}-30°\le\theta\le\theta_{init}+30°}\left(\frac{1}{n}\sum_{i=1}^{n}\left(BD_i-BD_{aver}\right)^2\right)\qquad(10)$$
Here θ,splitRepresent the accurate adjustment angle of cutting apart of cutting apart direction afterwards; θinitRepresent based on perspective letterBreath produce cut apart direction cut apart angle (, step 222 place obtain accurate adjustment beforeCut apart direction); BDiRepresent the fuzziness of the i number of sub images in core p ' q ' r ' s ';BDaverRepresent the average blur degree of all subimages in core p ' q ' r ' s '; During n representsThe counting of the subimage in heart part p ' q ' r ' s '.
By the way, in above description, predetermined direction scope is set asθinit-30°≤θ≤θinit+ 30 °, and δ angle is set as 10 °. But all values are only allExemplary, and those skilled in the art can select any suitable value as required.
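The fine-tuning search of formula (10) can be sketched as a simple scan over candidate angles. The callback `blur_of_subimages` is a hypothetical stand-in for "re-split the central part along θ and return the per-sub-image blur degrees BD_i"; it is not part of the patent's description.

```python
import numpy as np

def fine_tune_direction(theta_init, blur_of_subimages, delta=10, span=30):
    """Sketch of formula (10): scan candidate splitting angles in
    [theta_init - span, theta_init + span] in steps of delta degrees and
    keep the angle whose split yields the smallest variance of sub-image
    blur degrees."""
    best_theta, best_var = theta_init, float("inf")
    for theta in range(theta_init - span, theta_init + span + 1, delta):
        bd = np.asarray(blur_of_subimages(theta), dtype=float)
        var = float(np.mean((bd - bd.mean()) ** 2))  # sum((BD_i - BD_aver)^2)/n
        if var < best_var:
            best_theta, best_var = theta, var
    return best_theta
```

With δ = 10° and a ±30° span, only seven candidate directions need to be evaluated.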
Through the above processing in Fig. 7, the fine-tuned splitting direction is obtained based on the perspective information and the blur degree. This more accurate splitting direction will further benefit the subsequent sub-region splitting processing, so that each of the split sub-regions has a more uniform blur degree, thereby making the deblurring effect of the present invention better than that of the prior art.

Returning now to Fig. 6. At step 224, the image is split into multiple sub-regions along the splitting direction (in this case, the fine-tuned splitting direction obtained at step 223). As a result, the blur degree within each sub-region is substantially the same. Fig. 8 illustrates exemplary sub-region splitting processing, and Fig. 14 and Fig. 15 illustrate the details in the case where the splitting angle is greater than 45° and in the case where it is less than 45°, respectively.
As shown in Fig. 8, first, at step 224a, multiple basic sub-images are split from the image. More specifically, as shown in Fig. 14, if the absolute value of the splitting angle of the splitting direction is greater than 45°, multiple basic sub-images a, b, c, d and e are split from the image pqrs starting from a corner of the image pqrs along the horizontal direction with a predetermined window size. On the other hand, as shown in Fig. 15, if the absolute value of the splitting angle of the splitting direction is less than 45°, multiple basic sub-images a, b, c, d and e are split from the image pqrs starting from a corner along the vertical direction with a predetermined window size. In addition, if the absolute value of the splitting angle of the splitting direction equals 45°, multiple basic sub-images can be split from the image starting from a corner along the horizontal direction or the vertical direction with a predetermined window size.

By the way, the height and width of the window can, for example, be the same, and in one example of the present invention they are set to 256 pixels. However, the present invention is not necessarily limited thereto, and those skilled in the art can select any suitable values as needed. In addition, the window size in step 224a and the window size in step 223a can be the same or can be different.
Next, at step 224b, multiple sub-images belonging to one sub-region are split from each of the multiple basic sub-images a, b, c, d and e along the splitting direction with a predetermined window size. That is to say, each sub-region consists of multiple sub-images having the same size, and the multiple sub-images are distributed as shown in Fig. 14 and Fig. 15. Moreover, the height and width of each sub-image are the same as the height and width of the window, respectively; therefore, the height and width of each sub-image can, for example, be the same, and in one example of the present invention they can be set to 256 pixels. However, the present invention is not necessarily limited thereto. By the way, the splitting manner in step 224b is similar to that in step 223a, and reference can be made to the above description of step 223a.

Then, at step 224c, if the obtained sub-regions do not cover the whole image, the remaining part of the image continues to be split into sub-regions in a similar manner along the splitting direction.

By the way, as shown in Fig. 14 and Fig. 15, the sub-region splitting processing in Fig. 8 is equivalent to the following processing. First, multiple sub-images belonging to one sub-region are split from the image, starting from a corner of the image along the splitting direction with a predetermined window size; then, the whole image is split into multiple sub-regions by the following processing: if the absolute value of the splitting angle of the splitting direction is greater than 45°, the split sub-region is translated along the horizontal direction; if the absolute value of the splitting angle of the splitting direction is less than 45°, the split sub-region is translated along the vertical direction; and if the absolute value of the splitting angle of the splitting direction equals 45°, the split sub-region is translated along the horizontal direction or the vertical direction.
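The 45° rule above can be sketched as a window-grouping scheme: the image is tiled with fixed-size windows, and windows lying along the same line of the splitting direction are assigned to the same sub-region. The region-id arithmetic below is one simple assumption of how windows can be grouped along the splitting direction, not the patent's exact geometry.

```python
import math

def split_windows(width, height, theta_deg, win=256):
    """Sketch of the sub-region splitting of step 224 under the 45-degree
    rule: |angle| > 45 degrees -> basic sub-images are cut horizontally and
    a split sub-region is translated horizontally; |angle| < 45 degrees ->
    cut and translated vertically.  Returns a dict mapping a window's
    (row, col) index to a sub-region id."""
    cols = math.ceil(width / win)
    rows = math.ceil(height / win)
    regions = {}
    for r in range(rows):
        for c in range(cols):
            if abs(theta_deg) > 45:
                # moving one window down shifts ~1/tan(theta) windows sideways
                rid = c - round(r / math.tan(math.radians(theta_deg)))
            else:
                # moving one window right shifts ~tan(theta) windows down
                rid = r - round(c * math.tan(math.radians(theta_deg)))
            regions[(r, c)] = rid
    return regions
```

For a near-vertical splitting direction (θ close to 90°) the sub-regions degenerate into window columns, and for a near-horizontal direction into window rows, as expected.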
So far, the input blurred image with perspective distortion has been split into multiple sub-regions, and the blur degree within each sub-region is substantially the same.

As previously mentioned, in the present invention, each sub-region having a substantially uniform blur degree will be deblurred individually based on its own blur information; thus, compared with the prior art, a better deblurring effect can be obtained.
Fig. 9 schematically illustrates the flowchart of the deblurring step of the deblurring method according to the present invention.

As shown in Fig. 9, first, at step 232, a typical sub-image is selected in each sub-region based on the edge density and blur degree of the sub-images. In this specification, the typical sub-image is the best sub-image for calculating the blur information of the corresponding sub-region.
Figure 10 illustrates exemplary typical subimage selection processing.
At step 232a of Fig. 10, for each sub-region, the edge density and blur degree of each of its sub-images are calculated. For example, the edge pixel count and the blur degree can be calculated for each sub-image.

Any suitable method can be adopted to obtain the edge pixel count and the blur degree. In one example of the present invention, for each sub-image, the Otsu method can be adopted to obtain a binary image. Then, edge pixels can be determined as follows to obtain the edge pixel count. For example, if none of the pixels in the 3 × 3 neighborhood of a foreground pixel is a background pixel, the foreground pixel will be regarded as a non-edge pixel; otherwise, it will be regarded as an edge pixel. As for the blur degree of each sub-image, formula (9) can be used, for example.
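The edge-pixel rule above can be sketched directly, assuming the intended rule is that an interior foreground pixel (whose whole 3 × 3 neighborhood is also foreground) is non-edge, while any foreground pixel touching background counts as an edge pixel. The binary input would, per the text, come from Otsu thresholding.

```python
import numpy as np

def edge_pixel_count(binary):
    """Sketch of the edge-pixel count: a foreground (1) pixel all of whose
    3x3 neighbors are also foreground is treated as non-edge; any other
    foreground pixel counts as an edge pixel.  Border pixels are handled
    by replicating the edge rows/columns."""
    padded = np.pad(binary, 1, mode="edge")
    h, w = binary.shape
    count = 0
    for y in range(h):
        for x in range(w):
            # padded[y:y+3, x:x+3] is the 3x3 neighborhood of (y, x)
            if binary[y, x] and not padded[y:y + 3, x:x + 3].all():
                count += 1
    return count
```

A solid foreground block thus contributes no edge pixels, while pixels adjacent to a background hole all count.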
Next, at step 232b of Fig. 10, for each sub-region, the score of each of its sub-images is calculated. The score indicates the closeness between the edge density of a sub-image and the maximum of the edge densities of all sub-images in the corresponding sub-region, as well as the closeness between the blur degree of the sub-image and the mean of the blur degrees of all sub-images in the corresponding sub-region. If the edge density of a sub-image is high and, at the same time, its blur degree is close to the mean of the blur degrees of all sub-images in the corresponding sub-region, the score of the sub-image is high. That is to say, such a sub-image is the best sub-image for calculating the blur information of the corresponding sub-region.
As an example, the score can be calculated by the following formulas (11) to (13):

Score_i = Score_i_edge + Score_i_BD    (11)

Score_i_BD = MIN(BD_i, BD_aver) / MAX(BD_i, BD_aver)    (12)

Score_i_edge = EdgeCount_i / EdgeCount_max    (13)

Here, Score_i represents the score of the i-th sub-image; Score_i_edge represents the edge-density score of the i-th sub-image; Score_i_BD represents the blur-degree score of the i-th sub-image; EdgeCount_i represents the edge pixel count of the i-th sub-image; EdgeCount_max represents the maximum of the edge pixel counts of all sub-images in the corresponding sub-region; BD_i is the blur degree of the i-th sub-image; and BD_aver is the mean of the blur degrees of all sub-images in the corresponding sub-region.

It should be noted that formulas (11) to (13) are only exemplary, and those skilled in the art can use any suitable formula for the score, as long as it can be used to find the best sub-image for calculating the blur information of the corresponding sub-region.
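Formulas (11) to (13) can be sketched as a small selection routine over the sub-images of one sub-region:

```python
def typical_subimage(edge_counts, blur_degrees):
    """Sketch of formulas (11)-(13): score each sub-image of one sub-region
    by how dense its edges are (13) and how close its blur degree BD_i is
    to the region mean BD_aver (12), then return the index of the
    highest-scoring sub-image."""
    bd_aver = sum(blur_degrees) / len(blur_degrees)
    ec_max = max(edge_counts)

    def score(i):
        s_edge = edge_counts[i] / ec_max                   # formula (13)
        bd = blur_degrees[i]
        s_bd = min(bd, bd_aver) / max(bd, bd_aver)         # formula (12)
        return s_edge + s_bd                               # formula (11)

    return max(range(len(edge_counts)), key=score)
```

A sub-image with many edges but an outlying blur degree loses to one whose blur degree is representative of the whole sub-region, which matches the stated intent of the score.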
Then, at step 232c of Fig. 10, for each sub-region, the sub-image having the largest score is selected as the typical sub-image.

Through the above processing of Fig. 10, a typical sub-image is selected for each sub-region, which is the best sub-image for calculating the blur information of the corresponding sub-region.

Returning now to Fig. 9. At step 234, the blur information of each typical sub-image is calculated. Any suitable method can be adopted to obtain the blur information for deblurring. In one example of the present invention, the blur information comprises a point spread function. Therefore, a step response can be calculated from the gray levels of pixels near horizontal edges and vertical edges, and then the point spread function can be calculated from the step response. A brief description is given below.

First, horizontal edges and vertical edges are detected by gradient values. More specifically, if the gradient in the x direction is larger than the gradient in the y direction, the edge pixel will be marked as a horizontal-edge pixel; otherwise, the edge pixel will be marked as a vertical-edge pixel. In this way, two edge images can be produced: a horizontal edge image and a vertical edge image.
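The edge classification above can be sketched as a comparison of gradient magnitudes, following the labeling convention of the text (x-gradient dominant → "horizontal" edge image):

```python
import numpy as np

def edge_direction_maps(gray, edge_mask):
    """Sketch of the edge classification: among the pixels flagged in
    edge_mask, those whose x-gradient magnitude exceeds the y-gradient
    magnitude go to the horizontal edge image; the rest go to the
    vertical edge image."""
    gy, gx = np.gradient(gray.astype(float))
    x_dominant = np.abs(gx) > np.abs(gy)
    horizontal = edge_mask & x_dominant
    vertical = edge_mask & ~x_dominant
    return horizontal, vertical
```

Every edge pixel lands in exactly one of the two maps, so the two edge images partition the detected edges.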
Next, connected component (CC) analysis is performed on the two edge images, and, as described below, gray values are collected along the edge direction based on the edge pixels in the detected connected components.

More specifically, an edge pixel is selected from a connected component. The selected edge pixel is set as the center, and gray values are collected along the edge direction on the blurred image, until the difference of the gray values of neighboring pixels becomes less than a threshold. The threshold can, for example, be set to 1, but is not limited thereto.

For each edge pixel, one gray value group is produced. Any group whose length is greater than 7 is discarded. All gray value groups collected from the two edge images are combined, in order to smooth the noise. The gray value groups are aligned according to their gray values, wherein identical gray values mean identical positions. The final length of the combined gray value group does not exceed 7, and the exceeding part is truncated.
Then, the difference diff_i of the combined gray values is calculated by the following formula (14):

diff_i = grayValue_(i+1) − grayValue_(i−1)    (14)

Here, grayValue_(i+1) represents the combined gray value of the (i+1)-th element in the combined gray value group; grayValue_(i−1) represents the combined gray value of the (i−1)-th element in the combined gray value group.

Based on the above, the maximum of the difference results is found, and the corresponding element is set as the center. Then, the step response can be calculated by the following steps: taking the central element as the first element of the step response; and calculating the other elements of the step response by the following formula (15):

Elem_SR_i = (Elem_(diffCenter−i) + Elem_(diffCenter+i)) * 0.5    (15)

Here, Elem_SR_i represents the i-th element of the step response; the subscript diffCenter represents the index of the central element in the difference results of the combined gray values; Elem_(diffCenter−i) represents the (diffCenter−i)-th element in the difference results of the combined gray values; Elem_(diffCenter+i) represents the (diffCenter+i)-th element in the difference results of the combined gray values.
Finally, the point spread function serving as the blur information is calculated from the step response.
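Formulas (14)–(15) and the final PSF step can be sketched as follows. The patent does not spell out how the PSF is derived from the step response; the derivative-and-normalize step below is one common assumption (the derivative of a step response is a line spread function), not the patent's stated procedure.

```python
def step_response_and_psf(diff):
    """Sketch of building the step response from the difference results
    diff_i of formula (14): the argmax of diff is the center, the central
    element becomes the first step-response element, and the remaining
    elements are mirrored averages (formula (15)).  As an assumption, the
    1-D PSF is then read off as the absolute discrete derivative of the
    step response, normalized to sum to 1."""
    center = max(range(len(diff)), key=lambda i: diff[i])
    sr = [diff[center]]                    # central element comes first
    i = 1
    while center - i >= 0 and center + i < len(diff):
        sr.append(0.5 * (diff[center - i] + diff[center + i]))  # formula (15)
        i += 1
    psf = [abs(sr[j] - sr[j + 1]) for j in range(len(sr) - 1)]  # derivative
    total = sum(psf) or 1.0
    return sr, [p / total for p in psf]
```

The mirrored averaging of formula (15) symmetrizes the measured profile, which further suppresses noise on top of the group combination described above.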
After the blur information (for example, the point spread function) of each typical sub-image is obtained, at step 236 of Fig. 9, each sub-region is deblurred based on the blur information of its typical sub-image. In one example of the present invention, each sub-region is deblurred by a Wiener filter based on the point spread function.

More specifically, the point spread function is transformed into an optical transfer function (OTF), and a Wiener filter is adopted to deblur each sub-region in the frequency domain. The Wiener filter is expressed by the following formula (16):

F(u, v) = H*(u, v) · Y(u, v) / ( |H(u, v)|² + γ )    (16)

Here, F(u, v) represents the function of the deblurred image in the frequency domain; H(u, v) and H*(u, v) represent the optical transfer function in the frequency domain and its conjugate, respectively; Y(u, v) represents the function of the blurred image in the frequency domain; γ represents a noise parameter, which can, for example, be set to 0.004.
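Formula (16) can be sketched directly with FFTs; the zero-padding of the PSF to the sub-region size is one straightforward way to obtain the OTF:

```python
import numpy as np

def wiener_deblur(blurred, psf, gamma=0.004):
    """Sketch of the frequency-domain Wiener filter of formula (16):
    F(u,v) = H*(u,v) Y(u,v) / (|H(u,v)|^2 + gamma), where H is the OTF
    (FFT of the PSF padded to the image size) and Y is the FFT of the
    blurred sub-region."""
    h = np.zeros_like(blurred, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf      # pad PSF to image size
    H = np.fft.fft2(h)                         # optical transfer function
    Y = np.fft.fft2(blurred.astype(float))
    F = np.conj(H) * Y / (np.abs(H) ** 2 + gamma)
    return np.real(np.fft.ifft2(F))
```

The γ term regularizes frequencies where |H| is small, which is exactly what prevents the noise amplification of plain inverse filtering.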
Through the above processing, the input blurred image is recovered by the deblurring method according to the present invention.

As previously mentioned, for a blurred image with perspective distortion, the deblurring methods of the prior art all have shortcomings. More specifically, they do not consider the different blur degrees of different parts of the image, and are therefore unsuitable for a blurred image with perspective distortion. Moreover, some of them mainly depend on the edge information of the current sub-image for deblurring. In that case, if the edge information is insufficient or affected by serious noise, it is difficult to obtain accurate blur information for deblurring.

In contrast, in the present invention, for a blurred image with perspective distortion, the perspective information is first obtained based on the edge information of the whole image, and it will therefore not be affected by local noise or a local lack of edge information. Then, the splitting direction is obtained based on the perspective information, and the image is split into multiple sub-regions, in each of which the blur degree is substantially the same. Finally, each sub-region having a substantially uniform blur degree is deblurred individually based on its own blur information; thus, compared with the prior art, a better deblurring effect can be obtained.

Moreover, in some examples of the invention, the splitting direction obtained based on the perspective information can be further fine-tuned based on the blur degree, so as to reduce the influence of serious local defocus on the precision of the perspective information, and thereby obtain a more accurate splitting direction. Therefore, compared with the prior art, an even better deblurring effect can be obtained.

Moreover, in some examples of the invention, a typical sub-image is selected in each sub-region based on the edge density and blur degree of the sub-images. The typical sub-image is the best sub-image for calculating the blur information of the corresponding sub-region; for example, it always has sufficient edge information for estimating the blur information. Then, each sub-region is deblurred based on the blur information of its typical sub-image. By means of these features, compared with the prior art, the deblurring effect of the present invention can be further improved.
Up to now, the deblurring method for an image with perspective distortion according to the present invention has been schematically described. Next, the deblurring effect of the deblurring method of the present invention will be evaluated with reference to Figs. 16 to 18.

Fig. 16 illustrates a part of an exemplary blurred image with perspective distortion. In Fig. 16, the blur on the left side is more serious than that on the right side.

Fig. 17 illustrates an exemplary deblurring result of the image in Fig. 16 obtained by a deblurring method of the prior art (which estimates the blur information from the edge information of each sub-image, and deblurs each sub-image by its own blur information). As can be seen from Fig. 17, the ringing in the regions within the bounding boxes is serious; that is, the deblurring effect is bad.

In contrast, Fig. 18 illustrates an exemplary deblurring result of the image in Fig. 16 obtained by the deblurring method of the present invention. As can be seen from Fig. 18, the ringing in the corresponding bounded regions is slight; that is, the deblurring effect is greatly improved.

The following Tables 1 and 2 give the OCR accuracy of the above exemplary image part and of the whole image (not shown), respectively.
Table 1

Exemplary image part (142 characters in total) | Error character count
Prior art                                      | 33
The present invention                          | 18

Table 2

Whole image (679 characters in total) | Error character count
Prior art                             | 127
The present invention                 | 99
As is obvious from Tables 1 and 2, the present invention can reduce the error character count for a blurred image with perspective distortion.

Therefore, the present invention can provide a method capable of deblurring an image with perspective distortion. In addition, the present invention can provide a method for deblurring an image with perspective distortion that has a better deblurring effect than the prior art.

By the way, the deblurring method of the present invention can have many applications, such as extracting and recognizing text information from images or videos captured by a camera.

Hereinafter, the device 100 for deblurring an image with perspective distortion according to the present invention will be briefly described with reference to Fig. 11.

As shown in Fig. 11, the deblurring device 100 can comprise: a perspective information detecting component 110 for detecting the perspective information of an image; a sub-region splitting component 120 for splitting the image into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same; and a deblurring component 130 for deblurring each sub-region based on its blur information.
Alternatively, the sub-region splitting component 120 further comprises: a component for calculating, based on the perspective information, a splitting direction for splitting the image; and a component for splitting the image into the multiple sub-regions along the splitting direction.

Alternatively, the sub-region splitting component 120 further comprises: a component for fine-tuning the splitting direction based on the blur degree, after the splitting direction for splitting the image has been calculated based on the perspective information and before the image is split into the multiple sub-regions along the splitting direction.

Alternatively, in the component for calculating, based on the perspective information, the splitting direction for splitting the image, the splitting direction is calculated based on the slope of a straight line based on the horizontal vanishing point of the image and the slope of a straight line based on the vertical vanishing point of the image.

Alternatively, in the component for calculating, based on the perspective information, the splitting direction for splitting the image, the splitting direction is calculated by the following formula:

Direction_split = arctan(slope_y / slope_x)

Here, Direction_split represents the splitting angle of the splitting direction, slope_y represents the slope of the straight line connecting the central point of the image and the vertical vanishing point of the image, and slope_x represents the slope of the straight line connecting the central point of the image and the horizontal vanishing point of the image.
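The formula for Direction_split can be sketched as follows, assuming the vanishing points and the image center are given as (x, y) coordinates (the coordinate convention is not fixed by the text):

```python
import math

def split_angle(center, vp_h, vp_v):
    """Sketch of Direction_split = arctan(slope_y / slope_x): slope_y is
    the slope of the line from the image center to the vertical vanishing
    point vp_v, slope_x the slope of the line from the center to the
    horizontal vanishing point vp_h; points are (x, y) tuples."""
    slope_y = (vp_v[1] - center[1]) / (vp_v[0] - center[0])
    slope_x = (vp_h[1] - center[1]) / (vp_h[0] - center[0])
    return math.degrees(math.atan(slope_y / slope_x))
```

When both connecting lines have the same slope, the splitting angle is 45°, the boundary case of the 45° rule used in the splitting step.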
Alternatively, the component for fine-tuning the splitting direction based on the blur degree further comprises: a component for selecting a part of the image and splitting multiple sub-images from said part, starting from a corner of said part, along the splitting direction with a predetermined window size; a component for calculating the blur degrees of said multiple sub-images; and a component for selecting, among a predetermined direction range containing the splitting direction, the direction along which the difference of the blur degrees of the split multiple sub-images is minimal, as the fine-tuned splitting direction.

Alternatively, the difference of the blur degrees of the split multiple sub-images is the variance of the blur degrees of the split multiple sub-images.

Alternatively, the height and width of the window are the same.

Alternatively, the component for splitting the image into the multiple sub-regions along the splitting direction further comprises: a component for splitting multiple basic sub-images from the image, namely, starting from a corner of the image along the horizontal direction with a predetermined window size if the absolute value of the splitting angle of the splitting direction is greater than 45°, starting from a corner of the image along the vertical direction with a predetermined window size if the absolute value of the splitting angle of the splitting direction is less than 45°, and starting from a corner of the image along the horizontal direction or the vertical direction with a predetermined window size if the absolute value of the splitting angle of the splitting direction equals 45°; a component for splitting, from each of said multiple basic sub-images, multiple sub-images belonging to one sub-region along the splitting direction with a predetermined window size; and a component for continuing to split the remaining part of the image into sub-regions along the splitting direction if the obtained sub-regions do not cover the whole image.

Alternatively, the component for splitting the image into the multiple sub-regions along the splitting direction further comprises: a component for splitting, from the image, multiple sub-images belonging to one sub-region, starting from a corner of the image along the splitting direction with a predetermined window size; and a component for splitting the whole image into the multiple sub-regions by the following processing: if the absolute value of the splitting angle of the splitting direction is greater than 45°, translating the split sub-region along the horizontal direction; if the absolute value of the splitting angle of the splitting direction is less than 45°, translating the split sub-region along the vertical direction; and if the absolute value of the splitting angle of the splitting direction equals 45°, translating the split sub-region along the horizontal direction or the vertical direction.
Alternatively, the height and width of the window are the same.

Alternatively, the blur information comprises a point spread function, and the deblurring component 130 deblurs each sub-region by a Wiener filter based on the point spread function.

Alternatively, the deblurring component 130 further comprises: a component for selecting a typical sub-image in each sub-region based on the edge density and blur degree of the sub-images, said typical sub-image being the best sub-image for calculating the blur information of the corresponding sub-region; a component for calculating the blur information of each typical sub-image; and a component for deblurring each sub-region based on the blur information of its typical sub-image.

Alternatively, the component for selecting a typical sub-image in each sub-region based on the edge density and blur degree of the sub-images further comprises: a component for calculating, for each sub-region, the edge density and blur degree of each of its sub-images; a component for calculating, for each sub-region, the score of each of its sub-images, said score indicating the closeness between the edge density of the sub-image and the maximum of the edge densities of all sub-images in the corresponding sub-region, and the closeness between the blur degree of the sub-image and the mean of the blur degrees of all sub-images in the corresponding sub-region; and a component for selecting, for each sub-region, the sub-image having the largest score as the typical sub-image.
Alternatively, the score is calculated by the following formula:

Score_i = EdgeCount_i / EdgeCount_max + MIN(BD_i, BD_aver) / MAX(BD_i, BD_aver)

Here, Score_i represents the score of the i-th sub-image, EdgeCount_i represents the edge pixel count of the i-th sub-image, EdgeCount_max represents the maximum of the edge pixel counts of all sub-images in the corresponding sub-region, BD_i is the blur degree of the i-th sub-image, and BD_aver is the mean of the blur degrees of all sub-images in the corresponding sub-region.

Alternatively, the perspective information comprises the horizontal vanishing point and the vertical vanishing point of the image.

Alternatively, the perspective information detecting component 110 further comprises: a component for detecting the edge image of the image; a component for detecting horizontal page boundaries and vertical page boundaries in the edge image; and a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the horizontal page boundaries and the vertical page boundaries. Or, the perspective information detecting component 110 further comprises: a component for detecting the edge image of the image; a component for detecting text lines and vertical character strokes in the edge image; and a component for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the text lines and the vertical character strokes.
Up to now, the device for deblurring an image with perspective distortion according to the present invention has been schematically described. It should be noted that all the above components are exemplary preferred modules for implementing the deblurring method of the present invention. However, the modules for implementing the various steps have not been described exhaustively above. Generally, where there is a step of performing a certain processing, there is a corresponding functional module or component for implementing that same processing.

In addition, it should be noted that two or more components can be merged into one component as long as their functions can be realized; on the other hand, any one component can be divided into multiple components, as long as similar functions can be realized.

The method and device of the present invention can be implemented in many ways. For example, the method and device of the present invention can be realized by software, hardware, firmware, or any combination thereof. In addition, in some embodiments, the present invention may also be implemented as a program recorded in a recording medium, comprising machine-readable instructions for implementing the method according to the present invention. Thus, the present invention also covers the recording medium storing the program for implementing the method according to the present invention.

Although the present invention has been described with reference to exemplary embodiments, it is to be understood that the present invention is not limited to the disclosed exemplary embodiments. It will be obvious to those skilled in the art that the above exemplary embodiments may be modified without departing from the scope and spirit of the present invention. The scope of the appended claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (24)

1. A method for deblurring an image with perspective distortion, characterized by comprising:
a perspective information detecting step for detecting the perspective information of the image;
a sub-region splitting step for splitting the image into multiple sub-regions based on the perspective information, the blur degree within each sub-region being the same; and
a deblurring step for deblurring each sub-region based on its blur information,
wherein the sub-region splitting step further comprises the following steps:
calculating, based on the perspective information, a splitting direction for splitting the image; and
splitting the image into said multiple sub-regions along the splitting direction,
wherein the sub-region splitting step further comprises the following step:
after the step of calculating, based on the perspective information, the splitting direction for splitting the image, and before the step of splitting the image into said multiple sub-regions along the splitting direction, fine-tuning the splitting direction based on the blur degree.
2. The method according to claim 1, wherein the step of fine-tuning the splitting direction based on the blur degree further comprises the following steps:
selecting a part of the image, and splitting multiple sub-images from said part, starting from a corner of said part, along the splitting direction with a predetermined window size;
calculating the blur degrees of said multiple sub-images; and
selecting, among a predetermined direction range containing the splitting direction, the direction that minimizes the difference of the blur degrees of the split multiple sub-images, as the fine-tuned splitting direction.
3. The method according to claim 2, wherein the difference of the blur degrees of the split multiple sub-images is the variance of the blur degrees of the split multiple sub-images.
4. The method according to claim 2, wherein the height and width of the window are identical.
5. The method according to claim 1, wherein the step of splitting the image into said multiple sub-regions along the splitting direction further comprises the following steps:
splitting, from the image, multiple sub-images belonging to one sub-region, starting from a corner of the image along the splitting direction with a predetermined window size; and
splitting the whole image into said multiple sub-regions by the following processing: if the absolute value of the splitting angle of the splitting direction is greater than 45°, translating the split sub-region along the horizontal direction; if the absolute value of the splitting angle of the splitting direction is less than 45°, translating the split sub-region along the vertical direction; and, if the absolute value of the splitting angle of the splitting direction equals 45°, translating the split sub-region along the horizontal direction or the vertical direction.
6. The method according to claim 5, wherein the height and width of the window are identical.
7. The method according to claim 1, wherein the blur information comprises a point spread function, and the deblurring step deblurs each sub-region by a Wiener filter based on the point spread function.
8. according to the method for any one in claim 1 to 7, wherein, deblurring step enters oneStep comprises the following steps:
Marginal density based on subimage in all subregion and fuzziness are selected one typicallySubimage, described typical subimage is fuzzy message for calculating corresponding subregionBest subimage;
Calculate the fuzzy message of each typical subimage; And
Make the fuzzy message of all subregion based on its typical subimage and deblurring.
9. method according to Claim 8, wherein, the limit based on subimage in all subregionEdge density and fuzziness select the step of a typical subimage further to comprise following stepRapid:
Calculate marginal density and the fuzziness of its each subimage for all subregion;
Calculate the mark of its each subimage for all subregion, the limit of described mark instruction subimageConnecing between the maximum of the marginal density of all subimages in edge density and corresponding subregionAll subimages in the fuzziness of recency and described subimage and corresponding subregion fuzzyThe degree of approach between the mean value of degree; And
Select its number of sub images with largest score as typical subgraph for all subregionPicture.
10. according to the method for claim 9, wherein, calculate mark by following formula:
Scorei=EdgeCounti/EdgeCountmax+MIN(BDi,BDaver)/MAX(BDi,BDaver)
Here Score,iRepresent the mark of i number of sub images, EdgeCountiRepresent i number of sub imagesThe counting of edge pixel, EdgeCountmaxRepresent all subimages in corresponding subregionThe maximum of counting of edge pixel, BDiThe fuzziness of i number of sub images, and BDaverThe mean value of the fuzziness of all subimages in corresponding subregion.
11. according to the method for any one in claim 1 to 7, and wherein, perspective information comprisesThe horizontal end point of image and vertical end point.
12. according to the method for claim 11, and wherein, perspective information detecting step further wrapsContain following steps:
The edge image of detected image;
Detection level page boundary and vertical page boundary in edge image; And
The horizontal end point of carrying out computed image based on horizontal page boundary and vertical page boundary disappears with verticalLose point,
Or perspective information detecting step further comprises following steps:
The edge image of detected image;
In edge image, detect line of text and vertical characters stroke; And
Come the horizontal end point and vertical disappearance of computed image based on line of text and vertical characters strokePoint.
13. A device for deblurring an image with perspective distortion, comprising:
a perspective information detecting unit for detecting perspective information of the image;
a subregion dividing unit for dividing the image into a plurality of subregions based on the perspective information, the fuzziness within each subregion being identical; and
a deblurring unit for deblurring each subregion based on its blur information,
wherein the subregion dividing unit further comprises:
a unit for calculating, based on the perspective information, a dividing direction for dividing the image; and
a unit for dividing the image into the plurality of subregions along the dividing direction,
wherein the subregion dividing unit further comprises:
a unit for fine-tuning the dividing direction based on fuzziness, after the dividing direction for dividing the image is calculated based on the perspective information and before the image is divided into the plurality of subregions along the dividing direction.
14. The device according to claim 13, wherein the unit for fine-tuning the dividing direction based on fuzziness further comprises:
a unit for selecting a part of the image and segmenting a plurality of subimages from that part, starting from a corner of the part, along the dividing direction with a predetermined window size;
a unit for calculating the fuzziness of the segmented subimages; and
a unit for selecting, within a predetermined direction range containing the dividing direction, the direction that minimizes the difference in fuzziness among the segmented subimages as the fine-tuned dividing direction.
15. The device according to claim 14, wherein the difference in fuzziness among the segmented subimages is the variance of the fuzziness of the segmented subimages.
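The fine-tuning of claims 14-15 is a one-dimensional search: among candidate directions around the initial estimate, keep the one whose subimages have the most uniform fuzziness. A minimal sketch, not part of the claims; the hook `blur_degrees_along`, the search range, and the step size are all illustrative stand-ins for the window-based segmentation and fuzziness measurement:

```python
import numpy as np

def fine_tune_direction(initial_angle, blur_degrees_along, delta=5.0, step=0.5):
    """Search a +/- delta degree range around the initial dividing direction
    and return the angle minimizing the variance of the subimage fuzziness
    (claims 14-15). `blur_degrees_along(angle)` must cut subimages along
    `angle` and return their fuzziness values."""
    candidates = np.arange(initial_angle - delta,
                           initial_angle + delta + step / 2, step)
    variances = [np.var(blur_degrees_along(a)) for a in candidates]
    return float(candidates[int(np.argmin(variances))])

# Toy stand-in: fuzziness spreads out the farther the cut deviates from 12 degrees,
# so the search should settle on 12.0 even though the initial estimate is 10.0.
tuned = fine_tune_direction(10.0, lambda a: [1.0, 1.0 + abs(a - 12.0)])
```

The variance criterion works because, per claim 13, a correctly oriented subregion has identical fuzziness throughout; residual perspective error shows up as fuzziness spread.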
16. The device according to claim 14, wherein the height and the width of the window are identical.
17. The device according to claim 13, wherein the unit for dividing the image into the plurality of subregions along the dividing direction further comprises:
a unit for segmenting, starting from a corner of the image, a plurality of subimages belonging to one subregion from the image along the dividing direction with a predetermined window size; and
a unit for dividing the whole image into the plurality of subregions by the following processing: if the absolute value of the angle of the dividing direction is greater than 45°, translating the segmented subregion horizontally; if the absolute value of the angle of the dividing direction is less than 45°, translating the segmented subregion vertically; and if the absolute value of the angle of the dividing direction equals 45°, translating the segmented subregion either horizontally or vertically.
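The tiling rule of claim 17 is a simple three-way branch on the dividing angle. A minimal sketch, not part of the claims; the function name is illustrative and the angle is assumed to be measured in degrees from the horizontal:

```python
def translation_direction(dividing_angle_deg):
    """Decide how to step the segmented subregion across the image
    (claim 17): steep dividing directions are tiled by horizontal
    translation, shallow ones by vertical translation, and at exactly
    45 degrees either choice is valid (the claim leaves it open)."""
    if abs(dividing_angle_deg) > 45:
        return "horizontal"
    if abs(dividing_angle_deg) < 45:
        return "vertical"
    return "horizontal or vertical"
```

The branch keeps consecutive strips roughly perpendicular to the dividing direction, so each translated copy still covers a band of near-constant fuzziness.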
18. The device according to claim 17, wherein the height and the width of the window are identical.
19. The device according to claim 13, wherein the blur information comprises a point spread function, and the deblurring unit deblurs each subregion by means of a Wiener filter based on the point spread function.
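Claim 19 names Wiener filtering with a known PSF. Below is a minimal frequency-domain sketch of textbook Wiener deconvolution, not the patent's actual implementation; the noise-to-signal ratio `nsr` is an assumed parameter the claims do not fix:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Wiener deconvolution of one subregion given its point spread function
    (claim 19). Assumes circular boundary conditions; `nsr` regularizes
    frequencies where the PSF response is weak."""
    psf_pad = np.zeros_like(blurred, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    H = np.fft.fft2(psf_pad)                       # PSF frequency response
    G = np.fft.fft2(blurred)                       # blurred spectrum
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G  # Wiener estimate
    return np.real(np.fft.ifft2(F_hat))

# Synthetic check: blur a random image circularly with a small PSF, then restore.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
psf = np.array([[0.6, 0.2], [0.2, 0.0]])
pad = np.zeros((16, 16)); pad[:2, :2] = psf
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
restored = wiener_deblur(blurred, psf, nsr=1e-8)
```

In the claimed device, each subregion would be restored with its own PSF (from its typical subimage, claim 20), rather than one global filter for the whole image.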
20. The device according to any one of claims 13 to 19, wherein the deblurring unit further comprises:
a unit for selecting one typical subimage based on the edge density and fuzziness of the subimages in each subregion, the typical subimage being the subimage best suited for calculating the blur information of the corresponding subregion;
a unit for calculating the blur information of each typical subimage; and
a unit for deblurring each subregion based on the blur information of its typical subimage.
21. The device according to claim 20, wherein the unit for selecting one typical subimage based on the edge density and fuzziness of the subimages in each subregion further comprises:
a unit for calculating the edge density and fuzziness of each subimage in each subregion;
a unit for calculating a score for each subimage in each subregion, the score indicating the closeness between the edge density of the subimage and the maximum edge density among all subimages in the corresponding subregion, and the closeness between the fuzziness of the subimage and the mean fuzziness of all subimages in the corresponding subregion; and
a unit for selecting, for each subregion, the subimage with the largest score as the typical subimage.
22. The device according to claim 21, wherein the score is calculated by the following formula:
Score_i = EdgeCount_i / EdgeCount_max + MIN(BD_i, BD_aver) / MAX(BD_i, BD_aver)
where Score_i denotes the score of the i-th subimage, EdgeCount_i denotes the edge pixel count of the i-th subimage, EdgeCount_max denotes the maximum edge pixel count among all subimages in the corresponding subregion, BD_i is the fuzziness of the i-th subimage, and BD_aver is the mean fuzziness of all subimages in the corresponding subregion.
23. The device according to any one of claims 13 to 19, wherein the perspective information comprises a horizontal vanishing point and a vertical vanishing point of the image.
24. The device according to claim 23, wherein the perspective information detecting unit further comprises:
a unit for detecting an edge image of the image;
a unit for detecting horizontal page boundaries and vertical page boundaries in the edge image; and
a unit for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the horizontal page boundaries and the vertical page boundaries;
or the perspective information detecting unit further comprises:
a unit for detecting an edge image of the image;
a unit for detecting text lines and vertical character strokes in the edge image; and
a unit for calculating the horizontal vanishing point and the vertical vanishing point of the image based on the text lines and the vertical character strokes.
CN201210154166.XA 2012-05-17 2012-05-17 For making the method and apparatus of the image deblurring with perspective distortion Active CN103426144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210154166.XA CN103426144B (en) 2012-05-17 2012-05-17 For making the method and apparatus of the image deblurring with perspective distortion

Publications (2)

Publication Number Publication Date
CN103426144A CN103426144A (en) 2013-12-04
CN103426144B true CN103426144B (en) 2016-05-11

Family

ID=49650841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210154166.XA Active CN103426144B (en) 2012-05-17 2012-05-17 For making the method and apparatus of the image deblurring with perspective distortion

Country Status (1)

Country Link
CN (1) CN103426144B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784631B (en) * 2016-08-24 2020-05-05 深圳市豪恩安全科技有限公司 Image deblurring method and device
CN107292892B (en) * 2017-06-23 2020-04-21 北京奇艺世纪科技有限公司 Video frame image segmentation method and device
CN108979624B (en) * 2018-08-07 2022-03-08 东北大学 Rod pumping system friction factor identification method based on indicator diagram moment characteristics

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008165006A (en) * 2006-12-28 2008-07-17 Brother Ind Ltd Projection device and image distortion correcting method
CN101739679A (en) * 2008-11-27 2010-06-16 奇景光电股份有限公司 System and method for generating three-dimensional depth message
CN101789122A (en) * 2009-01-22 2010-07-28 佳能株式会社 Method and system for correcting distorted document image
CN102073993A (en) * 2010-12-29 2011-05-25 清华大学 Camera self-calibration-based jittering video deblurring method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110044554A1 (en) * 2009-08-21 2011-02-24 Konica Minolta Systems Laboratory, Inc. Adaptive deblurring for camera-based document image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Progress in Camera-Based Document Image Analysis; David Doermann et al.; Proceedings of the Seventh International Conference on Document Analysis and Recognition; 2003-08-06; pp. 606-616 *

Similar Documents

Publication Publication Date Title
CN104200210B (en) A kind of registration number character dividing method based on component
US8488896B2 (en) Image processing apparatus and image processing method
WO2021109697A1 (en) Character segmentation method and apparatus, and computer-readable storage medium
US10169673B2 (en) Region-of-interest detection apparatus, region-of-interest detection method, and recording medium
CN106934806B (en) It is a kind of based on text structure without with reference to figure fuzzy region dividing method out of focus
CN109325930A (en) Detection method, device and the detection device of boundary defect
JP2010171976A (en) Method and system for correcting distorted document image
CN114299275A (en) Hough transform-based license plate inclination correction method
TWI482468B (en) Device, method and computer readable storage medium thereof for detecting object
CN103426144B (en) For making the method and apparatus of the image deblurring with perspective distortion
KR101377910B1 (en) Image processing method and image processing apparatus
JP6542230B2 (en) Method and system for correcting projected distortion
JP6797046B2 (en) Image processing equipment and image processing program
CN108197624A (en) The recognition methods of certificate image rectification and device, computer storage media
JP2020201823A (en) Image processing device, image processing method, and program
CN111626179B (en) Micro-expression detection method based on optical flow superposition
CN112285876A (en) Camera automatic focusing method based on image processing and bubble detection
JP2006285564A (en) Road area detection device and road area detection method
US8693769B2 (en) Image classification methods and systems
JP2021111228A (en) Learning device, learning method, and program
CN108109120B (en) Illumination compensation method and device for dot matrix two-dimensional code
JP2004280713A (en) License plate number recognition device
CN108629786B (en) Image edge detection method and device
CN106846419B (en) Method and device for determining portrait outline in image
JP3983721B2 (en) Image distortion correction apparatus, image reading apparatus, image forming apparatus, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant