CN102572222B - Image processing apparatus and method, and image display apparatus and method - Google Patents


Publication number
CN102572222B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110359416.9A
Other languages
Chinese (zh)
Other versions
CN102572222A (en)
Inventor
藤山直之
小野良树
久保俊明
堀部知笃
那须督
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN102572222A publication Critical patent/CN102572222A/en
Application granted granted Critical
Publication of CN102572222B publication Critical patent/CN102572222B/en

Landscapes

  • Television Systems (AREA)
  • Picture Signal Circuits (AREA)

Abstract

The present invention provides an image processing apparatus and method, and an image display apparatus and method, that reduce the motion-blur amplitude contained in an input video signal and thereby improve the image quality of frame-interpolated moving images. As the means of solution, the direction and size of the motion blur are estimated from the motion vector detected by a motion vector detection unit that detects the motion vectors of the video signal; the video signal (D2) is filtered (34) using filter coefficients matched to the estimated direction and size of the motion blur; a gain (39) is obtained from the filtering result (FL1(i, j)); the video signal (D2(i, j)) is corrected by multiplying it by the obtained gain; and frame interpolation is performed using the corrected video signal (E(i, j)).

Description

Image processing apparatus and method, and image display apparatus and method
Technical field
The present invention relates to an image processing apparatus and method and to an image display apparatus and method. It relates in particular to frame interpolation processing that inserts new interpolated frames between the frames of an image sequence.
Background technology
Hold-type displays such as liquid crystal displays continue to show the same image throughout each frame period, whereas the human eye tracks a moving object continuously. When an object moves in the image, its movement therefore appears as discontinuous steps of one frame each, and its edges appear blurred.
In addition, for material such as film converted to a television signal, the difference in frame rate between the two (the film video and the television signal) means that two or three frames of the picture signal are produced from the same film frame. If such material is displayed as-is, motion blur appears, or judder occurs, making motion look jerky.
Similarly, computer-processed video converted to a television signal likewise yields two frames of the picture signal from the same source frame; displayed as-is, it exhibits the same motion blur or judder.
To address these problems, the number of displayed frames can be increased by interpolating frames, so that the movement of objects becomes smooth.
Known conventional image processing apparatus and methods include zero-order hold, which fills the interpolated frame with the same image as the preceding frame, and averaging interpolation, which fills the interpolated frame with the average of the preceding and following frames. With zero-order hold, however, an image moving in a given direction still does not move smoothly, so the blur problem of hold-type displays remains unsolved; and averaging interpolation causes ghosting in moving images.
As an improvement, it is known to generate each interpolated pixel of the interpolated frame from the pair of pixels, one in the temporally preceding frame and one in the temporally following frame at point-symmetric positions about the interpolated pixel, whose mutual correlation is highest (see, for example, Patent Document 1). With this method, when the video contains an object moving from the preceding frame to the following frame, the interpolated frame generated between them shows the object at the midpoint between its positions in the two frames, so a smooth video stream can be displayed. However, even if frame interpolation is performed properly, for example doubling the frame rate, the frames containing the moving object still include the blur caused by motion during exposure (hereinafter also called motion blur); this blur arising from the shooting time cannot be reduced, and only a blurred video stream can be displayed.
On the other hand, Patent Document 2 discloses a technique that uses motion vector detection and deconvolution with a blur function to correct the blurred regions of frames containing moving objects degraded by motion blur.
The video signal received by a display device is produced by the light-receiving section of a camera, which quantizes the total amount of light received from the subject over the frame accumulation time (for example 1/60 second) and transmits it in a pixel order determined by the applicable standard. When the camera's light-receiving section and the subject move relative to each other, the outline of the subject is blurred by an amount determined by the frame accumulation time and the relative speed of camera and subject. The technique of Patent Document 2 fits a mathematical model to the image and filters it with the inverse of the blur function included in the model. However, as noted above, hold-type displays such as liquid crystal displays continue to show the same image for one frame period; when an object moves in the image, its movement is discontinuous in steps of one frame, so the problem of blurred-looking edges remains.
[Patent Document 1] Japanese Patent Application Publication No. 2006-129181 (page 8, Fig. 3)
[Patent Document 2] Japanese Patent No. 3251127
Conventional frame interpolation processing as described above thus cannot alleviate the blur caused by the motion of objects moving in the video, even when the interpolated frames are generated properly. Conversely, even when inverse filtering by deconvolution alleviates the motion blur of a moving object, the object still moves discontinuously in steps of one frame, so its edges appear blurred.
Summary of the invention
An image processing apparatus according to one aspect of the present invention comprises: a motion vector detection section that detects the motion vector of a first video signal input from outside, using the first video signal and a second video signal input from outside, the second video signal being one or more frames earlier or one or more frames later in time than the first video signal; and an image correction section that uses the motion vector detected by the motion vector detection section to correct motion blur in the first video signal. The image correction section comprises: a motion blur estimation section that estimates the direction and size of the motion blur from the motion vector; a filtering section that filters the first video signal using filter coefficients predetermined according to the estimated direction and size of the motion blur; and a correction strength adjustment section that adjusts the correction strength for a pixel of interest according to the degree of variation of the pixel values near that pixel. The filtering section clips each pixel value in the peripheral region of the pixel of interest so that the absolute value of the difference between the pixel value of the pixel of interest and each pixel value in its peripheral region does not exceed a predetermined threshold, and low-pass filters the pixels in the peripheral region using the clipped pixel values.
An image processing method according to another aspect of the present invention comprises: a motion vector detection step of detecting the motion vector of a first video signal input from outside, using the first video signal and a second video signal input from outside, the second video signal being one or more frames earlier or one or more frames later in time than the first video signal; and an image correction step of using the motion vector detected in the motion vector detection step to correct motion blur in the first video signal. The image correction step comprises: a motion blur estimation step of estimating the direction and size of the motion blur from the motion vector; a filtering step of filtering the first video signal using filter coefficients predetermined according to the estimated direction and size of the motion blur; and a correction strength adjustment step of adjusting the correction strength for a pixel of interest according to the degree of variation of the pixel values near that pixel. In the filtering step, each pixel value in the peripheral region of the pixel of interest is clipped so that the absolute value of the difference between the pixel value of the pixel of interest and each pixel value in its peripheral region does not exceed a predetermined threshold, and the pixels in the peripheral region are low-pass filtered using the clipped pixel values.
According to the present invention, the motion blur of the current frame is corrected with reference to the motion vector, and the frame between two temporally consecutive blur-corrected frames is generated by interpolation with reference to those two frames and the motion vector; this improves the image quality of moving-image display.
The same effect could also be obtained by providing separate processing for motion blur correction and for interpolated frame generation, but compared with that arrangement the present invention has the following advantages.
(1) The motion vector detection result is used in each stage of processing, so motion vector detection becomes a shared circuit (common process), reducing circuit scale (processing load, and the amount of frame memory needed to store a motion vector for each pixel).
(2) Motion vector detection and frame interpolation each require frame memory holding at least two frames of images; by sharing the frame memory, the required memory capacity can be reduced.
Brief description of the drawings
Fig. 1 is a block diagram of an image display apparatus according to a first embodiment of the present invention.
Fig. 2 is a block diagram showing an example configuration of the image delay section 4 of Fig. 1.
Fig. 3 is a block diagram showing an example configuration of the motion vector detection section 5 of Fig. 1.
Fig. 4(a) and Fig. 4(b) show an example of the motion vector search range in the video signals of two consecutive frames.
Fig. 5 is a block diagram showing an example configuration of the image correction section 6 of Fig. 1.
Fig. 6 shows the relation between the frame period and the shooting period.
Fig. 7 shows an example of an effective filter area EFA for motion blur.
Fig. 8 shows another example of an effective filter area EFA for motion blur.
Fig. 9 shows yet another example of an effective filter area EFA for motion blur.
Fig. 10 shows an example of the relation between the difference between a pixel value and the mean value, and the adjusted correction strength parameter.
Fig. 11(a)–Fig. 11(j) are timing charts showing the signal timing in each part of the configurations of Fig. 1 and Fig. 5.
Fig. 12 shows the components of a motion vector.
Fig. 13(a) and Fig. 13(b) show an example of the motion vectors of two frames and the resulting motion blur.
Fig. 14(a) and Fig. 14(b) show another example of the motion vectors of two frames and the resulting motion blur.
Fig. 15 shows an example of the direction and size of a motion vector and the corresponding pointer (IND) into the filter coefficient table.
Fig. 16 is a graph of the threshold-based nonlinear processing.
Fig. 17 is a block diagram showing an example configuration of the image correction section 6 according to a second embodiment of the present invention.
Label declaration
1 image display apparatus; 2 image processing apparatus; 3 image display section; 4 image delay section; 5 motion vector detection section; 6 image correction section; 7 frame generation section; 11 frame memory; 12 frame memory control section; 21 frame-of-interest block extraction section; 22 following-frame block extraction section; 23 motion vector determination section; 24 memory; 25 memory controller; 30 correction processing section; 31 operation signal processing section; 32 motion blur estimation section; 33 filter coefficient storage section; 34 filtering section; 35 nonlinear processing section; 36 low-pass filter; 37 mean value calculation section; 38 correction strength adjustment section; 39 gain calculation section
Embodiment
First Embodiment
Fig. 1 is a block diagram showing the configuration of an image display apparatus including an image processing apparatus according to the present invention. The illustrated image display apparatus 1 has an image processing apparatus 2 and an image display section 3; the image processing apparatus 2 has an image delay section 4, a motion vector detection section 5, an image correction section 6, and a frame generation section 7.
The image processing apparatus 2 receives the input video signal D0 and performs motion blur correction and frame interpolation. The video signal D0 consists of a sequence of signals representing the pixel values of the pixels composing the image. The image processing apparatus 2 applies motion blur correction in turn to each pixel as the pixel to be corrected (the pixel of interest), generates by interpolation a frame HF lying between the blur-corrected video signals E1 and E2 (each consisting of a sequence of signals carrying corrected pixel values), and outputs an interpolated video signal (picture signal sequence) HV containing the video signals E1, E2, and HF.
The video signal D0 input to the image processing apparatus 2 is supplied to the image delay section 4. Using a frame memory, the image delay section 4 delays the input video signal D0 by frames and outputs the video signals D2 and D1 of two mutually different frames to the motion vector detection section 5.
The motion vector detection section 5 uses the video signals D2 and D1 of the two different frames output from the image delay section 4 to detect the motion vector V contained in the video signal D2, and outputs the motion vector V to the image correction section 6.
It also delays the motion vector V by one frame period and outputs it to the frame generation section 7 as the motion vector Vd.
The image correction section 6 receives the motion vector V output from the motion vector detection section 5, corrects the motion blur that degrades the video signal D2 output from the image delay section 4 owing to motion of the subject and/or motion of the camera, and outputs the corrected video signal E to the image delay section 4; the image delay section 4 delays the corrected video signal E by frames and outputs the video signals E1 and E2 of two mutually different frames.
The frame generation section 7 uses the video signals E1 and E2 of the two different frames output from the image delay section 4 and the motion vector Vd input from the motion vector detection section 5 to interpolate a frame between the video signals E1 and E2, and outputs the interpolated video signal HV, containing the video signal HF of the interpolated frame, to the image display section 3. In the interpolated video signal HV, the frame generation section 7 outputs the video signal E1, the interpolated-frame video signal HF, and the video signal E2 in that order.
The image display section 3 displays images based on the video signal HV, which has undergone both motion blur correction and frame interpolation. By inputting an adjustment parameter PR, the user can adjust the degree of motion blur correction and/or the corrected image quality.
In the following description, the picture size is assumed to be M pixels in the vertical direction and N pixels in the horizontal direction. Variables i and j are defined in the ranges 1 ≤ i ≤ M and 1 ≤ j ≤ N; the coordinates of a pixel position are written (i, j), and the pixel at that position is written P(i, j). That is, variable i indicates the vertical position and variable j the horizontal position. At the pixel in the top left corner of the image, i = 1 and j = 1; i increases by 1 with each pixel pitch downward, and j increases by 1 with each pixel pitch to the right.
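The coordinate convention above can be made concrete with a small sketch; the function name and the mapping onto a 0-based NumPy array are my own illustrative choices, not part of the specification.

```python
import numpy as np

# Illustration of the document's pixel-coordinate convention: i (1..M) runs
# top-to-bottom, j (1..N) runs left-to-right, and P(1, 1) is the top-left
# pixel. With a 0-based NumPy array, P(i, j) maps to frame[i - 1, j - 1].
M, N = 4, 6
frame = np.arange(M * N).reshape(M, N)

def pixel(frame, i, j):
    """Return P(i, j) under the 1-based convention used in the text."""
    return frame[i - 1, j - 1]

assert pixel(frame, 1, 1) == frame[0, 0]    # top-left corner
assert pixel(frame, M, N) == frame[-1, -1]  # bottom-right corner
```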
Fig. 2 shows an example configuration of the image delay section 4. The illustrated image delay section 4 has a frame memory 11 and a frame memory control section 12. The frame memory 11 has capacity to store at least two frames of the input video signal D0 and two frames of the corrected video signal E. Alternatively, it may be configured to store only one frame each of the video signals D0 and E rather than two.
The frame memory control section 12 writes the input video signal and reads the accumulated video signal D0 according to memory addresses generated from the synchronizing signal contained in the input video signal D0, and thereby generates the video signals D1 and D2 of two consecutive frames.
The video signal D1 is not delayed with respect to the input video signal D0 and is also called the current-frame video signal.
The video signal D2 is obtained by delaying the video signal D1 by one frame; it is the signal one frame period earlier in time, and is also called the one-frame-delayed video signal.
In the description that follows, processing is applied to the video signal D2, so D2 is sometimes called the frame-of-interest video signal and D1 the following-frame video signal. As noted above, the video signals D1 and D2 consist of sequences of signals for the pixels composing the image; the pixel values of the pixel P(i, j) at the position with coordinates (i, j) are written D1(i, j) and D2(i, j).
The frame memory control section 12 also writes the corrected video signal E and reads the accumulated signal, outputting the video signal E1 in the same frame period as the video signal E and the video signal E2 one frame period later.
Fig. 3 shows an example configuration of the motion vector detection section 5. The motion vector detection section 5 detects the motion vector of a first video signal (D2) input from outside it, using the first video signal and a second video signal (D1) input from the same source, the second video signal being one or more frames earlier or one or more frames later in time than the first video signal. The illustrated motion vector detection section 5 has a frame-of-interest block extraction section 21, a following-frame block extraction section 22, a motion vector determination section 23, a memory 24, and a memory controller 25.
As shown in Fig. 4(a), the frame-of-interest block extraction section 21 extracts from the frame-of-interest video signal D2 output from the image delay section 4 a rectangular area (block) D2B(i, j) in the peripheral region of the pixel of interest P(i, j), for example centred on the pixel of interest, with height (vertical size) 2*BM+1 and width (horizontal size) 2*BN+1. The motion vector determination section 23 estimates to which area of the following-frame video signal D1 this rectangular area D2B(i, j) has moved, and outputs the relative position of the estimated area with respect to the rectangular area D2B(i, j) as the motion vector V for the pixel of interest P(i, j) (sometimes also written V(i, j) to distinguish it from the motion vectors of other pixels).
The following-frame block extraction section 22 extracts from the video signal D1 input from the image delay section 4 rectangular areas D1B(i+k, j+l) of the same size as the rectangular area D2B(i, j) (Fig. 4(b)), each centred on a position (i+k, j+l) contained in the set of coordinates defined for each pixel of interest P(i, j) by
S(i, j) = {(i+k, j+l)} (1)
(where -SV ≤ k ≤ SV, -SH ≤ l ≤ SH, and SV and SH are set values). S(i, j) is also called the search range for the motion vector of the pixel of interest P(i, j). The search range defined above is a rectangular area measuring 2*SH+1 horizontally by 2*SV+1 vertically.
The motion vector determination section 23 computes, between the rectangular area D2B(i, j) input from the frame-of-interest block extraction section 21 and each block D1B(i+k, j+l) input from the following-frame block extraction section 22, the sum over all pixels in each block, i.e., over the (2*BM+1)*(2*BN+1) mutually corresponding pixel positions, of the absolute values of the pixel differences (the sum of absolute differences) SAD(i+k, j+l). This sum of absolute differences SAD(i+k, j+l) is given by the following equation (2).
[Numerical expression 1]
SAD(i+k, j+l) = Σ_{r=-BM}^{BM} Σ_{s=-BN}^{BN} |D1(i+k+r, j+l+s) - D2(i+r, j+s)| (2)
In this way, (2*SV+1)*(2*SH+1) sums of absolute differences SAD(i+k, j+l) are obtained for the (2*SV+1)*(2*SH+1) rectangular areas D1B(i+k, j+l); the rectangular area D1B(i+km, j+lm) yielding the minimum sum of absolute differences is identified, and its relative position (km, lm) with respect to the rectangular area D2B(i, j) is output to the image correction section 6 as the motion vector V = (Vy, Vx) = (km, lm).
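The exhaustive block-matching search of equations (1) and (2) can be sketched as follows. This is an illustrative reading, not the patent's circuit: the function name, NumPy usage, and 0-based indexing are my own assumptions, and the border handling described below (edge replication) is omitted, so the pixel of interest must lie far enough from the image edges.

```python
import numpy as np

def detect_motion_vector(d2, d1, i, j, bm=2, bn=2, sv=4, sh=4):
    """For the (2*BM+1) x (2*BN+1) block of frame D2 centred on pixel
    (i, j), find the offset (km, lm) into frame D1 that minimises the
    sum of absolute differences of equation (2) over the search range
    -SV..SV, -SH..SH, and return it as the motion vector V = (Vy, Vx)."""
    ref = d2[i - bm:i + bm + 1, j - bn:j + bn + 1].astype(np.int64)
    best = None
    for k in range(-sv, sv + 1):
        for l in range(-sh, sh + 1):
            cand = d1[i + k - bm:i + k + bm + 1,
                      j + l - bn:j + l + bn + 1].astype(np.int64)
            sad = np.abs(cand - ref).sum()  # SAD(i+k, j+l), equation (2)
            if best is None or sad < best[0]:
                best = (sad, (k, l))
    return best[1]  # (km, lm)
```

For example, a bright patch that moves down by 2 pixels and right by 3 pixels between D2 and D1 yields the vector (2, 3).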
In addition, since a frame is to be inserted between two consecutive frames, the motion vector V is accumulated in the memory 24 and delayed, and the memory controller 25 outputs it to the frame generation section 7 as the motion vector Vd.
The motion vector detection described above is performed for every pixel of the video signal D2 output from the image delay section 4, so that a motion vector is detected for each pixel; the motion vectors thus obtained are used both to alleviate motion blur and to interpolate the frame between the two consecutive frames.
During motion vector detection by the motion vector detection section 5, pixels outside the top, bottom, left, and right edges of the image can fall within the rectangular areas D2B(i, j) and D1B(i+k, j+l). When such pixel values are needed, the pixels outside the top, bottom, left, and right edges are treated as having the same values as the pixels at the corresponding edge. The same applies to the computations of the filtering section 34, the mean value calculation section 37, and the other sections described later.
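This edge-replication rule corresponds to NumPy's "edge" padding mode; a minimal sketch (the array contents are illustrative only):

```python
import numpy as np

# Pixels outside the border take the value of the nearest edge pixel.
img = np.array([[1, 2],
                [3, 4]])
padded = np.pad(img, pad_width=1, mode="edge")
# padded is now:
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```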
The motion vector detection method of the motion vector detection section 5 is not limited to the method described above. It is also possible to use, in addition to the video signal of the frame of interest and that of the temporally following frame, the video signal of the temporally preceding frame; or to use the video signals of the frame of interest and the temporally preceding frame without using the temporally following frame; or to apply a phase correlation function to the video signals of the frame of interest and the temporally following frame. Moreover, the time interval between the frame of interest and the frames before and after it is not limited to one frame period; it may be two or more frame periods.
Fig. 5 shows an example configuration of the image correction section 6. The illustrated image correction section 6 has a correction processing section 30, an operation signal processing section 31, a motion blur estimation section 32, a filter coefficient storage section 33, a filtering section 34, a mean value calculation section 37, a correction strength adjustment section 38, and a gain calculation section 39.
The correction processing section 30 receives the video signal D2, corrects each pixel using the gain GAIN described later, and outputs the corrected video signal E to the image delay section 4.
The operation signal processing section 31 parses the signal PR input by the user through an interface (not shown) and outputs the parameters obtained as the result.
The parameters output from the operation signal processing section 31 include an adjustment parameter ADJ, a correction strength parameter BST0, and thresholds TH1 and TH2.
The adjustment parameter ADJ is used to compute the amount of motion blur from the motion vector, and is supplied to the motion blur estimation section 32.
The threshold TH1 is used to adjust the characteristics of the filtering section 34, and is supplied to the filtering section 34.
The correction strength parameter BST0 determines the correction strength, and the threshold TH2 is used to discriminate features of the image, for example flatness; both are supplied to the correction strength adjustment section 38.
The motion blur estimation section 32 receives the motion vector V detected by the motion vector detection section 5 (vertical component Vy (= km), horizontal component Vx (= lm)) and computes its components in polar coordinates (magnitude and angle). Specifically, taking the rightward horizontal direction as 0 degrees, the direction A (degrees) and magnitude LM (pixels) of the motion vector are computed by the following equations.
[Numerical expression 2]
A = (Arctan(Vy/Vx)) * 180/π (3)
LM = √(Vy² + Vx²) (4)
The motion blur estimation section 32 also obtains the angle and size (blur amplitude in the direction of motion) of the motion blur corresponding to the motion vector. For example, the angle of the motion blur equals the angle A of the motion vector, and the size LB of the motion blur equals the magnitude LM of the motion vector multiplied by the adjustment parameter ADJ (0 < ADJ ≤ 1); the size LB of the motion blur can thus be obtained by the following equation (5).
LB = LM * ADJ (5)
As shown in Fig. 6, the adjustment parameter ADJ takes a value corresponding to the ratio (Ts/Tf) of the length Ts of the shooting period, for example the charge accumulation time, to the length Tf of the frame period. It may vary according to the actual shooting period Ts of each frame, or it may be fixed at a representative value, mean, or median of the shooting period under the conditions assumed by the present invention. For example, when the median is used, if the shooting period ranges from EXS to EXL times the frame period (EXS and EXL being less than 1), the median (EXS + EXL)/2 is adopted as ADJ.
The reason for multiplying by the adjustment parameter ADJ is that the motion vector V is detected between frames and thus reflects the amount of motion per frame period, whereas motion blur is caused by the motion of the subject during the shooting period.
The filter coefficient storage section 33 stores in advance, in table form, multiple sets of low-pass filter coefficients (two-dimensional FIR filter coefficients), each associated with a combination of motion blur direction and size. Each set of filter coefficients serves to reduce the motion blur component in a video signal containing motion blur of the corresponding direction and size.
To read out from the table the filter coefficients corresponding to the combination of the direction A and size LB of the motion blur computed as above, the motion blur estimation section 32 computes a pointer IND into the table from the direction A and size LB of the motion blur and inputs it to the filter coefficient storage section 33.
The filter coefficient storage section 33 reads out the filter coefficients CF(p, q) stored in association with the input pointer IND and outputs them to the filtering section 34.
Through this processing, the motion blur estimation section 32 selects, from the filter coefficients held in the filter coefficient storage section 33, the coefficients CF(p, q) corresponding to the combination of the estimated direction A and size LB of the motion blur.
The filtering section 34 performs filtering using the filter coefficients CF(p, q) selected by the motion blur estimation section 32. That is, the filtering section 34 has a nonlinear processing section 35 and a low-pass filter 36; using the filter coefficients CF(p, q) read out from the filter coefficient storage section 33 as above (where -P ≤ p ≤ P and -Q ≤ q ≤ Q), it filters the pixel values of the pixels in the peripheral region of each pixel of interest P(i, j) of the video signal D2 and outputs the filtering result FL1(i, j).
Nonlinear Processing portion 35 is according to the pixel value D2 (i of concerned pixel, pixel value D2 (the i-p of the pixel j) and in its neighboring area, j-q) difference and the threshold value TH1 being inputted by operation signal handling part 31, carry out the Nonlinear Processing of processing shown in following formula (6a)~(6f).
(A) When D2(i-p, j-q) - D2(i, j) > TH1, set
D2b(i-p,j-q)-D2(i,j)=TH1 (6a),
that is, determine D2b(i-p, j-q) by
D2b(i-p,j-q)=D2(i,j)+TH1 (6b);
(B) When D2(i-p, j-q) - D2(i, j) < -TH1, set
D2b(i-p,j-q)-D2(i,j)=-TH1 (6c),
that is, determine D2b(i-p, j-q) by
D2b(i-p,j-q)=D2(i,j)-TH1 (6d);
(C) In cases other than (A) and (B), set
D2b(i-p,j-q)-D2(i,j)=D2(i-p,j-q)-D2(i,j) (6e),
that is, determine D2b(i-p, j-q) by
D2b(i-p,j-q)=D2(i-p,j-q) (6f).
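Cases (A) to (C) above amount to clipping each peripheral pixel value so that its difference from the pixel of interest stays within ±TH1. A minimal sketch of this clipping rule (Python, with illustrative names not taken from the patent):

```python
def clip_neighbor(d2_center, d2_neighbor, th1):
    """D2b(i-p, j-q) per formulas (6a)-(6f): limit the neighbor's
    difference from the pixel of interest to the range [-TH1, TH1]."""
    diff = d2_neighbor - d2_center
    if diff > th1:            # case (A): formula (6b)
        return d2_center + th1
    if diff < -th1:           # case (B): formula (6d)
        return d2_center - th1
    return d2_neighbor        # case (C): formula (6f)
```

For example, with TH1 = 20, a neighbor value of 130 around a pixel of interest of 100 is replaced by 120.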
In the peripheral region of each pixel of interest P(i, j), i.e., the range consisting of (2P+1) × (2Q+1) pixels, the low-pass filter 36 multiplies each value D2b(i-p, j-q) obtained as a result of the above nonlinear processing by the corresponding filter coefficient CF(p, q), and obtains the sum of the products as the filtering result FL1(i, j).
The filter coefficients CF(p, q) used in the low-pass filter 36 are described below.
The filter coefficients are defined for the pixels in the region -P ≤ p ≤ P, -Q ≤ q ≤ Q centered on the pixel of interest.
As described above, the filter coefficients CF(p, q) are determined according to the angle A and size LB of the motion blur.
Figs. 7 to 9 show several examples of the region, within the region where filter coefficients are defined, in which the coefficients take values other than 0 for a given motion blur. The region in which the filter coefficients are other than 0 is hereinafter called the effective filter area EFA. The sum of the filter coefficients at the pixel positions in the effective filter area EFA is 1.
A band-shaped region corresponding to the size LB of the motion blur and its angle A is regarded as the effective filter area EFA. Each pixel wholly or partly contained in the effective filter area EFA is then given a weight coefficient corresponding to the proportion of the pixel contained in the effective filter area EFA. For example, a pixel only partly contained in the effective filter area EFA is given a smaller weight coefficient than a pixel wholly contained in it; the weight coefficient of each pixel takes a value proportional to the fraction of that pixel contained in the effective filter area EFA.
This band-shaped region extends in the direction of the motion blur; its length is a predetermined multiple of the blur size LB, for example 2 times, extending forward and backward by fixed amounts from the start and end of the blur, for example by a length of 0.5 times the blur size LB. The width of the band is equivalent to the size of one pixel. The examples shown in Figs. 7 to 9 assume that the pixel size is the same in the horizontal and vertical directions. In Figs. 7 to 9, the starting point of the motion blur is at the position of coordinates (i, j).
In the example shown in Fig. 7, the motion blur has a size LB of 4 pixels, directed horizontally to the right. In this case, the motion blur is regarded as extending from the center of its starting pixel Ps (the pixel at coordinates (i, j)) to the center of its end pixel Pe (coordinates (i, j+4)), with a length of 2 pixels (0.5 × 4 pixels) added before and after. That is, the effective range extends from the center of the pixel at coordinates (i, j-2), i.e., the position 2 pixels behind (to the left in Fig. 7) the center of the starting pixel Ps, to the center of the pixel at coordinates (i, j+6), i.e., the position 2 pixels ahead (to the right in Fig. 7) of the center of the end pixel Pe. These pixels are given weight coefficients corresponding to the proportions contained in the effective filter area EFA. That is, the pixels from coordinates (i, j-1) to (i, j+5) are given coefficients of the same value, and since the pixels at coordinates (i, j-2) and (i, j+6) are each only half contained in the effective filter area EFA, they are given coefficients of 1/2 the value of the other pixels (the pixels from coordinates (i, j-1) to (i, j+5)).
In the example of Fig. 7, the number of pixels only half contained in the effective filter area EFA is 2 and the number of pixels wholly contained is 6, so a weight coefficient of 1/7 is given to each wholly contained pixel and a weight coefficient of 1/14 to each half-contained pixel.
In the example shown in Fig. 8, the motion blur has a size LB of 3 pixels, directed horizontally to the right. In this case, the motion blur is regarded as extending from the center of its starting pixel Ps (the pixel at coordinates (i, j)) to the center of its end pixel Pe (coordinates (i, j+3)), with a length of 1.5 pixels (0.5 × 3 pixels) added before and after. That is, the effective range extends from the left end of the pixel at coordinates (i, j-1), i.e., the position 1.5 pixels behind (to the left in Fig. 8) the center of the starting pixel Ps, to the right end of the pixel at coordinates (i, j+4), i.e., the position 1.5 pixels ahead (to the right in Fig. 8) of the center of the end pixel Pe. In the example of Fig. 8, therefore, there are no pixels only partly contained in the effective filter area EFA; the number of pixels wholly contained in the effective filter area EFA is 6, so a coefficient of 1/6 is given to each of these pixels.
In the example shown in Fig. 9, the size LB of the motion blur is 3 pixels, as in the case of Fig. 8, and the length and width of the effective filter area EFA are also the same as in Fig. 8, but the angle of the motion blur is 30 degrees; as a result, more pixels are only partly contained in the effective filter area EFA. Specifically, the pixels at coordinates (i-3, j+4), (i-2, j+2), (i-2, j+3), (i-2, j+4), (i-1, j), (i-1, j+1), (i-1, j+2), (i-1, j+3), (i, j-1), (i, j), (i, j+1), (i, j+2), (i+1, j-1), and (i+1, j) are each partly contained in the effective filter area EFA. These 14 pixels are therefore given weight coefficients according to their proportions contained in the effective filter area EFA.
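The coverage-proportional weighting described above can be sketched for the horizontal cases of Figs. 7 and 8 as follows (an illustrative sketch, assuming the band runs from 0.5 × LB before the start-pixel center to 0.5 × LB beyond the end-pixel center; for Fig. 8's LB = 3 this reproduces the 1/6 coefficients, while the exact pixel counts in the Fig. 7 case depend on how the band ends are defined):

```python
import math

def band_weights_horizontal(lb, ext_ratio=0.5):
    """Coverage-proportional weights for a horizontal blur of size lb
    (pixels), with the band extended by ext_ratio*lb beyond the start
    and end pixel centers (horizontal case of Figs. 7 and 8 only)."""
    ext = ext_ratio * lb
    start, end = -ext, lb + ext        # band limits in pixel-center coordinates
    weights = {}
    for jj in range(math.floor(start) - 1, math.ceil(end) + 2):
        # pixel jj occupies the 1-pixel interval [jj - 0.5, jj + 0.5]
        cover = min(jj + 0.5, end) - max(jj - 0.5, start)
        if cover > 1e-12:
            weights[jj] = cover
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}  # coefficients sum to 1
```

Half-covered pixels automatically receive half the weight of fully covered ones, as required by the text.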
Weight coefficients for each pixel are obtained in the same way for other values of the blur size LB and angle A. However, weight coefficients are not computed for every value that LB and A can take; instead, they are computed for representative values LR and AR of predetermined ranges of the size LB and angle A, respectively, and stored in the filter coefficient storage unit 33 as filter coefficients. For values of LB and A within each range, the filter coefficients computed and stored for the representative values LR and AR are used. The representative values LR and AR (or values corresponding to them) are used to generate the pointer IND described later. These matters are described further below.
In the above example, the effective filter area EFA has a length extended forward and backward from the start and end of the motion blur by 0.5 times the blur size LB, but the extension amount may instead be a predetermined value independent of the blur size LB; for example, the extension amount may be 0.5 pixel. The extension amount may also be zero.
In addition, although the pixels contained in the effective filter area EFA are weighted according to their proportions contained in the area, the filter above is a moving-average filter that applies no weighting according to distance from the pixel of interest; it may instead be configured to apply weighting according to distance from the pixel of interest. A Gaussian filter can be cited as an example of such a filter.
As described above, the low-pass filter 36 multiplies each value D2b(i-p, j-q), obtained by applying the nonlinear processing to the pixels in the peripheral region of each pixel of interest P(i, j), by the corresponding filter coefficient CF(p, q) read out from the filter coefficient storage unit 33, and obtains the sum of the products as the filtering result FL1(i, j). This filtering can be expressed by the following formula.
[Numerical Expression 3]
FL1(i, j) = \sum_{q=-Q}^{Q} \sum_{p=-P}^{P} CF(p, q)\, D2b(i-p, j-q)   (7)
The filtering result FL1(i, j) of formula (7) is output to the gain calculation unit 39.
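Formula (7), with the nonlinear processing of formulas (6a) to (6f) folded in, can be sketched as follows (illustrative Python; d2 is a 2-D array of pixel values and cf maps (p, q) to CF(p, q)):

```python
def filter_fl1(d2, i, j, cf, P, Q, th1):
    """FL1(i, j) per formula (7): sum of CF(p, q) * D2b(i-p, j-q),
    where D2b is the clipped value of formulas (6a)-(6f)."""
    total = 0.0
    for q in range(-Q, Q + 1):
        for p in range(-P, P + 1):
            diff = d2[i - p][j - q] - d2[i][j]
            clipped = d2[i][j] + max(-th1, min(th1, diff))  # D2b(i-p, j-q)
            total += cf[(p, q)] * clipped
    return total
```

With coefficients that sum to 1, a flat region is left unchanged, while a single bright outlier contributes at most D2(i, j) + TH1.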
The mean value calculation unit 37 outputs the mean value FL2(i, j) of the pixel values of the pixels in the peripheral region of each pixel of interest P(i, j) of the video signal D2.
Here, the peripheral region is, for example, the range consisting of (2P+1) × (2Q+1) pixels; the mean value calculation unit 37 calculates the mean value FL2(i, j) of the pixel values D2(i-p, j-q) within this range by the following formula (8), and outputs it to the correction strength adjustment unit 38.
[Numerical Expression 4]
FL2(i, j) = \frac{1}{(2P+1)(2Q+1)} \sum_{q=-Q}^{Q} \sum_{p=-P}^{P} D2(i-p, j-q)   (8)
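Read as a simple average of the peripheral region, formula (8) can be sketched as:

```python
def mean_fl2(d2, i, j, P, Q):
    """FL2(i, j): simple average of the (2P+1)*(2Q+1) peripheral pixel
    values D2(i-p, j-q) around the pixel of interest."""
    n = (2 * P + 1) * (2 * Q + 1)
    return sum(d2[i - p][j - q]
               for q in range(-Q, Q + 1)
               for p in range(-P, P + 1)) / n
```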
The correction strength adjustment unit 38 adjusts the correction strength for the pixel of interest according to the degree and/or magnitude of variation of the pixel values near the pixel of interest, for example according to the difference between the pixel value D2(i, j) of the pixel of interest and the mean value FL2(i, j) of the pixel values of the pixels in its peripheral region. The correction strength is adjusted by adjusting the correction strength parameter BST1(i, j) as described below. Specifically, the correction strength adjustment unit 38 outputs an adjusted correction strength parameter BST1 based on the correction strength parameter BST0 input from the operation signal processing unit 31: when the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay unit 4 and the mean value FL2(i, j) from the mean value calculation unit 37 is smaller than the threshold TH2 input from the operation signal processing unit 31, it generates an adjusted correction strength parameter BST1(i, j) smaller than the correction strength parameter BST0 input from the operation signal processing unit 31 and outputs it to the gain calculation unit 39. As the adjusted correction strength parameter BST1(i, j), for example, the value given by BST0 × β (β < 1) can be used. The user may also be allowed to decide to what degree the adjusted parameter BST1(i, j) is smaller than BST0 (i.e., the value of β); for example, β = 1/2 or β = 0.
When the absolute value of the difference between the pixel value D2(i, j) and the mean value FL2(i, j) is not smaller than the threshold TH2, the correction strength parameter BST0 is output unchanged as the adjusted correction strength parameter BST1(i, j). The relation between (D2(i, j) - FL2(i, j)) and the adjusted correction strength parameter BST1 is therefore as shown in Fig. 10.
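The adjustment rule can be sketched as follows (β is the user-selectable reduction factor mentioned above; names are illustrative):

```python
def adjust_strength(bst0, d2_ij, fl2_ij, th2, beta=0.5):
    """BST1(i, j): weaken the correction in flat areas, where the pixel
    of interest differs little from its neighborhood mean FL2(i, j)."""
    if abs(d2_ij - fl2_ij) < th2:
        return bst0 * beta    # flat area: BST0 * beta (beta < 1, may be 0)
    return bst0               # otherwise BST0 passes through unchanged
```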
The gain calculation unit 39 refers to the filtering result FL1(i, j) obtained from the filtering unit 34, the adjusted correction strength parameter BST1(i, j) output from the correction strength adjustment unit 38, and the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay unit 4, and calculates the gain GAIN(i, j), i.e., the multiplication coefficient used in the correction processing, according to the following formula.
GAIN(i,j)=1+BST1(i,j)-BST1(i,j)*FL1(i,j)/D2(i,j) (9)
Here, when D2(i, j) = 0, the calculation is performed with D2(i, j) = 1 for convenience. In addition, when the result of formula (9) is GAIN < 0, GAIN(i, j) = 0 is set. The gain GAIN(i, j) thus obtained is output to the correction processing unit 30.
The correction processing unit 30 obtains, by the calculation of the following formula, the pixel value E(i, j) from the pixel value D2(i, j) of the pixel of interest of the video signal D2 input from the image delay unit 4, and outputs it to the image delay unit 4 as the pixel value of pixel P(i, j) of the corrected video signal.
E(i,j)=GAIN(i,j)*D2(i,j) (10)
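Formulas (9) and (10), together with the guard conditions for D2(i, j) = 0 and GAIN < 0 described above, can be sketched as:

```python
def correct_pixel(d2_ij, fl1_ij, bst1_ij):
    """E(i, j) = GAIN(i, j) * D2(i, j), with GAIN from formula (9)."""
    d2 = d2_ij if d2_ij != 0 else 1            # D2 = 0 is treated as 1
    gain = 1.0 + bst1_ij - bst1_ij * fl1_ij / d2
    gain = max(gain, 0.0)                      # negative gains clipped to 0
    return gain * d2_ij                        # formula (10)
```

For D2 = 100, FL1 = 80, BST1 = 0.5 this returns 110, matching the additive form E = D2 + BST1 × (D2 - FL1) obtained by expanding the gain.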
In the present invention, the image delay unit 4, motion vector detection unit 5, and image correction unit 6 process only the luminance signal (Y), so that motion blur, which degrades video because of the motion of the subject and/or the motion of the camera, can be corrected. However, instead of processing the luminance signal (Y), the red signal (R), green signal (G), and blue signal (B) may be processed separately. The gain GAIN(i, j) of formula (9) may also be obtained from a signal representing the sum of R, G, and B, and formula (10) of the image correction unit 6 applied to R, G, and B separately. The luminance signal (Y) and the color-difference signals (Cb, Cr) may also be processed separately; the gain GAIN(i, j) may be obtained from the luminance signal (Y) and applied, by the calculation of formula (10), to both the luminance signal (Y) and the color-difference signals (Cb, Cr). The same processing can be performed with other color representations.
In the first embodiment, a technique using a low-pass filter has been described, but previously devised solutions to the image restoration problem may also be used.
Next, for the two consecutive corrected images E1 and E2 from the image delay unit 4, the motion vector Vd corresponding to the corrected image E2 is input from the motion vector detection unit 5 to the frame generation unit 7, and a frame is interpolated between E1 and E2. The first embodiment describes frame generation processing that doubles the frame rate of the video signal input to the image delay unit 1, but frames may also be generated, based on the same idea, for frame rates of three times or more, or for the generation of frames at shifted phases.
E2 is called the corrected image of the frame of interest and E1 the corrected image of the following frame; when a frame is interpolated between them, the motion vector Vd of the frame of interest is referred to. The motion vector of the interpolated frame (hereinafter called interpolated frame H) can be obtained by referring to the motion vector Vd of the frame of interest. Specifically, when the frame rate is doubled, the destination position in the corrected image of the interpolated frame is obtained using 1/2 of the motion vector of the frame of interest, and 1/2 of the motion vector of the frame of interest is taken as the motion vector at that position. Once the motion vector of interpolated frame H is obtained, the correspondence between the corrected image E2 of the frame of interest and the corrected image E1 of the following frame can be obtained, so that interpolated frame H can be generated. The generation of the interpolated frame is described in detail below.
The operation of each component of the image processing apparatus 2 is described in further detail below.
The video signal D0 input to the image processing apparatus 2 is input to the image delay unit 4.
Figs. 11(a) to 11(j) show the signal timing of each part of the image processing apparatus 2. As shown in Fig. 11(b), the video signal D0 of frames F0, F1, F2, F3, F4 is input in succession in synchronization with the input vertical synchronization signal SYI shown in Fig. 11(a).
The frame memory control unit 12 generates frame memory write addresses according to the input vertical synchronization signal SYI, causing the frame memory 11 to store the input video signal, and, in synchronization with the output vertical synchronization signal SYO shown in Fig. 11(c) (drawn as a signal having no delay with respect to the input vertical synchronization signal SYI), outputs the video signal D1 (the video signal of frames F0, F1, F2, F3, F4), which has no frame delay with respect to the input video signal D0, as shown in Fig. 11(d).
The frame memory control unit 12 also generates frame memory read addresses according to the output vertical synchronization signal SYO, so that the video signal D2, delayed by one frame, is read out from the frame memory 11 and output (Fig. 11(e)).
As a result, the video signals D1 and D2 of two consecutive frames are output simultaneously from the image delay unit 4. That is, at the timing (frame period) at which the video signal of frame F1 is input as video signal D0, the video signals of frames F1 and F0 are output as video signals D1 and D2; at the timing (frame period) at which the video signal of frame F2 is input as video signal D0, the video signals of frames F2 and F1 are output as video signals D1 and D2.
The video signals D1 and D2 of the two consecutive frames output from the image delay unit 4 are supplied to the motion vector detection unit 5, and the video signal D2 is also supplied to the image correction unit 6.
The motion vector detection unit 5 generates the motion vector V from the video signals D1 and D2. This motion vector V is the motion vector from the video signal D2 of each frame to the video signal D1 of the next frame; the motion vector from frame F0 to frame F1 (denoted "F0 → F1" in Fig. 11(f)) is therefore output at the timing at which the video signals D2 and D1 of frames F0 and F1 are input to the motion vector detection unit 5.
The video signal E, generated by multiplying the video signal D2 by the gain GAIN, is output in the same frame period as the video signal D2 (Fig. 11(h)).
The video signal E1 is output in the same frame period as the video signal E (Fig. 11(i)), and the video signal E2 is output one frame period later (Fig. 11(j)).
The motion vector V is delayed by one frame period, so that the motion vector Vd from frame F0 to frame F1 (denoted "F0 → F1" in Fig. 11(g)) is output at the timing at which the video signal E2 of frame F0 and the video signal E1 of frame F1 are output.
The motion vector detection unit 5 detects motion vectors using the sum of absolute differences (SAD) often used in video coding. Since the object of the present invention is to alleviate the motion blur of pixels in which motion blur occurs, the sum of absolute differences SAD is calculated for each pixel, and the motion vector is obtained from its minimum value.
However, if the computation of the sum of absolute differences SAD were performed for all pixels, the amount of computation would become enormous. Processing may therefore be performed, as in video coding, in such a way that the adjacent blocks used for motion vector detection do not overlap each other, and for pixels for which no motion vector is detected, interpolation is performed using the motion vectors detected in their surroundings.
In the above description, the block used by the motion vector detection unit 5 is a rectangular region centered on the pixel of interest P(i, j) and of equal size above and below and to the left and right, with its height and width expressed by the odd numbers (2*BM+1) and (2*BN+1), respectively. However, the height and width of the rectangular region need not be odd, and the position of the pixel of interest in the rectangular region need not be exactly the center; it may be slightly off-center.
In addition, as shown in formula (1), the search range is defined as -SV ≤ k ≤ SV, -SH ≤ l ≤ SH, and the sum of absolute differences SAD is calculated for all k and l in this range. However, for the purpose of reducing the amount of computation, the sum of absolute differences SAD may be calculated only for appropriately thinned-out values of k and l. In this case, for the thinned-out (removed) positions (i+k, j+l), interpolation can be performed from the sums of absolute differences SAD(i+k, j+l) at surrounding positions. The thinned-out sums of absolute differences SAD may also be used as they are, if the accuracy of the motion vectors presents no problem.
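A brute-force version of the SAD search, without the block tiling or thinning described above, can be sketched as (illustrative names, not from the patent):

```python
def best_vector(prev, nxt, i, j, BM, BN, SV, SH):
    """Find (k, l) in -SV..SV, -SH..SH minimizing the sum of absolute
    differences between the (2*BM+1)x(2*BN+1) block around (i, j) in
    the previous frame and the displaced block in the next frame."""
    best_sad, best_kl = None, None
    for k in range(-SV, SV + 1):
        for l in range(-SH, SH + 1):
            s = sum(abs(prev[i + m][j + n] - nxt[i + k + m][j + l + n])
                    for m in range(-BM, BM + 1)
                    for n in range(-BN, BN + 1))
            if best_sad is None or s < best_sad:
                best_sad, best_kl = s, (k, l)
    return best_kl, best_sad
```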
The motion vector V input to the image correction unit 6 is first input to the motion blur estimation unit 32. As shown in Fig. 12, the motion vector V input to the motion blur estimation unit 32 can be represented by its vertical component Vy(i, j) and horizontal component Vx(i, j); the direction A (degrees) of the motion vector is therefore calculated by formula (3), and the size LM (pixels) of the motion vector by formula (4).
Consider the case in which the camera is stationary and an object in uniform linear motion is shot. Figs. 13(a) and 13(b) show an example of the motion of an image element represented by the video signals of three consecutive frames shot in this case. In the illustrated example, between the first and second frames (Fig. 13(a)) and between the second and third frames (Fig. 13(b)), the image element ES moves 4 pixels in the horizontal direction and does not move in the vertical direction (Vx = 4, Vy = 0). Therefore, as shown by the arrows in Figs. 13(a) and 13(b), the motion vector between the first and second frames and between the second and third frames is detected as 4 pixels in the horizontal direction and 0 pixels in the vertical direction.
If the shooting period Ts of the images shown in Figs. 13(a) and 13(b) were equal to one frame period Tf, the size LB of the motion blur would also be 4 pixels in the horizontal direction and 0 pixels in the vertical direction.
In practice, however, the shooting period Ts is shorter than one frame period Tf, as shown in Fig. 6; therefore, as shown in Figs. 14(a) and 14(b), the size LB of the motion blur is smaller than the size LM of the motion vector, their ratio corresponding to the ratio (Ts/Tf) of the length of the shooting period Ts to one frame period Tf.
Under these circumstances, the value obtained by multiplying the size LM of the motion vector by the adjustment parameter ADJ, which is less than 1, is estimated as the size LB of the motion blur. As described above, the adjustment parameter ADJ may be determined according to the actual shooting period Ts of each frame, determined empirically, or set by the user.
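Assuming that formulas (3) and (4), which are not reproduced in this passage, are the usual arctangent and Euclidean-magnitude definitions, the estimation of blur direction and size can be sketched as:

```python
import math

def estimate_blur(vx, vy, adj):
    """Direction A (degrees) and size LM (pixels) of the motion vector,
    and blur size LB = LM * ADJ, with ADJ < 1 (roughly Ts/Tf)."""
    a = math.degrees(math.atan2(vy, vx))   # assumed form of formula (3)
    lm = math.hypot(vx, vy)                # assumed form of formula (4)
    return a, lm, lm * adj                 # LB per formula (5)
```

For the Fig. 13 example (Vx = 4, Vy = 0) with ADJ = 0.5, this yields A = 0 degrees and LB = 2 pixels.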
The method of computing the pointer IND for reading filter coefficients out of the table of the filter coefficient storage unit 33 is described below.
For example, the filter coefficients stored in the filter coefficient storage unit 33 are defined for the angles from 0 degrees to 165 degrees at intervals of 15 degrees as representative angle values (in degrees), and for the odd numbers from 1 to 21 as representative size values.
The value LB obtained by formula (5) is rounded to the nearest integer; if the rounded result is even, 1 is added to make it odd (LB = LB + 1); if the result of the above processing is greater than 21, it is clipped to 21; and the result of this processing is output as the representative value LR of the size of the motion blur. If the value of the blur size LB lies within the predetermined range containing the representative value LR, the above processing converts the blur size LB to the representative value LR.
As for the angle A, if the value obtained by formula (3) is less than 0, 180 degrees is added (A = A + 180); the value is then rounded to the nearest multiple of 15 degrees by computing A2 = (A + 7.5)/15 and discarding the fractional part; if the result is 12 or more (A2 ≥ 12), A2 = 0 is set. The result of this processing is output as the value AR2 corresponding to the representative value AR of the angle of the motion blur. AR and AR2 are related as follows.
AR=15×AR2
If the value of the blur angle A lies within the predetermined range containing the representative value AR, the above processing converts the blur angle A to the value AR2 corresponding to the representative value AR.
Using the representative size value LR and the value AR2 corresponding to the representative angle value AR, the pointer IND for reading from the table can be computed by the following formula.
IND=12*((LR-1)/2-1)+AR2 (11)
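The quantization of LB and A and the pointer computation of formula (11) can be sketched as follows (illustrative; note that for LR = 1 the formula would give a negative pointer, and the text treats LR = 1 as a special identity-filter case):

```python
def pointer_ind(lb, a_deg):
    """Table pointer of formula (11) from blur size LB and angle A.
    LR: LB rounded half-up, forced odd, clipped to 21.
    AR2: A snapped to the nearest multiple of 15 degrees (0..11)."""
    lr = int(lb + 0.5)          # round half up (Python's round() uses
    if lr % 2 == 0:             # banker's rounding, so avoid it here)
        lr += 1                 # force odd
    lr = min(lr, 21)            # clip to the largest representative, 21
    if a_deg < 0:
        a_deg += 180.0
    ar2 = int((a_deg + 7.5) / 15.0)
    if ar2 >= 12:
        ar2 = 0
    return 12 * ((lr - 1) // 2 - 1) + ar2   # formula (11)
```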
Fig. 15 shows a concrete example of the table of pointers IND obtained from AR2 and LR based on formula (11). Although not shown in Fig. 15, the filter coefficients CF(p, q) for the case LR = 1 are, for example, CF(i, j) = 1 when i = 0 and j = 0, and CF(i, j) = 0 otherwise.
When the pointer IND is input from the motion blur estimation unit 32, the filter coefficient storage unit 33 supplies the filter coefficients CF(p, q) corresponding to the input pointer IND to the low-pass filter 36. The filter coefficients held in the filter coefficient storage unit 33 can be freely designed by the user. As long as the coefficients realize a low-pass filter, the design is comparatively easy, which is also a feature of the present invention.
Next, the filtering unit 34, which includes the low-pass filter 36, is described in detail. The object of the present invention is to appropriately alleviate motion blur in regions where it is produced by the motion of the subject and the motion of the camera, and the technique using the low-pass filter shown in the following formula is taken as the basis.
E(i,j)=D2(i,j)+BST1(i,j)*(D2(i,j)-FL1(i,j)) (12)
Transforming formula (12) yields formulas (9) and (10). If processing is performed according to the idea of formula (12), the calculation of formula (9) can be performed with, for example, the green signal (G) to obtain the gain GAIN(i, j), and in the correction processing unit 30 the same gain GAIN(i, j) can be used in the calculation of formula (10) for the multiple color signals of the same pixel, which has the advantage of reducing the amount of computation. However, the technique using formula (12) has the following drawback, so processing should be performed as follows.
In the technique of formula (12), the video signal D2 input to the image correction unit 6 is low-pass filtered using the filter coefficients CF(p, q) output from the filter coefficient storage unit 33, and the filtering result FL1(i, j) is output to the gain calculation unit 39. However, motion blur correction processing based on a low-pass filter according to formula (12) has the drawback of easily producing overshoot at strong edges of the corrected image.
The nonlinear processing unit 35 is therefore inserted before the low-pass filter 36 to perform nonlinear processing that suppresses overshoot at strong edges. For example, the overshoot is suppressed by nonlinear processing using the threshold TH1 input from the operation signal processing unit 31. Specifically, as shown in Fig. 16, the difference DIF(i-p, j-q) = D2(i, j) - D2(i-p, j-q) between the pixel value D2(i, j) of the pixel of interest and the pixel value D2(i-p, j-q) of each pixel in its peripheral region is clipped by the threshold TH1. That is, the filtering unit 34 clips each pixel value D2(i-p, j-q) of the pixels in the peripheral region so that the absolute value of the difference DIF(i-p, j-q) between the pixel value D2(i, j) of the pixel of interest and each pixel value D2(i-p, j-q) of the pixels in its peripheral region does not exceed the predetermined threshold TH1, and performs the low-pass filtering of the pixels in the peripheral region using the clipped pixel values. In this way, the gain is appropriately controlled at edge portions of the image, where the difference DIF(i-p, j-q) is large and the gain GAIN(i, j) calculated by the gain calculation unit 39 would become large if not suppressed.
The processing of the correction strength adjustment unit 38 is described in detail below.
To prevent the noise amplification effect from reducing the quality of the motion-blur-corrected image after the motion blur processing, the correction strength adjustment unit 38 reduces the correction strength parameter BST0 input from the operation signal processing unit 31, or sets it to zero, according to a feature of the image, for example its flatness, and outputs the result to the gain calculation unit 39 as the adjusted correction strength parameter BST1.
Specifically, the video signal D2 is input, the variation of the pixel values (for example, luminance values) of the pixels in the peripheral region of the pixel of interest is detected, and the value of the adjusted correction strength parameter BST1 is determined according to the magnitude of this variation. As an index representing the pixel value variation, the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest and the mean value FL2(i, j) output from the mean value calculation unit 37 is used. If, for example, this absolute value is smaller than the threshold TH2 input from the operation signal processing unit 31, the variation of the pixel values in the peripheral region of the pixel of interest is judged to be small, and the adjusted correction strength parameter BST1 is set, for example, to 1/2 of the correction strength parameter BST0 before adjustment; if the absolute value is greater than the threshold TH2, the pixel value variation is judged to be large, and the correction strength parameter BST0 before adjustment is used unchanged as the adjusted parameter BST1. The adjusted correction strength parameter BST1 thus determined is output to the gain calculation unit 39.
The meaning of the above processing is as follows.
Processing that alleviates the motion blur produced in a region by the motion of the subject and the motion of the camera inevitably amplifies the noise of the video signal. In particular, even if motion blur occurs in a flat region where the variation of pixel values, for example luminance, is small, its visual effect is small, and weak correction processing suffices. If the correction strength parameter value BST0 were used unchanged in such a region, noise would be greatly amplified and the quality of the motion blur correction result would be reduced. Flat regions are therefore detected, and adaptive processing that uses a smaller value in place of the correction strength parameter BST0 is performed in those regions. To decide whether a region is flat, the difference between the pixel value D2(i, j) of the pixel of interest and the mean value FL2 of the pixel values of the pixels in its peripheral region is taken, as described above, and compared in magnitude with a threshold.
For this reason, the simple average of the pixel values of all pixels in the region -P ≤ p ≤ P, -Q ≤ q ≤ Q, calculated by the mean value calculation unit 37 as described above, is used.
Gain calculating part 39 uses the output FL1 (i of filtering portion 34, correction intensity B parameter ST1 (i the adjustment of j), exporting from correction intensity adjustment part 38, j), the pixel value D2 (i of the concerned pixel of vision signal D2, j) according to above formula (9) calculated gains GAIN (i, j), the gain G AIN (i, j) calculating is offered and proofreaies and correct handling part 30.
In the computation of formula (9), since division by the pixel value D2(i, j) of the pixel of interest is required, the calculation is performed with D2(i, j) = 1 when D2(i, j) = 0. In addition, when GAIN(i, j) < 0, the gain is clipped to GAIN(i, j) = 0. The gain GAIN(i, j) thus obtained is output to the correction processing unit 30.
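As a minimal sketch of the two guards just described — formula (9) itself is not reproduced in this excerpt, so the raw gain computation is passed in as a hypothetical callable rather than implemented:

```python
def compute_gain(FL1, BST1, D2, raw_gain):
    """Guards around the gain computation of unit 39 (sketch).

    FL1      : output of the filtering unit 34 at (i, j)
    BST1     : adjusted correction strength at (i, j)
    D2       : pixel value of the pixel of interest
    raw_gain : assumed callable implementing formula (9) as
               raw_gain(FL1, BST1, D2)
    """
    d = 1.0 if D2 == 0 else D2      # compute with D2(i,j) = 1 when D2(i,j) = 0
    g = raw_gain(FL1, BST1, d)
    return max(g, 0.0)              # clip negative gains to GAIN(i,j) = 0
```

Only the zero-divisor substitution and the non-negativity clip are taken from the description; everything inside `raw_gain` is left open.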
In the correction processing unit 30, the supplied gain GAIN(i, j) is multiplied by the pixel value D2(i, j), thereby performing the motion blur correction. The multiplication result is output as the motion-blur-corrected pixel value E(i, j) and supplied to the image delay unit 4.
Next, the processing of the frame generation unit 7 is described in detail. Since a frame is interpolated between the corrected image E2 of the frame of interest and the corrected image E1 of the following frame, these video signals are input to the frame generation unit 7 from the image delay unit 4, and the motion vector Vd of the frame of interest is input to the frame generation unit 7 from the motion vector detection unit 5. If the motion vector Vd at position (i, j) is expressed by its vertical component Vdx(i, j) and horizontal component Vdy(i, j), the motion vector at position (i, j) of the interpolated frame H can be obtained as follows.
Vhx(i+si,j+sj)=Vdx(i,j)/2 (13a)
Vhy(i+si,j+sj)=Vdy(i,j)/2 (13b)
where si = round[Vdx(i, j)/2], sj = round[Vdy(i, j)/2], and round[*] denotes rounding * to the nearest integer.
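Formulas (13a) and (13b) can be sketched as below; the function name is an assumption, and rounding half away from zero for negative values is an assumption about round[*]:

```python
def interpolated_frame_vector(Vdx, Vdy, i, j):
    """Formulas (13a)/(13b): place the halved motion vector of the frame of
    interest at the rounded half-displacement position (sketch)."""
    def rnd(x):
        # round[*]: nearest integer, half away from zero (assumed for x < 0)
        return int(x + 0.5) if x >= 0 else -int(-x + 0.5)
    si, sj = rnd(Vdx / 2.0), rnd(Vdy / 2.0)
    Vhx = Vdx / 2.0                  # Vhx(i+si, j+sj) = Vdx(i, j)/2
    Vhy = Vdy / 2.0                  # Vhy(i+si, j+sj) = Vdy(i, j)/2
    return (i + si, j + sj), (Vhx, Vhy)
```

The returned position (i+si, j+sj) is where the halved vector is stored in the interpolated frame.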
That is, since a frame is generated midway between the corrected image E2 of the frame of interest and the corrected image E1 of the following frame, the motion vector Vd of the frame of interest is divided by 2, and the halved vector is stored at the position (i+si, j+sj) calculated by rounding the value of Vd divided by 2. In addition, since the sum of absolute differences is used in the subsequent processing, the sum of absolute differences SADh of the interpolated frame is also calculated.
SADh(i+si,j+sj)=mv_sad(i,j) (13c)
(si and sj are the same as those explained for formulas (13a) and (13b) above.)
In formula (13c), when (si, sj) specifies a position outside the range defined for the video, no processing is performed. Note that the motion vectors of the interpolated frame calculated by formulas (13a) and (13b) cannot be obtained at all positions (i, j). Therefore, correction and/or interpolation processing of the motion vector values (hereinafter simply referred to as correction processing) is required. Various algorithms have been proposed for the correction processing of motion vectors; a representative process is described here.
Suppose the correction of the motion vectors consists of the following two processes: a process of searching, for every pixel of the interpolated frame HF, for the minimum value of SAD within a 3 × 3 range, and a process of newly setting a motion vector when no motion vector exists within the 3 × 3 range.
The minimum value of SAD and its position are searched within the 3 × 3 range centered on position (i, j), and the motion vector (Vhx(ci, cj), Vhy(ci, cj)) at the position (ci, cj) judged by the search to be the minimum is taken as the corrected motion vector at position (i, j).
Vcx(i,j)=Vhx(ci,cj)
Vcy(i,j)=Vhy(ci,cj) (14)
Here, (ci, cj) can be expressed by the following formulas.
[numerical expression 5]
(cii, cjj) = argmin_{(cii, cjj)} { SADh(i + cii, j + cjj) : cii = -1, ..., 1, cjj = -1, ..., 1 }  (15a)
ci=i+cii、cj=j+cjj (15b)
In addition, if no motion vector exists within the 3 × 3 range, Vcx(i, j) = Vcy(i, j) = 0 is set as the corrected motion vector.
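The 3 × 3 SAD-minimum search of formulas (14), (15a) and (15b) can be sketched as follows; the function name is an assumption, and clipping the window at the frame edges is an assumed border treatment not stated in the description:

```python
import numpy as np

def correct_vector(SADh, Vhx, Vhy, i, j):
    """3x3 SAD-minimum search (sketch of formulas (14)/(15)).

    Returns the motion vector at the position (ci, cj) with minimal SADh
    in the 3x3 window centered on (i, j).
    """
    h, w = SADh.shape
    best, ci, cj = None, i, j
    for cii in (-1, 0, 1):           # cii = -1, ..., 1
        for cjj in (-1, 0, 1):       # cjj = -1, ..., 1
            y, x = i + cii, j + cjj
            if 0 <= y < h and 0 <= x < w:   # assumed: window clipped at edges
                if best is None or SADh[y, x] < best:
                    best, ci, cj = SADh[y, x], y, x
    return Vhx[ci, cj], Vhy[ci, cj]  # Vcx(i,j), Vcy(i,j)
```

The "no motion vector in the range" fallback (Vcx = Vcy = 0) would be handled by the caller and is omitted here.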
The motion vectors Vcx(i, j), Vcy(i, j) of the interpolated frame are thus obtained; these values are then used, with reference to the values of the corrected image E2 of the frame of interest and the corrected image E1 of the following frame, to obtain the interpolated frame HF. If the positions in the corrected image E2 of the frame of interest and the corrected image E1 of the following frame corresponding to the pixel of interest (i, j) of the interpolated frame are (bi, bj) and (ai, aj) respectively, the pixel value HF(i, j) of each pixel of the interpolated frame HF can be obtained by the following formula.
HF(i,j)={E2(bi,bj)+E1(ai,aj)}/2 (16)
where bi = i - round[Vcx(i, j)]
bj = j - round[Vcy(i, j)]
ai = i + fix[Vcx(i, j)]
aj = j + fix[Vcy(i, j)]
and fix[*] denotes truncation of * toward 0.
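A sketch of formula (16) with the round[*] and fix[*] position calculations; the function name is an assumption, positions are assumed to stay inside the frames, and rounding half away from zero for negative values is an assumption about round[*]:

```python
import math

def interpolate_pixel(E2, E1, Vcx, Vcy, i, j):
    """Formula (16): average of the motion-compensated pixels of the corrected
    frame of interest E2 and the corrected following frame E1 (sketch)."""
    def rnd(x):
        # round[*]: nearest integer, half away from zero (assumed for x < 0)
        return int(x + 0.5) if x >= 0 else -int(-x + 0.5)
    bi, bj = i - rnd(Vcx), j - rnd(Vcy)                  # position in E2
    ai, aj = i + math.trunc(Vcx), j + math.trunc(Vcy)    # fix[*]: truncate toward 0
    return (E2[bi][bj] + E1[ai][aj]) / 2.0               # HF(i, j)
```

Each interpolated pixel is simply the mean of the two motion-compensated references, consistent with the halved motion vectors used above.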
The 2nd execution mode
Figure 17 shows the image correction unit 6 used in the 2nd embodiment.
The illustrated image correction unit 6 is substantially the same as the image correction unit of Fig. 5; the difference is that the parameters output from the operation signal processing unit 31 include a threshold TH3, this threshold TH3 is supplied to the correction processing unit 30, and the correction processing unit 30 performs processing according to TH3. The threshold TH3 serves to suppress the degree of correction so that the correction of pixel values does not become overcorrection.
In the correction processing unit 30, the gain GAIN supplied from the gain calculation unit 39 is used to obtain the motion-blur-corrected image. However, even if overshoot suppression is performed in the filtering unit 34 according to the method of the 1st embodiment, the result of the motion blur correction processing may still produce overshoot. This mostly occurs when the correction strength parameter BST0 is set for a strong correction.
Therefore, in this 2nd embodiment, clipping is applied to the result of the motion blur correction processing to avoid overshoot. Specifically, the threshold TH3 is input from the operation signal processing unit 31 and, similarly to the nonlinear processing of the filtering unit 34, the following processing is performed: when the absolute value of the difference between the pixel value D2(i, j) of the pixel of interest before correction and the value obtained by multiplying this pixel value D2(i, j) by the gain GAIN(i, j) exceeds the threshold TH3, |E(i, j) - D2(i, j)| = TH3 is imposed; if the absolute value of the difference is less than or equal to the threshold TH3, E(i, j) = GAIN(i, j) * D2(i, j).
That is, (A) if GAIN(i, j) * D2(i, j) - D2(i, j) > TH3, then
E(i,j)-D2(i,j)=TH3 (17a),
and E(i, j) is determined by
E(i,j)=D2(i,j)+TH3 (17b).
(B) If GAIN(i, j) * D2(i, j) - D2(i, j) < -TH3, then
E(i,j)-D2(i,j)=-TH3 (17c),
and E(i, j) is determined by
E(i,j)=D2(i,j)-TH3 (17d).
(C) If neither case (A) nor case (B) applies, then
E(i,j)-D2(i,j)=GAIN(i,j)*D2(i,j)-D2(i,j) (17e),
and E(i, j) is determined by
E(i,j)=GAIN(i,j)*D2(i,j) (17f).
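Cases (A)-(C) of formulas (17a)-(17f) amount to clamping the corrected value to within ±TH3 of the uncorrected pixel; a minimal sketch (the function name is an assumption):

```python
def clip_correction(D2, GAIN, TH3):
    """Overshoot clipping of the 2nd embodiment (formulas (17a)-(17f)).

    Limits the corrected value E(i, j) = GAIN(i, j) * D2(i, j) to within
    +/-TH3 of the uncorrected pixel value D2(i, j).
    """
    E = GAIN * D2
    if E - D2 > TH3:        # case (A): E(i,j) = D2(i,j) + TH3
        return D2 + TH3
    if E - D2 < -TH3:       # case (B): E(i,j) = D2(i,j) - TH3
        return D2 - TH3
    return E                # case (C): E(i,j) = GAIN(i,j) * D2(i,j)
```

This is applied per pixel after the gain multiplication of the correction processing unit 30.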
In the 1st and 2nd embodiments described above, the video signal D1 is delayed by one frame period relative to the video signal D2 (one frame period earlier in time); however, the video signal D1 may be delayed by two or more frame periods relative to the video signal D2 (two or more frame periods earlier in time), or may be one frame period, or two or more frame periods, later than the video signal D2 in time.
As described above, in the 1st and 2nd embodiments, the motion vector between frames of the input image signal is detected for each pixel, whereby the region in the video where motion blur occurs is detected, and the gain is determined according to the direction and magnitude of the detected motion blur, so that the video signal degraded by motion blur can be corrected. Furthermore, by using two consecutive corrected video signals to interpolate the video signal considered to exist between them, the image quality when displaying moving images can be improved compared with interpolating from the original video signal or with merely correcting the motion blur in the frames.
To obtain the same effect, one could also consider providing the motion blur correction processing and the interpolation-based frame generation processing separately; compared with that case, the present invention achieves the following effects.
(1) Since the motion vector detection result is used in each process, the motion vector detection can be made a shared circuit (shared processing), so that the circuit scale (the amount of processing, and the amount of frame memory for storing motion vectors for each pixel) can be reduced.
(2) Because of their processing steps, motion vector detection and frame interpolation each require frame memory storing at least two frames of images; by sharing the frame memory, the required memory capacity can be reduced.
The present invention has been described above in terms of an image processing apparatus and an image display apparatus, but the image processing method and image display method executed by these apparatuses also form part of the present invention. The present invention can also be embodied as a program that executes the steps of the above image processing apparatus or image processing method and the processing of each step, or as a computer-readable recording medium on which this program is recorded.

Claims (12)

1. An image processing apparatus, characterized in that the image processing apparatus comprises:
a motion vector detection unit that detects the motion vector of a first video signal input from outside, based on the first video signal and a second video signal input from outside, the second video signal being one or more frames before, or one or more frames after, the first video signal in time; and
an image correction unit that corrects motion blur in the first video signal using the motion vector detected by the motion vector detection unit,
the image correction unit comprising:
a motion blur estimation unit that estimates the direction and magnitude of the motion blur from the motion vector;
a filtering unit that filters the first video signal using filter coefficients predetermined in accordance with the estimated direction and magnitude of the motion blur; and
a correction strength adjustment unit that adjusts the correction strength for a pixel of interest according to the degree of variation of the pixel values near the pixel of interest,
wherein the filtering unit clips each pixel value of the pixels in a neighborhood of the pixel of interest so that the absolute value of the difference between the pixel value of the pixel of interest and each pixel value of the pixels in its neighborhood does not exceed a predetermined threshold, and performs low-pass filtering on the pixels in the neighborhood using the clipped pixel values.
2. The image processing apparatus according to claim 1, characterized in that
the correction strength adjustment unit adjusts the correction strength for the pixel of interest according to the difference between the pixel value of the pixel of interest and the mean value of the pixel values of the pixels in the neighborhood.
3. The image processing apparatus according to claim 1 or 2, characterized in that
the image correction unit comprises:
a gain calculation unit that obtains a gain from the filtering result of the filtering unit; and
a correction processing unit that corrects the first video signal by multiplying the gain calculated by the gain calculation unit by the first video signal.
4. The image processing apparatus according to claim 3, characterized in that
the image correction unit further comprises a filter coefficient storage unit that stores filter coefficients in association with a plurality of combinations of direction and magnitude of motion blur,
the motion blur estimation unit selects, from the filter coefficients stored in the filter coefficient storage unit, the filter coefficients corresponding to the estimated direction and magnitude of the motion blur, and
the filtering unit performs filtering using the selected filter coefficients.
5. The image processing apparatus according to claim 1 or 2, characterized in that
the image processing apparatus further comprises a frame generation unit that generates, by interpolation, a frame between two mutually different corrected images in which the motion blur has been reduced.
6. An image display apparatus, characterized in that the image display apparatus comprises:
the image processing apparatus according to claim 1 or 2; and
an image display unit that displays the images generated by the image processing apparatus.
7. An image processing method, characterized in that the image processing method comprises:
a motion vector detection step of detecting the motion vector of a first video signal input from outside, based on the first video signal and a second video signal input from outside, the second video signal being one or more frames before, or one or more frames after, the first video signal in time; and
an image correction step of correcting motion blur in the first video signal using the motion vector detected in the motion vector detection step,
the image correction step comprising:
a motion blur estimation step of estimating the direction and magnitude of the motion blur from the motion vector;
a filtering step of filtering the first video signal using filter coefficients predetermined in accordance with the estimated direction and magnitude of the motion blur; and
a correction strength adjustment step of adjusting the correction strength for a pixel of interest according to the degree of variation of the pixel values near the pixel of interest,
wherein, in the filtering step, each pixel value of the pixels in a neighborhood of the pixel of interest is clipped so that the absolute value of the difference between the pixel value of the pixel of interest and each pixel value of the pixels in its neighborhood does not exceed a predetermined threshold, and low-pass filtering is performed on the pixels in the neighborhood using the clipped pixel values.
8. The image processing method according to claim 7, characterized in that
in the correction strength adjustment step, the correction strength for the pixel of interest is adjusted according to the difference between the pixel value of the pixel of interest and the mean value of the pixel values of the pixels in the neighborhood.
9. The image processing method according to claim 7 or 8, characterized in that
the image correction step comprises:
a gain calculation step of obtaining a gain from the filtering result of the filtering step; and
a correction processing step of correcting the first video signal by multiplying the gain calculated in the gain calculation step by the first video signal.
10. The image processing method according to claim 9, characterized in that
filter coefficients are stored in advance in a filter coefficient storage unit in association with a plurality of combinations of direction and magnitude of motion blur,
in the motion blur estimation step, the filter coefficients corresponding to the estimated direction and magnitude of the motion blur are selected from the filter coefficients stored in the filter coefficient storage unit, and
in the filtering step, filtering is performed using the selected filter coefficients.
11. The image processing method according to claim 7 or 8, characterized in that
the image processing method further comprises a frame generation step of generating, by interpolation, a frame between two mutually different corrected images in which the motion blur has been reduced.
12. An image display method, characterized in that the image display method comprises:
the image processing method according to claim 7 or 8; and
an image display step of displaying the images generated by the image processing method.
CN201110359416.9A 2010-11-15 2011-11-14 Image processing apparatus and method, and image display apparatus and method Expired - Fee Related CN102572222B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-254753 2010-11-15
JP2010254753A JP2012109656A (en) 2010-11-15 2010-11-15 Image processing apparatus and method, and image display unit and method

Publications (2)

Publication Number Publication Date
CN102572222A CN102572222A (en) 2012-07-11
CN102572222B true CN102572222B (en) 2014-10-15

Family

ID=46416613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110359416.9A Expired - Fee Related CN102572222B (en) 2010-11-15 2011-11-14 Image processing apparatus and method, and image display apparatus and method

Country Status (2)

Country Link
JP (1) JP2012109656A (en)
CN (1) CN102572222B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5775977B2 (en) * 2012-12-11 2015-09-09 富士フイルム株式会社 Image processing apparatus, imaging apparatus, image processing method, and image processing program
CN108476319A (en) * 2016-01-11 2018-08-31 三星电子株式会社 Image encoding method and equipment and picture decoding method and equipment
JP7139858B2 (en) * 2018-10-12 2022-09-21 株式会社Jvcケンウッド Interpolated frame generation device and method
US11874679B2 (en) * 2019-01-09 2024-01-16 Mitsubishi Electric Corporation Using an imaging device to correct positioning errors
CN110084765B (en) * 2019-05-05 2021-08-06 Oppo广东移动通信有限公司 Image processing method, image processing device and terminal equipment
CN111698427B (en) * 2020-06-23 2021-12-24 联想(北京)有限公司 Image processing method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1913585A (en) * 2005-06-13 2007-02-14 精工爱普生株式会社 Method and system for estimating motion and compensating for perceived motion blur in digital video
CN101272488A (en) * 2007-03-23 2008-09-24 展讯通信(上海)有限公司 Video decoding method and device for reducing LCD display movement fuzz
CN101305396A (en) * 2005-07-12 2008-11-12 Nxp股份有限公司 Method and device for removing motion blur effects
CN101365053A (en) * 2007-08-08 2009-02-11 佳能株式会社 Image processing apparatus and method of controlling the same


Also Published As

Publication number Publication date
CN102572222A (en) 2012-07-11
JP2012109656A (en) 2012-06-07

Similar Documents

Publication Publication Date Title
CN102572222B (en) Image processing apparatus and method, and image display apparatus and method
EP2075756B1 (en) Block-based image blending for camera shake compensation
US8155468B2 (en) Image processing method and apparatus
JP4534594B2 (en) Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method
US20080199101A1 (en) Image Processing Apparatus and Image Processing Program
US8781225B2 (en) Automatic tone mapping method and image processing device
US8369644B2 (en) Apparatus and method for reducing motion blur in a video signal
JP2008205737A (en) Imaging system, image processing program, and image processing method
WO2009107487A1 (en) Motion blur detecting device and method, and image processor, and image display
US10091422B2 (en) Image processing device and recording medium
JP6182056B2 (en) Image processing device
US20150187051A1 (en) Method and apparatus for estimating image noise
CN101212563A (en) Noise estimation based partial image filtering method
US7903901B2 (en) Recursive filter system for a video signal
CN104036471A (en) Image noise estimation method and image noise estimation device
CN110197467A (en) A kind of optimization system based on FPGA image Penetrating Fog
US8345163B2 (en) Image processing device and method and image display device
CN109284062B (en) Touch data processing method, device, terminal and medium
CN108632501B (en) Video anti-shake method and device and mobile terminal
JP2013106151A (en) Image processing apparatus and image processing method
CN105304031A (en) Method based on still image scene judgment to avoid noise amplification
US8675963B2 (en) Method and apparatus for automatic brightness adjustment of image signal processor
JP2009194721A (en) Image signal processing device, image signal processing method, and imaging device
US9401010B2 (en) Enhancing perceived sharpness of images
JP5559275B2 (en) Image processing apparatus and control method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141015

Termination date: 20161114